BEIR: A Heterogeneous Benchmark for Zero-shot Evaluation of Information Retrieval Models

Abstract

Neural IR models have often been studied in homogeneous and narrow settings, which has considerably limited insights into their generalization capabilities. To address this, and to allow researchers to more broadly establish the effectiveness of their models, we introduce BEIR (Benchmarking IR), a heterogeneous benchmark for information retrieval. We leverage a careful selection of 17 datasets for evaluation, spanning diverse retrieval tasks from open-domain datasets to narrow expert domains. We study the effectiveness of nine state-of-the-art retrieval models in a zero-shot evaluation setup on BEIR and find that performing well consistently across all datasets is challenging. Our results show that BM25 is a robust baseline and that reranking-based models achieve the best overall zero-shot performance, albeit at high computational cost. In contrast, dense-retrieval models are computationally more efficient but often underperform the other approaches, highlighting considerable room for improvement in their generalization capabilities. In this work, we extensively analyze the different retrieval models and provide several suggestions that we believe may be useful for future work. BEIR datasets and code are available at https://github.com/UKPLab/beir
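
Example Usage

The snippet below is a minimal sketch of the zero-shot evaluation setup on a single BEIR dataset, assuming the Python interfaces documented in the repository's README (GenericDataLoader, EvaluateRetrieval, DenseRetrievalExactSearch); the dataset download URL pattern and the SBERT checkpoint name are illustrative and may differ across library versions.

from beir import util
from beir.retrieval import models
from beir.datasets.data_loader import GenericDataLoader
from beir.retrieval.evaluation import EvaluateRetrieval
from beir.retrieval.search.dense import DenseRetrievalExactSearch as DRES

# Download and unzip one of the BEIR datasets (here: SciFact).
dataset = "scifact"
url = f"https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/{dataset}.zip"
data_path = util.download_and_unzip(url, "datasets")

# Load the corpus, queries, and relevance judgements for the test split.
corpus, queries, qrels = GenericDataLoader(data_folder=data_path).load(split="test")

# Zero-shot dense retrieval with an off-the-shelf SBERT model (no in-domain training).
model = DRES(models.SentenceBERT("msmarco-distilbert-base-tas-b"), batch_size=16)
retriever = EvaluateRetrieval(model, score_function="dot")

# Retrieve top documents and compute nDCG@k, MAP@k, Recall@k, and Precision@k.
results = retriever.retrieve(corpus, queries)
ndcg, _map, recall, precision = retriever.evaluate(qrels, results, retriever.k_values)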

Bibtex

@inproceedings{thakur-etal-2021-beir,
  title     = {{BEIR}: A Heterogeneous Benchmark for Zero-shot Evaluation of Information Retrieval Models},
  author    = {Thakur, Nandan and
               Reimers, Nils and
               R{\"u}ckl{\'e}, Andreas and
               Srivastava, Abhishek and
               Gurevych, Iryna},
  booktitle = {Proceedings of the 2021 Neural Information Processing Systems (NeurIPS 2021): Track on Datasets and Benchmarks},
  year      = {2021},
  url       = {https://arxiv.org/abs/2104.08663}
}