Datasets:
- Modalities: Text
- Formats: parquet
- Languages: English
- Size: 10K - 100K
- Tags: biomedical-information-retrieval, citation-prediction-retrieval, passage-retrieval, news-retrieval, argument-retrieval, zero-shot-information-retrieval
- License:
Refine README metadata and data splits table

Set homepage/repository to beir.ai, remove supported tasks section, add NFCorpus split note, and include direct-download BEIR datasets table.

README.md
CHANGED
@@ -69,8 +69,8 @@ dataset_info:
 
 ## Dataset Description
 
-- **Homepage:** https://
-- **Repository:** https://
+- **Homepage:** https://beir.ai
+- **Repository:** https://beir.ai
 - **Paper:** https://openreview.net/forum?id=wCu6T5xFjeJ
 - **Leaderboard:** https://docs.google.com/spreadsheets/d/1L8aACyPaXrL8iEelJLGqlMqXKPX2oSP_R10pZoy77Ns
 - **Point of Contact:** nandan.thakur@uwaterloo.ca
@@ -81,12 +81,6 @@ BEIR is a heterogeneous benchmark built from 18 diverse datasets representing 9
 
 This `arguana` subset is the Argument Retrieval task within BEIR.
 
-### Supported Tasks and Leaderboards
-
-The dataset supports a leaderboard that evaluates models against task-specific metrics such as F1 or EM, as well as their ability to retrieve supporting information from Wikipedia.
-
-The current best performing models can be found [here](https://eval.ai/web/challenges/challenge-page/689/leaderboard/).
-
 ### Languages
 
 All tasks are in English (`en`).
@@ -145,6 +139,34 @@ qrels = {
 | corpus | corpus | 8,674 |
 | queries | queries | 1,406 |
 
+### NFCorpus Data Splits
+
+- `train`, `dev`, `test`
+
+You can also download BEIR datasets directly (without loading through Hugging Face datasets) using the links below.
+
+| Dataset | Website | BEIR-Name | Type | Queries | Corpus | Rel D/Q | Download | md5 |
+| --- | --- | --- | --- | ---: | ---: | ---: | --- | --- |
+| MSMARCO | [Homepage](https://microsoft.github.io/msmarco/) | `msmarco` | `train` `dev` `test` | 6,980 | 8.84M | 1.1 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/msmarco.zip) | `444067daf65d982533ea17ebd59501e4` |
+| TREC-COVID | [Homepage](https://ir.nist.gov/covidSubmit/index.html) | `trec-covid` | `test` | 50 | 171K | 493.5 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/trec-covid.zip) | `ce62140cb23feb9becf6270d0d1fe6d1` |
+| NFCorpus | [Homepage](https://www.cl.uni-heidelberg.de/statnlpgroup/nfcorpus/) | `nfcorpus` | `train` `dev` `test` | 323 | 3.6K | 38.2 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/nfcorpus.zip) | `a89dba18a62ef92f7d323ec890a0d38d` |
+| BioASQ | [Homepage](http://bioasq.org) | `bioasq` | `train` `test` | 500 | 14.91M | 8.05 | No | [How to Reproduce?](https://github.com/UKPLab/beir/blob/main/examples/dataset#2-bioasq) |
+| NQ | [Homepage](https://ai.google.com/research/NaturalQuestions) | `nq` | `train` `test` | 3,452 | 2.68M | 1.2 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/nq.zip) | `d4d3d2e48787a744b6f6e691ff534307` |
+| HotpotQA | [Homepage](https://hotpotqa.github.io) | `hotpotqa` | `train` `dev` `test` | 7,405 | 5.23M | 2.0 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/hotpotqa.zip) | `f412724f78b0d91183a0e86805e16114` |
+| FiQA-2018 | [Homepage](https://sites.google.com/view/fiqa/) | `fiqa` | `train` `dev` `test` | 648 | 57K | 2.6 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/fiqa.zip) | `17918ed23cd04fb15047f73e6c3bd9d9` |
+| Signal-1M(RT) | [Homepage](https://research.signal-ai.com/datasets/signal1m-tweetir.html) | `signal1m` | `test` | 97 | 2.86M | 19.6 | No | [How to Reproduce?](https://github.com/UKPLab/beir/blob/main/examples/dataset#4-signal-1m) |
+| TREC-NEWS | [Homepage](https://trec.nist.gov/data/news2019.html) | `trec-news` | `test` | 57 | 595K | 19.6 | No | [How to Reproduce?](https://github.com/UKPLab/beir/blob/main/examples/dataset#1-trec-news) |
+| ArguAna | [Homepage](http://argumentation.bplaced.net/arguana/data) | `arguana` | `test` | 1,406 | 8.67K | 1.0 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/arguana.zip) | `8ad3e3c2a5867cdced806d6503f29b99` |
+| Touche-2020 | [Homepage](https://webis.de/events/touche-20/shared-task-1.html) | `webis-touche2020` | `test` | 49 | 382K | 19.0 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/webis-touche2020.zip) | `46f650ba5a527fc69e0a6521c5a23563` |
+| CQADupstack | [Homepage](http://nlp.cis.unimelb.edu.au/resources/cqadupstack/) | `cqadupstack` | `test` | 13,145 | 457K | 1.4 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/cqadupstack.zip) | `4e41456d7df8ee7760a7f866133bda78` |
+| Quora | [Homepage](https://www.quora.com/q/quoradata/First-Quora-Dataset-Release-Question-Pairs) | `quora` | `dev` `test` | 10,000 | 523K | 1.6 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/quora.zip) | `18fb154900ba42a600f84b839c173167` |
+| DBPedia | [Homepage](https://github.com/iai-group/DBpedia-Entity/) | `dbpedia-entity` | `dev` `test` | 400 | 4.63M | 38.2 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/dbpedia-entity.zip) | `c2a39eb420a3164af735795df012ac2c` |
+| SCIDOCS | [Homepage](https://allenai.org/data/scidocs) | `scidocs` | `test` | 1,000 | 25K | 4.9 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/scidocs.zip) | `38121350fc3a4d2f48850f6aff52e4a9` |
+| FEVER | [Homepage](http://fever.ai) | `fever` | `train` `dev` `test` | 6,666 | 5.42M | 1.2 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/fever.zip) | `5a818580227bfb4b35bb6fa46d9b6c03` |
+| Climate-FEVER | [Homepage](http://climatefever.ai) | `climate-fever` | `test` | 1,535 | 5.42M | 3.0 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/climate-fever.zip) | `8b66f0a9126c521bae2bde127b4dc99d` |
+| SciFact | [Homepage](https://github.com/allenai/scifact) | `scifact` | `train` `test` | 300 | 5K | 1.1 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/scifact.zip) | `5f7d1de60b170fc8027bb7898e2efca1` |
+| Robust04 | [Homepage](https://trec.nist.gov/data/robust/04.guidelines.html) | `robust04` | `test` | 249 | 528K | 69.9 | No | [How to Reproduce?](https://github.com/UKPLab/beir/blob/main/examples/dataset#3-robust04) |
+
 ## Citation Information
 
 ```bibtex
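The direct-download table added above pairs each zip archive with an md5 checksum. A minimal sketch of verifying a downloaded archive against that column (the file name and expected checksum below are taken from the ArguAna row; the helper function name is ours, not part of BEIR):

```python
import hashlib


def md5_of_file(path: str, chunk_size: int = 1 << 20) -> str:
    """Compute the md5 checksum of a file, reading in chunks to bound memory."""
    digest = hashlib.md5()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()


# After downloading arguana.zip via the link in the table above:
# assert md5_of_file("arguana.zip") == "8ad3e3c2a5867cdced806d6503f29b99"
```

Chunked reading matters here because some BEIR corpora (e.g. MSMARCO, BioASQ) are multi-gigabyte archives that should not be loaded into memory at once.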