matthewkenney committed on
Commit 6d48e57 · verified · 1 Parent(s): 0aa97eb

Update dataset card with improved documentation

Files changed (1): README.md +71 -0
README.md CHANGED
---
pretty_name: ARIA Search Benchmark v2
dataset_info:
  features:
  - name: dataset
…
  data_files:
  - split: benchmark
    path: data/benchmark-*
language:
- en
size_categories:
- 1K<n<10K
tags:
- aria
- benchmark
- ml-research
- search
- retrieval
task_categories:
- question-answering
---

# ARIA Search Benchmark v2

The ARIA Search Benchmark is part of the [ARIA benchmark suite](https://github.com/AlgorithmicResearchGroup/ARIA), a collection of closed-book benchmarks probing the ML knowledge that frontier models have internalized during training. This dataset tests whether models can answer factual questions about ML research papers, models, datasets, and benchmark results without access to external retrieval.

## Dataset Summary

- **Size**: 3,517 question-answer pairs
- **Split**: `benchmark`
- **Paper date range**: May 2023 to December 2024
- **Coverage**: spans models, datasets, and metrics across CV, NLP, audio, video, and multimodal domains

## Dataset Structure

| Field | Type | Description |
|-------|------|-------------|
| `dataset` | string | Benchmark dataset referenced |
| `model_name` | string | Model being evaluated |
| `paper_title` | string | Source paper title |
| `paper_date` | timestamp | Publication date |
| `paper_url` | string | arXiv paper URL |
| `code_links` | list[string] | GitHub repository links |
| `prompts` | string | Question/prompt text |
| `answer` | string | Ground-truth answer |
| `paper_text` | string | Full paper text |
| `year_bin` | string | Year category for stratified evaluation |
| `benchmark_split` | string | Benchmark split identifier |

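The `year_bin` field supports slicing the benchmark by paper year for the stratified evaluation mentioned above. A minimal sketch of that grouping, using plain dictionaries in place of real rows (the values below are illustrative, not actual dataset entries):

```python
from collections import defaultdict

# Toy rows mimicking the schema above; values are illustrative, not real entries.
rows = [
    {"prompts": "Q1", "answer": "A1", "year_bin": "2023"},
    {"prompts": "Q2", "answer": "A2", "year_bin": "2024"},
    {"prompts": "Q3", "answer": "A3", "year_bin": "2024"},
]

def stratify_by_year(rows):
    """Group rows by `year_bin` so accuracy can be reported per year."""
    buckets = defaultdict(list)
    for row in rows:
        buckets[row["year_bin"]].append(row)
    return dict(buckets)

buckets = stratify_by_year(rows)
```

With the loaded dataset, the same slicing can be done directly, e.g. `ds.filter(lambda r: r["year_bin"] == "2024")`.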
## Usage

```python
from datasets import load_dataset

ds = load_dataset("AlgorithmicResearchGroup/aria-search-benchmark_v2-public", split="benchmark")

for example in ds.select(range(5)):
    print(f"Q: {example['prompts']}")
    print(f"A: {example['answer']}")
    print(f"Paper: {example['paper_title']}")
    print()
```
+
93
+ ## Related Resources
94
+
95
+ - [ARIA Benchmark Suite](https://github.com/AlgorithmicResearchGroup/ARIA)
96
+ - [Algorithmic Research Group - Open Source](https://algorithmicresearchgroup.com/opensource.html)
97
+
98
+ ## Citation
99
+
100
+ ```bibtex
101
+ @misc{aria_search_benchmark_v2,
102
+ title={ARIA Search Benchmark v2},
103
+ author={Algorithmic Research Group},
104
+ year={2024},
105
+ publisher={Hugging Face},
106
+ url={https://huggingface.co/datasets/AlgorithmicResearchGroup/aria-search-benchmark_v2-public}
107
+ }
108
+ ```