Update dataset card with improved documentation

README.md (CHANGED)

---
pretty_name: ARIA Repo Benchmark
dataset_info:
  features:
  - name: dataset
# …
configs:
# …
  data_files:
  - split: train
    path: data/train-*
language:
- en
size_categories:
- n<1K
tags:
- aria
- benchmark
- ml-research
- reproducibility
task_categories:
- text-generation
---

# ARIA Repo Benchmark

The ARIA Repo Benchmark is part of the [ARIA benchmark suite](https://github.com/AlgorithmicResearchGroup/ARIA), a collection of closed-book benchmarks probing the ML knowledge that frontier models have internalized during training. This dataset contains 58 curated research paper implementations with metadata for evaluating whether the ML experiments described in the papers can be reproduced.

## Dataset Summary

- **Size**: 58 entries
- **Coverage**: Computer Vision, NLP, Time Series, Graph, Bioinformatics
- **Purpose**: Evaluate AI agents on their ability to locate, understand, and reproduce ML research experiments

Each entry links a research paper to its code repository, dataset, metrics, and compute requirements, along with verification of whether the experiment is reproducible on constrained hardware.

## Dataset Structure

### Key Fields

| Field | Type | Description |
|-------|------|-------------|
| `paper_title` | string | Title of the research paper |
| `paper_url` | string | arXiv URL |
| `paper_date` | timestamp | Publication date |
| `paper_text` | string | Full paper text |
| `dataset` | string | Dataset used in the paper |
| `dataset_link` | string | Link to the dataset |
| `model_name` | string | Model name |
| `code_links` | list[string] | GitHub repository links |
| `metrics` | string | Performance metrics reported |
| `table_metrics` | list[string] | Detailed metrics from tables |
| `prompts` | string | Evaluation prompts |
| `modality` | string | Data modality (CV, NLP, Time Series, Graph, etc.) |
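
For a quick look at how these fields come back from `datasets`, here is a minimal sketch (it uses the same repo ID shown in the Usage section below and only reads fields listed in the table above):

```python
from datasets import load_dataset

# Load the single train split of the benchmark.
ds = load_dataset("AlgorithmicResearchGroup/aria-repo-benchmark", split="train")

# The full schema, as declared in the card's dataset_info block.
print(ds.features)

# Key fields of one entry.
entry = ds[0]
print(entry["paper_title"], "-", entry["modality"])
print(entry["code_links"])   # list of GitHub repository links
print(entry["metrics"])      # reported performance metrics
```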

### Compute & Reproducibility Fields

| Field | Type | Description |
|-------|------|-------------|
| `compute_hours` | float64 | Estimated training compute hours |
| `num_gpus` | int64 | Number of GPUs required |
| `reasoning` | string | Reasoning about compute estimates |
| `trainable_single_gpu_8h` | string | Trainable on a single GPU in 8 hours |
| `verified` | string | Verification status |
| `time_and_compute_verification` | string | Compute verification notes |
| `link_to_colab_notebook` | string | Google Colab notebook link |
| `run_possible` | string | Whether the code runs successfully |
| `notes` | string | Additional notes |
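
The reproducibility flags (`trainable_single_gpu_8h`, `run_possible`, `verified`) are stored as free-form strings rather than booleans, so their exact values may vary between entries. As a rough sketch, assuming affirmative entries look like "yes" or "true", the benchmark can be narrowed to experiments expected to fit on a single GPU within 8 hours:

```python
from datasets import load_dataset

ds = load_dataset("AlgorithmicResearchGroup/aria-repo-benchmark", split="train")

def is_affirmative(value):
    # Hypothetical normalization: the flag fields are free-form strings,
    # so treat "yes"/"true" (any casing, surrounding whitespace) as affirmative.
    return str(value).strip().lower() in {"yes", "true"}

# Entries flagged as runnable and trainable on a single GPU within 8 hours.
small_scale = ds.filter(
    lambda entry: is_affirmative(entry["trainable_single_gpu_8h"])
    and is_affirmative(entry["run_possible"])
)
print(f"{len(small_scale)} of {len(ds)} entries flagged as single-GPU, 8-hour reproducible")
```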

## Usage

```python
from datasets import load_dataset

ds = load_dataset("AlgorithmicResearchGroup/aria-repo-benchmark", split="train")

for entry in ds:
    print(f"{entry['paper_title']} - {entry['modality']} - Reproducible: {entry['run_possible']}")
```
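
Building on the loop above, a small sketch for summarizing coverage by modality (using only the `modality` field from the schema):

```python
from collections import Counter
from datasets import load_dataset

ds = load_dataset("AlgorithmicResearchGroup/aria-repo-benchmark", split="train")

# Count entries per modality (Computer Vision, NLP, Time Series, Graph, ...).
by_modality = Counter(entry["modality"] for entry in ds)
for modality, count in by_modality.most_common():
    print(f"{modality}: {count}")
```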

## Related Resources

- [ARIA Benchmark Suite](https://github.com/AlgorithmicResearchGroup/ARIA)
- [Algorithmic Research Group - Open Source](https://algorithmicresearchgroup.com/opensource.html)

## Citation

```bibtex
@misc{aria_repo_benchmark,
  title={ARIA Repo Benchmark},
  author={Algorithmic Research Group},
  year={2024},
  publisher={Hugging Face},
  url={https://huggingface.co/datasets/AlgorithmicResearchGroup/aria-repo-benchmark}
}
```