Add task categories and links to paper, project page, and code
Hi, I'm Niels from the Hugging Face community science team. This PR improves the dataset card for AutoResearchBench by:
- Adding the `text-retrieval` task category to the metadata.
- Linking the dataset to its [paper page](https://huggingface.co/papers/2604.25256), project page, and GitHub repository.
- Adding a brief summary of the Deep Research and Wide Research tasks.
- Maintaining the existing instructions for benchmark decryption.
README.md (updated):

---
license: apache-2.0
task_categories:
- text-retrieval
tags:
- science
- ai-agents
---

# AutoResearchBench

[**Project Page**](https://cheryou.github.io/autoresearchbench.github.io/) | [**Paper**](https://huggingface.co/papers/2604.25256) | [**GitHub**](https://github.com/CherYou/AutoResearchBench)
This repository hosts the obfuscated benchmark bundle for **AutoResearchBench**, a dedicated benchmark for autonomous scientific literature discovery.

AutoResearchBench consists of two complementary task types:

- **Deep Research**: requires tracking down a specific target paper through a progressive, multi-step probing process.
- **Wide Research**: requires comprehensively collecting a set of papers satisfying given conditions.

The published file uses public, reversible obfuscation for benchmark release. It lowers casual web exposure, but it is not strong access control. Please do not repost decrypted questions or answers online, in plain text or images.
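To make the caveat concrete, the sketch below uses base64 as a stand-in obfuscation. This is purely illustrative and is **not** the scheme `decrypt_benchmark.py` implements; it only shows why any public, reversible transform keeps text out of casual web crawls without providing real access control:

```python
import base64

# Illustrative only -- NOT the scheme used by decrypt_benchmark.py.
# Any reversible obfuscation can be inverted by anyone holding the public
# script; it hides text from casual crawling, not from determined readers.
def obfuscate(text: str) -> str:
    return base64.b64encode(text.encode("utf-8")).decode("ascii")

def deobfuscate(blob: str) -> str:
    return base64.b64decode(blob.encode("ascii")).decode("utf-8")

sample = "Which paper first proposed the target method?"
assert deobfuscate(obfuscate(sample)) == sample  # round-trips losslessly
```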
## Quick Start

### 1. Download the released bundle

```bash
export HF_TOKEN=your_hf_token  # required if this dataset repo is private
mkdir -p input_data

curl -L \
  -H "Authorization: Bearer ${HF_TOKEN}" \
  -o input_data/AutoResearchBench.jsonl.obf.json \
  https://huggingface.co/datasets/Lk123/AutoResearchBench/resolve/main/AutoResearchBench.jsonl.obf.json
```
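Before moving on to decryption, it can help to confirm the download completed and is parseable JSON. This small check is our own sketch, not part of the official tooling; it assumes only the path used in the `curl` command above and nothing about the bundle's internal layout:

```python
import json
from pathlib import Path

# Our own sanity-check sketch: confirm the downloaded bundle is complete,
# parseable JSON before handing it to the decryption script.
def check_bundle(path) -> int:
    bundle = json.loads(Path(path).read_text(encoding="utf-8"))
    print(f"{path}: OK ({len(bundle)} top-level entries)")
    return len(bundle)

bundle_path = Path("input_data/AutoResearchBench.jsonl.obf.json")
if bundle_path.exists():  # only runs after step 1's download
    check_bundle(bundle_path)
```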
### 2. Decrypt it locally

Use `decrypt_benchmark.py` from the [GitHub repository](https://github.com/CherYou/AutoResearchBench):

```bash
python3 decrypt_benchmark.py \
  --input-file input_data/AutoResearchBench.jsonl.obf.json \
  --output-file input_data/AutoResearchBench.jsonl
```

Run inference and evaluation on the decrypted `AutoResearchBench.jsonl`, not on the `.obf.json` bundle directly.
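The decrypted file is plain JSONL (one JSON object per line), so it can be inspected with a few lines of Python. This loader is a minimal sketch that assumes nothing about the per-record schema; check the decrypted file for the actual field names:

```python
import json

# Minimal JSONL loader sketch: one JSON object per non-blank line.
def load_jsonl(path: str) -> list[dict]:
    records = []
    with open(path, encoding="utf-8") as f:
        for line in f:
            line = line.strip()
            if line:  # skip blank lines
                records.append(json.loads(line))
    return records

# Example (after step 2 has produced the decrypted file):
# records = load_jsonl("input_data/AutoResearchBench.jsonl")
# print(len(records), "benchmark items")
```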
### 3. Usage

After installing dependencies and configuring your `.env` as described in the [code repository](https://github.com/CherYou/AutoResearchBench), you can run inference:

```bash
bash run_inference.sh
```
## ❤️ Citing Us

If you find this repository or our work useful, please consider giving it a star ⭐ and/or citing our work, which would be greatly appreciated: