---
license: apache-2.0
task_categories:
- question-answering
- text-generation
language:
- en
- zh
tags:
- economics_and_finance
- healthcare_and_medicine
- industry
- law
- natural_science
pretty_name: $OneMillion-Bench
size_categories:
- n<1K
---
# $OneMillion-Bench
A bilingual (English/Chinese), realistic, expert-level benchmark for evaluating language agents across **5 professional domains**. The benchmark contains **400 entries** with detailed, weighted, rubric-based grading criteria designed for fine-grained evaluation of domain expertise, analytical reasoning, and instruction following.
## Dataset Structure
Each subdirectory is a **Hugging Face subset** (configuration), and all data is in the **`test`** split.
```
$OneMillion-Bench/
├── economics_and_finance/
│ └── test.json # 80 entries (40 EN + 40 CN, distinct questions)
├── healthcare_and_medicine/
│ └── test.json # 80 entries (40 matched EN-CN pairs)
├── industry/
│ └── test.json # 80 entries (40 matched EN-CN pairs)
├── law/
│ └── test.json # 80 entries (40 EN + 40 CN, distinct questions)
├── natural_science/
│ └── test.json # 80 entries (40 matched EN-CN pairs)
└── README.md
```
| Subset | Split | Entries |
|---|---|---|
| `economics_and_finance` | `test` | 80 |
| `healthcare_and_medicine` | `test` | 80 |
| `industry` | `test` | 80 |
| `law` | `test` | 80 |
| `natural_science` | `test` | 80 |
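The subset names double as Hugging Face configuration names, so they can be enumerated programmatically instead of hard-coded. A minimal sketch, assuming the repo id used in Quick Start below:
```python
from datasets import get_dataset_config_names, load_dataset

# Discover the five subsets from the Hub rather than hard-coding them.
configs = get_dataset_config_names("humanlaya-data-lab/OneMillion-Bench")
print(configs)

# Every subset exposes a single `test` split with 80 entries.
for name in configs:
    ds = load_dataset("humanlaya-data-lab/OneMillion-Bench", name, split="test")
    assert len(ds) == 80, f"unexpected size for {name}"
```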
## Domains & Coverage
| Domain | Categories | Example Subcategories | Bilingual Mode |
|---|---|---|---|
| **Economics & Finance** | Investing, FinTech, Banking, Insurance, M&A | Equities, VC/PE, Cryptocurrency, Commodities | Separate questions per language |
| **Healthcare & Medicine** | Clinical Medicine, Basic Medicine, Pharma & Biotech | Hepatobiliary Surgery, Oncology, Nephrology, Dentistry | Matched translation pairs |
| **Industry** | Telecommunications, ML, Architecture, Semiconductors | Backend Dev, Chemical Engineering, Chip Design | Matched translation pairs |
| **Law** | Civil, Criminal, International, Corporate, IP, Labor | Contract Disputes, Criminal Defense, Copyright, M&A | Separate questions per language |
| **Natural Science** | Chemistry, Biology, Physics, Mathematics | Organic Chemistry, Condensed Matter, Molecular Biology | Matched translation pairs |
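In the matched-pair domains, the `case_id` field (see the schema below) links each English entry to its Chinese counterpart. A sketch of reconstructing those translation pairs:
```python
from collections import defaultdict
from datasets import load_dataset

# `industry` is a matched-pair domain; each case_id groups one EN and one CN entry.
ds = load_dataset("humanlaya-data-lab/OneMillion-Bench", "industry", split="test")

pairs = defaultdict(dict)
for entry in ds:
    pairs[entry["case_id"]][entry["language"]] = entry

for case_id, by_lang in sorted(pairs.items())[:3]:
    print(case_id, {lang: e["question"][:40] for lang, e in by_lang.items()})
```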
## Entry Schema
Each entry is a JSON object with 7 fields:
```jsonc
{
"id": "uuid-string", // globally unique identifier
"case_id": 1, // links bilingual pairs (in matched-pair domains)
"language": "en", // "en" or "cn" (50/50 split in every file)
"system_prompt": "", // reserved (empty across all entries)
"question": "...", // expert-level evaluation prompt
"tags": {
"topics": [ // 3-level taxonomy
"Domain", // e.g. "Economics and Finance"
"Category", // e.g. "Investing"
"Subcategory" // e.g. "Equities"
],
"time_sensitivity": {
"time_sensitivity": "Time-agnostic", // or "Weakly/Strongly time-sensitive"
"year_month": "NA", // "YYYY-MM" when time-sensitive
"day": "NA" // "DD" when applicable
}
},
"rubrics": [ // weighted grading criteria (11-37 per entry)
{
"rubric_number": 1,
"rubric_detail": "...", // specific grading criterion
"rubric_weight": 5, // positive = reward, negative = penalty
"rubric_tag": "..." // category (see below)
}
]
}
```
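The `time_sensitivity` metadata makes it easy to exclude questions whose correct answers may have drifted since collection. A sketch filtering a subset down to time-agnostic entries:
```python
from datasets import load_dataset

ds = load_dataset("humanlaya-data-lab/OneMillion-Bench", "law", split="test")

# Keep only entries whose expected answers should not change over time.
stable = ds.filter(
    lambda x: x["tags"]["time_sensitivity"]["time_sensitivity"] == "Time-agnostic"
)
print(f"{len(stable)}/{len(ds)} time-agnostic entries")
```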
### Rubric Labels
| Label | Role | Typical Weight |
|---|---|---|
| Factual Information | Tests factual accuracy | +3 to +5 |
| Analytical Reasoning | Assesses depth of analysis | +3 to +5 |
| Structure and Formatting | Evaluates output organization | -2 to -4 (penalty) |
| Instructions Following | Checks compliance with task constraints | mixed |
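These labels and their typical weight ranges can be checked empirically by tallying rubrics across a subset; a sketch:
```python
from collections import Counter
from datasets import load_dataset

ds = load_dataset("humanlaya-data-lab/OneMillion-Bench", "natural_science", split="test")

# Count rubrics per tag and track the observed weight range for each.
tag_counts, weight_range = Counter(), {}
for entry in ds:
    for r in entry["rubrics"]:
        tag, w = r["rubric_tag"], r["rubric_weight"]
        tag_counts[tag] += 1
        lo, hi = weight_range.get(tag, (w, w))
        weight_range[tag] = (min(lo, w), max(hi, w))

for tag, n in tag_counts.most_common():
    print(f"{tag}: {n} rubrics, weights in {weight_range[tag]}")
```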
## Quick Start
```python
from datasets import load_dataset
# Load a subset from Hugging Face (test split)
ds = load_dataset("humanlaya-data-lab/OneMillion-Bench", "natural_science", split="test")
# Filter English entries
en_entries = ds.filter(lambda x: x["language"] == "en")
# Iterate with rubrics
for entry in en_entries.select(range(1)):
print(f"Topic: {' > '.join(entry['tags']['topics'])}")
print(f"Question: {entry['question'][:200]}...")
print(f"Rubrics ({len(entry['rubrics'])}):")
for r in entry["rubrics"][:3]:
print(f" [{r['rubric_weight']:+d}] {r['rubric_tag']}: {r['rubric_detail'][:80]}...")
```
Example output:
```
Topic: Natural Sciences > Chemistry > Organic Chemistry
Question: You are an expert in organic chemistry. A graduate student is researching ...
Rubrics (18):
[+5] Factual Information: Correctly identifies the primary reaction mechanism ...
[+4] Analytical Reasoning: Provides a coherent comparison of thermodynamic vs ...
[-3] Structure and Formatting: Response lacks clear section headings or logica...
```
## Evaluation
### Scoring a single response
Each rubric carries a signed weight: positive weights are points earned when the criterion is met, while negative weights are penalties. Note that penalty rubrics describe the violation itself (e.g. "Response lacks clear section headings"), so a judge that returns `True` whenever the stated condition holds handles both kinds uniformly: meeting a positive criterion adds its weight, and triggering a penalty condition subtracts. Sum the signed weights of all rubrics whose condition holds and normalize against the maximum possible (positive-only) score:
```python
def score_entry(response: str, rubrics: list, judge_fn) -> dict:
    """
    judge_fn(response, rubric_detail) -> bool
    Returns True if the rubric's condition holds in the response
    (for penalty rubrics, True means the violation is present).
    """
    # Best achievable score: every positive rubric met, no penalties triggered.
    max_possible = sum(r["rubric_weight"] for r in rubrics if r["rubric_weight"] > 0)
    # Signed sum: positive weights add points, negative weights subtract.
    earned = sum(r["rubric_weight"] for r in rubrics if judge_fn(response, r["rubric_detail"]))
    return {"earned": earned, "max_possible": max_possible, "pct": earned / max_possible if max_possible else 0}
```
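A usage sketch with a deliberately naive keyword judge (a real judge should be an LLM, as in the next section). The rubrics here are hypothetical illustrations, not taken from the dataset:
```python
# Naive judge: a rubric "holds" if the response shares any word (>3 chars) with it.
toy_judge = lambda response, detail: any(
    token in response.lower() for token in detail.lower().split() if len(token) > 3
)

rubrics = [  # hypothetical examples for illustration
    {"rubric_number": 1, "rubric_detail": "Mentions the SN2 mechanism", "rubric_weight": 5, "rubric_tag": "Factual Information"},
    {"rubric_number": 2, "rubric_detail": "Response lacks section headings", "rubric_weight": -3, "rubric_tag": "Structure and Formatting"},
]
print(score_entry("The SN2 mechanism dominates here.", rubrics, toy_judge))
# {'earned': 5, 'max_possible': 5, 'pct': 1.0}
```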
### End-to-end rubric-based eval
Below is a complete example that loads the benchmark from Hugging Face, queries a model, and scores every entry with an LLM-as-judge:
```python
from datasets import load_dataset
from openai import OpenAI
SUBSETS = [
"economics_and_finance",
"healthcare_and_medicine",
"industry",
"law",
"natural_science",
]
client = OpenAI() # or any OpenAI-compatible client
def get_model_response(question: str, system_prompt: str = "") -> str:
messages = []
if system_prompt:
messages.append({"role": "system", "content": system_prompt})
messages.append({"role": "user", "content": question})
return client.chat.completions.create(
model="gpt-4o", messages=messages
).choices[0].message.content
# LLM-as-judge: binary verdict on whether one rubric's condition holds.
def judge_rubric(response: str, rubric_detail: str) -> bool:
verdict = client.chat.completions.create(
model="gpt-4o",
messages=[
{"role": "system", "content": (
"You are a strict rubric grader. Determine whether the response "
"satisfies the following criterion. Reply ONLY 'YES' or 'NO'."
)},
{"role": "user", "content": (
f"### Criterion\n{rubric_detail}\n\n### Response\n{response}"
)},
],
temperature=0,
).choices[0].message.content
return verdict.strip().upper().startswith("YES")
# Same scoring as `score_entry` above, with judge_rubric baked in and per-rubric details kept.
def score_entry(response: str, rubrics: list) -> dict:
max_possible = sum(r["rubric_weight"] for r in rubrics if r["rubric_weight"] > 0)
results = []
for r in rubrics:
met = judge_rubric(response, r["rubric_detail"])
results.append({**r, "met": met})
earned = sum(r["rubric_weight"] for r in results if r["met"])
return {"earned": earned, "max_possible": max_possible, "pct": earned / max_possible if max_possible else 0, "details": results}
# --- Run ---
for subset in SUBSETS:
ds = load_dataset("humanlaya-data-lab/OneMillion-Bench", subset, split="test")
for entry in ds:
response = get_model_response(entry["question"], entry["system_prompt"])
result = score_entry(response, entry["rubrics"])
print(f"[{subset}] {entry['id']} score={result['earned']}/{result['max_possible']} ({result['pct']:.1%})")
```
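For reporting, per-entry scores are usually aggregated into a per-subset mean. A sketch that replaces the `# --- Run ---` loop above, reusing the same functions:
```python
from statistics import mean

subset_means = {}
for subset in SUBSETS:
    ds = load_dataset("humanlaya-data-lab/OneMillion-Bench", subset, split="test")
    pcts = [
        score_entry(get_model_response(e["question"], e["system_prompt"]), e["rubrics"])["pct"]
        for e in ds
    ]
    subset_means[subset] = mean(pcts)

for subset, avg in sorted(subset_means.items()):
    print(f"{subset}: {avg:.1%}")
```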
## License
Apache 2.0