---
license: apache-2.0
task_categories:
  - question-answering
  - text-generation
language:
  - en
  - zh
tags:
  - economics_and_finance
  - healthcare_and_medicine
  - industry
  - law
  - natural_science
pretty_name: $OneMillion-Bench
size_categories:
  - n<1K
---

# $OneMillion-Bench

A bilingual (English/Chinese), realistic, expert-level benchmark for evaluating language agents across five professional domains. The benchmark contains 400 entries with detailed, weighted, rubric-based grading criteria designed for fine-grained evaluation of domain expertise, analytical reasoning, and instruction following.

## Dataset Structure

Each subdirectory is a Hugging Face subset (configuration), and all data is in the test split.

```
$OneMillion-Bench/
├── economics_and_finance/
│   └── test.json      # 80 entries (40 EN + 40 CN, distinct questions)
├── healthcare_and_medicine/
│   └── test.json      # 80 entries (40 matched EN-CN pairs)
├── industry/
│   └── test.json      # 80 entries (40 matched EN-CN pairs)
├── law/
│   └── test.json      # 80 entries (40 EN + 40 CN, distinct questions)
├── natural_science/
│   └── test.json      # 80 entries (40 matched EN-CN pairs)
└── README.md
```
| Subset | Split | Entries |
|---|---|---|
| economics_and_finance | test | 80 |
| healthcare_and_medicine | test | 80 |
| industry | test | 80 |
| law | test | 80 |
| natural_science | test | 80 |
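
As a quick sanity check, the sketch below (a minimal example using the `datasets` library and the Hub ID from the Quick Start section) confirms each subset's size and its 50/50 language split:

```python
from datasets import get_dataset_config_names, load_dataset

# Each subset should expose exactly 80 test entries, split 50/50 by language.
for subset in get_dataset_config_names("humanlaya-data-lab/OneMillion-Bench"):
    ds = load_dataset("humanlaya-data-lab/OneMillion-Bench", subset, split="test")
    n_en = sum(1 for e in ds if e["language"] == "en")
    print(f"{subset}: {len(ds)} entries ({n_en} EN / {len(ds) - n_en} CN)")
```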

## Domains & Coverage

| Domain | Categories | Example Subcategories | Bilingual Mode |
|---|---|---|---|
| Economics & Finance | Investing, FinTech, Banking, Insurance, M&A | Equities, VC/PE, Cryptocurrency, Commodities | Separate questions per language |
| Healthcare & Medicine | Clinical Medicine, Basic Medicine, Pharma & Biotech | Hepatobiliary Surgery, Oncology, Nephrology, Dentistry | Matched translation pairs |
| Industry | Telecommunications, ML, Architecture, Semiconductors | Backend Dev, Chemical Engineering, Chip Design | Matched translation pairs |
| Law | Civil, Criminal, International, Corporate, IP, Labor | Contract Disputes, Criminal Defense, Copyright, M&A | Separate questions per language |
| Natural Science | Chemistry, Biology, Physics, Mathematics | Organic Chemistry, Condensed Matter, Molecular Biology | Matched translation pairs |

## Entry Schema

Each entry is a JSON object with 7 fields:

```jsonc
{
  "id": "uuid-string",            // globally unique identifier
  "case_id": 1,                   // links bilingual pairs (in matched-pair domains)
  "language": "en",               // "en" or "cn" (50/50 split in every file)
  "system_prompt": "",            // reserved (empty across all entries)
  "question": "...",              // expert-level evaluation prompt
  "tags": {
    "topics": [                   // 3-level taxonomy
      "Domain",                   //   e.g. "Economics and Finance"
      "Category",                 //   e.g. "Investing"
      "Subcategory"               //   e.g. "Equities"
    ],
    "time_sensitivity": {
      "time_sensitivity": "Time-agnostic",   // or "Weakly/Strongly time-sensitive"
      "year_month": "NA",                    // "YYYY-MM" when time-sensitive
      "day": "NA"                            // "DD" when applicable
    }
  },
  "rubrics": [                    // weighted grading criteria (11-37 per entry)
    {
      "rubric_number": 1,
      "rubric_detail": "...",     // specific grading criterion
      "rubric_weight": 5,         // positive = reward, negative = penalty
      "rubric_tag": "..."         // category (see below)
    }
  ]
}
```
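
In the matched-pair domains (`healthcare_and_medicine`, `industry`, `natural_science`), `case_id` links each English entry to its Chinese counterpart. A minimal sketch for regrouping those pairs:

```python
from collections import defaultdict
from datasets import load_dataset

# Sketch: regroup matched EN-CN translation pairs by case_id.
ds = load_dataset("humanlaya-data-lab/OneMillion-Bench", "industry", split="test")
pairs = defaultdict(dict)
for entry in ds:
    pairs[entry["case_id"]][entry["language"]] = entry

for case_id, by_lang in sorted(pairs.items()):
    if "en" in by_lang and "cn" in by_lang:  # guard, in case a pair is incomplete
        print(f"case {case_id}: EN={by_lang['en']['id']}  CN={by_lang['cn']['id']}")
```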

## Rubric Labels

| Label | Role | Typical Weight |
|---|---|---|
| Factual Information | Tests factual accuracy | +3 to +5 |
| Analytical Reasoning | Assesses depth of analysis | +3 to +5 |
| Structure and Formatting | Evaluates output organization | -2 to -4 (penalty) |
| Instructions Following | Checks compliance with task constraints | mixed |
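
To see how these labels are distributed in practice, here is a small sketch (tag names taken from the table above) that tallies rubric tags and penalty counts for one subset:

```python
from collections import Counter
from datasets import load_dataset

# Sketch: count rubrics per tag and how many carry negative (penalty) weights.
ds = load_dataset("humanlaya-data-lab/OneMillion-Bench", "law", split="test")
tag_counts, penalty_counts = Counter(), Counter()
for entry in ds:
    for r in entry["rubrics"]:
        tag_counts[r["rubric_tag"]] += 1
        if r["rubric_weight"] < 0:
            penalty_counts[r["rubric_tag"]] += 1

for tag, n in tag_counts.most_common():
    print(f"{tag}: {n} rubrics ({penalty_counts[tag]} penalties)")
```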

## Quick Start

```python
from datasets import load_dataset

# Load a subset from Hugging Face (test split)
ds = load_dataset("humanlaya-data-lab/OneMillion-Bench", "natural_science", split="test")

# Filter English entries
en_entries = ds.filter(lambda x: x["language"] == "en")

# Iterate with rubrics
for entry in en_entries.select(range(1)):
    print(f"Topic: {' > '.join(entry['tags']['topics'])}")
    print(f"Question: {entry['question'][:200]}...")
    print(f"Rubrics ({len(entry['rubrics'])}):")
    for r in entry["rubrics"][:3]:
        print(f"  [{r['rubric_weight']:+d}] {r['rubric_tag']}: {r['rubric_detail'][:80]}...")

Example output:

```
Topic: Natural Sciences > Chemistry > Organic Chemistry
Question: You are an expert in organic chemistry. A graduate student is researching ...
Rubrics (18):
  [+5] Factual Information: Correctly identifies the primary reaction mechanism ...
  [+4] Analytical Reasoning: Provides a coherent comparison of thermodynamic vs ...
  [-3] Structure and Formatting: Response lacks clear section headings or logica...
```

## Evaluation

### Scoring a single response

Each rubric carries a signed weight: positive weights are points earned when the criterion is met; negative weights are penalties applied when the flaw the rubric describes is present. Sum the weights of all triggered rubrics and normalize against the maximum possible (positive-only) score:

```python
def score_entry(response: str, rubrics: list, judge_fn) -> dict:
    """
    judge_fn(response, rubric_detail) -> bool
        Returns True if the response satisfies the rubric criterion.
    """
    max_possible = sum(r["rubric_weight"] for r in rubrics if r["rubric_weight"] > 0)
    earned = sum(r["rubric_weight"] for r in rubrics if judge_fn(response, r["rubric_detail"]))
    return {"earned": earned, "max_possible": max_possible, "pct": earned / max_possible if max_possible else 0}

### End-to-end rubric-based eval

Below is a complete example that loads the benchmark from Hugging Face, queries a model, and scores every entry with an LLM-as-judge:

```python
from datasets import load_dataset
from openai import OpenAI

SUBSETS = [
    "economics_and_finance",
    "healthcare_and_medicine",
    "industry",
    "law",
    "natural_science",
]

client = OpenAI()  # or any OpenAI-compatible client

def get_model_response(question: str, system_prompt: str = "") -> str:
    messages = []
    if system_prompt:
        messages.append({"role": "system", "content": system_prompt})
    messages.append({"role": "user", "content": question})
    return client.chat.completions.create(
        model="gpt-4o", messages=messages
    ).choices[0].message.content

def judge_rubric(response: str, rubric_detail: str) -> bool:
    verdict = client.chat.completions.create(
        model="gpt-4o",
        messages=[
            {"role": "system", "content": (
                "You are a strict rubric grader. Determine whether the response "
                "satisfies the following criterion. Reply ONLY 'YES' or 'NO'."
            )},
            {"role": "user", "content": (
                f"### Criterion\n{rubric_detail}\n\n### Response\n{response}"
            )},
        ],
        temperature=0,
    ).choices[0].message.content
    return verdict.strip().upper().startswith("YES")

def score_entry(response: str, rubrics: list) -> dict:
    max_possible = sum(r["rubric_weight"] for r in rubrics if r["rubric_weight"] > 0)
    results = []
    for r in rubrics:
        met = judge_rubric(response, r["rubric_detail"])
        results.append({**r, "met": met})
    earned = sum(r["rubric_weight"] for r in results if r["met"])
    return {"earned": earned, "max_possible": max_possible, "pct": earned / max_possible if max_possible else 0, "details": results}

# --- Run ---
for subset in SUBSETS:
    ds = load_dataset("humanlaya-data-lab/OneMillion-Bench", subset, split="test")
    for entry in ds:
        response = get_model_response(entry["question"], entry["system_prompt"])
        result = score_entry(response, entry["rubrics"])
        print(f"[{subset}] {entry['id']}  score={result['earned']}/{result['max_possible']} ({result['pct']:.1%})")

## License

Apache 2.0