---
license: mit
task_categories:
  - question-answering
  - text-generation
language:
  - en
tags:
  - forecasting
  - benchmark
  - llm-evaluation
  - reasoning
  - temporal-reasoning
  - contamination-control
  - leakage-control
  - prediction
  - agent
size_categories:
  - n<1K
pretty_name: OracleProto Forecasting Eval Set
---

# OracleProto: Forecasting Evaluation Set

Chinese Doc: [中文文档]

GitHub Repo: GitHub

A SQLite-packaged evaluation set of 80 hand-curated forecasting questions on real-world events, with resolution dates between 2026-03-12 and 2026-04-14, released alongside the GitHub repo. Both the rows and the byte-stable prompt-reconstruction recipe ship inside a single file, `forecast_eval_set_example.db`, which holds two tables: `forecast_eval_set_example` (the 80 rows) and `dataset_metadata` (the recipe).


## 1. Dataset at a glance

| Field | Value |
|---|---|
| Release date | 2026-04-29 |
| Rows | 80 |
| Splits | `train` (80); single split, intended as a held-out evaluation set |
| Resolution-date range | 2026-03-12 → 2026-04-14 |
| Question types | `yes_no`, `binary_named`, `multiple_choice` |
| Choice types | `single` (one correct letter), `multi` (one-or-more correct letters) |
| Database file | `forecast_eval_set_example.db` (SQLite 3, ~52 KB) |
| Tables in the file | `forecast_eval_set_example` (80 rows), `dataset_metadata` (1 row) |
| License | MIT |
| Source | upstream HuggingFace forecasting questions (levels 1+2), 322 raw → 80 curated |

**Type distribution**

| question_type | choice_type | Rows |
|---|---|---|
| yes_no | single | 37 |
| binary_named | single | 3 |
| multiple_choice | single | 32 |
| multiple_choice | multi | 8 |
| **Total** | | **80** |

`yes_no` is binary Yes/No; `binary_named` is binary between two named entities such as sports teams, fighters, or sides; `multiple_choice` carries at least three labelled options, with one or more correct letters allowed, and "None of the above" is a valid answer when listed. Each row stores the exact option labels; letter A maps to `options[0]`, B to `options[1]`, and so on (§3.4 covers labels beyond Z).
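
The distribution above can be reproduced directly from the file with nothing beyond the standard library; a minimal check:

```python
import sqlite3

conn = sqlite3.connect("forecast_eval_set_example.db")
query = (
    "SELECT question_type, choice_type, COUNT(*) AS n "
    "FROM forecast_eval_set_example "
    "GROUP BY question_type, choice_type ORDER BY n DESC"
)
for qt, ct, n in conn.execute(query):
    print(f"{qt:16s} {ct:8s} {n:3d}")   # expected counts per the table above
```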


## 2. Files

```
OracleProto/
├── forecast_eval_set_example.db   # SQLite database file (the dataset; ~52 KB)
├── README.md                      # this file
├── LICENSE                        # MIT
└── .gitattributes                 # standard HF binary attributes
```

The dataset ships as one SQLite file, not Parquet or JSONL, because the prompt-reconstruction recipe and per-row provenance live in the same file as the rows (in `dataset_metadata.features_json`). Loaders, including a `datasets.Dataset` conversion, appear in §6.


## 3. Database schema

Two tables: `forecast_eval_set_example` holds the 80 rows; `dataset_metadata` holds the canonical recipe. The file takes its name from the primary table.

### 3.1 Table `forecast_eval_set_example` (the rows)

```sql
CREATE TABLE forecast_eval_set_example (
    id            TEXT PRIMARY KEY,
    choice_type   TEXT NOT NULL CHECK (choice_type IN ('single','multi')),
    question_type TEXT NOT NULL,    -- yes_no | binary_named | multiple_choice
    event         TEXT NOT NULL,    -- the event being predicted
    options       TEXT NOT NULL,    -- JSON array of option labels
    answer        TEXT NOT NULL,    -- canonical correct answer as letter(s)
    end_time      TEXT NOT NULL     -- 'YYYY-MM-DD'
);

CREATE INDEX idx_forecast_eval_set_example_choice_type   ON forecast_eval_set_example(choice_type);
CREATE INDEX idx_forecast_eval_set_example_question_type ON forecast_eval_set_example(question_type);
CREATE INDEX idx_forecast_eval_set_example_end_time      ON forecast_eval_set_example(end_time);
```

### 3.2 Table `dataset_metadata` (the recipe)

A one-row table whose features_json blob carries the prompt template, the four output formats, the outcomes-block rule, the agent-role string, and curation provenance. The full recipe is rendered in §5.

```sql
CREATE TABLE dataset_metadata (
    dataset_name      TEXT NOT NULL,
    split_name        TEXT NOT NULL,
    table_name        TEXT NOT NULL,
    row_count         INTEGER NOT NULL,
    imported_at_utc   TEXT NOT NULL,
    features_json     TEXT NOT NULL
);
```

### 3.3 Column semantics

| Column | Type | Description |
|---|---|---|
| `id` | TEXT | Stable source-side question ID inherited from the upstream HuggingFace forecasting set; primary join key. |
| `choice_type` | TEXT | `'single'` if exactly one letter is correct, `'multi'` if one or more letters can be correct. Derived from the number of letters in `answer`. Drives the single-answer vs multi-select branch in §5.4. |
| `question_type` | TEXT | One of `yes_no`, `binary_named`, `multiple_choice`. Selects which prompt template is rendered (§5). |
| `event` | TEXT | Natural-language description of the event being predicted, author-edited for explicit time anchoring, unit explicitness, and unambiguous binary framing. |
| `options` | TEXT | JSON array of option labels. For `yes_no` it is fixed to `["Yes","No"]`. For `binary_named` it is two named entities. For `multiple_choice` it is a list of choice labels whose letter is implied by index (A=`options[0]`, B=`options[1]`, …). |
| `answer` | TEXT | Canonical correct answer encoded as letters. For `yes_no` and `binary_named` it is `'A'` or `'B'`. For `multiple_choice` it is a comma-separated letter list in option order, e.g. `'A'` or `'A, B'`. |
| `end_time` | TEXT | Resolution date in `YYYY-MM-DD`. The column stores a calendar date only; the prompt template (§5.2) supplies the GMT+8 reading. If finer-grained admissibility is needed, treat each resolution as covering the whole calendar day. |
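
Since `choice_type` is derived from `answer`, and every answer letter must index into `options`, the invariants in this table can be checked in a few lines; a sketch against the documented schema:

```python
import sqlite3, json

conn = sqlite3.connect("forecast_eval_set_example.db")
rows = conn.execute(
    "SELECT id, choice_type, options, answer FROM forecast_eval_set_example"
)
for rid, choice_type, options_json, answer in rows:
    options = json.loads(options_json)
    letters = [s.strip() for s in answer.split(",") if s.strip()]
    # choice_type is derived from the number of answer letters (§3.3)
    assert choice_type == ("single" if len(letters) == 1 else "multi"), rid
    # every letter must resolve to a valid option index (§3.4)
    assert all(0 <= ord(c) - ord("A") < len(options) for c in letters), rid
print("all rows satisfy the documented invariants")
```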

### 3.4 Letter-to-index encoding

Letters map to option indices via `index = ord(letter) - ord('A')`. Beyond Z (≥27 options) the labels run on as `[`, `\`, `]`, `^`, `_`, `` ` ``, `a`, `b`, …, the contiguous ASCII range starting at `A`. The reference renderer wraps any label outside A–Z in backticks so it survives Markdown rendering. None of the 80 rows exceeds 26 options, but the encoding is documented because the framework's parser supports it.
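
A minimal sketch of this encoding (helper names here are illustrative, not the framework's API):

```python
def letter_for_index(i: int) -> str:
    """Option index -> label: 0 -> 'A', 25 -> 'Z', 26 -> '[', 27 -> '\\', ..."""
    return chr(ord("A") + i)

def index_for_letter(letter: str) -> int:
    """Label -> option index; inverse of letter_for_index."""
    return ord(letter) - ord("A")

def display_label(i: int) -> str:
    """Wrap any label outside A-Z in backticks, per the reference renderer."""
    ch = letter_for_index(i)
    return ch if "A" <= ch <= "Z" else f"`{ch}`"

assert index_for_letter("C") == 2
assert letter_for_index(26) == "["       # first label beyond Z
assert display_label(26) == "`[`"
```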


## 4. Sample rows

```json
{
  "id": "699d9ffc098cca008728b6f0",
  "choice_type": "single",
  "question_type": "yes_no",
  "event": "Will the US PCE annual inflation be greater than 2.9% in January 2026?",
  "options": ["Yes", "No"],
  "answer": "B",
  "end_time": "2026-03-13"
}
{
  "id": "69a2e39e5692ef005cdbf2d3",
  "choice_type": "single",
  "question_type": "binary_named",
  "event": "Will US or Israel strike Iran first?",
  "options": ["US", "Israel"],
  "answer": "B",
  "end_time": "2026-03-31"
}
{
  "id": "6995b1073ea64b005b11f285",
  "choice_type": "single",
  "question_type": "multiple_choice",
  "event": "Which men's basketball team will win the Big 12 Conference Championship tournament in the 2025-26 season?",
  "options": ["Arizona", "Baylor", "Brigham Young University (BYU)",
              "Houston", "Iowa State", "Kansas", "Kansas State"],
  "answer": "A",
  "end_time": "2026-03-14"
}
{
  "id": "698f198bda7a8b006575444c",
  "choice_type": "multi",
  "question_type": "multiple_choice",
  "event": "Which movies will win multiple Oscars? (2026)",
  "options": ["One Battle After Another", "Sinners", "Frankenstein",
              "KPop Demon Hunters", "F1", "Sentimental Value", "Hamnet",
              "Marty Supreme", "The Secret Agent", "Avatar: Fire and Ash",
              "Train Dreams", "Bugonia", "Blue Moon", "It Was Just An Accident"],
  "answer": "A, B, C, D",
  "end_time": "2026-03-15"
}
```

## 5. Prompt reconstruction (canonical recipe)

Every row is rendered into a single user message via the recipe stored in dataset_metadata.features_json.prompt_reconstruction. The recipe is byte-stable and is the source of truth for the OracleProto evaluator; downstream users who reconstruct prompts themselves should follow it exactly so results stay comparable.

### 5.1 Static fragments

```text
agent_role: "You are an agent that can predict future events."

guidance: "Do not use any other format. Do not refuse to make a prediction.
           Do not say \"I cannot predict the future.\" You must make a clear
           prediction based on the best data currently available, using the
           box format specified above."
```

### 5.2 Master template

```text
{agent_role} The event to be predicted: "{event} (resolved around {end_time} (GMT+8)).{outcomes_block}"

IMPORTANT: Your final answer MUST end with this exact format:
{output_format}
{guidance}
```

The literal (GMT+8) inside the user-visible string is what gives end_time its timezone reading; the column itself stores only a date.

### 5.3 `outcomes_block`

For `yes_no` and `binary_named`: empty, since the option labels are baked into `output_format`. For `multiple_choice`: a leading newline followed by one line per option in `A. <label>` form, e.g. `\nA. Arizona\nB. Baylor\nC. Brigham Young University (BYU)\n…`. Labels whose derived letter falls outside A–Z are wrapped in backticks.

### 5.4 `output_format` (one of four, chosen by `question_type` × `choice_type`)

`yes_no`:

```text
Your task is to predict whether the event will occur based on your analysis.
Your prediction will be scored based on its accuracy. You will only receive points if your answer is correct.
Your final answer MUST end with this exact format:
\boxed{Yes} or \boxed{No}
```

`binary_named` (the literals `<options[0]>` and `<options[1]>` are replaced by the two named entities from `options`):

```text
Your task is to predict which of the two outcomes will occur based on your analysis.
Your prediction will be scored based on its accuracy. You will only receive points if your answer is correct.
Your final answer MUST end with this exact format:
\boxed{<options[0]>} or \boxed{<options[1]>}
```

`multiple_choice` with `choice_type='single'`:

```text
This is a SINGLE-ANSWER question: exactly ONE of the listed options is correct.
Your prediction will be scored on strict equality with the unique correct letter; choosing the wrong letter, or selecting more than one letter, scores zero.
Your final answer MUST end with this exact format:
the single correct letter inside the box, e.g. \boxed{A}.
Do NOT list more than one letter, even if you believe two outcomes are tied — pick the one you find most likely.
```

`multiple_choice` with `choice_type='multi'`:

```text
This is a MULTI-SELECT question: ONE OR MORE of the listed options can be correct.
Your prediction will be scored on strict equality with the FULL set of correct letters: any extra letter, any missing letter, or any wrong letter scores zero. You must include ALL correct options and NO incorrect options.
Your final answer MUST end with this exact format:
listing all correct option(s) you have identified, separated by commas, within the box.
For example: \boxed{A} for a single correct option, or \boxed{B, C} for multiple correct options.
```

### 5.5 Answer parsing

The reference parser (`forecast_eval/parser.py::parse_answer`) applies these rules:

  1. Take the last `\boxed{...}` substring in the model's reply; everything else is reasoning or scratchpad and is ignored.
  2. For `yes_no` (case-insensitive): Yes → A, No → B. Anything else is unparsed.
  3. For `binary_named` (case-insensitive): match the boxed payload against `options[0]` or `options[1]`. Anything else is unparsed.
  4. For `multiple_choice`: split the boxed payload on commas and whitespace, validate that each token is a single letter, and check that each letter resolves to a valid option index. Out-of-range letters or multi-character tokens are unparsed.
  5. Score by strict set equality against the canonical letter set parsed from `answer`. A missing or unparsed boxed answer is recorded as `parse_ok = 0` rather than treated as a parser error; the run records it and moves on.

Reusing the framework's parser is the practical way to get bit-identical scores across implementations.
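
For readers who cannot vendor the reference parser, the following is a minimal sketch of rules 1–5 (helper names are illustrative; it ignores nested braces inside `\boxed{...}`, and the reference implementation remains the source of truth):

```python
import re

BOXED_RE = re.compile(r"\\boxed\{([^{}]*)\}")

def parse_letters(reply: str, question_type: str, options: list[str]) -> set[str] | None:
    """Return the predicted letter set, or None if unparsed (rules 1-4)."""
    boxed = BOXED_RE.findall(reply)
    if not boxed:
        return None
    payload = boxed[-1].strip()                      # rule 1: last \boxed{...} wins
    if question_type == "yes_no":                    # rule 2
        return {"yes": {"A"}, "no": {"B"}}.get(payload.lower())
    if question_type == "binary_named":              # rule 3
        lowered = [o.lower() for o in options]
        if payload.lower() in lowered:
            return {chr(ord("A") + lowered.index(payload.lower()))}
        return None
    letters = set()                                  # rule 4: multiple_choice
    for token in re.split(r"[,\s]+", payload):
        if not token:
            continue
        if len(token) != 1 or not 0 <= ord(token) - ord("A") < len(options):
            return None                              # multi-char or out-of-range token
        letters.add(token)
    return letters or None

def score(pred: set[str] | None, answer: str) -> int:
    """Rule 5: strict set equality against the canonical letter set."""
    return int(pred == {s.strip() for s in answer.split(",") if s.strip()})
```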


## 6. Loading the dataset

### 6.1 With raw `sqlite3` (no extra deps)

```python
import sqlite3
import json

conn = sqlite3.connect("forecast_eval_set_example.db")
conn.row_factory = sqlite3.Row

# Read the rows.
rows = conn.execute("SELECT * FROM forecast_eval_set_example").fetchall()
print(f"loaded {len(rows)} rows")
sample = dict(rows[0])
sample["options"] = json.loads(sample["options"])  # JSON-decode option list
print(sample)

# Read the canonical prompt-reconstruction recipe.
meta_row = conn.execute("SELECT features_json FROM dataset_metadata").fetchone()
meta = json.loads(meta_row["features_json"])
prompt_template = meta["prompt_reconstruction"]["prompt_template"]
print(prompt_template)
```

### 6.2 With `huggingface_hub`

```python
from huggingface_hub import hf_hub_download
import sqlite3, json

db_path = hf_hub_download(
    repo_id="MaYiding/OracleProto",
    filename="forecast_eval_set_example.db",
    repo_type="dataset",
)
conn = sqlite3.connect(db_path)
rows = conn.execute("SELECT * FROM forecast_eval_set_example").fetchall()
```

### 6.3 Convert to a `datasets.Dataset`

```python
import sqlite3, json
from datasets import Dataset

conn = sqlite3.connect("forecast_eval_set_example.db")
cur = conn.execute("SELECT * FROM forecast_eval_set_example")
cols = [c[0] for c in cur.description]

def _row(r):
    d = dict(zip(cols, r))
    d["options"] = json.loads(d["options"])         # list[str]
    d["answer_letters"] = [
        s.strip() for s in d["answer"].split(",") if s.strip()
    ]                                               # list[str]
    return d

ds = Dataset.from_list([_row(r) for r in cur.fetchall()])
print(ds)
print(ds[0])
```

### 6.4 Render a prompt (minimal, faithful to the canonical recipe)

```python
def render_prompt(row, meta):
    rcp = meta["prompt_reconstruction"]
    options = row["options"]
    qt, ct = row["question_type"], row["choice_type"]

    if qt == "yes_no":
        outcomes_block = ""
        out_fmt = rcp["yes_no_output_format"]
    elif qt == "binary_named":
        outcomes_block = ""
        out_fmt = (
            rcp["binary_named_output_format"]
            .replace("<options[0]>", options[0])
            .replace("<options[1]>", options[1])
        )
    elif qt == "multiple_choice":
        outcomes_block = "\n" + "\n".join(
            f"{chr(ord('A') + i)}. {label}" for i, label in enumerate(options)
        )
        key = (
            "multiple_choice_single_output_format" if ct == "single"
            else "multiple_choice_multi_output_format"
        )
        out_fmt = rcp[key]
    else:
        raise ValueError(qt)

    return rcp["prompt_template"].format(
        agent_role=rcp["agent_role"],
        event=row["event"],
        end_time=row["end_time"],
        outcomes_block=outcomes_block,
        output_format=out_fmt,
        guidance=rcp["guidance"],
    )
```

The full reference renderer (with the >26-option backtick rule and an optional reflection / belief-elicitation tail) lives at `forecast_eval/prompts.py`; reusing it gives byte-identical prompts.
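
Putting §6.1 and §6.4 together; this assumes the recipe keys shown in §5 and that `render_prompt` from §6.4 is in scope:

```python
import sqlite3, json

conn = sqlite3.connect("forecast_eval_set_example.db")
conn.row_factory = sqlite3.Row

meta = json.loads(
    conn.execute("SELECT features_json FROM dataset_metadata").fetchone()["features_json"]
)
row = dict(conn.execute("SELECT * FROM forecast_eval_set_example LIMIT 1").fetchone())
row["options"] = json.loads(row["options"])   # render_prompt expects a decoded list

print(render_prompt(row, meta))               # one byte-stable user message
```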


## 7. Recommended evaluation protocol

Pair the dataset with the OracleProto evaluation harness, which layers information-boundary discipline on top of the bare prompt-and-score loop. Five concrete recommendations (a sketch of the admissibility and time-mask checks follows the list):

  1. Declare a knowledge cutoff $\kappa_M$ for every model. A question is admissible for model $M$ only when $\kappa_M \le \chi_i < \tau_i$, where $\chi_i$ is the per-question prediction cutoff and $\tau_i$ is its resolution date. Inadmissible questions are filtered upstream rather than counted as model errors. A model with no declared cutoff cannot be fairly compared to one that has one.

  2. Time-mask any retrieval or browsing tool. If the harness lets the model issue web searches, pin the search-side end_date to $\chi_i + \delta$ with a conservative offset; OracleProto defaults to $\delta = -1$ day. The mechanism behind this barrier (L2) is documented in the framework's DESIGN and FRAME notes.

  3. Run an independent retrieval-content auditor. Each retrieved snippet is passed to a separate LLM auditor that decides whether the snippet leaks the resolution. This is the L3 barrier in the framework's threat model.

  4. Forbid provider-native browsing. OracleProto refuses model slugs ending in `:online` and similar hosted-browsing variants at three layers: config validation, on-the-wire client, and detector client. This is the L4 residual that must pass before any billable LLM call leaves the process.

  5. Score with strict set equality on letter sets, per §5.5. Optional probability-calibration metrics (Brier, NLL, ECE, Murphy decomposition) are supported when the model emits an additional `<belief>{ ... }</belief>` JSON block per the v4 belief protocol; the schema is documented in `forecast_eval/prompts.py::BELIEF_PROTOCOL`.
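
A minimal sketch of the admissibility rule from item 1 and the time-mask from item 2, assuming per-question prediction cutoffs are tracked as dates (names illustrative):

```python
from datetime import date, timedelta

def is_admissible(kappa_m: date, chi_i: date, tau_i: date) -> bool:
    """Item 1: a question is admissible for a model iff kappa_M <= chi_i < tau_i."""
    return kappa_m <= chi_i < tau_i

def search_end_date(chi_i: date, delta_days: int = -1) -> date:
    """Item 2: pin retrieval to chi_i + delta; OracleProto defaults to delta = -1 day."""
    return chi_i + timedelta(days=delta_days)

# A model cut off at 2026-01-01, predicting on 2026-03-01 a question resolving 2026-03-13:
assert is_admissible(date(2026, 1, 1), date(2026, 3, 1), date(2026, 3, 13))
assert search_end_date(date(2026, 3, 1)) == date(2026, 2, 28)
```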

Without the OracleProto harness in place, treat the resulting numbers as upper bounds on forecasting ability: any model that can browse the open web, or that was trained past a question's end_time, may have memorised the answer. The dataset makes the honesty audit possible; it does not enforce it on its own.


## 8. Provenance and curation

- **Source.** Upstream HuggingFace forecasting questions, restricted to levels 1+2 (the easier two of the upstream difficulty bands). The raw set was harvested as 322 candidate questions.
- **Curation pipeline (5 passes).**
  1. Source-side broken-row removal and column flattening.
  2. `end_time` / answer-encoding / option-label normalization: `end_time` reduced to a `YYYY-MM-DD` calendar date; Yes/No mapped to A/B; option labels stripped of stray markdown.
  3. Down-sampling 322 → 200 → 100 → 80 with placeholder removal, deduplication, and an ambiguity audit.
  4. Final HIGH+MEDIUM ambiguity remediation: 4 rows reworded for explicit time anchoring, unit explicitness, and unambiguous binary framing.
  5. CRITICAL fix on one S&P 500 multi-select truth set so it satisfies the monotonic-threshold logic implied by the option ladder.
- **Verification.** All 80 ground truths verified end-to-end via parser round-trip (the rendered prompt is parsed and re-encoded back to the canonical letter set). Final tally: 0 critical / 0 high / 0 medium ambiguity issues remaining.

## 9. Intended uses and limitations

### 9.1 Intended uses

- Forecasting benchmark for LLMs and LLM agents, particularly tool-using agents that combine parametric knowledge with time-masked web retrieval.
- Reproducibility testbed for forecasting harnesses. The `dataset_metadata` table makes every prompt byte-stable; pairing it with the OracleProto framework yields a run unit whose scoring artefacts are bit-identical when the configuration matches.
- Calibration and proper-scoring research. The 80-row size is small enough that per-question analysis (belief evolution, source attribution, calibration plots) stays tractable.

### 9.2 Out-of-scope uses

- **Training data.** Including the rows in any training, fine-tuning, or RLHF corpus contaminates downstream forecasting evaluations of the trained model. The dataset is evaluation-only.
- **Long-horizon forecasting.** All resolutions land in a one-month window (2026-03-12 → 2026-04-14); the set does not represent multi-quarter or multi-year forecasting.
- **Open-ended generation.** Every question has a closed answer set, so this is not a generation benchmark.

### 9.3 Known limitations and biases

- **Sample size.** 80 rows is small. Confidence intervals on accuracy or Brier are wide; report them alongside point estimates and use paired tests when comparing models on the same set (see the interval sketch after this list).
- **Topical skew.** Questions concentrate in finance and macro indicators, sports events, awards (Oscars, NBA, UEFA, etc.), and US-centric political and geopolitical events, reflecting the upstream HuggingFace market mix. They are not a globally representative sample.
- **English-only.** All `event` and `options` strings are English.
- **Date-only resolution.** `end_time` is a date, not a timestamp, and the dataset does not carry a timezone column. If finer-grained admissibility is needed, treat each resolution as covering the whole GMT+8 calendar day.
- **Provider-side residual leakage.** Any LLM that has ingested the upstream HuggingFace dataset, or that was trained past the resolution window, can recover ground truths from parametric memory. The dataset cannot patch this on its own; it relies on the harness to enforce admissibility ($\kappa_M$).
- **Snapshot of a moving label space.** A few questions ("none of the above", "all of the above") interact non-trivially with multi-select scoring; the curation pass fixed the one S&P 500 case, but the convention for similar questions in future revisions may shift. Pin to the schema version if byte-stable behaviour across releases is required.
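
As one way to honour the sample-size caveat above, a Wilson score interval makes the width explicit; this is an illustrative choice of interval, not a method mandated by the framework:

```python
from math import sqrt

def wilson_interval(correct: int, n: int, z: float = 1.96) -> tuple[float, float]:
    """Approximate 95% Wilson score interval for a binomial proportion."""
    p = correct / n
    denom = 1 + z**2 / n
    centre = (p + z**2 / (2 * n)) / denom
    half = (z / denom) * sqrt(p * (1 - p) / n + z**2 / (4 * n**2))
    return centre - half, centre + half

# 48/80 correct (60% accuracy) gives roughly (0.49, 0.70): wide, as the caveat warns
print(wilson_interval(48, 80))
```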

## 10. License

Released under the MIT License (see LICENSE). The upstream questions originate from a public HuggingFace forecasting set; the curation work, schema, prompt-reconstruction recipe, and answer encodings in this release are the contribution of this project.


## 11. Contact and contributions

Issues, schema feedback, and ambiguity reports are welcome. If a row's ground truth has changed, or its framing is ambiguous under §5.5, open an issue in the relevant repository.

Row-level reports should include the `id`, the disputed framing, and, where available, a primary source; those are the inputs the curation pipeline needs to update the row in the next release.