---
license: mit
task_categories:
- question-answering
- text-generation
language:
- en
tags:
- forecasting
- benchmark
- llm-evaluation
- reasoning
- temporal-reasoning
- contamination-control
- leakage-control
- prediction
- agent
size_categories:
- n<1K
pretty_name: OracleProto Forecasting Eval Set
---

# OracleProto: Forecasting Evaluation Set

**Dataset:** [`Hugging Face`](https://huggingface.co/datasets/MaYiding/OracleProto)
**GitHub Repo:** [`GitHub`](https://github.com/MaYiding/OracleProto)
**Chinese Doc:** [`中文文档`](https://huggingface.co/datasets/MaYiding/OracleProto/blob/main/README-ZH.md)
**License:** MIT

A SQLite-packaged evaluation set of 80 hand-curated forecasting questions on real-world events, with resolution dates between 2026-03-12 and 2026-04-14, released alongside the [GitHub Repo](https://github.com/MaYiding/OracleProto). Both the rows and the byte-stable prompt-reconstruction recipe ship inside a single file, `forecast_eval_set_example.db`, which holds two tables: `forecast_eval_set_example` (the 80 rows) and `dataset_metadata` (the recipe).

---

## 1. Dataset at a glance

| Field                 | Value                                                                  |
| --------------------- | ---------------------------------------------------------------------- |
| Release date          | `2026-04-29`                                                           |
| Rows                  | 80                                                                     |
| Splits                | `train` (80); single split, intended as a held-out evaluation set      |
| Resolution-date range | `2026-03-12` → `2026-04-14`                                            |
| Question types        | `yes_no`, `binary_named`, `multiple_choice`                            |
| Choice types          | `single` (one correct letter), `multi` (one-or-more correct letters)   |
| Database file         | `forecast_eval_set_example.db` (SQLite 3, ~52 KB)                      |
| Tables in the file    | `forecast_eval_set_example` (80 rows), `dataset_metadata` (1 row)      |
| License               | MIT                                                                    |
| Source upstream       | HuggingFace forecasting questions (levels 1+2), 322 raw → 80 curated   |

### Type distribution

| `question_type`   | `choice_type` | Rows   |
| ----------------- | ------------- | ------ |
| `yes_no`          | `single`      | 37     |
| `binary_named`    | `single`      | 3      |
| `multiple_choice` | `single`      | 32     |
| `multiple_choice` | `multi`       | 8      |
| **Total**         |               | **80** |

`yes_no` is a binary Yes/No question. `binary_named` is a binary choice between two named entities such as sports teams, fighters, or sides. `multiple_choice` carries at least three labelled options, with one or more correct letters allowed; "None of the above" is a valid answer when listed. Each row stores the exact option labels; letter `A` maps to `options[0]`, `B` to `options[1]`, and so on (§3.4 covers labels beyond `Z`).

---

## 2. Files

```text
OracleProto/
├── forecast_eval_set_example.db   # SQLite database file (the dataset; ~52 KB)
├── README.md                      # this file
├── LICENSE                        # MIT
└── .gitattributes                 # standard HF binary attributes
```

The dataset ships as one SQLite file, not Parquet or JSONL, because the prompt-reconstruction recipe and per-row provenance live in the same file as the rows (in `dataset_metadata.features_json`). A loader for `datasets.Dataset` and Parquet conversion appears in §6.

---

## 3. Database schema

Two tables: `forecast_eval_set_example` holds the 80 rows; `dataset_metadata` holds the canonical recipe. The file takes its name from the primary table.
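Before the per-table DDL, a minimal Python sketch (standard library only; it assumes `forecast_eval_set_example.db` sits in the working directory) of opening the file read-only and confirming the two-table layout described above:

```python
import sqlite3

# Open the shipped file read-only so the eval set cannot be mutated by accident.
conn = sqlite3.connect("file:forecast_eval_set_example.db?mode=ro", uri=True)

# List the tables; expect ['dataset_metadata', 'forecast_eval_set_example'].
tables = [name for (name,) in conn.execute(
    "SELECT name FROM sqlite_master WHERE type = 'table' ORDER BY name"
)]
print(tables)

# Cross-check the advertised row count against the metadata table.
(rows,) = conn.execute("SELECT COUNT(*) FROM forecast_eval_set_example").fetchone()
(declared,) = conn.execute("SELECT row_count FROM dataset_metadata").fetchone()
assert rows == declared == 80
conn.close()
```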
### 3.1 Table `forecast_eval_set_example` (the rows)

```sql
CREATE TABLE forecast_eval_set_example (
    id            TEXT PRIMARY KEY,
    choice_type   TEXT NOT NULL CHECK (choice_type IN ('single','multi')),
    question_type TEXT NOT NULL,  -- yes_no | binary_named | multiple_choice
    event         TEXT NOT NULL,  -- the event being predicted
    options       TEXT NOT NULL,  -- JSON array of option labels
    answer        TEXT NOT NULL,  -- canonical correct answer as letter(s)
    end_time      TEXT NOT NULL   -- 'YYYY-MM-DD'
);

CREATE INDEX idx_forecast_eval_set_example_choice_type
    ON forecast_eval_set_example(choice_type);
CREATE INDEX idx_forecast_eval_set_example_question_type
    ON forecast_eval_set_example(question_type);
CREATE INDEX idx_forecast_eval_set_example_end_time
    ON forecast_eval_set_example(end_time);
```

### 3.2 Table `dataset_metadata` (the recipe)

A one-row table whose `features_json` blob carries the prompt template, the four output formats, the outcomes-block rule, the agent-role string, and curation provenance. The full recipe is rendered in §5.

```sql
CREATE TABLE dataset_metadata (
    dataset_name    TEXT NOT NULL,
    split_name      TEXT NOT NULL,
    table_name      TEXT NOT NULL,
    row_count       INTEGER NOT NULL,
    imported_at_utc TEXT NOT NULL,
    features_json   TEXT NOT NULL
);
```

### 3.3 Column semantics

| Column          | Type | Description |
| --------------- | ---- | ----------- |
| `id`            | TEXT | Stable source-side question ID inherited from the upstream HuggingFace forecasting set; primary join key. |
| `choice_type`   | TEXT | `'single'` if exactly one letter is correct, `'multi'` if one-or-more letters can be correct. Derived from the number of letters in `answer`. Drives the single-answer vs multi-select branch in §5.4. |
| `question_type` | TEXT | One of `yes_no`, `binary_named`, `multiple_choice`. Selects which prompt template is rendered (§5). |
| `event`         | TEXT | Natural-language description of the event being predicted, author-edited for explicit time anchoring, unit explicitness, and unambiguous binary framing. |
| `options`       | TEXT | JSON array of option labels. For `yes_no` it is fixed to `["Yes","No"]`. For `binary_named` it is two named entities. For `multiple_choice` it is a list of choice labels whose letter is implied by index (`A=options[0]`, `B=options[1]`, …). |
| `answer`        | TEXT | Canonical correct answer encoded as letters. For `yes_no` and `binary_named` it is `'A'` or `'B'`. For `multiple_choice` it is a comma-separated letter list in option order, e.g. `'A'` or `'A, B'`. |
| `end_time`      | TEXT | Resolution date in `YYYY-MM-DD`. The column stores a calendar date only; the prompt template (§5.2) supplies the GMT+8 reading. If finer-grained admissibility is needed, treat each resolution as covering the whole calendar day. |

### 3.4 Letter-to-index encoding

Letters map to option indices via `index = ord(letter) - ord('A')`. Beyond `Z` (≥27 options) the labels run on as `[`, `\`, `]`, `^`, `_`, `` ` ``, `a`, `b`, …, the contiguous ASCII range starting at `A`. The reference renderer wraps any non-`A`–`Z` label in backticks so it survives Markdown rendering. None of the 80 rows exceeds 26 options, but the encoding is documented because the framework's parser supports it.
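To make the encoding concrete, a short sketch of the decoding direction; `decode_answer` is an illustrative helper, not part of the shipped recipe:

```python
import json

def decode_answer(answer: str, options_json: str) -> list[str]:
    """Map a canonical `answer` such as 'A, B' back to option labels.

    Applies the contiguous-ASCII rule from §3.4, index = ord(letter) - ord('A'),
    which also covers single-character labels beyond 'Z'.
    """
    options = json.loads(options_json)
    return [options[ord(letter) - ord("A")] for letter in answer.split(", ")]

# Rows from §4 (the multi-select options list is abridged here):
print(decode_answer("B", '["Yes", "No"]'))  # -> ['No']
print(decode_answer("A, B, C, D",
                    '["One Battle After Another", "Sinners", "Frankenstein", '
                    '"KPop Demon Hunters", "F1"]'))
# -> the four labels marked correct in that row's answer, in option order
```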
---

## 4. Sample rows

```json
{
  "id": "699d9ffc098cca008728b6f0",
  "choice_type": "single",
  "question_type": "yes_no",
  "event": "Will the US PCE annual inflation be greater than 2.9% in January 2026?",
  "options": ["Yes", "No"],
  "answer": "B",
  "end_time": "2026-03-13"
}
```

```json
{
  "id": "69a2e39e5692ef005cdbf2d3",
  "choice_type": "single",
  "question_type": "binary_named",
  "event": "Will US or Israel strike Iran first?",
  "options": ["US", "Israel"],
  "answer": "B",
  "end_time": "2026-03-31"
}
```

```json
{
  "id": "6995b1073ea64b005b11f285",
  "choice_type": "single",
  "question_type": "multiple_choice",
  "event": "Which men's basketball team will win the Big 12 Conference Championship tournament in the 2025-26 season?",
  "options": ["Arizona", "Baylor", "Brigham Young University (BYU)", "Houston", "Iowa State", "Kansas", "Kansas State"],
  "answer": "A",
  "end_time": "2026-03-14"
}
```

```json
{
  "id": "698f198bda7a8b006575444c",
  "choice_type": "multi",
  "question_type": "multiple_choice",
  "event": "Which movies will win multiple Oscars? (2026)",
  "options": ["One Battle After Another", "Sinners", "Frankenstein", "KPop Demon Hunters", "F1", "Sentimental Value", "Hamnet", "Marty Supreme", "The Secret Agent", "Avatar: Fire and Ash", "Train Dreams", "Bugonia", "Blue Moon", "It Was Just An Accident"],
  "answer": "A, B, C, D",
  "end_time": "2026-03-15"
}
```

---

## 5. Prompt reconstruction (canonical recipe)

Every row is rendered into a single user message via the recipe stored in `dataset_metadata.features_json.prompt_reconstruction`. The recipe is byte-stable and is the source of truth for the OracleProto evaluator; downstream users who reconstruct prompts themselves should follow it exactly so results stay comparable.

### 5.1 Static fragments

```text
agent_role: "You are an agent that can predict future events."

guidance: "Do not use any other format. Do not refuse to make a prediction. Do not say \"I cannot predict the future.\" You must make a clear prediction based on the best data currently available, using the box format specified above."
```

### 5.2 Master template

```text
{agent_role}

The event to be predicted: "{event} (resolved around {end_time} (GMT+8)).{outcomes_block}"

IMPORTANT: Your final answer MUST end with this exact format:
{output_format}

{guidance}
```

The literal `(GMT+8)` inside the user-visible string is what gives `end_time` its timezone reading; the column itself stores only a date.

### 5.3 `outcomes_block`

For `yes_no` and `binary_named`: empty, since the option labels are baked into `output_format`.

For `multiple_choice`: a leading newline followed by one line per option in `A. <label>` form, letters assigned in option order.
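To close the loop, a sketch of the §5.3 rule as code. It assumes the per-option line format `A. <label>` described above and borrows the §3.4 backtick-wrapping for labels beyond `Z`; it is a reading of the recipe, not the evaluator's own renderer:

```python
import json
import string

def outcomes_block(question_type: str, options_json: str) -> str:
    """Render the {outcomes_block} slot of the master template (§5.2)."""
    if question_type in ("yes_no", "binary_named"):
        return ""  # labels are baked into {output_format} instead
    lines = []
    for i, label in enumerate(json.loads(options_json)):
        letter = chr(ord("A") + i)  # contiguous ASCII, per §3.4
        if letter not in string.ascii_uppercase:
            letter = f"`{letter}`"  # wrap non-A-to-Z labels in backticks
        lines.append(f"{letter}. {label}")
    return "\n" + "\n".join(lines)  # leading newline, then one line per option

# The Big 12 row from §4 (options abridged):
print(outcomes_block("multiple_choice", '["Arizona", "Baylor", "Houston"]'))
```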