Update README
README.md
CHANGED
@@ -20,37 +20,13 @@ size_categories:
pretty_name: OracleProto Forecasting Eval Set
---

-# OracleProto

**Dataset:** [`MaYiding/OracleProto`](https://huggingface.co/datasets/MaYiding/OracleProto) ·
**Framework:** [`MaYiding/OracleProto`](https://github.com/MaYiding/OracleProto) ·
**License:** MIT

-A curated set of **80** forecasting questions on real-world events that
-resolve between **2026-03-12 and 2026-04-14**. The set is the public example release shipped
-with the [OracleProto framework](https://github.com/MaYiding/OracleProto) — a reproducible
-harness for benchmarking LLM-native forecasting via knowledge-cutoff and temporal-masking
-discipline.
-
-The dataset is shipped as **a single SQLite database file** named
-`forecast_eval_set_example.db`, which contains **two tables**:
-
-* `forecast_eval_set_example` — the 80 forecasting rows (one row per question).
-* `dataset_metadata` — a one-row table holding the canonical prompt-reconstruction recipe
-  (prompt template, output formats, agent role, answer-encoding rules, provenance).
-
-The dataset is designed to be the **dataset \\(\mathcal{D}\\)** in the OracleProto run unit
-\\(\mathcal{R} = (\mathcal{D}, M, \kappa_M, \delta, T, C, R, \Psi, \phi, \Gamma)\\): every column,
-prompt template, and answer-encoding rule below is byte-stable and round-trip parsed by the
-reference parser, so a forecasting run on this set is auditable, replayable, and comparable
-across models and across calendar years.
-
-> **TL;DR.** 80 yes/no, two-named-entity, and multiple-choice (single-answer / multi-select)
-> questions on real-world events. All ground truths are verified end-to-end via parser
-> round-trip; 0 critical / 0 high / 0 medium ambiguity issues remain. Distributed as a single
-> SQLite database file `forecast_eval_set_example.db` containing tables
-> `forecast_eval_set_example` (rows) and `dataset_metadata` (recipe), so the
-> prompt-reconstruction recipe and per-question metadata stay co-located with the rows.

---

@@ -61,14 +37,14 @@ across models and across calendar years.
| Schema version | `v1.0` |
| Release date | `2026-04-29` |
| Rows | 80 |
-| Splits | `train` (80) |
-| Resolution-date range | `2026-03-12` → `2026-04-14` |
| Question types | `yes_no`, `binary_named`, `multiple_choice` |
| Choice types | `single` (one correct letter), `multi` (one-or-more correct letters) |
| Database file | `forecast_eval_set_example.db` (SQLite 3, ~52 KB) |
| Tables in the file | `forecast_eval_set_example` (80 rows), `dataset_metadata` (1 row) |
| License | MIT |
-| Source upstream | HuggingFace forecasting questions (levels 1+2) |

### Type distribution

@@ -80,11 +56,7 @@ across models and across calendar years.
| `multiple_choice` | `multi` | 8 |
| **Total** | | **80** |

-`yes_no` is binary Yes/No, `binary_named` is binary between two named entities (sports
-teams, fighters, sides), and `multiple_choice` carries ≥3 labelled options where multiple
-correct answers are allowed; *"None of the above"* is a valid answer. Every question lists the
-exact option labels, so labels are the source of truth — letter labels (`A`, `B`, …) are
-implied by index.

---

@@ -98,27 +70,13 @@ OracleProto/
└── .gitattributes             # standard HF binary attributes
```

-The dataset ships as **a single SQLite file** rather than Parquet or JSONL: the
-prompt-reconstruction recipe, the column-level schema, and per-row provenance live in the
-same file (in the `dataset_metadata` table). This keeps the canonical source of truth —
-including the byte-stable `prompt_template` used by the evaluator — co-located with the
-rows. The file `forecast_eval_set_example.db` contains exactly two tables:
-
-* **`forecast_eval_set_example`** — the 80 forecasting rows. Schema and column semantics in §3.1 / §3.3.
-* **`dataset_metadata`** — a single-row table whose `features_json` blob holds the prompt
-  template, the four output formats, the outcomes-block rule, the agent-role string, and
-  the curation provenance. Schema in §3.2; full recipe rendered in §5.
-
-A loader example for converting to `datasets.Dataset` / Parquet is provided in §6.

---

## 3. Database schema

-The database file holds two tables:
-`forecast_eval_set_example` (80 forecasting rows) and `dataset_metadata` (1 metadata row).
-Note that the database file and the rows table happen to share the same name — the file is
-named after its primary table.

### 3.1 Table `forecast_eval_set_example` (the rows)

@@ -140,10 +98,7 @@ CREATE INDEX idx_forecast_eval_set_example_end_time ON forecast_eval_set_ex

### 3.2 Table `dataset_metadata` (the recipe)

-A single-row table. The row holds a
-`features_json` blob with the **full prompt-reconstruction recipe** used by the OracleProto
-evaluator. This is what makes the run reproducible — the prompt template, the four output
-formats, the outcomes-block rule, and the agent role are all here in canonical form.

```sql
CREATE TABLE dataset_metadata (
@@ -152,7 +107,7 @@ CREATE TABLE dataset_metadata (
    table_name      TEXT NOT NULL,
    row_count       INTEGER NOT NULL,
    imported_at_utc TEXT NOT NULL,
-    features_json   TEXT NOT NULL
);
```

@@ -160,30 +115,22 @@ CREATE TABLE dataset_metadata (

| Column | Type | Description |
| --------------- | ------- | ----------- |
-| `id` | TEXT | Stable source-side question ID inherited from the upstream HuggingFace forecasting set. |
-| `choice_type` | TEXT | `'single'` if exactly one letter is correct, `'multi'` if one-or-more letters can be correct. Derived from the number of letters in `answer`. |
-| `question_type` | TEXT | One of `yes_no`, `binary_named`, `multiple_choice`. Selects which prompt template is rendered (§5). |
-| `event` | TEXT | Natural-language description of the event being predicted. |
-| `options` | TEXT | JSON array of option labels. |
-| `answer` | TEXT | Canonical correct answer encoded as letters. |
-| `end_time` | TEXT | Resolution date in `YYYY-MM-DD`. |

### 3.4 Letter-to-index encoding

-Letters `A`, `B`, `C`, … map to option indices via
-`index = ord(letter) - ord('A')`. Beyond the 26-letter alphabet (≥27 options) the labels
-land on `[`, `\`, `]`, `^`, `_`, `` ` ``, `a`, `b`, …, i.e. the contiguous ASCII range
-starting at `A`. The reference renderer wraps any non-`A`–`Z` label in backticks so the
-label survives Markdown rendering. None of the 80 example questions exceed 26 options, but
-the encoding scheme is documented for completeness because the evaluator and parser support
-it.

---

## 4. Sample rows

-A few representative rows (truncated for readability):
-
```json
{
  "id": "699d9ffc098cca008728b6f0",
@@ -240,10 +187,7 @@ A few representative rows (truncated for readability):

## 5. Prompt reconstruction (canonical recipe)

-Every row is rendered into a single user message by the recipe stored in
-`dataset_metadata.features_json.prompt_reconstruction`. The recipe is byte-stable and is the
-source of truth for the OracleProto evaluator; downstream users who reconstruct prompts
-themselves should follow it exactly so results stay comparable.

### 5.1 Static fragments

@@ -258,11 +202,6 @@ guidance: "Do not use any other format. Do not refuse to make a prediction.

### 5.2 Master template

-The block below is reproduced **verbatim** from the value stored in
-`dataset_metadata.features_json.prompt_reconstruction.prompt_template`. It is a literal
-string baked into the recipe; the dataset itself does not otherwise carry a timezone field
-on `end_time` (see §3.3).
-
```text
{agent_role} The event to be predicted: "{event} (resolved around {end_time} (GMT+8)).{outcomes_block}"
|
|
@@ -271,13 +210,12 @@ IMPORTANT: Your final answer MUST end with this exact format:
|
|
| 271 |
{guidance}
|
| 272 |
```
|
| 273 |
|
|
|
|
|
|
|
### 5.3 `outcomes_block`

-* For `yes_no` and `binary_named`: empty — the option labels are baked into
-  `output_format`.
-* For `multiple_choice`: a leading newline followed by one line per option in `A. <label>`
-  form, e.g. `\nA. Arizona\nB. Baylor\nC. Brigham Young University (BYU)\n…`. Labels whose
-  derived letter falls outside `A`–`Z` are wrapped in backticks.

### 5.4 `output_format` (one of four, chosen by `question_type` × `choice_type`)

@@ -289,8 +227,7 @@ Your final answer MUST end with this exact format:
\boxed{Yes} or \boxed{No}
```

-**`binary_named`** (the literals `<options[0]>` and `<options[1]>` are replaced by the two
-named entities from `options`):
```text
Your task is to predict which of the two outcomes will occur based on your analysis.
Your prediction will be scored based on its accuracy. You will only receive points if your answer is correct.
@@ -318,25 +255,15 @@ For example: \boxed{A} for a single correct option, or \boxed{B, C} for multiple

### 5.5 Answer parsing

-The reference parser applies these rules:
-
-1. Take the **last** `\boxed{...}` substring in the model's reply; everything else is
-   reasoning and is ignored.
-2. For `yes_no` (case-insensitive): `Yes` → `A`, `No` → `B`; anything else → unparsed.
-3. For `binary_named` (case-insensitive): match the boxed payload against `options[0]`
-   or `options[1]`; anything else → unparsed.
-4. For `multiple_choice`: split the boxed payload on commas/whitespace, validate that each
-   token is exactly one letter, and check that each letter resolves to a valid option index.
-   Out-of-range letters or multi-character tokens → unparsed.
-5. Predictions are scored by **strict set equality** against the canonical letter set
-   parsed from `answer`. A missing or unparsed boxed answer is recorded as `parse_ok = 0`
-   and is **not** an error of the parser — the run records it and moves on.
-
-> The parser is the formal answer-validator \\(\Psi\\) in the OracleProto run unit. Re-using
-> it (rather than rolling your own regex) is the easiest way to get bit-identical scores
-> across implementations.
---

@@ -443,72 +370,38 @@ def render_prompt(row, meta):
)
```

-> The full reference renderer (with the >26-option backtick rule and an optional reflection
-> / belief-elicitation tail) lives at
-> [`forecast_eval/prompts.py`](https://github.com/MaYiding/OracleProto/blob/main/forecast_eval/prompts.py)
-> in the OracleProto framework. Re-using it gives byte-identical prompts.

---

## 7. Recommended evaluation protocol

-
|
| 456 |
-
|
| 457 |
-
|
| 458 |
-
|
| 459 |
-
|
| 460 |
-
|
| 461 |
-
|
| 462 |
-
|
| 463 |
-
|
| 464 |
-
|
| 465 |
-
|
| 466 |
-
|
| 467 |
-
|
| 468 |
-
conservative offset (OracleProto defaults to \\(\delta = -1\\) day). This is the L2
|
| 469 |
-
"tool-mediated" leakage barrier.
|
| 470 |
-
|
| 471 |
-
3. **Run an independent retrieval-content auditor.** Each retrieved snippet is passed to a
|
| 472 |
-
separate LLM auditor that decides whether the snippet leaks the resolution. This is the
|
| 473 |
-
L3 "retrieval-content" barrier in the OracleProto threat model.
|
| 474 |
-
|
| 475 |
-
4. **Forbid provider-native browsing.** OracleProto refuses model slugs ending in `:online`
|
| 476 |
-
and similar hosted-browsing variants on three layers (config validation, on-the-wire
|
| 477 |
-
client, and detector client) — the L4 residual that *must* pass before any billable LLM
|
| 478 |
-
call leaves the process.
|
| 479 |
-
|
| 480 |
-
5. **Score with strict set equality on letter sets** (the parser semantics in §5.5). Optional
|
| 481 |
-
probability-calibration metrics (Brier / NLL / ECE / Murphy decomposition) are supported
|
| 482 |
-
when the model emits an additional `<belief>{ ... }</belief>` JSON block per the v4
|
| 483 |
-
belief protocol; the schema is documented in
|
| 484 |
-
[`forecast_eval/prompts.py::BELIEF_PROTOCOL`](https://github.com/MaYiding/OracleProto/blob/main/forecast_eval/prompts.py).
|
| 485 |
-
|
| 486 |
-
If you run *without* OracleProto, treat the numbers as **upper bounds on forecasting
|
| 487 |
-
ability**: any model that can browse the open web or that was trained past a question's
|
| 488 |
-
`end_time` may have memorised the answer. The dataset is designed to make this honesty audit
|
| 489 |
-
*possible*; it does not enforce it on its own.
|
| 490 |
|
---

## 8. Provenance and curation

-* **Source.** Upstream HuggingFace forecasting questions, restricted to *levels 1+2*
-  (the easier two of the upstream difficulty bands). The raw set was harvested as 322
-  candidate questions.
* **Curation pipeline (5 passes).**
  1. Source-side broken-row removal and column flattening.
-  2. `end_time` / answer-encoding / option-label normalization.
-  3. Down-sampling 322 → 200 → 100 → 80 with placeholder removal, deduplication, and an
-     ambiguity audit.
-  4. Final HIGH+MEDIUM ambiguity remediation: 4 rows reworded for explicit time anchoring,
-     unit explicitness, and unambiguous binary framing.
-  5. CRITICAL fix on one S&P 500 multi-select truth set so it satisfies the
-     monotonic-threshold logic implied by the option ladder.
-* **Verification.** All 80 ground-truths verified end-to-end by parser round-trip (the
-  rendered prompt is parsed and re-encoded back to the canonical letter set). Final tally:
-  **0 critical / 0 high / 0 medium ambiguity issues remaining**.

---
@@ -516,83 +409,46 @@ ability**: any model that can browse the open web or that was trained past a que

### 9.1 Intended uses

-* **Forecasting benchmark for LLMs and LLM agents.**
-* **Reproducibility testbed.** The `dataset_metadata` table makes
-  every prompt byte-stable; pair it with the OracleProto framework to get a run unit that
-  yields bit-identical scoring artefacts when the configuration matches.
-* **Calibration / proper-scoring research** — the 80-row size is deliberately small so
-  per-question analysis (belief evolution, source attribution, calibration plots) is
-  tractable.

### 9.2 Out-of-scope uses

-* **Training data.** The set is evaluation-only; training on it contaminates downstream
-  forecasting evaluations.
-* **Long-horizon forecasting.** All resolutions land in a one-month window
-  (2026-03-12 → 2026-04-14). The set is *not* representative of multi-quarter or
-  multi-year forecasting tasks.
-* **Open-ended generation.** Every question has a closed answer set; this is not a
-  generation benchmark.

### 9.3 Known limitations and biases

-* **Sample size.** 80 rows is small. Confidence intervals on accuracy are wide; the set is
-  best used to compare models on the same set.
-* **Topical skew.** The questions are concentrated in finance / macro indicators, sports
-  events, awards (Oscars, NBA, UEFA, etc.), and US-centric political and geopolitical
-  events — reflecting the upstream HuggingFace forecasting market mix. They are **not** a
-  globally representative sample of forecastable events.
* **English-only.** All `event` and `options` strings are English.
-* **Date-only resolution.** `end_time` is a calendar date with no time-of-day component.
-* **Provider-side residual leakage (L4 channel).** Any LLM that has ingested the upstream
-  HuggingFace dataset, or that was trained past the resolution window, can recover ground
-  truths from parametric memory. The dataset cannot patch this on its own — it relies on the
-  harness to enforce admissibility (\\(\kappa_M\\)).
-* **Snapshot of a moving label space.** A few questions ("none of the above", "all of the
-  above") interact non-trivially with multi-select scoring; the curation pass fixed the one
-  S&P 500 case but the convention for similar questions in future revisions may shift. Pin
-  to the schema-version field if you need byte-stable behaviour across releases.
---

## 10. Versioning

-* **`v1.0` (2026-04-29)** — initial release: 80 rows, resolution window
-  2026-03-12 → 2026-04-14; pipeline passes 1–5 above; 0 critical / 0 high / 0 medium
-  ambiguity issues remaining.

-The schema version is recorded inside the database
-(`dataset_metadata.features_json.schema_version`), so consumers can pin against it without
-re-deriving from the file's hash.

---

## 11. License

-This release is licensed under MIT: you are free to use,
-copy, modify, and redistribute it, including for commercial purposes, provided the copyright
-notice and license text are preserved.
-
-The upstream questions originate from a public HuggingFace forecasting set; the curation
-work, the schema, the prompt-reconstruction recipe, and the answer encodings in this release
-are the contribution of this project.

---

## 12. Contact and contributions

-Issues, schema feedback, and ambiguity reports are welcome. If you find a question whose ground
-truth has changed, or whose framing is ambiguous under the §5.5 parser, please open an issue
-in either of the project repositories:

-* Dataset (this repo): [`MaYiding/OracleProto` on Hugging Face](https://huggingface.co/datasets/MaYiding/OracleProto/discussions)
-* Framework code: [`MaYiding/OracleProto` on GitHub](https://github.com/MaYiding/OracleProto/issues)

-When reporting a row, include its `id` and (where
-available) a primary source — those are the two inputs the curation pipeline needs to update
-the row for the next release.
pretty_name: OracleProto Forecasting Eval Set
---

+# OracleProto: Forecasting Evaluation Set

**Dataset:** [`MaYiding/OracleProto`](https://huggingface.co/datasets/MaYiding/OracleProto) ·
**Framework:** [`MaYiding/OracleProto`](https://github.com/MaYiding/OracleProto) ·
**License:** MIT

+A SQLite-packaged evaluation set of 80 hand-curated forecasting questions on real-world events, with resolution dates between 2026-03-12 and 2026-04-14, released alongside the [OracleProto framework](https://github.com/MaYiding/OracleProto). Both the rows and the byte-stable prompt-reconstruction recipe ship inside a single file, `forecast_eval_set_example.db`, which holds two tables: `forecast_eval_set_example` (the 80 rows) and `dataset_metadata` (the recipe).

---

| Schema version | `v1.0` |
| Release date | `2026-04-29` |
| Rows | 80 |
+| Splits | `train` (80); single split, intended as a held-out evaluation set |
+| Resolution-date range | `2026-03-12` → `2026-04-14` (GMT+8) |
| Question types | `yes_no`, `binary_named`, `multiple_choice` |
| Choice types | `single` (one correct letter), `multi` (one-or-more correct letters) |
| Database file | `forecast_eval_set_example.db` (SQLite 3, ~52 KB) |
| Tables in the file | `forecast_eval_set_example` (80 rows), `dataset_metadata` (1 row) |
| License | MIT |
+| Source upstream | HuggingFace forecasting questions (levels 1+2), 322 raw → 80 curated |

### Type distribution

| `multiple_choice` | `multi` | 8 |
| **Total** | | **80** |

+`yes_no` is binary Yes/No; `binary_named` is binary between two named entities such as sports teams, fighters, or sides; `multiple_choice` carries at least three labelled options with one or more correct letters allowed, and "None of the above" is a valid answer when listed. Each row stores the exact option labels; letter `A` maps to `options[0]`, `B` to `options[1]`, and so on (§3.4 covers labels beyond `Z`).

---

└── .gitattributes             # standard HF binary attributes
```

+The dataset ships as one SQLite file, not Parquet or JSONL, because the prompt-reconstruction recipe and per-row provenance live in the same file as the rows (in `dataset_metadata.features_json`). A loader for `datasets.Dataset` and Parquet conversion appears in §6.

---

## 3. Database schema

+Two tables: `forecast_eval_set_example` holds the 80 rows; `dataset_metadata` holds the canonical recipe. The file takes its name from the primary table.
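The two-table layout can be checked with nothing but the standard-library `sqlite3` module. The sketch below builds an in-memory stand-in with the documented columns (rows-table columns per §3.3, metadata columns per §3.2; the exact constraints are illustrative) — against the real release, swap `":memory:"` for `forecast_eval_set_example.db`:

```python
import sqlite3

# In-memory stand-in mirroring the documented two-table layout.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE forecast_eval_set_example (
    id            TEXT PRIMARY KEY,  -- columns per the dataset card; types illustrative
    choice_type   TEXT,
    question_type TEXT,
    event         TEXT,
    options       TEXT,
    answer        TEXT,
    end_time      TEXT
);
CREATE TABLE dataset_metadata (
    table_name      TEXT NOT NULL,
    row_count       INTEGER NOT NULL,
    imported_at_utc TEXT NOT NULL,
    features_json   TEXT NOT NULL
);
""")

def list_tables(conn):
    # sqlite_master enumerates every schema object; keep tables only.
    rows = conn.execute(
        "SELECT name FROM sqlite_master WHERE type = 'table' ORDER BY name"
    ).fetchall()
    return [name for (name,) in rows]

print(list_tables(conn))  # ['dataset_metadata', 'forecast_eval_set_example']
```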

### 3.1 Table `forecast_eval_set_example` (the rows)

### 3.2 Table `dataset_metadata` (the recipe)

+A one-row table whose `features_json` blob carries the prompt template, the four output formats, the outcomes-block rule, the agent-role string, and curation provenance. The full recipe is rendered in §5.

```sql
CREATE TABLE dataset_metadata (
    table_name      TEXT NOT NULL,
    row_count       INTEGER NOT NULL,
    imported_at_utc TEXT NOT NULL,
+    features_json   TEXT NOT NULL
);
```

| Column | Type | Description |
| --------------- | ------- | ----------- |
+| `id` | TEXT | Stable source-side question ID inherited from the upstream HuggingFace forecasting set; primary join key. |
+| `choice_type` | TEXT | `'single'` if exactly one letter is correct, `'multi'` if one-or-more letters can be correct. Derived from the number of letters in `answer`. Drives the single-answer vs multi-select branch in §5.4. |
+| `question_type` | TEXT | One of `yes_no`, `binary_named`, `multiple_choice`. Selects which prompt template is rendered (§5). |
+| `event` | TEXT | Natural-language description of the event being predicted, author-edited for explicit time anchoring, unit explicitness, and unambiguous binary framing. |
+| `options` | TEXT | JSON array of option labels. For `yes_no` it is fixed to `["Yes","No"]`. For `binary_named` it is two named entities. For `multiple_choice` it is a list of choice labels whose letter is implied by index (`A=options[0]`, `B=options[1]`, …). |
+| `answer` | TEXT | Canonical correct answer encoded as letters. For `yes_no` and `binary_named` it is `'A'` or `'B'`. For `multiple_choice` it is a comma-separated letter list in option order, e.g. `'A'` or `'A, B'`. |
+| `end_time` | TEXT | Resolution date in `YYYY-MM-DD`. The column stores a calendar date only; the prompt template (§5.2) supplies the GMT+8 reading. If finer-grained admissibility is needed, treat each resolution as covering the whole calendar day. |

### 3.4 Letter-to-index encoding

+Letters map to option indices via `index = ord(letter) - ord('A')`. Beyond `Z` (≥27 options) the labels run on as `[`, `\`, `]`, `^`, `_`, `` ` ``, `a`, `b`, …, the contiguous ASCII range starting at `A`. The reference renderer wraps any non-`A`–`Z` label in backticks so it survives Markdown rendering. None of the 80 rows exceed 26 options, but the encoding is documented because the framework's parser supports it.
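The stated rule is easy to reproduce; this is an illustrative sketch (helper names are ours, not the framework's), including the backtick wrapping for labels past `Z`:

```python
def letter_for_index(index: int) -> str:
    # Index 0 -> 'A'; past 25 the labels continue through the contiguous
    # ASCII run '[', '\', ']', '^', '_', '`', 'a', ...
    return chr(ord("A") + index)

def index_for_letter(letter: str) -> int:
    return ord(letter) - ord("A")

def render_label(index: int) -> str:
    # Mirror of the documented rule: wrap anything outside A-Z in
    # backticks so the label survives Markdown rendering.
    letter = letter_for_index(index)
    return letter if "A" <= letter <= "Z" else f"`{letter}`"

print(letter_for_index(26), render_label(26))  # [ `[`
print(index_for_letter("C"))                   # 2
```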

---

## 4. Sample rows

```json
{
  "id": "699d9ffc098cca008728b6f0",
  …
}
```

## 5. Prompt reconstruction (canonical recipe)

+Every row is rendered into a single user message via the recipe stored in `dataset_metadata.features_json.prompt_reconstruction`. The recipe is byte-stable and is the source of truth for the OracleProto evaluator; downstream users who reconstruct prompts themselves should follow it exactly so results stay comparable.

### 5.1 Static fragments

### 5.2 Master template

```text
{agent_role} The event to be predicted: "{event} (resolved around {end_time} (GMT+8)).{outcomes_block}"
…
{guidance}
```

+The literal `(GMT+8)` inside the user-visible string is what gives `end_time` its timezone reading; the column itself stores only a date.

### 5.3 `outcomes_block`

+* For `yes_no` and `binary_named`: empty, since the option labels are baked into `output_format`.
+* For `multiple_choice`: a leading newline followed by one line per option in `A. <label>` form, e.g. `\nA. Arizona\nB. Baylor\nC. Brigham Young University (BYU)\n…`. Labels whose derived letter falls outside `A`–`Z` are wrapped in backticks.
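A minimal rendering of this rule might look as follows — an illustrative sketch, not the framework's renderer; whether a trailing newline follows the last option is not specified above, so this version omits it:

```python
import json

def outcomes_block(question_type: str, options_json: str) -> str:
    # Empty for yes_no / binary_named; otherwise a leading newline plus
    # one "A. <label>" line per option, backtick-wrapping letters past Z.
    if question_type != "multiple_choice":
        return ""
    lines = []
    for i, label in enumerate(json.loads(options_json)):
        letter = chr(ord("A") + i)
        if not ("A" <= letter <= "Z"):
            letter = f"`{letter}`"
        lines.append(f"{letter}. {label}")
    return "\n" + "\n".join(lines)

print(repr(outcomes_block("multiple_choice", '["Arizona", "Baylor"]')))
# '\nA. Arizona\nB. Baylor'
```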

### 5.4 `output_format` (one of four, chosen by `question_type` × `choice_type`)

\boxed{Yes} or \boxed{No}
```

+**`binary_named`** (the literals `<options[0]>` and `<options[1]>` are replaced by the two named entities from `options`):
```text
Your task is to predict which of the two outcomes will occur based on your analysis.
Your prediction will be scored based on its accuracy. You will only receive points if your answer is correct.

### 5.5 Answer parsing

+The reference parser ([`forecast_eval/parser.py::parse_answer`](https://github.com/MaYiding/OracleProto/blob/main/forecast_eval/parser.py)) applies these rules:
+
+1. Take the **last** `\boxed{...}` substring in the model's reply; everything else is reasoning or scratchpad and is ignored.
+2. For `yes_no` (case-insensitive): `Yes` → `A`, `No` → `B`. Anything else is unparsed.
+3. For `binary_named` (case-insensitive): match the boxed payload against `options[0]` or `options[1]`. Anything else is unparsed.
+4. For `multiple_choice`: split the boxed payload on commas and whitespace, validate that each token is a single letter, and check that each letter resolves to a valid option index. Out-of-range letters or multi-character tokens are unparsed.
+5. Score by strict set equality against the canonical letter set parsed from `answer`. A missing or unparsed boxed answer is recorded as `parse_ok = 0` rather than treated as a parser error; the run records it and moves on.
+
+Reusing the framework's parser is the practical way to get bit-identical scores across implementations.
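For readers who cannot vendor the framework, the rules sketch out roughly as below — an approximation for illustration only (the reference `parse_answer` remains the source of truth, and details such as whitespace handling may differ):

```python
import re

def parse_boxed(reply, question_type, options):
    """Approximate the rules above; returns a letter set, or None (parse_ok = 0)."""
    matches = re.findall(r"\\boxed\{([^}]*)\}", reply)
    if not matches:
        return None
    payload = matches[-1].strip()          # rule 1: last \boxed{...} wins
    if question_type == "yes_no":          # rule 2
        return {"yes": {"A"}, "no": {"B"}}.get(payload.lower())
    if question_type == "binary_named":    # rule 3
        lowered = [o.lower() for o in options]
        if payload.lower() in lowered:
            return {chr(ord("A") + lowered.index(payload.lower()))}
        return None
    letters = set()                        # rule 4: multiple_choice
    for token in re.split(r"[,\s]+", payload):
        if len(token) != 1 or not 0 <= ord(token) - ord("A") < len(options):
            return None
        letters.add(token)
    return letters

# Rule 5 then compares the returned set to the canonical one by equality.
print(parse_boxed(r"Reasoning... \boxed{B, C}", "multiple_choice", ["X", "Y", "Z"]))
```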

---

)
```

+The full reference renderer (with the >26-option backtick rule and an optional reflection / belief-elicitation tail) lives at [`forecast_eval/prompts.py`](https://github.com/MaYiding/OracleProto/blob/main/forecast_eval/prompts.py); reusing it gives byte-identical prompts.

---

## 7. Recommended evaluation protocol

+Pair the dataset with the OracleProto evaluation harness, which layers information-boundary discipline on top of the bare prompt-and-score loop. Five concrete recommendations:
+
+1. **Declare a knowledge cutoff $\kappa_M$ for every model.** A question is admissible for model $M$ only when $\kappa_M \le \chi_i < \tau_i$, where $\chi_i$ is the per-question prediction cutoff and $\tau_i$ is its resolution date. Inadmissible questions are filtered upstream rather than counted as model errors. A model with no declared cutoff cannot be fairly compared to one that has one.
+2. **Time-mask any retrieval or browsing tool.** If the harness lets the model issue web searches, pin the search-side `end_date` to $\chi_i + \delta$ with a conservative offset; OracleProto defaults to $\delta = -1$ day. This is the L2 "tool-mediated" leakage barrier.
+3. **Run an independent retrieval-content auditor.** Each retrieved snippet is passed to a separate LLM auditor that decides whether the snippet leaks the resolution. This is the L3 barrier in the framework's threat model.
+4. **Forbid provider-native browsing.** OracleProto refuses model slugs ending in `:online` and similar hosted-browsing variants on three layers: config validation, on-the-wire client, and detector client. This is the L4 residual that must pass before any billable LLM call leaves the process.
+5. **Score with strict set equality on letter sets**, per §5.5. Optional probability-calibration metrics (Brier, NLL, ECE, Murphy decomposition) are supported when the model emits an additional `<belief>{ ... }</belief>` JSON block per the v4 belief protocol; the schema is documented in [`forecast_eval/prompts.py::BELIEF_PROTOCOL`](https://github.com/MaYiding/OracleProto/blob/main/forecast_eval/prompts.py).
+
+Without the OracleProto harness in place, treat the resulting numbers as upper bounds on forecasting ability: any model that can browse the open web, or that was trained past a question's `end_time`, may have memorised the answer. The dataset makes the honesty audit possible; it does not enforce it on its own.
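Recommendation 1 reduces to a one-line date comparison; this hedged sketch (function and argument names are ours, not the framework's API) shows the filter applied before any model call:

```python
from datetime import date

def admissible(kappa_m: date, chi: date, tau: date) -> bool:
    # kappa_M <= chi < tau: the model's knowledge cutoff precedes the
    # prediction cutoff, which in turn precedes the resolution date.
    return kappa_m <= chi < tau

# Model cutoff 2025-12-01, predicting on 2026-03-01 a question that
# resolves on 2026-03-12 -> admissible; a 2026-04-01 cutoff is not.
print(admissible(date(2025, 12, 1), date(2026, 3, 1), date(2026, 3, 12)))  # True
print(admissible(date(2026, 4, 1), date(2026, 3, 1), date(2026, 3, 12)))   # False
```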
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| 392 |
|
| 393 |
---
|
| 394 |
|
| 395 |
## 8. Provenance and curation
|
| 396 |
|
| 397 |
+
* **Source.** Upstream HuggingFace forecasting questions, restricted to *levels 1+2* (the easier two of the upstream difficulty bands). The raw set was harvested as 322 candidate questions.
|
|
|
|
|
|
|
| 398 |
* **Curation pipeline (5 passes).**
  1. Source-side broken-row removal and column flattening.
  2. `end_time` / answer-encoding / option-label normalization: `end_time` reduced to a `YYYY-MM-DD` calendar date; `Yes/No` mapped to `A/B`; option labels stripped of stray markdown.
  3. Down-sampling 322 → 200 → 100 → 80 with placeholder removal, deduplication, and an ambiguity audit.
  4. Final HIGH+MEDIUM ambiguity remediation: 4 rows reworded for explicit time anchoring, unit explicitness, and unambiguous binary framing.
  5. CRITICAL fix on one S&P 500 multi-select truth set so it satisfies the monotonic-threshold logic implied by the option ladder.
* **Verification.** All 80 ground truths verified end-to-end via parser round-trip (the rendered prompt is parsed and re-encoded back to the canonical letter set). Final tally: 0 critical / 0 high / 0 medium ambiguity issues remaining.
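
The round-trip property can be illustrated with a toy encoder/parser pair. The real reference parser ships with the OracleProto framework; this stand-in only shows the invariant being verified.

```python
# Toy stand-in for the reference parser: a truth set is rendered to
# text, parsed back, and must survive unchanged.
def encode(letters: set[str]) -> str:
    """Render a letter set in a canonical comma-joined form."""
    return ",".join(sorted(letters))

def parse(text: str) -> set[str]:
    """Recover the letter set, tolerating whitespace and case noise."""
    return {tok.strip().upper() for tok in text.split(",") if tok.strip()}

def round_trips(letters: set[str]) -> bool:
    return parse(encode(letters)) == set(letters)
```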

---

### 9.1 Intended uses

* **Forecasting benchmark for LLMs and LLM agents**, particularly tool-using agents that combine parametric knowledge with time-masked web retrieval.
* **Reproducibility testbed for forecasting harnesses.** The `dataset_metadata` table makes every prompt byte-stable; pairing it with the OracleProto framework yields a run unit whose scoring artefacts are bit-identical when the configuration matches.
* **Calibration and proper-scoring research.** The 80-row size is small enough that per-question analysis (belief evolution, source attribution, calibration plots) stays tractable.
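
As a sketch of how a consumer might open the shipped file: the two table names follow the description above, but the helper name and any column names beyond them are assumptions of this example.

```python
import sqlite3

def load_eval_set(db_path: str):
    """Read the 80 question rows and the one-row prompt-recipe table."""
    con = sqlite3.connect(db_path)
    con.row_factory = sqlite3.Row  # rows behave like dicts
    rows = con.execute("SELECT * FROM forecast_eval_set_example").fetchall()
    meta = con.execute("SELECT * FROM dataset_metadata").fetchone()
    con.close()
    return rows, meta
```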

### 9.2 Out-of-scope uses

* **Training data.** Including the rows in any training, fine-tuning, or RLHF corpus contaminates downstream forecasting evaluations of the trained model. The dataset is evaluation-only.
* **Long-horizon forecasting.** All resolutions land in a one-month window (2026-03-12 → 2026-04-14); the set does not represent multi-quarter or multi-year forecasting.
* **Open-ended generation.** Every question has a closed answer set, so this is not a generation benchmark.

### 9.3 Known limitations and biases

* **Sample size.** 80 rows is small. Confidence intervals on accuracy or Brier are wide; report them alongside point estimates and use paired tests when comparing models on the same set.
* **Topical skew.** Questions concentrate in finance and macro indicators, sports events, awards (Oscars, NBA, UEFA, etc.), and US-centric political and geopolitical events, reflecting the upstream HuggingFace market mix. They are not a globally representative sample.
* **English-only.** All `event` and `options` strings are English.
* **Date-only resolution.** `end_time` is a date, not a timestamp, and the dataset does not carry a timezone column. If finer-grained admissibility is needed, treat each resolution as covering the whole GMT+8 calendar day.
* **Provider-side residual leakage.** Any LLM that has ingested the upstream HuggingFace dataset, or that was trained past the resolution window, can recover ground truths from parametric memory. The dataset cannot patch this on its own; it relies on the harness to enforce admissibility (\\(\kappa_M\\)).
* **Snapshot of a moving label space.** A few questions ("none of the above", "all of the above") interact non-trivially with multi-select scoring; the curation pass fixed the one S&P 500 case, but the convention for similar questions in future revisions may shift. Pin to the schema version if byte-stable behaviour across releases is required.
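
To make the sample-size caveat above concrete, here is a standard Wilson score interval for accuracy; it is a generic statistics sketch, not part of the framework.

```python
import math

def wilson_ci(correct: int, n: int, z: float = 1.96) -> tuple[float, float]:
    """95% Wilson score interval for a binomial proportion (z = 1.96)."""
    p = correct / n
    denom = 1 + z * z / n
    centre = (p + z * z / (2 * n)) / denom
    half = (z / denom) * math.sqrt(p * (1 - p) / n + z * z / (4 * n * n))
    return centre - half, centre + half
```

At 56/80 correct (70% accuracy) the 95% interval spans roughly 0.59–0.79, about twenty points wide; accuracy differences between models smaller than that need paired tests on the same rows to be credible.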

---

## 10. Versioning

* **`v1.0` (2026-04-29).** Initial public example release. 80 rows; resolution dates 2026-03-12 → 2026-04-14; pipeline passes 1–5 above; 0 critical / 0 high / 0 medium ambiguity issues remaining.

The schema version is recorded inside the database at `dataset_metadata.features_json.schema_version`, so consumers can pin against it without re-deriving from the file's hash.
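
Pinning can then be a few lines of SQL plus JSON decoding. This sketch assumes `features_json` is stored as a JSON text column, which follows from the path above but is not otherwise documented here.

```python
import json
import sqlite3

def schema_version(db_path: str) -> str:
    """Read dataset_metadata.features_json.schema_version from the .db file."""
    con = sqlite3.connect(db_path)
    (features_json,) = con.execute(
        "SELECT features_json FROM dataset_metadata").fetchone()
    con.close()
    return json.loads(features_json)["schema_version"]
```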

---

## 11. License

Released under the **MIT License** (see `LICENSE`). The upstream questions originate from a public HuggingFace forecasting set; the curation work, schema, prompt-reconstruction recipe, and answer encodings in this release are the contribution of this project.

---

## 12. Contact and contributions

Issues, schema feedback, and ambiguity reports are welcome. If a row's ground truth has changed, or its framing is ambiguous under §5.5, open an issue in the relevant repository:

* Dataset (this repo): [`MaYiding/OracleProto` on Hugging Face](https://huggingface.co/datasets/MaYiding/OracleProto/discussions) for row-level questions, ambiguity reports, and label disputes.
* Framework code: [`MaYiding/OracleProto` on GitHub](https://github.com/MaYiding/OracleProto/issues) for evaluator, parser, or harness behaviour.

Row-level reports should include the `id`, the disputed framing, and, where available, a primary source; those are the inputs the curation pipeline needs to update the row in the next release.