Update README
README.md (CHANGED)

@@ -1,3 +1,598 @@
- ---
- license: mit
- ---

---
license: mit
task_categories:
- question-answering
- text-generation
language:
- en
tags:
- forecasting
- benchmark
- llm-evaluation
- reasoning
- temporal-reasoning
- contamination-control
- leakage-control
- prediction
- agent
size_categories:
- n<1K
pretty_name: OracleProto Forecasting Eval Set (Example, v1.0)
---

# OracleProto — Forecasting Evaluation Set (Example, v1.0)

**Dataset:** [`MaYiding/OracleProto`](https://huggingface.co/datasets/MaYiding/OracleProto) ·
**Framework:** [`MaYiding/OracleProto`](https://github.com/MaYiding/OracleProto) ·
**License:** MIT

A small, hand-curated benchmark of **80 real-world forecasting questions** whose ground truths
resolve between **2026-03-12 and 2026-04-14**. The set is the public example release shipped
with the [OracleProto framework](https://github.com/MaYiding/OracleProto) — a reproducible
harness for benchmarking LLM-native forecasting via knowledge-cutoff and temporal-masking
discipline.

The dataset is shipped as **a single SQLite database file** named
`forecast_eval_set_example.db`, which contains **two tables**:

* `forecast_eval_set_example` — the 80 forecasting rows (one row per question).
* `dataset_metadata` — a one-row table holding the canonical prompt-reconstruction recipe
  (prompt template, output formats, agent role, answer-encoding rules, provenance).

The dataset is designed to be the **dataset $`\mathcal{D}`$** in the OracleProto run unit
$`\mathcal{R} = (\mathcal{D}, M, \kappa_M, \delta, T, C, R, \Psi, \phi, \Gamma)`$: every column,
prompt template, and answer-encoding rule below is byte-stable and round-trips through the
reference parser, so a forecasting run on this set is auditable, replayable, and comparable
across models and across calendar years.
| 47 |
+
|
| 48 |
+
> **TL;DR.** 80 yes/no, two-named-entity, and multiple-choice (single-answer / multi-select)
|
| 49 |
+
> questions on real-world events. All ground truths are verified end-to-end via parser
|
| 50 |
+
> round-trip; 0 critical / 0 high / 0 medium ambiguity issues remain. Distributed as a single
|
| 51 |
+
> SQLite database file `forecast_eval_set_example.db` containing tables
|
| 52 |
+
> `forecast_eval_set_example` (rows) and `dataset_metadata` (recipe), so the
|
| 53 |
+
> prompt-reconstruction recipe and per-question metadata stay co-located with the rows.
|
| 54 |
+
|
| 55 |
+
---
|
| 56 |
+
|
## 1. Dataset at a glance

| Field                  | Value                                                                  |
| ---------------------- | ---------------------------------------------------------------------- |
| Schema version         | `v1.0`                                                                 |
| Release date           | `2026-04-29`                                                           |
| Rows                   | 80                                                                     |
| Splits                 | `train` (80) — single split, intended as a held-out evaluation set     |
| Resolution-date range  | `2026-03-12` → `2026-04-14`                                            |
| Question types         | `yes_no`, `binary_named`, `multiple_choice`                            |
| Choice types           | `single` (one correct letter), `multi` (one-or-more correct letters)   |
| Database file          | `forecast_eval_set_example.db` (SQLite 3, ~52 KB)                      |
| Tables in the file     | `forecast_eval_set_example` (80 rows), `dataset_metadata` (1 row)      |
| License                | MIT                                                                    |
| Source upstream        | HuggingFace forecasting questions (levels 1+2 only), heavily curated   |

### Type distribution

| `question_type`     | `choice_type` | Rows |
| ------------------- | ------------- | ---- |
| `yes_no`            | `single`      | 37   |
| `binary_named`      | `single`      | 3    |
| `multiple_choice`   | `single`      | 32   |
| `multiple_choice`   | `multi`       | 8    |
| **Total**           |               | **80** |

`yes_no` is a binary Yes/No question, `binary_named` is a binary choice between two named
entities (e.g. sports teams, fighters, sides), and `multiple_choice` carries ≥3 labelled
options; when `choice_type` is `multi`, more than one option can be correct, and *"None of
the above"* is a valid option. Every question lists the exact option labels, so labels are
the source of truth — letter labels (`A`, `B`, …) are implied by index.
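
As a sanity check, this distribution can be recomputed straight from the shipped database; a minimal sketch using the table and columns documented in §3.1:

```python
import sqlite3

# Recompute the type-distribution table from the shipped database (schema per §3.1).
conn = sqlite3.connect("forecast_eval_set_example.db")
for qt, ct, n in conn.execute(
    "SELECT question_type, choice_type, COUNT(*) "
    "FROM forecast_eval_set_example "
    "GROUP BY question_type, choice_type ORDER BY question_type, choice_type"
):
    print(f"{qt:17s} {ct:7s} {n:3d}")
conn.close()
```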

---

## 2. Files

```text
OracleProto/
├── forecast_eval_set_example.db   # SQLite database file (the dataset; ~52 KB)
├── README.md                      # this file
├── LICENSE                        # MIT
└── .gitattributes                 # standard HF binary attributes
```

The dataset ships as **one SQLite database file** rather than as Parquet/JSONL because the
prompt-reconstruction recipe, the column-level schema, and the per-row provenance live in the
same file (in the `dataset_metadata` table). This keeps the canonical source of truth —
including the byte-stable `prompt_template` used by the evaluator — co-located with the
rows. The file `forecast_eval_set_example.db` contains exactly two tables:

* **`forecast_eval_set_example`** — the 80 forecasting rows. Schema and column semantics in §3.1 / §3.3.
* **`dataset_metadata`** — a single-row table whose `features_json` blob holds the prompt
  template, the four output formats, the outcomes-block rule, the agent-role string, and
  the curation provenance. Schema in §3.2; full recipe rendered in §5.

A loader example for converting to `datasets.Dataset` / Parquet is provided in §6.

---

## 3. Database schema

The database **file** is `forecast_eval_set_example.db`. It contains exactly **two tables**:
`forecast_eval_set_example` (80 forecasting rows) and `dataset_metadata` (1 metadata row).
Note that the database file and the rows table happen to share the same name — the file is
named after its primary table.

### 3.1 Table `forecast_eval_set_example` (the rows)

```sql
CREATE TABLE forecast_eval_set_example (
    id            TEXT PRIMARY KEY,
    choice_type   TEXT NOT NULL CHECK (choice_type IN ('single','multi')),
    question_type TEXT NOT NULL,  -- yes_no | binary_named | multiple_choice
    event         TEXT NOT NULL,  -- the event being predicted
    options       TEXT NOT NULL,  -- JSON array of option labels
    answer        TEXT NOT NULL,  -- canonical correct answer as letter(s)
    end_time      TEXT NOT NULL   -- 'YYYY-MM-DD'
);

CREATE INDEX idx_forecast_eval_set_example_choice_type   ON forecast_eval_set_example(choice_type);
CREATE INDEX idx_forecast_eval_set_example_question_type ON forecast_eval_set_example(question_type);
CREATE INDEX idx_forecast_eval_set_example_end_time      ON forecast_eval_set_example(end_time);
```

### 3.2 Table `dataset_metadata` (the recipe)

A single row recording the dataset name, split, row count, import timestamp, and a JSON
`features_json` blob with the **full prompt-reconstruction recipe** used by the OracleProto
evaluator. This is what makes the run reproducible — the prompt template, the four output
formats, the outcomes-block rule, and the agent role are all here in canonical form.

```sql
CREATE TABLE dataset_metadata (
    dataset_name    TEXT NOT NULL,
    split_name      TEXT NOT NULL,
    table_name      TEXT NOT NULL,
    row_count       INTEGER NOT NULL,
    imported_at_utc TEXT NOT NULL,
    features_json   TEXT NOT NULL  -- see §5
);
```

### 3.3 Column semantics

| Column | Type | Description |
| --- | --- | --- |
| `id` | TEXT | Stable source-side question ID inherited from the upstream HuggingFace forecasting set. Use this as the primary join key. |
| `choice_type` | TEXT | `'single'` if exactly one letter is correct, `'multi'` if one or more letters can be correct. Derived from the number of letters in `answer`. Drives output-format selection (single-answer vs multi-select instructions). |
| `question_type` | TEXT | One of `yes_no`, `binary_named`, `multiple_choice`. Selects which prompt template is rendered (see §5). |
| `event` | TEXT | Natural-language description of the event being predicted. Author-edited for explicit time anchoring, unit explicitness, and unambiguous binary framing. |
| `options` | TEXT | JSON array of option **labels**. For `yes_no` it is fixed to `["Yes","No"]`. For `binary_named` it is two named entities. For `multiple_choice` it is a list of choice labels; the letter is implied by index (A=options[0], B=options[1], …). |
| `answer` | TEXT | Canonical correct answer encoded as **letters**. For `yes_no` and `binary_named` it is a single letter `'A'` or `'B'`. For `multiple_choice` it is a comma-separated letter list in option order, e.g. `'A'` (single) or `'A, B'` (multi). |
| `end_time` | TEXT | Resolution date in `YYYY-MM-DD` format, and the source of truth for the question's resolution day. The dataset stores a date only (no timestamp, no timezone field) — treat a resolution as "occurred during that day" at one-calendar-day granularity. |

### 3.4 Letter-to-index encoding

Letters `A`, `B`, … map to option indices `0`, `1`, … using the rule
`index = ord(letter) - ord('A')`. Beyond the 26-letter alphabet (≥27 options) the labels
land on `[`, `\`, `]`, `^`, `_`, `` ` ``, `a`, `b`, …, i.e. the contiguous ASCII range
starting at `A`. The reference renderer wraps any non-`A`–`Z` label in backticks so the
label survives Markdown rendering. None of the 80 example questions exceed 26 options, but
the encoding scheme is documented for completeness because the evaluator and parser support
it.
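
A minimal sketch of this rule in Python (the helper names below are ours for illustration, not part of the shipped parser):

```python
# Hypothetical helpers illustrating the documented letter<->index rule.
def letter_for_index(i: int) -> str:
    """Option index -> label: 0 -> 'A', 25 -> 'Z', 26 -> '[' (contiguous ASCII)."""
    return chr(ord("A") + i)

def index_for_letter(letter: str) -> int:
    """Label -> option index; inverse of letter_for_index."""
    return ord(letter) - ord("A")

def render_label(i: int) -> str:
    """Wrap labels outside A-Z in backticks, per the reference renderer's rule."""
    letter = letter_for_index(i)
    return letter if "A" <= letter <= "Z" else f"`{letter}`"

assert index_for_letter("B") == 1
assert render_label(26) == "`[`"   # a 27th option falls outside A-Z
```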

---

## 4. Sample rows

A few representative rows (truncated for readability):

```json
{
  "id": "699d9ffc098cca008728b6f0",
  "choice_type": "single",
  "question_type": "yes_no",
  "event": "Will the US PCE annual inflation be greater than 2.9% in January 2026?",
  "options": ["Yes", "No"],
  "answer": "B",
  "end_time": "2026-03-13"
}
```

```json
{
  "id": "69a2e39e5692ef005cdbf2d3",
  "choice_type": "single",
  "question_type": "binary_named",
  "event": "Will US or Israel strike Iran first?",
  "options": ["US", "Israel"],
  "answer": "B",
  "end_time": "2026-03-31"
}
```

```json
{
  "id": "6995b1073ea64b005b11f285",
  "choice_type": "single",
  "question_type": "multiple_choice",
  "event": "Which men's basketball team will win the Big 12 Conference Championship tournament in the 2025-26 season?",
  "options": ["Arizona", "Baylor", "Brigham Young University (BYU)",
              "Houston", "Iowa State", "Kansas", "Kansas State"],
  "answer": "A",
  "end_time": "2026-03-14"
}
```

```json
{
  "id": "698f198bda7a8b006575444c",
  "choice_type": "multi",
  "question_type": "multiple_choice",
  "event": "Which movies will win multiple Oscars? (2026)",
  "options": ["One Battle After Another", "Sinners", "Frankenstein",
              "KPop Demon Hunters", "F1", "Sentimental Value", "Hamnet",
              "Marty Supreme", "The Secret Agent", "Avatar: Fire and Ash",
              "Train Dreams", "Bugonia", "Blue Moon", "It Was Just An Accident"],
  "answer": "A, B, C, D",
  "end_time": "2026-03-15"
}
```

---

## 5. Prompt reconstruction (canonical recipe)

Every row is rendered into a single user message using the recipe stored in
`dataset_metadata.features_json.prompt_reconstruction`. The recipe is byte-stable and is the
source of truth for the OracleProto evaluator; downstream users who reconstruct prompts
themselves should follow it exactly so results stay comparable.

### 5.1 Static fragments

```text
agent_role: "You are an agent that can predict future events."

guidance: "Do not use any other format. Do not refuse to make a prediction.
           Do not say \"I cannot predict the future.\" You must make a clear
           prediction based on the best data currently available, using the
           box format specified above."
```

### 5.2 Master template

The block below is reproduced **verbatim** from the value stored in
`dataset_metadata.features_json.prompt_reconstruction.prompt_template`. It is a literal
string baked into the recipe — the `(GMT+8)` qualifier is part of the template; the dataset
itself does not otherwise carry a timezone field on `end_time` (see §3.3).

```text
{agent_role} The event to be predicted: "{event} (resolved around {end_time} (GMT+8)).{outcomes_block}"

IMPORTANT: Your final answer MUST end with this exact format:
{output_format}
{guidance}
```

### 5.3 `outcomes_block`

* For `yes_no` and `binary_named`: empty string (the option labels are baked into
  `output_format` instead).
* For `multiple_choice`: a leading newline followed by one line per option in `A. <label>`
  form, e.g. `\nA. Arizona\nB. Baylor\nC. Brigham Young University (BYU)\n…`. Labels whose
  derived letter falls outside `A`–`Z` are wrapped in backticks.

### 5.4 `output_format` (one of four, chosen by `question_type` × `choice_type`)

**`yes_no`:**

```text
Your task is to predict whether the event will occur based on your analysis.
Your prediction will be scored based on its accuracy. You will only receive points if your answer is correct.
Your final answer MUST end with this exact format:
\boxed{Yes} or \boxed{No}
```

**`binary_named`** (the literals `<options[0]>` and `<options[1]>` are replaced by the two
named entities from `options`):

```text
Your task is to predict which of the two outcomes will occur based on your analysis.
Your prediction will be scored based on its accuracy. You will only receive points if your answer is correct.
Your final answer MUST end with this exact format:
\boxed{<options[0]>} or \boxed{<options[1]>}
```

**`multiple_choice` with `choice_type='single'`:**

```text
This is a SINGLE-ANSWER question: exactly ONE of the listed options is correct.
Your prediction will be scored on strict equality with the unique correct letter; choosing the wrong letter, or selecting more than one letter, scores zero.
Your final answer MUST end with this exact format:
the single correct letter inside the box, e.g. \boxed{A}.
Do NOT list more than one letter, even if you believe two outcomes are tied — pick the one you find most likely.
```

**`multiple_choice` with `choice_type='multi'`:**

```text
This is a MULTI-SELECT question: ONE OR MORE of the listed options can be correct.
Your prediction will be scored on strict equality with the FULL set of correct letters: any extra letter, any missing letter, or any wrong letter scores zero. You must include ALL correct options and NO incorrect options.
Your final answer MUST end with this exact format:
listing all correct option(s) you have identified, separated by commas, within the box.
For example: \boxed{A} for a single correct option, or \boxed{B, C} for multiple correct options.
```

### 5.5 Answer parsing

A reference parser is shipped with the OracleProto framework
([`forecast_eval/parser.py::parse_answer`](https://github.com/MaYiding/OracleProto/blob/main/forecast_eval/parser.py)).
The rules are:

1. Take the **last** `\boxed{...}` substring in the model's reply (everything else is
   reasoning / scratchpad and is ignored).
2. For `yes_no`: `Yes` (case-insensitive) → `A`, `No` → `B`. Anything else → unparsed.
3. For `binary_named`: case-insensitive match of the boxed payload against `options[0]` or
   `options[1]`. Anything else → unparsed.
4. For `multiple_choice`: split the boxed payload on commas/whitespace, validate that each
   token is exactly one letter, and check that each letter resolves to a valid option index.
   Out-of-range letters or multi-character tokens → unparsed.
5. Predictions are scored by **strict set equality** against the canonical letter set
   parsed from `answer`. A missing or unparsed boxed answer is recorded as `parse_ok = 0`
   and is **not** an error of the parser — the run records it and moves on.

> The parser is the formal answer-validator $`\Psi`$ in the OracleProto run unit. Re-using
> it (rather than rolling your own regex) is the easiest way to get bit-identical scores
> across implementations.
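
For readers who cannot vendor the reference parser, a minimal sketch of rules 1–5 follows. This is our own illustration, not the shipped `parse_answer`; the real implementation may differ in edge cases:

```python
import re

# Illustrative sketch of §5.5 rules 1-5; not the shipped parse_answer.
BOXED = re.compile(r"\\boxed\{([^{}]*)\}")

def parse_letters(reply, question_type, options):
    """Return the predicted letter set, or None when unparsed (parse_ok = 0)."""
    matches = BOXED.findall(reply)
    if not matches:
        return None
    payload = matches[-1].strip()               # rule 1: last \boxed{...} wins
    if question_type == "yes_no":               # rule 2
        return {"yes": {"A"}, "no": {"B"}}.get(payload.lower())
    if question_type == "binary_named":         # rule 3
        for i, label in enumerate(options[:2]):
            if payload.lower() == label.lower():
                return {chr(ord("A") + i)}
        return None
    letters = set()                             # rule 4: multiple_choice
    for tok in re.split(r"[,\s]+", payload):
        if not tok:
            continue
        if len(tok) != 1 or not 0 <= ord(tok) - ord("A") < len(options):
            return None                         # multi-char or out-of-range -> unparsed
        letters.add(tok)
    return letters or None

# Rule 5: score by strict set equality against the canonical letter set.
def correct(pred, answer):
    return pred == {s.strip() for s in answer.split(",")}
```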

---

## 6. Loading the dataset

### 6.1 With raw `sqlite3` (no extra deps)

```python
import sqlite3
import json

conn = sqlite3.connect("forecast_eval_set_example.db")
conn.row_factory = sqlite3.Row

# Read the rows.
rows = conn.execute("SELECT * FROM forecast_eval_set_example").fetchall()
print(f"loaded {len(rows)} rows")
sample = dict(rows[0])
sample["options"] = json.loads(sample["options"])  # JSON-decode option list
print(sample)

# Read the canonical prompt-reconstruction recipe.
meta_row = conn.execute("SELECT features_json FROM dataset_metadata").fetchone()
meta = json.loads(meta_row["features_json"])
prompt_template = meta["prompt_reconstruction"]["prompt_template"]
print(prompt_template)
```

### 6.2 With `huggingface_hub`

```python
from huggingface_hub import hf_hub_download
import sqlite3, json

db_path = hf_hub_download(
    repo_id="MaYiding/OracleProto",
    filename="forecast_eval_set_example.db",
    repo_type="dataset",
)
conn = sqlite3.connect(db_path)
rows = conn.execute("SELECT * FROM forecast_eval_set_example").fetchall()
```

### 6.3 Convert to a `datasets.Dataset`

```python
import sqlite3, json
from datasets import Dataset

conn = sqlite3.connect("forecast_eval_set_example.db")
cur = conn.execute("SELECT * FROM forecast_eval_set_example")
cols = [c[0] for c in cur.description]

def _row(r):
    d = dict(zip(cols, r))
    d["options"] = json.loads(d["options"])  # list[str]
    d["answer_letters"] = [
        s.strip() for s in d["answer"].split(",") if s.strip()
    ]  # list[str]
    return d

ds = Dataset.from_list([_row(r) for r in cur.fetchall()])
print(ds)
print(ds[0])
```

### 6.4 Render a prompt (minimal, faithful to the canonical recipe)

```python
def render_prompt(row, meta):
    # `row["options"]` must already be JSON-decoded into a list (see §6.3).
    rcp = meta["prompt_reconstruction"]
    options = row["options"]
    qt, ct = row["question_type"], row["choice_type"]

    if qt == "yes_no":
        outcomes_block = ""
        out_fmt = rcp["yes_no_output_format"]
    elif qt == "binary_named":
        outcomes_block = ""
        out_fmt = (
            rcp["binary_named_output_format"]
            .replace("<options[0]>", options[0])
            .replace("<options[1]>", options[1])
        )
    elif qt == "multiple_choice":
        outcomes_block = "\n" + "\n".join(
            f"{chr(ord('A') + i)}. {label}" for i, label in enumerate(options)
        )
        key = (
            "multiple_choice_single_output_format" if ct == "single"
            else "multiple_choice_multi_output_format"
        )
        out_fmt = rcp[key]
    else:
        raise ValueError(qt)

    return rcp["prompt_template"].format(
        agent_role=rcp["agent_role"],
        event=row["event"],
        end_time=row["end_time"],
        outcomes_block=outcomes_block,
        output_format=out_fmt,
        guidance=rcp["guidance"],
    )
```

> The full reference renderer (with the >26-option backtick rule and an optional reflection
> / belief-elicitation tail) lives at
> [`forecast_eval/prompts.py`](https://github.com/MaYiding/OracleProto/blob/main/forecast_eval/prompts.py)
> in the OracleProto framework. Re-using it gives byte-identical prompts.
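
Putting the snippets together (this assumes the `meta` dict from §6.1 and the decoded dataset `ds` from §6.3 are already in scope):

```python
# Assumes `meta` from §6.1 and the decoded dataset `ds` from §6.3.
print(render_prompt(ds[0], meta))
```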

---

## 7. Recommended evaluation protocol

This dataset is meant to be paired with the **OracleProto** evaluation harness, which adds
information-boundary discipline on top of the bare prompt-and-score loop. The headline
recommendations are:

1. **Declare a knowledge cutoff $`\kappa_M`$ for every model.** OracleProto admits a question
   for model $`M`$ only when its prediction cutoff $`\chi_i`$ satisfies
   $`\kappa_M \le \chi_i < \tau_i`$, where $`\tau_i`$ is the resolution time. Inadmissible
   questions are filtered upstream (not counted as model errors). This separates *"the model
   failed to forecast"* from *"the model already knew the answer"*. Models with no declared
   cutoff cannot be fairly compared to those with one. (A sketch of this rule follows the
   list.)

2. **Time-mask any retrieval / browsing tool.** If your harness lets the model issue web
   searches (e.g. via Tavily), pin the search-side `end_date` to $`\chi_i + \delta`$ with a
   conservative offset (OracleProto defaults to $`\delta = -1`$ day). This is the L2
   "tool-mediated" leakage barrier.

3. **Run an independent retrieval-content auditor.** Each retrieved snippet is passed to a
   separate LLM auditor that decides whether the snippet leaks the resolution. This is the
   L3 "retrieval-content" barrier in the OracleProto threat model.

4. **Forbid provider-native browsing.** OracleProto refuses model slugs ending in `:online`
   and similar hosted-browsing variants on three layers (config validation, on-the-wire
   client, and detector client) — the L4 residual check that *must* pass before any billable
   LLM call leaves the process.

5. **Score with strict set equality on letter sets** (the parser semantics in §5.5). Optional
   probability-calibration metrics (Brier / NLL / ECE / Murphy decomposition) are supported
   when the model emits an additional `<belief>{ ... }</belief>` JSON block per the v4
   belief protocol; the schema is documented in
   [`forecast_eval/prompts.py::BELIEF_PROTOCOL`](https://github.com/MaYiding/OracleProto/blob/main/forecast_eval/prompts.py).
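
A minimal sketch of the admissibility check (item 1) and the search-date pin (item 2), with dates that are our own illustrative examples rather than the harness's actual API:

```python
from datetime import date, timedelta

# Illustrative values only: KAPPA_M is model M's declared knowledge cutoff,
# chi the prediction cutoff, tau the resolution time, DELTA the -1 day offset.
KAPPA_M = date(2025, 10, 1)
DELTA = timedelta(days=-1)

def admissible(chi: date, tau: date) -> bool:
    """Item 1: admit a question for model M iff kappa_M <= chi < tau."""
    return KAPPA_M <= chi < tau

def search_end_date(chi: date) -> date:
    """Item 2: pin the retrieval tool's end_date to chi + delta."""
    return chi + DELTA

assert admissible(date(2026, 3, 1), date(2026, 3, 13))
print(search_end_date(date(2026, 3, 1)))   # -> 2026-02-28
```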

If you run *without* OracleProto, treat the numbers as **upper bounds on forecasting
ability**: any model that can browse the open web or that was trained past a question's
`end_time` may have memorised the answer. The dataset is designed to make this honesty audit
*possible*; it does not enforce it on its own.

---

## 8. Provenance and curation

* **Source.** Upstream HuggingFace forecasting questions, restricted to *levels 1+2* only
  (the easier two of the upstream difficulty bands). The raw harvest yielded 322 candidate
  questions.
* **Curation pipeline (5 passes).**
  1. Source-side broken-row removal and column flattening.
  2. `end_time` / answer-encoding / option-label normalization (`end_time` reduced to a
     `YYYY-MM-DD` calendar date; `Yes/No` mapped to `A/B`; option labels stripped of stray
     markdown).
  3. Down-sampling 322 → 200 → 100 → 80 with placeholder/noise removal, deduplication, and
     an ambiguity audit.
  4. Final HIGH+MEDIUM ambiguity remediation: 4 rows reworded for explicit time anchoring,
     unit explicitness, and unambiguous binary framing.
  5. CRITICAL fix on one S&P 500 multi-select truth set so it satisfies the
     monotonic-threshold logic implied by the option ladder.
* **Verification.** All 80 ground truths verified end-to-end by parser round-trip (the
  rendered prompt is parsed and re-encoded back to the canonical letter set). Final tally:
  **0 critical / 0 high / 0 medium ambiguity issues remaining**.

---

## 9. Intended uses and limitations

### 9.1 Intended uses

* **Forecasting benchmark for LLMs and LLM agents** — particularly tool-using agents that
  combine parametric knowledge with time-masked web retrieval.
* **Reproducibility testbed for forecasting harnesses** — the `dataset_metadata` table makes
  every prompt byte-stable; pair it with the OracleProto framework to get a run unit that
  yields bit-identical scoring artefacts when the configuration matches.
* **Calibration / proper-scoring research** — the 80-row size is deliberately small so
  per-question analysis (belief evolution, source attribution, calibration plots) is
  tractable.

### 9.2 Out-of-scope uses

* **Training data.** Do not include the rows in any training, fine-tuning, or RLHF corpus;
  doing so contaminates downstream forecasting evaluations of the trained model. The dataset
  is an **evaluation-only** artefact.
* **Long-horizon forecasting.** All resolutions land in a one-month window
  (2026-03-12 → 2026-04-14). The set is *not* representative of multi-quarter or
  multi-year forecasting tasks.
* **Open-ended generation.** Every question has a closed answer set; this is not a
  generation benchmark.

### 9.3 Known limitations and biases

* **Sample size.** 80 rows is small. Confidence intervals on accuracy / Brier are wide; we
  recommend reporting them alongside point estimates and using paired tests when comparing
  models on the same set.
* **Topical skew.** The questions are concentrated in finance / macro indicators, sports
  events, awards (Oscars, NBA, UEFA, etc.), and US-centric political and geopolitical
  events — reflecting the upstream HuggingFace forecasting market mix. They are **not** a
  globally representative sample of forecastable events.
* **English-only.** All `event` and `options` strings are English.
* **Date-only resolution.** `end_time` is a *date*, not a timestamp, and the dataset does
  not carry a timezone field. If you need finer-grained admissibility, treat resolutions
  conservatively as "occurred any time during that calendar day".
* **Provider-side residual leakage (L4 channel).** Any LLM that has ingested the upstream
  HuggingFace dataset, or that was trained past the resolution window, can recover ground
  truths from parametric memory. The dataset cannot patch this on its own — it relies on the
  harness to enforce admissibility ($`\kappa_M`$).
* **Snapshot of a moving label space.** A few questions ("none of the above", "all of the
  above") interact non-trivially with multi-select scoring; the curation pass fixed the one
  S&P 500 case but the convention for similar questions in future revisions may shift. Pin
  to the schema-version field if you need byte-stable behaviour across releases.

---

## 10. Versioning

* **`v1.0` (2026-04-29)** — initial public example release. 80 rows; resolution dates
  2026-03-12 → 2026-04-14; pipeline passes 1–5 above; 0 critical / 0 high / 0 medium
  ambiguity issues remaining.

The schema version is recorded inside the database
(`dataset_metadata.features_json.schema_version`), so consumers can pin against it without
re-deriving it from the file's hash.
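
A minimal sketch of pinning (this assumes, per the path above, that the `schema_version` key sits at the top level of `features_json`):

```python
import sqlite3, json

# Pin against the recorded schema version rather than the file hash.
conn = sqlite3.connect("forecast_eval_set_example.db")
features = json.loads(
    conn.execute("SELECT features_json FROM dataset_metadata").fetchone()[0]
)
assert features["schema_version"] == "v1.0", features["schema_version"]
```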

---

## 11. License

The dataset is released under the **MIT License** (see `LICENSE`). You are free to use,
copy, modify, and redistribute it, including for commercial purposes, provided the copyright
notice and license text are preserved.

The upstream questions originate from a public HuggingFace forecasting set; the curation
work, the schema, the prompt-reconstruction recipe, and the answer encodings in this release
are the contribution of this project.

---

## 12. Contact and contributions

Issues, schema feedback, and ambiguity reports are welcome. If you find a row whose ground
truth has changed, or whose framing is ambiguous under the §5.5 parser, please open an issue
in either of the project repositories:

* Dataset (this repo): [`MaYiding/OracleProto` on Hugging Face](https://huggingface.co/datasets/MaYiding/OracleProto/discussions) — for row-level questions, ambiguity reports, and label disputes.
* Framework code: [`MaYiding/OracleProto` on GitHub](https://github.com/MaYiding/OracleProto/issues) — for evaluator, parser, or harness behaviour.

When reporting a row-level issue, please include the `id`, the disputed framing, and (if
available) a primary source — those are the inputs the curation pipeline needs to update
the row for the next release.