MaYiding committed on
Commit 3844fd9 · 1 Parent(s): eb3a5cf

Update README

Files changed (2):
  1. README-ZH.md +427 -0
  2. README.md +3 -0
README-ZH.md ADDED
@@ -0,0 +1,427 @@
# OracleProto: Forecasting Evaluation Set

**Dataset:** [`Hugging Face`](https://huggingface.co/datasets/MaYiding/OracleProto)

**GitHub Repo:** [`GitHub`](https://github.com/MaYiding/OracleProto)

**Chinese Doc:** [`中文文档`](https://huggingface.co/datasets/MaYiding/OracleProto/blob/main/README-ZH.md)

**License:** MIT

A SQLite-packaged evaluation set of 80 hand-curated forecasting questions on real-world events, with resolution dates between 2026-03-12 and 2026-04-14, released alongside the [GitHub Repo](https://github.com/MaYiding/OracleProto). Both the rows and the byte-stable prompt-reconstruction recipe ship inside a single file, `forecast_eval_set_example.db`, which holds two tables: `forecast_eval_set_example` (the 80 rows) and `dataset_metadata` (the recipe).

---

## 1. Dataset at a glance

| Field                 | Value                                                                 |
| --------------------- | --------------------------------------------------------------------- |
| Release date          | `2026-04-29`                                                          |
| Rows                  | 80                                                                    |
| Splits                | `train` (80); single split, intended as a held-out evaluation set     |
| Resolution-date range | `2026-03-12` → `2026-04-14`                                           |
| Question types        | `yes_no`, `binary_named`, `multiple_choice`                           |
| Choice types          | `single` (one correct letter), `multi` (one-or-more correct letters)  |
| Database file         | `forecast_eval_set_example.db` (SQLite 3, ~52 KB)                     |
| Tables in the file    | `forecast_eval_set_example` (80 rows), `dataset_metadata` (1 row)     |
| License               | MIT                                                                   |
| Source upstream       | HuggingFace forecasting questions (levels 1+2), 322 raw → 80 curated  |

### Type distribution

| `question_type`   | `choice_type` | Rows   |
| ----------------- | ------------- | ------ |
| `yes_no`          | `single`      | 37     |
| `binary_named`    | `single`      | 3      |
| `multiple_choice` | `single`      | 32     |
| `multiple_choice` | `multi`       | 8      |
| **Total**         |               | **80** |

`yes_no` is binary Yes/No; `binary_named` is binary between two named entities such as sports teams, fighters, or sides; `multiple_choice` carries at least three labelled options with one or more correct letters allowed, and "None of the above" is a valid answer when listed. Each row stores the exact option labels; letter `A` maps to `options[0]`, `B` to `options[1]`, and so on (§3.4 covers labels beyond `Z`).

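The distribution above can be re-derived with a single `GROUP BY`. A sketch against an in-memory fixture (three illustrative rows, not the real data); pointing `sqlite3.connect` at `forecast_eval_set_example.db` instead reproduces the table exactly:

```python
import sqlite3

# Illustrative in-memory fixture mirroring the forecast_eval_set_example schema;
# swap ":memory:" for "forecast_eval_set_example.db" to query the real rows.
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE forecast_eval_set_example (
        id TEXT PRIMARY KEY, choice_type TEXT NOT NULL,
        question_type TEXT NOT NULL, event TEXT NOT NULL,
        options TEXT NOT NULL, answer TEXT NOT NULL, end_time TEXT NOT NULL
    )
""")
conn.executemany(
    "INSERT INTO forecast_eval_set_example VALUES (?,?,?,?,?,?,?)",
    [
        ("q1", "single", "yes_no", "e1", '["Yes","No"]', "A", "2026-03-12"),
        ("q2", "single", "multiple_choice", "e2", '["x","y","z"]', "B", "2026-03-20"),
        ("q3", "multi", "multiple_choice", "e3", '["x","y","z"]', "A, C", "2026-04-01"),
    ],
)

# The same GROUP BY against the shipped file reproduces the §1 table.
dist = conn.execute("""
    SELECT question_type, choice_type, COUNT(*) AS n
    FROM forecast_eval_set_example
    GROUP BY question_type, choice_type
    ORDER BY question_type, choice_type
""").fetchall()
print(dist)
```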
---

## 2. Files

```text
OracleProto/
├── forecast_eval_set_example.db   # SQLite database file (the dataset; ~52 KB)
├── README.md                      # this file
├── LICENSE                        # MIT
└── .gitattributes                 # standard HF binary attributes
```

The dataset ships as one SQLite file, not Parquet or JSONL, because the prompt-reconstruction recipe and per-row provenance live in the same file as the rows (in `dataset_metadata.features_json`). A loader for `datasets.Dataset` and Parquet conversion appears in §6.

---

## 3. Database schema

Two tables: `forecast_eval_set_example` holds the 80 rows; `dataset_metadata` holds the canonical recipe. The file takes its name from the primary table.

### 3.1 Table `forecast_eval_set_example` (the rows)

```sql
CREATE TABLE forecast_eval_set_example (
    id            TEXT PRIMARY KEY,
    choice_type   TEXT NOT NULL CHECK (choice_type IN ('single','multi')),
    question_type TEXT NOT NULL,  -- yes_no | binary_named | multiple_choice
    event         TEXT NOT NULL,  -- the event being predicted
    options       TEXT NOT NULL,  -- JSON array of option labels
    answer        TEXT NOT NULL,  -- canonical correct answer as letter(s)
    end_time      TEXT NOT NULL   -- 'YYYY-MM-DD'
);

CREATE INDEX idx_forecast_eval_set_example_choice_type   ON forecast_eval_set_example(choice_type);
CREATE INDEX idx_forecast_eval_set_example_question_type ON forecast_eval_set_example(question_type);
CREATE INDEX idx_forecast_eval_set_example_end_time      ON forecast_eval_set_example(end_time);
```

### 3.2 Table `dataset_metadata` (the recipe)

A one-row table whose `features_json` blob carries the prompt template, the four output formats, the outcomes-block rule, the agent-role string, and curation provenance. The full recipe is rendered in §5.

```sql
CREATE TABLE dataset_metadata (
    dataset_name    TEXT NOT NULL,
    split_name      TEXT NOT NULL,
    table_name      TEXT NOT NULL,
    row_count       INTEGER NOT NULL,
    imported_at_utc TEXT NOT NULL,
    features_json   TEXT NOT NULL
);
```

### 3.3 Column semantics

| Column | Type | Description |
| --------------- | ---- | ----------- |
| `id` | TEXT | Stable source-side question ID inherited from the upstream HuggingFace forecasting set; primary join key. |
| `choice_type` | TEXT | `'single'` if exactly one letter is correct, `'multi'` if one-or-more letters can be correct. Derived from the number of letters in `answer`. Drives the single-answer vs multi-select branch in §5.4. |
| `question_type` | TEXT | One of `yes_no`, `binary_named`, `multiple_choice`. Selects which prompt template is rendered (§5). |
| `event` | TEXT | Natural-language description of the event being predicted, author-edited for explicit time anchoring, unit explicitness, and unambiguous binary framing. |
| `options` | TEXT | JSON array of option labels. For `yes_no` it is fixed to `["Yes","No"]`. For `binary_named` it is two named entities. For `multiple_choice` it is a list of choice labels whose letter is implied by index (`A=options[0]`, `B=options[1]`, …). |
| `answer` | TEXT | Canonical correct answer encoded as letters. For `yes_no` and `binary_named` it is `'A'` or `'B'`. For `multiple_choice` it is a comma-separated letter list in option order, e.g. `'A'` or `'A, B'`. |
| `end_time` | TEXT | Resolution date in `YYYY-MM-DD`. The column stores a calendar date only; the prompt template (§5.2) supplies the GMT+8 reading. If finer-grained admissibility is needed, treat each resolution as covering the whole calendar day. |

### 3.4 Letter-to-index encoding

Letters map to option indices via `index = ord(letter) - ord('A')`. Beyond `Z` (≥27 options) the labels run on as `[`, `\`, `]`, `^`, `_`, `` ` ``, `a`, `b`, …, the contiguous ASCII range starting at `A`. The reference renderer wraps any non-`A`–`Z` label in backticks so it survives Markdown rendering. None of the 80 rows exceeds 26 options, but the encoding is documented because the framework's parser supports it.

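The encoding is mechanical enough to sketch; `label_for`, `index_for`, and `display_label` are hypothetical helper names, not the framework's API:

```python
def label_for(index: int) -> str:
    # 0 -> 'A', 25 -> 'Z', then the contiguous ASCII run: 26 -> '[', 32 -> 'a', ...
    return chr(ord("A") + index)

def index_for(letter: str) -> int:
    # index = ord(letter) - ord('A'), per the rule above
    return ord(letter) - ord("A")

def display_label(index: int) -> str:
    # Mirrors the documented renderer rule: wrap non-A-Z labels in backticks
    # so they survive Markdown rendering.
    label = label_for(index)
    return label if "A" <= label <= "Z" else f"`{label}`"

print([label_for(i) for i in (0, 25, 26, 30, 32)])  # ['A', 'Z', '[', '_', 'a']
```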
---

## 4. Sample rows

```json
{
  "id": "699d9ffc098cca008728b6f0",
  "choice_type": "single",
  "question_type": "yes_no",
  "event": "Will the US PCE annual inflation be greater than 2.9% in January 2026?",
  "options": ["Yes", "No"],
  "answer": "B",
  "end_time": "2026-03-13"
}
```

```json
{
  "id": "69a2e39e5692ef005cdbf2d3",
  "choice_type": "single",
  "question_type": "binary_named",
  "event": "Will US or Israel strike Iran first?",
  "options": ["US", "Israel"],
  "answer": "B",
  "end_time": "2026-03-31"
}
```

```json
{
  "id": "6995b1073ea64b005b11f285",
  "choice_type": "single",
  "question_type": "multiple_choice",
  "event": "Which men's basketball team will win the Big 12 Conference Championship tournament in the 2025-26 season?",
  "options": ["Arizona", "Baylor", "Brigham Young University (BYU)",
              "Houston", "Iowa State", "Kansas", "Kansas State"],
  "answer": "A",
  "end_time": "2026-03-14"
}
```

```json
{
  "id": "698f198bda7a8b006575444c",
  "choice_type": "multi",
  "question_type": "multiple_choice",
  "event": "Which movies will win multiple Oscars? (2026)",
  "options": ["One Battle After Another", "Sinners", "Frankenstein",
              "KPop Demon Hunters", "F1", "Sentimental Value", "Hamnet",
              "Marty Supreme", "The Secret Agent", "Avatar: Fire and Ash",
              "Train Dreams", "Bugonia", "Blue Moon", "It Was Just An Accident"],
  "answer": "A, B, C, D",
  "end_time": "2026-03-15"
}
```

---

## 5. Prompt reconstruction (canonical recipe)

Every row is rendered into a single user message via the recipe stored in `dataset_metadata.features_json.prompt_reconstruction`. The recipe is byte-stable and is the source of truth for the OracleProto evaluator; downstream users who reconstruct prompts themselves should follow it exactly so results stay comparable.

### 5.1 Static fragments

```text
agent_role: "You are an agent that can predict future events."

guidance: "Do not use any other format. Do not refuse to make a prediction.
           Do not say \"I cannot predict the future.\" You must make a clear
           prediction based on the best data currently available, using the
           box format specified above."
```

### 5.2 Master template

```text
{agent_role} The event to be predicted: "{event} (resolved around {end_time} (GMT+8)).{outcomes_block}"

IMPORTANT: Your final answer MUST end with this exact format:
{output_format}
{guidance}
```

The literal `(GMT+8)` inside the user-visible string is what gives `end_time` its timezone reading; the column itself stores only a date.

### 5.3 `outcomes_block`

For `yes_no` and `binary_named`: empty, since the option labels are baked into `output_format`.
For `multiple_choice`: a leading newline followed by one line per option in `A. <label>` form, e.g. `\nA. Arizona\nB. Baylor\nC. Brigham Young University (BYU)\n…`. Labels whose derived letter falls outside `A`–`Z` are wrapped in backticks.
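The two branches above can be sketched as a small builder (hypothetical helper name; the reference renderer in `forecast_eval/prompts.py` remains the source of truth):

```python
def build_outcomes_block(question_type, options):
    # yes_no / binary_named: labels are baked into output_format, so empty.
    if question_type in ("yes_no", "binary_named"):
        return ""
    lines = []
    for i, label in enumerate(options):
        letter = chr(ord("A") + i)  # contiguous ASCII run past 'Z', per §3.4
        if not ("A" <= letter <= "Z"):
            letter = f"`{letter}`"  # documented backtick rule for >26 options
        lines.append(f"{letter}. {label}")
    return "\n" + "\n".join(lines)

print(repr(build_outcomes_block("multiple_choice", ["Arizona", "Baylor"])))
```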

### 5.4 `output_format` (one of four, chosen by `question_type` × `choice_type`)

**`yes_no`:**
```text
Your task is to predict whether the event will occur based on your analysis.
Your prediction will be scored based on its accuracy. You will only receive points if your answer is correct.
Your final answer MUST end with this exact format:
\boxed{Yes} or \boxed{No}
```

**`binary_named`** (the literals `<options[0]>` and `<options[1]>` are replaced by the two named entities from `options`):
```text
Your task is to predict which of the two outcomes will occur based on your analysis.
Your prediction will be scored based on its accuracy. You will only receive points if your answer is correct.
Your final answer MUST end with this exact format:
\boxed{<options[0]>} or \boxed{<options[1]>}
```

**`multiple_choice` with `choice_type='single'`:**
```text
This is a SINGLE-ANSWER question: exactly ONE of the listed options is correct.
Your prediction will be scored on strict equality with the unique correct letter; choosing the wrong letter, or selecting more than one letter, scores zero.
Your final answer MUST end with this exact format:
the single correct letter inside the box, e.g. \boxed{A}.
Do NOT list more than one letter, even if you believe two outcomes are tied — pick the one you find most likely.
```

**`multiple_choice` with `choice_type='multi'`:**
```text
This is a MULTI-SELECT question: ONE OR MORE of the listed options can be correct.
Your prediction will be scored on strict equality with the FULL set of correct letters: any extra letter, any missing letter, or any wrong letter scores zero. You must include ALL correct options and NO incorrect options.
Your final answer MUST end with this exact format:
listing all correct option(s) you have identified, separated by commas, within the box.
For example: \boxed{A} for a single correct option, or \boxed{B, C} for multiple correct options.
```

### 5.5 Answer parsing

The reference parser ([`forecast_eval/parser.py::parse_answer`](https://github.com/MaYiding/OracleProto/blob/main/forecast_eval/parser.py)) applies these rules:

1. Take the **last** `\boxed{...}` substring in the model's reply; everything else is reasoning or scratchpad and is ignored.
2. For `yes_no` (case-insensitive): `Yes` → `A`, `No` → `B`. Anything else is unparsed.
3. For `binary_named` (case-insensitive): match the boxed payload against `options[0]` or `options[1]`. Anything else is unparsed.
4. For `multiple_choice`: split the boxed payload on commas and whitespace, validate that each token is a single letter, and check that each letter resolves to a valid option index. Out-of-range letters or multi-character tokens are unparsed.
5. Score by strict set equality against the canonical letter set parsed from `answer`. A missing or unparsed boxed answer is recorded as `parse_ok = 0` rather than treated as a parser error; the run records it and moves on.

Reusing the framework's parser is the practical way to get bit-identical scores across implementations.
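The five rules condense into a short sketch (`parse_boxed` and `score` are illustrative names, and the regex is a simplification of the reference parser's extraction):

```python
import re

def parse_boxed(reply, question_type, options):
    """Sketch of rules 1-4; returns a letter set, or None when unparsed."""
    # Rule 1: only the LAST \boxed{...} substring counts.
    boxed = re.findall(r"\\boxed\{([^{}]*)\}", reply)
    if not boxed:
        return None
    payload = boxed[-1].strip()

    if question_type == "yes_no":        # Rule 2 (case-insensitive)
        return {"yes": {"A"}, "no": {"B"}}.get(payload.lower())
    if question_type == "binary_named":  # Rule 3 (case-insensitive)
        for i, opt in enumerate(options[:2]):
            if payload.lower() == opt.lower():
                return {chr(ord("A") + i)}
        return None

    # Rule 4: split on commas/whitespace; every token must be one in-range letter.
    letters = set()
    for tok in (t for t in re.split(r"[,\s]+", payload) if t):
        if len(tok) != 1 or not 0 <= ord(tok) - ord("A") < len(options):
            return None
        letters.add(tok)
    return letters or None

def score(parsed, answer):
    # Rule 5: strict set equality; unparsed replies score 0 (parse_ok = 0).
    gold = {s.strip() for s in answer.split(",") if s.strip()}
    return int(parsed == gold) if parsed is not None else 0
```

Round-tripping the sample rows in §4 through these two functions is a quick sanity check before a full run.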

---

## 6. Loading the dataset

### 6.1 With raw `sqlite3` (no extra deps)

```python
import sqlite3
import json

conn = sqlite3.connect("forecast_eval_set_example.db")
conn.row_factory = sqlite3.Row

# Read the rows.
rows = conn.execute("SELECT * FROM forecast_eval_set_example").fetchall()
print(f"loaded {len(rows)} rows")
sample = dict(rows[0])
sample["options"] = json.loads(sample["options"])  # JSON-decode option list
print(sample)

# Read the canonical prompt-reconstruction recipe.
meta_row = conn.execute("SELECT features_json FROM dataset_metadata").fetchone()
meta = json.loads(meta_row["features_json"])
prompt_template = meta["prompt_reconstruction"]["prompt_template"]
print(prompt_template)
```

### 6.2 With `huggingface_hub`

```python
from huggingface_hub import hf_hub_download
import sqlite3

db_path = hf_hub_download(
    repo_id="MaYiding/OracleProto",
    filename="forecast_eval_set_example.db",
    repo_type="dataset",
)
conn = sqlite3.connect(db_path)
rows = conn.execute("SELECT * FROM forecast_eval_set_example").fetchall()
```

### 6.3 Convert to a `datasets.Dataset`

```python
import sqlite3, json
from datasets import Dataset

conn = sqlite3.connect("forecast_eval_set_example.db")
cur = conn.execute("SELECT * FROM forecast_eval_set_example")
cols = [c[0] for c in cur.description]

def _row(r):
    d = dict(zip(cols, r))
    d["options"] = json.loads(d["options"])  # list[str]
    d["answer_letters"] = [
        s.strip() for s in d["answer"].split(",") if s.strip()
    ]  # list[str]
    return d

ds = Dataset.from_list([_row(r) for r in cur.fetchall()])
print(ds)
print(ds[0])
```

### 6.4 Render a prompt (minimal, faithful to the canonical recipe)

```python
def render_prompt(row, meta):
    rcp = meta["prompt_reconstruction"]
    options = row["options"]
    qt, ct = row["question_type"], row["choice_type"]

    if qt == "yes_no":
        outcomes_block = ""
        out_fmt = rcp["yes_no_output_format"]
    elif qt == "binary_named":
        outcomes_block = ""
        out_fmt = (
            rcp["binary_named_output_format"]
            .replace("<options[0]>", options[0])
            .replace("<options[1]>", options[1])
        )
    elif qt == "multiple_choice":
        outcomes_block = "\n" + "\n".join(
            f"{chr(ord('A') + i)}. {label}" for i, label in enumerate(options)
        )
        key = (
            "multiple_choice_single_output_format" if ct == "single"
            else "multiple_choice_multi_output_format"
        )
        out_fmt = rcp[key]
    else:
        raise ValueError(qt)

    return rcp["prompt_template"].format(
        agent_role=rcp["agent_role"],
        event=row["event"],
        end_time=row["end_time"],
        outcomes_block=outcomes_block,
        output_format=out_fmt,
        guidance=rcp["guidance"],
    )
```

The full reference renderer (with the >26-option backtick rule and an optional reflection / belief-elicitation tail) lives at [`forecast_eval/prompts.py`](https://github.com/MaYiding/OracleProto/blob/main/forecast_eval/prompts.py); reusing it gives byte-identical prompts.
---

## 7. Recommended evaluation protocol

Pair the dataset with the OracleProto evaluation harness, which layers information-boundary discipline on top of the bare prompt-and-score loop. Five concrete recommendations:

1. **Declare a knowledge cutoff $\kappa_M$ for every model.** A question is admissible for model $M$ only when $\kappa_M \le \chi_i < \tau_i$, where $\chi_i$ is the per-question prediction cutoff and $\tau_i$ is its resolution date. Inadmissible questions are filtered upstream rather than counted as model errors. A model with no declared cutoff cannot be fairly compared to one that has one.

2. **Time-mask any retrieval or browsing tool.** If the harness lets the model issue web searches, pin the search-side `end_date` to $\chi_i + \delta$ with a conservative offset; OracleProto defaults to $\delta = -1$ day. The mechanism behind this barrier (L2) is documented in the framework's DESIGN and FRAME notes.

3. **Run an independent retrieval-content auditor.** Each retrieved snippet is passed to a separate LLM auditor that decides whether the snippet leaks the resolution. This is the L3 barrier in the framework's threat model.

4. **Forbid provider-native browsing.** OracleProto refuses model slugs ending in `:online` and similar hosted-browsing variants on three layers: config validation, on-the-wire client, and detector client. This is the L4 residual that must pass before any billable LLM call leaves the process.

5. **Score with strict set equality on letter sets**, per §5.5. Optional probability-calibration metrics (Brier, NLL, ECE, Murphy decomposition) are supported when the model emits an additional `<belief>{ ... }</belief>` JSON block per the v4 belief protocol; the schema is documented in [`forecast_eval/prompts.py::BELIEF_PROTOCOL`](https://github.com/MaYiding/OracleProto/blob/main/forecast_eval/prompts.py).
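Recommendation 1 reduces to a date-window check; a minimal sketch, assuming the harness supplies a per-question prediction cutoff (`chi`) alongside each row's `end_time` (`tau`):

```python
from datetime import date

def admissible(kappa_m, chi_i, tau_i):
    # A question is admissible for model M iff kappa_M <= chi_i < tau_i;
    # inadmissible questions are filtered upstream, not scored as errors.
    return kappa_m <= chi_i < tau_i

kappa = date(2026, 3, 1)                # model M's declared knowledge cutoff
chi = date(2026, 3, 10)                 # per-question prediction cutoff
tau = date.fromisoformat("2026-03-13")  # row["end_time"]
print(admissible(kappa, chi, tau))  # True
```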
Without the OracleProto harness in place, treat the resulting numbers as upper bounds on forecasting ability: any model that can browse the open web, or that was trained past a question's `end_time`, may have memorised the answer. The dataset makes the honesty audit possible; it does not enforce it on its own.

---

## 8. Provenance and curation

* **Source.** Upstream HuggingFace forecasting questions, restricted to *levels 1+2* (the easier two of the upstream difficulty bands). The raw set was harvested as 322 candidate questions.
* **Curation pipeline (5 passes).**
  1. Source-side broken-row removal and column flattening.
  2. `end_time` / answer-encoding / option-label normalization: `end_time` reduced to a `YYYY-MM-DD` calendar date; `Yes/No` mapped to `A/B`; option labels stripped of stray markdown.
  3. Down-sampling 322 → 200 → 100 → 80 with placeholder removal, deduplication, and an ambiguity audit.
  4. Final HIGH+MEDIUM ambiguity remediation: 4 rows reworded for explicit time anchoring, unit explicitness, and unambiguous binary framing.
  5. CRITICAL fix on one S&P 500 multi-select truth set so it satisfies the monotonic-threshold logic implied by the option ladder.
* **Verification.** All 80 ground truths verified end-to-end via parser round-trip (the rendered prompt is parsed and re-encoded back to the canonical letter set). Final tally: 0 critical / 0 high / 0 medium ambiguity issues remaining.

---

## 9. Intended uses and limitations

### 9.1 Intended uses

* **Forecasting benchmark for LLMs and LLM agents**, particularly tool-using agents that combine parametric knowledge with time-masked web retrieval.
* **Reproducibility testbed for forecasting harnesses.** The `dataset_metadata` table makes every prompt byte-stable; pairing it with the OracleProto framework yields a run unit whose scoring artefacts are bit-identical when the configuration matches.
* **Calibration and proper-scoring research.** The 80-row size is small enough that per-question analysis (belief evolution, source attribution, calibration plots) stays tractable.

### 9.2 Out-of-scope uses

* **Training data.** Including the rows in any training, fine-tuning, or RLHF corpus contaminates downstream forecasting evaluations of the trained model. The dataset is evaluation-only.
* **Long-horizon forecasting.** All resolutions land in a one-month window (2026-03-12 → 2026-04-14); the set does not represent multi-quarter or multi-year forecasting.
* **Open-ended generation.** Every question has a closed answer set, so this is not a generation benchmark.

### 9.3 Known limitations and biases

* **Sample size.** 80 rows is small. Confidence intervals on accuracy or Brier are wide; report them alongside point estimates and use paired tests when comparing models on the same set.
* **Topical skew.** Questions concentrate in finance and macro indicators, sports events, awards (Oscars, NBA, UEFA, etc.), and US-centric political and geopolitical events, reflecting the upstream HuggingFace market mix. They are not a globally representative sample.
* **English-only.** All `event` and `options` strings are English.
* **Date-only resolution.** `end_time` is a date, not a timestamp, and the dataset does not carry a timezone column. If finer-grained admissibility is needed, treat each resolution as covering the whole GMT+8 calendar day.
* **Provider-side residual leakage.** Any LLM that has ingested the upstream HuggingFace dataset, or that was trained past the resolution window, can recover ground truths from parametric memory. The dataset cannot patch this on its own; it relies on the harness to enforce admissibility ($\kappa_M$).
* **Snapshot of a moving label space.** A few questions ("none of the above", "all of the above") interact non-trivially with multi-select scoring; the curation pass fixed the one S&P 500 case, but the convention for similar questions in future revisions may shift. Pin to the schema version if byte-stable behaviour across releases is required.
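The sample-size caveat can be made concrete: a stdlib Wilson score interval (illustrative helper, not part of the harness) shows roughly a ±10-point band around 70% accuracy on 80 rows:

```python
import math

def wilson_interval(correct, n, z=1.96):
    """Approximate 95% Wilson score interval for an accuracy estimate."""
    p = correct / n
    denom = 1 + z * z / n
    centre = (p + z * z / (2 * n)) / denom
    half = z * math.sqrt(p * (1 - p) / n + z * z / (4 * n * n)) / denom
    return centre - half, centre + half

lo, hi = wilson_interval(56, 80)  # 56/80 correct = 70% point estimate
print(f"accuracy 0.70, 95% CI [{lo:.3f}, {hi:.3f}]")
```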
---

## 10. License

Released under the **MIT License** (see `LICENSE`). The upstream questions originate from a public HuggingFace forecasting set; the curation work, schema, prompt-reconstruction recipe, and answer encodings in this release are the contribution of this project.

---

## 11. Contact and contributions

Issues, schema feedback, and ambiguity reports are welcome. If a row's ground truth has changed, or its framing is ambiguous under §5.5, open an issue in the relevant repository:

* Dataset: [`MaYiding/OracleProto` on Hugging Face](https://huggingface.co/datasets/MaYiding/OracleProto/discussions) for row-level questions, ambiguity reports, and label disputes.
* Code Repo: [`MaYiding/OracleProto` on GitHub](https://github.com/MaYiding/OracleProto/issues) for evaluator, parser, or harness behaviour.

Row-level reports should include the `id`, the disputed framing, and, where available, a primary source; those are the inputs the curation pipeline needs to update the row in the next release.
README.md CHANGED
@@ -23,8 +23,11 @@ pretty_name: OracleProto Forecasting Eval Set
 # OracleProto: Forecasting Evaluation Set
 
 **Dataset:** [`Hugging Face`](https://huggingface.co/datasets/MaYiding/OracleProto)
+
 **GitHub Repo:** [`Github`](https://github.com/MaYiding/OracleProto)
+
 **Chinese Doc:** [[`中文文档`](https://huggingface.co/datasets/MaYiding/OracleProto/blob/main/README-ZH.md)]
+
 **License:** MIT
 
 A SQLite-packaged evaluation set of 80 hand-curated forecasting questions on real-world events, with resolution dates between 2026-03-12 and 2026-04-14, released alongside the [GitHub Repo](https://github.com/MaYiding/OracleProto). Both the rows and the byte-stable prompt-reconstruction recipe ship inside a single file, `forecast_eval_set_example.db`, which holds two tables: `forecast_eval_set_example` (the 80 rows) and `dataset_metadata` (the recipe).