Update README
- README-ZH.md: +42 −42
- README.md: +40 −40
README-ZH.md
CHANGED
# OracleProto: Forecasting Evaluation Set

**English doc:** [[`English doc`](https://huggingface.co/datasets/MaYiding/OracleProto/blob/main/README.md)]

**GitHub repo:** [[`MaYiding/OracleProto`](https://github.com/MaYiding/OracleProto)]

A SQLite-packaged evaluation set of 80 hand-curated forecasting questions on real-world events, with resolution dates between 2026-03-12 and 2026-04-14, released alongside the [GitHub Repo](https://github.com/MaYiding/OracleProto). Both the question rows and the byte-stable prompt-reconstruction recipe live in a single `forecast_eval_set_example.db` file, whose two tables are `forecast_eval_set_example` (the 80 question rows) and `dataset_metadata` (the recipe).

---
| `multiple_choice` | `multi` | 8 |
| **Total** | | **80** |

`yes_no` is a binary Yes/No question; `binary_named` is a binary choice between two named entities such as two teams, two contestants, or two competing parties; `multiple_choice` has at least three lettered options, one or more of which are correct, and `None of the above` is a valid answer whenever it appears in the option list. Each row stores the exact option-label literals; letter `A` maps to `options[0]`, `B` to `options[1]`, and so on (§3.4 covers labels beyond `Z`).

---
└── .gitattributes        # standard HF binary attributes
```

The dataset ships as a single SQLite file rather than Parquet or JSONL because the prompt-reconstruction recipe and per-row provenance live in the same file as the question rows (in `dataset_metadata.features_json`). A loader that converts the rows into a `datasets.Dataset` is shown in §6.3.

The CSV is a row-table export of `forecast_eval_set_example`; it does not include `dataset_metadata`, so the prompt template is reachable only from the SQLite file. Use the CSV when a downstream pipeline needs only the 80 rows (for pandas, a spreadsheet, or a `grep` filter) and reconstructs prompts on its own. The `options` column is preserved as a JSON-encoded array string, escaped per RFC 4180.
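Because the CSV keeps `options` as a JSON-encoded array string, consumers have to decode that cell per row. A minimal sketch of the decode step (the two-column sample below is invented for illustration; the real export has more columns):

```python
import csv
import io
import json

# A two-column sample in the export's shape: the options cell is a JSON
# array serialized to a string and quoted per RFC 4180.
sample = 'id,options\r\nq-001,"[""Yes"",""No""]"\r\n'

with io.StringIO(sample, newline="") as f:
    rows = list(csv.DictReader(f))

# The options cell arrives as a string and must be decoded explicitly.
options = json.loads(rows[0]["options"])
# options == ["Yes", "No"]
```

Reading the real file works the same way with `open("forecast_eval_set_example.csv", encoding="utf-8", newline="")` in place of the `StringIO`.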
---
## 3. Database schema

Two tables: `forecast_eval_set_example` holds the 80 question rows; `dataset_metadata` holds the canonical recipe. The database file is named after the main table.

### 3.1 Table `forecast_eval_set_example` (the question rows)

### 3.2 Table `dataset_metadata` (the recipe)

A one-row table whose `features_json` blob stores the prompt template, the four output formats, the outcomes-block rule, the agent-role string, and curation provenance. The full recipe is unpacked in §5.

```sql
CREATE TABLE dataset_metadata (
| Column | Type | Description |
| --------------- | ------- | ----------- |
| `id` | TEXT | Stable source-side question ID inherited from the upstream HuggingFace forecasting set; primary join key. |
| `choice_type` | TEXT | `'single'` if exactly one letter is correct, `'multi'` if one or more may be. Derived from the number of letters in `answer`. Selects between the single-answer and multi-select templates in §5.4. |
| `question_type` | TEXT | One of `yes_no`, `binary_named`, `multiple_choice`. Selects which prompt template is rendered (§5). |
| `event` | TEXT | Natural-language description of the event being predicted; author-edited so that the time anchor, the units, and the binary framing are explicit. |
| `options` | TEXT | JSON array of option labels. For `yes_no` it is fixed to `["Yes","No"]`. For `binary_named` it is the two named entities. For `multiple_choice` it is the list of choice labels, with each letter implied by position (`A=options[0]`, `B=options[1]`, …). |
| `answer` | TEXT | Canonical correct answer encoded as letters. For `yes_no` and `binary_named` it is `'A'` or `'B'`. For `multiple_choice` it is a comma-separated letter list in option order, e.g. `'A'` or `'A, B'`. |
| `end_time` | TEXT | Resolution date in `YYYY-MM-DD` format. The column stores a calendar date only; the prompt template (§5.2) attaches the GMT+8 reading at render time. For finer-grained admissibility, treat each resolution as covering the whole calendar day. |

### 3.4 Letter-to-index encoding

Letters map to option indices via `index = ord(letter) - ord('A')`. Beyond `Z` (≥27 options) the labels continue along the contiguous ASCII range starting at `A`: `[`, `\`, `]`, `^`, `_`, `` ` ``, `a`, `b`, …. The reference renderer wraps any non-`A`–`Z` label in backticks so it stays readable under Markdown rendering. No question among the 80 rows has more than 26 options; the encoding is documented because the framework's parser supports it.
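The mapping can be written out in a few lines (a minimal illustration, not the framework's actual helper):

```python
def letter_for_index(index: int) -> str:
    # 'A' -> 0, 'B' -> 1, ...; past 'Z' the label keeps walking the
    # contiguous ASCII range that starts at 'A'.
    return chr(ord("A") + index)

def index_for_letter(letter: str) -> int:
    return ord(letter) - ord("A")

labels = [letter_for_index(i) for i in range(28)]
# indices 0..25 give 'A'..'Z'; 26 and 27 give '[' and '\'
```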
---

## 5. Prompt reconstruction (canonical recipe)

Every row is rendered into a single user message via the recipe stored in `dataset_metadata.features_json.prompt_reconstruction`. The recipe is byte-stable and is the source of truth for the OracleProto evaluator; downstream users who reconstruct prompts themselves should follow it exactly so that results stay comparable.

### 5.1 Static fragments
{guidance}
```

The literal `(GMT+8)` inside the user-visible string attaches the time-zone reading to the resolution date at render time.

### 5.3 `outcomes_block`

For `yes_no` and `binary_named`: empty, since the option labels are already embedded in `output_format`.
For `multiple_choice`: a leading newline followed by one line per option in `A. <label>` form, e.g. `\nA. Arizona\nB. Baylor\nC. Brigham Young University (BYU)\n…`. If a derived letter falls outside `A`–`Z`, the label is wrapped in backticks.
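The outcomes-block rule can be sketched as follows (an illustrative re-implementation, not the reference renderer):

```python
def outcomes_block(question_type: str, options: list[str]) -> str:
    # Empty for yes_no / binary_named; otherwise a leading newline
    # followed by one "A. <label>" line per option.
    if question_type in ("yes_no", "binary_named"):
        return ""
    lines = [f"{chr(ord('A') + i)}. {label}" for i, label in enumerate(options)]
    return "\n" + "\n".join(lines)

block = outcomes_block("multiple_choice", ["Arizona", "Baylor"])
# block == "\nA. Arizona\nB. Baylor"
```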

### 5.4 `output_format` (one of four, chosen by `question_type` × `choice_type`)
The reference parser ([`forecast_eval/parser.py::parse_answer`](https://github.com/MaYiding/OracleProto/blob/main/forecast_eval/parser.py)) applies the following rules:

1. Take the **last** `\boxed{...}` substring in the model reply; everything else is treated as reasoning or scratchpad and ignored.
2. For `yes_no` (case-insensitive): `Yes` → `A`, `No` → `B`. Anything else is unparsed.
3. For `binary_named` (case-insensitive): match the boxed payload against `options[0]` or `options[1]`. Anything else is unparsed.
4. For `multiple_choice`: split the boxed payload on commas and whitespace, validate that each token is a single letter, and check that each letter resolves to a valid option index. Out-of-range letters or multi-character tokens are unparsed.
5. Score by strict set equality against the canonical letter set parsed from `answer`. A missing or unparsed boxed answer is recorded as `parse_ok = 0` rather than treated as a parser error; the run records that status and continues without interruption.

Reusing the framework's parser is the simplest way to get bit-identical scores across implementations.
---
print(f"loaded {len(rows)} rows; first event: {rows[0]['event']!r}")
```

The CSV path skips `dataset_metadata` entirely. To pair the rows with the prompt template, either render it by hand per §5 or switch back to the SQLite path of §6.1.

---
## 7. Recommended evaluation protocol

Pair the dataset with the OracleProto evaluation harness, which layers information-boundary discipline on top of a naive prompt-and-score loop. Five concrete recommendations:

1. **Declare a knowledge cutoff $\kappa_M$ for every model.** A question $i$ is admissible for model $M$ only when $\kappa_M \le \chi_i < \tau_i$, where $\chi_i$ is the per-question prediction cutoff and $\tau_i$ is its resolution date. Inadmissible questions are filtered upstream rather than counted as model errors. A model with no declared cutoff cannot be fairly compared against one that has declared it.

2. **Time-mask any retrieval or browsing tool.** If the harness lets the model issue web searches, pin the search-side `end_date` to $\chi_i + \delta$ with a conservative offset; OracleProto defaults to $\delta = -1$ day. The mechanism behind this barrier (L2) is documented in the framework's DESIGN and FRAME notes.

3. **Run an independent retrieval-content auditor.** Each retrieved snippet is passed to a separate LLM auditor that decides whether the snippet leaks the resolution. This is the L3 barrier in the framework's threat model.

4. **Forbid provider-native browsing.** OracleProto refuses model slugs ending in `:online` and similar hosted-browsing variants on three layers: config validation, the on-the-wire client, and the detector client. This is the L4 barrier, the final check any billable LLM call must pass before leaving the process.

5. **Score with strict set equality on letter sets**, per §5.5. Optional probability-calibration metrics (Brier, NLL, ECE, Murphy decomposition) are supported when the model emits an additional `<belief>{ ... }</belief>` JSON block under the framework's belief-elicitation protocol; the schema lives in [`forecast_eval/prompts.py::BELIEF_PROTOCOL`](https://github.com/MaYiding/OracleProto/blob/main/forecast_eval/prompts.py).

Without the OracleProto harness in place, treat the resulting numbers as upper bounds on forecasting ability: any model that can browse the open web, or that was trained past a question's `end_time`, may have memorised the answer. The dataset makes the admissibility check possible, but it does not enforce it.
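The admissibility rule in step 1 is a one-line check; the dates below are invented for illustration:

```python
from datetime import date

def admissible(kappa_m: date, chi_i: date, tau_i: date) -> bool:
    # Question i is admissible for model M iff kappa_M <= chi_i < tau_i.
    return kappa_m <= chi_i < tau_i

# A model with knowledge cutoff 2026-01-01 may answer a question whose
# prediction cutoff is 2026-03-11 and whose resolution date is 2026-03-12.
ok = admissible(date(2026, 1, 1), date(2026, 3, 11), date(2026, 3, 12))
```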
---

## 8. Provenance and curation

* **Source.** Upstream HuggingFace forecasting-question set, restricted to *levels 1+2* (the two easier bands of the upstream difficulty scale). 322 candidate questions were collected from the raw set.
* **Curation pipeline (5 passes).**
  1. Source-side broken-row removal and column flattening.
  2. `end_time` / answer-encoding / option-label normalization: `end_time` reduced to a `YYYY-MM-DD` calendar date; `Yes/No` mapped to `A/B`; option labels stripped of stray markdown.
  3. Down-sampling 322 → 200 → 100 → 80 with placeholder removal, deduplication, and an ambiguity audit.
  4. Final HIGH+MEDIUM ambiguity remediation: 4 rows reworded so that the time anchor, the units, and the binary framing are explicit.
  5. CRITICAL fix on one S&P 500 multi-select truth set so it satisfies the monotonic-threshold logic implied by the option ladder.
* **Verification.** All 80 ground truths verified end-to-end via parser round-trip (the rendered prompt is parsed and re-encoded back to the canonical letter set). Final tally: 0 critical / 0 high / 0 medium ambiguity issues remaining.
---
* **A forecasting benchmark for LLMs and LLM agents**, in particular tool-using agents that combine parametric knowledge with time-masked web retrieval.
* **A reproducibility testbed for forecasting harnesses.** The `dataset_metadata` table makes every prompt byte-stable; paired with the OracleProto framework, it yields a unit of execution whose scoring artifacts are bit-identical when the configuration matches.
* **Calibration and proper-scoring research.** At 80 rows the set is small enough that per-question analysis (belief evolution, source attribution, calibration plots) stays tractable.

### 9.2 Out-of-scope uses

* **Training data.** Including the rows in any training, fine-tuning, or RLHF dataset contaminates downstream forecasting evaluation of the trained model. The set is evaluation-only.
* **Long-horizon forecasting.** All resolutions land in a one-month window (2026-03-12 → 2026-04-14); the set does not represent multi-quarter or multi-year forecasting.
* **Open-ended generation.** Every question has a closed answer set, so this is not a generation benchmark.

### 9.3 Known limitations and biases

* **Sample size.** 80 rows is small. Confidence intervals on accuracy or Brier are wide; report them alongside point estimates and use paired tests when comparing models on the same set.
* **Topical skew.** Questions concentrate on finance and macro indicators, sports, awards (Oscars, NBA, UEFA, and similar), and US-centric politics and geopolitics, reflecting the question structure of the upstream HuggingFace set rather than a globally representative sample.
* **English-only.** All `event` and `options` strings are English.
* **Date-only resolution.** `end_time` is a date, not a timestamp, and the dataset carries no time-zone column. When finer-grained admissibility is needed, treat each resolution as covering the whole GMT+8 calendar day.
* **Provider-side residual leakage.** Any LLM that has ingested the upstream HuggingFace dataset, or that was trained past the resolution window, can recover ground truths from parametric memory. The dataset cannot fix this by itself; it relies on the harness to enforce admissibility ($\kappa_M$).
* **Snapshot of a moving label space.** A few questions ("none of the above", "all of the above") interact subtly with multi-select scoring; the curation pass fixed one S&P 500 instance, but future releases may adjust the convention for similar questions. Pin to a schema version if byte-stable behaviour across releases is required.
---

## 10. License

Released under the **MIT License** (see `LICENSE`). The upstream questions originate from a public HuggingFace forecasting set; the curation work, schema, prompt-reconstruction recipe, and answer encoding in this release are contributions of this project.

---

## 11. Contact and contributions

Issues, schema feedback, and ambiguity reports are welcome. If a row's ground truth has changed, or its framing has become ambiguous, open an issue in the matching repository:

* Dataset: [`MaYiding/OracleProto` on Hugging Face](https://huggingface.co/datasets/MaYiding/OracleProto/discussions) for row-level questions, ambiguity reports, and label disputes.
* Code: [`MaYiding/OracleProto` on GitHub](https://github.com/MaYiding/OracleProto/issues) for evaluator, parser, or harness behaviour.

Row-level reports should include the `id`, the disputed framing, and where available a primary source; that is the input the curation pipeline needs to update the row in the next release.
README.md
CHANGED
|
@@ -22,11 +22,11 @@ pretty_name: OracleProto Forecasting Eval Set
|
|
| 22 |
|
| 23 |
# OracleProto: Forecasting Evaluation Set
|
| 24 |
|
| 25 |
-
**Chinese
|
| 26 |
|
| 27 |
-
**GitHub
|
| 28 |
|
| 29 |
-
A SQLite-packaged evaluation set of 80 hand-curated forecasting questions on real-world events, with resolution dates between 2026-03-12 and 2026-04-14, released alongside the [GitHub Repo](https://github.com/MaYiding/OracleProto). Both the rows and the byte-stable prompt-reconstruction recipe
|
| 30 |
|
| 31 |
---
|
| 32 |
|
|
@@ -39,11 +39,11 @@ A SQLite-packaged evaluation set of 80 hand-curated forecasting questions on rea
|
|
| 39 |
| Splits | `train` (80); single split, intended as a held-out evaluation set |
|
| 40 |
| Resolution-date range | `2026-03-12` → `2026-04-14` |
|
| 41 |
| Question types | `yes_no`, `binary_named`, `multiple_choice` |
|
| 42 |
-
| Choice types | `single` (one correct letter), `multi` (one
|
| 43 |
| Database file | `forecast_eval_set_example.db` (SQLite 3, ~52 KB) |
|
| 44 |
| Tables in the file | `forecast_eval_set_example` (80 rows), `dataset_metadata` (1 row) |
|
| 45 |
| License | MIT |
|
| 46 |
-
|
|
| 47 |
|
| 48 |
### Type distribution
|
| 49 |
|
|
@@ -55,7 +55,7 @@ A SQLite-packaged evaluation set of 80 hand-curated forecasting questions on rea
|
|
| 55 |
| `multiple_choice` | `multi` | 8 |
|
| 56 |
| **Total** | | **80** |
|
| 57 |
|
| 58 |
-
`yes_no` is binary Yes/No
|
| 59 |
|
| 60 |
---
|
| 61 |
|
|
@@ -70,9 +70,9 @@ OracleProto/
|
|
| 70 |
└── .gitattributes # standard HF binary attributes
|
| 71 |
```
|
| 72 |
|
| 73 |
-
The dataset
|
| 74 |
|
| 75 |
-
The CSV is a row-table export of `forecast_eval_set_example`; it does not
|
| 76 |
|
| 77 |
---
|
| 78 |
|
|
@@ -100,7 +100,7 @@ CREATE INDEX idx_forecast_eval_set_example_end_time ON forecast_eval_set_ex
|
|
| 100 |
|
| 101 |
### 3.2 Table `dataset_metadata` (the recipe)
|
| 102 |
|
| 103 |
-
A one-row table whose `features_json` blob
|
| 104 |
|
| 105 |
```sql
|
| 106 |
CREATE TABLE dataset_metadata (
|
|
@@ -118,16 +118,16 @@ CREATE TABLE dataset_metadata (
|
|
| 118 |
| Column | Type | Description |
|
| 119 |
| --------------- | ------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
|
| 120 |
| `id` | TEXT | Stable source-side question ID inherited from the upstream HuggingFace forecasting set; primary join key. |
|
| 121 |
-
| `choice_type` | TEXT | `'single'` if exactly one letter is correct, `'multi'` if one
|
| 122 |
| `question_type` | TEXT | One of `yes_no`, `binary_named`, `multiple_choice`. Selects which prompt template is rendered (§5). |
|
| 123 |
-
| `event` | TEXT | Natural-language description of the event being predicted, author-edited
|
| 124 |
-
| `options` | TEXT | JSON array of option labels. For `yes_no` it is fixed to `["Yes","No"]`. For `binary_named` it is two named entities. For `multiple_choice` it is
|
| 125 |
| `answer` | TEXT | Canonical correct answer encoded as letters. For `yes_no` and `binary_named` it is `'A'` or `'B'`. For `multiple_choice` it is a comma-separated letter list in option order, e.g. `'A'` or `'A, B'`. |
|
| 126 |
-
| `end_time` | TEXT | Resolution date in `YYYY-MM-DD`. The column stores a calendar date only; the prompt template (§5.2)
|
| 127 |
|
| 128 |
### 3.4 Letter-to-index encoding
|
| 129 |
|
| 130 |
-
Letters map to option indices via `index = ord(letter) - ord('A')`. Beyond `Z` (≥27 options) the labels
|
| 131 |
|
| 132 |
---
|
| 133 |
|
|
@@ -189,7 +189,7 @@ Letters map to option indices via `index = ord(letter) - ord('A')`. Beyond `Z` (
|
|
| 189 |
|
| 190 |
## 5. Prompt reconstruction (canonical recipe)
|
| 191 |
|
| 192 |
-
Every row is rendered into a single user message via the recipe stored in `dataset_metadata.features_json.prompt_reconstruction`. The recipe is byte-stable and is the source of truth for the OracleProto evaluator; downstream users who reconstruct prompts themselves should follow it exactly
|
| 193 |
|
| 194 |
### 5.1 Static fragments
|
| 195 |
|
|
@@ -212,12 +212,12 @@ IMPORTANT: Your final answer MUST end with this exact format:
|
|
| 212 |
{guidance}
|
| 213 |
```
|
| 214 |
|
| 215 |
-
The literal `(GMT+8)` inside the user-visible string is what
|
| 216 |
|
| 217 |
### 5.3 `outcomes_block`
|
| 218 |
|
| 219 |
-
For `yes_no` and `binary_named`: empty, since the option labels are
|
| 220 |
-
For `multiple_choice`: a leading newline followed by one line per option in `A. <label>` form,
|
| 221 |
|
| 222 |
### 5.4 `output_format` (one of four, chosen by `question_type` × `choice_type`)
|
| 223 |
|
|
@@ -263,15 +263,15 @@ The reference parser ([`forecast_eval/parser.py::parse_answer`](https://github.c
|
|
| 263 |
2. For `yes_no` (case-insensitive): `Yes` → `A`, `No` → `B`. Anything else is unparsed.
|
| 264 |
3. For `binary_named` (case-insensitive): match the boxed payload against `options[0]` or `options[1]`. Anything else is unparsed.
|
| 265 |
4. For `multiple_choice`: split the boxed payload on commas and whitespace, validate that each token is a single letter, and check that each letter resolves to a valid option index. Out-of-range letters or multi-character tokens are unparsed.
|
| 266 |
-
5. Score by strict set equality against the canonical letter set parsed from `answer`. A missing or unparsed boxed answer is recorded as `parse_ok = 0` rather than
|
| 267 |
|
| 268 |
-
Reusing the framework's parser is the
|
| 269 |
|
| 270 |
---
|
| 271 |
|
| 272 |
## 6. Loading the dataset
|
| 273 |
|
| 274 |
-
### 6.1 With raw `sqlite3` (no extra
|
| 275 |
|
| 276 |
```python
|
| 277 |
import sqlite3
|
|
@@ -372,9 +372,9 @@ def render_prompt(row, meta):
|
|
| 372 |
)
|
| 373 |
```
|
| 374 |
|
| 375 |
-
The full reference renderer
|
| 376 |
|
| 377 |
-
### 6.5 With the
|
| 378 |
|
| 379 |
```python
|
| 380 |
import csv, json
|
|
@@ -387,25 +387,25 @@ with open("forecast_eval_set_example.csv", encoding="utf-8", newline="") as f:
|
|
| 387 |
print(f"loaded {len(rows)} rows; first event: {rows[0]['event']!r}")
|
| 388 |
```
|
| 389 |
|
| 390 |
-
The CSV path skips `dataset_metadata` entirely. To pair the rows with the prompt template, either
|
| 391 |
|
| 392 |
---
|
| 393 |
|
| 394 |
## 7. Recommended evaluation protocol
|
| 395 |
|
| 396 |
-
Pair the dataset with the OracleProto evaluation harness, which layers information-boundary discipline on top of
|
| 397 |
|
| 398 |
-
1. **Declare a knowledge cutoff $\kappa_M$ for every model.** A question is admissible for model $M$ only when $\kappa_M \le \chi_i < \tau_i$, where $\chi_i$ is the per-question prediction cutoff and $\tau_i$ is its resolution date. Inadmissible questions are filtered upstream rather than counted as model errors. A model with no declared cutoff cannot be fairly compared
|
| 399 |
|
| 400 |
2. **Time-mask any retrieval or browsing tool.** If the harness lets the model issue web searches, pin the search-side `end_date` to $\chi_i + \delta$ with a conservative offset; OracleProto defaults to $\delta = -1$ day. The mechanism behind this barrier (L2) is documented in the framework's DESIGN and FRAME notes.
|
| 401 |
|
| 402 |
3. **Run an independent retrieval-content auditor.** Each retrieved snippet is passed to a separate LLM auditor that decides whether the snippet leaks the resolution. This is the L3 barrier in the framework's threat model.
|
| 403 |
|
| 404 |
-
4. **Forbid provider-native browsing.** OracleProto refuses model slugs ending in `:online` and similar hosted-browsing variants on three layers: config validation, on-the-wire client, and detector client. This is the L4
|
| 405 |
|
| 406 |
-
5. **Score with strict set equality on letter sets**, per §5.5. Optional probability-calibration metrics (Brier, NLL, ECE, Murphy decomposition) are supported when the model emits an additional `<belief>{ ... }</belief>` JSON block
|
| 407 |
|
| 408 |
-
Without the OracleProto harness in place, treat the resulting numbers as upper bounds on forecasting ability: any model that can browse the open web, or that was trained past a question's `end_time`, may have memorised the answer. The dataset makes the
|
| 409 |
|
| 410 |
---
|
| 411 |
|
|
@@ -416,7 +416,7 @@ Without the OracleProto harness in place, treat the resulting numbers as upper b
|
|
| 416 |
1. Source-side broken-row removal and column flattening.
|
| 417 |
2. `end_time` / answer-encoding / option-label normalization: `end_time` reduced to a `YYYY-MM-DD` calendar date; `Yes/No` mapped to `A/B`; option labels stripped of stray markdown.
|
| 418 |
3. Down-sampling 322 → 200 → 100 → 80 with placeholder removal, deduplication, and an ambiguity audit.
|
| 419 |
-
4. Final HIGH+MEDIUM ambiguity remediation: 4 rows reworded
|
| 420 |
5. CRITICAL fix on one S&P 500 multi-select truth set so it satisfies the monotonic-threshold logic implied by the option ladder.
|
| 421 |
* **Verification.** All 80 ground-truths verified end-to-end via parser round-trip (the rendered prompt is parsed and re-encoded back to the canonical letter set). Final tally: 0 critical / 0 high / 0 medium ambiguity issues remaining.
|
| 422 |
|
|
@@ -432,18 +432,18 @@ Without the OracleProto harness in place, treat the resulting numbers as upper b
|
|
| 432 |
|
| 433 |
### 9.2 Out-of-scope uses
|
| 434 |
|
| 435 |
-
* **Training data.** Including the rows in any training, fine-tuning, or RLHF
|
| 436 |
* **Long-horizon forecasting.** All resolutions land in a one-month window (2026-03-12 → 2026-04-14); the set does not represent multi-quarter or multi-year forecasting.
|
| 437 |
* **Open-ended generation.** Every question has a closed answer set, so this is not a generation benchmark.
|
| 438 |
|
| 439 |
### 9.3 Known limitations and biases
|
| 440 |
|
| 441 |
-
* **Sample size.** 80 rows is small. Confidence intervals on accuracy or Brier are wide; report them alongside point estimates and use paired tests when comparing models on the same set.
|
| 442 |
-
* **Topical skew.** Questions
|
| 443 |
-
* **English-only.** All `event` and `options` strings are English.
|
| 444 |
-
* **Date-only resolution.** `end_time` is a date, not a timestamp, and the dataset does not
|
| 445 |
-
* **Provider-side residual leakage.** Any LLM that has ingested the upstream HuggingFace dataset, or that was trained past the resolution window, can recover ground truths from parametric memory. The dataset cannot
|
| 446 |
-
* **Snapshot of a moving label space.** A few questions ("none of the above", "all of the above")
|
| 447 |
|
| 448 |
---
|
| 449 |
|
|
@@ -455,9 +455,9 @@ Released under the **MIT License** (see `LICENSE`). The upstream questions origi
|
|
| 455 |
|
| 456 |
## 11. Contact and contributions
|
| 457 |
|
| 458 |
-
Issues, schema feedback, and ambiguity reports are welcome. If a row's ground truth has changed, or its framing
|
| 459 |
|
| 460 |
* Dataset: [`MaYiding/OracleProto` on Hugging Face](https://huggingface.co/datasets/MaYiding/OracleProto/discussions) for row-level questions, ambiguity reports, and label disputes.
|
| 461 |
-
* Code
|
| 462 |
|
| 463 |
-
Row-level reports should include the `id`, the disputed framing, and where available a primary source;
|
|
|
|
| 22 |
|
| 23 |
# OracleProto: Forecasting Evaluation Set
|
| 24 |
|
| 25 |
+
**Chinese doc:** [[`中文文档`](https://huggingface.co/datasets/MaYiding/OracleProto/blob/main/README-ZH.md)]
|
| 26 |
|
| 27 |
+
**GitHub repo:** [[`MaYiding/OracleProto`](https://github.com/MaYiding/OracleProto)]
|
| 28 |
|
| 29 |
+
A SQLite-packaged evaluation set of 80 hand-curated forecasting questions on real-world events, with resolution dates between 2026-03-12 and 2026-04-14, released alongside the [GitHub Repo](https://github.com/MaYiding/OracleProto). Both the rows and the byte-stable prompt-reconstruction recipe are packaged in a single file, `forecast_eval_set_example.db`, which exposes two tables: `forecast_eval_set_example` (the 80 rows) and `dataset_metadata` (the recipe).
|
| 30 |
|
| 31 |
---
|
| 32 |
|
|
|
|
| 39 |
| Splits | `train` (80); single split, intended as a held-out evaluation set |
|
| 40 |
| Resolution-date range | `2026-03-12` → `2026-04-14` |
|
| 41 |
| Question types | `yes_no`, `binary_named`, `multiple_choice` |
|
| 42 |
+
| Choice types | `single` (one correct letter), `multi` (one or more correct letters) |
|
| 43 |
| Database file | `forecast_eval_set_example.db` (SQLite 3, ~52 KB) |
|
| 44 |
| Tables in the file | `forecast_eval_set_example` (80 rows), `dataset_metadata` (1 row) |
|
| 45 |
| License | MIT |
|
| 46 |
+
| Upstream source | HuggingFace forecasting questions (levels 1+2), 322 raw → 80 curated |
|
| 47 |
|
| 48 |
### Type distribution
|
| 49 |
|
|
|
|
| 55 |
| `multiple_choice` | `multi` | 8 |
|
| 56 |
| **Total** | | **80** |
|
| 57 |
|
| 58 |
+
`yes_no` is binary Yes/No. `binary_named` is a binary choice between two named entities such as two teams, two contestants, or two competing parties. `multiple_choice` has at least three labelled options, one or more of which are correct; "None of the above" is a valid answer when it appears in the option list. Each row stores the exact option labels: letter `A` maps to `options[0]`, `B` to `options[1]`, and so on (§3.4 covers labels beyond `Z`).
|
| 59 |
|
| 60 |
---
|
| 61 |
|
|
|
|
| 70 |
└── .gitattributes # standard HF binary attributes
|
| 71 |
```
|
| 72 |
|
| 73 |
+
The dataset is published as a single SQLite file, not as Parquet or JSONL, because the prompt-reconstruction recipe and per-row provenance share the same file as the rows (in `dataset_metadata.features_json`). A loader that converts the rows to a `datasets.Dataset` is shown in §6.3.
|
| 74 |
|
| 75 |
+
The CSV is a row-table export of `forecast_eval_set_example`; it does not include `dataset_metadata`, so the prompt template is reachable only via the SQLite file. Use the CSV when a downstream pipeline needs only the 80 rows (pandas, a spreadsheet, or a `grep` filter) and reconstructs prompts on its own. The `options` column is preserved as a JSON-encoded array string, escaped per RFC 4180.
|
| 76 |
|
| 77 |
---
|
| 78 |
|
|
|
|
| 100 |
|
| 101 |
### 3.2 Table `dataset_metadata` (the recipe)
|
| 102 |
|
| 103 |
+
A one-row table whose `features_json` blob stores the prompt template, the four output formats, the outcomes-block rule, the agent role, and curation provenance. The full recipe is documented in §5.
|
| 104 |
|
| 105 |
```sql
|
| 106 |
CREATE TABLE dataset_metadata (
|
|
|
|
| 118 |
| Column | Type | Description |
|
| 119 |
| --------------- | ------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
|
| 120 |
| `id` | TEXT | Stable source-side question ID inherited from the upstream HuggingFace forecasting set; primary join key. |
|
| 121 |
+
| `choice_type` | TEXT | `'single'` if exactly one letter is correct, `'multi'` if one or more letters are correct. Derived from the number of letters in `answer`. Selects between the single-answer and multi-select templates in §5.4. |
|
| 122 |
| `question_type` | TEXT | One of `yes_no`, `binary_named`, `multiple_choice`. Selects which prompt template is rendered (§5). |
| `event` | TEXT | Natural-language description of the event being predicted, author-edited to make the time anchor, the units, and the binary framing explicit. |
| `options` | TEXT | JSON array of option labels. For `yes_no` it is fixed to `["Yes","No"]`. For `binary_named` it is the two named entities. For `multiple_choice` it is the list of choice labels, where each letter is given by its position (`A=options[0]`, `B=options[1]`, …). |
| `answer` | TEXT | Canonical correct answer encoded as letters. For `yes_no` and `binary_named` it is `'A'` or `'B'`. For `multiple_choice` it is a comma-separated letter list in option order, e.g. `'A'` or `'A, B'`. |
| `end_time` | TEXT | Resolution date in `YYYY-MM-DD`. The column stores a calendar date only; the prompt template (§5.2) attaches the GMT+8 reading at render time. If finer-grained admissibility is needed, treat each resolution as covering the whole calendar day. |
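
The `answer`/`choice_type` relationship above can be sketched in a few lines (the helper names are illustrative, not part of the dataset tooling):

```python
def parse_answer(answer: str) -> list[str]:
    """Split the canonical answer field ('A' or 'A, B') into letters, in option order."""
    return [token.strip() for token in answer.split(",") if token.strip()]


def derive_choice_type(answer: str) -> str:
    """'single' when exactly one letter is correct, 'multi' when two or more are."""
    return "single" if len(parse_answer(answer)) == 1 else "multi"


print(parse_answer("A, B"))      # ['A', 'B']
print(derive_choice_type("A"))   # single
```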
### 3.4 Letter-to-index encoding
Letters map to option indices via `index = ord(letter) - ord('A')`. Beyond `Z` (≥27 options) the labels continue along the contiguous ASCII range that starts at `A`: `[`, `\`, `]`, `^`, `_`, `` ` ``, `a`, `b`, …. The reference renderer wraps any non-`A`–`Z` label in backticks to keep the label intact under Markdown rendering. None of the 80 rows exceed 26 options; the encoding is documented because the framework's parser supports it.
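
Under that encoding, the letter↔index mapping and the backtick rule can be sketched as follows (illustrative code; the reference renderer in the OracleProto repository is authoritative):

```python
def letter_for_index(index: int) -> str:
    # Contiguous ASCII range starting at 'A': A..Z, then '[', '\', ']', ...
    return chr(ord("A") + index)


def index_for_letter(letter: str) -> int:
    return ord(letter) - ord("A")


def render_label(index: int) -> str:
    letter = letter_for_index(index)
    # Wrap anything outside A-Z in backticks so Markdown cannot mangle it.
    return letter if "A" <= letter <= "Z" else f"`{letter}`"


print(letter_for_index(26))   # '[' -- the 27th option
print(render_label(26))       # '`[`'
```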
---
## 5. Prompt reconstruction (canonical recipe)
Every row is rendered into a single user message via the recipe stored in `dataset_metadata.features_json.prompt_reconstruction`. The recipe is byte-stable and is the source of truth for the OracleProto evaluator; downstream users who reconstruct prompts themselves should follow it exactly to keep results comparable.
### 5.1 Static fragments

```
...
{guidance}
```
+
The literal `(GMT+8)` inside the user-visible string is what attaches a timezone to the resolution date at render time.
|
| 216 |
|
| 217 |
### 5.3 `outcomes_block`
For `yes_no` and `binary_named`: empty, since the option labels are embedded directly in `output_format`.
For `multiple_choice`: a leading newline followed by one line per option in `A. <label>` form, for example `\nA. Arizona\nB. Baylor\nC. Brigham Young University (BYU)\n…`. Labels whose derived letter falls outside `A`–`Z` are wrapped in backticks.
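
The two branches can be sketched as one function (hedged: whether a trailing newline follows the last option is an assumption here; the reference implementation in `forecast_eval/prompts.py` is authoritative):

```python
def outcomes_block(question_type: str, options: list[str]) -> str:
    # yes_no / binary_named embed their labels in output_format, so no block is emitted.
    if question_type in ("yes_no", "binary_named"):
        return ""
    lines = []
    for i, label in enumerate(options):
        letter = chr(ord("A") + i)
        if not ("A" <= letter <= "Z"):
            letter = f"`{letter}`"  # backtick rule for labels beyond Z
        lines.append(f"{letter}. {label}")
    return "\n" + "\n".join(lines)


print(repr(outcomes_block("multiple_choice", ["Arizona", "Baylor"])))
# '\nA. Arizona\nB. Baylor'
```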
### 5.4 `output_format` (one of four, chosen by `question_type` × `choice_type`)
2. For `yes_no` (case-insensitive): `Yes` → `A`, `No` → `B`. Anything else is unparsed.
3. For `binary_named` (case-insensitive): match the boxed payload against `options[0]` or `options[1]`. Anything else is unparsed.
4. For `multiple_choice`: split the boxed payload on commas and whitespace, validate that each token is a single letter, and check that each letter resolves to a valid option index. Out-of-range letters or multi-character tokens are unparsed.
5. Score by strict set equality against the canonical letter set parsed from `answer`. A missing or unparsed boxed answer is recorded as `parse_ok = 0` rather than raised as an error, and the run continues without halting.
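
Steps 2–5 can be condensed into a parse-and-score sketch. The `\boxed{...}` extraction regex and the return shape are assumptions made for illustration; the framework's own parser remains the source of truth:

```python
import re

BOXED = re.compile(r"\\boxed\{([^}]*)\}")


def parse_letters(payload, question_type, options):
    payload = payload.strip()
    if question_type == "yes_no":
        return {"yes": {"A"}, "no": {"B"}}.get(payload.lower())
    if question_type == "binary_named":
        lowered = [opt.lower() for opt in options]
        return {"AB"[lowered.index(payload.lower())]} if payload.lower() in lowered else None
    letters = set()
    for token in re.split(r"[,\s]+", payload):
        if len(token) != 1:                                 # multi-character token -> unparsed
            return None
        if not 0 <= ord(token) - ord("A") < len(options):   # out-of-range letter -> unparsed
            return None
        letters.add(token)
    return letters or None


def score(model_output, question_type, options, answer):
    match = BOXED.search(model_output)
    letters = parse_letters(match.group(1), question_type, options) if match else None
    if letters is None:
        return {"parse_ok": 0, "correct": 0}                # recorded, never raised
    gold = {token.strip() for token in answer.split(",")}
    return {"parse_ok": 1, "correct": int(letters == gold)}
```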
+
Reusing the framework's parser is the simplest way to get bit-identical scores across implementations.
|
| 269 |
|
| 270 |
---
## 6. Loading the dataset
### 6.1 With raw `sqlite3` (no extra dependencies)
```python
import sqlite3
# ...
)
```
+
The full reference renderer, which extends the example above with the >26-option backtick rule and an optional reflection / belief-elicitation tail, is implemented in [`forecast_eval/prompts.py`](https://github.com/MaYiding/OracleProto/blob/main/forecast_eval/prompts.py); reusing it produces byte-identical prompts.
|
| 376 |
|
| 377 |
+
### 6.5 With the CSV export (stdlib `csv`, no prompt template)
```python
import csv, json
# ...
print(f"loaded {len(rows)} rows; first event: {rows[0]['event']!r}")
```
+
The CSV path skips `dataset_metadata` entirely. To pair the rows with the prompt template, either follow §5 by hand or switch back to the SQLite path in §6.1.
|
| 391 |
|
| 392 |
---
## 7. Recommended evaluation protocol
Pair the dataset with the OracleProto evaluation harness, which layers information-boundary discipline on top of a plain prompt-and-score loop. Five concrete recommendations:
1. **Declare a knowledge cutoff $\kappa_M$ for every model.** A question is admissible for model $M$ only when $\kappa_M \le \chi_i < \tau_i$, where $\chi_i$ is the per-question prediction cutoff and $\tau_i$ is its resolution date. Inadmissible questions are filtered upstream rather than counted as model errors. A model with no declared cutoff cannot be fairly compared against a model that has one.
2. **Time-mask any retrieval or browsing tool.** If the harness lets the model issue web searches, pin the search-side `end_date` to $\chi_i + \delta$ with a conservative offset; OracleProto defaults to $\delta = -1$ day. The mechanism behind this barrier (L2) is documented in the framework's DESIGN and FRAME notes.
3. **Run an independent retrieval-content auditor.** Each retrieved snippet is passed to a separate LLM auditor that decides whether the snippet leaks the resolution. This is the L3 barrier in the framework's threat model.
4. **Forbid provider-native browsing.** OracleProto refuses model slugs ending in `:online` and similar hosted-browsing variants on three layers: config validation, on-the-wire client, and detector client. This is the L4 barrier, the final check that any billable LLM call must clear before it leaves the process.
5. **Score with strict set equality on letter sets**, per §5.5. Optional probability-calibration metrics (Brier, NLL, ECE, Murphy decomposition) are supported when the model emits an additional `<belief>{ ... }</belief>` JSON block following the framework's belief-elicitation protocol; the schema is documented in [`forecast_eval/prompts.py::BELIEF_PROTOCOL`](https://github.com/MaYiding/OracleProto/blob/main/forecast_eval/prompts.py).
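
The admissibility check in item 1 reduces to two date comparisons; a sketch with the standard library (variable names mirror the symbols above):

```python
from datetime import date


def admissible(kappa_m: date, chi_i: date, tau_i: date) -> bool:
    """A question is admissible for model M iff kappa_M <= chi_i < tau_i."""
    return kappa_m <= chi_i < tau_i


# Cutoff 2025-12-01, predicting on 2026-03-01 for a 2026-03-12 resolution: admissible.
print(admissible(date(2025, 12, 1), date(2026, 3, 1), date(2026, 3, 12)))   # True
# Prediction cutoff on/after the resolution date: inadmissible.
print(admissible(date(2025, 12, 1), date(2026, 3, 12), date(2026, 3, 12)))  # False
```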
Without the OracleProto harness in place, treat the resulting numbers as upper bounds on forecasting ability: any model that can browse the open web, or that was trained past a question's `end_time`, may have memorised the answer. The dataset makes the admissibility check possible; it does not enforce it on its own.
---
1. Source-side broken-row removal and column flattening.
2. `end_time` / answer-encoding / option-label normalization: `end_time` reduced to a `YYYY-MM-DD` calendar date; `Yes/No` mapped to `A/B`; option labels stripped of stray markdown.
3. Down-sampling 322 → 200 → 100 → 80 with placeholder removal, deduplication, and an ambiguity audit.
4. Final HIGH+MEDIUM ambiguity remediation: 4 rows reworded to make their time anchor, units, or binary framing explicit.
5. CRITICAL fix on one S&P 500 multi-select truth set so it satisfies the monotonic-threshold logic implied by the option ladder.
* **Verification.** All 80 ground-truths verified end-to-end via parser round-trip (the rendered prompt is parsed and re-encoded back to the canonical letter set). Final tally: 0 critical / 0 high / 0 medium ambiguity issues remaining.
### 9.2 Out-of-scope uses
* **Training data.** Including the rows in any training, fine-tuning, or RLHF dataset contaminates downstream forecasting evaluations of the trained model. The dataset is evaluation-only.
* **Long-horizon forecasting.** All resolutions land in a one-month window (2026-03-12 → 2026-04-14); the set does not represent multi-quarter or multi-year forecasting.
* **Open-ended generation.** Every question has a closed answer set, so this is not a generation benchmark.
### 9.3 Known limitations and biases
* **Sample size.** 80 rows is a small sample. Confidence intervals on accuracy or Brier are wide; report them alongside point estimates and use paired tests when comparing models on the same set.
* **Topical skew.** Questions are concentrated in finance and macro indicators, sports events, awards (Oscars, NBA, UEFA, etc.), and US-centric political or geopolitical events, reflecting the upstream HuggingFace question mix. They are not a globally representative sample.
* **English-only.** All `event` and `options` strings are in English.
* **Date-only resolution.** `end_time` is a date, not a timestamp, and the dataset does not include a timezone column. If finer-grained admissibility is needed, treat each resolution as covering the whole GMT+8 calendar day.
* **Provider-side residual leakage.** Any LLM that has ingested the upstream HuggingFace dataset, or that was trained past the resolution window, can recover ground truths from parametric memory. The dataset cannot address this on its own; it relies on the harness to enforce admissibility ($\kappa_M$).
* **Snapshot of a moving label space.** A few questions ("none of the above", "all of the above") have subtle interactions with multi-select scoring; the curation pass fixed the one S&P 500 case, but the convention for similar questions in future revisions may shift. Pin to the schema version if byte-stable behaviour across releases is required.
---
## 11. Contact and contributions
Issues, schema feedback, and ambiguity reports are welcome. If a row's ground truth has changed, or its framing has become ambiguous, open an issue in the relevant repository:
* Dataset: [`MaYiding/OracleProto` on Hugging Face](https://huggingface.co/datasets/MaYiding/OracleProto/discussions) for row-level questions, ambiguity reports, and label disputes.
* Code: [`MaYiding/OracleProto` on GitHub](https://github.com/MaYiding/OracleProto/issues) for evaluator, parser, or harness behaviour.
Row-level reports should include the `id`, the disputed framing, and, where available, a primary source; these are the inputs the curation pipeline needs to update the row in the next release.
|