---
pretty_name: "FABLE+"
language:
- en
license: mit
size_categories:
- 1K<n<10K
task_categories:
- question-answering
tags:
- procedural-reasoning
- data-flow-analysis
- llm-evaluation
- benchmark
- program-analysis
- question-answering
- tabular
- text
- croissant
---
# Dataset Card for FABLE+
FABLE+ is a diagnostic benchmark for evaluating data-flow reasoning in procedural text. It adapts classical data-flow analyses from software engineering to natural-language procedures and asks whether models can track how entities, constraints, states, and intermediate information are introduced, updated, invalidated, reused, or propagated across ordered steps.
This repository is an anonymized review release for a NeurIPS Evaluations and Datasets Track submission. During review, author names, institutional links, non-anonymous project pages, and non-anonymous repository links are intentionally omitted. These will be restored after review where appropriate.
## Dataset Summary
FABLE+ contains 3,200 question-answer instances spanning four procedural domains:
- Recipes: human-executed cooking procedures with ingredients, tools, and intermediate products.
- Travel routes: semi-automated turn-by-turn navigation procedures generated from routing metadata.
- Automated plans: planner-generated action sequences from classical planning domains.
- Information-seeking dialogs: multi-turn 20 Questions-style dialogs where constraints are revealed over turns.
The benchmark is balanced across the four domains and eight data-flow analysis types: each domain contributes 800 instances, 100 per analysis type.
FABLE+ is intended to support controlled evaluation of procedural reasoning. It is not designed to measure all aspects of natural-language understanding, nor should it be interpreted as evidence that a model is reliable for high-stakes procedural decision-making.
The dataset is released as a single CSV file with the following header:
```text
Goal,Plan,Question,Ground Truth
```
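A minimal loading sketch with pandas is shown below. The file name `fable_plus.csv` is an assumption; substitute the CSV actually shipped in this repository.

```python
import pandas as pd

# Load the benchmark. The file name is an assumption; use the CSV
# actually shipped in this repository.
df = pd.read_csv("fable_plus.csv")

# Each row pairs one procedure with one data-flow question.
for _, row in df.head(3).iterrows():
    print("Goal:        ", row["Goal"])
    print("Question:    ", row["Question"])
    print("Ground Truth:", row["Ground Truth"])
```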
## Data-Flow Analyses
| Analysis type | Procedural adaptation | Reasoning dimension |
|---|---|---|
| Reaching Definitions | Determines whether an entity, state, predicate, or dialog constraint introduced earlier reaches a later step without being invalidated. | State |
| Very Busy Expressions | Checks whether an entity or expression produced at one point must be consumed or evaluated along future procedural paths. | Causal |
| Available Expressions | Verifies whether a computed or established state remains available for reuse at a later point. | State |
| Live Variable Analysis | Determines whether an entity, resource, predicate, or constraint remains needed for future steps. | State |
| Interval Analysis | Tracks numeric ranges, durations, bounds, or step intervals across the procedure. | Temporal |
| Type-State Analysis | Checks whether entities or predicates follow valid state transitions across steps. | State |
| Taint Analysis | Tracks propagation of undesirable, contaminated, invalid, or unsupported information through later steps. | Causal |
| Concurrency Analysis | Determines whether steps can be reordered or executed concurrently without violating dependencies. | Temporal |
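To make the adaptation concrete, the sketch below shows a procedural analogue of reaching definitions over a toy recipe. The `Step` representation and its define/invalidate sets are illustrative assumptions, not the benchmark's internal format.

```python
from dataclasses import dataclass, field

@dataclass
class Step:
    text: str
    defines: set = field(default_factory=set)      # entities introduced or updated
    invalidates: set = field(default_factory=set)  # entities consumed or overwritten

def reaches(steps: list[Step], entity: str, def_step: int, use_step: int) -> bool:
    """True if `entity` introduced at `def_step` is still valid at `use_step`."""
    if entity not in steps[def_step].defines:
        return False
    for k in range(def_step + 1, use_step):
        if entity in steps[k].invalidates or entity in steps[k].defines:
            return False  # consumed or redefined before the later step
    return True

recipe = [
    Step("Whisk eggs into a batter.", defines={"batter"}),
    Step("Pour the batter into the pan.", invalidates={"batter"}, defines={"pancake"}),
    Step("Flip the pancake.", defines={"pancake"}),
]
print(reaches(recipe, "batter", 0, 2))  # False: the batter was consumed at step 1
```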
## Dataset Construction
FABLE+ is constructed through a multi-stage pipeline; a toy sketch of steps 3-5 follows the list:
1. Domain-specific source data are collected or generated.
2. Procedures are parsed into explicit step and entity or constraint representations.
3. A step-dependency graph and an entity-flow or constraint-flow graph are constructed.
4. Data-flow analysis templates are instantiated over the graph representation.
5. Ground-truth answers are computed from the structured representation.
6. A balanced benchmark subset is sampled across domains and analysis types.
7. Quality checks are applied to verify procedure parsing, question clarity, and answer correctness.
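Under toy assumptions, steps 3 through 5 can be sketched as a small step-dependency graph, a question template, and a ground-truth answer computed directly from the structure; the graph and template below are illustrative.

```python
# Step 3 (toy form): a step-dependency graph as an adjacency list, where
# an edge i -> j means step j consumes something step i produces.
deps = {0: [1], 1: [2], 2: []}

def depends_on(graph: dict, i: int, j: int) -> bool:
    """True if step j transitively depends on step i."""
    stack, seen = [i], set()
    while stack:
        node = stack.pop()
        if node == j:
            return True
        if node not in seen:
            seen.add(node)
            stack.extend(graph[node])
    return False

# Steps 4-5 (toy form): instantiate a concurrency-style question template
# and compute its answer from the graph, not from any model output.
question = "Can steps 1 and 3 be executed in either order?"
answer = "No" if depends_on(deps, 0, 2) else "Yes"
print(question, "->", answer)  # -> No: step 3 depends on step 1 via step 2
```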
### Recipes
The recipe domain is based on the publicly available English Recipes Dataset. The construction pipeline filters recipes to retain well-formed procedures with at least three steps, extracts candidate entities using spaCy, removes noisy or generic terms, and keeps recipes whose entities can be grounded in a curated vocabulary. The resulting recipe subset contains 1,382 procedures and contributes 800 released QA pairs.
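The entity-extraction step might look roughly like the sketch below; the spaCy model, the curated vocabulary, and the noun-chunk heuristic are placeholders rather than the exact pipeline.

```python
import spacy

# Assumption: a small English pipeline; the card does not specify which
# spaCy model was used.
nlp = spacy.load("en_core_web_sm")
CURATED_VOCAB = {"egg", "flour", "batter", "pan", "whisk"}  # placeholder vocabulary

def extract_entities(step_text: str) -> set[str]:
    """Extract noun-chunk candidates and ground them in the vocabulary."""
    doc = nlp(step_text)
    candidates = {chunk.root.lemma_.lower() for chunk in doc.noun_chunks}
    return candidates & CURATED_VOCAB

print(extract_entities("Whisk the eggs and flour into a smooth batter."))
```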
### Travel Routes
The travel route domain consists of turn-by-turn navigation procedures generated using Valhalla over OpenStreetMap data. Start and destination locations are drawn from public datasets of airports, historic sites, government buildings, hospitals, and landmarks in the contiguous United States. Routes are constrained to remain within a single state, with start and destination at most 150 km apart. The benchmark uses 1,500 routes and contributes 800 released QA pairs.
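The 150 km endpoint constraint corresponds to a simple great-circle filter over candidate location pairs; the sketch below assumes records with `state`, `lat`, and `lon` fields and does not reproduce Valhalla route generation.

```python
from math import asin, cos, radians, sin, sqrt

def haversine_km(lat1: float, lon1: float, lat2: float, lon2: float) -> float:
    """Great-circle distance between two (lat, lon) points in kilometers."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6371.0 * asin(sqrt(a))

def eligible(start: dict, dest: dict) -> bool:
    """Keep a pair only if it is intra-state and at most 150 km apart."""
    if start["state"] != dest["state"]:
        return False
    return haversine_km(start["lat"], start["lon"], dest["lat"], dest["lon"]) <= 150.0
```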
### Automated Plans
The automated planning domain uses classical planning tasks from International Planning Competition domains, including Hanoi, Gripper, Ferry, and Driverlog. Plans are generated with Fast Downward from PDDL specifications and rendered into natural-language action sequences using AutoPlanBench-style mappings. The resulting set contains 1,378 distinct plans and contributes 800 released QA pairs.
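The natural-language rendering step can be sketched as a template lookup over planner output; the action names, argument order, and templates below are illustrative stand-ins for the AutoPlanBench-style mappings.

```python
# Illustrative templates keyed by PDDL action name; the released data
# uses AutoPlanBench-style mappings rather than these toy strings.
TEMPLATES = {
    "pick": "Pick up {0} in {1}.",
    "move": "Move from {0} to {1}.",
    "drop": "Put down {0} in {1}.",
}

def render(plan_lines: list[str]) -> list[str]:
    """Turn Fast Downward-style plan lines into natural-language steps."""
    steps = []
    for line in plan_lines:
        action, *args = line.strip("() \n").split()
        steps.append(TEMPLATES[action.lower()].format(*args))
    return steps

print(render(["(pick ball1 rooma)", "(move rooma roomb)", "(drop ball1 roomb)"]))
```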
### Information-Seeking Dialogs
The dialog domain instantiates a 20 Questions-style information-seeking game. A hidden subject is sampled from hierarchical source data, and each dialog turn updates the set of possible subjects through explicit constraints. The generated dialogs include positive answers, negative answers, unknown answers, omitted dependencies, and unsupported claims so that the benchmark can test constraint propagation, invalidation, and recovery from partial information. The dialog domain contributes 800 released QA pairs.
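Constraint propagation in this domain amounts to filtering a candidate-subject set turn by turn; the subjects, predicates, and answer handling below are toy assumptions.

```python
# Toy hidden-subject universe with per-subject predicates.
SUBJECTS = {
    "penguin": {"is_animal", "is_bird"},
    "eagle": {"is_animal", "is_bird", "can_fly"},
    "oak": {"is_plant"},
}

def apply_turn(candidates: set, predicate: str, answer: str) -> set:
    """Narrow the candidate set by one turn; 'unknown' leaves it unchanged."""
    if answer == "unknown":
        return candidates  # partial information: nothing is ruled out
    if answer == "yes":
        return {s for s in candidates if predicate in SUBJECTS[s]}
    return {s for s in candidates if predicate not in SUBJECTS[s]}

cands = set(SUBJECTS)
cands = apply_turn(cands, "is_animal", "yes")  # {'penguin', 'eagle'}
cands = apply_turn(cands, "can_fly", "no")     # {'penguin'}
print(cands)
```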
## Responsible AI and Broader Impact
The benchmark is designed to advance transparent evaluation of procedural reasoning in language models. The released data are derived from public sources or procedurally generated artifacts and are not intended to contain personal, sensitive, or proprietary information.
Potential risks are indirect. Improvements on procedural reasoning benchmarks may encourage deployment of language models in settings where incorrect procedural reasoning can cause harm. Users should not treat benchmark performance as sufficient evidence for safety in high-stakes applications. Systems used in healthcare, navigation, robotics, education, or other consequential settings should include independent verification, human oversight, and structured validation.
FABLE+ aims to support responsible evaluation by making its assumptions, construction process, answer formats, and limitations explicit.
## Licensing
The benchmark release is provided under the license specified in the dataset metadata. Users are responsible for respecting the licenses and terms of use of the underlying public data sources and tools used in the construction pipeline.