InductionBench is a new benchmarking suite designed to test the inductive reasoning ability of LLMs.
### Dataset Description

The benchmark is grounded in formal definitions of inductive function classes (e.g., regular functions/transducers and subregular hierarchies such as input-strictly-local (ISL), left-output-strictly-local (L-OSL), and right-output-strictly-local (R-OSL) functions).

These classes have been studied extensively in theoretical computer science and come with provable criteria, such as uniqueness of the optimal solution, search-space complexity, and required data size.

Each datapoint contains 7 fields:

- `groundtruth`: the unique minimal set of rules that generates the data
- `datasample`: a set of input-output pairs generated by the ground-truth rule set; it also forms a characteristic sample
- `input`: the input prompt, which can be fed directly to models or API endpoints
- `type`: the function class of the ground-truth rule set; one of ISL, L-OSL, or R-OSL
- `k`: the Markovian window size; ranges from 2 to 4
- `vocab_size`: the vocabulary size; ranges from 5 to 8
- `number_of_rules`: the number of rules; ranges from 3 to 5

Here is an example datapoint from the ISL function class, with k = 2, vocab size = 5, and number of rules = 3.

```
groundtruth: {ag: h, gh: b, ge: d}
datasample: {
'a': 'a', 'b': 'b', 'c': 'c', 'd': 'd', 'e': 'e', 'f': 'f', 'g': 'g', 'h': 'h',
'dc': 'dc', 'cb': 'cb', 'gd': 'gd', 'hh': 'hh', 'dh': 'dh', 'eh': 'eh',
'ef': 'ef', 'ah': 'ah', 'af': 'af', 'db': 'db', 'bb': 'bb', 'gf': 'gf',
'hb': 'hb', 'ga': 'ga', 'bh': 'bh', 'gg': 'gg', 'ac': 'ac', 'ba': 'ba',
'cc': 'cc', 'fb': 'fb', 'ae': 'ae', 'ed': 'ed', 'hf': 'hf', 'ab': 'ab',
'ca': 'ca', 'df': 'df', 'bf': 'bf', 'ch': 'ch', 'ha': 'ha', 'da': 'da',
'ee': 'ee', 'fd': 'fd', 'cg': 'cg', 'fe': 'fe', 'dd': 'dd', 'eg': 'eg',
'ff': 'ff', 'ad': 'ad', 'ge': 'gd', 'ec': 'ec', 'fh': 'fh', 'bd': 'bd',
'he': 'he', 'gc': 'gc', 'be': 'be', 'de': 'de', 'ce': 'ce', 'gh': 'gb',
'cf': 'cf', 'fg': 'fg', 'gb': 'gb', 'aa': 'aa', 'hc': 'hc', 'dg': 'dg',
'ea': 'ea', 'cd': 'cd', 'hg': 'hg', 'bc': 'bc', 'ag': 'ah', 'fc': 'fc',
'hd': 'hd', 'fa': 'fa', 'bg': 'bg', 'eb': 'eb',
'gbebd': 'gbebd', 'gca': 'gca', 'habgd': 'habgd', 'fhc': 'fhc', 'daccfe': 'daccfe',
'cahgef': 'cahgdf', 'ecah': 'ecah', 'bfaecd': 'bfaecd', 'fefc': 'fefc',
'eggbad': 'eggbad', 'fcddhg': 'fcddhg', 'gaggac': 'gahgac', 'bech': 'bech',
'haahf': 'haahf', 'hag': 'hah', 'gfadaa': 'gfadaa', 'bdbdee': 'bdbdee',
'cbhgba': 'cbhgba', 'hbe': 'hbe', 'ahh': 'ahh', 'gfcdge': 'gfcdgd',
'fbf': 'fbf', 'aaecc': 'aaecc', 'efgce': 'efgce', 'daecbe': 'daecbe', 'fegb': 'fegb',
'ffbh': 'ffbh', 'aefc': 'aefc', 'abge': 'abgd', 'hdgb': 'hdgb', 'dec': 'dec',
'dfbb': 'dfbb', 'ahdbhg': 'ahdbhg', 'dad': 'dad', 'cbdhg': 'cbdhg', 'cbh': 'cbh',
'hfhhd': 'hfhhd', 'dafff': 'dafff', 'cge': 'cgd', 'hbbabd': 'hbbabd', 'cch': 'cch',
'gab': 'gab', 'bgdegh': 'bgdegb', 'daac': 'daac', 'efb': 'efb', 'eegg': 'eegg', 'accd': 'accd',
'faa': 'faa', 'gchb': 'gchb', 'cfgahg': 'cfgahg', 'abhc': 'abhc'
}
```
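To make the rule format concrete, here is a minimal Python sketch of how an ISL rule set with window size k can be read: each rule maps a length-k window of input symbols to a replacement for that window's final symbol, and positions with no matching rule are copied unchanged. `apply_isl` is a hypothetical reader's helper inferred from the example above, not part of the released code.

```python
def apply_isl(rules: dict, k: int, s: str) -> str:
    """Apply an ISL rule set: the output symbol at position i depends only
    on the window of (up to) k input symbols ending at position i."""
    out = []
    for i, ch in enumerate(s):
        window = s[max(0, i - k + 1): i + 1]
        out.append(rules.get(window, ch))  # no matching rule: copy the symbol
    return "".join(out)

rules = {"ag": "h", "gh": "b", "ge": "d"}
print(apply_isl(rules, 2, "ag"))      # ah
print(apply_isl(rules, 2, "cahgef"))  # cahgdf
```

Under this reading the outputs match the datasample entries `'ag': 'ah'` and `'cahgef': 'cahgdf'` above.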

- **Curated by:** Wenyue Hua
- **Language(s) (NLP):** English

### Dataset Sources [optional]

<!-- Provide the basic links for the dataset. -->

- **Repository:** https://github.com/Wenyueh/inductive_reasoning_benchmark
- **Paper:** [InductionBench: LLMs Fail in the Simplest Complexity Class](https://www.arxiv.org/abs/2502.15823)

### Curation Rationale

**Synthetic Data Generation:**
Each sample (input-output pair) is generated using a controlled procedure for each subregular class. For instance, to test ISL (Input Strictly Local) functions, the data is constructed so that the output at a given position depends only on a limited, fixed window of the input. Similar strategies are used for L-OSL (Left-Output Strictly Local) and R-OSL (Right-Output Strictly Local) functions.

**Rationale:**
This ensures that each dataset instance directly reflects the intended property of its function class. It also allows for parametric variation (e.g., altering the window size, the length of strings, or the complexity of transformations), so we can systematically test how LLMs handle different facets of inductive reasoning.

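As an illustration of such a controlled procedure, the sketch below draws a random ISL rule set for given `k`, `vocab_size`, and `number_of_rules`, then produces input-output pairs by applying it. This is an assumption about the general shape of the generator, not the authors' actual code (`make_isl_instance` is a hypothetical name), and it omits the characteristic-sample and uniqueness checks described on this card.

```python
import random

def make_isl_instance(k: int, vocab_size: int, number_of_rules: int,
                      seed: int = 0):
    """Sample a random ISL rule set and input-output pairs it generates."""
    rng = random.Random(seed)
    vocab = "abcdefghijklmnopqrstuvwxyz"[:vocab_size]

    # Each rule rewrites the final symbol of a length-k input window.
    rules = {}
    while len(rules) < number_of_rules:
        window = "".join(rng.choice(vocab) for _ in range(k))
        rules[window] = rng.choice([c for c in vocab if c != window[-1]])

    def apply(s: str) -> str:
        # Output at position i depends only on the last k input symbols.
        return "".join(rules.get(s[max(0, i - k + 1): i + 1], ch)
                       for i, ch in enumerate(s))

    pairs = {}
    while len(pairs) < 20:
        s = "".join(rng.choice(vocab) for _ in range(rng.randint(1, 6)))
        pairs[s] = apply(s)
    return rules, pairs

rules, pairs = make_isl_instance(k=2, vocab_size=5, number_of_rules=3)
```

Varying the three parameters over the card's stated ranges (k in 2-4, vocab_size in 5-8, number_of_rules in 3-5) yields the parametric sweep described above.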

**Ambiguity Control:**
In inductive reasoning, a dataset can often be explained by multiple hypotheses. InductionBench tasks are therefore constructed so that there is a single best minimal description within the function class.

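One way to see what ambiguity control buys, using a hypothetical consistency check (not the authors' evaluation code): the ground-truth rule set reproduces every pair in the data sample, while dropping any single rule breaks at least one pair, so the data pins down the minimal hypothesis.

```python
def consistent(rules: dict, k: int, pairs: dict) -> bool:
    """True if the ISL rule set reproduces every input-output pair."""
    def apply(s: str) -> str:
        return "".join(rules.get(s[max(0, i - k + 1): i + 1], ch)
                       for i, ch in enumerate(s))
    return all(apply(x) == y for x, y in pairs.items())

gt = {"ag": "h", "gh": "b", "ge": "d"}
sample = {"ag": "ah", "gh": "gb", "ge": "gd", "dc": "dc", "hag": "hah"}
assert consistent(gt, 2, sample)
# Dropping any single rule breaks consistency on some pair:
assert all(not consistent({w: t for w, t in gt.items() if w != drop}, 2, sample)
           for drop in gt)
```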
### Recommendations

Users should be made aware of the risks, biases and limitations of the dataset.

## Citation [optional]

**BibTeX:**

```
@article{hua2025inductionbench,
  title={InductionBench: LLMs Fail in the Simplest Complexity Class},
  author={Hua, Wenyue and Wong, Tyler and Fei, Sun and Pan, Liangming and Jardine, Adam and Wang, William Yang},
  journal={arXiv preprint arXiv:2502.15823},
  year={2025}
}
```
## Dataset Card Authors
Wenyue Hua
|