---
configs:
- config_name: add_6digit
  data_files:
  - split: train
    path: add_6digit/train.parquet
  - split: validation
    path: add_6digit/val.parquet
  - split: test
    path: add_6digit/eval_stratified.parquet
- config_name: add_sub_6digit
  data_files:
  - split: train
    path: add_sub_6digit/train.parquet
  - split: validation
    path: add_sub_6digit/val.parquet
  - split: test
    path: add_sub_6digit/eval_stratified.parquet
language:
- en
license: apache-2.0
size_categories:
- 1M<n<10M
tags:
- arithmetic
- interpretability
- sorl
- mechanistic-interpretability
- addition
- subtraction
- quirke
task_categories:
- text-generation
---
# Arithmetic SoRL Data

Training and evaluation data for the SoRL Arithmetic Interpretability Study: small transformers trained on integer addition and subtraction, with SoRL used to externalize carry/borrow circuits as explicit abstraction tokens.

Reference: Quirke et al., "Understanding Addition and Subtraction in Transformers" (2024).
## Dataset Structure

| Subfolder | Operations | Train | Val | Eval (stratified) |
|---|---|---|---|---|
| `add_6digit` | addition only | 500K | 10K | ~550 (50 per S0-S6 + 200 random) |
| `add_sub_6digit` | add + sub | 500K | 10K | ~1100 (50 per S0-S6 + M0-M6 + random) |
### Columns

| Column | Type | Description |
|---|---|---|
| `tokens` | list[int] | Full sequence (21 tokens for 6-digit) |
| `labels` | list[str] | Per-answer-digit sub-task label |
| `op` | str | `"add"` or `"sub"` |
| `complexity` | str | Quirke complexity: S0-S6 (add) or M0-M6 (sub) |
| `cascade_depth` | int | Max carry/borrow cascade length |
| `x_digits` | list[int] | First operand (MSB first) |
| `y_digits` | list[int] | Second operand (MSB first) |
| `z_digits` | list[int] | Answer (MSB first, n_digits+1) |

The `eval_stratified` split has an additional `eval_category` column.
## Sub-task Labels (Quirke et al.)

Each answer digit requires a specific arithmetic operation:

### Addition

| Label | Name | Condition | Role |
|---|---|---|---|
| SA | Base Add | Dn + D'n < 9, no carry | Simplest case |
| SC | Make Carry | Dn + D'n >= 10 | Generates carry |
| SS | Sum is 9 | Dn + D'n == 9 | Propagates carry if one arrives |
| UC | Use Carry | carry_in=1, sum != 9 | Consumes incoming carry |
| US | Use Sum-9 | carry_in=1, sum == 9 | Cascade: hardest case |
### Subtraction (x >= y)

| Label | Name | Condition | Role |
|---|---|---|---|
| MD | Base Diff | Dn > D'n, no borrow | Simplest case |
| MB | Make Borrow | Dn < D'n | Generates borrow |
| ME | Equal digits | Dn == D'n | Propagates borrow if one arrives |
| UB | Use Borrow | borrow_in=1, Dn != D'n | Consumes incoming borrow |
| UD | Use Equal | borrow_in=1, Dn == D'n | Cascade: hardest case |
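The addition rules above can be sketched in a few lines. This is an illustrative re-implementation for readers, not the dataset's actual generator; the function name and the LSB-first walk are assumptions:

```python
def add_labels(x_digits, y_digits):
    """Label each answer digit of x + y per the addition table above.

    Digits are MSB first, matching the dataset columns; the extra
    (n_digits+1)-th answer digit is ignored in this sketch.
    """
    labels = []
    carry = 0
    # Walk from the least significant digit upward.
    for dx, dy in zip(reversed(x_digits), reversed(y_digits)):
        s = dx + dy
        if carry == 0:
            if s >= 10:
                label = "SC"   # Make Carry
            elif s == 9:
                label = "SS"   # Sum is 9
            else:
                label = "SA"   # Base Add
        else:
            label = "US" if s == 9 else "UC"  # Use Sum-9 / Use Carry
        carry = 1 if s + carry >= 10 else 0
        labels.append(label)
    return list(reversed(labels))  # back to MSB-first order
```

For the S6 example used later in this card, `add_labels([5]*6, [4, 4, 4, 4, 4, 8])` yields `['US', 'US', 'US', 'US', 'US', 'SC']`: one Make Carry at D0 followed by five cascading Use Sum-9 positions.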
## Complexity Classification (Quirke Table 8)

Complexity = length of the longest carry/borrow cascade chain.

Example: 555555 + 444448 = 1000003 is S6: the carry from D0 cascades through 5 consecutive sum-9 positions.

- S0: no carries (~10%)
- S1: isolated carries (~50%)
- S2: cascade of 2 (~26%)
- S3: cascade of 3 (~9%)
- S4: cascade of 4 (~3%)
- S5: cascade of 5 (~1%)
- S6: cascade of 6 (<0.5%)
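A minimal sketch of the cascade-depth computation this classification implies (a carry-generating position plus the run of sum-9 positions immediately above it); this is not the dataset's actual code, and handles addition only:

```python
def cascade_depth(x_digits, y_digits):
    """Longest carry cascade in x + y; digits are MSB first. 0 -> S0, k -> Sk."""
    sums = [a + b for a, b in zip(x_digits, y_digits)][::-1]  # LSB first
    best = 0
    for i, s in enumerate(sums):
        if s >= 10:  # carry generated at this position
            depth = 1
            j = i + 1
            while j < len(sums) and sums[j] == 9:  # carry propagates through sum-9s
                depth += 1
                j += 1
            best = max(best, depth)
    return best
```

On the example above, `cascade_depth([5]*6, [4, 4, 4, 4, 4, 8])` returns `6` (S6), while a carry-free pair like `123 + 456` returns `0` (S0).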
## Data Enrichment

Following Quirke et al.: in 60% of batches, 40% of digit positions are forced to sum to 9, increasing cascade frequency so the model sees enough S4-S6 cases.
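The enrichment rule could be sketched as follows; the function name, sampling details, and the simplification that leading digits may be zero are all assumptions of this illustration:

```python
import random

def make_pair(n_digits=6, rng=random):
    """Sample an operand pair, enriched per the rule above (illustrative)."""
    x = [rng.randrange(10) for _ in range(n_digits)]
    y = [rng.randrange(10) for _ in range(n_digits)]
    if rng.random() < 0.6:  # 60% of batches are enriched
        # Force 40% of digit positions to sum to 9.
        for i in rng.sample(range(n_digits), k=int(0.4 * n_digits)):
            y[i] = 9 - x[i]
    return x, y
```

Forcing `y[i] = 9 - x[i]` makes position `i` a carry-propagating SS position, so long S4-S6 cascades become far more frequent than under uniform sampling.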
## Usage

```python
from datasets import load_dataset

ds = load_dataset("thoughtworks/arithmetic-sorl-data", data_dir="add_6digit")
print(ds["train"][0])
# {'tokens': [...], 'labels': ['SA', 'UC', 'US', ...],
#  'complexity': 'S3', 'cascade_depth': 3, ...}

# Stratified eval
eval_ds = load_dataset("thoughtworks/arithmetic-sorl-data",
                       data_dir="add_6digit", data_files="eval_stratified.parquet")
```
## Related

- Model checkpoints: `thoughtworks/arithmetic-sorl`
- Code: `mod_gpt/arithmetic/`