---
configs:
  - config_name: test
    data_files:
      - split: test
        path: test/test-*.parquet
  - config_name: 1BT
    data_files:
      - split: train
        path: 1BT/train-*.parquet
  - config_name: 10BT
    data_files:
      - split: train
        path: 10BT/train-*.parquet
  - config_name: 3MT-3digit
    data_files:
      - split: train
        path: 3MT-3digit/train-*.parquet
      - split: test
        path: 3MT-3digit/test-*.parquet
---

# Addition Dataset

Addition problems in the format `{a} + {b} = {c}`.
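For example, the row for operands 123 and 456 reads `123 + 456 = 579`.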

## Subsets

- **test**: 5K held-out evaluation examples (operands >= 10, i.e. at least 2 digits)
- **1BT**: 85M training examples (1 billion tokens under the Llama-3 tokenizer)
- **10BT**: 850M training examples (10 billion tokens)
- **3MT-3digit**: exhaustive single-token addition: all (a, b) with a, b in [0, 999] and a + b <= 999. 500,500 ordered pairs, ~3M tokens; a, b, and c are each a single token. Symmetry-safe train/test split (10% test). The pair count is reproduced in the sketch after this list.
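The 3MT-3digit pair count follows directly from the constraint: for each sum s in [0, 999] there are s + 1 ordered pairs. A minimal Python sketch (not from the dataset's build scripts) that reproduces it:

```python
# Enumerate all ordered operand pairs with a, b in [0, 999] and a + b <= 999.
pairs = [(a, b) for a in range(1000) for b in range(1000) if a + b <= 999]

# sum_{s=0}^{999} (s + 1) = 1000 * 1001 / 2 = 500,500
assert len(pairs) == 500_500
```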

## Deduplication

- **Commutative dedup**: if `a + b = c` is present, `b + a = c` is excluded (1BT/10BT)
- **Test exclusion**: both orderings of every test-set pair are excluded from the train splits
- **3MT-3digit**: both orderings always land in the same split, so there is no commutative leakage; one possible splitting scheme is sketched below
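The card does not specify how the symmetry-safe split is computed; one standard approach is to hash the *unordered* pair so that (a, b) and (b, a) always receive the same assignment. A hypothetical sketch (`split_for` is illustrative, not the dataset's actual code):

```python
import hashlib

def split_for(a: int, b: int, test_frac: float = 0.10) -> str:
    """Assign an operand pair to a split; order-insensitive by construction."""
    key = f"{min(a, b)},{max(a, b)}".encode()  # canonical unordered key
    h = int.from_bytes(hashlib.sha256(key).digest()[:8], "big")
    return "test" if h / 2**64 < test_frac else "train"

assert split_for(3, 7) == split_for(7, 3)  # both orderings land together
```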

## Usage

```python
from datasets import load_dataset

train = load_dataset("deqing/addition_dataset", "1BT", split="train")
test = load_dataset("deqing/addition_dataset", "test", split="test")

# Single-token exhaustive subset (operands in [0, 999], a + b <= 999)
train_3d = load_dataset("deqing/addition_dataset", "3MT-3digit", split="train")
test_3d = load_dataset("deqing/addition_dataset", "3MT-3digit", split="test")
```
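To spot-check the single-token property claimed for 3MT-3digit, you can run the integers 0-999 through a Llama-3 tokenizer. The sketch below assumes you have access to the gated `meta-llama/Meta-Llama-3-8B` checkpoint; any Llama-3 tokenizer variant should behave the same way:

```python
from transformers import AutoTokenizer

tok = AutoTokenizer.from_pretrained("meta-llama/Meta-Llama-3-8B")

# Every integer 0-999 should encode to exactly one token.
assert all(
    len(tok.encode(str(n), add_special_tokens=False)) == 1
    for n in range(1000)
)
```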