deqing committed
Commit e162113 · verified · 1 Parent(s): 0b9ebe4

Update dataset card: 3MT-3digit (a+b<=999, all single-token)

Files changed (1): README.md (+8 -8)
README.md CHANGED

@@ -12,12 +12,12 @@ configs:
   data_files:
   - split: train
     path: 10BT/train-*.parquet
-- config_name: 6MT-3digit
+- config_name: 3MT-3digit
   data_files:
   - split: train
-    path: 6MT-3digit/train-*.parquet
+    path: 3MT-3digit/train-*.parquet
   - split: test
-    path: 6MT-3digit/test-*.parquet
+    path: 3MT-3digit/test-*.parquet
 ---
 
 # Addition Dataset
@@ -29,13 +29,13 @@ Addition problems in the format `{a} + {b} = {c}`.
 - **test**: 5K held-out evaluation examples (operands >= 10, i.e. min 2 digits)
 - **1BT**: ~85M training examples (~1 billion tokens under Llama-3 tokenizer)
 - **10BT**: ~850M training examples (~10 billion tokens)
-- **6MT-3digit**: Exhaustive 0-999 addition (all 1000x1000 = 1M ordered pairs, ~6.5M tokens). Symmetry-safe train/test split (10% test).
+- **3MT-3digit**: Exhaustive single-token addition: all (a, b) with a, b in [0, 999] and a+b <= 999. 500,500 ordered pairs, ~3M tokens. All of a, b, c are single tokens. Symmetry-safe train/test split (10% test).
 
 ## Deduplication
 
 - Commutative dedup: if `a + b = c` exists, `b + a = c` is excluded (1BT/10BT)
 - Test exclusion: both orderings of test-set pairs are excluded from train splits
-- 6MT-3digit: both orderings always in the same split (no commutative leakage)
+- 3MT-3digit: both orderings always in the same split (no commutative leakage)
 
 ## Usage
 
@@ -45,7 +45,7 @@ from datasets import load_dataset
 train = load_dataset("deqing/addition_dataset", "1BT", split="train")
 test = load_dataset("deqing/addition_dataset", "test", split="test")
 
-# 3-digit exhaustive (0-999)
-train_3d = load_dataset("deqing/addition_dataset", "6MT-3digit", split="train")
-test_3d = load_dataset("deqing/addition_dataset", "6MT-3digit", split="test")
+# Single-token exhaustive (0-999, a+b<=999)
+train_3d = load_dataset("deqing/addition_dataset", "3MT-3digit", split="train")
+test_3d = load_dataset("deqing/addition_dataset", "3MT-3digit", split="test")
 ```
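The two key claims in the updated card — 500,500 ordered pairs with a+b <= 999, and a symmetry-safe split where both orderings of a pair land together — can be sanity-checked with a short sketch. This is an illustration of the stated properties, not the author's actual generation script; the seed and split mechanics are assumptions:

```python
import random

# All ordered pairs (a, b) with 0 <= a, b <= 999 and a + b <= 999.
pairs = [(a, b) for a in range(1000) for b in range(1000 - a)]
# For each sum s in 0..999 there are s + 1 pairs, so 1 + 2 + ... + 1000 = 500,500.
assert len(pairs) == 500_500

# Symmetry-safe 90/10 split: decide membership on the *unordered* pair,
# so (a, b) and (b, a) always land in the same split.
rng = random.Random(0)
unordered = sorted({tuple(sorted(p)) for p in pairs})
test_keys = set(rng.sample(unordered, k=len(unordered) // 10))
test = {p for p in pairs if tuple(sorted(p)) in test_keys}
train = {p for p in pairs if tuple(sorted(p)) not in test_keys}

# No commutative leakage: the mirror of a test pair is never in train.
assert all((b, a) not in train for (a, b) in test)
```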