Update dataset card for v2: Scylla v2 + SP4096/8192/12288/16384

#4
by Norelec7 - opened
Files changed (1)
  1. README.md +84 -27
README.md CHANGED
@@ -4,7 +4,7 @@ task_categories:
  - text-generation
  language:
  - en
- pretty_name: Parameter Golf Competition Data
  size_categories:
  - 1B<n<10B
  tags:
@@ -16,7 +16,17 @@ tags:

  # Parameter Golf Competition Data

- Pre-tokenized [FineWeb](https://huggingface.co/datasets/HuggingFaceFW/fineweb) shards for the [OpenAI Parameter Golf](https://github.com/openai/parameter-golf) competition. Two tokenizations included. Free checkpoint persistence API. Zero setup friction.

  ---

@@ -24,7 +34,7 @@ Pre-tokenized [FineWeb](https://huggingface.co/datasets/HuggingFaceFW/fineweb) s

  Think of it like plumbing:

- 1. **The reservoir** is this dataset — 26GB of competition data, pre-processed and ready to flow.
  2. **The pipe** is `huggingface-cli download` — one command, and data flows to your GPU pod. Fast, resumable. If the pipe breaks mid-transfer, reconnect and it picks up where it left off.
  3. **Your pod** is the sink — data arrives at `/workspace/data/`, ready to use. No processing, no conversion, no waiting.
  4. **The safety valve** is checkpoint persistence — every N steps, your training progress flows out to cloud storage. Pod dies? New pod picks up the flow from the last save. No lost work.
@@ -39,13 +49,23 @@ That's it. Data flows in. Checkpoints flow out. You train in between.
  # Step 1: Install huggingface-cli (if you don't have it)
  pip install huggingface-hub

- # Step 2: Download the competition data
  huggingface-cli download LightSpeedUp/parameter-golf-data --include "fineweb_sp1024/*" --local-dir /workspace/data

  # Step 3: That's it. Train.
  python train_gpt.py --data_dir /workspace/data/fineweb_sp1024
  ```

  **I want to save checkpoints so I don't lose work:**

  ```bash
@@ -66,7 +86,7 @@ curl -o checkpoint.pt \
  Use our RunPod template: `matotezitanka/proteus-pytorch:community`

  Set these env vars before launch:
- - `PGOLF_DATA=sp1024` (or `scylla`)
  - `PGOLF_SHARDS=full` (or `mini` for testing)
  - `PGOLF_GITHUB_TOKEN=ghp_yourtoken` (optional, for checkpoints)
  - `PGOLF_USER=yourgithubname` (optional)
@@ -83,9 +103,15 @@ Hit deploy. SSH in when it's ready. Everything is there.
  | Tokenizer | Vocab | Size | Use case |
  |-----------|-------|------|----------|
  | **SP1024** | 1024 tokens | ~15 GB | Competition default. Most PRs use this. |
- | **Scylla** | 998 tokens | ~11 GB | TokenMonster-derived. Used by top entries. |

- Each includes 80 training shards + 1 validation shard + tokenizer models.

  ### Download Options

@@ -95,13 +121,31 @@ Each includes 80 training shards + 1 validation shard + tokenizer models.
  # SP1024 — competition default (~15 GB)
  huggingface-cli download LightSpeedUp/parameter-golf-data --include "fineweb_sp1024/*" --local-dir /workspace/data

- # Scylla — TokenMonster 998-token vocab (~11 GB)
  huggingface-cli download LightSpeedUp/parameter-golf-data --include "fineweb_scylla/*" --local-dir /workspace/data

- # Both datasets + all tokenizers (~26 GB)
  huggingface-cli download LightSpeedUp/parameter-golf-data --local-dir /workspace/data

- # Just tokenizer models (tiny, < 1 MB)
  huggingface-cli download LightSpeedUp/parameter-golf-data --include "tokenizers/*" --local-dir /workspace/data

@@ -188,16 +232,24 @@ If any checksum fails, re-download that file. The download is resumable — you

  ```
  parameter-golf-data/
- ├── fineweb_sp1024/
- │   ├── fineweb_train_000000.bin ... fineweb_train_000079.bin (80 shards)
- │   └── fineweb_val_000000.bin
- ├── fineweb_scylla/
- │   ├── fineweb_train_000000.bin ... fineweb_train_000079.bin (80 shards)
- │   └── fineweb_val_000000.bin
  ├── tokenizers/
- │   ├── fineweb_1024_bpe.model (SP1024 SentencePiece)
- │   ├── scylla/candidate.vocab (TokenMonster 998)
- │   └── scylla/candidate.meta.npz (byte LUTs)
  ├── SHA256SUMS.txt
  └── PATENTS.md
  ```
@@ -241,14 +293,17 @@ That's fair. You can:

  ## Provenance

- - **Source:** [FineWeb](https://huggingface.co/datasets/HuggingFaceFW/fineweb) (CommonCrawl-derived, by Hugging Face)
- - **SP1024 tokenization:** SentencePiece BPE, 1024 tokens — from the [openai/parameter-golf](https://github.com/openai/parameter-golf) competition repo
- - **Scylla tokenization:** TokenMonster vocabulary (998 tokens) by [@simon-marcus](https://github.com/simon-marcus) ([PR #1143](https://github.com/openai/parameter-golf/pull/1143)). Retokenized using our pipeline.
- - **No modification** to token sequences — byte-identical to what you'd produce by tokenizing the raw data yourself.

  ### Attribution Chain

- CommonCrawl (CC-BY) → Hugging Face FineWeb (ODC-By 1.0) → This dataset (ODC-By 1.0)

  ## License

@@ -258,9 +313,13 @@ CommonCrawl (CC-BY) → Hugging Face FineWeb (ODC-By 1.0) → This dataset (ODC-

  ---

  ## Roadmap

- - SP4096 and SP8192 tokenizations (need tokenizer models — contributions welcome)
  - Automated checkpoint save/resume in the boot script
  - Open-source the CF Worker code

@@ -269,5 +328,3 @@ CommonCrawl (CC-BY) → Hugging Face FineWeb (ODC-By 1.0) → This dataset (ODC-
  - [The Agora](https://matotezitanka.github.io/parameter-golf) — live leaderboard + compliance tracker
  - [Issue #942](https://github.com/openai/parameter-golf/issues/942) — compute resources discussion
  - [Issue #140](https://github.com/openai/parameter-golf/issues/140) — competition discussion thread
-
- Built by [Light Speed Up](https://lightspeedup.com) for the Parameter Golf community.
 
  - text-generation
  language:
  - en
+ pretty_name: Parameter Golf Competition Data v2
  size_categories:
  - 1B<n<10B
  tags:
 
  # Parameter Golf Competition Data

+ Pre-tokenized [FineWeb](https://huggingface.co/datasets/HuggingFaceFW/fineweb) shards for the [OpenAI Parameter Golf](https://github.com/openai/parameter-golf) competition. Multiple SentencePiece vocab sizes plus a corrected byte-exact Scylla (TokenMonster) tokenization. Free checkpoint persistence API. Zero setup friction.
+
+ ---
+
+ ## ⚠️ Important: Scylla v1 Deprecated
+
+ The original `fineweb_scylla/` directory uses the 998-token vocab from [PR #1143](https://github.com/openai/parameter-golf/pull/1143). That vocab's byte-accounting metadata treated TokenMonster tokens as context-free, which overcounts source bytes by ~4%. Any `val_bpb` reported through the standard pipeline on `fineweb_scylla/` is inflated by roughly the same factor.
+
+ The bug is tracked in [Issue #897](https://github.com/openai/parameter-golf/issues/897) and corrected in [PR #1314](https://github.com/openai/parameter-golf/pull/1314) (simon-marcus, "Scylla: Corrected Byte-Exact Tokenizer Path"). The corrected path uses a full byte-native TokenMonster regime (`charset=none`, `capcode=0`, `normalization=none`, explicit 0x00–0xFF byte fallback) and is byte-exact on the fixed FineWeb validation text.
+
+ **Use `fineweb_scylla_v2/` for any new work.** Old `fineweb_scylla/` is kept for reproducibility of past runs and will not be deleted, but should not be used for new BPB comparisons.
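For intuition on the size of the effect: `val_bpb` normalizes total validation loss by the counted source bytes, so a ~4% byte-accounting error moves the reported figure by the same ~4% factor. A minimal sketch with made-up numbers (nothing below comes from a real run):

```python
import math

# val_bpb = total_loss_nats / (ln 2 * counted_source_bytes),
# so bpb scales inversely with whatever byte count the pipeline uses:
# a ~4% byte-accounting error shifts the reported bpb by ~4%.
# All numbers below are hypothetical, purely to illustrate the sensitivity.
total_loss_nats = 2.0e9           # summed NLL over the validation split
exact_bytes = 1.00e9              # byte-exact accounting (v2 path)
lut_bytes = exact_bytes * 1.04    # context-free LUT, ~4% off

bpb_exact = total_loss_nats / (math.log(2) * exact_bytes)
bpb_lut = total_loss_nats / (math.log(2) * lut_bytes)
print(f"exact: {bpb_exact:.4f}  LUT: {bpb_lut:.4f}  ratio: {bpb_exact / bpb_lut:.3f}")
```

The two figures differ by exactly the 1.04 factor, which is why v1 and v2 runs cannot be compared on the same leaderboard.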

  ---

 
  Think of it like plumbing:

+ 1. **The reservoir** is this dataset — competition data, pre-processed and ready to flow.
  2. **The pipe** is `huggingface-cli download` — one command, and data flows to your GPU pod. Fast, resumable. If the pipe breaks mid-transfer, reconnect and it picks up where it left off.
  3. **Your pod** is the sink — data arrives at `/workspace/data/`, ready to use. No processing, no conversion, no waiting.
  4. **The safety valve** is checkpoint persistence — every N steps, your training progress flows out to cloud storage. Pod dies? New pod picks up the flow from the last save. No lost work.
 
  # Step 1: Install huggingface-cli (if you don't have it)
  pip install huggingface-hub

+ # Step 2: Download the competition data (SP1024 default)
  huggingface-cli download LightSpeedUp/parameter-golf-data --include "fineweb_sp1024/*" --local-dir /workspace/data

  # Step 3: That's it. Train.
  python train_gpt.py --data_dir /workspace/data/fineweb_sp1024
  ```

+ **I want a bigger vocab:**
+
+ ```bash
+ # SP4096 — good middle ground
+ huggingface-cli download LightSpeedUp/parameter-golf-data --include "fineweb_sp4096/*" --local-dir /workspace/data
+
+ # SP8192 — a step up in vocab capacity
+ huggingface-cli download LightSpeedUp/parameter-golf-data --include "fineweb_sp8192/*" --local-dir /workspace/data
+ ```
+
  **I want to save checkpoints so I don't lose work:**

  ```bash
 
  Use our RunPod template: `matotezitanka/proteus-pytorch:community`

  Set these env vars before launch:
+ - `PGOLF_DATA=sp1024` (or `sp4096`, `sp8192`, `sp12288`, `sp16384`, or `scylla_v2`)
  - `PGOLF_SHARDS=full` (or `mini` for testing)
  - `PGOLF_GITHUB_TOKEN=ghp_yourtoken` (optional, for checkpoints)
  - `PGOLF_USER=yourgithubname` (optional)
 
  | Tokenizer | Vocab | Size | Use case |
  |-----------|-------|------|----------|
  | **SP1024** | 1024 tokens | ~15 GB | Competition default. Most PRs use this. |
+ | **SP4096** | 4096 tokens | ~12 GB | Larger vocab, shorter sequences per doc. |
+ | **SP8192** | 8192 tokens | ~11 GB | Common for bigram/mixer submissions. |
+ | **SP12288** | 12288 tokens | ~10 GB | Explores the 8k–16k vocab range. |
+ | **SP16384** | 16384 tokens | ~9 GB | Largest SentencePiece variant we publish. |
+ | **byte260** | 260 tokens | ~40 GB | Pure-byte tokenization. Bytes `0x00..0xFF` → ids `0..255` directly. Reserved specials: `pad=256, bos=257, eos=258, unk=259`. Encoding is `[257] + list(text.encode("utf-8"))`. No SentencePiece model involved. ~195 train + 2 val shards (byte density is ~4× SP1024). |
+ | **Scylla v2** | 1254 tokens | ~17 GB | **Corrected** TokenMonster byte-exact tokenizer (PR #1314). Leaderboard-comparable BPB. |
+ | ~~Scylla v1~~ | ~~998 tokens~~ | ~~≈11 GB~~ | **Deprecated** — buggy byte accounting (Issue #897). Kept for reproducibility only. |

+ Each directory contains 80 training shards + 1 validation shard (byte260: ~195 train + 2 val) + tokenizer models.

  ### Download Options

 
  # SP1024 — competition default (~15 GB)
  huggingface-cli download LightSpeedUp/parameter-golf-data --include "fineweb_sp1024/*" --local-dir /workspace/data

+ # SP4096 — 4k vocab (~12 GB)
+ huggingface-cli download LightSpeedUp/parameter-golf-data --include "fineweb_sp4096/*" --local-dir /workspace/data
+
+ # SP8192 — 8k vocab (~11 GB)
+ huggingface-cli download LightSpeedUp/parameter-golf-data --include "fineweb_sp8192/*" --local-dir /workspace/data
+
+ # SP12288 — 12k vocab (~10 GB)
+ huggingface-cli download LightSpeedUp/parameter-golf-data --include "fineweb_sp12288/*" --local-dir /workspace/data
+
+ # SP16384 — 16k vocab (~9 GB)
+ huggingface-cli download LightSpeedUp/parameter-golf-data --include "fineweb_sp16384/*" --local-dir /workspace/data
+
+ # byte260 — UTF-8 bytes + 4 reserved specials, no tokenizer training (~40 GB)
+ huggingface-cli download LightSpeedUp/parameter-golf-data --include "fineweb_byte260/*" --local-dir /workspace/data
+
+ # Scylla v2 — corrected TokenMonster 1254-token vocab (~17 GB)
+ huggingface-cli download LightSpeedUp/parameter-golf-data --include "fineweb_scylla_v2/*" --local-dir /workspace/data
+
+ # Legacy Scylla v1 — 998-token vocab, deprecated (use v2 instead)
  huggingface-cli download LightSpeedUp/parameter-golf-data --include "fineweb_scylla/*" --local-dir /workspace/data

+ # All datasets + all tokenizers (~70 GB)
  huggingface-cli download LightSpeedUp/parameter-golf-data --local-dir /workspace/data

+ # Just tokenizer models (tiny, < 2 MB)
  huggingface-cli download LightSpeedUp/parameter-golf-data --include "tokenizers/*" --local-dir /workspace/data

 
  ```
  parameter-golf-data/
+ ├── fineweb_sp1024/      # 80 train + 1 val, SentencePiece BPE 1024
+ ├── fineweb_sp4096/      # 80 train + 1 val, SentencePiece BPE 4096
+ ├── fineweb_sp8192/      # 80 train + 1 val, SentencePiece BPE 8192
+ ├── fineweb_sp12288/     # 80 train + 1 val, SentencePiece BPE 12288
+ ├── fineweb_sp16384/     # 80 train + 1 val, SentencePiece BPE 16384
+ ├── fineweb_byte260/     # ~195 train + 2 val shards, pure-byte (bytes 0x00..0xFF → ids 0..255, specials 256..259)
+ ├── fineweb_scylla_v2/   # 80 train + 1 val, corrected TokenMonster 1254-token (PR #1314)
+ ├── fineweb_scylla/      # DEPRECATED — 998-token buggy vocab, kept for reproducibility
  ├── tokenizers/
+ │   ├── fineweb_1024_bpe.model   (SP1024 SentencePiece)
+ │   ├── fineweb_4096_bpe.model   (SP4096 SentencePiece)
+ │   ├── fineweb_8192_bpe.model   (SP8192 SentencePiece)
+ │   ├── fineweb_12288_bpe.model  (SP12288 SentencePiece)
+ │   ├── fineweb_16384_bpe.model  (SP16384 SentencePiece)
+ │   ├── scylla_v2/scylla.yaml      (TokenMonster 1254, corrected)
+ │   ├── scylla_v2/scylla.meta.npz  (byte LUTs, byte-exact)
+ │   ├── scylla/candidate.vocab     (TokenMonster 998, deprecated)
+ │   └── scylla/candidate.meta.npz  (old byte LUTs, overcounts)
  ├── SHA256SUMS.txt
  └── PATENTS.md
  ```
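`SHA256SUMS.txt` uses the standard `sha256sum` format, so `sha256sum --ignore-missing -c SHA256SUMS.txt` from inside `/workspace/data` verifies a download (`--ignore-missing` skips entries for variants you didn't pull). For environments without coreutils, an equivalent check fits in a few lines of Python — a sketch of the same flow, with a scratch file standing in for a real shard (the `verify` helper name is ours):

```python
import hashlib
import pathlib
import tempfile

def verify(sums_file: pathlib.Path) -> dict[str, bool]:
    """Check files against a sha256sum-format manifest, skipping missing ones
    (mirroring `sha256sum --ignore-missing -c`)."""
    results = {}
    for line in sums_file.read_text().splitlines():
        digest, name = line.split(maxsplit=1)
        name = name.lstrip("*")              # sha256sum binary-mode marker
        path = sums_file.parent / name
        if not path.exists():                # variant not downloaded — skip
            continue
        actual = hashlib.sha256(path.read_bytes()).hexdigest()
        results[name] = (actual == digest)
    return results

# Scratch-file demo of the flow:
d = pathlib.Path(tempfile.mkdtemp())
(d / "fineweb_train_000000.bin").write_bytes(b"shard-bytes")
digest = hashlib.sha256(b"shard-bytes").hexdigest()
(d / "SHA256SUMS.txt").write_text(f"{digest}  fineweb_train_000000.bin\n")
assert verify(d / "SHA256SUMS.txt") == {"fineweb_train_000000.bin": True}
```

If any entry comes back `False`, re-download just that file — the download is resumable.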
 
  ## Provenance

+ - **Source:** [FineWeb](https://huggingface.co/datasets/HuggingFaceFW/fineweb) (CommonCrawl-derived, by Hugging Face). Our docs come from the `willdepueoai/parameter-golf` competition mirror.
+ - **SP1024 tokenization:** SentencePiece BPE, 1024 tokens — from the [openai/parameter-golf](https://github.com/openai/parameter-golf) competition repo, `data/tokenizer_specs.json`.
+ - **SP4096 / SP8192 / SP12288 / SP16384 tokenizations:** SentencePiece BPE models trained by LightSpeedUp on 1M FineWeb docs decoded byte-exactly from the canonical SP1024 shards (SP1024 uses `byte_fallback=True`, so the decode is lossless). Trainer settings mirror the competition's `build_sentencepiece_tokenizer`: `character_coverage=0.999`, `byte_fallback=True`, `split_digits=True`, `normalization_rule_name=nmt_nfkc`, `add_dummy_prefix=False`, `hard_vocab_limit=False`.
+ - **byte260 tokenization:** direct UTF-8 byte mapping. Vocab 260 = 256 bytes + 4 reserved specials. Bytes `0x00..0xFF` map to ids `0..255` directly (no offset). Reserved specials: `pad_id=256, bos_id=257, eos_id=258, unk_id=259`. Encoding is `[257] + list(text.encode("utf-8"))`; `append_eos=False`. No SentencePiece model involved. Byte-accounting LUT is trivial: `base_bytes_lut[tok] = 1 if tok < 256 else 0`. **Note on convention:** this differs from the `openai/parameter-golf` `PureByteTokenizer` defaults (which put specials at 0..3 and offset bytes to 4..259). When loading these shards into the canonical `train_gpt.py --variant byte260` path, use a loader that matches the convention above.
+ - **Scylla v2 tokenization:** corrected byte-exact TokenMonster vocab (1254 tokens, logical 1178, `bos_id=1253`) from [@simon-marcus](https://github.com/simon-marcus)'s [PR #1314](https://github.com/openai/parameter-golf/pull/1314). Regime: `charset=none`, `capcode=0`, `normalization=none`, explicit full byte fallback, latin-1 byte interpretation, synthetic zero-byte BOS per doc.
+ - **Scylla v1 tokenization (deprecated):** original 998-token TokenMonster vocab from [@simon-marcus](https://github.com/simon-marcus)'s [PR #1143](https://github.com/openai/parameter-golf/pull/1143). Superseded by v2 due to the byte-accounting issue in [Issue #897](https://github.com/openai/parameter-golf/issues/897).
+ - **No modification** to token sequences on any variant — byte-identical to what you'd produce by running the same pipeline yourself.
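The byte260 convention described above fits in a few lines. A reference sketch — the id constants and the `[257] + bytes` encoding are exactly as stated in the card; the `encode`/`decode`/`source_bytes` names are ours:

```python
# byte260 convention as documented above: ids 0..255 are raw bytes,
# 256..259 are reserved specials, and every doc starts with a BOS (257).
PAD_ID, BOS_ID, EOS_ID, UNK_ID = 256, 257, 258, 259

def encode(text: str) -> list[int]:
    # [257] + list(text.encode("utf-8")), append_eos=False
    return [BOS_ID] + list(text.encode("utf-8"))

def decode(ids: list[int]) -> str:
    # Drop the four specials and reassemble the raw UTF-8 byte stream
    return bytes(t for t in ids if t < 256).decode("utf-8")

def source_bytes(ids: list[int]) -> int:
    # The trivial byte-accounting LUT: 1 byte per byte-id, 0 per special
    return sum(1 for t in ids if t < 256)

ids = encode("naïve")
assert decode(ids) == "naïve"
assert ids[0] == BOS_ID
assert source_bytes(ids) == len("naïve".encode("utf-8"))
```

A loader for the `PureByteTokenizer` default layout (specials 0..3, bytes offset to 4..259) would instead subtract 4 before the `bytes(...)` call — which is exactly the mismatch the convention note warns about.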

  ### Attribution Chain

+ CommonCrawl (CC-BY) → Hugging Face FineWeb (ODC-By 1.0) → willdepueoai/parameter-golf (ODC-By 1.0) → This dataset (ODC-By 1.0)

  ## License

 

  ---

+ ## Related Community Resources
+
+ - **[`sproos/parameter-golf-tokenizers`](https://huggingface.co/sproos/parameter-golf-tokenizers)** — a community mirror that publishes `fineweb10B_{sp1024, sp2048, sp4096, sp8192}/` pre-tokenized shards plus the corresponding SentencePiece `.model` / `.vocab` files. If you only need the SP variants in the 1K–8K vocab range, sproos is a direct source. Our `LightSpeedUp/parameter-golf-data` drop is complementary: it adds the larger SP variants (12288, 16384), the corrected Scylla v2, and byte260, and bundles the free checkpoint-persistence API + Docker image for end-to-end training on free-tier GPUs.
+
  ## Roadmap

+ - Cloudflare R2 mirror for HF-rate-limited users (coming soon)
  - Automated checkpoint save/resume in the boot script
  - Open-source the CF Worker code

  - [The Agora](https://matotezitanka.github.io/parameter-golf) — live leaderboard + compliance tracker
  - [Issue #942](https://github.com/openai/parameter-golf/issues/942) — compute resources discussion
  - [Issue #140](https://github.com/openai/parameter-golf/issues/140) — competition discussion thread