---
license: odc-by
task_categories:
- text-generation
language:
- en
pretty_name: Parameter Golf Competition Data v2
size_categories:
- 1B<n<10B
tags:
- parameter-golf
- fineweb
- language-modeling
- competition
---

# Parameter Golf Competition Data

Pre-tokenized [FineWeb](https://huggingface.co/datasets/HuggingFaceFW/fineweb) shards for the [OpenAI Parameter Golf](https://github.com/openai/parameter-golf) competition. Multiple SentencePiece vocab sizes plus a corrected byte-exact Scylla (TokenMonster) tokenization. Free checkpoint persistence API. Zero setup friction.

---

## ⚠️ Important: Scylla v1 Deprecated

The original `fineweb_scylla/` directory uses the 998-token vocab from [PR #1143](https://github.com/openai/parameter-golf/pull/1143). That vocab's byte-accounting metadata treated TokenMonster tokens as context-free, which overcounts source bytes by ~4%. Because BPB divides total loss bits by the counted source bytes, any `val_bpb` reported through the standard pipeline on `fineweb_scylla/` is understated (artificially low) by roughly the same factor.

The bug is tracked in [Issue #897](https://github.com/openai/parameter-golf/issues/897) and corrected in [PR #1314](https://github.com/openai/parameter-golf/pull/1314) (simon-marcus, "Scylla: Corrected Byte-Exact Tokenizer Path"). The corrected path uses a full byte-native TokenMonster regime (`charset=none`, `capcode=0`, `normalization=none`, explicit 0x00–0xFF byte fallback) and is byte-exact on the fixed FineWeb validation text.

**Use `fineweb_scylla_v2/` for any new work.** Old `fineweb_scylla/` is kept for reproducibility of past runs and will not be deleted, but should not be used for new BPB comparisons.

---

## How It Works (The Simple Version)

Think of it like plumbing:

1. **The reservoir** is this dataset — competition data, pre-processed and ready to flow.
2. **The pipe** is `huggingface-cli download` — one command, and data flows to your GPU pod. Fast, resumable. If the pipe breaks mid-transfer, reconnect and it picks up where it left off.
3. **Your pod** is the sink — data arrives at `/workspace/data/`, ready to use. No processing, no conversion, no waiting.
4. **The safety valve** is checkpoint persistence — every N steps, your training progress flows out to cloud storage. Pod dies? New pod picks up the flow from the last save. No lost work.

That's it. Data flows in. Checkpoints flow out. You train in between.

### Step by Step

**I just want to train. What do I do?**

```bash
# Step 1: Install huggingface-cli (if you don't have it)
pip install huggingface-hub

# Step 2: Download the competition data (SP1024 default)
huggingface-cli download LightSpeedUp/parameter-golf-data --include "fineweb_sp1024/*" --local-dir /workspace/data

# Step 3: That's it. Train.
python train_gpt.py --data_dir /workspace/data/fineweb_sp1024
```
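If you want to inspect the downloaded shards from Python before training, here's a minimal reader. The on-disk layout is defined by the competition repo's loader, not by this card; this sketch assumes flat little-endian uint16 token ids with no header (the common nanoGPT-style convention), so verify against `train_gpt.py` before relying on it:

```python
import array
import sys
from pathlib import Path

def load_shard(path: str) -> array.array:
    """Read a .bin shard as a flat sequence of little-endian uint16 token ids."""
    tokens = array.array("H")          # unsigned 16-bit
    tokens.frombytes(Path(path).read_bytes())
    if sys.byteorder != "little":      # shards assumed little-endian on disk
        tokens.byteswap()
    return tokens

# e.g. toks = load_shard("/workspace/data/fineweb_sp1024/fineweb_train_000000.bin")
# For SP1024, every id should be < 1024.
```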

**I want a bigger vocab:**

```bash
# SP4096 — good middle ground
huggingface-cli download LightSpeedUp/parameter-golf-data --include "fineweb_sp4096/*" --local-dir /workspace/data

# SP8192 — a step up in vocab capacity
huggingface-cli download LightSpeedUp/parameter-golf-data --include "fineweb_sp8192/*" --local-dir /workspace/data
```

**I want to save checkpoints so I don't lose work:**

```bash
# After every N steps in your training script, save:
curl -X PUT \
  -H "Authorization: Bearer YOUR_GITHUB_TOKEN" \
  --data-binary @checkpoint.pt \
  https://pgolf-api.lightspeedup.com/put/YOUR_GITHUB_USERNAME/my-run/checkpoint.pt

# On a new pod, resume:
curl -o checkpoint.pt \
  -H "Authorization: Bearer YOUR_GITHUB_TOKEN" \
  "https://pgolf-api.lightspeedup.com/download?run_id=my-run&filename=checkpoint.pt"
```

**I want the fully automated experience:**

Use our RunPod template: `matotezitanka/proteus-pytorch:community`

Set these env vars before launch:
- `PGOLF_DATA=sp1024` (or `sp4096`, `sp8192`, `sp12288`, `sp16384`, or `scylla_v2`)
- `PGOLF_SHARDS=full` (or `mini` for testing)
- `PGOLF_GITHUB_TOKEN=ghp_yourtoken` (optional, for checkpoints)
- `PGOLF_USER=yourgithubname` (optional)
- `PGOLF_RUN=my-experiment-1` (optional)

Hit deploy. SSH in when it's ready. Everything is there.
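The boot script itself isn't published on this card, so as a rough sketch of the manual equivalent (the mapping below is our illustration, not the script's verbatim logic), the data-related env vars boil down to choosing `--include` globs:

```python
import os

def include_patterns(data: str, shards: str) -> list[str]:
    """Map PGOLF_DATA / PGOLF_SHARDS to huggingface-cli --include globs."""
    prefix = f"fineweb_{data}"
    if shards == "mini":
        # 10 train shards + validation, like the "Mini" subset recipe
        return [f"{prefix}/fineweb_train_00000?.bin", f"{prefix}/fineweb_val*"]
    return [f"{prefix}/*"]

patterns = include_patterns(os.environ.get("PGOLF_DATA", "sp1024"),
                            os.environ.get("PGOLF_SHARDS", "full"))
# Then run: huggingface-cli download LightSpeedUp/parameter-golf-data \
#   --include <each pattern> --local-dir /workspace/data
```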

---

## Comprehensive Guide

### Available Data

| Tokenizer | Vocab | Size | Use case |
|-----------|-------|------|----------|
| **SP1024** | 1024 tokens | ~15 GB | Competition default. Most PRs use this. |
| **SP4096** | 4096 tokens | ~12 GB | Larger vocab, shorter sequences per doc. |
| **SP8192** | 8192 tokens | ~11 GB | Common for bigram/mixer submissions. |
| **SP12288** | 12288 tokens | ~10 GB | Explores the 8k–16k vocab range. |
| **SP16384** | 16384 tokens | ~9 GB | Largest SentencePiece variant we publish. |
| **byte260** | 260 tokens | ~40 GB | Pure-byte tokenization. Bytes `0x00..0xFF` → ids `0..255` directly. Reserved specials: `pad=256, bos=257, eos=258, unk=259`. Encoding is `[257] + list(text.encode("utf-8"))`. No SentencePiece model involved. ~195 train + 2 val shards (byte density is ~4× SP1024). |
| **Scylla v2** | 1254 tokens | ~17 GB | **Corrected** TokenMonster byte-exact tokenizer (PR #1314). Leaderboard-comparable BPB. |
| ~~Scylla v1~~ | ~~998 tokens~~ | ~~~11 GB~~ | **Deprecated** — buggy byte accounting (Issue #897). Kept for reproducibility only. |

Each directory contains 80 training shards + 1 validation shard + tokenizer models, except `fineweb_byte260/`, which ships ~195 training shards + 2 validation shards.
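The byte260 encoding rule in the table is simple enough to state as runnable code (`decode_byte260` is our own convenience inverse, not part of the spec):

```python
PAD, BOS, EOS, UNK = 256, 257, 258, 259  # reserved ids from the table above

def encode_byte260(text: str) -> list[int]:
    """[BOS] + raw UTF-8 bytes; append_eos=False, so no trailing EOS."""
    return [BOS] + list(text.encode("utf-8"))

def decode_byte260(ids: list[int]) -> str:
    """Drop special ids (>= 256) and decode the remaining bytes."""
    return bytes(i for i in ids if i < 256).decode("utf-8", errors="replace")

# encode_byte260("Hi") → [257, 72, 105]
```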

### Download Options

```bash
# === DATA SELECTION ===

# SP1024 — competition default (~15 GB)
huggingface-cli download LightSpeedUp/parameter-golf-data --include "fineweb_sp1024/*" --local-dir /workspace/data

# SP4096 — 4k vocab (~12 GB)
huggingface-cli download LightSpeedUp/parameter-golf-data --include "fineweb_sp4096/*" --local-dir /workspace/data

# SP8192 — 8k vocab (~11 GB)
huggingface-cli download LightSpeedUp/parameter-golf-data --include "fineweb_sp8192/*" --local-dir /workspace/data

# SP12288 — 12k vocab (~10 GB)
huggingface-cli download LightSpeedUp/parameter-golf-data --include "fineweb_sp12288/*" --local-dir /workspace/data

# SP16384 — 16k vocab (~9 GB)
huggingface-cli download LightSpeedUp/parameter-golf-data --include "fineweb_sp16384/*" --local-dir /workspace/data

# byte260 — UTF-8 bytes + 4 reserved specials, no tokenizer training (~40 GB)
huggingface-cli download LightSpeedUp/parameter-golf-data --include "fineweb_byte260/*" --local-dir /workspace/data

# Scylla v2 — corrected TokenMonster 1254-token vocab (~17 GB)
huggingface-cli download LightSpeedUp/parameter-golf-data --include "fineweb_scylla_v2/*" --local-dir /workspace/data

# Legacy Scylla v1 — 998-token vocab, deprecated (use v2 instead)
huggingface-cli download LightSpeedUp/parameter-golf-data --include "fineweb_scylla/*" --local-dir /workspace/data

# All datasets + all tokenizers (~125 GB)
huggingface-cli download LightSpeedUp/parameter-golf-data --local-dir /workspace/data

# Just tokenizer models (tiny, < 2 MB)
huggingface-cli download LightSpeedUp/parameter-golf-data --include "tokenizers/*" --local-dir /workspace/data


# === SHARD SUBSETS (save time/bandwidth) ===

# Mini — 10 shards for smoke tests (~2 GB)
huggingface-cli download LightSpeedUp/parameter-golf-data \
  --include "fineweb_sp1024/fineweb_train_00000?.bin" \
  --include "fineweb_sp1024/fineweb_val*" \
  --local-dir /workspace/data

# Half — 40 shards (~7 GB)
huggingface-cli download LightSpeedUp/parameter-golf-data \
  --include "fineweb_sp1024/fineweb_train_0000[0-3]?.bin" \
  --include "fineweb_sp1024/fineweb_val*" \
  --local-dir /workspace/data

# Val only — just validation data (~200 MB)
huggingface-cli download LightSpeedUp/parameter-golf-data \
  --include "fineweb_sp1024/fineweb_val*" \
  --local-dir /workspace/data
```
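The subset globs above are ordinary shell wildcards, so a quick `fnmatch` check (assuming the standard `fineweb_train_000000.bin`..`fineweb_train_000079.bin` numbering) confirms the advertised shard counts:

```python
import fnmatch

# Standard shard numbering: 80 training shards per SentencePiece variant
shards = [f"fineweb_sp1024/fineweb_train_{i:06d}.bin" for i in range(80)]

mini = fnmatch.filter(shards, "fineweb_sp1024/fineweb_train_00000?.bin")
half = fnmatch.filter(shards, "fineweb_sp1024/fineweb_train_0000[0-3]?.bin")
print(len(mini), len(half))  # 10 40
```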

**No HuggingFace account required.** This is a public dataset. No login, no token, no signup.

**Downloads are resumable.** If your connection drops, re-run the same command and it picks up where it left off.

### Checkpoint Persistence API

Save and resume training across pod preemptions. Your GitHub token is your identity — no accounts to create.

```bash
# === CHECKPOINT API (https://pgolf-api.lightspeedup.com) ===

# Upload a checkpoint
curl -X POST https://pgolf-api.lightspeedup.com/upload \
  -H "Authorization: Bearer ghp_yourtoken" \
  -H "Content-Type: application/json" \
  -d '{"run_id": "my-run", "filename": "checkpoint_step500.pt"}'

# Then PUT the file
curl -X PUT https://pgolf-api.lightspeedup.com/put/yourusername/my-run/checkpoint_step500.pt \
  -H "Authorization: Bearer ghp_yourtoken" \
  --data-binary @checkpoint_step500.pt

# Download a checkpoint
curl -o checkpoint.pt \
  -H "Authorization: Bearer ghp_yourtoken" \
  "https://pgolf-api.lightspeedup.com/download?run_id=my-run&filename=checkpoint_step500.pt"

# List your checkpoints
curl -H "Authorization: Bearer ghp_yourtoken" \
  "https://pgolf-api.lightspeedup.com/list?run_id=my-run"

# Delete a run's checkpoints
curl -X DELETE -H "Authorization: Bearer ghp_yourtoken" \
  "https://pgolf-api.lightspeedup.com/clean?run_id=my-run"
```
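If you'd rather call the API from your training script than shell out to curl, here's a stdlib-only sketch. The endpoint shapes are copied from the curl examples above; response codes and error handling are not documented here, so treat this as a starting point rather than a finished client:

```python
import urllib.parse
import urllib.request

API = "https://pgolf-api.lightspeedup.com"

def put_url(user: str, run_id: str, filename: str) -> str:
    return f"{API}/put/{user}/{run_id}/{filename}"

def download_url(run_id: str, filename: str) -> str:
    return f"{API}/download?" + urllib.parse.urlencode(
        {"run_id": run_id, "filename": filename})

def save_checkpoint(token: str, user: str, run_id: str, path: str) -> None:
    """PUT a local checkpoint file, mirroring the curl example above."""
    with open(path, "rb") as f:
        req = urllib.request.Request(
            put_url(user, run_id, path.rsplit("/", 1)[-1]),
            data=f.read(),
            method="PUT",
            headers={"Authorization": f"Bearer {token}"},
        )
    urllib.request.urlopen(req)
```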

**Limits:** 10 checkpoints per user, 2 GB max each. Auto-deleted after 7 days.

### Docker Image

```bash
docker pull matotezitanka/proteus-pytorch:community
```

Includes: PyTorch 2.11.0 + CUDA 12.8 + Flash Attention 3 + all competition deps (brotli, tokenmonster, sentencepiece) + tools (cpu_test.py, retokenizer, swap_pytorch.sh) + automated boot script.

Works on RunPod, Vast.ai, or any Docker host with NVIDIA GPUs.

### Data Integrity

Every file has a SHA256 checksum. After downloading:

```bash
cd /workspace/data
sha256sum -c SHA256SUMS.txt
```

If any checksum fails, re-download that file. The download is resumable — you won't re-download files that are already correct.
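On hosts without the `sha256sum` binary, a stdlib equivalent of `sha256sum -c` (parsing the standard `<hex>  <filename>` line format) looks like this:

```python
import hashlib
from pathlib import Path

def verify_sums(sums_file: str, root: str = ".") -> list[str]:
    """Return filenames whose on-disk SHA256 does not match SHA256SUMS.txt."""
    bad = []
    for line in Path(sums_file).read_text().splitlines():
        if not line.strip():
            continue
        expected, name = line.split(maxsplit=1)
        name = name.lstrip("*")  # sha256sum marks binary mode with a leading '*'
        actual = hashlib.sha256((Path(root) / name).read_bytes()).hexdigest()
        if actual != expected:
            bad.append(name)
    return bad
```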

### Dataset Structure

```
parameter-golf-data/
├── fineweb_sp1024/       # 80 train + 1 val, SentencePiece BPE 1024
├── fineweb_sp4096/       # 80 train + 1 val, SentencePiece BPE 4096
├── fineweb_sp8192/       # 80 train + 1 val, SentencePiece BPE 8192
├── fineweb_sp12288/      # 80 train + 1 val, SentencePiece BPE 12288
├── fineweb_sp16384/      # 80 train + 1 val, SentencePiece BPE 16384
├── fineweb_byte260/      # ~195 train + 2 val shards, pure-byte (bytes 0x00..0xFF → ids 0..255; pad=256, bos=257, eos=258, unk=259)
├── fineweb_scylla_v2/    # 80 train + 1 val, corrected TokenMonster 1254-token (PR #1314)
├── fineweb_scylla/       # DEPRECATED — 998-token buggy vocab, kept for reproducibility
├── tokenizers/
│   ├── fineweb_1024_bpe.model    (SP1024 SentencePiece)
│   ├── fineweb_4096_bpe.model    (SP4096 SentencePiece)
│   ├── fineweb_8192_bpe.model    (SP8192 SentencePiece)
│   ├── fineweb_12288_bpe.model   (SP12288 SentencePiece)
│   ├── fineweb_16384_bpe.model   (SP16384 SentencePiece)
│   ├── scylla_v2/scylla.yaml     (TokenMonster 1254, corrected)
│   ├── scylla_v2/scylla.meta.npz (byte LUTs, byte-exact)
│   ├── scylla/candidate.vocab    (TokenMonster 998, deprecated)
│   └── scylla/candidate.meta.npz (old byte LUTs, overcounts)
├── SHA256SUMS.txt
└── PATENTS.md
```

---

## Security & Privacy

We believe in transparency. Here's exactly what we can and can't see.

### What we CAN access (technically)
- **Your checkpoint files** — they're stored in our Cloudflare R2 bucket. We have admin access to the bucket. We don't look at them, but we could.
- **Your checkpoint metadata** — filenames, sizes, upload timestamps. This is visible in the R2 dashboard.
- **Request logs** — Cloudflare logs request metadata (IP addresses, timestamps, URLs) by default. We do not add any additional logging.
- **Your GitHub username** — extracted from your token to scope your storage namespace.

### What we CANNOT access
- **Your training code** — it runs on your pod, never touches our infrastructure.
- **Your model weights** (unless you upload them as a checkpoint).
- **Your GitHub token contents** — the token transits our Worker to call GitHub's API, but it is NOT stored, NOT logged, and NOT persisted anywhere. It's used once per request for authentication and discarded.
- **Other users' data** — the Worker enforces namespace isolation. Your GitHub username is your storage prefix. You cannot read, list, or delete another user's checkpoints.

### What we DO NOT do
- We do NOT sell, share, or analyze your data.
- We do NOT train models on your checkpoints.
- We do NOT log your GitHub token value.
- We do NOT track your usage beyond standard Cloudflare request metrics.

### What we disclose
- The checkpoint API Worker code is in our private repo. We plan to open-source it.
- Cloudflare's privacy policy applies to request metadata: https://www.cloudflare.com/privacypolicy/
- Checkpoints are automatically deleted after 7 days. We do not keep backups.

### If you don't trust us
That's fair. You can:
1. **Use just the HF dataset** — no account, no tokens, no interaction with our API. Just `huggingface-cli download`.
2. **Save checkpoints locally** — skip the API entirely. Save to `/workspace/` and accept the risk of losing work on preemption.
3. **Inspect the Worker** — we'll open-source it. Until then, the API surface is 5 endpoints, ~150 lines of JavaScript, zero dependencies.

---

## Provenance

- **Source:** [FineWeb](https://huggingface.co/datasets/HuggingFaceFW/fineweb) (CommonCrawl-derived, by Hugging Face). Our docs come from the `willdepueoai/parameter-golf` competition mirror.
- **SP1024 tokenization:** SentencePiece BPE, 1024 tokens — from the [openai/parameter-golf](https://github.com/openai/parameter-golf) competition repo, `data/tokenizer_specs.json`.
- **SP4096 / SP8192 / SP12288 / SP16384 tokenizations:** SentencePiece BPE models trained by LightSpeedUp on 1 M FineWeb docs decoded byte-exactly from the canonical SP1024 shards (SP1024 uses `byte_fallback=True`, so the decode is lossless). Trainer settings mirror the competition's `build_sentencepiece_tokenizer`: `character_coverage=0.999`, `byte_fallback=True`, `split_digits=True`, `normalization_rule_name=nmt_nfkc`, `add_dummy_prefix=False`, `hard_vocab_limit=False`.
- **byte260 tokenization:** direct UTF-8 byte mapping. Vocab 260 = 256 bytes + 4 reserved specials. Bytes `0x00..0xFF` map to ids `0..255` directly (no offset). Reserved specials: `pad_id=256, bos_id=257, eos_id=258, unk_id=259`. Encoding is `[257] + list(text.encode("utf-8"))`; `append_eos=False`. No SentencePiece model involved. Byte-accounting LUT is trivial: `base_bytes_lut[tok] = 1 if tok < 256 else 0`. **Note on convention:** this differs from the `openai/parameter-golf` `PureByteTokenizer` defaults (which put specials at 0..3 and offset bytes to 4..259). When loading these shards into the canonical `train_gpt.py --variant byte260` path, use a loader that matches the convention above.
- **Scylla v2 tokenization:** corrected byte-exact TokenMonster vocab (1254 tokens, logical 1178, `bos_id=1253`) from [@simon-marcus](https://github.com/simon-marcus)'s [PR #1314](https://github.com/openai/parameter-golf/pull/1314). Regime: `charset=none`, `capcode=0`, `normalization=none`, explicit full byte fallback, latin-1 byte interpretation, synthetic zero-byte BOS per doc.
- **Scylla v1 tokenization (deprecated):** original 998-token TokenMonster vocab from [@simon-marcus](https://github.com/simon-marcus)'s [PR #1143](https://github.com/openai/parameter-golf/pull/1143). Superseded by v2 due to the byte-accounting issue in [Issue #897](https://github.com/openai/parameter-golf/issues/897).
- **No modification** to token sequences on any variant — byte-identical to what you'd produce by running the same pipeline yourself.
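To see why the byte-accounting LUTs matter, here's a toy BPB calculation (our own illustration, not the competition pipeline's code): total loss in nats is converted to bits and divided by however many source bytes the LUT attributes to the tokens, so a ~4% byte overcount rescales every score by a fixed, model-independent factor.

```python
import math

def bpb(total_loss_nats: float, counted_bytes: int) -> float:
    """Bits per byte: convert nats to bits, divide by accounted source bytes."""
    return total_loss_nats / math.log(2) / counted_bytes

loss_nats, true_bytes = 70_000.0, 100_000
exact = bpb(loss_nats, true_bytes)              # byte-exact accounting (v2)
buggy = bpb(loss_nats, int(true_bytes * 1.04))  # v1-style ~4% byte overcount
# Overcounting the denominator scales the score by 1/1.04 ≈ 0.962:
assert abs(buggy / exact - 1 / 1.04) < 1e-9
```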

### Attribution Chain

CommonCrawl (CC-BY) → Hugging Face FineWeb (ODC-By 1.0) → willdepueoai/parameter-golf (ODC-By 1.0) → This dataset (ODC-By 1.0)

## License

**Data:** [Open Data Commons Attribution License (ODC-By 1.0)](https://opendatacommons.org/licenses/by/1-0/)

**Code & Tools:** Apache 2.0 — see [PATENTS.md](PATENTS.md)

---

## Related Community Resources

- **[`sproos/parameter-golf-tokenizers`](https://huggingface.co/sproos/parameter-golf-tokenizers)** — a community mirror that publishes `fineweb10B_{sp1024, sp2048, sp4096, sp8192}/` pre-tokenized shards plus the corresponding SentencePiece `.model` / `.vocab` files. If you only need the SP variants in the 1K–8K vocab range, sproos is a direct source. Our `LightSpeedUp/parameter-golf-data` drop is complementary: we add the larger SP variants (12288, 16384), the corrected Scylla v2, and byte260, and we bundle the free checkpoint-persistence API + Docker image for end-to-end training on free-tier GPUs.

## Roadmap

- Cloudflare R2 mirror for HF-rate-limited users (coming soon)
- Automated checkpoint save/resume in the boot script
- Open-source the CF Worker code

## Community

- [The Agora](https://matotezitanka.github.io/parameter-golf) — live leaderboard + compliance tracker
- [Issue #942](https://github.com/openai/parameter-golf/issues/942) — compute resources discussion
- [Issue #140](https://github.com/openai/parameter-golf/issues/140) — competition discussion thread