---
license: odc-by
task_categories:
- text-generation
language:
- en
pretty_name: Parameter Golf Competition Data
size_categories:
- 1B<n<10B
tags:
- parameter-golf
- fineweb
- language-modeling
- competition
---

# Parameter Golf Competition Data
Pre-tokenized FineWeb shards for the OpenAI Parameter Golf competition. Two tokenizations included. Free checkpoint persistence API. Zero setup friction.
## How It Works (The Simple Version)
Think of it like plumbing:
- The reservoir is this dataset — 26 GB of competition data, pre-processed and ready to flow.
- The pipe is `huggingface-cli download` — one command, and data flows to your GPU pod. Fast, resumable. If the pipe breaks mid-transfer, reconnect and it picks up where it left off.
- Your pod is the sink — data arrives at `/workspace/data/`, ready to use. No processing, no conversion, no waiting.
- The safety valve is checkpoint persistence — every N steps, your training progress flows out to cloud storage. Pod dies? New pod picks up the flow from the last save. No lost work.
That's it. Data flows in. Checkpoints flow out. You train in between.
## Step by Step

### I just want to train. What do I do?
```bash
# Step 1: Install huggingface-cli (if you don't have it)
pip install huggingface-hub

# Step 2: Download the competition data
huggingface-cli download LightSpeedUp/parameter-golf-data --include "fineweb_sp1024/*" --local-dir /workspace/data

# Step 3: That's it. Train.
python train_gpt.py --data_dir /workspace/data/fineweb_sp1024
```
### I want to save checkpoints so I don't lose work
```bash
# After every N steps in your training script, save:
curl -X PUT \
  -H "Authorization: Bearer YOUR_GITHUB_TOKEN" \
  --data-binary @checkpoint.pt \
  https://pgolf-api.lightspeedup.com/put/YOUR_GITHUB_USERNAME/my-run/checkpoint.pt

# On a new pod, resume:
curl -o checkpoint.pt \
  -H "Authorization: Bearer YOUR_GITHUB_TOKEN" \
  "https://pgolf-api.lightspeedup.com/download?run_id=my-run&filename=checkpoint.pt"
```
### I want the fully automated experience

Use our RunPod template: `matotezitanka/proteus-pytorch:community`

Set these env vars before launch:

- `PGOLF_DATA=sp1024` (or `scylla`)
- `PGOLF_SHARDS=full` (or `mini` for testing)
- `PGOLF_GITHUB_TOKEN=ghp_yourtoken` (optional, for checkpoints)
- `PGOLF_USER=yourgithubname` (optional)
- `PGOLF_RUN=my-experiment-1` (optional)
Hit deploy. SSH in when it's ready. Everything is there.
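To see how those variables shape the download, here is an illustrative sketch of the data-selection logic a boot script like this might use. It is not the template's actual code — `include_patterns` and its exact behavior are our assumptions, derived only from the variables and shard patterns documented in this README.

```python
def include_patterns(env: dict) -> list:
    """Map PGOLF_* env vars to huggingface-cli --include patterns.

    HYPOTHETICAL reconstruction of boot-script behavior; the real
    template's logic may differ.
    """
    data = env.get("PGOLF_DATA", "sp1024")    # "sp1024" or "scylla"
    shards = env.get("PGOLF_SHARDS", "full")  # "full" or "mini"
    base = f"fineweb_{data}"
    if shards == "mini":
        # 10 training shards + validation shard, for smoke tests
        return [f"{base}/fineweb_train_00000?.bin", f"{base}/fineweb_val*"]
    return [f"{base}/*"]

print(include_patterns({"PGOLF_DATA": "scylla"}))  # ['fineweb_scylla/*']
```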
## Comprehensive Guide

### Available Data
| Tokenizer | Vocab | Size | Use case |
|---|---|---|---|
| SP1024 | 1024 tokens | ~15 GB | Competition default. Most PRs use this. |
| Scylla | 998 tokens | ~11 GB | TokenMonster-derived. Used by top entries. |
Each includes 80 training shards + 1 validation shard + tokenizer models.
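Both vocabularies fit comfortably in 16 bits, so if the shards follow the common nanoGPT-style layout of headerless raw uint16 token IDs, a shard can be memory-mapped directly. That layout is an assumption on our part — verify it against the competition repo's own data loader before relying on it; `load_shard` is an illustrative name.

```python
import numpy as np

def load_shard(path: str) -> np.ndarray:
    """Memory-map a token shard as a flat array of uint16 token IDs.

    ASSUMPTION: shards are headerless raw uint16, as in nanoGPT-style
    pipelines. Check the competition's own reader for the real format.
    """
    return np.memmap(path, dtype=np.uint16, mode="r")

# tokens = load_shard("/workspace/data/fineweb_sp1024/fineweb_train_000000.bin")
# print(tokens.shape, tokens[:8])
```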
### Download Options

```bash
# === DATA SELECTION ===

# SP1024 — competition default (~15 GB)
huggingface-cli download LightSpeedUp/parameter-golf-data --include "fineweb_sp1024/*" --local-dir /workspace/data

# Scylla — TokenMonster 998-token vocab (~11 GB)
huggingface-cli download LightSpeedUp/parameter-golf-data --include "fineweb_scylla/*" --local-dir /workspace/data

# Both datasets + all tokenizers (~26 GB)
huggingface-cli download LightSpeedUp/parameter-golf-data --local-dir /workspace/data

# Just tokenizer models (tiny, < 1 MB)
huggingface-cli download LightSpeedUp/parameter-golf-data --include "tokenizers/*" --local-dir /workspace/data

# === SHARD SUBSETS (save time/bandwidth) ===

# Mini — 10 shards for smoke tests (~2 GB)
huggingface-cli download LightSpeedUp/parameter-golf-data \
  --include "fineweb_sp1024/fineweb_train_00000?.bin" \
  --include "fineweb_sp1024/fineweb_val*" \
  --local-dir /workspace/data

# Half — 40 shards (~7 GB)
huggingface-cli download LightSpeedUp/parameter-golf-data \
  --include "fineweb_sp1024/fineweb_train_0000[0-3]?.bin" \
  --include "fineweb_sp1024/fineweb_val*" \
  --local-dir /workspace/data

# Val only — just validation data (~200 MB)
huggingface-cli download LightSpeedUp/parameter-golf-data \
  --include "fineweb_sp1024/fineweb_val*" \
  --local-dir /workspace/data
```
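The `--include` globs above use shell-style `?` and `[...]` wildcards, so you can sanity-check what a pattern selects before spending bandwidth. Python's `fnmatch` uses the same syntax (the CLI's matcher may differ in edge cases, so treat this as a quick check, not a guarantee):

```python
from fnmatch import fnmatch

# All 80 training shard names, as laid out in this dataset
names = [f"fineweb_train_{i:06d}.bin" for i in range(80)]

mini = [n for n in names if fnmatch(n, "fineweb_train_00000?.bin")]
half = [n for n in names if fnmatch(n, "fineweb_train_0000[0-3]?.bin")]

print(len(mini), len(half))  # 10 40
```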
No HuggingFace account required. This is a public dataset. No login, no token, no signup.
Downloads are resumable. If your connection drops, re-run the same command and it picks up where it left off.
### Checkpoint Persistence API

Save and resume training across pod preemptions. Your GitHub token is your identity — no accounts to create.

```bash
# === CHECKPOINT API (https://pgolf-api.lightspeedup.com) ===

# Upload a checkpoint
curl -X POST https://pgolf-api.lightspeedup.com/upload \
  -H "Authorization: Bearer ghp_yourtoken" \
  -H "Content-Type: application/json" \
  -d '{"run_id": "my-run", "filename": "checkpoint_step500.pt"}'

# Then PUT the file
curl -X PUT https://pgolf-api.lightspeedup.com/put/yourusername/my-run/checkpoint_step500.pt \
  -H "Authorization: Bearer ghp_yourtoken" \
  --data-binary @checkpoint_step500.pt

# Download a checkpoint
curl -o checkpoint.pt \
  -H "Authorization: Bearer ghp_yourtoken" \
  "https://pgolf-api.lightspeedup.com/download?run_id=my-run&filename=checkpoint_step500.pt"

# List your checkpoints
curl -H "Authorization: Bearer ghp_yourtoken" \
  "https://pgolf-api.lightspeedup.com/list?run_id=my-run"

# Delete a run's checkpoints
curl -X DELETE -H "Authorization: Bearer ghp_yourtoken" \
  "https://pgolf-api.lightspeedup.com/clean?run_id=my-run"
```
Limits: 10 checkpoints per user, 2 GB max each. Auto-deleted after 7 days.
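The curl calls above are easy to wrap from inside a training script. Here is a minimal stdlib-only sketch: the URL shapes are copied from the examples above, but `CheckpointClient` and the save-every-N helper are our own illustrative names, not an official SDK.

```python
import urllib.request

API = "https://pgolf-api.lightspeedup.com"

def should_checkpoint(step: int, every: int) -> bool:
    """True on every N-th step, skipping step 0."""
    return step > 0 and step % every == 0

class CheckpointClient:
    """Hypothetical wrapper around the checkpoint API's PUT/download URLs."""

    def __init__(self, user: str, run_id: str, token: str):
        self.user, self.run_id, self.token = user, run_id, token

    def put_url(self, filename: str) -> str:
        return f"{API}/put/{self.user}/{self.run_id}/{filename}"

    def download_url(self, filename: str) -> str:
        return f"{API}/download?run_id={self.run_id}&filename={filename}"

    def upload(self, filename: str, payload: bytes):
        """PUT checkpoint bytes (network call; needs a valid GitHub token)."""
        req = urllib.request.Request(
            self.put_url(filename),
            data=payload,
            method="PUT",
            headers={"Authorization": f"Bearer {self.token}"},
        )
        return urllib.request.urlopen(req)

# Usage in a training loop (serialize() is a placeholder):
# client = CheckpointClient("yourusername", "my-run", "ghp_yourtoken")
# if should_checkpoint(step, every=500):
#     client.upload(f"checkpoint_step{step}.pt", serialize(model))
```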
### Docker Image

```bash
docker pull matotezitanka/proteus-pytorch:community
```
Includes: PyTorch 2.11.0 + CUDA 12.8 + Flash Attention 3 + all competition deps (brotli, tokenmonster, sentencepiece) + tools (cpu_test.py, retokenizer, swap_pytorch.sh) + automated boot script.
Works on RunPod, Vast.ai, or any Docker host with NVIDIA GPUs.
### Data Integrity

Every file has a SHA256 checksum. After downloading:

```bash
cd /workspace/data
sha256sum -c SHA256SUMS.txt
```
If any checksum fails, re-download that file. The download is resumable — you won't re-download files that are already correct.
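If `sha256sum` isn't available (e.g. on macOS), the same check is a few lines of Python. This sketch assumes `SHA256SUMS.txt` uses the standard `<hex digest>  <filename>` format that `sha256sum -c` expects; `verify_checksums` is an illustrative name.

```python
import hashlib
import os

def verify_checksums(sums_file: str, root: str = ".") -> list:
    """Return the list of files whose SHA256 does not match (empty = all good)."""
    bad = []
    with open(sums_file) as fh:
        for line in fh:
            line = line.strip()
            if not line:
                continue
            expected, name = line.split(None, 1)
            name = name.lstrip("*")  # sha256sum marks binary mode with '*'
            h = hashlib.sha256()
            with open(os.path.join(root, name), "rb") as f:
                # Hash in 1 MB chunks so large shards don't load into RAM
                for chunk in iter(lambda: f.read(1 << 20), b""):
                    h.update(chunk)
            if h.hexdigest() != expected:
                bad.append(name)
    return bad

# failures = verify_checksums("/workspace/data/SHA256SUMS.txt", root="/workspace/data")
# print(failures or "all checksums OK")
```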
### Dataset Structure

```
parameter-golf-data/
├── fineweb_sp1024/
│   ├── fineweb_train_000000.bin ... fineweb_train_000079.bin   (80 shards)
│   └── fineweb_val_000000.bin
├── fineweb_scylla/
│   ├── fineweb_train_000000.bin ... fineweb_train_000079.bin   (80 shards)
│   └── fineweb_val_000000.bin
├── tokenizers/
│   ├── fineweb_1024_bpe.model        (SP1024 SentencePiece)
│   ├── scylla/candidate.vocab        (TokenMonster 998)
│   └── scylla/candidate.meta.npz     (byte LUTs)
├── SHA256SUMS.txt
└── PATENTS.md
```
## Security & Privacy

We believe in transparency. Here's exactly what we can and can't see.

### What we CAN access (technically)
- Your checkpoint files — they're stored in our Cloudflare R2 bucket. We have admin access to the bucket. We don't look at them, but we could.
- Your checkpoint metadata — filenames, sizes, upload timestamps. This is visible in the R2 dashboard.
- Request logs — Cloudflare logs request metadata (IP addresses, timestamps, URLs) by default. We do not add any additional logging.
- Your GitHub username — extracted from your token to scope your storage namespace.
### What we CANNOT access
- Your training code — it runs on your pod, never touches our infrastructure.
- Your model weights (unless you upload them as a checkpoint).
- Your GitHub token contents — the token transits our Worker to call GitHub's API, but it is NOT stored, NOT logged, and NOT persisted anywhere. It's used once per request for authentication and discarded.
- Other users' data — the Worker enforces namespace isolation. Your GitHub username is your storage prefix. You cannot read, list, or delete another user's checkpoints.
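The namespace-isolation rule is simple enough to state as a predicate. The real Worker is JavaScript and not yet public, so this Python sketch only illustrates the stated policy, not the actual implementation: the token resolves to a GitHub username, and that username must prefix every storage key the request touches.

```python
def is_authorized(github_user: str, object_key: str) -> bool:
    """Illustrative check: users may only touch keys under their own prefix."""
    # The trailing "/" matters: "alice" must not match keys owned by "alicebob".
    return object_key.startswith(github_user + "/")

print(is_authorized("alice", "alice/my-run/ckpt.pt"))  # True
print(is_authorized("alice", "bob/my-run/ckpt.pt"))    # False
```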
### What we DO NOT do
- We do NOT sell, share, or analyze your data.
- We do NOT train models on your checkpoints.
- We do NOT log your GitHub token value.
- We do NOT track your usage beyond standard Cloudflare request metrics.
### What we disclose
- The checkpoint API Worker code is in our private repo. We plan to open-source it.
- Cloudflare's privacy policy applies to request metadata: https://www.cloudflare.com/privacypolicy/
- Checkpoints are automatically deleted after 7 days. We do not keep backups.
### If you don't trust us

That's fair. You can:

- Use just the HF dataset — no account, no tokens, no interaction with our API. Just `huggingface-cli download`.
- Save checkpoints locally — skip the API entirely. Save to `/workspace/` and accept the risk of losing work on preemption.
- Inspect the Worker — we'll open-source it. Until then, the API surface is 5 endpoints, ~150 lines of JavaScript, zero dependencies.
## Provenance
- Source: FineWeb (CommonCrawl-derived, by Hugging Face)
- SP1024 tokenization: SentencePiece BPE, 1024 tokens — from the openai/parameter-golf competition repo
- Scylla tokenization: TokenMonster vocabulary (998 tokens) by @simon-marcus (PR #1143). Retokenized using our pipeline.
- No modification to token sequences — byte-identical to what you'd produce by tokenizing the raw data yourself.
### Attribution Chain

CommonCrawl (CC-BY) → Hugging Face FineWeb (ODC-By 1.0) → This dataset (ODC-By 1.0)

## License

- Data: Open Data Commons Attribution License (ODC-By 1.0)
- Code & Tools: Apache 2.0 — see PATENTS.md
## Roadmap
- SP4096 and SP8192 tokenizations (need tokenizer models — contributions welcome)
- Automated checkpoint save/resume in the boot script
- Open-source the CF Worker code
## Community
- The Agora — live leaderboard + compliance tracker
- Issue #942 — compute resources discussion
- Issue #140 — competition discussion thread
Built by Light Speed Up for the Parameter Golf community.