---
license: odc-by
task_categories:
- text-generation
language:
- en
pretty_name: Parameter Golf Competition Data
size_categories:
- 1B<n<10B
tags:
- parameter-golf
- fineweb
- language-modeling
- competition
---
# Parameter Golf Competition Data
Pre-tokenized [FineWeb](https://huggingface.co/datasets/HuggingFaceFW/fineweb) shards for the [OpenAI Parameter Golf](https://github.com/openai/parameter-golf) competition. Two tokenizations included. Free checkpoint persistence API. Zero setup friction.
---
## How It Works (The Simple Version)
Think of it like plumbing:
1. **The reservoir** is this dataset — ~26 GB of competition data, pre-processed and ready to flow.
2. **The pipe** is `huggingface-cli download` — one command, and data flows to your GPU pod. Fast, resumable. If the pipe breaks mid-transfer, reconnect and it picks up where it left off.
3. **Your pod** is the sink — data arrives at `/workspace/data/`, ready to use. No processing, no conversion, no waiting.
4. **The safety valve** is checkpoint persistence — every N steps, your training progress flows out to cloud storage. Pod dies? New pod picks up the flow from the last save. No lost work.
That's it. Data flows in. Checkpoints flow out. You train in between.
### Step by Step
**I just want to train. What do I do?**
```bash
# Step 1: Install huggingface-cli (if you don't have it)
pip install huggingface-hub
# Step 2: Download the competition data
huggingface-cli download LightSpeedUp/parameter-golf-data --include "fineweb_sp1024/*" --local-dir /workspace/data
# Step 3: That's it. Train.
python train_gpt.py --data_dir /workspace/data/fineweb_sp1024
```
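Once the shards are on disk, feeding them to a model is simple. The sketch below assumes each `.bin` shard is a flat, headerless stream of `uint16` token ids (the common nanoGPT-style layout); verify the exact format against the competition repo before relying on it. `iter_batches` is a hypothetical helper, not part of the repo's training script:

```python
import numpy as np

def iter_batches(shard_path, batch_size, seq_len):
    """Yield (input, target) next-token batches from one shard.

    Assumes the shard is a flat array of uint16 token ids, no header.
    """
    tokens = np.memmap(shard_path, dtype=np.uint16, mode="r")
    tokens_per_batch = batch_size * seq_len
    n_batches = (len(tokens) - 1) // tokens_per_batch
    for i in range(n_batches):
        start = i * tokens_per_batch
        # Read one extra token so targets are inputs shifted by one.
        chunk = np.asarray(tokens[start : start + tokens_per_batch + 1],
                           dtype=np.int64)
        yield (chunk[:-1].reshape(batch_size, seq_len),
               chunk[1:].reshape(batch_size, seq_len))
```

`np.memmap` keeps memory use flat regardless of shard size, which matters when a pod has more GPU memory than RAM.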
**I want to save checkpoints so I don't lose work:**
```bash
# After every N steps in your training script, save:
curl -X PUT \
-H "Authorization: Bearer YOUR_GITHUB_TOKEN" \
--data-binary @checkpoint.pt \
https://pgolf-api.lightspeedup.com/put/YOUR_GITHUB_USERNAME/my-run/checkpoint.pt
# On a new pod, resume:
curl -o checkpoint.pt \
-H "Authorization: Bearer YOUR_GITHUB_TOKEN" \
"https://pgolf-api.lightspeedup.com/download?run_id=my-run&filename=checkpoint.pt"
```
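Calling the same PUT endpoint from inside a training loop avoids shelling out to `curl`. This is a stdlib-only sketch of that call; `put_checkpoint` is a hypothetical wrapper, and only the URL shape shown in the curl example above is assumed:

```python
import os
import urllib.request

API = "https://pgolf-api.lightspeedup.com"  # base URL from the curl example

def put_checkpoint(path, user, run_id, token, opener=urllib.request.urlopen):
    """Upload one checkpoint file via the PUT endpoint shown above.

    `opener` is injectable so the call can be mocked or dry-run.
    """
    url = f"{API}/put/{user}/{run_id}/{os.path.basename(path)}"
    with open(path, "rb") as f:
        req = urllib.request.Request(
            url,
            data=f.read(),
            method="PUT",
            headers={"Authorization": f"Bearer {token}"},
        )
    return opener(req)
```

In a training loop, call it periodically, e.g. `if step % 500 == 0: put_checkpoint("checkpoint.pt", user, run_id, token)`.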
**I want the fully automated experience:**
Use our RunPod template: `matotezitanka/proteus-pytorch:community`
Set these env vars before launch:
- `PGOLF_DATA=sp1024` (or `scylla`)
- `PGOLF_SHARDS=full` (or `mini` for testing)
- `PGOLF_GITHUB_TOKEN=ghp_yourtoken` (optional, for checkpoints)
- `PGOLF_USER=yourgithubname` (optional)
- `PGOLF_RUN=my-experiment-1` (optional)
Hit deploy. SSH in when it's ready. Everything is there.
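For a feel of what those env vars do, here is a hypothetical sketch of how a boot script might map `PGOLF_DATA` and `PGOLF_SHARDS` onto a download command. The real script in the template may differ:

```bash
# Hypothetical sketch; the template's actual boot script may differ.
build_download_cmd() {
  local dir="fineweb_sp1024"
  [ "$PGOLF_DATA" = "scylla" ] && dir="fineweb_scylla"
  local cmd="huggingface-cli download LightSpeedUp/parameter-golf-data"
  if [ "$PGOLF_SHARDS" = "mini" ]; then
    # mini: first 10 training shards + validation
    cmd="$cmd --include ${dir}/fineweb_train_00000?.bin --include ${dir}/fineweb_val*"
  else
    # full: every file for the chosen tokenization
    cmd="$cmd --include ${dir}/*"
  fi
  echo "$cmd --local-dir /workspace/data"
}
```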
---
## Comprehensive Guide
### Available Data
| Tokenizer | Vocab | Size | Use case |
|-----------|-------|------|----------|
| **SP1024** | 1024 tokens | ~15 GB | Competition default. Most PRs use this. |
| **Scylla** | 998 tokens | ~11 GB | TokenMonster-derived. Used by top entries. |
Each includes 80 training shards + 1 validation shard + tokenizer models.
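As a sanity check on those sizes: assuming one `uint16` (2 bytes) per token, the byte counts imply token counts consistent with the `1B<n<10B` size tag above:

```python
def approx_tokens(total_bytes, bytes_per_token=2):
    """Rough token count, assuming one uint16 (2 bytes) per token."""
    return total_bytes // bytes_per_token

print(approx_tokens(15 * 10**9))  # SP1024: ~7.5B tokens
print(approx_tokens(11 * 10**9))  # Scylla: ~5.5B tokens
```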
### Download Options
```bash
# === DATA SELECTION ===
# SP1024 — competition default (~15 GB)
huggingface-cli download LightSpeedUp/parameter-golf-data --include "fineweb_sp1024/*" --local-dir /workspace/data
# Scylla — TokenMonster 998-token vocab (~11 GB)
huggingface-cli download LightSpeedUp/parameter-golf-data --include "fineweb_scylla/*" --local-dir /workspace/data
# Both datasets + all tokenizers (~26 GB)
huggingface-cli download LightSpeedUp/parameter-golf-data --local-dir /workspace/data
# Just tokenizer models (tiny, < 1 MB)
huggingface-cli download LightSpeedUp/parameter-golf-data --include "tokenizers/*" --local-dir /workspace/data
# === SHARD SUBSETS (save time/bandwidth) ===
# Mini — 10 shards for smoke tests (~2 GB)
huggingface-cli download LightSpeedUp/parameter-golf-data \
--include "fineweb_sp1024/fineweb_train_00000?.bin" \
--include "fineweb_sp1024/fineweb_val*" \
--local-dir /workspace/data
# Half — 40 shards (~7 GB)
huggingface-cli download LightSpeedUp/parameter-golf-data \
--include "fineweb_sp1024/fineweb_train_0000[0-3]?.bin" \
--include "fineweb_sp1024/fineweb_val*" \
--local-dir /workspace/data
# Val only — just validation data (~200 MB)
huggingface-cli download LightSpeedUp/parameter-golf-data \
--include "fineweb_sp1024/fineweb_val*" \
--local-dir /workspace/data
```
**No HuggingFace account required.** This is a public dataset. No login, no token, no signup.
**Downloads are resumable.** If your connection drops, re-run the same command and it picks up where it left off.
### Checkpoint Persistence API
Save and resume training across pod preemptions. Your GitHub token is your identity — no accounts to create.
```bash
# === CHECKPOINT API (https://pgolf-api.lightspeedup.com) ===
# Upload a checkpoint
curl -X POST https://pgolf-api.lightspeedup.com/upload \
-H "Authorization: Bearer ghp_yourtoken" \
-H "Content-Type: application/json" \
-d '{"run_id": "my-run", "filename": "checkpoint_step500.pt"}'
# Then PUT the file
curl -X PUT https://pgolf-api.lightspeedup.com/put/yourusername/my-run/checkpoint_step500.pt \
-H "Authorization: Bearer ghp_yourtoken" \
--data-binary @checkpoint_step500.pt
# Download a checkpoint
curl -o checkpoint.pt \
-H "Authorization: Bearer ghp_yourtoken" \
"https://pgolf-api.lightspeedup.com/download?run_id=my-run&filename=checkpoint_step500.pt"
# List your checkpoints
curl -H "Authorization: Bearer ghp_yourtoken" \
"https://pgolf-api.lightspeedup.com/list?run_id=my-run"
# Delete a run's checkpoints
curl -X DELETE -H "Authorization: Bearer ghp_yourtoken" \
"https://pgolf-api.lightspeedup.com/clean?run_id=my-run"
```
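The same endpoints are easy to wrap in a tiny stdlib-only client. The URL shapes below are taken from the curl examples; the JSON response format for `/list` is an assumption, so adjust if the real response differs:

```python
import json
import urllib.parse
import urllib.request

API = "https://pgolf-api.lightspeedup.com"  # base URL from the curl examples

def api_request(endpoint, token, method="GET", **params):
    """Build an authenticated request for one of the endpoints above."""
    url = f"{API}/{endpoint}"
    if params:
        url += "?" + urllib.parse.urlencode(params)
    return urllib.request.Request(
        url, method=method, headers={"Authorization": f"Bearer {token}"}
    )

def list_checkpoints(token, run_id, opener=urllib.request.urlopen):
    # Assumes /list returns JSON; adjust if the real response differs.
    with opener(api_request("list", token, run_id=run_id)) as resp:
        return json.load(resp)
```

Deleting a run is then `urllib.request.urlopen(api_request("clean", token, method="DELETE", run_id="my-run"))`.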
**Limits:** 10 checkpoints per user, 2 GB max each. Auto-deleted after 7 days.
### Docker Image
```bash
docker pull matotezitanka/proteus-pytorch:community
```
Includes: PyTorch 2.11.0 + CUDA 12.8 + Flash Attention 3 + all competition deps (brotli, tokenmonster, sentencepiece) + tools (cpu_test.py, retokenizer, swap_pytorch.sh) + automated boot script.
Works on RunPod, Vast.ai, or any Docker host with NVIDIA GPUs.
### Data Integrity
Every file has a SHA256 checksum. After downloading:
```bash
cd /workspace/data
sha256sum -c SHA256SUMS.txt
```
If any checksum fails, re-download that file. The download is resumable — you won't re-download files that are already correct.
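On hosts without `sha256sum` (e.g. minimal containers), the same check can be done in pure Python. This sketch assumes `SHA256SUMS.txt` uses the conventional `<hex digest>  <relative path>` line format:

```python
import hashlib
import os

def verify_checksums(sums_file, root="."):
    """Return the list of files whose SHA256 does not match the sums file.

    Assumes the conventional "<hex digest>  <relative path>" line format.
    """
    bad = []
    with open(sums_file) as f:
        for line in f:
            if not line.strip():
                continue
            digest, _, rel = line.strip().partition("  ")
            h = hashlib.sha256()
            # Hash in 1 MiB chunks so multi-GB shards don't need to fit in RAM.
            with open(os.path.join(root, rel), "rb") as data:
                for chunk in iter(lambda: data.read(1 << 20), b""):
                    h.update(chunk)
            if h.hexdigest() != digest:
                bad.append(rel)
    return bad
```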
### Dataset Structure
```
parameter-golf-data/
├── fineweb_sp1024/
│ ├── fineweb_train_000000.bin ... fineweb_train_000079.bin (80 shards)
│ └── fineweb_val_000000.bin
├── fineweb_scylla/
│ ├── fineweb_train_000000.bin ... fineweb_train_000079.bin (80 shards)
│ └── fineweb_val_000000.bin
├── tokenizers/
│ ├── fineweb_1024_bpe.model (SP1024 SentencePiece)
│ ├── scylla/candidate.vocab (TokenMonster 998)
│ └── scylla/candidate.meta.npz (byte LUTs)
├── SHA256SUMS.txt
└── PATENTS.md
```
---
## Security & Privacy
We believe in transparency. Here's exactly what we can and can't see.
### What we CAN access (technically)
- **Your checkpoint files** — they're stored in our Cloudflare R2 bucket. We have admin access to the bucket. We don't look at them, but we could.
- **Your checkpoint metadata** — filenames, sizes, upload timestamps. This is visible in the R2 dashboard.
- **Request logs** — Cloudflare logs request metadata (IP addresses, timestamps, URLs) by default. We do not add any additional logging.
- **Your GitHub username** — extracted from your token to scope your storage namespace.
### What we CANNOT access
- **Your training code** — it runs on your pod, never touches our infrastructure.
- **Your model weights** (unless you upload them as a checkpoint).
- **Your GitHub token contents** — the token transits our Worker to call GitHub's API, but it is NOT stored, NOT logged, and NOT persisted anywhere. It's used once per request for authentication and discarded.
- **Other users' data** — the Worker enforces namespace isolation. Your GitHub username is your storage prefix. You cannot read, list, or delete another user's checkpoints.
### What we DO NOT do
- We do NOT sell, share, or analyze your data.
- We do NOT train models on your checkpoints.
- We do NOT log your GitHub token value.
- We do NOT track your usage beyond standard Cloudflare request metrics.
### What we disclose
- The checkpoint API Worker code is in our private repo. We plan to open-source it.
- Cloudflare's privacy policy applies to request metadata: https://www.cloudflare.com/privacypolicy/
- Checkpoints are automatically deleted after 7 days. We do not keep backups.
### If you don't trust us
That's fair. You can:
1. **Use just the HF dataset** — no account, no tokens, no interaction with our API. Just `huggingface-cli download`.
2. **Save checkpoints locally** — skip the API entirely. Save to `/workspace/` and accept the risk of losing work on preemption.
3. **Inspect the Worker** — we'll open-source it. Until then, the API surface is 5 endpoints, ~150 lines of JavaScript, zero dependencies.
---
## Provenance
- **Source:** [FineWeb](https://huggingface.co/datasets/HuggingFaceFW/fineweb) (CommonCrawl-derived, by Hugging Face)
- **SP1024 tokenization:** SentencePiece BPE, 1024 tokens — from the [openai/parameter-golf](https://github.com/openai/parameter-golf) competition repo
- **Scylla tokenization:** TokenMonster vocabulary (998 tokens) by [@simon-marcus](https://github.com/simon-marcus) ([PR #1143](https://github.com/openai/parameter-golf/pull/1143)). Retokenized using our pipeline.
- **No modification** to token sequences — byte-identical to what you'd produce by tokenizing the raw data yourself.
### Attribution Chain
CommonCrawl (CC-BY) → Hugging Face FineWeb (ODC-By 1.0) → This dataset (ODC-By 1.0)
## License
**Data:** [Open Data Commons Attribution License (ODC-By 1.0)](https://opendatacommons.org/licenses/by/1-0/)
**Code & Tools:** Apache 2.0 — see [PATENTS.md](PATENTS.md)
---
## Roadmap
- SP4096 and SP8192 tokenizations (need tokenizer models — contributions welcome)
- Automated checkpoint save/resume in the boot script
- Open-source the CF Worker code
## Community
- [The Agora](https://matotezitanka.github.io/parameter-golf) — live leaderboard + compliance tracker
- [Issue #942](https://github.com/openai/parameter-golf/issues/942) — compute resources discussion
- [Issue #140](https://github.com/openai/parameter-golf/issues/140) — competition discussion thread
Built by [Light Speed Up](https://lightspeedup.com) for the Parameter Golf community.