Norelec7 committed · Commit 07b6d52 · verified · 1 Parent(s): 41f1d9a

Upload README.md with huggingface_hub

Files changed (1): README.md (+203 -31)
README.md CHANGED
@@ -16,81 +16,253 @@ tags:

  # Parameter Golf Competition Data

- Pre-tokenized [FineWeb](https://huggingface.co/datasets/HuggingFaceFW/fineweb) shards for the [OpenAI Parameter Golf](https://github.com/openai/parameter-golf) competition.

- **Two tokenizations included:**
- - **SP1024** — SentencePiece BPE, 1024 tokens (competition default)
- - **Scylla** — TokenMonster-derived, 998 tokens (community alternative from [PR #1143](https://github.com/openai/parameter-golf/pull/1143))

- ## Why this exists

- Every time you launch a GPU pod, re-downloading 16+ GB of data from the competition repo costs 10-30 minutes of billable GPU time. This dataset provides the same data via `huggingface-cli download` — fast, resumable, and available from any provider (RunPod, Modal, Colab, Vast.ai).

- ## Quick Start

  ```bash
- # Full SP1024 dataset (~11 GB)
  huggingface-cli download LightSpeedUp/parameter-golf-data --include "fineweb_sp1024/*" --local-dir /workspace/data

- # Full Scylla dataset (~11 GB)
  huggingface-cli download LightSpeedUp/parameter-golf-data --include "fineweb_scylla/*" --local-dir /workspace/data

- # Tokenizers only
  huggingface-cli download LightSpeedUp/parameter-golf-data --include "tokenizers/*" --local-dir /workspace/data

- # Mini subset for smoke tests (~2 GB, 10 shards + val)
  huggingface-cli download LightSpeedUp/parameter-golf-data \
    --include "fineweb_sp1024/fineweb_train_00000?.bin" \
    --include "fineweb_sp1024/fineweb_val*" \
    --local-dir /workspace/data

- # Val only (~200 MB)
  huggingface-cli download LightSpeedUp/parameter-golf-data \
    --include "fineweb_sp1024/fineweb_val*" \
    --local-dir /workspace/data
  ```

- ## Dataset Structure

  ```
  parameter-golf-data/
  ├── fineweb_sp1024/
- │   ├── fineweb_train_000000.bin ... fineweb_train_000079.bin (80 shards, ~11 GB)
- │   └── fineweb_val_000000.bin (1 shard, ~200 MB)
  ├── fineweb_scylla/
- │   ├── fineweb_train_000000.bin ... fineweb_train_000079.bin (80 shards, ~11 GB)
- │   └── fineweb_val_000000.bin (1 shard)
  ├── tokenizers/
  │   ├── fineweb_1024_bpe.model (SP1024 SentencePiece)
  │   ├── scylla/candidate.vocab (TokenMonster 998)
  │   └── scylla/candidate.meta.npz (byte LUTs)
- └── SHA256SUMS.txt (integrity manifest)
  ```

- ## Data Integrity

- Verify your download:

- ```bash
- cd /workspace/data
- sha256sum -c SHA256SUMS.txt
- ```

  ## Provenance

  - **Source:** [FineWeb](https://huggingface.co/datasets/HuggingFaceFW/fineweb) (CommonCrawl-derived, by Hugging Face)
- - **SP1024 tokenization:** SentencePiece BPE trained on FineWeb, 1024 tokens — from the [openai/parameter-golf](https://github.com/openai/parameter-golf) competition repo
- - **Scylla tokenization:** TokenMonster vocabulary (998 tokens) by [@simon-marcus](https://github.com/simon-marcus) ([PR #1143](https://github.com/openai/parameter-golf/pull/1143)). Retokenized using our [retokenize_scylla.py](https://github.com/MatoTeziTanka/parameter-golf-private) pipeline.
- - **No modification** to token sequences — these are byte-identical to what you'd get by cloning the competition repo and running the tokenizer yourself.

- ## Attribution Chain

- FineWeb → CommonCrawl (CC-BY) → Hugging Face (ODC-By 1.0) → This dataset (ODC-By 1.0)

  ## License

- **Data:** [Open Data Commons Attribution License (ODC-By 1.0)](https://opendatacommons.org/licenses/by/1-0/) — required by the FineWeb upstream license. You may use, share, and adapt this data with attribution.

- **Retokenization code:** Apache 2.0 — see [PATENTS.md](PATENTS.md) for the patent boundary notice.

  ## Community
 
  # Parameter Golf Competition Data

+ Pre-tokenized [FineWeb](https://huggingface.co/datasets/HuggingFaceFW/fineweb) shards for the [OpenAI Parameter Golf](https://github.com/openai/parameter-golf) competition. Two tokenizations included. Free checkpoint persistence API. Zero setup friction.

+ ---

+ ## How It Works (The Simple Version)

+ Think of it like plumbing:

+ 1. **The reservoir** is this dataset — 26 GB of competition data, pre-processed and ready to flow.
+ 2. **The pipe** is `huggingface-cli download` — one command, and data flows to your GPU pod. Fast, resumable. If the pipe breaks mid-transfer, reconnect and it picks up where it left off.
+ 3. **Your pod** is the sink — data arrives at `/workspace/data/`, ready to use. No processing, no conversion, no waiting.
+ 4. **The safety valve** is checkpoint persistence — every N steps, your training progress flows out to cloud storage. Pod dies? New pod picks up the flow from the last save. No lost work.

+ That's it. Data flows in. Checkpoints flow out. You train in between.

+ ### Step by Step

+ **I just want to train. What do I do?**

  ```bash
+ # Step 1: Install huggingface-cli (if you don't have it)
+ pip install huggingface-hub

+ # Step 2: Download the competition data
  huggingface-cli download LightSpeedUp/parameter-golf-data --include "fineweb_sp1024/*" --local-dir /workspace/data

+ # Step 3: That's it. Train.
+ python train_gpt.py --data_dir /workspace/data/fineweb_sp1024
+ ```

+ **I want to save checkpoints so I don't lose work:**

+ ```bash
+ # After every N steps in your training script, save:
+ curl -X PUT \
+   -H "Authorization: Bearer YOUR_GITHUB_TOKEN" \
+   --data-binary @checkpoint.pt \
+   https://pgolf-api.lightspeedup.com/put/YOUR_GITHUB_USERNAME/my-run/checkpoint.pt

+ # On a new pod, resume:
+ curl -o checkpoint.pt \
+   -H "Authorization: Bearer YOUR_GITHUB_TOKEN" \
+   "https://pgolf-api.lightspeedup.com/download?run_id=my-run&filename=checkpoint.pt"
+ ```

+ **I want the fully automated experience:**

+ Use our RunPod template: `matotezitanka/proteus-pytorch:community`

+ Set these env vars before launch:

+ - `PGOLF_DATA=sp1024` (or `scylla`)
+ - `PGOLF_SHARDS=full` (or `mini` for testing)
+ - `PGOLF_GITHUB_TOKEN=ghp_yourtoken` (optional, for checkpoints)
+ - `PGOLF_USER=yourgithubname` (optional)
+ - `PGOLF_RUN=my-experiment-1` (optional)

+ Hit deploy. SSH in when it's ready. Everything is there.

+ ---

+ ## Comprehensive Guide

+ ### Available Data

+ | Tokenizer | Vocab | Size | Use case |
+ |-----------|-------|------|----------|
+ | **SP1024** | 1024 tokens | ~15 GB | Competition default. Most PRs use this. |
+ | **Scylla** | 998 tokens | ~11 GB | TokenMonster-derived. Used by top entries. |

+ Each includes 80 training shards + 1 validation shard + tokenizer models.

+ ### Download Options

+ ```bash
+ # === DATA SELECTION ===

+ # SP1024 — competition default (~15 GB)
+ huggingface-cli download LightSpeedUp/parameter-golf-data --include "fineweb_sp1024/*" --local-dir /workspace/data

+ # Scylla — TokenMonster 998-token vocab (~11 GB)
  huggingface-cli download LightSpeedUp/parameter-golf-data --include "fineweb_scylla/*" --local-dir /workspace/data

+ # Both datasets + all tokenizers (~26 GB)
+ huggingface-cli download LightSpeedUp/parameter-golf-data --local-dir /workspace/data

+ # Just tokenizer models (tiny, < 1 MB)
  huggingface-cli download LightSpeedUp/parameter-golf-data --include "tokenizers/*" --local-dir /workspace/data

+ # === SHARD SUBSETS (save time/bandwidth) ===

+ # Mini — 10 shards for smoke tests (~2 GB)
  huggingface-cli download LightSpeedUp/parameter-golf-data \
    --include "fineweb_sp1024/fineweb_train_00000?.bin" \
    --include "fineweb_sp1024/fineweb_val*" \
    --local-dir /workspace/data

+ # Half — 40 shards (~7 GB)
  huggingface-cli download LightSpeedUp/parameter-golf-data \
+   --include "fineweb_sp1024/fineweb_train_0000[0-3]?.bin" \
    --include "fineweb_sp1024/fineweb_val*" \
    --local-dir /workspace/data

+ # Val only — just validation data (~200 MB)
+ huggingface-cli download LightSpeedUp/parameter-golf-data \
+   --include "fineweb_sp1024/fineweb_val*" \
+   --local-dir /workspace/data
+ ```

+ **No HuggingFace account required.** This is a public dataset. No login, no token, no signup.

+ **Downloads are resumable.** If your connection drops, re-run the same command and it picks up where it left off.
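
If you prefer Python over the CLI, `huggingface_hub.snapshot_download` gives the same resumable downloads; the `--include` globs above map to its `allow_patterns` argument. A minimal sketch — the `patterns` helper is our own illustration, not part of any API:

```python
REPO = "LightSpeedUp/parameter-golf-data"

def patterns(tokenizer: str = "sp1024", subset: str = "full") -> list[str]:
    """Map a subset name to the same globs used by --include above."""
    base = f"fineweb_{tokenizer}"
    if subset == "mini":   # 10 shards + val, ~2 GB
        return [f"{base}/fineweb_train_00000?.bin", f"{base}/fineweb_val*"]
    if subset == "val":    # validation only, ~200 MB
        return [f"{base}/fineweb_val*"]
    return [f"{base}/*"]   # everything for this tokenizer

if __name__ == "__main__":
    # Requires: pip install huggingface-hub
    from huggingface_hub import snapshot_download

    snapshot_download(
        repo_id=REPO,
        repo_type="dataset",
        allow_patterns=patterns("sp1024", "mini"),
        local_dir="/workspace/data",
    )
```

Interrupted runs resume the same way as the CLI: re-run the script and already-complete files are skipped.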

+ ### Checkpoint Persistence API

+ Save and resume training across pod preemptions. Your GitHub token is your identity — no accounts to create.

+ ```bash
+ # === CHECKPOINT API (https://pgolf-api.lightspeedup.com) ===

+ # Upload a checkpoint
+ curl -X POST https://pgolf-api.lightspeedup.com/upload \
+   -H "Authorization: Bearer ghp_yourtoken" \
+   -H "Content-Type: application/json" \
+   -d '{"run_id": "my-run", "filename": "checkpoint_step500.pt"}'

+ # Then PUT the file
+ curl -X PUT https://pgolf-api.lightspeedup.com/put/yourusername/my-run/checkpoint_step500.pt \
+   -H "Authorization: Bearer ghp_yourtoken" \
+   --data-binary @checkpoint_step500.pt

+ # Download a checkpoint
+ curl -o checkpoint.pt \
+   -H "Authorization: Bearer ghp_yourtoken" \
+   "https://pgolf-api.lightspeedup.com/download?run_id=my-run&filename=checkpoint_step500.pt"

+ # List your checkpoints
+ curl -H "Authorization: Bearer ghp_yourtoken" \
+   "https://pgolf-api.lightspeedup.com/list?run_id=my-run"

+ # Delete a run's checkpoints
+ curl -X DELETE -H "Authorization: Bearer ghp_yourtoken" \
+   "https://pgolf-api.lightspeedup.com/clean?run_id=my-run"
+ ```

+ **Limits:** 10 checkpoints per user, 2 GB max each. Auto-deleted after 7 days.
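
The same endpoints can be called from inside a training script with the standard library. A hedged sketch: the URLs and headers mirror the curl examples above, but the helper names (`put_url`, `save_checkpoint`) are our own illustration:

```python
import os
import urllib.request

API = "https://pgolf-api.lightspeedup.com"

def put_url(user: str, run_id: str, filename: str) -> str:
    # Mirrors the PUT endpoint shown in the curl examples above.
    return f"{API}/put/{user}/{run_id}/{filename}"

def download_url(run_id: str, filename: str) -> str:
    return f"{API}/download?run_id={run_id}&filename={filename}"

def save_checkpoint(path: str, user: str, run_id: str, token: str) -> None:
    """Upload a local checkpoint file via HTTP PUT."""
    with open(path, "rb") as f:
        req = urllib.request.Request(
            put_url(user, run_id, os.path.basename(path)),
            data=f.read(),
            method="PUT",
            headers={"Authorization": f"Bearer {token}"},
        )
        urllib.request.urlopen(req)

if __name__ == "__main__":
    # e.g. inside your training loop, every N steps:
    #   torch.save(model.state_dict(), "checkpoint.pt")
    #   save_checkpoint("checkpoint.pt", os.environ["PGOLF_USER"],
    #                   os.environ["PGOLF_RUN"], os.environ["PGOLF_GITHUB_TOKEN"])
    pass
```

Note the 2 GB per-checkpoint limit above; for larger state dicts you would need to shard or compress before uploading.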

+ ### Docker Image

+ ```bash
+ docker pull matotezitanka/proteus-pytorch:community
  ```

+ Includes: PyTorch 2.11.0 + CUDA 12.8 + Flash Attention 3 + all competition deps (brotli, tokenmonster, sentencepiece) + tools (cpu_test.py, retokenizer, swap_pytorch.sh) + automated boot script.

+ Works on RunPod, Vast.ai, or any Docker host with NVIDIA GPUs.

+ ### Data Integrity

+ Every file has a SHA256 checksum. After downloading:

+ ```bash
+ cd /workspace/data
+ sha256sum -c SHA256SUMS.txt
+ ```

+ If any checksum fails, re-download that file. The download is resumable — you won't re-download files that are already correct.
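
The same check can be done portably in Python (useful on hosts without `sha256sum`). This sketch assumes the standard `sha256sum` manifest format, `<hex digest>  <filename>` per line:

```python
import hashlib

def sha256_of(path: str, chunk: int = 1 << 20) -> str:
    """Stream a file through SHA-256 (shards are multi-GB, so don't slurp)."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        while block := f.read(chunk):
            h.update(block)
    return h.hexdigest()

def verify(manifest: str = "SHA256SUMS.txt") -> list[str]:
    """Return the files whose checksum does not match the manifest."""
    bad = []
    with open(manifest) as f:
        for line in f:
            if not line.strip():
                continue
            digest, name = line.split(maxsplit=1)
            # sha256sum may prefix binary-mode filenames with '*'
            name = name.strip().lstrip("*")
            if sha256_of(name) != digest:
                bad.append(name)
    return bad
```

Run it from the download directory; an empty return list means every file matched.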

+ ### Dataset Structure

  ```
  parameter-golf-data/
  ├── fineweb_sp1024/
+ │   ├── fineweb_train_000000.bin ... fineweb_train_000079.bin (80 shards)
+ │   └── fineweb_val_000000.bin
  ├── fineweb_scylla/
+ │   ├── fineweb_train_000000.bin ... fineweb_train_000079.bin (80 shards)
+ │   └── fineweb_val_000000.bin
  ├── tokenizers/
  │   ├── fineweb_1024_bpe.model (SP1024 SentencePiece)
  │   ├── scylla/candidate.vocab (TokenMonster 998)
  │   └── scylla/candidate.meta.npz (byte LUTs)
+ ├── SHA256SUMS.txt
+ └── PATENTS.md
  ```
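
The `.bin` shard layout is not specified in this README. **Assumption:** if the shards are flat arrays of little-endian uint16 token ids (the common nanoGPT-style layout — verify against the competition repo's data loader before relying on this), reading one is a single `memmap` call. The `batches` helper is our own illustration:

```python
import numpy as np

def load_shard(path: str) -> np.ndarray:
    # ASSUMPTION: flat uint16 token ids, no header (nanoGPT-style).
    # Check the competition repo's loader for the real layout.
    return np.memmap(path, dtype=np.uint16, mode="r")

def batches(tokens: np.ndarray, batch: int, seq: int, rng: np.random.Generator):
    """Yield (x, y) next-token-prediction batches from one shard."""
    while True:
        ix = rng.integers(0, len(tokens) - seq - 1, size=batch)
        x = np.stack([tokens[i : i + seq] for i in ix]).astype(np.int64)
        y = np.stack([tokens[i + 1 : i + 1 + seq] for i in ix]).astype(np.int64)
        yield x, y
```

`memmap` keeps the multi-GB shards out of RAM; only the slices a batch touches are paged in.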

+ ---

+ ## Security & Privacy

+ We believe in transparency. Here's exactly what we can and can't see.

+ ### What we CAN access (technically)
+ - **Your checkpoint files** — they're stored in our Cloudflare R2 bucket. We have admin access to the bucket. We don't look at them, but we could.
+ - **Your checkpoint metadata** — filenames, sizes, upload timestamps. This is visible in the R2 dashboard.
+ - **Request logs** — Cloudflare logs request metadata (IP addresses, timestamps, URLs) by default. We do not add any additional logging.
+ - **Your GitHub username** — extracted from your token to scope your storage namespace.

+ ### What we CANNOT access
+ - **Your training code** — it runs on your pod and never touches our infrastructure.
+ - **Your model weights** (unless you upload them as a checkpoint).
+ - **Your GitHub token contents** — the token transits our Worker to call GitHub's API, but it is NOT stored, NOT logged, and NOT persisted anywhere. It's used once per request for authentication and discarded.
+ - **Other users' data** — the Worker enforces namespace isolation. Your GitHub username is your storage prefix. You cannot read, list, or delete another user's checkpoints.

+ ### What we DO NOT do
+ - We do NOT sell, share, or analyze your data.
+ - We do NOT train models on your checkpoints.
+ - We do NOT log your GitHub token value.
+ - We do NOT track your usage beyond standard Cloudflare request metrics.

+ ### What we disclose
+ - The checkpoint API Worker code is in our private repo. We plan to open-source it.
+ - Cloudflare's privacy policy applies to request metadata: https://www.cloudflare.com/privacypolicy/
+ - Checkpoints are automatically deleted after 7 days. We do not keep backups.

+ ### If you don't trust us
+ That's fair. You can:
+ 1. **Use just the HF dataset** — no account, no tokens, no interaction with our API. Just `huggingface-cli download`.
+ 2. **Save checkpoints locally** — skip the API entirely. Save to `/workspace/` and accept the risk of losing work on preemption.
+ 3. **Inspect the Worker** — we'll open-source it. Until then, the API surface is 5 endpoints, ~150 lines of JavaScript, zero dependencies.

+ ---

  ## Provenance

  - **Source:** [FineWeb](https://huggingface.co/datasets/HuggingFaceFW/fineweb) (CommonCrawl-derived, by Hugging Face)
+ - **SP1024 tokenization:** SentencePiece BPE, 1024 tokens — from the [openai/parameter-golf](https://github.com/openai/parameter-golf) competition repo
+ - **Scylla tokenization:** TokenMonster vocabulary (998 tokens) by [@simon-marcus](https://github.com/simon-marcus) ([PR #1143](https://github.com/openai/parameter-golf/pull/1143)). Retokenized using our pipeline.
+ - **No modification** to token sequences — byte-identical to what you'd produce by tokenizing the raw data yourself.

+ ### Attribution Chain

+ CommonCrawl (CC-BY) → Hugging Face FineWeb (ODC-By 1.0) → This dataset (ODC-By 1.0)

  ## License

+ **Data:** [Open Data Commons Attribution License (ODC-By 1.0)](https://opendatacommons.org/licenses/by/1-0/)

+ **Code & Tools:** Apache 2.0 — see [PATENTS.md](PATENTS.md)

+ ---

+ ## Roadmap

+ - SP4096 and SP8192 tokenizations (need tokenizer models; contributions welcome)
+ - Automated checkpoint save/resume in the boot script
+ - Open-source the CF Worker code

  ## Community