Add small-sample callout and Croissant pointer
README.md
CHANGED

@@ -87,7 +87,21 @@ Per-city full sizes: Austin ~127 GB, NY ~101 GB, Dallas ~156 GB, Fort Worth ~134
---
## Sample for reviewers (small subset, ~4.6 GB)

Per NeurIPS Datasets-track guidance, large datasets should provide a smaller sample. The **per-city mini bundle** is exactly that: 2,000 training links plus 500 validation and 500 test links, baked as PyTorch tensors with normalisation statistics. It has the same schema as the full data and is roughly 3,000× smaller per city.

```bash
# Smallest single download: the Austin mini bundle (~4.6 GB).
hf download neurips2026citympc/CityMPC \
  --repo-type dataset \
  --include "manifests/city_10_austin_3p5_s/train_2000.*" \
            "manifests/city_10_austin_3p5_s/val_500.*" \
            "manifests/city_10_austin_3p5_s/test_500.*" \
            "manifests/city_10_austin_3p5_s/norm_stats.json" \
  --local-dir .
```
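The mini bundle pairs the baked tensors with `norm_stats.json`. Assuming those stats hold per-channel means and standard deviations (the field and channel names below are hypothetical, not taken from the dataset), applying and inverting them is a one-liner each:

```python
# Illustrative norm_stats.json content; the real field/channel names may differ.
stats = {"mean": {"speed_mph": 31.7}, "std": {"speed_mph": 8.2}}

def normalise(x: float, channel: str, s: dict) -> float:
    """z-score a raw value using the per-channel stats."""
    return (x - s["mean"][channel]) / s["std"][channel]

def denormalise(z: float, channel: str, s: dict) -> float:
    """Invert the z-score back to physical units."""
    return z * s["std"][channel] + s["mean"][channel]

z = normalise(40.0, "speed_mph", stats)
assert abs(denormalise(z, "speed_mph", stats) - 40.0) < 1e-9
```

Keeping the stats in a sidecar JSON (rather than baked into the tensors) means predictions can always be mapped back to physical units.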
How the sample was created: a deterministic seed-42 random subset of links is drawn from the filtered training/validation/test splits (N = 2000/500/500); the same `bake_dataset.py` pipeline used for the full splits then stacks the normalised tensors into `.pt` files whose schema is identical to `train.h5` / `val.h5` / `test.h5`. The same procedure is available for all 5 cities.
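
The selection step can be sketched as below; the helper name and `link_` identifiers are illustrative, not the actual `bake_dataset.py` code:

```python
import random

def sample_links(link_ids, n, seed=42):
    """Deterministic subset: the same seed and input order always yield the same sample."""
    rng = random.Random(seed)
    return sorted(rng.sample(list(link_ids), n))

# e.g. the 2000-link training subset drawn from a filtered split
train_ids = sample_links((f"link_{i:06d}" for i in range(10_000)), 2000)
assert len(train_ids) == 2000
```

Seeding a local `random.Random(42)` instead of the module-level RNG keeps the draw reproducible even if other code touches the global random state.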
## Quick start (full dataset)

```bash
pip install huggingface_hub
```

@@ -96,6 +110,10 @@ hf download neurips2026citympc/CityMPC --repo-type dataset --include "manifests/
For a one-line download + verify experience, use the citympc `scripts/download_hf.py` helper (see the citympc repo).
## Croissant metadata
A valid Croissant 1.1 metadata file is at the dataset repo root: [`croissant.json`](https://huggingface.co/datasets/neurips2026citympc/CityMPC/resolve/main/croissant.json). It contains all FileObject SHA-256 hashes, the RecordSet schema (channel sample fields + norm-stats fields), Responsible AI properties, and provenance. Validated against `mlcroissant` (0 errors, 0 warnings).
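
Because the Croissant file carries a `sha256` for each FileObject, downloads can be checked offline. A minimal sketch, assuming the standard Croissant JSON-LD layout with hashes under the top-level `distribution` array (adapt the traversal if the file nests them differently):

```python
import hashlib
import json

def sha256_of(path: str, chunk_size: int = 1 << 20) -> str:
    """Stream a file through SHA-256 so large shards never sit fully in memory."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        while block := f.read(chunk_size):
            h.update(block)
    return h.hexdigest()

def expected_hashes(croissant_path: str) -> dict:
    """Map FileObject @id -> sha256 from a croissant.json distribution list."""
    with open(croissant_path, encoding="utf-8") as f:
        meta = json.load(f)
    return {
        d["@id"]: d["sha256"]
        for d in meta.get("distribution", [])
        if "sha256" in d
    }
```

Comparing `sha256_of(local_path)` against the corresponding `expected_hashes(...)` entry verifies a shard without any network access.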
---
## Data schema