---
license: apache-2.0
language:
- en
tags:
- wireless
- multi-path-channel
- ray-tracing
- 5G
- 6G
- ml-for-wireless
- sionna
- deepmimo
- synthetic-data
- channel-prediction
- generative-models
size_categories:
- 100B
---

# CityMPC

CityMPC is a fully synthetic multi-path wireless channel dataset covering 5 US cities at 3.5 GHz, generated with Sionna RT over DeepMIMO v4 geometry.

## Download options

| Bundle | Cities | Files | Size |
|---|---|---|---|
| **Scene bundle** (per city) | each city | `.tar.gz` (scene.xml + meshes/*.ply) | 244 KB |
| **Mini bundle** (all 5) | Austin, NY, Dallas, Fort Worth, Denver | `train_2000.{npz,pt}`, `val_500.{npz,pt}`, `test_500.{npz,pt}`, `norm_stats.json` | ~23 GB |
| **Full bundle** (all 5) | Austin, NY, Dallas, Fort Worth, Denver | `train.h5`, `val.h5`, `test.h5`, `train.npz`, `val.npz`, `test.npz` | ~680 GB |

Per-city full sizes: Austin ~127 GB, NY ~101 GB, Dallas ~156 GB, Fort Worth ~134 GB, Denver ~162 GB.

---

## Sample for reviewers (small subset, ~4.6 GB)

Per NeurIPS Datasets-track guidance, large datasets should provide a smaller sample. The **per-city mini bundle** is exactly that: 2000 training links + 500 validation + 500 test, baked as PyTorch tensors with normalisation statistics. It has the same schema as the full data at a small fraction of the per-city size (~4.6 GB vs ~127 GB for full Austin).

```bash
# Smallest single download: Austin mini bundle, ~4.6 GB
hf download neurips2026citympc/CityMPC \
  --repo-type dataset \
  --include "manifests/city_10_austin_3p5_s/train_2000.*" \
            "manifests/city_10_austin_3p5_s/val_500.*" \
            "manifests/city_10_austin_3p5_s/test_500.*" \
            "manifests/city_10_austin_3p5_s/norm_stats.json" \
  --local-dir .
```

How the sample was created: a deterministic seed-42 random subset of links from the filtered train/validation/test splits (N = 2000/500/500); the same `bake_dataset.py` pipeline used for the full splits then stacks normalised tensors into `.pt` files with the same schema as `train.h5` / `val.h5` / `test.h5`. The same procedure is available for all 5 cities.

## Quick start (full dataset)

```bash
pip install huggingface_hub
hf download neurips2026citympc/CityMPC --repo-type dataset \
  --include "manifests/city_10_austin_3p5_s/*" --local-dir .
```

For a one-line download-and-verify workflow, use the `scripts/download_hf.py` helper in the citympc repository.
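Once downloaded, the mini bundle's `.npz` files can be inspected with NumPy alone. The sketch below builds a synthetic stand-in with the field names and shapes from the data schema section; with a real download you would replace the dict with `np.load(...)` on the `train_2000.npz` file, and should verify the keys against your copy, since the exact key names in the archive are an assumption here.

```python
import numpy as np

# Synthetic stand-in mirroring the documented schema (N links, 25 max paths).
# Real data (hypothetical usage):
#   batch = np.load("manifests/city_10_austin_3p5_s/train_2000.npz")
N, L = 4, 25
rng = np.random.default_rng(0)
batch = {
    "excess_delay":  np.zeros((N, L), np.float32),
    "path_presence": np.zeros((N, L), np.float32),
    "path_coeff":    rng.standard_normal((N, L, 2)).astype(np.float32),
    "path_dirs":     np.zeros((N, L, 6), np.float32),
    "path_loss_db":  np.full(N, 95.0, np.float32),
    "first_arrival": np.full(N, 3.2e-7, np.float32),
    "rx_pov":        np.zeros((N, 12, 128, 128), np.float32),
    "tx_pov":        np.zeros((N, 12, 128, 128), np.float32),
    "global_map":    np.zeros((N, 1, 128, 128), np.float32),
    "scalars":       np.zeros((N, 6), np.float32),
}
batch["path_presence"][0, :7] = 1.0  # pretend link 0 has 7 active paths

# Active-path count per link, and per-path power (dB) for link 0
n_active = batch["path_presence"].sum(axis=1).astype(int)
coeff0 = batch["path_coeff"][0, :7]                   # (7, 2) real/imag parts
power0_db = 10 * np.log10((coeff0 ** 2).sum(axis=1))  # |a|^2 per path, in dB
print(n_active[0], power0_db.shape)
```

The zero-padding convention matters here: inactive path slots carry zeros, so per-path quantities should always be masked with `path_presence` before aggregation.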
## Croissant metadata

A valid Croissant 1.1 metadata file is at the dataset repo root: [`croissant.json`](https://huggingface.co/datasets/neurips2026citympc/CityMPC/resolve/main/croissant.json). It contains all FileObject SHA-256 hashes, the RecordSet schema (channel-sample fields + norm-stats fields), Responsible AI properties, and provenance. It validates against `mlcroissant` with 0 errors and 0 warnings.

---

## Data schema

Each HDF5 file (`train.h5`, `val.h5`, `test.h5`) and each baked tensor file (`*_2000.pt`, `*_500.pt`) shares the same record schema. One record = one TX-RX link. Field shapes use `N` for the number of links in a split and `25` for the maximum number of multi-path components.

| Field | Shape | Description |
|---|---|---|
| `excess_delay` | `(N, 25)` | Excess delay (seconds) of each path relative to the first arrival τ₀. Zero-padded for inactive paths. |
| `path_presence` | `(N, 25)` | Binary indicator of active paths (0.0 or 1.0). |
| `path_coeff` | `(N, 25, 2)` | Complex path coefficient (real, imag) per path. |
| `path_dirs` | `(N, 25, 6)` | Departure and arrival 3D unit direction vectors `[AoD_x, AoD_y, AoD_z, AoA_x, AoA_y, AoA_z]`. |
| `path_loss_db` | `(N,)` | Total received path loss (dB) aggregated over all active paths. |
| `first_arrival` | `(N,)` | Absolute propagation delay (seconds) of the first-arriving path. |
| `rx_pov` | `(N, 12, 128, 128)` | Receiver POV image stack: RGB (3) + depth (1) + normals XYZ (3) + RF material properties (5). Float32. |
| `tx_pov` | `(N, 12, 128, 128)` | Transmitter POV image stack; same 12-channel layout as `rx_pov`. |
| `global_map` | `(N, 1, 128, 128)` | Global building height map centred on the TX-RX midpoint. |
| `scalars` | `(N, 6)` | TX and RX 3D world coordinates `[tx_x, tx_y, tx_z, rx_x, rx_y, rx_z]` in metres. |

`norm_stats.json` (per city) contains:

| Key | Description |
|---|---|
| `first_arrival.mean_log_ns` | Mean of `log(first_arrival × 1e9)` across training-split links. |
| `first_arrival.std_log_ns` | Std of `log(first_arrival × 1e9)` across training-split links. |
| `path_loss_db.mean` | Mean of `path_loss_db` across training-split links. |
| `path_loss_db.std` | Std of `path_loss_db` across training-split links. |
| `n_links_valid`, `n_links_total` | Number of links that passed filtering, and the total before filtering. |

---

## Provenance

- **Geometry source**: [DeepMIMO v4](https://deepmimo.net/) building footprints (OpenStreetMap-derived) and TX/RX user grids for 20 US city scenarios at 3.5 GHz.
- **Ray tracing**: [Sionna RT](https://nvlabs.github.io/sionna/) full TX-RX path enumeration with up to 25 paths per link, including complex coefficients, delays, and AoD/AoA, computed for all link pairs.
- **POV rendering**: Mitsuba 3 renders the 12-channel image stacks from both TX and RX viewpoints, plus a global height map per link.
- **Filtering & splits**: link-level power filter, path-level pruning, and `L_max` truncation; 80/10/10 train/val/test split via a deterministic seed; `norm_stats.json` recomputed on the filtered training set.
- **Full pipeline reproducibility**: see `PIPELINE.md` in the citympc code release.

---

## Responsible AI

### Synthetic data

This dataset is **fully synthetic**. It contains only ray-traced wireless channel parameters and 3D environment renderings derived from publicly available building geometry models. **No personal or sensitive information is present.**

### Limitations

- **Frequency**: 3.5 GHz only. Results may not generalise to mmWave (28+ GHz), sub-GHz, or other bands without retraining.
- **Geography**: 5 US cities. Other cities, non-US urban layouts, and rural/suburban environments are out of distribution.
- **Static environments**: no dynamic objects (vehicles, pedestrians, foliage), no weather, no atmospheric effects.
- **Geometry fidelity**: DeepMIMO v4 building footprints are extruded uniformly from OpenStreetMap data and may not match real building heights or shapes.
- **Source code**: the generation pipeline is open source; see the citympc repository for full reproducibility.

### Biases

- All cities are mid-to-large US urban centres with dense building layouts; receiver placements follow DeepMIMO v4 user grids and may oversample regular street patterns.
- Material RF properties are assigned from a fixed per-material lookup, with no spatial variation within a material class.

### Intended uses

- Training and benchmarking ML models for environment-aware multi-path channel prediction.
- Channel charting, site-specific radio resource management, and digital-twin radio simulation.
- Evaluation of generative models that consume image/geometric scene representations.

### Out-of-scope uses

- Operational deployment of trained models without site-specific validation.
- Generalisation claims beyond the cities, frequency, and conditions covered.

### Social impact

Open release of the dataset and the generation toolchain reduces reliance on proprietary or computationally expensive ray tracers in 5G/6G research, and supports reproducibility in environment-aware wireless communications.
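Downloads can be checked against the SHA-256 table in the next section. A minimal stdlib-only helper is sketched below (the example path in the comment is illustrative); the citympc `scripts/download_hf.py` helper performs the same verification as part of its download flow.

```python
import hashlib

def sha256sum(path: str, chunk_size: int = 1 << 20) -> str:
    """Stream a file through SHA-256 in 1 MiB chunks, so 100+ GB files never load into memory."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

# Usage (hypothetical local path; compare the result against the table below):
# sha256sum("manifests/city_10_austin_3p5_s/norm_stats.json")
```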
---

## File integrity (SHA-256)

| File | SHA-256 |
|---|---|
| `manifests/city_10_austin_3p5_s/train.h5` | `89830838f697674d993390862e8ae99adf1ebc16de0a6db9ded898ae9b32245c` |
| `manifests/city_10_austin_3p5_s/val.h5` | `2a89a446355cebb5a347891a5a69b2cd8a67455e09c0efa53020638d9ef7ae60` |
| `manifests/city_10_austin_3p5_s/test.h5` | `551cffd696a7a259ac1f9b318479880f886b08722c8fcca535e1ec39d050dda6` |
| `manifests/city_10_austin_3p5_s/norm_stats.json` | `5a9f73b8692464b75f6fb54a228fb7fb51c58e837be0a7c4e01608c03ed8b8e1` |
| `manifests/city_10_austin_3p5_s/train_2000.pt` | `3c996049dc3ca61c90c20f72366f18ad1fbceca1b5717b754558f0af9af6dd45` |
| `manifests/city_0_newyork_3p5_s/train.h5` | `89d34d6d70b16eb226e5b0e7a0bc459115d03eb45112e246e742a957664f7035` |
| `manifests/city_0_newyork_3p5_s/val.h5` | `a75efb4d9e9c2a162750c6b35f6848639c8b4e1a33ff9bff3683d4ff5646d567` |
| `manifests/city_0_newyork_3p5_s/test.h5` | `00872e53d95895dba15dacfb6a675046db3b84166ee3a0c516eb668ac881fafe` |
| `manifests/city_0_newyork_3p5_s/norm_stats.json` | `f74a82ca84aa3cd5577bcb0f29f37c386268c787d7c3201e4e09b8154ea4a1b6` |
| `manifests/city_0_newyork_3p5_s/train_2000.pt` | `028e16a072ba6fe9589ce43ae8b02a4d084e3e125ab9371c6e9ef22dc2b8c7a6` |
| `manifests/city_8_dallas_3p5_s/train.h5` | `ec9ff0c5df4feb3549bff3ef6341a1949363c40ce624976d9b272db73db71ef5` |
| `manifests/city_8_dallas_3p5_s/val.h5` | `6981d2b15bb71a2ee8f74f11210f8d891efed1b177c2922c54d267fd9a0d4bad` |
| `manifests/city_8_dallas_3p5_s/test.h5` | `9876d5b8a65ee42f99b084a44f22f4e3be5be5aee6a4742c9783b144437daa33` |
| `manifests/city_8_dallas_3p5_s/norm_stats.json` | `f4bf3a71288e74d364f463ee85ef0377bd43b8faccfcf5b7fbda5afcc5fc7294` |
| `manifests/city_8_dallas_3p5_s/train_2000.pt` | `670b0266756b308e56991bb54f9581adbef7fec51525b74217fd0d4b4c7638ca` |
| `manifests/city_12_fortworth_3p5_s/train.h5` | `d7b0a591f60f10b0000a45c8d18320b78d0d8264cdb9d03401bfcab1288aaae4` |
| `manifests/city_12_fortworth_3p5_s/val.h5` | `d13e41ae0fe852e58a01f692c55b7c6a89e8143d82430a84c89b83657c9d4623` |
| `manifests/city_12_fortworth_3p5_s/test.h5` | `cde6c439f7ec918d8fa77ce524e8d096737b5b5824bcceb851919e1522ac5681` |
| `manifests/city_12_fortworth_3p5_s/norm_stats.json` | `d400be7f4fcc59cda63b0eafe88bd3330a8a3866c19a9ac6ae59c86c97fb3ef5` |
| `manifests/city_12_fortworth_3p5_s/train_2000.pt` | `ce7a01b737021521f215162d904cae70c6079b3951bbbf3bbffcdbb52e4fba7d` |
| `manifests/city_18_denver_3p5_s/train.h5` | `0de729b9165b97f67835c04745b5e604d495fcd3c1c3fa27e8203460905e58cd` |
| `manifests/city_18_denver_3p5_s/val.h5` | `d3eb526e35fe4c57eace7d4f8628c496f87ce7f8ae17f83d625458fb80daa8ca` |
| `manifests/city_18_denver_3p5_s/test.h5` | `e4cdd747079b0027bb44533395881fa3f931fa14182f0fe12e4567f95d925c2f` |
| `manifests/city_18_denver_3p5_s/norm_stats.json` | `6d9fd65de6bccb346b29204726392fd710276a6a2e763245c4c80794eb96cd7d` |
| `manifests/city_18_denver_3p5_s/train_2000.pt` | `a08371d2ccc1d8cc5a5494880bcb388db155ff7c5bf589fd163e74e94363325c` |

---

## Citation

```bibtex
@inproceedings{citympc2026,
  title     = {CityMPC: Environment-Aware Multi-Path Channel Prediction via Conditional Generation},
  booktitle = {Advances in Neural Information Processing Systems (NeurIPS)},
  year      = {2026},
  note      = {Evaluations \& Datasets Track}
}
```

## License

Apache 2.0.