---
pretty_name: PowerZoo Dataset
license: other
license_name: mixed-upstream-licences
language:
- en
size_categories:
- 1M<n<10M
tags:
- power-systems
- electricity
- reinforcement-learning
- benchmark
- time-series
- smart-grid
- data-center
- load-forecasting
- opf
task_categories:
- time-series-forecasting
- tabular-regression
- reinforcement-learning
configs:
- config_name: aemo_5min_demand
data_files: parquet/AEMO_5min_Demand_2025_2026.parquet
- config_name: aemo_forecast
data_files: parquet/AEMO_Forecast_vs_Actual_2025.parquet
- config_name: ausgrid_zone_substation
data_files: parquet/Ausgrid_Zone_Substation_FY25_imputed_15min.parquet
- config_name: gb_forecast_actual_demand
data_files: parquet/GB_Forecast_Actual_Demand_2023_2025_30min.parquet
- config_name: gb_gen_by_type
data_files: parquet/GB_Gen_by_Type_2016_2025_30min.parquet
- config_name: gb_neso_demand
data_files: parquet/GB_NESO_Demand_2009_2025_30min.parquet
- config_name: gb_market_mid
data_files: parquet/MID_GB_30min_aligned_to_gen.parquet
- config_name: alibaba_dc_2018
data_files: parquet/alibaba_dc_2018_300s.parquet
- config_name: alibaba_gpu_2020
data_files: parquet/alibaba_gpu_2020_300s.parquet
- config_name: azure_dc_v2
data_files: parquet/azure_dc_v2_300s.parquet
- config_name: google_dc_2019
  data_files: parquet/google_dc_2019_300s.parquet
---

# PowerZoo Dataset
Real-world power-system and data-centre time-series, canonical grid topologies, and JSON manifests linking the two — packaged for reinforcement-learning and forecasting research with the PowerZoo / PowerZooJax benchmark code (released separately).
## 1. What's inside
- Eleven parquet time-series files ingested from public regulator and cloud-provider releases (GB / AU electricity load, generation by fuel, day-ahead forecasts, market mid-prices; Alibaba / Azure / Google data-centre utilisation).
- Fourteen electrical-network case files (Python classes with bus / branch / generator / load tables) covering transmission systems from 5 to 2383 buses and distribution systems from 33 to 533 buses.
- Eleven JSON manifests that map each parquet's raw columns to a shared canonical schema (e.g. `OPERATIONAL_DEMAND → load.actual_mw`), so traces from different sources compose cleanly in one experiment.
## 2. Repository layout

```
PowerZooDataset/
├── README.md            # this file
├── parquet/             # harmonised time-series traces
│   ├── *.parquet        # primary data files
│   └── *.json           # per-file provenance metadata (rows, dtypes, source URL, generation timestamp)
├── manifests/           # loader manifests (column maps, derivations, normalisation)
│   └── <dataset_id>.json
└── powergrid_case/      # electrical-network case definitions (Python)
    ├── CaseBase.py      # ClearCase base class + ext. DataFrame
    ├── transmission/    # HV / sub-transmission test systems
    │   ├── Case5.py, Case14.py, Case29GB.py, Case118.py,
    │   ├── Case300.py, Case552GB.py, Case1354pegase.py, Case2383wp.py
    │   └── __init__.py
    └── distribution/    # MV distribution test feeders
        ├── Case33bw.py, Case118zh.py, Case123.py, Case141.py,
        ├── Case533mt_hi.py, Case533mt_lo.py
        └── __init__.py
```
### 2.1 Parquet traces

| File | Domain | Resolution | Rows | Columns | Size |
|---|---|---|---|---|---|
| `AEMO_5min_Demand_2025_2026.parquet` | AU NEM demand (5 regions) | 5 min | 737,400 | 5 | 6.5 MB |
| `AEMO_Forecast_vs_Actual_2025.parquet` | AU NEM probabilistic forecast vs. actual | 30 min | 89,145 | 10 | 1.6 MB |
| `Ausgrid_Zone_Substation_FY25_imputed_15min.parquet` | NSW zone substations (175 sites) | 15 min | 6,095,040 | 4 | 60 MB |
| `GB_NESO_Demand_2009_2025_30min.parquet` | GB NESO historical demand | 30 min | 285,454 | 22 | 4.7 MB |
| `GB_Forecast_Actual_Demand_2023_2025_30min.parquet` | GB day-ahead forecast vs. actual | 30 min | 48,283 | 3 | 0.8 MB |
| `GB_Gen_by_Type_2016_2025_30min.parquet` | GB generation by fuel type | 30 min | 180,048 | 13 | 6.2 MB |
| `MID_GB_30min_aligned_to_gen.parquet` | GB APX/N2EX mid prices & volumes | 30 min | 48,283 | 6 | 0.8 MB |
| `alibaba_dc_2018_300s.parquet` | Alibaba production cluster (CPU/mem/net/disk) | 5 min | 2,243 | 6 | 0.1 MB |
| `alibaba_gpu_2020_300s.parquet` | Alibaba GPU cluster (GPU/CPU util) | 5 min | 415 | 3 | <0.1 MB |
| `azure_dc_v2_300s.parquet` | Azure VM trace v2 (CPU, assigned memory) | 5 min | 8,640 | 3 | 0.2 MB |
| `google_dc_2019_300s.parquet` | Google Borg 2019 (CPU/mem/CPI) | 5 min | 8,064 | 5 | 0.3 MB |
Each `*.parquet` ships with a sibling `*.json` capturing source URL, source organisation, generation timestamp, exact column dtypes, region/category enumerations and (where applicable) timezone conventions.
### 2.2 Manifests

Manifests in `manifests/` are the contract used by PowerZoo / PowerZooJax loaders. A manifest declares:

- `parquet_file` and the matching `metadata_json`
- `column_map`: rename rules from raw column → canonical schema (e.g. `OPERATIONAL_DEMAND → load.actual_mw`)
- `index_map`: which raw columns serve as `datetime` / `region` / `issue_time` / `target_time`
- `derived`: closed-form derivations from raw columns (e.g. `wind.available_mw = "Wind Offshore + Wind Onshore"`)
- `normalize`: per-channel scaling factors applied at load time
- `time_mode`: `calendar` (absolute UTC) or `profile` (cyclical, anchored to `data_epoch`)
- `region_values`, `date_range`, `source_url`, `source_organization`
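Illustrative only: a minimal manifest sketch using the field names above. The real manifests in `manifests/` are authoritative; the raw column names and values here are placeholders, not copied from any shipped file.

```json
{
  "parquet_file": "AEMO_5min_Demand_2025_2026.parquet",
  "metadata_json": "AEMO_5min_Demand_2025_2026.json",
  "column_map": { "OPERATIONAL_DEMAND": "load.actual_mw" },
  "index_map": { "datetime": "RAW_DATETIME_COL", "region": "RAW_REGION_COL" },
  "normalize": { "load.actual_mw": 1.0 },
  "time_mode": "calendar",
  "source_organization": "AEMO"
}
```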
The 11 manifests cover every parquet file shipped here. Note that `gb_neso_demand` uses a two-column index (`SETTLEMENT_DATE` + `SETTLEMENT_PERIOD` 1–50) instead of a single datetime column; the manifest's `datetime_recipe` field documents the exact reconstruction (Europe/London tz-localise then convert to UTC; SP=49–50 absorb the autumn clock-change repeat).
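That reconstruction can be sketched as follows. This is a simplified illustration of the recipe described above, assuming the two raw column names; the manifest's `datetime_recipe` remains the source of truth:

```python
import pandas as pd

# Toy frame mimicking the two-column settlement index.
df = pd.DataFrame({
    "SETTLEMENT_DATE": ["2025-03-30", "2025-03-30"],
    "SETTLEMENT_PERIOD": [1, 2],
})

# Local midnight in Europe/London, converted to UTC, then offset by
# 30 minutes per settlement period. Counting elapsed half-hours from
# local midnight in UTC terms handles clock-change days naturally.
midnight_utc = (
    pd.to_datetime(df["SETTLEMENT_DATE"])
    .dt.tz_localize("Europe/London")
    .dt.tz_convert("UTC")
)
df["datetime_utc"] = midnight_utc + pd.to_timedelta(
    30 * (df["SETTLEMENT_PERIOD"] - 1), unit="min"
)
print(df["datetime_utc"].tolist())
```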
### 2.3 Power-grid case files

`powergrid_case/` contains a unified Python representation for both transmission and distribution test systems. Every case subclasses `ClearCase` and exposes four `pandas.DataFrame` tables in MATPOWER-compatible units (MW, MVAr, p.u.):

- `nodes`: `id, type (1=PQ / 2=PV / 3=Ref), Pd, Qd, x, y`
- `units`: `id, bus_id, mc_a, mc_b, mc_c, p_max, p_min` (quadratic cost + capacity)
- `lines`: `id, from, to, x, floor, cap` (reactance + thermal limits)
- `loads`: `id, bus_id, mc_a, mc_b, mc_c, d_max, d_min` (price-responsive demand)

Each file declares `BUS_COUNT`, `VOLTAGE_LEVEL`, `SOURCE` and `DESCRIPTION` as class-level metadata. The values below are reproduced verbatim from those declarations (so the file itself is the source of truth):
| File | Voltage | Buses | SOURCE | DESCRIPTION |
|---|---|---|---|---|
| `transmission/Case5.py` | HV | 5 | MATPOWER | IEEE 5-bus test system |
| `transmission/Case14.py` | HV | 14 | MATPOWER | IEEE 14-bus test system |
| `transmission/Case29GB.py` | HV | 29 | custom | GB reduced 29-bus transmission network |
| `transmission/Case118.py` | HV | 118 | MATPOWER | IEEE 118-bus test system |
| `transmission/Case300.py` | HV | 300 | MATPOWER | IEEE 300-bus test system |
| `transmission/Case552GB.py` | HV | 552 | GB | Great Britain 552-bus transmission (distinct from 29-bus Case29GB) |
| `transmission/Case1354pegase.py` | HV | 1354 | MATPOWER | European PEGASE 1354-bus system |
| `transmission/Case2383wp.py` | HV | 2383 | MATPOWER | Polish 2383-bus winter peak system |
| `distribution/Case33bw.py` | MV | 33 | MATPOWER | IEEE 33-bus Baran & Wu radial distribution |
| `distribution/Case118zh.py` | MV | 118 | MATPOWER | 118-bus Zhang distribution system |
| `distribution/Case123.py` | MV | 123 | MATPOWER | IEEE 123-bus three-phase distribution |
| `distribution/Case141.py` | MV | 141 | MATPOWER | 141-bus Caracas distribution system (Khodr et al., EPSR 2008) |
| `distribution/Case533mt_hi.py` | MV | 533 | MATPOWER | 533-bus Swedish distribution (high load) |
| `distribution/Case533mt_lo.py` | MV | 533 | MATPOWER | 533-bus Swedish distribution (low load) |
## 3. Loading

### 3.1 Direct parquet load (no extra dependency on PowerZoo)

```python
import pandas as pd
from huggingface_hub import hf_hub_download

path = hf_hub_download(
    repo_id="PowerZooJax/PowerZooDataset",
    repo_type="dataset",
    filename="parquet/AEMO_5min_Demand_2025_2026.parquet",
)
df = pd.read_parquet(path)
print(df.head())
```
### 3.2 Via `datasets` (per-config)

```python
from datasets import load_dataset

ds = load_dataset(
    "PowerZooJax/PowerZooDataset",
    name="aemo_5min_demand",
    split="train",
)
```
### 3.3 Via the PowerZooJax DataLoader

If you have the (separately released) benchmark package installed, instantiate `DataLoader` with this repository's `parquet/` and `manifests/` directories:

```python
from powerzoojax.data import DataLoader

loader = DataLoader(
    data_dir="/path/to/PowerZooDataset/parquet",
    manifest_dir="/path/to/PowerZooDataset/manifests",
)
print(loader.list_available_datasets())
df = loader.load_actual_series("aemo_5min_demand")
```
### 3.4 Power-grid cases

```python
from powerzoo.case.distribution.Case141 import Case141

case = Case141()
case.check()
case.get_node_ptdf()
```
## 4. Schema conventions

- **Time stamps.** Calendar-mode parquet files store timestamps as `datetime64[ns, UTC]`, with the exception of `GB_NESO_Demand_2009_2025_30min.parquet`, which uses two columns (`SETTLEMENT_DATE` + `SETTLEMENT_PERIOD`); see the `datetime_recipe` in `manifests/gb_neso_demand.json`. The Ausgrid metadata explicitly declares `timezone_local = "Australia/Sydney"` and `timezone_stored = "UTC"`; for the other calendar sources, only the stored UTC representation is documented in metadata.
- **Profile-mode traces.** The data-centre traces (Alibaba / Azure / Google) are tagged `time_mode = "profile"` and `cyclical = true` in their manifests, anchored to a `data_epoch` (Alibaba DC: `2018-01-06`, Alibaba GPU: `2020-07-01`, Azure: `2019-01-01`, Google: `2019-05-01`). They are intended as periodic exogenous signals, not absolute calendar series.
- **Imputation.** `Ausgrid_Zone_Substation_FY25_imputed_15min.parquet` contains imputed values (as indicated by its filename suffix and source path); the imputation method is not documented inside the metadata shipped here. Users requiring raw observations should pull from the upstream Ausgrid release.
- **Units.** Power columns in the parquet traces are in MW. The GB market-mid file uses prices in `mid_price_*` columns (the upstream Elexon convention is £/MWh; consult the source URL for any unit caveats). Data-centre utilisation columns are 0–100 (percent) before the manifest's `normalize` factor is applied.
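One way to use a profile-mode trace alongside calendar-mode data is to anchor it at its `data_epoch`. A minimal sketch with a synthetic 5-minute profile (column name and values are invented; only the Azure epoch comes from the manifests described above):

```python
import pandas as pd

# A toy 5-minute "profile" trace: a periodic signal indexed by elapsed
# time, not by absolute calendar timestamps.
profile = pd.Series(
    [20.0, 35.0, 50.0, 35.0],
    index=pd.timedelta_range(start="0min", periods=4, freq="5min"),
    name="cpu_util_pct",
)

# Anchor each sample as an offset from the manifest's data_epoch
# (Azure's is 2019-01-01) to obtain calendar timestamps.
data_epoch = pd.Timestamp("2019-01-01", tz="UTC")
calendar = profile.copy()
calendar.index = data_epoch + profile.index
print(calendar.index[0], calendar.iloc[0])
```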
## 5. Source acknowledgements & licensing

This repository redistributes derivative datasets built from publicly accessible upstream releases. Each upstream is governed by its own terms; users are responsible for complying with them. The `source_url` and `source_organization` fields in every `parquet/*.json` and `manifests/*.json` give the canonical pointer.
| Trace | Upstream | Original portal |
|---|---|---|
| AEMO 5-min demand & POE forecasts | Australian Energy Market Operator | https://visualisations.aemo.com.au/aemo/nemweb/ |
| Ausgrid zone-substation load | Ausgrid | https://www.ausgrid.com.au/Industry/Our-Research/Data-to-share/Distribution-zone-substation-data |
| GB demand actual & day-ahead forecast | Elexon (BMRS) | https://data.elexon.co.uk/bmrs/api/v1 |
| GB generation by fuel type | Elexon (BMRS) | https://data.elexon.co.uk/bmrs/api/v1/generation/actual/per-type |
| GB historical demand 2009–2025 | National Energy System Operator (NESO) | https://www.neso.energy/data-portal/historic-demand-data |
| Alibaba cluster trace 2018 | Alibaba Group | https://github.com/alibaba/clusterdata/tree/master/cluster-trace-v2018 |
| Alibaba GPU trace 2020 | Alibaba Group | https://github.com/alibaba/clusterdata/tree/master/cluster-trace-gpu-v2020 |
| Azure public dataset v2 | Microsoft | https://github.com/Azure/AzurePublicDataset |
| Google cluster data 2019 | Google | https://github.com/google/cluster-data |
The packaging artefacts authored for this benchmark (the manifest schema, case-file Python representations, schema harmonisation logic) are intended to be released under a permissive open-source licence at camera-ready time; the underlying parquet data inherits the upstream licence in every case. The canonical copy is the upstream URL above.
All series are aggregated at substation, regional, or national level; no PII.
## 6. Intended use & limitations

**Intended use.** Reinforcement-learning research, load / generation forecasting, OPF benchmarking, distribution-system control, data-centre power-shaping, demand-response studies.

**Limitations.**
- Static snapshot at release time; upstream sources continue to publish. Redownload from the URLs in §5 for live data.
- Geographic scope: Great Britain and Australia only.
- The Ausgrid trace is imputed by the upstream publisher; the imputation procedure is not documented here.
- Data-centre traces ship only the columns each manifest maps, at 5-minute resolution; upstream releases offer more fields at finer cadences.
- Grid-case parameter values are consistent with the named source systems but are not byte-identical to any specific upstream release.
Not intended for real-time grid control, retail tariff design, or settlement of real markets.
## 7. File integrity

To regenerate provenance metadata locally:

```bash
python -c "
import pyarrow.parquet as pq, glob, os
for p in sorted(glob.glob('parquet/*.parquet')):
    m = pq.read_metadata(p)
    print(f'{os.path.basename(p):60s} rows={m.num_rows:>10,} cols={m.num_columns:>3} bytes={os.path.getsize(p):>10,}')
"
```
Expected output is reproduced in §2.1.