# Wind Flow Over Complex Terrain Dataset
A large-scale dataset of steady-state RANS wind flow simulations over real-world complex terrain, designed for training machine learning models for wind resource assessment and atmospheric flow prediction.
## Overview
| Parameter | Value |
|---|---|
| Number of terrain locations | ~1000 |
| Wind directions per terrain | 2 (random) |
| Total simulation cases | ~10,000 |
| Cropped grid per case | ~298 × 298 × 64 |
| Horizontal resolution (AOI) | ~30 m |
| Vertical extent | ~500 m AGL |
| Flow variables | U (3-component), p, k, ε |
| Surface variables | DEM, roughness (z₀), height AGL |
| Reference wind speed | 10 m/s at 100 m height |
| Atmospheric stability | Neutral |
| Solver | OpenFOAM simpleFoam (RANS, k-ε) |
| Format | Zarr (xarray-compatible) |
## Quick Start

### Load a single case

```python
import xarray as xr

ds = xr.open_zarr("data/case_name.zarr")

# 3D wind field
Ux = ds['Ux'].values  # (ni, nj, nk) array, m/s
Uy = ds['Uy'].values
Uz = ds['Uz'].values

# Terrain
dem = ds['dem'].values       # (ni, nj), meters above sea level
z0 = ds['roughness'].values  # (ni, nj), aerodynamic roughness in meters

# Height above ground
h_agl = ds['h_agl'].values   # (ni, nj, nk), meters

# Metadata
print(ds.attrs['case_id'])
print(ds.attrs['rotation_deg'])  # wind direction
print(ds.attrs['converged'])     # simulation convergence flag
```
### Load from Hugging Face directly

A Zarr store is a directory of many small files, so fetch it with `snapshot_download` and a pattern filter (`hf_hub_download` retrieves single files only):

```python
from huggingface_hub import snapshot_download
import xarray as xr

# Download one case's Zarr store into a local cache directory
snapshot_download(
    repo_id="souravsud/wind-terrain-cfd",
    repo_type="dataset",
    allow_patterns="data/case_name.zarr/*",
    local_dir="./cache/",
)
ds = xr.open_zarr("./cache/data/case_name.zarr")
```
### PyTorch DataLoader

See `examples/02_dataloader_pytorch.py` for a ready-to-use `torch.utils.data.Dataset` class.
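For orientation, PyTorch map-style datasets only require `__len__` and `__getitem__`, so a minimal duck-typed sketch can be shown without importing torch. This is not the actual example script; the case paths and variable list below are illustrative assumptions:

```python
import numpy as np

class WindCaseDataset:
    """Minimal map-style dataset sketch, compatible with
    torch.utils.data.DataLoader (which only needs __len__/__getitem__).
    Case paths and the default variable list are illustrative assumptions."""

    def __init__(self, case_paths, variables=("Ux", "Uy", "Uz")):
        self.case_paths = list(case_paths)
        self.variables = tuple(variables)

    def __len__(self):
        return len(self.case_paths)

    def __getitem__(self, idx):
        import xarray as xr  # deferred import: class stays usable without xarray
        ds = xr.open_zarr(self.case_paths[idx])
        # Stack the selected variables into one (C, ni, nj, nk) float32 array
        fields = np.stack([ds[v].values for v in self.variables]).astype(np.float32)
        return fields, float(ds.attrs["rotation_deg"])
```

Wrapping this in a `DataLoader` then gives batching and shuffling for free.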
## Data Description

### Per-case Zarr store contents

| Variable | Shape | Units | Description |
|---|---|---|---|
| `X` | (ni, nj, nk) | m | UTM easting of cell centre |
| `Y` | (ni, nj, nk) | m | UTM northing of cell centre |
| `Z` | (ni, nj, nk) | m | Elevation above MSL |
| `Ux` | (ni, nj, nk) | m/s | Velocity x-component (UTM east) |
| `Uy` | (ni, nj, nk) | m/s | Velocity y-component (UTM north) |
| `Uz` | (ni, nj, nk) | m/s | Velocity z-component (vertical) |
| `p` | (ni, nj, nk) | m²/s² | Kinematic pressure (p/ρ) |
| `k` | (ni, nj, nk) | m²/s² | Turbulent kinetic energy |
| `epsilon` | (ni, nj, nk) | m²/s³ | Turbulent dissipation rate |
| `dem` | (ni, nj) | m | Ground elevation (MSL) |
| `roughness` | (ni, nj) | m | Aerodynamic roughness length z₀ |
| `h_agl` | (ni, nj, nk) | m | Height above ground level |
### Coordinate system

- **X, Y**: UTM coordinates. The EPSG code is stored in `ds.attrs['utm_epsg']`.
- **Z**: absolute elevation above mean sea level (MSL), not height above ground.
- **h_agl**: pre-computed height above ground: `h_agl[i,j,k] = Z[i,j,k] - dem[i,j]`.
- The mesh is terrain-following (curvilinear): at each (i, j) column, Z increases with k but follows the terrain surface, and the horizontal coordinates vary slightly with k.
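The `h_agl` relation can be reproduced with a single NumPy broadcast; the values below are synthetic toy numbers, not dataset data:

```python
import numpy as np

# Toy column: 2x2 horizontal grid, 3 vertical levels (synthetic values)
dem = np.array([[100.0, 110.0],
                [120.0, 130.0]])           # (ni, nj), ground elevation, m MSL
offsets = np.array([10.0, 50.0, 200.0])    # per-level height above ground, m
Z = dem[:, :, None] + offsets              # (ni, nj, nk) cell-centre elevation

# h_agl[i, j, k] = Z[i, j, k] - dem[i, j], broadcast over k
h_agl = Z - dem[:, :, None]

print(h_agl[0, 0])  # -> [ 10.  50. 200.]
```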
### Velocity scaling

All simulations use a reference velocity of 10 m/s at 100 m height under neutral atmospheric stability. For neutral, fully rough, high-Reynolds-number flow, the incompressible RANS solution is Reynolds-number independent, so the fields can be rescaled to any desired reference wind speed:

```python
V_ref_desired = 8.0  # m/s
scale = V_ref_desired / 10.0

U_scaled = U_dataset * scale
p_scaled = p_dataset * scale**2
k_scaled = k_dataset * scale**2
epsilon_scaled = epsilon_dataset * scale**3
```

This is a feature rather than a limitation: one set of simulations effectively covers all wind speeds.
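The scaling rules can be wrapped in a small helper; this is a sketch, and the function name and signature are ours, not part of the dataset tooling:

```python
import numpy as np

V_REF_DATASET = 10.0  # m/s at 100 m, the dataset's reference speed

def scale_fields(U, p, k, epsilon, v_ref_desired):
    """Rescale neutral-stability fields from the 10 m/s reference to
    v_ref_desired, using the exponents from the scaling rules above."""
    s = v_ref_desired / V_REF_DATASET
    return U * s, p * s**2, k * s**2, epsilon * s**3

# Toy values, not dataset data
U, p, k, eps = scale_fields(np.array([10.0]), np.array([1.0]),
                            np.array([2.0]), np.array([0.5]), 8.0)
# U ~ 8.0, p ~ 0.64, k ~ 1.28, eps ~ 0.256
```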
### Wind direction
Each case has a specific wind direction stored in ds.attrs['rotation_deg']. This is the angle (in degrees) by which the terrain was rotated to align the inlet boundary with the desired wind direction. The velocity components (Ux, Uy) are in the rotated UTM frame corresponding to that case.
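If the horizontal components are needed in an unrotated frame, a standard 2D rotation applies. The counter-clockwise sign convention below is an assumption; verify it against the meaning of `rotation_deg` for this dataset before use:

```python
import numpy as np

def rotate_uv(ux, uy, angle_deg):
    """Rotate horizontal velocity components by angle_deg, counter-clockwise
    (assumed convention -- check against this dataset's rotation_deg docs)."""
    t = np.deg2rad(angle_deg)
    ux_r = ux * np.cos(t) - uy * np.sin(t)
    uy_r = ux * np.sin(t) + uy * np.cos(t)
    return ux_r, uy_r

# An eastward unit vector rotated 90 deg CCW becomes northward
ux_r, uy_r = rotate_uv(1.0, 0.0, 90.0)
```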
### Convergence quality

Each case includes convergence information:

- `ds.attrs['converged']`: Boolean flag (True if all residuals < 10⁻³)
- `ds.attrs['residual_Ux']`, `ds.attrs['residual_p']`, etc.: final residual per field
- `ds.attrs['iterations']`: number of solver iterations
The metadata/case_index.csv file contains convergence data for all cases, allowing easy filtering.
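Filtering with pandas might look like the sketch below; it uses an in-memory stand-in for `metadata/case_index.csv`, with column names taken from the master-index description (verify them against the real file):

```python
import io
import pandas as pd

# Toy stand-in for metadata/case_index.csv (columns assumed from the index docs)
csv = io.StringIO(
    "case_id,lat,lon,wind_dir,converged\n"
    "case_0001,45.1,6.2,130,True\n"
    "case_0002,45.1,6.2,310,False\n"
)
index = pd.read_csv(csv)  # "True"/"False" columns are parsed as bool

# Keep only converged cases
converged = index[index["converged"]]
print(list(converged["case_id"]))  # -> ['case_0001']
```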
## Dataset Structure

```
wind-terrain-cfd/
├── README.md                    # This file
├── data/
│   ├── case_0001.zarr/          # One Zarr store per simulation
│   ├── case_0002.zarr/
│   └── ...
├── metadata/
│   ├── case_index.csv           # Master index (lat, lon, wind_dir, converged, ...)
│   └── dataset_summary.json     # Aggregate statistics
└── examples/
    ├── 01_load_single_case.py
    ├── 02_dataloader_pytorch.py
    └── 03_velocity_scaling.py
```
## Generation Pipeline
The dataset was generated using an automated pipeline:
- **Terrain fetching**: `terrain-fetcher` downloads DEM (Copernicus GLO-30) and land cover (ESA WorldCover) data
- **Mesh generation**: `terrain_following_mesh_generator` builds a structured terrain-following mesh for OpenFOAM
- **Boundary conditions**: `ABL_BC_generator` produces neutral atmospheric boundary layer inlet profiles
- **Job management**: `taskManager` handles SLURM job submission and monitoring
- **Orchestration**: `CFD-dataset` coordinates the end-to-end pipeline
## Citation

If you use this dataset in your research, please cite:

```bibtex
@dataset{sud2026windterrain,
  author    = {Sud, Sourav},
  title     = {Wind Flow Over Complex Terrain Dataset},
  year      = {2026},
  publisher = {Hugging Face},
  url       = {https://huggingface.co/datasets/souravsud/wind-terrain-cfd}
}
```
## License
This dataset is released under the CC BY 4.0 license.