# PDEBench FNO Re-evaluation: Prediction Tensors
Test-set prediction arrays from *The Unrealized Potential of Fourier Neural Operators: A Systematic Re-evaluation of PDEBench Baselines* (NeurIPS 2026 E&D Track submission).
## File layout
For all standard tests (1–27 and 29, plus the three supplementary 2D CFD configurations), each `.npz` file contains:
- `preds`: model predictions, shape `[N_test, spatial_dims..., T, nc]`
- `targets`: ground truth, same shape
- `per_sample`: per-sample Frobenius/global nRMSE, shape `[N_test]`, float32 (this is the global variant — see Remark 1 in the paper). For the supplementary CFD configs, `per_sample_pertimestep` is also stored.
- `initial_step`: number of input timesteps to skip during nRMSE evaluation
Storage dtype is float32 for all tests except Test 29, which uses float16 for the prediction/target tensors (1000 × 128 × 128 × 16 × 4) to fit within HuggingFace's 5 GB per-file limit. The `per_sample` field is float32 in every file. The end-to-end relative drift introduced by the fp16 quantisation of Test 29's tensors is approximately 0.55 % — small relative to all reported improvement factors.
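As a quick sanity check, a standard file can be loaded and its schema inspected directly. The sketch below uses Test 13 purely as an example (any test number works) and touches only the fields documented above:

```python
import numpy as np

# Minimal schema check for a standard test file (Test 13 chosen as an
# example; substitute any test number).
d = np.load("test_13_predictions.npz")
print(d["preds"].shape, d["preds"].dtype)      # [N_test, spatial..., T, nc]
print(d["targets"].shape, d["targets"].dtype)  # same shape as preds
print(d["per_sample"].shape)                   # [N_test], float32
print(int(d["initial_step"]))                  # input timesteps to skip
# Note: Test 29 stores preds/targets as float16; cast to float32
# before computing metrics.
```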
## Test 28 (2D incompressible Navier-Stokes, exploratory) — chunked
Test 28 is the vorticity-Poisson FNO discussed in Section 8 of the paper. Its full output (100 test samples × 101 timesteps × 512 × 512 grid, both prediction and target) is split across 10 chunks of 10 samples each, plus a top-level manifest:
```
test_28_predictions_manifest.json
test_28_predictions_chunk_00.npz   # samples [0, 10)
test_28_predictions_chunk_01.npz   # samples [10, 20)
...
test_28_predictions_chunk_09.npz   # samples [90, 100)
```
Each chunk carries:
- `omega_pred`, `omega_target`: vorticity at fp16, shape `[10, 101, 512, 512]`
- `per_sample`: velocity-space nRMSE at fp32, shape `[10]`
- `sample_indices`: `[start, ..., start+9]` — the global sample indices in this chunk
- `chunk_index`, `total_chunks`: bookkeeping
- `H, W, dx, dy, ch_mean, ch_std, initial_step`: physics + normalisation metadata
- `note`: full reproduction recipe
Storing vorticity (rather than velocity) keeps each chunk under 1 GB and lets reviewers run their own DST Poisson + finite-difference velocity recovery to verify the boundary handling and operator conditioning analysed in the paper.
The full Test 28 mean nRMSE is recovered exactly by concatenating the per-chunk `per_sample` arrays:
```python
import numpy as np

mean = np.concatenate([np.load(f"test_28_predictions_chunk_{i:02d}.npz")["per_sample"]
                       for i in range(10)]).mean()
print(f"Test 28 mean nRMSE: {mean:.6e}")  # 2.461890e-01
```
## Verify nRMSE independently
The paper's headline metric is per-timestep nRMSE (Equation 1, Remark 1). The `per_sample` array stored in each `.npz` is the Frobenius/global variant. The snippet below computes both. Use the per-timestep value when comparing against paper Table 1; the Frobenius value will match `per_sample.mean()`.
```python
import numpy as np

d = np.load("test_13_predictions.npz")
pred, target = d["preds"], d["targets"]
init_step = int(d["initial_step"])
p = pred[..., init_step:, :].astype(np.float32)   # cast in case of fp16 storage
t = target[..., init_step:, :].astype(np.float32)
B = p.shape[0]

# (a) Frobenius / global nRMSE — matches the stored `per_sample` array.
fro = (np.sqrt(((p - t) ** 2).reshape(B, -1).sum(1))
       / np.sqrt((t ** 2).reshape(B, -1).sum(1) + 1e-20)).mean()

# (b) Per-timestep nRMSE — the headline metric used in paper Table 1.
# Per-timestep, per-channel spatial nRMSE, averaged over (T, C) within
# each sample, then averaged over samples.
ndim = p.ndim
T_pred = p.shape[-2]
nc = p.shape[-1]
accum = np.zeros(B, dtype=np.float64)
n_terms = 0
for ti in range(T_pred):
    for ci in range(nc):
        if ndim == 4:    # 1D autoregressive: [B, X, T, C]
            pp, tt = p[:, :, ti, ci], t[:, :, ti, ci]
        elif ndim == 5:  # 2D autoregressive: [B, H, W, T, C]
            pp = p[:, :, :, ti, ci].reshape(B, -1)
            tt = t[:, :, :, ti, ci].reshape(B, -1)
        else:
            raise ValueError(f"unexpected ndim={ndim}")
        err = np.sqrt(((pp - tt) ** 2).sum(axis=-1))
        nrm = np.sqrt((tt ** 2).sum(axis=-1) + 1e-20)
        accum += err / nrm
        n_terms += 1
pertimestep = (accum / n_terms).mean()

print(f"  Frobenius nRMSE   : {fro:.4e} (matches stored per_sample.mean)")
print(f"  per-timestep nRMSE: {pertimestep:.4e} (matches paper Table 1)")
print(f"  Stored per_sample.mean: {d['per_sample'].mean():.4e}")
```
For static problems (Darcy, Tests 21–25, no time axis), per-timestep nRMSE collapses to per-sample Frobenius; both expressions above give the same value.
The artifact's verification script also displays both metrics: `python standalone/evaluate_predictions.py --json-only --ci`.
For Test 28, recompute on a single chunk:
```python
import numpy as np

d = np.load("test_28_predictions_chunk_00.npz")
omega_pred = d["omega_pred"].astype(np.float32)
omega_target = d["omega_target"].astype(np.float32)
init_step = int(d["initial_step"])
ch_mean = d["ch_mean"]
ch_std = d["ch_std"]
H, W = int(d["H"]), int(d["W"])
dx, dy = float(d["dx"]), float(d["dy"])

# Or trust the per_sample stored in the chunk:
print(f"Chunk 00 mean nRMSE: {d['per_sample'].mean():.4e}")

# Or run the full DST Poisson + central-difference recovery using the
# formulas in paper Appendix Test 28 to verify end-to-end (recipe in
# the chunk's `note` field).
```
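For orientation, a minimal sketch of such a recovery is below. It is not the paper's exact recipe (that lives in the chunk's `note` field and the paper's appendix): it assumes the 512 × 512 grid holds the interior points of a homogeneous-Dirichlet box, solves ∇²ψ = −ω with a type-I DST, and recovers u = ∂ψ/∂y, v = −∂ψ/∂x by central differences. The sign convention and boundary treatment should be checked against the `note` field.

```python
import numpy as np
from scipy.fft import dstn, idstn

def velocity_from_vorticity(omega, dx, dy):
    """Illustrative sketch only: solve lap(psi) = -omega on a
    homogeneous-Dirichlet box via DST-I, then u = d(psi)/dy and
    v = -d(psi)/dx. Grid/BC assumptions may differ from the paper's."""
    H, W = omega.shape  # assumes axis 0 = y, axis 1 = x
    # Eigenvalues of the 1D second-difference operator under DST-I,
    # in the positive-definite convention (-lap -> lam_y + lam_x).
    lam_y = (2 - 2 * np.cos(np.pi * np.arange(1, H + 1) / (H + 1))) / dy**2
    lam_x = (2 - 2 * np.cos(np.pi * np.arange(1, W + 1) / (W + 1))) / dx**2
    omega_hat = dstn(omega, type=1)
    psi_hat = omega_hat / (lam_y[:, None] + lam_x[None, :])
    psi = idstn(psi_hat, type=1)
    u = np.gradient(psi, dy, axis=0)    # u =  d(psi)/dy
    v = -np.gradient(psi, dx, axis=1)   # v = -d(psi)/dx
    return u, v

# e.g. first sample, first post-input frame (un-normalising with
# ch_mean/ch_std first may be required -- see the `note` field):
u, v = velocity_from_vorticity(omega_pred[0, init_step], dx, dy)
```

Comparing (u, v) recovered from `omega_pred` against the same recovery from `omega_target` should then reproduce the velocity-space `per_sample` values, up to the assumptions noted above.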
## Seed-variance ablation (subdirectory `seed_ablation/`)
For four borderline test rows, the dataset additionally hosts prediction tensors at two further training seeds (123, 456), so that training-seed variance can be quantified independently of the held-out-sample bootstrap CIs:
```
seed_ablation/test_11_seed123_predictions.npz   # 1D Burgers ν=1.0
seed_ablation/test_11_seed456_predictions.npz
seed_ablation/test_24_seed123_predictions.npz   # 2D Darcy β=10
seed_ablation/test_24_seed456_predictions.npz
seed_ablation/test_25_seed123_predictions.npz   # 2D Darcy β=100
seed_ablation/test_25_seed456_predictions.npz
seed_ablation/test_26_seed123_predictions.npz   # 2D Diff-React (43.7× headline)
seed_ablation/test_26_seed456_predictions.npz
```
Each file follows the same npz schema as the standard tests (`preds`, `targets`, `per_sample`, `initial_step`). The seed-42 results for the same rows live at the headline paths (`test_NN_predictions.npz`).
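As an illustration, the per-seed mean nRMSE for one row can be compared directly from these files. The snippet below uses Test 26 and summarises seed spread with a sample standard deviation, which is one simple summary; the paper's multi-seed table comes from `evaluate_predictions.py --seed-ablation`.

```python
import numpy as np

# Mean of the stored (global-variant) per_sample nRMSE, per training
# seed, for Test 26. Seed 42 is the headline file; 123/456 live under
# seed_ablation/.
paths = {
    42:  "test_26_predictions.npz",
    123: "seed_ablation/test_26_seed123_predictions.npz",
    456: "seed_ablation/test_26_seed456_predictions.npz",
}
means = {s: float(np.load(p)["per_sample"].mean()) for s, p in paths.items()}
for s, m in means.items():
    print(f"seed {s:>3}: mean nRMSE = {m:.4e}")
# Sample std over the three seed means -- one rough spread summary.
print(f"across-seed std: {np.std(list(means.values()), ddof=1):.2e}")
```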
## Code repository
The companion code repository contains a single-command verification script:
```bash
python evaluate_predictions.py --json-only             # main 24-test summary
python evaluate_predictions.py --json-only --ci        # add bootstrap CIs
python evaluate_predictions.py --seed-ablation --ci    # multi-seed table
python evaluate_predictions.py --catalog               # list all available tests
python evaluate_predictions.py --all                   # download + recompute everything
```
## Data

PDEBench datasets: DaRUS [doi:10.18419/darus-2986](https://doi.org/10.18419/darus-2986)