```text
# Core scientific stack
numpy
scipy
matplotlib
# JAX — install the appropriate backend wheel for your hardware:
# Linux + CUDA: pip install "jax[cuda12]"
# Apple Silicon (experimental Metal backend): pip install jax-metal
# CPU-only fallback: pip install jax
jax
# Notebook environment
jupyter
jupyterlab
jupytext
ipykernel
ipywidgets
# Plotting / display
seaborn
```
Spinning Up in PDE Solvers
From Lewis Fry Richardson's forecast factory to differentiable physics — a hands-on curriculum tracing the lineage of computational PDE solvers
In 1922, Lewis Fry Richardson published Weather Prediction by Numerical Process — a 236-page proposal to forecast the atmosphere by hand, using a "forecast factory" of 64,000 human computers seated in a vast amphitheatre, each solving a finite-difference cell. His own six-hour test forecast diverged into nonsense. Six years later, Courant, Friedrichs, and Lewy explained why: he had violated a stability bound on the timestep. By 1950, Charney, Fjørtoft, and von Neumann ran the same idea on ENIAC and got a forecast that worked. The scheme they used — forward-time centred-space, FTCS — is the same baseline you'll find in every PDEBench plot today, three quarters of a century later.
A century after Richardson, the same problem has a new numerical machine. In 2023, Google DeepMind's GraphCast — a graph neural network operating on a refined icosahedral mesh of the Earth, trained on 39 years of ERA5 reanalysis — produced 10-day global weather forecasts in under sixty seconds on a single TPU and beat ECMWF's HRES, the gold-standard physics-based deterministic forecast, on roughly 90% of measured variables (Lam et al., Science 382, 1416–1421). In December 2024, GenCast extended the approach with a diffusion-based ensemble of graph networks, again outperforming ECMWF — this time the ensemble system, ENS — on probabilistic skill (Price et al., Nature 637, 84–90 (2025)). Both replace Richardson's hand-cranked primitive equations with a learned operator that takes past atmospheric states to future ones.
The numerical primitive Richardson actually wrote down at each grid cell — once per cell, once per six-hour timestep, six times across his test forecast — was a finite-difference stencil: a small fixed pattern of neighbouring grid values, multiplied by rational coefficients and summed to approximate a partial derivative. The simplest example, the one this curriculum builds in its first notebook, is the three-point second-derivative
u[i-1] − 2·u[i] + u[i+1] = Δx² · ∂²u/∂x² + O(Δx⁴),
which reads three neighbours and approximates the second spatial derivative to second order. Stitch it together with a forward Euler step in time and you have FTCS, the workhorse Richardson rolled across central Europe and the same primitive that underlies time-stepping in roughly every classical PDE solver since. Richardson's actual stencils were 2D versions of the same idea, applied to the primitive equations of meteorology on a 200-km horizontal grid; the structure (read a small fixed neighbourhood, combine with hand-derived weights, write the result to the next timestep) is identical.
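In code, the whole of FTCS for the 1D heat equation fits in a few lines. The sketch below is illustrative rather than the curriculum's Module 01 notebook; the grid size, diffusivity nu, and timestep are made-up values chosen to sit inside the stability limit discussed above.

```python
import jax.numpy as jnp
from jax import jit

# Illustrative grid and diffusivity (not the Module 01 notebook's values)
nx, nu = 101, 1.0
dx = 1.0 / (nx - 1)
dt = 0.4 * dx**2 / nu            # safely inside the FTCS stability bound nu*dt/dx**2 <= 1/2

@jit
def ftcs_step(u):
    # The three-point stencil u[i-1] - 2*u[i] + u[i+1] on interior points ...
    lap = u[:-2] - 2.0 * u[1:-1] + u[2:]
    # ... combined with a forward Euler step in time; endpoints held fixed (Dirichlet)
    return u.at[1:-1].add(nu * dt / dx**2 * lap)

x = jnp.linspace(0.0, 1.0, nx)
u = jnp.exp(-100.0 * (x - 0.5) ** 2)   # initial Gaussian bump
for _ in range(500):
    u = ftcs_step(u)
```

Crank-Nicolson, Module 01's implicit counterpart, replaces the explicit update with a tridiagonal solve and removes the timestep restriction at the cost of solving a linear system each step.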
GraphCast does the same kind of operation — read from neighbours, combine with coefficients, write to self — except the neighbour set is a learned graph on an icosahedral mesh of the sphere, and the coefficients are trained from forty years of ERA5 reanalysis instead of derived from a Taylor expansion. The stencil generalises into a graph; the fixed integer coefficients into learned weights; the explicit time-step into a message-passing layer. The thread runs straight from a 1922 amphitheatre through ENIAC's vacuum tubes to a graph-neural-network layer on a sphere: same problem, same data origin, very different numerical machine — but at the level of the per-cell update, the two computations are kin.
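The kinship is easy to see in code. Below, the same three-point stencil is rewritten as a gather over graph edges with per-edge weights, which is the shape of a single message-passing step; swap the hard-coded +1/−2/+1 weights for trained parameters and the chain graph for a learned mesh graph and you have the per-cell update of a GraphCast-style layer. The sizes and edge construction here are purely illustrative.

```python
import jax.numpy as jnp
from jax.ops import segment_sum

n = 8
# Chain-graph edges reproducing the three-point stencil: each interior node i
# receives messages from nodes i-1, i, and i+1 (illustrative construction).
senders   = jnp.concatenate([jnp.arange(0, n - 2), jnp.arange(1, n - 1), jnp.arange(2, n)])
receivers = jnp.concatenate([jnp.arange(1, n - 1)] * 3)
# Hand-derived finite-difference weights; a neural operator learns these instead.
weights = jnp.concatenate([jnp.ones(n - 2), -2.0 * jnp.ones(n - 2), jnp.ones(n - 2)])

u = jnp.sin(jnp.linspace(0.0, 3.0, n))
messages = weights * u[senders]                         # read from neighbours, scale
lap = segment_sum(messages, receivers, num_segments=n)  # combine at each receiving node
# lap[1:-1] now equals u[:-2] - 2*u[1:-1] + u[2:], the classical stencil
```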
This curriculum traces that intellectual lineage from Euler's 1768 forward step through finite elements, spectral methods, multigrid, and adjoints, into modern differentiable physics and neural operators — and on to the graph-based neural-operator architectures behind models like GraphCast. You will implement every method on a small problem, see why each one was invented, and understand what the modern PDE benchmarks (PDEBench, PDEArena, SciMLBenchmarks) are actually measuring — and where they intersect the adjacent benchmarks for control theory and dynamical-systems identification (e.g., DynaDojo, NeurIPS 2023 D&B), which share most of the same numerical primitives.
Modeled on OpenAI's Spinning Up in RL and the author's Spinning Up in Active Inference. Companion to the Hugging Face Science Discord, channel #pde, founded around the December 2025 blog post Why You Should Care About Partial Differential Equations (PDEs) by Balaji, Chen, Nápoles, Sinha, and Ben Chaim. The channel meets weekly; this curriculum is the long-form educational companion to that founding post's case for PDE-as-SciML.
How this fits with other resources
There are excellent neural-operator benchmarks already. This curriculum exists to give them a backstory.
| | PDEBench | PDEArena | SciMLBenchmarks | DeepXDE | DeepInverse | This curriculum |
|---|---|---|---|---|---|---|
| Classical PDE numerics | -- | -- | Some | -- | -- | Modules 1-5 (FD, FV, FEM, spectral, multigrid) |
| Adjoint & AD | -- | -- | Implicit (Julia AD) | -- | Yes (gradients through forward op) | Module 6, with jaxctrl crosswalk |
| Differentiable solvers | -- | -- | -- | -- | -- | Module 7 (JAX-CFD, PhiFlow, jwave) |
| PINNs | -- | -- | -- | Yes | -- | Module 8 |
| Neural operators | FNO, U-Net | FNO, U-Net, GNO | -- | -- | -- | Modules 9-10 |
| Standardised metrics | Conservation, spectral RMSE | Trajectory rollout | Work-precision diagrams | -- | PSNR / SSIM / LPIPS for imaging IPs | Module 11 (uses all of the above) |
| Inverse problems / FWI | -- | -- | -- | -- | Yes — learned priors, plug-and-play, diffusion-based | Module 12 (brain-fwi, jwave, fijee leadfield) |
| Historical narrative | -- | -- | -- | -- | -- | Every module |
| Primary language | PyTorch | PyTorch | Julia | TF/PyTorch/JAX | PyTorch | JAX |
| Format | Datasets + baselines | Datasets + baselines | Benchmarks + WPDs | Solver framework | Inverse-problem framework | Curriculum + weekly sessions |
The gap we fill: PDEBench tells you which architecture wins on Burgers. It does not tell you what FTCS is, why CFL exists, or why the FEM community spent thirty years arguing about test functions. This curriculum is the "spinning-up" prequel: read it, run the notebooks, and the benchmark papers will read like the next chapter instead of an alien language.
Where the work happens
The history of computational PDE solvers is also a history of specific institutions, and the modern SciML community is concentrated in a recognisable set of them. The curriculum cites them where their work is load-bearing; this section is the orienting map.
- Courant Institute (NYU) — Founded by Richard Courant, who put the C in CFL (Module 01). The lineage runs from Friedrichs and Lewy through Peter Lax, Cathleen Morawetz, Leslie Greengard (FMM), and a generation of mathematical-PDE-numerics PhDs who staffed every other institution on this list. Still the gravitational centre for theoretical PDE numerics.
- US Department of Energy national laboratories. Each lab carries a distinct numerical tradition:
- Los Alamos (LANL) — von Neumann's nuclear simulations; the institutional context in which his stability analysis (Module 01) matured. Modern: ALE methods, kinetic codes, FLAG, xRAGE.
- Argonne (ANL) — home of PETSc, the dominant scalable PDE-solver toolkit; also the MPICH reference implementation.
- Lawrence Berkeley (LBNL) — Phil Colella's adaptive-mesh-refinement lineage; BoxLib / AMReX, Chombo.
- Lawrence Livermore (LLNL) — hypre (algebraic multigrid; the production analogue of Module 05), MFEM, SUNDIALS, and libROM (reduced-order modelling). LLNL's libROM team also hosts the DDPS webinar series — see Recurring seminars below.
- Sandia — Trilinos, Kokkos — the performance-portability stack underneath much of US HPC.
- Oak Ridge (ORNL) — Jack Dongarra's group; LAPACK / ScaLAPACK / MAGMA; the Frontier exascale system, now the world's flagship machine for climate, fusion, and materials PDE workloads.
- Brown University, Division of Applied Mathematics — Lax-Wendroff lineage; the modern home of George Em Karniadakis's CRUNCH Group (the CRUNCH lecture archive is in Recurring seminars). PINNs (Raissi-Perdikaris-Karniadakis 2019) and DeepONet (Lu et al. 2021) were both invented here, anchoring Modules 08-09.
- Caltech Computing + Mathematical Sciences — One of the densest current concentrations of SciML. Andrew Stuart (operator learning, Bayesian inverse problems), Houman Owhadi (operator-valued kernels), Anima Anandkumar (co-creator of FNO).
- University of Washington — AI Institute in Dynamic Systems — NSF-funded institute led by Steve Brunton, Nathan Kutz, and Bing Brunton. The home of the data-driven dynamical-systems lineage — Dynamic Mode Decomposition, SINDy (Sparse Identification of Nonlinear Dynamics), and modern Koopman-operator approximations. The conceptual bridge from PDE numerics through dynamical-systems identification into control theory runs through this group's work.
- UT Austin Oden Institute — Tinsley Oden's institutional legacy; PDE-constrained optimization, uncertainty quantification, scientific machine learning. Directed by Karen Willcox.
- ETH Zurich SAM (Seminar for Applied Mathematics) — Christoph Schwab and Ralf Hiptmair lead Europe's strongest concentration of FEM theory and UQ.
- Oxford Mathematical Institute, Numerical Analysis Group — Lloyd N. Trefethen (at Harvard since 2023; Oxford archive here) and Endre Süli. The reading list for Modules 04+ leans heavily on the Oxford NA textbooks.
- MPI for Mathematics in the Sciences, Leipzig — Wolfgang Hackbusch's institute. Hackbusch's work from 1976 onward turned multigrid into a general, rigorously analysed framework; the institutional lineage anchors Module 05.
- Heidelberg IWR (Interdisciplinary Center for Scientific Computing) — Peter Bastian's group; home of DUNE.
- INRIA (France) — National applied-math + HPC institute, multi-site. Olivier Pironneau (mesh adaptation, optimal control); birthplace of FreeFEM.
- TU Munich (TUM) — Munich Center for Computational Sciences — Hans-Joachim Bungartz's sparse-grid lineage; current home of JAX-Fluids and PhiFlow.
- SIMULA Research Laboratory (Oslo) — Hans Petter Langtangen's legacy; long-time home of FEniCS and the broader Python finite-element ecosystem. The UFL variational form referenced in Module 03 — `a = inner(a_sigma * grad(u), grad(v)) * dx` — is FEniCS syntax, written at SIMULA.
- Flatiron Institute — Center for Computational Mathematics (CCM) — Simons Foundation. Leslie Greengard (Fast Multipole Method); pure applied math at scale.
- Santa Fe Institute — Complex-systems and dynamical-systems framings of the SciML problem. The Crutchfield / West / Kauffman lineage informs the data-driven dynamics community.
- IPAM at UCLA — Long-program semesters where the SciML field's working consensus actually gets worked out: Machine Learning for Physical Sciences (2019), Tensor Methods and Emerging Applications (2021), ongoing PDE-and-learning workshops. Recordings on the IPAM YouTube channel.
For the longer survey — including Asian institutes (RIKEN, Tsinghua, NUS, ANU), additional European groups (Cambridge DAMTP, Imperial, EPFL, KTH, BCAM, ICTP), more US national-lab software, and the institutions behind Gmsh/CGAL/visualization tooling — see RESOURCES.md. Corrections and additions welcome via Discussion on Hugging Face or PR.
Open-source PDE software
The curriculum's notebooks are JAX-native, but the broader open-source PDE ecosystem is most of what students will encounter in industry and at national labs. Modules 03–05 will look at production-grade C++ and Python codes alongside the teaching code, and a working scientific computing practitioner needs at least passing fluency with the following stacks. The longer annotated list lives in RESOURCES.md.
FEM and multi-physics frameworks
- FEniCS — Python FEM with automatic weak-form compilation via UFL (the `tCS_model.ufl` example in Module 03 is FEniCS syntax). Originated at SIMULA / KTH.
- Firedrake — Imperial's re-implementation of the FEniCS interface; tighter PETSc + adjoint integration via pyadjoint.
- deal.II — C++ FEM. The reference for adaptive mesh refinement.
- NGSolve — Joachim Schöberl's Vienna project. Strong in HDG, mixed methods.
- MOOSE — Idaho National Lab's multi-physics framework. Built on libMesh; nuclear-reactor-scale problems.
- DUNE — Heidelberg IWR. The most flexible C++ PDE toolkit (modular FEM/FV/DG).
- FreeFEM — INRIA/Sorbonne. DSL-driven FEM widely used in the French community.
Finite-volume CFD and shock capturing
- OpenFOAM — the dominant open-source CFD package; industrial standard.
- SU2 — Stanford-origin; gradient-based shape and topology optimization via discrete adjoint.
- Clawpack — Randall LeVeque's lineage; the canonical reference codebase for hyperbolic PDEs and shock capturing. Companion to Module 02.
- Trixi.jl — Julia-native discontinuous Galerkin solver.
Mesh generation and computational geometry
Mesh quality is half of every nontrivial PDE simulation. The curriculum's tetrahedral-mesh example in Module 03 (Fijee tCS) was generated with these tools.
- Gmsh — The standard open-source mesh generator. Built-in geometry kernel, OpenCASCADE backend, scriptable in Python and a custom DSL (a minimal scripting sketch follows this list). Pairs natively with FEniCS, Firedrake, deal.II, MFEM.
- CGAL — Computational Geometry Algorithms Library. C++ reference implementation for triangulations, mesh generation, surface reconstruction, and the algorithms beneath most modern meshers.
- Cubit / Coreform — Sandia-origin mesher, distributed commercially as Coreform Cubit (free learning licenses available).
- MeshPy / TetGen / Triangle — Python wrappers and underlying tetrahedral / 2D mesh generators.
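As a taste of the Python scripting mentioned above for Gmsh, here is a minimal sketch that meshes a unit cube into tetrahedra. The file name, mesh size, and geometry are illustrative; a real pipeline (including the Fijee tCS head mesh) involves far more geometry import and sizing control.

```python
import gmsh

gmsh.initialize()
gmsh.model.add("box")
gmsh.model.occ.addBox(0, 0, 0, 1, 1, 1)         # unit cube via the OpenCASCADE kernel
gmsh.model.occ.synchronize()
gmsh.option.setNumber("Mesh.MeshSizeMax", 0.1)  # target element size (illustrative)
gmsh.model.mesh.generate(3)                     # 3 = volume (tetrahedral) mesh
gmsh.write("box.msh")                           # readable by FEniCS/Firedrake via meshio
gmsh.finalize()
```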
Scalable solver toolkits (already named under Where the work happens)
PETSc, Trilinos, Kokkos, hypre, MFEM, SUNDIALS, AMReX, MAGMA.
Differentiable simulators (JAX-first)
- JAX-CFD (Google) — pseudospectral and finite-volume Navier-Stokes.
- JAX-Fluids (TUM) — differentiable compressible Navier-Stokes; shock capturing, multi-phase.
- PhiFlow (TUM) — differentiable physics with JAX/PyTorch/TF backends; tight ML integration.
- Diffrax + Equinox (Patrick Kidger) — ODE/SDE/CDE solvers and JAX neural-network library; the substrate for time-evolved PDEs in JAX (a minimal method-of-lines sketch follows this list).
- jwave — differentiable acoustic FWI (Modules 04, 07, 12).
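To make the "substrate for time-evolved PDEs" claim for Diffrax concrete, a minimal method-of-lines sketch: semi-discretise the 1D heat equation with the three-point stencil from Module 01 and hand the resulting ODE system to diffeqsolve. Grid size, diffusivity, and the Tsit5 solver choice are illustrative, not a recommendation.

```python
import jax.numpy as jnp
import diffrax

nx, nu = 64, 0.01
dx = 1.0 / (nx - 1)

def rhs(t, u, args):
    # Method of lines: three-point Laplacian on the interior, Dirichlet endpoints fixed
    du = jnp.zeros_like(u)
    return du.at[1:-1].set(nu * (u[:-2] - 2.0 * u[1:-1] + u[2:]) / dx**2)

u0 = jnp.sin(jnp.pi * jnp.linspace(0.0, 1.0, nx))
sol = diffrax.diffeqsolve(
    diffrax.ODETerm(rhs),
    diffrax.Tsit5(),                  # explicit RK; a stiffer grid would want an implicit solver
    t0=0.0, t1=1.0, dt0=1e-3, y0=u0,
    saveat=diffrax.SaveAt(ts=jnp.linspace(0.0, 1.0, 11)),
)
# sol.ys has shape (11, nx): solution snapshots, differentiable end to end with jax.grad
```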
Visualization
VTK, ParaView, and VisIt cover almost every PDE-data visualization need. ParaView is the typical first move for FEniCS / MFEM output.
Recurring seminars and talks
If you want to keep the field in peripheral vision while you work through the curriculum, the following recurring seminars, webinar series, and self-paced courses are the easiest way in. Most have full archives going back several years.
- DDPS — Data-Driven Physical Simulations (LLNL libROM team). Weekly webinar on machine learning + AI methods for computational science and physical simulation: deep learning for simulation, generative models, data assimilation, fluid dynamics, plasma physics. Recorded archive from 2020 onwards (e.g. The Nexus of Machine Learning, Physics-based Modeling, and Uncertainty Quantification is a representative DDPS talk on the ML / physics-modelling / UQ axis). Subscribe on the page. Organised by Youngsoo Choi and Siu Wun Cheung.
- CRUNCH Group lecture archive (George Em Karniadakis, Division of Applied Mathematics, Brown). The CRUNCH Group invented PINNs (Raissi–Perdikaris–Karniadakis, 2019) and DeepONet (Lu et al., 2021); their YouTube channel is the primary lecture archive for those methods, the natural companion to Modules 08 and 09.
- 12 steps to Navier-Stokes (Lorena Barba, GW Engineering). Twelve self-paced Python notebooks taking the reader from 1D linear convection to 2D incompressible Navier-Stokes. Genuinely the best free CFD onramp; pairs perfectly with Module 02. The associated 12 steps blog series gives the pedagogical backstory.
- Trefethen's Oxford NA video lectures — Lloyd N. Trefethen's recorded lectures, principally from his Oxford NA group years (he moved to Harvard in 2023). The companion to his textbook Spectral Methods in MATLAB and the Chebfun tooling. Module 04 reading.
(For the broader list — additional textbooks, MIT OCW, Stanford CME, IPAM workshop archives, and SIAM Visualization recordings — see RESOURCES.md. Suggestions welcome via Discord or PR.)
Who is this for?
- ML researchers entering SciML who want to understand what they're benchmarking against
- Computational scientists whose first move is FEniCS or PETSc, curious about JAX-native differentiable solvers
- Graduate students preparing to read Karniadakis et al., Li et al. (FNO), or Lu et al. (DeepONet) and wanting context
- Anyone in the Hugging Face Science Discord #pde channel who wants to follow along week by week
Prerequisites: Python, NumPy, ODE basics. Vector calculus helpful but reviewed as we go. No prior FEM or neural-operator experience required.
The Curriculum
Part 1 — Classical PDE Numerics
The 250-year backstory. What every modern surrogate model is compared against, and why those comparisons are the right ones.
| # | Module | What you'll build | Anchored to |
|---|---|---|---|
| 01 | Finite Differences & CFL | 1D heat equation in JAX, explicit FTCS vs Crank-Nicolson, watch the CFL boundary in real time (a growth-factor sketch follows this table). | PDEBench FTCS baseline |
| 02 | Finite Volume & Conservation Laws | 1D Burgers with Godunov / Roe / minmod limiters. Why shocks need conservative discretisation. | PDEBench compressible flow |
| 03 | Finite Elements & Variational Forms | Anisotropic Poisson on a tetrahedral mesh in FEniCSx, reading from a real .ufl file. | fijee/Finite_element_method_models/tCS_model.ufl |
| 04 | Spectral & Pseudospectral Methods | 2D Navier-Stokes with FFT-based pseudospectral, Orszag's 2/3 dealiasing rule. | jwave k-Wave-style propagation |
| 05 | Multigrid & Preconditioners | Full Multigrid V-cycles on a Poisson problem, comparison vs CG / PCG. | libspm/field.h — production FMG/CG in C |
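A preview of what "watching the CFL boundary" in Module 01 means in practice: the von Neumann growth factor of FTCS for the heat equation, g = 1 − 4r·sin²(kΔx/2) with r = νΔt/Δx², must stay within [−1, 1] for every wavenumber, which forces r ≤ 1/2 (the diffusive cousin of the advective CFL bound). The values of r swept below are illustrative.

```python
import jax.numpy as jnp

def ftcs_growth_factor(r, k_dx):
    # von Neumann amplification factor of FTCS for the heat equation at wavenumber k
    return 1.0 - 4.0 * r * jnp.sin(k_dx / 2.0) ** 2

k_dx = jnp.pi                        # worst-case (grid-scale) mode
for r in (0.25, 0.50, 0.51, 1.00):
    g = ftcs_growth_factor(r, k_dx)
    status = "stable" if abs(g) <= 1.0 else "UNSTABLE"
    print(f"r = {r:4.2f}   g = {float(g):+.3f}   {status}")
```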
Part 2 — Differentiable Physics
Where adjoints meet automatic differentiation, and the simulator becomes a layer.
| # | Module | What you'll build | Anchored to |
|---|---|---|---|
| 06 | Adjoints & Automatic Differentiation | Hand-derived adjoint of a 1D advection solver, then jax.grad through the same solver. The two answers had better match (a minimal sketch follows this table). | jaxctrl (Lyapunov / Riccati adjoints) |
| 07 | Differentiable Solvers in JAX | Tour of JAX-CFD, PhiFlow, jwave: how a forward solver becomes a gradient operator. Train a learned closure on a coarse grid. | jwave, vpjax, vbjax, dot-jax |
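A miniature of Module 06's check, under illustrative parameters: a first-order upwind advection solver is a linear map u ← A u, so the gradient of the quadratic mismatch ½‖u_N − d‖² with respect to the initial condition is the hand-derived adjoint (Aᵀ)ᴺ(u_N − d). The sketch below compares that against jax.grad through the same solver.

```python
import jax
import jax.numpy as jnp

nx, c = 64, 1.0
dx = 1.0 / nx
dt = 0.5 * dx / c                          # advective CFL number of 0.5
nsteps = 50

def upwind_step(u):
    # First-order upwind advection on a periodic grid: a linear step u <- A u
    return u - c * dt / dx * (u - jnp.roll(u, 1))

def loss(u0, target):
    u = u0
    for _ in range(nsteps):
        u = upwind_step(u)
    return 0.5 * jnp.sum((u - target) ** 2)

x = jnp.linspace(0.0, 1.0, nx, endpoint=False)
u0 = jnp.exp(-100.0 * (x - 0.3) ** 2)
target = jnp.exp(-100.0 * (x - 0.5) ** 2)

g_ad = jax.grad(loss)(u0, target)          # automatic differentiation through the solver

def adjoint_step(lam):
    # A^T applied to the adjoint variable: the shift runs in the opposite direction
    return lam - c * dt / dx * (lam - jnp.roll(lam, -1))

u = u0
for _ in range(nsteps):
    u = upwind_step(u)
lam = u - target                           # seed the adjoint with the residual
for _ in range(nsteps):
    lam = adjoint_step(lam)

print(jnp.max(jnp.abs(g_ad - lam)))        # the two answers had better match (float32 rounding)
```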
Part 3 — Neural Operators
Surrogate models that learn maps between function spaces, not point values.
| # | Module | What you'll build | Anchored to |
|---|---|---|---|
| 08 | Physics-Informed Neural Networks | A PINN for 1D Burgers from scratch in Equinox, then read Raissi et al. 2019. Discuss: when do PINNs beat classical solvers, and when don't they? (A residual-loss sketch follows this table.) | DeepXDE baseline comparison |
| 09 | DeepONet & Fourier Neural Operators | Reproduce a small FNO on Darcy flow. Spectral bias, resolution invariance, the operator-learning premise. | neuraloperator |
| 10 | Geometric & Mesh-Aware Operators | GraphNet-based operators (MeshGraphNets style) on irregular meshes. Why FNO's regular-grid assumption breaks for engineering problems. | hgx hypergraph operator overlap |
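A hedged preview of Module 08's starting point: the physics-informed loss for 1D viscous Burgers, u_t + u·u_x = ν·u_xx, built from nested jax.grad calls on a small Equinox MLP (Equinox is assumed installed, as named in the Module 08 row). Network size, collocation points, and ν are illustrative; a real PINN adds initial/boundary losses and an optimiser loop.

```python
import jax
import jax.numpy as jnp
import equinox as eqx

key = jax.random.PRNGKey(0)
net = eqx.nn.MLP(in_size=2, out_size="scalar", width_size=32, depth=3,
                 activation=jnp.tanh, key=key)       # u_theta(t, x), sizes illustrative
nu = 0.01 / jnp.pi

def u_fn(net, t, x):
    return net(jnp.array([t, x]))

def residual(net, t, x):
    # PDE residual u_t + u*u_x - nu*u_xx, assembled from nested automatic derivatives
    u = u_fn(net, t, x)
    u_t = jax.grad(u_fn, argnums=1)(net, t, x)
    u_x = jax.grad(u_fn, argnums=2)(net, t, x)
    u_xx = jax.grad(jax.grad(u_fn, argnums=2), argnums=2)(net, t, x)
    return u_t + u * u_x - nu * u_xx

def physics_loss(net, ts, xs):
    res = jax.vmap(residual, in_axes=(None, 0, 0))(net, ts, xs)
    return jnp.mean(res ** 2)     # plus IC/BC terms in a full PINN

ts = jax.random.uniform(jax.random.PRNGKey(1), (256,))
xs = jax.random.uniform(jax.random.PRNGKey(2), (256,), minval=-1.0, maxval=1.0)
print(physics_loss(net, ts, xs))
```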
Part 4 — Benchmarks, Inverse Problems, Discovery
What "good" looks like, and what real problems look like.
| # | Module | What you'll build | Anchored to |
|---|---|---|---|
| 11 | The Benchmark Landscape | Run the same FNO on PDEBench Burgers, then on PDEArena Navier-Stokes, then on a SciMLBenchmarks WPD. Read the metric definitions: spectral RMSE, conservation RMSE, work-precision. Then look across at the adjacent dynamical-systems-identification benchmarks (DynaDojo) and the control-theory benchmarks (jaxctrl, Module 06) — the numerical primitives are shared even though the communities have grown apart. | PDEBench, PDEArena, SciMLBenchmarks, DynaDojo, jaxctrl |
| 12 | Inverse Problems & FWI | Full-waveform inversion on a small acoustic test case in jwave; brief tour of brain-fwi (transcranial FWI) and the EEG forward-inverse problem (fijee leadfield → source localization). Compare classical regularised inversion against learned-prior approaches via DeepInverse. | brain-fwi, jwave, Fijee-Project, DeepInverse |
A 13th module — Agent-driven PDE discovery — is planned but not scheduled, pending agentsciml maturity. The published reference is Jiang & Karniadakis (2025), AgenticSciML: Collaborative Multi-Agent Systems for Emergent Discovery in Scientific Machine Learning, out of the CRUNCH Group at Brown — the same group whose lecture archive is cited in Recurring seminars.
The Voice
Each module ships as both a notebook and a companion article. The articles follow the Fedora Magazine cadence the author has used in the linear-algebra series: a hook tied to a real-world artifact, the historical motivation (people, places, dates), the math, the code you can run today, and a forward link to the next idea in the sequence.
The companion notebooks are written in jupytext .py percent format for clean diffs, and convert to .ipynb with one command. See the notebooks README for the conversion recipe.
Companion repositories
This curriculum is the front door to a constellation of JAX-native scientific-computing repositories by the same author. Each module names the relevant ones above; here is the full crosswalk:
| Repo | Role in the curriculum |
|---|---|
| jaxctrl | Differentiable control (Lyapunov, Riccati, Gramians). Adjoint duality with PDE-constrained optimization. |
| agentsciml | Multi-agent evolutionary framework for SciML discovery. PDE-as-tool-use. |
| jwave | Differentiable acoustics in JAX, pseudospectral. Module 4, Module 7, Module 12. |
| vpjax | Differentiable cerebrovascular models. Coupled hyperbolic / parabolic systems. |
| vbjax | Virtual brain modelling — neural mass + integration. |
| dot-jax | Diffuse Optical Tomography (a parabolic forward, ill-posed inverse). |
| brain-fwi | Full waveform inversion through the skull. Capstone-grade inverse problem. |
| libspm | Standalone C library for SPM's PDE solvers — Full Multigrid, B-splines, regularizers. Module 5. |
| Fijee-Project | FEM forward EEG (anisotropic Poisson) + Jansen-Rit/Wendling biophysics. Module 3, Module 12. |
See crosswalk.md for module-by-module pointers.
Installation
This repository is mirrored on both Hugging Face and GitHub:
```bash
# From Hugging Face (primary):
git clone https://huggingface.co/datasets/mhough/spinning-up-in-pde
cd spinning-up-in-pde

# Or from GitHub (mirror):
# git clone https://github.com/m9h/spinning-up-in-pde.git

# Create environment (requires uv: https://docs.astral.sh/uv/)
uv venv .venv --python 3.13
source .venv/bin/activate

# Core dependencies
uv pip install -r requirements.txt

# Convert jupytext .py notebooks to .ipynb
jupytext --to notebook notebooks/*.py

# Launch
jupyter lab
```
JAX install: on Linux/CUDA add jax[cuda12]; on Apple Silicon use the standard jax wheel (CPU) or the experimental jax-metal plugin for Metal-backed acceleration on M-series hardware. See JAX install docs.
Community
- Discord: Hugging Face Science, channel #pde
- Weekly session: one module per week, walkthrough + open Q&A. Time TBA in the channel.
- Issues / PRs: corrections, extra references, alternative implementations all welcome — especially historical citations we missed.
License
Apache 2.0. Content is reusable; please cite the curriculum if you adapt it.