Graph Machine Learning · AnemoI · English
hcookie129, maplumridge committed · Commit 38fd300 · 0 parents

AIFS ENS v2: ECMWF's data-driven ensemble weather forecasting model


ECMWF's Artificial Intelligence Forecasting System (AIFS) ENS v2,
implemented operationally on 12 May 2026, superseding v1. Generates
51-member 15-day (6-hourly) global ensemble forecasts four times per day.

Built on a GNN encoder/decoder with sliding window transformer processor,
trained on ERA5 (1979-2022) and fine-tuned on operational analysis data
(2018-2024). Trained on 32 GH200 GPUs using the Anemoi framework.

Key features in v2:
- First operational data-driven wave forecasts (11 wave variables)
- New snow variable (fraction of snow cover)
- Tropical cyclone track forecasts (BUFR format)
- Improved stratosphere representation (10hPa pressure level)
- Improved vertical velocities (W changed to diagnostic field)
- Multi-scale loss replacing afCRPS loss function
- Revised graph features with more decoder edges
- Variable bounding matching AIFS Single for physical consistency

Includes pretrained checkpoint (2.55GB), inference notebook, and
uv lock files for reproducible environments.

Co-authored-by: Meghan Plumridge <maplumridge@users.noreply.huggingface.co>

.gitattributes ADDED
@@ -0,0 +1,39 @@
+ *.7z filter=lfs diff=lfs merge=lfs -text
+ *.arrow filter=lfs diff=lfs merge=lfs -text
+ *.bin filter=lfs diff=lfs merge=lfs -text
+ *.bz2 filter=lfs diff=lfs merge=lfs -text
+ *.ckpt filter=lfs diff=lfs merge=lfs -text
+ *.ftz filter=lfs diff=lfs merge=lfs -text
+ *.gz filter=lfs diff=lfs merge=lfs -text
+ *.grib filter=lfs diff=lfs merge=lfs -text
+ *.h5 filter=lfs diff=lfs merge=lfs -text
+ *.joblib filter=lfs diff=lfs merge=lfs -text
+ *.lfs.* filter=lfs diff=lfs merge=lfs -text
+ *.mlmodel filter=lfs diff=lfs merge=lfs -text
+ *.model filter=lfs diff=lfs merge=lfs -text
+ *.msgpack filter=lfs diff=lfs merge=lfs -text
+ *.npy filter=lfs diff=lfs merge=lfs -text
+ *.npz filter=lfs diff=lfs merge=lfs -text
+ *.onnx filter=lfs diff=lfs merge=lfs -text
+ *.ot filter=lfs diff=lfs merge=lfs -text
+ *.parquet filter=lfs diff=lfs merge=lfs -text
+ *.pb filter=lfs diff=lfs merge=lfs -text
+ *.pickle filter=lfs diff=lfs merge=lfs -text
+ *.pkl filter=lfs diff=lfs merge=lfs -text
+ *.pt filter=lfs diff=lfs merge=lfs -text
+ *.pth filter=lfs diff=lfs merge=lfs -text
+ *.rar filter=lfs diff=lfs merge=lfs -text
+ *.safetensors filter=lfs diff=lfs merge=lfs -text
+ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
+ *.tar.* filter=lfs diff=lfs merge=lfs -text
+ *.tar filter=lfs diff=lfs merge=lfs -text
+ *.tflite filter=lfs diff=lfs merge=lfs -text
+ *.tgz filter=lfs diff=lfs merge=lfs -text
+ *.wasm filter=lfs diff=lfs merge=lfs -text
+ *.xz filter=lfs diff=lfs merge=lfs -text
+ *.zip filter=lfs diff=lfs merge=lfs -text
+ *.zst filter=lfs diff=lfs merge=lfs -text
+ *tfevents* filter=lfs diff=lfs merge=lfs -text
+ assets/aifs_diagram.png filter=lfs diff=lfs merge=lfs -text
+ assets/decoder_graph.jpeg filter=lfs diff=lfs merge=lfs -text
+ assets/encoder_graph.jpeg filter=lfs diff=lfs merge=lfs -text
.gitignore ADDED
@@ -0,0 +1 @@
+ .venv
.pre-commit-config.yaml ADDED
@@ -0,0 +1,48 @@
+ repos:
+ - repo: https://github.com/pre-commit/pre-commit-hooks
+   rev: v6.0.0
+   hooks:
+   - id: check-yaml # Check YAML files for syntax errors only
+     args: [--unsafe, --allow-multiple-documents]
+   - id: debug-statements # Check for debugger imports and py37+ breakpoint()
+   - id: end-of-file-fixer # Ensure files end in a newline
+   - id: trailing-whitespace # Trailing whitespace checker
+   - id: check-merge-conflict # Check for files that contain merge conflict
+ - repo: https://github.com/pre-commit/pygrep-hooks
+   rev: v1.10.0 # Use the ref you want to point at
+   hooks:
+   - id: python-use-type-annotations # Check for missing type annotations
+   - id: python-check-blanket-noqa # Check for # noqa: all
+   - id: python-no-log-warn # Check for log.warn
+ - repo: https://github.com/psf/black-pre-commit-mirror
+   rev: 26.1.0
+   hooks:
+   - id: black
+     args: [--line-length=120]
+ - repo: https://github.com/pycqa/isort
+   rev: 8.0.1
+   hooks:
+   - id: isort
+     args:
+     - -l 120
+     - --force-single-line-imports
+     - --profile black
+     - --project anemoi
+ - repo: https://github.com/astral-sh/ruff-pre-commit
+   rev: v0.15.4
+   hooks:
+   - id: ruff
+     args:
+     - --line-length=120
+     - --fix
+     - --exit-non-zero-on-fix
+     - --exclude=docs/**/*_.py
+ - repo: https://github.com/sphinx-contrib/sphinx-lint
+   rev: v1.0.2
+   hooks:
+   - id: sphinx-lint
+ - repo: https://github.com/tox-dev/pyproject-fmt
+   rev: "v2.16.2"
+   hooks:
+   - id: pyproject-fmt
+     args: ["--max-supported-python", "3.12"]
README.md ADDED
@@ -0,0 +1,304 @@
+ ---
+ license: cc-by-4.0
+ pipeline_tag: graph-ml
+ language:
+ - en
+ library_name: anemoi
+ ---
+
+ # AIFS ENS v2
+
+ <!-- Provide a quick summary of what the model is/does. -->
+
+ [![License](https://img.shields.io/badge/license-CC--BY--4.0-blue.svg)](https://creativecommons.org/licenses/by/4.0/)
+ [![Framework](https://img.shields.io/badge/framework-Anemoi-orange)](https://anemoi.readthedocs.io/)
+ [![Task](https://img.shields.io/badge/task-Weather%20Forecasting-green)]()
+ [![Model](https://img.shields.io/badge/model-GNN%20%2B%20Transformer-purple)]()
+
+ The AIFS is ECMWF's **Artificial Intelligence Forecasting System**.
+ It consists of two data-driven medium-range forecast models:
+ - AIFS Single v2 (described on the [following page](https://huggingface.co/ecmwf/aifs-single-2.0))
+ - AIFS ENS v2 (described here)
+
+ **AIFS ENS v2** was implemented on **12 May 2026** and supersedes version [1](https://huggingface.co/ecmwf/aifs-ens-1).
+ It is run operationally by ECMWF, generating a 51-member, 15-day (6-hourly) global forecast four times per day.
+
+ ## Table of contents
+
+ - [What's new in v2](#whats-new-in-v2)
+ - [Quickstart](#quickstart)
+ - [Model overview](#model-overview)
+ - [Data details](#data-details)
+ - [Training details](#training-details)
+ - [Evaluation](#evaluation)
+ - [Known limitations](#known-limitations)
+ - [Technical specifications](#technical-specifications)
+ - [Citation](#citation)
+
+ ---
+
+ ## What's new in v2
+ The release of AIFS ENS v2 introduces:
+ - A new wave component, including 11 wave variables, marking **ECMWF's first operational data-driven wave forecasts**.
+ - A new snow variable in the existing land component.
+ - Tropical cyclone track forecasts.
+ - Two variables already present in AIFS Single (vsw and cp), to harmonise the two models.
+ - Improved vertical velocities, achieved by changing parameter W from a prognostic to a diagnostic field.
+ - An improved representation of the stratosphere, with the addition of pressure-level fields at 10 hPa.
+
+ Architectural changes include:
+ - Replacing the afCRPS loss function used in AIFS ENS [v1](https://huggingface.co/ecmwf/aifs-ens-1) with a multi-scale loss.
+ - Imposing the same variable bounding as the AIFS Single model, improving physical consistency.
+ - Revised graph features, using more edges in the decoder.
+ - An improved training regime, using one year of additional training data compared to AIFS ENS [v1](https://huggingface.co/ecmwf/aifs-ens-1).
+
+ For full details of the changes introduced with this upgrade, see the [implementation page](https://confluence.ecmwf.int/display/FCST/Implementation+of+AIFS+ENS+v2).
+
+ ---
+
+ ## Quickstart
+ ### Access AIFS ENS v2 model data
+ ECMWF generates operational forecasts from AIFS ENS v2 four times per day (at 00, 06, 12 and 18 UTC).
+ Users can access the forecast data free of charge through various [open data platforms](https://www.ecmwf.int/en/forecasts/datasets/open-data).
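
The schedule above implies a fixed grid of valid times per run: four cycles per day, each producing 6-hourly output to day 15. As a quick orientation aid (an illustrative sketch, not part of this repository), the valid times of one cycle can be enumerated like this:

```python
from datetime import datetime, timedelta

# Illustrative only: enumerate the valid times of a single AIFS ENS v2 run.
# The model runs at 00, 06, 12 and 18 UTC and produces a 15-day forecast
# at 6-hourly steps (6 h .. 360 h), each with 51 ensemble members.
CYCLE_HOURS = (0, 6, 12, 18)
STEP_H = 6
MAX_STEP_H = 15 * 24  # 360 h

def valid_times(base_time: datetime) -> list[datetime]:
    """Valid times for every output step of one forecast run."""
    return [base_time + timedelta(hours=h)
            for h in range(STEP_H, MAX_STEP_H + 1, STEP_H)]

run = datetime(2026, 5, 12, 0)  # a 00 UTC cycle
times = valid_times(run)
print(len(times))   # 60 output steps per member
print(times[-1])    # day-15 valid time
```

Each of the 60 steps is produced for all 51 members, so one cycle yields 51 × 60 output states per field.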
+
+ ### Generate a forecast with AIFS ENS v2
+ To generate a forecast using the AIFS ENS v2 model, follow the [notebook example](run_AIFS_ENS_v2.0.ipynb).
+
+ The notebook demonstrates:
+ - installing packages and imports
+ - retrieving initial conditions from ECMWF Open Data
+ - loading the pretrained checkpoint
+ - running inference with [anemoi-inference](https://github.com/ecmwf/anemoi-inference)
+ - visualising the forecast
+
+ [anemoi-inference](https://github.com/ecmwf/anemoi-inference) also provides a command line interface:
+ ```bash
+ anemoi-inference run inference.yaml
+ ```
+
+ or, if using uv from this repository:
+ ```bash
+ uv run --extra inference anemoi-inference run inference.yaml
+ ```
+
+ ---
+
+ ## Model overview
+
+ ### Model description
+
+ <!-- Provide a longer summary of what this model is. -->
+
+ AIFS ENS v2 is based on a graph neural network (GNN) encoder and decoder, and a sliding window transformer processor.
+
+ <div style="display: flex; justify-content: center;">
+ <img src="assets/encoder_graph.jpeg" alt="Encoder graph" style="width: 50%;"/>
+ <img src="assets/decoder_graph.jpeg" alt="Decoder graph" style="width: 50%;"/>
+ </div>
+
+ The model has a flexible and modular design and supports several levels of parallelism to enable training on
+ high-resolution input data. AIFS forecast skill is assessed by comparing its forecasts to numerical weather prediction (NWP) analyses
+ and direct observational data.
+
+ - **Developed by:** ECMWF
+ - **Model type:** Encoder-processor-decoder model
+ - **License:** Inference model weights are published under a Creative Commons Attribution 4.0 International (CC BY 4.0) licence.
+ To view a copy of this licence, visit https://creativecommons.org/licenses/by/4.0/.
+ The [demonstration notebook](run_AIFS_ENS_v2.0.ipynb) and other script files are published under an Apache 2.0 licence.
+ To view a copy of this licence, visit https://www.apache.org/licenses/LICENSE-2.0.txt.
+
+ ### Model resolution
+ AIFS ENS v2 introduces a **new pressure level** to the stratospheric component.
+ | Model | Vertical resolution [pressure levels] (hPa) |
+ |:---|:---|
+ | AIFS ENS v2 | 10 (new), 50, 100, 150, 200, 250, 300, 400, 500, 600, 700, 850, 925, 1000 |
+
+ There are no changes in horizontal resolution compared to the previous version, AIFS ENS v1.
+ | Component | Horizontal resolution [km] | Vertical resolution [levels] |
+ |---|:---:|:---:|
+ | Atmosphere | ~ 31 | 13 |
+
+ ### Model sources
+
+ <!-- Provide the basic links for the model. -->
+
+ - **Repository:** [Anemoi](https://anemoi.readthedocs.io/en/latest/) is an open-source framework for
+ creating data-driven weather forecasting systems. Anemoi is co-developed by ECMWF and national meteorological
+ services across Europe.
+ - **Papers:**
+   - [AIFS-CRPS: Ensemble forecasting using a model trained with a loss function based on the Continuous Ranked Probability Score (2024)](https://arxiv.org/abs/2412.15832)
+   - [A multi-scale loss formulation for learning a probabilistic model with proper score optimisation (2025)](https://arxiv.org/abs/2506.10868)
+
+ ---
+
+ ## Data details
+
+ ### Training data
+ The data used for **pre-training** AIFS ENS v2 remains the same as for the previous model version ([v1](https://huggingface.co/ecmwf/aifs-ens-1)):
+ - Pre-training was performed on ERA5 data covering the years 1979–2022, over 300,000 training steps.
+
+ The data used for **fine-tuning** AIFS ENS v2 has been updated:
+ - Fine-tuning was performed on one year of additional data compared to [AIFS ENS v1](https://huggingface.co/ecmwf/aifs-ens-1).
+ - In particular, fine-tuning was performed on ECMWF operational analysis data and
+ IFS 50r1 esuite analysis data, covering the years 2018–2024, over 7,900 training steps.
+
+ <div style="display: flex; justify-content: center;">
+ <img src="assets/ens_v2_training.png" alt="AIFS ENS v2 fine-tuning" style="width: 80%;"/>
+ </div>
+
+ > *Note: The IFS 50r1 esuite analysis data used for fine-tuning is not available to users. It consists of prototype data from early versions of IFS Cycle 50r1.*
+
+ As in previous versions of AIFS ENS, IFS fields are interpolated from their native O1280 resolution
+ (approximately 0.1°) down to N320 (approximately 0.25°) using MARS default interpolation tools, for
+ fine-tuning and for initialisation of the model during inference.
+
+ ### Data parameters
+
+ #### New parameters
+
+ AIFS ENS v2 introduces 14 new parameters, which were used for model training and are output by the
+ model during a forecast.
+
+ | Short Name | Name | Units |
+ |:----------:|:----:|:-----:|
+ | h1012 | Significant wave height of all waves with periods within the inclusive range from 10 to 12 seconds | \(m\) |
+ | h1214 | Significant wave height of all waves with periods within the inclusive range from 12 to 14 seconds | \(m\) |
+ | h1417 | Significant wave height of all waves with periods within the inclusive range from 14 to 17 seconds | \(m\) |
+ | h1721 | Significant wave height of all waves with periods within the inclusive range from 17 to 21 seconds | \(m\) |
+ | h2125 | Significant wave height of all waves with periods within the inclusive range from 21 to 25 seconds | \(m\) |
+ | h2530 | Significant wave height of all waves with periods within the inclusive range from 25 to 30 seconds | \(m\) |
+ | wmb | Model bathymetry | \(m\) |
+ | swh | Significant wave height | \(m\) |
+ | mwd | Mean wave direction | \(Degree\ true\) |
+ | mwp | Mean wave period | \(s\) |
+ | cdww | Coefficient of drag with waves | \(dimensionless\) |
+ | fscov | Fraction of snow cover | \(Proportion\) |
+ | cp | Convective precipitation | \(kg\ m^{-2}\) |
+ | vsw | Volumetric soil moisture | \(m^{3}\ m^{-3}\) |
+
+ Additionally, AIFS ENS v2.0 now produces tropical cyclone track forecasts in BUFR format.
+ See https://confluence.ecmwf.int/display/FCST/Implementation+of+AIFS+ENS+v2#ImplementationofAIFSENSv2-Newparameters for more details.
+
+ ---
+
+ ## Training Details
+
+ ### Training Data
+
+ <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
+ AIFS ENS v2 is trained on ECMWF's ERA5 re-analysis and ECMWF's operational numerical weather prediction (NWP) analyses, as
+ described in the [training data](#training-data) section.
+
+ AIFS ENS v2 is trained to produce 6-hour forecasts. It receives as input a representation of the atmospheric states
+ at \\(t_{-6h}\\) and \\(t_{0}\\), and then forecasts the state at time \\(t_{+6h}\\).
+
+ <div style="display: flex; justify-content: center;">
+ <img src="assets/aifs_diagram.png" alt="AIFS ENS v2 model diagram" style="width: 80%;"/>
+ </div>
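
Longer forecasts are produced by applying this 6-hour step autoregressively: each prediction becomes the newest input, and the two-state window slides forward. A minimal sketch (a toy stand-in for the trained model, used here only to show the rollout mechanics):

```python
import numpy as np

# Toy stand-in for the trained model: one 6-hour step mapping the states
# at t-6h and t to the state at t+6h. The real step is the GNN
# encoder/decoder with transformer processor; this function is only
# illustrative dynamics of the right shape.
def toy_step(x_prev: np.ndarray, x_curr: np.ndarray) -> np.ndarray:
    return 0.5 * (x_prev + x_curr)

def rollout(x_m6h: np.ndarray, x_0: np.ndarray, n_steps: int) -> list[np.ndarray]:
    """Autoregressive rollout: slide the (t-6h, t) window forward n_steps times."""
    states = [x_m6h, x_0]
    for _ in range(n_steps):
        states.append(toy_step(states[-2], states[-1]))
    return states[2:]  # forecast states only

x_prev = np.zeros(4)   # state at t-6h (toy field of 4 grid points)
x_curr = np.ones(4)    # state at t0
forecast = rollout(x_prev, x_curr, n_steps=60)  # 60 x 6 h = 15 days
print(len(forecast))   # 60
```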
+
+ The table below shows all parameters used and output by AIFS ENS v2. New parameters and levels are marked **bold**.
+
+ | Field | Level type | Input/Output |
+ |---|---|---|
+ | Geopotential (Z), horizontal and vertical wind components (U, V), temperature (T) | Pressure levels: **10**, 50, 100, 150, 200, 250, 300, 400, 500, 600, 700, 850, 925, 1000 | Both ("Prognostic") |
+ | Specific humidity (Q) | Pressure levels: 100, 150, 200, 250, 300, 400, 500, 600, 700, 850, 925, 1000 | Both ("Prognostic") |
+ | Vertical velocity (W) | Pressure levels: **10**, 50, 100, 150, 200, 250, 300, 400, 500, 600, 700, 850, 925, 1000 | Output ("Diagnostic") |
+ | Specific humidity (Q) | Pressure level: 50 | Output ("Diagnostic") |
+ | Surface pressure (SP), mean sea-level pressure (MSL), sea-surface temperature (SST), skin temperature (SKT), 2m temperature (2T), 2m dewpoint temperature (2D), 10m horizontal wind components (10U, 10V), total column water (TCW), **mean wave period (MWP)**, **mean wave direction (MWD)**, **coefficient of drag with waves (CDWW)**, **significant wave height (SWH)**, significant wave height of all waves with periods within the inclusive range from: <br><br> - **10 to 12 seconds (H1012)** <br> - **12 to 14 seconds (H1214)** <br> - **14 to 17 seconds (H1417)** <br> - **17 to 21 seconds (H1721)** <br> - **21 to 25 seconds (H2125)** <br> - **25 to 30 seconds (H2530)** | Surface | Both ("Prognostic") |
+ | **Volumetric soil moisture (VSW)** and soil temperature (SOT), both at soil depths 1 and 2 | Soil layer | Both ("Prognostic") |
+ | 100m horizontal wind components (100U, 100V), surface short-wave (solar) radiation downwards (SSRD), surface long-wave (thermal) radiation downwards (STRD), cloud variables (TCC, HCC, MCC, LCC), runoff water equivalent (ROWE), snow fall (SF), total precipitation (TP), **convective precipitation (CP)**, **fraction of snow cover (FSCOV)** | Surface | Output ("Diagnostic") |
+ | Standard deviation of sub-gridscale orography (SDOR), slope of sub-gridscale orography (SLOR), land-sea mask (LSM), geopotential (Z), insolation, latitude/longitude, time of day/day of year | Surface | Input ("Forcings") |
+
+ ### Training Procedure
+
+ Key changes to the AIFS ENS v2 training regime include:
+ - the introduction of a multi-scale loss
+ - the removal of reference field truncation, imposing the same variable bounds used by the AIFS Single model to improve physical consistency
+ - the revision of graph features, using more edges in the decoder together with new edge features.
+
+ Full details of the model architecture are given in the arXiv
+ preprints [here](https://arxiv.org/abs/2412.15832) and [here](https://arxiv.org/abs/2506.10868).
+
+ #### Speeds, Sizes, Times
+
+ <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
+
+ Data parallelism is used for training, with a batch size of 1. One model instance is split across two 120GB GH200
+ GPUs, with an ensemble instance split across 4 GPUs. Training is done using mixed precision (Micikevicius et al. [2018]), and the entire process
+ takes about one week on 32 GPUs in total. The checkpoint size is 2.55GB; it does not include the optimizer
+ state.
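
The numbers above imply the data-parallel layout directly. A back-of-the-envelope check (a sketch under the stated figures, not a training script):

```python
# Training layout arithmetic, from the figures stated above.
total_gpus = 32
gpus_per_model_instance = 2      # one model split across two GH200 GPUs
gpus_per_ensemble_instance = 4   # one ensemble instance split across 4 GPUs

# How many ensemble instances run in data parallel, and how many model
# copies fit inside each ensemble instance.
parallel_ensemble_instances = total_gpus // gpus_per_ensemble_instance
model_copies_per_ensemble = gpus_per_ensemble_instance // gpus_per_model_instance

print(parallel_ensemble_instances)  # 8
print(model_copies_per_ensemble)    # 2

# The checkpoint's LFS pointer records 2553843230 bytes, i.e. the 2.55 GB
# quoted above.
print(round(2553843230 / 1e9, 2))   # 2.55
```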
+
+ ---
+
+ ## Evaluation
+
+ <!-- This section describes the evaluation protocols and provides the results. -->
+
+ Interactive scorecards presenting the performance of AIFS ENS v2 between January and March 2026 are now available.
+ The scorecards compare performance when initialised from 49r1 and 50r1 IFS initial conditions:
+
+ - [AIFS ENS v2 (initialised from 50r1) compared with AIFS ENS v1 (initialised from 49r1)](https://sites.ecmwf.int/aifs/scorecards/AIFS-ENS-v2%20vs%20AIFS-ENS-v1.html)
+ - [AIFS ENS v2 compared with AIFS ENS v1 (both initialised from 50r1)](https://sites.ecmwf.int/aifs/scorecards/AIFS-ENS-v2%20vs%20AIFS-ENS-v1%20from%2050r1.html)
+
+ ---
+
+ ## Known limitations
+
+ Please refer to https://confluence.ecmwf.int/display/FCST/Known+AIFS+Forecasting+Issues.
+
+ ---
+
+ ## Technical specifications
+
+ ### Hardware
+
+ <!-- {{ hardware_requirements | default("[More Information Needed]", true)}} -->
+
+ AIFS ENS v2 was trained on 32 GH200 GPUs (120GB).
+
+ ### Software
+
+ The model was developed and trained using the [Anemoi framework](https://anemoi.readthedocs.io/en/latest/).
+ The Anemoi framework provides a complete toolkit for developing data-driven weather models,
+ from data preparation through to inference. Development is primarily driven by a number
+ of European meteorological organisations, but is open to contributions from any organisation or individual.
+ The framework is composed of several packages which target the different components necessary to
+ construct data-driven weather models. To aid development and deployment, each package collects metadata
+ that can be used by the subsequent packages. The framework builds upon established Python tools
+ including PyTorch, Lightning, Hydra, Zarr, Xarray and earthkit.
+
+ ---
+
+ ## Citation
+
+ <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
+
+ If you use this model in your work, please cite it as follows:
+
+ **BibTeX:**
+
+ ```bibtex
+ @misc{lang2025multiscale,
+   title={A multi-scale loss formulation for learning a probabilistic model with proper score optimisation},
+   author={Simon Lang and Martin Leutbecher and Pedro Maciel},
+   year={2025},
+   eprint={2506.10868},
+   archivePrefix={arXiv},
+   primaryClass={physics.ao-ph},
+   url={https://arxiv.org/abs/2506.10868},
+ }
+ ```
+
+ **APA:**
+
+ ```apa
+ Lang, S., Leutbecher, M., & Maciel, P. (2025). A multi-scale loss formulation for learning a probabilistic model with proper score optimisation. arXiv preprint arXiv:2506.10868.
+ ```
+
+ ## More Information
+
+ All papers:
+ - [AIFS-CRPS: Ensemble forecasting using a model trained with a loss function based on the Continuous Ranked Probability Score (2024)](https://arxiv.org/abs/2412.15832)
+ - [A multi-scale loss formulation for learning a probabilistic model with proper score optimisation (2025)](https://arxiv.org/abs/2506.10868)
aifs-ens-crps-2.0.ckpt ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:e383027ee9a20469b920ed11f9af2ecc9268ebcf826447638c5c9fa296fbbbdb
+ size 2553843230
assets/aifs_diagram.png ADDED

Git LFS Details

  • SHA256: 8c37f8858399f9fef6918fdb9517f18ffb6bc1ed0a2fc45476b4f775f48bfb0d
  • Pointer size: 131 Bytes
  • Size of remote file: 470 kB
assets/decoder_graph.jpeg ADDED

Git LFS Details

  • SHA256: a306a3f914ed55b70bb2d8fe89b7070857e963115f1bfa8d58f6d8910b820b3c
  • Pointer size: 131 Bytes
  • Size of remote file: 215 kB
assets/encoder_graph.jpeg ADDED

Git LFS Details

  • SHA256: fe96aa418e231a623dba209aa048248b6506d8c3106df39ac82e3bef2098ba1f
  • Pointer size: 131 Bytes
  • Size of remote file: 168 kB
assets/ens_v2_training.png ADDED
inference.yaml ADDED
@@ -0,0 +1,56 @@
+ description: |
+   AIFS-ens CRPS 2.0
+
+ checkpoint: aifs-ens-crps-2.0.ckpt
+
+ input: opendata
+
+ pre_processors:
+   - forward-transform-filter: cos_sin_mean_wave_direction
+   - forward-transform-filter: # Mask fix
+       filter: apply-mask
+       param:
+         - sd
+         - swvl1
+         - swvl2
+       path: lsm.grib
+       mask_value: 0
+
+ post_processors:
+   - backward-transform-filter: cos_sin_mean_wave_direction
+   - accumulate_from_start_of_forecast
+
+ patch_metadata:
+   dataset:
+     constant_fields: [z, sdor, slor, lsm]
+
+ use_grib_paramid: true
+ allow_nans: true
+ typed_variables:
+   mwd:
+     mars:
+       param: mwd
+       stream: waef
+       levtype: sfc
+   snowc: # Encoding fix
+     mars:
+       param: fscov
+       stream: enfo
+       levtype: sfc
+   # fscov: # required for step0 copy
+   #   mars:
+   #     param: fscov
+   #     stream: enfo
+   #     levtype: sfc
+
+ output:
+   grib:
+     path: output.grib
+     encoding:
+       class: ai
+       type: pf
+       model: aifs-ens
+       generatingProcessIdentifier: 2
+
+ env:
+   ANEMOI_INFERENCE_NUM_CHUNKS: ${oc.env:ANEMOI_INFERENCE_NUM_CHUNKS,8}
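
The `cos_sin_mean_wave_direction` pre/post-processor pair in this config reflects a standard trick for circular variables: a direction in degrees is encoded as a (cos, sin) pair before the model and decoded back afterwards, avoiding the artificial 0°/360° discontinuity. A sketch of the idea (illustrative only; the actual anemoi-transform filter may differ in conventions):

```python
import math

# Encode a direction (degrees) as a point on the unit circle, and decode
# it back with atan2. This is the general cos/sin encoding idea behind the
# cos_sin_mean_wave_direction filter; conventions here are assumptions.
def encode_direction(deg: float) -> tuple[float, float]:
    rad = math.radians(deg)
    return math.cos(rad), math.sin(rad)

def decode_direction(c: float, s: float) -> float:
    return math.degrees(math.atan2(s, c)) % 360.0

c, s = encode_direction(350.0)
print(round(decode_direction(c, s), 6))  # 350.0
```

Because 350° and 10° are close on the circle but far apart numerically, averaging or regressing on the (cos, sin) pair behaves sensibly where raw degrees would not.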
lsm.grib ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:6660ca07729fd8f2ac3fb8bfeee66e7513a355ca90d287f39b5987832c44a891
+ size 1627680
pyproject.toml ADDED
@@ -0,0 +1,42 @@
+ [project]
+ name = "aifs-ens"
+ version = "2.0"
+ requires-python = ">=3.11,<3.14"
+ classifiers = [
+   "Programming Language :: Python :: 3 :: Only",
+   "Programming Language :: Python :: 3.11",
+   "Programming Language :: Python :: 3.12",
+   "Programming Language :: Python :: 3.13",
+ ]
+ dependencies = [
+   "anemoi-graphs==0.8.1",
+   "anemoi-models==0.11.2",
+   "anemoi-transform==0.1.16.post2",
+   "anemoi-utils==0.4.37",
+   "earthkit-data<1",
+   "earthkit-plots<1",
+   "flash-attn==2.7.4",
+   "torch==2.7",
+   "torch-geometric==2.6.1",
+ ]
+ optional-dependencies.inference = [
+   "anemoi-inference[huggingface]==0.8.3",
+   "anemoi-plugins-ecmwf-inference[opendata]==0.2.1",
+   "earthkit-regrid==0.5.1",
+   "ecmwf-opendata==0.3.29",
+ ]
+ optional-dependencies.jupyter = [
+   "ipykernel>=7.2",
+ ]
+ optional-dependencies.training = [
+   "anemoi-training==0.8.1",
+ ]
+
+ [tool.uv]
+ sources.flash-attn = [
+   { url = "https://github.com/cathalobrien/get-flash-attn/releases/download/v0.1-alpha/flash_attn-2.8.3+cu12torch2.7cxx11abiFALSE-cp312-cp312-linux_x86_64.whl", marker = "platform_machine == 'x86_64' and sys_platform != 'win32'" },
+   { url = "https://github.com/cathalobrien/get-flash-attn/releases/download/v0.1-alpha/flash_attn-2.8.3+cu12torch2.7cxx11abiFALSE-cp312-cp312-linux_aarch64.whl", marker = "platform_machine == 'aarch64' and sys_platform == 'linux'" },
+ ]
+ environments = [
+   "sys_platform == 'linux' and (platform_machine == 'x86_64' or platform_machine == 'aarch64')",
+ ]
run_AIFS_ENS_v2.0.ipynb ADDED
The diff for this file is too large to render. See raw diff
 
uv.lock ADDED
The diff for this file is too large to render. See raw diff