JulioContrerasH committed on
Commit 3d881b3 · verified · 1 Parent(s): b7a07b2

Upload: Complete folder into assets directory
README.md ADDED
@@ -0,0 +1,262 @@
---
license: apache-2.0
language:
- en
pipeline_tag: emulation
tags:
- emulation
- atmosphere radiative transfer models
- hyperspectral
pretty_name: Atmospheric Radiative Transfer Emulation Challenge
---
Last update: 16-04-2025

<img src="https://elias-ai.eu/wp-content/uploads/2023/09/elias_logo_big-1.png" alt="elias_logo" style="width:15%; display: inline-block; margin-right: 150px;">
<img src="https://elias-ai.eu/wp-content/uploads/2024/01/EN_FundedbytheEU_RGB_WHITE-Outline-1.png" alt="eu_logo" style="width:20%; display: inline-block;">

# **Atmospheric Radiative Transfer Emulation Challenge**

## **Index**
1. [**Introduction**](/datasets/isp-uv-es/rtm_emulation#introduction)
2. [**Challenge Tasks and Data**](/datasets/isp-uv-es/rtm_emulation#challenge-tasks-and-data):
   2.1. ([Proposed Experiments](/datasets/isp-uv-es/rtm_emulation#proposed-experiments)),
   2.2. ([Data Availability and Format](/datasets/isp-uv-es/rtm_emulation#data-availability-and-format))
3. [**Evaluation Methodology**](/datasets/isp-uv-es/rtm_emulation#evaluation-methodology)
   3.1. ([Prediction Accuracy](/datasets/isp-uv-es/rtm_emulation#prediction-accuracy)),
   3.2. ([Computational Efficiency](/datasets/isp-uv-es/rtm_emulation#computational-efficiency)),
   3.3. ([Proposed Protocol](/datasets/isp-uv-es/rtm_emulation#proposed-protocol))
4. [**Expected Outcomes**](/datasets/isp-uv-es/rtm_emulation#expected-outcomes)

## **Introduction**

Atmospheric Radiative Transfer Models (RTMs) are crucial in Earth and climate sciences, with applications such as synthetic scene generation, satellite data processing, and numerical weather forecasting. However, their increasing complexity results in a computational burden that limits direct use in operational settings. A practical workaround is to interpolate look-up tables (LUTs) of pre-computed RTM simulations generated from long and costly model runs. However, large LUTs are still needed to achieve accurate results, requiring significant time to generate and demanding high memory capacity. Alternative, ad hoc solutions make data processing algorithms mission-specific and lack generalization. These problems are exacerbated for hyperspectral satellite missions, where the data volume of LUTs can increase by one or two orders of magnitude, limiting the applicability of advanced data processing algorithms. In this context, emulation offers an alternative that enables real-time satellite data processing algorithms while providing high prediction accuracy and adaptability across atmospheric conditions. Emulation replicates the behavior of a deterministic and computationally demanding model using statistical regression algorithms. This approach facilitates the implementation of physics-based inversion algorithms, yielding accurate and computationally efficient model predictions compared to traditional look-up table interpolation methods.

RTM emulation is challenging due to the high-dimensional nature of both the input (~10 dimensions) and output (several thousand dimensions) spaces, and the complex interactions of electromagnetic radiation with the atmosphere. The research implications are vast, with potential breakthroughs in surrogate modeling, uncertainty quantification, and physics-aware AI systems that can significantly contribute to climate and Earth observation sciences.

This challenge will contribute to reducing computational burdens in climate and atmospheric research, enabling (1) faster satellite data processing for applications in remote sensing and weather prediction, (2) improved accuracy in atmospheric correction of hyperspectral imaging data, and (3) more efficient climate simulations, allowing broader exploration of emission pathways aligned with sustainability goals.

## **Challenge Tasks and Data**

Participants in this challenge will develop emulators trained on the provided datasets to predict spectral magnitudes (atmospheric transmittances and reflectances) from input atmospheric and geometric conditions. The challenge is structured around three main tasks: (1) training ML models using predefined datasets, (2) predicting outputs for given test conditions, and (3) evaluating emulator performance based on accuracy and runtime.

### **Proposed Experiments**

The challenge includes two primary application test scenarios:
1. **Atmospheric Correction** (`A`): This scenario focuses on the atmospheric correction of hyperspectral satellite imaging data. Emulators will be tested on their ability to reproduce key atmospheric transfer functions that influence radiance measurements, including path radiance, direct/diffuse solar irradiance, and transmittance properties. Full spectral range simulations (400-2500 nm) will be provided at a resolution of 5 cm<sup>-1</sup>.
2. **CO<sub>2</sub> Column Retrieval** (`B`): This scenario is set in the context of atmospheric CO<sub>2</sub> retrieval, modeling how radiation interacts with various gas layers. The emulators will be evaluated on their accuracy in predicting top-of-atmosphere radiance, particularly within the spectral range sensitive to CO<sub>2</sub> absorption (2000-2100 nm) at high spectral resolution (0.1 cm<sup>-1</sup>).

For both scenarios, two test datasets (tracks) will be provided to evaluate (1) interpolation and (2) extrapolation.

Each scenario-track combination is identified by an alphanumeric ID `Sn`, where `S`={`A`,`B`} denotes the scenario and `n`={1,2} denotes the test dataset type (i.e., track). For example, `A2` refers to predictions for the atmospheric correction scenario using the extrapolation dataset.

Participants may choose their preferred scenario(s) and tracks; however, we encourage submitting predictions for all test conditions.

### **Data Availability and Format**

Participants will have access to multiple training datasets of atmospheric RTM simulations varying in sample size, input parameters, and spectral range/resolution. These datasets are generated using Latin Hypercube Sampling to ensure comprehensive input space coverage and to minimize issues related to ill-posedness and unrealistic results.

The training data (i.e., inputs and outputs of RTM simulations) will be stored in [HDF5](https://docs.h5py.org/en/stable/) format with the following structure:

| **Dimensions** | |
|:---:|:---:|
| **Name** | **Description** |
| `n_wvl` | Number of wavelengths for which spectral data is provided |
| `n_funcs` | Number of atmospheric transfer functions |
| `n_comb` | Number of data points at which spectral data is provided |
| `n_param` | Dimensionality of the input variable space |

| **Data Components** | | | |
|:---:|:---:|:---:|:---:|
| **Name** | **Description** | **Dimensions** | **Datatype** |
| **`LUTdata`** | Atmospheric transfer functions (i.e., outputs) | `n_funcs*n_wvl x n_comb` | single |
| **`LUTHeader`** | Matrix of input variable values for each combination (i.e., inputs) | `n_param x n_comb` | double |
| **`wvl`** | Wavelength values associated with the atmospheric transfer functions (i.e., the spectral grid) | `n_wvl` | double |

**Note:** Participants may choose to predict the spectral data either as a single vector of length `n_funcs*n_wvl` or as `n_funcs` separate vectors of length `n_wvl`.
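For instance, assuming the transfer functions are stacked contiguously along the first axis of `LUTdata` (an assumption to verify against the provided files; the sizes below are illustrative), the split into per-function vectors can be sketched in NumPy as:

```python
import numpy as np

# Illustrative sizes only; the real values come from the HDF5 dimensions
n_funcs, n_wvl, n_comb = 4, 1000, 50
LUTdata = np.arange(n_funcs * n_wvl * n_comb, dtype=np.float32).reshape(n_funcs * n_wvl, n_comb)

# Split the stacked (n_funcs*n_wvl, n_comb) matrix into n_funcs blocks of n_wvl rows each
per_func = LUTdata.reshape(n_funcs, n_wvl, n_comb)
```

Here `per_func[f]` holds the `n_wvl x n_comb` spectral data of transfer function `f`.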

Testing input datasets (i.e., the inputs for predictions) will be stored in tabulated `.csv` format with dimensions `n_param x n_comb`.

The training and testing datasets are organized into scenario-specific folders: `scenarioA` (Atmospheric Correction) and `scenarioB` (CO<sub>2</sub> Column Retrieval). Each folder contains:
- A `train` subfolder with multiple `.h5` files corresponding to different training sample sizes (e.g., `train2000.h5` contains 2000 samples).
- A `reference` subfolder containing two test files (`refInterp` and `refExtrap`) corresponding to the two aforementioned tracks (i.e., interpolation and extrapolation).

Additionally, a global attribute (`scenario`) is included in the training data files to indicate the relevant challenge scenario (see [**Proposed Experiments**](/datasets/isp-uv-es/rtm_emulation#proposed-experiments)).

Here is an example of how to load each dataset in Python:
```python
import h5py
import pandas as pd
import numpy as np

# Replace with the actual paths to your training and testing data
trainFile = 'train2000.h5'
testFile = 'refInterp.csv'

# Open the H5 file and read the training data
with h5py.File(trainFile, 'r') as h5_file:
    Ytrain = h5_file['LUTdata'][:]
    Xtrain = h5_file['LUTHeader'][:]
    wvl = h5_file['wvl'][:]

# Read the testing data
df = pd.read_csv(testFile)
Xtest = df.to_numpy()
```

in MATLAB:
```matlab
% Replace with the actual paths to your training and testing data
trainFile = 'train2000.h5';
testFile = 'refInterp.csv';

% Read the training data from the H5 file
Ytrain = h5read(trainFile,'/LUTdata');
Xtrain = h5read(trainFile,'/LUTHeader');
wvl = h5read(trainFile,'/wvl');

% Read the testing data
Xtest = importdata(testFile);
```

and in R:
```r
library(rhdf5)

# Replace with the actual paths to your training and testing data
trainFile <- "train2000.h5"
testFile <- "refInterp.csv"

# Read the training data from the H5 file
Ytrain <- h5read(trainFile, "LUTdata")
Xtrain <- h5read(trainFile, "LUTHeader")
wvl <- h5read(trainFile, "wvl")

# Read the testing data
Xtest <- as.matrix(read.csv(testFile))
```

All data will be shared through this Hugging Face repository. After the challenge finishes, participants will also have access to the evaluation scripts on [this GitLab](http://to_be_prepared) to ensure transparency and reproducibility.

## **Evaluation Methodology**

The evaluation focuses on three key aspects: prediction accuracy, computational efficiency, and extrapolation performance.

### **Prediction Accuracy**

For the **atmospheric correction** scenario (`A`), the predicted atmospheric transfer functions will be used to retrieve surface reflectance from the top-of-atmosphere (TOA) radiance simulations in the testing dataset. The evaluation will proceed as follows:
1. The relative difference between the retrieved and reference reflectance will be computed for each spectral channel and sample in the testing dataset.
2. The mean relative error (MRE) will be calculated over the entire reference dataset to assess overall emulator bias.
3. The spectrally averaged MRE (MRE<sub>λ</sub>) will be computed, excluding wavelengths in the deep H<sub>2</sub>O absorption regions, to ensure direct comparability between participants.

For the **CO<sub>2</sub> retrieval** scenario (`B`), evaluation will follow the same steps, comparing the predicted TOA radiance spectra against the reference values in the testing dataset.
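As an illustration, the per-channel and spectrally averaged metrics can be sketched as follows (the function names and the exclusion mask are ours, not part of the official evaluation scripts):

```python
import numpy as np

def mre_per_channel(pred, ref):
    """Mean relative error per spectral channel; pred/ref have shape (n_wvl, n_comb)."""
    return np.mean(np.abs(pred - ref) / np.abs(ref), axis=1)

def mre_spectral(pred, ref, keep):
    """Spectrally averaged MRE over the channels flagged True in `keep`
    (e.g., excluding deep H2O absorption regions)."""
    return mre_per_channel(pred, ref)[keep].mean()

# Tiny illustrative check: a uniform +10% bias gives an MRE of 0.10 everywhere
ref = np.ones((5, 3))
pred = 1.1 * ref
keep = np.array([True, True, True, False, False])
```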

Since each participant/model can contribute up to four scenario-track combinations, we will consolidate results into a single final ranking using the following process:
1. **Individual ranking**: For each of the four combinations, submissions will be ranked by their MRE<sub>λ</sub> values; lower MRE<sub>λ</sub> values correspond to better performance. In the unlikely case of ties, the tied ranks will be averaged.
2. **Final ranking**: Rankings will be aggregated into a single final score using a weighted average, with weights of 0.325 for the interpolation tracks and 0.175 for the extrapolation tracks. That is:
**Final score = (0.325 × AC-Interp Rank) + (0.175 × AC-Extrap Rank) + (0.325 × CO2-Interp Rank) + (0.175 × CO2-Extrap Rank)**
3. **Missing Submissions**: If a participant does not submit results for a particular scenario-track combination, they will be placed in the last position for that track.
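The aggregation above can be sketched as follows (the last-place handling for missing tracks follows the rule stated here; the function itself is illustrative):

```python
def final_score(ranks, n_participants):
    """Weighted average of per-track ranks; lower is better.
    `ranks` maps track IDs ('A1', 'A2', 'B1', 'B2') to the participant's rank.
    Missing tracks are assigned the last position (n_participants)."""
    weights = {"A1": 0.325, "A2": 0.175, "B1": 0.325, "B2": 0.175}
    return sum(w * ranks.get(track, n_participants) for track, w in weights.items())
```

For example, a participant ranked 1st on every track scores 1.0, since the weights sum to one.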

To ensure fairness in the final ranking, ties will be resolved using the **standard competition ranking** method. If two or more participants achieve the same weighted average rank, they will be assigned the same final position, and the subsequent rank(s) will be skipped accordingly. For example, if two participants are tied for 1st place, they will both receive rank 1, and the next participant will be ranked 3rd (not 2nd).

**Note:** While the challenge is open, the daily evaluation of error metrics will be performed on a subset of the test data. This prevents participants from obtaining detailed information that would allow them to fine-tune their models to the test set. The final results and ranking, evaluated on the full validation data, will be provided at the end date of the challenge.

### **Computational Efficiency**

Participants must report the runtime required to generate predictions for each emulator configuration. To facilitate fair comparisons, they should also provide a report with hardware specifications, including CPU, parallelization settings (e.g., multi-threading, GPU acceleration), and RAM availability. Additionally, participants should report key model characteristics, such as the number of operations required for a single prediction and the number of trainable parameters in their ML models.
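A minimal way to measure the runtime to report could look like the following sketch (`model_predict` is a placeholder for your own prediction function, not part of the challenge materials):

```python
import time

def timed_predict(model_predict, X):
    """Return predictions and the wall-clock runtime in seconds."""
    t0 = time.perf_counter()
    Y = model_predict(X)
    return Y, time.perf_counter() - t0

# Illustrative use with a trivial stand-in model
Y, runtime = timed_predict(lambda x: [v * 2 for v in x], [1, 2, 3])
```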

All evaluation scripts will be publicly available on GitLab and Hugging Face to ensure fairness, trustworthiness, and transparency.

### **Proposed Protocol**

- Participants must generate emulator predictions on the provided testing datasets before the submission deadline. Multiple emulator models can be submitted.

- Submissions are made via a [pull request](https://huggingface.co/docs/hub/en/repositories-pull-requests-discussions) to this repository.

- Each submission **MUST** include the prediction results in HDF5 format and a `metadata.json` file.

- The predictions should be stored in a `.h5` file with the same format as the [training data](/datasets/isp-uv-es/rtm_emulation#data-availability-and-format). Note that only the **`LUTdata`** matrix (i.e., the predictions) is needed. A baseline example of this file is available for participants (`baseline_Sn.h5`). We encourage participants to compress their HDF5 files using the deflate option.

- Each prediction file must be stored in the `predictions` subfolder within the corresponding scenario folder (e.g., `/scenarioA/predictions`). Prediction files should be named using the emulator/model name followed by the scenario-track ID (e.g., `/scenarioA/predictions/mymodel_A1.h5`). A global attribute named `scenario` must be included to specify the corresponding scenario-track (e.g., `A1`; see [**Proposed Experiments**](/datasets/isp-uv-es/rtm_emulation#proposed-experiments)). A global attribute named `runtime` must also be included to report the computational efficiency of your model (value expressed in seconds). Note that predictions for different scenario-tracks must be stored in separate files.
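Under these rules, writing a prediction file with h5py might look like the following sketch (the matrix shape and runtime value are placeholders):

```python
import h5py
import numpy as np

Ypred = np.zeros((8, 100), dtype=np.float32)  # placeholder predictions

# Store predictions with deflate (gzip) compression plus the required attributes
with h5py.File("mymodel_A1.h5", "w") as f:
    f.create_dataset("LUTdata", data=Ypred, compression="gzip")
    f.attrs["scenario"] = "A1"
    f.attrs["runtime"] = 12.3  # prediction runtime in seconds (placeholder)
```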

- The metadata file (`metadata.json`) shall contain the following information:

```json
{
  "name": "model_name",
  "authors": ["author1", "author2"],
  "affiliations": ["affiliation1", "affiliation2"],
  "description": "A brief description of the emulator",
  "url": "[OPTIONAL] URL to the model repository if it is open-source",
  "doi": "DOI to the model publication (if available)"
}
```
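A quick sanity check of the file before submitting could look like this sketch (the required-key set is our reading of the fields above, treating `url` and `doi` as optional):

```python
import json

REQUIRED_KEYS = {"name", "authors", "affiliations", "description"}

def check_metadata(path):
    """Load metadata.json and verify the mandatory fields are present."""
    with open(path) as f:
        meta = json.load(f)
    missing = REQUIRED_KEYS - meta.keys()
    if missing:
        raise ValueError(f"metadata.json is missing: {sorted(missing)}")
    return meta
```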

- Emulator predictions will be evaluated once per day at 12:00 CET based on the defined metrics.

- After the deadline, teams will be contacted with their evaluation results. If any issues are identified, teams will have up to two weeks to provide the necessary corrections.

- Questions and discussions will be held in the discussion section of this [repository](https://huggingface.co/isp-uv-es/rtm_emulation/discussions).

- After all participants have provided the necessary corrections, the results will be published in the discussion section of this repository.

## **Expected Outcomes**

- No single methodology is expected to be clearly superior across all metrics.
- Participants will benefit from the analysis across scenarios/tracks, which will help them improve their models.
- A research publication will be submitted to a remote sensing journal together with the top three winners.
- An overview paper of the challenge will be published in the [ECML-PKDD 2025](https://ecmlpkdd.org/2025/) proceedings.
- The winner will have their registration cost for [ECML-PKDD 2025](https://ecmlpkdd.org/2025/) covered.
- We are exploring the possibility of providing economic prizes for the top three winners. Stay tuned!

## **Benchmark Results**

| **Model** | **Scenario** | **MRE<sub>λ</sub> (%)** | **Inference Time (s)** |
|-----------|--------------|--------------|-------------------------|
| demov1 | A | 0.21 | N/A |
| demov2 | A | 0.22 | N/A |
| demov1 | B | 0.51 | N/A |
| demov3 | C | 0.51 | N/A |
results/baseline_A1.h5 ADDED (Git LFS: oid sha256:1d86578aa121c1e36b73a0ffeb75927469af72417f78ee48c0d77e9f52e5bbf9, size 100921152)
results/baseline_B1.h5 ADDED (Git LFS: oid sha256:89d0b0864fefab4bfdafcafa5a85363dc021b43466f52ba53f2c9b01c2b4379c, size 58801152)
results/test1_A1.h5 ADDED (Git LFS: oid sha256:0441ecd4879f48e804b34d9f493354b2746db7a20bde042b1013fe26b5817266, size 100921152)
results/test1_A2.h5 ADDED (Git LFS: oid sha256:0441ecd4879f48e804b34d9f493354b2746db7a20bde042b1013fe26b5817266, size 100921152)
results/test1_B1.h5 ADDED (Git LFS: oid sha256:fbf68e084b588d2ee99db212f463eee95c103ea45ad70d72df400cf698ab2db7, size 58801152)
results/test2_A1.h5 ADDED (Git LFS: oid sha256:fb8a6e2369ee143e833f5f4299a062fd16db5f87c440ec8951845d0884fc3b6b, size 100921152)
results/test2_B1.h5 ADDED (Git LFS: oid sha256:ceb06b2b3ef4659722a5bc330762b36556f6b9156ca492918914b02d8dc35371, size 58801152)
scenarioA/reference/refExtrap.csv ADDED (diff too large to render)
scenarioA/reference/refInterp.csv ADDED (diff too large to render)
scenarioA/train/train2000.h5 ADDED (Git LFS: oid sha256:a0d97bcba3c2c6dd90200fea5d892cea9b2edd5b3084b63284ee12e05bfa811f, size 202108012)
scenarioA/train/train500.h5 ADDED (Git LFS: oid sha256:03c6bc48c73db819382e7938cb367b6771424ef8712c4da2063576c3695c2926, size 50620012)
scenarioB/reference/refExtrap.csv ADDED (diff too large to render)
scenarioB/reference/refInterp.csv ADDED (diff too large to render)
scenarioB/train/train2000.h5 ADDED (Git LFS: oid sha256:f6c919fedffd47ddcd02fb6dca079e22eaedba4ef81ea53efad5531e603ee4f9, size 117805394)
scenarioB/train/train500.h5 ADDED (Git LFS: oid sha256:28e1a14a33a7c8cfed622f00b15801c5d58c7f2d31e6459a4b3abf48adbe27f9, size 29521394)