Commit 7efbe29 (verified)

Update dataset with new benchmark results and README

Co-authored-by: geogpt69 <[email protected]>
Co-authored-by: viserjor <[email protected]>
- .gitattributes +59 -0
- README.md +266 -0
- results/baseline.json +9 -0
- results/baseline_A1.h5 +3 -0
- results/baseline_A2.h5 +3 -0
- results/baseline_B1.h5 +3 -0
- results/baseline_B2.h5 +3 -0
- scenarioA/reference/refExtrap.csv +0 -0
- scenarioA/reference/refInterp.csv +0 -0
- scenarioA/train/train2000.h5 +3 -0
- scenarioA/train/train500.h5 +3 -0
- scenarioB/reference/refExtrap.csv +0 -0
- scenarioB/reference/refInterp.csv +0 -0
- scenarioB/train/train2000.h5 +3 -0
- scenarioB/train/train500.h5 +3 -0
.gitattributes
ADDED
@@ -0,0 +1,59 @@
+*.7z filter=lfs diff=lfs merge=lfs -text
+*.arrow filter=lfs diff=lfs merge=lfs -text
+*.bin filter=lfs diff=lfs merge=lfs -text
+*.bz2 filter=lfs diff=lfs merge=lfs -text
+*.ckpt filter=lfs diff=lfs merge=lfs -text
+*.ftz filter=lfs diff=lfs merge=lfs -text
+*.gz filter=lfs diff=lfs merge=lfs -text
+*.h5 filter=lfs diff=lfs merge=lfs -text
+*.joblib filter=lfs diff=lfs merge=lfs -text
+*.lfs.* filter=lfs diff=lfs merge=lfs -text
+*.lz4 filter=lfs diff=lfs merge=lfs -text
+*.mds filter=lfs diff=lfs merge=lfs -text
+*.mlmodel filter=lfs diff=lfs merge=lfs -text
+*.model filter=lfs diff=lfs merge=lfs -text
+*.msgpack filter=lfs diff=lfs merge=lfs -text
+*.npy filter=lfs diff=lfs merge=lfs -text
+*.npz filter=lfs diff=lfs merge=lfs -text
+*.onnx filter=lfs diff=lfs merge=lfs -text
+*.ot filter=lfs diff=lfs merge=lfs -text
+*.parquet filter=lfs diff=lfs merge=lfs -text
+*.pb filter=lfs diff=lfs merge=lfs -text
+*.pickle filter=lfs diff=lfs merge=lfs -text
+*.pkl filter=lfs diff=lfs merge=lfs -text
+*.pt filter=lfs diff=lfs merge=lfs -text
+*.pth filter=lfs diff=lfs merge=lfs -text
+*.rar filter=lfs diff=lfs merge=lfs -text
+*.safetensors filter=lfs diff=lfs merge=lfs -text
+saved_model/**/* filter=lfs diff=lfs merge=lfs -text
+*.tar.* filter=lfs diff=lfs merge=lfs -text
+*.tar filter=lfs diff=lfs merge=lfs -text
+*.tflite filter=lfs diff=lfs merge=lfs -text
+*.tgz filter=lfs diff=lfs merge=lfs -text
+*.wasm filter=lfs diff=lfs merge=lfs -text
+*.xz filter=lfs diff=lfs merge=lfs -text
+*.zip filter=lfs diff=lfs merge=lfs -text
+*.zst filter=lfs diff=lfs merge=lfs -text
+*tfevents* filter=lfs diff=lfs merge=lfs -text
+# Audio files - uncompressed
+*.pcm filter=lfs diff=lfs merge=lfs -text
+*.sam filter=lfs diff=lfs merge=lfs -text
+*.raw filter=lfs diff=lfs merge=lfs -text
+# Audio files - compressed
+*.aac filter=lfs diff=lfs merge=lfs -text
+*.flac filter=lfs diff=lfs merge=lfs -text
+*.mp3 filter=lfs diff=lfs merge=lfs -text
+*.ogg filter=lfs diff=lfs merge=lfs -text
+*.wav filter=lfs diff=lfs merge=lfs -text
+# Image files - uncompressed
+*.bmp filter=lfs diff=lfs merge=lfs -text
+*.gif filter=lfs diff=lfs merge=lfs -text
+*.png filter=lfs diff=lfs merge=lfs -text
+*.tiff filter=lfs diff=lfs merge=lfs -text
+# Image files - compressed
+*.jpg filter=lfs diff=lfs merge=lfs -text
+*.jpeg filter=lfs diff=lfs merge=lfs -text
+*.webp filter=lfs diff=lfs merge=lfs -text
+# Video files - compressed
+*.mp4 filter=lfs diff=lfs merge=lfs -text
+*.webm filter=lfs diff=lfs merge=lfs -text
README.md
ADDED
@@ -0,0 +1,266 @@
---
license: apache-2.0
language:
- en
pipeline_tag: emulation
tags:
- emulation
- atmosphere radiative transfer models
- hyperspectral
pretty_name: Atmospheric Radiative Transfer Emulation Challenge
---
Last update: 08-05-2025

<img src="https://elias-ai.eu/wp-content/uploads/2023/09/elias_logo_big-1.png" alt="elias_logo" style="width:15%; display: inline-block; margin-right: 150px;">
<img src="https://elias-ai.eu/wp-content/uploads/2024/01/EN_FundedbytheEU_RGB_WHITE-Outline-1.png" alt="eu_logo" style="width:20%; display: inline-block;">

# **Atmospheric Radiative Transfer Emulation Challenge**

1. [**Introduction**](#introduction)
2. [**Challenge Tasks and Data**](#challenge-tasks-and-data)

   2.1. [**Proposed Experiments**](#proposed-experiments)

   2.2. [**Data Availability and Format**](#data-availability-and-format)
3. [**Evaluation methodology**](#evaluation-methodology)

   3.1. [**Prediction Accuracy**](#prediction-accuracy)

   3.2. [**Computational efficiency**](#computational-efficiency)

   3.3. [**Proposed Protocol**](#proposed-protocol)
4. [**Expected Outcomes**](#expected-outcomes)

## **Benchmark Results**

| **Model** | **MRE A1 (%)** | **MRE A2 (%)** | **MRE B1 (%)** | **MRE B2 (%)** | **Score** | **Runtime** | **Rank** |
|-----------|----------------|----------------|----------------|----------------|-----------|-------------|----------|
| Krtek | 0.545 | 7.693 | 0.816 | 7.877 | 1.175 | 0.526 | 1 |
| baseline | 0.998 | 12.604 | 1.065 | 7.072 | 1.825 | 2.400 | 2 |

## **Introduction**

Atmospheric Radiative Transfer Models (RTMs) are crucial in Earth and climate sciences, with applications such as synthetic scene generation, satellite data processing, and numerical weather forecasting. However, their increasing complexity results in a computational burden that limits their direct use in operational settings. A practical solution is to interpolate look-up tables (LUTs) of pre-computed RTM simulations generated from long and costly model runs. However, large LUTs are still needed to achieve accurate results, requiring significant time to generate and demanding high memory capacity. Alternative, ad hoc solutions make data processing algorithms mission-specific and lack generalization. These problems are exacerbated for hyperspectral satellite missions, where the data volume of LUTs can increase by one or two orders of magnitude, limiting the applicability of advanced data processing algorithms. In this context, emulation offers an alternative that enables real-time satellite data processing while providing high prediction accuracy and adaptability across atmospheric conditions. Emulation replicates the behavior of a deterministic and computationally demanding model using statistical regression algorithms. This approach facilitates the implementation of physics-based inversion algorithms, yielding accurate and computationally efficient model predictions compared to traditional look-up table interpolation methods.

RTM emulation is challenging due to the high-dimensional nature of both the input (~10 dimensions) and output (several thousand dimensions) spaces, and the complex interactions of electromagnetic radiation with the atmosphere. The research implications are vast, with potential breakthroughs in surrogate modeling, uncertainty quantification, and physics-aware AI systems that can significantly contribute to climate and Earth observation sciences.

This challenge will contribute to reducing computational burdens in climate and atmospheric research, enabling (1) faster satellite data processing for applications in remote sensing and weather prediction, (2) improved accuracy in atmospheric correction of hyperspectral imaging data, and (3) more efficient climate simulations, allowing broader exploration of emission pathways aligned with sustainability goals.

## **Challenge Tasks and Data**

Participants in this challenge will develop emulators trained on the provided datasets to predict spectral magnitudes (atmospheric transmittances and reflectances) from input atmospheric and geometric conditions. The challenge is structured around three main tasks: (1) training ML models on predefined datasets, (2) predicting outputs for given test conditions, and (3) evaluating emulator performance based on accuracy.

### **Proposed Experiments**

The challenge includes two primary application test scenarios:
1. **Atmospheric Correction** (`A`): This scenario focuses on the atmospheric correction of hyperspectral satellite imaging data. Emulators will be tested on their ability to reproduce key atmospheric transfer functions that influence radiance measurements, including path radiance, direct/diffuse solar irradiance, and transmittance properties. Full spectral range simulations (400-2500 nm) will be provided at a resolution of 5 cm<sup>-1</sup>.
2. **CO<sub>2</sub> Column Retrieval** (`B`): This scenario addresses atmospheric CO<sub>2</sub> retrieval by modeling how radiation interacts with various gas layers. Emulators will be evaluated on their accuracy in predicting top-of-atmosphere radiance, particularly within the spectral range sensitive to CO<sub>2</sub> absorption (2000-2100 nm), at high spectral resolution (0.1 cm<sup>-1</sup>).

For both scenarios, two test datasets (tracks) will be provided to evaluate (1) interpolation and (2) extrapolation performance.

Each scenario-track combination is identified by an alphanumeric ID `Sn`, where `S`={`A`,`B`} denotes the scenario and `n`={1,2} denotes the track (1 = interpolation, 2 = extrapolation). For example, `A2` refers to predictions for the atmospheric correction scenario on the extrapolation dataset.

Participants may choose their preferred scenario(s) and tracks; however, we encourage submitting predictions for all test conditions.

### **Data Availability and Format**

Participants will have access to multiple training datasets of atmospheric RTM simulations varying in sample size, input parameters, and spectral range/resolution. These datasets are generated using Latin Hypercube Sampling to ensure comprehensive input space coverage and to minimize issues related to ill-posedness and unrealistic results.

The training data (i.e., inputs and outputs of RTM simulations) are stored in [HDF5](https://docs.h5py.org/en/stable/) format with the following structure:

**Dimensions**

| **Name** | **Description** |
|:---:|:---:|
| `n_wvl` | Number of wavelengths for which spectral data is provided |
| `n_funcs` | Number of atmospheric transfer functions |
| `n_comb` | Number of data points at which spectral data is provided |
| `n_param` | Dimensionality of the input variable space |

**Data Components**

| **Name** | **Description** | **Dimensions** | **Datatype** |
|:---:|:---:|:---:|:---:|
| **`LUTdata`** | Atmospheric transfer functions (i.e., outputs) | `n_funcs*n_wvl x n_comb` | single |
| **`LUTHeader`** | Matrix of input variable values for each combination (i.e., inputs) | `n_param x n_comb` | double |
| **`wvl`** | Wavelength values associated with the atmospheric transfer functions (i.e., spectral grid) | `n_wvl` | double |

**Note:** Participants may choose to predict the spectral data either as a single vector of length `n_funcs*n_wvl` or as `n_funcs` separate vectors of length `n_wvl`.

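The splitting described in the note above can be sketched in Python. The block below uses a synthetic stand-in for `LUTdata` with made-up sizes, and it assumes the transfer functions are stacked contiguously along the first axis — check the actual ordering in the provided files:

```python
import numpy as np

# Hypothetical sizes, for illustration only; the real datasets define
# n_funcs, n_wvl and n_comb inside the HDF5 file
n_funcs, n_wvl, n_comb = 4, 1000, 2000

# Stand-in for the LUTdata matrix of shape (n_funcs*n_wvl, n_comb)
Ytrain = np.arange(n_funcs * n_wvl * n_comb, dtype=np.float32).reshape(
    n_funcs * n_wvl, n_comb
)

# Split the stacked spectra into n_funcs blocks of shape (n_wvl, n_comb),
# assuming function k occupies rows k*n_wvl:(k+1)*n_wvl
blocks = Ytrain.reshape(n_funcs, n_wvl, n_comb)
```

Working with `blocks[k]` then gives one `(n_wvl, n_comb)` matrix per transfer function, while `Ytrain` keeps the single-vector view.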
Testing input datasets (i.e., inputs for predictions) are stored in tabulated `.csv` format with dimensions `n_param x n_comb`.

The training and testing datasets are organized into scenario-specific folders (see [**Proposed experiments**](/datasets/isp-uv-es/rtm_emulation#proposed-experiments)): `scenarioA` (Atmospheric Correction) and `scenarioB` (CO<sub>2</sub> Column Retrieval). Each folder contains:
- A `train` subfolder with multiple `.h5` files corresponding to different training sample sizes (e.g., `train2000.h5` contains 2000 samples).
- A `reference` subfolder containing two test files (`refInterp` and `refExtrap`) corresponding to the two aforementioned tracks (i.e., interpolation and extrapolation).

Here is an example of how to load each dataset in Python:
```python
import h5py
import pandas as pd
import numpy as np

# Replace with the actual path to your training and testing data
trainFile = 'train2000.h5'
testFile = 'refInterp.csv'

# Open the H5 file
with h5py.File(trainFile, 'r') as h5_file:
    Ytrain = h5_file['LUTdata'][:]
    Xtrain = h5_file['LUTHeader'][:]
    wvl = h5_file['wvl'][:]

# Read testing data
df = pd.read_csv(testFile)
Xtest = df.to_numpy()
```

in MATLAB:
```matlab
% Replace with the actual path to your training and testing data
trainFile = 'train2000.h5';
testFile = 'refInterp.csv';

% Open the H5 file
Ytrain = h5read(trainFile,'/LUTdata');
Xtrain = h5read(trainFile,'/LUTHeader');
wvl = h5read(trainFile,'/wvl');

% Read testing data
Xtest = importdata(testFile);
```

and in R:
```r
library(rhdf5)

# Replace with the actual path to your training and testing data
trainFile <- "train2000.h5"
testFile <- "refInterp.csv"

# Open the H5 file
lut_data <- h5read(trainFile, "LUTdata")
lut_header <- h5read(trainFile, "LUTHeader")
wavelengths <- h5read(trainFile, "wvl")

# Read testing data
Xtest <- as.matrix(read.table(testFile, sep = ",", header = TRUE))
```

All data will be shared through this [repository](https://huggingface.co/datasets/isp-uv-es/rtm_emulation/tree/main). After the challenge finishes, participants will also have access to the evaluation scripts on [this GitLab](http://to_be_prepared) to ensure transparency and reproducibility.

## **Evaluation methodology**

The evaluation will focus on three key aspects: prediction accuracy, computational efficiency, and extrapolation performance.

### **Prediction Accuracy**

For the **atmospheric correction** scenario (`A`), the predicted atmospheric transfer functions will be used to retrieve surface reflectance from the top-of-atmosphere (TOA) radiance simulations in the testing dataset. The evaluation will proceed as follows:
1. The relative difference between retrieved and reference reflectance will be computed for each spectral channel and sample in the testing dataset.
2. The mean relative error (MRE) will be calculated over the entire reference dataset to assess overall emulator bias.
3. The spectrally-averaged MRE (MRE<sub>λ</sub>) will be computed, excluding wavelengths in the deep H<sub>2</sub>O absorption regions, to ensure direct comparability between participants.

For the **CO<sub>2</sub> retrieval** scenario (`B`), evaluation will follow the same steps, comparing predicted TOA radiance spectra against the reference values in the testing dataset.

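The three steps above can be sketched as follows. The data are synthetic, and the exact error definition (here a mean absolute relative difference in percent) and the absorption-band mask are illustrative assumptions, not the official evaluation code:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical data: reference reflectance and an emulator-based retrieval,
# with shape (samples x spectral channels)
ref = rng.uniform(0.1, 1.0, size=(200, 50))
retrieved = ref * (1.0 + rng.normal(0.0, 0.01, size=ref.shape))

# Step 1: relative difference per spectral channel and sample
rel_diff = np.abs(retrieved - ref) / np.abs(ref)

# Step 2: mean relative error (MRE) per channel over the whole dataset, in %
mre = 100.0 * rel_diff.mean(axis=0)

# Step 3: spectrally averaged MRE; a boolean mask would exclude the deep
# H2O absorption channels (all channels are kept in this toy example)
keep = np.ones(ref.shape[1], dtype=bool)
mre_lambda = mre[keep].mean()
```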
Since each participant/model can contribute up to four scenario-track combinations, results will be consolidated into a single final ranking as follows:
1. **Individual ranking**: For each of the four combinations, submissions will be ranked by their MRE<sub>λ</sub> values; lower values correspond to better performance. In the unlikely case of ties, the tied ranks will be averaged.
2. **Final ranking**: Rankings will be aggregated into a single final score using a weighted average, with weights of 0.325 for the interpolation tracks and 0.175 for the extrapolation tracks. That is:

   **Final score = (0.325 × AC-Interp Rank) + (0.175 × AC-Extrap Rank) + (0.325 × CO2-Interp Rank) + (0.175 × CO2-Extrap Rank)**
3. **Missing submissions**: If a participant does not submit results for a particular scenario-track combination, they will be placed in the last position for that track.

To ensure fairness in the final ranking, the **standard competition ranking** method will be used in the case of ties. If two or more participants achieve the same weighted average rank, they will be assigned the same final position, and the subsequent rank(s) will be skipped accordingly. For example, if two participants are tied for 1st place, they both receive rank 1, and the next participant is ranked 3rd (not 2nd).

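A minimal sketch of this aggregation and tie handling, with hypothetical per-track ranks for made-up teams:

```python
# Track weights from the final-score formula
WEIGHTS = {"A1": 0.325, "A2": 0.175, "B1": 0.325, "B2": 0.175}

# Hypothetical per-track ranks; a missing track would already have been
# filled with the last position before this step
ranks = {
    "team_x": {"A1": 1, "A2": 2, "B1": 1, "B2": 1},
    "team_y": {"A1": 2, "A2": 1, "B1": 2, "B2": 2},
    "team_z": {"A1": 3, "A2": 3, "B1": 3, "B2": 3},
}

def final_score(track_ranks):
    """Weighted average of the per-track ranks."""
    return sum(WEIGHTS[t] * r for t, r in track_ranks.items())

scores = {team: final_score(r) for team, r in ranks.items()}

# Standard competition ranking: tied scores share a position and the
# following positions are skipped ("1, 1, 3" rather than "1, 2, 3")
ordered = sorted(scores, key=scores.get)
final_rank = {}
for i, team in enumerate(ordered):
    if i > 0 and scores[team] == scores[ordered[i - 1]]:
        final_rank[team] = final_rank[ordered[i - 1]]
    else:
        final_rank[team] = i + 1
```

With these toy ranks, `team_x` scores 1.175 and `team_y` 1.825, which mirrors how the scores in the benchmark table above are produced from per-track ranks.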
**Note:** While the challenge is open, the daily evaluation of error metrics will be done on a subset of the test data. This prevents participants from having detailed information that would allow them to fine-tune their models. The final results and ranking, evaluated on all the validation data, will be provided at the end date of the challenge.

### **Computational efficiency**

Participants must report the runtime required to generate predictions for each emulator configuration. The average runtime over all scenario-track combinations will be calculated and reported in the results table. **Runtime won't be taken into account for the final ranking**. After the competition ends, and to facilitate fair comparisons, participants will be asked to provide a report with hardware specifications, including CPU, parallelization settings (e.g., multi-threading, GPU acceleration), and RAM availability. Additionally, participants should report key model characteristics, such as the number of operations required for a single prediction and the number of trainable parameters in their ML models.

All evaluation scripts will be publicly available on GitLab and Hugging Face to ensure fairness, trustworthiness, and transparency.

### **Proposed Protocol**

- Participants must generate emulator predictions on the provided testing datasets before the submission deadline. Multiple emulator models can be submitted.

- Submissions will be made via a [pull request](https://huggingface.co/docs/hub/en/repositories-pull-requests-discussions) to this repository.

- Each submission **MUST** include the prediction results in HDF5 format and a `metadata.json` file.

- The predictions should be stored in a `.h5` file with the same format as the [training data](/datasets/isp-uv-es/rtm_emulation#data-availability-and-format). Note that only the **`LUTdata`** matrix (i.e., the predictions) is needed. A baseline example of this file is available for participants (`baseline_Sn.h5`). We encourage participants to compress their HDF5 files using the deflate option.

- Each prediction file must be stored in the `predictions` subfolder within the corresponding scenario folder (e.g., `/scenarioA/predictions`). Prediction files should be named using the emulator/model name followed by the scenario-track ID (e.g., `/scenarioA/predictions/mymodel_A1.h5`). A global attribute named `runtime` must be included to report the computational efficiency of your model (value expressed in seconds). Note that predictions for different scenario-tracks should be stored in separate files.

- The metadata file (`metadata.json`) shall contain the following information:

```json
{
  "name": "model_name",
  "authors": ["author1", "author2"],
  "affiliations": ["affiliation1", "affiliation2"],
  "description": "A brief description of the emulator",
  "url": "[OPTIONAL] URL to the model repository if it is open-source",
  "doi": "DOI to the model publication (if available)",
  "email": "<main_contact_email>"
}
```

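Preparing such a prediction file can be sketched with `h5py` as follows; the matrix shape, file name, and runtime value below are placeholders for illustration:

```python
import numpy as np
import h5py

# Hypothetical prediction matrix: n_funcs*n_wvl rows x n_comb columns
Ypred = np.random.rand(1200, 50).astype(np.float32)

# Write the submission file with deflate (gzip) compression and the
# required global `runtime` attribute (seconds)
with h5py.File("mymodel_A1.h5", "w") as f:
    f.create_dataset("LUTdata", data=Ypred, compression="gzip")
    f.attrs["runtime"] = 12.3

# Sanity check: the file round-trips
with h5py.File("mymodel_A1.h5", "r") as f:
    assert f["LUTdata"].shape == Ypred.shape
    assert float(f.attrs["runtime"]) == 12.3
```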
- Emulator predictions will be evaluated once per day at 12:00 CET based on the defined metrics.

- After the deadline, teams will be contacted with their evaluation results. If any issues are identified, teams will have up to two weeks to provide the necessary corrections.

- In case of **problems with the pull request** or invalid submitted files, all discussions will be held in the [discussion board](https://huggingface.co/isp-uv-es/rtm_emulation/discussions).

- After all participants have provided the necessary corrections, the results will be published in the discussion section of this repository.

## **Expected Outcomes**

- No clear superiority of any methodology across all metrics is expected.
- Participants will benefit from the analysis of scenarios/tracks, which will help them improve their models.
- A research publication will be submitted to a remote sensing journal with the top three winners.
- An overview paper of the challenge will be published in the [ECML-PKDD 2025](https://ecmlpkdd.org/2025/) workshop proceedings.
- The winner will have their registration cost for [ECML-PKDD 2025](https://ecmlpkdd.org/2025/) covered.
- We are exploring the possibility of providing economic prizes for the top three winners. Stay tuned!
results/baseline.json
ADDED
@@ -0,0 +1,9 @@
+{
+    "name": "baseline",
+    "authors": ["Jorge Vicent Servera"],
+    "affiliations": ["Image & Signal Processing (ISP)"],
+    "description": "2nd order hypersurface polynomial fitting",
+    "url": "",
+    "doi": "",
+    "email": "[email protected]"
+}
results/baseline_A1.h5
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:b24098b9f8b311d898044eebb0b7b06bf7f4891f5965fa02fccd1f8d1b668187
+size 1009201248
results/baseline_A2.h5
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:b1fd2839f0ee10a3c6ee12818e2d1a0bffd04a06dac3a1419cc3c54fb3b93fd6
+size 403681248
results/baseline_B1.h5
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:b319f9bbfd8e24aaafe2d27e545052f33c988a4fe1d26e55a4f6607401feef6b
+size 58801248
results/baseline_B2.h5
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:d035e50f986540bb28cc07c596b2023b758547bca8a73bbd7a6a5d9958d57e89
+size 235201248
scenarioA/reference/refExtrap.csv
ADDED
The diff for this file is too large to render.
scenarioA/reference/refInterp.csv
ADDED
The diff for this file is too large to render.
scenarioA/train/train2000.h5
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:45c49623ecfbecb8ef242c59914dba6fe1577fdc046fefcff6e4bedc44cb86fd
+size 202108012
scenarioA/train/train500.h5
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:042d6153eb75fb8d488e3aa78be092ef2920339b3b52b44bc90747682992e1d3
+size 50620012
scenarioB/reference/refExtrap.csv
ADDED
The diff for this file is too large to render.
scenarioB/reference/refInterp.csv
ADDED
The diff for this file is too large to render.
scenarioB/train/train2000.h5
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:f6c919fedffd47ddcd02fb6dca079e22eaedba4ef81ea53efad5531e603ee4f9
+size 117805394
scenarioB/train/train500.h5
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:28e1a14a33a7c8cfed622f00b15801c5d58c7f2d31e6459a4b3abf48adbe27f9
+size 29521394