Changelog
All notable changes to the dataset will be documented in this file.
The format is based on Keep a Changelog, and this project adheres to Semantic Versioning.
v1.1.0 - 04.12.2024
Overall, the structure of the input data now looks like:
```python
import numpy as np
from datasets import load_dataset

dataset = load_dataset('AGBD.py', trust_remote_code=True, streaming=True)
for sample in dataset['train']:
    _in = sample['input']
    break

np.array(_in).shape  # (24, 15, 15)
```
Where the shape follows the convention below:
- 12 x Sentinel-2 bands
- 3 x Sentinel-2 dates (s2_num_days, s2_doy_cos, s2_doy_sin)
- 4 x Latitude/longitude (lat_cos, lat_sin, lon_cos, lon_sin)
- 3 x GEDI dates (gedi_num_days, gedi_doy_cos, gedi_doy_sin)
- 2 x ALOS bands (HH, HV)
- 2 x CH bands (ch, std)
- (N + 1) x LC (lc_encoded, lc_prob), where N = 14 for onehot, N = 5 for cat2vec, and N = 2 for sin/cos
- 3 x Topographic bands (slope, aspect_cos, aspect_sin)
- 1 x DEM (dem)
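As a rough sketch of how to work with these channels (assuming they appear in the order listed above, so that the first 12 channels are the Sentinel-2 bands; the total number of channels depends on the configuration options described below), a sample can be sliced as follows:

```python
import numpy as np
from datasets import load_dataset

dataset = load_dataset('AGBD.py', trust_remote_code=True, streaming=True)
sample = next(iter(dataset['train']))

x = np.array(sample['input'])  # (C, 15, 15): C feature channels over a 15 x 15 patch
s2_bands = x[:12]              # assuming the order above, the first 12 channels are the Sentinel-2 bands
center = x[:, 7, 7]            # all features at the centre pixel of the patch
```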
Added
- New features:
  - `slope`: the percentage slope at each pixel's location; it is a float between $0$ and $1$
  - `aspect_cos`, `aspect_sin`: the sine and cosine encoded aspect, i.e. the orientation of the slope, measured clockwise in degrees from $0$ to $360$; each is a float between $0$ and $1$
  - `s2_num_days`: the date of acquisition of the Sentinel-2 product, calculated as the number of days since the start of the GEDI mission (April 17th, 2019); it is an int
  - `s2_doy_cos`, `s2_doy_sin`: the sine and cosine encoded corresponding day of the year (DOY); each is a float between $0$ and $1$
  - `gedi_num_days`: the date of acquisition of the GEDI footprint, calculated as the number of days since the start of the GEDI mission (April 17th, 2019); it is a uint
  - `gedi_doy_cos`, `gedi_doy_sin`: the sine and cosine encoded corresponding day of the year (DOY); each is a float between $0$ and $1$
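For illustration, a minimal sketch of how such date features can be derived (the formulas below are assumptions for clarity, not necessarily the exact code used to build the dataset; the sine/cosine values are rescaled to $[0, 1]$ to match the ranges stated above):

```python
import datetime
import numpy as np

GEDI_MISSION_START = datetime.date(2019, 4, 17)  # start of the GEDI mission, as stated above

def num_days(acquisition_date: datetime.date) -> int:
    """Number of days elapsed since the start of the GEDI mission."""
    return (acquisition_date - GEDI_MISSION_START).days

def encode_doy(doy: int, period: float = 365.25) -> tuple[float, float]:
    """Cyclic day-of-year encoding, rescaled from [-1, 1] to [0, 1]."""
    angle = 2 * np.pi * doy / period
    return (np.cos(angle) + 1) / 2, (np.sin(angle) + 1) / 2

acq = datetime.date(2020, 6, 1)
s2_num_days = num_days(acq)
s2_doy_cos, s2_doy_sin = encode_doy(acq.timetuple().tm_yday)
```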
- The `load_dataset()` function now provides the following configuration options:
  - `norm_strat` (default = `'pct'`): the strategy applied to process the input features. Valid options are: `'pct'`, which applies min-max scaling with the $1$st and $99$th percentiles of the data; and `'mean_std'`, which applies mean/variance standardization.
  - `encode_strat` (default = `'onehot'`): the encoding strategy applied to the land classification (LC) data. Valid options are: `'onehot'`, one-hot encoding; `'sin_cos'`, sine-cosine encoding; `'cat2vec'`, cat2vec transformation based on embeddings pre-computed on the train set.
  - `input_features` (dict): the input features to be included in the data, the default values being: `{'S2_bands': ['B01', 'B02', 'B03', 'B04', 'B05', 'B06', 'B07', 'B08', 'B8A', 'B09', 'B11', 'B12'], 'S2_dates': False, 'lat_lon': True, 'GEDI_dates': False, 'ALOS': True, 'CH': True, 'LC': True, 'DEM': True, 'topo': False}`
  - `additional_features` (list, default = `[]`): the metadata to include in the data. Possible values are: `['s2_num_days', 'gedi_num_days', 'lat', 'lon', 'agbd_se', 'elev_lowes', 'leaf_off_f', 'pft_class', 'region_cla', 'rh98', 'sensitivity', 'solar_elev', 'urban_prop']`

This metadata can later be accessed as such:

```python
from datasets import load_dataset

# 'lat' must be requested via the additional_features option described above
dataset = load_dataset('AGBD.py', trust_remote_code=True, streaming=True, additional_features=['lat'])
for sample in dataset['train']:
    lat = sample['lat']
    break
```
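For example, a sketch of how these options might be combined in a single call (the specific values below are illustrative, not the defaults):

```python
from datasets import load_dataset

dataset = load_dataset(
    'AGBD.py',
    trust_remote_code=True,
    streaming=True,
    norm_strat='mean_std',                       # mean/variance standardization instead of the default 'pct'
    encode_strat='sin_cos',                      # sine-cosine encoding of the LC classes
    additional_features=['lat', 'lon', 'rh98'],  # extra metadata returned with each sample
)
```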
Fixed
- Statistics: there was a bug in the computation of the percentiles; the statistics have been fixed accordingly.
- Latitude and longitude: while the latitude and longitude in the raw `.h5` files are correct, a bug in the generation of the `.parquet` files for HuggingFace resulted in offset `lat` and `lon` values in the additional features. We've now corrected this.
Removed
- The `load_dataset()` function no longer provides the `normalize_data` option, as it was replaced by the `norm_strat` option, giving users more flexibility in how the input features are processed.
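For users updating existing code, a hedged migration sketch (assuming `normalize_data` was a boolean flag, which is not confirmed here):

```python
from datasets import load_dataset

# Before (v1.0.x), assuming a boolean flag:
# dataset = load_dataset('AGBD.py', trust_remote_code=True, normalize_data=True)

# Now (v1.1.0): choose a normalization strategy explicitly
dataset = load_dataset('AGBD.py', trust_remote_code=True, norm_strat='pct')  # or 'mean_std'
```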