# Changelog
All notable changes to the dataset will be documented in this file.
The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.1.0/),
and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0.html).
### `v1.1.0` - 04.12.2024
Overall, the structure of the input data now looks like this:
```python
import numpy as np
from datasets import load_dataset

dataset = load_dataset('AGBD.py', trust_remote_code=True, streaming=True)
for sample in dataset['train']:
    _in = sample['input']
    break
np.array(_in).shape  # (24, 15, 15)
```
The channel dimension follows the convention below:
```
- 12 x Sentinel-2 bands
- 3 x Sentinel-2 dates (s2_num_days, s2_doy_cos, s2_doy_sin)
- 4 x Latitude/longitude (lat_cos, lat_sin, lon_cos, lon_sin)
- 3 x GEDI dates (gedi_num_days, gedi_doy_cos, gedi_doy_sin)
- 2 x ALOS bands (HH, HV)
- 2 x CH bands (ch, std)
- (N + 1) x LC (lc_encoded, lc_prob), where N = 14 for onehot, N = 5 for cat2vec, and N = 2 for sin/cos
- 3 x Topographic bands (slope, aspect_cos, aspect_sin)
- 1 x DEM (dem)
```
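As a sanity check, the number of input channels implied by this convention can be computed for a given configuration. The helper below is an illustrative sketch, not part of the dataset loader; the feature names and per-group channel counts are taken from the convention above:

```python
# Channel counts per feature group, following the convention above.
# This helper is illustrative only; it is not part of the dataset code.
LC_CHANNELS = {'onehot': 14, 'cat2vec': 5, 'sin_cos': 2}

def num_channels(s2_bands=12, s2_dates=True, lat_lon=True, gedi_dates=True,
                 alos=True, ch=True, lc=True, encode_strat='onehot',
                 topo=True, dem=True):
    n = s2_bands                                     # Sentinel-2 bands
    n += 3 if s2_dates else 0                        # s2_num_days, s2_doy_cos, s2_doy_sin
    n += 4 if lat_lon else 0                         # lat_cos, lat_sin, lon_cos, lon_sin
    n += 3 if gedi_dates else 0                      # gedi_num_days, gedi_doy_cos, gedi_doy_sin
    n += 2 if alos else 0                            # HH, HV
    n += 2 if ch else 0                              # ch, std
    n += LC_CHANNELS[encode_strat] + 1 if lc else 0  # lc_encoded (N channels) + lc_prob
    n += 3 if topo else 0                            # slope, aspect_cos, aspect_sin
    n += 1 if dem else 0                             # dem
    return n

print(num_channels())  # all features enabled, one-hot LC: 45 channels
```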
#### Added
* New features:
    * `slope` : the slope, in percent, at each pixel's location; it is a `float` between $0$ and $1$
    * `aspect_cos`, `aspect_sin` : the cosine- and sine-encoded aspect, i.e. the orientation of the slope, measured clockwise in degrees from $0$ to $360$; they are `float`s between $0$ and $1$
    * `s2_num_days` : the date of acquisition of the Sentinel-2 product, calculated as the number of days since the start of the GEDI mission (April 17th, 2019); it is an `int`
    * `s2_doy_cos`, `s2_doy_sin` : the sine- and cosine-encoded corresponding day of the year (DOY); they are `float`s between $0$ and $1$
    * `gedi_num_days` : the date of acquisition of the GEDI footprint, calculated as the number of days since the start of the GEDI mission (April 17th, 2019); it is a `uint`
    * `gedi_doy_cos`, `gedi_doy_sin` : the sine- and cosine-encoded corresponding day of the year (DOY); they are `float`s between $0$ and $1$
* The `load_dataset()` function now provides the following configuration options:
    * `norm_strat` (default = `'pct'`) : the strategy used to process the input features. Valid options are: `'pct'`, which applies min-max scaling between the $1$st and $99$th percentiles of the data; and `'mean_std'`, which applies mean/variance standardization.
* `encode_strat` (default = `'onehot'`) : the encoding strategy to apply to the land classification (LC) data. Valid options are: `'onehot'`, one-hot encoding; `'sin_cos'`, sine-cosine encoding; `'cat2vec'`, cat2vec transformation based on embeddings pre-computed on the train set.
* `input_features` (dict) : the input features to be included in the data, the default values being:
```
{'S2_bands': ['B01', 'B02', 'B03', 'B04', 'B05', 'B06', 'B07', 'B08', 'B8A', 'B09', 'B11', 'B12'],
 'S2_dates': False, 'lat_lon': True, 'GEDI_dates': False, 'ALOS': True, 'CH': True, 'LC': True,
 'DEM': True, 'topo': False}
```
* `additional_features` (list, default = `[]`) : the metadata to include in the data. Possible values are:
```
['s2_num_days', 'gedi_num_days', 'lat', 'lon', 'agbd_se', 'elev_lowes', 'leaf_off_f', 'pft_class', 'region_cla', 'rh98', 'sensitivity', 'solar_elev', 'urban_prop']
```
This metadata can later be accessed as follows:
```python
from datasets import load_dataset

dataset = load_dataset('AGBD.py', trust_remote_code=True, streaming=True)
for sample in dataset['train']:
    lat = sample['lat']
    break
```
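The date features above (`s2_num_days`/`gedi_num_days` and the sine/cosine-encoded DOY) can be sketched as follows. The exact formulas are an assumption on our part (in particular, rescaling the cosine/sine to $[0, 1]$ to match the documented value range); this is an illustration, not the dataset's own implementation:

```python
import math
from datetime import date

GEDI_START = date(2019, 4, 17)  # start of the GEDI mission, per this changelog

def date_features(d):
    """Illustrative (assumed) computation of the date features."""
    num_days = (d - GEDI_START).days      # e.g. s2_num_days / gedi_num_days
    doy = d.timetuple().tm_yday           # day of the year, 1..366
    angle = 2 * math.pi * doy / 365.25
    doy_cos = (math.cos(angle) + 1) / 2   # rescaled from [-1, 1] to [0, 1]
    doy_sin = (math.sin(angle) + 1) / 2
    return num_days, doy_cos, doy_sin

num_days, doy_cos, doy_sin = date_features(date(2019, 4, 17))
print(num_days)  # 0 on the mission start date
```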
#### Fixed
* Statistics: there was a bug in the computation of the percentiles; the statistics have been fixed accordingly.
* Latitude and longitude: while the latitude and longitude in the raw `.h5` files are correct, a bug in the generation of the `.parquet` files for HuggingFace resulted in offset `lat` and `lon` values in the additional features. We've now corrected this.
#### Removed
* The `load_dataset()` function no longer provides the `normalize_data` configuration option; it was replaced by the `norm_strat` configuration, which gives users more flexibility in the processing of the input features.
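The two `norm_strat` options can be sketched in plain Python as follows. The dataset applies precomputed statistics; here, purely for illustration, the statistics are derived from the sample itself:

```python
import statistics

def pct_scale(values):
    """'pct': min-max scaling between the 1st and 99th percentiles."""
    qs = statistics.quantiles(values, n=100)
    lo, hi = qs[0], qs[-1]  # approximately the 1st and 99th percentiles
    # Clip to [lo, hi], then rescale to [0, 1].
    return [(min(max(v, lo), hi) - lo) / (hi - lo) for v in values]

def mean_std(values):
    """'mean_std': mean/variance standardization."""
    mu = statistics.fmean(values)
    sigma = statistics.stdev(values)
    return [(v - mu) / sigma for v in values]
```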