This dataset is designed for machine learning research involving phenomena influenced by solar activity. Example uses include:

* Studying the evolution of solar active regions.
* Image-to-image translation tasks between different solar channels.

## Dataset Structure

The dataset is organized into 5194 TAR files, with one TAR file per day within the temporal coverage period. Each daily TAR file contains multiple observation sets, one for every 15-minute interval of that day (up to 96 observation sets per day). An observation set consists of 6 NumPy (`.npy`) files, one for each specified channel.
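
For a quick look at this layout, here is a minimal inspection sketch (not part of the official tooling) using Python's standard `tarfile` module; the shard path is a placeholder for any daily TAR file from the dataset.

```python
# Minimal sketch: count and peek at the .npy members of one daily TAR shard.
# "path/to/daily_shard.tar" is a placeholder, not an actual filename.
import tarfile

with tarfile.open("path/to/daily_shard.tar") as tar:
    npy_members = [m.name for m in tar.getmembers() if m.name.endswith(".npy")]
    # Up to 96 observation sets per day x 6 channels = up to 576 .npy files.
    print(f"{len(npy_members)} .npy files in this shard")
    print(npy_members[:6])  # first few member names
```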

Each sample obtained by iterating through the WebDataset corresponds to one solar observation set, with one array per channel (a minimal loading sketch is shown after the field list):

* `aia_0193.npy`: (numpy.ndarray) Image data for AIA 193 Å. Shape: (512, 512). Dtype: float32.
* `aia_0211.npy`: (numpy.ndarray) Image data for AIA 211 Å. Shape: (512, 512). Dtype: float32.
* `aia_1600.npy`: (numpy.ndarray) Image data for AIA 1600 Å. Shape: (512, 512). Dtype: float32.
* `hmi_m.npy`: (numpy.ndarray) Line-of-sight magnetogram data from HMI. Shape: (512, 512). Dtype: float32.
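
To make the field layout concrete, here is a minimal loading sketch (assuming the `webdataset` and `numpy` packages, with a placeholder shard path) that decodes two of the per-channel arrays from the first observation set:

```python
# Minimal sketch: iterate one daily shard with webdataset and decode two channels.
# "path/to/daily_shard.tar" is a placeholder; keys follow the field names above.
import io

import numpy as np
import webdataset as wds

dataset = wds.WebDataset("path/to/daily_shard.tar")
for sample in dataset:
    aia_0193 = np.load(io.BytesIO(sample["aia_0193.npy"]))
    hmi_m = np.load(io.BytesIO(sample["hmi_m.npy"]))
    print(sample["__key__"], aia_0193.shape, aia_0193.dtype)  # expected: (512, 512) float32
    break  # inspect only the first observation set
```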
## Data Generation and Processing
The SDOML-lite dataset was generated using the pipeline detailed in the [sdoml-lite GitHub repository](https://github.com/oxai4science/sdoml-lite). The download and processing scripts were run in July 2024 using distributed computing resources provided by Google Cloud for FDL-X Heliolab 2024, a public-private AI research partnership between NASA, Google Cloud, NVIDIA, and other leading research organizations.
## Data Splits
This dataset is provided as a collection of daily `.tar` files. No predefined training, validation, or test splits are provided. Users are encouraged to define their own splits according to their specific research requirements, such as by date ranges (e.g., specific years or months of the year for training/validation/testing) or by solar events.
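
As an illustration only, the sketch below builds one possible date-based split over a local copy of the daily shards. The directory name and the assumption that each shard filename begins with an ISO date (e.g., `2015-07-01.tar`) are hypothetical; adapt the parsing to the actual file naming.

```python
# Illustrative sketch of a user-defined, date-based split over daily shards.
# Assumes (hypothetically) that shard filenames begin with YYYY-MM-DD.
from pathlib import Path

shards = sorted(Path("sdoml-lite-shards").glob("*.tar"))  # placeholder directory

def shard_year(path: Path) -> int:
    return int(path.stem[:4])  # leading YYYY assumed

train = [s for s in shards if shard_year(s) <= 2020]  # example cut-off years
val = [s for s in shards if shard_year(s) == 2021]
test = [s for s in shards if shard_year(s) >= 2022]
print(len(train), len(val), len(test))
```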
## Data Normalization
The data comes normalized within each image channel such that the pixel values are in the range [0, 1], making it ready for machine learning use out of the box.

The HMI source we use is already normalized to the range [0, 1]. We normalize the AIA data based on statistics of the actual AIA data processed during the generation of the dataset, using a two-phase pipeline: the first phase computes the data statistics and the second phase applies the normalization.
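
As an optional sanity check (not part of the dataset pipeline), the documented float32 dtype and [0, 1] range can be verified on a single sample; the shard path is a placeholder.

```python
# Optional sanity check: per-channel arrays of one sample should be float32
# with values inside the documented [0, 1] range.
import io

import numpy as np
import webdataset as wds

sample = next(iter(wds.WebDataset("path/to/daily_shard.tar")))  # placeholder path
for key, raw in sample.items():
    if key.endswith(".npy"):
        arr = np.load(io.BytesIO(raw))
        print(f"{key}: dtype={arr.dtype}, min={arr.min():.3f}, max={arr.max():.3f}")
```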
## A Note on Data Quality
The main motivation for SDOML-lite is to provide a lightweight dataset that can be consumed as input to machine learning pipelines, e.g., models that predict Sun-dependent quantities in the space weather, thermospheric density, or radiation domains.

We believe the data is of sufficient quality to serve as input for machine learning applications, but note that it is not intended for scientific analyses of the HMI or AIA instruments.
## Acknowledgments
This work is supported by NASA under award #80NSSC24M0122 and is the research product of FDL-X Heliolab, a public/private partnership between NASA, Trillium Technologies Inc. (trillium.tech), and commercial AI partners Google Cloud, NVIDIA, and Pasteur Labs & ISI, developing open science for all Humankind.
## License
[Creative Commons Attribution 4.0 International (CC BY 4.0)](https://choosealicense.com/licenses/cc-by-4.0/)