Update README.md
README.md CHANGED
@@ -196,7 +196,7 @@ grid.
## Dataset Structure

<!-- This section provides a description of the dataset fields, and additional information about the dataset structure, such as criteria used to create the splits, relationships between data points, etc. -->

**raw:** unprocessed data dump from NASA Power API in the JSON format.

@@ -206,28 +206,28 @@ grid.
so that the **sequence length is 365**. Each sample is a tuple of the following data:
* weather measurements (shape `sequence_length x 31`)
* coordinates (shape `1 x 2`)
* index (`1 x 2`). The first number is the temporal index of the current row since January 1, 1984. The second number is the temporal granularity, or the spacing between indices, which is 1 for daily data, 7 for weekly data, and 30 for monthly data. Note: this means the daily data contains 1 year of data in each row, weekly data contains about 7.02 years of data in each row (`7.02 * 52 ≈ 365`), and monthly data contains about 30.4 years of data in each row (`30.4 * 12 ≈ 365`); see the decoding sketch after this list.
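To make the index concrete, here is a minimal sketch of decoding it into calendar dates. It assumes the temporal index is a day offset from January 1, 1984 and that consecutive entries in a row are `granularity` days apart; `decode_index` is a hypothetical helper written for this card, not part of the dataset.

```python
from datetime import date, timedelta

def decode_index(index, sequence_length=365):
    """Map a sample's (1 x 2) index to the dates its row covers.

    Assumption: the first entry is a day offset from January 1, 1984, and
    the second is the spacing in days between consecutive entries in the row.
    """
    temporal_index, granularity = int(index[0][0]), int(index[0][1])
    start = date(1984, 1, 1) + timedelta(days=temporal_index)
    return [start + timedelta(days=i * granularity) for i in range(sequence_length)]

# e.g. a weekly-granularity row starting at the epoch:
decode_index([[0, 7]])[:3]  # first three dates: Jan 1, Jan 8, Jan 15, 1984
```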

## Dataset Creation

### Source Data

<!-- This section describes the source data (e.g., news text and headlines, social media posts, translated sentences, ...). -->
NASA Power API daily weather measurements. The data comes from multiple sources, but mostly satellite data.

#### Data Processing

<!-- This section describes the data collection and processing process, such as data selection criteria, filtering and normalization methods, tools and libraries used, etc. -->

The `raw` data is in the JSON format and unprocessed. The `csvs` and the `pytorch` data are processed in the following manner (a pandas sketch of these steps follows the list):

- Missing values were backfilled.
- The leap-year extra day was omitted, so each year of the daily dataset has 365 days. Similarly, each year of the weekly dataset has 52 weeks, and each year of the monthly dataset has 12 months.
- Data was pivoted, so each measurement has x columns, where x is 365, 52, or 12 depending on the granularity.
- `pytorch` data was standardized using the mean and standard deviation of the weather over the continental United States.
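The snippet below sketches what these steps might look like in pandas. It is illustrative only: it assumes a long-format frame with a datetime `date` column plus `measurement` and `value` columns, and precomputed continental-US `mean`/`std`; none of these names come from the actual processing code.

```python
import pandas as pd

def process(df: pd.DataFrame, mean: float, std: float) -> pd.DataFrame:
    """Sketch of the documented pipeline: backfill, drop Feb 29, pivot, standardize."""
    df = df.sort_values("date").copy()
    # Backfill missing values.
    df["value"] = df["value"].bfill()
    # Omit the leap-year extra day so every year has exactly 365 days.
    df = df[~((df["date"].dt.month == 2) & (df["date"].dt.day == 29))].copy()
    # Pivot: one row per (measurement, year), one column per time step.
    df["year"] = df["date"].dt.year
    df["step"] = df.groupby(["measurement", "year"]).cumcount()
    wide = df.pivot_table(index=["measurement", "year"], columns="step", values="value")
    # Standardize with the continental-US statistics.
    return (wide - mean) / std
```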
## Citation