---
configs:
- config_name: default
  data_files:
  - split: train
    path: "*.parquet"
---

# Gemstones Training Dataset - Worker sharded version

**Disclaimer:** this is an approximation of the dataset used to train the Gemstones model suite.
Due to the randomized and sharded nature of the distributed training code, the only way to perfectly
reproduce the training batches across the GPUs is/was to run the training code. This dataset is the
result of an attempt to simulate the way in which the training code loaded the data and to stream it
out to a portable file format for use in downstream analyses of the model suite.

# Sharding format: worker parallel

This version of the dataset is sharded by worker: each parquet file approximates the stream of rows
read by a single data-loading worker during training.

# Loading

This data should be loadable using `load_dataset` in the standard manner to auto-download the data.
Alternatively, the dataset can be cloned using git to materialize the files locally, and then loaded
using the default `parquet` builder as described here: https://huggingface.co/datasets/tomg-group-umd/gemstones_data_order_parallel
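
For concreteness, a minimal sketch of both loading paths. The repo id
`tomg-group-umd/gemstones_data_worker_parallel` below is an assumption inferred from this dataset's
title and the sibling `order_parallel` link, and the local glob depends on where the clone is placed:

```python
from datasets import load_dataset

# Path 1: auto-download from the Hub.
# NOTE: the repo id is assumed from the dataset title; substitute the actual id if it differs.
ds = load_dataset("tomg-group-umd/gemstones_data_worker_parallel", split="train")
print(ds)

# Path 2: clone the repo first (requires git-lfs), e.g.
#   git clone https://huggingface.co/datasets/tomg-group-umd/gemstones_data_worker_parallel
# then point the default `parquet` builder at the materialized files.
ds_local = load_dataset(
    "parquet",
    data_files="gemstones_data_worker_parallel/*.parquet",
    split="train",
)
print(ds_local[0])
```

For large downloads, passing `streaming=True` to `load_dataset` lets you iterate over rows without
materializing the full dataset on disk first.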