jwkirchenbauer committed · verified
Commit 2457c22 · 1 Parent(s): 88f9c11

Upload README.md with huggingface_hub

Files changed (1)
  1. README.md +37 -5
README.md CHANGED
@@ -4,22 +4,54 @@ configs:
  data_files:
  - split: train
    path: "*.parquet"
+ license: odc-by
  ---
  Gemstones Training Dataset - Worker sharded version

- **Disclaimer:** this is an approximation of the dataset used to train the Gemstones model suite.
+ This data is a reprocessed version of the first 1B rows of the Dolma v1.7 dataset (https://huggingface.co/datasets/allenai/dolma).
+
+ **Disclaimer:** this is an approximation of the dataset used to train the Gemstones model suite.
  Due to the randomized and sharded nature of the distributed training code, the only way to perfectly
- reproduce the training bactches across the gpus is/was the run the training code.
- This is the result of an attempt to simulate the way in which the training code loaded the data and
+ reproduce the training batches across the gpus is/was to run the training code.
+ This repo is the result of an attempt to simulate the way in which the training code loaded the data and
  stream it out to a portable file format for use in downstream analyses of the model suite.

  # Sharding format: worker parallel

- This version of th
+ This version of the dataset approximates the specific subsets of the data that each of the distributed
+ workers (GPUs) would have individually loaded and passed through the local copy of the model during
+ data-parallel training. Since the Gemstones suite of models was trained on a variety of topologies
+ (the 50M models were trained on 8 nodes while the 2B models used 64 nodes), the distributed reading
+ format was chosen such that different topologies would read the data in similar orders.
+
+ Specifically, a round-robin reading order ensured that while each worker in an 8 node configuration would be responsible for more
+ data than an individual worker in a larger 64 node configuration, the first files read by the smaller
+ configuration would be the same as the first files read by the workers in the larger configuration.
+ E.g. if workers `1` and `2` in a 2 worker job got files `[A,B]` and `[C,D]`, then workers `1`, `2`, `3`, and `4` in a larger 4 worker job would receive files `[A]`, `[B]`, `[C]`, and `[D]` respectively. This way, all models were periodically guaranteed to
+ have seen all of the same rows of the dataset during training. The sync granularity is determined by the largest configuration: 64 nodes = 512 gpus, each loading 4 raw files at a time, each file containing 2048*2049 ≈ 4M tokens, which means synchronization every 512*4*2048*2049 ≈ 8.6B tokens.
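
As a quick sanity check on that arithmetic (an illustrative calculation using only the counts quoted above, not code from the training repo):

```python
# Sync granularity of the largest configuration: 64 nodes = 512 GPUs,
# each loading 4 raw files at a time, each file holding 2048*2049 tokens.
gpus = 64 * 8
files_per_load = 4
tokens_per_file = 2048 * 2049            # ~4.2M tokens per packed file

sync_tokens = gpus * files_per_load * tokens_per_file
print(f"{sync_tokens:,}")                # 8,594,128,896, i.e. ~8.6B tokens
```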
+
+ This recreation assumes the ~1B Gemstones model sizes which were trained on 32 nodes * 8 gpus per node = 256 worker shards
+ at a microbatch size of 8 over packed sequences of 2048 tokens.
+ They were trained for 82998 steps at a batch size of ~4M tokens to reach ~350B tokens.
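
Those figures are mutually consistent, as a similar back-of-the-envelope check shows:

```python
workers = 32 * 8                          # 256 worker shards (GPUs)
microbatch = 8                            # sequences per worker per step
seq_len = 2048                            # tokens per packed sequence

tokens_per_step = workers * microbatch * seq_len      # 4,194,304, i.e. ~4M tokens
total_tokens = 82_998 * tokens_per_step               # ~348B, i.e. ~350B tokens
print(f"{tokens_per_step:,} tokens/step, {total_tokens:,} tokens total")
```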
+
+ At runtime, the 256 workers each received a slice of the total dataset represented by a subset of
+ the thousands of raw training format files (for reference, this format is defined by the `packed_cycle_dataset.py` file in this repo).
+ The raw files were first shuffled globally, and then each worker's slice was defined by this round-robin
+ strided indexing of the shuffled filelist: `filenames[shard_id:max_num_files:num_shards]`. Then, each worker
+ loaded 4 files at a time, and shuffled the "blocks" of 2048 tokens each in a temporary buffer so
+ that the contents of the 4 packed files were not read in the exact order in which the tokens appeared in them.
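
A minimal sketch of that slicing and buffering scheme (illustrative only; the real implementation is `packed_cycle_dataset.py`, and `load_blocks` is a placeholder for whatever reads the 2048-token blocks out of a packed file):

```python
import random

def worker_file_slice(filenames, shard_id, num_shards, max_num_files=None):
    """Round-robin (strided) slice of the globally shuffled file list for one worker."""
    stop = len(filenames) if max_num_files is None else max_num_files
    return filenames[shard_id:stop:num_shards]

def iter_shuffled_blocks(worker_files, load_blocks, files_at_a_time=4, seed=0):
    """Load 4 packed files at a time and shuffle their 2048-token blocks in a buffer."""
    rng = random.Random(seed)
    for start in range(0, len(worker_files), files_at_a_time):
        buffer = []
        for path in worker_files[start:start + files_at_a_time]:
            buffer.extend(load_blocks(path))   # placeholder reader for one packed file
        rng.shuffle(buffer)                    # blocks leave the buffer out of file order
        yield from buffer
```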
+
+ The `train_mock_data_order_file.py` script materializes the shuffled file list, uses a pool of cpu workers
+ to mimic a distributed set of gpus, and passes their process ids into the dataset implementation
+ so that each worker in the pool receives its subset of the data and loads it as it would have during training.
+ Then, the dataset rows are wrapped in dataloaders and read in microbatches before being written out
+ to the parquet file format.
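
In outline, that export pipeline has roughly this shape (a toy sketch rather than the repo's script; `mock_microbatches`, the column names, and the pool size are all illustrative):

```python
from multiprocessing import Pool

import pandas as pd

NUM_SHARDS = 256  # one parquet shard per simulated GPU

def mock_microbatches(shard_id):
    """Stand-in for the real dataset + dataloader; yields a few toy microbatches."""
    for step in range(2):                                  # the real run covers every training step
        yield [[shard_id, step, i] for i in range(8)]      # microbatch of 8 "sequences"

def export_shard(shard_id):
    rows = []
    for step, microbatch in enumerate(mock_microbatches(shard_id)):
        for sequence in microbatch:
            rows.append({"shard": shard_id, "step": step, "input_ids": sequence})
    out = f"worker_{shard_id}-of-{NUM_SHARDS}_ordered_dataset.parquet"
    pd.DataFrame(rows).to_parquet(out)

if __name__ == "__main__":
    with Pool(processes=8) as pool:      # CPU pool standing in for the distributed GPUs
        pool.map(export_shard, range(NUM_SHARDS))
```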

+ Each shard, named like `worker_{worker_rank}-of-{total_num_workers}_ordered_dataset.parquet`, represents the ordered microbatches that one of the 256 gpus would
+ have drawn and passed through its copy of the model during training.

  # Loading

  This data should be loadable using `load_dataset` in the standard manner to auto-download the data.
  Alternately, the dataset can be cloned using git to materialize the files locally, and then loaded
- using the default `parquet` builder as described here: https://huggingface.co/datasets/tomg-group-umd/gemstones_data_order_parallel
+ using the default `parquet` builder as described here: https://huggingface.co/docs/datasets/en/loading#parquet
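
For the local route, a minimal example with the default `parquet` builder (run from inside the cloned repo; the single-shard filename is shown only to illustrate the naming scheme above, not as a verified path):

```python
from datasets import load_dataset

# Load every worker shard in the clone as one train split.
ds = load_dataset("parquet", data_files="*.parquet", split="train")

# Or load a single worker's shard by name (filename illustrative).
ds_worker0 = load_dataset(
    "parquet",
    data_files="worker_0-of-256_ordered_dataset.parquet",
    split="train",
)
print(ds)
```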