Upload README.md with huggingface_hub

README.md

… reproduce the training batches across the gpus is/was to run the training code.
This repo is the result of an attempt to simulate the way in which the training code loaded the data, and to
stream it out to a portable file format for use in downstream analyses of the model suite.

# Loading

This data should be loadable using `load_dataset` in the standard manner to auto-download the data.
Alternatively, the dataset can be cloned using git to materialize the files locally, and then loaded
using the default `parquet` builder as described here: https://huggingface.co/docs/datasets/en/loading#parquet
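As a rough sketch of both routes (the repo id below is a placeholder for this dataset's actual id on the Hub, and the `train` split name is assumed):

```python
from datasets import load_dataset

# Route 1: auto-download via the Hub ("user/dataset-name" is a placeholder id).
ds = load_dataset("user/dataset-name", split="train")

# Route 2: after `git clone`-ing the repo locally, point the default parquet
# builder at the materialized shard files (assumed layout: shards in the repo root).
ds_local = load_dataset(
    "parquet",
    data_files="path/to/clone/*_ordered_dataset.parquet",
    split="train",
)
```

Either route should yield the same rows; only the download mechanism differs.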
# Sharding format: worker parallel

This version of the dataset approximates the specific subsets of the data that each of the distributed
[…]
This recreation assumes the ~1B Gemstones model sizes, which were trained on 32 nodes
at a microbatch size of 8 over packed sequences of 2048 tokens.
They were trained for 82998 steps at a batch size of ~4M tokens to reach ~350B tokens.

The 256 workers each received a slice of the total dataset represented by a subset of
the thousands of raw training format files (for reference, this format is defined by the `packed_cycle_dataset.py` file in this repo).
The raw files were first shuffled globally, and then each worker's slice was defined by this round-robin
strided indexing of the shuffled filelist: `filenames[shard_id:max_num_files:num_shards]`. Then, each worker
loaded 4 files at a time, and shuffled the "blocks" of 2048 tokens each in a temporary buffer so
that the contents of the 4 packed files were not read in the exact order in which the tokens appeared in them.
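A minimal sketch of that slicing and buffered shuffle, assuming a hypothetical `read_blocks` helper that yields the 2048-token blocks from one raw file and an illustrative RNG seed (the authoritative implementation is `packed_cycle_dataset.py`):

```python
import random

def worker_files(filenames, shard_id, num_shards, max_num_files):
    # Round-robin strided slice of the globally shuffled filelist for one worker,
    # i.e. exactly the indexing expression quoted above.
    return filenames[shard_id:max_num_files:num_shards]

def iter_worker_blocks(files, read_blocks, files_per_buffer=4, seed=0):
    # Fill a buffer with the blocks from 4 files at a time, shuffle the buffer,
    # then emit, so blocks are not read in their on-disk order.
    rng = random.Random(seed)
    for i in range(0, len(files), files_per_buffer):
        buffer = [block for f in files[i:i + files_per_buffer] for block in read_blocks(f)]
        rng.shuffle(buffer)
        yield from buffer
```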
The `train_mock_data_order_file.py` script uses a pool of cpu workers
to mimic a distributed set of gpus, and passes their process ids into the dataset implementation
so that each worker in the pool receives its subset of the data and loads it as it would have during training.
Then, the subsets of data are wrapped in dataloaders and read in microbatches before being written out
to the parquet file format.
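Conceptually, that export step might look like the sketch below; the `DataLoader` settings, the `input_ids` sample key, and the parquet schema are assumptions for illustration rather than the exact logic of `train_mock_data_order_file.py`:

```python
import pyarrow as pa
import pyarrow.parquet as pq
from torch.utils.data import DataLoader

def export_worker_shard(worker_dataset, worker_rank, total_workers, microbatch_size=8):
    # Iterate the worker's data in training order, one microbatch of 8 sequences at a time.
    loader = DataLoader(worker_dataset, batch_size=microbatch_size, shuffle=False)
    rows = []
    for step, batch in enumerate(loader):
        # Assumed schema: one row per microbatch, storing the step index and token ids.
        rows.append({"microbatch_step": step, "input_ids": batch["input_ids"].tolist()})
    pq.write_table(
        pa.Table.from_pylist(rows),
        f"worker_{worker_rank}-of-{total_workers}_ordered_dataset.parquet",
    )
```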
Each shard named like `worker_{worker_rank}-of-{total_num_workers}_ordered_dataset.parquet` represents the ordered microbatches that one of the 256 gpus would
have drawn and passed through its copy of the model during training.
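To replay what a single gpu saw, one shard can be loaded on its own; the repo id is again a placeholder, and the exact formatting of the rank in the real filenames may differ:

```python
from datasets import load_dataset

# Pull only rank 0's shard; the rows are in the order its microbatches were drawn.
rank0 = load_dataset(
    "user/dataset-name",  # placeholder repo id
    data_files="worker_0-of-256_ordered_dataset.parquet",
    split="train",
)
print(rank0)
```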