jwkirchenbauer committed: Upload README.md with huggingface_hub
Commit 91c518f (verified) · 1 parent: b3789d2

Files changed (1): README.md (+1, -1)

README.md CHANGED
@@ -30,7 +30,7 @@ Specifically, a round-robin reading order ensured that while an 8 node set of wo
  data than individual workers in a larger 64 node configuration, the first files read by the smaller
  configuration would be the same as the first files read by the workers in the larger configuration.
  Eg. if workers `1` and `2` in a 2 worker job got files `[A,B]` and `[C,D]`, then workers `1`, `2`, `3`, and `4` in a larger 4 worker job would receive files `[A]`, `[B]`, `[C]`, `[D]` respectively. This way, periodically, all models would be guaranteed to
- have seen all of the same rows of the dataset during training. The sync granularity is determined by the largest configuration, so 64 nodes = 512 gpus, loading 4 raw files at a time each containing 2048*2049=~4M tokens, means synchronization every 512*4*2048*2049 = ~8.6B tokens.
+ have seen all of the same rows of the dataset during training. The sync granularity is determined by the largest configuration, so 64 nodes = 512 gpus, loading 4 raw files at a time each containing 2048 x 2049 = ~4M tokens, means synchronization every 512 x 4 x 2048 x 2049 = ~8.6B tokens.
 
  This recreation assumes the ~1B Gemstones model sizes which were trained on 32 nodes * 8 gpus per node = 256 worker shards
  at a microbatch size of 8 over packed sequences of 2048 tokens.
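For readers recreating the ordering, the sketch below illustrates the scheme described in the diffed paragraph: each worker slot of the largest configuration reads its own round-robin slice of the file list, a worker in a smaller job interleaves the slices of the slots it covers, and the sync-granularity figure in the changed line follows from the largest configuration's parameters. This is a minimal illustration under those assumptions, not the actual Gemstones dataloader; the function name and structure are hypothetical.

```python
# Minimal sketch of the round-robin file assignment described above
# (illustrative only; `worker_file_stream` is a hypothetical helper,
# not code from the Gemstones training repo).

def worker_file_stream(files, worker_rank, num_workers, max_workers):
    """Files read, in order, by `worker_rank` in a `num_workers`-worker job.

    Each worker slot `s` of the largest configuration reads files[s::max_workers].
    A worker in a smaller job covers max_workers // num_workers consecutive slots
    and interleaves their streams, so the first files it reads are exactly the
    first files those slots would read at the largest scale.
    """
    assert max_workers % num_workers == 0
    slots_per_worker = max_workers // num_workers
    first_slot = worker_rank * slots_per_worker
    slot_streams = [files[s::max_workers]
                    for s in range(first_slot, first_slot + slots_per_worker)]
    # Take one file from each covered slot per round (round-robin interleave).
    return [f for round_files in zip(*slot_streams) for f in round_files]

# The README's toy example: 4 files, largest configuration = 4 workers.
files = ["A", "B", "C", "D"]
print([worker_file_stream(files, r, 2, 4) for r in range(2)])  # [['A', 'B'], ['C', 'D']]
print([worker_file_stream(files, r, 4, 4) for r in range(4)])  # [['A'], ['B'], ['C'], ['D']]

# Sync granularity from the changed line: 64 nodes x 8 GPUs = 512 worker slots,
# each loading 4 raw files of 2048 x 2049 tokens per round.
print(64 * 8 * 4 * 2048 * 2049)  # 8594128896, i.e. ~8.6B tokens
```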