Update README.md

README.md

The dataset has 3 columns:
- `batch_idx`: the index of the batch to which a sequence belongs (not present in the original dataset).
- `token_ids`: the tokenised texts, each of length 2049 tokens.

The dataset is split into 143 chunks (parquet files), and each chunk includes 1,024,000 sequences (rows), corresponding to 1000 batches of 1024 sequences each.
This means that each chunk corresponds to the data seen between one checkpoint and the next.
Specifically, the Pythia model checkpoints are available* at initialisation (step 0) and every 1000 steps thereafter (steps 1000, 2000, etc.) up to the last checkpoint (step 143000).
We reflect this structure in the filenames: `train-001000.parquet`, `train-002000.parquet`, ..., `train-143000.parquet`.
Let's clarify the mapping between chunks and checkpoints with an example.

**Example**: Consider the file `train-001000.parquet`. It contains the sequences with `batch_idx` in [0, 999]. These sequences were "seen" by checkpoint 1000.
Batches with `batch_idx` >= 1000 are only seen by later checkpoints.

*NOTE: additional log-spaced checkpoints are available for the initial part of training (i.e., steps 1, 2, 4, 8, ..., 512). I did not create a separate file for these, but you can easily subset the first chunk (i.e., `data/train-001000.parquet`).
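
A hedged sketch of that subsetting (assuming, as in the example above, that checkpoint `step` has seen exactly the batches with `batch_idx < step`):

```python
import pandas as pd

df = pd.read_parquet("data/train-001000.parquet")

# Data seen by the log-spaced checkpoint at step 512: batches 0..511.
seen_by_512 = df[df["batch_idx"] < 512]
assert len(seen_by_512) == 512 * 1024
```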
## License