size_categories:
- 100B<n<1T
---

This dataset contains the fully prepared data used to train the Pythia (deduplicated) models: it has been tokenized and pre-shuffled. You can find these models under the EleutherAI organization, and they are also listed in my [Memorization Profiles collection](https://huggingface.co/collections/pietrolesci/memorisation-profiles-6619604c4594c878cd9d451f).

This data is the same as the one found in [EleutherAI/pile-deduped-pythia-preshuffled](https://huggingface.co/datasets/EleutherAI/pile-deduped-pythia-preshuffled), but it is presented in a more manageable format: instead of the Megatron format used by the GPT-NeoX library, I have stored the data as parquet files.
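
Since the data lives in parquet files, it can be read with standard tooling. As a minimal sketch, assuming the 🤗 Datasets library and a hypothetical repo id (substitute this dataset's actual id), the shards can be streamed without downloading everything at once:

```python
from datasets import load_dataset

# NOTE: "pietrolesci/pile-deduped-pythia-preshuffled" is a placeholder repo id
# used for illustration; replace it with this dataset's actual id.
# streaming=True iterates over the parquet shards without a full download,
# which matters at this scale (100B < n < 1T tokens).
ds = load_dataset(
    "pietrolesci/pile-deduped-pythia-preshuffled",
    split="train",
    streaming=True,
)

# Inspect one record: each row holds a pre-tokenized, pre-shuffled sequence.
first = next(iter(ds))
print(first.keys())
```

Because parquet is column-oriented and self-describing, the same files can also be opened directly with pandas or pyarrow, with no GPT-NeoX / Megatron tooling required.
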
## Format