Update README.md
README.md CHANGED
@@ -1542,7 +1542,7 @@ The dataset is also available in the following versions:
 - **v1.1** / [**main**](https://huggingface.co/datasets/OpenLLM-France/Lucie-Training-Dataset/tree/main/data) (default):
 The data used for the first (main) pretraining phase of [Lucie-7B](https://huggingface.co/OpenLLM-France/Lucie-7B), which contains approximately 2.3T tokens. The statistics above apply to this version.
 - [**v1.2**](https://huggingface.co/datasets/OpenLLM-France/Lucie-Training-Dataset/tree/v1.2/data): An improved version of the main dataset, where
-  - GallicaMonographies and GallicaPress have been
+  - GallicaMonographies and GallicaPress have been filtered aggressively to remove documents with low OCR quality. After filtering, GallicaMonographies contains around 220,000 documents and 20.131 billion tokens. For GallicaPress, we first selected a subset of the original corpus that contained only html documents (as opposed to documents in .txt format). This subset contained 1,747,600 documents and 74 billion tokens. After filtering, this subset contains roughly 989,100 documents and 45.7 billion tokens.
   - The `Ubuntu_IRC` and `PhilPapers` subsets of the Pile have been refined by fixing encoding issues and removing documents in languages other than English, French, Spanish, German and Italian. After filtering, Ubuntu_IRC contains about 9,000 documents and 1.745 billion tokens. PhilPapers contains around 28,000 documents and 502 million tokens.
 - [**v1.2-recent-web**](https://huggingface.co/datasets/OpenLLM-France/Lucie-Training-Dataset/tree/v1.2-recent-web/data): The data used for the second pretraining phase (context extension) of [Lucie-7B](https://huggingface.co/OpenLLM-France/Lucie-7B#2-context-extension).
 This version is identical to `v1.2` with the exception that older snapshots of web data (before 2023 for RedPajama and before 2024 for FineWebEdu) have been excluded.
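For reference, a minimal sketch of loading one of these dataset revisions with the Hugging Face `datasets` library. The `revision` argument selects the branch named in the links above (`main` for v1.1, `v1.2`, or `v1.2-recent-web`); the config name `"default"` and the `"train"` split are assumptions and should be checked against the dataset card.

```python
from datasets import load_dataset

# Stream the v1.2 revision; the revision name matches the branch linked above
# ("main" for v1.1, "v1.2", or "v1.2-recent-web").
# NOTE: the config name "default" and split "train" are assumptions, not taken from this diff.
dataset = load_dataset(
    "OpenLLM-France/Lucie-Training-Dataset",
    "default",
    revision="v1.2",
    split="train",
    streaming=True,  # avoid downloading the full multi-terabyte corpus
)

print(next(iter(dataset)))  # inspect the first document of the selected revision
```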