Update README.md
README.md
CHANGED
@@ -1542,8 +1542,8 @@ The dataset is also available in the following versions:
  - **v1.1** / [**main**](https://huggingface.co/datasets/OpenLLM-France/Lucie-Training-Dataset/tree/main/data) (default):
    The data used for the first (main) pretraining phase of [Lucie-7B](https://huggingface.co/OpenLLM-France/Lucie-7B), which contains approximately 2.3T tokens. The statistics above apply to this version.
  - [**v1.2**](https://huggingface.co/datasets/OpenLLM-France/Lucie-Training-Dataset/tree/v1.2/data): An improved version of the main dataset, where
-     - GallicaMonographies and GallicaPress have been filtered aggressively to remove documents with low OCR quality.
-     - The `Ubuntu_IRC` and `PhilPapers` subsets of Pile have been refined by fixing encoding issues and removing documents in languages other than English, French, Spanish, German and Italian.
+     - GallicaMonographies and GallicaPress have been filtered aggressively to remove documents with low OCR quality. After filtering, GallicaMonographies contains 220,000 documents and 20.131 billion tokens. For GallicaPress, we first selected a subset of the original corpus that contained only HTML documents (as opposed to documents in .txt format). This subset contained 1,747,600 documents and 74 billion tokens. After filtering, this subset contains 989,100 documents and 45.7 billion tokens.
+     - The `Ubuntu_IRC` and `PhilPapers` subsets of the Pile have been refined by fixing encoding issues and removing documents in languages other than English, French, Spanish, German and Italian. After filtering, Ubuntu_IRC contains 9,000 documents and 1.745 billion tokens; PhilPapers contains 28,000 documents and 502 million tokens.
  - [**v1.2-recent-web**](https://huggingface.co/datasets/OpenLLM-France/Lucie-Training-Dataset/tree/v1.2-recent-web/data): The data used for the second pretraining phase (context extension) of [Lucie-7B](https://huggingface.co/OpenLLM-France/Lucie-7B#2-context-extension).
    This version is identical to `v1.2` with the exception that older snapshots of web data (before 2023 for RedPajama and before 2024 for FineWebEdu) have been excluded.
  All data from `v1.1` that were not filtered out remain unchanged in `v1.2` and `v1.2-recent-web`.
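Each of these versions is exposed as a git revision of the dataset repository, so a specific one can be selected from Python through the `revision` argument of `datasets.load_dataset`. A minimal sketch, with the split name and streaming access as illustrative assumptions (see the Example use in Python section of the card for the exact configuration names):

```python
from datasets import load_dataset

# Pick the dataset version by git revision: "main" (v1.1, default),
# "v1.2", or "v1.2-recent-web". Streaming avoids downloading the
# full multi-terabyte corpus up front.
dataset = load_dataset(
    "OpenLLM-France/Lucie-Training-Dataset",
    revision="v1.2",
    split="train",      # assumed split name, for illustration only
    streaming=True,
)

# Inspect a few samples from the selected revision.
for sample in dataset.take(3):
    print(sample)
```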
@@ -1617,7 +1617,7 @@ The <a href="#example-use-in-python">Example use in Python</a> section contains
  * <u>Pre-processing</u>:
    * <u>Filtering</u>:
      To filter out documents with excessive OCR errors, the dataset was refined by discarding texts with a perplexity higher than 1500,
-     measured using a CCNET model.
+     measured using a CCNET model on the target language (see [code details](https://github.com/OpenLLM-France/Lucie-Training/blob/7f1f7efa1288f709662a9067bf2c3db856b850f8/tokenization/data.py#L1590)).
      The code to compute CCNET perplexity, parallelizing on parquet files, is [available here](https://github.com/OpenLLM-France/Lucie-dataset-filtering).
    * <u>Text cleaning</u>:
      Mentions of Credit Institutions Directives (CID) that appear in the raw texts, such as `(cid:146)`, were removed.
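The filtering criterion above reduces to thresholding document perplexity under a language model. Below is a minimal sketch of that idea, assuming a CCNet-style KenLM model file and the `kenlm` Python bindings; the production pipeline in the linked Lucie-dataset-filtering repository parallelizes over parquet files and applies its own text normalization, so treat this only as an illustration of the threshold.

```python
import kenlm  # Python bindings for KenLM (pip install kenlm)

# Assumed: a CCNet-style KenLM n-gram model for the target language,
# downloaded separately (the path below is a placeholder).
MODEL_PATH = "fr.arpa.bin"
PERPLEXITY_THRESHOLD = 1500.0

model = kenlm.Model(MODEL_PATH)

def perplexity(text: str) -> float:
    """Document perplexity: 10 ** (-log10 P(text) / number of scored tokens)."""
    words = text.split()
    if not words:
        return float("inf")
    log10_prob = model.score(text)  # total log10 probability, including </s>
    return 10.0 ** (-log10_prob / (len(words) + 1))  # +1 for the end-of-sentence token

def keep_document(text: str) -> bool:
    # Discard documents whose perplexity exceeds the 1500 threshold.
    return perplexity(text) <= PERPLEXITY_THRESHOLD
```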
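The `(cid:146)`-style strings mentioned in the text-cleaning step follow a fixed pattern, so the removal can be illustrated with a one-line regular expression. This is a hedged sketch of the idea, not the exact rule used in the Lucie-Training preprocessing code.

```python
import re

# Matches artifacts of the form "(cid:146)", "(cid:0)", etc.
CID_PATTERN = re.compile(r"\(cid:\d+\)")

def remove_cid_mentions(text: str) -> str:
    return CID_PATTERN.sub("", text)

print(remove_cid_mentions("l(cid:146)article 5"))  # -> "larticle 5"
```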