Update README.md
README.md CHANGED
@@ -112,13 +112,13 @@ license: odc-by
 
 ## Dataset summary
 
-FineWeb2-HQ is a **high-quality, model-filtered pretraining dataset** derived
+FineWeb2-HQ is a **high-quality, model-filtered pretraining dataset** derived as a subset of [**FineWeb2**](https://huggingface.co/datasets/HuggingFaceFW/fineweb-2), spanning **20 languages**. It enables around 6x faster pretraining compared to the base dataset. FineWeb2-HQ was created by selecting the **top 10% quality documents of FineWeb2** in each language, based on scores assigned by a deep learning classifier trained to identify **structured and knowledge-rich samples** using [**XLM-RoBERTa**](https://huggingface.co/FacebookAI/xlm-roberta-base) **embeddings**.
 
 <center>
 <img src="https://huggingface.co/datasets/epfml/FineWeb2-HQ/raw/main/agg_score_plot.svg" style="width: 70%;" />
 </center>
 
-Validation was performed by pretraining **1B-parameter LLM models** (llama-like architecture) across multiple languages and writing systems (scripts). Evaluations on **CMMLU (Chinese) and MMLU (German & French)** demonstrate that **FineWeb2-HQ matches FineWeb2 performance
+Validation was performed by pretraining **1B-parameter LLM models** (llama-like architecture) across multiple languages and writing systems (scripts). Evaluations on **CMMLU (Chinese) and MMLU (German & French)** demonstrate that **FineWeb2-HQ matches FineWeb2 performance when trained with 6x fewer tokens, and outperforms it when fully trained**. Additionally, **improvements were observed across other benchmarks**, such as outperforming its English cousins [DCLM](https://huggingface.co/datasets/mlfoundations/dclm-baseline-1.0-parquet) and [FineWeb-Edu](https://huggingface.co/datasets/HuggingFaceFW/fineweb-edu).
 
 For more details, see our paper [Enhancing Multilingual LLM Pretraining with Model-Based Data Selection](https://arxiv.org/abs/2502.10361).
 
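The selection recipe in the updated summary (score every document with an XLM-RoBERTa-based quality classifier, then keep the top 10% per language) can be sketched roughly as below. This is an illustrative reconstruction under stated assumptions, not the authors' released pipeline: the mean pooling and the `quality_head` scoring callable are hypothetical stand-ins for the classifier the paper trains on XLM-RoBERTa embeddings.

```python
# Illustrative sketch of model-based selection in the FineWeb2-HQ spirit:
# embed documents with XLM-RoBERTa, score them with a quality classifier,
# and keep the top decile. `quality_head` is a hypothetical stand-in for
# the trained classifier described in the paper.
import numpy as np
import torch
from transformers import AutoModel, AutoTokenizer

tok = AutoTokenizer.from_pretrained("FacebookAI/xlm-roberta-base")
enc = AutoModel.from_pretrained("FacebookAI/xlm-roberta-base")
enc.eval()

@torch.no_grad()
def embed(texts: list[str]) -> np.ndarray:
    """Mean-pooled XLM-RoBERTa embeddings (pooling choice is an assumption)."""
    batch = tok(texts, padding=True, truncation=True,
                max_length=512, return_tensors="pt")
    hidden = enc(**batch).last_hidden_state           # (B, T, 768)
    mask = batch["attention_mask"].unsqueeze(-1)      # (B, T, 1)
    return ((hidden * mask).sum(1) / mask.sum(1)).numpy()

def select_top_decile(texts: list[str], quality_head) -> list[str]:
    """Keep the top 10% of documents by classifier score for one language."""
    scores = quality_head(embed(texts))               # hypothetical scorer
    cutoff = np.quantile(scores, 0.90)                # 90th-percentile cut
    return [t for t, s in zip(texts, scores) if s >= cutoff]
```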
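For readers who want to try the dataset itself, a minimal loading sketch with the `datasets` library follows. The configuration name `deu_Latn` mirrors FineWeb2's per-language naming and is assumed here, as is the `text` column; check the dataset card for the exact configuration names and schema.

```python
# Minimal loading sketch with the Hugging Face `datasets` library.
# The "deu_Latn" config name and "text" column are assumptions based on
# FineWeb2's conventions; see the dataset card for the exact names.
from datasets import load_dataset

ds = load_dataset(
    "epfml/FineWeb2-HQ",
    name="deu_Latn",     # assumed FineWeb2-style per-language config
    split="train",
    streaming=True,      # stream instead of downloading the full split
)

for doc in ds.take(3):   # peek at a few documents
    print(doc["text"][:200])
```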