Dataset: epfml/FineWeb2-HQ
Modalities: Tabular, Text
Formats: parquet
ArXiv: arxiv:2502.10361
Libraries: Datasets, Dask
License: odc-by
mjaggi committed (verified) · Commit 9b9f12f · 1 Parent(s): 5c410ef

Update README.md

Files changed (1): README.md (+2, -2)
README.md CHANGED
@@ -112,13 +112,13 @@ license: odc-by
 
 ## Dataset summary
 
-FineWeb2-HQ is a **high-quality, model-filtered pretraining dataset** derived from [**FineWeb2**](https://huggingface.co/datasets/HuggingFaceFW/fineweb-2), spanning **20 languages**. It was created by selecting the **top 10% quality documents of FineWeb2** in each language, based on scores assigned by a deep learning classifier trained to identify **structured and knowledge-rich samples** using [**XLM-RoBERTa**](https://huggingface.co/FacebookAI/xlm-roberta-base) **embeddings**.
+FineWeb2-HQ is a **high-quality, model-filtered pretraining dataset** derived as a subset of [**FineWeb2**](https://huggingface.co/datasets/HuggingFaceFW/fineweb-2), spanning **20 languages**. It enables around 6x faster pretraining compared to the base dataset. FineWeb2-HQ was created by selecting the **top 10% quality documents of FineWeb2** in each language, based on scores assigned by a deep learning classifier trained to identify **structured and knowledge-rich samples** using [**XLM-RoBERTa**](https://huggingface.co/FacebookAI/xlm-roberta-base) **embeddings**.
 
 <center>
 <img src="https://huggingface.co/datasets/epfml/FineWeb2-HQ/raw/main/agg_score_plot.svg" style="width: 70%;" />
 </center>
 
-Validation was performed by pretraining **1B-parameter LLM models** (llama-like architecture) across multiple languages and writing systems (scripts). Evaluations on **CMMLU (Chinese) and MMLU (German & French)** demonstrate that **FineWeb2-HQ matches FineWeb2 performance early in training with 6x fewer tokens, and outperforms it when fully trained**. Additionally, **improvements were observed across other benchmarks**, such as outperforming [DCLM](https://huggingface.co/datasets/mlfoundations/dclm-baseline-1.0-parquet) and [FineWeb-Edu](https://huggingface.co/datasets/HuggingFaceFW/fineweb-edu) in English.
+Validation was performed by pretraining **1B-parameter LLMs** (llama-like architecture) across multiple languages and writing systems (scripts). Evaluations on **CMMLU (Chinese) and MMLU (German & French)** demonstrate that **FineWeb2-HQ matches FineWeb2 performance when trained with 6x fewer tokens, and outperforms it when fully trained**. Additionally, **improvements were observed across other benchmarks**, such as outperforming its English counterparts [DCLM](https://huggingface.co/datasets/mlfoundations/dclm-baseline-1.0-parquet) and [FineWeb-Edu](https://huggingface.co/datasets/HuggingFaceFW/fineweb-edu).
 
 For more details, see our paper [Enhancing Multilingual LLM Pretraining with Model-Based Data Selection](https://arxiv.org/abs/2502.10361).
 
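
The updated summary sketches the selection recipe: embed each document with XLM-RoBERTa, score it with a quality classifier, and keep the top 10% of scores within each language. A minimal illustration of that idea is below; the mean-pooled embedding and the untrained `quality_head` are illustrative placeholders, not the classifier actually used to build FineWeb2-HQ.

```python
# Illustrative sketch of model-based top-10% selection: embed documents with
# XLM-RoBERTa, map each embedding to a scalar quality score, keep the top decile.
# NOTE: quality_head is a random, untrained placeholder; the real FineWeb2-HQ
# classifier and its training setup are described in the paper.
import numpy as np
import torch
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("FacebookAI/xlm-roberta-base")
encoder = AutoModel.from_pretrained("FacebookAI/xlm-roberta-base")
quality_head = torch.nn.Linear(encoder.config.hidden_size, 1)  # placeholder head

def score(texts):
    """Mean-pool XLM-RoBERTa's last hidden state and map it to a scalar score."""
    batch = tokenizer(texts, padding=True, truncation=True, max_length=512, return_tensors="pt")
    with torch.no_grad():
        hidden = encoder(**batch).last_hidden_state           # (batch, seq, hidden)
        mask = batch["attention_mask"].unsqueeze(-1)           # (batch, seq, 1)
        pooled = (hidden * mask).sum(dim=1) / mask.sum(dim=1)  # average over real tokens only
        return quality_head(pooled).squeeze(-1).numpy()

docs = [
    "A structured, knowledge-rich encyclopedia-style article about photosynthesis.",
    "click here click here best deals buy now",
    "Lecture notes on linear algebra: vectors, matrices, and eigenvalues.",
]
scores = score(docs)
threshold = np.quantile(scores, 0.90)  # top-10% cutoff
selected = [d for d, s in zip(docs, scores) if s >= threshold]
print(selected)
```

In the actual dataset the cutoff is taken per language over the full FineWeb2 split, not over a toy batch as here.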
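Since the card lists parquet files and compatibility with the Datasets and Dask libraries, a quick way to inspect the filtered data is to stream a single language configuration. A minimal sketch, assuming the configurations follow FineWeb2's language-code naming (e.g. `deu_Latn`) and that each record carries a `text` field:

```python
# Minimal sketch: stream a few documents from one language configuration.
# The config name "deu_Latn" and the "text" field are assumptions based on
# FineWeb2's layout; check the dataset viewer for the exact configuration names.
from datasets import load_dataset

ds = load_dataset("epfml/FineWeb2-HQ", "deu_Latn", split="train", streaming=True)

for i, doc in enumerate(ds):
    print(doc["text"][:200])
    if i == 2:
        break
```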