---
task_categories:
  - text-generation
language:
  - ru
  - zh
  - de
  - ja
  - es
  - fr
  - it
  - pt
  - pl
  - nl
  - id
  - tr
  - cs
  - vi
  - sv
  - fa
  - ar
  - el
  - da
  - hu
pretty_name: FineWeb2-embedded
configs:
  - config_name: rus_Cyrl
    data_files:
      - split: train
        path: rus_Cyrl/*
  - config_name: cmn_Hani
    data_files:
      - split: train
        path: cmn_Hani/*
  - config_name: deu_Latn
    data_files:
      - split: train
        path: deu_Latn/*
  - config_name: jpn_Jpan
    data_files:
      - split: train
        path: jpn_Jpan/*
  - config_name: spa_Latn
    data_files:
      - split: train
        path: spa_Latn/*
  - config_name: fra_Latn
    data_files:
      - split: train
        path: fra_Latn/*
  - config_name: ita_Latn
    data_files:
      - split: train
        path: ita_Latn/*
  - config_name: por_Latn
    data_files:
      - split: train
        path: por_Latn/*
  - config_name: pol_Latn
    data_files:
      - split: train
        path: pol_Latn/*
  - config_name: nld_Latn
    data_files:
      - split: train
        path: nld_Latn/*
  - config_name: ind_Latn
    data_files:
      - split: train
        path: ind_Latn/*
  - config_name: tur_Latn
    data_files:
      - split: train
        path: tur_Latn/*
  - config_name: ces_Latn
    data_files:
      - split: train
        path: ces_Latn/*
  - config_name: vie_Latn
    data_files:
      - split: train
        path: vie_Latn/*
  - config_name: swe_Latn
    data_files:
      - split: train
        path: swe_Latn/*
  - config_name: fas_Arab
    data_files:
      - split: train
        path: fas_Arab/*
  - config_name: arb_Arab
    data_files:
      - split: train
        path: arb_Arab/*
  - config_name: ell_Grek
    data_files:
      - split: train
        path: ell_Grek/*
  - config_name: dan_Latn
    data_files:
      - split: train
        path: dan_Latn/*
  - config_name: hun_Latn
    data_files:
      - split: train
        path: hun_Latn/*
license: odc-by
size_categories:
  - 1B<n<10B
---

# FineWeb2-embedded

## Dataset summary

FineWeb2-embedded is an extension of the FineWeb2 dataset, annotated with document-level XLM-RoBERTa embeddings for 20 languages, making the dataset useful for a variety of tasks, including document clustering, filtering, and other multilingual research.

Since XLM-RoBERTa has a sequence-length limit of 512 tokens, each document's embeddings are obtained by mean-pooling the XLM-RoBERTa output over consecutive 512-token chunks. Longer texts therefore have more embeddings available (one per 512 tokens).
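
As a rough illustration (not the exact pipeline used to build this dataset), per-chunk embeddings of this kind can be computed along the following lines with `transformers`. The choice of `xlm-roberta-base` (768-dimensional hidden states) and the special-token handling are assumptions:

```python
# Illustrative sketch only: mean-pool XLM-RoBERTa hidden states over
# consecutive 512-token chunks of a document. Model variant and
# special-token handling are assumptions, not the authors' exact setup.
import torch
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("xlm-roberta-base")
model = AutoModel.from_pretrained("xlm-roberta-base").eval()

def chunk_embeddings(text: str, chunk_size: int = 512) -> torch.Tensor:
    # Tokenize the full document without truncation, then split into chunks.
    ids = tokenizer(text, add_special_tokens=False)["input_ids"]
    chunks = [ids[i:i + chunk_size] for i in range(0, len(ids), chunk_size)]
    embeddings = []
    with torch.no_grad():
        for chunk in chunks:
            input_ids = torch.tensor([chunk])
            hidden = model(input_ids=input_ids).last_hidden_state  # (1, len, 768)
            embeddings.append(hidden.mean(dim=1).squeeze(0))       # mean-pool over tokens
    return torch.stack(embeddings)                                  # (num_chunks, 768)
```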

The embeddings were initially computed as part of our FineWeb2-HQ dataset (a high-quality subset of FineWeb2). However, we believe that they can be useful for other multilingual research and applications.

For more details, see our paper [Enhancing Multilingual LLM Pretraining with Model-Based Data Selection](https://arxiv.org/abs/2502.10361).

## Languages and subsets

| Subset name | Language name | Number of documents | Disk size |
|-------------|---------------|--------------------:|----------:|
| rus_Cyrl | Russian | 605,468,615 | 5.3T |
| cmn_Hani | Chinese | 578,332,129 | 4.4T |
| deu_Latn | German | 427,700,394 | 2.5T |
| spa_Latn | Spanish | 405,634,303 | 2.3T |
| jpn_Jpan | Japanese | 376,134,745 | 2.4T |
| fra_Latn | French | 332,646,715 | 2.0T |
| ita_Latn | Italian | 219,117,921 | 1.3T |
| por_Latn | Portuguese | 189,851,449 | 1.1T |
| pol_Latn | Polish | 138,337,436 | 794G |
| nld_Latn | Dutch | 133,855,612 | 720G |
| ind_Latn | Indonesian | 92,992,647 | 537G |
| tur_Latn | Turkish | 88,769,907 | 487G |
| ces_Latn | Czech | 62,703,458 | 390G |
| arb_Arab | Arabic | 57,752,149 | 363G |
| fas_Arab | Persian | 51,043,666 | 322G |
| hun_Latn | Hungarian | 46,879,826 | 328G |
| swe_Latn | Swedish | 45,329,979 | 261G |
| ell_Grek | Greek | 44,202,550 | 267G |
| dan_Latn | Danish | 42,975,661 | 262G |
| vie_Latn | Vietnamese | 40,741,340 | 298G |

Additional languages supported by the XLM-RoBERTa model may be added in a future version of this dataset.

## Dataset structure

### Data fields

Each data entry includes the original FineWeb2 data fields with the addition of:

- `embeddings`: an array of float arrays containing one 768-dimensional XLM-RoBERTa embedding for every 512-token chunk of the tokenized text

### Data instance

```json
{
  "id": "<urn:uuid:f26003c7-6084-4791-b3fe-240eedc37e76>",
  "text": "Plutonium ist einer der gefährlichsten Stoffe der Welt. Es entsteht als hochgiftiges und radioaktives Nebenprodukt der Energiegewinnung in Atomkraftwerken. Wer nur ein Millionstel Gramm – ein kaum staubkorngroßes Teilchen – der Substanz einatmet, kann daran sterben. In der Natur kommt der Stoff nur in geringsten Mengen vor, wird aber künstlich hergestellt, weil man damit Bomben bauen kann. Je nach Reinheitsgrad reichen für eine Atombombe bereits fünf Kilogramm. Bis zum Beginn der achtziger Jahre des letzten Jahrhunderts hatten die Reaktoren weltweit bereits rund 300.000 Kilogramm erbrütet. Jährlich kommen etwa 20.000 Kilo hinzu. Genau dieser Stoff wird zu Land und zu Wasser um den ganzen Erdball herum transportiert. Legendär sind die Castor-Transporte, bei denen unter strengsten Sicherheitsvorkehrungen und entsprechenden Kosten abgebrannte Brennelemente aus deutschen Kernkraftwerken zur Wiederaufbereitung nach La Hague (Frankreich) oder Sellafield (Großbritannien) gebracht werden. Erst vergangenen Mai hat ein Frachter die größte Menge wiederaufbereiteten Mülls aller Zeiten von Frankreich nach Japan gebracht. Nicht auszudenken, was ein Unfall auf See bedeuten würde.",
  "date": "2014-03-16T08:53:38Z",
  "dump": "CC-MAIN-2014-10",
  "embeddings": [[ ... ]],
  "file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-10/segments/1394678702159/warc/CC-MAIN-20140313024502-00039-ip-10-183-142-35.ec2.internal.warc.gz",
  "language": "deu",
  "language_score":  0.9983288645744324,
  "language_script": "Latn",
  "minhash_cluster_size": 2,
  "top_langs": {"deu_Latn_score": 0.9983288645744324},
  "url": "http://www.greenpeace.org/austria/de/themen/atom/probleme/atomtransporte/",
}

## Usage

You can load the dataset in Python with the `datasets` library:

```python
from datasets import load_dataset

dataset = load_dataset("epfml/FineWeb2-embedded", "deu_Latn")
```
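
Each subset is large (hundreds of gigabytes to several terabytes on disk), so streaming can be more practical than a full download. The snippet below sketches one simple way to use the embeddings for document-level tasks such as clustering or filtering: averaging a document's chunk embeddings into a single vector. The averaging step is an illustrative choice, not a prescribed procedure:

```python
import numpy as np
from datasets import load_dataset

# Stream the German subset instead of downloading ~2.5T of data.
dataset = load_dataset("epfml/FineWeb2-embedded", "deu_Latn",
                       split="train", streaming=True)

for example in dataset:
    # "embeddings" holds one 768-dimensional vector per 512-token chunk.
    chunk_embeddings = np.asarray(example["embeddings"], dtype=np.float32)
    # One simple document-level representation: the mean over chunks.
    doc_embedding = chunk_embeddings.mean(axis=0)
    print(example["id"], chunk_embeddings.shape, doc_embedding.shape)
    break
```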

## Licensing information

Like FineWeb2, this dataset is released under the Open Data Commons Attribution License (ODC-By) v1.0 and is subject to CommonCrawl's Terms of Use.

## Dataset origin

As it is based on FineWeb2, this dataset covers websites crawled between 2013 and 2024.

Because FineWeb2 is sourced from the internet at large, it is very likely that some personally identifiable information (PII) is present, even though the FineWeb2 processing has already anonymized email addresses and public IP addresses. If you find your own PII and would like it removed, please fill out the FineWeb2 PII removal/opt-out form.

CommonCrawl respects robots.txt at crawl time, but if you are a webmaster and find your website in FineWeb2 and would like to have it removed, you may also use the FineWeb2 PII removal/opt-out form.

## Considerations for Using the Data

For a discussion of social impact, biases, and known limitations, please also refer to the FineWeb2 documentation.

## Citation information

If you use this dataset in your research or applications, please use the following citation:

```bibtex
@article{messmer2025multilingdatacomp,
  title={Enhancing Multilingual LLM Pretraining with Model-Based Data Selection},
  author={Bettina Messmer and Vinko Sabolčec and Martin Jaggi},
  journal={arXiv},
  year={2025},
  url={https://arxiv.org/abs/2502.10361},
}
```