---
license: odc-by
task_categories:
  - text-generation
  - summarization
  - text2text-generation
language:
  - en
tags:
  - synthetic
size_categories:
  - 100K<n<1M
---

# OpenWeb Datasets Web Collection

The OpenWeb Datasets Web Collection is derived from the FineWeb dataset, which consists of more than 15 trillion tokens of cleaned and deduplicated English web data from CommonCrawl. FineWeb's processing pipeline is optimized for LLM performance, and the subset needed for this collection was extracted from Hugging Face's FineWeb releases. FineWeb was produced by processing 96 CommonCrawl dumps covering web data crawled from the summer of 2013 through April 2024. It spans a wide range of English-language domains and topics and is primarily intended as a research artifact for public pretraining data in the context of large language models. The CommonCrawl data was carefully processed, filtered, and deduplicated with the Datatrove library, yielding the largest publicly available clean LLM pretraining dataset, at approximately 15 trillion tokens (measured with the GPT-2 tokenizer).
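
A minimal sketch of loading the collection in streaming mode with the `datasets` library and counting GPT-2 tokens per record (the same tokenizer used for the token counts above). The repo id `prithivMLmods/OpenWeb383K` and the `text` field name are assumptions inferred from this card and FineWeb's schema; adjust them to match the actual repository:

```python
from datasets import load_dataset
from transformers import GPT2TokenizerFast

# Stream the dataset so the full corpus is not downloaded up front.
# NOTE: the repo id "prithivMLmods/OpenWeb383K" and the "text" field
# are assumptions based on this card; verify against the actual repo.
ds = load_dataset("prithivMLmods/OpenWeb383K", split="train", streaming=True)

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")

# Count GPT-2 tokens for the first few records, mirroring how the
# ~15T-token figure for FineWeb was measured.
for i, example in enumerate(ds):
    n_tokens = len(tokenizer(example["text"])["input_ids"])
    print(f"record {i}: {n_tokens} GPT-2 tokens")
    if i == 2:
        break
```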

## FineWeb Dataset Overview

| Dataset Name | Total Entries | Dataset Link |
|--------------|---------------|--------------|
| FineWeb      | 25B           | [FineWeb Dataset](https://huggingface.co/datasets/HuggingFaceFW/fineweb) |