---
license: odc-by
task_categories:
- text-generation
- summarization
- text2text-generation
language:
- en
tags:
- synthetic
size_categories:
- 100K<n<1M
---

# **OpenWeb Datasets Web Collection**

The *FineWeb* dataset consists of more than 15 trillion tokens of cleaned and deduplicated English web data from CommonCrawl. Its data processing pipeline is optimized for LLM performance, and this collection was assembled by selecting the relevant subsets from Hugging Face's *FineWeb* release.

This dataset was created by processing 96 CommonCrawl dumps comprising web data crawled from the summer of 2013 to April 2024. *FineWeb* covers a wide variety of English-language domains and topics, and is intended primarily as a research artifact on public data in the context of pretraining datasets for large language models.

The CommonCrawl data was carefully processed, filtered, and deduplicated with the *DataTrove* library, resulting in the largest publicly available clean LLM pretraining dataset, totaling approximately 15 trillion tokens (as counted by the GPT-2 tokenizer).

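For context on what such a pipeline looks like in code, here is a minimal *DataTrove*-style sketch. It is not the exact FineWeb recipe: the input/output paths are placeholders, and the filter choices (language detection plus Gopher quality heuristics) are only illustrative of the kinds of steps involved.

```python
from datatrove.executor import LocalPipelineExecutor
from datatrove.pipeline.readers import JsonlReader
from datatrove.pipeline.filters import GopherQualityFilter, LanguageFilter
from datatrove.pipeline.writers import JsonlWriter

# A minimal DataTrove-style cleaning pipeline -- NOT the exact FineWeb recipe:
# read JSONL documents, keep English text, apply Gopher quality heuristics,
# and write the surviving documents back out. Paths are placeholders.
executor = LocalPipelineExecutor(
    pipeline=[
        JsonlReader("data/raw"),
        LanguageFilter(languages=["en"]),
        GopherQualityFilter(),
        JsonlWriter("data/cleaned"),
    ],
    tasks=4,  # number of parallel tasks to split the input into
)

if __name__ == "__main__":
    executor.run()
```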