mnmnmnmn committed on
Commit 40b5207 · verified · 1 Parent(s): 290abeb

Update README.md

Files changed (1): README.md +68 -53

README.md CHANGED
@@ -1,72 +1,74 @@
- ---
- language:
- - en
- - fr
- - de
- - it
- - pt
- - nl
- - es
- pretty_name: Common Corpus
- size_categories:
- - n>1T
- task_categories:
- - text-generation
- tags:
- - legal
- - finance
- - literature
- - science
- - code
- ---
-
  # Common Corpus

- Common Corpus is the largest open and permissively licensed text dataset, comprising over 2 trillion tokens (2,003,039,184,047 tokens). It is a diverse dataset, consisting of books, newspapers, scientific articles, government and legal documents, code, and more.

  Common Corpus differs from existing open datasets in that it is:
- * **Truly Open**: contains only data that is permissively licensed
- * **Multilingual**: mostly representing English and French data, but contains data for XX languages
  * **Diverse**: consisting of scientific articles, government and legal documents, code, and cultural heritage data, including books and newspapers
  * **Extensively Curated**: spelling and formatting have been corrected in digitized texts, harmful and toxic content has been removed, and content with low educational value has also been removed.

  # About Common Corpus

- Common Corpus is made of five carefully curated collections:
- * **OpenCulture**: our largest collection at 926,541,096,243 tokens, featuring public domain books, newspapers, and Wikisource content. We've developed innovative tools like OCROnos-Vintage to correct historical digitization errors, while implementing advanced toxicity filtering to ensure content meets modern ethical standards.
- * **OpenGovernment**: 387,965,738,992 tokens of financial and legal documents, including Finance Commons (from sources like SEC and WTO) and Legal Commons (including Europarl and Caselaw Access Project), providing enterprise-grade training data from regulatory bodies and administrative sources.
- * **OpenSource**: 334,658,896,533 tokens of high-quality open source code from GitHub, filtered using ArmoRM to ensure only the top 80% of submissions by quality rating are included.
- * **OpenScience**: 221,798,136,564 tokens of academic content from OpenAlex and other open science repositories, processed using vision-language models to preserve crucial document structure and formatting.
- * **OpenWeb**: 132,075,315,715 tokens from Wikipedia (official releases from the [Wikimedia Foundation](https://huggingface.co/datasets/wikimedia/wikipedia) on Huggingface), YouTube Commons and other websites available under permissive licenses, like Stack-Exchange.

 
- | Collection | Domain | Sources |
  |----------------|--------------------------|-------------------------------------------------------------------------------------------|
  | OpenGovernment | legal and administrative | [Finance Commons](https://huggingface.co/collections/PleIAs/finance-commons-66925e1095c7fa6e6828e26c) (e.g. SEC, WTO) and Legal Commons (e.g. Europarl, Caselaw Access Project) |
- | OpenCulture | cultural heritage | public domain books and newspapers, Wikisource |
- | OpenScience | academic | OpenAlex, French theses |
- | OpenWeb | web text | [YouTube Commons](https://huggingface.co/datasets/PleIAs/YouTube-Commons), Stack Exchange |
- | OpenSource | code | GitHub |

- A comprehensive technical report detailing our methodologies and data sources will accompany the dataset release, ensuring full transparency and reproducibility. We will release the individual sub-corpora in coming weeks for more fine-grained auditability and to expand uses.

  ## Dataset Structure

  <details >
  <summary>Data Fields</summary>
-
- * identifier: unique text identifier
- * text: post-processed text
- * char_count: number of UTF-8 characters in text
- * file_name: original file path, organized by collection
- * set_id: set id (1-10)
- * subset_id: subset id (1-100)

  </details >
  <br />

- # How to Use

- ## Considerations for Using the Data

  All data in Common Corpus are permissively licensed and may be used for both commercial and non-commercial purposes.

@@ -78,20 +80,32 @@ Some of the dataset sources contain biased and toxic content, such as stereotype

  ### Personal and Sensitive Information

- We have attempted to remove personally identifiable information (PII). We primarily use [Microsoft Presidio](https://microsoft.github.io/presidio/), but make additional modifications to account for language- and country-specific considerations, such as European phone number formats.
-

- ## Use Common Corpus

- ```
  from datasets import load_dataset
  data = load_dataset('PleIAs/common_corpus')
- ```


  # Acknowledgements

- The corpus was stored and processed with the generous support of the AI Alliance, Jean Zay (Eviden, Idris), Nvidia Inception program, Nebius AI, Tracto AI, Mozilla. It was built up with the support and concerted efforts of the state start-up LANGU:IA (start-up d’Etat), supported by the French Ministry of Culture and DINUM, as part of the prefiguration of the service offering of the Alliance for Language technologies EDIC (ALT-EDIC). This dataset was also made in partnership with Wikimedia Enterprise for the Wikipedia part. The collection of the corpus has been largely facilitated thanks to the open science LLM community insights, cooperation and support (Eleuther AI, Allen AI, HuggingFace…).

  <div style="text-align: center;">
  <img src="https://huggingface.co/datasets/PleIAs/common_corpus/resolve/main/logo/ai_alliance.png" style="width: 33%; margin: 0 auto; display: inline-block;"/>
@@ -101,4 +115,5 @@ The corpus was stored and processed with the generous support of the AI Alliance
  <img src="https://huggingface.co/datasets/PleIAs/common_corpus/resolve/main/logo/mozilla.png" style="width: 33%; margin: 0 auto; display: inline-block;"/>
  <img src="https://raw.githubusercontent.com/Pleias/logos/f117dee70b317bc664eac14ee70d7c0563101ed1/ministere_logo.png?token=GHSAT0AAAAAACZUTJMICO3MSWUJ43EQWG5QZZL3RFQ" style="width: 33%; margin: 0 auto; display: inline-block;"/>
  <img src="https://raw.githubusercontent.com/Pleias/logos/f117dee70b317bc664eac14ee70d7c0563101ed1/wikimedia_logo.png?token=GHSAT0AAAAAACZUTJMIIPAP4J7MKP6RSSWCZZL3TFA" style="width: 33%; margin: 0 auto; display: inline-block;"/>
- </div>

  # Common Corpus

+ Common Corpus is the largest open and permissively licensed text dataset, comprising 2 trillion tokens (1,998,647,168,282 tokens). It is a diverse dataset, consisting of books, newspapers, scientific articles, government and legal documents, code, and more. Common Corpus has been created by Pleias in association with several partners and contributed in kind to the Current AI initiative.

  Common Corpus differs from existing open datasets in that it is:
+ * **Truly Open**: contains only data that is either uncopyrighted or permissively licensed
+ * **Traceable**: each individual document is associated with documented contextual information, including its licensing status or absence of copyright.
+ * **Multilingual**: mostly representing English and French data, but containing 8 languages with more than 10 billion tokens each (including German, Spanish, Italian, Polish, Greek, and Latin) and 33 languages with more than 1 billion tokens.
  * **Diverse**: consisting of scientific articles, government and legal documents, code, and cultural heritage data, including books and newspapers
  * **Extensively Curated**: spelling and formatting have been corrected in digitized texts, harmful and toxic content has been removed, and content with low educational value has also been removed.

+ The dataset in its entirety meets the requirements of the Code of Conduct of the AI Act and goes further than the current requirements for data transparency. It aims to set a new standard of openness in AI, showing that detailed provenance at a granular document level is a realistic objective, even at the scale of 2 trillion tokens.
+
+ Common Corpus makes it possible to train models compatible with [the Open Source Initiative’s definition](https://opensource.org/ai/open-source-ai-definition#:~:text=An%20Open%20Source%20AI%20is,including%20to%20change%20its%20output.) of open-source AI, which includes openness of use, meaning use is permitted for “any purpose and without having to ask for permission”. Based on the available licensing information, Common Corpus can be filtered to include only public domain works or a subset of free licenses (such as attribution-only licenses), as sketched below.
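As a minimal, illustrative sketch of such filtering (not an official recipe: the `train` split name and the exact `license` strings are assumptions that should be checked against the Data Fields documented below):

```python
from datasets import load_dataset

# Stream the corpus rather than downloading all ~2T tokens up front.
corpus = load_dataset("PleIAs/common_corpus", split="train", streaming=True)

# Hypothetical allow-list; verify these values against the `license` metadata
# actually present in the dataset before relying on them.
ALLOWED_LICENSES = {"Public Domain", "US federal public domain", "CC0", "CC-BY"}

permissive_subset = corpus.filter(lambda doc: doc.get("license") in ALLOWED_LICENSES)

for doc in permissive_subset.take(3):
    print(doc["identifier"], doc["license"])
```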
+
+
  # About Common Corpus

+ Common Corpus is made of six carefully curated collections:
+ * **OpenCulture**: our largest collection at 885,982,490,090 tokens, featuring public domain books, newspapers from cultural heritage repositories, and open projects like Wikisource and Gutenberg. We're developing innovative OCR-correction tools based on Pleias models to correct historical digitization errors, while implementing advanced toxicity filtering to ensure content meets modern ethical standards.
+ * **OpenGovernment**: 406,581,454,455 tokens of financial and legal documents, including Finance Commons (from sources like SEC and WTO) and Legal Commons (including Europarl and Caselaw Access Project), providing enterprise-grade training data from regulatory bodies and administrative sources.
+ * **OpenSource**: 283,227,402,898 tokens of high-quality open source code from GitHub, filtered using ArmoRM to ensure only the top 80% of submissions by quality rating are included.
+ * **OpenScience**: 281,193,563,789 tokens of academic content from OpenAlex and other open science repositories, processed using vision-language models to preserve crucial document structure and formatting.
+ * **OpenWeb**: 73,217,485,489 tokens from Wikipedia (official releases from the [Wikimedia Foundation](https://huggingface.co/datasets/wikimedia/wikipedia) on Huggingface), YouTube Commons and Stack-Exchange.
+ * **OpenSemantic**: 67,958,671,827 tokens from Wikidata (official releases from the [Wikimedia Foundation](https://huggingface.co/datasets/wikimedia/wikipedia) on Huggingface). The data has been reprocessed with the support of Wikidata and Wikimedia Germany. It includes transcriptions of all the semantic triplets into natural language statements in over 300 languages.

+ | Collection | Domain | Sources |
  |----------------|--------------------------|-------------------------------------------------------------------------------------------|
  | OpenGovernment | legal and administrative | [Finance Commons](https://huggingface.co/collections/PleIAs/finance-commons-66925e1095c7fa6e6828e26c) (e.g. SEC, WTO) and Legal Commons (e.g. Europarl, Caselaw Access Project) |
+ | OpenCulture | cultural heritage | public domain books and newspapers, Wikisource |
+ | OpenScience | academic | OpenAlex, French theses |
+ | OpenWeb | web text | [YouTube Commons](https://huggingface.co/datasets/PleIAs/YouTube-Commons), Stack Exchange |
+ | OpenSource | code | GitHub |
+ | OpenSemantic | semantic data | Wikidata |
+

+ The first version of [Common Corpus](https://huggingface.co/datasets/PleIAs/common_corpus) was released in November 2024. The second version includes the following features:
+ * Additional datasets, including OpenSemantic thanks to a collaboration with Wikidata, extended scrapes of Wikisource and the Gutenberg project, as well as large open science datasets with permissive licenses.
+ * Detailed document-level information, including licensing and other core metadata whenever available.
+ * Direct links to the original publication for a significant share of documents (especially from GitHub, Wikipedia, Wikidata…)

+ The dataset release is accompanied by a comprehensive technical report detailing our methodologies and data sources, ensuring full transparency and reproducibility.

  ## Dataset Structure

  <details >
  <summary>Data Fields</summary>
+
+ * `identifier`: unique text identifier. In many cases, this is also the link to the original resource.
+ * `collection`: name of one of the XX sub-collections curated for Common Corpus.
+ * `open type`: one of the six leading collection groupings.
+ * `license`: sharing rights for the content: either uncopyrighted (public domain, US federal public domain, CC0 on Wikidata) or various free licenses (Creative Commons, MIT, French Licence ouverte, etc.)
+ * `date`: date of creation of the resource where known. Due to the significance of public domain and other cultural heritage content, more than half of Common Corpus predates the 21st century.
+ * `title`: title of the resource when known, or alternatively the filename.
+ * `creator`: institution publishing/collecting/curating the resource.
+ * `language`: automatically identified language.
+ * `word_count`: number of space-delimited words.
+ * `token_count`: number of tokens as calculated by the official Pleias tokenizer.
+ * `text`: full text, without formatting.

  </details >
  <br />
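As an illustrative sketch of reading the fields documented above (the `train` split name is an assumption, and streaming is used here only to avoid a full download):

```python
from collections import Counter
from datasets import load_dataset

corpus = load_dataset("PleIAs/common_corpus", split="train", streaming=True)

# Inspect the documented metadata for a small sample of documents.
language_counts = Counter()
for doc in corpus.take(100):
    language_counts[doc["language"]] += 1
    print(doc["identifier"], doc["collection"], doc["license"], doc["token_count"])

print(language_counts.most_common(5))
```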

+ ## Provenance
+
+ The provenance of the datasets that make up Refined Common Corpus is detailed in the technical report [link]. Additionally, the original source URL is available in the metadata for each document for most of the dataset.
+
+ ## How to Use
+
+ ### Considerations for Using the Data

  All data in Common Corpus are permissively licensed and may be used for both commercial and non-commercial purposes.

  ### Personal and Sensitive Information

+ We have attempted to remove personally identifiable information (PII). We primarily use [Microsoft Presidio](https://microsoft.github.io/presidio/), but make additional modifications to account for language- and country-specific considerations, such as European phone number formats.

+ Some small parts of the French administrative common crawl have been dropped entirely, using our unreleased small reasoning model for GDPR filtering, due to the heightened risk of transmitting indirectly identifying personal information.
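For reference, here is a minimal sketch of the kind of Presidio-based anonymization described above; this is an illustrative example, not the exact pipeline used to build Common Corpus:

```python
from presidio_analyzer import AnalyzerEngine
from presidio_anonymizer import AnonymizerEngine

analyzer = AnalyzerEngine()
anonymizer = AnonymizerEngine()

text = "Contact Jane Doe at jane.doe@example.com or +33 6 12 34 56 78."

# Detect PII entities (person names, email addresses, phone numbers, ...).
results = analyzer.analyze(text=text, language="en")

# Replace the detected spans with placeholders such as <PERSON> or <EMAIL_ADDRESS>.
anonymized = anonymizer.anonymize(text=text, analyzer_results=results)
print(anonymized.text)
```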

+ ## Using Common Corpus

  ```
  from datasets import load_dataset
  data = load_dataset('PleIAs/common_corpus')
  ```

  # Acknowledgements

+ The Corpus is part of the Current AI initiative to accelerate the development of the building blocks of open AI, notably data, that serves and is shaped by the public interest.
+
+ It was built up with the support and concerted efforts of the AI Alliance, the French Ministry of Culture and DINUM, as part of the prefiguration of the service offering of the Alliance for Language technologies EDIC (ALT-EDIC).
+
+ This dataset was also made in partnership with Wikimedia Enterprise and Wikidata/Wikimedia Germany.
+
+ The corpus was stored and processed with the generous support of the AI Alliance, Jean Zay (Eviden, Idris), Tracto AI, Mozilla.
+
+ The collection of the corpus has been largely facilitated thanks to the insights, cooperation and support of the open science LLM community (Eleuther AI, Allen AI, HuggingFace…).

  <div style="text-align: center;">
  <img src="https://huggingface.co/datasets/PleIAs/common_corpus/resolve/main/logo/ai_alliance.png" style="width: 33%; margin: 0 auto; display: inline-block;"/>

  <img src="https://huggingface.co/datasets/PleIAs/common_corpus/resolve/main/logo/mozilla.png" style="width: 33%; margin: 0 auto; display: inline-block;"/>
  <img src="https://raw.githubusercontent.com/Pleias/logos/f117dee70b317bc664eac14ee70d7c0563101ed1/ministere_logo.png?token=GHSAT0AAAAAACZUTJMICO3MSWUJ43EQWG5QZZL3RFQ" style="width: 33%; margin: 0 auto; display: inline-block;"/>
  <img src="https://raw.githubusercontent.com/Pleias/logos/f117dee70b317bc664eac14ee70d7c0563101ed1/wikimedia_logo.png?token=GHSAT0AAAAAACZUTJMIIPAP4J7MKP6RSSWCZZL3TFA" style="width: 33%; margin: 0 auto; display: inline-block;"/>
+ </div>