Update README and config files - README.md
```diff
@@ -32,6 +32,12 @@ configs:
 
 This dataset is part of the [ALEA Institute's](https://aleainstitute.ai/) KL3M Data Project, which provides copyright-clean training resources for large language models.
 
+## Dataset Details
+
+- **Format**: Parquet files containing document text and metadata
+- **License**: [CC BY 4.0](https://creativecommons.org/licenses/by/4.0/)
+- **Tokenizer**: The `tokens` field uses the [kl3m-004-128k-cased](https://huggingface.co/alea-institute/kl3m-004-128k-cased) tokenizer, a case-sensitive 128K vocabulary tokenizer optimized for legal, financial, and enterprise documents
+
 ## Abstract
 
 Practically all large language models have been pre-trained on data that is subject to global uncertainty related to copyright infringement and breach of contract. This creates potential risk for users and developers due to this uncertain legal status. The KL3M Data Project directly confronts this critical issue by introducing the largest comprehensive training data pipeline that minimizes risks related to copyright or breach of contract.
@@ -46,11 +52,6 @@ The foundation of this project is a corpus of over 132 million documents and tri
 
 All of these resources are freely available to the public on S3, Hugging Face, and GitHub under CC-BY terms. We are committed to continuing this project in furtherance of a more ethical, legal, and sustainable approach to the development and use of AI models.
 
-## Dataset Details
-
-- **Format**: Parquet files containing document text and metadata
-- **License**: [CC BY 4.0](https://creativecommons.org/licenses/by/4.0/)
-
 ## Legal Basis
 
 This dataset is fully compliant with copyright law and contractual terms. The content is included based on the following legal foundation:
```