---
pretty_name: QuRatedPajama-1B_tokens_for_analysis
---

## QuRatedPajama

**Paper:** [QuRating: Selecting High-Quality Data for Training Language Models](https://arxiv.org/pdf/2402.09739.pdf)

This dataset is a 1B token subset derived from [princeton-nlp/QuRatedPajama-260B](https://huggingface.co/datasets/princeton-nlp/QuRatedPajama-260B), which is a subset of [cerebras/SlimPajama-627B](https://huggingface.co/datasets/cerebras/SlimPajama-627B) annotated by [princeton-nlp/QuRater-1.3B](https://huggingface.co/princeton-nlp/QuRater-1.3B) with sequence-level quality ratings across 4 criteria:
- **Educational Value** - e.g. the text includes clear explanations, step-by-step reasoning, or questions and answers
- **Facts & Trivia** - how much factual and trivia knowledge the text contains, where specific facts and obscure trivia are preferred over more common knowledge
- **Writing Style** - e.g. the text is well-written and eloquent
- **Required Expertise** - how much expertise is required to understand the text

This subset is useful for analysis of quality ratings.
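
A natural first step for such analysis is to load the subset and look at the rating columns. The sketch below assumes the repository id `princeton-nlp/QuRatedPajama-1B_tokens_for_analysis` (inferred from the `pretty_name` above) and a hypothetical column name `educational_value_average` borrowed from the parent dataset's naming; check `dataset.column_names` for the actual schema.

```python
# Minimal sketch: load the subset and surface the highest-rated chunks for one
# criterion. The repo id and the column name "educational_value_average" are
# assumptions; inspect dataset.column_names for the real fields.
from datasets import load_dataset

dataset = load_dataset(
    "princeton-nlp/QuRatedPajama-1B_tokens_for_analysis", split="train"
)
print(dataset.column_names)  # discover the actual rating columns

# Sort descending by one quality criterion and take the top five chunks.
top_edu = dataset.sort("educational_value_average", reverse=True).select(range(5))
for example in top_edu:
    print(example["educational_value_average"])
```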
In a pre-processing step, we split documents into chunks of exactly 1024 tokens. We provide tokenization with the Llama-2 tokenizer in the `input_ids` column.
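
Since the chunks are pre-tokenized, the raw text can be recovered by decoding `input_ids` with any Llama-2 tokenizer. A minimal sketch, assuming the same repository id as above and access to the gated `meta-llama/Llama-2-7b-hf` tokenizer (used here purely for illustration):

```python
# Minimal sketch: decode a pre-tokenized chunk back into text and confirm the
# fixed 1024-token chunk length. Requires access to a Llama-2 tokenizer.
from datasets import load_dataset
from transformers import AutoTokenizer

dataset = load_dataset(
    "princeton-nlp/QuRatedPajama-1B_tokens_for_analysis", split="train"
)
tokenizer = AutoTokenizer.from_pretrained("meta-llama/Llama-2-7b-hf")

chunk = dataset[0]["input_ids"]
assert len(chunk) == 1024  # every chunk is exactly 1024 tokens
print(tokenizer.decode(chunk)[:500])
```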

**Guidance on Responsible Use**

In the paper, we document various types of bias that are present in the quality ratings (biases related to domains, topics, social roles, regions and languages - see Section 6 of the paper).
Hence, be aware that data selection with QuRating could have unintended and harmful effects on the language model that is being trained. We strongly recommend a comprehensive evaluation of the language model for these and other types of bias, particularly before real-world deployment. We hope that releasing the data/models can facilitate future research aimed at uncovering and mitigating such biases.

**Citation:**
```
@article{wettig2024qurating,
title={QuRating: Selecting High-Quality Data for Training Language Models},
      author={Alexander Wettig and Aatmik Gupta and Saumya Malik and Danqi Chen},
      journal={arXiv preprint arXiv:2402.09739},
      year={2024}
}
```