Update README.md
README.md CHANGED
@@ -1,31 +1,21 @@
-    num_bytes: 2010865867591
-    num_examples: 254141282
-  download_size: 1055335695720
-  dataset_size: 2010865867591
-configs:
-- config_name: default
-  data_files:
-  - split: train
-    path: data/train-*
----
+## QuRatedPajama
+
+A 260B token subset of [cerebras/SlimPajama-627B](https://huggingface.co/datasets/cerebras/SlimPajama-627B), annotated by [princeton-nlp/QuRater-1.3B](https://huggingface.co/princeton-nlp/QuRater-1.3B/tree/main) with sequence-level quality ratings across 4 criteria:
+- **Educational Value** - e.g. the text includes clear explanations, step-by-step reasoning, or questions and answers
+- **Facts & Trivia** - how much factual and trivia knowledge the text contains, where specific facts and obscure trivia are preferred over more common knowledge
+- **Writing Style** - how polished and well-written the text is
+- **Required Expertise** - how much expertise and prerequisite knowledge is needed to understand the text
+
+In a pre-processing step, we split documents into chunks of exactly 1024 tokens. We provide the tokenization with the Llama-2 tokenizer in the `input_ids` column.
+
+Paper: [QuRating: Selecting High-Quality Data for Training Language Models](https://arxiv.org/pdf/2402.09739.pdf)
+
+Citation:
+```
+@article{wettig2024qurating,
+  title={QuRating: Selecting High-Quality Data for Training Language Models},
+  author={Alexander Wettig and Aatmik Gupta and Saumya Malik and Danqi Chen},
+  journal={arXiv preprint 2402.09739},
+  year={2024}
+}
+```
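The new card states that each example is a 1024-token chunk pre-tokenized with the Llama-2 tokenizer and stored in `input_ids`; the removed metadata's 254,141,282 examples at 1024 tokens each work out to roughly the quoted 260B tokens. Below is a minimal loading sketch under stated assumptions: the repo id `princeton-nlp/QuRatedPajama-260B` is inferred from this card rather than given in it, and streaming is used because the train split is about 2 TB on disk (~1 TB download) according to the removed metadata.

```python
# Minimal sketch (assumptions: repo id "princeton-nlp/QuRatedPajama-260B", default
# config, "train" split under data/train-* as in the removed YAML). Streams a couple
# of examples and decodes the pre-tokenized `input_ids` back to text.
from itertools import islice

from datasets import load_dataset
from transformers import AutoTokenizer

# Stream instead of downloading: the train split is ~2 TB (download ~1 TB).
ds = load_dataset("princeton-nlp/QuRatedPajama-260B", split="train", streaming=True)

# Llama-2 tokenizer, matching the tokenization described in the card
# (gated repo: requires accepting the license on the Hugging Face Hub).
tok = AutoTokenizer.from_pretrained("meta-llama/Llama-2-7b-hf")

for example in islice(ds, 2):
    ids = example["input_ids"]                        # 1024 token ids per chunk
    text = tok.decode(ids, skip_special_tokens=True)  # recover the raw text
    print(len(ids), text[:200].replace("\n", " "))
```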
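Since the annotations are sequence-level scores for the four criteria, a typical use is to filter or sample chunks by one of the ratings before training (the QuRating paper samples according to the ratings with a temperature rather than hard-thresholding, but a threshold keeps the sketch short). The column name `edu_value` below is a placeholder, not a field name taken from this card; inspect the dataset's features for the real rating columns.

```python
# Hedged sketch of quality-based selection: keep chunks whose educational-value
# rating is in the top ~20% of a streamed sample. The rating column "edu_value"
# is a placeholder name; check the dataset's features for the actual fields.
from itertools import islice

import numpy as np
from datasets import load_dataset

ds = load_dataset("princeton-nlp/QuRatedPajama-260B", split="train", streaming=True)

sample = list(islice(ds, 10_000))                        # small sample to estimate a cutoff
scores = np.array([ex["edu_value"] for ex in sample])    # placeholder column name
cutoff = np.quantile(scores, 0.80)                       # keep roughly the top 20%

selected = [ex for ex in sample if ex["edu_value"] >= cutoff]
print(f"kept {len(selected)}/{len(sample)} chunks with score >= {cutoff:.3f}")
```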