Update README.md
README.md CHANGED
@@ -15,4 +15,27 @@ configs:
|
   data_files:
   - split: train
     path: max_len-448/train-*
+license: cc-by-sa-4.0
+language:
+- en
 ---
|
+
+# Wikipedia simple splitted
+
+Wikipedia simple data split using Langchain's RecursiveCharacterTextSplitter.
+
+## Usage
+
+- This dataset is meant to be an ultra-high-quality dataset.
+- It can be used for annealing LLMs.
+
+## Why it's different
|
+
+- Each chunk has a maximum length of 448 characters (128 * 3.5).
+- Rather than cutting at a fixed length, the text is split with RecursiveCharacterTextSplitter, so chunks don't end at arbitrary points.
+- The short, uniform chunks allow very large batch sizes.
+
+## License
+
+[CC-BY-SA 4.0](https://creativecommons.org/licenses/by-sa/4.0/deed.en)
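The chunking strategy described in the added README can be sketched in plain Python. The `recursive_split` function below is a simplified, hypothetical re-implementation for illustration only; the dataset itself was produced with Langchain's RecursiveCharacterTextSplitter, which additionally merges adjacent small pieces back up toward the chunk size rather than leaving them separate.

```python
# Simplified sketch of recursive character splitting (hypothetical; for
# illustration only). It tries coarse separators first (paragraphs, then
# lines, then words) and only hard-cuts the text as a last resort. Unlike
# Langchain's RecursiveCharacterTextSplitter, it does not merge small
# pieces back together up to the chunk size.
def recursive_split(text, max_len=448, separators=("\n\n", "\n", " ", "")):
    if len(text) <= max_len:
        return [text] if text else []
    sep, rest = separators[0], separators[1:]
    if sep == "":
        # Last resort: hard cut every max_len characters.
        return [text[i:i + max_len] for i in range(0, len(text), max_len)]
    chunks = []
    for piece in text.split(sep):
        if len(piece) > max_len:
            # Piece is still too long: retry with the next, finer separator.
            chunks.extend(recursive_split(piece, max_len, rest))
        elif piece:
            chunks.append(piece)
    return chunks
```

With `max_len=448`, every emitted chunk is at most 448 characters, and any paragraph already shorter than that is kept whole — which is why chunks from this kind of splitter tend not to end mid-sentence.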