Each passage in the original version of a book chapter is aligned with its corresponding abridged passage.
| Passage Size | Description | # Train | # Dev | # Test |
| ------------- | ------------- | ------- | ------- | ------- |
| chapters | Each passage is a single chapter | 808 | 10 | 50 |
| sentences | Each passage is a sentence delimited by the NLTK sentence tokenizer | 122,219 | 1,143 | 10,431 |
| paragraphs | Each passage is a paragraph delimited by a line break | 37,227 | 313 | 3,125 |
| chunks-10-sentences | Each passage consists of up to X=10 sentences, which may span more than one paragraph; to derive chunks with other lengths X, see the GitHub repo above | 14,857 | 141 | 1,264 |
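The dataset ships these chunks precomputed, and the exact chunking logic lives in the GitHub repo above; the grouping idea itself can be sketched in a few lines (`chunk_sentences` is a hypothetical helper, not part of the dataset):

```python
# Minimal sketch: group consecutive sentences into chunks of at most x
# sentences, illustrating the "chunks-10-sentences" passage size. The
# final chunk may hold fewer than x sentences.

def chunk_sentences(sentences, x=10):
    """Group a list of sentences into consecutive chunks of up to x sentences."""
    return [sentences[i:i + x] for i in range(0, len(sentences), x)]

sentences = [f"Sentence {n}." for n in range(23)]
chunks = chunk_sentences(sentences, x=10)
print([len(c) for c in chunks])  # chunk sizes: [10, 10, 3]
```

Note that real chunks follow sentence order within a chapter, so a chunk may span a paragraph boundary.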
#### Example Usage

To load passages aligned as chunks of up to 10 sentences:

```
from datasets import load_dataset

data = load_dataset("ablit", "chunks-10-sentences")
```

### Data Fields