jennhu committed · commit b998a65 (verified) · parent 1b2427a

Update README.md

Files changed (1):
1. README.md (+39, -21)
README.md CHANGED
@@ -1,21 +1,39 @@
----
-dataset_info:
-  features:
-  - name: text
-    dtype: string
-  - name: sentences
-    sequence: string
-  - name: parses
-    sequence: string
-  splits:
-  - name: train
-    num_bytes: 713719296
-    num_examples: 769764
-  download_size: 338028239
-  dataset_size: 713719296
-configs:
-- config_name: default
-  data_files:
-  - split: train
-    path: data/train-*
----
+---
+dataset_info:
+  features:
+  - name: text
+    dtype: string
+  - name: sentences
+    sequence: string
+  - name: parses
+    sequence: string
+  splits:
+  - name: train
+    num_bytes: 713719296
+    num_examples: 769764
+  download_size: 338028239
+  dataset_size: 713719296
+configs:
+- config_name: default
+  data_files:
+  - split: train
+    path: data/train-*
+language:
+- en
+tags:
+- wikipedia
+- wiki
+size_categories:
+- 100K<n<1M
+---
+
+# Dataset Card for Dataset Name
+
+<!-- Provide a quick summary of the dataset. -->
+
+This dataset is a processed version of [rahular/simple-wikipedia](https://huggingface.co/datasets/rahular/simple-wikipedia),
+which is a dump of articles from Simple English Wikipedia.
+
+In addition to the raw text in the `text` column, this dataset provides two additional columns:
+- `sentences`: a list of the sentences in `text`, produced by a spaCy sentence tokenizer
+- `parses`: a list of constituency parse strings, one per sentence in `sentences`, generated by the [Berkeley neural parser](https://github.com/nikitakit/self-attentive-parser)
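
For quick inspection, the columns described above can be loaded with the `datasets` library. A minimal sketch; the repo id `jennhu/simple-wikipedia` is an assumption based on the commit author and is not stated in the card, so substitute the actual id:

```python
from datasets import load_dataset

# Repo id assumed from the commit author; replace with the dataset's actual id.
ds = load_dataset("jennhu/simple-wikipedia", split="train")

example = ds[0]
print(example["text"][:200])    # raw article text
print(example["sentences"][0])  # first sentence, from the spaCy sentence tokenizer
print(example["parses"][0])     # constituency parse string for that sentence
```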
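The card does not include the processing script itself. The following is a minimal sketch of how columns like `sentences` and `parses` can be produced with spaCy and benepar (the Berkeley neural parser's spaCy integration); the `en_core_web_sm` and `benepar_en3` model names are assumptions, not taken from the card:

```python
import benepar
import spacy

# One-time model downloads (assumed model names):
#   python -m spacy download en_core_web_sm
#   python -c "import benepar; benepar.download('benepar_en3')"
nlp = spacy.load("en_core_web_sm")
nlp.add_pipe("benepar", config={"model": "benepar_en3"})

text = "Simple English Wikipedia is an encyclopedia. It uses basic English."
doc = nlp(text)

sentences = [sent.text for sent in doc.sents]         # analogous to the `sentences` column
parses = [sent._.parse_string for sent in doc.sents]  # analogous to the `parses` column
print(parses[0])
```

Each parse string is a bracketed constituency tree, e.g. `(S (NP ...) (VP ...) (. .))`, with one tree per entry in `sentences`.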