---
dataset_info:
  features:
  - name: text
    dtype: string
  - name: sentences
    sequence: string
  - name: parses
    sequence: string
  splits:
  - name: train
    num_bytes: 713719296
    num_examples: 769764
  download_size: 338028239
  dataset_size: 713719296
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
language:
- en
tags:
- wikipedia
- wiki
size_categories:
- 100K<n<1M
---

# Dataset Card
This dataset is a processed version of [rahular/simple-wikipedia](https://huggingface.co/datasets/rahular/simple-wikipedia), which is a dump of articles from Simple English Wikipedia.
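
For reference, the data can be loaded with the 🤗 `datasets` library as in the sketch below. The repository id is a placeholder for this dataset's actual path on the Hub.

```python
from datasets import load_dataset

# Placeholder repo id: substitute this dataset's actual Hub path.
ds = load_dataset("your-username/simple-wikipedia-parsed", split="train")

example = ds[0]
print(example["text"][:200])     # raw article text
print(example["sentences"][:2])  # first two sentences
print(example["parses"][:2])     # their constituency parse strings
```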

In addition to the raw text in the `text` column, this dataset provides two additional columns:
- `sentences`: A list of the sentences in `text`, produced by a spaCy sentence tokenizer
- `parses`: A list of constituency parse strings, one per sentence in `sentences`, generated by the [Berkeley neural parser](https://github.com/nikitakit/self-attentive-parser) (see the sketch after this list)
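
The card does not include the preprocessing script. The following is a minimal sketch of how the `sentences` and `parses` columns could be reproduced with spaCy and benepar; the specific models (`en_core_web_sm`, `benepar_en3`) are assumptions, not confirmed choices of the dataset authors.

```python
import benepar
import spacy

# One-time setup (assumed models, not confirmed by the card):
#   python -m spacy download en_core_web_sm
benepar.download("benepar_en3")

nlp = spacy.load("en_core_web_sm")
nlp.add_pipe("benepar", config={"model": "benepar_en3"})

def process(text: str) -> dict:
    """Split `text` into sentences and constituency-parse each one."""
    doc = nlp(text)
    sentences = [sent.text for sent in doc.sents]
    parses = [sent._.parse_string for sent in doc.sents]
    return {"text": text, "sentences": sentences, "parses": parses}

row = process("Simple English Wikipedia is an encyclopedia. It uses basic English.")
print(row["sentences"])
print(row["parses"][0])  # e.g. "(S (NP ...) (VP ...) (. .))"
```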