---
pretty_name: OpenWebText-Sentences
license:
- cc0-1.0
language:
- en
dataset_info:
  features:
  - name: text
    dtype: string
  splits:
  - name: train
    num_bytes: 40569863620
    num_examples: 307432490
  download_size: 25671862280
  dataset_size: 40569863620
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
---
# OpenWebText-Sentences Dataset

## Overview

This dataset is derived from the popular OpenWebText corpus (see the citation below). It contains the same text content as the original OpenWebText, but split into individual sentences.
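The dataset uses the standard Hugging Face layout declared in the metadata above: a single `default` config whose `train` split is stored as Parquet shards under `data/train-*`. Below is a minimal loading sketch using the `datasets` library; the repository ID is a placeholder (substitute the actual Hub path of this dataset), and streaming is used so the roughly 25.7 GB download is not pulled up front.

```python
from datasets import load_dataset

# NOTE: "<namespace>/OpenWebText-Sentences" is a placeholder repository ID;
# replace it with the actual Hub path of this dataset.
ds = load_dataset("<namespace>/OpenWebText-Sentences", split="train", streaming=True)

# Each example exposes a single "text" feature holding one sentence.
for i, example in enumerate(ds):
    print(example["text"])
    if i >= 4:  # peek at the first five sentences only
        break
```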
## Key Features
- Content: All text from the original OpenWebText dataset
- Format: Each sentence is stored as an individual row, in Parquet format for faster access
- Order: Sentences appear in the same order as the text of the original OpenWebText documents
- Sentence splitting: Sentences were split with the pre-trained English "Punkt" tokenizer from NLTK 3.9.1 (a minimal sketch follows this list)
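The splitting step can be approximated as follows. This is an illustrative sketch assuming documents are processed one at a time with NLTK's Punkt sentence tokenizer, not the exact script used to build the dataset; the example document is invented.

```python
import nltk
from nltk.tokenize import sent_tokenize

# NLTK 3.9 loads the Punkt models from the "punkt_tab" resource.
nltk.download("punkt_tab")

# Hypothetical OpenWebText-style document (one long string of prose).
document = (
    "OpenWebText is a web-scraped corpus. Each document contains many sentences. "
    "This dataset stores them one per row, in their original order."
)

# Split the document into sentences; the original order is preserved.
sentences = sent_tokenize(document, language="english")
for sentence in sentences:
    print(sentence)
```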
## OpenWebText-Sentences Information
- Size: 25.7 GB (Parquet download); 40.6 GB (generated dataset)
- Number of sentences: 307,432,490
- Language: English
## OpenWebText Information
- Size: 41.70 GB (generated dataset)
- Number of documents: 8,013,769
- Language: English
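For scale, dividing the sentence count by the original document count gives roughly 38 sentences per OpenWebText document on average (307,432,490 / 8,013,769 ≈ 38.4).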
## Citation
When using this dataset, please cite the original OpenWebText corpus:
```bibtex
@misc{Gokaslan2019OpenWeb,
  title={OpenWebText Corpus},
  author={Gokaslan, Aaron and Cohen, Vanya and Pavlick, Ellie and Tellex, Stefanie},
  howpublished={\url{http://Skylion007.github.io/OpenWebTextCorpus}},
  year={2019}
}
```