---
dataset_info:
- config_name: cc-by
  features:
  - name: id
    dtype: string
  - name: idx
    dtype: int64
  - name: paragraph
    dtype: string
  splits:
  - name: train
    num_bytes: 151115318
    num_examples: 139463
  download_size: 76199216
  dataset_size: 151115318
- config_name: cc-by-nc
  features:
  - name: id
    dtype: string
  - name: idx
    dtype: int64
  - name: paragraph
    dtype: string
  splits:
  - name: train
    num_bytes: 78538396
    num_examples: 69457
  download_size: 39741294
  dataset_size: 78538396
configs:
- config_name: cc-by
  data_files:
  - split: train
    path: cc-by/train-*
- config_name: cc-by-nc
  data_files:
  - split: train
    path: cc-by-nc/train-*
---

# ChemRxiv Paragraphs

This dataset contains paragraphs from ChemRxiv papers released under **CC BY 4.0** and **CC BY-NC 4.0** licenses, sourced from the [BASF-AI/ChemRxiv-Papers](https://huggingface.co/datasets/BASF-AI/ChemRxiv-Papers) dataset. Paragraphs are extracted with [Grobid](https://github.com/kermitt2/grobid) and filtered by average log word probability, following an approach similar to [allenai/peS2o](https://huggingface.co/datasets/allenai/peS2o). Paragraphs with fewer than 50 words are excluded.

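The exact filtering code and thresholds are not published on this card, but the two rules above (a fluency score based on average log word probability, plus the 50-word minimum) can be sketched roughly as follows. The unigram model, the tokenization regex, and the `threshold` value are illustrative assumptions, not the actual pipeline.

```python
import math
import re
from collections import Counter

def build_unigram_model(reference_texts):
    """Estimate add-one-smoothed log P(word) from a reference corpus."""
    counts = Counter(
        word
        for text in reference_texts
        for word in re.findall(r"[a-z']+", text.lower())
    )
    total = sum(counts.values())
    vocab = len(counts) + 1  # reserve one slot for unseen words
    log_probs = {w: math.log((c + 1) / (total + vocab)) for w, c in counts.items()}
    unk_log_prob = math.log(1 / (total + vocab))
    return log_probs, unk_log_prob

def avg_log_word_prob(paragraph, log_probs, unk_log_prob):
    """Mean per-word log probability; garbled or boilerplate text scores low."""
    words = re.findall(r"[a-z']+", paragraph.lower())
    if not words:
        return float("-inf")
    return sum(log_probs.get(w, unk_log_prob) for w in words) / len(words)

def keep_paragraph(paragraph, log_probs, unk_log_prob, threshold=-9.0):
    """Length filter plus fluency filter (threshold is a placeholder value)."""
    if len(paragraph.split()) < 50:  # stated rule: drop paragraphs under 50 words
        return False
    return avg_log_word_prob(paragraph, log_probs, unk_log_prob) >= threshold
```
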
The number of unique papers in each license category is as follows:

- **CC BY 4.0:** 5,848 papers
- **CC BY-NC 4.0:** 3,082 papers

To obtain metadata for each paper, join on the `id` column with the [BASF-AI/ChemRxiv-Papers](https://huggingface.co/datasets/BASF-AI/ChemRxiv-Papers) dataset.

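A minimal sketch of such a join in pandas is shown below; the `split='train'` and default-config assumptions for [BASF-AI/ChemRxiv-Papers](https://huggingface.co/datasets/BASF-AI/ChemRxiv-Papers) are illustrative, and only the shared `id` column is guaranteed by this card.

```python
import datasets

# Load the CC BY paragraphs and the paper-level metadata.
paragraphs = datasets.load_dataset('BASF-AI/ChemRxiv-Paragraphs', name='cc-by', split='train')
papers = datasets.load_dataset('BASF-AI/ChemRxiv-Papers', split='train')  # assumed default config/split

# Join in pandas: each paragraph row picks up the metadata of its paper via `id`.
merged = paragraphs.to_pandas().merge(
    papers.to_pandas(), on='id', how='left', suffixes=('', '_paper')
)
```
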
To access paragraphs for a specific license, use the `name` argument as follows:

```python
import datasets

cc_by = datasets.load_dataset('BASF-AI/ChemRxiv-Paragraphs', name='cc-by')
cc_by_nc = datasets.load_dataset('BASF-AI/ChemRxiv-Paragraphs', name='cc-by-nc')
```
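
Each config exposes a single `train` split with the `id`, `idx`, and `paragraph` columns declared in the metadata above, which can be inspected directly:

```python
print(cc_by['train'].column_names)            # expected: ['id', 'idx', 'paragraph']
print(cc_by['train'].num_rows)                # 139463 paragraphs in the cc-by config
print(cc_by['train'][0]['paragraph'][:200])   # preview the first paragraph
```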