|
--- |
|
annotations_creators: |
|
- crowdsourced |
|
language_creators: |
|
- crowdsourced |
|
language: |
|
- en |
|
license: |
|
- unknown |
|
multilinguality: |
|
- monolingual |
|
size_categories: |
|
- 10K<n<100K |
|
source_datasets: |
|
- original |
|
task_categories: |
|
- text2text-generation |
|
task_ids: |
|
- open-domain-abstractive-qa |
|
paperswithcode_id: break |
|
pretty_name: BREAK |
|
dataset_info: |
|
- config_name: QDMR |
|
features: |
|
- name: question_id |
|
dtype: string |
|
- name: question_text |
|
dtype: string |
|
- name: decomposition |
|
dtype: string |
|
- name: operators |
|
dtype: string |
|
- name: split |
|
dtype: string |
|
splits: |
|
- name: train |
|
num_bytes: 12757200 |
|
num_examples: 44321 |
|
- name: validation |
|
num_bytes: 2231632 |
|
num_examples: 7760 |
|
- name: test |
|
num_bytes: 894558 |
|
num_examples: 8069 |
|
download_size: 5175508 |
|
dataset_size: 15883390 |
|
- config_name: QDMR-high-level |
|
features: |
|
- name: question_id |
|
dtype: string |
|
- name: question_text |
|
dtype: string |
|
- name: decomposition |
|
dtype: string |
|
- name: operators |
|
dtype: string |
|
- name: split |
|
dtype: string |
|
splits: |
|
- name: train |
|
num_bytes: 5134938 |
|
num_examples: 17503 |
|
- name: validation |
|
num_bytes: 912408 |
|
num_examples: 3130 |
|
- name: test |
|
num_bytes: 479919 |
|
num_examples: 3195 |
|
download_size: 3113187 |
|
dataset_size: 6527265 |
|
- config_name: QDMR-high-level-lexicon |
|
features: |
|
- name: source |
|
dtype: string |
|
- name: allowed_tokens |
|
dtype: string |
|
splits: |
|
- name: train |
|
num_bytes: 23227946 |
|
num_examples: 17503 |
|
- name: validation |
|
num_bytes: 4157495 |
|
num_examples: 3130 |
|
- name: test |
|
num_bytes: 4239547 |
|
num_examples: 3195 |
|
download_size: 5663924 |
|
dataset_size: 31624988 |
|
- config_name: QDMR-lexicon |
|
features: |
|
- name: source |
|
dtype: string |
|
- name: allowed_tokens |
|
dtype: string |
|
splits: |
|
- name: train |
|
num_bytes: 56896433 |
|
num_examples: 44321 |
|
- name: validation |
|
num_bytes: 9934015 |
|
num_examples: 7760 |
|
- name: test |
|
num_bytes: 10328787 |
|
num_examples: 8069 |
|
download_size: 10818266 |
|
dataset_size: 77159235 |
|
- config_name: logical-forms |
|
features: |
|
- name: question_id |
|
dtype: string |
|
- name: question_text |
|
dtype: string |
|
- name: decomposition |
|
dtype: string |
|
- name: operators |
|
dtype: string |
|
- name: split |
|
dtype: string |
|
- name: program |
|
dtype: string |
|
splits: |
|
- name: train |
|
num_bytes: 19783061 |
|
num_examples: 44098 |
|
- name: validation |
|
num_bytes: 3498114 |
|
num_examples: 7719 |
|
- name: test |
|
num_bytes: 920007 |
|
num_examples: 8006 |
|
download_size: 7572815 |
|
dataset_size: 24201182 |
|
configs: |
|
- config_name: QDMR |
|
data_files: |
|
- split: train |
|
path: QDMR/train-* |
|
- split: validation |
|
path: QDMR/validation-* |
|
- split: test |
|
path: QDMR/test-* |
|
- config_name: QDMR-high-level |
|
data_files: |
|
- split: train |
|
path: QDMR-high-level/train-* |
|
- split: validation |
|
path: QDMR-high-level/validation-* |
|
- split: test |
|
path: QDMR-high-level/test-* |
|
- config_name: QDMR-high-level-lexicon |
|
data_files: |
|
- split: train |
|
path: QDMR-high-level-lexicon/train-* |
|
- split: validation |
|
path: QDMR-high-level-lexicon/validation-* |
|
- split: test |
|
path: QDMR-high-level-lexicon/test-* |
|
- config_name: QDMR-lexicon |
|
data_files: |
|
- split: train |
|
path: QDMR-lexicon/train-* |
|
- split: validation |
|
path: QDMR-lexicon/validation-* |
|
- split: test |
|
path: QDMR-lexicon/test-* |
|
- config_name: logical-forms |
|
data_files: |
|
- split: train |
|
path: logical-forms/train-* |
|
- split: validation |
|
path: logical-forms/validation-* |
|
- split: test |
|
path: logical-forms/test-* |
|
--- |
|
|
|
# Dataset Card for "break_data" |
|
|
|
## Table of Contents |
|
- [Dataset Description](#dataset-description) |
|
- [Dataset Summary](#dataset-summary) |
|
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) |
|
- [Languages](#languages) |
|
- [Dataset Structure](#dataset-structure) |
|
- [Data Instances](#data-instances) |
|
- [Data Fields](#data-fields) |
|
- [Data Splits](#data-splits) |
|
- [Dataset Creation](#dataset-creation) |
|
- [Curation Rationale](#curation-rationale) |
|
- [Source Data](#source-data) |
|
- [Annotations](#annotations) |
|
- [Personal and Sensitive Information](#personal-and-sensitive-information) |
|
- [Considerations for Using the Data](#considerations-for-using-the-data) |
|
- [Social Impact of Dataset](#social-impact-of-dataset) |
|
- [Discussion of Biases](#discussion-of-biases) |
|
- [Other Known Limitations](#other-known-limitations) |
|
- [Additional Information](#additional-information) |
|
- [Dataset Curators](#dataset-curators) |
|
- [Licensing Information](#licensing-information) |
|
- [Citation Information](#citation-information) |
|
- [Contributions](#contributions) |
|
|
|
## Dataset Description |
|
|
|
- **Homepage:** [https://github.com/allenai/Break](https://github.com/allenai/Break) |
|
- **Repository:** [https://github.com/allenai/Break](https://github.com/allenai/Break)

- **Paper:** [Break It Down: A Question Understanding Benchmark](https://arxiv.org/abs/2001.11770)
|
- **Point of Contact:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
|
- **Size of downloaded dataset files:** 79.86 MB |
|
- **Size of the generated dataset:** 155.55 MB |
|
- **Total amount of disk used:** 235.39 MB |
|
|
|
### Dataset Summary |
|
|
|
Break is a human-annotated dataset of natural language questions and their Question Decomposition Meaning Representations
(QDMRs). It consists of 83,978 examples sampled from 10 question answering datasets over text, images, and databases.
The Break repository linked above contains the dataset along with details on the exact data format.
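
For quick experimentation, any of the configurations listed under [Dataset Structure](#dataset-structure) can be loaded with the `datasets` library. The snippet below is a minimal sketch that assumes the dataset is hosted under the `break_data` identifier used in this card's title.

```python
from datasets import load_dataset

# Minimal sketch: load the main QDMR configuration.
# Assumes the dataset is available under the "break_data" identifier from this card's title.
qdmr = load_dataset("break_data", "QDMR")

print(qdmr)              # DatasetDict with train / validation / test splits
print(qdmr["train"][0])  # first training example
```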
|
|
|
### Supported Tasks and Leaderboards |
|
|
|
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
|
|
|
### Languages |
|
|
|
The questions and decompositions in Break are in English (`en`).
|
|
|
## Dataset Structure |
|
|
|
### Data Instances |
|
|
|
#### QDMR |
|
|
|
- **Size of downloaded dataset files:** 15.97 MB |
|
- **Size of the generated dataset:** 15.93 MB |
|
- **Total amount of disk used:** 31.90 MB |
|
|
|
An example from the 'validation' split looks as follows.
|
```
{
    "decomposition": "return flights ;return #1 from denver ;return #2 to philadelphia ;return #3 if available",
    "operators": "['select', 'filter', 'filter', 'filter']",
    "question_id": "ATIS_dev_0",
    "question_text": "what flights are available tomorrow from denver to philadelphia ",
    "split": "dev"
}
```
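
The `decomposition` field chains the QDMR steps into a single `;`-separated string, and `operators` is stored as the string form of a Python list. A minimal sketch for unpacking both, using the example above:

```python
import ast

example = {
    "decomposition": "return flights ;return #1 from denver ;return #2 to philadelphia ;return #3 if available",
    "operators": "['select', 'filter', 'filter', 'filter']",
}

# Each QDMR step is separated by ";"; "#N" refers back to the result of step N.
steps = [step.strip() for step in example["decomposition"].split(";")]

# `operators` is the string form of a Python list, so literal_eval recovers it.
operators = ast.literal_eval(example["operators"])

for op, step in zip(operators, steps):
    print(f"{op:>7} | {step}")
```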
|
|
|
#### QDMR-high-level |
|
|
|
- **Size of downloaded dataset files:** 15.97 MB |
|
- **Size of the generated dataset:** 6.54 MB |
|
- **Total amount of disk used:** 22.51 MB |
|
|
|
An example from the 'train' split looks as follows.
|
```
{
    "decomposition": "return ground transportation ;return #1 which is available ;return #2 from the pittsburgh airport ;return #3 to downtown ;return the cost of #4",
    "operators": "['select', 'filter', 'filter', 'filter', 'project']",
    "question_id": "ATIS_dev_102",
    "question_text": "what ground transportation is available from the pittsburgh airport to downtown and how much does it cost ",
    "split": "dev"
}
```
|
|
|
#### QDMR-high-level-lexicon |
|
|
|
- **Size of downloaded dataset files:** 15.97 MB |
|
- **Size of the generated dataset:** 31.64 MB |
|
- **Total amount of disk used:** 47.61 MB |
|
|
|
An example from the 'train' split looks as follows.
|
```
This example was too long and was cropped:

{
    "allowed_tokens": "\"['higher than', 'same as', 'what ', 'and ', 'than ', 'at most', 'he', 'distinct', 'House', 'two', 'at least', 'or ', 'date', 'o...",
    "source": "What office, also held by a member of the Maine House of Representatives, did James K. Polk hold before he was president?"
}
```
|
|
|
#### QDMR-lexicon |
|
|
|
- **Size of downloaded dataset files:** 15.97 MB |
|
- **Size of the generated dataset:** 77.19 MB |
|
- **Total amount of disk used:** 93.16 MB |
|
|
|
An example from the 'validation' split looks as follows.
|
```
This example was too long and was cropped:

{
    "allowed_tokens": "\"['higher than', 'same as', 'what ', 'and ', 'than ', 'at most', 'distinct', 'two', 'at least', 'or ', 'date', 'on ', '@@14@@', ...",
    "source": "what flights are available tomorrow from denver to philadelphia "
}
```
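
In both lexicon configurations, `allowed_tokens` is also stored as the string form of a Python list and, as the cropped examples above suggest, may carry an extra outer layer of quotes. A hedged sketch for recovering the token list:

```python
import ast

def parse_allowed_tokens(raw: str) -> list:
    """Recover the allowed-token list from its serialized string form.

    Assumption: the field is a stringified Python list, possibly wrapped in an
    extra pair of double quotes as the cropped examples above suggest.
    """
    return ast.literal_eval(raw.strip().strip('"'))

# Toy value mirroring the (cropped) examples above.
tokens = parse_allowed_tokens("\"['higher than', 'same as', 'what ', 'and ']\"")
print(tokens[:2])  # ['higher than', 'same as']
```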
|
|
|
#### logical-forms |
|
|
|
- **Size of downloaded dataset files:** 15.97 MB |
|
- **Size of the generated dataset:** 24.25 MB |
|
- **Total amount of disk used:** 40.22 MB |
|
|
|
An example from the 'train' split looks as follows.
|
```
{
    "decomposition": "return ground transportation ;return #1 which is available ;return #2 from the pittsburgh airport ;return #3 to downtown ;return the cost of #4",
    "operators": "['select', 'filter', 'filter', 'filter', 'project']",
    "program": "some program",
    "question_id": "ATIS_dev_102",
    "question_text": "what ground transportation is available from the pittsburgh airport to downtown and how much does it cost ",
    "split": "dev"
}
```
|
|
|
### Data Fields |
|
|
|
The data fields are the same across all splits.
|
|
|
#### QDMR |
|
- `question_id`: a `string` feature. |
|
- `question_text`: a `string` feature. |
|
- `decomposition`: a `string` feature. |
|
- `operators`: a `string` feature. |
|
- `split`: a `string` feature. |
|
|
|
#### QDMR-high-level |
|
- `question_id`: a `string` feature. |
|
- `question_text`: a `string` feature. |
|
- `decomposition`: a `string` feature. |
|
- `operators`: a `string` feature. |
|
- `split`: a `string` feature. |
|
|
|
#### QDMR-high-level-lexicon |
|
- `source`: a `string` feature. |
|
- `allowed_tokens`: a `string` feature. |
|
|
|
#### QDMR-lexicon |
|
- `source`: a `string` feature. |
|
- `allowed_tokens`: a `string` feature. |
|
|
|
#### logical-forms |
|
- `question_id`: a `string` feature. |
|
- `question_text`: a `string` feature. |
|
- `decomposition`: a `string` feature. |
|
- `operators`: a `string` feature. |
|
- `split`: a `string` feature. |
|
- `program`: a `string` feature. |
|
|
|
### Data Splits |
|
|
|
| name                    | train | validation | test |
|-------------------------|------:|-----------:|-----:|
| QDMR                    | 44321 |       7760 | 8069 |
| QDMR-high-level         | 17503 |       3130 | 3195 |
| QDMR-high-level-lexicon | 17503 |       3130 | 3195 |
| QDMR-lexicon            | 44321 |       7760 | 8069 |
| logical-forms           | 44098 |       7719 | 8006 |
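
A quick sanity check that the loaded splits match the counts above (again assuming the `break_data` identifier from this card's title):

```python
from datasets import load_dataset

# Configurations listed in the table above.
CONFIGS = ["QDMR", "QDMR-high-level", "QDMR-high-level-lexicon", "QDMR-lexicon", "logical-forms"]

for config in CONFIGS:
    ds = load_dataset("break_data", config)
    print(f"{config:<25}", {split: len(ds[split]) for split in ds})
```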
|
|
|
## Dataset Creation |
|
|
|
### Curation Rationale |
|
|
|
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
|
|
|
### Source Data |
|
|
|
#### Initial Data Collection and Normalization |
|
|
|
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
|
|
|
#### Who are the source language producers? |
|
|
|
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
|
|
|
### Annotations |
|
|
|
#### Annotation process |
|
|
|
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
|
|
|
#### Who are the annotators? |
|
|
|
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
|
|
|
### Personal and Sensitive Information |
|
|
|
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
|
|
|
## Considerations for Using the Data |
|
|
|
### Social Impact of Dataset |
|
|
|
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
|
|
|
### Discussion of Biases |
|
|
|
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
|
|
|
### Other Known Limitations |
|
|
|
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
|
|
|
## Additional Information |
|
|
|
### Dataset Curators |
|
|
|
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
|
|
|
### Licensing Information |
|
|
|
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
|
|
|
### Citation Information |
|
|
|
```
@article{Wolfson2020Break,
  title={Break It Down: A Question Understanding Benchmark},
  author={Wolfson, Tomer and Geva, Mor and Gupta, Ankit and Gardner, Matt and Goldberg, Yoav and Deutch, Daniel and Berant, Jonathan},
  journal={Transactions of the Association for Computational Linguistics},
  year={2020},
}
```
|
|
|
### Contributions |
|
|
|
Thanks to [@patrickvonplaten](https://github.com/patrickvonplaten), [@lewtun](https://github.com/lewtun), and [@thomwolf](https://github.com/thomwolf) for adding this dataset.