Datasets: coqa
Modalities: Text
Formats: parquet
Sub-tasks: extractive-qa
Languages: English
ArXiv: 1808.07042
Libraries: Datasets, pandas
License: other
parquet-converter committed
Commit d31de01 · 1 parent: aab93b0

Update parquet files
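This commit is the automated Parquet conversion of the repository: the loading script (`coqa.py`), its metadata (`dataset_infos.json`), the dataset card, and the LFS config are deleted, and pre-built Parquet shards are added under `default/`. From a user's point of view loading is unchanged; the sketch below is illustrative only, and pinning `revision` to this commit is an assumption rather than something the commit documents.

```python
# Minimal sketch (not part of the commit): load the parquet-converted dataset.
from datasets import load_dataset

# Pinning to commit d31de01 is an assumed, optional reproducibility choice.
ds = load_dataset("coqa", revision="d31de01")
print(ds)  # DatasetDict with "train" (7199 rows) and "validation" (500 rows)
```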
.gitattributes DELETED
@@ -1,27 +0,0 @@
- *.7z filter=lfs diff=lfs merge=lfs -text
- *.arrow filter=lfs diff=lfs merge=lfs -text
- *.bin filter=lfs diff=lfs merge=lfs -text
- *.bin.* filter=lfs diff=lfs merge=lfs -text
- *.bz2 filter=lfs diff=lfs merge=lfs -text
- *.ftz filter=lfs diff=lfs merge=lfs -text
- *.gz filter=lfs diff=lfs merge=lfs -text
- *.h5 filter=lfs diff=lfs merge=lfs -text
- *.joblib filter=lfs diff=lfs merge=lfs -text
- *.lfs.* filter=lfs diff=lfs merge=lfs -text
- *.model filter=lfs diff=lfs merge=lfs -text
- *.msgpack filter=lfs diff=lfs merge=lfs -text
- *.onnx filter=lfs diff=lfs merge=lfs -text
- *.ot filter=lfs diff=lfs merge=lfs -text
- *.parquet filter=lfs diff=lfs merge=lfs -text
- *.pb filter=lfs diff=lfs merge=lfs -text
- *.pt filter=lfs diff=lfs merge=lfs -text
- *.pth filter=lfs diff=lfs merge=lfs -text
- *.rar filter=lfs diff=lfs merge=lfs -text
- saved_model/**/* filter=lfs diff=lfs merge=lfs -text
- *.tar.* filter=lfs diff=lfs merge=lfs -text
- *.tflite filter=lfs diff=lfs merge=lfs -text
- *.tgz filter=lfs diff=lfs merge=lfs -text
- *.xz filter=lfs diff=lfs merge=lfs -text
- *.zip filter=lfs diff=lfs merge=lfs -text
- *.zstandard filter=lfs diff=lfs merge=lfs -text
- *tfevents* filter=lfs diff=lfs merge=lfs -text
README.md DELETED
@@ -1,224 +0,0 @@
- ---
- annotations_creators:
- - crowdsourced
- language:
- - en
- language_creators:
- - found
- license:
- - other
- multilinguality:
- - monolingual
- pretty_name: 'CoQA: Conversational Question Answering Challenge'
- size_categories:
- - 1K<n<10K
- source_datasets:
- - extended|race
- - extended|cnn_dailymail
- - extended|wikipedia
- - extended|other
- task_categories:
- - question-answering
- task_ids:
- - extractive-qa
- paperswithcode_id: coqa
- tags:
- - conversational-qa
- dataset_info:
-   features:
-   - name: source
-     dtype: string
-   - name: story
-     dtype: string
-   - name: questions
-     sequence: string
-   - name: answers
-     sequence:
-     - name: input_text
-       dtype: string
-     - name: answer_start
-       dtype: int32
-     - name: answer_end
-       dtype: int32
-   splits:
-   - name: train
-     num_bytes: 17981459
-     num_examples: 7199
-   - name: validation
-     num_bytes: 1225518
-     num_examples: 500
-   download_size: 58092681
-   dataset_size: 19206977
- ---
-
- # Dataset Card for "coqa"
-
- ## Table of Contents
- - [Dataset Description](#dataset-description)
-   - [Dataset Summary](#dataset-summary)
-   - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
-   - [Languages](#languages)
- - [Dataset Structure](#dataset-structure)
-   - [Data Instances](#data-instances)
-   - [Data Fields](#data-fields)
-   - [Data Splits](#data-splits)
- - [Dataset Creation](#dataset-creation)
-   - [Curation Rationale](#curation-rationale)
-   - [Source Data](#source-data)
-   - [Annotations](#annotations)
-   - [Personal and Sensitive Information](#personal-and-sensitive-information)
- - [Considerations for Using the Data](#considerations-for-using-the-data)
-   - [Social Impact of Dataset](#social-impact-of-dataset)
-   - [Discussion of Biases](#discussion-of-biases)
-   - [Other Known Limitations](#other-known-limitations)
- - [Additional Information](#additional-information)
-   - [Dataset Curators](#dataset-curators)
-   - [Licensing Information](#licensing-information)
-   - [Citation Information](#citation-information)
-   - [Contributions](#contributions)
-
- ## Dataset Description
-
- - **Homepage:** [https://stanfordnlp.github.io/coqa/](https://stanfordnlp.github.io/coqa/)
- - **Repository:** https://github.com/stanfordnlp/coqa-baselines
- - **Paper:** [CoQA: A Conversational Question Answering Challenge](https://arxiv.org/abs/1808.07042)
- - **Point of Contact:** [Google Group](https://groups.google.com/forum/#!forum/coqa), [Siva Reddy](mailto:[email protected]), [Danqi Chen](mailto:[email protected])
- - **Size of downloaded dataset files:** 55.40 MB
- - **Size of the generated dataset:** 18.35 MB
- - **Total amount of disk used:** 73.75 MB
-
- ### Dataset Summary
-
- CoQA is a large-scale dataset for building Conversational Question Answering systems.
-
- Our dataset contains 127k questions with answers, obtained from 8k conversations about text passages from seven diverse domains. The questions are conversational, and the answers are free-form text with their corresponding evidence highlighted in the passage.
-
- ### Supported Tasks and Leaderboards
-
- [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
-
- ### Languages
-
- [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
-
- ## Dataset Structure
-
- ### Data Instances
-
- #### default
-
- - **Size of downloaded dataset files:** 55.40 MB
- - **Size of the generated dataset:** 18.35 MB
- - **Total amount of disk used:** 73.75 MB
-
- An example of 'train' looks as follows.
- ```
- This example was too long and was cropped:
-
- {
-     "answers": "{\"answer_end\": [179, 494, 511, 545, 879, 1127, 1128, 94, 150, 412, 1009, 1046, 643, -1, 764, 724, 125, 1384, 881, 910], \"answer_...",
-     "questions": "[\"When was the Vat formally opened?\", \"what is the library for?\", \"for what subjects?\", \"and?\", \"what was started in 2014?\", \"ho...",
-     "source": "wikipedia",
-     "story": "\"The Vatican Apostolic Library (), more commonly called the Vatican Library or simply the Vat, is the library of the Holy See, l..."
- }
- ```
-
- ### Data Fields
-
- The data fields are the same among all splits.
-
- #### default
- - `source`: a `string` feature.
- - `story`: a `string` feature.
- - `questions`: a `list` of `string` features.
- - `answers`: a dictionary feature containing:
-   - `input_text`: a `string` feature.
-   - `answer_start`: an `int32` feature.
-   - `answer_end`: an `int32` feature.
-
- ### Data Splits
-
- | name  |train|validation|
- |-------|----:|---------:|
- |default| 7199|       500|
-
- ## Dataset Creation
-
- ### Curation Rationale
-
- [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
-
- ### Source Data
-
- #### Initial Data Collection and Normalization
-
- [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
-
- #### Who are the source language producers?
-
- [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
-
- ### Annotations
-
- #### Annotation process
-
- [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
-
- #### Who are the annotators?
-
- [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
-
- ### Personal and Sensitive Information
-
- [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
-
- ## Considerations for Using the Data
-
- ### Social Impact of Dataset
-
- [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
-
- ### Discussion of Biases
-
- [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
-
- ### Other Known Limitations
-
- [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
-
- ## Additional Information
-
- ### Dataset Curators
-
- [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
-
- ### Licensing Information
-
- CoQA contains passages from seven domains. We make five of these public under the following licenses:
- - Literature and Wikipedia passages are shared under [CC BY-SA 4.0](https://creativecommons.org/licenses/by-sa/4.0/) license.
- - Children's stories are collected from [MCTest](https://www.microsoft.com/en-us/research/publication/mctest-challenge-dataset-open-domain-machine-comprehension-text/) which comes with [MSR-LA](https://github.com/mcobzarenco/mctest/blob/master/data/MCTest/LICENSE.pdf) license.
- - Middle/High school exam passages are collected from [RACE](https://arxiv.org/abs/1704.04683) which comes with its [own](http://www.cs.cmu.edu/~glai1/data/race/) license.
- - News passages are collected from the [DeepMind CNN dataset](https://arxiv.org/abs/1506.03340) which comes with [Apache](https://github.com/deepmind/rc-data/blob/master/LICENSE) license.
-
- ### Citation Information
-
- ```
- @article{reddy-etal-2019-coqa,
-     title = "{C}o{QA}: A Conversational Question Answering Challenge",
-     author = "Reddy, Siva and
-       Chen, Danqi and
-       Manning, Christopher D.",
-     journal = "Transactions of the Association for Computational Linguistics",
-     volume = "7",
-     year = "2019",
-     address = "Cambridge, MA",
-     publisher = "MIT Press",
-     url = "https://aclanthology.org/Q19-1016",
-     doi = "10.1162/tacl_a_00266",
-     pages = "249--266",
- }
- ```
-
- ### Contributions
-
- Thanks to [@patrickvonplaten](https://github.com/patrickvonplaten), [@lewtun](https://github.com/lewtun), [@thomwolf](https://github.com/thomwolf), [@mariamabarham](https://github.com/mariamabarham), [@ojasaar](https://github.com/ojasaar), [@lhoestq](https://github.com/lhoestq) for adding this dataset.
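The card's "Data Fields" section above documents `answers` as a sequence of dicts, which the `datasets` library exposes per example as a single dict of parallel lists. A short sketch of pairing each question with its answer text and evidence span under that layout (the `-1` values visible in the cropped example mark turns without a character span):

```python
# Sketch: walk the nested fields described in the deleted card above.
from datasets import load_dataset

ds = load_dataset("coqa")
ex = ds["validation"][0]
ans = ex["answers"]  # Sequence-of-dict columns arrive as a dict of parallel lists
for q, text, start, end in zip(ex["questions"], ans["input_text"],
                               ans["answer_start"], ans["answer_end"]):
    evidence = ex["story"][start:end] if start >= 0 else None  # -1 = no span
    print(f"Q: {q}\nA: {text}\nEvidence: {evidence!r}\n")
```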
coqa.py DELETED
@@ -1,91 +0,0 @@
- """CoQA dataset."""
-
-
- import json
-
- import datasets
-
-
- _HOMEPAGE = "https://stanfordnlp.github.io/coqa/"
-
- _CITATION = """\
- @article{reddy-etal-2019-coqa,
-     title = "{C}o{QA}: A Conversational Question Answering Challenge",
-     author = "Reddy, Siva and
-       Chen, Danqi and
-       Manning, Christopher D.",
-     journal = "Transactions of the Association for Computational Linguistics",
-     volume = "7",
-     year = "2019",
-     address = "Cambridge, MA",
-     publisher = "MIT Press",
-     url = "https://aclanthology.org/Q19-1016",
-     doi = "10.1162/tacl_a_00266",
-     pages = "249--266",
- }
- """
-
- _DESCRIPTION = """\
- CoQA: A Conversational Question Answering Challenge
- """
-
- _TRAIN_DATA_URL = "https://nlp.stanford.edu/data/coqa/coqa-train-v1.0.json"
- _DEV_DATA_URL = "https://nlp.stanford.edu/data/coqa/coqa-dev-v1.0.json"
-
-
- class Coqa(datasets.GeneratorBasedBuilder):
-
-     VERSION = datasets.Version("1.0.0")
-
-     def _info(self):
-         return datasets.DatasetInfo(
-             description=_DESCRIPTION,
-             features=datasets.Features(
-                 {
-                     "source": datasets.Value("string"),
-                     "story": datasets.Value("string"),
-                     "questions": datasets.features.Sequence(datasets.Value("string")),
-                     "answers": datasets.features.Sequence(
-                         {
-                             "input_text": datasets.Value("string"),
-                             "answer_start": datasets.Value("int32"),
-                             "answer_end": datasets.Value("int32"),
-                         }
-                     ),
-                 }
-             ),
-             homepage=_HOMEPAGE,
-             citation=_CITATION,
-         )
-
-     def _split_generators(self, dl_manager):
-         """Returns SplitGenerators."""
-         urls_to_download = {"train": _TRAIN_DATA_URL, "dev": _DEV_DATA_URL}
-         downloaded_files = dl_manager.download_and_extract(urls_to_download)
-
-         return [
-             datasets.SplitGenerator(
-                 name=datasets.Split.TRAIN, gen_kwargs={"filepath": downloaded_files["train"], "split": "train"}
-             ),
-             datasets.SplitGenerator(
-                 name=datasets.Split.VALIDATION, gen_kwargs={"filepath": downloaded_files["dev"], "split": "validation"}
-             ),
-         ]
-
-     def _generate_examples(self, filepath, split):
-         """Yields examples."""
-         with open(filepath, encoding="utf-8") as f:
-             data = json.load(f)
-             for row in data["data"]:
-                 questions = [question["input_text"] for question in row["questions"]]
-                 story = row["story"]
-                 source = row["source"]
-                 answers_start = [answer["span_start"] for answer in row["answers"]]
-                 answers_end = [answer["span_end"] for answer in row["answers"]]
-                 answers = [answer["input_text"] for answer in row["answers"]]
-                 yield row["id"], {
-                     "source": source,
-                     "story": story,
-                     "questions": questions,
-                     "answers": {"input_text": answers, "answer_start": answers_start, "answer_end": answers_end},
-                 }
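The deleted script above flattens each conversation's per-turn `input_text`/`span_start`/`span_end` fields into parallel lists. With the script gone, readers fetching the raw Stanford JSON directly can reproduce the same transformation standalone; a minimal sketch, assuming a local copy of `coqa-train-v1.0.json`:

```python
# Standalone sketch of what the deleted _generate_examples did, without the
# datasets library. The local file path is an assumption.
import json

def iter_coqa_examples(filepath):
    """Yield (id, example) pairs in the same shape the deleted builder produced."""
    with open(filepath, encoding="utf-8") as f:
        data = json.load(f)
    for row in data["data"]:
        yield row["id"], {
            "source": row["source"],
            "story": row["story"],
            "questions": [q["input_text"] for q in row["questions"]],
            "answers": {
                "input_text": [a["input_text"] for a in row["answers"]],
                "answer_start": [a["span_start"] for a in row["answers"]],
                "answer_end": [a["span_end"] for a in row["answers"]],
            },
        }

# Usage: for ex_id, ex in iter_coqa_examples("coqa-train-v1.0.json"): ...
```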
dataset_infos.json DELETED
@@ -1 +0,0 @@
- {"default": {"description": "CoQA: A Conversational Question Answering Challenge\n", "citation": "@article{reddy-etal-2019-coqa,\n title = \"{C}o{QA}: A Conversational Question Answering Challenge\",\n author = \"Reddy, Siva and\n Chen, Danqi and\n Manning, Christopher D.\",\n journal = \"Transactions of the Association for Computational Linguistics\",\n volume = \"7\",\n year = \"2019\",\n address = \"Cambridge, MA\",\n publisher = \"MIT Press\",\n url = \"https://aclanthology.org/Q19-1016\",\n doi = \"10.1162/tacl_a_00266\",\n pages = \"249--266\",\n}\n", "homepage": "https://stanfordnlp.github.io/coqa/", "license": "", "features": {"source": {"dtype": "string", "id": null, "_type": "Value"}, "story": {"dtype": "string", "id": null, "_type": "Value"}, "questions": {"feature": {"dtype": "string", "id": null, "_type": "Value"}, "length": -1, "id": null, "_type": "Sequence"}, "answers": {"feature": {"input_text": {"dtype": "string", "id": null, "_type": "Value"}, "answer_start": {"dtype": "int32", "id": null, "_type": "Value"}, "answer_end": {"dtype": "int32", "id": null, "_type": "Value"}}, "length": -1, "id": null, "_type": "Sequence"}}, "post_processed": null, "supervised_keys": null, "task_templates": null, "builder_name": "coqa", "config_name": "default", "version": {"version_str": "1.0.0", "description": null, "major": 1, "minor": 0, "patch": 0}, "splits": {"train": {"name": "train", "num_bytes": 17981459, "num_examples": 7199, "dataset_name": "coqa"}, "validation": {"name": "validation", "num_bytes": 1225518, "num_examples": 500, "dataset_name": "coqa"}}, "download_checksums": {"https://nlp.stanford.edu/data/coqa/coqa-train-v1.0.json": {"num_bytes": 49001836, "checksum": "b0fdb2bc1bd38dd3ca2ce5fa2ac3e02c6288ac914f241ac409a655ffb6619fa6"}, "https://nlp.stanford.edu/data/coqa/coqa-dev-v1.0.json": {"num_bytes": 9090845, "checksum": "dfa367a9733ce53222918d0231d9b3bedc2b8ee831a2845f62dfc70701f2540a"}}, "download_size": 58092681, "post_processing_size": null, "dataset_size": 19206977, "size_in_bytes": 77299658}}
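The deleted infos file records sha256 checksums for the two source downloads, which remain useful for verifying the raw Stanford files. A minimal sketch using only the standard library (the local path is an assumption):

```python
# Sketch: verify a downloaded CoQA JSON against the sha256 recorded in
# the deleted dataset_infos.json above.
import hashlib

EXPECTED = "b0fdb2bc1bd38dd3ca2ce5fa2ac3e02c6288ac914f241ac409a655ffb6619fa6"  # coqa-train-v1.0.json

def sha256_of(path, chunk_size=1 << 20):
    """Stream a file through sha256 without loading it whole."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

assert sha256_of("coqa-train-v1.0.json") == EXPECTED  # local path is an assumption
```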
 
 
default/coqa-train.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:c93fc08e88acb9bdc5782f6397215cf2479164b802343f483627879b49fc011c
+ size 11394342
default/coqa-validation.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:7f6bce923639f965034e2845a0b8627daa590822cb5ffb632fe48446175e8c93
+ size 793143
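The two files added here are git-LFS pointers; the actual Parquet shards they reference can also be read directly with pandas (one of the libraries listed in the header). A minimal sketch, assuming pyarrow plus an fsspec HTTP backend are installed, and assuming the standard Hub resolve-URL pattern for this repo:

```python
# Sketch: read the converted parquet shards directly. The repo id and
# resolve-URL pattern below are assumptions; adjust if the dataset lives
# under a namespace.
import pandas as pd

base = "https://huggingface.co/datasets/coqa/resolve/main/default"
train = pd.read_parquet(f"{base}/coqa-train.parquet")            # ~11.4 MB shard
validation = pd.read_parquet(f"{base}/coqa-validation.parquet")  # ~0.8 MB shard
print(train.shape, validation.shape)  # expect (7199, 4) and (500, 4)
```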