Abinaya Mahendiran committed
Commit fd27136 · 1 Parent(s): 445f17c
Updated README

README.md CHANGED
@@ -68,11 +68,12 @@ to look similar to answerable ones. To do well on SQuAD2.0, systems must not only
also determine when no answer is supported by the paragraph and abstain from answering.

### Supported Tasks and Leaderboards
+
SQuAD2.0 tests the ability of a system not only to answer reading comprehension questions, but also to abstain when presented with a question that cannot be answered based on the provided paragraph. The leaderboard is available on the [Homepage](https://rajpurkar.github.io/SQuAD-explorer/).

### Languages

-
+English

## Dataset Structure

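To make the abstention requirement concrete: in the `squad_v2` schema on the Hugging Face Hub, unanswerable questions carry an empty `answers["text"]` list. A minimal sketch of separating the two cases with the `datasets` library (the `squad_v2` Hub ID refers to the original release, not the GEM variant):

```python
# Minimal sketch: split the validation set into answerable and
# unanswerable questions. In the squad_v2 schema, unanswerable
# questions have an empty answers["text"] list, so a system
# should learn to abstain on exactly these examples.
from datasets import load_dataset

validation = load_dataset("squad_v2", split="validation")

answerable = validation.filter(lambda ex: len(ex["answers"]["text"]) > 0)
unanswerable = validation.filter(lambda ex: len(ex["answers"]["text"]) == 0)

print(f"answerable: {len(answerable)}, unanswerable: {len(unanswerable)}")
```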
@@ -119,6 +120,8 @@ The data fields are the same among all splits.

### Data Splits

+The original SQuAD2.0 release has only train and dev (validation) splits; for GEM, an additional test split was carved out of the original train split.
+
| name | train | validation | test |
| --------- | -----: | ---------: | ----: |
| squad_v2 | 90403 | 11873 | 39916 |
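The split sizes in the table can be sanity-checked after loading. A sketch, assuming the GEM variant is published on the Hugging Face Hub as `GEM/squad_v2` (a hypothetical ID here; the original two-split release is available as `squad_v2`):

```python
# Sketch: load the dataset and verify the split sizes from the table.
# "GEM/squad_v2" is an assumed Hub ID for the GEM release with the extra
# test split; the original release (train/validation only) is "squad_v2".
from datasets import load_dataset

dataset = load_dataset("GEM/squad_v2")

# Expected from the table: train=90403, validation=11873, test=39916
for name, split in dataset.items():
    print(f"{name}: {len(split)} examples")
```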
@@ -128,7 +131,15 @@ The data fields are the same among all splits.

### Curation Rationale

-
+The dataset was curated in three stages:
+- Curating passages,
+- Crowdsourcing question-answer pairs on those passages, and
+- Obtaining additional answers.
+
+As part of SQuAD1.1, 10,000 high-quality English Wikipedia articles were extracted using Project Nayuki's Wikipedia internal PageRanks,
+from which 536 articles were sampled uniformly at random. From each of these articles, individual paragraphs were extracted, stripping away images, figures, and tables, and discarding paragraphs shorter than 500 characters.
+
+SQuAD2.0 combines the 100,000 questions of SQuAD1.1 with over 50,000 unanswerable questions written adversarially by crowdworkers to look similar to answerable ones.

### Source Data

@@ -144,11 +155,16 @@ The data fields are the same among all splits.

#### Annotation process

-
+The Daemo platform (Gaikwad et al., 2015), with Amazon Mechanical Turk as its backend, was used for annotation.
+
+- On each paragraph, crowdworkers were tasked with asking and answering up to 5 questions on the content of that paragraph and were asked to spend 4 minutes on every paragraph. Questions were entered in a text box and answers were highlighted in the paragraph.
+- To get an indication of human performance on SQuAD and to make the evaluation more robust, at least 2 additional answers were collected for each question in the development and test sets.
+- In the secondary answer generation task, each crowdworker was shown only the questions along with the paragraphs of an article,
+and was asked to select the shortest span in the paragraph that answered the question. If a question was not answerable by a span in the paragraph, workers were asked to submit the question without marking an answer.

#### Who are the annotators?

-
+Crowdworkers from the United States or Canada with a 97% HIT acceptance rate and a minimum of 1000 HITs were employed to create the questions.

### Personal and Sensitive Information

@@ -172,7 +188,7 @@ The data fields are the same among all splits.

### Dataset Curators

-
+The authors of the SQuAD dataset thank Durim Morina and Professor Michael Bernstein for their help in crowdsourcing the collection of the dataset, both in terms of funding and of technical support for the Daemo platform.

### Licensing Information

@@ -198,4 +214,4 @@ archivePrefix = {arXiv},

### Contributions

-Thanks to [@AbinayaM02](https://github.com/AbinayaM02) for adding this dataset to GEM.
+Thanks to [@AbinayaM02](https://github.com/AbinayaM02) for adding this dataset to GEM. All details are taken from the cited paper.