---
license: cc-by-sa-4.0
dataset_info:
  features:
  - name: id
    dtype: string
  - name: turns
    list:
    - name: id
      dtype: int64
    - name: ques_type_id
      dtype: int64
    - name: question-type
      dtype: string
    - name: description
      dtype: string
    - name: entities_in_utterance
      list: string
    - name: relations
      list: string
    - name: type_list
      list: string
    - name: speaker
      dtype: string
    - name: utterance
      dtype: string
    - name: all_entities
      list: string
    - name: active_set
      list: string
    - name: sec_ques_sub_type
      dtype: int64
    - name: sec_ques_type
      dtype: int64
    - name: set_op_choice
      dtype: int64
    - name: is_inc
      dtype: int64
    - name: count_ques_sub_type
      dtype: int64
    - name: count_ques_type
      dtype: int64
    - name: is_incomplete
      dtype: int64
    - name: inc_ques_type
      dtype: int64
    - name: set_op
      dtype: int64
    - name: bool_ques_type
      dtype: int64
    - name: entities
      list: string
    - name: clarification_step
      dtype: int64
    - name: gold_actions
      list:
        list: string
    - name: is_spurious
      dtype: bool
    - name: masked_verbalized_answer
      dtype: string
    - name: parsed_active_set
      list: string
    - name: sparql_query
      dtype: string
    - name: verbalized_all_entities
      list: string
    - name: verbalized_answer
      dtype: string
    - name: verbalized_entities_in_utterance
      list: string
    - name: verbalized_gold_actions
      list:
        list: string
    - name: verbalized_parsed_active_set
      list: string
    - name: verbalized_sparql_query
      dtype: string
    - name: verbalized_triple
      dtype: string
    - name: verbalized_type_list
      list: string
  splits:
  - name: train
    num_bytes: 6815016095
    num_examples: 152391
  - name: test
    num_bytes: 1007873839
    num_examples: 27797
  - name: validation
    num_bytes: 692344634
    num_examples: 16813
  download_size: 2406342185
  dataset_size: 8515234568
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
  - split: test
    path: data/test-*
  - split: validation
    path: data/validation-*
task_categories:
- conversational
- question-answering
tags:
- qa
- knowledge-graph
- sparql
- multi-hop
language:
- en
---
# Dataset Card for CSQA-SPARQLtoText
## Table of Contents
- [Dataset Card for CSQA-SPARQLtoText](#dataset-card-for-csqa-sparqltotext)
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported tasks](#supported-tasks)
- [Knowledge based question-answering](#knowledge-based-question-answering)
- [SPARQL queries and natural language questions](#sparql-queries-and-natural-language-questions)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Types of questions](#types-of-questions)
- [Data splits](#data-splits)
- [JSON fields](#json-fields)
- [Original fields](#original-fields)
- [New fields](#new-fields)
- [Verbalized fields](#verbalized-fields)
- [Format of the SPARQL queries](#format-of-the-sparql-queries)
- [Additional Information](#additional-information)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [This version of the corpus (with SPARQL queries)](#this-version-of-the-corpus-with-sparql-queries)
- [Original corpus (CSQA)](#original-corpus-csqa)
- [CARTON](#carton)
## Dataset Description
- **Paper:** [SPARQL-to-Text Question Generation for Knowledge-Based Conversational Applications (AACL-IJCNLP 2022)](https://aclanthology.org/2022.aacl-main.11/)
- **Point of Contact:** Gwénolé Lecorvé
### Dataset Summary
The CSQA corpus (Complex Sequential Question-Answering, see https://amritasaha1812.github.io/CSQA/) is a large corpus for conversational knowledge-based question answering. The version provided here is augmented with various fields that make it easier to run specific tasks, especially SPARQL-to-text conversion.
The original data was post-processed as follows:
1. Verbalization templates were applied to the answers, and their entities were verbalized (replaced by their labels in Wikidata)
2. Questions were parsed with the CARTON algorithm to produce a sequence of actions in a specific grammar
3. Sequences of actions were then mapped to SPARQL queries, and entities were verbalized (replaced by their labels in Wikidata)
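As a quick sanity check, the corpus can be loaded with the `datasets` library. Below is a minimal sketch (the repository id `OrangeInnov/csqa-sparqltotext` is the one used in the comparison table further down; streaming avoids downloading the whole ~2.4 GB archive up front):
```python
from datasets import load_dataset

# Stream the validation split so the full corpus is not materialized on disk.
dataset = load_dataset("OrangeInnov/csqa-sparqltotext", split="validation", streaming=True)

# Each example is one dialogue: an `id` plus a list of `turns`.
first_dialogue = next(iter(dataset))
print(first_dialogue["id"], "-", len(first_dialogue["turns"]), "turns")
```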
### Supported tasks
- Knowledge-based question-answering
- Text-to-SPARQL conversion
#### Knowledge based question-answering
Below is an example of dialogue:
- Q1: Which occupation is the profession of Edmond Yernaux ?
- A1: politician
- Q2: Which collectable has that occupation as its principal topic ?
- A2: Notitia Parliamentaria, An History of the Counties, etc.
#### SPARQL queries and natural language questions
```sparql
SELECT DISTINCT ?x WHERE
{ ?x rdf:type ontology:occupation . resource:Edmond_Yernaux property:occupation ?x }
```
is equivalent to:
```txt
Which occupation is the profession of Edmond Yernaux ?
```
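For SPARQL-to-text experiments, query/question pairs can be assembled from the `sparql_query` and `utterance` fields of the `USER` turns. A minimal sketch, assuming `turns` is exposed as a list of dicts and that turns without a parsed query carry an empty `sparql_query`:
```python
def sparql_text_pairs(dialogue):
    """Yield (SPARQL query, natural-language question) pairs from one dialogue."""
    for turn in dialogue["turns"]:
        # Only USER turns carry questions; skip turns with no attached query.
        if turn["speaker"] == "USER" and turn["sparql_query"]:
            yield turn["sparql_query"], turn["utterance"]
```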
### Languages
- English
## Dataset Structure
The corpus follows the global architecture from the original version of CSQA (https://amritasaha1812.github.io/CSQA/).
There is one directory for each of the train, dev, and test sets.
Dialogues are stored in separate directories, 100 dialogues per directory.
Finally, each dialogue is stored in a JSON file as a list of turns.
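When accessed through the `datasets` library, each example corresponds to one dialogue. A small inspection sketch, using the field names from the schema above:
```python
def summarize_dialogue(dialogue):
    """Print a one-line summary of every turn in a dialogue."""
    print(f"Dialogue {dialogue['id']}: {len(dialogue['turns'])} turns")
    for turn in dialogue["turns"]:
        # `speaker` is either USER or SYSTEM; `question-type` is the broad question category.
        print(f"  [{turn['speaker']}] ({turn['question-type']}) {turn['utterance']}")
```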
### Types of questions
Comparison of question types with related datasets:
| | | [SimpleQuestions](https://huggingface.co/datasets/OrangeInnov/simplequestions-sparqltotext) | [ParaQA](https://huggingface.co/datasets/OrangeInnov/paraqa-sparqltotext) | [LC-QuAD 2.0](https://huggingface.co/datasets/OrangeInnov/lcquad_2.0-sparqltotext) | [CSQA](https://huggingface.co/datasets/OrangeInnov/csqa-sparqltotext) | [WebNLG-QA](https://huggingface.co/datasets/OrangeInnov/webnlg-qa) |
|--------------------------|-----------------|:---------------:|:------:|:-----------:|:----:|:---------:|
| **Number of triplets in query** | 1 | ✓ | ✓ | ✓ | ✓ | ✓ |
| | 2 | | ✓ | ✓ | ✓ | ✓ |
| | More | | | ✓ | ✓ | ✓ |
| **Logical connector between triplets** | Conjunction | ✓ | ✓ | ✓ | ✓ | ✓ |
| | Disjunction | | | | ✓ | ✓ |
| | Exclusion | | | | ✓ | ✓ |
| **Topology of the query graph** | Direct | ✓ | ✓ | ✓ | ✓ | ✓ |
| | Sibling | | ✓ | ✓ | ✓ | ✓ |
| | Chain | | ✓ | ✓ | ✓ | ✓ |
| | Mixed | | | ✓ | | ✓ |
| | Other | | ✓ | ✓ | ✓ | ✓ |
| **Variable typing in the query** | None | ✓ | ✓ | ✓ | ✓ | ✓ |
| | Target variable | | ✓ | ✓ | ✓ | ✓ |
| | Internal variable | | ✓ | ✓ | ✓ | ✓ |
| **Comparison clauses** | None | ✓ | ✓ | ✓ | ✓ | ✓ |
| | String | | | ✓ | | ✓ |
| | Number | | | ✓ | ✓ | ✓ |
| | Date | | | ✓ | | ✓ |
| **Superlative clauses** | No | ✓ | ✓ | ✓ | ✓ | ✓ |
| | Yes | | | | ✓ | |
| **Answer type** | Entity (open) | ✓ | ✓ | ✓ | ✓ | ✓ |
| | Entity (closed) | | | | ✓ | ✓ |
| | Number | | | ✓ | ✓ | ✓ |
| | Boolean | | ✓ | ✓ | ✓ | ✓ |
| **Answer cardinality** | 0 (unanswerable) | | | ✓ | | ✓ |
| | 1 | ✓ | ✓ | ✓ | ✓ | ✓ |
| | More | | ✓ | ✓ | ✓ | ✓ |
| **Number of target variables** | 0 (→ ASK verb) | | ✓ | ✓ | ✓ | ✓ |
| | 1 | ✓ | ✓ | ✓ | ✓ | ✓ |
| | 2 | | | ✓ | | ✓ |
| **Dialogue context** | Self-sufficient | ✓ | ✓ | ✓ | ✓ | ✓ |
| | Coreference | | | | ✓ | ✓ |
| | Ellipsis | | | | ✓ | ✓ |
| **Meaning** | Meaningful | ✓ | ✓ | ✓ | ✓ | ✓ |
| | Non-sense | | | | | ✓ |
### Data splits
Text verbalization is only available for a subset of the test set, referred to as the *challenge set*. Other samples only contain dialogues in the form of follow-up SPARQL queries (see the filtering sketch after the table below).
| | Train | Validation | Test |
| --------------------- | ---------- | ---------- | ---------- |
| Questions | 1.5M | 167K | 260K |
| Dialogues | 152K | 17K | 28K |
| NL question per query | 1 |
| Characters per query | 163 (± 100) |
| Tokens per question | 10 (± 4) |
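To work only on the verbalized *challenge set*, the test split can be filtered on the verbalized fields. A hedged sketch, assuming that dialogues outside the challenge set carry empty `verbalized_answer` values (adjust the predicate if a different placeholder is used):
```python
from datasets import load_dataset

test = load_dataset("OrangeInnov/csqa-sparqltotext", split="test")

def has_verbalization(dialogue):
    # Keep dialogues in which at least one turn has a non-empty verbalized answer.
    return any(turn["verbalized_answer"] for turn in dialogue["turns"])

challenge_set = test.filter(has_verbalization)
print(len(challenge_set), "challenge-set dialogues out of", len(test))
```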
### JSON fields
Each turn of a dialogue contains the following fields:
#### Original fields
* `ques_type_id`: ID corresponding to the question utterance
* `description`: Description of type of question
* `relations`: IDs of the predicates used in the utterance
* `entities_in_utterance`: IDs of the entities used in the question
* `speaker`: The nature of the speaker: `SYSTEM` or `USER`
* `utterance`: The utterance itself: either a question, a clarification, or a response
* `active_set`: A regular expression identifying the entity set of the answer list
* `all_entities`: List of ALL entities which constitute the answer of the question
* `question-type`: Type of question (broad types used for evaluation, as given in the original authors' paper)
* `type_list`: List containing the entity IDs of all entity parents used in the question
#### New fields
* `is_spurious`: Boolean flag introduced by CARTON
* `is_incomplete`: whether the question is self-sufficient (complete) or relies on information given in previous turns (incomplete)
* `parsed_active_set`: parsed version of the `active_set` field
* `gold_actions`: sequence of actions as returned by CARTON
* `sparql_query`: SPARQL query corresponding to the question
#### Verbalized fields
Fields with `verbalized` in their name are verbalized versions of other fields, i.e. IDs are replaced by the actual words/labels.
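The correspondence can be inspected directly, for instance by printing the raw and verbalized views of the same turn (field names are taken from the schema above):
```python
def show_verbalization(turn):
    """Compare the ID-based and label-based (verbalized) views of one turn."""
    print("SPARQL query:           ", turn["sparql_query"])
    print("verbalized SPARQL query:", turn["verbalized_sparql_query"])
    print("answer entities:        ", turn["all_entities"])
    print("verbalized entities:    ", turn["verbalized_all_entities"])
```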
### Format of the SPARQL queries
* Clauses are in random order
* Variable names are random letters; the letters change from one turn to the next
* Delimiters are surrounded by spaces
## Additional Information
### Licensing Information
* Content from the original dataset: CC BY-SA 4.0
* New content: CC BY-SA 4.0
### Citation Information
#### This version of the corpus (with SPARQL queries)
```bibtex
@inproceedings{lecorve2022sparql2text,
title={SPARQL-to-Text Question Generation for Knowledge-Based Conversational Applications},
author={Lecorv\'e, Gw\'enol\'e and Veyret, Morgan and Brabant, Quentin and Rojas-Barahona, Lina M.},
  booktitle={Proceedings of the Conference of the Asia-Pacific Chapter of the Association for Computational Linguistics and the International Joint Conference on Natural Language Processing (AACL-IJCNLP)},
year={2022}
}
```
#### Original corpus (CSQA)
```bibtex
@InProceedings{saha2018complex,
title = {Complex {Sequential} {Question} {Answering}: {Towards} {Learning} to {Converse} {Over} {Linked} {Question} {Answer} {Pairs} with a {Knowledge} {Graph}},
volume = {32},
issn = {2374-3468},
url = {https://ojs.aaai.org/index.php/AAAI/article/view/11332},
booktitle = {Proceedings of the AAAI Conference on Artificial Intelligence},
author = {Saha, Amrita and Pahuja, Vardaan and Khapra, Mitesh and Sankaranarayanan, Karthik and Chandar, Sarath},
month = apr,
year = {2018}
}
```
#### CARTON
```bibtex
@InProceedings{plepi2021context,
author="Plepi, Joan and Kacupaj, Endri and Singh, Kuldeep and Thakkar, Harsh and Lehmann, Jens",
editor="Verborgh, Ruben and Hose, Katja and Paulheim, Heiko and Champin, Pierre-Antoine and Maleshkova, Maria and Corcho, Oscar and Ristoski, Petar and Alam, Mehwish",
title="Context Transformer with Stacked Pointer Networks for Conversational Question Answering over Knowledge Graphs",
booktitle="Proceedings of The Semantic Web",
year="2021",
publisher="Springer International Publishing",
pages="356--371",
isbn="978-3-030-77385-4"
}
```