---
task_categories:
- question-answering
language:
- en
- fr
- de
- it
- es
- pt
pretty_name: mTRECQA
size_categories:
- 100K<n<1M
configs:
- config_name: default
  data_files:
  - split: train_en
    path: "eng-train.jsonl"
  - split: train_de
    path: "deu-train.jsonl"
  - split: train_fr
    path: "fra-train.jsonl"
  - split: train_it
    path: "ita-train.jsonl"
  - split: train_po
    path: "por-train.jsonl"
  - split: train_sp
    path: "spa-train.jsonl"
  - split: validation_en
    path: "eng-dev.jsonl"
  - split: validation_de
    path: "deu-dev.jsonl"
  - split: validation_fr
    path: "fra-dev.jsonl"
  - split: validation_it
    path: "ita-dev.jsonl"
  - split: validation_po
    path: "por-dev.jsonl"
  - split: validation_sp
    path: "spa-dev.jsonl"
  - split: test_en
    path: "eng-test.jsonl"
  - split: test_de
    path: "deu-test.jsonl"
  - split: test_fr
    path: "fra-test.jsonl"
  - split: test_it
    path: "ita-test.jsonl"
  - split: test_po
    path: "por-test.jsonl"
  - split: test_sp
    path: "spa-test.jsonl"
  - split: validation_clean_en
    path: "eng-dev_clean.jsonl"
  - split: validation_clean_de
    path: "deu-dev_clean.jsonl"
  - split: validation_clean_fr
    path: "fra-dev_clean.jsonl"
  - split: validation_clean_it
    path: "ita-dev_clean.jsonl"
  - split: validation_clean_po
    path: "por-dev_clean.jsonl"
  - split: validation_clean_sp
    path: "spa-dev_clean.jsonl"
  - split: test_clean_en
    path: "eng-test_clean.jsonl"
  - split: test_clean_de
    path: "deu-test_clean.jsonl"
  - split: test_clean_fr
    path: "fra-test_clean.jsonl"
  - split: test_clean_it
    path: "ita-test_clean.jsonl"
  - split: test_clean_po
    path: "por-test_clean.jsonl"
  - split: test_clean_sp
    path: "spa-test_clean.jsonl"
  - split: validation_++_en
    path: "eng-dev_no_allneg.jsonl"
  - split: validation_++_de
    path: "deu-dev_no_allneg.jsonl"
  - split: validation_++_fr
    path: "fra-dev_no_allneg.jsonl"
  - split: validation_++_it
    path: "ita-dev_no_allneg.jsonl"
  - split: validation_++_po
    path: "por-dev_no_allneg.jsonl"
  - split: validation_++_sp
    path: "spa-dev_no_allneg.jsonl"
  - split: test_++_en
    path: "eng-test_no_allneg.jsonl"
  - split: test_++_de
    path: "deu-test_no_allneg.jsonl"
  - split: test_++_fr
    path: "fra-test_no_allneg.jsonl"
  - split: test_++_it
    path: "ita-test_no_allneg.jsonl"
  - split: test_++_po
    path: "por-test_no_allneg.jsonl"
  - split: test_++_sp
    path: "spa-test_no_allneg.jsonl"
- config_name: clean
  data_files:
  - split: train_en
    path: "eng-train.jsonl"
  - split: train_de
    path: "deu-train.jsonl"
  - split: train_fr
    path: "fra-train.jsonl"
  - split: train_it
    path: "ita-train.jsonl"
  - split: train_po
    path: "por-train.jsonl"
  - split: train_sp
    path: "spa-train.jsonl"
  - split: validation_clean_en
    path: "eng-dev_clean.jsonl"
  - split: validation_clean_de
    path: "deu-dev_clean.jsonl"
  - split: validation_clean_fr
    path: "fra-dev_clean.jsonl"
  - split: validation_clean_it
    path: "ita-dev_clean.jsonl"
  - split: validation_clean_po
    path: "por-dev_clean.jsonl"
  - split: validation_clean_sp
    path: "spa-dev_clean.jsonl"
  - split: test_clean_en
    path: "eng-test_clean.jsonl"
  - split: test_clean_de
    path: "deu-test_clean.jsonl"
  - split: test_clean_fr
    path: "fra-test_clean.jsonl"
  - split: test_clean_it
    path: "ita-test_clean.jsonl"
  - split: test_clean_po
    path: "por-test_clean.jsonl"
  - split: test_clean_sp
    path: "spa-test_clean.jsonl"
- config_name: ++
  data_files:
  - split: train_en
    path: "eng-train.jsonl"
  - split: train_de
    path: "deu-train.jsonl"
  - split: train_fr
    path: "fra-train.jsonl"
  - split: train_it
    path: "ita-train.jsonl"
  - split: train_po
    path: "por-train.jsonl"
  - split: train_sp
    path: "spa-train.jsonl"
  - split: validation_++_en
    path: "eng-dev_no_allneg.jsonl"
  - split: validation_++_de
    path: "deu-dev_no_allneg.jsonl"
  - split: validation_++_fr
    path: "fra-dev_no_allneg.jsonl"
  - split: validation_++_it
    path: "ita-dev_no_allneg.jsonl"
  - split: validation_++_po
    path: "por-dev_no_allneg.jsonl"
  - split: validation_++_sp
    path: "spa-dev_no_allneg.jsonl"
  - split: test_++_en
    path: "eng-test_no_allneg.jsonl"
  - split: test_++_de
    path: "deu-test_no_allneg.jsonl"
  - split: test_++_fr
    path: "fra-test_no_allneg.jsonl"
  - split: test_++_it
    path: "ita-test_no_allneg.jsonl"
  - split: test_++_po
    path: "por-test_no_allneg.jsonl"
  - split: test_++_sp
    path: "spa-test_no_allneg.jsonl"
- config_name: en
  data_files:
  - split: train
    path: "eng-train.jsonl"
  - split: validation
    path: "eng-dev.jsonl"
  - split: test
    path: "eng-test.jsonl"
- config_name: de
  data_files:
  - split: train
    path: "deu-train.jsonl"
  - split: validation
    path: "deu-dev.jsonl"
  - split: test
    path: "deu-test.jsonl"
- config_name: fr
  data_files:
  - split: train
    path: "fra-train.jsonl"
  - split: validation
    path: "fra-dev.jsonl"
  - split: test
    path: "fra-test.jsonl"
- config_name: it
  data_files:
  - split: train
    path: "ita-train.jsonl"
  - split: validation
    path: "ita-dev.jsonl"
  - split: test
    path: "ita-test.jsonl"
- config_name: po
  data_files:
  - split: train
    path: "por-train.jsonl"
  - split: validation
    path: "por-dev.jsonl"
  - split: test
    path: "por-test.jsonl"
- config_name: sp
  data_files:
  - split: train
    path: "spa-train.jsonl"
  - split: validation
    path: "spa-dev.jsonl"
  - split: test
    path: "spa-test.jsonl"
- config_name: en_++
  data_files:
  - split: train
    path: "eng-train.jsonl"
  - split: validation
    path: "eng-dev_no_allneg.jsonl"
  - split: test
    path: "eng-test_no_allneg.jsonl"
- config_name: de_++
  data_files:
  - split: train
    path: "deu-train.jsonl"
  - split: validation
    path: "deu-dev_no_allneg.jsonl"
  - split: test
    path: "deu-test_no_allneg.jsonl"
- config_name: fr_++
  data_files:
  - split: train
    path: "fra-train.jsonl"
  - split: validation
    path: "fra-dev_no_allneg.jsonl"
  - split: test
    path: "fra-test_no_allneg.jsonl"
- config_name: it_++
  data_files:
  - split: train
    path: "ita-train.jsonl"
  - split: validation
    path: "ita-dev_no_allneg.jsonl"
  - split: test
    path: "ita-test_no_allneg.jsonl"
- config_name: po_++
  data_files:
  - split: train
    path: "por-train.jsonl"
  - split: validation
    path: "por-dev_no_allneg.jsonl"
  - split: test
    path: "por-test_no_allneg.jsonl"
- config_name: sp_++
  data_files:
  - split: train
    path: "spa-train.jsonl"
  - split: validation
    path: "spa-dev_no_allneg.jsonl"
  - split: test
    path: "spa-test_no_allneg.jsonl"
- config_name: en_clean
  data_files:
  - split: train
    path: "eng-train.jsonl"
  - split: validation
    path: "eng-dev_clean.jsonl"
  - split: test
    path: "eng-test_clean.jsonl"
- config_name: de_clean
  data_files:
  - split: train
    path: "deu-train.jsonl"
  - split: validation
    path: "deu-dev_clean.jsonl"
  - split: test
    path: "deu-test_clean.jsonl"
- config_name: fr_clean
  data_files:
  - split: train
    path: "fra-train.jsonl"
  - split: validation
    path: "fra-dev_clean.jsonl"
  - split: test
    path: "fra-test_clean.jsonl"
- config_name: it_clean
  data_files:
  - split: train
    path: "ita-train.jsonl"
  - split: validation
    path: "ita-dev_clean.jsonl"
  - split: test
    path: "ita-test_clean.jsonl"
- config_name: po_clean
  data_files:
  - split: train
    path: "por-train.jsonl"
  - split: validation
    path: "por-dev_clean.jsonl"
  - split: test
    path: "por-test_clean.jsonl"
- config_name: sp_clean
  data_files:
  - split: train
    path: "spa-train.jsonl"
  - split: validation
    path: "spa-dev_clean.jsonl"
  - split: test
    path: "spa-test_clean.jsonl"
---

## Dataset Description

**mTRECQA** is a translated version of TrecQA, a benchmark for answer sentence selection built from the data of the TREC question answering tracks: each question is paired with a set of candidate answer sentences that were manually labeled as correct or incorrect answers.

The dataset has been translated into five European languages (French, German, Italian, Portuguese, and Spanish), as described in the paper [Datasets for Multilingual Answer Sentence Selection](https://arxiv.org/abs/2406.10172 'Datasets for Multilingual Answer Sentence Selection').

## Splits:

For each language (English, French, German, Italian, Portuguese, and Spanish), we provide:

- **train** split
- **validation** split
- **test** split

In addition, the validation and test splits are also available in two preprocessed versions (the sketch after this list shows how the filtering can be reproduced):

- **++**: questions whose answer candidates are all negative are removed
- **clean**: questions whose answer candidates are all negative or all positive are removed

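For reference, this is a minimal sketch of how the two filtered versions can be reproduced from a raw split, assuming only the ``qid`` and ``label`` fields described in the Format section below; the exact scripts used to build the released files may differ:

```
from collections import defaultdict

from datasets import load_dataset

# load a raw validation split (Italian, per-language config)
dataset = load_dataset("matteogabburo/mTRECQA", "it", split="validation")

# group the binary candidate labels by question id
labels_by_question = defaultdict(list)
for example in dataset:
    labels_by_question[example["qid"]].append(example["label"])

# "++": keep only questions with at least one positive candidate
keep_plus = {qid for qid, labels in labels_by_question.items() if any(labels)}
validation_plus = dataset.filter(lambda ex: ex["qid"] in keep_plus)

# "clean": keep only questions with both positive and negative candidates
keep_clean = {qid for qid, labels in labels_by_question.items()
              if any(labels) and not all(labels)}
validation_clean = dataset.filter(lambda ex: ex["qid"] in keep_clean)
```
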
### How to load them:
To load these splits, use the following snippet, replacing ``[LANG]`` with a language identifier (en, fr, de, it, po, sp) and ``[VERSION]`` with a version identifier (++ or clean):

```
from datasets import load_dataset

# if you want the whole corpora
corpora = load_dataset("matteogabburo/mTRECQA")

# if you want the "clean" validation and test sets
corpora = load_dataset("matteogabburo/mTRECQA", "clean")

# if you want the "no all negatives" validation and test sets
corpora = load_dataset("matteogabburo/mTRECQA", "++")

# if you want the default splits of a specific language, replace [LANG]
# with an identifier in: en, fr, de, it, po, sp
# dataset = load_dataset("matteogabburo/mTRECQA", "[LANG]")
# example:
italian_dataset = load_dataset("matteogabburo/mTRECQA", "it")

# if you want the preprocessed splits ("clean" and "no all negatives" sets),
# replace [LANG] with a language identifier and [VERSION] with "++" or "clean"
# dataset = load_dataset("matteogabburo/mTRECQA", "[LANG]_[VERSION]")
# example:
italian_clean_dataset = load_dataset("matteogabburo/mTRECQA", "it_clean")
```
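
A single split can also be materialized directly through the ``split`` argument of ``load_dataset``; for example, to get only the Italian "clean" test set as one ``Dataset`` object:

```
from datasets import load_dataset

# equivalent to load_dataset("matteogabburo/mTRECQA", "it_clean")["test"]
italian_clean_test = load_dataset("matteogabburo/mTRECQA", "it_clean", split="test")
```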

## Format:
Each example is a JSON object with the following format (the example below is from the German split):

```
{
 'eid': 42588,
 'qid': 1003,
 'cid': 4,
 'label': 1,
 'question': 'In welchem Land liegt die heilige Stadt Mekka?',
 'candidate': 'Der französische Präsident Jacques Chirac hat heute sein Beileid ausgedrückt, wegen des Todes von 250 Pilgern bei einem Brand, der am Dienstag in einem Lager in der Nähe der heiligen Stadt Mekka in Saudi-Arabien ausbrach.'
}
```

Where:

- **eid**: the unique id of the example, i.e., of the (question, candidate) pair
- **qid**: the unique id of the question
- **cid**: the unique id of the answer candidate
- **label**: whether the answer candidate ``candidate`` is correct for the ``question`` (1 if correct, 0 otherwise)
- **question**: the question text
- **candidate**: the candidate answer sentence

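Since all candidates sharing a ``qid`` answer the same question, answer sentence selection metrics are computed per question by ranking each question's candidates. The following is an illustrative sketch of a precision@1 evaluation; the ``score`` function is a hypothetical stand-in for a real model and is not part of this dataset:

```
from collections import defaultdict

from datasets import load_dataset

def score(question, candidate):
    # hypothetical scorer: word overlap between question and candidate
    return len(set(question.split()) & set(candidate.split()))

dataset = load_dataset("matteogabburo/mTRECQA", "it_clean", split="test")

# group the candidates of each question
candidates_by_question = defaultdict(list)
for example in dataset:
    candidates_by_question[example["qid"]].append(example)

# precision@1: fraction of questions whose top-scored candidate is positive
hits = sum(
    max(group, key=lambda ex: score(ex["question"], ex["candidate"]))["label"]
    for group in candidates_by_question.values()
)
print(f"P@1: {hits / len(candidates_by_question):.3f}")
```
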
## Citation

If you find this dataset useful, please cite the following paper:

**BibTeX:**
```
@misc{gabburo2024datasetsmultilingualanswersentence,
      title={Datasets for Multilingual Answer Sentence Selection},
      author={Matteo Gabburo and Stefano Campese and Federico Agostini and Alessandro Moschitti},
      year={2024},
      eprint={2406.10172},
      archivePrefix={arXiv},
      primaryClass={cs.CL},
      url={https://arxiv.org/abs/2406.10172},
}
```