---
license: apache-2.0
task_categories:
  - text-retrieval
size_categories:
  - 10K<n<100K
language:
  - en
  - de
  - fr
  - it
  - es
  - pl
  - ro
  - nl
  - el
  - hu
  - pt
  - cs
  - sv
  - bg
  - da
  - fi
  - sk
  - lt
  - hr
  - sl
  - et
  - lv
  - mt
  - ga
pretty_name: Multi-EuP
---

## Dataset Description

### Dataset Summary

Multi-EuP is a multilingual benchmark dataset comprising 22K documents collected from European Parliament debates, spanning 24 languages. It is designed to investigate fairness in a multilingual information retrieval (IR) context, supporting the analysis of both language bias and demographic bias in a ranking setting. The corpus is authentically multilingual, with topics translated into all 24 languages and cross-lingual relevance judgments, and its documents carry rich demographic information, facilitating the study of demographic bias.
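A minimal sketch of loading the dataset with the 🤗 `datasets` library. The repository id follows this page, but the available configurations and splits are assumptions, so inspect what `load_dataset` returns before indexing into it:

```python
from datasets import load_dataset

# Load Multi-EuP from the Hugging Face Hub.
# NOTE: the repository id is taken from this page; split names are
# assumptions -- print the returned DatasetDict to see what exists.
dataset = load_dataset("Jinruiy/Multi-EuP")
print(dataset)
```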

### Languages

| Language | ISO code | Countries where official lang. | Native Usage | Total Usage | # Docs | Words per Doc (mean/median) |
|---|---|---|---|---|---|---|
| English | EN | United Kingdom, Ireland, Malta | 13% | 51% | 7123 | 286/200 |
| German | DE | Germany, Belgium, Luxembourg | 16% | 32% | 3433 | 180/164 |
| French | FR | France, Belgium, Luxembourg | 12% | 26% | 2779 | 296/223 |
| Italian | IT | Italy | 13% | 16% | 1829 | 190/175 |
| Spanish | ES | Spain | 8% | 15% | 2371 | 232/198 |
| Polish | PL | Poland | 8% | 9% | 1841 | 155/148 |
| Romanian | RO | Romania | 5% | 5% | 794 | 186/172 |
| Dutch | NL | Netherlands, Belgium | 4% | 5% | 897 | 184/170 |
| Greek | EL | Greece, Cyprus | 3% | 4% | 707 | 209/205 |
| Hungarian | HU | Hungary | 3% | 3% | 614 | 126/128 |
| Portuguese | PT | Portugal | 2% | 3% | 1176 | 179/167 |
| Czech | CS | Czech Republic | 2% | 3% | 397 | 167/149 |
| Swedish | SV | Sweden | 2% | 3% | 531 | 175/165 |
| Bulgarian | BG | Bulgaria | 2% | 2% | 408 | 196/178 |
| Danish | DA | Denmark | 1% | 1% | 292 | 218/198 |
| Finnish | FI | Finland | 1% | 1% | 405 | 94/87 |
| Slovak | SK | Slovakia | 1% | 1% | 348 | 151/158 |
| Lithuanian | LT | Lithuania | 1% | 1% | 115 | 142/127 |
| Croatian | HR | Croatia | <1% | <1% | 524 | 183/164 |
| Slovene | SL | Slovenia | <1% | <1% | 270 | 201/163 |
| Estonian | ET | Estonia | <1% | <1% | 58 | 160/158 |
| Latvian | LV | Latvia | <1% | <1% | 89 | 111/123 |
| Maltese | MT | Malta | <1% | <1% | 178 | 117/115 |
| Irish | GA | Ireland | <1% | <1% | 33 | 198/172 |
Table 1: Multi-EuP statistics, broken down by language: ISO language code; EU member states where the language is official; proportion of the EU population using the language natively and in total; number of debate speech documents in Multi-EuP; and words per document (mean/median).
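The per-language document counts in Table 1 can be recomputed from the corpus itself. A sketch, assuming a `train` split and a `language` field holding the ISO code (both are assumptions about the schema):

```python
from collections import Counter

from datasets import load_dataset

# Recompute the "# Docs" column of Table 1.
# ASSUMPTIONS: a "train" split exists and each record exposes its ISO
# language code in a "language" field -- adjust to the actual schema.
corpus = load_dataset("Jinruiy/Multi-EuP", split="train")
counts = Counter(doc["language"] for doc in corpus)

for iso, n in counts.most_common():
    print(f"{iso}\t{n}")
```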

## Dataset Structure

The Multi-EuP dataset contains two files: the debate corpus and the MEP (Member of the European Parliament) info.
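The natural way to use the two files together is to join each speech to its speaker's demographic record. A sketch with pandas; the file names and every column name below (`mep_id`, `gender`, `nationality`) are hypothetical placeholders, not the repository's actual schema:

```python
import pandas as pd

# Join debate speeches to MEP demographics.
# NOTE: the file and column names here are hypothetical placeholders --
# substitute the actual files and columns from the repository.
debates = pd.read_json("debate_corpus.jsonl", lines=True)
meps = pd.read_json("mep_info.jsonl", lines=True)

# One row per speech, enriched with the speaker's demographics.
joined = debates.merge(meps, on="mep_id", how="left")
print(joined[["language", "gender", "nationality"]].head())
```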

### Data Instances

Each instance in the debate corpus is a single debate speech, together with the language it was delivered in and a link to the MEP who delivered it. Each instance in the MEP info file records the demographic attributes of one MEP. Table 1 gives the per-language document counts.

### Data Fields

The two files provide the following kinds of fields (the names here are descriptive rather than authoritative; see the schema-inspection sketch after this list):

- debate corpus: the text of each debate speech, its language, and a reference to the MEP who delivered it
- MEP info: demographic attributes of each MEP (such as gender and nationality), supporting the demographic bias analysis
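The sketch below inspects the declared schema directly, which is more reliable than any prose description:

```python
from datasets import load_dataset

# Print the declared columns/types and one example record.
# NOTE: the "train" split name is an assumption.
corpus = load_dataset("Jinruiy/Multi-EuP", split="train")
print(corpus.features)
print(corpus[0])
```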

## Dataset Creation

### Curation Rationale

Multi-EuP was curated to investigate fairness in a multilingual information retrieval setting. Its authentic multilingual corpus, topics translated into all 24 official EU languages, cross-lingual relevance judgments, and per-document speaker demographics enable the analysis of both language bias and demographic bias in a ranking context.

### Source Data

#### Initial Data Collection and Normalization

The documents are debate speeches collected from the European Parliament, where members may speak in any of the 24 official EU languages. Table 1 breaks the collection down by language.

## Licensing Information

Multi-EuP is released under the Apache-2.0 License.

## Citation Information

If you use Multi-EuP, please cite the dataset paper:

@inproceedings{yang-etal-2023-multi,
    title = "Multi-{E}u{P}: The Multilingual {E}uropean {P}arliament Dataset for Analysis of Bias in Information Retrieval",
    author = "Yang, Jinrui  and
      Baldwin, Timothy  and
      Cohn, Trevor",
    booktitle = "Proceedings of the 3rd Workshop on Multi-lingual Representation Learning (MRL)",
    year = "2023",
    address = "Singapore",
    publisher = "Association for Computational Linguistics",
}