| column | type | length (min-max) |
| --- | --- | --- |
| sha | string | 40-40 |
| text | string | 0-13.4M |
| id | string | 2-117 |
| tags | sequence | |
| created_at | string | 25-25 |
| metadata | string | 2-31.7M |
| last_modified | string | 25-25 |
4949de27eaa80078f2253d1d254709a5e06a47a7
The dataset is a JSON Lines file with 10,657 examples, where each example consists of text (extracted from the first 13,000 rows of the unshuffled English OSCAR dataset) and metadata fields (entities). Structure of an example: ``` { "text": "This is exactly the sort of article to raise the profile of the club around the Midlands. Very positive and really focusses on how the club has improved over a short period of time and the bright prospects for the future \n\"Oxford Town\" - professional as always at the Birmingham Mail. Not only is Oxford a city, but Oxford United are pretty recognisable name to anyone who has ever taken even a vague interest in English football.", "metadata": [ { "key": "entity", "type": "local", "char_start_idx": 80, "char_end_idx": 88, "value": "Midlands" }, { "key": "entity", "type": "local", "char_start_idx": 225, "char_end_idx": 236, "value": "Oxford Town" }, { "key": "entity", "type": "local", "char_start_idx": 270, "char_end_idx": 285, "value": "Birmingham_Mail" }, { "key": "entity", "type": "local", "char_start_idx": 299, "char_end_idx": 305, "value": "Oxford" }, { "key": "entity", "type": "local", "char_start_idx": 318, "char_end_idx": 331, "value": "Oxford_United_Stars_F.C." }, { "key": "entity", "type": "local", "char_start_idx": 415, "char_end_idx": 422, "value": "England" } ] } ```
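A minimal loading sketch (not part of the original card), assuming the JSON Lines file can be read directly with the `datasets` library; it slices each entity span back out of the text using the character offsets:

```python
from datasets import load_dataset

# Assumption: the JSON Lines file in the repo is picked up automatically by the
# datasets library; otherwise load it with the "json" builder and an explicit data_files path.
ds = load_dataset("bs-modeling-metadata/OSCAR_Entity_13_000", split="train")

example = ds[0]
for entity in example["metadata"]:
    # char_start_idx / char_end_idx are character offsets into the text field.
    span = example["text"][entity["char_start_idx"]:entity["char_end_idx"]]
    print(entity["value"], "->", span)
```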
bs-modeling-metadata/OSCAR_Entity_13_000
[ "region:us" ]
2022-03-02T23:29:22+00:00
{}
2021-09-15T13:20:53+00:00
f6cba351f9d1893a3b60f76455cdbd64fc0239c7
The dataset is a JSON Lines file with 120,000 examples, where each example consists of text (extracted from the C4 English dataset) and metadata fields (a website description extracted from Wikipedia). Example: ``` { "text": "US10289222B2 - Handling of touch events in a browser environment - Google Patents\nHandling of touch events in a browser environment Download PDF\nUS10289222B2\nUS10289222B2 US13/857,848 US201313857848A US10289222B2 US 10289222 B2 US10289222 B2 US 10289222B2 US 201313857848 A US201313857848 A US201313857848A US 10289222 B2 US10289222 B2 US 10289222B2\nUS13/857,848\nUS20130222244A1 (en\nEli Joshua FIDLER\nMichael Thomas Winkler\nMatthew Nicholaos STAIKOS\nJoseph Charles MASON\n2011-01-05 Priority to US12/985,337 priority Critical patent/US8438473B2/en\n2013-04-05 Application filed by BlackBerry Ltd filed Critical BlackBerry Ltd\n2013-04-05 Priority to US13/857,848 priority patent/US10289222B2/en\n2013-06-26 Assigned to RESEARCH IN MOTION CORPORATION reassignment RESEARCH IN MOTION CORPORATION ASSIGNMENT OF ASSIGNORS INTEREST (SEE DOCUMENT FOR DETAILS). Assignors: Winkler, Michael Thomas\n2013-06-26 Assigned to RESEARCH IN MOTION LIMITED reassignment RESEARCH IN MOTION LIMITED ASSIGNMENT OF ASSIGNORS INTEREST (SEE DOCUMENT FOR DETAILS). Assignors: Fidler, Eli Joshua, Mak, Genevieve Elizabeth, Mason, Joseph Charles, STAIKOS, MATTHEW\n2013-08-29 Publication of US20130222244A1 publication Critical patent/US20130222244A1/en\n2016-03-08 Assigned to RESEARCH IN MOTION LIMITED reassignment RESEARCH IN MOTION LIMITED ASSIGNMENT OF ASSIGNORS INTEREST (SEE DOCUMENT FOR DETAILS). Assignors: RESEARCH IN MOTION CORPORATION\n2019-05-14 Publication of US10289222B2 publication Critical patent/US10289222B2/en\nHandling of touch events in a browser environment is disclosed. An example method includes, while a document is displayed on a touchscreen display of a device, detecting a touch event at the touchscreen display, and selectively processing the detected touch event using one of a default hander, a touch event handler, and a conversion to one or more mouse events, according to a touch event handling property defined for the document.\nThe present application relates generally to the processing of detected user input events in a web browser.\nComputing devices such as desktop computers are typically equipped with external pointing devices, such as a mouse, to permit cursor-based user interaction with content executing on the computer.", "metadata": [ { "key": "website_description", "type": "global", "value": "Google Patents is a search engine from Google that indexes patents and patent applications." } ] } ```
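A similar loading sketch (not part of the original card), again assuming the JSON Lines file is readable directly with the `datasets` library; it pulls out the global website description:

```python
from datasets import load_dataset

# Assumption: the JSON Lines file is auto-detected; otherwise use the "json" builder
# with an explicit data_files path.
ds = load_dataset("bs-modeling-metadata/website_metadata_c4", split="train")

example = ds[0]
# "global" metadata applies to the whole document rather than to a character span.
descriptions = [m["value"] for m in example["metadata"] if m["type"] == "global"]
print(descriptions)
```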
bs-modeling-metadata/website_metadata_c4
[ "region:us" ]
2022-03-02T23:29:22+00:00
{}
2021-11-24T14:04:30+00:00
a77e6b9e25050d202bc69d78b3cdd9529ef10029
caca/zscczs
[ "region:us" ]
2022-03-02T23:29:22+00:00
{}
2021-04-07T08:02:09+00:00
47de2aba7b0adf7b8c37a568df2d4b69717d8dcb
cahya/persona_empathetic
[ "license:mit", "region:us" ]
2022-03-02T23:29:22+00:00
{"license": "mit"}
2022-02-19T22:49:35+00:00
da29f2b2fc7c86176813b8a6440f73e0823f05d3
# Dataset Card for the args.me corpus ## Table of Contents - [Table of Contents](#table-of-contents) - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Dataset Usage](#dataset-usage) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** https://zenodo.org/record/4139439 - **Repository:** https://git.webis.de/code-research/arguana/args/args-framework - **Paper:** [Building an Argument Search Engine for the Web](https://webis.de/downloads/publications/papers/wachsmuth_2017f.pdf) - **Leaderboard:** https://touche.webis.de/ - **Point of Contact:** [Webis Group](https://webis.de/people.html) ### Dataset Summary The args.me corpus (version 1.0, cleaned) comprises 382,545 arguments crawled from four debate portals in the middle of 2019. The debate portals are Debatewise, IDebate.org, Debatepedia, and Debate.org. The arguments are extracted using heuristics that are designed for each debate portal. ### Dataset Usage ```python import datasets args = datasets.load_dataset('cakiki/args_me', 'corpus', split='train', streaming=True) for arg in args: print(arg['conclusion']) print(arg['id']) print(arg['argument']) print(arg['stance']) break ``` ### Supported Tasks and Leaderboards Document Retrieval, Argument Retrieval for Controversial Questions ### Languages The args.me corpus is monolingual; it only includes English (mostly en-US) documents. ## Dataset Structure ### Data Instances #### Corpus ``` {'conclusion': 'Science is the best!', 'id': 'd6517702-2019-04-18T12:36:24Z-00000-000', 'argument': 'Science is aright I guess, but Physical Education (P.E) is better. Think about it, you could sit in a classroom for and hour learning about molecular reconfiguration, or you could play football with your mates. Why would you want to learn about molecular reconfiguration anyway? I think the argument here would be based on, healthy mind or healthy body. With science being the healthy mind and P.E being the healthy body. To work this one out all you got to do is ask Steven Hawkins. Only 500 words', 'stance': 'CON'} ``` ### Data Fields [More Information Needed] ### Data Splits [More Information Needed] ## Dataset Creation ### Curation Rationale [More Information Needed] ### Source Data #### Initial Data Collection and Normalization [More Information Needed] #### Who are the source language producers? [More Information Needed] ### Annotations #### Annotation process [More Information Needed] #### Who are the annotators?
[More Information Needed] ### Personal and Sensitive Information [More Information Needed] ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed] ### Discussion of Biases [More Information Needed] ### Other Known Limitations [More Information Needed] ## Additional Information ### Dataset Curators [More Information Needed] ### Licensing Information [Creative Commons Attribution 4.0 International (CC BY 4.0)](https://creativecommons.org/licenses/by/4.0/) ### Citation Information ``` @dataset{yamen_ajjour_2020_4139439, author = {Yamen Ajjour and Henning Wachsmuth and Johannes Kiesel and Martin Potthast and Matthias Hagen and Benno Stein}, title = {args.me corpus}, month = oct, year = 2020, publisher = {Zenodo}, version = {1.0-cleaned}, doi = {10.5281/zenodo.4139439}, url = {https://doi.org/10.5281/zenodo.4139439} } ```
cakiki/args_me
[ "task_categories:text-retrieval", "task_ids:document-retrieval", "annotations_creators:machine-generated", "language_creators:crowdsourced", "multilinguality:monolingual", "size_categories:100K<n<1M", "source_datasets:original", "license:cc-by-4.0", "region:us" ]
2022-03-02T23:29:22+00:00
{"annotations_creators": ["machine-generated"], "language_creators": ["crowdsourced"], "language": ["'en-US'"], "license": ["cc-by-4.0"], "multilinguality": ["monolingual"], "size_categories": ["100K<n<1M"], "source_datasets": ["original"], "task_categories": ["text-retrieval"], "task_ids": ["document-retrieval"], "pretty_name": "Webis args.me argument corpus"}
2022-10-25T08:07:25+00:00
3a0dac229b4e21cbde67cb06af07d11fd8bb7c75
cakiki/arxiv-metadata
[ "license:cc0-1.0", "region:us" ]
2022-03-02T23:29:22+00:00
{"license": "cc0-1.0"}
2022-02-03T20:57:23+00:00
0882808a91679e12e98407691dabd28115bff670
cakiki/en_wiki_quote
[ "license:cc-by-sa-3.0", "region:us" ]
2022-03-02T23:29:22+00:00
{"license": "cc-by-sa-3.0"}
2022-02-03T17:36:03+00:00
aba728f708a01b22a508061490cf389dd15f6ca2
## Dataset Summary Contains 15 Harry Potter trivia questions in SQuAD v2 format, 3 of which are unanswerable. ## Model Performance [Test Notebook](https://colab.research.google.com/drive/1VFUJKV7eun68XgQDAHSHsbvoM_CGHzWA?usp=sharing) | Model | exact | f1 | | ----------- | ----------- | ----------- | | Albert Base ([twmkn9/albert-base-v2-squad2](https://huggingface.co/twmkn9/albert-base-v2-squad2)) | 46.6667 | 46.6667 | | Albert XXLarge ([ahotrod/albert_xxlargev1_squad2_512](https://huggingface.co/ahotrod/albert_xxlargev1_squad2_512)) | 66.6667 | 66.6667 |
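A rough way to reproduce this kind of comparison (a sketch, not the linked notebook), assuming the dataset exposes the standard SQuAD v2 fields `question`, `context`, and `answers` in a single `train` split:

```python
from datasets import load_dataset
from transformers import pipeline

# Assumptions: standard SQuAD v2 field names and a single "train" split;
# adjust the split name if the repo differs.
dataset = load_dataset("caltonji/harrypotter_squad_v2_2", split="train")
qa = pipeline("question-answering", model="twmkn9/albert-base-v2-squad2")

for example in dataset:
    pred = qa(question=example["question"], context=example["context"], handle_impossible_answer=True)
    gold = example["answers"]["text"] or ["<no answer>"]
    print(example["question"])
    print("  predicted:", pred["answer"] or "<no answer>", "| gold:", gold[0])
```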
caltonji/harrypotter_squad_v2_2
[ "region:us" ]
2022-03-02T23:29:22+00:00
{}
2021-12-31T20:01:23+00:00
1059be355e830b808093595856135651e770d22c
**QR-AN Dataset: a classification and generation dataset of French Parliament question-answer pairs.** This is a dataset for theme/topic classification, made of questions and answers from https://www2.assemblee-nationale.fr/recherche/resultats_questions . \ It contains 188 unbalanced classes, 80k question-answer pairs divided into 3 splits: train (60k), val (10k) and test (10k). \ Can be used for generation with the 'qran_generation' config. This dataset is compatible with the [`run_summarization.py`](https://github.com/huggingface/transformers/tree/master/examples/pytorch/summarization) script from Transformers if you add this line to the `summarization_name_mapping` variable: ```python "cassandra-themis/QR-AN": ("question", "answer") ``` Compatible with [run_glue.py](https://github.com/huggingface/transformers/tree/master/examples/pytorch/text-classification) script: ``` export MODEL_NAME=camembert-base export MAX_SEQ_LENGTH=512 python run_glue.py \ --model_name_or_path $MODEL_NAME \ --dataset_name cassandra-themis/QR-AN \ --do_train \ --do_eval \ --max_seq_length $MAX_SEQ_LENGTH \ --per_device_train_batch_size 8 \ --gradient_accumulation_steps 4 \ --learning_rate 2e-5 \ --num_train_epochs 1 \ --max_eval_samples 500 \ --output_dir tmp/QR-AN ```
cassandra-themis/QR-AN
[ "task_categories:summarization", "task_categories:text-classification", "task_categories:text-generation", "task_ids:multi-class-classification", "task_ids:topic-classification", "size_categories:10K<n<100K", "language:fr", "conditional-text-generation", "region:us" ]
2022-03-02T23:29:22+00:00
{"language": ["fr"], "size_categories": "10K<n<100K", "task_categories": ["summarization", "text-classification", "text-generation"], "task_ids": ["multi-class-classification", "topic-classification"], "tags": ["conditional-text-generation"]}
2022-10-24T19:31:22+00:00
d83da9653ef2a5f823c3693a28018e3009464522
# Dataset Card for AfriBERTa's Corpus ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Loading Dataset](#loading-dataset) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Discussion of Biases](#discussion-of-biases) - [Additional Information](#additional-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description ### Dataset Summary This is the corpus on which AfriBERTa was trained. The dataset is mostly from the BBC news website, but some languages also have data from Common Crawl. - **Homepage:** https://github.com/keleog/afriberta - **Models:** - https://huggingface.co/castorini/afriberta_small - https://huggingface.co/castorini/afriberta_base - https://huggingface.co/castorini/afriberta_large - **Paper:** https://aclanthology.org/2021.mrl-1.11/ - **Point of Contact:** [email protected] ### Supported Tasks and Leaderboards The AfriBERTa corpus was mostly intended to pre-train language models. ### Languages ``` afaanoromoo amharic gahuza hausa igbo pidgin somali swahili tigrinya yoruba ``` ### Loading Dataset An example to load the train split of the Somali corpus: ``` dataset = load_dataset("castorini/afriberta-corpus", "somali", split="train") ``` An example to load the test split of the Pidgin corpus: ``` dataset = load_dataset("castorini/afriberta-corpus", "pidgin", split="test") ``` ## Dataset Structure ### Data Instances Each data point is a line of text. An example from the `igbo` dataset: ``` {"id": "6", "text": "Ngwá ọrụ na-echebe ma na-ebuli gị na kọmputa."} ``` ### Data Fields The data fields are: - id: id of the example - text: content as a string ### Data Splits Each language has a train and test split, with varying sizes. ## Considerations for Using the Data ### Discussion of Biases Since the majority of the data is obtained from the BBC's news website, models trained on this dataset are likely to be biased towards the news domain. Also, since some of the data is obtained from Common Crawl, care should be taken (especially for text generation models) as personal and sensitive information might be present. ## Additional Information ### Citation Information ``` @inproceedings{ogueji-etal-2021-small, title = "Small Data? No Problem! Exploring the Viability of Pretrained Multilingual Language Models for Low-resourced Languages", author = "Ogueji, Kelechi and Zhu, Yuxin and Lin, Jimmy", booktitle = "Proceedings of the 1st Workshop on Multilingual Representation Learning", month = nov, year = "2021", address = "Punta Cana, Dominican Republic", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2021.mrl-1.11", pages = "116--126", } ``` ### Contributions Thanks to [Kelechi Ogueji](https://github.com/keleog) for adding this dataset.
castorini/afriberta-corpus
[ "task_categories:text-generation", "task_ids:language-modeling", "language:om", "language:am", "language:rw", "language:rn", "language:ha", "language:ig", "language:pcm", "language:so", "language:sw", "language:ti", "language:yo", "language:multilingual", "license:apache-2.0", "region:us" ]
2022-03-02T23:29:22+00:00
{"language": ["om", "am", "rw", "rn", "ha", "ig", "pcm", "so", "sw", "ti", "yo", "multilingual"], "license": "apache-2.0", "task_categories": ["text-generation"], "task_ids": ["language-modeling"]}
2022-10-19T20:33:04+00:00
3a3aa212bbe94a8cc0dc858710a3dad49d532054
# Dataset Summary Mr. TyDi is a multi-lingual benchmark dataset built on TyDi, covering eleven typologically diverse languages. It is designed for monolingual retrieval, specifically to evaluate ranking with learned dense representations. This dataset stores the documents of Mr. TyDi. To access the queries and judgments, please refer to [castorini/mr-tydi](https://huggingface.co/datasets/castorini/mr-tydi). # Dataset Structure The only configuration here is the `language`. As all three folds (train, dev and test) share the same corpus, there is only one fold 'train' under each language, unlike [castorini/mr-tydi](https://huggingface.co/datasets/castorini/mr-tydi). An example document entry looks as follows: ``` { 'docid': '25#0', 'title': 'Autism', 'text': 'Autism is a developmental disorder characterized by difficulties with social interaction and communication, ...' } ``` # Load Dataset An example to load the dataset: ``` language = 'english' dataset = load_dataset('castorini/mr-tydi-corpus', language, split='train') ``` # Citation Information ``` @article{mrtydi, title={{Mr. TyDi}: A Multi-lingual Benchmark for Dense Retrieval}, author={Xinyu Zhang and Xueguang Ma and Peng Shi and Jimmy Lin}, year={2021}, journal={arXiv:2108.08787}, } ```
castorini/mr-tydi-corpus
[ "task_categories:text-retrieval", "multilinguality:multilingual", "language:ar", "language:bn", "language:en", "language:fi", "language:id", "language:ja", "language:ko", "language:ru", "language:sw", "language:te", "language:th", "license:apache-2.0", "region:us" ]
2022-03-02T23:29:22+00:00
{"language": ["ar", "bn", "en", "fi", "id", "fi", "ja", "ko", "ru", "sw", "te", "th"], "license": "apache-2.0", "multilinguality": ["multilingual"], "task_categories": ["text-retrieval"]}
2022-10-12T19:25:51+00:00
1d43c80218d06d0ef80f5b172ccabd848b948bc1
# Dataset Summary Mr. TyDi is a multi-lingual benchmark dataset built on TyDi, covering eleven typologically diverse languages. It is designed for monolingual retrieval, specifically to evaluate ranking with learned dense representations. This dataset stores the queries, judgements, and example training data of Mr. TyDi. To access the corpus, please refer to [castorini/mr-tydi-corpus](https://huggingface.co/datasets/castorini/mr-tydi-corpus). # Dataset Structure The only configuration here is the `language`. For each language, there are three splits: `train`, `dev`, and `test`. The negative examples in the training set are sampled from the top-30 BM25 runfiles for each language. Specifically, we combine the **training** data for all languages under the `combined` configuration. An example from the `train` set looks as follows: ``` { 'query_id': '1', 'query': 'When was quantum field theory developed?', 'positive_passages': [ { 'docid': '25267#12', 'title': 'Quantum field theory', 'text': 'Quantum field theory naturally began with the study of electromagnetic interactions, as the electromagnetic field was the only known classical field as of the 1920s.' }, ... ] 'negative_passages': [ { 'docid': '346489#8', 'title': 'Local quantum field theory', 'text': 'More recently, the approach has been further implemented to include an algebraic version of quantum field ...' }, ... ], } ``` An example from the `dev` and `test` sets looks as follows. We only provide the docids of positive passages here to save space. Also, no candidate passages are provided at this point. Note that to perform retrieval, this dataset needs to be used together with [castorini/mr-tydi-corpus](https://huggingface.co/datasets/castorini/mr-tydi-corpus) ``` { 'query_id': '0', 'query': 'Is Creole a pidgin of French?', 'positive_passages': [ { 'docid': '3716905#1', 'title': '', 'text': '' }, ... ] } ``` # Load Dataset An example to load the dataset: ``` language = 'english' # to load all train, dev and test sets dataset = load_dataset('castorini/mr-tydi', language) # or to load a specific set: set_name = 'train' dataset = load_dataset('castorini/mr-tydi', language, split=set_name) ``` Note that the 'combined' option has only the 'train' set. # Citation Information ``` @article{mrtydi, title={{Mr. TyDi}: A Multi-lingual Benchmark for Dense Retrieval}, author={Xinyu Zhang and Xueguang Ma and Peng Shi and Jimmy Lin}, year={2021}, journal={arXiv:2108.08787}, } ```
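A small sketch (not part of the original card) of the join described above, resolving the docids of a dev example against the corpus; split and field names follow the two cards:

```python
from datasets import load_dataset

language = "english"

# The corpus card above exposes a single 'train' fold per language.
corpus = load_dataset("castorini/mr-tydi-corpus", language, split="train")
dev = load_dataset("castorini/mr-tydi", language, split="dev")

# docid -> passage text lookup; for the larger languages consider streaming or an on-disk index.
docid_to_text = {doc["docid"]: doc["text"] for doc in corpus}

example = dev[0]
print(example["query"])
for passage in example["positive_passages"]:
    # dev/test entries only carry the docid; the passage text comes from the corpus.
    print(passage["docid"], docid_to_text[passage["docid"]][:100])
```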
castorini/mr-tydi
[ "task_categories:text-retrieval", "multilinguality:multilingual", "language:ar", "language:bn", "language:en", "language:fi", "language:id", "language:ja", "language:ko", "language:ru", "language:sw", "language:te", "language:th", "license:apache-2.0", "region:us" ]
2022-03-02T23:29:22+00:00
{"language": ["ar", "bn", "en", "fi", "id", "fi", "ja", "ko", "ru", "sw", "te", "th"], "license": "apache-2.0", "multilinguality": ["multilingual"], "task_categories": ["text-retrieval"]}
2022-10-12T19:25:19+00:00
73205571221e7eac6953ed884e05c8625e06272c
# Dataset Summary The repo provides queries generated for the MS MARCO V1 document corpus with docTTTTTquery (sometimes written as docT5query or doc2query-T5), the latest version of the doc2query family of document expansion models. The basic idea is to train a model that, when given an input document, generates questions that the document might answer (or more broadly, queries for which the document might be relevant). These predicted questions (or queries) are then appended to the original documents, which are then indexed as before. The docTTTTTquery model gets its name from the use of T5 as the expansion model. # Dataset Structure All three folds (train, dev and test) share the same corpus. An example data entry looks as follows: ``` { "id": "D1555982", "predicted_queries": ["when find radius of star r", "what is r radius", "how to find out radius of star", "what is radius r", "what is radius of r", "how do you find radius of star igel", "which law states that radiation is proportional to radiation?", "what is the radius of a spherical star", "what is the radius of the star", "what is radius of star", "which radiation is produced during a solar radiation experiment?", "how to find radius r", "what is radius r of a star", "the hot glowing surfaces of stars emit energy in the form of", "what is the radius of a star", "what is the radius of a star", "how to find radius r on a star", "how to find radius r in a solar cell", "what kind of energy does a hot glowing surface of a star emit?", "what kind of energy does the hot glowing surface of stars emit"] } ``` # Load Dataset An example to load the dataset: ``` dataset = load_dataset('castorini/msmarco_v1_doc_doc2query-t5_expansions') ``` # Citation Information ``` @article{docTTTTTquery, title={From doc2query to {docTTTTTquery}}, author={Nogueira, Rodrigo and Lin, Jimmy}, year={2019} } @article{emdt5, author = "Ronak Pradeep and Rodrigo Nogueira and Jimmy Lin", title = "The Expando-Mono-Duo Design Pattern for Text Ranking with Pretrained Sequence-to-Sequence Models", journal = "arXiv:2101.05667", year = 2021, } ```
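A sketch (not part of the original card) of the append-then-index idea described above; the split name and the placeholder document store are assumptions, since the MS MARCO documents themselves are not in this repo:

```python
from datasets import load_dataset

# Load as in the card; indexing into "train" is an assumption about the split name.
expansions = load_dataset("castorini/msmarco_v1_doc_doc2query-t5_expansions")["train"]

# Placeholder: docid -> original MS MARCO V1 document text (the documents are not in this repo).
msmarco_docs = {"D1555982": "..."}

expanded_docs = {}
for row in expansions:
    doc_text = msmarco_docs.get(row["id"])
    if doc_text is None:
        continue
    # Append the predicted queries to the document before indexing it as usual (e.g. with BM25).
    expanded_docs[row["id"]] = doc_text + " " + " ".join(row["predicted_queries"])
```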
castorini/msmarco_v1_doc_doc2query-t5_expansions
[ "language:en", "license:apache-2.0", "region:us" ]
2022-03-02T23:29:22+00:00
{"language": ["en"], "license": "apache-2.0"}
2022-07-02T18:16:12+00:00
4254f0bda2a6e562cb2e53001220e0f1f981d2b8
# Dataset Summary The repo provides queries generated for the MS MARCO V1 document segmented corpus with docTTTTTquery (sometimes written as docT5query or doc2query-T5), the latest version of the doc2query family of document expansion models. The basic idea is to train a model that, when given an input document, generates questions that the document might answer (or more broadly, queries for which the document might be relevant). These predicted questions (or queries) are then appended to the original documents, which are then indexed as before. The docTTTTTquery model gets its name from the use of T5 as the expansion model. # Dataset Structure All three folds (train, dev and test) share the same corpus. An example data entry looks as follows: ``` { "id": "D1555982#0", "predicted_queries": ["when find radius of star r", "what is r radius", "how to find out radius of star", "what is radius r", "what is radius of r", "how do you find radius of star igel", "which law states that radiation is proportional to radiation?", "what is the radius of a spherical star", "what is the radius of the star", "what is radius of star"] } ``` # Load Dataset An example to load the dataset: ``` dataset = load_dataset('castorini/msmarco_v1_doc_segmented_doc2query-t5_expansions') ``` # Citation Information ``` @article{docTTTTTquery, title={From doc2query to {docTTTTTquery}}, author={Nogueira, Rodrigo and Lin, Jimmy}, year={2019} } @article{emdt5, author = "Ronak Pradeep and Rodrigo Nogueira and Jimmy Lin", title = "The Expando-Mono-Duo Design Pattern for Text Ranking with Pretrained Sequence-to-Sequence Models", journal = "arXiv:2101.05667", year = 2021, } ```
castorini/msmarco_v1_doc_segmented_doc2query-t5_expansions
[ "region:us" ]
2022-03-02T23:29:22+00:00
{"language": ["English"], "license": "Apache License 2.0"}
2021-11-10T04:51:35+00:00
aca81f4eabebd63c46026565b9123b17269bb1c4
# Dataset Summary The repo provides queries generated for the MS MARCO V1 passage corpus with docTTTTTquery (sometimes written as docT5query or doc2query-T5), the latest version of the doc2query family of document expansion models. The basic idea is to train a model, that when given an input document, generates questions that the document might answer (or more broadly, queries for which the document might be relevant). These predicted questions (or queries) are then appended to the original documents, which are then indexed as before. The docTTTTTquery model gets its name from the use of T5 as the expansion model. # Dataset Structure All three folds (train, dev and test) share the same corpus. The queries are generated from this corpus. An example data entry looks as follows: ``` { "id": "0", "predicted_queries": ["what was important to the success of the manhattan project", "why was the manhattan project important?", "what was important about the manhattan project", "why was the success of the manhattan project so important?", "who was the manhattan project a scientific project for", "what was the manhattan project important for", "why was the manhattan project a success", "how was the success of the manhattan project", "why was the manhattan project important to the success of the project?", "what is the importance of communication amongst scientific minds", "what was the importance of scientific communication for the success of the manhattan project", "what was the purpose of the manhattan project", "why was the manhattan project significant?", "why was the manhattan project important", "why did scientists believe in atomic power", "why did scientists and engineers have to communicate?", "why was the manhattan project a success", "what was the purpose of the manhattan project", "why did scientists and engineers want to be involved in the manhattan project", "why are the scientists so valuable", "which of the following was an important outcome of the manhattan project?", "why was the manhattan project successful", "why was the manhattan project an important scientific achievement", "what was the success of manhattan", "what was the result of the manhattan project", "why was communications important to the success of the manhattan project?", "why the manhattan project was important", "why is it important to know who is the manhattan project", "what was the most important accomplishment to the success of the manhattan project?", "why was the manhattan project an important achievement?", "why was the manhattan project important to the success of the atomic bomb", "how did the manhattan project impact scientists?", "what were the effects of the manhattan project", "what were the results of the manhattan project and how did they affect the public", "what was the manhattan project", "why did scientists contribute to the success of the manhattan project", "why was communication important in the manhattan project", "what was the effect of the manhattan project on the world", "what was the importance of communication in the success of the manhattan project?", "why was communications important to the success of the manhattan project?", "why was the manhattan project important", "what was the manhattan project", "why was the success of the manhattan project important", "why was manhattan project a success", "what was important about the manhattan project", "what benefited from the success of the new york nuclear bomb", "what was the significance to the success of the manhattan project?", "why is 
communication important", "why was the manhattan project an important achievement", "why did the manhattan project work", "what was the manhattan project's success", "what was the significance of the manhattan experiment", "how important was communication to the success of the manhattan project", "why is communication important to the success of the manhattan project?", "what was the importance of the manhattan project", "why did scientists believe the manhattan project had the greatest impact on science?", "what was a critical effect of the manhattan project?", "why did the manhattan project succeed", "what was the importance of the manhattan project", "why was the manhattan project important", "why was the manhattan project a success?", "what was the importance of communication and communication during the manhattan project", "why was the manhattan project significant?", "what was the importance of communication in the manhattan project?", "why was communication important to the success of the manhattan project?", "why was the manhattan project an important achievement", "what was important about the manhattan project", "why was the manhattan project a success", "why were the scientists at the manhattan project so successful?", "why did the manhattan project really work", "what was the success of the manhattan project", "what is the importance of communication during the manhattan project", "why was the manhattan project important", "why was communication important?", "what was the importance of communication in the success of the manhattan project?", "why was the manhattan project successful?", "which statement reflects the success of the manhattan project?", "why did the manhattan project succeed", "why was the manhattan project a great success", "why was the manhattan project important"] } ``` # Load Dataset An example to load the dataset: ``` dataset = load_dataset('castorini/msmarco_v1_passage_doc2query-t5_expansions', data_files='d2q.jsonl.gz') ``` # Citation Information ``` @article{docTTTTTquery, title={From doc2query to {docTTTTTquery}}, author={Nogueira, Rodrigo and Lin, Jimmy}, year={2019} } @article{emdt5, author={Ronak Pradeep and Rodrigo Nogueira and Jimmy Lin}, title={The Expando-Mono-Duo Design Pattern for Text Ranking with Pretrained Sequence-to-Sequence Models}, journal={arXiv:2101.05667}, year={2021}, }
castorini/msmarco_v1_passage_doc2query-t5_expansions
[ "region:us" ]
2022-03-02T23:29:22+00:00
{"language": ["English"], "license": "Apache License 2.0"}
2022-06-21T16:45:43+00:00
cb336701cbfdf1de2df51de8315b27fcec566c56
# Dataset Summary The repo provides queries generated for the MS MARCO v2 document corpus with docTTTTTquery (sometimes written as docT5query or doc2query-T5), the latest version of the doc2query family of document expansion models. The basic idea is to train a model that, when given an input document, generates questions that the document might answer (or more broadly, queries for which the document might be relevant). These predicted questions (or queries) are then appended to the original documents, which are then indexed as before. The docTTTTTquery model gets its name from the use of T5 as the expansion model. # Dataset Structure All three folds (train, dev and test) share the same corpus. An example data entry looks as follows: ``` { 'docid': '25#0', 'title': 'Autism', 'text': 'Autism is a developmental disorder characterized by difficulties with social interaction and communication, ...' } ``` # Load Dataset An example to load the dataset: ``` dataset = load_dataset('castorini/msmarco_v2_doc_doc2query-t5_expansions') ``` # Citation Information ``` @article{docTTTTTquery, title={From doc2query to {docTTTTTquery}}, author={Nogueira, Rodrigo and Lin, Jimmy}, year={2019} } @article{emdt5, author = "Ronak Pradeep and Rodrigo Nogueira and Jimmy Lin", title = "The Expando-Mono-Duo Design Pattern for Text Ranking with Pretrained Sequence-to-Sequence Models", journal = "arXiv:2101.05667", year = 2021, } ```
castorini/msmarco_v2_doc_doc2query-t5_expansions
[ "region:us" ]
2022-03-02T23:29:22+00:00
{"language": ["English"], "license": "Apache License 2.0"}
2021-11-11T17:41:32+00:00
61325a80b2ff2b81642bd532483dc51d0b46a8fb
# Dataset Summary The repo provides queries generated for the MS MARCO v2 document segmented corpus with docTTTTTquery (sometimes written as docT5query or doc2query-T5), the latest version of the doc2query family of document expansion models. The basic idea is to train a model that, when given an input document, generates questions that the document might answer (or more broadly, queries for which the document might be relevant). These predicted questions (or queries) are then appended to the original documents, which are then indexed as before. The docTTTTTquery model gets its name from the use of T5 as the expansion model. # Dataset Structure All three folds (train, dev and test) share the same corpus. An example data entry looks as follows: ``` { 'docid': '25#0', 'title': 'Autism', 'text': 'Autism is a developmental disorder characterized by difficulties with social interaction and communication, ...' } ``` # Load Dataset An example to load the dataset: ``` dataset = load_dataset('castorini/msmarco_v2_doc_segmented_doc2query-t5_expansions', data_files='d2q/d2q.jsonl???.gz') ``` # Citation Information ``` @article{docTTTTTquery, title={From doc2query to {docTTTTTquery}}, author={Nogueira, Rodrigo and Lin, Jimmy}, year={2019} } @article{emdt5, author = "Ronak Pradeep and Rodrigo Nogueira and Jimmy Lin", title = "The Expando-Mono-Duo Design Pattern for Text Ranking with Pretrained Sequence-to-Sequence Models", journal = "arXiv:2101.05667", year = 2021, } ```
castorini/msmarco_v2_doc_segmented_doc2query-t5_expansions
[ "region:us" ]
2022-03-02T23:29:22+00:00
{"language": ["English"], "license": "Apache License 2.0"}
2021-11-02T08:13:56+00:00
22a0c06017015ef75b33d066711b1ebc2ddb7e8e
# Dataset Summary The repo provides queries generated for the MS MARCO v2 passage corpus with docTTTTTquery (sometimes written as docT5query or doc2query-T5), the latest version of the doc2query family of document expansion models. The basic idea is to train a model that, when given an input document, generates questions that the document might answer (or more broadly, queries for which the document might be relevant). These predicted questions (or queries) are then appended to the original documents, which are then indexed as before. The docTTTTTquery model gets its name from the use of T5 as the expansion model. # Dataset Structure All three folds (train, dev and test) share the same corpus. The queries are generated from this corpus. An example data entry looks as follows: ``` { "id": "msmarco_passage_22_0", "predicted_queries": ["in drug combat does a zombie take more damage or die", "is the health bar the same as smash bros", "is brawlhalla health bar", "icpri league brawlhalla", "what is a battle brawlhalla", "is smash bros minecraft brawlhalla zombies", "what are the health bars on brawlhalla", "does smash bros have health bars", "is brawlhalla a health bar", "what is brawlhalla", "what is brwlhalla", "how many health bars is in brawlhalla", "is there health bar in brawlhalla", "what is boiledhalla?", "what is a good health bar in brawlhalla", "what is skills brawlhalla", "how many gobs in a brawlhalla", "is smash bros. an nsb game", "how many health bars are there in the brawlhalla", "what is brawlhalla"] } ``` # Load Dataset An example to load the dataset: ``` dataset = load_dataset('castorini/msmarco_v2_passage_doc2query-t5_expansions', data_files='d2q/d2q.jsonl???.gz') ``` # Citation Information ``` @article{docTTTTTquery, title={From doc2query to {docTTTTTquery}}, author={Nogueira, Rodrigo and Lin, Jimmy}, year={2019} } @article{emdt5, author={Ronak Pradeep and Rodrigo Nogueira and Jimmy Lin}, title={The Expando-Mono-Duo Design Pattern for Text Ranking with Pretrained Sequence-to-Sequence Models}, journal={arXiv:2101.05667}, year={2021}, } ```
castorini/msmarco_v2_passage_doc2query-t5_expansions
[ "region:us" ]
2022-03-02T23:29:22+00:00
{"language": ["English"], "license": "Apache License 2.0"}
2021-11-02T06:37:36+00:00
cc0700180b0a2f2618a039cb4827e51948986b6e
# Dataset Summary The repo provides answer, title and sentence expansions for the Natural Questions corpus with GAR-T5. # Dataset Structure There are dev and test folds. An example data entry of the dev split looks as follows: ``` { "id": "1", "predicted_answers": ["312"], "predicted_titles": ["Invisible Man"], "predicted_sentences": ["The Invisible Man First edition Author Ralph Ellison Cover artist M."] } ``` An example data entry of the test split looks as follows: ``` { "id": "1", "predicted_answers": ["May 18 , 2018"], "predicted_titles": ["Deadpool 2 *** Deadpool (film) *** Deadpool 2 (soundtrack) *** X-Men in other media"], "predicted_sentences": ["Deadpool 2 was released on May 18 , 2018 , with Leitch directing from a screenplay by Rhett Reese and Paul Wernick ."] } ``` # Load Dataset An example to load the dataset: ```python data_files = {"dev":"dev/dev.jsonl", "test": "test/test.jsonl"} dataset = load_dataset('castorini/nq_gar-t5_expansions', data_files=data_files) ```
castorini/nq_gar-t5_expansions
[ "language:en", "license:apache-2.0", "region:us" ]
2022-03-02T23:29:22+00:00
{"language": ["en"], "license": "apache-2.0"}
2023-10-10T17:58:22+00:00
5e899a9b63776d2982c72aa242cc35ecdb7073a4
# Dataset Summary The repo provides answer, title and sentence expansions for the TriviaQA corpus with GAR-T5. # Dataset Structure There are dev and test folds. An example data entry of the dev split looks as follows: ``` { "id": "1", "predicted_answers": ["Bz"], "predicted_titles": ["Vehicle registration plates of Belize *** Vehicle registration plate"], "predicted_sentences": ["The international code for Belize is \"\"BZ\"\"."] } ``` An example data entry of the test split looks as follows: ``` { "id": "1", "predicted_answers": ["Taurus"], "predicted_titles": ["Jamie Lee Curtis *** Under the Tuscan Sun *** Angels (Jamie Lee Curtis song) *** Under the Tuscan Sun (film) *** John Michael King *** Robert Earl *** Henry Jones, Sr. *** Jamie Lee (singer) *** Under the Tuscan Sun (1974 film) *** Richard Benjamin"], "predicted_sentences": ["In July 2007, several news outlets reported that the couple had quietly married in December 2007, and that Curtis had taken a liking to one another, sharing \"\"sweet nothings\"\" about their relationship."] } ``` # Load Dataset An example to load the dataset: ```python data_files = {"dev":"dev/dev.jsonl", "test": "test/test.jsonl"} dataset = load_dataset('castorini/triviaqa_gar-t5_expansions', data_files=data_files) ```
castorini/triviaqa_gar-t5_expansions
[ "region:us" ]
2022-03-02T23:29:22+00:00
{"language": ["English"], "license": "Apache License 2.0"}
2022-02-17T00:58:32+00:00
f9bd92144ed76200d6eb3ce73a8bd4eba9ffdc85
**Arxiv Classification: a classification of arXiv papers (11 classes).** This dataset is intended for long context classification (all documents have > 4k tokens). \ Copied from "Long Document Classification From Local Word Glimpses via Recurrent Attention Learning" ``` @ARTICLE{8675939, author={He, Jun and Wang, Liqun and Liu, Liu and Feng, Jiao and Wu, Hao}, journal={IEEE Access}, title={Long Document Classification From Local Word Glimpses via Recurrent Attention Learning}, year={2019}, volume={7}, number={}, pages={40707-40718}, doi={10.1109/ACCESS.2019.2907992} } ``` * See: https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=8675939 * See: https://github.com/LiqunW/Long-document-dataset It contains 11 slightly unbalanced classes, 33k arXiv papers divided into 3 splits: train (28k), val (2.5k) and test (2.5k). 2 configs: * default * no_ref, removes references to the class inside the document (e.g. [cs.LG] -> []) Compatible with [run_glue.py](https://github.com/huggingface/transformers/tree/master/examples/pytorch/text-classification) script: ``` export MODEL_NAME=roberta-base export MAX_SEQ_LENGTH=512 python run_glue.py \ --model_name_or_path $MODEL_NAME \ --dataset_name ccdv/arxiv-classification \ --do_train \ --do_eval \ --max_seq_length $MAX_SEQ_LENGTH \ --per_device_train_batch_size 8 \ --gradient_accumulation_steps 4 \ --learning_rate 2e-5 \ --num_train_epochs 1 \ --max_eval_samples 500 \ --output_dir tmp/arxiv ```
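A minimal loading sketch (not part of the original card) for the two configs listed above; the printed field names are an assumption:

```python
from datasets import load_dataset

# Config names follow the card ("default" keeps class references such as [cs.LG]; "no_ref" strips them).
with_refs = load_dataset("ccdv/arxiv-classification", "default", split="train")
no_refs = load_dataset("ccdv/arxiv-classification", "no_ref", split="train")

print(with_refs)            # splits/sizes per the card: train 28k, val 2.5k, test 2.5k
print(with_refs[0].keys())  # likely something like ('text', 'label') -- an assumption
```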
ccdv/arxiv-classification
[ "task_categories:text-classification", "task_ids:multi-class-classification", "task_ids:topic-classification", "size_categories:10K<n<100K", "language:en", "long context", "region:us" ]
2022-03-02T23:29:22+00:00
{"language": "en", "size_categories": "10K<n<100K", "task_categories": ["text-classification"], "task_ids": ["multi-class-classification", "topic-classification"], "tags": ["long context"]}
2022-10-22T08:23:50+00:00
f70ea0378deb9b1c8fa1032168dc4ea7d77f3259
# Arxiv dataset for summarization Dataset for summarization of long documents.\ Adapted from this [repo](https://github.com/armancohan/long-summarization).\ Note that the original data are pre-tokenized, so this dataset returns " ".join(text) and adds "\n" between paragraphs. \ This dataset is compatible with the [`run_summarization.py`](https://github.com/huggingface/transformers/tree/master/examples/pytorch/summarization) script from Transformers if you add this line to the `summarization_name_mapping` variable: ```python "ccdv/arxiv-summarization": ("article", "abstract") ``` ### Data Fields - `id`: paper id - `article`: a string containing the body of the paper - `abstract`: a string containing the abstract of the paper ### Data Splits This dataset has 3 splits: _train_, _validation_, and _test_. \ Token counts are white space based. | Dataset Split | Number of Instances | Avg. tokens (article / abstract) | | ------------- | --------------------|:----------------------| | Train | 203,037 | 6038 / 299 | | Validation | 6,436 | 5894 / 172 | | Test | 6,440 | 5905 / 174 | # Cite original article ``` @inproceedings{cohan-etal-2018-discourse, title = "A Discourse-Aware Attention Model for Abstractive Summarization of Long Documents", author = "Cohan, Arman and Dernoncourt, Franck and Kim, Doo Soon and Bui, Trung and Kim, Seokhwan and Chang, Walter and Goharian, Nazli", booktitle = "Proceedings of the 2018 Conference of the North {A}merican Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 2 (Short Papers)", month = jun, year = "2018", address = "New Orleans, Louisiana", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/N18-2097", doi = "10.18653/v1/N18-2097", pages = "615--621", abstract = "Neural abstractive summarization models have led to promising results in summarizing relatively short documents. We propose the first model for abstractive summarization of single, longer-form documents (e.g., research papers). Our approach consists of a new hierarchical encoder that models the discourse structure of a document, and an attentive discourse-aware decoder to generate the summary. Empirical results on two large-scale datasets of scientific papers show that our model significantly outperforms state-of-the-art models.", } ```
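A minimal loading sketch (not part of the original card) using the fields documented above:

```python
from datasets import load_dataset

# Splits per the card: train, validation, test.
ds = load_dataset("ccdv/arxiv-summarization", split="validation")

example = ds[0]
print(example["article"][:300])   # body of the paper (source)
print(example["abstract"][:300])  # abstract (summarization target)
```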
ccdv/arxiv-summarization
[ "task_categories:summarization", "task_categories:text-generation", "multilinguality:monolingual", "size_categories:100K<n<1M", "language:en", "conditional-text-generation", "region:us" ]
2022-03-02T23:29:22+00:00
{"language": ["en"], "multilinguality": ["monolingual"], "size_categories": ["100K<n<1M"], "task_categories": ["summarization", "text-generation"], "task_ids": [], "tags": ["conditional-text-generation"], "train-eval-index": [{"config": "document", "task": "summarization", "task_id": "summarization", "splits": {"eval_split": "test"}, "col_mapping": {"article": "text", "abstract": "target"}}]}
2022-12-08T06:58:05+00:00
dc2ce3bd19d8e323365bc1a244f3dd32e02d4f22
**Copy of the [cnn_dailymail](https://huggingface.co/datasets/cnn_dailymail) dataset fixing the "NotADirectoryError: [Errno 20]".** # Dataset Card for CNN Dailymail Dataset ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** - **Repository:** [CNN / DailyMail Dataset repository](https://github.com/abisee/cnn-dailymail) - **Paper:** [Abstractive Text Summarization Using Sequence-to-Sequence RNNs and Beyond](https://papers.nips.cc/paper/5945-teaching-machines-to-read-and-comprehend.pdf), [Get To The Point: Summarization with Pointer-Generator Networks](https://www.aclweb.org/anthology/K16-1028.pdf) - **Leaderboard:** [Papers with Code leaderboard for CNN / Dailymail Dataset](https://paperswithcode.com/sota/document-summarization-on-cnn-daily-mail) - **Point of Contact:** [Abigail See](mailto:[email protected]) ### Dataset Summary The CNN / DailyMail Dataset is an English-language dataset containing just over 300k unique news articles as written by journalists at CNN and the Daily Mail. The current version supports both extractive and abstractive summarization, though the original version was created for machine reading and comprehension and abstractive question answering. ### Supported Tasks and Leaderboards - 'summarization': [Versions 2.0.0 and 3.0.0 of the CNN / DailyMail Dataset](https://www.aclweb.org/anthology/K16-1028.pdf) can be used to train a model for abstractive and extractive summarization ([Version 1.0.0](https://papers.nips.cc/paper/5945-teaching-machines-to-read-and-comprehend.pdf) was developed for machine reading and comprehension and abstractive question answering). The model performance is measured by how high the output summary's [ROUGE](https://huggingface.co/metrics/rouge) score for a given article is when compared to the highlight as written by the original article author. [Zhong et al (2020)](https://www.aclweb.org/anthology/2020.acl-main.552.pdf) report a ROUGE-1 score of 44.41 when testing a model trained for extractive summarization. See the [Papers With Code leaderboard](https://paperswithcode.com/sota/document-summarization-on-cnn-daily-mail) for more models. ### Languages The BCP-47 code for English as generally spoken in the United States is en-US and the BCP-47 code for English as generally spoken in the United Kingdom is en-GB. It is unknown if other varieties of English are represented in the data. ## Dataset Structure ### Data Instances For each instance, there is a string for the article, a string for the highlights, and a string for the id. 
See the [CNN / Daily Mail dataset viewer](https://huggingface.co/datasets/viewer/?dataset=cnn_dailymail&config=3.0.0) to explore more examples. ``` {'id': '0054d6d30dbcad772e20b22771153a2a9cbeaf62', 'article': '(CNN) -- An American woman died aboard a cruise ship that docked at Rio de Janeiro on Tuesday, the same ship on which 86 passengers previously fell ill, according to the state-run Brazilian news agency, Agencia Brasil. The American tourist died aboard the MS Veendam, owned by cruise operator Holland America. Federal Police told Agencia Brasil that forensic doctors were investigating her death. The ship's doctors told police that the woman was elderly and suffered from diabetes and hypertension, according the agency. The other passengers came down with diarrhea prior to her death during an earlier part of the trip, the ship's doctors said. The Veendam left New York 36 days ago for a South America tour.' 'highlights': 'The elderly woman suffered from diabetes and hypertension, ship's doctors say .\nPreviously, 86 passengers had fallen ill on the ship, Agencia Brasil says .'} ``` The average token counts for the articles and the highlights are provided below: | Feature | Mean Token Count | | ---------- | ---------------- | | Article | 781 | | Highlights | 56 | ### Data Fields - `id`: a string containing the hexadecimal formatted SHA-1 hash of the url where the story was retrieved from - `article`: a string containing the body of the news article - `highlights`: a string containing the highlight of the article as written by the article author ### Data Splits The CNN/DailyMail dataset has 3 splits: _train_, _validation_, and _test_. Below are the statistics for Version 3.0.0 of the dataset. | Dataset Split | Number of Instances in Split | | ------------- | ------------------------------------------- | | Train | 287,113 | | Validation | 13,368 | | Test | 11,490 | ## Dataset Creation ### Curation Rationale Version 1.0.0 aimed to support supervised neural methodologies for machine reading and question answering with a large amount of real natural language training data and released about 313k unique articles and nearly 1M Cloze style questions to go with the articles. Versions 2.0.0 and 3.0.0 changed the structure of the dataset to support summarization rather than question answering. Version 3.0.0 provided a non-anonymized version of the data, whereas both the previous versions were preprocessed to replace named entities with unique identifier labels. ### Source Data #### Initial Data Collection and Normalization The data consists of news articles and highlight sentences. In the question answering setting of the data, the articles are used as the context and entities are hidden one at a time in the highlight sentences, producing Cloze style questions where the goal of the model is to correctly guess which entity in the context has been hidden in the highlight. In the summarization setting, the highlight sentences are concatenated to form a summary of the article. The CNN articles were written between April 2007 and April 2015. The Daily Mail articles were written between June 2010 and April 2015. The code for the original data collection is available at <https://github.com/deepmind/rc-data>. The articles were downloaded using archives of <www.cnn.com> and <www.dailymail.co.uk> on the Wayback Machine. Articles were not included in the Version 1.0.0 collection if they exceeded 2000 tokens. 
Due to accessibility issues with the Wayback Machine, Kyunghyun Cho has made the datasets available at <https://cs.nyu.edu/~kcho/DMQA/>. An updated version of the code that does not anonymize the data is available at <https://github.com/abisee/cnn-dailymail>. Hermann et al. provided their own tokenization script. The script provided by See uses the PTBTokenizer. It also lowercases the text and adds periods to lines missing them. #### Who are the source language producers? The text was written by journalists at CNN and the Daily Mail. ### Annotations The dataset does not contain any additional annotations. #### Annotation process [N/A] #### Who are the annotators? [N/A] ### Personal and Sensitive Information Version 3.0 is not anonymized, so individuals' names can be found in the dataset. Information about the original author is not included in the dataset. ## Considerations for Using the Data ### Social Impact of Dataset The purpose of this dataset is to help develop models that can summarize long paragraphs of text in one or two sentences. This task is useful for efficiently presenting information given a large quantity of text. It should be made clear that any summarizations produced by models trained on this dataset are reflective of the language used in the articles, but are in fact automatically generated. ### Discussion of Biases [Bordia and Bowman (2019)](https://www.aclweb.org/anthology/N19-3002.pdf) explore measuring gender bias and debiasing techniques in the CNN / Dailymail dataset, the Penn Treebank, and WikiText-2. They find the CNN / Dailymail dataset to have a slightly lower gender bias based on their metric compared to the other datasets, but still show evidence of gender bias when looking at words such as 'fragile'. Because the articles were written by and for people in the US and the UK, they will likely present specifically US and UK perspectives and feature events that are considered relevant to those populations during the time that the articles were published. ### Other Known Limitations News articles have been shown to conform to writing conventions in which important information is primarily presented in the first third of the article [(Kryściński et al., 2019)](https://www.aclweb.org/anthology/D19-1051.pdf). [Chen et al. (2016)](https://www.aclweb.org/anthology/P16-1223.pdf) conducted a manual study of 100 random instances of the first version of the dataset and found 25% of the samples to be difficult even for humans to answer correctly due to ambiguity and coreference errors. It should also be noted that machine-generated summarizations, even when extractive, may differ in truth values when compared to the original articles. ## Additional Information ### Dataset Curators The data was originally collected by Karl Moritz Hermann, Tomáš Kočiský, Edward Grefenstette, Lasse Espeholt, Will Kay, Mustafa Suleyman, and Phil Blunsom of Google DeepMind. Tomáš Kočiský and Phil Blunsom are also affiliated with the University of Oxford. They released scripts to collect and process the data into the question answering format. Ramesh Nallapati, Bowen Zhou, Cicero dos Santos, and Bing Xiang of IBM Watson and Çağlar Gülçehre of Université de Montréal modified Hermann et al.'s collection scripts to restore the data to a summary format. They also produced both anonymized and non-anonymized versions. The code for the non-anonymized version is made publicly available by Abigail See of Stanford University, Peter J. Liu of Google Brain and Christopher D. 
Manning of Stanford University at <https://github.com/abisee/cnn-dailymail>. The work at Stanford University was supported by the DARPA DEFT Program AFRL contract no. FA8750-13-2-0040. ### Licensing Information The CNN / Daily Mail dataset version 1.0.0 is released under the [Apache-2.0 License](http://www.apache.org/licenses/LICENSE-2.0). ### Citation Information ``` @inproceedings{see-etal-2017-get, title = "Get To The Point: Summarization with Pointer-Generator Networks", author = "See, Abigail and Liu, Peter J. and Manning, Christopher D.", booktitle = "Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)", month = jul, year = "2017", address = "Vancouver, Canada", publisher = "Association for Computational Linguistics", url = "https://www.aclweb.org/anthology/P17-1099", doi = "10.18653/v1/P17-1099", pages = "1073--1083", abstract = "Neural sequence-to-sequence models have provided a viable new approach for abstractive text summarization (meaning they are not restricted to simply selecting and rearranging passages from the original text). However, these models have two shortcomings: they are liable to reproduce factual details inaccurately, and they tend to repeat themselves. In this work we propose a novel architecture that augments the standard sequence-to-sequence attentional model in two orthogonal ways. First, we use a hybrid pointer-generator network that can copy words from the source text via pointing, which aids accurate reproduction of information, while retaining the ability to produce novel words through the generator. Second, we use coverage to keep track of what has been summarized, which discourages repetition. We apply our model to the CNN / Daily Mail summarization task, outperforming the current abstractive state-of-the-art by at least 2 ROUGE points.", } ``` ``` @inproceedings{DBLP:conf/nips/HermannKGEKSB15, author={Karl Moritz Hermann and Tomás Kociský and Edward Grefenstette and Lasse Espeholt and Will Kay and Mustafa Suleyman and Phil Blunsom}, title={Teaching Machines to Read and Comprehend}, year={2015}, cdate={1420070400000}, pages={1693-1701}, url={http://papers.nips.cc/paper/5945-teaching-machines-to-read-and-comprehend}, booktitle={NIPS}, crossref={conf/nips/2015} } ``` ### Contributions Thanks to [@thomwolf](https://github.com/thomwolf), [@lewtun](https://github.com/lewtun), [@jplu](https://github.com/jplu), [@jbragg](https://github.com/jbragg), [@patrickvonplaten](https://github.com/patrickvonplaten) and [@mcmillanmajora](https://github.com/mcmillanmajora) for adding this dataset.
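A minimal loading sketch for this mirror (not part of the original card); the config name is assumed to match the upstream cnn_dailymail dataset:

```python
from datasets import load_dataset

# Assumption: this copy keeps the upstream config names ("1.0.0", "2.0.0", "3.0.0").
ds = load_dataset("ccdv/cnn_dailymail", "3.0.0", split="validation")

example = ds[0]
print(example["article"][:300])  # body of the news article
print(example["highlights"])     # author-written highlights (summary target)
```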
ccdv/cnn_dailymail
[ "task_categories:summarization", "task_categories:text-generation", "annotations_creators:no-annotation", "language_creators:found", "multilinguality:monolingual", "size_categories:100K<n<1M", "source_datasets:original", "language:en", "license:apache-2.0", "conditional-text-generation", "region:us" ]
2022-03-02T23:29:22+00:00
{"annotations_creators": ["no-annotation"], "language_creators": ["found"], "language": ["en"], "license": ["apache-2.0"], "multilinguality": ["monolingual"], "size_categories": ["100K<n<1M"], "source_datasets": ["original"], "task_categories": ["summarization", "text-generation"], "task_ids": [], "paperswithcode_id": "cnn-daily-mail-1", "pretty_name": "CNN / Daily Mail", "tags": ["conditional-text-generation"]}
2022-10-24T19:31:59+00:00
b949637ab41c9f668a4b83cea46c80b489c02290
# GovReport dataset for summarization Dataset for summarization of long documents.\ Adapted from this [repo](https://github.com/luyang-huang96/LongDocSum) and this [paper](https://arxiv.org/pdf/2104.02112.pdf)\ This dataset is compatible with the [`run_summarization.py`](https://github.com/huggingface/transformers/tree/master/examples/pytorch/summarization) script from Transformers if you add this line to the `summarization_name_mapping` variable: ```python "ccdv/govreport-summarization": ("report", "summary") ``` ### Data Fields - `id`: paper id - `report`: a string containing the body of the report - `summary`: a string containing the summary of the report ### Data Splits This dataset has 3 splits: _train_, _validation_, and _test_. \ Token counts with a RoBERTa tokenizer. | Dataset Split | Number of Instances | Avg. tokens | | ------------- | --------------------|:----------------------| | Train | 17,517 | < 9,000 / < 500 | | Validation | 973 | < 9,000 / < 500 | | Test | 973 | < 9,000 / < 500 | # Cite original article ``` @misc{huang2021efficient, title={Efficient Attentions for Long Document Summarization}, author={Luyang Huang and Shuyang Cao and Nikolaus Parulian and Heng Ji and Lu Wang}, year={2021}, eprint={2104.02112}, archivePrefix={arXiv}, primaryClass={cs.CL} } ```
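As a rough usage sketch (assuming the dataset loads with its default configuration), the `report` and `summary` fields described above can be inspected directly with the `datasets` library:

```python
from datasets import load_dataset

# Pass a config name here if the loader defines several.
dataset = load_dataset("ccdv/govreport-summarization")

example = dataset["train"][0]
print(len(example["report"].split()), "whitespace tokens in the report")
print(example["summary"][:300])
```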
ccdv/govreport-summarization
[ "task_categories:summarization", "task_categories:text-generation", "multilinguality:monolingual", "size_categories:10K<n<100K", "language:en", "conditional-text-generation", "arxiv:2104.02112", "region:us" ]
2022-03-02T23:29:22+00:00
{"language": ["en"], "multilinguality": ["monolingual"], "size_categories": ["10K<n<100K"], "task_categories": ["summarization", "text-generation"], "task_ids": [], "tags": ["conditional-text-generation"]}
2022-10-24T19:32:47+00:00
2f38a1dfdecfacee0184d74eaeafd3c0fb49d2a6
**Patent Classification: a classification of patents and abstracts (9 classes).** This dataset is intended for long-context classification (non-abstract documents are longer than 512 tokens). \ Data are sampled from "BIGPATENT: A Large-Scale Dataset for Abstractive and Coherent Summarization." by Eva Sharma, Chen Li and Lu Wang * See: https://aclanthology.org/P19-1212.pdf * See: https://evasharma.github.io/bigpatent/ It contains 9 unbalanced classes and 35k patents with abstracts, divided into 3 splits: train (25k), val (5k) and test (5k). **Note that documents are uncased and space separated (by the authors)** Compatible with the [run_glue.py](https://github.com/huggingface/transformers/tree/master/examples/pytorch/text-classification) script: ``` export MODEL_NAME=roberta-base export MAX_SEQ_LENGTH=512 python run_glue.py \ --model_name_or_path $MODEL_NAME \ --dataset_name ccdv/patent-classification \ --do_train \ --do_eval \ --max_seq_length $MAX_SEQ_LENGTH \ --per_device_train_batch_size 8 \ --gradient_accumulation_steps 4 \ --learning_rate 2e-5 \ --num_train_epochs 1 \ --max_eval_samples 500 \ --output_dir tmp/patent ```
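A minimal loading sketch with the `datasets` library is shown below; the `text` and `label` column names are an assumption (they are not stated above), so adjust them to the actual schema if they differ:

```python
from datasets import load_dataset

dataset = load_dataset("ccdv/patent-classification")

# "text" and "label" are assumed column names; run_glue.py compatibility
# implies a single text column and an integer class label.
example = dataset["train"][0]
print(example["label"])
print(example["text"][:300])
```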
ccdv/patent-classification
[ "task_categories:text-classification", "task_ids:multi-class-classification", "task_ids:topic-classification", "size_categories:10K<n<100K", "language:en", "long context", "region:us" ]
2022-03-02T23:29:22+00:00
{"language": "en", "size_categories": "10K<n<100K", "task_categories": ["text-classification"], "task_ids": ["multi-class-classification", "topic-classification"], "tags": ["long context"]}
2022-10-22T08:25:36+00:00
26155ccf2b18393a38a05fafc26c66a068974839
# PubMed dataset for summarization Dataset for summarization of long documents.\ Adapted from this [repo](https://github.com/armancohan/long-summarization).\ Note that the original data are pre-tokenized, so this dataset returns " ".join(text) and adds "\n" between paragraphs. \ This dataset is compatible with the [`run_summarization.py`](https://github.com/huggingface/transformers/tree/master/examples/pytorch/summarization) script from Transformers if you add this line to the `summarization_name_mapping` variable: ```python "ccdv/pubmed-summarization": ("article", "abstract") ``` ### Data Fields - `id`: paper id - `article`: a string containing the body of the paper - `abstract`: a string containing the abstract of the paper ### Data Splits This dataset has 3 splits: _train_, _validation_, and _test_. \ Token counts are white space based. | Dataset Split | Number of Instances | Avg. tokens | | ------------- | --------------------|:----------------------| | Train | 119,924 | 3043 / 215 | | Validation | 6,633 | 3111 / 216 | | Test | 6,658 | 3092 / 219 | # Cite original article ``` @inproceedings{cohan-etal-2018-discourse, title = "A Discourse-Aware Attention Model for Abstractive Summarization of Long Documents", author = "Cohan, Arman and Dernoncourt, Franck and Kim, Doo Soon and Bui, Trung and Kim, Seokhwan and Chang, Walter and Goharian, Nazli", booktitle = "Proceedings of the 2018 Conference of the North {A}merican Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 2 (Short Papers)", month = jun, year = "2018", address = "New Orleans, Louisiana", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/N18-2097", doi = "10.18653/v1/N18-2097", pages = "615--621", abstract = "Neural abstractive summarization models have led to promising results in summarizing relatively short documents. We propose the first model for abstractive summarization of single, longer-form documents (e.g., research papers). Our approach consists of a new hierarchical encoder that models the discourse structure of a document, and an attentive discourse-aware decoder to generate the summary. Empirical results on two large-scale datasets of scientific papers show that our model significantly outperforms state-of-the-art models.", } ```
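A minimal loading sketch using the `article`/`abstract` fields listed above (assuming the dataset loads with its default configuration):

```python
from datasets import load_dataset

# Pass a config name here if the loader defines several.
dataset = load_dataset("ccdv/pubmed-summarization")

example = dataset["train"][0]
print(example["article"][:300])   # body of the paper
print(example["abstract"][:300])  # reference abstract
```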
ccdv/pubmed-summarization
[ "task_categories:summarization", "task_categories:text-generation", "multilinguality:monolingual", "size_categories:100K<n<1M", "language:en", "conditional-text-generation", "region:us" ]
2022-03-02T23:29:22+00:00
{"language": ["en"], "multilinguality": ["monolingual"], "size_categories": ["100K<n<1M"], "task_categories": ["summarization", "text-generation"], "task_ids": [], "tags": ["conditional-text-generation"]}
2022-10-24T19:33:04+00:00
464088ad69bd568eba869f3af6bc2f16a9cd9a5c
## Dataset Description - **Homepage:** cdleong.github.io # Dataset Summary: Pig-latin and English parallel machine translation corpus. Based on [The Project Gutenberg EBook of "De Bello Gallico" and Other Commentaries](https://www.gutenberg.org/ebooks/10657) Converted to pig-latin with https://github.com/bpabel/piglatin Blank lines removed. ## Dataset Structure ``` DatasetDict({ train: Dataset({ features: ['translation'], num_rows: 14778 }) validation: Dataset({ features: ['translation'], num_rows: 1000 }) }) ``` ### Data Instances ``` { 'translation': { 'eng': 'thrown into disorder they returned with more precipitation than is usual', 'engyay': 'own-thray into-ay isorder-day ey-thay eturned-ray ith-way ore-may ecipitation-pray an-thay is-ay usual-ay' } } ``` ### Data Fields - `translation`: a dictionary containing two strings paired with a key indicating the corresponding language. ### Data Splits - `train`: most of the data, 13,232 samples total. - `dev`: 1k holdout samples, created with the datasets.train_test_split() function
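A minimal loading sketch, using the `eng`/`engyay` keys shown in the data instance above:

```python
from datasets import load_dataset

dataset = load_dataset("cdleong/piglatin-mt")

pair = dataset["train"][0]["translation"]
print(pair["eng"])     # English side
print(pair["engyay"])  # Pig-latin side
```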
cdleong/piglatin-mt
[ "task_categories:translation", "multilinguality:translation", "size_categories:10K<n<100K", "source_datasets:original", "language:en", "license:mit", "region:us" ]
2022-03-02T23:29:22+00:00
{"language": ["en"], "license": ["mit"], "multilinguality": ["translation"], "size_categories": ["10K<n<100K"], "source_datasets": ["original"], "task_categories": ["translation"], "task_ids": [], "language_details": "eng and engyay"}
2022-10-24T18:22:09+00:00
edb6563c1ba616922132466f1a969807bba8651e
## Dataset Description - **Homepage:** https://zenodo.org/record/4661645 TEMPORARY TEST DATASET Not for actual use! Attempting to test out a dataset script for loading https://zenodo.org/record/4661645
cdleong/temp_africaNLP_keyword_spotting_for_african_languages
[ "language:wo", "language:fuc", "language:srr", "language:mnk", "language:snk", "region:us" ]
2022-03-02T23:29:22+00:00
{"language": ["wo", "fuc", "srr", "mnk", "snk"]}
2022-10-25T08:07:32+00:00
2784446f8e97c8a4a2ce7242bb8a7537b36ff3dc
This dataset contains 336 labeled quotes from William Shakespeare and Taylor Swift for supervised text classification. Source: https://www.kaggle.com/kellylougheed/tswift-vs-shakespeare
cemigo/taylor_vs_shakes
[ "region:us" ]
2022-03-02T23:29:22+00:00
{}
2021-03-14T23:45:59+00:00
321516f50bdcc1214fa75164c545478976ed84bd
<p align="center"><img src="https://huggingface.co/datasets/cfilt/HiNER-collapsed/raw/main/cfilt-dark-vec.png" alt="Computation for Indian Language Technology Logo" width="150" height="150"/></p> # IITB-English-Hindi Parallel Corpus [![License: CC BY-NC 4.0](https://img.shields.io/badge/License-CC%20BY--NC%204.0-lightgrey.svg)](https://creativecommons.org/licenses/by-nc/4.0/) [![Twitter Follow](https://img.shields.io/twitter/follow/cfiltnlp?color=1DA1F2&logo=twitter&style=flat-square)](https://twitter.com/cfiltnlp) [![Twitter Follow](https://img.shields.io/twitter/follow/PeopleCentredAI?color=1DA1F2&logo=twitter&style=flat-square)](https://twitter.com/PeopleCentredAI) ## About The IIT Bombay English-Hindi corpus contains a parallel corpus for English-Hindi as well as a monolingual Hindi corpus collected from a variety of existing sources and corpora developed at the Center for Indian Language Technology, IIT Bombay, over the years. This page describes the corpus. This corpus has been used at the Workshop on Asian Language Translation Shared Task since 2016 for the Hindi-to-English and English-to-Hindi language pairs, and as a pivot language pair for the Hindi-to-Japanese and Japanese-to-Hindi language pairs. The complete details of this corpus are available at [this URL](https://www.cfilt.iitb.ac.in/iitb_parallel/). We also provide this parallel corpus via browser download from the same URL. We also provide a monolingual Hindi corpus on the same URL. ### Recent Updates * Version 3.1 - December 2021 - Added 49,400 sentence pairs to the parallel corpus. * Version 3.0 - August 2020 - Added ~47,000 sentence pairs to the parallel corpus. ## Usage We provide a notebook that shows how to import the IITB English-Hindi Parallel Corpus from the HuggingFace datasets repository. The notebook also shows how to segment the corpus using BPE tokenization which can be used to train an English-Hindi MT System. [https://github.com/cfiltnlp/IITB-English-Hindi-PC](https://github.com/cfiltnlp/IITB-English-Hindi-PC) ## Other You can find a catalogue of other English-Hindi and other Indian language parallel corpora here: [Indic NLP Catalog](https://github.com/indicnlpweb/indicnlp_catalog) ## Maintainer(s) [Diptesh Kanojia](https://dipteshkanojia.github.io)<br/> Shivam Mhasker<br/> ## Citation If you use this corpus or its derivative resources for your research, kindly cite it as follows: Anoop Kunchukuttan, Pratik Mehta, Pushpak Bhattacharyya. The IIT Bombay English-Hindi Parallel Corpus. Language Resources and Evaluation Conference. 2018. ### BiBTeX Citation ```latex @inproceedings{kunchukuttan-etal-2018-iit, title = "The {IIT} {B}ombay {E}nglish-{H}indi Parallel Corpus", author = "Kunchukuttan, Anoop and Mehta, Pratik and Bhattacharyya, Pushpak", booktitle = "Proceedings of the Eleventh International Conference on Language Resources and Evaluation ({LREC} 2018)", month = may, year = "2018", address = "Miyazaki, Japan", publisher = "European Language Resources Association (ELRA)", url = "https://aclanthology.org/L18-1548", } ```
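Besides the notebook above, a minimal loading sketch with the `datasets` library looks roughly as follows; the `translation` column with `en`/`hi` keys is an assumption based on the usual layout of translation datasets on the Hub:

```python
from datasets import load_dataset

# Assumption: a "translation" column with "en" and "hi" keys.
dataset = load_dataset("cfilt/iitb-english-hindi")

pair = dataset["train"][0]["translation"]
print(pair["en"])
print(pair["hi"])
```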
cfilt/iitb-english-hindi
[ "language:en", "language:hi", "region:us" ]
2022-03-02T23:29:22+00:00
{"language": ["en", "hi"]}
2023-12-30T12:00:15+00:00
b88be4d36f97e51173120d42cd35ce2ffa074cc9
# Point Cloud MNIST A point cloud version of the original MNIST. ![sample](https://huggingface.co/datasets/cgarciae/point-cloud-mnist/resolve/main/docs/sample.png) ## Getting Started ```python import matplotlib.pyplot as plt import numpy as np from datasets import load_dataset # load dataset dataset = load_dataset("cgarciae/point-cloud-mnist") dataset.set_format("np") # get numpy arrays X_train = dataset["train"]["points"] y_train = dataset["train"]["label"] X_test = dataset["test"]["points"] y_test = dataset["test"]["label"] # plot some training samples figure = plt.figure(figsize=(10, 10)) for i in range(3): for j in range(3): k = 3 * i + j plt.subplot(3, 3, k + 1) idx = np.random.randint(0, len(X_train)) plt.title(f"{y_train[idx]}") plt.scatter(X_train[idx, :, 0], X_train[idx, :, 1]) plt.show() ``` ## Format * `points`: `(batch, point, 3)` array of uint8. * `label`: `(batch, 1)` array of uint8. Where `point` is the number of points in the point cloud. Points have no order and were shuffled when creating the data. Each point has the structure `[x, y, v]` where: * `x`: is the x coordinate of the point in the image. * `y`: is the y coordinate of the point in the image. * `v`: is the value of the pixel at the point in the image. Samples are padded with `0`s such that `point = 351`, since that is the largest number of non-zero pixels per image in the original dataset. You can tell apart padding points because they are the only ones where `v = 0`. Here is the distribution of non-zero pixels in MNIST: ![distribution](https://huggingface.co/datasets/cgarciae/point-cloud-mnist/resolve/main/docs/lengths.png)
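Because padding points are the only ones with `v = 0`, they can be stripped per sample before feeding a model; a small sketch:

```python
import numpy as np
from datasets import load_dataset

dataset = load_dataset("cgarciae/point-cloud-mnist")
dataset.set_format("np")

cloud = dataset["train"]["points"][0]  # shape (351, 3): columns are x, y, v
real_points = cloud[cloud[:, 2] > 0]   # drop padding points (v == 0)
print(f"{len(real_points)} real points out of {len(cloud)}")
```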
cgarciae/point-cloud-mnist
[ "region:us" ]
2022-03-02T23:29:22+00:00
{}
2021-10-31T23:09:55+00:00
8b8191c92578f5f381bd7020eddbb7c334d414eb
chau/ink_test01
[ "license:other", "region:us" ]
2022-03-02T23:29:22+00:00
{"license": "other"}
2022-02-15T09:15:56+00:00
c3d46ee0b1969347cb803449156be9a59e275ae7
## Dataset Description - **Homepage:** [scielo.org](https://search.livros.scielo.org/search/?fb=&where=BOOK&filter%5Bis_comercial_filter%5D%5B%5D=f) ### Dataset Summary This dataset contains all text from open-access PDFs on [scielo.org](https://search.livros.scielo.org/search/?fb=&where=BOOK&filter%5Bis_comercial_filter%5D%5B%5D=f). As of Dec. 5 2021, the total number of books available is 962. Some of them are not in native PDF format (e.g. scanned images), though. ### Supported Tasks and Leaderboards - `sequence-modeling` or `language-modeling`: The dataset can be used to train a language model. ### Languages As of Dec. 5 2021, there are 902 books in Portuguese, 55 in Spanish, and 5 in English. ## Dataset Structure ### Data Instances A typical instance is a JSON record with the book metadata and the extracted text, for example: ``` { "sbid":"23pcw", "id":"23pcw", "shortname":"", "title":"Educa\u00e7\u00e3o, sa\u00fade e esporte: novos\tdesafios \u00e0 Educa\u00e7\u00e3o F\u00edsica", "eisbn":"9788574554907", "isbn":"9788574554273", "author":"Farias, Gelcemar Oliveira; Nascimento, Juarez Vieira do", "corporate_authors":"", "translators":"", "coordinators":"", "editors":"", "others":"", "organizers":"", "collaborators":"", "publisher":"Editus", "language":"pt", "year": 2016, "synopsis":"\"A colet\u00e2nea contempla cap\u00edtulos que discutem a Educa\u00e7\u00e3o F\u00edsica a partir dos pressupostos da Educa\u00e7\u00e3o, da Sa\u00fade e do Esporte, enquanto importante desafio do momento atual e diante dos avan\u00e7os e das mudan\u00e7as que se consolidaram na forma\u00e7\u00e3o inicial em Educa\u00e7\u00e3o F\u00edsica. A obra convida a todos para a realiza\u00e7\u00e3o de futuras investiga\u00e7\u00f5es, no sentido de concentrar esfor\u00e7os para o fortalecimento de n\u00facleos de estudos e a sistematiza\u00e7\u00e3o de linhas de pesquisa.\"", "format":"", "type":"book", "is_public":"true", "is_comercial":"false", "publication_date":"2018-11-07", "_version_":"1718206093473087488", "pdf_url":"http://books.scielo.org//id/23pcw/pdf/farias-9788574554907.pdf", "pdf_filename":"farias-9788574554907.pdf", "metadata_filename":"farias-9788574554907.json", "text":"..." } ``` ### Data Fields All fields are of string type except `year`. ### Data Splits All records are in the default `train` split. ## Dataset Creation ### Curation Rationale Part of the BigScience efforts to create language modeling datasets. ### Source Data [scielo.org](https://search.livros.scielo.org/search/?fb=&where=BOOK&filter%5Bis_comercial_filter%5D%5B%5D=f) #### Initial Data Collection and Normalization All PDFs are directly downloaded from the website and text is extracted with the [pdftotext](https://pypi.org/project/pdftotext/) library. #### Who are the source language producers? NA ### Annotations No annotation is available. #### Annotation process NA #### Who are the annotators? NA ### Personal and Sensitive Information NA ## Considerations for Using the Data ### Social Impact of Dataset NA ### Discussion of Biases NA ### Other Known Limitations NA ## Additional Information ### Dataset Curators [@chenghao](https://huggingface.co/chenghao) ### Licensing Information [CC BY-NC-SA 3.0](https://creativecommons.org/licenses/by-nc-sa/3.0/) ### Contributions NA
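A minimal loading sketch, assuming the loader exposes the fields shown in the instance above:

```python
from datasets import load_dataset

dataset = load_dataset("chenghao/scielo_books", split="train")

# Keep only the Portuguese books and peek at one of them.
pt_books = dataset.filter(lambda book: book["language"] == "pt")
print(pt_books[0]["title"])
print(pt_books[0]["text"][:500])
```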
chenghao/scielo_books
[ "task_ids:language-modeling", "annotations_creators:no-annotation", "language_creators:found", "multilinguality:multilingual", "size_categories:n<1K", "source_datasets:original", "language:en", "language:pt", "language:es", "license:cc-by-nc-sa-3.0", "region:us" ]
2022-03-02T23:29:22+00:00
{"annotations_creators": ["no-annotation"], "language_creators": ["found"], "language": ["en", "pt", "es"], "license": ["cc-by-nc-sa-3.0"], "multilinguality": ["multilingual"], "size_categories": ["n<1K"], "source_datasets": ["original"], "task_categories": ["sequence-modeling"], "task_ids": ["language-modeling"]}
2022-07-01T17:34:59+00:00
d51bd8aa4dcb0d95600de289e7c6ea761d412c2d
# Dataset Card for [KsponSpeech] ## Table of Contents - [Table of Contents](#table-of-contents) - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** - **Repository:** - **Paper:** - **Leaderboard:** - **Point of Contact:** ### Dataset Summary KsponSpeech is a large-scale spontaneous speech corpus of Korean conversations. This corpus contains 969 hrs of general open-domain dialog utterances, spoken by about 2,000 native Korean speakers in a clean environment. All data were constructed by recording the dialogue of two people freely conversing on a variety of topics and manually transcribing the utterances. The transcription provides a dual transcription consisting of orthography and pronunciation, and disfluency tags for spontaneity of speech, such as filler words, repeated words, and word fragments. KsponSpeech is publicly available on an open data hub site of the Korea government. (https://aihub.or.kr/aidata/105) ### Supported Tasks and Leaderboards [More Information Needed] ### Languages [More Information Needed] ## Dataset Structure ### Data Instances [More Information Needed] ### Data Fields [More Information Needed] ### Data Splits [More Information Needed] ## Dataset Creation ### Curation Rationale [More Information Needed] ### Source Data #### Initial Data Collection and Normalization [More Information Needed] #### Who are the source language producers? [More Information Needed] ### Annotations #### Annotation process [More Information Needed] #### Who are the annotators? [More Information Needed] ### Personal and Sensitive Information [More Information Needed] ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed] ### Discussion of Biases [More Information Needed] ### Other Known Limitations [More Information Needed] ## Additional Information ### Dataset Curators [More Information Needed] ### Licensing Information [More Information Needed] ### Citation Information [More Information Needed] ### Contributions Thanks to [@github-username](https://github.com/<github-username>) for adding this dataset.
cheulyop/ksponspeech
[ "region:us" ]
2022-03-02T23:29:22+00:00
{"YAML tags": [{"copy-paste the tags obtained with the tagging app": "https://github.com/huggingface/datasets-tagging"}]}
2021-10-02T03:27:13+00:00
b1e632ba5e39486891c9ade0d6ba70561993c91d
This dataset can help in solving the contradiction detection (NLI) problem. The data is taken from Kaggle. Reference: the "Contradictory, My Dear Watson" competition.
chitra/contradictionNLI
[ "region:us" ]
2022-03-02T23:29:22+00:00
{}
2021-12-29T10:45:19+00:00
de949e03d6bdecb42f9300fb9be8f5a9b5acf5f4
This dataset is extracted from the Telugu subset of https://huggingface.co/datasets/ai4bharat/samanantar and is used to create Telugu KenLM language models for ASR decoding.
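A rough sketch of the intended workflow (exporting the sentences to a plain-text file that KenLM's `lmplz` can consume) is shown below; the `text` column name is an assumption, so adjust it to the actual schema:

```python
from datasets import load_dataset

# Assumption: a "text"-style column holds one Telugu sentence per row.
dataset = load_dataset("chmanoj/ai4bharat__samanantar_processed_te", split="train")

with open("telugu_corpus.txt", "w", encoding="utf-8") as f:
    for row in dataset:
        f.write(row["text"].strip() + "\n")

# The exported file can then be turned into an n-gram LM with KenLM, e.g.:
#   lmplz -o 5 < telugu_corpus.txt > telugu_5gram.arpa
#   build_binary telugu_5gram.arpa telugu_5gram.bin
```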
chmanoj/ai4bharat__samanantar_processed_te
[ "region:us" ]
2022-03-02T23:29:22+00:00
{}
2022-02-05T04:02:51+00:00
24fba98c601fcde47d5a50fe72d54fdf70b69e11
Dhivehi dataset for NMT (neural machine translation)
chopey/dhivehi
[ "region:us" ]
2022-03-02T23:29:22+00:00
{}
2021-11-30T03:41:11+00:00
6051cc3ed097fbbe93c7cc2c480279e230f43e93
# Punctuation restoration from read text Restore punctuation marks from the output of an ASR system. ## Motivation Speech transcripts generated by Automatic Speech Recognition (ASR) systems typically do not contain any punctuation or capitalization. In longer stretches of automatically recognized speech, the lack of punctuation affects the general clarity of the output text [1]. The primary purpose of punctuation (PR) and capitalization restoration (CR) as a distinct natural language processing (NLP) task is to improve the legibility of ASR-generated text, and possibly other types of texts without punctuation. Aside from their intrinsic value, PR and CR may improve the performance of other NLP aspects such as Named Entity Recognition (NER), part-of-speech (POS) and semantic parsing or spoken dialog segmentation [2, 3]. As useful as it seems, It is hard to systematically evaluate PR on transcripts of conversational language; mainly because punctuation rules can be ambiguous even for originally written texts, and the very nature of naturally-occurring spoken language makes it difficult to identify clear phrase and sentence boundaries [4,5]. Given these requirements and limitations, a PR task based on a redistributable corpus of read speech was suggested. 1200 texts included in this collection (totaling over 240,000 words) were selected from two distinct sources: WikiNews and WikiTalks. Punctuation found in these sources should be approached with some reservation when used for evaluation: these are original texts and may contain some user-induced errors and bias. The texts were read out by over a hundred different speakers. Original texts with punctuation were forced-aligned with recordings and used as the ideal ASR output. The goal of the task is to provide a solution for restoring punctuation in the test set collated for this task. The test set consists of time-aligned ASR transcriptions of read texts from the two sources. Participants are encouraged to use both text-based and speech-derived features to identify punctuation symbols (e.g. multimodal framework [6]). In addition, the train set is accompanied by reference text corpora of WikiNews and WikiTalks data that can be used in training and fine-tuning punctuation models. ## Task description The purpose of this task is to restore punctuation in the ASR recognition of texts read out loud. 
![](https://poleval.github.io/2021-punctuation-restoration/img/image001.png) **Input** ('tokens*'* column): sequence of tokens **Output** ('tags*'* column): sequence of tags **Measurements**: F1-score (seqeval) **Example**: Input: `['selekcjoner', 'szosowej', 'kadry', 'elity', 'mężczyzn', 'piotr', 'wadecki', 'ogłosił', '27', 'marca', '2008', 'r', 'szeroki', 'skład', 'zawodników', 'którzy', 'będą', 'rywalizować', 'o', 'miejsce', 'w', 'reprezentacji', 'na', 'tour', 'de', 'pologne', 'lista', 'liczy', '22', 'nazwiska', 'zawodników', 'zarówno', 'z', 'zagranicznych', 'jaki', 'i', 'polskich', 'ekip', 'spośród', '22', 'wybrańców', 'selekcjonera', 'do', 'składu', 'dostanie', 'się', 'tylko', 'ośmiu', 'kolarzy', 'którzy', 'we', 'wrześniu', 'będą', 'rywalizować', 'z', 'najlepszymi', 'grupami', 'kolarskimi', 'na', 'świecie', 'w', 'kręgu', 'zainteresowania', 'wadeckiego', 'znajduje', 'się', 'także', 'pięciu', 'innych', 'zawodników', 'ale', 'oni', 'prawdopodobnie', 'wystartują', 'w', 'polskim', 'tourze', 'w', 'szeregach', 'swoich', 'ekip', 'szeroka', 'kadra', 'na', 'tour', 'de', 'pologne', 'dariusz', 'baranowski', 'łukasz', 'bodnar', 'bartosz', 'huzarski', 'błażej', 'janiaczyk', 'tomasz', 'kiendyś', 'mateusz', 'komar', 'tomasz', 'lisowicz', 'piotr', 'mazur', 'jacek', 'morajko', 'przemysław', 'niemiec', 'marek', 'rutkiewicz', 'krzysztof', 'szczawiński', 'mateusz', 'taciak', 'adam', 'wadecki', 'mariusz', 'witecki', 'piotr', 'zaradny', 'piotr', 'zieliński', 'mateusz', 'mróz', 'marek', 'wesoły', 'jarosław', 'rębiewski', 'robert', 'radosz', 'jarosław', 'dąbrowski']` Input (translated by DeepL): `the selector of the men's elite road cycling team piotr wadecki announced on march 27, 2008 a wide line-up of riders who will compete for a place in the national team for the tour de pologne the list includes 22 names of riders both from foreign and Polish teams out of the 22 selected by the selector only eight riders will get into the line-up who in September will compete with the best cycling groups in the world wadecki's circle of interest also includes five other cyclists, but they will probably compete in the Polish tour in the ranks of their teams wide cadre for the tour de pologne dariusz baranowski łukasz bodnar bartosz huzarski błażej janiaczyk tomasz kiendyś mateusz komar tomasz lisowicz piotr mazur jacek morajko przemysław german marek rutkiewicz krzysztof szczawiński mateusz taciak adam wadecki mariusz witecki piotr zaradny piotr zieliński mateusz mróz marek wesoły jarosław rębiewski robert radosz jarosław dąbrowski` Output: `['O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'B-.', 'O', 'O', 'B-,', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'B-.', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'B-,', 'O', 'O', 'O', 'B-.', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'B-,', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'B-.', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'B-,', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'B-.', 'O', 'O', 'O', 'O', 'O', 'B-:', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O']` ## Dataset – WikiPunct WikiPunct is a crowdsourced text and audio data set of Polish Wikipedia pages read out loud by Polish lectors. The dataset is divided into two parts:conversational(WikiTalks)and information (WikiNews). Over a hundred people were involved in the production of the audio component. 
The total length of audio data reaches almost thirty-six hours, including the test set. Steps were taken to balance the male-to-female ratio. WikiPunct has over thirty-two thousand texts and 1200 audio files, one thousand in the training set and two hundred in the test set. There is a transcript of automatically recognized speech and force-aligned text for each text. The details behind the data format and evaluation metrics are presented below in the respective sections. **Statistics:** - **Text:** - over thirty-two thousand texts; WikiNews ca. 15,000, WikiTalks ca. 17,000; - **Audio:** - Selection procedure: - randomly selected WikiNews (80%, i.e. 800 entries for the training set) with the word count above 150 words and smaller than 300 words; - randomly selected WikiTalks (20%) with the word count above 150 words but smaller than 300 words and at least one question mark - Data set split - Training data: 1000 recordings - Test data: 274 recordings - Speakers: - Polish male: 51 speakers, 16.7 hours of speech - Polish female: 54 speakers, 19 hours of speech **Data splits** | Subset | Cardinality (texts) | | ----------- | ----------------------: | | train | 800 | | dev | 0 | | test | 200 | **Class distribution (without "O")** | Class | train | validation | test | |:--------|--------:|-------------:|-------:| | B-. | 0.419 | - | 0.416 | | B-, | 0.406 | - | 0.403 | | B-- | 0.097 | - | 0.099 | | B-: | 0.037 | - | 0.052 | | B-? | 0.032 | - | 0.024 | | B-! | 0.005 | - | 0.004 | | B-; | 0.004 | - | 0.002 | **Punctuation for raw text:** | | **symbol** | **mean** | **median** | **max** | **sum** | **included** | | --- | --- | --- | --- | --- | --- | --- | | **fullstop** | . | 12.44 | 7.0 | 1129.0 | 404 378 | yes | | **comma** | , | 10.97 | 5.0 | 1283.0 | 356 678 | yes | | **question\_mark** | ? | 0.83 | 0.0 | 130.0 | 26 879 | yes | | **exclamation\_mark** | ! | 0.22 | 0.0 | 55.0 | 7 164 | yes | | **hyphen** | - | 2.64 | 1.0 | 363.0 | 81 190 | yes | | **colon** | : | 1.49 | 0.0 | 202.0 | 44 995 | yes | | **ellipsis** | ... | 0.27 | 0.0 | 60.0 | 8 882 | yes | | **semicolon** | ; | 0.13 | 0.0 | 51.0 | 4 270 | no | | **quote** | " | 3.64 | 0.0 | 346.0 | 116 874 | no | | **words** | | 169.50 | 89.0 | 17252.0 | 5 452 032 | - | The dataset is divided into two parts: conversational (WikiTalks) and information (WikiNews). **Part 1. WikiTalks** Data scraped from Polish Wikipedia Talk pages. Talk pages, also known as discussion pages, are administration pages with editorial details and discussions for Wikipedia articles. Talk pages were scraped from the web using a list of article titles shared alongside Wikipedia dump archives. Wikipedia Talk pages serve as conversational data. Here, users communicate with each other by writing comments. Vocabulary and punctuation errors are expected. This data set covers 20% of the spoken data. Example: - **wikitalks001948:** Cóż za bzdury tu powypisywane! Fra Diavolo starał się nie dopuścić do upadku Republiki Partenopejskiej? Kto to wymyślił?! Człowiek ten był jednym z najżarliwszych wrogów francuskiej okupacji, a za zasługi w wypędzeniu Francuzów został mianowany pułkownikiem w królewskiej armii z prawdziwie królewską pensją. Bez niego wyzwolenie, nazywać to tak czy też nie, północnej części królestwa byłoby dużo trudniejsze, bo dysponował siłą kilku tysięcy sprawnych w boju i umiejętnie wziętych w karby rzezimieszków. Toteż armia Burbonów nie pokonywała go, jak to się twierdzi w artykule, lecz ściśle współpracowała.
Redaktorów zachęcam do jak najszybszej korekty artykułu, bo aktualnie jest obrazą dla ambicji Wikipedii. 91.199.250.17 - **wikitalks008902:** Stare wątki w dyskusji przeniosłem do archiwum. Od prawie roku dyskusja w nich nie była kontynuowana. Sławek Borewicz **Part 2. WikiNews** **Wikinews** is a free-content news wiki and a project of the Wikimedia Foundation. The site works through collaborative journalism. The data was scraped directly from wikinews dump archive. The overall text quality is high, but vocabulary and punctuation errors may occur. This data set covers 80% of the spoken data. Example: - **wikinews222361:** Misja STS-127 promu kosmicznego Endeavour do Międzynarodowej Stacji Kosmicznej została przełożona ze względu na wyciek wodoru. Podczas procesu napełniania zewnętrznego zbiornika paliwem, część ciekłego wodoru przemieniła się w gaz i przedostała się do systemu odpowietrzania. System ten jest używany do bezpiecznego odprowadzania nadmiaru wodoru z platformy startowej 39A do Centrum Lotów Kosmicznych imienia Johna F. Kennedy&#39;ego. Początek misji miał mieć miejsce dzisiaj, o godzinie 13:17. Ze względu jednak na awarię, najbliższa możliwa data startu wahadłowca to środa 17 czerwca, jednak na ten dzień NASA na Przylądku Canaveral zaplanowana wystrzelenie sondy kosmicznej Lunar Reconnaissance Orbiter. Misja może być zatem opóźniona do 20 czerwca, który jest ostatnią możliwą datą startu w tym miesiącu. W niedzielę odbędzie się spotkanie specjalistów NASA, na którym zostanie ustalona nowa data startu i dalszy plan misji STS-127. ## Data format Input is a TSV file with two columns: 1. Text ID (to be used when handling forced-aligned transcriptions and WAV files if needed) 2. Input text - in lower-case letter without punctuation marks The output should have the same number of lines as the input file, in each line the text with punctuation marks should be given. ### Forced-aligned transcriptions We use force-aligned transcriptions of the original texts to approximate ASR output. Files in the _.clntmstmp_ format contain forced-alignment of the original text together with the audio file read out by a group of volunteers. The files may contain errors resulting from incorrect reading of the text (skipping fragments, adding words missing from the original text) and alignment errors resulting from the configuration of the alignment tool for text and audio files. The configuration targeted Polish; names from foreign languages may be poorly recognised, with the word duration equal to zero (start and end timestamps are equal). Data is given in the following format: **(timestamp\_start,timestamp\_end) word** ... **\</s\>** where **\</s\>** is a symbol of the end of recognition. Example: (990,1200) Rosja (1230,1500) zaczyna (1590,1950) powracać (1980,2040) do (2070,2400) praktyk (2430,2490) z (2520,2760) czasów (2820,3090) zimnej (3180,3180) wojny. (3960,4290) Rosjanie (4380,4770) wznowili (4860,5070) bowiem (5100,5160) na (5220,5430) stałe (5520,5670) loty (5760,6030) swoich (6120,6600) bombowców (6630,7230) strategicznych (7350,7530) poza (7590,7890) granice (8010,8010) kraju. (8880,9300) Prezydent (9360,9810) Władimir (9930,10200) Putin (10650,10650) wyjaśnił, (10830,10920) iż (10980,11130) jest (11160,11190) to (11220,11520) odpowiedź (11550,11640) na (11670,12120) zagrożenie (12240,12300) ze (12330,12570) strony (12660,12870) innych (13140,13140) państw. \</s\> ## Evaluation procedure Baseline results will be provided in final evaluation. 
### Punctuation During the task the following punctuation marks will be evaluated: | **Punctuation mark** | **symbol** | | --- | --- | | fullstop | . | | comma | , | | question mark | ? | | exclamation mark | ! | | hyphen | - | | colon | : | | ellipsis | ... | | blank (no punctuation) | | Note that semi-colon (`;`) is disregarded here. ### Submission format The output to be evaluated is just the text with punctuation marks added. ### Metrics Final results are evaluated in terms of precision, recall, and F1 scores for predicting each punctuation mark separately. Submissions are compared with respect to the weighted average of F1 scores for each punctuation mark (a minimal seqeval-based sketch is given after the references below). ##### Per-document score: ![](https://poleval.github.io/2021-punctuation-restoration/img/image003.png) ##### Global score per punctuation mark _p_: ![](https://poleval.github.io/2021-punctuation-restoration/img/image005.png) The final scoring metric is calculated as the weighted average of global scores per ![](https://poleval.github.io/2021-punctuation-restoration/img/image007.png) We would like to invite participants to a discussion about evaluation metrics, taking into account such factors as: - ASR and Forced-Alignment errors, - inconsistencies among annotators, - impact of only slight displacement of punctuation, - assigning different weights to different types of errors. ### Video introduction [![Video instruction](http://img.youtube.com/vi/yEh-RiFGN94/0.jpg)](http://www.youtube.com/watch?v=yEh-RiFGN94 "Video instruction") ### Downloads Data has been published in the following repository: https://github.com/poleval/2021-punctuation-restoration Training data is provided in train/\*.tsv. Additional data can be downloaded from Google Drive. Below is a list of file names along with a description of what they contain. - [poleval\_fa.train.tar.gz](https://drive.google.com/file/d/1oBFjZPb5Hk4r_VW4G0HrVnGy7A7zmTpa/view?usp=sharing) - archive contains forced-alignment of the original text together with the audio file - [poleval\_wav.train.tar.gz](https://drive.google.com/file/d/1b6MyyqgA9D1U7DX3Vtgda7f9ppkxjCXJ/view?usp=sharing) - archive contains training audio files - [poleval\_wav.validation.tar.gz](https://drive.google.com/file/d/1gwQRvrUtFqz3xGnmEN8znAzkBwC12Czu/view?usp=sharing) - archive contains test audio files - [poleval\_text.rest.tar.gz](https://drive.google.com/file/d/10SdpLHPLXVfhJsq1okgC5fcxbFzCGoR5/view?usp=sharing) - archive contains additional text provided in JSON format and CSV for which no audio files were provided (can be used for training purposes) ### Challenge stage The competition took place in September 2021. Now the challenge is in the after-competition stage. You can submit solutions, but they will be marked with a different color. ### License Creative Commons - Attribution-NonCommercial-NoDerivatives 4.0 International (CC BY-NC-ND 4.0) ### References 1. Yi, J., Tao, J., Bai, Y., Tian, Z., & Fan, C. (2020). Adversarial transfer learning for punctuation restoration. _arXiv preprint arXiv:2004.00248_. 2. Nguyen, Thai Binh, et al. "Improving Vietnamese Named Entity Recognition from Speech Using Word Capitalization and Punctuation Recovery Models." _Proc. Interspeech 2020_ (2020): 4263-4267. 3. Hlubík, Pavel, et al. "Inserting Punctuation to ASR Output in a Real-Time Production Environment." _International Conference on Text, Speech, and Dialogue_. Springer, Cham, 2020. 4. Sirts, Kairit, and Kairit Peekman. "Evaluating Sentence Segmentation and Word Tokenization Systems on Estonian Web Texts." _Human Language Technologies–The Baltic Perspective: Proceedings of the Ninth International Conference Baltic HLT 2020_. Vol. 328. IOS Press, 2020. 5. Wang, Xueyujie. "Analysis of Sentence Boundary of the Host's Spoken Language Based on Semantic Orientation Pointwise Mutual Information Algorithm." _2020 12th International Conference on Measuring Technology and Mechatronics Automation (ICMTMA)_. IEEE, 2020. 6. Sunkara, Monica, et al. "Multimodal Semi-supervised Learning Framework for Punctuation Prediction in Conversational Speech." _arXiv preprint arXiv:2008.00702_ (2020).
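### Evaluation example (sketch) A minimal sketch of how the seqeval-based F1 described in the Metrics section can be computed locally, using toy tag sequences in the `B-x`/`O` scheme used by this task:

```python
from datasets import load_metric

# Toy reference and prediction tag sequences (B-x / O scheme).
references = [["O", "O", "B-,", "O", "O", "B-."]]
predictions = [["O", "O", "B-,", "O", "B-.", "O"]]

seqeval = load_metric("seqeval")
scores = seqeval.compute(predictions=predictions, references=references)
print(scores["overall_precision"], scores["overall_recall"], scores["overall_f1"])
```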
clarin-pl/2021-punctuation-restoration
[ "task_categories:automatic-speech-recognition", "annotations_creators:crowdsourced", "language_creators:crowdsourced", "multilinguality:monolingual", "size_categories:n<1K", "language:pl", "region:us" ]
2022-03-02T23:29:22+00:00
{"annotations_creators": ["crowdsourced"], "language_creators": ["crowdsourced"], "language": ["pl"], "license": [], "multilinguality": ["monolingual"], "size_categories": ["n<1K"], "source_datasets": [], "task_categories": ["automatic-speech-recognition"], "task_ids": [], "pretty_name": "2021-punctuation-restoration", "tags": []}
2022-08-29T15:39:18+00:00
55467c09094ac3a0d8261013f884f8f3247b53a0
# AspectEmo ## Description AspectEmo Corpus is an extended version of the publicly available PolEmo 2.0 corpus of Polish customer reviews, used in many projects on the use of different methods in sentiment analysis. The AspectEmo corpus consists of four subcorpora, each containing online customer reviews from the following domains: school, medicine, hotels, and products. All documents are annotated at the aspect level with six sentiment categories: strong negative (minus_m), weak negative (minus_s), neutral (zero), weak positive (plus_s), strong positive (plus_m), and ambiguous (amb). ## Versions | version | config name | description | default | notes | |---------|-------------|--------------------------------|---------|------------------| | 1.0 | "1.0" | The version used in the paper. | YES | | | 2.0 | - | Some bugs fixed. | NO | work in progress | ## Tasks (input, output and metrics) Aspect-based sentiment analysis (ABSA) is a text analysis method that categorizes data by aspects and identifies the sentiment assigned to each aspect. It is a sequence tagging task. **Input** ('*tokens*' column): sequence of tokens **Output** ('*labels*' column): sequence of predicted tokens’ classes ("O" + 6 possible classes: strong negative (a_minus_m), weak negative (a_minus_s), neutral (a_zero), weak positive (a_plus_s), strong positive (a_plus_m), ambiguous (a_amb)) **Domain**: school, medicine, hotels and products **Measurements**: F1-score (seqeval) **Example:** Input: `['Dużo', 'wymaga', ',', 'ale', 'bardzo', 'uczciwy', 'i', 'przyjazny', 'studentom', '.', 'Warto', 'chodzić', 'na', 'konsultacje', '.', 'Docenia', 'postępy', 'i', 'zaangażowanie', '.', 'Polecam', '.']` Input (translated by DeepL): `'Demands a lot , but very honest and student friendly . Worth going to consultations . Appreciates progress and commitment .
I recommend .'` Output: `['O', 'a_plus_s', 'O', 'O', 'O', 'a_plus_m', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'a_zero', 'O', 'a_plus_m', 'O', 'O', 'O', 'O', 'O', 'O']` ## Data splits | Subset | Cardinality (sentences) | |:-------|------------------------:| | train | 1173 | | val | 0 | | test | 292 | ## Class distribution(without "O") | Class | train | validation | test | |:----------|--------:|-------------:|-------:| | a_plus_m | 0.359 | - | 0.369 | | a_minus_m | 0.305 | - | 0.377 | | a_zero | 0.234 | - | 0.182 | | a_minus_s | 0.037 | - | 0.024 | | a_plus_s | 0.037 | - | 0.015 | | a_amb | 0.027 | - | 0.033 | ## Citation ``` @misc{11321/849, title = {{AspectEmo} 1.0: Multi-Domain Corpus of Consumer Reviews for Aspect-Based Sentiment Analysis}, author = {Koco{\'n}, Jan and Radom, Jarema and Kaczmarz-Wawryk, Ewa and Wabnic, Kamil and Zaj{\c a}czkowska, Ada and Za{\'s}ko-Zieli{\'n}ska, Monika}, url = {http://hdl.handle.net/11321/849}, note = {{CLARIN}-{PL} digital repository}, copyright = {The {MIT} License}, year = {2021} } ``` ## License ``` The MIT License ``` ## Links [HuggingFace](https://huggingface.co/datasets/clarin-pl/aspectemo) [Source](https://clarin-pl.eu/dspace/handle/11321/849) [Paper](https://sentic.net/sentire2021kocon.pdf) ## Examples ### Loading ```python from pprint import pprint from datasets import load_dataset dataset = load_dataset("clarin-pl/aspectemo") pprint(dataset['train'][20]) # {'labels': [0, 4, 0, 0, 0, 5, 0, 0, 0, 0, 0, 0, 0, 3, 0, 5, 0, 0, 0, 0, 0, 0], # 'tokens': ['Dużo', # 'wymaga', # ',', # 'ale', # 'bardzo', # 'uczciwy', # 'i', # 'przyjazny', # 'studentom', # '.', # 'Warto', # 'chodzić', # 'na', # 'konsultacje', # '.', # 'Docenia', # 'postępy', # 'i', # 'zaangażowanie', # '.', # 'Polecam', # '.']} ``` ### Evaluation ```python import random from pprint import pprint from datasets import load_dataset, load_metric dataset = load_dataset("clarin-pl/aspectemo") references = dataset["test"]["labels"] # generate random predictions predictions = [ [ random.randrange(dataset["train"].features["labels"].feature.num_classes) for _ in range(len(labels)) ] for labels in references ] # transform to original names of labels references_named = [ [dataset["train"].features["labels"].feature.names[label] for label in labels] for labels in references ] predictions_named = [ [dataset["train"].features["labels"].feature.names[label] for label in labels] for labels in predictions ] # transform to BILOU scheme references_named = [ [f"U-{label}" if label != "O" else label for label in labels] for labels in references_named ] predictions_named = [ [f"U-{label}" if label != "O" else label for label in labels] for labels in predictions_named ] # utilise seqeval to evaluate seqeval = load_metric("seqeval") seqeval_score = seqeval.compute( predictions=predictions_named, references=references_named, scheme="BILOU", mode="strict", ) pprint(seqeval_score) # {'a_amb': {'f1': 0.00597237775289287, # 'number': 91, # 'precision': 0.003037782418834251, # 'recall': 0.17582417582417584}, # 'a_minus_m': {'f1': 0.048306148055207034, # 'number': 1039, # 'precision': 0.0288551620760727, # 'recall': 0.1482194417709336}, # 'a_minus_s': {'f1': 0.004682997118155619, # 'number': 67, # 'precision': 0.0023701002734731083, # 'recall': 0.19402985074626866}, # 'a_plus_m': {'f1': 0.045933014354066985, # 'number': 1015, # 'precision': 0.027402473834443386, # 'recall': 0.14187192118226602}, # 'a_plus_s': {'f1': 0.0021750951604132683, # 'number': 41, # 'precision': 0.001095690284879474, # 'recall': 
0.14634146341463414}, # 'a_zero': {'f1': 0.025159400310184387, # 'number': 501, # 'precision': 0.013768389287061486, # 'recall': 0.14570858283433133}, # 'overall_accuracy': 0.13970115681233933, # 'overall_f1': 0.02328248652368391, # 'overall_precision': 0.012639312620633834, # 'overall_recall': 0.14742193173565724} ```
clarin-pl/aspectemo
[ "task_categories:token-classification", "task_ids:sentiment-classification", "annotations_creators:expert-generated", "language_creators:other", "multilinguality:monolingual", "size_categories:1K", "size_categories:1K<n<10K", "source_datasets:original", "language:pl", "license:mit", "region:us" ]
2022-03-02T23:29:22+00:00
{"annotations_creators": ["expert-generated"], "language_creators": ["other"], "language": ["pl"], "license": ["mit"], "multilinguality": ["monolingual"], "size_categories": ["1K", "1K<n<10K"], "source_datasets": ["original"], "task_categories": ["token-classification"], "task_ids": ["sentiment-classification"], "pretty_name": "AspectEmo"}
2022-08-29T15:39:32+00:00
a20247f8787670ef987fa61e1c9139227c186105
# KPWR-NER ## Description KPWR-NER is a part the Polish Corpus of Wrocław University of Technology (*Korpus Języka Polskiego Politechniki Wrocławskiej*). Its objective is named entity recognition for fine-grained categories of entities. It is the ‘n82’ version of the KPWr, which means that number of classes is restricted to 82 (originally 120). During corpus creation, texts were annotated by humans from various sources, covering many domains and genres. ## Tasks (input, output and metrics) Named entity recognition (NER) - tagging entities in text with their corresponding type. **Input** ('*tokens'* column): sequence of tokens **Output** ('*ner'* column): sequence of predicted tokens’ classes in BIO notation (82 possible classes, described in detail in the annotation guidelines) **Measurements**: F1-score (seqeval) **Example**: Input: `[‘Roboty’, ‘mają’, ‘kilkanaście’, ‘lat’, ‘i’, ‘pochodzą’, ‘z’, ‘USA’, ‘,’, ‘Wysokie’, ‘napięcie’, ‘jest’, ‘dużo’, ‘młodsze’, ‘,’, ‘powstało’, ‘w’, ‘Niemczech’, ‘.’]` Input (translated by DeepL): `Robots are more than a dozen years old and come from the US, High Voltage is much younger, having been developed in Germany.` Output: `[‘B-nam_pro_title’, ‘O’, ‘O’, ‘O’, ‘O’, ‘O’, ‘O’, ‘B-nam_loc_gpe_country’, ‘O’, ‘B-nam_pro_title’, ‘I-nam_pro_title’, ‘O’, ‘O’, ‘O’, ‘O’, ‘O’, ‘O’, ‘B-nam_loc_gpe_country’, ‘O’]` ## Data splits | Subset | Cardinality (sentences) | |--------|------------------------:| | train | 13959 | | dev | 0 | | test | 4323 | ## Class distribution (without "O" and "I-*") | Class | train | validation | test | |:----------------------------|--------:|-------------:|----------:| | B-nam_liv_person | 0.21910 | - | 0.21422 | | B-nam_loc_gpe_city | 0.10101 | - | 0.09865 | | B-nam_loc_gpe_country | 0.07467 | - | 0.08059 | | B-nam_org_institution | 0.05893 | - | 0.06005 | | B-nam_org_organization | 0.04448 | - | 0.05553 | | B-nam_org_group_team | 0.03492 | - | 0.03363 | | B-nam_adj_country | 0.03410 | - | 0.03747 | | B-nam_org_company | 0.02439 | - | 0.01716 | | B-nam_pro_media_periodic | 0.02250 | - | 0.01896 | | B-nam_fac_road | 0.01995 | - | 0.02144 | | B-nam_liv_god | 0.01934 | - | 0.00790 | | B-nam_org_nation | 0.01739 | - | 0.01828 | | B-nam_oth_tech | 0.01724 | - | 0.01377 | | B-nam_pro_media_web | 0.01709 | - | 0.00903 | | B-nam_fac_goe | 0.01596 | - | 0.01445 | | B-nam_eve_human | 0.01573 | - | 0.01761 | | B-nam_pro_title | 0.01558 | - | 0.00790 | | B-nam_pro_brand | 0.01543 | - | 0.01038 | | B-nam_org_political_party | 0.01264 | - | 0.01309 | | B-nam_loc_gpe_admin1 | 0.01219 | - | 0.01445 | | B-nam_eve_human_sport | 0.01174 | - | 0.01242 | | B-nam_pro_software | 0.01091 | - | 0.02190 | | B-nam_adj | 0.00963 | - | 0.01174 | | B-nam_loc_gpe_admin3 | 0.00888 | - | 0.01061 | | B-nam_pro_model_car | 0.00873 | - | 0.00587 | | B-nam_loc_hydronym_river | 0.00843 | - | 0.01151 | | B-nam_oth | 0.00775 | - | 0.00497 | | B-nam_pro_title_document | 0.00738 | - | 0.01986 | | B-nam_loc_astronomical | 0.00730 | - | - | | B-nam_oth_currency | 0.00723 | - | 0.01151 | | B-nam_adj_city | 0.00670 | - | 0.00948 | | B-nam_org_group_band | 0.00587 | - | 0.00429 | | B-nam_loc_gpe_admin2 | 0.00565 | - | 0.00813 | | B-nam_loc_gpe_district | 0.00504 | - | 0.00406 | | B-nam_loc_land_continent | 0.00459 | - | 0.00722 | | B-nam_loc_country_region | 0.00459 | - | 0.00090 | | B-nam_loc_land_mountain | 0.00414 | - | 0.00203 | | B-nam_pro_title_book | 0.00384 | - | 0.00248 | | B-nam_loc_historical_region | 0.00376 | - | 0.00497 | | B-nam_loc | 0.00361 | - | 0.00090 | | B-nam_eve | 
0.00361 | - | 0.00181 | | B-nam_org_group | 0.00331 | - | 0.00406 | | B-nam_loc_land_island | 0.00331 | - | 0.00248 | | B-nam_pro_media_tv | 0.00316 | - | 0.00158 | | B-nam_liv_habitant | 0.00316 | - | 0.00158 | | B-nam_eve_human_cultural | 0.00316 | - | 0.00497 | | B-nam_pro_title_tv | 0.00309 | - | 0.00542 | | B-nam_oth_license | 0.00286 | - | 0.00248 | | B-nam_num_house | 0.00256 | - | 0.00248 | | B-nam_pro_title_treaty | 0.00248 | - | 0.00045 | | B-nam_fac_system | 0.00248 | - | 0.00587 | | B-nam_loc_gpe_subdivision | 0.00241 | - | 0.00587 | | B-nam_loc_land_region | 0.00226 | - | 0.00248 | | B-nam_pro_title_album | 0.00218 | - | 0.00158 | | B-nam_adj_person | 0.00203 | - | 0.00406 | | B-nam_fac_square | 0.00196 | - | 0.00135 | | B-nam_pro_award | 0.00188 | - | 0.00519 | | B-nam_eve_human_holiday | 0.00188 | - | 0.00203 | | B-nam_pro_title_song | 0.00166 | - | 0.00158 | | B-nam_pro_media_radio | 0.00151 | - | 0.00068 | | B-nam_pro_vehicle | 0.00151 | - | 0.00090 | | B-nam_oth_position | 0.00143 | - | 0.00226 | | B-nam_liv_animal | 0.00143 | - | 0.00248 | | B-nam_pro | 0.00135 | - | 0.00045 | | B-nam_oth_www | 0.00120 | - | 0.00451 | | B-nam_num_phone | 0.00120 | - | 0.00045 | | B-nam_pro_title_article | 0.00113 | - | - | | B-nam_oth_data_format | 0.00113 | - | 0.00226 | | B-nam_fac_bridge | 0.00105 | - | 0.00090 | | B-nam_liv_character | 0.00098 | - | - | | B-nam_pro_software_game | 0.00090 | - | 0.00068 | | B-nam_loc_hydronym_lake | 0.00090 | - | 0.00045 | | B-nam_loc_gpe_conurbation | 0.00090 | - | - | | B-nam_pro_media | 0.00083 | - | 0.00181 | | B-nam_loc_land | 0.00075 | - | 0.00045 | | B-nam_loc_land_peak | 0.00075 | - | - | | B-nam_fac_park | 0.00068 | - | 0.00226 | | B-nam_org_organization_sub | 0.00060 | - | 0.00068 | | B-nam_loc_hydronym | 0.00060 | - | 0.00023 | | B-nam_loc_hydronym_sea | 0.00045 | - | 0.00068 | | B-nam_loc_hydronym_ocean | 0.00045 | - | 0.00023 | | B-nam_fac_goe_stop | 0.00038 | - | 0.00090 | ## Citation ``` @inproceedings{broda-etal-2012-kpwr, title = "{KPW}r: Towards a Free Corpus of {P}olish", author = "Broda, Bartosz and Marci{\'n}czuk, Micha{\l} and Maziarz, Marek and Radziszewski, Adam and Wardy{\'n}ski, Adam", booktitle = "Proceedings of the Eighth International Conference on Language Resources and Evaluation ({LREC}'12)", month = may, year = "2012", address = "Istanbul, Turkey", publisher = "European Language Resources Association (ELRA)", url = "http://www.lrec-conf.org/proceedings/lrec2012/pdf/965_Paper.pdf", pages = "3218--3222", abstract = "This paper presents our efforts aimed at collecting and annotating a free Polish corpus. The corpus will serve for us as training and testing material for experiments with Machine Learning algorithms. As others may also benefit from the resource, we are going to release it under a Creative Commons licence, which is hoped to remove unnecessary usage restrictions, but also to facilitate reproduction of our experimental results. The corpus is being annotated with various types of linguistic entities: chunks and named entities, selected syntactic and semantic relations, word senses and anaphora. 
We report on the current state of the project as well as our ultimate goals.", } ``` ## License ``` Creative Commons Attribution 3.0 Unported Licence ``` ## Links [HuggingFace](https://huggingface.co/datasets/clarin-pl/kpwr-ner) [Source](https://clarin-pl.eu/index.php/kpwr-en/) [Paper](https://aclanthology.org/L12-1574/) [KPWr annotation guidelines](http://www.nlp.pwr.wroc.pl/narzedzia-i-zasoby/zasoby/kpwr-lemma/16-narzedzia-zasoby/79-wytyczne) [KPWr annotation guidelines - named entities](https://clarin-pl.eu/dspace/handle/11321/294) ## Examples ### Loading ```python from pprint import pprint from datasets import load_dataset dataset = load_dataset("clarin-pl/kpwr-ner") pprint(dataset['train'][0]) # {'lemmas': ['roborally', 'czy', 'wysoki', 'napięcie', '?'], # 'ner': [73, 160, 73, 151, 160], # 'orth': ['subst:sg:nom:n', # 'qub', # 'adj:sg:nom:n:pos', # 'subst:sg:nom:n', # 'interp'], # 'tokens': ['RoboRally', 'czy', 'Wysokie', 'napięcie', '?']} ``` ### Evaluation ```python import random from pprint import pprint from datasets import load_dataset, load_metric dataset = load_dataset("clarin-pl/kpwr-ner") references = dataset["test"]["ner"] # generate random predictions predictions = [ [ random.randrange(dataset["train"].features["ner"].feature.num_classes) for _ in range(len(labels)) ] for labels in references ] # transform to original names of labels references_named = [ [dataset["train"].features["ner"].feature.names[label] for label in labels] for labels in references ] predictions_named = [ [dataset["train"].features["ner"].feature.names[label] for label in labels] for labels in predictions ] # utilise seqeval to evaluate seqeval = load_metric("seqeval") seqeval_score = seqeval.compute( predictions=predictions_named, references=references_named, scheme="IOB2" ) pprint(seqeval_score, depth=1) # {'nam_adj': {...}, # 'nam_adj_city': {...}, # 'nam_adj_country': {...}, # 'nam_adj_person': {...}, # 'nam_eve': {...}, # 'nam_eve_human': {...}, # 'nam_eve_human_cultural': {...}, # 'nam_eve_human_holiday': {...}, # 'nam_eve_human_sport': {...}, # 'nam_fac_bridge': {...}, # 'nam_fac_goe': {...}, # 'nam_fac_goe_stop': {...}, # 'nam_fac_park': {...}, # 'nam_fac_road': {...}, # 'nam_fac_square': {...}, # 'nam_fac_system': {...}, # 'nam_liv_animal': {...}, # 'nam_liv_character': {...}, # 'nam_liv_god': {...}, # 'nam_liv_habitant': {...}, # 'nam_liv_person': {...}, # 'nam_loc': {...}, # 'nam_loc_astronomical': {...}, # 'nam_loc_country_region': {...}, # 'nam_loc_gpe_admin1': {...}, # 'nam_loc_gpe_admin2': {...}, # 'nam_loc_gpe_admin3': {...}, # 'nam_loc_gpe_city': {...}, # 'nam_loc_gpe_conurbation': {...}, # 'nam_loc_gpe_country': {...}, # 'nam_loc_gpe_district': {...}, # 'nam_loc_gpe_subdivision': {...}, # 'nam_loc_historical_region': {...}, # 'nam_loc_hydronym': {...}, # 'nam_loc_hydronym_lake': {...}, # 'nam_loc_hydronym_ocean': {...}, # 'nam_loc_hydronym_river': {...}, # 'nam_loc_hydronym_sea': {...}, # 'nam_loc_land': {...}, # 'nam_loc_land_continent': {...}, # 'nam_loc_land_island': {...}, # 'nam_loc_land_mountain': {...}, # 'nam_loc_land_peak': {...}, # 'nam_loc_land_region': {...}, # 'nam_num_house': {...}, # 'nam_num_phone': {...}, # 'nam_org_company': {...}, # 'nam_org_group': {...}, # 'nam_org_group_band': {...}, # 'nam_org_group_team': {...}, # 'nam_org_institution': {...}, # 'nam_org_nation': {...}, # 'nam_org_organization': {...}, # 'nam_org_organization_sub': {...}, # 'nam_org_political_party': {...}, # 'nam_oth': {...}, # 'nam_oth_currency': {...}, # 'nam_oth_data_format': {...}, # 
'nam_oth_license': {...}, # 'nam_oth_position': {...}, # 'nam_oth_tech': {...}, # 'nam_oth_www': {...}, # 'nam_pro': {...}, # 'nam_pro_award': {...}, # 'nam_pro_brand': {...}, # 'nam_pro_media': {...}, # 'nam_pro_media_periodic': {...}, # 'nam_pro_media_radio': {...}, # 'nam_pro_media_tv': {...}, # 'nam_pro_media_web': {...}, # 'nam_pro_model_car': {...}, # 'nam_pro_software': {...}, # 'nam_pro_software_game': {...}, # 'nam_pro_title': {...}, # 'nam_pro_title_album': {...}, # 'nam_pro_title_article': {...}, # 'nam_pro_title_book': {...}, # 'nam_pro_title_document': {...}, # 'nam_pro_title_song': {...}, # 'nam_pro_title_treaty': {...}, # 'nam_pro_title_tv': {...}, # 'nam_pro_vehicle': {...}, # 'overall_accuracy': 0.006156203762418094, # 'overall_f1': 0.0009844258777797407, # 'overall_precision': 0.0005213624939842789, # 'overall_recall': 0.008803611738148984} ```
clarin-pl/kpwr-ner
[ "task_categories:other", "task_ids:named-entity-recognition", "annotations_creators:expert-generated", "language_creators:found", "multilinguality:monolingual", "size_categories:18K", "size_categories:10K<n<100K", "source_datasets:original", "language:pl", "license:cc-by-3.0", "structure-prediction", "region:us" ]
2022-03-02T23:29:22+00:00
{"annotations_creators": ["expert-generated"], "language_creators": ["found"], "language": ["pl"], "license": ["cc-by-3.0"], "multilinguality": ["monolingual"], "size_categories": ["18K", "10K<n<100K"], "source_datasets": ["original"], "task_categories": ["other"], "task_ids": ["named-entity-recognition"], "pretty_name": "KPWr-NER", "tags": ["structure-prediction"]}
2023-01-30T22:54:02+00:00
b77d0057aa0d3231fb40bd529a7e164ca887ba9f
# nkjp-pos ## Description NKJP-POS is a part the National Corpus of Polish (*Narodowy Korpus Języka Polskiego*). Its objective is part-of-speech tagging, e.g. nouns, verbs, adjectives, adverbs, etc. During the creation of corpus, texts of were annotated by humans from various sources, covering many domains and genres. ## Tasks (input, output and metrics) Part-of-speech tagging (POS tagging) - tagging words in text with their corresponding part of speech. **Input** ('*tokens'* column): sequence of tokens **Output** ('*pos_tags'* column): sequence of predicted tokens’ classes (35 possible classes, described in detail in the annotation guidelines) **Measurements**: F1-score (seqeval) **Example***:* Input: `['Zarejestruj', 'się', 'jako', 'bezrobotny', '.']` Input (translated by DeepL): `Register as unemployed.` Output: `['impt', 'qub', 'conj', 'subst', 'interp']` ## Data splits | Subset | Cardinality (sentences) | | ----------- | ----------------------: | | train | 78219 | | dev | 0 | | test | 7444 | ## Class distribution | Class | train | dev | test | |:--------|--------:|------:|--------:| | subst | 0.27345 | - | 0.27656 | | interp | 0.18101 | - | 0.17944 | | adj | 0.10611 | - | 0.10919 | | prep | 0.09567 | - | 0.09547 | | qub | 0.05670 | - | 0.05491 | | fin | 0.04939 | - | 0.04648 | | praet | 0.04409 | - | 0.04348 | | conj | 0.03711 | - | 0.03724 | | adv | 0.03512 | - | 0.03333 | | inf | 0.01591 | - | 0.01547 | | comp | 0.01476 | - | 0.01439 | | num | 0.01322 | - | 0.01436 | | ppron3 | 0.01111 | - | 0.01018 | | ppas | 0.01086 | - | 0.01085 | | ger | 0.00961 | - | 0.01050 | | brev | 0.00856 | - | 0.01181 | | ppron12 | 0.00670 | - | 0.00665 | | aglt | 0.00629 | - | 0.00602 | | pred | 0.00539 | - | 0.00540 | | pact | 0.00454 | - | 0.00452 | | bedzie | 0.00229 | - | 0.00243 | | pcon | 0.00218 | - | 0.00189 | | impt | 0.00203 | - | 0.00226 | | siebie | 0.00177 | - | 0.00158 | | imps | 0.00174 | - | 0.00177 | | interj | 0.00131 | - | 0.00102 | | xxx | 0.00070 | - | 0.00048 | | adjp | 0.00069 | - | 0.00065 | | winien | 0.00068 | - | 0.00057 | | adja | 0.00048 | - | 0.00058 | | pant | 0.00012 | - | 0.00018 | | burk | 0.00011 | - | 0.00006 | | numcol | 0.00011 | - | 0.00013 | | depr | 0.00010 | - | 0.00004 | | adjc | 0.00007 | - | 0.00008 | ## Citation ``` @book{przepiorkowski_narodowy_2012, title = {Narodowy korpus języka polskiego}, isbn = {978-83-01-16700-4}, language = {pl}, publisher = {Wydawnictwo Naukowe PWN}, editor = {Przepiórkowski, Adam and Bańko, Mirosław and Górski, Rafał L. 
and Lewandowska-Tomaszczyk, Barbara}, year = {2012} } ``` ## License ``` GNU GPL v.3 ``` ## Links [HuggingFace](https://huggingface.co/datasets/clarin-pl/nkjp-pos) [Source](http://clip.ipipan.waw.pl/NationalCorpusOfPolish) [Paper](http://nkjp.pl/settings/papers/NKJP_ksiazka.pdf) ## Examples ### Loading ```python from pprint import pprint from datasets import load_dataset dataset = load_dataset("clarin-pl/nkjp-pos") pprint(dataset['train'][5000]) # {'id': '130-2-900005_morph_49.49-s', # 'pos_tags': [16, 4, 3, 30, 12, 18, 3, 16, 14, 6, 14, 26, 1, 30, 12], # 'tokens': ['Najwyraźniej', # 'źle', # 'ocenił', # 'odległość', # ',', # 'bo', # 'zderzył', # 'się', # 'z', # 'jadącą', # 'z', # 'naprzeciwka', # 'ciężarową', # 'scanią', # '.']} ``` ### Evaluation ```python import random from pprint import pprint from datasets import load_dataset, load_metric dataset = load_dataset("clarin-pl/nkjp-pos") references = dataset["test"]["pos_tags"] # generate random predictions predictions = [ [ random.randrange(dataset["train"].features["pos_tags"].feature.num_classes) for _ in range(len(labels)) ] for labels in references ] # transform to original names of labels references_named = [ [dataset["train"].features["pos_tags"].feature.names[label] for label in labels] for labels in references ] predictions_named = [ [dataset["train"].features["pos_tags"].feature.names[label] for label in labels] for labels in predictions ] # transform to BILOU scheme references_named = [ [f"U-{label}" if label != "O" else label for label in labels] for labels in references_named ] predictions_named = [ [f"U-{label}" if label != "O" else label for label in labels] for labels in predictions_named ] # utilise seqeval to evaluate seqeval = load_metric("seqeval") seqeval_score = seqeval.compute( predictions=predictions_named, references=references_named, scheme="BILOU", mode="strict", ) pprint(seqeval_score, depth=1) # {'adj': {...}, # 'adja': {...}, # 'adjc': {...}, # 'adjp': {...}, # 'adv': {...}, # 'aglt': {...}, # 'bedzie': {...}, # 'brev': {...}, # 'burk': {...}, # 'comp': {...}, # 'conj': {...}, # 'depr': {...}, # 'fin': {...}, # 'ger': {...}, # 'imps': {...}, # 'impt': {...}, # 'inf': {...}, # 'interj': {...}, # 'interp': {...}, # 'num': {...}, # 'numcol': {...}, # 'overall_accuracy': 0.027855061488566583, # 'overall_f1': 0.027855061488566583, # 'overall_precision': 0.027855061488566583, # 'overall_recall': 0.027855061488566583, # 'pact': {...}, # 'pant': {...}, # 'pcon': {...}, # 'ppas': {...}, # 'ppron12': {...}, # 'ppron3': {...}, # 'praet': {...}, # 'pred': {...}, # 'prep': {...}, # 'qub': {...}, # 'siebie': {...}, # 'subst': {...}, # 'winien': {...}, # 'xxx': {...}} ```
clarin-pl/nkjp-pos
[ "task_categories:other", "task_ids:part-of-speech", "annotations_creators:expert-generated", "language_creators:other", "multilinguality:monolingual", "size_categories:unknown", "source_datasets:original", "language:pl", "license:gpl-3.0", "structure-prediction", "region:us" ]
2022-03-02T23:29:22+00:00
{"annotations_creators": ["expert-generated"], "language_creators": ["other"], "language": ["pl"], "license": ["gpl-3.0"], "multilinguality": ["monolingual"], "size_categories": ["unknown"], "source_datasets": ["original"], "task_categories": ["other"], "task_ids": ["part-of-speech"], "pretty_name": "nkjp-pos", "tags": ["structure-prediction"]}
2023-01-30T22:53:57+00:00
802e35d2b12bae84bb07911d841e8f046dc2fcef
# Polemo2 ## Description The PolEmo2.0 is a dataset of online consumer reviews from four domains: medicine, hotels, products, and university. It is human-annotated on a level of full reviews and individual sentences. Current version (PolEmo 2.0) contains 8,216 reviews having 57,466 sentences. Each text and sentence was manually annotated with sentiment in the 2+1 scheme, which gives a total of 197,046 annotations. About 85% of the reviews are from the medicine and hotel domains. Each review is annotated with four labels: positive, negative, neutral, or ambiguous. ## Tasks (input, output and metrics) The task is to predict the correct label of the review. **Input** ('*text*' column): sentence **Output** ('*target*' column): label for sentence sentiment ('zero': neutral, 'minus': negative, 'plus': positive, 'amb': ambiguous) **Domain**: Online reviews **Measurements**: Accuracy, F1 Macro **Example**: Input: `Na samym wejściu hotel śmierdzi . W pokojach jest pleśń na ścianach , brudny dywan . W łazience śmierdzi chemią , hotel nie grzeje w pokojach panuje chłód . Wyposażenie pokoju jest stare , kran się rusza , drzwi na balkon nie domykają się . Jedzenie jest w małych ilościach i nie smaczne . Nie polecam nikomu tego hotelu .` Input (translated by DeepL): `At the very entrance the hotel stinks . In the rooms there is mold on the walls , dirty carpet . The bathroom smells of chemicals , the hotel does not heat in the rooms are cold . The room furnishings are old , the faucet moves , the door to the balcony does not close . The food is in small quantities and not tasty . I would not recommend this hotel to anyone .` Output: `1` (negative) ## Data splits | Subset | Cardinality | |--------|------------:| | train | 6573 | | val | 823 | | test | 820 | ## Class distribution | Class | train | dev | test | |:--------|--------:|-------------:|-------:| | minus | 0.3756 | 0.3694 | 0.4134 | | plus | 0.2775 | 0.2868 | 0.2768 | | amb | 0.1991 | 0.1883 | 0.1659 | | zero | 0.1477 | 0.1555 | 0.1439 | ## Citation ``` @inproceedings{kocon-etal-2019-multi, title = "Multi-Level Sentiment Analysis of {P}ol{E}mo 2.0: Extended Corpus of Multi-Domain Consumer Reviews", author = "Koco{\'n}, Jan and Mi{\l}kowski, Piotr and Za{\'s}ko-Zieli{\'n}ska, Monika", booktitle = "Proceedings of the 23rd Conference on Computational Natural Language Learning (CoNLL)", month = nov, year = "2019", address = "Hong Kong, China", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/K19-1092", doi = "10.18653/v1/K19-1092", pages = "980--991", abstract = "In this article we present an extended version of PolEmo {--} a corpus of consumer reviews from 4 domains: medicine, hotels, products and school. Current version (PolEmo 2.0) contains 8,216 reviews having 57,466 sentences. Each text and sentence was manually annotated with sentiment in 2+1 scheme, which gives a total of 197,046 annotations. We obtained a high value of Positive Specific Agreement, which is 0.91 for texts and 0.88 for sentences. PolEmo 2.0 is publicly available under a Creative Commons copyright license. 
We explored recent deep learning approaches for the recognition of sentiment, such as Bi-directional Long Short-Term Memory (BiLSTM) and Bidirectional Encoder Representations from Transformers (BERT).", } ``` ## License ``` Attribution-NonCommercial-ShareAlike 4.0 International (CC BY-NC-SA 4.0) ``` ## Links [HuggingFace](https://huggingface.co/datasets/clarin-pl/polemo2-official) [Source](https://clarin-pl.eu/dspace/handle/11321/710) [Paper](https://aclanthology.org/K19-1092/) ## Examples ### Loading ```python from pprint import pprint from datasets import load_dataset dataset = load_dataset("clarin-pl/polemo2-official") pprint(dataset['train'][0]) # {'target': 1, # 'text': 'Na samym wejściu hotel śmierdzi . W pokojach jest pleśń na ścianach ' # ', brudny dywan . W łazience śmierdzi chemią , hotel nie grzeje w ' # 'pokojach panuje chłód . Wyposażenie pokoju jest stare , kran się ' # 'rusza , drzwi na balkon nie domykają się . Jedzenie jest w małych ' # 'ilościach i nie smaczne . Nie polecam nikomu tego hotelu .'} ``` ### Evaluation ```python import random from pprint import pprint from datasets import load_dataset, load_metric dataset = load_dataset("clarin-pl/polemo2-official") references = dataset["test"]["target"] # generate random predictions predictions = [random.randrange(max(references) + 1) for _ in range(len(references))] acc = load_metric("accuracy") f1 = load_metric("f1") acc_score = acc.compute(predictions=predictions, references=references) f1_score = f1.compute(predictions=predictions, references=references, average='macro') pprint(acc_score) pprint(f1_score) # {'accuracy': 0.2475609756097561} # {'f1': 0.23747048177471738} ```
clarin-pl/polemo2-official
[ "task_categories:text-classification", "task_ids:sentiment-classification", "annotations_creators:expert-generated", "language_creators:other", "multilinguality:monolingual", "size_categories:8K", "size_categories:1K<n<10K", "source_datasets:original", "language:pl", "license:cc-by-sa-4.0", "region:us" ]
2022-03-02T23:29:22+00:00
{"annotations_creators": ["expert-generated"], "language_creators": ["other"], "language": ["pl"], "license": ["cc-by-sa-4.0"], "multilinguality": ["monolingual"], "size_categories": ["8K", "1K<n<10K"], "source_datasets": ["original"], "task_categories": ["text-classification"], "task_ids": ["sentiment-classification"], "pretty_name": "Polemo2"}
2022-08-29T15:40:01+00:00
52483dba0ff23291271ee9249839865e3c3e7e50
# Offensive language dataset of English comments FRENK 1.0 English subset of the [FRENK dataset](http://hdl.handle.net/11356/1433). Also available on HuggingFace dataset hub: [Croatian subset](https://huggingface.co/datasets/5roop/FRENK-hate-hr), [Slovenian subset](https://huggingface.co/datasets/5roop/FRENK-hate-sl). ## Dataset Description - **Homepage:** http://hdl.handle.net/11356/1433 - **Repository:** http://hdl.handle.net/11356/1433 - **Paper:** https://arxiv.org/abs/1906.02045 - **Project page** https://nl.ijs.si/frenk/ ## Description of the original dataset The original FRENK dataset consists of comments to Facebook posts (news articles) of mainstream media outlets from Croatia, Great Britain, and Slovenia, on the topics of migrants and LGBT. The dataset contains whole discussion threads. Each comment is annotated by the type of socially unacceptable discourse (e.g., inappropriate, offensive, violent speech) and its target (e.g., migrants/LGBT, commenters, media). The annotation schema is described in detail in [https://arxiv.org/pdf/1906.02045.pdf]. Usernames in the metadata are pseudo-anonymised and removed from the comments. The data in each language (Croatian (hr), English (en), Slovenian (sl)) and topic (migrants, LGBT) is divided into a training and a testing portion. The training and testing data consist of separate discussion threads, i.e., there is no cross-discussion-thread contamination between training and testing data. The sizes of the splits are the following: Croatian, migrants: 4356 training comments, 978 testing comments; Croatian LGBT: 4494 training comments, 1142 testing comments; English, migrants: 4540 training comments, 1285 testing comments; English, LGBT: 4819 training comments, 1017 testing comments; Slovenian, migrants: 5145 training comments, 1277 testing comments; Slovenian, LGBT: 2842 training comments, 900 testing comments. For this dataset only the English data was used. Training segment has been split into beginning 90% (published here as training split) and end 10% (published here as dev split). ## Usage in `Transformers` ```python import datasets ds = datasets.load_dataset("classla/FRENK-hate-en","binary") ``` For binary classification the following encoding is used: ```python _CLASS_MAP_BINARY = { 'Acceptable': 0, 'Offensive': 1, } ``` The original labels are available if the dataset is loaded with the `multiclass` option: ```python import datasets ds = datasets.load_dataset("classla/FRENK-hate-en","multiclass") ``` In this case the encoding used is: ```python _CLASS_MAP_MULTICLASS = { 'Acceptable speech': 0, 'Inappropriate': 1, 'Background offensive': 2, 'Other offensive': 3, 'Background violence': 4, 'Other violence': 5, } ``` ## Data structure * `text`: text * `target`: who is the target of the hate-speech text ("no target", "commenter", "target" (migrants or LGBT, depending on the topic), or "related to" (again, the topic)) * `topic`: whether the text relates to lgbt or migrants hate-speech domains * `label`: label of the text instance, see above.
## Data instance ``` {'text': "Not everyone has the option of a rainbow reaction; I don't but wish I did.", 'target': 'No target', 'topic': 'lgbt', 'label': 0} ``` ## Licensing information CLARIN.SI Licence ACA ID-BY-NC-INF-NORED 1.0 ## Citation information When using this dataset please cite the following paper: ``` @misc{ljubešić2019frenk, title={The FRENK Datasets of Socially Unacceptable Discourse in Slovene and English}, author={Nikola Ljubešić and Darja Fišer and Tomaž Erjavec}, year={2019}, eprint={1906.02045}, archivePrefix={arXiv}, primaryClass={cs.CL}, url={https://arxiv.org/abs/1906.02045} } ``` The original dataset can be cited as ``` @misc{11356/1433, title = {Offensive language dataset of Croatian, English and Slovenian comments {FRENK} 1.0}, author = {Ljube{\v s}i{\'c}, Nikola and Fi{\v s}er, Darja and Erjavec, Toma{\v z}}, url = {http://hdl.handle.net/11356/1433}, note = {Slovenian language resource repository {CLARIN}.{SI}}, copyright = {{CLARIN}.{SI} Licence {ACA} {ID}-{BY}-{NC}-{INF}-{NORED} 1.0}, year = {2021} } ```
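A minimal evaluation sketch for the binary configuration, in the same style as the evaluation snippets of the clarin-pl cards above (the `test` split name and the random baseline are illustrative assumptions, not part of the original card):

```python
import random

from datasets import load_dataset, load_metric

# Binary configuration of the English FRENK subset, as described above.
dataset = load_dataset("classla/FRENK-hate-en", "binary")
references = dataset["test"]["label"]  # assumes a "test" split is published

# Random baseline; replace with the predictions of a real classifier.
predictions = [random.randrange(2) for _ in range(len(references))]

accuracy = load_metric("accuracy")
f1 = load_metric("f1")
print(accuracy.compute(predictions=predictions, references=references))
print(f1.compute(predictions=predictions, references=references, average="macro"))
```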
classla/FRENK-hate-en
[ "task_categories:text-classification", "size_categories:1K<n<10K", "language:en", "license:other", "hate-speech-detection", "offensive-language", "arxiv:1906.02045", "region:us" ]
2022-03-02T23:29:22+00:00
{"language": ["en"], "license": ["other"], "size_categories": ["1K<n<10K"], "task_categories": ["text-classification"], "task_ids": [], "tags": ["hate-speech-detection", "offensive-language"]}
2022-10-21T06:52:06+00:00
e7fc9f3d8d6c5640a26679d8a50b1666b02cc41f
# Offensive language dataset of Croatian comments FRENK 1.0 Croatian subset of the [FRENK dataset](http://hdl.handle.net/11356/1433). Also available on HuggingFace dataset hub: [English subset](https://huggingface.co/datasets/5roop/FRENK-hate-en), [Slovenian subset](https://huggingface.co/datasets/5roop/FRENK-hate-sl). ## Dataset Description - **Homepage:** http://hdl.handle.net/11356/1433 - **Repository:** http://hdl.handle.net/11356/1433 - **Paper:** https://arxiv.org/abs/1906.02045 - **Project page** https://nl.ijs.si/frenk/ ## Description of the original dataset >The original FRENK dataset consists of comments to Facebook posts (news articles) of mainstream media outlets from Croatia, Great Britain, and Slovenia, on the topics of migrants and LGBT. The dataset contains whole discussion threads. Each comment is annotated by the type of socially unacceptable discourse (e.g., inappropriate, offensive, violent speech) and its target (e.g., migrants/LGBT, commenters, media). The annotation schema is described in detail in [https://arxiv.org/pdf/1906.02045.pdf]. Usernames in the metadata are pseudo-anonymised and removed from the comments. > >The data in each language (Croatian (hr), English (en), Slovenian (sl), and topic (migrants, LGBT) is divided into a training and a testing portion. The training and testing data consist of separate discussion threads, i.e., there is no cross-discussion-thread contamination between training and testing data. The sizes of the splits are the following: Croatian, migrants: 4356 training comments, 978 testing comments; Croatian LGBT: 4494 training comments, 1142 comments; English, migrants: 4540 training comments, 1285 testing comments; English, LGBT: 4819 training comments, 1017 testing comments; Slovenian, migrants: 5145 training comments, 1277 testing comments; Slovenian, LGBT: 2842 training comments, 900 testing comments. For this dataset only the Croatian data was used. Training segment has been split into beginning 90% (published here as training split) and end 10% (published here as dev split). Test segment has been preserved in its original form. ## Usage in `Transformers` ```python import datasets ds = datasets.load_dataset("classla/FRENK-hate-hr","binary") ``` For binary classification the following encoding is used: ```python _CLASS_MAP_BINARY = { 'Acceptable': 0, 'Offensive': 1, } ``` The original labels are available if the dataset is loaded with the `multiclass` option: ```python import datasets ds = datasets.load_dataset("classla/FRENK-hate-hr","multiclass"). ``` In this case the encoding used is: ```python _CLASS_MAP_MULTICLASS = { 'Acceptable speech': 0, 'Inappropriate': 1, 'Background offensive': 2, 'Other offensive': 3, 'Background violence': 4, 'Other violence': 5, } ``` ## Data structure * `text`: text * `target`: who is the target of the hate-speech text ("no target", "commenter", "target" (migrants or LGBT, depending on the topic), or "related to" (again, the topic)) * `topic`: whether the text relates to lgbt or migrants hate-speech domains * `label`: label of the text instance, see above. 
## Data instance ``` {'text': 'Potpisujem komentar g ankice pavicic', 'target': 'No target', 'topic': 'lgbt', 'label': 0} ``` ## Licensing information CLARIN.SI Licence ACA ID-BY-NC-INF-NORED 1.0 ## Citation information When using this dataset please cite the following paper: ``` @misc{ljubešić2019frenk, title={The FRENK Datasets of Socially Unacceptable Discourse in Slovene and English}, author={Nikola Ljubešić and Darja Fišer and Tomaž Erjavec}, year={2019}, eprint={1906.02045}, archivePrefix={arXiv}, primaryClass={cs.CL}, url={https://arxiv.org/abs/1906.02045} } ``` The original dataset can be cited as ``` @misc{11356/1433, title = {Offensive language dataset of Croatian, English and Slovenian comments {FRENK} 1.0}, author = {Ljube{\v s}i{\'c}, Nikola and Fi{\v s}er, Darja and Erjavec, Toma{\v z}}, url = {http://hdl.handle.net/11356/1433}, note = {Slovenian language resource repository {CLARIN}.{SI}}, copyright = {{CLARIN}.{SI} Licence {ACA} {ID}-{BY}-{NC}-{INF}-{NORED} 1.0}, year = {2021} } ```
classla/FRENK-hate-hr
[ "task_categories:text-classification", "size_categories:1K<n<10K", "language:hr", "license:other", "hate-speech-detection", "offensive-language", "arxiv:1906.02045", "region:us" ]
2022-03-02T23:29:22+00:00
{"language": ["hr"], "license": ["other"], "size_categories": ["1K<n<10K"], "task_categories": ["text-classification"], "task_ids": [], "tags": ["hate-speech-detection", "offensive-language"]}
2022-10-21T06:46:28+00:00
37c8b42c63d4eb75f549679158a85eb5bd984caa
Slovenian subset of the [FRENK dataset](http://hdl.handle.net/11356/1433). Also available on HuggingFace dataset hub: [English subset](https://huggingface.co/datasets/5roop/FRENK-hate-en), [Croatian subset](https://huggingface.co/datasets/5roop/FRENK-hate-hr). ## Dataset Description - **Homepage:** http://hdl.handle.net/11356/1433 - **Repository:** http://hdl.handle.net/11356/1433 - **Paper:** https://arxiv.org/abs/1906.02045 - **Project page** https://nl.ijs.si/frenk/ ## Description of the original dataset >The original FRENK dataset consists of comments to Facebook posts (news articles) of mainstream media outlets from Croatia, Great Britain, and Slovenia, on the topics of migrants and LGBT. The dataset contains whole discussion threads. Each comment is annotated by the type of socially unacceptable discourse (e.g., inappropriate, offensive, violent speech) and its target (e.g., migrants/LGBT, commenters, media). The annotation schema is described in detail in [https://arxiv.org/pdf/1906.02045.pdf]. Usernames in the metadata are pseudo-anonymised and removed from the comments. > >The data in each language (Croatian (hr), English (en), Slovenian (sl)) and topic (migrants, LGBT) is divided into a training and a testing portion. The training and testing data consist of separate discussion threads, i.e., there is no cross-discussion-thread contamination between training and testing data. The sizes of the splits are the following: Croatian, migrants: 4356 training comments, 978 testing comments; Croatian LGBT: 4494 training comments, 1142 testing comments; English, migrants: 4540 training comments, 1285 testing comments; English, LGBT: 4819 training comments, 1017 testing comments; Slovenian, migrants: 5145 training comments, 1277 testing comments; Slovenian, LGBT: 2842 training comments, 900 testing comments. For this dataset only the Slovenian data was used. Training segment has been split into beginning 90% (published here as training split) and end 10% (published here as dev split). ## Usage in `Transformers` ```python import datasets ds = datasets.load_dataset("classla/FRENK-hate-sl","binary") ``` For binary classification the following encoding is used: ```python _CLASS_MAP_BINARY = { 'Acceptable': 0, 'Offensive': 1, } ``` The original labels are available if the dataset is loaded with the `multiclass` option: ```python import datasets ds = datasets.load_dataset("classla/FRENK-hate-sl","multiclass") ``` In this case the encoding used is: ```python _CLASS_MAP_MULTICLASS = { 'Acceptable speech': 0, 'Inappropriate': 1, 'Background offensive': 2, 'Other offensive': 3, 'Background violence': 4, 'Other violence': 5, } ``` ## Data structure * `text`: text * `target`: who is the target of the hate-speech text ("no target", "commenter", "target" (migrants or LGBT, depending on the topic), or "related to" (again, the topic)) * `topic`: whether the text relates to lgbt or migrants hate-speech domains * `label`: label of the text instance, see above.
## Data instance ``` {'text': 'Otroci so odprti in brez predsodkov.Predsodke jim vcepimo starejši,starši,družba,družina...Če otroku lepo razložimo,razume.Nikoli ni dobro,da omejujemo otroka,njegovo inteligenco in duhovnost z lastnim ne razumevanjem nečesa ali nekoga.Predsodek je miselni zapor,prepreka,da bi bili svobodni.Ljubezen je svoboda.Sem ZA spremembo zakona!Srečno :D', 'target': 'No target', 'topic': 'lgbt', 'label': 0} ``` ## Licensing information CLARIN.SI Licence ACA ID-BY-NC-INF-NORED 1.0 ## Citation information When using this dataset please cite the following paper: ``` @misc{ljubešić2019frenk, title={The FRENK Datasets of Socially Unacceptable Discourse in Slovene and English}, author={Nikola Ljubešić and Darja Fišer and Tomaž Erjavec}, year={2019}, eprint={1906.02045}, archivePrefix={arXiv}, primaryClass={cs.CL}, url={https://arxiv.org/abs/1906.02045} } ``` The original dataset can be cited as ``` @misc{11356/1433, title = {Offensive language dataset of Croatian, English and Slovenian comments {FRENK} 1.0}, author = {Ljube{\v s}i{\'c}, Nikola and Fi{\v s}er, Darja and Erjavec, Toma{\v z}}, url = {http://hdl.handle.net/11356/1433}, note = {Slovenian language resource repository {CLARIN}.{SI}}, copyright = {{CLARIN}.{SI} Licence {ACA} {ID}-{BY}-{NC}-{INF}-{NORED} 1.0}, year = {2021} } ```
classla/FRENK-hate-sl
[ "task_categories:text-classification", "size_categories:1K<n<10K", "language:sl", "license:other", "hate-speech-detection", "offensive-language", "arxiv:1906.02045", "region:us" ]
2022-03-02T23:29:22+00:00
{"language": ["sl"], "license": ["other"], "size_categories": ["1K<n<10K"], "task_categories": ["text-classification"], "task_ids": [], "tags": ["hate-speech-detection", "offensive-language"]}
2022-10-21T06:46:11+00:00
f3f3a4708e6f8b92915ab02c20ac7fb829e45173
The COPA-HR dataset (Choice of plausible alternatives in Croatian) is a translation of the English COPA dataset (https://people.ict.usc.edu/~gordon/copa.html) by following the XCOPA dataset translation methodology (https://arxiv.org/abs/2005.00333). The dataset consists of 1000 premises (My body cast a shadow over the grass), each given a question (What is the cause?), and two choices (The sun was rising; The grass was cut), with a label encoding which of the choices is more plausible according to the annotator or translator (The sun was rising). The dataset is split into 400 training samples, 100 validation samples, and 500 test samples. It includes the following features: 'premise', 'choice1', 'choice2', 'label', 'question', 'changed' (boolean). If you use the dataset in your work, please cite ``` @article{DBLP:journals/corr/abs-2104-09243, author = {Nikola Ljube{\v{s}}i{\'c} and Davor Lauc}, title = {BERTi{\'c} - The Transformer Language Model for Bosnian, Croatian, Montenegrin and Serbian}, journal = {CoRR}, volume = {abs/2104.09243}, year = {2021}, url = {https://arxiv.org/abs/2104.09243}, archivePrefix = {arXiv}, } ```
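A minimal loading sketch, assuming the Hugging Face `datasets` library and the feature and split names described above (train/validation/test):

```python
from datasets import load_dataset

# COPA-HR splits follow the 400/100/500 sizes described above.
copa_hr = load_dataset("classla/copa_hr")
example = copa_hr["train"][0]

# Features listed in the card: premise, choice1, choice2, label, question, changed.
print(example["premise"], "|", example["question"])
print("1:", example["choice1"], "2:", example["choice2"], "->", example["label"])
```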
classla/copa_hr
[ "task_categories:text-classification", "task_ids:natural-language-inference", "language:hr", "license:cc-by-sa-4.0", "causal-reasoning", "textual-entailment", "commonsense-reasoning", "arxiv:2005.00333", "arxiv:2104.09243", "region:us" ]
2022-03-02T23:29:22+00:00
{"language": ["hr"], "license": ["cc-by-sa-4.0"], "task_categories": ["text-classification"], "task_ids": ["natural-language-inference"], "tags": ["causal-reasoning", "textual-entailment", "commonsense-reasoning"]}
2022-10-25T06:32:15+00:00
708662e326e2e0ee4ce0fb7fa4e41db6c93771f0
The hr500k training corpus contains 506,457 Croatian tokens manually annotated on the levels of tokenisation, sentence segmentation, morphosyntactic tagging, lemmatisation, named entities and dependency syntax. On the sentence level, the dataset contains 20159 training samples, 1963 validation samples and 2672 test samples across the respective data splits. Each sample represents a sentence and includes the following features: sentence ID ('sent\_id'), sentence text ('text'), list of tokens ('tokens'), list of lemmas ('lemmas'), list of MULTEXT-East tags ('xpos\_tags), list of UPOS tags ('upos\_tags'), list of morphological features ('feats'), and list of IOB tags ('iob\_tags'). A subset of the data also contains universal dependencies ('ud') and consists of 7498 training samples, 649 validation samples, and 742 test samples. Three dataset configurations are available, namely 'ner', 'upos', and 'ud', with the corresponding features encoded as class labels. If the configuration is not specified, it defaults to 'ner'. If you use this dataset in your research, please cite the following paper: ``` Bibtex @InProceedings{LJUBEI16.340, author = {Nikola Ljubešić and Filip Klubička and Željko Agić and Ivo-Pavao Jazbec}, title = {New Inflectional Lexicons and Training Corpora for Improved Morphosyntactic Annotation of Croatian and Serbian}, booktitle = {Proceedings of the Tenth International Conference on Language Resources and Evaluation (LREC 2016)}, year = {2016}, month = {may}, date = {23-28}, location = {Portorož, Slovenia}, editor = {Nicoletta Calzolari (Conference Chair) and Khalid Choukri and Thierry Declerck and Sara Goggi and Marko Grobelnik and Bente Maegaard and Joseph Mariani and Helene Mazo and Asuncion Moreno and Jan Odijk and Stelios Piperidis}, publisher = {European Language Resources Association (ELRA)}, address = {Paris, France}, isbn = {978-2-9517408-9-1}, language = {english} } ```
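A minimal loading sketch for the configurations described above (the same pattern should also carry over to the similar token-level datasets below, e.g. setimes_sr, reldi_hr, reldi_sr and ssj500k); the `train` split name is assumed:

```python
from datasets import load_dataset

# 'ner' is the default configuration; 'upos' and 'ud' are also described above.
hr500k = load_dataset("classla/hr500k", "ner")
sentence = hr500k["train"][0]

# Sentence-level record with token-level annotation layers.
print(sentence["text"])
print(list(zip(sentence["tokens"], sentence["iob_tags"])))
```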
classla/hr500k
[ "task_categories:other", "task_ids:lemmatization", "task_ids:named-entity-recognition", "task_ids:part-of-speech", "language:hr", "license:cc-by-sa-4.0", "structure-prediction", "normalization", "tokenization", "region:us" ]
2022-03-02T23:29:22+00:00
{"language": ["hr"], "license": ["cc-by-sa-4.0"], "task_categories": ["other"], "task_ids": ["lemmatization", "named-entity-recognition", "part-of-speech"], "tags": ["structure-prediction", "normalization", "tokenization"]}
2022-10-25T06:32:05+00:00
ba014295e666710c5dfe6215338933ecf235156c
The dataset contains 6273 training samples, 762 validation samples and 749 test samples. Each sample represents a sentence and includes the following features: sentence ID ('sent\_id'), list of tokens ('tokens'), list of normalised word forms ('norms'), list of lemmas ('lemmas'), list of Multext-East tags ('xpos\_tags), list of morphological features ('feats'), and list of UPOS tags ('upos\_tags'), which are encoded as class labels.
classla/janes_tag
[ "task_categories:other", "task_ids:lemmatization", "task_ids:part-of-speech", "language:si", "license:cc-by-sa-4.0", "structure-prediction", "normalization", "tokenization", "region:us" ]
2022-03-02T23:29:22+00:00
{"language": ["si"], "license": ["cc-by-sa-4.0"], "task_categories": ["other"], "task_ids": ["lemmatization", "part-of-speech"], "tags": ["structure-prediction", "normalization", "tokenization"]}
2022-10-25T06:31:04+00:00
da293b9a70a87a936777e93dd59046ddbc6399ce
This dataset is based on 3,871 Croatian tweets that were segmented into sentences, tokens, and annotated with normalized forms, lemmas, MULTEXT-East tags (XPOS), UPOS tags and morphological features, and named entities. The dataset contains 6339 training samples (sentences), 815 validation samples and 785 test samples. Each sample represents a sentence and includes the following features: sentence ID ('sent\_id'), list of tokens ('tokens'), list of normalised tokens ('norms'), list of lemmas ('lemmas'), list of UPOS tags ('upos\_tags'), list of MULTEXT-East tags ('xpos\_tags), list of morphological features ('feats'), and list of named entity IOB tags ('iob\_tags'), which are encoded as class labels. If you are using this dataset in your research, please cite the following paper: ``` @article{Miličević_Ljubešić_2016, title={Tviterasi, tviteraši or twitteraši? Producing and analysing a normalised dataset of Croatian and Serbian tweets}, volume={4}, url={https://revije.ff.uni-lj.si/slovenscina2/article/view/7007}, DOI={10.4312/slo2.0.2016.2.156-188}, number={2}, journal={Slovenščina 2.0: empirical, applied and interdisciplinary research}, author={Miličević, Maja and Ljubešić, Nikola}, year={2016}, month={Sep.}, pages={156–188} } ```
classla/reldi_hr
[ "task_categories:other", "task_ids:lemmatization", "task_ids:named-entity-recognition", "task_ids:part-of-speech", "language:hr", "license:cc-by-sa-4.0", "structure-prediction", "normalization", "tokenization", "region:us" ]
2022-03-02T23:29:22+00:00
{"language": ["hr"], "license": ["cc-by-sa-4.0"], "task_categories": ["other"], "task_ids": ["lemmatization", "named-entity-recognition", "part-of-speech"], "tags": ["structure-prediction", "normalization", "tokenization"]}
2022-10-25T06:30:56+00:00
10a37a1a9ea782093646e0b03d5ef05b3e1e11d5
This dataset is based on 3,748 Serbian tweets that were segmented into sentences, tokens, and annotated with normalized forms, lemmas, MULTEXT-East tags (XPOS), UPOS tags and morphological features, and named entities. The dataset contains 5462 training samples (sentences), 711 validation samples and 725 test samples. Each sample represents a sentence and includes the following features: sentence ID ('sent\_id'), list of tokens ('tokens'), list of normalised tokens ('norms'), list of lemmas ('lemmas'), list of UPOS tags ('upos\_tags'), list of MULTEXT-East tags ('xpos\_tags), list of morphological features ('feats'), and list of named entity IOB tags ('iob\_tags'), which are encoded as class labels. If you are using this dataset in your research, please cite the following paper: ``` @article{Miličević_Ljubešić_2016, title={Tviterasi, tviteraši or twitteraši? Producing and analysing a normalised dataset of Croatian and Serbian tweets}, volume={4}, url={https://revije.ff.uni-lj.si/slovenscina2/article/view/7007}, DOI={10.4312/slo2.0.2016.2.156-188}, number={2}, journal={Slovenščina 2.0: empirical, applied and interdisciplinary research}, author={Miličević, Maja and Ljubešić, Nikola}, year={2016}, month={Sep.}, pages={156–188} } ```
classla/reldi_sr
[ "task_categories:other", "task_ids:lemmatization", "task_ids:named-entity-recognition", "task_ids:part-of-speech", "language:sr", "license:cc-by-sa-4.0", "structure-prediction", "normalization", "tokenization", "region:us" ]
2022-03-02T23:29:22+00:00
{"language": ["sr"], "license": ["cc-by-sa-4.0"], "task_categories": ["other"], "task_ids": ["lemmatization", "named-entity-recognition", "part-of-speech"], "tags": ["structure-prediction", "normalization", "tokenization"]}
2022-10-25T06:30:33+00:00
42861d4054bc5fb993e6606e3c70a2957ec52e91
The SETimes\_sr training corpus contains 86,726 Serbian tokens manually annotated on the levels of tokenisation, sentence segmentation, morphosyntactic tagging, lemmatisation, named entities and dependency syntax. The dataset contains 3177 training samples, 395 validation samples and 319 test samples across the respective data splits. Each sample represents a sentence and includes the following features: sentence ID ('sent\_id'), sentence text ('text'), list of tokens ('tokens'), list of lemmas ('lemmas'), list of MULTEXT-East tags ('xpos\_tags), list of UPOS tags ('upos\_tags'), list of morphological features ('feats'), list of IOB tags ('iob\_tags') and list of universal dependencies ('uds'). Three dataset configurations are available, namely 'ner', 'upos', and 'ud', with the corresponding features encoded as class labels. If the configuration is not specified, it defaults to 'ner'. If you use this dataset in your research, please cite the following paper: ``` @inproceedings{samardzic-etal-2017-universal, title = "{U}niversal {D}ependencies for {S}erbian in Comparison with {C}roatian and Other {S}lavic Languages", author = "Samard{\v{z}}i{\'c}, Tanja and Starovi{\'c}, Mirjana and Agi{\'c}, {\v{Z}}eljko and Ljube{\v{s}}i{\'c}, Nikola", booktitle = "Proceedings of the 6th Workshop on {B}alto-{S}lavic Natural Language Processing", month = apr, year = "2017", address = "Valencia, Spain", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/W17-1407", doi = "10.18653/v1/W17-1407", pages = "39--44", } ```
classla/setimes_sr
[ "task_categories:other", "task_ids:lemmatization", "task_ids:named-entity-recognition", "task_ids:part-of-speech", "language:sr", "license:cc-by-sa-4.0", "structure-prediction", "normalization", "tokenization", "region:us" ]
2022-03-02T23:29:22+00:00
{"language": ["sr"], "license": ["cc-by-sa-4.0"], "task_categories": ["other"], "task_ids": ["lemmatization", "named-entity-recognition", "part-of-speech"], "tags": ["structure-prediction", "normalization", "tokenization"]}
2022-10-25T06:30:04+00:00
446b04c97cb43772a229cebbb8da0ce05ee03d2d
The dataset contains 7432 training samples, 1164 validation samples and 893 test samples. Each sample represents a sentence and includes the following features: sentence ID ('sent\_id'), list of tokens ('tokens'), list of lemmas ('lemmas'), list of Multext-East tags ('xpos\_tags), list of UPOS tags ('upos\_tags'), list of morphological features ('feats'), list of IOB tags ('iob\_tags'), and list of universal dependency tags ('uds'). Three dataset configurations are available, where the corresponding features are encoded as class labels: 'ner', 'upos', and 'ud'.
classla/ssj500k
[ "task_categories:token-classification", "task_ids:lemmatization", "task_ids:named-entity-recognition", "task_ids:parsing", "task_ids:part-of-speech", "language:sl", "license:cc-by-sa-4.0", "structure-prediction", "tokenization", "dependency-parsing", "region:us" ]
2022-03-02T23:29:22+00:00
{"language": ["sl"], "license": ["cc-by-sa-4.0"], "task_categories": ["token-classification"], "task_ids": ["lemmatization", "named-entity-recognition", "parsing", "part-of-speech"], "tags": ["structure-prediction", "tokenization", "dependency-parsing"]}
2022-10-28T04:37:22+00:00
dcbb0c37d501225a976dc9e8a12bf0e20c8e2e04
This is a very good dataset!
clem/autonlp-data-french_word_detection
[ "region:us" ]
2022-03-02T23:29:22+00:00
{}
2021-09-14T08:45:38+00:00
87a7bada8da4fe2a7b738c6d3e549153383198ad
# MFAQ 🚨 See [MQA](https://huggingface.co/datasets/clips/mqa) or [MFAQ Light](maximedb/mfaq_light) for an updated version of the dataset. MFAQ is a multilingual corpus of *Frequently Asked Questions* parsed from the [Common Crawl](https://commoncrawl.org/). ``` from datasets import load_dataset load_dataset("clips/mfaq", "en") { "qa_pairs": [ { "question": "Do I need a rental Car in Cork?", "answer": "If you plan on travelling outside of Cork City, for instance to Kinsale [...]" }, ... ] } ``` ## Languages We collected around 6M pairs of questions and answers in 21 different languages. To download a language specific subset you need to specify the language key as configuration. See below for an example. ``` load_dataset("clips/mfaq", "en") # replace "en" by any language listed below ``` | Language | Key | Pairs | Pages | |------------|-----|-----------|-----------| | All | all | 6,346,693 | 1,035,649 | | English | en | 3,719,484 | 608,796 | | German | de | 829,098 | 111,618 | | Spanish | es | 482,818 | 75,489 | | French | fr | 351,458 | 56,317 | | Italian | it | 155,296 | 24,562 | | Dutch | nl | 150,819 | 32,574 | | Portuguese | pt | 138,778 | 26,169 | | Turkish | tr | 102,373 | 19,002 | | Russian | ru | 91,771 | 22,643 | | Polish | pl | 65,182 | 10,695 | | Indonesian | id | 45,839 | 7,910 | | Norwegian | no | 37,711 | 5,143 | | Swedish | sv | 37,003 | 5,270 | | Danish | da | 32,655 | 5,279 | | Vietnamese | vi | 27,157 | 5,261 | | Finnish | fi | 20,485 | 2,795 | | Romanian | ro | 17,066 | 3,554 | | Czech | cs | 16,675 | 2,568 | | Hebrew | he | 11,212 | 1,921 | | Hungarian | hu | 8,598 | 1,264 | | Croatian | hr | 5,215 | 819 | ## Data Fields #### Nested (per page - default) The data is organized by page. Each page contains a list of questions and answers. - **id** - **language** - **num_pairs**: the number of FAQs on the page - **domain**: source web domain of the FAQs - **qa_pairs**: a list of questions and answers - **question** - **answer** - **language** #### Flattened The data is organized by pair (i.e. pages are flattened). You can access the flat version of any language by appending `_flat` to the configuration (e.g. `en_flat`). The data will be returned pair-by-pair instead of page-by-page. - **domain_id** - **pair_id** - **language** - **domain**: source web domain of the FAQs - **question** - **answer** ## Source Data This section was adapted from the source data description of [OSCAR](https://huggingface.co/datasets/oscar#source-data) Common Crawl is a non-profit foundation which produces and maintains an open repository of web crawled data that is both accessible and analysable. Common Crawl's complete web archive consists of petabytes of data collected over 8 years of web crawling. The repository contains raw web page HTML data (WARC files), metdata extracts (WAT files) and plain text extracts (WET files). The organisation's crawlers has always respected nofollow and robots.txt policies. To construct MFAQ, the WARC files of Common Crawl were used. We looked for `FAQPage` markup in the HTML and subsequently parsed the `FAQItem` from the page. ## People This model was developed by [Maxime De Bruyn](https://www.linkedin.com/in/maximedebruyn/), Ehsan Lotfi, Jeska Buhmann and Walter Daelemans. ## Licensing Information ``` These data are released under this licensing scheme. We do not own any of the text from which these data has been extracted. 
We license the actual packaging of these data under the Creative Commons CC0 license ("no rights reserved") http://creativecommons.org/publicdomain/zero/1.0/ Should you consider that our data contains material that is owned by you and should therefore not be reproduced here, please: * Clearly identify yourself, with detailed contact data such as an address, telephone number or email address at which you can be contacted. * Clearly identify the copyrighted work claimed to be infringed. * Clearly identify the material that is claimed to be infringing and information reasonably sufficient to allow us to locate the material. We will comply to legitimate requests by removing the affected sources from the next release of the corpus. ``` ## Citation information ``` @misc{debruyn2021mfaq, title={MFAQ: a Multilingual FAQ Dataset}, author={Maxime {De Bruyn} and Ehsan Lotfi and Jeska Buhmann and Walter Daelemans}, year={2021}, eprint={2109.12870}, archivePrefix={arXiv}, primaryClass={cs.CL} } ```
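A minimal sketch for the flattened configuration described above (field names follow the 'Flattened' listing; the split name is not assumed):

```python
from datasets import load_dataset

# Pair-by-pair view of the English subset, per the "_flat" convention described above.
mfaq_flat = load_dataset("clips/mfaq", "en_flat")
split = next(iter(mfaq_flat.values()))  # take whichever split the config exposes
pair = split[0]
print(pair["domain"], "|", pair["question"])
print(pair["answer"])
```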
clips/mfaq
[ "task_categories:question-answering", "task_ids:multiple-choice-qa", "annotations_creators:no-annotation", "language_creators:other", "multilinguality:multilingual", "size_categories:unknown", "source_datasets:original", "language:cs", "language:da", "language:de", "language:en", "language:es", "language:fi", "language:fr", "language:he", "language:hr", "language:hu", "language:id", "language:it", "language:nl", "language:no", "language:pl", "language:pt", "language:ro", "language:ru", "language:sv", "language:tr", "language:vi", "license:cc0-1.0", "arxiv:2109.12870", "region:us" ]
2022-03-02T23:29:22+00:00
{"annotations_creators": ["no-annotation"], "language_creators": ["other"], "language": ["cs", "da", "de", "en", "es", "fi", "fr", "he", "hr", "hu", "id", "it", "nl", "no", "pl", "pt", "ro", "ru", "sv", "tr", "vi"], "license": ["cc0-1.0"], "multilinguality": ["multilingual"], "size_categories": ["unknown"], "source_datasets": ["original"], "task_categories": ["question-answering"], "task_ids": ["multiple-choice-qa"], "pretty_name": "MFAQ - a Multilingual FAQ Dataset"}
2022-10-20T10:32:50+00:00
27eebc4a00d229f8dd4ae2a6d9f1e4ad45781f3b
# MQA MQA is a Multilingual corpus of Questions and Answers (MQA) parsed from the [Common Crawl](https://commoncrawl.org/). Questions are divided in two types: *Frequently Asked Questions (FAQ)* and *Community Question Answering (CQA)*. ```python from datasets import load_dataset all_data = load_dataset("clips/mqa", language="en") { "name": "the title of the question (if any)", "text": "the body of the question (if any)", "answers": [{ "text": "the text of the answer", "is_accepted": "true|false" }] } faq_data = load_dataset("clips/mqa", scope="faq", language="en") cqa_data = load_dataset("clips/mqa", scope="cqa", language="en") ``` ## Languages We collected around **234M pairs** of questions and answers in **39 languages**. To download a language specific subset you need to specify the language key as configuration. See below for an example. ```python load_dataset("clips/mqa", language="en") # replace "en" by any language listed below ``` | Language | FAQ | CQA | |:-----------|------------:|-----------:| | en | 174,696,414 | 14,082,180 | | de | 17,796,992 | 1,094,606 | | es | 14,967,582 | 845,836 | | fr | 13,096,727 | 1,299,359 | | ru | 12,435,022 | 1,715,131 | | it | 6,850,573 | 455,027 | | ja | 6,369,706 | 2,089,952 | | zh | 5,940,796 | 579,596 | | pt | 5,851,286 | 373,982 | | nl | 4,882,511 | 503,376 | | tr | 3,893,964 | 370,975 | | pl | 3,766,531 | 70,559 | | vi | 2,795,227 | 96,528 | | id | 2,253,070 | 200,441 | | ar | 2,211,795 | 805,661 | | uk | 2,090,611 | 27,260 | | el | 1,758,618 | 17,167 | | no | 1,752,820 | 11,786 | | sv | 1,733,582 | 20,024 | | fi | 1,717,221 | 41,371 | | ro | 1,689,471 | 93,222 | | th | 1,685,463 | 73,204 | | da | 1,554,581 | 16,398 | | he | 1,422,449 | 88,435 | | ko | 1,361,901 | 49,061 | | cs | 1,224,312 | 143,863 | | hu | 878,385 | 27,639 | | fa | 787,420 | 118,805 | | sk | 785,101 | 4,615 | | lt | 672,105 | 301 | | et | 547,208 | 441 | | hi | 516,342 | 205,645 | | hr | 458,958 | 11,677 | | is | 437,748 | 37 | | lv | 428,002 | 88 | | ms | 230,568 | 7,460 | | bg | 198,671 | 5,320 | | sr | 110,270 | 3,980 | | ca | 100,201 | 1,914 | ## FAQ vs. CQA You can download the *Frequently Asked Questions* (FAQ) or the *Community Question Answering* (CQA) part of the dataset. ```python faq = load_dataset("clips/mqa", scope="faq") cqa = load_dataset("clips/mqa", scope="cqa") all = load_dataset("clips/mqa", scope="all") ``` Although FAQ and CQA questions share the same structure, CQA questions can have multiple answers for a given questions, while FAQ questions have a single answer. FAQ questions typically only have a title (`name` key), while CQA have a title and a body (`name` and `text`). ## Nesting and Data Fields You can specify three different nesting level: `question`, `page` and `domain`. #### Question ```python load_dataset("clips/mqa", level="question") # default ``` The default level is the question object: - **name**: the title of the question(if any) in markdown format - **text**: the body of the question (if any) in markdown format - **answers**: a list of answers - **text**: the title of the answer (if any) in markdown format - **name**: the body of the answer in markdown format - **is_accepted**: true if the answer is selected. #### Page This level returns a list of questions present on the same page. This is mostly useful for FAQs since CQAs already have one question per page. ```python load_dataset("clips/mqa", level="page") ``` #### Domain This level returns a list of pages present on the web domain. 
This is a good way to cope with FAQ duplication by sampling one page per domain at each epoch. ```python load_dataset("clips/mqa", level="domain") ``` ## Source Data This section was adapted from the source data description of [OSCAR](https://huggingface.co/datasets/oscar#source-data) Common Crawl is a non-profit foundation which produces and maintains an open repository of web crawled data that is both accessible and analysable. Common Crawl's complete web archive consists of petabytes of data collected over 8 years of web crawling. The repository contains raw web page HTML data (WARC files), metadata extracts (WAT files) and plain text extracts (WET files). The organisation's crawlers have always respected nofollow and robots.txt policies. To construct MQA, we used the WARC files of Common Crawl. ## People This dataset was developed by [Maxime De Bruyn](https://maximedb.vercel.app), Ehsan Lotfi, Jeska Buhmann and Walter Daelemans. ## Licensing Information ``` These data are released under this licensing scheme. We do not own any of the text from which these data has been extracted. We license the actual packaging of these data under the Creative Commons CC0 license ("no rights reserved") http://creativecommons.org/publicdomain/zero/1.0/ Should you consider that our data contains material that is owned by you and should therefore not be reproduced here, please: * Clearly identify yourself, with detailed contact data such as an address, telephone number or email address at which you can be contacted. * Clearly identify the copyrighted work claimed to be infringed. * Clearly identify the material that is claimed to be infringing and information reasonably sufficient to allow us to locate the material. We will comply to legitimate requests by removing the affected sources from the next release of the corpus. ``` ## Citation information ``` @inproceedings{de-bruyn-etal-2021-mfaq, title = "{MFAQ}: a Multilingual {FAQ} Dataset", author = "De Bruyn, Maxime and Lotfi, Ehsan and Buhmann, Jeska and Daelemans, Walter", booktitle = "Proceedings of the 3rd Workshop on Machine Reading for Question Answering", month = nov, year = "2021", address = "Punta Cana, Dominican Republic", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2021.mrqa-1.1", pages = "1--13", } ```
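A minimal sketch for selecting an accepted answer from a CQA question, using only the arguments and keys shown above (it assumes each question has at least one answer and that `is_accepted` behaves as a boolean):

```python
from datasets import load_dataset

# Community QA subset for English, loaded with the arguments shown above.
cqa_en = load_dataset("clips/mqa", scope="cqa", language="en")
split = next(iter(cqa_en.values()))  # split name not assumed
question = split[0]

# Prefer the accepted answer when one is marked; otherwise fall back to the first.
accepted = [a for a in question["answers"] if a["is_accepted"]]
best = accepted[0] if accepted else question["answers"][0]
print(question["name"])
print(best["text"])
```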
clips/mqa
[ "task_categories:question-answering", "task_ids:multiple-choice-qa", "annotations_creators:no-annotation", "language_creators:other", "multilinguality:multilingual", "size_categories:unknown", "source_datasets:original", "language:ca", "language:en", "language:de", "language:es", "language:fr", "language:ru", "language:ja", "language:it", "language:zh", "language:pt", "language:nl", "language:tr", "language:pl", "language:vi", "language:ar", "language:id", "language:uk", "language:ro", "language:no", "language:th", "language:sv", "language:el", "language:fi", "language:he", "language:da", "language:cs", "language:ko", "language:fa", "language:hi", "language:hu", "language:sk", "language:lt", "language:et", "language:hr", "language:is", "language:lv", "language:ms", "language:bg", "language:sr", "license:cc0-1.0", "region:us" ]
2022-03-02T23:29:22+00:00
{"annotations_creators": ["no-annotation"], "language_creators": ["other"], "language": ["ca", "en", "de", "es", "fr", "ru", "ja", "it", "zh", "pt", "nl", "tr", "pl", "vi", "ar", "id", "uk", "ro", false, "th", "sv", "el", "fi", "he", "da", "cs", "ko", "fa", "hi", "hu", "sk", "lt", "et", "hr", "is", "lv", "ms", "bg", "sr", "ca"], "license": ["cc0-1.0"], "multilinguality": ["multilingual"], "size_categories": ["unknown"], "source_datasets": ["original"], "task_categories": ["question-answering"], "task_ids": ["multiple-choice-qa"], "pretty_name": "MQA - a Multilingual FAQ and CQA Dataset"}
2022-09-27T11:38:50+00:00
3a1dc9acf1e9957e628865fa9937a70f71cf5f3f
fwefwefewf
cnrcastroli/aaaa
[ "region:us" ]
2022-03-02T23:29:22+00:00
{}
2021-03-04T21:51:21+00:00
ab5506446dea35e06b6ac00d0b9c7a6677cd43ed
# Dataset Card for "FairLex" ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** https://github.com/coastalcph/fairlex - **Repository:** https://github.com/coastalcph/fairlex - **Paper:** https://aclanthology.org/2022.acl-long.301/ - **Leaderboard:** - - **Point of Contact:** [Ilias Chalkidis](mailto:[email protected]) ### Dataset Summary We present a benchmark suite of four datasets for evaluating the fairness of pre-trained legal language models and the techniques used to fine-tune them for downstream tasks. Our benchmarks cover four jurisdictions (European Council, USA, Swiss, and Chinese), five languages (English, German, French, Italian, and Chinese), and fairness across five attributes (gender, age, nationality/region, language, and legal area). In our experiments, we evaluate pre-trained language models using several group-robust fine-tuning techniques and show that performance group disparities are vibrant in many cases, while none of these techniques guarantee fairness, nor consistently mitigate group disparities. Furthermore, we provide a quantitative and qualitative analysis of our results, highlighting open challenges in the development of robustness methods in legal NLP. For the purpose of this work, we release four domain-specific BERT models with continued pre-training on the corpora of the examined datasets (ECtHR, SCOTUS, FSCS, CAIL). We train mini-sized BERT models with 6 Transformer blocks, 384 hidden units, and 12 attention heads. We warm-start all models from the public MiniLMv2 (Wang et al., 2021) using the distilled version of RoBERTa (Liu et al., 2019). For the English datasets (ECtHR, SCOTUS) and the one distilled from XLM-R (Conneau et al., 2021) for the rest (trilingual FSCS, and Chinese CAIL). [[Link to Models](https://huggingface.co/models?search=fairlex)] ### Supported Tasks and Leaderboards The supported tasks are the following: <table> <tr><td>Dataset</td><td>Source</td><td>Sub-domain</td><td>Language</td><td>Task Type</td><td>Classes</td><tr> <tr><td>ECtHR</td><td> <a href="https://aclanthology.org/P19-1424/">Chalkidis et al. (2019)</a> </td><td>ECHR</td><td>en</td><td>Multi-label classification</td><td>10+1</td></tr> <tr><td>SCOTUS</td><td> <a href="http://scdb.wustl.edu">Spaeth et al. (2020)</a></td><td>US Law</td><td>en</td><td>Multi-class classification</td><td>11</td></tr> <tr><td>FSCS</td><td> <a href="https://aclanthology.org/2021.nllp-1.3/">Niklaus et al. 
(2021)</a></td><td>Swiss Law</td><td>en, fr , it</td><td>Binary classification</td><td>2</td></tr> <tr><td>CAIL</td><td> <a href="https://arxiv.org/abs/2103.13868">Wang et al. (2021)</a></td><td>Chinese Law</td><td>zh</td><td>Multi-class classification</td><td>6</td></tr> </table> #### ecthr The European Court of Human Rights (ECtHR) hears allegations that a state has breached human rights provisions of the European Convention of Human Rights (ECHR). We use the dataset of Chalkidis et al. (2021), which contains 11K cases from ECtHR's public database. Each case is mapped to *articles* of the ECHR that were violated (if any). This is a multi-label text classification task. Given the facts of a case, the goal is to predict the ECHR articles that were violated, if any, as decided (ruled) by the court. The cases are chronologically split into training (9k, 2001--16), development (1k, 2016--17), and test (1k, 2017--19) sets. To facilitate the study of the fairness of text classifiers, we record for each case the following attributes: (a) The _defendant states_, which are the European states that allegedly violated the ECHR. The defendant states for each case is a subset of the 47 Member States of the Council of Europe; To have statistical support, we group defendant states in two groups: Central-Eastern European states, on one hand, and all other states, as classified by the EuroVoc thesaurus. (b) The _applicant's age_ at the time of the decision. We extract the birth year of the applicant from the case facts, if possible, and classify its case in an age group (<=35, <=64, or older); and (c) the _applicant's gender_, extracted from the facts, if possible based on pronouns, classified in two categories (male, female). #### scotus The US Supreme Court (SCOTUS) is the highest federal court in the United States of America and generally hears only the most controversial or otherwise complex cases that have not been sufficiently well solved by lower courts. We combine information from SCOTUS opinions with the Supreme Court DataBase (SCDB) (Spaeth, 2020). SCDB provides metadata (e.g., date of publication, decisions, issues, decision directions, and many more) for all cases. We consider the available 14 thematic issue areas (e.g, Criminal Procedure, Civil Rights, Economic Activity, etc.). This is a single-label multi-class document classification task. Given the court's opinion, the goal is to predict the issue area whose focus is on the subject matter of the controversy (dispute). SCOTUS contains a total of 9,262 cases that we split chronologically into 80% for training (7.4k, 1946--1982), 10% for development (914, 1982--1991) and 10% for testing (931, 1991--2016). From SCDB, we also use the following attributes to study fairness: (a) the _type of respondent_, which is a manual categorization of respondents (defendants) in five categories (person, public entity, organization, facility, and other); and (c) the _direction of the decision_, i.e., whether the decision is liberal, or conservative, provided by SCDB. #### fscs The Federal Supreme Court of Switzerland (FSCS) is the last level of appeal in Switzerland and similarly to SCOTUS, the court generally hears only the most controversial or otherwise complex cases which have not been sufficiently well solved by lower courts. The court often focuses only on small parts of the previous decision, where they discuss possible wrong reasoning by the lower court. 
The Swiss-Judgment-Predict dataset (Niklaus et al., 2021) contains more than 85K decisions from the FSCS written in one of three languages (50K German, 31K French, 4K Italian) from the years 2000 to 2020. The dataset is not parallel, i.e., all cases are unique and decisions are written only in a single language. The dataset provides labels for a simplified binary (_approval_, _dismissal_) classification task. Given the facts of the case, the goal is to predict if the plaintiff's request is valid or partially valid. The cases are also chronologically split into training (59.7k, 2000-2014), development (8.2k, 2015-2016), and test (17.4k, 2017-2020) sets. The dataset provides three additional attributes: (a) the _language_ of the FSCS written decision, in either German, French, or Italian; (b) the _legal area_ of the case (public, penal, social, civil, or insurance law) derived from the chambers where the decisions were heard; and (c) the _region_ that denotes in which federal region the case originated. #### cail The Supreme People's Court of China is the last level of appeal in China and considers cases that originated from the high people's courts concerning matters of national importance. The Chinese AI and Law challenge (CAIL) dataset (Xiao et al., 2018) is a Chinese legal NLP dataset for judgment prediction and contains over 1M criminal cases. The dataset provides labels for *relevant article of criminal code* prediction, *charge* (type of crime) prediction, imprisonment *term* (period) prediction, and monetary *penalty* prediction. The publication of the original dataset has been the topic of an active debate in the NLP community (Leins et al., 2020; Tsarapatsanis and Aletras, 2021; Bender, 2021). Recently, Wang et al. (2021) re-annotated a subset of approx. 100k cases with demographic attributes. Specifically, the new dataset has been annotated with: (a) the _defendant's gender_, classified in two categories (male, female); and (b) the _region_ of the court, which denotes in which of the 7 provincial-level administrative regions the case was judged. We re-split the dataset chronologically into training (80k, 2013-2017), development (12k, 2017-2018), and test (12k, 2018) sets. In our study, we re-frame the imprisonment _term_ prediction and examine a softer version, dubbed the _crime severity_ prediction task: a multi-class classification task where, given the facts of a case, the goal is to predict how severe the committed crime was with respect to the imprisonment term. We approximate crime severity by the length of the imprisonment term, split into 6 clusters (0, <=12, <=36, <=60, <=120, >120 months). ### Languages We consider datasets in English, German, French, Italian, and Chinese. ## Dataset Structure ### Data Instances #### ecthr An example of 'train' looks as follows. ```json { "text": "1. At the beginning of the events relevant to the application, K. had a daughter, P., and a son, M., born in 1986 and 1988 respectively. ... ", "labels": [4], "defendant_state": 1, "applicant_gender": 0, "applicant_age": 0 } ``` #### scotus An example of 'train' looks as follows. ```json { "text": "United States Supreme Court MICHIGAN NAT. BANK v. MICHIGAN(1961) No. 155 Argued: Decided: March 6, 1961 </s> R. S. 5219 permits States to tax the shares of national banks, but not at a greater rate than . . . other moneyed capital . . . 
coming into competition with the business of national banks ...", "label": 9, "decision_direction": 0, "respondent_type": 3 } ``` #### fscs An example of 'train' looks as follows. ```json { "text": "A.- Der 1955 geborene V._ war seit 1. September 1986 hauptberuflich als technischer Kaufmann bei der Firma A._ AG tätig und im Rahmen einer Nebenbeschäftigung (Nachtarbeit) ab Mai 1990 bei einem Bewachungsdienst angestellt gewesen, als er am 10....", "label": 0, "decision_language": 0, "legal_area": 5, "court_region": 2 } ``` #### cail An example of 'train' looks as follows. ```json { "text": "南宁市兴宁区人民检察院指控,2012年1月1日19时许,被告人蒋满德在南宁市某某路某号某市场内,因经营问题与被害人杨某某发生争吵并推打 ...", "label": 0, "defendant_gender": 0, "court_region": 5 } ``` ### Data Fields #### ecthr - `text`: a `string` feature (factual paragraphs (facts) from the case description). - `labels`: a list of classification labels (a list of violated ECHR articles, if any). The ECHR articles considered are 2, 3, 5, 6, 8, 9, 11, 14, P1-1. - `defendant_state`: Defendant State group (C.E. European, Rest of Europe) - `applicant_gender`: The gender of the applicant (N/A, Male, Female) - `applicant_age`: The age group of the applicant (N/A, <=35, <=64, or older) #### scotus - `text`: a `string` feature (the court opinion). - `label`: a classification label (the relevant issue area). The issue areas are: (1, Criminal Procedure), (2, Civil Rights), (3, First Amendment), (4, Due Process), (5, Privacy), (6, Attorneys), (7, Unions), (8, Economic Activity), (9, Judicial Power), (10, Federalism), (11, Interstate Relations), (12, Federal Taxation), (13, Miscellaneous), (14, Private Action). - `respondent_type`: the type of respondent, which is a manual categorization (clustering) of respondents (defendants) into five categories (person, public entity, organization, facility, and other). - `decision_direction`: the direction of the decision, i.e., whether the decision is liberal or conservative, as provided by SCDB. #### fscs - `text`: a `string` feature (the facts of the case). - `label`: a classification label (approval or dismissal of the appeal). - `language`: the language of the FSCS written decision (German, French, or Italian). - `legal_area`: the legal area of the case (public, penal, social, civil, or insurance law) derived from the chambers where the decisions were heard. - `region`: the region that denotes in which federal region the case originated. #### cail - `text`: a `string` feature (the factual description of the case). - `label`: a classification label (crime severity derived from the imprisonment term). - `defendant_gender`: the gender of the defendant (Male or Female). - `court_region`: the region of the court, which denotes in which of the 7 provincial-level administrative regions the case was judged. 
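Below is a minimal loading sketch for the configurations described above. It is illustrative only: it assumes the configuration names match the subsection names (`ecthr`, `scotus`, `fscs`, `cail`) and that the data is accessed through the Hugging Face `datasets` library.

```python
from datasets import load_dataset

# Assumption: configuration names follow the subsections above (ecthr, scotus, fscs, cail).
ecthr = load_dataset("coastalcph/fairlex", "ecthr")

# Inspect one training example together with its fairness attributes.
example = ecthr["train"][0]
print(example["labels"], example["defendant_state"], example["applicant_gender"], example["applicant_age"])
```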
### Data Splits <table> <tr><td>Dataset </td><td>Training</td><td>Development</td><td>Test</td><td>Total</td></tr> <tr><td>ECtHR</td><td>9000</td><td>1000</td><td>1000</td><td>11000</td></tr> <tr><td>SCOTUS</td><td>7417</td><td>914</td><td>931</td><td>9262</td></tr> <tr><td>FSCS</td><td>59709</td><td>8208</td><td>17357</td><td>85274</td></tr> <tr><td>CAIL</td><td>80000</td><td>12000</td><td>12000</td><td>104000</td></tr> </table> ## Dataset Creation ### Curation Rationale [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Source Data <table> <tr><td>Dataset</td><td>Source</td><td>Sub-domain</td><td>Language</td><td>Task Type</td><td>Classes</td></tr> <tr><td>ECtHR</td><td> <a href="https://aclanthology.org/P19-1424/">Chalkidis et al. (2019)</a> </td><td>ECHR</td><td>en</td><td>Multi-label classification</td><td>10+1</td></tr> <tr><td>SCOTUS</td><td> <a href="http://scdb.wustl.edu">Spaeth et al. (2020)</a></td><td>US Law</td><td>en</td><td>Multi-class classification</td><td>14</td></tr> <tr><td>FSCS</td><td> <a href="https://aclanthology.org/2021.nllp-1.3/">Niklaus et al. (2021)</a></td><td>Swiss Law</td><td>de, fr, it</td><td>Binary classification</td><td>2</td></tr> <tr><td>CAIL</td><td> <a href="https://arxiv.org/abs/2105.03887">Wang et al. (2021)</a></td><td>Chinese Law</td><td>zh</td><td>Multi-class classification</td><td>6</td></tr> </table> #### Initial Data Collection and Normalization We standardize and put together four datasets: ECtHR (Chalkidis et al., 2021), SCOTUS (Spaeth et al., 2020), FSCS (Niklaus et al., 2021), and CAIL (Xiao et al., 2018; Wang et al., 2021) that are already publicly available. The benchmark is not a blind stapling of pre-existing resources; we augment previous datasets. In the case of ECtHR, previously unavailable demographic attributes have been released to make the original dataset amenable to fairness research. For SCOTUS, two resources (court opinions with SCDB) have been combined for the very same reason, while the authors provide a manual categorization (clustering) of respondents. All datasets, except SCOTUS, are publicly available and have been previously published. Where the datasets or the papers introducing them were not compiled or written by the authors, the original work is referenced, and the authors encourage FairLex users to reference it as well. In fact, this work should only be referenced, in addition to citing the original work, when jointly experimenting with multiple FairLex datasets and using the FairLex evaluation framework and infrastructure, or using any newly introduced annotations (ECtHR, SCOTUS). Otherwise only the original work should be cited. #### Who are the source language producers? [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Annotations #### Annotation process [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) #### Who are the annotators? All classification labels rely on legal decisions (ECtHR, FSCS, CAIL) or are part of archival procedures (SCOTUS). The demographic attributes and other metadata are either provided by the legal databases or have been extracted automatically from the text by means of Regular Expressions. Consider the **Dataset Description** and **Discussion of Biases** sections, and the original publication for detailed information. 
### Personal and Sensitive Information The data is in general partially anonymized in accordance with the applicable national law. The data is considered to be in the public sphere from a privacy perspective. This is a very sensitive matter, as the courts try to keep a balance between transparency (the public's right to know) and privacy (respect for private and family life). ECtHR cases are partially anonymized by the court. Its data is processed and made public in accordance with the European Data Protection Law. SCOTUS cases may also contain personal information and the data is processed and made available by the US Supreme Court, whose proceedings are public. While this ensures compliance with US law, it is very likely that, similarly to the ECtHR, any processing could be justified by either implied consent or legitimate interest under European law. In FSCS, the names of the parties have been redacted by the courts according to the official guidelines. CAIL cases are also partially anonymized by the courts according to the courts' policy. Its data is processed and made public in accordance with Chinese Law. ## Considerations for Using the Data ### Social Impact of Dataset This work can help practitioners build assistive technology for legal professionals, with respect to the legal framework (jurisdiction) in which they operate; technology that does not only rely on performance on majority groups, but also considers minorities and the robustness of the developed models across them. This is an important application field, where more research should be conducted (Tsarapatsanis and Aletras, 2021) in order to improve legal services and democratize law and, more importantly, to highlight (inform the audience of) the various multi-aspect shortcomings, seeking a responsible and ethical (fair) deployment of technology. ### Discussion of Biases The current version of FairLex covers a very small fraction of legal applications, jurisdictions, and protected attributes. The benchmark inevitably cannot cover "_everything in the whole wide (legal) world_" (Raji et al., 2021), but nonetheless, we believe that the published resources will help critical research in the area of fairness. Some protected attributes within the datasets are extracted automatically, i.e., the gender and the age in the ECtHR dataset, by means of Regular Expressions, or manually clustered by the authors, such as the defendant state in the ECtHR dataset and the respondent attribute in the SCOTUS dataset. Those assumptions and simplifications can hold in an experimental setting only and by no means should be used in real-world applications, where some simplifications, e.g., binary gender, would not be appropriate. The publication and use of the data by no means imply that the authors or future users endorse the legal standards or frameworks of the examined jurisdictions. ### Other Known Limitations [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ## Additional Information [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Dataset Curators *Ilias Chalkidis, Tommaso Pasini, Sheng Zhang, Letizia Tomada, Sebastian Felix Schwemer, Anders Søgaard.* *FairLex: A Multilingual Benchmark for Evaluating Fairness in Legal Text Processing.* *2022. 
In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics, Dublin, Ireland.* **Note:** The original datasets were curated by others, and further curated (updated) by means of this benchmark. ### Licensing Information The benchmark is released under an [Attribution-NonCommercial-ShareAlike 4.0 International (CC BY-NC-SA 4.0)](https://creativecommons.org/licenses/by-nc-sa/4.0/) license. The licensing is compatible with the licensing of the former material (remixed, transformed datasets). ### Citation Information [*Ilias Chalkidis, Tommaso Pasini, Sheng Zhang, Letizia Tomada, Sebastian Felix Schwemer, Anders Søgaard.* *FairLex: A Multilingual Benchmark for Evaluating Fairness in Legal Text Processing.* *2022. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics, Dublin, Ireland.*](https://arxiv.org/abs/2203.07228) ``` @inproceedings{chalkidis-etal-2022-fairlex, author={Chalkidis, Ilias and Pasini, Tommaso and Zhang, Sheng and Tomada, Letizia and Schwemer, Sebastian Felix and Søgaard, Anders}, title={FairLex: A Multilingual Benchmark for Evaluating Fairness in Legal Text Processing}, booktitle={Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics}, year={2022}, address={Dublin, Ireland} } ``` **Note:** Please consider citing and giving credit to all publications releasing the examined datasets. ### Contributions Thanks to [@iliaschalkidis](https://github.com/iliaschalkidis) for adding this dataset.
coastalcph/fairlex
[ "task_categories:text-classification", "task_ids:multi-label-classification", "task_ids:multi-class-classification", "task_ids:topic-classification", "annotations_creators:found", "annotations_creators:machine-generated", "language_creators:found", "source_datasets:extended", "language:en", "language:de", "language:fr", "language:it", "language:zh", "license:cc-by-nc-sa-4.0", "bias", "gender-bias", "arxiv:2103.13868", "arxiv:2105.03887", "arxiv:2203.07228", "region:us" ]
2022-03-02T23:29:22+00:00
{"annotations_creators": ["found", "machine-generated"], "language_creators": ["found"], "language": ["en", "en", "de", "fr", "it", "zh"], "license": ["cc-by-nc-sa-4.0"], "multilinguality": {"ecthr": ["monolingual"], "scotus": ["monolingual"], "fscs": ["multilingual"], "cail": ["monolingual"]}, "size_categories": {"ecthr": ["10K<n<100K"], "scotus": ["1K<n<10K"], "fscs": ["10K<n<100K"], "cail": ["100K<n<1M"]}, "source_datasets": ["extended"], "task_categories": ["text-classification"], "task_ids": ["multi-label-classification", "multi-class-classification", "topic-classification"], "pretty_name": "FairLex", "tags": ["bias", "gender-bias"]}
2023-07-27T11:43:39+00:00
6e4bef0cfa6a9570ba29b06ca47a2db111f71cc0
codeceejay/ng_accent
[ "region:us" ]
2022-03-02T23:29:22+00:00
{}
2022-01-28T16:41:32+00:00
9070da7298a73ea6129f711916f17e52d82884de
# Dataset Card for **cointegrated/ru-paraphrase-NMT-Leipzig** ## Table of Contents - [Table of Contents](#table-of-contents) - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Paper:** https://habr.com/ru/post/564916/ - **Point of Contact:** [@cointegrated](https://huggingface.co/cointegrated) ### Dataset Summary The dataset contains 1 million Russian sentences and their automatically generated paraphrases. It was created by David Dale ([@cointegrated](https://huggingface.co/cointegrated)) by translating the `rus-ru_web-public_2019_1M` corpus from [the Leipzig collection](https://wortschatz.uni-leipzig.de/en/download) into English and back into Russian. A fraction of the resulting paraphrases are invalid and should be filtered out. The blogpost ["Перефразирование русских текстов: корпуса, модели, метрики"](https://habr.com/ru/post/564916/) provides a detailed description of the dataset and its properties. The dataset can be loaded with the following code: ```Python import datasets data = datasets.load_dataset( 'cointegrated/ru-paraphrase-NMT-Leipzig', data_files={"train": "train.csv","val": "val.csv","test": "test.csv"}, ) ``` Its output should look like ``` DatasetDict({ train: Dataset({ features: ['idx', 'original', 'en', 'ru', 'chrf_sim', 'labse_sim'], num_rows: 980000 }) val: Dataset({ features: ['idx', 'original', 'en', 'ru', 'chrf_sim', 'labse_sim'], num_rows: 10000 }) test: Dataset({ features: ['idx', 'original', 'en', 'ru', 'chrf_sim', 'labse_sim'], num_rows: 10000 }) }) ``` ### Supported Tasks and Leaderboards The dataset can be used to train and validate models for paraphrase generation or (if negative sampling is used) for paraphrase detection. ### Languages Russian (main), English (auxiliary). ## Dataset Structure ### Data Instances Data instances look like ``` { "labse_sim": 0.93502015, "chrf_sim": 0.4946451012684782, "idx": 646422, "ru": "О перспективах развития новых медиа-технологий в РФ расскажут на медиафоруме Енисея.", "original": "Перспективы развития новых медиатехнологий в Российской Федерации обсудят участники медиафорума «Енисей.", "en": "Prospects for the development of new media technologies in the Russian Federation will be discussed at the Yenisey Media Forum." } ``` Where `original` is the original sentence, and `ru` is its machine-generated paraphrase. ### Data Fields - `idx`: id of the instance in the original corpus - `original`: the original sentence - `en`: automatic translation of `original` to English - `ru`: automatic translation of `en` back to Russian, i.e. 
a paraphrase of `original` - `chrf_sim`: [ChrF++](https://huggingface.co/metrics/chrf) similarity of `original` and `ru` - `labse_sim`: cosine similarity of [LaBSE](https://huggingface.co/cointegrated/LaBSE-en-ru) embeddings of `original` and `ru` - `forward_entailment`: predicted probability that `original` entails `ru` - `backward_entailment`: predicted probability that `ru` entails `original` - `p_good`: predicted probability that `ru` and `original` have equivalent meaning ### Data Splits Train – 980K, validation – 10K, test – 10K. The splits were generated randomly. ## Dataset Creation ### Curation Rationale There are other Russian paraphrase corpora, but they have major drawbacks: - The best known [corpus from the paraphraser.ru 2016 contest](http://paraphraser.ru/download/) is rather small and covers only the news domain. - [Opusparcus](https://huggingface.co/datasets/GEM/opusparcus), [ParaPhraserPlus](http://paraphraser.ru/download/), and [corpora of Tamara Zhordanija](https://github.com/tamriq/paraphrase) are noisy, i.e. a large proportion of sentence pairs in them have substantial differences in meaning. - The Russian part of [TaPaCo](https://huggingface.co/datasets/tapaco) has very high lexical overlap in the sentence pairs; in other words, its paraphrases are not diverse enough. The current corpus is generated with a dual objective: the paraphrases should be semantically as close as possible to the original sentences, while being lexically different from them. Back-translation with restricted vocabulary seems to achieve this goal often enough. ### Source Data #### Initial Data Collection and Normalization The `rus-ru_web-public_2019_1M` corpus from [the Leipzig collection](https://wortschatz.uni-leipzig.de/en/download) was used as is. The process of its creation is described [in this paper](http://www.lrec-conf.org/proceedings/lrec2012/pdf/327_Paper.pdf): D. Goldhahn, T. Eckart & U. Quasthoff: Building Large Monolingual Dictionaries at the Leipzig Corpora Collection: From 100 to 200 Languages. In: *Proceedings of the 8th International Language Resources and Evaluation (LREC'12), 2012*. #### Automatic paraphrasing The paraphrasing was carried out by translating the original sentence to English and then back to Russian. The models [facebook/wmt19-ru-en](https://huggingface.co/facebook/wmt19-ru-en) and [facebook/wmt19-en-ru](https://huggingface.co/facebook/wmt19-en-ru) were used for translation. To ensure that the back-translated texts are not identical to the original texts, the final decoder was prohibited from using token n-grams from the original texts. The code below implements the paraphrasing function. ```python import torch from transformers import FSMTModel, FSMTTokenizer, FSMTForConditionalGeneration tokenizer = FSMTTokenizer.from_pretrained("facebook/wmt19-en-ru") model = FSMTForConditionalGeneration.from_pretrained("facebook/wmt19-en-ru") inverse_tokenizer = FSMTTokenizer.from_pretrained("facebook/wmt19-ru-en") inverse_model = FSMTForConditionalGeneration.from_pretrained("facebook/wmt19-ru-en") model.cuda(); inverse_model.cuda(); def paraphrase(text, gram=4, num_beams=5, **kwargs): """ Generate a paraphrase using back translation. Parameter `gram` denotes size of token n-grams of the original sentence that cannot appear in the paraphrase. 
""" input_ids = inverse_tokenizer.encode(text, return_tensors="pt") with torch.no_grad(): outputs = inverse_model.generate(input_ids.to(inverse_model.device), num_beams=num_beams, **kwargs) other_lang = inverse_tokenizer.decode(outputs[0], skip_special_tokens=True) # print(other_lang) input_ids = input_ids[0, :-1].tolist() bad_word_ids = [input_ids[i:(i+gram)] for i in range(len(input_ids)-gram)] input_ids = tokenizer.encode(other_lang, return_tensors="pt") with torch.no_grad(): outputs = model.generate(input_ids.to(model.device), num_beams=num_beams, bad_words_ids=bad_word_ids, **kwargs) decoded = tokenizer.decode(outputs[0], skip_special_tokens=True) return decoded ``` The corpus was created by running the above `paraphrase` function on the original sentences with parameters `gram=3, num_beams=5, repetition_penalty=3.14, no_repeat_ngram_size=6`. ### Annotations #### Annotation process The dataset was annotated with several automatic metrics: - [ChrF++](https://huggingface.co/metrics/chrf) between `original` and `ru` sentences; - cosine similarity between [LaBSE](https://huggingface.co/cointegrated/LaBSE-en-ru) embeddings of these sentences; - forward and backward entailment probabilities predicted by the [rubert-base-cased-nli-twoway](https://huggingface.co/cointegrated/rubert-base-cased-nli-twoway) model; - `p_good`, a metric aggregating the four metrics above into a single number. It is obtained with a logistic regression trained on 100 sentence pairs randomly chosen from the train set and manually labelled. #### Who are the annotators? Human annotation was involved only for a small subset used to train the model for `p_good`. It was conducted by the dataset author, @cointegrated. ### Personal and Sensitive Information The dataset is not known to contain any personal or sensitive information. The sources and processes of original data collection are described at https://wortschatz.uni-leipzig.de/en/download. ## Considerations for Using the Data ### Social Impact of Dataset The dataset may enable the creation of paraphrasing systems that can be used both for "good" purposes (such as assisting writers or augmenting text datasets), and for "bad" purposes (such as disguising plagiarism). The authors are not responsible for any uses of the dataset. ### Discussion of Biases The dataset may inherit some of the biases of [the underlying Leipzig web corpus](https://wortschatz.uni-leipzig.de/en/download) or the neural machine translation models ([1](https://huggingface.co/facebook/wmt19-ru-en), [2](https://huggingface.co/facebook/wmt19-en-ru)) with which it was generated. ### Other Known Limitations Most of the paraphrases in the dataset are valid (by a rough estimate, at least 80%). However, in some sentence pairs there are faults: - Named entities are often spelled in different ways (e.g. `"Джейкоб" -> "Яков"`) or even replaced with other entities (e.g. `"Оймякон" -> "Оймянск"` or `"Верхоянск" -> "Тольятти"`). - Sometimes the meaning of words or phrases changes significantly, e.g. `"полустанок" -> "полумашина"`, or `"были по колено в грязи" -> "лежали на коленях в иле"`. - Sometimes the syntax is changed in a meaning-altering way, e.g. `"Интеллектуальное преимущество Вавилова и его соратников над демагогами из рядов сторонников новой агробиологии разительно очевидно." -> "Интеллектуал Вавилов и его приспешники в новой аграрной биологии явно превзошли демогогов."`. - Grammatical properties that are present in Russian morphology but absent in English, such as gender, are often lost, e.g. 
`"Я не хотела тебя пугать" -> "Я не хотел пугать вас"`. The field `labse_sim` reflects semantic similarity between the sentences, and it can be used to filter out at least some poor paraphrases. ## Additional Information ### Dataset Curators The dataset was created by [David Dale](https://daviddale.ru/en), a.k.a. [@cointegrated](https://huggingface.co/cointegrated). ### Licensing Information This corpus, as well as the original Leipzig corpora, are licensed under [CC BY](http://creativecommons.org/licenses/by/4.0/). ### Citation Information [This blog post](https://habr.com/ru/post/564916/) can be cited: ``` @misc{dale_paraphrasing_2021, author = "Dale, David", title = "Перефразирование русских текстов: корпуса, модели, метрики", editor = "habr.com", url = "https://habr.com/ru/post/564916/", month = {June}, year = {2021}, note = {[Online; posted 28-June-2021]}, } ``` ### Contributions Thanks to [@avidale](https://github.com/avidale) for adding this dataset.
cointegrated/ru-paraphrase-NMT-Leipzig
[ "task_categories:text-generation", "annotations_creators:no-annotation", "language_creators:machine-generated", "multilinguality:translation", "size_categories:100K<n<1M", "source_datasets:extended|other", "language:ru", "license:cc-by-4.0", "conditional-text-generation", "paraphrase-generation", "paraphrase", "region:us" ]
2022-03-02T23:29:22+00:00
{"annotations_creators": ["no-annotation"], "language_creators": ["machine-generated"], "language": ["ru"], "license": ["cc-by-4.0"], "multilinguality": ["translation"], "size_categories": ["100K<n<1M"], "source_datasets": ["extended|other"], "task_categories": ["text-generation"], "pretty_name": "ru-paraphrase-NMT-Leipzig", "tags": ["conditional-text-generation", "paraphrase-generation", "paraphrase"]}
2022-10-23T11:23:15+00:00
f646cd6d101c64b6226b3a299aed424f19181672
# Dataset Card for TV3Parla ## Table of Contents - [Table of Contents](#table-of-contents) - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** https://collectivat.cat/asr#tv3parla - **Repository:** - **Paper:** [Building an Open Source Automatic Speech Recognition System for Catalan](https://www.isca-speech.org/archive/iberspeech_2018/kulebi18_iberspeech.html) - **Point of Contact:** [Col·lectivaT](mailto:[email protected]) ### Dataset Summary This corpus includes 240 hours of Catalan speech from broadcast material. The details of segmentation, data processing and model training are explained in Külebi and Öktem (2018). The content is owned by Corporació Catalana de Mitjans Audiovisuals, SA (CCMA); we processed their material and hereby make it available under their terms of use. This project was supported by the Softcatalà Association. ### Supported Tasks and Leaderboards The dataset can be used for: - Language Modeling. - Automatic Speech Recognition (ASR), which transcribes utterances into words. ### Languages The dataset is in Catalan (`ca`). ## Dataset Structure ### Data Instances ``` { 'path': 'tv3_0.3/wav/train/5662515_1492531876710/5662515_1492531876710_120.180_139.020.wav', 'audio': {'path': 'tv3_0.3/wav/train/5662515_1492531876710/5662515_1492531876710_120.180_139.020.wav', 'array': array([-0.01168823, 0.01229858, 0.02819824, ..., 0.015625 , 0.01525879, 0.0145874 ]), 'sampling_rate': 16000}, 'text': 'algunes montoneres que que et feien anar ben col·locat i el vent també hi jugava una mica de paper bufava vent de cantó alguns cops o de cul i el pelotón el vent el porta molt malament hi havia molts nervis' } ``` ### Data Fields - `path` (str): Path to the audio file. - `audio` (dict): A dictionary containing the path to the downloaded audio file, the decoded audio array, and the sampling rate. Note that when accessing the audio column: `dataset[0]["audio"]` the audio file is automatically decoded and resampled to `dataset.features["audio"].sampling_rate`. Decoding and resampling of a large number of audio files might take a significant amount of time. Thus, it is important to first query the sample index before the `"audio"` column, *i.e.* `dataset[0]["audio"]` should **always** be preferred over `dataset["audio"][0]`. - `text` (str): Transcription of the audio file. ### Data Splits The dataset is split into "train" and "test". 
| | train | test | |:-------------------|-------:|-----:| | Number of examples | 159242 | 2220 | ## Dataset Creation ### Curation Rationale [More Information Needed] ### Source Data #### Initial Data Collection and Normalization [More Information Needed] #### Who are the source language producers? [More Information Needed] ### Annotations #### Annotation process [More Information Needed] #### Who are the annotators? [More Information Needed] ### Personal and Sensitive Information [More Information Needed] ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed] ### Discussion of Biases [More Information Needed] ### Other Known Limitations [More Information Needed] ## Additional Information ### Dataset Curators [More Information Needed] ### Licensing Information [Creative Commons Attribution-NonCommercial 4.0 International](https://creativecommons.org/licenses/by-nc/4.0/). ### Citation Information ``` @inproceedings{kulebi18_iberspeech, author={Baybars Külebi and Alp Öktem}, title={{Building an Open Source Automatic Speech Recognition System for Catalan}}, year=2018, booktitle={Proc. IberSPEECH 2018}, pages={25--29}, doi={10.21437/IberSPEECH.2018-6} } ``` ### Contributions Thanks to [@albertvillanova](https://github.com/albertvillanova) for adding this dataset.
collectivat/tv3_parla
[ "task_categories:automatic-speech-recognition", "task_categories:text-generation", "task_ids:language-modeling", "annotations_creators:found", "language_creators:found", "multilinguality:monolingual", "size_categories:100K<n<1M", "source_datasets:original", "language:ca", "license:cc-by-nc-4.0", "region:us" ]
2022-03-02T23:29:22+00:00
{"annotations_creators": ["found"], "language_creators": ["found"], "language": ["ca"], "license": ["cc-by-nc-4.0"], "multilinguality": ["monolingual"], "size_categories": ["100K<n<1M"], "source_datasets": ["original"], "task_categories": ["automatic-speech-recognition", "text-generation"], "task_ids": ["language-modeling"], "pretty_name": "TV3Parla"}
2022-12-12T09:01:48+00:00
853146eccb23be28175456f81456e82cba2f83f1
comodoro/pscr
[ "license:cc-by-nc-3.0", "region:us" ]
2022-03-02T23:29:22+00:00
{"license": "cc-by-nc-3.0"}
2022-02-08T07:07:49+00:00
219094aed954b897758697a8921a854f5e199b70
comodoro/vystadial2016_asr
[ "license:cc-by-nc-3.0", "region:us" ]
2022-03-02T23:29:22+00:00
{"license": "cc-by-nc-3.0"}
2022-09-02T07:41:16+00:00
9f47e7ea19a1f969027a138c92e4e3a71b5537d3
# Dataset Card for CoDa ## Table of Contents - [Table of Contents](#table-of-contents) - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Repository:** [nala-cub/coda](https://github.com/nala-cub/coda) - **Paper:** [The World of an Octopus: How Reporting Bias Influences a Language Model's Perception of Color](https://arxiv.org/abs/2110.08182) - **Point of Contact:** [Cory Paik](mailto:[email protected]) ### Dataset Summary *The Color Dataset* (CoDa) is a probing dataset to evaluate the representation of visual properties in language models. CoDa consists of color distributions for 521 common objects, which are split into 3 groups. We denote these groups as Single, Multi, and Any, which describe the typical color distribution of the objects in each group. The default configuration of CoDa uses 10 CLIP-style templates (e.g. "A photo of a [object]") and 10 cloze-style templates (e.g. "Everyone knows most [object] are [color]."). ### Supported Tasks and Leaderboards This version of the dataset consists of the filtered and templated examples as cloze-style questions. See the [GitHub](https://github.com/nala-cub/coda) repo for the raw data (e.g. unfiltered annotations) as well as example usage with GPT-2, RoBERTa, ALBERT, and CLIP. ### Languages The text in the dataset is in English. The associated BCP-47 code is `en-US`. ## Dataset Structure ### Data Instances An example looks like this: ```json { "text": "All rulers are [MASK].", "label": [ 0.0181818176, 0.0363636352, 0.3077272773, 0.0181818176, 0.0363636352, 0.086363636, 0.0363636352, 0.0363636352, 0.0363636352, 0.086363636, 0.301363647 ], "template_group": 1, "template_idx": 0, "class_id": "/m/0hdln", "display_name": "Ruler", "object_group": 2, "ngram": "ruler" } ``` ### Data Fields - `text`: The templated example. What this is depends on the value of `template_group`. - `template_group=0`: A CLIP-style example. There are no `[MASK]` tokens in these examples. - `template_group=1`: A cloze-style example. Note that all templates have `[MASK]` as the last word, but in most cases, the period should be included. - `label`: A list of probability values for the 11 colors. Note that these are sorted by the alphabetic order of the 11 colors (black, blue, brown, gray, green, orange, pink, purple, red, white, yellow). - `template_group`: Type of template, `0` corresponds to a CLIP-style template (`clip-imagenet`), and `1` corresponds to a cloze-style template (`text-masked`). 
- `template_idx`: The index of the template out of all templates - `class_id`: The Corresponding [OpenImages v6](https://storage.googleapis.com/openimages/web/index.html) `ClassID`. - `display_name`: The Corresponding [OpenImages v6](https://storage.googleapis.com/openimages/web/index.html) `DisplayName`. - `object_group`: Object Group, values correspond to `Single`, `Multi`, and `Any`. - `ngram`: Corresponding n-gram used for lookups. ### Data Splits Object Splits: | Group | All | Train | Valid | Test | | ------ | --- | ----- | ----- | ---- | | Single | 198 | 118 | 39 | 41 | | Multi | 208 | 124 | 41 | 43 | | Any | 115 | 69 | 23 | 23 | | Total | 521 | 311 | 103 | 107 | Example Splits: | Group | All | Train | Valid | Test | | ------ | ----- | ----- | ----- | ---- | | Single | 3946 | 2346 | 780 | 820 | | Multi | 4146 | 2466 | 820 | 860 | | Any | 2265 | 1352 | 460 | 453 | | Total | 10357 | 6164 | 2060 | 2133 | ## Dataset Creation ### Curation Rationale [More Information Needed] ### Source Data #### Initial Data Collection and Normalization [More Information Needed] #### Who are the source language producers? [More Information Needed] ### Annotations #### Annotation process [More Information Needed] #### Who are the annotators? [More Information Needed] ### Personal and Sensitive Information [More Information Needed] ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed] ### Discussion of Biases [More Information Needed] ### Other Known Limitations [More Information Needed] ## Additional Information ### Dataset Curators [More Information Needed] ### Licensing Information CoDa is licensed under the Apache 2.0 license. ### Citation Information ``` @misc{paik2021world, title={The World of an Octopus: How Reporting Bias Influences a Language Model's Perception of Color}, author={Cory Paik and Stéphane Aroca-Ouellette and Alessandro Roncone and Katharina Kann}, year={2021}, eprint={2110.08182}, archivePrefix={arXiv}, primaryClass={cs.CL} } ``` ### Contributions Thanks to [@github-username](https://github.com/<github-username>) for adding this dataset.
corypaik/coda
[ "annotations_creators:crowdsourced", "language_creators:expert-generated", "multilinguality:monolingual", "size_categories:10K<n<100K", "source_datasets:original", "language:en", "license:apache-2.0", "arxiv:2110.08182", "region:us" ]
2022-03-02T23:29:22+00:00
{"annotations_creators": ["crowdsourced"], "language_creators": ["expert-generated"], "language": ["en"], "license": ["apache-2.0"], "multilinguality": ["monolingual"], "size_categories": ["10K<n<100K"], "source_datasets": ["original"], "task_categories": ["text-scoring"], "task_ids": ["text-scoring-other-distribution-prediction"], "paperswithcode_id": "coda", "pretty_name": "CoDa", "language_bcp47": ["en-US"]}
2022-10-20T15:57:23+00:00
b3efebf08969fc19335ba894353316878b6fa493
# PROST: Physical Reasoning about Objects Through Space and Time ## Table of Contents - [Table of Contents](#table-of-contents) - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** - **Repository:** https://github.com/nala-cub/prost - **Paper:** https://arxiv.org/abs/2106.03634 - **Leaderboard:** - - **Point of Contact:** [Stéphane Aroca-Ouellette](mailto:[email protected]) ### Dataset Summary *Physical Reasoning about Objects Through Space and Time* (PROST) is a probing dataset to evaluate the ability of pretrained LMs to understand and reason about the physical world. PROST consists of 18,736 cloze-style multiple choice questions from 14 manually curated templates, covering 10 physical reasoning concepts: direction, mass, height, circumference, stackable, rollable, graspable, breakable, slideable, and bounceable. ### Supported Tasks and Leaderboards The task is multiple choice question answering, but you can formulate it in multiple ways. You can use `context` and `question` to form cloze-style questions, or `context` and `ex_question` for multiple-choice question answering. See the [GitHub](https://github.com/nala-cub/prost) repo for examples using GPT-1, GPT-2, BERT, RoBERTa, ALBERT, T5, and UnifiedQA. ### Languages The text in the dataset is in English. The associated BCP-47 code is `en-US`. ## Dataset Structure ### Data Instances An example looks like this: ```json { "A": "glass", "B": "pillow", "C": "coin", "D": "ball", "context": "A person drops a glass, a pillow, a coin, and a ball from a balcony.", "ex_question": "Which object is the most likely to break?", "group": "breaking", "label": 0, "name": "breaking_1", "question": "The [MASK] is the most likely to break." } ``` ### Data Fields - `A`: Option A (0) - `B`: Option B (1) - `C`: Option C (2) - `D`: Option D (3) - `context`: Context for the question - `question`: A cloze-style continuation of the context. - `ex_question`: A multiple-choice style question. - `group`: The question group, e.g. *bouncing* - `label`: A ClassLabel indicating the correct option - `name`: The template identifier. ### Data Splits The dataset contains 18,736 examples for testing. ## Dataset Creation ### Curation Rationale PROST is designed to avoid models succeeding in unintended ways. First, PROST provides no training data, so as to probe models in a zero-shot fashion. This prevents models from succeeding through spurious correlations between testing and training, and encourages success through a true understanding of and reasoning about the concepts at hand. 
Second, we manually write templates for all questions in an effort to prevent models from having seen the exact same sentences in their training data. Finally, PROST focuses on a small set of well-defined, objective concepts that only require a small vocabulary. This allows researchers to focus more on the quality of training data rather than on its size. ### Source Data #### Initial Data Collection and Normalization [More Information Needed] #### Who are the source language producers? [More Information Needed] ### Annotations #### Annotation process [More Information Needed] #### Who are the annotators? [More Information Needed] ### Personal and Sensitive Information [More Information Needed] ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed] ### Discussion of Biases [More Information Needed] ### Other Known Limitations [More Information Needed] ## Additional Information ### Dataset Curators [More Information Needed] ### Licensing Information PROST is licensed under the Apache 2.0 license. ### Citation Information ``` @inproceedings{aroca-ouellette-etal-2021-prost, title = "{PROST}: {P}hysical Reasoning about Objects through Space and Time", author = "Aroca-Ouellette, St{\'e}phane and Paik, Cory and Roncone, Alessandro and Kann, Katharina", booktitle = "Findings of the Association for Computational Linguistics: ACL-IJCNLP 2021", month = aug, year = "2021", address = "Online", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2021.findings-acl.404", pages = "4597--4608", } ``` ### Contributions Thanks to [@corypaik](https://github.com/corypaik) for adding this dataset.
corypaik/prost
[ "task_categories:question-answering", "task_ids:multiple-choice-qa", "task_ids:open-domain-qa", "annotations_creators:expert-generated", "language_creators:expert-generated", "multilinguality:monolingual", "size_categories:10K<n<100K", "source_datasets:original", "license:apache-2.0", "arxiv:2106.03634", "region:us" ]
2022-03-02T23:29:22+00:00
{"annotations_creators": ["expert-generated"], "language_creators": ["expert-generated"], "language": ["en-US"], "license": ["apache-2.0"], "multilinguality": ["monolingual"], "size_categories": ["10K<n<100K"], "source_datasets": ["original"], "task_categories": ["question-answering"], "task_ids": ["multiple-choice-qa", "open-domain-qa"], "paperswithcode_id": "prost", "extended": ["original"]}
2022-10-25T08:07:34+00:00
bfd4f4689c343cabfc936eb4c12f026df15cf977
see https://huggingface.co/datasets/csarron/4m-img-caps for example usage
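A minimal reading sketch (assuming this dataset uses the same Arrow IPC shard layout as the 4m version linked above, i.e. tables with `text` and `image` byte columns):

```python
import pyarrow as pa

# Hypothetical shard path; the column names follow the csarron/4m-img-caps example.
table = pa.ipc.open_file(pa.memory_map("path/to/shard.arrow", "rb")).read_all()
print(table.num_rows, table.schema)
caption = table["text"][0].as_py()       # caption string (or list of captions)
image_bytes = table["image"][0].as_py()  # raw image bytes
```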
csarron/25m-img-caps
[ "region:us" ]
2022-03-02T23:29:22+00:00
{}
2022-03-28T17:51:26+00:00
b27ebb236e94f8d090891e010f93832dccb034d3
see [read_pyarrow.py](https://gist.github.com/csarron/df712e53c9e0dcaad4eb6843e7a3d51c#file-read_pyarrow-py) for how to read one pyarrow file. Example PyTorch dataset: ```python import glob import random import pyarrow as pa from torch.utils.data import Dataset class ImageCaptionArrowDataset(Dataset): def __init__( self, dataset_file, tokenizer, ): data = [pa.ipc.open_file(pa.memory_map(f, "rb")).read_all() for f in glob.glob(dataset_file)] self.data = pa.concat_tables(data) self.tokenizer = tokenizer # do other initialization, like init image preprocessing fn, def __getitem__(self, index): # item_id = self.data["id"][index].as_py() text = self.data["text"][index].as_py() # get text if isinstance(text, list): text = random.choice(text) img_bytes = self.data["image"][index].as_py() # get image bytes # do some processing with image and text, return the features # img_feat = self.image_bytes_to_tensor(img_bytes) # inputs = self.tokenizer( # text, # padding="max_length", # max_length=self.max_text_len, # truncation=True, # return_token_type_ids=True, # return_attention_mask=True, # add_special_tokens=True, # return_tensors="pt", # ) # input_ids = inputs.input_ids.squeeze(0) # attention_mask = inputs.attention_mask.squeeze(0) # return { # # "item_ids": item_id, # "text_ids": input_ids, # "input_ids": input_ids, # "text_masks": attention_mask, # "pixel_values": img_feat, # } def __len__(self): return len(self.data) ```
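A hypothetical usage sketch for the class above (the tokenizer choice, shard pattern, and batch contents are placeholders; `__getitem__` must be filled in to return actual tensors before batching will work):

```python
from torch.utils.data import DataLoader
from transformers import AutoTokenizer

# Placeholder tokenizer and shard pattern; substitute whatever your setup uses.
tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
dataset = ImageCaptionArrowDataset("shards/*.arrow", tokenizer)
print(len(dataset))  # number of image-caption rows across all matched shards

# Once __getitem__ returns a dict of tensors, a standard DataLoader can batch it.
loader = DataLoader(dataset, batch_size=32, num_workers=4)
```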
csarron/4m-img-caps
[ "region:us" ]
2022-03-02T23:29:22+00:00
{}
2022-03-28T17:50:53+00:00
30fece425f9a3866e04321773ca7a80056d55ca6
# Dataset Card for "XL-Sum" ## Table of Contents - [Dataset Card Creation Guide](#dataset-card-creation-guide) - [Table of Contents](#table-of-contents) - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Initial Data Collection and Normalization](#initial-data-collection-and-normalization) - [Who are the source language producers?](#who-are-the-source-language-producers) - [Annotations](#annotations) - [Annotation process](#annotation-process) - [Who are the annotators?](#who-are-the-annotators) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Repository:** [https://github.com/csebuetnlp/xl-sum](https://github.com/csebuetnlp/xl-sum) - **Paper:** [XL-Sum: Large-Scale Multilingual Abstractive Summarization for 44 Languages](https://aclanthology.org/2021.findings-acl.413/) - **Point of Contact:** [Tahmid Hasan](mailto:[email protected]) ### Dataset Summary We present XLSum, a comprehensive and diverse dataset comprising 1.35 million professionally annotated article-summary pairs from BBC, extracted using a set of carefully designed heuristics. The dataset covers 45 languages ranging from low to high-resource, for many of which no public dataset is currently available. XL-Sum is highly abstractive, concise, and of high quality, as indicated by human and intrinsic evaluation. ### Supported Tasks and Leaderboards [More information needed](https://github.com/csebuetnlp/xl-sum) ### Languages - `amharic` - `arabic` - `azerbaijani` - `bengali` - `burmese` - `chinese_simplified` - `chinese_traditional` - `english` - `french` - `gujarati` - `hausa` - `hindi` - `igbo` - `indonesian` - `japanese` - `kirundi` - `korean` - `kyrgyz` - `marathi` - `nepali` - `oromo` - `pashto` - `persian` - `pidgin` - `portuguese` - `punjabi` - `russian` - `scottish_gaelic` - `serbian_cyrillic` - `serbian_latin` - `sinhala` - `somali` - `spanish` - `swahili` - `tamil` - `telugu` - `thai` - `tigrinya` - `turkish` - `ukrainian` - `urdu` - `uzbek` - `vietnamese` - `welsh` - `yoruba` ## Dataset Structure ### Data Instances One example from the `English` dataset is given below in JSON format. ``` { "id": "technology-17657859", "url": "https://www.bbc.com/news/technology-17657859", "title": "Yahoo files e-book advert system patent applications", "summary": "Yahoo has signalled it is investigating e-book adverts as a way to stimulate its earnings.", "text": "Yahoo's patents suggest users could weigh the type of ads against the sizes of discount before purchase. It says in two US patent applications that ads for digital book readers have been \"less than optimal\" to date. 
The filings suggest that users could be offered titles at a variety of prices depending on the ads' prominence They add that the products shown could be determined by the type of book being read, or even the contents of a specific chapter, phrase or word. The paperwork was published by the US Patent and Trademark Office late last week and relates to work carried out at the firm's headquarters in Sunnyvale, California. \"Greater levels of advertising, which may be more valuable to an advertiser and potentially more distracting to an e-book reader, may warrant higher discounts,\" it states. Free books It suggests users could be offered ads as hyperlinks based within the book's text, in-laid text or even \"dynamic content\" such as video. Another idea suggests boxes at the bottom of a page could trail later chapters or quotes saying \"brought to you by Company A\". It adds that the more willing the customer is to see the ads, the greater the potential discount. \"Higher frequencies... may even be great enough to allow the e-book to be obtained for free,\" it states. The authors write that the type of ad could influence the value of the discount, with \"lower class advertising... such as teeth whitener advertisements\" offering a cheaper price than \"high\" or \"middle class\" adverts, for things like pizza. The inventors also suggest that ads could be linked to the mood or emotional state the reader is in as a they progress through a title. For example, they say if characters fall in love or show affection during a chapter, then ads for flowers or entertainment could be triggered. The patents also suggest this could applied to children's books - giving the Tom Hanks animated film Polar Express as an example. It says a scene showing a waiter giving the protagonists hot drinks \"may be an excellent opportunity to show an advertisement for hot cocoa, or a branded chocolate bar\". Another example states: \"If the setting includes young characters, a Coke advertisement could be provided, inviting the reader to enjoy a glass of Coke with his book, and providing a graphic of a cool glass.\" It adds that such targeting could be further enhanced by taking account of previous titles the owner has bought. 'Advertising-free zone' At present, several Amazon and Kobo e-book readers offer full-screen adverts when the device is switched off and show smaller ads on their menu screens, but the main text of the titles remains free of marketing. Yahoo does not currently provide ads to these devices, and a move into the area could boost its shrinking revenues. However, Philip Jones, deputy editor of the Bookseller magazine, said that the internet firm might struggle to get some of its ideas adopted. \"This has been mooted before and was fairly well decried,\" he said. \"Perhaps in a limited context it could work if the merchandise was strongly related to the title and was kept away from the text. \"But readers - particularly parents - like the fact that reading is an advertising-free zone. Authors would also want something to say about ads interrupting their narrative flow.\"" } ``` ### Data Fields - 'id': A string representing the article ID. - 'url': A string representing the article URL. - 'title': A string containing the article title. - 'summary': A string containing the article summary. - 'text' : A string containing the article text. ### Data Splits We used a 80%-10%-10% split for all languages with a few exceptions. 
`English` was split 93%-3.5%-3.5% for the evaluation set size to resemble that of `CNN/DM` and `XSum`; `Scottish Gaelic`, `Kyrgyz` and `Sinhala` had relatively fewer samples, their evaluation sets were increased to 500 samples for more reliable evaluation. Same articles were used for evaluation in the two variants of Chinese and Serbian to prevent data leakage in multilingual training. Individual dataset download links with train-dev-test example counts are given below: Language | ISO 639-1 Code | BBC subdomain(s) | Train | Dev | Test | Total | --------------|----------------|------------------|-------|-----|------|-------| Amharic | am | https://www.bbc.com/amharic | 5761 | 719 | 719 | 7199 | Arabic | ar | https://www.bbc.com/arabic | 37519 | 4689 | 4689 | 46897 | Azerbaijani | az | https://www.bbc.com/azeri | 6478 | 809 | 809 | 8096 | Bengali | bn | https://www.bbc.com/bengali | 8102 | 1012 | 1012 | 10126 | Burmese | my | https://www.bbc.com/burmese | 4569 | 570 | 570 | 5709 | Chinese (Simplified) | zh-CN | https://www.bbc.com/ukchina/simp, https://www.bbc.com/zhongwen/simp | 37362 | 4670 | 4670 | 46702 | Chinese (Traditional) | zh-TW | https://www.bbc.com/ukchina/trad, https://www.bbc.com/zhongwen/trad | 37373 | 4670 | 4670 | 46713 | English | en | https://www.bbc.com/english, https://www.bbc.com/sinhala `*` | 306522 | 11535 | 11535 | 329592 | French | fr | https://www.bbc.com/afrique | 8697 | 1086 | 1086 | 10869 | Gujarati | gu | https://www.bbc.com/gujarati | 9119 | 1139 | 1139 | 11397 | Hausa | ha | https://www.bbc.com/hausa | 6418 | 802 | 802 | 8022 | Hindi | hi | https://www.bbc.com/hindi | 70778 | 8847 | 8847 | 88472 | Igbo | ig | https://www.bbc.com/igbo | 4183 | 522 | 522 | 5227 | Indonesian | id | https://www.bbc.com/indonesia | 38242 | 4780 | 4780 | 47802 | Japanese | ja | https://www.bbc.com/japanese | 7113 | 889 | 889 | 8891 | Kirundi | rn | https://www.bbc.com/gahuza | 5746 | 718 | 718 | 7182 | Korean | ko | https://www.bbc.com/korean | 4407 | 550 | 550 | 5507 | Kyrgyz | ky | https://www.bbc.com/kyrgyz | 2266 | 500 | 500 | 3266 | Marathi | mr | https://www.bbc.com/marathi | 10903 | 1362 | 1362 | 13627 | Nepali | np | https://www.bbc.com/nepali | 5808 | 725 | 725 | 7258 | Oromo | om | https://www.bbc.com/afaanoromoo | 6063 | 757 | 757 | 7577 | Pashto | ps | https://www.bbc.com/pashto | 14353 | 1794 | 1794 | 17941 | Persian | fa | https://www.bbc.com/persian | 47251 | 5906 | 5906 | 59063 | Pidgin`**` | n/a | https://www.bbc.com/pidgin | 9208 | 1151 | 1151 | 11510 | Portuguese | pt | https://www.bbc.com/portuguese | 57402 | 7175 | 7175 | 71752 | Punjabi | pa | https://www.bbc.com/punjabi | 8215 | 1026 | 1026 | 10267 | Russian | ru | https://www.bbc.com/russian, https://www.bbc.com/ukrainian `*` | 62243 | 7780 | 7780 | 77803 | Scottish Gaelic | gd | https://www.bbc.com/naidheachdan | 1313 | 500 | 500 | 2313 | Serbian (Cyrillic) | sr | https://www.bbc.com/serbian/cyr | 7275 | 909 | 909 | 9093 | Serbian (Latin) | sr | https://www.bbc.com/serbian/lat | 7276 | 909 | 909 | 9094 | Sinhala | si | https://www.bbc.com/sinhala | 3249 | 500 | 500 | 4249 | Somali | so | https://www.bbc.com/somali | 5962 | 745 | 745 | 7452 | Spanish | es | https://www.bbc.com/mundo | 38110 | 4763 | 4763 | 47636 | Swahili | sw | https://www.bbc.com/swahili | 7898 | 987 | 987 | 9872 | Tamil | ta | https://www.bbc.com/tamil | 16222 | 2027 | 2027 | 20276 | Telugu | te | https://www.bbc.com/telugu | 10421 | 1302 | 1302 | 13025 | Thai | th | https://www.bbc.com/thai | 6616 | 826 | 826 | 8268 | Tigrinya | ti | 
https://www.bbc.com/tigrinya | 5451 | 681 | 681 | 6813 | Turkish | tr | https://www.bbc.com/turkce | 27176 | 3397 | 3397 | 33970 | Ukrainian | uk | https://www.bbc.com/ukrainian | 43201 | 5399 | 5399 | 53999 | Urdu | ur | https://www.bbc.com/urdu | 67665 | 8458 | 8458 | 84581 | Uzbek | uz | https://www.bbc.com/uzbek | 4728 | 590 | 590 | 5908 | Vietnamese | vi | https://www.bbc.com/vietnamese | 32111 | 4013 | 4013 | 40137 | Welsh | cy | https://www.bbc.com/cymrufyw | 9732 | 1216 | 1216 | 12164 | Yoruba | yo | https://www.bbc.com/yoruba | 6350 | 793 | 793 | 7936 | `*` A lot of articles in BBC Sinhala and BBC Ukrainian were written in English and Russian respectively. They were identified using [Fasttext](https://arxiv.org/abs/1607.01759) and moved accordingly. `**` West African Pidgin English ## Dataset Creation ### Curation Rationale [More information needed](https://github.com/csebuetnlp/xl-sum) ### Source Data [BBC News](https://www.bbc.co.uk/ws/languages) #### Initial Data Collection and Normalization [Detailed in the paper](https://aclanthology.org/2021.findings-acl.413/) #### Who are the source language producers? [Detailed in the paper](https://aclanthology.org/2021.findings-acl.413/) ### Annotations [Detailed in the paper](https://aclanthology.org/2021.findings-acl.413/) #### Annotation process [Detailed in the paper](https://aclanthology.org/2021.findings-acl.413/) #### Who are the annotators? [Detailed in the paper](https://aclanthology.org/2021.findings-acl.413/) ### Personal and Sensitive Information [More information needed](https://github.com/csebuetnlp/xl-sum) ## Considerations for Using the Data ### Social Impact of Dataset [More information needed](https://github.com/csebuetnlp/xl-sum) ### Discussion of Biases [More information needed](https://github.com/csebuetnlp/xl-sum) ### Other Known Limitations [More information needed](https://github.com/csebuetnlp/xl-sum) ## Additional Information ### Dataset Curators [More information needed](https://github.com/csebuetnlp/xl-sum) ### Licensing Information Contents of this repository are restricted to only non-commercial research purposes under the [Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License (CC BY-NC-SA 4.0)](https://creativecommons.org/licenses/by-nc-sa/4.0/). Copyright of the dataset contents belongs to the original copyright holders. ### Citation Information If you use any of the datasets, models or code modules, please cite the following paper: ``` @inproceedings{hasan-etal-2021-xl, title = "{XL}-Sum: Large-Scale Multilingual Abstractive Summarization for 44 Languages", author = "Hasan, Tahmid and Bhattacharjee, Abhik and Islam, Md. Saiful and Mubasshir, Kazi and Li, Yuan-Fang and Kang, Yong-Bin and Rahman, M. Sohel and Shahriyar, Rifat", booktitle = "Findings of the Association for Computational Linguistics: ACL-IJCNLP 2021", month = aug, year = "2021", address = "Online", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2021.findings-acl.413", pages = "4693--4703", } ``` ### Contributions Thanks to [@abhik1505040](https://github.com/abhik1505040) and [@Tahmid](https://github.com/Tahmid04) for adding this dataset.
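As a quick-start complement to the data fields and splits documented above, here is a minimal loading sketch. It is hedged: it assumes the language configurations are exposed under lowercase language names (e.g. `bengali`), which should be checked against the Hub listing.

```python
from datasets import load_dataset

# Load a single language configuration of XL-Sum (config name assumed to be
# the lowercase language name; adjust if the Hub lists different names).
xlsum_bn = load_dataset("csebuetnlp/xlsum", "bengali")

# Each record exposes the fields documented above.
sample = xlsum_bn["train"][0]
print(sample["id"], sample["url"])
print(sample["title"])
print(sample["summary"])
print(sample["text"][:300])
```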
csebuetnlp/xlsum
[ "task_categories:summarization", "task_categories:text-generation", "annotations_creators:found", "language_creators:found", "multilinguality:multilingual", "size_categories:1M<n<10M", "source_datasets:original", "language:am", "language:ar", "language:az", "language:bn", "language:my", "language:zh", "language:en", "language:fr", "language:gu", "language:ha", "language:hi", "language:ig", "language:id", "language:ja", "language:rn", "language:ko", "language:ky", "language:mr", "language:ne", "language:om", "language:ps", "language:fa", "language:pcm", "language:pt", "language:pa", "language:ru", "language:gd", "language:sr", "language:si", "language:so", "language:es", "language:sw", "language:ta", "language:te", "language:th", "language:ti", "language:tr", "language:uk", "language:ur", "language:uz", "language:vi", "language:cy", "language:yo", "license:cc-by-nc-sa-4.0", "conditional-text-generation", "arxiv:1607.01759", "region:us" ]
2022-03-02T23:29:22+00:00
{"annotations_creators": ["found"], "language_creators": ["found"], "language": ["am", "ar", "az", "bn", "my", "zh", "en", "fr", "gu", "ha", "hi", "ig", "id", "ja", "rn", "ko", "ky", "mr", "ne", "om", "ps", "fa", "pcm", "pt", "pa", "ru", "gd", "sr", "si", "so", "es", "sw", "ta", "te", "th", "ti", "tr", "uk", "ur", "uz", "vi", "cy", "yo"], "license": ["cc-by-nc-sa-4.0"], "multilinguality": ["multilingual"], "size_categories": ["1M<n<10M"], "source_datasets": ["original"], "task_categories": ["summarization", "text-generation"], "task_ids": [], "paperswithcode_id": "xl-sum", "pretty_name": "XL-Sum", "tags": ["conditional-text-generation"]}
2023-04-18T00:46:20+00:00
a18ecb62d7ffd4a6bff5756afb6e799bbb91dd3e
# Dataset Card for `xnli_bn` ## Table of Contents - [Dataset Card for `xnli_bn`](#dataset-card-for-xnli_bn) - [Table of Contents](#table-of-contents) - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Usage](#usage) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Initial Data Collection and Normalization](#initial-data-collection-and-normalization) - [Who are the source language producers?](#who-are-the-source-language-producers) - [Annotations](#annotations) - [Annotation process](#annotation-process) - [Who are the annotators?](#who-are-the-annotators) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Repository:** [https://github.com/csebuetnlp/banglabert](https://github.com/csebuetnlp/banglabert) - **Paper:** [**"BanglaBERT: Combating Embedding Barrier in Multilingual Models for Low-Resource Language Understanding"**](https://arxiv.org/abs/2101.00204) - **Point of Contact:** [Tahmid Hasan](mailto:[email protected]) ### Dataset Summary This is a Natural Language Inference (NLI) dataset for Bengali, curated using the subset of MNLI data used in XNLI and state-of-the-art English to Bengali translation model introduced **[here](https://aclanthology.org/2020.emnlp-main.207/).** ### Supported Tasks and Leaderboards [More information needed](https://github.com/csebuetnlp/banglabert) ### Languages * `Bengali` ### Usage ```python from datasets import load_dataset dataset = load_dataset("csebuetnlp/xnli_bn") ``` ## Dataset Structure ### Data Instances One example from the dataset is given below in JSON format. ``` { "sentence1": "আসলে, আমি এমনকি এই বিষয়ে চিন্তাও করিনি, কিন্তু আমি এত হতাশ হয়ে পড়েছিলাম যে, শেষ পর্যন্ত আমি আবার তার সঙ্গে কথা বলতে শুরু করেছিলাম", "sentence2": "আমি তার সাথে আবার কথা বলিনি।", "label": "contradiction" } ``` ### Data Fields The data fields are as follows: - `sentence1`: a `string` feature indicating the premise. - `sentence2`: a `string` feature indicating the hypothesis. - `label`: a classification label, where possible values are `contradiction` (0), `entailment` (1), `neutral` (2) . ### Data Splits | split |count | |----------|--------| |`train`| 381449 | |`validation`| 2419 | |`test`| 4895 | ## Dataset Creation The dataset curation procedure was the same as the [XNLI](https://aclanthology.org/D18-1269/) dataset: we translated the [MultiNLI](https://aclanthology.org/N18-1101/) training data using the English to Bangla translation model introduced [here](https://aclanthology.org/2020.emnlp-main.207/). 
Due to the possibility of incursions of error during automatic translation, we used the [Language-Agnostic BERT Sentence Embeddings (LaBSE)](https://arxiv.org/abs/2007.01852) of the translations and original sentences to compute their similarity. All sentences below a similarity threshold of 0.70 were discarded. ### Curation Rationale [More information needed](https://github.com/csebuetnlp/banglabert) ### Source Data [XNLI](https://aclanthology.org/D18-1269/) #### Initial Data Collection and Normalization [More information needed](https://github.com/csebuetnlp/banglabert) #### Who are the source language producers? [More information needed](https://github.com/csebuetnlp/banglabert) ### Annotations [More information needed](https://github.com/csebuetnlp/banglabert) #### Annotation process [More information needed](https://github.com/csebuetnlp/banglabert) #### Who are the annotators? [More information needed](https://github.com/csebuetnlp/banglabert) ### Personal and Sensitive Information [More information needed](https://github.com/csebuetnlp/banglabert) ## Considerations for Using the Data ### Social Impact of Dataset [More information needed](https://github.com/csebuetnlp/banglabert) ### Discussion of Biases [More information needed](https://github.com/csebuetnlp/banglabert) ### Other Known Limitations [More information needed](https://github.com/csebuetnlp/banglabert) ## Additional Information ### Dataset Curators [More information needed](https://github.com/csebuetnlp/banglabert) ### Licensing Information Contents of this repository are restricted to only non-commercial research purposes under the [Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License (CC BY-NC-SA 4.0)](https://creativecommons.org/licenses/by-nc-sa/4.0/). Copyright of the dataset contents belongs to the original copyright holders. ### Citation Information If you use the dataset, please cite the following paper: ``` @misc{bhattacharjee2021banglabert, title={BanglaBERT: Combating Embedding Barrier in Multilingual Models for Low-Resource Language Understanding}, author={Abhik Bhattacharjee and Tahmid Hasan and Kazi Samin and Md Saiful Islam and M. Sohel Rahman and Anindya Iqbal and Rifat Shahriyar}, year={2021}, eprint={2101.00204}, archivePrefix={arXiv}, primaryClass={cs.CL} } ``` ### Contributions Thanks to [@abhik1505040](https://github.com/abhik1505040) and [@Tahmid](https://github.com/Tahmid04) for adding this dataset.
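To illustrate the LaBSE-based filtering step described in the Dataset Creation section above, here is a rough sketch of how such a similarity filter could look. This is an illustrative reconstruction rather than the curators' actual pipeline; the `sentence-transformers/LaBSE` model id and the `util.cos_sim` helper are assumptions about the `sentence-transformers` library.

```python
from sentence_transformers import SentenceTransformer, util

# LaBSE checkpoint as published on the Hugging Face Hub (assumed model id).
labse = SentenceTransformer("sentence-transformers/LaBSE")

def keep_pair(original: str, translation: str, threshold: float = 0.70) -> bool:
    # Embed both sentences in the shared LaBSE space and keep the pair only
    # if the cosine similarity clears the 0.70 threshold mentioned above.
    embeddings = labse.encode([original, translation], convert_to_tensor=True)
    similarity = util.cos_sim(embeddings[0], embeddings[1]).item()
    return similarity >= threshold
```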
csebuetnlp/xnli_bn
[ "task_categories:text-classification", "task_ids:natural-language-inference", "annotations_creators:machine-generated", "language_creators:found", "multilinguality:monolingual", "size_categories:100K<n<1M", "source_datasets:extended", "language:bn", "license:cc-by-nc-sa-4.0", "arxiv:2101.00204", "arxiv:2007.01852", "region:us" ]
2022-03-02T23:29:22+00:00
{"annotations_creators": ["machine-generated"], "language_creators": ["found"], "language": ["bn"], "license": ["cc-by-nc-sa-4.0"], "multilinguality": ["monolingual"], "size_categories": ["100K<n<1M"], "source_datasets": ["extended"], "task_categories": ["text-classification"], "task_ids": ["natural-language-inference"]}
2022-08-21T12:14:56+00:00
d810e76b4b49ceffb417666524b0daabd94c059c
# Dataset Card for Task2Dial ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-instances) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Acknowledgements] (#funding-information) ## Dataset Description - **Homepage:** [Needs More Information] - **Repository:** [Needs More Information] - **Paper:** https://aclanthology.org/2021.icnlsp-1.28/ - **Leaderboard:** [Needs More Information] - **Point of Contact:** [Needs More Information] ### Dataset Summary The Task2Dial dataset includes (1) a set of recipe documents with 353 individual dialogues; and (2) conversations between an IG and an IF, which are grounded in the associated recipe documents. Presents sample utterances from a dialogue along with the associated recipe. It demonstrates some important features of the dataset, such as mentioning entities not present in the recipe document; re-composition of the original text to focus on the important steps and the breakdown of the recipe into manageable and appropriate steps. Following recent efforts in the field to standardise NLG research, we have made the dataset freely available. ### Supported Tasks and Leaderboards We demonstrate the task of implementing the Task2Dial in a conversational agent called chefbot in the following git repo: https://github.com/carlstrath/ChefBot ### Languages English ### Data Fields Dataset.1: Task2Dial main, 353 cooking recipes modelled on real conversations between an IF and IG. Dataset. 2: A list of alternative ingredients for every swappable ingredient in the Task2Dial dataset. Dataset. 3. A list of objects and utensils with explanations, comparisons, handling and common storage location information. ## Dataset Creation The proposed task considers the recipe-following scenario with an information giver (IG) and an information follower (IF), where the IG has access to the recipe and gives instructions to the IF. The IG might choose to omit irrelevant information, simplify the content of a recipe or provide it as is. The IF will either follow the task or ask for further information. The IG might have to rely on information outside the given document (i.e. commonsense) to enhance understanding and success of the task. In addition, the IG decides on how to present the recipe steps, i.e. split them into sub- steps or merge them together, often diverging from the original number of recipe steps. The task is regarded as successful when the IG has successfully followed/understood the recipe. Hence, other dialogue-focused metrics, such as the number of turns, are not appropriate here. 
Formally, Task2Dial can be defined as follows: Given a recipe 𝑅𝑖 from 𝑅 =𝑅1, 𝑅2, 𝑅3,..., 𝑅𝑛, an ontology or ontologies 𝑂𝑖 =𝑂11,𝑂2,...,𝑂𝑛 of cooking-related concepts, a history of the conversation ℎ, predict the response 𝑟 of the IG. ### Curation Rationale Text selection was dependent on the quality of the information provided in the existing recipes. Too little information and the transcription and interpretation of the text became diffused with missing or incorrect knowledge. Conversely, providing too much information in the text resulted in a lack of creativity and commonsense reasoning by the data curators. Thus, the goal of the curation was to identify text that contained all the relevant information to complete the cooking task (tools, ingredients, weights, timings, servings) but not in such detail that it subtracted from the creativity, commonsense and imagination of the annotators. ### Source Data #### Initial Data Collection and Normalization Three open-source and creative commons licensed cookery websites6 were identified for data extraction, which permits any use or non- commercial use of data for research purposes. As content submission to the cooking websites was unrestricted, data appropriateness was ratified by the ratings and reviews given to each recipe by the public, highly rated recipes with a positive feedback were given preference over recipes with low scores and poor reviews [38]. From this, a list of 353 recipes was compiled and divided amongst the annotators for the data collection. As mentioned earlier, annotators were asked to take on the roles of both IF and IG, rather than a multi-turn WoZ approach, to allow flexibility in the utterances. This approach allowed the annotators additional time to formulate detailed and concise responses. #### Who are the source language producers? Undergraduate RAs were recruited through email. The participants were paid an hourly rate based on a university pay scale which is above the living wage and corresponds to the real living wage, following ethical guidelines for responsible innovation. The annotation team was composed of two males and one female data curators, under the age of 25 of mixed ethnicity’s with experience in AI and computing. This minimised the gender bias that is frequently observed in crowdsourcing platforms. #### Annotation process Each annotator was provided with a detailed list of instructions, an example dialogue and an IF/IG template (see Appendix A). The annotators were asked to read both the example dialogue and the original recipe to understand the text, context, composition, translation and annotation. The instructions included information handling and storage of data, text formatting, metadata and examples of high-quality and poor dialogues. An administrator was on hand throughout the data collection to support and guide the annotators. This approach reduced the number of low-quality dialogues associated with large crowdsourcing platforms that are often discarded post evaluation, as demonstrated in the data collection of the Doc2Dial dataset. #### Who are the annotators? Research assistants (RAs) from the School of Computing were employed on temporary contracts to construct and format the dataset. After an initial meeting to discuss the job role and determine suitability, the RAs were asked to complete a paid trial, this was evaluated and further advice was given on how to write dialogues and format the data to ensure high quality. 
After the successful completion of the trial, the RAs were permitted to continue with the remainder of the data collection. To ensure the high quality of the dataset, samples of the dialogues were often reviewed and further feedback was provided. ### Personal and Sensitive Information An ethics request was submitted for review by the board of ethics at our university. No personal or other data that may be used to identify an individual was collected in this study. ## Considerations for Using the Data The Task2Dial dataset is currently only for the cooking domain, but using the methodologies provided other tasks can be modelled for example, furniture assembly and maintenance tasks. ### Social Impact of Dataset Our proposed task aims to motivate research for modern dialogue systems that address the following challenges. Firstly, modern dialogue systems should be flexible and allow for "off-script" scenarios in order to emulate real-world phenomena, such as the ones present in human-human communication. This will require new ways of encoding user intents and new approaches to dialogue management in general. Secondly, as dialogue systems find different domain applications, the complexity of the dialogues might increase as well as the reliance on domain knowledge that can be encoded in structured or unstructured ways, such as documents, databases etc. Many applications, might require access to different domain knowledge sources in a course of a dialogue, and in such context, selection might prove beneficial in choosing "what to say". ### Discussion of Biases Prior to data collection, we performed three pilot studies. In the first, two participants assumed the roles of IG and IF respectively, where the IG had access to a recipe and provided recipe instructions to the IF (who did not have access to the recipe) over the phone, recording the session and then transcribing it. Next, we repeated the process with text-based dialogue through an online platform following a similar setup, however, the interaction was solely chat-based. The final study used self-dialogue, with one member of the team writing entire dialogues assuming both the IF and IG roles. We found that self-dialogue results were proximal to the results of two-person studies. However, time and cost were higher for producing two-person dialogues, with the additional time needed for transcribing and correction, thus, we opted to use self-dialogue. ## Additional Information Video: https://www.youtube.com/watch?v=zISkwn95RXs&ab_channel=ICNLSPConference ### Dataset Curators The recipes are composed by people of a different races / ethnicity, nationalities, socioeconomic status, abilities, age, gender and language with significant variation in pronunciations, structure, language and grammar. This provided the annotators with unique linguistic content for each recipe to interpret the data and configure the text into an IF/IG format. To help preserve sociolinguistic patterns in speech, the data curators retained the underlying language when para- phrasing, to intercede social and regional dialects with their own interpretation of the data to enhance the lexical richness. ### Licensing Information CC ### Citation Information https://aclanthology.org/2021.icnlsp-1.28/ ### Acknowledgements The research is supported under the EPSRC projects CiViL (EP/T014598/1) and NLG for low-resource domains (EP/T024917/1).
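To make the formal Task2Dial definition above concrete, here is a minimal, hypothetical sketch of how a single prediction instance could be represented in code. The field names and toy values are illustrative only and do not correspond to an official loader or schema.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class Task2DialInstance:
    recipe: str            # grounding recipe document R_i
    ontology: List[str]    # cooking-related concepts from O (ingredients, utensils, ...)
    history: List[str]     # dialogue history h of alternating IF/IG turns
    response: str          # IG response r to be predicted

example = Task2DialInstance(
    recipe="1. Preheat the oven to 180C. 2. Rub the butter into the flour...",
    ontology=["oven", "mixing bowl", "butter", "flour"],
    history=[
        "IF: I'm ready to start, what do I do first?",
        "IG: Start by preheating your oven to 180C.",
        "IF: Done. What's next?",
    ],
    response="IG: Now rub the butter into the flour until it looks like breadcrumbs.",
)
```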
cstrathe435/Task2Dial
[ "region:us" ]
2022-03-02T23:29:22+00:00
{}
2022-02-03T12:55:28+00:00
c9f2ce78fc92e19353b7f1cb3f4b68f15d32eb1c
# CsFEVER experimental Fact-Checking dataset

A Czech dataset for fact verification, localized from the data points of [FEVER](https://arxiv.org/abs/1803.05355) using the localization scheme described in the [CTKFacts: Czech Datasets for Fact Verification](https://arxiv.org/abs/2201.11115) paper, which is currently being revised for publication in the LREV journal.

The version you are looking at has been reformatted into *Claim*-*Evidence* string pairs for the specific task of NLI. A more general, Document-Retrieval-ready interpretation of our data points, which can be used for training and evaluating DR models over the June 2016 Wikipedia snapshot, can be found in the [data_dr]() folder in the JSON Lines format.

## Data Statement

### Curation Rationale

TODO
ctu-aic/csfever
[ "license:cc-by-sa-3.0", "arxiv:1803.05355", "arxiv:2201.11115", "region:us" ]
2022-03-02T23:29:22+00:00
{"license": "cc-by-sa-3.0"}
2022-11-01T05:56:15+00:00
69d0247380ab01c39f2920974a1736e92fe45783
ctu-aic/csfever_nli
[ "region:us" ]
2022-03-02T23:29:22+00:00
{}
2022-02-22T11:13:35+00:00
387ae4582c8054cb52ef57ef0941f19bd8012abf
# CTKFacts dataset for Natural Language Inference Czech Natural Language Inference dataset of ~3K *evidence*-*claim* pairs labelled with SUPPORTS, REFUTES or NOT ENOUGH INFO veracity labels. Extracted from a round of fact-checking experiments concluded and described within the CsFEVER and [CTKFacts: Czech Datasets for Fact Verification](https://arxiv.org/abs/2201.11115) paper currently being revised for publication in LREV journal. ## Document retrieval version Can be found at https://huggingface.co/datasets/ctu-aic/ctkfacts
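A minimal loading sketch is given below. It is hedged: the split and column layout are assumptions based on the description above and should be checked by inspecting the loaded object.

```python
from datasets import load_dataset

# Column and split names are assumptions, not verified against the repository;
# print the dataset object first to see what is actually available.
ctkfacts = load_dataset("ctu-aic/ctkfacts_nli")
print(ctkfacts)              # available splits and columns
print(ctkfacts["train"][0])  # one evidence-claim pair with its veracity label
```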
ctu-aic/ctkfacts_nli
[ "arxiv:2201.11115", "region:us" ]
2022-03-02T23:29:22+00:00
{}
2022-11-01T06:35:47+00:00
3768a20ee7e29288ea5feb4531fc5ab68ca8c2f2
# Dataset Card for GitHub Issues ## Dataset Description This dataset is created for the Hugging Face Datasets library course ### Dataset Summary GitHub Issues is a dataset consisting of GitHub issues and pull requests associated with the 🤗 Datasets [repository](https://github.com/huggingface/datasets). It is intended for educational purposes and can be used for semantic search or multilabel text classification. The contents of each GitHub issue are in English and concern the domain of datasets for NLP, computer vision, and beyond. ### Supported Tasks and Leaderboards For each of the tasks tagged for this dataset, give a brief description of the tag, metrics, and suggested models (with a link to their HuggingFace implementation if available). Give a similar description of tasks that were not covered by the structured tag set (repace the `task-category-tag` with an appropriate `other:other-task-name`). - `task-category-tag`: The dataset can be used to train a model for [TASK NAME], which consists in [TASK DESCRIPTION]. Success on this task is typically measured by achieving a *high/low* [metric name](https://huggingface.co/metrics/metric_name). The ([model name](https://huggingface.co/model_name) or [model class](https://huggingface.co/transformers/model_doc/model_class.html)) model currently achieves the following score. *[IF A LEADERBOARD IS AVAILABLE]:* This task has an active leaderboard which can be found at [leaderboard url]() and ranks models based on [metric name](https://huggingface.co/metrics/metric_name) while also reporting [other metric name](https://huggingface.co/metrics/other_metric_name). ### Languages Provide a brief overview of the languages represented in the dataset. Describe relevant details about specifics of the language such as whether it is social media text, African American English,... When relevant, please provide [BCP-47 codes](https://tools.ietf.org/html/bcp47), which consist of a [primary language subtag](https://tools.ietf.org/html/bcp47#section-2.2.1), with a [script subtag](https://tools.ietf.org/html/bcp47#section-2.2.3) and/or [region subtag](https://tools.ietf.org/html/bcp47#section-2.2.4) if available. ## Dataset Structure ### Data Instances Provide an JSON-formatted example and brief description of a typical instance in the dataset. If available, provide a link to further examples. ``` { 'example_field': ..., ... } ``` Provide any additional information that is not covered in the other sections about the data here. In particular describe any relationships between data points and if these relationships are made explicit. ### Data Fields List and describe the fields present in the dataset. Mention their data type, and whether they are used as input or output in any of the tasks the dataset currently supports. If the data has span indices, describe their attributes, such as whether they are at the character level or word level, whether they are contiguous or not, etc. If the datasets contains example IDs, state whether they have an inherent meaning, such as a mapping to other datasets or pointing to relationships between data points. - `example_field`: description of `example_field` Note that the descriptions can be initialized with the **Show Markdown Data Fields** output of the [tagging app](https://github.com/huggingface/datasets-tagging), you will then only need to refine the generated descriptions. ### Data Splits Describe and name the splits in the dataset if there are more than one. Describe any criteria for splitting the data, if used. 
If their are differences between the splits (e.g. if the training annotations are machine-generated and the dev and test ones are created by humans, or if different numbers of annotators contributed to each example), describe them here. Provide the sizes of each split. As appropriate, provide any descriptive statistics for the features, such as average length. For example: | | Tain | Valid | Test | | ----- | ------ | ----- | ---- | | Input Sentences | | | | | Average Sentence Length | | | | ## Dataset Creation ### Curation Rationale What need motivated the creation of this dataset? What are some of the reasons underlying the major choices involved in putting it together? ### Source Data This section describes the source data (e.g. news text and headlines, social media posts, translated sentences,...) #### Initial Data Collection and Normalization Describe the data collection process. Describe any criteria for data selection or filtering. List any key words or search terms used. If possible, include runtime information for the collection process. If data was collected from other pre-existing datasets, link to source here and to their [Hugging Face version](https://huggingface.co/datasets/dataset_name). If the data was modified or normalized after being collected (e.g. if the data is word-tokenized), describe the process and the tools used. #### Who are the source language producers? State whether the data was produced by humans or machine generated. Describe the people or systems who originally created the data. If available, include self-reported demographic or identity information for the source data creators, but avoid inferring this information. Instead state that this information is unknown. See [Larson 2017](https://www.aclweb.org/anthology/W17-1601.pdf) for using identity categories as a variables, particularly gender. Describe the conditions under which the data was created (for example, if the producers were crowdworkers, state what platform was used, or if the data was found, what website the data was found on). If compensation was provided, include that information here. Describe other people represented or mentioned in the data. Where possible, link to references for the information. ### Annotations If the dataset contains annotations which are not part of the initial data collection, describe them in the following paragraphs. #### Annotation process If applicable, describe the annotation process and any tools used, or state otherwise. Describe the amount of data annotated, if not all. Describe or reference annotation guidelines provided to the annotators. If available, provide interannotator statistics. Describe any annotation validation processes. #### Who are the annotators? If annotations were collected for the source data (such as class labels or syntactic parses), state whether the annotations were produced by humans or machine generated. Describe the people or systems who originally created the annotations and their selection criteria if applicable. If available, include self-reported demographic or identity information for the annotators, but avoid inferring this information. Instead state that this information is unknown. See [Larson 2017](https://www.aclweb.org/anthology/W17-1601.pdf) for using identity categories as a variables, particularly gender. Describe the conditions under which the data was annotated (for example, if the annotators were crowdworkers, state what platform was used, or if the data was found, what website the data was found on). 
If compensation was provided, include that information here. ### Personal and Sensitive Information State whether the dataset uses identity categories and, if so, how the information is used. Describe where this information comes from (i.e. self-reporting, collecting from profiles, inferring, etc.). See [Larson 2017](https://www.aclweb.org/anthology/W17-1601.pdf) for using identity categories as a variables, particularly gender. State whether the data is linked to individuals and whether those individuals can be identified in the dataset, either directly or indirectly (i.e., in combination with other data). State whether the dataset contains other data that might be considered sensitive (e.g., data that reveals racial or ethnic origins, sexual orientations, religious beliefs, political opinions or union memberships, or locations; financial or health data; biometric or genetic data; forms of government identification, such as social security numbers; criminal history). If efforts were made to anonymize the data, describe the anonymization process. ## Considerations for Using the Data ### Social Impact of Dataset Please discuss some of the ways you believe the use of this dataset will impact society. The statement should include both positive outlooks, such as outlining how technologies developed through its use may improve people's lives, and discuss the accompanying risks. These risks may range from making important decisions more opaque to people who are affected by the technology, to reinforcing existing harmful biases (whose specifics should be discussed in the next section), among other considerations. Also describe in this section if the proposed dataset contains a low-resource or under-represented language. If this is the case or if this task has any impact on underserved communities, please elaborate here. ### Discussion of Biases Provide descriptions of specific biases that are likely to be reflected in the data, and state whether any steps were taken to reduce their impact. For Wikipedia text, see for example [Dinan et al 2020 on biases in Wikipedia (esp. Table 1)](https://arxiv.org/abs/2005.00614), or [Blodgett et al 2020](https://www.aclweb.org/anthology/2020.acl-main.485/) for a more general discussion of the topic. If analyses have been run quantifying these biases, please add brief summaries and links to the studies here. ### Other Known Limitations If studies of the datasets have outlined other limitations of the dataset, such as annotation artifacts, please outline and cite them here. ## Additional Information ### Dataset Curators List the people involved in collecting the dataset and their affiliation(s). If funding information is known, include it here. ### Licensing Information Provide the license and link to the license webpage if available. ### Citation Information Provide the [BibTex](http://www.bibtex.org/)-formatted reference for the dataset. For example: ``` @article{article_id, author = {Author List}, title = {Dataset Paper Title}, journal = {Publication Venue}, year = {2525} } ``` If the dataset has a [DOI](https://www.doi.org/), please provide it here. ### Contributions [@cylee] added this dataset as part of the Hugging Face Dataset library tutorial (https://huggingface.co/course/chapter5/5?fw=tf).
cylee/github-issues
[ "arxiv:2005.00614", "region:us" ]
2022-03-02T23:29:22+00:00
{}
2021-12-19T19:12:55+00:00
986e65392adb1f3bdab07c25ed9a23cb83a0b354
# YFCC100M subset from OpenAI

Subset of [YFCC100M](https://arxiv.org/abs/1503.01817) used by OpenAI for [CLIP](https://github.com/openai/CLIP/blob/main/data/yfcc100m.md), filtered to contain only the images that we could retrieve.

| Split | train | validation |
| --- | --- | --- |
| Number of samples | 14,808,859 | 16,374 |
| Size | 1.9 TB | 2.1 GB |

Features:

* from the original dataset: `title`, `description`, `photoid`, `uid`, `unickname`, `datetaken`, `dateuploaded`, `capturedevice`, `usertags`, `machinetags`, `longitude`, `latitude`, `accuracy`, `pageurl`, `downloadurl`, `licensename`, `licenseurl`, `serverid`, `farmid`, `secret`, `secretoriginal`, `ext`, `marker`, `key`
* `img`: image content, can be loaded with `PIL.Image.open(io.BytesIO(item['img']))`
* `title_clean` and `description_clean`: derived from `title` and `description` using the `clean_text` function detailed below

```python
import re
import urllib.parse

def clean_text(text):
    # decode url
    text = urllib.parse.unquote_plus(text)
    # remove html tags
    text = re.sub('<[^<]+?>', '', text)
    # remove multiple spaces + "\r" + "\n" + "\t"
    text = " ".join(text.split())
    return text
```
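Putting the feature descriptions above together, a small hedged sketch for decoding one record follows; it assumes only that a record (`item`) follows the feature list above, regardless of how the files were materialised.

```python
import io
from PIL import Image

def decode_record(item):
    # `item` is a single record following the feature list above: `img` holds
    # the raw image bytes, `title_clean` the cleaned title.
    image = Image.open(io.BytesIO(item["img"]))
    return image, item["title_clean"]

# Usage, assuming `dataset` was materialised with your preferred loader:
# image, title = decode_record(dataset["train"][0])
# image.thumbnail((256, 256))
```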
dalle-mini/YFCC100M_OpenAI_subset
[ "arxiv:1503.01817", "region:us" ]
2022-03-02T23:29:22+00:00
{}
2021-08-26T16:56:01+00:00
a8e47c9a43d12564240e175708fe4e9424d275f0
# Dataset Description ## Dataset Summary This dataset was derived from the Los Alamos National Laboratory HIV sequence (LANL) database. It contains the most recent version (2016-Full-genome), composed of 1,609 high-quality full-length genomes. The genes within these sequences were processed using the GeneCutter tool and translated into corresponding amino acid sequences using the BioPython library Seq.translate function. Supported Tasks and Leaderboards: None Languages: English ## Dataset Structure ### Data Instances Each column represents the protein amino acid sequence of the HIV genome. The ID field indicates the Genbank reference ID for future cross-referencing. There are 1,609 full length HIV genomes. Data Fields: ID, gag, pol, env, nef, tat, rev, proteome Data Splits: None ## Dataset Creation Curation Rationale: This dataset was curated to train a model (HIV-BERT) designed to predict a variety of sequence-dependent features regarding HIV. Initial Data Collection and Normalization: Dataset was downloaded and curated on 12/21/2021. ## Considerations for Using the Data Social Impact of Dataset: This dataset can be used to study sequence-dependent features of HIV, a virus that has claimed the lives of many individuals globally in the last few decades. Discussion of Biases: This dataset was derived from the Los Alamos National Laboratory HIV sequence (LANL) database full genome database and contains a representative sample from each subtype and geographic region. ## Additional Information: - Dataset Curators: Will Dampier - Citation Information: TBA
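A minimal, hedged loading sketch for working with the per-gene columns described above; it assumes the repository loads directly with `datasets` and that the column names match the Data Fields list.

```python
from datasets import load_dataset

hiv_flt = load_dataset("damlab/HIV_FLT")
record = hiv_flt["train"][0]

print(record["ID"])                      # GenBank reference ID
print(len(record["env"]), "aa in Env")   # length of the Env amino acid sequence
print(record["proteome"][:60], "...")    # start of the translated proteome
```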
damlab/HIV_FLT
[ "region:us" ]
2022-03-02T23:29:22+00:00
{}
2022-02-08T20:58:56+00:00
f0bada3a186a6ab795d578088eaff9cae1ee7106
# Dataset Description

## Dataset Summary

This dataset was derived from the Stanford HIV Genotype-Phenotype database and contains 1,733 HIV protease sequences. Approximately half of the sequences are resistant to at least one antiretroviral therapeutic (ART).

Supported Tasks and Leaderboards: None

Languages: English

## Dataset Structure

### Data Instances

Each column represents the protein amino acid sequence of the HIV protease protein. The ID field indicates the Genbank reference ID for future cross-referencing. There are 1,733 total protease sequences.

Data Fields: ID, sequence, fold, FPV, IDV, NFV, SQV

Data Splits: None

## Dataset Creation

Curation Rationale: This dataset was curated to train a model (HIV-BERT-PI) designed to predict whether an HIV protease sequence would result in resistance to certain antiretroviral (ART) drugs.

Initial Data Collection and Normalization: Dataset was downloaded and curated on 12/21/2021.

## Considerations for Using the Data

Social Impact of Dataset: Due to the tendency of HIV to mutate, drug resistance is a common issue when attempting to treat those infected with HIV. Protease inhibitors are a class of drugs against which HIV is known to develop resistance via mutations. Thus, by providing a collection of protease sequences known to be resistant to one or more drugs, this dataset provides a significant collection of data that could be utilized to perform computational analysis of protease resistance mutations.

Discussion of Biases: Due to the sampling nature of this database, it is predominantly composed of subtype B sequences from North America and Europe with only minor contributions of subtypes C, A, and D. Currently, no effort has been made to balance performance across these classes. As such, one should consider refinement with additional sequences to perform well on non-B sequences.

## Additional Information:

- Dataset Curators: Will Dampier
- Citation Information: TBA
damlab/HIV_PI
[ "license:mit", "region:us" ]
2022-03-02T23:29:22+00:00
{"license": "mit"}
2022-03-09T19:48:01+00:00
7c81ad7c34d35f0ea4cabc28c24dc79c299dd6b3
# Dataset Description

## Dataset Summary

This dataset was derived from the Los Alamos National Laboratory HIV sequence (LANL) database. It contains 5,510 unique V3 sequences, each annotated with the body site it was associated with.

Supported Tasks and Leaderboards: None

Languages: English

## Dataset Structure

### Data Instances

Each column represents the protein amino acid sequence of the HIV V3 loop. The ID field indicates the Genbank reference ID for future cross-referencing. There are 2,935 total V3 sequences, with 91% being CCR5 tropic and 23% CXCR4 tropic.

Data Fields: ID, sequence, fold, periphery-tcell, periphery-monocyte, CNS, lung, breast-milk, gastric, male-genitals, female-genitals, umbilical-cord, organ

Data Splits: None

## Dataset Creation

Curation Rationale:

Initial Data Collection and Normalization: Dataset was downloaded and curated on 12/20/2021.

## Considerations for Using the Data

Social Impact of Dataset: This dataset can be used to study how HIV V3 loop sequences relate to HIV compartmentalization across body sites.

Discussion of Biases: Due to the sampling nature of this database, it is predominantly composed of subtype B sequences from North America and Europe with only minor contributions of subtypes C, A, and D. Currently, no effort has been made to balance performance across these classes. As such, one should consider refinement with additional sequences to perform well on non-B sequences. Additionally, this dataset is highly biased to peripheral T-cells.

## Additional Information:

- Dataset Curators: Will Dampier
- Citation Information: TBA
- Licensing Information: MIT
damlab/HIV_V3_bodysite
[ "region:us" ]
2022-03-02T23:29:22+00:00
{}
2022-02-08T21:12:25+00:00
e6aae6b448d287929238c39a8bb880ae93ab4211
# Dataset Description

## Dataset Summary

This dataset was derived from the Los Alamos National Laboratory HIV sequence (LANL) database. It contains 2,935 HIV V3 loop protein sequences, which can interact with either CCR5 receptors on T-cells or CXCR4 receptors on macrophages.

Supported Tasks and Leaderboards: None

Languages: English

## Dataset Structure

### Data Instances

Each column represents the protein amino acid sequence of the HIV V3 loop. The ID field indicates the Genbank reference ID for future cross-referencing. There are 2,935 total V3 sequences, with 91% being CCR5 tropic and 23% CXCR4 tropic.

Data Fields: ID, sequence, fold, CCR5, CXCR4

Data Splits: None

## Dataset Creation

Curation Rationale: This dataset was curated to train a model (HIV-BERT-V3) designed to predict whether an HIV V3 loop would be CCR5 or CXCR4 tropic.

Initial Data Collection and Normalization: Dataset was downloaded and curated on 12/20/2021.

## Considerations for Using the Data

Social Impact of Dataset: This dataset can be used to study the mechanism by which HIV V3 loops allow for entry into T-cells and macrophages.

Discussion of Biases: Due to the sampling nature of this database, it is predominantly composed of subtype B sequences from North America and Europe with only minor contributions of subtypes C, A, and D. Currently, no effort has been made to balance performance across these classes. As such, one should consider refinement with additional sequences to perform well on non-B sequences.

## Additional Information:

- Dataset Curators: Will Dampier
- Citation Information: TBA
damlab/HIV_V3_coreceptor
[ "region:us" ]
2022-03-02T23:29:22+00:00
{}
2022-02-08T21:09:21+00:00
68844f7ae036f6901f3b08526c45f6026ea26997
This dataset contains postings and comments from the following recurring threads on [Hacker News](http://news.ycombinator.com/) 1. Ask HN: Who is hiring? 2. Ask HN: Who wants to be hired? 3. Freelancer? Seeking freelancer? These post types are stored in datasets called `hiring`, `wants_to_be_hired` and `freelancer` respectively. Each type of posting has occurred on a regular basis for several years. You can identify when each comment/listing was added through the CommentTime field. The `ParentTitle` also indicates the date of the parent thread in text (e.g. `Ask HN: Who is hiring? (March 2021)`) This dataset is not programmatically reproducible from source because it was uploaded as an experiment with HF datasets. The raw data was created by querying the public table `bigquery-public-data.hacker_news.full` in Google BigQuery. Email addresses have been redacted from the dataset. If this dataset is interesting/useful, I (Dan Becker) will look into improving reproducibility and other general clean-up. This dataset may be useful for finding trends in tech and tech job listings.
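A hedged loading sketch follows; it assumes each posting type is exposed as its own configuration named as in the description above (`hiring`, `wants_to_be_hired`, `freelancer`), which may differ from how the data is actually organised on the Hub.

```python
from datasets import load_dataset

# One configuration per recurring thread type (names assumed from the card).
hiring = load_dataset("dansbecker/hackernews_hiring_posts", "hiring")

post = hiring["train"][0]
# `ParentTitle` names the monthly thread, e.g. "Ask HN: Who is hiring? (March 2021)",
# and `CommentTime` dates the individual listing.
print(post["ParentTitle"], "|", post["CommentTime"])
```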
dansbecker/hackernews_hiring_posts
[ "region:us" ]
2022-03-02T23:29:22+00:00
{}
2021-12-07T13:46:20+00:00
6c892e1bee3fc78527d31d4183c11f343c2fcb23
# Dataset Card for Heritage Made Digital Newspapers

## Table of Contents

- [Dataset Card for Heritage Made Digital Newspapers](#dataset-card-for-heritage-made-digital-newspapers)
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)

## Dataset Description

- **Homepage:**
- **Repository:** https://bl.iro.bl.uk/?locale=en
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**

### Dataset Summary

This dataset contains text extracted at the article level from historic digitised newspapers from the [Heritage Made Digital](https://bl.iro.bl.uk/collections/9a6a4cdd-2bfe-47bb-8c14-c0a5d100501f?locale=en) newspaper digitisation program at the [British Library](https://www.bl.uk/). The newspapers in the dataset were published between 1800 and 1896. This dataset contains ~2.5 billion tokens and 3,065,408 articles.

The dataset contains text generated from Optical Character Recognition software on digitised newspaper pages. This dataset includes the plain text from the OCR alongside some minimal metadata associated with the newspaper from which the text is derived and OCR confidence score information generated from the OCR software.

### Supported Tasks and Leaderboards

This dataset can be used for:

- historical research and digital humanities research
- training language models
- training historic language models

Whilst this dataset can be used for all of these tasks, it is important to understand that the dataset was not constructed in a representative way, so it contains biases in terms of the newspapers and articles that are included (more on this below).

### Languages

The text in this dataset is English, as recognised by the OCR software. The OCR software used is generic commercial OCR software that has not been trained on historic newspapers. There are therefore many errors in the text. Some of the OCR in this text will be of such poor quality that it is incomprehensible to a human reader.

## Dataset Structure

### Data Instances

Each row in the dataset is an article from a newspaper as recognised by an OLR (Optical Layout Recognition) step in the digitisation process.

### Data Splits

There is one split in this dataset, the training split.

## Dataset Creation

### Curation Rationale

This dataset consists of public-domain newspapers published in the UK during the 19th century. The majority of newspapers digitised in the UK are not freely available (even if they are out of copyright). The newspapers in this dataset were digitised specifically to be freely available but also to meet preservation goals for newspapers in poor condition. As a result, the newspapers chosen for digitisation are biased toward poor-quality physical newspapers. This may in turn result in worse OCR.

### Source Data

The source data for this dataset is the digitised newspapers from the [Heritage Made Digital](https://bl.iro.bl.uk/collections/9a6a4cdd-2bfe-47bb-8c14-c0a5d100501f?locale=en) newspaper digitisation program.
The newspapers in the dataset were published between 1800 and 1870. ### Dataset Curators The original digitisation was carried out by the British Library. The dataset was created by the British Library in partnership with Findmypast. This dataset was created by [@davanstrien](https://huggingface.co/davanstrien). ### Licensing Information The newspapers in this dataset are in the public domain. The dataset is licensed under a [Creative Commons Zero v1.0 Universal](https://creativecommons.org/publicdomain/zero/1.0/) license. ### Citation Information [More Information Needed] ### Contributions Thanks to [@github-username](https://github.com/<github-username>) for adding this dataset.
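Because every article carries an OCR confidence score, a common first step is to filter on it. A minimal sketch is shown below; it assumes the feature names listed in the dataset metadata (e.g. `ocr_quality_mean`, `title`, `date`), and the 0.90 cut-off is an arbitrary illustration rather than a recommendation.

```python
from datasets import load_dataset

ds = load_dataset("biglam/hmd_newspapers", split="train")

# Keep only articles whose mean OCR confidence is reasonably high.
high_quality = ds.filter(
    lambda x: x["ocr_quality_mean"] is not None and x["ocr_quality_mean"] > 0.90
)
print(len(ds), "->", len(high_quality), "articles after filtering")
print(high_quality[0]["title"], high_quality[0]["date"])
```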
biglam/hmd_newspapers
[ "task_categories:text-generation", "size_categories:1M<n<10M", "language:en", "license:cc0-1.0", "newspapers", "region:us" ]
2022-03-02T23:29:22+00:00
{"language": ["en"], "license": "cc0-1.0", "size_categories": ["1M<n<10M"], "task_categories": ["text-generation"], "pretty_name": "Heritage Made Digital Newspapers", "dataset_info": {"features": [{"name": "source", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "location", "dtype": "string"}, {"name": "date", "dtype": "timestamp[s]"}, {"name": "item_type", "dtype": "string"}, {"name": "word_count", "dtype": "int32"}, {"name": "ocr_quality_mean", "dtype": "float64"}, {"name": "ocr_quality_sd", "dtype": "float64"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 14304741164, "num_examples": 3065408}], "download_size": 9682476047, "dataset_size": 14304741164}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}], "tags": ["newspapers"]}
2024-01-30T12:06:17+00:00
01740f7cd9ffa5855819bd828d5dcb03578abf0e
# Reddit Randomness Dataset

A dataset I created because I was curious about how "random" r/random really is. This data was collected by sending `GET` requests to `https://www.reddit.com/r/random` for a few hours on September 19th, 2021. I scraped a bit of metadata about the subreddits as well. `randomness_12k_clean.csv` reports the random subreddits as they happened and `summary.csv` lists some metadata about each subreddit.

# The Data

## `randomness_12k_clean.csv`

This file serves as a record of the 12,055 successful results I got from r/random. Each row represents one result.

### Fields

* `subreddit`: The name of the subreddit that the scraper received from r/random (`string`)
* `response_code`: HTTP response code the scraper received when it sent a `GET` request to /r/random (`int`, always `302`)

## `summary.csv`

As the name suggests, this file summarizes `randomness_12k_clean.csv` into the information that I cared about when I analyzed this data. Each row represents one of the 3,679 unique subreddits and includes some stats about the subreddit as well as the number of times it appears in the results.

### Fields

* `subreddit`: The name of the subreddit (`string`, unique)
* `subscribers`: How many subscribers the subreddit had (`int`, max of `99_886`)
* `current_users`: How many users accessed the subreddit in the past 15 minutes (`int`, max of `999`)
* `creation_date`: Date that the subreddit was created (`YYYY-MM-DD` or `Error:PrivateSub` or `Error:Banned`)
* `date_accessed`: Date that I collected the values in `subscribers` and `current_users` (`YYYY-MM-DD`)
* `time_accessed_UTC`: Time that I collected the values in `subscribers` and `current_users`, reported in UTC+0 (`HH:MM:SS`)
* `appearances`: How many times the subreddit shows up in `randomness_12k_clean.csv` (`int`, max of `9`)

# Missing Values and Quirks

In the `summary.csv` file, there are three missing values. After I collected the number of subscribers and the number of current users, I went back about a week later to collect the creation date of each subreddit. In that week, three subreddits had been banned or taken private. I filled in the values with a descriptive string.

* SomethingWasWrong (`Error:PrivateSub`)
* HannahowoOnlyfans (`Error:Banned`)
* JanetGuzman (`Error:Banned`)

I think there are a few NSFW subreddits in the results, even though I only queried r/random and not r/randnsfw. As a simple example, searching the data for "nsfw" shows that I got the subreddit r/nsfwanimegifs twice.

# License

This dataset is made available under the Open Database License: http://opendatacommons.org/licenses/odbl/1.0/. Any rights in individual contents of the database are licensed under the Database Contents License: http://opendatacommons.org/licenses/dbcl/1.0/
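Since the card describes plain CSV files, a quick analysis sketch with pandas is given below; it assumes the files have been downloaded locally under the names given above and uses only the documented columns.

```python
import pandas as pd

summary = pd.read_csv("summary.csv")

# How many subreddits appeared once, twice, ... in the 12,055 results?
print(summary["appearances"].value_counts().sort_index())

# The most frequently returned subreddits with their subscriber counts.
top = summary.sort_values("appearances", ascending=False)
print(top[["subreddit", "subscribers", "appearances"]].head(10))
```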
davidwisdom/reddit-randomness
[ "region:us" ]
2022-03-02T23:29:22+00:00
{}
2021-11-06T23:56:43+00:00
c0b444a1e1fd9773a8ed19fdf9d1034f6b922ead
debajyotidatta/biosses
[ "license:gpl-3.0", "region:us" ]
2022-03-02T23:29:22+00:00
{"license": "gpl-3.0"}
2022-02-01T01:46:29+00:00
6e8e9947c03e380226bb9b3e2e1839d8bd2c05d2
# Dataset Card for Artificial Argument Analysis Corpus (AAAC) ## Table of Contents - [Table of Contents](#table-of-contents) - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Construction of the Synthetic Data](#construction-of-the-synthetic-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** https://debatelab.github.io/journal/deepa2.html - **Repository:** None - **Paper:** G. Betz, K. Richardson. *DeepA2: A Modular Framework for Deep Argument Analysis with Pretrained Neural Text2Text Language Models*. https://arxiv.org/abs/2110.01509 - **Leaderboard:** None ### Dataset Summary DeepA2 is a modular framework for deep argument analysis. DeepA2 datasets contain comprehensive logical reconstructions of informally presented arguments in short argumentative texts. This document describes two synthetic DeepA2 datasets for artificial argument analysis: AAAC01 and AAAC02. ```sh # clone git lfs clone https://huggingface.co/datasets/debatelab/aaac ``` ```python import pandas as pd from datasets import Dataset # loading train split as pandas df df = pd.read_json("aaac/aaac01_train.jsonl", lines=True, orient="records") # creating dataset from pandas df Dataset.from_pandas(df) ``` ### Supported Tasks and Leaderboards The multi-dimensional datasets can be used to define various text-2-text tasks (see also [Betz and Richardson 2021](https://arxiv.org/abs/2110.01509)), for example: * Premise extraction, * Conclusion extraction, * Logical formalization, * Logical reconstrcution. ### Languages English. ## Dataset Structure ### Data Instances The following histograms (number of dataset records with given property) describe and compare the two datasets AAAC01 (train split, N=16000) and AAAC02 (dev split, N=4000). 
|AAAC01 / train split|AAAC02 / dev split| |-|-| |![domains](https://huggingface.co/datasets/debatelab/aaac/resolve/main/img/st_domains_aaac01.png) |![domains](https://huggingface.co/datasets/debatelab/aaac/resolve/main/img/st_domains_aaac02.png) | |![schemes](https://huggingface.co/datasets/debatelab/aaac/resolve/main/img/st_schemes_aaac01.png) |![schemes](https://huggingface.co/datasets/debatelab/aaac/resolve/main/img/st_schemes_aaac02.png) | |![var](https://huggingface.co/datasets/debatelab/aaac/resolve/main/img/st_sch-vars_aaac01.png) |![domains](https://huggingface.co/datasets/debatelab/aaac/resolve/main/img/st_sch-vars_aaac02.png) | |![steps](https://huggingface.co/datasets/debatelab/aaac/resolve/main/img/st_steps_aaac01.png) |![steps](https://huggingface.co/datasets/debatelab/aaac/resolve/main/img/st_steps_aaac02.png) | |![prem](https://huggingface.co/datasets/debatelab/aaac/resolve/main/img/st_prem_aaac01.png) |![prem](https://huggingface.co/datasets/debatelab/aaac/resolve/main/img/st_prem_aaac02.png) | |![impl prem](https://huggingface.co/datasets/debatelab/aaac/resolve/main/img/st_impl-prem_aaac01.png) |![impl prem](https://huggingface.co/datasets/debatelab/aaac/resolve/main/img/st_impl-prem_aaac02.png) | |![impl fc](https://huggingface.co/datasets/debatelab/aaac/resolve/main/img/st_impl-fc_aaac01.png) |![impl fc](https://huggingface.co/datasets/debatelab/aaac/resolve/main/img/st_impl-fc_aaac02.png) | |![dist](https://huggingface.co/datasets/debatelab/aaac/resolve/main/img/st_distr_aaac01.png) |![dist](https://huggingface.co/datasets/debatelab/aaac/resolve/main/img/st_distr_aaac02.png) | ### Data Fields The following multi-dimensional example record (2-step argument with one implicit premise) illustrates the structure of the AAAC datasets. #### argument_source ``` If someone was discovered in 'Moonlight', then they won't play the lead in 'Booksmart', because being a candidate for the lead in 'Booksmart' is sufficient for not being an Oscar-Nominee for a role in 'Eighth Grade'. Yet every BAFTA-Nominee for a role in 'The Shape of Water' is a fan-favourite since 'Moonlight' or a supporting actor in 'Black Panther'. And if someone is a supporting actor in 'Black Panther', then they could never become the main actor in 'Booksmart'. Consequently, if someone is a BAFTA-Nominee for a role in 'The Shape of Water', then they are not a candidate for the lead in 'Booksmart'. ``` #### reason_statements ```json [ {"text":"being a candidate for the lead in 'Booksmart' is sufficient for not being an Oscar-Nominee for a role in 'Eighth Grade'","starts_at":96, "ref_reco":2}, {"text":"every BAFTA-Nominee for a role in 'The Shape of Water' is a fan-favourite since 'Moonlight' or a supporting actor in 'Black Panther'", "starts_at":221,"ref_reco":4}, {"text":"if someone is a supporting actor in 'Black Panther', then they could never become the main actor in 'Booksmart'","starts_at":359, "ref_reco":5} ] ``` #### conclusion_statements ```json [ {"text":"If someone was discovered in 'Moonlight', then they won't play the lead in 'Booksmart'","starts_at":0,"ref_reco":3}, {"text":"if someone is a BAFTA-Nominee for a role in 'The Shape of Water', then they are not a candidate for the lead in 'Booksmart'","starts_at":486, "ref_reco":6} ] ``` #### distractors `[]` #### argdown_reconstruction ``` (1) If someone is a fan-favourite since 'Moonlight', then they are an Oscar-Nominee for a role in 'Eighth Grade'. 
(2) If someone is a candidate for the lead in 'Booksmart', then they are not an Oscar-Nominee for a role in 'Eighth Grade'. -- with hypothetical syllogism {variant: ["negation variant", "transposition"], uses: [1,2]} -- (3) If someone is beloved for their role in 'Moonlight', then they don't audition in 'Booksmart'. (4) If someone is a BAFTA-Nominee for a role in 'The Shape of Water', then they are a fan-favourite since 'Moonlight' or a supporting actor in 'Black Panther'. (5) If someone is a supporting actor in 'Black Panther', then they don't audition in 'Booksmart'. -- with generalized dilemma {variant: ["negation variant"], uses: [3,4,5]} -- (6) If someone is a BAFTA-Nominee for a role in 'The Shape of Water', then they are not a candidate for the lead in 'Booksmart'. ``` #### premises ```json [ {"ref_reco":1,"text":"If someone is a fan-favourite since 'Moonlight', then they are an Oscar-Nominee for a role in 'Eighth Grade'.","explicit":false}, {"ref_reco":2,"text":"If someone is a candidate for the lead in 'Booksmart', then they are not an Oscar-Nominee for a role in 'Eighth Grade'.","explicit":true}, {"ref_reco":4,"text":"If someone is a BAFTA-Nominee for a role in 'The Shape of Water', then they are a fan-favourite since 'Moonlight' or a supporting actor in 'Black Panther'.","explicit":true}, {"ref_reco":5,"text":"If someone is a supporting actor in 'Black Panther', then they don't audition in 'Booksmart'.","explicit":true} ] ``` #### premises_formalized ```json [ {"form":"(x): ${F2}x -> ${F5}x","ref_reco":1}, {"form":"(x): ${F4}x -> ¬${F5}x","ref_reco":2}, {"form":"(x): ${F1}x -> (${F2}x v ${F3}x)","ref_reco":4}, {"form":"(x): ${F3}x -> ¬${F4}x","ref_reco":5} ] ``` #### conclusion ```json [{"ref_reco":6,"text":"If someone is a BAFTA-Nominee for a role in 'The Shape of Water', then they are not a candidate for the lead in 'Booksmart'.", "explicit":true}] ``` #### conclusion_formalized ```json [{"form":"(x): ${F1}x -> ¬${F4}x","ref_reco":6}] ``` #### intermediary_conclusions ```json [{"ref_reco":3,"text":"If someone is beloved for their role in 'Moonlight', then they don't audition in 'Booksmart'.","explicit":true}] ``` #### intermediary_conclusions_formalized ```json [{"form":"(x): ${F2}x -> ¬${F4}x","ref_reco":3}] ``` #### plcd_subs ```json { "F1":"BAFTA-Nominee for a role in 'The Shape of Water'", "F2":"fan-favourite since 'Moonlight'", "F3":"supporting actor in 'Black Panther'", "F4":"candidate for the lead in 'Booksmart'", "F5":"Oscar-Nominee for a role in 'Eighth Grade'" } ``` ### Data Splits Number of instances in the various splits: | Split | AAAC01 | AAAC02 | | :--- | :---: | :---: | | TRAIN | 16,000 | 16,000 | | DEV | 4,000 | 4,000 | | TEST | 4,000 | 4,000 | To correctly load a specific split, define `data_files` as follows: ```python >>> data_files = {"train": "aaac01_train.jsonl", "eval": "aaac01_dev.jsonl", "test": "aaac01_test.jsonl"} >>> dataset = load_dataset("debatelab/aaac", data_files=data_files) ``` ## Dataset Creation ### Curation Rationale Argument analysis refers to the interpretation and logical reconstruction of argumentative texts. Its goal is to make an argument transparent, so as to understand, appreciate and (possibly) criticize it. Argument analysis is a key critical thinking skill. Here's a first example of an informally presented argument, **Descartes' Cogito**: > I have convinced myself that there is absolutely nothing in the world, no sky, no earth, no minds, no bodies. Does it now follow that I too do not exist? 
No: if I convinced myself of something then I certainly existed. But there is a deceiver of supreme power and cunning who is deliberately and constantly deceiving me. In that case I too undoubtedly exist, if he is deceiving me; and let him deceive me as much as he can, he will never bring it about that I am nothing so long as I think that I am something. So after considering everything very thoroughly, I must finally conclude that this proposition, I am, I exist, is necessarily true whenever it is put forward by me or conceived in my mind. (AT 7:25, CSM 2:16f) And here's a second example, taken from the *Debater's Handbook*, **Pro Censorship**: > Freedom of speech is never an absolute right but an aspiration. It ceases to be a right when it causes harm to others -- we all recognise the value of, for example, legislating against incitement to racial hatred. Therefore it is not the case that censorship is wrong in principle. Given such texts, argument analysis aims at answering the following questions: 1. Does the text present an argument? 2. If so, how many? 3. What is the argument supposed to show (conclusion)? 4. What exactly are the premises of the argument? * Which statements, explicit in the text, are not relevant for the argument? * Which premises are required, but not explicitly stated? 5. Is the argument deductively valid, inductively strong, or simply fallacious? To answer these questions, argument analysts **interpret** the text by (re-)constructing its argument in a standardized way (typically as a premise-conclusion list) and by making use of logical streamlining and formalization. A reconstruction of **Pro Censorship** which answers the above questions is: ```argdown (1) Freedom of speech is never an absolute right but an aspiration. (2) Censorship is wrong in principle only if freedom of speech is an absolute right. --with modus tollens-- (3) It is not the case that censorship is wrong in principle. ``` There are typically multiple, more or less different interpretations and logical reconstructions of an argumentative text. For instance, there exists an [extensive debate](https://plato.stanford.edu/entries/descartes-epistemology/) about how to interpret **Descartes' Cogito**, and scholars have advanced rival interpretations of the argument. An alternative reconstruction of the much simpler **Pro Censorship** might read: ```argdown (1) Legislating against incitement to racial hatred is valuable. (2) Legislating against incitement to racial hatred is an instance of censorship. (3) If some instance of censorship is valuable, censorship is not wrong in principle. ----- (4) Censorship is not wrong in principle. (5) Censorship is wrong in principle if and only if freedom of speech is an absolute right. ----- (6) Freedom of speech is not an absolute right. (7) Freedom of speech is an absolute right or an aspiration. --with disjunctive syllogism-- (8) Freedom of speech is an aspiration. ``` What are the main reasons for this kind of underdetermination? * **Incompleteness.** Many relevant parts of an argument (statements, their function in the argument, inference rules, argumentative goals) are not stated in its informal presentation. The argument analyst must infer the missing parts.
* **Additional material.** Over and above what is strictly part of the argument, informal presentations typically contain further material: relevant premises are repeated in slightly different ways, further examples are added to illustrate a point, statements are contrasted with views by opponents, etc. It is up to the argument analyst to decide which parts of the presented material really belong to the argument. * **Errors.** Authors may err in the presentation of an argument, confounding, e.g., necessary and sufficient conditions in stating a premise. Following the principle of charity, benevolent argument analysts correct such errors and have to choose one of the different ways of doing so. * **Linguistic indeterminacy.** One and the same statement can be interpreted -- regarding its logical form -- in different ways. * **Equivalence.** There are different natural language expressions for one and the same proposition. AAAC datasets provide logical reconstructions of informal argumentative texts: Each record contains a source text to-be-reconstructed and further fields which describe an internally consistent interpretation of the text, notwithstanding the fact that there might be alternative interpretations of this very text. ### Construction of the Synthetic Data Argument analysis starts with a text and reconstructs its argument (cf. [Motivation and Background](#curation-rationale)). In constructing our synthetic data, we invert this direction: We start by sampling a complete argument, construct an informal presentation, and provide further information that describes both logical reconstruction and informal presentation. More specifically, the construction of the data involves the following steps: 1. [Generation of valid symbolic inference schemes](#step-1-generation-of-symbolic-inference-schemes) 2. [Assembling complex ("multi-hop") argument schemes from symbolic inference schemes](#step-2-assembling-complex-multi-hop-argument-schemes-from-symbolic-inference-schemes) 3. [Creation of (precise and informal) natural-language argument schemes](#step-3-creation-of-precise-and-informal-natural-language-argument-schemes) 4. [Substitution of placeholders with domain-specific predicates and names](#step-4-substitution-of-placeholders-with-domain-specific-predicates-and-names) 5. [Creation of the argdown-snippet](#step-5-creation-of-the-argdown-snippet) 6. [Paraphrasing](#step-6-paraphrasing) 7. [Construction of a storyline for the argument source text](#step-7-construction-of-a-storyline-for-the-argument-source-text) 8. [Assembling the argument source text](#step-8-assembling-the-argument-source-text) 9. 
[Linking the precise reconstruction and the informal argumentative text](#step-9-linking-informal-presentation-and-formal-reconstruction) #### Step 1: Generation of symbolic inference schemes We construct the set of available inference schemes by systematically transforming the following 12 base schemes (6 from propositional and another 6 from predicate logic): * modus ponens: `['Fa -> Gb', 'Fa', 'Gb']` * chain rule: `['Fa -> Gb', 'Gb -> Hc', 'Fa -> Hc']` * adjunction: `['Fa', 'Gb', 'Fa & Gb']` * case analysis: `['Fa v Gb', 'Fa -> Hc', 'Gb -> Hc', 'Hc']` * disjunctive syllogism: `['Fa v Gb', '¬Fa', 'Gb']` * biconditional elimination: `['Fa <-> Gb', 'Fa -> Gb']` * instantiation: `['(x): Fx -> Gx', 'Fa -> Ga']` * hypothetical syllogism: `['(x): Fx -> Gx', '(x): Gx -> Hx', '(x): Fx -> Hx']` * generalized biconditional elimination: `['(x): Fx <-> Gx', '(x): Fx -> Gx']` * generalized adjunction: `['(x): Fx -> Gx', '(x): Fx -> Hx', '(x): Fx -> (Gx & Hx)']` * generalized dilemma: `['(x): Fx -> (Gx v Hx)', '(x): Gx -> Ix', '(x): Hx -> Ix', '(x): Fx -> Ix']` * generalized disjunctive syllogism: `['(x): Fx -> (Gx v Hx)', '(x): Fx -> ¬Gx', '(x): Fx -> Hx']` (Regarding the propositional schemes, we allow for `a`=`b`=`c`.) Further symbolic inference schemes are generated by applying the following transformations to each of these base schemes: * *negation*: replace all occurrences of an atomic formula by its negation (for any number of such atomic sentences) * *transposition*: transpose exactly one (generalized) conditional * *dna*: simplify by applying duplex negatio affirmat * *complex predicates*: replace all occurrences of a given atomic formula by a complex formula consisting in the conjunction or disjunction of two atomic formulas * *de morgan*: apply de Morgan's rule once These transformations are applied to the base schemes in the following order: > **{base_schemes}** > negation_variants > transposition_variants > dna > **{transposition_variants}** > complex_predicates > negation_variants > dna > **{complex_predicates}** > de_morgan > dna > **{de_morgan}** All transformations, except *dna*, are monotonic, i.e. simply add further schemes to the ones generated in the previous step. Results of bold steps are added to the list of valid inference schemes. Each inference scheme is stored with information about which transformations were used to create it. All in all, this gives us 5542 schemes. #### Step 2: Assembling complex ("multi-hop") argument schemes from symbolic inference schemes The complex argument *scheme*, which consists in multiple inferences, is assembled recursively by adding inferences that support premises of previously added inferences, as described by the following pseudocode: ``` argument = [] intermediary_conclusion = [] inference = randomly choose from list of all schemes add inference to argument for i in range(number_of_sub_arguments - 1): target = randomly choose a premise which is not an intermediary_conclusion inference = randomly choose a scheme whose conclusion is identical with target add inference to argument add target to intermediary_conclusion return argument ``` The complex arguments we create are hence trees, with a root scheme. Let's walk through this algorithm by means of an illustrative example and construct a symbolic argument scheme with two sub-arguments. 
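Before turning to the example, the pseudocode above can be pictured as a minimal Python sketch. This is only an illustration of the assembly loop, not the actual generation code; the scheme representation (dicts with `premises` and `conclusion`) and the matching by equality are simplifying assumptions.

```python
import random

def assemble_argument(schemes, n_inferences):
    """Illustrative sketch of the recursive assembly of a complex argument scheme."""
    argument = []                  # inference schemes added so far (the argument tree)
    intermediary_conclusions = []  # premises that are already backed by a sub-argument

    argument.append(random.choice(schemes))  # root inference

    for _ in range(n_inferences - 1):
        # choose a premise that is not an intermediary conclusion yet ...
        open_premises = [p for inf in argument for p in inf["premises"]
                         if p not in intermediary_conclusions]
        target = random.choice(open_premises)
        # ... and a scheme whose conclusion matches it (the real pipeline matches
        # formulas structurally rather than by string identity)
        matching = [s for s in schemes if s["conclusion"] == target]
        argument.append(random.choice(matching))
        intermediary_conclusions.append(target)

    return argument
```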
First, we randomly choose some inference scheme (random sampling is controlled by weights that compensate for the fact that the list of schemes mainly contains, for combinatorial reasons, complex inferences), say: ```json { "id": "mp", "base_scheme_group": "modus ponens", "scheme_variant": ["complex_variant"], "scheme": [ ["${A}${a} -> (${B}${a} & ${C}${a})", {"A": "${F}", "B": "${G}", "C": "${H}", "a": "${a}"}], ["${A}${a}", {"A": "${F}", "a": "${a}"}], ["${A}${a} & ${B}${a}", {"A": "${G}", "B": "${H}", "a": "${a}"}] ], "predicate-placeholders": ["F", "G", "H"], "entity-placeholders": ["a"] } ``` Now, the target premise (= intermediary conclusion) of the next subargument is chosen, say: premise 1 of the already added root scheme. We filter the list of schemes for schemes whose conclusion structurally matches the target, i.e. has the form `${A}${a} -> (${B}${a} & ${C}${a})`. From this filtered list of suitable schemes, we randomly choose, for example ```json { "id": "bicelim", "base_scheme_group": "biconditional elimination", "scheme_variant": ["complex_variant"], "scheme": [ ["${A}${a} <-> (${B}${a} & ${C}${a})", {"A": "${F}", "B": "${G}", "C": "${H}", "a": "${a}"}], ["${A}${a} -> (${B}${a} & ${C}${a})", {"A": "${F}", "B": "${G}", "C": "${H}", "a": "${a}"}] ], "predicate-placeholders": ["F", "G", "H"], "entity-placeholders": [] } ``` So, we have generated this 2-step symbolic argument scheme with two premises, one intermediary and one final conclusion: ``` (1) Fa <-> Ga & Ha -- with biconditional elimination (complex variant) from 1 -- (2) Fa -> Ga & Ha (3) Fa -- with modus ponens (complex variant) from 2,3 -- (4) Ga & Ha ``` General properties of the argument are now determined and can be stored in the dataset (its `domain` is randomly chosen): ```json "steps":2, // number of inference steps "n_premises":2, "base_scheme_groups":[ "biconditional elimination", "modus ponens" ], "scheme_variants":[ "complex variant" ], "domain_id":"consumers_personalcare", "domain_type":"persons" ``` #### Step 3: Creation of (precise and informal) natural-language argument schemes In step 3, the *symbolic and formal* complex argument scheme is transformed into a *natural language* argument scheme by replacing symbolic formulas (e.g., `${A}${a} & ${B}${a}`) with suitable natural language sentence schemes (such as `${a} is a ${A}, and ${a} is a ${B}` or `${a} is a ${A} and a ${B}`). For each symbolic formula, there are many (partly automatically, partly manually generated) natural-language sentence schemes which render the formula in a more or less precise way. Each of these natural-language "translations" of a symbolic formula is labeled according to whether it presents the logical form in a "precise", "informal", or "imprecise" way, e.g. |type|form| |-|-| |symbolic|`(x): ${A}x -> ${B}x`| |precise|`If someone is a ${A}, then they are a ${B}.`| |informal|`Every ${A} is a ${B}.`| |imprecise|`${A} might be a ${B}.`| The labels "precise", "informal", "imprecise" are used to control the generation of two natural-language versions of the argument scheme, a **precise** one (for creating the argdown snippet) and an **informal** one (for creating the source text).
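The following small sketch illustrates how such labels can be used to control rendering; the dictionary mirrors the table above, but its structure is a hypothetical simplification, not the project's actual data format.

```python
import random

# Hypothetical store of labelled natural-language renditions for one symbolic form
SENTENCE_SCHEMES = {
    "(x): ${A}x -> ${B}x": {
        "precise":   ["If someone is a ${A}, then they are a ${B}."],
        "informal":  ["Every ${A} is a ${B}."],
        "imprecise": ["${A} might be a ${B}."],
    },
}

def render_form(symbolic_form: str, label: str) -> str:
    """Pick one sentence scheme with the requested precision label."""
    return random.choice(SENTENCE_SCHEMES[symbolic_form][label])

print(render_form("(x): ${A}x -> ${B}x", "informal"))  # Every ${A} is a ${B}.
```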
Moreover, the natural-language "translations" are also chosen in view of the domain (see below) of the to-be-generated argument, specifically in view of whether it is quantified over persons ("everyone", "nobody") or objects ("something, nothing"). So, as a **precise** rendition of our symbolic argument scheme, we may obtain: ``` (1) If, and only if, a is a F, then a is G and a is a H. -- with biconditional elimination (complex variant) from 1 -- (2) If a is a F, then a is a G and a is a H. (3) a is a F. -- with modus ponens (complex variant) from 3,2 -- (4) a is G and a is a H. ``` Likewise, an **informal** rendition may be: ``` (1) a is a F if a is both a G and a H -- and vice versa. -- with biconditional elimination (complex variant) from 1 -- (2) a is a G and a H, provided a is a F. (3) a is a F. -- with modus ponens (complex variant) from 3,2 -- (4) a is both a G and a H. ``` #### Step 4: Substitution of placeholders with domain-specific predicates and names Every argument falls within a domain. A domain provides * a list of `subject names` (e.g., Peter, Sarah) * a list of `object names` (e.g., New York, Lille) * a list of `binary predicates` (e.g., [subject is an] admirer of [object]) These domains are manually created. Replacements for the placeholders are sampled from the corresponding domain. Substitutes for entity placeholders (`a`, `b` etc.) are simply chosen from the list of `subject names`. Substitutes for predicate placeholders (`F`, `G` etc.) are constructed by combining `binary predicates` with `object names`, which yields unary predicates of the form "___ stands in some relation to some object". This combinatorial construction of unary predicates drastically increases the number of replacements available and hence the variety of generated arguments. Assuming that we sample our argument from the domain `consumers personal care`, we may choose and construct the following substitutes for placeholders in our argument scheme: * `F`: regular consumer of Kiss My Face soap * `G`: regular consumer of Nag Champa soap * `H`: occasional purchaser of Shield soap * `a`: Orlando #### Step 5: Creation of the argdown-snippet From the **precise rendition** of the natural language argument scheme ([step 3](#step-3-creation-of-precise-and-informal-natural-language-argument-schemes)) and the replacements for its placeholders ([step 4](#step-4-substitution-of-placeholders-with-domain-specific-predicates-and-names)), we construct the `argdown-snippet` by simple substitution and formatting the complex argument in accordance with [argdown syntax](https://argdown.org). This yields, for our example from above: ```argdown (1) If, and only if, Orlando is a regular consumer of Kiss My Face soap, then Orlando is a regular consumer of Nag Champa soap and Orlando is a occasional purchaser of Shield soap. -- with biconditional elimination (complex variant) from 1 -- (2) If Orlando is a regular consumer of Kiss My Face soap, then Orlando is a regular consumer of Nag Champa soap and Orlando is a occasional purchaser of Shield soap. (3) Orlando is a regular consumer of Kiss My Face soap. -- with modus ponens (complex variant) from 3,2 -- (4) Orlando is a regular consumer of Nag Champa soap and Orlando is a occasional purchaser of Shield soap. ``` That's the `argdown_snippet`. 
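The substitution itself is plain placeholder filling. As a rough sketch, it can be reproduced with Python's `string.Template`, using the keys from the example's `plcd_subs` field (illustration only, not the actual build code):

```python
from string import Template

# Placeholder substitutes as in the example record's `plcd_subs` field
plcd_subs = {
    "a1": "Orlando",
    "F1": "regular consumer of Kiss My Face soap",
    "F2": "regular consumer of Nag Champa soap",
    "F3": "occasional purchaser of Shield soap",
}

# A precise sentence scheme for premise (2) of the example
sentence_scheme = "If ${a1} is a ${F1}, then ${a1} is a ${F2} and ${a1} is a ${F3}."

print(Template(sentence_scheme).substitute(plcd_subs))
# If Orlando is a regular consumer of Kiss My Face soap, then Orlando is a regular
# consumer of Nag Champa soap and Orlando is a occasional purchaser of Shield soap.
```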
By construction of such a synthetic argument (from formal schemes, see [step 2](#step-2-assembling-complex-multi-hop-argument-schemes-from-symbolic-inference-schemes)), we already know its conclusions and their formalization (the value of the field `explicit` will be determined later). ```json "conclusion":[ { "ref_reco":4, "text":"Orlando is a regular consumer of Nag Champa soap and Orlando is a occasional purchaser of Shield soap.", "explicit": TBD } ], "conclusion_formalized":[ { "ref_reco":4, "form":"(${F2}${a1} & ${F3}${a1})" } ], "intermediary_conclusions":[ { "ref_reco":2, "text":"If Orlando is a regular consumer of Kiss My Face soap, then Orlando is a regular consumer of Nag Champa soap and Orlando is a occasional purchaser of Shield soap.", "explicit": TBD } ], "intermediary_conclusions_formalized":[ { "ref_reco":2, "form":"${F1}${a1} -> (${F2}${a1} & ${F3}${a1})" } ], ``` ... and the corresponding keys (see [step 4](#step-4-substitution-of-placeholders-with-domain-specific-predicates-and-names)): ```json "plcd_subs":{ "a1":"Orlando", "F1":"regular consumer of Kiss My Face soap", "F2":"regular consumer of Nag Champa soap", "F3":"occasional purchaser of Shield soap" } ``` #### Step 6: Paraphrasing From the **informal rendition** of the natural language argument scheme ([step 3](#step-3-creation-of-precise-and-informal-natural-language-argument-schemes)) and the replacements for its placeholders ([step 4](#step-4-substitution-of-placeholders-with-domain-specific-predicates-and-names)), we construct an informal argument (argument tree) by substitution. The statements (premises, conclusions) of the informal argument are individually paraphrased in two steps: 1. rule-based and in a domain-specific way, 2. automatically by means of a specifically fine-tuned T5 model. Each domain (see [step 4](#step-4-substitution-of-placeholders-with-domain-specific-predicates-and-names)) provides rules for substituting noun constructs ("is a supporter of X", "is a product made of X") with verb constructs ("supports X", "contains X"). These rules are applied whenever possible. Next, each sentence is -- with a probability specified by parameter `lm_paraphrasing` -- replaced with an automatically generated paraphrase, using a [T5 model fine-tuned on the Google PAWS dataset](https://huggingface.co/Vamsi/T5_Paraphrase_Paws) and filtering for paraphrases with acceptable _cola_ and sufficiently high _STSB_ value (both as predicted by T5). | |AAAC01|AAAC02| |-|-|-| |`lm_paraphrasing`|0.2|0.0| #### Step 7: Construction of a storyline for the argument source text The storyline determines in which order the premises, intermediary conclusions and final conclusions are to be presented in the text paragraph to-be-constructed (`argument-source`). The storyline is constructed from the paraphrased informal complex argument (see [step 6](#step-6-paraphrasing)). Before determining the order of presentation (storyline), the informal argument tree is pre-processed to account for: * implicit premises, * implicit intermediary conclusions, and * implicit final conclusion, which is documented in the dataset record as ```json "presentation_parameters":{ "resolve_steps":[1], "implicit_conclusion":false, "implicit_premise":true, "...":"..." } ``` In order to make an intermediary conclusion *C* implicit, the inference to *C* is "resolved" by re-assigning all premisses *from* which *C* is directly inferred *to* the inference to the (final or intermediary) conclusion which *C* supports. Original tree: ``` P1 ... Pn ————————— C Q1 ...
Qn ————————————— C' ``` Tree with resolved inference and implicit intermediary conclusion: ``` P1 ... Pn Q1 ... Qn ——————————————————— C' ``` The original argument tree in our example reads: ``` (1) ——— (2) (3) ——————— (4) ``` This might be pre-processed (by resolving the first inference step and dropping the first premise) to: ``` (3) ——— (4) ``` Given such a pre-processed argument tree, a storyline, which determines the order of presentation, can be constructed by specifying the direction of presentation and a starting point. The **direction** is either * forward (premise AND ... AND premise THEREFORE conclusion) * backward (conclusion SINCE premise AND ... AND premise) Any conclusion in the pre-processed argument tree may serve as starting point. The storyline is now constructed recursively, as illustrated in Figure~1. Integer labels of the nodes represent the order of presentation, i.e. the storyline. (Note that the starting point is not necessarily the statement which is presented first according to the storyline.) ![Storyline Construction](https://huggingface.co/datasets/debatelab/aaac/resolve/main/img/storylines1-4.png) So as to introduce redundancy, the storyline may be post-processed by repeating a premiss that has been stated previously. The likelihood that a single premise is repeated is controlled by the presentation parameters: ```json "presentation_parameters":{ "redundancy_frequency":0.1, } ``` Moreover, **distractors**, i.e. arbitrary statements sampled from the argument's very domain, may be inserted in the storyline. #### Step 8: Assembling the argument source text The `argument-source` is constructed by concatenating the statements of the informal argument ([step 6](#step-6-paraphrasing)) according to the order of the storyline ([step 7](#step-7-construction-of-a-storyline-for-the-argument-source-text)). In principle, each statement is prepended by a conjunction. There are four types of conjunction: * THEREFORE: left-to-right inference * SINCE: right-to-left inference * AND: joins premises with similar inferential role * MOREOVER: catch all conjunction Each statement is assigned a specific conjunction type by the storyline. For every conjunction type, we provide multiple natural-language terms which may figure as conjunctions when concatenating the statements, e.g. "So, necessarily,", "So", "Thus,", "It follows that", "Therefore,", "Consequently,", "Hence,", "In consequence,", "All this entails that", "From this follows that", "We may conclude that" for THEREFORE. The parameter ```json "presentation_parameters":{ "drop_conj_frequency":0.1, "...":"..." } ``` determines the probability that a conjunction is omitted and a statement is concatenated without prepending a conjunction. With the parameters given above we obtain the following `argument_source` for our example: > Orlando is a regular consumer of Nag Champa soap and Orlando is a occasional purchaser of Shield soap, since Orlando is a regular consumer of Kiss My Face soap. #### Step 9: Linking informal presentation and formal reconstruction We can identify all statements _in the informal presentation_ (`argument_source`), categorize them according to their argumentative function GIVEN the logical reconstruction and link them to the corresponding statements in the `argdown_snippet`. 
We distinguish `reason_statement` (AKA REASONS, correspond to premises in the reconstruction) and `conclusion_statement` (AKA CONJECTURES, correspond to conclusion and intermediary conclusion in the reconstruction): ```json "reason_statements":[ // aka reasons { "text":"Orlando is a regular consumer of Kiss My Face soap", "starts_at":109, "ref_reco":3 } ], "conclusion_statements":[ // aka conjectures { "text":"Orlando is a regular consumer of Nag Champa soap and Orlando is a occasional purchaser of Shield soap", "starts_at":0, "ref_reco":4 } ] ``` Moreover, we are now able to classify all premises in the formal reconstruction (`argdown_snippet`) according to whether they are implicit or explicit given the informal presentation: ```json "premises":[ { "ref_reco":1, "text":"If, and only if, Orlando is a regular consumer of Kiss My Face soap, then Orlando is a regular consumer of Nag Champa soap and Orlando is a occasional purchaser of Shield soap.", "explicit":False }, { "ref_reco":3, "text":"Orlando is a regular consumer of Kiss My Face soap. ", "explicit":True } ], "premises_formalized":[ { "ref_reco":1, "form":"${F1}${a1} <-> (${F2}${a1} & ${F3}${a1})" }, { "ref_reco":3, "form":"${F1}${a1}" } ] ``` #### Initial Data Collection and Normalization N.A. #### Who are the source language producers? N.A. ### Annotations #### Annotation process N.A. #### Who are the annotators? N.A. ### Personal and Sensitive Information N.A. ## Considerations for Using the Data ### Social Impact of Dataset None ### Discussion of Biases None ### Other Known Limitations See [Betz and Richardson 2021](https://arxiv.org/abs/2110.01509). ## Additional Information ### Dataset Curators Gregor Betz, Kyle Richardson ### Licensing Information Creative Commons cc-by-sa-4.0 ### Citation Information ``` @misc{betz2021deepa2, title={DeepA2: A Modular Framework for Deep Argument Analysis with Pretrained Neural Text2Text Language Models}, author={Gregor Betz and Kyle Richardson}, year={2021}, eprint={2110.01509}, archivePrefix={arXiv}, primaryClass={cs.CL} } ``` ### Contributions <!--Thanks to [@github-username](https://github.com/<github-username>) for adding this dataset.-->
DebateLabKIT/aaac
[ "task_categories:summarization", "task_categories:text-retrieval", "task_categories:text-generation", "task_ids:parsing", "task_ids:text-simplification", "annotations_creators:machine-generated", "annotations_creators:expert-generated", "language_creators:machine-generated", "multilinguality:monolingual", "size_categories:10K<n<100K", "source_datasets:original", "language:en", "license:cc-by-sa-4.0", "argument-mining", "conditional-text-generation", "structure-prediction", "arxiv:2110.01509", "region:us" ]
2022-03-02T23:29:22+00:00
{"annotations_creators": ["machine-generated", "expert-generated"], "language_creators": ["machine-generated"], "language": ["en"], "license": ["cc-by-sa-4.0"], "multilinguality": ["monolingual"], "size_categories": ["10K<n<100K"], "source_datasets": ["original"], "task_categories": ["summarization", "text-retrieval", "text-generation"], "task_ids": ["parsing", "text-simplification"], "paperswithcode_id": "aaac", "pretty_name": "Artificial Argument Analysis Corpus", "language_bcp47": ["en-US"], "tags": ["argument-mining", "conditional-text-generation", "structure-prediction"]}
2022-10-24T15:25:56+00:00
04e69de6d4aa2f13f51f2364fbe042f536115f4a
# `deepa2` Datasets Collection ## Table of Contents - [`deepa2` Datasets Collection](#deepa2-datasets-collection) - [Table of Contents](#table-of-contents) - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Sub-Datasets](#sub-datasets) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Initial Data Collection and Normalization](#initial-data-collection-and-normalization) - [Who are the source language producers?](#who-are-the-source-language-producers) - [Annotations](#annotations) - [Annotation process](#annotation-process) - [Who are the annotators?](#who-are-the-annotators) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** [blog post](https://debatelab.github.io/journal/deepa2.html) - **Repository:** [github](https://github.com/debatelab/deepa2) - **Paper:** [arxiv](https://arxiv.org/abs/2110.01509) - **Point of Contact:** [Gregor Betz]([email protected]) ### Dataset Summary This is a growing, curated collection of `deepa2` datasets, i.e. datasets that contain comprehensive logical analyses of argumentative texts. The collection comprises: * datasets that are built from existing NLP datasets by means of the [`deepa2 bake`](https://github.com/debatelab/deepa2) tool. * original `deepa2` datasets specifically created for this collection. The tool [`deepa2 serve`](https://github.com/debatelab/deepa2#integrating-deepa2-into-your-training-pipeline) may be used to render the data in this collection as text2text examples. ### Supported Tasks and Leaderboards - `conditional-text-generation`: The dataset can be used to train models to generate a full reconstruction of an argument from a source text, making, e.g., its implicit assumptions explicit. - `structure-prediction`: The dataset can be used to train models to formalize sentences. - `text-retrieval`: The dataset can be used to train models to extract reason statements and conjectures from a given source text. ### Languages English. Will be extended to cover other languages in the future. ## Dataset Structure ### Sub-Datasets This collection contains the following `deepa2` datasets: * `esnli`: created from e-SNLI with `deepa2 bake` as [described here](https://github.com/debatelab/deepa2/blob/main/docs/esnli.md).
* `enbank` (`task_1`, `task_2`): created from Entailment Bank with `deepa2 bake` as [described here](https://github.com/debatelab/deepa2/blob/main/docs/enbank.md). * `argq`: created from IBM-ArgQ with `deepa2 bake` as [described here](https://github.com/debatelab/deepa2/blob/main/docs/argq.md). * `argkp`: created from IBM-KPA with `deepa2 bake` as [described here](https://github.com/debatelab/deepa2/blob/main/docs/argkp.md). * `aifdb` (`moral-maze`, `us2016`, `vacc-itc`): created from AIFdb with `deepa2 bake` as [described here](https://github.com/debatelab/deepa2/blob/main/docs/aifdb.md). * `aaac` (`aaac01` and `aaac02`): original, machine-generated contribution; based on an an improved and extended algorithm that backs https://huggingface.co/datasets/debatelab/aaac. ### Data Instances see: https://github.com/debatelab/deepa2/tree/main/docs ### Data Fields see: https://github.com/debatelab/deepa2/tree/main/docs |feature|esnli|enbank|aifdb|aaac|argq|argkp| |--|--|--|--|--|--|--| | `source_text` | x | x | x | x | x | x | | `title` | | x | | x | | | | `gist` | x | x | | x | | x | | `source_paraphrase` | x | x | x | x | | | | `context` | | x | | x | | x | | `reasons` | x | x | x | x | x | | | `conjectures` | x | x | x | x | x | | | `argdown_reconstruction` | x | x | | x | | x | | `erroneous_argdown` | x | | | x | | | | `premises` | x | x | | x | | x | | `intermediary_conclusion` | | | | x | | | | `conclusion` | x | x | | x | | x | | `premises_formalized` | x | | | x | | x | | `intermediary_conclusion_formalized` | | | | x | | | | `conclusion_formalized` | x | | | x | | x | | `predicate_placeholders` | | | | x | | | | `entity_placeholders` | | | | x | | | | `misc_placeholders` | x | | | x | | x | | `plchd_substitutions` | x | | | x | | x | ### Data Splits Each sub-dataset contains three splits: `train`, `validation`, and `test`. ## Dataset Creation ### Curation Rationale Many NLP datasets focus on tasks that are relevant for logical analysis and argument reconstruction. This collection is the attempt to unify these resources in a common framework. ### Source Data See: [Sub-Datasets](#sub-datasets) ## Additional Information ### Dataset Curators Gregor Betz, KIT; Kyle Richardson, Allen AI ### Licensing Information We re-distribute the the imported sub-datasets under their original license: |Sub-dataset|License| |--|--| |esnli|MIT| |aifdb|free for academic use ([TOU](https://arg-tech.org/index.php/research/argument-corpora/))| |enbank|CC BY 4.0| |aaac|CC BY 4.0| |argq|CC BY SA 4.0| |argkp|Apache| ### Citation Information ``` @article{betz2021deepa2, title={DeepA2: A Modular Framework for Deep Argument Analysis with Pretrained Neural Text2Text Language Models}, author={Gregor Betz and Kyle Richardson}, year={2021}, eprint={2110.01509}, archivePrefix={arXiv}, primaryClass={cs.CL} } ``` <!--If the dataset has a [DOI](https://www.doi.org/), please provide it here.-->
DebateLabKIT/deepa2
[ "task_categories:text-retrieval", "task_categories:text-generation", "task_ids:text-simplification", "task_ids:parsing", "language_creators:other", "multilinguality:monolingual", "size_categories:unknown", "language:en", "license:other", "argument-mining", "summarization", "conditional-text-generation", "structure-prediction", "arxiv:2110.01509", "region:us" ]
2022-03-02T23:29:22+00:00
{"annotations_creators": [], "language_creators": ["other"], "language": ["en"], "license": ["other"], "multilinguality": ["monolingual"], "size_categories": ["unknown"], "source_datasets": [], "task_categories": ["text-retrieval", "text-generation"], "task_ids": ["text-simplification", "parsing"], "pretty_name": "deepa2", "tags": ["argument-mining", "summarization", "conditional-text-generation", "structure-prediction"]}
2022-12-16T14:49:35+00:00
5129d02422a66be600ac89cd3e8531b4f97d347d
![bert_image](https://thumb.tildacdn.com/tild3433-3637-4830-a533-353833613061/-/resize/720x/-/format/webp/germanquad.jpg) # Dataset Card for germandpr ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-instances) - [Data Splits](#data-instances) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Citation Information](#citation-information) ## Dataset Description - **Homepage:** https://deepset.ai/germanquad - **Repository:** https://github.com/deepset-ai/haystack - **Paper:** https://arxiv.org/abs/2104.12741 ### Dataset Summary We take GermanQuAD as a starting point and add hard negatives from a dump of the full German Wikipedia following the approach of the DPR authors (Karpukhin et al., 2020). The format of the dataset also resembles the one of DPR. GermanDPR comprises 9275 question/answerpairs in the training set and 1025 pairs in the test set. For eachpair, there are one positive context and three hard negative contexts. ### Supported Tasks and Leaderboards - `open-domain-qa`, `text-retrieval`: This dataset is intended to be used for `open-domain-qa` and text retrieval tasks. ### Languages The sentences in the dataset are in German (de). ## Dataset Structure ### Data Instances A sample from the training set is provided below: ``` { "question": "Wie viele christlichen Menschen in Deutschland glauben an einen Gott?", "answers": [ "75 % der befragten Katholiken sowie 67 % der Protestanten glaubten an einen Gott (2005: 85 % und 79 %)" ], "positive_ctxs": [ { "title": "Gott", "text": "Gott\ === Demografie === Eine Zusammenfassung von Umfrageergebnissen aus verschiedenen Staaten ergab im Jahr 2007, dass es weltweit zwischen 505 und 749 Millionen Atheisten und Agnostiker gibt. Laut der Encyclopædia Britannica gab es 2009 weltweit 640 Mio. Nichtreligiöse und Agnostiker (9,4 %), und weitere 139 Mio. Atheisten (2,0 %), hauptsächlich in der Volksrepublik China.\\\\\\\\ Bei einer Eurobarometer-Umfrage im Jahr 2005 wurde festgestellt, dass 52 % der damaligen EU-Bevölkerung glaubt, dass es einen Gott gibt. Eine vagere Frage nach dem Glauben an „eine andere spirituelle Kraft oder Lebenskraft“ wurde von weiteren 27 % positiv beantwortet. Bezüglich der Gottgläubigkeit bestanden große Unterschiede zwischen den einzelnen europäischen Staaten. Die Umfrage ergab, dass der Glaube an Gott in Staaten mit starkem kirchlichen Einfluss am stärksten verbreitet ist, dass mehr Frauen (58 %) als Männer (45 %) an einen Gott glauben und dass der Gottglaube mit höherem Alter, geringerer Bildung und politisch rechtsgerichteten Ansichten korreliert.\\\\\\\\ Laut einer Befragung von 1003 Personen in Deutschland im März 2019 glauben 55 % an einen Gott; 2005 waren es 66 % gewesen. 75 % der befragten Katholiken sowie 67 % der Protestanten glaubten an einen Gott (2005: 85 % und 79 %). Unter Konfessionslosen ging die Glaubensquote von 28 auf 20 % zurück. Unter Frauen (60 %) war der Glauben 2019 stärker ausgeprägt als unter Männern (50 %), in Westdeutschland (63 %) weiter verbreitet als in Ostdeutschland (26 %).", "passage_id": "" } ], "negative_ctxs": [], "hard_negative_ctxs": [ { "title": "Christentum", "text": "Christentum\ \ === Ursprung und Einflüsse ===\ Die ersten Christen waren Juden, die zum Glauben an Jesus Christus fanden. 
In ihm erkannten sie den bereits durch die biblische Prophetie verheißenen Messias (hebräisch: ''maschiach'', griechisch: ''Christos'', latinisiert ''Christus''), auf dessen Kommen die Juden bis heute warten. Die Urchristen übernahmen aus der jüdischen Tradition sämtliche heiligen Schriften (den Tanach), wie auch den Glauben an einen Messias oder Christus (''christos'': Gesalbter). Von den Juden übernommen wurden die Art der Gottesverehrung, das Gebet der Psalmen u. v. a. m. Eine weitere Gemeinsamkeit mit dem Judentum besteht in der Anbetung desselben Schöpfergottes. Jedoch sehen fast alle Christen Gott als ''einen'' dreieinigen Gott an: den Vater, den Sohn (Christus) und den Heiligen Geist. Darüber, wie der dreieinige Gott konkret gedacht werden kann, gibt es unter den christlichen Konfessionen und Gruppierungen unterschiedliche Auffassungen bis hin zur Ablehnung der Dreieinigkeit Gottes (Antitrinitarier). Der Glaube an Jesus Christus führte zu Spannungen und schließlich zur Trennung zwischen Juden, die diesen Glauben annahmen, und Juden, die dies nicht taten, da diese es unter anderem ablehnten, einen Menschen anzubeten, denn sie sahen in Jesus Christus nicht den verheißenen Messias und erst recht nicht den Sohn Gottes. Die heutige Zeitrechnung wird von der Geburt Christi aus gezählt. Anno Domini (A. D.) bedeutet „im Jahr des Herrn“.", "passage_id": "" }, { "title": "Noachidische_Gebote", "text": "Noachidische_Gebote\ \ === Die kommende Welt ===\ Der Glaube an eine ''Kommende Welt'' (Olam Haba) bzw. an eine ''Welt des ewigen Lebens'' ist ein Grundprinzip des Judentums. Dieser jüdische Glaube ist von dem christlichen Glauben an das ''Ewige Leben'' fundamental unterschieden. Die jüdische Lehre spricht niemandem das Heil dieser kommenden Welt ab, droht aber auch nicht mit Höllenstrafen im Jenseits. Juden glauben schlicht, dass allen Menschen ein Anteil der kommenden Welt zuteilwerden kann. Es gibt zwar viele Vorstellungen der kommenden Welt, aber keine kanonische Festlegung ihrer Beschaffenheit; d. h., das Judentum kennt keine eindeutige Antwort darauf, was nach dem Tod mit uns geschieht. Die Frage nach dem Leben nach dem Tod wird auch als weniger wesentlich angesehen, als Fragen, die das Leben des Menschen auf Erden und in der Gesellschaft betreffen.\ Der jüdische Glaube an eine kommende Welt bedeutet nicht, dass Menschen, die nie von der Tora gehört haben, böse oder sonst minderwertige Menschen sind. Das Judentum lehrt den Glauben, dass alle Menschen mit Gott verbunden sind. Es gibt im Judentum daher keinen Grund, zu missionieren. Das Judentum lehrt auch, dass alle Menschen sich darin gleichen, dass sie weder prinzipiell gut noch böse sind, sondern eine Neigung zum Guten wie zum Bösen haben. Während des irdischen Lebens sollte sich der Mensch immer wieder für das Gute entscheiden.", "passage_id": "" }, { "title": "Figuren_und_Schauplätze_der_Scheibenwelt-Romane", "text": "Figuren_und_Schauplätze_der_Scheibenwelt-Romane\ \ === Herkunft ===\ Es gibt unzählig viele Götter auf der Scheibenwelt, die so genannten „geringen Götter“, die überall sind, aber keine Macht haben. Erst wenn sie durch irgendein Ereignis Gläubige gewinnen, werden sie mächtiger. Je mehr Glauben, desto mehr Macht. Dabei nehmen sie die Gestalt an, die die Menschen ihnen geben (zum Beispiel Offler). Wenn ein Gott mächtig genug ist, erhält er Einlass in den Cori Celesti, den Berg der Götter, der sich in der Mitte der Scheibenwelt erhebt. 
Da Menschen wankelmütig sind, kann es auch geschehen, dass sie den Glauben verlieren und einen Gott damit entmachten (s. „Einfach Göttlich“).", "passage_id": "" } ] }, ``` ### Data Fields - `positive_ctxs`: a dictionary feature containing: - `title`: a `string` feature. - `text`: a `string` feature. - `passage_id`: a `string` feature. - `negative_ctxs`: a dictionary feature containing: - `title`: a `string` feature. - `text`: a `string` feature. - `passage_id`: a `string` feature. - `hard_negative_ctxs`: a dictionary feature containing: - `title`: a `string` feature. - `text`: a `string` feature. - `passage_id`: a `string` feature. - `question`: a `string` feature. - `answers`: a list feature containing: - a `string` feature. ### Data Splits The dataset is split into a training set and a test set. The final GermanDPR dataset comprises 9275 question/answer pairs in the training set and 1025 pairs in the test set. For each pair, there are one positive context and three hard negative contexts. | |questions|answers|positive contexts|hard negative contexts| |------|--------:|------:|----------------:|---------------------:| |train|9275| 9275|9275|27825| |test|1025| 1025|1025|3075| ## Additional Information ### Dataset Curators The dataset was initially created by Timo Möller, Julian Risch, Malte Pietsch, Julian Gutsch, Tom Hersperger, Luise Köhler, Iuliia Mozhina, and Justus Peter, during work done at deepset.ai ### Citation Information ``` @misc{möller2021germanquad, title={GermanQuAD and GermanDPR: Improving Non-English Question Answering and Passage Retrieval}, author={Timo Möller and Julian Risch and Malte Pietsch}, year={2021}, eprint={2104.12741}, archivePrefix={arXiv}, primaryClass={cs.CL} } ```
deepset/germandpr
[ "task_categories:question-answering", "task_categories:text-retrieval", "task_ids:extractive-qa", "task_ids:closed-domain-qa", "multilinguality:monolingual", "source_datasets:original", "language:de", "license:cc-by-4.0", "arxiv:2104.12741", "region:us" ]
2022-03-02T23:29:22+00:00
{"language": ["de"], "license": "cc-by-4.0", "multilinguality": ["monolingual"], "source_datasets": ["original"], "task_categories": ["question-answering", "text-retrieval"], "task_ids": ["extractive-qa", "closed-domain-qa"], "thumbnail": "https://thumb.tildacdn.com/tild3433-3637-4830-a533-353833613061/-/resize/720x/-/format/webp/germanquad.jpg"}
2023-04-06T12:59:37+00:00
fff05ceaf2ffbe5b65c7e0c57e678f7b7e1a0581
![bert_image](https://thumb.tildacdn.com/tild3433-3637-4830-a533-353833613061/-/resize/720x/-/format/webp/germanquad.jpg) # Dataset Card for germanquad ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-instances) - [Data Splits](#data-instances) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Citation Information](#citation-information) ## Dataset Description - **Homepage:** https://deepset.ai/germanquad - **Repository:** https://github.com/deepset-ai/haystack - **Paper:** https://arxiv.org/abs/2104.12741 ### Dataset Summary In order to raise the bar for non-English QA, we are releasing a high-quality, human-labeled German QA dataset consisting of 13 722 questions, incl. a three-way annotated test set. The creation of GermanQuAD is inspired by insights from existing datasets as well as our labeling experience from several industry projects. We combine the strengths of SQuAD, such as high out-of-domain performance, with self-sufficient questions that contain all relevant information for open-domain QA as in the NaturalQuestions dataset. Our training and test datasets do not overlap like other popular datasets and include complex questions that cannot be answered with a single entity or only a few words. ### Supported Tasks and Leaderboards - `extractive-qa`, `closed-domain-qa`, `open-domain-qa`, `text-retrieval`: This dataset is intended to be used for `open-domain-qa`, but can also be used for information retrieval tasks. ### Languages The sentences in the dataset are in German (de). ## Dataset Structure ### Data Instances A sample from the training set is provided below: ``` { "paragraphs": [ { "qas": [ { "question": "Von welchem Gesetzt stammt das Amerikanische ab? ", "id": 51870, "answers": [ { "answer_id": 53778, "document_id": 43958, "question_id": 51870, "text": "britischen Common Laws", "answer_start": 146, "answer_category": "SHORT" } ], "is_impossible": false } ], "context": "Recht_der_Vereinigten_Staaten\ \ === Amerikanisches Common Law ===\ Obwohl die Vereinigten Staaten wie auch viele Staaten des Commonwealth Erben des britischen Common Laws sind, setzt sich das amerikanische Recht bedeutend davon ab. Dies rührt größtenteils von dem langen Zeitraum her, in dem sich das amerikanische Recht unabhängig vom Britischen entwickelt hat. Entsprechend schauen die Gerichte in den Vereinigten Staaten bei der Analyse von eventuell zutreffenden britischen Rechtsprinzipien im Common Law gewöhnlich nur bis ins frühe 19. Jahrhundert.\ Während es in den Commonwealth-Staaten üblich ist, dass Gerichte sich Entscheidungen und Prinzipien aus anderen Commonwealth-Staaten importieren, ist das in der amerikanischen Rechtsprechung selten. Ausnahmen bestehen hier nur, wenn sich überhaupt keine relevanten amerikanischen Fälle finden lassen, die Fakten nahezu identisch sind und die Begründung außerordentlich überzeugend ist. Frühe amerikanische Entscheidungen zitierten oft britische Fälle, solche Zitate verschwanden aber während des 19. Jahrhunderts, als die Gerichte eindeutig amerikanische Lösungen zu lokalen Konflikten fanden. 
In der aktuellen Rechtsprechung beziehen sich fast alle Zitate auf amerikanische Fälle.\ Einige Anhänger des Originalismus und der strikten Gesetzestextauslegung (''strict constructionism''), wie zum Beispiel der verstorbene Bundesrichter am Obersten Gerichtshof, Antonin Scalia, vertreten die Meinung, dass amerikanische Gerichte ''nie'' ausländische Fälle überprüfen sollten, die nach dem Unabhängigkeitskrieg entschieden wurden, unabhängig davon, ob die Argumentation überzeugend ist oder nicht. Die einzige Ausnahme wird hier in Fällen gesehen, die durch die Vereinigten Staaten ratifizierte völkerrechtliche Verträge betreffen. Andere Richter, wie zum Beispiel Anthony Kennedy und Stephen Breyer vertreten eine andere Ansicht und benutzen ausländische Rechtsprechung, sofern ihre Argumentation für sie überzeugend, nützlich oder hilfreich ist.", "document_id": 43958 } ] }, ``` ### Data Fields - `id`: a `string` feature. - `context`: a `string` feature. - `question`: a `string` feature. - `answers`: a dictionary feature containing: - `text`: a `string` feature. - `answer_start`: a `int32` feature. ### Data Splits The dataset is split into a one-way annotated training set and a three-way annotated test set of German Wikipedia passages (paragraphs). Each passage is from a different article. | |passages|questions|answers| |----------|----:|---------:|---------:| |train|2540| 11518|11518| |test|474| 2204|6536| ## Additional Information ### Dataset Curators The dataset was initially created by Timo Möller, Julian Risch, Malte Pietsch, Julian Gutsch, Tom Hersperger, Luise Köhler, Iuliia Mozhina, and Justus Peter, during work done at deepset.ai ### Citation Information ``` @misc{möller2021germanquad, title={GermanQuAD and GermanDPR: Improving Non-English Question Answering and Passage Retrieval}, author={Timo Möller and Julian Risch and Malte Pietsch}, year={2021}, eprint={2104.12741}, archivePrefix={arXiv}, primaryClass={cs.CL} } ```
deepset/germanquad
[ "task_categories:question-answering", "task_categories:text-retrieval", "task_ids:extractive-qa", "task_ids:closed-domain-qa", "task_ids:open-domain-qa", "multilinguality:monolingual", "source_datasets:original", "language:de", "license:cc-by-4.0", "arxiv:2104.12741", "region:us" ]
2022-03-02T23:29:22+00:00
{"language": ["de"], "license": "cc-by-4.0", "multilinguality": ["monolingual"], "source_datasets": ["original"], "task_categories": ["question-answering", "text-retrieval"], "task_ids": ["extractive-qa", "closed-domain-qa", "open-domain-qa"], "thumbnail": "https://thumb.tildacdn.com/tild3433-3637-4830-a533-353833613061/-/resize/720x/-/format/webp/germanquad.jpg", "train-eval-index": [{"config": "plain_text", "task": "question-answering", "task_id": "extractive_question_answering", "splits": {"train_split": "train", "eval_split": "test"}, "col_mapping": {"context": "context", "question": "question", "answers.text": "answers.text", "answers.answer_start": "answers.answer_start"}}]}
2023-04-06T12:58:35+00:00
a24a4e46e38e652b9ac7a43c53c1f90eead22eea
# Dataset Card for the Klexikon Dataset ## Table of Contents - [Version History](#version-history) - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-instances) - [Data Splits](#data-instances) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) ## Version History - **v0.3** (2022-09-01): Removing some five samples from the dataset due to duplication conflicts with other samples. - **v0.2** (2022-02-28): Updated the files to no longer contain empty sections and removing otherwise empty lines at the end of files. Also removing lines with some sort of coordinate. - **v0.1** (2022-01-19): Initial data release on Huggingface datasets. ## Dataset Description - **Homepage:** [N/A] - **Repository:** [Klexikon repository](https://github.com/dennlinger/klexikon) - **Paper:** [Klexikon: A German Dataset for Joint Summarization and Simplification](https://arxiv.org/abs/2201.07198) - **Leaderboard:** [N/A] - **Point of Contact:** [Dennis Aumiller](mailto:[email protected]) ### Dataset Summary The Klexikon dataset is a German resource of document-aligned texts between German Wikipedia and the children's lexicon "Klexikon". The dataset was created for the purpose of joint text simplification and summarization, and contains almost 2900 aligned article pairs. Notably, the children's articles use a simpler language than the original Wikipedia articles; this is in addition to a clear length discrepancy between the source (Wikipedia) and target (Klexikon) domain. ### Supported Tasks and Leaderboards - `summarization`: The dataset can be used to train a model for summarization. In particular, it poses a harder challenge than some of the commonly used datasets (CNN/DailyMail), which tend to suffer from positional biases in the source text. This makes it very easy to generate high (ROUGE) scoring solutions, by simply taking the leading 3 sentences. Our dataset provides a more challenging extraction task, combined with the additional difficulty of finding lexically appropriate simplifications. - `simplification`: While not currently supported by the HF task board, text simplification is concerned with the appropriate representation of a text for disadvantaged readers (e.g., children, language learners, dyslexic,...). For scoring, we ran preliminary experiments based on [ROUGE](https://huggingface.co/metrics/rouge), however, we want to cautiously point out that ROUGE is incapable of accurately depicting simplification appropriateness. We combined this with looking at Flesch readability scores, as implemented by [textstat](https://github.com/shivam5992/textstat). 
Note that simplification metrics such as [SARI](https://huggingface.co/metrics/sari) are not applicable here, since they require sentence alignments, which we do not provide. ### Languages The associated BCP-47 code is `de-DE`. The text of the articles is in German. Klexikon articles are further undergoing a simple form of peer-review before publication, and aim to simplify language for 8-13 year old children. This means that the general expected text difficulty for Klexikon articles is lower than Wikipedia's entries. ## Dataset Structure ### Data Instances One datapoint represents the Wikipedia text (`wiki_text`), as well as the Klexikon text (`klexikon_text`). Sentences are separated by newlines for both datasets, and section headings are indicated by leading `==` (or `===` for subheadings, `====` for sub-subheading, etc.). Further, it includes the `wiki_url` and `klexikon_url`, pointing to the respective source texts. Note that the original articles were extracted in April 2021, so re-crawling the texts yourself will likely change some content. Lastly, we include a unique identifier `u_id` as well as the page title `title` of the Klexikon page. Sample (abridged texts for clarity): ``` { "u_id": 0, "title": "ABBA", "wiki_url": "https://de.wikipedia.org/wiki/ABBA", "klexikon_url": "https://klexikon.zum.de/wiki/ABBA", "wiki_sentences": [ "ABBA ist eine schwedische Popgruppe, die aus den damaligen Paaren Agnetha Fältskog und Björn Ulvaeus sowie Benny Andersson und Anni-Frid Lyngstad besteht und sich 1972 in Stockholm formierte.", "Sie gehört mit rund 400 Millionen verkauften Tonträgern zu den erfolgreichsten Bands der Musikgeschichte.", "Bis in die 1970er Jahre hatte es keine andere Band aus Schweden oder Skandinavien gegeben, der vergleichbare Erfolge gelungen waren.", "Trotz amerikanischer und britischer Dominanz im Musikgeschäft gelang der Band ein internationaler Durchbruch.", "Sie hat die Geschichte der Popmusik mitgeprägt.", "Zu ihren bekanntesten Songs zählen Mamma Mia, Dancing Queen und The Winner Takes It All.", "1982 beendeten die Gruppenmitglieder aufgrund privater Differenzen ihre musikalische Zusammenarbeit.", "Seit 2016 arbeiten die vier Musiker wieder zusammen an neuer Musik, die 2021 erscheinen soll.", ], "klexikon_sentences": [ "ABBA war eine Musikgruppe aus Schweden.", "Ihre Musikrichtung war die Popmusik.", "Der Name entstand aus den Anfangsbuchstaben der Vornamen der Mitglieder, Agnetha, Björn, Benny und Anni-Frid.", "Benny Andersson und Björn Ulvaeus, die beiden Männer, schrieben die Lieder und spielten Klavier und Gitarre.", "Anni-Frid Lyngstad und Agnetha Fältskog sangen." ] }, ``` ### Data Fields * `u_id` (`int`): A unique identifier for each document pair in the dataset. 0-2349 are reserved for training data, 2350-2623 for testing, and 2364-2897 for validation. * `title` (`str`): Title of the Klexikon page for this sample. * `wiki_url` (`str`): URL of the associated Wikipedia article. Notably, this is non-trivial, since we potentially have disambiguated pages, where the Wikipedia title is not exactly the same as the Klexikon one. * `klexikon_url` (`str`): URL of the Klexikon article. * `wiki_text` (`List[str]`): List of sentences of the Wikipedia article. We prepare a pre-split document with spacy's sentence splitting (model: `de_core_news_md`). Additionally, please note that we do not include page contents outside of `<p>` tags, which excludes lists, captions and images. * `klexikon_text` (`List[str]`): List of sentences of the Klexikon article. 
We apply the same processing as for the Wikipedia texts. ### Data Splits We provide a stratified split of the dataset, based on the length of the respective Wiki article/Klexikon article pair (according to number of sentences). The x-axis represents the length of the Wikipedia article, and the y-axis the length of the Klexikon article. We segment the coordinate system into rectangles of shape `(100, 10)`, and randomly sample a split of 80/10/10 for training/validation/test from each rectangle to ensure stratification. In case of rectangles with fewer than 10 entries, we put all samples into training. The final splits have the following sizes: * 2350 samples for training * 274 samples for validation * 274 samples for testing ## Dataset Creation ### Curation Rationale As previously described, the Klexikon resource was created as an attempt to bridge the two fields of text summarization and text simplification. Previous datasets suffer from one or more of the following shortcomings: * They primarily focus on input/output pairs of similar lengths, which does not reflect longer-form texts. * Data exists primarily for English, and other languages are notoriously understudied. * Alignments exist at the sentence level, but not at the document level. This dataset serves as a starting point to investigate the feasibility of end-to-end simplification systems for longer input documents. ### Source Data #### Initial Data Collection and Normalization Data was collected from [Klexikon](https://klexikon.zum.de), and afterwards aligned with corresponding texts from [German Wikipedia](https://de.wikipedia.org). Specifically, the collection process was performed in April 2021, and 3145 articles could be extracted from Klexikon back then. Afterwards, we semi-automatically align the articles with Wikipedia by looking up articles with the same title. For articles that do not exactly match, we manually review their content, and decide to match to an appropriate substitute if the content can be matched by at least 66% of the Klexikon paragraphs. Similarly, we proceed to manually review disambiguation pages on Wikipedia. We extract only full-text content, excluding figures, captions, and list elements from the final text corpus, and only retain articles for which the respective Wikipedia document consists of at least 15 paragraphs after pre-processing. #### Who are the source language producers? The language producers are contributors to Klexikon and Wikipedia. No demographic information was available from the data sources. ### Annotations #### Annotation process Annotations were performed by manually reviewing the URLs of the ambiguous article pairs. No annotation platforms or existing tools were used in the process. Otherwise, articles were matched based on the exact title. #### Who are the annotators? The manually aligned articles were reviewed by the dataset author (Dennis Aumiller). ### Personal and Sensitive Information Since Klexikon and Wikipedia are public encyclopedias, no further personal or sensitive information is included. We did not investigate to what extent information about public figures is included in the dataset. ## Considerations for Using the Data ### Social Impact of Dataset Accessibility on the web is still a big issue, particularly for disadvantaged readers. This dataset has the potential to strengthen text simplification systems, which can improve the situation. In terms of language coverage, this dataset also has a beneficial impact on the availability of German data.
Potential negative biases include the problems of automatically aligned articles. The alignments may never be 100% perfect, and can therefore cause mis-aligned articles (or associations), despite the best of our intentions. ### Discussion of Biases We have not tested whether any particular bias towards a specific article *type* (i.e., "person", "city", etc.) exists. Similarly, we attempted to present an unbiased (stratified) split for the validation and test sets, but given that we only cover around 2900 articles, it is possible that these articles represent a particular focal lens on the overall distribution of lexical content. ### Other Known Limitations Since the articles were written independently of each other, it is not guaranteed that there exists an exact coverage of each sentence in the simplified article, which could also stem from the fact that sometimes Wikipedia pages have separate article pages for aspects (e.g., the city of "Aarhus" has a separate page for its art museum (ARoS)). However, Klexikon lists content and description for ARoS on the page of the city itself. ## Additional Information ### Dataset Curators The dataset was curated only by the author of this dataset, Dennis Aumiller. ### Licensing Information Klexikon and Wikipedia make their textual contents available under the CC BY-SA license, which will be inherited for this dataset. ### Citation Information If you use our dataset or associated code, please cite our paper: ``` @inproceedings{aumiller-gertz-2022-klexikon, title = "Klexikon: A {G}erman Dataset for Joint Summarization and Simplification", author = "Aumiller, Dennis and Gertz, Michael", booktitle = "Proceedings of the Thirteenth Language Resources and Evaluation Conference", month = jun, year = "2022", address = "Marseille, France", publisher = "European Language Resources Association", url = "https://aclanthology.org/2022.lrec-1.288", pages = "2693--2701" } ```
dennlinger/klexikon
[ "task_categories:summarization", "task_categories:text2text-generation", "task_ids:text-simplification", "annotations_creators:found", "annotations_creators:expert-generated", "language_creators:found", "language_creators:machine-generated", "multilinguality:monolingual", "size_categories:1K<n<10K", "source_datasets:original", "language:de", "license:cc-by-sa-4.0", "conditional-text-generation", "simplification", "document-level", "arxiv:2201.07198", "region:us" ]
2022-03-02T23:29:22+00:00
{"annotations_creators": ["found", "expert-generated"], "language_creators": ["found", "machine-generated"], "language": ["de"], "license": ["cc-by-sa-4.0"], "multilinguality": ["monolingual"], "size_categories": ["1K<n<10K"], "source_datasets": ["original"], "task_categories": ["summarization", "text2text-generation"], "task_ids": ["text-simplification"], "paperswithcode_id": "klexikon", "pretty_name": "Klexikon", "tags": ["conditional-text-generation", "simplification", "document-level"]}
2022-10-25T14:03:56+00:00
aaf7320012e1c0f34ed6792b11d23009d5d8df9f
dfgvhxfgv/fghghj
[ "region:us" ]
2022-03-02T23:29:22+00:00
{}
2021-05-01T12:05:41+00:00
6f0944f5a1d47c359b4f5de03ed1d58c98f297b5
# Dataset Card for "Few-NERD" ## Table of Contents - [Dataset Description]( #dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** [https://ningding97.github.io/fewnerd/](https://ningding97.github.io/fewnerd/) - **Repository:** [https://github.com/thunlp/Few-NERD](https://github.com/thunlp/Few-NERD) - **Paper:** [https://aclanthology.org/2021.acl-long.248/](https://aclanthology.org/2021.acl-long.248/) - **Point of Contact:** See [https://ningding97.github.io/fewnerd/](https://ningding97.github.io/fewnerd/) ### Dataset Summary This script is for loading the Few-NERD dataset from https://ningding97.github.io/fewnerd/. Few-NERD is a large-scale, fine-grained manually annotated named entity recognition dataset, which contains 8 coarse-grained types, 66 fine-grained types, 188,200 sentences, 491,711 entities, and 4,601,223 tokens. Three benchmark tasks are built, one is supervised (Few-NERD (SUP)) and the other two are few-shot (Few-NERD (INTRA) and Few-NERD (INTER)). NER tags use the `IO` tagging scheme. The original data uses a 2-column CoNLL-style format, with empty lines to separate sentences. DOCSTART information is not provided since the sentences are randomly ordered. For more details see https://ningding97.github.io/fewnerd/ and https://aclanthology.org/2021.acl-long.248/. ### Supported Tasks and Leaderboards - **Tasks:** Named Entity Recognition, Few-shot NER - **Leaderboards:** - https://ningding97.github.io/fewnerd/ - named-entity-recognition:https://paperswithcode.com/sota/named-entity-recognition-on-few-nerd-sup - other-few-shot-ner:https://paperswithcode.com/sota/few-shot-ner-on-few-nerd-intra - other-few-shot-ner:https://paperswithcode.com/sota/few-shot-ner-on-few-nerd-inter ### Languages English ## Dataset Structure ### Data Instances - **Size of downloaded dataset files:** - `super`: 14.6 MB - `intra`: 11.4 MB - `inter`: 11.5 MB - **Size of the generated dataset:** - `super`: 116.9 MB - `intra`: 106.2 MB - `inter`: 106.2 MB - **Total amount of disk used:** 366.8 MB An example of 'train' looks as follows. ```json { 'id': '1', 'tokens': ['It', 'starred', 'Hicks', "'s", 'wife', ',', 'Ellaline', 'Terriss', 'and', 'Edmund', 'Payne', '.'], 'ner_tags': [0, 0, 7, 0, 0, 0, 7, 7, 0, 7, 7, 0], 'fine_ner_tags': [0, 0, 51, 0, 0, 0, 50, 50, 0, 50, 50, 0] } ``` ### Data Fields The data fields are the same among all splits. - `id`: a `string` feature. - `tokens`: a `list` of `string` features. 
- `ner_tags`: a `list` of classification labels, with possible values including `O` (0), `art` (1), `building` (2), `event` (3), `location` (4), `organization` (5), `other`(6), `person` (7), `product` (8) - `fine_ner_tags`: a `list` of fine-grained classification labels, with possible values including `O` (0), `art-broadcastprogram` (1), `art-film` (2), ... ### Data Splits | Task | Train | Dev | Test | | ----- | ------ | ----- | ---- | | SUP | 131767 | 18824 | 37648 | | INTRA | 99519 | 19358 | 44059 | | INTER | 130112 | 18817 | 14007 | ## Dataset Creation ### Curation Rationale [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Source Data #### Initial Data Collection and Normalization [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) #### Who are the source language producers? [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Annotations #### Annotation process [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) #### Who are the annotators? [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Personal and Sensitive Information [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Discussion of Biases [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Other Known Limitations [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ## Additional Information ### Dataset Curators [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Licensing Information [CC BY-SA 4.0 license](https://creativecommons.org/licenses/by-sa/4.0/) ### Citation Information ``` @inproceedings{ding-etal-2021-nerd, title = "Few-{NERD}: A Few-shot Named Entity Recognition Dataset", author = "Ding, Ning and Xu, Guangwei and Chen, Yulin and Wang, Xiaobin and Han, Xu and Xie, Pengjun and Zheng, Haitao and Liu, Zhiyuan", booktitle = "Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers)", month = aug, year = "2021", address = "Online", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2021.acl-long.248", doi = "10.18653/v1/2021.acl-long.248", pages = "3198--3213", } ``` ### Contributions
DFKI-SLT/few-nerd
[ "task_categories:token-classification", "task_ids:named-entity-recognition", "annotations_creators:expert-generated", "language_creators:found", "multilinguality:monolingual", "size_categories:100K<n<1M", "source_datasets:extended|wikipedia", "language:en", "license:cc-by-sa-4.0", "structure-prediction", "region:us" ]
2022-03-02T23:29:22+00:00
{"annotations_creators": ["expert-generated"], "language_creators": ["found"], "language": ["en"], "license": ["cc-by-sa-4.0"], "multilinguality": ["monolingual"], "size_categories": ["100K<n<1M"], "source_datasets": ["extended|wikipedia"], "task_categories": ["token-classification"], "task_ids": ["named-entity-recognition"], "paperswithcode_id": "few-nerd", "pretty_name": "Few-NERD", "tags": ["structure-prediction"]}
2023-06-21T08:59:09+00:00
6b1bef2a9b7718d9a345d086ad9750123fa380b4
# Dataset Card for "MobIE" ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** [https://github.com/dfki-nlp/mobie](https://github.com/dfki-nlp/mobie) - **Repository:** [https://github.com/dfki-nlp/mobie](https://github.com/dfki-nlp/mobie) - **Paper:** [https://aclanthology.org/2021.konvens-1.22/](https://aclanthology.org/2021.konvens-1.22/) - **Point of Contact:** See [https://github.com/dfki-nlp/mobie](https://github.com/dfki-nlp/mobie) - **Size of downloaded dataset files:** 7.8 MB - **Size of the generated dataset:** 1.9 MB - **Total amount of disk used:** 9.7 MB ### Dataset Summary This script is for loading the MobIE dataset from https://github.com/dfki-nlp/mobie. MobIE is a German-language dataset which is human-annotated with 20 coarse- and fine-grained entity types and entity linking information for geographically linkable entities. The dataset consists of 3,232 social media texts and traffic reports with 91K tokens, and contains 20.5K annotated entities, 13.1K of which are linked to a knowledge base. A subset of the dataset is human-annotated with seven mobility-related, n-ary relation types, while the remaining documents are annotated using a weakly-supervised labeling approach implemented with the Snorkel framework. The dataset combines annotations for NER, EL and RE, and thus can be used for joint and multi-task learning of these fundamental information extraction tasks. This version of the dataset loader provides NER tags only. NER tags use the `BIO` tagging scheme. For more details see https://github.com/dfki-nlp/mobie and https://aclanthology.org/2021.konvens-1.22/. ### Supported Tasks and Leaderboards - **Tasks:** Named Entity Recognition - **Leaderboards:** ### Languages German ## Dataset Structure ### Data Instances - **Size of downloaded dataset files:** 7.8 MB - **Size of the generated dataset:** 1.9 MB - **Total amount of disk used:** 9.7 MB An example of 'train' looks as follows. ```json { 'id': 'http://www.ndr.de/nachrichten/verkehr/index.html#2@2016-05-04T21:02:14.000+02:00', 'tokens': ['Vorsicht', 'bitte', 'auf', 'der', 'A28', 'Leer', 'Richtung', 'Oldenburg', 'zwischen', 'Zwischenahner', 'Meer', 'und', 'Neuenkruge', 'liegen', 'Gegenstände', '!'], 'ner_tags': [0, 0, 0, 0, 19, 13, 0, 13, 0, 11, 12, 0, 11, 0, 0, 0] } ``` ### Data Fields The data fields are the same among all splits. - `id`: a `string` feature. - `tokens`: a `list` of `string` features. 
- `ner_tags`: a `list` of classification labels, with possible values including `O` (0), `B-date` (1), `I-date` (2), `B-disaster-type` (3), `I-disaster-type` (4), ... ### Data Splits | | Train | Dev | Test | | ----- | ------ | ----- | ---- | | MobIE | 4785 | 1082 | 1210 | ## Dataset Creation ### Curation Rationale [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Source Data #### Initial Data Collection and Normalization [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) #### Who are the source language producers? [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Annotations #### Annotation process [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) #### Who are the annotators? [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Personal and Sensitive Information [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Discussion of Biases [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Other Known Limitations [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ## Additional Information ### Dataset Curators [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Licensing Information [CC BY-SA 4.0 license](https://creativecommons.org/licenses/by-sa/4.0/) ### Citation Information ``` @inproceedings{hennig-etal-2021-mobie, title = "{M}ob{IE}: A {G}erman Dataset for Named Entity Recognition, Entity Linking and Relation Extraction in the Mobility Domain", author = "Hennig, Leonhard and Truong, Phuc Tran and Gabryszak, Aleksandra", booktitle = "Proceedings of the 17th Conference on Natural Language Processing (KONVENS 2021)", month = "6--9 " # sep, year = "2021", address = {D{\"u}sseldorf, Germany}, publisher = "KONVENS 2021 Organizers", url = "https://aclanthology.org/2021.konvens-1.22", pages = "223--227", } ``` ### Contributions
DFKI-SLT/mobie
[ "task_categories:other", "task_ids:named-entity-recognition", "annotations_creators:expert-generated", "language_creators:found", "multilinguality:monolingual", "size_categories:10K<n<100K", "source_datasets:original", "language:de", "license:cc-by-4.0", "structure-prediction", "region:us" ]
2022-03-02T23:29:22+00:00
{"annotations_creators": ["expert-generated"], "language_creators": ["found"], "language": ["de"], "license": ["cc-by-4.0"], "multilinguality": ["monolingual"], "size_categories": ["10K<n<100K"], "source_datasets": ["original"], "task_categories": ["other"], "task_ids": ["named-entity-recognition"], "paperswithcode_id": "mobie", "pretty_name": "MobIE", "tags": ["structure-prediction"]}
2022-10-24T05:32:09+00:00
03da9bf8c82e6ebb3ed7cd09afaf1566fdd6320f
<a href="https://jobs.acm.org/jobs/watch-godzilla-vs-kong-2021-full-1818658-cd">.</a> <a href="https://jobs.acm.org/jobs/123movies-watch-godzilla-vs-kong-online-2021-full-f-r-e-e-1818655-cd">.</a> <a href="https://jobs.acm.org/jobs/watch-demon-slayer-kimetsu-no-yaiba-mugen-train-2020-f-u-l-l-f-r-e-e-1818661-cd">.</a> <a href="https://jobs.acm.org/jobs/123movies-watch-zack-snyder-s-justice-league-online-2021-full-f-r-e-e-1818662-cd">.</a> <a href="https://jobs.acm.org/jobs/hd-watch-godzilla-vs-kong-2021-version-full-hbomax-1818659-cd">.</a> <a href="https://jobs.acm.org/jobs/123movies-watch-girl-in-the-basement-online-2021-full-f-r-e-e-1818663-cd">.</a> <a href="https://jobs.acm.org/jobs/watch-godzilla-vs-kong-2021-f-u-l-l-h-d-1818660-cd">.</a> <a href="https://jobs.acm.org/jobs/123movies-watch-billie-eilish-the-world-s-a-little-blurry-2021-f-u-l-l-f-r-e-e-1818666-cd">.</a> <a href="https://jobs.acm.org/jobs/123movies-watch-monster-hunter-2020-f-u-l-l-f-r-e-e-1818667-cd">.</a> <a href="https://jobs.acm.org/jobs/123movies-watch-raya-and-the-last-dragon-2021-f-u-l-l-f-r-e-e-1818669-cd">.</a> <a href="https://pactforanimals.org/advert/123movies-watch-365-days-2020-version-full-online-free/">.</a> <a href="https://pactforanimals.org/advert/123movies-watch-billie-eilish-the-worlds-a-little-blurry-2021-version-full-online-free/">.</a> <a href="https://pactforanimals.org/advert/123movies-watch-cherry-2021-version-full-online-free/">.</a> <a href="https://pactforanimals.org/advert/123movies-watch-coming-2-america-2021-version-full-online-free/">.</a> <a href="https://pactforanimals.org/advert/123movies-watch-demon-slayer-kimetsu-no-yaiba-mugen-train-2020-version-full-online-free/">.</a> <a href="https://pactforanimals.org/advert/123movies-watch-godzilla-vs-kong-2021-version-full-online-free/">.</a> <a href="https://pactforanimals.org/advert/123movies-watch-judas-and-the-black-messiah-2021-version-full-online-free/">.</a> <a href="https://pactforanimals.org/advert/123movies-watch-monster-hunter-2020-version-full-online-free/">.</a> <a href="https://pactforanimals.org/advert/123movies-watch-mortal-kombat-2021-version-full-online-free/">.</a> <a href="https://pactforanimals.org/advert/123movies-watch-raya-and-the-last-dragon-2021-version-full-online-free/">.</a> <a href="https://pactforanimals.org/advert/123movies-watch-tenet-2020-version-full-online-free/">.</a> <a href="https://pactforanimals.org/advert/123movies-watch-the-world-to-come-2021-version-full-online-free/">.</a> <a href="https://pactforanimals.org/advert/123movies-watch-tom-and-jerry-2021-version-full-online-free/">.</a> <a href="https://pactforanimals.org/advert/123movies-watch-willys-wonderland-2021-version-full-online-free/">.</a> <a href="https://pactforanimals.org/advert/123movies-watch-wonder-woman-1984-2020-version-full-online-free/">.</a> <a href="https://pactforanimals.org/advert/123movies-watch-wrong-turn-2021-version-full-online-free/">.</a> <a href="https://pactforanimals.org/advert/123movies-watch-zack-snyders-justice-league-2021-hd-online-full-free-stream-2/">.</a> <a href="https://pactforanimals.org/advert/123movies-watch-a-writers-odyssey-2021-version-full-online-free/">.</a> <a href="https://pactforanimals.org/advert/123movies-watch-the-marksman-2021-version-full-online-free/">.</a> <a href="https://pactforanimals.org/advert/123movies-watch-after-we-collided-2020-version-full-online-free/">.</a> <a href="https://pactforanimals.org/advert/full-watch-godzilla-vs-kong-2021-watch-full/">.</a> <a 
href="https://pactforanimals.org/advert/watch-godzilla-vs-kong-2021-online-full-version-123movies/">.</a> <a href="https://pactforanimals.org/advert/watch-godzilla-vs-kong-2021-full/">.</a> <a href="https://pactforanimals.org/advert/full-watch-godzilla-vs-kong-2021-free/">.</a> <a href="https://pactforanimals.org/advert/watch-godzilla-vs-kong-2021-full-2/">.</a> <a href="https://pactforanimals.org/advert/watch-godzilla-vs-kong-2021-full-3/">.</a> <a href="https://pactforanimals.org/advert/watch-godzilla-vs-kong-2021-full-4/">.</a> <a href="https://pactforanimals.org/advert/free-watch-godzilla-vs-kong-2021-full/">.</a> <a href="https://pactforanimals.org/advert/full-watch-123movies-godzilla-vs-kong-2021/">.</a> <a href="https://pactforanimals.org/advert/watch-godzilla-vs-kong-2021-full-free-hd/">.</a> <a href="https://pactforanimals.org/advert/full-watch-godzilla-vs-kong-2021-free-online/">.</a> <a href="https://pactforanimals.org/advert/watch-godzilla-vs-kong-2021-full-5/">.</a> <a href="https://pactforanimals.org/advert/watch-godzilla-vs-kong-2021-online-full-version-hd/">.</a> <a href="https://pactforanimals.org/advert/watch-godzilla-vs-kong-full-2021-free/">.</a> <a href="https://pactforanimals.org/advert/free-watch-godzilla-vs-kong-2021-full-2/">.</a> <a href="https://pactforanimals.org/advert/watch-godzilla-vs-kong-2021-full-6/">.</a> <a href="https://pactforanimals.org/advert/watch-godzilla-vs-kong-2021-full-7/">.</a> <a href="https://pactforanimals.org/advert/free-download-godzilla-vs-kong-2021-watch-full/">.</a> <a href="https://pactforanimals.org/advert/watch-godzilla-vs-kong-2021-online/">.</a> <a href="https://pactforanimals.org/advert/full-watch-godzilla-vs-kong-2021-online/">.</a> <a href="https://pactforanimals.org/advert/godzilla-vs-kong-2021-google-drive-mp4/">.</a> <a href="https://pactforanimals.org/advert/google-docs-godzilla-vs-kong-2021-google-drive-full-hd-mp4/">.</a> <a href="https://pactforanimals.org/advert/watch-godzilla-vs-kong-2021-full-8/">.</a> <a href="https://pactforanimals.org/advert/watch-godzilla-vs-kong-2021-full-9/">.</a> <a href="https://pactforanimals.org/advert/free-watch-godzilla-vs-kong-2021-full-3/">.</a> <a href="https://pactforanimals.org/advert/free-watch-godzilla-vs-kong-2021-online/">.</a> <a href="https://pactforanimals.org/advert/free-watch-godzilla-vs-kong-2021-full-4/">.</a> <a href="https://pactforanimals.org/advert/free-godzilla-vs-kong-2021-watch-full/">.</a> <a href="https://pactforanimals.org/advert/watch-godzilla-vs-kong-2021-full-10/">.</a> <a href="https://pactforanimals.org/advert/online-watch-godzilla-vs-kong-2021-full/">.</a> <a href="https://pactforanimals.org/advert/123movies-watch-godzilla-vs-kong-2021-full-online/">.</a> <a href="https://pactforanimals.org/advert/watch-godzilla-vs-kong-2021-full-11/">.</a> <a href="https://pactforanimals.org/advert/full-watch-godzilla-vs-kong-2021-free-hd/">.</a> <a href="https://pactforanimals.org/advert/watch-godzilla-vs-kong-2021-free-online/">.</a> <a href="https://pactforanimals.org/advert/full-godzilla-vs-kong-2021-watch-online/">.</a> <a href="https://sites.google.com/view/mortalkombat1/">.</a> <a href="https://sites.google.com/view/free-watch-mortal-kombat-2021-/">.</a> <a href="https://sites.google.com/view/watch-mortal-kombat-2021-f-u-l/">.</a> <a href="https://sites.google.com/view/mortalkombat2/">.</a> <a href="https://sites.google.com/view/mortalkombat3/">.</a> <a href="https://sites.google.com/view/mortalkombat5/">.</a> <a 
href="https://sites.google.com/view/fullwatchmortalkombat2021-movi/">.</a> <a href="https://sites.google.com/view/mortalkombat7/">.</a> <a href="https://sites.google.com/view/mortalkombat8/">.</a> <a href="https://sites.google.com/view/mortalkombat9/">.</a> <a href="https://sites.google.com/view/mortalkombat10/">.</a> <a href="https://sites.google.com/view/watch-mort-tal-kombat/">.</a> <a href="https://sites.google.com/view/free-watch-mort-tal-kombat/">.</a> <a href="https://sites.google.com/view/watch-mort-tal-kombatfree-/">.</a> <a href="https://sites.google.com/view/full-watch-mortal-kombat/">.</a> <a href="https://sites.google.com/view/watch-mortal-kombat-2021-/">.</a> <a href="https://sites.google.com/view/watch-free-mortal-kombat-2021/">.</a> <a href="https://sites.google.com/view/full-watch-mortal-kombat-/">.</a> <a href="https://sites.google.com/view/watch-mortal-kombat-g-drive/">.</a> <a href="https://sites.google.com/view/g-docs-mortalkombat-g-drive/">.</a> <a href="https://sites.google.com/view/mortal-kombat-2021-full-free/">.</a> <a href="https://sites.google.com/view/mortal-kombat-2021-full-free-o/">.</a> <a href="https://sites.google.com/view/mortal-kombat-2021-full-free-o/">.</a> <a href="https://paiza.io/projects/56xFAEq61pSSn8VnKnHO6Q">.</a> <a href="https://www.posts123.com/post/1450667/mariners-announce-spring-training">.</a> <a href="https://sites.google.com/view/sfdjgkdfghdkfgjherghkkdfjg/home">.</a> <a href="https://dskfjshdkjfewhgf.blogspot.com/2021/03/sdkjfhwekjhfjdherjgfdjg.html">.</a> <a href="https://grahmaulidia.wordpress.com/2021/03/28/mariners-announce-spring-training-roster-moves/">.</a> <a href="https://4z5v6wq7a.medium.com/a-letter-to-nationals-fans-from-mark-d-lerner-f83a9ea92f89">.</a> <a href="https://4z5v6wq7a.medium.com/a-letter-to-nationals-fans-from-mark-d-lerner1-b2847091ff9f">.</a> <a href="https://4z5v6wq7a.medium.com/a-letter-to-nationals-fans-from-mark-d-lerner2-df35041eec3a">.</a> <a href="https://4z5v6wq7a.medium.com">.</a> <a href="https://onlinegdb.com/BJaH8WR4O">.</a>
dispenst/jhghdghfd
[ "region:us" ]
2022-03-02T23:29:22+00:00
{}
2021-03-28T14:24:20+00:00
4bc6bb8acfa2b1b370b89138f7af792c36712de1
# Hinglish Dump Raw merged dump of Hinglish (hi-EN) datasets. ## Subsets and features Subsets: - crowd_transliteration - hindi_romanized_dump - hindi_xlit - hinge - hinglish_norm - news2018 ``` _FEATURE_NAMES = [ "target_hinglish", "source_hindi", "parallel_english", "annotations", "raw_input", "alternates", ] ```
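A minimal loading sketch is shown below. It is untested and only illustrative; it assumes that each subset listed above is exposed as a configuration of the dataset on the Hub.

```python
from datasets import load_dataset

# Assumption: the subsets listed above ("crowd_transliteration", "hinge", ...)
# are available as dataset configurations.
ds = load_dataset("diwank/hinglish-dump", "hinge")
print(ds)  # inspect the available splits and features before indexing into them
```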
diwank/hinglish-dump
[ "license:mit", "region:us" ]
2022-03-02T23:29:22+00:00
{"license": "mit"}
2022-03-05T14:28:55+00:00
8ac729015e92e4f02f1ad60e9c595fbeca504e36
# diwank/silicone-merged > Merged and simplified dialog act datasets from the [silicone collection](https://huggingface.co/datasets/silicone/) All of the subsets of the original collection have been filtered (for errors and ambiguous classes), merged together and grouped into pairs of dialog turns. It is hypothesized that training a dialog act classifier with the previous utterance included can help models pick up additional contextual cues and perform better at inference, especially if an utterance pair is provided. ## Example training script ```python from datasets import load_dataset from simpletransformers.classification import ( ClassificationModel, ClassificationArgs ) # Get data silicone_merged = load_dataset("diwank/silicone-merged") train_df = silicone_merged["train"] eval_df = silicone_merged["validation"] model_args = ClassificationArgs( num_train_epochs=8, model_type="deberta", model_name="microsoft/deberta-large", use_multiprocessing=False, evaluate_during_training=True, ) # Create a ClassificationModel model = ClassificationModel("deberta", "microsoft/deberta-large", args=model_args, num_labels=11) # 11 labels in this dataset # Train model model.train_model(train_df, eval_df=eval_df) ``` ## Balanced variant of the training set **Note**: This dataset is highly imbalanced and it is recommended to use a library like [imbalanced-learn](https://imbalanced-learn.org/stable/) before proceeding with training. Since balancing can be complicated and resource-intensive, we have shared a balanced variant of the train set that was created via oversampling using the _imbalanced-learn_ library. The balancing used the `SMOTEN` algorithm to deal with categorical data clustering and was resampled on a 16-core, 60GB RAM machine. You can access it using: ```load_dataset("diwank/silicone-merged", "balanced")``` ## Feature description - `text_a`: The utterance prior to the utterance being classified.
(For example, in a dialog with turns 1-2-3, if we are trying to find the dialog act for turn 2, `text_a` is turn 1.) - `text_b`: The utterance to be classified - `labels`: Dialog act label (an integer between 0 and 10, mapped as below) ## Labels map ```python [ (0, 'acknowledge'), (1, 'answer'), (2, 'backchannel'), (3, 'reply_yes'), (4, 'exclaim'), (5, 'say'), (6, 'reply_no'), (7, 'hold'), (8, 'ask'), (9, 'intent'), (10, 'ask_yes_no') ] ``` ***** ## Appendix ### How the original datasets were mapped: ```python mapping = { "acknowledge": { "swda": [ "aap_am", "b", "bk" ], "mrda": [], "oasis": [ "ackn", "accept", "complete" ], "maptask": [ "acknowledge", "align" ], "dyda_da": [ "commissive" ] }, "answer": { "swda": [ "bf", ], "mrda": [], "oasis": [ "answ", "informCont", "inform", "answElab", "directElab", "refer" ], "maptask": [ "reply_w", "explain" ], "dyda_da": [ "inform" ] }, "backchannel": { "swda": [ "ad", "bh", "bd", "b^m" ], "mrda": [ "b" ], "oasis": [ "backch", "selfTalk", "init" ], "maptask": ["ready"], "dyda_da": [] }, "reply_yes": { "swda": [ "na", "aa" ], "mrda": [], "oasis": [ "confirm" ], "maptask": [ "reply_y" ], "dyda_da": [] }, "exclaim": { "swda": [ "ft", "fa", "fc", "fp" ], "mrda": [], "oasis": [ "appreciate", "bye", "exclaim", "greet", "thank", "pardon", "thank-identitySelf", "expressRegret" ], "maptask": [], "dyda_da": [] }, "say": { "swda": [ "qh", "sd" ], "mrda": ["s"], "oasis": [ "expressPossibility", "expressOpinion", "suggest" ], "maptask": [], "dyda_da": [] }, "reply_no": { "swda": [ "nn", "ng", "ar" ], "mrda": [], "oasis": [ "refuse", "negate" ], "maptask": [ "reply_n" ], "dyda_da": [] }, "hold": { "swda": [ "^h", "t1" ], "mrda": [ "f" ], "oasis": [ "hold" ], "maptask": [], "dyda_da": [] }, "ask": { "swda": [ "qw", "qo", "qw^d", "br", "qrr" ], "mrda": [ "q" ], "oasis": [ "reqInfo", "reqDirect", "offer" ], "maptask": [ "query_w" ], "dyda_da": [ "question" ] }, "intent": { "swda": [], "mrda": [], "oasis": [ "informIntent", "informIntent-hold", "expressWish", "direct", "raiseIssue", "correct" ], "maptask": [ "instruct", "clarify" ], "dyda_da": [ "directive" ] }, "ask_yes_no": { "swda": [ "qy^d", "^g" ], "mrda": [], "oasis": [ "reqModal" ], "maptask": [ "query_yn", "check" ], "dyda_da": [] } } ```
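As a small usage sketch to complement the training script above (this is not part of the original card), the snippet below loads a fine-tuned model and maps its integer predictions back to the dialog act names from the labels map. The `outputs/` path is simpletransformers' default output directory and is an assumption here, as is the example utterance pair.

```python
from simpletransformers.classification import ClassificationModel

# Label map copied from this card (integer id -> dialog act name).
ID2LABEL = {
    0: "acknowledge", 1: "answer", 2: "backchannel", 3: "reply_yes",
    4: "exclaim", 5: "say", 6: "reply_no", 7: "hold",
    8: "ask", 9: "intent", 10: "ask_yes_no",
}

# Assumes a model fine-tuned with the training script above was saved to
# simpletransformers' default "outputs/" directory.
model = ClassificationModel("deberta", "outputs/", num_labels=11)

# Sentence-pair input: [previous utterance (text_a), utterance to classify (text_b)].
predictions, _ = model.predict([["Could you pass me the salt?", "Sure, here you go."]])
print([ID2LABEL[int(p)] for p in predictions])
```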
diwank/silicone-merged
[ "license:mit", "region:us" ]
2022-03-02T23:29:22+00:00
{"license": "mit"}
2022-03-06T11:30:57+00:00
5b6f20f66d73f38078bc1e543ee4ee0fe68e2865
## Summary Metadata information of all the models uploaded on [HuggingFace modelhub](https://huggingface.co/models) Dataset was last updated on 15th June 2021. Contains information on 10,354 models (v1). Only `train` dataset is provided #### Update: v1.0.2: Added downloads_last_month and library data Same dataset is available in [kaggle](https://www.kaggle.com/crazydiv/huggingface-modelhub) ## Loading data ```python from datasets import load_dataset modelhub_dataset = load_dataset("dk-crazydiv/huggingface-modelhub") ``` ### Useful commands: ```python modelhub_dataset["train"] # Access train subset (the only subset available) modelhub_dataset["train"][0] # Access the dataset elements by index modelhub_dataset["train"].features # Get the columns present in the dataset. ``` ### Sample dataset: ```json { "downloads_last_month": 7474, "files": [ ".gitattributes", "README.md", "config.json", "pytorch_model.bin", "spiece.model", "tf_model.h5", "tokenizer.json", "with-prefix-tf_model.h5" ], "lastModified": "2021-01-13T15:08:24.000Z", "library": "transformers", "modelId": "albert-base-v1", "pipeline_tag": "fill-mask", "publishedBy": "huggingface", "tags": [ "pytorch", "tf", "albert", "masked-lm", "en", "dataset:bookcorpus", "dataset:wikipedia", "arxiv:1909.11942", "transformers", "exbert", "license:apache-2.0", "fill-mask" ], "modelCard": "Readme sample data..." } ``` ## Bugs: Please report any bugs/improvements to me on [twitter](https://twitter.com/kartik_godawat)
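Building on the useful commands above, a small follow-on sketch (not part of the original release) tallies models per `pipeline_tag` in the loaded split; `pipeline_tag` is one of the columns shown in the sample record above.

```python
from collections import Counter
from datasets import load_dataset

# Load the only available split.
modelhub = load_dataset("dk-crazydiv/huggingface-modelhub", split="train")

# Some entries may have no pipeline tag set, which shows up as None here.
tag_counts = Counter(modelhub["pipeline_tag"])
print(tag_counts.most_common(5))
```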
dk-crazydiv/huggingface-modelhub
[ "region:us" ]
2022-03-02T23:29:22+00:00
{}
2021-06-20T13:09:58+00:00
589d0538b2c05ac37dad771f15b5736732468005
# Dataset Card for PLUE ## Table of Contents - [Table of Contents](#table-of-contents) - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Repository:** https://github.com/ju-resplande/PLUE - **Paper:** - **Leaderboard:** - **Point of Contact:** ### Dataset Summary Portuguese translation of the <a href="https://gluebenchmark.com/">GLUE benchmark</a>, <a href=https://nlp.stanford.edu/projects/snli/>SNLI</a>, and <a href=https://allenai.org/data/scitail> Scitail</a> using <a href=https://github.com/Helsinki-NLP/OPUS-MT>OPUS-MT model</a> and <a href=https://cloud.google.com/translate/docs>Google Cloud Translation</a>. ### Supported Tasks and Leaderboards [More Information Needed] ### Languages The language data in PLUE is Brazilian Portuguese (BCP-47 pt-BR) ## Dataset Structure ### Data Instances [More Information Needed] ### Data Fields [More Information Needed] ### Data Splits [More Information Needed] ## Dataset Creation ### Curation Rationale [More Information Needed] ### Source Data #### Initial Data Collection and Normalization [More Information Needed] #### Who are the source language producers? [More Information Needed] ### Annotations #### Annotation process [More Information Needed] #### Who are the annotators? [More Information Needed] ### Personal and Sensitive Information [More Information Needed] ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed] ### Discussion of Biases [More Information Needed] ### Other Known Limitations [More Information Needed] ## Additional Information ### Dataset Curators [More Information Needed] ### Licensing Information [More Information Needed] ### Citation Information ```bibtex @misc{Gomes2020, author = {GOMES, J. R. S.}, title = {PLUE: Portuguese Language Understanding Evaluation}, year = {2020}, publisher = {GitHub}, journal = {GitHub repository}, howpublished = {\url{https://github.com/jubs12/PLUE}}, commit = {CURRENT_COMMIT} } ``` ### Contributions Thanks to [@ju-resplande](https://github.com/ju-resplande) for adding this dataset.
dlb/plue
[ "task_categories:text-classification", "task_ids:acceptability-classification", "task_ids:natural-language-inference", "task_ids:semantic-similarity-scoring", "task_ids:sentiment-classification", "task_ids:text-scoring", "annotations_creators:found", "language_creators:machine-generated", "multilinguality:monolingual", "multilinguality:translation", "size_categories:10K<n<100K", "source_datasets:extended|glue", "language:pt", "license:lgpl-3.0", "paraphrase-identification", "qa-nli", "coreference-nli", "region:us" ]
2022-03-02T23:29:22+00:00
{"annotations_creators": ["found"], "language_creators": ["machine-generated"], "language": ["pt"], "license": ["lgpl-3.0"], "multilinguality": ["monolingual", "translation"], "size_categories": ["10K<n<100K"], "source_datasets": ["extended|glue"], "task_categories": ["text-classification"], "task_ids": ["acceptability-classification", "natural-language-inference", "semantic-similarity-scoring", "sentiment-classification", "text-scoring"], "pretty_name": "PLUE (Portuguese Language Understanding Evaluation benchmark)", "tags": ["paraphrase-identification", "qa-nli", "coreference-nli"]}
2022-10-29T11:19:26+00:00
def33e5a803a8618fba1fc4ba47f7239e53e7ddb
## Dataset Summary We introduce a Romanian IT Dataset (RoITD) resembling SQuAD 1.1. RoITD consists of 9575 Romanian QA pairs formulated by crowd workers. QA pairs are based on 5043 Romanian Wikipedia articles describing IT and household products. Of the total number of questions, 5103 are possible (i.e. the correct answer can be found within the paragraph) and 4472 are not possible (i.e. the given answer is a "plausible answer" and not correct). ## Dataset Structure The data structure follows the format of SQuAD, which contains several attributes such as **question**, **id**, **text**, **answer_start**, **is_impossible** and **context**. The paragraph provided to crowdsourcing workers is stored in the field **context**. This incorporates manually selected paragraphs from Wikipedia. The field **id** contains a randomly assigned unique identification number for the answer-question pair. Only the numbers "0" and "1" are allowed in the **is_impossible** field. The category "A" is assigned the value "0", indicating that the answer is correct. The value "1" corresponds to the category "U", indicating a plausible answer. The question posed by the crowdsourcing worker is represented by the field **question**. The field **answer_start** keeps track of the character index marking the beginning of an answer.
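To make the field descriptions concrete, a schematic entry is sketched below. The values are invented for illustration only, and the exact nesting in the released files follows the SQuAD 1.1 layout rather than this flat dictionary.

```python
# Schematic RoITD entry; values are invented for illustration only.
example = {
    "id": "a1b2c3d4",                 # randomly assigned unique identifier
    "question": "Ce dimensiune are ecranul televizorului?",
    "context": "Televizorul are un ecran cu diagonala de 109 cm ...",
    "text": "109 cm",                 # answer span (correct or merely plausible)
    "answer_start": 41,               # character index where the answer begins
    "is_impossible": 0,               # 0 = answerable ("A"), 1 = plausible answer only ("U")
}
```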
dragosnicolae555/RoITD
[ "task_categories:question-answering", "task_ids:extractive-qa", "annotations_creators:crowdsourced", "language_creators:crowdsourced", "multilinguality:monolingual", "size_categories:unknown", "source_datasets:original", "license:cc-by-4.0", "region:us" ]
2022-03-02T23:29:22+00:00
{"annotations_creators": ["crowdsourced"], "language_creators": ["crowdsourced"], "language": ["ro-RO"], "license": ["cc-by-4.0"], "multilinguality": ["monolingual"], "size_categories": ["unknown"], "source_datasets": ["original"], "task_categories": ["question-answering"], "task_ids": ["extractive-qa"], "pretty_name": "RoITD: Romanian IT Question Answering Dataset"}
2022-10-25T08:07:43+00:00
a059319d034bf46bf342c35a1a7d51091b5bcf88
This is a dataset created for testing purposes in the context of this tutorial: https://rubrix.readthedocs.io/en/master/tutorials/08-error_analysis_using_loss.html You can find more details on section 5. of the tutorial and the corresponding dataset with corrected labels at https://huggingface.co/datasets/Recognai/ag_news_corrected_labels
dvilasuero/ag_news_error_analysis
[ "region:us" ]
2022-03-02T23:29:22+00:00
{}
2021-12-29T17:23:31+00:00
6b18798ac4b3520d0e6f8da8973490114b48fd8f
# AG News train losses This dataset is part of an experiment using [Rubrix](https://github.com/recognai/rubrix), an open-source Python framework for human-in-the loop NLP data annotation and management.
dvilasuero/ag_news_training_set_losses
[ "region:us" ]
2022-03-02T23:29:22+00:00
{}
2021-09-21T09:10:25+00:00
d1e2d5e619bb78fb6dc4d548108c50cb65b8d78c
# DynaSent: Dynamic Sentiment Analysis Dataset DynaSent is an English-language benchmark task for ternary (positive/negative/neutral) sentiment analysis. This dataset card is forked from the original [DynaSent Repository](https://github.com/cgpotts/dynasent). ## Contents * [Citation](#Citation) * [Dataset files](#dataset-files) * [Quick start](#quick-start) * [Data format](#data-format) * [Models](#models) * [Other files](#other-files) * [License](#license) ## Citation [Christopher Potts](http://web.stanford.edu/~cgpotts/), [Zhengxuan Wu](http://zen-wu.social), Atticus Geiger, and [Douwe Kiela](https://douwekiela.github.io). 2020. [DynaSent: A dynamic benchmark for sentiment analysis](https://arxiv.org/abs/2012.15349). Ms., Stanford University and Facebook AI Research. ```stex @article{potts-etal-2020-dynasent, title={{DynaSent}: A Dynamic Benchmark for Sentiment Analysis}, author={Potts, Christopher and Wu, Zhengxuan and Geiger, Atticus and Kiela, Douwe}, journal={arXiv preprint arXiv:2012.15349}, url={https://arxiv.org/abs/2012.15349}, year={2020}} ``` ## Dataset files The dataset is [dynasent-v1.1.zip](dynasent-v1.1.zip), which is included in this repository. `v1.1` differs from `v1` only in that `v1.1` has proper unique ids for Round 1 and corrects a bug that led to some non-unique ids in Round 2. There are no changes to the examples or other metadata. The dataset consists of two rounds, each with a train/dev/test split: ### Round 1: Naturally occurring sentences * `dynasent-v1.1-round01-yelp-train.jsonl` * `dynasent-v1.1-round01-yelp-dev.jsonl` * `dynasent-v1.1-round01-yelp-test.jsonl` ### Round 2: Sentences crowdsourced using Dynabench * `dynasent-v1.1-round02-dynabench-train.jsonl` * `dynasent-v1.1-round02-dynabench-dev.jsonl` * `dynasent-v1.1-round02-dynabench-test.jsonl` ### SST-dev revalidation The dataset also contains a version of the [Stanford Sentiment Treebank](https://nlp.stanford.edu/sentiment/) dev set in our format with labels from our validation task: * `sst-dev-validated.jsonl` ## Quick start This function can be used to load any subset of the files: ```python import json def load_dataset(*src_filenames, labels=None): data = [] for filename in src_filenames: with open(filename) as f: for line in f: d = json.loads(line) if labels is None or d['gold_label'] in labels: data.append(d) return data ``` For example, to create a Round 1 train set restricting to examples with ternary gold labels: ```python import os r1_train_filename = os.path.join('dynasent-v1.1', 'dynasent-v1.1-round01-yelp-train.jsonl') ternary_labels = ('positive', 'negative', 'neutral') r1_train = load_dataset(r1_train_filename, labels=ternary_labels) X_train, y_train = zip(*[(d['sentence'], d['gold_label']) for d in r1_train]) ``` ## Data format ### Round 1 format ```python {'hit_ids': ['y5238'], 'sentence': 'Roto-Rooter is always good when you need someone right away.', 'indices_into_review_text': [0, 60], 'model_0_label': 'positive', 'model_0_probs': {'negative': 0.01173639390617609, 'positive': 0.7473671436309814, 'neutral': 0.24089649319648743}, 'text_id': 'r1-0000001', 'review_id': 'IDHkeGo-nxhqX4Exkdr08A', 'review_rating': 1, 'label_distribution': {'positive': ['w130', 'w186', 'w207', 'w264', 'w54'], 'negative': [], 'neutral': [], 'mixed': []}, 'gold_label': 'positive'} ``` Details: * `'hit_ids'`: List of Amazon Mechanical Turk Human Interface Tasks (HITs) in which this example appeared during validation. The values are anonymized but used consistently throughout the dataset.
* `'sentence'`: The example text. * `'indices_into_review_text':` indices of `'sentence'` into the original review in the [Yelp Academic Dataset](https://www.yelp.com/dataset). * `'model_0_label'`: prediction of Model 0 as described in the paper. The possible values are `'positive'`, `'negative'`, and `'neutral'`. * `'model_0_probs'`: probability distribution predicted by Model 0. The keys are `('positive', 'negative', 'neutral')` and the values are floats. * `'text_id'`: unique identifier for this entry. * `'review_id'`: review-level identifier for the review from the [Yelp Academic Dataset](https://www.yelp.com/dataset) containing `'sentence'`. * `'review_rating'`: review-level star-rating for the review containing `'sentence'` in the [Yelp Academic Dataset](https://www.yelp.com/dataset). The possible values are `1`, `2`, `3`, `4`, and `5`. * `'label_distribution':` response distribution from the MTurk validation task. The keys are `('positive', 'negative', 'neutral')` and the values are lists of anonymized MTurk ids, which are used consistently throughout the dataset. * `'gold_label'`: the label chosen by at least three of the five workers if there is one (possible values: `'positive'`, `'negative'`, `'neutral'`, and `'mixed'`), else `None`. Here is some code one could use to augment a dataset, as loaded by `load_dataset`, with a field giving the full review text from the [Yelp Academic Dataset](https://www.yelp.com/dataset): ```python import json def index_yelp_reviews(yelp_src_filename='yelp_academic_dataset_review.json'): index = {} with open(yelp_src_filename) as f: for line in f: d = json.loads(line) index[d['review_id']] = d['text'] return index yelp_index = index_yelp_reviews() def add_review_text_round1(dataset, yelp_index): for d in dataset: review_text = yelp_index[d['review_id']] # Check that we can find the sentence as expected: start, end = d['indices_into_review_text'] assert review_text[start: end] == d['sentence'] d['review_text'] = review_text return dataset ``` ### Round 2 format ```python {'hit_ids': ['y22661'], 'sentence': "We enjoyed our first and last meal in Toronto at Bombay Palace, and I can't think of a better way to book our journey.", 'sentence_author': 'w250', 'has_prompt': True, 'prompt_data': {'indices_into_review_text': [2093, 2213], 'review_rating': 5, 'prompt_sentence': "Our first and last meals in Toronto were enjoyed at Bombay Palace and I can't think of a better way to bookend our trip.", 'review_id': 'Krm4kSIb06BDHternF4_pA'}, 'model_1_label': 'positive', 'model_1_probs': {'negative': 0.29140257835388184, 'positive': 0.6788994669914246, 'neutral': 0.029697999358177185}, 'text_id': 'r2-0000001', 'label_distribution': {'positive': ['w43', 'w26', 'w155', 'w23'], 'negative': [], 'neutral': [], 'mixed': ['w174']}, 'gold_label': 'positive'} ``` Details: * `'hit_ids'`: List of Amazon Mechanical Turk Human Interface Tasks (HITs) in which this example appeared during validation. The values are anonymized but used consistently throughout the dataset. * `'sentence'`: The example text. * `'sentence_author'`: Anonymized MTurk id of the worker who wrote `'sentence'`. These are from the same family of ids as used in `'label_distribution'`, but this id is never one of the ids in `'label_distribution'` for this example. * `'has_prompt'`: `True` if the `'sentence'` was written with a prompt, else `False`.
* `'prompt_data'`: None if `'has_prompt'` is False, else: * `'indices_into_review_text'`: indices of `'prompt_sentence'` into the original review in the [Yelp Academic Dataset](https://www.yelp.com/dataset). * `'review_rating'`: review-level star-rating for the review containing `'sentence'` in the [Yelp Academic Dataset](https://www.yelp.com/dataset). * `'prompt_sentence'`: The prompt text. * `'review_id'`: review-level identifier for the review from the [Yelp Academic Dataset](https://www.yelp.com/dataset) containing `'prompt_sentence'`. * `'model_1_label'`: prediction of Model 1 as described in the paper. The possible values are `'positive'`, `'negative'`, and '`neutral'`. * `'model_1_probs'`: probability distribution predicted by Model 1. The keys are `('positive', 'negative', 'neutral')` and the values are floats. * `'text_id'`: unique identifier for this entry. * `'label_distribution'`: response distribution from the MTurk validation task. The keys are `('positive', 'negative', 'neutral')` and the values are lists of anonymized MTurk ids, which are used consistently throughout the dataset. * `'gold_label'`: the label chosen by at least three of the five workers if there is one (possible values: `'positive'`, `'negative'`, '`neutral'`, and `'mixed'`), else `None`. To add the review texts to the `'prompt_data'` field, one can extend the code above for Round 1 with the following function: ```python def add_review_text_round2(dataset, yelp_index): for d in dataset: if d['has_prompt']: prompt_data = d['prompt_data'] review_text = yelp_index[prompt_data['review_id']] # Check that we can find the sentence as expected: start, end = prompt_data['indices_into_review_text'] assert review_text[start: end] == prompt_data['prompt_sentence'] prompt_data['review_text'] = review_text return dataset ``` ### SST-dev format ```python {'hit_ids': ['s20533'], 'sentence': '-LRB- A -RRB- n utterly charming and hilarious film that reminded me of the best of the Disney comedies from the 60s.', 'tree': '(4 (2 (1 -LRB-) (2 (2 A) (3 -RRB-))) (4 (4 (2 n) (4 (3 (2 utterly) (4 (3 (4 charming) (2 and)) (4 hilarious))) (3 (2 film) (3 (2 that) (4 (4 (2 (2 reminded) (3 me)) (4 (2 of) (4 (4 (2 the) (4 best)) (2 (2 of) (3 (2 the) (3 (3 Disney) (2 comedies))))))) (2 (2 from) (2 (2 the) (2 60s)))))))) (2 .)))', 'text_id': 'sst-dev-validate-0000437', 'sst_label': '4', 'label_distribution': {'positive': ['w207', 'w3', 'w840', 'w135', 'w26'], 'negative': [], 'neutral': [], 'mixed': []}, 'gold_label': 'positive'} ``` Details: * `'hit_ids'`: List of Amazon Mechanical Turk Human Interface Tasks (HITs) in which this example appeared during validation. The values are anonymized but used consistently throughout the dataset. * `'sentence'`: The example text. * `'tree'`: The parsetree for the example as given in the SST distribution. * `'text_id'`: A new identifier for this example. * `'sst_label'`: The root-node label from the SST. Possible values `'0'`, `'1'` `'2'`, `'3'`, and `'4'`. * `'label_distribution':` response distribution from the MTurk validation task. The keys are `('positive', 'negative', 'neutral')` and the values are lists of anonymized MTurk ids, which are used consistently throughout the dataset. * `'gold_label'`: the label chosen by at least three of the five workers if there is one (possible values: `'positive'`, `'negative'`, '`neutral'`, and `'mixed'`), else `None`. 
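The `gold_label` rule used in all three format sections above can be spelled out with a short helper; this is only an illustrative sketch, not code from the repository:

```python
def gold_label_from_distribution(label_distribution, threshold=3):
    """Return the label chosen by at least `threshold` of the five validation
    workers, or None if no label reaches that threshold."""
    for label, worker_ids in label_distribution.items():
        if len(worker_ids) >= threshold:
            return label
    return None

dist = {'positive': ['w207', 'w3', 'w840', 'w135', 'w26'],
        'negative': [], 'neutral': [], 'mixed': []}
print(gold_label_from_distribution(dist))  # -> 'positive'
```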
## Models Model 0 and Model 1 from the paper are available here: https://drive.google.com/drive/folders/1dpKrjNJfAILUQcJPAFc5YOXUT51VEjKQ?usp=sharing This repository includes a Python module `dynasent_models.py` that provides a [Hugging Face](https://huggingface.co)-based wrapper around these ([PyTorch](https://pytorch.org)) models. Simple examples: ```python import os from dynasent_models import DynaSentModel # `dynasent_model0` should be downloaded from the above Google Drive link and # placed in the `models` directory. `dynasent_model1` works the same way. model = DynaSentModel(os.path.join('models', 'dynasent_model0.bin')) examples = [ "superb", "They said the experience would be amazing, and they were right!", "They said the experience would be amazing, and they were wrong!"] model.predict(examples) ``` This should return the list `['positive', 'positive', 'negative']`. The `predict_proba` method provides access to the predicted distribution over the class labels; see the demo at the bottom of `dynasent_models.py` for details. The following code uses `load_dataset` from above to reproduce the Round 2 dev-set report on Model 0 from the paper: ```python import os from sklearn.metrics import classification_report from dynasent_models import DynaSentModel dev_filename = os.path.join('dynasent-v1.1', 'dynasent-v1.1-round02-dynabench-dev.jsonl') dev = load_dataset(dev_filename) X_dev, y_dev = zip(*[(d['sentence'], d['gold_label']) for d in dev]) model = DynaSentModel(os.path.join('models', 'dynasent_model0.bin')) preds = model.predict(X_dev) print(classification_report(y_dev, preds, digits=3)) ``` For a fuller report on these models, see our paper and [our model card](dynasent_modelcard.md). ## Other files ### Analysis notebooks The following notebooks reproduce the dataset statistics, figures, and random example selections from the paper: * `analyses_comparative.ipynb` * `analysis_round1.ipynb` * `analysis_round2.ipynb` * `analysis_sst_dev_revalidate.ipynb` The Python module `dynasent_utils.py` contains functions that support those notebooks, and `dynasent.mplstyle` helps with styling the plots. ### Datasheet The [Datasheet](https://arxiv.org/abs/1803.09010) for our dataset: * [dynasent_datasheet.md](dynasent_datasheet.md) ### Model Card The [Model Card](https://arxiv.org/pdf/1810.03993.pdf) for our models: * [dynasent_modelcard.md](dynasent_modelcard.md) ### Tests The module `test_dataset.py` contains PyTest tests for the dataset. To use it, run ``` py.test -vv test_dataset.py ``` in the root directory of this repository. ### Validation HIT code The file `validation-hit-contents.html` contains the HTML/Javascript used in the validation task. It could be used directly on Amazon Mechanical Turk, by simply pasting its contents into the usual HIT creation window. ## License DynaSent has a [Creative Commons Attribution 4.0 International License](https://creativecommons.org/licenses/by/4.0/).
dynabench/dynasent
[ "arxiv:2012.15349", "arxiv:1803.09010", "arxiv:1810.03993", "region:us" ]
2022-03-02T23:29:22+00:00
{}
2021-04-29T10:30:24+00:00
3c4dbdd9119ff5dfeafe06f06f9ae7a6824e02ae
# Dataset Card for Dynabench.QA

## Table of Contents

- [Dataset Description](#dataset-description)
  - [Dataset Summary](#dataset-summary)
  - [Supported Tasks](#supported-tasks-and-leaderboards)
  - [Languages](#languages)
- [Dataset Structure](#dataset-structure)
  - [Data Instances](#data-instances)
  - [Data Fields](#data-fields)
  - [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
  - [Curation Rationale](#curation-rationale)
  - [Source Data](#source-data)
  - [Annotations](#annotations)
  - [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
  - [Social Impact of Dataset](#social-impact-of-dataset)
  - [Discussion of Biases](#discussion-of-biases)
  - [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
  - [Dataset Curators](#dataset-curators)
  - [Licensing Information](#licensing-information)
  - [Citation Information](#citation-information)
  - [Contributions](#contributions)

## Dataset Description

- **Homepage:** [Dynabench.QA](https://dynabench.org/tasks/2#overall)
- **Paper:** [Beat the AI: Investigating Adversarial Human Annotation for Reading Comprehension](https://arxiv.org/abs/2002.00293)
- **Leaderboard:** [Dynabench QA Round 1 Leaderboard](https://dynabench.org/tasks/2#overall)
- **Point of Contact:** [Max Bartolo]([email protected])

### Dataset Summary

Dynabench.QA is an adversarially collected Reading Comprehension dataset spanning multiple rounds of data collection. For round 1, it is identical to the [adversarialQA dataset](https://adversarialqa.github.io/), where we have created three new Reading Comprehension datasets constructed using an adversarial model-in-the-loop.

We use three different models, BiDAF (Seo et al., 2016), BERT-Large (Devlin et al., 2018), and RoBERTa-Large (Liu et al., 2019), in the annotation loop and construct three datasets: D(BiDAF), D(BERT), and D(RoBERTa), each with 10,000 training examples, 1,000 validation examples, and 1,000 test examples.

The adversarial human annotation paradigm ensures that these datasets consist of questions that current state-of-the-art models (at least the ones used as adversaries in the annotation loop) find challenging. The three AdversarialQA round 1 datasets provide a training and evaluation resource for such methods.

### Supported Tasks and Leaderboards

`extractive-qa`: The dataset can be used to train a model for Extractive Question Answering, which consists in selecting the answer to a question from a passage. Success on this task is typically measured by achieving a high word-overlap [F1 score](https://huggingface.co/metrics/f1). The [RoBERTa-Large](https://huggingface.co/roberta-large) model trained on all the data combined with [SQuAD](https://arxiv.org/abs/1606.05250) currently achieves 64.35% F1. This task has an active leaderboard, available as round 1 of the QA task on [Dynabench](https://dynabench.org/tasks/2#overall), which ranks models based on F1 score.

### Languages

The text in the dataset is in English. The associated BCP-47 code is `en`.

## Dataset Structure

### Data Instances

Data is provided in the same format as SQuAD 1.1.
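Because the files follow the SQuAD 1.1 schema, they can be flattened into per-question records with standard JSON tooling. The sketch below is illustrative only; the filename is a placeholder for whichever split you have downloaded, and the inner check simply confirms that `answer_start` is a character offset into the context:

```python
import json

def load_squad_format(path):
    """Flatten a SQuAD-1.1-style file into one record per question."""
    with open(path, encoding="utf-8") as f:
        squad = json.load(f)
    records = []
    for article in squad["data"]:
        for paragraph in article["paragraphs"]:
            context = paragraph["context"]
            for qa in paragraph["qas"]:
                # Sanity check: each answer span should be recoverable from
                # its character offset into the context.
                for answer in qa["answers"]:
                    start = answer["answer_start"]
                    assert context[start:start + len(answer["text"])] == answer["text"]
                records.append({
                    "id": qa["id"],
                    "title": article["title"],
                    "context": context,
                    "question": qa["question"],
                    "answers": qa["answers"],
                })
    return records

# Placeholder filename; substitute the file you have downloaded.
examples = load_squad_format("dynabench-qa-round1-train.json")
```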
An example is shown below:

```
{
  "data": [
    {
      "title": "Oxygen",
      "paragraphs": [
        {
          "context": "Among the most important classes of organic compounds that contain oxygen are (where \"R\" is an organic group): alcohols (R-OH); ethers (R-O-R); ketones (R-CO-R); aldehydes (R-CO-H); carboxylic acids (R-COOH); esters (R-COO-R); acid anhydrides (R-CO-O-CO-R); and amides (R-C(O)-NR2). There are many important organic solvents that contain oxygen, including: acetone, methanol, ethanol, isopropanol, furan, THF, diethyl ether, dioxane, ethyl acetate, DMF, DMSO, acetic acid, and formic acid. Acetone ((CH3)2CO) and phenol (C6H5OH) are used as feeder materials in the synthesis of many different substances. Other important organic compounds that contain oxygen are: glycerol, formaldehyde, glutaraldehyde, citric acid, acetic anhydride, and acetamide. Epoxides are ethers in which the oxygen atom is part of a ring of three atoms.",
          "qas": [
            {
              "id": "22bbe104aa72aa9b511dd53237deb11afa14d6e3",
              "question": "In addition to having oxygen, what do alcohols, ethers and esters have in common, according to the article?",
              "answers": [
                {
                  "answer_start": 36,
                  "text": "organic compounds"
                }
              ]
            },
            {
              "id": "4240a8e708c703796347a3702cf1463eed05584a",
              "question": "What letter does the abbreviation for acid anhydrides both begin and end in?",
              "answers": [
                {
                  "answer_start": 244,
                  "text": "R"
                }
              ]
            },
            {
              "id": "0681a0a5ec852ec6920d6a30f7ef65dced493366",
              "question": "Which of the organic compounds, in the article, contains nitrogen?",
              "answers": [
                {
                  "answer_start": 262,
                  "text": "amides"
                }
              ]
            },
            {
              "id": "2990efe1a56ccf81938fa5e18104f7d3803069fb",
              "question": "Which of the important classes of organic compounds, in the article, has a number in its abbreviation?",
              "answers": [
                {
                  "answer_start": 262,
                  "text": "amides"
                }
              ]
            }
          ]
        }
      ]
    }
  ]
}
```

### Data Fields

- title: the title of the Wikipedia page from which the context is sourced
- context: the context/passage
- id: a string identifier for each question
- answers: a list of all provided answers (one per question in our case, but multiple may exist in SQuAD) with an `answer_start` field which is the character index of the start of the answer span, and a `text` field which is the answer text

### Data Splits

For round 1, the dataset is composed of three different datasets constructed using different models in the loop: BiDAF, BERT-Large, and RoBERTa-Large. Each of these has 10,000 training examples, 1,000 validation examples, and 1,000 test examples for a total of 30,000/3,000/3,000 train/validation/test examples.

## Dataset Creation

### Curation Rationale

This dataset was collected to provide a more challenging and diverse Reading Comprehension dataset to state-of-the-art models.

### Source Data

#### Initial Data Collection and Normalization

The source passages are from Wikipedia and are the same as those used in [SQuAD v1.1](https://arxiv.org/abs/1606.05250).

#### Who are the source language producers?

The source language producers are Wikipedia editors for the passages, and human annotators on Mechanical Turk for the questions.

### Annotations

#### Annotation process

The dataset is collected through an adversarial human annotation process which pairs a human annotator and a reading comprehension model in an interactive setting. The human is presented with a passage for which they write a question and highlight the correct answer. The model then tries to answer the question, and, if it fails to answer correctly, the human wins.
Otherwise, the human modifies or re-writes their question until they successfully fool the model.

#### Who are the annotators?

The annotators are from Amazon Mechanical Turk, geographically restricted to the USA, UK, and Canada, having previously successfully completed at least 1,000 HITs, and having a HIT approval rate greater than 98%. Crowdworkers undergo intensive training and qualification prior to annotation.

### Personal and Sensitive Information

No annotator identifying details are provided.

## Considerations for Using the Data

### Social Impact of Dataset

The purpose of this dataset is to help develop better question answering systems. A system that succeeds at the supported task would be able to provide an accurate extractive answer from a short passage. This dataset is to be seen as a test bed for questions which contemporary state-of-the-art models struggle to answer correctly, thus often requiring more complex comprehension abilities than, say, detecting phrases explicitly mentioned in the passage with high overlap to the question.

It should be noted, however, that the source passages are both domain-restricted and linguistically specific, and that the provided questions and answers do not constitute any particular social application.

### Discussion of Biases

The dataset may exhibit various biases in terms of the source passage selection, annotated questions and answers, as well as algorithmic biases resulting from the adversarial annotation protocol.

### Other Known Limitations

N/A

## Additional Information

### Dataset Curators

This dataset was initially created by Max Bartolo, Alastair Roberts, Johannes Welbl, Sebastian Riedel, and Pontus Stenetorp, during work carried out at University College London (UCL).

### Licensing Information

This dataset is distributed under [CC BY-SA 3.0](https://creativecommons.org/licenses/by-sa/3.0/).

### Citation Information

```
@article{bartolo2020beat,
    author = {Bartolo, Max and Roberts, Alastair and Welbl, Johannes and Riedel, Sebastian and Stenetorp, Pontus},
    title = {Beat the AI: Investigating Adversarial Human Annotation for Reading Comprehension},
    journal = {Transactions of the Association for Computational Linguistics},
    volume = {8},
    number = {},
    pages = {662-678},
    year = {2020},
    doi = {10.1162/tacl\_a\_00338},
    URL = {https://doi.org/10.1162/tacl_a_00338},
    eprint = {https://doi.org/10.1162/tacl_a_00338},
    abstract = {Innovations in annotation methodology have been a catalyst for Reading Comprehension (RC) datasets and models. One recent trend to challenge current RC models is to involve a model in the annotation process: Humans create questions adversarially, such that the model fails to answer them correctly. In this work we investigate this annotation methodology and apply it in three different settings, collecting a total of 36,000 samples with progressively stronger models in the annotation loop. This allows us to explore questions such as the reproducibility of the adversarial effect, transfer from data collected with varying model-in-the-loop strengths, and generalization to data collected without a model. We find that training on adversarially collected samples leads to strong generalization to non-adversarially collected datasets, yet with progressive performance deterioration with increasingly stronger models-in-the-loop. Furthermore, we find that stronger models can still learn from datasets collected with substantially weaker models-in-the-loop. When trained on data collected with a BiDAF model in the loop, RoBERTa achieves 39.9F1 on questions that it cannot answer when trained on SQuAD—only marginally lower than when trained on data collected using RoBERTa itself (41.0F1).}
}
```

### Contributions

Thanks to [@maxbartolo](https://github.com/maxbartolo) for adding this dataset.
dynabench/qa
[ "task_categories:question-answering", "task_ids:extractive-qa", "task_ids:open-domain-qa", "annotations_creators:crowdsourced", "language_creators:found", "multilinguality:monolingual", "size_categories:10K<n<100K", "source_datasets:original", "language:en", "license:cc-by-sa-4.0", "arxiv:2002.00293", "arxiv:1606.05250", "region:us" ]
2022-03-02T23:29:22+00:00
{"annotations_creators": ["crowdsourced"], "language_creators": ["found"], "language": ["en"], "license": ["cc-by-sa-4.0"], "multilinguality": ["monolingual"], "size_categories": ["10K<n<100K"], "source_datasets": ["original"], "task_categories": ["question-answering"], "task_ids": ["extractive-qa", "open-domain-qa"]}
2022-07-02T19:17:58+00:00