---
license: openrail++
task_categories:
- text-classification
language:
- ru
---
ParaDetox: Detoxification with Parallel Data (Russian). Content Task Results
This repository contains the Content Task annotations from the Russian ParaDetox dataset collection pipeline.
ParaDetox Collection Pipeline
The ParaDetox dataset was collected via the Yandex.Toloka crowdsourcing platform in three steps:
- Task 1: Generation of Paraphrases: The first crowdsourcing task asks users to eliminate toxicity in a given sentence while keeping the content.
- Task 2: Content Preservation Check: We show users the generated paraphrases along with their original variants and ask them to indicate if they have close meanings.
- Task 3: Toxicity Check: Finally, we check if the workers succeeded in removing toxicity.
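The per-pair confidence used to filter this data can be thought of as the share of annotators who agree on the majority label. The sketch below is illustrative only: the function name and vote format are assumptions, and the actual pipeline relies on Toloka's own answer-aggregation machinery rather than this simple majority vote.

```python
from collections import Counter

def aggregate_votes(votes):
    """Return the majority label and its agreement share for one pair.

    `votes` is a list of worker answers, e.g. ["yes", "yes", "no"].
    Illustrative sketch only; not the actual Toloka aggregation.
    """
    counts = Counter(votes)
    label, n = counts.most_common(1)[0]
    return label, n / len(votes)

# Three of four workers say the paraphrase preserves the content:
label, conf = aggregate_votes(["yes", "yes", "yes", "no"])
# label == "yes", conf == 0.75
```

A pair such as this one, with 75% agreement, would fall below a 90% confidence threshold and be excluded.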
Specifically, this repository contains the results of Task 2: Content Preservation Check. Only samples with annotation confidence >= 90% are included. In each pair, one text is toxic and the other is its candidate non-toxic paraphrase. In total, the dataset contains 10,975 pairs, of which a minor part (2,812 pairs) are negative examples, i.e. pairs judged not to preserve the content.
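Since the data is distributed as a CSV of text pairs with a content-preservation label, it can be loaded with pandas and split into positive and negative pairs. The column names below are assumptions for illustration; check the actual CSV header of this dataset before use.

```python
import io

import pandas as pd

# A tiny in-memory stand-in for the dataset CSV. Column names
# (toxic_comment, civil_comment, label) are hypothetical.
sample = io.StringIO(
    "toxic_comment,civil_comment,label\n"
    "toxic text a,polite paraphrase a,1\n"
    "toxic text b,unrelated text b,0\n"
)
df = pd.read_csv(sample)

# Positive pairs preserve the content (label == 1); negative
# pairs (label == 0) failed the content preservation check.
positives = df[df["label"] == 1]
negatives = df[df["label"] == 0]
print(len(df), len(positives), len(negatives))
```

For the real file, replace the in-memory buffer with the path to the dataset CSV.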
Citation
@inproceedings{logacheva-etal-2022-study,
title = "A Study on Manual and Automatic Evaluation for Text Style Transfer: The Case of Detoxification",
author = "Logacheva, Varvara and
Dementieva, Daryna and
Krotova, Irina and
Fenogenova, Alena and
Nikishina, Irina and
Shavrina, Tatiana and
Panchenko, Alexander",
booktitle = "Proceedings of the 2nd Workshop on Human Evaluation of NLP Systems (HumEval)",
month = may,
year = "2022",
address = "Dublin, Ireland",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2022.humeval-1.8",
doi = "10.18653/v1/2022.humeval-1.8",
pages = "90--101",
abstract = "It is often difficult to reliably evaluate models which generate text. Among them, text style transfer is particularly difficult to evaluate, because its success depends on a number of parameters. We conduct an evaluation of a large number of models on a detoxification task. We explore the relations between the manual and automatic metrics and find that there is only weak correlation between them, which is dependent on the type of model which generated text. Automatic metrics tend to be less reliable for better-performing models. However, our findings suggest that ChrF and BertScore metrics can be used as a proxy for human evaluation of text detoxification to some extent.",
}
Contacts
For any questions, please contact: Daryna Dementieva ([email protected])