---
license: afl-3.0
task_categories:
- text-classification
language:
- ru
---

# ParaDetox: Detoxification with Parallel Data (Russian). Content Task Results

This repository contains the **Content Task** markup from the [Russian ParaDetox](https://huggingface.co/datasets/s-nlp/ru_paradetox) dataset collection pipeline.
12 |
+
|
13 |
+
## ParaDetox Collection Pipeline
|
14 |
+
|
15 |
+
The ParaDetox Dataset collection was done via [Yandex.Toloka](https://toloka.yandex.com/) crowdsource platform. The collection was done in three steps:
|
16 |
+
* *Task 1:* **Generation of Paraphrases**: The first crowdsourcing task asks users to eliminate toxicity in a given sentence while keeping the content.
|
17 |
+
* *Task 2:* **Content Preservation Check**: We show users the generated paraphrases along with their original variants and ask them to indicate if they have close meanings.
|
18 |
+
* *Task 3:* **Toxicity Check**: Finally, we check if the workers succeeded in removing toxicity.
|
19 |
+
|

Specifically, this repo contains the results of **Task 2: Content Preservation Check**. Only samples with markup confidence >= 90 are included. In each pair, one text is toxic and the other is intended to be its non-toxic paraphrase.
In total, the dataset contains 10,975 pairs, of which a minority (2,812 pairs) are negative examples.
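
As a quick reference, here is a minimal sketch of loading this markup with the Hugging Face `datasets` library. The repository id below is a placeholder and the `train` split name is an assumption (neither is stated in this card), so adjust both to match the actual repo.

```python
from datasets import load_dataset

# Placeholder repository id -- substitute the actual id of this dataset repo on the Hub.
ds = load_dataset("s-nlp/ru_paradetox_content")

# Assuming a single default "train" split; print a few toxic text / paraphrase pairs.
for example in ds["train"].select(range(3)):
    print(example)
```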

## Citation

```
@inproceedings{logacheva-etal-2022-study,
    title = "A Study on Manual and Automatic Evaluation for Text Style Transfer: The Case of Detoxification",
    author = "Logacheva, Varvara  and
      Dementieva, Daryna  and
      Krotova, Irina  and
      Fenogenova, Alena  and
      Nikishina, Irina  and
      Shavrina, Tatiana  and
      Panchenko, Alexander",
    booktitle = "Proceedings of the 2nd Workshop on Human Evaluation of NLP Systems (HumEval)",
    month = may,
    year = "2022",
    address = "Dublin, Ireland",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2022.humeval-1.8",
    doi = "10.18653/v1/2022.humeval-1.8",
    pages = "90--101",
    abstract = "It is often difficult to reliably evaluate models which generate text. Among them, text style transfer is a particularly difficult to evaluate, because its success depends on a number of parameters. We conduct an evaluation of a large number of models on a detoxification task. We explore the relations between the manual and automatic metrics and find that there is only weak correlation between them, which is dependent on the type of model which generated text. Automatic metrics tend to be less reliable for better-performing models. However, our findings suggest that, ChrF and BertScore metrics can be used as a proxy for human evaluation of text detoxification to some extent.",
}
```

## Contacts

For any questions, please contact: Daryna Dementieva ([email protected])