dardem committed
Commit 0190754 · verified · 1 Parent(s): fca016c

Update README.md

Files changed (1): README.md +20 -3
README.md CHANGED
@@ -9,6 +9,11 @@ language:
 - zh
 - ar
 - hi
+- it
+- fr
+- he
+- ja
+- tt
 license: openrail++
 size_categories:
 - 10K<n<100K
@@ -103,24 +108,36 @@ configs:
   path: data/ja-*
 ---
 
-For the shared task [CLEF TextDetox 2024](https://pan.webis.de/clef24/pan24-web/text-detoxification.html), we provide a compilation of binary toxicity classification datasets for each language.
+# Multilingual Toxicity Detection Dataset
+
+**[2025]** We extend our binary toxicity classification dataset to **more languages**! Now also covered: Italian, French, Hebrew, Hinglish, Japanese, and Tatar. The data is prepared for the [TextDetox 2025](https://pan.webis.de/clef25/pan25-web/text-detoxification.html) shared task.
+
+**[2024]** For the shared task [TextDetox 2024](https://pan.webis.de/clef24/pan24-web/text-detoxification.html), we provide a compilation of binary toxicity classification datasets for each language.
 Namely, for each language, we provide a 5k subset -- 2.5k toxic and 2.5k non-toxic samples.
 
 The list of original sources:
 * English: [Jigsaw](https://www.kaggle.com/c/jigsaw-toxic-comment-classification-challenge), [Unitary AI Toxicity Dataset](https://github.com/unitaryai/detoxify)
 * Russian: [Russian Language Toxic Comments](https://www.kaggle.com/datasets/blackmoon/russian-language-toxic-comments), [Toxic Russian Comments](https://www.kaggle.com/datasets/alexandersemiletov/toxic-russian-comments)
-* Ukrainian: our labeling based on [Ukrainian Twitter texts](https://github.com/saganoren/ukr-twi-corpus)
+* Ukrainian: [ours](https://huggingface.co/datasets/ukr-detect/ukr-toxicity-dataset)
 * Spanish: [CLANDESTINO, the Spanish toxic language dataset](https://github.com/microsoft/Clandestino/tree/main)
 * German: [DeTox-Dataset](https://github.com/hdaSprachtechnologie/detox), [GermEval 2018, 2021](https://aclanthology.org/2021.germeval-1.1/)
 * Amharic: [Amharic Hate Speech](https://github.com/uhh-lt/AmharicHateSpeech)
 * Arabic: [OSACT4](https://edinburghnlp.inf.ed.ac.uk/workshops/OSACT4/)
 * Hindi: [Hostility Detection Dataset in Hindi](https://competitions.codalab.org/competitions/26654#learn_the_details-dataset), [Overview of the HASOC track at FIRE 2019: Hate Speech and Offensive Content Identification in Indo-European Languages](https://dl.acm.org/doi/pdf/10.1145/3368567.3368584?download=true)
+* Italian: [AMI](https://github.com/dnozza/ami2020), [HODI](https://github.com/HODI-EVALITA/HODI_2023), [Jigsaw Multilingual Toxic Comment](https://www.kaggle.com/competitions/jigsaw-multilingual-toxic-comment-classification/overview)
+* French: []
+* Hebrew: [Hebrew Offensive Language Dataset](https://github.com/NataliaVanetik/HebrewOffensiveLanguageDatasetForTheDetoxificationProject/blob/main/OLaH-dataset-filtered.xlsx)
+* Hinglish: []
+* Japanese: [open2ch posts](https://huggingface.co/datasets/p1atdev/open2ch) [filtered](https://huggingface.co/datasets/sobamchan/ja-toxic-text-classification-open2ch) with the Perspective API
+* Tatar: ours
 
-All credits go to the authors of the original toxic words lists.
+All credits go to the authors of the original corpora.
 
 ## Citation
 If you would like to acknowledge our work, please cite the following manuscripts:
 
+**[2024]**
+
 ```
 @inproceedings{dementieva2024overview,
   title={Overview of the Multilingual Text Detoxification Task at PAN 2024},
 
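
The per-language layout described in the updated README (one 5k subset per language, split into 2.5k toxic and 2.5k non-toxic samples, with data files such as `data/ja-*`) can be consumed with the `datasets` library; the sketch below is illustrative only, and the repository ID, split names, and column names (`text`, `toxic`) are assumptions rather than details stated in this commit.

```python
# Minimal sketch: load one per-language subset and check its toxic/non-toxic balance.
# Assumptions (not confirmed by this commit): the Hub repository ID, per-language
# split names (ISO codes such as "ja"), and the columns "text" and "toxic" (1 = toxic).
from datasets import load_dataset

ds = load_dataset("textdetox/multilingual_toxicity_dataset")  # hypothetical repository ID

ja = ds["ja"]                                        # one ~5k per-language subset
toxic = ja.filter(lambda ex: ex["toxic"] == 1)       # ~2.5k toxic samples
non_toxic = ja.filter(lambda ex: ex["toxic"] == 0)   # ~2.5k non-toxic samples
print(len(ja), len(toxic), len(non_toxic))
```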