---
license: cc-by-sa-4.0
task_categories:
- text-classification
language:
- fi
tags:
- toxicity
size_categories:
- 1K<n<10K
---
### Suomi-24-toxicity-annotated
This dataset contains comments from Suomi24, sampled using predictions from a toxicity classifier. For each label, comments were sampled in intervals across the classifier's score range, with the sampling emphasizing difficult borderline cases; 500 comments were sampled per label.
The annotation uses the labels from the Perspective API, as used e.g. in `TurkuNLP/wikipedia-toxicity-data-fi`.
Instead of multi-label annotation, each comment was annotated for a single label, although a few comments appear under two labels.
The annotation process consisted of an initial annotation of 100–200 comments, followed by discussion and final annotation. The raw annotation data can be found [here](https://github.com/TurkuNLP/toxicity-classifier/tree/main/annotations/raw_annotations).
Only examples with unanimous agreement, or with disagreements resolved through discussion, were included in the final dataset.
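The interval-based sampling described above can be sketched roughly as follows. This is a minimal illustration only: the function name, bin edges, and toy data are assumptions, not the actual TurkuNLP sampling pipeline.

```python
import random

# Hypothetical sketch of interval sampling: bin comments by classifier
# score for one label and draw evenly from each bin, so that borderline
# (mid-range) scores are well represented rather than only confident
# predictions. Names, bin count, and data are illustrative assumptions.
def sample_by_intervals(comments, scores, n_total=500, n_bins=5, seed=0):
    rng = random.Random(seed)
    bins = [[] for _ in range(n_bins)]
    for comment, score in zip(comments, scores):
        idx = min(int(score * n_bins), n_bins - 1)  # score assumed in [0, 1]
        bins[idx].append(comment)
    per_bin = n_total // n_bins
    sampled = []
    for bucket in bins:
        k = min(per_bin, len(bucket))
        sampled.extend(rng.sample(bucket, k))
    return sampled

# Toy data: 2000 comments with evenly spread scores.
comments = [f"comment {i}" for i in range(2000)]
scores = [i / 2000 for i in range(2000)]
picked = sample_by_intervals(comments, scores, n_total=500, n_bins=5)
print(len(picked))  # 500
```

Drawing the same number of comments from each score interval is what pushes ambiguous, mid-score cases into the sample.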
## Label definitions taken from Perspective API
- **THREAT**: Describes an intention to inflict pain, injury, or violence against an individual or group.
- **THREATENING**: Language that is threatening or encouraging violence or harm, including self-harm.
- **PROFANITY**: Swear words, curse words, or other obscene or profane language.
- **INSULT**: Insulting, inflammatory, or negative comment towards a person or a group of people. Such comments are not necessarily identity specific.
- **IDENTITY ATTACK**: Negative or hateful comments targeting someone because of their identity.
- **TOXICITY**: A rude, disrespectful, or unreasonable comment that is likely to make people leave a discussion.
- **SEVERE TOXICITY**: A very hateful, aggressive, disrespectful comment or otherwise very likely to make a user leave a discussion or give up on sharing their perspective. This attribute is much less sensitive to more mild forms of toxicity, such as comments that include positive uses of curse words.
## Guidelines used for annotation:
### Obscene
- swear words, including mild expletives and misspelled, masked, or other variations
- sexually explicit words/terminology that are not topically or contextually appropriate
### Threat
- suicidal or self-harm comments, incitement to violence or self-harm, hypothetical situations, and wishing harm on somebody
- comments that are very unlikely to happen, if not clearly marked as sarcasm
- only threats towards people are annotated as threat
- threats made by somebody other than the writer are NOT included
- counterfactual statements are NOT included <!--- as in "if I was there I would have..." --->
### Insult
- terms that are insulting towards groups of people (also under identity attack)
- insults against political groups, e.g. "vitun demari/suvakki/persu" -> "fucking liberal/conservative etc." <!--- I made this decision here.. --->
- negative insulting comments towards oneself, towards things other than people, and in hypothetical situations are NOT included
<!--- PROBLEM: use of racist or rapist if true, target not clear --->
### Identity attack
- comments that have no negative language but are still clearly negative
- negative statements towards political groups, or towards groups that nobody self-identifies with, are NOT included (unless they are an insult)
### Toxicity
- unreasonably expressed negative comments, regardless of whether a target is present or known
- mild or humorous swear words are NOT included
- positive or neutral sexually explicit comments are NOT included
### Severe toxicity
- comments that consist only of sexually explicit content
- only one severely toxic element is needed for this label; a comment is severely toxic even if it also contains substantive content
- the target does not need to be present, nor does the target matter
## Inter-annotator agreement:
| Label | Initial (unanimous) | After discussion (unanimous) | Initial (at least 2/3) | After discussion (at least 2/3) |
|------ | ------------------- | ---------------------------- | ---------------------- | ------------------------------- |
| identity attack | 54.5 % | 66.6 % | 92 % | 93.6 % |
| insult | 47.5 % | 49.6 % | 94.5 % | 95.6 % |
| severe toxicity | 63 % | 66 % | 92 % | 96.6 % |
| threat | 82 % | 80.3 % | 98 % | 97.3 % |
| toxicity | 58 % | 54 % | 93 % | 89.6 % |
| obscene | 69 % | 62 % | 97 % | 96 % |
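With three annotators per comment, the unanimous and at-least-2/3 figures in the table can be computed along the following lines. This is a minimal sketch with made-up data; the function and label names are illustrative, not the actual annotation files or scripts.

```python
from collections import Counter

# Sketch of computing agreement percentages from three annotators'
# per-comment decisions. Rows and label strings are illustrative.
def agreement_rates(annotations):
    """annotations: list of (a1, a2, a3) label tuples, one per comment.

    Returns (unanimous %, at-least-2-of-3 %)."""
    unanimous = sum(1 for row in annotations if len(set(row)) == 1)
    majority = sum(1 for row in annotations
                   if Counter(row).most_common(1)[0][1] >= 2)
    n = len(annotations)
    return 100 * unanimous / n, 100 * majority / n

rows = [("toxic", "toxic", "toxic"),   # unanimous
        ("toxic", "toxic", "clean"),   # 2/3 agree
        ("toxic", "clean", "other")]   # no agreement
unan, maj = agreement_rates(rows)
print(round(unan, 1), round(maj, 1))  # 33.3 66.7
```

Unanimity requires all three labels to match, while the 2/3 figure counts any comment where a majority label exists, which is why the right-hand columns of the table are so much higher.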
## Licensing Information
Contents of this repository are distributed under the
[Creative Commons Attribution-ShareAlike 4.0 International License (CC BY-SA 4.0)](https://creativecommons.org/licenses/by-sa/4.0/).
Copyright of the dataset contents belongs to the original copyright holders.