---
dataset_info:
  features:
  - name: comment
    dtype: string
  - name: identity_attack
    dtype: float64
  - name: insult
    dtype: float64
  - name: obscene
    dtype: float64
  - name: severe_toxicity
    dtype: float64
  - name: sexual_explicit
    dtype: float64
  - name: threat
    dtype: float64
  - name: toxicity
    dtype: float64
  splits:
  - name: train
    num_bytes: 9279129
    num_examples: 74780
  download_size: 4110800
  dataset_size: 9279129
task_categories:
- text-classification
language:
- az
tags:
- toxicity
- content-warning
- nsfw
- azerbaijani
- hate-speech-detection
size_categories:
- 10K<n<100K
extra_gated_prompt: >-
  ⚠️ This dataset contains adult content, profanity, and toxic language in
  Azerbaijani. By accessing this dataset, you confirm that you are 18+ and will
  use it responsibly for research purposes only.
extra_gated_fields:
  Company/Organization: text
  Research Purpose: text
  I confirm I am 18+ years old: checkbox
  I will use this dataset responsibly: checkbox
license: cc-by-4.0
pretty_name: Azerbaijani Toxicity Classification Dataset
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
---
|
|
|
# Azerbaijani Toxicity Classification Dataset

⚠️ **CONTENT WARNING** ⚠️

**This dataset contains highly offensive and toxic content in the Azerbaijani language, including:**

- Explicit sexual content
- Profanity and vulgar language
- Hate speech and personal attacks
- Threats and harassment
- Other harmful and disturbing material

**🔞 This dataset is intended for mature audiences (18+) and research purposes only.**
|
|
|
## Dataset Description

This dataset contains Azerbaijani text comments labeled with toxicity scores across seven categories:

- **identity_attack**: Attacks based on identity characteristics
- **insult**: General insults and offensive language
- **obscene**: Profanity and vulgar content
- **severe_toxicity**: Extremely harmful content
- **sexual_explicit**: Sexual content and references
- **threat**: Threats of violence or harm
- **toxicity**: Overall toxicity score

Each comment carries a binary score for every category: 0.0 (not toxic) or 1.0 (highly toxic).
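As a quick-start sketch, the schema above can be consumed with the 🤗 `datasets` library and the seven per-category scores folded into one multi-label vector per comment. Note the repository id below is a placeholder not confirmed by this card, and because the dataset is gated you must authenticate first (e.g. `huggingface-cli login`):

```python
from datasets import load_dataset

# Gated dataset: authenticate first (`huggingface-cli login`) or pass token=...
# NOTE: "LocalDoc/azerbaijani-toxicity" is a placeholder repo id; replace it
# with the dataset's actual repository id on the Hugging Face Hub.
ds = load_dataset("LocalDoc/azerbaijani-toxicity", split="train")

LABELS = [
    "identity_attack", "insult", "obscene", "severe_toxicity",
    "sexual_explicit", "threat", "toxicity",
]

def to_multilabel(example):
    # Collect the seven binary float64 scores into a single label vector.
    example["labels"] = [example[name] for name in LABELS]
    return example

ds = ds.map(to_multilabel)
print(ds[0]["comment"], ds[0]["labels"])
```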
|
|
|
|
|
|
|
## Disclaimer

The views and content expressed in this dataset do not reflect the opinions of the dataset creators or affiliated institutions. This material is provided solely for research purposes to advance the field of AI safety and content moderation.

---

## Contact

For more information, questions, or issues, please contact LocalDoc at [[email protected]].