vrashad committed on
Commit
42aac82
·
verified ·
1 Parent(s): cb8160e

Update README.md

Files changed (1)
  1. README.md +62 -25
README.md CHANGED
```diff
@@ -1,39 +1,76 @@
 ---
-configs:
-- config_name: default
-  data_files:
-  - split: train
-    path: data/train-*
 dataset_info:
   features:
   - name: comment
     dtype: string
   - name: identity_attack
-    dtype: float64
+    dtype: float32
   - name: insult
-    dtype: float64
+    dtype: float32
   - name: obscene
-    dtype: float64
+    dtype: float32
   - name: severe_toxicity
-    dtype: float64
+    dtype: float32
   - name: sexual_explicit
-    dtype: float64
+    dtype: float32
   - name: threat
-    dtype: float64
+    dtype: float32
   - name: toxicity
-    dtype: float64
-  splits:
-  - name: train
-    num_bytes: 14252665
-    num_examples: 118890
-  download_size: 6015721
-  dataset_size: 14252665
-tags:
-- toxicity
-- content-warning
-- nsfw
-- adult-content
+task_categories:
+- text-classification
+language:
+- az
+size_categories:
+- 100K<n<1M
+extra_gated_prompt: >-
+  ⚠️ This dataset contains adult content, profanity, and toxic language in
+  Azerbaijani. By accessing this dataset, you confirm that you are 18+ and will
+  use it responsibly for research purposes only.
+extra_gated_fields:
+  Company/Organization: text
+  Research Purpose: text
+  I confirm I am 18+ years old: checkbox
+  I will use this dataset responsibly: checkbox
+license: cc-by-4.0
+pretty_name: Azerbaijani Toxicity Classification Dataset
 ---
-# Dataset Card for "toxic_dataset_classification_azerbaijani"
 
-[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
+# Azerbaijani Toxicity Classification Dataset
+
+⚠️ **CONTENT WARNING** ⚠️
+
+**This dataset contains highly offensive and toxic content in the Azerbaijani language, including:**
+- Explicit sexual content
+- Profanity and vulgar language
+- Hate speech and personal attacks
+- Threats and harassment
+- Other harmful and disturbing material
+
+**🔞 This dataset is intended for mature audiences (18+) and research purposes only.**
+
+## Dataset Description
+
+This dataset contains Azerbaijani text comments labeled with toxicity scores across 7 categories:
+
+- **identity_attack**: Attacks based on identity characteristics
+- **insult**: General insults and offensive language
+- **obscene**: Profanity and vulgar content
+- **severe_toxicity**: Extremely harmful content
+- **sexual_explicit**: Sexual content and references
+- **threat**: Threats of violence or harm
+- **toxicity**: Overall toxicity score
+
+Each comment is scored from 0.0 (not toxic) to 1.0 (highly toxic) for each category.
+
+## Disclaimer
+
+The views and content expressed in this dataset do not reflect the opinions of the dataset creators or affiliated institutions. This material is provided solely for research purposes to advance the field of AI safety and content moderation.
+
+---
+
+## Contact
+
+For more information, questions, or issues, please contact LocalDoc at [[email protected]].
```
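The updated card describes seven per-category `float32` scores in the range 0.0–1.0 and tags the dataset for `text-classification`. As a minimal sketch of how such continuous scores might be turned into binary labels for that task (the 0.5 threshold and the sample row below are illustrative assumptions, not values from the dataset):

```python
# The seven score fields named in the dataset card's feature list.
CATEGORIES = [
    "identity_attack", "insult", "obscene", "severe_toxicity",
    "sexual_explicit", "threat", "toxicity",
]

def binarize(row, threshold=0.5):
    """Map each 0.0-1.0 category score to a 0/1 label at the given threshold."""
    return {cat: int(row[cat] >= threshold) for cat in CATEGORIES}

# Hypothetical record: the scores here are made up for demonstration only.
example = {
    "comment": "…",  # Azerbaijani text would appear here
    "identity_attack": 0.02, "insult": 0.81, "obscene": 0.64,
    "severe_toxicity": 0.10, "sexual_explicit": 0.05,
    "threat": 0.01, "toxicity": 0.77,
}
labels = binarize(example)
# Only the categories whose scores exceed the threshold map to 1.
```

The same per-category thresholding applies unchanged to rows loaded from the actual dataset, since each row carries exactly these seven float fields alongside `comment`.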