---
annotations_creators:
- expert-generated
language_creators:
- found
languages:
- sq
- sq-AL
licenses:
- cc-by-4.0
multilinguality:
- monolingual
size_categories:
- 10K<n<100K
source_datasets:
- original
task_categories:
- text-classification
task_ids:
- hate-speech-detection
- text-classification-other-hate-speech-detection
paperswithcode_id: shaj
pretty_name: SHAJ
extra_gated_prompt: "Warning: this repository contains harmful content (abusive language, hate speech)."
---

# Dataset Card for "shaj"

## Table of Contents
- [Dataset Description](#dataset-description)
  - [Dataset Summary](#dataset-summary)
  - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
  - [Languages](#languages)
- [Dataset Structure](#dataset-structure)
  - [Data Instances](#data-instances)
  - [Data Fields](#data-fields)
  - [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
  - [Curation Rationale](#curation-rationale)
  - [Source Data](#source-data)
  - [Annotations](#annotations)
  - [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
  - [Social Impact of Dataset](#social-impact-of-dataset)
  - [Discussion of Biases](#discussion-of-biases)
  - [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
  - [Dataset Curators](#dataset-curators)
  - [Licensing Information](#licensing-information)
  - [Citation Information](#citation-information)
  - [Contributions](#contributions)

## Dataset Description

- **Homepage:** 
- **Repository:** [https://figshare.com/articles/dataset/SHAJ_Albanian_hate_speech_abusive_language/19333298/1](https://figshare.com/articles/dataset/SHAJ_Albanian_hate_speech_abusive_language/19333298/1)
- **Paper:** [https://arxiv.org/abs/2107.13592](https://arxiv.org/abs/2107.13592)
- **Point of Contact:** [Leon Derczynski](https://github.com/leondz)
- **Size of downloaded dataset files:** 769.21 KiB
- **Size of the generated dataset:** 1.06 MiB
- **Total amount of disk used:**  1.85 MiB

### Dataset Summary

This is an abusive/offensive language detection dataset for Albanian. The data is formatted
following the OffensEval convention, with three tasks:

* Subtask A: Offensive (OFF) or not (NOT)
* Subtask B: Untargeted (UNT) or targeted insult (TIN)
* Subtask C: Type of target: individual (IND), group (GRP), or other (OTH)

Notes on the above:

* The subtask A field should always be filled.
* The subtask B field should only be filled if subtask A is OFF (offensive).
* The subtask C field should only be filled if subtask B is TIN (targeted insult). A small consistency check for this hierarchy is sketched below.
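
A minimal sketch of that hierarchy, assuming OffensEval-style string labels; note that the Hugging Face-formatted version described under "Data Fields" encodes these fields as integers instead:

```python
# Sketch of the annotation hierarchy described above, assuming
# OffensEval-style string labels (the formatted dataset below encodes
# these fields as integers; see "Data Fields").
def is_consistent(subtask_a, subtask_b=None, subtask_c=None):
    """Check that B is only filled for offensive items and C only for targeted ones."""
    if subtask_a not in ("OFF", "NOT"):
        return False
    if subtask_b is not None and subtask_a != "OFF":
        return False  # B should only be filled when A is OFF
    if subtask_c is not None and subtask_b != "TIN":
        return False  # C should only be filled when B is TIN
    return True

assert is_consistent("NOT")
assert is_consistent("OFF", "TIN", "IND")
assert not is_consistent("NOT", "UNT")
```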

The dataset name is a backronym, also standing for "Spoken Hate in the Albanian Jargon".

See the paper [https://arxiv.org/abs/2107.13592](https://arxiv.org/abs/2107.13592) for full details.

### Supported Tasks and Leaderboards

* Task A leaderboard at [paperswithcode.com/sota/hate-speech-detection-on-shaj](https://paperswithcode.com/sota/hate-speech-detection-on-shaj)

### Languages

Albanian (`bcp47:sq-AL`)

## Dataset Structure

### Data Instances

#### shaj

- **Size of downloaded dataset files:** 769.21 KiB
- **Size of the generated dataset:** 1.06 MiB
- **Total amount of disk used:**  1.85 MiB

An example of 'train' looks as follows.

```
{
  'id': '0', 
  'text': 'PLACEHOLDER TEXT', 
  'subtask_a': 1, 
  'subtask_b': 0, 
  'subtask_c': 0
}
```
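
The split can be loaded with the `datasets` library; the sketch below assumes the dataset is available on the Hugging Face Hub under the id `shaj` (adjust the id if it is hosted under a namespace).

```python
# Minimal sketch: load SHAJ with the `datasets` library and inspect one row.
# The Hub id "shaj" is an assumption here; adjust it if the dataset is
# hosted under a namespace.
from datasets import load_dataset

dataset = load_dataset("shaj")   # DatasetDict with a single "train" split
train = dataset["train"]

print(train)      # features and number of rows
print(train[0])   # first example: id, text, subtask_a, subtask_b, subtask_c
```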


### Data Fields

- `id`: a `string` feature.
- `text`: a `string` feature.
- `subtask_a`: whether or not the instance is offensive; `0: OFF, 1: NOT`.
- `subtask_b`: whether an offensive instance is a targeted insult; `0: TIN, 1: UNT, 2: not applicable`.
- `subtask_c`: what a targeted insult is aimed at; `0: IND, 1: GRP, 2: OTH, 3: not applicable`.
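
The integer values can be mapped back to the OffensEval label names. The mapping below is a sketch based on the encodings listed above; the `"NA"` placeholder for the "not applicable" codes is an assumption of this example, not part of the dataset.

```python
# Sketch: decode the integer labels into OffensEval-style strings.
# The mappings follow the field descriptions above; "NA" is just a
# readable placeholder for the "not applicable" codes.
SUBTASK_A = {0: "OFF", 1: "NOT"}
SUBTASK_B = {0: "TIN", 1: "UNT", 2: "NA"}
SUBTASK_C = {0: "IND", 1: "GRP", 2: "OTH", 3: "NA"}

def decode(example):
    """Return a copy of an example with human-readable labels."""
    return {
        **example,
        "subtask_a": SUBTASK_A[example["subtask_a"]],
        "subtask_b": SUBTASK_B[example["subtask_b"]],
        "subtask_c": SUBTASK_C[example["subtask_c"]],
    }

# e.g. train.map(decode) with the split loaded as in the previous sketch
```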


### Data Splits

| name | train           |
|------|----------------:|
| shaj | 11874 sentences |

## Dataset Creation

### Curation Rationale

The data was collected to enable offensive speech detection in Albanian.

### Source Data

#### Initial Data Collection and Normalization

The text is scraped from comments on popular Albanian YouTube and Instagram accounts.
An extended discussion is given in the paper in section 3.2.

#### Who are the source language producers?

People who comment on a selection of high-activity Albanian Instagram and YouTube profiles.

### Annotations

#### Annotation process

The annotation scheme was taken from OffensEval 2019 and applied by two native-speaker authors of the paper, as well as their friends and family.

#### Who are the annotators?

Albanian native speakers, male and female, aged 20-60.

### Personal and Sensitive Information

The data was public at the time of collection. No PII removal has been performed.

## Considerations for Using the Data

### Social Impact of Dataset

The data definitely contains abusive language.

### Discussion of Biases


### Other Known Limitations


## Additional Information

### Dataset Curators

The dataset is curated by the paper's authors.

### Licensing Information

The authors distribute this data under the Creative Commons Attribution 4.0 license (CC BY 4.0).

### Citation Information

```
@article{nurce2021detecting,
  title={Detecting Abusive Albanian},
  author={Nurce, Erida and Keci, Jorgel and Derczynski, Leon},
  journal={arXiv preprint arXiv:2107.13592},
  year={2021}
}
```


### Contributions

This dataset was added by its author, [@leondz](https://github.com/leondz).