---
annotations_creators:
- expert-generated
language_creators:
- found
languages:
- sq
- sq-AL
licenses:
- cc-by-4.0
multilinguality:
- monolingual
size_categories:
- 10K<n<100K
source_datasets:
- original
task_categories:
- text-classification
task_ids:
- hate-speech-detection
- text-classification-other-hate-speech-detection
paperswithcode_id: shaj
pretty_name: SHAJ
extra_gated_prompt: "Warning: this repository contains harmful content (abusive language, hate speech)."
---
# Dataset Card for "shaj"
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:**
- **Repository:** [https://figshare.com/articles/dataset/SHAJ_Albanian_hate_speech_abusive_language/19333298/1](https://figshare.com/articles/dataset/SHAJ_Albanian_hate_speech_abusive_language/19333298/1)
- **Paper:** [https://arxiv.org/abs/2107.13592](https://arxiv.org/abs/2107.13592)
- **Point of Contact:** [Leon Derczynski](https://github.com/leondz)
- **Size of downloaded dataset files:** 769.21 KiB
- **Size of the generated dataset:** 1.06 MiB
- **Total amount of disk used:** 1.85 MiB
### Dataset Summary
This is an abusive/offensive language detection dataset for Albanian. The data is formatted
following the OffensEval convention, with three subtasks:
* Subtask A: Offensive (OFF) or not (NOT)
* Subtask B: Untargeted (UNT) or targeted insult (TIN)
* Subtask C: Type of target: individual (IND), group (GRP), or other (OTH)
Notes on the above:
* The subtask A field should always be filled.
* The subtask B field should only be filled if subtask A is "offensive" (OFF).
* The subtask C field should only be filled if subtask B is "targeted insult" (TIN).
The dataset name is a backronym, also standing for "Spoken Hate in the Albanian Jargon".
See the paper [https://arxiv.org/abs/2107.13592](https://arxiv.org/abs/2107.13592) for full details.
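The label hierarchy above can be expressed as a small consistency check. The sketch below is illustrative only: it uses the OffensEval string codes (OFF/NOT, TIN/UNT, IND/GRP/OTH), whereas the released files encode these fields as integers (see "Data Fields" below), and `is_consistent` is a hypothetical helper, not part of the dataset.
```python
def is_consistent(subtask_a, subtask_b, subtask_c):
    """Return True if a label triple respects the OffensEval-style hierarchy."""
    if subtask_a not in ("OFF", "NOT"):
        return False
    if subtask_a == "NOT":
        # Subtasks B and C only apply to offensive instances.
        return subtask_b is None and subtask_c is None
    if subtask_b not in ("TIN", "UNT"):
        return False
    if subtask_b == "UNT":
        # Subtask C only applies to targeted insults.
        return subtask_c is None
    return subtask_c in ("IND", "GRP", "OTH")

assert is_consistent("NOT", None, None)
assert is_consistent("OFF", "TIN", "GRP")
assert not is_consistent("OFF", None, "IND")
```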
### Supported Tasks and Leaderboards
* Task A leaderboard at [paperswithcode.com/sota/hate-speech-detection-on-shaj](https://paperswithcode.com/sota/hate-speech-detection-on-shaj)
### Languages
Albanian (`bcp47:sq-AL`)
## Dataset Structure
### Data Instances
#### shaj
- **Size of downloaded dataset files:** 769.21 KiB
- **Size of the generated dataset:** 1.06 MiB
- **Total amount of disk used:** 1.85 MiB
An example of 'train' looks as follows.
```
{
  'id': '0',
  'text': 'PLACEHOLDER TEXT',
  'subtask_a': 1,
  'subtask_b': 0,
  'subtask_c': 0
}
```
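An instance like the one above can be pulled with the `datasets` library. A minimal sketch, assuming the dataset is published on the Hub under the id `shaj` (it may instead sit under a user or organisation namespace):
```python
from datasets import load_dataset

# "shaj" is an assumed Hub id; adjust it if the dataset lives under a namespace.
shaj = load_dataset("shaj", split="train")

print(shaj[0])        # one labelled comment, e.g. {'id': '0', 'text': ..., 'subtask_a': 1, ...}
print(shaj.features)  # field names and types
```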
### Data Fields
- `id`: a `string` feature.
- `text`: a `string` feature.
- `subtask_a`: whether or not the instance is offensive; `0: OFF, 1: NOT`
- `subtask_b`: whether an offensive instance is a targeted insult; `0: TIN, 1: UNT, 2: not applicable`
- `subtask_c`: what a targeted insult is aimed at; `0: IND, 1: GRP, 2: OTH, 3: not applicable`
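The integer codes can be mapped back to the OffensEval string labels. A minimal sketch, assuming only the encodings listed above (the mapping tables and `decode_labels` are illustrative helpers, not part of the dataset):
```python
# Mapping tables based on the encodings listed above; None marks "not applicable".
SUBTASK_A = {0: "OFF", 1: "NOT"}
SUBTASK_B = {0: "TIN", 1: "UNT", 2: None}
SUBTASK_C = {0: "IND", 1: "GRP", 2: "OTH", 3: None}

def decode_labels(example):
    """Replace integer codes with their OffensEval string labels."""
    return {
        "subtask_a": SUBTASK_A[example["subtask_a"]],
        "subtask_b": SUBTASK_B[example["subtask_b"]],
        "subtask_c": SUBTASK_C[example["subtask_c"]],
    }

# Usage with the `datasets` library (Hub id as above is an assumption):
# decoded = shaj.map(decode_labels)
```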
### Data Splits
| name | train (sentences) |
|------|------------------:|
| shaj | 11874 |
## Dataset Creation
### Curation Rationale
The data was collected to enable offensive speech detection in Albanian.
### Source Data
#### Initial Data Collection and Normalization
The text is scraped from comments on popular Albanian YouTube and Instagram accounts.
An extended discussion is given in the paper in section 3.2.
#### Who are the source language producers?
People who comment on a selection of high-activity Albanian Instagram and YouTube profiles.
### Annotations
#### Annotation process
The annotation scheme was taken from OffensEval 2019 and applied by two native speaker authors of the paper as well as their friends and family.
#### Who are the annotators?
Albanian native speakers, male and female, aged 20-60.
### Personal and Sensitive Information
The data was public at the time of collection. No PII removal has been performed.
## Considerations for Using the Data
### Social Impact of Dataset
The data definitely contains abusive language.
### Discussion of Biases
### Other Known Limitations
## Additional Information
### Dataset Curators
The dataset is curated by the paper's authors.
### Licensing Information
The authors distribute this data under a Creative Commons Attribution 4.0 (CC BY 4.0) license.
### Citation Information
```
@article{nurce2021detecting,
  title   = {Detecting Abusive Albanian},
  author  = {Nurce, Erida and Keci, Jorgel and Derczynski, Leon},
  journal = {arXiv preprint arXiv:2107.13592},
  year    = {2021}
}
```
### Contributions
Dataset added by its author, [@leondz](https://github.com/leondz).