---
language:
- bg
task_categories:
- text-classification
license: apache-2.0
tags:
- not-for-all-audiences
- medical
size_categories:
- 1K<n<10K
---
Warning: This dataset contains content that includes toxic, offensive, or otherwise inappropriate language.
The `toxic-data-bg` dataset consists of 4,384 manually annotated sentences across four categories: toxic language, medical terminology, non-toxic language, and terms related to minority communities.
The dataset is an extension of the "Hate speech detection in Bulgarian" dataset and consists of sentences from:
- BG-Jargon
- BG-Nationalisti forum
- BG-Mamma forum
- Proud.bg forum
- Framar.bg forum
More information can be found in the paper and in the conference presentation.
## Code and usage
The code for scraping is available in the GitHub repository of the project.
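If the dataset is published on the Hugging Face Hub, it can be loaded with the `datasets` library. The sketch below is illustrative only: the repository id `toxic-data-bg` and the column names `text` and `label` are assumptions and may differ from the actual dataset files.

```python
from collections import Counter

from datasets import load_dataset

# Minimal usage sketch. The repository id and column names ("text", "label")
# are assumptions; check the dataset files for the actual identifiers.
ds = load_dataset("toxic-data-bg", split="train")

# Inspect the distribution over the four annotated categories.
print(Counter(example["label"] for example in ds))

# Keep only the toxic-language sentences (the label value is hypothetical).
toxic = ds.filter(lambda ex: ex["label"] == "toxic")
print(toxic[0]["text"])
```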
## Reference
If you use this dataset in your academic project, please cite as:
@article{berbatova2025detecting,
  title={Detecting Toxic Language: Ontology and BERT-based Approaches for Bulgarian Text},
  author={Berbatova, Melania and Vasev, Tsvetoslav},
  year={2025}
}