---
language:
- bn
license: cc-by-nc-sa-4.0
task_categories:
- text-classification
tags:
- women-safety
- gender-bias
- social-media
- Bangla-NLP
- sentiment-analysis
pretty_name: 'WoNBias-Partial: A Balanced Subset of the Original WoNBias Dataset for Review'
---
# WoNBias-Partial: A Balanced Subset of the Original WoNBias Dataset for Review

## Dataset Details

### Overview
A manually annotated corpus of Bangla text designed to benchmark and mitigate gender bias specifically targeting women. The dataset supports research in ethical NLP, hate speech detection, and computational social science.
## Dataset Description

### Basic Info

- Purpose: Detect gender bias against women in Bengali text
- Language: Bengali (Bangla)
- Labels (see the mapping sketch below):
  - `0`: Neutral (no bias)
  - `1`: Positive (supportive towards women in general)
  - `2`: Negative (contains bias against women)
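
For convenience when training classifiers on this scheme, the integer-to-name mapping can be written out as plain Python constants. This is only an illustrative sketch: the names follow the list above, and the dictionaries themselves are not shipped with the dataset.

```python
# Assumed label names, taken from the list above; not part of the CSV itself.
ID2LABEL = {0: "neutral", 1: "positive", 2: "negative"}
LABEL2ID = {name: idx for idx, name in ID2LABEL.items()}
```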
### Data Sources

Collected from:
- Social media (TikTok, Facebook, Twitter, Instagram)
- Online forums
- News articles and comments
- Government reports
### Collection Methods

- Semi-automated tools with ethical safeguards (a rate-limiting sketch follows this list):
  - Only public content collected
  - All personal info removed
  - Rate-limited scraping
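
As a rough illustration of what rate-limited collection of public content can look like (a sketch only: the URL list, delay, and processing step are placeholders, not the project's actual tooling):

```python
import time
import requests

PUBLIC_URLS = ["https://example.com/public-post-1"]  # placeholder list of public pages
DELAY_SECONDS = 5  # fixed pause between requests to respect rate limits

collected = []
for url in PUBLIC_URLS:
    response = requests.get(url, timeout=10)
    if response.ok:
        # Personal information would be stripped before storage
        # (see the Ethical Considerations section below).
        collected.append(response.text)
    time.sleep(DELAY_SECONDS)
```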
### Partnerships

Developed with help from:
- Social media data
- Community outreach
- Local annotators
## Dataset Structure

- Format: CSV
- Columns:
  - `Data`: Raw Bangla text.
  - `Label`: Integer (0, 1, or 2).
## Statistics

- Total Samples: 11,178
- Label Distribution (see the check after this list):
  - Neutral (0): 3,644 samples (32.6%)
  - Positive (1): 3,506 samples (31.4%)
  - Negative (2): 4,028 samples (36.0%)
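
The label distribution above can be reproduced with a few lines once the dataset is loaded. This sketch assumes the default split is named "train" and that the column is named `Label` as in the Dataset Structure section.

```python
from collections import Counter
from datasets import load_dataset

ds = load_dataset("won-bias/wonbias-partial-dataset", split="train")  # assumed split name
counts = Counter(ds["Label"])
total = sum(counts.values())

for label_id, name in [(0, "Neutral"), (1, "Positive"), (2, "Negative")]:
    n = counts.get(label_id, 0)
    print(f"{name} ({label_id}): {n} samples ({100 * n / total:.1f}%)")
```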
## Ethical Considerations

### Data Collection

- Anonymization (illustrated in the sketch after this list):
  - All user identifiers (usernames, URLs, locations) removed
  - Metadata sanitized to prevent re-identification
- Consent:
  - For survey/interview data: Explicit participant consent obtained
  - Public posts used under platform terms of service
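
For illustration, the identifier scrubbing described under Anonymization could be approximated as below. This is a hedged sketch, not the authors' actual pipeline: the patterns only cover URLs and @-handles, while locations and other metadata would need NER or manual review.

```python
import re

URL_RE = re.compile(r"https?://\S+|www\.\S+")
HANDLE_RE = re.compile(r"@\w+")

def anonymize(text: str) -> str:
    """Replace URLs and @-handles with placeholder tokens."""
    text = URL_RE.sub("<URL>", text)
    return HANDLE_RE.sub("<USER>", text)

print(anonymize("@user123 দেখুন https://example.com/post"))  # "<USER> দেখুন <URL>"
```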
### Content Safeguards

- Mental Health Protections:
  - Annotators provided with:
    - Psychological support resources
    - Regular breaks when handling toxic content
- Trigger Warnings:
  - Dataset documentation highlights potentially harmful content
## Limitations

**Scope Limitations:**
- Focuses only on gender bias against women
- Doesn't cover third gender or intersectional biases

**Detection Challenges:**
- Difficulty identifying implicit bias and contextual nuances
- Performance varies across regional dialects

**Language Coverage:**
- Currently Bengali-only (may not generalize to other South Asian languages)
## Usage

### How to Load

```python
from datasets import load_dataset

# Download the dataset from the Hugging Face Hub.
dataset = load_dataset("won-bias/wonbias-partial-dataset")
```
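
Beyond loading, a quick sanity-check baseline can be run on the three labels. The sketch below is not part of the dataset's official tooling: it assumes scikit-learn is installed, that the default split is "train", and that the columns are named `Data` and `Label` as in the Dataset Structure section. Character n-grams are used because they work reasonably well for Bangla without extra tokenization.

```python
from datasets import load_dataset
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import classification_report
from sklearn.model_selection import train_test_split

ds = load_dataset("won-bias/wonbias-partial-dataset", split="train")  # assumed split name
texts, labels = ds["Data"], ds["Label"]

# Hold out 20% for evaluation, stratified to preserve the label balance.
X_train, X_test, y_train, y_test = train_test_split(
    texts, labels, test_size=0.2, stratify=labels, random_state=42
)

# Character n-gram TF-IDF features + logistic regression.
vectorizer = TfidfVectorizer(analyzer="char_wb", ngram_range=(2, 4), max_features=50_000)
clf = LogisticRegression(max_iter=1000)

clf.fit(vectorizer.fit_transform(X_train), y_train)
print(classification_report(y_test, clf.predict(vectorizer.transform(X_test))))
```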
## Citation

```bibtex
@dataset{wonbias_2025,
  title     = {{WoNBias-Partial}: A Balanced Subset of the Original WoNBias Dataset for Review},
  author    = {Nishat Tafannum and MD. Raisul Islam Aupi},
  year      = {2025},
  publisher = {Hugging Face},
  url       = {https://huggingface.co/datasets/won-bias/wonbias-partial-dataset},
  version   = {1.0},
  license   = {CC-BY-NC-SA-4.0},
}
```
## License

This work is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License.
You are free to:
- Share — copy and redistribute the material in any medium or format
- Adapt — remix, transform, and build upon the material
Under the following terms:
- Attribution — You must give appropriate credit, provide a link to the license, and indicate if changes were made.
- NonCommercial — You may not use the material for commercial purposes.
- ShareAlike — If you remix, transform, or build upon the material, you must distribute your contributions under the same license as the original.