---
license: apache-2.0
task_categories:
- text-generation
language:
- am
pretty_name: Amharic Pretraining Corpus
size_categories:
- 100M<n<1B
---
The Amharic Pretraining Corpus is a large-scale dataset (~103M examples) for general Amharic language-model pretraining. It consists of diverse text sources, including news articles, books, social media posts, government documents, and web content, all written in Amharic.
You can load the dataset as follows:

```python
from datasets import load_dataset

ds = load_dataset("yordanoswuletaw/amharic-pretraining-corpus")
```
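For a corpus of this size, it can be convenient to stream examples instead of downloading everything up front. The sketch below uses the `datasets` library's streaming mode; the split name `train` and the structure of each record are assumptions, not confirmed by this card.

```python
from datasets import load_dataset

# Stream the corpus so records are fetched lazily rather than
# downloading the full dataset first (assumes a "train" split).
ds_stream = load_dataset(
    "yordanoswuletaw/amharic-pretraining-corpus",
    split="train",
    streaming=True,
)

# Inspect the first few records (the exact column names are an
# assumption; print the whole record to see them).
for i, example in enumerate(ds_stream):
    print(example)
    if i == 2:
        break
```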