---
language:
- mk
tags:
- macedonian
- text
- corpus
- cleaned
datasets:
- LVSTCK/macedonian-corpus-cleaned
license: cc-by-4.0
---
# Macedonian Corpus - Cleaned
## Key Highlights
- Size: 35.5 GB, Word Count: 3.31 billion
- Filtered for irrelevant and low-quality content using C4 and Gopher filtering.
- Includes text from 10+ sources such as fineweb-2, HPLT-2, Wikipedia, and more.
## Overview
Macedonian is widely recognized as a low-resource language in NLP. Publicly available resources in Macedonian are extremely limited, and as far as we know, no consolidated resource encompassing all available public data exists. Another challenge is the state of digitalized books and documents in Macedonia: the country lags behind in this regard, with many books and documents existing only as scanned images. This makes it difficult to extract textual information, which is critical for advancing linguistic research, education, and NLP applications in the Macedonian language. To address these challenges, we created the Macedonian Corpus, which consolidates multiple sources of Macedonian text data, including books, academic papers, web content, and other textual resources.
This version of the corpus is cleaned: the data has been filtered to ensure high-quality text for NLP tasks. The filtering was done using datatrove, largely following fineweb-2, but with slightly less aggressive settings to retain a broader range of text sources.
This implementation applies heuristic rules derived from the C4 and Gopher quality filters. For the specific filtering code used in our pipeline, see the GitHub repositories for the C4 filters and the Gopher quality filters. For those interested in applying custom filtering, the raw dataset is available at macedonian-corpus-raw.
1. C4-like Filtering. Removes irrelevant or low-quality lines based on content (e.g., "javascript", "lorem ipsum") and structural rules (e.g., minimum word count, terminal punctuation).
- Removed lines containing irrelevant content such as "javascript" or lines with any word exceeding 1000 characters.
- Excluded placeholder content like "lorem ipsum" and policy-related phrases such as "privacy policy" or "terms of use."
- Filtered out lines with fewer than 3 words or without terminal punctuation (e.g., `.`, `?`, `!`).
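The line-level rules above can be sketched in a few lines of Python. This is a simplified illustration, not the exact datatrove configuration; in particular, the banned-phrase list here is only an illustrative subset:

```python
# Simplified C4-like line filter. The phrase list is an illustrative
# subset; the production pipeline uses datatrove's C4 filter.
BANNED_PHRASES = ("javascript", "lorem ipsum", "privacy policy", "terms of use")
TERMINAL_PUNCTUATION = (".", "?", "!")

def keep_line(line: str, min_words: int = 3, max_word_len: int = 1000) -> bool:
    """Return True if a line passes the C4-like heuristics."""
    lowered = line.lower()
    if any(phrase in lowered for phrase in BANNED_PHRASES):
        return False
    words = line.split()
    if len(words) < min_words:
        return False
    if any(len(word) > max_word_len for word in words):
        return False
    # Lines must end with terminal punctuation.
    return line.rstrip().endswith(TERMINAL_PUNCTUATION)

def filter_document(text: str) -> str:
    """Keep only the lines of a document that pass the heuristics."""
    return "\n".join(l for l in text.splitlines() if keep_line(l))
```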
2. Gopher-like Filtering. Filters out documents with excessive bullet points or repetitive ellipses to ensure completeness.
- Limited the presence of bullet points by rejecting documents where more than 90% of lines started with bullet-like characters (e.g., `-`, `•`, `*`).
- Filtered out documents where more than 30% of lines ended with ellipses (...) to avoid overly repetitive or incomplete content.
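These document-level checks amount to the following minimal sketch, with the thresholds named above (the actual pipeline uses datatrove's Gopher filters):

```python
def keep_document(text: str,
                  max_bullet_ratio: float = 0.9,
                  max_ellipsis_ratio: float = 0.3) -> bool:
    """Reject documents dominated by bullet-point lines or by lines
    that trail off in ellipses, as in the Gopher quality heuristics."""
    lines = [l.strip() for l in text.splitlines() if l.strip()]
    if not lines:
        return False
    bullet_lines = sum(l.startswith(("-", "•", "*")) for l in lines)
    ellipsis_lines = sum(l.endswith(("...", "…")) for l in lines)
    if bullet_lines / len(lines) > max_bullet_ratio:
        return False
    if ellipsis_lines / len(lines) > max_ellipsis_ratio:
        return False
    return True
```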
3. Language Filtering. Retains only high-confidence Macedonian text.
- Applied the FT176LID model for language identification.
- Excluded non-Macedonian content (documents with a Macedonian confidence score below 0.65).
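The confidence threshold works as sketched below. The `fake_predict` stub is purely illustrative; in the real pipeline, `predict` would wrap the FT176LID fastText model:

```python
def keep_macedonian(text: str, predict, threshold: float = 0.65) -> bool:
    """Keep a document only if the language identifier labels it as
    Macedonian ("mk") with confidence at or above the threshold.
    `predict` maps text -> (language_code, confidence)."""
    lang, confidence = predict(text)
    return lang == "mk" and confidence >= threshold

def fake_predict(text):
    """Stand-in for the real FT176LID model, for illustration only:
    crudely guesses 'mk' when the text is mostly Cyrillic."""
    cyrillic = sum("а" <= ch <= "я" or ch in "ѐѝјљњќѕџ" for ch in text.lower())
    return ("mk", 0.9) if cyrillic > len(text) / 2 else ("en", 0.9)
```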
4. Sentence Deduplication. Removes duplicate sentences to improve dataset quality and reduce over-representation.
- Splits documents into sentences.
- Identifies duplicates using unique sentence signatures.
- Removes flagged duplicates.
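A simplified version of these three steps, using naive regex sentence splitting and exact, normalised signatures (the datatrove implementation is more sophisticated):

```python
import hashlib
import re

def sentence_signature(sentence: str) -> str:
    """Normalise whitespace, case, and trailing punctuation, then hash."""
    norm = " ".join(sentence.lower().split()).rstrip(".!?")
    return hashlib.sha1(norm.encode("utf-8")).hexdigest()

def dedup_sentences(documents):
    """Drop every sentence whose signature has been seen before,
    keeping only its first occurrence across the whole corpus."""
    seen, result = set(), []
    for doc in documents:
        kept = []
        for sentence in re.split(r"(?<=[.!?])\s+", doc):
            sig = sentence_signature(sentence)
            if sig not in seen:
                seen.add(sig)
                kept.append(sentence)
        result.append(" ".join(kept))
    return result
```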
5. PII Filtering.
- Removed all personally identifiable information (PII), including email addresses, IP addresses, and phone numbers.
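Conceptually, the scrubbing looks like this. The regexes below are deliberately simple illustrations, not the exact patterns used in the pipeline:

```python
import re

# Illustrative patterns only; production PII scrubbing needs
# considerably more careful expressions.
EMAIL = re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+")
IPV4 = re.compile(r"\b(?:\d{1,3}\.){3}\d{1,3}\b")
PHONE = re.compile(r"\+?\d[\d\s/-]{7,}\d")

def scrub_pii(text: str) -> str:
    """Replace emails, IPv4 addresses, and phone numbers with placeholders."""
    text = EMAIL.sub("<EMAIL>", text)  # emails first, so their digits are gone
    text = IPV4.sub("<IP>", text)
    text = PHONE.sub("<PHONE>", text)
    return text
```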
6. Text Chunking and Cleaning. Breaks texts into manageable chunks of at most 4000 characters while respecting sentence boundaries; applied only to data sourced from MMORE. This step also corrects common errors identified during qualitative evaluation and deletes specific unwanted text patterns.
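The chunking can be sketched as a greedy sentence packer. This is an illustration of the idea only; sentences longer than the limit would need extra handling:

```python
import re

def chunk_text(text: str, max_chars: int = 4000):
    """Greedily pack whole sentences into chunks of at most `max_chars`
    characters, so no sentence is ever split across two chunks."""
    sentences = re.split(r"(?<=[.!?])\s+", text)
    chunks, current = [], ""
    for sentence in sentences:
        if current and len(current) + 1 + len(sentence) > max_chars:
            chunks.append(current)
            current = sentence
        else:
            current = f"{current} {sentence}" if current else sentence
    if current:
        chunks.append(current)
    return chunks
```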
As a further cleaning step, we performed MinHash Deduplication after step 6. The deduplicated dataset is available here.
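For intuition, MinHash compares compact per-document signatures instead of full texts; the fraction of matching signature slots estimates the Jaccard similarity of the documents' word shingles. A self-contained sketch follows, where the 3-word shingles and 64 hash seeds are illustrative parameters, not those of the actual run:

```python
import hashlib

def shingles(text: str, n: int = 3):
    """Set of n-word shingles of a document."""
    words = text.lower().split()
    return {" ".join(words[i:i + n]) for i in range(len(words) - n + 1)}

def hash_with_seed(shingle: str, seed: int) -> int:
    """Seeded 64-bit hash, emulating one random permutation."""
    digest = hashlib.blake2b(f"{seed}:{shingle}".encode("utf-8"),
                             digest_size=8).digest()
    return int.from_bytes(digest, "big")

def minhash_signature(text: str, num_perm: int = 64):
    """One minimum per seeded hash function over the document's shingles."""
    sh = shingles(text)
    return [min(hash_with_seed(s, seed) for s in sh)
            for seed in range(num_perm)]

def estimated_jaccard(a: str, b: str, num_perm: int = 64) -> float:
    """Fraction of matching signature slots ~ Jaccard similarity."""
    sa, sb = minhash_signature(a, num_perm), minhash_signature(b, num_perm)
    return sum(x == y for x, y in zip(sa, sb)) / num_perm
```

Near-duplicate documents share most shingles and therefore most signature slots, so they score high and one copy can be dropped.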
The implementation with all filtering steps can be found at GitHub.
## Dataset Sources
The corpus is built by collecting and processing data from the following sources:
Source | Notes | Origin |
---|---|---|
UKIM | Books and dissertations from various topics | UKIM Digital Library, UKIM Repository |
Wikipedia (MK) | Macedonian Wikipedia dump | Wikipedia |
MANU | Various publications from MANU | MANU |
HuggingFace (fineweb-2) | Macedonian subset of FineWeb-2 (mkd_Cyrl) | Hugging Face |
Common Voice (MK) | Macedonian sentences from the Common Voice dataset | Common Voice |
CLARIN MaCoCu-mk 2.0 | Web-crawled Macedonian texts | CLARIN |
UKLO | Resources from UKLO (Academic repository) | UKLO |
UGD | Resources from UGD (Academic repository) | UGD |
SETimes Corpus (MK-EN) | Macedonian-English parallel corpus (only MK sentences used) | SETimes |
HPLT-2 (MK) | Macedonian subset of HPLT-2 | HPLT |
Institute of Macedonian Language | Resources from the Institute of Macedonian Language "Krste Misirkov" | IMJ |
Official PE Gazette of North Macedonia | Official Gazette of North Macedonia | slvesnik |
## Dataset Splits
The corpus is divided into the following categories based on the origin of the data:
Origin | Size (GB) | Words (B) | Percentage |
---|---|---|---|
HPLT-2 | 15.51 | 1.45 | 43.72% |
HuggingFace (fineweb-2) | 14.13 | 1.31 | 39.62% |
CLARIN (MaCoCu-mk 2.0) | 5.14 | 0.48 | 14.57% |
Wikipedia | 0.64 | 0.06 | 1.78% |
Other (MMORE) | 0.04 | 0.004 | 0.12% |
SETimes Corpus | 0.06 | 0.004 | 0.13% |
Common Voice | 0.02 | 0.002 | 0.05% |
Total | 35.54 | 3.31 | 100% |
## Usage
This corpus is intended to support a variety of use cases, including but not limited to:
- **Pretraining or fine-tuning LLMs:** the corpus can be used to pretrain or fine-tune LLMs for the Macedonian language, enabling tasks like text generation, language understanding, and question answering.
- **Linguistic analysis:** researchers can use the corpus to study the morphology, syntax, and semantics of Macedonian, contributing to both academic studies and computational linguistics.
- **Machine translation:** the corpus can serve as a valuable resource for developing or improving machine translation systems between Macedonian and other languages.
- **Document retrieval and search:** it can be used to build and evaluate information retrieval systems, such as search engines.
The corpus is provided as a JSONL file, where each line contains two fields:
- `text`: the raw textual data.
- `source`: the source of the text.
```json
{"text": "Пример текст.", "source": "fineweb-2"}
```
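A minimal way to stream the file record by record, assuming it has been downloaded locally (alternatively, the `datasets` library can stream it directly with `load_dataset("LVSTCK/macedonian-corpus-cleaned", streaming=True)`):

```python
import json

def iter_corpus(path):
    """Yield (text, source) pairs from the corpus JSONL file without
    loading the whole ~35 GB file into memory."""
    with open(path, encoding="utf-8") as f:
        for line in f:
            if line.strip():
                record = json.loads(line)
                yield record["text"], record["source"]
```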
## Acknowledgments
We acknowledge the contributions of the following organizations and projects:
- MMORE for text extraction from PDFs.
- Hugging Face for the Macedonian subset of the FineWeb-2 dataset.
- HPLT for the Macedonian subset of their dataset.
- CLARIN for the MaCoCu-mk 2.0 dataset.
- UKIM (Ss. Cyril and Methodius University in Skopje) for providing access to their library, dissertations, and archival resources.
- UGD (Goce Delchev University, Shtip) for contributing academic and research materials.
- MANU (Macedonian Academy of Sciences and Arts) for their publications, digital resources, and historical archives.
- All other sources listed above for their contributions to this corpus.
## How to Contribute?
You can contribute to the Macedonian Corpus in the following ways:
**Digitalize Books and Materials:**
- Contribute by digitalizing books, documents, and other materials that are legally in the public domain. These digitalized materials can be used to expand the datasets.
- Ensure that the materials you contribute comply with copyright laws and are explicitly permitted for public use.
**Expand Data Collection:**
- Share other forms of Macedonian-language text data, such as articles, essays, or transcripts, that can legally be used for training or evaluating language models.
**Encourage Institutional Participation:**
- We hope this initiative inspires institutions in Macedonia, such as libraries, universities, and research centers, to take part in the digitalization of Macedonian-language materials.
- The availability of such materials will enable the development of specialized software tailored to the needs of Macedonian speakers and researchers.
## Contact
For inquiries, feedback, or contributions, please feel free to reach out to the core team:
## Special Thanks
A big thank you also to the following individuals:
## Legal

### Notice and Takedown Policy
We adhere strictly to copyright and data ownership laws. If you identify material in the corpus that infringes on your rights, please contact us following the steps detailed in this section so that it can be reviewed and, where appropriate, removed.
### License
Creative Commons Attribution 4.0 (CC BY 4.0)