---
language:
- mk
tags:
- macedonian
- text
- corpus
- cleaned
datasets:
- LVSTCK/macedonian-corpus-cleaned
license: cc-by-4.0
---

# Macedonian Corpus - Cleaned 
[raw version here](https://huggingface.co/datasets/LVSTCK/macedonian-corpus-raw)

## 🌟 Key Highlights
- **Size**: 35.5 GB, **Word Count**: 3.31 billion
- Filtered for **irrelevant and low-quality content** using C4 and Gopher filtering.
- Includes text from **10+ sources**, including fineweb-2, HPLT-2, and Wikipedia.

## 📋 Overview
Macedonian is widely recognized as a low-resource language in the field of NLP. Publicly available resources in Macedonian are extremely limited, and as far as we know, no consolidated resource encompassing all available public data exists. Another challenge is the state of digitalized books and documents in Macedonia: the country lags behind in this regard, with many books and documents existing only as scanned images. This makes it difficult to extract textual information, which is critical for advancing linguistic research, education, and NLP applications in the Macedonian language. To address these challenges, we created this **Macedonian Corpus**, which consolidates multiple sources of Macedonian text data, including books, academic papers, web content, and other textual resources.

This version of the corpus is **cleaned**, meaning the data has been filtered to ensure high-quality text for NLP tasks. The filtering was done using [datatrove](https://github.com/huggingface/datatrove), mainly motivated by [fineweb-2](https://github.com/huggingface/fineweb-2), but with slightly less aggressive settings to retain a broader range of text sources.
  
The pipeline applies heuristic rules derived from the [C4](https://jmlr.org/papers/volume21/20-074/20-074.pdf) and [Gopher](https://arxiv.org/pdf/2112.11446.pdf) quality filters. For the specific filtering code used, see the datatrove implementations of the [C4 filters](https://github.com/huggingface/datatrove/blob/47379fd9783a8731b6d595470d8b06af7da17e83/src/datatrove/pipeline/filters/c4_filters.py#L27) and the [Gopher quality filters](https://github.com/huggingface/datatrove/blob/47379fd9783a8731b6d595470d8b06af7da17e83/src/datatrove/pipeline/filters/gopher_quality_filter.py#L13). For those interested in applying custom filtering, the raw dataset is available at [macedonian-corpus-raw](https://huggingface.co/datasets/LVSTCK/macedonian-corpus-raw).
  
  **1. C4-like Filtering.** Removes irrelevant or low-quality lines based on content (e.g., "javascript", "lorem ipsum") and structural rules (e.g., minimum word count, terminal punctuation).
  - Removed lines containing irrelevant content such as "javascript", or lines with any word exceeding 1000 characters.
  - Excluded placeholder content like "lorem ipsum" and policy-related phrases such as "privacy policy" or "terms of use."
  - Filtered out lines with fewer than 3 words or lacking terminal punctuation (e.g., ., ?, !).
  
  **2. Gopher-like Filtering.** Filters out documents with excessive bullet points or repetitive ellipses to ensure completeness.
  - Limited the presence of bullet points by rejecting documents where more than 90% of lines started with bullet-like characters (e.g., -, •, *).
  - Filtered out documents where more than 30% of lines ended with ellipses (...) to avoid overly repetitive or incomplete content.
  
  **3. Language Filtering.** Retains only high-confidence Macedonian text.
  - Applied the FT176LID model to detect and retain only high-confidence Macedonian text.
  - Excluded non-Macedonian content (language confidence score below 0.65).
  
  **4. Sentence Deduplication.** Removes duplicate sentences to improve dataset quality and reduce over-representation.
  - Split documents into sentences.
  - Identified duplicates using unique sentence signatures.
  - Removed flagged duplicates.
  
  **5. PII Filtering.**
  - Removed all Personally Identifiable Information (PII), including email addresses, IP addresses, and phone numbers.

  **6. Text Chunking and Cleaning.** Breaks texts into manageable chunks of at most 4000 characters while respecting sentence boundaries; applied only to data sourced from MMORE. This step also corrects common errors identified during qualitative evaluation and deletes specific unwanted text patterns (a sketch of the chunking logic follows below).
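
A minimal sketch of the chunking logic in step 6, assuming a naive regex sentence splitter (the actual implementation may split sentences differently); `chunk_text` is a hypothetical helper name, not the function used to build the corpus:

```python
import re

MAX_CHARS = 4000

def chunk_text(text: str, max_chars: int = MAX_CHARS) -> list[str]:
    # Naive sentence split on terminal punctuation followed by whitespace.
    sentences = re.split(r"(?<=[.!?])\s+", text.strip())
    chunks, current = [], ""
    for sentence in sentences:
        # Start a new chunk if adding this sentence would exceed the limit.
        # A single sentence longer than max_chars is kept whole here.
        if current and len(current) + 1 + len(sentence) > max_chars:
            chunks.append(current)
            current = sentence
        else:
            current = f"{current} {sentence}".strip()
    if current:
        chunks.append(current)
    return chunks
```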


As a further cleaning step, we performed MinHash Deduplication after step 6. The deduplicated dataset is available [here](https://huggingface.co/datasets/LVSTCK/macedonian-corpus-cleaned-dedup).
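For intuition, the sketch below is a toy illustration of the MinHash idea, not the implementation used for the corpus: the Jaccard similarity of two documents' word shingles is approximated by the fraction of positions where their per-seed minimum hashes agree.

```python
import hashlib

def shingles(text: str, n: int = 3) -> set[str]:
    # Overlapping word n-grams; assumes the text has at least n words.
    words = text.split()
    return {" ".join(words[i:i + n]) for i in range(len(words) - n + 1)}

def minhash_signature(text: str, num_perm: int = 64) -> list[int]:
    # One salted hash per "permutation"; keep the minimum over all shingles.
    return [
        min(int(hashlib.sha1(f"{seed}:{s}".encode()).hexdigest(), 16)
            for s in shingles(text))
        for seed in range(num_perm)
    ]

def estimate_similarity(a: str, b: str) -> float:
    # Fraction of agreeing positions approximates Jaccard similarity.
    sig_a, sig_b = minhash_signature(a), minhash_signature(b)
    return sum(x == y for x, y in zip(sig_a, sig_b)) / len(sig_a)
```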

The implementation with all filtering steps can be found at [GitHub](https://github.com/LVSTCK/macedonian-corpus/blob/main/filtering).
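
For orientation, here is a minimal sketch of how steps 1, 2, 3, and 5 could be expressed as a datatrove pipeline. The class names come from the datatrove repository linked above, but the parameters, input/output paths, and executor settings are illustrative assumptions, not the configuration actually used; sentence deduplication (step 4) runs as a separate multi-stage pipeline and is omitted here.

```python
# Illustrative sketch only: paths, task counts, and filter parameters are
# assumptions, not the configuration used to build this corpus.
from datatrove.executor import LocalPipelineExecutor
from datatrove.pipeline.filters import (
    C4QualityFilter,
    GopherQualityFilter,
    LanguageFilter,
)
from datatrove.pipeline.formatters import PIIFormatter
from datatrove.pipeline.readers import JsonlReader
from datatrove.pipeline.writers.jsonl import JsonlWriter

executor = LocalPipelineExecutor(
    pipeline=[
        JsonlReader("data/raw/"),      # hypothetical input folder
        C4QualityFilter(),             # step 1: C4-like line-level rules
        GopherQualityFilter(),         # step 2: bullet/ellipsis document rules
        LanguageFilter(languages=["mk"], language_threshold=0.65),  # step 3
        PIIFormatter(),                # step 5: scrub emails, IPs, etc.
        JsonlWriter("data/cleaned/"),  # hypothetical output folder
    ],
    tasks=4,
)
executor.run()
```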

## 📚 Dataset Sources
The corpus is built by collecting and processing data from the following sources:

| **Source**                           | **Notes**                                                       | **Origin**                                                              |
|--------------------------------------|-----------------------------------------------------------------|-----------------------------------------------------------------------|
| UKIM                                 | Books and dissertations from various topics                    | [UKIM Digital Library](https://ukim.edu.mk/en/nauka/infrastruktura/digitalna-biblioteka/), [UKIM Repository](https://repository.ukim.mk/) |
| Wikipedia (MK)                            | Macedonian Wikipedia dump                                       | [Wikipedia](https://mk.wikipedia.org)                                   |
| MANU                                 | Various publications from MANU                                  | [MANU](https://manu.edu.mk/)                                            |
| HuggingFace (fineweb-2)                            | Macedonian subset of FineWeb-2 (mkd_Cyrl)                       | [Hugging Face](https://huggingface.co/datasets/HuggingFaceFW/fineweb-2) |
| Common Voice (MK)                        | Macedonian sentences from the Common Voice dataset              | [Common Voice](https://commonvoice.mozilla.org/en)                      |
| CLARIN MaCoCu-mk 2.0                  | Web-crawled Macedonian texts                                    | [CLARIN](https://www.clarin.si/repository/xmlui/handle/11356/1801)      |
| UKLO                                 | Resources from UKLO (Academic repository)                                            | [UKLO](https://uklo.edu.mk/?lang=en)                                    |
| UGD                                  | Resources from UGD (Academic repository)                                             | [UGD](https://www.ugd.edu.mk/en/home/)                                  |
| SETimes Corpus (MK-EN)               | Macedonian-English parallel corpus (only MK sentences used)     | [SETimes](https://github.com/stefan-it/nmt-en-mk?tab=readme-ov-file)    |
| HPLT-2 (MK)                            | Macedonian subset of HPLT-2                                       | [HPLT](https://hplt-project.org/datasets/v2.0)                          |
| Institute of Macedonian Language     | Resources from the Institute of Macedonian Language "Krste Misirkov" | [IMJ](http://imj.ukim.edu.mk/)                                          |
| Official PE Gazette of North Macedonia | Official Gazette of North Macedonia                             | [slvesnik](https://www.slvesnik.com.mk/besplaten-pristap-do-izdanija.nspx) |

### Dataset Splits
The corpus is divided into the following categories based on the origin of the data:

| Origin         | Size (GB) | Words (B) | Percentage |
|----------------|-----------|-----------|------------|
| HPLT-2                  | 15.51     | 1.45       | 43.72%     |
| HuggingFace (fineweb-2) | 14.13     | 1.31       | 39.62%     |
| CLARIN (MaCoCu-mk 2.0)  | 5.14      | 0.48       | 14.57%     |
| Wikipedia             | 0.64      | 0.06       | 1.78%      |
| SETimes Corpus        | 0.06      | 0.004      | 0.13%      |
| Other (MMORE)         | 0.04      | 0.004      | 0.12%      |
| Common Voice          | 0.02      | 0.002      | 0.05%      |
| **Total**             | **35.54** | **3.31**   | **100%**   |


---
## ⚙️ Usage

This corpus is intended to support a variety of use cases, including but not limited to:

- **Pretraining or Fine-tuning LLMs:** The corpus can be used to pretrain or fine-tune LLMs specifically for the Macedonian language, enabling tasks like text generation, language understanding, and question answering.

- **Linguistic Analysis:** Researchers can use the corpus to study the morphology, syntax, and semantics of the Macedonian language, contributing to both academic studies and computational linguistic advancements.

- **Machine Translation:** The corpus can serve as a valuable resource for developing or improving machine translation systems between Macedonian and other languages.

- **Document Retrieval and Search:** It can be used to build and evaluate information retrieval systems, such as search engines.

The corpus is provided as a JSONL file, where each line contains two fields:
- `text`: The raw textual data.
- `source`: The source of the text.

```json
{"text": "Пример текст.", "source": "fineweb-2"}
```
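
For example, assuming the standard Hugging Face `datasets` library, the corpus can be streamed without downloading all ~35 GB up front:

```python
from datasets import load_dataset

# Stream the corpus; each record has "text" and "source" fields.
dataset = load_dataset(
    "LVSTCK/macedonian-corpus-cleaned", split="train", streaming=True
)
for example in dataset:
    print(example["source"], example["text"][:100])
    break
```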

---

## 🙏 Acknowledgments
We acknowledge the contributions of the following organizations and projects:
- **MMORE** for text extraction from PDFs.
- **Hugging Face** for the Macedonian subset of the FineWeb-2 dataset.
- **HPLT** for the Macedonian subset of their dataset.
- **CLARIN** for the MaCoCu-mk 2.0 dataset.
- **UKIM (Ss. Cyril and Methodius University in Skopje)** for providing access to their library, dissertations, and archival resources.
- **UGD (Goce Delchev University, Shtip)** for contributing academic and research materials.
- **MANU (Macedonian Academy of Sciences and Arts)** for their publications, digital resources, and historical archives.
- All other sources listed above for their contributions to this corpus.

## 🤝 How to Contribute?
You can contribute to the Macedonian corpus in the following ways:

1. **Digitalize Books and Materials**:  
   - Contribute by digitalizing books, documents, and other materials that are legally in the public domain. These digitalized materials can be used to expand the datasets.  
   - Ensure that the materials you contribute comply with copyright laws and are explicitly permitted for public use.

2. **Expand Data Collection**:  
   - Share other forms of Macedonian-language text data, such as articles, essays, or transcripts, that can legally be used for training or evaluating language models.  

3. **Encourage Institutional Participation**:  
   - We hope this initiative inspires institutions in Macedonia, such as libraries, universities, and research centers, to take part in the digitalization of Macedonian-language materials.  
   - The availability of such materials will enable the development of specialized software tailored to the needs of Macedonian speakers and researchers.

## 📬 Contact

For inquiries, feedback, or contributions, please feel free to reach out to the core team:

- [Stefan Krsteski](https://www.linkedin.com/in/stefan-krsteski-136abb235/) [📧](mailto:[email protected])
- [Borjan Sazdov](https://www.linkedin.com/in/borjan-sazdov-4b2187211/) [📧](mailto:[email protected])
- [Matea Tashkovska](https://www.linkedin.com/in/matea-tashkovska-774603198/) [📧](mailto:[email protected])

### 🎉 Special Thanks
Also a big thank you to the following individuals:

- [Said Gürbüz](https://www.linkedin.com/in/saidgurbuz/?originalSubdomain=tr)
- [Vinko Sabolcec](https://huggingface.co/vsabolcec)

## ⚖️ Legal

### Notice and Takedown Policy
We adhere strictly to copyright and data ownership laws. If you identify any material within the corpus that infringes on your rights, please reach out using the contact details above so that it can be reviewed and, if necessary, removed.

### License
Creative Commons Attribution 4.0 (CC BY 4.0)