## Overview

Macedonian is widely recognized as a low-resource language in NLP. Publicly available resources in Macedonian are extremely limited, and, as far as we know, no consolidated resource encompassing all available public data exists. Another challenge is the state of digitized books and documents in Macedonia: the country lags behind in this regard, and many books and documents exist only as scanned images. This makes it difficult to extract textual information, which is critical for advancing linguistic research, education, and NLP applications in the Macedonian language. To address these challenges, we created this **Macedonian Corpus**, which consolidates multiple sources of Macedonian text data, including books, academic papers, web content, and other textual resources.

This version of the corpus is **cleaned**, meaning the data has been filtered to ensure high-quality text for NLP tasks. The filtering was done using [datatrove](https://github.com/huggingface/datatrove), mainly motivated by [fineweb-2](https://github.com/huggingface/fineweb-2), but with slightly less aggressive settings to retain a broader range of text sources. The cleaning consisted of:

### Stage 1: Initial Filtering
This stage applies heuristic rules derived from the [C4](https://jmlr.org/papers/volume21/20-074/20-074.pdf) and [Gopher](https://arxiv.org/pdf/2112.11446.pdf) quality filters. For the specific filtering code used in our process, see the datatrove implementations of the [C4 filters](https://github.com/huggingface/datatrove/blob/47379fd9783a8731b6d595470d8b06af7da17e83/src/datatrove/pipeline/filters/c4_filters.py#L27) and the [Gopher quality filters](https://github.com/huggingface/datatrove/blob/47379fd9783a8731b6d595470d8b06af7da17e83/src/datatrove/pipeline/filters/gopher_quality_filter.py#L13). For those interested in applying custom filtering, the raw dataset is available at [macedonian-corpus-raw](https://huggingface.co/datasets/LVSTCK/macedonian-corpus-raw).
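
Concretely, a minimal datatrove pipeline wiring these two filters together might look like the sketch below. This is an illustration rather than our exact configuration: the paths and task count are placeholders, and both filters are shown with their default thresholds instead of the slightly less aggressive settings described above.

```python
from datatrove.executor import LocalPipelineExecutor
from datatrove.pipeline.filters import C4QualityFilter, GopherQualityFilter
from datatrove.pipeline.readers import JsonlReader
from datatrove.pipeline.writers import JsonlWriter

# Read raw .jsonl documents, apply C4-style line rules and Gopher-style
# document-quality rules, then write the surviving documents back out.
executor = LocalPipelineExecutor(
    pipeline=[
        JsonlReader("data/raw/"),       # placeholder input folder
        C4QualityFilter(),              # line-level content/structure rules
        GopherQualityFilter(),          # document-level quality heuristics
        JsonlWriter("data/filtered/"),  # placeholder output folder
    ],
    tasks=4,  # parallel local tasks; tune to your machine
)
executor.run()
```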
**1. C4-like Filtering.** Removes irrelevant or low-quality lines based on content (e.g., "javascript", "lorem ipsum") and structural rules (e.g., minimum word count, terminal punctuation).
- Removed lines containing irrelevant content such as "javascript" or lines with any word exceeding 1000 characters.

**5. PII Filtering.**
- Removed all Personally Identifiable Information (PII), including email addresses, IP addresses, and phone numbers.
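
The exact PII patterns are not listed here, so the sketch below only illustrates this kind of regex-based scrubbing; the patterns and placeholder tokens are assumptions, and production pipelines typically use more robust rules.

```python
import re

# Illustrative patterns only; the corpus' actual PII filters may differ.
EMAIL_RE = re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+")
IPV4_RE = re.compile(r"\b(?:\d{1,3}\.){3}\d{1,3}\b")
PHONE_RE = re.compile(r"\+?\d[\d\s/().-]{7,}\d")

def scrub_pii(text: str) -> str:
    """Replace emails, IPv4 addresses, and phone-like numbers with placeholders."""
    text = EMAIL_RE.sub("<EMAIL>", text)
    text = IPV4_RE.sub("<IP>", text)
    text = PHONE_RE.sub("<PHONE>", text)
    return text

print(scrub_pii("Контакт: ana@example.mk, тел. +389 2 312 4567"))
# Контакт: <EMAIL>, тел. <PHONE>
```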
**6. Text Chunking and Cleaning.** Breaks texts into manageable chunks of at most 4000 characters; applied only to data sourced from MMORE. This step also corrects common errors identified during qualitative evaluation and deletes specific unwanted patterns from the texts.
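
The description fixes only the 4000-character bound, so the split strategy below (preferring paragraph boundaries, with a hard split as fallback) is an assumption:

```python
def chunk_text(text: str, max_chars: int = 4000) -> list[str]:
    """Split text into chunks of at most max_chars, preferring paragraph breaks."""
    chunks: list[str] = []
    current = ""
    for para in text.split("\n\n"):
        candidate = f"{current}\n\n{para}" if current else para
        if len(candidate) <= max_chars:
            current = candidate
            continue
        if current:
            chunks.append(current)
        # Fallback: hard-split a single paragraph longer than max_chars.
        while len(para) > max_chars:
            chunks.append(para[:max_chars])
            para = para[max_chars:]
        current = para
    if current:
        chunks.append(current)
    return chunks
```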
As a further cleaning step, we performed MinHash deduplication after step 6. The deduplicated dataset is available [here]().
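
The deduplication setup is not spelled out here (datatrove ships a multi-stage MinHash pipeline for this), but the core idea can be illustrated compactly with the `datasketch` library. The 0.8 similarity threshold, the 5-word shingles, and the `documents` list are all illustrative assumptions:

```python
from datasketch import MinHash, MinHashLSH

def signature(text: str, num_perm: int = 128) -> MinHash:
    """Build a MinHash signature over word 5-gram shingles."""
    words = text.split()
    m = MinHash(num_perm=num_perm)
    for i in range(max(1, len(words) - 4)):
        m.update(" ".join(words[i : i + 5]).encode("utf8"))
    return m

documents = ["...", "..."]  # illustrative list of document strings

# Keep a document only if no already-kept document is a near-duplicate.
lsh = MinHashLSH(threshold=0.8, num_perm=128)
kept = []
for doc_id, text in enumerate(documents):
    sig = signature(text)
    if not lsh.query(sig):  # query returns keys of near-duplicate signatures
        lsh.insert(str(doc_id), sig)
        kept.append(text)
```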
---
The corpus is divided into the following categories based on the origin of the data:

| Origin                  | Size (GB) | Words (B) | Percentage |
|-------------------------|-----------|-----------|------------|
| HPLT                    | 15.51     | 1.45      | 43.72%     |
| HuggingFace (fineweb-2) | 14.13     | 1.31      | 39.62%     |
| CLARIN (MaCoCu-mk 2.0)  | 5.14      | 0.48      | 14.57%     |
| Wikipedia               | 0.64      | 0.06      | 1.78%      |
| Other (MMORE)           | 0.04      | 0.004     | 0.12%      |
| SETimes Corpus          | 0.06      | 0.004     | 0.13%      |
| Common Voice            | 0.02      | 0.002     | 0.05%      |
| **Total**               | **35.54** | **3.31**  | **100%**   |

---