## Overview

Macedonian is widely recognized as a low-resource language in the field of NLP. Publicly available resources in Macedonian are extremely limited and, as far as we know, no consolidated resource encompassing all available public data exists. Another challenge is the state of digitized books and documents in Macedonia. The country lags behind in this regard, with many books and documents existing only as scanned images, which makes it difficult to extract the textual information that is critical for advancing linguistic research, education, and NLP applications in the Macedonian language. To address these challenges, we created this **Macedonian Corpus**, which consolidates multiple sources of Macedonian text, including books, academic papers, web content, and other textual resources.

This version of the corpus is **cleaned**: the data has been subjected to rigorous filtering to ensure high-quality text for NLP tasks. The filtering was done using [datatrove](https://github.com/huggingface/datatrove), largely motivated by [fineweb-2](https://github.com/huggingface/fineweb-2), but with slightly less aggressive settings to retain a broader range of text sources. Cleaning was conducted in two stages.

### Stage 1: Initial Filtering

This stage applies heuristic rules derived from the [C4](https://jmlr.org/papers/volume21/20-074/20-074.pdf) and [Gopher](https://arxiv.org/pdf/2112.11446.pdf) quality filters. For the specific filtering code used in our process, see the datatrove implementations of the [C4 filters](https://github.com/huggingface/datatrove/blob/47379fd9783a8731b6d595470d8b06af7da17e83/src/datatrove/pipeline/filters/c4_filters.py#L27) and the [Gopher quality filters](https://github.com/huggingface/datatrove/blob/47379fd9783a8731b6d595470d8b06af7da17e83/src/datatrove/pipeline/filters/gopher_quality_filter.py#L13). For those interested in applying custom filtering, the raw dataset is available at [macedonian-corpus-raw](https://huggingface.co/datasets/LVSTCK/macedonian-corpus-raw).
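
For orientation, here is a minimal sketch of how such a Stage 1 pipeline can be assembled from datatrove's built-in blocks. The input and output paths are placeholders, and exact parameter names may differ between datatrove versions and from our actual `filter.py`:

```python
# Sketch of a Stage 1 filtering pipeline built from datatrove blocks.
# Paths are placeholders; parameter values mirror the thresholds described
# below, but argument names may vary across datatrove versions.
from datatrove.executor import LocalPipelineExecutor
from datatrove.pipeline.filters import (
    C4QualityFilter,
    GopherQualityFilter,
    LanguageFilter,
)
from datatrove.pipeline.readers import JsonlReader
from datatrove.pipeline.writers.jsonl import JsonlWriter

executor = LocalPipelineExecutor(
    pipeline=[
        JsonlReader("data/raw/"),               # read the raw corpus
        C4QualityFilter(
            min_words_per_line=3,               # drop lines with fewer than 3 words
            max_word_length=1000,               # drop lines with a word > 1000 chars
            filter_no_terminal_punct=True,      # drop lines without terminal punctuation
        ),
        GopherQualityFilter(
            max_bullet_lines_ratio=0.9,         # reject docs with >90% bullet lines
            max_ellipsis_lines_ratio=0.3,       # reject docs with >30% "..." lines
        ),
        LanguageFilter(
            languages=["mk"],                   # keep Macedonian only
            language_threshold=0.65,            # minimum language-ID confidence
        ),
        JsonlWriter("data/stage1/"),            # write the filtered documents
    ],
    tasks=4,
)
executor.run()
```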

**1. C4-like Filtering.** Removes irrelevant or low-quality lines based on content (e.g., "javascript", "lorem ipsum") and structural rules (e.g., minimum word count, terminal punctuation); a simplified sketch follows the list.
- Removed lines containing irrelevant content such as "javascript" or lines with any word exceeding 1000 characters.
- Excluded placeholder content like "lorem ipsum" and policy-related phrases such as "privacy policy" or "terms of use."
- Filtered out lines with fewer than 3 words.
- Excluded lines lacking terminal punctuation (e.g., `.`, `?`, `!`).
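
As a rough re-implementation of the line-level rules above (not the datatrove code itself), the checks amount to something like:

```python
# Simplified C4-style line filter; the real C4 filters include more rules
# (citation removal, curly-bracket filtering, minimum sentence counts, ...).
TERMINAL_PUNCT = (".", "?", "!", '"')
BANNED_PHRASES = ("javascript", "lorem ipsum", "privacy policy", "terms of use")

def keep_line(line: str) -> bool:
    """Return True if a line passes all line-level checks."""
    words = line.split()
    if len(words) < 3:                           # too few words
        return False
    if any(len(word) > 1000 for word in words):  # absurdly long "word"
        return False
    if not line.rstrip().endswith(TERMINAL_PUNCT):  # no terminal punctuation
        return False
    lowered = line.lower()
    if any(phrase in lowered for phrase in BANNED_PHRASES):  # boilerplate content
        return False
    return True

def filter_lines(text: str) -> str:
    """Keep only the lines of a document that pass keep_line."""
    return "\n".join(line for line in text.splitlines() if keep_line(line))
```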

**2. Gopher-like Filtering.** Filters out documents with excessive bullet points or repetitive ellipses to ensure completeness (see the sketch after this list).
- Limited the presence of bullet points by rejecting documents where more than 90% of lines started with bullet-like characters (e.g., `-`, `•`, `*`).
- Filtered out documents where more than 30% of lines ended with ellipses (`...`) to avoid overly repetitive or incomplete content.
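
In spirit, these document-level checks reduce to two simple ratios. The following is a simplified sketch, not the datatrove implementation:

```python
BULLET_CHARS = ("-", "•", "*")

def passes_gopher_checks(text: str) -> bool:
    """Return True if a document passes the bullet and ellipsis ratio checks."""
    lines = [line.strip() for line in text.splitlines() if line.strip()]
    if not lines:
        return False
    bullet_ratio = sum(line.startswith(BULLET_CHARS) for line in lines) / len(lines)
    ellipsis_ratio = sum(line.endswith(("...", "…")) for line in lines) / len(lines)
    return bullet_ratio <= 0.9 and ellipsis_ratio <= 0.3
```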

**3. Language Filtering.** Retains only high-confidence Macedonian text (see the sketch below).
- Applied the FT176LID model to detect the language of each document.
- Excluded non-Macedonian content (language confidence score below 0.65).
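
FT176LID is fastText's 176-language identification model. A standalone check along these lines could look as follows, assuming the published `lid.176.bin` model file; the 0.65 threshold is the one described above:

```python
import fasttext

# fastText's 176-language ID model (downloadable from the fastText website).
model = fasttext.load_model("lid.176.bin")

def is_macedonian(text: str, threshold: float = 0.65) -> bool:
    """Return True if the text is detected as Macedonian with enough confidence."""
    # fastText's predict() expects a single line of text.
    labels, scores = model.predict(text.replace("\n", " "))
    return labels[0] == "__label__mk" and scores[0] >= threshold
```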

**4. Sentence Deduplication.** Removes duplicate sentences to improve dataset quality and reduce over-representation (illustrated below).
- Splits documents into sentences.
- Identifies duplicates using unique sentence signatures.
- Removes flagged duplicates.
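
datatrove implements this as a multi-stage signature/find/filter pipeline; the single-process sketch below illustrates the core idea:

```python
import hashlib
import re

seen_signatures: set[str] = set()

def sentence_signature(sentence: str) -> str:
    """Normalize a sentence and hash it into a compact signature."""
    normalized = " ".join(sentence.lower().split())
    return hashlib.sha1(normalized.encode("utf-8")).hexdigest()

def deduplicate_sentences(text: str) -> str:
    """Drop sentences whose signature was already seen earlier in the corpus."""
    # Naive split on terminal punctuation; datatrove uses a proper sentence tokenizer.
    sentences = re.split(r"(?<=[.!?])\s+", text)
    kept = []
    for sentence in sentences:
        signature = sentence_signature(sentence)
        if signature not in seen_signatures:
            seen_signatures.add(signature)
            kept.append(sentence)
    return " ".join(kept)
```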

**5. PII Filtering.** Removes Personally Identifiable Information (illustrated below).
- Removed all PII, including email addresses, IP addresses, and phone numbers.
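
A regex-based redaction pass in this spirit might look like the following. The patterns are deliberately simple and illustrative, not the exact rules we applied:

```python
import re

# Illustrative patterns only; robust PII scrubbing needs more careful rules
# (e.g., validating IP octet ranges and international phone formats).
EMAIL_RE = re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+")
IP_RE = re.compile(r"\b(?:\d{1,3}\.){3}\d{1,3}\b")
PHONE_RE = re.compile(r"\+?\d[\d\s/-]{7,}\d")

def remove_pii(text: str) -> str:
    """Replace emails, IP addresses, and phone numbers with placeholder tokens."""
    text = EMAIL_RE.sub("<EMAIL>", text)
    text = IP_RE.sub("<IP>", text)
    text = PHONE_RE.sub("<PHONE>", text)
    return text
```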

### Stage 2: Text Chunking and MinHash Deduplication

**1. Text Chunking and Cleaning.** Breaks texts into manageable chunks of at most 4000 characters; this step is applied only to data sourced from MMORE. It also corrects common errors identified during qualitative evaluation and deletes specific unwanted patterns from the texts.
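
A sentence-aware chunker in this spirit might look as follows. The 4000-character limit comes from the step above; breaking at sentence boundaries is an illustrative choice:

```python
import re

MAX_CHUNK_CHARS = 4000  # chunk size limit described above

def chunk_text(text: str) -> list[str]:
    """Split a document into chunks of at most MAX_CHUNK_CHARS characters,
    preferring to break at sentence boundaries."""
    sentences = re.split(r"(?<=[.!?])\s+", text)
    chunks, current = [], ""
    for sentence in sentences:
        # Start a new chunk if appending would exceed the limit.
        # (A single sentence longer than the limit is kept whole here.)
        if current and len(current) + 1 + len(sentence) > MAX_CHUNK_CHARS:
            chunks.append(current)
            current = sentence
        else:
            current = f"{current} {sentence}".strip()
    if current:
        chunks.append(current)
    return chunks
```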

**2. MinHash Deduplication.** Removes near-duplicate documents across the corpus by comparing MinHash signatures of their content (sketched below).
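
MinHash deduplication estimates the Jaccard similarity between documents from compact signatures and drops near-duplicates. datatrove ships a multi-stage MinHash pipeline for doing this at scale; the self-contained sketch below shows the core idea on word shingles, with illustrative parameters:

```python
import hashlib
import random

NUM_HASHES = 128   # signature length (illustrative)
SHINGLE_SIZE = 5   # words per shingle (illustrative)

random.seed(0)
# One random 64-bit salt per hash function.
SALTS = [random.getrandbits(64).to_bytes(8, "big") for _ in range(NUM_HASHES)]

def shingles(text: str) -> set[bytes]:
    """Overlapping word n-grams of the document."""
    words = text.lower().split()
    return {" ".join(words[i : i + SHINGLE_SIZE]).encode("utf-8")
            for i in range(max(1, len(words) - SHINGLE_SIZE + 1))}

def minhash_signature(text: str) -> tuple[int, ...]:
    """One min-hash per salted hash function over the document's shingles."""
    doc_shingles = shingles(text)
    return tuple(
        min(int.from_bytes(hashlib.sha1(salt + s).digest()[:8], "big")
            for s in doc_shingles)
        for salt in SALTS
    )

def estimated_jaccard(sig_a: tuple[int, ...], sig_b: tuple[int, ...]) -> float:
    """The fraction of matching slots approximates Jaccard similarity."""
    return sum(a == b for a, b in zip(sig_a, sig_b)) / NUM_HASHES

# Documents whose estimated similarity exceeds a threshold (e.g., 0.8)
# are clustered as near-duplicates, and all but one are dropped.
```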
---
The implementation of all filtering steps can be found on [GitHub](https://github.com/LVSTCK/macedonian-corpus/blob/main/filtering/filter.py).