# Macedonian Corpus - Cleaned
## Overview
Macedonian is widely recognized as a low-resource language in the field of NLP. Publicly available resources in Macedonian are extremely limited, and as far as we know, no consolidated resource encompassing all available public data exists. Another challenge is the state of digitized books and documents in Macedonia. The country lags behind in this regard, with many books and documents existing only as scanned images. This makes it difficult to extract textual information, which is critical for advancing linguistic research, education, and NLP applications in the Macedonian language.

To address these challenges, we created this **Macedonian Corpus**. This corpus consolidates multiple sources of Macedonian text data, including books, academic papers, web content, and other textual resources.
This version of the corpus is **cleaned**, meaning the data has been subjected to rigorous filtering to ensure high-quality text for NLP tasks. The filtering was done using [datatrove](https://github.com/huggingface/datatrove), largely inspired by [fineweb-2](https://github.com/huggingface/fineweb-2), but with slightly less aggressive settings to retain a broader range of text sources. The following methodologies were applied during the cleaning process:
This implementation applies heuristic quality rules derived from the [C4](https://jmlr.org/papers/volume21/20-074/20-074.pdf) and [Gopher](https://arxiv.org/pdf/2112.11446.pdf) filters. For the specific filtering code used in our pipeline, see the datatrove implementations of the [C4 filters](https://github.com/huggingface/datatrove/blob/47379fd9783a8731b6d595470d8b06af7da17e83/src/datatrove/pipeline/filters/c4_filters.py#L27) and the [Gopher quality filters](https://github.com/huggingface/datatrove/blob/47379fd9783a8731b6d595470d8b06af7da17e83/src/datatrove/pipeline/filters/gopher_quality_filter.py#L13). For those interested in applying custom filtering, the raw dataset is available at [macedonian-corpus-raw](https://huggingface.co/datasets/LVSTCK/macedonian-corpus-raw).
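For illustration only, a minimal datatrove pipeline chaining these two filter families might look like the sketch below. The input/output paths, executor settings, and the default filter thresholds are assumptions and do not reflect the exact (slightly relaxed) configuration used for this corpus.

```python
# Illustrative sketch: a small datatrove pipeline applying the C4 and Gopher
# quality filters referenced above. Paths and executor settings are
# hypothetical; the corpus itself was built with slightly relaxed thresholds.
from datatrove.executor import LocalPipelineExecutor
from datatrove.pipeline.readers import JsonlReader
from datatrove.pipeline.writers import JsonlWriter
from datatrove.pipeline.filters import C4QualityFilter, GopherQualityFilter

pipeline = [
    JsonlReader("data/macedonian-corpus-raw"),      # hypothetical input folder
    C4QualityFilter(),     # C4 heuristics (terminal punctuation, short/boilerplate lines, ...)
    GopherQualityFilter(), # Gopher heuristics (document length, symbol/word ratio, stop words, ...)
    JsonlWriter("data/macedonian-corpus-cleaned"),  # hypothetical output folder
]

LocalPipelineExecutor(pipeline=pipeline, tasks=4, workers=4).run()
```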
**0. Chunking**
- In the raw dataset, some documents are extremely long, spanning hundreds of pages, which causes problems in later processing stages. To address this, a chunking step in the data processing pipeline segments these documents into smaller, more manageable parts. It is applied only to documents from the source "MMORE," which typically consists of books and manuscripts. The chunker divides each long document by grouping a fixed number of sentences (100 by default) into each chunk; each chunk is then treated as a standalone document, with its own unique identifier derived from the original document's ID (a minimal sketch of this idea is shown below).
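The following is a minimal sketch of that chunking idea; the function name, the naive regex sentence splitter, and the dictionary layout are illustrative assumptions rather than the actual pipeline code.

```python
import re

def chunk_document(doc_id: str, text: str, sentences_per_chunk: int = 100) -> list[dict]:
    """Split one long document into standalone chunks of roughly 100 sentences.

    Hypothetical helper: the real pipeline may use a different sentence
    splitter and metadata layout; chunk IDs are derived from the original ID.
    """
    # Naive split on terminal punctuation; a splitter aware of Macedonian
    # abbreviations would be more robust.
    sentences = re.split(r"(?<=[.!?])\s+", text.strip())
    chunks = []
    for i in range(0, len(sentences), sentences_per_chunk):
        chunks.append({
            "id": f"{doc_id}_chunk_{i // sentences_per_chunk}",  # unique per-chunk identifier
            "text": " ".join(sentences[i : i + sentences_per_chunk]),
            "source": "MMORE",  # chunking is only applied to MMORE documents
        })
    return chunks
```

Each returned chunk would then flow through the rest of the cleaning pipeline exactly like any other document.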