Update README.md
README.md CHANGED
@@ -18,10 +18,13 @@ Macedonian is widely recognized as a low-resource language in the field of NLP.

To address these challenges, we created this **Macedonian Corpus**. This corpus consolidates multiple sources of Macedonian text data, including books, academic papers, web content, and other textual resources.

This version of the corpus is **cleaned**, meaning the data has been subjected to rigorous filtering to ensure high-quality text for NLP tasks. The filtering was done using [datatrove](https://github.com/huggingface/datatrove), mainly motivated by [fineweb-2](https://github.com/huggingface/fineweb-2), but with slightly less aggressive settings to retain a broader range of text sources. The following methodologies were applied during the cleaning process:

**TODO: cite C4, Gopher. Also link the github filtering code. Link to the raw dataset in case someone wants to apply custom filtering.**

**0. Chunking**

- Some documents in the raw dataset are extremely long, spanning hundreds of pages, which causes problems in later processing stages. To address this, the data processing pipeline includes a chunking mechanism that segments these documents into smaller, more manageable parts. This is especially important for documents from the source labeled "MMORE," which typically include books and manuscripts. The chunker groups a set number of sentences (default 100) into each chunk, and each chunk is treated as a standalone document with a unique identifier derived from the original document's ID.
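
A chunker along these lines might look like the following sketch (the function name, the naive sentence splitter, and the ID scheme are illustrative assumptions, not the pipeline's actual code):

```python
import re

def chunk_document(doc_id: str, text: str, sentences_per_chunk: int = 100):
    """Split a long document into chunks of roughly `sentences_per_chunk`
    sentences; each chunk gets an ID derived from the source document's ID."""
    # Naive sentence split on terminal punctuation; the real pipeline may
    # use a proper sentence tokenizer instead.
    sentences = re.split(r"(?<=[.!?])\s+", text.strip())
    chunks = []
    for i in range(0, len(sentences), sentences_per_chunk):
        chunks.append({
            "id": f"{doc_id}_chunk_{i // sentences_per_chunk}",
            "text": " ".join(sentences[i:i + sentences_per_chunk]),
        })
    return chunks
```

Each returned dict is then processed as an independent document downstream.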

**1. C4-like Filtering**

- Removed lines containing irrelevant content such as "javascript" or lines with any word exceeding 1000 characters.
- Excluded placeholder content like "lorem ipsum" and policy-related phrases such as "privacy policy" or "terms of use."
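
The line-level rules above could be sketched roughly as follows (the phrase list, threshold, and function names are illustrative; the actual filtering was done through datatrove):

```python
# Illustrative phrase list and word-length threshold; the corpus itself
# was filtered with datatrove's C4-style components.
BAD_PHRASES = ("javascript", "lorem ipsum", "privacy policy", "terms of use")
MAX_WORD_LENGTH = 1000

def keep_line(line: str) -> bool:
    """Return False for lines a C4-style line filter would drop."""
    lowered = line.lower()
    if any(phrase in lowered for phrase in BAD_PHRASES):
        return False
    if any(len(word) > MAX_WORD_LENGTH for word in line.split()):
        return False
    return True

def filter_document(text: str) -> str:
    """Keep only the lines that pass the line-level checks."""
    return "\n".join(line for line in text.splitlines() if keep_line(line))
```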

@@ -43,6 +46,7 @@

- IP addresses
- Phone numbers
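
A rough sketch of this kind of anonymization using regular expressions (the patterns and placeholder tokens here are illustrative assumptions, not the exact ones used):

```python
import re

# Deliberately simple patterns; production pipelines use more careful PII rules.
IP_RE = re.compile(r"\b(?:\d{1,3}\.){3}\d{1,3}\b")
PHONE_RE = re.compile(r"\+?\d[\d\s\-/]{6,}\d")

def anonymize(text: str) -> str:
    """Replace IP addresses and phone-number-like digit runs with placeholders."""
    text = IP_RE.sub("<IP_ADDRESS>", text)  # IPs first, so their digits
    text = PHONE_RE.sub("<PHONE>", text)    # cannot also match as phone numbers
    return text
```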

As this is only **version 1** of the corpus, we aim to involve the community in expanding and improving it in future iterations. Contributions could include sending books (in formats such as **PDF, DOCX, PPTX, or audiobooks**) written in clean Macedonian, or submitting other textual resources that align with the goal of expanding the corpus. If you'd like to contribute or have suggestions for improvement, please feel free to reach out!

## Dataset Sources