Datasets:
Improved description in Source Data section
README.md CHANGED
@@ -69,9 +69,12 @@ This dataset is aimed at promoting the development of Machine Translation betwee
 The dataset is a combination of the following original datasets collected from [Opus](https://opus.nlpl.eu/):
 NLLB, MultiCCAligned, WikiMatrix, GNOME, KDE4, OpenSubtitles, GlobalVoices.
 
-All
-
-
+All data was filtered according to two criteria:
+- Alignment: sentence-level alignment was scored using [LaBSE](https://huggingface.co/sentence-transformers/LaBSE), and sentence pairs with a score below 0.75 were discarded.
+
+- Language identification: the probability of each sentence being in the target language was estimated using [Lingua.py](https://github.com/pemistahl/lingua-py), and sentences with a probability below 0.5 were discarded.
+
+The filtered datasets were then concatenated and deduplicated to form the final corpus.
 
 #### Who are the source language producers?
 
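The filter-then-merge step described in the added lines can be sketched as follows. This is a minimal illustration, not the dataset's actual build script: it assumes the LaBSE alignment score and the Lingua.py language probability have already been computed for each pair, and all field and function names (`filter_pairs`, `build_corpus`, `alignment_score`, `lang_prob`) are hypothetical.

```python
# Hypothetical sketch of the filtering + deduplication step.
# Thresholds come from the README: LaBSE alignment >= 0.75,
# target-language probability >= 0.5.

ALIGNMENT_THRESHOLD = 0.75
LANGUAGE_THRESHOLD = 0.5


def filter_pairs(pairs):
    """Keep only sentence pairs that satisfy both criteria.

    Each pair is a dict with keys: src, tgt, alignment_score, lang_prob
    (scores assumed precomputed with LaBSE and Lingua.py).
    """
    return [
        p for p in pairs
        if p["alignment_score"] >= ALIGNMENT_THRESHOLD
        and p["lang_prob"] >= LANGUAGE_THRESHOLD
    ]


def build_corpus(datasets):
    """Concatenate the filtered datasets and deduplicate on (src, tgt)."""
    seen, corpus = set(), []
    for pairs in datasets:
        for p in filter_pairs(pairs):
            key = (p["src"], p["tgt"])
            if key not in seen:
                seen.add(key)
                corpus.append(p)
    return corpus
```

In practice the alignment score would be the cosine similarity between LaBSE embeddings of the source and target sentences, and `lang_prob` the confidence value returned by Lingua.py's detector; both are expensive to compute, which is why this sketch treats them as precomputed inputs.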