Improved description in Source data section
README.md CHANGED
@@ -66,12 +66,18 @@ This dataset is aimed at promoting the development of Machine Translation betwee
 
 #### Initial Data Collection and Normalization
 
-The corpus is a combination of the following original datasets collected from [Opus](https://opus.nlpl.eu/):
+The first portion of the corpus is a combination of the following original datasets collected from [Opus](https://opus.nlpl.eu/):
 NLLB, MultiCCAligned, WikiMatrix, GNOME, KDE 4, Open Subtitles.
 
-
-
-
+Additionally, the corpus contains synthetic parallel data generated from a random sample of the Spanish-French corpora
+available on [Opus](https://opus.nlpl.eu/), with the Spanish side translated into Catalan using the [PlanTL es-ca](https://huggingface.co/PlanTL-GOB-ES/mt-plantl-es-ca) model.
+
+All data was filtered according to two criteria:
+- Alignment: sentence-level alignment scores were computed with [LaBSE](https://huggingface.co/sentence-transformers/LaBSE), and sentence pairs with a score below 0.75 were discarded.
+
+- Language identification: the probability of each sentence being in the expected language was estimated with [Lingua.py](https://github.com/pemistahl/lingua-py), and sentences with a probability below 0.5 were discarded.
+
+The filtered datasets were then concatenated and deduplicated to form the final corpus.
 
 #### Who are the source language producers?
 
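As a rough illustration of the synthetic-data step added above, the sketch below translates the Spanish side of sampled es-fr pairs into Catalan. It assumes the PlanTL es-ca checkpoint can be driven through the generic Hugging Face transformers seq2seq classes; the authors' actual toolchain, batching, and generation settings are not stated in this commit, so everything beyond the model identifier is an assumption.

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

# Assumption: the PlanTL es-ca checkpoint loads with the generic seq2seq classes.
MODEL_ID = "PlanTL-GOB-ES/mt-plantl-es-ca"
tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
model = AutoModelForSeq2SeqLM.from_pretrained(MODEL_ID)

def translate_es_to_ca(sentences, batch_size=32):
    """Translate a list of Spanish sentences into Catalan (illustrative settings)."""
    catalan = []
    for start in range(0, len(sentences), batch_size):
        batch = sentences[start:start + batch_size]
        inputs = tokenizer(batch, return_tensors="pt", padding=True, truncation=True)
        outputs = model.generate(**inputs, max_new_tokens=256)
        catalan.extend(tokenizer.batch_decode(outputs, skip_special_tokens=True))
    return catalan

# es_fr_sample is a hypothetical list of (Spanish, French) pairs sampled from Opus;
# pairing the Catalan translations with the untouched French side yields synthetic ca-fr data.
# ca_fr_synthetic = list(zip(translate_es_to_ca([es for es, fr in es_fr_sample]),
#                            [fr for es, fr in es_fr_sample]))
```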
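The two filters could be reproduced along the lines of the following sketch, using sentence-transformers for LaBSE and lingua-py for language identification. Only the tool names and the 0.75 / 0.5 thresholds come from the commit; the language pair (Catalan and French), the use of cosine similarity as the alignment score, and the `keep_pair` helper are illustrative assumptions.

```python
from lingua import Language, LanguageDetectorBuilder
from sentence_transformers import SentenceTransformer, util

# Thresholds taken from the dataset card.
ALIGNMENT_THRESHOLD = 0.75
LANGUAGE_THRESHOLD = 0.5

# LaBSE produces language-agnostic sentence embeddings used here for alignment scoring.
labse = SentenceTransformer("sentence-transformers/LaBSE")

# Detector restricted to the assumed corpus languages (Catalan and French).
detector = LanguageDetectorBuilder.from_languages(Language.CATALAN, Language.FRENCH).build()

def keep_pair(ca_sentence: str, fr_sentence: str) -> bool:
    """Return True if a (Catalan, French) pair passes both filters (illustrative helper)."""
    # Alignment filter: cosine similarity between the LaBSE embeddings of both sides.
    ca_emb, fr_emb = labse.encode([ca_sentence, fr_sentence], convert_to_tensor=True)
    if util.cos_sim(ca_emb, fr_emb).item() < ALIGNMENT_THRESHOLD:
        return False
    # Language filter: confidence that each side is written in its expected language.
    if detector.compute_language_confidence(ca_sentence, Language.CATALAN) < LANGUAGE_THRESHOLD:
        return False
    if detector.compute_language_confidence(fr_sentence, Language.FRENCH) < LANGUAGE_THRESHOLD:
        return False
    return True
```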