Modalities: Text · Formats: parquet · Libraries: Datasets, pandas
fdelucaf committed c4443f5 (verified) · 1 parent: 1f2ea65

Improved description in Source data section

Files changed (1): README.md (+10 −4)
@@ -66,12 +66,18 @@ This dataset is aimed at promoting the development of Machine Translation betwee
 
 #### Initial Data Collection and Normalization
 
-The corpus is a combination of the following original datasets collected from [Opus](https://opus.nlpl.eu/):
+The first portion of the corpus is a combination of the following original datasets collected from [Opus](https://opus.nlpl.eu/):
 NLLB, MultiCCAligned, WikiMatrix, GNOME, KDE 4, Open Subtitles.
 
-All datasets are deduplicated and filtered to remove any sentence pairs with a cosine similarity of less than 0.75.
-This is done using sentence embeddings calculated using [LaBSE](https://huggingface.co/sentence-transformers/LaBSE).
-The filtered datasets are then concatenated to form the final corpus.
+Additionally, the corpus contains synthetic parallel data generated from a random sampling of the Spanish-French corpora
+available on [Opus](https://opus.nlpl.eu/) and translated into Catalan using the [PlanTL es-ca](https://huggingface.co/PlanTL-GOB-ES/mt-plantl-es-ca) model.
+
+All data was filtered according to two specific criteria:
+- Alignment: sentence level alignments were calculated using [LaBSE](https://huggingface.co/sentence-transformers/LaBSE) and sentence pairs with a score below 0.75 were discarded.
+
+- Language identification: the probability of being the target language was calculated using [Lingua.py](https://github.com/pemistahl/lingua-py) and sentences with a language probability score below 0.5 were discarded.
+
+The filtered datasets are then concatenated and deduplicated to form the final corpus.
 
 #### Who are the source language producers?
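The two filtering criteria described in this commit can be sketched as follows. This is not code from the repository: the embedding and language-probability scorers are passed in as plain callables standing in for LaBSE and Lingua.py, and all names are illustrative. Only the two thresholds (0.75 for alignment, 0.5 for language identification) come from the dataset card.

```python
import numpy as np

def cosine(a, b):
    # Cosine similarity between two sentence-embedding vectors.
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def filter_pairs(pairs, embed, src_lang_prob, tgt_lang_prob,
                 align_threshold=0.75, lang_threshold=0.5):
    # pairs:         iterable of (source, target) sentence pairs
    # embed:         sentence -> embedding vector (LaBSE in the real pipeline)
    # src_lang_prob: sentence -> probability it is the expected source language
    # tgt_lang_prob: sentence -> probability it is the expected target language
    #                (Lingua.py in the real pipeline)
    kept = []
    for src, tgt in pairs:
        # Alignment: discard pairs whose embedding similarity is below 0.75.
        if cosine(embed(src), embed(tgt)) < align_threshold:
            continue
        # Language identification: discard pairs where either side scores
        # below 0.5 for its expected language.
        if (src_lang_prob(src) < lang_threshold
                or tgt_lang_prob(tgt) < lang_threshold):
            continue
        kept.append((src, tgt))
    return kept
```

Keeping the scorers as parameters makes the filter easy to unit-test with stubs before plugging in the heavyweight models.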
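The final concatenate-and-deduplicate step could look like the sketch below, using pandas (one of the libraries listed for this dataset). The `src`/`tgt` column names are an assumption for illustration; the card does not specify the schema.

```python
import pandas as pd

def build_corpus(frames):
    # Concatenate the filtered sub-corpora, then drop duplicate sentence
    # pairs (same source AND target), keeping the first occurrence.
    corpus = pd.concat(frames, ignore_index=True)
    return corpus.drop_duplicates(subset=["src", "tgt"]).reset_index(drop=True)
```

Deduplicating on the (source, target) pair rather than on either column alone preserves legitimate one-to-many translations while removing exact repeats introduced by overlapping Opus datasets.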