Modalities: Text
Formats: parquet
Libraries: Datasets, pandas
xixianliao committed
Commit 15459dd · 1 Parent(s): 2cf8bc6

Update the dataset
.gitattributes CHANGED
@@ -53,5 +53,5 @@ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
 *.jpg filter=lfs diff=lfs merge=lfs -text
 *.jpeg filter=lfs diff=lfs merge=lfs -text
 *.webp filter=lfs diff=lfs merge=lfs -text
-ca-zh_all_2023_10_26.ca filter=lfs diff=lfs merge=lfs -text
-ca-zh_all_2023_10_26.zh filter=lfs diff=lfs merge=lfs -text
+ca-zh_all_2024_08_05.ca filter=lfs diff=lfs merge=lfs -text
+ca-zh_all_2024_08_05.zh filter=lfs diff=lfs merge=lfs -text
README.md CHANGED
@@ -39,11 +39,11 @@ The sentences included in the dataset are in Catalan (CA) and Chinese (ZH).
 
 Two separate txt files are provided with the sentences sorted in the same order:
 
-- ca-zh_all_2023_10_26.ca.
+- ca-zh_all_2024_08_05.ca.
 
-- ca-zh_all_2023_10_26.zh.
+- ca-zh_all_2024_08_05.zh.
 
-The dataset is additionally provided in parquet format: ca-zh_all_2023_10_26.parquet.
+The dataset is additionally provided in parquet format: ca-zh_all_2024_08_05.parquet.
 
 The parquet file contains two columns of parallel text obtained from the two original text files.
 Each row in the file represents a pair of parallel sentences in the two languages of the dataset.
@@ -69,23 +69,33 @@ This dataset is aimed at promoting the development of Machine Translation between Catalan and Chinese.
 
 #### Initial Data Collection and Normalization
 
-The first portion of the corpus is a combination of the following original Catalan-Chinese datasets collected from [Opus](https://opus.nlpl.eu/): WikiMatrix, XLENT, GNOME, OpenSubtitles.
+The first portion of the corpus is a combination of our CA-ZH Wikipedia dataset and the following original Catalan-Chinese datasets collected from [Opus](https://opus.nlpl.eu/): OpenSubtitles, WikiMatrix.
 
-Additionally, the corpus contains synthetic parallel data generated from the following original Spanish-Chinese datasets collected from [Opus](https://opus.nlpl.eu/): UNPC, CCMatrix, MultiParacrawl.
+Additionally, the corpus contains synthetic parallel data generated from the Spanish-Chinese News Commentary v18 corpus from [WMT](https://data.statmt.org/news-commentary/v18.1/training/) and the following original Spanish-Chinese datasets collected from [Opus](https://opus.nlpl.eu/): NLLB, UNPC, MultiUN, MultiCCAligned, WikiMatrix, Tatoeba, MultiParaCrawl, OpenSubtitles.
+
+Lastly, synthetic parallel data has also been generated from the following original English-Chinese datasets collected from [Opus](https://opus.nlpl.eu/): NLLB, CCAligned, ParaCrawl, WikiMatrix.
 
 ### Data preparation
 
-The Chinese side of all datasets are passed through the [fastlangid](https://github.com/currentslab/fastlangid) language detector and any sentences which are not identified as simplified Chinese are discarded.
-The datasets are then also deduplicated and filtered to remove any sentence pairs with a cosine similarity of less than 0.75.
-This is done using sentence embeddings calculated using [LaBSE](https://huggingface.co/sentence-transformers/LaBSE).
-The filtered datasets are then concatenated to form the final corpus.
+The Chinese side of all datasets was first processed with [Hanzi Identifier](https://github.com/tsroten/hanzidentifier) to detect Traditional Chinese, which was then converted to Simplified Chinese using [OpenCC](https://github.com/BYVoid/OpenCC).
+
+All data was then filtered according to two criteria:
+
+- Alignment: sentence-level alignment scores were calculated using [LaBSE](https://huggingface.co/sentence-transformers/LaBSE), and sentence pairs with a score below 0.75 were discarded.
+
+- Language identification: the probability of each sentence being in the target language was calculated using [Lingua.py](https://github.com/pemistahl/lingua-py), and sentences with a probability below 0.5 were discarded.
+
+Next, the Spanish data was translated into Catalan using the Aina Project's [Spanish-Catalan machine translation model](https://huggingface.co/projecte-aina/aina-translator-es-ca), while the English data was translated into Catalan using the Aina Project's [English-Catalan machine translation model](https://huggingface.co/projecte-aina/aina-translator-en-ca).
+
+The filtered and translated datasets were then concatenated and deduplicated to form the final corpus.
 
 #### Who are the source language producers?
 
-[Opus](https://opus.nlpl.eu/)
+[Opus](https://opus.nlpl.eu/)
+
+[WMT](https://machinetranslate.org/wmt)
+
+[Projecte Aina](https://huggingface.co/projecte-aina)
 
 ### Annotations
 
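The updated README describes the parquet file as two columns of parallel text, one row per Catalan-Chinese sentence pair. The sketches below illustrate the pipeline steps in order, under stated assumptions, starting with loading the published file; the column names "ca" and "zh" are assumptions, since the commit does not show the parquet schema.

```python
import pandas as pd

# Load the parallel corpus: one row per Catalan-Chinese sentence pair.
df = pd.read_parquet("ca-zh_all_2024_08_05.parquet")

# The column names "ca" and "zh" are assumptions -- inspect df.columns first.
print(df.columns.tolist())
print(df.head())
```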
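For the normalization step, a minimal sketch of detecting Traditional Chinese with Hanzi Identifier and converting it with OpenCC, assuming per-sentence processing; the actual batching and edge-case handling are not shown in the commit.

```python
import hanzidentifier
from opencc import OpenCC

# "t2s" maps Traditional to Simplified; some OpenCC builds expect "t2s.json".
t2s = OpenCC("t2s")

def normalize_zh(sentence: str) -> str:
    """Convert a sentence to Simplified Chinese if it is identified as Traditional."""
    if hanzidentifier.identify(sentence) == hanzidentifier.TRADITIONAL:
        return t2s.convert(sentence)
    return sentence

print(normalize_zh("漢語很有趣"))  # -> 汉语很有趣
```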
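For the alignment criterion, a sketch of LaBSE scoring with sentence-transformers; with normalized embeddings, the row-wise dot product equals cosine similarity, and 0.75 is the cutoff quoted in the README.

```python
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("sentence-transformers/LaBSE")
THRESHOLD = 0.75  # alignment cutoff from the README

ca_sents = ["El gat dorm al sofà.", "Demà anirem a la platja."]
zh_sents = ["猫在沙发上睡觉。", "明天我们去海滩。"]

# With normalized embeddings, the row-wise dot product is the cosine similarity.
emb_ca = model.encode(ca_sents, normalize_embeddings=True)
emb_zh = model.encode(zh_sents, normalize_embeddings=True)
scores = (emb_ca * emb_zh).sum(axis=1)

kept = [pair for pair, score in zip(zip(ca_sents, zh_sents), scores) if score >= THRESHOLD]
print(kept)
```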
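For the language identification criterion, a sketch using Lingua's confidence scores with the 0.5 cutoff from the README; the candidate-language set passed to the detector is an assumption, as the commit does not specify it.

```python
from lingua import Language, LanguageDetectorBuilder

# The candidate-language set here is an assumption; the commit does not list it.
detector = LanguageDetectorBuilder.from_languages(
    Language.CATALAN, Language.SPANISH, Language.ENGLISH, Language.CHINESE
).build()

def keep_sentence(sentence: str, target: Language, threshold: float = 0.5) -> bool:
    """Keep a sentence only if the target-language probability clears the cutoff."""
    return detector.compute_language_confidence(sentence, target) >= threshold

print(keep_sentence("El gat dorm al sofà.", Language.CATALAN))
```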
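For the synthetic data, a sketch of translating the Spanish side into Catalan. It assumes the Aina checkpoint loads through the generic transformers translation pipeline, which the commit does not confirm; the actual packaging of the Aina translators may require a different loader or runtime.

```python
from transformers import pipeline

# Assumption: the checkpoint is compatible with the standard translation
# pipeline; the real aina-translator models may need their own runtime.
es2ca = pipeline("translation", model="projecte-aina/aina-translator-es-ca")

print(es2ca("El gato duerme en el sofá.")[0]["translation_text"])
```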
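Finally, the README says the filtered and translated datasets were concatenated and deduplicated to form the corpus. A pandas sketch with tiny stand-in frames; the real pipeline would operate on the full filtered datasets.

```python
import pandas as pd

# Tiny stand-in frames; the real pipeline holds the filtered datasets.
part_a = pd.DataFrame({"ca": ["Hola."], "zh": ["你好。"]})
part_b = pd.DataFrame({"ca": ["Hola.", "Adeu."], "zh": ["你好。", "再见。"]})

# Concatenate, then drop exact duplicate sentence pairs.
corpus = pd.concat([part_a, part_b], ignore_index=True)
corpus = corpus.drop_duplicates(subset=["ca", "zh"]).reset_index(drop=True)
print(corpus)  # the "Hola." / "你好。" pair appears once
```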
ca-zh_all_2024_08_05.ca ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:63cd2b05b43498ddc32e23151f1843576fcf8274d7f993bd7bc154dbe0ff1578
+size 12316506664
ca-zh_all_2024_08_05.parquet ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:abde25c775e9429aa9e29bc113544c746dd81807332e4609f1adb0779362aa8a
+size 15187135710
ca-zh_all_2024_08_05.zh ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:1688b94022b2862498a19b36248356bec2a35a8abf5aabe37507b3c86eafee3a
+size 9637810856