fdelucaf committed commit 6eeee3d (verified, 1 parent: 3ab1b1c)

Update datacard

Files changed (1):
  1. README.md (+7 −19)
README.md CHANGED

@@ -21,8 +21,8 @@ license: cc-by-nc-sa-4.0
 
 ### Dataset Summary
 
-The CA-ZH Parallel Corpus is a Catalan-Chinese dataset of **6.833.114** parallel sentences. The dataset was created to support Catalan in NLP tasks, specifically
-Machine Translation.
+The CA-ZH Parallel Corpus is a Catalan-Chinese dataset of parallel sentences created to
+support Catalan in NLP tasks, specifically Machine Translation.
 
 ### Supported Tasks and Leaderboards
 
@@ -63,23 +63,11 @@ This dataset is aimed at promoting the development of Machine Translation betwee
 
 #### Initial Data Collection and Normalization
 
-The Catalan-Chinese data collected from the web was a combination of the following datasets:
-
-| Dataset        | Sentences before cleaning |
-|:---------------|--------------------------:|
-| WikiMatrix     |                    90.643 |
-| XLENT          |                   535.803 |
-| GNOME          |                        78 |
-| OpenSubtitles  |                   139.300 |
-
-The 6.658.607 sentence pairs of synthetic parallel data were created from the following Spanish-Chinese datasets:
-
-| Dataset        | Sentences before cleaning |
-|:---------------|--------------------------:|
-| UNPC           |                17.599.223 |
-| CCMatrix       |                24.051.233 |
-| MultiParacrawl |                 3.410.087 |
-| **Total**      |            **45.060.543** |
+The first portion of the corpus is a combination of the following original datasets collected from [Opus](https://opus.nlpl.eu/):
+WikiMatrix, XLENT, GNOME, OpenSubtitles.
+
+Additionally, the corpus contains synthetic parallel data generated from the following original Spanish-Chinese datasets collected from [Opus](https://opus.nlpl.eu/):
+UNPC, CCMatrix, MultiParacrawl.
 
 ### Data preparation
 
@@ -87,7 +75,7 @@ The Chinese side of all datasets are passed through the [fastlangid](https://git
 identified as simplified Chinese are discarded.
 The datasets are then also deduplicated and filtered to remove any sentence pairs with a cosine similarity of less than 0.75.
 This is done using sentence embeddings calculated using [LaBSE](https://huggingface.co/sentence-transformers/LaBSE).
-The filtered datasets are then concatenated to form a final corpus of **6.833.114** parallel sentences.
+The filtered datasets are then concatenated to form the final corpus.
 
 #### Who are the source language producers?
 
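The cleaning steps the datacard describes (deduplication, then dropping pairs whose sentence embeddings have cosine similarity below 0.75) can be sketched as below. This is an illustrative sketch, not the authors' actual pipeline: `filter_pairs` is a hypothetical helper, and the toy vectors stand in for real LaBSE embeddings, which in practice would come from `SentenceTransformer("sentence-transformers/LaBSE").encode(...)`.

```python
import numpy as np

def filter_pairs(pairs, src_emb, tgt_emb, threshold=0.75):
    """Deduplicate exact sentence pairs, then keep only pairs whose
    source/target embeddings have cosine similarity >= threshold."""
    seen, kept = set(), []
    for pair, s, t in zip(pairs, src_emb, tgt_emb):
        if pair in seen:  # deduplication step
            continue
        seen.add(pair)
        # Cosine similarity between the source and target embeddings.
        cos = float(np.dot(s, t) / (np.linalg.norm(s) * np.linalg.norm(t)))
        if cos >= threshold:  # similarity filter at 0.75, as in the datacard
            kept.append(pair)
    return kept

# Toy 2-d vectors stand in for 768-d LaBSE embeddings.
pairs = [("Hola", "你好"), ("Hola", "你好"), ("Adeu", "不")]
src = np.array([[1.0, 0.0], [1.0, 0.0], [1.0, 0.0]])
tgt = np.array([[1.0, 0.1], [1.0, 0.1], [0.0, 1.0]])
kept = filter_pairs(pairs, src, tgt)  # duplicate removed, dissimilar pair dropped
```

Here the duplicated pair is removed and the orthogonal-embedding pair falls below the threshold, leaving a single pair, mirroring how each source dataset would shrink between the "before cleaning" counts and the final corpus.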