---
language:
- ca
- zh
multilinguality:
- multilingual
pretty_name: CA-ZH Parallel Corpus
size_categories:
- 1M<n<10M
task_categories:
- translation
task_ids: []
license: cc-by-nc-sa-4.0
---
# Dataset Card for CA-ZH Parallel Corpus
## Dataset Description
- **Point of Contact:** [email protected]
### Dataset Summary
The CA-ZH Parallel Corpus is a Catalan-Chinese dataset of parallel sentences created to
support Catalan in NLP tasks, specifically Machine Translation.
### Supported Tasks and Leaderboards
The dataset can be used to train Bilingual Machine Translation models between Chinese and Catalan in either direction,
as well as Multilingual Machine Translation models.
### Languages
The sentences included in the dataset are in Catalan (CA) and Chinese (ZH).
## Dataset Structure
### Data Instances
Two separate text files are provided, with the sentences sorted in the same order:
- `ca-zh_all_2024_08_05.ca`
- `ca-zh_all_2024_08_05.zh`

The dataset is additionally provided in parquet format: `ca-zh_all_2024_08_05.parquet`.
The parquet file contains two columns of parallel text obtained from the two original text files.
Each row in the file represents a pair of parallel sentences in the two languages of the dataset.
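As a quick orientation, the parquet file can be inspected with pandas. This is a minimal sketch assuming the file is available locally; the actual column names are not documented here, so the code prints them rather than assuming them:

```python
import pandas as pd

# Load the parallel corpus from the parquet file (local path assumed).
df = pd.read_parquet("ca-zh_all_2024_08_05.parquet")

# Each row is a Catalan-Chinese sentence pair; inspect the real column names.
print(df.columns.tolist())
print(df.iloc[0])
```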
### Data Fields
[N/A]
### Data Splits
The dataset contains a single split: `train`.
## Dataset Creation
### Curation Rationale
This dataset is aimed at promoting the development of Machine Translation between Catalan and other languages, specifically Chinese.
### Source Data
#### Initial Data Collection and Normalization
The first portion of the corpus is a combination of Catalan-Chinese data automatically crawled from Wikipedia and the following original Catalan-Chinese datasets collected from [Opus](https://opus.nlpl.eu/):
OpenSubtitles, WikiMatrix.
Additionally, the corpus contains synthetic parallel data generated from Spanish-Chinese News Commentary
v18 from [WMT](https://data.statmt.org/news-commentary/v18.1/training/) and the following original Spanish-Chinese datasets collected
from [Opus](https://opus.nlpl.eu/): NLLB, UNPC, MultiUN, MultiCCAligned, WikiMatrix, Tatoeba, MultiParaCrawl, OpenSubtitles.
Lastly, synthetic parallel data has also been generated from the following original English-Chinese datasets collected from [Opus](https://opus.nlpl.eu/): NLLB, CCAligned, ParaCrawl, WikiMatrix.
#### Data Preparation
The Chinese side of all datasets was first processed using the [Hanzi Identifier](https://github.com/tsroten/hanzidentifier) to detect Traditional Chinese, which was subsequently converted to Simplified Chinese using [OpenCC](https://github.com/BYVoid/OpenCC).
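A rough sketch of this normalization step (not the curators' exact pipeline), using the `hanzidentifier` and `opencc` Python packages:

```python
import hanzidentifier
import opencc

# Traditional -> Simplified converter; depending on the OpenCC binding,
# the config name may need to be "t2s.json" instead of "t2s".
converter = opencc.OpenCC("t2s")

def normalize_zh(sentence: str) -> str:
    """Convert a sentence to Simplified Chinese if detected as Traditional."""
    if hanzidentifier.is_traditional(sentence):
        return converter.convert(sentence)
    return sentence

print(normalize_zh("漢語"))  # -> "汉语"
```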
All data was then filtered according to two specific criteria (see the sketch after this list):
- Alignment: sentence-level alignment scores were calculated using [LaBSE](https://huggingface.co/sentence-transformers/LaBSE), and sentence pairs with a score below 0.75 were discarded.
- Language identification: the probability of each sentence being in the target language was calculated using [Lingua.py](https://github.com/pemistahl/lingua-py), and sentences with a language probability score below 0.5 were discarded.
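Both filters could be combined per sentence pair as in the following sketch; the models are those cited above, but the exact application of the thresholds (batching, score definitions) is an assumption:

```python
from lingua import Language, LanguageDetectorBuilder
from sentence_transformers import SentenceTransformer, util

labse = SentenceTransformer("sentence-transformers/LaBSE")
detector = LanguageDetectorBuilder.from_languages(
    Language.CATALAN, Language.CHINESE
).build()

def keep_pair(ca: str, zh: str) -> bool:
    """Return True if a sentence pair passes both filters."""
    # Alignment: cosine similarity of LaBSE embeddings must be >= 0.75.
    emb = labse.encode([ca, zh])
    if util.cos_sim(emb[0], emb[1]).item() < 0.75:
        return False
    # Language ID: each side must match its target language with confidence >= 0.5.
    if detector.compute_language_confidence(ca, Language.CATALAN) < 0.5:
        return False
    if detector.compute_language_confidence(zh, Language.CHINESE) < 0.5:
        return False
    return True
```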
Next, Spanish data was translated into Catalan using the Aina Project's [Spanish-Catalan machine translation model](https://huggingface.co/projecte-aina/aina-translator-es-ca), while English data was translated into Catalan using the Aina Project's [English-Catalan machine translation model](https://huggingface.co/projecte-aina/aina-translator-en-ca).
The filtered and translated datasets were then concatenated and deduplicated to form the final corpus.
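The final step could be as simple as the following pandas sketch; the per-source file names are hypothetical, and treating deduplication as dropping exact duplicate sentence pairs is an assumption:

```python
import pandas as pd

# Hypothetical per-source files, each already filtered and translated,
# with one Catalan and one Chinese column in the same order.
sources = ["wiki.parquet", "opus.parquet", "synthetic.parquet"]
frames = [pd.read_parquet(path) for path in sources]

# Concatenate all sources and drop exact duplicate sentence pairs.
corpus = pd.concat(frames, ignore_index=True).drop_duplicates()
corpus.to_parquet("ca-zh_all_2024_08_05.parquet", index=False)
```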
#### Who are the source language producers?
- [Opus](https://opus.nlpl.eu/)
- [WMT](https://machinetranslate.org/wmt)
- [Projecte Aina](https://huggingface.co/projecte-aina)
### Annotations
#### Annotation process
The dataset does not contain any annotations.
#### Who are the annotators?
[N/A]
### Personal and Sensitive Information
Given that this dataset is partly derived from pre-existing datasets that may contain crawled data, and that no specific anonymisation process has been applied,
personal and sensitive information may be present in the data. This needs to be considered when using the data for training models.
## Considerations for Using the Data
### Social Impact of Dataset
By providing this resource, we intend to promote the use of Catalan across NLP tasks, thereby improving the accessibility and visibility of the Catalan language.
### Discussion of Biases
No specific bias mitigation strategies were applied to this dataset.
Inherent biases may exist within the data.
### Other Known Limitations
The dataset contains data of a general domain. It would therefore be of limited use in more specialized domains such as the biomedical or legal domains.
## Additional Information
### Dataset Curators
Language Technologies Unit at the Barcelona Supercomputing Center ([email protected]).
This work has been promoted and financed by the Generalitat de Catalunya through the [Aina project](https://projecteaina.cat/).
### Licensing Information
This work is licensed under a [Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License](https://creativecommons.org/licenses/by-nc-sa/4.0/).
### Citation Information
[N/A]
### Contributions
[N/A]