---
language:
- ca
- zh
multilinguality:
- multilingual
pretty_name: CA-ZH Parallel Corpus
size_categories:
- 1M<n<10M
task_categories:
- translation
task_ids: []
license: cc-by-nc-sa-4.0
---
Dataset Card for CA-ZH Parallel Corpus
Dataset Description
- Point of Contact: [email protected]
Dataset Summary
The CA-ZH Parallel Corpus is a Catalan-Chinese dataset of parallel sentences created to support Catalan in NLP tasks, specifically Machine Translation.
Supported Tasks and Leaderboards
The dataset can be used to train Bilingual Machine Translation models between Chinese and Catalan in either direction, as well as Multilingual Machine Translation models.
Languages
The sentences included in the dataset are in Catalan (CA) and Chinese (ZH).
Dataset Structure
Data Instances
Two separate txt files are provided with the sentences sorted in the same order:

- ca-zh_all_2024_08_05.ca
- ca-zh_all_2024_08_05.zh
The dataset is additionally provided in parquet format: ca-zh_all_2024_08_05.parquet.
The parquet file contains two columns of parallel text obtained from the two original text files. Each row in the file represents a pair of parallel sentences in the two languages of the dataset.
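For reference, the parquet file can be loaded as a Hugging Face dataset. The following is a minimal sketch, assuming the file is available locally; no column names are assumed, they are simply printed.

```python
from datasets import load_dataset

# Load the parquet file as a single training split (file name as documented above).
dataset = load_dataset(
    "parquet",
    data_files="ca-zh_all_2024_08_05.parquet",
    split="train",
)

print(dataset.column_names)  # the two parallel-text columns
print(dataset[0])            # first Catalan-Chinese sentence pair
```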
Data Fields
[N/A]
Data Splits
The dataset contains a single split: train.
Dataset Creation
Curation Rationale
This dataset is aimed at promoting the development of Machine Translation between Catalan and other languages, specifically Chinese.
Source Data
Initial Data Collection and Normalization
The first portion of the corpus is a combination of Catalan-Chinese data automatically crawled from Wikipedia and the following original Catalan-Chinese datasets collected from Opus: OpenSubtitles, WikiMatrix.
Additionally, the corpus contains synthetic parallel data generated from Spanish-Chinese News Commentary v18 from WMT and the following original Spanish-Chinese datasets collected from Opus: NLLB, UNPC, MultiUN, MultiCCAligned, WikiMatrix, Tatoeba, MultiParaCrawl, OpenSubtitles.
Lastly, synthetic parallel data has also been generated from the following original English-Chinese datasets collected from Opus: NLLB, CCAligned, ParaCrawl, WikiMatrix.
Data Preparation
The Chinese side of all datasets was first processed using the Hanzi Identifier to detect Traditional Chinese, which was subsequently converted to Simplified Chinese using OpenCC.
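For illustration, a minimal sketch of this normalization step using the `hanzidentifier` and `opencc` Python packages; the exact configuration used in the original pipeline is not documented, so the details below are assumptions.

```python
import hanzidentifier
from opencc import OpenCC

# Traditional-to-Simplified converter; some OpenCC versions expect the config name "t2s.json".
t2s = OpenCC("t2s")

def to_simplified(sentence: str) -> str:
    """Convert a sentence to Simplified Chinese if it is detected as Traditional."""
    if hanzidentifier.is_traditional(sentence):
        return t2s.convert(sentence)
    return sentence

print(to_simplified("繁體中文"))  # -> 繁体中文
```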
All data was then filtered according to two specific criteria (as sketched below):

- Alignment: sentence-level alignments were calculated using LaBSE and sentence pairs with a score below 0.75 were discarded.
- Language identification: the probability of being the target language was calculated using Lingua.py and sentences with a language probability score below 0.5 were discarded.
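A minimal sketch of these two filters, assuming the `sentence-transformers` implementation of LaBSE and the `lingua-language-detector` package; it is shown for a Catalan-Chinese pair, although in the original pipeline the non-Chinese side may also be Spanish or English prior to translation.

```python
from lingua import Language, LanguageDetectorBuilder
from sentence_transformers import SentenceTransformer, util

labse = SentenceTransformer("sentence-transformers/LaBSE")
detector = LanguageDetectorBuilder.from_languages(Language.CATALAN, Language.CHINESE).build()

def keep_pair(ca_sentence: str, zh_sentence: str) -> bool:
    # Alignment filter: cosine similarity of LaBSE embeddings must reach 0.75.
    emb_ca, emb_zh = labse.encode([ca_sentence, zh_sentence], convert_to_tensor=True)
    if util.cos_sim(emb_ca, emb_zh).item() < 0.75:
        return False
    # Language identification filter: target-language probability must reach 0.5.
    if detector.compute_language_confidence(ca_sentence, Language.CATALAN) < 0.5:
        return False
    if detector.compute_language_confidence(zh_sentence, Language.CHINESE) < 0.5:
        return False
    return True

print(keep_pair("Bon dia, com estàs?", "早上好，你好吗？"))
```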
Next, Spanish data was translated into Catalan using the Aina Project's Spanish-Catalan machine translation model, while English data was translated into Catalan using the Aina Project's English-Catalan machine translation model.
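As a sketch only, this step can be approximated with the `transformers` translation pipeline. The checkpoints below are publicly available OPUS-MT models used as stand-ins, not the Aina Project models referenced above, whose exact identifiers and inference setup are not documented here.

```python
from transformers import pipeline

# Stand-in checkpoints (assumed to be available on the Hugging Face Hub); the actual
# Aina Project Spanish-Catalan and English-Catalan models are not named in this card.
es_ca = pipeline("translation", model="Helsinki-NLP/opus-mt-es-ca")
en_ca = pipeline("translation", model="Helsinki-NLP/opus-mt-en-ca")

spanish_sentences = ["La traducción automática mejora cada año."]
english_sentences = ["Machine translation improves every year."]

ca_from_es = [out["translation_text"] for out in es_ca(spanish_sentences)]
ca_from_en = [out["translation_text"] for out in en_ca(english_sentences)]
print(ca_from_es, ca_from_en)
```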
The filtered and translated datasets were then concatenated and deduplicated to form the final corpus.
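A minimal sketch of this final step; exact-match deduplication over sentence pairs is an assumption, as the precise criterion is not documented.

```python
from typing import Iterable, List, Tuple

Pair = Tuple[str, str]  # (Catalan sentence, Chinese sentence)

def concatenate_and_deduplicate(corpora: Iterable[List[Pair]]) -> List[Pair]:
    """Merge several corpora of sentence pairs, keeping the first occurrence of each exact pair."""
    seen = set()
    merged = []
    for corpus in corpora:
        for pair in corpus:
            if pair not in seen:
                seen.add(pair)
                merged.append(pair)
    return merged

corpus_a = [("Bon dia.", "早上好。"), ("Gràcies.", "谢谢。")]
corpus_b = [("Gràcies.", "谢谢。"), ("Adeu.", "再见。")]
print(concatenate_and_deduplicate([corpus_a, corpus_b]))
```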
Who are the source language producers?
Annotations
Annotation process
The dataset does not contain any annotations.
Who are the annotators?
[N/A]
Personal and Sensitive Information
Given that this dataset is partly derived from pre-existing datasets that may contain crawled data, and that no specific anonymisation process has been applied, personal and sensitive information may be present in the data. This needs to be considered when using the data for training models.
Considerations for Using the Data
Social Impact of Dataset
By providing this resource, we intend to promote the use of Catalan across NLP tasks, thereby improving the accessibility and visibility of the Catalan language.
Discussion of Biases
No specific bias mitigation strategies were applied to this dataset. Inherent biases may exist within the data.
Other Known Limitations
The dataset contains general-domain data, so it may be of limited use for more specific domains such as the biomedical or legal domains.
Additional Information
Dataset Curators
Language Technologies Unit at the Barcelona Supercomputing Center ([email protected]).
This work has been promoted and financed by the Generalitat de Catalunya through the Aina project.
Licensing Information
This work is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License (CC BY-NC-SA 4.0).
Citation Information
[N/A]
Contributions
[N/A]