holylovenia committed 8114354 (verified, parent d8b2a75): Upload README.md with huggingface_hub

---
license: cc-by-sa-4.0
language:
- ilo
- min
- jav
- sun
- ceb
- ind
- tgl
- vie
pretty_name: WikiMatrix
task_categories:
- machine-translation
tags:
- machine-translation
---

WikiMatrix consists of parallel sentences automatically extracted from the content of Wikipedia articles in 96 languages, including several dialects or low-resource languages. Eight of these languages are spoken in the Southeast Asia region. In total, the dataset contains 135M parallel sentences from 1620 different language pairs.

## Languages

ilo, min, jav, sun, ceb, ind, tgl, vie

## Supported Tasks

Machine Translation

## Dataset Usage
### Using `datasets` library
```python
from datasets import load_dataset
dset = load_dataset("SEACrowd/wikimatrix", trust_remote_code=True)
```
### Using `seacrowd` library
```python
import seacrowd as sc

# Load the dataset using the default config
dset = sc.load_dataset("wikimatrix", schema="seacrowd")

# Check all available subsets (config names) of the dataset
print(sc.available_config_names("wikimatrix"))

# Load the dataset using a specific config
dset = sc.load_dataset_by_config_name(config_name="<config_name>")
```
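
Once loaded, each machine-translation example carries a pair of aligned sentences. Below is a minimal sketch of consuming such a record; it uses an inline dummy record instead of a live download, and the field names (`text_1`, `text_2`, `text_1_name`, `text_2_name`) are assumed from the SEACrowd text-to-text (t2t) schema, so check them against the actual loaded dataset:

```python
# Dummy record shaped like the assumed SEACrowd t2t schema;
# real records would come from sc.load_dataset(...) instead.
example = {
    "id": "0",
    "text_1": "Selamat pagi.",
    "text_2": "Chào buổi sáng.",
    "text_1_name": "ind",
    "text_2_name": "vie",
}

def format_pair(rec: dict) -> str:
    """Render a parallel-sentence record as 'src -> tgt: source ||| target'."""
    return (
        f"{rec['text_1_name']} -> {rec['text_2_name']}: "
        f"{rec['text_1']} ||| {rec['text_2']}"
    )

print(format_pair(example))
```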

More details on how to load the `seacrowd` library can be found [here](https://github.com/SEACrowd/seacrowd-datahub?tab=readme-ov-file#how-to-use).

## Dataset Homepage

[https://github.com/facebookresearch/LASER/tree/main/tasks/WikiMatrix](https://github.com/facebookresearch/LASER/tree/main/tasks/WikiMatrix)

## Dataset Version

Source: 1.0.0. SEACrowd: 2024.06.20.

## Dataset License

Creative Commons Attribution Share Alike 4.0 (cc-by-sa-4.0)

## Citation

If you are using the **WikiMatrix** dataloader in your work, please cite the following:
```
@inproceedings{schwenk-etal-2021-wikimatrix,
    title = "{W}iki{M}atrix: Mining 135{M} Parallel Sentences in 1620 Language Pairs from {W}ikipedia",
    author = "Schwenk, Holger and
      Chaudhary, Vishrav and
      Sun, Shuo and
      Gong, Hongyu and
      Guzm{\'a}n, Francisco",
    editor = "Merlo, Paola and
      Tiedemann, J{\"o}rg and
      Tsarfaty, Reut",
    booktitle = "Proceedings of the 16th Conference of the European Chapter of the Association for Computational Linguistics: Main Volume",
    month = apr,
    year = "2021",
    address = "Online",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2021.eacl-main.115",
    doi = "10.18653/v1/2021.eacl-main.115",
    pages = "1351--1361",
    abstract = "We present an approach based on multilingual sentence embeddings to automatically extract parallel sentences from the content of Wikipedia articles in 96 languages, including several dialects or low-resource languages. We do not limit the extraction process to alignments with English, but we systematically consider all possible language pairs. In total, we are able to extract 135M parallel sentences for 1620 different language pairs, out of which only 34M are aligned with English. This corpus is freely available. To get an indication on the quality of the extracted bitexts, we train neural MT baseline systems on the mined data only for 1886 language pairs, and evaluate them on the TED corpus, achieving strong BLEU scores for many language pairs. The WikiMatrix bitexts seem to be particularly interesting to train MT systems between distant languages without the need to pivot through English.",
}

@article{lovenia2024seacrowd,
    title={SEACrowd: A Multilingual Multimodal Data Hub and Benchmark Suite for Southeast Asian Languages},
    author={Holy Lovenia and Rahmad Mahendra and Salsabil Maulana Akbar and Lester James V. Miranda and Jennifer Santoso and Elyanah Aco and Akhdan Fadhilah and Jonibek Mansurov and Joseph Marvin Imperial and Onno P. Kampman and Joel Ruben Antony Moniz and Muhammad Ravi Shulthan Habibi and Frederikus Hudi and Railey Montalan and Ryan Ignatius and Joanito Agili Lopo and William Nixon and Börje F. Karlsson and James Jaya and Ryandito Diandaru and Yuze Gao and Patrick Amadeus and Bin Wang and Jan Christian Blaise Cruz and Chenxi Whitehouse and Ivan Halim Parmonangan and Maria Khelli and Wenyu Zhang and Lucky Susanto and Reynard Adha Ryanda and Sonny Lazuardi Hermawan and Dan John Velasco and Muhammad Dehan Al Kautsar and Willy Fitra Hendria and Yasmin Moslem and Noah Flynn and Muhammad Farid Adilazuarda and Haochen Li and Johanes Lee and R. Damanhuri and Shuo Sun and Muhammad Reza Qorib and Amirbek Djanibekov and Wei Qi Leong and Quyet V. Do and Niklas Muennighoff and Tanrada Pansuwan and Ilham Firdausi Putra and Yan Xu and Ngee Chia Tai and Ayu Purwarianti and Sebastian Ruder and William Tjhi and Peerat Limkonchotiwat and Alham Fikri Aji and Sedrick Keh and Genta Indra Winata and Ruochen Zhang and Fajri Koto and Zheng-Xin Yong and Samuel Cahyawijaya},
    year={2024},
    eprint={2406.10118},
    journal={arXiv preprint arXiv: 2406.10118}
}
```