---
language:
- bn
- en
- gu
- hi
- kn
- ml
- mr
- or
- pa
- ta
- te
- ur
license: cc-by-4.0
size_categories:
- 1M<n<10M
pretty_name: Pralekha
dataset_info:
- config_name: alignable
  features:
  - name: n_id
    dtype: string
  - name: doc_id
    dtype: string
  - name: lang
    dtype: string
  - name: text
    dtype: string
  splits:
  - name: ben
    num_bytes: 651961117
    num_examples: 95813
  - name: eng
    num_bytes: 1048149692
    num_examples: 298111
  - name: guj
    num_bytes: 549286108
    num_examples: 67847
  - name: hin
    num_bytes: 1754308559
    num_examples: 204809
  - name: kan
    num_bytes: 567860764
    num_examples: 61998
  - name: mal
    num_bytes: 498894372
    num_examples: 67760
  - name: mar
    num_bytes: 961277740
    num_examples: 135301
  - name: ori
    num_bytes: 397642857
    num_examples: 46167
  - name: pan
    num_bytes: 872586190
    num_examples: 108459
  - name: tam
    num_bytes: 858335433
    num_examples: 149637
  - name: tel
    num_bytes: 914832899
    num_examples: 110077
  - name: urd
    num_bytes: 1199225480
    num_examples: 220425
  download_size: 3954199760
  dataset_size: 10274361211
- config_name: dev
  features:
  - name: src_text
    dtype: string
  - name: tgt_text
    dtype: string
  splits:
  - name: eng_ben
    num_bytes: 11878032
    num_examples: 1000
  - name: eng_guj
    num_bytes: 12114408
    num_examples: 1000
  - name: eng_hin
    num_bytes: 11866493
    num_examples: 1000
  - name: eng_kan
    num_bytes: 12737616
    num_examples: 1000
  - name: eng_mal
    num_bytes: 13282361
    num_examples: 1000
  - name: eng_mar
    num_bytes: 12562695
    num_examples: 1000
  - name: eng_ori
    num_bytes: 12440443
    num_examples: 1000
  - name: eng_pan
    num_bytes: 11887954
    num_examples: 1000
  - name: eng_tam
    num_bytes: 10889623
    num_examples: 1000
  - name: eng_tel
    num_bytes: 12862241
    num_examples: 1000
  - name: eng_urd
    num_bytes: 9313209
    num_examples: 1000
  download_size: 49754255
  dataset_size: 131835075
- config_name: test
  features:
  - name: src_text
    dtype: string
  - name: tgt_text
    dtype: string
  splits:
  - name: eng_ben
    num_bytes: 11326293
    num_examples: 1000
  - name: eng_guj
    num_bytes: 11754732
    num_examples: 1000
  - name: eng_hin
    num_bytes: 11572603
    num_examples: 1000
  - name: eng_kan
    num_bytes: 12210417
    num_examples: 1000
  - name: eng_mal
    num_bytes: 12750095
    num_examples: 1000
  - name: eng_mar
    num_bytes: 12260214
    num_examples: 1000
  - name: eng_ori
    num_bytes: 11926414
    num_examples: 1000
  - name: eng_pan
    num_bytes: 11534797
    num_examples: 1000
  - name: eng_tam
    num_bytes: 11072385
    num_examples: 1000
  - name: eng_tel
    num_bytes: 12530011
    num_examples: 1000
  - name: eng_urd
    num_bytes: 9196555
    num_examples: 1000
  download_size: 49449543
  dataset_size: 128134516
- config_name: unalignable
  features:
  - name: n_id
    dtype: string
  - name: doc_id
    dtype: string
  - name: lang
    dtype: string
  - name: text
    dtype: string
  splits:
  - name: ben
    num_bytes: 273391595
    num_examples: 47906
  - name: eng
    num_bytes: 420307531
    num_examples: 149055
  - name: guj
    num_bytes: 214351582
    num_examples: 33923
  - name: hin
    num_bytes: 683869386
    num_examples: 102404
  - name: kan
    num_bytes: 189633814
    num_examples: 30999
  - name: mal
    num_bytes: 192394324
    num_examples: 33880
  - name: mar
    num_bytes: 428715921
    num_examples: 67650
  - name: ori
    num_bytes: 111986274
    num_examples: 23083
  - name: pan
    num_bytes: 328564948
    num_examples: 54229
  - name: tam
    num_bytes: 614171222
    num_examples: 74818
  - name: tel
    num_bytes: 372531108
    num_examples: 55038
  - name: urd
    num_bytes: 644995094
    num_examples: 110212
  download_size: 1855179179
  dataset_size: 4474912799
configs:
- config_name: alignable
  data_files:
  - split: ben
    path: alignable/ben-*
  - split: eng
    path: alignable/eng-*
  - split: guj
    path: alignable/guj-*
  - split: hin
    path: alignable/hin-*
  - split: kan
    path: alignable/kan-*
  - split: mal
    path: alignable/mal-*
  - split: mar
    path: alignable/mar-*
  - split: ori
    path: alignable/ori-*
  - split: pan
    path: alignable/pan-*
  - split: tam
    path: alignable/tam-*
  - split: tel
    path: alignable/tel-*
  - split: urd
    path: alignable/urd-*
- config_name: dev
  data_files:
  - split: eng_ben
    path: dev/eng_ben-*
  - split: eng_guj
    path: dev/eng_guj-*
  - split: eng_hin
    path: dev/eng_hin-*
  - split: eng_kan
    path: dev/eng_kan-*
  - split: eng_mal
    path: dev/eng_mal-*
  - split: eng_mar
    path: dev/eng_mar-*
  - split: eng_ori
    path: dev/eng_ori-*
  - split: eng_pan
    path: dev/eng_pan-*
  - split: eng_tam
    path: dev/eng_tam-*
  - split: eng_tel
    path: dev/eng_tel-*
  - split: eng_urd
    path: dev/eng_urd-*
- config_name: test
  data_files:
  - split: eng_ben
    path: test/eng_ben-*
  - split: eng_guj
    path: test/eng_guj-*
  - split: eng_hin
    path: test/eng_hin-*
  - split: eng_kan
    path: test/eng_kan-*
  - split: eng_mal
    path: test/eng_mal-*
  - split: eng_mar
    path: test/eng_mar-*
  - split: eng_ori
    path: test/eng_ori-*
  - split: eng_pan
    path: test/eng_pan-*
  - split: eng_tam
    path: test/eng_tam-*
  - split: eng_tel
    path: test/eng_tel-*
  - split: eng_urd
    path: test/eng_urd-*
- config_name: unalignable
  data_files:
  - split: ben
    path: unalignable/ben-*
  - split: eng
    path: unalignable/eng-*
  - split: guj
    path: unalignable/guj-*
  - split: hin
    path: unalignable/hin-*
  - split: kan
    path: unalignable/kan-*
  - split: mal
    path: unalignable/mal-*
  - split: mar
    path: unalignable/mar-*
  - split: ori
    path: unalignable/ori-*
  - split: pan
    path: unalignable/pan-*
  - split: tam
    path: unalignable/tam-*
  - split: tel
    path: unalignable/tel-*
  - split: urd
    path: unalignable/urd-*
tags:
- parallel-corpus
- document-alignment
- machine-translation
task_categories:
- translation
---
# Pralekha: Cross-Lingual Document Alignment for Indic Languages
Pralekha is a large-scale parallel document dataset spanning 11 Indic languages and English. It comprises over 3 million document pairs, of which 1.5 million are English-centric. The dataset serves both as a benchmark for evaluating Cross-Lingual Document Alignment (CLDA) techniques and as a domain-specific parallel corpus for training document-level Machine Translation (MT) models for Indic languages.
## Dataset Description
Pralekha covers 12 languages: Bengali (`ben`), Gujarati (`guj`), Hindi (`hin`), Kannada (`kan`), Malayalam (`mal`), Marathi (`mar`), Odia (`ori`), Punjabi (`pan`), Tamil (`tam`), Telugu (`tel`), Urdu (`urd`), and English (`eng`). It includes a mixture of high- and medium-resource languages, covering 11 different scripts. The dataset spans two broad domains, news bulletins from the Indian Press Information Bureau (PIB) and podcast scripts from Mann Ki Baat (MKB), offering both written and spoken forms of data. All the data is human-written or human-verified, ensuring high quality.
While this accounts for `alignable` (parallel) documents, real-world multilingual corpora often include `unalignable` documents as well. To simulate this for CLDA evaluation, we sample `unalignable` documents from Sangraha Unverified, selecting 50% of Pralekha's size to maintain a 1:2 ratio of `unalignable` to `alignable` documents.
For Machine Translation (MT) tasks, we first randomly sample 1,000 documents per English-Indic language pair from the `alignable` subset for each of the development (dev) and test sets, ensuring a good spread of document lengths. After excluding these sampled documents, the remaining documents form the training set for document-level machine translation models.
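As a rough illustration of this split construction (a sketch only, not the authors' released pipeline), the snippet below assumes a hypothetical `pairs` list of aligned documents for one English-Indic pair and holds out 1,000 pairs each for dev and test:

```python
# Illustrative sketch only: `pairs` is a hypothetical list of aligned
# (src_text, tgt_text) tuples for a single English-Indic language pair.
import random

def make_splits(pairs, n_eval=1000, seed=0):
    """Hold out n_eval pairs each for dev and test; keep the rest for training."""
    rng = random.Random(seed)
    shuffled = list(pairs)
    rng.shuffle(shuffled)
    dev = shuffled[:n_eval]
    test = shuffled[n_eval:2 * n_eval]
    train = shuffled[2 * n_eval:]
    return train, dev, test
```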
## Data Fields
**Alignable & Unalignable Set:**

- `n_id`: Unique identifier for `alignable` document pairs (random `n_id`s are assigned for the `unalignable` set).
- `doc_id`: Unique identifier for individual documents.
- `lang`: Language of the document (ISO 639-3 code).
- `text`: The textual content of the document.
**Train, Dev & Test Set:**

- `src_lang`: Source language (`eng`).
- `src_text`: Source language text.
- `tgt_lang`: Target language (ISO 639-3 code).
- `tgt_text`: Target language text.
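For orientation, the sketch below shows one hypothetical record per schema; the values are invented and only illustrate the field layout described above:

```python
# Hypothetical records (invented values) matching the field descriptions above.
alignable_record = {
    "n_id": "pib-000123",        # identifier shared by the aligned pair
    "doc_id": "pib-000123-hin",  # identifier of this individual document
    "lang": "hin",               # ISO 639-3 language code
    "text": "...",               # full document text
}

mt_record = {
    "src_lang": "eng",
    "src_text": "...",
    "tgt_lang": "hin",
    "tgt_text": "...",
}
```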
## Usage
You can load specific subsets and splits from this dataset using the `datasets` library.
**Load an entire subset**

```python
from datasets import load_dataset

dataset = load_dataset("ai4bharat/Pralekha", data_dir="<subset>")
# <subset> = alignable, unalignable, train, dev, or test.
```

**Load a specific split within a subset**

```python
from datasets import load_dataset

dataset = load_dataset("ai4bharat/Pralekha", data_dir="<subset>/<lang>")
# <subset> = alignable or unalignable; <lang> = ben, eng, guj, hin, kan, mal, mar, ori, pan, tam, tel, urd.
```

```python
from datasets import load_dataset

dataset = load_dataset("ai4bharat/Pralekha", data_dir="<subset>/eng_<lang>")
# <subset> = train, dev, or test; <lang> = ben, guj, hin, kan, mal, mar, ori, pan, tam, tel, urd.
```
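A concrete example following the `data_dir` pattern above (an untested sketch; the split name exposed this way is typically `train`, so it is looked up rather than hard-coded):

```python
# Sketch: load the Hindi alignable documents and inspect one record.
from datasets import load_dataset

dataset = load_dataset("ai4bharat/Pralekha", data_dir="alignable/hin")
split_name = list(dataset.keys())[0]  # usually "train" when loading via data_dir
record = dataset[split_name][0]
print(record["doc_id"], record["lang"], len(record["text"]))
```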
## Data Size Statistics

| Split       | Number of Documents | Size (bytes)   |
|-------------|---------------------|----------------|
| Alignable   | 1,566,404           | 10,274,361,211 |
| Unalignable | 783,197             | 4,466,506,637  |
| Total       | 2,349,601           | 14,740,867,848 |
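These totals can also be cross-checked against the split metadata declared in the YAML header, without downloading the data. A minimal sketch, assuming the named configs (`alignable`, `unalignable`, `dev`, `test`) resolve as usual:

```python
# Sketch: sum per-split document counts and byte sizes from the dataset metadata.
from datasets import load_dataset_builder

builder = load_dataset_builder("ai4bharat/Pralekha", "alignable")
num_docs = sum(split.num_examples for split in builder.info.splits.values())
num_bytes = sum(split.num_bytes for split in builder.info.splits.values())
print(f"alignable: {num_docs:,} documents, {num_bytes:,} bytes")
```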
## Language-wise Statistics

| Language (ISO 639-3) | Alignable Documents | Unalignable Documents | Total Documents |
|----------------------|---------------------|-----------------------|-----------------|
| Bengali (`ben`)      | 95,813              | 47,906                | 143,719         |
| English (`eng`)      | 298,111             | 149,055               | 447,166         |
| Gujarati (`guj`)     | 67,847              | 33,923                | 101,770         |
| Hindi (`hin`)        | 204,809             | 102,404               | 307,213         |
| Kannada (`kan`)      | 61,998              | 30,999                | 92,997          |
| Malayalam (`mal`)    | 67,760              | 33,880                | 101,640         |
| Marathi (`mar`)      | 135,301             | 67,650                | 202,951         |
| Odia (`ori`)         | 46,167              | 23,083                | 69,250          |
| Punjabi (`pan`)      | 108,459             | 54,229                | 162,688         |
| Tamil (`tam`)        | 149,637             | 74,818                | 224,455         |
| Telugu (`tel`)       | 110,077             | 55,038                | 165,115         |
| Urdu (`urd`)         | 220,425             | 110,212               | 330,637         |
## Citation

If you use Pralekha in your work, please cite us:

```bibtex
@article{suryanarayanan2024pralekha,
  title={Pralekha: An Indic Document Alignment Evaluation Benchmark},
  author={Suryanarayanan, Sanjay and Song, Haiyue and Khan, Mohammed Safi Ur Rahman and Kunchukuttan, Anoop and Khapra, Mitesh M and Dabre, Raj},
  journal={arXiv preprint arXiv:2411.19096},
  year={2024}
}
```
## License
This dataset is released under the CC BY 4.0 license.
## Contact
For any questions or feedback, please contact:
- Raj Dabre ([email protected])
- Sanjay Suryanarayanan ([email protected])
- Haiyue Song ([email protected])
- Mohammed Safi Ur Rahman Khan ([email protected])
Please get in touch with us for any copyright concerns.