|
--- |
|
dataset_name: IndicST |
|
configs: |
|
- config_name: ast |
|
description: "Dataset for AST (Automated Speech Transcription)" |
|
data_files: |
|
- split: train |
|
path: ast/train.json |
|
- split: dev |
|
path: ast/dev.json |
|
- split: test |
|
path: ast/test.json |
|
- split: kathbath |
|
path: ast/kathbath.json |
|
- split: svarah |
|
path: ast/svarah.json |
|
features: |
|
- name: path |
|
dtype: string |
|
- name: text |
|
dtype: string |
|
- name: task |
|
dtype: string |
|
- name: source_text |
|
dtype: string |
|
- name: source_language |
|
dtype: string |
|
- name: tar_lang |
|
dtype: string |
|
|
|
- config_name: asr |
|
description: "Dataset for ASR (Automatic Speech Recognition)" |
|
data_files: |
|
- split: train |
|
path: asr/train.json |
|
- split: dev |
|
path: asr/dev.json |
|
- split: test |
|
path: asr/test.json |
|
- split: kathbath |
|
path: asr/kathbath.json |
|
- split: svarah |
|
path: asr/svarah.json |
|
features: |
|
- name: path |
|
dtype: string |
|
- name: text |
|
dtype: string |
|
- name: task |
|
dtype: string |
|
language: |
|
- hi |
|
- kn |
|
- mr |
|
- ta |
|
- te |
|
- bn |
|
- ml |
|
- en |
|
- gu |
|
license: other |
|
license_name: krutrim-community-license-agreement-version-1.0 |
|
license_link: LICENSE.md |
|
extra_gated_heading: Acknowledge license to accept the repository |
|
extra_gated_button_content: Acknowledge license |
|
--- |
|
|
|
# IndicST: Indian Multilingual Translation Corpus For Evaluating Speech Large Language Models |
|
|
|
## Introduction |
|
IndicST is a new dataset tailored for training and evaluating Speech LLMs on AST and ASR tasks, featuring meticulously curated synthetic data that is both automatically and manually verified. The dataset offers 10.8k hrs of training data and 1.13k hrs of evaluation data.
|
|
|
## Use-Cases |
|
### ASR (Speech-to-Text) |
|
- Transcribing Indic languages |
|
- Handling accents and noisy environments |
|
- Supporting low-resource language ASR |
|
|
|
### Automatic Speech Translation (AST) |
|
- Speech-to-speech and speech-to-text translation |
|
- Real-time multilingual communication |
|
|
|
## Dataset Details |
|
Training data: We used ASR data from 14 publicly available open-source datasets, collectively 10.8k hrs spread over nine languages; Table 1 provides more details. Each dataset consists of input speech audio along with its transcription. To synthetically generate the translation for each input speech audio and transcription, we used the [IndicTrans2](https://github.com/AI4Bharat/IndicTrans2) toolkit in two translation directions (a rough code sketch follows the list below):

- **One-to-many** (en → X): English (source) transcription is translated to text in 8 Indian languages (target).

- **Many-to-one** (X → en): Transcription in 8 Indian languages (source) is translated to English (target).
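As a rough illustration of the en → X direction, the sketch below runs the public IndicTrans2 en–Indic checkpoint through `transformers` with the `IndicTransToolkit` pre/post-processor. The checkpoint name, language tags, and generation settings follow the IndicTrans2 README rather than this card, so treat them as assumptions.

```python
# Hedged sketch of en -> X synthetic translation with IndicTrans2.
# Checkpoint name, language tags, and settings are assumptions based on
# the IndicTrans2 README, not details of the IndicST pipeline itself.
import torch
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer
from IndicTransToolkit.processor import IndicProcessor  # import path may vary by toolkit version

MODEL = "ai4bharat/indictrans2-en-indic-1B"  # assumed public checkpoint
tokenizer = AutoTokenizer.from_pretrained(MODEL, trust_remote_code=True)
model = AutoModelForSeq2SeqLM.from_pretrained(MODEL, trust_remote_code=True)
ip = IndicProcessor(inference=True)

sentences = ["The weather is pleasant today."]
# IndicTrans2 uses FLORES-style tags, e.g. eng_Latn (English), hin_Deva (Hindi).
batch = ip.preprocess_batch(sentences, src_lang="eng_Latn", tgt_lang="hin_Deva")
inputs = tokenizer(batch, padding=True, truncation=True, return_tensors="pt")
with torch.no_grad():
    tokens = model.generate(**inputs, max_length=256, num_beams=5)
decoded = tokenizer.batch_decode(tokens, skip_special_tokens=True)
print(ip.postprocess_batch(decoded, lang="hin_Deva"))
```

The X → en direction works the same way with the corresponding Indic–en checkpoint and swapped language tags.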
|
|
|
|
|
### Table 1: Summary of ASR datasets for various Indian languages used for training the Speech LLM. Durations are in thousands of hours (k hrs).
|
|
|
| Datasets | en | hi | mr | gu | bn | ta | te | ml | kn | Duration (k hrs) | |
|
|------------------------|----|----|----|----|----|----|----|----|----|------------------| |
|
| **Spring Labs** | β | β | β | β | β | β | β | β | β | 2.2 | |
|
| **Common accent** | β | β | β | β | β | β | β | β | β | 0.01 | |
|
| **MUCS** | β | β | β | β | β | β | β | β | β | 0.22 | |
|
| **CMU** | β | β | β | β | β | β | β | β | β | 0.06 | |
|
| **CommonVoice** | β | β | β | β | β | β | β | β | β | 1.6 | |
|
| **Gramavaani** | β | β | β | β | β | β | β | β | β | 0.095 | |
|
| **Vaani** | β | β | β | β | β | β | β | β | β | 0.074 | |
|
| **Lahaja** | β | β | β | β | β | β | β | β | β | 0.011 | |
|
| **Shrutilipi** | β | β | β | β | β | β | β | β | β | 5.319 | |
|
| **Google Corpus** | β | β | β | β | β | β | β | β | β | 0.034 | |
|
| **Google Fleurs** | β | β | β | β | β | β | β | β | β | 0.087 | |
|
| **Microsoft Speech Corpus** | β | β | β | β | β | β | β | β | β | 0.12 | |
|
| **IISc MILE** | β | β | β | β | β | β | β | β | β | 0.45 | |
|
| **IndicVoices** | β | β | β | β | β | β | β | β | β | 0.52 | |
|
| **Total Duration** | 1.4 | 3 | 1.1 | 0.5 | 1.7 | 1.4 | 0.5 | 0.4 | 0.8 | **10.8** |
|
|
|
The table indicates language availability in each dataset: a check mark (✓) represents availability and a cross (✗) indicates the absence of support for that language.
|
|
|
### Test set: For evaluation, we created test sets for two scenarios
|
- Input speech audio available: We used the [Kathbath](https://huggingface.co/datasets/ai4bharat/kathbath) ASR dataset for the X → en translation pairs (more details in Table 2) and the [Svarah](https://github.com/AI4Bharat/Svarah) dataset for the en → X translation pairs.
|
- No input speech audio available: For this case, we used the AI4Bharat Conv text-to-text translation dataset, with speech audio for the source text generated using the [StyleTTS2](https://github.com/yl4579/StyleTTS2) TTS model. The duration of this test set is given in Table 3. More details about this dataset can be found in the [IndicST](https://cdn.olaelectric.com/krutrim/IndicST_ICASSP2025.pdf) paper.
|
|
|
### Table 2: Language-wise duration (hrs) of audio in Kathbath.
|
|
|
| Language | Duration (hrs) | |
|
|----------|----------------| |
|
| Hindi (hi) | 137.1 | |
|
| Marathi (mr) | 166.5 | |
|
| Gujarati (gu) | 116.2 | |
|
| Bengali (bn) | 104.2 | |
|
| Tamil (ta) | 166.3 | |
|
| Telugu (te) | 139.2 | |
|
| Malayalam (ml) | 132.2 | |
|
| Kannada (kn) | 149.2 | |
|
|
|
|
|
### Table 3: Language-wise duration (mins) of audio in the AI4Bharat Conv test set
|
|
|
| Language | Duration (mins) | |
|
|----------|-----------------| |
|
| English (en) | 28.9 | |
|
| Hindi (hi) | 36.1 | |
|
| Marathi (mr) | 40.0 | |
|
| Gujarati (gu) | 36.0 | |
|
| Bengali (bn) | 44.3 | |
|
| Tamil (ta) | 39.9 | |
|
| Telugu (te) | 45.2 | |
|
| Malayalam (ml) | 33.1 | |
|
| Kannada (kn) | 35.3 | |
|
|
|
## Evaluation Results |
|
We benchmarked the dataset on the ASR and AST tasks using an audio LLM (Whisper combined with a LLaMA-based LLM). We use whisper-large-v2 as the baseline for both tasks. Results for the ASR and AST tasks are given in Tables 4 and 5, respectively.
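For reference, here is a minimal sketch of running the whisper-large-v2 baseline on one utterance via the `transformers` ASR pipeline; the audio path and language are placeholders, and this is not the exact evaluation harness behind the tables below.

```python
# Minimal sketch of the whisper-large-v2 baseline on a single utterance.
# Illustrative only; not the exact harness used to produce the results below.
from transformers import pipeline

asr = pipeline("automatic-speech-recognition", model="openai/whisper-large-v2")
# "sample.wav" is a placeholder for an utterance from one of the test splits.
out = asr("sample.wav", generate_kwargs={"task": "transcribe", "language": "hindi"})
print(out["text"])
```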
|
|
|
### Table 4: Performance metric with TP1 (ASR-only) across different models on the in-domain Generic-ASR and out-of-domain Svarah and Kathbath test sets. All values are in percentage.
|
|
|
|
|
| Language | Baseline Generic-ASR | Baseline Svarah | Baseline Kathbath | M1 (TP1) Generic-ASR | M1 (TP1) Svarah | M1 (TP1) Kathbath | M2 (TP1) Generic-ASR | M2 (TP1) Svarah | M2 (TP1) Kathbath |
| --------- | ----------- | -------- | -------- | ----------- | -------- | -------- | ----------- | ------- | -------- |
| en | 23.3 | 25.6 | | 17.7 | 32 | | 16.5 | 26.4 | |
| hi | 63.7 | | 44.5 | 34.3 | | 14.6 | 27.3 | | 9.9 |
| mr | 99.7 | | 91 | 29.5 | | 31.9 | 24.2 | | 29.7 |
| gu | 109.4 | | 109.9 | 56.3 | | 34.2 | 41.3 | | 25.9 |
| bn | 116.6 | | 110.9 | 69.4 | | 26.8 | 63.2 | | 26.9 |
| ta | 66.6 | | 59.1 | 37.1 | | 39.3 | 38 | | 34.6 |
| te | 111.3 | | 112.7 | 75.4 | | 51.1 | 68.5 | | 37.1 |
| ml | 111.7 | | 117.5 | 47.6 | | 47.2 | 47.4 | | 46.6 |
| kn | 87.7 | | 82.4 | 56.9 | | 44.2 | 42.1 | | 30.4 |
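The card does not name the metric in Table 4; assuming it is word error rate (WER), the standard ASR metric, per-language scores of this form can be computed with the `jiwer` package, as in this minimal sketch with placeholder strings:

```python
# Hedged sketch: assumes Table 4 reports word error rate (WER).
import jiwer

references = ["this is a reference transcription"]  # ground-truth transcriptions
hypotheses = ["this is a reference transcript"]     # model outputs
# jiwer.wer returns a fraction; multiply by 100 for percentages as above.
print(100 * jiwer.wer(references, hypotheses))
```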
|
|
|
### Table 5: Performance metric (BLEU) with TP2 (AST-only) and TP3 (ASR + AST) across different models on in-domain Generic-AST and out-of-domain Svarah, Kathbath, and AI4Bharat test sets. All values are in percentage. |
|
|
|
### 5a. En → X
|
|
|
| Models | Datasets | en→hi | en→mr | en→gu | en→bn | en→ta | en→te | en→ml | en→kn |
|
| -------- | ----------- | ----- | ----- | ----- | ----- | ----- | ----- | ----- | ----- | |
|
| Baseline | Generic-AST | | | | | | | | | |
|
|| Svarah | | | | | | | | | |
|
|| Kathbath | | | | | | | | | |
|
|| AI4B | | | | | | | | | |
|
| M1 (TP2) | Generic-AST | 30.2 | 19.9 | 25.1 | 24.4 | 18.5 | 19 | 16.7 | 18.8 | |
|
|| Svarah | 20.9 | 10.6 | 14.9 | 14.5 | 7.9 | 10.2 | 7.4 | 11.5 | |
|
|| Kathbath | | | | | | | | | |
|
|| AI4B | 8.8 | 3.8 | 7.2 | 5.3 | 0.9 | 1.9 | 0.6 | 0.8 | |
|
| M2 (TP2) | Generic-AST | 35.6 | 22.1 | 29 | 27.8 | 21.6 | 25 | 20 | 23.9 | |
|
|| Svarah | 28.9 | 15.1 | 17.7 | 19.2 | 11 | 14.2 | 10.6 | 11 | |
|
|| Kathbath | | | | | | | | | |
|
|| AI4B | 13.4 | 6.9 | 9.5 | 6.3 | 1.6 | 2.1 | 1.2 | 1.2 | |
|
| M2 (TP3) | Generic-AST | 37 | 22.6 | 30.8 | 28.6 | 23 | 25.4 | 20.6 | 23.7 | |
|
|| Svarah | 23.9 | 14.7 | 19.3 | 18.9 | 11.8 | 14.5 | 10.1 | 15.2 | |
|
|| Kathbath | | | | | | | | | |
|
|| AI4B | 14.9 | 7.3 | 11.7 | 8.7 | 1.6 | 2.9 | 1.2 | 1.3 | |
|
|
|
### 5b. X → En
|
|
|
| Models | Datasets | hi→en | mr→en | gu→en | bn→en | ta→en | te→en | ml→en | kn→en |
|
| -------- | ----------- | ----- | ----- | ----- | ----- | ----- | ----- | ----- | ----- | |
|
| Baseline | Generic-AST | 16.9 | 13.1 | 10.7 | 7.7 | 11 | 7.7 | 11.9 | 8.1 | |
|
|| Svarah | | | | | | | | | |
|
|| Kathbath | 28.1 | 13.9 | 16.8 | 11.8 | 11.1 | 12.8 | 17.6 | 10.1 | |
|
|| AI4B | 28.8 | 17.1 | 19.3 | 19.7 | 14.5 | 17.1 | 15.7 | 12.7 | |
|
| M1 (TP2) | Generic-AST | 29.2 | 32.4 | 30 | 13 | 24.2 | 14.6 | 29 | 23.8 | |
|
|| Kathbath | 36.6 | 22.3 | 25.3 | 20.8 | 17.7 | 19 | 22 | 15.9 | |
|
|| AI4B | 26.2 | 18.9 | 19.5 | 21.4 | 14.7 | 16.3 | 15.9 | 12.1 | |
|
| M2 (TP2) | Generic-AST | 31 | 32 | 30.3 | 14.7 | 24.6 | 15 | 29.6 | 24.2 | |
|
|| Kathbath | 37.2 | 23.9 | 25.1 | 20.6 | 17.2 | 19.1 | 22.4 | 16.8 | |
|
|| AI4B | 26.7 | 19.2 | 19.4 | 22.1 | 14.7 | 17.4 | 16 | 13 | |
|
| M2 (TP3) | Generic-AST | 30.2 | 33 | 32.3 | 15.4 | 24.4 | 16.2 | 30.5 | 26.2 | |
|
|| Kathbath | 38 | 24.2 | 25.6 | 22.3 | 18.4 | 20.2 | 22.5 | 17.3 | |
|
|| AI4B | 26.1 | 19.6 | 18.8 | 21.2 | 14 | 17.1 | 16.5 | 12.9 | |
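Corpus-level BLEU scores of the kind reported in Table 5 can be computed with `sacrebleu`; the snippet below is a generic sketch with placeholder strings, not the paper's exact scoring script.

```python
# Generic corpus-BLEU sketch with sacrebleu; placeholder data only, and not
# necessarily the tokenization/settings used for the tables above.
import sacrebleu

hypotheses = ["the weather is pleasant today"]  # system translations
references = [["the weather is nice today"]]    # one stream per reference set
bleu = sacrebleu.corpus_bleu(hypotheses, references)
print(bleu.score)  # already on a 0-100 scale, as in the tables above
```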
|
|
|
|
|
## Dataset Download |
|
To download the dataset, visit the IndicST Hugging Face Repo: |
|
- [IndicST Dataset on Hugging Face](https://huggingface.co/datasets/krutrim-ai-labs/IndicST)
|
|
|
## How to Use and Run |
|
To use this dataset in your project, you can load it using a custom data loading script or directly access the files if integrated with a library that supports JSON. Example usage in Python: |
|
|
|
```python
import json

def load_json_split(file_path):
    """Load one IndicST split (a JSON file) from a local copy of the repository."""
    # utf-8 ensures Indic-script transcriptions decode correctly
    with open(file_path, 'r', encoding='utf-8') as file:
        data = json.load(file)
    return data

# Load the training data
train_data = load_json_split('path/to/ast/train.json')
```
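Alternatively, since the YAML header above defines `ast` and `asr` configs with named splits, the repository should also be loadable with the Hugging Face `datasets` library; a minimal, untested sketch based on those configs:

```python
from datasets import load_dataset

# Config names ("ast", "asr") and split names come from the YAML header of this card.
ds = load_dataset("krutrim-ai-labs/IndicST", "ast", split="test")
print(ds[0])  # expected fields: path, text, task, source_text, source_language, tar_lang
```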
|
|
|
## License |
|
This dataset is released under the [Krutrim Community License Agreement Version 1.0](LICENSE.md).
|
|
|
## Citation |
|
``` |
|
@inproceedings{sanket2025indicst,
  title={{IndicST}: Indian Multilingual Translation Corpus For Evaluating Speech Large Language Models},
  author={Sanket Shah and Kavya Ranjan Saxena and Kancharana Manideep Bharadwaj and Sharath Adavanne and Nagaraj Adiga},
  booktitle={Proc. ICASSP},
  year={2025}
}
|
``` |
|
|
|
## Contact |
|
Contributions are welcome! If you have any improvements or suggestions, feel free to submit a pull request on GitHub. |
|
|
|
|
|
|
|
|
|
|