Twi Words Speech-Text Parallel Dataset
Dataset Description
This dataset contains 413,463 parallel speech-text pairs for Twi (Akan), a language spoken primarily in Ghana. The dataset consists of audio recordings paired with their corresponding text transcriptions, making it suitable for automatic speech recognition (ASR) and text-to-speech (TTS) tasks.
Dataset Summary
- Language: Twi (Akan), language code `tw`
- Task: Speech Recognition, Text-to-Speech
- Size: 413,463 audio files, each larger than 1 KB (small/corrupted files filtered out)
- Format: WAV audio files with corresponding text labels
- Modalities: Audio + Text
Supported Tasks
- Automatic Speech Recognition (ASR): Train models to convert Twi speech to text
- Text-to-Speech (TTS): Use parallel data for TTS model development
- Keyword Spotting: Identify specific Twi words in audio (see the sketch after this list)
- Phonetic Analysis: Study Twi pronunciation patterns
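For keyword spotting in particular, a minimal sketch is to scan the transcriptions for a target word. The keyword below (`nyinaa`) is only an illustrative choice, and streaming mode is assumed here to avoid downloading the full corpus:

```python
from datasets import load_dataset

# Stream the corpus so the full 413,463 files are not downloaded up front
dataset = load_dataset(
    "michsethowusu/twi-words-speech-text-parallel",
    split="train",
    streaming=True,
)

target_word = "nyinaa"  # illustrative keyword; replace with the word you want to spot

for example in dataset:
    if example["text"].strip().lower() == target_word:
        audio = example["audio"]
        duration = len(audio["array"]) / audio["sampling_rate"]
        print(f"Found '{target_word}' ({duration:.2f} s)")
        break
```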
Dataset Structure
Data Fields
- `audio`: Audio file in WAV format
- `text`: Corresponding text transcription
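Assuming the dataset has been loaded with `load_dataset("michsethowusu/twi-words-speech-text-parallel")` (see the Usage Example below), each record decodes into an audio dictionary plus its text label:

```python
example = dataset["train"][0]

# The `audio` column is decoded on access by the datasets Audio feature
print(example["audio"]["path"])           # path to the underlying WAV file
print(example["audio"]["sampling_rate"])  # sampling rate in Hz
print(example["audio"]["array"].shape)    # NumPy array of waveform samples
print(example["text"])                    # the transcribed Twi word
```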
Data Splits
The dataset contains a single training split with 413,463 filtered audio files.
File Structure
Each audio segment is stored as a numbered pair:
- `NNNN.wav`: Audio file (e.g., `0001.wav`)
- `NNNN.txt`: Corresponding text file (e.g., `0001.txt`)
This structure ensures clean organization and easy pairing of audio-text data.
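If you work with the raw files directly instead of through the `datasets` loader, the numbered naming makes pairing straightforward. This sketch assumes the pairs sit in a hypothetical local `data/` directory:

```python
from pathlib import Path

data_dir = Path("data")  # hypothetical directory containing the NNNN.wav / NNNN.txt pairs

for wav_path in sorted(data_dir.glob("*.wav")):
    txt_path = wav_path.with_suffix(".txt")
    if not txt_path.exists():
        continue  # skip audio files without a matching transcription
    text = txt_path.read_text(encoding="utf-8").strip()
    print(f"{wav_path.name}: {text}")
```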
Dataset Creation
Source Data
The audio data has been sourced ethically from consenting contributors. To protect the privacy of the original authors and speakers, specific source information cannot be shared publicly.
Data Processing
- Audio files were processed using forced alignment techniques
- Word-level segmentation was performed with padding to prevent abrupt cuts (illustrated in the sketch after this list)
- Audio segments were filtered based on:
- Minimum duration requirements
- Volume/vocal content thresholds
- File size validation (> 1KB)
- Each valid segment was saved as a numbered audio-text pair
- Audio processing used the MMS-300M-1130 Forced Aligner tool for alignment and quality assurance
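The segmentation and padding steps can be pictured roughly as follows. This is an illustrative sketch only: the file name and word timestamps are placeholders, not the aligner's actual output, and the real pipeline may differ.

```python
import soundfile as sf

PAD_S = 0.1  # 100 ms of padding on each side to prevent abrupt word cuts

audio, sr = sf.read("source_recording.wav")  # placeholder source recording
# (word, start_s, end_s) spans as a forced aligner might report them (placeholders)
word_spans = [("bɛn", 0.52, 0.78), ("nyinaa", 1.10, 1.65)]

for i, (word, start, end) in enumerate(word_spans, start=1):
    lo = max(0, int((start - PAD_S) * sr))
    hi = min(len(audio), int((end + PAD_S) * sr))
    sf.write(f"{i:04d}.wav", audio[lo:hi], sr)  # numbered audio segment
    with open(f"{i:04d}.txt", "w", encoding="utf-8") as f:
        f.write(word)  # matching text label
```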
Quality Control
- Empty or silent audio segments were automatically filtered out
- Very short segments (< 200ms) were excluded
- Low-volume segments were removed to ensure vocal content
- Audio padding (100ms) was added to prevent abrupt word cuts
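The filters above can be approximated with a check along these lines; the RMS threshold is an assumption for illustration, and this is a sketch rather than the exact pipeline code:

```python
import os
import numpy as np
import soundfile as sf

MIN_BYTES = 1024       # file-size check: keep only files larger than 1 KB
MIN_DURATION_S = 0.2   # segments shorter than 200 ms are excluded
MIN_RMS = 0.01         # assumed volume threshold for detecting vocal content

def keep_segment(path: str) -> bool:
    """Return True if the segment passes the size, duration, and volume checks."""
    if os.path.getsize(path) <= MIN_BYTES:
        return False
    audio, sr = sf.read(path)
    if len(audio) / sr < MIN_DURATION_S:
        return False
    rms = float(np.sqrt(np.mean(np.square(audio))))
    return rms >= MIN_RMS  # drop near-silent, low-volume segments
```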
Annotations
Text annotations are stored in separate `.txt` files corresponding to each audio file, representing the exact spoken content in each audio segment.
Considerations for Using the Data
Social Impact of Dataset
This dataset contributes to the preservation and digital representation of Twi, supporting:
- Language technology development for underrepresented languages
- Educational resources for Twi language learning
- Cultural preservation through digital archives
Discussion of Biases
- The dataset may reflect the pronunciation patterns and dialects of specific regions or speakers
- Audio quality and recording conditions may vary across samples
- The vocabulary is limited to the words present in the collected samples
Other Known Limitations
- Limited vocabulary scope (word-level rather than sentence-level)
- Potential audio quality variations
- Regional dialect representation may be uneven
- Automatic filtering may have removed some valid segments
Additional Information
Licensing Information
This dataset is released under the Creative Commons Attribution 4.0 International License (CC BY 4.0).
Acknowledgments
- Audio processing and alignment performed using MMS-300M-1130 Forced Aligner
- The original audio was produced by The Ghana Institute of Linguistics, Literacy and Bible Translation in partnership with Davar Partners
- Automated quality filtering and padding applied to ensure high-quality audio segments
Citation Information
If you use this dataset in your research, please cite:
```bibtex
@dataset{twi_words_parallel_2025,
  title={Twi Words Speech-Text Parallel Dataset},
  year={2025},
  publisher={Hugging Face},
  howpublished={\url{https://huggingface.co/datasets/michsethowusu/twi-words-speech-text-parallel}}
}
```
Contact
For questions or concerns about this dataset, please open an issue in the dataset repository.
Usage Example
```python
from datasets import load_dataset

# Load the dataset
dataset = load_dataset("michsethowusu/twi-words-speech-text-parallel")

# Access audio and text pairs
for example in dataset["train"]:
    audio = example["audio"]
    text = example["text"]
    print(f"Text: {text}")
    print(f"Audio sample rate: {audio['sampling_rate']}")
```