---
library_name: peft
license: mit
base_model: openai/whisper-large-v2
tags:
  - generated_from_trainer
  - ASR
  - Wolof
  - French
  - English
  - Multi-lang
  - Open-Source
  - bilingual
  - code-switched
model-index:
  - name: whosper-large-v3
    results: []
language:
  - wo
  - fr
  - en
metrics:
  - cer
  - wer
pipeline_tag: automatic-speech-recognition
---

# Whosper-large-v3

## Model Overview

Whosper-large-v3 is a PEFT fine-tune of openai/whisper-large-v2, optimized for Wolof and French speech recognition. It improves on the WER and CER of its predecessor, whosper-large.
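Since the metadata lists `library_name: peft`, the released weights are presumably a PEFT adapter applied on top of `openai/whisper-large-v2`. The sketch below shows one way to load such an adapter and transcribe audio in 30-second windows; the `adapter_id` repo name is a placeholder, not confirmed by this card.

```python
def chunk_audio(samples, sample_rate, chunk_s=30):
    """Split a mono sample sequence into Whisper-sized windows.

    Whisper operates on 30-second windows, so longer recordings are
    transcribed chunk by chunk.
    """
    step = int(sample_rate * chunk_s)
    return [samples[i:i + step] for i in range(0, len(samples), step)]


def transcribe(samples, sample_rate=16000,
               adapter_id="<org>/whosper-large-v3"):  # placeholder repo id
    """Load the base model plus the PEFT adapter and transcribe `samples`."""
    import torch
    from peft import PeftModel
    from transformers import WhisperForConditionalGeneration, WhisperProcessor

    processor = WhisperProcessor.from_pretrained("openai/whisper-large-v2")
    base = WhisperForConditionalGeneration.from_pretrained("openai/whisper-large-v2")
    model = PeftModel.from_pretrained(base, adapter_id).eval()

    texts = []
    for chunk in chunk_audio(samples, sample_rate):
        inputs = processor(chunk, sampling_rate=sample_rate, return_tensors="pt")
        with torch.no_grad():
            ids = model.generate(input_features=inputs.input_features)
        texts.append(processor.batch_decode(ids, skip_special_tokens=True)[0])
    return " ".join(texts)
```

`transcribe` expects a 16 kHz mono float array, e.g. as returned by `librosa.load(path, sr=16000)`.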

## Performance Metrics

- Loss: 0.4490
- WER (Word Error Rate): 0.2409
- CER (Character Error Rate): 0.1128
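Both metrics are normalized edit distances: the minimum number of word-level (WER) or character-level (CER) insertions, deletions, and substitutions needed to turn the hypothesis into the reference, divided by the reference length. A self-contained sketch of how they are computed (in practice libraries such as `jiwer` or `evaluate` are typically used):

```python
def edit_distance(ref, hyp):
    # Levenshtein distance via single-row dynamic programming.
    m, n = len(ref), len(hyp)
    dp = list(range(n + 1))
    for i in range(1, m + 1):
        prev, dp[0] = dp[0], i
        for j in range(1, n + 1):
            cur = dp[j]
            dp[j] = min(dp[j] + 1,      # deletion
                        dp[j - 1] + 1,  # insertion
                        prev + (ref[i - 1] != hyp[j - 1]))  # substitution
            prev = cur
    return dp[n]

def wer(reference: str, hypothesis: str) -> float:
    # Word Error Rate: edit distance over word tokens / reference word count.
    ref, hyp = reference.split(), hypothesis.split()
    return edit_distance(ref, hyp) / len(ref)

def cer(reference: str, hypothesis: str) -> float:
    # Character Error Rate: same computation over characters.
    return edit_distance(reference, hypothesis) / len(reference)
```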

## Key Features

- Improved WER and CER compared to whosper-large
- Optimized for Wolof and French recognition
- Enhanced performance on bilingual content

## Limitations

- Reduced performance on English compared to whosper-large
- Less effective for general multilingual content

## Training Data

The model was trained on a combined dataset including:

- ALFFA Public Dataset
- FLEURS Dataset
- Bus Urbain Dataset
- Anta Women TTS Dataset

## Training Procedure

### Training Hyperparameters

- learning_rate: 0.001
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: adamw_torch (betas=(0.9, 0.999), epsilon=1e-08)
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 50
- num_epochs: 6
- mixed_precision_training: Native AMP
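Two of these numbers follow from the others: the total train batch size is the per-device batch size times the gradient accumulation steps, and the `linear` scheduler ramps the learning rate up over the 50 warmup steps, then decays it linearly to zero. A minimal sketch (the total step count of 14124 is taken from the results table below):

```python
# Effective batch size: per-device batch x gradient accumulation steps.
train_batch_size = 8
gradient_accumulation_steps = 4
total_train_batch_size = train_batch_size * gradient_accumulation_steps  # 32

def linear_lr(step, max_lr=1e-3, warmup_steps=50, total_steps=14124):
    """Linear warmup to max_lr, then linear decay to 0 (HF `linear` schedule)."""
    if step < warmup_steps:
        return max_lr * step / warmup_steps
    return max_lr * (total_steps - step) / (total_steps - warmup_steps)
```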

### Training Results

| Training Loss | Epoch  | Step  | Validation Loss |
|:-------------:|:------:|:-----:|:---------------:|
| 0.7575        | 0.9998 | 2354  | 0.7068          |
| 0.6429        | 1.9998 | 4708  | 0.6073          |
| 0.5468        | 2.9998 | 7062  | 0.5428          |
| 0.4439        | 3.9998 | 9416  | 0.4935          |
| 0.3208        | 4.9998 | 11770 | 0.4600          |
| 0.2394        | 5.9998 | 14124 | 0.4490          |

## Framework Versions

- PEFT: 0.14.1.dev0
- Transformers: 4.49.0.dev0
- PyTorch: 2.5.1+cu124
- Datasets: 3.2.0
- Tokenizers: 0.21.0

## License

MIT

## Citation

```bibtex
@misc{whosper2025,
  title={Whosper-large-v3: An Enhanced ASR Model for Wolof and French},
  author={Caytu Robotics AI Department},
  year={2025},
  publisher={Caytu Robotics}
}
```

## Acknowledgments

This model was developed by the AI Department at Caytu Robotics and builds upon the OpenAI Whisper Large V2 model.