Whisper-finetune_all

This model is a fine-tuned version of openai/whisper-large-v2 (1.54B parameters, F32 weights) on the Amitabha_all dataset. It achieves the following results on the evaluation set:

  • Loss: 0.0002
  • CER: 0.1505
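
For reference, a minimal sketch of loading this checkpoint for transcription with the Transformers pipeline API. The repo id is taken from this page; the audio file name is a placeholder, not part of the dataset:

```python
# Minimal usage sketch (assumes the framework versions listed below).
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="LeoKuo49/whisper-finetune-all_0823",  # repo id from this page
)

# "sample.wav" is a hypothetical audio file used only for illustration.
print(asr("sample.wav")["text"])
```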

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training:

  • learning_rate: 1e-05
  • train_batch_size: 16
  • eval_batch_size: 8
  • seed: 42
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: linear
  • lr_scheduler_warmup_steps: 500
  • training_steps: 4000
  • mixed_precision_training: Native AMP
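
As a reference, a minimal sketch of how these hyperparameters map onto Hugging Face Seq2SeqTrainingArguments. The output directory and the evaluation cadence are assumptions, not taken from this card:

```python
# Sketch: the card's hyperparameters expressed as Seq2SeqTrainingArguments.
from transformers import Seq2SeqTrainingArguments

training_args = Seq2SeqTrainingArguments(
    output_dir="./whisper-finetune-all",  # hypothetical path
    learning_rate=1e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=8,
    seed=42,
    lr_scheduler_type="linear",
    warmup_steps=500,
    max_steps=4000,
    fp16=True,  # "Native AMP" mixed-precision training
    # Adam betas=(0.9, 0.999) and epsilon=1e-08 match the Trainer defaults
    # (adam_beta1, adam_beta2, adam_epsilon), so no explicit setting is needed.
    evaluation_strategy="steps",  # assumed from the 1000-step rows in the results table
    eval_steps=1000,
    predict_with_generate=True,  # required to decode text and compute CER
)
```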

Training results

Training Loss   Epoch     Step   Validation Loss   CER
0.0581          3.1056    1000   0.0515            6.2281
0.0132          6.2112    2000   0.0075            2.8061
0.0009          9.3168    3000   0.0006            0.3260
0.0001          12.4224   4000   0.0002            0.1505
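
The CER column is the character error rate. A sketch of how such values are typically computed with the Hugging Face evaluate library follows; whether this exact code produced the numbers above is an assumption:

```python
# Sketch: computing CER with the `evaluate` library (requires jiwer).
import evaluate

cer_metric = evaluate.load("cer")

# Toy strings for illustration only; not from the Amitabha_all dataset.
score = cer_metric.compute(
    predictions=["a transcribed sentence"],
    references=["a reference sentence"],
)

# The x100 scaling assumes the card reports CER as a percentage, which
# the magnitudes in the table (6.2281 down to 0.1505) suggest.
print(f"CER: {100 * score:.4f}")
```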

Framework versions

  • Transformers 4.41.2
  • Pytorch 2.3.0+cu121
  • Datasets 2.20.0
  • Tokenizers 0.19.1