# kkkh_w2lm_base_plus_finetune_teacher_clean_mozilla_100_epochs_batch_16
This model is a fine-tuned version of [patrickvonplaten/wavlm-libri-clean-100h-base-plus](https://huggingface.co/patrickvonplaten/wavlm-libri-clean-100h-base-plus) on an unspecified dataset. It achieves the following results on the evaluation set:
- Loss: 0.3595
- Wer: 0.2228
## Model description
More information needed
## Intended uses & limitations
More information needed
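Since no usage guidance is provided, the following is a minimal inference sketch, assuming the checkpoint loads with the standard `WavLMForCTC`/`Wav2Vec2Processor` classes and expects 16 kHz mono audio; the repo id and the silent dummy input are placeholders:

```python
import numpy as np
import torch
from transformers import Wav2Vec2Processor, WavLMForCTC

model_id = "path/to/this-checkpoint"  # placeholder: replace with this repo's id
processor = Wav2Vec2Processor.from_pretrained(model_id)
model = WavLMForCTC.from_pretrained(model_id)

speech = np.zeros(16_000, dtype=np.float32)  # 1 s of silence as a stand-in for real audio
inputs = processor(speech, sampling_rate=16_000, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits          # (batch, time, vocab) CTC logits
pred_ids = torch.argmax(logits, dim=-1)      # greedy CTC decoding
print(processor.batch_decode(pred_ids)[0])
```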
## Training and evaluation data
More information needed
## Training procedure

### Training hyperparameters
The following hyperparameters were used during training (restated as a `TrainingArguments` sketch after this list):
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 1
- seed: 42
- gradient_accumulation_steps: 256
- total_train_batch_size: 4096
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.2
- num_epochs: 100
- mixed_precision_training: Native AMP
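These settings map onto `transformers.TrainingArguments` roughly as follows; this is a reconstruction from the list above, not the original training script, and `output_dir` is a placeholder:

```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="wavlm-finetune",      # placeholder; the real output path is unknown
    learning_rate=5e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=1,
    seed=42,
    gradient_accumulation_steps=256,  # effective batch size: 16 * 256 = 4096
    lr_scheduler_type="linear",
    warmup_ratio=0.2,
    num_train_epochs=100,
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    fp16=True,                        # native AMP mixed precision
)
```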
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer    |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.5282        | 8.78  | 150  | 0.4346          | 0.3154 |
| 0.3207        | 17.55 | 300  | 0.3596          | 0.2737 |
| 0.2462        | 26.33 | 450  | 0.3317          | 0.2465 |
| 0.2021        | 35.11 | 600  | 0.3263          | 0.2369 |
| 0.1739        | 43.89 | 750  | 0.3276          | 0.2305 |
| 0.1548        | 52.66 | 900  | 0.3336          | 0.2270 |
| 0.141         | 61.44 | 1050 | 0.3404          | 0.2249 |
| 0.13          | 70.22 | 1200 | 0.3484          | 0.2254 |
| 0.1226        | 78.99 | 1350 | 0.3539          | 0.2238 |
| 0.1174        | 87.77 | 1500 | 0.3576          | 0.2238 |
| 0.1144        | 96.55 | 1650 | 0.3595          | 0.2228 |
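The Wer column is word error rate: edit distance (substitutions + insertions + deletions) divided by the number of reference words. A minimal sketch of how it can be computed with the `evaluate` library (illustrative, not taken from the training script):

```python
import evaluate

wer_metric = evaluate.load("wer")
wer = wer_metric.compute(
    predictions=["the cat sat on the mat"],
    references=["the cat sat on a mat"],
)
print(wer)  # one substitution over six reference words: ~0.167
```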
### Framework versions
- Transformers 4.29.2
- Pytorch 1.12.1
- Datasets 2.6.1
- Tokenizers 0.13.1