not_overfited_vc

This model is a fine-tuned version of microsoft/speecht5_tts on an unknown dataset. It achieves the following results on the evaluation set (a minimal inference sketch follows the results):

  • Loss: 0.4675
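
Since this is a SpeechT5 TTS fine-tune, it should load with the standard Transformers SpeechT5 classes. The snippet below is a minimal sketch, not a tested recipe: it assumes the repository ships the processor files (otherwise fall back to microsoft/speecht5_tts for the processor), uses microsoft/speecht5_hifigan as the vocoder since the card does not name one, and substitutes a placeholder where a real 512-dimensional x-vector speaker embedding is needed.

```python
import torch
import soundfile as sf
from transformers import SpeechT5Processor, SpeechT5ForTextToSpeech, SpeechT5HifiGan

# Load the fine-tuned checkpoint; the vocoder choice is an assumption (not stated in the card).
processor = SpeechT5Processor.from_pretrained("Mehrdad-S/not_overfited_vc")
model = SpeechT5ForTextToSpeech.from_pretrained("Mehrdad-S/not_overfited_vc")
vocoder = SpeechT5HifiGan.from_pretrained("microsoft/speecht5_hifigan")

inputs = processor(text="Hello, this is a test.", return_tensors="pt")

# SpeechT5 conditions generation on a 512-dim speaker (x-vector) embedding.
# The zero vector below is only a placeholder; use an embedding of the target speaker.
speaker_embeddings = torch.zeros((1, 512))

speech = model.generate_speech(inputs["input_ids"], speaker_embeddings, vocoder=vocoder)
sf.write("speech.wav", speech.numpy(), samplerate=16000)  # SpeechT5 generates 16 kHz audio
```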

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training (a configuration sketch in code follows the list):

  • learning_rate: 5e-06
  • train_batch_size: 8
  • eval_batch_size: 2
  • seed: 42
  • gradient_accumulation_steps: 8
  • total_train_batch_size: 64
  • optimizer: AdamW (adamw_torch) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
  • lr_scheduler_type: linear
  • training_steps: 4000
  • mixed_precision_training: Native AMP
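
For reference, these settings map onto the Transformers training arguments roughly as sketched below, assuming the standard Seq2SeqTrainer setup used in the SpeechT5 fine-tuning examples. The output_dir name and the 100-step evaluation cadence (read off the results table below) are assumptions, not values stated in this list.

```python
from transformers import Seq2SeqTrainingArguments

training_args = Seq2SeqTrainingArguments(
    output_dir="not_overfited_vc",   # assumed; the card does not state the output directory
    learning_rate=5e-6,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=2,
    seed=42,
    gradient_accumulation_steps=8,   # effective (total) train batch size: 8 * 8 = 64
    lr_scheduler_type="linear",
    max_steps=4000,
    fp16=True,                       # "Native AMP"; fp16 (rather than bf16) is an assumption
    eval_strategy="steps",
    eval_steps=100,                  # inferred from the 100-step cadence in the results table
    logging_steps=100,
)
```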

Training results

| Training Loss | Epoch   | Step | Validation Loss |
|:-------------:|:-------:|:----:|:---------------:|
| 0.7032        | 0.4002  | 100  | 0.5819          |
| 0.6216        | 0.8004  | 200  | 0.5350          |
| 0.5847        | 1.2041  | 300  | 0.5165          |
| 0.5625        | 1.6043  | 400  | 0.5083          |
| 0.5647        | 2.0080  | 500  | 0.5047          |
| 0.5529        | 2.4082  | 600  | 0.4992          |
| 0.5524        | 2.8084  | 700  | 0.4944          |
| 0.54          | 3.2121  | 800  | 0.4924          |
| 0.5296        | 3.6123  | 900  | 0.4879          |
| 0.5418        | 4.0160  | 1000 | 0.4858          |
| 0.5345        | 4.4162  | 1100 | 0.4862          |
| 0.5186        | 4.8164  | 1200 | 0.4817          |
| 0.5232        | 5.2201  | 1300 | 0.4819          |
| 0.5309        | 5.6203  | 1400 | 0.4820          |
| 0.5315        | 6.0240  | 1500 | 0.4793          |
| 0.5238        | 6.4242  | 1600 | 0.4771          |
| 0.5121        | 6.8244  | 1700 | 0.4769          |
| 0.5252        | 7.2281  | 1800 | 0.4755          |
| 0.5251        | 7.6283  | 1900 | 0.4758          |
| 0.5136        | 8.0320  | 2000 | 0.4721          |
| 0.5176        | 8.4322  | 2100 | 0.4729          |
| 0.5096        | 8.8324  | 2200 | 0.4746          |
| 0.5155        | 9.2361  | 2300 | 0.4729          |
| 0.5091        | 9.6363  | 2400 | 0.4699          |
| 0.5223        | 10.0400 | 2500 | 0.4707          |
| 0.5105        | 10.4402 | 2600 | 0.4695          |
| 0.5148        | 10.8404 | 2700 | 0.4689          |
| 0.5101        | 11.2441 | 2800 | 0.4694          |
| 0.5125        | 11.6443 | 2900 | 0.4690          |
| 0.5093        | 12.0480 | 3000 | 0.4686          |
| 0.5057        | 12.4482 | 3100 | 0.4671          |
| 0.5063        | 12.8484 | 3200 | 0.4693          |
| 0.5071        | 13.2521 | 3300 | 0.4669          |
| 0.5051        | 13.6523 | 3400 | 0.4685          |
| 0.5049        | 14.0560 | 3500 | 0.4660          |
| 0.5015        | 14.4562 | 3600 | 0.4679          |
| 0.5041        | 14.8564 | 3700 | 0.4663          |
| 0.5108        | 15.2601 | 3800 | 0.4678          |
| 0.5048        | 15.6603 | 3900 | 0.4680          |
| 0.508         | 16.0640 | 4000 | 0.4675          |

Framework versions

  • Transformers 4.47.0
  • PyTorch 2.5.1+cu121
  • Datasets 3.3.1
  • Tokenizers 0.21.0