Mohsen21 committed
Commit 8e5e7c9 · verified · 1 Parent(s): 2e5ee05

End of training

Files changed (2)
  1. README.md +15 -47
  2. model.safetensors +1 -1
README.md CHANGED
@@ -1,7 +1,5 @@
  ---
  library_name: transformers
- license: mit
- base_model: microsoft/speecht5_tts
  tags:
  - generated_from_trainer
  model-index:
@@ -14,9 +12,9 @@ should probably proofread and complete it, then remove this comment. -->
 
  # CollectedDataModel
 
- This model is a fine-tuned version of [microsoft/speecht5_tts](https://huggingface.co/microsoft/speecht5_tts) on the None dataset.
+ This model was trained from scratch on the None dataset.
  It achieves the following results on the evaluation set:
- - Loss: 0.4390
+ - Loss: 0.4365
 
  ## Model description
 
@@ -44,53 +42,23 @@ The following hyperparameters were used during training:
  - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  - lr_scheduler_type: linear
  - lr_scheduler_warmup_steps: 100
- - training_steps: 4000
+ - training_steps: 1000
  - mixed_precision_training: Native AMP
 
  ### Training results
 
- | Training Loss | Epoch | Step | Validation Loss |
- |:-------------:|:-------:|:----:|:---------------:|
- | 0.5567 | 0.9913 | 100 | 0.4913 |
- | 0.5127 | 1.9827 | 200 | 0.4692 |
- | 0.4915 | 2.9740 | 300 | 0.4562 |
- | 0.4862 | 3.9653 | 400 | 0.4524 |
- | 0.4745 | 4.9566 | 500 | 0.4483 |
- | 0.4735 | 5.9480 | 600 | 0.4458 |
- | 0.4681 | 6.9393 | 700 | 0.4397 |
- | 0.4656 | 7.9306 | 800 | 0.4408 |
- | 0.4576 | 8.9219 | 900 | 0.4336 |
- | 0.4571 | 9.9133 | 1000 | 0.4343 |
- | 0.451 | 10.9046 | 1100 | 0.4339 |
- | 0.4517 | 11.8959 | 1200 | 0.4316 |
- | 0.4432 | 12.8872 | 1300 | 0.4315 |
- | 0.4448 | 13.8786 | 1400 | 0.4357 |
- | 0.4455 | 14.8699 | 1500 | 0.4296 |
- | 0.4387 | 15.8612 | 1600 | 0.4331 |
- | 0.4334 | 16.8525 | 1700 | 0.4359 |
- | 0.4373 | 17.8439 | 1800 | 0.4290 |
- | 0.4304 | 18.8352 | 1900 | 0.4318 |
- | 0.4279 | 19.8265 | 2000 | 0.4305 |
- | 0.4294 | 20.8178 | 2100 | 0.4327 |
- | 0.4269 | 21.8092 | 2200 | 0.4327 |
- | 0.4248 | 22.8005 | 2300 | 0.4309 |
- | 0.4255 | 23.7918 | 2400 | 0.4275 |
- | 0.43 | 24.7831 | 2500 | 0.4315 |
- | 0.4214 | 25.7745 | 2600 | 0.4345 |
- | 0.4166 | 26.7658 | 2700 | 0.4362 |
- | 0.4173 | 27.7571 | 2800 | 0.4343 |
- | 0.4172 | 28.7485 | 2900 | 0.4325 |
- | 0.4142 | 29.7398 | 3000 | 0.4329 |
- | 0.4134 | 30.7311 | 3100 | 0.4327 |
- | 0.4121 | 31.7224 | 3200 | 0.4388 |
- | 0.4085 | 32.7138 | 3300 | 0.4352 |
- | 0.4095 | 33.7051 | 3400 | 0.4388 |
- | 0.4112 | 34.6964 | 3500 | 0.4372 |
- | 0.4106 | 35.6877 | 3600 | 0.4388 |
- | 0.4054 | 36.6791 | 3700 | 0.4392 |
- | 0.4075 | 37.6704 | 3800 | 0.4395 |
- | 0.4086 | 38.6617 | 3900 | 0.4393 |
- | 0.4125 | 39.6530 | 4000 | 0.4390 |
+ | Training Loss | Epoch | Step | Validation Loss |
+ |:-------------:|:------:|:----:|:---------------:|
+ | 0.4302 | 0.9913 | 100 | 0.4359 |
+ | 0.4275 | 1.9827 | 200 | 0.4396 |
+ | 0.4208 | 2.9740 | 300 | 0.4366 |
+ | 0.4217 | 3.9653 | 400 | 0.4389 |
+ | 0.4154 | 4.9566 | 500 | 0.4282 |
+ | 0.4173 | 5.9480 | 600 | 0.4362 |
+ | 0.4127 | 6.9393 | 700 | 0.4378 |
+ | 0.4104 | 7.9306 | 800 | 0.4340 |
+ | 0.4076 | 8.9219 | 900 | 0.4347 |
+ | 0.4065 | 9.9133 | 1000 | 0.4365 |
 
 
  ### Framework versions
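For reference, here is a minimal sketch of how the hyperparameters shown in this diff might be expressed as `transformers` `Seq2SeqTrainingArguments`. Only values visible in the hunk above are reproduced; the learning rate, batch sizes, evaluation schedule, and output directory are not part of the shown diff, so anything marked as assumed below is a guess rather than the author's configuration.

```python
from transformers import Seq2SeqTrainingArguments

# Sketch of the updated run configuration. Only hyperparameters visible in the
# diff above are reproduced; everything marked "assumed" is not in the commit.
training_args = Seq2SeqTrainingArguments(
    output_dir="CollectedDataModel",  # assumed output directory
    max_steps=1000,                   # training_steps: 1000 (was 4000)
    warmup_steps=100,                 # lr_scheduler_warmup_steps: 100
    lr_scheduler_type="linear",       # lr_scheduler_type: linear
    adam_beta1=0.9,                   # optimizer: Adam with betas=(0.9, 0.999)
    adam_beta2=0.999,
    adam_epsilon=1e-08,               # and epsilon=1e-08
    fp16=True,                        # mixed_precision_training: Native AMP
)
```

The only functional change to the training setup visible in this diff is `training_steps` dropping from 4000 to 1000.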
model.safetensors CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:ee9b73a31eb5deffe561166b38cb87c5d78c32499f5bca8e99d7a9cfb3f22682
+ oid sha256:5b2b492ef1b02ea87760d1f890cd96a5f2e11fe05387d240b4e42a2b9ac9e166
  size 577789320
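The `model.safetensors` change swaps the LFS object while the file size stays at 577789320 bytes, so only the weight values differ between the two revisions. Since the previous revision of the card described the checkpoint as a fine-tune of microsoft/speecht5_tts, a SpeechT5-style inference sketch is included below; the repo id, the processor source, and the zero speaker embedding are illustrative assumptions, not information from this commit.

```python
import torch
from transformers import SpeechT5ForTextToSpeech, SpeechT5HifiGan, SpeechT5Processor

# Assumed repo id; the diff does not state where the checkpoint is published.
repo_id = "Mohsen21/CollectedDataModel"

# The processor is taken from the original base model, since the updated card
# no longer lists one; swap in the repo's own processor if it ships one.
processor = SpeechT5Processor.from_pretrained("microsoft/speecht5_tts")
model = SpeechT5ForTextToSpeech.from_pretrained(repo_id)
vocoder = SpeechT5HifiGan.from_pretrained("microsoft/speecht5_hifigan")

inputs = processor(text="Testing the updated CollectedDataModel checkpoint.", return_tensors="pt")

# SpeechT5 expects a 512-dim x-vector per utterance; a zero vector works as a
# placeholder, but a real speaker embedding gives far better audio.
speaker_embeddings = torch.zeros((1, 512))

# Returns a 1-D tensor containing a 16 kHz waveform.
speech = model.generate_speech(inputs["input_ids"], speaker_embeddings, vocoder=vocoder)
```

The returned tensor can be written to disk with, for example, `soundfile.write("speech.wav", speech.numpy(), 16000)`.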