KevinCRB committed
Commit 14ae81e · verified
1 Parent(s): a9496c9

Training complete

README.md ADDED
@@ -0,0 +1,82 @@
+ ---
+ library_name: transformers
+ license: mit
+ base_model: microsoft/speecht5_tts
+ tags:
+ - Text-To-Speech
+ - generated_from_trainer
+ model-index:
+ - name: speecht5_ft_french
+   results: []
+ ---
+ 
+ <!-- This model card has been generated automatically according to the information the Trainer had access to. You
+ should probably proofread and complete it, then remove this comment. -->
+ 
+ # speecht5_ft_french
+ 
+ This model is a fine-tuned version of [microsoft/speecht5_tts](https://huggingface.co/microsoft/speecht5_tts) on an unknown dataset.
+ It achieves the following results on the evaluation set:
+ - Loss: 0.6022
+ 
+ ## Model description
+ 
+ More information needed
+ 
+ ## Intended uses & limitations
+ 
+ More information needed
+ 
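+ A typical way to run inference with a SpeechT5 TTS checkpoint is sketched below; the repo id `KevinCRB/speecht5_ft_french` and the x-vector source are assumptions for illustration, not details confirmed by this card.
+ 
+ ```python
+ import torch
+ import soundfile as sf
+ from datasets import load_dataset
+ from transformers import SpeechT5ForTextToSpeech, SpeechT5HifiGan, SpeechT5Processor
+ 
+ # Assumed repo id for this fine-tuned checkpoint.
+ processor = SpeechT5Processor.from_pretrained("KevinCRB/speecht5_ft_french")
+ model = SpeechT5ForTextToSpeech.from_pretrained("KevinCRB/speecht5_ft_french")
+ vocoder = SpeechT5HifiGan.from_pretrained("microsoft/speecht5_hifigan")
+ 
+ # The base SpeechT5 tokenizer is character-level and English-focused, so
+ # accented French input may need normalization depending on how the
+ # fine-tuning data was preprocessed.
+ inputs = processor(text="Bonjour, comment allez-vous ?", return_tensors="pt")
+ 
+ # SpeechT5 conditions on a 512-dim x-vector speaker embedding. Borrowing one
+ # from CMU ARCTIC is an assumption; an embedding extracted from the
+ # fine-tuning speaker would match the training distribution better.
+ xvectors = load_dataset("Matthijs/cmu-arctic-xvectors", split="validation")
+ speaker_embeddings = torch.tensor(xvectors[7306]["xvector"]).unsqueeze(0)
+ 
+ speech = model.generate_speech(inputs["input_ids"], speaker_embeddings, vocoder=vocoder)
+ sf.write("speech.wav", speech.numpy(), samplerate=16000)
+ ```
+ 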
+ ## Training and evaluation data
+ 
+ More information needed
+ 
+ ## Training procedure
+ 
+ ### Training hyperparameters
+ 
+ The following hyperparameters were used during training (see the sketch after this list):
+ - learning_rate: 1e-05
+ - train_batch_size: 16
+ - eval_batch_size: 8
+ - seed: 42
+ - gradient_accumulation_steps: 2
+ - total_train_batch_size: 32
+ - optimizer: AdamW (torch implementation) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
+ - lr_scheduler_type: linear
+ - lr_scheduler_warmup_steps: 100
+ - training_steps: 2000
+ - mixed_precision_training: Native AMP
+ 
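+ As a rough reconstruction, the list above corresponds to `Seq2SeqTrainingArguments` along these lines; the output directory, eval/logging cadence, and `fp16` flag are assumptions inferred from the card, not the exact script.
+ 
+ ```python
+ from transformers import Seq2SeqTrainingArguments
+ 
+ # Hypothetical reconstruction of the configuration implied by the
+ # hyperparameter list above; the actual training script is not shown here.
+ training_args = Seq2SeqTrainingArguments(
+     output_dir="speecht5_ft_french",  # assumed name
+     learning_rate=1e-5,
+     per_device_train_batch_size=16,
+     per_device_eval_batch_size=8,
+     gradient_accumulation_steps=2,    # effective train batch size: 16 * 2 = 32
+     warmup_steps=100,
+     max_steps=2000,
+     lr_scheduler_type="linear",
+     seed=42,
+     fp16=True,                        # "Native AMP" mixed precision
+     eval_strategy="steps",
+     eval_steps=100,                   # matches the 100-step cadence in the results table
+     logging_steps=100,
+ )
+ ```
+ 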
+ ### Training results
+ 
+ | Training Loss | Epoch | Step | Validation Loss |
+ |:-------------:|:-------:|:----:|:---------------:|
+ | 0.7809 | 0.5587 | 100 | 0.7240 |
+ | 0.7682 | 1.1173 | 200 | 0.6956 |
+ | 0.7216 | 1.6760 | 300 | 0.6608 |
+ | 0.7083 | 2.2346 | 400 | 0.6578 |
+ | 0.6839 | 2.7933 | 500 | 0.6375 |
+ | 0.6805 | 3.3520 | 600 | 0.6369 |
+ | 0.6587 | 3.9106 | 700 | 0.6269 |
+ | 0.6786 | 4.4693 | 800 | 0.6252 |
+ | 0.6561 | 5.0279 | 900 | 0.6192 |
+ | 0.6553 | 5.5866 | 1000 | 0.6159 |
+ | 0.6477 | 6.1453 | 1100 | 0.6108 |
+ | 0.6537 | 6.7039 | 1200 | 0.6121 |
+ | 0.6635 | 7.2626 | 1300 | 0.6106 |
+ | 0.6409 | 7.8212 | 1400 | 0.6059 |
+ | 0.6503 | 8.3799 | 1500 | 0.6066 |
+ | 0.6391 | 8.9385 | 1600 | 0.6033 |
+ | 0.6388 | 9.4972 | 1700 | 0.6039 |
+ | 0.6407 | 10.0559 | 1800 | 0.6010 |
+ | 0.6388 | 10.6145 | 1900 | 0.6016 |
+ | 0.6415 | 11.1732 | 2000 | 0.6022 |
+ 
+ ### Framework versions
+ 
+ - Transformers 4.48.3
+ - Pytorch 2.5.1+cu124
+ - Datasets 3.3.2
+ - Tokenizers 0.21.0
generation_config.json ADDED
@@ -0,0 +1,9 @@
+ {
+   "_from_model_config": true,
+   "bos_token_id": 0,
+   "decoder_start_token_id": 2,
+   "eos_token_id": 2,
+   "max_length": 1876,
+   "pad_token_id": 1,
+   "transformers_version": "4.48.3"
+ }
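For reference, this file can be loaded and inspected with the standard `GenerationConfig` API (a minimal sketch, assuming the same repo id as in the README examples above):

```python
from transformers import GenerationConfig

# Assumed repo id, as in the README sketches.
gen_config = GenerationConfig.from_pretrained("KevinCRB/speecht5_ft_french")
print(gen_config.max_length)              # 1876
print(gen_config.decoder_start_token_id)  # 2
```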
model.safetensors CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:2c56c96558d962531ff9aaad49bab32aa3087daf2fd2543375f435343bce2cdf
+ oid sha256:d3bbfeee5766a7da5a20380c5a050f55b0a02f75f144691f5a19d23862179973
  size 577789320
runs/Mar03_04-22-26_04fa9bc98ea5/events.out.tfevents.1740975753.04fa9bc98ea5.184.2 CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:ac00b77125ce8d5ff17c0cfe7f88d1be21f76569003c1062c44288d64f97788d
- size 28922
+ oid sha256:3f52a216a1204f9dd698c68ec893bdbca662ad839a93d524864b7f3ef2bbbef1
+ size 29276