devkyle committed Commit 12799af · verified · 1 Parent(s): 168f03e

End of training

Files changed (1):
  1. README.md +10 -12
README.md CHANGED
@@ -18,8 +18,8 @@ should probably proofread and complete it, then remove this comment. -->
 
  This model is a fine-tuned version of [openai/whisper-tiny](https://huggingface.co/openai/whisper-tiny) on the None dataset.
  It achieves the following results on the evaluation set:
- - Loss: 1.1016
- - Wer: 46.9010
+ - Loss: 0.1823
+ - Wer: 8.7755
 
  ## Model description
 
@@ -45,26 +45,24 @@ The following hyperparameters were used during training:
  - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  - lr_scheduler_type: linear
  - lr_scheduler_warmup_steps: 500
- - training_steps: 2000
+ - training_steps: 3000
  - mixed_precision_training: Native AMP
 
  ### Training results
 
  | Training Loss | Epoch | Step | Validation Loss | Wer |
  |:-------------:|:-------:|:----:|:---------------:|:-------:|
- | 0.3644 | 4.1667 | 250 | 0.7955 | 59.0359 |
- | 0.1196 | 8.3333 | 500 | 0.8738 | 54.2745 |
- | 0.0458 | 12.5 | 750 | 0.9793 | 51.6742 |
- | 0.0239 | 16.6667 | 1000 | 1.0239 | 51.2942 |
- | 0.0118 | 20.8333 | 1250 | 1.0658 | 48.8601 |
- | 0.0021 | 25.0 | 1500 | 1.0849 | 47.7321 |
- | 0.0004 | 29.1667 | 1750 | 1.0976 | 47.6253 |
- | 0.0004 | 33.3333 | 2000 | 1.1016 | 46.9010 |
+ | 0.1101 | 8.3333 | 500 | 0.9455 | 59.9493 |
+ | 0.0295 | 16.6667 | 1000 | 1.0721 | 50.0664 |
+ | 0.0117 | 25.0 | 1500 | 1.1477 | 50.5491 |
+ | 0.0008 | 33.3333 | 2000 | 1.1674 | 47.4840 |
+ | 0.0016 | 41.6667 | 2500 | 0.1804 | 9.2610 |
+ | 0.0004 | 50.0 | 3000 | 0.1823 | 8.7755 |
 
 
  ### Framework versions
 
  - Transformers 4.44.2
  - Pytorch 2.4.0+cu121
- - Datasets 2.21.0
+ - Datasets 3.0.0
  - Tokenizers 0.19.1
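
The Epoch column in the results table follows directly from the step count: step 500 is logged as epoch 8.3333, which implies roughly 60 optimizer steps per epoch. A minimal sketch of that relationship (the `steps_per_epoch` value is inferred from the table, not stated in the card; hyperparameters not visible in this diff, such as learning rate and batch size, are deliberately omitted):

```python
# Hyperparameters as reported in the updated card.
hyperparameters = {
    "optimizer": "Adam",
    "adam_betas": (0.9, 0.999),
    "adam_epsilon": 1e-08,
    "lr_scheduler_type": "linear",
    "lr_scheduler_warmup_steps": 500,
    "training_steps": 3000,
    "mixed_precision_training": "Native AMP",
}

def epochs_at_step(step: int, steps_per_epoch: int = 60) -> float:
    """Epoch value logged at a given optimizer step.

    steps_per_epoch = 60 is inferred from the table, not stated in
    the card: step 500 is logged as epoch 8.3333, and 500 / 8.3333 ≈ 60.
    """
    return round(step / steps_per_epoch, 4)

# Reproduces the Epoch column of the updated training-results table:
for step in (500, 1000, 1500, 2000, 2500, 3000):
    print(step, epochs_at_step(step))
```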