wrice committed 976a807 (1 parent: 4bbddfc)

update model card README.md

---
tags:
- generated_from_trainer
model-index:
- name: wavlm-large-timit-punctuation
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->

# wavlm-large-timit-punctuation

This model is a fine-tuned version of [microsoft/wavlm-large](https://huggingface.co/microsoft/wavlm-large) on the TIMIT dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3360
- Wer: 0.2580
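
A usage example is not part of the auto-generated card. As a minimal sketch, assuming the checkpoint is published as `wrice/wavlm-large-timit-punctuation` and exposes a CTC head compatible with the speech-recognition pipeline (neither is confirmed by this card), inference could look like:

```python
# Hedged sketch: the repo id below and the CTC/ASR-pipeline compatibility are assumptions.
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="wrice/wavlm-large-timit-punctuation",  # assumed repo id
)

# The pipeline accepts a path to an audio file; WavLM expects 16 kHz mono audio.
print(asr("sample.wav")["text"])
```
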
## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training (a hedged `TrainingArguments` sketch follows this list):
- learning_rate: 0.0001
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 30
- mixed_precision_training: Native AMP
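
The hyperparameters above map onto the `transformers` `TrainingArguments` API; a minimal sketch of that mapping follows, assuming a hypothetical output directory and `fp16=True` for "Native AMP" (the original training script is not included in this card):

```python
# Illustrative reconstruction of the listed hyperparameters as TrainingArguments;
# output_dir is a hypothetical placeholder, not taken from the original run.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="wavlm-large-timit-punctuation",  # hypothetical
    learning_rate=1e-4,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    seed=42,
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="linear",
    warmup_steps=1000,
    num_train_epochs=30,
    fp16=True,  # mixed_precision_training: Native AMP
)
```
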
### Training results

| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:-----:|:---------------:|:------:|
| 5.2206 | 1.0 | 500 | 3.1111 | 1.0 |
| 2.4555 | 2.01 | 1000 | 1.0331 | 0.7992 |
| 0.9277 | 3.01 | 1500 | 0.5219 | 0.4888 |
| 0.5215 | 4.02 | 2000 | 0.3833 | 0.3981 |
| 0.3557 | 5.02 | 2500 | 0.3330 | 0.3570 |
| 0.2715 | 6.02 | 3000 | 0.3084 | 0.3255 |
| 0.2139 | 7.03 | 3500 | 0.2969 | 0.3129 |
| 0.1858 | 8.03 | 4000 | 0.2884 | 0.3029 |
| 0.1563 | 9.04 | 4500 | 0.2860 | 0.2960 |
| 0.149 | 10.04 | 5000 | 0.2972 | 0.2918 |
| 0.1343 | 11.04 | 5500 | 0.3161 | 0.2927 |
| 0.11 | 12.05 | 6000 | 0.3061 | 0.2788 |
| 0.0982 | 13.05 | 6500 | 0.2983 | 0.2802 |
| 0.0967 | 14.06 | 7000 | 0.3280 | 0.2768 |
| 0.0873 | 15.06 | 7500 | 0.3185 | 0.2721 |
| 0.0809 | 16.06 | 8000 | 0.3121 | 0.2694 |
| 0.0787 | 17.07 | 8500 | 0.3177 | 0.2643 |
| 0.0709 | 18.07 | 9000 | 0.3189 | 0.2657 |
| 0.0712 | 19.08 | 9500 | 0.3213 | 0.2628 |
| 0.0621 | 20.08 | 10000 | 0.3206 | 0.2600 |
| 0.0601 | 21.08 | 10500 | 0.3191 | 0.2600 |
| 0.0605 | 22.09 | 11000 | 0.3241 | 0.2591 |
| 0.058 | 23.09 | 11500 | 0.3230 | 0.2584 |
| 0.0503 | 24.1 | 12000 | 0.3346 | 0.2602 |
| 0.0498 | 25.1 | 12500 | 0.3359 | 0.2593 |
| 0.0506 | 26.1 | 13000 | 0.3339 | 0.2592 |
| 0.0468 | 27.11 | 13500 | 0.3357 | 0.2563 |
| 0.0422 | 28.11 | 14000 | 0.3368 | 0.2568 |
| 0.0512 | 29.12 | 14500 | 0.3360 | 0.2580 |
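
The Wer column is a word error rate. As a rough, self-contained example (not the project's evaluation code), a score of this kind can be computed with the `wer` metric shipped with the `datasets` version pinned below; the transcripts here are made up:

```python
# Hedged example of computing a word error rate like the Wer column above.
# Requires the jiwer package, which backs the datasets "wer" metric.
from datasets import load_metric

wer_metric = load_metric("wer")

predictions = ["the cat sat on the mat."]  # hypothetical model output
references = ["the cat sat on a mat."]     # hypothetical reference transcript

print(wer_metric.compute(predictions=predictions, references=references))
```
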
### Framework versions

- Transformers 4.19.2
- PyTorch 1.8.2+cu111
- Datasets 1.17.0
- Tokenizers 0.11.6