mahmoudmamdouh13 committed commit 643f542 (verified) · 1 parent: 784b878

End of training

---
library_name: transformers
license: bsd-3-clause
base_model: MIT/ast-finetuned-speech-commands-v2
tags:
- generated_from_trainer
datasets:
- audiofolder
metrics:
- precision
- recall
- f1
model-index:
- name: ast-finetuned-speech-commands-v2-finetuned-keyword-spotting-finetuned-keyword-spotting
  results:
  - task:
      name: Audio Classification
      type: audio-classification
    dataset:
      name: audiofolder
      type: audiofolder
      config: default
      split: validation
      args: default
    metrics:
    - name: Precision
      type: precision
      value: 0.9861935383961439
    - name: Recall
      type: recall
      value: 0.9861649413727126
    - name: F1
      type: f1
      value: 0.9861100898918743
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->

# ast-finetuned-speech-commands-v2-finetuned-keyword-spotting-finetuned-keyword-spotting

This model is a fine-tuned version of [MIT/ast-finetuned-speech-commands-v2](https://huggingface.co/MIT/ast-finetuned-speech-commands-v2) on the audiofolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0685
- Precision: 0.9862
- Recall: 0.9862
- F1: 0.9861
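
Note that the F1 above is slightly below the harmonic mean of the precision and recall: with macro averaging (common for multi-class keyword spotting), F1 is the mean of per-class F1 scores, not the harmonic mean of the aggregate precision and recall. A minimal stdlib sketch with toy per-class counts (illustrative numbers, not taken from this model):

```python
# Macro-averaged precision/recall/F1 from per-class TP/FP/FN counts.
# Toy counts for three hypothetical keyword classes -- not real model output.
counts = {
    "yes":  {"tp": 95, "fp": 3, "fn": 5},
    "no":   {"tp": 90, "fp": 8, "fn": 10},
    "stop": {"tp": 98, "fp": 4, "fn": 2},
}

def per_class(c):
    """Precision, recall, and F1 for a single class."""
    p = c["tp"] / (c["tp"] + c["fp"])
    r = c["tp"] / (c["tp"] + c["fn"])
    f1 = 2 * p * r / (p + r)
    return p, r, f1

scores = [per_class(c) for c in counts.values()]
macro_p = sum(s[0] for s in scores) / len(scores)
macro_r = sum(s[1] for s in scores) / len(scores)
macro_f1 = sum(s[2] for s in scores) / len(scores)  # mean of per-class F1s
```

Because macro F1 averages per-class F1s, it generally differs (slightly) from the harmonic mean of macro precision and macro recall, as seen in the metrics above.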
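For inference, a fine-tuned AST checkpoint like this one can be loaded through the `transformers` audio-classification pipeline. The repo id below is an assumption pieced together from the committer's username and the model name in this card; substitute the actual Hub id before use:

```python
from transformers import pipeline

# Assumed Hub repo id (username + model name from this card) -- verify before use.
REPO_ID = "mahmoudmamdouh13/ast-finetuned-speech-commands-v2-finetuned-keyword-spotting-finetuned-keyword-spotting"

def classify_keyword(audio_path: str):
    """Return label/score predictions for a single audio file."""
    clf = pipeline("audio-classification", model=REPO_ID)
    return clf(audio_path)
```

Calling `classify_keyword("clip.wav")` returns a list of `{"label": ..., "score": ...}` dicts sorted by score.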
## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: AdamW (adamw_torch) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 3
- mixed_precision_training: Native AMP
75
+
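With a linear scheduler and warmup ratio 0.1, the learning rate rises linearly to 5e-05 over the first 10% of optimization steps (489 of the 4,890 total steps shown in the training results), then decays linearly to zero. A minimal sketch of that shape (this mirrors the schedule's form, not the Trainer's exact implementation):

```python
PEAK_LR = 5e-5
TOTAL_STEPS = 4890                      # 3 epochs x 1630 steps/epoch (from the results table)
WARMUP_STEPS = int(0.1 * TOTAL_STEPS)   # lr_scheduler_warmup_ratio: 0.1 -> 489 steps

def linear_warmup_decay_lr(step: int) -> float:
    """Learning rate at a given optimization step (0-indexed)."""
    if step < WARMUP_STEPS:
        # Linear warmup from 0 to the peak learning rate.
        return PEAK_LR * step / WARMUP_STEPS
    # Linear decay from the peak down to 0 at the final step.
    return PEAK_LR * max(0.0, (TOTAL_STEPS - step) / (TOTAL_STEPS - WARMUP_STEPS))
```

The peak is reached exactly at step 489, and the rate hits zero at step 4,890.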
### Training results

| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1     |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|
| 0.0682        | 1.0   | 1630 | 0.0976          | 0.9752    | 0.9751 | 0.9749 |
| 0.0179        | 2.0   | 3260 | 0.0743          | 0.9847    | 0.9846 | 0.9846 |
| 0.0008        | 3.0   | 4890 | 0.0685          | 0.9862    | 0.9862 | 0.9861 |
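
The step counts line up with the epoch boundaries: each epoch covers 1,630 optimization steps, so three epochs end at step 4,890 (which, at a train batch size of 64 and assuming no gradient accumulation, implies roughly 104k training examples per epoch). A quick sanity check of the table's numbers:

```python
# (train_loss, epoch, step, val_loss) rows copied from the table above.
rows = [
    (0.0682, 1.0, 1630, 0.0976),
    (0.0179, 2.0, 3260, 0.0743),
    (0.0008, 3.0, 4890, 0.0685),
]

STEPS_PER_EPOCH = 1630
# Step counters land exactly on whole-epoch boundaries...
epoch_steps_ok = all(step == int(epoch) * STEPS_PER_EPOCH for _, epoch, step, _ in rows)
# ...and validation loss improves monotonically across epochs.
val_loss_improves = all(a[3] > b[3] for a, b in zip(rows, rows[1:]))
```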
### Framework versions

- Transformers 4.51.3
- Pytorch 2.7.0+cu128
- Datasets 3.6.0
- Tokenizers 0.21.1