PergaZuZ committed on
Commit 9bbf705 · verified · 1 Parent(s): 8ebebf2

Model save

Files changed (1):
  1. README.md +14 -14

README.md CHANGED
@@ -18,8 +18,8 @@ should probably proofread and complete it, then remove this comment. -->
 
 This model is a fine-tuned version of [MCG-NJU/videomae-base](https://huggingface.co/MCG-NJU/videomae-base) on an unknown dataset.
 It achieves the following results on the evaluation set:
-- Loss: 0.2149
-- Accuracy: 0.9419
+- Loss: 0.2252
+- Accuracy: 0.9290
 
 ## Model description
 
@@ -39,31 +39,31 @@ More information needed
 
 The following hyperparameters were used during training:
 - learning_rate: 5e-05
-- train_batch_size: 8
-- eval_batch_size: 8
+- train_batch_size: 16
+- eval_batch_size: 16
 - seed: 42
 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
 - lr_scheduler_type: linear
 - lr_scheduler_warmup_ratio: 0.1
-- training_steps: 300
+- training_steps: 148
 
 ### Training results
 
 | Training Loss | Epoch | Step | Validation Loss | Accuracy |
 |:-------------:|:------:|:----:|:---------------:|:--------:|
-| 2.2201        | 0.1267 | 38   | 1.9095          | 0.5571   |
-| 0.8786        | 1.1267 | 76   | 0.8344          | 0.7143   |
-| 0.3715        | 2.1267 | 114  | 0.4478          | 0.8714   |
-| 0.2574        | 3.1267 | 152  | 0.3398          | 0.9      |
-| 0.1851        | 4.1267 | 190  | 0.3879          | 0.8571   |
-| 0.0945        | 5.1267 | 228  | 0.1755          | 0.9429   |
-| 0.0228        | 6.1267 | 266  | 0.0816          | 0.9714   |
-| 0.0228        | 7.1133 | 300  | 0.0699          | 0.9857   |
+| 2.0466        | 0.1284 | 19   | 1.6349          | 0.6143   |
+| 1.348         | 1.1284 | 38   | 0.8041          | 0.8429   |
+| 0.6208        | 2.1284 | 57   | 0.7583          | 0.7286   |
+| 0.332         | 3.1284 | 76   | 0.4557          | 0.8286   |
+| 0.2229        | 4.1284 | 95   | 0.3133          | 0.8857   |
+| 0.1479        | 5.1284 | 114  | 0.2872          | 0.9      |
+| 0.0761        | 6.1284 | 133  | 0.2888          | 0.9      |
+| 0.0696        | 7.1014 | 148  | 0.2664          | 0.9143   |
 
 
 ### Framework versions
 
 - Transformers 4.45.2
-- Pytorch 2.0.1+cu118
+- Pytorch 2.1.1+cu118
 - Datasets 3.0.1
 - Tokenizers 0.20.1
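The updated hyperparameters combine a linear scheduler with `lr_scheduler_warmup_ratio: 0.1` over `training_steps: 148`. A minimal sketch of what that schedule implies, assuming the usual linear-warmup-then-linear-decay shape (illustrative only; the actual run uses the Transformers scheduler, and `lr_at_step` is a hypothetical helper, not part of any library):

```python
def lr_at_step(step, base_lr=5e-5, total_steps=148, warmup_ratio=0.1):
    """Learning rate at a given optimizer step for a linear schedule
    with warmup, using the hyperparameters from this model card."""
    warmup_steps = int(total_steps * warmup_ratio)  # 14 steps here
    if step < warmup_steps:
        # linear warmup from 0 up to base_lr
        return base_lr * step / max(1, warmup_steps)
    # linear decay from base_lr (at end of warmup) down to 0 at total_steps
    return base_lr * max(0.0, (total_steps - step) / max(1, total_steps - warmup_steps))

print(lr_at_step(0))    # start of warmup: 0.0
print(lr_at_step(14))   # peak after warmup: 5e-05
print(lr_at_step(148))  # fully decayed: 0.0
```

With only 148 steps at batch size 16, the ~14-step warmup covers most of the first epoch (the table evaluates every 19 steps), which is consistent with the large loss drop between the first two rows.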