---
library_name: transformers
license: apache-2.0
base_model: Melo1512/vit-msn-small-lateral_flow_ivalidation_train_test_6
tags:
  - generated_from_trainer
datasets:
  - imagefolder
metrics:
  - accuracy
model-index:
  - name: vit-msn-small-lateral_flow_ivalidation_train_test_7
    results:
      - task:
          name: Image Classification
          type: image-classification
        dataset:
          name: imagefolder
          type: imagefolder
          config: default
          split: test
          args: default
        metrics:
          - name: Accuracy
            type: accuracy
            value: 0.8754578754578755
---

# vit-msn-small-lateral_flow_ivalidation_train_test_7

This model is a fine-tuned version of Melo1512/vit-msn-small-lateral_flow_ivalidation_train_test_6 on the imagefolder dataset. It achieves the following results on the evaluation set:

- Loss: 0.4368
- Accuracy: 0.8755
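A minimal inference sketch using the `transformers` pipeline API; the image path is a placeholder to replace with a real lateral-flow image:

```python
from transformers import pipeline

# The checkpoint this model card describes.
MODEL_ID = "Melo1512/vit-msn-small-lateral_flow_ivalidation_train_test_7"

def classify(image_path: str):
    """Classify a single image with the fine-tuned checkpoint.

    Downloads the model from the Hugging Face Hub on first call and
    returns a list of {"label": ..., "score": ...} predictions.
    """
    classifier = pipeline("image-classification", model=MODEL_ID)
    return classifier(image_path)

# Usage (placeholder path):
# classify("lateral_flow_strip.jpg")
```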

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:

- learning_rate: 5e-07
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 512
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: reduce_lr_on_plateau
- lr_scheduler_warmup_ratio: 0.5
- num_epochs: 100
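The `total_train_batch_size` above is not set directly: it is the per-device train batch size multiplied by the gradient accumulation steps, as a quick check confirms:

```python
# Effective (total) train batch size =
#   per-device batch size x gradient accumulation steps
train_batch_size = 64
gradient_accumulation_steps = 8
total_train_batch_size = train_batch_size * gradient_accumulation_steps
print(total_train_batch_size)  # 64 * 8 = 512
```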

### Training results

| Training Loss | Epoch   | Step | Validation Loss | Accuracy |
|:-------------:|:-------:|:----:|:---------------:|:--------:|
| No log        | 0.6154  | 1    | 0.4368          | 0.8755   |
| No log        | 1.8462  | 3    | 0.4440          | 0.8681   |
| No log        | 2.4615  | 4    | 0.4470          | 0.8645   |
| No log        | 3.6923  | 6    | 0.4443          | 0.8645   |
| No log        | 4.9231  | 8    | 0.4393          | 0.8645   |
| No log        | 5.5385  | 9    | 0.4372          | 0.8681   |
| 0.3118        | 6.7692  | 11   | 0.4340          | 0.8645   |
| 0.3118        | 8.0     | 13   | 0.4319          | 0.8608   |
| 0.3118        | 8.6154  | 14   | 0.4313          | 0.8608   |
| 0.3118        | 9.8462  | 16   | 0.4312          | 0.8681   |
| 0.3118        | 10.4615 | 17   | 0.4314          | 0.8718   |
| 0.3118        | 11.6923 | 19   | 0.4306          | 0.8718   |
| 0.3019        | 12.9231 | 21   | 0.4294          | 0.8718   |
| 0.3019        | 13.5385 | 22   | 0.4290          | 0.8718   |
| 0.3019        | 14.7692 | 24   | 0.4262          | 0.8718   |
| 0.3019        | 16.0    | 26   | 0.4223          | 0.8718   |
| 0.3019        | 16.6154 | 27   | 0.4204          | 0.8718   |
| 0.3019        | 17.8462 | 29   | 0.4170          | 0.8718   |
| 0.2922        | 18.4615 | 30   | 0.4160          | 0.8718   |
| 0.2922        | 19.6923 | 32   | 0.4161          | 0.8718   |
| 0.2922        | 20.9231 | 34   | 0.4161          | 0.8718   |
| 0.2922        | 21.5385 | 35   | 0.4162          | 0.8718   |
| 0.2922        | 22.7692 | 37   | 0.4164          | 0.8718   |
| 0.2922        | 24.0    | 39   | 0.4166          | 0.8718   |
| 0.2993        | 24.6154 | 40   | 0.4168          | 0.8718   |
| 0.2993        | 25.8462 | 42   | 0.4170          | 0.8718   |
| 0.2993        | 26.4615 | 43   | 0.4171          | 0.8718   |
| 0.2993        | 27.6923 | 45   | 0.4176          | 0.8718   |
| 0.2993        | 28.9231 | 47   | 0.4179          | 0.8718   |
| 0.2993        | 29.5385 | 48   | 0.4179          | 0.8718   |
| 0.298         | 30.7692 | 50   | 0.4179          | 0.8718   |
| 0.298         | 32.0    | 52   | 0.4179          | 0.8718   |
| 0.298         | 32.6154 | 53   | 0.4179          | 0.8718   |
| 0.298         | 33.8462 | 55   | 0.4179          | 0.8718   |
| 0.298         | 34.4615 | 56   | 0.4179          | 0.8718   |
| 0.298         | 35.6923 | 58   | 0.4179          | 0.8718   |
| 0.2936        | 36.9231 | 60   | 0.4178          | 0.8718   |
| 0.2936        | 37.5385 | 61   | 0.4178          | 0.8718   |
| 0.2936        | 38.7692 | 63   | 0.4178          | 0.8718   |
| 0.2936        | 40.0    | 65   | 0.4178          | 0.8718   |
| 0.2936        | 40.6154 | 66   | 0.4178          | 0.8718   |
| 0.2936        | 41.8462 | 68   | 0.4178          | 0.8718   |
| 0.2936        | 42.4615 | 69   | 0.4177          | 0.8718   |
| 0.2948        | 43.6923 | 71   | 0.4177          | 0.8718   |
| 0.2948        | 44.9231 | 73   | 0.4177          | 0.8718   |
| 0.2948        | 45.5385 | 74   | 0.4176          | 0.8718   |
| 0.2948        | 46.7692 | 76   | 0.4176          | 0.8718   |
| 0.2948        | 48.0    | 78   | 0.4176          | 0.8718   |
| 0.2948        | 48.6154 | 79   | 0.4176          | 0.8718   |
| 0.2965        | 49.8462 | 81   | 0.4176          | 0.8718   |
| 0.2965        | 50.4615 | 82   | 0.4175          | 0.8718   |
| 0.2965        | 51.6923 | 84   | 0.4175          | 0.8718   |
| 0.2965        | 52.9231 | 86   | 0.4175          | 0.8718   |
| 0.2965        | 53.5385 | 87   | 0.4175          | 0.8718   |
| 0.2965        | 54.7692 | 89   | 0.4174          | 0.8718   |
| 0.292         | 56.0    | 91   | 0.4174          | 0.8718   |
| 0.292         | 56.6154 | 92   | 0.4174          | 0.8718   |
| 0.292         | 57.8462 | 94   | 0.4174          | 0.8718   |
| 0.292         | 58.4615 | 95   | 0.4173          | 0.8718   |
| 0.292         | 59.6923 | 97   | 0.4173          | 0.8718   |
| 0.292         | 60.9231 | 99   | 0.4173          | 0.8718   |
| 0.2962        | 61.5385 | 100  | 0.4173          | 0.8718   |

### Framework versions

- Transformers 4.44.2
- Pytorch 2.4.1+cu121
- Datasets 3.2.0
- Tokenizers 0.19.1