---
license: apache-2.0
base_model: facebook/deit-small-patch16-224
tags:
  - generated_from_trainer
datasets:
  - imagefolder
metrics:
  - accuracy
model-index:
  - name: hushem_1x_deit_small_sgd_0001_fold3
    results:
      - task:
          name: Image Classification
          type: image-classification
        dataset:
          name: imagefolder
          type: imagefolder
          config: default
          split: test
          args: default
        metrics:
          - name: Accuracy
            type: accuracy
            value: 0.3023255813953488
---

hushem_1x_deit_small_sgd_0001_fold3

This model is a fine-tuned version of facebook/deit-small-patch16-224 on the imagefolder dataset. It achieves the following results on the evaluation set:

  • Loss: 1.5323
  • Accuracy: 0.3023
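
The sketch below shows one way to run inference with this checkpoint. It assumes the model is published as hkivancoral/hushem_1x_deit_small_sgd_0001_fold3 and that the class labels were saved with it; the image path is a placeholder.

```python
# Minimal inference sketch; the repository id and image path are assumptions.
import torch
from PIL import Image
from transformers import AutoImageProcessor, AutoModelForImageClassification

model_id = "hkivancoral/hushem_1x_deit_small_sgd_0001_fold3"  # assumed repo id
processor = AutoImageProcessor.from_pretrained(model_id)
model = AutoModelForImageClassification.from_pretrained(model_id)

image = Image.open("example.jpg").convert("RGB")  # placeholder test image
inputs = processor(images=image, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits

predicted = logits.argmax(-1).item()
print(model.config.id2label[predicted])
```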

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed
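
The card only records that an imagefolder-style dataset was used. A typical way to load such a dataset with 🤗 Datasets is sketched below; the directory path is a placeholder, and the actual folder layout is not documented here.

```python
# Sketch of loading an image-folder dataset; the path is a placeholder.
from datasets import load_dataset

dataset = load_dataset("imagefolder", data_dir="path/to/hushem_fold3")
print(dataset)                     # splits are inferred from the folder structure
print(dataset["train"].features)   # typically an "image" column plus a class "label"
```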

Training procedure

Training hyperparameters

The following hyperparameters were used during training (a sketch mapping them onto TrainingArguments follows the list):

  • learning_rate: 0.0001
  • train_batch_size: 32
  • eval_batch_size: 32
  • seed: 42
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: linear
  • lr_scheduler_warmup_ratio: 0.1
  • num_epochs: 50
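
The sketch below translates the settings above into TrainingArguments; the output directory and the evaluation strategy are assumptions, and the Adam settings reported above match the Trainer's default optimizer, so no explicit optimizer argument is passed.

```python
# Sketch of TrainingArguments matching the hyperparameters above
# (output_dir and evaluation_strategy are assumptions; the Adam settings
# reported in the card correspond to the Trainer defaults).
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="hushem_1x_deit_small_sgd_0001_fold3",  # assumed output path
    learning_rate=1e-4,
    per_device_train_batch_size=32,
    per_device_eval_batch_size=32,
    seed=42,
    lr_scheduler_type="linear",
    warmup_ratio=0.1,
    num_train_epochs=50,
    evaluation_strategy="epoch",  # assumption: per-epoch evaluation, as the results table suggests
)
```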

Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 6 | 1.5973 | 0.1860 |
| 1.5067 | 2.0 | 12 | 1.5935 | 0.1860 |
| 1.5067 | 3.0 | 18 | 1.5902 | 0.1860 |
| 1.4808 | 4.0 | 24 | 1.5868 | 0.2326 |
| 1.4805 | 5.0 | 30 | 1.5835 | 0.2326 |
| 1.4805 | 6.0 | 36 | 1.5802 | 0.2558 |
| 1.4884 | 7.0 | 42 | 1.5770 | 0.2558 |
| 1.4884 | 8.0 | 48 | 1.5745 | 0.2558 |
| 1.4701 | 9.0 | 54 | 1.5717 | 0.2558 |
| 1.4909 | 10.0 | 60 | 1.5691 | 0.2558 |
| 1.4909 | 11.0 | 66 | 1.5667 | 0.2558 |
| 1.4719 | 12.0 | 72 | 1.5643 | 0.2558 |
| 1.4719 | 13.0 | 78 | 1.5620 | 0.2558 |
| 1.4695 | 14.0 | 84 | 1.5598 | 0.3023 |
| 1.4633 | 15.0 | 90 | 1.5576 | 0.3023 |
| 1.4633 | 16.0 | 96 | 1.5555 | 0.3023 |
| 1.4805 | 17.0 | 102 | 1.5536 | 0.3023 |
| 1.4805 | 18.0 | 108 | 1.5518 | 0.3023 |
| 1.4265 | 19.0 | 114 | 1.5500 | 0.3023 |
| 1.4558 | 20.0 | 120 | 1.5483 | 0.3023 |
| 1.4558 | 21.0 | 126 | 1.5468 | 0.3023 |
| 1.4538 | 22.0 | 132 | 1.5454 | 0.3023 |
| 1.4538 | 23.0 | 138 | 1.5441 | 0.3023 |
| 1.4345 | 24.0 | 144 | 1.5427 | 0.3023 |
| 1.435 | 25.0 | 150 | 1.5416 | 0.3023 |
| 1.435 | 26.0 | 156 | 1.5405 | 0.3023 |
| 1.4381 | 27.0 | 162 | 1.5394 | 0.3023 |
| 1.4381 | 28.0 | 168 | 1.5384 | 0.3023 |
| 1.4397 | 29.0 | 174 | 1.5376 | 0.3023 |
| 1.4251 | 30.0 | 180 | 1.5368 | 0.3023 |
| 1.4251 | 31.0 | 186 | 1.5361 | 0.3023 |
| 1.4272 | 32.0 | 192 | 1.5354 | 0.3023 |
| 1.4272 | 33.0 | 198 | 1.5348 | 0.3023 |
| 1.4277 | 34.0 | 204 | 1.5343 | 0.3023 |
| 1.4249 | 35.0 | 210 | 1.5338 | 0.3023 |
| 1.4249 | 36.0 | 216 | 1.5334 | 0.3023 |
| 1.4476 | 37.0 | 222 | 1.5330 | 0.3023 |
| 1.4476 | 38.0 | 228 | 1.5328 | 0.3023 |
| 1.4487 | 39.0 | 234 | 1.5326 | 0.3023 |
| 1.4294 | 40.0 | 240 | 1.5324 | 0.3023 |
| 1.4294 | 41.0 | 246 | 1.5324 | 0.3023 |
| 1.4087 | 42.0 | 252 | 1.5323 | 0.3023 |
| 1.4087 | 43.0 | 258 | 1.5323 | 0.3023 |
| 1.4561 | 44.0 | 264 | 1.5323 | 0.3023 |
| 1.4317 | 45.0 | 270 | 1.5323 | 0.3023 |
| 1.4317 | 46.0 | 276 | 1.5323 | 0.3023 |
| 1.4154 | 47.0 | 282 | 1.5323 | 0.3023 |
| 1.4154 | 48.0 | 288 | 1.5323 | 0.3023 |
| 1.4386 | 49.0 | 294 | 1.5323 | 0.3023 |
| 1.4625 | 50.0 | 300 | 1.5323 | 0.3023 |

Framework versions

  • Transformers 4.35.0
  • Pytorch 2.1.0+cu118
  • Datasets 2.14.6
  • Tokenizers 0.14.1
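
The snippet below is a simple way to check a local environment against the versions reported above; it assumes all four packages are importable.

```python
# Compare installed package versions against the versions reported in this card.
import datasets
import tokenizers
import torch
import transformers

reported = {
    "Transformers": (transformers, "4.35.0"),
    "Pytorch": (torch, "2.1.0+cu118"),
    "Datasets": (datasets, "2.14.6"),
    "Tokenizers": (tokenizers, "0.14.1"),
}

for name, (module, version) in reported.items():
    print(f"{name}: installed {module.__version__}, card reports {version}")
```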