---
library_name: transformers
license: cc-by-nc-sa-4.0
base_model: microsoft/layoutlmv3-base
tags:
  - generated_from_trainer
metrics:
  - accuracy
  - f1
model-index:
  - name: layoutlmv3_document_classification
    results: []
---

# layoutlmv3_document_classification

This model is a fine-tuned version of [microsoft/layoutlmv3-base](https://huggingface.co/microsoft/layoutlmv3-base) on an unspecified dataset. It achieves the following results on the evaluation set:

  • Loss: 0.8183
  • Accuracy: 0.8340
  • F1: 0.8221
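The card reports accuracy and a single F1 number. As a point of reference, here is a minimal, self-contained sketch of how such metrics are computed, assuming a weighted-average F1 (the averaging mode used by this run is not stated in the card) and using toy labels, not the actual evaluation data:

```python
from collections import Counter

def accuracy(y_true, y_pred):
    """Fraction of predictions that match the true label."""
    return sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)

def weighted_f1(y_true, y_pred):
    """Per-class F1, averaged with weights proportional to class support."""
    support = Counter(y_true)
    total = len(y_true)
    f1_sum = 0.0
    for cls, n in support.items():
        tp = sum(1 for t, p in zip(y_true, y_pred) if t == cls and p == cls)
        fp = sum(1 for t, p in zip(y_true, y_pred) if t != cls and p == cls)
        fn = sum(1 for t, p in zip(y_true, y_pred) if t == cls and p != cls)
        prec = tp / (tp + fp) if tp + fp else 0.0
        rec = tp / (tp + fn) if tp + fn else 0.0
        f1 = 2 * prec * rec / (prec + rec) if prec + rec else 0.0
        f1_sum += f1 * n / total
    return f1_sum

# Toy document classes for illustration only.
y_true = ["invoice", "invoice", "letter", "memo"]
y_pred = ["invoice", "letter", "letter", "memo"]
print(accuracy(y_true, y_pred))     # 0.75
print(weighted_f1(y_true, y_pred))  # 0.75
```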

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:

  • learning_rate: 1e-05
  • train_batch_size: 16
  • eval_batch_size: 16
  • seed: 42
  • optimizer: AdamW (torch implementation) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
  • lr_scheduler_type: linear
  • lr_scheduler_warmup_steps: 500
  • num_epochs: 7
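The hyperparameters above map onto a `transformers.TrainingArguments` configuration roughly as follows. This is a sketch, not the exact script used for this run; in particular, `output_dir` is a placeholder:

```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="layoutlmv3_document_classification",  # placeholder path
    learning_rate=1e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    seed=42,
    optim="adamw_torch",          # AdamW with betas=(0.9, 0.999), eps=1e-08
    lr_scheduler_type="linear",
    warmup_steps=500,
    num_train_epochs=7,
)
```

This object would then be passed to a `Trainer` along with the model and datasets; those details are not recorded in the card.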

### Training results

| Training Loss | Epoch  | Step | Validation Loss | Accuracy | F1     |
|:-------------:|:------:|:----:|:---------------:|:--------:|:------:|
| 0.6545        | 0.4274 | 100  | 0.9149          | 0.8233   | 0.8041 |
| 0.6313        | 0.8547 | 200  | 0.9024          | 0.8276   | 0.8085 |
| 0.5714        | 1.2821 | 300  | 0.9112          | 0.8223   | 0.8067 |
| 0.5408        | 1.7094 | 400  | 0.8816          | 0.8298   | 0.8093 |
| 0.482         | 2.1368 | 500  | 0.9015          | 0.8244   | 0.8081 |
| 0.4546        | 2.5641 | 600  | 0.8779          | 0.8287   | 0.8180 |
| 0.4718        | 2.9915 | 700  | 0.8879          | 0.8212   | 0.8056 |
| 0.4302        | 3.4188 | 800  | 0.8562          | 0.8276   | 0.8155 |
| 0.5039        | 3.8462 | 900  | 0.8382          | 0.8330   | 0.8226 |
| 0.4644        | 4.2735 | 1000 | 0.8455          | 0.8308   | 0.8200 |
| 0.4411        | 4.7009 | 1100 | 0.8461          | 0.8308   | 0.8228 |
| 0.4007        | 5.1282 | 1200 | 0.8304          | 0.8308   | 0.8200 |
| 0.4023        | 5.5556 | 1300 | 0.8370          | 0.8330   | 0.8242 |
| 0.3756        | 5.9829 | 1400 | 0.8193          | 0.8405   | 0.8286 |
| 0.3592        | 6.4103 | 1500 | 0.8185          | 0.8394   | 0.8282 |
| 0.3429        | 6.8376 | 1600 | 0.8183          | 0.8340   | 0.8221 |

### Framework versions

  • Transformers 4.47.1
  • Pytorch 2.5.1+cu121
  • Datasets 3.2.0
  • Tokenizers 0.21.0