# layoutlmv3-funsd
This model is a fine-tuned version of microsoft/layoutlmv3-base on the FUNSD dataset. It achieves the following results on the evaluation set:
- Loss: 1.5869
- Answer: precision 0.0612, recall 0.1360, F1 0.0844 (809 entities)
- Header: precision 0.0158, recall 0.0252, F1 0.0194 (119 entities)
- Question: precision 0.1919, recall 0.3906, F1 0.2573 (1065 entities)
- Overall Precision: 0.1273
- Overall Recall: 0.2654
- Overall F1: 0.1721
- Overall Accuracy: 0.4198
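The overall precision, recall, and F1 are the micro-average of the per-entity scores: true positives and predicted spans are pooled across the three entity types before dividing. A minimal sketch in plain Python, using the full-precision per-entity values from the final evaluation epoch (the `tp` and `pred` counts are recovered from precision and recall, not taken from the original log):

```python
# Per-entity evaluation scores (precision, recall, support) from the final epoch.
per_entity = {
    "answer":   {"precision": 0.06117908787541713,  "recall": 0.13597033374536466,  "number": 809},
    "header":   {"precision": 0.015789473684210527, "recall": 0.025210084033613446, "number": 119},
    "question": {"precision": 0.1918819188191882,   "recall": 0.39061032863849765,  "number": 1065},
}

# recall * support = true positives; true positives / precision = predicted spans.
tp = sum(m["recall"] * m["number"] for m in per_entity.values())
pred = sum(m["recall"] * m["number"] / m["precision"] for m in per_entity.values())
support = sum(m["number"] for m in per_entity.values())

precision = tp / pred                               # ≈ 0.1273 (Overall Precision)
recall = tp / support                               # ≈ 0.2654 (Overall Recall)
f1 = 2 * precision * recall / (precision + recall)  # ≈ 0.1721 (Overall F1)
```

Overall Accuracy is token-level and cannot be recovered from the span-level scores alone.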
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: AdamW (torch implementation) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 15
- mixed_precision_training: Native AMP
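The hyperparameters above map onto `transformers.TrainingArguments` roughly as follows. This is a hypothetical reconstruction, not the original training script: `output_dir` is an assumption, and evaluation/logging options from the actual run are unknown.

```python
from transformers import TrainingArguments

# Hypothetical reconstruction of the hyperparameters listed above.
# output_dir is an assumption; fp16=True corresponds to the "Native AMP"
# line and requires a CUDA-capable GPU.
training_args = TrainingArguments(
    output_dir="layoutlmv3-funsd",
    learning_rate=3e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=8,
    seed=42,
    optim="adamw_torch",          # OptimizerNames.ADAMW_TORCH
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="linear",
    num_train_epochs=15,
    fp16=True,                    # Native AMP mixed-precision training
)
```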
### Training results
| Training Loss | Epoch | Step | Validation Loss | Answer | Header | Question | Overall Precision | Overall Recall | Overall F1 | Overall Accuracy |
|---|---|---|---|---|---|---|---|---|---|---|
1.9534 | 1.0 | 10 | 1.7563 | {'precision': 0.021798365122615803, 'recall': 0.009888751545117428, 'f1': 0.013605442176870746, 'number': 809} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 119} | {'precision': 0.029567053854276663, 'recall': 0.05258215962441314, 'f1': 0.037850625211220006, 'number': 1065} | 0.0283 | 0.0321 | 0.0301 | 0.2212 |
1.7529 | 2.0 | 20 | 1.6621 | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 809} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 119} | {'precision': 0.28431372549019607, 'recall': 0.027230046948356807, 'f1': 0.049700085689802914, 'number': 1065} | 0.0769 | 0.0146 | 0.0245 | 0.3060 |
1.6557 | 3.0 | 30 | 1.6846 | {'precision': 0.025611175785797437, 'recall': 0.054388133498145856, 'f1': 0.034823901859912944, 'number': 809} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 119} | {'precision': 0.12563044475011462, 'recall': 0.25727699530516435, 'f1': 0.16882316697473815, 'number': 1065} | 0.0816 | 0.1596 | 0.1079 | 0.3209 |
1.5482 | 4.0 | 40 | 1.6706 | {'precision': 0.03781297904956566, 'recall': 0.09147095179233622, 'f1': 0.05350686912509039, 'number': 809} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 119} | {'precision': 0.1303972366148532, 'recall': 0.28356807511737087, 'f1': 0.17864537119195506, 'number': 1065} | 0.0880 | 0.1887 | 0.1200 | 0.3287 |
1.4535 | 5.0 | 50 | 1.6188 | {'precision': 0.035333707234997194, 'recall': 0.07787391841779975, 'f1': 0.04861111111111111, 'number': 809} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 119} | {'precision': 0.16204690831556504, 'recall': 0.28544600938967135, 'f1': 0.20673240394423667, 'number': 1065} | 0.0988 | 0.1841 | 0.1286 | 0.3580 |
1.3517 | 6.0 | 60 | 1.5478 | {'precision': 0.04584221748400853, 'recall': 0.10630407911001236, 'f1': 0.06405959031657356, 'number': 809} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 119} | {'precision': 0.22127329192546583, 'recall': 0.2676056338028169, 'f1': 0.24224394390140241, 'number': 1065} | 0.1147 | 0.1862 | 0.1420 | 0.4143 |
1.2494 | 7.0 | 70 | 1.5328 | {'precision': 0.049443757725587144, 'recall': 0.09888751545117429, 'f1': 0.06592501030078285, 'number': 809} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 119} | {'precision': 0.16467780429594273, 'recall': 0.323943661971831, 'f1': 0.21835443037974686, 'number': 1065} | 0.1114 | 0.2132 | 0.1463 | 0.4101 |
1.1759 | 8.0 | 80 | 1.5335 | {'precision': 0.051237766263673, 'recall': 0.1100123609394314, 'f1': 0.06991358994501179, 'number': 809} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 119} | {'precision': 0.1746031746031746, 'recall': 0.3408450704225352, 'f1': 0.2309160305343511, 'number': 1065} | 0.1157 | 0.2268 | 0.1532 | 0.4102 |
1.1089 | 9.0 | 90 | 1.5206 | {'precision': 0.055843408175014396, 'recall': 0.11990111248454882, 'f1': 0.07619795758051845, 'number': 809} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 119} | {'precision': 0.18374558303886926, 'recall': 0.34178403755868547, 'f1': 0.23900196979645438, 'number': 1065} | 0.1181 | 0.2313 | 0.1563 | 0.4231 |
1.0817 | 10.0 | 100 | 1.5927 | {'precision': 0.05695830886670581, 'recall': 0.11990111248454882, 'f1': 0.07722929936305732, 'number': 809} | {'precision': 0.006993006993006993, 'recall': 0.008403361344537815, 'f1': 0.007633587786259542, 'number': 119} | {'precision': 0.19786396852164137, 'recall': 0.3305164319248826, 'f1': 0.24753867791842474, 'number': 1065} | 0.1241 | 0.2258 | 0.1602 | 0.4152 |
1.025 | 11.0 | 110 | 1.5822 | {'precision': 0.058394160583941604, 'recall': 0.12855377008652658, 'f1': 0.08030888030888031, 'number': 809} | {'precision': 0.005952380952380952, 'recall': 0.008403361344537815, 'f1': 0.006968641114982578, 'number': 119} | {'precision': 0.20356943669827104, 'recall': 0.3427230046948357, 'f1': 0.2554233729881036, 'number': 1065} | 0.1256 | 0.2358 | 0.1639 | 0.4192 |
1.0025 | 12.0 | 120 | 1.5577 | {'precision': 0.056910569105691054, 'recall': 0.1211372064276885, 'f1': 0.07743974713551956, 'number': 809} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 119} | {'precision': 0.19216589861751152, 'recall': 0.39154929577464787, 'f1': 0.2578052550231839, 'number': 1065} | 0.1269 | 0.2584 | 0.1702 | 0.4225 |
0.9791 | 13.0 | 130 | 1.5920 | {'precision': 0.0602655771195097, 'recall': 0.14585908529048208, 'f1': 0.08529092880375859, 'number': 809} | {'precision': 0.015306122448979591, 'recall': 0.025210084033613446, 'f1': 0.01904761904761905, 'number': 119} | {'precision': 0.19343945972021226, 'recall': 0.37652582159624415, 'f1': 0.2555768005098789, 'number': 1065} | 0.1235 | 0.2619 | 0.1678 | 0.4155 |
0.9566 | 14.0 | 140 | 1.5777 | {'precision': 0.06111111111111111, 'recall': 0.13597033374536466, 'f1': 0.0843234955921809, 'number': 809} | {'precision': 0.016483516483516484, 'recall': 0.025210084033613446, 'f1': 0.019933554817275746, 'number': 119} | {'precision': 0.19855072463768117, 'recall': 0.38591549295774646, 'f1': 0.26220095693779905, 'number': 1065} | 0.1293 | 0.2629 | 0.1734 | 0.4223 |
0.9369 | 15.0 | 150 | 1.5869 | {'precision': 0.06117908787541713, 'recall': 0.13597033374536466, 'f1': 0.08438818565400844, 'number': 809} | {'precision': 0.015789473684210527, 'recall': 0.025210084033613446, 'f1': 0.01941747572815534, 'number': 119} | {'precision': 0.1918819188191882, 'recall': 0.39061032863849765, 'f1': 0.257346118156511, 'number': 1065} | 0.1273 | 0.2654 | 0.1721 | 0.4198 |
### Framework versions
- Transformers 4.48.2
- Pytorch 2.5.1+cu124
- Datasets 3.2.0
- Tokenizers 0.21.0