layoutlm-FUNSDxSynthetic-1fold
This model is a fine-tuned version of microsoft/layoutlm-base-uncased on an unspecified dataset (per the model name, presumably FUNSD mixed with synthetic data). It achieves the following results on the evaluation set:
- Loss: 0.7358
- Header: precision 0.4667, recall 0.3373, F1 0.3916 (support: 83)
- Answer: precision 0.5066, recall 0.5659, F1 0.5346 (support: 205)
- Question: precision 0.4337, recall 0.4675, F1 0.4500 (support: 231)
- Overall Precision: 0.4684
- Overall Recall: 0.4855
- Overall F1: 0.4768
- Overall Accuracy: 0.7947
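The per-entity numbers follow the seqeval convention: entity-level precision, recall, and F1, with the support count per label. Below is a minimal sketch of how such scores are typically computed for FUNSD-style BIO labels, assuming a seqeval-based evaluation (the exact evaluation script is not included in this card, and the label sequences shown are hypothetical):

```python
# Sketch only: illustrates seqeval entity-level scoring for FUNSD-style labels.
from seqeval.metrics import classification_report

y_true = [["B-QUESTION", "I-QUESTION", "O", "B-ANSWER", "B-HEADER"]]
y_pred = [["B-QUESTION", "I-QUESTION", "O", "B-ANSWER", "B-ANSWER"]]

# Prints per-entity precision/recall/F1/support plus micro-averaged overall rows.
print(classification_report(y_true, y_pred, digits=4))
```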
Model description
More information needed
Intended uses & limitations
More information needed
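In the absence of documented usage, the model can be exercised like any LayoutLM token classifier. The following is a minimal inference sketch, assuming OCR words with bounding boxes normalized to a 0-1000 grid; the words and boxes below are hypothetical placeholders:

```python
import torch
from transformers import LayoutLMTokenizer, LayoutLMForTokenClassification

tokenizer = LayoutLMTokenizer.from_pretrained("microsoft/layoutlm-base-uncased")
model = LayoutLMForTokenClassification.from_pretrained("pabloma09/layoutlm-FUNSDxSynthetic-1fold")

# Hypothetical OCR output: words plus boxes normalized to a 0-1000 grid.
words = ["Date:", "06/01/2024"]
word_boxes = [[68, 80, 140, 98], [150, 80, 260, 98]]

# Tokenize word by word so each subword token inherits its word's box.
tokens, token_boxes = [], []
for word, box in zip(words, word_boxes):
    word_tokens = tokenizer.tokenize(word)
    tokens.extend(word_tokens)
    token_boxes.extend([box] * len(word_tokens))

# Add special tokens with their conventional boxes.
input_ids = tokenizer.convert_tokens_to_ids([tokenizer.cls_token] + tokens + [tokenizer.sep_token])
token_boxes = [[0, 0, 0, 0]] + token_boxes + [[1000, 1000, 1000, 1000]]

with torch.no_grad():
    outputs = model(
        input_ids=torch.tensor([input_ids]),
        bbox=torch.tensor([token_boxes]),
        attention_mask=torch.ones(1, len(input_ids), dtype=torch.long),
    )

# Map each token's highest-scoring class id back to its label name.
predictions = outputs.logits.argmax(-1).squeeze(0).tolist()
print([model.config.id2label[p] for p in predictions])
```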
Training and evaluation data
More information needed
Training procedure
Training hyperparameters
The following hyperparameters were used during training (a code sketch reproducing them follows the list):
- learning_rate: 3e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: adamw_torch with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 15
- mixed_precision_training: Native AMP
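A minimal sketch of a TrainingArguments object matching these values (an assumed reconstruction; the original training script is not part of this card):

```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="layoutlm-FUNSDxSynthetic-1fold",
    learning_rate=3e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=8,
    seed=42,
    optim="adamw_torch",
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="linear",
    num_train_epochs=15,
    fp16=True,  # Native AMP mixed-precision training
)
```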
Training results
| Training Loss | Epoch | Step | Validation Loss | Header P / R / F1 (n=83) | Answer P / R / F1 (n=205) | Question P / R / F1 (n=231) | Overall Precision | Overall Recall | Overall F1 | Overall Accuracy |
|---|---|---|---|---|---|---|---|---|---|---|
| 1.2347 | 1.0 | 12 | 1.0175 | 0.0000 / 0.0000 / 0.0000 | 0.1256 / 0.3610 / 0.1864 | 0.1138 / 0.2900 / 0.1634 | 0.1197 | 0.2717 | 0.1662 | 0.6488 |
| 0.9147 | 2.0 | 24 | 0.7456 | 0.0714 / 0.0120 / 0.0206 | 0.2912 / 0.4829 / 0.3633 | 0.2451 / 0.3766 / 0.2969 | 0.2638 | 0.3603 | 0.3046 | 0.7487 |
| 0.675 | 3.0 | 36 | 0.6233 | 0.1915 / 0.1084 / 0.1385 | 0.3655 / 0.5171 / 0.4283 | 0.3209 / 0.4113 / 0.3605 | 0.3318 | 0.4046 | 0.3646 | 0.7793 |
| 0.4928 | 4.0 | 48 | 0.6064 | 0.3585 / 0.2289 / 0.2794 | 0.4259 / 0.5610 / 0.4842 | 0.3417 / 0.4113 / 0.3733 | 0.3810 | 0.4412 | 0.4089 | 0.7733 |
| 0.4142 | 5.0 | 60 | 0.5817 | 0.4490 / 0.2651 / 0.3333 | 0.4574 / 0.5756 / 0.5097 | 0.3755 / 0.4372 / 0.4040 | 0.4184 | 0.4644 | 0.4402 | 0.7878 |
| 0.3198 | 6.0 | 72 | 0.5888 | 0.3929 / 0.2651 / 0.3165 | 0.4674 / 0.5951 / 0.5236 | 0.3813 / 0.4589 / 0.4165 | 0.4202 | 0.4817 | 0.4488 | 0.7998 |
| 0.2752 | 7.0 | 84 | 0.6608 | 0.3968 / 0.3012 / 0.3425 | 0.4764 / 0.5415 / 0.5068 | 0.4046 / 0.4589 / 0.4300 | 0.4337 | 0.4663 | 0.4494 | 0.7728 |
| 0.2275 | 8.0 | 96 | 0.6552 | 0.3924 / 0.3735 / 0.3827 | 0.4748 / 0.5512 / 0.5102 | 0.3759 / 0.4329 / 0.4024 | 0.4185 | 0.4701 | 0.4428 | 0.7848 |
| 0.1947 | 9.0 | 108 | 0.6603 | 0.4167 / 0.3012 / 0.3497 | 0.4770 / 0.5561 / 0.5135 | 0.3918 / 0.4545 / 0.4208 | 0.4303 | 0.4701 | 0.4494 | 0.7867 |
| 0.169 | 10.0 | 120 | 0.6796 | 0.4265 / 0.3494 / 0.3841 | 0.4915 / 0.5659 / 0.5261 | 0.3902 / 0.4459 / 0.4162 | 0.4366 | 0.4778 | 0.4563 | 0.7891 |
| 0.1462 | 11.0 | 132 | 0.6880 | 0.4138 / 0.2892 / 0.3404 | 0.5109 / 0.5707 / 0.5392 | 0.4183 / 0.4545 / 0.4357 | 0.4572 | 0.4740 | 0.4655 | 0.7949 |
| 0.1295 | 12.0 | 144 | 0.7007 | 0.4328 / 0.3494 / 0.3867 | 0.4937 / 0.5707 / 0.5294 | 0.4202 / 0.4675 / 0.4426 | 0.4528 | 0.4894 | 0.4704 | 0.7951 |
| 0.1208 | 13.0 | 156 | 0.7216 | 0.4727 / 0.3133 / 0.3768 | 0.4958 / 0.5756 / 0.5327 | 0.4225 / 0.4719 / 0.4458 | 0.4592 | 0.4875 | 0.4729 | 0.7974 |
| 0.1146 | 14.0 | 168 | 0.7440 | 0.4590 / 0.3373 / 0.3889 | 0.5067 / 0.5561 / 0.5302 | 0.4291 / 0.4589 / 0.4435 | 0.4653 | 0.4778 | 0.4715 | 0.7910 |
| 0.1083 | 15.0 | 180 | 0.7358 | 0.4667 / 0.3373 / 0.3916 | 0.5066 / 0.5659 / 0.5346 | 0.4337 / 0.4675 / 0.4500 | 0.4684 | 0.4855 | 0.4768 | 0.7947 |
Framework versions
- Transformers 4.49.0
- Pytorch 2.6.0+cu124
- Datasets 3.3.2
- Tokenizers 0.21.0