# layoutlm-funsd

This model is a fine-tuned version of [microsoft/layoutlm-base-uncased](https://huggingface.co/microsoft/layoutlm-base-uncased) on the FUNSD dataset. It achieves the following results on the evaluation set:
- Loss: 0.7132

| Entity   | Precision | Recall | F1     | Support |
|----------|-----------|--------|--------|---------|
| Answer   | 0.7124    | 0.8022 | 0.7547 | 809     |
| Header   | 0.3492    | 0.3697 | 0.3592 | 119     |
| Question | 0.7775    | 0.8235 | 0.7998 | 1065    |

- Overall Precision: 0.7252
- Overall Recall: 0.7878
- Overall F1: 0.7552
- Overall Accuracy: 0.8021
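The overall precision, recall, and F1 are micro-averages over the three entity types, so they can be reproduced directly from the per-entity numbers. A small sketch using the evaluation-set figures above:

```python
# Reproduce the overall (micro-averaged) metrics from the per-entity results.
# Each entry is (precision, recall, support) from the evaluation set above.
entities = {
    "Answer":   (0.712403951701427, 0.8022249690976514, 809),
    "Header":   (0.3492063492063492, 0.3697478991596639, 119),
    "Question": (0.7774822695035462, 0.8234741784037559, 1065),
}

tp = pred = gold = 0.0
for precision, recall, support in entities.values():
    matched = recall * support   # true positives for this entity type
    tp += matched
    pred += matched / precision  # total spans predicted as this type
    gold += support              # gold spans of this type

overall_p = tp / pred                                             # 0.7252
overall_r = tp / gold                                             # 0.7878
overall_f1 = 2 * overall_p * overall_r / (overall_p + overall_r)  # 0.7552
```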
## Model description

More information needed
## Intended uses & limitations

More information needed
## Training and evaluation data

More information needed
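One preprocessing detail worth noting: FUNSD annotations give word-level bounding boxes in page-pixel coordinates, while LayoutLM expects each box scaled to a 0–1000 coordinate grid. A minimal sketch of that normalization step (the function name is illustrative, not from this repository):

```python
def normalize_box(box, page_width, page_height):
    """Scale a pixel-space box (x0, y0, x1, y1) to LayoutLM's 0-1000 grid."""
    x0, y0, x1, y1 = box
    return (
        int(1000 * x0 / page_width),
        int(1000 * y0 / page_height),
        int(1000 * x1 / page_width),
        int(1000 * y1 / page_height),
    )

# e.g. a word box on a 762x1000 px FUNSD page
print(normalize_box((381, 250, 762, 500), 762, 1000))  # (500, 250, 1000, 500)
```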
## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: AdamW (adamw_torch) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 15
- mixed_precision_training: Native AMP
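With a linear scheduler and 150 total optimization steps (15 epochs × 10 steps per epoch, per the results table), the learning rate decays from 3e-05 to 0 over training. A sketch of the resulting schedule, assuming zero warmup steps since none are listed:

```python
def linear_lr(step, base_lr=3e-05, total_steps=150):
    """Linearly decay the learning rate from base_lr to 0 over total_steps."""
    return base_lr * max(0.0, (total_steps - step) / total_steps)

print(linear_lr(0))    # 3e-05 at the start of training
print(linear_lr(75))   # 1.5e-05 halfway through
print(linear_lr(150))  # 0.0 at the final step
```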
### Training results
Per-entity cells show precision / recall / F1, rounded to four decimals; support is fixed at 809 (Answer), 119 (Header), and 1065 (Question) throughout.

| Training Loss | Epoch | Step | Validation Loss | Answer (P / R / F1) | Header (P / R / F1) | Question (P / R / F1) | Overall Precision | Overall Recall | Overall F1 | Overall Accuracy |
|---|---|---|---|---|---|---|---|---|---|---|
| 1.8005 | 1.0 | 10 | 1.5968 | 0.0168 / 0.0087 / 0.0114 | 0.0000 / 0.0000 / 0.0000 | 0.2391 / 0.0723 / 0.1110 | 0.1138 | 0.0421 | 0.0615 | 0.3226 |
| 1.457 | 2.0 | 20 | 1.2316 | 0.1596 / 0.1718 / 0.1655 | 0.0000 / 0.0000 / 0.0000 | 0.4479 / 0.6094 / 0.5163 | 0.3397 | 0.3954 | 0.3654 | 0.6040 |
| 1.0982 | 3.0 | 30 | 0.9236 | 0.5125 / 0.6069 / 0.5557 | 0.0625 / 0.0084 / 0.0148 | 0.5953 / 0.7362 / 0.6583 | 0.5570 | 0.6402 | 0.5957 | 0.7149 |
| 0.8415 | 4.0 | 40 | 0.7922 | 0.6117 / 0.7108 / 0.6575 | 0.0600 / 0.0252 / 0.0355 | 0.6823 / 0.7380 / 0.7091 | 0.6368 | 0.6844 | 0.6597 | 0.7536 |
| 0.6696 | 5.0 | 50 | 0.7174 | 0.6396 / 0.7392 / 0.6858 | 0.1395 / 0.1008 / 0.1171 | 0.7000 / 0.8019 / 0.7475 | 0.6533 | 0.7346 | 0.6915 | 0.7764 |
| 0.5668 | 6.0 | 60 | 0.6995 | 0.6404 / 0.7948 / 0.7093 | 0.2468 / 0.1597 / 0.1939 | 0.7282 / 0.7925 / 0.7590 | 0.6723 | 0.7556 | 0.7116 | 0.7790 |
| 0.4909 | 7.0 | 70 | 0.6820 | 0.6699 / 0.7676 / 0.7154 | 0.2437 / 0.2437 / 0.2437 | 0.7498 / 0.7934 / 0.7710 | 0.6880 | 0.7501 | 0.7177 | 0.7903 |
| 0.4379 | 8.0 | 80 | 0.6724 | 0.6830 / 0.7911 / 0.7331 | 0.2541 / 0.2605 / 0.2573 | 0.7407 / 0.8047 / 0.7714 | 0.6895 | 0.7667 | 0.7261 | 0.7970 |
| 0.3826 | 9.0 | 90 | 0.6814 | 0.7011 / 0.7973 / 0.7461 | 0.2500 / 0.2605 / 0.2551 | 0.7485 / 0.8131 / 0.7795 | 0.7006 | 0.7737 | 0.7353 | 0.8011 |
| 0.3715 | 10.0 | 100 | 0.6815 | 0.6944 / 0.8035 / 0.7450 | 0.3008 / 0.3109 / 0.3058 | 0.7790 / 0.8141 / 0.7961 | 0.7155 | 0.7797 | 0.7462 | 0.8088 |
| 0.3173 | 11.0 | 110 | 0.6886 | 0.6997 / 0.7948 / 0.7442 | 0.3231 / 0.3529 / 0.3373 | 0.7560 / 0.8319 / 0.7921 | 0.7073 | 0.7883 | 0.7456 | 0.8038 |
| 0.3 | 12.0 | 120 | 0.7026 | 0.7112 / 0.8035 / 0.7545 | 0.3361 / 0.3361 / 0.3361 | 0.7783 / 0.8207 / 0.7989 | 0.7254 | 0.7847 | 0.7539 | 0.8036 |
| 0.2864 | 13.0 | 130 | 0.7049 | 0.7133 / 0.7998 / 0.7541 | 0.3203 / 0.3445 / 0.3320 | 0.7737 / 0.8216 / 0.7969 | 0.7216 | 0.7842 | 0.7516 | 0.8027 |
| 0.2625 | 14.0 | 140 | 0.7129 | 0.7140 / 0.8022 / 0.7555 | 0.3583 / 0.3613 / 0.3598 | 0.7777 / 0.8244 / 0.8004 | 0.7275 | 0.7878 | 0.7564 | 0.8021 |
| 0.2656 | 15.0 | 150 | 0.7132 | 0.7124 / 0.8022 / 0.7547 | 0.3492 / 0.3697 / 0.3592 | 0.7775 / 0.8235 / 0.7998 | 0.7252 | 0.7878 | 0.7552 | 0.8021 |
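One thing the table makes visible: validation loss bottoms out at epoch 8 (0.6724), while overall F1 keeps improving until epoch 14 (0.7564), so which checkpoint counts as "best" depends on the selection metric. A small sketch over the (epoch, validation loss, overall F1) columns above:

```python
# (epoch, validation_loss, overall_f1) taken from the training results above
history = [
    (1, 1.5968, 0.0615), (2, 1.2316, 0.3654), (3, 0.9236, 0.5957),
    (4, 0.7922, 0.6597), (5, 0.7174, 0.6915), (6, 0.6995, 0.7116),
    (7, 0.6820, 0.7177), (8, 0.6724, 0.7261), (9, 0.6814, 0.7353),
    (10, 0.6815, 0.7462), (11, 0.6886, 0.7456), (12, 0.7026, 0.7539),
    (13, 0.7049, 0.7516), (14, 0.7129, 0.7564), (15, 0.7132, 0.7552),
]

best_by_loss = min(history, key=lambda row: row[1])  # lowest validation loss
best_by_f1 = max(history, key=lambda row: row[2])    # highest overall F1

print(best_by_loss[0])  # 8
print(best_by_f1[0])    # 14
```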
### Framework versions
- Transformers 4.47.1
- Pytorch 2.5.1+cu121
- Datasets 3.2.0
- Tokenizers 0.21.0