# layoutlm-funsd
This model was trained from scratch on the FUNSD (Form Understanding in Noisy Scanned Documents) dataset. It achieves the following results on the evaluation set:
- Loss: 0.7189
- Answer: precision 0.7106, recall 0.7862, F1 0.7465 (support: 809)
- Header: precision 0.3197, recall 0.3277, F1 0.3237 (support: 119)
- Question: precision 0.7787, recall 0.8291, F1 0.8031 (support: 1065)
- Overall Precision: 0.7243
- Overall Recall: 0.7817
- Overall F1: 0.7519
- Overall Accuracy: 0.8021
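Each per-type F1 above is simply the harmonic mean of that type's precision and recall. A minimal sketch, using the reported Answer-class numbers:

```python
def f1_score(precision: float, recall: float) -> float:
    """Harmonic mean of precision and recall (entity-level F1)."""
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)

# Answer-class precision/recall as reported on the evaluation set.
answer_precision = 0.7106145251396648
answer_recall = 0.7861557478368356
print(round(f1_score(answer_precision, answer_recall), 4))  # → 0.7465
```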
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure

### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 15
- mixed_precision_training: Native AMP
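With `lr_scheduler_type: linear` and no warmup steps listed, the learning rate presumably decays linearly from 3e-05 to 0 over the 150 total optimizer steps (15 epochs × 10 steps per epoch, per the results table). A sketch of that schedule, assuming zero warmup since none is reported:

```python
def linear_lr(step: int, base_lr: float = 3e-05, total_steps: int = 150,
              warmup_steps: int = 0) -> float:
    """Linear warmup-then-decay schedule, in the style of transformers'
    get_linear_schedule_with_warmup (sketch; zero warmup is an assumption)."""
    if step < warmup_steps:
        return base_lr * step / max(1, warmup_steps)
    frac = max(0, total_steps - step) / max(1, total_steps - warmup_steps)
    return base_lr * frac

print(linear_lr(0))    # 3e-05 at the start of training
print(linear_lr(75))   # 1.5e-05 halfway through
print(linear_lr(150))  # 0.0 at the final step
```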
### Training results
Per-type columns report precision / recall / F1; the support (n) for each type is fixed across epochs.

| Training Loss | Epoch | Step | Validation Loss | Answer (P / R / F1, n=809) | Header (P / R / F1, n=119) | Question (P / R / F1, n=1065) | Overall Precision | Overall Recall | Overall F1 | Overall Accuracy |
|---|---|---|---|---|---|---|---|---|---|---|
| 1.8213 | 1.0 | 10 | 1.5802 | 0.0238 / 0.0284 / 0.0259 | 0.0000 / 0.0000 / 0.0000 | 0.2035 / 0.2178 / 0.2104 | 0.1211 | 0.1279 | 0.1245 | 0.3954 |
| 1.3926 | 2.0 | 20 | 1.2004 | 0.1595 / 0.1323 / 0.1446 | 0.0000 / 0.0000 / 0.0000 | 0.5316 / 0.6000 / 0.5637 | 0.3974 | 0.3743 | 0.3855 | 0.5855 |
| 1.0495 | 3.0 | 30 | 0.9320 | 0.4662 / 0.4512 / 0.4585 | 0.0270 / 0.0084 / 0.0128 | 0.6340 / 0.6930 / 0.6622 | 0.5565 | 0.5539 | 0.5552 | 0.7115 |
| 0.8025 | 4.0 | 40 | 0.7743 | 0.6133 / 0.7392 / 0.6704 | 0.1224 / 0.0504 / 0.0714 | 0.6703 / 0.7408 / 0.7038 | 0.6329 | 0.6989 | 0.6643 | 0.7663 |
| 0.6413 | 5.0 | 50 | 0.7123 | 0.6552 / 0.7565 / 0.7022 | 0.2468 / 0.1597 / 0.1939 | 0.6921 / 0.8103 / 0.7465 | 0.6616 | 0.7496 | 0.7029 | 0.7852 |
| 0.5528 | 6.0 | 60 | 0.6853 | 0.6562 / 0.7738 / 0.7102 | 0.2162 / 0.1345 / 0.1658 | 0.7072 / 0.7869 / 0.7449 | 0.6688 | 0.7426 | 0.7038 | 0.7858 |
| 0.4716 | 7.0 | 70 | 0.6697 | 0.6731 / 0.7738 / 0.7200 | 0.2525 / 0.2101 / 0.2294 | 0.7364 / 0.8131 / 0.7729 | 0.6880 | 0.7612 | 0.7227 | 0.7954 |
| 0.4138 | 8.0 | 80 | 0.6751 | 0.7040 / 0.7849 / 0.7423 | 0.2276 / 0.2353 / 0.2314 | 0.7502 / 0.8263 / 0.7864 | 0.7020 | 0.7742 | 0.7363 | 0.7985 |
| 0.3721 | 9.0 | 90 | 0.6652 | 0.7102 / 0.8059 / 0.7551 | 0.2773 / 0.2773 / 0.2773 | 0.7715 / 0.8244 / 0.7971 | 0.7186 | 0.7842 | 0.7500 | 0.8042 |
| 0.3571 | 10.0 | 100 | 0.6931 | 0.7143 / 0.7911 / 0.7507 | 0.2857 / 0.2521 / 0.2679 | 0.7804 / 0.8244 / 0.8018 | 0.7281 | 0.7767 | 0.7516 | 0.8057 |
| 0.3057 | 11.0 | 110 | 0.6920 | 0.7172 / 0.8121 / 0.7617 | 0.3226 / 0.3361 / 0.3292 | 0.7837 / 0.8235 / 0.8031 | 0.7290 | 0.7898 | 0.7582 | 0.8040 |
| 0.2932 | 12.0 | 120 | 0.7032 | 0.7149 / 0.7936 / 0.7522 | 0.3333 / 0.3025 / 0.3172 | 0.7945 / 0.8207 / 0.8074 | 0.7369 | 0.7787 | 0.7573 | 0.8071 |
| 0.2740 | 13.0 | 130 | 0.7165 | 0.7197 / 0.7936 / 0.7549 | 0.3071 / 0.3277 / 0.3171 | 0.7790 / 0.8310 / 0.8042 | 0.7267 | 0.7858 | 0.7551 | 0.8032 |
| 0.2608 | 14.0 | 140 | 0.7181 | 0.7204 / 0.7960 / 0.7563 | 0.3145 / 0.3277 / 0.3210 | 0.7802 / 0.8235 / 0.8013 | 0.7283 | 0.7827 | 0.7545 | 0.8008 |
| 0.2542 | 15.0 | 150 | 0.7189 | 0.7106 / 0.7862 / 0.7465 | 0.3197 / 0.3277 / 0.3237 | 0.7787 / 0.8291 / 0.8031 | 0.7243 | 0.7817 | 0.7519 | 0.8021 |
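The overall numbers are micro-averages across the three entity types: total true positives over total predicted spans (precision) and over total gold spans (recall). A sketch that makes this concrete — the true-positive and predicted-span counts below are derived by inverting the reported (rounded) final-epoch precision/recall against each type's support, not taken from the training logs:

```python
# Final-epoch (precision, recall, support) per entity type, rounded as reported.
per_type = {
    "Answer":   (0.7106, 0.7862, 809),
    "Header":   (0.3197, 0.3277, 119),
    "Question": (0.7787, 0.8291, 1065),
}

tp = pred = gold = 0
for precision, recall, support in per_type.values():
    true_pos = round(recall * support)   # recall = TP / support
    tp += true_pos
    pred += round(true_pos / precision)  # precision = TP / predicted spans
    gold += support

overall_p = tp / pred
overall_r = tp / gold
overall_f1 = 2 * tp / (pred + gold)
print(round(overall_p, 4), round(overall_r, 4), round(overall_f1, 4))
# → 0.7243 0.7817 0.7519, matching the reported overall metrics
```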
### Framework versions
- Transformers 4.38.1
- Pytorch 2.2.1+cu121
- Datasets 3.1.0
- Tokenizers 0.15.2