# layoutlm-with-funsd
This model is a fine-tuned version of pabloma09/layoutlm-with-funsd on the FUNSD dataset. It achieves the following results on the evaluation set:
- Loss: 0.6344
- Header: precision 0.4889, recall 0.3860, F1 0.4314 (support 57)
- Answer: precision 0.5779, recall 0.6312, F1 0.6034 (support 141)
- Question: precision 0.5172, recall 0.5590, F1 0.5373 (support 161)
- Overall Precision: 0.5389
- Overall Recall: 0.5599
- Overall F1: 0.5492
- Overall Accuracy: 0.8364
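No usage example is provided in the card; the following is a minimal inference sketch, assuming the checkpoint follows the standard LayoutLM token-classification setup. The example words and 0-1000-normalized bounding boxes are placeholders that would normally come from an OCR step.

```python
import torch
from transformers import AutoTokenizer, LayoutLMForTokenClassification

model_name = "pabloma09/layoutlm-with-funsd"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = LayoutLMForTokenClassification.from_pretrained(model_name)

# Placeholder OCR output: words plus one 0-1000-normalized box per word.
words = ["DATE:", "09/30/92"]
boxes = [[57, 82, 137, 96], [148, 82, 246, 96]]

encoding = tokenizer(words, is_split_into_words=True, return_tensors="pt")
# Expand word-level boxes to token level; special tokens get a dummy box
# here (a simplification of the usual CLS/SEP box convention).
word_ids = encoding.word_ids(0)
token_boxes = [boxes[idx] if idx is not None else [0, 0, 0, 0] for idx in word_ids]
encoding["bbox"] = torch.tensor([token_boxes])

with torch.no_grad():
    logits = model(**encoding).logits
predictions = logits.argmax(-1).squeeze().tolist()
labels = [model.config.id2label[p] for p in predictions]
print(list(zip(tokenizer.convert_ids_to_tokens(encoding["input_ids"][0]), labels)))
```

With the usual FUNSD label scheme (O plus B-/I- tags for HEADER, QUESTION, and ANSWER), `model.config.id2label` should expose seven classes.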
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
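The data are not described here, but the FUNSD benchmark itself ships 149 annotated forms for training and 50 for testing. A minimal loading sketch, assuming the community `nielsr/funsd` dataset script on the Hub; the column names are that loader's, not necessarily the preprocessing used for this model:

```python
from datasets import load_dataset

# Assumes the community "nielsr/funsd" loading script on the Hugging Face Hub.
dataset = load_dataset("nielsr/funsd")
print(dataset)  # expected splits: train (149 forms) and test (50 forms)

example = dataset["train"][0]
print(example["words"][:5])     # OCR tokens
print(example["bboxes"][:5])    # 0-1000-normalized word boxes
print(example["ner_tags"][:5])  # BIO label ids for HEADER/QUESTION/ANSWER
```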
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (reproduced as a `TrainingArguments` sketch after this list):
- learning_rate: 3e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: adamw_torch with betas=(0.9, 0.999) and epsilon=1e-08 (no additional optimizer arguments)
- lr_scheduler_type: linear
- num_epochs: 15
- mixed_precision_training: Native AMP
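A minimal sketch of these settings as Transformers `TrainingArguments`; `output_dir` is an assumption, while the numeric values come from the list above:

```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="layoutlm-with-funsd",  # assumed, not stated in the card
    learning_rate=3e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=8,
    seed=42,
    optim="adamw_torch",
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="linear",
    num_train_epochs=15,
    fp16=True,  # "Native AMP" mixed-precision training
)
```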
### Training results
Training Loss | Epoch | Step | Validation Loss | Header P / R / F1 (n=57) | Answer P / R / F1 (n=141) | Question P / R / F1 (n=161) | Overall Precision | Overall Recall | Overall F1 | Overall Accuracy |
---|---|---|---|---|---|---|---|---|---|---|
0.3894 | 1.0 | 9 | 0.5238 | 0.3478 / 0.2807 / 0.3107 | 0.5155 / 0.5887 / 0.5497 | 0.4011 / 0.4534 / 0.4257 | 0.4422 | 0.4791 | 0.4599 | 0.8174 |
0.3489 | 2.0 | 18 | 0.5037 | 0.2979 / 0.2456 / 0.2692 | 0.5125 / 0.5816 / 0.5449 | 0.4000 / 0.4472 / 0.4223 | 0.4341 | 0.4680 | 0.4504 | 0.8270 |
0.2657 | 3.0 | 27 | 0.5258 | 0.3333 / 0.2807 / 0.3048 | 0.5123 / 0.5887 / 0.5479 | 0.3901 / 0.4410 / 0.4140 | 0.4337 | 0.4735 | 0.4527 | 0.8261 |
0.1907 | 4.0 | 36 | 0.5390 | 0.3846 / 0.2632 / 0.3125 | 0.5828 / 0.6241 / 0.6027 | 0.4788 / 0.4907 / 0.4847 | 0.5127 | 0.5070 | 0.5098 | 0.8286 |
0.175 | 5.0 | 45 | 0.5489 | 0.4211 / 0.2807 / 0.3368 | 0.5247 / 0.6028 / 0.5611 | 0.4494 / 0.4969 / 0.4720 | 0.4788 | 0.5042 | 0.4912 | 0.8361 |
0.1685 | 6.0 | 54 | 0.5678 | 0.4000 / 0.2807 / 0.3299 | 0.5769 / 0.6383 / 0.6061 | 0.4590 / 0.5217 / 0.4884 | 0.5013 | 0.5292 | 0.5149 | 0.8370 |
0.1156 | 7.0 | 63 | 0.5749 | 0.4865 / 0.3158 / 0.3830 | 0.5092 / 0.5887 / 0.5461 | 0.4358 / 0.4845 / 0.4588 | 0.4723 | 0.4986 | 0.4851 | 0.8409 |
0.1019 | 8.0 | 72 | 0.5907 | 0.4314 / 0.3860 / 0.4074 | 0.5409 / 0.6099 / 0.5733 | 0.5114 / 0.5590 / 0.5341 | 0.5130 | 0.5515 | 0.5315 | 0.8337 |
0.0885 | 9.0 | 81 | 0.5899 | 0.5000 / 0.4386 / 0.4673 | 0.5500 / 0.6241 / 0.5847 | 0.5085 / 0.5590 / 0.5325 | 0.5245 | 0.5655 | 0.5442 | 0.8400 |
0.0852 | 10.0 | 90 | 0.6170 | 0.4545 / 0.3509 / 0.3960 | 0.5649 / 0.6170 / 0.5898 | 0.5028 / 0.5590 / 0.5294 | 0.5225 | 0.5487 | 0.5353 | 0.8364 |
0.0854 | 11.0 | 99 | 0.6107 | 0.5111 / 0.4035 / 0.4510 | 0.5506 / 0.6170 / 0.5819 | 0.5114 / 0.5590 / 0.5341 | 0.5277 | 0.5571 | 0.5420 | 0.8358 |
0.0665 | 12.0 | 108 | 0.6090 | 0.5111 / 0.4035 / 0.4510 | 0.5366 / 0.6241 / 0.5770 | 0.4946 / 0.5714 / 0.5303 | 0.5139 | 0.5655 | 0.5385 | 0.8464 |
0.0632 | 13.0 | 117 | 0.6200 | 0.4468 / 0.3684 / 0.4038 | 0.5370 / 0.6170 / 0.5743 | 0.4945 / 0.5590 / 0.5248 | 0.5064 | 0.5515 | 0.5280 | 0.8412 |
0.0758 | 14.0 | 126 | 0.6326 | 0.5000 / 0.3860 / 0.4356 | 0.5705 / 0.6312 / 0.5993 | 0.5143 / 0.5590 / 0.5357 | 0.5360 | 0.5599 | 0.5477 | 0.8382 |
0.0573 | 15.0 | 135 | 0.6344 | 0.4889 / 0.3860 / 0.4314 | 0.5779 / 0.6312 / 0.6034 | 0.5172 / 0.5590 / 0.5373 | 0.5389 | 0.5599 | 0.5492 | 0.8364 |
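The per-entity cells follow seqeval-style entity-level scoring over BIO tags, where n is the entity's support in the evaluation set. A minimal sketch of how such metrics are computed; the tag sequences below are illustrative, not the actual evaluation data:

```python
from seqeval.metrics import classification_report

# Entity-level scoring: a prediction counts only if the full BIO span matches.
y_true = [["B-HEADER", "I-HEADER", "O", "B-QUESTION", "B-ANSWER"]]
y_pred = [["B-HEADER", "I-HEADER", "O", "B-QUESTION", "O"]]
print(classification_report(y_true, y_pred, digits=4))
```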
### Framework versions
- Transformers 4.49.0
- Pytorch 2.6.0+cu124
- Datasets 3.3.2
- Tokenizers 0.21.0