End of training
README.md CHANGED
@@ -15,13 +15,13 @@ should probably proofread and complete it, then remove this comment. -->
 
 This model is a fine-tuned version of [deepset/gbert-base](https://huggingface.co/deepset/gbert-base) on an unknown dataset.
 It achieves the following results on the evaluation set:
-- Loss: 0.
-- Hard: {'precision': 0.
-- Soft: {'precision': 0.
-- Overall Precision: 0.
-- Overall Recall: 0.
-- Overall F1: 0.
-- Overall Accuracy: 0.
+- Loss: 0.1391
+- Hard: {'precision': 0.6772616136919315, 'recall': 0.760989010989011, 'f1': 0.7166882276843466, 'number': 364}
+- Soft: {'precision': 0.6883116883116883, 'recall': 0.803030303030303, 'f1': 0.7412587412587411, 'number': 66}
+- Overall Precision: 0.6790
+- Overall Recall: 0.7674
+- Overall F1: 0.7205
+- Overall Accuracy: 0.9533
 
 ## Model description
 
@@ -50,13 +50,13 @@ The following hyperparameters were used during training:
 
 ### Training results
 
-| Training Loss | Epoch | Step | Validation Loss | Hard | Soft
-
-| No log | 1.0 |
-| No log | 2.0 |
-
-| 0.
-| 0.
+| Training Loss | Epoch | Step | Validation Loss | Hard | Soft | Overall Precision | Overall Recall | Overall F1 | Overall Accuracy |
+|:-------------:|:-----:|:----:|:---------------:|:----:|:----:|:-----------------:|:--------------:|:----------:|:----------------:|
+| No log | 1.0 | 178 | 0.1284 | {'precision': 0.5326315789473685, 'recall': 0.695054945054945, 'f1': 0.6030989272943981, 'number': 364} | {'precision': 0.5, 'recall': 0.5757575757575758, 'f1': 0.5352112676056339, 'number': 66} | 0.5281 | 0.6767 | 0.5933 | 0.9463 |
+| No log | 2.0 | 356 | 0.1157 | {'precision': 0.6073059360730594, 'recall': 0.7307692307692307, 'f1': 0.6633416458852868, 'number': 364} | {'precision': 0.631578947368421, 'recall': 0.7272727272727273, 'f1': 0.676056338028169, 'number': 66} | 0.6109 | 0.7302 | 0.6653 | 0.9519 |
+| 0.1468 | 3.0 | 534 | 0.1286 | {'precision': 0.6846153846153846, 'recall': 0.7335164835164835, 'f1': 0.7082228116710876, 'number': 364} | {'precision': 0.6582278481012658, 'recall': 0.7878787878787878, 'f1': 0.7172413793103448, 'number': 66} | 0.6802 | 0.7419 | 0.7097 | 0.9547 |
+| 0.1468 | 4.0 | 712 | 0.1383 | {'precision': 0.6799007444168734, 'recall': 0.7527472527472527, 'f1': 0.7144719687092568, 'number': 364} | {'precision': 0.6582278481012658, 'recall': 0.7878787878787878, 'f1': 0.7172413793103448, 'number': 66} | 0.6763 | 0.7581 | 0.7149 | 0.9544 |
+| 0.1468 | 5.0 | 890 | 0.1391 | {'precision': 0.6772616136919315, 'recall': 0.760989010989011, 'f1': 0.7166882276843466, 'number': 364} | {'precision': 0.6883116883116883, 'recall': 0.803030303030303, 'f1': 0.7412587412587411, 'number': 66} | 0.6790 | 0.7674 | 0.7205 | 0.9533 |
 
 
 ### Framework versions
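
A note on how the Overall figures relate to the per-type Hard and Soft dicts above: the reported numbers are consistent with entity-level micro-averaging, i.e. pooling true-positive, predicted, and gold-entity counts across both types before computing precision, recall, and F1. The Python sketch below is not part of the original card; it only assumes that 'number' is the count of gold entities per type, and it reproduces the final-epoch Overall values from the reported per-type metrics.

```python
# Sketch: reconstruct the Overall metrics from the per-type Hard/Soft results
# reported at epoch 5. Assumes entity-level counts, with 'number' being the
# count of gold entities of each type (an assumption, not stated in the card).

hard = {'precision': 0.6772616136919315, 'recall': 0.760989010989011, 'number': 364}
soft = {'precision': 0.6883116883116883, 'recall': 0.803030303030303, 'number': 66}

tp = pred = gold = 0
for m in (hard, soft):
    type_tp = m['recall'] * m['number']   # true positives for this entity type
    tp += type_tp
    gold += m['number']                   # gold entities of this type
    pred += type_tp / m['precision']      # predicted entities of this type

precision = tp / pred
recall = tp / gold
f1 = 2 * precision * recall / (precision + recall)

print(round(precision, 4), round(recall, 4), round(f1, 4))
# -> 0.679 0.7674 0.7205, matching the Overall Precision/Recall/F1 above
```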
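
The entity-level Hard/Soft metrics suggest this checkpoint is a token-classification (NER-style) fine-tune of gbert-base. Assuming that, a minimal usage sketch with the transformers pipeline API might look like the following; the repository id is a placeholder, since the card does not state where the model is published.

```python
from transformers import pipeline

# Placeholder repo id -- the card does not name the published model.
ner = pipeline(
    "token-classification",
    model="your-username/gbert-base-finetuned",
    aggregation_strategy="simple",  # merge word pieces into whole entity spans
)

print(ner("Ein kurzer deutscher Beispielsatz."))
```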