eval_precision = 0.9822171390155553, eval_recall = 0.9790383704405495, eval_f1 = 0.9806131065277388
Training results
Epoch | Training Loss | Validation Loss | Precision | Recall | F1 |
---|---|---|---|---|---|
1 | No log | 223.361084 | 0.980017 | 0.937134 | 0.957643 |
2 | No log | 61.423782 | 0.973549 | 0.959132 | 0.965962 |
3 | 2994.676800 | 72.477470 | 0.976213 | 0.962664 | 0.969052 |
4 | 2994.676800 | 103.387581 | 0.971125 | 0.962664 | 0.966845 |
5 | 33.797800 | 156.035553 | 0.975023 | 0.964581 | 0.969666 |
6 | 33.797800 | 265.293549 | 0.971879 | 0.969324 | 0.970583 |
7 | 20.226000 | 766.043457 | 0.974243 | 0.965187 | 0.969429 |
8 | 20.226000 | 1143.557495 | 0.974143 | 0.965691 | 0.969722 |
9 | 23.267200 | 996.235901 | 0.974592 | 0.968517 | 0.971405 |
10 | 23.267200 | 959.597229 | 0.974522 | 0.966398 | 0.970242 |
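The per-epoch precision, recall, and F1 above are span-level NER metrics of the kind typically computed with seqeval inside a `Trainer` `compute_metrics` callback. The sketch below is illustrative only, assuming BIO-tagged labels matching the evaluation table further down; the function name and label list are not taken from this repository.

```python
import numpy as np
from seqeval.metrics import precision_score, recall_score, f1_score

# Illustrative label list inferred from the per-label evaluation table below.
label_list = ["O"] + [
    f"{prefix}-{entity}"
    for entity in ["CONT", "EDU", "LOC", "NAME", "ORG", "PRO", "RACE", "TITLE"]
    for prefix in ["B", "I"]
]

def compute_metrics(eval_pred):
    """Span-level NER metrics for a Hugging Face Trainer (sketch)."""
    logits, labels = eval_pred
    predictions = np.argmax(logits, axis=-1)

    # Drop special/subword positions, which are labelled -100 by the tokenizer.
    true_labels = [
        [label_list[l] for l in label_row if l != -100]
        for label_row in labels
    ]
    true_preds = [
        [label_list[p] for p, l in zip(pred_row, label_row) if l != -100]
        for pred_row, label_row in zip(predictions, labels)
    ]

    return {
        "precision": precision_score(true_labels, true_preds),
        "recall": recall_score(true_labels, true_preds),
        "f1": f1_score(true_labels, true_preds),
    }
```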
Per-label evaluation results
Label | Precision | Recall | F1-Score | Support |
---|---|---|---|---|
B-CONT | 1.00 | 1.00 | 1.00 | 33 |
B-EDU | 1.00 | 1.00 | 1.00 | 106 |
B-LOC | 1.00 | 1.00 | 1.00 | 2 |
B-NAME | 1.00 | 1.00 | 1.00 | 110 |
B-ORG | 0.99 | 0.98 | 0.98 | 523 |
B-PRO | 0.95 | 1.00 | 0.97 | 18 |
B-RACE | 1.00 | 1.00 | 1.00 | 15 |
B-TITLE | 0.96 | 0.96 | 0.96 | 690 |
I-CONT | 1.00 | 1.00 | 1.00 | 97 |
I-EDU | 1.00 | 1.00 | 1.00 | 283 |
I-LOC | 1.00 | 1.00 | 1.00 | 8 |
I-NAME | 1.00 | 1.00 | 1.00 | 177 |
I-ORG | 0.99 | 0.98 | 0.99 | 4146 |
I-PRO | 0.93 | 1.00 | 0.96 | 51 |
I-RACE | 1.00 | 1.00 | 1.00 | 14 |
I-TITLE | 0.97 | 0.97 | 0.97 | 2171 |
O | 0.00 | 0.00 | 0.00 | 0 |
{'eval_loss': 684.596923828125, 'eval_precision': 0.9822171390155553, 'eval_recall': 0.9790383704405495, 'eval_f1': 0.9806131065277388, 'eval_runtime': 5.8637, 'eval_samples_per_second': 78.96, 'eval_steps_per_second': 4.946, 'epoch': 10.0}
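For inference, the checkpoint can be tried with the standard transformers token-classification pipeline. This is a minimal sketch, assuming the checkpoint loads with the stock `AutoModelForTokenClassification` class; the example sentence is made up, and the aggregation strategy may need adjusting for your use case.

```python
from transformers import AutoTokenizer, AutoModelForTokenClassification, pipeline

model_id = "PassbyGrocer/resume_ner_herb"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForTokenClassification.from_pretrained(model_id)

ner = pipeline(
    "token-classification",
    model=model,
    tokenizer=tokenizer,
    aggregation_strategy="simple",  # merge B-/I- pieces into whole entities
)

# Made-up resume-style sentence: "Zhang San graduated from Tsinghua University
# and is now a senior engineer at a technology company."
print(ner("张三毕业于清华大学，现任某科技公司高级工程师。"))
```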
Model tree for PassbyGrocer/resume_ner_herb
Base model: hfl/chinese-roberta-wwm-ext
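A token-classification head is typically attached to this base model for fine-tuning roughly as sketched below. The label scheme is inferred from the evaluation table above; the hyperparameters are placeholders (the actual training setup is not documented here).

```python
from transformers import AutoTokenizer, AutoModelForTokenClassification, TrainingArguments

base_model = "hfl/chinese-roberta-wwm-ext"
label_list = ["O"] + [
    f"{prefix}-{entity}"
    for entity in ["CONT", "EDU", "LOC", "NAME", "ORG", "PRO", "RACE", "TITLE"]
    for prefix in ["B", "I"]
]

tokenizer = AutoTokenizer.from_pretrained(base_model)
model = AutoModelForTokenClassification.from_pretrained(
    base_model,
    num_labels=len(label_list),
    id2label=dict(enumerate(label_list)),
    label2id={label: i for i, label in enumerate(label_list)},
)

# Placeholder arguments; only the epoch count mirrors the training table above.
args = TrainingArguments(
    output_dir="resume_ner",
    num_train_epochs=10,
    per_device_train_batch_size=16,
)

# trainer = Trainer(model=model, args=args, train_dataset=..., eval_dataset=...,
#                   tokenizer=tokenizer, compute_metrics=compute_metrics)
# trainer.train()
```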