Model save

- README.md +28 -29
- model.safetensors +1 -1
- tb/events.out.tfevents.1725886210.0a1c9bec2a53.24273.0 +2 -2
- train.log +13 -0
README.md CHANGED
@@ -1,12 +1,11 @@
 ---
 library_name: transformers
 license: apache-2.0
-base_model:
+base_model: michiyasunaga/BioLinkBERT-base
 tags:
-- token-classification
 - generated_from_trainer
 datasets:
--
+- drugtemist-en-fasttext-75-ner
 metrics:
 - precision
 - recall
@@ -19,24 +18,24 @@ model-index:
       name: Token Classification
       type: token-classification
     dataset:
-      name:
-      type:
-      config:
+      name: drugtemist-en-fasttext-75-ner
+      type: drugtemist-en-fasttext-75-ner
+      config: DrugTEMIST English NER
       split: validation
-      args:
+      args: DrugTEMIST English NER
     metrics:
     - name: Precision
       type: precision
-      value: 0.
+      value: 0.9194876486733761
     - name: Recall
       type: recall
-      value: 0.
+      value: 0.9366262814538676
     - name: F1
       type: f1
-      value: 0.
+      value: 0.92797783933518
     - name: Accuracy
       type: accuracy
-      value: 0.
+      value: 0.9987511511734993
 ---

 <!-- This model card has been generated automatically according to the information the Trainer had access to. You
@@ -44,13 +43,13 @@ should probably proofread and complete it, then remove this comment. -->

 # output

-This model is a fine-tuned version of [
+This model is a fine-tuned version of [michiyasunaga/BioLinkBERT-base](https://huggingface.co/michiyasunaga/BioLinkBERT-base) on the drugtemist-en-fasttext-75-ner dataset.
 It achieves the following results on the evaluation set:
-- Loss: 0.
-- Precision: 0.
-- Recall: 0.
-- F1: 0.
-- Accuracy: 0.
+- Loss: 0.0077
+- Precision: 0.9195
+- Recall: 0.9366
+- F1: 0.9280
+- Accuracy: 0.9988

 ## Model description

@@ -81,18 +80,18 @@ The following hyperparameters were used during training:

 ### Training results

-| Training Loss | Epoch
-|
-|
-|
-|
-| 0.
-| 0.
-| 0.
-| 0.
-| 0.
-| 0.
-| 0.
+| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1     | Accuracy |
+|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
+| 0.0183        | 1.0   | 507  | 0.0055          | 0.8974    | 0.9376 | 0.9170 | 0.9985   |
+| 0.0043        | 2.0   | 1014 | 0.0059          | 0.9099    | 0.9320 | 0.9208 | 0.9986   |
+| 0.0022        | 3.0   | 1521 | 0.0057          | 0.9015    | 0.9301 | 0.9156 | 0.9985   |
+| 0.0018        | 4.0   | 2028 | 0.0072          | 0.9275    | 0.9180 | 0.9227 | 0.9986   |
+| 0.0009        | 5.0   | 2535 | 0.0064          | 0.9078    | 0.9357 | 0.9215 | 0.9987   |
+| 0.0007        | 6.0   | 3042 | 0.0064          | 0.9194    | 0.9357 | 0.9275 | 0.9987   |
+| 0.0004        | 7.0   | 3549 | 0.0072          | 0.9289    | 0.9376 | 0.9332 | 0.9988   |
+| 0.0004        | 8.0   | 4056 | 0.0076          | 0.9250    | 0.9422 | 0.9335 | 0.9988   |
+| 0.0003        | 9.0   | 4563 | 0.0077          | 0.9161    | 0.9366 | 0.9263 | 0.9987   |
+| 0.0002        | 10.0  | 5070 | 0.0077          | 0.9195    | 0.9366 | 0.9280 | 0.9988   |


 ### Framework versions
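A minimal usage sketch for the fine-tuned token-classification checkpoint described in the card above, assuming the repository files are available in a local directory; the directory path and the example sentence are placeholders, not values from this commit.

```python
# Minimal sketch: load the fine-tuned BioLinkBERT-base NER checkpoint and
# run it as a token-classification pipeline.
# "./output" is a placeholder for wherever this repository is cloned or downloaded.
from transformers import AutoModelForTokenClassification, AutoTokenizer, pipeline

model_dir = "./output"  # placeholder path
tokenizer = AutoTokenizer.from_pretrained(model_dir)
model = AutoModelForTokenClassification.from_pretrained(model_dir)

ner = pipeline(
    "token-classification",
    model=model,
    tokenizer=tokenizer,
    aggregation_strategy="simple",  # merge wordpieces into whole entity spans
)
print(ner("The patient was started on metformin and low-dose aspirin."))  # placeholder sentence
```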
model.safetensors CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:
+oid sha256:710b5b97380a4c9d2cf0065106cab4f3ec295524e9a6face82f289921e3f95ca
 size 430601004
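The weights file is stored as a Git LFS pointer (version, sha256 oid, byte size). A small sketch, assuming a local copy of the actual safetensors file, that checks the download against the pointer recorded above:

```python
# Sketch: verify a locally downloaded model.safetensors against the Git LFS
# pointer shown above (sha256 oid and byte size). The file path is a placeholder.
import hashlib
import os

def sha256_of(path: str, chunk_size: int = 1 << 20) -> str:
    """Stream the file and return its hex sha256 digest."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

path = "model.safetensors"  # placeholder: local copy of the LFS object
expected_oid = "710b5b97380a4c9d2cf0065106cab4f3ec295524e9a6face82f289921e3f95ca"
expected_size = 430601004

assert os.path.getsize(path) == expected_size, "size does not match the LFS pointer"
assert sha256_of(path) == expected_oid, "sha256 does not match the LFS pointer"
```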
tb/events.out.tfevents.1725886210.0a1c9bec2a53.24273.0 CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:
-size
+oid sha256:a173893ab6ea4bae83ba8f7d43d877ca16a06c53d30cb7f414ae20737d881888
+size 12305
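The updated tfevents file holds the TensorBoard scalars logged during training. A sketch, assuming the tensorboard package is installed, of reading them programmatically; the scalar tag name is an assumption about how the Trainer names its logged metrics, not something recorded in this commit.

```python
# Sketch: read scalars from the tb/ event file with TensorBoard's EventAccumulator.
# The tag "eval/f1" is an assumed name for the logged eval F1 score; list the
# available tags first and pick from that list.
from tensorboard.backend.event_processing.event_accumulator import EventAccumulator

acc = EventAccumulator("tb/")  # directory containing the events.out.tfevents.* file
acc.Reload()

print(acc.Tags()["scalars"])          # scalar tags actually present in the file
for event in acc.Scalars("eval/f1"):  # assumed tag name
    print(event.step, event.value)
```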
train.log CHANGED
@@ -1416,3 +1416,16 @@ Training completed. Do not forget to share your model on huggingface.co/models =
 [INFO|trainer.py:2632] 2024-09-09 13:26:47,099 >> Loading best model from /content/dissertation/scripts/ner/output/checkpoint-4056 (score: 0.9335180055401663).


 [INFO|trainer.py:4283] 2024-09-09 13:26:47,281 >> Waiting for the current checkpoint push to be finished, this might take a couple of minutes.
+[INFO|trainer.py:3503] 2024-09-09 13:27:14,809 >> Saving model checkpoint to /content/dissertation/scripts/ner/output
+[INFO|configuration_utils.py:472] 2024-09-09 13:27:14,811 >> Configuration saved in /content/dissertation/scripts/ner/output/config.json
+[INFO|modeling_utils.py:2799] 2024-09-09 13:27:16,086 >> Model weights saved in /content/dissertation/scripts/ner/output/model.safetensors
+[INFO|tokenization_utils_base.py:2684] 2024-09-09 13:27:16,087 >> tokenizer config file saved in /content/dissertation/scripts/ner/output/tokenizer_config.json
+[INFO|tokenization_utils_base.py:2693] 2024-09-09 13:27:16,087 >> Special tokens file saved in /content/dissertation/scripts/ner/output/special_tokens_map.json
+[INFO|trainer.py:3503] 2024-09-09 13:27:16,100 >> Saving model checkpoint to /content/dissertation/scripts/ner/output
+[INFO|configuration_utils.py:472] 2024-09-09 13:27:16,101 >> Configuration saved in /content/dissertation/scripts/ner/output/config.json
+[INFO|modeling_utils.py:2799] 2024-09-09 13:27:17,200 >> Model weights saved in /content/dissertation/scripts/ner/output/model.safetensors
+[INFO|tokenization_utils_base.py:2684] 2024-09-09 13:27:17,201 >> tokenizer config file saved in /content/dissertation/scripts/ner/output/tokenizer_config.json
+[INFO|tokenization_utils_base.py:2693] 2024-09-09 13:27:17,201 >> Special tokens file saved in /content/dissertation/scripts/ner/output/special_tokens_map.json
+{'eval_loss': 0.00769586768001318, 'eval_precision': 0.9194876486733761, 'eval_recall': 0.9366262814538676, 'eval_f1': 0.92797783933518, 'eval_accuracy': 0.9987511511734993, 'eval_runtime': 15.2047, 'eval_samples_per_second': 456.832, 'eval_steps_per_second': 57.153, 'epoch': 10.0}
+{'train_runtime': 2196.5741, 'train_samples_per_second': 147.716, 'train_steps_per_second': 2.308, 'train_loss': 0.0028968164414402532, 'epoch': 10.0}
+
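As a quick sanity check on the appended log lines: the final eval F1 is the harmonic mean of the logged precision and recall, and the runtime and step-rate figures are consistent with the 5070 steps reported in the results table. A small sketch using only values copied from the log above:

```python
# Sanity checks on the final numbers in train.log (all constants copied from the log).
eval_precision = 0.9194876486733761
eval_recall = 0.9366262814538676
eval_f1 = 0.92797783933518

# F1 is the harmonic mean of precision and recall.
f1 = 2 * eval_precision * eval_recall / (eval_precision + eval_recall)
assert abs(f1 - eval_f1) < 1e-9

train_runtime = 2196.5741   # seconds
steps_per_second = 2.308
print(train_runtime * steps_per_second)  # ~5070, the final step count in the training results table
```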