Commit 7556a63 (verified) by Rodrigo1771 · Parent(s): 7b3bcde

Model save
README.md CHANGED
@@ -1,12 +1,11 @@
 ---
 library_name: transformers
 license: apache-2.0
-base_model: PlanTL-GOB-ES/bsc-bio-ehr-es
+base_model: michiyasunaga/BioLinkBERT-base
 tags:
-- token-classification
 - generated_from_trainer
 datasets:
-- Rodrigo1771/symptemist-fasttext-9-ner
+- drugtemist-en-fasttext-75-ner
 metrics:
 - precision
 - recall
@@ -19,24 +18,24 @@ model-index:
       name: Token Classification
       type: token-classification
     dataset:
-      name: Rodrigo1771/symptemist-fasttext-9-ner
-      type: Rodrigo1771/symptemist-fasttext-9-ner
-      config: SympTEMIST NER
+      name: drugtemist-en-fasttext-75-ner
+      type: drugtemist-en-fasttext-75-ner
+      config: DrugTEMIST English NER
       split: validation
-      args: SympTEMIST NER
+      args: DrugTEMIST English NER
     metrics:
     - name: Precision
       type: precision
-      value: 0.6659969864389754
+      value: 0.9194876486733761
     - name: Recall
       type: recall
-      value: 0.7257799671592775
+      value: 0.9366262814538676
     - name: F1
       type: f1
-      value: 0.6946045049764275
+      value: 0.92797783933518
     - name: Accuracy
       type: accuracy
-      value: 0.9496615226667522
+      value: 0.9987511511734993
 ---
 
 <!-- This model card has been generated automatically according to the information the Trainer had access to. You
@@ -44,13 +43,13 @@ should probably proofread and complete it, then remove this comment. -->
 
 # output
 
-This model is a fine-tuned version of [PlanTL-GOB-ES/bsc-bio-ehr-es](https://huggingface.co/PlanTL-GOB-ES/bsc-bio-ehr-es) on the Rodrigo1771/symptemist-fasttext-9-ner dataset.
+This model is a fine-tuned version of [michiyasunaga/BioLinkBERT-base](https://huggingface.co/michiyasunaga/BioLinkBERT-base) on the drugtemist-en-fasttext-75-ner dataset.
 It achieves the following results on the evaluation set:
-- Loss: 0.2450
-- Precision: 0.6660
-- Recall: 0.7258
-- F1: 0.6946
-- Accuracy: 0.9497
+- Loss: 0.0077
+- Precision: 0.9195
+- Recall: 0.9366
+- F1: 0.9280
+- Accuracy: 0.9988
 
 ## Model description
 
@@ -81,18 +80,18 @@ The following hyperparameters were used during training:
 
 ### Training results
 
-| Training Loss | Epoch  | Step | Validation Loss | Precision | Recall | F1     | Accuracy |
-|:-------------:|:------:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
-| No log        | 0.9968 | 155  | 0.1485          | 0.5669    | 0.6218 | 0.5931 | 0.9436   |
-| No log        | 2.0    | 311  | 0.1609          | 0.5623    | 0.7159 | 0.6299 | 0.9409   |
-| No log        | 2.9968 | 466  | 0.1635          | 0.6210    | 0.7219 | 0.6677 | 0.9487   |
-| 0.1246        | 4.0    | 622  | 0.2047          | 0.6659    | 0.6765 | 0.6712 | 0.9493   |
-| 0.1246        | 4.9968 | 777  | 0.2134          | 0.6562    | 0.7115 | 0.6828 | 0.9480   |
-| 0.1246        | 6.0    | 933  | 0.2259          | 0.6518    | 0.7099 | 0.6796 | 0.9494   |
-| 0.0242        | 6.9968 | 1088 | 0.2450          | 0.6660    | 0.7258 | 0.6946 | 0.9497   |
-| 0.0242        | 8.0    | 1244 | 0.2650          | 0.6491    | 0.7230 | 0.6841 | 0.9491   |
-| 0.0242        | 8.9968 | 1399 | 0.2745          | 0.6646    | 0.7126 | 0.6878 | 0.9498   |
-| 0.0083        | 9.9678 | 1550 | 0.2774          | 0.6628    | 0.7187 | 0.6896 | 0.9503   |
+| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1     | Accuracy |
+|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
+| 0.0183        | 1.0   | 507  | 0.0055          | 0.8974    | 0.9376 | 0.9170 | 0.9985   |
+| 0.0043        | 2.0   | 1014 | 0.0059          | 0.9099    | 0.9320 | 0.9208 | 0.9986   |
+| 0.0022        | 3.0   | 1521 | 0.0057          | 0.9015    | 0.9301 | 0.9156 | 0.9985   |
+| 0.0018        | 4.0   | 2028 | 0.0072          | 0.9275    | 0.9180 | 0.9227 | 0.9986   |
+| 0.0009        | 5.0   | 2535 | 0.0064          | 0.9078    | 0.9357 | 0.9215 | 0.9987   |
+| 0.0007        | 6.0   | 3042 | 0.0064          | 0.9194    | 0.9357 | 0.9275 | 0.9987   |
+| 0.0004        | 7.0   | 3549 | 0.0072          | 0.9289    | 0.9376 | 0.9332 | 0.9988   |
+| 0.0004        | 8.0   | 4056 | 0.0076          | 0.9250    | 0.9422 | 0.9335 | 0.9988   |
+| 0.0003        | 9.0   | 4563 | 0.0077          | 0.9161    | 0.9366 | 0.9263 | 0.9987   |
+| 0.0002        | 10.0  | 5070 | 0.0077          | 0.9195    | 0.9366 | 0.9280 | 0.9988   |
 
 
 ### Framework versions
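As a quick sanity check on the updated card, the reported F1 should be the harmonic mean of the reported precision and recall. A minimal sketch in plain Python (the function and variable names are illustrative, not from the training script):

```python
def f1_score(precision: float, recall: float) -> float:
    """Harmonic mean of precision and recall, as used in span-level NER evaluation."""
    return 2 * precision * recall / (precision + recall)

# Values from the updated model-index block above
precision = 0.9194876486733761
recall = 0.9366262814538676

f1 = f1_score(precision, recall)
print(f"{f1:.4f}")  # 0.9280, consistent with the reported F1 of 0.92797783933518
```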
model.safetensors CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:5e48e5070a2e48a4126a57b8ab250a702091d9f63b94e626651db619192f99cd
+oid sha256:710b5b97380a4c9d2cf0065106cab4f3ec295524e9a6face82f289921e3f95ca
 size 430601004
tb/events.out.tfevents.1725886210.0a1c9bec2a53.24273.0 CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:8d6956b7438a04ebcdd54d7b8e8dbc8d1d73ef4c8218c0dbd353b0ce76c8773f
-size 11479
+oid sha256:a173893ab6ea4bae83ba8f7d43d877ca16a06c53d30cb7f414ae20737d881888
+size 12305
train.log CHANGED
@@ -1416,3 +1416,16 @@ Training completed. Do not forget to share your model on huggingface.co/models =
 [INFO|trainer.py:2632] 2024-09-09 13:26:47,099 >> Loading best model from /content/dissertation/scripts/ner/output/checkpoint-4056 (score: 0.9335180055401663).
 
 
 [INFO|trainer.py:4283] 2024-09-09 13:26:47,281 >> Waiting for the current checkpoint push to be finished, this might take a couple of minutes.
+[INFO|trainer.py:3503] 2024-09-09 13:27:14,809 >> Saving model checkpoint to /content/dissertation/scripts/ner/output
+[INFO|configuration_utils.py:472] 2024-09-09 13:27:14,811 >> Configuration saved in /content/dissertation/scripts/ner/output/config.json
+[INFO|modeling_utils.py:2799] 2024-09-09 13:27:16,086 >> Model weights saved in /content/dissertation/scripts/ner/output/model.safetensors
+[INFO|tokenization_utils_base.py:2684] 2024-09-09 13:27:16,087 >> tokenizer config file saved in /content/dissertation/scripts/ner/output/tokenizer_config.json
+[INFO|tokenization_utils_base.py:2693] 2024-09-09 13:27:16,087 >> Special tokens file saved in /content/dissertation/scripts/ner/output/special_tokens_map.json
+[INFO|trainer.py:3503] 2024-09-09 13:27:16,100 >> Saving model checkpoint to /content/dissertation/scripts/ner/output
+[INFO|configuration_utils.py:472] 2024-09-09 13:27:16,101 >> Configuration saved in /content/dissertation/scripts/ner/output/config.json
+[INFO|modeling_utils.py:2799] 2024-09-09 13:27:17,200 >> Model weights saved in /content/dissertation/scripts/ner/output/model.safetensors
+[INFO|tokenization_utils_base.py:2684] 2024-09-09 13:27:17,201 >> tokenizer config file saved in /content/dissertation/scripts/ner/output/tokenizer_config.json
+[INFO|tokenization_utils_base.py:2693] 2024-09-09 13:27:17,201 >> Special tokens file saved in /content/dissertation/scripts/ner/output/special_tokens_map.json
+{'eval_loss': 0.00769586768001318, 'eval_precision': 0.9194876486733761, 'eval_recall': 0.9366262814538676, 'eval_f1': 0.92797783933518, 'eval_accuracy': 0.9987511511734993, 'eval_runtime': 15.2047, 'eval_samples_per_second': 456.832, 'eval_steps_per_second': 57.153, 'epoch': 10.0}
+{'train_runtime': 2196.5741, 'train_samples_per_second': 147.716, 'train_steps_per_second': 2.308, 'train_loss': 0.0028968164414402532, 'epoch': 10.0}
+
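The final summary line in train.log can be cross-checked against the training-results table in the README: multiplying the reported train_runtime by train_steps_per_second should recover the final optimizer step. A small sketch (the values are copied from the log; the rounding is mine, since the logged rates are themselves rounded):

```python
# Summary values from the last line of train.log
train_runtime = 2196.5741        # seconds
train_steps_per_second = 2.308
epochs = 10.0

total_steps = round(train_runtime * train_steps_per_second)
steps_per_epoch = total_steps / epochs
print(total_steps, steps_per_epoch)  # 5070 507.0 -- matches Step 5070 (and 507 steps/epoch) in the table
```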