Rodrigo1771 committed (verified)
Commit: f720220
Parent(s): d086439

Model save
README.md CHANGED
@@ -3,10 +3,9 @@ library_name: transformers
 license: apache-2.0
 base_model: PlanTL-GOB-ES/bsc-bio-ehr-es
 tags:
-- token-classification
 - generated_from_trainer
 datasets:
-- Rodrigo1771/drugtemist-85-ner
+- combined-train-distemist-dev-85-ner
 metrics:
 - precision
 - recall
@@ -19,24 +18,24 @@ model-index:
       name: Token Classification
       type: token-classification
     dataset:
-      name: Rodrigo1771/drugtemist-85-ner
-      type: Rodrigo1771/drugtemist-85-ner
-      config: DrugTEMIST NER
+      name: combined-train-distemist-dev-85-ner
+      type: combined-train-distemist-dev-85-ner
+      config: CombinedTrainDisTEMISTDevNER
       split: validation
-      args: DrugTEMIST NER
+      args: CombinedTrainDisTEMISTDevNER
     metrics:
     - name: Precision
      type: precision
-      value: 0.9461187214611873
+      value: 0.31162999550965426
    - name: Recall
      type: recall
-      value: 0.9522058823529411
+      value: 0.8118858212447356
    - name: F1
      type: f1
-      value: 0.9491525423728814
+      value: 0.45038613797131544
    - name: Accuracy
      type: accuracy
-      value: 0.9989426998228679
+      value: 0.8533304955579661
 ---
 
 <!-- This model card has been generated automatically according to the information the Trainer had access to. You
@@ -44,13 +43,13 @@ should probably proofread and complete it, then remove this comment. -->
 
 # output
 
-This model is a fine-tuned version of [PlanTL-GOB-ES/bsc-bio-ehr-es](https://huggingface.co/PlanTL-GOB-ES/bsc-bio-ehr-es) on the Rodrigo1771/drugtemist-85-ner dataset.
+This model is a fine-tuned version of [PlanTL-GOB-ES/bsc-bio-ehr-es](https://huggingface.co/PlanTL-GOB-ES/bsc-bio-ehr-es) on the combined-train-distemist-dev-85-ner dataset.
 It achieves the following results on the evaluation set:
-- Loss: 0.0048
-- Precision: 0.9461
-- Recall: 0.9522
-- F1: 0.9492
-- Accuracy: 0.9989
+- Loss: 1.0023
+- Precision: 0.3116
+- Recall: 0.8119
+- F1: 0.4504
+- Accuracy: 0.8533
 
 ## Model description
 
@@ -83,16 +82,16 @@ The following hyperparameters were used during training:
 
 | Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
 |:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
-| No log | 1.0 | 466 | 0.0031 | 0.9292 | 0.9412 | 0.9352 | 0.9989 |
-| 0.0199 | 2.0 | 932 | 0.0031 | 0.9212 | 0.9568 | 0.9387 | 0.9989 |
-| 0.0026 | 3.0 | 1398 | 0.0040 | 0.9365 | 0.9357 | 0.9361 | 0.9989 |
-| 0.0011 | 4.0 | 1864 | 0.0052 | 0.9400 | 0.9219 | 0.9309 | 0.9987 |
-| 0.001 | 5.0 | 2330 | 0.0048 | 0.9461 | 0.9522 | 0.9492 | 0.9989 |
-| 0.0005 | 6.0 | 2796 | 0.0046 | 0.9376 | 0.9522 | 0.9448 | 0.9989 |
-| 0.0004 | 7.0 | 3262 | 0.0050 | 0.9328 | 0.9568 | 0.9446 | 0.9990 |
-| 0.0002 | 8.0 | 3728 | 0.0055 | 0.9423 | 0.9449 | 0.9436 | 0.9989 |
-| 0.0001 | 9.0 | 4194 | 0.0057 | 0.9399 | 0.9485 | 0.9442 | 0.9989 |
-| 0.0001 | 10.0 | 4660 | 0.0058 | 0.9348 | 0.9485 | 0.9416 | 0.9989 |
+| 0.3191 | 1.0 | 541 | 0.4772 | 0.2725 | 0.8074 | 0.4075 | 0.8443 |
+| 0.1619 | 2.0 | 1082 | 0.4584 | 0.3041 | 0.7941 | 0.4398 | 0.8553 |
+| 0.11 | 3.0 | 1623 | 0.6447 | 0.2976 | 0.8000 | 0.4338 | 0.8435 |
+| 0.0764 | 4.0 | 2164 | 0.7413 | 0.2896 | 0.7871 | 0.4234 | 0.8399 |
+| 0.0567 | 5.0 | 2705 | 0.7006 | 0.3153 | 0.8145 | 0.4546 | 0.8565 |
+| 0.0428 | 6.0 | 3246 | 0.8112 | 0.3071 | 0.8210 | 0.4470 | 0.8504 |
+| 0.0332 | 7.0 | 3787 | 0.9046 | 0.3114 | 0.8070 | 0.4494 | 0.8533 |
+| 0.0257 | 8.0 | 4328 | 0.9723 | 0.3060 | 0.8109 | 0.4444 | 0.8482 |
+| 0.022 | 9.0 | 4869 | 1.0028 | 0.3087 | 0.8077 | 0.4467 | 0.8502 |
+| 0.0181 | 10.0 | 5410 | 1.0023 | 0.3116 | 0.8119 | 0.4504 | 0.8533 |
 
 
 ### Framework versions
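
A minimal inference sketch, not part of this commit: it assumes the checkpoint saved by this run is available at the local Trainer output path shown in train.log (substitute the Hub repo id if the pushed copy is used instead); the aggregation strategy and example sentence are illustrative only.

```python
# Hedged sketch: load the fine-tuned NER checkpoint and tag a sentence.
# The model path is the local output dir from train.log; replace it with
# the Hub repo id once the checkpoint push referenced in the log is done.
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="/content/dissertation/scripts/ner/output",
    aggregation_strategy="simple",  # merge B-/I- word pieces into entity spans
)

# Illustrative Spanish clinical sentence (not taken from the dataset).
print(ner("Paciente diagnosticado de diabetes mellitus tipo 2."))
```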
model.safetensors CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:fb05d214814c2ef6801e19a47050c4e071f9a6a53f337eecc64972e6bf739330
+oid sha256:40d51e380730dc13252fa24e75ff6a3a5d47dfbaa6ff5f8a7a801628c757ba77
 size 496262556
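
Since this diff only changes a Git LFS pointer, the actual weights can be checked against it after download: the pointer records the SHA-256 of the real file and its byte size. A hedged sketch under the assumption that model.safetensors sits in the current directory:

```python
# Verify a downloaded model.safetensors against the LFS pointer in this commit.
import hashlib
from pathlib import Path

path = Path("model.safetensors")  # local copy of the LFS object (path is an assumption)
h = hashlib.sha256()
with path.open("rb") as f:
    for chunk in iter(lambda: f.read(1 << 20), b""):  # hash in 1 MiB chunks
        h.update(chunk)

print(h.hexdigest() == "40d51e380730dc13252fa24e75ff6a3a5d47dfbaa6ff5f8a7a801628c757ba77")  # new oid
print(path.stat().st_size == 496262556)  # size recorded in the pointer
```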
tb/events.out.tfevents.1725579443.2a66098fac87.15776.0 CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:bbe3f5ff4851f3fc3c5cabd01bde5b3d622ecd3b108e014532660173b1a056f1
-size 11819
+oid sha256:c35cf63ce2a6e730611e5daee2288691541b634908c8c8e2a1a19a4eb706abe0
+size 12645
train.log CHANGED
@@ -1482,3 +1482,16 @@ Training completed. Do not forget to share your model on huggingface.co/models =
 [INFO|trainer.py:2632] 2024-09-06 00:03:12,782 >> Loading best model from /content/dissertation/scripts/ner/output/checkpoint-2705 (score: 0.45455732567249935).
 
 
 [INFO|trainer.py:4283] 2024-09-06 00:03:12,976 >> Waiting for the current checkpoint push to be finished, this might take a couple of minutes.
+[INFO|trainer.py:3503] 2024-09-06 00:03:38,550 >> Saving model checkpoint to /content/dissertation/scripts/ner/output
+[INFO|configuration_utils.py:472] 2024-09-06 00:03:38,551 >> Configuration saved in /content/dissertation/scripts/ner/output/config.json
+[INFO|modeling_utils.py:2799] 2024-09-06 00:03:39,925 >> Model weights saved in /content/dissertation/scripts/ner/output/model.safetensors
+[INFO|tokenization_utils_base.py:2684] 2024-09-06 00:03:39,926 >> tokenizer config file saved in /content/dissertation/scripts/ner/output/tokenizer_config.json
+[INFO|tokenization_utils_base.py:2693] 2024-09-06 00:03:39,926 >> Special tokens file saved in /content/dissertation/scripts/ner/output/special_tokens_map.json
+[INFO|trainer.py:3503] 2024-09-06 00:03:39,973 >> Saving model checkpoint to /content/dissertation/scripts/ner/output
+[INFO|configuration_utils.py:472] 2024-09-06 00:03:39,975 >> Configuration saved in /content/dissertation/scripts/ner/output/config.json
+[INFO|modeling_utils.py:2799] 2024-09-06 00:03:41,523 >> Model weights saved in /content/dissertation/scripts/ner/output/model.safetensors
+[INFO|tokenization_utils_base.py:2684] 2024-09-06 00:03:41,524 >> tokenizer config file saved in /content/dissertation/scripts/ner/output/tokenizer_config.json
+[INFO|tokenization_utils_base.py:2693] 2024-09-06 00:03:41,524 >> Special tokens file saved in /content/dissertation/scripts/ner/output/special_tokens_map.json
+{'eval_loss': 1.0022608041763306, 'eval_precision': 0.31162999550965426, 'eval_recall': 0.8118858212447356, 'eval_f1': 0.45038613797131544, 'eval_accuracy': 0.8533304955579661, 'eval_runtime': 14.4559, 'eval_samples_per_second': 471.089, 'eval_steps_per_second': 58.938, 'epoch': 10.0}
+{'train_runtime': 1549.44, 'train_samples_per_second': 223.332, 'train_steps_per_second': 3.492, 'train_loss': 0.0812657987344287, 'epoch': 10.0}
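
The eval_precision / eval_recall / eval_f1 / eval_accuracy values logged above are entity-level scores of the kind the standard transformers token-classification setup computes with seqeval. A minimal sketch, assuming seqeval and hypothetical BIO tag sequences (the ENFERMEDAD label and the toy examples are assumptions, not taken from this run):

```python
# Hedged sketch of how such NER metrics are typically computed from gold and
# predicted BIO tag sequences; real evaluation uses the validation split,
# with padded/special positions (label id -100) filtered out first.
from seqeval.metrics import accuracy_score, f1_score, precision_score, recall_score

gold = [["O", "B-ENFERMEDAD", "I-ENFERMEDAD", "O"]]  # reference tags
pred = [["O", "B-ENFERMEDAD", "O", "O"]]             # model predictions

print({
    "precision": precision_score(gold, pred),  # entity-level precision
    "recall": recall_score(gold, pred),        # entity-level recall
    "f1": f1_score(gold, pred),                # entity-level F1
    "accuracy": accuracy_score(gold, pred),    # token-level accuracy
})
```

The best-model score 0.45455732567249935 in the log matches the epoch-5 F1 in the README table, consistent with the epoch-5 checkpoint (checkpoint-2705) being reloaded at the end of training.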