Climate-TwitterBERT committed on
Commit 6d5100e · 1 Parent(s): ef35e68

update model card README.md

Files changed (1): README.md +12 -11
README.md CHANGED
@@ -19,12 +19,12 @@ should probably proofread and complete it, then remove this comment. -->
 
  This model is a fine-tuned version of [digitalepidemiologylab/covid-twitter-bert-v2](https://huggingface.co/digitalepidemiologylab/covid-twitter-bert-v2) on an unknown dataset.
  It achieves the following results on the evaluation set:
- - Loss: 0.3048
- - Accuracy: 0.884
- - Precision: 0.8043
- - Recall: 0.6491
- - F1-weighted: 0.8794
- - F1: 0.7184
+ - Loss: 0.3348
+ - Accuracy: 0.888
+ - Precision: 0.7843
+ - Recall: 0.7018
+ - F1-weighted: 0.8857
+ - F1: 0.7407
 
  ## Model description
 
@@ -47,19 +47,20 @@ The following hyperparameters were used during training:
  - train_batch_size: 16
  - eval_batch_size: 2
  - seed: 42
- - gradient_accumulation_steps: 6
- - total_train_batch_size: 96
+ - gradient_accumulation_steps: 8
+ - total_train_batch_size: 128
  - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  - lr_scheduler_type: linear
  - lr_scheduler_warmup_ratio: 0.05
- - num_epochs: 6
+ - num_epochs: 12
 
  ### Training results
 
  | Training Loss | Epoch | Step | Validation Loss | Accuracy | Precision | Recall | F1-weighted | F1 |
  |:-------------:|:-----:|:----:|:---------------:|:--------:|:---------:|:------:|:-----------:|:------:|
- | 0.4786 | 2.73 | 50 | 0.3590 | 0.864 | 0.7018 | 0.7018 | 0.864 | 0.7018 |
- | 0.265 | 5.45 | 100 | 0.3048 | 0.884 | 0.8043 | 0.6491 | 0.8794 | 0.7184 |
+ | 0.4411 | 3.64 | 50 | 0.3396 | 0.876 | 0.8611 | 0.5439 | 0.8652 | 0.6667 |
+ | 0.1872 | 7.27 | 100 | 0.3182 | 0.876 | 0.6912 | 0.8246 | 0.8796 | 0.7520 |
+ | 0.0724 | 10.91 | 150 | 0.3348 | 0.888 | 0.7843 | 0.7018 | 0.8857 | 0.7407 |
 
 
  ### Framework versions
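The updated values are internally consistent, which can be cross-checked with a small sketch (illustrative only, not part of the commit): the effective training batch size is the per-device batch size multiplied by the gradient-accumulation steps, and the binary F1 is the harmonic mean of the reported precision and recall.

```python
# Illustrative cross-check of the updated model-card numbers (not from the commit).

def total_train_batch_size(per_device_batch: int, grad_accum_steps: int) -> int:
    """Effective batch size = per-device batch size x gradient-accumulation steps."""
    return per_device_batch * grad_accum_steps

def binary_f1(precision: float, recall: float) -> float:
    """Binary F1 is the harmonic mean of precision and recall."""
    return 2 * precision * recall / (precision + recall)

# Values from the "+" side of the diff.
print(total_train_batch_size(16, 8))  # 128, as listed under total_train_batch_size
print(binary_f1(0.7843, 0.7018))      # ~0.741, matching the reported F1 of 0.7407
                                      # up to rounding of precision and recall
```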