Commit 7837e5e (verified) by winegarj · Parent(s): 4cf14bb

End of training

Files changed (1): README.md (+13 −12)
README.md CHANGED
@@ -1,4 +1,5 @@
 ---
+library_name: transformers
 license: apache-2.0
 base_model: distilbert-base-uncased
 tags:
@@ -17,8 +18,8 @@ should probably proofread and complete it, then remove this comment. -->
 
 This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
 It achieves the following results on the evaluation set:
-- Loss: 0.2823
-- Accuracy: 0.9083
+- Loss: 0.3009
+- Accuracy: 0.9048
 
 ## Model description
 
@@ -41,7 +42,7 @@ The following hyperparameters were used during training:
 - train_batch_size: 512
 - eval_batch_size: 512
 - seed: 42
-- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
+- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
 - lr_scheduler_type: linear
 - num_epochs: 5
 
@@ -49,16 +50,16 @@ The following hyperparameters were used during training:
 
 | Training Loss | Epoch | Step | Validation Loss | Accuracy |
 |:-------------:|:-----:|:----:|:---------------:|:--------:|
-| No log        | 1.0   | 132  | 0.2514          | 0.9002   |
-| No log        | 2.0   | 264  | 0.2852          | 0.8911   |
-| No log        | 3.0   | 396  | 0.2823          | 0.9083   |
-| 0.1928        | 4.0   | 528  | 0.2995          | 0.9025   |
-| 0.1928        | 5.0   | 660  | 0.3086          | 0.9014   |
+| No log        | 1.0   | 132  | 0.2494          | 0.8968   |
+| No log        | 2.0   | 264  | 0.2767          | 0.8968   |
+| No log        | 3.0   | 396  | 0.2810          | 0.9002   |
+| 0.195         | 4.0   | 528  | 0.2920          | 0.9025   |
+| 0.195         | 5.0   | 660  | 0.3009          | 0.9048   |
 
 
 ### Framework versions
 
-- Transformers 4.43.3
-- Pytorch 2.4.0+cu121
-- Datasets 2.20.0
-- Tokenizers 0.19.1
+- Transformers 4.46.2
+- Pytorch 2.5.1+cu124
+- Datasets 3.1.0
+- Tokenizers 0.20.3
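As a quick arithmetic sanity check on the updated training log (a sketch; the implied dataset size is an inference from the step count and batch size, not a figure stated in the card):

```python
# Cross-check the training-results table against the stated hyperparameters.
# All inputs are taken from the model card diff above; the implied dataset
# size is an upper-bound estimate, not a value the card actually reports.
train_batch_size = 512   # from "train_batch_size: 512"
steps_per_epoch = 132    # "Step" column at epoch 1.0
num_epochs = 5           # from "num_epochs: 5"

total_steps = steps_per_epoch * num_epochs
print(total_steps)  # 660, matching the "Step" value in the final table row

# Each epoch sees at most steps_per_epoch * train_batch_size examples
# (the last batch may be partial), so the training set holds at most:
print(steps_per_epoch * train_batch_size)  # 67584
```

The step counts are identical in both the old and new table, which is consistent with only the library versions and the resulting metrics having changed between the two runs.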