augustocsc committed
Commit 2b55993 · 1 Parent(s): b62f5be

update model card README.md

Files changed (1): README.md (+43 −4)
README.md CHANGED

@@ -13,6 +13,8 @@ should probably proofread and complete it, then remove this comment. -->
 # gpt-m-large
 
 This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on the None dataset.
+It achieves the following results on the evaluation set:
+- Loss: 0.0342
 
 ## Model description
 
@@ -32,16 +34,53 @@ More information needed
 
 The following hyperparameters were used during training:
 - learning_rate: 5e-05
-- train_batch_size: 8
-- eval_batch_size: 8
+- train_batch_size: 64
+- eval_batch_size: 64
 - seed: 42
 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
 - lr_scheduler_type: linear
-- num_epochs: 3.0
+- num_epochs: 1
+
+### Training results
+
+| Training Loss | Epoch | Step  | Validation Loss |
+|:-------------:|:-----:|:-----:|:---------------:|
+| 0.6102        | 0.03  | 1000  | 0.3834          |
+| 1.2492        | 0.06  | 2000  | 0.5341          |
+| 0.839         | 0.09  | 3000  | 0.5090          |
+| 1.8656        | 0.13  | 4000  | 1.4058          |
+| 0.0595        | 0.16  | 5000  | 0.0546          |
+| 0.0462        | 0.19  | 6000  | 0.0404          |
+| 0.0399        | 0.22  | 7000  | 0.0372          |
+| 0.0382        | 0.25  | 8000  | 0.0363          |
+| 0.037         | 0.28  | 9000  | 0.0361          |
+| 0.0365        | 0.31  | 10000 | 0.0352          |
+| 0.0362        | 0.35  | 11000 | 0.0349          |
+| 0.0357        | 0.38  | 12000 | 0.0347          |
+| 0.0356        | 0.41  | 13000 | 0.0345          |
+| 0.0349        | 0.44  | 14000 | 0.0344          |
+| 0.0352        | 0.47  | 15000 | 0.0343          |
+| 0.0355        | 0.5   | 16000 | 0.0342          |
+| 0.0354        | 0.53  | 17000 | 0.0342          |
+| 0.0352        | 0.57  | 18000 | 0.0342          |
+| 0.0352        | 0.6   | 19000 | 0.0342          |
+| 0.0352        | 0.63  | 20000 | 0.0342          |
+| 0.0351        | 0.66  | 21000 | 0.0342          |
+| 0.0348        | 0.69  | 22000 | 0.0342          |
+| 0.035         | 0.72  | 23000 | 0.0342          |
+| 0.0354        | 0.75  | 24000 | 0.0342          |
+| 0.0354        | 0.79  | 25000 | 0.0342          |
+| 0.0353        | 0.82  | 26000 | 0.0342          |
+| 0.0352        | 0.85  | 27000 | 0.0342          |
+| 0.035         | 0.88  | 28000 | 0.0342          |
+| 0.0349        | 0.91  | 29000 | 0.0342          |
+| 0.0351        | 0.94  | 30000 | 0.0342          |
+| 0.0355        | 0.97  | 31000 | 0.0342          |
+
 
 ### Framework versions
 
 - Transformers 4.27.3
-- Pytorch 1.13.1+cu116
+- Pytorch 2.0.0+cu117
 - Datasets 2.10.1
 - Tokenizers 0.13.2
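
For readers wanting to reproduce a comparable run, the updated hyperparameter block in the diff above maps onto `transformers.TrainingArguments` roughly as follows. This is a minimal sketch, not the author's actual training script: the output path, the eval cadence (the card logs validation loss every 1000 steps), and reading the card's batch sizes as per-device values are all assumptions, and the training data is unspecified in the card.

```python
# Minimal sketch of the hyperparameters above, expressed via the
# transformers Trainer API. NOT the author's actual training script.
from transformers import (
    AutoModelForCausalLM,
    AutoTokenizer,
    Trainer,
    TrainingArguments,
)

model = AutoModelForCausalLM.from_pretrained("gpt2")
tokenizer = AutoTokenizer.from_pretrained("gpt2")

args = TrainingArguments(
    output_dir="gpt-m-large",        # assumed output path
    learning_rate=5e-05,
    per_device_train_batch_size=64,  # assumption: card's batch size taken as per-device
    per_device_eval_batch_size=64,
    num_train_epochs=1,
    lr_scheduler_type="linear",
    seed=42,
    adam_beta1=0.9,                  # Adam with betas=(0.9, 0.999)
    adam_beta2=0.999,
    adam_epsilon=1e-08,              # and epsilon=1e-08
    evaluation_strategy="steps",     # assumption: card reports eval loss every 1000 steps
    eval_steps=1000,
)

# The card does not name the training data ("None dataset"), so the
# datasets are left as placeholders here.
# trainer = Trainer(model=model, args=args,
#                   train_dataset=...,
#                   eval_dataset=...)
# trainer.train()
```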
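Since the card describes a fine-tuned gpt2 causal language model, the standard loading snippet applies. The Hub repo id below is inferred from the committer username and model name shown above, so treat it as an assumption and verify it against the actual Hub page; the prompt is a placeholder.

```python
# Sketch: loading the fine-tuned checkpoint for text generation.
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "augustocsc/gpt-m-large"  # assumed Hub repo id (committer/model name)
tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForCausalLM.from_pretrained(repo_id)

inputs = tokenizer("Example prompt", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```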