manirai91 committed 7db32f3 (1 parent: bd0ebe0)

update model card README.md

Files changed (1): README.md (added, +88 lines)
---
tags:
- generated_from_trainer
model-index:
- name: enlm-roberta-final
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->

# enlm-roberta-final

This model is a fine-tuned version of [manirai91/enlm-roberta](https://huggingface.co/manirai91/enlm-roberta) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.4187
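
For orientation, if this loss is the usual masked-language-modelling cross-entropy (an assumption; the auto-generated card does not state the training objective), it corresponds to a pseudo-perplexity of roughly exp(1.4187) ≈ 4.13:

```python
import math

eval_loss = 1.4187  # evaluation loss reported above
# Pseudo-perplexity, assuming eval_loss is a mean cross-entropy measured in nats.
print(round(math.exp(eval_loss), 2))  # ≈ 4.13
```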

## Model description

More information needed

## Intended uses & limitations

More information needed
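
Until the author fills in this section, the sketch below shows one plausible way to load the checkpoint. It assumes the model is a RoBERTa-style masked language model, which the base-model name suggests but this card does not confirm; the task, mask token, and example sentence are illustrative only.

```python
from transformers import pipeline

# Minimal sketch, assuming a RoBERTa-style masked-language-model checkpoint.
fill_mask = pipeline("fill-mask", model="manirai91/enlm-roberta-final")

# RoBERTa tokenizers typically use "<mask>" as the mask token.
print(fill_mask("The weather today is <mask>."))
```

If the checkpoint instead carries a task-specific head, loading it via `AutoTokenizer` and the matching `AutoModelFor...` class would be the safer route.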

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 6e-06
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- distributed_type: multi-GPU
- num_devices: 4
- gradient_accumulation_steps: 128
- total_train_batch_size: 8192
- total_eval_batch_size: 64
- optimizer: Adam with betas=(0.9,0.98) and epsilon=1e-06
- lr_scheduler_type: polynomial
- num_epochs: 10
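
Roughly, these settings correspond to the `transformers.TrainingArguments` sketch below. This is an assumed reconstruction rather than the original training script: `output_dir` is a placeholder, and the 4-GPU distributed setup comes from the launcher rather than from these arguments.

```python
from transformers import TrainingArguments

# Assumed reconstruction of the hyperparameters listed above; not the original script.
training_args = TrainingArguments(
    output_dir="enlm-roberta-final",   # placeholder
    learning_rate=6e-06,
    per_device_train_batch_size=16,    # x 4 GPUs x 128 accumulation steps = 8192 total
    per_device_eval_batch_size=16,     # x 4 GPUs = 64 total
    gradient_accumulation_steps=128,
    num_train_epochs=10,
    lr_scheduler_type="polynomial",
    adam_beta1=0.9,
    adam_beta2=0.98,
    adam_epsilon=1e-06,
    seed=42,
)
```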

### Training results

| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 1.5245 | 0.34 | 160 | 1.4187 |
| 1.5245 | 0.69 | 320 | 1.4183 |
| 1.5259 | 1.03 | 480 | 1.4177 |
| 1.5265 | 1.37 | 640 | 1.4185 |
| 1.5245 | 1.72 | 800 | 1.4190 |
| 1.5241 | 2.06 | 960 | 1.4172 |
| 1.5227 | 2.4 | 1120 | 1.4165 |
| 1.5226 | 2.75 | 1280 | 1.4152 |
| 1.522 | 3.09 | 1440 | 1.4190 |
| 1.5243 | 3.43 | 1600 | 1.4177 |
| 1.5213 | 3.78 | 1760 | 1.4134 |
| 1.524 | 4.12 | 1920 | 1.4140 |
| 1.5223 | 4.46 | 2080 | 1.4173 |
| 1.5236 | 4.81 | 2240 | 1.4121 |
| 1.5239 | 5.15 | 2400 | 1.4186 |
| 1.5203 | 5.49 | 2560 | 1.4154 |
| 1.522 | 5.84 | 2720 | 1.4162 |
| 1.5209 | 6.18 | 2880 | 1.4154 |
| 1.5196 | 6.52 | 3040 | 1.4153 |
| 1.5209 | 6.87 | 3200 | 1.4122 |
| 1.5202 | 7.21 | 3360 | 1.4146 |
| 1.5192 | 7.55 | 3520 | 1.4141 |
| 1.5215 | 7.9 | 3680 | 1.4123 |
| 1.5228 | 8.24 | 3840 | 1.4147 |
| 1.5222 | 8.58 | 4000 | 1.4144 |
| 1.5201 | 8.93 | 4160 | 1.4173 |
| 1.523 | 9.27 | 4320 | 1.4171 |
| 1.5212 | 9.61 | 4480 | 1.4149 |
| 1.522 | 9.96 | 4640 | 1.4187 |


### Framework versions

- Transformers 4.20.1
- PyTorch 1.11.0
- Datasets 2.3.2
- Tokenizers 0.12.1