Melo1512 committed (verified)
Commit a38122e · 1 Parent(s): ced4992

End of training

README.md CHANGED
@@ -23,7 +23,7 @@ model-index:
  metrics:
  - name: Accuracy
    type: accuracy
- value: 0.9374958397124409
+ value: 0.9400918591493044
 ---
 
 <!-- This model card has been generated automatically according to the information the Trainer had access to. You
@@ -33,8 +33,8 @@ should probably proofread and complete it, then remove this comment. -->
 
 This model is a fine-tuned version of [facebook/vit-msn-small](https://huggingface.co/facebook/vit-msn-small) on the imagefolder dataset.
 It achieves the following results on the evaluation set:
- - Loss: 0.2370
- - Accuracy: 0.9375
+ - Loss: 0.1790
+ - Accuracy: 0.9401
 
 ## Model description
 
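The model card now reports the new evaluation results (loss 0.1790, accuracy 0.9401). As a quick, hedged illustration of how the published checkpoint could be used for inference — the repo id below is an assumption pieced together from the commit author and the checkpoint name in trainer_state.json, not something stated in this commit:

```python
# Minimal inference sketch; repo id and image path are placeholders/assumptions.
from PIL import Image
from transformers import pipeline

classifier = pipeline(
    "image-classification",
    model="Melo1512/vit-msn-small-wbc-classifier-cells-separated-dataset-agregates-25",
)

image = Image.open("example_cell.png")  # any RGB image of the kind used for fine-tuning
for prediction in classifier(image, top_k=3):
    print(f"{prediction['label']}: {prediction['score']:.4f}")
```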
all_results.json ADDED
@@ -0,0 +1,13 @@
+ {
+ "epoch": 24.843423799582464,
+ "eval_accuracy": 0.9400918591493044,
+ "eval_loss": 0.17901407182216644,
+ "eval_runtime": 46.8182,
+ "eval_samples_per_second": 320.879,
+ "eval_steps_per_second": 5.019,
+ "total_flos": 1.488814196353273e+19,
+ "train_loss": 0.23466382104809544,
+ "train_runtime": 6976.8228,
+ "train_samples_per_second": 109.728,
+ "train_steps_per_second": 0.426
+ }
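all_results.json bundles the evaluation and training summaries. A small sketch (the file path in the repo root is an assumption) that reads it back and checks that the committed throughput figures are internally consistent:

```python
# Hypothetical consistency check on the metrics committed in all_results.json.
import json

with open("all_results.json") as f:
    results = json.load(f)

# samples/sec divided by steps/sec gives the effective eval batch size (~64 here).
implied_batch = results["eval_samples_per_second"] / results["eval_steps_per_second"]
# samples/sec times runtime recovers the number of evaluated images (~15,000 here).
implied_samples = results["eval_samples_per_second"] * results["eval_runtime"]

print(f"eval_accuracy       : {results['eval_accuracy']:.4f}")
print(f"implied eval batch  : {implied_batch:.1f}")
print(f"implied eval samples: {implied_samples:.0f}")
```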
eval_results.json ADDED
@@ -0,0 +1,8 @@
+ {
+ "epoch": 24.843423799582464,
+ "eval_accuracy": 0.9400918591493044,
+ "eval_loss": 0.17901407182216644,
+ "eval_runtime": 46.8182,
+ "eval_samples_per_second": 320.879,
+ "eval_steps_per_second": 5.019
+ }
runs/Dec19_12-52-16_ae1aa77fe319/events.out.tfevents.1734622698.ae1aa77fe319.236.25 ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:098f81239e26ac2729fd12a554322025b1502afb4bd88238ef30b345fa9dccbb
+ size 411
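The events file is committed as a Git LFS pointer (411 bytes), so the actual TensorBoard log has to be fetched with `git lfs pull` before it can be read. A sketch, assuming TensorBoard is installed; the scalar tag name is an assumption and depends on the Trainer version:

```python
# Read scalars back from the tfevents file referenced above (after `git lfs pull`).
from tensorboard.backend.event_processing.event_accumulator import EventAccumulator

events_path = (
    "runs/Dec19_12-52-16_ae1aa77fe319/"
    "events.out.tfevents.1734622698.ae1aa77fe319.236.25"
)
accumulator = EventAccumulator(events_path)
accumulator.Reload()

print(accumulator.Tags()["scalars"])                 # list whatever scalar tags were logged
for event in accumulator.Scalars("eval/accuracy"):   # tag name is an assumption
    print(event.step, event.value)
```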
train_results.json ADDED
@@ -0,0 +1,8 @@
+ {
+ "epoch": 24.843423799582464,
+ "total_flos": 1.488814196353273e+19,
+ "train_loss": 0.23466382104809544,
+ "train_runtime": 6976.8228,
+ "train_samples_per_second": 109.728,
+ "train_steps_per_second": 0.426
+ }
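train_results.json repeats the training-side summary, while the trainer_state.json added below carries the full log_history, including the best checkpoint. A sketch, assuming the file is read from the repo root, that extracts the evaluation curve and the best-checkpoint fields visible in the diff:

```python
# Walk trainer_state.json's log_history and print the per-epoch eval results.
import json

with open("trainer_state.json") as f:
    state = json.load(f)

for entry in state["log_history"]:
    if "eval_accuracy" in entry:
        print(f"epoch {entry['epoch']:6.2f}  step {entry['step']:5d}  "
              f"acc {entry['eval_accuracy']:.4f}  loss {entry['eval_loss']:.4f}")

print("best_metric          :", state["best_metric"])
print("best_model_checkpoint:", state["best_model_checkpoint"])
```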
trainer_state.json ADDED
@@ -0,0 +1,2346 @@
1
+ {
2
+ "best_metric": 0.9400918591493044,
3
+ "best_model_checkpoint": "vit-msn-small-wbc-classifier-cells-separated-dataset-agregates-25/checkpoint-1556",
4
+ "epoch": 24.843423799582464,
5
+ "eval_steps": 500,
6
+ "global_step": 2975,
7
+ "is_hyper_param_search": false,
8
+ "is_local_process_zero": true,
9
+ "is_world_process_zero": true,
10
+ "log_history": [
11
+ {
12
+ "epoch": 0.08350730688935282,
13
+ "grad_norm": 23.778831481933594,
14
+ "learning_rate": 1.6778523489932886e-06,
15
+ "loss": 1.6751,
16
+ "step": 10
17
+ },
18
+ {
19
+ "epoch": 0.16701461377870563,
20
+ "grad_norm": 10.125561714172363,
21
+ "learning_rate": 3.3557046979865773e-06,
22
+ "loss": 1.2395,
23
+ "step": 20
24
+ },
25
+ {
26
+ "epoch": 0.25052192066805845,
27
+ "grad_norm": 7.331182479858398,
28
+ "learning_rate": 5.033557046979865e-06,
29
+ "loss": 0.9635,
30
+ "step": 30
31
+ },
32
+ {
33
+ "epoch": 0.33402922755741127,
34
+ "grad_norm": 6.259941577911377,
35
+ "learning_rate": 6.7114093959731546e-06,
36
+ "loss": 0.7296,
37
+ "step": 40
38
+ },
39
+ {
40
+ "epoch": 0.4175365344467641,
41
+ "grad_norm": 9.791149139404297,
42
+ "learning_rate": 8.389261744966444e-06,
43
+ "loss": 0.582,
44
+ "step": 50
45
+ },
46
+ {
47
+ "epoch": 0.5010438413361169,
48
+ "grad_norm": 8.130660057067871,
49
+ "learning_rate": 1.006711409395973e-05,
50
+ "loss": 0.4722,
51
+ "step": 60
52
+ },
53
+ {
54
+ "epoch": 0.5845511482254697,
55
+ "grad_norm": 6.79920768737793,
56
+ "learning_rate": 1.174496644295302e-05,
57
+ "loss": 0.4658,
58
+ "step": 70
59
+ },
60
+ {
61
+ "epoch": 0.6680584551148225,
62
+ "grad_norm": 6.699377536773682,
63
+ "learning_rate": 1.3422818791946309e-05,
64
+ "loss": 0.394,
65
+ "step": 80
66
+ },
67
+ {
68
+ "epoch": 0.7515657620041754,
69
+ "grad_norm": 18.482280731201172,
70
+ "learning_rate": 1.51006711409396e-05,
71
+ "loss": 0.3988,
72
+ "step": 90
73
+ },
74
+ {
75
+ "epoch": 0.8350730688935282,
76
+ "grad_norm": 14.189501762390137,
77
+ "learning_rate": 1.6778523489932888e-05,
78
+ "loss": 0.3955,
79
+ "step": 100
80
+ },
81
+ {
82
+ "epoch": 0.918580375782881,
83
+ "grad_norm": 9.649164199829102,
84
+ "learning_rate": 1.8456375838926178e-05,
85
+ "loss": 0.351,
86
+ "step": 110
87
+ },
88
+ {
89
+ "epoch": 0.9937369519832986,
90
+ "eval_accuracy": 0.9151301337948479,
91
+ "eval_loss": 0.2522774636745453,
92
+ "eval_runtime": 47.6631,
93
+ "eval_samples_per_second": 315.191,
94
+ "eval_steps_per_second": 4.93,
95
+ "step": 119
96
+ },
97
+ {
98
+ "epoch": 1.0020876826722338,
99
+ "grad_norm": 8.503198623657227,
100
+ "learning_rate": 2.013422818791946e-05,
101
+ "loss": 0.377,
102
+ "step": 120
103
+ },
104
+ {
105
+ "epoch": 1.0855949895615866,
106
+ "grad_norm": 6.6437249183654785,
107
+ "learning_rate": 2.181208053691275e-05,
108
+ "loss": 0.3533,
109
+ "step": 130
110
+ },
111
+ {
112
+ "epoch": 1.1691022964509394,
113
+ "grad_norm": 5.940201282501221,
114
+ "learning_rate": 2.348993288590604e-05,
115
+ "loss": 0.3391,
116
+ "step": 140
117
+ },
118
+ {
119
+ "epoch": 1.2526096033402923,
120
+ "grad_norm": 5.3783135414123535,
121
+ "learning_rate": 2.516778523489933e-05,
122
+ "loss": 0.3416,
123
+ "step": 150
124
+ },
125
+ {
126
+ "epoch": 1.336116910229645,
127
+ "grad_norm": 7.877007484436035,
128
+ "learning_rate": 2.6845637583892618e-05,
129
+ "loss": 0.3356,
130
+ "step": 160
131
+ },
132
+ {
133
+ "epoch": 1.4196242171189979,
134
+ "grad_norm": 8.07664680480957,
135
+ "learning_rate": 2.8523489932885905e-05,
136
+ "loss": 0.3246,
137
+ "step": 170
138
+ },
139
+ {
140
+ "epoch": 1.5031315240083507,
141
+ "grad_norm": 10.465314865112305,
142
+ "learning_rate": 3.02013422818792e-05,
143
+ "loss": 0.3957,
144
+ "step": 180
145
+ },
146
+ {
147
+ "epoch": 1.5866388308977035,
148
+ "grad_norm": 10.284217834472656,
149
+ "learning_rate": 3.1879194630872485e-05,
150
+ "loss": 0.3753,
151
+ "step": 190
152
+ },
153
+ {
154
+ "epoch": 1.6701461377870563,
155
+ "grad_norm": 8.860455513000488,
156
+ "learning_rate": 3.3557046979865775e-05,
157
+ "loss": 0.3678,
158
+ "step": 200
159
+ },
160
+ {
161
+ "epoch": 1.7536534446764092,
162
+ "grad_norm": 6.609280586242676,
163
+ "learning_rate": 3.523489932885906e-05,
164
+ "loss": 0.3449,
165
+ "step": 210
166
+ },
167
+ {
168
+ "epoch": 1.837160751565762,
169
+ "grad_norm": 5.114744186401367,
170
+ "learning_rate": 3.6912751677852356e-05,
171
+ "loss": 0.3281,
172
+ "step": 220
173
+ },
174
+ {
175
+ "epoch": 1.9206680584551148,
176
+ "grad_norm": 5.134997844696045,
177
+ "learning_rate": 3.859060402684564e-05,
178
+ "loss": 0.3364,
179
+ "step": 230
180
+ },
181
+ {
182
+ "epoch": 1.9958246346555324,
183
+ "eval_accuracy": 0.9195233974572322,
184
+ "eval_loss": 0.23547479510307312,
185
+ "eval_runtime": 47.2123,
186
+ "eval_samples_per_second": 318.201,
187
+ "eval_steps_per_second": 4.978,
188
+ "step": 239
189
+ },
190
+ {
191
+ "epoch": 2.0041753653444676,
192
+ "grad_norm": 7.969848155975342,
193
+ "learning_rate": 4.026845637583892e-05,
194
+ "loss": 0.3517,
195
+ "step": 240
196
+ },
197
+ {
198
+ "epoch": 2.0876826722338206,
199
+ "grad_norm": 7.928933143615723,
200
+ "learning_rate": 4.194630872483222e-05,
201
+ "loss": 0.3229,
202
+ "step": 250
203
+ },
204
+ {
205
+ "epoch": 2.1711899791231732,
206
+ "grad_norm": 5.597458839416504,
207
+ "learning_rate": 4.36241610738255e-05,
208
+ "loss": 0.3295,
209
+ "step": 260
210
+ },
211
+ {
212
+ "epoch": 2.2546972860125263,
213
+ "grad_norm": 5.401355266571045,
214
+ "learning_rate": 4.530201342281879e-05,
215
+ "loss": 0.3238,
216
+ "step": 270
217
+ },
218
+ {
219
+ "epoch": 2.338204592901879,
220
+ "grad_norm": 7.695993423461914,
221
+ "learning_rate": 4.697986577181208e-05,
222
+ "loss": 0.3138,
223
+ "step": 280
224
+ },
225
+ {
226
+ "epoch": 2.421711899791232,
227
+ "grad_norm": 8.675772666931152,
228
+ "learning_rate": 4.865771812080537e-05,
229
+ "loss": 0.3461,
230
+ "step": 290
231
+ },
232
+ {
233
+ "epoch": 2.5052192066805845,
234
+ "grad_norm": 9.49290657043457,
235
+ "learning_rate": 4.99626447515876e-05,
236
+ "loss": 0.3524,
237
+ "step": 300
238
+ },
239
+ {
240
+ "epoch": 2.588726513569937,
241
+ "grad_norm": 3.6922707557678223,
242
+ "learning_rate": 4.977586850952559e-05,
243
+ "loss": 0.3145,
244
+ "step": 310
245
+ },
246
+ {
247
+ "epoch": 2.67223382045929,
248
+ "grad_norm": 7.227968215942383,
249
+ "learning_rate": 4.958909226746358e-05,
250
+ "loss": 0.3054,
251
+ "step": 320
252
+ },
253
+ {
254
+ "epoch": 2.755741127348643,
255
+ "grad_norm": 5.04173469543457,
256
+ "learning_rate": 4.940231602540157e-05,
257
+ "loss": 0.3404,
258
+ "step": 330
259
+ },
260
+ {
261
+ "epoch": 2.8392484342379958,
262
+ "grad_norm": 5.110103607177734,
263
+ "learning_rate": 4.9215539783339556e-05,
264
+ "loss": 0.2992,
265
+ "step": 340
266
+ },
267
+ {
268
+ "epoch": 2.9227557411273484,
269
+ "grad_norm": 5.469402313232422,
270
+ "learning_rate": 4.902876354127755e-05,
271
+ "loss": 0.2999,
272
+ "step": 350
273
+ },
274
+ {
275
+ "epoch": 2.997912317327766,
276
+ "eval_accuracy": 0.9169273780203687,
277
+ "eval_loss": 0.23837865889072418,
278
+ "eval_runtime": 46.7529,
279
+ "eval_samples_per_second": 321.327,
280
+ "eval_steps_per_second": 5.026,
281
+ "step": 359
282
+ },
283
+ {
284
+ "epoch": 3.0062630480167014,
285
+ "grad_norm": 5.160531997680664,
286
+ "learning_rate": 4.884198729921554e-05,
287
+ "loss": 0.2911,
288
+ "step": 360
289
+ },
290
+ {
291
+ "epoch": 3.0897703549060545,
292
+ "grad_norm": 5.856198310852051,
293
+ "learning_rate": 4.865521105715353e-05,
294
+ "loss": 0.2959,
295
+ "step": 370
296
+ },
297
+ {
298
+ "epoch": 3.173277661795407,
299
+ "grad_norm": 3.725022792816162,
300
+ "learning_rate": 4.846843481509152e-05,
301
+ "loss": 0.2956,
302
+ "step": 380
303
+ },
304
+ {
305
+ "epoch": 3.25678496868476,
306
+ "grad_norm": 5.262429237365723,
307
+ "learning_rate": 4.828165857302951e-05,
308
+ "loss": 0.3044,
309
+ "step": 390
310
+ },
311
+ {
312
+ "epoch": 3.3402922755741127,
313
+ "grad_norm": 5.595754623413086,
314
+ "learning_rate": 4.80948823309675e-05,
315
+ "loss": 0.3188,
316
+ "step": 400
317
+ },
318
+ {
319
+ "epoch": 3.4237995824634657,
320
+ "grad_norm": 5.292787075042725,
321
+ "learning_rate": 4.790810608890549e-05,
322
+ "loss": 0.3001,
323
+ "step": 410
324
+ },
325
+ {
326
+ "epoch": 3.5073068893528183,
327
+ "grad_norm": 4.534820079803467,
328
+ "learning_rate": 4.772132984684348e-05,
329
+ "loss": 0.3011,
330
+ "step": 420
331
+ },
332
+ {
333
+ "epoch": 3.5908141962421714,
334
+ "grad_norm": 11.022297859191895,
335
+ "learning_rate": 4.753455360478147e-05,
336
+ "loss": 0.2965,
337
+ "step": 430
338
+ },
339
+ {
340
+ "epoch": 3.674321503131524,
341
+ "grad_norm": 5.595705986022949,
342
+ "learning_rate": 4.734777736271946e-05,
343
+ "loss": 0.3172,
344
+ "step": 440
345
+ },
346
+ {
347
+ "epoch": 3.757828810020877,
348
+ "grad_norm": 5.017602443695068,
349
+ "learning_rate": 4.716100112065745e-05,
350
+ "loss": 0.326,
351
+ "step": 450
352
+ },
353
+ {
354
+ "epoch": 3.8413361169102296,
355
+ "grad_norm": 3.5311007499694824,
356
+ "learning_rate": 4.697422487859545e-05,
357
+ "loss": 0.2831,
358
+ "step": 460
359
+ },
360
+ {
361
+ "epoch": 3.9248434237995826,
362
+ "grad_norm": 4.687069892883301,
363
+ "learning_rate": 4.678744863653344e-05,
364
+ "loss": 0.2861,
365
+ "step": 470
366
+ },
367
+ {
368
+ "epoch": 4.0,
369
+ "eval_accuracy": 0.9341010450642349,
370
+ "eval_loss": 0.190222829580307,
371
+ "eval_runtime": 46.9219,
372
+ "eval_samples_per_second": 320.17,
373
+ "eval_steps_per_second": 5.008,
374
+ "step": 479
375
+ },
376
+ {
377
+ "epoch": 4.008350730688935,
378
+ "grad_norm": 4.544097900390625,
379
+ "learning_rate": 4.660067239447142e-05,
380
+ "loss": 0.305,
381
+ "step": 480
382
+ },
383
+ {
384
+ "epoch": 4.091858037578288,
385
+ "grad_norm": 4.896882057189941,
386
+ "learning_rate": 4.641389615240941e-05,
387
+ "loss": 0.2827,
388
+ "step": 490
389
+ },
390
+ {
391
+ "epoch": 4.175365344467641,
392
+ "grad_norm": 3.459932804107666,
393
+ "learning_rate": 4.622711991034741e-05,
394
+ "loss": 0.2754,
395
+ "step": 500
396
+ },
397
+ {
398
+ "epoch": 4.258872651356993,
399
+ "grad_norm": 4.049550533294678,
400
+ "learning_rate": 4.60403436682854e-05,
401
+ "loss": 0.3175,
402
+ "step": 510
403
+ },
404
+ {
405
+ "epoch": 4.3423799582463465,
406
+ "grad_norm": 5.197091579437256,
407
+ "learning_rate": 4.585356742622339e-05,
408
+ "loss": 0.2696,
409
+ "step": 520
410
+ },
411
+ {
412
+ "epoch": 4.4258872651356995,
413
+ "grad_norm": 3.382556915283203,
414
+ "learning_rate": 4.566679118416138e-05,
415
+ "loss": 0.2625,
416
+ "step": 530
417
+ },
418
+ {
419
+ "epoch": 4.509394572025053,
420
+ "grad_norm": 5.480315208435059,
421
+ "learning_rate": 4.548001494209937e-05,
422
+ "loss": 0.3011,
423
+ "step": 540
424
+ },
425
+ {
426
+ "epoch": 4.592901878914405,
427
+ "grad_norm": 4.315974712371826,
428
+ "learning_rate": 4.529323870003736e-05,
429
+ "loss": 0.2671,
430
+ "step": 550
431
+ },
432
+ {
433
+ "epoch": 4.676409185803758,
434
+ "grad_norm": 4.109740734100342,
435
+ "learning_rate": 4.510646245797535e-05,
436
+ "loss": 0.2873,
437
+ "step": 560
438
+ },
439
+ {
440
+ "epoch": 4.759916492693111,
441
+ "grad_norm": 4.254931449890137,
442
+ "learning_rate": 4.491968621591334e-05,
443
+ "loss": 0.2989,
444
+ "step": 570
445
+ },
446
+ {
447
+ "epoch": 4.843423799582464,
448
+ "grad_norm": 2.6343159675598145,
449
+ "learning_rate": 4.473290997385133e-05,
450
+ "loss": 0.2851,
451
+ "step": 580
452
+ },
453
+ {
454
+ "epoch": 4.926931106471816,
455
+ "grad_norm": 4.918463230133057,
456
+ "learning_rate": 4.454613373178932e-05,
457
+ "loss": 0.3014,
458
+ "step": 590
459
+ },
460
+ {
461
+ "epoch": 4.993736951983299,
462
+ "eval_accuracy": 0.9289755707914531,
463
+ "eval_loss": 0.21536433696746826,
464
+ "eval_runtime": 46.4537,
465
+ "eval_samples_per_second": 323.397,
466
+ "eval_steps_per_second": 5.059,
467
+ "step": 598
468
+ },
469
+ {
470
+ "epoch": 5.010438413361169,
471
+ "grad_norm": 4.048580646514893,
472
+ "learning_rate": 4.435935748972731e-05,
473
+ "loss": 0.3089,
474
+ "step": 600
475
+ },
476
+ {
477
+ "epoch": 5.093945720250522,
478
+ "grad_norm": 2.6534504890441895,
479
+ "learning_rate": 4.4172581247665304e-05,
480
+ "loss": 0.2496,
481
+ "step": 610
482
+ },
483
+ {
484
+ "epoch": 5.177453027139875,
485
+ "grad_norm": 5.901323318481445,
486
+ "learning_rate": 4.398580500560329e-05,
487
+ "loss": 0.3034,
488
+ "step": 620
489
+ },
490
+ {
491
+ "epoch": 5.260960334029227,
492
+ "grad_norm": 4.444337844848633,
493
+ "learning_rate": 4.379902876354128e-05,
494
+ "loss": 0.2669,
495
+ "step": 630
496
+ },
497
+ {
498
+ "epoch": 5.34446764091858,
499
+ "grad_norm": 6.911639213562012,
500
+ "learning_rate": 4.361225252147927e-05,
501
+ "loss": 0.26,
502
+ "step": 640
503
+ },
504
+ {
505
+ "epoch": 5.427974947807933,
506
+ "grad_norm": 4.877162933349609,
507
+ "learning_rate": 4.342547627941726e-05,
508
+ "loss": 0.2732,
509
+ "step": 650
510
+ },
511
+ {
512
+ "epoch": 5.511482254697286,
513
+ "grad_norm": 3.9637715816497803,
514
+ "learning_rate": 4.3238700037355254e-05,
515
+ "loss": 0.2957,
516
+ "step": 660
517
+ },
518
+ {
519
+ "epoch": 5.5949895615866385,
520
+ "grad_norm": 4.107086181640625,
521
+ "learning_rate": 4.3051923795293244e-05,
522
+ "loss": 0.2746,
523
+ "step": 670
524
+ },
525
+ {
526
+ "epoch": 5.6784968684759916,
527
+ "grad_norm": 4.021276950836182,
528
+ "learning_rate": 4.2865147553231234e-05,
529
+ "loss": 0.2755,
530
+ "step": 680
531
+ },
532
+ {
533
+ "epoch": 5.762004175365345,
534
+ "grad_norm": 3.1536874771118164,
535
+ "learning_rate": 4.2678371311169224e-05,
536
+ "loss": 0.2781,
537
+ "step": 690
538
+ },
539
+ {
540
+ "epoch": 5.845511482254698,
541
+ "grad_norm": 4.305199146270752,
542
+ "learning_rate": 4.249159506910721e-05,
543
+ "loss": 0.2536,
544
+ "step": 700
545
+ },
546
+ {
547
+ "epoch": 5.92901878914405,
548
+ "grad_norm": 3.679614305496216,
549
+ "learning_rate": 4.2304818827045204e-05,
550
+ "loss": 0.292,
551
+ "step": 710
552
+ },
553
+ {
554
+ "epoch": 5.995824634655532,
555
+ "eval_accuracy": 0.9382946149237835,
556
+ "eval_loss": 0.176395446062088,
557
+ "eval_runtime": 47.1844,
558
+ "eval_samples_per_second": 318.389,
559
+ "eval_steps_per_second": 4.98,
560
+ "step": 718
561
+ },
562
+ {
563
+ "epoch": 6.012526096033403,
564
+ "grad_norm": 4.32383394241333,
565
+ "learning_rate": 4.2118042584983194e-05,
566
+ "loss": 0.2623,
567
+ "step": 720
568
+ },
569
+ {
570
+ "epoch": 6.096033402922756,
571
+ "grad_norm": 3.5262672901153564,
572
+ "learning_rate": 4.1931266342921183e-05,
573
+ "loss": 0.2638,
574
+ "step": 730
575
+ },
576
+ {
577
+ "epoch": 6.179540709812109,
578
+ "grad_norm": 4.600202560424805,
579
+ "learning_rate": 4.1744490100859173e-05,
580
+ "loss": 0.2747,
581
+ "step": 740
582
+ },
583
+ {
584
+ "epoch": 6.263048016701461,
585
+ "grad_norm": 3.81154465675354,
586
+ "learning_rate": 4.155771385879716e-05,
587
+ "loss": 0.2528,
588
+ "step": 750
589
+ },
590
+ {
591
+ "epoch": 6.346555323590814,
592
+ "grad_norm": 4.463883399963379,
593
+ "learning_rate": 4.137093761673515e-05,
594
+ "loss": 0.2588,
595
+ "step": 760
596
+ },
597
+ {
598
+ "epoch": 6.430062630480167,
599
+ "grad_norm": 3.8353664875030518,
600
+ "learning_rate": 4.118416137467314e-05,
601
+ "loss": 0.2618,
602
+ "step": 770
603
+ },
604
+ {
605
+ "epoch": 6.51356993736952,
606
+ "grad_norm": 3.2617974281311035,
607
+ "learning_rate": 4.099738513261113e-05,
608
+ "loss": 0.2581,
609
+ "step": 780
610
+ },
611
+ {
612
+ "epoch": 6.597077244258872,
613
+ "grad_norm": 6.762134075164795,
614
+ "learning_rate": 4.081060889054912e-05,
615
+ "loss": 0.288,
616
+ "step": 790
617
+ },
618
+ {
619
+ "epoch": 6.680584551148225,
620
+ "grad_norm": 3.397378444671631,
621
+ "learning_rate": 4.062383264848711e-05,
622
+ "loss": 0.2972,
623
+ "step": 800
624
+ },
625
+ {
626
+ "epoch": 6.764091858037578,
627
+ "grad_norm": 3.1325230598449707,
628
+ "learning_rate": 4.04370564064251e-05,
629
+ "loss": 0.2783,
630
+ "step": 810
631
+ },
632
+ {
633
+ "epoch": 6.847599164926931,
634
+ "grad_norm": 4.460423946380615,
635
+ "learning_rate": 4.02502801643631e-05,
636
+ "loss": 0.2532,
637
+ "step": 820
638
+ },
639
+ {
640
+ "epoch": 6.931106471816284,
641
+ "grad_norm": 4.6303839683532715,
642
+ "learning_rate": 4.006350392230109e-05,
643
+ "loss": 0.2441,
644
+ "step": 830
645
+ },
646
+ {
647
+ "epoch": 6.997912317327766,
648
+ "eval_accuracy": 0.9348332556746323,
649
+ "eval_loss": 0.1894187480211258,
650
+ "eval_runtime": 47.4308,
651
+ "eval_samples_per_second": 316.735,
652
+ "eval_steps_per_second": 4.955,
653
+ "step": 838
654
+ },
655
+ {
656
+ "epoch": 7.014613778705637,
657
+ "grad_norm": 3.667642593383789,
658
+ "learning_rate": 3.987672768023907e-05,
659
+ "loss": 0.2548,
660
+ "step": 840
661
+ },
662
+ {
663
+ "epoch": 7.09812108559499,
664
+ "grad_norm": 3.9511961936950684,
665
+ "learning_rate": 3.968995143817706e-05,
666
+ "loss": 0.2458,
667
+ "step": 850
668
+ },
669
+ {
670
+ "epoch": 7.181628392484343,
671
+ "grad_norm": 4.884164810180664,
672
+ "learning_rate": 3.950317519611505e-05,
673
+ "loss": 0.2311,
674
+ "step": 860
675
+ },
676
+ {
677
+ "epoch": 7.265135699373695,
678
+ "grad_norm": 5.010523796081543,
679
+ "learning_rate": 3.931639895405305e-05,
680
+ "loss": 0.2719,
681
+ "step": 870
682
+ },
683
+ {
684
+ "epoch": 7.348643006263048,
685
+ "grad_norm": 4.1294636726379395,
686
+ "learning_rate": 3.912962271199104e-05,
687
+ "loss": 0.2674,
688
+ "step": 880
689
+ },
690
+ {
691
+ "epoch": 7.432150313152401,
692
+ "grad_norm": 4.736259460449219,
693
+ "learning_rate": 3.894284646992903e-05,
694
+ "loss": 0.2591,
695
+ "step": 890
696
+ },
697
+ {
698
+ "epoch": 7.515657620041754,
699
+ "grad_norm": 4.864129066467285,
700
+ "learning_rate": 3.875607022786702e-05,
701
+ "loss": 0.2645,
702
+ "step": 900
703
+ },
704
+ {
705
+ "epoch": 7.599164926931106,
706
+ "grad_norm": 3.61091685295105,
707
+ "learning_rate": 3.8569293985805e-05,
708
+ "loss": 0.2505,
709
+ "step": 910
710
+ },
711
+ {
712
+ "epoch": 7.682672233820459,
713
+ "grad_norm": 4.3915863037109375,
714
+ "learning_rate": 3.8382517743743e-05,
715
+ "loss": 0.2629,
716
+ "step": 920
717
+ },
718
+ {
719
+ "epoch": 7.766179540709812,
720
+ "grad_norm": 4.1510419845581055,
721
+ "learning_rate": 3.819574150168099e-05,
722
+ "loss": 0.2463,
723
+ "step": 930
724
+ },
725
+ {
726
+ "epoch": 7.849686847599165,
727
+ "grad_norm": 3.3655436038970947,
728
+ "learning_rate": 3.800896525961898e-05,
729
+ "loss": 0.2588,
730
+ "step": 940
731
+ },
732
+ {
733
+ "epoch": 7.933194154488517,
734
+ "grad_norm": 3.585494041442871,
735
+ "learning_rate": 3.782218901755697e-05,
736
+ "loss": 0.2416,
737
+ "step": 950
738
+ },
739
+ {
740
+ "epoch": 8.0,
741
+ "eval_accuracy": 0.9348998202755775,
742
+ "eval_loss": 0.19133438169956207,
743
+ "eval_runtime": 47.4532,
744
+ "eval_samples_per_second": 316.585,
745
+ "eval_steps_per_second": 4.952,
746
+ "step": 958
747
+ },
748
+ {
749
+ "epoch": 8.01670146137787,
750
+ "grad_norm": 4.570364475250244,
751
+ "learning_rate": 3.763541277549496e-05,
752
+ "loss": 0.2512,
753
+ "step": 960
754
+ },
755
+ {
756
+ "epoch": 8.100208768267223,
757
+ "grad_norm": 3.438917398452759,
758
+ "learning_rate": 3.744863653343295e-05,
759
+ "loss": 0.2376,
760
+ "step": 970
761
+ },
762
+ {
763
+ "epoch": 8.183716075156577,
764
+ "grad_norm": 6.080543518066406,
765
+ "learning_rate": 3.726186029137094e-05,
766
+ "loss": 0.2552,
767
+ "step": 980
768
+ },
769
+ {
770
+ "epoch": 8.267223382045929,
771
+ "grad_norm": 4.013462543487549,
772
+ "learning_rate": 3.707508404930893e-05,
773
+ "loss": 0.2494,
774
+ "step": 990
775
+ },
776
+ {
777
+ "epoch": 8.350730688935283,
778
+ "grad_norm": 3.6214938163757324,
779
+ "learning_rate": 3.688830780724692e-05,
780
+ "loss": 0.2411,
781
+ "step": 1000
782
+ },
783
+ {
784
+ "epoch": 8.434237995824635,
785
+ "grad_norm": 3.5283772945404053,
786
+ "learning_rate": 3.670153156518491e-05,
787
+ "loss": 0.2266,
788
+ "step": 1010
789
+ },
790
+ {
791
+ "epoch": 8.517745302713987,
792
+ "grad_norm": 6.508725166320801,
793
+ "learning_rate": 3.65147553231229e-05,
794
+ "loss": 0.2463,
795
+ "step": 1020
796
+ },
797
+ {
798
+ "epoch": 8.60125260960334,
799
+ "grad_norm": 3.3739099502563477,
800
+ "learning_rate": 3.6327979081060895e-05,
801
+ "loss": 0.2359,
802
+ "step": 1030
803
+ },
804
+ {
805
+ "epoch": 8.684759916492693,
806
+ "grad_norm": 3.321066379547119,
807
+ "learning_rate": 3.6141202838998885e-05,
808
+ "loss": 0.2445,
809
+ "step": 1040
810
+ },
811
+ {
812
+ "epoch": 8.768267223382045,
813
+ "grad_norm": 3.418613910675049,
814
+ "learning_rate": 3.595442659693687e-05,
815
+ "loss": 0.2588,
816
+ "step": 1050
817
+ },
818
+ {
819
+ "epoch": 8.851774530271399,
820
+ "grad_norm": 3.841691255569458,
821
+ "learning_rate": 3.576765035487486e-05,
822
+ "loss": 0.2431,
823
+ "step": 1060
824
+ },
825
+ {
826
+ "epoch": 8.935281837160751,
827
+ "grad_norm": 4.421708583831787,
828
+ "learning_rate": 3.558087411281285e-05,
829
+ "loss": 0.2642,
830
+ "step": 1070
831
+ },
832
+ {
833
+ "epoch": 8.993736951983298,
834
+ "eval_accuracy": 0.9384943087266192,
835
+ "eval_loss": 0.17376160621643066,
836
+ "eval_runtime": 47.0041,
837
+ "eval_samples_per_second": 319.611,
838
+ "eval_steps_per_second": 5.0,
839
+ "step": 1077
840
+ },
841
+ {
842
+ "epoch": 9.018789144050105,
843
+ "grad_norm": 3.4606330394744873,
844
+ "learning_rate": 3.5394097870750844e-05,
845
+ "loss": 0.2577,
846
+ "step": 1080
847
+ },
848
+ {
849
+ "epoch": 9.102296450939457,
850
+ "grad_norm": 2.89786434173584,
851
+ "learning_rate": 3.5207321628688834e-05,
852
+ "loss": 0.2562,
853
+ "step": 1090
854
+ },
855
+ {
856
+ "epoch": 9.18580375782881,
857
+ "grad_norm": 4.433104515075684,
858
+ "learning_rate": 3.5020545386626824e-05,
859
+ "loss": 0.2368,
860
+ "step": 1100
861
+ },
862
+ {
863
+ "epoch": 9.269311064718163,
864
+ "grad_norm": 3.6675984859466553,
865
+ "learning_rate": 3.4833769144564814e-05,
866
+ "loss": 0.2429,
867
+ "step": 1110
868
+ },
869
+ {
870
+ "epoch": 9.352818371607516,
871
+ "grad_norm": 3.072791814804077,
872
+ "learning_rate": 3.46469929025028e-05,
873
+ "loss": 0.2117,
874
+ "step": 1120
875
+ },
876
+ {
877
+ "epoch": 9.436325678496868,
878
+ "grad_norm": 3.7456765174865723,
879
+ "learning_rate": 3.4460216660440794e-05,
880
+ "loss": 0.2368,
881
+ "step": 1130
882
+ },
883
+ {
884
+ "epoch": 9.519832985386222,
885
+ "grad_norm": 5.997681140899658,
886
+ "learning_rate": 3.4273440418378784e-05,
887
+ "loss": 0.2513,
888
+ "step": 1140
889
+ },
890
+ {
891
+ "epoch": 9.603340292275574,
892
+ "grad_norm": 4.1110124588012695,
893
+ "learning_rate": 3.4086664176316774e-05,
894
+ "loss": 0.2602,
895
+ "step": 1150
896
+ },
897
+ {
898
+ "epoch": 9.686847599164928,
899
+ "grad_norm": 3.402402639389038,
900
+ "learning_rate": 3.3899887934254764e-05,
901
+ "loss": 0.2323,
902
+ "step": 1160
903
+ },
904
+ {
905
+ "epoch": 9.77035490605428,
906
+ "grad_norm": 3.7895450592041016,
907
+ "learning_rate": 3.3713111692192754e-05,
908
+ "loss": 0.2339,
909
+ "step": 1170
910
+ },
911
+ {
912
+ "epoch": 9.853862212943632,
913
+ "grad_norm": 2.2243807315826416,
914
+ "learning_rate": 3.3526335450130744e-05,
915
+ "loss": 0.2496,
916
+ "step": 1180
917
+ },
918
+ {
919
+ "epoch": 9.937369519832986,
920
+ "grad_norm": 4.900872707366943,
921
+ "learning_rate": 3.333955920806874e-05,
922
+ "loss": 0.2482,
923
+ "step": 1190
924
+ },
925
+ {
926
+ "epoch": 9.995824634655532,
927
+ "eval_accuracy": 0.9370964521067696,
928
+ "eval_loss": 0.19109387695789337,
929
+ "eval_runtime": 47.2977,
930
+ "eval_samples_per_second": 317.626,
931
+ "eval_steps_per_second": 4.969,
932
+ "step": 1197
933
+ },
934
+ {
935
+ "epoch": 10.020876826722338,
936
+ "grad_norm": 5.6255998611450195,
937
+ "learning_rate": 3.3152782966006724e-05,
938
+ "loss": 0.2399,
939
+ "step": 1200
940
+ },
941
+ {
942
+ "epoch": 10.10438413361169,
943
+ "grad_norm": 3.8569297790527344,
944
+ "learning_rate": 3.2966006723944714e-05,
945
+ "loss": 0.2342,
946
+ "step": 1210
947
+ },
948
+ {
949
+ "epoch": 10.187891440501044,
950
+ "grad_norm": 3.362445831298828,
951
+ "learning_rate": 3.2779230481882703e-05,
952
+ "loss": 0.2104,
953
+ "step": 1220
954
+ },
955
+ {
956
+ "epoch": 10.271398747390396,
957
+ "grad_norm": 3.918388605117798,
958
+ "learning_rate": 3.2592454239820693e-05,
959
+ "loss": 0.2319,
960
+ "step": 1230
961
+ },
962
+ {
963
+ "epoch": 10.35490605427975,
964
+ "grad_norm": 4.792023658752441,
965
+ "learning_rate": 3.240567799775869e-05,
966
+ "loss": 0.2151,
967
+ "step": 1240
968
+ },
969
+ {
970
+ "epoch": 10.438413361169102,
971
+ "grad_norm": 3.7443833351135254,
972
+ "learning_rate": 3.221890175569668e-05,
973
+ "loss": 0.2096,
974
+ "step": 1250
975
+ },
976
+ {
977
+ "epoch": 10.521920668058454,
978
+ "grad_norm": 3.4863789081573486,
979
+ "learning_rate": 3.203212551363467e-05,
980
+ "loss": 0.2439,
981
+ "step": 1260
982
+ },
983
+ {
984
+ "epoch": 10.605427974947808,
985
+ "grad_norm": 3.4557056427001953,
986
+ "learning_rate": 3.184534927157265e-05,
987
+ "loss": 0.2258,
988
+ "step": 1270
989
+ },
990
+ {
991
+ "epoch": 10.68893528183716,
992
+ "grad_norm": 3.1916327476501465,
993
+ "learning_rate": 3.165857302951064e-05,
994
+ "loss": 0.2314,
995
+ "step": 1280
996
+ },
997
+ {
998
+ "epoch": 10.772442588726513,
999
+ "grad_norm": 3.4237937927246094,
1000
+ "learning_rate": 3.147179678744864e-05,
1001
+ "loss": 0.2217,
1002
+ "step": 1290
1003
+ },
1004
+ {
1005
+ "epoch": 10.855949895615867,
1006
+ "grad_norm": 4.48100471496582,
1007
+ "learning_rate": 3.128502054538663e-05,
1008
+ "loss": 0.2268,
1009
+ "step": 1300
1010
+ },
1011
+ {
1012
+ "epoch": 10.939457202505219,
1013
+ "grad_norm": 4.131446361541748,
1014
+ "learning_rate": 3.109824430332462e-05,
1015
+ "loss": 0.2279,
1016
+ "step": 1310
1017
+ },
1018
+ {
1019
+ "epoch": 10.997912317327767,
1020
+ "eval_accuracy": 0.9380949211209478,
1021
+ "eval_loss": 0.1867293417453766,
1022
+ "eval_runtime": 47.3757,
1023
+ "eval_samples_per_second": 317.104,
1024
+ "eval_steps_per_second": 4.96,
1025
+ "step": 1317
1026
+ },
1027
+ {
1028
+ "epoch": 11.022964509394573,
1029
+ "grad_norm": 3.454545736312866,
1030
+ "learning_rate": 3.091146806126261e-05,
1031
+ "loss": 0.2134,
1032
+ "step": 1320
1033
+ },
1034
+ {
1035
+ "epoch": 11.106471816283925,
1036
+ "grad_norm": 5.787585258483887,
1037
+ "learning_rate": 3.07246918192006e-05,
1038
+ "loss": 0.2072,
1039
+ "step": 1330
1040
+ },
1041
+ {
1042
+ "epoch": 11.189979123173277,
1043
+ "grad_norm": 3.804635763168335,
1044
+ "learning_rate": 3.053791557713859e-05,
1045
+ "loss": 0.2113,
1046
+ "step": 1340
1047
+ },
1048
+ {
1049
+ "epoch": 11.273486430062631,
1050
+ "grad_norm": 2.5503077507019043,
1051
+ "learning_rate": 3.035113933507658e-05,
1052
+ "loss": 0.2299,
1053
+ "step": 1350
1054
+ },
1055
+ {
1056
+ "epoch": 11.356993736951983,
1057
+ "grad_norm": 3.30344295501709,
1058
+ "learning_rate": 3.016436309301457e-05,
1059
+ "loss": 0.2037,
1060
+ "step": 1360
1061
+ },
1062
+ {
1063
+ "epoch": 11.440501043841335,
1064
+ "grad_norm": 3.5026683807373047,
1065
+ "learning_rate": 2.997758685095256e-05,
1066
+ "loss": 0.2242,
1067
+ "step": 1370
1068
+ },
1069
+ {
1070
+ "epoch": 11.52400835073069,
1071
+ "grad_norm": 4.991878032684326,
1072
+ "learning_rate": 2.9790810608890552e-05,
1073
+ "loss": 0.2234,
1074
+ "step": 1380
1075
+ },
1076
+ {
1077
+ "epoch": 11.607515657620041,
1078
+ "grad_norm": 5.6365437507629395,
1079
+ "learning_rate": 2.9604034366828542e-05,
1080
+ "loss": 0.2238,
1081
+ "step": 1390
1082
+ },
1083
+ {
1084
+ "epoch": 11.691022964509395,
1085
+ "grad_norm": 3.200317144393921,
1086
+ "learning_rate": 2.9417258124766532e-05,
1087
+ "loss": 0.213,
1088
+ "step": 1400
1089
+ },
1090
+ {
1091
+ "epoch": 11.774530271398747,
1092
+ "grad_norm": 4.857986927032471,
1093
+ "learning_rate": 2.923048188270452e-05,
1094
+ "loss": 0.2279,
1095
+ "step": 1410
1096
+ },
1097
+ {
1098
+ "epoch": 11.8580375782881,
1099
+ "grad_norm": 3.4726953506469727,
1100
+ "learning_rate": 2.904370564064251e-05,
1101
+ "loss": 0.2346,
1102
+ "step": 1420
1103
+ },
1104
+ {
1105
+ "epoch": 11.941544885177453,
1106
+ "grad_norm": 4.036787986755371,
1107
+ "learning_rate": 2.8856929398580502e-05,
1108
+ "loss": 0.2331,
1109
+ "step": 1430
1110
+ },
1111
+ {
1112
+ "epoch": 12.0,
1113
+ "eval_accuracy": 0.9388936963322905,
1114
+ "eval_loss": 0.1814269721508026,
1115
+ "eval_runtime": 46.6546,
1116
+ "eval_samples_per_second": 322.004,
1117
+ "eval_steps_per_second": 5.037,
1118
+ "step": 1437
1119
+ },
1120
+ {
1121
+ "epoch": 12.025052192066806,
1122
+ "grad_norm": 6.564273834228516,
1123
+ "learning_rate": 2.8670153156518492e-05,
1124
+ "loss": 0.2032,
1125
+ "step": 1440
1126
+ },
1127
+ {
1128
+ "epoch": 12.108559498956158,
1129
+ "grad_norm": 2.8712363243103027,
1130
+ "learning_rate": 2.8483376914456482e-05,
1131
+ "loss": 0.2101,
1132
+ "step": 1450
1133
+ },
1134
+ {
1135
+ "epoch": 12.192066805845512,
1136
+ "grad_norm": 4.087766647338867,
1137
+ "learning_rate": 2.8296600672394475e-05,
1138
+ "loss": 0.2016,
1139
+ "step": 1460
1140
+ },
1141
+ {
1142
+ "epoch": 12.275574112734864,
1143
+ "grad_norm": 4.2068867683410645,
1144
+ "learning_rate": 2.8109824430332465e-05,
1145
+ "loss": 0.2072,
1146
+ "step": 1470
1147
+ },
1148
+ {
1149
+ "epoch": 12.359081419624218,
1150
+ "grad_norm": 2.1417343616485596,
1151
+ "learning_rate": 2.7923048188270452e-05,
1152
+ "loss": 0.2167,
1153
+ "step": 1480
1154
+ },
1155
+ {
1156
+ "epoch": 12.44258872651357,
1157
+ "grad_norm": 4.0305256843566895,
1158
+ "learning_rate": 2.773627194620844e-05,
1159
+ "loss": 0.2179,
1160
+ "step": 1490
1161
+ },
1162
+ {
1163
+ "epoch": 12.526096033402922,
1164
+ "grad_norm": 3.431574821472168,
1165
+ "learning_rate": 2.754949570414643e-05,
1166
+ "loss": 0.2102,
1167
+ "step": 1500
1168
+ },
1169
+ {
1170
+ "epoch": 12.609603340292276,
1171
+ "grad_norm": 3.1864852905273438,
1172
+ "learning_rate": 2.7362719462084425e-05,
1173
+ "loss": 0.21,
1174
+ "step": 1510
1175
+ },
1176
+ {
1177
+ "epoch": 12.693110647181628,
1178
+ "grad_norm": 4.349884033203125,
1179
+ "learning_rate": 2.7175943220022415e-05,
1180
+ "loss": 0.2339,
1181
+ "step": 1520
1182
+ },
1183
+ {
1184
+ "epoch": 12.776617954070982,
1185
+ "grad_norm": 3.7044005393981934,
1186
+ "learning_rate": 2.6989166977960405e-05,
1187
+ "loss": 0.2364,
1188
+ "step": 1530
1189
+ },
1190
+ {
1191
+ "epoch": 12.860125260960334,
1192
+ "grad_norm": 4.139816761016846,
1193
+ "learning_rate": 2.6802390735898398e-05,
1194
+ "loss": 0.1971,
1195
+ "step": 1540
1196
+ },
1197
+ {
1198
+ "epoch": 12.943632567849686,
1199
+ "grad_norm": 3.852142333984375,
1200
+ "learning_rate": 2.661561449383638e-05,
1201
+ "loss": 0.2208,
1202
+ "step": 1550
1203
+ },
1204
+ {
1205
+ "epoch": 12.993736951983298,
1206
+ "eval_accuracy": 0.9400918591493044,
1207
+ "eval_loss": 0.17901407182216644,
1208
+ "eval_runtime": 47.4241,
1209
+ "eval_samples_per_second": 316.78,
1210
+ "eval_steps_per_second": 4.955,
1211
+ "step": 1556
1212
+ },
1213
+ {
1214
+ "epoch": 13.02713987473904,
1215
+ "grad_norm": 3.2813127040863037,
1216
+ "learning_rate": 2.6428838251774375e-05,
1217
+ "loss": 0.1947,
1218
+ "step": 1560
1219
+ },
1220
+ {
1221
+ "epoch": 13.110647181628392,
1222
+ "grad_norm": 3.5853235721588135,
1223
+ "learning_rate": 2.6242062009712364e-05,
1224
+ "loss": 0.2054,
1225
+ "step": 1570
1226
+ },
1227
+ {
1228
+ "epoch": 13.194154488517745,
1229
+ "grad_norm": 5.702735900878906,
1230
+ "learning_rate": 2.6055285767650354e-05,
1231
+ "loss": 0.2423,
1232
+ "step": 1580
1233
+ },
1234
+ {
1235
+ "epoch": 13.277661795407099,
1236
+ "grad_norm": 2.9345734119415283,
1237
+ "learning_rate": 2.5868509525588348e-05,
1238
+ "loss": 0.2051,
1239
+ "step": 1590
1240
+ },
1241
+ {
1242
+ "epoch": 13.36116910229645,
1243
+ "grad_norm": 3.577324390411377,
1244
+ "learning_rate": 2.5681733283526338e-05,
1245
+ "loss": 0.2146,
1246
+ "step": 1600
1247
+ },
1248
+ {
1249
+ "epoch": 13.444676409185803,
1250
+ "grad_norm": 4.566287994384766,
1251
+ "learning_rate": 2.5494957041464328e-05,
1252
+ "loss": 0.1991,
1253
+ "step": 1610
1254
+ },
1255
+ {
1256
+ "epoch": 13.528183716075157,
1257
+ "grad_norm": 3.031125545501709,
1258
+ "learning_rate": 2.5308180799402314e-05,
1259
+ "loss": 0.2154,
1260
+ "step": 1620
1261
+ },
1262
+ {
1263
+ "epoch": 13.611691022964509,
1264
+ "grad_norm": 4.158732891082764,
1265
+ "learning_rate": 2.5121404557340304e-05,
1266
+ "loss": 0.1946,
1267
+ "step": 1630
1268
+ },
1269
+ {
1270
+ "epoch": 13.695198329853863,
1271
+ "grad_norm": 4.315579414367676,
1272
+ "learning_rate": 2.4934628315278297e-05,
1273
+ "loss": 0.1917,
1274
+ "step": 1640
1275
+ },
1276
+ {
1277
+ "epoch": 13.778705636743215,
1278
+ "grad_norm": 4.10720157623291,
1279
+ "learning_rate": 2.4747852073216287e-05,
1280
+ "loss": 0.1939,
1281
+ "step": 1650
1282
+ },
1283
+ {
1284
+ "epoch": 13.862212943632567,
1285
+ "grad_norm": 3.212385892868042,
1286
+ "learning_rate": 2.4561075831154277e-05,
1287
+ "loss": 0.2117,
1288
+ "step": 1660
1289
+ },
1290
+ {
1291
+ "epoch": 13.945720250521921,
1292
+ "grad_norm": 6.1147260665893555,
1293
+ "learning_rate": 2.437429958909227e-05,
1294
+ "loss": 0.2326,
1295
+ "step": 1670
1296
+ },
1297
+ {
1298
+ "epoch": 13.995824634655532,
1299
+ "eval_accuracy": 0.9366304999001531,
1300
+ "eval_loss": 0.19255150854587555,
1301
+ "eval_runtime": 46.9856,
1302
+ "eval_samples_per_second": 319.736,
1303
+ "eval_steps_per_second": 5.002,
1304
+ "step": 1676
1305
+ },
1306
+ {
1307
+ "epoch": 14.029227557411273,
1308
+ "grad_norm": 3.787531852722168,
1309
+ "learning_rate": 2.4187523347030257e-05,
1310
+ "loss": 0.185,
1311
+ "step": 1680
1312
+ },
1313
+ {
1314
+ "epoch": 14.112734864300627,
1315
+ "grad_norm": 3.1980652809143066,
1316
+ "learning_rate": 2.400074710496825e-05,
1317
+ "loss": 0.2099,
1318
+ "step": 1690
1319
+ },
1320
+ {
1321
+ "epoch": 14.19624217118998,
1322
+ "grad_norm": 5.701913356781006,
1323
+ "learning_rate": 2.381397086290624e-05,
1324
+ "loss": 0.1836,
1325
+ "step": 1700
1326
+ },
1327
+ {
1328
+ "epoch": 14.279749478079331,
1329
+ "grad_norm": 3.619966745376587,
1330
+ "learning_rate": 2.362719462084423e-05,
1331
+ "loss": 0.2131,
1332
+ "step": 1710
1333
+ },
1334
+ {
1335
+ "epoch": 14.363256784968685,
1336
+ "grad_norm": 2.987546443939209,
1337
+ "learning_rate": 2.344041837878222e-05,
1338
+ "loss": 0.1942,
1339
+ "step": 1720
1340
+ },
1341
+ {
1342
+ "epoch": 14.446764091858038,
1343
+ "grad_norm": 3.103140115737915,
1344
+ "learning_rate": 2.325364213672021e-05,
1345
+ "loss": 0.1909,
1346
+ "step": 1730
1347
+ },
1348
+ {
1349
+ "epoch": 14.53027139874739,
1350
+ "grad_norm": 3.544508695602417,
1351
+ "learning_rate": 2.3066865894658203e-05,
1352
+ "loss": 0.2042,
1353
+ "step": 1740
1354
+ },
1355
+ {
1356
+ "epoch": 14.613778705636744,
1357
+ "grad_norm": 4.177450656890869,
1358
+ "learning_rate": 2.288008965259619e-05,
1359
+ "loss": 0.1984,
1360
+ "step": 1750
1361
+ },
1362
+ {
1363
+ "epoch": 14.697286012526096,
1364
+ "grad_norm": 3.7388668060302734,
1365
+ "learning_rate": 2.269331341053418e-05,
1366
+ "loss": 0.2138,
1367
+ "step": 1760
1368
+ },
1369
+ {
1370
+ "epoch": 14.780793319415448,
1371
+ "grad_norm": 2.8315608501434326,
1372
+ "learning_rate": 2.2506537168472173e-05,
1373
+ "loss": 0.2081,
1374
+ "step": 1770
1375
+ },
1376
+ {
1377
+ "epoch": 14.864300626304802,
1378
+ "grad_norm": 3.59485125541687,
1379
+ "learning_rate": 2.231976092641016e-05,
1380
+ "loss": 0.1898,
1381
+ "step": 1780
1382
+ },
1383
+ {
1384
+ "epoch": 14.947807933194154,
1385
+ "grad_norm": 4.520532608032227,
1386
+ "learning_rate": 2.2132984684348153e-05,
1387
+ "loss": 0.1899,
1388
+ "step": 1790
1389
+ },
1390
+ {
1391
+ "epoch": 14.997912317327767,
1392
+ "eval_accuracy": 0.9371630167077148,
1393
+ "eval_loss": 0.19751960039138794,
1394
+ "eval_runtime": 47.3817,
1395
+ "eval_samples_per_second": 317.063,
1396
+ "eval_steps_per_second": 4.96,
1397
+ "step": 1796
1398
+ },
1399
+ {
1400
+ "epoch": 15.031315240083508,
1401
+ "grad_norm": 4.287206172943115,
1402
+ "learning_rate": 2.1946208442286143e-05,
1403
+ "loss": 0.1856,
1404
+ "step": 1800
1405
+ },
1406
+ {
1407
+ "epoch": 15.11482254697286,
1408
+ "grad_norm": 3.0820515155792236,
1409
+ "learning_rate": 2.1759432200224133e-05,
1410
+ "loss": 0.1809,
1411
+ "step": 1810
1412
+ },
1413
+ {
1414
+ "epoch": 15.198329853862212,
1415
+ "grad_norm": 4.431981086730957,
1416
+ "learning_rate": 2.1572655958162123e-05,
1417
+ "loss": 0.1904,
1418
+ "step": 1820
1419
+ },
1420
+ {
1421
+ "epoch": 15.281837160751566,
1422
+ "grad_norm": 4.667430400848389,
1423
+ "learning_rate": 2.1385879716100113e-05,
1424
+ "loss": 0.1854,
1425
+ "step": 1830
1426
+ },
1427
+ {
1428
+ "epoch": 15.365344467640918,
1429
+ "grad_norm": 3.1775588989257812,
1430
+ "learning_rate": 2.1199103474038103e-05,
1431
+ "loss": 0.206,
1432
+ "step": 1840
1433
+ },
1434
+ {
1435
+ "epoch": 15.448851774530272,
1436
+ "grad_norm": 4.10621976852417,
1437
+ "learning_rate": 2.1012327231976096e-05,
1438
+ "loss": 0.1997,
1439
+ "step": 1850
1440
+ },
1441
+ {
1442
+ "epoch": 15.532359081419624,
1443
+ "grad_norm": 3.4218993186950684,
1444
+ "learning_rate": 2.0825550989914083e-05,
1445
+ "loss": 0.1791,
1446
+ "step": 1860
1447
+ },
1448
+ {
1449
+ "epoch": 15.615866388308977,
1450
+ "grad_norm": 3.493351936340332,
1451
+ "learning_rate": 2.0638774747852076e-05,
1452
+ "loss": 0.1951,
1453
+ "step": 1870
1454
+ },
1455
+ {
1456
+ "epoch": 15.69937369519833,
1457
+ "grad_norm": 4.011974811553955,
1458
+ "learning_rate": 2.0451998505790066e-05,
1459
+ "loss": 0.1871,
1460
+ "step": 1880
1461
+ },
1462
+ {
1463
+ "epoch": 15.782881002087683,
1464
+ "grad_norm": 5.0797576904296875,
1465
+ "learning_rate": 2.0265222263728052e-05,
1466
+ "loss": 0.197,
1467
+ "step": 1890
1468
+ },
1469
+ {
1470
+ "epoch": 15.866388308977035,
1471
+ "grad_norm": 4.040988922119141,
1472
+ "learning_rate": 2.0078446021666046e-05,
1473
+ "loss": 0.1797,
1474
+ "step": 1900
1475
+ },
1476
+ {
1477
+ "epoch": 15.949895615866389,
1478
+ "grad_norm": 3.886463165283203,
1479
+ "learning_rate": 1.9891669779604036e-05,
1480
+ "loss": 0.1822,
1481
+ "step": 1910
1482
+ },
1483
+ {
1484
+ "epoch": 16.0,
1485
+ "eval_accuracy": 0.9351660786793583,
1486
+ "eval_loss": 0.20523545145988464,
1487
+ "eval_runtime": 46.5997,
1488
+ "eval_samples_per_second": 322.384,
1489
+ "eval_steps_per_second": 5.043,
1490
+ "step": 1916
1491
+ },
1492
+ {
1493
+ "epoch": 16.03340292275574,
1494
+ "grad_norm": 3.4292867183685303,
1495
+ "learning_rate": 1.9704893537542025e-05,
1496
+ "loss": 0.1948,
1497
+ "step": 1920
1498
+ },
1499
+ {
1500
+ "epoch": 16.116910229645093,
1501
+ "grad_norm": 4.07982063293457,
1502
+ "learning_rate": 1.9518117295480015e-05,
1503
+ "loss": 0.2022,
1504
+ "step": 1930
1505
+ },
1506
+ {
1507
+ "epoch": 16.200417536534445,
1508
+ "grad_norm": 3.2541186809539795,
1509
+ "learning_rate": 1.9331341053418005e-05,
1510
+ "loss": 0.1607,
1511
+ "step": 1940
1512
+ },
1513
+ {
1514
+ "epoch": 16.2839248434238,
1515
+ "grad_norm": 4.076241493225098,
1516
+ "learning_rate": 1.9144564811356e-05,
1517
+ "loss": 0.1579,
1518
+ "step": 1950
1519
+ },
1520
+ {
1521
+ "epoch": 16.367432150313153,
1522
+ "grad_norm": 3.7536261081695557,
1523
+ "learning_rate": 1.8957788569293985e-05,
1524
+ "loss": 0.1899,
1525
+ "step": 1960
1526
+ },
1527
+ {
1528
+ "epoch": 16.450939457202505,
1529
+ "grad_norm": 3.6366031169891357,
1530
+ "learning_rate": 1.8771012327231975e-05,
1531
+ "loss": 0.1818,
1532
+ "step": 1970
1533
+ },
1534
+ {
1535
+ "epoch": 16.534446764091857,
1536
+ "grad_norm": 3.176820755004883,
1537
+ "learning_rate": 1.858423608516997e-05,
1538
+ "loss": 0.1808,
1539
+ "step": 1980
1540
+ },
1541
+ {
1542
+ "epoch": 16.61795407098121,
1543
+ "grad_norm": 2.9712882041931152,
1544
+ "learning_rate": 1.839745984310796e-05,
1545
+ "loss": 0.1535,
1546
+ "step": 1990
1547
+ },
1548
+ {
1549
+ "epoch": 16.701461377870565,
1550
+ "grad_norm": 5.626895904541016,
1551
+ "learning_rate": 1.8210683601045948e-05,
1552
+ "loss": 0.1861,
1553
+ "step": 2000
1554
+ },
1555
+ {
1556
+ "epoch": 16.784968684759917,
1557
+ "grad_norm": 3.897157907485962,
1558
+ "learning_rate": 1.8023907358983938e-05,
1559
+ "loss": 0.1849,
1560
+ "step": 2010
1561
+ },
1562
+ {
1563
+ "epoch": 16.86847599164927,
1564
+ "grad_norm": 3.6972334384918213,
1565
+ "learning_rate": 1.7837131116921928e-05,
1566
+ "loss": 0.1562,
1567
+ "step": 2020
1568
+ },
1569
+ {
1570
+ "epoch": 16.95198329853862,
1571
+ "grad_norm": 6.524864673614502,
1572
+ "learning_rate": 1.7650354874859918e-05,
1573
+ "loss": 0.1837,
1574
+ "step": 2030
1575
+ },
1576
+ {
1577
+ "epoch": 16.993736951983298,
1578
+ "eval_accuracy": 0.9363642414963722,
1579
+ "eval_loss": 0.2078283280134201,
1580
+ "eval_runtime": 47.5905,
1581
+ "eval_samples_per_second": 315.672,
1582
+ "eval_steps_per_second": 4.938,
1583
+ "step": 2035
1584
+ },
1585
+ {
1586
+ "epoch": 17.035490605427974,
1587
+ "grad_norm": 4.618429183959961,
1588
+ "learning_rate": 1.7463578632797908e-05,
1589
+ "loss": 0.2028,
1590
+ "step": 2040
1591
+ },
1592
+ {
1593
+ "epoch": 17.11899791231733,
1594
+ "grad_norm": 4.595980167388916,
1595
+ "learning_rate": 1.7276802390735898e-05,
1596
+ "loss": 0.1687,
1597
+ "step": 2050
1598
+ },
1599
+ {
1600
+ "epoch": 17.20250521920668,
1601
+ "grad_norm": 2.575201988220215,
1602
+ "learning_rate": 1.709002614867389e-05,
1603
+ "loss": 0.1714,
1604
+ "step": 2060
1605
+ },
1606
+ {
1607
+ "epoch": 17.286012526096034,
1608
+ "grad_norm": 4.105851650238037,
1609
+ "learning_rate": 1.6903249906611878e-05,
1610
+ "loss": 0.1723,
1611
+ "step": 2070
1612
+ },
1613
+ {
1614
+ "epoch": 17.369519832985386,
1615
+ "grad_norm": 4.654282569885254,
1616
+ "learning_rate": 1.671647366454987e-05,
1617
+ "loss": 0.1891,
1618
+ "step": 2080
1619
+ },
1620
+ {
1621
+ "epoch": 17.453027139874738,
1622
+ "grad_norm": 3.8776493072509766,
1623
+ "learning_rate": 1.652969742248786e-05,
1624
+ "loss": 0.1662,
1625
+ "step": 2090
1626
+ },
1627
+ {
1628
+ "epoch": 17.53653444676409,
1629
+ "grad_norm": 7.09526252746582,
1630
+ "learning_rate": 1.634292118042585e-05,
1631
+ "loss": 0.1683,
1632
+ "step": 2100
1633
+ },
1634
+ {
1635
+ "epoch": 17.620041753653446,
1636
+ "grad_norm": 5.3857035636901855,
1637
+ "learning_rate": 1.615614493836384e-05,
1638
+ "loss": 0.1727,
1639
+ "step": 2110
1640
+ },
1641
+ {
1642
+ "epoch": 17.703549060542798,
1643
+ "grad_norm": 3.283949136734009,
1644
+ "learning_rate": 1.596936869630183e-05,
1645
+ "loss": 0.1903,
1646
+ "step": 2120
1647
+ },
1648
+ {
1649
+ "epoch": 17.78705636743215,
1650
+ "grad_norm": 5.982066631317139,
1651
+ "learning_rate": 1.578259245423982e-05,
1652
+ "loss": 0.1929,
1653
+ "step": 2130
1654
+ },
1655
+ {
1656
+ "epoch": 17.870563674321502,
1657
+ "grad_norm": 3.7765073776245117,
1658
+ "learning_rate": 1.559581621217781e-05,
1659
+ "loss": 0.1703,
1660
+ "step": 2140
1661
+ },
1662
+ {
1663
+ "epoch": 17.954070981210855,
1664
+ "grad_norm": 2.5590310096740723,
1665
+ "learning_rate": 1.54090399701158e-05,
1666
+ "loss": 0.1712,
1667
+ "step": 2150
1668
+ },
1669
+ {
1670
+ "epoch": 17.995824634655534,
1671
+ "eval_accuracy": 0.9288424415895626,
1672
+ "eval_loss": 0.23452672362327576,
1673
+ "eval_runtime": 47.2955,
1674
+ "eval_samples_per_second": 317.641,
1675
+ "eval_steps_per_second": 4.969,
1676
+ "step": 2155
1677
+ },
1678
+ {
1679
+ "epoch": 18.03757828810021,
1680
+ "grad_norm": 3.893446683883667,
1681
+ "learning_rate": 1.5222263728053792e-05,
1682
+ "loss": 0.1684,
1683
+ "step": 2160
1684
+ },
1685
+ {
1686
+ "epoch": 18.121085594989562,
1687
+ "grad_norm": 3.8617587089538574,
1688
+ "learning_rate": 1.5035487485991784e-05,
1689
+ "loss": 0.1913,
1690
+ "step": 2170
1691
+ },
1692
+ {
1693
+ "epoch": 18.204592901878915,
1694
+ "grad_norm": 3.266658306121826,
1695
+ "learning_rate": 1.4848711243929772e-05,
1696
+ "loss": 0.1699,
1697
+ "step": 2180
1698
+ },
1699
+ {
1700
+ "epoch": 18.288100208768267,
1701
+ "grad_norm": 3.2592689990997314,
1702
+ "learning_rate": 1.4661935001867764e-05,
1703
+ "loss": 0.1803,
1704
+ "step": 2190
1705
+ },
1706
+ {
1707
+ "epoch": 18.37160751565762,
1708
+ "grad_norm": 5.846588611602783,
1709
+ "learning_rate": 1.4475158759805754e-05,
1710
+ "loss": 0.1673,
1711
+ "step": 2200
1712
+ },
1713
+ {
1714
+ "epoch": 18.455114822546975,
1715
+ "grad_norm": 3.6083693504333496,
1716
+ "learning_rate": 1.4288382517743742e-05,
1717
+ "loss": 0.1631,
1718
+ "step": 2210
1719
+ },
1720
+ {
1721
+ "epoch": 18.538622129436327,
1722
+ "grad_norm": 3.4165074825286865,
1723
+ "learning_rate": 1.4101606275681733e-05,
1724
+ "loss": 0.152,
1725
+ "step": 2220
1726
+ },
1727
+ {
1728
+ "epoch": 18.62212943632568,
1729
+ "grad_norm": 3.8259122371673584,
1730
+ "learning_rate": 1.3914830033619725e-05,
1731
+ "loss": 0.1642,
1732
+ "step": 2230
1733
+ },
1734
+ {
1735
+ "epoch": 18.70563674321503,
1736
+ "grad_norm": 4.859554767608643,
1737
+ "learning_rate": 1.3728053791557715e-05,
1738
+ "loss": 0.1596,
1739
+ "step": 2240
1740
+ },
1741
+ {
1742
+ "epoch": 18.789144050104383,
1743
+ "grad_norm": 2.8771893978118896,
1744
+ "learning_rate": 1.3541277549495703e-05,
1745
+ "loss": 0.1484,
1746
+ "step": 2250
1747
+ },
1748
+ {
1749
+ "epoch": 18.872651356993735,
1750
+ "grad_norm": 4.107051849365234,
1751
+ "learning_rate": 1.3354501307433695e-05,
1752
+ "loss": 0.2161,
1753
+ "step": 2260
1754
+ },
1755
+ {
1756
+ "epoch": 18.95615866388309,
1757
+ "grad_norm": 5.493374347686768,
1758
+ "learning_rate": 1.3167725065371686e-05,
1759
+ "loss": 0.1715,
1760
+ "step": 2270
1761
+ },
1762
+ {
1763
+ "epoch": 18.997912317327767,
1764
+ "eval_accuracy": 0.9368301937029887,
1765
+ "eval_loss": 0.21558375656604767,
1766
+ "eval_runtime": 47.5388,
1767
+ "eval_samples_per_second": 316.016,
+ "eval_steps_per_second": 4.943,
+ "step": 2275
+ },
+ {
+ "epoch": 19.039665970772443,
+ "grad_norm": 2.9170358180999756,
+ "learning_rate": 1.2980948823309675e-05,
+ "loss": 0.1595,
+ "step": 2280
+ },
+ {
+ "epoch": 19.123173277661795,
+ "grad_norm": 5.0754804611206055,
+ "learning_rate": 1.2794172581247665e-05,
+ "loss": 0.1671,
+ "step": 2290
+ },
+ {
+ "epoch": 19.206680584551147,
+ "grad_norm": 4.310447692871094,
+ "learning_rate": 1.2607396339185656e-05,
+ "loss": 0.1759,
+ "step": 2300
+ },
+ {
+ "epoch": 19.2901878914405,
+ "grad_norm": 5.011526584625244,
+ "learning_rate": 1.2420620097123646e-05,
+ "loss": 0.16,
+ "step": 2310
+ },
+ {
+ "epoch": 19.373695198329855,
+ "grad_norm": 3.7505524158477783,
+ "learning_rate": 1.2233843855061638e-05,
+ "loss": 0.1622,
+ "step": 2320
+ },
+ {
+ "epoch": 19.457202505219207,
+ "grad_norm": 4.531961917877197,
+ "learning_rate": 1.2047067612999626e-05,
+ "loss": 0.1648,
+ "step": 2330
+ },
+ {
+ "epoch": 19.54070981210856,
+ "grad_norm": 2.6007046699523926,
+ "learning_rate": 1.1860291370937618e-05,
+ "loss": 0.1625,
+ "step": 2340
+ },
+ {
+ "epoch": 19.62421711899791,
+ "grad_norm": 4.619992256164551,
+ "learning_rate": 1.1673515128875608e-05,
+ "loss": 0.1493,
+ "step": 2350
+ },
+ {
+ "epoch": 19.707724425887264,
+ "grad_norm": 3.1437652111053467,
+ "learning_rate": 1.1486738886813597e-05,
+ "loss": 0.15,
+ "step": 2360
+ },
+ {
+ "epoch": 19.79123173277662,
+ "grad_norm": 3.8565125465393066,
+ "learning_rate": 1.1299962644751589e-05,
+ "loss": 0.1433,
+ "step": 2370
+ },
+ {
+ "epoch": 19.87473903966597,
+ "grad_norm": 4.988433837890625,
+ "learning_rate": 1.1113186402689577e-05,
+ "loss": 0.1663,
+ "step": 2380
+ },
+ {
+ "epoch": 19.958246346555324,
+ "grad_norm": 4.064280033111572,
+ "learning_rate": 1.0926410160627569e-05,
+ "loss": 0.1516,
+ "step": 2390
+ },
+ {
+ "epoch": 20.0,
+ "eval_accuracy": 0.9368301937029887,
+ "eval_loss": 0.22790031135082245,
+ "eval_runtime": 47.6095,
+ "eval_samples_per_second": 315.546,
+ "eval_steps_per_second": 4.936,
+ "step": 2395
+ },
+ {
+ "epoch": 20.041753653444676,
+ "grad_norm": 4.115837097167969,
+ "learning_rate": 1.0739633918565559e-05,
+ "loss": 0.1424,
+ "step": 2400
+ },
+ {
+ "epoch": 20.12526096033403,
+ "grad_norm": 4.192341327667236,
+ "learning_rate": 1.055285767650355e-05,
+ "loss": 0.155,
+ "step": 2410
+ },
+ {
+ "epoch": 20.20876826722338,
+ "grad_norm": 4.125138282775879,
+ "learning_rate": 1.0366081434441539e-05,
+ "loss": 0.1576,
+ "step": 2420
+ },
+ {
+ "epoch": 20.292275574112736,
+ "grad_norm": 3.7545394897460938,
+ "learning_rate": 1.017930519237953e-05,
+ "loss": 0.168,
+ "step": 2430
+ },
+ {
+ "epoch": 20.37578288100209,
+ "grad_norm": 4.370765209197998,
+ "learning_rate": 9.99252895031752e-06,
+ "loss": 0.1566,
+ "step": 2440
+ },
+ {
+ "epoch": 20.45929018789144,
+ "grad_norm": 2.987387180328369,
+ "learning_rate": 9.80575270825551e-06,
+ "loss": 0.1399,
+ "step": 2450
+ },
+ {
+ "epoch": 20.542797494780793,
+ "grad_norm": 4.639241695404053,
+ "learning_rate": 9.6189764661935e-06,
+ "loss": 0.1447,
+ "step": 2460
+ },
+ {
+ "epoch": 20.626304801670145,
+ "grad_norm": 2.296684741973877,
+ "learning_rate": 9.43220022413149e-06,
+ "loss": 0.1392,
+ "step": 2470
+ },
+ {
+ "epoch": 20.7098121085595,
+ "grad_norm": 2.6289243698120117,
+ "learning_rate": 9.245423982069482e-06,
+ "loss": 0.1488,
+ "step": 2480
+ },
+ {
+ "epoch": 20.793319415448853,
+ "grad_norm": 3.7014503479003906,
+ "learning_rate": 9.058647740007472e-06,
+ "loss": 0.1495,
+ "step": 2490
+ },
+ {
+ "epoch": 20.876826722338205,
+ "grad_norm": 3.3644683361053467,
+ "learning_rate": 8.871871497945462e-06,
+ "loss": 0.1512,
+ "step": 2500
+ },
+ {
+ "epoch": 20.960334029227557,
+ "grad_norm": 3.517514705657959,
+ "learning_rate": 8.685095255883451e-06,
+ "loss": 0.1504,
+ "step": 2510
+ },
+ {
+ "epoch": 20.993736951983298,
+ "eval_accuracy": 0.9381614857218931,
+ "eval_loss": 0.22127582132816315,
+ "eval_runtime": 46.9949,
+ "eval_samples_per_second": 319.673,
+ "eval_steps_per_second": 5.001,
+ "step": 2514
+ },
+ {
+ "epoch": 21.04384133611691,
+ "grad_norm": 2.8736536502838135,
+ "learning_rate": 8.498319013821441e-06,
+ "loss": 0.1439,
+ "step": 2520
+ },
+ {
+ "epoch": 21.127348643006265,
+ "grad_norm": 3.371739387512207,
+ "learning_rate": 8.311542771759433e-06,
+ "loss": 0.1465,
+ "step": 2530
+ },
+ {
+ "epoch": 21.210855949895617,
+ "grad_norm": 4.341720104217529,
+ "learning_rate": 8.124766529697423e-06,
+ "loss": 0.1272,
+ "step": 2540
+ },
+ {
+ "epoch": 21.29436325678497,
+ "grad_norm": 4.213035583496094,
+ "learning_rate": 7.937990287635413e-06,
+ "loss": 0.1372,
+ "step": 2550
+ },
+ {
+ "epoch": 21.37787056367432,
+ "grad_norm": 4.820446491241455,
+ "learning_rate": 7.751214045573403e-06,
+ "loss": 0.1413,
+ "step": 2560
+ },
+ {
+ "epoch": 21.461377870563673,
+ "grad_norm": 5.027878761291504,
+ "learning_rate": 7.564437803511394e-06,
+ "loss": 0.1342,
+ "step": 2570
+ },
+ {
+ "epoch": 21.544885177453025,
+ "grad_norm": 3.3291218280792236,
+ "learning_rate": 7.3776615614493835e-06,
+ "loss": 0.1385,
+ "step": 2580
+ },
+ {
+ "epoch": 21.62839248434238,
+ "grad_norm": 4.18372106552124,
+ "learning_rate": 7.190885319387375e-06,
+ "loss": 0.1597,
+ "step": 2590
+ },
+ {
+ "epoch": 21.711899791231733,
+ "grad_norm": 3.464853048324585,
+ "learning_rate": 7.004109077325364e-06,
+ "loss": 0.1516,
+ "step": 2600
+ },
+ {
+ "epoch": 21.795407098121085,
+ "grad_norm": 3.7913620471954346,
+ "learning_rate": 6.817332835263354e-06,
+ "loss": 0.1407,
+ "step": 2610
+ },
+ {
+ "epoch": 21.878914405010438,
+ "grad_norm": 3.104400873184204,
+ "learning_rate": 6.630556593201345e-06,
+ "loss": 0.1548,
+ "step": 2620
+ },
+ {
+ "epoch": 21.96242171189979,
+ "grad_norm": 3.58847975730896,
+ "learning_rate": 6.443780351139335e-06,
+ "loss": 0.139,
+ "step": 2630
+ },
+ {
+ "epoch": 21.995824634655534,
+ "eval_accuracy": 0.9370298875058244,
+ "eval_loss": 0.2247340828180313,
+ "eval_runtime": 47.4958,
+ "eval_samples_per_second": 316.302,
+ "eval_steps_per_second": 4.948,
+ "step": 2634
+ },
+ {
+ "epoch": 22.045929018789145,
+ "grad_norm": 2.6478941440582275,
+ "learning_rate": 6.257004109077326e-06,
+ "loss": 0.1507,
+ "step": 2640
+ },
+ {
+ "epoch": 22.129436325678498,
+ "grad_norm": 3.125650644302368,
+ "learning_rate": 6.0702278670153155e-06,
+ "loss": 0.1503,
+ "step": 2650
+ },
+ {
+ "epoch": 22.21294363256785,
+ "grad_norm": 2.9791300296783447,
+ "learning_rate": 5.883451624953306e-06,
+ "loss": 0.1317,
+ "step": 2660
+ },
+ {
+ "epoch": 22.296450939457202,
+ "grad_norm": 3.1157150268554688,
+ "learning_rate": 5.696675382891296e-06,
+ "loss": 0.1584,
+ "step": 2670
+ },
+ {
+ "epoch": 22.379958246346554,
+ "grad_norm": 3.433903217315674,
+ "learning_rate": 5.509899140829287e-06,
+ "loss": 0.1424,
+ "step": 2680
+ },
+ {
+ "epoch": 22.46346555323591,
+ "grad_norm": 2.764221429824829,
+ "learning_rate": 5.323122898767277e-06,
+ "loss": 0.135,
+ "step": 2690
+ },
+ {
+ "epoch": 22.546972860125262,
+ "grad_norm": 6.406319618225098,
+ "learning_rate": 5.136346656705268e-06,
+ "loss": 0.1307,
+ "step": 2700
+ },
+ {
+ "epoch": 22.630480167014614,
+ "grad_norm": 2.927151918411255,
+ "learning_rate": 4.949570414643258e-06,
+ "loss": 0.1259,
+ "step": 2710
+ },
+ {
+ "epoch": 22.713987473903966,
+ "grad_norm": 2.335951089859009,
+ "learning_rate": 4.762794172581248e-06,
+ "loss": 0.138,
+ "step": 2720
+ },
+ {
+ "epoch": 22.79749478079332,
+ "grad_norm": 3.3406760692596436,
+ "learning_rate": 4.5760179305192375e-06,
+ "loss": 0.1295,
+ "step": 2730
+ },
+ {
+ "epoch": 22.88100208768267,
+ "grad_norm": 4.615182399749756,
+ "learning_rate": 4.389241688457228e-06,
+ "loss": 0.146,
+ "step": 2740
+ },
+ {
+ "epoch": 22.964509394572026,
+ "grad_norm": 3.860175848007202,
+ "learning_rate": 4.202465446395218e-06,
+ "loss": 0.1264,
+ "step": 2750
+ },
+ {
+ "epoch": 22.997912317327767,
+ "eval_accuracy": 0.938427744125674,
+ "eval_loss": 0.23570792376995087,
+ "eval_runtime": 47.1554,
+ "eval_samples_per_second": 318.585,
+ "eval_steps_per_second": 4.984,
+ "step": 2754
+ },
+ {
+ "epoch": 23.04801670146138,
+ "grad_norm": 3.362682580947876,
+ "learning_rate": 4.015689204333209e-06,
+ "loss": 0.1557,
+ "step": 2760
+ },
+ {
+ "epoch": 23.13152400835073,
+ "grad_norm": 4.399153709411621,
+ "learning_rate": 3.8289129622712e-06,
+ "loss": 0.1472,
+ "step": 2770
+ },
+ {
+ "epoch": 23.215031315240083,
+ "grad_norm": 3.276092052459717,
+ "learning_rate": 3.6421367202091897e-06,
+ "loss": 0.1266,
+ "step": 2780
+ },
+ {
+ "epoch": 23.298538622129435,
+ "grad_norm": 3.5454556941986084,
+ "learning_rate": 3.45536047814718e-06,
+ "loss": 0.1237,
+ "step": 2790
+ },
+ {
+ "epoch": 23.38204592901879,
+ "grad_norm": 4.570245742797852,
+ "learning_rate": 3.2685842360851704e-06,
+ "loss": 0.138,
+ "step": 2800
+ },
+ {
+ "epoch": 23.465553235908143,
+ "grad_norm": 4.357471466064453,
+ "learning_rate": 3.0818079940231603e-06,
+ "loss": 0.1675,
+ "step": 2810
+ },
+ {
+ "epoch": 23.549060542797495,
+ "grad_norm": 2.96999192237854,
+ "learning_rate": 2.8950317519611506e-06,
+ "loss": 0.1432,
+ "step": 2820
+ },
+ {
+ "epoch": 23.632567849686847,
+ "grad_norm": 2.6909894943237305,
+ "learning_rate": 2.708255509899141e-06,
+ "loss": 0.1306,
+ "step": 2830
+ },
+ {
+ "epoch": 23.7160751565762,
+ "grad_norm": 2.8530972003936768,
+ "learning_rate": 2.5214792678371313e-06,
+ "loss": 0.1418,
+ "step": 2840
+ },
+ {
+ "epoch": 23.799582463465555,
+ "grad_norm": 3.2527079582214355,
+ "learning_rate": 2.3347030257751217e-06,
+ "loss": 0.1367,
+ "step": 2850
+ },
+ {
+ "epoch": 23.883089770354907,
+ "grad_norm": 4.381546497344971,
+ "learning_rate": 2.147926783713112e-06,
+ "loss": 0.1324,
+ "step": 2860
+ },
+ {
+ "epoch": 23.96659707724426,
+ "grad_norm": 3.197265863418579,
+ "learning_rate": 1.9611505416511024e-06,
+ "loss": 0.1266,
+ "step": 2870
+ },
+ {
+ "epoch": 24.0,
+ "eval_accuracy": 0.9380949211209478,
+ "eval_loss": 0.23597599565982819,
+ "eval_runtime": 47.4629,
+ "eval_samples_per_second": 316.521,
+ "eval_steps_per_second": 4.951,
+ "step": 2874
+ },
+ {
+ "epoch": 24.05010438413361,
+ "grad_norm": 4.494243621826172,
+ "learning_rate": 1.7743742995890923e-06,
+ "loss": 0.1277,
+ "step": 2880
+ },
+ {
+ "epoch": 24.133611691022963,
+ "grad_norm": 2.9121129512786865,
+ "learning_rate": 1.5875980575270827e-06,
+ "loss": 0.1119,
+ "step": 2890
+ },
+ {
+ "epoch": 24.217118997912316,
+ "grad_norm": 4.936645984649658,
+ "learning_rate": 1.400821815465073e-06,
+ "loss": 0.1321,
+ "step": 2900
+ },
+ {
+ "epoch": 24.30062630480167,
+ "grad_norm": 3.3100907802581787,
+ "learning_rate": 1.2140455734030631e-06,
+ "loss": 0.1347,
+ "step": 2910
+ },
+ {
+ "epoch": 24.384133611691023,
+ "grad_norm": 3.270296096801758,
+ "learning_rate": 1.0272693313410535e-06,
+ "loss": 0.1339,
+ "step": 2920
+ },
+ {
+ "epoch": 24.467640918580376,
+ "grad_norm": 5.395968437194824,
+ "learning_rate": 8.404930892790437e-07,
+ "loss": 0.1348,
+ "step": 2930
+ },
+ {
+ "epoch": 24.551148225469728,
+ "grad_norm": 3.4229140281677246,
+ "learning_rate": 6.53716847217034e-07,
+ "loss": 0.1362,
+ "step": 2940
+ },
+ {
+ "epoch": 24.63465553235908,
+ "grad_norm": 3.543994188308716,
+ "learning_rate": 4.669406051550243e-07,
+ "loss": 0.1277,
+ "step": 2950
+ },
+ {
+ "epoch": 24.718162839248436,
+ "grad_norm": 3.7220823764801025,
+ "learning_rate": 2.801643630930146e-07,
+ "loss": 0.1366,
+ "step": 2960
+ },
+ {
+ "epoch": 24.801670146137788,
+ "grad_norm": 3.1788253784179688,
+ "learning_rate": 9.338812103100486e-08,
+ "loss": 0.1144,
+ "step": 2970
+ },
+ {
+ "epoch": 24.843423799582464,
+ "eval_accuracy": 0.9374958397124409,
+ "eval_loss": 0.23700778186321259,
+ "eval_runtime": 47.4891,
+ "eval_samples_per_second": 316.346,
+ "eval_steps_per_second": 4.948,
+ "step": 2975
+ },
+ {
+ "epoch": 24.843423799582464,
+ "step": 2975,
+ "total_flos": 1.488814196353273e+19,
+ "train_loss": 0.23466382104809544,
+ "train_runtime": 6976.8228,
+ "train_samples_per_second": 109.728,
+ "train_steps_per_second": 0.426
+ }
+ ],
+ "logging_steps": 10,
+ "max_steps": 2975,
+ "num_input_tokens_seen": 0,
+ "num_train_epochs": 25,
+ "save_steps": 500,
+ "stateful_callbacks": {
+ "TrainerControl": {
+ "args": {
+ "should_epoch_stop": false,
+ "should_evaluate": false,
+ "should_log": false,
+ "should_save": true,
+ "should_training_stop": true
+ },
+ "attributes": {}
+ }
+ },
+ "total_flos": 1.488814196353273e+19,
+ "train_batch_size": 64,
+ "trial_name": null,
+ "trial_params": null
+ }