stojchet committed
Commit 05f3018 · verified · 1 Parent(s): 3da6e7d

End of training

Files changed (1)
  1. README.md +143 -0
README.md ADDED
---
base_model: deepseek-ai/deepseek-coder-1.3b-base
datasets:
- generator
library_name: peft
license: other
tags:
- trl
- sft
- generated_from_trainer
model-index:
- name: lr_sft2
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->

[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="200" height="32"/>](https://wandb.ai/stojchets/huggingface/runs/lr_sft2)

# lr_sft2

This model is a fine-tuned version of [deepseek-ai/deepseek-coder-1.3b-base](https://huggingface.co/deepseek-ai/deepseek-coder-1.3b-base) on the generator dataset.
It achieves the following results on the evaluation set:
- Loss: 1.2345

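Since this repository holds only a PEFT adapter, inference requires loading the base model and attaching the adapter on top. Below is a minimal sketch of that workflow with `transformers` and `peft`; the adapter id `stojchets/lr_sft2`, the prompt, and the generation settings are placeholders rather than details taken from this run.

```python
# Minimal inference sketch: load the base model, then attach this PEFT adapter.
# "stojchets/lr_sft2" is a placeholder adapter id -- point it at wherever the
# adapter is actually hosted (or at a local checkpoint directory).
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "deepseek-ai/deepseek-coder-1.3b-base"
adapter_id = "stojchets/lr_sft2"  # placeholder

tokenizer = AutoTokenizer.from_pretrained(base_id)
base_model = AutoModelForCausalLM.from_pretrained(base_id, torch_dtype="auto", device_map="auto")

# Wrap the frozen base weights with the LoRA adapter produced by this training run.
model = PeftModel.from_pretrained(base_model, adapter_id)
model.eval()

prompt = "def quicksort(arr):"
inputs = tokenizer(prompt, return_tensors="pt").to(base_model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
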

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training (a configuration sketch reproducing them is shown after the list):
- learning_rate: 1.41e-06
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 16
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1

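The snippet below is a hedged sketch of a TRL `SFTTrainer` setup consistent with these hyperparameters. The dataset, LoRA configuration, and output directory are assumptions (the card only records a "generator" dataset), and the exact keyword arguments may vary slightly across TRL versions.

```python
# Hedged training sketch (not the exact script used for this run). Dataset path,
# LoRA settings, and output_dir are placeholders; hyperparameters mirror the list above.
from datasets import load_dataset
from peft import LoraConfig
from transformers import AutoModelForCausalLM, AutoTokenizer, TrainingArguments
from trl import SFTTrainer

base_id = "deepseek-ai/deepseek-coder-1.3b-base"
tokenizer = AutoTokenizer.from_pretrained(base_id)
model = AutoModelForCausalLM.from_pretrained(base_id)

# Placeholder data: the card only records a "generator" dataset, so plug in your own here.
train_dataset = load_dataset("json", data_files="train.jsonl", split="train")

args = TrainingArguments(
    output_dir="lr_sft2",
    learning_rate=1.41e-06,
    per_device_train_batch_size=8,   # 8 x 16 accumulation steps = effective batch of 128 on one device
    per_device_eval_batch_size=8,
    gradient_accumulation_steps=16,
    num_train_epochs=1,
    lr_scheduler_type="linear",
    seed=42,
)

trainer = SFTTrainer(
    model=model,
    args=args,
    train_dataset=train_dataset,
    tokenizer=tokenizer,
    dataset_text_field="text",                      # assumes a "text" column in the dataset
    peft_config=LoraConfig(task_type="CAUSAL_LM"),  # assumed LoRA config; actual adapter settings not listed on the card
)
trainer.train()
```
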

### Training results

| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 1.2662 | 0.0128 | 1 | 1.2399 |
| 1.2265 | 0.0256 | 2 | 1.2398 |
| 1.2592 | 0.0384 | 3 | 1.2396 |
| 1.1588 | 0.0512 | 4 | 1.2395 |
| 1.2261 | 0.064 | 5 | 1.2393 |
| 1.2145 | 0.0768 | 6 | 1.2392 |
| 1.2194 | 0.0896 | 7 | 1.2390 |
| 1.2688 | 0.1024 | 8 | 1.2389 |
| 1.2326 | 0.1152 | 9 | 1.2388 |
| 1.2506 | 0.128 | 10 | 1.2386 |
| 1.2719 | 0.1408 | 11 | 1.2385 |
| 1.2007 | 0.1536 | 12 | 1.2384 |
| 1.1761 | 0.1664 | 13 | 1.2383 |
| 1.2937 | 0.1792 | 14 | 1.2382 |
| 1.2277 | 0.192 | 15 | 1.2381 |
| 1.2658 | 0.2048 | 16 | 1.2379 |
| 1.2467 | 0.2176 | 17 | 1.2378 |
| 1.258 | 0.2304 | 18 | 1.2377 |
| 1.2024 | 0.2432 | 19 | 1.2376 |
| 1.2011 | 0.256 | 20 | 1.2375 |
| 1.2371 | 0.2688 | 21 | 1.2374 |
| 1.2095 | 0.2816 | 22 | 1.2373 |
| 1.2481 | 0.2944 | 23 | 1.2372 |
| 1.2934 | 0.3072 | 24 | 1.2371 |
| 1.2088 | 0.32 | 25 | 1.2370 |
| 1.2565 | 0.3328 | 26 | 1.2369 |
| 1.2254 | 0.3456 | 27 | 1.2368 |
| 1.2002 | 0.3584 | 28 | 1.2367 |
| 1.1977 | 0.3712 | 29 | 1.2366 |
| 1.1858 | 0.384 | 30 | 1.2366 |
| 1.1915 | 0.3968 | 31 | 1.2365 |
| 1.22 | 0.4096 | 32 | 1.2364 |
| 1.2649 | 0.4224 | 33 | 1.2363 |
| 1.2383 | 0.4352 | 34 | 1.2362 |
| 1.1996 | 0.448 | 35 | 1.2361 |
| 1.1884 | 0.4608 | 36 | 1.2361 |
| 1.2159 | 0.4736 | 37 | 1.2360 |
| 1.2392 | 0.4864 | 38 | 1.2359 |
| 1.272 | 0.4992 | 39 | 1.2359 |
| 1.2083 | 0.512 | 40 | 1.2358 |
| 1.2369 | 0.5248 | 41 | 1.2357 |
| 1.2324 | 0.5376 | 42 | 1.2357 |
| 1.1785 | 0.5504 | 43 | 1.2356 |
| 1.2122 | 0.5632 | 44 | 1.2355 |
| 1.2011 | 0.576 | 45 | 1.2355 |
| 1.2412 | 0.5888 | 46 | 1.2354 |
| 1.187 | 0.6016 | 47 | 1.2353 |
| 1.2275 | 0.6144 | 48 | 1.2353 |
| 1.2167 | 0.6272 | 49 | 1.2352 |
| 1.2042 | 0.64 | 50 | 1.2352 |
| 1.239 | 0.6528 | 51 | 1.2351 |
| 1.1876 | 0.6656 | 52 | 1.2351 |
| 1.2362 | 0.6784 | 53 | 1.2350 |
| 1.2018 | 0.6912 | 54 | 1.2350 |
| 1.1839 | 0.704 | 55 | 1.2350 |
| 1.2025 | 0.7168 | 56 | 1.2349 |
| 1.2289 | 0.7296 | 57 | 1.2349 |
| 1.2228 | 0.7424 | 58 | 1.2348 |
| 1.1969 | 0.7552 | 59 | 1.2348 |
| 1.2393 | 0.768 | 60 | 1.2348 |
| 1.2783 | 0.7808 | 61 | 1.2347 |
| 1.2625 | 0.7936 | 62 | 1.2347 |
| 1.1973 | 0.8064 | 63 | 1.2347 |
| 1.2449 | 0.8192 | 64 | 1.2346 |
| 1.1992 | 0.832 | 65 | 1.2346 |
| 1.1581 | 0.8448 | 66 | 1.2346 |
| 1.2901 | 0.8576 | 67 | 1.2346 |
| 1.1731 | 0.8704 | 68 | 1.2346 |
| 1.1956 | 0.8832 | 69 | 1.2345 |
| 1.1748 | 0.896 | 70 | 1.2345 |
| 1.2399 | 0.9088 | 71 | 1.2345 |
| 1.2649 | 0.9216 | 72 | 1.2345 |
| 1.2461 | 0.9344 | 73 | 1.2345 |
| 1.1934 | 0.9472 | 74 | 1.2345 |
| 1.2389 | 0.96 | 75 | 1.2345 |
| 1.2689 | 0.9728 | 76 | 1.2345 |
| 1.2085 | 0.9856 | 77 | 1.2345 |
| 1.226 | 0.9984 | 78 | 1.2345 |


### Framework versions

- PEFT 0.10.0
- Transformers 4.43.0.dev0
- Pytorch 2.2.2+cu121
- Datasets 2.19.2
- Tokenizers 0.19.1
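
If you prefer to use the fine-tuned weights without a `peft` dependency at inference time, the adapter can be merged back into the base model. A small sketch, again assuming a placeholder adapter id:

```python
# Merge the LoRA adapter into the base weights and save a standalone model.
# "stojchets/lr_sft2" is a placeholder adapter id.
from peft import PeftModel
from transformers import AutoModelForCausalLM

base = AutoModelForCausalLM.from_pretrained("deepseek-ai/deepseek-coder-1.3b-base")
merged = PeftModel.from_pretrained(base, "stojchets/lr_sft2").merge_and_unload()
merged.save_pretrained("lr_sft2-merged")
```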