abhimanyu2747 committed · verified
Commit 5d1390a · 1 Parent(s): 893525b

Upload folder using huggingface_hub
checkpoint-100/README.md ADDED
@@ -0,0 +1,202 @@
+ ---
+ library_name: peft
+ base_model: mistralai/Mistral-7B-v0.1
+ ---
+
+ # Model Card for Model ID
+
+ <!-- Provide a quick summary of what the model is/does. -->
+
+
+
+ ## Model Details
+
+ ### Model Description
+
+ <!-- Provide a longer summary of what this model is. -->
+
+
+
+ - **Developed by:** [More Information Needed]
+ - **Funded by [optional]:** [More Information Needed]
+ - **Shared by [optional]:** [More Information Needed]
+ - **Model type:** [More Information Needed]
+ - **Language(s) (NLP):** [More Information Needed]
+ - **License:** [More Information Needed]
+ - **Finetuned from model [optional]:** [More Information Needed]
+
+ ### Model Sources [optional]
+
+ <!-- Provide the basic links for the model. -->
+
+ - **Repository:** [More Information Needed]
+ - **Paper [optional]:** [More Information Needed]
+ - **Demo [optional]:** [More Information Needed]
+
+ ## Uses
+
+ <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
+
+ ### Direct Use
+
+ <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
+
+ [More Information Needed]
+
+ ### Downstream Use [optional]
+
+ <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
+
+ [More Information Needed]
+
+ ### Out-of-Scope Use
+
+ <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
+
+ [More Information Needed]
+
+ ## Bias, Risks, and Limitations
+
+ <!-- This section is meant to convey both technical and sociotechnical limitations. -->
+
+ [More Information Needed]
+
+ ### Recommendations
+
+ <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
+
+ Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
+
+ ## How to Get Started with the Model
+
+ Use the code below to get started with the model.
+
+ [More Information Needed]
+
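The committed card leaves this section as a placeholder. Below is a minimal loading sketch for this checkpoint (not part of the committed file), assuming the `checkpoint-100` folder has been downloaded locally and that `transformers`, `peft`, and `accelerate` are installed; the prompt and local paths are illustrative only.

```python
# Minimal sketch: load the Mistral-7B base model and attach the LoRA adapter
# from this checkpoint directory. "checkpoint-100" stands for wherever these
# files were downloaded to (hypothetical local path).
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "mistralai/Mistral-7B-v0.1"   # from adapter_config.json below
adapter_path = "checkpoint-100"         # local copy of this folder

tokenizer = AutoTokenizer.from_pretrained(adapter_path)
base_model = AutoModelForCausalLM.from_pretrained(base_id, device_map="auto")
model = PeftModel.from_pretrained(base_model, adapter_path)

prompt = "Explain LoRA fine-tuning in one sentence."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

Because this is a LoRA adapter, only the roughly 160 MB `adapter_model.safetensors` below is loaded on top of the base Mistral-7B-v0.1 weights; the base model itself is fetched separately from its own repository.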
+ ## Training Details
+
+ ### Training Data
+
+ <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
+
+ [More Information Needed]
+
+ ### Training Procedure
+
+ <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
+
+ #### Preprocessing [optional]
+
+ [More Information Needed]
+
+
+ #### Training Hyperparameters
+
+ - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
+
+ #### Speeds, Sizes, Times [optional]
+
+ <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
+
+ [More Information Needed]
+
+ ## Evaluation
+
+ <!-- This section describes the evaluation protocols and provides the results. -->
+
+ ### Testing Data, Factors & Metrics
+
+ #### Testing Data
+
+ <!-- This should link to a Dataset Card if possible. -->
+
+ [More Information Needed]
+
+ #### Factors
+
+ <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
+
+ [More Information Needed]
+
+ #### Metrics
+
+ <!-- These are the evaluation metrics being used, ideally with a description of why. -->
+
+ [More Information Needed]
+
+ ### Results
+
+ [More Information Needed]
+
+ #### Summary
+
+
+
+ ## Model Examination [optional]
+
+ <!-- Relevant interpretability work for the model goes here -->
+
+ [More Information Needed]
+
+ ## Environmental Impact
+
+ <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
+
+ Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
+
+ - **Hardware Type:** [More Information Needed]
+ - **Hours used:** [More Information Needed]
+ - **Cloud Provider:** [More Information Needed]
+ - **Compute Region:** [More Information Needed]
+ - **Carbon Emitted:** [More Information Needed]
+
+ ## Technical Specifications [optional]
+
+ ### Model Architecture and Objective
+
+ [More Information Needed]
+
+ ### Compute Infrastructure
+
+ [More Information Needed]
+
+ #### Hardware
+
+ [More Information Needed]
+
+ #### Software
+
+ [More Information Needed]
+
+ ## Citation [optional]
+
+ <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
+
+ **BibTeX:**
+
+ [More Information Needed]
+
+ **APA:**
+
+ [More Information Needed]
+
+ ## Glossary [optional]
+
+ <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
+
+ [More Information Needed]
+
+ ## More Information [optional]
+
+ [More Information Needed]
+
+ ## Model Card Authors [optional]
+
+ [More Information Needed]
+
+ ## Model Card Contact
+
+ [More Information Needed]
+ ### Framework versions
+
+ - PEFT 0.10.0
checkpoint-100/adapter_config.json ADDED
@@ -0,0 +1,34 @@
+ {
+ "alpha_pattern": {},
+ "auto_mapping": null,
+ "base_model_name_or_path": "mistralai/Mistral-7B-v0.1",
+ "bias": "none",
+ "fan_in_fan_out": false,
+ "inference_mode": true,
+ "init_lora_weights": true,
+ "layer_replication": null,
+ "layers_pattern": null,
+ "layers_to_transform": null,
+ "loftq_config": {},
+ "lora_alpha": 16,
+ "lora_dropout": 0.05,
+ "megatron_config": null,
+ "megatron_core": "megatron.core",
+ "modules_to_save": null,
+ "peft_type": "LORA",
+ "r": 16,
+ "rank_pattern": {},
+ "revision": null,
+ "target_modules": [
+ "q_proj",
+ "down_proj",
+ "gate_proj",
+ "o_proj",
+ "up_proj",
+ "v_proj",
+ "k_proj"
+ ],
+ "task_type": "CAUSAL_LM",
+ "use_dora": false,
+ "use_rslora": false
+ }
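`adapter_config.json` records a rank-16 LoRA adapter applied to every attention and MLP projection of the base model. For reference, the equivalent `peft.LoraConfig` reconstructed from the values above (a sketch; the original training script is not part of this commit, and omitted arguments fall back to PEFT defaults):

```python
# Sketch: the LoraConfig equivalent of checkpoint-100/adapter_config.json.
from peft import LoraConfig

lora_config = LoraConfig(
    r=16,                       # LoRA rank
    lora_alpha=16,              # scaling factor
    lora_dropout=0.05,
    bias="none",
    task_type="CAUSAL_LM",
    target_modules=[            # all attention and MLP projections of Mistral
        "q_proj", "k_proj", "v_proj", "o_proj",
        "gate_proj", "up_proj", "down_proj",
    ],
)
```

Rank-16 updates on all seven projection matrices are consistent with the roughly 160 MB `adapter_model.safetensors` recorded below.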
checkpoint-100/adapter_model.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:3c7195143b0e667ffbf4bc130cd7c80a09de58ba6972b6b0b3a98fedb41b49af
+ size 167832240
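As elsewhere in this commit, the entry above is a Git LFS pointer rather than the weights themselves; after downloading the actual `adapter_model.safetensors`, it can be checked against the pointer's `oid` and `size`. A small sketch (the local path is an assumption):

```python
# Sketch: verify a downloaded LFS object against the pointer shown above.
import hashlib
from pathlib import Path

path = Path("checkpoint-100/adapter_model.safetensors")  # hypothetical local copy
expected_sha256 = "3c7195143b0e667ffbf4bc130cd7c80a09de58ba6972b6b0b3a98fedb41b49af"
expected_size = 167832240

data = path.read_bytes()
assert len(data) == expected_size, "size mismatch"
assert hashlib.sha256(data).hexdigest() == expected_sha256, "sha256 mismatch"
print("adapter_model.safetensors matches its LFS pointer")
```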
checkpoint-100/optimizer.pt ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:96bbd80eb4dfed014ed8b9ab305752962c4b4199f3b813bf9ad222bceae4086c
+ size 84575956
checkpoint-100/rng_state.pth ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:68c0a7bb5c807ad5bc1c4dff28401d8ba88a8d5d80ef889d2be2ca17beb56b13
+ size 14244
checkpoint-100/scheduler.pt ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:7ad0f1f2e7b55785a1593e27bff66a57496d0f22101944415c8a8dca31514181
+ size 1064
checkpoint-100/special_tokens_map.json ADDED
@@ -0,0 +1,24 @@
+ {
+ "bos_token": {
+ "content": "<s>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false
+ },
+ "eos_token": {
+ "content": "</s>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false
+ },
+ "pad_token": "</s>",
+ "unk_token": {
+ "content": "<unk>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false
+ }
+ }
checkpoint-100/tokenizer.json ADDED
The diff for this file is too large to render. See raw diff
 
checkpoint-100/tokenizer.model ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:dadfd56d766715c61d2ef780a525ab43b8e6da4de6865bda3d95fdef5e134055
+ size 493443
checkpoint-100/tokenizer_config.json ADDED
@@ -0,0 +1,42 @@
+ {
+ "add_bos_token": true,
+ "add_eos_token": true,
+ "added_tokens_decoder": {
+ "0": {
+ "content": "<unk>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": true
+ },
+ "1": {
+ "content": "<s>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": true
+ },
+ "2": {
+ "content": "</s>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": true
+ }
+ },
+ "additional_special_tokens": [],
+ "bos_token": "<s>",
+ "clean_up_tokenization_spaces": false,
+ "eos_token": "</s>",
+ "legacy": true,
+ "model_max_length": 1000000000000000019884624838656,
+ "pad_token": "</s>",
+ "sp_model_kwargs": {},
+ "spaces_between_special_tokens": false,
+ "tokenizer_class": "LlamaTokenizer",
+ "unk_token": "<unk>",
+ "use_default_system_prompt": false
+ }
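Two settings in this tokenizer config matter when preparing inputs: `add_bos_token` and `add_eos_token` are both true, so encoded sequences get `<s>` and `</s>` added automatically, and `pad_token` reuses `</s>`. A quick check, assuming a local copy of the folder:

```python
# Sketch: inspect the tokenizer shipped with this checkpoint.
from transformers import AutoTokenizer

tok = AutoTokenizer.from_pretrained("checkpoint-100")  # hypothetical local path
print(tok.__class__.__name__)          # LlamaTokenizer / LlamaTokenizerFast
print(tok.pad_token == tok.eos_token)  # True: "</s>" is reused for padding
print(tok("hello").input_ids)          # per the config above: starts with BOS (1), ends with EOS (2)
```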
checkpoint-100/trainer_state.json ADDED
@@ -0,0 +1,761 @@
1
+ {
2
+ "best_metric": null,
3
+ "best_model_checkpoint": null,
4
+ "epoch": 0.5333333333333333,
5
+ "eval_steps": 5,
6
+ "global_step": 100,
7
+ "is_hyper_param_search": false,
8
+ "is_local_process_zero": true,
9
+ "is_world_process_zero": true,
10
+ "log_history": [
11
+ {
12
+ "epoch": 0.03,
13
+ "grad_norm": 11.568901062011719,
14
+ "learning_rate": 4e-06,
15
+ "log_odds_chosen": -0.04809989407658577,
16
+ "log_odds_ratio": -0.7309754490852356,
17
+ "logits/chosen": -2.8434011936187744,
18
+ "logits/rejected": -2.879807472229004,
19
+ "logps/chosen": -1.3107571601867676,
20
+ "logps/rejected": -1.2733912467956543,
21
+ "loss": 2.9218,
22
+ "nll_loss": 2.8486790657043457,
23
+ "rewards/accuracies": 0.44999998807907104,
24
+ "rewards/chosen": -0.13107571005821228,
25
+ "rewards/margins": -0.003736583050340414,
26
+ "rewards/rejected": -0.12733913958072662,
27
+ "step": 5
28
+ },
29
+ {
30
+ "epoch": 0.03,
31
+ "eval_log_odds_chosen": -0.03959529474377632,
32
+ "eval_log_odds_ratio": -0.7268435955047607,
33
+ "eval_logits/chosen": -2.8285341262817383,
34
+ "eval_logits/rejected": -2.8396682739257812,
35
+ "eval_logps/chosen": -1.3169336318969727,
36
+ "eval_logps/rejected": -1.2877413034439087,
37
+ "eval_loss": 2.917881965637207,
38
+ "eval_nll_loss": 2.8451976776123047,
39
+ "eval_rewards/accuracies": 0.4560000002384186,
40
+ "eval_rewards/chosen": -0.13169336318969727,
41
+ "eval_rewards/margins": -0.0029192266520112753,
42
+ "eval_rewards/rejected": -0.12877413630485535,
43
+ "eval_runtime": 1022.3754,
44
+ "eval_samples_per_second": 0.489,
45
+ "eval_steps_per_second": 0.245,
46
+ "step": 5
47
+ },
48
+ {
49
+ "epoch": 0.05,
50
+ "grad_norm": 11.521967887878418,
51
+ "learning_rate": 8e-06,
52
+ "log_odds_chosen": -0.007230323739349842,
53
+ "log_odds_ratio": -0.7128439545631409,
54
+ "logits/chosen": -2.8024983406066895,
55
+ "logits/rejected": -2.807565212249756,
56
+ "logps/chosen": -1.3020203113555908,
57
+ "logps/rejected": -1.2930128574371338,
58
+ "loss": 2.8904,
59
+ "nll_loss": 2.819147825241089,
60
+ "rewards/accuracies": 0.5,
61
+ "rewards/chosen": -0.1302020400762558,
62
+ "rewards/margins": -0.00090073945466429,
63
+ "rewards/rejected": -0.1293012797832489,
64
+ "step": 10
65
+ },
66
+ {
67
+ "epoch": 0.05,
68
+ "eval_log_odds_chosen": -0.034889113157987595,
69
+ "eval_log_odds_ratio": -0.7245000004768372,
70
+ "eval_logits/chosen": -2.8870224952697754,
71
+ "eval_logits/rejected": -2.8951852321624756,
72
+ "eval_logps/chosen": -1.2974005937576294,
73
+ "eval_logps/rejected": -1.271844506263733,
74
+ "eval_loss": 2.572023391723633,
75
+ "eval_nll_loss": 2.499573230743408,
76
+ "eval_rewards/accuracies": 0.4620000123977661,
77
+ "eval_rewards/chosen": -0.12974008917808533,
78
+ "eval_rewards/margins": -0.00255563179962337,
79
+ "eval_rewards/rejected": -0.1271844357252121,
80
+ "eval_runtime": 1022.4911,
81
+ "eval_samples_per_second": 0.489,
82
+ "eval_steps_per_second": 0.245,
83
+ "step": 10
84
+ },
85
+ {
86
+ "epoch": 0.08,
87
+ "grad_norm": 2.9545536041259766,
88
+ "learning_rate": 7.555555555555555e-06,
89
+ "log_odds_chosen": -0.004089765250682831,
90
+ "log_odds_ratio": -0.7142491340637207,
91
+ "logits/chosen": -2.9420180320739746,
92
+ "logits/rejected": -2.9477128982543945,
93
+ "logps/chosen": -1.2463443279266357,
94
+ "logps/rejected": -1.2475553750991821,
95
+ "loss": 2.3041,
96
+ "nll_loss": 2.232625961303711,
97
+ "rewards/accuracies": 0.44999998807907104,
98
+ "rewards/chosen": -0.12463442981243134,
99
+ "rewards/margins": 0.00012110620446037501,
100
+ "rewards/rejected": -0.12475553900003433,
101
+ "step": 15
102
+ },
103
+ {
104
+ "epoch": 0.08,
105
+ "eval_log_odds_chosen": -0.027417659759521484,
106
+ "eval_log_odds_ratio": -0.7208431363105774,
107
+ "eval_logits/chosen": -2.9438834190368652,
108
+ "eval_logits/rejected": -2.949639081954956,
109
+ "eval_logps/chosen": -1.2658034563064575,
110
+ "eval_logps/rejected": -1.2459032535552979,
111
+ "eval_loss": 2.15436053276062,
112
+ "eval_nll_loss": 2.0822763442993164,
113
+ "eval_rewards/accuracies": 0.4959999918937683,
114
+ "eval_rewards/chosen": -0.1265803724527359,
115
+ "eval_rewards/margins": -0.0019900340121239424,
116
+ "eval_rewards/rejected": -0.12459032237529755,
117
+ "eval_runtime": 1022.5816,
118
+ "eval_samples_per_second": 0.489,
119
+ "eval_steps_per_second": 0.244,
120
+ "step": 15
121
+ },
122
+ {
123
+ "epoch": 0.11,
124
+ "grad_norm": 2.3092970848083496,
125
+ "learning_rate": 7.11111111111111e-06,
126
+ "log_odds_chosen": -0.05510986968874931,
127
+ "log_odds_ratio": -0.733123779296875,
128
+ "logits/chosen": -2.956840753555298,
129
+ "logits/rejected": -2.9664785861968994,
130
+ "logps/chosen": -1.2388333082199097,
131
+ "logps/rejected": -1.1989364624023438,
132
+ "loss": 2.1002,
133
+ "nll_loss": 2.0268614292144775,
134
+ "rewards/accuracies": 0.4000000059604645,
135
+ "rewards/chosen": -0.12388332188129425,
136
+ "rewards/margins": -0.003989681135863066,
137
+ "rewards/rejected": -0.1198936477303505,
138
+ "step": 20
139
+ },
140
+ {
141
+ "epoch": 0.11,
142
+ "eval_log_odds_chosen": -0.017652124166488647,
143
+ "eval_log_odds_ratio": -0.7160712480545044,
144
+ "eval_logits/chosen": -2.950247287750244,
145
+ "eval_logits/rejected": -2.956831216812134,
146
+ "eval_logps/chosen": -1.2254964113235474,
147
+ "eval_logps/rejected": -1.212648868560791,
148
+ "eval_loss": 2.0004026889801025,
149
+ "eval_nll_loss": 1.9287954568862915,
150
+ "eval_rewards/accuracies": 0.5139999985694885,
151
+ "eval_rewards/chosen": -0.12254965305328369,
152
+ "eval_rewards/margins": -0.001284759840928018,
153
+ "eval_rewards/rejected": -0.12126488238573074,
154
+ "eval_runtime": 1022.5746,
155
+ "eval_samples_per_second": 0.489,
156
+ "eval_steps_per_second": 0.244,
157
+ "step": 20
158
+ },
159
+ {
160
+ "epoch": 0.13,
161
+ "grad_norm": 1.386775255203247,
162
+ "learning_rate": 6.666666666666667e-06,
163
+ "log_odds_chosen": -0.022222867235541344,
164
+ "log_odds_ratio": -0.7165368795394897,
165
+ "logits/chosen": -2.9471981525421143,
166
+ "logits/rejected": -2.9485559463500977,
167
+ "logps/chosen": -1.1919796466827393,
168
+ "logps/rejected": -1.1759862899780273,
169
+ "loss": 1.9579,
170
+ "nll_loss": 1.8862320184707642,
171
+ "rewards/accuracies": 0.4749999940395355,
172
+ "rewards/chosen": -0.11919797956943512,
173
+ "rewards/margins": -0.001599342213012278,
174
+ "rewards/rejected": -0.11759863048791885,
175
+ "step": 25
176
+ },
177
+ {
178
+ "epoch": 0.13,
179
+ "eval_log_odds_chosen": -0.004669802729040384,
180
+ "eval_log_odds_ratio": -0.7097506523132324,
181
+ "eval_logits/chosen": -2.93721604347229,
182
+ "eval_logits/rejected": -2.9449150562286377,
183
+ "eval_logps/chosen": -1.1749383211135864,
184
+ "eval_logps/rejected": -1.1712193489074707,
185
+ "eval_loss": 1.9110203981399536,
186
+ "eval_nll_loss": 1.8400453329086304,
187
+ "eval_rewards/accuracies": 0.5379999876022339,
188
+ "eval_rewards/chosen": -0.11749383807182312,
189
+ "eval_rewards/margins": -0.00037189823342487216,
190
+ "eval_rewards/rejected": -0.11712193489074707,
191
+ "eval_runtime": 1022.6406,
192
+ "eval_samples_per_second": 0.489,
193
+ "eval_steps_per_second": 0.244,
194
+ "step": 25
195
+ },
196
+ {
197
+ "epoch": 0.16,
198
+ "grad_norm": 1.1365916728973389,
199
+ "learning_rate": 6.222222222222222e-06,
200
+ "log_odds_chosen": 0.007038115058094263,
201
+ "log_odds_ratio": -0.706188440322876,
202
+ "logits/chosen": -2.932509660720825,
203
+ "logits/rejected": -2.9490396976470947,
204
+ "logps/chosen": -1.1644437313079834,
205
+ "logps/rejected": -1.1744678020477295,
206
+ "loss": 1.8475,
207
+ "nll_loss": 1.776898741722107,
208
+ "rewards/accuracies": 0.574999988079071,
209
+ "rewards/chosen": -0.11644438654184341,
210
+ "rewards/margins": 0.0010024005314335227,
211
+ "rewards/rejected": -0.11744678020477295,
212
+ "step": 30
213
+ },
214
+ {
215
+ "epoch": 0.16,
216
+ "eval_log_odds_chosen": 0.008890592493116856,
217
+ "eval_log_odds_ratio": -0.7029520273208618,
218
+ "eval_logits/chosen": -2.928407669067383,
219
+ "eval_logits/rejected": -2.9365015029907227,
220
+ "eval_logps/chosen": -1.12498140335083,
221
+ "eval_logps/rejected": -1.1307774782180786,
222
+ "eval_loss": 1.831018328666687,
223
+ "eval_nll_loss": 1.7607231140136719,
224
+ "eval_rewards/accuracies": 0.5580000281333923,
225
+ "eval_rewards/chosen": -0.11249814182519913,
226
+ "eval_rewards/margins": 0.0005796164623461664,
227
+ "eval_rewards/rejected": -0.11307775229215622,
228
+ "eval_runtime": 1022.5768,
229
+ "eval_samples_per_second": 0.489,
230
+ "eval_steps_per_second": 0.244,
231
+ "step": 30
232
+ },
233
+ {
234
+ "epoch": 0.19,
235
+ "grad_norm": 1.2610535621643066,
236
+ "learning_rate": 5.777777777777777e-06,
237
+ "log_odds_chosen": -0.00892117340117693,
238
+ "log_odds_ratio": -0.7147808074951172,
239
+ "logits/chosen": -2.920056104660034,
240
+ "logits/rejected": -2.925736665725708,
241
+ "logps/chosen": -1.0991065502166748,
242
+ "logps/rejected": -1.0894601345062256,
243
+ "loss": 1.7514,
244
+ "nll_loss": 1.6798721551895142,
245
+ "rewards/accuracies": 0.574999988079071,
246
+ "rewards/chosen": -0.10991065204143524,
247
+ "rewards/margins": -0.0009646398248150945,
248
+ "rewards/rejected": -0.10894601047039032,
249
+ "step": 35
250
+ },
251
+ {
252
+ "epoch": 0.19,
253
+ "eval_log_odds_chosen": 0.023294491693377495,
254
+ "eval_log_odds_ratio": -0.6957657933235168,
255
+ "eval_logits/chosen": -2.9222519397735596,
256
+ "eval_logits/rejected": -2.9305875301361084,
257
+ "eval_logps/chosen": -1.0795469284057617,
258
+ "eval_logps/rejected": -1.094985008239746,
259
+ "eval_loss": 1.7491860389709473,
260
+ "eval_nll_loss": 1.6796095371246338,
261
+ "eval_rewards/accuracies": 0.5659999847412109,
262
+ "eval_rewards/chosen": -0.10795468837022781,
263
+ "eval_rewards/margins": 0.0015438096597790718,
264
+ "eval_rewards/rejected": -0.10949849337339401,
265
+ "eval_runtime": 1022.5375,
266
+ "eval_samples_per_second": 0.489,
267
+ "eval_steps_per_second": 0.244,
268
+ "step": 35
269
+ },
270
+ {
271
+ "epoch": 0.21,
272
+ "grad_norm": 1.0528180599212646,
273
+ "learning_rate": 5.333333333333333e-06,
274
+ "log_odds_chosen": 0.0462532714009285,
275
+ "log_odds_ratio": -0.6852200031280518,
276
+ "logits/chosen": -2.9107460975646973,
277
+ "logits/rejected": -2.9340782165527344,
278
+ "logps/chosen": -1.0957037210464478,
279
+ "logps/rejected": -1.1278812885284424,
280
+ "loss": 1.6771,
281
+ "nll_loss": 1.608576774597168,
282
+ "rewards/accuracies": 0.6000000238418579,
283
+ "rewards/chosen": -0.10957036167383194,
284
+ "rewards/margins": 0.0032177655957639217,
285
+ "rewards/rejected": -0.1127881407737732,
286
+ "step": 40
287
+ },
288
+ {
289
+ "epoch": 0.21,
290
+ "eval_log_odds_chosen": 0.03804260119795799,
291
+ "eval_log_odds_ratio": -0.6886653900146484,
292
+ "eval_logits/chosen": -2.9180362224578857,
293
+ "eval_logits/rejected": -2.9264867305755615,
294
+ "eval_logps/chosen": -1.0399984121322632,
295
+ "eval_logps/rejected": -1.0647239685058594,
296
+ "eval_loss": 1.6708022356033325,
297
+ "eval_nll_loss": 1.601935625076294,
298
+ "eval_rewards/accuracies": 0.5759999752044678,
299
+ "eval_rewards/chosen": -0.10399983078241348,
300
+ "eval_rewards/margins": 0.0024725706316530704,
301
+ "eval_rewards/rejected": -0.10647241026163101,
302
+ "eval_runtime": 1022.6911,
303
+ "eval_samples_per_second": 0.489,
304
+ "eval_steps_per_second": 0.244,
305
+ "step": 40
306
+ },
307
+ {
308
+ "epoch": 0.24,
309
+ "grad_norm": 1.1447864770889282,
310
+ "learning_rate": 4.888888888888889e-06,
311
+ "log_odds_chosen": -0.03256305307149887,
312
+ "log_odds_ratio": -0.7240562438964844,
313
+ "logits/chosen": -2.913600444793701,
314
+ "logits/rejected": -2.9339098930358887,
315
+ "logps/chosen": -1.0398883819580078,
316
+ "logps/rejected": -1.0196235179901123,
317
+ "loss": 1.6456,
318
+ "nll_loss": 1.5732176303863525,
319
+ "rewards/accuracies": 0.42500001192092896,
320
+ "rewards/chosen": -0.10398884862661362,
321
+ "rewards/margins": -0.002026502974331379,
322
+ "rewards/rejected": -0.10196234285831451,
323
+ "step": 45
324
+ },
325
+ {
326
+ "epoch": 0.24,
327
+ "eval_log_odds_chosen": 0.05289135128259659,
328
+ "eval_log_odds_ratio": -0.6815592646598816,
329
+ "eval_logits/chosen": -2.9165074825286865,
330
+ "eval_logits/rejected": -2.9247732162475586,
331
+ "eval_logps/chosen": -1.005520224571228,
332
+ "eval_logps/rejected": -1.0391840934753418,
333
+ "eval_loss": 1.591990351676941,
334
+ "eval_nll_loss": 1.523834466934204,
335
+ "eval_rewards/accuracies": 0.5820000171661377,
336
+ "eval_rewards/chosen": -0.1005520299077034,
337
+ "eval_rewards/margins": 0.0033663813956081867,
338
+ "eval_rewards/rejected": -0.1039184182882309,
339
+ "eval_runtime": 1022.6889,
340
+ "eval_samples_per_second": 0.489,
341
+ "eval_steps_per_second": 0.244,
342
+ "step": 45
343
+ },
344
+ {
345
+ "epoch": 0.27,
346
+ "grad_norm": 1.2685421705245972,
347
+ "learning_rate": 4.444444444444444e-06,
348
+ "log_odds_chosen": 0.009501500055193901,
349
+ "log_odds_ratio": -0.7021796107292175,
350
+ "logits/chosen": -2.917152166366577,
351
+ "logits/rejected": -2.9314322471618652,
352
+ "logps/chosen": -0.9614133834838867,
353
+ "logps/rejected": -0.9648770093917847,
354
+ "loss": 1.5613,
355
+ "nll_loss": 1.4910422563552856,
356
+ "rewards/accuracies": 0.6000000238418579,
357
+ "rewards/chosen": -0.09614135324954987,
358
+ "rewards/margins": 0.0003463650937192142,
359
+ "rewards/rejected": -0.09648770838975906,
360
+ "step": 50
361
+ },
362
+ {
363
+ "epoch": 0.27,
364
+ "eval_log_odds_chosen": 0.06923460215330124,
365
+ "eval_log_odds_ratio": -0.6738842129707336,
366
+ "eval_logits/chosen": -2.913623332977295,
367
+ "eval_logits/rejected": -2.9217159748077393,
368
+ "eval_logps/chosen": -0.9739399552345276,
369
+ "eval_logps/rejected": -1.017154335975647,
370
+ "eval_loss": 1.5113871097564697,
371
+ "eval_nll_loss": 1.4439988136291504,
372
+ "eval_rewards/accuracies": 0.5860000252723694,
373
+ "eval_rewards/chosen": -0.09739399701356888,
374
+ "eval_rewards/margins": 0.004321432206779718,
375
+ "eval_rewards/rejected": -0.10171543061733246,
376
+ "eval_runtime": 1022.6706,
377
+ "eval_samples_per_second": 0.489,
378
+ "eval_steps_per_second": 0.244,
379
+ "step": 50
380
+ },
381
+ {
382
+ "epoch": 0.29,
383
+ "grad_norm": 1.4695621728897095,
384
+ "learning_rate": 4e-06,
385
+ "log_odds_chosen": -0.04625839367508888,
386
+ "log_odds_ratio": -0.7300089001655579,
387
+ "logits/chosen": -2.9374871253967285,
388
+ "logits/rejected": -2.9447882175445557,
389
+ "logps/chosen": -0.9806947708129883,
390
+ "logps/rejected": -0.9502049684524536,
391
+ "loss": 1.5105,
392
+ "nll_loss": 1.4374949932098389,
393
+ "rewards/accuracies": 0.4000000059604645,
394
+ "rewards/chosen": -0.09806947410106659,
395
+ "rewards/margins": -0.0030489820055663586,
396
+ "rewards/rejected": -0.09502050280570984,
397
+ "step": 55
398
+ },
399
+ {
400
+ "epoch": 0.29,
401
+ "eval_log_odds_chosen": 0.08856023102998734,
402
+ "eval_log_odds_ratio": -0.6650052666664124,
403
+ "eval_logits/chosen": -2.9045321941375732,
404
+ "eval_logits/rejected": -2.912545919418335,
405
+ "eval_logps/chosen": -0.9449537992477417,
406
+ "eval_logps/rejected": -0.9992481470108032,
407
+ "eval_loss": 1.4219770431518555,
408
+ "eval_nll_loss": 1.3554766178131104,
409
+ "eval_rewards/accuracies": 0.6159999966621399,
410
+ "eval_rewards/chosen": -0.09449537843465805,
411
+ "eval_rewards/margins": 0.0054294392466545105,
412
+ "eval_rewards/rejected": -0.09992481023073196,
413
+ "eval_runtime": 1022.7786,
414
+ "eval_samples_per_second": 0.489,
415
+ "eval_steps_per_second": 0.244,
416
+ "step": 55
417
+ },
418
+ {
419
+ "epoch": 0.32,
420
+ "grad_norm": 1.6138032674789429,
421
+ "learning_rate": 3.555555555555555e-06,
422
+ "log_odds_chosen": 0.13964158296585083,
423
+ "log_odds_ratio": -0.635947585105896,
424
+ "logits/chosen": -2.8947017192840576,
425
+ "logits/rejected": -2.9101452827453613,
426
+ "logps/chosen": -0.8633010983467102,
427
+ "logps/rejected": -0.9464691281318665,
428
+ "loss": 1.3538,
429
+ "nll_loss": 1.2902143001556396,
430
+ "rewards/accuracies": 0.625,
431
+ "rewards/chosen": -0.0863301008939743,
432
+ "rewards/margins": 0.008316809311509132,
433
+ "rewards/rejected": -0.09464691579341888,
434
+ "step": 60
435
+ },
436
+ {
437
+ "epoch": 0.32,
438
+ "eval_log_odds_chosen": 0.10931771248579025,
439
+ "eval_log_odds_ratio": -0.6556519865989685,
440
+ "eval_logits/chosen": -2.892199993133545,
441
+ "eval_logits/rejected": -2.9001097679138184,
442
+ "eval_logps/chosen": -0.917702317237854,
443
+ "eval_logps/rejected": -0.9835168123245239,
444
+ "eval_loss": 1.335745096206665,
445
+ "eval_nll_loss": 1.270180106163025,
446
+ "eval_rewards/accuracies": 0.6359999775886536,
447
+ "eval_rewards/chosen": -0.091770239174366,
448
+ "eval_rewards/margins": 0.006581444293260574,
449
+ "eval_rewards/rejected": -0.09835167974233627,
450
+ "eval_runtime": 1022.7752,
451
+ "eval_samples_per_second": 0.489,
452
+ "eval_steps_per_second": 0.244,
453
+ "step": 60
454
+ },
455
+ {
456
+ "epoch": 0.35,
457
+ "grad_norm": 1.5814971923828125,
458
+ "learning_rate": 3.111111111111111e-06,
459
+ "log_odds_chosen": 0.10876095294952393,
460
+ "log_odds_ratio": -0.6668508648872375,
461
+ "logits/chosen": -2.8826332092285156,
462
+ "logits/rejected": -2.8916373252868652,
463
+ "logps/chosen": -0.9191996455192566,
464
+ "logps/rejected": -0.9877899289131165,
465
+ "loss": 1.3041,
466
+ "nll_loss": 1.2373846769332886,
467
+ "rewards/accuracies": 0.574999988079071,
468
+ "rewards/chosen": -0.09191995859146118,
469
+ "rewards/margins": 0.006859032902866602,
470
+ "rewards/rejected": -0.09877900034189224,
471
+ "step": 65
472
+ },
473
+ {
474
+ "epoch": 0.35,
475
+ "eval_log_odds_chosen": 0.1319531947374344,
476
+ "eval_log_odds_ratio": -0.6457509398460388,
477
+ "eval_logits/chosen": -2.88071870803833,
478
+ "eval_logits/rejected": -2.888591766357422,
479
+ "eval_logps/chosen": -0.891889750957489,
480
+ "eval_logps/rejected": -0.9699481129646301,
481
+ "eval_loss": 1.2609809637069702,
482
+ "eval_nll_loss": 1.1964057683944702,
483
+ "eval_rewards/accuracies": 0.6439999938011169,
484
+ "eval_rewards/chosen": -0.08918897062540054,
485
+ "eval_rewards/margins": 0.007805844768881798,
486
+ "eval_rewards/rejected": -0.09699482470750809,
487
+ "eval_runtime": 1022.7672,
488
+ "eval_samples_per_second": 0.489,
489
+ "eval_steps_per_second": 0.244,
490
+ "step": 65
491
+ },
492
+ {
493
+ "epoch": 0.37,
494
+ "grad_norm": 1.659564733505249,
495
+ "learning_rate": 2.6666666666666664e-06,
496
+ "log_odds_chosen": 0.12550300359725952,
497
+ "log_odds_ratio": -0.649591326713562,
498
+ "logits/chosen": -2.8679802417755127,
499
+ "logits/rejected": -2.8701627254486084,
500
+ "logps/chosen": -0.8406276702880859,
501
+ "logps/rejected": -0.909234344959259,
502
+ "loss": 1.2107,
503
+ "nll_loss": 1.1457639932632446,
504
+ "rewards/accuracies": 0.6499999761581421,
505
+ "rewards/chosen": -0.08406275510787964,
506
+ "rewards/margins": 0.006860665045678616,
507
+ "rewards/rejected": -0.09092343598604202,
508
+ "step": 70
509
+ },
510
+ {
511
+ "epoch": 0.37,
512
+ "eval_log_odds_chosen": 0.157026469707489,
513
+ "eval_log_odds_ratio": -0.6350610256195068,
514
+ "eval_logits/chosen": -2.8666250705718994,
515
+ "eval_logits/rejected": -2.874605655670166,
516
+ "eval_logps/chosen": -0.8690526485443115,
517
+ "eval_logps/rejected": -0.9602731466293335,
518
+ "eval_loss": 1.194831132888794,
519
+ "eval_nll_loss": 1.131325125694275,
520
+ "eval_rewards/accuracies": 0.6700000166893005,
521
+ "eval_rewards/chosen": -0.08690525591373444,
522
+ "eval_rewards/margins": 0.009122052229940891,
523
+ "eval_rewards/rejected": -0.09602731466293335,
524
+ "eval_runtime": 1022.8442,
525
+ "eval_samples_per_second": 0.489,
526
+ "eval_steps_per_second": 0.244,
527
+ "step": 70
528
+ },
529
+ {
530
+ "epoch": 0.4,
531
+ "grad_norm": 1.6513782739639282,
532
+ "learning_rate": 2.222222222222222e-06,
533
+ "log_odds_chosen": 0.17446637153625488,
534
+ "log_odds_ratio": -0.6316618323326111,
535
+ "logits/chosen": -2.8589344024658203,
536
+ "logits/rejected": -2.869570732116699,
537
+ "logps/chosen": -0.8645883798599243,
538
+ "logps/rejected": -0.9688776135444641,
539
+ "loss": 1.1669,
540
+ "nll_loss": 1.1037009954452515,
541
+ "rewards/accuracies": 0.7250000238418579,
542
+ "rewards/chosen": -0.08645883202552795,
543
+ "rewards/margins": 0.010428930632770061,
544
+ "rewards/rejected": -0.0968877524137497,
545
+ "step": 75
546
+ },
547
+ {
548
+ "epoch": 0.4,
549
+ "eval_log_odds_chosen": 0.18124479055404663,
550
+ "eval_log_odds_ratio": -0.6248182058334351,
551
+ "eval_logits/chosen": -2.8483848571777344,
552
+ "eval_logits/rejected": -2.856492757797241,
553
+ "eval_logps/chosen": -0.8498203754425049,
554
+ "eval_logps/rejected": -0.9533628225326538,
555
+ "eval_loss": 1.1421345472335815,
556
+ "eval_nll_loss": 1.0796527862548828,
557
+ "eval_rewards/accuracies": 0.6779999732971191,
558
+ "eval_rewards/chosen": -0.08498203754425049,
559
+ "eval_rewards/margins": 0.010354244150221348,
560
+ "eval_rewards/rejected": -0.09533628821372986,
561
+ "eval_runtime": 1022.9696,
562
+ "eval_samples_per_second": 0.489,
563
+ "eval_steps_per_second": 0.244,
564
+ "step": 75
565
+ },
566
+ {
567
+ "epoch": 0.43,
568
+ "grad_norm": 1.5368865728378296,
569
+ "learning_rate": 1.7777777777777775e-06,
570
+ "log_odds_chosen": 0.14982302486896515,
571
+ "log_odds_ratio": -0.6351920962333679,
572
+ "logits/chosen": -2.850210666656494,
573
+ "logits/rejected": -2.8676600456237793,
574
+ "logps/chosen": -0.8382354974746704,
575
+ "logps/rejected": -0.9211069345474243,
576
+ "loss": 1.1242,
577
+ "nll_loss": 1.0606878995895386,
578
+ "rewards/accuracies": 0.6000000238418579,
579
+ "rewards/chosen": -0.0838235467672348,
580
+ "rewards/margins": 0.00828714668750763,
581
+ "rewards/rejected": -0.09211069345474243,
582
+ "step": 80
583
+ },
584
+ {
585
+ "epoch": 0.43,
586
+ "eval_log_odds_chosen": 0.19973799586296082,
587
+ "eval_log_odds_ratio": -0.6170535683631897,
588
+ "eval_logits/chosen": -2.836185932159424,
589
+ "eval_logits/rejected": -2.8443472385406494,
590
+ "eval_logps/chosen": -0.8345023989677429,
591
+ "eval_logps/rejected": -0.9470006823539734,
592
+ "eval_loss": 1.096219778060913,
593
+ "eval_nll_loss": 1.0345144271850586,
594
+ "eval_rewards/accuracies": 0.6980000138282776,
595
+ "eval_rewards/chosen": -0.08345023542642593,
596
+ "eval_rewards/margins": 0.01124983187764883,
597
+ "eval_rewards/rejected": -0.09470007568597794,
598
+ "eval_runtime": 1022.8769,
599
+ "eval_samples_per_second": 0.489,
600
+ "eval_steps_per_second": 0.244,
601
+ "step": 80
602
+ },
603
+ {
604
+ "epoch": 0.45,
605
+ "grad_norm": 1.7374863624572754,
606
+ "learning_rate": 1.3333333333333332e-06,
607
+ "log_odds_chosen": 0.2539418935775757,
608
+ "log_odds_ratio": -0.595883309841156,
609
+ "logits/chosen": -2.8221487998962402,
610
+ "logits/rejected": -2.8260178565979004,
611
+ "logps/chosen": -0.8204309344291687,
612
+ "logps/rejected": -0.9650068283081055,
613
+ "loss": 1.0908,
614
+ "nll_loss": 1.031245470046997,
615
+ "rewards/accuracies": 0.699999988079071,
616
+ "rewards/chosen": -0.0820431038737297,
617
+ "rewards/margins": 0.014457575976848602,
618
+ "rewards/rejected": -0.09650067985057831,
619
+ "step": 85
620
+ },
621
+ {
622
+ "epoch": 0.45,
623
+ "eval_log_odds_chosen": 0.2158733457326889,
624
+ "eval_log_odds_ratio": -0.61031174659729,
625
+ "eval_logits/chosen": -2.8228869438171387,
626
+ "eval_logits/rejected": -2.831103563308716,
627
+ "eval_logps/chosen": -0.8219822645187378,
628
+ "eval_logps/rejected": -0.9421792030334473,
629
+ "eval_loss": 1.0606101751327515,
630
+ "eval_nll_loss": 0.9995790719985962,
631
+ "eval_rewards/accuracies": 0.722000002861023,
632
+ "eval_rewards/chosen": -0.08219822496175766,
633
+ "eval_rewards/margins": 0.012019689194858074,
634
+ "eval_rewards/rejected": -0.0942179262638092,
635
+ "eval_runtime": 1022.854,
636
+ "eval_samples_per_second": 0.489,
637
+ "eval_steps_per_second": 0.244,
638
+ "step": 85
639
+ },
640
+ {
641
+ "epoch": 0.48,
642
+ "grad_norm": 1.5390028953552246,
643
+ "learning_rate": 8.888888888888888e-07,
644
+ "log_odds_chosen": 0.12312959134578705,
645
+ "log_odds_ratio": -0.6452575922012329,
646
+ "logits/chosen": -2.799548864364624,
647
+ "logits/rejected": -2.790161371231079,
648
+ "logps/chosen": -0.9113782048225403,
649
+ "logps/rejected": -0.978347897529602,
650
+ "loss": 1.0736,
651
+ "nll_loss": 1.009043574333191,
652
+ "rewards/accuracies": 0.6499999761581421,
653
+ "rewards/chosen": -0.09113781899213791,
654
+ "rewards/margins": 0.006696968339383602,
655
+ "rewards/rejected": -0.09783479571342468,
656
+ "step": 90
657
+ },
658
+ {
659
+ "epoch": 0.48,
660
+ "eval_log_odds_chosen": 0.22884733974933624,
661
+ "eval_log_odds_ratio": -0.6049832701683044,
662
+ "eval_logits/chosen": -2.80894136428833,
663
+ "eval_logits/rejected": -2.8171987533569336,
664
+ "eval_logps/chosen": -0.8134253025054932,
665
+ "eval_logps/rejected": -0.9399018883705139,
666
+ "eval_loss": 1.0346410274505615,
667
+ "eval_nll_loss": 0.9741426110267639,
668
+ "eval_rewards/accuracies": 0.7319999933242798,
669
+ "eval_rewards/chosen": -0.08134253323078156,
670
+ "eval_rewards/margins": 0.012647655792534351,
671
+ "eval_rewards/rejected": -0.09399019181728363,
672
+ "eval_runtime": 1022.8962,
673
+ "eval_samples_per_second": 0.489,
674
+ "eval_steps_per_second": 0.244,
675
+ "step": 90
676
+ },
677
+ {
678
+ "epoch": 0.51,
679
+ "grad_norm": 1.422467589378357,
680
+ "learning_rate": 4.444444444444444e-07,
681
+ "log_odds_chosen": 0.30131229758262634,
682
+ "log_odds_ratio": -0.5648499131202698,
683
+ "logits/chosen": -2.788285732269287,
684
+ "logits/rejected": -2.8017640113830566,
685
+ "logps/chosen": -0.8434479832649231,
686
+ "logps/rejected": -1.0182573795318604,
687
+ "loss": 0.9874,
688
+ "nll_loss": 0.9309436678886414,
689
+ "rewards/accuracies": 0.875,
690
+ "rewards/chosen": -0.0843447893857956,
691
+ "rewards/margins": 0.017480935901403427,
692
+ "rewards/rejected": -0.10182573646306992,
693
+ "step": 95
694
+ },
695
+ {
696
+ "epoch": 0.51,
697
+ "eval_log_odds_chosen": 0.23686276376247406,
698
+ "eval_log_odds_ratio": -0.6016983389854431,
699
+ "eval_logits/chosen": -2.800295829772949,
700
+ "eval_logits/rejected": -2.808558702468872,
701
+ "eval_logps/chosen": -0.8078622221946716,
702
+ "eval_logps/rejected": -0.9380316734313965,
703
+ "eval_loss": 1.018244981765747,
704
+ "eval_nll_loss": 0.9580751061439514,
705
+ "eval_rewards/accuracies": 0.7379999756813049,
706
+ "eval_rewards/chosen": -0.08078622072935104,
707
+ "eval_rewards/margins": 0.013016944751143456,
708
+ "eval_rewards/rejected": -0.09380316734313965,
709
+ "eval_runtime": 1022.8824,
710
+ "eval_samples_per_second": 0.489,
711
+ "eval_steps_per_second": 0.244,
712
+ "step": 95
713
+ },
714
+ {
715
+ "epoch": 0.53,
716
+ "grad_norm": 1.5611515045166016,
717
+ "learning_rate": 0.0,
718
+ "log_odds_chosen": 0.25769370794296265,
719
+ "log_odds_ratio": -0.5990554094314575,
720
+ "logits/chosen": -2.7713541984558105,
721
+ "logits/rejected": -2.770143508911133,
722
+ "logps/chosen": -0.9012691378593445,
723
+ "logps/rejected": -1.0490459203720093,
724
+ "loss": 1.0083,
725
+ "nll_loss": 0.9483591914176941,
726
+ "rewards/accuracies": 0.675000011920929,
727
+ "rewards/chosen": -0.09012691676616669,
728
+ "rewards/margins": 0.014777670614421368,
729
+ "rewards/rejected": -0.10490457713603973,
730
+ "step": 100
731
+ },
732
+ {
733
+ "epoch": 0.53,
734
+ "eval_log_odds_chosen": 0.23977938294410706,
735
+ "eval_log_odds_ratio": -0.6004971861839294,
736
+ "eval_logits/chosen": -2.796983003616333,
737
+ "eval_logits/rejected": -2.8052423000335693,
738
+ "eval_logps/chosen": -0.8057038187980652,
739
+ "eval_logps/rejected": -0.9371922612190247,
740
+ "eval_loss": 1.0119801759719849,
741
+ "eval_nll_loss": 0.9519305229187012,
742
+ "eval_rewards/accuracies": 0.7419999837875366,
743
+ "eval_rewards/chosen": -0.08057038486003876,
744
+ "eval_rewards/margins": 0.013148845173418522,
745
+ "eval_rewards/rejected": -0.09371922165155411,
746
+ "eval_runtime": 1022.9549,
747
+ "eval_samples_per_second": 0.489,
748
+ "eval_steps_per_second": 0.244,
749
+ "step": 100
750
+ }
751
+ ],
752
+ "logging_steps": 5,
753
+ "max_steps": 100,
754
+ "num_input_tokens_seen": 0,
755
+ "num_train_epochs": 1,
756
+ "save_steps": 10,
757
+ "total_flos": 0.0,
758
+ "train_batch_size": 2,
759
+ "trial_name": null,
760
+ "trial_params": null
761
+ }
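`log_history` interleaves training entries (`loss`, `grad_norm`, `learning_rate`) with evaluation entries (`eval_*` keys). The metric names (`log_odds_ratio`, `nll_loss`, `rewards/*`) resemble those reported by odds-ratio preference (ORPO-style) trainers, although the card does not state which training recipe was used. A sketch for pulling the loss curves out of the file, assuming a local copy:

```python
# Sketch: extract train/eval loss curves from trainer_state.json.
import json

with open("checkpoint-100/trainer_state.json") as f:  # hypothetical local path
    state = json.load(f)

train = [(e["step"], e["loss"]) for e in state["log_history"] if "loss" in e]
evals = [(e["step"], e["eval_loss"]) for e in state["log_history"] if "eval_loss" in e]

print(f"trained {state['global_step']} steps over {state['epoch']:.2f} epochs")
print("train loss:", train[0], "->", train[-1])  # ~2.92 at step 5, ~1.01 at step 100
print("eval loss: ", evals[0], "->", evals[-1])  # ~2.92 at step 5, ~1.01 at step 100
```

Over the 100 logged steps, train loss drops from about 2.92 to 1.01 and `eval_rewards/accuracies` rises from roughly 0.46 to 0.74, so the run was still improving when it reached `max_steps`.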
checkpoint-100/training_args.bin ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:6d9737abfc82e80f1ffe42063a196d09bdb7c9108c466f05a465c4921fc65a67
+ size 5240
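`training_args.bin` is conventionally the pickled training-arguments object that the `transformers` Trainer writes next to each checkpoint. It can be inspected as below; recent PyTorch versions need `weights_only=False` because this is an arbitrary pickled object (a sketch, and the attribute names assume standard `TrainingArguments` fields):

```python
# Sketch: inspect the saved training arguments.
import torch

args = torch.load("checkpoint-100/training_args.bin", weights_only=False)  # local path assumed
print(type(args).__name__)  # TrainingArguments or a trainer-specific subclass
print(args.learning_rate, args.max_steps, args.per_device_train_batch_size)
```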
runs/Apr12_07-31-24_b5c8ec495088/events.out.tfevents.1712907095.b5c8ec495088.34.0 ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:aa24d8ef7a58e17aa8472f29baec686c74e90a0721d1437cd10ead5da560d8bf
+ size 40528