DongfuJiang committed: "End of training"

Files changed:
- README.md (+2 -1)
- all_results.json (+12 -0)
- eval_results.json (+7 -0)
- train_results.json (+8 -0)
- trainer_state.json (+0 -0)
- training_eval_loss.png (+0 -0)
- training_loss.png (+0 -0)
README.md CHANGED

@@ -4,6 +4,7 @@ license: llama3.1
 base_model: meta-llama/Meta-Llama-3.1-8B-Instruct
 tags:
 - llama-factory
+- full
 - generated_from_trainer
 model-index:
 - name: prm_version3_full_hf
@@ -15,7 +16,7 @@ should probably proofread and complete it, then remove this comment. -->
 
 # prm_version3_full_hf
 
-This model is a fine-tuned version of [meta-llama/Meta-Llama-3.1-8B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3.1-8B-Instruct) on
+This model is a fine-tuned version of [meta-llama/Meta-Llama-3.1-8B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3.1-8B-Instruct) on the prm_conversations_prm_version3_math+webinstructsub-mcq+webinstructsub-oe+apps+gsm_mix_ref_hf dataset.
 It achieves the following results on the evaluation set:
 - Loss: 0.1166
 
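For quick reference, below is a minimal sketch of loading the resulting checkpoint with the transformers library. The hub repo id `DongfuJiang/prm_version3_full_hf` and the use of `AutoModelForCausalLM` are assumptions (inferred from the committer name, the model-index name, and the Llama base model); neither is stated in the diff.

```python
# Minimal sketch of loading the fine-tuned checkpoint with transformers.
# ASSUMPTIONS: the repo id below is inferred from the committer and model
# name, and AutoModelForCausalLM is a guess for a Llama-based fine-tune;
# a process reward model may ship a different head.
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "DongfuJiang/prm_version3_full_hf"  # hypothetical repo id

tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForCausalLM.from_pretrained(
    repo_id,
    torch_dtype="auto",   # keep the dtype stored in the checkpoint
    device_map="auto",    # requires accelerate; drop for CPU-only loading
)
```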
all_results.json ADDED

@@ -0,0 +1,12 @@
+{
+    "epoch": 0.999953931911365,
+    "eval_loss": 0.11656492948532104,
+    "eval_runtime": 300.7975,
+    "eval_samples_per_second": 23.328,
+    "eval_steps_per_second": 2.919,
+    "total_flos": 1908258935930880.0,
+    "train_loss": 0.14796658507510943,
+    "train_runtime": 123515.0295,
+    "train_samples_per_second": 5.624,
+    "train_steps_per_second": 0.088
+}
eval_results.json ADDED

@@ -0,0 +1,7 @@
+{
+    "epoch": 0.999953931911365,
+    "eval_loss": 0.11656492948532104,
+    "eval_runtime": 300.7975,
+    "eval_samples_per_second": 23.328,
+    "eval_steps_per_second": 2.919
+}
train_results.json ADDED

@@ -0,0 +1,8 @@
+{
+    "epoch": 0.999953931911365,
+    "total_flos": 1908258935930880.0,
+    "train_loss": 0.14796658507510943,
+    "train_runtime": 123515.0295,
+    "train_samples_per_second": 5.624,
+    "train_steps_per_second": 0.088
+}
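As the three diffs above show, all_results.json is simply the union of eval_results.json and train_results.json. A small sketch that verifies this and derives the approximate number of training samples processed from the throughput fields:

```python
# Sketch: verify that all_results.json is the union of the eval and train
# result files, and derive a rough sample count from the throughput fields.
import json

def load(path: str) -> dict:
    with open(path) as f:
        return json.load(f)

all_results = load("all_results.json")
merged = {**load("eval_results.json"), **load("train_results.json")}
assert all_results == merged, "all_results.json diverges from the merged files"

# runtime * throughput: 123515.0295 s * 5.624 samples/s, about 695k samples
# over the ~1 epoch run
print(all_results["train_runtime"] * all_results["train_samples_per_second"])
```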
trainer_state.json ADDED

The diff for this file is too large to render; see the raw diff.

training_eval_loss.png ADDED

training_loss.png ADDED