Small Model Learnability Gap: Models
This model is a fine-tuned version of meta-llama/Llama-3.1-8B-Instruct on the MATH_training_Qwen2.5-32B-Instruct dataset. It achieves the following results on the evaluation set:

- Loss: 0.2050
## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed
The following results were recorded during training:
| Training Loss | Epoch  | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 0.2115        | 0.5988 | 200  | 0.2219          |
| 0.0572        | 1.1976 | 400  | 0.2203          |
| 0.0592        | 1.7964 | 600  | 0.2050          |
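As a quick consistency check (not part of the original card), the epoch and step columns above imply a constant number of optimizer steps per epoch; the snippet below is a minimal sketch that verifies this from the table values:

```python
# Rows of the training-results table: (train_loss, epoch, step, val_loss).
rows = [
    (0.2115, 0.5988, 200, 0.2219),
    (0.0572, 1.1976, 400, 0.2203),
    (0.0592, 1.7964, 600, 0.2050),
]

# step / epoch should be the same for every logged row.
steps_per_epoch = [round(step / epoch) for _, epoch, step, _ in rows]
print(steps_per_epoch)  # every row implies ~334 optimizer steps per epoch
```

This suggests the run covered roughly two epochs over a dataset of about 334 optimizer steps per epoch at the configured batch size (the exact batch size is not stated in the card).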
Base model: meta-llama/Llama-3.1-8B