# Small Model Learnability Gap: Models
This model is a fine-tuned version of meta-llama/Llama-3.2-3B-Instruct on the MATH_training_Qwen_QwQ_32B_Preview dataset. It achieves the following results on the evaluation set:
- Loss: 0.3729
## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed
## Training procedure

### Training results
| Training Loss | Epoch  | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 0.3483        | 0.5988 | 200  | 0.3796          |
| 0.2544        | 1.1976 | 400  | 0.3797          |
| 0.2508        | 1.7964 | 600  | 0.3729          |
**Base model:** meta-llama/Llama-3.2-3B-Instruct
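Since this card describes a fine-tuned Llama-3.2-3B-Instruct checkpoint, a quick-start snippet along the following lines is the usual way to run it. This is a minimal sketch using the standard `transformers` chat API; the repo id `YOUR_ORG/llama-3.2-3b-math-sft` is a placeholder, not the actual Hub path of this model, and should be replaced with the real repository name.

```python
MODEL_ID = "YOUR_ORG/llama-3.2-3b-math-sft"  # placeholder repo id, not the real one


def generate_answer(prompt: str, model_id: str = MODEL_ID, max_new_tokens: int = 512) -> str:
    """Load the fine-tuned checkpoint and greedily generate a response to a math prompt."""
    # Imports are kept inside the function so the module can be inspected
    # without torch/transformers installed.
    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained(model_id)
    model = AutoModelForCausalLM.from_pretrained(
        model_id, torch_dtype=torch.bfloat16, device_map="auto"
    )

    # Llama-3.2 Instruct checkpoints expect the chat template, not raw text.
    messages = [{"role": "user", "content": prompt}]
    input_ids = tokenizer.apply_chat_template(
        messages, add_generation_prompt=True, return_tensors="pt"
    ).to(model.device)

    with torch.no_grad():
        output = model.generate(input_ids, max_new_tokens=max_new_tokens, do_sample=False)

    # Strip the prompt tokens and decode only the completion.
    return tokenizer.decode(output[0, input_ids.shape[-1]:], skip_special_tokens=True)


# Example call (downloads the checkpoint on first use):
# print(generate_answer("What is the sum of the first 100 positive integers?"))
```

Greedy decoding (`do_sample=False`) is a reasonable default for math evaluation, where deterministic outputs make results reproducible.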