# llm3br256-v
This model is a fine-tuned version of meta-llama/Llama-3.2-3B-Instruct on the Goavanto dataset. It achieves the following results on the evaluation set:
- Loss: 0.0185
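The card does not yet document usage, but the framework versions below indicate this is a PEFT adapter trained on top of the base model. Below is a minimal loading sketch, assuming the adapter weights are hosted under a hypothetical repo id (`your-org/llm3br256-v`); substitute the actual location.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

# Base model named in this card (gated; requires accepted license / HF auth).
base_id = "meta-llama/Llama-3.2-3B-Instruct"
# Hypothetical adapter repo id; the card does not state where the weights live.
adapter_id = "your-org/llm3br256-v"

tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(
    base_id, torch_dtype=torch.bfloat16, device_map="auto"
)
# Attach the fine-tuned PEFT adapter on top of the frozen base weights.
model = PeftModel.from_pretrained(base, adapter_id)

messages = [{"role": "user", "content": "Hello!"}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
outputs = model.generate(inputs, max_new_tokens=64)
# Decode only the newly generated tokens, not the prompt.
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```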
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure

### Training hyperparameters
The following hyperparameters were used during training (mirrored in the configuration sketch after this list):
- learning_rate: 0.0001
- train_batch_size: 48
- eval_batch_size: 48
- seed: 42
- optimizer: AdamW (torch implementation) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 25
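For reference, a sketch of how these values map onto `transformers.TrainingArguments`; the `output_dir` and everything outside the list above are placeholders, since the card does not specify the rest of the training setup.

```python
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="llm3br256-v",          # placeholder; not stated in the card
    learning_rate=1e-4,
    per_device_train_batch_size=48,
    per_device_eval_batch_size=48,
    seed=42,
    optim="adamw_torch",               # AdamW, torch implementation
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="cosine",
    warmup_ratio=0.1,
    num_train_epochs=25,
)
```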
### Training results
| Training Loss | Epoch  | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 0.1227        | 0.2475 | 25   | 0.1074          |
| 0.0553        | 0.4950 | 50   | 0.0551          |
| 0.0338        | 0.7426 | 75   | 0.0350          |
| 0.0302        | 0.9901 | 100  | 0.0277          |
| 0.0373        | 1.2376 | 125  | 0.0256          |
| 0.0321        | 1.4851 | 150  | 0.0251          |
| 0.026         | 1.7327 | 175  | 0.0228          |
| 0.029         | 1.9802 | 200  | 0.0212          |
| 0.0152        | 2.2277 | 225  | 0.0216          |
| 0.011         | 2.4752 | 250  | 0.0205          |
| 0.0154        | 2.7228 | 275  | 0.0194          |
| 0.021         | 2.9703 | 300  | 0.0192          |
| 0.0282        | 3.2178 | 325  | 0.0186          |
| 0.007         | 3.4653 | 350  | 0.0181          |
| 0.017         | 3.7129 | 375  | 0.0188          |
| 0.0315        | 3.9604 | 400  | 0.0185          |
| 0.0156        | 4.2079 | 425  | 0.0193          |
| 0.0059        | 4.4554 | 450  | 0.0197          |
| 0.0136        | 4.7030 | 475  | 0.0198          |
| 0.0092        | 4.9505 | 500  | 0.0217          |
| 0.008         | 5.1980 | 525  | 0.0189          |
### Framework versions
- PEFT 0.12.0
- Transformers 4.46.1
- Pytorch 2.4.0+cu121
- Datasets 3.1.0
- Tokenizers 0.20.3