# Qwen2.5-1.5B-Instruct Fine-tuned Model
This model is a fine-tuned version of Qwen/Qwen2.5-1.5B-Instruct using LoRA (Low-Rank Adaptation).
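To make the LoRA idea concrete, here is a minimal NumPy sketch (toy dimensions, not the actual Qwen2.5 layer shapes): instead of updating a full weight matrix `W`, LoRA freezes `W` and learns two small factors `B` and `A`, applying `W + (alpha/r) * B @ A`.

```python
import numpy as np

# Toy LoRA sketch. All shapes are illustrative assumptions,
# not the real Qwen2.5-1.5B dimensions.
d_out, d_in, r, alpha = 64, 64, 8, 16

rng = np.random.default_rng(0)
W = rng.standard_normal((d_out, d_in))  # frozen base weight
A = rng.standard_normal((r, d_in))      # trainable low-rank factor
B = np.zeros((d_out, r))                # zero-initialized, so the update starts at 0

delta = (alpha / r) * B @ A             # low-rank weight update
W_eff = W + delta                       # effective weight used at inference

full_params = d_out * d_in              # parameters to update without LoRA
lora_params = d_out * r + r * d_in      # trainable parameters with rank r=8
print(full_params, lora_params)         # → 4096 1024
```

Because `B` starts at zero, the effective weight initially equals the frozen base weight, and only the small `A`/`B` factors receive gradient updates.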
## Training Details
- Trained for 2 epochs on a custom dataset
- Used 4-bit quantization for memory-efficient training
- Applied the LoRA+ technique with a learning-rate ratio of 16.0
- Trained with a per-device batch size of 1 and 12 gradient accumulation steps (effective batch size 12)
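The batch-size setting above can be illustrated with a toy gradient-accumulation loop: gradients from 12 micro-batches of size 1 are averaged before a single optimizer step, giving an effective batch size of 12. The scalar model and learning rate here are illustrative, not from the actual training script.

```python
# Toy sketch of gradient accumulation: per-device batch size 1,
# 12 accumulation steps -> effective batch size 12.
batch_size = 1
grad_accum_steps = 12
effective_batch = batch_size * grad_accum_steps

w = 0.0   # toy scalar "model"
lr = 0.1  # illustrative learning rate
data = [float(i) for i in range(effective_batch)]  # 12 single-example micro-batches

grad = 0.0
for x in data:
    # loss = 0.5 * (w - x)^2  ->  dloss/dw = (w - x)
    grad += (w - x) / grad_accum_steps  # running average over micro-batches
w -= lr * grad  # one optimizer step after all 12 micro-batches
print(effective_batch, round(w, 4))  # → 12 0.55
```

Accumulating gradients this way trades wall-clock time for memory: each forward/backward pass holds only one example, while the optimizer still sees an averaged gradient over 12 examples.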