---
base_model: unsloth/qwen2.5-7b-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- qwen2
- trl
license: apache-2.0
language:
- en
datasets:
- yahma/alpaca-cleaned
pipeline_tag: text-generation
---

# qwen2.5-7b-lora

- **Developed by:** ihumaunkabir
- **License:** apache-2.0
- **Finetuned from model:** unsloth/qwen2.5-7b-unsloth-bnb-4bit

This Qwen2 [1] model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library.

[1] A. Yang et al., “Qwen2 Technical Report,” arXiv preprint arXiv:2407.10671, 2024.
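
## Usage

Below is a minimal inference sketch. It assumes the adapter is published as `ihumaunkabir/qwen2.5-7b-lora` (the repository id is an assumption based on the card title and author) and that prompts follow the Alpaca instruction template used by `yahma/alpaca-cleaned`; adjust both to match the actual repository and training prompt format.

```python
# Minimal inference sketch (assumptions: the adapter lives at
# "ihumaunkabir/qwen2.5-7b-lora" and prompts use the Alpaca template
# from yahma/alpaca-cleaned).
from unsloth import FastLanguageModel

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="ihumaunkabir/qwen2.5-7b-lora",  # assumed repository id
    max_seq_length=2048,
    load_in_4bit=True,
)
FastLanguageModel.for_inference(model)  # enable Unsloth's fast inference path

# Alpaca-style prompt, matching the yahma/alpaca-cleaned format.
prompt = (
    "Below is an instruction that describes a task. "
    "Write a response that appropriately completes the request.\n\n"
    "### Instruction:\nExplain what a LoRA adapter is.\n\n### Response:\n"
)

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```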