---
base_model: unsloth/phi-4-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
license: apache-2.0
language:
- en
datasets:
- bespokelabs/Bespoke-Stratos-17k
---
# Phi4 Turn R1Distill LoRA Adapters
## Overview
Hey! These LoRA adapters were trained on reasoning datasets that structure responses into **Thought** and **Solution** sections.
I hope they help jumpstart your project! All adapters were trained on an **A800 GPU** and should provide a solid base for further fine-tuning or merging.
Everything on my page is left **public** for open-source use.
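
Since responses are split into Thought and Solution sections, a small helper can separate the two for display or evaluation. This is a minimal sketch assuming Bespoke-Stratos-style markers (`<|begin_of_thought|>` / `<|begin_of_solution|>`); check an actual model response for the exact markers your adapter produces and adjust accordingly.

```python
import re

def split_reasoning(text: str):
    """Split a model response into (thought, solution) parts.

    ASSUMPTION: Bespoke-Stratos-style markers. Returns (None, text)
    unchanged when no markers are found.
    """
    thought = re.search(
        r"<\|begin_of_thought\|>(.*?)<\|end_of_thought\|>", text, re.DOTALL
    )
    solution = re.search(
        r"<\|begin_of_solution\|>(.*?)<\|end_of_solution\|>", text, re.DOTALL
    )
    if thought is None and solution is None:
        return None, text
    return (
        thought.group(1).strip() if thought else None,
        solution.group(1).strip() if solution else text,
    )

response = (
    "<|begin_of_thought|>2 + 2 is basic arithmetic.<|end_of_thought|>"
    "<|begin_of_solution|>4<|end_of_solution|>"
)
thought, solution = split_reasoning(response)
```

This keeps the raw response intact and only extracts the marked sections, so it degrades gracefully if an adapter emits plain text.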
## Available LoRA Adapters
Here are the links to the available adapters as of **January 30, 2025**:
- [Phi4.Turn.R1Distill-Lora1](https://huggingface.co/Quazim0t0/Phi4.Turn.R1Distill-Lora1)
- [Phi4.Turn.R1Distill-Lora2](https://huggingface.co/Quazim0t0/Phi4.Turn.R1Distill-Lora2)
- [Phi4.Turn.R1Distill-Lora3](https://huggingface.co/Quazim0t0/Phi4.Turn.R1Distill-Lora3)
- [Phi4.Turn.R1Distill-Lora4](https://huggingface.co/Quazim0t0/Phi4.Turn.R1Distill-Lora4)
- [Phi4.Turn.R1Distill-Lora5](https://huggingface.co/Quazim0t0/Phi4.Turn.R1Distill-Lora5)
- [Phi4.Turn.R1Distill-Lora6](https://huggingface.co/Quazim0t0/Phi4.Turn.R1Distill-Lora6)
- [Phi4.Turn.R1Distill-Lora7](https://huggingface.co/Quazim0t0/Phi4.Turn.R1Distill-Lora7)
- [Phi4.Turn.R1Distill-Lora8](https://huggingface.co/Quazim0t0/Phi4.Turn.R1Distill-Lora8)
## Usage
These adapters can be loaded and used with `peft` and `transformers`. Here’s a quick example:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_model = "microsoft/Phi-4"
lora_adapter = "Quazim0t0/Phi4.Turn.R1Distill-Lora1"

# Load the base model and its tokenizer
tokenizer = AutoTokenizer.from_pretrained(base_model)
model = AutoModelForCausalLM.from_pretrained(base_model)

# Attach the LoRA adapter and switch to inference mode
model = PeftModel.from_pretrained(model, lora_adapter)
model.eval()
```
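
Once the adapter is attached, generation works like any other `transformers` model. The sketch below builds a Phi-4-style chat prompt by hand so the format is visible; the special tokens are an assumption based on Phi-4's published chat template, and in practice calling `tokenizer.apply_chat_template(...)` on the loaded tokenizer is the safer route.

```python
def build_phi4_prompt(system: str, user: str) -> str:
    # ASSUMPTION: Phi-4-style chat markup (<|im_start|>, <|im_sep|>, <|im_end|>).
    # Prefer tokenizer.apply_chat_template for the authoritative format.
    return (
        f"<|im_start|>system<|im_sep|>{system}<|im_end|>"
        f"<|im_start|>user<|im_sep|>{user}<|im_end|>"
        f"<|im_start|>assistant<|im_sep|>"
    )

prompt = build_phi4_prompt("You are a helpful assistant.", "What is 2 + 2?")

# Then, using the model and tokenizer loaded above:
# inputs = tokenizer(prompt, return_tensors="pt")
# output = model.generate(**inputs, max_new_tokens=512)
# print(tokenizer.decode(output[0], skip_special_tokens=True))
```

Ending the prompt with the assistant header leaves the model positioned to generate its Thought/Solution response.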