Quazim0t0 committed beec57d (verified; parent: 5b6e08a)

Update README.md

Files changed (1): README.md (+68, -1)
- **$15 of training...I'm actually amazed by the results.**

If you are using this model with Open WebUI, here is a simple function that organizes the model's responses: https://openwebui.com/f/quaz93/phi4_turn_r1_distill_thought_function_v1

# Phi4 Turn R1Distill LoRA Adapters

## Overview
These **LoRA adapters** were trained on diverse **reasoning datasets** that pair structured **Thought** and **Solution** responses to strengthen logical inference. The project was designed to **test the R1 dataset** on **Phi-4**, with the goal of producing a **lightweight, fast, and efficient reasoning model**.

All adapters were fine-tuned on an **NVIDIA A800 GPU** and are suitable for continued training, merging, or direct deployment.
As part of an open-source initiative, all resources are **publicly available** for unrestricted research and development.

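For intuition about what a LoRA adapter actually contributes, the sketch below shows the low-rank update in plain NumPy. The dimensions and scaling values are made up for illustration; the real adapters store trained `A`/`B` matrices per target layer inside the model:

```python
import numpy as np

rng = np.random.default_rng(0)

d, r, alpha = 64, 8, 16           # hidden size, LoRA rank, scaling (illustrative values)
W = rng.normal(size=(d, d))       # frozen base weight
A = rng.normal(size=(r, d)) * 0.01
B = np.zeros((d, r))              # B starts at zero, so training begins from the base model

# Effective weight after applying the adapter: W + (alpha / r) * B @ A
W_eff = W + (alpha / r) * (B @ A)

# With B = 0 the adapter is a no-op; only A and B (2*d*r values) are trained,
# versus the d*d values a full fine-tune would touch.
print(np.allclose(W_eff, W))                      # True before any training
print(2 * d * r, "adapter params vs", d * d, "full params")
```

Because only `A` and `B` are trained, an adapter download is tiny compared to the base model, which is why eight variants can be published cheaply.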
---

## LoRA Adapters
Below are the currently available LoRA fine-tuned adapters (**as of January 30, 2025**):

- [Phi4.Turn.R1Distill-Lora1](https://huggingface.co/Quazim0t0/Phi4.Turn.R1Distill-Lora1)
- [Phi4.Turn.R1Distill-Lora2](https://huggingface.co/Quazim0t0/Phi4.Turn.R1Distill-Lora2)
- [Phi4.Turn.R1Distill-Lora3](https://huggingface.co/Quazim0t0/Phi4.Turn.R1Distill-Lora3)
- [Phi4.Turn.R1Distill-Lora4](https://huggingface.co/Quazim0t0/Phi4.Turn.R1Distill-Lora4)
- [Phi4.Turn.R1Distill-Lora5](https://huggingface.co/Quazim0t0/Phi4.Turn.R1Distill-Lora5)
- [Phi4.Turn.R1Distill-Lora6](https://huggingface.co/Quazim0t0/Phi4.Turn.R1Distill-Lora6)
- [Phi4.Turn.R1Distill-Lora7](https://huggingface.co/Quazim0t0/Phi4.Turn.R1Distill-Lora7)
- [Phi4.Turn.R1Distill-Lora8](https://huggingface.co/Quazim0t0/Phi4.Turn.R1Distill-Lora8)

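The eight adapters follow a predictable naming scheme, so their Hub repo ids can be enumerated programmatically, which is handy for batch evaluation. The adapter-switching calls are shown only as comments, since they require downloading the weights:

```python
# Build the Hub repo ids for all eight adapters from the naming scheme above
adapters = [f"Quazim0t0/Phi4.Turn.R1Distill-Lora{i}" for i in range(1, 9)]

for repo_id in adapters:
    print(repo_id)

# With a PeftModel already built (see the Usage section), additional adapters
# can be attached and switched at runtime, e.g.:
#   model.load_adapter(repo_id, adapter_name="lora2")
#   model.set_adapter("lora2")
```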
---

## GGUF Full & Quantized Models
To facilitate broader testing and real-world inference, **full and quantized GGUF versions** are provided for evaluation in **Open WebUI** and other LLM interfaces.

### **Version 1**
- [Phi4.Turn.R1Distill.Q8_0](https://huggingface.co/Quazim0t0/Phi4.Turn.R1Distill.Q8_0)
- [Phi4.Turn.R1Distill.Q4_k](https://huggingface.co/Quazim0t0/Phi4.Turn.R1Distill.Q4_k)
- [Phi4.Turn.R1Distill.16bit](https://huggingface.co/Quazim0t0/Phi4.Turn.R1Distill.16bit)

### **Version 1.1**
- [Phi4.Turn.R1Distill_v1.1_Q4_k](https://huggingface.co/Quazim0t0/Phi4.Turn.R1Distill_v1.1_Q4_k)

### **Version 1.2**
- [Phi4.Turn.R1Distill_v1.2_Q4_k](https://huggingface.co/Quazim0t0/Phi4.Turn.R1Distill_v1.2_Q4_k)

### **Version 1.3**
- [Phi4.Turn.R1Distill_v1.3_Q4_k-GGUF](https://huggingface.co/Quazim0t0/Phi4.Turn.R1Distill_v1.3_Q4_k-GGUF)

### **Version 1.4**
- [Phi4.Turn.R1Distill_v1.4_Q4_k-GGUF](https://huggingface.co/Quazim0t0/Phi4.Turn.R1Distill_v1.4_Q4_k-GGUF)

### **Version 1.5**
- [Phi4.Turn.R1Distill_v1.5_Q4_k-GGUF](https://huggingface.co/Quazim0t0/Phi4.Turn.R1Distill_v1.5_Q4_k-GGUF)

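For intuition about what the Q4/Q8 variants trade away, here is a toy block-wise 4-bit quantization round trip in NumPy. This illustrates the general idea only; llama.cpp's actual Q4_K format uses a more elaborate super-block layout:

```python
import numpy as np

rng = np.random.default_rng(0)
weights = rng.normal(size=256).astype(np.float32)  # one block of weights

# Symmetric 4-bit quantization: map each value to an integer in [-8, 7]
scale = np.abs(weights).max() / 7.0
q = np.clip(np.round(weights / scale), -8, 7).astype(np.int8)

# Dequantize and measure the error the quantized model actually runs with
recovered = q.astype(np.float32) * scale
max_err = np.abs(weights - recovered).max()

print(f"storage: 4 bits/weight instead of 32, max abs error <= {scale / 2:.4f}")
```

The Q8_0 build keeps roughly twice the precision per weight at twice the file size, while the 16bit build is the unquantized reference.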
---

## Usage

### **Loading LoRA Adapters with `transformers` and `peft`**
To load and apply a LoRA adapter on top of Phi-4, use the following approach:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_model = "microsoft/Phi-4"
lora_adapter = "Quazim0t0/Phi4.Turn.R1Distill-Lora1"

# Load the frozen base model, then attach the LoRA adapter on top of it
tokenizer = AutoTokenizer.from_pretrained(base_model)
model = AutoModelForCausalLM.from_pretrained(base_model)
model = PeftModel.from_pretrained(model, lora_adapter)

model.eval()

# Run a quick generation through the adapted model
prompt = "Explain why the sky is blue."
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```