Update README.md
---
base_model:
- win10/Phi-4-llama-t1-lora
---

Full merged 16bit model of [win10/Phi-4-llama-t1-lora](https://huggingface.co/win10/Phi-4-llama-t1-lora). Please always thank the original author for all the hard work! All I did was the simple merging work on Colab.
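For context, that merge is the standard peft adapter merge; below is a minimal sketch of what such a Colab cell might look like, assuming peft's `merge_and_unload` API. The base model id here is an assumption, check the LoRA card for the actual base.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

# Assumed base model id; the actual base is listed on the LoRA's model card.
base_id = "microsoft/phi-4"
base = AutoModelForCausalLM.from_pretrained(base_id, torch_dtype=torch.bfloat16)

# Load the LoRA adapter on top of the base, then fold its weights in.
merged = PeftModel.from_pretrained(base, "win10/Phi-4-llama-t1-lora").merge_and_unload()

# Save the full 16bit model (plus tokenizer) for upload.
merged.save_pretrained("Phi-4-llama-t1-full")
AutoTokenizer.from_pretrained(base_id).save_pretrained("Phi-4-llama-t1-full")
```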
Run with PyTorch:

```python
import transformers
# ... (pipeline construction not shown in the diff; see the full sketch below)
outputs = pipeline(messages, max_new_tokens=128)
print(outputs[0]["generated_text"])
```
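Since the snippet above is truncated, here is a self-contained version; a minimal sketch assuming the standard `transformers` text-generation pipeline, where the model id and example prompt are placeholders rather than lines from the original README:

```python
import transformers

# Assumed setup: only the import and the last two lines appear in the original.
pipeline = transformers.pipeline(
    "text-generation",
    model="benhaotang/Phi-4-llama-t1-full",  # placeholder: this repo's id
    torch_dtype="auto",
    device_map="auto",
)
messages = [
    {"role": "user", "content": "Give me a short introduction to LoRA merging."},
]
outputs = pipeline(messages, max_new_tokens=128)
print(outputs[0]["generated_text"])
```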
Or use the static GGUF quants: [benhaotang/Phi-4-llama-t1-full](https://huggingface.co/benhaotang/Phi-4-llama-t1-full-Q4_K_M-GGUF), e.g. with Ollama:
```
ollama run hf.co/benhaotang/Phi-4-llama-t1-full-Q4_K_M-GGUF
```
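The same quant should also run with llama.cpp directly; a sketch, assuming a recent build with Hugging Face Hub download support (`-hf`):

```
llama-cli -hf benhaotang/Phi-4-llama-t1-full-Q4_K_M-GGUF -p "Hello"
```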