- **License:** apache-2.0
- **Finetuned from model:** unsloth/phi-4-unsloth-bnb-4bit

This Phi model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library.

## How to Use the Model for Inference

You can run inference with the model via the Hugging Face Inference API by following the steps below:

### 1. Install Required Libraries

Ensure that you have the `requests` library installed:

```bash
pip install requests
```
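
The `InferenceClient` example in step 3 below uses the `huggingface_hub` library; install it as well if you plan to use that method:

```bash
pip install huggingface_hub
```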

### 2. Use the Model via the Hugging Face Inference API

```python
import requests

# API URL for the model hosted on Hugging Face
API_URL = "https://api-inference.huggingface.co/models/Ishika08/phi-4_fine-tuned_mdl"

# Set up your Hugging Face API token
HF_TOKEN = "YOUR_HF_TOKEN"  # replace with your personal access token
HEADERS = {"Authorization": f"Bearer {HF_TOKEN}"}

# The input you want to pass to the model
payload = {
    "inputs": "What is the capital of France? Tell me some of the tourist places in bullet points."
}

# Make the request to the API
response = requests.post(API_URL, headers=HEADERS, json=payload)

# Print the response from the model
print(response.json())
```

Example output:

```json
{
  "generated_text": "Paris is the capital of France. Some of the famous tourist places include:\n- Eiffel Tower\n- Louvre Museum\n- Notre-Dame Cathedral\n- Sacré-Cœur Basilica"
}
```
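
The Inference API also accepts an optional `parameters` object for generation settings and an `options` object that asks the endpoint to wait while the model loads instead of returning a 503. A minimal sketch reusing the same URL and token placeholder as above; the parameter values are illustrative, not tuned:

```python
import requests

API_URL = "https://api-inference.huggingface.co/models/Ishika08/phi-4_fine-tuned_mdl"
HEADERS = {"Authorization": "Bearer YOUR_HF_TOKEN"}  # replace with your token

payload = {
    "inputs": "What is the capital of France?",
    # Illustrative generation parameters
    "parameters": {"max_new_tokens": 128, "temperature": 0.7},
    # Wait for the model to finish loading rather than failing with a 503
    "options": {"wait_for_model": True},
}

response = requests.post(API_URL, headers=HEADERS, json=payload)
response.raise_for_status()  # surface HTTP errors (e.g. an invalid token) early
print(response.json())
```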

### 3. Use the Model via the `InferenceClient` from `huggingface_hub`

```python
from huggingface_hub import InferenceClient

# Initialize the client with the model name and your Hugging Face token
client = InferenceClient(model="Ishika08/phi-4_fine-tuned_mdl", token="YOUR_HF_TOKEN")

# Perform inference (text generation in this case)
response = client.text_generation("What is the capital of France? Tell me about Eiffel Tower history in bullet points.")

# Print the response from the model
print(response)
```
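
`text_generation` also accepts keyword arguments for generation settings and can stream tokens as they are produced. A minimal sketch; the argument values are illustrative:

```python
from huggingface_hub import InferenceClient

client = InferenceClient(model="Ishika08/phi-4_fine-tuned_mdl", token="YOUR_HF_TOKEN")

# Stream tokens as they arrive; max_new_tokens and temperature are illustrative
for token in client.text_generation(
    "What is the capital of France?",
    max_new_tokens=128,
    temperature=0.7,
    stream=True,
):
    print(token, end="", flush=True)
print()  # final newline
```
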
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)