Model Name: orca_mini_phi-4

orca_mini_phi-4 is trained on microsoft/phi-4 (using the Llama architecture) with various SFT datasets.

"Obsessed with Open Source GenAI's potential? So am I ! Let's Contribute together 🚀 https://www.linkedin.com/in/pankajam"

NOTICE

Provided you give proper credit and attribution, you are permitted to use this model as a foundational base for further full fine-tuning, DPO, PPO, or ORPO tuning, and for any kind of merges. I actively encourage users to customize and enhance the model according to their specific needs, as this version is designed to be a comprehensive general-purpose model. Dive in and innovate!
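For example, here is a minimal sketch, assuming the peft and transformers libraries, of loading this model as a base for LoRA fine-tuning. The rank, alpha, and target module names are illustrative assumptions (the target names follow common Llama-architecture conventions), not values taken from this model card:

from peft import LoraConfig, get_peft_model
from transformers import AutoModelForCausalLM, AutoTokenizer

model_slug = "pankajmathur/orca_mini_phi-4"
base_model = AutoModelForCausalLM.from_pretrained(model_slug)
tokenizer = AutoTokenizer.from_pretrained(model_slug)

# Hypothetical LoRA hyperparameters; target_modules assumes standard
# Llama-style attention projection names.
lora_config = LoraConfig(
    r=16,
    lora_alpha=32,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
    task_type="CAUSAL_LM",
)
model = get_peft_model(base_model, lora_config)
model.print_trainable_parameters()  # only the adapter weights are trainable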

Example Usage

Use this model for free on Google Colab with a T4 GPU :)


Example Usage on Your Personal Computer

Download the GGUF version and follow the Ollama instructions here: https://huggingface.co/pankajmathur/orca_mini_phi-4-GGUF
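Once the GGUF model is imported into Ollama, you can also query it from Python with the official ollama client. This is a minimal sketch; the model tag "orca_mini_phi-4" is an assumption, so substitute whatever tag you used when creating or pulling the model:

# pip install ollama
import ollama

# The tag below is hypothetical; use the tag you registered in Ollama.
response = ollama.chat(
    model="orca_mini_phi-4",
    messages=[
        {"role": "system", "content": "You are Orca Mini, a helpful AI assistant."},
        {"role": "user", "content": "Hello Orca Mini, what can you do for me?"},
    ],
)
print(response["message"]["content"])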

The code example below shows how to use this model in its default half-precision (bfloat16) format:

import torch
from transformers import pipeline

model_slug = "pankajmathur/orca_mini_phi-4"

# Load the model in its default half precision (bfloat16); device_map="auto"
# lets accelerate spread the weights across the available devices.
pipe = pipeline(
    "text-generation",
    model=model_slug,
    torch_dtype=torch.bfloat16,
    device_map="auto",
)
messages = [
    {"role": "system", "content": "You are Orca Mini, a helpful AI assistant."},
    {"role": "user", "content": "Hello Orca Mini, what can you do for me?"},
]
outputs = pipe(messages, max_new_tokens=128, do_sample=True, temperature=0.01, top_k=100, top_p=0.95)
print(outputs[0]["generated_text"][-1])
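Note that the chat pipeline returns the whole conversation in generated_text, so the final list entry is the assistant's message; to print only the reply text:

# The last message in the returned transcript is the assistant's reply.
assistant_reply = outputs[0]["generated_text"][-1]["content"]
print(assistant_reply)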

The code example below shows how to use this model in 4-bit quantized format via the bitsandbytes library:

import torch
from transformers import BitsAndBytesConfig, pipeline

model_slug = "pankajmathur/orca_mini_phi-4"

# 4-bit NF4 quantization with nested (double) quantization; matmuls are
# computed in float16 for speed.
quantization_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.float16,
    bnb_4bit_use_double_quant=True,
)
pipe = pipeline(
    "text-generation",
    model=model_slug,
    model_kwargs={"quantization_config": quantization_config},
    device_map="auto",
)
messages = [
    {"role": "system", "content": "You are Orca Mini, a helpful AI assistant."},
    {"role": "user", "content": "Hello Orca Mini, what can you do for me?"},
]
outputs = pipe(messages, max_new_tokens=128, do_sample=True, temperature=0.01, top_k=100, top_p=0.95)
print(outputs[0]["generated_text"][-1])
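To confirm the savings, you can inspect the quantized weights' footprint via get_memory_footprint(), a standard transformers method on the underlying model:

# Rough sanity check: 4-bit weights for a ~14.7B-parameter model should be
# on the order of 8 GB, versus roughly 29 GB in bfloat16.
print(f"Memory footprint: {pipe.model.get_memory_footprint() / 1e9:.1f} GB")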

The code example below shows how to use this model in 8-bit quantized format via the bitsandbytes library:

from transformers import BitsAndBytesConfig, pipeline

model_slug = "pankajmathur/orca_mini_phi-4"

# 8-bit (LLM.int8) quantization: roughly halves memory versus bfloat16.
quantization_config = BitsAndBytesConfig(
    load_in_8bit=True,
)
pipe = pipeline(
    "text-generation",
    model=model_slug,
    model_kwargs={"quantization_config": quantization_config},
    device_map="auto",
)
messages = [
    {"role": "system", "content": "You are Orca Mini, a helpful AI assistant."},
    {"role": "user", "content": "Hello Orca Mini, what can you do for me?"},
]
outputs = pipe(messages, max_new_tokens=128, do_sample=True, temperature=0.01, top_k=100, top_p=0.95)
print(outputs[0]["generated_text"][-1])
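As a rough guide for choosing between these three formats, the back-of-the-envelope weight memory for a 14.7B-parameter model works out as below; actual usage is higher because of activations, the KV cache, and quantization metadata:

# Approximate weight-only memory per precision (excludes runtime overhead).
params = 14.7e9
for name, bytes_per_param in [("bfloat16", 2.0), ("8-bit", 1.0), ("4-bit", 0.5)]:
    print(f"{name:>8}: ~{params * bytes_per_param / 1e9:.1f} GB")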

Built with Axolotl

Model size: 14.7B params · Tensor type: BF16 · Format: Safetensors

Model tree for pankajmathur/orca_mini_phi-4

Base model: microsoft/phi-4 (this model is a finetune of it)
Merges: 1 model
Quantizations: 5 models
