---
library_name: transformers
license: mit
datasets:
  - AI-MO/NuminaMath-TIR
  - bespokelabs/Bespoke-Stratos-17k
  - meta-math/MetaMathQA
language:
  - en
  - ja
base_model:
  - microsoft/phi-4
pipeline_tag: text-generation
---

# AXCXEPT/phi-4-deepseek-R1K-RL-EZO


## Model Details

### Model Description

**EZO × PHI-4 × RL - Advancing LLM Training with Deepseek Knowledge**

### Overview

This model is the result of combining Phi-4 with a reinforcement learning (RL) approach, incorporating insights from the latest research on Deepseek R1. By leveraging a novel training methodology, we successfully improved both Japanese and English capabilities while maintaining a high level of performance across key benchmarks.

### Key Features & Improvements

- **Enhanced Multilingual Performance:** Unlike previous iterations, this model strengthens English capabilities without compromising Japanese proficiency.
- **Optimized Training Efficiency:** Inspired by Deepseek R1 research, we fine-tuned Phi-4 on a 14K-example dataset in just two days, achieving gains in both languages (see the training sketch after this list).
- **Benchmark-Proven Quality:** Outperforms the base Phi-4 model on OpenAI's Simple-evals, Japanese MT Bench, and MT Bench, and surpasses gpt-4o-mini in multiple evaluation categories, demonstrating its capability as a high-performance 14B model.
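
The exact RL recipe for this model is not published. Since `trl` appears in the install step below and the card cites Deepseek R1, a GRPO-style setup is one plausible shape for it; the following is a minimal, hypothetical sketch only, where the reward function, dataset preparation, and hyperparameters are illustrative assumptions rather than the recipe actually used:

```python
from datasets import load_dataset
from trl import GRPOConfig, GRPOTrainer

# One of the datasets listed in this card's metadata; GRPOTrainer expects a "prompt" column
dataset = load_dataset("AI-MO/NuminaMath-TIR", split="train")
dataset = dataset.map(lambda row: {"prompt": row["problem"]})

def boxed_answer_reward(completions, **kwargs):
    # Toy rule-based reward (an assumption, not the authors' reward):
    # favor completions that produce a \boxed{} final answer
    return [1.0 if "\\boxed{" in completion else 0.0 for completion in completions]

training_args = GRPOConfig(
    output_dir="phi4-grpo-sketch",
    num_generations=4,          # completions sampled per prompt for the group baseline
    max_completion_length=512,
)

trainer = GRPOTrainer(
    model="microsoft/phi-4",    # the base model named in this card's metadata
    reward_funcs=boxed_answer_reward,
    args=training_args,
    train_dataset=dataset,
)
trainer.train()
```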

### Why Local LLMs Still Matter

Despite rapid advancements in cloud-based models, local LLMs remain crucial for enterprises that require high security and strict data privacy compliance. Many organizations—especially in public institutions, manufacturing, and design industries—cannot risk exposing sensitive data externally. This model is developed with the goal of delivering state-of-the-art performance in a secure, closed environment.

### Future Prospects

Our successful short-term training experiment demonstrates the potential for domain-specific LLMs tailored to high-security industries. Moving forward, we will continue refining this methodology and developing specialized AI models for enterprise applications. In parallel, we are actively working on AI solutions (including SaaS offerings) to accelerate the adoption of LLM technology in Japan and beyond.

## Benchmarks

*(Benchmark result charts)*

## How To Use

### vLLM (Recommended)

Install:

```bash
pip install -U vllm
```

Start the vLLM server:

```bash
vllm serve AXCXEPT/phi-4-deepseek-R1K-RL-EZO
```

Call the server through its OpenAI-compatible API:

```python
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:8000/v1",
    api_key="token-abc123",
)

prompt = "Natalia sold clips to 48 of her friends in April, and then she sold half as many clips in May. How many clips did Natalia sell altogether in April and May?"

completion = client.chat.completions.create(
    model="AXCXEPT/phi-4-deepseek-R1K-RL-EZO",
    messages=[
        {"role": "system", "content": "Please reason step by step, and put your final answer within \\boxed{}."},
        {"role": "user", "content": prompt},
    ],
)

print(completion.choices[0].message.content)
```
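
If you don't need a standing server, vLLM also supports offline inference in-process. A minimal sketch; the sampling parameters here are illustrative, not tuned values from this card:

```python
from vllm import LLM, SamplingParams

llm = LLM(model="AXCXEPT/phi-4-deepseek-R1K-RL-EZO")
params = SamplingParams(temperature=0.7, max_tokens=1024)

messages = [
    {"role": "system", "content": "Please reason step by step, and put your final answer within \\boxed{}."},
    {"role": "user", "content": "Natalia sold clips to 48 of her friends in April, and then she sold half as many clips in May. How many clips did Natalia sell altogether in April and May?"},
]

# llm.chat applies the model's chat template before generating
outputs = llm.chat(messages, params)
print(outputs[0].outputs[0].text)
```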

### Transformers

Install:

```bash
pip install --upgrade transformers accelerate datasets trl
```

Predict:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "AXCXEPT/phi-4-deepseek-R1K-RL-EZO"

# Load the model in its native precision and spread it across available devices
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    torch_dtype="auto",
    device_map="auto",
)
tokenizer = AutoTokenizer.from_pretrained(model_name)

prompt = "Natalia sold clips to 48 of her friends in April, and then she sold half as many clips in May. How many clips did Natalia sell altogether in April and May?"
messages = [
    {"role": "system", "content": "Please reason step by step, and put your final answer within \\boxed{}."},
    {"role": "user", "content": prompt},
]

# Render the conversation with the model's chat template
text = tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True,
)
model_inputs = tokenizer([text], return_tensors="pt").to(model.device)

generated_ids = model.generate(
    **model_inputs,
    max_new_tokens=1024,
)
# Strip the prompt tokens so only the newly generated answer remains
generated_ids = [
    output_ids[len(input_ids):] for input_ids, output_ids in zip(model_inputs.input_ids, generated_ids)
]

response = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0]
print(response)
```
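
For interactive use, tokens can be printed as they arrive using transformers' built-in `TextStreamer`. A short sketch that reuses the `model`, `tokenizer`, and `model_inputs` from the block above:

```python
from transformers import TextStreamer

# Print tokens to stdout as they are generated, skipping the echoed prompt
streamer = TextStreamer(tokenizer, skip_prompt=True, skip_special_tokens=True)
model.generate(**model_inputs, max_new_tokens=1024, streamer=streamer)
```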

## Special Thanks

To the Phi-4 development team, who built the high-quality base model, to the Deepseek research team, and to everyone who contributed to this project.