---
library_name: transformers
license: mit
datasets:
- AI-MO/NuminaMath-TIR
- bespokelabs/Bespoke-Stratos-17k
- meta-math/MetaMathQA
language:
- en
- ja
base_model:
- microsoft/phi-4
pipeline_tag: text-generation
---

# AXCXEPT/phi-4-deepseek-R1K-RL-EZO

<!-- Provide a quick summary of what the model is/does. -->

![image/png](https://cdn-uploads.huggingface.co/production/uploads/657e900beaad53ff67ba84db/34cg6vhtUAnC0FbHmX4kE.png)


## Model Details

### Model Description

#### EZO × PHI-4 × RL - Advancing LLM Training with Deepseek Knowledge
##### Overview
This model is the result of combining Phi-4 with a reinforcement learning (RL) approach, incorporating insights from the latest research on Deepseek R1. By leveraging a novel training methodology, we successfully improved both Japanese and English capabilities while maintaining a high level of performance across key benchmarks.
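
The exact training code and reward design are not released with this card. Purely as a hedged illustration of what a Deepseek-R1-style RL fine-tuning loop can look like on top of Phi-4, a minimal sketch using TRL's GRPO trainer is shown below; the dataset slice, column mapping, reward function, and hyperparameters are illustrative assumptions, not the settings used for this model.

```
from datasets import load_dataset
from trl import GRPOConfig, GRPOTrainer

# Illustrative data: a small slice of one of the datasets listed in the card
# metadata, mapped to the "prompt" column that GRPOTrainer expects.
# The "problem" column name is an assumption for this sketch.
raw = load_dataset("AI-MO/NuminaMath-TIR", split="train[:1000]")
train_dataset = raw.map(lambda x: {"prompt": x["problem"]})

def boxed_answer_reward(completions, **kwargs):
    # Toy reward: favor completions that contain a \boxed{...} final answer.
    return [1.0 if "\\boxed{" in c else 0.0 for c in completions]

training_args = GRPOConfig(
    output_dir="phi4-grpo-sketch",
    per_device_train_batch_size=4,
    num_generations=4,          # completions sampled per prompt
    max_completion_length=512,
    logging_steps=10,
)

trainer = GRPOTrainer(
    model="microsoft/phi-4",    # base model named in this card
    reward_funcs=boxed_answer_reward,
    args=training_args,
    train_dataset=train_dataset,
)
trainer.train()
```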

##### Key Features & Improvements
- **Enhanced Multilingual Performance**: Unlike previous iterations, this model strengthens English capabilities without compromising Japanese proficiency.
- **Optimized Training Efficiency**: Inspired by Deepseek R1 research, we fine-tuned Phi-4 on a 14K-sample dataset in just two days, achieving gains in both languages.
- **Benchmark-Proven Quality**:
  - Outperforms the base Phi-4 model on OpenAI's simple-evals and translation benchmarks (Japanese MT Bench, MT Bench).
  - Surpasses gpt-4o-mini in multiple evaluation categories, demonstrating its capability as a high-performance 14B model.

##### Why Local LLMs Still Matter
Despite rapid advancements in cloud-based models, local LLMs remain crucial for enterprises that require high security and strict data privacy compliance. Many organizations—especially in public institutions, manufacturing, and design industries—cannot risk exposing sensitive data externally. This model is developed with the goal of delivering state-of-the-art performance in a secure, closed environment.

##### Future Prospects
Our successful short-term training experiment demonstrates the potential for domain-specific LLMs tailored to high-security industries. Moving forward, we will continue refining this methodology and developing specialized AI models for enterprise applications. In parallel, we are actively working on AI solutions (including SaaS offerings) to accelerate the adoption of LLM technology in Japan and beyond.

### Benchmarks

![image/png](https://cdn-uploads.huggingface.co/production/uploads/657e900beaad53ff67ba84db/7jlb3rVf3qKNbhOCpiJ6K.png)

![image/png](https://cdn-uploads.huggingface.co/production/uploads/657e900beaad53ff67ba84db/p8vw5SzLMpMLrexe9uKhE.png)


### How To Use

#### vLLM (Recommended)
##### Install
```
pip install -U vllm
```
##### Start the vLLM server
```
vllm serve AXCXEPT/phi-4-deepseek-R1K-RL-EZO
```
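
Depending on your hardware, you may need to adjust the serving options. For example (illustrative values, not requirements for this model), the 14B weights can be split across two GPUs and the context length capped:

```
vllm serve AXCXEPT/phi-4-deepseek-R1K-RL-EZO --tensor-parallel-size 2 --max-model-len 16384
```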

##### Call the vLLM server via the OpenAI-compatible API
```
from openai import OpenAI
client = OpenAI(
    base_url="http://localhost:8000/v1",
    api_key="token-abc123",
)

prompt = "Natalia sold clips to 48 of her friends in April, and then she sold half as many clips in May. How many clips did Natalia sell altogether in April and May?"

completion = client.chat.completions.create(
  model="AXCXEPT/phi-4-deepseek-R1K-RL-EZO",
  messages = [
    {"role": "system", "content": "Please reason step by step, and put your final answer within \\boxed{}."},
    {"role": "user", "content": prompt}
  ]
)

# Print only the generated text
print(completion.choices[0].message.content)
```
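
Because the system prompt asks the model to wrap its final answer in `\boxed{}`, a small helper like the following (a sketch, not part of the model's API; it only handles non-nested braces) can pull the answer out of the response text:

```
import re

def extract_boxed_answer(text: str) -> str | None:
    # Return the contents of the last non-nested \boxed{...} in the text, if any.
    matches = re.findall(r"\\boxed\{([^{}]*)\}", text)
    return matches[-1] if matches else None

print(extract_boxed_answer(r"48 + 24 = 72, so the answer is \boxed{72}."))  # -> 72
```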

#### Transformers
##### Install
```
pip install --upgrade transformers accelerate datasets trl
```

##### Predict
```
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "AXCXEPT/phi-4-deepseek-R1K-RL-EZO"

model = AutoModelForCausalLM.from_pretrained(
    model_name,
    torch_dtype="auto",
    device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained(model_name)

prompt = "Natalia sold clips to 48 of her friends in April, and then she sold half as many clips in May. How many clips did Natalia sell altogether in April and May?"
messages = [
    {"role": "system", "content": "Please reason step by step, and put your final answer within \\boxed{}."},
    {"role": "user", "content": prompt}
]
text = tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True
)
model_inputs = tokenizer([text], return_tensors="pt").to(model.device)

generated_ids = model.generate(
    **model_inputs,
    max_new_tokens=1024
)
# Strip the prompt tokens so only the newly generated completion is decoded
generated_ids = [
    output_ids[len(input_ids):] for input_ids, output_ids in zip(model_inputs.input_ids, generated_ids)
]

response = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0]

print(response)
```
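
For interactive use, you can also stream tokens as they are generated instead of waiting for the full completion. A minimal sketch reusing `model`, `tokenizer`, and `model_inputs` from the snippet above:

```
from transformers import TextStreamer

# Prints decoded tokens to stdout as they are generated, skipping the prompt.
streamer = TextStreamer(tokenizer, skip_prompt=True, skip_special_tokens=True)
_ = model.generate(**model_inputs, max_new_tokens=1024, streamer=streamer)
```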

### Special Thanks:
To the Phi-4 development team, who developed the high-quality base model, the Deepseek research team, and everyone who contributed to this project.