Update README.md
---
library_name: transformers
license: mit
datasets:
- AI-MO/NuminaMath-TIR
- bespokelabs/Bespoke-Stratos-17k
- meta-math/MetaMathQA
language:
- en
- ja
base_model:
- microsoft/phi-4
pipeline_tag: text-generation
---

# AXCXEPT/phi-4-deepseek-R1K-RL-EZO

<!-- Provide a quick summary of what the model is/does. -->

![image/png](https://cdn-uploads.huggingface.co/production/uploads/657e900beaad53ff67ba84db/34cg6vhtUAnC0FbHmX4kE.png)

## Model Details

### Model Description

#### EZO × PHI-4 × RL - Advancing LLM Training with Deepseek Knowledge

##### Overview

This model is the result of combining Microsoft's Phi-4 with a reinforcement learning (RL) approach, incorporating insights from the latest research on Deepseek R1. By leveraging a novel training methodology, we successfully improved both Japanese and English capabilities while maintaining a high level of performance across key benchmarks.
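
The exact training recipe is not published in this card; the following is a rough, hypothetical sketch of the kind of R1-style RL fine-tune described above. It assumes trl's `GRPOTrainer`, the `AI-MO/NuminaMath-TIR` dataset listed in the metadata, and a toy boxed-answer reward; none of these are confirmed details of the actual run.

```python
from datasets import load_dataset
from trl import GRPOConfig, GRPOTrainer

# Illustrative only: the real data mix, reward, and hyperparameters
# used for this model are not documented here.
dataset = load_dataset("AI-MO/NuminaMath-TIR", split="train")
# GRPOTrainer expects a "prompt" column; NuminaMath-TIR stores questions in "problem".
dataset = dataset.map(lambda x: {"prompt": x["problem"]})

def reward_boxed(completions, **kwargs):
    # Toy reward: favor completions that produce a \boxed{...} final answer.
    return [1.0 if "\\boxed{" in c else 0.0 for c in completions]

trainer = GRPOTrainer(
    model="microsoft/phi-4",
    reward_funcs=reward_boxed,
    args=GRPOConfig(output_dir="phi4-grpo-sketch", per_device_train_batch_size=1),
    train_dataset=dataset,
)
trainer.train()
```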

##### Key Features & Improvements

- **Enhanced Multilingual Performance:** Unlike previous iterations, this model strengthens English capabilities without compromising Japanese proficiency.
- **Optimized Training Efficiency:** Inspired by Deepseek R1 research, we fine-tuned Phi-4 on a 14K-example dataset in just two days, achieving substantial gains.
- **Benchmark-Proven Quality:**
  - Outperforms the base Phi-4 model on OpenAI's simple-evals and on translation benchmarks (Japanese MT-Bench, MT-Bench).
  - Surpasses gpt-4o-mini in multiple evaluation categories, proving its capability as a high-performance 14B model.
- **Secure and Scalable for Enterprises:** Designed to run efficiently in local and on-premise environments, making it suitable for high-security industries where cloud-based solutions are not viable.

##### Why Local LLMs Still Matter

Despite rapid advancements in cloud-based models, local LLMs remain crucial for enterprises that require high security and strict data-privacy compliance. Many organizations, especially in public institutions, manufacturing, and design industries, cannot risk exposing sensitive data externally. This model is developed with the goal of delivering state-of-the-art performance in a secure, closed environment.

#### Future Prospects

Our successful short-term training experiment demonstrates the potential for domain-specific LLMs tailored to high-security industries. Moving forward, we will continue refining this methodology and developing specialized AI models for enterprise applications. In parallel, we are actively working on AI solutions (including SaaS offerings) to accelerate the adoption of LLM technology in Japan and beyond.

### Benchmarks

![image/png](https://cdn-uploads.huggingface.co/production/uploads/657e900beaad53ff67ba84db/7jlb3rVf3qKNbhOCpiJ6K.png)

![image/png](https://cdn-uploads.huggingface.co/production/uploads/657e900beaad53ff67ba84db/p8vw5SzLMpMLrexe9uKhE.png)

### How To Use

#### vLLM (recommended)

##### Install

```bash
pip install -U vllm
```

##### Start the vLLM server

```bash
vllm serve AXCXEPT/phi-4-deepseek-R1K-RL-EZO
```
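
Once the server is running, you can sanity-check it before wiring up a client. A minimal sketch, assuming the default host and port (vLLM listens on `http://localhost:8000` unless you pass `--host`/`--port`):

```python
import requests

# Smoke test: the OpenAI-compatible /v1/models route should list the served
# model, i.e. "AXCXEPT/phi-4-deepseek-R1K-RL-EZO".
resp = requests.get("http://localhost:8000/v1/models")
print(resp.json())
```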

##### Call the vLLM server via the API

```python
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:8000/v1",
    api_key="token-abc123",
)

prompt = "Natalia sold clips to 48 of her friends in April, and then she sold half as many clips in May. How many clips did Natalia sell altogether in April and May?"

completion = client.chat.completions.create(
    model="AXCXEPT/phi-4-deepseek-R1K-RL-EZO",
    messages=[
        {"role": "system", "content": "Please reason step by step, and put your final answer within \\boxed{}."},
        {"role": "user", "content": prompt},
    ],
)

print(completion.choices[0].message.content)
```
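
Because the system prompt asks for the final answer inside `\boxed{}`, the reply is easy to parse. A minimal sketch (the `extract_boxed` helper is illustrative, not part of any API; for the prompt above the expected value is 72, since 48 + 24 = 72):

```python
import re

def extract_boxed(text: str):
    """Return the last \\boxed{...} answer in a response, or None."""
    matches = re.findall(r"\\boxed\{([^{}]*)\}", text)
    return matches[-1] if matches else None

# Reusing `completion` from the snippet above; expected output: 72
print(extract_boxed(completion.choices[0].message.content))
```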

#### Transformers

##### Install

```bash
pip install --upgrade transformers accelerate datasets trl
```

(`datasets` and `trl` are only needed for fine-tuning; the inference snippet below requires just `transformers` and `accelerate`.)

##### Predict

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "AXCXEPT/phi-4-deepseek-R1K-RL-EZO"

model = AutoModelForCausalLM.from_pretrained(
    model_name,
    torch_dtype="auto",
    device_map="auto",
)
tokenizer = AutoTokenizer.from_pretrained(model_name)

prompt = "Natalia sold clips to 48 of her friends in April, and then she sold half as many clips in May. How many clips did Natalia sell altogether in April and May?"
messages = [
    {"role": "system", "content": "Please reason step by step, and put your final answer within \\boxed{}."},
    {"role": "user", "content": prompt},
]
text = tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True,
)
model_inputs = tokenizer([text], return_tensors="pt").to(model.device)

generated_ids = model.generate(
    **model_inputs,
    max_new_tokens=1024,
)
# Strip the prompt tokens so only the newly generated text is decoded.
generated_ids = [
    output_ids[len(input_ids):] for input_ids, output_ids in zip(model_inputs.input_ids, generated_ids)
]

response = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0]

print(response)
```
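
For quick experiments, the same chat flow can also be run through the `text-generation` pipeline, which applies the chat template internally; a minimal sketch reusing the `messages` list from above:

```python
from transformers import pipeline

# Sketch: the pipeline handles tokenization and the chat template for you.
pipe = pipeline(
    "text-generation",
    model="AXCXEPT/phi-4-deepseek-R1K-RL-EZO",
    torch_dtype="auto",
    device_map="auto",
)
out = pipe(messages, max_new_tokens=1024)
print(out[0]["generated_text"][-1]["content"])  # the assistant's reply
```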

### Special Thanks

To the Phi-4 development team, the Deepseek research team, and everyone who contributed to this project.