xcjthu, nielsr (HF Staff) committed
Commit fc9b50f · verified · 1 Parent(s): 0231958

Add model description and link to paper (#5)

- Add model description and link to paper (f9c341a8fbe4c21c893d7e20421ba25e884ba8c9)


Co-authored-by: Niels Rogge <[email protected]>

Files changed (1)
  1. README.md +6 -118
README.md CHANGED
@@ -1,11 +1,14 @@
  ---
- license: apache-2.0
  language:
  - zh
  - en
- pipeline_tag: text-generation
  library_name: transformers
+ license: apache-2.0
+ pipeline_tag: text-generation
  ---
+
+ MiniCPM4-8B is a highly efficient large language model (LLM) designed explicitly for end-side devices. It achieves this efficiency through systematic innovation in model architecture, training data, training algorithms, and inference systems. The details can be found in [MiniCPM4: Ultra-Efficient LLMs on End Devices](https://huggingface.co/papers/2506.07900).
+
  <div align="center">
  <img src="https://github.com/OpenBMB/MiniCPM/blob/main/assets/minicpm_logo.png?raw=true" width="500em" ></img>
  </div>
@@ -80,13 +83,6 @@ MiniCPM4 natively supports context lengths of up to 32,768 tokens. To reproduce
  }
  ```
 
- After modification, you can run the following command to reproduce the long-context acceleration effect (the script will automatically download the model weights from HuggingFace)
- ```bash
- python3 tests/test_generate.py
- ```
-
- For more details about CPM.cu, please refer to [the repo CPM.cu](https://github.com/OpenBMB/cpm.cu).
-
  ### Inference with Transformers
  ```python
  from transformers import AutoModelForCausalLM, AutoTokenizer
@@ -199,112 +195,4 @@ Then you can use the chat interface by running the following command:
  ```python
  import openai
 
- client = openai.Client(base_url=f"http://localhost:30000/v1", api_key="None")
-
- response = client.chat.completions.create(
-     model="openbmb/MiniCPM4-8B",
-     messages=[
-         {"role": "user", "content": "Write an article about Artificial Intelligence."},
-     ],
-     temperature=0.7,
-     max_tokens=1024,
- )
-
- print(response.choices[0].message.content)
- ```
-
- ### Inference with [vLLM](https://github.com/vllm-project/vllm)
- For now, you need to install the latest version of vLLM.
- ```
- pip install -U vllm \
-     --pre \
-     --extra-index-url https://wheels.vllm.ai/nightly
- ```
-
- Then you can inference MiniCPM4-8B with vLLM:
- ```python
- from transformers import AutoTokenizer
- from vllm import LLM, SamplingParams
-
- model_name = "openbmb/MiniCPM4-8B"
- prompt = [{"role": "user", "content": "Please recommend 5 tourist attractions in Beijing. "}]
-
- tokenizer = AutoTokenizer.from_pretrained(model_name, trust_remote_code=True)
- input_text = tokenizer.apply_chat_template(prompt, tokenize=False, add_generation_prompt=True)
-
- llm = LLM(
-     model=model_name,
-     trust_remote_code=True,
-     max_num_batched_tokens=32768,
-     dtype="bfloat16",
-     gpu_memory_utilization=0.8,
- )
- sampling_params = SamplingParams(top_p=0.7, temperature=0.7, max_tokens=1024, repetition_penalty=1.02)
-
- outputs = llm.generate(prompts=input_text, sampling_params=sampling_params)
-
- print(outputs[0].outputs[0].text)
- ```
-
- Also, you can start the inference server by running the following command:
- > **Note**: In vLLM's chat API, `add_special_tokens` is `False` by default. This means important special tokens—such as the beginning-of-sequence (BOS) token—will not be added automatically. To ensure the input prompt is correctly formatted for the model, you should explicitly set `extra_body={"add_special_tokens": True}`.
-
- ```bash
- vllm serve openbmb/MiniCPM4-8B
- ```
-
- Then you can use the chat interface by running the following code:
-
- ```python
- import openai
-
- client = openai.Client(base_url="http://localhost:8000/v1", api_key="EMPTY")
-
- response = client.chat.completions.create(
-     model="openbmb/MiniCPM4-8B",
-     messages=[
-         {"role": "user", "content": "Write an article about Artificial Intelligence."},
-     ],
-     temperature=0.7,
-     max_tokens=1024,
-     extra_body=dict(add_special_tokens=True), # Ensures special tokens are added for chat template
-
- )
-
- print(response.choices[0].message.content)
- ```
-
- ## Evaluation Results
- On two typical end-side chips, Jetson AGX Orin and RTX 4090, MiniCPM4 demonstrates significantly faster processing speed compared to similar-size models in long text processing tasks. As text length increases, MiniCPM4's efficiency advantage becomes more pronounced. On the Jetson AGX Orin platform, compared to Qwen3-8B, MiniCPM4 achieves approximately 7x decoding speed improvement.
-
- ![benchmark](https://github.com/OpenBMB/MiniCPM/blob/main/assets/minicpm4/efficiency.png?raw=true)
-
- #### Comprehensive Evaluation
- MiniCPM4 launches end-side versions with 8B and 0.5B parameter scales, both achieving best-in-class performance in their respective categories.
-
- ![benchmark](https://github.com/OpenBMB/MiniCPM/blob/main/assets/minicpm4/benchmark.png?raw=true)
-
- #### Long Text Evaluation
- MiniCPM4 is pre-trained on 32K long texts and achieves length extension through YaRN technology. In the 128K long text needle-in-a-haystack task, MiniCPM4 demonstrates outstanding performance.
-
- ![long-niah](https://github.com/OpenBMB/MiniCPM/blob/main/assets/minicpm4/128k-niah.png?raw=true)
-
- ## Statement
- - As a language model, MiniCPM generates content by learning from a vast amount of text.
- - However, it does not possess the ability to comprehend or express personal opinions or value judgments.
- - Any content generated by MiniCPM does not represent the viewpoints or positions of the model developers.
- - Therefore, when using content generated by MiniCPM, users should take full responsibility for evaluating and verifying it on their own.
-
- ## LICENSE
- - This repository and MiniCPM models are released under the [Apache-2.0](https://github.com/OpenBMB/MiniCPM/blob/main/LICENSE) License.
-
- ## Citation
- - Please cite our [paper](https://github.com/OpenBMB/MiniCPM/tree/main/report/MiniCPM_4_Technical_Report.pdf) if you find our work valuable.
-
- ```bibtex
- @article{minicpm4,
-     title={{MiniCPM4}: Ultra-Efficient LLMs on End Devices},
-     author={MiniCPM Team},
-     year={2025}
- }
- ```
+ client =
 