Update README.md
README.md CHANGED
@@ -40,7 +40,7 @@ python multiprocess_inference.py
 If the performance is not satisfactory, you can change the CPU scheduler to keep the CPU running at the highest frequency, and bind the inference program to the big core cluster (`taskset -c 4-7 python multiprocess_inference.py`).
 
 test.jpg:
-
+
 
 >```
 >Start loading language model (size: 7810.02 MB)

@@ -133,4 +133,5 @@
 
 - [sophgo/LLM-TPU models/MiniCPM-V-2_6](https://github.com/sophgo/LLM-TPU/tree/main/models/MiniCPM-V-2_6)
 - [openbmb/MiniCPM-V-2_6](https://huggingface.co/openbmb/MiniCPM-V-2_6)
-- [Qwen/Qwen2-7B](https://huggingface.co/Qwen/Qwen2-7B)
+- [Qwen/Qwen2-7B](https://huggingface.co/Qwen/Qwen2-7B)
+- [happyme531/MiniCPM-V-2_6-rkllm](https://huggingface.co/happyme531/MiniCPM-V-2_6-rkllm)
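For readers following the performance note in the first hunk, below is a minimal sketch of one way to keep the CPU at its highest frequency before launching the script. It assumes an RK3588-class board running Linux, with cpufreq policies exposed under `/sys/devices/system/cpu/cpufreq/` and cores 4-7 forming the big cluster; governor names and core numbering may differ on other boards.

```bash
# Assumption: RK3588-class board where cores 4-7 are the big cluster.
# Keep every cpufreq policy at its highest frequency by selecting the
# "performance" governor (requires root).
for policy in /sys/devices/system/cpu/cpufreq/policy*; do
    echo performance | sudo tee "$policy/scaling_governor"
done

# Then bind the inference program to the big core cluster, as in the README.
taskset -c 4-7 python multiprocess_inference.py
```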