Update LMDeploy version #2
by jack-zxy · opened
README.md CHANGED
@@ -187,7 +187,7 @@ The minimum hardware requirements for deploying Intern-S1 series models are:
 
 You can utilize one of the following LLM inference frameworks to create an OpenAI compatible server:
 
-#### [lmdeploy (>=0.9.2)](https://github.com/InternLM/lmdeploy)
+#### [lmdeploy (>=0.9.2.post1)](https://github.com/InternLM/lmdeploy)
 
 ```bash
 lmdeploy serve api_server internlm/Intern-S1-mini --reasoning-parser intern-s1 --tool-call-parser intern-s1
 ```
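Once `lmdeploy serve api_server` is running, it exposes OpenAI-compatible routes such as `/v1/chat/completions`. A minimal client-side sketch using only the Python standard library, assuming the server is reachable at its default address (here `http://localhost:23333`; adjust if `--server-port` was set):

```python
import json
from urllib import request

# Assumed server address; the port is whatever lmdeploy's api_server binds to.
BASE_URL = "http://localhost:23333/v1"

def build_chat_request(prompt: str) -> request.Request:
    """Build an OpenAI-style POST request against /chat/completions."""
    payload = {
        "model": "internlm/Intern-S1-mini",
        "messages": [{"role": "user", "content": prompt}],
    }
    return request.Request(
        f"{BASE_URL}/chat/completions",
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )

# Sending the request (request.urlopen(req)) requires the server to be up;
# here we only construct it to show the expected payload shape.
req = build_chat_request("Hello")
```

Any OpenAI-compatible client (e.g. the official `openai` Python package pointed at this base URL) can be used the same way.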