Commit 30640f7 · Parent(s): 202a17d
writinwaters committed

[doc] Updated default value of quote in 'get answers' (#1093)
### What problem does this PR solve?

_Briefly describe what this PR aims to solve. Include background context that will help reviewers understand the purpose of the PR._
### Type of change
- [x] Documentation Update
**Files changed:**

- docs/guides/deploy_local_llm.md (+11 −7)
- docs/references/api.md (+1 −1)
**docs/guides/deploy_local_llm.md** — CHANGED
@@ -115,34 +115,38 @@ Xorbits Inference([Xinference](https://github.com/xorbitsai/inference)) enables

For a complete list of supported models, see the [Builtin Models](https://inference.readthedocs.io/en/latest/models/builtin/).
:::

To deploy a local model, e.g., **Mistral**, using Xinference:

### 1. Check firewall settings

Ensure that your host machine's firewall allows inbound connections on port 9997.

### 2. Start an Xinference instance

```bash
$ xinference-local --host 0.0.0.0 --port 9997
```

### 3. Launch your local model

Launch your local model (**Mistral**), ensuring that you replace `${quantization}` with your chosen quantization method:

```bash
$ xinference launch -u mistral --model-name mistral-v0.1 --size-in-billions 7 --model-format pytorch --quantization ${quantization}
```

### 4. Add Xinference

In RAGFlow, click on your logo on the top right of the page **>** **Model Providers** and add Xinference to RAGFlow:



### 5. Complete basic Xinference settings

Enter an accessible base URL, such as `http://<your-xinference-endpoint-domain>:9997/v1`.

### 6. Update System Model Settings

Click on your logo **>** **Model Providers** **>** **System Model Settings** to update your model.

*You should now be able to find **mistral** from the dropdown list under **Chat model**.*
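The firewall step added in this commit can be sketched as follows, assuming an Ubuntu/Debian host that uses `ufw`; the guide itself does not name a firewall tool, so adapt the commands for `firewalld`, `iptables`, or your cloud provider's security groups:

```shell
# Assumption: ufw is the active firewall (Ubuntu/Debian default tooling).
sudo ufw allow 9997/tcp   # open the port Xinference listens on
sudo ufw status           # confirm the 9997/tcp rule is listed as ALLOW
```

After starting `xinference-local`, you can also confirm the endpoint responds from another machine with `curl http://<host>:9997/v1/models` before wiring it into RAGFlow.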
**docs/references/api.md** — CHANGED
@@ -224,7 +224,7 @@ This method retrieves from RAGFlow the answer to the user's latest question.

 |------------------|--------|----------|---------------|
 | `conversation_id`| string | Yes | The ID of the conversation session. Call ['GET' /new_conversation](#create-conversation) to retrieve the ID.|
 | `messages` | json | Yes | The latest question in a JSON form, such as `[{"role": "user", "content": "How are you doing!"}]`|
-| `quote` | bool | No | Default:
+| `quote` | bool | No | Default: false|
 | `stream` | bool | No | Default: true |
 | `doc_ids` | string | No | Document IDs delimited by comma, like `c790da40ea8911ee928e0242ac180005,23dsf34ree928e0242ac180005`. The retrieved contents will be confined to these documents. |
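The corrected parameter table can be exercised with a small request-body builder. This is only a sketch: the helper name `build_get_answers_payload` is invented for illustration, and it assembles the JSON body exactly as the table describes, with `quote` defaulting to `false` per this commit:

```python
import json

def build_get_answers_payload(conversation_id, question,
                              quote=False, stream=True, doc_ids=None):
    """Assemble the JSON body for the 'get answers' call.

    `quote` defaults to False, matching the updated parameter table;
    the helper itself is illustrative, not part of the RAGFlow API.
    """
    payload = {
        "conversation_id": conversation_id,
        "messages": [{"role": "user", "content": question}],
        "quote": quote,
        "stream": stream,
    }
    if doc_ids is not None:
        payload["doc_ids"] = doc_ids  # comma-delimited document IDs
    return payload

body = build_get_answers_payload("abc123", "How are you doing!")
print(json.dumps(body, indent=2))
```

Omitting `quote` now yields `"quote": false` in the serialized body, which is the behavior change this documentation update records.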