Kevin Hu committed on
Commit
c3bad71
·
1 Parent(s): 39ed6ea

Updated obsolete faqs (#3575)


### What problem does this PR solve?

### Type of change

- [x] Documentation Update

docker/README.md CHANGED
@@ -77,7 +77,7 @@ The [.env](./.env) file contains important environment variables for Docker.
77
 
78
  - `infiniflow/ragflow:dev-slim` (default): The RAGFlow Docker image without embedding models.
79
  - `infiniflow/ragflow:dev`: The RAGFlow Docker image with embedding models including:
80
- - Embedded embedding models:
81
  - `BAAI/bge-large-zh-v1.5`
82
  - `BAAI/bge-reranker-v2-m3`
83
  - `maidalun1020/bce-embedding-base_v1`
@@ -117,6 +117,11 @@ The [.env](./.env) file contains important environment variables for Docker.
117
  - `MACOS`
118
  Optimizations for MacOS. It is disabled by default. You can uncomment this line if your OS is MacOS.
119
 
120
  ## 🐋 Service configuration
121
 
122
  [service_conf.yaml](./service_conf.yaml) specifies the system-level configuration for RAGFlow and is used by its API server and task executor. In a dockerized setup, this file is automatically created based on the [service_conf.yaml.template](./service_conf.yaml.template) file (replacing all environment variables by their values).
@@ -154,4 +159,4 @@ The [.env](./.env) file contains important environment variables for Docker.
154
  - `api_key`: The API key for the specified LLM. You will need to apply for your model API key online.
155
 
156
  > [!TIP]
157
- > If you do not set the default LLM here, configure the default LLM on the **Settings** page in the RAGFlow UI.
 
77
 
78
  - `infiniflow/ragflow:dev-slim` (default): The RAGFlow Docker image without embedding models.
79
  - `infiniflow/ragflow:dev`: The RAGFlow Docker image with embedding models including:
80
+ - Built-in embedding models:
81
  - `BAAI/bge-large-zh-v1.5`
82
  - `BAAI/bge-reranker-v2-m3`
83
  - `maidalun1020/bce-embedding-base_v1`
 
117
  - `MACOS`
118
  Optimizations for MacOS. It is disabled by default. You can uncomment this line if your OS is MacOS.
119
 
120
+ ### Maximum file size
121
+
122
+ - `MAX_CONTENT_LENGTH`
123
+ The maximum file size per uploaded file, in bytes. You can uncomment this line if you wish to change the default 128 MB file size limit.
124
+
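Since the value is specified in bytes, it can help to double-check the number before putting it in **.env**; a tiny sanity-check sketch (the `megabytes` helper is hypothetical, not part of RAGFlow):

```python
def megabytes(n: int) -> int:
    """Convert a size in megabytes to the byte count MAX_CONTENT_LENGTH expects."""
    return n * 1024 * 1024

# The default 128 MB limit corresponds to:
print(megabytes(128))  # 134217728
# Doubling the limit would mean setting MAX_CONTENT_LENGTH=268435456:
print(megabytes(256))  # 268435456
```

Remember to restart the containers after changing **.env** for a new limit to take effect.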
125
  ## 🐋 Service configuration
126
 
127
  [service_conf.yaml](./service_conf.yaml) specifies the system-level configuration for RAGFlow and is used by its API server and task executor. In a dockerized setup, this file is automatically created based on the [service_conf.yaml.template](./service_conf.yaml.template) file (replacing all environment variables by their values).
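The variable-substitution step described above can be pictured with a short sketch; the `render_template` function and the sample values are illustrative assumptions, not RAGFlow's actual code:

```python
import re

def render_template(text: str, env: dict) -> str:
    """Replace ${VAR} placeholders with values from env, leaving unknown
    variables untouched (a simplified stand-in for the real templating)."""
    return re.sub(r"\$\{(\w+)\}", lambda m: env.get(m.group(1), m.group(0)), text)

template = "es:\n  hosts: 'http://${ES_HOST}:${ES_PORT}'"
print(render_template(template, {"ES_HOST": "es01", "ES_PORT": "9200"}))
# es:
#   hosts: 'http://es01:9200'
```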
 
159
  - `api_key`: The API key for the specified LLM. You will need to apply for your model API key online.
160
 
161
  > [!TIP]
162
+ > If you do not set the default LLM here, configure the default LLM on the **Settings** page in the RAGFlow UI.
docs/configurations.md CHANGED
@@ -64,7 +64,7 @@ The [.env](https://github.com/infiniflow/ragflow/blob/main/docker/.env) file con
64
  ### MySQL
65
 
66
  - `MYSQL_PASSWORD`
67
- The password for MySQL.
68
  - `MYSQL_PORT`
69
  The port used to expose the MySQL service to the host machine, allowing **external** access to the MySQL database running inside the Docker container. Defaults to `5455`.
70
 
@@ -75,7 +75,7 @@ The [.env](https://github.com/infiniflow/ragflow/blob/main/docker/.env) file con
75
  - `MINIO_PORT`
76
  The port used to expose the MinIO API service to the host machine, allowing **external** access to the MinIO object storage service running inside the Docker container. Defaults to `9000`.
77
  - `MINIO_USER`
78
- The username for MinIO.
79
  - `MINIO_PASSWORD`
80
  The password for MinIO.
81
 
@@ -95,7 +95,7 @@ The [.env](https://github.com/infiniflow/ragflow/blob/main/docker/.env) file con
95
 
96
  - `infiniflow/ragflow:dev-slim` (default): The RAGFlow Docker image without embedding models.
97
  - `infiniflow/ragflow:dev`: The RAGFlow Docker image with embedding models including:
98
- - Embedded embedding models:
99
  - `BAAI/bge-large-zh-v1.5`
100
  - `BAAI/bge-reranker-v2-m3`
101
  - `maidalun1020/bce-embedding-base_v1`
@@ -181,4 +181,4 @@ The default LLM to use for a new RAGFlow user. It is disabled by default. To ena
181
 
182
  :::tip NOTE
183
  If you do not set the default LLM here, configure the default LLM on the **Settings** page in the RAGFlow UI.
184
- :::
 
64
  ### MySQL
65
 
66
  - `MYSQL_PASSWORD`
67
+ The password for MySQL.
68
  - `MYSQL_PORT`
69
  The port used to expose the MySQL service to the host machine, allowing **external** access to the MySQL database running inside the Docker container. Defaults to `5455`.
70
 
 
75
  - `MINIO_PORT`
76
  The port used to expose the MinIO API service to the host machine, allowing **external** access to the MinIO object storage service running inside the Docker container. Defaults to `9000`.
77
  - `MINIO_USER`
78
+ The username for MinIO.
79
  - `MINIO_PASSWORD`
80
  The password for MinIO.
81
 
 
95
 
96
  - `infiniflow/ragflow:dev-slim` (default): The RAGFlow Docker image without embedding models.
97
  - `infiniflow/ragflow:dev`: The RAGFlow Docker image with embedding models including:
98
+ - Built-in embedding models:
99
  - `BAAI/bge-large-zh-v1.5`
100
  - `BAAI/bge-reranker-v2-m3`
101
  - `maidalun1020/bce-embedding-base_v1`
 
181
 
182
  :::tip NOTE
183
  If you do not set the default LLM here, configure the default LLM on the **Settings** page in the RAGFlow UI.
184
+ :::
docs/quickstart.mdx CHANGED
@@ -286,15 +286,15 @@ Once you have selected an embedding model and used it to parse a file, you are n
286
  _When the file parsing completes, its parsing status changes to **SUCCESS**._
287
 
288
  :::caution NOTE
289
- - If your file parsing gets stuck at below 1%, see [FAQ 4.3](https://ragflow.io/docs/dev/faq#43-why-does-my-document-parsing-stall-at-under-one-percent).
290
- - If your file parsing gets stuck at near completion, see [FAQ 4.4](https://ragflow.io/docs/dev/faq#44-why-does-my-pdf-parsing-stall-near-completion-while-the-log-does-not-show-any-error)
291
  :::
292
 
293
- ## Intervene with file parsing
294
 
295
- RAGFlow features visibility and explainability, allowing you to view the chunking results and intervene where necessary. To do so:
296
 
297
- 1. Click on the file that completes file parsing to view the chunking results:
298
 
299
  _You are taken to the **Chunk** page:_
300
 
@@ -306,8 +306,8 @@ RAGFlow features visibility and explainability, allowing you to view the chunkin
306
 
307
  ![update chunk](https://github.com/infiniflow/ragflow/assets/93570324/1d84b408-4e9f-46fd-9413-8c1059bf9c76)
308
 
309
- :::caution NOTE
310
- You can add keywords to a file chunk to improve its ranking for queries containing those keywords. This action increases its keyword weight and can improve its position in the search list.
311
  :::
312
 
313
  4. In Retrieval testing, ask a quick question in **Test text** to double check if your configurations work:
@@ -318,17 +318,17 @@ You can add keywords to a file chunk to improve its ranking for queries containi
318
 
319
  ## Set up an AI chat
320
 
321
- Conversations in RAGFlow are based on a particular knowledge base or multiple knowledge bases. Once you have created your knowledge base and finished file parsing, you can go ahead and start an AI conversation.
322
 
323
  1. Click the **Chat** tab in the middle top of the page **>** **Create an assistant** to show the **Chat Configuration** dialogue *of your next dialogue*.
324
  > RAGFlow offers the flexibility of choosing a different chat model for each dialogue, while allowing you to set the default models in **System Model Settings**.
325
 
326
- 2. Update **Assistant Setting**:
327
 
328
  - Name your assistant and specify your knowledge bases.
329
  - **Empty response**:
330
- - If you wish to *confine* RAGFlow's answers to your knowledge bases, leave a response here. Then when it doesn't retrieve an answer, it *uniformly* responds with what you set here.
331
- - If you wish RAGFlow to *improvise* when it doesn't retrieve an answer from your knowledge bases, leave it blank, which may give rise to hallucinations.
332
 
333
  3. Update **Prompt Engine** or leave it as is for the beginning.
334
 
@@ -347,4 +347,4 @@ RAGFlow also offers HTTP and Python APIs for you to integrate RAGFlow's capabili
347
  - [Acquire a RAGFlow API key](./guides/develop/acquire_ragflow_api_key.md)
348
  - [HTTP API reference](./references/http_api_reference.md)
349
  - [Python API reference](./references/python_api_reference.md)
350
- :::
 
286
  _When the file parsing completes, its parsing status changes to **SUCCESS**._
287
 
288
  :::caution NOTE
289
+ - If your file parsing gets stuck at below 1%, see [this FAQ](https://ragflow.io/docs/dev/faq#why-does-my-document-parsing-stall-at-under-one-percent).
290
+ - If your file parsing gets stuck near completion, see [this FAQ](https://ragflow.io/docs/dev/faq#why-does-my-pdf-parsing-stall-near-completion-while-the-log-does-not-show-any-error).
291
  :::
292
 
293
+ ## Intervene with file parsing
294
 
295
+ RAGFlow features visibility and explainability, allowing you to view the chunking results and intervene where necessary. To do so:
296
 
297
+ 1. Click on the file that completes file parsing to view the chunking results:
298
 
299
  _You are taken to the **Chunk** page:_
300
 
 
306
 
307
  ![update chunk](https://github.com/infiniflow/ragflow/assets/93570324/1d84b408-4e9f-46fd-9413-8c1059bf9c76)
308
 
309
+ :::caution NOTE
310
+ You can add keywords to a file chunk to improve its ranking for queries containing those keywords. This action increases its keyword weight and can improve its position in the search list.
311
  :::
312
 
313
  4. In Retrieval testing, ask a quick question in **Test text** to double check if your configurations work:
 
318
 
319
  ## Set up an AI chat
320
 
321
+ Conversations in RAGFlow are based on a particular knowledge base or multiple knowledge bases. Once you have created your knowledge base and finished file parsing, you can go ahead and start an AI conversation.
322
 
323
  1. Click the **Chat** tab in the middle top of the page **>** **Create an assistant** to show the **Chat Configuration** dialogue *of your next dialogue*.
324
  > RAGFlow offers the flexibility of choosing a different chat model for each dialogue, while allowing you to set the default models in **System Model Settings**.
325
 
326
+ 2. Update **Assistant Setting**:
327
 
328
  - Name your assistant and specify your knowledge bases.
329
  - **Empty response**:
330
+ - If you wish to *confine* RAGFlow's answers to your knowledge bases, leave a response here. Then when it doesn't retrieve an answer, it *uniformly* responds with what you set here.
331
+ - If you wish RAGFlow to *improvise* when it doesn't retrieve an answer from your knowledge bases, leave it blank, which may give rise to hallucinations.
332
 
333
  3. Update **Prompt Engine** or leave it as is for the beginning.
334
 
 
347
  - [Acquire a RAGFlow API key](./guides/develop/acquire_ragflow_api_key.md)
348
  - [HTTP API reference](./references/http_api_reference.md)
349
  - [Python API reference](./references/python_api_reference.md)
350
+ :::
docs/references/faq.md CHANGED
@@ -5,162 +5,161 @@ slug: /faq
5
 
6
  # Frequently asked questions
7
 
8
- Queries regarding general usage, troubleshooting, features, performance, and more.
9
 
10
- ## General
 
 
11
 
12
- ### 1. What sets RAGFlow apart from other RAG products?
 
 
13
 
14
  The "garbage in garbage out" status quo remains unchanged despite the fact that LLMs have advanced Natural Language Processing (NLP) significantly. In response, RAGFlow introduces two unique features compared to other Retrieval-Augmented Generation (RAG) products.
15
 
16
  - Fine-grained document parsing: Document parsing involves images and tables, with the flexibility for you to intervene as needed.
17
  - Traceable answers with reduced hallucinations: You can trust RAGFlow's responses as you can view the citations and references supporting them.
18
 
19
- ### 2. Which languages does RAGFlow support?
20
 
21
- English, simplified Chinese, traditional Chinese for now.
22
 
23
- ### 3. Which embedding models can be deployed locally?
24
 
25
- - BAAI/bge-large-zh-v1.5
26
- - BAAI/bge-base-en-v1.5
27
- - BAAI/bge-large-en-v1.5
28
- - BAAI/bge-small-en-v1.5
29
- - BAAI/bge-small-zh-v1.5
30
- - jinaai/jina-embeddings-v2-base-en
31
- - jinaai/jina-embeddings-v2-small-en
32
- - nomic-ai/nomic-embed-text-v1.5
33
- - sentence-transformers/all-MiniLM-L6-v2
34
- - maidalun1020/bce-embedding-base_v1
35
 
36
- ## Performance
37
 
38
- ### 1. Why does it take longer for RAGFlow to parse a document than LangChain?
39
 
40
- We put painstaking effort into document pre-processing tasks like layout analysis, table structure recognition, and OCR (Optical Character Recognition) using our vision model. This contributes to the additional time required.
41
 
42
- ### 2. Why does RAGFlow require more resources than other projects?
43
 
44
- RAGFlow has a number of built-in models for document structure parsing, which account for the additional computational resources.
45
 
46
- ## Feature
47
 
48
- ### 1. Which architectures or devices does RAGFlow support?
49
 
50
- Currently, we only support x86 CPU and Nvidia GPU.
51
 
52
- ### 2. Do you offer an API for integration with third-party applications?
53
 
54
  The corresponding APIs are now available. See the [RAGFlow HTTP API Reference](./http_api_reference.md) or the [RAGFlow Python API Reference](./python_api_reference.md) for more information.
55
 
56
- ### 3. Do you support stream output?
 
 
57
 
58
- This feature is supported.
 
 
59
 
60
- ### 4. Is it possible to share dialogue through URL?
61
 
62
  No, this feature is not supported.
63
 
64
- ### 5. Do you support multiple rounds of dialogues, i.e., referencing previous dialogues as context for the current dialogue?
 
 
65
 
66
  This feature and the related APIs are still in development. Contributions are welcome.
67
 
 
68
 
69
  ## Troubleshooting
70
 
71
- ### 1. Issues with docker images
72
 
73
- #### 1.1 How to build the RAGFlow image from scratch?
74
 
75
- ```
76
- $ git clone https://github.com/infiniflow/ragflow.git
77
- $ cd ragflow
78
- $ docker build -t infiniflow/ragflow:latest .
79
- $ cd ragflow/docker
80
- $ docker compose up -d
81
- ```
82
 
83
- #### 1.2 `process "/bin/sh -c cd ./web && npm i && npm run build"` failed
84
 
85
- 1. Check your network from within Docker, for example:
86
- ```bash
87
- curl https://hf-mirror.com
88
- ```
89
 
90
- 2. If your network works fine, the issue lies with the Docker network configuration. Replace the Docker building command:
91
- ```bash
92
- docker build -t infiniflow/ragflow:vX.Y.Z.
93
- ```
94
- With this:
95
- ```bash
96
- docker build -t infiniflow/ragflow:vX.Y.Z. --network host
97
- ```
98
 
99
- ### 2. Issues with huggingface models
100
 
101
- #### 2.1 Cannot access https://huggingface.co
102
-
103
- A *locally* deployed RAGflow downloads OCR and embedding modules from [Huggingface website](https://huggingface.co) by default. If your machine is unable to access this site, the following error occurs and PDF parsing fails:
104
 
105
  ```
106
  FileNotFoundError: [Errno 2] No such file or directory: '/root/.cache/huggingface/hub/models--InfiniFlow--deepdoc/snapshots/be0c1e50eef6047b412d1800aa89aba4d275f997/ocr.res'
107
  ```
108
- To fix this issue, use https://hf-mirror.com instead:
109
 
110
- 1. Stop all containers and remove all related resources:
111
 
112
- ```bash
113
- cd ragflow/docker/
114
- docker compose down
115
- ```
116
 
117
- 2. Replace `https://huggingface.co` with `https://hf-mirror.com` in **ragflow/docker/docker-compose.yml**.
118
-
119
- 3. Start up the server:
 
120
 
121
- ```bash
122
- docker compose up -d
123
- ```
124
 
125
- #### 2.2. `MaxRetryError: HTTPSConnectionPool(host='hf-mirror.com', port=443)`
 
 
126
 
127
- This error suggests that you do not have Internet access or are unable to connect to hf-mirror.com. Try the following:
128
 
129
- 1. Manually download the resource files from [huggingface.co/InfiniFlow/deepdoc](https://huggingface.co/InfiniFlow/deepdoc) to your local folder **~/deepdoc**.
130
  2. Add a volume to **docker-compose.yml**, for example:
131
- ```
132
- - ~/deepdoc:/ragflow/rag/res/deepdoc
133
- ```
134
 
135
- #### 2.3 `FileNotFoundError: [Errno 2] No such file or directory: '/root/.cache/huggingface/hub/models--InfiniFlow--deepdoc/snapshots/FileNotFoundError: [Errno 2] No such file or directory: '/ragflow/rag/res/deepdoc/ocr.res'be0c1e50eef6047b412d1800aa89aba4d275f997/ocr.res'`
 
 
136
 
137
- 1. Check your network from within Docker, for example:
138
- ```bash
139
- curl https://hf-mirror.com
140
- ```
141
- 2. Run `ifconfig` to check the `mtu` value. If the server's `mtu` is `1450` while the NIC's `mtu` in the container is `1500`, this mismatch may cause network instability. Adjust the `mtu` policy as follows:
142
 
143
- ```
144
- vim docker-compose-base.yml
145
- # Original configuration:
146
- networks:
147
- ragflow:
148
- driver: bridge
149
- # Modified configuration:
150
- networks:
151
- ragflow:
152
- driver: bridge
153
- driver_opts:
154
- com.docker.network.driver.mtu: 1450
155
- ```
156
 
157
- ### 3. Issues with RAGFlow servers
158
 
159
- #### 3.1 `WARNING: can't find /raglof/rag/res/borker.tm`
160
 
161
  Ignore this warning and continue. All system warnings can be ignored.
162
 
163
- #### 3.2 `network anomaly There is an abnormality in your network and you cannot connect to the server.`
 
 
164
 
165
  ![anomaly](https://github.com/infiniflow/ragflow/assets/93570324/beb7ad10-92e4-4a58-8886-bfb7cbd09e5d)
166
 
@@ -181,64 +180,79 @@ You will not log in to RAGFlow unless the server is fully initialized. Run `dock
181
  INFO:werkzeug:Press CTRL+C to quit
182
  ```
183
 
 
184
 
185
- ### 4. Issues with RAGFlow backend services
186
-
187
- #### 4.1 `dependency failed to start: container ragflow-mysql is unhealthy`
188
 
189
- `dependency failed to start: container ragflow-mysql is unhealthy` means that your MySQL container failed to start. Try replacing `mysql:5.7.18` with `mariadb:10.5.8` in **docker-compose-base.yml**.
190
 
191
- #### 4.2 `Realtime synonym is disabled, since no redis connection`
192
 
193
  Ignore this warning and continue. All system warnings can be ignored.
194
 
195
  ![](https://github.com/infiniflow/ragflow/assets/93570324/ef5a6194-084a-4fe3-bdd5-1c025b40865c)
196
 
197
- #### 4.3 Why does my document parsing stall at under one percent?
 
 
198
 
199
  ![stall](https://github.com/infiniflow/ragflow/assets/93570324/3589cc25-c733-47d5-bbfc-fedb74a3da50)
200
 
201
- Click the red cross beside the 'parsing status' bar, then restart the parsing process to see if the issue remains. If the issue persists and your RAGFlow is deployed locally, try the following:
202
 
203
  1. Check the log of your RAGFlow server to see if it is running properly:
204
- ```bash
205
- docker logs -f ragflow-server
206
- ```
 
 
207
  2. Check if the **task_executor.py** process exists.
208
  3. Check if your RAGFlow server can access hf-mirror.com or huggingface.com.
209
 
210
- #### 4.4 Why does my pdf parsing stall near completion, while the log does not show any error?
 
 
211
 
212
  Click the red cross beside the 'parsing status' bar, then restart the parsing process to see if the issue remains. If the issue persists and your RAGFlow is deployed locally, the parsing process is likely killed due to insufficient RAM. Try increasing the memory allocation by raising the `MEM_LIMIT` value in **docker/.env**.
213
 
214
  :::note
215
  Ensure that you restart your RAGFlow server for your changes to take effect!
 
216
  ```bash
217
  docker compose stop
218
  ```
 
219
  ```bash
220
  docker compose up -d
221
  ```
 
222
  :::
223
 
224
  ![nearcompletion](https://github.com/infiniflow/ragflow/assets/93570324/563974c3-f8bb-4ec8-b241-adcda8929cbb)
225
 
226
- #### 4.5 `Index failure`
 
 
227
 
228
  An index failure usually indicates an unavailable Elasticsearch service.
229
 
230
- #### 4.6 How to check the log of RAGFlow?
 
 
231
 
232
  ```bash
233
- tail -f path_to_ragflow/docker/ragflow-logs/rag/*.log
234
  ```
235
 
236
- #### 4.7 How to check the status of each component in RAGFlow?
 
 
237
 
238
  ```bash
239
  $ docker ps
240
  ```
241
- *The system displays the following if all your RAGFlow components are running properly:*
 
242
 
243
  ```
244
  5bc45806b680 infiniflow/ragflow:latest "./entrypoint.sh" 11 hours ago Up 11 hours 0.0.0.0:80->80/tcp, :::80->80/tcp, 0.0.0.0:443->443/tcp, :::443->443/tcp, 0.0.0.0:9380->9380/tcp, :::9380->9380/tcp ragflow-server
@@ -247,21 +261,24 @@ d8c86f06c56b mysql:5.7.18 "docker-entrypoint.s…" 7 days ago Up
247
  cd29bcb254bc quay.io/minio/minio:RELEASE.2023-12-20T01-00-02Z "/usr/bin/docker-ent…" 2 weeks ago Up 11 hours 0.0.0.0:9001->9001/tcp, :::9001->9001/tcp, 0.0.0.0:9000->9000/tcp, :::9000->9000/tcp ragflow-minio
248
  ```
249
 
250
- #### 4.8 `Exception: Can't connect to ES cluster`
 
 
251
 
252
  1. Check the status of your Elasticsearch component:
253
 
254
- ```bash
255
- $ docker ps
256
- ```
 
257
  *The status of a 'healthy' Elasticsearch component in your RAGFlow should look as follows:*
258
- ```
259
- 91220e3285dd docker.elastic.co/elasticsearch/elasticsearch:8.11.3 "/bin/tini -- /usr/l…" 11 hours ago Up 11 hours (healthy) 9300/tcp, 0.0.0.0:9200->9200/tcp, :::9200->9200/tcp ragflow-es-01
260
- ```
 
261
 
262
  2. If your container keeps restarting, ensure `vm.max_map_count` >= 262144 as per [this README](https://github.com/infiniflow/ragflow?tab=readme-ov-file#-start-up-the-server). Updating the `vm.max_map_count` value in **/etc/sysctl.conf** is required, if you wish to keep your change permanent. This configuration works only for Linux.
263
 
264
-
265
  3. If your issue persists, ensure that the ES host setting is correct:
266
 
267
  - If you are running RAGFlow with Docker, it is in **docker/service_conf.yml**. Set it as follows:
@@ -269,135 +286,127 @@ $ docker ps
269
  es:
270
  hosts: 'http://es01:9200'
271
  ```
272
- - If you run RAGFlow outside of Docker, verify the ES host setting in **conf/service_conf.yml** using:
273
  ```bash
274
  curl http://<IP_OF_ES>:<PORT_OF_ES>
275
  ```
276
 
277
- #### 4.9 Can't start ES container and get `Elasticsearch did not exit normally`
 
 
278
 
279
- This is because you forgot to update the `vm.max_map_count` value in **/etc/sysctl.conf** and your change to this value was reset after a system reboot.
 
 
280
 
281
- #### 4.10 `{"data":null,"code":100,"message":"<NotFound '404: Not Found'>"}`
282
 
283
  Your IP address or port number may be incorrect. If you are using the default configurations, enter `http://<IP_OF_YOUR_MACHINE>` (**NOT 9380, AND NO PORT NUMBER REQUIRED!**) in your browser. This should work.
284
 
285
- #### 4.11 `Ollama - Mistral instance running at 127.0.0.1:11434 but cannot add Ollama as model in RagFlow`
 
 
286
 
287
  A correct Ollama IP address and port is crucial to adding models to Ollama:
288
 
289
- - If you are on demo.ragflow.io, ensure that the server hosting Ollama has a publicly accessible IP address.Note that 127.0.0.1 is not a publicly accessible IP address.
290
  - If you deploy RAGFlow locally, ensure that Ollama and RAGFlow are in the same LAN and can communicate with each other.
291
 
292
- #### 4.12 Do you offer examples of using deepdoc to parse PDF or other files?
293
 
294
- Yes, we do. See the Python files under the **rag/app** folder.
295
 
296
- #### 4.13 Why did I fail to upload a 10MB+ file to my locally deployed RAGFlow?
297
 
298
- You probably forgot to update the **MAX_CONTENT_LENGTH** environment variable:
299
 
300
- 1. Add environment variable `MAX_CONTENT_LENGTH` to **ragflow/docker/.env**:
301
- ```
302
- MAX_CONTENT_LENGTH=100000000
303
- ```
304
  2. Update **docker-compose.yml**:
305
- ```
306
- environment:
307
- - MAX_CONTENT_LENGTH=${MAX_CONTENT_LENGTH}
308
- ```
 
 
309
  3. Restart the RAGFlow server:
310
- ```
311
- docker compose up ragflow -d
312
- ```
313
- *Now you should be able to upload files of sizes less than 100MB.*
314
-
315
- #### 4.14 `Table 'rag_flow.document' doesn't exist`
316
-
317
- This exception occurs when starting up the RAGFlow server. Try the following:
318
-
319
- 1. Prolong the sleep time: Go to **docker/entrypoint.sh**, locate line 26, and replace `sleep 60` with `sleep 280`.
320
- 2. If using Windows, ensure that the **entrypoint.sh** has LF end-lines.
321
- 3. Go to **docker/docker-compose.yml**, add the following:
322
- ```
323
- ./entrypoint.sh:/ragflow/entrypoint.sh
324
- ```
325
- 4. Change directory:
326
- ```bash
327
- cd docker
328
- ```
329
- 5. Stop the RAGFlow server:
330
- ```bash
331
- docker compose stop
332
- ```
333
- 6. Restart up the RAGFlow server:
334
- ```bash
335
- docker compose up
336
- ```
337
-
338
- #### 4.15 `hint : 102 Fail to access model Connection error`
339
-
340
- ![hint102](https://github.com/infiniflow/ragflow/assets/93570324/6633d892-b4f8-49b5-9a0a-37a0a8fba3d2)
341
-
342
- 1. Ensure that the RAGFlow server can access the base URL.
343
- 2. Do not forget to append `/v1/` to `http://IP:port`:
344
- `http://IP:port/v1/`
345
-
346
- #### 4.16 `FileNotFoundError: [Errno 2] No such file or directory`
347
-
348
- 1. Check if the status of your minio container is healthy:
349
  ```bash
350
  docker ps
351
  ```
 
352
  2. Ensure that the username and password settings of MySQL and MinIO in **docker/.env** are in line with those in **docker/service_conf.yml**.
353
 
 
 
354
  ## Usage
355
 
356
- ### 1. How to increase the length of RAGFlow responses?
 
 
357
 
358
  1. Right click the desired dialog to display the **Chat Configuration** window.
359
  2. Switch to the **Model Setting** tab and adjust the **Max Tokens** slider to get the desired length.
360
  3. Click **OK** to confirm your change.
361
 
 
362
 
363
- ### 2. What does Empty response mean? How to set it?
364
-
365
- You limit what the system responds to what you specify in **Empty response** if nothing is retrieved from your knowledge base. If you do not specify anything in **Empty response**, you let your LLM improvise, giving it a chance to hallucinate.
366
-
367
- ### 3. Can I set the base URL for OpenAI somewhere?
368
-
369
- ![](https://github.com/infiniflow/ragflow/assets/93570324/8cfb6fa4-8a97-415d-b9fa-b6f405a055f3)
370
 
371
- ### 4. How to run RAGFlow with a locally deployed LLM?
372
 
373
- You can use Ollama to deploy local LLM. See [here](../guides/deploy_local_llm.mdx) for more information.
374
 
375
- ### 5. How to link up ragflow and ollama servers?
376
 
377
- - If RAGFlow is locally deployed, ensure that your RAGFlow and Ollama are in the same LAN.
378
  - If you are using our online demo, ensure that the IP address of your Ollama server is public and accessible.
379
 
380
- ### 6. How to configure RAGFlow to respond with 100% matched results, rather than utilizing LLM?
381
-
382
- 1. Click **Knowledge Base** in the middle top of the page.
383
- 2. Right click the desired knowledge base to display the **Configuration** dialogue.
384
- 3. Choose **Q&A** as the chunk method and click **Save** to confirm your change.
385
 
386
- ### 7. Do I need to connect to Redis?
387
-
388
- No, connecting to Redis is not required.
389
 
390
- ### 8. `Error: Range of input length should be [1, 30000]`
391
 
392
- This error occurs because there are too many chunks matching your search criteria. Try reducing the **TopN** and increasing **Similarity threshold** to fix this issue:
393
 
394
- 1. Click **Chat** in the middle top of the page.
395
  2. Right click the desired conversation > **Edit** > **Prompt Engine**
396
  3. Reduce the **TopN** and/or raise the **Similarity threshold**.
397
  4. Click **OK** to confirm your changes.
398
 
399
  ![topn](https://github.com/infiniflow/ragflow/assets/93570324/7ec72ab3-0dd2-4cff-af44-e2663b67b2fc)
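Conceptually, these two knobs bound how many retrieved chunks reach the LLM; a minimal sketch with a hypothetical helper (not RAGFlow's internals):

```python
def select_chunks(scored_chunks, top_n=6, threshold=0.2):
    """Drop chunks below the similarity threshold, then keep the top_n best.
    This is why lowering TopN or raising the threshold shrinks the input."""
    kept = [c for c in scored_chunks if c[1] >= threshold]
    kept.sort(key=lambda c: c[1], reverse=True)
    return kept[:top_n]

chunks = [("a", 0.9), ("b", 0.5), ("c", 0.15), ("d", 0.7)]
print(select_chunks(chunks, top_n=2, threshold=0.2))  # [('a', 0.9), ('d', 0.7)]
```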
400
 
401
- ### 9. How to upgrade RAGFlow?
402
 
403
  See [Upgrade RAGFlow](../guides/upgrade_ragflow.mdx) for more information.
 
 
 
5
 
6
  # Frequently asked questions
7
 
8
+ Queries regarding general features, troubleshooting, performance, and more.
9
 
10
+ ---
11
+
12
+ ## General features
13
 
14
+ ---
15
+
16
+ ### What sets RAGFlow apart from other RAG products?
17
 
18
  The "garbage in garbage out" status quo remains unchanged despite the fact that LLMs have advanced Natural Language Processing (NLP) significantly. In response, RAGFlow introduces two unique features compared to other Retrieval-Augmented Generation (RAG) products.
19
 
20
  - Fine-grained document parsing: Document parsing involves images and tables, with the flexibility for you to intervene as needed.
21
  - Traceable answers with reduced hallucinations: You can trust RAGFlow's responses as you can view the citations and references supporting them.
22
 
23
+ ---
24
 
25
+ ### Why does it take longer for RAGFlow to parse a document than LangChain?
26
 
27
+ We put painstaking effort into document pre-processing tasks like layout analysis, table structure recognition, and OCR (Optical Character Recognition) using our vision models. This contributes to the additional time required.
28
 
29
+ ---
30
 
31
+ ### Why does RAGFlow require more resources than other projects?
32
 
33
+ RAGFlow has a number of built-in models for document structure parsing, which account for the additional computational resources.
34
 
35
+ ---
36
 
37
+ ### Which architectures or devices does RAGFlow support?
38
 
39
+ We officially support x86 CPU and Nvidia GPU. While we also test RAGFlow on ARM64 platforms, we do not plan to maintain RAGFlow Docker images for ARM.
40
 
41
+ ---
42
 
43
+ ### Which embedding models can be deployed locally?
44
+
45
+ RAGFlow offers two Docker image editions, `dev-slim` and `dev`:
46
+
47
+ - `infiniflow/ragflow:dev-slim` (default): The RAGFlow Docker image without embedding models.
48
+ - `infiniflow/ragflow:dev`: The RAGFlow Docker image with embedding models including:
49
+ - Built-in embedding models:
50
+ - `BAAI/bge-large-zh-v1.5`
51
+ - `BAAI/bge-reranker-v2-m3`
52
+ - `maidalun1020/bce-embedding-base_v1`
53
+ - `maidalun1020/bce-reranker-base_v1`
54
+ - Embedding models that will be downloaded once you select them in the RAGFlow UI:
55
+ - `BAAI/bge-base-en-v1.5`
56
+ - `BAAI/bge-large-en-v1.5`
57
+ - `BAAI/bge-small-en-v1.5`
58
+ - `BAAI/bge-small-zh-v1.5`
59
+ - `jinaai/jina-embeddings-v2-base-en`
60
+ - `jinaai/jina-embeddings-v2-small-en`
61
+ - `nomic-ai/nomic-embed-text-v1.5`
62
+ - `sentence-transformers/all-MiniLM-L6-v2`
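All of these models map text to vectors that retrieval compares with a similarity measure, typically cosine similarity; a self-contained illustration (not RAGFlow code):

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

print(cosine_similarity([1.0, 0.0], [1.0, 0.0]))  # 1.0 (identical direction)
print(cosine_similarity([1.0, 0.0], [0.0, 1.0]))  # 0.0 (orthogonal)
```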
63
 
64
+ ---
65
 
66
+ ### Do you offer an API for integration with third-party applications?
67
 
68
  The corresponding APIs are now available. See the [RAGFlow HTTP API Reference](./http_api_reference.md) or the [RAGFlow Python API Reference](./python_api_reference.md) for more information.
69
 
70
+ ---
71
+
72
+ ### Do you support stream output?
73
 
74
+ Yes, we do.
75
+
76
+ ---
77
 
78
+ ### Is it possible to share dialogue through URL?
79
 
80
  No, this feature is not supported.
81
 
82
+ ---
83
+
84
+ ### Do you support multiple rounds of dialogues, i.e., referencing previous dialogues as context for the current dialogue?
85
 
86
  This feature and the related APIs are still in development. Contributions are welcome.
87
 
88
+ ---
89
 
90
  ## Troubleshooting
91
 
92
+ ---
93
 
94
+ ### Issues with Docker images
95
 
96
+ ---
97
 
98
+ #### How to build the RAGFlow image from scratch?
99
 
100
+ See [Build a RAGFlow Docker image](https://ragflow.io/docs/dev/build_docker_image).
 
 
 
101
 
102
+ ---
+
+ ### Issues with Hugging Face models

+ ---

+ #### Cannot access https://huggingface.co
+
+ A locally deployed RAGFlow downloads OCR and embedding modules from [Hugging Face](https://huggingface.co) by default. If your machine is unable to access this site, the following error occurs and PDF parsing fails:

  ```
  FileNotFoundError: [Errno 2] No such file or directory: '/root/.cache/huggingface/hub/models--InfiniFlow--deepdoc/snapshots/be0c1e50eef6047b412d1800aa89aba4d275f997/ocr.res'
  ```

+ To fix this issue, use https://hf-mirror.com instead:
+
+ 1. Stop all containers and remove all related resources:
+
+    ```bash
+    cd ragflow/docker/
+    docker compose down
+    ```
+
+ 2. Uncomment the following line in **ragflow/docker/.env**:
+
+    ```
+    # HF_ENDPOINT=https://hf-mirror.com
+    ```
+
+ 3. Start up the server:
+
+    ```bash
+    docker compose up -d
+    ```
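Step 2 can also be done non-interactively with `sed`. The sketch below exercises the substitution on a scratch copy so it is safe to try; for the real fix, point it at **ragflow/docker/.env** instead:

```shell
# Sketch: uncomment the HF_ENDPOINT line (step 2 above) mechanically.
# Runs on a scratch copy; substitute ragflow/docker/.env for the real fix.
printf '# HF_ENDPOINT=https://hf-mirror.com\n' > /tmp/env.sample
sed -i 's|^# *HF_ENDPOINT=|HF_ENDPOINT=|' /tmp/env.sample
cat /tmp/env.sample   # -> HF_ENDPOINT=https://hf-mirror.com
```

Note that `sed -i` without a backup suffix is GNU sed syntax, which matches the Linux hosts these Docker instructions target.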
+
+ ---
+
+ #### `MaxRetryError: HTTPSConnectionPool(host='hf-mirror.com', port=443)`
+
+ This error suggests that you do not have Internet access or are unable to connect to hf-mirror.com. Try the following:
+
+ 1. Manually download the resource files from [huggingface.co/InfiniFlow/deepdoc](https://huggingface.co/InfiniFlow/deepdoc) to your local folder **~/deepdoc**.
  2. Add a volume entry to **docker-compose.yml**, for example:

+    ```
+    - ~/deepdoc:/ragflow/rag/res/deepdoc
+    ```

+ ---
 
 
 
 
+ ### Issues with RAGFlow servers

+ ---
+ #### `WARNING: can't find /raglof/rag/res/borker.tm`

  Ignore this warning and continue. All system warnings can be ignored.

+ ---
+
+ #### `network anomaly There is an abnormality in your network and you cannot connect to the server.`

  ![anomaly](https://github.com/infiniflow/ragflow/assets/93570324/beb7ad10-92e4-4a58-8886-bfb7cbd09e5d)

  INFO:werkzeug:Press CTRL+C to quit
  ```

+ ---
+ ### Issues with RAGFlow backend services

+ ---

+ #### `Realtime synonym is disabled, since no redis connection`

  Ignore this warning and continue. All system warnings can be ignored.

  ![](https://github.com/infiniflow/ragflow/assets/93570324/ef5a6194-084a-4fe3-bdd5-1c025b40865c)

+ ---
+
+ #### Why does my document parsing stall at under one percent?

  ![stall](https://github.com/infiniflow/ragflow/assets/93570324/3589cc25-c733-47d5-bbfc-fedb74a3da50)

+ Click the red cross beside the 'parsing status' bar, then restart the parsing process to see if the issue remains. If the issue persists and your RAGFlow is deployed locally, try the following:

  1. Check the log of your RAGFlow server to see if it is running properly:
+
+    ```bash
+    docker logs -f ragflow-server
+    ```
+
  2. Check if the **task_executor.py** process exists.
  3. Check if your RAGFlow server can access hf-mirror.com or huggingface.com.
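For step 2, `grep` over `ps` output works; the `[t]` bracket trick keeps `grep` from matching its own process entry. The canned sample line below is illustrative — on a live deployment, pipe real `ps aux` output instead (for example via `docker exec ragflow-server ps aux`):

```shell
# Sketch for step 2 above: count task_executor.py processes in ps output.
# The [t] bracket keeps the grep process itself from matching the pattern.
sample='root  42  0.3  1.2  python task_executor.py'
echo "$sample" | grep -c '[t]ask_executor'   # -> 1 when the process is present
```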

+ ---
+
+ #### Why does my PDF parsing stall near completion, while the log does not show any error?

  Click the red cross beside the 'parsing status' bar, then restart the parsing process to see if the issue remains. If the issue persists and your RAGFlow is deployed locally, the parsing process is likely killed due to insufficient RAM. Try increasing your memory allocation by increasing the `MEM_LIMIT` value in **docker/.env**.

  :::note
  Ensure that you restart your RAGFlow server for your changes to take effect!
+
  ```bash
  docker compose stop
  ```
+
  ```bash
  docker compose up -d
  ```
+
  :::

  ![nearcompletion](https://github.com/infiniflow/ragflow/assets/93570324/563974c3-f8bb-4ec8-b241-adcda8929cbb)

+ ---
+
+ #### `Index failure`

  An index failure usually indicates an unavailable Elasticsearch service.

+ ---
+
+ #### How to check the log of RAGFlow?

  ```bash
+ tail -f ragflow/docker/ragflow-logs/*.log
  ```

+ ---
+
+ #### How to check the status of each component in RAGFlow?

  ```bash
  $ docker ps
  ```
+
+ *The system displays the following if all your RAGFlow components are running properly:*

  ```
  5bc45806b680 infiniflow/ragflow:latest "./entrypoint.sh" 11 hours ago Up 11 hours 0.0.0.0:80->80/tcp, :::80->80/tcp, 0.0.0.0:443->443/tcp, :::443->443/tcp, 0.0.0.0:9380->9380/tcp, :::9380->9380/tcp ragflow-server

  cd29bcb254bc quay.io/minio/minio:RELEASE.2023-12-20T01-00-02Z "/usr/bin/docker-ent…" 2 weeks ago Up 11 hours 0.0.0.0:9001->9001/tcp, :::9001->9001/tcp, 0.0.0.0:9000->9000/tcp, :::9000->9000/tcp ragflow-minio
  ```

+ ---
+
+ #### `Exception: Can't connect to ES cluster`

  1. Check the status of your Elasticsearch component:

+    ```bash
+    $ docker ps
+    ```
+
     *The status of a 'healthy' Elasticsearch component in your RAGFlow should look as follows:*
+
+    ```
+    91220e3285dd docker.elastic.co/elasticsearch/elasticsearch:8.11.3 "/bin/tini -- /usr/l…" 11 hours ago Up 11 hours (healthy) 9300/tcp, 0.0.0.0:9200->9200/tcp, :::9200->9200/tcp ragflow-es-01
+    ```

  2. If your container keeps restarting, ensure `vm.max_map_count` >= 262144 as per [this README](https://github.com/infiniflow/ragflow?tab=readme-ov-file#-start-up-the-server). Updating the `vm.max_map_count` value in **/etc/sysctl.conf** is required if you wish to keep your change permanent. This configuration works only for Linux.

  3. If your issue persists, ensure that the ES host setting is correct:

     - If you are running RAGFlow with Docker, it is in **docker/service_conf.yml**. Set it as follows:

       ```
       es:
         hosts: 'http://es01:9200'
       ```

+    - If you run RAGFlow outside of Docker, verify the ES host setting in **conf/service_conf.yml** using:

       ```bash
       curl http://<IP_OF_ES>:<PORT_OF_ES>
       ```
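The `vm.max_map_count` check and fix from step 2 can be sketched as follows; treat this as a system-configuration sketch for Linux hosts, not something to run blindly:

```shell
# Check the current limit; Elasticsearch needs at least 262144.
sysctl -n vm.max_map_count

# Raise it until the next reboot (Linux only, requires root):
sudo sysctl -w vm.max_map_count=262144

# To make the change permanent, also add this line to /etc/sysctl.conf:
#   vm.max_map_count=262144
```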

+ ---
+
+ #### Can't start ES container and get `Elasticsearch did not exit normally`

+ This is because you forgot to update the `vm.max_map_count` value in **/etc/sysctl.conf** and your change to this value was reset after a system reboot.
+
+ ---

+ #### `{"data":null,"code":100,"message":"<NotFound '404: Not Found'>"}`

  Your IP address or port number may be incorrect. If you are using the default configurations, enter `http://<IP_OF_YOUR_MACHINE>` (**NOT 9380, AND NO PORT NUMBER REQUIRED!**) in your browser. This should work.

+ ---
+
+ #### `Ollama - Mistral instance running at 127.0.0.1:11434 but cannot add Ollama as model in RagFlow`

  A correct Ollama IP address and port are crucial to adding models to Ollama:

+ - If you are on demo.ragflow.io, ensure that the server hosting Ollama has a publicly accessible IP address. Note that 127.0.0.1 is not a publicly accessible IP address.
  - If you deploy RAGFlow locally, ensure that Ollama and RAGFlow are in the same LAN and can communicate with each other.

+ See [Deploy a local LLM](../guides/deploy_local_llm.mdx) for more information.
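A quick reachability check can rule out network issues; `<OLLAMA_HOST>` is a placeholder for your Ollama server's LAN IP, 11434 is Ollama's default port, and `/api/tags` is Ollama's model-listing endpoint:

```shell
# Sketch: confirm the machine running RAGFlow can actually reach Ollama.
# <OLLAMA_HOST> is a placeholder; 11434 is Ollama's default port.
curl http://<OLLAMA_HOST>:11434/api/tags   # lists pulled models when reachable
```

Run this from the RAGFlow host (or from inside the RAGFlow container); if it times out, fix the network path before adjusting anything in the RAGFlow UI.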

+ ---
+ #### Do you offer examples of using deepdoc to parse PDF or other files?

+ Yes, we do. See the Python files under the **rag/app** folder.
+
+ ---
+
+ #### Why did I fail to upload a 128MB+ file to my locally deployed RAGFlow?
+
+ Ensure that you update the **MAX_CONTENT_LENGTH** environment variable:
+
+ 1. In **ragflow/docker/.env**, uncomment the `MAX_CONTENT_LENGTH` environment variable:
+
+    ```
+    MAX_CONTENT_LENGTH=128000000
+    ```
+
  2. Update **docker-compose.yml**:
+
+    ```
+    environment:
+      - MAX_CONTENT_LENGTH=${MAX_CONTENT_LENGTH}
+    ```
+
  3. Restart the RAGFlow server:
+
+    ```
+    docker compose up ragflow -d
+    ```
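`MAX_CONTENT_LENGTH` is a byte count, so compute the value from the limit you actually want; note that the 128000000 shown above (128 decimal MB) is slightly less than 128 MiB:

```shell
# MAX_CONTENT_LENGTH is specified in bytes.
echo $((128 * 1000 * 1000))   # -> 128000000 (128 MB, the value shown above)
echo $((128 * 1024 * 1024))   # -> 134217728 (128 MiB, slightly larger)
```

If your uploads are measured in MiB, use the second figure (or larger) so files just over the boundary are not rejected.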

+ ---
+
+ #### `FileNotFoundError: [Errno 2] No such file or directory`
+
+ 1. Check if the status of your MinIO container is healthy:
+
     ```bash
     docker ps
     ```
+
  2. Ensure that the username and password settings of MySQL and MinIO in **docker/.env** are in line with those in **docker/service_conf.yml**.
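For step 2, you can pull a value out of the `.env` file mechanically and compare it by eye against **docker/service_conf.yml**. The sketch below runs on an inline sample; the variable names are illustrative — use whatever names your **docker/.env** actually defines:

```shell
# Sketch for step 2 above: extract one credential from .env-style text.
# Runs on an inline sample; point awk at ragflow/docker/.env for real use.
sample='MINIO_USER=rag_flow
MINIO_PASSWORD=infini_rag_flow'
printf '%s\n' "$sample" | awk -F= '$1 == "MINIO_USER" { print $2 }'   # -> rag_flow
```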

+ ---
  ## Usage

+ ---
+
+ ### How to increase the length of RAGFlow responses?

  1. Right click the desired dialog to display the **Chat Configuration** window.
  2. Switch to the **Model Setting** tab and adjust the **Max Tokens** slider to get the desired length.
  3. Click **OK** to confirm your change.

+ ---
+ ### How to run RAGFlow with a locally deployed LLM?

+ You can use Ollama or Xinference to deploy a local LLM. See [Deploy a local LLM](../guides/deploy_local_llm.mdx) for more information.

+ ---

+ ### How to interconnect RAGFlow with Ollama?

+ - If RAGFlow is locally deployed, ensure that your RAGFlow and Ollama are in the same LAN.
  - If you are using our online demo, ensure that the IP address of your Ollama server is public and accessible.

+ See [Deploy a local LLM](../guides/deploy_local_llm.mdx) for more information.

+ ---
 
 
+ ### `Error: Range of input length should be [1, 30000]`

+ This error occurs because too many chunks match your search criteria. Try reducing **TopN** and increasing the **Similarity threshold** to fix this issue:

+ 1. Click **Chat** in the middle top of the page.
  2. Right click the desired conversation > **Edit** > **Prompt Engine**.
  3. Reduce the **TopN** and/or raise the **Similarity threshold**.
  4. Click **OK** to confirm your changes.

  ![topn](https://github.com/infiniflow/ragflow/assets/93570324/7ec72ab3-0dd2-4cff-af44-e2663b67b2fc)

+ ---
+ ### How to get an API key for integration with third-party applications?
+
+ See [Acquire a RAGFlow API key](../guides/develop/acquire_ragflow_api_key.md).
+
+ ---
+
+ ### How to upgrade RAGFlow?

  See [Upgrade RAGFlow](../guides/upgrade_ragflow.mdx) for more information.
+
+ ---
docs/references/http_api_reference.md CHANGED

@@ -2028,8 +2028,8 @@ curl --request POST \
    The question to start an AI-powered conversation.
  - `"stream"`: (*Body Parameter*), `boolean`
    Indicates whether to output responses in a streaming way:
- - `true`: Enable streaming.
- - `false`: Disable streaming (default).
+ - `true`: Enable streaming (default).
+ - `false`: Disable streaming.
  - `"session_id"`: (*Body Parameter*)
    The ID of session. If it is not provided, a new session will be generated.

@@ -2239,8 +2239,8 @@ curl --request POST \
    The question to start an AI-powered conversation.
  - `"stream"`: (*Body Parameter*), `boolean`
    Indicates whether to output responses in a streaming way:
- - `true`: Enable streaming.
- - `false`: Disable streaming (default).
+ - `true`: Enable streaming (default).
+ - `false`: Disable streaming.
  - `"session_id"`: (*Body Parameter*)
    The ID of the session. If it is not provided, a new session will be generated.

@@ -2364,4 +2364,4 @@ Failure:
  }
  ```

- ---
+ ---
docs/references/python_api_reference.md CHANGED

@@ -1332,8 +1332,8 @@ The question to start an AI-powered conversation.

  Indicates whether to output responses in a streaming way:

- - `True`: Enable streaming.
- - `False`: Disable streaming (default).
+ - `True`: Enable streaming (default).
+ - `False`: Disable streaming.

  ### Returns

@@ -1450,8 +1450,8 @@ The question to start an AI-powered conversation.

  Indicates whether to output responses in a streaming way:

- - `True`: Enable streaming.
- - `False`: Disable streaming (default).
+ - `True`: Enable streaming (default).
+ - `False`: Disable streaming.

  ### Returns

@@ -1513,4 +1513,4 @@ while True:
      for ans in session.ask(question, stream=True):
          print(ans.content[len(cont):], end='', flush=True)
          cont = ans.content
- ```
+ ```