---
sidebar_position: 10
slug: /faq
---
# Frequently asked questions
Answers to common questions about general features, troubleshooting, usage, and more.
---
## General features
---
### What sets RAGFlow apart from other RAG products?
Despite the significant advances that LLMs have brought to natural language processing (NLP), the "garbage in, garbage out" status quo remains unchanged. In response, RAGFlow introduces two features that set it apart from other Retrieval-Augmented Generation (RAG) products.
- Fine-grained document parsing: Document parsing involves images and tables, with the flexibility for you to intervene as needed.
- Traceable answers with reduced hallucinations: You can trust RAGFlow's responses as you can view the citations and references supporting them.
---
### Why does it take longer for RAGFlow to parse a document than LangChain?
We put painstaking effort into document pre-processing tasks like layout analysis, table structure recognition, and OCR (Optical Character Recognition) using our vision models. This contributes to the additional time required.
---
### Why does RAGFlow require more resources than other projects?
RAGFlow has a number of built-in models for document structure parsing, which account for the additional computational resources.
---
### Which architectures or devices does RAGFlow support?
We officially support x86 CPUs and NVIDIA GPUs. While we also test RAGFlow on ARM64 platforms, we do not plan to maintain RAGFlow Docker images for ARM.
---
### Which embedding models can be deployed locally?
RAGFlow offers two Docker image editions, `v0.15.1-slim` and `v0.15.1`:
- `infiniflow/ragflow:v0.15.1-slim` (default): The RAGFlow Docker image without embedding models.
- `infiniflow/ragflow:v0.15.1`: The RAGFlow Docker image with embedding models including:
- Built-in embedding models:
- `BAAI/bge-large-zh-v1.5`
- `BAAI/bge-reranker-v2-m3`
- `maidalun1020/bce-embedding-base_v1`
- `maidalun1020/bce-reranker-base_v1`
- Embedding models that will be downloaded once you select them in the RAGFlow UI:
- `BAAI/bge-base-en-v1.5`
- `BAAI/bge-large-en-v1.5`
- `BAAI/bge-small-en-v1.5`
- `BAAI/bge-small-zh-v1.5`
- `jinaai/jina-embeddings-v2-base-en`
- `jinaai/jina-embeddings-v2-small-en`
- `nomic-ai/nomic-embed-text-v1.5`
- `sentence-transformers/all-MiniLM-L6-v2`
---
### Do you offer an API for integration with third-party applications?
The corresponding APIs are now available. See the [RAGFlow HTTP API Reference](./http_api_reference.md) or the [RAGFlow Python API Reference](./python_api_reference.md) for more information.
---
### Do you support stream output?
Yes, we do.
---
### Is it possible to share dialogue through URL?
No, this feature is not supported.
---
### Do you support multiple rounds of dialogues, referencing previous dialogues as context for the current query?
Yes, we support enhancing user queries based on existing context of an ongoing conversation:
1. On the **Chat** page, hover over the desired assistant and select **Edit**.
2. In the **Chat Configuration** popup, click the **Prompt Engine** tab.
3. Toggle on **Multi-turn optimization** to enable this feature.
---
## Troubleshooting
---
### Issues with Docker images
---
#### How to build the RAGFlow image from scratch?
See [Build a RAGFlow Docker image](https://ragflow.io/docs/dev/build_docker_image).
---
### Issues with huggingface models
---
#### Cannot access https://huggingface.co
A locally deployed RAGFlow downloads its OCR and embedding models from the [Hugging Face website](https://huggingface.co) by default. If your machine cannot access this site, the following error occurs and PDF parsing fails:
```
FileNotFoundError: [Errno 2] No such file or directory: '/root/.cache/huggingface/hub/models--InfiniFlow--deepdoc/snapshots/be0c1e50eef6047b412d1800aa89aba4d275f997/ocr.res'
```
To fix this issue, use https://hf-mirror.com instead:
1. Stop all containers and remove all related resources:
```bash
cd ragflow/docker/
docker compose down
```
2. Uncomment the following line in **ragflow/docker/.env**:
```
# HF_ENDPOINT=https://hf-mirror.com
```
3. Start up the server:
```bash
docker compose up -d
```
---
#### `MaxRetryError: HTTPSConnectionPool(host='hf-mirror.com', port=443)`
This error suggests that you do not have Internet access or are unable to connect to hf-mirror.com. Try the following:
1. Manually download the resource files from [huggingface.co/InfiniFlow/deepdoc](https://huggingface.co/InfiniFlow/deepdoc) to your local folder **~/deepdoc**.
2. Add a volume mount to **docker-compose.yml**, for example:
```
- ~/deepdoc:/ragflow/rag/res/deepdoc
```
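In **docker-compose.yml**, the mount goes under the `ragflow` service's `volumes` key. The fragment below is a sketch; the surrounding keys are elided and the service name is assumed from the default setup:

```yaml
services:
  ragflow:
    # ... existing settings for the ragflow service ...
    volumes:
      - ~/deepdoc:/ragflow/rag/res/deepdoc
```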
---
### Issues with RAGFlow servers
---
#### `WARNING: can't find /raglof/rag/res/borker.tm`
Ignore this warning and continue. All system warnings can be ignored.
---
#### `network anomaly There is an abnormality in your network and you cannot connect to the server.`

You will not be able to log in to RAGFlow until the server is fully initialized. Run `docker logs -f ragflow-server` to check on its progress.
*The server is successfully initialized if your system displays the following:*
```
____ ___ ______ ______ __
/ __ \ / | / ____// ____// /____ _ __
/ /_/ // /| | / / __ / /_ / // __ \| | /| / /
/ _, _// ___ |/ /_/ // __/ / // /_/ /| |/ |/ /
/_/ |_|/_/ |_|\____//_/ /_/ \____/ |__/|__/
* Running on all addresses (0.0.0.0)
* Running on http://127.0.0.1:9380
* Running on http://x.x.x.x:9380
INFO:werkzeug:Press CTRL+C to quit
```
---
### Issues with RAGFlow backend services
---
#### `Realtime synonym is disabled, since no redis connection`
Ignore this warning and continue. All system warnings can be ignored.

---
#### Why does my document parsing stall at under one percent?

Click the red cross beside the 'parsing status' bar, then restart the parsing process to see if the issue remains. If the issue persists and your RAGFlow is deployed locally, try the following:
1. Check the log of your RAGFlow server to see if it is running properly:
```bash
docker logs -f ragflow-server
```
2. Check if the **task_executor.py** process exists.
3. Check if your RAGFlow server can access hf-mirror.com or huggingface.com.
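Steps 2 and 3 can be checked from the host with commands along these lines; the container name `ragflow-server` and the availability of `ps` and `curl` inside the container are assumptions based on the default Docker deployment:

```bash
# Hypothetical diagnostics, assuming the default container name:
docker logs --tail 100 ragflow-server                  # 1. recent server log
docker exec ragflow-server bash -c \
  'ps aux | grep -v grep | grep task_executor'         # 2. is task_executor running?
docker exec ragflow-server \
  curl -sI --max-time 5 https://hf-mirror.com          # 3. is the mirror reachable?
```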
---
#### Why does my PDF parsing stall near completion, while the log does not show any errors?
Click the red cross beside the 'parsing status' bar, then restart the parsing process to see if the issue remains. If the issue persists and your RAGFlow is deployed locally, the parsing process was likely killed due to insufficient RAM. Try raising the `MEM_LIMIT` value in **docker/.env** to increase the memory allocation.
:::note
Ensure that you restart your RAGFlow server for your changes to take effect!
```bash
docker compose stop
```
```bash
docker compose up -d
```
:::

---
#### `Index failure`
An index failure usually indicates an unavailable Elasticsearch service.
---
#### How to check the log of RAGFlow?
```bash
tail -f ragflow/docker/ragflow-logs/*.log
```
---
#### How to check the status of each component in RAGFlow?
1. Check the status of the RAGFlow Docker containers:
```bash
$ docker ps
```
*The following is an example result:*
```bash
5bc45806b680 infiniflow/ragflow:latest "./entrypoint.sh" 11 hours ago Up 11 hours 0.0.0.0:80->80/tcp, :::80->80/tcp, 0.0.0.0:443->443/tcp, :::443->443/tcp, 0.0.0.0:9380->9380/tcp, :::9380->9380/tcp ragflow-server
91220e3285dd docker.elastic.co/elasticsearch/elasticsearch:8.11.3 "/bin/tini -- /usr/l…" 11 hours ago Up 11 hours (healthy) 9300/tcp, 0.0.0.0:9200->9200/tcp, :::9200->9200/tcp ragflow-es-01
d8c86f06c56b mysql:5.7.18 "docker-entrypoint.s…" 7 days ago Up 16 seconds (healthy) 0.0.0.0:3306->3306/tcp, :::3306->3306/tcp ragflow-mysql
cd29bcb254bc quay.io/minio/minio:RELEASE.2023-12-20T01-00-02Z "/usr/bin/docker-ent…" 2 weeks ago Up 11 hours 0.0.0.0:9001->9001/tcp, :::9001->9001/tcp, 0.0.0.0:9000->9000/tcp, :::9000->9000/tcp ragflow-minio
```
2. Follow [this document](../guides/run_health_check.md) to check the health status of the Elasticsearch service.
:::danger IMPORTANT
The status of a Docker container does not necessarily reflect the status of the service it hosts. You may find that your services are unhealthy even when the corresponding Docker containers are up and running. Possible reasons for this include network failures, incorrect port numbers, or DNS issues.
:::
---
#### `Exception: Can't connect to ES cluster`
1. Check the status of the Elasticsearch Docker container:
```bash
$ docker ps
```
*The status of a healthy Elasticsearch component should look as follows:*
```
91220e3285dd docker.elastic.co/elasticsearch/elasticsearch:8.11.3 "/bin/tini -- /usr/l…" 11 hours ago Up 11 hours (healthy) 9300/tcp, 0.0.0.0:9200->9200/tcp, :::9200->9200/tcp ragflow-es-01
```
2. Follow [this document](../guides/run_health_check.md) to check the health status of the Elasticsearch service.
:::danger IMPORTANT
The status of a Docker container does not necessarily reflect the status of the service it hosts. You may find that your services are unhealthy even when the corresponding Docker containers are up and running. Possible reasons for this include network failures, incorrect port numbers, or DNS issues.
:::
3. If your container keeps restarting, ensure `vm.max_map_count` >= 262144 as per [this README](https://github.com/infiniflow/ragflow?tab=readme-ov-file#-start-up-the-server). Updating the `vm.max_map_count` value in **/etc/sysctl.conf** is required, if you wish to keep your change permanent. Note that this configuration works only for Linux.
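On Linux hosts, the check-and-persist sequence might look like this sketch (requires root):

```bash
# Check the current value (Linux only):
sysctl vm.max_map_count
# Raise it immediately for the running kernel:
sudo sysctl -w vm.max_map_count=262144
# Persist the change so it survives a reboot:
echo 'vm.max_map_count=262144' | sudo tee -a /etc/sysctl.conf
```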
---
#### Can't start ES container and get `Elasticsearch did not exit normally`
This is because you forgot to update the `vm.max_map_count` value in **/etc/sysctl.conf** and your change to this value was reset after a system reboot.
---
#### `{"data":null,"code":100,"message":"<NotFound '404: Not Found'>"}`
Your IP address or port number may be incorrect. If you are using the default configurations, enter `http://<IP_OF_YOUR_MACHINE>` in your browser (**not** port 9380; no port number is required). This should work.
---
#### `Ollama - Mistral instance running at 127.0.0.1:11434 but cannot add Ollama as model in RagFlow`
A correct Ollama IP address and port are crucial when adding an Ollama model:
- If you are on demo.ragflow.io, ensure that the server hosting Ollama has a publicly accessible IP address. Note that 127.0.0.1 is not a publicly accessible IP address.
- If you deploy RAGFlow locally, ensure that Ollama and RAGFlow are on the same LAN and can communicate with each other.
See [Deploy a local LLM](../guides/deploy_local_llm.mdx) for more information.
---
#### Do you offer examples of using deepdoc to parse PDF or other files?
Yes, we do. See the Python files under the **rag/app** folder.
---
#### Why did I fail to upload a 128MB+ file to my locally deployed RAGFlow?
Ensure that you update the **MAX_CONTENT_LENGTH** environment variable:
1. In **ragflow/docker/.env**, uncomment environment variable `MAX_CONTENT_LENGTH`:
```
MAX_CONTENT_LENGTH=176160768 # 168MB
```
2. Update **ragflow/docker/nginx/nginx.conf**:
```
client_max_body_size 168M;
```
3. Restart the RAGFlow server:
```
docker compose up ragflow -d
```
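The `MAX_CONTENT_LENGTH` value is in bytes; `176160768` is exactly 168 MB (168 × 1024 × 1024), which you can verify in the shell:

```bash
# 168 MB in bytes, matching the MAX_CONTENT_LENGTH value above:
echo $((168 * 1024 * 1024))   # → 176160768
```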
---
#### `FileNotFoundError: [Errno 2] No such file or directory`
1. Check the status of the MinIO Docker container:
```bash
$ docker ps
```
*The status of a healthy MinIO component should look as follows:*
```bash
cd29bcb254bc quay.io/minio/minio:RELEASE.2023-12-20T01-00-02Z "/usr/bin/docker-ent…" 2 weeks ago Up 11 hours 0.0.0.0:9001->9001/tcp, :::9001->9001/tcp, 0.0.0.0:9000->9000/tcp, :::9000->9000/tcp ragflow-minio
```
2. Follow [this document](../guides/run_health_check.md) to check the health status of the MinIO service.
:::danger IMPORTANT
The status of a Docker container does not necessarily reflect the status of the service it hosts. You may find that your services are unhealthy even when the corresponding Docker containers are up and running. Possible reasons for this include network failures, incorrect port numbers, or DNS issues.
:::
---
## Usage
---
### How to increase the length of RAGFlow responses?
1. Right click the desired dialog to display the **Chat Configuration** window.
2. Switch to the **Model Setting** tab and adjust the **Max Tokens** slider to get the desired length.
3. Click **OK** to confirm your change.
---
### How to run RAGFlow with a locally deployed LLM?
You can use Ollama or Xinference to deploy local LLM. See [here](../guides/deploy_local_llm.mdx) for more information.
---
### Is it possible to add an LLM that is not supported?
If your model is not currently supported but offers APIs compatible with OpenAI's, click **OpenAI-API-Compatible** on the **Model providers** page to configure it.

---
### How to interconnect RAGFlow with Ollama?
- If RAGFlow is locally deployed, ensure that your RAGFlow and Ollama are in the same LAN.
- If you are using our online demo, ensure that the IP address of your Ollama server is public and accessible.
See [here](../guides/deploy_local_llm.mdx) for more information.
---
### `Error: Range of input length should be [1, 30000]`
This error occurs because there are too many chunks matching your search criteria. Try reducing the **TopN** and increasing **Similarity threshold** to fix this issue:
1. Click **Chat** in the middle top of the page.
2. Right click the desired conversation > **Edit** > **Prompt Engine**.
3. Reduce the **TopN** and/or raise the **Similarity threshold**.
4. Click **OK** to confirm your changes.

---
### How to get an API key for integration with third-party applications?
See [Acquire a RAGFlow API key](../guides/develop/acquire_ragflow_api_key.md).
---
### How to upgrade RAGFlow?
See [Upgrade RAGFlow](../guides/upgrade_ragflow.mdx) for more information.
---