writinwaters
committed · Commit fb9c758 · 1 parent: 2332381
Added some debugging FAQs (#413)
### What problem does this PR solve?
### Type of change
- [x] Documentation Update
- README.md +2 -2
- docs/faq.md +42 -1
README.md
CHANGED
@@ -56,7 +56,7 @@
 ## 📌 Latest Features
 
 - 2024-04-16 Add an embedding model 'bce-embedding-base_v1' from [BCEmbedding](https://github.com/netease-youdao/BCEmbedding).
-- 2024-04-16 Add [FastEmbed](https://github.com/qdrant/fastembed) is designed for light and
+- 2024-04-16 Add [FastEmbed](https://github.com/qdrant/fastembed), which is designed specifically for light and speedy embedding.
 - 2024-04-11 Support [Xinference](./docs/xinference.md) for local LLM deployment.
 - 2024-04-10 Add a new layout recognition model for analyzing Laws documentation.
 - 2024-04-08 Support [Ollama](./docs/ollama.md) for local LLM deployment.
@@ -139,7 +139,7 @@
 ```
 
 5. In your web browser, enter the IP address of your server and log in to RAGFlow.
-> In the given scenario, you only need to enter `http://IP_OF_YOUR_MACHINE` (sans port number) as the default HTTP serving port `80` can be omitted when using the default configurations.
+> In the given scenario, you only need to enter `http://IP_OF_YOUR_MACHINE` (**sans** port number), as the default HTTP serving port `80` can be omitted when using the default configurations.
 6. In [service_conf.yaml](./docker/service_conf.yaml), select the desired LLM factory in `user_default_llm` and update the `API_KEY` field with the corresponding API key.
 
 > See [./docs/llm_api_key_setup.md](./docs/llm_api_key_setup.md) for more information.
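Step 6 above refers to the `user_default_llm` block. As a minimal, hypothetical sketch (the key names and the factory value are assumptions for illustration, so verify them against your copy of **docker/service_conf.yaml** and [./docs/llm_api_key_setup.md](./docs/llm_api_key_setup.md)):

```yaml
# Hypothetical sketch of the user_default_llm block in docker/service_conf.yaml.
# Key names and the factory value are assumptions; check your copy of the file.
user_default_llm:
  factory: 'Tongyi-Qianwen'   # assumed: which LLM factory/provider to use by default
  api_key: 'your-api-key'     # the API_KEY field to fill in with your provider's key
```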
docs/faq.md
CHANGED
@@ -96,6 +96,8 @@ Parsing requests have to wait in queue due to limited server resources. We are c
 
 ### Why does my document parsing stall at under one percent?
 
+
+
 If your RAGFlow is deployed *locally*, try the following:
 
 1. Check the log of your RAGFlow server to see if it is running properly:
@@ -105,6 +107,16 @@ docker logs -f ragflow-server
 2. Check if the **task_executor.py** process exists.
 3. Check if your RAGFlow server can access hf-mirror.com or huggingface.com.
 
+### `MaxRetryError: HTTPSConnectionPool(host='hf-mirror.com', port=443)`
+
+This error suggests that you do not have Internet access or are unable to connect to hf-mirror.com. Try the following:
+
+1. Manually download the resource files from [huggingface.co/InfiniFlow/deepdoc](https://huggingface.co/InfiniFlow/deepdoc) to your local folder **~/deepdoc**.
+2. Add a volume to **docker-compose.yml**, for example:
+```
+- ~/deepdoc:/ragflow/rag/res/deepdoc
+```
+
 ### `Index failure`
 
 An index failure usually indicates an unavailable Elasticsearch service.
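For the volume line added in the `MaxRetryError` entry above, a hedged sketch of where it could sit in **docker-compose.yml** (the `ragflow` service name and surrounding structure are assumptions, not copied from the repository):

```yaml
# Hypothetical excerpt of docker-compose.yml; only the deepdoc mount itself
# comes from the FAQ entry above.
services:
  ragflow:
    volumes:
      - ~/deepdoc:/ragflow/rag/res/deepdoc   # serve the manually downloaded model files
```

The container then reads the model files from the mounted folder instead of fetching them from hf-mirror.com; recreating the container (for example with `docker compose up -d`) is needed for a new mount to take effect.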
@@ -165,7 +177,7 @@ Your IP address or port number may be incorrect. If you are using the default co
 
 A correct Ollama IP address and port is crucial to adding models to Ollama:
 
-- If you are on demo.ragflow.io, ensure that the server hosting Ollama has a publicly accessible IP address. 127.0.0.1 is not
+- If you are on demo.ragflow.io, ensure that the server hosting Ollama has a publicly accessible IP address. Note that 127.0.0.1 is not a publicly accessible IP address.
 - If you deploy RAGFlow locally, ensure that Ollama and RAGFlow are in the same LAN and can communicate with each other.
 
 ### Do you offer examples of using deepdoc to parse PDF or other files?
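To make those two bullets concrete (the address is a placeholder, and 11434 is Ollama's usual default port): a LAN deployment could register Ollama at `http://192.168.1.42:11434`, whereas `http://127.0.0.1:11434` resolves to whichever machine or container issues the request, so it can never reach an Ollama server running elsewhere.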
@@ -191,3 +203,32 @@ docker compose up ragflow -d
 ```
 *Now you should be able to upload files of sizes less than 100MB.*
 
+### `Table 'rag_flow.document' doesn't exist`
+
+This exception occurs when starting up the RAGFlow server. Try the following:
+
+1. Prolong the sleep time: go to **docker/entrypoint.sh**, locate line 26, and replace `sleep 60` with `sleep 280`.
+2. Go to **docker/docker-compose.yml** and add the following after line 109:
+```
+./entrypoint.sh:/ragflow/entrypoint.sh
+```
+3. Change directory:
+```bash
+cd docker
+```
+4. Stop the RAGFlow server:
+```bash
+docker compose stop
+```
+5. Restart the RAGFlow server:
+```bash
+docker compose up
+```
+
+### `hint : 102 Fail to access model Connection error`
+
+
+
+1. Ensure that the RAGFlow server can access the base URL.
+2. Do not forget to append **/v1/** to **http://IP:port**:
+   **http://IP:port/v1/**
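For step 2 of the `Table 'rag_flow.document' doesn't exist` entry, a hedged sketch of how the mapping could look inside the service's `volumes:` list (the service name and list position are assumptions; only the entrypoint.sh mapping comes from the FAQ):

```yaml
# Hypothetical excerpt of docker/docker-compose.yml; only the entrypoint.sh
# mapping is taken from the FAQ, the rest is assumed structure.
services:
  ragflow:
    volumes:
      - ./entrypoint.sh:/ragflow/entrypoint.sh   # mount the edited script over the image's copy
```

The longer `sleep` in step 1 presumably just gives the database container more time to finish initializing before the server first queries the `rag_flow.document` table.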