黄腾 aopstudio committed on
Commit b403f7a · 1 Parent(s): 63f5b44

update quickstart and llm_api_key_setup document (#1615)


### What problem does this PR solve?

update quickstart and llm_api_key_setup document

### Type of change

- [x] Documentation Update

---------

Co-authored-by: Zhedong Cen <[email protected]>

docs/guides/llm_api_key_setup.md CHANGED
@@ -11,22 +11,25 @@ An API key is required for RAGFlow to interact with an online AI model. This gui
 
 For now, RAGFlow supports the following online LLMs. Click the corresponding link to apply for your API key. Most LLM providers grant newly-created accounts trial credit, which will expire in a couple of months, or a promotional amount of free quota.
 
- - [OpenAI](https://platform.openai.com/login?launch),
- - Azure-OpenAI,
- - Gemini,
- - Groq,
- - Mistral,
- - Bedrock,
- - [Tongyi-Qianwen](https://dashscope.console.aliyun.com/model),
- - [ZHIPU-AI](https://open.bigmodel.cn/),
- - MiniMax
- - [Moonshot](https://platform.moonshot.cn/docs),
- - [DeepSeek](https://platform.deepseek.com/api-docs/),
- - [Baichuan](https://www.baichuan-ai.com/home),
- - [VolcEngine](https://www.volcengine.com/docs/82379).
+ - [OpenAI](https://platform.openai.com/login?launch)
+ - [Azure-OpenAI](https://ai.azure.com/)
+ - [Gemini](https://aistudio.google.com/)
+ - [Groq](https://console.groq.com/)
+ - [Mistral](https://mistral.ai/)
+ - [Bedrock](https://aws.amazon.com/cn/bedrock/)
+ - [Tongyi-Qianwen](https://dashscope.console.aliyun.com/model)
+ - [ZHIPU-AI](https://open.bigmodel.cn/)
+ - [MiniMax](https://platform.minimaxi.com/)
+ - [Moonshot](https://platform.moonshot.cn/docs)
+ - [DeepSeek](https://platform.deepseek.com/api-docs/)
+ - [Baichuan](https://www.baichuan-ai.com/home)
+ - [VolcEngine](https://www.volcengine.com/docs/82379)
+ - [Jina](https://jina.ai/reader/)
+ - [OpenRouter](https://openrouter.ai/)
+ - [StepFun](https://platform.stepfun.com/)
 
 :::note
- If you find your online LLM is not on the list, don't feel disheartened. The list is expanding, and you can [file a feature request](https://github.com/infiniflow/ragflow/issues/new?assignees=&labels=feature+request&projects=&template=feature_request.yml&title=%5BFeature+Request%5D%3A+) with us! Alternatively, if you have customized or locally-deployed models, you can [bind them to RAGFlow using Ollama or Xinference](./deploy_local_llm.md).
+ If you find your online LLM is not on the list, don't feel disheartened. The list is expanding, and you can [file a feature request](https://github.com/infiniflow/ragflow/issues/new?assignees=&labels=feature+request&projects=&template=feature_request.yml&title=%5BFeature+Request%5D%3A+) with us! Alternatively, if you have customized or locally-deployed models, you can [bind them to RAGFlow using Ollama, Xinference, or LocalAI](./deploy_local_llm.md).
 :::
 
 ## Configure your API key
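Before pasting a freshly issued key into RAGFlow's model-provider settings, it can help to confirm the key is actually present in your environment and shaped into the header most of the providers above expect. A minimal sketch, assuming the key is exported in an environment variable; the name `OPENAI_API_KEY` follows OpenAI's convention and is illustrative only, since RAGFlow itself takes the key through its web UI:

```python
import os


def bearer_header(env_var: str = "OPENAI_API_KEY") -> dict:
    """Build the Authorization header that OpenAI-style HTTP APIs expect.

    Raises if the environment variable is unset, which is the most common
    reason a provider rejects a request with 401.
    """
    key = os.environ.get(env_var)
    if not key:
        raise RuntimeError(f"{env_var} is not set")
    return {"Authorization": f"Bearer {key}"}
```

Each provider documents its own variable name and key format, so treat this only as a quick sanity check, not as part of RAGFlow's configuration flow.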
docs/quickstart.mdx CHANGED
@@ -176,22 +176,25 @@ With the default settings, you only need to enter `http://IP_OF_YOUR_MACHINE` (*
 
 RAGFlow is a RAG engine, and it needs to work with an LLM to offer grounded, hallucination-free question-answering capabilities. For now, RAGFlow supports the following LLMs, and the list is expanding:
 
- - OpenAI
- - Azure-OpenAI
- - Gemini
- - Groq
- - Mistral
- - Bedrock
- - Tongyi-Qianwen
- - ZHIPU-AI
- - MiniMax
- - Moonshot
- - DeepSeek-V2
- - Baichuan
- - VolcEngine
+ - [OpenAI](https://platform.openai.com/login?launch)
+ - [Azure-OpenAI](https://ai.azure.com/)
+ - [Gemini](https://aistudio.google.com/)
+ - [Groq](https://console.groq.com/)
+ - [Mistral](https://mistral.ai/)
+ - [Bedrock](https://aws.amazon.com/cn/bedrock/)
+ - [Tongyi-Qianwen](https://dashscope.console.aliyun.com/model)
+ - [ZHIPU-AI](https://open.bigmodel.cn/)
+ - [MiniMax](https://platform.minimaxi.com/)
+ - [Moonshot](https://platform.moonshot.cn/docs)
+ - [DeepSeek](https://platform.deepseek.com/api-docs/)
+ - [Baichuan](https://www.baichuan-ai.com/home)
+ - [VolcEngine](https://www.volcengine.com/docs/82379)
+ - [Jina](https://jina.ai/reader/)
+ - [OpenRouter](https://openrouter.ai/)
+ - [StepFun](https://platform.stepfun.com/)
 
 :::note
- RAGFlow also supports deploying LLMs locally using Ollama or Xinference, but this part is not covered in this quick start guide.
+ RAGFlow also supports deploying LLMs locally using Ollama, Xinference, or LocalAI, but this part is not covered in this quick start guide.
 :::
 
 To add and configure an LLM:
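Several of the providers listed in the diff above (DeepSeek and Moonshot, for example) expose OpenAI-compatible HTTP endpoints, which is why one API-key pattern covers so many of them. A minimal sketch of assembling such a request; the base URL and model name in the usage note are illustrative placeholders taken from DeepSeek's public docs, not values RAGFlow requires:

```python
import json


def chat_request(base_url: str, model: str, prompt: str) -> tuple[str, str]:
    """Assemble the URL and JSON body for an OpenAI-compatible
    POST /chat/completions call; send them with any HTTP client,
    together with the provider's Bearer-token Authorization header."""
    url = f"{base_url.rstrip('/')}/chat/completions"
    body = {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }
    return url, json.dumps(body)
```

For example, `chat_request("https://api.deepseek.com/v1", "deepseek-chat", "Hello")` yields the endpoint `https://api.deepseek.com/v1/chat/completions` plus a one-message body; each provider's own documentation remains the authority on its base URL and model identifiers.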