writinwaters committed on
Commit 5461e28 · 1 Parent(s): 0b31353

Updated UI (#4011)


### What problem does this PR solve?



### Type of change


- [x] Documentation Update

docs/guides/manage_team_members.md CHANGED
@@ -7,6 +7,8 @@ slug: /manage_team_members
 
 Invite or remove team members, join or leave a team.
 
+---
+
 By default, each RAGFlow user is assigned a single team named after their name. RAGFlow allows you to invite RAGFlow users to your team. Your team members can help you:
 
 - Upload documents to your datasets.
docs/references/http_api_reference.md CHANGED
@@ -1380,7 +1380,7 @@ curl --request POST \
 - `"frequency penalty"`: `float`
   Similar to the presence penalty, this reduces the model’s tendency to repeat the same words frequently. Defaults to `0.7`.
 - `"max_token"`: `integer`
-  The maximum length of the models output, measured in the number of tokens (words or pieces of words). Defaults to `512`.
+  The maximum length of the model's output, measured in the number of tokens (words or pieces of words). If disabled, you lift the maximum token limit, allowing the model to determine the number of tokens in its responses. Defaults to `512`.
 - `"prompt"`: (*Body parameter*), `object`
   Instructions for the LLM to follow. If it is not explicitly set, a JSON object with the following values will be generated as the default. A `prompt` JSON object contains the following attributes:
 - `"similarity_threshold"`: `float` RAGFlow employs either a combination of weighted keyword similarity and weighted vector cosine similarity, or a combination of weighted keyword similarity and weighted reranking score during retrieval. This argument sets the threshold for similarities between the user query and chunks. If a similarity score falls below this threshold, the corresponding chunk will be excluded from the results. The default value is `0.2`.
@@ -1515,7 +1515,7 @@ curl --request PUT \
 - `"frequency penalty"`: `float`
   Similar to the presence penalty, this reduces the model’s tendency to repeat the same words frequently. Defaults to `0.7`.
 - `"max_token"`: `integer`
-  The maximum length of the models output, measured in the number of tokens (words or pieces of words). Defaults to `512`.
+  The maximum length of the model's output, measured in the number of tokens (words or pieces of words). If disabled, you lift the maximum token limit, allowing the model to determine the number of tokens in its responses. Defaults to `512`.
 - `"prompt"`: (*Body parameter*), `object`
   Instructions for the LLM to follow. A `prompt` object contains the following attributes:
 - `"similarity_threshold"`: `float` RAGFlow employs either a combination of weighted keyword similarity and weighted vector cosine similarity, or a combination of weighted keyword similarity and weighted rerank score during retrieval. This argument sets the threshold for similarities between the user query and chunks. If a similarity score falls below this threshold, the corresponding chunk will be excluded from the results. The default value is `0.2`.
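The retrieval parameters described above can be illustrated with a minimal sketch (illustrative only, not RAGFlow's internal code): a chunk's hybrid score blends keyword similarity and vector cosine similarity using a keyword weight, and any chunk scoring below `similarity_threshold` is excluded. The function and field names here are hypothetical; the defaults (`0.2` threshold, `0.7` keyword weight) come from the reference text.

```python
# Illustrative sketch of the hybrid scoring described above -- not RAGFlow's
# actual implementation. Each chunk carries precomputed 'keyword_sim' and
# 'vector_sim' scores; chunks below `similarity_threshold` are dropped.

def filter_chunks(chunks, similarity_threshold=0.2, keywords_similarity_weight=0.7):
    """Return chunks scoring at or above the threshold, best first."""
    results = []
    for chunk in chunks:
        score = (keywords_similarity_weight * chunk["keyword_sim"]
                 + (1 - keywords_similarity_weight) * chunk["vector_sim"])
        if score >= similarity_threshold:  # below-threshold chunks are excluded
            results.append({**chunk, "score": score})
    return sorted(results, key=lambda c: c["score"], reverse=True)
```

With the default weight of `0.7`, keyword similarity dominates the blend; lowering it shifts influence toward vector (or reranking) similarity, exactly the trade-off `keywords_similarity_weight` controls in the reference.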
docs/references/python_api_reference.md CHANGED
@@ -951,7 +951,7 @@ The LLM settings for the chat assistant to create. Defaults to `None`. When the
 - `frequency penalty`: `float`
   Similar to the presence penalty, this reduces the model’s tendency to repeat the same words frequently. Defaults to `0.7`.
 - `max_token`: `int`
-  The maximum length of the models output, measured in the number of tokens (words or pieces of words). Defaults to `512`.
+  The maximum length of the model's output, measured in the number of tokens (words or pieces of words). If disabled, you lift the maximum token limit, allowing the model to determine the number of tokens in its responses. Defaults to `512`.
 
 #### prompt: `Chat.Prompt`
 
@@ -1013,7 +1013,7 @@ A dictionary representing the attributes to update, with the following keys:
 - `"top_p"`, `float` Also known as “nucleus sampling”, this parameter sets a threshold to select a smaller set of words to sample from.
 - `"presence_penalty"`, `float` This discourages the model from repeating the same information by penalizing words that have appeared in the conversation.
 - `"frequency penalty"`, `float` Similar to presence penalty, this reduces the model’s tendency to repeat the same words.
-- `"max_token"`, `int` The maximum length of the models output, measured in the number of tokens (words or pieces of words).
+- `"max_token"`, `int` The maximum length of the model's output, measured in the number of tokens (words or pieces of words). If disabled, you lift the maximum token limit, allowing the model to determine the number of tokens in its responses. Defaults to `512`.
 - `"prompt"` : Instructions for the LLM to follow.
 - `"similarity_threshold"`: `float` RAGFlow employs either a combination of weighted keyword similarity and weighted vector cosine similarity, or a combination of weighted keyword similarity and weighted rerank score during retrieval. This argument sets the threshold for similarities between the user query and chunks. If a similarity score falls below this threshold, the corresponding chunk will be excluded from the results. The default value is `0.2`.
 - `"keywords_similarity_weight"`: `float` This argument sets the weight of keyword similarity in the hybrid similarity score with vector cosine similarity or reranking model similarity. By adjusting this weight, you can control the influence of keyword similarity in relation to other similarity measures. The default value is `0.7`.
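As a hedged sketch of the update dictionary described at line 1013 of this reference, the payload below uses only the keys the text lists; the exact nesting and the example values are assumptions for illustration, not a definitive shape of the SDK's update call.

```python
# Illustrative update dictionary built from the keys documented above.
# Values are example settings, not recommendations; the flat layout is an
# assumption based on how the reference lists the keys.
update_message = {
    "top_p": 0.3,                        # nucleus-sampling threshold
    "presence_penalty": 0.4,             # discourage repeating covered topics
    "frequency penalty": 0.7,            # key documented with a space, as above
    "max_token": 512,                    # maximum output length in tokens
    "prompt": "Answer using only the retrieved context.",
    "similarity_threshold": 0.2,         # chunks scoring below this are excluded
    "keywords_similarity_weight": 0.7,   # keyword share of the hybrid score
}
```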
web/src/locales/en.ts CHANGED
@@ -366,10 +366,7 @@ The above is the content you need to summarize.`,
     topN: 'Top N',
     topNTip: `Not all chunks with similarity score above the 'similarity threshold' will be sent to the LLM. This selects 'Top N' chunks from the retrieved ones.`,
     variable: 'Variable',
-    variableTip: `If you use dialog APIs, the varialbes might help you chat with your clients with different strategies.
-    The variables are used to fill-in the 'System' part in prompt in order to give LLM a hint.
-    The 'knowledge' is a very special variable which will be filled-in with the retrieved chunks.
-    All the variables in 'System' should be curly bracketed.`,
+    variableTip: `Variables can assist in developing more flexible strategies, particularly when you are using our chat assistant management APIs. These variables will be used by 'System' as part of the prompts for the LLM. The variable {knowledge} is a reserved special variable representing your selected knowledge base(s), and all variables should be enclosed in curly braces {}.`,
     add: 'Add',
     key: 'Key',
     optional: 'Optional',
@@ -381,15 +378,15 @@ The above is the content you need to summarize.`,
     improvise: 'Improvise',
     precise: 'Precise',
     balance: 'Balance',
-    freedomTip: `'Precise' means the LLM will be conservative and answer your question cautiously. 'Improvise' means the you want LLM talk much and freely. 'Balance' is between cautiously and freely.`,
+    freedomTip: `Set the freedom level to 'Precise' to strictly confine the LLM's response to your selected knowledge base(s). Choose 'Improvise' to grant the LLM greater freedom in its responses, which may lead to hallucinations. 'Balance' is an intermediate level; choose 'Balance' for more balanced responses.`,
     temperature: 'Temperature',
     temperatureMessage: 'Temperature is required',
     temperatureTip:
-      'This parameter controls the randomness of predictions by the model. A lower temperature makes the model more confident in its responses, while a higher temperature makes it more creative and diverse.',
+      `This parameter controls the randomness of the model's predictions. A lower temperature results in more conservative responses, while a higher temperature yields more creative and diverse responses.`,
     topP: 'Top P',
     topPMessage: 'Top P is required',
     topPTip:
-      'Also known as nucleus sampling,” this parameter sets a threshold to select a smaller set of words to sample from. It focuses on the most likely words, cutting off the less probable ones.',
+      'Also known as "nucleus sampling", this parameter sets a threshold for selecting a smaller set of the most likely words to sample from, cutting off the less probable ones.',
     presencePenalty: 'Presence penalty',
     presencePenaltyMessage: 'Presence penalty is required',
     presencePenaltyTip:
@@ -401,7 +398,7 @@ The above is the content you need to summarize.`,
     maxTokens: 'Max tokens',
     maxTokensMessage: 'Max tokens is required',
     maxTokensTip:
-      'This sets the maximum length of the models output, measured in the number of tokens (words or pieces of words).',
+      `This sets the maximum length of the model's output, measured in the number of tokens (words or pieces of words). If disabled, you lift the maximum token limit, allowing the model to determine the number of tokens in its responses. Defaults to 512.`,
     maxTokensInvalidMessage: 'Please enter a valid number for Max Tokens.',
     maxTokensMinMessage: 'Max Tokens cannot be less than 0.',
     quote: 'Show quote',
@@ -456,7 +453,8 @@ The above is the content you need to summarize.`,
     profileDescription: 'Update your photo and personal details here.',
     maxTokens: 'Max Tokens',
     maxTokensMessage: 'Max Tokens is required',
-    maxTokensTip: `This sets the maximum length of the model's output, measured in the number of tokens (words or pieces of words).`,
+    maxTokensTip:
+      `This sets the maximum length of the model's output, measured in the number of tokens (words or pieces of words). If disabled, you lift the maximum token limit, allowing the model to determine the number of tokens in its responses. Defaults to 512.`,
    maxTokensInvalidMessage: 'Please enter a valid number for Max Tokens.',
     maxTokensMinMessage: 'Max Tokens cannot be less than 0.',
     password: 'Password',
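The curly-brace convention the new `variableTip` describes can be sketched as follows. This is an illustrative substitution routine, not RAGFlow's code; `{knowledge}` is the reserved variable from the tip, while `{role}` and the helper name are hypothetical.

```python
# Illustrative sketch of curly-brace variable substitution in a 'System'
# prompt, as described by `variableTip`. `{knowledge}` is the reserved
# variable that receives the retrieved chunks; other names are examples.

def fill_system_prompt(template: str, variables: dict) -> str:
    """Replace each {name} placeholder in the template with its value."""
    prompt = template
    for name, value in variables.items():
        prompt = prompt.replace("{" + name + "}", str(value))
    return prompt

system = "You are a {role}. Answer using only this context:\n{knowledge}"
filled = fill_system_prompt(system, {
    "role": "helpful assistant",
    "knowledge": "Chunk 1...\nChunk 2...",  # retrieved chunks go here
})
```

Because every variable is enclosed in `{}`, the template stays readable while the system resolves `{knowledge}` at retrieval time.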