zhuhao committed
Commit: aa96c78
Parent(s): b808748
fix the max token of Tongyi-Qianwen text-embedding-v3 model to 8k (#2118)
### What problem does this PR solve?
_Briefly describe what this PR aims to solve. Include background context
that will help reviewers understand the purpose of the PR._
Fix the max token limit of the Tongyi-Qianwen text-embedding-v3 model to 8k (8192).
close #2117
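For reviewers, a quick sanity check is to load conf/llm_factories.json and confirm that the text-embedding-v3 entry now reports the 8K limit. The sketch below is illustrative only and not part of this PR; it walks the JSON generically rather than assuming the file's exact nesting, relying only on the "llm_name", "max_tokens", and "tags" keys visible in the diff.

```python
# Illustrative check, not part of this PR: confirm the updated
# text-embedding-v3 entry in conf/llm_factories.json.
import json


def find_llm_entries(node, name):
    """Yield every dict whose "llm_name" equals `name`, at any nesting depth."""
    if isinstance(node, dict):
        if node.get("llm_name") == name:
            yield node
        for value in node.values():
            yield from find_llm_entries(value, name)
    elif isinstance(node, list):
        for item in node:
            yield from find_llm_entries(item, name)


with open("conf/llm_factories.json", encoding="utf-8") as f:
    config = json.load(f)

for entry in find_llm_entries(config, "text-embedding-v3"):
    # After this change the entry should report the 8192-token limit and the 8K tag.
    assert entry["max_tokens"] == 8192, entry
    assert "8K" in entry["tags"], entry
    print("text-embedding-v3:", entry["max_tokens"], entry["tags"])
```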
### Type of change
- [x] Bug Fix (non-breaking change which fixes an issue)
- [ ] New Feature (non-breaking change which adds functionality)
- [ ] Documentation Update
- [ ] Refactoring
- [ ] Performance Improvement
- [ ] Other (please describe):
conf/llm_factories.json CHANGED (+2 -2)
@@ -106,8 +106,8 @@
         },
         {
             "llm_name": "text-embedding-v3",
-            "tags": "TEXT EMBEDDING,
-            "max_tokens":
+            "tags": "TEXT EMBEDDING,8K",
+            "max_tokens": 8192,
             "model_type": "embedding"
         },
         {
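As a usage note, downstream code that reads max_tokens from this config can use the value to guard input length before embedding. The sketch below is hypothetical: embed_fn and the rough characters-per-token estimate are placeholders rather than RAGFlow or DashScope APIs; only the 8192-token limit comes from this change.

```python
# Hypothetical guard around the 8192-token cap introduced by this change.
# `embed_fn` and the character-based token estimate are placeholders, not
# actual RAGFlow or DashScope APIs.
from typing import Callable, List


def embed_with_limit(text: str,
                     embed_fn: Callable[[str], List[float]],
                     max_tokens: int = 8192,
                     chars_per_token: int = 4) -> List[float]:
    """Truncate `text` to a rough token budget before calling the embedder."""
    max_chars = max_tokens * chars_per_token  # crude estimate, not a real tokenizer
    if len(text) > max_chars:
        text = text[:max_chars]
    return embed_fn(text)
```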