yusufs committed
Commit 4a9e328 · Parent: 811d851

feat(sail/Sailor-4B-Chat): try increasing gpu-memory-utilization to 0.9 before changing the token length

Files changed (1): run-sailor.sh (+4 -1)
run-sailor.sh CHANGED
@@ -10,6 +10,9 @@ printf "Running sail/Sailor-4B-Chat using vLLM OpenAI compatible API Server at p
 # INFO 11-27 15:32:10 gpu_executor.py:113] # GPU blocks: 471, # CPU blocks: 655
 # INFO 11-27 15:32:10 gpu_executor.py:117] Maximum concurrency for 32768 tokens per request: 0.23x
 # ERROR 11-27 15:32:10 engine.py:366] The model's max seq len (32768) is larger than the maximum number of tokens that can be stored in KV cache (7536). Try increasing `gpu_memory_utilization` or decreasing `max_model_len` when initializing the engine.
+
+# 7536 tokens ÷ 1.2 = 6280 words.
+# 6280 words ÷ 500 words/page = 12.56 pages (single-spaced).
 python -u /app/openai_compatible_api_server.py \
   --model sail/Sailor-4B-Chat \
   --revision 89a866a7041e6ec023dd462adeca8e28dd53c83e \
@@ -19,4 +22,4 @@ python -u /app/openai_compatible_api_server.py \
   --max-model-len 32768 \
   --dtype half \
   --enforce-eager \
-  --gpu-memory-utilization 0.85
+  --gpu-memory-utilization 0.9
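
For reference, the numbers in the ERROR line are internally consistent: vLLM allocates its KV cache in fixed-size blocks, and with the default block size of 16 tokens per block the 471 GPU blocks from the log hold 471 × 16 = 7536 tokens, and 7536 / 32768 ≈ 0.23 is exactly the reported maximum concurrency. A minimal back-of-envelope sketch in Python, assuming vLLM's default block size of 16 (the block size itself does not appear in the log; the other figures do):

# kv_cache_check.py - back-of-envelope check of the vLLM log above.
# Assumption: vLLM's default KV-cache block size of 16 tokens per block;
# the block count and max seq len are taken from the log / run-sailor.sh.

BLOCK_SIZE_TOKENS = 16      # assumed vLLM default; not printed in the log
gpu_blocks = 471            # "# GPU blocks: 471"
max_model_len = 32768       # --max-model-len in run-sailor.sh

kv_cache_tokens = gpu_blocks * BLOCK_SIZE_TOKENS
print(kv_cache_tokens)                            # 7536 -> matches the ERROR line
print(round(kv_cache_tokens / max_model_len, 2))  # 0.23 -> matches "0.23x" concurrency

# The engine aborts because a single max-length request (32768 tokens)
# cannot fit in a 7536-token cache. Raising --gpu-memory-utilization
# enlarges the cache pool; the fallback is lowering --max-model-len
# to at most kv_cache_tokens.

Whether bumping --gpu-memory-utilization from 0.85 to 0.9 frees enough VRAM for the full 32768-token window depends on the GPU; if the cache still comes up short, the commit message already anticipates the fallback of reducing the token length via --max-model-len.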