how to solve this problem (Error 422)

#141
by AhmadOmar25 - opened

😃: hello
🤖: Step 1
🤖: Error in generating model output:
422 Client Error: Unprocessable Entity for url: https://api-inference.huggingface.co/models/Qwen/Qwen2.5-Coder-32B-Instruct/v1/chat/completions (Request ID: 2Zc1LHnfKM4l93X5A_w15)

Input validation error: inputs tokens + max_new_tokens must be <= 16000. Given: 220616 inputs tokens and 2096 max_new_tokens
Make sure 'text-generation' task is supported by the model.
🤖: Step 1 | Input-tokens:12,930 | Output-tokens:130 | Duration: 0.43
🤖: -----
🤖: Step 2
🤖: Error in generating model output:
422 Client Error: Unprocessable Entity for url: https://api-inference.huggingface.co/models/Qwen/Qwen2.5-Coder-32B-Instruct/v1/chat/completions (Request ID: TjWBxLAkAe_UfRQzjOsS8)

Input validation error: inputs tokens + max_new_tokens must be <= 16000. Given: 220779 inputs tokens and 2096 max_new_tokens
Make sure 'text-generation' task is supported by the model.
🤖: Step 2 | Input-tokens:12,930 | Output-tokens:130 | Duration: 0.36
🤖: -----
🤖: Step 3
🤖: Error in generating model output:
422 Client Error: Unprocessable Entity for url: https://api-inference.huggingface.co/models/Qwen/Qwen2.5-Coder-32B-Instruct/v1/chat/completions (Request ID: K2dDB45ha9ZiaU-v_yoah)

Input validation error: inputs tokens + max_new_tokens must be <= 16000. Given: 220940 inputs tokens and 2096 max_new_tokens
Make sure 'text-generation' task is supported by the model.
🤖: Step 3 | Input-tokens:12,930 | Output-tokens:130 | Duration: 0.39
🤖: -----
🤖: Step 4
🤖: Error in generating model output:
422 Client Error: Unprocessable Entity for url: https://api-inference.huggingface.co/models/Qwen/Qwen2.5-Coder-32B-Instruct/v1/chat/completions (Request ID: RFR8rdioTzMZ5ZSdT8Y_B)

Input validation error: inputs tokens + max_new_tokens must be <= 16000. Given: 221099 inputs tokens and 2096 max_new_tokens
Make sure 'text-generation' task is supported by the model.
🤖: Step 4 | Input-tokens:12,930 | Output-tokens:130 | Duration: 0.4
🤖: -----
🤖: Step 5
🤖: Error in generating model output:
422 Client Error: Unprocessable Entity for url: https://api-inference.huggingface.co/models/Qwen/Qwen2.5-Coder-32B-Instruct/v1/chat/completions (Request ID: _8Hd6XBqRlAbbefJ4zs-W)

Input validation error: inputs tokens + max_new_tokens must be <= 16000. Given: 221258 inputs tokens and 2096 max_new_tokens
Make sure 'text-generation' task is supported by the model.
🤖: Step 5 | Input-tokens:12,930 | Output-tokens:130 | Duration: 0.4
🤖: -----
🤖: Step 6
🤖: Error in generating model output:
422 Client Error: Unprocessable Entity for url: https://api-inference.huggingface.co/models/Qwen/Qwen2.5-Coder-32B-Instruct/v1/chat/completions (Request ID: g0dus4kkBhHqOJSBlVPbV)

Input validation error: inputs tokens + max_new_tokens must be <= 16000. Given: 221417 inputs tokens and 2096 max_new_tokens
Make sure 'text-generation' task is supported by the model.
🤖: Step 6 | Input-tokens:12,930 | Output-tokens:130 | Duration: 0.36
🤖: -----
🤖: Step 7
🤖: Reached max steps.
🤖: Step 7 | Input-tokens:12,930 | Output-tokens:130 | Duration: 0.36
🤖: -----
🤖: Final answer:
Error in generating final LLM output:
422 Client Error: Unprocessable Entity for url: https://api-inference.huggingface.co/models/Qwen/Qwen2.5-Coder-32B-Instruct/v1/chat/completions (Request ID: 9enJv0Nm-LTITr5axUAB5)

Input validation error: inputs tokens + max_new_tokens must be <= 16000. Given: 219634 inputs tokens and 2096 max_new_tokens
Make sure 'text-generation' task is supported by the model.

You need to add your tool to the CodeAgent and then commit your changes:
from smolagents import CodeAgent

# model, final_answer, get_current_time_in_timezone, and prompt_templates
# are defined earlier in the Space's app.py
agent = CodeAgent(
    model=model,
    tools=[final_answer, get_current_time_in_timezone],  # add your tools here (don't remove final_answer)
    max_steps=6,
    verbosity_level=1,
    grammar=None,
    planning_interval=None,
    name=None,
    description=None,
    prompt_templates=prompt_templates,
)
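For context, here is a minimal sketch of what a custom tool can look like before it goes into that tools list, assuming the smolagents @tool decorator; the timezone logic and return string are illustrative, and pytz must be available in the Space (add it to requirements.txt if it is not already there):

from smolagents import tool
import datetime
import pytz  # assumption: pytz is installed in the Space

@tool
def get_current_time_in_timezone(timezone: str) -> str:
    """A tool that returns the current local time in a given timezone.

    Args:
        timezone: A valid timezone name, e.g. 'Europe/Paris' or 'Asia/Amman'.
    """
    tz = pytz.timezone(timezone)
    local_time = datetime.datetime.now(tz).strftime("%Y-%m-%d %H:%M:%S")
    return f"The current local time in {timezone} is: {local_time}"

Once the tool is defined and listed in tools=[...], commit the change so the Space rebuilds and the agent can call it.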
