My first agent template error
🧑: Hello
🤖: Step 1
🤖: Error in generating model output:
422 Client Error: Unprocessable Entity for url: https://api-inference.huggingface.co/models/Qwen/Qwen2.5-Coder-32B-Instruct/v1/chat/completions (Request ID: qbTRPsfWZvqCDupHXjR_Q)
Input validation error: `inputs` tokens + `max_new_tokens` must be <= 32768. Given: 62563 `inputs` tokens and 2096 `max_new_tokens`
Make sure 'text-generation' task is supported by the model.
🤖: Step 1 | Input-tokens:29,812 | Output-tokens:366 | Duration: 0.14
🤖: -----
🤖: Step 2
🤖: Error in generating model output:
422 Client Error: Unprocessable Entity for url: https://api-inference.huggingface.co/models/Qwen/Qwen2.5-Coder-32B-Instruct/v1/chat/completions (Request ID: QVBfI0pOFVsXbzgFPI1gQ)
Input validation error: `inputs` tokens + `max_new_tokens` must be <= 32768. Given: 62720 `inputs` tokens and 2096 `max_new_tokens`
Make sure 'text-generation' task is supported by the model.
🤖: Step 2 | Input-tokens:29,812 | Output-tokens:366 | Duration: 0.13
🤖: -----
🤖: Step 3
🤖: Error in generating model output:
422 Client Error: Unprocessable Entity for url: https://api-inference.huggingface.co/models/Qwen/Qwen2.5-Coder-32B-Instruct/v1/chat/completions (Request ID: gRi8BjipkU6d78SFZUlH4)
Input validation error: `inputs` tokens + `max_new_tokens` must be <= 32768. Given: 62879 `inputs` tokens and 2096 `max_new_tokens`
Make sure 'text-generation' task is supported by the model.
🤖: Step 3 | Input-tokens:29,812 | Output-tokens:366 | Duration: 0.12
🤖: -----
🤖: Step 4
🤖: Error in generating model output:
422 Client Error: Unprocessable Entity for url: https://api-inference.huggingface.co/models/Qwen/Qwen2.5-Coder-32B-Instruct/v1/chat/completions (Request ID: 4zgXd2nSfjsRYJW9_zr3_)
Input validation error: `inputs` tokens + `max_new_tokens` must be <= 32768. Given: 63040 `inputs` tokens and 2096 `max_new_tokens`
Make sure 'text-generation' task is supported by the model.
🤖: Step 4 | Input-tokens:29,812 | Output-tokens:366 | Duration: 0.14
🤖: -----
🤖: Step 5
🤖: Error in generating model output:
422 Client Error: Unprocessable Entity for url: https://api-inference.huggingface.co/models/Qwen/Qwen2.5-Coder-32B-Instruct/v1/chat/completions (Request ID: cLRwYf0Xf32aQbicqQLa_)
Input validation error: `inputs` tokens + `max_new_tokens` must be <= 32768. Given: 63201 `inputs` tokens and 2096 `max_new_tokens`
Make sure 'text-generation' task is supported by the model.
🤖: Step 5 | Input-tokens:29,812 | Output-tokens:366 | Duration: 0.12
🤖: -----
🤖: Step 6
🤖: Error in generating model output:
422 Client Error: Unprocessable Entity for url: https://api-inference.huggingface.co/models/Qwen/Qwen2.5-Coder-32B-Instruct/v1/chat/completions (Request ID: RFrHd-TPgvDYTnlhrnsyL)
Input validation error: `inputs` tokens + `max_new_tokens` must be <= 32768. Given: 63361 `inputs` tokens and 2096 `max_new_tokens`
Make sure 'text-generation' task is supported by the model.
🤖: Step 6 | Input-tokens:29,812 | Output-tokens:366 | Duration: 0.13
🤖: -----
🤖: Step 7
🤖: Reached max steps.
🤖: Step 7 | Input-tokens:29,812 | Output-tokens:366 | Duration: 0.13
🤖: -----
🤖: Final answer:
Error in generating final LLM output:
422 Client Error: Unprocessable Entity for url: https://api-inference.huggingface.co/models/Qwen/Qwen2.5-Coder-32B-Instruct/v1/chat/completions (Request ID: 4wsD2DWM7_-1kpUVAtClF)
Input validation error: `inputs` tokens + `max_new_tokens` must be <= 32768. Given: 61575 `inputs` tokens and 2096 `max_new_tokens`
Make sure 'text-generation' task is supported by the model.
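
For context, this is the Agents Course First_agent_template running pretty much as-is. The sketch below is roughly what the model and agent setup looks like on my side (paraphrased from the template rather than copied verbatim, so details like `max_steps` may differ). The `max_tokens=2096` there is what appears as `max_new_tokens` in the error, and the prompt the agent builds is apparently already over 60,000 tokens, which is presumably why the request is rejected against the 32,768 limit.

```python
from smolagents import CodeAgent, HfApiModel

# Roughly my configuration (values approximate, taken from the course template).
# max_tokens is passed through to the API as max_new_tokens (the 2096 in the error).
# The endpoint requires: prompt tokens + max_new_tokens <= 32768,
# but step 1 already asks for 62,563 + 2,096 = 64,659 tokens of budget.
model = HfApiModel(
    max_tokens=2096,
    temperature=0.5,
    model_id="Qwen/Qwen2.5-Coder-32B-Instruct",
    custom_role_conversions=None,
)

agent = CodeAgent(
    model=model,
    tools=[],          # the template also registers its tools (final answer, etc.)
    max_steps=6,       # template default, as far as I remember
    verbosity_level=1,
)

# The template serves this through a Gradio chat; the "Hello" message above
# ends up as the task passed to the agent, i.e. effectively:
agent.run("Hello")
```
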
Is there a reason why this is throwing an error?