Reality123b
posted an update 1 day ago
I've got an issue with the Inference API.

Whatever model I choose and whatever input I give, it outputs "failed to fetch" on my laptop, PC, phone, and every other device. I tried different accounts, etc., but I still get this error.

Please help, as almost all of my HF Spaces use the API.

Also, in my Space it shows: "{'error': {'message': 'Model is overloaded', 'http_status_code': 422}}"

Same here:

Error in generating model output:
500 Server Error: Internal Server Error for url: https://api-inference.huggingface.co/models/Qwen/Qwen2.5-Coder-32B-Instruct/v1/chat/completions (Request ID: 6gIxzV)

Model too busy, unable to get response in less than 60 second(s)
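The 422 "Model is overloaded" and 500 "Model too busy" responses above are transient server-side errors, so retrying with exponential backoff often gets through once load drops. Below is a minimal sketch of such a retry loop against the chat-completions URL from the error message. This is an illustration, not an official fix: the `HF_TOKEN` environment variable, the retryable status-code set, and the `max_tokens` value are assumptions.

```python
import os
import time
import requests

# URL taken from the 500 error in this thread.
API_URL = ("https://api-inference.huggingface.co/models/"
           "Qwen/Qwen2.5-Coder-32B-Instruct/v1/chat/completions")

# Assumed set of transient statuses worth retrying (overloaded/busy/server error).
RETRYABLE = {422, 429, 500, 503}


def should_retry(status_code: int) -> bool:
    """True for transient 'model overloaded/busy' style errors."""
    return status_code in RETRYABLE


def backoff_delay(attempt: int, base: float = 2.0) -> float:
    """Exponential backoff: 2s, 4s, 8s, ... for attempts 0, 1, 2, ..."""
    return base * (2 ** attempt)


def chat_completion(messages, token=None, max_retries=3):
    """POST to the inference endpoint, retrying transient failures."""
    token = token or os.environ.get("HF_TOKEN", "")  # assumed env var
    headers = {"Authorization": f"Bearer {token}"}
    payload = {"messages": messages, "max_tokens": 256}
    for attempt in range(max_retries + 1):
        resp = requests.post(API_URL, headers=headers, json=payload, timeout=60)
        if resp.status_code == 200:
            return resp.json()
        if should_retry(resp.status_code) and attempt < max_retries:
            time.sleep(backoff_delay(attempt))  # wait, then try again
            continue
        resp.raise_for_status()  # non-retryable, or retries exhausted
```

If every retry still fails, the problem is on the provider's side (or the model has no warm serverless deployment), and switching models or waiting is the only workaround.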

"this post was flagged"
