HELP US SOLVE THE "ZeroGPU worker error - GPU task aborted" Error

#16
by Kryzys7 - opened

"ZeroGPU worker error
GPU task aborted"

This happens when generating a voice clone through the Dia model. On top of that, the failed run still eats into the user's limited free quota.

no logs?

Why would you need logs from me, brother? I don't have the option to show logs.

I misunderstood "us" :)

I have the same problem

It looks like all these Spaces have been abandoned; ZeroGPU can't keep up with them, so the maintainers seem to have left them behind.

same problem

Same issue here. Let's investigate...

Hi! I took a look at the public endpoints for the Space. Whenever I queue a request (/gradio_api/call/generate_audio) I get an event_id, but polling /gradio_api/result/<id> always returns 404 and /queue/status currently responds with 500 Internal Error. That points to the worker crashing right after the job is enqueued.
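For anyone who wants to reproduce that check, here is a minimal sketch of the probe. The Space URL, the request payload shape, and the placeholder input text are assumptions on my part; the real parameters of generate_audio are listed on the Space's "Use via API" page.

```python
# Minimal reproduction of the probe described above (assumed Space URL and payload shape).
import time
import requests

BASE = "https://nari-labs-dia-1-6b.hf.space"  # assumed public URL of the Space

# 1. Queue a job -- this step still succeeds and hands back an event_id.
resp = requests.post(
    f"{BASE}/gradio_api/call/generate_audio",
    json={"data": ["[S1] Hello, this is a test."]},  # placeholder input, not the real signature
    timeout=30,
)
resp.raise_for_status()
event_id = resp.json()["event_id"]
print("queued:", event_id)

# 2. Poll for the result -- in the failing state this keeps returning 404.
for _ in range(10):
    r = requests.get(f"{BASE}/gradio_api/result/{event_id}", timeout=30)
    print("result poll:", r.status_code)
    if r.status_code == 200:
        print(r.text[:500])
        break
    time.sleep(2)

# 3. Queue health check -- currently reported to answer with a 500.
print("queue status:", requests.get(f"{BASE}/queue/status", timeout=30).status_code)
```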

Because I don’t have maintainer access, I can’t retrieve the runtime logs from the CLI, so could someone who does run hf spaces logs nari-labs/Dia-1.6B --limit … (or check the Space dashboard) and grab the stack trace?

One thing that stands out is the Space metadata: the front-matter reads python_version: 3.10, but the Hub seems to have parsed it as the float 3.1. That can put the build in a broken environment. I’d recommend changing it to a string (python_version: "3.10") and redeploying.
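If it helps, here is a tiny illustration of why the quoting matters (using PyYAML, which I assume behaves like the Hub's front-matter parser in this respect):

```python
# Unquoted 3.10 is a YAML float and collapses to 3.1; quoting keeps it a string.
import yaml

print(yaml.safe_load("python_version: 3.10"))    # {'python_version': 3.1}    -> broken build
print(yaml.safe_load('python_version: "3.10"'))  # {'python_version': '3.10'} -> intended
```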

Once that’s fixed, also double-check that Dia.from_pretrained("nari-labs/Dia-1.6B-0626") can actually download the weights and that descript-audio-codec (via dac.utils.download()) is allowed during startup—failures there are another common cause of the worker exiting.
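If someone with access wants a quick way to test that, here is a rough startup smoke test. It assumes the Space imports Dia from the dia package and uses descript-audio-codec the way the upstream READMEs show; adjust the names to match the actual app.py.

```python
# Rough smoke test for the two startup steps mentioned above (assumed import paths).
import traceback

def check(name, fn):
    try:
        fn()
        print(f"[ok]   {name}")
    except Exception:
        print(f"[fail] {name}")
        traceback.print_exc()

def load_dia_weights():
    from dia.model import Dia
    Dia.from_pretrained("nari-labs/Dia-1.6B-0626")  # pulls the model weights from the Hub

def load_dac_codec():
    import dac
    dac.utils.download()  # fetches the Descript audio codec checkpoint

check("Dia weights download", load_dia_weights)
check("DAC codec download", load_dac_codec)
```

If either step fails inside the ZeroGPU worker, the job dies after it has already been queued, which would match the 404-on-result behaviour above.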

Hope this helps! Let me know if you need more detail.
