Runtime error

#69
by Barney15916 - opened

I have been using huggingface.co/spaces/mrfakename/E2-F5-TTS for over six months. Today when I opened Hugging Face, I got this runtime error:
Exit code: 1. Reason: : 0%| | 0.00/1.35G [00:00<?, ?B/s]

model_1250000.safetensors: 23%|██▎ | 315M/1.35G [00:01<00:03, 298MB/s]
model_1250000.safetensors: 100%|█████████▉| 1.35G/1.35G [00:02<00:00, 674MB/s]

vocab : /usr/local/lib/python3.10/site-packages/f5_tts/infer/examples/vocab.txt
token : custom
model : /home/user/.cache/huggingface/hub/models--SWivid--F5-TTS/snapshots/84e5a410d9cead4de2f847e7c9369a6440bdfaca/F5TTS_v1_Base/model_1250000.safetensors

model_1200000.safetensors: 0%| | 0.00/1.33G [00:00<?, ?B/s]

model_1200000.safetensors: 35%|███▌ | 464M/1.33G [00:01<00:01, 464MB/s]

model_1200000.safetensors: 100%|█████████▉| 1.33G/1.33G [00:02<00:00, 647MB/s]
model_1200000.safetensors: 100%|█████████▉| 1.33G/1.33G [00:02<00:00, 602MB/s]

vocab : /usr/local/lib/python3.10/site-packages/f5_tts/infer/examples/vocab.txt
token : custom
model : /home/user/.cache/huggingface/hub/models--SWivid--E2-TTS/snapshots/851141880b5ca38050025e98dfdee27dc553f86e/E2TTS_Base/model_1200000.safetensors

SPACES_ZERO_GPU_DEBUG self.arg_queue._writer.fileno()=20
SPACES_ZERO_GPU_DEBUG self.res_queue._writer.fileno()=22
SPACES_ZERO_GPU_DEBUG fds=[]
Loading chat model: Qwen/Qwen2.5-3B-Instruct

config.json: 0%| | 0.00/661 [00:00<?, ?B/s]
config.json: 100%|██████████| 661/661 [00:00<00:00, 5.87MB/s]

model.safetensors.index.json: 0%| | 0.00/35.6k [00:00<?, ?B/s]
model.safetensors.index.json: 100%|██████████| 35.6k/35.6k [00:00<00:00, 169MB/s]

model-00001-of-00002.safetensors: 0%| | 0.00/3.97G [00:00<?, ?B/s]
Traceback (most recent call last):
  File "/home/user/app/app.py", line 681, in
    load_chat_model(chat_model_name_list[0])
  File "/usr/local/lib/python3.10/site-packages/spaces/zero/wrappers.py", line 210, in gradio_handler
    raise error("ZeroGPU worker error", "GPU task aborted")
gradio.exceptions.Error: 'GPU task aborted'
Container logs:

Failed to retrieve error logs: SSE is not enabled

Thanks for reporting. I have reached out to the Hugging Face team, as this looks like an issue with Spaces.

mrfakename changed discussion status to closed
