runtime error

Exit code: 1. Reason:

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/app/app.py", line 186, in <module>
    pipe = _load_pipe_with_version(AIO_VERSION)
  File "/app/app.py", line 165, in _load_pipe_with_version
    transformer=QwenImageTransformer2DModel.from_pretrained(
        AIO_REPO_ID,
        ...<2 lines>...
        device_map="cuda",
    ),
  File "/root/.pyenv/versions/3.13.11/lib/python3.13/site-packages/huggingface_hub/utils/_validators.py", line 114, in _inner_fn
    return fn(*args, **kwargs)
  File "/root/.pyenv/versions/3.13.11/lib/python3.13/site-packages/diffusers/models/modeling_utils.py", line 1314, in from_pretrained
    ) = cls._load_pretrained_model(
        model,
        ...<14 lines>...
        disable_mmap=disable_mmap,
    )
  File "/root/.pyenv/versions/3.13.11/lib/python3.13/site-packages/diffusers/models/modeling_utils.py", line 1654, in _load_pretrained_model
    _caching_allocator_warmup(model, expanded_device_map, dtype, hf_quantizer)
  File "/root/.pyenv/versions/3.13.11/lib/python3.13/site-packages/diffusers/models/model_loading_utils.py", line 759, in _caching_allocator_warmup
    _ = torch.empty(warmup_elems, dtype=dtype, device=device, requires_grad=False)
torch.OutOfMemoryError: CUDA out of memory. Tried to allocate 19.03 GiB. GPU 0 has a total capacity of 22.30 GiB of which 28.69 MiB is free. Process 18229 has 22.27 GiB memory in use. Of the allocated memory 22.02 GiB is allocated by PyTorch, and 4.91 MiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_ALLOC_CONF=expandable_segments:True to avoid fragmentation. See documentation for Memory Management (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables)
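The error message itself suggests the `PYTORCH_ALLOC_CONF=expandable_segments:True` allocator setting. A minimal sketch of applying it in-process (this must happen before the first `import torch`; whether it helps here is an assumption, not confirmed by the log):

```python
import os

# Allocator hint from the OOM message above: reduce fragmentation by
# letting PyTorch grow allocations in expandable segments. Must be set
# before torch initializes CUDA, i.e. before `import torch` runs.
os.environ.setdefault("PYTORCH_ALLOC_CONF", "expandable_segments:True")

# Caveat: fragmentation is likely not the root cause here -- only
# ~4.91 MiB is reserved-but-unallocated, while the load needs 19.03 GiB
# on a 22.30 GiB GPU that is already ~22 GiB full. Loading the weights
# in a lower precision, or offloading parts of the pipeline to CPU
# (e.g. diffusers' pipe.enable_model_cpu_offload()), are more likely
# remedies; both are assumptions about this app, not shown in the log.
```

Note that the environment variable only influences CUDA caching-allocator behavior; it cannot make a checkpoint fit that exceeds the remaining free memory.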
