i2v OOM

#291
by Theoldsong - opened

torch.OutOfMemoryError: Allocation on device

PyTorch CUDA memory summary, device ID 0 (CUDA OOMs: 0). [The summary table lost its values in the paste; it covered allocated, active, requested, GPU-reserved, and non-releasable memory, plus allocation and segment counts for the large and small pools, and oversize allocations/segments.]

Got an OOM, unloading all loaded models.
Prompt executed in 514.25 seconds

t2v works normally.
i2v OOMs immediately.
How much VRAM does this model require to run?

RTX 4080 16GB

16gb is plenty. Are you trying to create a really large image?


The image is 720p, and the default width and height parameters were used for generation.

I2V for an I2I/T2I model sounds off, as does 720p when describing images.
Wrong workflow, wrong channel, or just the wrong term?

I am getting an OOM when doing i2i at 768x1024 resolution, batch size 1, with euler ancestral / beta as suggested in the README. Using 1 or 2 images doesn't matter. The reference photos are 3k by 4k (from a Samsung S21). I'm using the stock workflow provided.
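Downscaling those 3k-by-4k reference photos before they reach the encoder should cut VRAM use noticeably, since the workflow only targets 768x1024 anyway. A minimal sketch of the dimension math (the `fit_within` helper is hypothetical, not part of the stock workflow):

```python
def fit_within(width, height, max_side):
    """Scale (width, height) down so the longer side equals max_side,
    preserving aspect ratio. No-op if already small enough."""
    longest = max(width, height)
    if longest <= max_side:
        return width, height
    scale = max_side / longest
    return round(width * scale), round(height * scale)

# A 3000x4000 phone photo shrunk to fit the 768x1024 target:
print(fit_within(3000, 4000, 1024))  # -> (768, 1024)
```

In ComfyUI, an image-scale node placed before the edit node achieves the same thing without touching the source files.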

It works but every few generations the error shows up:

TextEncodeQwenImageEditPlus

Allocation on device 0 would exceed allowed memory. (out of memory)
Currently allocated : 11.63 GiB
Requested : 2.03 GiB
Device limit : 15.46 GiB
Free (according to CUDA): 45.25 MiB
PyTorch limit (set by user-supplied memory fraction): 17179869184.00 GiB
This error means you ran out of memory on your GPU.
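The numbers in the message show why the request fails even though 11.63 + 2.03 GiB is under the 15.46 GiB limit: most of the nominal headroom is not actually free, presumably held by the allocator's cache, fragmentation, and other processes. A quick sanity check on the figures:

```python
# All figures copied from the error message above (GiB).
device_limit = 15.46
allocated = 11.63
requested = 2.03
free_cuda = 45.25 / 1024              # 45.25 MiB expressed in GiB

headroom = device_limit - allocated   # ~3.83 GiB on paper
unavailable = headroom - free_cuda    # ~3.79 GiB held elsewhere
print(f"{headroom:.2f} {unavailable:.2f}")  # -> 3.83 3.79

# The 2.03 GiB request cannot fit in ~45 MiB of truly free memory:
assert requested > free_cuda
```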

TIPS: If the workflow worked before you might have accidentally set the batch_size to a large number.
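For fragmentation-flavoured OOMs like this (a large gap between reserved and truly free memory), one thing worth trying is PyTorch's expandable-segments allocator mode. It must be set before torch initialises CUDA; a sketch:

```python
import os

# Must be set before `import torch` (or exported in the shell that
# launches ComfyUI) so the CUDA caching allocator picks it up:
#   export PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True
os.environ["PYTORCH_CUDA_ALLOC_CONF"] = "expandable_segments:True"

# import torch  # imported only after the env var is in place
print(os.environ["PYTORCH_CUDA_ALLOC_CONF"])
```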

Specs:
7950X3D
64GB DDR5
5070 ti 16GB
Ubuntu 24.04 LTS

I can help in any way possible; I'm a software engineer.
