How to use TenStrip/LTX2.3-10Eros with Diffusers:
```shell
pip install -U diffusers transformers accelerate
```
```python
import torch
from diffusers import DiffusionPipeline
from diffusers.utils import load_image, export_to_video

pipe = DiffusionPipeline.from_pretrained(
    "TenStrip/LTX2.3-10Eros",
    torch_dtype=torch.bfloat16,
)
pipe.to("cuda")  # switch to "mps" for Apple devices

prompt = "A man with short gray hair plays a red electric guitar."
image = load_image(
    "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/guitar-man.png"
)

output = pipe(image=image, prompt=prompt).frames[0]
export_to_video(output, "output.mp4")
```
Hey, any chance for gguf models?
It seems quite complex. The main issue is that quantization changes the tensor dimensions, and in my attempts the ComfyUI GGUF loaders do not handle those shapes when loading an LTX model.
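For what it's worth, Diffusers itself has native GGUF loading for transformer checkpoints via `GGUFQuantizationConfig` and `from_single_file`, which sidesteps the ComfyUI loader entirely. This is only a sketch under assumptions: the checkpoint path below is hypothetical (no GGUF release of this model exists), and whether `LTXVideoTransformer3DModel` accepts this model's tensor shapes is untested.

```python
import torch
from diffusers import DiffusionPipeline, GGUFQuantizationConfig, LTXVideoTransformer3DModel

# Hypothetical GGUF checkpoint path -- replace with a real file if one is published.
ckpt_path = "transformer-Q4_K_M.gguf"

# Load only the transformer from the GGUF file; weights stay quantized on disk
# and are dequantized to bf16 at compute time.
transformer = LTXVideoTransformer3DModel.from_single_file(
    ckpt_path,
    quantization_config=GGUFQuantizationConfig(compute_dtype=torch.bfloat16),
    torch_dtype=torch.bfloat16,
)

# Reuse the remaining components (VAE, text encoder, scheduler) from the base repo.
pipe = DiffusionPipeline.from_pretrained(
    "TenStrip/LTX2.3-10Eros",
    transformer=transformer,
    torch_dtype=torch.bfloat16,
)
pipe.to("cuda")
```

If the shape mismatch you hit in ComfyUI also appears here, that would point to the checkpoint layout rather than the loader.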