Wan2.2 T2V / I2V + LoRA β€” 4GB VRAM GGUF

By: The_frizzy1
Hardware target: RTX 3050 Laptop (4 GB VRAM)
CivitAI: https://civitai.com/models/1817858/wan22-workflowlora-i2vt2v-4gb-vram-gguf
YouTube: https://www.youtube.com/@the_frizzy1

🎥 Video Explainer: https://www.youtube.com/watch?v=C7ZttV320qk


What This Is

Wan2.2 T2V, I2V, and hybrid workflows on 4 GB VRAM using GGUF-quantised models.

  • Use the 14B models; the Lightx2v LoRA is strongly recommended.
  • Without the LoRA: use CFG 6 and 30–60 steps.
  • Second-sampler denoise: 0.3–0.5 for the 14B models.
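The recommended settings above can be written down as a plain config with a range check. The dictionary keys and the `validate` helper are illustrative for this sketch, not the exact input names of the ComfyUI sampler nodes:

```python
# Illustrative two-stage sampler settings for the Wan2.2 14B workflows.
# Field names are assumptions for this sketch, not exact ComfyUI node inputs.

SAMPLER_NO_LORA = {
    "cfg": 6.0,   # CFG scale when the Lightx2v LoRA is NOT loaded
    "steps": 45,  # pick anywhere in the recommended 30-60 range
}

SECOND_SAMPLER = {
    "denoise": 0.4,  # 0.3-0.5 recommended for the 14B models
}

def validate(cfg: dict) -> bool:
    """Check that the settings stay inside the ranges recommended above."""
    ok = True
    if "steps" in cfg:
        ok &= 30 <= cfg["steps"] <= 60
    if "denoise" in cfg:
        ok &= 0.3 <= cfg["denoise"] <= 0.5
    return bool(ok)
```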


Included Workflows

  • Wan2.2 T2V 14B — Text to Video
  • Wan2.2 I2V 14B — Image to Video
  • Wan2.2 TI2V 5B — Hybrid Text + Image to Video

Wan2.2 Highlights

  • MoE architecture — high-noise + low-noise experts for better quality
  • Cinematic control — lighting, color, composition
  • Smooth motion modeling — complex camera and object movement
  • TI2V-5B — runs on 8 GB VRAM with offloading
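The MoE split in the first bullet means each denoising step is routed to one of two expert models depending on how noisy the latent still is. A toy sketch of that routing (the boundary value here is an assumption for illustration, not Wan2.2's actual switch point):

```python
# Toy illustration of Wan2.2's two-expert MoE denoising: early (very noisy)
# steps run the high-noise expert, later steps run the low-noise expert.
# BOUNDARY is an assumed value for this sketch only.

BOUNDARY = 0.875  # fraction of the schedule treated as "high noise" (assumed)

def select_expert(t: float) -> str:
    """t is the normalized timestep: 1.0 = pure noise, 0.0 = clean latent."""
    return "high_noise_expert" if t >= BOUNDARY else "low_noise_expert"
```

In the 14B workflows this is why there are two samplers: the first runs the high-noise model, the second refines with the low-noise model at the lower denoise value listed above.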

Required Custom Nodes

  • ComfyUI-GGUF — needed to load the GGUF-quantised Wan2.2 models


Model Downloads

CLIP / Text Encoder: https://huggingface.co/Comfy-Org/Wan_2.1_ComfyUI_repackaged/tree/main/split_files
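As a rough guide to where the downloaded files go, this sketch maps each model type to the usual ComfyUI folder (GGUF diffusion models conventionally load from models/unet via the GGUF loader node; adjust the mapping if your install differs):

```python
from pathlib import Path

# Typical ComfyUI folder layout for this workflow. The mapping reflects
# common conventions, not an official requirement of this model card.
COMFYUI_DIRS = {
    "gguf_model": "models/unet",
    "text_encoder": "models/text_encoders",
    "vae": "models/vae",
    "lora": "models/loras",
}

def target_path(comfy_root: str, kind: str, filename: str) -> Path:
    """Build the destination path for a downloaded file inside ComfyUI."""
    return Path(comfy_root) / COMFYUI_DIRS[kind] / filename
```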


Changelog

  • v1.0 — Initial release