# Wan2.2 T2V / I2V + LoRA – 4GB VRAM GGUF

By: The_frizzy1
Hardware target: RTX 3050 Laptop (4 GB VRAM)
CivitAI: https://civitai.com/models/1817858/wan22-workflowlora-i2vt2v-4gb-vram-gguf
YouTube: https://www.youtube.com/@the_frizzy1

🔥 Video Explainer: https://www.youtube.com/watch?v=C7ZttV320qk
## What This Is
Wan2.2 T2V, I2V, and hybrid workflows on 4 GB VRAM using GGUF-quantised models.
Use the 14B models with the Lightx2v LoRA (strongly recommended). Without the LoRA, use CFG 6 and 30–60 steps. For the 14B models, set the second sampler's denoise to 0.3–0.5.
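The second-sampler denoise setting maps directly onto how many schedule steps the refinement pass actually runs; a minimal sketch of that arithmetic (the helper is illustrative, not ComfyUI's internal code):

```python
def second_pass_steps(total_steps: int, denoise: float) -> tuple[int, int]:
    """Return (start_step, steps_run) for a refinement pass.

    With denoise < 1.0, a KSampler-style second pass skips the early,
    high-noise portion of the schedule and only runs the final
    `denoise` fraction of the steps.
    """
    start = round(total_steps * (1.0 - denoise))
    return start, total_steps - start

print(second_pass_steps(30, 0.4))  # (18, 12)
print(second_pass_steps(60, 0.3))  # (42, 18)
```

So at 30 steps and denoise 0.4, the second sampler starts at step 18 and runs only the last 12 steps, which is why the 0.3–0.5 range keeps the refinement pass cheap on 4 GB of VRAM.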
## Included Workflows

- Wan2.2 T2V 14B – Text to Video
- Wan2.2 I2V 14B – Image to Video
- Wan2.2 TI2V 5B – Hybrid Text + Image to Video
## Wan2.2 Highlights

- MoE architecture – high-noise + low-noise experts for better quality
- Cinematic control – lighting, color, composition
- Smooth motion modeling – complex camera and object movement
- TI2V-5B – runs on 8 GB VRAM with offloading
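The MoE split means each denoising step is routed to one of the two experts depending on how noisy the latent still is: the high-noise expert handles early steps, the low-noise expert handles late refinement. A hedged sketch of that routing (the boundary value here is illustrative, not Wan2.2's actual switch point):

```python
def pick_expert(sigma: float, boundary: float = 0.9) -> str:
    """Route a denoising step to one of Wan2.2's two experts.

    Early steps (large sigma) go to the high-noise expert; late steps
    (small sigma) go to the low-noise expert. The 0.9 default boundary
    is an illustrative placeholder, not the model's real threshold.
    """
    return "high_noise_expert" if sigma >= boundary else "low_noise_expert"

print(pick_expert(1.0))  # high_noise_expert
print(pick_expert(0.2))  # low_noise_expert
```

This is also why the 14B workflows use two samplers: each sampler pass loads only one expert at a time, which is what makes the A14B models feasible on low VRAM.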
## Required Custom Nodes
| Node | Link |
|---|---|
| GGUF | https://github.com/calcuis/gguf |
| WanVideoWrapper | https://github.com/kijai/ComfyUI-WanVideoWrapper |
| Tiled KSampler | https://github.com/FlyingFireCo/tiled_ksampler |
| KJNodes | https://github.com/kijai/ComfyUI-KJNodes |
| Video Helper Suite | https://github.com/Kosinkadink/ComfyUI-VideoHelperSuite |
| rgthree-comfy (LoRA stacking only) | https://github.com/rgthree/rgthree-comfy |
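Each node pack above installs the usual way, by cloning it into `ComfyUI/custom_nodes`. A dry-run sketch that just prints the clone commands (drop the `echo`, and `cd` into your `custom_nodes` folder first, to actually install):

```shell
# Repos for every required custom-node pack (from the table above).
repos="
https://github.com/calcuis/gguf
https://github.com/kijai/ComfyUI-WanVideoWrapper
https://github.com/FlyingFireCo/tiled_ksampler
https://github.com/kijai/ComfyUI-KJNodes
https://github.com/Kosinkadink/ComfyUI-VideoHelperSuite
https://github.com/rgthree/rgthree-comfy
"
for repo in $repos; do
  # Dry run: remove `echo` to clone for real.
  echo git clone "$repo"
done
```

Restart ComfyUI after cloning so the new nodes are picked up.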
## Model Downloads
| Model | Link |
|---|---|
| TI2V 5B | https://huggingface.co/QuantStack/Wan2.2-TI2V-5B-GGUF |
| T2V 14B | https://huggingface.co/QuantStack/Wan2.2-T2V-A14B-GGUF/ |
| I2V 14B | https://huggingface.co/bullerwins/Wan2.2-I2V-A14B-GGUF |
CLIP / Text Encoder: https://huggingface.co/Comfy-Org/Wan_2.1_ComfyUI_repackaged/tree/main/split_files
## Changelog
| Version | Notes |
|---|---|
| v1.0 | Initial release |
## Base Model

Wan-AI/Wan2.2-I2V-A14B (base model for The-frizzy1/Wan22-T2V-I2V-LORA-4GB)