🎬 Wan2.2 14B I2V Distill LightX2V 4-Step (GGUF)

This repository contains GGUF-quantized versions of the LightX2V Distilled Wan2.2 14B Image-to-Video Model.

By quantizing the ultra-fast 4-step distilled models into the GGUF format, this repository drastically reduces VRAM/RAM requirements while maintaining near-real-time, high-performance video generation.

🌟 What's Special About This Version?

  • ⚡ Ultra-Fast 4-Step Generation: This distillation-accelerated version of Wan2.2 needs only 4 sampling steps instead of the traditional 50+.
  • 💾 VRAM & Memory Efficient: With GGUF quantizations from Q3 up to Q8, the full 14B-parameter models run on consumer-grade hardware.

📦 Available GGUF Models

We provide quantizations ranging from heavily compressed (Q3) to near-lossless (Q8) to fit your specific memory and quality requirements. Both High Noise and Low Noise versions are available.

🎨 High Noise Models (/high_noise_260412/)

| Filename | Size | Quality Level |
|---|---|---|
| `wan2.2_i2v_A14b_high_noise_lightx2v_4step_720p_260412-Q3_K_M.gguf` | 7.19 GB | High Compression, Lowest VRAM |
| `wan2.2_i2v_A14b_high_noise_lightx2v_4step_720p_260412-Q4_K_M.gguf` | 9.66 GB | Good Balance |
| `wan2.2_i2v_A14b_high_noise_lightx2v_4step_720p_260412-Q5_K_M.gguf` | 10.8 GB | High Quality, Moderate VRAM |
| `wan2.2_i2v_A14b_high_noise_lightx2v_4step_720p_260412-Q6_K.gguf` | 12.0 GB | Very High Quality |
| `wan2.2_i2v_A14b_high_noise_lightx2v_4step_720p_260412-Q8_0.gguf` | 15.4 GB | Near-Lossless (Recommended if VRAM permits) |

🎯 Low Noise Models (/low_noise_260412/)

| Filename | Size | Quality Level |
|---|---|---|
| `wan2.2_i2v_A14b_low_noise_lightx2v_4step_720p_260412-Q3_K_M.gguf` | 7.19 GB | High Compression, Lowest VRAM |
| `wan2.2_i2v_A14b_low_noise_lightx2v_4step_720p_260412-Q4_K_M.gguf` | 9.66 GB | Good Balance |
| `wan2.2_i2v_A14b_low_noise_lightx2v_4step_720p_260412-Q5_K_M.gguf` | 10.8 GB | High Quality, Moderate VRAM |
| `wan2.2_i2v_A14b_low_noise_lightx2v_4step_720p_260412-Q6_K.gguf` | 12.0 GB | Very High Quality |
| `wan2.2_i2v_A14b_low_noise_lightx2v_4step_720p_260412-Q8_0.gguf` | 15.4 GB | Near-Lossless (Recommended if VRAM permits) |
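The filenames above follow one fixed pattern, so fetching a specific variant can be scripted. The sketch below is a hypothetical snippet: the repo-relative folder layout (`high_noise_260412/`, `low_noise_260412/`) is inferred from the section headings above, not verified:

```python
REPO_ID = "Abiray/Wan2.2-LightX2V-260412-4STEP-GGUF"

def gguf_path(noise: str, quant: str) -> str:
    """Build the repo-relative path for a model file.
    noise: 'high' or 'low'; quant: e.g. 'Q4_K_M' or 'Q8_0'.
    Folder layout is assumed from the headings in this card."""
    base = f"wan2.2_i2v_A14b_{noise}_noise_lightx2v_4step_720p_260412-{quant}.gguf"
    return f"{noise}_noise_260412/{base}"

# With huggingface_hub installed, a file could then be fetched like:
#   from huggingface_hub import hf_hub_download
#   local = hf_hub_download(repo_id=REPO_ID, filename=gguf_path("high", "Q4_K_M"))
```

For a 4-step Wan2.2 I2V workflow you need both the High Noise and the Low Noise model at the same quant level, so the helper would typically be called once per variant.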
