VBVR: A Very Big Video Reasoning Suite – GGUF

GGUF-quantized version of LiconStudio/VBVR-wan2.2-comfy-bf16.
Optimized for CPU inference and GPUs with limited VRAM.

Quantization Method

GGUF quantization was done with my DIY framework, HQF_by_LL (Hot Quantization Framework) :) Maybe I'll release it someday...
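HQF isn't released yet, so for background, here's a minimal NumPy sketch of what the simplest GGUF scheme (Q8_0) does: weights are split into blocks of 32, and each block stores one fp16 scale plus 32 int8 values. This is just an illustration of the storage format, not HQF itself.

```python
import numpy as np

# Q8_0-style block quantization: per block of 32 weights,
# scale = max|w| / 127, values stored as int8, w ≈ scale * q.
def quantize_q8_0(weights: np.ndarray, block: int = 32):
    w = weights.reshape(-1, block).astype(np.float32)
    scale = np.abs(w).max(axis=1, keepdims=True) / 127.0
    scale[scale == 0] = 1.0  # avoid divide-by-zero on all-zero blocks
    q = np.clip(np.round(w / scale), -127, 127).astype(np.int8)
    return q, scale.astype(np.float16)

def dequantize_q8_0(q: np.ndarray, scale: np.ndarray) -> np.ndarray:
    return q.astype(np.float32) * scale.astype(np.float32)

w = np.random.randn(1024).astype(np.float32)
q, s = quantize_q8_0(w)
err = np.abs(dequantize_q8_0(q, s).ravel() - w).max()
print(f"max abs reconstruction error: {err:.5f}")
```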

📊 Specification

| Quantization | File Size |
|--------------|-----------|
| Q4_K_S       | ~8.5 GB   |
| Q5_K_M       | ~10.5 GB  |
| Q8_0         | ~15.4 GB  |

Base model: 14B parameters (wan architecture).

Other quant types coming soon...
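For reference, these sizes roughly match a params × bits-per-weight estimate. A minimal sketch, assuming approximate llama.cpp bits-per-weight figures; the real files run about 0.5 GB larger because some tensors stay at higher precision and GGUF metadata adds a little overhead:

```python
# Rough file-size estimate: total params * bits-per-weight / 8 bytes.
# The bpw values below are approximations (an assumption, not exact specs).
PARAMS = 14e9
BPW = {"Q4_K_S": 4.5, "Q5_K_M": 5.69, "Q8_0": 8.5}
for name, bits in BPW.items():
    print(f"{name}: ~{PARAMS * bits / 8 / 1e9:.1f} GB")
# Q4_K_S: ~7.9 GB, Q5_K_M: ~10.0 GB, Q8_0: ~14.9 GB
```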

πŸ› οΈ ComfyUI Usage

  1. Install the ComfyUI-GGUF or ComfyUI-KJNodes (my favorite XD) custom node.
  2. Place the .gguf file into ComfyUI/models/diffusion_models/.
  3. Load the model with the UnetLoaderGGUF or Diffusion Model Loader KJ node (a quick sanity check for the download is sketched after this list).
  4. Profit!
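If the loader node can't open the file, a quick way to check that the download isn't truncated is to read the GGUF header with the gguf Python package (`pip install gguf`). A minimal sketch; the filename below is a placeholder:

```python
from gguf import GGUFReader

# Hypothetical path/filename - substitute your actual download.
reader = GGUFReader("ComfyUI/models/diffusion_models/VBVR-wan2.2-Q4_K_S.gguf")

# A healthy file parses fully and lists its tensors with quant types.
print(f"{len(reader.tensors)} tensors found")
for t in reader.tensors[:5]:
    print(t.name, t.tensor_type.name, t.shape)
```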

βš–οΈ License

Wan-AI Software License – see LICENSE.txt
