VBVR: A Very Big Video Reasoning Suite – GGUF
GGUF quantized version of LiconStudio/VBVR-wan2.2-comfy-bf16
Optimized for CPU and GPUs with limited VRAM.
Quantization Method
GGUF quantization made with my DIY framework, HQF_by_LL (Hot Quantization Framework) :) Maybe I'll release it someday...
Specification
| Quantization | File Size |
|---|---|
| Q4_K_S | ~8.5 GB |
| Q5_K_M | ~10.5 GB |
| Q8_0 | ~15.4 GB |
Other quants coming soon...
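For a rough sanity check, the file sizes above can be converted into effective bits per weight for a 14B-parameter model. This is a back-of-envelope sketch (it assumes 1 GB = 10^9 bytes and ignores GGUF metadata and non-quantized tensors, so real bits-per-weight figures will differ slightly):

```python
def bits_per_weight(file_size_gb: float, n_params_billion: float = 14.0) -> float:
    """Approximate effective bits per weight from a quantized file size."""
    # file_size_gb * 8e9 bits spread over n_params_billion * 1e9 weights
    return file_size_gb * 8.0 / n_params_billion

for name, size_gb in [("Q4_K_S", 8.5), ("Q5_K_M", 10.5), ("Q8_0", 15.4)]:
    print(f"{name}: ~{bits_per_weight(size_gb):.1f} bits/weight")
# Q4_K_S: ~4.9 bits/weight
# Q5_K_M: ~6.0 bits/weight
# Q8_0:   ~8.8 bits/weight
```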
ComfyUI Usage
- Install the ComfyUI-GGUF or ComfyUI-KJNodes (my favorite XD) custom node.
- Place the `.gguf` file into `ComfyUI/models/diffusion_models/`.
- Use the `UnetLoaderGGUF` or `Diffusion Model Loader KJ` node to load the model.
- Profit!
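The download-and-place steps above can be sketched like this. The quant filename is left as a placeholder since exact names vary; check the repo's file list and substitute the one you want (`COMFYUI_DIR` is a hypothetical environment variable for your install location):

```shell
#!/bin/sh
REPO=methelina/VBVR-wan2.2-I2V-14B-high-SNR-Calibrated-Hybrid-GGUF
DEST="${COMFYUI_DIR:-$HOME/ComfyUI}/models/diffusion_models"

# Make sure the target folder exists
mkdir -p "$DEST"

# Fetch the quant you want (replace <quant-file> with the actual filename
# from the repo's "Files" tab, e.g. the Q4_K_S, Q5_K_M, or Q8_0 variant):
# huggingface-cli download "$REPO" "<quant-file>.gguf" --local-dir "$DEST"
```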
License
Wan-AI Software License β see LICENSE.txt
Model tree for methelina/VBVR-wan2.2-I2V-14B-high-SNR-Calibrated-Hybrid-GGUF
- Base model: Wan-AI/Wan2.2-I2V-A14B-Diffusers
- Finetuned: Video-Reason/VBVR-Wan2.2
- Finetuned: LiconStudio/VBVR-wan2.2-comfy-bf16