Pony Diffusion V6 XL (GGUF)

This is a quantized version of Pony Diffusion V6 XL, with the VAE and CLIP built in.

The default FP16 model can take 6-7 GB of VRAM just to load and run image diffusion. With a separate GGUF, the UNet alone needs only about 2.8 GB of VRAM during KSampler image processing.
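As a rough sanity check on those numbers, the weight footprint alone can be estimated from the parameter count and bit width. This is a back-of-the-envelope sketch, not a measurement; real VRAM use also includes activations, attention buffers, and framework overhead:

```python
# Rough VRAM estimate for storing model weights alone (illustrative only;
# actual usage adds activations, attention buffers, and overhead).
def weight_gib(n_params: float, bits_per_param: float) -> float:
    """Bytes needed for the raw weights, expressed in GiB."""
    return n_params * bits_per_param / 8 / 2**30

N = 3e9  # ~3B UNet parameters, per the model card

fp16 = weight_gib(N, 16)  # ~5.6 GiB -- in line with the 6-7 GB FP16 figure
q8 = weight_gib(N, 8)     # ~2.8 GiB -- matches the quoted GGUF footprint
print(f"FP16: {fp16:.1f} GiB, Q8: {q8:.1f} GiB")
```

Halving the bits per weight halves the storage, which is why the 8-bit GGUF lands near 2.8 GB where FP16 needs roughly double.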

ComfyUI is recommended.

Base Model: https://civitai.com/models/257749/pony-diffusion-v6-xl
All credits to the original creator: https://civitai.com/user/PurpleSmartAI

Support My Work & Request Custom Quantization: it takes time, testing, and electricity to compress these models properly.

Got a specific heavy checkpoint or LoRA you desperately need quantized for your PC? Drop a potato (tip) and your request here: https://ko-fi.com/morikomorizz

Model details

Format: GGUF
Quantization: 8-bit
Model size: 3B params
Architecture: sdxl
