GGUF quantization

#2
by Pragg100510 - opened

Can I get this in gguf quantization?

What are you going to do with GGUF?

Qwen3-VL is not supported by llama.cpp yet

Why not? Qwen2.5-VL is supported.

Pragg100510 changed discussion status to closed
