Mixed Precision GGUF layer quantization of Qwen2.5-VL-3B-Instruct by Qwen

Original model: https://huggingface.co/Qwen/Qwen2.5-VL-3B-Instruct

The hybrid quant employs different quantization levels on a per-layer basis to achieve both high performance and small file size at the same time. This particular quant achieves a ~2.8e9 B GGUF with ~the same perplexity as a ~3.3e9 B Q8_0 GGUF. The quants employed are all K-quants, to avoid the slow processing of IQ quants on CPUs or older GPUs. For this file the Q8_0_H layer quants are as follows:

   LAYER_TYPES='[
   [0 ,"Q8_0"  ],[1 ,"Q5_K_M"],[2 ,"Q5_K_M"],[3 ,"Q5_K_M"],[4 ,"Q5_K_M"],[5 ,"Q5_K_M"],
   [6 ,"Q5_K_M"],[7 ,"Q5_K_M"],[8, "Q5_K_M"],[9, "Q5_K_M"],[10,"Q5_K_M"],[11,"Q5_K_M"],
   [12,"Q6_K"  ],[13,"Q6_K"  ],[14,"Q6_K"  ],[15,"Q6_K"  ],[16,"Q6_K"  ],[17,"Q6_K"  ],
   [18,"Q6_K"  ],[19,"Q6_K"  ],[20,"Q6_K"  ],[21,"Q6_K"  ],[22,"Q6_K"  ],[23,"Q6_K"  ],
   [24,"Q8_0"  ],[25,"Q8_0"  ],[26,"Q8_0"  ],[27,"Q8_0"  ],[28,"Q8_0"  ],[29,"Q8_0"  ],
   [30,"Q8_0"  ],[31,"Q8_0"  ],[32,"Q8_0"  ],[33,"Q8_0"  ],[34,"Q8_0"  ],[35,"Q8_0"  ]
   ]'
   FLAGS="--token-embedding-type Q8_0 --output-tensor-type Q6_K"
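Note that the LAYER_TYPES variable holds plain JSON (the per-layer flag itself appears to come from the patched llama-quantize described in the discussion linked at the bottom of this card, not stock llama.cpp). As a sanity check, the recipe above can be tallied with standard shell tools; this sketch simply copies the variable contents from the block above and counts layers per quant type:

```shell
# Tally how many layers each quant type covers in the Q8_0_H recipe above.
LAYER_TYPES='[
[0 ,"Q8_0"  ],[1 ,"Q5_K_M"],[2 ,"Q5_K_M"],[3 ,"Q5_K_M"],[4 ,"Q5_K_M"],[5 ,"Q5_K_M"],
[6 ,"Q5_K_M"],[7 ,"Q5_K_M"],[8, "Q5_K_M"],[9, "Q5_K_M"],[10,"Q5_K_M"],[11,"Q5_K_M"],
[12,"Q6_K"  ],[13,"Q6_K"  ],[14,"Q6_K"  ],[15,"Q6_K"  ],[16,"Q6_K"  ],[17,"Q6_K"  ],
[18,"Q6_K"  ],[19,"Q6_K"  ],[20,"Q6_K"  ],[21,"Q6_K"  ],[22,"Q6_K"  ],[23,"Q6_K"  ],
[24,"Q8_0"  ],[25,"Q8_0"  ],[26,"Q8_0"  ],[27,"Q8_0"  ],[28,"Q8_0"  ],[29,"Q8_0"  ],
[30,"Q8_0"  ],[31,"Q8_0"  ],[32,"Q8_0"  ],[33,"Q8_0"  ],[34,"Q8_0"  ],[35,"Q8_0"  ]
]'
for q in Q5_K_M Q6_K Q8_0; do
  # grep -o emits one line per match, so wc -l counts matches, not lines
  n=$(printf '%s' "$LAYER_TYPES" | grep -o "\"$q\"" | wc -l | tr -d ' ')
  echo "$q: $n layers"
done
```

The early layers get the lightest quant (Q5_K_M), the middle layers Q6_K, and the final third full Q8_0, reflecting where the model is most sensitive to quantization error.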

A Q6_K_H quant is also available. The custom per-layer types used in its recipe are defined as:

Q5_K_L : attn_v = q8_0, attn_o = q6_k, ffn_d = q6_k
Q6_K_S : Q6_K
Q6_K_M : attn_v = q8_0, ffn_d = q8_0
Q6_K_L : attn_v = q8_0, attn_o = q8_0, ffn_d = q8_0

   LAYER_TYPES='[
   [0 ,"Q6_K_S"],[1 ,"Q5_K_L"],[2 ,"Q5_K_M"],[3 ,"Q5_K_M"],[4 ,"Q5_K_M"],[5 ,"Q5_K_M"],
   [6 ,"Q5_K_M"],[7 ,"Q5_K_M"],[8, "Q5_K_M"],[9, "Q5_K_M"],[10,"Q5_K_M"],[11,"Q5_K_M"],
   [12,"Q5_K_M"],[13,"Q5_K_M"],[14,"Q5_K_M"],[15,"Q5_K_M"],[16,"Q5_K_M"],[17,"Q5_K_M"],
   [18,"Q6_K_S"],[19,"Q5_K_L"],[20,"Q6_K_S"],[21,"Q5_K_L"],[22,"Q6_K_S"],[23,"Q5_K_L"],
   [24,"Q6_K_S"],[25,"Q6_K_M"],[26,"Q6_K_S"],[27,"Q6_K_M"],[28,"Q6_K_S"],[29,"Q6_K_M"],
   [30,"Q6_K_M"],[31,"Q6_K_M"],[32,"Q6_K_M"],[33,"Q6_K_L"],[34,"Q6_K_L"],[35,"Q6_K_L"]
   ]'
   FLAGS="--token-embedding-type Q6_K --output-tensor-type Q6_K --layer-types-high"

A Q4_K_H quant is also available. The custom per-layer types used in its recipe are defined as:

Q4_K_L : Q4_K_M + attn_o = q6_k
Q5_K_L : attn_v = q8_0, attn_o = q6_k, ffn_d = q6_k
Q6_K_S : Q6_K
Q6_K_M : attn_v = q8_0, ffn_d = q8_0
Q6_K_L : attn_v = q8_0, attn_o = q8_0, ffn_d = q8_0

   LAYER_TYPES='[
   [0 ,"Q6_K_L"],[1 ,"Q6_K_M"],[2 ,"Q6_K_S"],[3 ,"Q5_K_L"],[4 ,"Q5_K_M"],[5 ,"Q5_K_S"],
   [6 ,"Q4_K_S"],[7 ,"Q4_K_S"],[8, "Q4_K_S"],[9, "Q4_K_S"],[10,"Q4_K_S"],[11,"Q4_K_S"],
   [12,"Q4_K_M"],[13,"Q4_K_S"],[14,"Q4_K_M"],[15,"Q4_K_S"],[16,"Q4_K_M"],[17,"Q4_K_S"],
   [18,"Q4_K_M"],[19,"Q4_K_S"],[20,"Q4_K_M"],[21,"Q4_K_S"],[22,"Q4_K_M"],[23,"Q4_K_S"],
   [24,"Q4_K_M"],[25,"Q4_K_M"],[26,"Q4_K_M"],[27,"Q4_K_M"],[28,"Q4_K_M"],[29,"Q4_K_M"],
   [30,"Q4_K_M"],[31,"Q4_K_L"],[32,"Q5_K_S"],[33,"Q5_K_M"],[34,"Q5_K_L"],[35,"Q6_K_S"]
   ]'
   FLAGS="--token-embedding-type Q4_K --output-tensor-type Q6_K --layer-types-high"

Comparison:

   Quant    size    PPL    Comment
   IQ4_XS   1.8e9   11.5   -
   Q4_K_H   2e9     11.6   -
   Q6_K     2.5e9   11.2   -
   Q6_K_H   2.5e9   11.2   -
   Q8_0     3.3e9   11.6   Q8_0 with default embedding and output
   Q8_0_H   2.8e9   11.3   Hybrid quant with Q8_0 embedding, Q6_K output
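The headline trade-off from the table can be quantified with a quick check (using the table's approximate sizes):

```shell
# Size saved by Q8_0_H relative to plain Q8_0, at roughly equal perplexity
awk 'BEGIN {
  q8  = 3.3   # Q8_0 size, e9 B
  q8h = 2.8   # Q8_0_H size, e9 B
  printf "%.1fe9 B saved (%.0f%% smaller)\n", q8 - q8h, 100 * (q8 - q8h) / q8
}'
```

That is roughly a 15% size reduction for a slightly lower measured perplexity than the default Q8_0 quant.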

Usage:

Qwen2.5-VL-3B-Instruct is a vision-capable model. Used together with its multimedia projector (mmproj) layers, it can process image and text inputs and generate text outputs. The mmproj file is available in this repository. To test vision mode, follow the docs in the mtmd README in the tools directory of the llama.cpp source tree: https://github.com/ggml-org/llama.cpp/blob/master/tools/mtmd/README.md
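A minimal vision-mode invocation might look like the following sketch. The binary name and flags follow the mtmd README linked above; the image path and prompt are placeholders:

```shell
# Describe an image using the quantized model plus its mmproj file
llama-mtmd-cli \
  -m Qwen2.5-VL-3B-Instruct.Q8_0_H.gguf \
  --mmproj Qwen2.5-VL-3B-Instruct.mmproj.gguf \
  --image test.png \
  -p "Describe this image."
```

The mmproj file is not optional for image inputs: without it the model loads but can only process text.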

Inference bugs/issues:

Certain image dimensions combined with certain image pixel values can result in infinite generation of ? characters, due to NaN generation in the image-embeddings compute in mtmd: https://github.com/ggml-org/llama.cpp/issues/17534 . This problem renders the REALWORLDQA eval for the model invalid. Further, after an infinite ? response on one image, subsequent inferences also appear to be compromised, due to some unknown residual state from the NaN generation. The model will also sometimes fall into repetition loops when asked to solve an image/prompt with chain of thought (many models do this, so the problem is not specific to this model).

Note that after the b7210 update, NaNs are no longer generated for the failing image described above, but the root cause of the bug (F16 overflows in the embeddings compute) is not addressed; see further comments in https://github.com/ggml-org/llama.cpp/issues/17534 .

Benchmarks:

A full set of vision benchmarks with corrected inference for Qwen2.5-VL (llama.cpp version 6915 and above) is given here: https://huggingface.co/spaces/steampunque/benchlm

Download the files from the table below:

   Link                                 Type     Size       Notes
   Qwen2.5-VL-3B-Instruct.Q4_K_H.gguf   Q4_K_H   2e9 B      -
   Qwen2.5-VL-3B-Instruct.Q6_K_H.gguf   Q6_K_H   2.5e9 B    -
   Qwen2.5-VL-3B-Instruct.Q8_0_H.gguf   Q8_0_H   2.8e9 B    0.5e9 B smaller than Q8_0
   Qwen2.5-VL-3B-Instruct.mmproj.gguf   mmproj   1.34e9 B   multimedia projector

A discussion thread about the hybrid layer quant approach can be found here on the llama.cpp git repository:

https://github.com/ggml-org/llama.cpp/discussions/13040
