About

This repository contains imatrix quantizations of Lunzima/NQLSG-Qwen2.5-14B-MegaFusion-v3.

| File Name | Description |
| --- | --- |
| nqlsg_Q5_K_M.gguf | Model weights quantized with the Q5_K_M quantization type. |
| nqlsg_dynamic.gguf | Model weights with experimental dynamic IQ3_XXS quantization applied. |

For more information on dynamic quantization, see [this discussion](https://huggingface.co/Lunzima/NQLSG-Qwen2.5-14B-MegaFusion-v3-GGUF/discussions/1).
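As a usage sketch, the quants above can be fetched and run locally with llama.cpp. This assumes you have `huggingface_hub` (which provides `huggingface-cli`) installed and a local llama.cpp build; the prompt and token count are illustrative only:

```shell
# Download one quant from this repo into the current directory
huggingface-cli download Lunzima/NQLSG-Qwen2.5-14B-MegaFusion-v3-GGUF \
    nqlsg_Q5_K_M.gguf --local-dir .

# Run it with llama.cpp's CLI: -m selects the GGUF file,
# -p is the prompt, -n caps the number of tokens generated
./llama-cli -m nqlsg_Q5_K_M.gguf -p "Hello" -n 64
```

The dynamic IQ3_XXS file can be substituted for the Q5_K_M file in the same way; it trades quality for a smaller memory footprint.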

Format: GGUF
Model size: 15B params
Architecture: qwen2