Qwen3.5-4B-GGUF-Q4_K_M

GGUF conversion of Qwen/Qwen3.5-4B for llama.cpp.

Files

  • Qwen3.5-4B-Q4_K_M.gguf: 4-bit (Q4_K_M) quantization, recommended for laptop use
  • Qwen3.5-4B-f16.gguf: 16-bit full-precision source
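The bit-widths above give a quick way to sanity-check download sizes. A back-of-the-envelope sketch, assuming Q4_K_M averages roughly 4.8 effective bits per weight (an approximation for llama.cpp k-quants; real files also carry metadata, so actual sizes vary slightly):

```shell
# Rough file-size arithmetic for a 4B-parameter model:
# size_bytes ≈ params * bits_per_weight / 8
awk 'BEGIN {
  params = 4e9   # "4B params"
  printf "f16:    ~%.1f GB\n", params * 16  / 8 / 1e9
  printf "Q4_K_M: ~%.1f GB\n", params * 4.8 / 8 / 1e9
}'
```

The f16 file should come in near 8 GB, while the Q4_K_M file lands around 2.4 GB, which is why it is the variant recommended for laptops.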

Example (llama.cpp)

llama-cli -m Qwen3.5-4B-Q4_K_M.gguf -p "Hello"
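The one-liner above relies on defaults for everything else. A sketch of a fuller invocation with a few commonly used llama.cpp flags; the values here are illustrative starting points, not tuned recommendations:

```shell
# Illustrative llama-cli invocation; flag values are starting points, not tuned:
#   -n    max tokens to generate
#   -c    context window size
#   -ngl  number of layers to offload to the GPU (no-op on CPU-only builds)
llama-cli -m Qwen3.5-4B-Q4_K_M.gguf -p "Hello" -n 256 -c 4096 -ngl 99
```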
Model size: 4B params
Architecture: qwen35