Qwen3-1.7B Q4_K_M GGUF

This is a Q4_K_M quantized GGUF conversion of Qwen/Qwen3-1.7B optimized for on-device inference with llama.cpp.

Model Details

Property            Value
------------------  ----------------------------------
Original Model      Qwen3-1.7B
Parameters          1.7 billion
Quantization        Q4_K_M (4-bit, medium quality)
File Size           ~1.1 GB
Context Window      32,768 tokens
Architecture        Qwen3 (RoPE, SwiGLU, RMSNorm, GQA)
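
The ~1.1 GB figure is consistent with 4-bit K-quantization: Q4_K_M stores most weights at roughly 4.5-5 effective bits each, since the per-block scales add overhead beyond the nominal 4 bits. A rough back-of-the-envelope sketch (the ~5 bits/weight figure is an assumption, not from this card):

```python
def quantized_size_gb(n_params: float, bits_per_weight: float) -> float:
    """Estimate quantized file size: parameters x effective bits, in GB."""
    return n_params * bits_per_weight / 8 / 1e9

# ~1.7B parameters at ~5 effective bits/weight lands near the observed ~1.1 GB
print(round(quantized_size_gb(1.7e9, 5.0), 2))  # → 1.06
```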

Intended Use

This model is optimized for:

  • Mobile/Edge Deployment: The ~1.1 GB footprint fits on recent iOS and other edge devices
  • llama.cpp Integration: Compatible with llama.cpp and its bindings
  • On-Device AI: Private, offline inference without cloud dependencies

Capabilities

  • Reasoning and analysis
  • Code generation
  • Creative writing
  • Multilingual support (29+ languages)
  • Balanced speed and quality

Usage with llama.cpp

./llama-cli -m Qwen3-1.7B-Q4_K_M.gguf -p "Your prompt here" -n 512
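
Before loading, you can sanity-check that a download is a valid GGUF file by inspecting its fixed header: the magic bytes b"GGUF", followed by a little-endian uint32 version, uint64 tensor count, and uint64 metadata key/value count. A minimal Python sketch; the demo bytes below are synthetic, a real file would be opened with open(path, "rb"):

```python
import struct

GGUF_MAGIC = b"GGUF"

def read_gguf_header(data: bytes) -> dict:
    """Parse the fixed-size GGUF header: magic, version,
    tensor count, and metadata key/value count (little-endian)."""
    if data[:4] != GGUF_MAGIC:
        raise ValueError("not a GGUF file")
    version, tensor_count, kv_count = struct.unpack_from("<IQQ", data, 4)
    return {"version": version, "tensors": tensor_count, "metadata_kv": kv_count}

# Synthetic header for demonstration (version 3, 310 tensors, 25 metadata entries)
demo = GGUF_MAGIC + struct.pack("<IQQ", 3, 310, 25)
print(read_gguf_header(demo))
```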

License

This model inherits the Apache 2.0 license from the original Qwen3 model.

Attribution

  • Original Model: Qwen3-1.7B by Qwen Team, Alibaba Cloud
  • Quantization: jc-builds