# Qwen3-0.6B Q4_K_M GGUF
This is a Q4_K_M quantized GGUF conversion of Qwen/Qwen3-0.6B, optimized for on-device inference with llama.cpp.
## Model Details
| Property | Value |
|---|---|
| Original Model | Qwen3-0.6B |
| Parameters | 600 million |
| Quantization | Q4_K_M (4-bit, medium quality) |
| File Size | ~400 MB |
| Context Window | 32,768 tokens |
| Architecture | Qwen3 (RoPE, SwiGLU, RMSNorm, GQA) |
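To double-check the quantization and metadata above for a downloaded file, the `gguf` Python package maintained in the llama.cpp repository (`pip install gguf`) can read GGUF headers. A minimal sketch, assuming the file name used in this repo:

```python
# Minimal sketch, assuming `pip install gguf` and the GGUF file name
# used in this repo.
from gguf import GGUFReader

reader = GGUFReader("Qwen3-0.6B-Q4_K_M.gguf")

# Metadata keys stored in the file header (architecture, context length, ...).
for key in reader.fields:
    print(key)

# Each tensor's name, shape, and quantization type (in a Q4_K_M file,
# most weight tensors should report Q4_K, with some Q6_K).
for tensor in reader.tensors:
    print(tensor.name, tensor.shape, tensor.tensor_type)
```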
## Intended Use
This model is optimized for:
- **Mobile/Edge Deployment**: Small enough (~400 MB) to run on iOS and other mobile devices, including older hardware
- **llama.cpp Integration**: Compatible with llama.cpp and its bindings (a sketch using the Python binding follows this list)
- **On-Device AI**: Private, offline inference without cloud dependencies
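As one example of such a binding, here is a minimal sketch using llama-cpp-python (`pip install llama-cpp-python`). The file name is assumed to be the GGUF file from this repo; `n_ctx` and `n_threads` are illustrative values to tune for the target device:

```python
# Minimal sketch using llama-cpp-python, one common Python binding
# for llama.cpp. File name and settings are assumptions, not fixed values.
from llama_cpp import Llama

llm = Llama(
    model_path="Qwen3-0.6B-Q4_K_M.gguf",
    n_ctx=4096,      # context to allocate; the model supports up to 32,768 tokens
    n_threads=4,     # CPU threads; adjust to the target hardware
    verbose=False,
)

# Chat-style generation; llama.cpp applies the chat template embedded in the GGUF.
out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Explain GGUF in one sentence."}],
    max_tokens=128,
)
print(out["choices"][0]["message"]["content"])
```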
## Capabilities
- Ultra-fast inference
- Basic conversation and Q&A
- Simple text tasks
- Multilingual support
- Minimal resource usage
## Usage with llama.cpp
```bash
./llama-cli -m Qwen3-0.6B-Q4_K_M.gguf -p "Your prompt here" -n 512
```
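Beyond the one-shot CLI, llama.cpp also ships an OpenAI-compatible HTTP server (`llama-server`). A minimal sketch of querying it from Python, assuming the server was started with `./llama-server -m Qwen3-0.6B-Q4_K_M.gguf` and is listening on the default `http://localhost:8080`:

```python
# Minimal sketch: query a locally running llama-server through its
# OpenAI-compatible API. Assumes the server was started as noted above.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8080/v1", api_key="no-key-needed")

resp = client.chat.completions.create(
    model="Qwen3-0.6B-Q4_K_M",  # required by the client; llama-server serves whichever model it loaded
    messages=[{"role": "user", "content": "Say hello in three languages."}],
    max_tokens=64,
)
print(resp.choices[0].message.content)
```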
## License
This model inherits the Apache 2.0 license from the original Qwen3 model.
## Attribution

- **Original Model**: Qwen3-0.6B by Qwen Team, Alibaba Cloud
- **Quantization**: jc-builds