# DeepSeek-R1-Distill-1.5B Q4_K_M GGUF

This is a Q4_K_M quantized GGUF conversion of deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B, optimized for on-device inference with llama.cpp.
## Model Details
| Property | Value |
|---|---|
| Original Model | DeepSeek-R1-Distill-Qwen-1.5B |
| Parameters | 1.5 billion |
| Quantization | Q4_K_M (4-bit, medium quality) |
| File Size | ~1.0 GB |
| Context Window | 32,768 tokens |
| Architecture | Qwen2 (distilled from DeepSeek-R1) |
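As a rough sanity check on the ~1.0 GB figure above (an estimate, not a number reported by the quantizer): Q4_K_M averages on the order of 4.8 bits per weight, so 1.5 × 10⁹ weights × ~4.8 bits ≈ 0.9 GB, with tokenizer data and file metadata accounting for the remainder.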
## Intended Use

This model is optimized for:

- Mobile/Edge Deployment: Small enough to run efficiently on resource-constrained hardware, including recent iOS devices
- llama.cpp Integration: Compatible with llama.cpp and its language bindings (see the sketch after this list)
- On-Device AI: Private, offline inference with no cloud dependency
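As a minimal sketch of the bindings route, assuming the llama-cpp-python package is installed and the GGUF file sits in the working directory (the file name, context size, and sampling parameters below are illustrative, not fixed by this card):

```python
# Minimal sketch: run the quantized GGUF through the llama-cpp-python bindings.
# Assumes `pip install llama-cpp-python`; file name and parameters are illustrative.
from llama_cpp import Llama

llm = Llama(
    model_path="DeepSeek-R1-Distill-1.5B-Q4_K_M.gguf",
    n_ctx=4096,  # request a 4k context here; the model supports up to 32,768 tokens
)

# Single completion call; max_tokens and temperature are example values.
out = llm(
    "Explain step by step why 17 is prime.",
    max_tokens=512,
    temperature=0.6,
)
print(out["choices"][0]["text"])
```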
## Capabilities

- Reasoning: Distilled from DeepSeek-R1 for enhanced reasoning
- Step-by-step Thinking: Good at breaking problems into explicit intermediate steps (see the output-parsing sketch below)
- Code Generation: Capable coding assistance
- Mathematical Problem Solving: Strong analytical performance for its size
- Compact Size: Strong quality-to-size ratio at roughly 1 GB
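DeepSeek-R1 distill models conventionally emit their reasoning trace inside `<think>...</think>` tags before the final answer. The helper below is a sketch based on that convention (the tag format is an assumption drawn from the R1 family's usual output, not something specified by this card):

```python
import re


def split_reasoning(text: str) -> tuple[str, str]:
    """Split a completion into (reasoning, answer).

    Assumes the R1-style convention of a <think>...</think> block
    preceding the final answer; returns an empty trace otherwise.
    """
    match = re.search(r"<think>(.*?)</think>", text, flags=re.DOTALL)
    if match is None:
        return "", text.strip()
    reasoning = match.group(1).strip()
    answer = text[match.end():].strip()
    return reasoning, answer


trace, answer = split_reasoning("<think>2 + 2 = 4</think>The answer is 4.")
print(answer)  # -> The answer is 4.
```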
## Usage with llama.cpp

```bash
./llama-cli -m DeepSeek-R1-Distill-1.5B-Q4_K_M.gguf -p "Your prompt here" -n 512
```

Here `-m` selects the model file, `-p` supplies the prompt, and `-n` caps the number of tokens to generate.
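Beyond the one-shot CLI invocation, llama.cpp also ships llama-server, which exposes an OpenAI-compatible HTTP API. A minimal sketch of querying it from Python, assuming a server already running on localhost (the port, prompt, and token limit are illustrative):

```python
# Sketch: query a local llama-server instance over its OpenAI-compatible API.
# Assumes the server was started with something like:
#   ./llama-server -m DeepSeek-R1-Distill-1.5B-Q4_K_M.gguf --port 8080
import json
import urllib.request

payload = {
    "messages": [{"role": "user", "content": "List three prime numbers."}],
    "max_tokens": 256,  # example value
}
req = urllib.request.Request(
    "http://localhost:8080/v1/chat/completions",
    data=json.dumps(payload).encode(),
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(req) as resp:
    body = json.load(resp)
print(body["choices"][0]["message"]["content"])
```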
## License
This model is released under the MIT License.
## Attribution
- Original Model: DeepSeek-R1-Distill-Qwen-1.5B by DeepSeek AI
- Quantization: jc-builds