# Phi-4-mini Q4_K_M GGUF

This is a Q4_K_M-quantized GGUF conversion of microsoft/Phi-4-mini-instruct, optimized for on-device inference with llama.cpp.

## Model Details
| Property | Value |
|---|---|
| Original Model | Phi-4-mini-instruct |
| Parameters | 3.8 billion |
| Quantization | Q4_K_M (4-bit, medium quality) |
| File Size | ~2.4 GB |
| Context Window | 128,000 tokens |
| Architecture | Dense decoder-only Transformer |
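For reference, the following is a minimal sketch of how a Q4_K_M GGUF like this one is typically produced with llama.cpp's own tooling. The paths and filenames are illustrative, not the exact commands used for this release.

```bash
# Sketch of the usual llama.cpp conversion + quantization workflow.
# Paths are hypothetical; this is not necessarily how this file was built.

# 1. Convert the Hugging Face checkpoint to a full-precision GGUF.
python convert_hf_to_gguf.py ./Phi-4-mini-instruct --outtype f16 --outfile Phi-4-mini-f16.gguf

# 2. Quantize to Q4_K_M (4-bit, medium quality).
./llama-quantize Phi-4-mini-f16.gguf Phi-4-mini-Q4_K_M.gguf Q4_K_M
```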
## Intended Use

This model is optimized for:

- Mobile/Edge Deployment: Runs efficiently on iOS devices with 8 GB+ RAM (e.g., iPhone 15 Pro and later)
- llama.cpp Integration: Compatible with llama.cpp and its language bindings
- On-Device AI: Private, offline inference with no cloud dependency (see the local-server sketch after this list)
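As a concrete example of the private, offline workflow above, llama.cpp's bundled `llama-server` binary can serve the model over a local HTTP endpoint with no network access required. This is a minimal sketch; the port and context size are arbitrary choices.

```bash
# Minimal local-server sketch: serves an OpenAI-compatible chat API on localhost.
# Port and context size (-c) are arbitrary; adjust for your device's memory.
./llama-server -m Phi-4-mini-Q4_K_M.gguf -c 4096 --port 8080
```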
## Capabilities

- Reasoning: Competitive with larger models on logical-reasoning tasks
- Long Context: 128K-token context window for processing long documents (see the long-context sketch after this list)
- Code Generation: Strong coding capabilities
- Mathematical Problem Solving: Well suited to math and analytical tasks
- Multilingual Support: Works across multiple languages
- Function Calling: Supports structured tool use
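To make the long-context bullet concrete, `llama-cli` can read a prompt from a file and run with an enlarged context window, as sketched below. The filename is a placeholder, and KV-cache memory grows with the context size, so smaller devices may not sustain the full 128K window at Q4_K_M.

```bash
# Long-context sketch: process a document from disk with a 32K window.
# long_document.txt is a placeholder; KV-cache memory grows with -c.
./llama-cli -m Phi-4-mini-Q4_K_M.gguf -c 32768 -f long_document.txt -n 1024
```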
## Usage with llama.cpp

```bash
./llama-cli -m Phi-4-mini-Q4_K_M.gguf -p "Your prompt here" -n 512
```
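For multi-turn chat using the model's built-in chat template, recent llama.cpp builds provide a conversation mode, and `-ngl` offloads layers to a GPU (or Metal on Apple devices) where available. Flag support can vary across llama.cpp versions, so treat this as a sketch:

```bash
# Interactive chat sketch (assumes a recent llama.cpp build with -cnv support).
# -ngl 99 offloads all layers to GPU/Metal if available; -c sets the context size.
./llama-cli -m Phi-4-mini-Q4_K_M.gguf -cnv -c 8192 -ngl 99
```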
## License
This model is released under the MIT License, making it freely usable for commercial and personal projects.
## Attribution
- Original Model: Phi-4-mini-instruct by Microsoft
- Quantization: jc-builds