# SmolLM2-360M-Instruct Q4_K_M GGUF
This is a Q4_K_M-quantized GGUF conversion of HuggingFaceTB/SmolLM2-360M-Instruct, optimized for on-device inference with llama.cpp.
## Model Details
| Property | Value |
|---|---|
| Original Model | SmolLM2-360M-Instruct |
| Parameters | 360 million |
| Quantization | Q4_K_M (4-bit, medium quality) |
| File Size | ~258 MB |
| Context Window | 8,192 tokens |
| Architecture | LLaMA-style transformer |
| Training Tokens (base model) | ~4 trillion |
## Intended Use
This model is optimized for:
- Mobile/Edge Deployment: Small enough (~258 MB) to run on modern iOS and other mobile or edge devices
- llama.cpp Integration: Compatible with llama.cpp and its bindings (see the Python sketch after this list)
- On-Device AI: Private, offline inference without cloud dependencies
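The llama.cpp Python bindings (llama-cpp-python) can load this GGUF directly. The following is a minimal sketch, assuming the bindings are installed (`pip install llama-cpp-python`) and the GGUF file sits in the working directory; the thread count and sampling parameters are illustrative, not tuned values.

```python
from llama_cpp import Llama

# Load the quantized GGUF; file name matches the artifact in this repo.
llm = Llama(
    model_path="SmolLM2-360M-Instruct.Q4_K_M.gguf",
    n_ctx=8192,     # the model's full context window
    n_threads=4,    # tune to the device's CPU cores
    verbose=False,
)

# create_chat_completion applies the chat template stored in the GGUF metadata,
# so no manual prompt formatting is needed for the instruct model.
response = llm.create_chat_completion(
    messages=[
        {"role": "user", "content": "Rewrite this sentence more concisely: The meeting that we had planned has been moved to a later date."},
    ],
    max_tokens=256,
    temperature=0.7,
)
print(response["choices"][0]["message"]["content"])
```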
## Capabilities
- Fast Responses: Low-latency inference, even on CPU-only hardware, thanks to the small parameter count and 4-bit weights
- Text Rewriting: Good at summarization and rewriting
- Instruction Following: Solid instruction following for its size
- Low Resource Usage: Minimal RAM and storage requirements
- Trained on 4T Tokens: A clear quality step up from the 135M variant
## Usage with llama.cpp

```bash
./llama-cli -m SmolLM2-360M-Instruct.Q4_K_M.gguf -p "Your prompt here" -n 512
```
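llama.cpp also ships `llama-server`, which serves a GGUF over an OpenAI-compatible HTTP API. Below is a minimal sketch, assuming the server was started with `./llama-server -m SmolLM2-360M-Instruct.Q4_K_M.gguf` and is listening on its default port 8080; adjust the host and port to your setup.

```python
import json
import urllib.request

# Chat request against llama-server's OpenAI-compatible endpoint.
payload = {
    "messages": [
        {"role": "user", "content": "Summarize in one sentence: GGUF is a binary file format for quantized models used by llama.cpp."}
    ],
    "max_tokens": 128,
}
req = urllib.request.Request(
    "http://127.0.0.1:8080/v1/chat/completions",  # default llama-server address (assumed)
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(req) as resp:
    body = json.loads(resp.read())
print(body["choices"][0]["message"]["content"])
```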
## License
This model inherits the Apache 2.0 license from the original SmolLM2 model.
## Attribution
- Original Model: SmolLM2-360M-Instruct by Hugging Face
- Quantization: jc-builds