# LFM2-2.6B - Quantized GGUF Model
This is a Q4_K_M-quantized GGUF build of LiquidAI/LFM2-2.6B that can be run directly with Ollama.
## Model Details
- Base Model: LiquidAI/LFM2-2.6B
- Quantization: Q4_K_M
- Framework: Ollama
## Usage with Ollama
You can pull this model directly with Ollama:

```shell
ollama pull hf.co/Sadiah/ollama-q4_k_m-LFM2-2.6B:Q4_K_M
```

Then run it:

```shell
ollama run hf.co/Sadiah/ollama-q4_k_m-LFM2-2.6B:Q4_K_M "Write your prompt here"
```
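If you want different sampling defaults or a system prompt, you can wrap the model in your own Ollama Modelfile. The parameter values and the system prompt below are illustrative choices, not tuned settings for this model:

```
# Modelfile - defines a local variant of the quantized model
FROM hf.co/Sadiah/ollama-q4_k_m-LFM2-2.6B:Q4_K_M

# Illustrative sampling defaults; adjust to taste
PARAMETER temperature 0.7
PARAMETER num_ctx 4096

SYSTEM "You are a concise, helpful assistant."
```

Build and run the variant (`my-lfm2` is an arbitrary local name):

```shell
ollama create my-lfm2 -f Modelfile
ollama run my-lfm2 "Write your prompt here"
```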
## Features
- Efficient 4-bit quantization (Q4_K_M), trading a small quality loss for lower memory use and faster inference
- Compatible with Ollama's inference engine
## License
Please refer to the original model card (LiquidAI/LFM2-2.6B) for licensing information.