This is LFM2.5-1.2B-Thinking quantized to W4A16 with AutoRound. The model works with vLLM (tested with v0.14.0; the --allow-deprecated-quantization flag must be passed, for reasons that are unclear). Tested on an L4 GPU (Google Colab).
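
A minimal offline-inference sketch using vLLM's Python API. It assumes the --allow-deprecated-quantization CLI flag maps to an `allow_deprecated_quantization` engine argument (unverified assumption); the prompt and sampling settings are illustrative only.

```python
# Minimal vLLM offline-inference sketch for the quantized checkpoint.
from vllm import LLM, SamplingParams

llm = LLM(
    model="kaitchup/LFM2.5-1.2B-Thinking-autoround-W4A16",
    # Assumption: keyword mirror of the --allow-deprecated-quantization CLI flag.
    allow_deprecated_quantization=True,
)

params = SamplingParams(temperature=0.7, max_tokens=512)
outputs = llm.generate(["Explain why the sky is blue."], params)
print(outputs[0].outputs[0].text)
```

For serving instead of offline inference, the same flag would be passed on the command line when launching `vllm serve`.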
