LFM2.5-1.2B-Elm

Model Description

LFM2.5-1.2B-Elm is a fine-tuned version of LiquidAI/LFM2.5-1.2B-Thinking.

Its agentic capabilities were specifically enhanced for the Hermes-Agent harness. The model was fine-tuned on the following datasets:

  • kai-os/carnice-glm5-hermes-traces
  • lambda/hermes-agent-reasoning-traces

Performance & Benchmarks

This model is optimized for speed and efficiency. Inference speeds were measured with the llama.cpp CUDA runtime (v2.11.0) in LM Studio:

| Hardware        | Tokens/s    | Notes                             |
|-----------------|-------------|-----------------------------------|
| RTX 3090        | ~506 tok/s  | Blazing fast inference            |
| RTX 4060 Laptop | ~250 tok/s  | Excellent portability performance |
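To reproduce local inference outside LM Studio, the GGUF weights can be run directly with llama.cpp's `llama-cli`. A minimal sketch; the quantization suffix in the filename is an assumption, so adjust it to the file you actually downloaded:

```shell
# Hedged sketch: run the GGUF with llama.cpp's llama-cli.
# The filename is hypothetical -- substitute the GGUF file you downloaded.
# -ngl 99 offloads all layers to the GPU (CUDA build); -c sets the context size.
llama-cli -m LFM2.5-1.2B-Elm-Q4_K_M.gguf -ngl 99 -c 4096 \
  -p "Write a haiku about fast inference."
```

Dropping `-ngl` falls back to CPU-only inference, which will be well below the GPU figures in the table above.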
Model Details

  • Format: GGUF
  • Model size: 1B params
  • Architecture: lfm2

