LFM2 350M Parity Q4_K_M GGUF

Same-origin GGUF artifact for mesh-llm backend comparison.

  • Source checkpoint: LiquidAI/LFM2-350M
  • Conversion path: original checkpoint -> GGUF f16 -> GGUF Q4_K_M
  • Intended pair: meshllm/lfm2-350m-parity-4bit-mlx
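The conversion path above can be sketched with llama.cpp's stock tools. This is a hypothetical reproduction, not the exact commands used for this artifact: the intermediate filename and a local llama.cpp checkout (for `convert_hf_to_gguf.py` and `llama-quantize`) are assumptions. The commands are echoed as a dry run; drop the `echo`s to actually execute.

```shell
SRC=LiquidAI/LFM2-350M           # source checkpoint (from this card)
F16=lfm2-350m-f16.gguf           # intermediate f16 GGUF (name assumed)
Q4=lfm2-350m-q4_k_m.gguf         # final artifact listed under Files

# 1) original checkpoint -> GGUF f16
echo python convert_hf_to_gguf.py "$SRC" --outtype f16 --outfile "$F16"

# 2) GGUF f16 -> GGUF Q4_K_M
echo ./llama-quantize "$F16" "$Q4" Q4_K_M
```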

Validation

Validated locally with the mesh-llm exact smoke suite on 2026-04-06.

This pair is useful as a backend-drift detector rather than a clean parity canary. In the same-origin exact run, the GGUF side was materially worse than the MLX side on several simple prompts.
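The backend-drift idea can be sketched as a minimal exact-match harness. The two generate functions below are stubs standing in for the MLX and GGUF runtimes; the actual mesh-llm exact smoke suite is not reproduced here, so prompt set, function names, and outputs are all illustrative assumptions.

```python
# Sketch of an exact-match smoke comparison between two backends.
# Both generators are stubs; in practice they would call the MLX and
# GGUF runtimes with identical sampling settings (greedy, temp 0).

def generate_mlx(prompt: str) -> str:
    # stub: pretend the MLX backend answers every prompt correctly
    return {"2+2=": "4", "Capital of France?": "Paris"}[prompt]

def generate_gguf(prompt: str) -> str:
    # stub: pretend the GGUF backend drifts on one prompt
    return {"2+2=": "4", "Capital of France?": "Lyon"}[prompt]

def drift_report(prompts):
    """Return the prompts on which the two backends disagree exactly."""
    return [p for p in prompts if generate_mlx(p) != generate_gguf(p)]

mismatches = drift_report(["2+2=", "Capital of France?"])
# Any non-empty result flags backend drift on the same-origin pair.
```

Exact string comparison is deliberately strict: for a same-origin pair, any divergence at greedy decoding points at the backend stack rather than the weights.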

Files

  • lfm2-350m-q4_k_m.gguf
Model details

  • Repository: meshllm/lfm2-350m-parity-q4_k_m-gguf
  • Format: GGUF, Q4_K_M (4-bit)
  • Model size: 0.4B params
  • Architecture: lfm2
