# LFM2.5-1.2B-Thinking-Claude-High-Reasoning GGUF
GGUF quantizations of DavidAU/LFM2.5-1.2B-Instruct-Thinking-Claude-High-Reasoning.
## Quant Details
| Quant | Size | BPW |
|---|---|---|
| Q8_0 | 1.19 GB | 8.50 |
| Q4_K_M | 695 MB | 4.98 |
Original F16: 2.23 GB (16.00 BPW)
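The BPW (bits per weight) column can be sanity-checked from the file size alone. A minimal sketch, assuming LFM2.5-1.2B has roughly 1.17 billion parameters (an assumed figure; GGUF files also carry metadata and a few higher-precision tensors, so the check is only approximate):

```python
def bits_per_weight(file_size_bytes: float, n_params: float) -> float:
    """Approximate bits per weight from a GGUF file size.

    Slightly overestimates the quant's nominal BPW, since the file
    also stores metadata and some non-quantized tensors.
    """
    return file_size_bytes * 8 / n_params

# Q4_K_M: 695 MB file, ~1.17e9 parameters (assumed count)
print(round(bits_per_weight(695 * 2**20, 1.17e9), 2))  # close to the table's 4.98
```

Small discrepancies against the table (e.g. for Q8_0) come from rounding of the reported sizes and from tensors kept at higher precision.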
## About
LiquidAI LFM2.5-1.2B-Instruct fine-tuned with Unsloth on the TeichAI/claude-4.5-opus-high-reasoning-250x dataset to produce reasoning traces as plain text rather than inside thinking blocks.
## Reasoning Trace Behavior
Tested with three prompts; all produced visible reasoning traces before the final answer. The reasoning appears as plain text, not inside special thinking/thought blocks.
Tips from the original model card:
- Use prompts like "Think carefully..." or "Think deeply before you answer..." to activate reasoning
- Regeneration may be needed to activate reasoning on some prompts
- Suggested sampling settings: temp 0.7, rep_pen 1.05, top_p 0.95, min_p 0.05, top_k 40
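To illustrate what those sampling settings do, here is a simplified sketch of the filter chain applied to raw logits (repetition penalty omitted; llama.cpp applies these samplers in a configurable order, so this ordering is an assumption for illustration):

```python
import math

def filter_candidates(logits, temp=0.7, top_k=40, top_p=0.95, min_p=0.05):
    """Apply temperature, top_k, min_p, and top_p filters to raw logits
    and return (token_id, probability) pairs to sample from."""
    # Temperature scaling: lower temp sharpens the distribution.
    scaled = [l / temp for l in logits]
    # Numerically stable softmax.
    m = max(scaled)
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    cands = sorted(
        ((i, e / total) for i, e in enumerate(exps)),
        key=lambda c: -c[1],
    )
    # top_k: keep only the k most probable tokens.
    cands = cands[:top_k]
    # min_p: drop tokens whose probability is below
    # min_p * (probability of the best token).
    cutoff = min_p * cands[0][1]
    cands = [c for c in cands if c[1] >= cutoff]
    # top_p: keep the smallest prefix reaching cumulative mass top_p.
    kept, cum = [], 0.0
    for c in cands:
        kept.append(c)
        cum += c[1]
        if cum >= top_p:
            break
    # Renormalize the surviving candidates.
    z = sum(p for _, p in kept)
    return [(i, p / z) for i, p in kept]
```

With a strongly peaked distribution, min_p and top_p quickly collapse the candidate set to the top token, which is why these settings keep the model's plain-text reasoning coherent.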
## Model tree for OpenTransformer/LFM2.5-1.2B-Thinking-Claude-GGUF

- Base model: LiquidAI/LFM2.5-1.2B-Base
- Finetuned: LiquidAI/LFM2.5-1.2B-Instruct