How to use with Ollama

```shell
ollama run hf.co/openensemble/reason-gguf:Q8_0
```
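Assuming Ollama is installed and its local server is running on the default port (11434), the same model can also be queried over Ollama's HTTP generate API; the prompt string below is only an illustration.

```shell
# Query the locally running Ollama server (default port 11434).
# Assumes the model tag has already been pulled, e.g. via `ollama run`.
curl http://localhost:11434/api/generate -d '{
  "model": "hf.co/openensemble/reason-gguf:Q8_0",
  "prompt": "Why is the sky blue?",
  "stream": false
}'
```

With `"stream": false` the server returns a single JSON object whose `response` field holds the generated text.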
Model details

- Downloads last month: 71
- Format: GGUF
- Model size: 0.1B params
- Architecture: llama
- Quantization: 8-bit (Q8_0)