Your model is powerful
As the title says, I think this is your most powerful small model yet. It's currently the best model on Hugging Face below 0.5 billion parameters (as of February 2026). I've tested it on several custom benchmarks, and it outperforms all models with fewer than 0.5B parameters, and even some with 0.6B.
| Model | Score |
|---|---|
| LFM2-700M-Heretic | 8/10 |
| granite-4.0-350m | 8/10 |
| LFM2-350M | 7/10 |
| Falcon-H1-Tiny-R-0.6B | 6/10 |
I have built even more custom benchmarks, especially for language tasks, and it consistently beats every other model of its size. I checked the official GitHub repo for details on the training process, but no luck so far.
Are you planning a blog post about these models?
Thanks in advance
Thanks for sharing your experience with the model! We're glad it holds up well in your use cases. We have a few blog posts related to these models and their place in the broader Granite 4.0 family:
- Training process deep-dive for the Granite 4 family: https://research.ibm.com/blog/granite-ethical-ai
- Release blog around the Nano family (1b and 350m): https://huggingface.co/blog/ibm-granite/granite-4-nano
- Discussion blog around the Nano family launch: https://www.ibm.com/think/news/granite-4-0-nano-could-change-use-of-ai
Thank you