Suggestion
#1
by FlameF0X - opened
I suggest also benchmarking LiquidAI/LFM2.5-1.2B-Thinking, or adding pre-existing results for LFM2.5-1.2B-Thinking from somewhere else (if you don't want to run the model yourself, which I expect to be slow since it's a CoT model).
I included models with fewer than 1B active parameters, but when I make larger and more capable models, I will include models of those tiers (like Llama 3.2 1B, Qwen3 1.7B, etc.).
Okay, but the thinking model is a fine-tune of the instruct model.
Yeah.
FlameF0X changed discussion status to closed