Confusion about the number of parameters
Hi,
The repo name mentions a 24B model size, but according to your mergekit YAML configuration it should be a 12.2B merge, which is the size of Mistral-Nemo-12B-Instruct. As far as I know, frankenmerges and MoE models end up larger than their inputs, but a TIES merge shouldn't turn two 12B models into a 24B one.
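For context, a TIES merge in mergekit is configured roughly like the sketch below (the finetune names are hypothetical placeholders); since TIES resolves sign conflicts and combines weight deltas on top of a shared base model, the output keeps the base model's parameter count:

```yaml
# Sketch of a TIES config: two 12B Nemo finetunes merged over the 12B base.
# The merge blends weights element-wise, so the result is still ~12B parameters.
models:
  - model: example/nemo-12b-finetune-a   # hypothetical finetune
    parameters:
      density: 0.5
      weight: 0.5
  - model: example/nemo-12b-finetune-b   # hypothetical finetune
    parameters:
      density: 0.5
      weight: 0.5
merge_method: ties
base_model: mistralai/Mistral-Nemo-Instruct-2407
parameters:
  normalize: true
dtype: bfloat16
```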
I could be wrong, but I thought I'd let you know. Anyway, thanks for all the great finetuned models you've trained and released lately. The Mistral Small models in particular strike the perfect balance: small enough for acceptable inference speed with a Q4 quant on my RTX 3050 Laptop 6GB / 64GB DDR4 / i5-12500H system, yet with performance that feels better than many models with far more parameters.
A 12B and a 24B were made at the same time, and one was uploaded to the wrong repository by mistake. It should be correct now.