[image: perplexity eval results]

* evals calculated with llama.cpp's llama-perplexity tool

mistralai/Ministral-3-3B-Instruct-2512, neopolitized with projected shards and fragments of mistralai/Ministral-3-8B-Instruct-2512.

  • projection method: 2
  • merge method: 0
  • layers: 0-4 [x->x]
  • alpha: 0.85-0.85
  • tensors: attn_q, attn_k, attn_v, attn_o.T
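The recipe above can be sketched as a per-tensor alpha blend. The card does not document what "projection method: 2" or "merge method: 0" actually do, so the choices below (plain truncation for projection, linear interpolation for merging) are illustrative assumptions only, not the author's actual pipeline:

```python
import numpy as np

def project(donor: np.ndarray, shape: tuple) -> np.ndarray:
    """Shrink a larger donor tensor (from the 8B model) down to the
    base model's shape. Hypothetical: plain truncation stands in for
    whatever 'projection method: 2' denotes."""
    rows, cols = shape
    return donor[:rows, :cols]

def merge(base: np.ndarray, donor: np.ndarray, alpha: float) -> np.ndarray:
    """Blend the base tensor with the projected donor tensor.
    Assumes 'merge method: 0' is a linear interpolation controlled
    by alpha, applied to attn_q/attn_k/attn_v/attn_o in layers 0-4."""
    return (1.0 - alpha) * base + alpha * project(donor, base.shape)

# Toy shapes: a 4x4 "3B" tensor blended with an 8x8 "8B" tensor.
base = np.ones((4, 4))
donor = np.full((8, 8), 3.0)
merged = merge(base, donor, alpha=0.85)
```

With a constant-valued toy input, each merged entry is simply `0.15 * 1.0 + 0.85 * 3.0 = 2.7`, which makes the interpolation easy to verify by hand.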
                             8 w  w       
8d8b. .d88b .d8b. 88b. .d8b. 8 w w8ww .d88
8P Y8 8.dP' 8' .8 8  8 8' .8 8 8  8   8  8
8   8 `Y88P `Y8P' 88P' `Y8P' 8 8  Y8P `Y88
                  8                       
Model: neopolita/Neo-Ministral-3-3B-Instruct-2512-v0

  • format: GGUF
  • model size: 3B params
  • architecture: mistral3
  • precision: 16-bit