------------------------------------------------
- Model Details and Specifications: -
------------------------------------------------
Ministral-3 14B Instruct 2512 (GGUF)
This release contains:
Llama.cpp- and Ollama-compatible converted and quantized GGUF model files
Quantized GGUF version of:
- Ministral-3-14B-Instruct-2512-BF16
(by MistralAI)
Original Model Link:
Description:
This release includes GGUF model files (compatible with both Ollama and Llama.cpp) and three working multi-modal projector
(mmproj) files for the vision projector, offering full vision and text capabilities in Ollama or Llama.cpp.
What's New?
As of February 2026, all available GGUF files (both F16 and K-quants) have been completely overhauled to offer the community
the best size-versus-quality balance I could achieve for this model from MistralAI (thank you, MistralAI, for your release!).
This was done by first taking the BF16 model's safetensors and up-casting them to F32; from there, the model was quantized down to the
files currently available here. Beyond the new F32 --> quant conversion methodology, these F16 and quantized GGUF files also
include a completely custom tokenizer chat template.
Why Up-Cast to F32 from BF16?
Some will argue, or perhaps be curious, as to why the model was up-cast from BF16 to F32. While there is no direct quality
benefit from casting a BF16 release up to F32, there is less quality loss when quantizing from the up-cast: F32 offers more
mathematical "scratchpad" during conversion, and quants derived from an F32 source inherently degrade less per quant level than those
converted directly from F16 or BF16. (It's worth a web search if anyone is ever curious!)
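One way to see the range part of this argument: BF16 shares F32's 8-bit exponent, while F16 has only 5 exponent bits, so routing BF16 weights through an F16 intermediate can clip large magnitudes that an F32 up-cast preserves exactly. A small illustrative numpy check (an illustration of the precision formats only, not the actual quantization pipeline):

```python
import numpy as np

# A BF16-representable magnitude (BF16 values embed exactly in F32)...
w = np.float32(70000.0)

# ...survives the F32 up-cast path unchanged,
via_f32 = w

# ...but overflows to inf on a direct F16 path (F16 max is ~65504).
via_f16 = np.float32(np.float16(w))

print(via_f32, via_f16)  # 70000.0 inf
```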
What is the "Custom Tokenizer Chat Template?"
As opposed to the standard chat template made available by MistralAI, this release of converted and quantized GGUF files offers a
completely custom tokenizer chat template designed to provide smoother, faster, more efficient, and more sophisticated interaction/inference with the model.
This template strips away the "fluff" and genuinely unneeded logic, so anyone who uses the model
for inference can enjoy a significant improvement in speed, quality, and context adherence without sacrificing any aspect of MistralAI's initial release.
For reference, here is the new chat template (in Ollama's Go-template syntax):
(This template features a sliding context window of EIGHT (8) interactions, which may be adjusted to individual requirements simply by changing
the number nine (9) on the third line of the template to a higher or lower value to increase or decrease the sliding window.)
{{- $remMessage := false }}
{{- range $index, $_ := .Messages }}
{{- if lt (len (slice $.Messages $index)) 9 }}
{{- $remMessage = true }}
{{- end }}
{{- if eq .Role "system" }}[SYSTEM_PROMPT]{{ .Content }}[/SYSTEM_PROMPT]{{- end }}
{{- if $remMessage }}
{{- if eq .Role "user" }}
[INST]{{ .Content }}[/INST]
{{- else if eq .Role "assistant" }}
{{- if .Content}}{{ .Content }}{{- end }}
{{- end }}
{{- end }}
{{- end }}
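Read as plain logic, the template above always emits system prompts but only renders the last eight user/assistant turns. A minimal Python sketch of that behavior (an illustration for understanding, not code shipped with this release):

```python
def render(messages, window=8):
    """Rough sketch of the template's sliding-window rendering."""
    out = []
    n = len(messages)
    for i, m in enumerate(messages):
        # System prompts are always emitted, regardless of position.
        if m["role"] == "system":
            out.append(f"[SYSTEM_PROMPT]{m['content']}[/SYSTEM_PROMPT]")
        # Only the last `window` messages are rendered as turns; this mirrors
        # the template's `lt (len (slice $.Messages $index)) 9` check.
        if n - i < window + 1:
            if m["role"] == "user":
                out.append(f"[INST]{m['content']}[/INST]")
            elif m["role"] == "assistant" and m["content"]:
                out.append(m["content"])
    return "".join(out)
```

With ten user messages, only the final eight appear in the rendered prompt, which is what keeps long chats from overflowing the effective context.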
No modifications, edits, or configurations are required to use this model with
Ollama, it works natively! Both Vision and Text work with Ollama. (^.^)
Coming Soon!!!
Check back occasionally, as an automated installer/configuration Python 3 script is making its way to all of my releases!
It gives anyone interested in these models a hassle-free, stress-free experience: the Python 3
script takes care of setting up the model for Ollama (Ollama specifically; optimizations for other software are coming later).
For anyone who wishes to get the most out of this particular release, it is highly recommended to use the Ollama "create" command
along with the supplied ".modelfile" to ensure proper configuration. The Python 3 installer/configuration tool
will handle these steps if you choose to use it.
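For reference, the usual Ollama flow looks roughly like this (the model tag and modelfile name below are placeholders; substitute the actual ".modelfile" supplied with this release):

```shell
# Build an Ollama model from the supplied .modelfile (which points at the GGUF
# and carries the custom chat template), then run it:
ollama create ministral-3-14b -f ./Ministral-3-14B-Instruct-2512.modelfile
ollama run ministral-3-14b
```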
Happy Inferencing!
-- Jon Z (EnlistedGhost)
Model Updates (As of: February 27th, 2026)
- Updated: All GGUF model file(s) with significantly altered/reworked GGUF conversion(s) and Quant file(s)
Final quantized and full-F16 model files are being uploaded! Check back for I-matrix quants and other static K-quants if you do not see your desired edition (they are being uploaded; thank you for your patience!).
-------------------------------------------------------------
- GGUF Conversion and Quantization Details: -
-------------------------------------------------------------
Software used to convert Safetensors to GGUF:
Software used to create Quantized GGUF Files:
Specific GitHub Commit Point:
Converted to GGUF and Quantized by:
--------------------------
---- Original Info ----
--------------------------
(Crossposted from the link in the above section: "Model Details"):
Ministral 3 14B Instruct 2512 BF16
The largest model in the Ministral 3 family, Ministral 3 14B offers frontier capabilities and performance comparable to its larger Mistral Small 3.2 24B counterpart. A powerful and efficient language model with vision capabilities.
This model is the instruct post-trained version, fine-tuned for instruction tasks, making it ideal for chat and instruction-based use cases.
The Ministral 3 family is designed for edge deployment, capable of running on a wide range of hardware. Ministral 3 14B can even be deployed locally, capable of fitting in 32GB of VRAM in BF16, and less than 24GB of RAM/VRAM when quantized.
We provide a no-loss FP8 version here, you can find other formats and quantizations in the Ministral 3 - Additional Checkpoints collection.
Key Features
Ministral 3 14B consists of two main architectural components:
- 13.5B Language Model
- 0.4B Vision Encoder
The Ministral 3 14B Instruct model offers the following capabilities:
- Vision: Enables the model to analyze images and provide insights based on visual content, in addition to text.
- Multilingual: Supports dozens of languages, including English, French, Spanish, German, Italian, Portuguese, Dutch, Chinese, Japanese, Korean, Arabic.
- System Prompt: Maintains strong adherence and support for system prompts.
- Agentic: Offers best-in-class agentic capabilities with native function calling and JSON outputting.
- Edge-Optimized: Delivers best-in-class performance at a small scale, deployable anywhere.
- Apache 2.0 License: Open-source license allowing usage and modification for both commercial and non-commercial purposes.
- Large Context Window: Supports a 256k context window.
Use Cases
Private AI deployments where advanced capabilities meet practical hardware constraints:
- Private/custom chat and AI assistant deployments in constrained environments
- Advanced local agentic use cases
- Fine-tuning and specialization
- And more...
Bringing advanced AI capabilities to most environments.
Ministral 3 Family
| Model Name | Type | Precision | Link |
|---|---|---|---|
| Ministral 3 3B Base 2512 | Base pre-trained | BF16 | Hugging Face |
| Ministral 3 3B Instruct 2512 | Instruct post-trained | BF16 | Hugging Face |
| Ministral 3 3B Reasoning 2512 | Reasoning capable | BF16 | Hugging Face |
| Ministral 3 8B Base 2512 | Base pre-trained | BF16 | Hugging Face |
| Ministral 3 8B Instruct 2512 | Instruct post-trained | BF16 | Hugging Face |
| Ministral 3 8B Reasoning 2512 | Reasoning capable | BF16 | Hugging Face |
| Ministral 3 14B Base 2512 | Base pre-trained | BF16 | Hugging Face |
| Ministral 3 14B Instruct 2512 | Instruct post-trained | BF16 | Hugging Face |
| Ministral 3 14B Reasoning 2512 | Reasoning capable | BF16 | Hugging Face |
Other formats available here.
Benchmark Results
We compare Ministral 3 to similarly sized models.
Reasoning
| Model | AIME25 | AIME24 | GPQA Diamond | LiveCodeBench |
|---|---|---|---|---|
| Ministral 3 14B | 0.850 | 0.898 | 0.712 | 0.646 |
| Qwen3-14B (Thinking) | 0.737 | 0.837 | 0.663 | 0.593 |
| Ministral 3 8B | 0.787 | 0.860 | 0.668 | 0.616 |
| Qwen3-VL-8B-Thinking | 0.798 | 0.860 | 0.671 | 0.580 |
| Ministral 3 3B | 0.721 | 0.775 | 0.534 | 0.548 |
| Qwen3-VL-4B-Thinking | 0.697 | 0.729 | 0.601 | 0.513 |
Instruct
| Model | Arena Hard | WildBench | MATH Maj@1 | MM MTBench |
|---|---|---|---|---|
| Ministral 3 14B | 0.551 | 68.5 | 0.904 | 8.49 |
| Qwen3 14B (Non-Thinking) | 0.427 | 65.1 | 0.870 | NOT MULTIMODAL |
| Gemma3-12B-Instruct | 0.436 | 63.2 | 0.854 | 6.70 |
| Ministral 3 8B | 0.509 | 66.8 | 0.876 | 8.08 |
| Qwen3-VL-8B-Instruct | 0.528 | 66.3 | 0.946 | 8.00 |
| Ministral 3 3B | 0.305 | 56.8 | 0.830 | 7.83 |
| Qwen3-VL-4B-Instruct | 0.438 | 56.8 | 0.900 | 8.01 |
| Qwen3-VL-2B-Instruct | 0.163 | 42.2 | 0.786 | 6.36 |
| Gemma3-4B-Instruct | 0.318 | 49.1 | 0.759 | 5.23 |
Base
| Model | Multilingual MMLU | MATH CoT 2-Shot | AGIEval 5-shot | MMLU Redux 5-shot | MMLU 5-shot | TriviaQA 5-shot |
|---|---|---|---|---|---|---|
| Ministral 3 14B | 0.742 | 0.676 | 0.648 | 0.820 | 0.794 | 0.749 |
| Qwen3 14B Base | 0.754 | 0.620 | 0.661 | 0.837 | 0.804 | 0.703 |
| Gemma 3 12B Base | 0.690 | 0.487 | 0.587 | 0.766 | 0.745 | 0.788 |
| Ministral 3 8B | 0.706 | 0.626 | 0.591 | 0.793 | 0.761 | 0.681 |
| Qwen 3 8B Base | 0.700 | 0.576 | 0.596 | 0.794 | 0.760 | 0.639 |
| Ministral 3 3B | 0.652 | 0.601 | 0.511 | 0.735 | 0.707 | 0.592 |
| Qwen 3 4B Base | 0.677 | 0.405 | 0.570 | 0.759 | 0.713 | 0.530 |
| Gemma 3 4B Base | 0.516 | 0.294 | 0.430 | 0.626 | 0.589 | 0.640 |
License
This model is licensed under the Apache 2.0 License.
You must not use this model in a manner that infringes, misappropriates, or otherwise violates any third party’s rights, including intellectual property rights.
Model tree for EnlistedGhost/Ministral-3-14B-Instruct-2512-GGUF
Base model
mistralai/Ministral-3-14B-Base-2512