# Ministral-3-3B-Instruct-2512-BF16-Heretic-GGUF

This model was converted to GGUF format from 0xA50C1A1/Ministral-3-3B-Instruct-2512-BF16-Heretic using GGUF Forge.

## Quants

The following quants are available: `Q2_K`, `Q3_K_S`, `Q3_K_M`, `Q3_K_L`, `Q4_0`, `Q4_K_S`, `Q4_K_M`, `Q5_0`, `Q5_K_S`, `Q5_K_M`, `Q6_K`, `Q8_0`.
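As a quick usage sketch, a single quant can be downloaded from this repo and run with llama.cpp's `llama-cli`. The exact `.gguf` filename below is an assumption; check the repo's file list for the real names:

```bash
# Download one quant from the Hub (filename is an assumption; verify in the repo file list)
huggingface-cli download Akicou/Ministral-3-3B-Instruct-2512-BF16-Heretic-GGUF \
  Ministral-3-3B-Instruct-2512-BF16-Heretic-Q4_K_M.gguf --local-dir .

# Run it with llama.cpp
llama-cli -m Ministral-3-3B-Instruct-2512-BF16-Heretic-Q4_K_M.gguf -p "Hello" -n 64
```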

## Ollama Support

Full Ollama support is provided by merging any sharded GGUF output into a single file after quantization, so each quant can be loaded directly (see the example below).
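If you are working with a sharded GGUF yourself, the same merge can be done with llama.cpp's `llama-gguf-split` tool, and the resulting single file can be registered with Ollama via a Modelfile. The filenames and the model name below are placeholders:

```bash
# Merge shards into one file (pass the first shard; the tool locates the rest)
llama-gguf-split --merge model-Q4_K_M-00001-of-00002.gguf model-Q4_K_M.gguf

# Minimal Modelfile pointing at the merged GGUF
cat > Modelfile <<'EOF'
FROM ./model-Q4_K_M.gguf
EOF

# Register and run the model in Ollama (model name is a placeholder)
ollama create ministral-3b-heretic -f Modelfile
ollama run ministral-3b-heretic "Hello!"
```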

## Conversion Stats

| Metric | Value |
| --- | --- |
| Job ID | b829ba7e-2d2f-4e62-b488-8d1701fd7ea7 |
| GGUF Forge Version | v7.4 |
| Total Time | 26.8 min |
| Avg Time per Quant | 26.6 s |

## Step Breakdown

- Download: 1.0 min
- FP16 Conversion: 24.0 s
- Quantization: 25.4 min (a manual equivalent of these steps is sketched below)
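For reference, these steps mirror the standard llama.cpp conversion pipeline. A minimal hand-run sketch follows; the source repo id comes from this card, while paths, filenames, and the chosen quant type are illustrative:

```bash
# 1. Download the source model from the Hub
huggingface-cli download 0xA50C1A1/Ministral-3-3B-Instruct-2512-BF16-Heretic --local-dir ./src

# 2. Convert the HF weights to a single GGUF file (FP16 intermediate, as in the stats above)
python convert_hf_to_gguf.py ./src --outtype f16 --outfile model-f16.gguf

# 3. Quantize the FP16 file to each target type, e.g. Q4_K_M
llama-quantize model-f16.gguf model-Q4_K_M.gguf Q4_K_M
```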

## 🚀 Convert Your Own Models

Want to convert more models to GGUF?

👉 gguforge.com: a free hosted GGUF conversion service. Log in with Hugging Face and request conversions instantly!

## Links

- 🌐 Free Hosted Service: gguforge.com
- 🛠️ Self-host GGUF Forge: GitHub
- 📦 llama.cpp (quantization engine): GitHub
- 💬 Community & Support: Discord

Converted automatically by GGUF Forge v7.4
