# Ministral-3-3B-Instruct-2512-BF16-Heretic-GGUF
This model was converted to GGUF format from 0xA50C1A1/Ministral-3-3B-Instruct-2512-BF16-Heretic using GGUF Forge.
## Quants
The following quants are available: Q2_K, Q3_K_S, Q3_K_M, Q3_K_L, Q4_0, Q4_K_S, Q4_K_M, Q5_0, Q5_K_S, Q5_K_M, Q6_K, Q8_0
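As a sketch of fetching one of these quants locally: the file-name pattern and the repo namespace below are assumptions, so check the repository's file list for the exact names.

```shell
# Pick a quant and build the expected file name. The naming pattern here is
# an assumption -- verify it against the repo's actual file list.
QUANT=Q4_K_M
FILE="Ministral-3-3B-Instruct-2512-BF16-Heretic-${QUANT}.gguf"
echo "$FILE"

# Download with the Hugging Face CLI (requires the huggingface_hub package;
# <your-namespace> is a placeholder for the repo owner):
# huggingface-cli download <your-namespace>/Ministral-3-3B-Instruct-2512-BF16-Heretic-GGUF "$FILE"
```

Smaller quants (Q2_K, Q3_K_*) trade quality for footprint; Q4_K_M is a common middle ground, and Q8_0 is closest to the source weights.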
## Ollama Support
Full Ollama support is provided by merging any sharded GGUF output into a single file after quantization.
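A minimal sketch of pointing Ollama at a downloaded single-file quant. The file name and model tag are placeholders, and the merge step is only needed if you are working with a sharded GGUF yourself:

```shell
# If you have a sharded quant, merge it first with llama.cpp's split tool:
# llama-gguf-split --merge model-Q4_K_M-00001-of-00002.gguf model-Q4_K_M.gguf

# Write a Modelfile that points Ollama at the local GGUF (placeholder name):
cat > Modelfile <<'EOF'
FROM ./model-Q4_K_M.gguf
EOF

# Register and run (assumes ollama is installed):
# ollama create ministral-3b -f Modelfile
# ollama run ministral-3b "Hello!"
```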
## Conversion Stats
| Metric | Value |
|---|---|
| Job ID | b829ba7e-2d2f-4e62-b488-8d1701fd7ea7 |
| GGUF Forge Version | v7.4 |
| Total Time | 26.8 min |
| Avg Time per Quant | 26.6 s |
### Step Breakdown
- Download: 1.0 min
- FP16 Conversion: 24.0 s
- Quantization: 25.4 min
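As a quick sanity check, the step times above sum to the reported total:

```shell
# 1.0 min download + 24 s FP16 conversion + 25.4 min quantization
total=$(awk 'BEGIN { printf "%.1f", 1.0 + 24.0/60 + 25.4 }')
echo "${total} min"   # 26.8 min
```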
## Convert Your Own Models
Want to convert more models to GGUF?
gguforge.com: a free hosted GGUF conversion service. Log in with Hugging Face and request conversions instantly!
## Links
- Free Hosted Service: gguforge.com
- Self-host GGUF Forge: GitHub
- llama.cpp (quantization engine): GitHub
- Community & Support: Discord
Converted automatically by GGUF Forge v7.4