Mistral-Nemo-12B-Instruct-v1-GGUF (Platinum Series)


This repository contains the Platinum Series universal GGUF release of Mistral-Nemo-12B-Instruct-v1. This collection provides multiple quantization levels optimized for high-performance reasoning and creative applications, bridging the gap between small and large-scale language models.

πŸ“¦ Available Files & Quantization Details

| File Name | Quantization | Size | Accuracy | Recommended For |
| --- | --- | --- | --- | --- |
| Mistral-Nemo-12B-Instruct-Platinum-F16.gguf | FP16 | ~24.5 GB | 100% | Master Reference / Benchmarking |
| Mistral-Nemo-12B-Instruct-Platinum-Q8_0.gguf | Q8_0 | ~13.0 GB | 99.9% | Platinum Reference / High-Fidelity |
| Mistral-Nemo-12B-Instruct-Platinum-Q6_K.gguf | Q6_K | ~10.0 GB | 99.8% | High-End GPU / Complex Logic |
| Mistral-Nemo-12B-Instruct-Platinum-Q5_K_M.gguf | Q5_K_M | ~8.7 GB | 99.6% | Balanced Performance |
| Mistral-Nemo-12B-Instruct-Platinum-Q4_K_M.gguf | Q4_K_M | ~7.4 GB | 99.2% | Efficiency / Mid-Range VRAM |
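
To fetch a single quantization without cloning the whole repository, you can use the huggingface_hub library. A minimal sketch, assuming the repository id shown on this page and the Q4_K_M file from the table above:

from huggingface_hub import hf_hub_download

# Download one .gguf file from this repository (repo id assumed from this page)
path = hf_hub_download(
    repo_id="CelesteImperia/Mistral-Nemo-Instruct-Platinum-GGUF",
    filename="Mistral-Nemo-12B-Instruct-Platinum-Q4_K_M.gguf",
)
print(path)  # local cache path of the downloaded model file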

🐍 Python Inference (llama-cpp-python)

To run these models from Python using the llama-cpp-python bindings:

from llama_cpp import Llama

llm = Llama(
    model_path="Mistral-Nemo-12B-Instruct-Platinum-Q8_0.gguf",
    n_gpu_layers=-1,  # offload all layers to the GPU (CUDA/Metal); use 0 for CPU-only
    n_ctx=8192        # context window in tokens
)

output = llm("Discuss the impact of the Mistral-Nemo architecture.", max_tokens=200)
print(output["choices"][0]["text"])
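
Since this is an instruct-tuned model, you may get better results from the chat-style API, which in recent llama-cpp-python versions applies the chat template stored in the GGUF metadata. A minimal sketch, reusing the llm instance from the snippet above:

# Chat-style completion; response format mirrors the OpenAI schema
output = llm.create_chat_completion(
    messages=[
        {"role": "system", "content": "You are a concise technical assistant."},
        {"role": "user", "content": "Summarize the Mistral-Nemo architecture in three sentences."}
    ],
    max_tokens=200
)
print(output["choices"][0]["message"]["content"])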

πŸ’» For C# / .NET Users (LLamaSharp)

This collection is fully compatible with .NET applications via the LLamaSharp library.

using LLama;
using LLama.Common;

var parameters = new ModelParams("Mistral-Nemo-12B-Instruct-Platinum-Q8_0.gguf")
{
    ContextSize = 8192,  // context window in tokens
    GpuLayerCount = 40   // layers offloaded to the GPU; lower this if VRAM is limited
};

using var model = LLamaWeights.LoadFromFile(parameters);
using var context = model.CreateContext(parameters);
var executor = new InteractiveExecutor(context);

Console.WriteLine("Universal Logic Engine Active.");

// Stream a completion token by token (parameter names may differ slightly between LLamaSharp versions)
await foreach (var token in executor.InferAsync(
    "Discuss the impact of the Mistral-Nemo architecture.",
    new InferenceParams { MaxTokens = 200 }))
{
    Console.Write(token);
}

πŸ—οΈ Technical Details

  • Optimization Tool: llama.cpp (CUDA-accelerated)
  • Architecture: Mistral Nemo (12B)
  • Hardware Validation: Dual-GPU (RTX 3090 + RTX A4000)

β˜• Support the Forge

Maintaining high-capacity workstations for model conversion requires hardware investment. If these tools power your industrial workflows or local research, please consider supporting the development:

| Platform | Support Link |
| --- | --- |
| Global & India | Support via Razorpay |

Scan to support via UPI (India Only):


Connect with the architect: Abhishek Jaiswal on LinkedIn
