Mistral-Nemo-12B-Instruct-OpenVINO-INT8 (Silver Series)


This repository contains the Silver Series optimized OpenVINO™ IR version of Mistral-Nemo-12B-Instruct, quantized to INT8 precision using NNCF. This model represents the heavy-duty reasoning tier of the Forge, pairing 12B-parameter depth with high-fidelity 8-bit weights for complex logical tasks.
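For reference, a checkpoint like this can be produced with the weight-compression API in optimum-intel, which drives NNCF under the hood. The snippet below is a minimal sketch rather than the exact export recipe used for this repository; the source model id and output directory are assumptions.

from optimum.intel import OVModelForCausalLM, OVWeightQuantizationConfig

# 8-bit symmetric weight compression (performed by NNCF)
q_config = OVWeightQuantizationConfig(bits=8, sym=True)

# Export the original checkpoint to OpenVINO IR with INT8 weights
# (source model id and output path are illustrative)
model = OVModelForCausalLM.from_pretrained(
    "mistralai/Mistral-Nemo-Instruct-2407",
    export=True,
    quantization_config=q_config,
)
model.save_pretrained("mistral-nemo-12b-instruct-ov-int8")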


🐍 Python Inference (Optimum-Intel)

To run this Silver Series engine locally using the optimum-intel library:

from optimum.intel import OVModelForCausalLM
from transformers import AutoTokenizer

model_id = "CelesteImperia/Mistral-Nemo-12B-Instruct-OpenVINO-INT8"

# Load the tokenizer and the pre-converted OpenVINO IR model
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = OVModelForCausalLM.from_pretrained(model_id)

prompt = "Analyze the architectural benefits of using a 12B parameter model for industrial automation logic."
inputs = tokenizer(prompt, return_tensors="pt")

# Generate up to 300 new tokens and decode the completion
outputs = model.generate(**inputs, max_new_tokens=300)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
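To target a GPU or pass runtime hints, the device can be selected at load time. A brief sketch, assuming an OpenVINO-visible GPU device (the ov_config value is illustrative):

# Compile the model for a GPU device instead of the default CPU
model = OVModelForCausalLM.from_pretrained(
    model_id,
    device="GPU",
    ov_config={"PERFORMANCE_HINT": "LATENCY"},
)

Calling model.to("GPU") after loading recompiles the model on the new device and achieves the same effect.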

💻 For C# / .NET Users (OpenVINO.GenAI)

Mistral Nemo's architecture is fully compatible with the native OpenVINO.GenAI framework, providing a high-performance path for large-scale model deployment in .NET environments.

using OpenVino.GenAI;

// 1. Initialize the LLM Pipeline
var device = "CPU"; // Use "GPU" to target an OpenVINO-supported GPU device
using var pipe = new LLMPipeline("path/to/mistral-nemo-int8-model", device);

// 2. Set Generation Config
var config = new GenerationConfig { MaxNewTokens = 1024, Temperature = 0.4f };

// 3. Execute Inference
var prompt = "Write a comprehensive technical specification for a C# based AI middleware.";
var result = pipe.Generate(prompt, config);

Console.WriteLine(result);
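The C# pipeline expects the IR files on local disk rather than a Hub id. One way to fetch them is the huggingface_hub Python client; the local directory name below is illustrative:

from huggingface_hub import snapshot_download

# Download the OpenVINO IR files for use with the LLMPipeline above
local_dir = snapshot_download(
    "CelesteImperia/Mistral-Nemo-12B-Instruct-OpenVINO-INT8",
    local_dir="mistral-nemo-int8-model",
)
print(local_dir)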

πŸ—οΈ Technical Details

  • Optimization Tool: NNCF (Neural Network Compression Framework)
  • Quantization: INT8 Symmetric (Per-channel; sketched in the snippet after this list)
  • Model Context: 128k Tokens
  • Workstation Validation: Dual-GPU (RTX 3090 + RTX A4000)
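To make the quantization scheme concrete, here is a minimal NumPy sketch of symmetric per-channel INT8 weight quantization. It illustrates the arithmetic only; it is not NNCF's actual implementation:

import numpy as np

def quantize_int8_symmetric(w):
    # One scale per output channel (row), symmetric around zero
    scale = np.abs(w).max(axis=1, keepdims=True) / 127.0
    q = np.clip(np.round(w / scale), -127, 127).astype(np.int8)
    return q, scale

w = np.random.randn(4, 8).astype(np.float32)  # toy weight matrix
q, scale = quantize_int8_symmetric(w)
w_hat = q.astype(np.float32) * scale          # dequantized approximation
print(np.abs(w - w_hat).max())                # worst-case rounding error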

☕ Support the Forge

Producing heavy-duty, high-fidelity models requires significant hardware and electrical resources. If these tools power your industrial projects, please consider supporting our research:

Global & India: support via Razorpay

Scan to support via UPI (India Only):


📜 License

This model is released under the Apache 2.0 License.


Connect with the architect: Abhishek Jaiswal on LinkedIn
