# Llama-3.2-3B-Instruct-OpenVINO-INT8 (Silver Series)
This repository contains the Silver Series optimized OpenVINO™ IR version of Llama-3.2-3B-Instruct, quantized to INT8 precision using NNCF. This model serves as the "Balanced Assistant" tier of the Forge, offering a strong balance of reasoning depth and high-speed local inference.
## Python Inference (Optimum-Intel)
To run this Silver Series engine locally using the optimum-intel library:
```python
from optimum.intel import OVModelForCausalLM
from transformers import AutoTokenizer

model_id = "CelesteImperia/Llama-3.2-3B-Instruct-OpenVINO-INT8"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = OVModelForCausalLM.from_pretrained(model_id)

prompt = "Explain the architectural differences between Llama 3.1 and 3.2."
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=200)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
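Note that instruct-tuned Llama models generally respond best when the chat template is applied rather than a raw prompt string; `tokenizer.apply_chat_template(messages, add_generation_prompt=True)` handles this automatically. For reference, a minimal sketch of the Llama 3.x chat format (the special-token names below come from the Llama 3 format in general, not from inspecting this repository):

```python
def build_llama3_prompt(system: str, user: str) -> str:
    # Llama 3.x chat format: each turn is wrapped in header tokens
    # and terminated with an end-of-turn token.
    return (
        "<|begin_of_text|>"
        "<|start_header_id|>system<|end_header_id|>\n\n" + system + "<|eot_id|>"
        "<|start_header_id|>user<|end_header_id|>\n\n" + user + "<|eot_id|>"
        "<|start_header_id|>assistant<|end_header_id|>\n\n"
    )

prompt = build_llama3_prompt(
    "You are a concise technical assistant.",
    "Explain the architectural differences between Llama 3.1 and 3.2.",
)
```

In practice, prefer `tokenizer.apply_chat_template` over hand-building the string; the sketch is only meant to show what the template produces.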
## For C# / .NET Users (OpenVINO.GenAI)
The Silver Series is specifically optimized for the native OpenVINO.GenAI NuGet package, enabling high-performance integration into Windows desktop and server applications.
```csharp
using OpenVino.GenAI;

// 1. Initialize the LLM pipeline
var device = "CPU"; // Use "GPU" for RTX 3090 / A4000 hardware acceleration
using var pipe = new LLMPipeline("path/to/llama-3.2-3b-int8-model", device);

// 2. Set generation parameters
var config = new GenerationConfig { MaxNewTokens = 512, Temperature = 0.7f };

// 3. Run inference
var prompt = "Design a C# class for a factory automation logging system.";
var result = pipe.Generate(prompt, config);
Console.WriteLine(result);
```
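The `Temperature` value above scales the logits before sampling: values below 1.0 sharpen the next-token distribution toward the most likely token, while values above 1.0 flatten it. A minimal NumPy sketch of the effect (illustrative only, not the OpenVINO.GenAI implementation):

```python
import numpy as np

def temperature_probs(logits, temperature=0.7):
    # Scale logits by 1/temperature, then apply a numerically stable softmax.
    z = np.asarray(logits, dtype=np.float64) / temperature
    z -= z.max()
    p = np.exp(z)
    return p / p.sum()

logits = [2.0, 1.0, 0.1]
sharp = temperature_probs(logits, temperature=0.7)  # peakier distribution
flat = temperature_probs(logits, temperature=1.5)   # flatter distribution
```

Lower temperature concentrates probability mass on the top token, which is why 0.7 is a common default for assistant-style generation.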
## Technical Details
- Optimization Tool: NNCF (Neural Network Compression Framework)
- Quantization: INT8 Symmetric (Per-channel)
- Series: Silver (High Fidelity)
- Workstation Validation: Dual-GPU (RTX 3090 + RTX A4000)
- Target Hardware: Intel Core i5-11400 / Windows 11
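"INT8 symmetric per-channel" means each output channel of a weight matrix gets its own scale, with the zero-point fixed at 0. A minimal NumPy sketch of the scheme (illustrative; NNCF's actual implementation differs in details such as calibration and layer selection):

```python
import numpy as np

def quantize_int8_symmetric(w):
    # One scale per output channel (row); zero-point is implicitly 0.
    scale = np.abs(w).max(axis=1, keepdims=True) / 127.0
    scale = np.where(scale == 0, 1.0, scale)  # guard all-zero channels
    q = np.clip(np.round(w / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    return q.astype(np.float32) * scale

rng = np.random.default_rng(0)
w = rng.standard_normal((8, 64)).astype(np.float32)
q, scale = quantize_int8_symmetric(w)
max_err = np.abs(dequantize(q, scale) - w).max()
```

Per-channel scales keep the rounding error bounded by half a quantization step per channel, which is why INT8 weight-only quantization typically preserves most of the FP16 model's quality.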
## Support the Forge
High-fidelity model production and workstation maintenance require significant resources. If these tools power your industrial projects or research, please consider supporting our development:
| Platform | Support Link |
|---|---|
| Global & India | Support via Razorpay |
## License
This model is released under the Llama 3.2 Community License.
Connect with the architect: Abhishek Jaiswal on LinkedIn
**Base model:** meta-llama/Llama-3.2-3B-Instruct