Qwen2.5-Coder-7B-Instruct-OpenVINO-INT8 (Silver Series)


This repository contains the Silver Series optimized OpenVINO™ IR version of Qwen2.5-Coder-7B-Instruct, quantized to INT8 precision using NNCF. This model is the premier choice for local C# development assistance, offering high-fidelity code generation and technical reasoning.


🐍 Python Inference (Optimum-Intel)

To run this Silver Series coder locally using the optimum-intel library:

```python
from optimum.intel import OVModelForCausalLM
from transformers import AutoTokenizer

model_id = "CelesteImperia/Qwen2.5-Coder-7B-Instruct-OpenVINO-INT8"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = OVModelForCausalLM.from_pretrained(model_id)

prompt = "Write a thread-safe Singleton pattern in C# using Lazy<T>."
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=500)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
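Qwen2.5 instruct models are trained on ChatML-style conversation turns, so wrapping the raw prompt in the chat template generally improves instruction following. In practice you would call the tokenizer's `apply_chat_template(messages, add_generation_prompt=True)`; the minimal sketch below (with a hypothetical `to_chatml` helper, not part of any library) just shows the layout that template produces:

```python
# Hypothetical helper illustrating the ChatML layout Qwen2.5 expects;
# prefer tokenizer.apply_chat_template(...) in real code.
def to_chatml(system: str, user: str) -> str:
    return (
        f"<|im_start|>system\n{system}<|im_end|>\n"
        f"<|im_start|>user\n{user}<|im_end|>\n"
        f"<|im_start|>assistant\n"
    )

prompt = to_chatml(
    "You are a helpful coding assistant.",
    "Write a thread-safe Singleton pattern in C# using Lazy<T>.",
)
```

The trailing `<|im_start|>assistant` marker cues the model to generate the assistant turn rather than continue the user's text.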

πŸ’» For C# / .NET Users (OpenVINO.GenAI)

The Qwen2.5 Coder is optimized for the native OpenVINO.GenAI framework, providing a low-latency coding companion directly within your Windows development environment.

```csharp
using OpenVino.GenAI;

// 1. Initialize the LLM pipeline
var device = "CPU"; // Use "GPU" for an Intel GPU (OpenVINO's GPU plugin targets Intel graphics, not NVIDIA cards)
using var pipe = new LLMPipeline("path/to/qwen2.5-coder-int8-model", device);

// 2. Set the generation config
var config = new GenerationConfig { MaxNewTokens = 1024, Temperature = 0.2f }; // Low temperature for precise code

// 3. Execute inference
var prompt = "Generate a C# WPF ViewModel for a real-time hardware monitoring dashboard.";
var result = pipe.Generate(prompt, config);

Console.WriteLine(result);
```
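Why a low temperature for code? Temperature divides the logits before the softmax, so values below 1 sharpen the output distribution toward the single most likely token. A small, self-contained sketch (plain Python, no model required):

```python
import math

def softmax_with_temperature(logits, temperature):
    # Divide logits by the temperature before normalizing; T < 1 sharpens
    # the distribution toward the top token, T > 1 flattens it.
    scaled = [l / temperature for l in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]

logits = [2.0, 1.0, 0.5]
p_sharp = softmax_with_temperature(logits, 0.2)  # near-deterministic
p_flat = softmax_with_temperature(logits, 1.0)   # standard softmax
```

At T = 0.2 almost all probability mass sits on the top logit, which is why low temperatures yield the deterministic, precise completions you want for code generation.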

πŸ—οΈ Technical Details

  • Optimization Tool: NNCF (Neural Network Compression Framework)
  • Quantization: INT8 Symmetric (Per-channel)
  • Primary Domain: Software Engineering / Coding
  • Workstation Validation: Dual-GPU (RTX 3090 + RTX A4000)
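For readers unfamiliar with the scheme above: per-channel symmetric INT8 quantization stores one scale per output channel and a zero-point of 0, mapping each float weight to an 8-bit integer. A minimal NumPy sketch of the idea (NNCF's actual implementation differs in details such as calibration and mixed-precision handling):

```python
import numpy as np

def quantize_int8_symmetric_per_channel(w: np.ndarray):
    # One scale per output channel (row); symmetric quantization keeps
    # the zero-point at 0, so only the scale must be stored per channel.
    max_abs = np.abs(w).max(axis=1, keepdims=True)
    scale = max_abs / 127.0
    q = np.clip(np.round(w / scale), -128, 127).astype(np.int8)
    return q, scale

def dequantize(q: np.ndarray, scale: np.ndarray) -> np.ndarray:
    return q.astype(np.float32) * scale

rng = np.random.default_rng(0)
w = rng.standard_normal((4, 16)).astype(np.float32)  # toy weight matrix
q, scale = quantize_int8_symmetric_per_channel(w)
w_hat = dequantize(q, scale)
# Rounding bounds the reconstruction error by half a quantization step per channel.
```

Weights shrink to a quarter of their FP32 size while the per-channel scales keep the rounding error small relative to each channel's dynamic range.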

β˜• Support the Forge

Maintaining the production line for elite-tier coding models requires significant resources. If these tools power your industrial projects or automation scripts, please consider supporting the Forge:

| Platform | Support Link |
| --- | --- |
| Global & India | Support via Razorpay |

Scan to support via UPI (India Only):


πŸ“œ License

This model is released under the Apache 2.0 License.


Connect with the architect: Abhishek Jaiswal on LinkedIn

Base model: Qwen/Qwen2.5-7B