Qwen2.5-Coder-14B-Instruct-HQQ-INT4

INT4 HQQ quantization of Qwen/Qwen2.5-Coder-14B-Instruct. HQQ is calibration-free, making this the calibration-free companion to drawais/Qwen2.5-Coder-14B-Instruct-AWQ-INT4.

Footprint

Source params: 14B (code-specialized)
Quantized weights: ~9.5 GB on disk
Inference VRAM (incl. KV cache at 32K context): ~16 GB

Best used at the native ≤32K context. For longer contexts, use the AWQ companion via vLLM, as sketched below.
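
A minimal vLLM sketch for serving the AWQ companion at a longer context; the context length, prompt, and sampling settings are illustrative assumptions, not part of this card:

from vllm import LLM, SamplingParams

# Illustrative settings; adjust max_model_len to your long-context needs and available VRAM.
llm = LLM(
    model="drawais/Qwen2.5-Coder-14B-Instruct-AWQ-INT4",
    quantization="awq",
    max_model_len=65536,
)
out = llm.generate(["Write a quicksort in Python."], SamplingParams(max_tokens=256))
print(out[0].outputs[0].text)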

Quick start

from transformers import AutoTokenizer
from hqq.models.hf.base import AutoHQQHFModel

# Load the tokenizer from the Hub as usual.
tok = AutoTokenizer.from_pretrained("drawais/Qwen2.5-Coder-14B-Instruct-HQQ-INT4")

# Load the pre-quantized HQQ weights directly onto the GPU (requires the hqq package).
model = AutoHQQHFModel.from_quantized("drawais/Qwen2.5-Coder-14B-Instruct-HQQ-INT4", device="cuda")
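
A minimal generation sketch built on the loader above, using the standard transformers chat-template API; the prompt and generation settings are illustrative assumptions:

# Build a chat prompt and generate with the quantized model.
messages = [{"role": "user", "content": "Write a Python function that reverses a string."}]
inputs = tok.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt").to("cuda")
out = model.generate(inputs, max_new_tokens=256)
print(tok.decode(out[0][inputs.shape[-1]:], skip_special_tokens=True))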

License

Apache 2.0 (inherited from the base model).
