Qwen3-14B-HQQ-INT4

INT4 quantization of Qwen/Qwen3-14B using HQQ (Half-Quadratic Quantization), a calibration-free method. Companion to drawais/Qwen3-14B-AWQ-INT4.

Footprint

Source params: 14B
Quantized weights: ~9.5 GB on disk
Inference VRAM (incl. KV cache @ 32K context): ~16 GB

Recommended use

This release is intended for chat / interactive workloads at the base model's native context length (40K).

For long-context workloads (>40K tokens), use the AWQ companion, drawais/Qwen3-14B-AWQ-INT4 (benchmark details below).

Bench

The companion AWQ release scored 90.0% overall (80% on paper-anchored items, 100% on synthetic items) on drawais/needle-1M-bench-mvp at 50K tokens, with YaRN-extended context served via vLLM. A direct HQQ score at the native ≤40K context is queued for a future release.
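The YaRN-extended setup broadly follows the standard vLLM recipe for Qwen3 models. Below is a minimal sketch of loading the AWQ companion with YaRN rope scaling; the scaling factor, max_model_len, and rope_scaling key names are illustrative, may differ from the exact benchmark configuration, and can vary across vLLM versions.

from vllm import LLM, SamplingParams

# Illustrative YaRN settings; not the exact benchmark configuration.
# Older vLLM versions spell the key "type" instead of "rope_type".
llm = LLM(
    model="drawais/Qwen3-14B-AWQ-INT4",
    max_model_len=65536,  # headroom for ~50K-token prompts
    rope_scaling={
        "rope_type": "yarn",
        "factor": 2.0,
        "original_max_position_embeddings": 32768,
    },
)

out = llm.generate(["<long document> ... <question about the needle>"],
                   SamplingParams(max_tokens=256))
print(out[0].outputs[0].text)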

Quick start

from transformers import AutoTokenizer
from hqq.models.hf.base import AutoHQQHFModel

# Load the tokenizer and the pre-quantized HQQ weights directly onto the GPU.
tok = AutoTokenizer.from_pretrained("drawais/Qwen3-14B-HQQ-INT4")
model = AutoHQQHFModel.from_quantized("drawais/Qwen3-14B-HQQ-INT4", device="cuda")
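
A minimal chat turn, assuming the HQQ-loaded model exposes the standard transformers generate() API; the prompt and generation settings are illustrative:

messages = [{"role": "user", "content": "Explain INT4 quantization in one sentence."}]
input_ids = tok.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to("cuda")

# Greedy decoding; adjust max_new_tokens / sampling to taste.
output_ids = model.generate(input_ids, max_new_tokens=256, do_sample=False)
print(tok.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True))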

License

Apache 2.0 (inherits from base model).
