Qwen3-32B-HQQ-INT4

An INT4 HQQ quantization of Qwen/Qwen3-32B. Companion release to drawais/Qwen3-32B-AWQ-INT4.

Footprint

Source params: 32B
Quantized weights: ~19.7 GB on disk
Inference VRAM (incl. KV cache @ 32K context): ~24 GB
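The KV-cache contribution to the VRAM figure can be sanity-checked with a back-of-the-envelope estimate. A minimal sketch, assuming a GQA architecture with 64 layers, 8 KV heads, and head dim 128 for Qwen3-32B (these counts are assumptions, not values stated in this card); actual usage depends on the runtime and the KV-cache dtype:

```python
# Rough KV-cache size for a grouped-query-attention model.
# Layer/head counts below are assumed Qwen3-32B values, not from this card.
def kv_cache_bytes(layers, kv_heads, head_dim, seq_len, bytes_per_elem=2):
    # 2x for the separate K and V tensors stored per layer; fp16 = 2 bytes/elem
    return 2 * layers * kv_heads * head_dim * seq_len * bytes_per_elem

cache = kv_cache_bytes(layers=64, kv_heads=8, head_dim=128, seq_len=32_768)
print(f"{cache / 2**30:.1f} GiB")  # → 8.0 GiB at fp16
```

Runtimes that quantize or page the KV cache will land well below this fp16 upper bound.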

Bench

The companion AWQ release drawais/Qwen3-32B-AWQ-INT4 scored 100.0% overall on drawais/needle-1M-bench-mvp (50K-token haystack, real arXiv text). Since both builds quantize the same base model, comparable quality is expected here. A direct HQQ score will land once a vLLM-compatible HQQ pathway is finalized.

Quick start

from transformers import AutoTokenizer
from hqq.models.hf.base import AutoHQQHFModel

tok = AutoTokenizer.from_pretrained("drawais/Qwen3-32B-HQQ-INT4")
# Loads the pre-quantized HQQ weights directly onto the GPU
model = AutoHQQHFModel.from_quantized("drawais/Qwen3-32B-HQQ-INT4", device="cuda")

License

Apache 2.0 (inherits from base model).
