needle-1M-bench + Qwen3 quantizations
Long-context faithfulness benchmark + audit-friendly Qwen3 quantized releases. Outputs ship; inputs are auditable.
HQQ INT4 quantization of Qwen/Qwen3-32B. Companion to the AWQ release drawais/Qwen3-32B-AWQ-INT4.
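For reference, an HQQ INT4 quantization along these lines can be produced with transformers' built-in HQQ integration, which quantizes on the fly at load time. The exact recipe used for this repo (group size, axis, etc.) is not stated, so the values below are assumptions, not documented settings:

```python
from transformers import AutoModelForCausalLM, HqqConfig

# Hypothetical recipe: nbits=4 / group_size=64 are common HQQ defaults,
# NOT this repo's confirmed settings. Weights are quantized during loading.
cfg = HqqConfig(nbits=4, group_size=64)
model = AutoModelForCausalLM.from_pretrained(
    "Qwen/Qwen3-32B",
    device_map="cuda",
    quantization_config=cfg,
)
```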
| Spec | Value |
| --- | --- |
| Source params | 32B |
| Quantized weights (on disk) | ~19.7 GB |
| Inference VRAM (incl. KV cache @ 32K context) | ~24 GB |
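For hardware sizing, the KV-cache portion of that VRAM figure can be sanity-checked with back-of-envelope arithmetic. A sketch assuming Qwen3-32B's published shape (64 layers, 8 KV heads via GQA, head dim 128; verify against the repo's config.json). Depending on KV dtype this adds roughly 4–8 GiB on top of the ~19.7 GB of weights:

```python
# Back-of-envelope KV-cache sizing. Architecture numbers below are
# assumptions taken from Qwen3-32B's config; check config.json first.
NUM_LAYERS = 64    # assumed: num_hidden_layers
NUM_KV_HEADS = 8   # assumed: num_key_value_heads (GQA)
HEAD_DIM = 128     # assumed: head_dim

def kv_cache_gib(context_len: int, bytes_per_elem: int = 2) -> float:
    """GiB needed for K and V across all layers at a given context length."""
    per_token = 2 * NUM_LAYERS * NUM_KV_HEADS * HEAD_DIM * bytes_per_elem
    return context_len * per_token / 2**30

print(f"fp16 KV @ 32K: {kv_cache_gib(32_768):.1f} GiB")     # ~8 GiB
print(f"fp8  KV @ 32K: {kv_cache_gib(32_768, 1):.1f} GiB")  # ~4 GiB
```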
The companion AWQ release drawais/Qwen3-32B-AWQ-INT4 scored 100.0% overall on drawais/needle-1M-bench-mvp (50K-token haystack built from real arXiv text). Same base model and 4-bit weights, so comparable quality is expected here; a direct HQQ score will land once a vLLM-compatible HQQ pathway is finalized.
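For context, a needle-in-a-haystack check follows a simple loop: plant a known fact at some depth in a long document, ask the model to retrieve it, and score by match. A minimal illustrative sketch, not the actual needle-1M-bench-mvp harness; the needle, prompt, and scoring rule here are placeholders:

```python
# Illustrative needle-in-a-haystack check. The needle text, prompt format,
# and substring scoring are placeholders, NOT the benchmark's real harness.
def run_needle_test(generate, haystack: str, depth: float = 0.5) -> bool:
    needle = "The secret passphrase is 'marmalade-42'."
    question = "What is the secret passphrase?"
    cut = int(len(haystack) * depth)              # insertion point at `depth`
    doc = haystack[:cut] + " " + needle + " " + haystack[cut:]
    prompt = f"{doc}\n\nQuestion: {question}\nAnswer:"
    return "marmalade-42" in generate(prompt)     # substring-match scoring

# `generate` is any callable mapping a prompt string to a model completion.
```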
```python
from transformers import AutoTokenizer
from hqq.models.hf.base import AutoHQQHFModel

# Load the tokenizer and the HQQ-quantized weights from the Hub onto the GPU.
tok = AutoTokenizer.from_pretrained("drawais/Qwen3-32B-HQQ-INT4")
model = AutoHQQHFModel.from_quantized("drawais/Qwen3-32B-HQQ-INT4", device="cuda")
```
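A quick smoke test once the model is loaded, assuming the HQQ-patched model keeps the standard transformers generate() interface (the prompt is illustrative):

```python
# Smoke test: chat-templated prompt, short generation, decode the new tokens.
messages = [{"role": "user", "content": "Summarize HQQ quantization in one sentence."}]
inputs = tok.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to("cuda")
out = model.generate(inputs, max_new_tokens=128)
print(tok.decode(out[0][inputs.shape[-1]:], skip_special_tokens=True))
```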
License
Apache 2.0 (inherited from the base model).
Base model
Qwen/Qwen3-32B