needle-1M-bench + Qwen3 quantizations
Collection: long-context faithfulness benchmark + audit-friendly Qwen3 quantized releases. Outputs ship; inputs are auditable.
HQQ INT4 quantization of Qwen/Qwen2.5-Coder-14B-Instruct. A calibration-free companion to drawais/Qwen2.5-Coder-14B-Instruct-AWQ-INT4.
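HQQ is calibration-free: weights are quantized directly from their values, with no calibration dataset. A minimal sketch of how an equivalent INT4 quantization can be produced on the fly from the instruct model with transformers' `HqqConfig`; the `nbits`/`group_size` values are illustrative and not necessarily the settings used for this release:

```python
import torch
from transformers import AutoModelForCausalLM, HqqConfig

# Quantize the full-precision weights to 4-bit HQQ at load time;
# no calibration data is involved. Settings below are illustrative.
quant_config = HqqConfig(nbits=4, group_size=64)
model = AutoModelForCausalLM.from_pretrained(
    "Qwen/Qwen2.5-Coder-14B-Instruct",
    torch_dtype=torch.float16,
    device_map="cuda",
    quantization_config=quant_config,
)
```

The released repo ships the already-quantized weights, so end users only need the loading snippet further down.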
| Spec | Value |
| --- | --- |
| Source params | 14B (code-specialized) |
| Quantized weights | ~9.5 GB on disk |
| Inference VRAM (incl. KV cache @ 32K context) | ~16 GB |
Best at the native ≤32K context. For longer contexts, use the AWQ companion via vLLM, as sketched below.
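For reference, a minimal vLLM sketch for the AWQ companion; the context length and sampling settings below are illustrative assumptions, not tested release settings:

```python
from vllm import LLM, SamplingParams

# Serve the AWQ companion with vLLM's built-in AWQ support.
# max_model_len is an illustrative value; long-context behavior
# depends on the companion repo's own configuration.
llm = LLM(
    model="drawais/Qwen2.5-Coder-14B-Instruct-AWQ-INT4",
    quantization="awq",
    max_model_len=65536,
)

params = SamplingParams(temperature=0.2, max_tokens=256)
print(llm.generate(["Summarize this repository's layout:"], params)[0].outputs[0].text)
```

To run this HQQ checkpoint directly with transformers instead: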
```python
from transformers import AutoTokenizer
from hqq.models.hf.base import AutoHQQHFModel

# Load the tokenizer and the pre-quantized HQQ INT4 weights onto the GPU.
tok = AutoTokenizer.from_pretrained("drawais/Qwen2.5-Coder-14B-Instruct-HQQ-INT4")
model = AutoHQQHFModel.from_quantized("drawais/Qwen2.5-Coder-14B-Instruct-HQQ-INT4", device="cuda")
```
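A minimal generation sketch using the loaded model; the prompt and generation settings are illustrative:

```python
# Build a chat-formatted prompt and generate with the quantized model.
messages = [{"role": "user", "content": "Write a Python function that reverses a linked list."}]
inputs = tok.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt").to("cuda")
out = model.generate(inputs, max_new_tokens=256)
print(tok.decode(out[0][inputs.shape[-1]:], skip_special_tokens=True))
```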
Apache 2.0 license, inherited from the base model.
Base model: Qwen/Qwen2.5-14B