needle-1M-bench + Qwen3 quantizations
Long-context faithfulness benchmark + audit-friendly Qwen3 quantized releases. Outputs ship; inputs are auditable.
HQQ INT4 quantization of deepseek-ai/DeepSeek-R1-Distill-Qwen-7B. Calibration-free companion to drawais/DeepSeek-R1-Distill-Qwen-7B-AWQ-INT4.
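Because HQQ is calibration-free, a quant like this can be reproduced from the source weights alone. A minimal sketch with the standard hqq API, assuming nbits=4 and group_size=64 (the card does not publish the exact settings):

import torch
from transformers import AutoModelForCausalLM
from hqq.core.quantize import BaseQuantizeConfig
from hqq.models.hf.base import AutoHQQHFModel

# Load the fp16 source weights.
model = AutoModelForCausalLM.from_pretrained(
    "deepseek-ai/DeepSeek-R1-Distill-Qwen-7B", torch_dtype=torch.float16
)
# HQQ quantizes from the weights alone; no calibration dataset is involved.
cfg = BaseQuantizeConfig(nbits=4, group_size=64)  # assumed settings
AutoHQQHFModel.quantize_model(model, quant_config=cfg,
                              compute_dtype=torch.float16, device="cuda")
AutoHQQHFModel.save_quantized(model, "DeepSeek-R1-Distill-Qwen-7B-HQQ-INT4")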
| Spec | Value |
|---|---|
| Source params | 7B (distilled from R1) |
| Quantized weights on disk | ~5.3 GB |
| Inference VRAM (incl. KV cache @ 32K context) | ~10 GB |
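The VRAM row can be sanity-checked with back-of-envelope arithmetic, assuming the Qwen2.5-7B geometry (28 layers, 4 KV heads via GQA, head_dim 128; verify against config.json) and an fp16 KV cache:

layers, kv_heads, head_dim, ctx, fp16_bytes = 28, 4, 128, 32768, 2
kv_cache = 2 * layers * kv_heads * head_dim * ctx * fp16_bytes  # K and V planes
print(f"KV cache @ 32K: {kv_cache / 2**30:.2f} GiB")  # -> 1.75 GiB
# ~5.3 GB weights + ~1.75 GiB KV cache + activations and CUDA overhead ~= 10 GB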
Best at the native ≤32K context; for longer contexts, serve the AWQ companion via vLLM (sketch below).
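A hedged serving sketch using vLLM's Python API; the model ID is the AWQ companion from this card, while max_model_len and the prompt are placeholder values to adjust against the AWQ card:

from vllm import LLM, SamplingParams

# AWQ companion for long-context serving; max_model_len is an assumed value.
llm = LLM(model="drawais/DeepSeek-R1-Distill-Qwen-7B-AWQ-INT4",
          quantization="awq", max_model_len=65536)
outs = llm.generate(["Prove that sqrt(2) is irrational."],
                    SamplingParams(temperature=0.6, max_tokens=512))
print(outs[0].outputs[0].text)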
To load the INT4 checkpoint directly with transformers + HQQ:

from transformers import AutoTokenizer
from hqq.models.hf.base import AutoHQQHFModel

# The tokenizer loads through the standard transformers API.
tok = AutoTokenizer.from_pretrained("drawais/DeepSeek-R1-Distill-Qwen-7B-HQQ-INT4")
# AutoHQQHFModel restores the pre-quantized HQQ weights; no calibration step.
model = AutoHQQHFModel.from_quantized("drawais/DeepSeek-R1-Distill-Qwen-7B-HQQ-INT4", device="cuda")
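Continuing from the snippet above, a quick generation check; the prompt is illustrative, and the sampling settings follow the commonly recommended R1-Distill values (temperature 0.6, top_p 0.95):

messages = [{"role": "user", "content": "What is 17 * 24?"}]
input_ids = tok.apply_chat_template(messages, add_generation_prompt=True,
                                    return_tensors="pt").to("cuda")
out = model.generate(input_ids, max_new_tokens=256, do_sample=True,
                     temperature=0.6, top_p=0.95)
# Decode only the newly generated tokens.
print(tok.decode(out[0][input_ids.shape[-1]:], skip_special_tokens=True))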
License: MIT (DeepSeek-R1-Distill series). The underlying base model (Qwen2.5-Math-7B) is Apache 2.0.
Base model: deepseek-ai/DeepSeek-R1-Distill-Qwen-7B