needle-1M-bench + Qwen3 quantizations
Collection: long-context faithfulness benchmark + audit-friendly Qwen3 quantized releases. Outputs ship; inputs are auditable.
INT4 quantization of Qwen/Qwen3-32B. Built to run on a single 24 GB+ GPU.
| Property | Value |
|---|---|
| Source params | 32B |
| Quantized weights | ~18 GB on disk |
| Inference VRAM (incl. KV cache @ 32K context) | ~24 GB |
Fits any 24 GB+ GPU: RTX 3090 / 4090 / 5090, A5000, A6000, A100 40GB, etc.
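As a sanity check on the VRAM row above, here is a back-of-envelope KV-cache estimate. The layer/head counts are assumptions taken from the base model's published config (64 layers, 8 KV heads via GQA, head dim 128), not measured from this release:

```python
# Rough KV-cache size for a 32K-token context with an fp16/bf16 cache.
# Architecture numbers below are assumptions from the base model's config.json.
layers, kv_heads, head_dim = 64, 8, 128
seq_len, bytes_per_elem = 32_768, 2

# Factor of 2 covers both the K and the V tensors per layer.
kv_bytes = 2 * layers * kv_heads * head_dim * seq_len * bytes_per_elem
kv_gib = kv_bytes / 2**30
print(f"KV cache @ 32K: ~{kv_gib:.1f} GiB")  # ~8.0 GiB
```

The cache term scales linearly with context length, so lowering `--max-model-len` is the quickest way to reclaim VRAM on a 24 GB card.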
Scored on drawais/needle-1M-bench-mvp (50K-token haystack, real arXiv text):
| Metric | Score |
|---|---|
| Overall recall | 100.0% |
| Paper-anchored | 100.0% |
| Synthetic codes | 100.0% |
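For intuition, a needle-in-a-haystack recall test works roughly like this. The sketch below is a minimal illustration under my own assumptions, not the actual drawais/needle-1M-bench-mvp harness:

```python
# Minimal needle-in-a-haystack recall check (illustrative sketch only).

def build_haystack(filler: str, needle: str, n_chunks: int, depth: float) -> str:
    """Embed `needle` at relative depth (0.0 = start, 1.0 = end) in filler text."""
    chunks = [filler] * n_chunks
    chunks.insert(int(depth * n_chunks), needle)
    return "\n".join(chunks)

def recall_score(answer: str, expected: str) -> float:
    """1.0 if the expected string appears verbatim in the answer, else 0.0."""
    return 1.0 if expected in answer else 0.0

needle = "The secret code is QX-7741."
prompt = build_haystack("Filler sentence about transformers.", needle, 100, 0.5)
# answer = model(prompt + "\nWhat is the secret code?")
score = recall_score("The text says the secret code is QX-7741.", "QX-7741")
print(score)  # 1.0
```

The benchmark's "synthetic codes" row corresponds to needles of this kind; "paper-anchored" needles are tied to facts in the surrounding arXiv text.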
Serve with vLLM:

```bash
vllm serve drawais/Qwen3-32B-AWQ-INT4 --quantization awq_marlin --max-model-len 32768
```
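Once running, the server exposes vLLM's OpenAI-compatible chat API. A minimal request sketch, assuming vLLM's default endpoint of `http://localhost:8000/v1`:

```python
# Build a chat-completion request for the vLLM OpenAI-compatible endpoint.
import json

payload = {
    "model": "drawais/Qwen3-32B-AWQ-INT4",
    "messages": [{"role": "user", "content": "Summarize RoPE in one sentence."}],
    "max_tokens": 128,
}
body = json.dumps(payload)
# POST `body` with Content-Type: application/json to
# http://localhost:8000/v1/chat/completions
print(body)
```

Any OpenAI-compatible client works the same way; only the base URL and model name above are specific to this deployment.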
Or load directly with Transformers:

```python
from transformers import AutoTokenizer, AutoModelForCausalLM

tok = AutoTokenizer.from_pretrained("drawais/Qwen3-32B-AWQ-INT4")
model = AutoModelForCausalLM.from_pretrained(
    "drawais/Qwen3-32B-AWQ-INT4",
    device_map="auto",
)
```
Native context: 40,960 tokens (inherited from the base model). For longer contexts, enable YaRN RoPE scaling per the base model's config.
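The Qwen3 model cards describe YaRN via a `rope_scaling` entry in `config.json`; the values below are taken from that guidance as an assumption, so verify them against Qwen/Qwen3-32B's README before use:

```json
{
  "rope_scaling": {
    "rope_type": "yarn",
    "factor": 4.0,
    "original_max_position_embeddings": 32768
  }
}
```

vLLM can also take the same JSON at launch via its `--rope-scaling` flag together with a larger `--max-model-len`, avoiding edits to the checkpoint's config.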
License: Apache 2.0 (inherited from the base model).

Base model: Qwen/Qwen3-32B