needle-1M-bench + Qwen3 quantizations
Long-context faithfulness benchmark + audit-friendly Qwen3 quantized releases. Outputs ship; inputs are auditable.
AWQ INT4 quantization of Qwen/Qwen3-8B, built to run on a single 12 GB+ consumer GPU.
| Spec | Value |
|---|---|
| Source params | 8B |
| Quantized weights | ~5.7 GB on disk |
| Inference VRAM (incl. KV cache @ 32K context) | ~10 GB |
Fits any 12 GB+ consumer card (RTX 3060 12 GB, RTX 4070, RTX 5070, or similar), and can also run on some integrated mobile GPUs with enough shared memory. No homelab needed.
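As a quick pre-flight check, you can ask PyTorch how much VRAM the visible GPU has and compare against the ~10 GB figure above. A minimal sketch, assuming a CUDA build of torch is installed:

```python
import torch

# Rough sanity check: does the visible GPU have headroom for ~10 GB at 32K context?
# Integrated GPUs with shared system memory report capacity differently.
if torch.cuda.is_available():
    props = torch.cuda.get_device_properties(0)
    total_gb = props.total_memory / 1024**3
    print(f"{props.name}: {total_gb:.1f} GB total VRAM")
    print("Should fit at 32K context" if total_gb >= 12 else "May be tight at 32K context")
else:
    print("No CUDA GPU detected")
```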
Scored on drawais/needle-1M-bench-mvp (a 50K-token haystack built from real arXiv text); a minimal scoring sketch follows the results table:
| Metric | Score |
|---|---|
| Overall recall | 80.0% |
| Paper-anchored | 80.0% |
| Synthetic codes | 80.0% |
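Recall in this style of needle-in-a-haystack benchmark is typically exact-match: an example counts as correct if the expected needle string appears in the model's answer. A minimal scoring sketch under that assumption; the field names `"prompt"` and `"needle"` are hypothetical, not the benchmark's documented schema:

```python
def recall(examples, generate):
    """Exact-match recall: the fraction of examples whose expected needle
    string appears (case-insensitively) in the model's answer.

    `generate(prompt)` should return the model's answer as a string.
    Field names "prompt" and "needle" are placeholders; adapt them to the
    actual needle-1M-bench-mvp schema.
    """
    hits = sum(ex["needle"].lower() in generate(ex["prompt"]).lower() for ex in examples)
    return hits / len(examples)
```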
Serve with vLLM:

```bash
vllm serve drawais/Qwen3-8B-AWQ-INT4 --quantization awq_marlin --max-model-len 32768
```
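Once the server is up, it exposes an OpenAI-compatible API (default port 8000). A minimal client call, assuming the `openai` Python package is installed and the prompt text is illustrative:

```python
from openai import OpenAI

# The api_key is ignored by a local vLLM server, but the client requires a non-empty string.
client = OpenAI(base_url="http://localhost:8000/v1", api_key="not-needed")

resp = client.chat.completions.create(
    model="drawais/Qwen3-8B-AWQ-INT4",
    messages=[{"role": "user", "content": "In two sentences, what does AWQ INT4 quantization trade off?"}],
    max_tokens=256,
)
print(resp.choices[0].message.content)
```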
Or load directly with Transformers:

```python
from transformers import AutoTokenizer, AutoModelForCausalLM

# Tokenizer and AWQ INT4 weights; device_map="auto" places the model on the available GPU.
tok = AutoTokenizer.from_pretrained("drawais/Qwen3-8B-AWQ-INT4")
model = AutoModelForCausalLM.from_pretrained("drawais/Qwen3-8B-AWQ-INT4", device_map="auto")
```
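Continuing from the snippet above, a short generation example using the chat template (the prompt text is illustrative):

```python
# Uses `tok` and `model` loaded in the previous snippet.
messages = [{"role": "user", "content": "Summarize AWQ quantization in one sentence."}]
input_ids = tok.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt").to(model.device)
output = model.generate(input_ids, max_new_tokens=256)
# Decode only the newly generated tokens, skipping the prompt.
print(tok.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))
```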
Native context length: 40,960 tokens. For longer contexts, enable YaRN RoPE scaling per the base model's config; a sketch is shown below.
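One way to do that from Python is to attach a `rope_scaling` entry to the config at load time. This is a sketch, not the canonical recipe: the factor and field values below are assumptions, so follow the base Qwen3-8B card for the recommended settings.

```python
from transformers import AutoConfig, AutoModelForCausalLM

cfg = AutoConfig.from_pretrained("drawais/Qwen3-8B-AWQ-INT4")
# Assumed YaRN settings (4x scaling of a 32,768-token pre-training window);
# replace with the values recommended in the base Qwen3-8B model card.
cfg.rope_scaling = {"rope_type": "yarn", "factor": 4.0, "original_max_position_embeddings": 32768}
model = AutoModelForCausalLM.from_pretrained(
    "drawais/Qwen3-8B-AWQ-INT4", config=cfg, device_map="auto"
)
```

Note that longer contexts will also raise the ~10 GB VRAM estimate quoted above, since the KV cache grows with context length.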
License: Apache 2.0 (inherited from the base model).