Qwen3-8B-AWQ-INT4

INT4 quantization of Qwen/Qwen3-8B. Built to run on a single 12 GB+ consumer GPU.
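The card doesn't state which toolkit produced the weights. For reference, a typical AWQ INT4 recipe with the AutoAWQ library looks like the sketch below; the group size, zero-point setting, and calibration defaults are assumptions, not this checkpoint's recorded settings:

```python
from awq import AutoAWQForCausalLM
from transformers import AutoTokenizer

model_path = "Qwen/Qwen3-8B"
# Assumed settings: 4-bit weights, group size 128, zero-point quantization
quant_config = {"zero_point": True, "q_group_size": 128, "w_bit": 4, "version": "GEMM"}

model = AutoAWQForCausalLM.from_pretrained(model_path)
tokenizer = AutoTokenizer.from_pretrained(model_path)

model.quantize(tokenizer, quant_config=quant_config)  # runs activation-aware calibration
model.save_quantized("Qwen3-8B-AWQ-INT4")
tokenizer.save_pretrained("Qwen3-8B-AWQ-INT4")
```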

Footprint

| Item | Value |
|---|---|
| Source params | 8B |
| Quantized weights | ~5.7 GB on disk |
| Inference VRAM (incl. KV cache @ 32K context) | ~10 GB |

Fits any consumer card with 12 GB+ of VRAM: RTX 3060 12 GB, 4060 Ti 16 GB, 4070, 5070, even some mobile GPUs with enough shared memory. No homelab needed.
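Where the ~10 GB figure comes from: a back-of-the-envelope estimate, assuming the base Qwen3-8B config (36 layers, 8 KV heads, head dim 128) and a BF16 KV cache. These are approximations, not measurements:

```python
# Rough VRAM estimate: INT4 weights + BF16 KV cache at 32K context.
# Layer/head counts are taken from the base Qwen3-8B config.json (assumption).
layers, kv_heads, head_dim = 36, 8, 128
bytes_per_token = 2 * layers * kv_heads * head_dim * 2  # K and V, 2 bytes each (BF16)
kv_gb = bytes_per_token * 32768 / 1024**3               # ~4.5 GB at 32K tokens
weights_gb = 5.7                                        # quantized weights on disk
print(f"KV cache: {kv_gb:.1f} GB, total: {weights_gb + kv_gb:.1f} GB")  # ~10 GB
```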

Bench

Scored on drawais/needle-1M-bench-mvp (50K-token haystacks built from real arXiv text):

| Metric | Score |
|---|---|
| Overall recall | 80.0% |
| Paper-anchored | 80.0% |
| Synthetic codes | 80.0% |
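For context, a needle test embeds a known fact at a random depth in a long document and checks whether the model can retrieve it. A minimal sketch of that setup; the needle string and filler text are illustrative, not the benchmark's actual harness:

```python
import random

# Hypothetical needle and haystack; the real benchmark uses arXiv text and its own needles.
needle = "The secret launch code is ZX-4417."
haystack = ["Filler paragraph about an unrelated topic."] * 2000  # stands in for ~50K tokens
haystack.insert(random.randrange(len(haystack)), needle)

prompt = "\n".join(haystack) + "\n\nQuestion: What is the secret launch code?\nAnswer:"
# Recall = fraction of trials where the model's output contains the needle's payload,
# e.g. the string "ZX-4417" appearing in the generated answer.
```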

Quick start

Serve with vLLM:

```bash
vllm serve drawais/Qwen3-8B-AWQ-INT4 --quantization awq_marlin --max-model-len 32768
```
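Once the server is up, it exposes an OpenAI-compatible API (vLLM defaults to http://localhost:8000/v1). A minimal client call, assuming the default port and no API key configured:

```python
from openai import OpenAI

# vLLM's OpenAI-compatible endpoint; the client requires an api_key but the server ignores it
client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")
resp = client.chat.completions.create(
    model="drawais/Qwen3-8B-AWQ-INT4",
    messages=[{"role": "user", "content": "In one sentence, what is AWQ quantization?"}],
)
print(resp.choices[0].message.content)
```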
Or load directly with Transformers:

```python
from transformers import AutoTokenizer, AutoModelForCausalLM

tok = AutoTokenizer.from_pretrained("drawais/Qwen3-8B-AWQ-INT4")
# device_map="auto" places the INT4 weights on the available GPU
model = AutoModelForCausalLM.from_pretrained("drawais/Qwen3-8B-AWQ-INT4", device_map="auto")
```
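And a short generation round trip with the objects loaded above (the prompt text is illustrative):

```python
messages = [{"role": "user", "content": "Give a one-line summary of INT4 quantization."}]
inputs = tok.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt").to(model.device)
out = model.generate(inputs, max_new_tokens=128)
print(tok.decode(out[0][inputs.shape[-1]:], skip_special_tokens=True))  # decode only the new tokens
```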

Context length

Native: 40,960 tokens. For longer contexts, enable YaRN RoPE scaling per the base model's documentation, for example:
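The base Qwen3 model card documents enabling YaRN in vLLM via a runtime flag; a sketch adapted to this checkpoint, assuming the factor-4 recipe from that card:

```bash
vllm serve drawais/Qwen3-8B-AWQ-INT4 \
  --quantization awq_marlin \
  --rope-scaling '{"rope_type":"yarn","factor":4.0,"original_max_position_embeddings":32768}' \
  --max-model-len 131072
```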

License

Apache 2.0 (inherits from base model).
