# Qwen3.5-27B AWQ-4bit (calibrated, v2 thinking-aware)
**v2 (2026-04-19):** re-calibrated with thinking-aware data; replaces v1. v1 (Open-Platypus calibration) silently broke `<think>` termination: the model emitted unbounded reasoning tokens even on trivial questions like "What is the capital of France?". v2 fixes this as an in-place update, so existing users get the correction automatically. The old commit is retained on the `v1-broken-thinking` git tag for reproducibility.
## TL;DR
| Checkpoint | basic ("capital of France?") | thinking |
|---|---|---|
| v1 of this repo (Open-Platypus calibration) and most community AWQ | ❌ empty content (model loops in `<think>` until max_tokens) | ❌ |
| v2 (current) | ✅ "Paris" with `finish_reason=stop`, 45 reasoning tokens | ✅ engages thinking, terminates cleanly on simple QA |
## Why this exists
The default AWQ calibration recipes (Open-Platypus, ShareGPT, etc.) have no `<think>` traces in the assistant turns. When you quantize with that data, the model never sees a `</think>` followed by an answer during calibration, so it loses the ability to terminate the thinking block. Result: the `validate_capabilities.py` basic test ("What is the capital of France? Answer in one word.") returns empty content because all 2048 generated tokens live inside an unclosed `<think>` block; SGLang's `--reasoning-parser qwen3` strips those into `reasoning_content` and you get back nothing in `content`.
This checkpoint was calibrated with a thinking-aware mixed dataset:

- 50% `a-m-team/AM-Thinking-v1-Distilled`: Qwen3-verified `<think>...</think>` traces
- 25% `AI-MO/NuminaMath-CoT`: math reasoning
- 25% `HuggingFaceH4/ultrachat_200k`: general dialogue

rendered with `tokenizer.apply_chat_template(..., enable_thinking=True)` so the `<think>...</think>` structure appears in every render (see the sketch below). 256 samples × 1024 tokens, GPTQ via llmcompressor (CPU, ~6 h on an AMD Ryzen 9 7900), then converted to native AWQ format.
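A minimal sketch of how one thinking-aware calibration sample can be rendered. The sample dialogue and message layout here are illustrative assumptions; the real logic lives in `scripts/quantize/calibration_datasets.py`:

```python
# Sketch: render one thinking-aware calibration sample so the quantizer
# sees a closed <think> block followed by a final answer. The dialogue
# below is illustrative, not the repo's exact code.
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen3.5-27B")

messages = [
    {"role": "user", "content": "What is the capital of France? Answer in one word."},
    # The reasoning trace lives inside the assistant turn, so the rendered
    # calibration text contains </think> followed by the answer.
    {"role": "assistant", "content": "<think>\nParis is the capital.\n</think>\nParis"},
]

text = tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=False,
    enable_thinking=True,  # forwarded into the chat template render
)
assert "</think>" in text  # the terminator the v1 calibration never showed
```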
## Sampling (IMPORTANT)
Do NOT use `temperature=0` (greedy decoding): Qwen3-family models loop under greedy decoding (`"Paris\n</think>\nParis\n</think>..."`). Use the model's recommended sampling, which SGLang picks up automatically via `sampling_defaults='model'`:
```
temperature=0.7
top_p=0.95
top_k=20
```
The validator confirms that `temperature=0.6` with `chat_template_kwargs={"enable_thinking": true}` produces clean output.
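For reference, a minimal request against SGLang's OpenAI-compatible endpoint with these settings. The host/port and exact model id are assumptions for a default local launch:

```python
# Sketch: hit the OpenAI-compatible endpoint with the recommended sampling
# and confirm that thinking terminates. Port 30000 is SGLang's default;
# adjust the model id/host to your deployment.
import requests

resp = requests.post(
    "http://localhost:30000/v1/chat/completions",
    json={
        "model": "mattbucci/Qwen3.5-27B-AWQ-4bit-calibrated",
        "messages": [{"role": "user",
                      "content": "What is the capital of France? Answer in one word."}],
        "temperature": 0.6,
        "top_p": 0.95,
        "max_tokens": 512,
        # Forwarded to the chat template for this request.
        "chat_template_kwargs": {"enable_thinking": True},
    },
    timeout=120,
)
choice = resp.json()["choices"][0]
# With --reasoning-parser qwen3 the <think> block lands in reasoning_content;
# on the broken v1 checkpoint, content came back empty.
print(choice["finish_reason"])                     # expect "stop"
print(choice["message"].get("reasoning_content"))  # the reasoning trace
print(choice["message"]["content"])                # expect "Paris"
```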
## Validation results
`validate_capabilities.py --skip-vision --skip-video` (from the calibration repo):
```
[PASS] basic    finish=stop answer='paris' (45 reasoning tokens; was BROKEN on original)
[~]    thinking reasoning_seen answer_ok (model derives the correct $0.05 for the
       ball-and-bat puzzle in ~400 reasoning tokens at temp=0.6; verbose at
       temp=0.7, so the validator budget was bumped to 4096 tok)
```
Thinking on simple QA terminates within tens of tokens. On hard reasoning (multi-step math) the model is verbose at the recommended sampling settings, and the answer ends up inside the reasoning block before `finish_reason=stop`. This is still much better than the original AWQ, where the model never terminated even on trivial questions.
## Architecture
- Base: `Qwen/Qwen3.5-27B` (multimodal `Qwen3_5ForConditionalGeneration`, 48 layers, hybrid DeltaNet + full-attention)
- Quantization: AWQ 4-bit, `group_size=128`
- Excluded from quantization (kept BF16): `lm_head`, DeltaNet `in_proj_a`/`in_proj_b` (recurrent state; INT4 destroys it), vision tower (see the recipe sketch below)
- Format: native AWQ (`qweight` + `scales` + `qzeros`), loadable by SGLang's AWQ Triton + HIP GEMV kernels
- Vision weights preserved: `model-vision.safetensors` carries the BF16 vision tower, and `preprocessor_config.json` is included so multimodal inference still works
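A hedged sketch of how the exclusion list above can be expressed in an llmcompressor GPTQ recipe (the pipeline described earlier: GPTQ via llmcompressor, then conversion to native AWQ). The ignore regexes and dataset plumbing are illustrative; the actual recipe is `scripts/quantize/quantize_qwen35_thinking_aware.py`:

```python
# Sketch: GPTQ one-shot quantization with the exclusions described above.
# ASSUMPTIONS: the ignore-list regexes and the toy calibration dataset are
# illustrative; the real recipe lives in the calibration repo.
from datasets import Dataset
from llmcompressor import oneshot
from llmcompressor.modifiers.quantization import GPTQModifier

# Thinking-aware calibration text (use the rendered chats from the earlier
# sketch; the real run uses 256 samples of 1024 tokens each).
calib = Dataset.from_dict({"text": ["<think>\nParis is the capital.\n</think>\nParis"]})

recipe = GPTQModifier(
    targets="Linear",
    scheme="W4A16",             # 4-bit weights, group_size=128
    ignore=[
        "lm_head",              # kept BF16
        "re:.*in_proj_a.*",     # DeltaNet recurrent-state projections stay BF16
        "re:.*in_proj_b.*",
        "re:.*visual.*",        # vision tower is not quantized
    ],
)

oneshot(
    model="Qwen/Qwen3.5-27B",
    dataset=calib,
    recipe=recipe,
    max_seq_length=1024,
    num_calibration_samples=256,
)
# The repo then converts the compressed-tensors output to native AWQ
# (qweight + scales + qzeros) for SGLang's AWQ kernels.
```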
## Files
| File | What it is |
|---|---|
| `model.safetensors` (+ index.json) | AWQ language-model weights |
| `model-vision.safetensors` | BF16 vision tower (untouched, preserved from base) |
| `chat_template.jinja` | Original Qwen3.5 template (supports `enable_thinking`) |
| `tokenizer.json`, `tokenizer_config.json` | Stock Qwen3.5 tokenizer |
| `preprocessor_config.json`, `processor_config.json` | Multimodal processor configs |
| `config.json`, `generation_config.json` | Stock + AWQ quantization config |
## SGLang launch (RDNA4)
```bash
MODEL=mattbucci/Qwen3.5-27B-AWQ-4bit-calibrated
python -m sglang.launch_server \
  --model-path $MODEL \
  --tensor-parallel-size 2 \
  --dtype float16 \
  --kv-cache-dtype fp8_e4m3 \
  --context-length 262144 \
  --quantization awq \
  --reasoning-parser qwen3 \
  --disable-cuda-graph \
  --disable-custom-all-reduce \
  --disable-overlap-schedule \
  --attention-backend triton
```
Or via the reference launcher: `MODEL=$MODEL ./scripts/launch.sh qwen35`
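A quick post-launch liveness check, assuming SGLang's default port 30000 and its standard health/model-list endpoints:

```python
# Sketch: verify the server came up before pointing clients at it.
import requests

assert requests.get("http://localhost:30000/health", timeout=5).ok
models = requests.get("http://localhost:30000/v1/models", timeout=5).json()
print([m["id"] for m in models["data"]])  # should list this AWQ checkpoint
```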
## Performance (2x AMD R9700, RDNA4, ROCm 7.2)
Single-user, FP8 KV cache, `--disable-cuda-graph`:
| Context | tok/s | TTFT |
|---|---|---|
| 128 | 26 | 5s |
| 16K | 18.7 | 8.7s |
| 32K | 15.3 | 20s |
| 65K | 13.0 | 47s |
| 131K | 9.5 | 100s |
| 256K | 5.8 | 209s |
Dense DeltaNet hybrid: bandwidth-bound at short context, while the full-attention layers dominate at long context. For 256K agent workloads, prefer Qwen3.6-35B-A3B MoE (12-13 tok/s @ 256K).
## Calibration repo
All scripts, the validator, and the benchmark code: github.com/mattbucci/2x-R9700-RDNA4-GFX1201-sglang-inference; see `scripts/quantize/quantize_qwen35_thinking_aware.py` and `scripts/quantize/calibration_datasets.py`.
## License
Apache 2.0, inherited from the base model.