# Qwen3.5-9B - TevunahAi Multi-Level GPTQ
| Property | Value |
|---|---|
| Base Model | Qwen/Qwen3.5-9B |
| Architecture | HYBRID (Gated DeltaNet + Full Attention, VLM) |
| Parameters | 9B (dense) |
| Context Length | 262K (native), extensible to ~1M |
| Quantization | TevunahAi Multi-Level GPTQ + EoRA |
| Original Size | ~18 GB (BF16) |
| Quantized Size | 7.7 GB |
| Compression | 57.2% reduction |
## Model Description
Qwen3.5-9B is Alibaba Cloud's latest-generation multimodal foundation model, featuring a hybrid architecture that combines Gated Delta Networks (linear attention) with standard full attention in a 3:1 pattern. It is natively multimodal — trained from scratch on text, images, and video through early fusion of multimodal tokens, not bolted-on adapters.
This quantization preserves the full hybrid architecture with precision-aware layer treatment, applying EoRA rank-64 error correction across all quantizable layers for maximum quality retention.
## Architecture Details
Qwen3.5-9B uses a 32-layer text decoder arranged in repeating 4-layer blocks:
```
Block Pattern (×8): [DeltaNet+MLP] × 3 → [FullAttn+MLP] × 1
```
| Component | Specification |
|---|---|
| Total Decoder Layers | 32 |
| Gated DeltaNet Layers | 24 (linear attention, O(n) complexity) |
| Full Attention Layers | 8 (GQA: 16 Q heads, 4 KV heads, head_dim=256) |
| Dense MLP | SiLU activation, intermediate_size=12,288 |
| Hidden Size | 4,096 |
| Vocab Size | 248K |
| Vision Encoder | DeepStack ViT, 27 layers, patch_size=16, hidden_size=1,152 |
| Multimodal RoPE | Interleaved sections [11, 11, 10] |
| Multi-Token Prediction | 1 MTP layer |
| Languages | 201 languages and dialects |
**Why This Architecture Matters:**
- Gated DeltaNet provides constant memory complexity for long-range context — 262K native context with linear scaling instead of quadratic
- Full attention layers at every 4th position ensure precise information retrieval
- 9B dense model outperforms previous-generation Qwen3-30B (3× larger) on GPQA, IFEval, and LongBench
- Natively processes text, images, and video from unified weights
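The memory contrast between the two layer types can be sketched numerically. The head counts and dimensions below come from the table above; the DeltaNet state shape is our illustrative assumption, not taken from this card:

```python
BYTES = 2  # bf16

def kv_cache_bytes(seq_len, kv_heads=4, head_dim=256):
    """One full-attention layer: K and V caches grow linearly with seq_len."""
    return 2 * kv_heads * head_dim * seq_len * BYTES

def deltanet_state_bytes(kv_heads=4, head_dim=256):
    """One DeltaNet layer: a fixed-size per-head state matrix (assumed shape),
    independent of sequence length."""
    return kv_heads * head_dim * head_dim * BYTES

for n in (4_096, 262_144):
    print(f"{n:>7} tokens: attn KV {kv_cache_bytes(n) / 2**20:7.1f} MiB, "
          f"DeltaNet state {deltanet_state_bytes() / 2**20:7.1f} MiB")
```

At 262K tokens a single full-attention layer's cache reaches 1 GiB while the DeltaNet state stays at 0.5 MiB, which is why only 8 of the 32 layers pay the long-context memory cost.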
## Quantization Strategy
TevunahAi Multi-Level GPTQ with EoRA rank-64 applied to ALL quantizable layers. No MoE routing in the 9B variant means EoRA is affordable across the full model, providing comprehensive error correction.
| Component | Precision | Rationale |
|---|---|---|
| Full Attention (q/k/v/o_proj) — 8 layers | INT8 + EoRA rank-64 | Quality preservation for critical attention layers |
| Linear Attention (in_proj_qkv/in_proj_z/out_proj) — 24 layers | INT4 + EoRA rank-64 | DeltaNet projections, EoRA compensates for aggressive compression |
| Dense MLP (gate/up/down_proj) — 32 layers | INT4 + EoRA rank-64 | Standard MLP compression with error correction |
| Vision Encoder — 27 layers | FP16 (unquantized) | Full precision preserved for visual understanding |
| Embeddings & LM Head | FP16 | Preserved for accuracy |
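The table reduces to a simple name-based policy. The hypothetical helper below (names and indexing are ours, not GPTQModel's API) makes the mapping concrete, using the 3:1 block pattern to locate the INT8 full-attention layers:

```python
FULL_ATTN_LAYERS = set(range(3, 32, 4))  # layers 3, 7, ..., 31 per the 3:1 pattern

FP16_PREFIXES = ("visual.", "model.embed_tokens", "lm_head")
ATTN_PROJS = ("q_proj", "k_proj", "v_proj", "o_proj")

def precision_for(module_name, layer_idx=None):
    """Return the target precision for one linear module (illustrative policy)."""
    if module_name.startswith(FP16_PREFIXES):
        return "fp16"  # vision encoder, embeddings, LM head stay unquantized
    if layer_idx in FULL_ATTN_LAYERS and any(p in module_name for p in ATTN_PROJS):
        return "int8"  # full-attention projections get INT8 + EoRA
    return "int4"      # DeltaNet projections and dense MLP get INT4 + EoRA

print(precision_for("model.layers.3.self_attn.q_proj", 3))         # int8
print(precision_for("model.layers.0.linear_attn.in_proj_qkv", 0))  # int4
print(precision_for("visual.blocks.0.mlp.fc1"))                    # fp16
```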
## Calibration
- 2,048 samples (8× industry standard of 256)
- 4,096 sequence length
- Premium calibration for superior quality retention
## Performance Benchmarks
**Original Model (Qwen benchmarks):**
| Benchmark | Score |
|---|---|
| MMLU-Pro | 82.5% |
| GPQA Diamond | 81.7% |
| HMMT Feb/Nov | 90% / 90% |
| LiveCodeBench v6 | 82.7% |
| MMMU-Pro (vision) | 70.1% |
| MathVision | 78.9% |
| LongBench v2 | 55.2% |
| VideoMME (w/ subs) | 84.5% |
| MMMLU (multilingual) | 81.2% |
**Notable Comparisons:**
- Outperforms OpenAI GPT-OSS-120B (13× larger) on MMLU-Pro, GPQA Diamond, and MMMLU
- Beats GPT-5-Nano on MMMU-Pro by 12.9 points (70.1 vs 57.2, a 22.5% relative gain)
- Outperforms previous Qwen3-30B on GPQA Diamond (+8 pts), IFEval (+3 pts), LongBench v2 (+10 pts)
**Expected Quantized Performance:**
- Reasoning tasks: 97-99% of baseline (EoRA recovery on attention layers)
- Code generation: 96-98% of baseline
- Long context: 96-99% of baseline (DeltaNet advantage + EoRA compensation)
- Vision tasks: 99-100% of baseline (encoder preserved at FP16)
- General chat: 98-99% of baseline
Formal benchmarks pending — inference quality verified manually.
## Usage
**GPTQModel (Recommended):**

```python
from gptqmodel import GPTQModel
from transformers import AutoTokenizer

model = GPTQModel.load(
    "TevunahAi/Qwen3.5-9B-TevunahAi-GPTQ",
    trust_remote_code=True,
)
tokenizer = AutoTokenizer.from_pretrained(
    "TevunahAi/Qwen3.5-9B-TevunahAi-GPTQ",
    trust_remote_code=True,
)

# Generate
prompt = "Explain the difference between linear and quadratic attention complexity."
inputs = tokenizer(prompt, return_tensors="pt").to("cuda")
output = model.generate(**inputs, max_new_tokens=256)
print(tokenizer.decode(output[0]))
```
**With Thinking Mode (default):**

```python
messages = [{"role": "user", "content": "Solve: What is the integral of x²·sin(x)?"}]
tokenized = tokenizer.apply_chat_template(
    messages,
    tokenize=True,
    add_generation_prompt=True,
    return_tensors="pt",
).to("cuda")

outputs = model.generate(
    tokenized,
    max_new_tokens=2048,
    do_sample=True,  # required for temperature/top_p/top_k to take effect
    temperature=1.0,
    top_p=0.95,
    top_k=20,
    # presence_penalty is a vLLM/OpenAI-style sampling knob; transformers'
    # generate() does not accept it, so it is omitted here
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
**Direct Response (disable thinking):**

```python
tokenized = tokenizer.apply_chat_template(
    messages,
    tokenize=True,
    enable_thinking=False,
    add_generation_prompt=True,
    return_tensors="pt",
).to("cuda")

outputs = model.generate(
    tokenized,
    max_new_tokens=128,
    do_sample=False,
    num_beams=1,
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
**Installation:**

```shell
pip install gptqmodel "transformers>=4.48"
```

(The version specifier must be quoted; an unquoted `>=4.48` is parsed by the shell as an output redirection.)
**vLLM (Experimental):**

```shell
pip install -U "vllm>=0.12.0"

vllm serve TevunahAi/Qwen3.5-9B-TevunahAi-GPTQ \
    --max-num-seqs 8 \
    --tensor-parallel-size 1 \
    --max-model-len 32768 \
    --trust-remote-code
```
## Memory Requirements
**Inference (quantized model):**
- Minimum: 8-10 GB VRAM (short context)
- Recommended: 12-16 GB VRAM
- For 262K context: 24-32 GB+ VRAM
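A back-of-envelope check (ours, not from the card) shows where the long-context budget goes: only the 8 full-attention layers keep a KV cache that grows with sequence length, using the GQA dimensions from the specifications table:

```python
def full_attn_kv_gib(seq_len, layers=8, kv_heads=4, head_dim=256, bytes_per=2):
    """Total K+V cache across the full-attention layers, in GiB (bf16)."""
    return 2 * layers * kv_heads * head_dim * seq_len * bytes_per / 2**30

print(f"{full_attn_kv_gib(262_144):.1f} GiB of KV cache at 262K tokens")
```

That is roughly 8 GiB of cache on top of the 7.7 GB of quantized weights at full context, before counting activations and the DeltaNet states, which is why 262K inference needs the larger VRAM tier.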
**Quantization (reproduction):**
- Hardware Used: Dual Xeon Max 9480 (128GB HBM2e + 256GB DDR5) + RTX 5000 Ada 32GB
- Time: 47.6 minutes (0.8 hours)
## Quantization Details
| Specification | Value |
|---|---|
| Method | GPTQ + EoRA rank-64 |
| Quantizer | GPTQModel |
| Calibration Samples | 2,048 (8× industry standard) |
| Sequence Length | 4,096 tokens |
| desc_act | True (activation ordering) |
| sym | True (symmetric quantization) |
| group_size | 128 |
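To make `group_size` concrete: each row of a quantized weight matrix stores one scale per 128 input channels (with `sym=True`, no zero-point offsets are needed). A quick count of the per-row metadata, using the MLP dimensions from the architecture table (the helper is ours, for illustration):

```python
def groups_per_row(in_features, group_size=128):
    """Number of quantization groups (scales) per output row."""
    assert in_features % group_size == 0
    return in_features // group_size

# down_proj maps intermediate_size -> hidden_size, so in_features = 12_288
print(groups_per_row(12_288))  # scales per down_proj output row
print(groups_per_row(4_096))   # scales per gate/up_proj output row
```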
## Use Cases
**Ideal for:**
- Mathematical reasoning (HMMT-level problems)
- Code generation & debugging (LiveCodeBench capable)
- Long-context analysis (262K-1M tokens, linear scaling)
- Multimodal understanding (text + images + video)
- Agentic applications (tool use, multi-step reasoning)
- Multilingual deployment (201 languages)
- Resource-constrained deployment on consumer GPUs
## Technical Specifications
| Specification | Value |
|---|---|
| Model Family | Qwen3.5 |
| Variant | 9B (Dense, Hybrid DeltaNet) |
| Total Parameters | 9B |
| Total Layers | 32 (text) + 27 (vision) |
| DeltaNet Layers | 24 |
| Full Attention Layers | 8 |
| Attention Heads | 16 Q, 4 KV (GQA) |
| Head Dimension | 256 |
| Hidden Size | 4,096 |
| Intermediate Size | 12,288 |
| Context Length | 262K (native), ~1M (extended) |
| Vocab Size | 248K |
| Supported Languages | 201 |
| Multimodal | Text, Image, Video (native) |
## License
Apache 2.0
## Citation
```bibtex
@software{qwen35_9b_gptq_2026,
  title  = {Qwen3.5-9B - TevunahAi Multi-Level GPTQ},
  author = {TevunahAi},
  year   = {2026},
  note   = {Multi-Level GPTQ with EoRA rank-64 for hybrid Gated DeltaNet + Attention architecture},
  url    = {https://huggingface.co/TevunahAi/Qwen3.5-9B-TevunahAi-GPTQ}
}

@misc{qwen35_2026,
  title  = {Qwen3.5 Technical Report},
  author = {Qwen Team, Alibaba Cloud},
  year   = {2026},
  url    = {https://qwenlm.github.io/blog/qwen3.5/}
}

@article{liu2024eora,
  title   = {EoRA: Training-free Compensation for Compressed LLM with Eigenspace Low-Rank Approximation},
  author  = {Liu, Shih-Yang and Wang, Huck Yang and Cheng, Hong-Yi Michael and Khailany, Brucek and Molchanov, Pavlo},
  journal = {arXiv preprint arXiv:2410.21271},
  year    = {2024},
  url     = {https://arxiv.org/abs/2410.21271},
  note    = {NVIDIA Research}
}
```
## Acknowledgments
This quantization leverages the hybrid Gated DeltaNet + Full Attention architecture, requiring precision-aware treatment of fundamentally different layer types (linear vs quadratic attention). EoRA rank-64 error correction is applied comprehensively across all quantizable layers to maximize quality retention at aggressive compression ratios.
Quantized by TevunahAi LLC