# Qwen3.5-27B Opus-Distilled: Proposal C Mixed Quantization (MLX)
A sensitivity-aware mixed-precision quantization of Qwen3.5-27B, distilled from Claude Opus reasoning traces and quantized using a per-layer bit allocation strategy optimized for Apple Silicon (MLX).
## Key Stats
| Metric | Value |
|---|---|
| Parameters | 27B dense |
| Disk size | ~20 GB |
| Avg bits/weight | ~6.5 |
| Peak memory | ~22 GB |
| Throughput (M3 Ultra) | 27 tok/s (oMLX), 23 tok/s (mlx_lm) |
| Quality (Phipps eval v3) | 4.45–4.47 |
## Quantization Methodology: "Proposal C" (8/8/6/5)
This is NOT uniform quantization. Each component type gets a precision level matched to its sensitivity:
### Bit Allocation

| Component | Bits | Rationale |
|---|---|---|
| Embeddings (embed_tokens) | 8 | Token-to-continuous-space mapping; errors cascade |
| LM head (lm_head) | 8 | Directly impacts output probabilities |
| GQA self-attention (q/k/v/o_proj) | 8 | Traditional softmax attention, the "precise reasoning" layers |
| DeltaNet linear-attention inputs (qkv, z, b, a) | 6 | Linear recurrence is more robust to quantization noise |
| DeltaNet output projection | 8 | Last transform before the residual stream |
| All MLP layers (gate/down/up_proj) | 5 | Bulk of parameters; tolerant due to element-wise nonlinearities |
### Architecture-Aware Design
Qwen3.5-27B uses a hybrid DeltaNet/GQA architecture with a 3:1 ratio:
- Layers 0, 1, 2 → DeltaNet (linear attention)
- Layer 3 → GQA (softmax attention)
- Pattern repeats across all 64 layers (48 DeltaNet + 16 GQA)
Proposal C exploits this: GQA layers (which handle precise key-value attention) get full 8-bit, while DeltaNet layers (which use linear recurrence with gating) can tolerate 6-bit on their inputs. MLP weights, the parameter-count majority, compress to 5-bit with minimal quality impact.
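The 3:1 interleaving can be sketched in a couple of lines, assuming (per the layer 0–3 listing above) that every fourth layer, indices 3, 7, 11, …, is GQA:

```python
# 3:1 DeltaNet/GQA interleaving as described in this card:
# layers 0-2 are DeltaNet, layer 3 is GQA, repeating across 64 layers.

def layer_type(i: int) -> str:
    return "GQA" if i % 4 == 3 else "DeltaNet"

types = [layer_type(i) for i in range(64)]
print(types.count("DeltaNet"), types.count("GQA"))  # 48 16
```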
### Why This Works
- Protect information bottlenecks: Embeddings, LM head, and attention output projections are all 8-bit because errors there propagate directly into the residual stream
- Match precision to mechanism: GQA softmax attention needs precision; DeltaNet's gating mechanism absorbs quantization noise
- Compress where it's cheap: MLP layers have the most parameters but are the most quantization-tolerant
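A toy illustration of the precision/error trade-off behind these choices: uniform symmetric fake-quantization of a small vector at different bit widths. This is a simplified stand-in for MLX's group-wise affine quantizer, not the actual scheme used here, but it shows why 8-bit components see far less reconstruction error than 5-bit ones.

```python
# Toy illustration: uniform symmetric fake-quantization at b bits.
# Simplified stand-in for MLX's group-wise affine scheme.

def fake_quantize(w, bits):
    levels = 2 ** (bits - 1) - 1             # symmetric signed range
    scale = max(abs(x) for x in w) / levels
    return [round(x / scale) * scale for x in w]

def max_abs_error(w, bits):
    return max(abs(a - b) for a, b in zip(w, fake_quantize(w, bits)))

w = [0.013 * i - 0.4 for i in range(64)]     # arbitrary small "weight" vector
errs = {b: max_abs_error(w, b) for b in (5, 6, 8)}
assert errs[8] < errs[6] < errs[5]           # more bits -> lower error
print(errs)
```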
## Results vs Uniform Quantization
Proposal C was validated against uniform 4-bit, 5-bit, and 8-bit baselines on the 80B MoE variant (Proposal A), showing zero measurable quality loss vs the fp16 baseline at ~3x compression. The same methodology applied to the 27B dense model scores 4.45–4.47 on our eval harness, higher than the 80B MoE at 4.25 (which has only 3B active parameters per token vs 27B dense).
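A back-of-envelope check on the footprint figures in the stats table: 27B weights at 16 bits vs ~6.5 average bits per weight. The real file differs slightly (the table lists ~20 GB on disk) because of per-group scale/bias metadata and packing details.

```python
# Rough footprint estimate from the stats table's numbers only.
params = 27e9
fp16_gb = params * 16 / 8 / 1e9    # 16 bits per weight -> bytes -> GB
quant_gb = params * 6.5 / 8 / 1e9  # ~6.5 average bits per weight

print(round(fp16_gb), round(quant_gb, 1))  # 54 21.9
print(round(fp16_gb / quant_gb, 2))        # 2.46 (= 16 / 6.5)
```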
## Usage

```python
from mlx_lm import load, generate

model, tokenizer = load("Phipper/qwen3.5-27b-opus-distilled-proposal-c")

prompt = tokenizer.apply_chat_template(
    [{"role": "user", "content": "What is a leveraged buyout?"}],
    tokenize=False, add_generation_prompt=True
)

response = generate(model, tokenizer, prompt=prompt, max_tokens=512)
print(response)
```
## Sampling Recommendations
- Temperature: 0.7
- Top-p: 0.8
- Top-k: 20
- Max tokens: 16384
## Hardware Requirements
- Apple Silicon with ≥24 GB unified memory (M2 Pro/Max, M3 Pro/Max/Ultra, M4 Pro/Max)
- MLX ≥0.31.1, mlx-lm ≥0.31.1
## Credits
- Created by Nate Baranski (@Phipper)
- Base model: Qwen/Qwen3.5-27B (Apache 2.0)
- Distillation: Opus reasoning traces
- Quantization methodology & evaluation: Nate Baranski + Bessemer (Claude Code agent)