mattbucci committed
Commit df3932b · Parent(s): c56e263

Upload README.md with huggingface_hub

Files changed (1): README.md (+75, -0)

---
base_model: cerebras/Qwen3-Coder-REAP-25B-A3B
tags:
- awq
- 4-bit
- moe
- reap
- rdna4
- gfx1201
- rocm
- sglang
- code
- thinking
- quantized
license: apache-2.0
---

# Qwen3-Coder-REAP-25B-A3B AWQ 4-bit

AWQ 4-bit quantization of [Cerebras Qwen3-Coder-REAP-25B-A3B](https://huggingface.co/cerebras/Qwen3-Coder-REAP-25B-A3B), a REAP-pruned ([arXiv:2510.13999](https://arxiv.org/abs/2510.13999)) variant of `Qwen3-Coder-30B-A3B-Instruct`. Calibrated on a thinking + code mix and optimized for AMD RDNA4 (gfx1201) inference with [SGLang](https://github.com/sgl-project/sglang).

## Model Details

| Field | Value |
|---|---|
| **Base model** | [cerebras/Qwen3-Coder-REAP-25B-A3B](https://huggingface.co/cerebras/Qwen3-Coder-REAP-25B-A3B) (REAP prune of [Qwen3-Coder-30B-A3B-Instruct](https://huggingface.co/Qwen/Qwen3-Coder-30B-A3B-Instruct)) |
| **Architecture** | Qwen3 MoE (96 experts post-REAP, top-8 routing) |
| **Parameters** | ~25B total / ~3B active |
| **Pruning method** | REAP (router-aware expert pruning, 25% of experts dropped); distinct from REAM (expert merging) |
| **Layers** | 48 |
| **Context** | 131K tested; 256K supported by the base model |
| **Quantization** | Native AWQ 4-bit, group_size=128, fused Triton GEMM |
| **Calibration** | GPTQ via llmcompressor, 256 samples × 1024 tokens, `code_thinking` mix (AM-Thinking-v1, NuminaMath-CoT, ultrachat); ignore=`lm_head, mlp.gate, shared_expert.*` |
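
To sanity-check the quantization parameters above against the shipped checkpoint, the `quantization_config` block in `config.json` can be inspected directly. A minimal sketch, assuming an AutoAWQ-style config layout (the exact key nesting is an assumption):

```bash
# Fetch only config.json, then print the quantization block.
# The "quantization_config" key name is assumed; adjust the grep if the layout differs.
huggingface-cli download mattbucci/Qwen3-Coder-REAP-25B-A3B-AWQ config.json --local-dir .
python -m json.tool config.json | grep -A 10 '"quantization_config"'
```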

## Performance (2× AMD Radeon AI PRO R9700, TP=2, FP8 KV)

`sglang.bench_serving`, single user, FP8 KV cache, `--disable-cuda-graph`:

| Context | TPOT (ms) | tok/s |
|--------:|----------:|------:|
| 128 | 43.6 | 22.9 |
| 1024 | 43.7 | 22.9 |
| 8192 | 44.1 | 22.7 |
| 32768 | 44.2 | 22.6 |
| 65536 | 45.5 | 22.0 |
| 131072 | 45.6 | 21.9 |
48
+ Flat ~22.5 tok/s decode across the full 131K range — A3B MoE stays bandwidth-bound, no attention scaling cliff.
49
+
50
+ ## Notes
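
A hedged sketch for reproducing a single row of the table (the exact input/output lengths and prompt counts behind these numbers are not recorded here, so the values below are assumptions):

```bash
# Single-user benchmark against a running server (see Usage below).
# --random-input-len selects the context point; output length and prompt count are guesses.
python -m sglang.bench_serving \
  --backend sglang \
  --dataset-name random \
  --random-input-len 8192 \
  --random-output-len 256 \
  --num-prompts 1 \
  --max-concurrency 1
```

TPOT is time per output token, so tok/s is simply its reciprocal: 1 / 43.6 ms ≈ 22.9 tok/s.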
51
+
52
+ This is **REAP**, not REAM. REAP prunes experts based on router-aware impact scores; REAM (Samsung SAIL) instead merges similar experts. Both shrink MoE models, but with different algorithms and tradeoffs — they're not interchangeable. The base Cerebras prune drops 32 of 128 experts (25% reduction).
53
+
54
+ The CT (compressed-tensors) format from llmcompressor was converted to native AWQ via [`convert_moe_ct_to_awq.py`](https://github.com/mattbucci/2x-R9700-RDNA4-GFX1201-sglang-inference/blob/main/scripts/quantize/convert_moe_ct_to_awq.py) — on ROCm the AWQ Triton GEMM kernel is 6× faster than the compressed-tensors path on the same weights.
55
+
56
+ `shared_expert.{gate,up,down}_proj` and `mlp.gate` (router) are preserved in BF16 to avoid the always-on residual / routing path going through INT4. `shared_expert_gate` (output dim 1) auto-falls-back to BF16 in the converter since AWQ packing requires divisibility by 8.
57
+
58
+ ## Usage with SGLang
59
+
60
+ Tested on the [RDNA4 inference stack](https://github.com/mattbucci/2x-R9700-RDNA4-GFX1201-sglang-inference) (SGLang v0.5.10 + 16 RDNA4 patches):
61
+
62
+ ```bash
63
+ git clone https://github.com/mattbucci/2x-R9700-RDNA4-GFX1201-sglang-inference
64
+ cd 2x-R9700-RDNA4-GFX1201-sglang-inference
65
+ ./scripts/setup.sh
66
+ MODEL=mattbucci/Qwen3-Coder-REAP-25B-A3B-AWQ scripts/launch.sh coder-reap-25b
67
+ ```

The `coder-reap-25b` preset auto-detects the AWQ format and uses `--quantization moe_wna16` with an FP8 KV cache for single-user 131K-context serving; a sketch of a roughly equivalent explicit launch follows.
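
Without the preset, launching directly might look like this (a sketch only: the flags mirror the preset's description above, but the actual preset may set additional RDNA4-specific options):

```bash
# Hedged approximation of the preset; the fp8_e4m3 KV dtype choice is an assumption.
python -m sglang.launch_server \
  --model-path mattbucci/Qwen3-Coder-REAP-25B-A3B-AWQ \
  --quantization moe_wna16 \
  --kv-cache-dtype fp8_e4m3 \
  --tp 2 \
  --context-length 131072 \
  --disable-cuda-graph
```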

For other inference engines, this is a standard AWQ 4-bit checkpoint (group_size=128, asymmetric, fused MoE) and should load via `vllm` or `transformers` + `autoawq` without modification; an untested vLLM sketch is below.
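
An untested sketch for vLLM (standard vLLM CLI flags; ROCm builds may need different backend settings):

```bash
# Not validated on this stack: generic vLLM AWQ serving flags.
vllm serve mattbucci/Qwen3-Coder-REAP-25B-A3B-AWQ \
  --quantization awq \
  --tensor-parallel-size 2 \
  --max-model-len 131072 \
  --kv-cache-dtype fp8
```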

## Hardware

Calibrated and benchmarked on 2× AMD Radeon AI PRO R9700 (gfx1201, RDNA4, 64 GB total VRAM) with ROCm 7.2 and SGLang v0.5.10 + RDNA4 patches. Per-GPU footprint: ~6 GB of weights + ~4 GB of FP8 KV cache at 131K context, plus runtime overhead.
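
To confirm the per-GPU footprint while the server is running, VRAM usage can be read with stock ROCm tooling:

```bash
# Report per-GPU VRAM usage (total and used).
rocm-smi --showmeminfo vram
```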