Mixed Precision GGUF layer quantization of Qwen3-30B-A3B by Qwen
Original model: https://huggingface.co/Qwen/Qwen3-30B-A3B
The hybrid quant employs different quantization levels on a per-layer basis to increase flexibility in trading off performance vs. file size. Fewer parameter bits are used at deep layers and more bits at cortex layers to simultaneously optimize quantized size and model performance. All K-quants are used for faster CPU processing. For this file the layer quants are as follows (refreshed 4/21/2026):
Q4_K_L : attn_v = Q6_K, attn_o = Q6_K, ffn_d = Q6_K
Q5_K_L : attn_v = Q8_0, attn_o = Q6_K, ffn_d = Q6_K
Q6_K_S : all tensors Q6_K
```shell
LAYER_TYPES='[
[0 ,"Q5_K_L"],[1 ,"Q4_K_S"],[2 ,"Q3_K_L"],[3 ,"Q4_K_S"],[4 ,"Q3_K_L"],[5 ,"Q4_K_S"],[6 ,"Q3_K_L"],[7 ,"Q4_K_S"],
[8 ,"Q3_K_L"],[9 ,"Q3_K_L"],[10,"Q3_K_L"],[11,"Q3_K_L"],[12,"Q3_K_L"],[13,"Q3_K_L"],[14,"Q3_K_L"],[15,"Q3_K_L"],
[16,"Q4_K_S"],[17,"Q3_K_L"],[18,"Q4_K_S"],[19,"Q3_K_L"],[20,"Q4_K_S"],[21,"Q3_K_L"],[22,"Q4_K_S"],[23,"Q3_K_L"],
[24,"Q4_K_S"],[25,"Q4_K_S"],[26,"Q4_K_S"],[27,"Q4_K_S"],[28,"Q4_K_S"],[29,"Q4_K_S"],[30,"Q4_K_S"],[31,"Q4_K_S"],
[32,"Q4_K_M"],[33,"Q4_K_S"],[34,"Q4_K_M"],[35,"Q4_K_S"],[36,"Q4_K_M"],[37,"Q4_K_S"],[38,"Q4_K_M"],[39,"Q4_K_S"],
[40,"Q4_K_M"],[41,"Q4_K_M"],[42,"Q4_K_M"],[43,"Q4_K_L"],[44,"Q5_K_S"],[45,"Q5_K_M"],[46,"Q5_K_L"],[47,"Q6_K_S"]
]'
FLAGS="--token-embedding-type Q4_K --output-tensor-type Q6_K --layer-types-high"
```
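As a quick sanity check, the per-layer schedule above can be parsed and summarized with a short script. This is an illustrative sketch only (it is not part of the quantization tooling); the JSON text is copied verbatim from the LAYER_TYPES variable above.

```python
import json
from collections import Counter

# Per-layer quant schedule copied verbatim from the LAYER_TYPES variable above.
schedule = json.loads('''[
[0 ,"Q5_K_L"],[1 ,"Q4_K_S"],[2 ,"Q3_K_L"],[3 ,"Q4_K_S"],[4 ,"Q3_K_L"],[5 ,"Q4_K_S"],[6 ,"Q3_K_L"],[7 ,"Q4_K_S"],
[8 ,"Q3_K_L"],[9 ,"Q3_K_L"],[10,"Q3_K_L"],[11,"Q3_K_L"],[12,"Q3_K_L"],[13,"Q3_K_L"],[14,"Q3_K_L"],[15,"Q3_K_L"],
[16,"Q4_K_S"],[17,"Q3_K_L"],[18,"Q4_K_S"],[19,"Q3_K_L"],[20,"Q4_K_S"],[21,"Q3_K_L"],[22,"Q4_K_S"],[23,"Q3_K_L"],
[24,"Q4_K_S"],[25,"Q4_K_S"],[26,"Q4_K_S"],[27,"Q4_K_S"],[28,"Q4_K_S"],[29,"Q4_K_S"],[30,"Q4_K_S"],[31,"Q4_K_S"],
[32,"Q4_K_M"],[33,"Q4_K_S"],[34,"Q4_K_M"],[35,"Q4_K_S"],[36,"Q4_K_M"],[37,"Q4_K_S"],[38,"Q4_K_M"],[39,"Q4_K_S"],
[40,"Q4_K_M"],[41,"Q4_K_M"],[42,"Q4_K_M"],[43,"Q4_K_L"],[44,"Q5_K_S"],[45,"Q5_K_M"],[46,"Q5_K_L"],[47,"Q6_K_S"]
]''')

# Count how many layers use each quant type.
counts = Counter(quant for _, quant in schedule)
print(len(schedule), dict(counts))
```

The summary shows the lower-bit Q3_K_L entries concentrated in the deep early-middle layers, with the bit budget rising toward the final (cortex) layers, matching the description above.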
The layer quants were optimized for high reasoning performance across a curated set of test prompts.
Comparison:
| Quant | Size (bytes) | PPL | Comment |
|---|---|---|---|
| IQ4_XS | 16.6e9 | 9.15 | default embed and output |
| Q4_K_H | 17.8e9 | 9.0 | Q4_K embed, Q6_K output |
Usage:
This is an RL-trained MoE model. By default it will emit a think block delimited by
THINK_START="<think>\n"
THINK_STOP="\n</think>\n\n"
To bypass thinking, inject the think block delimiters immediately after the assistant prompt template. The model is strong with thinking bypassed but less accurate on harder prompts. It was found to be prone to overthinking on many of the eval prompts, but did solve most of the test prompts correctly in think mode.
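The injection described above can be sketched as follows. This assumes the standard Qwen3 ChatML-style prompt template; the function name and structure are illustrative, and a real client would normally let the server apply the chat template.

```python
# Delimiters from above.
THINK_START = "<think>\n"
THINK_STOP = "\n</think>\n\n"

def build_prompt(user_msg: str, bypass_thinking: bool = False) -> str:
    """Sketch: build a ChatML prompt, optionally pre-filling an empty think block."""
    prompt = (
        "<|im_start|>user\n" + user_msg + "<|im_end|>\n"
        "<|im_start|>assistant\n"
    )
    if bypass_thinking:
        # Pre-filling an empty think block after the assistant header
        # causes the model to skip reasoning and answer directly.
        prompt += THINK_START + THINK_STOP
    return prompt
```

With `bypass_thinking=True` the completion starts after `</think>`, so the model goes straight to the answer.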
The model can be run efficiently by offloading expert tensors to CPU via -ot exps=CPU, opening up very large context space even on limited-VRAM platforms (8-12 GB GPUs). The smaller size of the optimally quantized parameters gives an effective boost in CPU processing speed by reducing the memory bandwidth needed to repeatedly copy them from main memory to SIMD registers. It can also run fully offloaded on GPU via RPC or on a high-VRAM GPU, where generation rates should be very high since only 3B parameters are active.
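A minimal launch along these lines might look like the following (a sketch; the model path and context size are illustrative, and flag spellings should be checked against your llama.cpp build):

```shell
# Serve with MoE expert tensors kept in main memory (processed on CPU),
# while attention, norms, and the KV cache stay on the GPU.
llama-server -m Qwen3-30B-A3B.Q4_K_H.gguf \
  -ngl 99 \
  -ot exps=CPU \
  -c 32768
```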
High-context yarn config is as follows: arbitrarily set the base context for the yarn rope-scale computation to 35840 (vs. a training context of 40960); then with a context of NCTX tokens, rope scale = NCTX / 35840. Note: this factor can be tweaked on a per-application or even per-prompt basis. For instance, testing with other Qwen3 128k yarn models showed 40000 worked best on a long-context prompt, but this model appears to work well using 35840 together with the Q4_K_H quant.
Example: on 12 GB VRAM with F16 KV, NCTX maxes out at 107520 tokens with -ot exps=CPU. Then rope scale = 107520.0 / 35840.0 = 3.0.
Then on model start pass -c 107520 --rope-scaling yarn --yarn-orig-ctx 35840 --rope-scale 3.0
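The rope-scale arithmetic above can be captured in a trivial helper (a sketch; the function name is mine, not part of any tooling):

```python
# Base context chosen above for the yarn rope-scale computation.
YARN_BASE_CTX = 35840

def rope_scale(nctx: int) -> float:
    """Rope scale factor for a target context of nctx tokens."""
    return nctx / YARN_BASE_CTX

# The 12 GB VRAM example above: a 107520-token context.
print(rope_scale(107520))  # -> 3.0
```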
Later versions of llama.cpp have a bug that soft-caps context length to the training context, effectively disabling yarn context extension. Patch server-context.cpp according to https://github.com/ggml-org/llama.cpp/issues/22140 to fix it.
Long context tests:
These tests were done with -ot exps=CPU and greedy sampling with no speculation.
With Q8 KV, the model successfully answers the long context test prompt: https://thireus.com/REDDIT/Qwen3_Runescape_Massive_Prompt.txt
With F16 KV on a 12 GB VRAM GPU, the model is limited to 105k tokens and successfully answers a reduced 85k-token version of the above prompt: https://huggingface.co/steampunque/Qwen3-8B-MP-GGUF/blob/main/Qwen3_Runescape_Massive_Prompt_85k.txt.
Benchmarks:
Partial evals for the model (original, not refreshed quant) are given here: https://huggingface.co/spaces/steampunque/benchlm.
Download the file from below:
| Link | Type | Size | Notes |
|---|---|---|---|
| Qwen3-30B-A3B.Q4_K_H.gguf | Q4_K_H | 17.8e9 B | ~IQ4_XS size |
A discussion thread about the hybrid layer quant approach can be found on the llama.cpp git repository: