Mixed Precision GGUF layer quantization of GLM-Z1-32B-0414 by zai-org
Original model: https://huggingface.co/zai-org/GLM-Z1-32B-0414
The hybrid quant employs different quantization levels on a per-layer basis to achieve both high performance and small file size at the same time. This quant is sized at roughly IQ4_XS bpw. Only K-quants are employed, to avoid the slow CPU or older-GPU processing associated with IQ quants. For this file the Q4_K_H layer quants are as follows:
Q4_K_L : Q4_K_M + attn_o = q6_k
Q5_K_L : attn_v = q8_0, attn_o = q6_k, ffn_d = q6_k
Q6_K_S : Q6_K
LAYER_TYPES='[
[0 ,"Q4_K_M"],[1 ,"Q4_K_S"],[2 ,"Q3_K_L"],[3 ,"Q3_K_M"],[4 ,"Q3_K_M"],[5 ,"Q3_K_M"],[6 ,"Q3_K_M"],[7 ,"Q3_K_M"],
[8 ,"Q3_K_M"],[9 ,"Q3_K_M"],[10,"Q3_K_M"],[11,"Q3_K_M"],[12,"Q3_K_M"],[13,"Q3_K_M"],[14,"Q3_K_M"],[15,"Q3_K_M"],
[16,"Q3_K_L"],[17,"Q3_K_M"],[18,"Q3_K_L"],[19,"Q3_K_M"],[20,"Q3_K_L"],[21,"Q3_K_M"],[22,"Q3_K_L"],[23,"Q3_K_M"],
[24,"Q3_K_L"],[25,"Q3_K_M"],[26,"Q3_K_L"],[27,"Q3_K_M"],[28,"Q3_K_L"],[29,"Q3_K_M"],[30,"Q3_K_L"],[31,"Q3_K_M"],
[32,"Q3_K_L"],[33,"Q3_K_L"],[34,"Q3_K_L"],[35,"Q3_K_L"],[36,"Q3_K_L"],[37,"Q3_K_L"],[38,"Q3_K_L"],[39,"Q3_K_L"],
[40,"Q4_K_S"],[41,"Q3_K_L"],[42,"Q4_K_S"],[43,"Q3_K_L"],[44,"Q4_K_S"],[45,"Q3_K_L"],[46,"Q4_K_S"],[47,"Q3_K_L"],
[48,"Q4_K_S"],[49,"Q4_K_S"],[50,"Q4_K_S"],[51,"Q4_K_S"],[52,"Q4_K_M"],[53,"Q4_K_M"],[54,"Q4_K_M"],[55,"Q4_K_M"],
[56,"Q4_K_L"],[57,"Q5_K_S"],[58,"Q5_K_M"],[59,"Q5_K_L"],[60,"Q6_K_S"]
]'
FLAGS="--token-embedding-type Q4_K --output-tensor-type Q6_K"
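As a concrete illustration, the settings above can be passed to llama-quantize roughly as follows. This is a minimal sketch: --token-embedding-type and --output-tensor-type are standard llama-quantize flags, but per-layer control via LAYER_TYPES assumes a build patched to read that variable (mainline llama.cpp does not), and the file names and fallback type are placeholders.

```bash
# Sketch only: assumes a llama-quantize build patched to honor LAYER_TYPES.
# File names and the Q4_K_M fallback type are placeholders.
LAYER_TYPES="$LAYER_TYPES" llama-quantize $FLAGS \
    GLM-Z1-32B-0414.BF16.gguf \
    GLM-Z1-32B-0414.Q4_K_H.gguf \
    Q4_K_M
```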
Comparison:
| Quant | Size (bytes) | PPL | Comment |
|---|---|---|---|
| IQ4_XS | 17.8e9 | 7.7 | - |
| Q4_K_H | 17.8e9 | 7.8 | Hybrid quant with Q4_K embedding, Q6_K output |
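PPL figures like those above can be measured with llama.cpp's perplexity tool; a minimal sketch, where the test corpus, context size, and GPU offload are assumptions rather than the exact settings used for the table:

```bash
# Compute perplexity over a text file; wiki.test.raw is the customary
# wikitext-2 split, used here only as an example corpus.
llama-perplexity -m GLM-Z1-32B-0414.Q4_K_H.gguf \
    -f wiki.test.raw -c 512 -ngl 99
```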
The quant was evaluated for good reasoning performance across a curated set of test prompts.
Usage:
This is an RL-trained thinking model. The layer quants for this model were evaluated on a set of test/eval prompts using greedy sampling (a reproduction sketch follows below). The quant shows good behavior on both fronts: minimal overthinking and no tendency to get stuck in infinite generations. No infinite generation was found across any of the test prompts, and most replies are quite efficient unless the model cannot solve the problem.
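A greedy-sampling run of the kind used for the evaluation can be reproduced with llama-cli; the prompt, token budget, and offload count below are assumptions for illustration:

```bash
# Temperature 0 makes sampling greedy: the most probable token is
# selected at every step, so runs are deterministic.
llama-cli -m GLM-Z1-32B-0414.Q4_K_H.gguf \
    --temp 0 -n 4096 -ngl 99 \
    -p "Prove that the sum of two odd integers is even."
```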
The model can be speculatively decoded using Qwen3 0.6B as the draft model, provided the inference engine supports dynamic vocab translation between draft and target models. Approximate performance using a downstream speculator with llama.cpp on two 4070s (12 GB VRAM each), with layers and context fully in GPU and the cards connected via RPC, is shown below (an example launch is sketched after the table):
| Quant | KV cache type | Draft tokens | Context | Gen tps | Comment |
|---|---|---|---|---|---|
| Q4_K_H | F16 | 0 | 32k | 22 | No draft |
| Q4_K_H | F16 | 4 | 16k | 35 | Spec 4 |
| Q4_K_H | Q8_0 | 4 | 18k | 34 | Spec 4 |
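A launch along these lines with llama-server is sketched below. Per the caveat above, mainline llama.cpp requires compatible draft and target vocabularies, so pairing Qwen3 0.6B with this model assumes an engine that can translate between them. The draft file name, RPC endpoint, and context size are assumptions.

```bash
# Speculative decoding sketch: -md names the draft model and --draft-max
# caps drafted tokens per step (4, matching the "Spec 4" rows above).
llama-server -m GLM-Z1-32B-0414.Q4_K_H.gguf \
    -md Qwen3-0.6B.Q8_0.gguf \
    --draft-max 4 \
    -c 16384 -ngl 99 -ngld 99 \
    --rpc 192.168.1.2:50052   # second 4070 reached over RPC
```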
Benchmarks:
Math benchmarks for the model are given here: https://huggingface.co/spaces/steampunque/benchlm
Download the file from below:
| Link | Type | Size (e9 bytes) | Notes |
|---|---|---|---|
| GLM-Z1-32B-0414.Q4_K_H.gguf | Q4_K_H | 17.8 | ~IQ4_XS size |
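One way to fetch the file is with the Hugging Face CLI, assuming the repo id of this page:

```bash
# Download only the Q4_K_H file into the current directory.
huggingface-cli download steampunque/GLM-Z1-32B-0414-MP-GGUF \
    GLM-Z1-32B-0414.Q4_K_H.gguf --local-dir .
```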
A discussion thread about the hybrid layer quant approach can be found on the llama.cpp GitHub repository.