Paper (arXiv): REAM: Merging Improves Pruning of Experts in LLMs
# GLM-4.5-Air-REAP
This model is a compressed version of zai-org/GLM-4.5-Air, obtained by reducing the number of experts in each MoE layer from 128 to 96 using the REAP baseline method described at https://bknyaz.github.io/blog/2026/moe/.
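Expert pruning of this kind scores each expert on calibration data and drops the lowest-scoring ones. The sketch below illustrates the idea with a router-weighted activation-norm saliency on random data; the saliency formula and the names `expert_saliency` / `prune_experts` are illustrative assumptions, not the exact REAP implementation.

```python
import numpy as np

def expert_saliency(gate_weights, expert_out_norms):
    """Saliency of each expert: average of (router weight * expert
    output norm) over calibration tokens. An illustrative REAP-style
    criterion, not the exact formula from the paper.

    gate_weights:      (tokens, num_experts) router probabilities
    expert_out_norms:  (tokens, num_experts) per-token ||f_j(x)||
    """
    return (gate_weights * expert_out_norms).mean(axis=0)

def prune_experts(keep, gate_weights, expert_out_norms):
    """Return sorted indices of the `keep` highest-saliency experts."""
    s = expert_saliency(gate_weights, expert_out_norms)
    return np.sort(np.argsort(s)[-keep:])

rng = np.random.default_rng(0)
tokens, num_experts = 1024, 128
gates = rng.dirichlet(np.ones(num_experts), size=tokens)
norms = rng.uniform(0.5, 2.0, size=(tokens, num_experts))

# Keep 96 of 128 experts per layer, as in this model.
kept = prune_experts(96, gates, norms)
print(len(kept))
```

In the real pipeline the surviving experts' weights are copied into a smaller MoE layer and the router's output dimension is shrunk accordingly.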
Compared to the other models in this collection, the calibration data used during pruning/merging contains more code data, to better preserve the original model's coding abilities. Specifically, the ratio of c4, math, and coding data (see https://bknyaz.github.io/blog/2026/moe/) is 0.0 : 0.3 : 0.7. The calibration data used here is the same as in GLM-4.5-Air-REAM.
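A 0.0 : 0.3 : 0.7 mixture can be realized by drawing each calibration example from a source chosen in proportion to its weight. A minimal sketch, where the placeholder pools and the `build_calibration_set` name stand in for the actual c4/math/code data:

```python
import random

def build_calibration_set(sources, weights, n, seed=0):
    """Sample n calibration examples; each draw picks a source with
    probability proportional to its weight, then a random example
    from that source's pool."""
    rng = random.Random(seed)
    names = list(sources)
    out = []
    for _ in range(n):
        name = rng.choices(names, weights=weights, k=1)[0]
        out.append(rng.choice(sources[name]))
    return out

# Placeholder pools standing in for real c4 / math / code text.
sources = {
    "c4":   ["c4 sample"] * 10,
    "math": ["math sample"] * 10,
    "code": ["code sample"] * 10,
}
calib = build_calibration_set(sources, weights=[0.0, 0.3, 0.7], n=1000)
print(len(calib))
```

With a weight of 0.0, c4 examples are never drawn, matching the mixture used for this model.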
The compressed model has 82B parameters (164 GB) versus the original model's 110B (220 GB), reducing storage and GPU memory requirements by roughly 25%. At the same time, the model retains >=93% of the original model's performance on a variety of benchmarks (see the Results section below). Additional efficiency optimizations (e.g., quantization) can be applied in the same way as to the original model.
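The size figures are consistent with bf16 storage (2 bytes per parameter); a quick arithmetic check:

```python
# Parameter counts from the model card.
orig_params, pruned_params = 110e9, 82e9
bytes_per_param = 2  # bf16

orig_gb = orig_params * bytes_per_param / 1e9      # 220 GB
pruned_gb = pruned_params * bytes_per_param / 1e9  # 164 GB
reduction = 1 - pruned_params / orig_params        # ~25% fewer params

print(f"{orig_gb:.0f} GB -> {pruned_gb:.0f} GB, {reduction:.1%} smaller")
```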
The MTP (multi-token prediction) layer is omitted in this model, but it can be added back using our code: https://github.com/SamsungSAILMontreal/ream.
For evaluation on HumanEval and LiveCodeBench we use https://github.com/zai-org/glm-simple-evals. See additional details at Qwen3-30B-A3B-Instruct-2507-REAM.
## Results
| Model | IFeval | AIME25 | GSM8K | GPQA-D | HumanEval | LiveCodeBench | AVG |
|---|---|---|---|---|---|---|---|
| GLM-4.5-Air | 90.4 | 83.3 | 94.8 | 42.9 | 93.9 | 57.4 | 77.1 |
| GLM-4.5-Air-REAP | 80.6 | 76.7 | 93.9 | 38.4 | 90.2 | 51.7 | 71.9 |
## License
Please refer to the license of the original model, zai-org/GLM-4.5-Air.