Paper: REAM: Merging Improves Pruning of Experts in LLMs (arXiv)

GLM-4.5-Air-REAM

This model is a compressed version of zai-org/GLM-4.5-Air, obtained by reducing the number of experts in each MoE layer from 128 to 96 using the REAM method described at https://bknyaz.github.io/blog/2026/moe/.
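The blog post above describes REAM in detail. As a rough illustration only (not the authors' implementation), a prune-then-merge step over one MoE layer can be sketched as: keep the highest-scoring experts and fold each pruned expert into its most similar kept expert. The scoring and merge rules below are placeholder assumptions.

```python
import numpy as np

def prune_and_merge(experts, scores, keep):
    """Toy REAM-style sketch: keep the `keep` highest-scoring experts and
    merge each pruned expert into its most similar kept expert (cosine sim).
    experts: (E, D) flattened expert weights; scores: (E,) importance values."""
    order = np.argsort(scores)[::-1]          # experts sorted by importance
    kept_idx, pruned_idx = order[:keep], order[keep:]
    kept = experts[kept_idx].copy()
    unit = lambda x: x / np.linalg.norm(x, axis=-1, keepdims=True)
    for p in pruned_idx:
        sims = unit(kept) @ unit(experts[p])  # cosine similarity to each kept expert
        j = int(np.argmax(sims))
        # importance-weighted average (one of many possible merge rules)
        w = scores[p] / (scores[p] + scores[kept_idx[j]])
        kept[j] = (1 - w) * kept[j] + w * experts[p]
    return kept, kept_idx

rng = np.random.default_rng(0)
experts = rng.normal(size=(128, 16))          # 128 experts, as in GLM-4.5-Air layers
merged, kept_idx = prune_and_merge(experts, rng.random(128), keep=96)
print(merged.shape)  # (96, 16)
```

The point of merging (rather than dropping) is that signal from pruned experts is partially absorbed into the surviving ones instead of being discarded.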

Compared to the other models in this collection, more code data is used in the calibration set during pruning/merging to better preserve the original model's coding abilities. Specifically, the ratio of c4, math, and coding data (see https://bknyaz.github.io/blog/2026/moe/) is 0.0 : 0.3 : 0.7. The calibration data used here is the same as in GLM-4.5-Air-REAP.
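As a minimal sketch of how such a mixture ratio can be realized (the source names and documents below are placeholders, not the actual calibration corpora):

```python
import random

# Hypothetical calibration sources; real REAM calibration uses actual datasets.
SOURCES = {"c4": ["c4 doc"], "math": ["math doc"], "code": ["code doc"]}
WEIGHTS = {"c4": 0.0, "math": 0.3, "code": 0.7}  # ratios used for this model

def sample_calibration(n, seed=0):
    """Draw n calibration documents according to the mixture weights."""
    rng = random.Random(seed)
    names = list(WEIGHTS)
    picks = rng.choices(names, weights=[WEIGHTS[k] for k in names], k=n)
    return [rng.choice(SOURCES[name]) for name in picks], picks

docs, picks = sample_calibration(1000)
print(picks.count("c4"))  # 0: c4 has zero weight in this mixture
```

With a zero c4 weight, the calibration batch is effectively 30% math and 70% code.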

The compressed model has 82B parameters (164 GB) instead of the original model's 110B (220 GB), reducing storage and GPU memory requirements by roughly 25%. At the same time, it retains >=95% of the original model's performance across a variety of benchmarks (see the Results section below). Additional efficiency optimizations (e.g., quantization) can be applied on top, just as with the original model.
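The ~25% figure follows directly from the checkpoint sizes quoted above:

```python
orig_gb, pruned_gb = 220, 164          # BF16 checkpoint sizes from the text
reduction = 1 - pruned_gb / orig_gb    # fraction of storage/memory saved
print(f"{reduction:.1%}")  # 25.5%
```

The same ratio holds for parameter counts (82B / 110B ≈ 0.745), since weights dominate checkpoint size.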

The MTP (multi-token prediction) layer is omitted from this model, but it can be added back using our code: https://github.com/SamsungSAILMontreal/ream.

For evaluation on HumanEval and LiveCodeBench we use https://github.com/zai-org/glm-simple-evals. See additional details at Qwen3-30B-A3B-Instruct-2507-REAM.

Results

| Model | IFEval | AIME25 | GSM8K | GPQA-D | HumanEval | LiveCodeBench | AVG |
|---|---|---|---|---|---|---|---|
| GLM-4.5-Air | 90.4 | 83.3 | 94.8 | 42.9 | 93.9 | 57.4 | 77.1 |
| GLM-4.5-Air-REAM | 83.6 | 83.3 | 94.9 | 37.9 | 90.2 | 53.7 | 73.9 |
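The AVG column and the >=95% retention claim can be checked directly from the per-benchmark scores:

```python
# Per-benchmark scores from the table above (IFEval ... LiveCodeBench)
orig = [90.4, 83.3, 94.8, 42.9, 93.9, 57.4]
ream = [83.6, 83.3, 94.9, 37.9, 90.2, 53.7]

avg = lambda xs: sum(xs) / len(xs)
print(round(avg(orig), 1), round(avg(ream), 1))   # 77.1 73.9
print(f"retention: {avg(ream) / avg(orig):.1%}")  # retention: 95.9%
```

So the averaged retention is ~95.9%, consistent with the >=95% claim, with GPQA-Diamond and IFEval showing the largest individual drops.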

License

Please refer to the license of the original model zai-org/GLM-4.5-Air.

