deepsweet/GigaChat3.1-10B-A1.8B-MLX-MXFP4
This model was converted to MLX format from ai-sage/GigaChat3.1-10B-A1.8B using mlx-lm v0.31.1.
Multi-Token Prediction (MTP) had to be disabled (`"num_nextn_predict_layers": 0` in config.json) and the corresponding layers removed (`model.layers.26.*`).
Thanks to RockTalk/GigaChat3.1-10B-A1.8B-MLX-4bit for the tip.
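A minimal sketch of those pre-conversion edits, assuming a local snapshot of the original checkpoint with the usual config.json plus *.safetensors layout; the snapshot path and the single-shard handling are assumptions (a sharded checkpoint would also need its model.safetensors.index.json entries pruned):

```python
# Sketch only: disable MTP and strip its layers before running mlx_lm.convert.
# Paths and shard handling are assumptions; only the config key
# "num_nextn_predict_layers" and the "model.layers.26.*" prefix come from above.
import glob
import json

from safetensors.torch import load_file, save_file

snapshot = "./GigaChat3.1-10B-A1.8B"  # assumed local snapshot path

# 1. Turn off Multi-Token Prediction in the config.
with open(f"{snapshot}/config.json") as f:
    config = json.load(f)
config["num_nextn_predict_layers"] = 0
with open(f"{snapshot}/config.json", "w") as f:
    json.dump(config, f, indent=2)

# 2. Drop the MTP weights (model.layers.26.*) from each safetensors shard.
for shard in glob.glob(f"{snapshot}/*.safetensors"):
    tensors = load_file(shard)
    kept = {k: v for k, v in tensors.items()
            if not k.startswith("model.layers.26.")}
    if len(kept) != len(tensors):
        save_file(kept, shard)
```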
The conversion command:

```sh
mlx_lm.convert \
    --hf-path ai-sage/GigaChat3.1-10B-A1.8B \
    --mlx-path deepsweet/GigaChat3.1-10B-A1.8B-MLX-MXFP4 \
    --quantize --q-mode mxfp4 --q-group-size 32
```
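Once converted (or downloaded), the quantized model loads with the standard mlx-lm Python API; the prompt below is only an illustration:

```python
from mlx_lm import load, generate

model, tokenizer = load("deepsweet/GigaChat3.1-10B-A1.8B-MLX-MXFP4")

prompt = "Привет! Кто ты?"  # example prompt; any text works
if tokenizer.chat_template is not None:
    messages = [{"role": "user", "content": prompt}]
    prompt = tokenizer.apply_chat_template(messages, add_generation_prompt=True)

response = generate(model, tokenizer, prompt=prompt, verbose=True)
```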
Quantization: 4-bit (MXFP4, group size 32)
Base model: ai-sage/GigaChat3-10B-A1.8B-base