This model was converted to MLX format from [yandex/YandexGPT-5-Lite-8B-instruct](https://huggingface.co/yandex/YandexGPT-5-Lite-8B-instruct) using mlx-lm version 0.30.7, with the following command:

```shell
mlx_lm.convert --hf-path yandex/YandexGPT-5-Lite-8B-instruct --mlx-path deepsweet/YandexGPT-5-Lite-8B-Instruct-MLX-MXFP4 --quantize --q-mode mxfp4 --q-group-size 32
```
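
To run the converted model, the standard mlx-lm `load`/`generate` pattern applies. A minimal usage sketch (install first with `pip install mlx-lm`; the chat-template check follows the usual mlx-lm model-card boilerplate):

```python
from mlx_lm import load, generate

# Load the quantized MXFP4 weights and tokenizer from the Hub.
model, tokenizer = load("deepsweet/YandexGPT-5-Lite-8B-Instruct-MLX-MXFP4")

prompt = "Hello"

# Instruct models ship a chat template; wrap the prompt in it when present.
if tokenizer.chat_template is not None:
    messages = [{"role": "user", "content": prompt}]
    prompt = tokenizer.apply_chat_template(messages, add_generation_prompt=True)

response = generate(model, tokenizer, prompt=prompt, verbose=True)
```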