LetheanNetwork/lemmy-mlx-8bit
Gemma 4 26B A4B MoE in MLX format, 8-bit quantized, converted from LetheanNetwork/lemmy's bf16 safetensors via mlx_lm.convert. Higher-precision sibling of LetheanNetwork/lemmy-mlx (4-bit). For the LEK-merged variant, see lthn/lemmy.
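A conversion along these lines can be reproduced with the mlx_lm CLI. This is a sketch under assumptions: the exact flags (e.g. quantization group size) used for this checkpoint are not recorded in the card, and the commands below use mlx_lm's standard options.

```shell
# Convert the bf16 source checkpoint to MLX with 8-bit quantization.
# --q-bits 8 selects 8-bit weights; scales/biases remain bf16,
# which matches the BF16 + U32 tensor types listed below.
mlx_lm.convert \
  --hf-path LetheanNetwork/lemmy \
  --mlx-path lemmy-mlx-8bit \
  -q --q-bits 8

# Quick smoke test of the converted model (hypothetical prompt).
mlx_lm.generate \
  --model lemmy-mlx-8bit \
  --prompt "Hello"
```

Note that mlx_lm stores quantized weights as packed U32 tensors alongside bf16 scales, so the mixed tensor types reported for this repo are expected for an 8-bit MLX conversion.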
License
Apache 2.0, subject to the Gemma Terms of Use.
Model size: 25B params
Tensor types: BF16 · U32