LetheanNetwork/lemma-mlx

Gemma 3 1B in MLX format, 4-bit quantized, converted from LetheanNetwork/lemma's bf16 safetensors via mlx_lm.convert. These are unmodified Google weights, hosted in the Lethean namespace so that downstream tools do not have to depend on external mlx-community mirrors.
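A rough sketch of the conversion step using the mlx-lm Python API. The source repo and 4-bit setting come from the description above; the output path and group size are illustrative, and parameter names follow recent mlx-lm releases, so they may differ in older versions.

```python
# Conversion sketch; exact settings used for this repo are not recorded here.
from mlx_lm import convert

convert(
    hf_path="LetheanNetwork/lemma",  # bf16 safetensors source
    mlx_path="lemma-mlx",            # local output directory (illustrative)
    quantize=True,                   # enable quantization
    q_bits=4,                        # 4-bit weights, as published in this repo
)
```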

For the LEK-merged sibling see lthn/lemma.

License

Apache 2.0, subject to the Gemma Terms of Use.

Model details

Model size: 1B params
Tensor types: BF16, U32
Format: MLX safetensors
Quantization: 4-bit
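To run the 4-bit MLX weights locally, a minimal sketch with mlx-lm's Python API (assumes mlx-lm is installed; keyword arguments such as max_tokens can vary between mlx-lm releases):

```python
# Minimal inference sketch with mlx-lm.
from mlx_lm import load, generate

model, tokenizer = load("LetheanNetwork/lemma-mlx")

text = generate(
    model,
    tokenizer,
    prompt="Explain what 4-bit quantization trades off.",
    max_tokens=128,
)
print(text)
```

If the underlying checkpoint is instruction-tuned, wrapping the prompt with tokenizer.apply_chat_template(messages, add_generation_prompt=True) before calling generate is the usual route.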
