LetheanNetwork/lemrd-mlx-bf16

Gemma 4 in MLX format at full bf16 precision, converted from the bf16 safetensors of LetheanNetwork/lemrd via `mlx_lm.convert --dtype bfloat16` (no quantization). This is the full-precision reference for the MLX family; for the LEK-merged variant see lthn/lemrd.
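A minimal usage sketch with the standard mlx-lm toolchain (assumes Apple Silicon and a working `mlx-lm` install; the prompt text is illustrative):

```shell
# Install the MLX LM toolkit (requires Apple Silicon)
pip install mlx-lm

# Run generation against the full-precision bf16 model;
# the model is fetched from the Hub on first use
mlx_lm.generate --model LetheanNetwork/lemrd-mlx-bf16 --prompt "Hello"
```

Because this repository is unquantized bf16, you can also re-quantize it locally with `mlx_lm.convert -q` if you need a smaller footprint.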

License

Apache 2.0, subject to the Gemma Terms of Use.
