Huihui-gemma-4-26B-A4B-it-abliterated-mlx-4bit
MLX-VLM conversion of huihui-ai/Huihui-gemma-4-26B-A4B-it-abliterated.
Overview
- Format: MLX-VLM
- Precision: 4-bit
- Size: 15 GB
- Source model type: Gemma4ForConditionalGeneration
- Source pipeline: any-to-any
- Intended runtime: mlx-vlm and LM Studio
- Quantization result: 4.843 bits/weight
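A note on the 4.843 figure: in group-wise quantization schemes such as MLX's, each group of weights also stores per-group scale/bias values, so the effective bits per weight exceed the nominal 4. A minimal estimator sketch, assuming a 16-bit scale and 16-bit bias per group (the group size and overhead layout are assumptions, not read from this repo's config):

```python
# Rough estimate of effective bits/weight for group-wise affine quantization.
# Assumption: each group of `group_size` weights stores a 16-bit scale and a
# 16-bit bias, adding 32 / group_size bits of overhead per weight. Tensors
# kept in higher precision (e.g. embeddings) push the repo-wide average up
# further, consistent with 4.843 bits/weight for a nominally 4-bit model.

def effective_bits_per_weight(bits: int, group_size: int = 64) -> float:
    """Per-weight storage cost of one quantized layer, including scale/bias."""
    return bits + 32 / group_size
```

Under these assumptions, a 4-bit layer with group size 64 costs 4.5 bits per weight; the remaining gap up to 4.843 would come from tensors left unquantized.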
Conversion Notes
- Converted for local Apple Silicon inference with mlx-vlm
- Kept in MLX-VLM multimodal layout for image-text generation
- Includes local config compatibility fixes required during conversion/debugging
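For reference, a conversion of this kind would typically go through mlx-vlm's convert entry point. The flags below are a sketch, not a record of the exact command used here; verify them against your installed mlx-vlm version:

```shell
# Hypothetical conversion sketch (flag names assumed; check with --help).
# Quantizes the source checkpoint to 4-bit and writes the MLX-VLM layout.
python -m mlx_vlm.convert \
  --hf-path huihui-ai/Huihui-gemma-4-26B-A4B-it-abliterated \
  --mlx-path ./Huihui-gemma-4-26B-A4B-it-abliterated-mlx-4bit \
  -q --q-bits 4
```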
Validation
Local checks on Apple Silicon:
- Model loading in mlx-vlm: passed
- Local conversion / quantization: passed
- Text generation smoke test: mixed
- Image generation path smoke test: mixed
- LM Studio 0.4.11+1 loading: not supported yet
- Notes: conversion finished successfully, but Gemma 4 runtime behavior still depends on current mlx-vlm and LM Studio support. In local checks the model loads in mlx-vlm and can emit tokens, but output quality still needs tuning, and LM Studio 0.4.11+1 currently refuses to load Gemma 4.
Files
Important files in this repo:
- config.json
- generation_config.json
- chat_template.jinja
- processor_config.json
- tokenizer.json
- tokenizer_config.json
- model.safetensors.index.json
- model-*.safetensors
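Because the weights are sharded, a quick way to confirm a complete download is to check the shards listed in model.safetensors.index.json against the files on disk. A small sketch (the helper name is ours, not part of any library):

```python
import json
from pathlib import Path

def missing_shards(repo_dir: str) -> set[str]:
    """Shards referenced by model.safetensors.index.json but absent on disk."""
    repo = Path(repo_dir)
    with open(repo / "model.safetensors.index.json") as f:
        index = json.load(f)
    expected = set(index["weight_map"].values())  # tensor name -> shard file
    present = {p.name for p in repo.glob("model-*.safetensors")}
    return expected - present
```

An empty set means every shard the index expects is present.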
Usage
Text generation
python -m mlx_vlm.generate \
  --model /path/to/Huihui-gemma-4-26B-A4B-it-abliterated-mlx-4bit \
  --prompt "Describe local inference in one short sentence." \
  --max-tokens 128 \
  --temperature 1.0 \
  --trust-remote-code
Image prompt
python -m mlx_vlm.generate \
  --model /path/to/Huihui-gemma-4-26B-A4B-it-abliterated-mlx-4bit \
  --image /path/to/example.png \
  --prompt "Describe this image." \
  --max-tokens 128 \
  --temperature 1.0 \
  --trust-remote-code
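The same calls can also be scripted through mlx-vlm's Python API. This is a hedged sketch: the `load`/`generate` names and argument order are assumptions that vary across mlx-vlm versions, so check them before relying on it:

```python
# Hypothetical wrapper around mlx-vlm's Python API; the `load`/`generate`
# signatures are assumptions and may differ in your installed version.

def describe_image(model_path: str, image_path: str,
                   prompt: str = "Describe this image.",
                   max_tokens: int = 128) -> str:
    """One image-text generation pass against a local MLX-VLM checkpoint."""
    from mlx_vlm import load, generate  # deferred: heavy import, needs MLX
    model, processor = load(model_path)
    return generate(model, processor, prompt, image=[image_path],
                    max_tokens=max_tokens)
```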
LM Studio
LM Studio 0.4.11+1 detects these repos but currently refuses to load Gemma 4 with `ValueError: Gemma 4 support is not ready yet, stay tuned!`. Use mlx-vlm until LM Studio adds native Gemma 4 support.
Notes
This repo reflects a local conversion workflow and validation pass on Apple Silicon. Behavior can vary with mlx-vlm version, sampling parameters, and prompt style.
Model tree for vanch007/Huihui-gemma-4-26B-A4B-it-abliterated-mlx-4bit
- Base model: google/gemma-4-26B-A4B