guardiangate1775/gemma-4-31B-it-assistant-4bit

This model was converted to MLX format from mlx-community/gemma-4-31B-it-assistant-bf16 using mlx-vlm version 0.5.0. Refer to the original model card for more details on the model.

Use with mlx

pip install -U mlx-vlm
python -m mlx_vlm.generate --model guardiangate1775/gemma-4-31B-it-assistant-4bit --max-tokens 100 --temperature 0.0 --prompt "Describe this image." --image <path_to_image>
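The model can also be called from Python instead of the CLI. Below is a minimal sketch using mlx-vlm's load/generate helpers; the exact generate signature and return type vary between mlx-vlm releases, and the image path is a placeholder, so check the mlx-vlm documentation for your installed version.

# Minimal Python sketch (assumes the mlx_vlm load/generate API; verify signatures for your mlx-vlm version)
from mlx_vlm import load, generate
from mlx_vlm.prompt_utils import apply_chat_template
from mlx_vlm.utils import load_config

model_path = "guardiangate1775/gemma-4-31B-it-assistant-4bit"
model, processor = load(model_path)   # downloads the weights on first use
config = load_config(model_path)

image = ["path/to/image.jpg"]         # placeholder image path
prompt = "Describe this image."

# Wrap the prompt in the model's chat template before generation
formatted_prompt = apply_chat_template(processor, config, prompt, num_images=len(image))

output = generate(model, processor, formatted_prompt, image, max_tokens=100, temperature=0.0)
print(output)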