πŸ¦† zecanard/Gemopus-4-E4B-it-MLX-2bit-mixed_2_6

This model was converted to MLX from Jackrong/Gemopus-4-E4B-it using mlx-vlm version 0.4.4. Please refer to the original model card for more details.

🌟 Quality

A mixed-precision quantized vision-language model with an effective 5.407 bits per weight.

mlx_vlm.convert --hf-path Jackrong/Gemopus-4-E4B-it --quantize --q-group-size 32 --quant-predicate mixed_2_6
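
For intuition, the effective figure is a size-weighted average over tensors quantized at different widths, plus the per-group scale/bias overhead that MLX affine quantization stores. A minimal sketch of the arithmetic, with a hypothetical tensor split (not this model's actual layout):

def effective_bpw(tensors, group_size=32, scale_bias_bits=32):
    # tensors: iterable of (num_params, bits) pairs.
    # Each group of `group_size` weights also stores a scale and a bias
    # (assumed fp16 each), adding scale_bias_bits / group_size bits per weight.
    overhead = scale_bias_bits / group_size
    total_bits = sum(n * (b + overhead) for n, b in tensors)
    return total_bits / sum(n for n, _ in tensors)

# Hypothetical mixed_2_6 split: 40% of weights at 2 bits, 60% at 6 bits.
print(effective_bpw([(40, 2), (60, 6)]))  # 5.4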

πŸ› οΈ Customizations

This quant's chat template is aware of the current date and also enables thinking (if the model supports it). You can disable thinking by deleting the following line from the chat template:

{%- set enable_thinking = true %}
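
If you prefer to flip the flag programmatically, here is a minimal sketch, assuming the template ships as a standalone chat_template.jinja next to the weights (it may instead be embedded in tokenizer_config.json):

from pathlib import Path

# Hypothetical local path to the downloaded model directory.
path = Path("Gemopus-4-E4B-it-MLX-2bit-mixed_2_6/chat_template.jinja")
text = path.read_text()
path.write_text(text.replace(
    "{%- set enable_thinking = true %}",
    "{%- set enable_thinking = false %}",
))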

You may also need to adjust your environment's Reasoning Section Parsing settings to recognize <|channel>thought as the Start String and <channel|> as the End String.
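
If your frontend exposes no such setting, you can split the raw output yourself. A minimal sketch using the delimiters above:

def split_reasoning(text):
    # Separate the thought section from the visible answer.
    start, end = "<|channel>thought", "<channel|>"
    if start in text and end in text:
        before, rest = text.split(start, 1)
        thought, answer = rest.split(end, 1)
        return thought.strip(), (before + answer).strip()
    return "", text.strip()

print(split_reasoning("<|channel>thought The bird is a duck. <channel|>It's a duck."))
# ('The bird is a duck.', "It's a duck.")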

πŸ–₯️ Use with mlx

pip install -U mlx-vlm
mlx_vlm.generate --model zecanard/Gemopus-4-E4B-it-MLX-2bit-mixed_2_6 --max-tokens 100 --temperature 0 --prompt "Describe this image." --image <path_to_image>
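
The same generation is available from Python, following the usage pattern in the mlx-vlm README (exact signatures can shift between versions, so treat this as a sketch):

from mlx_vlm import load, generate
from mlx_vlm.prompt_utils import apply_chat_template
from mlx_vlm.utils import load_config

model_path = "zecanard/Gemopus-4-E4B-it-MLX-2bit-mixed_2_6"
model, processor = load(model_path)
config = load_config(model_path)

image = ["path/to/image.jpg"]  # hypothetical local path
prompt = "Describe this image."

formatted = apply_chat_template(processor, config, prompt, num_images=len(image))
output = generate(model, processor, formatted, image, verbose=False)
print(output)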