# 📦 zecanard/Gemopus-4-E4B-it-MLX-2bit-mixed_2_6
This model was converted to MLX from Jackrong/Gemopus-4-E4B-it using mlx-vlm version 0.4.4.
Please refer to the original model card for more details.
## 📊 Quality
Mixed-precision quantized vision-language model with an effective 5.407 bits per weight, produced with:

```shell
mlx_vlm.convert --quantize --q-group-size 32 --quant-predicate mixed_2_6
```
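The `mixed_2_6` predicate assigns each layer either 2-bit or 6-bit precision; since the effective rate of 5.407 bits per weight sits much closer to 6 than to 2, most parameters end up in the 6-bit group. Below is a minimal sketch of the idea, assuming the `(path, module, config)` predicate convention used by the MLX converters; this is not the library's actual `mixed_2_6` implementation.

```python
# Illustrative sketch only, NOT mlx-vlm's real mixed_2_6 predicate.
# Assumed convention: return a dict to override bits/group size per layer.
def mixed_2_6_sketch(path: str, module, config: dict) -> dict:
    # Hypothetical rule: push a few less-sensitive projections down to
    # 2 bits, keep everything else (the majority of parameters) at 6 bits.
    if "mlp.down_proj" in path:
        return {"bits": 2, "group_size": 32}
    return {"bits": 6, "group_size": 32}
```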
## 🛠️ Customizations
This quant's chat template is aware of the current date and enables thinking (if available). You can disable this behavior by deleting the following line from the chat template:

```jinja
{%- set enable_thinking = true %}
```
You may also need to adjust your environment's Reasoning Section Parsing to recognize `<|channel>thought` as the Start String and `<channel|>` as the End String.
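With those strings configured, a thinking response is segmented roughly like this (illustrative output, not captured from this model):

```text
<|channel>thought
The user wants a description of the attached image.
Identify the main subjects, then summarize them in one sentence.
<channel|>
The photo shows two cats resting on a pink sofa.
```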
## 🖥️ Use with mlx
```shell
pip install -U mlx-vlm
mlx_vlm.generate --model zecanard/Gemopus-4-E4B-it-MLX-2bit-mixed_2_6 --max-tokens 100 --temperature 0 --prompt "Describe this image." --image <path_to_image>
```
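The model can also be driven from Python. Here is a minimal sketch following the mlx-vlm README's `load`/`generate` pattern; treat the exact signatures as an assumption for your installed version:

```python
from mlx_vlm import load, generate
from mlx_vlm.prompt_utils import apply_chat_template
from mlx_vlm.utils import load_config

model_path = "zecanard/Gemopus-4-E4B-it-MLX-2bit-mixed_2_6"
model, processor = load(model_path)
config = load_config(model_path)

# One local image, mirroring the CLI example above.
image = ["path/to/image.jpg"]  # hypothetical path
prompt = "Describe this image."

# Wrap the prompt in the model's chat format.
formatted_prompt = apply_chat_template(processor, config, prompt, num_images=len(image))

output = generate(model, processor, formatted_prompt, image, max_tokens=100, verbose=False)
print(output)
```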