
# alibayram/magibu-26b-ft-identity-mlx

This model was converted to MLX format from alibayram/magibu-26b-ft-identity-v1 using mlx-vlm version 0.3.12. Refer to the original model card for more details.

## Use with mlx

```bash
pip install -U mlx-vlm
python -m mlx_vlm.generate --model alibayram/magibu-26b-ft-identity-mlx --max-tokens 100 --temperature 0.0 --prompt "Describe this image." --image <path_to_image>
```
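Beyond the CLI, mlx-vlm also exposes a Python API. The sketch below assumes the `load`/`generate`/`apply_chat_template` helpers shown in the mlx-vlm README; the exact signatures may differ between releases, and running it requires Apple silicon with the package installed plus a real local image path.

```python
# Sketch: programmatic generation with mlx-vlm's Python API.
# Assumes the helpers documented in the mlx-vlm README; verify against
# the version you install, as the API has changed between releases.
from mlx_vlm import load, generate
from mlx_vlm.prompt_utils import apply_chat_template
from mlx_vlm.utils import load_config

model_path = "alibayram/magibu-26b-ft-identity-mlx"
model, processor = load(model_path)   # downloads the weights on first use
config = load_config(model_path)

images = ["path/to/image.jpg"]        # replace with a real image path
prompt = apply_chat_template(
    processor, config, "Describe this image.", num_images=len(images)
)

# Mirrors the CLI invocation above (--max-tokens 100).
output = generate(model, processor, prompt, images,
                  max_tokens=100, verbose=False)
print(output)
```

This is the programmatic equivalent of the `mlx_vlm.generate` command shown above and is convenient when batching many images in one process instead of reloading the model per invocation.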
## Model details

- Format: Safetensors (MLX)
- Model size: 5B params
- Tensor types: BF16, U32, F32
- Quantization: 4-bit

## Model tree

- Base model: Qwen/Qwen3.5-27B
- This model is a quantized derivative of the base model.