Tags: Image-Text-to-Text · MLX · Safetensors · zaya1_vl · zaya · mixture-of-experts · hybrid-attention · cca-attention · apple-silicon · reasoning · tool-use · quantized · vision · multimodal · vision-language · qwen2_5_vl-vit · mxfp4 · jang · osaurus · conversational
Instructions for using OsaurusAI/ZAYA1-VL-8B-MXFP4 with libraries, inference providers, notebooks, and local apps. Follow the links below to get started.
- Libraries
- MLX
How to use OsaurusAI/ZAYA1-VL-8B-MXFP4 with MLX:
```python
# Make sure mlx-vlm is installed
# pip install --upgrade mlx-vlm
from mlx_vlm import load, generate
from mlx_vlm.prompt_utils import apply_chat_template
from mlx_vlm.utils import load_config

# Load the model
model, processor = load("OsaurusAI/ZAYA1-VL-8B-MXFP4")
config = load_config("OsaurusAI/ZAYA1-VL-8B-MXFP4")

# Prepare input
image = ["http://images.cocodataset.org/val2017/000000039769.jpg"]
prompt = "Describe this image."

# Apply chat template
formatted_prompt = apply_chat_template(
    processor, config, prompt, num_images=1
)

# Generate output
output = generate(model, processor, formatted_prompt, image)
print(output)
```

A command-line alternative is sketched after the Local Apps list below.

- Notebooks
- Google Colab
- Kaggle
- Local Apps
- LM Studio
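Besides the Python API shown under Libraries, mlx-vlm also ships a command-line generator. A minimal sketch, assuming a recent mlx-vlm release; the flags belong to mlx-vlm's `mlx_vlm.generate` entry point and are not part of this model card:

```shell
# Install or upgrade mlx-vlm first
pip install --upgrade mlx-vlm

# One-shot description of a remote image; --max-tokens caps the reply length
python -m mlx_vlm.generate \
    --model OsaurusAI/ZAYA1-VL-8B-MXFP4 \
    --image http://images.cocodataset.org/val2017/000000039769.jpg \
    --prompt "Describe this image." \
    --max-tokens 256
```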
fix(caps): supports_thinking=True — ZAYA reasons by default
config.json +2 -2
```diff
@@ -79,9 +79,9 @@
     "tool_parser": "zaya_xml",
     "think_in_template": false,
     "supports_tools": true,
-    "supports_thinking": false,
+    "supports_thinking": true,
     "family": "zaya1_vl",
     "modality": "vision",
     "cache_type": "hybrid"
   }
-}
+}
```
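With `supports_thinking` now `true`, downstream clients can branch on the capability flags before post-processing model output (`think_in_template: false` presumably means the chat template itself does not force a thinking turn; the flag here advertises that the model reasons on its own, matching the commit message). A minimal sketch of such a check, assuming the runtime wraps reasoning in `<think>...</think>` tags; the helper and tag format are illustrative assumptions, and only the config keys come from the diff above:

```python
import json
import re

# Hypothetical client-side capability check: read the model's config.json
# and decide whether generated text may carry <think>...</think> blocks.
# The key names come from the diff above; the tag format is an assumption.
with open("config.json") as f:
    config = json.load(f)

def split_reasoning(text: str) -> tuple[str, str]:
    """Separate the reasoning block from the final answer, if present."""
    if not config.get("supports_thinking", False):
        return "", text  # model never emits reasoning blocks
    match = re.search(r"<think>(.*?)</think>", text, flags=re.DOTALL)
    if match is None:
        return "", text
    thinking = match.group(1).strip()
    answer = text[match.end():].strip()
    return thinking, answer

thinking, answer = split_reasoning("<think>Two cats on a couch.</think>The image shows two cats.")
print(answer)  # -> "The image shows two cats."
```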