Image-Text-to-Text
MLX
Safetensors
zaya1_vl
zaya
mixture-of-experts
hybrid-attention
cca-attention
apple-silicon
reasoning
tool-use
quantized
vision
multimodal
vision-language
qwen2_5_vl-vit
jang
jangtq
mxtq
jangtq-prestack
osaurus
conversational
Instructions for using OsaurusAI/ZAYA1-VL-8B-JANGTQ4 with libraries, notebooks, and local apps.
- Libraries
  - MLX

How to use OsaurusAI/ZAYA1-VL-8B-JANGTQ4 with MLX:
```python
# Make sure mlx-vlm is installed
# pip install --upgrade mlx-vlm
from mlx_vlm import load, generate
from mlx_vlm.prompt_utils import apply_chat_template
from mlx_vlm.utils import load_config

# Load the model
model, processor = load("OsaurusAI/ZAYA1-VL-8B-JANGTQ4")
config = load_config("OsaurusAI/ZAYA1-VL-8B-JANGTQ4")

# Prepare input
image = ["http://images.cocodataset.org/val2017/000000039769.jpg"]
prompt = "Describe this image."

# Apply chat template
formatted_prompt = apply_chat_template(
    processor, config, prompt, num_images=1
)

# Generate output
output = generate(model, processor, formatted_prompt, image)
print(output)
```
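If you prefer to manage the download yourself (for offline use, or to pin a local copy of the weights), the same flow can be run from a local directory. This is a minimal sketch, assuming `mlx_vlm.load` and `load_config` accept a local path the same way they accept a repo id; the image path is a placeholder, not a file from this repo:

```python
from huggingface_hub import snapshot_download
from mlx_vlm import load, generate
from mlx_vlm.prompt_utils import apply_chat_template
from mlx_vlm.utils import load_config

# Download the repository once; later calls reuse the local cache.
local_dir = snapshot_download("OsaurusAI/ZAYA1-VL-8B-JANGTQ4")

# Load model, processor, and config from the local copy (assumed to work like a repo id).
model, processor = load(local_dir)
config = load_config(local_dir)

# Same generation flow as above, but with a local image file.
image = ["./example.jpg"]  # placeholder path, replace with your own image
prompt = "Describe this image."
formatted_prompt = apply_chat_template(processor, config, prompt, num_images=1)
print(generate(model, processor, formatted_prompt, image))
```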
- Notebooks
  - Google Colab
  - Kaggle
- Local Apps
  - LM Studio
```json
{
  "do_convert_rgb": true,
  "do_normalize": true,
  "do_rescale": true,
  "do_resize": true,
  "image_mean": [
    0.48145466,
    0.4578275,
    0.40821073
  ],
  "image_processor_type": "Qwen2VLImageProcessor",
  "image_std": [
    0.26862954,
    0.26130258,
    0.27577711
  ],
  "max_pixels": 12845056,
  "merge_size": 2,
  "min_pixels": 3136,
  "patch_size": 14,
  "processor_class": "Zaya1VLProcessor",
  "resample": 3,
  "rescale_factor": 0.00392156862745098,
  "size": {
    "longest_edge": 12845056,
    "shortest_edge": 3136
  },
  "temporal_patch_size": 1
}
```
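The JSON above is the image preprocessor configuration: each image is resized so that both edges are multiples of `patch_size * merge_size = 28` pixels and the total area stays between `min_pixels` and `max_pixels`. The sketch below mirrors the Qwen2VL-style "smart resize" used by `Qwen2VLImageProcessor` and estimates the resulting vision-token count; it is illustrative only, and the helper names are not part of this repo.

```python
import math

# Values taken from the preprocessor config above.
PATCH_SIZE = 14
MERGE_SIZE = 2
FACTOR = PATCH_SIZE * MERGE_SIZE          # 28: resized edges snap to multiples of this
MIN_PIXELS = 3136                         # 56 x 56, smallest allowed area
MAX_PIXELS = 12845056                     # 3584 x 3584, largest allowed area

def smart_resize(height: int, width: int) -> tuple[int, int]:
    """Round edges to multiples of FACTOR, then clamp total area to the pixel budget."""
    h = round(height / FACTOR) * FACTOR
    w = round(width / FACTOR) * FACTOR
    if h * w > MAX_PIXELS:
        scale = math.sqrt((height * width) / MAX_PIXELS)
        h = math.floor(height / scale / FACTOR) * FACTOR
        w = math.floor(width / scale / FACTOR) * FACTOR
    elif h * w < MIN_PIXELS:
        scale = math.sqrt(MIN_PIXELS / (height * width))
        h = math.ceil(height * scale / FACTOR) * FACTOR
        w = math.ceil(width * scale / FACTOR) * FACTOR
    return h, w

def vision_tokens(height: int, width: int) -> int:
    """Approximate number of vision tokens after the 2x2 spatial merge."""
    h, w = smart_resize(height, width)
    return (h // PATCH_SIZE) * (w // PATCH_SIZE) // (MERGE_SIZE ** 2)

# Example: a 640x480 image such as the COCO photo used in the snippet above.
print(smart_resize(480, 640))   # -> (476, 644)
print(vision_tokens(480, 640))  # -> 391
```

With these values, a single image produces between 4 vision tokens (at `min_pixels`, 56x56) and 16,384 vision tokens (at `max_pixels`, 3584x3584) after the 2x2 merge.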