Instructions to use Zyphra/ZAYA1-VL-8B with libraries, inference providers, notebooks, and local apps. Follow the sections below to get started.
- Libraries
- Transformers
How to use Zyphra/ZAYA1-VL-8B with Transformers:
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("image-text-to-text", model="Zyphra/ZAYA1-VL-8B")
messages = [
    {
        "role": "user",
        "content": [
            {"type": "image", "url": "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/p-blog/candy.JPG"},
            {"type": "text", "text": "What animal is on the candy?"},
        ],
    },
]
pipe(text=messages)

# Load model directly (requires the Zyphra transformers fork, see Quick start below)
from transformers import Zaya1VLForConditionalGeneration
model = Zaya1VLForConditionalGeneration.from_pretrained("Zyphra/ZAYA1-VL-8B", dtype="auto")
- Notebooks
- Google Colab
- Kaggle
- Local Apps
- vLLM
How to use Zyphra/ZAYA1-VL-8B with vLLM:
Install from pip and serve the model:
# Install vLLM from pip:
pip install vllm

# Start the vLLM server:
vllm serve "Zyphra/ZAYA1-VL-8B"

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:8000/v1/chat/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "Zyphra/ZAYA1-VL-8B",
    "messages": [
      {
        "role": "user",
        "content": [
          {"type": "text", "text": "Describe this image in one sentence."},
          {"type": "image_url", "image_url": {"url": "https://cdn.britannica.com/61/93061-050-99147DCE/Statue-of-Liberty-Island-New-York-Bay.jpg"}}
        ]
      }
    ]
  }'
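The server can also be called from Python with any OpenAI-compatible client. Below is a minimal sketch using the openai package; it assumes the server started above is reachable at localhost:8000 (the client package and the placeholder API key are not part of the original instructions):
# Minimal Python client for the vLLM server above (OpenAI-compatible API).
# Assumes `pip install openai`; vLLM ignores the API key, so any placeholder works.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")
response = client.chat.completions.create(
    model="Zyphra/ZAYA1-VL-8B",
    messages=[
        {"role": "user", "content": [
            {"type": "text", "text": "Describe this image in one sentence."},
            {"type": "image_url", "image_url": {"url": "https://cdn.britannica.com/61/93061-050-99147DCE/Statue-of-Liberty-Island-New-York-Bay.jpg"}},
        ]},
    ],
)
print(response.choices[0].message.content)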
- SGLang
How to use Zyphra/ZAYA1-VL-8B with SGLang:
Install from pip and serve the model:
# Install SGLang from pip:
pip install sglang

# Start the SGLang server:
python3 -m sglang.launch_server \
  --model-path "Zyphra/ZAYA1-VL-8B" \
  --host 0.0.0.0 \
  --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/chat/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "Zyphra/ZAYA1-VL-8B",
    "messages": [
      {
        "role": "user",
        "content": [
          {"type": "text", "text": "Describe this image in one sentence."},
          {"type": "image_url", "image_url": {"url": "https://cdn.britannica.com/61/93061-050-99147DCE/Statue-of-Liberty-Island-New-York-Bay.jpg"}}
        ]
      }
    ]
  }'
Use Docker images
docker run --gpus all \
  --shm-size 32g \
  -p 30000:30000 \
  -v ~/.cache/huggingface:/root/.cache/huggingface \
  --env "HF_TOKEN=<secret>" \
  --ipc=host \
  lmsysorg/sglang:latest \
  python3 -m sglang.launch_server \
  --model-path "Zyphra/ZAYA1-VL-8B" \
  --host 0.0.0.0 \
  --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/chat/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "Zyphra/ZAYA1-VL-8B",
    "messages": [
      {
        "role": "user",
        "content": [
          {"type": "text", "text": "Describe this image in one sentence."},
          {"type": "image_url", "image_url": {"url": "https://cdn.britannica.com/61/93061-050-99147DCE/Statue-of-Liberty-Island-New-York-Bay.jpg"}}
        ]
      }
    ]
  }'
- Docker Model Runner
How to use Zyphra/ZAYA1-VL-8B with Docker Model Runner:
docker model run hf.co/Zyphra/ZAYA1-VL-8B
ZAYA1-VL-8B
ZAYA1-VL-8B is a vision-language model (VLM) built upon Zyphra's ZAYA1-8B LLM. It achieves state-of-the-art performance among VLMs of comparable size and inference efficiency.
- Paper: ZAYA1-VL-8B Technical Report
- Code: GitHub (zaya1-vl branch)
- Blog: Announcement blog post
ZAYA1-VL-8B is open-sourced under the Apache 2.0 license.
Performance
ZAYA1-VL-8B performs very strongly against models of comparable size and inference FLOPs, and outperforms several strong larger models.
Model Architecture
ZAYA1-VL-8B uses our ZAYA1-8B LLM as its base text decoder and the Qwen2.5-VL vision transformer (ViT) as its vision encoder. ZAYA1-VL-8B introduces two architectural innovations:
Vision-specific LoRA parameters: ZAYA1-VL-8B uses specialized LoRA parameters on its MLP and CCA weights that are activated only on vision tokens. We find that adding vision-specific parameters substantially improves performance, since the model can devote dedicated capacity solely to visual processing. These LoRA parameters are trained alongside the main model parameters.
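As a rough illustration of the idea (a minimal sketch; the layer names, LoRA rank, and placement are invented for this example and do not reflect the released implementation):
import torch
import torch.nn as nn

class VisionGatedLoRALinear(nn.Module):
    """A linear layer whose LoRA update is applied only on vision tokens.

    Illustrative sketch: the real model applies such adapters to its MLP and
    CCA projections; the names and rank here are made up for the example.
    """
    def __init__(self, in_features, out_features, rank=16):
        super().__init__()
        self.base = nn.Linear(in_features, out_features)
        self.lora_a = nn.Linear(in_features, rank, bias=False)
        self.lora_b = nn.Linear(rank, out_features, bias=False)
        nn.init.zeros_(self.lora_b.weight)  # LoRA branch starts as a zero update

    def forward(self, x, is_vision_token):
        # x: (batch, seq, in_features); is_vision_token: (batch, seq) bool mask
        out = self.base(x)
        lora_out = self.lora_b(self.lora_a(x))
        # Add the LoRA contribution only at positions that hold vision tokens.
        return out + lora_out * is_vision_token.unsqueeze(-1).to(out.dtype)
Because the LoRA branch is zero-initialized and gated by the vision mask, text tokens pass through the base weights unchanged and the adapters only learn to specialize on visual inputs.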
Bidirectional attention for image tokens: ZAYA1-VL-8B processes all image tokens with a bidirectional attention mask, meaning attention is not causal within an image. We find that this improves performance by not imposing an arbitrary causal order on image tokens, which are naturally non-causal.
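The masking rule can be illustrated as follows (a sketch of the described behaviour only, assuming bidirectionality applies within each contiguous image span; this is not the model's actual code):
import torch

def build_attention_mask(is_vision_token: torch.Tensor) -> torch.Tensor:
    """Causal mask for text, bidirectional within each contiguous image span.

    is_vision_token: (seq,) bool tensor marking image tokens.
    Returns a (seq, seq) bool mask where True means "may attend".
    """
    seq = is_vision_token.shape[0]
    allowed = torch.tril(torch.ones(seq, seq, dtype=torch.bool))  # causal baseline

    # Assign an id to each contiguous run of image tokens (0 = text token).
    starts = is_vision_token & ~torch.roll(is_vision_token, 1)
    starts[0] = is_vision_token[0]
    image_id = torch.cumsum(starts.long(), dim=0)
    image_id = torch.where(is_vision_token, image_id, torch.zeros_like(image_id))

    # Tokens of the same image may attend to each other in both directions.
    same_image = (image_id.unsqueeze(0) == image_id.unsqueeze(1)) & (image_id.unsqueeze(0) > 0)
    return allowed | same_image
In practice, such an allowed-attention mask would be combined with padding masks and converted to the additive form expected by the attention implementation.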
ZAYA1-VL-8B is trained only on open data. Detailed dataset descriptions can be found in the accompanying technical report.
| Eval | ZAYA1-VL-8B (0.7B / 8B) | MolmoE (1.2B / 8B) | Qwen3.5-2B | InternVL3.5-20B (20B / 4B) | Molmo2-4B | Qwen3.5-4B |
|---|---|---|---|---|---|---|
| AI2D (test) | 87.5 | 73.6 | 78.6 | 85.5 | 85.4 | 83.7 |
| ChartQA (test) | 82.2 | 77.9 | 78.4 | 87.0 | 86.1 | 82.4 |
| DocVQA (test) | 92.5 | 77.7 | -- | 92.9 | 87.8 | -- |
| InfoVQA (test) | 74.0 | 53.9 | -- | 78.1 | 78.6 | -- |
| TextVQA (val) | 74.4 | 78.1 | 79.0 | 78.5 | 83.1 | 81.1 |
| OCRBench | 79.8 | 55.0 | 83.1 | 86.7 | 62.0 | 85.3 |
| VQA v2.0 (val) | 80.0 | 82.8 | 78.3 | 78.4 | 85.3 | 80.4 |
| MathVista (mini) | 64.0 | 39.1 | 52.9 | 73.5 | 56.5 | 82.3 |
| MMMU (val) | 46.0 | -- | 49.2 | 72.6 | 48.8 | 56.9 |
| SEED (image) | 72.7 | 68.7 | 75.8 | 76.8 | 78.0 | 76.6 |
| Blink (val) | 45.9 | -- | 61.0 | 58.9 | 63.5 | 56.8 |
| RealWorldQA | 65.0 | 60.4 | 69.0 | 71.2 | 73.8 | 74.2 |
| CountBenchQA | 88.1 | 77.4 | 84.2 | 82.1 | 91.2 | 84.8 |
| PixMoCount (test) | 83.1 | 45.2 | 65.5 | 47.3 | 87.0 | 84.2 |
| Point-Bench (avg) | 58.0 | 58.0 | 40.6 | -- | 68.5 | 64.4 |
| RefCOCO (avg) | 84.3 | -- | 80.1 | 89.1 | -- | 87.7 |
All numbers were obtained with the Zyphra evaluation harness (based on VLMEvalKit). Models are ordered by total parameter count. Bold indicates the best score in each row, while underlined values indicate the lowest score.
Quick start
Prerequisites
To use ZAYA1-VL, install the zaya1-vl branch of our fork of the transformers library, which is based on transformers v4.57.1:
pip install "transformers @ git+https://github.com/Zyphra/transformers.git@zaya1-vl"
pip install qwen-vl-utils==0.0.2
pip install flash_attn
The command above assumes that the requirements for transformers v4.57.1 are already installed in your environment. If you're installing into a fresh Python environment, you may want to specify an extra, such as [dev-torch], to install all the dependencies:
pip install "transformers[dev-torch] @ git+https://github.com/Zyphra/transformers.git@zaya1-vl"
For the fastest setup, make sure your environment matches a prebuilt flash_attn wheel; otherwise the installation will build from source.
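As a quick sanity check after installation (a minimal sketch; the exact version string reported by the fork may differ), confirm that the fork and flash_attn import correctly:
# Sanity check: the fork and flash_attn should both import without errors.
import transformers
import flash_attn

print(transformers.__version__)  # the fork is based on v4.57.1
print(flash_attn.__version__)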
Inference
from transformers import Zaya1VLForConditionalGeneration, Zaya1VLProcessor
import torch
from PIL import Image
from qwen_vl_utils import process_vision_info
import requests

device = "cuda"

# Load the processor and model (flash_attention_2 requires the flash_attn package)
processor = Zaya1VLProcessor.from_pretrained("Zyphra/ZAYA1-VL-8B", temporal_patch_size=1)
model = Zaya1VLForConditionalGeneration.from_pretrained("Zyphra/ZAYA1-VL-8B", device_map=device, torch_dtype=torch.bfloat16, attn_implementation="flash_attention_2")

# Download an example image
url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)

question = "What do you see in the image? Give us some detail."

# max_pixels caps the image at roughly num_img_tokens visual tokens
# (one token per 28x28 pixel patch); min_pixels sets the lower bound.
num_img_tokens = 8000
conversation = [
    {"role": "user", "content": [
        {"type": "image", "image": image, "max_pixels": num_img_tokens * 28 * 28, "min_pixels": 10 * 28 * 28},
        {"type": "text", "text": question},
    ]},
]

# Build the prompt and preprocess the image
prompt = processor.apply_chat_template(conversation, add_generation_prompt=True)
images, _ = process_vision_info(conversation)
inputs = processor(text=prompt, images=images, add_special_tokens=True, return_tensors="pt")
inputs = {key: value.to(device) for key, value in inputs.items()}

# Generate and decode only the newly generated tokens
outputs = model.generate(**inputs, max_new_tokens=100)
print(processor.tokenizer.decode(outputs[0][inputs["input_ids"].shape[-1]:]))
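Multi-image prompts should follow the same message format. The sketch below is an untested extrapolation of the example above (whether Zaya1VLProcessor accepts several images per turn is an assumption based on the Qwen2.5-VL processing utilities, not confirmed by this card); both image URLs are taken from earlier examples on this page:
# Multi-image sketch: reuse the processor/model/device defined above.
image_a = Image.open(requests.get("http://images.cocodataset.org/val2017/000000039769.jpg", stream=True).raw)
image_b = Image.open(requests.get("https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/p-blog/candy.JPG", stream=True).raw)

conversation = [
    {"role": "user", "content": [
        {"type": "image", "image": image_a, "max_pixels": 4000 * 28 * 28, "min_pixels": 10 * 28 * 28},
        {"type": "image", "image": image_b, "max_pixels": 4000 * 28 * 28, "min_pixels": 10 * 28 * 28},
        {"type": "text", "text": "What do these two images have in common?"},
    ]},
]
prompt = processor.apply_chat_template(conversation, add_generation_prompt=True)
images, _ = process_vision_info(conversation)
inputs = processor(text=prompt, images=images, add_special_tokens=True, return_tensors="pt")
inputs = {key: value.to(device) for key, value in inputs.items()}
outputs = model.generate(**inputs, max_new_tokens=100)
print(processor.tokenizer.decode(outputs[0][inputs["input_ids"].shape[-1]:]))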