---
base_model:
  - Zyphra/ZAYA1-base
license: apache-2.0
pipeline_tag: image-text-to-text
library_name: transformers
---

# ZAYA1-VL-8B

ZAYA1-VL-8B is a vision-language model (VLM) built upon Zyphra's ZAYA1-8B LLM. It achieves state-of-the-art performance among VLMs of comparable size and inference cost.

ZAYA1-VL-8B is open-sourced under the Apache 2.0 license.

## Performance

ZAYA1-VL-8B performs strongly against models of comparable size and inference FLOPs, and outperforms several strong larger models.

*Figure: benchmark performance plotted against active parameter count.*

## Model Architecture

ZAYA1-VL-8B uses our ZAYA1-8B LLM as its base text decoder and the Qwen2.5-VL vision encoder as its ViT. ZAYA1-VL-8B introduces two architectural innovations:

- **Vision-specific LoRA parameters:** ZAYA1-VL-8B uses specialized LoRA parameters on its MLP and CCA weights which are activated only on vision tokens. We find that adding vision-specific parameters substantially improves model performance, since the model can devote specific parameters solely to visual processing. These LoRA parameters are trained jointly with the main model parameters (a minimal sketch of the gating is shown after this list).

- **Bidirectional attention for image tokens:** ZAYA1-VL-8B processes all image-token inputs with a bidirectional attention mask, meaning attention is not causal within an image. We find that this improves performance by not imposing an arbitrary causal order on image tokens, which are naturally non-causal (a sketch of such a mask follows the LoRA example below).
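
To make the vision-token gating concrete, here is a minimal, hypothetical sketch of a LoRA-wrapped linear layer whose low-rank update is applied only at vision-token positions. The class name, rank, and scaling are illustrative assumptions, not the actual ZAYA1-VL-8B implementation:

```python
import torch
import torch.nn as nn

class VisionGatedLoRALinear(nn.Module):
    """Hypothetical sketch: a shared linear layer plus a LoRA branch
    that contributes only at vision-token positions."""

    def __init__(self, base: nn.Linear, rank: int = 16, alpha: float = 32.0):
        super().__init__()
        self.base = base
        self.lora_A = nn.Linear(base.in_features, rank, bias=False)
        self.lora_B = nn.Linear(rank, base.out_features, bias=False)
        nn.init.zeros_(self.lora_B.weight)  # standard LoRA init: update starts as a no-op
        self.scaling = alpha / rank

    def forward(self, x: torch.Tensor, is_vision: torch.Tensor) -> torch.Tensor:
        # x: (batch, seq, in_features); is_vision: (batch, seq) boolean mask
        out = self.base(x)
        delta = self.lora_B(self.lora_A(x)) * self.scaling
        # Add the LoRA update only where the token is an image token.
        return out + delta * is_vision.unsqueeze(-1).to(delta.dtype)
```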

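Similarly, a hedged sketch of a mixed attention mask: causal over text, fully bidirectional among vision tokens. A real implementation would gate per image when a sequence contains multiple images; here all vision tokens share one block for brevity:

```python
import torch

def build_mixed_attention_mask(is_vision: torch.Tensor) -> torch.Tensor:
    """Boolean (seq, seq) mask where True means attention is allowed:
    causal everywhere, plus full bidirectional attention among vision
    tokens, so image patches can also attend to later patches."""
    seq_len = is_vision.shape[0]
    causal = torch.tril(torch.ones(seq_len, seq_len, dtype=torch.bool))
    # Allow any vision token to attend to any other vision token.
    vision_block = is_vision.unsqueeze(0) & is_vision.unsqueeze(1)
    return causal | vision_block

# Example: 2 text tokens, 3 image tokens, 2 text tokens.
is_vision = torch.tensor([0, 0, 1, 1, 1, 0, 0], dtype=torch.bool)
mask = build_mixed_attention_mask(is_vision)
```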

ZAYA1-VL-8B is trained exclusively on open data. Detailed dataset descriptions can be found in the accompanying technical report.

| Eval | ZAYA1-VL-8B (0.7B / 8B) | MolmoE (1.2B / 8B) | Qwen3.5-2B | InternVL3.5-20B (20B / 4B) | Molmo2-4B | Qwen3.5-4B |
|---|---|---|---|---|---|---|
| AI2D (test) | 87.5 | <u>82.5</u> | 86.7 | 85.5 | **93.8** | 93.4 |
| ChartQA (test) | 82.2 | <u>77.9</u> | 78.4 | **87.0** | 86.1 | 82.4 |
| DocVQA (test) | 92.5 | <u>77.7</u> | -- | **92.9** | 87.8 | -- |
| InfoVQA (test) | 74.0 | <u>53.9</u> | -- | 78.1 | **78.6** | -- |
| TextVQA (val) | <u>74.4</u> | 78.1 | 79.0 | 78.5 | **83.1** | 81.1 |
| OCRBench | 79.8 | <u>55.0</u> | 83.1 | **86.7** | 62.0 | 85.3 |
| VQA v2.0 (val) | 80.0 | 82.8 | <u>78.3</u> | 78.4 | **85.3** | 80.4 |
| MathVista (mini) | 64.0 | <u>39.1</u> | 52.9 | 73.5 | 56.5 | **82.3** |
| MMMU (val) | <u>46.0</u> | -- | 49.2 | **72.6** | 48.8 | 56.9 |
| SEED (image) | 72.7 | <u>68.7</u> | 75.8 | 76.8 | **78.0** | 76.6 |
| Blink (val) | <u>45.9</u> | -- | 61.0 | 58.9 | **63.5** | 56.8 |
| RealWorldQA | 65.0 | <u>60.4</u> | 69.0 | 71.2 | 73.8 | **74.2** |
| CountBenchQA | 88.1 | <u>77.4</u> | 84.2 | 82.1 | **91.2** | 84.8 |
| PixMoCount (test) | 83.1 | <u>45.2</u> | 65.5 | 47.3 | **87.0** | 84.2 |
| Point-Bench (avg) | 58.0 | 58.0 | <u>40.6</u> | -- | **68.5** | 64.4 |
| RefCOCO (avg) | 84.3 | -- | <u>80.1</u> | **89.1** | -- | 87.7 |

All numbers were obtained with the Zyphra evaluation harness (based on VLMEvalKit). Models are ordered by total parameter count. Bold indicates the best score in each row; underlined values indicate the lowest score.

## Quick start

### Prerequisites

To use ZAYA1-VL, install the `zaya1-vl` branch of our fork of the transformers library, which is based on transformers v4.57.1:

pip install "transformers @ git+https://github.com/Zyphra/transformers.git@zaya1-vl"
pip install qwen-vl-utils==0.0.2
pip install flash_attn

The commands above assume the requirements for transformers v4.57.1 are already installed in your environment. If you're installing into a fresh Python environment, you may want to specify an extra, such as `[dev-torch]`, to install all the dependencies:

pip install "transformers[dev-torch] @ git+https://github.com/Zyphra/transformers.git@zaya1-vl"

For the fastest setup, make sure your environment matches an existing flash_attn wheel; otherwise the installation will build from source.
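
To confirm the environment before loading the 8B checkpoint, a quick import check is enough. This is a hedged sketch; the exact version string the fork reports is an assumption based on its stated v4.57.1 base:

```python
# Quick sanity check that the fork and FlashAttention are importable.
import torch
import transformers
import flash_attn

print(transformers.__version__)   # the fork is based on transformers v4.57.1
print(flash_attn.__version__)
print(torch.cuda.is_available())  # flash_attn requires a CUDA-capable GPU
```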

### Inference

```python
from transformers import Zaya1VLForConditionalGeneration, Zaya1VLProcessor
import torch
from PIL import Image
from qwen_vl_utils import process_vision_info
import requests

device = "cuda"
processor = Zaya1VLProcessor.from_pretrained("Zyphra/ZAYA1-VL-8B", temporal_patch_size=1)
model = Zaya1VLForConditionalGeneration.from_pretrained(
    "Zyphra/ZAYA1-VL-8B",
    device_map=device,
    torch_dtype=torch.bfloat16,
    attn_implementation="flash_attention_2",
)

url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)
question = "What do you see in the image? Give us some detail."
num_img_tokens = 8000  # budget of visual tokens; each token covers a 28x28 pixel area

conversation = [
    {"role": "user", "content": [
        # max_pixels / min_pixels bound the resolution the image is resized to before patching
        {"type": "image", "image": image, "max_pixels": num_img_tokens * 28 * 28, "min_pixels": 10 * 28 * 28},
        {"type": "text", "text": question},
      ]
    },
]
prompt = processor.apply_chat_template(conversation, add_generation_prompt=True)
images, _ = process_vision_info(conversation)
inputs = processor(text=prompt, images=images, add_special_tokens=True, return_tensors="pt")
inputs = {key: value.to(device) for key, value in inputs.items()}

outputs = model.generate(**inputs, max_new_tokens=100)
# Decode only the newly generated tokens, skipping the prompt.
print(processor.tokenizer.decode(outputs[0][inputs["input_ids"].shape[-1]:]))
```
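
For interactive use, the stock `TextStreamer` from transformers can print tokens as they are generated. A minimal sketch reusing the `model`, `processor`, and `inputs` objects from the snippet above:

```python
from transformers import TextStreamer

# Streams the decoded continuation to stdout, skipping the prompt text.
streamer = TextStreamer(processor.tokenizer, skip_prompt=True, skip_special_tokens=True)
_ = model.generate(**inputs, max_new_tokens=100, streamer=streamer)
```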