VLM-CapCurriculum-Qwen2.5-VL-7B-Staged

A vision-language model post-trained from Qwen/Qwen2.5-VL-7B-Instruct with the staged, capability-dimension curriculum from "From Seeing to Thinking: Decoupling Perception and Reasoning Improves Post-Training of Vision-Language Models" (ICML 2026).

TL;DR. Visual perception, not reasoning length, is the dominant bottleneck for visual reasoning in VLMs. We fix this by post-training along a capability axis (perception → textual reasoning → visual reasoning) rather than mixing all data together.

Headline numbers

| Setting | Visual Math AVG | Perception AVG | Overall AVG |
|---|---|---|---|
| Qwen2.5-VL-7B (base) | 37.03 | 76.33 | 56.68 |
| Qwen2.5-VL-7B + Merged training | 40.74 | 75.95 | 58.34 |
| Qwen2.5-VL-7B + Staged (this model) | 42.26 | 77.24 | 59.75 |

Visual math = MathVista / MathVision / MathVerse(VI) / WeMath. Perception = A-OKVQA / RealWorldQA / MMStar / POPE.

Compared with the merged baseline on the same backbone, this model also produces shorter reasoning traces: better perception lets the model think less.

How it was trained

Three RLVR stages with GRPO (on top of EasyR1):

  1. Stage 1 (visual perception): UCSC-VLAA/VLM-CapCurriculum-Perception (synthesised + filtered DOCCI MCQs).
  2. Stage 2 (textual reasoning): UCSC-VLAA/VLM-CapCurriculum-TextReasoning (ORZ-Math-13k).
  3. Stage 3 (visual reasoning): UCSC-VLAA/VLM-CapCurriculum-VisualReasoning (CLEVR-Math + GeoQA170K + Math PUMA + DocVQA + ArxivQA mix).
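
Every stage optimises the same GRPO objective with verifiable rewards. As a minimal sketch of the group-relative advantage GRPO uses in place of a learned critic, assuming a generic 0/1 answer-verification reward (function names here are illustrative, not taken from the EasyR1 training code):

```python
import numpy as np

def grpo_advantages(rewards: np.ndarray, eps: float = 1e-6) -> np.ndarray:
    """Group-relative advantage: standardise rewards across the G rollouts
    sampled for the same prompt, so no value network is needed."""
    return (rewards - rewards.mean()) / (rewards.std() + eps)

# Example: a group of 5 rollouts (the training group size) for one prompt,
# each scored 1.0 if its \boxed{} answer verifies, else 0.0.
rewards = np.array([1.0, 0.0, 1.0, 0.0, 0.0])
print(grpo_advantages(rewards))  # positive for correct rollouts, negative otherwise
```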

All three stages share one system / format prompt; see Inference below.

Detailed launch scripts: training/examples/qwen2_5_vl_7b/ in the code repo.

Inference

The model expects the unified system prompt that it was trained against:

```text
You FIRST think about the reasoning process as an internal monologue and then
provide the final answer. The reasoning process MUST BE enclosed within
<think> </think> tags. The final answer MUST BE put in \boxed{}.
i.e. <think> reasoning here </think> \boxed{final answer here}
```
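
Because the output format is fixed, extracting the final answer is mechanical. A small parsing sketch (this helper is illustrative, not part of the released code, and does not handle nested braces inside \boxed{}):

```python
import re

def extract_boxed_answer(completion: str) -> str | None:
    """Return the contents of the last \\boxed{...} in a completion."""
    matches = re.findall(r"\\boxed\{([^{}]*)\}", completion)
    return matches[-1] if matches else None

print(extract_boxed_answer("<think> 2 + 2 = 4 </think> \\boxed{4}"))  # -> "4"
```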

Quick start with vLLM:

```bash
vllm serve UCSC-VLAA/VLM-CapCurriculum-Qwen2.5-VL-7B-Staged \
  --tensor-parallel-size 4 --gpu-memory-utilization 0.9 --port 23340
```

Then send chat completions including the system prompt above.
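
For example, via the OpenAI-compatible endpoint that `vllm serve` exposes (the image URL and question below are placeholders):

```python
from openai import OpenAI

client = OpenAI(base_url="http://localhost:23340/v1", api_key="EMPTY")

SYSTEM_PROMPT = (
    "You FIRST think about the reasoning process as an internal monologue and then "
    "provide the final answer. The reasoning process MUST BE enclosed within "
    "<think> </think> tags. The final answer MUST BE put in \\boxed{}."
)

response = client.chat.completions.create(
    model="UCSC-VLAA/VLM-CapCurriculum-Qwen2.5-VL-7B-Staged",
    messages=[
        {"role": "system", "content": SYSTEM_PROMPT},
        {
            "role": "user",
            "content": [
                {"type": "image_url", "image_url": {"url": "https://example.com/figure.png"}},
                {"type": "text", "text": "What is the area of the shaded region?"},
            ],
        },
    ],
)
print(response.choices[0].message.content)
```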

For VLMEvalKit-style benchmark evaluation, plug it in via the Qwen2_5_VL_7B_Staged alias defined in evaluation/configs/models.py.

Intended use & limitations

Intended for research on vision-language reasoning, post-training methodology, and capability-dimension curriculum learning. Inherits the safety / bias profile of the underlying Qwen2.5-VL-7B-Instruct backbone; we have not added additional alignment fine-tuning. Not recommended for high-stakes deployments without further evaluation.

The model was trained at the 7B parameter scale with 2048-token max prompt length and a fixed group size of 5. Behaviour at much longer contexts or substantially different prompt formats has not been characterised.

License & citation

Released under Apache-2.0, matching the upstream backbone. If you use this model, please cite:

```bibtex
@inproceedings{vlmcapcurriculum2026,
  title     = {From Seeing to Thinking: Decoupling Perception and Reasoning Improves Post-Training of Vision-Language Models},
  author    = {Juncheng Wu and Hardy Chen and Haoqin Tu and Xianfeng Tang and Freda Shi and Hui Liu and Hanqing Lu and Cihang Xie and Yuyin Zhou},
  booktitle = {Proceedings of the International Conference on Machine Learning (ICML)},
  year      = {2026}
}
```

Acknowledgements

Built on top of EasyR1, VLMEvalKit, and the Qwen2.5-VL family.

Weights: Safetensors, 8B params, BF16.
