---
license: apache-2.0
language:
- en
library_name: transformers
pipeline_tag: image-text-to-text
base_model: OpenGVLab/InternVL3-8B-hf
tags:
- vision-language-model
- vlm
- reasoning
- perception
- rlvr
- grpo
- icml-2026
---
# VLM-CapCurriculum-InternVL3-8B-Staged
A vision-language model post-trained from **OpenGVLab/InternVL3-8B-hf** with the staged, capability-dimension curriculum from
*"From Seeing to Thinking: Decoupling Perception and Reasoning Improves Post-Training of Vision-Language Models"*
(ICML 2026).
This release is the **Stage-3 step-186** checkpoint, which gave the largest gain over the merged baseline among the four backbones we tried.
> **TL;DR.** Visual perception, not reasoning length, is the dominant bottleneck for visual reasoning in VLMs. We fix this by post-training along a **capability axis** (perception → textual reasoning → visual reasoning) rather than mixing all data together.
| Resource | Link |
|---|---|
| Paper | <TODO_PAPER_URL> |
| Code | https://github.com/UCSC-VLAA/VLM-CapCurriculum |
| Project page | https://ucsc-vlaa.github.io/VLM-CapCurriculum |
| Collection (model + data + eval) | https://huggingface.co/collections/UCSC-VLAA/vlm-capcurriculum-from-seeing-to-thinking-icml-2026-6a07691f944148ccb2b183b8 |
## Headline numbers (extended benchmark suite, AVG over 10 benchmarks)
| Setting | Extended AVG |
|---|:---:|
| InternVL3-8B (base) | 29.69 |
| InternVL3-8B + Merged training | 41.94 |
| **InternVL3-8B + Staged (this model)** | **45.71** |
Δ over merged: **+3.77**, the largest staged-vs-merged gap among the four backbones. Most striking is WeMath (+9.90 over merged), evidence that decoupling perception and reasoning is especially impactful for weaker base models. See Appendix Table 9 of the paper for the full per-benchmark breakdown.
## How it was trained
Three RLVR stages with GRPO (on top of [EasyR1](https://github.com/hiyouga/EasyR1)):
1. **Stage 1: visual perception** on `UCSC-VLAA/VLM-CapCurriculum-Perception` (synthesised + filtered DOCCI MCQs).
2. **Stage 2: textual reasoning** on `UCSC-VLAA/VLM-CapCurriculum-TextReasoning` (ORZ-Math-13k).
3. **Stage 3: visual reasoning** on `UCSC-VLAA/VLM-CapCurriculum-VisualReasoning` (CLEVR-Math + GeoQA170K + Math PUMA + DocVQA + ArxivQA mix). **Released checkpoint: step 186 of Stage 3.**
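For intuition about the objective shared by all three stages, the sketch below illustrates the group-relative advantage that GRPO optimises. The group size of 5 matches the training setup described under "Intended use & limitations"; the 0/1 rewards are a simplified stand-in for a verifiable-reward checker, not the actual EasyR1 implementation.

```python
import numpy as np

def group_relative_advantages(rewards, eps=1e-6):
    """GRPO-style advantage: normalise each sampled response's reward
    against the mean/std of its own group of rollouts for the same prompt."""
    rewards = np.asarray(rewards, dtype=np.float64)
    return (rewards - rewards.mean()) / (rewards.std() + eps)

# One prompt, group size 5 (as in training): verifiable 0/1 rewards,
# e.g. exact match on the \boxed{} answer.
rewards = [1.0, 0.0, 0.0, 1.0, 0.0]
print(group_relative_advantages(rewards))
```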
InternVL3 needs damped optimisation in Stage 2 to avoid entropy explosion (`lr=3e-7`, `kl=5e-2`, `clip=0.15`, `max_grad_norm=0.5`); these settings are baked into the launch scripts.
All three stages share **one** system / format prompt; see [Inference](#inference) below.
Detailed launch scripts: [`training/examples/internvl3_8b/`](https://github.com/UCSC-VLAA/VLM-CapCurriculum/tree/main/training/examples/internvl3_8b) in the code repo.
## Inference
The model expects the unified system prompt that it was trained against:
```
You FIRST think about the reasoning process as an internal monologue and then
provide the final answer. The reasoning process MUST BE enclosed within
<think> </think> tags. The final answer MUST BE put in \boxed{}.
i.e. <think> reasoning here </think> \boxed{final answer here}
```
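For a quick local test with Transformers, the sketch below loads the checkpoint through the generic image-text-to-text interface, prepends the system prompt above, and pulls the answer out of `\boxed{}`. The image URL and question are placeholders, and the message structure follows the standard Transformers chat-template API for this pipeline; adjust dtype/device handling to your hardware.

```python
import re
from transformers import AutoProcessor, AutoModelForImageTextToText

MODEL_ID = "UCSC-VLAA/VLM-CapCurriculum-InternVL3-8B-Staged"
SYSTEM_PROMPT = (
    "You FIRST think about the reasoning process as an internal monologue and then "
    "provide the final answer. The reasoning process MUST BE enclosed within "
    "<think> </think> tags. The final answer MUST BE put in \\boxed{}. "
    "i.e. <think> reasoning here </think> \\boxed{final answer here}"
)

processor = AutoProcessor.from_pretrained(MODEL_ID)
model = AutoModelForImageTextToText.from_pretrained(MODEL_ID, device_map="auto")

# Placeholder image and question; the system prompt matches training.
messages = [
    {"role": "system", "content": [{"type": "text", "text": SYSTEM_PROMPT}]},
    {
        "role": "user",
        "content": [
            {"type": "image", "url": "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/p-blog/candy.JPG"},
            {"type": "text", "text": "What animal is on the candy?"},
        ],
    },
]

inputs = processor.apply_chat_template(
    messages, add_generation_prompt=True, tokenize=True,
    return_dict=True, return_tensors="pt",
).to(model.device)

outputs = model.generate(**inputs, max_new_tokens=512)
text = processor.decode(outputs[0][inputs["input_ids"].shape[-1]:], skip_special_tokens=True)

# Extract the final answer from \boxed{...}; falls back to the raw text
# (nested braces inside the answer are not handled by this simple regex).
match = re.search(r"\\boxed\{(.*?)\}", text, flags=re.DOTALL)
print(match.group(1) if match else text)
```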
Quick start with LMDeploy (the InternVL family is served via LMDeploy in our setup):
```bash
lmdeploy serve api_server UCSC-VLAA/VLM-CapCurriculum-InternVL3-8B-Staged \
--server-port 23342 --tp 4
```
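The LMDeploy API server exposes an OpenAI-compatible endpoint, so any OpenAI client can query it. Below is a minimal client sketch against the command above; the port matches that command, while the served model name, image URL, and question are assumptions for illustration (query `/v1/models` on the server to confirm the name it registers).

```python
from openai import OpenAI

# Talk to the LMDeploy server started above (OpenAI-compatible API).
client = OpenAI(base_url="http://localhost:23342/v1", api_key="none")

resp = client.chat.completions.create(
    # Served model name typically mirrors the model path; verify via /v1/models.
    model="UCSC-VLAA/VLM-CapCurriculum-InternVL3-8B-Staged",
    messages=[
        {
            "role": "system",
            "content": (
                "You FIRST think about the reasoning process as an internal monologue and then "
                "provide the final answer. The reasoning process MUST BE enclosed within "
                "<think> </think> tags. The final answer MUST BE put in \\boxed{}. "
                "i.e. <think> reasoning here </think> \\boxed{final answer here}"
            ),
        },
        {
            "role": "user",
            "content": [
                # Placeholder image URL and question.
                {"type": "image_url", "image_url": {"url": "https://example.com/figure.png"}},
                {"type": "text", "text": "How many triangles are in the figure?"},
            ],
        },
    ],
    max_tokens=1024,
)
print(resp.choices[0].message.content)
```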
For VLMEvalKit-style benchmark eval, plug it in via the `InternVL3_8B_Staged` alias defined in [`evaluation/configs/models.py`](https://github.com/UCSC-VLAA/VLM-CapCurriculum/blob/main/evaluation/configs/models.py).
## Intended use & limitations
Intended for research on vision-language reasoning, post-training methodology, and capability-dimension curriculum learning. Inherits the safety / bias profile of the underlying InternVL3-8B backbone; we have not added additional alignment fine-tuning. Not recommended for high-stakes deployments without further evaluation.
Trained at the 8B parameter scale with 4096-token max prompt length and a fixed group size of 5. Behaviour at much longer contexts or substantially different prompt formats has not been characterised.
## License & citation
Released under **Apache-2.0**, matching the upstream backbone. If you use this model, please cite:
```bibtex
@inproceedings{vlmcapcurriculum2026,
title = {From Seeing to Thinking: Decoupling Perception and Reasoning Improves Post-Training of Vision-Language Models},
author = {Juncheng Wu and Hardy Chen and Haoqin Tu and Xianfeng Tang and Freda Shi and Hui Liu and Hanqing Lu and Cihang Xie and Yuyin Zhou},
booktitle = {Proceedings of the International Conference on Machine Learning (ICML)},
year = {2026}
}
```
## Acknowledgements
Built on top of [EasyR1](https://github.com/hiyouga/EasyR1), [VLMEvalKit](https://github.com/open-compass/VLMEvalKit), and the [InternVL](https://huggingface.co/OpenGVLab) family.