Instructions for using UCSC-VLAA/VLM-CapCurriculum-InternVL3.5-8B-Staged with libraries and local apps.
- Libraries
- Transformers
How to use UCSC-VLAA/VLM-CapCurriculum-InternVL3.5-8B-Staged with Transformers:
```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("image-text-to-text", model="UCSC-VLAA/VLM-CapCurriculum-InternVL3.5-8B-Staged")
messages = [
    {
        "role": "user",
        "content": [
            {"type": "image", "url": "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/p-blog/candy.JPG"},
            {"type": "text", "text": "What animal is on the candy?"},
        ],
    },
]
pipe(text=messages)
```

```python
# Load the model directly
from transformers import AutoProcessor, AutoModelForImageTextToText

processor = AutoProcessor.from_pretrained("UCSC-VLAA/VLM-CapCurriculum-InternVL3.5-8B-Staged")
model = AutoModelForImageTextToText.from_pretrained("UCSC-VLAA/VLM-CapCurriculum-InternVL3.5-8B-Staged")
messages = [
    {
        "role": "user",
        "content": [
            {"type": "image", "url": "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/p-blog/candy.JPG"},
            {"type": "text", "text": "What animal is on the candy?"},
        ],
    },
]
inputs = processor.apply_chat_template(
    messages,
    add_generation_prompt=True,
    tokenize=True,
    return_dict=True,
    return_tensors="pt",
).to(model.device)

outputs = model.generate(**inputs, max_new_tokens=40)
print(processor.decode(outputs[0][inputs["input_ids"].shape[-1]:]))
```
- Local Apps
- vLLM
How to use UCSC-VLAA/VLM-CapCurriculum-InternVL3.5-8B-Staged with vLLM:
Install from pip and serve the model:
```bash
# Install vLLM from pip:
pip install vllm

# Start the vLLM server:
vllm serve "UCSC-VLAA/VLM-CapCurriculum-InternVL3.5-8B-Staged"

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:8000/v1/chat/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "UCSC-VLAA/VLM-CapCurriculum-InternVL3.5-8B-Staged",
    "messages": [
      {
        "role": "user",
        "content": [
          {"type": "text", "text": "Describe this image in one sentence."},
          {"type": "image_url", "image_url": {"url": "https://cdn.britannica.com/61/93061-050-99147DCE/Statue-of-Liberty-Island-New-York-Bay.jpg"}}
        ]
      }
    ]
  }'
```
- SGLang
How to use UCSC-VLAA/VLM-CapCurriculum-InternVL3.5-8B-Staged with SGLang:
Install from pip and serve the model:
```bash
# Install SGLang from pip:
pip install sglang

# Start the SGLang server:
python3 -m sglang.launch_server \
  --model-path "UCSC-VLAA/VLM-CapCurriculum-InternVL3.5-8B-Staged" \
  --host 0.0.0.0 \
  --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/chat/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "UCSC-VLAA/VLM-CapCurriculum-InternVL3.5-8B-Staged",
    "messages": [
      {
        "role": "user",
        "content": [
          {"type": "text", "text": "Describe this image in one sentence."},
          {"type": "image_url", "image_url": {"url": "https://cdn.britannica.com/61/93061-050-99147DCE/Statue-of-Liberty-Island-New-York-Bay.jpg"}}
        ]
      }
    ]
  }'
```

Use Docker images:
```bash
docker run --gpus all \
  --shm-size 32g \
  -p 30000:30000 \
  -v ~/.cache/huggingface:/root/.cache/huggingface \
  --env "HF_TOKEN=<secret>" \
  --ipc=host \
  lmsysorg/sglang:latest \
  python3 -m sglang.launch_server \
    --model-path "UCSC-VLAA/VLM-CapCurriculum-InternVL3.5-8B-Staged" \
    --host 0.0.0.0 \
    --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/chat/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "UCSC-VLAA/VLM-CapCurriculum-InternVL3.5-8B-Staged",
    "messages": [
      {
        "role": "user",
        "content": [
          {"type": "text", "text": "Describe this image in one sentence."},
          {"type": "image_url", "image_url": {"url": "https://cdn.britannica.com/61/93061-050-99147DCE/Statue-of-Liberty-Island-New-York-Bay.jpg"}}
        ]
      }
    ]
  }'
```

- Docker Model Runner
How to use UCSC-VLAA/VLM-CapCurriculum-InternVL3.5-8B-Staged with Docker Model Runner:
```bash
docker model run hf.co/UCSC-VLAA/VLM-CapCurriculum-InternVL3.5-8B-Staged
```
---
license: apache-2.0
language:
- en
library_name: transformers
pipeline_tag: image-text-to-text
base_model: OpenGVLab/InternVL3_5-8B-HF
tags:
- vision-language-model
- vlm
- reasoning
- perception
- rlvr
- grpo
- icml-2026
---
# VLM-CapCurriculum-InternVL3.5-8B-Staged

A vision-language model post-trained from **OpenGVLab/InternVL3_5-8B-HF** with the staged, capability-dimension curriculum from *"From Seeing to Thinking: Decoupling Perception and Reasoning Improves Post-Training of Vision-Language Models"* (ICML 2026). This release is the **Stage-3 step-279** checkpoint.

> **TL;DR.** Visual perception, not reasoning length, is the dominant bottleneck for visual reasoning in VLMs. We fix this by post-training along a **capability axis** (perception → textual reasoning → visual reasoning) rather than mixing all data together.
| Resource | Link |
|---|---|
| Paper | <TODO_PAPER_URL> |
| Code | https://github.com/UCSC-VLAA/VLM-CapCurriculum |
| Project page | https://ucsc-vlaa.github.io/VLM-CapCurriculum |
| Collection (model + data + eval) | https://huggingface.co/collections/UCSC-VLAA/vlm-capcurriculum-from-seeing-to-thinking-icml-2026-6a07691f944148ccb2b183b8 |
## Headline numbers (extended benchmark suite, AVG over 10 benchmarks)

| Setting | Extended AVG |
|---|:---:|
| InternVL3.5-8B (base) | 37.33 |
| InternVL3.5-8B + Merged training | 52.76 |
| **InternVL3.5-8B + Staged (this model)** | **53.71** |

See Appendix Table 9 of the paper for the full per-benchmark breakdown.
## How it was trained

Three RLVR stages with GRPO (on top of [EasyR1](https://github.com/hiyouga/EasyR1)):

1. **Stage 1 – visual perception** on `UCSC-VLAA/VLM-CapCurriculum-Perception` (synthesised + filtered DOCCI MCQs).
2. **Stage 2 – textual reasoning** on `UCSC-VLAA/VLM-CapCurriculum-TextReasoning` (ORZ-Math-13k).
3. **Stage 3 – visual reasoning** on `UCSC-VLAA/VLM-CapCurriculum-VisualReasoning` (a CLEVR-Math + GeoQA170K + Math PUMA + DocVQA + ArxivQA mix). **Released checkpoint: step 279 of Stage 3.**

All three stages share **one** system / format prompt; see [Inference](#inference) below.

Detailed launch scripts: [`training/examples/internvl3_5_8b/`](https://github.com/UCSC-VLAA/VLM-CapCurriculum/tree/main/training/examples/internvl3_5_8b) in the code repo.
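Each stage optimises the policy with GRPO: for every prompt, the model samples a group of responses, scores them with a verifiable reward, and normalises the rewards within the group to obtain per-response advantages. Below is a minimal, self-contained sketch of that normalisation; it is illustrative only (the reward values are made up, and the real training code is the EasyR1-based scripts linked above).

```python
# Minimal sketch of GRPO's group-relative advantage (illustrative only).
import numpy as np

def group_relative_advantages(rewards: np.ndarray, eps: float = 1e-6) -> np.ndarray:
    """Normalise rewards within one group of responses to the same prompt."""
    return (rewards - rewards.mean()) / (rewards.std() + eps)

# One prompt, group size 5 (the group size reported on this card).
# With verifiable rewards, 1.0 = the checker accepted the \boxed{} answer.
rewards = np.array([1.0, 0.0, 1.0, 0.0, 0.0])
print(group_relative_advantages(rewards))
# Responses scoring above the group mean get positive advantages; the
# policy gradient then up-weights their tokens and down-weights the rest.
```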
## Inference

The model expects the unified system prompt that it was trained against:

```
You FIRST think about the reasoning process as an internal monologue and then
provide the final answer. The reasoning process MUST BE enclosed within
<think> </think> tags. The final answer MUST BE put in \boxed{}.
i.e. <think> reasoning here </think> \boxed{final answer here}
```
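For a quick local check, a minimal Transformers sketch that passes this system prompt through the chat template and pulls the final answer out of `\boxed{...}` could look like the following. The example image, question, `max_new_tokens` budget, and regex parser are illustrative choices, not from the official repo:

```python
# Minimal sketch: unified system prompt + \boxed{} answer extraction.
# Assumes the Transformers setup shown earlier on this card.
import re
from transformers import AutoProcessor, AutoModelForImageTextToText

MODEL_ID = "UCSC-VLAA/VLM-CapCurriculum-InternVL3.5-8B-Staged"
SYSTEM_PROMPT = (
    "You FIRST think about the reasoning process as an internal monologue and then "
    "provide the final answer. The reasoning process MUST BE enclosed within "
    "<think> </think> tags. The final answer MUST BE put in \\boxed{}. "
    "i.e. <think> reasoning here </think> \\boxed{final answer here}"
)

processor = AutoProcessor.from_pretrained(MODEL_ID)
model = AutoModelForImageTextToText.from_pretrained(MODEL_ID, device_map="auto")

messages = [
    {"role": "system", "content": [{"type": "text", "text": SYSTEM_PROMPT}]},
    {
        "role": "user",
        "content": [
            {"type": "image", "url": "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/p-blog/candy.JPG"},
            {"type": "text", "text": "What animal is on the candy?"},
        ],
    },
]
inputs = processor.apply_chat_template(
    messages, add_generation_prompt=True, tokenize=True,
    return_dict=True, return_tensors="pt",
).to(model.device)
outputs = model.generate(**inputs, max_new_tokens=512)
text = processor.decode(outputs[0][inputs["input_ids"].shape[-1]:], skip_special_tokens=True)

# Grab the final answer; fall back to the raw decoded text if no match.
match = re.search(r"\\boxed\{([^{}]*)\}", text)
print(match.group(1) if match else text)
```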
Quick start with LMDeploy:

```bash
lmdeploy serve api_server UCSC-VLAA/VLM-CapCurriculum-InternVL3.5-8B-Staged \
  --server-port 23343 --tp 4
```
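The LMDeploy server exposes an OpenAI-compatible API, so a standard client can query it. A minimal sketch, assuming the `openai` Python package (the image URL is the same example used in the serving snippets above, and the placeholder `api_key` is only needed to satisfy the client):

```python
# Minimal client sketch against the LMDeploy server started above.
from openai import OpenAI

SYSTEM_PROMPT = (
    "You FIRST think about the reasoning process as an internal monologue and then "
    "provide the final answer. The reasoning process MUST BE enclosed within "
    "<think> </think> tags. The final answer MUST BE put in \\boxed{}. "
    "i.e. <think> reasoning here </think> \\boxed{final answer here}"
)

client = OpenAI(base_url="http://localhost:23343/v1", api_key="not-needed")
response = client.chat.completions.create(
    model="UCSC-VLAA/VLM-CapCurriculum-InternVL3.5-8B-Staged",
    messages=[
        {"role": "system", "content": SYSTEM_PROMPT},
        {
            "role": "user",
            "content": [
                {"type": "text", "text": "Describe this image in one sentence."},
                {"type": "image_url", "image_url": {"url": "https://cdn.britannica.com/61/93061-050-99147DCE/Statue-of-Liberty-Island-New-York-Bay.jpg"}},
            ],
        },
    ],
)
print(response.choices[0].message.content)
```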
For VLMEvalKit-style benchmark eval, plug it in via the `InternVL3_5_8B_Staged` alias defined in [`evaluation/configs/models.py`](https://github.com/UCSC-VLAA/VLM-CapCurriculum/blob/main/evaluation/configs/models.py).
## Intended use & limitations

This model is intended for research on vision-language reasoning, post-training methodology, and capability-dimension curriculum learning. It inherits the safety / bias profile of the underlying InternVL3.5-8B backbone; we have not added additional alignment fine-tuning. It is not recommended for high-stakes deployments without further evaluation.

The model was trained at the 8B parameter scale with a 4096-token max prompt length and a fixed GRPO group size of 5. Behaviour at much longer contexts or with substantially different prompt formats has not been characterised.
## License & citation

Released under **Apache-2.0**, matching the upstream backbone. If you use this model, please cite:

```bibtex
@inproceedings{vlmcapcurriculum2026,
  title     = {From Seeing to Thinking: Decoupling Perception and Reasoning Improves Post-Training of Vision-Language Models},
  author    = {Juncheng Wu and Hardy Chen and Haoqin Tu and Xianfeng Tang and Freda Shi and Hui Liu and Hanqing Lu and Cihang Xie and Yuyin Zhou},
  booktitle = {Proceedings of the International Conference on Machine Learning (ICML)},
  year      = {2026}
}
```

## Acknowledgements

Built on top of [EasyR1](https://github.com/hiyouga/EasyR1), [VLMEvalKit](https://github.com/open-compass/VLMEvalKit), and the [InternVL](https://huggingface.co/OpenGVLab) family.