How to use with vLLM
Install vLLM from pip and serve the model
# Install vLLM from pip:
pip install vllm
# Start the vLLM server:
vllm serve "allenai/Molmo2-ER"
# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:8000/v1/chat/completions" \
	-H "Content-Type: application/json" \
	--data '{
		"model": "allenai/Molmo2-ER",
		"messages": [
			{
				"role": "user",
				"content": [
					{
						"type": "text",
						"text": "Describe this image in one sentence."
					},
					{
						"type": "image_url",
						"image_url": {
							"url": "https://cdn.britannica.com/61/93061-050-99147DCE/Statue-of-Liberty-Island-New-York-Bay.jpg"
						}
					}
				]
			}
		]
	}'
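
The same endpoint can also be called from Python with the OpenAI client, since vLLM exposes an OpenAI-compatible API. The sketch below assumes the default server address (http://localhost:8000/v1) and a placeholder API key, which vLLM ignores unless the server was started with --api-key.

# Call the vLLM server from Python using the OpenAI client:
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")

response = client.chat.completions.create(
    model="allenai/Molmo2-ER",
    messages=[
        {
            "role": "user",
            "content": [
                {"type": "text", "text": "Describe this image in one sentence."},
                {
                    "type": "image_url",
                    "image_url": {
                        "url": "https://cdn.britannica.com/61/93061-050-99147DCE/Statue-of-Liberty-Island-New-York-Bay.jpg"
                    },
                },
            ],
        }
    ],
)
print(response.choices[0].message.content)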
Use Docker
docker model run hf.co/allenai/Molmo2-ER
Molmo2-ER

Molmo2-ER (Embodied Reasoning) is a 4B vision–language model specialized for the embodied perception skills that downstream action models depend on: scene understanding, pixel-accurate pointing, multi-image and egocentric–exocentric correspondence, and video temporal reasoning.

It is built on top of Molmo2 (Qwen3-4B backbone + SigLIP2 vision encoder) and serves as the vision–language backbone of the MolmoAct2 action reasoning model.
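
One of the listed skills, pixel-accurate pointing, is returned as markup inside the model's text output. The helper below is a minimal sketch for turning such output into pixel coordinates, assuming the <point x="..." y="...">label</point> format with percentage coordinates used by earlier Molmo releases; the exact output format for Molmo2-ER is defined in the molmo2 repository linked under Usage.

# Hypothetical helper: parse <point x="..." y="...">label</point> markup into pixel coordinates.
# Assumes coordinates are percentages of image width/height (0-100), as in earlier Molmo models.
import re

POINT_RE = re.compile(r'<point\s+x="([\d.]+)"\s+y="([\d.]+)"[^>]*>(.*?)</point>')

def parse_points(text, image_width, image_height):
    """Return a list of (label, x_px, y_px) tuples extracted from model output."""
    return [
        (label.strip(), float(x) / 100 * image_width, float(y) / 100 * image_height)
        for x, y, label in POINT_RE.findall(text)
    ]

# Example: parse_points('<point x="61.5" y="40.6">statue</point>', 1600, 1100)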

Highlights

  • Outperforms every open-weight baseline and the strongest closed-source models (including Gemini Robot-ER 1.5 Thinking and GPT-5) on 9 of 13 established embodied reasoning benchmarks (Point-Bench, RefSpatial, BLINK, CV-Bench, ERQA, EmbSpatial, MindCube, SAT, VSI-Bench).
  • Reaches an overall average of 63.8%, a 17-point improvement over the Molmo2 starting point.

Training

Molmo2-ER is trained from the released Molmo2 checkpoint with a two-stage specialize-then-rehearse recipe:

| Stage | Steps | Mixture | Seq. len. | Per-device batch size |
|---|---|---|---|---|
| 1. Embodied specialization | 20K | 3.3M-sample embodied corpus (SAT, RoboPoint, RefSpatial, VST-P, VSI-590K, SIMS-VSI, RoboVQA, SenseNova-SI, CLEVR, GRiD-3D) + 8% Tulu-3 | 4,200 | 4 |
| 2. Joint refinement | 1.5K | 50% embodied / 42% Molmo2 general / 8% Tulu-3 | 16,384 | 1 |

All other hyperparameters follow Molmo2.
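
As a rough illustration of how the mixtures above compose, the stage-2 sampling weights can be written as a simple config; the field names below are illustrative only and do not come from the molmo2 training code.

# Illustrative sketch of the stage-2 (joint refinement) data-mixture weights.
# Names are hypothetical; see the molmo2 repository for the actual training configs.
STAGE2_MIXTURE = {
    "embodied_corpus": 0.50,   # same embodied data used in stage 1
    "molmo2_general": 0.42,    # Molmo2 general-purpose mixture
    "tulu3_text": 0.08,        # text-only instruction-following data
}
assert abs(sum(STAGE2_MIXTURE.values()) - 1.0) < 1e-9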

Usage

See https://github.com/allenai/molmo2 for inference, evaluation, and training code.
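
To fetch the weights locally for use with that code, a standard Hugging Face Hub download is enough; this is the generic Hub API rather than anything Molmo-specific.

# Download the checkpoint from the Hugging Face Hub (generic Hub API, not Molmo-specific):
from huggingface_hub import snapshot_download

local_dir = snapshot_download("allenai/Molmo2-ER")
print(local_dir)  # path to the downloaded config and safetensors files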

License

Apache-2.0.

Citation

@misc{fang2026molmoact2actionreasoningmodels,
      title={MolmoAct2: Action Reasoning Models for Real-world Deployment}, 
      author={Haoquan Fang and Jiafei Duan and Donovan Clay and Sam Wang and Shuo Liu and Weikai Huang and Xiang Fan and Wei-Chuan Tsai and Shirui Chen and Yi Ru Wang and Shanli Xing and Jaemin Cho and Jae Sung Park and Ainaz Eftekhar and Peter Sushko and Karen Farley and Angad Wadhwa and Cole Harrison and Winson Han and Ying-Chun Lee and Eli VanderBilt and Rose Hendrix and Suveen Ellawela and Lucas Ngoo and Joyce Chai and Zhongzheng Ren and Ali Farhadi and Dieter Fox and Ranjay Krishna},
      year={2026},
      eprint={2605.02881},
      archivePrefix={arXiv},
      primaryClass={cs.RO},
      url={https://arxiv.org/abs/2605.02881}, 
}