---
license: cc-by-4.0
language:
  - en
pretty_name: Molmo2-ER RoboVQA
tags:
  - embodied-reasoning
  - molmo2
  - molmo2-er
  - vlm-training-data
---

# Molmo2-ER · Google DeepMind RoboVQA

Human-annotated long-horizon robotics video QA across three embodiments.

This is a re-hosted, loader-ready subset of the upstream dataset, used to train allenai/Molmo2-ER-4B. Files mirror the upstream layout; nothing in the data has been modified.
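
To pull the archives locally, an invocation along these lines should work (the repo id and target directory below are assumptions; substitute the dataset id shown on this page):

```bash
# Repo id is an assumption -- use the id of this dataset page.
huggingface-cli download allenai/Molmo2-ER-RoboVQA \
  --repo-type dataset \
  --local-dir ./Molmo2-ER-RoboVQA
```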

## Upstream source

If you use this data, please cite the original authors:

```bibtex
@misc{sermanet2023robovqamultimodallonghorizonreasoning,
  title={RoboVQA: Multimodal Long-Horizon Reasoning for Robotics},
  author={Pierre Sermanet and Tianli Ding and Jeffrey Zhao and others},
  year={2023},
  eprint={2311.00899},
  archivePrefix={arXiv}
}
```

## Extracting before training

This release ships split tar archives. Extract them in place before pointing `SPATIAL_DATA_HOME` at this directory:

```bash
# Reassemble the split archive, then unpack the clips in place.
cat clips_extracted.tar.* > clips_extracted.tar && tar -xf clips_extracted.tar
```
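
Once extraction finishes, point the variable at this directory. A minimal sketch (the path is a placeholder for wherever you downloaded the dataset):

```bash
# Placeholder path -- use the directory containing the extracted clips.
export SPATIAL_DATA_HOME=/path/to/Molmo2-ER-RoboVQA
```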

## Usage in Molmo2-ER

See the allenai/molmo2 repository for the data loader and training recipe. The relevant loader class for this dataset lives in `olmo/data/spatial_datasets.py`.
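
As a rough starting point, the training code can be fetched and the loader located as sketched below; the GitHub URL is an assumption based on the repository name, and the actual training entry point is documented in the molmo2 README:

```bash
# GitHub location is an assumption mirroring the repo name allenai/molmo2.
git clone https://github.com/allenai/molmo2.git
cd molmo2
# The loader for this dataset is defined in:
ls olmo/data/spatial_datasets.py
```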