---
license: apache-2.0
language:
  - en
pretty_name: Molmo2-ER RefSpatial
tags:
  - embodied-reasoning
  - molmo2
  - molmo2-er
  - vlm-training-data
---

# Molmo2-ER · JingkunAn/RefSpatial

A 2.5M-sample spatial-referring corpus (web + indoor + simulated) covering 31 spatial relations.

This is a re-hosted, loader-ready subset of the upstream dataset, used to train allenai/Molmo2-ER-4B. Files mirror the upstream layout; nothing in the data has been modified.
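
To fetch the full repository (including the multipart archives) before extracting, the `huggingface-cli download` command from `huggingface_hub` is one option; a minimal sketch follows, where `<this-repo-id>` is a placeholder for this dataset's actual repo id:

```bash
# Download the whole dataset repo to a local directory.
# <this-repo-id> is a placeholder -- substitute this dataset's repo id.
pip install -U huggingface_hub
huggingface-cli download <this-repo-id> --repo-type dataset --local-dir refspatial
cd refspatial
```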

## Upstream source

- Original dataset: JingkunAn/RefSpatial
- Paper: RoboRefer: Towards Spatial Referring with Reasoning in Vision-Language Models for Robotics (arXiv:2506.04308)
- License: apache-2.0 (inherited from upstream)

If you use this data, please cite the original authors:

```bibtex
@misc{zhou2026roboreferspatialreferringreasoning,
  title={RoboRefer: Towards Spatial Referring with Reasoning in Vision-Language Models for Robotics},
  author={Enshen Zhou and Jingkun An and Cheng Chi and others},
  year={2026},
  eprint={2506.04308},
  archivePrefix={arXiv}
}
```

## Extracting before training

This release ships compressed archives, some split into multiple parts. Extract them in place before pointing `SPATIAL_DATA_HOME` at this directory:

```bash
# Reassemble the multipart archives, then extract every tarball in place.
cat 2D/depth/depth.tar.gz.part_* > 2D/depth/depth.tar.gz
cat 2D/image/image.tar.gz.part_* > 2D/image/image.tar.gz
cat 3D/image_visual_choice/image_visual_choice.tar.gz.part_* > 3D/image_visual_choice/image_visual_choice.tar.gz
find . -name '*.tar.gz' -execdir tar -xzf {} \;
```
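
Once extraction succeeds, only the extracted directories are needed for training, so the tarballs and their parts can optionally be deleted to reclaim disk space; a sketch:

```bash
# Optional cleanup after a successful extraction: remove the reassembled
# tarballs and the .part_* pieces, keeping only the extracted directories.
find . -name '*.tar.gz' -delete
find . -name '*.tar.gz.part_*' -delete
```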

## Usage in Molmo2-ER

See the allenai/molmo2 repository for the data loader and training recipe. The relevant loader class for this dataset lives in `olmo/data/spatial_datasets.py`.
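
As a minimal sketch, the environment variable from the extraction step above is pointed at the extracted dataset root before launching training; the launch command itself is defined by the molmo2 recipe and is not reproduced here:

```bash
# Point the Molmo2-ER data loaders at the extracted dataset root.
# (See the allenai/molmo2 README for the actual training invocation.)
export SPATIAL_DATA_HOME=/path/to/refspatial
```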