---
license: apache-2.0
task_categories:
- video-text-to-text
tags:
- 3d
- spatial-intelligence
- vlm
- visual-grounding
---
# SpaceSpan Dataset

SpaceSpan is a large-scale dataset curated for training and evaluating 3D vision-language models (VLMs), introduced in the paper *Proxy3D: Efficient 3D Representations for Vision-Language Models via Semantic Clustering and Alignment*.
Project Page | GitHub Repository
## Dataset Description

The SpaceSpan dataset is designed to help VLMs develop spatial intelligence through 3D proxy representations. It unifies heterogeneous visual information under a single data format, enabling multi-stage training for skills ranging from simple image-text alignment to complex 3D spatial reasoning, 3D visual question answering (VQA), and visual grounding.
The dataset includes approximately 318K samples used across four progressive training stages.
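To explore the data locally, here is a minimal sketch using `huggingface_hub`; the `repo_id` below is a placeholder, since this card does not state the dataset's Hub path:

```python
from huggingface_hub import snapshot_download

# Placeholder repo id -- replace with the actual Hub path of this dataset.
local_dir = snapshot_download(repo_id="<org>/SpaceSpan", repo_type="dataset")
print("Downloaded to:", local_dir)
```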
## Dataset Structure

The repository typically includes the following components used in the Proxy3D training pipeline:

- **Training Instructions**: JSON files for stages 1 through 4 (e.g., `stage_4_train_318K.json`).
- **Embeddings**: Pre-computed vision embeddings for efficiency.
- **Geometric Data**: Pointmaps and camera poses for 3D reconstruction and scene representation.
For evaluation annotations, please refer to the Proxy3D-annotations repository.
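As a minimal sketch of inspecting these components once downloaded: only `stage_4_train_318K.json` is named in this card, so every other path and the record schema below are assumptions, not documented names.

```python
import json
from pathlib import Path

import numpy as np

root = Path(local_dir)  # directory from the snapshot_download sketch above

# Load the stage-4 instruction file named in this card and inspect one
# record to discover the actual schema.
with open(root / "stage_4_train_318K.json") as f:
    samples = json.load(f)
print(f"Loaded {len(samples)} training samples")
print(json.dumps(samples[0], indent=2)[:500])

# Pointmaps, poses, and embeddings are typically dense arrays; the file
# name below is hypothetical -- check the repository tree for real paths.
pointmap = np.load(root / "pointmaps" / "scene0000_00.npy")  # hypothetical path
print("Pointmap shape:", pointmap.shape)
```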
## Citation
If you use this dataset in your research, please cite the following paper:
```bibtex
@article{proxy3d2026,
  title={Proxy3D: Efficient 3D Representations for Vision-Language Models via Semantic Clustering and Alignment},
  author={Jiang, Jerry and Sun, Haowen and Gudovskiy, Denis and Nakata, Yohei and Okuno, Tomoyuki and Keutzer, Kurt and Zheng, Wenzhao},
  journal={arXiv preprint arXiv:2605.08064},
  year={2026}
}
```