---
license: apache-2.0
task_categories:
- video-text-to-text
tags:
- 3D
- vision-language
- spatial-intelligence
---

# SpaceSpan Dataset

SpaceSpan is a large-scale dataset curated for aligning 3D proxy representations with Vision-Language Models (VLMs), introduced in the paper [Proxy3D: Efficient 3D Representations for Vision-Language Models via Semantic Clustering and Alignment](https://huggingface.co/papers/2605.08064).

The dataset consolidates heterogeneous visual information into a unified format to support multi-stage training aimed at developing spatial intelligence. It enables models to progress from simple image-text alignment to complex 3D reasoning tasks such as 3D visual question answering (VQA) and visual grounding.

[**Project Page**](https://wzzheng.net/Proxy3D) | [**GitHub**](https://github.com/Spacedreamer2384/Proxy3D) | [**Paper**](https://huggingface.co/papers/2605.08064)

## Dataset Description

The SpaceSpan dataset (specifically the SpaceSpan-318K version) supports four progressive training stages:
- **Stage 1**: Initial spatial alignment.
- **Stages 2-3**: Intermediate spatial reasoning development.
- **Stage 4**: Full-scale 3D reasoning.
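
The stage annotation files ship with the dataset repository on the Hugging Face Hub. As a minimal sketch (the exact `repo_id` is not stated on this card, so the one below is a placeholder), the full snapshot can be pulled with `huggingface_hub`:

```python
from huggingface_hub import snapshot_download

# Download the full dataset snapshot into a local "data/" directory.
# NOTE: the repo_id below is a placeholder; replace it with the actual
# SpaceSpan dataset path on the Hugging Face Hub.
local_dir = snapshot_download(
    repo_id="<org>/SpaceSpan",  # placeholder
    repo_type="dataset",
    local_dir="data",
)
print(f"Dataset downloaded to {local_dir}")
```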

### Directory Structure

Based on the official repository, the dataset is organized as follows:

```bash 
data/               # Training and inference data
β”œβ”€β”€ icon_image_embeds_qwen25.pt
β”œβ”€β”€ number_image_embeds_qwen25.pt
β”œβ”€β”€ stage_1_train.json
β”œβ”€β”€ stage_2_train.json
β”œβ”€β”€ stage_3_train.json
β”œβ”€β”€ stage_4_train_318K.json
β”œβ”€β”€ pointmaps_wo_markers
β”œβ”€β”€ poses
└── ... 
```
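
The individual files can be inspected with standard tooling. The snippet below is a minimal sketch that assumes the `.pt` files are ordinary torch-serialized tensors and the stage JSON files are top-level lists of training samples; neither layout is documented on this card, so inspect the records before building a pipeline around them:

```python
import json
import torch

# Marker-image embeddings used during training (assumed to be plain torch tensors).
icon_embeds = torch.load("data/icon_image_embeds_qwen25.pt", map_location="cpu")
number_embeds = torch.load("data/number_image_embeds_qwen25.pt", map_location="cpu")

# Stage-1 alignment annotations (assumed to be a JSON list of samples).
with open("data/stage_1_train.json") as f:
    stage_1 = json.load(f)

print(f"stage 1 samples: {len(stage_1)}")
print(stage_1[0])  # per-sample schema is not documented here; check before use
```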

## Citation

If you find this dataset useful for your research, please cite the following paper:

```bibtex
@article{proxy3d2026,
  title={Proxy3D: Efficient 3D Representations for Vision-Language Models via Semantic Clustering and Alignment},
  author={Jiang, Jerry and Sun, Haowen and Gudovskiy, Denis and Nakata, Yohei and Okuno, Tomoyuki and Keutzer, Kurt and Zheng, Wenzhao},
  journal={arXiv preprint arXiv:2605.08064},
  year={2026}
}
```