Add dataset card and link to paper

#1
by nielsr (HF Staff) - opened
Files changed (1)
  1. README.md +44 -3
README.md CHANGED
@@ -1,3 +1,44 @@
- ---
- license: apache-2.0
- ---
+ ---
+ license: apache-2.0
+ task_categories:
+ - video-text-to-text
+ tags:
+ - 3d
+ - spatial-intelligence
+ - vlm
+ - visual-grounding
+ ---
+
+ # SpaceSpan Dataset
+
+ SpaceSpan is a large-scale dataset curated for training and evaluating 3D vision-language models (VLMs), introduced in the paper [Proxy3D: Efficient 3D Representations for Vision-Language Models via Semantic Clustering and Alignment](https://huggingface.co/papers/2605.08064).
+
+ [**Project Page**](https://wzzheng.net/Proxy3D) | [**GitHub Repository**](https://github.com/Spacedreamer2384/Proxy3D)
+
+ ## Dataset Description
+
+ The SpaceSpan dataset is designed to help VLMs develop spatial intelligence through 3D proxy representations. It combines heterogeneous visual information in a unified data format, enabling multi-stage training for skills ranging from simple image-text alignment to complex 3D spatial reasoning, 3D visual question answering (VQA), and visual grounding.
+
+ The dataset includes approximately **318K samples** used across four progressive training stages.
+
+ ## Dataset Structure
+
+ The repository typically includes the following components used in the Proxy3D training pipeline (see the loading sketch below):
+ - **Training Instructions**: JSON files for stages 1 through 4 (e.g., `stage_4_train_318K.json`).
+ - **Embeddings**: Pre-computed vision embeddings for efficiency.
+ - **Geometric Data**: Pointmaps and camera poses for 3D reconstruction and scene representation.
+
+ For evaluation annotations, please refer to the [Proxy3D-annotations](https://huggingface.co/datasets/Spacewanderer8263/Proxy3D-annotations) repository.
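+
+ As a quick start, the sketch below fetches one training-instruction file with `huggingface_hub` and inspects it. The `repo_id` is a placeholder for this dataset's repository and the record layout is an assumption; only the `stage_4_train_318K.json` filename comes from the listing above.
+
+ ```python
+ # Minimal sketch: download and inspect a stage-4 instruction file.
+ # NOTE: repo_id is a placeholder and the record layout is assumed,
+ # not documented in this card.
+ import json
+
+ from huggingface_hub import hf_hub_download
+
+ path = hf_hub_download(
+     repo_id="<namespace>/SpaceSpan",     # placeholder: this dataset's repo id
+     filename="stage_4_train_318K.json",  # stage-4 training instructions (~318K samples)
+     repo_type="dataset",
+ )
+
+ with open(path, "r", encoding="utf-8") as f:
+     samples = json.load(f)
+
+ print(f"Loaded {len(samples)} samples")
+ print(samples[0])  # inspect a single record
+ ```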
+
+ ## Citation
+
+ If you use this dataset in your research, please cite the following paper:
+
+ ```bibtex
+ @article{proxy3d2026,
+   title={Proxy3D: Efficient 3D Representations for Vision-Language Models via Semantic Clustering and Alignment},
+   author={Jiang, Jerry and Sun, Haowen and Gudovskiy, Denis and Nakata, Yohei and Okuno, Tomoyuki and Keutzer, Kurt and Zheng, Wenzhao},
+   journal={arXiv preprint arXiv:2605.08064},
+   year={2026}
+ }
+ ```