---
dataset_info:
  config_name: SubsetVisualization
  features:
  - name: json_file
    dtype: string
  - name: image_0
    dtype: image
  - name: image_1
    dtype: image
  - name: depth_0
    dtype: image
  - name: depth_1
    dtype: image
  - name: conversations
    sequence:
    - name: from
      dtype: string
    - name: value
      dtype: string
  - name: thinking
    sequence:
    - name: question_type
      dtype: string
    - name: thinking
      dtype: string
  - name: image_names
    sequence: string
  - name: depth_names
    sequence: string
  - name: image_root
    dtype: string
  - name: depth_root
    dtype: string
  splits:
  - name: 2D
    num_bytes: 71248668.0
    num_examples: 200
  - name: 3D
    num_bytes: 167399765.0
    num_examples: 500
  - name: Simulator
    num_bytes: 11230975.0
    num_examples: 100
  download_size: 245201596
  dataset_size: 249879408.0
configs:
- config_name: SubsetVisualization
  data_files:
  - split: 2D
    path: SubsetVisualization/2D-*
  - split: 3D
    path: SubsetVisualization/3D-*
  - split: Simulator
    path: SubsetVisualization/Simulator-*
license: apache-2.0
task_categories:
- question-answering
size_categories:
- 1M<n<10M
---

<div style="background-color: #ffe4e6; border-left: 4px solid #dc2626; padding: 0.75em 1em; margin-top: 1em; color: #b91c1c; font-weight: bold; border-radius: 0.375em;">
⚠️ Warning: The Dataset Viewer and Data Studio above are for display only. They show only 800 samples from the full RefSpatial dataset, drawn from the "SubsetVisualization" folder, which is stored as Hugging Face ".parquet" files.
</div>

<div style="background-color: #eff6ff; border-left: 4px solid #3b82f6; padding: 0.75em 1em; margin-top: 1em; color: #1e3a8a; font-weight: bold; border-radius: 0.375em;">
ℹ️ Info: The full raw dataset (~357GB) is available in non-HF formats (e.g., images, depth maps, JSON files).
</div>

<h1 style="display: flex; align-items: center; justify-content: center; font-size: 1.75em; font-weight: 600;">
  <img src="assets/logo.png" style="height: 60px; flex-shrink: 0;">
  <span style="line-height: 1.2; margin-left: 0px; text-align: center;">
    RefSpatial: A Large-Scale Dataset for Teaching a General VLM to Achieve Spatial Referring with Reasoning
  </span>
</h1>

<p align="center">
  <a href="https://zhoues.github.io/RoboRefer"><img src="https://img.shields.io/badge/%F0%9F%8F%A0%20Project-Homepage-blue" alt="Project Homepage"></a>
  &nbsp;
  <a href="https://arxiv.org/abs/2506.04308"><img src="https://img.shields.io/badge/arXiv-2506.04308-b31b1b.svg?logo=arxiv" alt="arXiv"></a>
  &nbsp;
  <a href="https://github.com/Zhoues/RoboRefer"><img src="https://img.shields.io/badge/Code-RoboRefer-black?logo=github" alt="Code"></a>
  &nbsp;
  <a href="https://huggingface.co/datasets/JingkunAn/RefSpatial-Bench"><img src="https://img.shields.io/badge/%F0%9F%A4%97%20Benchmark-RefSpatial--Bench-green" alt="Benchmark"></a>
  &nbsp;
  <a href="https://huggingface.co/collections/Zhoues/roborefer-and-refspatial-6857c97848fab02271310b89"><img src="https://img.shields.io/badge/%F0%9F%A4%97%20Weights-RoboRefer-yellow" alt="Weights"></a>
</p>

## 🔭 Overview

**RefSpatial** is a comprehensive dataset combining 2D images from OpenImages, 3D videos from CA-1M, and simulated data generated with Blender ([Fig. 1 (a)](#fig1)). Its key features include:

- **(1) Fine-Grained Annotations:** Includes multiple instances of the same object category, each with hierarchical captions (e.g., "the third cup from the left") for unambiguous reference in cluttered environments.
- **(2) Multi-Dimensionality:** Supports complex, multi-step spatial reasoning by annotating detailed reasoning processes (for all simulated data).
- **(3) High Quality:** Data is rigorously filtered, selecting 466k images, 100k video frames, and 3k manually annotated assets from much larger pools.
- **(4) Large Scale:** Contains 2.5 million samples and 20 million question-answer pairs ([Fig. 1 (b)](#fig1)).
- **(5) Rich Diversity:** Covers a wide range of indoor and outdoor scenes with 31 distinct spatial relations ([Fig. 1 (c)](#fig1)).
- **(6) Easy Scalability:** Our pipeline seamlessly scales spatial referring data using diverse sources, including 2D images, 3D videos with bounding boxes, and simulation assets.

<div id="fig1" align="center">
  <img src="assets/dataset_pipeline.png" alt="RefSpatial Dataset Pipeline" width="90%">
  <p><b>Figure 1:</b> RefSpatial dataset pipeline and statistics of 31 spatial relations.</p>
</div>

In [Fig. 1](#fig1), we present the dataset recipe that progressively integrates 2D, 3D, and simulated data, enabling general VLMs to adapt to spatial referring tasks in a bottom-up manner.
For more details on the dataset pipeline, please refer to our paper **[RoboRefer](https://arxiv.org/abs/2506.04308)**.

## 🗂️ Directory Structure

### 1. Initial Structure After Download

After downloading all the data from Hugging Face, your `RefSpatial` folder will look like this. Large files (like 2D images) are split into parts (`.part_a`, etc.), while others are single `.tar.gz` files.

```
RefSpatial/
├── 2D/
│   ├── image/
│   │   ├── image.tar.gz.part_a
│   │   └── ... (more split files)
│   ├── depth/
│   │   ├── depth.tar.gz.part_a
│   │   └── ... (more split files)
│   ├── choice_qa.json
│   └── reasoning_template_qa.json
├── 3D/
│   ├── image/
│   │   └── image.tar.gz
│   ├── depth/
│   │   └── depth.tar.gz
│   ├── image_multi_view/
│   │   └── image_multi_view.tar.gz
│   ├── depth_multi_view/
│   │   └── depth_multi_view.tar.gz
│   ├── image_visual_choice/
│   │   ├── image_visual_choice.tar.gz.part_a
│   │   └── ... (more split files)
│   ├── choice_qa.json
│   ├── multi_view_qa.json
│   ├── reasoning_template_qa.json
│   ├── vacant_qa.json
│   └── visual_choice_qa.json
├── Simulator/
│   ├── image/
│   │   └── image.tar.gz
│   ├── depth/
│   │   └── depth.tar.gz
│   └── metadata.json
├── SubsetVisualization/
│   ├── 2D-00000-of-00001.parquet
│   └── ... (more .parquet files)
├── unzip_dataset.sh
└── delete_tar_gz.sh
```
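
If you have not fetched the data yet, here is a minimal download sketch using `huggingface_hub`. The repo id below is an assumption, not a confirmed path; substitute this repository's actual id:

```python
# Minimal download sketch -- the repo id is an assumption, replace it with
# this repository's actual dataset path on Hugging Face.
from huggingface_hub import snapshot_download

snapshot_download(
    repo_id="JingkunAn/RefSpatial",  # ASSUMPTION: use the real repo id
    repo_type="dataset",
    local_dir="RefSpatial",
)
```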

### 2. Explanation of Main Folder Contents

- **`2D/`**: Contains all data related to 2D spatial reasoning tasks.
  - `image/`, `depth/`: Contain the **split archives** for 2D images and their corresponding depth maps.
  - `*.json`: The relevant question-answering and reasoning annotations.
- **`3D/`**: Contains data for more complex 3D scene understanding tasks.
  - `image/`, `depth/`: Standard single-view 3D scene images and depth maps.
  - `image_multi_view/`, `depth_multi_view/`: Data for **multi-view** tasks.
  - `image_visual_choice/`: Data for **visual choice question** tasks; its larger size requires it to be split into archives.
  - `*.json`: Annotations for the various 3D tasks.
- **`Simulator/`**: Contains data generated from a simulation environment.
  - `image/`, `depth/`: Images and depth maps generated by the simulator, which are perfectly aligned and annotated.
  - `metadata.json`: Metadata for the scenes.
- **`SubsetVisualization/`**: Contains subset samples for quick visualization and data inspection.
  - `*.parquet`: These files let you preview a small part of the dataset without unzipping everything (see the loading sketch below).
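
To quickly inspect those preview samples, a minimal sketch with the `datasets` library (again, the repo id is an assumption; replace it with this repository's actual path):

```python
# Load the 800-sample SubsetVisualization preview -- the repo id is an assumption.
from datasets import load_dataset

subset = load_dataset("JingkunAn/RefSpatial", name="SubsetVisualization", split="2D")
sample = subset[0]
print(sample["json_file"])       # source annotation file for this sample
print(sample["conversations"])   # question-answer turns
sample["image_0"].show()         # decoded as a PIL image per the schema above
```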

## 🛠️ How to Use RefSpatial Dataset

### 1. Decompress the Dataset

The provided `unzip_dataset.sh` script decompresses all of the `*.tar.gz` files. Please run it from the `RefSpatial` root directory.

```bash
cd RefSpatial
bash unzip_dataset.sh
```

This script will automatically perform the following actions (see the sketch after this list):

1. **Merge Split Files**: For files that are split into `.part_a`, `.part_b`, etc., the script uses the `cat` command to combine them into a single, complete `.tar.gz` file. For example, `image.tar.gz.part_a`, `...` will be merged into `image.tar.gz`.
2. **Extract Archives**: The script then uses the `tar` command to extract all `.tar.gz` archives into their current directories.
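The following is a minimal Python sketch of those two steps, not the script itself; it assumes the split parts sort correctly by their `.part_*` suffix:

```python
# Sketch of unzip_dataset.sh's behavior (merge parts, then extract) -- not the actual script.
import shutil
import tarfile
from pathlib import Path

root = Path("RefSpatial")

# 1) Merge split archives (the equivalent of `cat x.tar.gz.part_* > x.tar.gz`).
for first_part in sorted(root.rglob("*.tar.gz.part_a")):
    merged = first_part.with_suffix("")  # "image.tar.gz.part_a" -> "image.tar.gz"
    parts = sorted(first_part.parent.glob(merged.name + ".part_*"))
    with open(merged, "wb") as out:
        for part in parts:
            with open(part, "rb") as src:
                shutil.copyfileobj(src, out)  # stream bytes, no full load into memory

# 2) Extract every archive into the directory it lives in (the `tar -xzf` step).
for archive in sorted(root.rglob("*.tar.gz")):
    with tarfile.open(archive, "r:gz") as tar:
        tar.extractall(path=archive.parent)
```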

### 2. (Optional) Clean Up Archives

If you wish to delete all `.tar.gz` and `.part_*` files after successful decompression to save disk space, you can run:

```bash
bash delete_tar_gz.sh
```

<div style="background-color: #ffe4e6; border-left: 4px solid #dc2626; padding: 0.75em 1em; margin-top: 1em; color: #b91c1c; font-weight: bold; border-radius: 0.375em;">
⚠️ Warning: Please run this script only after confirming that all data has been successfully decompressed.
</div>

### 3. Use the Dataset with RoboRefer

For details on how to use the RefSpatial dataset with the **RoboRefer** series of models, please refer to the official implementation repository: **[Zhoues/RoboRefer](https://github.com/Zhoues/RoboRefer)**.

## 🗂️ Final Structure After Decompression

After successfully running the decompression script, all archives will be replaced by the actual image (e.g., `.jpg`, `.png`) and depth map files. The final directory structure will be as follows:

```
RefSpatial/
├── 2D/
│   ├── image/
│   │   ├── 000002b97e5471a0.jpg
│   │   └── ... (all 2D image files)
│   ├── depth/
│   │   ├── 000002b97e5471a0.png
│   │   └── ... (all 2D depth map files)
│   ├── choice_qa.json
│   └── reasoning_template_qa.json
├── 3D/
│   ├── image/
│   │   ├── 42444499_2458914221666_wide_image.png
│   │   └── ... (all 3D single-view images)
│   ├── depth/
│   │   ├── 42444499_2458914221666_wide_depth.png
│   │   └── ... (all 3D single-view depth maps)
│   ├── image_multi_view/
│   │   ├── 42444499_2460713483458_wide_image.png
│   │   └── ... (all 3D multi-view images)
│   ├── depth_multi_view/
│   │   ├── 42444499_2460713483458_wide_depth.png
│   │   └── ... (all 3D multi-view depth maps)
│   ├── image_visual_choice/
│   │   ├── 42444499_2458914221666_image_with_bbox_0.png
│   │   └── ... (all 3D visual choice images)
│   ├── choice_qa.json
│   └── ... (other 3D json files)
├── Simulator/
│   ├── image/
│   │   ├── 00020ec1a2dbc971.png
│   │   └── ... (all simulator images)
│   ├── depth/
│   │   ├── 00020ec1a2dbc971.png
│   │   └── ... (all simulator depth maps)
│   └── metadata.json
└── ... (scripts and visualization folder)
```

## 🗺️ Data Mapping

To use this dataset for model training, you need to match the image and depth paths in the JSON files to the decompressed image and depth map files. Below is the mapping of each JSON file to its corresponding `image` and `depth` folders.

```json
{
    "2D": {
        "folder": "RefSpatial/2D",
        "jsons": {
            "choice_qa.json": {
                "image_root": "RefSpatial/2D/image",
                "depth_root": "RefSpatial/2D/depth"
            },
            "reasoning_template_qa.json": {
                "image_root": "RefSpatial/2D/image",
                "depth_root": "RefSpatial/2D/depth"
            }
        }
    },
    "3D": {
        "folder": "RefSpatial/3D",
        "jsons": {
            "choice_qa.json": {
                "depth_root": "RefSpatial/3D/depth",
                "image_root": "RefSpatial/3D/image"
            },
            "multi_view_qa.json": {
                "depth_root": "RefSpatial/3D/depth_multi_view",
                "image_root": "RefSpatial/3D/image_multi_view"
            },
            "reasoning_template_qa.json": {
                "depth_root": "RefSpatial/3D/depth",
                "image_root": "RefSpatial/3D/image"
            },
            "vacant_qa.json": {
                "depth_root": "RefSpatial/3D/depth",
                "image_root": "RefSpatial/3D/image"
            },
            "visual_choice_qa.json": {
                "depth_root": "RefSpatial/3D/depth",
                "image_root": "RefSpatial/3D/image_visual_choice"
            }
        }
    },
    "Simulator": {
        "folder": "RefSpatial/Simulator",
        "jsons": {
            "metadata.json": {
                "image_root": "RefSpatial/Simulator/image",
                "depth_root": "RefSpatial/Simulator/depth"
            }
        }
    }
}
```
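
As a concrete illustration, here is a minimal sketch that resolves one annotation file against its roots. It assumes the file holds a list of samples whose entries store relative file names under `image_names` / `depth_names` (mirroring the SubsetVisualization schema above); adjust the field names to the actual annotation format:

```python
# Resolve annotation entries against the image/depth roots from the mapping above.
# NOTE: "image_names"/"depth_names" are assumed field names -- verify against the JSON.
import json
from pathlib import Path

image_root = Path("RefSpatial/2D/image")
depth_root = Path("RefSpatial/2D/depth")

with open("RefSpatial/2D/choice_qa.json") as f:
    entries = json.load(f)  # assumed: a list of sample dicts

entry = entries[0]
image_paths = [image_root / name for name in entry["image_names"]]
depth_paths = [depth_root / name for name in entry["depth_names"]]
print(image_paths, depth_paths)
```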

## 📫 Contact

If you have any questions about the dataset, feel free to email Jingkun An (anjingkun02@gmail.com), Yi Han (hany01@buaa.edu.cn), or Enshen Zhou (zhouenshen@buaa.edu.cn).

## 📜 Citation

Please consider citing our work if this dataset is useful for your research.

```
@article{zhou2025roborefer,
  title={RoboRefer: Towards Spatial Referring with Reasoning in Vision-Language Models for Robotics},
  author={Zhou, Enshen and An, Jingkun and Chi, Cheng and Han, Yi and Rong, Shanyu and Zhang, Chi and Wang, Pengwei and Wang, Zhongyuan and Huang, Tiejun and Sheng, Lu and others},
  journal={arXiv preprint arXiv:2506.04308},
  year={2025}
}
```