# ScanQA 3R Data Processing


## 04.22 Update
- Cleaned up the outputs and added batch downloading
- Switched to the QA pairs from ScanQA

### QA

Path: `ScanQA/data/qa/ScanQA_v1.0_train.json`

A validation set is also provided alongside it.
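For a quick look at the QA pairs, a minimal sketch (the in-memory `qa_entries` below stands in for the loaded JSON; the field names `scene_id`, `question`, and `answers` are assumed from the ScanQA v1.0 format — verify against your local copy):

```python
import json
from collections import defaultdict

# Stand-in for: qa_entries = json.load(open("ScanQA/data/qa/ScanQA_v1.0_train.json"))
qa_entries = [
    {"scene_id": "scene0000_00", "question": "What color is the chair?", "answers": ["brown"]},
    {"scene_id": "scene0000_00", "question": "Where is the table?", "answers": ["by the window"]},
    {"scene_id": "scene0191_00", "question": "How many beds are there?", "answers": ["two"]},
]

# Group question-answer pairs by scene so each scan can be processed in one pass.
qa_by_scene = defaultdict(list)
for entry in qa_entries:
    qa_by_scene[entry["scene_id"]].append((entry["question"], entry["answers"]))

print(len(qa_by_scene["scene0000_00"]))  # number of QA pairs for this scene
```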

### Batch-downloading the data

#### Downloading the ScanNet dataset
This subset contains only 800 scans.
```bash
cd preprocessing
# -o sets the output path
python download_from_scan_id_txt.py -o ../data/raw_data/scannet --ids_file ../data/qa/scan_id.txt --type .sens --type _vh_clean_2.0.010000.segs.json --type .aggregation.json --type _vh_clean_2.ply
```

To download a specific scene by ID:
```bash
python download-scannetv2.py -o ../data/raw_data/scannet --type .sens --type _vh_clean_2.0.010000.segs.json --type .aggregation.json --type _vh_clean_2.ply --id scene0000_01
```
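The scan-ID list consumed by `--ids_file` can be derived from the QA file itself; a hypothetical sketch (the in-memory `qa_entries` stands in for the loaded JSON, and the `scene_id` field is assumed from the ScanQA format):

```python
# Derive data/qa/scan_id.txt from the QA file so the batch downloader
# fetches exactly the scenes that ScanQA references.
# Stand-in for: qa_entries = json.load(open("../data/qa/ScanQA_v1.0_train.json"))
qa_entries = [
    {"scene_id": "scene0000_00"},
    {"scene_id": "scene0000_00"},
    {"scene_id": "scene0191_00"},
]

scan_ids = sorted({e["scene_id"] for e in qa_entries})  # de-duplicate, stable order
print("\n".join(scan_ids))  # write this to scan_id.txt, one ID per line
```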

#### Sampling frames

```bash
# num_frames can be set to 4-8
python export_sampled_frames.py \
    --scans_dir ../data/raw_data/scannet/scans \
    --output_dir ../data/processed_data/ScanNet \
    --train_val_splits_path ./Benchmark \
    --num_frames 4 \
    --max_workers 8 \
    --image_size 480 640
```
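The sampling presumably picks frames spread evenly across each scan's image stream; a plausible sketch of that index selection (not the actual `export_sampled_frames.py` logic):

```python
import numpy as np

def sample_frame_indices(total_frames, num_frames):
    """Pick num_frames indices spread evenly across the sequence
    (a plausible strategy; export_sampled_frames.py may differ)."""
    if total_frames <= num_frames:
        return list(range(total_frames))
    # Evenly spaced positions over [0, total_frames - 1], rounded to frame indices.
    return np.linspace(0, total_frames - 1, num_frames).round().astype(int).tolist()

print(sample_frame_indices(100, 4))  # [0, 33, 66, 99]
```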


#### Generating voxels

```bash
python ScanNet200/preprocess_scannet200.py \
        --dataset_root ../data/raw_data/scannet/scans \
        --output_root ../data/processed_data/scannet/point_cloud \
        --label_map_file ScanNet200/scannetv2-labels.combined.tsv \
        --train_val_splits_path ScanNet200/Tasks \
        --num_workers 4 \
        --voxel_size 0.01 \
        --normalize_pointcloud
```



## 04.09 Update

04.09: processed the metadata for scene0000_00 in the ScanNet dataset.
After running steps 1 and 2, I filtered the QA pairs for scene0000_00 directly from VLM-3R-DATA. It turns out this QA set is quite complex: there are many question types, and the input is video.

### Data structure

#### Image/video data
`data/processed_data/ScanNet/videos/train`

#### **Voxel data**

Under `VLM-3R/vlm_3r_data_process/data/processed_data/ScanNet/point_cloud/train`

The number in `scene0000_00_voxel_0.1.ply` corresponds to the voxel_size used.
Voxel structure:

- Each of the N rows represents one voxel. Because of the `np.hstack` concatenation, the 8 values in each row mean:

- [:, 0:3] (X, Y, Z): the voxel's spatial coordinates. After the earlier processing (typically dividing by voxel_size and flooring), these are discrete integer indices. Think of them as row/column/layer indices of a 3D grid (e.g. [10, 5, -2]), not physical meters.

- [:, 3:6] (R, G, B): color channels, i.e. the color of the voxel (usually integers in 0-255). If a voxel cell originally contained several real points, `np.unique(..., return_index=True)` keeps the color of the first point that fell into that cell.

- [:, 6] (Label): the semantic label (semantic ID), e.g. 3 for chair, 4 for table; used for semantic segmentation. The class list is in `VLM-3R/vlm_3r_data_process/datasets/ScanNet200/scannet200_constants.py`.

- [:, 7] (Instance): the instance ID, used to distinguish individual objects of the same class. For example, two chairs in a scene both have Label 3 but may have Instance IDs 101 and 102.
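Putting the description above together, the voxelization step can be sketched roughly as follows (my reconstruction from the field layout, not the exact `preprocess_scannet200.py` code):

```python
import numpy as np

def voxelize(points_xyz, colors_rgb, labels, instances, voxel_size=0.1):
    """Build the N x 8 [x, y, z, r, g, b, label, instance] voxel matrix."""
    # Discretize coordinates into integer grid indices.
    grid = np.floor(points_xyz / voxel_size).astype(np.int64)
    # Keep only the first point that falls into each occupied cell.
    _, first_idx = np.unique(grid, axis=0, return_index=True)
    first_idx = np.sort(first_idx)
    return np.hstack([
        grid[first_idx],
        colors_rgb[first_idx],
        labels[first_idx].reshape(-1, 1),
        instances[first_idx].reshape(-1, 1),
    ])

pts = np.array([[0.01, 0.02, 0.03], [0.04, 0.05, 0.06], [0.95, 0.0, 0.0]])
cols = np.array([[255, 0, 0], [0, 255, 0], [0, 0, 255]])
vox = voxelize(pts, cols, np.array([3, 3, 4]), np.array([101, 101, 102]))
print(vox.shape)  # (2, 8): the first two points share one 0.1 m cell
```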

### Reading the voxels

```python
import numpy as np
from plyfile import PlyData


def read_custom_ply(filepath):
    """
    Step 1: read a PLY file that carries the custom label and instance_id fields.
    """
    print(f"Reading file: {filepath}")
    with open(filepath, 'rb') as f:
        plydata = PlyData.read(f)

    vertex_data = plydata['vertex'].data

    # Extract the individual fields
    x = vertex_data['x']
    y = vertex_data['y']
    z = vertex_data['z']
    r = vertex_data['red']
    g = vertex_data['green']
    b = vertex_data['blue']
    label = vertex_data['label']
    instance = vertex_data['instance_id']

    # Reassemble into an N x 8 matrix
    voxel_pc = np.vstack((x, y, z, r, g, b, label, instance)).T
    return voxel_pc
```
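Once loaded, the N x 8 matrix slices back into its components, e.g. to pick out a single instance (a numpy-only sketch; the two-voxel array below is a hand-made stand-in for `read_custom_ply`'s output):

```python
import numpy as np

# Stand-in for read_custom_ply(...): two voxels in [x, y, z, r, g, b, label, instance] layout.
voxel_pc = np.array([
    [10, 5, 2, 120, 80, 60, 3, 101],   # a "chair" voxel, instance 101
    [11, 5, 2, 125, 82, 63, 3, 102],   # a second chair, instance 102
])

coords = voxel_pc[:, 0:3]
colors = voxel_pc[:, 3:6]
labels = voxel_pc[:, 6].astype(int)
instances = voxel_pc[:, 7].astype(int)

# Select all voxels belonging to one instance.
mask = instances == 101
print(coords[mask].tolist())  # [[10, 5, 2]]
```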
To visualize, run:
```bash
python vis_data_my.py
```
![voxel_rgb.png](assert/voxel_rgb.png)

![semantic.png](assert/semantic.png)



If none of these meet your requirements, re-run with a different voxel_size:
```bash
python datasets/ScanNet200/preprocess_scannet200.py \
        --dataset_root ./data/raw_data/scannet/scans \
        --output_root ./data/processed_data/ScanNet/point_cloud \
        --label_map_file ./data/raw_data/scannet/scannetv2-labels.combined.tsv \
        --train_val_splits_path datasets/ScanNet200/Tasks \
        --num_workers 4 \
        --voxel_size 0.1
```


# ScanQA: 3D Question Answering for Spatial Scene Understanding

<p align="center"><img width="540" src="./docs/overview.png"></p>

This is the official repository of our paper [**ScanQA: 3D Question Answering for Spatial Scene Understanding (CVPR 2022)**](https://arxiv.org/abs/2112.10482) by Daichi Azuma, Taiki Miyanishi, Shuhei Kurita, and Motoki Kawanabe.
## Abstract
We propose a new 3D spatial understanding task for 3D question answering (3D-QA). In the 3D-QA task, models receive visual information from the entire 3D scene of a rich RGB-D indoor scan and answer given textual questions about the 3D scene.
Unlike the 2D-question answering of visual question answering, the conventional 2D-QA models suffer from problems with spatial understanding of object alignment and directions and fail in object localization from the textual questions in 3D-QA. We propose a baseline model for 3D-QA, called the ScanQA model, which learns a fused descriptor from 3D object proposals and encoded sentence embeddings. This learned descriptor correlates language expressions with the underlying geometric features of the 3D scan and facilitates the regression of 3D bounding boxes to determine the described objects in textual questions. We collected human-edited question-answer pairs with free-form answers grounded in 3D objects in each 3D scene. Our new ScanQA dataset contains over 41k question-answer pairs from 800 indoor scenes obtained from the ScanNet dataset. To the best of our knowledge, ScanQA is the first large-scale effort to perform object-grounded question answering in 3D environments.

## Installation

Please refer to [installation guide](docs/installation.md).

## Dataset

Please refer to [data preparation](docs/dataset.md) for preparing the ScanNet v2 and ScanQA datasets.
## Usage

### Training
- Start training the ScanQA model with RGB values:

  ```shell
  python scripts/train.py --use_color --tag <tag_name>
  ```

  For more training options, please run `scripts/train.py -h`.

### Inference
- Evaluation of trained ScanQA models with the val dataset:

  ```shell
  python scripts/eval.py --folder <folder_name> --qa --force
  ```

  `<folder_name>` corresponds to the folder under `outputs/` named with the timestamp + `<tag_name>`.

- Scoring with the val dataset:

  ```shell
  python scripts/score.py --folder <folder_name>
  ```

- Prediction with the test dataset:

  ```shell
  python scripts/predict.py --folder <folder_name> --test_type test_w_obj (or test_wo_obj)
  ```

  The [ScanQA benchmark](https://eval.ai/web/challenges/challenge-page/1715/overview) is hosted on [EvalAI](https://eval.ai/). 
  Please submit the `outputs/<folder_name>/pred.test_w_obj.json` and `pred.test_wo_obj.json` to this site for the evaluation of the test with and without objects.


## Citation
If you find our work helpful for your research, please consider citing our paper:
```bibtex
@inproceedings{azuma_2022_CVPR,
  title={ScanQA: 3D Question Answering for Spatial Scene Understanding},
  author={Azuma, Daichi and Miyanishi, Taiki and Kurita, Shuhei and Kawanabe, Motoaki},
  booktitle={Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
  year={2022}
}
```

## Acknowledgements
We would like to thank [facebookresearch/votenet](https://github.com/facebookresearch/votenet) for the 3D object detection and [daveredrum/ScanRefer](https://github.com/daveredrum/ScanRefer) for the 3D localization codebase.

## License
ScanQA is licensed under a [Creative Commons Attribution-NonCommercial-ShareAlike 3.0 Unported License](LICENSE).

Copyright (c) 2022 Daichi Azuma, Taiki Miyanishi, Shuhei Kurita, Motoaki Kawanabe