docs: add GSO30 dataset readme
README.md
---
license: mit
---
# Streaming3D Dataset

This dataset contains assets used by the Streaming3D benchmark. The current
release documents the `GSO30` subset; other subsets may be added later.

## GSO30

`GSO30` is a 30-object subset derived from Google Scanned Objects. Each object
directory contains training renders, evaluation assets, and the original object
mesh/material files.

### Object List

```text
alarm backpack bell blocks chicken cream elephant grandfather grandmother hat
leather lion lunch_bag mario oil school_bus1 school_bus2 shoe shoe1 shoe2
shoe3 soap sofa sorter sorting_board stucking_cups teapot toaster train turtle
```
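For scripting, the object list above can be kept as a plain Python tuple (a convenience sketch; `GSO30_OBJECTS` is a name chosen here, not part of the dataset, and the entries are exactly the directory names listed above):

```python
# The 30 GSO30 object IDs, matching the per-object directory names.
GSO30_OBJECTS = (
    "alarm", "backpack", "bell", "blocks", "chicken", "cream",
    "elephant", "grandfather", "grandmother", "hat", "leather", "lion",
    "lunch_bag", "mario", "oil", "school_bus1", "school_bus2", "shoe",
    "shoe1", "shoe2", "shoe3", "soap", "sofa", "sorter",
    "sorting_board", "stucking_cups", "teapot", "toaster", "train", "turtle",
)
```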

### Directory Structure

```text
GSO30/
  <object_id>/
    meshes/
      model.glb
      model.obj
      model.mtl
      texture.png
    render_spiral_100/
      images/
        000.png ... 099.png
      masks/
        000.png ... 099.png
      model/
        000.png ... 099.png
        000.npy ... 099.npy
      transforms.json
      model_norm.obj
      model_norm.mtl
    render_mvs_25/
      model_norm.glb
      model_norm.obj
      model_norm.mtl
      model/
        000.png ... 024.png
        000.npy ... 024.npy
```

Some object folders also include auxiliary metadata, thumbnails, or legacy
render folders. The benchmark protocol uses the paths above.
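A quick layout sanity check can be built from those paths. The sketch below is illustrative; `expected_paths` and `has_expected_layout` are helper names invented here, and only the benchmark-relevant paths from the tree above are checked:

```python
from pathlib import Path


def expected_paths(root: str, object_id: str) -> list[Path]:
    """Benchmark-relevant paths for one object, per the layout above."""
    obj = Path(root) / object_id
    return [
        obj / "render_spiral_100" / "images",
        obj / "render_spiral_100" / "masks",
        obj / "render_spiral_100" / "transforms.json",
        obj / "render_mvs_25" / "model_norm.glb",
        obj / "render_mvs_25" / "model",
    ]


def has_expected_layout(root: str, object_id: str) -> bool:
    # True only if every benchmark-relevant path exists on disk.
    return all(p.exists() for p in expected_paths(root, object_id))
```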

### Usage

For training or reconstruction input, use all 100 images from:

```text
GSO30/<object_id>/render_spiral_100/images/{000..099}.png
```

The corresponding masks are stored in:

```text
GSO30/<object_id>/render_spiral_100/masks/{000..099}.png
```
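Since images and masks share frame indices, they can be paired by index. A minimal sketch, assuming the paths above (`spiral_frames` is a hypothetical helper name):

```python
from pathlib import Path


def spiral_frames(root: str, object_id: str) -> list[tuple[Path, Path]]:
    # (image, mask) path pairs for the 100 spiral views, 000 .. 099.
    spiral = Path(root) / object_id / "render_spiral_100"
    return [
        (spiral / "images" / f"{i:03d}.png", spiral / "masks" / f"{i:03d}.png")
        for i in range(100)
    ]
```

Each mask shares its zero-padded frame index with the image it corresponds to.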

Camera metadata for the 100 spiral views is available in:

```text
GSO30/<object_id>/render_spiral_100/transforms.json
GSO30/<object_id>/render_spiral_100/model/{000..099}.npy
```
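The camera files for one object can be gathered as follows (a sketch; `spiral_camera_files` is a name invented here):

```python
from pathlib import Path


def spiral_camera_files(root: str, object_id: str) -> tuple[Path, list[Path]]:
    """transforms.json plus the 100 per-view .npy files for one object."""
    spiral = Path(root) / object_id / "render_spiral_100"
    transforms = spiral / "transforms.json"  # global camera metadata
    per_view = [spiral / "model" / f"{i:03d}.npy" for i in range(100)]
    return transforms, per_view
```

`transforms.json` can then be parsed with `json.load` and each per-view file read with `numpy.load`; note that the exact array layout of the `.npy` files is not specified in this README.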

For evaluation, use the normalized GLB mesh and the 25 provided camera views
from `render_mvs_25`:

```text
GSO30/<object_id>/render_mvs_25/model_norm.glb
GSO30/<object_id>/render_mvs_25/model/{000..024}.npy
```

The matching reference renders for those views are:

```text
GSO30/<object_id>/render_mvs_25/model/{000..024}.png
```
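As with the spiral split, each evaluation camera file pairs with the reference render of the same index; the normalized mesh sits one level up at `render_mvs_25/model_norm.glb`. A minimal pairing sketch (`mvs_eval_views` is a hypothetical helper name):

```python
from pathlib import Path


def mvs_eval_views(root: str, object_id: str) -> list[tuple[Path, Path]]:
    # (camera .npy, reference render .png) pairs for the 25 eval views.
    model_dir = Path(root) / object_id / "render_mvs_25" / "model"
    return [
        (model_dir / f"{i:03d}.npy", model_dir / f"{i:03d}.png")
        for i in range(25)
    ]
```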

In short, the default protocol is:

1. Train or reconstruct from all `render_spiral_100/images` frames.
2. Evaluate by rendering or comparing against `render_mvs_25/model_norm.glb`
   using the 25 camera poses in `render_mvs_25/model/*.npy`.