# PointOdyssey (converted to VLBM format)
This dataset contains 131 training sequences from the PointOdyssey dataset converted to the VLBM-compatible format using preprocess_pointodyssey.py. The sequences have been compressed into .tar.gz archives in chunks of 50 sequences per archive.
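To work with the sequences locally, the `.tar.gz` chunks can be unpacked first. A minimal sketch using the standard library; the `archives/` and `data/pointodyssey_vlbm/` paths are assumptions for illustration, not the dataset's actual filenames:

```python
import tarfile
from pathlib import Path

archive_dir = Path("archives")              # assumed download location
out_dir = Path("data/pointodyssey_vlbm")    # assumed extraction root
out_dir.mkdir(parents=True, exist_ok=True)

# Each chunk archive expands into <seq_name>/... directories.
for archive in sorted(archive_dir.glob("*.tar.gz")):
    with tarfile.open(archive, "r:gz") as tar:
        tar.extractall(out_dir)
```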
## Scale
| Metric | Value |
|---|---|
| Total sequences | 131 |
| Image resolution | 960 x 540 px |
| Depth type | Dense (ground truth) |
## Dataset Structure
Each sequence directory follows this layout:
```
<seq_name>/
├── rgbs/
│   ├── rgb_00000.jpg
│   ├── rgb_00001.jpg
│   └── ...
├── depths/
│   ├── depth_00000.npz
│   ├── depth_00001.npz
│   └── ...
├── intrinsics.npy
├── extrinsics.npy
├── trajs_2d.npy
├── trajs_3d.npy
├── visibilities.npy
└── scene_info.json
```
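The layout above can be walked with `pathlib` to enumerate sequences and count frames. A small sketch, assuming a hypothetical extraction root of `data/pointodyssey_vlbm`:

```python
from pathlib import Path

root = Path("data/pointodyssey_vlbm")  # assumed extraction root
seq_dirs = sorted(p for p in root.iterdir() if p.is_dir()) if root.exists() else []

for seq_dir in seq_dirs:
    num_rgb = len(list((seq_dir / "rgbs").glob("rgb_*.jpg")))
    num_depth = len(list((seq_dir / "depths").glob("depth_*.npz")))
    print(seq_dir.name, num_rgb, num_depth)
```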
### File Descriptions
- `rgbs/`: RGB frames saved as JPEG (`rgb_XXXXX.jpg`). Resolution is 960x540 pixels.
- `depths/`: Dense depth maps saved as compressed NumPy archives (`depth_XXXXX.npz`). Each archive stores a float16 array under the key `depth` of shape `(H, W)`, in meters. The original PointOdyssey depth (uint16 PNG, with the maximum uint16 value mapping to 1000 m) is converted to float16 meters.
- `intrinsics.npy`: Camera intrinsic matrices for each frame, `(T, 3, 3)`, float16.
- `extrinsics.npy`: World-to-camera (W2C) extrinsic matrices for each frame, `(T, 4, 4)`, float16.
- `trajs_2d.npy`: 2D trajectories, `(T, N, 2)`, float16 -- pixel coordinates (x, y).
- `trajs_3d.npy`: 3D trajectories, `(T, N, 3)`, float16 -- world-space coordinates (x, y, z).
- `visibilities.npy`: Visibility flags, `(T, N)`, float16 (1.0 = visible, 0.0 = not visible).
- `scene_info.json`: JSON file with per-sequence metadata, including `num_frames`, `image_size`, `num_trajectories`, `source`, and `depth_range`.
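The uint16-to-meters depth conversion described above can be sketched as follows, assuming (per the description) that the maximum uint16 value corresponds to 1000 m; the toy array stands in for a real depth PNG:

```python
import io
import numpy as np

# Toy uint16 depth "image" standing in for a PointOdyssey depth PNG.
depth_u16 = np.array([[0, 65535], [32768, 6553]], dtype=np.uint16)

# uint16 -> meters: 65535 maps to 1000 m, then cast to float16.
depth_m = (depth_u16.astype(np.float32) * (1000.0 / 65535.0)).astype(np.float16)

# Store under the key "depth" in a compressed .npz, matching the layout
# (written to an in-memory buffer here instead of a file on disk).
buf = io.BytesIO()
np.savez_compressed(buf, depth=depth_m)
buf.seek(0)
reloaded = np.load(buf)["depth"]  # float16, shape (2, 2)
```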
## Trajectory Source
PointOdyssey provides ground-truth dense point trajectories in `anno.npz` files, containing `trajs_3d`, `trajs_2d`, `visibs`, `intrinsics`, and `extrinsics`. The conversion maps `visibs` to `visibilities` and tiles single intrinsic/extrinsic matrices across all frames when needed.
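A hedged sketch of that field mapping, using toy arrays in place of a real `anno.npz` (shapes only; this is not the converter's actual code):

```python
import numpy as np

T, N = 4, 7  # toy frame and trajectory counts

# Per-sequence matrices and flags as they might appear in anno.npz.
K = np.eye(3, dtype=np.float16)            # single intrinsic matrix
E = np.eye(4, dtype=np.float16)            # single extrinsic matrix
visibs = np.ones((T, N), dtype=np.float16)

# Tile the single matrices across all frames; rename visibs.
intrinsics = np.tile(K[None], (T, 1, 1))   # (T, 3, 3)
extrinsics = np.tile(E[None], (T, 1, 1))   # (T, 4, 4)
visibilities = visibs                      # same values, new name
```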
## Data Specifications
- Image format: JPEG (RGB), 960x540 px
- Depth format: NPZ (float16), dense (ground truth from PointOdyssey)
- Annotation format: Individual `.npy` files (float16)
- Extrinsics: World-to-camera (W2C) 4x4 matrices
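Given the W2C convention, a world-space point can be projected into pixel coordinates as a quick sanity check. A minimal sketch with toy matrices (not real sequence data):

```python
import numpy as np

# Toy intrinsics: focal length 500 px, principal point at the image center
# of a 960x540 frame.
K = np.array([[500.0,   0.0, 480.0],
              [  0.0, 500.0, 270.0],
              [  0.0,   0.0,   1.0]])

# Toy W2C extrinsics: camera translated 2 m along z.
E = np.eye(4)
E[:3, 3] = [0.0, 0.0, 2.0]

p_world = np.array([0.1, -0.2, 3.0, 1.0])  # homogeneous world point

# x_cam = E @ x_world, then pixels = (K @ x_cam) / z.
p_cam = E @ p_world                        # camera-space point, z = 5.0
u, v = (K[:2] @ p_cam[:3]) / p_cam[2]      # pixel coordinates (x, y)
```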
## Usage Example (Python)
```python
import numpy as np
from PIL import Image
from pathlib import Path
import json

seq_dir = Path("data/pointodyssey_vlbm/ani")

# Load annotations
trajs_2d = np.load(seq_dir / "trajs_2d.npy")      # (T, N, 2)
trajs_3d = np.load(seq_dir / "trajs_3d.npy")      # (T, N, 3)
vis = np.load(seq_dir / "visibilities.npy")       # (T, N)
intrinsics = np.load(seq_dir / "intrinsics.npy")  # (T, 3, 3)
extrinsics = np.load(seq_dir / "extrinsics.npy")  # (T, 4, 4)

# Load an image and depth map
frame_idx = 0
rgb = Image.open(seq_dir / "rgbs" / f"rgb_{frame_idx:05d}.jpg")
depth_npz = np.load(seq_dir / "depths" / f"depth_{frame_idx:05d}.npz")
depth = depth_npz["depth"]  # float16 array (H, W)

# Load scene info
with open(seq_dir / "scene_info.json", "r") as f:
    scene_info = json.load(f)
print(scene_info)
```
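The visibility flags pair naturally with the trajectory arrays: a frame's visible points can be selected with a boolean mask. A small sketch with toy arrays in place of the loaded data:

```python
import numpy as np

# Toy stand-ins for trajs_2d (T, N, 2) and visibilities (T, N).
T, N = 3, 5
trajs_2d = np.arange(T * N * 2, dtype=np.float16).reshape(T, N, 2)
vis = np.array([[1, 0, 1, 1, 0]] * T, dtype=np.float16)

# Build a boolean mask from the float flags, then index one frame's points.
frame_idx = 0
visible_pts = trajs_2d[frame_idx][vis[frame_idx] > 0.5]  # (num_visible, 2)
```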
## Citation
Please cite the original PointOdyssey dataset when using the converted data:
```bibtex
@inproceedings{zheng2023pointodyssey,
  title={PointOdyssey: A Large-Scale Synthetic Dataset for Long-Term Point Tracking},
  author={Zheng, Yang and Harley, Adam W. and Shen, Bokui and Wetzstein, Gordon and Guibas, Leonidas J.},
  booktitle={ICCV},
  year={2023}
}
```