# DPT (Decoupled Point Transformer) Dataset & Weights
This repository contains the processed datasets and pre-trained checkpoints for DPT (Decoupled Point Transformer), a 3D semantic segmentation framework built on PTv3 that integrates deep 2D visual priors from DINO.
Developed by Bole Zhang (University of Bristol).
## Overview
DPT addresses the semantic gap in 3D point clouds by decoupling geometric features from deep 2D cognitive priors. This dataset includes our 1031-D (or 1025-D) aligned per-point features, designed specifically for urban remote sensing.
Key Features:
- PTv3 Backbone: leverages state-of-the-art serialized attention.
- DINOv2/v3 Integration: each point is augmented with a 1024-dimensional visual descriptor.
- GCDM Module: a Geometric-Cognitive Decoupling Module for dynamic modal fusion.
- Top Performance: state-of-the-art results on the SensatUrban and STPLS3D benchmarks.
## Data Structure
The processed data is stored in `.npy` or `.pth` chunks (50 m stride for SensatUrban). Each point carries the following features:
| Feature Name | Dims | Description |
|---|---|---|
| `coord` | 3 | Normalized XYZ coordinates |
| `color` | 3 | RGB values (0–255) |
| `rel_z` | 1 | Global relative elevation (physical prior) |
| `dino_feat` | 1024 | Deep semantic priors from DINOv2/v3 |
| **Total** | **1031** | Pure aligned feature vector |
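As a quick sanity check, the aligned vector can be assembled and sliced apart like this. This is a minimal sketch: the concatenation order follows the table above, but the exact on-disk layout of the released chunks may differ.

```python
import numpy as np

# Toy point count; real chunks contain far more points.
N = 8
coord = np.random.rand(N, 3).astype(np.float32)                # normalized XYZ
color = np.random.randint(0, 256, (N, 3)).astype(np.float32)   # RGB (0-255)
rel_z = np.random.rand(N, 1).astype(np.float32)                # relative elevation
dino_feat = np.random.rand(N, 1024).astype(np.float32)         # DINO descriptor

# Concatenate into the 1031-D aligned vector (order assumed from the table).
feats = np.concatenate([coord, color, rel_z, dino_feat], axis=1)
assert feats.shape == (N, 1031)

# Slicing the columns recovers each component.
recovered_dino = feats[:, 7:]
assert np.allclose(recovered_dino, dino_feat)
```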
## Usage
### Environment Setup
This project requires the Pointcept framework and spconv 2.x.
```shell
# Clone the repository
git clone https://github.com/zbole/DPT.git
cd DPT
```
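Once the data is downloaded, a processed `.npy` chunk can be inspected with plain NumPy. This is a hedged sketch: the filename and the per-row layout (`coord`, `color`, `rel_z`, `dino_feat`) are assumptions based on the table above, and a tiny stand-in file is created here so the snippet is self-contained.

```python
import numpy as np

# Create a tiny stand-in chunk (real chunks come from the preprocessing pipeline).
demo = np.random.rand(100, 1031).astype(np.float32)
np.save("demo_chunk.npy", demo)

# Load a chunk and peel off the DINO descriptors (column layout assumed).
chunk = np.load("demo_chunk.npy")  # shape: (num_points, 1031)
dino_feat = chunk[:, 7:]           # last 1024 dims
print(chunk.shape, dino_feat.shape)
```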