# Point2Mesh: A Self-Prior for Deformable Meshes
Pure Python/PyTorch reimplementation of **[Point2Mesh (SIGGRAPH 2020)](https://arxiv.org/abs/2005.11084)** by Hanocka et al.
**Input:** a point cloud (`.ply`, `.pcd`, `.xyz`, `.obj`)
**Output:** a shrink-wrapped triangle mesh (`.obj`, `.ply`, `.stl`)
No training data is needed: the method optimises a single CNN per shape at inference time, exploiting the network's architectural bias toward self-similar structure as a shape prior.
---
## Quick Start
```bash
# Install
pip install torch numpy scipy
git clone https://huggingface.co/bdck/point2mesh
cd point2mesh
# Run
python -m point2mesh --input my_cloud.ply --output mesh.obj
# Quick test (fast, lower quality)
python -m point2mesh -i cloud.ply -o mesh.obj --n-levels 2 --iters 200 --init-faces 500
# Full quality
python -m point2mesh -i cloud.ply -o mesh.obj --n-levels 5 --iters 1500 --max-faces 40000 --device cuda
```
## How It Works
```
Point Cloud --> Convex Hull --> [ CNN optimisation ] --> Shrink-wrapped Mesh
                 (coarse)         (coarse-to-fine)           (detailed)
```
1. **Initialise** a coarse mesh from the convex hull of the input points
2. **Optimise** a MeshCNN U-Net to deform the mesh surface toward the point cloud:
- The CNN input is fixed random noise (not the geometry)
- The CNN outputs per-vertex displacements
- Losses: bidirectional Chamfer distance + beam-gap loss + normal alignment
3. **Remesh** (subdivide + decimate) and repeat at finer resolution
4. **Export** the final mesh
The key insight is the **self-prior**: the CNN architecture itself acts as a regulariser, preferring coherent, self-similar deformations over noise. No external training data is used.
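The fixed-noise loop in step 2 can be sketched with a toy stand-in for the MeshCNN. The real network convolves over mesh edges; here a small MLP and a plain Chamfer loss (names and shapes are illustrative, not the package's API) show the core idea of optimising network weights, not vertex positions directly:

```python
import torch

def chamfer(a, b):
    """Bidirectional Chamfer distance between point sets a [N,3] and b [M,3]."""
    d = torch.cdist(a, b)  # [N, M] pairwise distances
    return d.min(dim=1).values.mean() + d.min(dim=0).values.mean()

torch.manual_seed(0)
target = torch.rand(200, 3)   # stand-in "point cloud"
verts = torch.rand(50, 3)     # coarse initial mesh vertices
noise = torch.randn(50, 8)    # fixed random input -- never the geometry
net = torch.nn.Sequential(torch.nn.Linear(8, 32), torch.nn.ReLU(),
                          torch.nn.Linear(32, 3))
opt = torch.optim.Adam(net.parameters(), lr=1e-2)

loss0 = chamfer(verts, target).item()
for _ in range(200):
    opt.zero_grad()
    deformed = verts + net(noise)   # network outputs per-vertex displacements
    loss = chamfer(deformed, target)
    loss.backward()
    opt.step()
print(loss0, "->", loss.item())
```

Because only the network weights are optimised, the architecture's bias toward coherent outputs regularises the deformation; that is the self-prior in miniature.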
## CLI Reference
```
python -m point2mesh [OPTIONS]
Required:
--input, -i Input point cloud (.ply, .pcd, .xyz, .obj)
--output, -o Output mesh (.obj, .ply, .stl)
Optimisation:
--n-levels Coarse-to-fine levels (default: 4)
--iters Iterations per level (default: 1000)
--lr Learning rate (default: 0.0002)
--samples-start Surface samples at iter 0 (default: 15000)
--samples-end Surface samples at final iter (default: 50000)
Mesh resolution:
--init-faces Initial mesh face count (default: 2000)
--face-growth Face multiplier between levels (default: 1.5)
--max-faces Stop subdividing above this (default: 20000)
Loss weights:
--lambda-beam Beam-gap loss weight (default: 1.0)
--lambda-normal Normal alignment weight (default: 0.1)
--beam-epsilon Beam cylinder radius (default: 0.5)
Network:
--in-channels Random input features per edge (default: 6)
--enc-channels Encoder widths (default: 64 128 256 256)
Memory:
--part-threshold Use PartMesh above this face count (default: 10000)
--n-parts Spatial grid res for PartMesh (default: 2)
Output:
--device torch device (auto-detect if omitted)
--save-intermediates Save mesh after each level
--output-dir Directory for intermediates (default: .)
--log-every Print loss every N iters (default: 50)
--verbose, -v Debug logging
```
## Python API
```python
from point2mesh.optimize import run_point2mesh, Point2MeshConfig
cfg = Point2MeshConfig(
n_levels=4,
iters_per_level=1000,
init_faces=2000,
max_faces=20000,
device="cuda",
)
run_point2mesh("cloud.ply", "mesh.obj", cfg)
```
### With progress callback
```python
def on_progress(level, iteration, loss):
print(f"Level {level}, iter {iteration}: loss = {loss:.6f}")
run_point2mesh("cloud.ply", "mesh.obj", cfg, progress_callback=on_progress)
```
## Architecture
```
point2mesh/
├── __init__.py    # Package root
├── __main__.py    # CLI entry point
├── mesh.py        # Mesh data structure + edge topology + PartMesh
├── layers.py      # MeshCNN conv / pool / unpool
├── network.py     # Point2Mesh U-Net (encoder-decoder)
├── losses.py      # Chamfer, beam-gap, normal alignment, surface sampling
├── optimize.py    # Main optimisation loop
└── io_utils.py    # PCD/PLY/XYZ/OBJ loaders, mesh exporters, remeshing
```
### Module Details
| Module | Description |
|--------|-------------|
| `mesh.py` | Half-edge-style mesh with GEMM adjacency for MeshCNN. Builds edgeβ4-neighbor topology. `PartMesh` splits large meshes into spatial sub-grids. |
| `layers.py` | **MeshConv**: edge convolution with symmetric neighbor aggregation `[e, \|a−c\|, a+c, \|b−d\|, b+d]`. **MeshPool**: edge collapse by L2-norm priority. **MeshUnpool**: topology restoration from stored history. |
| `network.py` | U-Net encoder-decoder on edges. Input: fixed random noise. Output: per-edge vertex displacements `[N_e, 2, 3]`. Output head initialised to zero (no initial displacement). |
| `losses.py` | Bidirectional Chamfer distance (batched for large clouds). Beam-gap loss with ε-cylinder and mutual k-NN skip. Unoriented normal alignment `1 − \|n₁·n₂\|`. Differentiable area-weighted surface sampling. |
| `optimize.py` | Full coarse-to-fine loop. Re-initialises network + noise each level. Linear sample-count ramp. Remeshing (subdivide → smooth → decimate) between levels. |
| `io_utils.py` | Zero-dependency PCD/PLY/XYZ/OBJ loaders (binary + ASCII). OBJ/PLY/STL mesh writers. Convex hull initialisation. PCA-based normal estimation. Midpoint subdivision, Laplacian smoothing, greedy edge-collapse decimation. |
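The unoriented normal term from `losses.py` is small enough to show directly. This standalone sketch (not the package's exact code) makes the flip-invariance explicit: a normal and its negation describe the same unoriented surface, so neither should be penalised:

```python
import torch
import torch.nn.functional as F

def unoriented_normal_loss(n_pred, n_target):
    """Unoriented normal alignment 1 - |n1 . n2|, insensitive to normal flips."""
    n_pred = F.normalize(n_pred, dim=-1)
    n_target = F.normalize(n_target, dim=-1)
    return (1.0 - (n_pred * n_target).sum(dim=-1).abs()).mean()

n = torch.tensor([[0.0, 0.0, 1.0]])
m = torch.tensor([[1.0, 0.0, 0.0]])
assert unoriented_normal_loss(n, -n).item() == 0.0  # flipped normal: no penalty
```

Perpendicular normals give the maximum loss of 1, so the term pushes predicted face normals into the line spanned by the estimated point-cloud normals without requiring consistent orientation.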
## Dependencies
Only **three** packages:
- `torch >= 2.0`: autograd, GPU acceleration
- `numpy >= 1.24`: array operations
- `scipy >= 1.10`: convex hull, KD-tree for normal estimation
No Open3D, no PyTorch3D, no trimesh, no pymeshlab.
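As a sanity check that scipy alone covers the geometry utilities, here is how the two scipy features named above are typically used; this is an illustrative sketch of convex-hull initialisation and PCA normal estimation, not the package's internal code:

```python
import numpy as np
from scipy.spatial import ConvexHull, cKDTree

pts = np.random.default_rng(0).random((500, 3))

# Convex hull -> genus-0 initial mesh (vertex positions + triangle indices)
hull = ConvexHull(pts)
verts = pts[hull.vertices]   # positions of hull vertices
faces = hull.simplices       # triangles, as indices into `pts`

# KD-tree + PCA -> estimated normal at one point: the eigenvector of the
# local covariance with the smallest eigenvalue (eigh sorts ascending)
tree = cKDTree(pts)
_, idx = tree.query(pts[0], k=16)
nbrs = pts[idx] - pts[idx].mean(axis=0)
normal = np.linalg.eigh(nbrs.T @ nbrs)[1][:, 0]
```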
## Performance Tips
| Scenario | Recommendation |
|----------|---------------|
| Quick preview | `--n-levels 2 --iters 200 --init-faces 500` |
| Standard quality | Default settings (4 levels, 1000 iters) |
| High quality | `--n-levels 5 --iters 1500 --max-faces 40000` |
| Large point clouds (>100k pts) | Use GPU (`--device cuda`) |
| High-res meshes (>10k faces) | PartMesh auto-activates; tune `--n-parts 3` if OOM |
| CPU only | Works, but ~10× slower than GPU |
## Differences from Original Implementation
| Aspect | Original | This reimplementation |
|--------|----------|----------------------|
| Remeshing | RWM (Robust Watertight Manifold, external C++ binary) | Midpoint subdivision + Laplacian smooth + greedy decimation |
| Mesh pooling | Full half-edge data structure with manifold guards | Simplified edge collapse with adjacency redirect |
| Dependencies | PyTorch, Open3D, numpy, scipy, CUDA ops | PyTorch, numpy, scipy only |
| Initial mesh (genus > 0) | Alpha shape → coarse RWM | Convex hull (genus-0 assumption) |
The main simplification is the remeshing step: the original uses the external [Manifold](https://github.com/hjwdzh/Manifold) binary to guarantee watertight, non-self-intersecting output between levels. This reimplementation uses pure-Python subdivision + decimation, which works well for most shapes but may produce self-intersections on complex topology.
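The subdivision half of the pure-Python remeshing pipeline is easy to illustrate. This is a sketch of midpoint subdivision (each triangle split into four, with shared edge midpoints deduplicated), not the exact `io_utils.py` code:

```python
import numpy as np

def midpoint_subdivide(verts, faces):
    """One round of midpoint subdivision: every triangle becomes four."""
    verts = [np.asarray(v, dtype=float) for v in verts]
    cache = {}  # undirected edge (i, j) -> index of its midpoint vertex

    def mid(i, j):
        key = (min(i, j), max(i, j))
        if key not in cache:            # create the midpoint only once per edge
            cache[key] = len(verts)
            verts.append((verts[i] + verts[j]) / 2.0)
        return cache[key]

    new_faces = []
    for a, b, c in faces:
        ab, bc, ca = mid(a, b), mid(b, c), mid(c, a)
        new_faces += [(a, ab, ca), (ab, b, bc), (ca, bc, c), (ab, bc, ca)]
    return np.array(verts), np.array(new_faces)

# A single triangle: 3 verts / 1 face becomes 6 verts / 4 faces
v = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0], [0.0, 1.0, 0.0]])
f = np.array([[0, 1, 2]])
v2, f2 = midpoint_subdivide(v, f)
```

Because midpoints are cached per undirected edge, adjacent triangles share their new vertices and the mesh stays connected; the smoothing and decimation passes then run on this denser mesh.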
## Citation
```bibtex
@article{hanocka2020point2mesh,
title = {Point2Mesh: A Self-Prior for Deformable Meshes},
author = {Hanocka, Rana and Metzer, Gal and Giryes, Raja and Cohen-Or, Daniel},
journal = {ACM Transactions on Graphics (TOG)},
volume = {39},
number = {4},
year = {2020},
publisher = {ACM}
}
```