---
tags:
- ml-intern
---
# LightweightMR: Mesh from Point Cloud (Beginner Guide)
> **TL;DR:** Give it a `.ply` / `.pcd` / `.xyz` file full of 3D points, and it spits out a nice triangle mesh (`.ply` or `.obj`).
>
> Only depends on **PyTorch + NumPy + SciPy**. No CUDA compiling, no Open3D, no CGAL.
---
## What does this actually do?
Imagine you have a laser scan of a statue: millions of dots floating in space. This code turns those dots into a **solid surface** made of triangles.
It does this in **two stages**:
```
Point Cloud         Stage 1: Learn SDF                 Stage 2: Mesh
(just dots)   -->   (learn a "distance field")   -->   (triangles!)
```
### Stage 1: Learning a Distance Field (SDF)
The code trains a small neural network to answer:
> *"For any random 3D point, how far is it from the surface, and which side is it on?"*
Positive = outside, negative = inside, zero = exactly on the surface.
It learns this purely from your point cloud β no camera images, no manual labels.
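The actual network lives in `sdfnet.py`; as a rough illustration only (the real architecture may differ), an SDF network is just a small MLP that maps a 3D point to one scalar:

```python
import torch
import torch.nn as nn

# Illustrative sketch -- the real network is in lightweightmr/sdfnet.py.
# An SDF network maps (x, y, z) to one scalar: the signed distance to
# the surface (+ outside, - inside, 0 on the surface).
class TinySDF(nn.Module):
    def __init__(self, hidden=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(3, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 1),  # one signed distance per query point
        )

    def forward(self, xyz):
        return self.net(xyz)

sdf = TinySDF()
points = torch.rand(5, 3)   # five random query points
print(sdf(points).shape)    # torch.Size([5, 1])
```

Training (in `losses.py`) then pushes this scalar toward zero at the input points and enforces SDF-like behavior elsewhere.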
### Stage 2: Building the Mesh
Now that the network knows inside vs. outside, the code:
1. Sprinkles candidate vertices near the surface
2. Uses another tiny network to nudge them onto high-detail areas (curvature)
3. Projects them exactly onto the zero-distance surface
4. Builds a **3D Delaunay triangulation** (like connecting dots with tetrahedra)
5. Labels each tetrahedron as "inside" or "outside"
6. The walls between inside/outside *are* your surface β extracted as triangles
7. Cleans up non-manifold edges by adding midpoints
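Steps 4-6 can be sketched with SciPy alone. This is a toy version with a hand-written sphere SDF standing in for the trained network; the real logic lives in `meshing.py` and may differ in detail:

```python
import numpy as np
from scipy.spatial import Delaunay

# Toy version of steps 4-6: triangulate, label each tetrahedron
# inside/outside via an SDF, and keep the triangular walls that
# separate an inside tet from an outside tet.
def sphere_sdf(p):                        # stand-in for the trained SDF
    return np.linalg.norm(p, axis=-1) - 1.0

rng = np.random.default_rng(0)
pts = rng.uniform(-1.5, 1.5, size=(400, 3))
tri = Delaunay(pts)                       # step 4: 3D Delaunay -> tetrahedra

centroids = pts[tri.simplices].mean(axis=1)
inside = sphere_sdf(centroids) < 0        # step 5: label each tetrahedron

# step 6: a face shared by one inside and one outside tet is surface.
# tri.neighbors[t][i] is the tet across the face opposite vertex i
# (-1 on the boundary, which never passes n > t).
faces = []
for t, nbrs in enumerate(tri.neighbors):
    for i, n in enumerate(nbrs):
        if n > t and inside[t] != inside[n]:
            faces.append(np.delete(tri.simplices[t], i))
faces = np.asarray(faces)
print(faces.shape)  # (num_surface_triangles, 3)
```

The real pipeline labels tetrahedra by sampling `--k-samples` points per tet through the trained SDF instead of a single centroid, which is why raising that flag cleans up the mesh.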
---
## Quick Start (5 minutes)
### 1. Install
```bash
pip install torch numpy scipy
```
That's it. No C++ compilers, no 2 GB dependencies.
### 2. Try on a synthetic sphere (no data needed)
We included a tiny script that makes a fake point cloud so you can see it work immediately:
```bash
# Download / clone the repo files, then:
python example/make_sphere.py # creates example/sphere.ply (3000 points)
python -m lightweightmr -i example/sphere.ply -o example/sphere_mesh.ply --device cpu
```
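If you don't have the repo handy, `example/make_sphere.py` only needs to sample points on a sphere and write an ASCII PLY. A rough equivalent (writing `sphere.ply` in the current directory; the real script may differ):

```python
import numpy as np

# Rough stand-in for example/make_sphere.py: sample 3000 points on a
# unit sphere and save them as an ASCII PLY point cloud.
rng = np.random.default_rng(42)
pts = rng.normal(size=(3000, 3))
pts /= np.linalg.norm(pts, axis=1, keepdims=True)  # project onto sphere

with open("sphere.ply", "w") as f:
    f.write("ply\nformat ascii 1.0\n")
    f.write(f"element vertex {len(pts)}\n")
    f.write("property float x\nproperty float y\nproperty float z\n")
    f.write("end_header\n")
    for x, y, z in pts:
        f.write(f"{x:.6f} {y:.6f} {z:.6f}\n")
```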
The second command will:
- Print progress bars for SDF training (~20k steps)
- Print progress bars for vertex generation (~8k steps)
- Save `example/sphere_mesh.ply`
**On CPU this takes ~20-40 minutes.** On a CUDA GPU (`--device cuda`) it's ~2-4 minutes.
### 3. Use your own scan
```bash
python -m lightweightmr -i myscan.ply -o mymesh.ply --device cpu
```
Supported inputs: `.ply` (ASCII or binary), `.pcd`, `.xyz`
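For reference, `.xyz` is the simplest of the three: one whitespace-separated point per line, which is why it loads with plain NumPy (parsing for all three formats lives in `io_utils.py`):

```python
import numpy as np

# An .xyz point cloud is just "x y z" per line, so NumPy reads it
# directly. Write a tiny 3-point example, then load it back.
with open("demo.xyz", "w") as f:
    f.write("0 0 0\n1 0 0\n0 1 0\n")

points = np.loadtxt("demo.xyz").reshape(-1, 3)
print(points.shape)  # (3, 3)
```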
---
## What files do I need?
You only need the `lightweightmr/` folder (9 Python files). Nothing else.
```
lightweightmr/
    __init__.py     # package marker
    __main__.py     # CLI (the command you run)
    optimize.py     # the two-stage runner (Stage 1 + Stage 2)
    sdfnet.py       # neural network for distance field
    vgnet.py        # neural network for vertex placement
    losses.py       # math that teaches the networks
    meshing.py      # Delaunay + surface extraction
    embedder.py     # positional encoding (helps the networks)
    io_utils.py     # loading PLY/PCD/XYZ, saving meshes
```
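`embedder.py` deserves a note: the original code uses a CUDA-compiled hash encoder, and this version swaps in NeRF-style positional encoding, which maps each coordinate to sines and cosines at several frequencies so the MLP can represent fine detail. A sketch of the idea (the real module may differ in frequency schedule and options):

```python
import torch

# NeRF-style positional encoding: each coordinate becomes sin/cos
# features at doubling frequencies, concatenated with the raw input.
# Sketch only -- the real implementation is in embedder.py.
def positional_encode(xyz, num_freqs=6):
    freqs = 2.0 ** torch.arange(num_freqs)   # 1, 2, 4, 8, ...
    angles = xyz[..., None] * freqs          # (N, 3, num_freqs)
    enc = torch.cat([torch.sin(angles), torch.cos(angles)], dim=-1)
    return torch.cat([xyz, enc.flatten(start_dim=-2)], dim=-1)

p = torch.rand(10, 3)
print(positional_encode(p).shape)  # 3 + 3*2*6 = 39 -> torch.Size([10, 39])
```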
---
## CLI Options Explained
| Flag | Default | What it means |
|------|---------|---------------|
| `-i` / `--input` | **required** | Your point cloud file |
| `-o` / `--output` | **required** | Output mesh file (`.ply` or `.obj`) |
| `--device` | `cpu` | `cpu` or `cuda`. GPU is much faster. |
| `--sdf-iters` | `20000` | How long to train the distance field. More = better quality on noisy scans. |
| `--vg-iters` | `8000` | How long to train vertex placement. |
| `--vertices` | `3400` | Target number of vertices in final mesh. More = finer detail, slower. |
| `--k-samples` | `21` | Samples per tetrahedron when labeling inside/outside. Higher = cleaner mesh, slower. |
| `--save-freq` | `2000` | Save a checkpoint every N iterations (so you can resume). |
| `--resume-sdf` | (none) | Path to a `.pth` checkpoint to skip Stage 1. |
### Common recipes
**Fast preview (lower quality):**
```bash
python -m lightweightmr -i scan.ply -o mesh.ply --sdf-iters 5000 --vg-iters 2000 --vertices 800
```
**High quality (slower):**
```bash
python -m lightweightmr -i scan.ply -o mesh.ply --sdf-iters 40000 --vg-iters 12000 --vertices 10000
```
**Resume after Stage 1 crash:**
```bash
python -m lightweightmr -i scan.ply -o mesh.ply --resume-sdf output/sdf_checkpoints/sdf_final.pth
```
---
## Python API (for scripts)
If you want to call it from your own code instead of the command line:
```python
from lightweightmr.optimize import Runner
runner = Runner(
    pointcloud_path="myscan.ply",
    out_dir="./output",
    device="cpu",           # or "cuda"
    sdf_iters=20_000,
    vg_iters=8_000,
    vertices_size=3_400,
)

# Run both stages
vertices, faces = runner.run(mesh_path="mymesh.ply")

# Or run stages separately:
runner.train_sdf()                # Stage 1
verts = runner.train_vg()         # Stage 2
v, f = runner.generate_mesh(verts, save_path="mymesh.ply")
```
---
## Understanding the Output
After running, you'll see a new folder `./output/` with:
```
output/
    sdf_checkpoints/
        sdf_final.pth    # trained distance field (can resume from this)
```
And your chosen output file (`-o mesh.ply`) contains the mesh.
You can view `.ply` meshes with:
- **Blender** (free, drag & drop)
- **MeshLab** (free)
- **Windows 3D Viewer**
---
## Troubleshooting
| Problem | Likely cause | Fix |
|---------|--------------|-----|
| Takes forever | CPU training | Use `--device cuda` if you have a GPU |
| Output mesh has holes | Not enough vertices | Increase `--vertices` |
| Noisy / wobbly mesh | Noisy input + too few SDF iters | Increase `--sdf-iters` to `30000+` |
| `ModuleNotFoundError` | Missing dependency | `pip install torch numpy scipy` |
| `ValueError` on `.ply` | Binary PLY variant we don't parse | Convert to ASCII PLY in MeshLab/Blender |
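Before converting a `.ply` that fails to load, you can check whether it is ASCII or binary by reading its header; the second header line declares the encoding:

```python
# The second line of a PLY header declares its encoding, e.g.
# "format ascii 1.0" or "format binary_little_endian 1.0".
# Write a tiny ASCII PLY here just to demonstrate the check.
with open("tiny.ply", "w") as f:
    f.write("ply\nformat ascii 1.0\nelement vertex 0\nend_header\n")

with open("tiny.ply", "rb") as f:
    f.readline()                  # magic line: "ply"
    fmt = f.readline().decode()   # the "format ..." line
print("ascii" in fmt)  # True
```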
---
## How is this different from the original paper?
The original CVPR 2025 code is **powerful but heavy**; it needs:
- CUDA-compiled hash encoders
- CGAL (C++ geometry library)
- Open3D, `torch_scatter`, `spconv`, `fpsample`, `mcubes`, `trimesh`
This reimplementation replaces all of that with pure Python + PyTorch + SciPy:
| Original | This version |
|----------|--------------|
| CUDA hash grid | Positional encoding (slower but no compile) |
| PointTransformerV3 vertex generator | Simple MLP (faster, no extra deps) |
| CGAL Delaunay + meshing | SciPy Delaunay + our own surface extractor |
| C++ KDTree | SciPy KDTree |
**Trade-off:** The SDF stage may need a few more iterations on very detailed scans, but the output quality is comparable for most shapes.
---
## Citation
If you use this, cite the original paper:
```bibtex
@inproceedings{zhang2025high,
title={High-Fidelity Lightweight Mesh Reconstruction from Point Clouds},
author={Zhang, Chen and Wang, Wentao and Li, Ximeng and Liao, Xinyao and Su, Wanjuan and Tao, Wenbing},
booktitle={CVPR},
pages={11739--11748},
year={2025}
}
```
---
License: MIT (reimplementation). Original paper and code © authors.
<!-- ml-intern-provenance -->
## Generated by ML Intern
This model repository was generated by [ML Intern](https://github.com/huggingface/ml-intern), an agent for machine learning research and development on the Hugging Face Hub.
- Try ML Intern: https://smolagents-ml-intern.hf.space
- Source code: https://github.com/huggingface/ml-intern
## Usage
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
model_id = "bdck/lightweightmr"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)
```
For non-causal architectures, replace `AutoModelForCausalLM` with the appropriate `AutoModel` class.