---
tags:
  - ml-intern
---

# LightweightMR: Pure-Python Mesh Reconstruction

A pure-Python reimplementation of "High-Fidelity Lightweight Mesh Reconstruction from Point Clouds" (CVPR 2025 Highlight, Zhang et al.).

- **Input:** PLY / PCD / XYZ point cloud
- **Output:** Triangle mesh (PLY / OBJ)


## Quick Start

```bash
# Install
pip install torch numpy scipy

# Run
python -m lightweightmr -i myscan.ply -o mesh.ply
```

Or use the Python API:

```python
from lightweightmr.optimize import Runner

runner = Runner("myscan.ply", out_dir="./output", device="cpu")
v, f = runner.run(mesh_path="mesh.ply")
print(f"Mesh: {len(v)} vertices, {len(f)} faces")
```

## Two-Stage Pipeline

1. **SDF Learning** - Train a coordinate MLP with positional encoding to fit an implicit signed distance field to the point cloud.
2. **Vertex Generation + Delaunay Meshing**
   - Sample surface queries using SDF gradient projection.
   - Train a vertex generator (MLP-only, no PointTransformerV3) to displace initial FPS samples.
   - Move vertices to the learned SDF surface.
   - Build a 3D Delaunay triangulation (`scipy.spatial.Delaunay`).
   - Label tetrahedra as inside/outside by sampling the SDF.
   - Extract the surface as facets between differently-labeled cells.
   - Post-process with midpoint-vertex insertion to fix non-manifold edges.
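The Delaunay / labeling / facet-extraction steps above can be sketched with scipy alone. Here an analytic `sphere_sdf` stands in for the learned network, and labeling each tetrahedron by its centroid is an illustrative simplification of what `meshing.py` does:

```python
# Sketch of Delaunay meshing, inside/outside labeling, and surface
# facet extraction. `sphere_sdf` is a stand-in for the trained SDF MLP.
import numpy as np
from scipy.spatial import Delaunay

def sphere_sdf(p):
    # Analytic SDF of the unit sphere (negative inside, positive outside).
    return np.linalg.norm(p, axis=-1) - 1.0

rng = np.random.default_rng(0)
pts = rng.uniform(-1.5, 1.5, size=(400, 3))    # points around the surface

tri = Delaunay(pts)                            # 3D Delaunay -> tetrahedra
centroids = pts[tri.simplices].mean(axis=1)    # one sample point per cell
inside = sphere_sdf(centroids) < 0.0           # label each tetrahedron

# A surface facet separates an inside cell from an outside (or hull) cell.
# In scipy, neighbor k of a simplex is the one opposite vertex k.
faces = []
for t, nbrs in enumerate(tri.neighbors):
    if not inside[t]:
        continue
    for k, n in enumerate(nbrs):
        if n == -1 or not inside[n]:
            faces.append(np.delete(tri.simplices[t], k))  # facet = 3 verts
faces = np.asarray(faces)
```

The extracted `faces` index into `pts` and approximate the sphere surface; the real pipeline additionally snaps vertices to the zero level set and repairs non-manifold edges.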

## Differences from Original

| Feature | Original | This Reimplementation |
| --- | --- | --- |
| Hash encoding | CUDA hash grid + triplane | Positional encoding only (no CUDA compilation) |
| Vertex generator | PointTransformerV3 + MLP | MLP-only (faster, no spconv/torch_scatter) |
| KDTree | C++ libkdtree | `scipy.spatial.KDTree` |
| Delaunay meshing | CGAL C++ binary | `scipy.spatial.Delaunay` |
| Mesh extraction | CGAL `create_mesh` | Pure-Python facet extraction |
| Dependencies | Open3D, CGAL, boost, fpsample, mcubes, trimesh, torch_scatter, spconv | Only torch, numpy, scipy |

**Trade-off:** Without the original hash encoding, the SDF stage converges slightly slower and may need a few more iterations on highly detailed scans. Meshing quality is comparable for typical genus-0/1 shapes.


## CLI Reference

```bash
python -m lightweightmr -i INPUT.ply -o OUTPUT.ply [options]
```

```text
Options:
  --device                cpu | cuda        (default: cpu)
  --sdf-iters             20000             SDF training iterations
  --vg-iters              8000              Vertex generator iterations
  --sdf-lr                0.001             SDF learning rate
  --vg-lr                 0.001             Vertex generator learning rate
  --sdf-batch             5000              Batch size for SDF queries
  --vertices              3400              Target vertex count
  --update-size           5                 Curriculum update steps
  --update-ratio          1.2               Vertex count growth ratio
  --k-samples             21                Interior samples per tetrahedron
  --multires              8                 Positional encoding frequencies
  --project-sdf-level     0.0               Surface SDF level
  --save-freq             2000              Checkpoint frequency
  --resume-sdf            PATH.pth          Resume from SDF checkpoint
```

## Package Structure

```text
lightweightmr/
  __init__.py
  embedder.py      - Positional encoding (NeRF-style)
  sdfnet.py        - SDF MLP network
  vgnet.py         - Vertex generator MLP
  losses.py        - Loss functions (Chamfer, eikonal, divergence, curvature, normals)
  meshing.py       - Delaunay + SDF labeling + surface extraction + midpoint fix
  io_utils.py      - PLY/PCD/XYZ loaders, mesh exporters, FPS, normal estimation
  optimize.py      - Two-stage Runner (SDF then VG + meshing)
  __main__.py      - CLI entry point
```
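As one concrete example of the losses listed for `losses.py`, the eikonal term regularizes the SDF so its gradient has unit norm everywhere. A minimal sketch with a tiny stand-in MLP (the real network and loss weighting live in `sdfnet.py` / `losses.py`):

```python
# Eikonal regularizer: penalize (|∇f(x)| - 1)^2 at random query points.
import torch

sdf = torch.nn.Sequential(          # stand-in for the SDF MLP
    torch.nn.Linear(3, 64),
    torch.nn.Softplus(beta=100),
    torch.nn.Linear(64, 1),
)

x = torch.rand(128, 3, requires_grad=True)   # random query points
d = sdf(x)
# create_graph=True keeps the gradient differentiable for backprop.
(grad,) = torch.autograd.grad(d.sum(), x, create_graph=True)
eikonal_loss = ((grad.norm(dim=-1) - 1.0) ** 2).mean()
eikonal_loss.backward()                      # usable inside the training loop
```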

## Tips

- **CPU-only is slow** - SDF training on CPU with 20k iterations takes ~30–60 min depending on your machine. If you have a GPU, use `--device cuda`.
- **Vertex count** - Increase `--vertices` for finer detail (slower meshing); decrease it for faster, cleaner low-poly results.
- **Noise** - If your point cloud is noisy, increase `--sdf-iters` to 30k+ and use a small `--project-sdf-level` (e.g. 0.001) to pull slightly inward.
- **Large clouds** - The code automatically subsamples to ~1/60th of the input points for the SDF training set. For very large scans, reduce `--queries-size`.
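Both the subsampling above and the pipeline's initial vertex samples rely on farthest point sampling (FPS). A minimal NumPy sketch, illustrative only (the package ships its own version in `io_utils.py`); the O(N·K) loop is fine at these sizes:

```python
# Farthest point sampling: greedily pick the point farthest from the
# already-selected set, giving a well-spread subset of the cloud.
import numpy as np

def farthest_point_sample(points: np.ndarray, k: int, seed: int = 0) -> np.ndarray:
    rng = np.random.default_rng(seed)
    idx = np.empty(k, dtype=np.int64)
    idx[0] = rng.integers(len(points))            # random start point
    dist = np.linalg.norm(points - points[idx[0]], axis=1)
    for i in range(1, k):
        idx[i] = int(dist.argmax())               # farthest from chosen set
        dist = np.minimum(
            dist, np.linalg.norm(points - points[idx[i]], axis=1)
        )                                         # update min-distance field
    return idx

cloud = np.random.default_rng(1).random((1000, 3))
sel = farthest_point_sample(cloud, 32)            # 32 well-spread indices
```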

## Citation

```bibtex
@inproceedings{zhang2025high,
  title={High-Fidelity Lightweight Mesh Reconstruction from Point Clouds},
  author={Zhang, Chen and Wang, Wentao and Li, Ximeng and Liao, Xinyao and Su, Wanjuan and Tao, Wenbing},
  booktitle={CVPR},
  pages={11739--11748},
  year={2025}
}
```

License: MIT (reimplementation). Original paper and code © authors.

## Generated by ML Intern

This model repository was generated by ML Intern, an agent for machine learning research and development on the Hugging Face Hub.

### Usage

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "bdck/lightweightmr"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)
```

For non-causal architectures, replace `AutoModelForCausalLM` with the appropriate `AutoModel` class.