---
tags:
  - ml-intern
---

LightweightMR: Mesh from Point Cloud (Beginner Guide)

TL;DR: Give it a .ply / .pcd / .xyz file full of 3D points, and it spits out a nice triangle mesh (.ply or .obj).

It only depends on PyTorch, NumPy, and SciPy. No CUDA compilation, no Open3D, no CGAL.


🧭 What does this actually do?

Imagine you have a laser scan of a statue: millions of dots floating in space. This code turns those dots into a solid surface made of triangles.

It does this in two stages:

Point Cloud          Stage 1: Learn SDF           Stage 2: Mesh
(just dots)    →   (learn a "distance field")  →  (triangles!)

Stage 1: Learning a Distance Field (SDF)

The code trains a small neural network to answer:

"For any random 3D point, how far is it from the surface, and which side is it on?"

Positive = outside, negative = inside, zero = exactly on the surface.

It learns this purely from your point cloud: no camera images, no manual labels.
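
To make the sign convention concrete, here is a tiny self-contained example using the closed-form SDF of a unit sphere; the repo's sdfnet.py learns an equivalent function with a small neural network instead of writing it down:

import numpy as np

def sphere_sdf(points, radius=1.0):
    # Signed distance to a sphere centred at the origin:
    # positive outside, negative inside, zero exactly on the surface.
    return np.linalg.norm(points, axis=-1) - radius

pts = np.array([[2.0, 0.0, 0.0],    # outside the sphere
                [0.5, 0.0, 0.0],    # inside the sphere
                [1.0, 0.0, 0.0]])   # exactly on the surface
print(sphere_sdf(pts))              # [ 1.  -0.5  0. ]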

Stage 2: Building the Mesh

Now that the network knows inside vs. outside, the code (see the simplified sketch after this list):

  1. Sprinkles candidate vertices near the surface
  2. Uses another tiny network to nudge them onto high-detail areas (curvature)
  3. Projects them exactly onto the zero-distance surface
  4. Builds a 3D Delaunay triangulation (like connecting dots with tetrahedra)
  5. Labels each tetrahedron as "inside" or "outside"
  6. The walls between inside/outside are your surface → extracted as triangles
  7. Cleans up non-manifold edges by adding midpoints
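
To make steps 3–6 less abstract, here is a heavily simplified sketch. It is not the repo's actual meshing.py (which also does the --k-samples voting from the CLI table below and the step-7 clean-up); it assumes you already have NumPy-friendly sdf(points) and sdf_grad(points) callables:

import numpy as np
from itertools import combinations
from scipy.spatial import Delaunay

def project_to_surface(x, sdf, sdf_grad, steps=5):
    # Step 3: Newton-style walk along the SDF gradient until sdf(x) is ~0.
    for _ in range(steps):
        g = sdf_grad(x)
        x = x - sdf(x)[:, None] * g / (np.sum(g * g, axis=-1, keepdims=True) + 1e-12)
    return x

def extract_surface(verts, sdf):
    # Step 4: 3D Delaunay triangulation -> tetrahedra, shape (T, 4) of vertex indices.
    tets = Delaunay(verts).simplices
    # Step 5: label each tetrahedron by the SDF sign at its centroid
    # (one sample per tet here; the real code votes over --k-samples points).
    inside = sdf(verts[tets].mean(axis=1)) < 0
    # Step 6: collect every triangular wall and remember which tets own it.
    owners = {}
    for t, tet in enumerate(tets):
        for face in combinations(sorted(tet), 3):
            owners.setdefault(face, []).append(inside[t])
    # A wall shared by one inside and one outside tetrahedron lies on the surface.
    # Walls on the convex hull have only one owner and are skipped.
    faces = [f for f, labels in owners.items()
             if len(labels) == 2 and labels[0] != labels[1]]
    return np.asarray(faces)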

🚀 Quick Start (5 minutes)

1. Install

pip install torch numpy scipy

That's it. No C++ compilers, no 2 GB dependencies.

2. Try on a synthetic sphere (no data needed)

We included a tiny script that makes a fake point cloud so you can see it work immediately:

# Download / clone the repo files, then:
python example/make_sphere.py        # creates example/sphere.ply (3000 points)
python -m lightweightmr -i example/sphere.ply -o example/sphere_mesh.ply --device cpu

The second command will:

  • Print progress bars for SDF training (~20k steps)
  • Print progress bars for vertex generation (~8k steps)
  • Save example/sphere_mesh.ply

On CPU this takes ~20–40 minutes. On a CUDA GPU (--device cuda) it's ~2–4 minutes.
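
If your copy is missing example/make_sphere.py, any point cloud sampled from a closed surface works as a test input. Here is a minimal stand-in that writes ~3000 points on a unit sphere as ASCII PLY (the bundled script may differ in details):

import os
import numpy as np

os.makedirs("example", exist_ok=True)
n = 3000
pts = np.random.normal(size=(n, 3))
pts /= np.linalg.norm(pts, axis=1, keepdims=True)   # push every point onto the unit sphere

with open("example/sphere.ply", "w") as f:
    f.write("ply\nformat ascii 1.0\n")
    f.write(f"element vertex {n}\n")
    f.write("property float x\nproperty float y\nproperty float z\n")
    f.write("end_header\n")
    for x, y, z in pts:
        f.write(f"{x:.6f} {y:.6f} {z:.6f}\n")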

3. Use your own scan

python -m lightweightmr -i myscan.ply -o mymesh.ply --device cpu

Supported inputs: .ply (ASCII or binary), .pcd, .xyz
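
Not sure what an .xyz file looks like? By convention it is plain text with one point per line, three whitespace-separated coordinates (some exporters append normals as extra columns), for example:

0.132 -0.504  0.851
0.923  0.001 -0.385
-0.611 0.274  0.742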


📂 What files do I need?

You only need the lightweightmr/ folder (9 Python files). Nothing else.

lightweightmr/
  __init__.py          # package marker
  __main__.py          # CLI (the command you run)
  optimize.py          # the two-stage runner (Stage 1 + Stage 2)
  sdfnet.py            # neural network for distance field
  vgnet.py             # neural network for vertex placement
  losses.py            # math that teaches the networks
  meshing.py           # Delaunay + surface extraction
  embedder.py          # positional encoding (helps the networks)
  io_utils.py          # loading PLY/PCD/XYZ, saving meshes

βš™οΈ CLI Options Explained

| Flag | Default | What it means |
|---|---|---|
| -i / --input | required | Your point cloud file |
| -o / --output | required | Output mesh file (.ply or .obj) |
| --device | cpu | cpu or cuda. GPU is much faster. |
| --sdf-iters | 20000 | How long to train the distance field. More = better quality on noisy scans. |
| --vg-iters | 8000 | How long to train vertex placement. |
| --vertices | 3400 | Target number of vertices in the final mesh. More = finer detail, slower. |
| --k-samples | 21 | Samples per tetrahedron when labeling inside/outside. Higher = cleaner mesh, slower. |
| --save-freq | 2000 | Save a checkpoint every N iterations (so you can resume). |
| --resume-sdf | (none) | Path to a .pth checkpoint to skip Stage 1. |

Common recipes

Fast preview (lower quality):

python -m lightweightmr -i scan.ply -o mesh.ply --sdf-iters 5000 --vg-iters 2000 --vertices 800

High quality (slower):

python -m lightweightmr -i scan.ply -o mesh.ply --sdf-iters 40000 --vg-iters 12000 --vertices 10000

Resume after Stage 1 crash:

python -m lightweightmr -i scan.ply -o mesh.ply --resume-sdf output/sdf_checkpoints/sdf_final.pth

🐍 Python API (for scripts)

If you want to call it from your own code instead of the command line:

from lightweightmr.optimize import Runner

runner = Runner(
    pointcloud_path="myscan.ply",
    out_dir="./output",
    device="cpu",          # or "cuda"
    sdf_iters=20_000,
    vg_iters=8_000,
    vertices_size=3_400,
)

# Run both stages
vertices, faces = runner.run(mesh_path="mymesh.ply")

# Or run stages separately:
runner.train_sdf()                       # Stage 1
verts = runner.train_vg()                # Stage 2
v, f = runner.generate_mesh(verts, save_path="mymesh.ply")

🧪 Understanding the Output

After running, you'll see a new folder ./output/ with:

output/
  sdf_checkpoints/
    sdf_final.pth          # trained distance field (can resume from this)

And your chosen output file (-o mesh.ply) contains the mesh.

You can view .ply meshes with:

  • Blender (free, drag & drop)
  • MeshLab (free)
  • Windows 3D Viewer

πŸ› οΈ Troubleshooting

| Problem | Likely cause | Fix |
|---|---|---|
| Takes forever | CPU training | Use --device cuda if you have a GPU |
| Output mesh has holes | Not enough vertices | Increase --vertices |
| Noisy / wobbly mesh | Noisy input + too few SDF iterations | Increase --sdf-iters to 30000+ |
| ModuleNotFoundError | Missing dependency | pip install torch numpy scipy |
| ValueError on .ply | Binary PLY variant we don't parse | Convert to ASCII PLY in MeshLab/Blender |

📖 How is this different from the original paper?

The original CVPR 2025 code is powerful but heavy; it needs:

  • CUDA-compiled hash encoders
  • CGAL (C++ geometry library)
  • Open3D, torch_scatter, spconv, fpsample, mcubes, trimesh

This reimplementation replaces all of that with pure Python + PyTorch + SciPy:

| Original | This version |
|---|---|
| CUDA hash grid | Positional encoding (slower, but nothing to compile) |
| PointTransformerV3 vertex generator | Simple MLP (faster, no extra deps) |
| CGAL Delaunay + meshing | SciPy Delaunay + our own surface extractor |
| C++ KDTree | SciPy KDTree |

Trade-off: The SDF stage may need a few more iterations on very detailed scans, but the output quality is comparable for most shapes.
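
For the curious: the "positional encoding" that replaces the CUDA hash grid is the standard sin/cos frequency embedding. A minimal sketch of the idea (embedder.py in this repo may differ in the exact frequencies and options):

import torch

def positional_encoding(xyz, num_freqs=6):
    # Lift (N, 3) coordinates into sin/cos features at octave frequencies so a
    # small MLP can represent high-frequency surface detail without a hash grid.
    freqs = 2.0 ** torch.arange(num_freqs, dtype=xyz.dtype)           # 1, 2, 4, ...
    angles = xyz[..., None] * freqs                                    # (N, 3, num_freqs)
    feats = torch.cat([torch.sin(angles), torch.cos(angles)], dim=-1)  # (N, 3, 2*num_freqs)
    return torch.cat([xyz, feats.flatten(start_dim=-2)], dim=-1)       # (N, 3 + 6*num_freqs)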


📚 Citation

If you use this, cite the original paper:

@inproceedings{zhang2025high,
  title={High-Fidelity Lightweight Mesh Reconstruction from Point Clouds},
  author={Zhang, Chen and Wang, Wentao and Li, Ximeng and Liao, Xinyao and Su, Wanjuan and Tao, Wenbing},
  booktitle={CVPR},
  pages={11739--11748},
  year={2025}
}

License: MIT (reimplementation). Original paper and code © authors.

Generated by ML Intern

This model repository was generated by ML Intern, an agent for machine learning research and development on the Hugging Face Hub.
