bdck committed · verified
Commit 12087e5 · 1 Parent(s): 9a33b78

Upload README.md

Files changed (1):
  1. README.md +141 -14
README.md CHANGED
@@ -1,26 +1,153 @@
  ---
- tags:
- - ml-intern
  ---

- # bdck/point-sam-inference

- <!-- ml-intern-provenance -->
- ## Generated by ML Intern

- This model repository was generated by [ML Intern](https://github.com/huggingface/ml-intern), an agent for machine learning research and development on the Hugging Face Hub.

- - Try ML Intern: https://smolagents-ml-intern.hf.space
- - Source code: https://github.com/huggingface/ml-intern

- ## Usage

  ```python
- from transformers import AutoModelForCausalLM, AutoTokenizer

- model_id = "bdck/point-sam-inference"
- tokenizer = AutoTokenizer.from_pretrained(model_id)
- model = AutoModelForCausalLM.from_pretrained(model_id)
  ```

- For non-causal architectures, replace `AutoModelForCausalLM` with the appropriate `AutoModel` class.
+ # Point-SAM: Promptable 3D Segmentation
+
+ A clean, self-contained Python inference package for **Point-SAM** (ICLR 2025), extending SAM's promptable segmentation to 3D point clouds.
+
+ > **Paper**: [Point-SAM: Promptable 3D Segmentation Model for Point Clouds](https://arxiv.org/abs/2406.17741)
+ > **Original Code**: [github.com/zyc00/Point-SAM](https://github.com/zyc00/Point-SAM)
+ > **Pretrained Weights**: [`yuchen0187/Point-SAM`](https://huggingface.co/yuchen0187/Point-SAM)
+
  ---
+
+ ## Quick Start
+
+ ```bash
+ pip install torch timm safetensors huggingface_hub numpy
+ ```
+
+ ```python
+ from point_sam import PointSAM, load_pointcloud
+
+ # 1. Load a point cloud (PLY or PCD)
+ coords, rgb, original = load_pointcloud("scene.ply")
+ # coords: [N, 3] normalized to [-1, 1]
+ # rgb: [N, 3] in [0, 255]
+
+ # 2. Load the pretrained model (downloads weights from HF Hub)
+ model = PointSAM.from_pretrained(checkpoint_path="model.safetensors", device="cuda")
+
+ # 3. Cache the cloud for fast repeated queries
+ model.set_pointcloud(coords, rgb)
+
+ # 4. Segment with a prompt point (in normalized [-1, 1] space)
+ masks, iou_scores = model.predict(
+     coords=None,              # use cached cloud
+     rgb=None,
+     prompt_point=[0.5, 0.1, -0.2],
+     prompt_label=1,           # 1 = foreground, 0 = background
+     multimask_output=True,
+ )
+
+ # 5. Pick the best mask by IoU score
+ best_mask = masks[iou_scores.argmax()]  # [N] boolean
+ ```
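+
+ To inspect the result, one option is to dump the selected points to a small ASCII PLY with plain NumPy. This is only a sketch, not part of the package's API; it assumes `original` holds the un-normalized `[N, 3]` coordinates returned by `load_pointcloud` and that `best_mask` is the boolean mask from above.
+
+ ```python
+ import numpy as np
+
+ mask = best_mask.cpu().numpy() if hasattr(best_mask, "cpu") else np.asarray(best_mask)
+ pts = np.asarray(original)[mask]          # keep only the segmented points
+
+ with open("segment.ply", "w") as f:
+     f.write("ply\nformat ascii 1.0\n")
+     f.write(f"element vertex {len(pts)}\n")
+     f.write("property float x\nproperty float y\nproperty float z\nend_header\n")
+     np.savetxt(f, pts, fmt="%.6f")
+ ```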
+
+ Command-line example:
+
+ ```bash
+ python examples/segment_ply.py scene.ply 0.5 0.1 -0.2 --checkpoint model.safetensors
+ ```
+
  ---

+ ## How It Works Internally
+
+ Point-SAM is a direct 3D adaptation of [SAM](https://github.com/facebookresearch/segment-anything). It has the same three-part architecture, but replaces the 2D image backbone with a **point cloud encoder**.
+
+ ### 1. Point-Cloud Encoder
+
+ The encoder turns an unstructured point cloud into a compact set of **patch embeddings** — the 3D equivalent of image patches.
+
+ **Voronoi Tokenizer** (the key speed trick)
+ - Sample `G` center points from the cloud via **Farthest Point Sampling** (FPS). This spreads centers evenly across the shape.
+ - Group each point with its **K nearest neighbors** around one of those centers.
+ - Run a small **PointNet-style MLP** on each group:
+   - Input: relative XYZ positions + RGB colors
+   - Max-pool over the K neighbors → one vector per group
+ - Result: `G` patch embeddings, each summarizing a local neighborhood (see the sketch after this list).
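+
+ A minimal pure-PyTorch sketch of this grouping step. The function and class names below are illustrative, not the package's API, and the real tokenizer has additional details:
+
+ ```python
+ import torch
+
+ def farthest_point_sample(xyz: torch.Tensor, num_groups: int) -> torch.Tensor:
+     """Pick `num_groups` well-spread center indices from an [N, 3] cloud."""
+     n = xyz.shape[0]
+     centers = torch.zeros(num_groups, dtype=torch.long)
+     dist = torch.full((n,), float("inf"))
+     farthest = int(torch.randint(0, n, (1,)))
+     for i in range(num_groups):
+         centers[i] = farthest
+         d = ((xyz - xyz[farthest]) ** 2).sum(dim=1)
+         dist = torch.minimum(dist, d)
+         farthest = int(dist.argmax())
+     return centers
+
+ def group_points(xyz, rgb, num_groups=1024, group_size=256):
+     """FPS centers + KNN groups; returns centers and per-group (relative XYZ, RGB) features."""
+     centers = xyz[farthest_point_sample(xyz, num_groups)]                         # [G, 3]
+     knn_idx = torch.cdist(centers, xyz).topk(group_size, largest=False).indices   # [G, K]
+     grouped_xyz = xyz[knn_idx] - centers[:, None, :]                              # relative positions
+     return centers, torch.cat([grouped_xyz, rgb[knn_idx]], dim=-1)                # [G, K, 6]
+
+ class MiniPointNet(torch.nn.Module):
+     """Shared MLP + max-pool over the K neighbors -> one embedding per group."""
+     def __init__(self, in_dim=6, embed_dim=256):
+         super().__init__()
+         self.mlp = torch.nn.Sequential(
+             torch.nn.Linear(in_dim, 128), torch.nn.GELU(),
+             torch.nn.Linear(128, embed_dim),
+         )
+     def forward(self, groups):                     # [G, K, 6]
+         return self.mlp(groups).max(dim=1).values  # [G, D] patch embeddings
+ ```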
+
+ **Vision Transformer (ViT) backbone**
+ - The patch embeddings are fed into a standard ViT — `eva02_large_patch14_448` for the *large* variant, or `eva_giant_patch14_560` for *giant*.
+ - The ViT adds learned positional embeddings based on the 3D center coordinates and runs self-attention to build a global scene representation.
+ - Output: `[B, num_patches, D]` embedding tensor (default `D = 256`); a generic stand-in is sketched below.
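+
+ The real backbone is an EVA ViT created via `timm`; the stand-in below only shows where the 3D-center positional embedding and self-attention fit in (module names and sizes are illustrative):
+
+ ```python
+ import torch
+ from torch import nn
+
+ class PatchViT(nn.Module):
+     """Stand-in backbone: positional MLP on 3D centers + stacked self-attention."""
+     def __init__(self, dim=256, depth=4, heads=8):
+         super().__init__()
+         self.pos_mlp = nn.Sequential(nn.Linear(3, dim), nn.GELU(), nn.Linear(dim, dim))
+         layer = nn.TransformerEncoderLayer(dim, heads, dim * 4, batch_first=True)
+         self.blocks = nn.TransformerEncoder(layer, depth)
+
+     def forward(self, patch_embed, centers):      # [B, G, D], [B, G, 3]
+         return self.blocks(patch_embed + self.pos_mlp(centers))   # [B, G, D]
+
+ vit = PatchViT()
+ scene_feats = vit(torch.randn(1, 1024, 256), torch.rand(1, 1024, 3) * 2 - 1)
+ ```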
+
+ ### 2. Prompt Encoder
+
+ - **Point prompts**: A user clicks (or specifies) a 3D coordinate. The coordinate is mapped through a random Fourier positional encoding (the same Gaussian-frequency trick SAM uses), and a learned embedding is then added depending on whether the label is **positive** (foreground) or **negative** (background). The encoding step is sketched after this list.
+ - **Mask prompts** (optional): If you already have a rough mask from a previous iteration, it is grouped into patches (same KNN grouping as the encoder) and encoded into dense embeddings. On the first call this is `None`, so a learned "no mask" embedding is used instead.
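+
+ A rough sketch of the random-Fourier encoding idea (the feature width, scale, and class name are illustrative, not the checkpoint's actual parameters):
+
+ ```python
+ import math
+ import torch
+
+ class RandomFourierEncoding(torch.nn.Module):
+     """Map 3D coordinates in [-1, 1] to sin/cos features of fixed random frequencies."""
+     def __init__(self, num_feats=128, scale=1.0):
+         super().__init__()
+         self.register_buffer("freqs", scale * torch.randn(3, num_feats))
+
+     def forward(self, coords):                              # [..., 3] in [-1, 1]
+         proj = 2 * math.pi * coords @ self.freqs            # [..., num_feats]
+         return torch.cat([proj.sin(), proj.cos()], dim=-1)  # [..., 2 * num_feats]
+
+ pe = RandomFourierEncoding()
+ prompt_embed = pe(torch.tensor([[0.5, 0.1, -0.2]]))         # one prompt point -> [1, 256]
+ # A learned "positive" or "negative" label embedding is then added to prompt_embed.
+ ```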
+
+ ### 3. Mask Decoder
+
+ The decoder is a **two-way transformer** — identical in spirit to SAM's decoder (a decoder-in-miniature sketch follows this list):
+
+ 1. **Cross-attention layers** alternate between:
+    - *Prompt tokens → point cloud patches* (the prompts "look at" the scene)
+    - *Point cloud patches → prompt tokens* (the scene "looks back" at the prompts)
+ 2. After two layers, a **final attention** from prompts to patches refines the token representation.
+ 3. **Upsampling**: The decoder works at patch resolution. To get back to per-point logits, features are interpolated to every original point using **inverse-distance weighted KNN** (3 nearest patch centers).
+ 4. **Hypernetwork MLPs**: Each candidate mask has its own tiny MLP that produces a dynamic weight vector. This vector is dot-producted with the upsampled per-point features to produce the final mask logits.
+ 5. **IoU head**: A small MLP on the IoU token predicts the quality of each mask candidate. At inference time you simply pick the one with the highest predicted IoU.
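+
+ The pieces fit together roughly as in the sketch below; shapes and module names are illustrative, not the package's internals:
+
+ ```python
+ import torch
+ from torch import nn
+
+ class TwoWayBlock(nn.Module):
+     """One round of bidirectional cross-attention between prompt tokens and patches."""
+     def __init__(self, dim=256, heads=8):
+         super().__init__()
+         self.tokens_to_patches = nn.MultiheadAttention(dim, heads, batch_first=True)
+         self.patches_to_tokens = nn.MultiheadAttention(dim, heads, batch_first=True)
+
+     def forward(self, tokens, patches):
+         # Prompt tokens attend to the point-cloud patches ...
+         tokens = tokens + self.tokens_to_patches(tokens, patches, patches)[0]
+         # ... then the patches attend back to the (updated) prompt tokens.
+         patches = patches + self.patches_to_tokens(patches, tokens, tokens)[0]
+         return tokens, patches
+
+ def interpolate_to_points(point_xyz, patch_xyz, patch_feats, k=3, eps=1e-8):
+     """Inverse-distance weighted KNN interpolation from patch centers to every point."""
+     knn_d, knn_idx = torch.cdist(point_xyz, patch_xyz).topk(k, largest=False)
+     w = 1.0 / (knn_d + eps)
+     w = w / w.sum(dim=1, keepdim=True)                               # normalize weights
+     return (patch_feats[knn_idx] * w.unsqueeze(-1)).sum(dim=1)       # [N, D]
+
+ # Toy shapes: 1 IoU token + 4 mask tokens + 2 prompt tokens, 1024 patches, 10k points.
+ tokens, patches = torch.randn(1, 7, 256), torch.randn(1, 1024, 256)
+ block = TwoWayBlock()
+ for _ in range(2):
+     tokens, patches = block(tokens, patches)
+
+ point_xyz = torch.rand(10_000, 3) * 2 - 1
+ patch_xyz = torch.rand(1024, 3) * 2 - 1
+ point_feats = interpolate_to_points(point_xyz, patch_xyz, patches[0])   # [N, 256]
+
+ hyper_mlp = nn.Linear(256, 256)                       # stand-in for one hypernetwork MLP
+ mask_logits = point_feats @ hyper_mlp(tokens[0, 1])   # [N] logits for one mask candidate
+ ```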
+
+ The decoder always outputs **4 candidates** (1 default + 3 multimask). The first candidate is a "safe" single mask; the other three are alternatives at different granularities.
+
+ ### 4. Iterative Prompt Refinement (training only)
+
+ During training, Point-SAM simulates a user iteratively adding prompts:
+ - Iteration 0: no prompt → random positive point from the target object.
+ - Iteration 1: previous mask is fed back as a mask prompt; a new point prompt is sampled from the **error region** (false positives / false negatives).
+ - ... repeated for 5 iterations (large model) or 10 (giant). (The error-region sampling is sketched below.)
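+
+ A sketch of that error-region sampling (illustrative logic, not the actual training code; `pred_mask` and `gt_mask` are assumed to be `[N]` boolean tensors):
+
+ ```python
+ import torch
+
+ def sample_refinement_prompt(pred_mask, gt_mask, coords):
+     """Pick the next simulated click from the error region (prediction XOR ground truth)."""
+     false_neg = gt_mask & ~pred_mask           # missed points   -> next click is positive
+     false_pos = pred_mask & ~gt_mask           # spurious points -> next click is negative
+     error = false_neg | false_pos
+     if not error.any():
+         return None, None                      # mask is already perfect
+     idx = error.nonzero(as_tuple=False).squeeze(1)
+     pick = idx[torch.randint(0, idx.numel(), (1,))].item()
+     label = 1 if false_neg[pick] else 0
+     return coords[pick], label
+ ```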
+
+ At **inference time** you only do a single forward pass with whatever prompt you provide — the model was trained to produce a good mask even from one point.
+
+ ---
+
+ ## Supported File Formats
+
+ | Format | Notes |
+ |--------|-------|
+ | **PLY** | ASCII `.ply` with `x y z r g b` columns |
+ | **PCD** | ASCII `.pcd` with `x y z r g b` columns (Point Cloud Library format) |
+
+ Both loaders normalize coordinates to a **unit sphere in [-1, 1]** and scale colors to **[0, 255]**. This normalization is **required** — the positional encoding will raise a `ValueError` if coordinates fall outside [-1, 1].
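+
+ If your points come from somewhere other than `load_pointcloud`, the same normalization is easy to apply by hand. A NumPy sketch (the package's own centering and scaling may differ in detail):
+
+ ```python
+ import numpy as np
+
+ def normalize_to_unit_sphere(coords: np.ndarray) -> np.ndarray:
+     """Center the cloud and scale it so every coordinate lies in [-1, 1]."""
+     centered = coords - coords.mean(axis=0)
+     radius = np.linalg.norm(centered, axis=1).max()
+     return centered / max(radius, 1e-12)
+
+ coords = np.random.rand(10_000, 3) * 50.0        # e.g. raw sensor coordinates
+ coords = normalize_to_unit_sphere(coords)        # now safe for the positional encoding
+ assert np.abs(coords).max() <= 1.0
+ ```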
+
+ ---
+
+ ## Handling Large Point Clouds
+
+ If your cloud has > 100k points, increase the patch resolution to avoid OOM:
+
  ```python
+ model.adjust_patch_params(num_groups=2048, group_size=256)
+ ```
+
+ The default is `num_groups=1024, group_size=256` for the large model.
+
+ ---
+
+ ## What Changed From the Original Repo?
+
+ | Original | This Package |
+ |----------|-------------|
+ | Requires `hydra` + `omegaconf` for config | Pure Python, no YAML configs needed |
+ | Requires compiling `torkit3d` (CUDA ops) | Pure-PyTorch FPS, KNN, and index operations |
+ | Requires compiling `apex` for FusedLayerNorm | Standard `nn.LayerNorm` by default; apex optional |
+ | Scattered evaluation scripts | One clean `PointSAM` class with `predict()` |
+ | Heavy training codebase | Only inference + minimal model code |
+
+ ---
+
+ ## Citation
+
+ ```bibtex
+ @inproceedings{
+   zhou2025pointsam,
+   title={Point-{SAM}: Promptable 3D Segmentation Model for Point Clouds},
+   author={Yuchen Zhou and Jiayuan Gu and Tung Yen Chiang and Fanbo Xiang and Hao Su},
+   booktitle={The Thirteenth International Conference on Learning Representations},
+   year={2025},
+   url={https://openreview.net/forum?id=yXCTDhZDh6}
+ }
  ```

+ ## License
+
+ MIT (same as the original repository).