HongshengY committed
Commit cb0ee71 · verified · 1 Parent(s): 9792b4b

Update README.md

Files changed (1): README.md (+29 −29)
README.md CHANGED
@@ -3,46 +3,46 @@ license: mit
  language:
  - en
  tags:
- - 3D
- - 3D-Reconstruction
- - Sketch-to-3D
- - Transformer
- - Pytorch
+ - 3d
+ - 3d-reconstruction
+ - sketch-based-modeling
+ - surface-reconstruction
+ - pytorch
+ - siggraph-2026
  ---

- # S2V-Net (NeuralSketch2Surf)
+ # S2V-Net for NeuralSketch2Surf

- S2V-Net is the core model from the paper **"NeuralSketch2Surf: Fast Neural Surfacing of Unoriented 3D Sketches"**. It instantly converts sparse, unoriented 3D sketches (like those drawn in VR) into smooth, closed 3D meshes.
+ This repository hosts the pretrained S2V-Net weights used by **NeuralSketch2Surf: Fast Neural Surfacing of Unoriented 3D Sketches**.
+
+ S2V-Net reconstructs closed surfaces from sparse, unoriented 3D sketch curves. The model predicts a `112^3` volumetric occupancy field from voxelized sketch strokes; the final mesh is extracted with Marching Cubes and can be refined with the smoothing tool from the project repository.

- ![Teaser Image](./Architecture.png)
- ![result Image](./UpdatedResults.png)
- ![datasets Image](./UpdatedDataset.png)
+ ![Architecture](./Architecture.png)
+ ![Results](./UpdatedResults.png)
+ ![Cross-dataset results](./UpdatedDataset.png)

- ## Core Highlights
+ ## Model

- * **Lightning Fast (Interactive Rates):** The entire pipeline takes **< 0.7 seconds** on a GPU, making it perfectly suited for real-time VR applications.
- * **No Normals Required:** Unlike many existing methods, it processes completely unoriented, raw 3D strokes. Users don't need to worry about stroke directions.
- * **Robust & Accurate:** Accurately fills large spatial gaps between sparse strokes while preserving high-frequency geometric details.
+ - **Input:** voxelized 3D sketch strokes on a `112^3` grid.
+ - **Output:** binary surface occupancy probabilities.
+ - **Backbone:** SwinUNETR-style 3D transformer for global shape inference.
+ - **Refinement:** lightweight 3D residual module for local boundary correction.
+ - **Use case:** interactive sketch-based surface reconstruction from raw 3D strokes.

- ## How it Works
+ ## Notes

- The pipeline treats 3D surfacing as a binary voxel-occupancy prediction task operating on a $112^3$ grid:
- 1. **Backbone (Global Shape):** A custom **SwinUNETR v2** transformer infers the global topology and bridges large gaps between sparse input strokes.
- 2. **Refinement (Local Details):** A lightweight 3D CNN acts as a geometric denoiser to sharpen boundaries and recover fine details.
- 3. **Meshing:** The predicted occupancy grid is extracted via Marching Cubes and smoothed using a locally controllable Laplacian filter.
-
- ## Limitations
- * **Closed Surfaces Only:** The model assumes the input sketch represents a solid, closed object. It is not designed for open surfaces.
+ - The model is trained for closed-surface reconstruction.
+ - Very thin structures may be limited by the `112^3` voxel resolution.

  ## Citation

- If you use S2V-Net in your research or project, please cite:
+ If you use these weights, please cite:

  ```bibtex
- @inproceedings{neuralsketch2surf2026,
- author = {Anonymous Author(s)},
- title = {NeuralSketch2Surf: Fast Neural Surfacing of Unoriented 3D Sketches},
- booktitle = {Proceedings of ACM Trans. Graph.},
- year = {2026},
- publisher = {ACM}
- }
+ @article{neuralsketch2surf2026,
+ title = {NeuralSketch2Surf: Fast Neural Surfacing of Unoriented 3D Sketches},
+ author = {Ye, Hongsheng and Sureshkumar, Anandhu and Wang, Zhonghan and Cani, Marie-Paule and Hahmann, Stefanie and Bonneau, Georges-Pierre and Parakkat, Amal Dev},
+ journal = {ACM Transactions on Graphics (Proceedings of SIGGRAPH)},
+ year = {2026}
+ }
+ ```
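
The updated README describes S2V-Net's interface — voxelized strokes on a `112^3` grid in, surface occupancy probabilities out — but not the surrounding data handling. Below is a minimal NumPy sketch of that interface: rasterizing 3D polyline strokes into the input grid, and thresholding a predicted occupancy field before Marching Cubes extraction. `voxelize_strokes`, `binarize_occupancy`, the `[0, 1]` coordinate convention, and the 0.5 threshold are all illustrative assumptions, not the project's actual preprocessing code.

```python
import numpy as np

GRID = 112  # voxel resolution stated in the README


def voxelize_strokes(strokes, grid=GRID):
    """Rasterize unoriented 3D polyline strokes into a binary occupancy grid.

    `strokes` is a list of (N_i, 3) float arrays with coordinates assumed
    normalized to [0, 1]. Illustrative stand-in, not the official code.
    """
    vol = np.zeros((grid, grid, grid), dtype=np.float32)
    for stroke in strokes:
        stroke = np.asarray(stroke, dtype=np.float64)
        # Densely sample each segment so no voxel along the stroke is skipped.
        for a, b in zip(stroke[:-1], stroke[1:]):
            steps = max(2, int(np.ceil(np.linalg.norm(b - a) * grid * 2)))
            pts = a + np.linspace(0.0, 1.0, steps)[:, None] * (b - a)
            idx = np.clip((pts * grid).astype(int), 0, grid - 1)
            vol[idx[:, 0], idx[:, 1], idx[:, 2]] = 1.0
    return vol


def binarize_occupancy(probs, threshold=0.5):
    """Threshold predicted occupancy probabilities into a binary grid,
    ready for Marching Cubes surface extraction."""
    return (probs >= threshold).astype(np.uint8)


# Toy usage: one diagonal stroke through the unit cube.
stroke = np.stack([np.linspace(0.1, 0.9, 8)] * 3, axis=1)  # (8, 3) polyline
vol = voxelize_strokes([stroke])
print(vol.shape, int(vol.sum()) > 0)  # (112, 112, 112) True
```

The oversampling factor of 2 per voxel edge in `voxelize_strokes` is a simple way to avoid gaps along steep segments; a production rasterizer would typically use a 3D line-drawing (Bresenham-style) traversal instead.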