jaychempan committed · Commit 5fdac74 · verified · Parent(s): 655dd29

Update README.md

Files changed (1): README.md (+154 −3)

---
license: mit
---
<p align="center">
<img src="assets/v2sam-logo.png" alt="Image" width="70">
</p>
<div align="center">
<h1 align="center">
V²-SAM: Marrying SAM2 with Multi-Prompt Experts for Cross-View Object Correspondence
</h1>
<h4 align="center"><em>Jiancheng Pan*, &nbsp; Runze Wang*, &nbsp; Tianwen Qian, &nbsp; Mohammad Mahdi, &nbsp; Xiangyang Xue,</em></h4>
<h4 align="center"><em>Xiaomeng Huang, &nbsp; Luc Van Gool, &nbsp; Danda Pani Paudel, &nbsp; Yuqian Fu✉</em></h4>
<p align="center">
<img src="assets/ins.png" alt="Image" width="350">
</p>

\* *Equal Contribution* &nbsp; &nbsp; ✉ Corresponding Author
</div>

<p align="center">
<a href="https://arxiv.org/abs/2511.20886"><img src="https://img.shields.io/badge/Arxiv-2511.20886-b31b1b.svg?logo=arXiv"></a>
<a href="https://arxiv.org/abs/2511.20886"><img src="https://img.shields.io/badge/CVPR'26-Paper-blue"></a>
<a href="https://huggingface.co/jaychempan/V2-SAM"><img src="https://img.shields.io/badge/%F0%9F%A4%97%20Model-HuggingFace-yellow?style=flat&logo=hug"></a>
<a href="https://jianchengpan.space/projects/V2-SAM/"><img src="https://img.shields.io/badge/V2--SAM-Project_Page-green"></a>
<a href="https://github.com/jaychempan/V2SAM/blob/main/LICENSE"><img src="https://img.shields.io/badge/License-MIT-yellow"></a>
</p>

<p align="center">
<a href="#news">News</a> |
<a href="#abstract">Abstract</a> |
<a href="#dataset">Dataset</a> |
<a href="#model">Model</a> |
<a href="#statement">Statement</a>
</p>

## News
- [2026/2/21] V²-SAM is accepted to CVPR 2026. Thanks to all contributors.
- [2025/11/25] Our paper "V²-SAM: Marrying SAM2 with Multi-Prompt Experts for Cross-View Object Correspondence" is available on [arXiv](https://arxiv.org/abs/2511.20886).

## Abstract
Cross-view object correspondence, exemplified by the representative task of ego-exo object correspondence, aims to establish consistent associations of the same object across different viewpoints (e.g., ego-centric and exo-centric). This task poses significant challenges due to drastic viewpoint and appearance variations, making existing segmentation models, such as SAM2, non-trivial to apply directly. To address this, we present V²-SAM, a unified cross-view object correspondence framework that adapts SAM2 from single-view segmentation to cross-view correspondence through two complementary prompt generators. Specifically, the Cross-View Anchor Prompt Generator (V²-Anchor), built upon DINOv3 features, establishes geometry-aware correspondences and, for the first time, unlocks coordinate-based prompting for SAM2 in cross-view scenarios, while the Cross-View Visual Prompt Generator (V²-Visual) enhances appearance-guided cues via a novel visual prompt matcher that aligns ego-exo representations from both feature and structural perspectives. To effectively exploit the strengths of both prompts, we further adopt a multi-expert design and introduce a Post-hoc Cyclic Consistency Selector (PCCS) that adaptively selects the most reliable expert based on cyclic consistency. Extensive experiments validate the effectiveness of V²-SAM, achieving new state-of-the-art performance on Ego-Exo4D (ego-exo object correspondence), DAVIS-2017 (video object tracking), and HANDAL-X (robotic-ready cross-view correspondence).

<p align="center">
<img src="assets/v2sam-framework.png" alt="Image">
</p>

## Dataset
Our method is evaluated on Ego-Exo4D (ego-exo object correspondence), DAVIS-2017 (video object tracking), and HANDAL-X (robotic-ready cross-view correspondence).

We provide processed versions of these datasets on HuggingFace for easy access:

### 🔹 Ego-Exo4D
- [Train Split](https://huggingface.co/datasets/jaychempan/Ego-Exo4D-Relation-Train)
- [Test Split](https://huggingface.co/datasets/jaychempan/Ego-Exo4D-Relation-Test)

### 🔹 DAVIS-2017
- [Dataset Link](https://huggingface.co/datasets/jaychempan/DAVIS)

### 🔹 HANDAL-X
- [Dataset Link](https://huggingface.co/datasets/jaychempan/HANDAL)

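For example, the processed splits can be fetched with `huggingface-cli`; the local target directories below are illustrative choices, not paths the codebase requires:

```bash
# Fetch the processed datasets (repo IDs from the links above; local dirs are examples)
huggingface-cli download jaychempan/Ego-Exo4D-Relation-Train --repo-type dataset --local-dir data/ego-exo4d-train
huggingface-cli download jaychempan/Ego-Exo4D-Relation-Test --repo-type dataset --local-dir data/ego-exo4d-test
huggingface-cli download jaychempan/DAVIS --repo-type dataset --local-dir data/davis
huggingface-cli download jaychempan/HANDAL --repo-type dataset --local-dir data/handal
```
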
## Model
### Environment Setup

```bash
conda create -n v2sam python=3.10 -y
conda activate v2sam
cd ~/projects/V2-SAM
# Adjust the CUDA paths below to your local CUDA 12.1 installation
export LD_LIBRARY_PATH=/opt/modules/nvidia-cuda-12.1.0/lib64:$LD_LIBRARY_PATH
export PATH=/opt/modules/nvidia-cuda-12.1.0/bin:$PATH
# Alternative: conda install pytorch==2.3.1 torchvision==0.18.1 pytorch-cuda=12.1 cuda -c pytorch -c "nvidia/label/cuda-12.1.0" -c "nvidia/label/cuda-12.1.1"
pip install torch==2.3.1 torchvision==0.18.1 torchaudio==2.3.1 --index-url https://download.pytorch.org/whl/cu121

# Alternative: pip install mmcv==2.1.0 -f https://download.openmmlab.com/mmcv/dist/cu121/torch2.3/index.html
pip install -U openmim
mim install mmengine
mim install "mmcv>=2.1.0"
pip install -r requirements.txt
pip install prettytable

# Install the bundled mmengine locally to use the third-party tools
cd mmengine
pip install -e .
```

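A quick sanity check after installation (a minimal sketch; assumes the `v2sam` environment above is active):

```bash
# Verify that PyTorch sees CUDA and that mmcv/mmengine import cleanly
python -c "import torch; print(torch.__version__, torch.version.cuda, torch.cuda.is_available())"
python -c "import mmcv, mmengine; print(mmcv.__version__, mmengine.__version__)"
```
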
### SAM2 and DINOv3 Weights
Download the base model weights you want to use.
```bash
huggingface-cli download jaychempan/sam2 --local-dir weights/sam2 --include sam2_hiera_large.pt

huggingface-cli download jaychempan/dinov2 --local-dir weights/dinov2 --include dinov2_vitg14_reg4_pretrain.pth

huggingface-cli download jaychempan/dinov3 --local-dir weights/dinov3 --include dinov3_vitl16_pretrain_lvd1689m-8aa4cbdd.pth
```

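After downloading, the checkpoints should sit under `weights/`, matching the `--local-dir` arguments above:

```bash
# Confirm the checkpoints are in place
ls -lh weights/sam2/sam2_hiera_large.pt \
       weights/dinov2/dinov2_vitg14_reg4_pretrain.pth \
       weights/dinov3/dinov3_vitl16_pretrain_lvd1689m-8aa4cbdd.pth
```
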
### Train

```bash
bash tools/dist.sh train projects/v2sam/configs/v2sam.py 4
```

To train the `V²-Visual` expert, first rename the project directory `projects/v2sam_visual` to `projects/v2sam`; likewise, for the `V²-Fusion` expert, rename `projects/v2sam_fusion` to `projects/v2sam` (see the sketch below).

> Note: `V²-Anchor` requires no training; it uses the official SAM2 decoder checkpoint.

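A minimal sketch of switching experts, assuming a clean checkout; the backup directory name is just an example:

```bash
# Example: make V²-Visual the active project
mv projects/v2sam projects/v2sam.bak    # back up the current project dir (example name)
mv projects/v2sam_visual projects/v2sam
bash tools/dist.sh train projects/v2sam/configs/v2sam.py 4
```
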
### Test

```bash
# Evaluate a single checkpoint
bash tools/test.sh test projects/v2sam/configs/v2sam.py 4 /path/to/checkpoint

# Evaluate all checkpoints under a directory
bash tools/test_all.sh test projects/v2sam/configs/v2sam.py 4 /path/to/checkpoint/dir
```

## Statement
### Acknowledgement

This project references and uses the following open-source models and datasets.

#### Related Open Source Models

- [Sa2VA](https://arxiv.org/abs/2501.04001)
- [SAM2](https://arxiv.org/abs/2408.00714)
- [DINOv2](https://arxiv.org/abs/2304.07193)
- [DINOv3](https://arxiv.org/abs/2508.10104)

#### Related Open Source Datasets

- [Ego-Exo4D Dataset](https://ego-exo4d-data.org/)
- [DAVIS-2017 Dataset](https://davischallenge.org/davis2017/code.html)
- [HANDAL-X Dataset](https://nvlabs.github.io/HANDAL/)

### Citation

If you find our work useful or use our dataset, please cite the following papers.

```bibtex
@inproceedings{pan2026v,
  title={V$^{2}$-SAM: Marrying SAM2 with Multi-Prompt Experts for Cross-View Object Correspondence},
  author={Pan, Jiancheng and Wang, Runze and Qian, Tianwen and Mahdi, Mohammad and Fu, Yanwei and Xue, Xiangyang and Huang, Xiaomeng and Van Gool, Luc and Paudel, Danda Pani and Fu, Yuqian},
  booktitle={CVPR},
  year={2026}
}

@inproceedings{fu2025objectrelator,
  title={ObjectRelator: Enabling Cross-View Object Relation Understanding Across Ego-Centric and Exo-Centric Perspectives},
  author={Fu, Yuqian and Wang, Runze and Ren, Bin and Sun, Guolei and Gong, Biao and Fu, Yanwei and Paudel, Danda Pani and Huang, Xuanjing and Van Gool, Luc},
  booktitle={ICCV},
  year={2025}
}
```