Dataset schema (field: dtype, observed range):
- paper_id: uint32, values 0 to 2.87k
- title: string, length 15 to 149
- authors: list, length 1 to 69
- cvf_url: string, length 94 to 199
- pdf_url: string, length 95 to 200
- supp_url: string, length 100 to 148
- arxiv_id: string, length 10 (fixed)
- bibtex: large_string, length 285 to 1.82k
- abstract: large_string, length 547 to 2.44k
- embedding: list, length 768 (fixed)
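The schema above describes one row per paper, each carrying a fixed 768-dimensional embedding. As a minimal sketch (not this dataset's own tooling), nearest-neighbor lookup over such rows by cosine similarity could look like the following; the `make_row` helper and its toy random vectors are hypothetical stand-ins for real records:

```python
import math
import random

# Hypothetical row builder mirroring the schema: paper_id, title, and a
# unit-norm 768-dim embedding. Toy vectors, NOT the dataset's real embeddings.
def make_row(paper_id, title, seed):
    rng = random.Random(seed)
    vec = [rng.gauss(0.0, 1.0) for _ in range(768)]
    norm = math.sqrt(sum(x * x for x in vec))
    return {"paper_id": paper_id,
            "title": title,
            "embedding": [x / norm for x in vec]}

rows = [
    make_row(200, "Quality Aware Dynamic Discriminator Rejection Sampling", 0),
    make_row(201, "Shading Meets Motion", 1),
    make_row(202, "Believing is Seeing", 2),
]

def nearest(query, rows, k=2):
    """Rank rows by cosine similarity to a unit-norm query embedding."""
    sims = sorted(
        ((sum(q * e for q, e in zip(query, r["embedding"])), r["paper_id"])
         for r in rows),
        reverse=True,
    )
    return sims[:k]

# Querying with row 0's own embedding returns paper_id 200 first (sim ~ 1.0).
top = nearest(rows[0]["embedding"], rows)
print(top[0][1])
```

Because the embeddings are unit-normalized, the plain dot product equals cosine similarity; at scale one would batch this with a vector library instead of a Python loop.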
200
Improving the Training of Data-Efficient GANs via Quality Aware Dynamic Discriminator Rejection Sampling
[ "Zhaoyu Zhang", "Yang Hua", "Guanxiong Sun", "Hui Wang", "Seán McLoone" ]
https://openaccess.thecvf.com/content/CVPR2025/html/Zhang_Improving_the_Training_of_Data-Efficient_GANs_via_Quality_Aware_Dynamic_CVPR_2025_paper.html
https://openaccess.thecvf.com/content/CVPR2025/papers/Zhang_Improving_the_Training_of_Data-Efficient_GANs_via_Quality_Aware_Dynamic_CVPR_2025_paper.pdf
https://openaccess.thecvf.com/content/CVPR2025/supplemental/Zhang_Improving_the_Training_CVPR_2025_supplemental.pdf
null
@InProceedings{Zhang_2025_CVPR, author = {Zhang, Zhaoyu and Hua, Yang and Sun, Guanxiong and Wang, Hui and McLoone, Se\'an}, title = {Improving the Training of Data-Efficient GANs via Quality Aware Dynamic Discriminator Rejection Sampling}, booktitle = {Proceedings of the Computer Vision and Pattern ...
Data-Efficient Generative Adversarial Nets (DE-GANs) have become increasingly popular in recent years. Existing methods apply data augmentation, noise injection, and pre-trained models to maximally increase the number of training samples, thus improving the training of DE-GANs. However, none of these methods considers t...
[ -0.0003491667448543012, -0.027907969430088997, -0.01691100187599659, 0.054938893765211105, 0.026314178481698036, 0.017857426777482033, -0.009180196560919285, -0.007671687286347151, 0.004302725661545992, -0.07661226391792297, -0.030840221792459488, -0.0005351598374545574, -0.0708034485578537,...
201
Shading Meets Motion: Self-supervised Indoor 3D Reconstruction Via Simultaneous Shape-from-Shading and Structure-from-Motion
[ "Guoyu Lu" ]
https://openaccess.thecvf.com/content/CVPR2025/html/Lu_Shading_Meets_Motion_Self-supervised_Indoor_3D_Reconstruction_Via_Simultaneous_Shape-from-Shading_CVPR_2025_paper.html
https://openaccess.thecvf.com/content/CVPR2025/papers/Lu_Shading_Meets_Motion_Self-supervised_Indoor_3D_Reconstruction_Via_Simultaneous_Shape-from-Shading_CVPR_2025_paper.pdf
null
null
@InProceedings{Lu_2025_CVPR, author = {Lu, Guoyu}, title = {Shading Meets Motion: Self-supervised Indoor 3D Reconstruction Via Simultaneous Shape-from-Shading and Structure-from-Motion}, booktitle = {Proceedings of the Computer Vision and Pattern Recognition Conference (CVPR)}, month = {June}...
Scene reconstruction has a wide range of applications in computer vision and robotics. To build practical constraints and feature correspondences, rich textures and distinguished gradient va...
[ 0.02287750504910946, -0.025355858728289604, 0.021738572046160698, 0.011625520884990692, 0.036914315074682236, 0.022161906585097313, 0.017079738900065422, 0.00944607239216566, -0.04605443775653839, -0.07881208509206772, -0.023532025516033173, -0.05678953230381012, -0.02403537556529045, 0.01...
202
Believing is Seeing: Unobserved Object Detection using Generative Models
[ "Subhransu S. Bhattacharjee", "Dylan Campbell", "Rahul Shome" ]
https://openaccess.thecvf.com/content/CVPR2025/html/Bhattacharjee_Believing_is_Seeing_Unobserved_Object_Detection_using_Generative_Models_CVPR_2025_paper.html
https://openaccess.thecvf.com/content/CVPR2025/papers/Bhattacharjee_Believing_is_Seeing_Unobserved_Object_Detection_using_Generative_Models_CVPR_2025_paper.pdf
https://openaccess.thecvf.com/content/CVPR2025/supplemental/Bhattacharjee_Believing_is_Seeing_CVPR_2025_supplemental.pdf
2410.05869
@InProceedings{Bhattacharjee_2025_CVPR, author = {Bhattacharjee, Subhransu S. and Campbell, Dylan and Shome, Rahul}, title = {Believing is Seeing: Unobserved Object Detection using Generative Models}, booktitle = {Proceedings of the Computer Vision and Pattern Recognition Conference (CVPR)}, mont...
Can objects that are not visible in an image---but are in the vicinity of the camera---be detected? This study introduces the novel tasks of 2D, 2.5D and 3D unobserved object detection for predicting the location of nearby objects that are occluded or lie outside the image frame. We adapt several state-of-the-art pre-...
[ 0.028007308021187782, -0.0059496317990124226, 0.001039012335240841, 0.05973203480243683, 0.04997502639889717, 0.006712589878588915, 0.024503542110323906, 0.02857946790754795, -0.031803201884031296, -0.04270421341061592, -0.03726736456155777, 0.006924381013959646, -0.06995976716279984, -0.0...
203
MotionStone: Decoupled Motion Intensity Modulation with Diffusion Transformer for Image-to-Video Generation
[ "Shuwei Shi", "Biao Gong", "Xi Chen", "Dandan Zheng", "Shuai Tan", "Zizheng Yang", "Yuyuan Li", "Jingwen He", "Kecheng Zheng", "Jingdong Chen", "Ming Yang", "Yinqiang Zheng" ]
https://openaccess.thecvf.com/content/CVPR2025/html/Shi_MotionStone_Decoupled_Motion_Intensity_Modulation_with_Diffusion_Transformer_for_Image-to-Video_CVPR_2025_paper.html
https://openaccess.thecvf.com/content/CVPR2025/papers/Shi_MotionStone_Decoupled_Motion_Intensity_Modulation_with_Diffusion_Transformer_for_Image-to-Video_CVPR_2025_paper.pdf
https://openaccess.thecvf.com/content/CVPR2025/supplemental/Shi_MotionStone_Decoupled_Motion_CVPR_2025_supplemental.pdf
2412.05848
@InProceedings{Shi_2025_CVPR, author = {Shi, Shuwei and Gong, Biao and Chen, Xi and Zheng, Dandan and Tan, Shuai and Yang, Zizheng and Li, Yuyuan and He, Jingwen and Zheng, Kecheng and Chen, Jingdong and Yang, Ming and Zheng, Yinqiang}, title = {MotionStone: Decoupled Motion Intensity Modulation with Dif...
Image-to-video (I2V) generation is conditioned on a static image and has recently been enhanced with motion intensity as an additional control signal. These motion-aware models are appealing for generating diverse motion patterns, yet no reliable motion estimator exists for training such models on large-sca...
[ 0.012052798643708229, -0.018619507551193237, 0.007351793814450502, 0.028594963252544403, 0.0317729152739048, 0.03114975243806839, 0.01597699709236622, -0.011564398184418678, -0.031062206253409386, -0.058585699647665024, 0.004422781988978386, -0.04204382374882698, -0.04168078675866127, 0.02...
204
NLPrompt: Noise-Label Prompt Learning for Vision-Language Models
[ "Bikang Pan", "Qun Li", "Xiaoying Tang", "Wei Huang", "Zhen Fang", "Feng Liu", "Jingya Wang", "Jingyi Yu", "Ye Shi" ]
https://openaccess.thecvf.com/content/CVPR2025/html/Pan_NLPrompt_Noise-Label_Prompt_Learning_for_Vision-Language_Models_CVPR_2025_paper.html
https://openaccess.thecvf.com/content/CVPR2025/papers/Pan_NLPrompt_Noise-Label_Prompt_Learning_for_Vision-Language_Models_CVPR_2025_paper.pdf
https://openaccess.thecvf.com/content/CVPR2025/supplemental/Pan_NLPrompt_Noise-Label_Prompt_CVPR_2025_supplemental.pdf
2412.01256
@InProceedings{Pan_2025_CVPR, author = {Pan, Bikang and Li, Qun and Tang, Xiaoying and Huang, Wei and Fang, Zhen and Liu, Feng and Wang, Jingya and Yu, Jingyi and Shi, Ye}, title = {NLPrompt: Noise-Label Prompt Learning for Vision-Language Models}, booktitle = {Proceedings of the Computer Vision and ...
The emergence of vision-language foundation models, such as CLIP, has revolutionized image-text representation, enabling a broad range of applications via prompt learning. Despite its promise, real-world datasets often contain noisy labels that can degrade prompt learning performance. In this paper, we demonstrate that...
[ -0.012796618044376373, -0.0057700444012880325, 0.002177736721932888, 0.0500757209956646, 0.007902069017291069, 0.0261812973767519, 0.019183384254574776, -0.00543615547940135, -0.03577093780040741, -0.015587462112307549, -0.04091230034828186, 0.04967951029539108, -0.05816248059272766, -0.01...
205
MEGA: Masked Generative Autoencoder for Human Mesh Recovery
[ "Guénolé Fiche", "Simon Leglaive", "Xavier Alameda-Pineda", "Francesc Moreno-Noguer" ]
https://openaccess.thecvf.com/content/CVPR2025/html/Fiche_MEGA_Masked_Generative_Autoencoder_for_Human_Mesh_Recovery_CVPR_2025_paper.html
https://openaccess.thecvf.com/content/CVPR2025/papers/Fiche_MEGA_Masked_Generative_Autoencoder_for_Human_Mesh_Recovery_CVPR_2025_paper.pdf
https://openaccess.thecvf.com/content/CVPR2025/supplemental/Fiche_MEGA_Masked_Generative_CVPR_2025_supplemental.pdf
2405.18839
@InProceedings{Fiche_2025_CVPR, author = {Fiche, Gu\'enol\'e and Leglaive, Simon and Alameda-Pineda, Xavier and Moreno-Noguer, Francesc}, title = {MEGA: Masked Generative Autoencoder for Human Mesh Recovery}, booktitle = {Proceedings of the Computer Vision and Pattern Recognition Conference (CVPR)}, ...
Human Mesh Recovery (HMR) from a single RGB image is a highly ambiguous problem, as an infinite set of 3D interpretations can explain the 2D observation equally well. Nevertheless, most HMR methods overlook this issue and make a single prediction without accounting for this ambiguity. A few approaches generate a distri...
[ 0.004536731634289026, 0.006023396737873554, -0.013352444395422935, 0.047616615891456604, 0.03867431730031967, 0.03717968612909317, 0.021584564819931984, -0.011334292590618134, -0.06159845367074013, -0.059503812342882156, -0.023006319999694824, -0.040928371250629425, -0.05842859297990799, 0...
206
PBR-NeRF: Inverse Rendering with Physics-Based Neural Fields
[ "Sean Wu", "Shamik Basu", "Tim Broedermann", "Luc Van Gool", "Christos Sakaridis" ]
https://openaccess.thecvf.com/content/CVPR2025/html/Wu_PBR-NeRF_Inverse_Rendering_with_Physics-Based_Neural_Fields_CVPR_2025_paper.html
https://openaccess.thecvf.com/content/CVPR2025/papers/Wu_PBR-NeRF_Inverse_Rendering_with_Physics-Based_Neural_Fields_CVPR_2025_paper.pdf
https://openaccess.thecvf.com/content/CVPR2025/supplemental/Wu_PBR-NeRF_Inverse_Rendering_CVPR_2025_supplemental.pdf
null
@InProceedings{Wu_2025_CVPR, author = {Wu, Sean and Basu, Shamik and Broedermann, Tim and Van Gool, Luc and Sakaridis, Christos}, title = {PBR-NeRF: Inverse Rendering with Physics-Based Neural Fields}, booktitle = {Proceedings of the Computer Vision and Pattern Recognition Conference (CVPR)}, mon...
We tackle the ill-posed inverse rendering problem in 3D reconstruction with a Neural Radiance Field (NeRF) approach informed by Physics-Based Rendering (PBR) theory, named PBR-NeRF. Our method addresses a key limitation in most NeRF and 3D Gaussian Splatting approaches: they estimate view-dependent appearance without m...
[ 0.011990179307758808, 0.003391870530322194, 0.0092148557305336, 0.005260528065264225, 0.0411047600209713, 0.017425065860152245, -0.011398454196751118, -0.008745081722736359, -0.046462930738925934, -0.06780141592025757, -0.03683864325284958, -0.0028327887412160635, -0.03607999160885811, 0.0...
207
Disentangling Safe and Unsafe Image Corruptions via Anisotropy and Locality
[ "Ramchandran Muthukumar", "Ambar Pal", "Jeremias Sulam", "Rene Vidal" ]
https://openaccess.thecvf.com/content/CVPR2025/html/Muthukumar_Disentangling_Safe_and_Unsafe_Image_Corruptions_via_Anisotropy_and_Locality_CVPR_2025_paper.html
https://openaccess.thecvf.com/content/CVPR2025/papers/Muthukumar_Disentangling_Safe_and_Unsafe_Image_Corruptions_via_Anisotropy_and_Locality_CVPR_2025_paper.pdf
https://openaccess.thecvf.com/content/CVPR2025/supplemental/Muthukumar_Disentangling_Safe_and_CVPR_2025_supplemental.pdf
null
@InProceedings{Muthukumar_2025_CVPR, author = {Muthukumar, Ramchandran and Pal, Ambar and Sulam, Jeremias and Vidal, Rene}, title = {Disentangling Safe and Unsafe Image Corruptions via Anisotropy and Locality}, booktitle = {Proceedings of the Computer Vision and Pattern Recognition Conference (CVPR)}...
State-of-the-art machine learning systems are vulnerable to small perturbations to their input, where _small_ is defined according to a threat model that assigns a positive threat to each perturbation. Most prior works define a task-agnostic, isotropic, and global threat, like the l_p norm, where the magnitude of the p...
[ -0.005701721180230379, -0.008610687218606472, -0.028945479542016983, 0.057002052664756775, 0.021077653393149376, 0.021217968314886093, 0.02588815987110138, -0.01648126356303692, -0.03843122348189354, -0.06661096215248108, -0.0465632826089859, 0.002647948218509555, -0.07068509608507156, 0.0...
208
Prometheus: 3D-Aware Latent Diffusion Models for Feed-Forward Text-to-3D Scene Generation
[ "Yuanbo Yang", "Jiahao Shao", "Xinyang Li", "Yujun Shen", "Andreas Geiger", "Yiyi Liao" ]
https://openaccess.thecvf.com/content/CVPR2025/html/Yang_Prometheus_3D-Aware_Latent_Diffusion_Models_for_Feed-Forward_Text-to-3D_Scene_Generation_CVPR_2025_paper.html
https://openaccess.thecvf.com/content/CVPR2025/papers/Yang_Prometheus_3D-Aware_Latent_Diffusion_Models_for_Feed-Forward_Text-to-3D_Scene_Generation_CVPR_2025_paper.pdf
https://openaccess.thecvf.com/content/CVPR2025/supplemental/Yang_Prometheus_3D-Aware_Latent_CVPR_2025_supplemental.pdf
2412.21117
@InProceedings{Yang_2025_CVPR, author = {Yang, Yuanbo and Shao, Jiahao and Li, Xinyang and Shen, Yujun and Geiger, Andreas and Liao, Yiyi}, title = {Prometheus: 3D-Aware Latent Diffusion Models for Feed-Forward Text-to-3D Scene Generation}, booktitle = {Proceedings of the Computer Vision and Pattern ...
In this work, we introduce Prometheus, a 3D-aware latent diffusion model for text-to-3D generation at both object and scene levels in seconds. We formulate 3D scene generation as multi-view, feed-forward, pixel-aligned 3D Gaussian generation within the latent diffusion paradigm. To ensure generalizability, we build our...
[ 0.014857119880616665, -0.005255554337054491, -0.02349679358303547, 0.054808542132377625, 0.028379399329423904, 0.03123418055474758, 0.007514559663832188, 0.03963829576969147, -0.024596625939011574, -0.038417477160692215, -0.015569259412586689, -0.015470200218260288, -0.03913002833724022, 0...
209
No Pains, More Gains: Recycling Sub-Salient Patches for Efficient High-Resolution Image Recognition
[ "Rong Qin", "Xin Liu", "Xingyu Liu", "Jiaxuan Liu", "Jinglei Shi", "Liang Lin", "Jufeng Yang" ]
https://openaccess.thecvf.com/content/CVPR2025/html/Qin_No_Pains_More_Gains_Recycling_Sub-Salient_Patches_for_Efficient_High-Resolution_CVPR_2025_paper.html
https://openaccess.thecvf.com/content/CVPR2025/papers/Qin_No_Pains_More_Gains_Recycling_Sub-Salient_Patches_for_Efficient_High-Resolution_CVPR_2025_paper.pdf
null
null
@InProceedings{Qin_2025_CVPR, author = {Qin, Rong and Liu, Xin and Liu, Xingyu and Liu, Jiaxuan and Shi, Jinglei and Lin, Liang and Yang, Jufeng}, title = {No Pains, More Gains: Recycling Sub-Salient Patches for Efficient High-Resolution Image Recognition}, booktitle = {Proceedings of the Computer Vi...
Over the last decade, many notable methods have emerged to tackle the computational resource challenge of high-resolution image recognition (HRIR). They typically focus on identifying and aggregating a few salient regions for classification, discarding sub-salient areas for low training consumption. Nevertheless, m...
[ 0.00482985470443964, -0.016053233295679092, 0.011150497943162918, 0.03415684401988983, 0.008532625623047352, 0.026432018727064133, 0.0011231041280552745, 0.004609824623912573, -0.012889635749161243, -0.048123087733983994, -0.04385359212756157, 0.012433010153472424, -0.08006522059440613, -0...
210
SphereUFormer: A U-Shaped Transformer for Spherical 360 Perception
[ "Yaniv Benny", "Lior Wolf" ]
https://openaccess.thecvf.com/content/CVPR2025/html/Benny_SphereUFormer_A_U-Shaped_Transformer_for_Spherical_360_Perception_CVPR_2025_paper.html
https://openaccess.thecvf.com/content/CVPR2025/papers/Benny_SphereUFormer_A_U-Shaped_Transformer_for_Spherical_360_Perception_CVPR_2025_paper.pdf
https://openaccess.thecvf.com/content/CVPR2025/supplemental/Benny_SphereUFormer_A_U-Shaped_CVPR_2025_supplemental.pdf
null
@InProceedings{Benny_2025_CVPR, author = {Benny, Yaniv and Wolf, Lior}, title = {SphereUFormer: A U-Shaped Transformer for Spherical 360 Perception}, booktitle = {Proceedings of the Computer Vision and Pattern Recognition Conference (CVPR)}, month = {June}, year = {2025}, pages ...
This paper proposes a novel method for omnidirectional 360° perception. Most previous methods relied on equirectangular projection. This representation is easily applicable to 2D operation layers but introduces distortions into the image. Other methods attempted to remove the distortions by maintaining a s...
[ 0.022452043369412422, 0.006465307902544737, 0.05032726004719734, 0.0023853310849517584, 0.013048972003161907, 0.03852800279855728, 0.0029426319524645805, 0.022301651537418365, -0.04204349219799042, -0.04437849670648575, -0.031882453709840775, 0.008630635216832161, -0.07142975181341171, 0.0...
211
Advancing Generalizable Tumor Segmentation with Anomaly-Aware Open-Vocabulary Attention Maps and Frozen Foundation Diffusion Models
[ "Yankai Jiang", "Peng Zhang", "Donglin Yang", "Yuan Tian", "Hai Lin", "Xiaosong Wang" ]
https://openaccess.thecvf.com/content/CVPR2025/html/Jiang_Advancing_Generalizable_Tumor_Segmentation_with_Anomaly-Aware_Open-Vocabulary_Attention_Maps_and_CVPR_2025_paper.html
https://openaccess.thecvf.com/content/CVPR2025/papers/Jiang_Advancing_Generalizable_Tumor_Segmentation_with_Anomaly-Aware_Open-Vocabulary_Attention_Maps_and_CVPR_2025_paper.pdf
https://openaccess.thecvf.com/content/CVPR2025/supplemental/Jiang_Advancing_Generalizable_Tumor_CVPR_2025_supplemental.pdf
2505.02753
@InProceedings{Jiang_2025_CVPR, author = {Jiang, Yankai and Zhang, Peng and Yang, Donglin and Tian, Yuan and Lin, Hai and Wang, Xiaosong}, title = {Advancing Generalizable Tumor Segmentation with Anomaly-Aware Open-Vocabulary Attention Maps and Frozen Foundation Diffusion Models}, booktitle = {Procee...
We explore Generalizable Tumor Segmentation, aiming to train a single model for zero-shot tumor segmentation across diverse anatomical regions. Existing methods face limitations related to segmentation quality, scalability, and the range of applicable imaging modalities. In this paper, we uncover the potential of the i...
[ 0.015169106423854828, -0.034786030650138855, -0.012129280716180801, 0.044087573885917664, 0.06297370046377182, 0.03398733213543892, 0.03657621145248413, 0.015838604420423508, -0.022438321262598038, -0.050889089703559875, -0.009099899791181087, -0.015260311774909496, -0.0407574325799942, 0....
212
Towards Generalizable Scene Change Detection
[ "Jae-Woo Kim", "Ue-Hwan Kim" ]
https://openaccess.thecvf.com/content/CVPR2025/html/Kim_Towards_Generalizable_Scene_Change_Detection_CVPR_2025_paper.html
https://openaccess.thecvf.com/content/CVPR2025/papers/Kim_Towards_Generalizable_Scene_Change_Detection_CVPR_2025_paper.pdf
https://openaccess.thecvf.com/content/CVPR2025/supplemental/Kim_Towards_Generalizable_Scene_CVPR_2025_supplemental.pdf
2409.06214
@InProceedings{Kim_2025_CVPR, author = {Kim, Jae-Woo and Kim, Ue-Hwan}, title = {Towards Generalizable Scene Change Detection}, booktitle = {Proceedings of the Computer Vision and Pattern Recognition Conference (CVPR)}, month = {June}, year = {2025}, pages = {24463-24473} }
While current state-of-the-art Scene Change Detection (SCD) approaches achieve impressive results in well-trained research data, they become unreliable under unseen environments and different temporal conditions; in-domain performance drops from 77.6% to 8.0% in a previously unseen environment and to 4.6% under a diffe...
[ 0.029063932597637177, -0.05559027940034866, 0.02005658484995365, 0.0413285531103611, 0.0473046712577343, 0.022253159433603287, 0.02654946595430374, 0.04716145619750023, -0.02357395552098751, -0.06265207380056381, -0.030837390571832657, -0.0038042746018618345, -0.06856659799814224, 0.005281...
213
Beyond Clean Training Data: A Versatile and Model-Agnostic Framework for Out-of-Distribution Detection with Contaminated Training Data
[ "Yuchuan Li", "Jae-Mo Kang", "Il-Min Kim" ]
https://openaccess.thecvf.com/content/CVPR2025/html/Li_Beyond_Clean_Training_Data_A_Versatile_and_Model-Agnostic_Framework_for_CVPR_2025_paper.html
https://openaccess.thecvf.com/content/CVPR2025/papers/Li_Beyond_Clean_Training_Data_A_Versatile_and_Model-Agnostic_Framework_for_CVPR_2025_paper.pdf
https://openaccess.thecvf.com/content/CVPR2025/supplemental/Li_Beyond_Clean_Training_CVPR_2025_supplemental.pdf
null
@InProceedings{Li_2025_CVPR, author = {Li, Yuchuan and Kang, Jae-Mo and Kim, Il-Min}, title = {Beyond Clean Training Data: A Versatile and Model-Agnostic Framework for Out-of-Distribution Detection with Contaminated Training Data}, booktitle = {Proceedings of the Computer Vision and Pattern Recogniti...
In real-world AI applications, training datasets are often contaminated, containing a mix of in-distribution (ID) and out-of-distribution (OOD) samples without labels. This contamination poses a significant challenge for developing and training OOD detection models, as nearly all existing methods assume access to a cle...
[ 0.026855241507291794, -0.029096662998199463, -0.013974003493785858, 0.054254379123449326, 0.048858679831027985, -0.0017265193164348602, -0.00801454670727253, -0.0029140887781977654, -0.04091973230242729, -0.036622628569602966, -0.03070727363228798, 0.007415554486215115, -0.09814867377281189,...
214
Incomplete Multi-modal Brain Tumor Segmentation via Learnable Sorting State Space Model
[ "Zheyu Zhang", "Yayuan Lu", "Feipeng Ma", "Yueyi Zhang", "Huanjing Yue", "Xiaoyan Sun" ]
https://openaccess.thecvf.com/content/CVPR2025/html/Zhang_Incomplete_Multi-modal_Brain_Tumor_Segmentation_via_Learnable_Sorting_State_Space_CVPR_2025_paper.html
https://openaccess.thecvf.com/content/CVPR2025/papers/Zhang_Incomplete_Multi-modal_Brain_Tumor_Segmentation_via_Learnable_Sorting_State_Space_CVPR_2025_paper.pdf
https://openaccess.thecvf.com/content/CVPR2025/supplemental/Zhang_Incomplete_Multi-modal_Brain_CVPR_2025_supplemental.pdf
null
@InProceedings{Zhang_2025_CVPR, author = {Zhang, Zheyu and Lu, Yayuan and Ma, Feipeng and Zhang, Yueyi and Yue, Huanjing and Sun, Xiaoyan}, title = {Incomplete Multi-modal Brain Tumor Segmentation via Learnable Sorting State Space Model}, booktitle = {Proceedings of the Computer Vision and Pattern Re...
Brain tumor segmentation plays a crucial role in clinical diagnosis, yet the frequent unavailability of certain MRI modalities poses a significant challenge. In this paper, we introduce the Learnable Sorting State Space Model (LS3M), a novel framework designed to maximize the utilization of available modalities for bra...
[ -0.033843353390693665, -0.03042183443903923, 0.005534459371119738, 0.011499186977744102, 0.05911954864859581, 0.03498765453696251, 0.002260236069560051, 0.0033393241465091705, -0.03681847080588341, -0.05933568999171257, 0.002374105155467987, 0.01139169279485941, -0.03850070387125015, 0.028...
215
FedAWA: Adaptive Optimization of Aggregation Weights in Federated Learning Using Client Vectors
[ "Changlong Shi", "He Zhao", "Bingjie Zhang", "Mingyuan Zhou", "Dandan Guo", "Yi Chang" ]
https://openaccess.thecvf.com/content/CVPR2025/html/Shi_FedAWA_Adaptive_Optimization_of_Aggregation_Weights_in_Federated_Learning_Using_CVPR_2025_paper.html
https://openaccess.thecvf.com/content/CVPR2025/papers/Shi_FedAWA_Adaptive_Optimization_of_Aggregation_Weights_in_Federated_Learning_Using_CVPR_2025_paper.pdf
https://openaccess.thecvf.com/content/CVPR2025/supplemental/Shi_FedAWA_Adaptive_Optimization_CVPR_2025_supplemental.pdf
2503.15842
@InProceedings{Shi_2025_CVPR, author = {Shi, Changlong and Zhao, He and Zhang, Bingjie and Zhou, Mingyuan and Guo, Dandan and Chang, Yi}, title = {FedAWA: Adaptive Optimization of Aggregation Weights in Federated Learning Using Client Vectors}, booktitle = {Proceedings of the Computer Vision and Patt...
Federated Learning (FL) has emerged as a promising framework for distributed machine learning, enabling collaborative model training without sharing local data, thereby preserving privacy and enhancing security. However, data heterogeneity resulting from differences across user behaviors, preferences, and device charac...
[ -0.009427973069250584, -0.060269180685281754, 0.010648953728377819, 0.0317407101392746, 0.0322764590382576, 0.020014842972159386, 0.030394235625863075, -0.02642315812408924, -0.024344811215996742, -0.046871282160282135, -0.015719132497906685, -0.007129562087357044, -0.06410956382751465, 0....
216
FreeUV: Ground-Truth-Free Realistic Facial UV Texture Recovery via Cross-Assembly Inference Strategy
[ "Xingchao Yang", "Takafumi Taketomi", "Yuki Endo", "Yoshihiro Kanamori" ]
https://openaccess.thecvf.com/content/CVPR2025/html/Yang_FreeUV_Ground-Truth-Free_Realistic_Facial_UV_Texture_Recovery_via_Cross-Assembly_Inference_CVPR_2025_paper.html
https://openaccess.thecvf.com/content/CVPR2025/papers/Yang_FreeUV_Ground-Truth-Free_Realistic_Facial_UV_Texture_Recovery_via_Cross-Assembly_Inference_CVPR_2025_paper.pdf
https://openaccess.thecvf.com/content/CVPR2025/supplemental/Yang_FreeUV_Ground-Truth-Free_Realistic_CVPR_2025_supplemental.pdf
2503.17197
@InProceedings{Yang_2025_CVPR, author = {Yang, Xingchao and Taketomi, Takafumi and Endo, Yuki and Kanamori, Yoshihiro}, title = {FreeUV: Ground-Truth-Free Realistic Facial UV Texture Recovery via Cross-Assembly Inference Strategy}, booktitle = {Proceedings of the Computer Vision and Pattern Recogniti...
Recovering high-quality 3D facial textures from single-view 2D images is a challenging task, especially under constraints of limited data and complex facial details such as makeup, wrinkles, and occlusions. In this paper, we introduce FreeUV, a novel ground-truth-free UV texture recovery framework that eliminates the n...
[ -0.00806409865617752, -0.00555754080414772, 0.0013116702903062105, 0.008827322162687778, 0.041102685034275055, 0.04268896207213402, 0.015651144087314606, 0.026406018063426018, -0.002173567656427622, -0.08029721677303314, -0.0017539911204949021, 0.008925740607082844, -0.06965824216604233, 0...
217
HarmonySet: A Comprehensive Dataset for Understanding Video-Music Semantic Alignment and Temporal Synchronization
[ "Zitang Zhou", "Ke Mei", "Yu Lu", "Tianyi Wang", "Fengyun Rao" ]
https://openaccess.thecvf.com/content/CVPR2025/html/Zhou_HarmonySet_A_Comprehensive_Dataset_for_Understanding_Video-Music_Semantic_Alignment_and_CVPR_2025_paper.html
https://openaccess.thecvf.com/content/CVPR2025/papers/Zhou_HarmonySet_A_Comprehensive_Dataset_for_Understanding_Video-Music_Semantic_Alignment_and_CVPR_2025_paper.pdf
https://openaccess.thecvf.com/content/CVPR2025/supplemental/Zhou_HarmonySet_A_Comprehensive_CVPR_2025_supplemental.pdf
2503.01725
@InProceedings{Zhou_2025_CVPR, author = {Zhou, Zitang and Mei, Ke and Lu, Yu and Wang, Tianyi and Rao, Fengyun}, title = {HarmonySet: A Comprehensive Dataset for Understanding Video-Music Semantic Alignment and Temporal Synchronization}, booktitle = {Proceedings of the Computer Vision and Pattern Rec...
This paper introduces HarmonySet, a comprehensive dataset designed to advance video-music understanding. HarmonySet consists of 48,328 diverse video-music pairs, annotated with detailed information on rhythmic synchronization, emotional alignment, thematic coherence, and cultural relevance. We propose a multi-step huma...
[ 0.042730312794446945, -0.026085564866662025, -0.01146517600864172, 0.06377121806144714, 0.022840917110443115, -0.0038476837798953056, 0.047990232706069946, 0.010240735486149788, -0.019046178087592125, -0.06072573363780975, -0.036245204508304596, 0.003514067502692342, -0.061113111674785614, ...
218
Rethinking Diffusion for Text-Driven Human Motion Generation: Redundant Representations, Evaluation, and Masked Autoregression
[ "Zichong Meng", "Yiming Xie", "Xiaogang Peng", "Zeyu Han", "Huaizu Jiang" ]
https://openaccess.thecvf.com/content/CVPR2025/html/Meng_Rethinking_Diffusion_for_Text-Driven_Human_Motion_Generation_Redundant_Representations_Evaluation_CVPR_2025_paper.html
https://openaccess.thecvf.com/content/CVPR2025/papers/Meng_Rethinking_Diffusion_for_Text-Driven_Human_Motion_Generation_Redundant_Representations_Evaluation_CVPR_2025_paper.pdf
https://openaccess.thecvf.com/content/CVPR2025/supplemental/Meng_Rethinking_Diffusion_for_CVPR_2025_supplemental.zip
null
@InProceedings{Meng_2025_CVPR, author = {Meng, Zichong and Xie, Yiming and Peng, Xiaogang and Han, Zeyu and Jiang, Huaizu}, title = {Rethinking Diffusion for Text-Driven Human Motion Generation: Redundant Representations, Evaluation, and Masked Autoregression}, booktitle = {Proceedings of the Compute...
Since 2023, Vector Quantization (VQ)-based discrete generation methods have rapidly dominated human motion generation, primarily surpassing diffusion-based continuous generation methods in standard performance metrics. However, VQ-based methods have inherent limitations. Representing continuous motion data as limited d...
[ -0.006644322071224451, -0.025934210047125816, -0.0047143069095909595, 0.03800554573535919, 0.054555878043174744, 0.03197932988405228, 0.052061546593904495, -0.0015476997941732407, -0.03597096726298332, -0.05723801627755165, -0.01513315923511982, -0.04577096179127693, -0.04315093532204628, ...
219
StyleMaster: Stylize Your Video with Artistic Generation and Translation
[ "Zixuan Ye", "Huijuan Huang", "Xintao Wang", "Pengfei Wan", "Di Zhang", "Wenhan Luo" ]
https://openaccess.thecvf.com/content/CVPR2025/html/Ye_StyleMaster_Stylize_Your_Video_with_Artistic_Generation_and_Translation_CVPR_2025_paper.html
https://openaccess.thecvf.com/content/CVPR2025/papers/Ye_StyleMaster_Stylize_Your_Video_with_Artistic_Generation_and_Translation_CVPR_2025_paper.pdf
https://openaccess.thecvf.com/content/CVPR2025/supplemental/Ye_StyleMaster_Stylize_Your_CVPR_2025_supplemental.pdf
2412.07744
@InProceedings{Ye_2025_CVPR, author = {Ye, Zixuan and Huang, Huijuan and Wang, Xintao and Wan, Pengfei and Zhang, Di and Luo, Wenhan}, title = {StyleMaster: Stylize Your Video with Artistic Generation and Translation}, booktitle = {Proceedings of the Computer Vision and Pattern Recognition Conference...
Style control has been popular in video generation models. Existing methods often generate videos far from the given style, cause content leakage, and struggle to transfer one video to the desired style. Our first observation is that the style extraction stage matters, whereas existing methods emphasize global style bu...
[ 0.047065459191799164, -0.019477317109704018, 0.008755385875701904, 0.06007889658212662, 0.052955903112888336, 0.007341619115322828, 0.0053929840214550495, -0.0010231523774564266, -0.00945141538977623, -0.06313904374837875, -0.04031580686569214, -0.006200672592967749, -0.05020138621330261, ...
220
Unsupervised Continual Domain Shift Learning with Multi-Prototype Modeling
[ "Haopeng Sun", "Yingwei Zhang", "Lumin Xu", "Sheng Jin", "Ping Luo", "Chen Qian", "Wentao Liu", "Yiqiang Chen" ]
https://openaccess.thecvf.com/content/CVPR2025/html/Sun_Unsupervised_Continual_Domain_Shift_Learning_with_Multi-Prototype_Modeling_CVPR_2025_paper.html
https://openaccess.thecvf.com/content/CVPR2025/papers/Sun_Unsupervised_Continual_Domain_Shift_Learning_with_Multi-Prototype_Modeling_CVPR_2025_paper.pdf
https://openaccess.thecvf.com/content/CVPR2025/supplemental/Sun_Unsupervised_Continual_Domain_CVPR_2025_supplemental.pdf
null
@InProceedings{Sun_2025_CVPR, author = {Sun, Haopeng and Zhang, Yingwei and Xu, Lumin and Jin, Sheng and Luo, Ping and Qian, Chen and Liu, Wentao and Chen, Yiqiang}, title = {Unsupervised Continual Domain Shift Learning with Multi-Prototype Modeling}, booktitle = {Proceedings of the Computer Vision a...
In real-world applications, deep neural networks may encounter constantly changing environments, where the test data originates from continually shifting unlabeled target domains. This problem, known as Unsupervised Continual Domain Shift Learning (UCDSL), poses practical difficulties. Existing methods for UCDSL aim to...
[ -0.030562257394194603, -0.0327087864279747, -0.01987999491393566, 0.026470208540558815, 0.023705974221229553, 0.008312683552503586, 0.0362793430685997, -0.0017693801783025265, -0.002314977580681443, -0.023148078471422195, -0.004566062707453966, 0.016419479623436928, -0.05898453667759895, 0...
221
OmniGuard: Hybrid Manipulation Localization via Augmented Versatile Deep Image Watermarking
[ "Xuanyu Zhang", "Zecheng Tang", "Zhipei Xu", "Runyi Li", "Youmin Xu", "Bin Chen", "Feng Gao", "Jian Zhang" ]
https://openaccess.thecvf.com/content/CVPR2025/html/Zhang_OmniGuard_Hybrid_Manipulation_Localization_via_Augmented_Versatile_Deep_Image_Watermarking_CVPR_2025_paper.html
https://openaccess.thecvf.com/content/CVPR2025/papers/Zhang_OmniGuard_Hybrid_Manipulation_Localization_via_Augmented_Versatile_Deep_Image_Watermarking_CVPR_2025_paper.pdf
https://openaccess.thecvf.com/content/CVPR2025/supplemental/Zhang_OmniGuard_Hybrid_Manipulation_CVPR_2025_supplemental.pdf
2412.01615
@InProceedings{Zhang_2025_CVPR, author = {Zhang, Xuanyu and Tang, Zecheng and Xu, Zhipei and Li, Runyi and Xu, Youmin and Chen, Bin and Gao, Feng and Zhang, Jian}, title = {OmniGuard: Hybrid Manipulation Localization via Augmented Versatile Deep Image Watermarking}, booktitle = {Proceedings of the Co...
With the rapid growth of generative AI and its widespread application in image editing, new risks have emerged regarding the authenticity and integrity of digital content. Existing versatile watermarking approaches suffer from trade-offs between tamper localization precision and visual quality. Constrained by the limit...
[ 0.012864943593740463, -0.024495409801602364, 0.009664363227784634, 0.07234694808721542, 0.05560298264026642, -0.007798710837960243, 0.014639279805123806, 0.0008325534290634096, -0.02999858185648918, -0.04867325723171234, -0.003993315622210503, -0.03475184738636017, -0.049163561314344406, -...
222
Open-Canopy: Towards Very High Resolution Forest Monitoring
[ "Fajwel Fogel", "Yohann Perron", "Nikola Besic", "Laurent Saint-André", "Agnès Pellissier-Tanon", "Martin Schwartz", "Thomas Boudras", "Ibrahim Fayad", "Alexandre d'Aspremont", "Loic Landrieu", "Philippe Ciais" ]
https://openaccess.thecvf.com/content/CVPR2025/html/Fogel_Open-Canopy_Towards_Very_High_Resolution_Forest_Monitoring_CVPR_2025_paper.html
https://openaccess.thecvf.com/content/CVPR2025/papers/Fogel_Open-Canopy_Towards_Very_High_Resolution_Forest_Monitoring_CVPR_2025_paper.pdf
https://openaccess.thecvf.com/content/CVPR2025/supplemental/Fogel_Open-Canopy_Towards_Very_CVPR_2025_supplemental.pdf
null
@InProceedings{Fogel_2025_CVPR, author = {Fogel, Fajwel and Perron, Yohann and Besic, Nikola and Saint-Andr\'e, Laurent and Pellissier-Tanon, Agn\`es and Schwartz, Martin and Boudras, Thomas and Fayad, Ibrahim and d'Aspremont, Alexandre and Landrieu, Loic and Ciais, Philippe}, title = {Open-Canopy: Towar...
Estimating canopy height and its changes at meter resolution from satellite imagery is a significant challenge in computer vision with critical environmental applications. However, the lack of open-access datasets at this resolution hinders the reproducibility and evaluation of models. We introduce Open-Canopy, the fir...
[ 0.00692854356020689, -0.033255770802497864, 0.013775709085166454, -0.011058970354497433, 0.024083174765110016, 0.01716349832713604, 0.050744686275720596, 0.02610710822045803, -0.048074763268232346, -0.05721702054142952, -0.01865178346633911, -0.025811467319726944, -0.08462369441986084, -0....
223
ClearSight: Visual Signal Enhancement for Object Hallucination Mitigation in Multimodal Large Language Models
[ "Hao Yin", "Guangzong Si", "Zilei Wang" ]
https://openaccess.thecvf.com/content/CVPR2025/html/Yin_ClearSight_Visual_Signal_Enhancement_for_Object_Hallucination_Mitigation_in_Multimodal_CVPR_2025_paper.html
https://openaccess.thecvf.com/content/CVPR2025/papers/Yin_ClearSight_Visual_Signal_Enhancement_for_Object_Hallucination_Mitigation_in_Multimodal_CVPR_2025_paper.pdf
https://openaccess.thecvf.com/content/CVPR2025/supplemental/Yin_ClearSight_Visual_Signal_CVPR_2025_supplemental.pdf
2503.13107
@InProceedings{Yin_2025_CVPR, author = {Yin, Hao and Si, Guangzong and Wang, Zilei}, title = {ClearSight: Visual Signal Enhancement for Object Hallucination Mitigation in Multimodal Large Language Models}, booktitle = {Proceedings of the Computer Vision and Pattern Recognition Conference (CVPR)}, ...
Contrastive decoding strategies are widely used to mitigate object hallucinations in multimodal large language models (MLLMs). By reducing over-reliance on language priors, these strategies ensure that generated content remains closely grounded in visual inputs, producing contextually accurate outputs. Since contrastiv...
[ 0.0300714410841465, 0.028010450303554535, 0.023464882746338844, 0.023536767810583115, 0.020870063453912735, 0.006196460220962763, 0.05964457243680954, 0.03891225531697273, -0.06549183279275894, -0.030564161017537117, -0.01674254611134529, 0.032980144023895264, -0.08700604736804962, 0.02489...
224
Stretching Each Dollar: Diffusion Training from Scratch on a Micro-Budget
[ "Vikash Sehwag", "Xianghao Kong", "Jingtao Li", "Michael Spranger", "Lingjuan Lyu" ]
https://openaccess.thecvf.com/content/CVPR2025/html/Sehwag_Stretching_Each_Dollar_Diffusion_Training_from_Scratch_on_a_Micro-Budget_CVPR_2025_paper.html
https://openaccess.thecvf.com/content/CVPR2025/papers/Sehwag_Stretching_Each_Dollar_Diffusion_Training_from_Scratch_on_a_Micro-Budget_CVPR_2025_paper.pdf
https://openaccess.thecvf.com/content/CVPR2025/supplemental/Sehwag_Stretching_Each_Dollar_CVPR_2025_supplemental.pdf
2407.15811
@InProceedings{Sehwag_2025_CVPR, author = {Sehwag, Vikash and Kong, Xianghao and Li, Jingtao and Spranger, Michael and Lyu, Lingjuan}, title = {Stretching Each Dollar: Diffusion Training from Scratch on a Micro-Budget}, booktitle = {Proceedings of the Computer Vision and Pattern Recognition Conferenc...
As scaling laws in generative AI push performance, they simultaneously concentrate the development of these models among actors with large computational resources. With a focus on text-to-image (T2I) generative models, we aim to unlock this bottleneck by demonstrating very low-cost training of large-scale T2I diffusion...
[ 0.018981320783495903, -0.028947053477168083, -0.03579595312476158, 0.07107442617416382, 0.042705636471509933, 0.05283577740192413, 0.008891469798982143, -0.008125987835228443, 0.01600610837340355, -0.04192766919732094, -0.0009395788074471056, 0.0011624133912846446, -0.055351000279188156, 0...
225
Guiding Human-Object Interactions with Rich Geometry and Relations
[ "Mengqing Xue", "Yifei Liu", "Ling Guo", "Shaoli Huang", "Changxing Ding" ]
https://openaccess.thecvf.com/content/CVPR2025/html/Xue_Guiding_Human-Object_Interactions_with_Rich_Geometry_and_Relations_CVPR_2025_paper.html
https://openaccess.thecvf.com/content/CVPR2025/papers/Xue_Guiding_Human-Object_Interactions_with_Rich_Geometry_and_Relations_CVPR_2025_paper.pdf
https://openaccess.thecvf.com/content/CVPR2025/supplemental/Xue_Guiding_Human-Object_Interactions_CVPR_2025_supplemental.pdf
2503.20172
@InProceedings{Xue_2025_CVPR, author = {Xue, Mengqing and Liu, Yifei and Guo, Ling and Huang, Shaoli and Ding, Changxing}, title = {Guiding Human-Object Interactions with Rich Geometry and Relations}, booktitle = {Proceedings of the Computer Vision and Pattern Recognition Conference (CVPR)}, mont...
Human-object interaction (HOI) synthesis is crucial for creating immersive and realistic experiences for applications such as virtual reality. Existing methods often rely on simplified object representations, such as the object's centroid or the nearest point to a human, to achieve physically plausible motions. However...
[ -0.02876216359436512, 0.04426732659339905, 0.01700487919151783, 0.004518839530646801, 0.04274739325046539, 0.025936894118785858, 0.003304191632196307, 0.01668611727654934, -0.020467709749937057, -0.0406535342335701, -0.03990122303366661, -0.009989835321903229, -0.051884230226278305, -0.006...
226
TacoDepth: Towards Efficient Radar-Camera Depth Estimation with One-stage Fusion
[ "Yiran Wang", "Jiaqi Li", "Chaoyi Hong", "Ruibo Li", "Liusheng Sun", "Xiao Song", "Zhe Wang", "Zhiguo Cao", "Guosheng Lin" ]
https://openaccess.thecvf.com/content/CVPR2025/html/Wang_TacoDepth_Towards_Efficient_Radar-Camera_Depth_Estimation_with_One-stage_Fusion_CVPR_2025_paper.html
https://openaccess.thecvf.com/content/CVPR2025/papers/Wang_TacoDepth_Towards_Efficient_Radar-Camera_Depth_Estimation_with_One-stage_Fusion_CVPR_2025_paper.pdf
https://openaccess.thecvf.com/content/CVPR2025/supplemental/Wang_TacoDepth_Towards_Efficient_CVPR_2025_supplemental.pdf
2504.11773
@InProceedings{Wang_2025_CVPR, author = {Wang, Yiran and Li, Jiaqi and Hong, Chaoyi and Li, Ruibo and Sun, Liusheng and Song, Xiao and Wang, Zhe and Cao, Zhiguo and Lin, Guosheng}, title = {TacoDepth: Towards Efficient Radar-Camera Depth Estimation with One-stage Fusion}, booktitle = {Proceedings of ...
Radar-Camera depth estimation aims to predict dense and accurate metric depth by fusing input images and Radar data. Model efficiency is crucial for this task in pursuit of real-time processing on autonomous vehicles and robotic platforms. However, due to the sparsity of Radar returns, the prevailing methods adopt mult...
[ 0.015141752548515797, 0.013074786402285099, 0.011464529670774937, 0.018696250393986702, 0.031323812901973724, 0.02349211648106575, 0.015456803143024445, 0.024247484281659126, -0.014650811441242695, -0.07193296402692795, 0.002057728124782443, -0.04536452516913414, -0.03383077681064606, -0.0...
227
Physical Plausibility-aware Trajectory Prediction via Locomotion Embodiment
[ "Hiromu Taketsugu", "Takeru Oba", "Takahiro Maeda", "Shohei Nobuhara", "Norimichi Ukita" ]
https://openaccess.thecvf.com/content/CVPR2025/html/Taketsugu_Physical_Plausibility-aware_Trajectory_Prediction_via_Locomotion_Embodiment_CVPR_2025_paper.html
https://openaccess.thecvf.com/content/CVPR2025/papers/Taketsugu_Physical_Plausibility-aware_Trajectory_Prediction_via_Locomotion_Embodiment_CVPR_2025_paper.pdf
https://openaccess.thecvf.com/content/CVPR2025/supplemental/Taketsugu_Physical_Plausibility-aware_Trajectory_CVPR_2025_supplemental.pdf
2503.17267
@InProceedings{Taketsugu_2025_CVPR, author = {Taketsugu, Hiromu and Oba, Takeru and Maeda, Takahiro and Nobuhara, Shohei and Ukita, Norimichi}, title = {Physical Plausibility-aware Trajectory Prediction via Locomotion Embodiment}, booktitle = {Proceedings of the Computer Vision and Pattern Recognitio...
Humans can predict future human trajectories even from momentary observations by using human pose-related cues. However, previous Human Trajectory Prediction (HTP) methods leverage the pose cues implicitly, resulting in implausible predictions. To address this, we propose Locomotion Embodiment, a framework that explici...
[ 0.018967140465974808, -0.009371118620038033, -0.030012117698788643, 0.03405401110649109, 0.04782428592443466, 0.0002912568161264062, 0.01609494537115097, 0.000820489542093128, -0.043800003826618195, -0.05136984959244728, -0.028934461995959282, -0.03838718309998512, -0.06186879798769951, -0...
228
CADDreamer: CAD Object Generation from Single-view Images
[ "Yuan Li", "Cheng Lin", "Yuan Liu", "Xiaoxiao Long", "Chenxu Zhang", "Ningna Wang", "Xin Li", "Wenping Wang", "Xiaohu Guo" ]
https://openaccess.thecvf.com/content/CVPR2025/html/Li_CADDreamer_CAD_Object_Generation_from_Single-view_Images_CVPR_2025_paper.html
https://openaccess.thecvf.com/content/CVPR2025/papers/Li_CADDreamer_CAD_Object_Generation_from_Single-view_Images_CVPR_2025_paper.pdf
https://openaccess.thecvf.com/content/CVPR2025/supplemental/Li_CADDreamer_CAD_Object_CVPR_2025_supplemental.pdf
2502.20732
@InProceedings{Li_2025_CVPR, author = {Li, Yuan and Lin, Cheng and Liu, Yuan and Long, Xiaoxiao and Zhang, Chenxu and Wang, Ningna and Li, Xin and Wang, Wenping and Guo, Xiaohu}, title = {CADDreamer: CAD Object Generation from Single-view Images}, booktitle = {Proceedings of the Computer Vision and P...
The field of diffusion-based 3D generation has experienced tremendous progress in recent times. However, existing 3D generative models often produce overly dense and unstructured meshes, which are in stark contrast to the compact, structured and clear-edged CAD models created by human modelers. We introduce CADDreamer,...
[ 0.007919395342469215, 0.030600182712078094, -0.033440422266721725, 0.05636539310216904, 0.0362301804125309, 0.04695422574877739, -0.005140854977071285, -0.004551883786916733, 0.006724287755787373, -0.07192199677228928, -0.022798139601945877, -0.04118141531944275, -0.04252815619111061, 0.04...
229
Vision-Language Model IP Protection via Prompt-based Learning
[ "Lianyu Wang", "Meng Wang", "Huazhu Fu", "Daoqiang Zhang" ]
https://openaccess.thecvf.com/content/CVPR2025/html/Wang_Vision-Language_Model_IP_Protection_via_Prompt-based_Learning_CVPR_2025_paper.html
https://openaccess.thecvf.com/content/CVPR2025/papers/Wang_Vision-Language_Model_IP_Protection_via_Prompt-based_Learning_CVPR_2025_paper.pdf
https://openaccess.thecvf.com/content/CVPR2025/supplemental/Wang_Vision-Language_Model_IP_CVPR_2025_supplemental.pdf
2503.02393
@InProceedings{Wang_2025_CVPR, author = {Wang, Lianyu and Wang, Meng and Fu, Huazhu and Zhang, Daoqiang}, title = {Vision-Language Model IP Protection via Prompt-based Learning}, booktitle = {Proceedings of the Computer Vision and Pattern Recognition Conference (CVPR)}, month = {June}, ye...
Vision-language models (VLMs) like CLIP (Contrastive Language-Image Pre-Training) have seen remarkable success in visual recognition, highlighting the increasing need to safeguard the intellectual property (IP) of well-trained models. Effective IP protection extends beyond ensuring authorized usage; it also necessitate...
[ 0.00886522326618433, -0.006715760100632906, 0.013063970021903515, 0.06631693989038467, 0.029947521165013313, -0.009714909829199314, 0.028896955773234367, -0.00959526002407074, -0.03903865069150925, -0.0017302508931607008, -0.05458088219165802, 0.007856826297938824, -0.06970205157995224, 0....
230
Where's the Liability in the Generative Era? Recovery-based Black-Box Detection of AI-Generated Content
[ "Haoyue Bai", "Yiyou Sun", "Wei Cheng", "Haifeng Chen" ]
https://openaccess.thecvf.com/content/CVPR2025/html/Bai_Wheres_the_Liability_in_the_Generative_Era_Recovery-based_Black-Box_Detection_CVPR_2025_paper.html
https://openaccess.thecvf.com/content/CVPR2025/papers/Bai_Wheres_the_Liability_in_the_Generative_Era_Recovery-based_Black-Box_Detection_CVPR_2025_paper.pdf
https://openaccess.thecvf.com/content/CVPR2025/supplemental/Bai_Wheres_the_Liability_CVPR_2025_supplemental.pdf
null
@InProceedings{Bai_2025_CVPR, author = {Bai, Haoyue and Sun, Yiyou and Cheng, Wei and Chen, Haifeng}, title = {Where's the Liability in the Generative Era? Recovery-based Black-Box Detection of AI-Generated Content}, booktitle = {Proceedings of the Computer Vision and Pattern Recognition Conference (...
The recent proliferation of photorealistic images created by generative models has sparked both excitement and concern, as these images are increasingly indistinguishable from real ones to the human eye. While offering new creative and commercial possibilities, the potential for misuse, such as in misinformation and fr...
[ 0.020326465368270874, -0.02898172102868557, -0.021511156111955643, 0.04816364124417305, 0.04768214002251625, 0.0219719335436821, 0.029534000903367996, 0.01223195530474186, -0.0325244776904583, -0.03918836638331413, -0.028752410784363747, -0.008703137747943401, -0.05445821210741997, 0.01279...
231
Kiss3DGen: Repurposing Image Diffusion Models for 3D Asset Generation
[ "Jiantao Lin", "Xin Yang", "Meixi Chen", "Yingjie Xu", "Dongyu Yan", "Leyi Wu", "Xinli Xu", "Lie Xu", "Shunsi Zhang", "Ying-Cong Chen" ]
https://openaccess.thecvf.com/content/CVPR2025/html/Lin_Kiss3DGen_Repurposing_Image_Diffusion_Models_for_3D_Asset_Generation_CVPR_2025_paper.html
https://openaccess.thecvf.com/content/CVPR2025/papers/Lin_Kiss3DGen_Repurposing_Image_Diffusion_Models_for_3D_Asset_Generation_CVPR_2025_paper.pdf
https://openaccess.thecvf.com/content/CVPR2025/supplemental/Lin_Kiss3DGen_Repurposing_Image_CVPR_2025_supplemental.pdf
2503.01370
@InProceedings{Lin_2025_CVPR, author = {Lin, Jiantao and Yang, Xin and Chen, Meixi and Xu, Yingjie and Yan, Dongyu and Wu, Leyi and Xu, Xinli and Xu, Lie and Zhang, Shunsi and Chen, Ying-Cong}, title = {Kiss3DGen: Repurposing Image Diffusion Models for 3D Asset Generation}, booktitle = {Proceedings o...
Diffusion models have achieved great success in generating 2D images. However, the quality and generalizability of 3D content generation remain limited. State-of-the-art methods often require large-scale 3D assets for training, which are challenging to collect. In this work, we introduce Kiss3DGen (Keep It Simple and S...
[ 0.008827662095427513, -0.00818755105137825, 0.009279982186853886, 0.03465238958597183, 0.04810118302702904, 0.039306677877902985, 0.005165325943380594, 0.0007790832314640284, -0.0016107583651319146, -0.052812933921813965, -0.009801236912608147, -0.039589665830135345, -0.03362793102860451, ...
232
DiTASK: Multi-Task Fine-Tuning with Diffeomorphic Transformations
[ "Krishna Sri Ipsit Mantri", "Carola-Bibiane Schönlieb", "Bruno Ribeiro", "Chaim Baskin", "Moshe Eliasof" ]
https://openaccess.thecvf.com/content/CVPR2025/html/Mantri_DiTASK_Multi-Task_Fine-Tuning_with_Diffeomorphic_Transformations_CVPR_2025_paper.html
https://openaccess.thecvf.com/content/CVPR2025/papers/Mantri_DiTASK_Multi-Task_Fine-Tuning_with_Diffeomorphic_Transformations_CVPR_2025_paper.pdf
https://openaccess.thecvf.com/content/CVPR2025/supplemental/Mantri_DiTASK_Multi-Task_Fine-Tuning_CVPR_2025_supplemental.pdf
null
@InProceedings{Mantri_2025_CVPR, author = {Mantri, Krishna Sri Ipsit and Sch\"onlieb, Carola-Bibiane and Ribeiro, Bruno and Baskin, Chaim and Eliasof, Moshe}, title = {DiTASK: Multi-Task Fine-Tuning with Diffeomorphic Transformations}, booktitle = {Proceedings of the Computer Vision and Pattern Recog...
Pre-trained Vision Transformers now serve as powerful tools for computer vision. Yet, efficiently adapting them for multiple tasks remains a challenge that arises from the need to modify the rich hidden representations encoded by the learned weight matrices, without inducing interference between tasks. Current paramete...
[ 0.02546880580484867, -0.016495805233716965, 0.005166052840650082, 0.030648279935121536, 0.020964426919817924, 0.049200937151908875, 0.004986958112567663, 0.01143655739724636, -0.01394551619887352, -0.053032971918582916, -0.0018157089361920953, -0.0017119511030614376, -0.07661900669336319, ...
233
OW-OVD: Unified Open World and Open Vocabulary Object Detection
[ "Xing Xi", "Yangyang Huang", "Ronghua Luo", "Yu Qiu" ]
https://openaccess.thecvf.com/content/CVPR2025/html/Xi_OW-OVD_Unified_Open_World_and_Open_Vocabulary_Object_Detection_CVPR_2025_paper.html
https://openaccess.thecvf.com/content/CVPR2025/papers/Xi_OW-OVD_Unified_Open_World_and_Open_Vocabulary_Object_Detection_CVPR_2025_paper.pdf
https://openaccess.thecvf.com/content/CVPR2025/supplemental/Xi_OW-OVD_Unified_Open_CVPR_2025_supplemental.pdf
null
@InProceedings{Xi_2025_CVPR, author = {Xi, Xing and Huang, Yangyang and Luo, Ronghua and Qiu, Yu}, title = {OW-OVD: Unified Open World and Open Vocabulary Object Detection}, booktitle = {Proceedings of the Computer Vision and Pattern Recognition Conference (CVPR)}, month = {June}, year ...
Open world perception expands traditional closed-set frameworks, which assume a predefined set of known categories, to encompass dynamic real-world environments. Open World Object Detection (OWOD) and Open Vocabulary Object Detection (OVD) are two main research directions, each addressing unique challenges in dynamic e...
[ -0.00887113157659769, 0.020094236359000206, 0.02777058258652687, 0.056372061371803284, 0.03158228099346161, 0.024650853127241135, 0.009034616872668266, 0.027207905426621437, -0.021111473441123962, -0.0370040237903595, -0.031607430428266525, 0.03817684203386307, -0.07481031119823456, -0.020...
234
Improving Diffusion Inverse Problem Solving with Decoupled Noise Annealing
[ "Bingliang Zhang", "Wenda Chu", "Julius Berner", "Chenlin Meng", "Anima Anandkumar", "Yang Song" ]
https://openaccess.thecvf.com/content/CVPR2025/html/Zhang_Improving_Diffusion_Inverse_Problem_Solving_with_Decoupled_Noise_Annealing_CVPR_2025_paper.html
https://openaccess.thecvf.com/content/CVPR2025/papers/Zhang_Improving_Diffusion_Inverse_Problem_Solving_with_Decoupled_Noise_Annealing_CVPR_2025_paper.pdf
https://openaccess.thecvf.com/content/CVPR2025/supplemental/Zhang_Improving_Diffusion_Inverse_CVPR_2025_supplemental.pdf
2407.01521
@InProceedings{Zhang_2025_CVPR, author = {Zhang, Bingliang and Chu, Wenda and Berner, Julius and Meng, Chenlin and Anandkumar, Anima and Song, Yang}, title = {Improving Diffusion Inverse Problem Solving with Decoupled Noise Annealing}, booktitle = {Proceedings of the Computer Vision and Pattern Recog...
Diffusion models have recently achieved success in solving Bayesian inverse problems with learned data priors. Current methods build on top of the diffusion sampling process, where each denoising step makes small modifications to samples from the previous step. However, this process struggles to correct errors from ear...
[ -0.017428681254386902, 0.01419302448630333, -0.024253306910395622, 0.06285005062818527, 0.0640685111284256, 0.045741915702819824, 0.02083946019411087, -0.012471061199903488, -0.02519339881837368, -0.08040516823530197, 0.02547341212630272, -0.013948715291917324, -0.028828606009483337, 0.001...
235
AvatarArtist: Open-Domain 4D Avatarization
[ "Hongyu Liu", "Xuan Wang", "Ziyu Wan", "Yue Ma", "Jingye Chen", "Yanbo Fan", "Yujun Shen", "Yibing Song", "Qifeng Chen" ]
https://openaccess.thecvf.com/content/CVPR2025/html/Liu_AvatarArtist_Open-Domain_4D_Avatarization_CVPR_2025_paper.html
https://openaccess.thecvf.com/content/CVPR2025/papers/Liu_AvatarArtist_Open-Domain_4D_Avatarization_CVPR_2025_paper.pdf
https://openaccess.thecvf.com/content/CVPR2025/supplemental/Liu_AvatarArtist_Open-Domain_4D_CVPR_2025_supplemental.pdf
2503.19906
@InProceedings{Liu_2025_CVPR, author = {Liu, Hongyu and Wang, Xuan and Wan, Ziyu and Ma, Yue and Chen, Jingye and Fan, Yanbo and Shen, Yujun and Song, Yibing and Chen, Qifeng}, title = {AvatarArtist: Open-Domain 4D Avatarization}, booktitle = {Proceedings of the Computer Vision and Pattern Recognitio...
This work focuses on open-domain 4D avatarization, with the purpose of creating a 4D avatar from a portrait image in an arbitrary style. We select parametric triplanes as the intermediate 4D representation, and propose a practical training paradigm that takes advantage of both generative adversarial networks (GANs) and...
[ 0.030787009745836258, -0.004080170765519142, -0.008620128966867924, 0.05446067079901695, 0.008705539628863335, 0.022255683317780495, 0.02005334012210369, 0.006292639300227165, -0.012132717296481133, -0.0643373504281044, -0.015599687583744526, -0.027457719668745995, -0.09132513403892517, 0....
236
DesignDiffusion: High-Quality Text-to-Design Image Generation with Diffusion Models
[ "Zhendong Wang", "Jianmin Bao", "Shuyang Gu", "Dong Chen", "Wengang Zhou", "Houqiang Li" ]
https://openaccess.thecvf.com/content/CVPR2025/html/Wang_DesignDiffusion_High-Quality_Text-to-Design_Image_Generation_with_Diffusion_Models_CVPR_2025_paper.html
https://openaccess.thecvf.com/content/CVPR2025/papers/Wang_DesignDiffusion_High-Quality_Text-to-Design_Image_Generation_with_Diffusion_Models_CVPR_2025_paper.pdf
https://openaccess.thecvf.com/content/CVPR2025/supplemental/Wang_DesignDiffusion_High-Quality_Text-to-Design_CVPR_2025_supplemental.pdf
2503.01645
@InProceedings{Wang_2025_CVPR, author = {Wang, Zhendong and Bao, Jianmin and Gu, Shuyang and Chen, Dong and Zhou, Wengang and Li, Houqiang}, title = {DesignDiffusion: High-Quality Text-to-Design Image Generation with Diffusion Models}, booktitle = {Proceedings of the Computer Vision and Pattern Recog...
In this paper, we present DesignDiffusion, a simple yet effective framework for the novel task of synthesizing design images from textual descriptions. A primary challenge lies in generating accurate and style-consistent textual and visual content. Existing works in a related task of visual text generation often focus ...
[ 0.043122485280036926, -0.009925748221576214, -0.01649288833141327, 0.06767290085554123, 0.0574989840388298, 0.024602282792329788, -0.0053221057169139385, 0.0008695635478943586, 0.005835459567606449, -0.06442953646183014, -0.03210816532373428, -0.010557061061263084, -0.035751551389694214, 0...
237
Using Powerful Prior Knowledge of Diffusion Model in Deep Unfolding Networks for Image Compressive Sensing
[ "Chen Liao", "Yan Shen", "Dan Li", "Zhongli Wang" ]
https://openaccess.thecvf.com/content/CVPR2025/html/Liao_Using_Powerful_Prior_Knowledge_of_Diffusion_Model_in_Deep_Unfolding_CVPR_2025_paper.html
https://openaccess.thecvf.com/content/CVPR2025/papers/Liao_Using_Powerful_Prior_Knowledge_of_Diffusion_Model_in_Deep_Unfolding_CVPR_2025_paper.pdf
https://openaccess.thecvf.com/content/CVPR2025/supplemental/Liao_Using_Powerful_Prior_CVPR_2025_supplemental.pdf
2503.08429
@InProceedings{Liao_2025_CVPR, author = {Liao, Chen and Shen, Yan and Li, Dan and Wang, Zhongli}, title = {Using Powerful Prior Knowledge of Diffusion Model in Deep Unfolding Networks for Image Compressive Sensing}, booktitle = {Proceedings of the Computer Vision and Pattern Recognition Conference (C...
Recently, Deep Unfolding Networks (DUNs) have achieved impressive reconstruction quality in the field of image Compressive Sensing (CS) by unfolding iterative optimization algorithms into neural networks. The reconstruction quality of DUNs depends on the learned prior knowledge, so introducing stronger prior knowledge ...
[ -0.00022774862009100616, -0.015862571075558662, -0.02214386686682701, 0.04049457237124443, 0.08874456584453583, 0.034205734729766846, 0.02087080478668213, -0.012907478027045727, -0.011393350549042225, -0.07148058712482452, 0.019252287223935127, -0.019854096695780754, -0.009241748601198196, ...
238
Koala-36M: A Large-scale Video Dataset Improving Consistency between Fine-grained Conditions and Video Content
[ "Qiuheng Wang", "Yukai Shi", "Jiarong Ou", "Rui Chen", "Ke Lin", "Jiahao Wang", "Boyuan Jiang", "Haotian Yang", "Mingwu Zheng", "Xin Tao", "Fei Yang", "Pengfei Wan", "Di Zhang" ]
https://openaccess.thecvf.com/content/CVPR2025/html/Wang_Koala-36M_A_Large-scale_Video_Dataset_Improving_Consistency_between_Fine-grained_Conditions_CVPR_2025_paper.html
https://openaccess.thecvf.com/content/CVPR2025/papers/Wang_Koala-36M_A_Large-scale_Video_Dataset_Improving_Consistency_between_Fine-grained_Conditions_CVPR_2025_paper.pdf
https://openaccess.thecvf.com/content/CVPR2025/supplemental/Wang_Koala-36M_A_Large-scale_CVPR_2025_supplemental.pdf
null
@InProceedings{Wang_2025_CVPR, author = {Wang, Qiuheng and Shi, Yukai and Ou, Jiarong and Chen, Rui and Lin, Ke and Wang, Jiahao and Jiang, Boyuan and Yang, Haotian and Zheng, Mingwu and Tao, Xin and Yang, Fei and Wan, Pengfei and Zhang, Di}, title = {Koala-36M: A Large-scale Video Dataset Improving Cons...
With the continuous progress of visual generation technologies, the scale of video datasets has grown exponentially. The quality of these datasets plays a pivotal role in the performance of video generation models. We assert that temporal splitting, detailed captions, and video quality filtering are three crucial dete...
[ 0.020118916407227516, -0.018532905727624893, -0.03318245708942413, 0.08201763778924942, 0.042275309562683105, 0.020294977352023125, 0.03027266636490822, 0.01274117175489664, -0.03542221710085869, -0.01550788152962923, -0.04616184160113335, 0.0109647735953331, -0.061696093529462814, 0.02581...
239
VASparse: Towards Efficient Visual Hallucination Mitigation via Visual-Aware Token Sparsification
[ "Xianwei Zhuang", "Zhihong Zhu", "Yuxin Xie", "Liming Liang", "Yuexian Zou" ]
https://openaccess.thecvf.com/content/CVPR2025/html/Zhuang_VASparse_Towards_Efficient_Visual_Hallucination_Mitigation_via_Visual-Aware_Token_Sparsification_CVPR_2025_paper.html
https://openaccess.thecvf.com/content/CVPR2025/papers/Zhuang_VASparse_Towards_Efficient_Visual_Hallucination_Mitigation_via_Visual-Aware_Token_Sparsification_CVPR_2025_paper.pdf
https://openaccess.thecvf.com/content/CVPR2025/supplemental/Zhuang_VASparse_Towards_Efficient_CVPR_2025_supplemental.pdf
2501.06553
@InProceedings{Zhuang_2025_CVPR, author = {Zhuang, Xianwei and Zhu, Zhihong and Xie, Yuxin and Liang, Liming and Zou, Yuexian}, title = {VASparse: Towards Efficient Visual Hallucination Mitigation via Visual-Aware Token Sparsification}, booktitle = {Proceedings of the Computer Vision and Pattern Reco...
Large Vision-Language Models (LVLMs) may produce outputs that are unfaithful to reality, also known as visual hallucinations (VH), which significantly impedes their real-world usage. To alleviate VH, various decoding strategies have been proposed to enhance visual information. However, many of these methods may require...
[ 0.04384279251098633, 0.0069689261727035046, 0.01971527934074402, 0.0445026271045208, -0.005541742313653231, 0.036903586238622665, 0.062132660299539566, 0.03687754645943642, -0.0459800586104393, -0.045847732573747635, -0.005173703655600548, -0.004574434831738472, -0.055124253034591675, 0.01...
240
SPARC: Score Prompting and Adaptive Fusion for Zero-Shot Multi-Label Recognition in Vision-Language Models
[ "Kevin Miller", "Aditya Gangrade", "Samarth Mishra", "Kate Saenko", "Venkatesh Saligrama" ]
https://openaccess.thecvf.com/content/CVPR2025/html/Miller_SPARC_Score_Prompting_and_Adaptive_Fusion_for_Zero-Shot_Multi-Label_Recognition_CVPR_2025_paper.html
https://openaccess.thecvf.com/content/CVPR2025/papers/Miller_SPARC_Score_Prompting_and_Adaptive_Fusion_for_Zero-Shot_Multi-Label_Recognition_CVPR_2025_paper.pdf
https://openaccess.thecvf.com/content/CVPR2025/supplemental/Miller_SPARC_Score_Prompting_CVPR_2025_supplemental.pdf
2502.16911
@InProceedings{Miller_2025_CVPR, author = {Miller, Kevin and Gangrade, Aditya and Mishra, Samarth and Saenko, Kate and Saligrama, Venkatesh}, title = {SPARC: Score Prompting and Adaptive Fusion for Zero-Shot Multi-Label Recognition in Vision-Language Models}, booktitle = {Proceedings of the Computer ...
Zero-shot multi-label recognition (MLR) with Vision-Language Models (VLMs) faces significant challenges without training data, model tuning, or architectural modifications. Existing approaches require prompt tuning or architectural adaptations, limiting zero-shot applicability. Our work proposes a novel solution treati...
[ -0.00006075802230043337, -0.021483752876520157, 0.0015869303606450558, 0.05047229677438736, 0.02418874017894268, 0.014138307422399521, 0.022249579429626465, 0.029294684529304504, -0.024767214432358742, -0.0344216413795948, -0.02708747610449791, 0.018179098144173622, -0.04683523625135422, 0...
241
UniGoal: Towards Universal Zero-shot Goal-oriented Navigation
[ "Hang Yin", "Xiuwei Xu", "Linqing Zhao", "Ziwei Wang", "Jie Zhou", "Jiwen Lu" ]
https://openaccess.thecvf.com/content/CVPR2025/html/Yin_UniGoal_Towards_Universal_Zero-shot_Goal-oriented_Navigation_CVPR_2025_paper.html
https://openaccess.thecvf.com/content/CVPR2025/papers/Yin_UniGoal_Towards_Universal_Zero-shot_Goal-oriented_Navigation_CVPR_2025_paper.pdf
https://openaccess.thecvf.com/content/CVPR2025/supplemental/Yin_UniGoal_Towards_Universal_CVPR_2025_supplemental.pdf
2503.10630
@InProceedings{Yin_2025_CVPR, author = {Yin, Hang and Xu, Xiuwei and Zhao, Linqing and Wang, Ziwei and Zhou, Jie and Lu, Jiwen}, title = {UniGoal: Towards Universal Zero-shot Goal-oriented Navigation}, booktitle = {Proceedings of the Computer Vision and Pattern Recognition Conference (CVPR)}, mon...
In this paper, we propose a general framework for universal zero-shot goal-oriented navigation. Existing zero-shot methods build inference frameworks upon large language models (LLMs) for specific tasks, which differ greatly in their overall pipelines and fail to generalize across different types of goals. Towards the aim of uni...
[ 0.003429108764976263, -0.014169137924909592, 0.007154843304306269, -0.00261373701505363, 0.03226334601640701, 0.011505275033414364, 0.03105606697499752, 0.04956649988889694, -0.033465053886175156, -0.01982380449771881, -0.02807242050766945, 0.01966460794210434, -0.08403967320919037, -0.030...
242
Noise-Consistent Siamese-Diffusion for Medical Image Synthesis and Segmentation
[ "Kunpeng Qiu", "Zhiqiang Gao", "Zhiying Zhou", "Mingjie Sun", "Yongxin Guo" ]
https://openaccess.thecvf.com/content/CVPR2025/html/Qiu_Noise-Consistent_Siamese-Diffusion_for_Medical_Image_Synthesis_and_Segmentation_CVPR_2025_paper.html
https://openaccess.thecvf.com/content/CVPR2025/papers/Qiu_Noise-Consistent_Siamese-Diffusion_for_Medical_Image_Synthesis_and_Segmentation_CVPR_2025_paper.pdf
https://openaccess.thecvf.com/content/CVPR2025/supplemental/Qiu_Noise-Consistent_Siamese-Diffusion_for_CVPR_2025_supplemental.pdf
2505.06068
@InProceedings{Qiu_2025_CVPR, author = {Qiu, Kunpeng and Gao, Zhiqiang and Zhou, Zhiying and Sun, Mingjie and Guo, Yongxin}, title = {Noise-Consistent Siamese-Diffusion for Medical Image Synthesis and Segmentation}, booktitle = {Proceedings of the Computer Vision and Pattern Recognition Conference (C...
Deep learning has revolutionized medical image segmentation, yet its full potential remains constrained by the paucity of annotated datasets. While diffusion models have emerged as a promising approach for generating synthetic image-mask pairs to augment these datasets, they paradoxically suffer from the same data scar...
[ 0.010104929096996784, -0.018781423568725586, -0.0364719033241272, 0.038375064730644226, 0.047093432396650314, 0.05335303023457527, 0.026835227385163307, 0.0037156131584197283, -0.007974925450980663, -0.08806373924016953, 0.007110788486897945, -0.008366615511476994, -0.014668601565063, 0.01...
243
DefectFill: Realistic Defect Generation with Inpainting Diffusion Model for Visual Inspection
[ "Jaewoo Song", "Daemin Park", "Kanghyun Baek", "Sangyub Lee", "Jooyoung Choi", "Eunji Kim", "Sungroh Yoon" ]
https://openaccess.thecvf.com/content/CVPR2025/html/Song_DefectFill_Realistic_Defect_Generation_with_Inpainting_Diffusion_Model_for_Visual_CVPR_2025_paper.html
https://openaccess.thecvf.com/content/CVPR2025/papers/Song_DefectFill_Realistic_Defect_Generation_with_Inpainting_Diffusion_Model_for_Visual_CVPR_2025_paper.pdf
https://openaccess.thecvf.com/content/CVPR2025/supplemental/Song_DefectFill_Realistic_Defect_CVPR_2025_supplemental.pdf
2503.13985
@InProceedings{Song_2025_CVPR, author = {Song, Jaewoo and Park, Daemin and Baek, Kanghyun and Lee, Sangyub and Choi, Jooyoung and Kim, Eunji and Yoon, Sungroh}, title = {DefectFill: Realistic Defect Generation with Inpainting Diffusion Model for Visual Inspection}, booktitle = {Proceedings of the Com...
Developing effective visual inspection models remains challenging due to the scarcity of defect data. While image generation models have been used to synthesize defect images, producing highly realistic defects remains difficult. We propose DefectFill, a novel method for realistic defect generation that requires only a...
[ 0.005377688445150852, 0.01535953301936388, -0.018108762800693512, 0.06419947743415833, 0.06820304691791534, 0.035235751420259476, 0.00851437821984291, 0.00569138303399086, -0.018932150676846504, -0.060741446912288666, -0.024260109290480614, -0.005704028531908989, -0.03201233223080635, 0.01...
244
Less is More: Efficient Image Vectorization with Adaptive Parameterization
[ "Kaibo Zhao", "Liang Bao", "Yufei Li", "Xu Su", "Ke Zhang", "Xiaotian Qiao" ]
https://openaccess.thecvf.com/content/CVPR2025/html/Zhao_Less_is_More_Efficient_Image_Vectorization_with_Adaptive_Parameterization_CVPR_2025_paper.html
https://openaccess.thecvf.com/content/CVPR2025/papers/Zhao_Less_is_More_Efficient_Image_Vectorization_with_Adaptive_Parameterization_CVPR_2025_paper.pdf
https://openaccess.thecvf.com/content/CVPR2025/supplemental/Zhao_Less_is_More_CVPR_2025_supplemental.pdf
null
@InProceedings{Zhao_2025_CVPR, author = {Zhao, Kaibo and Bao, Liang and Li, Yufei and Su, Xu and Zhang, Ke and Qiao, Xiaotian}, title = {Less is More: Efficient Image Vectorization with Adaptive Parameterization}, booktitle = {Proceedings of the Computer Vision and Pattern Recognition Conference (CVP...
Image vectorization aims to convert raster images to vector ones, allowing for easy scaling and editing. Existing works mainly rely on preset parameters (i.e., a fixed number of paths and control points), ignoring the complexity of the image and posing significant challenges to practical applications. We demonstrate that...
[ 0.001597211929038167, -0.013584592379629612, 0.012206321582198143, 0.017104685306549072, 0.023506714031100273, 0.07668907940387726, 0.017622584477066994, -0.007290427107363939, -0.05953259393572807, -0.08328226208686829, -0.03173615783452988, -0.045253872871398926, -0.04526926204562187, 0....
245
FedMIA: An Effective Membership Inference Attack Exploiting "All for One" Principle in Federated Learning
[ "Gongxi Zhu", "Donghao Li", "Hanlin Gu", "Yuan Yao", "Lixin Fan", "Yuxing Han" ]
https://openaccess.thecvf.com/content/CVPR2025/html/Zhu_FedMIA_An_Effective_Membership_Inference_Attack_Exploiting_All_for_One_CVPR_2025_paper.html
https://openaccess.thecvf.com/content/CVPR2025/papers/Zhu_FedMIA_An_Effective_Membership_Inference_Attack_Exploiting_All_for_One_CVPR_2025_paper.pdf
https://openaccess.thecvf.com/content/CVPR2025/supplemental/Zhu_FedMIA_An_Effective_CVPR_2025_supplemental.pdf
2402.06289
@InProceedings{Zhu_2025_CVPR, author = {Zhu, Gongxi and Li, Donghao and Gu, Hanlin and Yao, Yuan and Fan, Lixin and Han, Yuxing}, title = {FedMIA: An Effective Membership Inference Attack Exploiting ``All for One'' Principle in Federated Learning}, booktitle = {Proceedings of the Computer Vision and ...
Federated Learning (FL) is a promising approach for training machine learning models on decentralized data while preserving privacy. However, privacy risks, particularly Membership Inference Attacks (MIAs), which aim to determine whether a specific data point belongs to a target client's training set, remain a signific...
[ 0.014919457025825977, -0.05433918535709381, -0.020044630393385887, 0.060960832983255386, 0.04508914798498154, -0.0005545338499359787, 0.05172346904873848, -0.04172129929065704, -0.01972569338977337, -0.020148156210780144, 0.02399909868836403, 0.004304456524550915, -0.058535125106573105, 0....
246
Erase Diffusion: Empowering Object Removal Through Calibrating Diffusion Pathways
[ "Yi Liu", "Hao Zhou", "Benlei Cui", "Wenxiang Shang", "Ran Lin" ]
https://openaccess.thecvf.com/content/CVPR2025/html/Liu_Erase_Diffusion_Empowering_Object_Removal_Through_Calibrating_Diffusion_Pathways_CVPR_2025_paper.html
https://openaccess.thecvf.com/content/CVPR2025/papers/Liu_Erase_Diffusion_Empowering_Object_Removal_Through_Calibrating_Diffusion_Pathways_CVPR_2025_paper.pdf
https://openaccess.thecvf.com/content/CVPR2025/supplemental/Liu_Erase_Diffusion_Empowering_CVPR_2025_supplemental.pdf
2503.07026
@InProceedings{Liu_2025_CVPR, author = {Liu, Yi and Zhou, Hao and Cui, Benlei and Shang, Wenxiang and Lin, Ran}, title = {Erase Diffusion: Empowering Object Removal Through Calibrating Diffusion Pathways}, booktitle = {Proceedings of the Computer Vision and Pattern Recognition Conference (CVPR)}, ...
Erase inpainting, or object removal, aims to precisely remove target objects within masked regions while preserving the overall consistency of the surrounding content. Although diffusion-based methods have made significant strides in the field of image inpainting, challenges remain regarding the emergence of unexpected ...
[ 0.012956283055245876, 0.0040275938808918, 0.003583709243685007, 0.04766397550702095, 0.05260179564356804, 0.006906810216605663, 0.01918107457458973, -0.019512847065925598, -0.04141611233353615, -0.05909780412912369, 0.005355615634471178, -0.007535182870924473, -0.0271149855107069, 0.000002...
247
Prompt-CAM: Making Vision Transformers Interpretable for Fine-Grained Analysis
[ "Arpita Chowdhury", "Dipanjyoti Paul", "Zheda Mai", "Jianyang Gu", "Ziheng Zhang", "Kazi Sajeed Mehrab", "Elizabeth G. Campolongo", "Daniel Rubenstein", "Charles V. Stewart", "Anuj Karpatne", "Tanya Berger-Wolf", "Yu Su", "Wei-Lun Chao" ]
https://openaccess.thecvf.com/content/CVPR2025/html/Chowdhury_Prompt-CAM_Making_Vision_Transformers_Interpretable_for_Fine-Grained_Analysis_CVPR_2025_paper.html
https://openaccess.thecvf.com/content/CVPR2025/papers/Chowdhury_Prompt-CAM_Making_Vision_Transformers_Interpretable_for_Fine-Grained_Analysis_CVPR_2025_paper.pdf
https://openaccess.thecvf.com/content/CVPR2025/supplemental/Chowdhury_Prompt-CAM_Making_Vision_CVPR_2025_supplemental.pdf
null
@InProceedings{Chowdhury_2025_CVPR, author = {Chowdhury, Arpita and Paul, Dipanjyoti and Mai, Zheda and Gu, Jianyang and Zhang, Ziheng and Mehrab, Kazi Sajeed and Campolongo, Elizabeth G. and Rubenstein, Daniel and Stewart, Charles V. and Karpatne, Anuj and Berger-Wolf, Tanya and Su, Yu and Chao, Wei-Lun}, t...
We present a simple approach to make pre-trained Vision Transformers (ViTs) interpretable for fine-grained analysis, aiming to identify and localize the traits that distinguish visually similar categories, such as bird species. Pre-trained ViTs, such as DINO, have demonstrated remarkable capabilities in extracting loca...
[ 0.0022763318847864866, -0.021379368379712105, -0.012950303964316845, 0.048624832183122635, 0.02578941360116005, 0.03691556304693222, 0.009032654576003551, 0.01892550103366375, -0.027476847171783447, -0.015269685536623001, -0.05840616673231125, 0.005575108341872692, -0.06067343056201935, 0....
248
Instruction-based Image Manipulation by Watching How Things Move
[ "Mingdeng Cao", "Xuaner Zhang", "Yinqiang Zheng", "Zhihao Xia" ]
https://openaccess.thecvf.com/content/CVPR2025/html/Cao_Instruction-based_Image_Manipulation_by_Watching_How_Things_Move_CVPR_2025_paper.html
https://openaccess.thecvf.com/content/CVPR2025/papers/Cao_Instruction-based_Image_Manipulation_by_Watching_How_Things_Move_CVPR_2025_paper.pdf
https://openaccess.thecvf.com/content/CVPR2025/supplemental/Cao_Instruction-based_Image_Manipulation_CVPR_2025_supplemental.pdf
2412.12087
@InProceedings{Cao_2025_CVPR, author = {Cao, Mingdeng and Zhang, Xuaner and Zheng, Yinqiang and Xia, Zhihao}, title = {Instruction-based Image Manipulation by Watching How Things Move}, booktitle = {Proceedings of the Computer Vision and Pattern Recognition Conference (CVPR)}, month = {June},...
This paper introduces a novel dataset construction pipeline that samples pairs of frames from videos and uses multimodal large language models (MLLMs) to generate editing instructions for training instruction-based image manipulation models. Video frames inherently preserve the identity of subjects and scenes, ensuring...
[ 0.038482654839754105, -0.018623411655426025, -0.03590919449925423, 0.06159784272313118, 0.05420070141553879, 0.006641268730163574, 0.02333795465528965, 0.013805256225168705, -0.03187500312924385, -0.034267179667949677, -0.04449950531125069, 0.021684907376766205, -0.07267342507839203, -0.02...
249
DPFlow: Adaptive Optical Flow Estimation with a Dual-Pyramid Framework
[ "Henrique Morimitsu", "Xiaobin Zhu", "Roberto M. Cesar", "Xiangyang Ji", "Xu-Cheng Yin" ]
https://openaccess.thecvf.com/content/CVPR2025/html/Morimitsu_DPFlow_Adaptive_Optical_Flow_Estimation_with_a_Dual-Pyramid_Framework_CVPR_2025_paper.html
https://openaccess.thecvf.com/content/CVPR2025/papers/Morimitsu_DPFlow_Adaptive_Optical_Flow_Estimation_with_a_Dual-Pyramid_Framework_CVPR_2025_paper.pdf
https://openaccess.thecvf.com/content/CVPR2025/supplemental/Morimitsu_DPFlow_Adaptive_Optical_CVPR_2025_supplemental.pdf
2503.14880
@InProceedings{Morimitsu_2025_CVPR, author = {Morimitsu, Henrique and Zhu, Xiaobin and Cesar, Roberto M. and Ji, Xiangyang and Yin, Xu-Cheng}, title = {DPFlow: Adaptive Optical Flow Estimation with a Dual-Pyramid Framework}, booktitle = {Proceedings of the Computer Vision and Pattern Recognition Conf...
Optical flow estimation is essential for video processing tasks, such as restoration and action recognition. The quality of videos is constantly increasing, with current standards reaching 8K resolution. However, optical flow methods are usually designed for low resolution and do not generalize to large inputs due to t...
[ 0.0010836460860446095, -0.02003643289208412, 0.03497958183288574, -0.00837664119899273, 0.038531817495822906, 0.027155738323926926, 0.02069157175719738, -0.0013318919809535146, -0.03775245323777199, -0.0642334371805191, 0.010250908322632313, -0.053664058446884155, -0.056982457637786865, 0....
250
DocSAM: Unified Document Image Segmentation via Query Decomposition and Heterogeneous Mixed Learning
[ "Xiao-Hui Li", "Fei Yin", "Cheng-Lin Liu" ]
https://openaccess.thecvf.com/content/CVPR2025/html/Li_DocSAM_Unified_Document_Image_Segmentation_via_Query_Decomposition_and_Heterogeneous_CVPR_2025_paper.html
https://openaccess.thecvf.com/content/CVPR2025/papers/Li_DocSAM_Unified_Document_Image_Segmentation_via_Query_Decomposition_and_Heterogeneous_CVPR_2025_paper.pdf
https://openaccess.thecvf.com/content/CVPR2025/supplemental/Li_DocSAM_Unified_Document_CVPR_2025_supplemental.pdf
2504.04085
@InProceedings{Li_2025_CVPR, author = {Li, Xiao-Hui and Yin, Fei and Liu, Cheng-Lin}, title = {DocSAM: Unified Document Image Segmentation via Query Decomposition and Heterogeneous Mixed Learning}, booktitle = {Proceedings of the Computer Vision and Pattern Recognition Conference (CVPR)}, month ...
Document image segmentation is crucial in document analysis and recognition but remains challenging due to the heterogeneity of document formats and diverse segmentation tasks. Existing methods often treat these tasks separately, leading to limited generalization and resource wastage. This paper introduces DocSAM, a tra...
[ -0.020601434633135796, -0.037812523543834686, -0.007283344864845276, 0.04568857327103615, 0.02296644262969494, 0.007352426648139954, 0.005369437392801046, -0.005720865447074175, -0.03315785899758339, -0.0491526797413826, -0.051880843937397, 0.021985532715916634, -0.05391686037182808, 0.023...
251
Ferret: An Efficient Online Continual Learning Framework under Varying Memory Constraints
[ "Yuhao Zhou", "Yuxin Tian", "Jindi Lv", "Mingjia Shi", "Yuanxi Li", "Qing Ye", "Shuhao Zhang", "Jiancheng Lv" ]
https://openaccess.thecvf.com/content/CVPR2025/html/Zhou_Ferret_An_Efficient_Online_Continual_Learning_Framework_under_Varying_Memory_CVPR_2025_paper.html
https://openaccess.thecvf.com/content/CVPR2025/papers/Zhou_Ferret_An_Efficient_Online_Continual_Learning_Framework_under_Varying_Memory_CVPR_2025_paper.pdf
https://openaccess.thecvf.com/content/CVPR2025/supplemental/Zhou_Ferret_An_Efficient_CVPR_2025_supplemental.pdf
2503.12053
@InProceedings{Zhou_2025_CVPR, author = {Zhou, Yuhao and Tian, Yuxin and Lv, Jindi and Shi, Mingjia and Li, Yuanxi and Ye, Qing and Zhang, Shuhao and Lv, Jiancheng}, title = {Ferret: An Efficient Online Continual Learning Framework under Varying Memory Constraints}, booktitle = {Proceedings of the Co...
In the realm of high-frequency data streams, achieving real-time learning within varying memory constraints is paramount. This paper presents Ferret, a comprehensive framework designed to enhance online accuracy of Online Continual Learning (OCL) algorithms while dynamically adapting to varying memory budgets. Ferret em...
[ -0.016531653702259064, -0.04340310022234917, -0.008892044425010681, 0.02576076239347458, 0.028408817946910858, 0.028327949345111847, 0.029227791354060173, 0.03698617219924927, -0.01583789847791195, -0.028905285522341728, -0.023259026929736137, -0.005817281547933817, -0.06558064371347427, 0...
252
Spatiotemporal Skip Guidance for Enhanced Video Diffusion Sampling
[ "Junha Hyung", "Kinam Kim", "Susung Hong", "Min-Jung Kim", "Jaegul Choo" ]
https://openaccess.thecvf.com/content/CVPR2025/html/Hyung_Spatiotemporal_Skip_Guidance_for_Enhanced_Video_Diffusion_Sampling_CVPR_2025_paper.html
https://openaccess.thecvf.com/content/CVPR2025/papers/Hyung_Spatiotemporal_Skip_Guidance_for_Enhanced_Video_Diffusion_Sampling_CVPR_2025_paper.pdf
https://openaccess.thecvf.com/content/CVPR2025/supplemental/Hyung_Spatiotemporal_Skip_Guidance_CVPR_2025_supplemental.pdf
2411.18664
@InProceedings{Hyung_2025_CVPR, author = {Hyung, Junha and Kim, Kinam and Hong, Susung and Kim, Min-Jung and Choo, Jaegul}, title = {Spatiotemporal Skip Guidance for Enhanced Video Diffusion Sampling}, booktitle = {Proceedings of the Computer Vision and Pattern Recognition Conference (CVPR)}, mon...
Diffusion models have emerged as a powerful tool for generating high-quality images, videos, and 3D content. While sampling guidance techniques like CFG improve quality, they reduce diversity and motion. Autoguidance mitigates these issues but demands extra weak model training, limiting its practicality for large-scale...
[ 0.01036333478987217, -0.048609085381031036, 0.012826800346374512, 0.056043874472379684, 0.02697499468922615, 0.01832297258079052, 0.03491586446762085, 0.008468925952911377, -0.01268248911947012, -0.07470681518316269, -0.018435226753354073, -0.030535846948623657, -0.013804404996335506, 0.02...
253
VidComposition: Can MLLMs Analyze Compositions in Compiled Videos?
[ "Yunlong Tang", "Junjia Guo", "Hang Hua", "Susan Liang", "Mingqian Feng", "Xinyang Li", "Rui Mao", "Chao Huang", "Jing Bi", "Zeliang Zhang", "Pooyan Fazli", "Chenliang Xu" ]
https://openaccess.thecvf.com/content/CVPR2025/html/Tang_VidComposition_Can_MLLMs_Analyze_Compositions_in_Compiled_Videos_CVPR_2025_paper.html
https://openaccess.thecvf.com/content/CVPR2025/papers/Tang_VidComposition_Can_MLLMs_Analyze_Compositions_in_Compiled_Videos_CVPR_2025_paper.pdf
https://openaccess.thecvf.com/content/CVPR2025/supplemental/Tang_VidComposition_Can_MLLMs_CVPR_2025_supplemental.pdf
2411.10979
@InProceedings{Tang_2025_CVPR, author = {Tang, Yunlong and Guo, Junjia and Hua, Hang and Liang, Susan and Feng, Mingqian and Li, Xinyang and Mao, Rui and Huang, Chao and Bi, Jing and Zhang, Zeliang and Fazli, Pooyan and Xu, Chenliang}, title = {VidComposition: Can MLLMs Analyze Compositions in Compiled V...
The advancement of Multimodal Large Language Models (MLLMs) has enabled significant progress in multimodal understanding, expanding their capacity to analyze video content. However, existing evaluation benchmarks for MLLMs primarily focus on abstract video comprehension, lacking a detailed assessment of their ability t...
[ 0.02630520798265934, -0.005670767743140459, -0.002008289098739624, 0.03992484137415886, 0.02447928488254547, 0.002452733926475048, 0.01856175623834133, 0.025193503126502037, -0.04579772427678108, -0.004417240619659424, -0.03437865152955055, 0.0193752683699131, -0.060882486402988434, -0.005...
254
SAR3D: Autoregressive 3D Object Generation and Understanding via Multi-scale 3D VQVAE
[ "Yongwei Chen", "Yushi Lan", "Shangchen Zhou", "Tengfei Wang", "Xingang Pan" ]
https://openaccess.thecvf.com/content/CVPR2025/html/Chen_SAR3D_Autoregressive_3D_Object_Generation_and_Understanding_via_Multi-scale_3D_CVPR_2025_paper.html
https://openaccess.thecvf.com/content/CVPR2025/papers/Chen_SAR3D_Autoregressive_3D_Object_Generation_and_Understanding_via_Multi-scale_3D_CVPR_2025_paper.pdf
https://openaccess.thecvf.com/content/CVPR2025/supplemental/Chen_SAR3D_Autoregressive_3D_CVPR_2025_supplemental.pdf
2411.16856
@InProceedings{Chen_2025_CVPR, author = {Chen, Yongwei and Lan, Yushi and Zhou, Shangchen and Wang, Tengfei and Pan, Xingang}, title = {SAR3D: Autoregressive 3D Object Generation and Understanding via Multi-scale 3D VQVAE}, booktitle = {Proceedings of the Computer Vision and Pattern Recognition Confe...
Autoregressive models have demonstrated remarkable success across various fields, from large language models (LLMs) to large multimodal models (LMMs) and 2D content generation, moving closer to artificial general intelligence (AGI). Despite these advances, applying autoregressive approaches to 3D object generation and ...
[ 0.023396547883749008, 0.0009180461638607085, 0.022329669445753098, 0.0021234548185020685, 0.01654864102602005, 0.07413631677627563, 0.026004750281572342, 0.01606852561235428, -0.052383214235305786, -0.044524677097797394, -0.015541144646704197, -0.011181634850800037, -0.04580112174153328, 0...
255
Dual-Interrelated Diffusion Model for Few-Shot Anomaly Image Generation
[ "Ying Jin", "Jinlong Peng", "Qingdong He", "Teng Hu", "Jiafu Wu", "Hao Chen", "Haoxuan Wang", "Wenbing Zhu", "Mingmin Chi", "Jun Liu", "Yabiao Wang" ]
https://openaccess.thecvf.com/content/CVPR2025/html/Jin_Dual-Interrelated_Diffusion_Model_for_Few-Shot_Anomaly_Image_Generation_CVPR_2025_paper.html
https://openaccess.thecvf.com/content/CVPR2025/papers/Jin_Dual-Interrelated_Diffusion_Model_for_Few-Shot_Anomaly_Image_Generation_CVPR_2025_paper.pdf
https://openaccess.thecvf.com/content/CVPR2025/supplemental/Jin_Dual-Interrelated_Diffusion_Model_CVPR_2025_supplemental.pdf
2408.13509
@InProceedings{Jin_2025_CVPR, author = {Jin, Ying and Peng, Jinlong and He, Qingdong and Hu, Teng and Wu, Jiafu and Chen, Hao and Wang, Haoxuan and Zhu, Wenbing and Chi, Mingmin and Liu, Jun and Wang, Yabiao}, title = {Dual-Interrelated Diffusion Model for Few-Shot Anomaly Image Generation}, booktitl...
The performance of anomaly inspection in industrial manufacturing is constrained by the scarcity of anomaly data. To overcome this challenge, researchers have started employing anomaly generation approaches to augment the anomaly dataset. However, existing anomaly generation methods suffer from limited diversity in the...
[ 0.004938249010592699, 0.009411572478711605, -0.04270784929394722, 0.05381343141198158, 0.08003506809473038, 0.04123486578464508, 0.01003971602767706, -0.01725970394909382, -0.018779968842864037, -0.043612852692604065, -0.01879088766872883, -0.020937860012054443, -0.021939130499958992, 0.02...
256
ODE: Open-Set Evaluation of Hallucinations in Multimodal Large Language Models
[ "Yahan Tu", "Rui Hu", "Jitao Sang" ]
https://openaccess.thecvf.com/content/CVPR2025/html/Tu_ODE_Open-Set_Evaluation_of_Hallucinations_in_Multimodal_Large_Language_Models_CVPR_2025_paper.html
https://openaccess.thecvf.com/content/CVPR2025/papers/Tu_ODE_Open-Set_Evaluation_of_Hallucinations_in_Multimodal_Large_Language_Models_CVPR_2025_paper.pdf
https://openaccess.thecvf.com/content/CVPR2025/supplemental/Tu_ODE_Open-Set_Evaluation_CVPR_2025_supplemental.pdf
2409.09318
@InProceedings{Tu_2025_CVPR, author = {Tu, Yahan and Hu, Rui and Sang, Jitao}, title = {ODE: Open-Set Evaluation of Hallucinations in Multimodal Large Language Models}, booktitle = {Proceedings of the Computer Vision and Pattern Recognition Conference (CVPR)}, month = {June}, year = ...
Hallucination poses a persistent challenge for multimodal large language models (MLLMs). However, existing benchmarks for evaluating hallucinations are generally static, which may overlook the potential risk of data contamination. To address this issue, we propose ODE, an open-set, dynamic protocol designed to evaluate...
[ -0.026543855667114258, -0.00915229506790638, 0.01046960148960352, 0.0635715052485466, 0.02078385278582573, 0.019293390214443207, 0.03809712454676628, 0.018060998991131783, -0.01313232071697712, -0.011668847873806953, -0.00903169997036457, 0.01493938360363245, -0.09667067229747772, -0.01764...
257
Self-Supervised Learning for Color Spike Camera Reconstruction
[ "Yanchen Dong", "Ruiqin Xiong", "Xiaopeng Fan", "Zhaofei Yu", "Yonghong Tian", "Tiejun Huang" ]
https://openaccess.thecvf.com/content/CVPR2025/html/Dong_Self-Supervised_Learning_for_Color_Spike_Camera_Reconstruction_CVPR_2025_paper.html
https://openaccess.thecvf.com/content/CVPR2025/papers/Dong_Self-Supervised_Learning_for_Color_Spike_Camera_Reconstruction_CVPR_2025_paper.pdf
https://openaccess.thecvf.com/content/CVPR2025/supplemental/Dong_Self-Supervised_Learning_for_CVPR_2025_supplemental.pdf
null
@InProceedings{Dong_2025_CVPR, author = {Dong, Yanchen and Xiong, Ruiqin and Fan, Xiaopeng and Yu, Zhaofei and Tian, Yonghong and Huang, Tiejun}, title = {Self-Supervised Learning for Color Spike Camera Reconstruction}, booktitle = {Proceedings of the Computer Vision and Pattern Recognition Conferenc...
A spike camera is a kind of neuromorphic camera with ultra-high temporal resolution, which can capture dynamic scenes by continuously firing spike signals. To capture color information, a color filter array (CFA) is employed on the sensor of the spike camera, resulting in Bayer-pattern spike streams. How to restore high-...
[ 0.018458813428878784, -0.018158013001084328, -0.037512682378292084, 0.05282675102353096, 0.05848866328597069, 0.019793761894106865, 0.01608840376138687, -0.005238804034888744, -0.03364928811788559, -0.03912246972322464, 0.010806328617036343, -0.033395081758499146, -0.05671244114637375, 0.0...
258
Interactive Medical Image Analysis with Concept-based Similarity Reasoning
[ "Ta Duc Huy", "Sen Kim Tran", "Phan Nguyen", "Nguyen Hoang Tran", "Tran Bao Sam", "Anton van den Hengel", "Zhibin Liao", "Johan W. Verjans", "Minh-Son To", "Vu Minh Hieu Phan" ]
https://openaccess.thecvf.com/content/CVPR2025/html/Huy_Interactive_Medical_Image_Analysis_with_Concept-based_Similarity_Reasoning_CVPR_2025_paper.html
https://openaccess.thecvf.com/content/CVPR2025/papers/Huy_Interactive_Medical_Image_Analysis_with_Concept-based_Similarity_Reasoning_CVPR_2025_paper.pdf
https://openaccess.thecvf.com/content/CVPR2025/supplemental/Huy_Interactive_Medical_Image_CVPR_2025_supplemental.zip
2503.06873
@InProceedings{Huy_2025_CVPR, author = {Huy, Ta Duc and Tran, Sen Kim and Nguyen, Phan and Tran, Nguyen Hoang and Sam, Tran Bao and van den Hengel, Anton and Liao, Zhibin and Verjans, Johan W. and To, Minh-Son and Phan, Vu Minh Hieu}, title = {Interactive Medical Image Analysis with Concept-based Similar...
The ability to interpret and intervene model decisions is important for the adoption of computer-aided diagnosis methods in clinical workflows. Recent concept-based methods link the model predictions with interpretable concepts and modify their activation scores to interact with the model. However, these concepts are a...
[ 0.0160689614713192, -0.015238779596984386, -0.016750013455748558, 0.0031615530606359243, 0.04840940237045288, 0.017947273328900337, 0.022649381309747696, -0.02371787279844284, 0.0045955306850373745, -0.06402354687452316, -0.02677152305841446, -0.008867444470524788, -0.03647911548614502, 0....
259
From Elements to Design: A Layered Approach for Automatic Graphic Design Composition
[ "Jiawei Lin", "Shizhao Sun", "Danqing Huang", "Ting Liu", "Ji Li", "Jiang Bian" ]
https://openaccess.thecvf.com/content/CVPR2025/html/Lin_From_Elements_to_Design_A_Layered_Approach_for_Automatic_Graphic_CVPR_2025_paper.html
https://openaccess.thecvf.com/content/CVPR2025/papers/Lin_From_Elements_to_Design_A_Layered_Approach_for_Automatic_Graphic_CVPR_2025_paper.pdf
https://openaccess.thecvf.com/content/CVPR2025/supplemental/Lin_From_Elements_to_CVPR_2025_supplemental.pdf
2412.19712
@InProceedings{Lin_2025_CVPR, author = {Lin, Jiawei and Sun, Shizhao and Huang, Danqing and Liu, Ting and Li, Ji and Bian, Jiang}, title = {From Elements to Design: A Layered Approach for Automatic Graphic Design Composition}, booktitle = {Proceedings of the Computer Vision and Pattern Recognition Co...
In this work, we investigate automatic design composition from multimodal graphic elements. Although recent studies have developed various generative models for graphic design, they usually face the following limitations: they only focus on certain subtasks and are far from achieving the design composition task; they d...
[ 0.031739965081214905, 0.007722175680100918, -0.009456166066229343, 0.022075964137911797, 0.05138164386153221, -0.004344542045146227, -0.015171496197581291, 0.03301247954368591, -0.006391989998519421, -0.05187338963150978, -0.05404042452573776, 0.011058144271373749, -0.0538780651986599, 0.0...
260
h-Edit: Effective and Flexible Diffusion-Based Editing via Doob's h-Transform
[ "Toan Nguyen", "Kien Do", "Duc Kieu", "Thin Nguyen" ]
https://openaccess.thecvf.com/content/CVPR2025/html/Nguyen_h-Edit_Effective_and_Flexible_Diffusion-Based_Editing_via_Doobs_h-Transform_CVPR_2025_paper.html
https://openaccess.thecvf.com/content/CVPR2025/papers/Nguyen_h-Edit_Effective_and_Flexible_Diffusion-Based_Editing_via_Doobs_h-Transform_CVPR_2025_paper.pdf
https://openaccess.thecvf.com/content/CVPR2025/supplemental/Nguyen_h-Edit_Effective_and_CVPR_2025_supplemental.pdf
null
@InProceedings{Nguyen_2025_CVPR, author = {Nguyen, Toan and Do, Kien and Kieu, Duc and Nguyen, Thin}, title = {h-Edit: Effective and Flexible Diffusion-Based Editing via Doob's h-Transform}, booktitle = {Proceedings of the Computer Vision and Pattern Recognition Conference (CVPR)}, month = {J...
We introduce a theoretical framework for diffusion-based image editing by formulating it as a reverse-time bridge modeling problem. This approach modifies the backward process of a pretrained diffusion model to construct a bridge that converges to an implicit distribution associated with the editing target at time 0. B...
[ -0.01220889762043953, 0.019820258021354675, -0.013419770635664463, 0.053996741771698, 0.07085851579904556, 0.012838571332395077, -0.002252982696518302, -0.0007773212273605168, -0.0008080370025709271, -0.08169488608837128, 0.011408287100493908, -0.009018667042255402, -0.05682313069701195, -...
261
Masking meets Supervision: A Strong Learning Alliance
[ "Byeongho Heo", "Taekyung Kim", "Sangdoo Yun", "Dongyoon Han" ]
https://openaccess.thecvf.com/content/CVPR2025/html/Heo_Masking_meets_Supervision_A_Strong_Learning_Alliance_CVPR_2025_paper.html
https://openaccess.thecvf.com/content/CVPR2025/papers/Heo_Masking_meets_Supervision_A_Strong_Learning_Alliance_CVPR_2025_paper.pdf
https://openaccess.thecvf.com/content/CVPR2025/supplemental/Heo_Masking_meets_Supervision_CVPR_2025_supplemental.pdf
2306.11339
@InProceedings{Heo_2025_CVPR, author = {Heo, Byeongho and Kim, Taekyung and Yun, Sangdoo and Han, Dongyoon}, title = {Masking meets Supervision: A Strong Learning Alliance}, booktitle = {Proceedings of the Computer Vision and Pattern Recognition Conference (CVPR)}, month = {June}, year ...
Pre-training with random masked inputs has emerged as a novel trend in self-supervised training. However, supervised learning still faces a challenge in adopting masking augmentations, primarily due to unstable training. In this paper, we propose a novel way to involve masking augmentations dubbed Masked Sub-branch (Ma...
[ 0.018112149089574814, -0.027257349342107773, -0.012817059643566608, 0.02750432677567005, 0.02328336052596569, 0.02970530278980732, 0.034388914704322815, -0.025101343169808388, -0.029584145173430443, -0.026395509019494057, -0.027784988284111023, 0.010685313493013382, -0.05250372365117073, -...
262
DI-PCG: Diffusion-based Efficient Inverse Procedural Content Generation for High-quality 3D Asset Creation
[ "Wang Zhao", "Yan-Pei Cao", "Jiale Xu", "Yuejiang Dong", "Ying Shan" ]
https://openaccess.thecvf.com/content/CVPR2025/html/Zhao_DI-PCG_Diffusion-based_Efficient_Inverse_Procedural_Content_Generation_for_High-quality_3D_CVPR_2025_paper.html
https://openaccess.thecvf.com/content/CVPR2025/papers/Zhao_DI-PCG_Diffusion-based_Efficient_Inverse_Procedural_Content_Generation_for_High-quality_3D_CVPR_2025_paper.pdf
https://openaccess.thecvf.com/content/CVPR2025/supplemental/Zhao_DI-PCG_Diffusion-based_Efficient_CVPR_2025_supplemental.pdf
null
@InProceedings{Zhao_2025_CVPR, author = {Zhao, Wang and Cao, Yan-Pei and Xu, Jiale and Dong, Yuejiang and Shan, Ying}, title = {DI-PCG: Diffusion-based Efficient Inverse Procedural Content Generation for High-quality 3D Asset Creation}, booktitle = {Proceedings of the Computer Vision and Pattern Reco...
Procedural Content Generation (PCG) is powerful in creating high-quality 3D contents, yet controlling it to produce desired shapes is difficult and often requires extensive parameter tuning. Inverse Procedural Content Generation aims to automatically find the best parameters under the input condition. However, existing...
[ 0.0025772342924028635, -0.019500471651554108, 0.012484757229685783, 0.04623856768012047, 0.04055598005652428, 0.02455567754805088, 0.004980424884706736, 0.004829785320907831, -0.005257420241832733, -0.07541318982839584, 0.0012711831368505955, -0.03132368624210358, -0.03697274625301361, 0.0...
263
SALOVA: Segment-Augmented Long Video Assistant for Targeted Retrieval and Routing in Long-Form Video Analysis
[ "Junho Kim", "Hyunjun Kim", "Hosu Lee", "Yong Man Ro" ]
https://openaccess.thecvf.com/content/CVPR2025/html/Kim_SALOVA_Segment-Augmented_Long_Video_Assistant_for_Targeted_Retrieval_and_Routing_CVPR_2025_paper.html
https://openaccess.thecvf.com/content/CVPR2025/papers/Kim_SALOVA_Segment-Augmented_Long_Video_Assistant_for_Targeted_Retrieval_and_Routing_CVPR_2025_paper.pdf
https://openaccess.thecvf.com/content/CVPR2025/supplemental/Kim_SALOVA_Segment-Augmented_Long_CVPR_2025_supplemental.pdf
2411.16173
@InProceedings{Kim_2025_CVPR, author = {Kim, Junho and Kim, Hyunjun and Lee, Hosu and Ro, Yong Man}, title = {SALOVA: Segment-Augmented Long Video Assistant for Targeted Retrieval and Routing in Long-Form Video Analysis}, booktitle = {Proceedings of the Computer Vision and Pattern Recognition Confere...
Despite advances in Large Multi-modal Models, applying them to long and untrimmed video content remains challenging due to limitations in context length and substantial memory overhead. These constraints often lead to significant information loss and reduced relevance in the model responses. With the exponential growth...
[ 0.014646036550402641, -0.05903897434473038, -0.0038860710337758064, 0.0004955531912855804, 0.028602726757526398, -0.006004107650369406, 0.010897794738411903, 0.03107220120728016, -0.05608687177300453, -0.026265664026141167, -0.025311067700386047, -0.018472054973244667, -0.05029075965285301, ...
264
Notes-guided MLLM Reasoning: Enhancing MLLM with Knowledge and Visual Notes for Visual Question Answering
[ "Wenlong Fang", "Qiaofeng Wu", "Jing Chen", "Yun Xue" ]
https://openaccess.thecvf.com/content/CVPR2025/html/Fang_Notes-guided_MLLM_Reasoning_Enhancing_MLLM_with_Knowledge_and_Visual_Notes_CVPR_2025_paper.html
https://openaccess.thecvf.com/content/CVPR2025/papers/Fang_Notes-guided_MLLM_Reasoning_Enhancing_MLLM_with_Knowledge_and_Visual_Notes_CVPR_2025_paper.pdf
https://openaccess.thecvf.com/content/CVPR2025/supplemental/Fang_Notes-guided_MLLM_Reasoning_CVPR_2025_supplemental.pdf
null
@InProceedings{Fang_2025_CVPR, author = {Fang, Wenlong and Wu, Qiaofeng and Chen, Jing and Xue, Yun}, title = {Notes-guided MLLM Reasoning: Enhancing MLLM with Knowledge and Visual Notes for Visual Question Answering}, booktitle = {Proceedings of the Computer Vision and Pattern Recognition Conference...
The knowledge-based visual question answering (KB-VQA) task involves using external knowledge about the image to assist reasoning. Building on the impressive performance of multimodal large language model (MLLM), recent methods have commenced leveraging MLLM as an implicit knowledge base for reasoning. However, the dir...
[ 0.04088301211595535, -0.013371436856687069, 0.0342874675989151, 0.027667414397001266, 0.04989290237426758, -0.019275492057204247, 0.04204834625124931, -0.00773307029157877, -0.03740277513861656, -0.00878122542053461, -0.025167210027575493, 0.028767459094524384, -0.06334169954061508, 0.0099...
265
Are Spatial-Temporal Graph Convolution Networks for Human Action Recognition Over-Parameterized?
[ "Jianyang Xie", "Yitian Zhao", "Yanda Meng", "He Zhao", "Anh Nguyen", "Yalin Zheng" ]
https://openaccess.thecvf.com/content/CVPR2025/html/Xie_Are_Spatial-Temporal_Graph_Convolution_Networks_for_Human_Action_Recognition_Over-Parameterized_CVPR_2025_paper.html
https://openaccess.thecvf.com/content/CVPR2025/papers/Xie_Are_Spatial-Temporal_Graph_Convolution_Networks_for_Human_Action_Recognition_Over-Parameterized_CVPR_2025_paper.pdf
https://openaccess.thecvf.com/content/CVPR2025/supplemental/Xie_Are_Spatial-Temporal_Graph_CVPR_2025_supplemental.pdf
2505.10679
@InProceedings{Xie_2025_CVPR, author = {Xie, Jianyang and Zhao, Yitian and Meng, Yanda and Zhao, He and Nguyen, Anh and Zheng, Yalin}, title = {Are Spatial-Temporal Graph Convolution Networks for Human Action Recognition Over-Parameterized?}, booktitle = {Proceedings of the Computer Vision and Patter...
Spatial-temporal graph convolutional networks (ST-GCNs) showcase impressive performance in skeleton-based human action recognition (HAR). However, despite the development of numerous models, their recognition performance does not differ significantly after aligning the input settings. With this observation, we hypothes...
[ 0.023852113634347916, -0.02168123982846737, -0.017256634309887886, 0.03815126046538353, 0.024178439751267433, 0.03092551976442337, 0.02001655474305153, 0.027166903018951416, -0.006777286063879728, -0.047886233776807785, 0.027201596647500992, -0.05149609223008156, -0.050066474825143814, 0.0...
266
DA-VPT: Semantic-Guided Visual Prompt Tuning for Vision Transformers
[ "Li Ren", "Chen Chen", "Liqiang Wang", "Kien Hua" ]
https://openaccess.thecvf.com/content/CVPR2025/html/Ren_DA-VPT_Semantic-Guided_Visual_Prompt_Tuning_for_Vision_Transformers_CVPR_2025_paper.html
https://openaccess.thecvf.com/content/CVPR2025/papers/Ren_DA-VPT_Semantic-Guided_Visual_Prompt_Tuning_for_Vision_Transformers_CVPR_2025_paper.pdf
https://openaccess.thecvf.com/content/CVPR2025/supplemental/Ren_DA-VPT_Semantic-Guided_Visual_CVPR_2025_supplemental.pdf
null
@InProceedings{Ren_2025_CVPR, author = {Ren, Li and Chen, Chen and Wang, Liqiang and Hua, Kien}, title = {DA-VPT: Semantic-Guided Visual Prompt Tuning for Vision Transformers}, booktitle = {Proceedings of the Computer Vision and Pattern Recognition Conference (CVPR)}, month = {June}, year...
Visual Prompt Tuning (VPT) has become a promising solution for Parameter-Efficient Fine-Tuning (PEFT) approach for Vision Transformer (ViT) models by partially fine-tuning learnable tokens while keeping most model parameters frozen. Recent research has explored modifying the connection structures of the prompts. Howeve...
[ 0.02385704405605793, -0.031733110547065735, 0.012350145727396011, 0.0648457482457161, 0.031155554577708244, 0.0440504215657711, 0.009703806601464748, -0.010578770190477371, -0.011250733397901058, -0.027271123602986336, -0.05085757374763489, 0.02608802542090416, -0.061095595359802246, -0.01...
267
Towards Lossless Implicit Neural Representation via Bit Plane Decomposition
[ "Woo Kyoung Han", "Byeonghun Lee", "Hyunmin Cho", "Sunghoon Im", "Kyong Hwan Jin" ]
https://openaccess.thecvf.com/content/CVPR2025/html/Han_Towards_Lossless_Implicit_Neural_Representation_via_Bit_Plane_Decomposition_CVPR_2025_paper.html
https://openaccess.thecvf.com/content/CVPR2025/papers/Han_Towards_Lossless_Implicit_Neural_Representation_via_Bit_Plane_Decomposition_CVPR_2025_paper.pdf
https://openaccess.thecvf.com/content/CVPR2025/supplemental/Han_Towards_Lossless_Implicit_CVPR_2025_supplemental.pdf
2502.21001
@InProceedings{Han_2025_CVPR, author = {Han, Woo Kyoung and Lee, Byeonghun and Cho, Hyunmin and Im, Sunghoon and Jin, Kyong Hwan}, title = {Towards Lossless Implicit Neural Representation via Bit Plane Decomposition}, booktitle = {Proceedings of the Computer Vision and Pattern Recognition Conference ...
We quantify the upper bound on the size of the implicit neural representation (INR) model from a digital perspective. The upper bound of the model size increases exponentially as the required bit-precision increases. To this end, we present a bit-plane decomposition method that makes INR predict bit-planes, producing t...
[ -0.040293652564287186, -0.015277320519089699, -0.016618311405181885, 0.030700454488396645, 0.021938690915703773, 0.046906452625989914, 0.022982057183980942, 0.00466971704736352, -0.039581362158060074, -0.0334584042429924, -0.010527399368584156, -0.0131455073133111, -0.062111131846904755, 0...
268
Spectral State Space Model for Rotation-Invariant Visual Representation Learning
[ "Sahar Dastani", "Ali Bahri", "Moslem Yazdanpanah", "Mehrdad Noori", "David Osowiechi", "Gustavo Adolfo Vargas Hakim", "Farzad Beizaee", "Milad Cheraghalikhani", "Arnab Kumar Mondal", "Herve Lombaert", "Christian Desrosiers" ]
https://openaccess.thecvf.com/content/CVPR2025/html/Dastani_Spectral_State_Space_Model_for_Rotation-Invariant_Visual_Representation_Learning_CVPR_2025_paper.html
https://openaccess.thecvf.com/content/CVPR2025/papers/Dastani_Spectral_State_Space_Model_for_Rotation-Invariant_Visual_Representation_Learning_CVPR_2025_paper.pdf
https://openaccess.thecvf.com/content/CVPR2025/supplemental/Dastani_Spectral_State_Space_CVPR_2025_supplemental.pdf
2503.06369
@InProceedings{Dastani_2025_CVPR, author = {Dastani, Sahar and Bahri, Ali and Yazdanpanah, Moslem and Noori, Mehrdad and Osowiechi, David and Hakim, Gustavo Adolfo Vargas and Beizaee, Farzad and Cheraghalikhani, Milad and Mondal, Arnab Kumar and Lombaert, Herve and Desrosiers, Christian}, title = {Spectr...
State Space Models (SSMs) have recently emerged as an alternative to Vision Transformers (ViTs) due to their unique ability of modeling global relationships with linear complexity. SSMs are specifically designed to capture spatially proximate relationships of image patches. However, they fail to identify relationships ...
[ 0.00554740009829402, -0.011424709111452103, 0.014698849990963936, 0.022239431738853455, 0.03440600261092186, 0.017548631876707077, 0.047181449830532074, 0.011756816878914833, -0.0356726236641407, -0.05348380282521248, -0.0321011058986187, -0.002918971935287118, -0.0856480747461319, -0.0033...
269
iSegMan: Interactive Segment-and-Manipulate 3D Gaussians
[ "Yian Zhao", "Wanshi Xu", "Ruochong Zheng", "Pengchong Qiao", "Chang Liu", "Jie Chen" ]
https://openaccess.thecvf.com/content/CVPR2025/html/Zhao_iSegMan_Interactive_Segment-and-Manipulate_3D_Gaussians_CVPR_2025_paper.html
https://openaccess.thecvf.com/content/CVPR2025/papers/Zhao_iSegMan_Interactive_Segment-and-Manipulate_3D_Gaussians_CVPR_2025_paper.pdf
https://openaccess.thecvf.com/content/CVPR2025/supplemental/Zhao_iSegMan_Interactive_Segment-and-Manipulate_CVPR_2025_supplemental.zip
2505.11934
@InProceedings{Zhao_2025_CVPR, author = {Zhao, Yian and Xu, Wanshi and Zheng, Ruochong and Qiao, Pengchong and Liu, Chang and Chen, Jie}, title = {iSegMan: Interactive Segment-and-Manipulate 3D Gaussians}, booktitle = {Proceedings of the Computer Vision and Pattern Recognition Conference (CVPR)}, ...
The efficient rendering and explicit nature of 3DGS promote the advancement of 3D scene manipulation. However, existing methods typically encounter challenges in controlling the manipulation region and are unable to furnish the user with interactive feedback, which inevitably leads to unexpected results. Intuitively, inc...
[ 0.013337936252355576, 0.008621587418019772, 0.01821661740541458, 0.028246590867638588, 0.01119267474859953, 0.023590222001075745, 0.01015652995556593, 0.018302569165825844, -0.047233570367097855, -0.047434981912374496, -0.07190224528312683, 0.004049170296639204, -0.07657741755247116, -0.00...
270
BlueLM-V-3B: Algorithm and System Co-Design for Multimodal Large Language Models on Mobile Devices
[ "Xudong Lu", "Yinghao Chen", "Cheng Chen", "Hui Tan", "Boheng Chen", "Yina Xie", "Rui Hu", "Guanxin Tan", "Renshou Wu", "Yan Hu", "Yi Zeng", "Lei Wu", "Liuyang Bian", "Zhaoxiong Wang", "Long Liu", "Yanzhou Yang", "Han Xiao", "Aojun Zhou", "Yafei Wen", "Xiaoxin Chen", "Shuai R...
https://openaccess.thecvf.com/content/CVPR2025/html/Lu_BlueLM-V-3B_Algorithm_and_System_Co-Design_for_Multimodal_Large_Language_Models_CVPR_2025_paper.html
https://openaccess.thecvf.com/content/CVPR2025/papers/Lu_BlueLM-V-3B_Algorithm_and_System_Co-Design_for_Multimodal_Large_Language_Models_CVPR_2025_paper.pdf
https://openaccess.thecvf.com/content/CVPR2025/supplemental/Lu_BlueLM-V-3B_Algorithm_and_CVPR_2025_supplemental.pdf
null
@InProceedings{Lu_2025_CVPR, author = {Lu, Xudong and Chen, Yinghao and Chen, Cheng and Tan, Hui and Chen, Boheng and Xie, Yina and Hu, Rui and Tan, Guanxin and Wu, Renshou and Hu, Yan and Zeng, Yi and Wu, Lei and Bian, Liuyang and Wang, Zhaoxiong and Liu, Long and Yang, Yanzhou and Xiao, Han and Zhou, Aojun and...
The emergence and growing popularity of multimodal large language models (MLLMs) have significant potential to enhance various aspects of daily life, from improving communication to facilitating learning and problem-solving. Mobile phones, as essential daily companions, represent the most effective and accessible deplo...
[ -0.005776036065071821, -0.011686665005981922, 0.027790196239948273, 0.0034534798469394445, 0.03864011541008949, -0.00008644901390653104, 0.030870186164975166, 0.0039700670167803764, -0.04117562994360924, -0.007320266682654619, 0.006305710878223181, 0.020673485472798347, -0.07128328084945679,...
271
Unraveling Normal Anatomy via Fluid-Driven Anomaly Randomization
[ "Peirong Liu", "Ana Lawry Aguila", "Juan E. Iglesias" ]
https://openaccess.thecvf.com/content/CVPR2025/html/Liu_Unraveling_Normal_Anatomy_via_Fluid-Driven_Anomaly_Randomization_CVPR_2025_paper.html
https://openaccess.thecvf.com/content/CVPR2025/papers/Liu_Unraveling_Normal_Anatomy_via_Fluid-Driven_Anomaly_Randomization_CVPR_2025_paper.pdf
https://openaccess.thecvf.com/content/CVPR2025/supplemental/Liu_Unraveling_Normal_Anatomy_CVPR_2025_supplemental.pdf
2501.13370
@InProceedings{Liu_2025_CVPR, author = {Liu, Peirong and Aguila, Ana Lawry and Iglesias, Juan E.}, title = {Unraveling Normal Anatomy via Fluid-Driven Anomaly Randomization}, booktitle = {Proceedings of the Computer Vision and Pattern Recognition Conference (CVPR)}, month = {June}, year ...
Data-driven machine learning has made significant strides in medical image analysis. However, most existing methods are tailored to specific modalities and assume a particular resolution (often isotropic). This limits their generalizability in clinical settings, where variations in scan appearance arise from difference...
[ -0.008975681848824024, -0.032779764384031296, -0.021215500310063362, 0.026729563251137733, 0.05588644742965698, 0.005530840251594782, 0.02935641258955002, 0.0199127197265625, -0.04512489214539528, -0.052026789635419846, -0.004535568878054619, 0.00131188181694597, -0.04930010810494423, 0.01...
272
Taming Teacher Forcing for Masked Autoregressive Video Generation
[ "Deyu Zhou", "Quan Sun", "Yuang Peng", "Kun Yan", "Runpei Dong", "Duomin Wang", "Zheng Ge", "Nan Duan", "Xiangyu Zhang" ]
https://openaccess.thecvf.com/content/CVPR2025/html/Zhou_Taming_Teacher_Forcing_for_Masked_Autoregressive_Video_Generation_CVPR_2025_paper.html
https://openaccess.thecvf.com/content/CVPR2025/papers/Zhou_Taming_Teacher_Forcing_for_Masked_Autoregressive_Video_Generation_CVPR_2025_paper.pdf
https://openaccess.thecvf.com/content/CVPR2025/supplemental/Zhou_Taming_Teacher_Forcing_CVPR_2025_supplemental.pdf
2501.12389
@InProceedings{Zhou_2025_CVPR, author = {Zhou, Deyu and Sun, Quan and Peng, Yuang and Yan, Kun and Dong, Runpei and Wang, Duomin and Ge, Zheng and Duan, Nan and Zhang, Xiangyu}, title = {Taming Teacher Forcing for Masked Autoregressive Video Generation}, booktitle = {Proceedings of the Computer Visio...
We introduce MAGI, a hybrid video generation framework that combines masked modeling for intra-frame generation with causal modeling for next-frame generation. Our key innovation, Complete Teacher Forcing (CTF), conditions masked frames on complete observation frames rather than masked ones (namely Masked Teacher Forci...
[ 0.0397769957780838, -0.029978876933455467, 0.013757373206317425, 0.05879264697432518, 0.03949207067489624, 0.02081502042710781, 0.05618380010128021, -0.0025489770341664553, -0.02886500023305416, -0.044149383902549744, -0.0075551485642790794, 0.0046343314461410046, -0.0711580440402031, 0.02...
273
UniRestore: Unified Perceptual and Task-Oriented Image Restoration Model Using Diffusion Prior
[ "I-Hsiang Chen", "Wei-Ting Chen", "Yu-Wei Liu", "Yuan-Chun Chiang", "Sy-Yen Kuo", "Ming-Hsuan Yang" ]
https://openaccess.thecvf.com/content/CVPR2025/html/Chen_UniRestore_Unified_Perceptual_and_Task-Oriented_Image_Restoration_Model_Using_Diffusion_CVPR_2025_paper.html
https://openaccess.thecvf.com/content/CVPR2025/papers/Chen_UniRestore_Unified_Perceptual_and_Task-Oriented_Image_Restoration_Model_Using_Diffusion_CVPR_2025_paper.pdf
https://openaccess.thecvf.com/content/CVPR2025/supplemental/Chen_UniRestore_Unified_Perceptual_CVPR_2025_supplemental.pdf
2501.13134
@InProceedings{Chen_2025_CVPR, author = {Chen, I-Hsiang and Chen, Wei-Ting and Liu, Yu-Wei and Chiang, Yuan-Chun and Kuo, Sy-Yen and Yang, Ming-Hsuan}, title = {UniRestore: Unified Perceptual and Task-Oriented Image Restoration Model Using Diffusion Prior}, booktitle = {Proceedings of the Computer Vi...
Image restoration aims to recover content from inputs degraded by various factors, such as adverse weather, blur, and noise. Perceptual Image Restoration (PIR) methods improve visual quality but often do not support downstream tasks effectively. On the other hand, Task-oriented Image Restoration (TIR) methods focus on ...
[ 0.022094769403338432, -0.010682182386517525, 0.0031424867920577526, 0.031095454469323158, 0.07747793197631836, 0.022726893424987793, 0.02035430446267128, 0.030797773972153664, -0.02784409187734127, -0.09936729073524475, -0.0036979932337999344, -0.02657252736389637, -0.03026076965034008, 0....
274
Sharp-It: A Multi-view to Multi-view Diffusion Model for 3D Synthesis and Manipulation
[ "Yiftach Edelstein", "Or Patashnik", "Dana Cohen-Bar", "Lihi Zelnik-Manor" ]
https://openaccess.thecvf.com/content/CVPR2025/html/Edelstein_Sharp-It_A_Multi-view_to_Multi-view_Diffusion_Model_for_3D_Synthesis_CVPR_2025_paper.html
https://openaccess.thecvf.com/content/CVPR2025/papers/Edelstein_Sharp-It_A_Multi-view_to_Multi-view_Diffusion_Model_for_3D_Synthesis_CVPR_2025_paper.pdf
https://openaccess.thecvf.com/content/CVPR2025/supplemental/Edelstein_Sharp-It_A_Multi-view_CVPR_2025_supplemental.zip
null
@InProceedings{Edelstein_2025_CVPR, author = {Edelstein, Yiftach and Patashnik, Or and Cohen-Bar, Dana and Zelnik-Manor, Lihi}, title = {Sharp-It: A Multi-view to Multi-view Diffusion Model for 3D Synthesis and Manipulation}, booktitle = {Proceedings of the Computer Vision and Pattern Recognition Con...
Advancements in text-to-image diffusion models have led to significant progress in fast 3D content creation. One common approach is to generate a set of multi-view images of an object, and then reconstruct it into a 3D model. However, this approach bypasses the use of a native 3D representation of the object and is hen...
[ -0.016006268560886383, -0.0008960752747952938, 0.007142995018512011, 0.06458105146884918, 0.04302972927689552, 0.03055664896965027, 0.010560479946434498, 0.016241636127233505, -0.014809059910476208, -0.04766314476728439, -0.027475889772176743, 0.0030446199234575033, -0.03197711706161499, 0...
275
URWKV: Unified RWKV Model with Multi-state Perspective for Low-light Image Restoration
[ "Rui Xu", "Yuzhen Niu", "Yuezhou Li", "Huangbiao Xu", "Wenxi Liu", "Yuzhong Chen" ]
https://openaccess.thecvf.com/content/CVPR2025/html/Xu_URWKV_Unified_RWKV_Model_with_Multi-state_Perspective_for_Low-light_Image_CVPR_2025_paper.html
https://openaccess.thecvf.com/content/CVPR2025/papers/Xu_URWKV_Unified_RWKV_Model_with_Multi-state_Perspective_for_Low-light_Image_CVPR_2025_paper.pdf
https://openaccess.thecvf.com/content/CVPR2025/supplemental/Xu_URWKV_Unified_RWKV_CVPR_2025_supplemental.pdf
2505.23068
@InProceedings{Xu_2025_CVPR, author = {Xu, Rui and Niu, Yuzhen and Li, Yuezhou and Xu, Huangbiao and Liu, Wenxi and Chen, Yuzhong}, title = {URWKV: Unified RWKV Model with Multi-state Perspective for Low-light Image Restoration}, booktitle = {Proceedings of the Computer Vision and Pattern Recognition...
Existing low-light image enhancement (LLIE) and joint LLIE and deblurring (LLIE-deblur) models have made strides in addressing predefined degradations, yet they are often constrained by dynamically coupled degradations. To address these challenges, we introduce a Unified Receptance Weighted Key Value (URWKV) model wit...
[ 0.0019227098673582077, -0.03495602682232857, 0.033941805362701416, 0.02761647291481495, 0.05146254599094391, 0.005867315456271172, 0.029052622616291046, 0.021834254264831543, -0.009160781279206276, -0.05840672552585602, -0.013232992962002754, 0.0011595154646784067, -0.05163315683603287, 0....
276
Revisiting Backdoor Attacks against Large Vision-Language Models from Domain Shift
[ "Siyuan Liang", "Jiawei Liang", "Tianyu Pang", "Chao Du", "Aishan Liu", "Mingli Zhu", "Xiaochun Cao", "Dacheng Tao" ]
https://openaccess.thecvf.com/content/CVPR2025/html/Liang_Revisiting_Backdoor_Attacks_against_Large_Vision-Language_Models_from_Domain_Shift_CVPR_2025_paper.html
https://openaccess.thecvf.com/content/CVPR2025/papers/Liang_Revisiting_Backdoor_Attacks_against_Large_Vision-Language_Models_from_Domain_Shift_CVPR_2025_paper.pdf
https://openaccess.thecvf.com/content/CVPR2025/supplemental/Liang_Revisiting_Backdoor_Attacks_CVPR_2025_supplemental.pdf
2406.18844
@InProceedings{Liang_2025_CVPR, author = {Liang, Siyuan and Liang, Jiawei and Pang, Tianyu and Du, Chao and Liu, Aishan and Zhu, Mingli and Cao, Xiaochun and Tao, Dacheng}, title = {Revisiting Backdoor Attacks against Large Vision-Language Models from Domain Shift}, booktitle = {Proceedings of the Co...
Instruction tuning enhances large vision-language models (LVLMs) but increases their vulnerability to backdoor attacks due to their open design. Unlike prior studies in static settings, this paper explores backdoor attacks in LVLM instruction tuning across mismatched training and testing domains. We introduce a new eva...
[ -0.029730048030614853, -0.005487408023327589, -0.0008685399079695344, 0.03963758423924446, 0.043041907250881195, -0.009030461311340332, 0.06387930363416672, 0.0033258325420320034, -0.008589682169258595, -0.0013676179805770516, 0.0002748574479483068, 0.027815891429781914, -0.05352027341723442...
277
Condensing Action Segmentation Datasets via Generative Network Inversion
[ "Guodong Ding", "Rongyu Chen", "Angela Yao" ]
https://openaccess.thecvf.com/content/CVPR2025/html/Ding_Condensing_Action_Segmentation_Datasets_via_Generative_Network_Inversion_CVPR_2025_paper.html
https://openaccess.thecvf.com/content/CVPR2025/papers/Ding_Condensing_Action_Segmentation_Datasets_via_Generative_Network_Inversion_CVPR_2025_paper.pdf
https://openaccess.thecvf.com/content/CVPR2025/supplemental/Ding_Condensing_Action_Segmentation_CVPR_2025_supplemental.pdf
2503.14112
@InProceedings{Ding_2025_CVPR, author = {Ding, Guodong and Chen, Rongyu and Yao, Angela}, title = {Condensing Action Segmentation Datasets via Generative Network Inversion}, booktitle = {Proceedings of the Computer Vision and Pattern Recognition Conference (CVPR)}, month = {June}, year ...
This work presents the first condensation approach for procedural video datasets used in temporal action segmentation. We propose a condensation framework that leverages generative prior learned from the dataset and network inversion to condense data into compact latent codes with significant storage reduced across tem...
[ 0.020242176949977875, -0.04575136676430702, -0.04517656937241554, 0.05009957775473595, 0.01926877163350582, 0.017308972775936127, 0.0416845940053463, 0.025767285376787186, -0.007788899354636669, -0.03888612613081932, -0.0025195227935910225, -0.02272428572177887, -0.034672413021326065, 0.00...
278
TCFG: Tangential Damping Classifier-free Guidance
[ "Mingi Kwon", "Shin seong Kim", "Jaeseok Jeong", "Yi Ting Hsiao", "Youngjung Uh" ]
https://openaccess.thecvf.com/content/CVPR2025/html/Kwon_TCFG_Tangential_Damping_Classifier-free_Guidance_CVPR_2025_paper.html
https://openaccess.thecvf.com/content/CVPR2025/papers/Kwon_TCFG_Tangential_Damping_Classifier-free_Guidance_CVPR_2025_paper.pdf
https://openaccess.thecvf.com/content/CVPR2025/supplemental/Kwon_TCFG_Tangential_Damping_CVPR_2025_supplemental.pdf
2503.18137
@InProceedings{Kwon_2025_CVPR, author = {Kwon, Mingi and Kim, Shin seong and Jeong, Jaeseok and Hsiao, Yi Ting and Uh, Youngjung}, title = {TCFG: Tangential Damping Classifier-free Guidance}, booktitle = {Proceedings of the Computer Vision and Pattern Recognition Conference (CVPR)}, month = {...
Diffusion models have achieved remarkable success in text-to-image synthesis, largely attributed to the use of classifier-free guidance (CFG), which enables high-quality, condition-aligned image generation. CFG combines the conditional score (e.g., text-conditioned) with the unconditional score to control the output. H...
[ 0.009941830299794674, -0.022128082811832428, 0.030997227877378464, 0.03896680846810341, 0.030925679951906204, 0.006761323660612106, -0.001878333743661642, 0.012346161529421806, -0.01918291300535202, -0.05997278541326523, -0.022913817316293716, 0.019465334713459015, -0.05734742805361748, 0....
279
MatAnyone: Stable Video Matting with Consistent Memory Propagation
[ "Peiqing Yang", "Shangchen Zhou", "Jixin Zhao", "Qingyi Tao", "Chen Change Loy" ]
https://openaccess.thecvf.com/content/CVPR2025/html/Yang_MatAnyone_Stable_Video_Matting_with_Consistent_Memory_Propagation_CVPR_2025_paper.html
https://openaccess.thecvf.com/content/CVPR2025/papers/Yang_MatAnyone_Stable_Video_Matting_with_Consistent_Memory_Propagation_CVPR_2025_paper.pdf
https://openaccess.thecvf.com/content/CVPR2025/supplemental/Yang_MatAnyone_Stable_Video_CVPR_2025_supplemental.pdf
2501.14677
@InProceedings{Yang_2025_CVPR, author = {Yang, Peiqing and Zhou, Shangchen and Zhao, Jixin and Tao, Qingyi and Loy, Chen Change}, title = {MatAnyone: Stable Video Matting with Consistent Memory Propagation}, booktitle = {Proceedings of the Computer Vision and Pattern Recognition Conference (CVPR)}, ...
Auxiliary-free human video matting methods, which rely solely on input frames, often struggle with complex or ambiguous backgrounds. To tackle this, we propose MatAnyone, a practical framework designed for target-assigned video matting. Specifically, building on a memory-based framework, we introduce a consistent memor...
[ 0.022918574512004852, -0.007672956679016352, -0.0037878325674682856, 0.0399102121591568, 0.01566559448838234, 0.013743733987212181, 0.017586112022399902, -0.003506723791360855, -0.07309205830097198, -0.051746297627687454, -0.03164055198431015, 0.011193429119884968, -0.05486996844410896, 0....
280
Can Generative Video Models Help Pose Estimation?
[ "Ruojin Cai", "Jason Y. Zhang", "Philipp Henzler", "Zhengqi Li", "Noah Snavely", "Ricardo Martin-Brualla" ]
https://openaccess.thecvf.com/content/CVPR2025/html/Cai_Can_Generative_Video_Models_Help_Pose_Estimation_CVPR_2025_paper.html
https://openaccess.thecvf.com/content/CVPR2025/papers/Cai_Can_Generative_Video_Models_Help_Pose_Estimation_CVPR_2025_paper.pdf
https://openaccess.thecvf.com/content/CVPR2025/supplemental/Cai_Can_Generative_Video_CVPR_2025_supplemental.pdf
2412.16155
@InProceedings{Cai_2025_CVPR, author = {Cai, Ruojin and Zhang, Jason Y. and Henzler, Philipp and Li, Zhengqi and Snavely, Noah and Martin-Brualla, Ricardo}, title = {Can Generative Video Models Help Pose Estimation?}, booktitle = {Proceedings of the Computer Vision and Pattern Recognition Conference ...
Pairwise pose estimation from images with little or no overlap is an open challenge in computer vision. Existing methods, even those trained on large-scale datasets, struggle in these scenarios due to the lack of identifiable correspondences or visual overlap. Inspired by the human ability to infer spatial relationshi...
[ 0.0648365318775177, -0.01806625910103321, -0.01831599324941635, 0.06614845246076584, 0.010696079581975937, 0.019047001376748085, 0.01290462538599968, 0.009426877833902836, -0.027396133169531822, -0.03860916197299957, -0.02056192234158516, -0.026192041113972664, -0.08878691494464874, 0.0064...
281
Molmo and PixMo: Open Weights and Open Data for State-of-the-Art Vision-Language Models
[ "Matt Deitke", "Christopher Clark", "Sangho Lee", "Rohun Tripathi", "Yue Yang", "Jae Sung Park", "Mohammadreza Salehi", "Niklas Muennighoff", "Kyle Lo", "Luca Soldaini", "Jiasen Lu", "Taira Anderson", "Erin Bransom", "Kiana Ehsani", "Huong Ngo", "YenSung Chen", "Ajay Patel", "Mark ...
https://openaccess.thecvf.com/content/CVPR2025/html/Deitke_Molmo_and_PixMo_Open_Weights_and_Open_Data_for_State-of-the-Art_CVPR_2025_paper.html
https://openaccess.thecvf.com/content/CVPR2025/papers/Deitke_Molmo_and_PixMo_Open_Weights_and_Open_Data_for_State-of-the-Art_CVPR_2025_paper.pdf
https://openaccess.thecvf.com/content/CVPR2025/supplemental/Deitke_Molmo_and_PixMo_CVPR_2025_supplemental.pdf
2409.17146
@InProceedings{Deitke_2025_CVPR, author = {Deitke, Matt and Clark, Christopher and Lee, Sangho and Tripathi, Rohun and Yang, Yue and Park, Jae Sung and Salehi, Mohammadreza and Muennighoff, Niklas and Lo, Kyle and Soldaini, Luca and Lu, Jiasen and Anderson, Taira and Bransom, Erin and Ehsani, Kiana and Ngo, Huon...
Today's most advanced vision-language models (VLMs) remain proprietary. The strongest open-weight models rely heavily on synthetic data from proprietary VLMs to achieve good performance, effectively distilling these closed VLMs into open ones. As a result, the community has been missing foundational knowledge about how...
[ 0.005197461694478989, -0.019499488174915314, 0.032962050288915634, 0.044196683913469315, 0.02411610074341297, 0.013222982175648212, 0.015848308801651, 0.030540727078914642, -0.027224859222769737, -0.03976011648774147, -0.030984699726104736, 0.029329944401979446, -0.1255226731300354, -0.012...
282
DriveGPT4-V2: Harnessing Large Language Model Capabilities for Enhanced Closed-Loop Autonomous Driving
[ "Zhenhua Xu", "Yan Bai", "Yujia Zhang", "Zhuoling Li", "Fei Xia", "Kwan-Yee K. Wong", "Jianqiang Wang", "Hengshuang Zhao" ]
https://openaccess.thecvf.com/content/CVPR2025/html/Xu_DriveGPT4-V2_Harnessing_Large_Language_Model_Capabilities_for_Enhanced_Closed-Loop_Autonomous_CVPR_2025_paper.html
https://openaccess.thecvf.com/content/CVPR2025/papers/Xu_DriveGPT4-V2_Harnessing_Large_Language_Model_Capabilities_for_Enhanced_Closed-Loop_Autonomous_CVPR_2025_paper.pdf
null
null
@InProceedings{Xu_2025_CVPR, author = {Xu, Zhenhua and Bai, Yan and Zhang, Yujia and Li, Zhuoling and Xia, Fei and Wong, Kwan-Yee K. and Wang, Jianqiang and Zhao, Hengshuang}, title = {DriveGPT4-V2: Harnessing Large Language Model Capabilities for Enhanced Closed-Loop Autonomous Driving}, booktitle =...
Multimodal large language models (MLLMs) possess the ability to comprehend visual images or videos, and show impressive reasoning ability thanks to the vast amounts of pretrained knowledge, making them highly suitable for autonomous driving applications. Unlike the previous work, DriveGPT4-V1, which focused on open-loo...
[ 0.007065138313919306, -0.015919694676995277, 0.035563573241233826, 0.047036971896886826, 0.0244755856692791, 0.02352537214756012, 0.014805915765464306, 0.03758914768695831, -0.017345717176795006, 0.0015955656999722123, -0.01939922384917736, 0.02174423262476921, -0.06989318877458572, -0.010...
283
High-Fidelity Lightweight Mesh Reconstruction from Point Clouds
[ "Chen Zhang", "Wentao Wang", "Ximeng Li", "Xinyao Liao", "Wanjuan Su", "Wenbing Tao" ]
https://openaccess.thecvf.com/content/CVPR2025/html/Zhang_High-Fidelity_Lightweight_Mesh_Reconstruction_from_Point_Clouds_CVPR_2025_paper.html
https://openaccess.thecvf.com/content/CVPR2025/papers/Zhang_High-Fidelity_Lightweight_Mesh_Reconstruction_from_Point_Clouds_CVPR_2025_paper.pdf
https://openaccess.thecvf.com/content/CVPR2025/supplemental/Zhang_High-Fidelity_Lightweight_Mesh_CVPR_2025_supplemental.zip
null
@InProceedings{Zhang_2025_CVPR, author = {Zhang, Chen and Wang, Wentao and Li, Ximeng and Liao, Xinyao and Su, Wanjuan and Tao, Wenbing}, title = {High-Fidelity Lightweight Mesh Reconstruction from Point Clouds}, booktitle = {Proceedings of the Computer Vision and Pattern Recognition Conference (CVPR...
Recently, learning signed distance functions (SDFs) from point clouds has become popular for reconstruction. To ensure accuracy, most methods require using high-resolution Marching Cubes for surface extraction. However, this results in redundant mesh elements, making the mesh inconvenient to use. To solve the problem, ...
[ -0.006437106058001518, 0.0005615552072413266, -0.0045689987018704414, 0.0452546589076519, 0.035174496471881866, 0.05548027902841568, -0.01059870608150959, -0.006476221140474081, -0.03033549338579178, -0.10183756798505783, -0.01581287570297718, -0.025010041892528534, -0.040626756846904755, ...
284
MDP: Multidimensional Vision Model Pruning with Latency Constraint
[ "Xinglong Sun", "Barath Lakshmanan", "Maying Shen", "Shiyi Lan", "Jingde Chen", "Jose M. Alvarez" ]
https://openaccess.thecvf.com/content/CVPR2025/html/Sun_MDP_Multidimensional_Vision_Model_Pruning_with_Latency_Constraint_CVPR_2025_paper.html
https://openaccess.thecvf.com/content/CVPR2025/papers/Sun_MDP_Multidimensional_Vision_Model_Pruning_with_Latency_Constraint_CVPR_2025_paper.pdf
https://openaccess.thecvf.com/content/CVPR2025/supplemental/Sun_MDP_Multidimensional_Vision_CVPR_2025_supplemental.pdf
2504.02168
@InProceedings{Sun_2025_CVPR, author = {Sun, Xinglong and Lakshmanan, Barath and Shen, Maying and Lan, Shiyi and Chen, Jingde and Alvarez, Jose M.}, title = {MDP: Multidimensional Vision Model Pruning with Latency Constraint}, booktitle = {Proceedings of the Computer Vision and Pattern Recognition Co...
Current structural pruning methods face two significant limitations: (i) they often limit pruning to finer-grained levels like channels, making aggressive parameter reduction challenging, and (ii) they focus heavily on parameter and FLOP reduction, with existing latency-aware methods frequently relying on simplistic, s...
[ -0.017601748928427696, -0.020374996587634087, -0.03552616387605667, 0.05212575942277908, 0.029523802921175957, 0.045569222420454025, 0.0040421090088784695, -0.003352608997374773, -0.035174787044525146, -0.06606185436248779, -0.019672241061925888, -0.010971570387482643, -0.046252865344285965,...
285
OSDFace: One-Step Diffusion Model for Face Restoration
[ "Jingkai Wang", "Jue Gong", "Lin Zhang", "Zheng Chen", "Xing Liu", "Hong Gu", "Yutong Liu", "Yulun Zhang", "Xiaokang Yang" ]
https://openaccess.thecvf.com/content/CVPR2025/html/Wang_OSDFace_One-Step_Diffusion_Model_for_Face_Restoration_CVPR_2025_paper.html
https://openaccess.thecvf.com/content/CVPR2025/papers/Wang_OSDFace_One-Step_Diffusion_Model_for_Face_Restoration_CVPR_2025_paper.pdf
https://openaccess.thecvf.com/content/CVPR2025/supplemental/Wang_OSDFace_One-Step_Diffusion_CVPR_2025_supplemental.pdf
2411.17163
@InProceedings{Wang_2025_CVPR, author = {Wang, Jingkai and Gong, Jue and Zhang, Lin and Chen, Zheng and Liu, Xing and Gu, Hong and Liu, Yutong and Zhang, Yulun and Yang, Xiaokang}, title = {OSDFace: One-Step Diffusion Model for Face Restoration}, booktitle = {Proceedings of the Computer Vision and Pa...
Diffusion models have demonstrated impressive performance in face restoration. Yet, their multi-step inference process remains computationally intensive, limiting their applicability in real-world scenarios. Moreover, existing methods often struggle to generate face images that are harmonious, realistic, and consistent...
[ -0.011212149634957314, -0.021909289062023163, -0.001261918805539608, 0.04161066934466362, 0.05161519721150398, 0.05857501178979874, 0.039986174553632736, 0.025564953684806824, 0.0008375889156013727, -0.07697056978940964, 0.020348340272903442, -0.01884765550494194, -0.04962373152375221, -0....
286
Task Singular Vectors: Reducing Task Interference in Model Merging
[ "Antonio Andrea Gargiulo", "Donato Crisostomi", "Maria Sofia Bucarelli", "Simone Scardapane", "Fabrizio Silvestri", "Emanuele Rodolà" ]
https://openaccess.thecvf.com/content/CVPR2025/html/Gargiulo_Task_Singular_Vectors_Reducing_Task_Interference_in_Model_Merging_CVPR_2025_paper.html
https://openaccess.thecvf.com/content/CVPR2025/papers/Gargiulo_Task_Singular_Vectors_Reducing_Task_Interference_in_Model_Merging_CVPR_2025_paper.pdf
https://openaccess.thecvf.com/content/CVPR2025/supplemental/Gargiulo_Task_Singular_Vectors_CVPR_2025_supplemental.pdf
2412.00081
@InProceedings{Gargiulo_2025_CVPR, author = {Gargiulo, Antonio Andrea and Crisostomi, Donato and Bucarelli, Maria Sofia and Scardapane, Simone and Silvestri, Fabrizio and Rodol\`a, Emanuele}, title = {Task Singular Vectors: Reducing Task Interference in Model Merging}, booktitle = {Proceedings of the...
Task Arithmetic has emerged as a simple yet effective method to merge models without additional training. However, by treating entire networks as flat parameter vectors, it overlooks key structural information and is susceptible to task interference. In this paper, we study task vectors at the layer level, focusing on ...
[ -0.004897828679531813, -0.01860625669360161, -0.021912498399615288, 0.0174797885119915, 0.019730009138584137, 0.013025159016251564, 0.04542884603142738, -0.02047630399465561, -0.0342126227915287, -0.06706256419420242, -0.005671980325132608, -0.00309055857360363, -0.0777876153588295, -0.003...
287
Functionality Understanding and Segmentation in 3D Scenes
[ "Jaime Corsetti", "Francesco Giuliari", "Alice Fasoli", "Davide Boscaini", "Fabio Poiesi" ]
https://openaccess.thecvf.com/content/CVPR2025/html/Corsetti_Functionality_Understanding_and_Segmentation_in_3D_Scenes_CVPR_2025_paper.html
https://openaccess.thecvf.com/content/CVPR2025/papers/Corsetti_Functionality_Understanding_and_Segmentation_in_3D_Scenes_CVPR_2025_paper.pdf
https://openaccess.thecvf.com/content/CVPR2025/supplemental/Corsetti_Functionality_Understanding_and_CVPR_2025_supplemental.pdf
2411.16310
@InProceedings{Corsetti_2025_CVPR, author = {Corsetti, Jaime and Giuliari, Francesco and Fasoli, Alice and Boscaini, Davide and Poiesi, Fabio}, title = {Functionality Understanding and Segmentation in 3D Scenes}, booktitle = {Proceedings of the Computer Vision and Pattern Recognition Conference (CVPR...
Understanding functionalities in 3D scenes involves interpreting natural language descriptions to locate functional interactive objects, such as handles and buttons, in a 3D environment. Functionality understanding is highly challenging, as it requires both world knowledge to interpret language and spatial perception t...
[ 0.02070285938680172, 0.011264441534876823, 0.03242257609963417, 0.013335896655917168, 0.031204205006361008, 0.060973092913627625, 0.029269689694046974, 0.011719837784767151, -0.018659865483641624, -0.012857725843787193, -0.053435277193784714, 0.005796541925519705, -0.06276395916938782, 0.0...
288
Dragin3D: Image Editing by Dragging in 3D Space
[ "Weiran Guang", "Xiaoguang Gu", "Mengqi Huang", "Zhendong Mao" ]
https://openaccess.thecvf.com/content/CVPR2025/html/Guang_Dragin3D_Image_Editing_by_Dragging_in_3D_Space_CVPR_2025_paper.html
https://openaccess.thecvf.com/content/CVPR2025/papers/Guang_Dragin3D_Image_Editing_by_Dragging_in_3D_Space_CVPR_2025_paper.pdf
https://openaccess.thecvf.com/content/CVPR2025/supplemental/Guang_Dragin3D_Image_Editing_CVPR_2025_supplemental.pdf
null
@InProceedings{Guang_2025_CVPR, author = {Guang, Weiran and Gu, Xiaoguang and Huang, Mengqi and Mao, Zhendong}, title = {Dragin3D: Image Editing by Dragging in 3D Space}, booktitle = {Proceedings of the Computer Vision and Pattern Recognition Conference (CVPR)}, month = {June}, year ...
Interactive drag editing of images is a valuable task that has gained considerable attention for its precision and controllability. However, existing approaches have primarily focused on manipulating the shape or movement of objects in the 2D plane. We propose to extend this drag-based editing task to 3D space. Firstly, we...
[ 0.004349087364971638, 0.016949260607361794, -0.011796816252171993, 0.03157049044966698, 0.02679244987666607, 0.030228247866034508, 0.010968483984470367, 0.019251268357038498, -0.05385245755314827, -0.07159756869077682, -0.06929177045822144, -0.012500202283263206, -0.07243368774652481, -0.0...
289
MMTL-UniAD: A Unified Framework for Multimodal and Multi-Task Learning in Assistive Driving Perception
[ "Wenzhuo Liu", "Wenshuo Wang", "Yicheng Qiao", "Qiannan Guo", "Jiayin Zhu", "Pengfei Li", "Zilong Chen", "Huiming Yang", "Zhiwei Li", "Lening Wang", "Tiao Tan", "Huaping Liu" ]
https://openaccess.thecvf.com/content/CVPR2025/html/Liu_MMTL-UniAD_A_Unified_Framework_for_Multimodal_and_Multi-Task_Learning_in_CVPR_2025_paper.html
https://openaccess.thecvf.com/content/CVPR2025/papers/Liu_MMTL-UniAD_A_Unified_Framework_for_Multimodal_and_Multi-Task_Learning_in_CVPR_2025_paper.pdf
null
null
@InProceedings{Liu_2025_CVPR, author = {Liu, Wenzhuo and Wang, Wenshuo and Qiao, Yicheng and Guo, Qiannan and Zhu, Jiayin and Li, Pengfei and Chen, Zilong and Yang, Huiming and Li, Zhiwei and Wang, Lening and Tan, Tiao and Liu, Huaping}, title = {MMTL-UniAD: A Unified Framework for Multimodal and Multi-T...
Advanced driver assistance systems require a comprehensive understanding of the driver's mental/physical state and traffic context, but existing works often neglect the potential benefits of joint learning between these tasks. This paper proposes MMTL-UniAD, a unified multi-modal multi-task learning framework that simul...
[ 0.03401075676083565, -0.011040964163839817, 0.03063328005373478, 0.022289710119366646, 0.01904493011534214, 0.012190954759716988, 0.026293914765119553, 0.029877716675400734, 0.003829186549410224, -0.07320667058229446, -0.0173824243247509, 0.06844154745340347, -0.05103669688105583, -0.02556...
290
T2V-CompBench: A Comprehensive Benchmark for Compositional Text-to-video Generation
[ "Kaiyue Sun", "Kaiyi Huang", "Xian Liu", "Yue Wu", "Zihan Xu", "Zhenguo Li", "Xihui Liu" ]
https://openaccess.thecvf.com/content/CVPR2025/html/Sun_T2V-CompBench_A_Comprehensive_Benchmark_for_Compositional_Text-to-video_Generation_CVPR_2025_paper.html
https://openaccess.thecvf.com/content/CVPR2025/papers/Sun_T2V-CompBench_A_Comprehensive_Benchmark_for_Compositional_Text-to-video_Generation_CVPR_2025_paper.pdf
https://openaccess.thecvf.com/content/CVPR2025/supplemental/Sun_T2V-CompBench_A_Comprehensive_CVPR_2025_supplemental.pdf
null
@InProceedings{Sun_2025_CVPR, author = {Sun, Kaiyue and Huang, Kaiyi and Liu, Xian and Wu, Yue and Xu, Zihan and Li, Zhenguo and Liu, Xihui}, title = {T2V-CompBench: A Comprehensive Benchmark for Compositional Text-to-video Generation}, booktitle = {Proceedings of the Computer Vision and Pattern Reco...
Text-to-video (T2V) generative models have advanced significantly, yet their ability to compose different objects, attributes, actions, and motions into a video remains unexplored. Previous text-to-video benchmarks also neglect this important ability for evaluation. In this work, we conduct the first systematic study o...
[ 0.04167751595377922, -0.017718253657221794, 0.002341537969186902, 0.05073046311736107, 0.009102043695747852, 0.00010247873433399945, 0.03158001974225044, 0.032893020659685135, -0.019824471324682236, -0.020789198577404022, -0.021809712052345276, 0.012265240773558617, -0.0564136765897274, -0...
291
Self-Evolving Visual Concept Library using Vision-Language Critics
[ "Atharva Sehgal", "Patrick Yuan", "Ziniu Hu", "Yisong Yue", "Jennifer J. Sun", "Swarat Chaudhuri" ]
https://openaccess.thecvf.com/content/CVPR2025/html/Sehgal_Self-Evolving_Visual_Concept_Library_using_Vision-Language_Critics_CVPR_2025_paper.html
https://openaccess.thecvf.com/content/CVPR2025/papers/Sehgal_Self-Evolving_Visual_Concept_Library_using_Vision-Language_Critics_CVPR_2025_paper.pdf
https://openaccess.thecvf.com/content/CVPR2025/supplemental/Sehgal_Self-Evolving_Visual_Concept_CVPR_2025_supplemental.pdf
2504.00185
@InProceedings{Sehgal_2025_CVPR, author = {Sehgal, Atharva and Yuan, Patrick and Hu, Ziniu and Yue, Yisong and Sun, Jennifer J. and Chaudhuri, Swarat}, title = {Self-Evolving Visual Concept Library using Vision-Language Critics}, booktitle = {Proceedings of the Computer Vision and Pattern Recognition...
We study the problem of building a visual concept library for visual recognition. Building effective visual concept libraries is challenging, as manual definition is labor-intensive, while relying solely on LLMs for concept generation can result in concepts that lack discriminative power or fail to account for the comp...
[ 0.020486174151301384, -0.0012898724526166916, -0.004595326259732246, 0.04457411170005798, 0.028369253501296043, 0.00948698166757822, 0.007462036795914173, 0.03618468716740608, -0.028305040672421455, -0.045588646084070206, -0.059620823711156845, 0.026268137618899345, -0.05903974920511246, 0...
292
Multimodal Autoregressive Pre-training of Large Vision Encoders
[ "Enrico Fini", "Mustafa Shukor", "Xiujun Li", "Philipp Dufter", "Michal Klein", "David Haldimann", "Sai Aitharaju", "Victor G. Turrisi da Costa", "Louis Béthune", "Zhe Gan", "Alexander Toshev", "Marcin Eichner", "Moin Nabi", "Yinfei Yang", "Joshua Susskind", "Alaaeldin El-Nouby" ]
https://openaccess.thecvf.com/content/CVPR2025/html/Fini_Multimodal_Autoregressive_Pre-training_of_Large_Vision_Encoders_CVPR_2025_paper.html
https://openaccess.thecvf.com/content/CVPR2025/papers/Fini_Multimodal_Autoregressive_Pre-training_of_Large_Vision_Encoders_CVPR_2025_paper.pdf
https://openaccess.thecvf.com/content/CVPR2025/supplemental/Fini_Multimodal_Autoregressive_Pre-training_CVPR_2025_supplemental.pdf
2411.14402
@InProceedings{Fini_2025_CVPR, author = {Fini, Enrico and Shukor, Mustafa and Li, Xiujun and Dufter, Philipp and Klein, Michal and Haldimann, David and Aitharaju, Sai and da Costa, Victor G. Turrisi and B\'ethune, Louis and Gan, Zhe and Toshev, Alexander and Eichner, Marcin and Nabi, Moin and Yang, Yinfei and Su...
We introduce a novel method for pre-training of large-scale vision encoders. Building on recent advancements in autoregressive pre-training of vision models, we extend this framework to a multimodal setting, i.e., images and text. In this paper, we present AIMV2, a family of generalist vision encoders characterized by ...
[ 0.0187652837485075, -0.022454533725976944, 0.03303736448287964, 0.019120417535305023, 0.033293139189481735, 0.0422775000333786, 0.03626507893204689, 0.018367968499660492, -0.03816036507487297, -0.04149564355611801, -0.03968003764748573, 0.012035473249852657, -0.07835602760314941, -0.017470...
293
AKiRa: Augmentation Kit on Rays for Optical Video Generation
[ "Xi Wang", "Robin Courant", "Marc Christie", "Vicky Kalogeiton" ]
https://openaccess.thecvf.com/content/CVPR2025/html/Wang_AKiRa_Augmentation_Kit_on_Rays_for_Optical_Video_Generation_CVPR_2025_paper.html
https://openaccess.thecvf.com/content/CVPR2025/papers/Wang_AKiRa_Augmentation_Kit_on_Rays_for_Optical_Video_Generation_CVPR_2025_paper.pdf
https://openaccess.thecvf.com/content/CVPR2025/supplemental/Wang_AKiRa_Augmentation_Kit_CVPR_2025_supplemental.pdf
2412.14158
@InProceedings{Wang_2025_CVPR, author = {Wang, Xi and Courant, Robin and Christie, Marc and Kalogeiton, Vicky}, title = {AKiRa: Augmentation Kit on Rays for Optical Video Generation}, booktitle = {Proceedings of the Computer Vision and Pattern Recognition Conference (CVPR)}, month = {June}, ...
Recent advances in text-conditioned video diffusion have greatly improved video quality. However, these methods offer limited or sometimes no control to users on camera aspects, including dynamic camera motion, zoom, distorted lens and focus shifts. These motion and optical aspects are crucial for adding controllabilit...
[ 0.016585884615778923, -0.021036040037870407, 0.0014092185301706195, 0.0412713922560215, 0.0201554112136364, -0.01146488357335329, 0.018370620906352997, 0.004322369582951069, -0.023525001481175423, -0.04899505153298378, -0.028834780678153038, 0.0011057049268856645, -0.03370163217186928, 0.0...
294
Towards Stable and Storage-efficient Dataset Distillation: Matching Convexified Trajectory
[ "Wenliang Zhong", "Haoyu Tang", "Qinghai Zheng", "Mingzhu Xu", "Yupeng Hu", "Weili Guan" ]
https://openaccess.thecvf.com/content/CVPR2025/html/Zhong_Towards_Stable_and_Storage-efficient_Dataset_Distillation_Matching_Convexified_Trajectory_CVPR_2025_paper.html
https://openaccess.thecvf.com/content/CVPR2025/papers/Zhong_Towards_Stable_and_Storage-efficient_Dataset_Distillation_Matching_Convexified_Trajectory_CVPR_2025_paper.pdf
https://openaccess.thecvf.com/content/CVPR2025/supplemental/Zhong_Towards_Stable_and_CVPR_2025_supplemental.pdf
2406.19827
@InProceedings{Zhong_2025_CVPR, author = {Zhong, Wenliang and Tang, Haoyu and Zheng, Qinghai and Xu, Mingzhu and Hu, Yupeng and Guan, Weili}, title = {Towards Stable and Storage-efficient Dataset Distillation: Matching Convexified Trajectory}, booktitle = {Proceedings of the Computer Vision and Patte...
The rapid evolution of deep learning and large language models has led to an exponential growth in the demand for training data, prompting the development of Dataset Distillation methods to address the challenges of managing large datasets. Among these, Matching Training Trajectories (MTT) has been a prominent approach...
[ -0.006498446688055992, -0.03016810491681099, -0.03764087334275246, 0.07898229360580444, 0.04685024544596672, 0.015728363767266273, 0.03361455351114273, 0.017147215083241463, -0.006578140426427126, -0.041478823870420456, -0.04321504384279251, -0.022379077970981598, -0.061708174645900726, -0...
295
TSAM: Temporal SAM Augmented with Multimodal Prompts for Referring Audio-Visual Segmentation
[ "Abduljalil Radman", "Jorma Laaksonen" ]
https://openaccess.thecvf.com/content/CVPR2025/html/Radman_TSAM_Temporal_SAM_Augmented_with_Multimodal_Prompts_for_Referring_Audio-Visual_CVPR_2025_paper.html
https://openaccess.thecvf.com/content/CVPR2025/papers/Radman_TSAM_Temporal_SAM_Augmented_with_Multimodal_Prompts_for_Referring_Audio-Visual_CVPR_2025_paper.pdf
https://openaccess.thecvf.com/content/CVPR2025/supplemental/Radman_TSAM_Temporal_SAM_CVPR_2025_supplemental.pdf
null
@InProceedings{Radman_2025_CVPR, author = {Radman, Abduljalil and Laaksonen, Jorma}, title = {TSAM: Temporal SAM Augmented with Multimodal Prompts for Referring Audio-Visual Segmentation}, booktitle = {Proceedings of the Computer Vision and Pattern Recognition Conference (CVPR)}, month = {Jun...
Referring audio-visual segmentation (Ref-AVS) aims to segment objects within audio-visual scenes using multimodal cues embedded in text expressions. While the Segment Anything Model (SAM) has revolutionized visual segmentation, its applicability to Ref-AVS, where multimodal cues act as novel prompts, remains unexplored...
[ 0.023402905091643333, 0.005830645095556974, -0.0003930999955628067, 0.027254940941929817, 0.005291457287967205, 0.014297000132501125, 0.04638857766985893, 0.036854010075330734, -0.055762406438589096, -0.04080493003129959, -0.05837065353989601, 0.02155585214495659, -0.053005561232566833, 0....
296
TFCustom: Customized Image Generation with Time-Aware Frequency Feature Guidance
[ "Mushui Liu", "Dong She", "Jingxuan Pang", "Qihan Huang", "Jiacheng Ying", "Wanggui He", "Yuanlei Hou", "Siming Fu" ]
https://openaccess.thecvf.com/content/CVPR2025/html/Liu_TFCustom_Customized_Image_Generation_with_Time-Aware_Frequency_Feature_Guidance_CVPR_2025_paper.html
https://openaccess.thecvf.com/content/CVPR2025/papers/Liu_TFCustom_Customized_Image_Generation_with_Time-Aware_Frequency_Feature_Guidance_CVPR_2025_paper.pdf
null
null
@InProceedings{Liu_2025_CVPR, author = {Liu, Mushui and She, Dong and Pang, Jingxuan and Huang, Qihan and Ying, Jiacheng and He, Wanggui and Hou, Yuanlei and Fu, Siming}, title = {TFCustom: Customized Image Generation with Time-Aware Frequency Feature Guidance}, booktitle = {Proceedings of the Comput...
Subject-driven image personalization has seen notable advancements, especially with the advent of the ReferenceNet paradigm. ReferenceNet excels in integrating image reference features, making it highly applicable in creative and commercial settings. However, current implementations of ReferenceNet primarily operate as...
[ 0.03512401133775711, -0.027686886489391327, 0.02701636776328087, -0.004855460952967405, 0.042196907103061676, 0.03290925547480583, -0.010024957358837128, 0.04781591519713402, -0.012920104898512363, -0.04824816435575485, -0.026082484051585197, -0.003218934405595064, -0.05332036316394806, 0....
297
Boosting Point-Supervised Temporal Action Localization through Integrating Query Reformation and Optimal Transport
[ "Mengnan Liu", "Le Wang", "Sanping Zhou", "Kun Xia", "Xiaolong Sun", "Gang Hua" ]
https://openaccess.thecvf.com/content/CVPR2025/html/Liu_Boosting_Point-Supervised_Temporal_Action_Localization_through_Integrating_Query_Reformation_and_CVPR_2025_paper.html
https://openaccess.thecvf.com/content/CVPR2025/papers/Liu_Boosting_Point-Supervised_Temporal_Action_Localization_through_Integrating_Query_Reformation_and_CVPR_2025_paper.pdf
https://openaccess.thecvf.com/content/CVPR2025/supplemental/Liu_Boosting_Point-Supervised_Temporal_CVPR_2025_supplemental.pdf
null
@InProceedings{Liu_2025_CVPR, author = {Liu, Mengnan and Wang, Le and Zhou, Sanping and Xia, Kun and Sun, Xiaolong and Hua, Gang}, title = {Boosting Point-Supervised Temporal Action Localization through Integrating Query Reformation and Optimal Transport}, booktitle = {Proceedings of the Computer Vis...
Point-supervised Temporal Action Localization poses significant challenges due to the difficulty of identifying complete actions with a single-point annotation per action. Existing methods typically employ Multiple Instance Learning, which struggles to capture global temporal context and requires heuristic post-proces...
[ -0.0030471784994006157, -0.022043948993086815, -0.016025522723793983, 0.03519337624311447, -0.0021053862292319536, -0.017375895753502846, 0.05115325003862381, -0.018722595646977425, -0.025791572406888008, -0.00943446159362793, -0.016087908297777176, -0.012042942456901073, -0.0394749790430069...
298
SketchFusion: Learning Universal Sketch Features through Fusing Foundation Models
[ "Subhadeep Koley", "Tapas Kumar Dutta", "Aneeshan Sain", "Pinaki Nath Chowdhury", "Ayan Kumar Bhunia", "Yi-Zhe Song" ]
https://openaccess.thecvf.com/content/CVPR2025/html/Koley_SketchFusion_Learning_Universal_Sketch_Features_through_Fusing_Foundation_Models_CVPR_2025_paper.html
https://openaccess.thecvf.com/content/CVPR2025/papers/Koley_SketchFusion_Learning_Universal_Sketch_Features_through_Fusing_Foundation_Models_CVPR_2025_paper.pdf
null
2503.14129
@InProceedings{Koley_2025_CVPR, author = {Koley, Subhadeep and Dutta, Tapas Kumar and Sain, Aneeshan and Chowdhury, Pinaki Nath and Bhunia, Ayan Kumar and Song, Yi-Zhe}, title = {SketchFusion: Learning Universal Sketch Features through Fusing Foundation Models}, booktitle = {Proceedings of the Comput...
While foundation models have revolutionised computer vision, their effectiveness for sketch understanding remains limited by the unique challenges of abstract, sparse visual inputs. Through systematic analysis, we uncover two fundamental limitations: Stable Diffusion (SD) struggles to extract meaningful features from a...
[ 0.009896654635667801, -0.04133549705147743, 0.027259714901447296, 0.06155652552843094, 0.05327558517456055, 0.028860490769147873, 0.01328896265476942, 0.018182259052991867, -0.025830820202827454, -0.08743026852607727, -0.009745941497385502, -0.025117915123701096, -0.056189924478530884, 0.0...
299
Bridging the Vision-Brain Gap with an Uncertainty-Aware Blur Prior
[ "Haitao Wu", "Qing Li", "Changqing Zhang", "Zhen He", "Xiaomin Ying" ]
https://openaccess.thecvf.com/content/CVPR2025/html/Wu_Bridging_the_Vision-Brain_Gap_with_an_Uncertainty-Aware_Blur_Prior_CVPR_2025_paper.html
https://openaccess.thecvf.com/content/CVPR2025/papers/Wu_Bridging_the_Vision-Brain_Gap_with_an_Uncertainty-Aware_Blur_Prior_CVPR_2025_paper.pdf
https://openaccess.thecvf.com/content/CVPR2025/supplemental/Wu_Bridging_the_Vision-Brain_CVPR_2025_supplemental.pdf
2503.04207
@InProceedings{Wu_2025_CVPR, author = {Wu, Haitao and Li, Qing and Zhang, Changqing and He, Zhen and Ying, Xiaomin}, title = {Bridging the Vision-Brain Gap with an Uncertainty-Aware Blur Prior}, booktitle = {Proceedings of the Computer Vision and Pattern Recognition Conference (CVPR)}, month ...
Can our brain signals faithfully reflect the original visual stimuli, even including high-frequency details? Although human perceptual and cognitive capacities enable us to process and remember visual information, these abilities are constrained by several factors, such as limited attentional resources and finite capac...
[ 0.020506637170910835, 0.011832893826067448, 0.0013280949788168073, 0.04422595351934433, 0.027274560183286667, -0.0028426500502973795, 0.03985156491398811, 0.032935984432697296, -0.06453610956668854, -0.06782174110412598, -0.04881354793906212, 0.013931148685514927, -0.05113428458571434, -0....