| paper_id (uint32) | title (string) | authors (list of strings) | cvf_url (string) | pdf_url (string) | supp_url (string, nullable) | arxiv_id (string, nullable) | bibtex (string) | abstract (string) | embedding (list of float32, 768-dim) |
|---|---|---|---|---|---|---|---|---|---|
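The `embedding` column gives each paper a 768-dimensional vector, which makes the table usable for semantic retrieval (e.g., finding papers related to a query paper). A minimal sketch of such a lookup is below; the helper names (`cosine_similarity`, `most_similar`) and the use of the embeddings for retrieval are illustrative assumptions, not something the dataset itself specifies. Only the `title` and `embedding` field names come from the schema above.

```python
import math

def cosine_similarity(a, b):
    # Cosine similarity between two equal-length embedding vectors.
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def most_similar(query_embedding, records, top_k=3):
    # Rank records (dicts with "title" and "embedding" keys, matching the
    # schema above) by cosine similarity to the query embedding.
    scored = [
        (cosine_similarity(query_embedding, r["embedding"]), r["title"])
        for r in records
    ]
    return sorted(scored, reverse=True)[:top_k]
```

With real rows, `most_similar(paper["embedding"], all_records)` would return the `top_k` nearest titles with their similarity scores.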
100 | SymDPO: Boosting In-Context Learning of Large Multimodal Models with Symbol Demonstration Direct Preference Optimization | [
"Hongrui Jia",
"Chaoya Jiang",
"Haiyang Xu",
"Wei Ye",
"Mengfan Dong",
"Ming Yan",
"Ji Zhang",
"Fei Huang",
"Shikun Zhang"
] | https://openaccess.thecvf.com/content/CVPR2025/html/Jia_SymDPO_Boosting_In-Context_Learning_of_Large_Multimodal_Models_with_Symbol_CVPR_2025_paper.html | https://openaccess.thecvf.com/content/CVPR2025/papers/Jia_SymDPO_Boosting_In-Context_Learning_of_Large_Multimodal_Models_with_Symbol_CVPR_2025_paper.pdf | https://openaccess.thecvf.com/content/CVPR2025/supplemental/Jia_SymDPO_Boosting_In-Context_CVPR_2025_supplemental.zip | 2411.11909 | @InProceedings{Jia_2025_CVPR,
author = {Jia, Hongrui and Jiang, Chaoya and Xu, Haiyang and Ye, Wei and Dong, Mengfan and Yan, Ming and Zhang, Ji and Huang, Fei and Zhang, Shikun},
title = {SymDPO: Boosting In-Context Learning of Large Multimodal Models with Symbol Demonstration Direct Preference Optimiza... | As language models continue to scale, Large Language Models (LLMs) have exhibited emerging capabilities in In-Context Learning (ICL), enabling them to solve language tasks by prefixing a few in-context demonstrations (ICDs) as context. Inspired by these advancements, researchers have extended these techniques to develo... | [
-0.01776091754436493,
0.008550198748707771,
-0.016569659113883972,
0.06439178436994553,
0.013582085259258747,
0.0303377415984869,
0.019178049638867378,
0.03626776114106178,
-0.06545606255531311,
-0.015944844111800194,
-0.026491187512874603,
0.037649061530828476,
-0.07397139072418213,
-0.01... |
101 | Stealthy Backdoor Attack in Self-Supervised Learning Vision Encoders for Large Vision Language Models | [
"Zhaoyi Liu",
"Huan Zhang"
] | https://openaccess.thecvf.com/content/CVPR2025/html/Liu_Stealthy_Backdoor_Attack_in_Self-Supervised_Learning_Vision_Encoders_for_Large_CVPR_2025_paper.html | https://openaccess.thecvf.com/content/CVPR2025/papers/Liu_Stealthy_Backdoor_Attack_in_Self-Supervised_Learning_Vision_Encoders_for_Large_CVPR_2025_paper.pdf | https://openaccess.thecvf.com/content/CVPR2025/supplemental/Liu_Stealthy_Backdoor_Attack_CVPR_2025_supplemental.pdf | 2502.18290 | @InProceedings{Liu_2025_CVPR,
author = {Liu, Zhaoyi and Zhang, Huan},
title = {Stealthy Backdoor Attack in Self-Supervised Learning Vision Encoders for Large Vision Language Models},
booktitle = {Proceedings of the Computer Vision and Pattern Recognition Conference (CVPR)},
month = {June},
... | Self-supervised learning (SSL) vision encoders learn high-quality image representations and thus have become a vital part of developing vision modality of large vision language models (LVLMs). Due to the high cost of training such encoders, pre-trained encoders are widely shared and deployed into many LVLMs, which are ... | [
-0.004236872307956219,
-0.003760155290365219,
-0.007283580955117941,
0.053733110427856445,
0.030393242835998535,
0.011523181572556496,
0.053661126643419266,
0.00227144081145525,
-0.0194840170443058,
-0.010674039833247662,
-0.028259241953492165,
0.0020136774983257055,
-0.05080931633710861,
... |
102 | Data-free Universal Adversarial Perturbation with Pseudo-semantic Prior | [
"Chanhui Lee",
"Yeonghwan Song",
"Jeany Son"
] | https://openaccess.thecvf.com/content/CVPR2025/html/Lee_Data-free_Universal_Adversarial_Perturbation_with_Pseudo-semantic_Prior_CVPR_2025_paper.html | https://openaccess.thecvf.com/content/CVPR2025/papers/Lee_Data-free_Universal_Adversarial_Perturbation_with_Pseudo-semantic_Prior_CVPR_2025_paper.pdf | https://openaccess.thecvf.com/content/CVPR2025/supplemental/Lee_Data-free_Universal_Adversarial_CVPR_2025_supplemental.pdf | 2502.21048 | @InProceedings{Lee_2025_CVPR,
author = {Lee, Chanhui and Song, Yeonghwan and Son, Jeany},
title = {Data-free Universal Adversarial Perturbation with Pseudo-semantic Prior},
booktitle = {Proceedings of the Computer Vision and Pattern Recognition Conference (CVPR)},
month = {June},
year ... | Data-free Universal Adversarial Perturbation (UAP) is an image-agnostic adversarial attack that deceives deep neural networks using a single perturbation generated solely from random noise without relying on data priors. However, traditional data-free UAP methods often suffer from limited transferability due to the abs... | [
0.005622821394354105,
-0.030098000541329384,
0.0010057613253593445,
0.0664534866809845,
0.014155060984194279,
0.029977338388562202,
0.036297183483839035,
-0.002166259102523327,
-0.005428451579064131,
-0.028911544010043144,
-0.029450349509716034,
-0.0231501292437315,
-0.07197092473506927,
0... |
103 | Debiasing Multimodal Large Language Models via Noise-Aware Preference Optimization | [
"Zefeng Zhang",
"Hengzhu Tang",
"Jiawei Sheng",
"Zhenyu Zhang",
"Yiming Ren",
"Zhenyang Li",
"Dawei Yin",
"Duohe Ma",
"Tingwen Liu"
] | https://openaccess.thecvf.com/content/CVPR2025/html/Zhang_Debiasing_Multimodal_Large_Language_Models_via_Noise-Aware_Preference_Optimization_CVPR_2025_paper.html | https://openaccess.thecvf.com/content/CVPR2025/papers/Zhang_Debiasing_Multimodal_Large_Language_Models_via_Noise-Aware_Preference_Optimization_CVPR_2025_paper.pdf | https://openaccess.thecvf.com/content/CVPR2025/supplemental/Zhang_Debiasing_Multimodal_Large_CVPR_2025_supplemental.pdf | 2503.17928 | @InProceedings{Zhang_2025_CVPR,
author = {Zhang, Zefeng and Tang, Hengzhu and Sheng, Jiawei and Zhang, Zhenyu and Ren, Yiming and Li, Zhenyang and Yin, Dawei and Ma, Duohe and Liu, Tingwen},
title = {Debiasing Multimodal Large Language Models via Noise-Aware Preference Optimization},
booktitle = {Pro... | Multimodal Large Language Models (MLLMs) excel in various tasks, yet often struggle with modality bias, tending to rely heavily on a single modality or prior knowledge when generating responses. In this paper, we propose a debiased preference optimization dataset, RLAIF-V-Bias, and introduce a Noise-Aware Preference Op... | [
0.0073725138790905476,
0.0038856733590364456,
0.023357857018709183,
0.04497608542442322,
0.014584898948669434,
0.04101762920618057,
0.020002707839012146,
0.023132409900426865,
-0.025528468191623688,
-0.02661426179111004,
-0.011946235783398151,
0.04617894068360329,
-0.0935952216386795,
0.00... |
104 | SAM2-LOVE: Segment Anything Model 2 in Language-aided Audio-Visual Scenes | [
"Yuji Wang",
"Haoran Xu",
"Yong Liu",
"Jiaze Li",
"Yansong Tang"
] | https://openaccess.thecvf.com/content/CVPR2025/html/Wang_SAM2-LOVE_Segment_Anything_Model_2_in_Language-aided_Audio-Visual_Scenes_CVPR_2025_paper.html | https://openaccess.thecvf.com/content/CVPR2025/papers/Wang_SAM2-LOVE_Segment_Anything_Model_2_in_Language-aided_Audio-Visual_Scenes_CVPR_2025_paper.pdf | https://openaccess.thecvf.com/content/CVPR2025/supplemental/Wang_SAM2-LOVE_Segment_Anything_CVPR_2025_supplemental.pdf | null | @InProceedings{Wang_2025_CVPR,
author = {Wang, Yuji and Xu, Haoran and Liu, Yong and Li, Jiaze and Tang, Yansong},
title = {SAM2-LOVE: Segment Anything Model 2 in Language-aided Audio-Visual Scenes},
booktitle = {Proceedings of the Computer Vision and Pattern Recognition Conference (CVPR)},
month... | Reference Audio-Visual Segmentation (Ref-AVS) aims to provide a pixel-wise scene understanding in Language-aided Audio-Visual Scenes (LAVS). This task requires the model to continuously segment objects referred to by text and audio from a video. Previous dual-modality methods always fail due to the lack of a third moda... | [
0.017357759177684784,
0.005628794431686401,
0.026481295004487038,
0.03466219827532768,
0.0007065552053973079,
0.041176483035087585,
0.06043250486254692,
0.030433103442192078,
-0.05158247426152229,
-0.05543392524123192,
-0.042599089443683624,
0.0019538572523742914,
-0.06162998080253601,
-0.... |
105 | GIVEPose: Gradual Intra-class Variation Elimination for RGB-based Category-Level Object Pose Estimation | [
"Ziqin Huang",
"Gu Wang",
"Chenyangguang Zhang",
"Ruida Zhang",
"Xiu Li",
"Xiangyang Ji"
] | https://openaccess.thecvf.com/content/CVPR2025/html/Huang_GIVEPose_Gradual_Intra-class_Variation_Elimination_for_RGB-based_Category-Level_Object_Pose_CVPR_2025_paper.html | https://openaccess.thecvf.com/content/CVPR2025/papers/Huang_GIVEPose_Gradual_Intra-class_Variation_Elimination_for_RGB-based_Category-Level_Object_Pose_CVPR_2025_paper.pdf | https://openaccess.thecvf.com/content/CVPR2025/supplemental/Huang_GIVEPose_Gradual_Intra-class_CVPR_2025_supplemental.pdf | 2503.15110 | @InProceedings{Huang_2025_CVPR,
author = {Huang, Ziqin and Wang, Gu and Zhang, Chenyangguang and Zhang, Ruida and Li, Xiu and Ji, Xiangyang},
title = {GIVEPose: Gradual Intra-class Variation Elimination for RGB-based Category-Level Object Pose Estimation},
booktitle = {Proceedings of the Computer Vis... | Recent advances in RGBD-based category-level object pose estimation have been limited by their reliance on precise depth information, restricting their broader applicability. In response, RGB-based methods have been developed. Among these methods, geometry-guided pose regression that originated from instance-level task... | [
0.008354687131941319,
-0.03356108069419861,
0.0013132826425135136,
0.015288571827113628,
0.021768970414996147,
0.052671656012535095,
0.017524264752864838,
-0.011354036629199982,
-0.04713737219572067,
-0.036427613347768784,
-0.021951232105493546,
-0.0246992539614439,
-0.09019182622432709,
0... |
106 | FRAME: Floor-aligned Representation for Avatar Motion from Egocentric Video | [
"Andrea Boscolo Camiletto",
"Jian Wang",
"Eduardo Alvarado",
"Rishabh Dabral",
"Thabo Beeler",
"Marc Habermann",
"Christian Theobalt"
] | https://openaccess.thecvf.com/content/CVPR2025/html/Camiletto_FRAME_Floor-aligned_Representation_for_Avatar_Motion_from_Egocentric_Video_CVPR_2025_paper.html | https://openaccess.thecvf.com/content/CVPR2025/papers/Camiletto_FRAME_Floor-aligned_Representation_for_Avatar_Motion_from_Egocentric_Video_CVPR_2025_paper.pdf | https://openaccess.thecvf.com/content/CVPR2025/supplemental/Camiletto_FRAME_Floor-aligned_Representation_CVPR_2025_supplemental.pdf | 2503.23094 | @InProceedings{Camiletto_2025_CVPR,
author = {Camiletto, Andrea Boscolo and Wang, Jian and Alvarado, Eduardo and Dabral, Rishabh and Beeler, Thabo and Habermann, Marc and Theobalt, Christian},
title = {FRAME: Floor-aligned Representation for Avatar Motion from Egocentric Video},
booktitle = {Proceedi... | Egocentric motion capture with a head-mounted body-facing stereo camera is crucial for VR and AR applications but presents significant challenges such as heavy occlusions and limited annotated real-world data. Existing methods rely on synthetic pretraining and struggle to generate smooth and accurate predictions in rea... | [
0.06295879185199738,
0.020413652062416077,
-0.006809567101299763,
0.033180151134729385,
0.021374087780714035,
0.05723743513226509,
0.03381624072790146,
0.009364169090986252,
-0.038001928478479385,
-0.05554025247693062,
-0.013021616265177727,
-0.050746988505125046,
-0.07531482726335526,
-0.... |
107 | Sketch Down the FLOPs: Towards Efficient Networks for Human Sketch | [
"Aneeshan Sain",
"Subhajit Maity",
"Pinaki Nath Chowdhury",
"Shubhadeep Koley",
"Ayan Kumar Bhunia",
"Yi-Zhe Song"
] | https://openaccess.thecvf.com/content/CVPR2025/html/Sain_Sketch_Down_the_FLOPs_Towards_Efficient_Networks_for_Human_Sketch_CVPR_2025_paper.html | https://openaccess.thecvf.com/content/CVPR2025/papers/Sain_Sketch_Down_the_FLOPs_Towards_Efficient_Networks_for_Human_Sketch_CVPR_2025_paper.pdf | https://openaccess.thecvf.com/content/CVPR2025/supplemental/Sain_Sketch_Down_the_CVPR_2025_supplemental.zip | 2505.23763 | @InProceedings{Sain_2025_CVPR,
author = {Sain, Aneeshan and Maity, Subhajit and Chowdhury, Pinaki Nath and Koley, Shubhadeep and Bhunia, Ayan Kumar and Song, Yi-Zhe},
title = {Sketch Down the FLOPs: Towards Efficient Networks for Human Sketch},
booktitle = {Proceedings of the Computer Vision and Patt... | As sketch research has collectively matured over time, its adaptation for at-mass commercialisation emerges on the immediate horizon. Despite an already mature research endeavour for photos, there is no research on the efficient inference specifically designed for sketch data. In this paper, we first demonstrate existi... | [
0.010482663288712502,
-0.03650861978530884,
0.024717533960938454,
0.040531475096940994,
0.04832816869020462,
0.013972398824989796,
0.009921844117343426,
0.023792751133441925,
-0.040214601904153824,
-0.06424372643232346,
-0.011533644050359726,
-0.02788594923913479,
-0.08261189609766006,
-0.... |
108 | Generalized Zero-Shot Classification via Semantics-Free Inter-Class Feature Generation | [
"Libiao Chen",
"Dong Nie",
"Junjun Pan",
"Jing Yan",
"Zhenyu Tang"
] | https://openaccess.thecvf.com/content/CVPR2025/html/Chen_Generalized_Zero-Shot_Classification_via_Semantics-Free_Inter-Class_Feature_Generation_CVPR_2025_paper.html | https://openaccess.thecvf.com/content/CVPR2025/papers/Chen_Generalized_Zero-Shot_Classification_via_Semantics-Free_Inter-Class_Feature_Generation_CVPR_2025_paper.pdf | null | null | @InProceedings{Chen_2025_CVPR,
author = {Chen, Libiao and Nie, Dong and Pan, Junjun and Yan, Jing and Tang, Zhenyu},
title = {Generalized Zero-Shot Classification via Semantics-Free Inter-Class Feature Generation},
booktitle = {Proceedings of the Computer Vision and Pattern Recognition Conference (CV... | Generalized Zero-Shot Learning (GZSL) addresses the challenge of classifying unseen classes in the presence of seen classes by leveraging semantic attributes to bridge the gap for unseen classes. However, in image based disease classification, such as glioma sub-typing, distinguishing between classes using image semant... | [
0.014952429570257664,
-0.012925907969474792,
0.00631976081058383,
0.021910348907113075,
0.051209915429353714,
0.030573589727282524,
0.04364350065588951,
0.009502594359219074,
-0.03075762465596199,
-0.01114024966955185,
-0.03704220801591873,
0.00647246977314353,
-0.0922025591135025,
0.03404... |
109 | Feat2GS: Probing Visual Foundation Models with Gaussian Splatting | [
"Yue Chen",
"Xingyu Chen",
"Anpei Chen",
"Gerard Pons-Moll",
"Yuliang Xiu"
] | https://openaccess.thecvf.com/content/CVPR2025/html/Chen_Feat2GS_Probing_Visual_Foundation_Models_with_Gaussian_Splatting_CVPR_2025_paper.html | https://openaccess.thecvf.com/content/CVPR2025/papers/Chen_Feat2GS_Probing_Visual_Foundation_Models_with_Gaussian_Splatting_CVPR_2025_paper.pdf | https://openaccess.thecvf.com/content/CVPR2025/supplemental/Chen_Feat2GS_Probing_Visual_CVPR_2025_supplemental.pdf | 2412.09606 | @InProceedings{Chen_2025_CVPR,
author = {Chen, Yue and Chen, Xingyu and Chen, Anpei and Pons-Moll, Gerard and Xiu, Yuliang},
title = {Feat2GS: Probing Visual Foundation Models with Gaussian Splatting},
booktitle = {Proceedings of the Computer Vision and Pattern Recognition Conference (CVPR)},
mon... | Given that visual foundation models (VFMs) are trained on extensive datasets but often limited to 2D images, a natural question arises: how well do they understand the 3D world? With the differences in architecture and training protocols (i.e., objectives, proxy tasks), a unified framework to fairly and comprehensively... | [
0.018867524340748787,
-0.03157595172524452,
0.058662012219429016,
0.02590559422969818,
0.017400817945599556,
0.015148858539760113,
0.04658963531255722,
0.011405768804252148,
-0.04851645231246948,
-0.06541147828102112,
0.009797316044569016,
-0.002274405211210251,
-0.04524785280227661,
-0.00... |
110 | Multi-Modal Aerial-Ground Cross-View Place Recognition with Neural ODEs | [
"Sijie Wang",
"Rui She",
"Qiyu Kang",
"Siqi Li",
"Disheng Li",
"Tianyu Geng",
"Shangshu Yu",
"Wee Peng Tay"
] | https://openaccess.thecvf.com/content/CVPR2025/html/Wang_Multi-Modal_Aerial-Ground_Cross-View_Place_Recognition_with_Neural_ODEs_CVPR_2025_paper.html | https://openaccess.thecvf.com/content/CVPR2025/papers/Wang_Multi-Modal_Aerial-Ground_Cross-View_Place_Recognition_with_Neural_ODEs_CVPR_2025_paper.pdf | https://openaccess.thecvf.com/content/CVPR2025/supplemental/Wang_Multi-Modal_Aerial-Ground_Cross-View_CVPR_2025_supplemental.pdf | null | @InProceedings{Wang_2025_CVPR,
author = {Wang, Sijie and She, Rui and Kang, Qiyu and Li, Siqi and Li, Disheng and Geng, Tianyu and Yu, Shangshu and Tay, Wee Peng},
title = {Multi-Modal Aerial-Ground Cross-View Place Recognition with Neural ODEs},
booktitle = {Proceedings of the Computer Vision and Pa... | Place recognition (PR) aims at retrieving the query place from a database and plays a crucial role in various applications, including navigation, autonomous driving, and augmented reality. While previous multi-modal PR works have mainly focused on the same-view scenario in which ground-view descriptors are matched with... | [
0.011353040114045143,
0.0015039402060210705,
0.022072795778512955,
0.028926508501172066,
0.021310260519385338,
0.03501022607088089,
0.026067277416586876,
0.03546847775578499,
-0.03402353823184967,
-0.027386300265789032,
-0.019727585837244987,
-0.03677475079894066,
-0.10335080325603485,
-0.... |
111 | Rethinking Decoder Design: Improving Biomarker Segmentation Using Depth-to-Space Restoration and Residual Linear Attention | [
"Saad Wazir",
"Daeyoung Kim"
] | https://openaccess.thecvf.com/content/CVPR2025/html/Wazir_Rethinking_Decoder_Design_Improving_Biomarker_Segmentation_Using_Depth-to-Space_Restoration_and_CVPR_2025_paper.html | https://openaccess.thecvf.com/content/CVPR2025/papers/Wazir_Rethinking_Decoder_Design_Improving_Biomarker_Segmentation_Using_Depth-to-Space_Restoration_and_CVPR_2025_paper.pdf | https://openaccess.thecvf.com/content/CVPR2025/supplemental/Wazir_Rethinking_Decoder_Design_CVPR_2025_supplemental.pdf | null | @InProceedings{Wazir_2025_CVPR,
author = {Wazir, Saad and Kim, Daeyoung},
title = {Rethinking Decoder Design: Improving Biomarker Segmentation Using Depth-to-Space Restoration and Residual Linear Attention},
booktitle = {Proceedings of the Computer Vision and Pattern Recognition Conference (CVPR)},
... | Segmenting biomarkers in medical images is crucial for various biotech applications. Despite advances, Transformer and CNN based methods often struggle with variations in staining and morphology, limiting feature extraction. In medical image segmentation, where datasets often have limited sample availability, recent st... | [
0.006925581023097038,
-0.018867911770939827,
-0.030970502644777298,
0.021667080000042915,
0.03980468958616257,
0.04004999250173569,
0.021666716784238815,
-0.013430185616016388,
0.002899439539760351,
-0.06151379644870758,
0.008179979398846626,
0.00538485124707222,
-0.03995388001203537,
0.01... |
112 | MaDCoW: Marginal Distortion Correction for Wide-Angle Photography with Arbitrary Objects | [
"Kevin Zhang",
"Jia-Bin Huang",
"Jose Echevarria",
"Stephen DiVerdi",
"Aaron Hertzmann"
] | https://openaccess.thecvf.com/content/CVPR2025/html/Zhang_MaDCoW_Marginal_Distortion_Correction_for_Wide-Angle_Photography_with_Arbitrary_Objects_CVPR_2025_paper.html | https://openaccess.thecvf.com/content/CVPR2025/papers/Zhang_MaDCoW_Marginal_Distortion_Correction_for_Wide-Angle_Photography_with_Arbitrary_Objects_CVPR_2025_paper.pdf | https://openaccess.thecvf.com/content/CVPR2025/supplemental/Zhang_MaDCoW_Marginal_Distortion_CVPR_2025_supplemental.pdf | null | @InProceedings{Zhang_2025_CVPR,
author = {Zhang, Kevin and Huang, Jia-Bin and Echevarria, Jose and DiVerdi, Stephen and Hertzmann, Aaron},
title = {MaDCoW: Marginal Distortion Correction for Wide-Angle Photography with Arbitrary Objects},
booktitle = {Proceedings of the Computer Vision and Pattern Re... | We introduce MaDCoW, a method for correcting marginal distortion of arbitrary objects in wide-angle photography. People often use wide-angle photography to convey natural scenes--smartphones typically default to wide-angle photography--but depicting very wide-field-of-view scenes produces distorted object appearance, p... | [
0.04227999225258827,
-0.010751104913651943,
-0.02847103960812092,
0.024867037311196327,
0.07008378207683563,
0.017574850469827652,
0.015873512253165245,
0.01208651252090931,
-0.041685983538627625,
-0.058873385190963745,
-0.042514342814683914,
-0.008919461630284786,
-0.061800211668014526,
-... |
113 | SynTab-LLaVA: Enhancing Multimodal Table Understanding with Decoupled Synthesis | [
"Bangbang Zhou",
"Zuan Gao",
"Zixiao Wang",
"Boqiang Zhang",
"Yuxin Wang",
"Zhineng Chen",
"Hongtao Xie"
] | https://openaccess.thecvf.com/content/CVPR2025/html/Zhou_SynTab-LLaVA_Enhancing_Multimodal_Table_Understanding_with_Decoupled_Synthesis_CVPR_2025_paper.html | https://openaccess.thecvf.com/content/CVPR2025/papers/Zhou_SynTab-LLaVA_Enhancing_Multimodal_Table_Understanding_with_Decoupled_Synthesis_CVPR_2025_paper.pdf | https://openaccess.thecvf.com/content/CVPR2025/supplemental/Zhou_SynTab-LLaVA_Enhancing_Multimodal_CVPR_2025_supplemental.pdf | null | @InProceedings{Zhou_2025_CVPR,
author = {Zhou, Bangbang and Gao, Zuan and Wang, Zixiao and Zhang, Boqiang and Wang, Yuxin and Chen, Zhineng and Xie, Hongtao},
title = {SynTab-LLaVA: Enhancing Multimodal Table Understanding with Decoupled Synthesis},
booktitle = {Proceedings of the Computer Vision and... | Due to the limited scale of multimodal table understanding (MTU) data, model performance is constrained. A straightforward approach is to use multimodal large language models to obtain more samples, but this may cause hallucinations, generate incorrect sample pairs, and cost significantly.To address the above issues, w... | [
0.019935686141252518,
0.025541754439473152,
-0.02478588931262493,
0.03395294398069382,
0.03335355967283249,
-0.0045921690762043,
0.008198698982596397,
0.03208232298493385,
-0.015417715534567833,
-0.01981966383755207,
-0.03064575046300888,
0.022780494764447212,
-0.08624278008937836,
0.02650... |
114 | Edit Away and My Face Will not Stay: Personal Biometric Defense against Malicious Generative Editing | [
"Hanhui Wang",
"Yihua Zhang",
"Ruizheng Bai",
"Yue Zhao",
"Sijia Liu",
"Zhengzhong Tu"
] | https://openaccess.thecvf.com/content/CVPR2025/html/Wang_Edit_Away_and_My_Face_Will_not_Stay_Personal_Biometric_CVPR_2025_paper.html | https://openaccess.thecvf.com/content/CVPR2025/papers/Wang_Edit_Away_and_My_Face_Will_not_Stay_Personal_Biometric_CVPR_2025_paper.pdf | https://openaccess.thecvf.com/content/CVPR2025/supplemental/Wang_Edit_Away_and_CVPR_2025_supplemental.pdf | 2411.16832 | @InProceedings{Wang_2025_CVPR,
author = {Wang, Hanhui and Zhang, Yihua and Bai, Ruizheng and Zhao, Yue and Liu, Sijia and Tu, Zhengzhong},
title = {Edit Away and My Face Will not Stay: Personal Biometric Defense against Malicious Generative Editing},
booktitle = {Proceedings of the Computer Vision an... | Recent advancements in diffusion models have made generative image editing more accessible than ever. While these developments allow users to generate creative edits with ease, they also raise significant ethical concerns, particularly regarding malicious edits to human portraits that threaten individuals' privacy and ... | [
0.0019356440752744675,
0.004676030483096838,
-0.002582960296422243,
0.05093197152018547,
0.05563732609152794,
0.007004581857472658,
0.054273221641778946,
-0.024069905281066895,
-0.044042423367500305,
-0.06781605631113052,
-0.012800246477127075,
0.005514529068022966,
-0.06771405786275864,
-... |
115 | Any6D: Model-free 6D Pose Estimation of Novel Objects | [
"Taeyeop Lee",
"Bowen Wen",
"Minjun Kang",
"Gyuree Kang",
"In So Kweon",
"Kuk-Jin Yoon"
] | https://openaccess.thecvf.com/content/CVPR2025/html/Lee_Any6D_Model-free_6D_Pose_Estimation_of_Novel_Objects_CVPR_2025_paper.html | https://openaccess.thecvf.com/content/CVPR2025/papers/Lee_Any6D_Model-free_6D_Pose_Estimation_of_Novel_Objects_CVPR_2025_paper.pdf | null | 2503.18673 | @InProceedings{Lee_2025_CVPR,
author = {Lee, Taeyeop and Wen, Bowen and Kang, Minjun and Kang, Gyuree and Kweon, In So and Yoon, Kuk-Jin},
title = {Any6D: Model-free 6D Pose Estimation of Novel Objects},
booktitle = {Proceedings of the Computer Vision and Pattern Recognition Conference (CVPR)},
m... | We introduce Any6D, a model-free framework for 6D object pose estimation that requires only a single RGB-D anchor image to estimate both the 6D pose and size of unknown objects in novel scenes. Unlike existing methods that rely on textured 3D models or multiple viewpoints, Any6D leverages a joint object alignment proce... | [
0.0009069499210454524,
0.023294365033507347,
0.0038266396149992943,
0.046396464109420776,
0.014068500138819218,
0.044559262692928314,
0.003926469013094902,
0.03056926652789116,
-0.055739209055900574,
-0.034128475934267044,
-0.0261940099298954,
-0.022114470601081848,
-0.0958372950553894,
-0... |
116 | Improving Accuracy and Calibration via Differentiated Deep Mutual Learning | [
"Han Liu",
"Peng Cui",
"Bingning Wang",
"Weipeng Chen",
"Yupeng Zhang",
"Jun Zhu",
"Xiaolin Hu"
] | https://openaccess.thecvf.com/content/CVPR2025/html/Liu_Improving_Accuracy_and_Calibration_via_Differentiated_Deep_Mutual_Learning_CVPR_2025_paper.html | https://openaccess.thecvf.com/content/CVPR2025/papers/Liu_Improving_Accuracy_and_Calibration_via_Differentiated_Deep_Mutual_Learning_CVPR_2025_paper.pdf | https://openaccess.thecvf.com/content/CVPR2025/supplemental/Liu_Improving_Accuracy_and_CVPR_2025_supplemental.pdf | null | @InProceedings{Liu_2025_CVPR,
author = {Liu, Han and Cui, Peng and Wang, Bingning and Chen, Weipeng and Zhang, Yupeng and Zhu, Jun and Hu, Xiaolin},
title = {Improving Accuracy and Calibration via Differentiated Deep Mutual Learning},
booktitle = {Proceedings of the Computer Vision and Pattern Recogn... | Deep Neural Networks (DNNs) have achieved remarkable success in a variety of tasks, particularly in terms of prediction accuracy. However, in real-world scenarios, especially in safety-critical applications, accuracy alone is insufficient; reliable uncertainty estimates are essential. Modern DNNs, often trained with cr... | [
0.020028436556458473,
-0.009906945750117302,
-0.03834667429327965,
0.04153190180659294,
0.04115265607833862,
0.01895308867096901,
0.026402469724416733,
-0.03432454541325569,
-0.010075355879962444,
-0.0605369433760643,
0.021866371855139732,
0.004818187560886145,
-0.061701517552137375,
0.009... |
117 | DrVideo: Document Retrieval Based Long Video Understanding | [
"Ziyu Ma",
"Chenhui Gou",
"Hengcan Shi",
"Bin Sun",
"Shutao Li",
"Hamid Rezatofighi",
"Jianfei Cai"
] | https://openaccess.thecvf.com/content/CVPR2025/html/Ma_DrVideo_Document_Retrieval_Based_Long_Video_Understanding_CVPR_2025_paper.html | https://openaccess.thecvf.com/content/CVPR2025/papers/Ma_DrVideo_Document_Retrieval_Based_Long_Video_Understanding_CVPR_2025_paper.pdf | https://openaccess.thecvf.com/content/CVPR2025/supplemental/Ma_DrVideo_Document_Retrieval_CVPR_2025_supplemental.pdf | 2406.12846 | @InProceedings{Ma_2025_CVPR,
author = {Ma, Ziyu and Gou, Chenhui and Shi, Hengcan and Sun, Bin and Li, Shutao and Rezatofighi, Hamid and Cai, Jianfei},
title = {DrVideo: Document Retrieval Based Long Video Understanding},
booktitle = {Proceedings of the Computer Vision and Pattern Recognition Confere... | Most of the existing methods for video understanding primarily focus on videos only lasting tens of seconds, with limited exploration of techniques for handling long videos. The increased number of frames in long videos poses two main challenges: difficulty in locating key information and performing long-range reasonin... | [
0.014906586147844791,
-0.019271882250905037,
0.0010133974719792604,
0.06822268664836884,
0.04284188896417618,
0.009912777692079544,
0.00896404031664133,
0.005824788939207792,
-0.03894386067986488,
-0.02028614841401577,
-0.030224114656448364,
0.045727115124464035,
-0.034393567591905594,
0.0... |
118 | Infighting in the Dark: Multi-Label Backdoor Attack in Federated Learning | [
"Ye Li",
"Yanchao Zhao",
"Chengcheng Zhu",
"Jiale Zhang"
] | https://openaccess.thecvf.com/content/CVPR2025/html/Li_Infighting_in_the_Dark_Multi-Label_Backdoor_Attack_in_Federated_Learning_CVPR_2025_paper.html | https://openaccess.thecvf.com/content/CVPR2025/papers/Li_Infighting_in_the_Dark_Multi-Label_Backdoor_Attack_in_Federated_Learning_CVPR_2025_paper.pdf | https://openaccess.thecvf.com/content/CVPR2025/supplemental/Li_Infighting_in_the_CVPR_2025_supplemental.pdf | 2409.19601 | @InProceedings{Li_2025_CVPR,
author = {Li, Ye and Zhao, Yanchao and Zhu, Chengcheng and Zhang, Jiale},
title = {Infighting in the Dark: Multi-Label Backdoor Attack in Federated Learning},
booktitle = {Proceedings of the Computer Vision and Pattern Recognition Conference (CVPR)},
month = {June... | Federated Learning (FL), a privacy-preserving decentralized machine learning framework, has been shown to be vulnerable to backdoor attacks. Current research primarily focuses on the Single-Label Backdoor Attack (SBA), wherein adversaries share a consistent target. However, a critical fact is overlooked: adversaries ma... | [
-0.028698943555355072,
-0.033218126744031906,
-0.02256055362522602,
0.047440849244594574,
0.03537074103951454,
-0.00850647408515215,
0.05549930781126022,
-0.02695440500974655,
-0.022429106757044792,
-0.03253612667322159,
-0.0051113017834723,
0.008387134410440922,
-0.041822437196969986,
0.0... |
119 | Buffer Anytime: Zero-Shot Video Depth and Normal from Image Priors | [
"Zhengfei Kuang",
"Tianyuan Zhang",
"Kai Zhang",
"Hao Tan",
"Sai Bi",
"Yiwei Hu",
"Zexiang Xu",
"Milos Hasan",
"Gordon Wetzstein",
"Fujun Luan"
] | https://openaccess.thecvf.com/content/CVPR2025/html/Kuang_Buffer_Anytime_Zero-Shot_Video_Depth_and_Normal_from_Image_Priors_CVPR_2025_paper.html | https://openaccess.thecvf.com/content/CVPR2025/papers/Kuang_Buffer_Anytime_Zero-Shot_Video_Depth_and_Normal_from_Image_Priors_CVPR_2025_paper.pdf | https://openaccess.thecvf.com/content/CVPR2025/supplemental/Kuang_Buffer_Anytime_Zero-Shot_CVPR_2025_supplemental.pdf | 2411.17249 | @InProceedings{Kuang_2025_CVPR,
author = {Kuang, Zhengfei and Zhang, Tianyuan and Zhang, Kai and Tan, Hao and Bi, Sai and Hu, Yiwei and Xu, Zexiang and Hasan, Milos and Wetzstein, Gordon and Luan, Fujun},
title = {Buffer Anytime: Zero-Shot Video Depth and Normal from Image Priors},
booktitle = {Proce... | We present Buffer Anytime, a framework for estimation of depth and normal maps (which we call geometric buffers) from video that eliminates the need for paired video--depth and video--normal training data. Instead of relying on large-scale annotated video datasets, we demonstrate high-quality video buffer estimation by... | [
0.029306335374712944,
-0.008769713342189789,
-0.0014192607486620545,
0.0425250269472599,
0.005858725868165493,
0.027679016813635826,
0.043073125183582306,
0.058496858924627304,
-0.023094529286026955,
-0.07397401332855225,
-0.014430792070925236,
-0.043752022087574005,
-0.03891011327505112,
... |
120 | PSHuman: Photorealistic Single-image 3D Human Reconstruction using Cross-Scale Multiview Diffusion and Explicit Remeshing | [
"Peng Li",
"Wangguandong Zheng",
"Yuan Liu",
"Tao Yu",
"Yangguang Li",
"Xingqun Qi",
"Xiaowei Chi",
"Siyu Xia",
"Yan-Pei Cao",
"Wei Xue",
"Wenhan Luo",
"Yike Guo"
] | https://openaccess.thecvf.com/content/CVPR2025/html/Li_PSHuman_Photorealistic_Single-image_3D_Human_Reconstruction_using_Cross-Scale_Multiview_Diffusion_CVPR_2025_paper.html | https://openaccess.thecvf.com/content/CVPR2025/papers/Li_PSHuman_Photorealistic_Single-image_3D_Human_Reconstruction_using_Cross-Scale_Multiview_Diffusion_CVPR_2025_paper.pdf | https://openaccess.thecvf.com/content/CVPR2025/supplemental/Li_PSHuman_Photorealistic_Single-image_CVPR_2025_supplemental.pdf | 2409.10141 | @InProceedings{Li_2025_CVPR,
author = {Li, Peng and Zheng, Wangguandong and Liu, Yuan and Yu, Tao and Li, Yangguang and Qi, Xingqun and Chi, Xiaowei and Xia, Siyu and Cao, Yan-Pei and Xue, Wei and Luo, Wenhan and Guo, Yike},
title = {PSHuman: Photorealistic Single-image 3D Human Reconstruction using Cros... | Photorealistic 3D human modeling is essential for various applications and has seen tremendous progress. However, existing methods for monocular full-body reconstruction, typically relying on front and/or predicted back view, still struggle with satisfactory performance due to the ill-posed nature of the problem and so... | [
0.012155836448073387,
0.027117619290947914,
-0.022378960624337196,
0.024882977828383446,
0.0523538738489151,
0.04530370235443115,
0.02355252392590046,
-0.002094920724630356,
-0.04193948209285736,
-0.08961579203605652,
0.003451695665717125,
-0.05128318443894386,
-0.054582759737968445,
-0.00... |
121 | LSNet: See Large, Focus Small | [
"Ao Wang",
"Hui Chen",
"Zijia Lin",
"Jungong Han",
"Guiguang Ding"
] | https://openaccess.thecvf.com/content/CVPR2025/html/Wang_LSNet_See_Large_Focus_Small_CVPR_2025_paper.html | https://openaccess.thecvf.com/content/CVPR2025/papers/Wang_LSNet_See_Large_Focus_Small_CVPR_2025_paper.pdf | https://openaccess.thecvf.com/content/CVPR2025/supplemental/Wang_LSNet_See_Large_CVPR_2025_supplemental.pdf | 2503.23135 | @InProceedings{Wang_2025_CVPR,
author = {Wang, Ao and Chen, Hui and Lin, Zijia and Han, Jungong and Ding, Guiguang},
title = {LSNet: See Large, Focus Small},
booktitle = {Proceedings of the Computer Vision and Pattern Recognition Conference (CVPR)},
month = {June},
year = {2025},
... | Vision network designs, including Convolutional Neural Networks and Vision Transformers, have significantly advanced the field of computer vision. Yet, their complex computations pose challenges for practical deployments, particularly in real-time applications. To tackle this issue, researchers have explored various li... | [
0.005275084171444178,
-0.01917761005461216,
0.007307759486138821,
0.0021728447172790766,
0.02990931086242199,
0.016140051186084747,
-0.02746908739209175,
0.03835099935531616,
-0.03174508735537529,
-0.04418308287858963,
-0.021704552695155144,
-0.02207644097507,
-0.06896309554576874,
0.00334... |
122 | DynamicScaler: Seamless and Scalable Video Generation for Panoramic Scenes | [
"Jinxiu Liu",
"Shaoheng Lin",
"Yinxiao Li",
"Ming-Hsuan Yang"
] | https://openaccess.thecvf.com/content/CVPR2025/html/Liu_DynamicScaler_Seamless_and_Scalable_Video_Generation_for_Panoramic_Scenes_CVPR_2025_paper.html | https://openaccess.thecvf.com/content/CVPR2025/papers/Liu_DynamicScaler_Seamless_and_Scalable_Video_Generation_for_Panoramic_Scenes_CVPR_2025_paper.pdf | null | 2412.11100 | @InProceedings{Liu_2025_CVPR,
author = {Liu, Jinxiu and Lin, Shaoheng and Li, Yinxiao and Yang, Ming-Hsuan},
title = {DynamicScaler: Seamless and Scalable Video Generation for Panoramic Scenes},
booktitle = {Proceedings of the Computer Vision and Pattern Recognition Conference (CVPR)},
month ... | The increasing demand for immersive AR/VR applications and spatial intelligence has heightened the need to generate high-quality scene-level and 360° panoramic video. However, most video diffusion models are constrained by limited resolution and aspect ratio, which restricts their applicability to scene-level dynamic... | [
0.003626817837357521,
-0.005403304938226938,
0.019209034740924835,
0.003755474230274558,
0.04331333190202713,
0.035015180706977844,
0.04570208117365837,
-0.0018421561690047383,
-0.06727603077888489,
-0.05981510132551193,
0.01254281122237444,
-0.022719576954841614,
-0.01317587960511446,
0.0... |
123 | Tartan IMU: A Light Foundation Model for Inertial Positioning in Robotics | [
"Shibo Zhao",
"Sifan Zhou",
"Raphael Blanchard",
"Yuheng Qiu",
"Wenshan Wang",
"Sebastian Scherer"
] | https://openaccess.thecvf.com/content/CVPR2025/html/Zhao_Tartan_IMU_A_Light_Foundation_Model_for_Inertial_Positioning_in_CVPR_2025_paper.html | https://openaccess.thecvf.com/content/CVPR2025/papers/Zhao_Tartan_IMU_A_Light_Foundation_Model_for_Inertial_Positioning_in_CVPR_2025_paper.pdf | https://openaccess.thecvf.com/content/CVPR2025/supplemental/Zhao_Tartan_IMU_A_CVPR_2025_supplemental.pdf | null | @InProceedings{Zhao_2025_CVPR,
author = {Zhao, Shibo and Zhou, Sifan and Blanchard, Raphael and Qiu, Yuheng and Wang, Wenshan and Scherer, Sebastian},
title = {Tartan IMU: A Light Foundation Model for Inertial Positioning in Robotics},
booktitle = {Proceedings of the Computer Vision and Pattern Recog... | Despite recent advances in deep learning, most existing learning IMU odometry methods are trained on specific datasets, lack generalization, and are prone to overfitting, which limits their real-world application. To address these challenges, we present Tartan IMU, a foundation model designed for generalizable, IMU-bas... | [
0.007524932734668255,
-0.012956680729985237,
-0.009247349575161934,
0.02413543313741684,
0.041145507246255875,
0.041861698031425476,
0.027676664292812347,
0.029932694509625435,
-0.023679690435528755,
-0.037978529930114746,
-0.01721612922847271,
-0.04640493914484978,
-0.0717746764421463,
-0... |
124 | Event Ellipsometer: Event-based Mueller-Matrix Video Imaging | [
"Ryota Maeda",
"Yunseong Moon",
"Seung-Hwan Baek"
] | https://openaccess.thecvf.com/content/CVPR2025/html/Maeda_Event_Ellipsometer_Event-based_Mueller-Matrix_Video_Imaging_CVPR_2025_paper.html | https://openaccess.thecvf.com/content/CVPR2025/papers/Maeda_Event_Ellipsometer_Event-based_Mueller-Matrix_Video_Imaging_CVPR_2025_paper.pdf | https://openaccess.thecvf.com/content/CVPR2025/supplemental/Maeda_Event_Ellipsometer_Event-based_CVPR_2025_supplemental.zip | 2411.17313 | @InProceedings{Maeda_2025_CVPR,
author = {Maeda, Ryota and Moon, Yunseong and Baek, Seung-Hwan},
title = {Event Ellipsometer: Event-based Mueller-Matrix Video Imaging},
booktitle = {Proceedings of the Computer Vision and Pattern Recognition Conference (CVPR)},
month = {June},
year = ... | Light-matter interactions modify both the intensity and polarization state of light. Changes in polarization, represented by a Mueller matrix, encode detailed scene information. Existing optical ellipsometers capture Mueller-matrix images; however, they are often limited to static scenes due to long acquisition times. ... | [
-0.012921715155243874,
0.022663338109850883,
0.009941017255187035,
0.015291696414351463,
0.034394875168800354,
-0.018276121467351913,
-0.004911260679364204,
0.005128875840455294,
-0.06101042777299881,
-0.023884903639554977,
-0.02550458163022995,
0.012393486686050892,
-0.016051482409238815,
... |
125 | DocLayLLM: An Efficient Multi-modal Extension of Large Language Models for Text-rich Document Understanding | [
"Wenhui Liao",
"Jiapeng Wang",
"Hongliang Li",
"Chengyu Wang",
"Jun Huang",
"Lianwen Jin"
] | https://openaccess.thecvf.com/content/CVPR2025/html/Liao_DocLayLLM_An_Efficient_Multi-modal_Extension_of_Large_Language_Models_for_CVPR_2025_paper.html | https://openaccess.thecvf.com/content/CVPR2025/papers/Liao_DocLayLLM_An_Efficient_Multi-modal_Extension_of_Large_Language_Models_for_CVPR_2025_paper.pdf | https://openaccess.thecvf.com/content/CVPR2025/supplemental/Liao_DocLayLLM_An_Efficient_CVPR_2025_supplemental.pdf | 2408.15045 | @InProceedings{Liao_2025_CVPR,
author = {Liao, Wenhui and Wang, Jiapeng and Li, Hongliang and Wang, Chengyu and Huang, Jun and Jin, Lianwen},
title = {DocLayLLM: An Efficient Multi-modal Extension of Large Language Models for Text-rich Document Understanding},
booktitle = {Proceedings of the Computer... | Text-rich document understanding (TDU) requires comprehensive analysis of documents containing substantial textual content and complex layouts. While Multimodal Large Language Models (MLLMs) have achieved fast progress in this domain, existing approaches either demand significant computational resources or struggle wit... | [
0.015256197191774845,
-0.011875768192112446,
-0.016933150589466095,
0.05505411699414253,
0.021149741485714912,
0.0148464385420084,
-0.013737154193222523,
0.030478667467832565,
-0.03427283465862274,
-0.011009220033884048,
-0.04789668694138527,
0.05236348137259483,
-0.042899880558252335,
0.0... |
126 | EDEN: Enhanced Diffusion for High-quality Large-motion Video Frame Interpolation | [
"Zihao Zhang",
"Haoran Chen",
"Haoyu Zhao",
"Guansong Lu",
"Yanwei Fu",
"Hang Xu",
"Zuxuan Wu"
] | https://openaccess.thecvf.com/content/CVPR2025/html/Zhang_EDEN_Enhanced_Diffusion_for_High-quality_Large-motion_Video_Frame_Interpolation_CVPR_2025_paper.html | https://openaccess.thecvf.com/content/CVPR2025/papers/Zhang_EDEN_Enhanced_Diffusion_for_High-quality_Large-motion_Video_Frame_Interpolation_CVPR_2025_paper.pdf | https://openaccess.thecvf.com/content/CVPR2025/supplemental/Zhang_EDEN_Enhanced_Diffusion_CVPR_2025_supplemental.pdf | 2503.15831 | @InProceedings{Zhang_2025_CVPR,
author = {Zhang, Zihao and Chen, Haoran and Zhao, Haoyu and Lu, Guansong and Fu, Yanwei and Xu, Hang and Wu, Zuxuan},
title = {EDEN: Enhanced Diffusion for High-quality Large-motion Video Frame Interpolation},
booktitle = {Proceedings of the Computer Vision and Pattern... | Handling complex or nonlinear motion patterns has long posed challenges for video frame interpolation. Although recent advances in diffusion-based methods offer improvements over traditional optical flow-based approaches, they still struggle to generate sharp, temporally consistent frames in scenarios with large motion... | [
0.014355359598994255,
0.012632027268409729,
-0.00004399533281684853,
0.0596744604408741,
0.04206467419862747,
0.043964728713035583,
0.006853573489934206,
-0.019799651578068733,
-0.020144455134868622,
-0.05031098425388336,
0.0050143888220191,
-0.05397859215736389,
-0.029214557260274887,
0.0... |
127 | Handling Spatial-Temporal Data Heterogeneity for Federated Continual Learning via Tail Anchor | [
"Hao Yu",
"Xin Yang",
"Le Zhang",
"Hanlin Gu",
"Tianrui Li",
"Lixin Fan",
"Qiang Yang"
] | https://openaccess.thecvf.com/content/CVPR2025/html/Yu_Handling_Spatial-Temporal_Data_Heterogeneity_for_Federated_Continual_Learning_via_Tail_CVPR_2025_paper.html | https://openaccess.thecvf.com/content/CVPR2025/papers/Yu_Handling_Spatial-Temporal_Data_Heterogeneity_for_Federated_Continual_Learning_via_Tail_CVPR_2025_paper.pdf | https://openaccess.thecvf.com/content/CVPR2025/supplemental/Yu_Handling_Spatial-Temporal_Data_CVPR_2025_supplemental.pdf | 2412.18355 | @InProceedings{Yu_2025_CVPR,
author = {Yu, Hao and Yang, Xin and Zhang, Le and Gu, Hanlin and Li, Tianrui and Fan, Lixin and Yang, Qiang},
title = {Handling Spatial-Temporal Data Heterogeneity for Federated Continual Learning via Tail Anchor},
booktitle = {Proceedings of the Computer Vision and Patte... | Federated Continual Learning (FCL) allows each client to continually update its knowledge from task streams, enhancing the applicability of federated learning in real-world scenarios. However, FCL needs to address not only spatial data heterogeneity between clients but also temporal data heterogeneity between tasks. In... | [
-0.0038927877321839333,
-0.0576096847653389,
0.0009777032537385821,
0.03752896562218666,
0.03311678022146225,
0.021356916055083275,
0.013814407400786877,
0.0034573350567370653,
-0.02608231082558632,
-0.02141767181456089,
-0.010219586081802845,
-0.015363143756985664,
-0.06691998243331909,
0... |
128 | DeSiRe-GS: 4D Street Gaussians for Static-Dynamic Decomposition and Surface Reconstruction for Urban Driving Scenes | [
"Chensheng Peng",
"Chengwei Zhang",
"Yixiao Wang",
"Chenfeng Xu",
"Yichen Xie",
"Wenzhao Zheng",
"Kurt Keutzer",
"Masayoshi Tomizuka",
"Wei Zhan"
] | https://openaccess.thecvf.com/content/CVPR2025/html/Peng_DeSiRe-GS_4D_Street_Gaussians_for_Static-Dynamic_Decomposition_and_Surface_Reconstruction_CVPR_2025_paper.html | https://openaccess.thecvf.com/content/CVPR2025/papers/Peng_DeSiRe-GS_4D_Street_Gaussians_for_Static-Dynamic_Decomposition_and_Surface_Reconstruction_CVPR_2025_paper.pdf | https://openaccess.thecvf.com/content/CVPR2025/supplemental/Peng_DeSiRe-GS_4D_Street_CVPR_2025_supplemental.pdf | null | @InProceedings{Peng_2025_CVPR,
author = {Peng, Chensheng and Zhang, Chengwei and Wang, Yixiao and Xu, Chenfeng and Xie, Yichen and Zheng, Wenzhao and Keutzer, Kurt and Tomizuka, Masayoshi and Zhan, Wei},
title = {DeSiRe-GS: 4D Street Gaussians for Static-Dynamic Decomposition and Surface Reconstruction f... | We present DeSiRe-GS, a self-supervised gaussian splatting representation, enabling effective static-dynamic decomposition and high-fidelity surface reconstruction in complex driving scenarios. Our approach employs a two-stage optimization pipeline of dynamic street Gaussians. In the first stage, we extract 2D motion m... | [
0.011962746270000935,
0.005750760901719332,
0.031181393191218376,
0.04334275424480438,
0.003422093577682972,
0.06051395833492279,
0.025275668129324913,
0.009481548331677914,
-0.018409403041005135,
-0.06033506989479065,
-0.013994239270687103,
-0.006779002957046032,
-0.05773103982210159,
-0.... |
129 | End-to-End HOI Reconstruction Transformer with Graph-based Encoding | [
"Zhenrong Wang",
"Qi Zheng",
"Sihan Ma",
"Maosheng Ye",
"Yibing Zhan",
"Dongjiang Li"
] | https://openaccess.thecvf.com/content/CVPR2025/html/Wang_End-to-End_HOI_Reconstruction_Transformer_with_Graph-based_Encoding_CVPR_2025_paper.html | https://openaccess.thecvf.com/content/CVPR2025/papers/Wang_End-to-End_HOI_Reconstruction_Transformer_with_Graph-based_Encoding_CVPR_2025_paper.pdf | https://openaccess.thecvf.com/content/CVPR2025/supplemental/Wang_End-to-End_HOI_Reconstruction_CVPR_2025_supplemental.pdf | 2503.06012 | @InProceedings{Wang_2025_CVPR,
author = {Wang, Zhenrong and Zheng, Qi and Ma, Sihan and Ye, Maosheng and Zhan, Yibing and Li, Dongjiang},
title = {End-to-End HOI Reconstruction Transformer with Graph-based Encoding},
booktitle = {Proceedings of the Computer Vision and Pattern Recognition Conference (... | Human-object interaction (HOI) reconstruction has garnered significant attention due to its diverse applications and the success of capturing human meshes. Existing HOI reconstruction methods often rely on explicitly modeling interactions between humans and objects. However, such a way leads to a natural conflict betwe... | [
-0.010696295648813248,
0.027428751811385155,
-0.0003267683496233076,
0.007643462624400854,
0.026729250326752663,
0.03246728703379631,
0.004000124987214804,
0.005326219834387302,
-0.01575130969285965,
-0.032038796693086624,
-0.042858824133872986,
-0.009201793931424618,
-0.07982349395751953,
... |
130 | REWIND: Real-Time Egocentric Whole-Body Motion Diffusion with Exemplar-Based Identity Conditioning | [
"Jihyun Lee",
"Weipeng Xu",
"Alexander Richard",
"Shih-En Wei",
"Shunsuke Saito",
"Shaojie Bai",
"Te-Li Wang",
"Minhyuk Sung",
"Tae-Kyun Kim",
"Jason Saragih"
] | https://openaccess.thecvf.com/content/CVPR2025/html/Lee_REWIND_Real-Time_Egocentric_Whole-Body_Motion_Diffusion_with_Exemplar-Based_Identity_Conditioning_CVPR_2025_paper.html | https://openaccess.thecvf.com/content/CVPR2025/papers/Lee_REWIND_Real-Time_Egocentric_Whole-Body_Motion_Diffusion_with_Exemplar-Based_Identity_Conditioning_CVPR_2025_paper.pdf | https://openaccess.thecvf.com/content/CVPR2025/supplemental/Lee_REWIND_Real-Time_Egocentric_CVPR_2025_supplemental.zip | 2504.04956 | @InProceedings{Lee_2025_CVPR,
author = {Lee, Jihyun and Xu, Weipeng and Richard, Alexander and Wei, Shih-En and Saito, Shunsuke and Bai, Shaojie and Wang, Te-Li and Sung, Minhyuk and Kim, Tae-Kyun and Saragih, Jason},
title = {REWIND: Real-Time Egocentric Whole-Body Motion Diffusion with Exemplar-Based I... | We present REWIND (Real-Time Egocentric Whole-Body Motion Diffusion), a one-step diffusion model for real-time, high-fidelity human motion estimation from egocentric image inputs. While an existing method for egocentric whole-body (i.e., body and hands) motion estimation is non-real-time and acausal due to diffusion-ba... | [
0.012301386334002018,
-0.014817347750067711,
0.0009810853516682982,
0.017572250217199326,
0.03642614558339119,
0.016469432041049004,
0.03833213821053505,
-0.008100220002233982,
-0.03557682782411575,
-0.07397062331438065,
0.002126561477780342,
-0.048892028629779816,
-0.041030846536159515,
-... |
131 | Hiding Images in Diffusion Models by Editing Learned Score Functions | [
"Haoyu Chen",
"Yunqiao Yang",
"Nan Zhong",
"Kede Ma"
] | https://openaccess.thecvf.com/content/CVPR2025/html/Chen_Hiding_Images_in_Diffusion_Models_by_Editing_Learned_Score_Functions_CVPR_2025_paper.html | https://openaccess.thecvf.com/content/CVPR2025/papers/Chen_Hiding_Images_in_Diffusion_Models_by_Editing_Learned_Score_Functions_CVPR_2025_paper.pdf | https://openaccess.thecvf.com/content/CVPR2025/supplemental/Chen_Hiding_Images_in_CVPR_2025_supplemental.pdf | 2503.18459 | @InProceedings{Chen_2025_CVPR,
author = {Chen, Haoyu and Yang, Yunqiao and Zhong, Nan and Ma, Kede},
title = {Hiding Images in Diffusion Models by Editing Learned Score Functions},
booktitle = {Proceedings of the Computer Vision and Pattern Recognition Conference (CVPR)},
month = {June},
... | Hiding data using neural networks (i.e., neural steganography) has achieved remarkable success across both discriminative classifiers and generative adversarial networks. However, the potential of data hiding in diffusion models remains relatively unexplored. Current methods exhibit limitations in achieving high extrac... | [
-0.008343840949237347,
-0.004495581146329641,
-0.012590121477842331,
0.07378534972667694,
0.07825235277414322,
0.019750729203224182,
0.01658996380865574,
-0.04043354094028473,
-0.023942651227116585,
-0.05726994201540947,
0.000614245655015111,
-0.0063971891067922115,
-0.03463215380907059,
-... |
132 | Disco4D: Disentangled 4D Human Generation and Animation from a Single Image | [
"Hui En Pang",
"Shuai Liu",
"Zhongang Cai",
"Lei Yang",
"Tianwei Zhang",
"Ziwei Liu"
] | https://openaccess.thecvf.com/content/CVPR2025/html/Pang_Disco4D_Disentangled_4D_Human_Generation_and_Animation_from_a_Single_CVPR_2025_paper.html | https://openaccess.thecvf.com/content/CVPR2025/papers/Pang_Disco4D_Disentangled_4D_Human_Generation_and_Animation_from_a_Single_CVPR_2025_paper.pdf | https://openaccess.thecvf.com/content/CVPR2025/supplemental/Pang_Disco4D_Disentangled_4D_CVPR_2025_supplemental.zip | 2409.17280 | @InProceedings{Pang_2025_CVPR,
author = {Pang, Hui En and Liu, Shuai and Cai, Zhongang and Yang, Lei and Zhang, Tianwei and Liu, Ziwei},
title = {Disco4D: Disentangled 4D Human Generation and Animation from a Single Image},
booktitle = {Proceedings of the Computer Vision and Pattern Recognition Confe... | We present Disco4D, a novel Gaussian Splatting framework for 4D human generation and animation from a single image. Different from existing methods, Disco4D distinctively disentangles clothings (with Gaussian models) from the human body (with SMPL-X model), significantly enhancing the generation details and flexibility... | [
0.019268454983830452,
-0.022730758413672447,
-0.009882739745080471,
0.057728737592697144,
0.019127078354358673,
0.026795247569680214,
0.006588687654584646,
0.011595412157475948,
-0.03921614587306976,
-0.038165606558322906,
-0.050705209374427795,
-0.03367875516414642,
-0.0753147155046463,
-... |
133 | DoraCycle: Domain-Oriented Adaptation of Unified Generative Model in Multimodal Cycles | [
"Rui Zhao",
"Weijia Mao",
"Mike Zheng Shou"
] | https://openaccess.thecvf.com/content/CVPR2025/html/Zhao_DoraCycle_Domain-Oriented_Adaptation_of_Unified_Generative_Model_in_Multimodal_Cycles_CVPR_2025_paper.html | https://openaccess.thecvf.com/content/CVPR2025/papers/Zhao_DoraCycle_Domain-Oriented_Adaptation_of_Unified_Generative_Model_in_Multimodal_Cycles_CVPR_2025_paper.pdf | null | 2503.03651 | @InProceedings{Zhao_2025_CVPR,
author = {Zhao, Rui and Mao, Weijia and Shou, Mike Zheng},
title = {DoraCycle: Domain-Oriented Adaptation of Unified Generative Model in Multimodal Cycles},
booktitle = {Proceedings of the Computer Vision and Pattern Recognition Conference (CVPR)},
month = {June... | Adapting generative models to specific domains presents an effective solution for satisfying specialized requirements. However, adapting to some complex domains remains challenging, especially when these domains require substantial paired data to capture the targeted distributions. Since unpaired data from a single mod... | [
-0.006404949352145195,
-0.024647509679198265,
-0.03331722691655159,
0.048558857291936874,
0.04136650264263153,
0.02537013217806816,
0.043219294399023056,
0.030459504574537277,
-0.0231728907674551,
-0.03542493283748627,
-0.010340308770537376,
0.019056325778365135,
-0.05981990322470665,
0.01... |
134 | WeatherGen: A Unified Diverse Weather Generator for LiDAR Point Clouds via Spider Mamba Diffusion | [
"Yang Wu",
"Yun Zhu",
"Kaihua Zhang",
"Jianjun Qian",
"Jin Xie",
"Jian Yang"
] | https://openaccess.thecvf.com/content/CVPR2025/html/Wu_WeatherGen_A_Unified_Diverse_Weather_Generator_for_LiDAR_Point_Clouds_CVPR_2025_paper.html | https://openaccess.thecvf.com/content/CVPR2025/papers/Wu_WeatherGen_A_Unified_Diverse_Weather_Generator_for_LiDAR_Point_Clouds_CVPR_2025_paper.pdf | https://openaccess.thecvf.com/content/CVPR2025/supplemental/Wu_WeatherGen_A_Unified_CVPR_2025_supplemental.pdf | 2504.13561 | @InProceedings{Wu_2025_CVPR,
author = {Wu, Yang and Zhu, Yun and Zhang, Kaihua and Qian, Jianjun and Xie, Jin and Yang, Jian},
title = {WeatherGen: A Unified Diverse Weather Generator for LiDAR Point Clouds via Spider Mamba Diffusion},
booktitle = {Proceedings of the Computer Vision and Pattern Recog... | 3D scene perception demands a large amount of adverse-weather LiDAR data, yet the cost of LiDAR data collection presents a significant scaling-up challenge. To this end, a series of LiDAR simulators have been proposed. Yet, they can only simulate a single adverse weather with a single physical model, and the fidelity i... | [
0.003135155187919736,
-0.025165952742099762,
0.016158904880285263,
0.05002477392554283,
0.06375504285097122,
0.03211582079529762,
0.028054354712367058,
0.0007961606606841087,
-0.03326330706477165,
-0.04576478525996208,
-0.03443824127316475,
-0.0062535484321415424,
-0.04280032962560654,
0.0... |
135 | MUST: The First Dataset and Unified Framework for Multispectral UAV Single Object Tracking | [
"Haolin Qin",
"Tingfa Xu",
"Tianhao Li",
"Zhenxiang Chen",
"Tao Feng",
"Jianan Li"
] | https://openaccess.thecvf.com/content/CVPR2025/html/Qin_MUST_The_First_Dataset_and_Unified_Framework_for_Multispectral_UAV_CVPR_2025_paper.html | https://openaccess.thecvf.com/content/CVPR2025/papers/Qin_MUST_The_First_Dataset_and_Unified_Framework_for_Multispectral_UAV_CVPR_2025_paper.pdf | https://openaccess.thecvf.com/content/CVPR2025/supplemental/Qin_MUST_The_First_CVPR_2025_supplemental.pdf | 2503.17699 | @InProceedings{Qin_2025_CVPR,
author = {Qin, Haolin and Xu, Tingfa and Li, Tianhao and Chen, Zhenxiang and Feng, Tao and Li, Jianan},
title = {MUST: The First Dataset and Unified Framework for Multispectral UAV Single Object Tracking},
booktitle = {Proceedings of the Computer Vision and Pattern Recog... | UAV tracking faces significant challenges in real-world scenarios, such as small-size targets and occlusions, which limit the performance of RGB-based trackers. Multispectral images (MSI), which capture additional spectral information, offer a promising solution to these challenges. However, progress in this field has ... | [
0.004876491613686085,
-0.021496448665857315,
0.00728903291746974,
0.029805239289999008,
0.036125607788562775,
0.006853537168353796,
0.045725494623184204,
0.02224668860435486,
-0.05729171261191368,
-0.06977810710668564,
-0.07129179686307907,
0.0014100911794230342,
-0.07654855400323868,
-0.0... |
136 | IDOL: Instant Photorealistic 3D Human Creation from a Single Image | [
"Yiyu Zhuang",
"Jiaxi Lv",
"Hao Wen",
"Qing Shuai",
"Ailing Zeng",
"Hao Zhu",
"Shifeng Chen",
"Yujiu Yang",
"Xun Cao",
"Wei Liu"
] | https://openaccess.thecvf.com/content/CVPR2025/html/Zhuang_IDOL_Instant_Photorealistic_3D_Human_Creation_from_a_Single_Image_CVPR_2025_paper.html | https://openaccess.thecvf.com/content/CVPR2025/papers/Zhuang_IDOL_Instant_Photorealistic_3D_Human_Creation_from_a_Single_Image_CVPR_2025_paper.pdf | https://openaccess.thecvf.com/content/CVPR2025/supplemental/Zhuang_IDOL_Instant_Photorealistic_CVPR_2025_supplemental.zip | 2412.14963 | @InProceedings{Zhuang_2025_CVPR,
author = {Zhuang, Yiyu and Lv, Jiaxi and Wen, Hao and Shuai, Qing and Zeng, Ailing and Zhu, Hao and Chen, Shifeng and Yang, Yujiu and Cao, Xun and Liu, Wei},
title = {IDOL: Instant Photorealistic 3D Human Creation from a Single Image},
booktitle = {Proceedings of the ... | Creating a high-fidelity, animatable 3D full-body avatar from a single image is a challenging task due to the diverse appearance and poses of humans and the limited availability of high-quality training data. To achieve fast and high-quality human reconstruction, this work rethinks the task from the perspectives of dat... | [
0.016348032280802727,
-0.031077701598405838,
-0.0330343022942543,
0.05537886917591095,
0.014175498858094215,
0.030673541128635406,
0.04441710561513901,
0.029380924999713898,
-0.014387894421815872,
-0.06286438554525375,
-0.022712770849466324,
-0.03200811892747879,
-0.08847446739673615,
-0.0... |
137 | Tightening Robustness Verification of MaxPool-based Neural Networks via Minimizing the Over-Approximation Zone | [
"Yuan Xiao",
"Yuchen Chen",
"Shiqing Ma",
"Chunrong Fang",
"Tongtong Bai",
"Mingzheng Gu",
"Yuxin Cheng",
"Yanwei Chen",
"Zhenyu Chen"
] | https://openaccess.thecvf.com/content/CVPR2025/html/Xiao_Tightening_Robustness_Verification_of_MaxPool-based_Neural_Networks_via_Minimizing_the_CVPR_2025_paper.html | https://openaccess.thecvf.com/content/CVPR2025/papers/Xiao_Tightening_Robustness_Verification_of_MaxPool-based_Neural_Networks_via_Minimizing_the_CVPR_2025_paper.pdf | https://openaccess.thecvf.com/content/CVPR2025/supplemental/Xiao_Tightening_Robustness_Verification_CVPR_2025_supplemental.pdf | 2211.09810 | @InProceedings{Xiao_2025_CVPR,
author = {Xiao, Yuan and Chen, Yuchen and Ma, Shiqing and Fang, Chunrong and Bai, Tongtong and Gu, Mingzheng and Cheng, Yuxin and Chen, Yanwei and Chen, Zhenyu},
title = {Tightening Robustness Verification of MaxPool-based Neural Networks via Minimizing the Over-Approximati... | The robustness of neural network classifiers is important in the safety-critical domain and can be quantified by robustness verification. At present, efficient and scalable verification techniques are always sound but incomplete, and thus, the improvement of verified robustness results is the key criterion to evaluate ... | [
0.00903515424579382,
-0.005907045677304268,
-0.009020546451210976,
0.03003317303955555,
0.050201527774333954,
0.018918238580226898,
-0.004910723771899939,
-0.02244294062256813,
-0.025878503918647766,
-0.02909550629556179,
-0.00023845267423894256,
-0.006946220528334379,
-0.04422915726900101,
... |
138 | SketchVideo: Sketch-based Video Generation and Editing | [
"Feng-Lin Liu",
"Hongbo Fu",
"Xintao Wang",
"Weicai Ye",
"Pengfei Wan",
"Di Zhang",
"Lin Gao"
] | https://openaccess.thecvf.com/content/CVPR2025/html/Liu_SketchVideo_Sketch-based_Video_Generation_and_Editing_CVPR_2025_paper.html | https://openaccess.thecvf.com/content/CVPR2025/papers/Liu_SketchVideo_Sketch-based_Video_Generation_and_Editing_CVPR_2025_paper.pdf | https://openaccess.thecvf.com/content/CVPR2025/supplemental/Liu_SketchVideo_Sketch-based_Video_CVPR_2025_supplemental.zip | 2503.23284 | @InProceedings{Liu_2025_CVPR,
author = {Liu, Feng-Lin and Fu, Hongbo and Wang, Xintao and Ye, Weicai and Wan, Pengfei and Zhang, Di and Gao, Lin},
title = {SketchVideo: Sketch-based Video Generation and Editing},
booktitle = {Proceedings of the Computer Vision and Pattern Recognition Conference (CVPR... | Video generation and editing conditioned on text prompts or images have undergone significant advancements. However, challenges remain in accurately controlling global layout and geometry details solely by texts, and supporting motion control and local modification through images. In this paper, we aim to achieve sket... | [
0.013301363214850426,
-0.005071265157312155,
-0.0017218198627233505,
0.05271530896425247,
0.06067890301346779,
-0.00558865861967206,
0.029647082090377808,
0.040251798927783966,
-0.05318092554807663,
-0.08145612478256226,
-0.029196754097938538,
-0.04503354802727699,
-0.06287158280611038,
0.... |
139 | PhysicsGen: Can Generative Models Learn from Images to Predict Complex Physical Relations? | [
"Martin Spitznagel",
"Jan Vaillant",
"Janis Keuper"
] | https://openaccess.thecvf.com/content/CVPR2025/html/Spitznagel_PhysicsGen_Can_Generative_Models_Learn_from_Images_to_Predict_Complex_CVPR_2025_paper.html | https://openaccess.thecvf.com/content/CVPR2025/papers/Spitznagel_PhysicsGen_Can_Generative_Models_Learn_from_Images_to_Predict_Complex_CVPR_2025_paper.pdf | https://openaccess.thecvf.com/content/CVPR2025/supplemental/Spitznagel_PhysicsGen_Can_Generative_CVPR_2025_supplemental.pdf | 2503.05333 | @InProceedings{Spitznagel_2025_CVPR,
author = {Spitznagel, Martin and Vaillant, Jan and Keuper, Janis},
title = {PhysicsGen: Can Generative Models Learn from Images to Predict Complex Physical Relations?},
booktitle = {Proceedings of the Computer Vision and Pattern Recognition Conference (CVPR)},
... | The image-to-image translation abilities of generative learning models have recently made significant progress in the estimation of complex (steered) mappings between image distributions. While appearance based tasks like image in-painting or style transfer have been studied at length, we propose to investigate the pot... | [
0.004183425568044186,
-0.01217567827552557,
-0.009476114995777607,
0.06786274164915085,
0.038829829543828964,
0.022170184180140495,
0.005013798829168081,
0.023296620696783066,
-0.027958620339632034,
-0.030530324205756187,
-0.024000808596611023,
0.0012429679045453668,
-0.0469488687813282,
0... |
140 | Taste More, Taste Better: Diverse Data and Strong Model Boost Semi-Supervised Crowd Counting | [
"Maochen Yang",
"Zekun Li",
"Jian Zhang",
"Lei Qi",
"Yinghuan Shi"
] | https://openaccess.thecvf.com/content/CVPR2025/html/Yang_Taste_More_Taste_Better_Diverse_Data_and_Strong_Model_Boost_CVPR_2025_paper.html | https://openaccess.thecvf.com/content/CVPR2025/papers/Yang_Taste_More_Taste_Better_Diverse_Data_and_Strong_Model_Boost_CVPR_2025_paper.pdf | https://openaccess.thecvf.com/content/CVPR2025/supplemental/Yang_Taste_More_Taste_CVPR_2025_supplemental.pdf | 2503.17984 | @InProceedings{Yang_2025_CVPR,
author = {Yang, Maochen and Li, Zekun and Zhang, Jian and Qi, Lei and Shi, Yinghuan},
title = {Taste More, Taste Better: Diverse Data and Strong Model Boost Semi-Supervised Crowd Counting},
booktitle = {Proceedings of the Computer Vision and Pattern Recognition Conferen... | Semi-supervised crowd counting is crucial for addressing the high annotation costs of densely populated scenes. Although several methods based on pseudo-labeling have been proposed, it remains challenging to effectively and accurately utilize unlabeled data. In this paper, we propose a novel framework called Taste More... | [
0.024479757994413376,
-0.05872812867164612,
0.006257591303437948,
0.021185003221035004,
0.016513092443346977,
0.002016703598201275,
0.02597329393029213,
-0.007687475997954607,
-0.042083825916051865,
-0.03231915831565857,
-0.04306666553020477,
-0.007993732579052448,
-0.0709451213479042,
0.0... |
141 | Gaussian Splashing: Unified Particles for Versatile Motion Synthesis and Rendering | [
"Yutao Feng",
"Xiang Feng",
"Yintong Shang",
"Ying Jiang",
"Chang Yu",
"Zeshun Zong",
"Tianjia Shao",
"Hongzhi Wu",
"Kun Zhou",
"Chenfanfu Jiang",
"Yin Yang"
] | https://openaccess.thecvf.com/content/CVPR2025/html/Feng_Gaussian_Splashing_Unified_Particles_for_Versatile_Motion_Synthesis_and_Rendering_CVPR_2025_paper.html | https://openaccess.thecvf.com/content/CVPR2025/papers/Feng_Gaussian_Splashing_Unified_Particles_for_Versatile_Motion_Synthesis_and_Rendering_CVPR_2025_paper.pdf | https://openaccess.thecvf.com/content/CVPR2025/supplemental/Feng_Gaussian_Splashing_Unified_CVPR_2025_supplemental.zip | 2401.15318 | @InProceedings{Feng_2025_CVPR,
author = {Feng, Yutao and Feng, Xiang and Shang, Yintong and Jiang, Ying and Yu, Chang and Zong, Zeshun and Shao, Tianjia and Wu, Hongzhi and Zhou, Kun and Jiang, Chenfanfu and Yang, Yin},
title = {Gaussian Splashing: Unified Particles for Versatile Motion Synthesis and Ren... | We demonstrate the feasibility of integrating physics-based animations of solids and fluids with 3D Gaussian Splatting (3DGS) to create novel effects in virtual scenes reconstructed using 3DGS. Leveraging the coherence of the Gaussian Splatting and Position-Based Dynamics (PBD) in the underlying representation, we mana... | [
0.007722753565758467,
0.010946350172162056,
0.02180904895067215,
0.03924013301730156,
0.002274629194289446,
0.006071868352591991,
0.016685236245393753,
0.03102215938270092,
-0.05642026290297508,
-0.06570872664451599,
-0.02499457634985447,
-0.04315114766359329,
-0.030107852071523666,
0.0004... |
142 | Improve Representation for Imbalanced Regression through Geometric Constraints | [
"Zijian Dong",
"Yilei Wu",
"Chongyao Chen",
"Yingtian Zou",
"Yichi Zhang",
"Juan Helen Zhou"
] | https://openaccess.thecvf.com/content/CVPR2025/html/Dong_Improve_Representation_for_Imbalanced_Regression_through_Geometric_Constraints_CVPR_2025_paper.html | https://openaccess.thecvf.com/content/CVPR2025/papers/Dong_Improve_Representation_for_Imbalanced_Regression_through_Geometric_Constraints_CVPR_2025_paper.pdf | https://openaccess.thecvf.com/content/CVPR2025/supplemental/Dong_Improve_Representation_for_CVPR_2025_supplemental.pdf | 2503.00876 | @InProceedings{Dong_2025_CVPR,
author = {Dong, Zijian and Wu, Yilei and Chen, Chongyao and Zou, Yingtian and Zhang, Yichi and Zhou, Juan Helen},
title = {Improve Representation for Imbalanced Regression through Geometric Constraints},
booktitle = {Proceedings of the Computer Vision and Pattern Recogn... | In representation learning, uniformity refers to the uniform feature distribution in the latent space (i.e., unit hypersphere). Previous work has shown that improving uniformity contributes to the learning of under-represented classes. However, most of the previous work focused on classification; the representation spa... | [
0.013963809236884117,
-0.01731864921748638,
-0.028443170711398125,
0.028831884264945984,
0.02125599794089794,
0.03819144889712334,
0.02187458425760269,
-0.04211745038628578,
-0.035256218165159225,
-0.046684980392456055,
-0.01315211970359087,
-0.03433253616094589,
-0.09575250744819641,
0.02... |
143 | AnyDressing: Customizable Multi-Garment Virtual Dressing via Latent Diffusion Models | [
"Xinghui Li",
"Qichao Sun",
"Pengze Zhang",
"Fulong Ye",
"Zhichao Liao",
"Wanquan Feng",
"Songtao Zhao",
"Qian He"
] | https://openaccess.thecvf.com/content/CVPR2025/html/Li_AnyDressing_Customizable_Multi-Garment_Virtual_Dressing_via_Latent_Diffusion_Models_CVPR_2025_paper.html | https://openaccess.thecvf.com/content/CVPR2025/papers/Li_AnyDressing_Customizable_Multi-Garment_Virtual_Dressing_via_Latent_Diffusion_Models_CVPR_2025_paper.pdf | https://openaccess.thecvf.com/content/CVPR2025/supplemental/Li_AnyDressing_Customizable_Multi-Garment_CVPR_2025_supplemental.pdf | 2412.04146 | @InProceedings{Li_2025_CVPR,
author = {Li, Xinghui and Sun, Qichao and Zhang, Pengze and Ye, Fulong and Liao, Zhichao and Feng, Wanquan and Zhao, Songtao and He, Qian},
title = {AnyDressing: Customizable Multi-Garment Virtual Dressing via Latent Diffusion Models},
booktitle = {Proceedings of the Comp... | Recent advances in garment-centric image generation from text and image prompts based on diffusion models are impressive. However, existing methods lack support for various combinations of attire, and struggle to preserve the garment details while maintaining faithfulness to the text prompts, limiting their performance... | [
0.01804439350962639,
-0.044541485607624054,
-0.009248443879187107,
0.039649371057748795,
0.041105058044195175,
0.042838435620069504,
0.02993108332157135,
0.009660076349973679,
-0.004513499792665243,
-0.06639280170202255,
-0.05096040293574333,
-0.044931329786777496,
-0.032837554812431335,
0... |
144 | Spectral Informed Mamba for Robust Point Cloud Processing | [
"Ali Bahri",
"Moslem Yazdanpanah",
"Mehrdad Noori",
"Sahar Dastani",
"Milad Cheraghalikhani",
"Gustavo Adolfo Vargas Hakim",
"David Osowiechi",
"Farzad Beizaee",
"Ismail Ben Ayed",
"Christian Desrosiers"
] | https://openaccess.thecvf.com/content/CVPR2025/html/Bahri_Spectral_Informed_Mamba_for_Robust_Point_Cloud_Processing_CVPR_2025_paper.html | https://openaccess.thecvf.com/content/CVPR2025/papers/Bahri_Spectral_Informed_Mamba_for_Robust_Point_Cloud_Processing_CVPR_2025_paper.pdf | https://openaccess.thecvf.com/content/CVPR2025/supplemental/Bahri_Spectral_Informed_Mamba_CVPR_2025_supplemental.pdf | 2503.04953 | @InProceedings{Bahri_2025_CVPR,
author = {Bahri, Ali and Yazdanpanah, Moslem and Noori, Mehrdad and Dastani, Sahar and Cheraghalikhani, Milad and Hakim, Gustavo Adolfo Vargas and Osowiechi, David and Beizaee, Farzad and Ben Ayed, Ismail and Desrosiers, Christian},
title = {Spectral Informed Mamba for Rob... | State Space Models (SSMs) have shown significant promise in Natural Language Processing (NLP) and, more recently, computer vision. This paper introduces a new methodology leveraging Mamba and Masked Autoencoder (MAE) networks for point cloud data in both supervised and self-supervised learning. We propose three key con... | [
-0.0005222160834819078,
-0.012300297617912292,
-0.004976845346391201,
0.0517241433262825,
0.03744987025856972,
0.07460050284862518,
0.04617949202656746,
-0.016683850437402725,
-0.049277905374765396,
-0.04659435153007507,
-0.023087598383426666,
-0.014335605315864086,
-0.08370083570480347,
0... |
145 | Latent Space Imaging | [
"Matheus Souza",
"Yidan Zheng",
"Kaizhang Kang",
"Yogeshwar Nath Mishra",
"Qiang Fu",
"Wolfgang Heidrich"
] | https://openaccess.thecvf.com/content/CVPR2025/html/Souza_Latent_Space_Imaging_CVPR_2025_paper.html | https://openaccess.thecvf.com/content/CVPR2025/papers/Souza_Latent_Space_Imaging_CVPR_2025_paper.pdf | https://openaccess.thecvf.com/content/CVPR2025/supplemental/Souza_Latent_Space_Imaging_CVPR_2025_supplemental.pdf | 2407.07052 | @InProceedings{Souza_2025_CVPR,
author = {Souza, Matheus and Zheng, Yidan and Kang, Kaizhang and Mishra, Yogeshwar Nath and Fu, Qiang and Heidrich, Wolfgang},
title = {Latent Space Imaging},
booktitle = {Proceedings of the Computer Vision and Pattern Recognition Conference (CVPR)},
month = {J... | Digital imaging systems have traditionally relied on brute-force measurement and processing of pixels arranged on regular grids. In contrast, the human visual system performs significant data reduction from the large number of photoreceptors to the optic nerve, effectively encoding visual information into a low-bandwid... | [
0.0038142940029501915,
0.016298405826091766,
-0.02128109149634838,
0.03093763254582882,
0.059647172689437866,
0.02295081689953804,
0.010878506116569042,
0.03847912326455116,
-0.03914378210902214,
-0.055761419236660004,
-0.01916000060737133,
-0.03774305805563927,
-0.04133448749780655,
-0.00... |
146 | Balanced Direction from Multifarious Choices: Arithmetic Meta-Learning for Domain Generalization | [
"Xiran Wang",
"Jian Zhang",
"Lei Qi",
"Yinghuan Shi"
] | https://openaccess.thecvf.com/content/CVPR2025/html/Wang_Balanced_Direction_from_Multifarious_Choices_Arithmetic_Meta-Learning_for_Domain_Generalization_CVPR_2025_paper.html | https://openaccess.thecvf.com/content/CVPR2025/papers/Wang_Balanced_Direction_from_Multifarious_Choices_Arithmetic_Meta-Learning_for_Domain_Generalization_CVPR_2025_paper.pdf | https://openaccess.thecvf.com/content/CVPR2025/supplemental/Wang_Balanced_Direction_from_CVPR_2025_supplemental.pdf | 2503.18987 | @InProceedings{Wang_2025_CVPR,
author = {Wang, Xiran and Zhang, Jian and Qi, Lei and Shi, Yinghuan},
title = {Balanced Direction from Multifarious Choices: Arithmetic Meta-Learning for Domain Generalization},
booktitle = {Proceedings of the Computer Vision and Pattern Recognition Conference (CVPR)},
... | Domain generalization is proposed to address distribution shift, arising from statistical disparities between training source and unseen target domains. The widely used first-order meta-learning algorithms demonstrate strong performance for domain generalization by leveraging the gradient matching theory, which aims to... | [
0.009002748876810074,
0.01133483462035656,
0.005906850099563599,
0.026488108560442924,
0.039293501526117325,
0.039253752678632736,
0.023875808343291283,
-0.0005094468942843378,
-0.015470972284674644,
-0.04220085218548775,
0.0036525714676827192,
0.015498864464461803,
-0.08565524965524673,
-... |
147 | Anatomical Consistency and Adaptive Prior-informed Transformation for Multi-contrast MR Image Synthesis via Diffusion Model | [
"Yejee Shin",
"Yeeun Lee",
"Hanbyol Jang",
"Geonhui Son",
"Hyeongyu Kim",
"Dosik Hwang"
] | https://openaccess.thecvf.com/content/CVPR2025/html/Shin_Anatomical_Consistency_and_Adaptive_Prior-informed_Transformation_for_Multi-contrast_MR_Image_CVPR_2025_paper.html | https://openaccess.thecvf.com/content/CVPR2025/papers/Shin_Anatomical_Consistency_and_Adaptive_Prior-informed_Transformation_for_Multi-contrast_MR_Image_CVPR_2025_paper.pdf | https://openaccess.thecvf.com/content/CVPR2025/supplemental/Shin_Anatomical_Consistency_and_CVPR_2025_supplemental.pdf | null | @InProceedings{Shin_2025_CVPR,
author = {Shin, Yejee and Lee, Yeeun and Jang, Hanbyol and Son, Geonhui and Kim, Hyeongyu and Hwang, Dosik},
title = {Anatomical Consistency and Adaptive Prior-informed Transformation for Multi-contrast MR Image Synthesis via Diffusion Model},
booktitle = {Proceedings o... | Multi-contrast magnetic resonance (MR) images offer critical diagnostic information but are limited by long scan times and high cost. While diffusion models (DMs) excel in medical image synthesis, they often struggle to maintain anatomical consistency and utilize the diverse characteristics of multi-contrast MR images ... | [
-0.020367275923490524,
-0.004146613646298647,
-0.01926400139927864,
0.057927362620830536,
0.05331496149301529,
0.05237487331032753,
0.02905433624982834,
0.00422735046595335,
-0.0295921228826046,
-0.08809530735015869,
-0.004578252322971821,
-0.0035599987022578716,
-0.028994113206863403,
0.0... |
148 | BlobGEN-Vid: Compositional Text-to-Video Generation with Blob Video Representations | [
"Weixi Feng",
"Chao Liu",
"Sifei Liu",
"William Yang Wang",
"Arash Vahdat",
"Weili Nie"
] | https://openaccess.thecvf.com/content/CVPR2025/html/Feng_BlobGEN-Vid_Compositional_Text-to-Video_Generation_with_Blob_Video_Representations_CVPR_2025_paper.html | https://openaccess.thecvf.com/content/CVPR2025/papers/Feng_BlobGEN-Vid_Compositional_Text-to-Video_Generation_with_Blob_Video_Representations_CVPR_2025_paper.pdf | https://openaccess.thecvf.com/content/CVPR2025/supplemental/Feng_BlobGEN-Vid_Compositional_Text-to-Video_CVPR_2025_supplemental.pdf | null | @InProceedings{Feng_2025_CVPR,
author = {Feng, Weixi and Liu, Chao and Liu, Sifei and Wang, William Yang and Vahdat, Arash and Nie, Weili},
title = {BlobGEN-Vid: Compositional Text-to-Video Generation with Blob Video Representations},
booktitle = {Proceedings of the Computer Vision and Pattern Recogn... | Existing video generation models struggle to follow complex text prompts and synthesize multiple objects, raising the need for additional grounding input for improved controllability. In this work, we propose to decompose videos into visual primitives -- blob video representation, a general representation for controlla... | [
0.03116268664598465,
-0.011537645012140274,
0.0017480303067713976,
0.06577198207378387,
0.043885793536901474,
0.013058082200586796,
0.0004076907935086638,
0.016096988692879677,
-0.027523528784513474,
-0.040695007890462875,
-0.016802143305540085,
-0.015101420693099499,
-0.048231132328510284,
... |
149 | D2SP: Dynamic Dual-Stage Purification Framework for Dual Noise Mitigation in Vision-based Affective Recognition. | [
"Haoran Wang",
"Xinji Mai",
"Zeng Tao",
"Xuan Tong",
"Junxiong Lin",
"Yan Wang",
"Jiawen Yu",
"Shaoqi Yan",
"Ziheng Zhou",
"Wenqiang Zhang"
] | https://openaccess.thecvf.com/content/CVPR2025/html/Wang_D2SP_Dynamic_Dual-Stage_Purification_Framework_for_Dual_Noise_Mitigation_in_CVPR_2025_paper.html | https://openaccess.thecvf.com/content/CVPR2025/papers/Wang_D2SP_Dynamic_Dual-Stage_Purification_Framework_for_Dual_Noise_Mitigation_in_CVPR_2025_paper.pdf | https://openaccess.thecvf.com/content/CVPR2025/supplemental/Wang_D2SP_Dynamic_Dual-Stage_CVPR_2025_supplemental.pdf | 2406.16473 | @InProceedings{Wang_2025_CVPR,
author = {Wang, Haoran and Mai, Xinji and Tao, Zeng and Tong, Xuan and Lin, Junxiong and Wang, Yan and Yu, Jiawen and Yan, Shaoqi and Zhou, Ziheng and Zhang, Wenqiang},
title = {D2SP: Dynamic Dual-Stage Purification Framework for Dual Noise Mitigation in Vision-based Affect... | The current advancements in Dynamic Facial Expression Recognition (DFER) methods mainly focus on better capturing the spatial and temporal features of facial expressions. However, DFER datasets contain a substantial amount of noisy samples, and few have addressed the issue of handling this noise. We identified two type... | [
-0.0032320977188646793,
-0.005747729446738958,
0.0061467718333005905,
0.026753541082143784,
0.03985151648521423,
0.05583260953426361,
0.011796807870268822,
-0.03341719135642052,
-0.02206539735198021,
-0.03983665630221367,
-0.024590926244854927,
-0.010738302022218704,
-0.048406124114990234,
... |
150 | PartRM: Modeling Part-Level Dynamics with Large Cross-State Reconstruction Model | [
"Mingju Gao",
"Yike Pan",
"Huan-ang Gao",
"Zongzheng Zhang",
"Wenyi Li",
"Hao Dong",
"Hao Tang",
"Li Yi",
"Hao Zhao"
] | https://openaccess.thecvf.com/content/CVPR2025/html/Gao_PartRM_Modeling_Part-Level_Dynamics_with_Large_Cross-State_Reconstruction_Model_CVPR_2025_paper.html | https://openaccess.thecvf.com/content/CVPR2025/papers/Gao_PartRM_Modeling_Part-Level_Dynamics_with_Large_Cross-State_Reconstruction_Model_CVPR_2025_paper.pdf | https://openaccess.thecvf.com/content/CVPR2025/supplemental/Gao_PartRM_Modeling_Part-Level_CVPR_2025_supplemental.zip | 2503.19913 | @InProceedings{Gao_2025_CVPR,
author = {Gao, Mingju and Pan, Yike and Gao, Huan-ang and Zhang, Zongzheng and Li, Wenyi and Dong, Hao and Tang, Hao and Yi, Li and Zhao, Hao},
title = {PartRM: Modeling Part-Level Dynamics with Large Cross-State Reconstruction Model},
booktitle = {Proceedings of the Com... | As interest grows in world models that predict future states from current observations and actions, accurately modeling part-level dynamics has become increasingly relevant for various applications. Existing approaches, such as Puppet-Master, rely on fine-tuning large-scale pre-trained video diffusion models, which are... | [
-0.026477672159671783,
-0.01958519034087658,
-0.007229994982481003,
0.02834271639585495,
0.03052333928644657,
0.019395984709262848,
0.01238156482577324,
0.004905147012323141,
-0.05385614186525345,
-0.048065390437841415,
-0.005741318687796593,
-0.050468333065509796,
-0.04109016805887222,
0.... |
151 | LaVin-DiT: Large Vision Diffusion Transformer | [
"Zhaoqing Wang",
"Xiaobo Xia",
"Runnan Chen",
"Dongdong Yu",
"Changhu Wang",
"Mingming Gong",
"Tongliang Liu"
] | https://openaccess.thecvf.com/content/CVPR2025/html/Wang_LaVin-DiT_Large_Vision_Diffusion_Transformer_CVPR_2025_paper.html | https://openaccess.thecvf.com/content/CVPR2025/papers/Wang_LaVin-DiT_Large_Vision_Diffusion_Transformer_CVPR_2025_paper.pdf | https://openaccess.thecvf.com/content/CVPR2025/supplemental/Wang_LaVin-DiT_Large_Vision_CVPR_2025_supplemental.pdf | null | @InProceedings{Wang_2025_CVPR,
author = {Wang, Zhaoqing and Xia, Xiaobo and Chen, Runnan and Yu, Dongdong and Wang, Changhu and Gong, Mingming and Liu, Tongliang},
title = {LaVin-DiT: Large Vision Diffusion Transformer},
booktitle = {Proceedings of the Computer Vision and Pattern Recognition Conferen... | This paper presents the Large Vision Diffusion Transformer (LaVin-DiT), a scalable and unified foundation model designed to tackle over 20 computer vision tasks in a generative framework. Unlike existing large vision models directly adapted from natural language processing architectures, which rely on less efficient au... | [
0.022968651726841927,
-0.009876257739961147,
-0.014333846978843212,
0.03824985399842262,
0.03376731276512146,
0.03533630073070526,
0.0030925506725907326,
0.014466793276369572,
-0.005418173503130674,
-0.045372750610113144,
-0.00122564856428653,
-0.01114659570157528,
-0.05824228748679161,
0.... |
152 | DiffFNO: Diffusion Fourier Neural Operator | [
"Xiaoyi Liu",
"Hao Tang"
] | https://openaccess.thecvf.com/content/CVPR2025/html/Liu_DiffFNO_Diffusion_Fourier_Neural_Operator_CVPR_2025_paper.html | https://openaccess.thecvf.com/content/CVPR2025/papers/Liu_DiffFNO_Diffusion_Fourier_Neural_Operator_CVPR_2025_paper.pdf | https://openaccess.thecvf.com/content/CVPR2025/supplemental/Liu_DiffFNO_Diffusion_Fourier_CVPR_2025_supplemental.pdf | 2411.09911 | @InProceedings{Liu_2025_CVPR,
author = {Liu, Xiaoyi and Tang, Hao},
title = {DiffFNO: Diffusion Fourier Neural Operator},
booktitle = {Proceedings of the Computer Vision and Pattern Recognition Conference (CVPR)},
month = {June},
year = {2025},
pages = {150-160}
} | We introduce DiffFNO, a novel diffusion framework for arbitrary-scale super-resolution strengthened by a Weighted Fourier Neural Operator (WFNO). Mode Rebalancing in WFNO effectively captures critical frequency components, significantly improving the reconstruction of high-frequency image details that are crucial for s... | [
-0.011233726516366005,
-0.018553409725427628,
0.023108743131160736,
0.031147122383117676,
0.05013905465602875,
0.029773559421300888,
-0.0035294622648507357,
-0.005638769827783108,
-0.028930354863405228,
-0.06057726591825485,
0.02918132022023201,
-0.008936012163758278,
-0.0406746082007885,
... |
153 | CAP-Net: A Unified Network for 6D Pose and Size Estimation of Categorical Articulated Parts from a Single RGB-D Image | [
"Jingshun Huang",
"Haitao Lin",
"Tianyu Wang",
"Yanwei Fu",
"Xiangyang Xue",
"Yi Zhu"
] | https://openaccess.thecvf.com/content/CVPR2025/html/Huang_CAP-Net_A_Unified_Network_for_6D_Pose_and_Size_Estimation_CVPR_2025_paper.html | https://openaccess.thecvf.com/content/CVPR2025/papers/Huang_CAP-Net_A_Unified_Network_for_6D_Pose_and_Size_Estimation_CVPR_2025_paper.pdf | https://openaccess.thecvf.com/content/CVPR2025/supplemental/Huang_CAP-Net_A_Unified_CVPR_2025_supplemental.pdf | null | @InProceedings{Huang_2025_CVPR,
author = {Huang, Jingshun and Lin, Haitao and Wang, Tianyu and Fu, Yanwei and Xue, Xiangyang and Zhu, Yi},
title = {CAP-Net: A Unified Network for 6D Pose and Size Estimation of Categorical Articulated Parts from a Single RGB-D Image},
booktitle = {Proceedings of the C... | This paper tackles category-level pose estimation of articulated objects in robotic manipulation tasks and introduces a new benchmark dataset. While recent methods estimate part poses and sizes at the category level, they often rely on geometric cues and complex multi-stage pipelines that first segment parts from... | [
0.02717369608581066,
-0.021039219573140144,
-0.02242976427078247,
0.02360071986913681,
0.015528044663369656,
0.0496014840900898,
-0.004251343198120594,
-0.0017150648636743426,
-0.06246341019868851,
-0.03752656653523445,
-0.03820470720529556,
-0.04453118517994881,
-0.07605181634426117,
-0.0... |
154 | SeCap: Self-Calibrating and Adaptive Prompts for Cross-view Person Re-Identification in Aerial-Ground Networks | [
"Shining Wang",
"Yunlong Wang",
"Ruiqi Wu",
"Bingliang Jiao",
"Wenxuan Wang",
"Peng Wang"
] | https://openaccess.thecvf.com/content/CVPR2025/html/Wang_SeCap_Self-Calibrating_and_Adaptive_Prompts_for_Cross-view_Person_Re-Identification_in_CVPR_2025_paper.html | https://openaccess.thecvf.com/content/CVPR2025/papers/Wang_SeCap_Self-Calibrating_and_Adaptive_Prompts_for_Cross-view_Person_Re-Identification_in_CVPR_2025_paper.pdf | https://openaccess.thecvf.com/content/CVPR2025/supplemental/Wang_SeCap_Self-Calibrating_and_CVPR_2025_supplemental.pdf | 2503.06965 | @InProceedings{Wang_2025_CVPR,
author = {Wang, Shining and Wang, Yunlong and Wu, Ruiqi and Jiao, Bingliang and Wang, Wenxuan and Wang, Peng},
title = {SeCap: Self-Calibrating and Adaptive Prompts for Cross-view Person Re-Identification in Aerial-Ground Networks},
booktitle = {Proceedings of the Compu... | When discussing the Aerial-Ground Person Re-identification (AGPReID) task, we face the main challenge of the significant appearance variations caused by different viewpoints, making identity matching difficult. To address this issue, previous methods attempt to reduce the differences between viewpoints by critical attr... | [
0.01975608803331852,
-0.044252850115299225,
0.01756940223276615,
0.0371965691447258,
0.04546085745096207,
0.01904226839542389,
0.025496136397123337,
-0.022764939814805984,
-0.034674227237701416,
-0.04160318523645401,
-0.04656502977013588,
-0.029559152200818062,
-0.10459866374731064,
-0.038... |
155 | Zero-Shot Styled Text Image Generation, but Make It Autoregressive | [
"Vittorio Pippi",
"Fabio Quattrini",
"Silvia Cascianelli",
"Alessio Tonioni",
"Rita Cucchiara"
] | https://openaccess.thecvf.com/content/CVPR2025/html/Pippi_Zero-Shot_Styled_Text_Image_Generation_but_Make_It_Autoregressive_CVPR_2025_paper.html | https://openaccess.thecvf.com/content/CVPR2025/papers/Pippi_Zero-Shot_Styled_Text_Image_Generation_but_Make_It_Autoregressive_CVPR_2025_paper.pdf | https://openaccess.thecvf.com/content/CVPR2025/supplemental/Pippi_Zero-Shot_Styled_Text_CVPR_2025_supplemental.pdf | 2503.17074 | @InProceedings{Pippi_2025_CVPR,
author = {Pippi, Vittorio and Quattrini, Fabio and Cascianelli, Silvia and Tonioni, Alessio and Cucchiara, Rita},
title = {Zero-Shot Styled Text Image Generation, but Make It Autoregressive},
booktitle = {Proceedings of the Computer Vision and Pattern Recognition Confe... | Styled Handwritten Text Generation (HTG) has recently received attention from the computer vision and document analysis communities, which have developed several solutions, either GAN- or diffusion-based, that achieved promising results. Nonetheless, these strategies fail to generalize to novel styles and have technica... | [
0.03569931164383888,
-0.021758802235126495,
-0.0017874755430966616,
0.06366723775863647,
0.03560400754213333,
0.049067314714193344,
0.02887127548456192,
0.031205888837575912,
-0.02200564369559288,
-0.0773308128118515,
-0.017300128936767578,
-0.02606132999062538,
-0.06182045489549637,
0.012... |
156 | Don't Shake the Wheel: Momentum-Aware Planning in End-to-End Autonomous Driving | [
"Ziying Song",
"Caiyan Jia",
"Lin Liu",
"Hongyu Pan",
"Yongchang Zhang",
"Junming Wang",
"Xingyu Zhang",
"Shaoqing Xu",
"Lei Yang",
"Yadan Luo"
] | https://openaccess.thecvf.com/content/CVPR2025/html/Song_Dont_Shake_the_Wheel_Momentum-Aware_Planning_in_End-to-End_Autonomous_Driving_CVPR_2025_paper.html | https://openaccess.thecvf.com/content/CVPR2025/papers/Song_Dont_Shake_the_Wheel_Momentum-Aware_Planning_in_End-to-End_Autonomous_Driving_CVPR_2025_paper.pdf | https://openaccess.thecvf.com/content/CVPR2025/supplemental/Song_Dont_Shake_the_CVPR_2025_supplemental.pdf | null | @InProceedings{Song_2025_CVPR,
author = {Song, Ziying and Jia, Caiyan and Liu, Lin and Pan, Hongyu and Zhang, Yongchang and Wang, Junming and Zhang, Xingyu and Xu, Shaoqing and Yang, Lei and Luo, Yadan},
title = {Don't Shake the Wheel: Momentum-Aware Planning in End-to-End Autonomous Driving},
bookti... | End-to-end autonomous driving frameworks enable seamless integration of perception and planning but often rely on one-shot trajectory prediction, which may lead to unstable control and vulnerability to occlusions in single-frame perception. To address this, we propose the Momentum-Aware Driving (MomAD) framework, which... | [
-0.0323684886097908,
-0.0170906949788332,
-0.0039382693357765675,
0.04885491356253624,
0.021569011732935905,
0.03402305021882057,
0.017221633344888687,
0.027747439220547676,
-0.03091256320476532,
-0.05485023558139801,
-0.02658073417842388,
-0.01366037130355835,
-0.04421313852071762,
-0.033... |
157 | Leveraging Perturbation Robustness to Enhance Out-of-Distribution Detection | [
"Wenxi Chen",
"Raymond A. Yeh",
"Shaoshuai Mou",
"Yan Gu"
] | https://openaccess.thecvf.com/content/CVPR2025/html/Chen_Leveraging_Perturbation_Robustness_to_Enhance_Out-of-Distribution_Detection_CVPR_2025_paper.html | https://openaccess.thecvf.com/content/CVPR2025/papers/Chen_Leveraging_Perturbation_Robustness_to_Enhance_Out-of-Distribution_Detection_CVPR_2025_paper.pdf | https://openaccess.thecvf.com/content/CVPR2025/supplemental/Chen_Leveraging_Perturbation_Robustness_CVPR_2025_supplemental.pdf | 2503.18784 | @InProceedings{Chen_2025_CVPR,
author = {Chen, Wenxi and Yeh, Raymond A. and Mou, Shaoshuai and Gu, Yan},
title = {Leveraging Perturbation Robustness to Enhance Out-of-Distribution Detection},
booktitle = {Proceedings of the Computer Vision and Pattern Recognition Conference (CVPR)},
month = ... | Out-of-distribution (OOD) detection is the task of identifying inputs that deviate from the training data distribution. This capability is essential for the safe deployment of deep computer vision models in open-world environments. In this work, we propose a post-hoc method, Perturbation-Rectified OOD detection (PRO), ... | [
0.01116775069385767,
-0.009239505976438522,
0.017979800701141357,
0.01825801655650139,
0.03379666432738304,
-0.006332657765597105,
0.01912154071033001,
0.005492401774972677,
-0.029247906059026718,
-0.03940512239933014,
-0.0017902425024658442,
-0.023764654994010925,
-0.09138521552085876,
-0... |
158 | Neural Motion Simulator Pushing the Limit of World Models in Reinforcement Learning | [
"Chenjie Hao",
"Weyl Lu",
"Yifan Xu",
"Yubei Chen"
] | https://openaccess.thecvf.com/content/CVPR2025/html/Hao_Neural_Motion_Simulator_Pushing_the_Limit_of_World_Models_in_CVPR_2025_paper.html | https://openaccess.thecvf.com/content/CVPR2025/papers/Hao_Neural_Motion_Simulator_Pushing_the_Limit_of_World_Models_in_CVPR_2025_paper.pdf | https://openaccess.thecvf.com/content/CVPR2025/supplemental/Hao_Neural_Motion_Simulator_CVPR_2025_supplemental.pdf | 2504.07095 | @InProceedings{Hao_2025_CVPR,
author = {Hao, Chenjie and Lu, Weyl and Xu, Yifan and Chen, Yubei},
title = {Neural Motion Simulator Pushing the Limit of World Models in Reinforcement Learning},
booktitle = {Proceedings of the Computer Vision and Pattern Recognition Conference (CVPR)},
month = ... | An embodied system must not only model the patterns of the external world but also understand its own motion dynamics. A motion dynamic model is essential for efficient skill acquisition and effective planning. In this work, we introduce the neural motion simulator (MoSim), a world model that predicts the future physic... | [
-0.058760933578014374,
0.01430750172585249,
-0.014553246088325977,
0.02564956806600094,
0.034055445343256,
0.028553079813718796,
0.010181037709116936,
0.016274109482765198,
-0.062074195593595505,
-0.029640594497323036,
-0.0006935435230843723,
-0.00792477373033762,
-0.04824499413371086,
-0.... |
159 | Aesthetic Post-Training Diffusion Models from Generic Preferences with Step-by-step Preference Optimization | [
"Zhanhao Liang",
"Yuhui Yuan",
"Shuyang Gu",
"Bohan Chen",
"Tiankai Hang",
"Mingxi Cheng",
"Ji Li",
"Liang Zheng"
] | https://openaccess.thecvf.com/content/CVPR2025/html/Liang_Aesthetic_Post-Training_Diffusion_Models_from_Generic_Preferences_with_Step-by-step_Preference_CVPR_2025_paper.html | https://openaccess.thecvf.com/content/CVPR2025/papers/Liang_Aesthetic_Post-Training_Diffusion_Models_from_Generic_Preferences_with_Step-by-step_Preference_CVPR_2025_paper.pdf | https://openaccess.thecvf.com/content/CVPR2025/supplemental/Liang_Aesthetic_Post-Training_Diffusion_CVPR_2025_supplemental.pdf | 2406.04314 | @InProceedings{Liang_2025_CVPR,
author = {Liang, Zhanhao and Yuan, Yuhui and Gu, Shuyang and Chen, Bohan and Hang, Tiankai and Cheng, Mingxi and Li, Ji and Zheng, Liang},
title = {Aesthetic Post-Training Diffusion Models from Generic Preferences with Step-by-step Preference Optimization},
booktitle =... | Generating visually appealing images is fundamental to modern text-to-image generation models. A potential solution to better aesthetics is direct preference optimization (DPO), which has been applied to diffusion models to improve general image quality including prompt alignment and aesthetics. Popular DPO methods pro... | [
0.0029643154703080654,
-0.006356205325573683,
0.02338639833033085,
0.06546983867883682,
0.02953483909368515,
0.04979602247476578,
0.005121554248034954,
0.006739179138094187,
-0.006669966969639063,
-0.06816652417182922,
-0.019448013976216316,
-0.004977031145244837,
-0.06137953698635101,
-0.... |
160 | Adversarial Diffusion Compression for Real-World Image Super-Resolution | [
"Bin Chen",
"Gehui Li",
"Rongyuan Wu",
"Xindong Zhang",
"Jie Chen",
"Jian Zhang",
"Lei Zhang"
] | https://openaccess.thecvf.com/content/CVPR2025/html/Chen_Adversarial_Diffusion_Compression_for_Real-World_Image_Super-Resolution_CVPR_2025_paper.html | https://openaccess.thecvf.com/content/CVPR2025/papers/Chen_Adversarial_Diffusion_Compression_for_Real-World_Image_Super-Resolution_CVPR_2025_paper.pdf | https://openaccess.thecvf.com/content/CVPR2025/supplemental/Chen_Adversarial_Diffusion_Compression_CVPR_2025_supplemental.pdf | 2411.13383 | @InProceedings{Chen_2025_CVPR,
author = {Chen, Bin and Li, Gehui and Wu, Rongyuan and Zhang, Xindong and Chen, Jie and Zhang, Jian and Zhang, Lei},
title = {Adversarial Diffusion Compression for Real-World Image Super-Resolution},
booktitle = {Proceedings of the Computer Vision and Pattern Recognitio... | Real-world image super-resolution (Real-ISR) aims to reconstruct high-resolution images from low-resolution inputs degraded by complex, unknown processes. While many Stable Diffusion (SD)-based Real-ISR methods have achieved remarkable success, their slow, multi-step inference hinders practical deployment. Recent SD-ba... | [
-0.002896352903917432,
-0.02650323510169983,
-0.006526479963213205,
0.04710398241877556,
0.052921928465366364,
0.04288497194647789,
0.012262555770576,
-0.015520703047513962,
-0.005926673766225576,
-0.07896211743354797,
0.0012063690228387713,
-0.04029739275574684,
-0.023862669244408607,
0.0... |
161 | DiSciPLE: Learning Interpretable Programs for Scientific Visual Discovery | [
"Utkarsh Mall",
"Cheng Perng Phoo",
"Mia Chiquier",
"Bharath Hariharan",
"Kavita Bala",
"Carl Vondrick"
] | https://openaccess.thecvf.com/content/CVPR2025/html/Mall_DiSciPLE_Learning_Interpretable_Programs_for_Scientific_Visual_Discovery_CVPR_2025_paper.html | https://openaccess.thecvf.com/content/CVPR2025/papers/Mall_DiSciPLE_Learning_Interpretable_Programs_for_Scientific_Visual_Discovery_CVPR_2025_paper.pdf | https://openaccess.thecvf.com/content/CVPR2025/supplemental/Mall_DiSciPLE_Learning_Interpretable_CVPR_2025_supplemental.pdf | 2502.10060 | @InProceedings{Mall_2025_CVPR,
author = {Mall, Utkarsh and Phoo, Cheng Perng and Chiquier, Mia and Hariharan, Bharath and Bala, Kavita and Vondrick, Carl},
title = {DiSciPLE: Learning Interpretable Programs for Scientific Visual Discovery},
booktitle = {Proceedings of the Computer Vision and Pattern ... | Visual data is used in numerous different scientific workflows ranging from remote sensing to ecology. As the amount of observation data increases, the challenge is not just to make accurate predictions but also to understand the underlying mechanisms for those predictions. Good interpretation is important in scientifi... | [
0.030403591692447662,
-0.038499604910612106,
-0.011948845349252224,
0.03435023874044418,
0.050566256046295166,
0.03391909599304199,
0.017882680520415306,
-0.006403404287993908,
-0.025045474991202354,
-0.04538775980472565,
-0.017240598797798157,
0.030085990205407143,
-0.07792983204126358,
0... |
162 | SOLAMI: Social Vision-Language-Action Modeling for Immersive Interaction with 3D Autonomous Characters | [
"Jianping Jiang",
"Weiye Xiao",
"Zhengyu Lin",
"Huaizhong Zhang",
"Tianxiang Ren",
"Yang Gao",
"Zhiqian Lin",
"Zhongang Cai",
"Lei Yang",
"Ziwei Liu"
] | https://openaccess.thecvf.com/content/CVPR2025/html/Jiang_SOLAMI_Social_Vision-Language-Action_Modeling_for_Immersive_Interaction_with_3D_Autonomous_CVPR_2025_paper.html | https://openaccess.thecvf.com/content/CVPR2025/papers/Jiang_SOLAMI_Social_Vision-Language-Action_Modeling_for_Immersive_Interaction_with_3D_Autonomous_CVPR_2025_paper.pdf | https://openaccess.thecvf.com/content/CVPR2025/supplemental/Jiang_SOLAMI_Social_Vision-Language-Action_CVPR_2025_supplemental.pdf | 2412.00174 | @InProceedings{Jiang_2025_CVPR,
author = {Jiang, Jianping and Xiao, Weiye and Lin, Zhengyu and Zhang, Huaizhong and Ren, Tianxiang and Gao, Yang and Lin, Zhiqian and Cai, Zhongang and Yang, Lei and Liu, Ziwei},
title = {SOLAMI: Social Vision-Language-Action Modeling for Immersive Interaction with 3D Auto... | Human beings are social animals. How to equip 3D autonomous characters with similar social intelligence that can perceive, understand and interact with humans remains an open yet fundamental problem. In this paper, we introduce SOLAMI, the first end-to-end Social vision-Language-Action (VLA) Modeling framework for Imm... | [
-0.013206739909946918,
0.0069719054736196995,
-0.00905943475663662,
0.008784573525190353,
0.010581821203231812,
0.027452627196907997,
0.0657028779387474,
0.030189601704478264,
-0.01217926200479269,
-0.03933242708444595,
-0.054862797260284424,
0.011961910873651505,
-0.0633057951927185,
-0.0... |
163 | EntropyMark: Towards More Harmless Backdoor Watermark via Entropy-based Constraint for Open-source Dataset Copyright Protection | [
"Ming Sun",
"Rui Wang",
"Zixuan Zhu",
"Lihua Jing",
"Yuanfang Guo"
] | https://openaccess.thecvf.com/content/CVPR2025/html/Sun_EntropyMark_Towards_More_Harmless_Backdoor_Watermark_via_Entropy-based_Constraint_for_CVPR_2025_paper.html | https://openaccess.thecvf.com/content/CVPR2025/papers/Sun_EntropyMark_Towards_More_Harmless_Backdoor_Watermark_via_Entropy-based_Constraint_for_CVPR_2025_paper.pdf | https://openaccess.thecvf.com/content/CVPR2025/supplemental/Sun_EntropyMark_Towards_More_CVPR_2025_supplemental.pdf | null | @InProceedings{Sun_2025_CVPR,
author = {Sun, Ming and Wang, Rui and Zhu, Zixuan and Jing, Lihua and Guo, Yuanfang},
title = {EntropyMark: Towards More Harmless Backdoor Watermark via Entropy-based Constraint for Open-source Dataset Copyright Protection},
booktitle = {Proceedings of the Computer Visio... | High-quality open-source datasets are essential for advancing deep neural networks. However, the unauthorized commercial use of these datasets has raised significant concerns about copyright protection. One promising approach is backdoor watermark-based dataset ownership verification (BW-DOV), in which dataset protecto... | [
-0.012454910203814507,
-0.011353578418493271,
-0.039131827652454376,
0.06785919517278671,
0.0538543239235878,
0.017973609268665314,
0.037295520305633545,
-0.04100920632481575,
-0.018070021644234657,
-0.042104702442884445,
-0.024549035355448723,
-0.016365984454751015,
-0.029651237651705742,
... |
164 | Adaptive Markup Language Generation for Contextually-Grounded Visual Document Understanding | [
"Han Xiao",
"Yina Xie",
"Guanxin Tan",
"Yinghao Chen",
"Rui Hu",
"Ke Wang",
"Aojun Zhou",
"Hao Li",
"Hao Shao",
"Xudong Lu",
"Peng Gao",
"Yafei Wen",
"Xiaoxin Chen",
"Shuai Ren",
"Hongsheng Li"
] | https://openaccess.thecvf.com/content/CVPR2025/html/Xiao_Adaptive_Markup_Language_Generation_for_Contextually-Grounded_Visual_Document_Understanding_CVPR_2025_paper.html | https://openaccess.thecvf.com/content/CVPR2025/papers/Xiao_Adaptive_Markup_Language_Generation_for_Contextually-Grounded_Visual_Document_Understanding_CVPR_2025_paper.pdf | https://openaccess.thecvf.com/content/CVPR2025/supplemental/Xiao_Adaptive_Markup_Language_CVPR_2025_supplemental.pdf | 2505.05446 | @InProceedings{Xiao_2025_CVPR,
author = {Xiao, Han and Xie, Yina and Tan, Guanxin and Chen, Yinghao and Hu, Rui and Wang, Ke and Zhou, Aojun and Li, Hao and Shao, Hao and Lu, Xudong and Gao, Peng and Wen, Yafei and Chen, Xiaoxin and Ren, Shuai and Li, Hongsheng},
title = {Adaptive Markup Language Generat... | Visual Document Understanding has become essential with the increase of text-rich visual content. This field poses significant challenges due to the need for effective integration of visual perception and textual comprehension, particularly across diverse document types with complex layouts. Moreover, existing fine-tun... | [
-0.005510037299245596,
0.004306059330701828,
-0.008003782480955124,
0.046120498329401016,
0.028287231922149658,
-0.025298800319433212,
0.011154081672430038,
0.029738275334239006,
-0.022750653326511383,
-0.02855744957923889,
-0.05450093373656273,
0.015973616391420364,
-0.040932752192020416,
... |
165 | BARD-GS: Blur-Aware Reconstruction of Dynamic Scenes via Gaussian Splatting | [
"Yiren Lu",
"Yunlai Zhou",
"Disheng Liu",
"Tuo Liang",
"Yu Yin"
] | https://openaccess.thecvf.com/content/CVPR2025/html/Lu_BARD-GS_Blur-Aware_Reconstruction_of_Dynamic_Scenes_via_Gaussian_Splatting_CVPR_2025_paper.html | https://openaccess.thecvf.com/content/CVPR2025/papers/Lu_BARD-GS_Blur-Aware_Reconstruction_of_Dynamic_Scenes_via_Gaussian_Splatting_CVPR_2025_paper.pdf | null | null | @InProceedings{Lu_2025_CVPR,
author = {Lu, Yiren and Zhou, Yunlai and Liu, Disheng and Liang, Tuo and Yin, Yu},
title = {BARD-GS: Blur-Aware Reconstruction of Dynamic Scenes via Gaussian Splatting},
booktitle = {Proceedings of the Computer Vision and Pattern Recognition Conference (CVPR)},
month ... | 3D Gaussian Splatting (3DGS) has shown remarkable potential for static scene reconstruction, and recent advancements have extended its application to dynamic scenes. However, the quality of reconstructions depends heavily on high-quality input images and precise camera poses, which are not trivial to fulfill in the... | [
-0.008753707632422447,
-0.024952303618192673,
0.019952228292822838,
0.040346674621105194,
0.015617312863469124,
0.004346699919551611,
0.039315760135650635,
0.018642017617821693,
-0.05753820389509201,
-0.07280353456735611,
-0.02024739794433117,
-0.018951604142785072,
-0.029276935383677483,
... |
166 | SALAD: Skeleton-aware Latent Diffusion for Text-driven Motion Generation and Editing | [
"Seokhyeon Hong",
"Chaelin Kim",
"Serin Yoon",
"Junghyun Nam",
"Sihun Cha",
"Junyong Noh"
] | https://openaccess.thecvf.com/content/CVPR2025/html/Hong_SALAD_Skeleton-aware_Latent_Diffusion_for_Text-driven_Motion_Generation_and_Editing_CVPR_2025_paper.html | https://openaccess.thecvf.com/content/CVPR2025/papers/Hong_SALAD_Skeleton-aware_Latent_Diffusion_for_Text-driven_Motion_Generation_and_Editing_CVPR_2025_paper.pdf | https://openaccess.thecvf.com/content/CVPR2025/supplemental/Hong_SALAD_Skeleton-aware_Latent_CVPR_2025_supplemental.pdf | 2503.13836 | @InProceedings{Hong_2025_CVPR,
author = {Hong, Seokhyeon and Kim, Chaelin and Yoon, Serin and Nam, Junghyun and Cha, Sihun and Noh, Junyong},
title = {SALAD: Skeleton-aware Latent Diffusion for Text-driven Motion Generation and Editing},
booktitle = {Proceedings of the Computer Vision and Pattern Rec... | Text-driven motion generation has advanced significantly with the rise of denoising diffusion models. However, previous methods often oversimplify representations for the skeletal joints, temporal frames, and textual words, limiting their ability to fully capture the information within each modality and their interacti... | [
0.02236442267894745,
-0.024495946243405342,
-0.03788728639483452,
0.052220527082681656,
0.04152051359415054,
0.03396596387028694,
0.018045060336589813,
0.043645888566970825,
-0.033798422664403915,
-0.05323876440525055,
-0.013644607737660408,
-0.03119955025613308,
-0.0283159501850605,
-0.00... |
167 | Towards Universal AI-Generated Image Detection by Variational Information Bottleneck Network | [
"Haifeng Zhang",
"Qinghui He",
"Xiuli Bi",
"Weisheng Li",
"Bo Liu",
"Bin Xiao"
] | https://openaccess.thecvf.com/content/CVPR2025/html/Zhang_Towards_Universal_AI-Generated_Image_Detection_by_Variational_Information_Bottleneck_Network_CVPR_2025_paper.html | https://openaccess.thecvf.com/content/CVPR2025/papers/Zhang_Towards_Universal_AI-Generated_Image_Detection_by_Variational_Information_Bottleneck_Network_CVPR_2025_paper.pdf | null | null | @InProceedings{Zhang_2025_CVPR,
author = {Zhang, Haifeng and He, Qinghui and Bi, Xiuli and Li, Weisheng and Liu, Bo and Xiao, Bin},
title = {Towards Universal AI-Generated Image Detection by Variational Information Bottleneck Network},
booktitle = {Proceedings of the Computer Vision and Pattern Recog... | The rapid advancement of generative models has significantly improved the quality of generated images. Meanwhile, it challenges information authenticity and credibility. Current generated image detection methods based on large-scale pre-trained multimodal models have achieved impressive results. Although these models p... | [
0.03715333715081215,
-0.05654340237379074,
0.003670620499178767,
0.061507776379585266,
0.02114870585501194,
0.02497355081140995,
0.04364427551627159,
0.023631518706679344,
-0.0348970890045166,
-0.06710290908813477,
-0.03677431493997574,
0.003761149710044265,
-0.0700976699590683,
0.02181280... |
168 | HSI: A Holistic Style Injector for Arbitrary Style Transfer | [
"Shuhao Zhang",
"Hui Kang",
"Yang Liu",
"Fang Mei",
"Hongjuan Li"
] | https://openaccess.thecvf.com/content/CVPR2025/html/Zhang_HSI_A_Holistic_Style_Injector_for_Arbitrary_Style_Transfer_CVPR_2025_paper.html | https://openaccess.thecvf.com/content/CVPR2025/papers/Zhang_HSI_A_Holistic_Style_Injector_for_Arbitrary_Style_Transfer_CVPR_2025_paper.pdf | https://openaccess.thecvf.com/content/CVPR2025/supplemental/Zhang_HSI_A_Holistic_CVPR_2025_supplemental.pdf | 2502.04369 | @InProceedings{Zhang_2025_CVPR,
author = {Zhang, Shuhao and Kang, Hui and Liu, Yang and Mei, Fang and Li, Hongjuan},
title = {HSI: A Holistic Style Injector for Arbitrary Style Transfer},
booktitle = {Proceedings of the Computer Vision and Pattern Recognition Conference (CVPR)},
month = {June... | Attention-based arbitrary style transfer methods have gained significant attention recently due to their impressive ability to synthesize style details. However, the point-wise matching within the attention mechanism may overly focus on local patterns such that it neglects the remarkable global features of style images. Ad... | [
0.022830719128251076,
0.017037536948919296,
0.004431177396327257,
0.02088530920445919,
0.03507266566157341,
0.04610750824213028,
0.015871921554207802,
-0.008258238434791565,
-0.010404713451862335,
-0.05754045024514198,
-0.03857548162341118,
-0.014062685891985893,
-0.07709944248199463,
-0.0... |
169 | LookingGlass: Generative Anamorphoses via Laplacian Pyramid Warping | [
"Pascal Chang",
"Sergio Sancho",
"Jingwei Tang",
"Markus Gross",
"Vinicius Azevedo"
] | https://openaccess.thecvf.com/content/CVPR2025/html/Chang_LookingGlass_Generative_Anamorphoses_via_Laplacian_Pyramid_Warping_CVPR_2025_paper.html | https://openaccess.thecvf.com/content/CVPR2025/papers/Chang_LookingGlass_Generative_Anamorphoses_via_Laplacian_Pyramid_Warping_CVPR_2025_paper.pdf | https://openaccess.thecvf.com/content/CVPR2025/supplemental/Chang_LookingGlass_Generative_Anamorphoses_CVPR_2025_supplemental.zip | 2504.08902 | @InProceedings{Chang_2025_CVPR,
author = {Chang, Pascal and Sancho, Sergio and Tang, Jingwei and Gross, Markus and Azevedo, Vinicius},
title = {LookingGlass: Generative Anamorphoses via Laplacian Pyramid Warping},
booktitle = {Proceedings of the Computer Vision and Pattern Recognition Conference (CVP... | Anamorphosis refers to a category of images that are intentionally distorted, making them unrecognizable when viewed directly. Their true form only reveals itself when seen from a specific viewpoint, which can be through some catadioptric device like a mirror or a lens. While the construction of these mathematical devi... | [
0.05178403481841087,
-0.008061502128839493,
-0.025295766070485115,
0.01695663295686245,
0.05877886340022087,
0.042983878403902054,
0.04474455490708351,
0.028559915721416473,
-0.03130986541509628,
-0.10798937827348709,
-0.016235269606113434,
-0.03170471265912056,
-0.049803778529167175,
0.00... |
170 | V2V3D: View-to-View Denoised 3D Reconstruction for Light Field Microscopy | [
"Jiayin Zhao",
"Zhenqi Fu",
"Tao Yu",
"Hui Qiao"
] | https://openaccess.thecvf.com/content/CVPR2025/html/Zhao_V2V3D_View-to-View_Denoised_3D_Reconstruction_for_Light_Field_Microscopy_CVPR_2025_paper.html | https://openaccess.thecvf.com/content/CVPR2025/papers/Zhao_V2V3D_View-to-View_Denoised_3D_Reconstruction_for_Light_Field_Microscopy_CVPR_2025_paper.pdf | https://openaccess.thecvf.com/content/CVPR2025/supplemental/Zhao_V2V3D_View-to-View_Denoised_CVPR_2025_supplemental.pdf | 2504.07853 | @InProceedings{Zhao_2025_CVPR,
author = {Zhao, Jiayin and Fu, Zhenqi and Yu, Tao and Qiao, Hui},
title = {V2V3D: View-to-View Denoised 3D Reconstruction for Light Field Microscopy},
booktitle = {Proceedings of the Computer Vision and Pattern Recognition Conference (CVPR)},
month = {June},
... | Light field microscopy (LFM) has gained significant attention due to its ability to capture snapshot-based, large-scale 3D fluorescence images. However, existing LFM reconstruction algorithms are highly sensitive to sensor noise or require hard-to-get ground-truth annotated data for training. To address these challenge... | [
0.008097715675830841,
0.0006964830099605024,
0.014090134762227535,
-0.0026564982254058123,
0.029762273654341698,
0.02428448013961315,
0.010447089560329914,
-0.0005259665776975453,
-0.022455481812357903,
-0.06099764257669449,
0.048405807465314865,
-0.010272668674588203,
-0.04689374566078186,
... |
171 | DiN: Diffusion Model for Robust Medical VQA with Semantic Noisy Labels | [
"Erjian Guo",
"Zhen Zhao",
"Zicheng Wang",
"Tong Chen",
"Yunyi Liu",
"Luping Zhou"
] | https://openaccess.thecvf.com/content/CVPR2025/html/Guo_DiN_Diffusion_Model_for_Robust_Medical_VQA_with_Semantic_Noisy_CVPR_2025_paper.html | https://openaccess.thecvf.com/content/CVPR2025/papers/Guo_DiN_Diffusion_Model_for_Robust_Medical_VQA_with_Semantic_Noisy_CVPR_2025_paper.pdf | null | 2503.18536 | @InProceedings{Guo_2025_CVPR,
author = {Guo, Erjian and Zhao, Zhen and Wang, Zicheng and Chen, Tong and Liu, Yunyi and Zhou, Luping},
title = {DiN: Diffusion Model for Robust Medical VQA with Semantic Noisy Labels},
booktitle = {Proceedings of the Computer Vision and Pattern Recognition Conference (C... | Medical Visual Question Answering (Med-VQA) systems benefit the interpretation of medical images containing critical clinical information. However, the challenge of noisy labels and limited high-quality datasets remains underexplored. To address this, we establish the first benchmark for noisy labels in Med-VQA by simu... | [
0.01731104776263237,
-0.0020647651981562376,
0.003939162939786911,
0.06684126704931259,
0.024501720443367958,
0.0496184267103672,
0.010450383648276329,
-0.03128710761666298,
0.0001939166832016781,
-0.05578269809484482,
0.0007740947767160833,
0.02962368354201317,
-0.0330212377011776,
0.0246... |
172 | Splatter-360: Generalizable 360 Gaussian Splatting for Wide-baseline Panoramic Images | [
"Zheng Chen",
"Chenming Wu",
"Zhelun Shen",
"Chen Zhao",
"Weicai Ye",
"Haocheng Feng",
"Errui Ding",
"Song-Hai Zhang"
] | https://openaccess.thecvf.com/content/CVPR2025/html/Chen_Splatter-360_Generalizable_360_Gaussian_Splatting_for_Wide-baseline_Panoramic_Images_CVPR_2025_paper.html | https://openaccess.thecvf.com/content/CVPR2025/papers/Chen_Splatter-360_Generalizable_360_Gaussian_Splatting_for_Wide-baseline_Panoramic_Images_CVPR_2025_paper.pdf | https://openaccess.thecvf.com/content/CVPR2025/supplemental/Chen_Splatter-360_Generalizable_360_CVPR_2025_supplemental.pdf | null | @InProceedings{Chen_2025_CVPR,
author = {Chen, Zheng and Wu, Chenming and Shen, Zhelun and Zhao, Chen and Ye, Weicai and Feng, Haocheng and Ding, Errui and Zhang, Song-Hai},
title = {Splatter-360: Generalizable 360 Gaussian Splatting for Wide-baseline Panoramic Images},
booktitle = {Proceedings of th... | Wide-baseline panoramic images are frequently used in applications like VR and simulations to minimize capturing labor costs and storage needs. However, synthesizing novel views from these panoramic images in real time remains a significant challenge, especially due to panoramic imagery's high resolution and inherent d... | [
0.029809558764100075,
0.012251274660229683,
0.02438533492386341,
0.024871792644262314,
0.01457822136580944,
0.02253626100718975,
0.022306108847260475,
0.016238942742347717,
-0.022429244592785835,
-0.056223634630441666,
-0.0070200711488723755,
-0.005630883388221264,
-0.05804448574781418,
0.... |
173 | ShowMak3r: Compositional TV Show Reconstruction | [
"Sangmin Kim",
"Seunguk Do",
"Jaesik Park"
] | https://openaccess.thecvf.com/content/CVPR2025/html/Kim_ShowMak3r_Compositional_TV_Show_Reconstruction_CVPR_2025_paper.html | https://openaccess.thecvf.com/content/CVPR2025/papers/Kim_ShowMak3r_Compositional_TV_Show_Reconstruction_CVPR_2025_paper.pdf | https://openaccess.thecvf.com/content/CVPR2025/supplemental/Kim_ShowMak3r_Compositional_TV_CVPR_2025_supplemental.pdf | 2504.19584 | @InProceedings{Kim_2025_CVPR,
author = {Kim, Sangmin and Do, Seunguk and Park, Jaesik},
title = {ShowMak3r: Compositional TV Show Reconstruction},
booktitle = {Proceedings of the Computer Vision and Pattern Recognition Conference (CVPR)},
month = {June},
year = {2025},
pages ... | Reconstructing dynamic radiance fields from video clips is challenging, especially when entertainment videos like TV shows are given. Many challenges make the reconstruction difficult due to (1) actors occluding each other and having diverse facial expressions, (2) cluttered stages, and (3) small baseline views or... | [
0.029161425307393074,
-0.009044773876667023,
0.00049772416241467,
0.019159138202667236,
0.04061201587319374,
0.014848371036350727,
0.01724890060722828,
0.004207797348499298,
-0.05103477090597153,
-0.034644074738025665,
0.001195852062664926,
0.01456207875162363,
-0.05179283767938614,
0.0109... |
174 | CADRef: Robust Out-of-Distribution Detection via Class-Aware Decoupled Relative Feature Leveraging | [
"Zhiwei Ling",
"Yachen Chang",
"Hailiang Zhao",
"Xinkui Zhao",
"Kingsum Chow",
"Shuiguang Deng"
] | https://openaccess.thecvf.com/content/CVPR2025/html/Ling_CADRef_Robust_Out-of-Distribution_Detection_via_Class-Aware_Decoupled_Relative_Feature_Leveraging_CVPR_2025_paper.html | https://openaccess.thecvf.com/content/CVPR2025/papers/Ling_CADRef_Robust_Out-of-Distribution_Detection_via_Class-Aware_Decoupled_Relative_Feature_Leveraging_CVPR_2025_paper.pdf | https://openaccess.thecvf.com/content/CVPR2025/supplemental/Ling_CADRef_Robust_Out-of-Distribution_CVPR_2025_supplemental.pdf | 2503.00325 | @InProceedings{Ling_2025_CVPR,
author = {Ling, Zhiwei and Chang, Yachen and Zhao, Hailiang and Zhao, Xinkui and Chow, Kingsum and Deng, Shuiguang},
title = {CADRef: Robust Out-of-Distribution Detection via Class-Aware Decoupled Relative Feature Leveraging},
booktitle = {Proceedings of the Computer Vi... | Deep neural networks (DNNs) have been widely criticized for their overconfidence when dealing with out-of-distribution (OOD) samples, highlighting the critical need for effective OOD detection to ensure the safe deployment of DNNs in real-world settings. Existing post-hoc OOD detection methods primarily enhance the dis... | [
0.022060075774788857,
-0.016595570370554924,
0.018085524439811707,
0.040096309036016464,
0.05029992386698723,
0.026244930922985077,
0.007958409376442432,
0.003096455940976739,
-0.0024550699163228273,
-0.04417261853814125,
-0.004511620849370956,
0.009213894605636597,
-0.06201409175992012,
-... |
175 | S^3-Face: SSS-Compliant Facial Reflectance Estimation via Diffusion Priors | [
"Xingyu Ren",
"Jiankang Deng",
"Yuhao Cheng",
"Wenhan Zhu",
"Yichao Yan",
"Xiaokang Yang",
"Stefanos Zafeiriou",
"Chao Ma"
] | https://openaccess.thecvf.com/content/CVPR2025/html/Ren_S3-Face_SSS-Compliant_Facial_Reflectance_Estimation_via_Diffusion_Priors_CVPR_2025_paper.html | https://openaccess.thecvf.com/content/CVPR2025/papers/Ren_S3-Face_SSS-Compliant_Facial_Reflectance_Estimation_via_Diffusion_Priors_CVPR_2025_paper.pdf | https://openaccess.thecvf.com/content/CVPR2025/supplemental/Ren_S3-Face_SSS-Compliant_Facial_CVPR_2025_supplemental.zip | null | @InProceedings{Ren_2025_CVPR,
author = {Ren, Xingyu and Deng, Jiankang and Cheng, Yuhao and Zhu, Wenhan and Yan, Yichao and Yang, Xiaokang and Zafeiriou, Stefanos and Ma, Chao},
title = {S{\textasciicircum}3-Face: SSS-Compliant Facial Reflectance Estimation via Diffusion Priors},
booktitle = {Proceed... | Recent 3D face reconstruction methods have made remarkable advancements, yet achieving high-quality facial reflectance from monocular input remains challenging. Existing methods rely on the light-stage captured data to learn facial reflectance models. However, limited subject diversity in these datasets poses challenge... | [
0.015449898317456245,
-0.019800320267677307,
0.01176617480814457,
0.018854064866900444,
0.025736413896083832,
0.03990506753325462,
0.04251629859209061,
-0.012854363769292831,
-0.01728202775120735,
-0.092317134141922,
0.0015523068141192198,
-0.019564595073461533,
-0.040719516575336456,
-0.0... |
176 | FSBench: A Figure Skating Benchmark for Advancing Artistic Sports Understanding | [
"Rong Gao",
"Xin Liu",
"Zhuozhao Hu",
"Bohao Xing",
"Baiqiang Xia",
"Zitong Yu",
"Heikki Kälviäinen"
] | https://openaccess.thecvf.com/content/CVPR2025/html/Gao_FSBench_A_Figure_Skating_Benchmark_for_Advancing_Artistic_Sports_Understanding_CVPR_2025_paper.html | https://openaccess.thecvf.com/content/CVPR2025/papers/Gao_FSBench_A_Figure_Skating_Benchmark_for_Advancing_Artistic_Sports_Understanding_CVPR_2025_paper.pdf | https://openaccess.thecvf.com/content/CVPR2025/supplemental/Gao_FSBench_A_Figure_CVPR_2025_supplemental.pdf | 2504.19514 | @InProceedings{Gao_2025_CVPR,
author = {Gao, Rong and Liu, Xin and Hu, Zhuozhao and Xing, Bohao and Xia, Baiqiang and Yu, Zitong and K\"alvi\"ainen, Heikki},
title = {FSBench: A Figure Skating Benchmark for Advancing Artistic Sports Understanding},
booktitle = {Proceedings of the Computer Vision and ... | Figure skating, known as the "Art on Ice," is among the most artistic sports, challenging to understand due to its blend of technical elements (like jumps and spins) and overall artistic expression. Existing figure skating datasets mainly focus on single tasks, such as action recognition or scoring, lacking comprehensi... | [
-0.038834016770124435,
-0.0466165617108345,
0.0050693717785179615,
0.02356821298599243,
0.030820120126008987,
0.0009164318908005953,
0.048323582857847214,
0.03321744501590729,
-0.04139108583331108,
-0.04621486738324165,
-0.001987208379432559,
-0.017903529107570648,
-0.06756780296564102,
-0... |
177 | Keep the Balance: A Parameter-Efficient Symmetrical Framework for RGB+X Semantic Segmentation | [
"Jiaxin Cai",
"Jingze Su",
"Qi Li",
"Wenjie Yang",
"Shu Wang",
"Tiesong Zhao",
"Shengfeng He",
"Wenxi Liu"
] | https://openaccess.thecvf.com/content/CVPR2025/html/Cai_Keep_the_Balance_A_Parameter-Efficient_Symmetrical_Framework_for_RGBX_Semantic_CVPR_2025_paper.html | https://openaccess.thecvf.com/content/CVPR2025/papers/Cai_Keep_the_Balance_A_Parameter-Efficient_Symmetrical_Framework_for_RGBX_Semantic_CVPR_2025_paper.pdf | https://openaccess.thecvf.com/content/CVPR2025/supplemental/Cai_Keep_the_Balance_CVPR_2025_supplemental.pdf | null | @InProceedings{Cai_2025_CVPR,
author = {Cai, Jiaxin and Su, Jingze and Li, Qi and Yang, Wenjie and Wang, Shu and Zhao, Tiesong and He, Shengfeng and Liu, Wenxi},
title = {Keep the Balance: A Parameter-Efficient Symmetrical Framework for RGB+X Semantic Segmentation},
booktitle = {Proceedings of the Co... | Multimodal semantic segmentation is a critical challenge in computer vision, with early methods suffering from high computational costs and limited transferability due to full fine-tuning of RGB-based pre-trained parameters. Recent studies, while leveraging additional modalities as supplementary prompts to RGB, still p... | [
0.017184553667902946,
-0.03594811260700226,
0.026544470340013504,
0.018781233578920364,
0.002252093516290188,
0.05756581202149391,
-0.007593096699565649,
0.0347493439912796,
-0.056233134120702744,
-0.06967311352491379,
-0.039948511868715286,
-0.015376613475382328,
-0.07522814720869064,
-0.... |
178 | VideoDirector: Precise Video Editing via Text-to-Video Models | [
"Yukun Wang",
"Longguang Wang",
"Zhiyuan Ma",
"Qibin Hu",
"Kai Xu",
"Yulan Guo"
] | https://openaccess.thecvf.com/content/CVPR2025/html/Wang_VideoDirector_Precise_Video_Editing_via_Text-to-Video_Models_CVPR_2025_paper.html | https://openaccess.thecvf.com/content/CVPR2025/papers/Wang_VideoDirector_Precise_Video_Editing_via_Text-to-Video_Models_CVPR_2025_paper.pdf | https://openaccess.thecvf.com/content/CVPR2025/supplemental/Wang_VideoDirector_Precise_Video_CVPR_2025_supplemental.zip | 2411.17592 | @InProceedings{Wang_2025_CVPR,
author = {Wang, Yukun and Wang, Longguang and Ma, Zhiyuan and Hu, Qibin and Xu, Kai and Guo, Yulan},
title = {VideoDirector: Precise Video Editing via Text-to-Video Models},
booktitle = {Proceedings of the Computer Vision and Pattern Recognition Conference (CVPR)},
... | Although the typical inversion-then-editing paradigm using text-to-image (T2I) models has demonstrated promising results, directly extending it to text-to-video (T2V) models still suffers from severe artifacts such as color flickering and content distortion. Consequently, current video editing methods primarily rely on T2I m... | [
0.020602740347385406,
-0.00497038196772337,
-0.013692667707800865,
0.06420169770717621,
0.028588466346263885,
0.017663385719060898,
0.038021937012672424,
0.027041848748922348,
-0.021523091942071915,
-0.04736494645476341,
-0.006836052052676678,
-0.006874422077089548,
-0.04312446713447571,
0... |
179 | LLM-driven Multimodal and Multi-Identity Listening Head Generation | [
"Peiwen Lai",
"Weizhi Zhong",
"Yipeng Qin",
"Xiaohang Ren",
"Baoyuan Wang",
"Guanbin Li"
] | https://openaccess.thecvf.com/content/CVPR2025/html/Lai_LLM-driven_Multimodal_and_Multi-Identity_Listening_Head_Generation_CVPR_2025_paper.html | https://openaccess.thecvf.com/content/CVPR2025/papers/Lai_LLM-driven_Multimodal_and_Multi-Identity_Listening_Head_Generation_CVPR_2025_paper.pdf | https://openaccess.thecvf.com/content/CVPR2025/supplemental/Lai_LLM-driven_Multimodal_and_CVPR_2025_supplemental.zip | null | @InProceedings{Lai_2025_CVPR,
author = {Lai, Peiwen and Zhong, Weizhi and Qin, Yipeng and Ren, Xiaohang and Wang, Baoyuan and Li, Guanbin},
title = {LLM-driven Multimodal and Multi-Identity Listening Head Generation},
booktitle = {Proceedings of the Computer Vision and Pattern Recognition Conference ... | Generating natural listener responses in conversational scenarios is crucial for creating engaging digital humans and avatars. Recent work has shown that large language models (LLMs) can be effectively leveraged for this task, demonstrating remarkable capabilities in generating contextually appropriate listener behavio... | [
0.015379988588392735,
-0.009441928938031197,
-0.002352388808503747,
0.0077671813778579235,
0.031989194452762604,
0.04487094655632973,
0.04071154445409775,
0.0018901601433753967,
-0.02894699200987816,
-0.026335900649428368,
-0.037910331040620804,
0.04701245203614235,
-0.060624901205301285,
... |
180 | Towards Understanding How Knowledge Evolves in Large Vision-Language Models | [
"Sudong Wang",
"Yunjian Zhang",
"Yao Zhu",
"Jianing Li",
"Zizhe Wang",
"Yanwei Liu",
"Xiangyang Ji"
] | https://openaccess.thecvf.com/content/CVPR2025/html/Wang_Towards_Understanding_How_Knowledge_Evolves_in_Large_Vision-Language_Models_CVPR_2025_paper.html | https://openaccess.thecvf.com/content/CVPR2025/papers/Wang_Towards_Understanding_How_Knowledge_Evolves_in_Large_Vision-Language_Models_CVPR_2025_paper.pdf | https://openaccess.thecvf.com/content/CVPR2025/supplemental/Wang_Towards_Understanding_How_CVPR_2025_supplemental.pdf | 2504.02862 | @InProceedings{Wang_2025_CVPR,
author = {Wang, Sudong and Zhang, Yunjian and Zhu, Yao and Li, Jianing and Wang, Zizhe and Liu, Yanwei and Ji, Xiangyang},
title = {Towards Understanding How Knowledge Evolves in Large Vision-Language Models},
booktitle = {Proceedings of the Computer Vision and Pattern ... | Large Vision-Language Models (LVLMs) are gradually becoming the foundation for many artificial intelligence applications. However, understanding their internal working mechanisms has continued to puzzle researchers, which in turn limits the further enhancement of their capabilities. In this paper, we seek to investigat... | [
-0.01202392391860485,
-0.004500511102378368,
0.001266355044208467,
0.041657954454422,
0.047107402235269547,
0.015224103815853596,
0.04310961812734604,
0.04635893926024437,
-0.036008384078741074,
-0.0017786422977223992,
-0.025320792570710182,
0.048915643244981766,
-0.07211946696043015,
0.00... |
181 | A Unified, Resilient, and Explainable Adversarial Patch Detector | [
"Vishesh Kumar",
"Akshay Agarwal"
] | https://openaccess.thecvf.com/content/CVPR2025/html/Kumar_A_Unified_Resilient_and_Explainable_Adversarial_Patch_Detector_CVPR_2025_paper.html | https://openaccess.thecvf.com/content/CVPR2025/papers/Kumar_A_Unified_Resilient_and_Explainable_Adversarial_Patch_Detector_CVPR_2025_paper.pdf | https://openaccess.thecvf.com/content/CVPR2025/supplemental/Kumar_A_Unified_Resilient_CVPR_2025_supplemental.pdf | null | @InProceedings{Kumar_2025_CVPR,
author = {Kumar, Vishesh and Agarwal, Akshay},
title = {A Unified, Resilient, and Explainable Adversarial Patch Detector},
booktitle = {Proceedings of the Computer Vision and Pattern Recognition Conference (CVPR)},
month = {June},
year = {2025},
pa... | Deep Neural Networks (DNNs), backbone architecture in `almost' every computer vision task, are vulnerable to adversarial attacks, particularly physical out-of-distribution (OOD) adversarial patches. Existing defense models often struggle with interpreting these attacks in ways that align with human visual perception. O... | [
0.005904223769903183,
-0.025887243449687958,
-0.018459439277648926,
0.03797312453389168,
0.010546818375587463,
0.029600102454423904,
0.010905630886554718,
-0.005478958133608103,
-0.025608716532588005,
-0.06118491291999817,
-0.008938959799706936,
0.0015779311070218682,
-0.07530330121517181,
... |
182 | VISTA: Enhancing Long-Duration and High-Resolution Video Understanding by Video Spatiotemporal Augmentation | [
"Weiming Ren",
"Huan Yang",
"Jie Min",
"Cong Wei",
"Wenhu Chen"
] | https://openaccess.thecvf.com/content/CVPR2025/html/Ren_VISTA_Enhancing_Long-Duration_and_High-Resolution_Video_Understanding_by_Video_Spatiotemporal_CVPR_2025_paper.html | https://openaccess.thecvf.com/content/CVPR2025/papers/Ren_VISTA_Enhancing_Long-Duration_and_High-Resolution_Video_Understanding_by_Video_Spatiotemporal_CVPR_2025_paper.pdf | https://openaccess.thecvf.com/content/CVPR2025/supplemental/Ren_VISTA_Enhancing_Long-Duration_CVPR_2025_supplemental.pdf | 2412.00927 | @InProceedings{Ren_2025_CVPR,
author = {Ren, Weiming and Yang, Huan and Min, Jie and Wei, Cong and Chen, Wenhu},
title = {VISTA: Enhancing Long-Duration and High-Resolution Video Understanding by Video Spatiotemporal Augmentation},
booktitle = {Proceedings of the Computer Vision and Pattern Recogniti... | Current large multimodal models (LMMs) face significant challenges in processing and comprehending long-duration or high-resolution videos, which is mainly due to the lack of high-quality datasets. To address this issue from a data-centric perspective, we propose VISTA, a simple yet effective video spatiotemporal augme... | [
0.05316866189241409,
-0.027254687622189522,
0.007459448650479317,
0.04170124605298042,
0.03056819550693035,
0.010480612516403198,
0.03451494127511978,
0.03279147297143936,
-0.03670017793774605,
-0.030216775834560394,
-0.03884008154273033,
0.005319697316735983,
-0.04005173221230507,
0.00774... |
183 | Structured 3D Latents for Scalable and Versatile 3D Generation | [
"Jianfeng Xiang",
"Zelong Lv",
"Sicheng Xu",
"Yu Deng",
"Ruicheng Wang",
"Bowen Zhang",
"Dong Chen",
"Xin Tong",
"Jiaolong Yang"
] | https://openaccess.thecvf.com/content/CVPR2025/html/Xiang_Structured_3D_Latents_for_Scalable_and_Versatile_3D_Generation_CVPR_2025_paper.html | https://openaccess.thecvf.com/content/CVPR2025/papers/Xiang_Structured_3D_Latents_for_Scalable_and_Versatile_3D_Generation_CVPR_2025_paper.pdf | https://openaccess.thecvf.com/content/CVPR2025/supplemental/Xiang_Structured_3D_Latents_CVPR_2025_supplemental.pdf | 2412.01506 | @InProceedings{Xiang_2025_CVPR,
author = {Xiang, Jianfeng and Lv, Zelong and Xu, Sicheng and Deng, Yu and Wang, Ruicheng and Zhang, Bowen and Chen, Dong and Tong, Xin and Yang, Jiaolong},
title = {Structured 3D Latents for Scalable and Versatile 3D Generation},
booktitle = {Proceedings of the Compute... | We introduce a novel 3D generation method for versatile and high-quality 3D asset creation. The cornerstone is a unified Structured LATent (SLAT) representation which allows decoding to different output formats, such as Radiance Fields, 3D Gaussians, and meshes. This is achieved by integrating a sparsely-populated 3D gr... | [
0.03362036868929863,
-0.0184144526720047,
0.002302306005731225,
0.03159179165959358,
0.03387819603085518,
0.030917078256607056,
-0.012651887722313404,
0.014457781799137592,
-0.027174100279808044,
-0.04836157709360123,
-0.02811998687684536,
-0.0588766448199749,
-0.061924222856760025,
0.0215... |
184 | GA3CE: Unconstrained 3D Gaze Estimation with Gaze-Aware 3D Context Encoding | [
"Yuki Kawana",
"Shintaro Shiba",
"Quan Kong",
"Norimasa Kobori"
] | https://openaccess.thecvf.com/content/CVPR2025/html/Kawana_GA3CE_Unconstrained_3D_Gaze_Estimation_with_Gaze-Aware_3D_Context_Encoding_CVPR_2025_paper.html | https://openaccess.thecvf.com/content/CVPR2025/papers/Kawana_GA3CE_Unconstrained_3D_Gaze_Estimation_with_Gaze-Aware_3D_Context_Encoding_CVPR_2025_paper.pdf | https://openaccess.thecvf.com/content/CVPR2025/supplemental/Kawana_GA3CE_Unconstrained_3D_CVPR_2025_supplemental.pdf | 2505.10671 | @InProceedings{Kawana_2025_CVPR,
author = {Kawana, Yuki and Shiba, Shintaro and Kong, Quan and Kobori, Norimasa},
title = {GA3CE: Unconstrained 3D Gaze Estimation with Gaze-Aware 3D Context Encoding},
booktitle = {Proceedings of the Computer Vision and Pattern Recognition Conference (CVPR)},
mont... | We propose a novel 3D gaze estimation approach that learns spatial relationships between the subject and objects in the scene, and outputs 3D gaze direction. Our method targets unconstrained settings, including cases where close-up views of the subject's eyes are unavailable, such as when the subject is distant or faci... | [
0.02548816427588463,
0.026077672839164734,
0.009890596382319927,
0.0015606596134603024,
0.012075435370206833,
0.03062453307211399,
0.011614573188126087,
0.039895862340927124,
-0.004816079046577215,
-0.03329983726143837,
-0.032262761145830154,
0.01754096895456314,
-0.09105918556451797,
-0.0... |
185 | Self-Cross Diffusion Guidance for Text-to-Image Synthesis of Similar Subjects | [
"Weimin Qiu",
"Jieke Wang",
"Meng Tang"
] | https://openaccess.thecvf.com/content/CVPR2025/html/Qiu_Self-Cross_Diffusion_Guidance_for_Text-to-Image_Synthesis_of_Similar_Subjects_CVPR_2025_paper.html | https://openaccess.thecvf.com/content/CVPR2025/papers/Qiu_Self-Cross_Diffusion_Guidance_for_Text-to-Image_Synthesis_of_Similar_Subjects_CVPR_2025_paper.pdf | https://openaccess.thecvf.com/content/CVPR2025/supplemental/Qiu_Self-Cross_Diffusion_Guidance_CVPR_2025_supplemental.pdf | 2411.18936 | @InProceedings{Qiu_2025_CVPR,
author = {Qiu, Weimin and Wang, Jieke and Tang, Meng},
title = {Self-Cross Diffusion Guidance for Text-to-Image Synthesis of Similar Subjects},
booktitle = {Proceedings of the Computer Vision and Pattern Recognition Conference (CVPR)},
month = {June},
year ... | Diffusion models have achieved unprecedented fidelity and diversity in synthesizing images, videos, 3D assets, etc. However, subject mixing is an unresolved issue for diffusion-based image synthesis, particularly when synthesizing multiple similar-looking subjects. We propose Self-Cross Diffusion Guidance to penalize the over... | [
0.02647712267935276,
-0.002485438250005245,
0.009664200246334076,
0.023516643792390823,
0.03744431957602501,
0.03960254788398743,
0.0294506773352623,
0.00233948091045022,
-0.009419922716915607,
-0.050853144377470016,
-0.014004191383719444,
-0.024263856932520866,
-0.030783994123339653,
-0.0... |
186 | RigGS: Rigging of 3D Gaussians for Modeling Articulated Objects in Videos | [
"Yuxin Yao",
"Zhi Deng",
"Junhui Hou"
] | https://openaccess.thecvf.com/content/CVPR2025/html/Yao_RigGS_Rigging_of_3D_Gaussians_for_Modeling_Articulated_Objects_in_CVPR_2025_paper.html | https://openaccess.thecvf.com/content/CVPR2025/papers/Yao_RigGS_Rigging_of_3D_Gaussians_for_Modeling_Articulated_Objects_in_CVPR_2025_paper.pdf | https://openaccess.thecvf.com/content/CVPR2025/supplemental/Yao_RigGS_Rigging_of_CVPR_2025_supplemental.pdf | 2503.16822 | @InProceedings{Yao_2025_CVPR,
author = {Yao, Yuxin and Deng, Zhi and Hou, Junhui},
title = {RigGS: Rigging of 3D Gaussians for Modeling Articulated Objects in Videos},
booktitle = {Proceedings of the Computer Vision and Pattern Recognition Conference (CVPR)},
month = {June},
year = {... | This paper considers the problem of modeling articulated objects captured in 2D videos to enable novel view synthesis, while also being easily editable, drivable, and reposable. To tackle this challenging problem, we propose RigGS, a new paradigm that leverages 3D Gaussian representation and skeleton-based motion repre... | [
0.03415317088365555,
-0.008047472685575485,
-0.031267739832401276,
0.02657552808523178,
0.04264169558882713,
0.05098175257444382,
0.008493253961205482,
0.007349854800850153,
-0.06415431201457977,
-0.06714653968811035,
-0.008623582310974598,
-0.029756274074316025,
-0.06282404810190201,
-0.0... |
187 | Noise Modeling in One Hour: Minimizing Preparation Efforts for Self-supervised Low-Light RAW Image Denoising | [
"Feiran Li",
"Haiyang Jiang",
"Daisuke Iso"
] | https://openaccess.thecvf.com/content/CVPR2025/html/Li_Noise_Modeling_in_One_Hour_Minimizing_Preparation_Efforts_for_Self-supervised_CVPR_2025_paper.html | https://openaccess.thecvf.com/content/CVPR2025/papers/Li_Noise_Modeling_in_One_Hour_Minimizing_Preparation_Efforts_for_Self-supervised_CVPR_2025_paper.pdf | https://openaccess.thecvf.com/content/CVPR2025/supplemental/Li_Noise_Modeling_in_CVPR_2025_supplemental.pdf | 2505.00045 | @InProceedings{Li_2025_CVPR,
author = {Li, Feiran and Jiang, Haiyang and Iso, Daisuke},
title = {Noise Modeling in One Hour: Minimizing Preparation Efforts for Self-supervised Low-Light RAW Image Denoising},
booktitle = {Proceedings of the Computer Vision and Pattern Recognition Conference (CVPR)},
... | Noise synthesis is a promising solution for addressing the data shortage problem in data-driven low-light RAW image denoising. However, accurate noise synthesis methods often necessitate labor-intensive calibration and profiling procedures during preparation, preventing them from being adopted in practice at scale. This work... | [
0.03877061977982521,
-0.006692402996122837,
-0.02119968645274639,
0.04783955588936806,
0.07026242464780807,
0.050885364413261414,
0.014301390387117863,
-0.006045338232070208,
-0.009936453774571419,
-0.06926941871643066,
-0.0022192152682691813,
0.012625659815967083,
-0.04269138351082802,
0.... |
188 | Adv-CPG: A Customized Portrait Generation Framework with Facial Adversarial Attacks | [
"Junying Wang",
"Hongyuan Zhang",
"Yuan Yuan"
] | https://openaccess.thecvf.com/content/CVPR2025/html/Wang_Adv-CPG_A_Customized_Portrait_Generation_Framework_with_Facial_Adversarial_Attacks_CVPR_2025_paper.html | https://openaccess.thecvf.com/content/CVPR2025/papers/Wang_Adv-CPG_A_Customized_Portrait_Generation_Framework_with_Facial_Adversarial_Attacks_CVPR_2025_paper.pdf | https://openaccess.thecvf.com/content/CVPR2025/supplemental/Wang_Adv-CPG_A_Customized_CVPR_2025_supplemental.pdf | null | @InProceedings{Wang_2025_CVPR,
author = {Wang, Junying and Zhang, Hongyuan and Yuan, Yuan},
title = {Adv-CPG: A Customized Portrait Generation Framework with Facial Adversarial Attacks},
booktitle = {Proceedings of the Computer Vision and Pattern Recognition Conference (CVPR)},
month = {June}... | Recent Customized Portrait Generation (CPG) methods, taking a facial image and a textual prompt as inputs, have attracted substantial attention. Although these methods generate high-fidelity portraits, they fail to prevent the generated portraits from being tracked and misused by malicious face recognition systems. To ... | [
0.0028484712820500135,
-0.019173551350831985,
0.00914200022816658,
0.04153674840927124,
0.008095513097941875,
0.03406692296266556,
0.038506899029016495,
-0.004106601700186729,
-0.016290251165628433,
-0.06604208797216415,
-0.000986739993095398,
-0.028630048036575317,
-0.0662558525800705,
-0... |
189 | Fish-Vista: A Multi-Purpose Dataset for Understanding & Identification of Traits from Images | [
"Kazi Sajeed Mehrab",
"M. Maruf",
"Arka Daw",
"Abhilash Neog",
"Harish Babu Manogaran",
"Mridul Khurana",
"Zhenyang Feng",
"Bahadir Altintas",
"Yasin Bakis",
"Elizabeth G Campolongo",
"Matthew J Thompson",
"Xiaojun Wang",
"Hilmar Lapp",
"Tanya Berger-Wolf",
"Paula Mabee",
"Henry Bart",... | https://openaccess.thecvf.com/content/CVPR2025/html/Mehrab_Fish-Vista_A_Multi-Purpose_Dataset_for_Understanding__Identification_of_Traits_CVPR_2025_paper.html | https://openaccess.thecvf.com/content/CVPR2025/papers/Mehrab_Fish-Vista_A_Multi-Purpose_Dataset_for_Understanding__Identification_of_Traits_CVPR_2025_paper.pdf | https://openaccess.thecvf.com/content/CVPR2025/supplemental/Mehrab_Fish-Vista_A_Multi-Purpose_CVPR_2025_supplemental.pdf | null | @InProceedings{Mehrab_2025_CVPR,
author = {Mehrab, Kazi Sajeed and Maruf, M. and Daw, Arka and Neog, Abhilash and Manogaran, Harish Babu and Khurana, Mridul and Feng, Zhenyang and Altintas, Bahadir and Bakis, Yasin and Campolongo, Elizabeth G and Thompson, Matthew J and Wang, Xiaojun and Lapp, Hilmar and Berger-... | We introduce Fish-Visual Trait Analysis (Fish-Vista), the first organismal image dataset designed for the analysis of visual traits of aquatic species directly from images using machine learning and computer vision methods. Fish-Vista contains 69,269 annotated images spanning 4,316 fish species, curated and organized t... | [
0.01627272740006447,
-0.015268024057149887,
-0.03593524172902107,
0.05080690234899521,
0.06476949155330658,
0.04285803064703941,
0.04262825846672058,
0.011227487586438656,
-0.01939661242067814,
-0.059337764978408813,
-0.00015868731134105474,
0.004021992441266775,
-0.09732797741889954,
-0.0... |
190 | High Dynamic Range Video Compression: A Large-Scale Benchmark Dataset and A Learned Bit-depth Scalable Compression Algorithm | [
"Zhaoyi Tian",
"Feifeng Wang",
"Shiwei Wang",
"Zihao Zhou",
"Yao Zhu",
"Liquan Shen"
] | https://openaccess.thecvf.com/content/CVPR2025/html/Tian_High_Dynamic_Range_Video_Compression_A_Large-Scale_Benchmark_Dataset_and_CVPR_2025_paper.html | https://openaccess.thecvf.com/content/CVPR2025/papers/Tian_High_Dynamic_Range_Video_Compression_A_Large-Scale_Benchmark_Dataset_and_CVPR_2025_paper.pdf | https://openaccess.thecvf.com/content/CVPR2025/supplemental/Tian_High_Dynamic_Range_CVPR_2025_supplemental.pdf | 2503.00410 | @InProceedings{Tian_2025_CVPR,
author = {Tian, Zhaoyi and Wang, Feifeng and Wang, Shiwei and Zhou, Zihao and Zhu, Yao and Shen, Liquan},
title = {High Dynamic Range Video Compression: A Large-Scale Benchmark Dataset and A Learned Bit-depth Scalable Compression Algorithm},
booktitle = {Proceedings of ... | Recently, learned video compression (LVC) has been undergoing a period of rapid development. However, due to the absence of large, high-quality high dynamic range (HDR) video training data, LVC on HDR video remains unexplored. In this paper, we are the first to collect a large-scale HDR video benchmark dataset, named HDRVD2K... | [
0.012068752199411392,
-0.015509127639234066,
-0.005988212767988443,
0.039155419915914536,
0.0446649044752121,
0.02842954359948635,
-0.007221446838229895,
-0.020693432539701462,
-0.021048981696367264,
-0.04823407158255577,
0.010116529650986195,
-0.007414677180349827,
-0.027709918096661568,
... |
191 | OffsetOPT: Explicit Surface Reconstruction without Normals | [
"Huan Lei"
] | https://openaccess.thecvf.com/content/CVPR2025/html/Lei_OffsetOPT_Explicit_Surface_Reconstruction_without_Normals_CVPR_2025_paper.html | https://openaccess.thecvf.com/content/CVPR2025/papers/Lei_OffsetOPT_Explicit_Surface_Reconstruction_without_Normals_CVPR_2025_paper.pdf | https://openaccess.thecvf.com/content/CVPR2025/supplemental/Lei_OffsetOPT_Explicit_Surface_CVPR_2025_supplemental.pdf | 2503.15763 | @InProceedings{Lei_2025_CVPR,
author = {Lei, Huan},
title = {OffsetOPT: Explicit Surface Reconstruction without Normals},
booktitle = {Proceedings of the Computer Vision and Pattern Recognition Conference (CVPR)},
month = {June},
year = {2025},
pages = {11729-11738}
} | Neural surface reconstruction has been dominated by implicit representations with marching cubes for explicit surface extraction. However, those methods typically require high-quality normals for accurate reconstruction. We propose OffsetOPT, a method that reconstructs explicit surfaces directly from 3D point clouds ... | [
-0.022682128474116325,
0.03487608581781387,
-0.014460057020187378,
0.036075133830308914,
0.017616815865039825,
0.04863182455301285,
-0.0015782049158588052,
0.017175087705254555,
-0.03597886487841606,
-0.10525475442409515,
-0.023215271532535553,
-0.02961227111518383,
-0.07151635736227036,
-... |
192 | PCM : Picard Consistency Model for Fast Parallel Sampling of Diffusion Models | [
"Junhyuk So",
"Jiwoong Shin",
"Chaeyeon Jang",
"Eunhyeok Park"
] | https://openaccess.thecvf.com/content/CVPR2025/html/So_PCM__Picard_Consistency_Model_for_Fast_Parallel_Sampling_of_CVPR_2025_paper.html | https://openaccess.thecvf.com/content/CVPR2025/papers/So_PCM__Picard_Consistency_Model_for_Fast_Parallel_Sampling_of_CVPR_2025_paper.pdf | https://openaccess.thecvf.com/content/CVPR2025/supplemental/So_PCM__Picard_CVPR_2025_supplemental.pdf | 2503.19731 | @InProceedings{So_2025_CVPR,
author = {So, Junhyuk and Shin, Jiwoong and Jang, Chaeyeon and Park, Eunhyeok},
title = {PCM : Picard Consistency Model for Fast Parallel Sampling of Diffusion Models},
booktitle = {Proceedings of the Computer Vision and Pattern Recognition Conference (CVPR)},
month ... | Recently, diffusion models have achieved significant advances in vision, text, and robotics. However, they still face slow generation speeds due to sequential denoising processes. To address this, a parallel sampling method based on Picard iteration was introduced, effectively reducing sequential steps while ensuring e... | [
-0.03156178817152977,
-0.034888800233602524,
-0.028207877650856972,
0.06257142871618271,
0.026780162006616592,
0.05301699414849281,
0.014524104073643684,
0.032093558460474014,
-0.014561477117240429,
-0.07919815927743912,
0.005563525017350912,
-0.02735917456448078,
-0.045746032148599625,
0.... |
193 | CoMapGS: Covisibility Map-based Gaussian Splatting for Sparse Novel View Synthesis | [
"Youngkyoon Jang",
"Eduardo Pérez-Pellitero"
] | https://openaccess.thecvf.com/content/CVPR2025/html/Jang_CoMapGS_Covisibility_Map-based_Gaussian_Splatting_for_Sparse_Novel_View_Synthesis_CVPR_2025_paper.html | https://openaccess.thecvf.com/content/CVPR2025/papers/Jang_CoMapGS_Covisibility_Map-based_Gaussian_Splatting_for_Sparse_Novel_View_Synthesis_CVPR_2025_paper.pdf | https://openaccess.thecvf.com/content/CVPR2025/supplemental/Jang_CoMapGS_Covisibility_Map-based_CVPR_2025_supplemental.zip | null | @InProceedings{Jang_2025_CVPR,
author = {Jang, Youngkyoon and P\'erez-Pellitero, Eduardo},
title = {CoMapGS: Covisibility Map-based Gaussian Splatting for Sparse Novel View Synthesis},
booktitle = {Proceedings of the Computer Vision and Pattern Recognition Conference (CVPR)},
month = {June},
... | We propose Covisibility Map-based Gaussian Splatting (CoMapGS), designed to recover underrepresented sparse regions in sparse novel view synthesis. CoMapGS addresses both high- and low-uncertainty regions by constructing covisibility maps, enhancing initial point clouds, and applying uncertainty-aware weighted supervis... | [
0.02931114099919796,
-0.002801296766847372,
0.02551559917628765,
0.020106466487050056,
0.0059905764646828175,
0.03677614778280258,
-0.012990600429475307,
0.02342924289405346,
-0.0217136200517416,
-0.05786193162202835,
-0.03802831098437309,
-0.027445096522569656,
-0.06264422088861465,
0.001... |
194 | Any-Resolution AI-Generated Image Detection by Spectral Learning | [
"Dimitrios Karageorgiou",
"Symeon Papadopoulos",
"Ioannis Kompatsiaris",
"Efstratios Gavves"
] | https://openaccess.thecvf.com/content/CVPR2025/html/Karageorgiou_Any-Resolution_AI-Generated_Image_Detection_by_Spectral_Learning_CVPR_2025_paper.html | https://openaccess.thecvf.com/content/CVPR2025/papers/Karageorgiou_Any-Resolution_AI-Generated_Image_Detection_by_Spectral_Learning_CVPR_2025_paper.pdf | https://openaccess.thecvf.com/content/CVPR2025/supplemental/Karageorgiou_Any-Resolution_AI-Generated_Image_CVPR_2025_supplemental.pdf | 2411.19417 | @InProceedings{Karageorgiou_2025_CVPR,
author = {Karageorgiou, Dimitrios and Papadopoulos, Symeon and Kompatsiaris, Ioannis and Gavves, Efstratios},
title = {Any-Resolution AI-Generated Image Detection by Spectral Learning},
booktitle = {Proceedings of the Computer Vision and Pattern Recognition Conf... | Recent works have established that AI models introduce spectral artifacts into generated images and propose approaches for learning to capture them using labeled data. However, the significant differences in such artifacts among different generative models hinder these approaches from generalizing to generators not see... | [
0.009780025109648705,
-0.011967762373387814,
-0.02848108857870102,
0.030468564480543137,
0.039930522441864014,
0.010407908819615841,
0.02763238549232483,
0.0025567130651324987,
-0.03138909116387367,
-0.0664004236459732,
-0.032777708023786545,
0.02042836882174015,
-0.056874047964811325,
-0.... |
195 | DivPrune: Diversity-based Visual Token Pruning for Large Multimodal Models | [
"Saeed Ranjbar Alvar",
"Gursimran Singh",
"Mohammad Akbari",
"Yong Zhang"
] | https://openaccess.thecvf.com/content/CVPR2025/html/Alvar_DivPrune_Diversity-based_Visual_Token_Pruning_for_Large_Multimodal_Models_CVPR_2025_paper.html | https://openaccess.thecvf.com/content/CVPR2025/papers/Alvar_DivPrune_Diversity-based_Visual_Token_Pruning_for_Large_Multimodal_Models_CVPR_2025_paper.pdf | https://openaccess.thecvf.com/content/CVPR2025/supplemental/Alvar_DivPrune_Diversity-based_Visual_CVPR_2025_supplemental.pdf | 2503.02175 | @InProceedings{Alvar_2025_CVPR,
author = {Alvar, Saeed Ranjbar and Singh, Gursimran and Akbari, Mohammad and Zhang, Yong},
title = {DivPrune: Diversity-based Visual Token Pruning for Large Multimodal Models},
booktitle = {Proceedings of the Computer Vision and Pattern Recognition Conference (CVPR)},
... | Large Multimodal Models (LMMs) have emerged as powerful models capable of understanding various data modalities, including text, images, and videos. LMMs encode both text and visual data into tokens that are then combined and processed by an integrated Large Language Model (LLM). Including visual tokens substantially i... | [
0.014302181079983711,
-0.037540338933467865,
-0.01571974717080593,
0.04878861829638481,
0.012373623438179493,
0.052726857364177704,
0.01747792400419712,
0.020713740959763527,
-0.042656123638153076,
-0.03238119184970856,
-0.050816167145967484,
-0.006196395494043827,
-0.058586474508047104,
-... |
196 | Training Data Provenance Verification: Did Your Model Use Synthetic Data from My Generative Model for Training? | [
"Yuechen Xie",
"Jie Song",
"Huiqiong Wang",
"Mingli Song"
] | https://openaccess.thecvf.com/content/CVPR2025/html/Xie_Training_Data_Provenance_Verification_Did_Your_Model_Use_Synthetic_Data_CVPR_2025_paper.html | https://openaccess.thecvf.com/content/CVPR2025/papers/Xie_Training_Data_Provenance_Verification_Did_Your_Model_Use_Synthetic_Data_CVPR_2025_paper.pdf | https://openaccess.thecvf.com/content/CVPR2025/supplemental/Xie_Training_Data_Provenance_CVPR_2025_supplemental.zip | 2503.09122 | @InProceedings{Xie_2025_CVPR,
author = {Xie, Yuechen and Song, Jie and Wang, Huiqiong and Song, Mingli},
title = {Training Data Provenance Verification: Did Your Model Use Synthetic Data from My Generative Model for Training?},
booktitle = {Proceedings of the Computer Vision and Pattern Recognition C... | High-quality open-source text-to-image models have significantly lowered the threshold for obtaining photorealistic images, but they also pose potential risks of misuse. Specifically, suspects may use synthetic data generated by these generative models to train models for specific tasks without permission, when lacking real... | [
0.006260382942855358,
-0.04362904280424118,
-0.025748958811163902,
0.07102616131305695,
0.05924329161643982,
-0.008462403900921345,
0.029451558366417885,
-0.004750026855617762,
-0.0065918792970478535,
-0.03179343789815903,
-0.007328593172132969,
0.00898722279816866,
-0.07685695588588715,
0... |
197 | 3D-AVS: LiDAR-based 3D Auto-Vocabulary Segmentation | [
"Weijie Wei",
"Osman Ülger",
"Fatemeh Karimi Nejadasl",
"Theo Gevers",
"Martin R. Oswald"
] | https://openaccess.thecvf.com/content/CVPR2025/html/Wei_3D-AVS_LiDAR-based_3D_Auto-Vocabulary_Segmentation_CVPR_2025_paper.html | https://openaccess.thecvf.com/content/CVPR2025/papers/Wei_3D-AVS_LiDAR-based_3D_Auto-Vocabulary_Segmentation_CVPR_2025_paper.pdf | https://openaccess.thecvf.com/content/CVPR2025/supplemental/Wei_3D-AVS_LiDAR-based_3D_CVPR_2025_supplemental.pdf | null | @InProceedings{Wei_2025_CVPR,
author = {Wei, Weijie and \"Ulger, Osman and Nejadasl, Fatemeh Karimi and Gevers, Theo and Oswald, Martin R.},
title = {3D-AVS: LiDAR-based 3D Auto-Vocabulary Segmentation},
booktitle = {Proceedings of the Computer Vision and Pattern Recognition Conference (CVPR)},
m... | Open-vocabulary segmentation methods offer promising capabilities in detecting unseen object categories, but the categories must be known in advance and provided by a human, either via a text prompt or pre-labeled datasets, which limits their scalability. We propose 3D-AVS, a method for Auto-Vocabulary Segmentation of ... | [
0.008074485696852207,
0.00043203512905165553,
0.02923007681965828,
0.040127821266651154,
0.004466991871595383,
0.04127690568566322,
0.04207894951105118,
0.014847449027001858,
-0.03975813463330269,
-0.025393787771463394,
-0.05708645284175873,
-0.010062693618237972,
-0.057102132588624954,
-0... |
198 | STOP: Integrated Spatial-Temporal Dynamic Prompting for Video Understanding | [
"Zichen Liu",
"Kunlun Xu",
"Bing Su",
"Xu Zou",
"Yuxin Peng",
"Jiahuan Zhou"
] | https://openaccess.thecvf.com/content/CVPR2025/html/Liu_STOP_Integrated_Spatial-Temporal_Dynamic_Prompting_for_Video_Understanding_CVPR_2025_paper.html | https://openaccess.thecvf.com/content/CVPR2025/papers/Liu_STOP_Integrated_Spatial-Temporal_Dynamic_Prompting_for_Video_Understanding_CVPR_2025_paper.pdf | https://openaccess.thecvf.com/content/CVPR2025/supplemental/Liu_STOP_Integrated_Spatial-Temporal_CVPR_2025_supplemental.pdf | 2503.15973 | @InProceedings{Liu_2025_CVPR,
author = {Liu, Zichen and Xu, Kunlun and Su, Bing and Zou, Xu and Peng, Yuxin and Zhou, Jiahuan},
title = {STOP: Integrated Spatial-Temporal Dynamic Prompting for Video Understanding},
booktitle = {Proceedings of the Computer Vision and Pattern Recognition Conference (CV... | Pre-trained on vast numbers of image-text pairs, vision-language models like CLIP have demonstrated promising zero-shot generalization across numerous image-based tasks. However, extending these capabilities to video tasks remains challenging due to limited labeled video data and high training costs. Recent video prompting ... | [
0.04022914916276932,
-0.03031485714018345,
0.0042066569440066814,
0.04180704802274704,
0.014326759614050388,
0.01871481165289879,
0.023321084678173065,
0.017416389659047127,
-0.03965326026082039,
-0.02972273714840412,
-0.04299192875623703,
-0.006076018325984478,
-0.03573630377650261,
-0.00... |
199 | TimeTracker: Event-based Continuous Point Tracking for Video Frame Interpolation with Non-linear Motion | [
"Haoyue Liu",
"Jinghan Xu",
"Yi Chang",
"Hanyu Zhou",
"Haozhi Zhao",
"Lin Wang",
"Luxin Yan"
] | https://openaccess.thecvf.com/content/CVPR2025/html/Liu_TimeTracker_Event-based_Continuous_Point_Tracking_for_Video_Frame_Interpolation_with_CVPR_2025_paper.html | https://openaccess.thecvf.com/content/CVPR2025/papers/Liu_TimeTracker_Event-based_Continuous_Point_Tracking_for_Video_Frame_Interpolation_with_CVPR_2025_paper.pdf | https://openaccess.thecvf.com/content/CVPR2025/supplemental/Liu_TimeTracker_Event-based_Continuous_CVPR_2025_supplemental.pdf | 2505.03116 | @InProceedings{Liu_2025_CVPR,
author = {Liu, Haoyue and Xu, Jinghan and Chang, Yi and Zhou, Hanyu and Zhao, Haozhi and Wang, Lin and Yan, Luxin},
title = {TimeTracker: Event-based Continuous Point Tracking for Video Frame Interpolation with Non-linear Motion},
booktitle = {Proceedings of the Computer... | Video frame interpolation (VFI) that leverages the bio-inspired event cameras as guidance has recently shown better performance and memory efficiency than the frame-based methods, thanks to the event cameras' advantages, such as high temporal resolution. A hurdle for event-based VFI is how to effectively deal with non-... | [
0.055949125438928604,
-0.009078231640160084,
0.022796303033828735,
0.018173208460211754,
0.036382030695676804,
0.03491797298192978,
0.02102738246321678,
0.02680969052016735,
-0.04025480896234512,
-0.07015760987997055,
0.007429694756865501,
-0.03414834290742874,
-0.02929755300283432,
-0.007... |