Dataset viewer preview of ai-conferences/CVPR2025 (auto-converted to Parquet): CVPR 2025 accepted papers with titles, authors, links, BibTeX entries, abstracts, and 768-dimensional embeddings.
column      dtype          range / lengths
paper_id    uint32         0 .. 2.87k
title       string         15 .. 149 chars
authors     sequence       1 .. 69 entries
cvf_url     string         94 .. 199 chars
pdf_url     string         95 .. 200 chars
supp_url    string         100 .. 148 chars
arxiv_id    string         10 chars (fixed)
bibtex      large_string   285 .. 1.82k chars
abstract    large_string   547 .. 2.44k chars
embedding   sequence       768 floats (fixed)
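The schema above can be sketched as a Python record. This is a minimal illustration, not an official type from the dataset: field names come from the viewer preview, while the Python types and the `CvprPaper` class itself are assumptions.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class CvprPaper:
    """Hypothetical record mirroring the viewer's column schema."""
    paper_id: int                # uint32 in the source, 0 .. ~2870
    title: str
    authors: list[str]           # 1 .. 69 names per paper
    cvf_url: str
    pdf_url: str
    supp_url: Optional[str]      # null when there is no supplemental file
    arxiv_id: Optional[str]      # fixed 10-char id, e.g. "2504.06120"
    bibtex: str
    abstract: str
    embedding: list[float]       # fixed-length 768-d vector

    def validate(self) -> None:
        # Enforce the fixed-length constraints visible in the schema.
        assert len(self.embedding) == 768, "embedding must be 768-d"
        if self.arxiv_id is not None:
            assert len(self.arxiv_id) == 10, "arXiv ids here are 10 chars"

# Populate from row 5 of the preview (bibtex/abstract shortened here).
row = CvprPaper(
    paper_id=5,
    title="Hyperbolic Category Discovery",
    authors=["Yuanpei Liu", "Zhenqi He", "Kai Han"],
    cvf_url="https://openaccess.thecvf.com/content/CVPR2025/html/Liu_Hyperbolic_Category_Discovery_CVPR_2025_paper.html",
    pdf_url="https://openaccess.thecvf.com/content/CVPR2025/papers/Liu_Hyperbolic_Category_Discovery_CVPR_2025_paper.pdf",
    supp_url="https://openaccess.thecvf.com/content/CVPR2025/supplemental/Liu_Hyperbolic_Category_Discovery_CVPR_2025_supplemental.pdf",
    arxiv_id="2504.06120",
    bibtex="@InProceedings{Liu_2025_CVPR, ...}",
    abstract="Generalized Category Discovery (GCD) ...",
    embedding=[0.0] * 768,
)
row.validate()
```

Nullable columns (`supp_url`, `arxiv_id`) are modeled as `Optional[str]`, matching the `null` values visible in rows 0 and 1 below.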
paper_id: 0
title: Deterministic Image-to-Image Translation via Denoising Brownian Bridge Models with Dual Approximators
authors: [ "Bohan Xiao", "Peiyong Wang", "Qisheng He", "Ming Dong" ]
cvf_url: https://openaccess.thecvf.com/content/CVPR2025/html/Xiao_Deterministic_Image-to-Image_Translation_via_Denoising_Brownian_Bridge_Models_with_Dual_CVPR_2025_paper.html
pdf_url: https://openaccess.thecvf.com/content/CVPR2025/papers/Xiao_Deterministic_Image-to-Image_Translation_via_Denoising_Brownian_Bridge_Models_with_Dual_CVPR_2025_paper.pdf
supp_url: null
arxiv_id: null
bibtex: @InProceedings{Xiao_2025_CVPR, author = {Xiao, Bohan and Wang, Peiyong and He, Qisheng and Dong, Ming}, title = {Deterministic Image-to-Image Translation via Denoising Brownian Bridge Models with Dual Approximators}, booktitle = {Proceedings of the Computer Vision and Pattern Recognition Conference (...
abstract: Image-to-Image (I2I) translation involves converting an image from one domain to another. Deterministic I2I translation, such as in image super-resolution, extends this concept by guaranteeing that each input generates a consistent and predictable output, closely matching the ground truth (GT) with high fidelity....
embedding: [ 0.010270824655890465, 0.0024396288208663464, 0.0028782119043171406, 0.033904898911714554, 0.044995155185461044, 0.059894807636737823, 0.004217842593789101, -0.0006209457642398775, -0.016814827919006348, -0.06774745136499405, 0.029633665457367897, -0.009677249938249588, -0.032301198691129684,...
paper_id: 1
title: Towards Source-Free Machine Unlearning
authors: [ "Sk Miraj Ahmed", "Umit Yigit Basaran", "Dripta S. Raychaudhuri", "Arindam Dutta", "Rohit Kundu", "Fahim Faisal Niloy", "Basak Guler", "Amit K. Roy-Chowdhury" ]
cvf_url: https://openaccess.thecvf.com/content/CVPR2025/html/Ahmed_Towards_Source-Free_Machine_Unlearning_CVPR_2025_paper.html
pdf_url: https://openaccess.thecvf.com/content/CVPR2025/papers/Ahmed_Towards_Source-Free_Machine_Unlearning_CVPR_2025_paper.pdf
supp_url: https://openaccess.thecvf.com/content/CVPR2025/supplemental/Ahmed_Towards_Source-Free_Machine_CVPR_2025_supplemental.pdf
arxiv_id: null
bibtex: @InProceedings{Ahmed_2025_CVPR, author = {Ahmed, Sk Miraj and Basaran, Umit Yigit and Raychaudhuri, Dripta S. and Dutta, Arindam and Kundu, Rohit and Niloy, Fahim Faisal and Guler, Basak and Roy-Chowdhury, Amit K.}, title = {Towards Source-Free Machine Unlearning}, booktitle = {Proceedings of the Com...
abstract: As machine learning becomes more pervasive and data privacy regulations evolve, the ability to remove private or copyrighted information from trained models is becoming an increasingly critical requirement. Existing unlearning methods often rely on the assumption of having access to the entire training dataset during th...
embedding: [ -0.010323857888579369, -0.03624191880226135, -0.015853341668844223, 0.062099386006593704, 0.052606597542762756, -0.014147561974823475, 0.03671172261238098, 0.002176909474655986, -0.035047680139541626, -0.005467189475893974, -0.01382327452301979, 0.03566297888755798, -0.05731732398271561, 0...
paper_id: 2
title: Uni4D: Unifying Visual Foundation Models for 4D Modeling from a Single Video
authors: [ "David Yifan Yao", "Albert J. Zhai", "Shenlong Wang" ]
cvf_url: https://openaccess.thecvf.com/content/CVPR2025/html/Yao_Uni4D_Unifying_Visual_Foundation_Models_for_4D_Modeling_from_a_CVPR_2025_paper.html
pdf_url: https://openaccess.thecvf.com/content/CVPR2025/papers/Yao_Uni4D_Unifying_Visual_Foundation_Models_for_4D_Modeling_from_a_CVPR_2025_paper.pdf
supp_url: https://openaccess.thecvf.com/content/CVPR2025/supplemental/Yao_Uni4D_Unifying_Visual_CVPR_2025_supplemental.zip
arxiv_id: 2503.21761
bibtex: @InProceedings{Yao_2025_CVPR, author = {Yao, David Yifan and Zhai, Albert J. and Wang, Shenlong}, title = {Uni4D: Unifying Visual Foundation Models for 4D Modeling from a Single Video}, booktitle = {Proceedings of the Computer Vision and Pattern Recognition Conference (CVPR)}, month = {June},...
abstract: This paper presents a unified approach to understanding dynamic scenes from casual videos. Large pretrained vision foundation models, such as vision-language, video depth prediction, motion tracking, and segmentation models, offer promising capabilities. However, training a single model for comprehensive 4D understandi...
embedding: [ 0.030675409361720085, -0.02345341630280018, 0.019434453919529915, 0.03466026112437248, 0.04201898351311684, 0.03475534915924072, 0.00616192864254117, 0.025687163695693016, -0.03886543959379196, -0.04939538985490799, -0.0027504952158778906, -0.021865446120500565, -0.05978608503937721, 0.019...
paper_id: 3
title: DynScene: Scalable Generation of Dynamic Robotic Manipulation Scenes for Embodied AI
authors: [ "Sangmin Lee", "Sungyong Park", "Heewon Kim" ]
cvf_url: https://openaccess.thecvf.com/content/CVPR2025/html/Lee_DynScene_Scalable_Generation_of_Dynamic_Robotic_Manipulation_Scenes_for_Embodied_CVPR_2025_paper.html
pdf_url: https://openaccess.thecvf.com/content/CVPR2025/papers/Lee_DynScene_Scalable_Generation_of_Dynamic_Robotic_Manipulation_Scenes_for_Embodied_CVPR_2025_paper.pdf
supp_url: https://openaccess.thecvf.com/content/CVPR2025/supplemental/Lee_DynScene_Scalable_Generation_CVPR_2025_supplemental.pdf
arxiv_id: null
bibtex: @InProceedings{Lee_2025_CVPR, author = {Lee, Sangmin and Park, Sungyong and Kim, Heewon}, title = {DynScene: Scalable Generation of Dynamic Robotic Manipulation Scenes for Embodied AI}, booktitle = {Proceedings of the Computer Vision and Pattern Recognition Conference (CVPR)}, month = {June},...
abstract: Robotic manipulation in embodied AI critically depends on large-scale, high-quality datasets that reflect realistic object interactions and physical dynamics. However, existing data collection pipelines are often slow, expensive, and heavily reliant on manual efforts. We present DynScene, a diffusion-based framework fo...
embedding: [ 0.001775662531144917, -0.02757909707725048, -0.04702160134911537, 0.055267706513404846, 0.034457482397556305, 0.05256647244095802, 0.00964529998600483, 0.0011482861591503024, -0.026132142171263695, -0.033943772315979004, -0.04567909240722656, -0.009325932711362839, -0.06281141191720963, -0...
paper_id: 4
title: DiffLocks: Generating 3D Hair from a Single Image using Diffusion Models
authors: [ "Radu Alexandru Rosu", "Keyu Wu", "Yao Feng", "Youyi Zheng", "Michael J. Black" ]
cvf_url: https://openaccess.thecvf.com/content/CVPR2025/html/Rosu_DiffLocks_Generating_3D_Hair_from_a_Single_Image_using_Diffusion_CVPR_2025_paper.html
pdf_url: https://openaccess.thecvf.com/content/CVPR2025/papers/Rosu_DiffLocks_Generating_3D_Hair_from_a_Single_Image_using_Diffusion_CVPR_2025_paper.pdf
supp_url: https://openaccess.thecvf.com/content/CVPR2025/supplemental/Rosu_DiffLocks_Generating_3D_CVPR_2025_supplemental.zip
arxiv_id: 2505.06166
bibtex: @InProceedings{Rosu_2025_CVPR, author = {Rosu, Radu Alexandru and Wu, Keyu and Feng, Yao and Zheng, Youyi and Black, Michael J.}, title = {DiffLocks: Generating 3D Hair from a Single Image using Diffusion Models}, booktitle = {Proceedings of the Computer Vision and Pattern Recognition Conference (CVP...
abstract: We address the task of generating 3D hair geometry from a single image, which is challenging due to the diversity of hairstyles and the lack of paired image-to-3D hair data. Previous methods are primarily trained on synthetic data and cope with the limited amount of such data by using low-dimensional intermediate repre...
embedding: [ 0.033659469336271286, -0.024617819115519524, -0.020371824502944946, 0.055148158222436905, 0.04397740587592125, 0.03585035353899002, 0.020146572962403297, -0.014623479917645454, -0.013054507784545422, -0.06127414107322693, -0.017195377498865128, -0.037464484572410583, -0.060723982751369476, ...
paper_id: 5
title: Hyperbolic Category Discovery
authors: [ "Yuanpei Liu", "Zhenqi He", "Kai Han" ]
cvf_url: https://openaccess.thecvf.com/content/CVPR2025/html/Liu_Hyperbolic_Category_Discovery_CVPR_2025_paper.html
pdf_url: https://openaccess.thecvf.com/content/CVPR2025/papers/Liu_Hyperbolic_Category_Discovery_CVPR_2025_paper.pdf
supp_url: https://openaccess.thecvf.com/content/CVPR2025/supplemental/Liu_Hyperbolic_Category_Discovery_CVPR_2025_supplemental.pdf
arxiv_id: 2504.06120
bibtex: @InProceedings{Liu_2025_CVPR, author = {Liu, Yuanpei and He, Zhenqi and Han, Kai}, title = {Hyperbolic Category Discovery}, booktitle = {Proceedings of the Computer Vision and Pattern Recognition Conference (CVPR)}, month = {June}, year = {2025}, pages = {9891-9900} }
abstract: Generalized Category Discovery (GCD) is an intriguing open-world problem that has garnered increasing attention. Given a dataset that includes both labelled and unlabelled images, GCD aims to categorize all images in the unlabelled subset, regardless of whether they belong to known or unknown classes. In GCD, the commo...
embedding: [ -0.008779085241258144, 0.007251635193824768, 0.010085645131766796, 0.03387803956866264, 0.038696590811014175, -0.004796986933797598, 0.014303311705589294, -0.007262459024786949, -0.03424140065908432, -0.03717169165611267, -0.013507319614291191, -0.006132584996521473, -0.06985427439212799, ...
paper_id: 6
title: The Language of Motion: Unifying Verbal and Non-verbal Language of 3D Human Motion
authors: [ "Changan Chen", "Juze Zhang", "Shrinidhi K. Lakshmikanth", "Yusu Fang", "Ruizhi Shao", "Gordon Wetzstein", "Li Fei-Fei", "Ehsan Adeli" ]
cvf_url: https://openaccess.thecvf.com/content/CVPR2025/html/Chen_The_Language_of_Motion_Unifying_Verbal_and_Non-verbal_Language_of_CVPR_2025_paper.html
pdf_url: https://openaccess.thecvf.com/content/CVPR2025/papers/Chen_The_Language_of_Motion_Unifying_Verbal_and_Non-verbal_Language_of_CVPR_2025_paper.pdf
supp_url: https://openaccess.thecvf.com/content/CVPR2025/supplemental/Chen_The_Language_of_CVPR_2025_supplemental.zip
arxiv_id: 2412.10523
bibtex: @InProceedings{Chen_2025_CVPR, author = {Chen, Changan and Zhang, Juze and Lakshmikanth, Shrinidhi K. and Fang, Yusu and Shao, Ruizhi and Wetzstein, Gordon and Fei-Fei, Li and Adeli, Ehsan}, title = {The Language of Motion: Unifying Verbal and Non-verbal Language of 3D Human Motion}, booktitle = {Pro...
abstract: Human communication is inherently multimodal, involving a combination of verbal and non-verbal cues such as speech, facial expressions, and body gestures. Modeling these behaviors is essential for understanding human interaction and for creating virtual characters that can communicate naturally in applications like gam...
embedding: [ -0.008330456912517548, -0.001010035164654255, -0.011137885972857475, 0.03575237840414047, 0.022912772372364998, 0.026017220690846443, 0.03895331546664238, 0.03074020892381668, -0.03074241802096367, -0.0391252376139164, -0.04746473208069801, 0.002866833470761776, -0.0634905993938446, -0.020...
paper_id: 7
title: CALICO: Part-Focused Semantic Co-Segmentation with Large Vision-Language Models
authors: [ "Kiet A. Nguyen", "Adheesh Juvekar", "Tianjiao Yu", "Muntasir Wahed", "Ismini Lourentzou" ]
cvf_url: https://openaccess.thecvf.com/content/CVPR2025/html/Nguyen_CALICO_Part-Focused_Semantic_Co-Segmentation_with_Large_Vision-Language_Models_CVPR_2025_paper.html
pdf_url: https://openaccess.thecvf.com/content/CVPR2025/papers/Nguyen_CALICO_Part-Focused_Semantic_Co-Segmentation_with_Large_Vision-Language_Models_CVPR_2025_paper.pdf
supp_url: https://openaccess.thecvf.com/content/CVPR2025/supplemental/Nguyen_CALICO_Part-Focused_Semantic_CVPR_2025_supplemental.pdf
arxiv_id: 2412.19331
bibtex: @InProceedings{Nguyen_2025_CVPR, author = {Nguyen, Kiet A. and Juvekar, Adheesh and Yu, Tianjiao and Wahed, Muntasir and Lourentzou, Ismini}, title = {CALICO: Part-Focused Semantic Co-Segmentation with Large Vision-Language Models}, booktitle = {Proceedings of the Computer Vision and Pattern Recognit...
abstract: Recent advances in Large Vision-Language Models (LVLMs) have enabled general-purpose vision tasks through visual instruction tuning. While existing LVLMs can generate segmentation masks from text prompts for single images, they struggle with segmentation-grounded reasoning across images, especially at finer granulariti...
embedding: [ -0.0027968217618763447, -0.01403918769210577, 0.008400198072195053, 0.03367597982287407, 0.018971310928463936, 0.043894533067941666, 0.0071144611574709415, 0.04817896708846092, -0.002569961128756404, -0.009381590411067009, -0.06158542260527611, 0.006436770316213369, -0.053676787763834, -0....
paper_id: 8
title: Task Preference Optimization: Improving Multimodal Large Language Models with Vision Task Alignment
authors: [ "Ziang Yan", "Zhilin Li", "Yinan He", "Chenting Wang", "Kunchang Li", "Xinhao Li", "Xiangyu Zeng", "Zilei Wang", "Yali Wang", "Yu Qiao", "Limin Wang", "Yi Wang" ]
cvf_url: https://openaccess.thecvf.com/content/CVPR2025/html/Yan_Task_Preference_Optimization_Improving_Multimodal_Large_Language_Models_with_Vision_CVPR_2025_paper.html
pdf_url: https://openaccess.thecvf.com/content/CVPR2025/papers/Yan_Task_Preference_Optimization_Improving_Multimodal_Large_Language_Models_with_Vision_CVPR_2025_paper.pdf
supp_url: https://openaccess.thecvf.com/content/CVPR2025/supplemental/Yan_Task_Preference_Optimization_CVPR_2025_supplemental.pdf
arxiv_id: 2412.19326
bibtex: @InProceedings{Yan_2025_CVPR, author = {Yan, Ziang and Li, Zhilin and He, Yinan and Wang, Chenting and Li, Kunchang and Li, Xinhao and Zeng, Xiangyu and Wang, Zilei and Wang, Yali and Qiao, Yu and Wang, Limin and Wang, Yi}, title = {Task Preference Optimization: Improving Multimodal Large Language Models...
abstract: Current multimodal large language models (MLLMs) struggle with fine-grained or precise understanding of visuals although they give comprehensive perception and reasoning in a spectrum of vision applications. Recent studies either develop tool-using or unify specific visual tasks into the autoregressive framework, often...
embedding: [ 0.03374236449599266, -0.002008701441809535, 0.013983473181724548, 0.022560421377420425, 0.019776267930865288, 0.016383320093154907, 0.020873641595244408, 0.038507696241140366, -0.0330033041536808, -0.032857127487659454, -0.028700027614831924, 0.028389107435941696, -0.08374718576669693, -0....
paper_id: 9
title: Cross-modal Causal Relation Alignment for Video Question Grounding
authors: [ "Weixing Chen", "Yang Liu", "Binglin Chen", "Jiandong Su", "Yongsen Zheng", "Liang Lin" ]
cvf_url: https://openaccess.thecvf.com/content/CVPR2025/html/Chen_Cross-modal_Causal_Relation_Alignment_for_Video_Question_Grounding_CVPR_2025_paper.html
pdf_url: https://openaccess.thecvf.com/content/CVPR2025/papers/Chen_Cross-modal_Causal_Relation_Alignment_for_Video_Question_Grounding_CVPR_2025_paper.pdf
supp_url: https://openaccess.thecvf.com/content/CVPR2025/supplemental/Chen_Cross-modal_Causal_Relation_CVPR_2025_supplemental.pdf
arxiv_id: 2503.07635
bibtex: @InProceedings{Chen_2025_CVPR, author = {Chen, Weixing and Liu, Yang and Chen, Binglin and Su, Jiandong and Zheng, Yongsen and Lin, Liang}, title = {Cross-modal Causal Relation Alignment for Video Question Grounding}, booktitle = {Proceedings of the Computer Vision and Pattern Recognition Conference ...
abstract: Video question grounding (VideoQG) requires models to answer the questions and simultaneously infer the relevant video segments to support the answers. However, existing VideoQG methods usually suffer from spurious cross-modal correlations, leading to a failure to identify the dominant visual scenes that align with the...
embedding: [ 0.037093985825777054, -0.008603223599493504, 0.015463977120816708, 0.06353283673524857, 0.016608867794275284, 0.01988270692527294, 0.036515187472105026, 0.0367840901017189, -0.020836116746068, -0.021268067881464958, -0.04050051420927048, 0.03167286515235901, -0.06421829015016556, -0.003429...
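A natural use of the 768-d `embedding` column is semantic search over paper abstracts: rank rows by cosine similarity to a query vector. The sketch below uses only the standard library; the three toy vectors are made up for illustration (truncated to 4 dimensions for clarity, where real rows carry 768 entries).

```python
import math

def cosine(u: list[float], v: list[float]) -> float:
    """Cosine similarity between two equal-length vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

# Stand-ins for rows' embedding vectors (hypothetical values).
corpus = {
    "paper A": [0.9, 0.1, 0.0, 0.2],
    "paper B": [0.1, 0.8, 0.3, 0.0],
    "paper C": [0.85, 0.15, 0.05, 0.25],
}
query = [1.0, 0.0, 0.0, 0.2]

# Titles sorted by similarity to the query, most similar first.
ranked = sorted(corpus, key=lambda k: cosine(query, corpus[k]), reverse=True)
print(ranked[0])  # prints "paper A"
```

With the real dataset, `query` would be produced by the same embedding model that generated the `embedding` column; which model that is cannot be determined from this preview.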
README.md exists, but its content is empty.
Downloads last month: 23

Spaces using ai-conferences/CVPR2025: 2