Dataset Viewer
Auto-converted to Parquet
paper_id: uint32
title: string
authors: list
cvf_url: string
pdf_url: string
supp_url: string
bibtex: string
abstract: large_string
arxiv_id: string
comment: string
github: string
project_page: string
space_ids: list
model_ids: list
dataset_ids: list
embedding: list
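The column listing above can be checked mechanically. Below is a minimal sketch of a row validator; the mapping from Arrow dtypes (`uint32`, `string`, `large_string`, `list`) to Python types is an illustration, not the dataset's official loader, and the nullable columns are inferred from the rows where `null` appears in the preview.

```python
# Hypothetical validator for one row of the preview's schema.
# Type mapping is an assumption: uint32 -> int, string/large_string -> str, list -> list.

SCHEMA = {
    "paper_id": int,       # uint32
    "title": str,
    "authors": list,
    "cvf_url": str,
    "pdf_url": str,
    "supp_url": str,
    "bibtex": str,
    "abstract": str,       # large_string
    "arxiv_id": str,       # nullable in the preview rows
    "comment": str,        # nullable
    "github": str,         # nullable
    "project_page": str,   # nullable
    "space_ids": list,
    "model_ids": list,
    "dataset_ids": list,
    "embedding": list,     # list of floats
}

def validate_row(row: dict) -> list:
    """Return a list of schema violations; an empty list means the row conforms."""
    errors = []
    for col, typ in SCHEMA.items():
        if col not in row:
            errors.append(f"missing column: {col}")
        elif row[col] is not None and not isinstance(row[col], typ):
            errors.append(f"{col}: expected {typ.__name__}, got {type(row[col]).__name__}")
    return errors
```

Usage: run `validate_row` over each record before indexing; `None` is accepted in any column, matching the `null` cells in the preview.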
paper_id: 0
title: kh: Symmetry Understanding of 3D Shapes via Chirality Disentanglement
authors: [ "Weikang Wang", "Tobias Weißberg", "Nafie El Amrani", "Florian Bernard" ]
cvf_url: https://openaccess.thecvf.com/content/ICCV2025/html/Wang_kh_Symmetry_Understanding_of_3D_Shapes_via_Chirality_Disentanglement_ICCV_2025_paper.html
pdf_url: https://openaccess.thecvf.com/content/ICCV2025/papers/Wang_kh_Symmetry_Understanding_of_3D_Shapes_via_Chirality_Disentanglement_ICCV_2025_paper.pdf
supp_url: https://openaccess.thecvf.com/content/ICCV2025/supplemental/Wang_kh_Symmetry_Understanding_ICCV_2025_supplemental.pdf
bibtex: @InProceedings{Wang_2025_ICCV, author = {Wang, Weikang and Wei{\ss}berg, Tobias and El Amrani, Nafie and Bernard, Florian}, title = {kh: Symmetry Understanding of 3D Shapes via Chirality Disentanglement}, booktitle = {Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV)}, ...
abstract: Chirality information (i.e. information that allows distinguishing left from right) is ubiquitous for various data modes in computer vision, including images, videos, point clouds, and meshes. While chirality has been extensively studied in the image domain, its exploration in shape analysis (such as point clouds and m...
arxiv_id: null
comment: null
github: null
project_page: null
space_ids: []
model_ids: []
dataset_ids: []
embedding: [ 0.014335653744637966, 0.0011623065220192075, 0.010857202112674713, 0.0178863313049078, 0.020875804126262665, 0.03718924894928932, 0.006471932865679264, 0.011918001808226109, -0.014221075922250748, -0.05570879206061363, -0.011482015252113342, -0.04218422994017601, -0.05513007193803787, 0.05...
paper_id: 1
title: Efficient Adaptation of Pre-trained Vision Transformer underpinned by Approximately Orthogonal Fine-Tuning Strategy
authors: [ "Yiting Yang", "Hao Luo", "Yuan Sun", "Qingsen Yan", "Haokui Zhang", "Wei Dong", "Guoqing Wang", "Peng Wang", "Yang Yang", "Hengtao Shen" ]
cvf_url: https://openaccess.thecvf.com/content/ICCV2025/html/Yang_Efficient_Adaptation_of_Pre-trained_Vision_Transformer_underpinned_by_Approximately_Orthogonal_ICCV_2025_paper.html
pdf_url: https://openaccess.thecvf.com/content/ICCV2025/papers/Yang_Efficient_Adaptation_of_Pre-trained_Vision_Transformer_underpinned_by_Approximately_Orthogonal_ICCV_2025_paper.pdf
supp_url: https://openaccess.thecvf.com/content/ICCV2025/supplemental/Yang_Efficient_Adaptation_of_ICCV_2025_supplemental.zip
bibtex: @InProceedings{Yang_2025_ICCV, author = {Yang, Yiting and Luo, Hao and Sun, Yuan and Yan, Qingsen and Zhang, Haokui and Dong, Wei and Wang, Guoqing and Wang, Peng and Yang, Yang and Shen, Hengtao}, title = {Efficient Adaptation of Pre-trained Vision Transformer underpinned by Approximately Orthogonal Fin...
abstract: A prevalent approach in Parameter-Efficient Fine-Tuning (PEFT) of pre-trained Vision Transformers (ViT) involves freezing the majority of the backbone parameters and solely learning low-rank adaptation weight matrices to accommodate downstream tasks. These low-rank matrices are commonly derived through the multiplicati...
arxiv_id: 2507.13260
comment: This paper is accepted by ICCV 2025
github: null
project_page: null
space_ids: []
model_ids: []
dataset_ids: []
embedding: [ -0.01595032773911953, -0.031848710030317307, 0.04327205568552017, 0.010211489163339138, 0.03500578552484512, 0.028599634766578674, 0.016466189175844193, 0.005730366334319115, -0.024307364597916603, -0.037106577306985855, -0.027496790513396263, 0.014403547160327435, -0.07350608706474304, -0...
paper_id: 2
title: MM-IFEngine: Towards Multimodal Instruction Following
authors: [ "Shengyuan Ding", "Shenxi Wu", "Xiangyu Zhao", "Yuhang Zang", "Haodong Duan", "Xiaoyi Dong", "Pan Zhang", "Yuhang Cao", "Dahua Lin", "Jiaqi Wang" ]
cvf_url: https://openaccess.thecvf.com/content/ICCV2025/html/Ding_MM-IFEngine_Towards_Multimodal_Instruction_Following_ICCV_2025_paper.html
pdf_url: https://openaccess.thecvf.com/content/ICCV2025/papers/Ding_MM-IFEngine_Towards_Multimodal_Instruction_Following_ICCV_2025_paper.pdf
supp_url: https://openaccess.thecvf.com/content/ICCV2025/supplemental/Ding_MM-IFEngine_Towards_Multimodal_ICCV_2025_supplemental.pdf
bibtex: @InProceedings{Ding_2025_ICCV, author = {Ding, Shengyuan and Wu, Shenxi and Zhao, Xiangyu and Zang, Yuhang and Duan, Haodong and Dong, Xiaoyi and Zhang, Pan and Cao, Yuhang and Lin, Dahua and Wang, Jiaqi}, title = {MM-IFEngine: Towards Multimodal Instruction Following}, booktitle = {Proceedings of th...
abstract: The Instruction Following (IF) ability measures how well Multi-modal Large Language Models (MLLMs) understand exactly what users are telling them and doing it right. Existing multimodal instruction following training data is scarce, the benchmarks are simple with atomic instructions, and the evaluation strategies are im...
arxiv_id: null
comment: null
github: null
project_page: null
space_ids: []
model_ids: []
dataset_ids: []
embedding: [ -0.0023685619235038757, 0.0012950691161677241, -0.00007703046867391095, 0.023427100852131844, 0.03204823657870293, -0.003080470021814108, 0.04342794418334961, 0.019115056842565536, -0.04817984253168106, -0.003305742284283042, -0.03213934972882271, 0.06202574819326401, -0.0503186360001564, ...
paper_id: 3
title: Who is a Better Talker: Subjective and Objective Quality Assessment for AI-Generated Talking Heads
authors: [ "Yingjie Zhou", "Jiezhang Cao", "Zicheng Zhang", "Farong Wen", "Yanwei Jiang", "Jun Jia", "Xiaohong Liu", "Xiongkuo Min", "Guangtao Zhai" ]
cvf_url: https://openaccess.thecvf.com/content/ICCV2025/html/Zhou_Who_is_a_Better_Talker_Subjective_and_Objective_Quality_Assessment_ICCV_2025_paper.html
pdf_url: https://openaccess.thecvf.com/content/ICCV2025/papers/Zhou_Who_is_a_Better_Talker_Subjective_and_Objective_Quality_Assessment_ICCV_2025_paper.pdf
supp_url: https://openaccess.thecvf.com/content/ICCV2025/supplemental/Zhou_Who_is_a_ICCV_2025_supplemental.pdf
bibtex: @InProceedings{Zhou_2025_ICCV, author = {Zhou, Yingjie and Cao, Jiezhang and Zhang, Zicheng and Wen, Farong and Jiang, Yanwei and Jia, Jun and Liu, Xiaohong and Min, Xiongkuo and Zhai, Guangtao}, title = {Who is a Better Talker: Subjective and Objective Quality Assessment for AI-Generated Talking Heads},...
abstract: Speech-driven methods for portraits are figuratively known as "Talkers" because of their capability to synthesize speaking mouth shapes and facial movements. Especially with the rapid development of the Text-to-Image (T2I) models, AI-Generated Talking Heads (AGTHs) have gradually become an emerging digital human media....
arxiv_id: 2507.23343
comment: null
github: https://github.com/zyj-2000/Talker
project_page: null
space_ids: []
model_ids: []
dataset_ids: []
embedding: [ 0.010329356417059898, -0.020546993240714073, -0.0022839230950921774, 0.04020318016409874, 0.013103203848004341, 0.04535313695669174, 0.04665455222129822, 0.005283264443278313, -0.0022130522411316633, -0.050677791237831116, -0.039543699473142624, 0.031019918620586395, -0.0645647644996643, 0...
paper_id: 4
title: LayerAnimate: Layer-level Control for Animation
authors: [ "Yuxue Yang", "Lue Fan", "Zuzeng Lin", "Feng Wang", "Zhaoxiang Zhang" ]
cvf_url: https://openaccess.thecvf.com/content/ICCV2025/html/Yang_LayerAnimate_Layer-level_Control_for_Animation_ICCV_2025_paper.html
pdf_url: https://openaccess.thecvf.com/content/ICCV2025/papers/Yang_LayerAnimate_Layer-level_Control_for_Animation_ICCV_2025_paper.pdf
supp_url: https://openaccess.thecvf.com/content/ICCV2025/supplemental/Yang_LayerAnimate_Layer-level_Control_ICCV_2025_supplemental.pdf
bibtex: @InProceedings{Yang_2025_ICCV, author = {Yang, Yuxue and Fan, Lue and Lin, Zuzeng and Wang, Feng and Zhang, Zhaoxiang}, title = {LayerAnimate: Layer-level Control for Animation}, booktitle = {Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV)}, month = {October}, ...
abstract: Traditional animation production decomposes visual elements into discrete layers to enable independent processing for sketching, refining, coloring, and in-betweening. Existing anime generation video methods typically treat animation as a distinct data domain different from real-world videos, lacking fine-grained contr...
arxiv_id: 2501.08295
comment: Project page: https://layeranimate.github.io
github: null
project_page: https://layeranimate.github.io
space_ids: [ "IamCreateAI/LayerAnimate" ]
model_ids: [ "Yuppie1204/LayerAnimate-Mix" ]
dataset_ids: []
embedding: [ 0.014414518140256405, -0.036979883909225464, 0.005391916260123253, 0.015496291220188141, 0.028710853308439255, -0.0005574686801992357, -0.009399556554853916, -0.0020826796535402536, -0.026559410616755486, -0.051289815455675125, -0.032109182327985764, -0.019442444667220116, -0.017617933452129...
paper_id: 5
title: Towards a Unified Copernicus Foundation Model for Earth Vision
authors: [ "Yi Wang", "Zhitong Xiong", "Chenying Liu", "Adam J. Stewart", "Thomas Dujardin", "Nikolaos Ioannis Bountos", "Angelos Zavras", "Franziska Gerken", "Ioannis Papoutsis", "Laura Leal-Taixé", "Xiao Xiang Zhu" ]
cvf_url: https://openaccess.thecvf.com/content/ICCV2025/html/Wang_Towards_a_Unified_Copernicus_Foundation_Model_for_Earth_Vision_ICCV_2025_paper.html
pdf_url: https://openaccess.thecvf.com/content/ICCV2025/papers/Wang_Towards_a_Unified_Copernicus_Foundation_Model_for_Earth_Vision_ICCV_2025_paper.pdf
supp_url: https://openaccess.thecvf.com/content/ICCV2025/supplemental/Wang_Towards_a_Unified_ICCV_2025_supplemental.pdf
bibtex: @InProceedings{Wang_2025_ICCV, author = {Wang, Yi and Xiong, Zhitong and Liu, Chenying and Stewart, Adam J. and Dujardin, Thomas and Bountos, Nikolaos Ioannis and Zavras, Angelos and Gerken, Franziska and Papoutsis, Ioannis and Leal-Taix\'e, Laura and Zhu, Xiao Xiang}, title = {Towards a Unified Copernic...
abstract: Advances in Earth observation (EO) foundation models have unlocked the potential of big satellite data to learn generic representations from space, benefiting a wide range of downstream applications crucial to our planet. However, most existing efforts remain limited to fixed spectral sensors, focus solely on the Earth...
arxiv_id: 2503.11849
comment: Accepted to ICCV 2025. 33 pages, 34 figures
github: https://github.com/zhu-xlab/Copernicus-FM
project_page: null
space_ids: []
model_ids: [ "wangyi111/Copernicus-FM" ]
dataset_ids: [ "wangyi111/Copernicus-Pretrain" ]
embedding: [ 0.027766352519392967, -0.06929122656583786, 0.02689434587955475, 0.015745263546705246, 0.03621228039264679, 0.004305608570575714, -0.004965104162693024, 0.05807843804359436, -0.051789600402116776, -0.055493514984846115, -0.03125998377799988, 0.006975578144192696, -0.07972833514213562, 0.00...
paper_id: 6
title: ROADWork: A Dataset and Benchmark for Learning to Recognize, Observe, Analyze and Drive Through Work Zones
authors: [ "Anurag Ghosh", "Shen Zheng", "Robert Tamburo", "Khiem Vuong", "Juan Alvarez-Padilla", "Hailiang Zhu", "Michael Cardei", "Nicholas Dunn", "Christoph Mertz", "Srinivasa G. Narasimhan" ]
cvf_url: https://openaccess.thecvf.com/content/ICCV2025/html/Ghosh_ROADWork_A_Dataset_and_Benchmark_for_Learning_to_Recognize_Observe_ICCV_2025_paper.html
pdf_url: https://openaccess.thecvf.com/content/ICCV2025/papers/Ghosh_ROADWork_A_Dataset_and_Benchmark_for_Learning_to_Recognize_Observe_ICCV_2025_paper.pdf
supp_url: https://openaccess.thecvf.com/content/ICCV2025/supplemental/Ghosh_ROADWork_A_Dataset_ICCV_2025_supplemental.pdf
bibtex: @InProceedings{Ghosh_2025_ICCV, author = {Ghosh, Anurag and Zheng, Shen and Tamburo, Robert and Vuong, Khiem and Alvarez-Padilla, Juan and Zhu, Hailiang and Cardei, Michael and Dunn, Nicholas and Mertz, Christoph and Narasimhan, Srinivasa G.}, title = {ROADWork: A Dataset and Benchmark for Learning to Re...
abstract: Perceiving and autonomously navigating through work zones is a challenging and under-explored problem. Open datasets for this long-tailed scenario are scarce. We propose the ROADWork dataset to learn to recognize, observe, analyze, and drive through work zones. State-of-the-art foundation models fail when applied to wo...
arxiv_id: null
comment: null
github: null
project_page: null
space_ids: []
model_ids: []
dataset_ids: []
embedding: [ 0.055151909589767456, 0.015910230576992035, 0.020566178485751152, 0.03400871157646179, 0.033986639231443405, 0.01489538885653019, 0.047573987394571304, 0.043161697685718536, -0.0028116675093770027, -0.05412827059626579, -0.04244190827012062, 0.010571218095719814, -0.07279396802186966, -0.0...
paper_id: 7
title: Gradient Decomposition and Alignment for Incremental Object Detection
authors: [ "Wenlong Luo", "Shizhou Zhang", "De Cheng", "Yinghui Xing", "Guoqiang Liang", "Peng Wang", "Yanning Zhang" ]
cvf_url: https://openaccess.thecvf.com/content/ICCV2025/html/Luo_Gradient_Decomposition_and_Alignment_for_Incremental_Object_Detection_ICCV_2025_paper.html
pdf_url: https://openaccess.thecvf.com/content/ICCV2025/papers/Luo_Gradient_Decomposition_and_Alignment_for_Incremental_Object_Detection_ICCV_2025_paper.pdf
supp_url: https://openaccess.thecvf.com/content/ICCV2025/supplemental/Luo_Gradient_Decomposition_and_ICCV_2025_supplemental.pdf
bibtex: @InProceedings{Luo_2025_ICCV, author = {Luo, Wenlong and Zhang, Shizhou and Cheng, De and Xing, Yinghui and Liang, Guoqiang and Wang, Peng and Zhang, Yanning}, title = {Gradient Decomposition and Alignment for Incremental Object Detection}, booktitle = {Proceedings of the IEEE/CVF International Confe...
abstract: Incremental object detection (IOD) is crucial for enabling AI systems to continuously learn new object classes over time while retaining knowledge of previously learned categories, allowing the model to adapt to dynamic environments without forgetting prior information. Existing IOD methods primarily employ knowledge distil...
arxiv_id: null
comment: null
github: null
project_page: null
space_ids: []
model_ids: []
dataset_ids: []
embedding: [ 0.0035268301144242287, -0.0004766414931509644, 0.004043825902044773, 0.03530827537178993, 0.028325721621513367, 0.03982969745993614, 0.026156311854720116, -0.01055420283228159, -0.045582883059978485, -0.02777264639735222, -0.024598974734544754, 0.02481015957891941, -0.0692133903503418, -0....
paper_id: 8
title: One Polyp Identifies All: One-Shot Polyp Segmentation with SAM via Cascaded Priors and Iterative Prompt Evolution
authors: [ "Xinyu Mao", "Xiaohan Xing", "Fei Meng", "Jianbang Liu", "Fan Bai", "Qiang Nie", "Max Meng" ]
cvf_url: https://openaccess.thecvf.com/content/ICCV2025/html/Mao_One_Polyp_Identifies_All_One-Shot_Polyp_Segmentation_with_SAM_via_ICCV_2025_paper.html
pdf_url: https://openaccess.thecvf.com/content/ICCV2025/papers/Mao_One_Polyp_Identifies_All_One-Shot_Polyp_Segmentation_with_SAM_via_ICCV_2025_paper.pdf
supp_url: https://openaccess.thecvf.com/content/ICCV2025/supplemental/Mao_One_Polyp_Identifies_ICCV_2025_supplemental.pdf
bibtex: @InProceedings{Mao_2025_ICCV, author = {Mao, Xinyu and Xing, Xiaohan and Meng, Fei and Liu, Jianbang and Bai, Fan and Nie, Qiang and Meng, Max}, title = {One Polyp Identifies All: One-Shot Polyp Segmentation with SAM via Cascaded Priors and Iterative Prompt Evolution}, booktitle = {Proceedings of the...
abstract: Polyp segmentation is vital for early colorectal cancer detection, yet traditional fully supervised methods struggle with morphological variability and domain shifts, requiring frequent retraining. Additionally, reliance on large-scale annotations is a major bottleneck due to the time-consuming and error-prone nature o...
arxiv_id: 2507.16337
comment: accepted by ICCV2025
github: null
project_page: null
space_ids: []
model_ids: []
dataset_ids: []
embedding: [ -0.017479155212640762, -0.04860679432749748, 0.013979200273752213, 0.015341182239353657, 0.0326567180454731, 0.019246399402618408, 0.044737037271261215, -0.015183096751570702, -0.03546231612563133, -0.09151994436979294, -0.036008477210998535, -0.017747389152646065, -0.043996911495923996, 0...
paper_id: 9
title: Gradient Extrapolation for Debiased Representation Learning
authors: [ "Ihab Asaad", "Maha Shadaydeh", "Joachim Denzler" ]
cvf_url: https://openaccess.thecvf.com/content/ICCV2025/html/Asaad_Gradient_Extrapolation_for_Debiased_Representation_Learning_ICCV_2025_paper.html
pdf_url: https://openaccess.thecvf.com/content/ICCV2025/papers/Asaad_Gradient_Extrapolation_for_Debiased_Representation_Learning_ICCV_2025_paper.pdf
supp_url: https://openaccess.thecvf.com/content/ICCV2025/supplemental/Asaad_Gradient_Extrapolation_for_ICCV_2025_supplemental.pdf
bibtex: @InProceedings{Asaad_2025_ICCV, author = {Asaad, Ihab and Shadaydeh, Maha and Denzler, Joachim}, title = {Gradient Extrapolation for Debiased Representation Learning}, booktitle = {Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV)}, month = {October}, year ...
abstract: Machine learning classification models trained with empirical risk minimization (ERM) often inadvertently rely on spurious correlations. When absent in the test data, these unintended associations between non-target attributes and target labels lead to poor generalization. This paper addresses this problem from a model...
arxiv_id: 2503.13236
comment: Accepted at International Conference on Computer Vision, ICCV 2025
github: null
project_page: https://gerne-debias.github.io/
space_ids: []
model_ids: []
dataset_ids: []
embedding: [ -0.002792726969346404, 0.029606139287352562, -0.01197305228561163, 0.030782189220190048, 0.024950338527560234, 0.018540887162089348, 0.02460322342813015, -0.014949070289731026, -0.025812873616814613, -0.041030917316675186, -0.006143881939351559, 0.008882169611752033, -0.0672808289527893, 0...
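The `embedding` column lends itself to nearest-neighbor search over abstracts. Below is a minimal sketch with toy two-dimensional vectors (the real embeddings are much longer, and the preview does not state their dimensionality or the model that produced them); vectors are assumed non-zero.

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two equal-length, non-zero vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def rank_by_similarity(query, rows):
    """Sort rows (each carrying an 'embedding' field) from most to least similar."""
    return sorted(rows, key=lambda r: cosine_similarity(query, r["embedding"]), reverse=True)
```

In practice one would embed a text query with the same model used to build the column, then rank all rows; for the preview's small sample, a brute-force scan like this suffices, while a full index would call for an ANN library.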
README.md exists but content is empty.
Downloads last month: 217

Spaces using ai-conferences/ICCV2025: 1