
ICML 2025 Submissions Dataset

This dataset contains information about ICML 2025 paper submissions, including:

  • Paper titles
  • Decision outcomes (Accept/Reject)
  • Peer reviews
  • Author rebuttals

Files

  • ICML_2025_submissions_huggingface.json: Main dataset file containing all submission data

Usage

import json

# Load the dataset
with open('ICML_2025_submissions_huggingface.json', 'r') as f:
    data = json.load(f)

# Example: Print first paper title
print(data[0]['title'])
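Building on the snippet above, you can summarize decision outcomes across the whole file. This is a minimal sketch: `decision_counts` is a hypothetical helper, and it assumes every entry carries a `paper_decision` string (entries missing the field are counted as "Unknown").

```python
from collections import Counter

def decision_counts(entries):
    """Tally paper_decision values across dataset entries.

    Assumes each entry is a dict with a 'paper_decision' string;
    entries missing the field are counted under 'Unknown'.
    """
    return Counter(e.get('paper_decision', 'Unknown') for e in entries)

# With `data` loaded as shown above, you would call:
#   decision_counts(data).most_common()
# Illustrated here on a toy list shaped like the dataset entries:
sample = [
    {'paper_decision': 'Accept (poster)'},
    {'paper_decision': 'Accept (poster)'},
    {'paper_decision': 'Reject'},
]
print(decision_counts(sample).most_common())
```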

Data Structure

Each entry contains:

  • title: Paper title
  • paper_decision: Decision outcome
  • review_1 through review_7: Peer reviews (slots beyond a paper's actual review count are null)
  • rebuttals_1 through rebuttals_7: Author rebuttals, matched to the corresponding review
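As a sketch of working with this structure: the helper below (`collect_reviews`, a hypothetical name) gathers the non-empty reviews for one paper, assuming missing review slots are stored as null (`None` in Python) or omitted entirely.

```python
def collect_reviews(paper, max_reviews=7):
    """Return the non-empty review texts for one paper record.

    Assumes review columns are named review_1 .. review_{max_reviews}
    and that missing reviews are null (None) or absent.
    """
    reviews = []
    for i in range(1, max_reviews + 1):
        text = paper.get(f'review_{i}')
        if text:
            reviews.append(text)
    return reviews

# Usage with a toy record shaped like the dataset entries:
paper = {
    'title': 'Example Paper',
    'paper_decision': 'Accept (poster)',
    'review_1': 'Summary: First review text...',
    'review_2': 'Summary: Second review text...',
    'review_3': None,
}
print(len(collect_reviews(paper)))  # the two non-null review strings
```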

License

This dataset is released under the MIT License. Please ensure you have appropriate permissions to use this data and comply with ICML's terms of service.
