| title | paper_decision | review_1 | rebuttals_1 | review_2 | rebuttals_2 | review_3 | rebuttals_3 | review_4 | rebuttals_4 | review_5 | rebuttals_5 | review_6 | rebuttals_6 | review_7 | rebuttals_7 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
Continuous Bayesian Model Selection for Multivariate Causal Discovery | Accept (poster) | Summary: This paper studies structure learning for observational data using Bayesian model selection. It falls into the category of score-based learning and uses the model evidence as the score to select a DAG. It shows that the existing work on the bivariate case [Dhir et al., 2024] can be extended to the multivariate case, and applies a fl... | Rebuttal 1:
Rebuttal: Thank you for your positive and encouraging feedback on our work. We appreciate your acknowledgement that the proposed method **"allows for learning nonparametric DAGs in a scalable manner"** and that our **"experiments show competitive performance with the benchmarks"**. We address your comments ... | Summary: This paper presents a multivariate causal discovery approach based on Bayesian model selection. It builds on the work of Dhir et al. (2024), who proposed to use Bayesian model selection to identify causal direction in the bivariate case. The Bayesian model selection framework allows for a trade-off between a m... | Rebuttal 1:
Rebuttal: Thank you for your insightful review. We appreciate your recognition of our contribution to the field of causal discovery. We are glad you think the **"theory was well covered and convincing"**, the paper is **"well written and well motivated"** and our **"extensive experiments significantly outpe... | Summary: Recent work shows that in the bivariate case Bayesian model selection can be used for structure identification under more flexible assumptions at the cost of a small probability of error. This paper extends the previous result to the multivariate case. The authors empirically validate the method by comparing t... | Rebuttal 1:
Rebuttal: Thank you for your detailed feedback. We appreciate your acknowledgement of the **novelty of our approach in applying Bayesian model selection for multivariate structure learning** and your recognition of the **thorough discussion of our contributions relative to prior work**. We are also pleased ... | Summary: The paper proposes a new method called CGP-CDE for (Bayesian) causal model discovery that allows for less restrictive model assumptions and can be applied to higher dimensional systems as well. It is based on a GP approach to obtain a nonparametric conditional density estimator for each node given its parents ... | Rebuttal 1:
Rebuttal: Thank you for your positive and encouraging feedback. We are glad you found the paper **"clear and well written"**, and appreciate your comment that our method is a **"significant and promising contribution"**.
> The paper initially suggests that it will solve the problem of restrictive / unreali... | null | null | null | null | null | null |
MP-Nav: Enhancing Data Poisoning Attacks against Multimodal Learning | Accept (poster) | Summary: 1. The authors analyzed the shortcomings of existing attack methods: they only create misassociations by randomly selecting concepts and poison instances randomly, which usually makes it difficult to achieve a good attack effect.
2. The authors proposed a plug-and-play module MP-Nav. MP-Nav effectively solves the pr... | Rebuttal 1:
Rebuttal: Thanks for your positive score. Please find our responses below.
1 [Essential References Not Discussed]: “The author should consider using some other methods as baselines [1-3]”\
**Response** 1:
We have indeed used [1] as one of the baseline methods that our paper has made comparisons with.
[2]... | Summary: This paper introduces the Multimodal Poison Navigator (MP-Nav), a plug-and-play module designed to improve data poisoning attacks on multi-modal models. The authors propose a two-step approach: (1) concept-level selection, which identifies semantically similar concepts for misassociation, and (2) instance-leve... | Rebuttal 1:
Rebuttal: Thanks for your positive score. Please find our responses below.
1 [Other Strengths And Weaknesses]: “a discussion on potential countermeasures would add more depth.”\
**Response** 1: This is a similar question to one raised by reviewer cLuX. Kindly refer to the "Response 3" for reviewer cLuX. ... | Summary: This paper presents MP-Nav that optimizes data poisoning attacks for vision-language models. The approach strategically selects concept pairs and robust instances to maximize poisoning efficiency while maintaining overall model utility. The authors evaluate MP-Nav on benchmark datasets and demonstrate improvem... | Rebuttal 1:
Rebuttal: Thanks for your positive review. Please find our responses below.
1 [Other Comments Or Suggestions]: “Please explicitly discuss the limitations of MP-Nav, particularly regarding scenarios where poisoning may not be effective.”\
**Response** 1: There are potentially two limitations.
First, MP-Nav... | Summary: This paper addresses the vulnerability of large-scale multimodal learning models to data poisoning attacks, where adversaries subtly inject malicious instances into training data to misalign concepts. It proposes MP-Nav (Multimodal Poison Navigator), a module that strategically selects semantically similar con... | Rebuttal 1:
Rebuttal: Many thanks for the reviewer’s statement that “Experimental results demonstrate that MP-Nav improves attack success rates while preserving model utility”, and the acknowledgment that “the method is simple and effective”. Kindly find our response below.
1 [Claims and Evidence (First two points)]... | null | null | null | null | null | null |
CAT Merging: A Training-Free Approach for Resolving Conflicts in Model Merging | Accept (poster) | Summary: ## Summary
The paper introduces CAT Merging, a training-free framework for merging multiple expert models while mitigating knowledge conflicts. Existing methods, such as task vectors, merge models by accumulating task vector weights, but conflicting components across tasks can lead to performance degradation.... | Rebuttal 1:
Rebuttal: **Q4.1: Pay attention to the concurrent work.**
**A4.1:** Thank you for highlighting the concurrent work, "Interfering with Interference: Blind Shuffling and Superposition for Better Multi-Model Compression," which addresses interference during multi-model merging through random layer shuffling a... | Summary: The paper introduces Conflict-Aware Task Merging, a training-free model merging method that addresses knowledge conflicts in multi-task model merging. The meaning of knowledge conflicts is that existing methods, such as Task Arithmetic, suffer from conflicts when integrating multiple fine-tuned task vectors, o... | Rebuttal 1:
Rebuttal: **Q3.1: Writing issues (W1-4).**
**A3.1:** Thanks for the suggestions. We will revise them and thoroughly double-check the manuscript to avoid similar issues.
**Q3.2: Comparisons on inference speed and computational overhead (W5).**
**A3.2:**
**The inference speed** remains consistent with ... | Summary: The paper proposes a novel model training-free model merging algorithm that removes the conflicting components of task vectors. This is done in a round robin fashion; for each task vector, the conflicting components of each other task vector are computed and removed from them. This is done with a projection fo... | Rebuttal 1:
Rebuttal: **Q2.1: Results on LLM.**
**A2.1:** Thanks for your suggestion. We conducted additional experiments using RoBERTa as the backbone model on the GLUE benchmark. As summarized in A3.3 below, CAT Merging consistently achieves superior average performance compared to existing state-of-the-art merging ... | Summary: This paper proposes Conflict-Aware Task Merging (CAT Merging), a training-free method to combine multiple fine-tuned models while alleviating knowledge conflicts that degrade performance when merging. The core idea is to selectively trim conflict-prone components from each task’s weight update (“task vector”) ... | Rebuttal 1:
Rebuttal: **Q1.1: Is the Lipschitz continuity assumption becoming less reliable in Transformer architectures?**
**A1.1**: We thank the reviewer for this insightful observation. Indeed, the multiplicative interactions in Transformer architectures complicate the Lipschitz continuity assumption. However, give... | null | null | null | null | null | null |
Pretraining Generative Flow Networks with Inexpensive Rewards for Molecular Graph Generation | Accept (poster) | Summary: The paper introduces Atomic GFlowNets (A-GFN), a novel generative model for molecular graph generation that leverages individual atoms as building blocks to explore drug-like chemical spaces more comprehensively. It adopts a pretraining mechanism using the ZINC dataset, where A-GFN learns from inexpensive yet info... | Rebuttal 1:
Rebuttal: We thank the reviewer for their constructive feedback. Our response and proposed revisions for the concerns raised by the reviewer are as follows
# 1
We will ensure that figure sizes remain consistent throughout the appendix, particularly improving the font readability of Figure 3. Thank you for... | Summary: This paper proposes a training strategy to improve GFlowNet-based molecular generation. First, it uses atom-based policy rather than fragment-based policy to enable access to a larger chemical space. Second, this work proposes using expert trajectories constructed from ZINC to pretrain the network, which impro... | Rebuttal 1:
Rebuttal: We sincerely appreciate your thoughtful and constructive review of our paper. We are pleased to see that you find our work well-motivated, comprehensive in evaluation, and a notable contribution to the application of GFlowNets in molecular design. Additionally, we are grateful for your recognition... | Summary: This paper introduces Atomic GFlowNets (or A-GFNs), an atom-based generative framework for molecular design based on GFlowNets, proposing a more general-purpose exploration of the chemical space. The authors propose pre-training A-GFNs on inexpensive molecular properties that act as rewards for training the un... | Rebuttal 1:
Rebuttal: 1. Why does TB sometimes outperform RTB in single-objective tasks (Table 3)? Is it due to over-regularization in RTB?
Thank you for raising this important question. The observed performance difference stems from fundamental differences in how TB and RTB balance optimization objectives:
Yes, RTB's... | null | null | null | null | null | null | null | null |
SpikeVideoFormer: An Efficient Spike-Driven Video Transformer with Hamming Attention and $\mathcal{O}(T)$ Complexity | Accept (poster) | Summary: This manuscript introduces a video-based transformer model that implements spiking neural networks (SNNs) and convolutional neural networks (CNNs). The work highlights the efficiency of the proposed model in video-related tasks, particularly focusing on computational (parameters) and power efficiency.
A key con... | Rebuttal 1:
Rebuttal: Dear Reviewer ZrRF,
We greatly appreciate your time and effort in reviewing our work. Below are our point-by-point responses to your comments.
---
**Experimental Designs Or Analyses:**
- Thanks for the constructive comment. For a qualitative comparison of video semantic segmentation, please re... | Summary: The authors present a novel model called the SpikeVideoFormer – a transformer network based on Spiking Neural Networks (SNN). They use Spike-Driven Hamming Attention (SDHA) instead of the usual dot product based self-attention. They claim their network to have a linear temporal complexity compared to the other... | Rebuttal 1:
Rebuttal: Dear Reviewer 4RFZ,
We sincerely appreciate your time and effort in reviewing our paper. Please find our point-by-point responses to your comments below.
---
**Claims And Evidence:**
- Thanks for the valuable suggestion. Normally, Power = Watts * Time. According to [B, C], when comparing ANNs and...
Rebuttal: Dear Reviewer 9Qge,
We are grateful for your insightful feedback. Below, we provide a detailed response to each of your points.
---
**Other Strengths And Weaknesses:**
---
**W1 Inference Time (per video clip $T\times 256\times 256\times 3$ as input)**
- **We report the inference time in the ... | Summary: The paper introduces SpikeVideoFormer, an efficient spike-driven video Transformer that leverages normalized Hamming similarity and joint space-time attention to achieve linear temporal complexity. It outperforms existing SNN-based models in video classification, human pose tracking, and video semantic segment... | Rebuttal 1:
Rebuttal: Dear Reviewer oaZD,
We appreciate your time and effort in reviewing our paper. Below, we provide a point-by-point response to your questions.
---
**Claims And Evidence:**
- **We report the latency in the table below**, based on tests conducted using the same hardware setup—a single A6000 GPU ... | null | null | null | null | null | null |
Dequantified Diffusion-Schrödinger Bridge for Density Ratio Estimation | Accept (poster) | Summary: This paper discusses the challenges of density ratio estimation in applications involving f-divergences, particularly with multi-modal distributions or large distributional differences, known as the density-chasm problem. To address this, the authors propose Dequantified Diffusion-Bridge Interpolants (DDBI), w... | Rebuttal 1:
Rebuttal: **1. Notation consistency and variance reduction proof**
We sincerely appreciate the reviewer’s careful reading and valuable feedback, which have helped us improve the clarity and rigor of our presentation. Below, we address each point:
- **Notation consistency**: We have corrected a typo in the... | Summary: The paper introduces Dequantified Diffusion Schrödinger Bridge for Density Ratio Estimation (D3RE), a novel framework addressing the challenges of density-chasm and support-chasm in traditional density ratio estimation (DRE). By leveraging Diffusion Bridge Interpolants (DBI) and Gaussian Dequantization (GD), t... | Rebuttal 1:
Rebuttal: **1. Empirical Validation of Support Expansion Claims and necessity of GD**
We sincerely thank the reviewer for raising this important point regarding support expansion and the necessity of Gaussian dequantization (GD). We appreciate the insightful feedback and have carefully considered the suggestions....
Rebuttal: We appreciate this thoughtful observation and the reviewer's comments on our paper. Thank you very much!
**1. More theoretical contributions**
**Proposition 4.4**:
Under the DSBI interpolant $X_t = \alpha_t X_0 + \beta_t X_1 + \sqrt{t(1-t)\gamma^2} Z_t$ with $(X_0,X_1) \sim \pi_{2\gam... | Summary: This paper addresses the density-chasm problem in density ratio estimation. The authors propose using diffusive interpolants and Gaussian dequantization, and they theoretically and experimentally verify that these methods can mitigate the problem. Additionally, they demonstrate that incorporating Schrödinger b... | Rebuttal 1:
Rebuttal: **1. Theoretical refinements for Theorems 4.1 and 4.2**
We appreciate the reviewer’s insightful feedback, which has helped us improve the clarity and rigor of Theorems 4.1 and 4.2. The key refinements in the revised manuscript are:
- **Theorem 4.1:**
We have explicitly clarified that the support ... | null | null | null | null | null | null |
DAMA: Data- and Model-aware Alignment of Multi-modal LLMs | Accept (poster) | Summary: In this paper, the authors propose DAMO, an innovative data- and model-aware alignment strategy for Multi-modal Large Language Models (MLLMs). Specifically, a data-aware strategy is introduced to enhance the model's adaptability to data hardness, and a model-aware strategy is proposed to facilitate a more effective optim... | Rebuttal 1:
Rebuttal: Response to Reviewer $\color{green}\text{gFQs}$:
We sincerely thank you for your invaluable and constructive feedback. We particularly appreciate your positive acknowledgement of our novelty, clear organization, and extensive experimental validations. Below we provide the point-to-point response... | Summary: The paper examines the inherent property of DPO regarding its imbalanced responsiveness to data with varying difficulty levels and proposes Data and Model-aware DPO (DAMO) to address this issue. Experiments across various benchmarks demonstrate that DAMO enhances both trustworthiness and general task performan... | Rebuttal 1:
Rebuttal: Response to Reviewer $\color{red}\text{TUJo}$:
We highly appreciate your insightful comments and acknowledgment of our contributions! Your constructive criticism is invaluable in refining our work! We organize your concerns into the following 3 aspects:
> **Q1. Clarification about sub-sentence c... | Summary: Authors propose a variant of DPO where the Beta hyperparameter is adapter dynamically depending on model and data- awareness. Author postulate the existence of easy and hard to distinguish example in alignment training settings, and therefore propose dynamic strategy to adjust those. Evaluation is reported on ... | Rebuttal 1:
Rebuttal: Response to Reviewer $\color{blue}\text{ERND}$:
We highly appreciate your insightful comments, which help us a lot to better scrutinize and polish our work! The following are point-to-point responses.
> **Q1. Implementation with more advanced models (e.g., LLaVA 1.6 and LLaVA-OneVision) makes DAMO... | null | null | null | null | null | null | null | null |
Visual Attention Never Fades: Selective Progressive Attention ReCalibration for Detailed Image Captioning in Multimodal Large Language Models | Accept (poster) | Summary: This paper focuses on improving detailed image captioning quality in VLMs. The authors argue that existing models struggle to maintain strong visual attention when generating longer captions, causing increased noise and reduced recall. To fix this, they propose a method that selectively strengthens visual atte... | Rebuttal 1:
Rebuttal: We sincerely thank the reviewer for the detailed and helpful review. We apologize for any confusion caused by the incomplete results in the original submission. Your comments have been very valuable, and we provide our detailed responses below. We would also appreciate any further suggestions or f... | Summary: This work proposes a training-free method to enhance detailed image captioning with improved balance between precision and recall by re-calibrating the attention values in multimodal large language models (MLLMs). This work first analyzes the attention patterns in MLLMs and finds that 1) trivially enlarging th... | Rebuttal 1:
Rebuttal: **1. Concerns Regarding Performance Trade-off**
> Compared with baselines, the proposed method significantly improves the recall, but the precision is hurt. For example, as shown in Table 2, the precision is ~3% lower than PAI. Similarly, in Figure 6, the human evaluation suggests a lower precisi... | Summary: The paper introduces an adaptive attention enhancement mechanism aimed at improving the precision of image captioning while maintaining an acceptable recall rate. Specifically, the selective attention enhancement strategy seems powerful according to its significant improvement in the precision of long captio... | Rebuttal 1:
Rebuttal: We sincerely thank the reviewer for taking the time and effort to evaluate our paper. We truly appreciate your insightful comments. Please find our detailed response to your comment below. If you have any further feedback, we would be grateful to hear it.
**1. Efficiency Comparisons**
> Since th... | Summary: The authors study the effect of attention variability spatially and temporally and its impact on detailed image captioning with Visual Language Models (VLMs). The authors provide a detailed analysis of methods that tackle attention leaking from the image into the text as the caption grows, and they find that s... | Rebuttal 1:
Rebuttal: We sincerely thank the reviewer for the thoughtful feedback and insightful question. Your comments greatly helped us refine and clarify the paper. Please find our detailed response below—we’re happy to address any further concerns.
**1. Analysis of the “noisy” scores**
We understand the reviewer... | null | null | null | null | null | null |
A Mathematical Framework for AI-Human Integration in Work | Accept (poster) | Summary: This paper develops a model of job success probability by viewing jobs as a composition of tasks that need to be accomplished, and workers supply skills that affect the probability tasks as successfully completed. The authors then calibrate the model using the O*NET database's skill descriptions associated wit... | Rebuttal 1:
Rebuttal: We thank you for your thoughtful, detailed, and insightful feedback. In response, we added new theoretical and empirical analyses that sharpen our results, test their robustness across modeling choices, and highlight connections to real-world phenomena such as productivity compression.
Please see... | Summary: This paper presents a mathematical framework for modeling jobs, workers, and worker-job fit, focusing on subskill decomposition into decision-level and action-level tasks to highlight the distinct strengths of humans and AI. The study examines how variations in subskill abilities affect job success and identif... | Rebuttal 1:
Rebuttal: We thank you for your detailed and encouraging review. We are especially grateful for your recognition of our framework’s real-world applicability and for your thoughtful suggestions regarding dataset limitations and empirical grounding, which have directly shaped our additional experiments. Pleas... | Summary: This paper models human-AI collaboration in jobs. In particular, it models jobs as being composed of multiple different subtasks, each of which involve different skills. The ability of different agents is noisy and ordered (e.g. the same agent can’t perform worse on easier subtasks on average than they do on e... | Rebuttal 1:
Rebuttal: Thank you for your thoughtful and constructive feedback. We especially appreciate your recognition of our modeling approach and the suggestion regarding imperfect merging, which we have now incorporated. Please see this PDF for new figures (https://acrobat.adobe.com/id/urn:aaid:sc:eu:fef7d6b2-24f6... | Summary: The authors propose a model of workforce replacement by AI and run some simulations based on it.
Claims And Evidence: The authors claim to uncover deep truths about the job market, but they rest upon a foundation of assumptions that are not justified. They also do not make any real claims beyond stating they ... | Rebuttal 1:
Rebuttal: We thank the reviewer for taking the time to evaluate our submission and for the opportunity to clarify key aspects of our work. Please see this PDF for new figures (link: https://acrobat.adobe.com/id/urn:aaid:sc:eu:fef7d6b2-24f6-4a59-9386-fdf09456ed99).
> "..assumptions that skills are independe... | null | null | null | null | null | null |
XAttnMark: Learning Robust Audio Watermarking with Cross-Attention | Accept (poster) | Summary: This paper presents a robust watermarking scheme XAttnMark for audio content, where the embedding and detection of the watermark is performed using neural networks. A key aim of the work is to improve robust attribution (the ability to recover a binary code hidden in the content) while retaining robust detecti... | Rebuttal 1:
Rebuttal: We sincerely thank the reviewer for their meticulous and constructive feedback. We will revise the manuscript by improving the introduction and mentioning the CAI initiative and the C2PA standard in the body of the paper. Our responses are as follows:
> Q1. On the statistical significance of the... | Summary: This paper focuses on robust audio watermark detection and source attribution, which is more a technical report than a top-tier conference paper. Specifically, it adopts blended architecture of disjointed generator-detector and fully shared-parameter architecture. Besides, temporal conditioning mechanism and p... | Rebuttal 1:
Rebuttal: We sincerely appreciate the reviewer's insightful attention and precise feedback. We have addressed the reviewer's concerns as follows:
> Q1: How about the comparison of model's parameters, training speed and inference speed?
**Response:** We additionally report the model size, training speed, a... | Summary: The paper introduces a novel neural audio watermarking framework called XATTNMARK. The key contributions include:
A cross-attention mechanism that enables efficient message retrieval by sharing an embedding table between the generator and detector.
A temporal conditioning module that distributes the message te... | Rebuttal 1:
Rebuttal: We sincerely thank the reviewer for taking the time to review our manuscript and providing valuable feedback. We have carefully considered each point raised and provide our detailed responses below:
> Q1. The subjective listening test involves a relatively small number of participants.
**Respon... | Summary: This paper proposes XATTNMARK, a novel neural audio watermarking system designed to achieve both robust detection and accurate message attribution, two goals that are difficult to achieve simultaneously in prior work. The authors blend the architectural benefits of WavMark and AudioSeal by introducing partial ... | Rebuttal 1:
Rebuttal: We sincerely thank the reviewer for their valuable feedback. We have addressed the reviewer's concerns as follows:
> W1. The model still struggles with extreme transformations like speed changes (acknowledged in the text).
In the paper we show that the model is able to effectively perform the de... | null | null | null | null | null | null |
Dimension-Free Adaptive Subgradient Methods with Frequent Directions | Accept (poster) | Summary: In machine learning, the seminal work [DHS'11] proposed the adaptive subgradient method with full matrices (ADA-FULL), which requires maintaining a preconditioning matrix with $O(d^2)$ space and $O(d^3)$ running time. However, ADA-FULL suffers from high-dimensional dependence in its regret bound and computatio... | Rebuttal 1:
Rebuttal: Many thanks for your constructive feedback!
---
Q1. FTSL and FTFSL have the same complexities as ADA-FD(P) and ADA-FFD(P), and regret bounds are close.
A1. We acknowledge that the proposed methods have the same time and space complexities as ADA-FD(P) and ADA-FFD(P). However, we want to cl...
## update after rebuttal
All of my questions have been addressed. Hence, I would ... | Rebuttal 1:
Rebuttal: Thank you for your valuable feedback!
---
Q1. The literature review should be more appropriate.
A1. Thank you for bringing these related works to our attention. After checking the papers, we acknowledge that the idea of adaptive FD by incorporating the cumulative discarded information is first... | Summary: The paper proposes adaptive subgradient methods for online convex optimization that have better regret bounds and time complexities than existing methods. This is achieved by analyzing the frequent directions in the primal-dual framework.
Claims And Evidence: The claims are supported by clear evidence.
Metho... | Rebuttal 1:
Rebuttal: Thanks for your constructive comments!
---
Q1: Since the loss functions are only assumed to be convex and hence can be non-smooth, the paper should be careful not to mix gradients and subgradients and clearly indicate whether their arguments work for every subgradient or just one particular subg... | null | null | null | null | null | null | null | null |
Dynamic Mixture of Curriculum LoRA Experts for Continual Multimodal Instruction Tuning | Accept (poster) | Summary: This paper presents an algorithm D-MoLE for continual multimodal instruction tuning. The algorithm solves the challenges of task architecture conflict and modality imbalance by dynamically assigning LoRA experts and a gradient-based continual curriculum. Experimental results show the effectiveness of the propo... | Rebuttal 1:
Rebuttal: We sincerely thank the reviewer for the helpful and encouraging comments. We appreciate that they note the inclusion of theoretical analysis, find our writing clear and easy to read, and recognize the significance of the problem setting. We hope the following explanations provide sufficient clarif... | Summary: The paper presents D-MoLE, a framework for continual multimodal instruction tuning (CMIT) in multimodal large language models (MLLMs). It dynamically allocates LoRA experts across layers using zero-cost metrics and addresses modality imbalance through gradient-based inter-modal curriculum learning. By resolvin... | Rebuttal 1:
Rebuttal: We sincerely thank the reviewer for the encouraging and constructive feedback. We are pleased that they find our approach well-motivated, intuitive, and effective, and acknowledge the impressive performance gains. We hope our responses below address the remaining concerns.
---
**Q1. Including Som... | Summary: This paper addresses the challenge of continual multimodal instruction tuning (CMIT) for Multimodal Large Language Models (MLLMs) by proposing a novel Dynamic Mixture of Curriculum LoRA Experts (D-MoLE) method. Unlike fixed-architecture models that struggle with adapting to new tasks, D-MoLE dynamically evolve... | Rebuttal 1:
Rebuttal: We sincerely thank the reviewer for the thoughtful and positive feedback. It is encouraging that they find our paper well-written and easy to follow, our results impressive, and our method practical, intuitive, and concise. We hope the clarifications below address the reviewer’s remaining concerns... | Summary: This paper presents D-MoLE, a framework designed to tackle the challenges of continual multimodal instruction tuning (CMIT) in Multimodal Large Language Models (MLLMs). D-MoLE employs a dynamic layer-wise expert allocation strategy to overcome task architecture conflicts and a gradient-based inter-modal conti... | Rebuttal 1:
Rebuttal: We sincerely thank the reviewer for the thoughtful and encouraging feedback. We are glad that they find our paper well-structured and our approach to be inspiring. We hope our responses below help clarify the remaining points.
---
**Q1. Notation Refinement**
Thank you for the valuable suggestion... | null | null | null | null | null | null |
PARM: Multi-Objective Test-Time Alignment via Preference-Aware Autoregressive Reward Model | Accept (poster) | Summary: This paper studies the reward-guided multi-objective alignment problem. A prior work GenARM uses a token-level reward model to guide the decoding process, and requires to train two separate reward models to guide the multi-objective decoding process. This work equips GenARM with a preference-aware LoRA-like ad... | Rebuttal 1:
Rebuttal: Thanks for your thoughtful review and valuable feedback. We address your concerns as follows.
> **[Experimental Designs Or Analyses]**. GenARM is the only baseline in evaluation ... include comparison with policy-guided approaches [2,3,4,5].
> [5] Rewarded soups (NeurIPS 2023)
> **[Wea... | Summary: This paper introduces Preference-aware ARM (PARM), a method for guiding large language models (LLMs) at test time based on user preferences. PARM builds upon GenARM, which trains a separate preference model for each human preference. In contrast, PARM employs a unified model that conditions all preferences on ... | Rebuttal 1:
Rebuttal: Thanks for your thoughtful review and valuable feedback. We address your concerns as follows.
> **[Essential References Not Discussed]**. Discussions on controllable text generation are missing.
Controllable Text Generation (CTG) generates text from LLMs with specific attributes or constraints. ... | Summary: The authors proposed a preference-aware ARM for multi-objective test-time alignment. PARM is an ARM conditioned on user preferences through the proposed PBLoRA, which manages trade-offs across multiple preference dimensions during inference.
Claims And Evidence: Yes.
Methods And Evaluation Criteria: Yes.
Th... | Rebuttal 1:
Rebuttal: Thanks for your thoughtful review and valuable feedback. We address your concerns as follows.
> **[Weaknesses 1]**. The proposed model is the extension of the existing model ARM and in particular the GenARM. The main difference is that the proposed model is to condition the massive of model param... | null | null | null | null | null | null | null | null |
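PARM and GenARM, discussed above, steer a frozen LLM at test time by adding preference-weighted reward-model logits to the base logits. A toy numpy sketch of that decoding rule (our simplification; PARM's PBLoRA conditioning is not shown, and all arrays are illustrative):

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def guided_next_token(base_logits, reward_logits, prefs):
    # Frozen base-LM logits plus per-objective reward-model logits,
    # weighted by the user's preference vector at decoding time.
    combined = base_logits + sum(w * r for w, r in zip(prefs, reward_logits))
    return softmax(combined)

base = np.array([2.0, 1.0, 0.0])       # base LM next-token logits
helpful = np.array([0.0, 1.0, 0.0])    # reward logits favouring token 1
harmless = np.array([0.0, 0.0, 1.0])   # reward logits favouring token 2

p_half = guided_next_token(base, [helpful, harmless], prefs=[0.5, 0.5])
p_full = guided_next_token(base, [helpful, harmless], prefs=[1.0, 0.0])
```

Shifting the preference weight toward one objective raises the probability mass on the tokens that objective's reward model favours, which is the trade-off PBLoRA manages with a single conditioned model rather than separate reward models.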
Aligning Multimodal Representations through an Information Bottleneck | Accept (poster) | Summary: In this paper, the authors study the alignment of representation in multimodal learning through information theory.
For a positive pair $X_\alpha, X_\beta$ from modalities $\alpha, \beta$, they formulate the essence $Y$ and nuisance
$N_\alpha, N_\beta$ of the inputs as the common and modality-specific parts ... | Rebuttal 1:
Rebuttal: We would like to begin by thanking you for the time dedicated to giving us feedback to improve our work. We address your main concerns next:
> these datasets seem to be rather old (from 2015 and 2002).
We believe that you may be refering to the metrics instead of to the datasets. These are still w... | Summary: The manuscript shows that contrastive learning methods for multimodal representations do not remove modality-specific information, which leads to misaligned representations. It uses an Information Bottleneck approach to add a regularization term to the loss function to filter out this extra information while p... | Rebuttal 1:
Rebuttal: We would like to begin by thanking you for the time dedicated to giving us feedback to improve our work. We address your main concerns next:
> could the choice of image encoder introduce bias in the URR measurements?
To analyze this point, we have performed experiments that are identical to those ... | Summary: The paper analyzes the problem that the contrastive losses in multimodal representation learning fail to align representations effectively due to their retention of modality-specific information. To address this, the authors propose a variationally-derived regularization term that reduces modality-specific in... | Rebuttal 1:
Rebuttal: We would like to begin by thanking you for the time dedicated to giving us feedback to improve our work. We address your main concerns next:
> 1. Quantitative results of image retrievals are missing.
> What will the model perform when having a stronger $L_M$ constraint?
Please see second answer t... | Summary: This paper addresses the challenge of misalignment in multimodal representation learning when using contrastive loss functions. The authors argue that this misalignment stems from modality-specific information present in the representation space that contrastive objectives fail to remove. Leveraging the Inform... | Rebuttal 1:
Rebuttal: We would like to begin by thanking you for the time dedicated to giving us feedback to improve our work. We address your main concerns next:
> (i) additional multimodal benchmarks would enhance generalizability.
More experiments were not included due to space limitations.
> (ii) task-specific met... | null | null | null | null | null | null |
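The rebuttals above concern an Information Bottleneck regularizer added to a contrastive objective. A hedged numpy sketch of the two ingredients, an InfoNCE term plus a Gaussian KL compression term (a generic IB-style combination, not the paper's exact variational bound; batch sizes and the 0.01 weight are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)

def info_nce(Za, Zb, tau=0.1):
    # Symmetric-in-batch contrastive term over paired embeddings.
    Za = Za / np.linalg.norm(Za, axis=1, keepdims=True)
    Zb = Zb / np.linalg.norm(Zb, axis=1, keepdims=True)
    logits = Za @ Zb.T / tau
    log_p = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.diag(log_p).mean()

def kl_to_standard_normal(mu, log_var):
    # KL(N(mu, diag(sigma^2)) || N(0, I)): a compression term penalizing
    # information kept beyond the shared "essence" of the pair.
    return 0.5 * (np.exp(log_var) + mu**2 - 1.0 - log_var).sum(axis=1).mean()

Za = rng.standard_normal((4, 8))
Zb = rng.standard_normal((4, 8))
mu, log_var = 0.1 * rng.standard_normal((4, 8)), np.zeros((4, 8))
loss = info_nce(Za, Zb) + 0.01 * kl_to_standard_normal(mu, log_var)
```

The contrastive term alone preserves modality-specific nuisance information; the KL term is the knob that filters it out, which is the paper's central claim about why plain contrastive losses misalign.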
PROXSPARSE: REGULARIZED LEARNING OF SEMI-STRUCTURED SPARSITY MASKS FOR PRETRAINED LLMS | Accept (poster) | Summary: The paper introduces ProxSparse, a learning-based framework designed to improve the efficiency of large language models (LLMs) through semi-structured pruning.
Claims And Evidence: Yes.
It presents detailed experiments comparing ProxSparse with state‐of‐the‐art baselines across multiple LLM families and tasks... | Rebuttal 1:
Rebuttal: We appreciate the reviewer for acknowledging the strength of our paper! Below we address the questions regarding **2:4 ratio, semi-structured benefits, assumption of the theoretical proof, ALM and EnumALM as well as model size justification.**
# W1:`The focus on 2:4 pruning sparsity`: 2:4 pruning... | Summary: This paper introduces a learning-based approach for semi-structured pruning of LLMs using a structured sparsity regularizer and proximal gradient descent. It enables global mask optimization without retraining and improves efficiency. Experiments on seven models show superior perplexity and zero-shot accuracy ... | Rebuttal 1:
Rebuttal: We thank the reviewer for the valuable feedback! Below we share responses on **comparison w/ layer-wise method and MaskLLM, inference efficiency.**
# W1:`Comparing newer layer-wise method`: We achieve better results than OWL and AlphaPrune
OWL[1] and AlphaPrune[2] are important works in pruning, ... | Summary: The authors propose ProxSparse, a method for learning a semi-structured pruning mask using two regularisers: one is analogous to l1 regularisation, and the other promotes a locality constraint for semi-structured pruning.
Claims And Evidence: From my understanding, the main claim is that previous methods, which r... | Rebuttal 1:
Rebuttal: We thank the reviewer for the insightful comments! Below, we address several questions raised including **sparsity pattern selection, Hessian-based pruning, anonymized citations, and the anonymized model family**.
# W1: `The focus on 2:4 pruning sparsity`: 2:4 pruning is the most practical semi-s... | Summary: This work introduces ProxSparse, a learning-based framework for mask selection via regularized optimization. The key design is a sparsity regularization $Reg_{2:4}$ that forces 2:4 sparsity and a weight regularization $Reg_{W_0}$ to avoid significant differences between the tuned parameters and the original p... | Rebuttal 1:
Rebuttal: We thank the reviewer for recognizing the effectiveness and practicality of ProxSparse! Below, we address the questions raised regarding **Llama, ADMMPrune comparisons, clarification on weight updates and MaskLLM comparison**.
# W1: `Lack of Llama results`: We anonymized Anon.model-1,2,3 due to I... | null | null | null | null | null | null |
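For context on the 2:4 semi-structured pattern discussed throughout the ProxSparse row, here is a one-shot magnitude-based 2:4 masking baseline in numpy (the simple local baseline that learned-mask methods like ProxSparse improve on, not ProxSparse itself):

```python
import numpy as np

def mask_2_of_4(W):
    # Keep the 2 largest-magnitude weights in each contiguous group of 4
    # along the flattened rows; zero the rest.
    groups = W.reshape(-1, 4)
    order = np.argsort(np.abs(groups), axis=1)   # ascending by magnitude
    mask = np.ones_like(groups)
    np.put_along_axis(mask, order[:, :2], 0.0, axis=1)  # drop the 2 smallest
    return (groups * mask).reshape(W.shape), mask.reshape(W.shape)

rng = np.random.default_rng(0)
W = rng.standard_normal((4, 8))
W_sparse, mask = mask_2_of_4(W)
```

Every group of four weights keeps exactly two nonzeros, the hardware-friendly constraint that NVIDIA sparse tensor cores accelerate; the learning-based approach instead selects these masks globally rather than group-by-group.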
Point-Level Topological Representation Learning on Point Clouds | Accept (poster) | Summary: The paper proposes to extract point-level features given the global structure of the point cloud, using concepts from algebraic topology and differential geometry.
Claims And Evidence: The proposed method can compute point-level topological features conditioned on the global topological structures of the poin... | Rebuttal 1:
Rebuttal: Thank you for your review and your valuable feedback!
> The experiments and the visualization are good. I wonder whether it is possible to evaluate on more diverse tasks, like ModelNet40 classification, ShapeNet segmentation, and S3DIS segmentation, like many point cloud papers evaluate.
Thank y... | Summary: The paper introduces TOPF (Topological Point Features), a method for extracting point-level topological features from point clouds using tools from algebraic topology and differential geometry. The authors propose leveraging persistent homology and harmonic representatives from the Hodge Laplacian to relate gl... | Rebuttal 1:
Rebuttal: Thank you very much for your careful review and feedback!
> Weakness: It could be better to analyze the runtime and computational complexity of persistent homology for different point cloud sizes.
Thank you for this suggestion! We analysed the computational complexity in appendix E.2 in... | Summary: The paper presents a method (TOPF) to extract point-level topological features of point clouds, i.e., to assign to each point in the cloud a feature vector that encodes to which generators of homology it contributes. Topological features are thereby computed across all scales using persistent homology on a (Vi... | Rebuttal 1:
Rebuttal: Thank you very much for your detailed and thorough review. We will reply to the raised issues in as much detail as the character limit permits.
**Claims And Evidence:**
Thank you for your detailed feedback! While we still believe that the listed contributions are technically true, we now realise ... | Summary: Inspired by TDA, the authors proposed a point-level topological representation learning method for point cloud data analysis. Specifically, they introduced topological point features (TOPF) to extract point-level features from point clouds through discrete algebraic topology and differential geometry. The TOPF... | Rebuttal 1:
Rebuttal: Thank you very much for your feedback! We are happy about the many strengths of TOPF identified by you. We will now address your comments:
> Weakness 1: Although the author analyzed TOPF from a theoretical perspective, I still think that the author's TDA cannot be actually applied in the current ... | null | null | null | null | null | null |
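The TOPF reviews above rely on persistent homology of point clouds. A self-contained sketch of the 0-dimensional case, where persistence pairs fall out of Kruskal-style single-linkage merging (a standard construction; higher-dimensional homology and TOPF's harmonic representatives are beyond this sketch):

```python
import numpy as np
from itertools import combinations

def h0_persistence(points):
    # 0-dim persistence of a Vietoris-Rips filtration: every point is born
    # at scale 0; a component dies at the length of the edge that merges it
    # into another component (Kruskal / single-linkage clustering).
    n = len(points)
    edges = sorted((np.linalg.norm(points[i] - points[j]), i, j)
                   for i, j in combinations(range(n), 2))
    parent = list(range(n))
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x
    deaths = []
    for d, i, j in edges:
        ri, rj = find(i), find(j)
        if ri != rj:
            parent[ri] = rj
            deaths.append(d)
    return deaths  # n-1 finite deaths; one class lives forever

pts = np.array([[0.0, 0.0], [0.1, 0.0], [5.0, 0.0], [5.1, 0.0]])
deaths = h0_persistence(pts)
```

On these four points the two short-lived classes reflect the tight pairs, while the class dying only at distance ≈ 4.9 signals that the cloud has two well-separated clusters.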
Score-of-Mixture Training: One-Step Generative Model Training Made Simple via Score Estimation of Mixture Distributions | Accept (spotlight poster) | Summary: This paper proposes a framework for training one-step generative models, called ScoreMix. The proposed method is derived by minimizing the $\alpha$-skew Jensen-Shannon Divergence ($\alpha$-JSD) between the generated distribution $q_{\theta}$ from an implicit generative model and the data distribution $p$ (or t... | Rebuttal 1:
Rebuttal: We appreciate the effort in reviewing our work and the helpful suggestions for improving the readability of our paper.
Below, we provide clarifications on the identified weaknesses and responses to the questions.
### Clarifications on Weaknesses
* `On stability of ScoreMix training`: We apprecia... | Summary: This paper proposes a generalization of the KL-minimization procedure for learning one-step generators from score-based models. The authors introduce an "$\alpha$-skew Jensen–Shannon divergence", which interpolates between the KL divergence and the reversed-KL divergence. They propose two settings: one trainin... | Rebuttal 1:
Rebuttal: We appreciate the reviewer’s effort in reviewing our paper and constructive comments. We will address the raised concerns in our revision as follows.
* `On FID evaluation and adversarial loss`: We appreciate the reviewer for the thoughtful comment on the limitation of FID evaluation and its inter... | Summary: The paper presents the ScoreMix, a new type of one-step generative model, trained using the $\alpha$-JSD from $f$-divergence. ScoreMix can be trained from scratch and used for distillation. It achieves SOTA performance in the 1-NFE regime. The paper grounds the theoretical approach and performs extensive exper... | Rebuttal 1:
Rebuttal: We appreciate the effort in reviewing our manuscript and providing constructive comments. We will incorporate all the feedback in our revision to improve the manuscript.
### On Weaknesses
* `Missing reference and baselines`: We thank the reviewer for pointing out the missing references and basel... | null | null | null | null | null | null | null | null |
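The α-skew Jensen-Shannon divergence discussed above can be written in several normalizations; one common form for discrete distributions is below (our notation, not necessarily the paper's exact definition, and the example distributions are illustrative):

```python
import numpy as np

def kl(p, q):
    return float(np.sum(p * np.log(p / q)))

def skew_js(p, q, alpha):
    # Alpha-skewed Jensen-Shannon form: both KL terms are taken against
    # the alpha-weighted mixture m; alpha = 1/2 recovers the usual JSD.
    m = (1 - alpha) * p + alpha * q
    return (1 - alpha) * kl(p, m) + alpha * kl(q, m)

p = np.array([0.7, 0.2, 0.1])
q = np.array([0.1, 0.3, 0.6])
```

The key practical point from the paper's framing is that the score of the mixture m appears inside the objective, which is what makes the divergence estimable from scores of mixture distributions.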
Surrogate Prompt Learning: Towards Efficient and Diverse Prompt Learning for Vision-Language Models | Accept (poster) | Summary: This paper presents a novel Surrogate Prompt Learning (SurPL) framework for vision-language models (VLMs). SurPL aims to achieve efficient and diverse prompt learning by replacing explicit diverse prompt learning with a Surrogate Feature Generator (SFG) that generates diverse text features without requiring co... | Rebuttal 1:
Rebuttal: >Q1: Lack deeper theoretical justification on why surrogate features retain meaningful semantic information.
A1: Thanks for the comments. We provide a theoretical analysis based on the universal approximation theorem. Due to character limits, please refer to the Q&A 1 of Reviewer jTad for detaile... | Summary: This paper introduces Surrogate Prompt Learning to address efficiency issues in diverse prompt learning for vision-language models (VLMs). SurPL leverages a lightweight Surrogate Feature Generator (SFG) to directly generate diverse prompted text features from a single basic prompt, avoiding the computational o... | Rebuttal 1:
Rebuttal: >Q1: The paper doesn't provide much theoretical justification for why surrogate features effectively replace the original prompted features, relying instead on empirical results.
A1: Thanks for the valuable comments. We provide a theoretical analysis to demonstrate the effectiveness of the surrog... | Summary: Prompt learning is an efficient fine-tuning technique that learns text prompts. Learning multiple text prompts instead of just one can improve performance while increasing computational cost. This paper proposes learning diverse text prompts without initializing additional parameters by generating specific tex... | Rebuttal 1:
Rebuttal: >Q1: How does the FG loss work with images that contain multiple classes? Would other instances in the figure disturb the classification since we only need to predict the most significant object? For instance, the author could provide heatmap visualization on such images.
A1: Thanks for the insig... | Summary: This paper proposes Surrogate Prompt Learning (SurPL), a new approach to enhance the efficiency and diversity of prompt learning for VLMs. Instead of learning multiple diverse prompts, SurPL directly generates surrogate prompted text features through a lightweight Surrogate Feature Generator (SFG), reducing co... | Rebuttal 1:
Rebuttal: >Q1: The O(M) computational complexity claim of SurPL lacks a formal derivation.
A1: Thanks for the valuable comments. We provide a theoretical analysis of computational complexity here.
Notations.
Loss $L$, classnames $c=(c_m)_{m=1}^M$, text encoder parameters $T=(T^{k})_{k=1}^K$, ... | null | null | null | null | null | null
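The SurPL row above hinges on a lightweight Surrogate Feature Generator producing M diverse text features from one basic prompted feature, avoiding M full text-encoder passes. A toy numpy sketch of that idea (the generator architecture, names, and shapes here are our illustration, not the paper's):

```python
import numpy as np

rng = np.random.default_rng(0)

d, M = 16, 4   # feature dimension, number of surrogate "prompt" views

# Hypothetical lightweight generator: one near-identity map plus a
# learned offset per surrogate view.
W = np.stack([np.eye(d) + 0.01 * rng.standard_normal((d, d)) for _ in range(M)])
b = 0.01 * rng.standard_normal((M, d))

def surrogate_features(t_basic):
    # Map one basic prompted text feature to M diverse, unit-norm
    # surrogate features in a single cheap forward pass.
    out = np.einsum('mij,j->mi', W, t_basic) + b
    return out / np.linalg.norm(out, axis=1, keepdims=True)

t = rng.standard_normal(d)
feats = surrogate_features(t)
```

The generator's cost is independent of the text-encoder depth, which is the source of the claimed O(M) complexity relative to running the encoder once per diverse prompt.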
Delta Decompression for MoE-based LLMs Compression | Accept (poster) | Summary: The paper presents D²-MoE, a new compression framework designed to tackle issues of parameter redundancy, memory usage, and storage inefficiency in MoE LLMs. D²-MoE enhances efficiency by breaking down expert weights into a shared base weight, which captures common features, and a delta weight that represents ... | Rebuttal 1:
Rebuttal: ### **Dear Reviewer 4qJS**
Thank you for your insightful comments and for acknowledging the strengths of D²-MoE in terms of **efficiency, accuracy, and practical applicability**. Below, we address your concerns in detail.
------
**Q1: Motivation and Theoretical Justification of the D²-MoE Frame... | Summary: This paper decompose expert weights into a shared base weight and expert-specific delta weights, allowing for effective compression while preserving expert diversity. The delta weights are then compressed using SVD and the base weights undergo semi-dynamical structured pruning. The paper provides extensive em... | Rebuttal 1:
Rebuttal: ### **Reviewer V5e9**
Thank you for your detailed review and for recognizing the novelty of D²-MoE and its strong empirical performance. Below, we address your concerns in depth.
------
**Q1: The criteria for setting the SVD truncation threshold should be explained.**
**A1:**
(1) We set an ov... | Summary: This paper introduces D2-MoE, which decomposes expert weights into a shared base weight and unique delta weights. The delta weights are then compressed using SVD, and the base weight is further compressed using a semi-dynamical structured pruning strategy. The authors claim D2-MoE achieves better compression r... | Rebuttal 1:
Rebuttal: ### **Dear Reviewer LEQk**
**Q1: Careful discussion on relation with LoRA**
**A1:**
(1) Our framework structurally builds a multi-LoRA setup for MoE compression, consisting of a single base branch and multiple delta low-rank branches, enabling us to leverage existing LoRA research for further f... | Summary: This paper introduces D²-MoE for MoE Language Models. The author decomposes expert weights into a shared base weight and expert-specific delta weights, then compresses each component separately.
Claims And Evidence: The primary claim that D²-MoE outperforms existing compression methods is backed by comparativ... | Rebuttal 1:
Rebuttal: ### **Dear Reviewer 73q5**
Thank you for your thoughtful review and constructive feedback. We appreciate your recognition of the novelty of D²-MoE and its strong empirical performance. Below, we address your concerns in detail.
**Q1: The relationship between compression ratio and performance deg... | null | null | null | null | null | null |
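The decomposition at the heart of the D²-MoE discussion above, a shared base weight plus SVD-truncated per-expert deltas, can be sketched in a few lines of numpy (illustration only; D²-MoE uses its own base construction and additionally prunes the base weight):

```python
import numpy as np

rng = np.random.default_rng(0)

def compress_experts(experts, rank):
    # Shared base = mean of expert weights; each expert keeps only a
    # truncated SVD of its delta from the base.
    base = np.mean(experts, axis=0)
    factors = []
    for W in experts:
        U, s, Vt = np.linalg.svd(W - base, full_matrices=False)
        factors.append((U[:, :rank] * s[:rank], Vt[:rank]))
    return base, factors

def reconstruct(base, factor):
    A, B = factor
    return base + A @ B

experts = [rng.standard_normal((8, 8)) for _ in range(4)]
base, factors = compress_experts(experts, rank=2)       # lossy, compact
base_f, factors_f = compress_experts(experts, rank=8)   # full rank: lossless
```

Storage drops from one full matrix per expert to one shared matrix plus two thin factors per expert, while expert diversity survives in the deltas.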
DynaMind: Reasoning over Abstract Video Dynamics for Embodied Decision-Making | Accept (poster) | Summary: This paper proposes to encode the manipulation video into `"dynamic representation" by assigning weight to frames. Leveraging this representation, future states are predicted, which are then used to output the action for robot control. The weight of each frame are determined by the variance and similarity betw... | Rebuttal 1:
Rebuttal: We appreciate Reviewer mTuA’s recognition of the novelty. Please find our responses to each comment below.
>Seen manipulation tasks during training; lacks evaluation on unseen tasks for generalization.
- Randomized initializations within seen tasks are a standard evaluation protocol. In Table 1, t... | Summary: This paper proposes a novel method to leverage video data for decision making. To address the gap between abstract language and complex video, the paper proposes to learn abstract dynamic representations for video, rather than making language more detailed. The dynamic representation is learned by assigning hi... | Rebuttal 1:
Rebuttal: We sincerely thank Reviewer fGaZ for the valuable feedback. Below, we respond to each comment and will revise the paper accordingly.
>Works like LAPO, LAPA, and IGOR use video to learn latent actions for decision-making. PIDM relates to future prediction.
These works, like ours, target video-base... | Summary: This paper proposes the DynaMind framework for video dynamic abstraction and reasoning, aiming to extract key dynamic information from long-horizon videos for future prediction and decision-making. First, a FrameScorer mechanism is designed to evaluate the importance of video frames based on visual saliency an... | Rebuttal 1:
Rebuttal: We thank Reviewer kRnT for recognizing the novelty and presentation of our work. We address each concern below and will revise the paper accordingly.
>Comparison with more recent imitation learning methods from the past two years
We agree that including comparisons with more recent imitation lear... | Summary: This paper aims to address the mismatch problems between abstract languages and the rich content of videos. It proposes dynamic abstraction to represent spatiotemporal latents as a substitute for videos. It generates dynamic abstraction by learning semantic consistency and visual saliency and learns the agent ... | Rebuttal 1:
Rebuttal: We thank Reviewer VsTD for the feedback. To ensure clarity, some responses are stated directly—we appreciate your understanding.
>The fixed window size is less flexible
- Fixed window sizes are standard practice, used in baselines like LISA and SkillDiffuser, and in some video understanding work.
... | null | null | null | null | null | null |
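The frame scorer described in the DynaMind row above weights frames by visual saliency and inter-frame similarity. A toy numpy scorer in that spirit (our simplification, not DynaMind's exact formulation; the variance/difference features and softmax weighting are illustrative):

```python
import numpy as np

def frame_scores(frames, beta=1.0):
    # Combine per-frame variance ("saliency") with dissimilarity to the
    # previous frame (semantic change), then softmax into frame weights.
    flat = frames.reshape(len(frames), -1)
    variance = flat.var(axis=1)
    diffs = np.zeros(len(frames))
    diffs[1:] = np.linalg.norm(np.diff(flat, axis=0), axis=1)
    s = variance + beta * diffs
    e = np.exp(s - s.max())
    return e / e.sum()

rng = np.random.default_rng(0)
frames = np.repeat(rng.standard_normal((1, 6, 6)), 5, axis=0)  # static clip
frames[3] += 2.0 * rng.standard_normal((6, 6))  # abrupt change at frame 3
w = frame_scores(frames)
```

Frames where the scene actually changes receive most of the weight, so a long near-static clip is abstracted into a handful of dynamically salient frames.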
Simple Graph Contrastive Learning via Fractional-order Neural Diffusion Networks | Reject | Summary: This paper introduces a novel augmentation-free GCL framework. Unlike traditional GCL methods that rely on complex augmentations or negative sampling, this framework uses Fractional Differential Equations to generate different feature views.
Claims And Evidence: The experimental results demonstrate competitiv... | Rebuttal 1:
Rebuttal: Thank you for the insightful comments and suggestions.
**W1**. Why FDE for GCL.
A core principle of GCL is to generate diverse views, with novelty in how they are constructed. FD-GCL uses neural diffusion-based encoders governed by FDEs, where fractional order $\alpha$ controls diffusion scale—e... | Summary: This paper proposed a simple and effective augmentation-free graph contrastive learning framework, which uses Fractional Differential Equations induced graph neural diffusion models . By varying the order parameter, this method generates diverse views that capture both local and global graph information, elim... | Rebuttal 1:
Rebuttal: Thank you for the insightful comments and suggestions.
**W1**. The novelty of FD-GCL
A general guiding principle for GCL is to generate views from diverse perspectives, with the *novelty lying in how these views are generated*. For example, PolyGCL uses polynomial filters for low-pass and high-pass s...
Rebuttal: Thank you for the insightful comments and suggestions.
**W1**. The novelty of FD-GCL
FD-GCL's novelty is not merely replacing components of existing augmentation-free or negative-free pipelines, but introducing a new perspective for encoder design: generating distinct views via diffusion dynamic... | null | null | null | null | null | null | null | null |
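FD-GCL, per the exchange above, generates contrastive views by varying the fractional order of a diffusion process. A minimal numpy sketch of the underlying idea, fractional powers of the normalized graph Laplacian giving propagation operators at different diffusion scales (an illustration of the mechanism; FD-GCL's learned neural diffusion encoders are not shown):

```python
import numpy as np

def frac_laplacian(A, alpha):
    # Fractional power L^alpha of the symmetric normalized Laplacian via
    # eigendecomposition; alpha tunes the effective diffusion scale.
    d = A.sum(axis=1)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))
    L = np.eye(len(A)) - D_inv_sqrt @ A @ D_inv_sqrt
    w, V = np.linalg.eigh(L)
    w = np.clip(w, 0.0, None)          # guard tiny negative round-off
    return (V * w**alpha) @ V.T

# 4-cycle graph; two "views" from different fractional orders
A = np.array([[0, 1, 0, 1], [1, 0, 1, 0], [0, 1, 0, 1], [1, 0, 1, 0]], float)
X = np.eye(4)
view1 = (np.eye(4) - 0.5 * frac_laplacian(A, 0.5)) @ X
view2 = (np.eye(4) - 0.5 * frac_laplacian(A, 1.5)) @ X
```

Two orders give two genuinely different propagation operators over the same graph, so distinct views come for free from the dynamics rather than from data augmentation.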
Enhancing Parallelism in Decentralized Stochastic Convex Optimization | Accept (poster) | Summary: This paper presents Decentralized Anytime SGD, a decentralization optimization algorithms that is based on Anytime SGD. The authors presents the convergence analysis of Decentralized Anytime SGD. Decentralized Anytime SGD achieves linear speedup and has a better sample complexity than that of D-SGD under the n... | Rebuttal 1:
Rebuttal: We thank the reviewer for their time and feedback. We address the reviewer’s questions individually:
**Experiments**: We have included experiments evaluating our method on both a synthetic, convex least squares problem and non-convex neural network training. We refer the reviewer to our response ... | Summary: The paper proposes Decentralized Anytime SGD, a novel algorithm for decentralized optimization. The algorithm is based on the Anytime SGD algorithm proposed by Cutkosky (2019). The paper provides the convergence rate of their method for convex functions, showing improvement over D-SGD in the middle convergence t... | Rebuttal 1:
Rebuttal: We thank the reviewer for their time and valuable input. We address the reviewer’s concerns and questions separately:
**Correctness of the proof and Lemma A.1**: We divide our answer into 2 parts:
- First, the reviewer’s concern about the appearance of the term $x_{-1}$ in the analysis of [1] ref... | Summary: The paper introduces Decentralized Anytime SGD (DAT-SGD) to enhance parallelism in decentralized stochastic convex optimization (SCO).
**Main Findings**
DAT-SGD extends the parallelism threshold to $O(\rho\sqrt{N})$, matching centralized learning, while prior decentralized methods were limited to $O(\rho^{1/2}N^{1/4})$.
**Main Results**
... | Rebuttal 1:
Rebuttal: We appreciate the reviewer’s constructive feedback. It appears that the reviewer’s primary concern is the lack of experimental results. In response, we have included experiments evaluating our method on both a synthetic convex problem and non-convex neural network training. We refer the reviewer t... | Summary: The paper studies an anytime variant of decentralized SGD. It achieves bounds allowing a larger number of nodes successfully team up in decentralized training. It does so by using gradients at averaged query points, thus improving the consensus distance and thus convergence under large number of nodes, which i... | Rebuttal 1:
Rebuttal: We thank the reviewer for acknowledging our contributions and for the positive feedback. As suggested, we provide experiments to evaluate our method, including a synthetic least squares problem and an image classification task. All experiments are run with 3 random seeds, and we report the average... | null | null | null | null | null | null |
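Anytime SGD, which the DAT-SGD discussion above builds on, steps the iterates using gradients queried at their running average, and it is the average point whose suboptimality the analysis controls. A single-node, deterministic sketch (the decentralized gossip averaging is omitted; the quadratic objective, step size, and horizon are illustrative):

```python
import numpy as np

def anytime_gd(grad, x0, eta=0.02, T=3000):
    # Query gradients at w_t, the running average of the iterates x_t,
    # instead of at x_t itself (after Cutkosky, 2019).
    x = float(x0)
    xs = [x]
    w = x
    for t in range(T):
        x = x - eta * grad(w)          # step the iterate with grad at the average
        xs.append(x)
        w = w + (x - w) / (t + 2)      # incremental uniform mean of x_0..x_{t+1}
    return w, xs

w_final, xs = anytime_gd(lambda w: w, x0=1.0)  # f(x) = x^2 / 2, optimum at 0
```

Averaging the query points is what smooths the consensus error in the decentralized variant, allowing more nodes to contribute before communication noise dominates.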
Exploring and Mitigating Adversarial Manipulation of Voting-Based Leaderboards | Accept (oral) | Summary: This paper focuses on the adversarial manipulation of voting-based LLM leaderboards, e.g., Chatbot Arena. Intuitively, keeping the model's response anonymous is essential to ensure the integrity of the leaderboard. However, this paper demonstrated that an adversary can efficiently de-anonymize the responses an... | Rebuttal 1:
Rebuttal: We thank the reviewer for their positive feedback and questions. Our detailed responses are as follows:
> Q1: Given the efficiency of the identity-probing detector, what is the meaning of applying a more sophisticated training-based detector? Could the authors please explain the intuition/motivat... | Summary: It has become common for LLMs to be evaluated subjectively in crowd-sourced "arenas", which usually use elo-based scoring based on user preferences. The authors study this voting-based evaluation setting, and find that they are susceptible to adversarial manipulation through a two-step attack: (1) de-anonymizi... | Rebuttal 1:
Rebuttal: We thank the reviewer for their positive feedback and questions. Our detailed responses are as follows:
> Q1: Sections 4.2.3 and 4.3 could be revised to be clearer… Specifically, 4.3 should clarify that it is experimenting based off of the proposed defenses in 4.2.3 (and has nothing to do with 4.2... | Summary: This submission examines the susceptibility of voting-based LLM assessment platforms to adversarial interference, particularly emphasizing Chatbot Arena, a prominent platform that ranks language models according to human preferences.
The primary contributions of the paper are: (1) Evidence that users can effe... | Rebuttal 1:
Rebuttal: Thank you for your positive feedback and questions. Our detailed responses are as follows:
> Q1: The proposed technique demonstrates high accuracy in de-anonymizing model answers; nonetheless, LLMs are regularly updated. In what manner may the efficacy of your detection techniques be altered when... | null | null | null | null | null | null | null | null |
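The attack discussed above works because a de-anonymized target can receive rigged votes, and rating systems translate those votes directly into ranking gains. A toy sketch of that sensitivity using plain Elo updates (Chatbot Arena's actual Bradley-Terry fitting differs; the K-factor and vote count are illustrative):

```python
def elo_update(r_a, r_b, score_a, k=32.0):
    # One Elo update: score_a is 1.0 if A wins, 0.0 if A loses, 0.5 for a tie.
    expected_a = 1.0 / (1.0 + 10 ** ((r_b - r_a) / 400.0))
    delta = k * (score_a - expected_a)
    return r_a + delta, r_b - delta

# An adversary who can de-anonymize the target model always votes for it.
r_target, r_other = 1000.0, 1000.0
for _ in range(50):                     # 50 rigged ballots
    r_target, r_other = elo_update(r_target, r_other, score_a=1.0)
```

Even a modest number of always-for-the-target ballots opens a large rating gap, which is why de-anonymization plus targeted voting suffices to manipulate the leaderboard.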
From Mechanistic Interpretability to Mechanistic Biology: Training, Evaluating, and Interpreting Sparse Autoencoders on Protein Language Models | Accept (spotlight poster) | Summary: This paper shows that SAEs can be used to better understand PLMs. They show that the features are interpretable via a series of case studies, including some clean histograms of activating examples. They use the SAEs to make high level observations about the PLMs, such as by categorizing the feature activation ... | Rebuttal 1:
Rebuttal: We thank the reviewer for their suggestions and enthusiasm for our work.
We agree that one of the limitations of the paper is a heavy reliance on qualitative measures of feature interpretability. Towards making feature analysis more quantitative, we introduced the family specificity and activati... | Summary: The authors train SAEs on ESM-2 (a large protein language model), characterize the discovered features, and use these organized features to better understand how ESM-2 learns protein representations.
They also develop a visualization tool, and find SAE features that correspond to known properties such as thermostab... | Rebuttal 1:
Rebuttal: We thank the reviewer for their detailed reading and thoughts.
We acknowledge that the lack of other pLMs in our analysis beyond ESM-2 means that we should not make general conclusions about all pLMs. We intend to update the text to reflect this, replacing pLM with ESM-2 to narrow the scope of t... | Summary: The paper investigates the interpretability of protein language models by training sparse autoencoders on pLM latents (in particular from ESM2). The goal is to extract and analyze features that pLMs use to represent protein sequences, with the broader aim of linking these features to biological properties. The... | Rebuttal 1:
Rebuttal: We thank the reviewer for the thorough review and many good suggestions.
We agree with the reviewer’s points around the limitations of SAEs. The presence of a large number of family-specific features suggests that SAEs do indeed learn, or memorize, MSAs. Furthermore, the activation pattern of fam... | Summary: This paper studies sparse autoencoders trained on the protein language model ESM-2. They find that the SAEs contain a variety of generic and family-specific features, as well as features that can be used to identify sequence determinants of properties such as thermostability and subcellular localization. They ... | Rebuttal 1:
Rebuttal: We thank the reviewer for these thoughtful comments and suggestions.
We created an anonymous link for our InterProt visualizer at http://icml.interprot.com and hope that it can provide context on our manual interpretation process and showcase some interpretable features. To support our claims aro... | null | null | null | null | null | null |
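The sparse autoencoders analyzed above expand model activations into an overcomplete, mostly-inactive feature basis. A minimal numpy forward pass (untrained, random weights; the widths and the sparsity-inducing negative bias are illustrative, not the paper's training setup):

```python
import numpy as np

rng = np.random.default_rng(0)

d_model, d_sae = 32, 256   # pLM hidden size, overcomplete SAE width

W_enc = rng.standard_normal((d_sae, d_model)) / np.sqrt(d_model)
b_enc = -0.5 * np.ones(d_sae)   # negative bias pushes most units inactive
W_dec = W_enc.T.copy()          # tied-transpose initialization

def sae(x):
    # Forward pass: sparse ReLU features f, then linear reconstruction.
    # Training would minimize ||x - x_hat||^2 + lam * ||f||_1 (sketch only).
    f = np.maximum(W_enc @ x + b_enc, 0.0)
    return f, W_dec @ f

x = rng.standard_normal(d_model)
f, x_hat = sae(x)
sparsity = float((f > 0).mean())
```

Each of the few active units is a candidate interpretable feature; the paper's contribution is mapping such units to biological concepts like protein families, thermostability, and localization.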
LARM: Large Auto-Regressive Model for Long-Horizon Embodied Intelligence | Accept (poster) | Summary: In the open-world environment of Minecraft, this paper proposes the Large Auto-Regressive Model (LARM), which leverages the instruction-following and generalization capabilities of large language models to construct a Minecraft agent. Additionally, the paper introduces Referee RL to provide immediate feedback ... | Rebuttal 1:
Rebuttal: We believe the Reviewer has significant misunderstandings of this work. In the following, we address the concerns one by one using more precise explanations and sufficient experiments.
## Q1: Inference speed analysis
We did not explicitly compare the speed of our method with the counterparts bec... | Summary: This paper introduces a lightweight LLM-based agent that balances efficiency and generalization for long-horizon tasks. Using Referee RL, which employs a giant LLM for immediate feedback, LARM overcomes reward vanishment in reinforcement learning. Tested in Minecraft, it outperforms previous methods, achieving... | Rebuttal 1:
Rebuttal: We have addressed the concerns of the Reviewer one by one in the following. The paper will be revised accordingly.
## Q1: Real-time inference
The reported speed of 0.58 seconds per inference is the inference time of the high-level scheduling policy. Each high-level scheduling step corresponds to seconds ... | Summary: This paper focuses on long-horizon embodied intelligence, specifically Minecraft tasks. Previous works generally rely on the strong generalization of giant LLM agents, since the performance of lightweight LLMs such as LLaVA-7B is limited. However, this requires huge computing resources. In this paper... | Rebuttal 1:
Rebuttal: We have tested our method in more environments as suggested, and the details are given in the following. The paper will be revised accordingly.
## Q1: Experiment in a household simulator
We thank the Reviewer for this suggestion. As suggested, we conduct experiments in VirtualHome to further validate th... | Summary: The paper introduces LARM (Large Auto-Regressive Model), a lightweight LLM-based embodied agent designed for long-horizon decision-making in open-world environments.
LARM is built on a lightweight auto-regressive model (fewer than 5B parameters) and directly predicts actions instead of generating text like tr... | Rebuttal 1:
Rebuttal: We have addressed all concerns of the Reviewer in the following. The paper will be revised accordingly.
## Q1: Referee RL stable update proof
We thank the Reviewer for this reminder and will add this proof to the paper. Due to the reply character limit, we cannot provide the whole proof here, bu... | null | null | null | null | null | null |
Adversarial Combinatorial Semi-bandits with Graph Feedback | Accept (poster) | Summary: **Edit post-rebuttal: I thank the authors for their feedback, which answered my questions. I maintain my overall positive score.**
The submission considers adversarial combinatorial semi-bandits, with additional feedback, ranging from no additional feedback to full-information feedback. The two extreme cases ... | Rebuttal 1:
Rebuttal: The detailed review and insightful feedback from the reviewer are deeply appreciated. We have done several updates in light of the review in our revision.
- It is worth mentioning that our previous lower bound construction did not lead to the desired trade-off and was wrong. We corrected it by con... | Summary: The authors consider the problem of combinatorial semi-bandit with feedback graphs, where a graph over the $K$ actions may provide the learner with side information during the learning process. By presenting appropriate lower and upper bounds, the authors establish a minimax optimal regret bound for graphs con... | Rebuttal 1:
Rebuttal: We sincerely appreciate the reviewer's thorough review and comments and have done several updates in our revision in light of the review.
- We appreciate the reviewer's spotting the issue of a missing factor $S$ in the log term. We have corrected this in our revision.
- We agree that the lower bound i... | Summary: The paper extends the standard combinatorial semi-bandit problem by incorporating a feedback graph $G$ that allows the learner to observe rewards not just from the arms selected in the combinatorial action but also from their neighbors in $G$.
The authors show that the optimal regret scales as $S\sqrt{T} + \\...
Rebuttal: We sincerely appreciate the reviewer's detailed comments and, most importantly, their pointing out our error in the lower bound. We have the following updates in our revision in light of the review:
- We have corrected the lower bound. Specifically, we now construct $S$ independent sub-problems by a... | Summary: The paper studies the problem of semi-bandits, aiming to generalize bandit feedback and the full-information setup into a single framework. The problem is formulated as follows: Given a graph $G=(K,E)$, where $K$ represents the number of vertices and
$E$ is the set of edges, consider a $K$-armed bandit proble... | Rebuttal 1:
Rebuttal: We truly appreciate the reviewer's thorough review and inspiring questions. Please see the following for our updates in light of the review and our thoughts:
- We appreciate the reviewer's point on readability, and in our revision we have added a few sentences on $S$ and the fact that $S>1$ corresponds t...
Transfer Learning for Nonparametric Contextual Dynamic Pricing | Accept (poster) | Summary: The paper studies dynamic pricing problems using transfer learning techniques. The objective is to maximize expected total rewards and to minimize regret. The problem setup is stylized (monopoly scenario, time homogeneous demand), yet, reasonable. Numerical experiments are performed and results are compared to... | Rebuttal 1:
Rebuttal: We thank the reviewer for the detailed and helpful comments. The additional simulation results are given in this anonymized link. https://docs.google.com/document/d/e/2PACX-1vRBfGzJo3ETCltTWfOi_0p4RjLbsUJo6g9z0J-Ckm2m6fL0fahJWSrrptiFwOGCyxhtHNuyHsQP0tOh/pub
**Q1** Our theoretical guarantees hold ... | Summary: The authors study the problem of transfer learning in the context of dynamic pricing. Given a pre-collected source dataset, the authors propose an algorithm that exploits such a dataset to learn a partitioning of the joint context-price space in order to propose a price for each user with the goal of maximizin... | Rebuttal 1:
Rebuttal: We thank the reviewer for the detailed and helpful comments. The additional simulation results are given in this anonymized link. https://docs.google.com/document/d/e/2PACX-1vRBfGzJo3ETCltTWfOi_0p4RjLbsUJo6g9z0J-Ckm2m6fL0fahJWSrrptiFwOGCyxhtHNuyHsQP0tOh/pub
**W1)** The classic $0$ or $p$ revenue ... | Summary: The paper studies contextual dynamic pricing with nonparametric demands, a critical application in revenue management. The authors consider how transfer learning techniques can be applied for this problem, and achieve minimax optimal regret by devising a provably optimal online dynamic pricing algorithm while ... | Rebuttal 1:
Rebuttal: We thank the reviewer for the detailed and insightful comments.
**Comparing our work with Cai et al (2024).** Cai et al (2024) study transfer learning for nonparametric contextual MAB under covariate shift. In their setting, actions (i.e. arms) are discrete and for simplicity are treated as a co... | Summary: This paper introduces a novel Transfer Learning for Dynamic Pricing (TLDP) algorithm designed to effectively utilize pre-collected data from a source domain to improve pricing decisions in a target domain. The regret upper bound of TLDP is established under a straightforward Lipschitz condition on the reward f... | Rebuttal 1:
Rebuttal: Thank you for your appreciation, especially in acknowledging our presentation and our novelty. | null | null | null | null | null | null |
Towards LLM Unlearning Resilient to Relearning Attacks: A Sharpness-Aware Minimization Perspective and Beyond | Accept (poster) | Summary: This paper explores the robustness of large language model (LLM) unlearning against relearning attacks, which can effectively restore forgotten knowledge through minimal fine-tuning. The authors establish a connection between robust LLM unlearning and Sharpness-Aware Minimization (SAM), a technique designed to... | Rebuttal 1:
Rebuttal: We appreciate Reviewer s57’s careful evaluation of our work. The constructive criticism and insightful questions help us further improve the paper. We respond to each key question below.
1. **Response to the choice of attacks**
Thank you for raising this question. Based on your suggestion, we h... | Summary: The paper reveals that Sharpness-Aware Minimization (SAM), traditionally used for improving model generalization, naturally yields a robust optimization framework for LLM unlearning. Through experiments, the paper shows that SAM-enhanced unlearning methods result in smaller discrepancies between model performa... | Rebuttal 1:
Rebuttal: Thank you very much for the positive review. Your comment regarding the lack of theoretical claims has encouraged us to reflect on whether rigorous guarantees can be established to support the improved unlearning robustness enabled by SAM. While our strong empirical validation has already been ack... | Summary: This paper investigates improving the robustness of LLM unlearning against relearning attacks by incorporating sharpness-aware minimization (SAM) and other smoothness optimization techniques. The authors draw an analogy between robust unlearning and adversarial training, formulating the problem as a min-max op... | Rebuttal 1:
Rebuttal: We sincerely thank Reviewer RRKK for the thorough and thoughtful review. Below, we address each key point raised in the comments.
1. **Response to Computation Efficiency**
Thank you for your constructive feedback. **[Fig. R1](https://ibb.co/4Rqfnq8d)** presents the total run time of our propose... | Summary: This paper addresses the challenge of robust LLM unlearning, where undesired knowledge is removed from a large language model (LLM) without requiring full retraining. A key issue with existing unlearning methods is their vulnerability to relearning attacks, where a small fine-tuning step can restore forgotten ... | Rebuttal 1:
Rebuttal: We thank Reviewer ZQJD for the thorough review and the encouraging comments on our contributions and presentation. We also greatly appreciate the constructive feedback. Below, we address each key point raised in the comments.
1.**Regarding more robust unlearning methods and larger model evaluatio... | null | null | null | null | null | null |
Sketch to Adapt: Fine-Tunable Sketches for Efficient LLM Adaptation | Accept (poster) | Summary: This paper is concerned with parameter-efficient fine-tuning. It is argued in this paper that previous solutions in parameter-efficient fine-tuning are either low-rank or quantized. They are limited in applicability due to restrictive assumptions. In this paper, a sketchtune approach is proposed with the inspi... | Rebuttal 1:
Rebuttal: We thank the reviewer for recognizing the soundness of our work and providing thoughtful feedback. We address the reviewer’s concerns and suggestions below:
## **[Concern 1 - The sketch strategy is somehow hard to understand. Specifically, in Section 2.3, it is not very clear how the mapping matr... | Summary: The paper introduces a method that compresses pre-trained LLM weights row-by-row into a smaller, shared set of trainable “sketched” weights. They compress weights by approximately minimizing the reconstruction error for activations over a set of data. Experimentally, SketchTune shows advantages in terms of mo... | Rebuttal 1:
Rebuttal: Thank you for the thorough review and thoughtful feedback. Below, we address your questions and concerns.
## **[Theoretical Claims]**
The effectiveness of LoRA and SketchTune depends on the structure of the true update matrix $\Delta$. If $\Delta$ is low-rank, LoRA is favored; if $\Delta$ aligns... | Summary: The paper proposes SketchTune, which uses a learned sketching algorithm to compress the LLM into a small set of shared sketched parameters and fine-tune those parameters for adaptation. The proposed approach reduces model size while preserving the pre-trained capabilities of the full model.
## update after re... | Rebuttal 1:
Rebuttal: We thank the reviewer for the detailed review and insightful feedback. We address your concerns as follows.
## **[Theoretical Claims 1 - Ambiguous Title in Section 2.1 & More in-depth Empirical Observation Analysis]**
We thank the reviewer for the insightful suggestion. We conducted additional a... | Summary: The paper proposes an alternative to parameter efficient fine-tuning of LLMs by using sketching to create a low-dimensional representation of the weight matrices which is theoretically shown to be better for certain classes of matrices. Experiments on Llama models shows that the algorithm is able to outperform... | Rebuttal 1:
Rebuttal: We thank the reviewer for their thoughtful feedback and for acknowledging the empirical effectiveness of our method. We address your concerns and questions below:
## **[W1 - Sketch generation process appears to be more expensive than LoRA]**
While SketchTune introduces an additional sketching st... | null | null | null | null | null | null |
Text-to-LoRA: Instant Transformer Adaption | Accept (poster) | Summary: **Summary of Contributions:**
The paper introduces **Text-to-LoRA (T2L)**, a hypernetwork model designed to adapt Large Language Models (LLMs) on the fly based on natural language descriptions of target tasks. T2L aims to overcome the limitations of traditional fine-tuning by constructing Low-Rank Adaptation ... | Rebuttal 1:
Rebuttal: > - Zero-shot performance does not yet match that of task-specific LoRA adapters.
> - Reconstruction-trained T2L fails to generalize to unseen tasks.
> - Performance depends on the quality and alignment of the natural language task descriptions.
> - Relies on generated task descriptions for traini... | Summary: The authors explore whether it is possible to represent a set of lora parameters as task embedding(T2L). This allows many pre-trained LoRAs to be compressed, and potentially could generalize to new unseen tasks. They show it is possible to generalize to unseen tasks in this way. They analyze the generated LoRA... | Rebuttal 1:
Rebuttal: > “HyperDreambooth[1] is an interesting paper to cite”
We thank the reviewer for bringing the paper to our attention and will include this prior work in the camera-ready version.
---
> “Little ablation of the task embedder? Do better reasoning models generalize better?”
We agree with the review... | Summary: This paper proposes the T2L architecture and training methods to generate task-specific LoRA parameters from task embeddings. The authors claim that their approach enhances zero-shot performance by enabling on-the-fly adaptation through a single forward pass of a pretrained hypernetwork.
Claims And Evidence: ... | Rebuttal 1:
Rebuttal: > “I am curious whether there exist not identical but similar tasks in the training datasets compared to those in the evaluation benchmarks.”
We confirm that some test and training tasks are similar in that they are mostly multiple-choice question-answering tasks. Also, there are similar and over... | null | null | null | null | null | null | null | null |
STP: Self-play LLM Theorem Provers with Iterative Conjecturing and Proving | Accept (poster) | Summary: This paper introduces STP, a self-playing training framework for automated theorem proving. STP employs a conjecturer to generate new conjectures based on existing theorems and lemmas, while a prover attempts to prove previously unproven conjectures or statements in an iterative process. Experiments on the Lea... | Rebuttal 1:
Rebuttal: We thank reviewer 76YT for their positive review, and for noting “the paper is well-motivated and well-written, with comprehensive experiments”.
> I think the following work is not currently discussed in the paper: [1],
Thank you for the comments. We will include a discussion about [1] upon rev... | Summary: The paper proposes a novel method called **Self-play Theorem Prover (STP)**. STP addresses the shortage of high-quality training data in automated formal theorem proving by simultaneously training two roles: a **conjecturer** that produces new theorems (or “conjectures”) and a **prover** that attempts to prove... | Rebuttal 1:
Rebuttal: We thank reviewer vtLT for their review. In the following, we address the reviewer’s questions/comments in detail.
> Fig 2 - I am not sure how strong the baselines are (and how much effort has been given into tuning them)
We spent equal effort in tuning the baselines and STP (if not more effor...
Rebuttal: We thank reviewer dMfo for their positive review, and for noting “The experiment design overall is quite sound”. In the following, we address the reviewer’s questions/comments in detail.
> The authors do not include code to reproduce the results. I appreciate the details in the paper but releasin... | Summary: The paper introduces the Self-play Theorem Prover (STP), a novel method for training large language models (LLMs) in formal theorem proving, addressing the scarcity of high-quality training data. STP employs an LLM in two roles: a conjecturer that generates new mathematical conjectures based on existing theore... | Rebuttal 1:
Rebuttal: We thank reviewer ubJp for their positive review, and for noting “The claims are well-supported by clear and convincing evidence”. In the following, we address the reviewer’s questions/comments in detail.
> One claim appears problematic: "STP proves 26.3% of the statements in the LeanWorkbook dat... | null | null | null | null | null | null |
ENAHPool: The Edge-Node Attention-based Hierarchical Pooling for Graph Neural Networks | Accept (poster) | Summary: The paper proposes a methodology to perform hierarchical pooling in GNNs along with a message-passing layer that aims at reducing oversquashing (actually oversmoothing).
Claims And Evidence: No. There is an ablation study but I don't feel it covers all the claims and components of the proposed methodology.
An... | Rebuttal 1:
Rebuttal: Q1: There is an ablation study but I don't feel it covers all the claims and components of the proposed methodology.
A1: Due to space limitations, please refer to our response Q2 to Reviewer foip.
Q2: The contribution to the basic ML research.
A2: This paper proposes a new graph pooling that ca... | Summary: This paper introduces a novel graph pooling (ENAHPool), by combining the hard node assignment and the attention mechanism in an interesting way. Different from other pooling operations, the new ENAHPool can compress the nodes and edges into hierarchical graphs associated with the node and edge attention rather ... | Rebuttal 1:
Rebuttal: Q1: References for the new pooling operations.
A1: Thank you for your valuable suggestion. We have further investigated more recently proposed pooling operations and will refine the related work section in the final version.
Recent research on graph pooling has primarily focused on cluster-base... | Summary: This paper develops a novel graph pooling method, namely the ENAHPool, for graph classification associated with GNNs. Different from the previous graph pooling methods, the ENAHPool simultaneously integrates either node or edge attention for the hierarchical structure learning. In addition, it also designs an ass... | Rebuttal 1:
Rebuttal: Q1: The abstract is a little long.
A1: We will update the abstract in the final version to make it more concise and easier to understand.
Q2: More datasets are preferred for the ablation study.
A2: Thank you for your constructive suggestion. We have conducted the ablation experiments on all datasets.... | Summary: The paper proposes a cluster-based pooling method for graph neural networks (GNNs). The main feature of the proposal is that it performs a hard assignment of the input nodes, i.e., each node belongs to one cluster. Also, attention mechanisms are employed to build node features and adjacency matrix of the coars... | Rebuttal 1:
Rebuttal: Q1: Convincing evidence is needed to support the claim that soft assignment worsens the performance.
A1: Thanks for the suggestion. We conducted comparative experiments on all datasets to verify the positive impact of the hard assignment operation on model performance. However, due to time constraints, we o... | null | null | null | null | null | null |
GCAL: Adapting Graph Models to Evolving Domain Shifts | Accept (poster) | Summary: This paper introduces GCAL, a novel framework designed to address the challenge of continual domain adaptation in graph models, particularly in scenarios involving evolving, out-of-distribution graphs. GCAL employs a bilevel optimization strategy: the "adapt" phase fine-tunes the model on new graph domains whi... | Rebuttal 1:
Rebuttal: > **W1. It appears that the label is not given in the adaptation process, however, the label Y is explicitly referenced in the theoretical analysis. More explanation about how the labels are eliminated in this process should be added.**
We appreciate the reviewer’s observation. Indeed, in our set... | Summary: This paper introduces **Graph Continual Adaptive Learning (GCAL)**, a novel framework for continual domain adaptation in graph models, specifically addressing challenges in adapting to multiple out-of-distribution (OOD) graph shifts. The method employs a bilevel optimization strategy with two phases: (1) **Ada... | Rebuttal 1:
Rebuttal: > **Computational Efficiency**
Thank you for your valuable feedback. Our approach leverages a variational-based generation strategy, which is inherently designed to be efficient and scalable. This strategy allows for effective memory graph generation without significantly increasing the computati... | Summary: This paper proposes GCAL, a continual graph domain adaptation framework that mitigates catastrophic forgetting through bilevel optimization, integrating information maximization for adaptation and variational memory graph generation for knowledge replay. The approach is theoretically grounded in information bo... | Rebuttal 1:
Rebuttal: > **W1: The memory replay framework is commonly used in continual learning research. This paper does not introduce a fundamentally new framework in this regard.**
We acknowledge that memory replay is indeed a well-known approach to continual learning. We would like to clarify that **our novelty s... | Summary: This paper proposes Graph Adaptive Continual Learning (GCAL), extending the graph domain adaptation from single-step adaptation to continuous adaptation over a sequence of multiple domains. The proposed GCAL adopts a bi-level optimization strategy and consists of two phases. The adapt phase fine-tunes the give... | Rebuttal 1:
Rebuttal: > **W1: For task construction, it is unclear how the adopted datasets are constructed into different tasks with different distributions.**
**Q1: How are the datasets constructed into different domains, and how to ensure that the different domains have different distribution?**
We appreciate the... | null | null | null | null | null | null |
Hybrid Batch Normalisation: Resolving the Dilemma of Batch Normalisation in Federated Learning | Accept (poster) | Summary: The paper introduces Hybrid Batch Normalization (HBN) as a new normalization method designed to overcome the limitations of Batch Normalization in FL. In FL, client data is Non-IID, leading to a discrepancy between local and global statistics, which degrades BN’s performance. HBN addresses this issue by adapti... | Rebuttal 1:
Rebuttal: Thank you for your valuable contributions to improving this paper. In response to your suggestions, please find our detailed replies below.
**1. FedBN Baseline**
FedBN is designed for personalised FL without obtaining a unified global model, as it keeps BN parameters client-specific,
while focu... | Summary: This paper introduces Hybrid Batch Normalisation (HBN), a normalization technique designed to address the limitations of standard Batch Normalisation (BN) in federated learning (FL) with non-IID data. HBN separates the update of statistical parameters (means and variances) from learnable parameters, enabling u... | Rebuttal 1:
Rebuttal: We appreciate your thoughtful suggestions. Please find our response below.
**1. FedTAN Baseline**
FedTAN employs real-time communication to synchronise the use of shared global statistics.
However, to obtain these global statistics, FedTAN requires three rounds of communication per BN layer dur... | Summary: Due to the lack of a coherent methodology for updating BN statistical parameters, standard BN degrades the federated learning performance. This paper proposes Hybrid Batch Normalization (HBN), which separates the update of statistical parameters from learnable parameters and adaptively combines local batch sta... | Rebuttal 1:
Rebuttal: Thank you for your constructive comments and suggestions.
Please find our response below.
**1. Client Number Experiment**
We conducted comparative experiments on CIFAR-10 ($\beta$ = 0.6) across varying client numbers (10 clients sampled per round).
As shown in Table A, HBN consistently outperf... | null | null | null | null | null | null | null | null |
QuRe: Query-Relevant Retrieval through Hard Negative Sampling in Composed Image Retrieval | Accept (poster) | Summary: This paper proposes a QURE method to retrieve the target image and mitigate the false negatives for the task of Composed Image Retrieval (CIR). The authors introduce a hard negative sampling strategy that selects images positioned between two sharp relevance score drops after the target to filter false negativ... | Rebuttal 1:
Rebuttal: We sincerely thank Reviewer c7Tq for the positive feedback and for recognizing the novelty of our work in both the proposed method and the dataset.
**Q1 : There may be cases where there are no hard-negative samples?**
It is true that the number of hard negatives can vary significantly dependin... | Summary: The paper introduces the QURE algorithm, leveraging the BLIP-2 framework and a Hard Negative Sampling strategy to address the challenges in Cross-Image Retrieval (CIR). The novel approach is demonstrated using a custom dataset, HP-FashionIQ. While the approach is innovative and effectively addresses key pain p... | Rebuttal 1:
Rebuttal: We sincerely thank Reviewer 6GLy for recognizing the contributions of our work and for providing valuable suggestions.
**Q1 : Mathematical justification for the Hard Negative Sampling strategy**
We re-emphasize an important limitation in CIR datasets. Since only one or a few target images are... | Summary: This work introduces a new method of QuRe to tackle the problem of composed image retrieval. The proposed method adopts and tailors the hard negative mining to emphasize not only the ranking of the target image, but also other relevant images in the dataset, aiming at improving the overall recall. Experiments ... | Rebuttal 1:
Rebuttal: We sincerely thank Reviewer 13Df for the insightful comments and the time dedicated to reviewing our manuscript. Below, we provide detailed responses to each of your points.
**Q1 : Justification and thorough study to the motivation of the proposed method**
The primary motivation behind our meth... | null | null | null | null | null | null | null | null |
On the Tension between Byzantine Robustness and No-Attack Accuracy in Distributed Learning | Accept (spotlight poster) | Summary: This paper explores the trade-off between Byzantine robustness and standard accuracy in distributed learning. It provides a theoretical analysis of the error of robust aggregation methods when there are no Byzantine workers. In doing so, it establishes lower bounds on the deviation from the average of the dat... | Rebuttal 1:
Rebuttal: We sincerely thank the reviewer for their valuable time, insightful comments, and support of our work. We would like to answer the raised questions point by point as follows:
**Q1. I am unsure that the worst-case analysis is the best way to capture the accuracy vs robustness trade-off of Byzanti... | Summary: The paper analysis the learning error in distributed learning induced by robust aggregation schemes in the case when the actual number of Byzantine workers is 0, while the system is designed to handle a non-zero number of Byzantine workers $f$. The paper makes important contributions to the field of robustness... | Rebuttal 1:
Rebuttal: We sincerely thank the reviewer for their valuable time, constructive suggestions, and support of our work. We would like to respond point by point below.
**Comment 1. Impact on learning error under the more general heterogeneous setting of (G, B)-dissimilarity is missing.**
We agree with the re... | Summary: This paper examines distributed learning in a setting where the server implements a robust aggregation rule. Motivated by the Byzantine-robust learning framework, it evaluates the performance of distributed gradient descent (GD) methods designed to cope with Byzantine workers, even when none are present. The a... | Rebuttal 1:
Rebuttal: We sincerely thank the reviewer for the valuable time and the detailed review. We will respond to the raised concerns point by point as follows:
**Concern 1. The proof of Theorem 4.6 is of limited novelty as it can be directly deduced from [1].**
We thank the reviewer for letting us know their ... | Summary: This work studies robust aggregation methods in the Byzantine setting. Specifically, let $x_i \in \mathbb{R}^d$ be information held by worker $i$, and suppose that the goal is to compute the mean $\frac{1}{n}\sum_{i=1}^n x_i$. In the Byzantine setting, an unknown subset of $f$ workers are adversarially corrupt... | Rebuttal 1:
Rebuttal: We sincerely thank the reviewer for the insightful comments, the constructive suggestions, and the support of our work. We would like to respond to the raised questions point by point below.
**Comment 1. Some of the results on specific aggregation methods could be deferred to the appendix.**
In... | null | null | null | null | null | null |
MIRROR: Make Your Object-Level Multi-View Generation More Consistent with Training-Free Rectification | Accept (poster) | Summary: This paper introduces MIRROR, a training-free rectification to improve the consistency of multi-view generation. The main contributions can be divided into (1) the Trajectory Tracking Module (TTM) for pixel-wise trajectory tracking that labels identical points across views and (2) the Feature Rectification Module (... | Rebuttal 1:
Rebuttal: Thank you for your thorough analysis and constructive feedback on our paper. We will address the concerns you raised and hope our responses will clarify your doubts.
***Q1. Scale of Depth***
A1. Based on the camera parameters of the base model, we approximate relative-to-metric depth conversion... | Summary: The paper introduces MIRROR, a training-free, plug-and-play method that improves consistency in multi-view image generation using diffusion models. At its core, MIRROR uses two novel modules: the Trajectory Tracking Module (TTM), which pinpoints corresponding 3D points across views using depth maps, and the Fe... | Rebuttal 1:
Rebuttal: We greatly appreciate your thorough and detailed review, offering valuable insights on methodology, theory, experiments, and scalability, and thank you for recognizing our work!
***Q1. Limitation of TTM***
A1. (1) TTM is designed to ensure uniform geometry coverage and is theoretically extenda...
Rebuttal: We sincerely appreciate your valuable suggestions and the recognition of our work in methodology, theoretical proof, and experimental design. Below are our responses to your concerns, and we hope they help clarify any doubts you may have.
***Q1. More Metrics***
A1. On the one hand, for fairness... | Summary: This work introduces MIRROR, a training-free plug-and-play module designed to enhance the multi-view consistency of existing text-to-3D and image-to-3D diffusion models. In particular, MIRROR consists of two stages: the first stage leverages an off-the-shelf diffusion model to generate multi-view images, while... | Rebuttal 1:
Rebuttal: Thank you for your thoughtful feedback. We truly appreciate your acknowledgment of our motivation, methods, theoretical proof, and experimental results. Your encouraging comments are highly valued, and we are grateful for your insights!
***Q1. Comparison with Cross-view Attention***
A1. Cross-vi... | null | null | null | null | null | null |
Accurate and Efficient World Modeling with Masked Latent Transformers | Accept (poster) | Summary: This paper proposes EMERALD, a world model that can produce highly accurate rollouts. The architecture is similar to prior works on using transformers as world models, with the exception that it uses MaskGIT to do prediction rather than a naive raster-scan next token prediction scheme. The authors argue that M... | Rebuttal 1:
Rebuttal: Thank you for your positive feedback and valuable returns. Please find below our response to the concerns that you raised in the review.
> The results seem to suggest that EMERALD is only on par with the baseline models. This somewhat muddies the authors' claim that their method is better at pred... | Summary: The paper mainly focuses on an approach towards world modeling where the prediction of dynamics is done by a spatial maskGIT. This results in significant improvements on the Crafter benchmark when compared to other models, and performs well on Atari. This also results in improved efficiency over an existing ap... | Rebuttal 1:
Rebuttal: Thank you for reviewing our paper and providing valuable suggestions. Please find below our response to the concerns and questions that you raised in the review.
> Weakness 1
In Table 3, lines 2 and 4 compare the use of an RSSM or a TSSM for world modeling when using a spatial latent space. Usi... | Summary: This paper proposes a world model architecture in which spatial latent states are predicted using a MaskGIT predictor. Experiments are conducted on the Crafter benchmark, achieving superhuman performance.
Claims And Evidence: Partially. See Q1&2.
Methods And Evaluation Criteria: Partially. See Weakness 1.
T... | Rebuttal 1:
Rebuttal: Thank you for your constructive feedback on our paper. Please find below our response to the concerns and questions that you raised in the review.
> Weakness 1:
The Crafter benchmark evaluates a wide range of general abilities (survival, memory) and was used by $\Delta$-IRIS to evaluate its meth... | Summary: This paper introduces EMERALD, a world modeling approach that balances accuracy and efficiency. EMERALD leverages spatial latent states and MaskGIT-based prediction to generate precise trajectories in the latent space. By improving the perception of critical environmental details, EMERALD enhances the quality ... | Rebuttal 1:
Rebuttal: Thank you for reviewing our paper. Please find below our response to the concerns and questions that you raised in the review.
> in the ablation study, while EMERALD's advantage over RSSM-based frameworks does not stem from higher reconstruction fidelity, the paper does not discuss whether it res... | null | null | null | null | null | null |
Quantum Speedup for Hypergraph Sparsification | Accept (poster) | Summary: Graph sparsification has been extensively studied [SS11, BSS12, LS17] and has numerous applications in graph algorithms and machine learning. As a natural generalization of graphs, hypergraphs have gained increasing attention. Similarly, hypergraph sparsification has attracted significant interest following th... | Rebuttal 1:
Rebuttal: We sincerely thank the reviewer for their thorough evaluation and
constructive feedback. Below, we address the key concerns raised:
1. Essential References Not Discussed:
Thanks for pointing out these two references; we will add them in the
revision. We wi... | Summary: The authors claim to give the first quantum algorithm for hypergraph sparsification. Their main theorem claims that they can find a sparsifier of size $O(n/\epsilon^2)$ in time $O(r \sqrt{mnr} + r\sqrt{mn}/ \epsilon)$ with high probability. Besides the introduction, the paper is concerned with proving this res... | Rebuttal 1:
Rebuttal: Thank you for your comments and review. Feel free to reach out if
additional clarifications are needed. | Summary: This work introduces the first quantum algorithm for hypergraph sparsification, producing an $\varepsilon$-spectral sparsifier of size $\widetilde{O}(n / \varepsilon^2)$ in time $\widetilde{O}(r \sqrt{m n} / \varepsilon)$ for a weighted hypergraph with $n$ vertices, $m$ hyperedges, and rank $r$. This result de... | Rebuttal 1:
Rebuttal: We sincerely appreciate the reviewer’s thorough evaluation and constructive feedback. Below, we address the key concerns raised:
1. Unitary Operations:
The unitary operators $U_{\mathsf{mult}},U_{\mathsf{sum}},U_{\mathsf{div}},U_{\mathsf{square}},U_{\mathsf{minus}}$ are
quantum gate implementat... | Summary: Hypergraph sparsification is the process of reducing the number of hyperedges of a hypergraph while preserving (as much as possible) the energy of the hypergraph.
The paper introduces an algorithm for hypergraph sparsification, addressing an open problem proposed in a previous paper by Apers and de Wolf. More specifi... | Rebuttal 1:
Rebuttal: Thank you for your helpful comments and suggestions. We will carefully
revise our paper to correct all typos; in particular, we will change the
word "adopts" in the last paragraph of page 2 to "adopt". | null | null | null | null | null | null |
Consistent Multigroup Low-rank Approximation | Reject | Summary: The paper introduces the concept of "consistent multigroup low-rank approximation," which extends the principles of singular value decomposition (SVD) to handle data partitioned into multiple groups. The goal is to find a set of basis vectors that minimize the maximum reconstruction error across all groups whi... | Rebuttal 1:
Rebuttal: Thank you for your comprehensive review and for sharing new interesting ideas.
> "The convexity analysis indicates that the primal problem is non-convex, which might raise concerns about convergence guarantees for more than two groups. Can the authors provide comment and a numerical test to mitig... | Summary: This manuscript addresses the problem of consistent low-rank approximation for multigroup data. It aims to find a sequence of k basis vectors that treats all groups equally by minimizing the maximum error among them and satisfies the consistency property. The paper proposes an iterative algorithm that adds the... | Rebuttal 1:
Rebuttal: We sincerely thank you for your review and valuable comments, which will also be very useful in future work.
> "The method proposed by the author may bring new inspiration in solving large-scale problems?"
**Our method guarantees high scalability.** We appreciate the suggestion of the reviewer... | Summary: The paper studies the problem of estimating the multigroup singular vectors for the multigroup FAIR PCA method. The Frank-Wolfe method and the SDP relaxation method are both used to solve the min-max type nonconvex objective function.
## update after rebuttal
(Sorry for the late update.) The paper studies th... | Rebuttal 1:
Rebuttal: Thank you for your thorough review. We understand your concerns, and we believe that your feedback gives excellent input for improving the manuscript.
We begin by carefully addressing the reviewer’s main concern, i.e., the *novelty of our work in the context of previous work*, notably that of Sa... | null | null | null | null | null | null | null | null |
Alberta Wells Dataset: Pinpointing Oil and Gas Wells from Satellite Imagery | Accept (poster) | Summary: The paper presents the large-scale benchmark dataset Alberta Wells for pinpointing oil and gas well, comprising over 210,000 wells and including three classes (abandoned, suspended and active), which frames the problem of identification of wells as a challenge for object detection and binary segmentation. To c... | Rebuttal 1:
Rebuttal: Thank you very much for the thoughtful and constructive review. We appreciate your recognition of the dataset’s scale, quality, and importance for climate-relevant applications, as well as your positive assessment of our methodological rigor and experimental design.
Below, we address your quest... | Summary: This paper introduces the Alberta Wells Dataset, the first large-scale benchmark dataset for detecting oil and gas wells from satellite imagery. The dataset contains over 213,000 wells (abandoned, suspended, and active) across Alberta, Canada, represented in high-resolution (3m/pixel) multi-spectral satellite ... | Rebuttal 1:
Rebuttal: Thank you for your extremely thorough and thoughtful review. We very much appreciate the helpful and constructive feedback. We respond to specific comments and questions below:
## Architecture comparisons and hyperparameter tuning:
Thank you for raising this point. We agree that fully tuning hyp... | Summary: This work proposes a large-scale remote sensing multispectral dataset for pinpointing oil and gas wells. The data comes from real scenes, and the authors carefully designed a reasonable data filtering method and data split scheme to ensure the quality of the data. This work proposes binary segmentation and obj... | Rebuttal 1:
Rebuttal: Thank you for your thoughtful and constructive review. We're grateful for your recognition of the dataset's potential impact on climate change mitigation, as well as for your helpful suggestions. Below are our responses to the points you raised:
## Transformer-based models for well detection
Tha... | null | null | null | null | null | null | null | null |
Contextual Linear Bandits with Delay as Payoff | Accept (poster) | Summary: This paper investigates contextual linear bandits in which the payoff (loss or reward) is observed after a delay proportional to the payoff itself. This extends prior research on multi-armed bandits (MAB) with payoff-dependent delays. The authors propose a phased arm elimination algorithm for the non-contextua... | Rebuttal 1:
Rebuttal: Thanks for your valuable comments. We address the issues mentioned in your review below.
- **Q1: The reliance on mature techniques, such as the spanner method and phased arm elimination, limits the novelty of the proposed approach.**
While we agree that neither the volumetric spanner nor phased arm e... | Summary: This paper studies a contextual linear bandit setting where the reward/loss is delayed by a length of time proportional to the realised reward/loss. For this problem, the authors propose an arm elimination strategy and analyse the regret (including delay penalty) of the proposed algorithm. Experiments in a sim... | Rebuttal 1:
Rebuttal: Thanks for your valuable comments. We address the issues mentioned in your review.
- **Q1: It is not clear why it is hard to construct a lower bound equivalent to Eq.(2). In particular, why we cannot minimize over an appropriately defined confidence set.**
We emphasize again the difficulty of ob... | Summary: The paper extends the delay-as-payoff model (Schlisselberg et al., 2024) from standard multi-armed bandits (MABs) to contextual linear bandits. This setup arises in practical situations such as clinical trials and modeling time-to-event data for other medical procedures, advertising, wherein the delay in obser... | Rebuttal 1:
Rebuttal: Thanks for your valuable and positive comments and your acknowledgment on our initiation to study linear contextual bandits with payoff-dependent delay. We address the issues mentioned in your review below.
- **Q1: There is no comparison with other delayed linear bandit methods, even those assumi... | Summary: The authors try to extend the delay-as-payoff model to contextual linear bandits. The main novelty here is to apply a phased arm elimination procedure by only picking the **volumetric spanners** of the action set in order to handle both payoff-dependent delays and large action sets. Further extension is discus... | Rebuttal 1:
Rebuttal: Thanks for your valuable comments. We address the issues mentioned in your review below.
- **Q1: Why assume the delay as a linear function of payoff; other general models?**
Our goal is to extend the same delay-as-payoff model of Schlisselberg et al. (2024) from MAB to contextual linear bandits,... | null | null | null | null | null | null |
Lightweight Protocols for Distributed Private Quantile Estimation | Accept (spotlight poster) | Summary: The authors study the problem of estimating quantiles under local differential privacy and under shuffle differential privacy, with applications in distributed and private quantiles estimation.
To do so, the paper presents new algorithms.
The article presents both upper and lower bounds for the problems at ha... | Rebuttal 1:
Rebuttal: Thanks for your questions and valuable feedback!
> The structure of the paper makes it slightly difficult to follow, as the technical details of the algorithm are introduced quite late. This forces the reader to jump back and forth to understand how the challenge outlined in the introduction is a... | Summary: This paper considers the estimation of quantiles under the LDP framework with bounded integral data. It derives a series of lower bounds under both shuffle-DP and LDP, and proposes an LDP algorithm in an adaptive setting.
Claims And Evidence: Yes.
Methods And Evaluation Criteria: Yes.
Theoretical Claims: Ye... | Rebuttal 1:
Rebuttal: Thanks for your questions and your consideration of our paper
>For random variables with infinitely many possible values (e.g., Poisson), does the proposed algorithm or framework still apply?
Without any assumptions on the distribution of the random variable, known lower bounds (as we discuss un... | Summary: This paper studies quantile estimation under local differential privacy. They are interested in the sequentially adaptive local model, where the aggregator queries each user only once, but in rounds where the set of users and the randomizer they are asked to use can depend on information learned in previous ro... | Rebuttal 1:
Rebuttal: Thanks for your interest in our paper and your valuable feedback!
> The only one I was more unsure of was the shuffle model result, where I didn't fully understand the algorithm that they were using (the proof of Theorem 1.4 is rather vague on this point so fully specifying the algorithm would ma... | Summary: The paper studies the problem of finding quantiles with constraints of differential privacy. More specifically, it studies shuffle and local differential privacy.
The authors proved that the algorithms achieve higher utility than any known algorithm for the problem and also proved that the local DP algorithm’s b... | Rebuttal 1:
Rebuttal: Thanks for your interest in our paper! | null | null | null | null | null | null |
Uncertainty-aware Preference Alignment for Diffusion Policies | Reject | Summary: This paper proposes Diff-UAPA, focusing on handling inconsistent and diverse offline preference data across different user groups.
Building upon diffusion policies, the authors first propose a maximum likelihood estimation (MLE) setup for preference alignment and then augment it with the Beta prior to capture ... | Rebuttal 1:
Rebuttal: Dear Reviewer, we sincerely value your time and effort in evaluating our work. We have prepared comprehensive responses and clarifications to address each point you raised. We hope these responses can resolve your concerns.
> Q1. The authors could conduct more ablation studies or expand the range... | Summary: This paper proposes Diff-UAPA, an uncertainty-aware preference alignment method for diffusion policy, designed to address inconsistencies in preference pairs. Diff-UAPA uses a maximum posterior objective to align the diffusion policy with a regret-based preference model, incorporating an informative Beta prior... | Rebuttal 1:
Rebuttal: Dear Reviewer, we sincerely value your time and effort in evaluating our work. We have prepared comprehensive responses and clarifications to address each point you raised. We hope these responses can resolve your concerns.
> Q1. There are many methods designed to be robust against noisy preferen... | Summary: This paper proposes a method to align RL policy using human demonstration and preference feedback. The method works as follows: (1) learn a reference policy from a set of human demonstration trajectories via behavior cloning; (2) learn a prior distribution about the probability that a trajectory is preferred u... | Rebuttal 1:
Rebuttal: Dear Reviewer, we greatly appreciate your constructive comments. We have seriously considered your suggestions, and we hope the following response can address your concerns:
> Q1. The proposed method uses both the demonstration data and the preference data.
**A1.** Thank you for your comment. As... | null | null | null | null | null | null | null | null |
Data-Juicer Sandbox: A Feedback-Driven Suite for Multimodal Data-Model Co-development | Accept (spotlight poster) | Summary: The paper introduces Data-Juicer Sandbox, an open-source suite that supports co-development of multimodal data and models. It proposes a feedback-driven approach in which data processing and model training are iterated together, rather than in isolation.
A central component is the “Probe-Analyze-Refine” workf... | Rebuttal 1:
Rebuttal: We sincerely appreciate your time, thorough evaluation and valuable feedback! Below, we address all your raised Weaknesses (W), Comments (C), Questions (Q) and Suggestions (S).
> New results: https://anonymous.4open.science/r/icml_submission8414-7C89/rebuttal.pdf
---
## [C on Claims & Methods] ... | Summary: This paper introduces the data juicer sandbox, an open-source suite that analyzes various metrics and make use of heuristics to facilitate the integrated development of multimodal data and models. The proposed "Probe-Analyze-Refine" workflow was validated through image-text pre-training with CLIP, MLLMs, and t... | Rebuttal 1:
Rebuttal: We sincerely appreciate your time, insightful feedback, and recognition of the work's significance! Below, we address your comments point by point.
> The new results: https://anonymous.4open.science/r/icml_submission8414-7C89/rebuttal.pdf
### [Para 1 in Claims part] "generalizability is question... | Summary: This paper introduces a sandbox suite with a feedback-driven experimental platform, which supports cost-effective iteration and guided refinement of both data and models. The authors conduct experiments on image-to-text generation, text-to-video generation and image-text pretraining. The results demonstrate the... | Rebuttal 1:
Rebuttal: Thank you for recognizing our work as "valuable for improving research efficiency" and providing "valuable insights"! Below, we address the raised weaknesses (W) and questions (Q) with point-to-point clarifications.
> The mentioned new results: https://anonymous.4open.science/r/icml_submission841... | Summary: In their work, the authors describe and implement a method and procedures for data-model co-development, aiming at improving pretraining of various foundation model types (language-vision CLIP, diffusion-based text-to-video generative models, Llava-based image-text generative model). The framework the authors introduc... | Rebuttal 1:
Rebuttal: We are grateful for your recognition of the *method, evaluation, theoretical argument & exp design* of our work. Regarding your raised concerns, we address all of them point by point as follows:
> New results added: https://anonymous.4open.science/r/icml_submission8414-7C89/rebuttal.pdf
---
###... | Summary: This paper introduces Data-Juicer Sandbox, a feedback-driven suite for multimodal data-model co-development. The system integrates the data processing system with model-centric infrastructure, and designs a "Probe-Analyze-Refine" workflow to systematically explore the relationship between data processing opera... | Rebuttal 1:
Rebuttal: We sincerely appreciate your acknowledgment and constructive feedback on our work! We respond to the raised only concern with the following new experiments.
> The mentioned new results: https://anonymous.4open.science/r/icml_submission8414-7C89/rebuttal.pdf
---
### [Exp Design, Weakness & Sugges... | null | null | null | null |
Let LLM Tell What to Prune and How Much to Prune | Accept (poster) | Summary: This paper proposes a pruning method that targets multiple LLM modules with dynamic pruning ratios. It finds that the intrinsic properties of the LLM can help to determine the importance of each module and
thus distribute the pruning load on demand, i.e., what to prune and how much to prune. Extensive experiments on multiple benchm... | Rebuttal 1:
Rebuttal: Dear Reviewer 3YW3:
Thank you for your insightful review. All experimental result tables have been compiled and are available at the following available link: https://anonymous.4open.science/r/5BDF/README.md. We have thoroughly considered your concerns and respond to them as follows :
---
### ... | Summary: The paper proposes a structured pruning framework for large language models (LLMs) that dynamically determines "what to prune" (specific modules) and "how much to prune" (pruning ratios) based on their importance.
Specifically, the method employs TE to quantify block-layer interaction and information entropy ... | Rebuttal 1:
Rebuttal: Dear Reviewer nYT6,
Thank you very much for your valuable feedback. We first address the questions raised in the “Questions for Authors” and “Other Strengths and Weaknesses” sections. For any additional concerns mentioned in other parts of the review (if they are distinct from those already cove... | Summary: The paper introduces a new approach to pruning large language models (LLMs) that dynamically assigns pruning ratios to different components based on their importance.
There are two issues with the current pruning methods: (1) focusing on just one structure of the model; (2) using a prescribed pruning ratio.... | Rebuttal 1:
Rebuttal: Dear Reviewer FHHv:
We appreciate the reviewer’s constructive suggestions. Experimental results are available at the link: https://anonymous.4open.science/r/5BDF/README.md. We address the questions in “Questions for Authors” and “Other Strengths and Weaknesses” sections. For any additional con... | null | null | null | null | null | null | null | null |
Generalists vs. Specialists: Evaluating LLMs on Highly-Constrained Biophysical Sequence Optimization Tasks | Accept (poster) | Summary: This paper tackles the problem of biophysical sequence optimization - a task where even small deviations from stringent constraints (e.g., protein stability or solubility) can render a solution unusable. To bridge the gap between generalist LLM-based methods and specialist solvers, the authors introduce a synt... | Rebuttal 1:
Rebuttal: Thank you for your positive assessment of our work. We appreciate your recognition that our approach is well-motivated and technically sound.
## On the applicability of synthetic benchmarks to real-world cases
You raised an important question about whether our synthetic benchmarks apply to real-... | Summary: This paper investigates the use of large language models (LLMs) as black-box sequence optimizers for biophysical sequence design and optimization. The authors compare generalist LLM-based approaches with specialized optimization methods, such as LaMBO-2, to determine whether LLMs can efficiently optimize under... | Rebuttal 1:
Rebuttal: Thank you for your positive assessment and constructive feedback. We appreciate your recognition of our thorough experiments and well-defined benchmarks.
## On testing with real-world biological datasets
You noted that our evaluation relies on synthetic Ehrlich functions rather than real-world b... | Summary: The authors introduce Ehrlich functions, a novel synthetic function suite designed to simulate the properties of biological sequences and to facilitate benchmarking of generative algorithms for sequence optimization. They also propose a bilevel LLM-based solver, LLOME, which leverages a new preference loss cal... | Rebuttal 1:
Rebuttal: Thank you for your positive assessment and thoughtful questions. We're pleased you recognize the value of our contributions and appreciate your suggestions for strengthening our work.
## On success rates and feasibility
You raised an important point about success rates in real-world settings wit... | Summary: This paper introduces a new synthetic test suite (Ehrlich functions) that captures the geometric structure of biophysical sequence optimization problems, proposes a framework LLOME (Language Model Optimization with Margin Expectation), a bilevel optimization routine for online black-box optimization, and uses ... | Rebuttal 1:
Rebuttal: Thank you for your thoughtful assessment of our work. We appreciate your recognition of our technical contributions and would like to address your concerns.
## On the choice of Ehrlich functions over existing benchmarks
To validate the real-world applicability of Ehrlich functions, we conducted ... | null | null | null | null | null | null |
KVTuner: Sensitivity-Aware Layer-Wise Mixed-Precision KV Cache Quantization for Efficient and Nearly Lossless LLM Inference | Accept (poster) | Summary: This paper proposes a multi-objective optimization-based algorithm to search for the optimal layer-wise mixed precision KV cache quantization configuration. The authors observe that key caches generally require more bits for quantization than value caches, and thus propose allocating more bits to the key cache... | Rebuttal 1:
Rebuttal: We sincerely thank you for your thorough feedback. Below, we address the concerns raised and outline revisions to improve the clarity and rigor of our work.
---
# 1. Key Cache Importance
* The reviewer correctly observes that in certain layers (e.g. Layer 0,1,2,31 of Llama-3.1-8B-Instruct), the ... | Summary: This paper proposes an innovative quantization technique for KV caches, which can improve inference throughput with a negligible quality drop in the output.
This paper's key insight is that the key cache is more important than the value cache in terms of reducing the quantization error. Its key contributio... | Rebuttal 1:
Rebuttal: Thank you for your thorough review and constructive feedback. We sincerely appreciate your acknowledgment of the innovativeness and feasibility of the proposed methodology and theoretical analysis of attention patterns to KV cache quantization, as well as your recognition of the comprehensive impl... | Summary: The authors propose KVTuner, a sensitivity-aware layer-wise mixed-precision KV cache quantization framework for LLM inference. KVTuner addresses key challenges in KV cache quantization, including layer-wise sensitivity to quantization errors, high overhead of fine-grained online adjustments, and inflexibility ... | Rebuttal 1:
Rebuttal: We sincerely thank you for the thoughtful feedback and constructive critiques. Below, we address each concern and outline planned revisions to strengthen the paper:
---
# 1. Computational cost
**The profiling and layer-wise KV cache precision tuning are completely offline, with no online overhead f... | Summary: The authors propose KVTuner, a sensitivity-aware layer-wise mixed-precision KV cache quantization framework for LLM inference. KVTuner addresses key challenges in KV cache quantization, including layer-wise sensitivity to quantization errors, high overhead of fine-grained online adjustments, and inflexibility ... | Rebuttal 1:
Can Large Language Models Understand Intermediate Representations in Compilers? | Accept (poster) | Summary: This paper presents an empirical study of the capability of LLMs to understand intermediate representations (IRs) of code. The LLMs are evaluated on 4 types of tasks of IR understanding: control-flow graph (CFG) reconstruction, IR decompilation, code summarization and execution reasoning. The results indicate ... | Rebuttal 1:
Rebuttal: We sincerely thank the reviewer for the thoughtful and constructive feedback. Below, we address the key concerns:
**W1:** The paper only uses HumanEval, which contains code functions with an average of fewer than 10 lines.
**A1:** Though HumanEval consists of relatively short functions, our ... | Summary: The paper experiments with applying LLMs to the control flow graph of programming code, identifying key challenges of control flow, semantic understanding, and loop handling. These challenges, as analyzed through 4 tasks, seem to permeate over a variety of language models including Coda Llama, Gemma 2, and GPT... | Rebuttal 1:
Rebuttal: We thank the reviewer for the thoughtful feedback and recognition of our novel application of LLMs to compiler IRs. We also appreciate the acknowledgement of our focus on task-specific analysis. We address the key concerns below.
**Q1 & W1:** Are there any non-LLM approaches that would be applic... | Summary: The paper provides an empirical evaluation of current LLMs on IR understanding tasks, namely --
- CFG reconstruction
- decompilation
- code summarization, and
- execution reasoning
and find that models struggle with complex reasoning about IRs
Claims And Evidence: - Pioneering empirical study to investiga... | Rebuttal 1:
Rebuttal: We sincerely thank the reviewer for the thoughtful and constructive feedback. We appreciate your recognition of the novelty of applying LLMs to compiler IRs and the value of our in-depth evaluation. Below, we address the key concerns:
**Q1:** Can you provide more details on the level of prompting... | Summary: The authors explored the capabilities of large language models (LLMs) in understanding intermediate representations (IRs), primarily for applications such as code comprehension, optimization, and automated reasoning. Their findings indicate that while LLMs are proficient in understanding static IR features and... | Rebuttal 1:
Rebuttal: Thank you very much for your detailed review and positive feedback on our work. We greatly appreciate your recognition of the novelty and significance of our study, as well as your thorough evaluation of our experimental setup. We would like to address your comments as follows:
**W1:** The study... | null | null | null | null | null | null |
Beyond One-Hot Labels: Semantic Mixing for Model Calibration | Accept (poster) | Summary: This paper proposes Calibration-aware Semantic Mixing (CSM), a model calibration approach using diffusion-based data augmentation, akin to “semantic mixup”. Unlike traditional one-hot labeling, CSM generates mixed samples with soft labels using CLIP. The authors introduce a reannotation technique using CLIP features a... | Rebuttal 1:
Rebuttal: ## Response to Reviewer jJxJ
Thanks for your helpful suggestions! Here’s our response:
**Q4-1**: Training computational cost and memory usage compared to existing methods.
**A4-1**: As also analyzed in **A2-3**, we compare the computational efficiency in terms of the training time in **A2-3 Tab... | Summary: This paper introduces a novel framework, Calibration-aware Semantic Mixing (CSM), designed to improve model calibration. The key contribution lies in addressing the limitations of one-hot labeled datasets by proposing a data augmentation technique that leverages semantic mixing to generate diverse samples via ... | Rebuttal 1:
Rebuttal: ## Response to Reviewer WPRF
Thank you for your encouraging feedback on the clarity, soundness, and comprehensive evaluation of our work. We truly appreciate your thoughtful suggestions for clarity and comprehensive validation. Here are our responses to the suggestions:
**Q3-1**: Clarify Proposi... | Summary: This paper presents Calibration-aware Semantic Mixing (CSM), a novel approach to improving model calibration by generating high-quality augmented data with soft labels. Unlike traditional augmentation methods that rely on one-hot labels, CSM leverages diffusion models to create semantically mixed images with c... | Rebuttal 1:
Rebuttal: ## Response to Reviewer xs6o
Thank you for your positive and insightful feedback! Here are our responses:
**Q2-1**: Additional details and proofs in the supplementary material.
**A2-1**:
Thank you for the kind comments on the theoretical soundness. The claims made in our paper (including dedu... | Summary: Model calibration typically assumes full certainty in datasets with one-hot labels, limiting accurate uncertainty estimation. To address this, the paper introduces Calibration-aware Semantic Mixing (CSM), a data augmentation framework that synthetically generates diverse training samples annotated with explici... | Rebuttal 1:
Rebuttal: ## Response to Reviewer TsGd
Thank you for your kind suggestions on clarity and experimental thoroughness. Below are our responses:
**Q1-1**: Clarification on the reason that the proposed L2 loss is a balanced loss.
**A1-1**:
Thank you for this nice concern. We need to clarify that there exist... | null | null | null | null | null | null |
Reliable Image Quality Evaluation and Mitigation of Quality Bias in Generative Models | Reject | Summary: This paper introduces the Difference in Quality Assessment (DQA) score, which is designed to evaluate the reliability of evaluation metrics such as the Fréchet Inception Distance (FID). Additionally, the DQA framework aids in identifying more reliable image encoders, thereby enhancing the robustness of evaluat... | Rebuttal 1:
Rebuttal: ## Additional Reference
Thanks for suggesting a missing reference. Although our paper already includes references related to energy-based guidance in text-to-image models—such as Composing Diffusion Models [1], Self-Guidance [2], and Universal Guidance [3]—the suggested reference [4] is indeed a v... | Summary: This paper proposes a Difference in Quality Assessment (DQA) measure that quantifies the reliability of existing quality evaluation metrics for generative models. The authors present a problem in generation model evaluation, i.e., the demographic bias. They find that conventional quality assessment measures ar... | Rebuttal 1:
Rebuttal: ## Extension of Demographic Groups
Thank you for raising this point. Quality bias is not limited to gender; it also extends to other demographic attributes such as race. In our study, we consider four racial groups: Asian, Black, Caucasian, and Indian. We explore two possible directions for extend... | Summary: The paper aims to address the issue of quality disparities in image generation models, proposing the DQA score as a method for assessing the reliability of evaluation metrics, and introducing DQA-Guidance to mitigate quality bias in diffusion models. The core contributions are the DQA metric and its applicatio... | Rebuttal 1:
Rebuttal: ## Novelty of the Paper
To the best of our knowledge, this paper is the first to address fairness issues in the evaluation metrics used for generated images.
We distinguish two types of bias:
- **(a) Bias in the evaluation metric**
- **(b) Bias in quality in the generated image**
Although (b) ha... | Summary: This paper introduces DQA, a novel scoring method designed to assess the reliability of image quality evaluation metrics, particularly in the context of generative models. DQA aims to address the bias present in metrics like FID when evaluating image quality across different demographic groups. The core idea ... | Rebuttal 1:
Rebuttal: ## Adjustment in the Controlled Dataset
The degradations we introduce are well-established in diffusion-based generative models literature.
- **Weak Classifier-Free Guidance (CFG)**
In CFG, using weak guidance simulates a scenario where the generated image loses coherence with the prompt.
- *... | null | null | null | null | null | null |
Cross-environment Cooperation Enables Zero-shot Multi-agent Coordination | Accept (oral) | Summary: This paper proposes to train agents in self-play on a large distribution of environments to enhance the agents' coordination ability with unseen teammates in unseen environments. Experiments on a toy grid-world game and Overcooked demonstrate the effectiveness of the proposed method.
Claims And Evidence: Yes.... | Rebuttal 1:
Rebuttal: Thank you for your time and effort in reviewing our paper, and for recognizing its strengths in clarity, empirical validation, and the intriguing insights it provides.
# New experiments
Based on your suggestions, we’ve added the following results to broaden the experimental scope to more complex... | Summary: This work presents cross-environment coordination as an alternative to population based training for enabling smooth coordination with unseen partners. They find that (pre-)training on a diverse set of environment configurations on Overcooked with a single learning partner enables agents to work in new environ... | Rebuttal 1:
Rebuttal: We would like to thank the reviewer for their thoughtful and constructive feedback on our paper, and address your comments below:
# ZSC vs Ad-hoc Teamplay
We sincerely thank you for clarifying the distinction between these two evaluation settings. From our understanding, our use of the empirical... | Summary: This paper studies a novel multi-agent training paradigm, Cross-Environment Cooperation (CEC), where the learning agent learns to work with a single partner across different variations of the environment. This is in contrast with prior work in the literature that focuses on training an agent that can adapt to ... | Rebuttal 1:
Rebuttal: Thank you for your interest in our work and recognizing the novelty of our training and evaluation approaches. We are glad you found our writing to be clear and our experiment section to be thorough. Your idea for framing cross-environment cross-partner evaluations as a novel contribution is one w... | Summary: This paper proposes Cross-Environment Cooperation (CEC) as a way of improving agents’ generalization to unseen agents (the ad hoc teamwork problem) and unseen environments.
The proposed method consists of a procedural generator that varies Overcooked initial states, over which an IPPO team learns via self... | Rebuttal 1:
Rebuttal: We thank the reviewer for their detailed feedback and recognition of our work’s novelty and clarity. Below, we address key concerns and how we plan to integrate this feedback:
# Experiments Beyond Overcooked
To help address your suggestions, we have conducted **3 new experiments showing that our m... | null | null | null | null | null | null |
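The CEC row above describes self-play over procedurally generated environment variations. As a purely illustrative sketch (not the paper's actual Overcooked generator; `random_layout` and its fields are made-up names), a distribution over environments can be as simple as a seeded generator of randomized grid layouts:

```python
import random

def random_layout(width, height, n_counters, seed=None):
    """Toy procedural generator: place counters and two agent spawn points
    at distinct random cells of a grid, yielding a fresh layout per seed."""
    rng = random.Random(seed)
    cells = [(x, y) for x in range(width) for y in range(height)]
    picks = rng.sample(cells, n_counters + 2)  # sampling without replacement
    return {"counters": picks[:n_counters], "spawns": picks[n_counters:]}

# A distribution over environments = a stream of seeds.
layouts = [random_layout(5, 5, n_counters=6, seed=s) for s in range(100)]
```

Training on many such seeds, rather than one fixed layout, is the crux of the cross-environment idea the reviews discuss.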
Survival Analysis via Density Estimation | Accept (poster) | Summary: The authors consider the problem of survival analysis with competing and potentially dependent risks. The paper has two main contributions. First, the authors propose a two-step plug-and-play method which uses the output of a generic density estimator and transforms it into an estimate of the joint survival fu... | Rebuttal 1:
Rebuttal: Thank you for your comments. We appreciate your remark, "The two-step method proposed by the paper is flexible and innovative," and we are grateful for the suggestions to improve the presentation of our paper.
> The suite of evaluation metrics are extensive and well-explained in the paper. The da... | Summary: **Summary:**
The authors introduce an algorithm that can post-process density estimators to perform survival analysis. Additionally, a relaxed assumption to the typical conditional independence between event times is used, by modelling the joint distribution using copulas. Several generalisations and scenarios... | Rebuttal 1:
Rebuttal: Thank you for your comments. We hope our answers resolve your concerns regarding the clarity of our paper.
> Only two datasets are evaluated --- more datasets would make sense for the problem at hand.
As we state in the last paragraph of Section 6, we include additional evaluation results in the... | Summary: This paper presents a framework that reframes survival analysis as a density estimation problem. By post-processing density estimates to derive survival functions, the approach enables the use of any density estimation model for survival analysis, including handling competing risks and dependent censoring.
Cl... | Rebuttal 1:
Rebuttal: > The experimental evaluation involves only a few datasets, raising concerns about the generalizability of the approach.
As we state in the last paragraph of Section 6, we include additional evaluation results in the appendix. Specifically, we present our experimental results (Fig. 6) on four da... | null | null | null | null | null | null | null | null |
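The survival-analysis row above describes a two-step method that post-processes a generic density estimate into a survival function. A minimal one-dimensional sketch of that transform, assuming a fitted density `f` (here a toy Exponential(1), whose true survival curve is exp(-t)) and using S(t) = 1 - ∫₀ᵗ f(u) du with trapezoidal integration:

```python
import numpy as np

def survival_from_density(density, t_grid):
    """Post-process a density estimate into a survival curve:
    S(t) = 1 - integral_0^t f(u) du, computed with the trapezoidal rule."""
    f = density(t_grid)
    increments = (f[1:] + f[:-1]) / 2 * np.diff(t_grid)
    cdf = np.concatenate([[0.0], np.cumsum(increments)])
    return 1.0 - cdf

t = np.linspace(0.0, 5.0, 501)
S = survival_from_density(lambda u: np.exp(-u), t)  # close to exp(-t)
```

Any density estimator that exposes an evaluable `density` could be plugged in; the competing-risks and copula machinery in the paper generalizes this one-dimensional step.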
No Free Lunch from Random Feature Ensembles: Scaling Laws and Near-Optimality Conditions | Accept (poster) | Summary: The paper investigates the random-feature ridge regression between using a single large model versus multiple smaller models (ensembles). The authors demonstrate that ensembles can achieve near-optimal performance when the total feature count remains high in the overparameterized regime, while in the underpara... | Rebuttal 1:
Rebuttal: Thank you for your review. We respond to your questions and concerns as follows:
> In Section D.1, D.2, the authors should clearly point out where are the experimental results.
**Response:** Thank you for your suggestion. We will update the text to point this out.
Section D.1 on synthetic tasks... | Summary: ## Updates after author discussion
Thanks a lot for all the clear discussion. A lot of my issues/confusion with the paper have been addressed in the comments, and I'm convinced that the theory just needs some cleaning up to be fully clear. The paper then tells an interesting -- and, to my knowledge, novel -- ... | Rebuttal 1:
Rebuttal: Thank you for your detailed review. Below, we address your questions and concerns:
> "The one portion ... would be helpful."
**Response**: Thank you for this suggestion. We will add more justification. While ensembles are never optimal, they allow parallelization and can be *near-optimal*, henc... | Summary: In the context of random feature high-dimensional ridge regression, this paper investigates the problem of training an ensemble of independent models and the trade-off between ensemble size and model size for a fixed total number of features. The authors prove a 'no free lunch' theorem, showing that increasing... | Rebuttal 1:
Rebuttal: Thank you for your review! | Summary: The paper investigates the performance of random feature ensembles, and discusss whether the ensemble models outperform the single model when the number of total parameter is fixed. The theoretical analysis is given based on the random feature ridge regression, while the empirical studies are performed on bina... | Rebuttal 1:
Rebuttal: # Rebuttal to Reviewer 2SLL (Complete)
Thank you for your review and questions. It appears that you are convinced of the correctness of our results, but have concerns about the significance of the contribution. We will respond to your concerns and questions individually below:
>1. The contribu... | null | null | null | null | null | null |
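The trade-off studied in this row — one large random-feature ridge regressor versus an ensemble of smaller ones under a fixed total feature budget — can be sketched in a toy experiment. All names, the ReLU feature map, and the data-generating process below are illustrative, not the paper's exact setup:

```python
import numpy as np

rng = np.random.default_rng(0)

def rf_ridge_predict(X_tr, y_tr, X_te, n_feat, lam=1e-3):
    """One random-feature ridge regressor: random ReLU features + ridge fit."""
    W = rng.normal(size=(X_tr.shape[1], n_feat)) / np.sqrt(X_tr.shape[1])
    Phi_tr, Phi_te = np.maximum(X_tr @ W, 0), np.maximum(X_te @ W, 0)
    A = Phi_tr.T @ Phi_tr + lam * np.eye(n_feat)
    w = np.linalg.solve(A, Phi_tr.T @ y_tr)
    return Phi_te @ w

def ensemble_error(K, total_feat, X_tr, y_tr, X_te, y_te):
    """Split a fixed feature budget across K independent predictors, average."""
    preds = [rf_ridge_predict(X_tr, y_tr, X_te, total_feat // K) for _ in range(K)]
    return np.mean((np.mean(preds, axis=0) - y_te) ** 2)

d, n = 10, 200
X_tr, X_te = rng.normal(size=(n, d)), rng.normal(size=(n, d))
target = lambda X: np.tanh(X @ np.ones(d))
y_tr, y_te = target(X_tr), target(X_te)
errs = {K: ensemble_error(K, 256, X_tr, y_tr, X_te, y_te) for K in (1, 4, 16)}
```

Sweeping `K` at fixed `total_feat` is exactly the ensemble-size vs. model-size axis the theory analyzes; the paper's claim is about how `errs` behaves as this split varies.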
SparseLoRA: Accelerating LLM Fine-Tuning with Contextual Sparsity | Accept (poster) | Summary: The paper proposes to accelerate LoRA fine-tuning with contextual sparsity. Tailored for fine-tuning, they propose a lightweight, training-free SVD sparsity estimator to reduce computation overhead. Experimental results show that they can speed up LoRA fine-tuning by 1.4x.
Claims And Evidence: Yes
Methods An... | Rebuttal 1:
Rebuttal: We sincerely appreciate the reviewer’s thoughtful feedback and recognition of our approach. Below, we address each of the concerns in detail:
> "S2FT: Efficient, Scalable and Generalizable LLM Fine-tuning by Structured Sparsity" is a recent paper published in Arxiv in December. However, it is off... | Summary: The paper introduces a method to accelerate fine-tuning of large language models (LLMs) by leveraging contextual sparsity. Unlike existing parameter-efficient fine-tuning (PEFT) methods such as LoRA and DoRA, which reduce memory usage but not computational cost, SparseLoRA optimizes both memory and computation... | Rebuttal 1:
Rebuttal: > While the computation speedup is clear, the memory analysis and comparison to the baselines are missing
Our approach uses LoRA for fine-tuning, so the memory profile remains the same as LoRA. Sparsifying the main branch does not affect memory usage.
> Include comparisons with additional PEFT m... | Summary: Previous parameter-efficient fine-tuning (PEFT) methods, such as LoRA and its variants, have primarily focused on memory efficiency and lightweight storage. However, these approaches do not necessarily lead to faster fine-tuning. This paper introduces SparseLoRA, a novel technique that accelerates fine-tuning ... | Rebuttal 1:
Rebuttal: We appreciate the reviewer’s comments:
> Fixed learning rate instead of hyperparameter sweeps.
For concision we only include mean performance. Here V1 refers to results in Table 1-2 of the paper and V2 is the new "conservative" config in our response to Reviewer as16
### LLaMA3-8B LR Sweep (Mat... | Summary: The paper proposes a framework for accelerating the fine-tuning large language models by structured pruning of pretrained weight matrices, and using dense and trainable LoRA adapters. The core proposed idea is to estimate the importance of each channel in a pretrained weight matrix, prune the unimportant chann... | Rebuttal 1:
Rebuttal: We sincerely appreciate the reviewer's comments.
> L2 norm vs. Random pruning in Self-Attention blocks
L2 norm shows clear benefits for FFN blocks, while its gains over Random pruning in Self-Attention blocks are modest. We use a unified L2-based criterion to avoid over-engineering and ensure br... | null | null | null | null | null | null |
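The rebuttal above refers to a unified L2-norm channel criterion for sparsifying computation. A hypothetical minimal version of such a criterion — score each channel of an intermediate activation by its L2 norm over the batch and keep only the top fraction — might look like this (function and variable names are made up):

```python
import numpy as np

def topk_channel_mask(acts, keep_ratio=0.25):
    """Score each channel by its L2 norm over the batch; keep the top fraction.
    acts: (batch, channels) array of intermediate activations."""
    scores = np.linalg.norm(acts, axis=0)          # one L2 score per channel
    k = max(1, int(keep_ratio * acts.shape[1]))
    mask = np.zeros(acts.shape[1], dtype=bool)
    mask[np.argsort(scores)[-k:]] = True           # top-k channels survive
    return mask

rng = np.random.default_rng(0)
acts = rng.normal(size=(8, 64))
acts[:, :4] *= 10.0  # make the first 4 channels clearly dominant
mask = topk_channel_mask(acts, keep_ratio=0.25)
```

Downstream, only the masked channels would be computed, which is where the fine-tuning speedup would come from.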
CtrlSynth: Controllable Image Text Synthesis for Data-Efficient Multimodal Learning | Accept (poster) | Summary: This paper proposes CtrlSynth, a image-text synthesis pipeline designed for efficient and robust multimodal learning. Specifically, CtrlSynth decomposes an image's visual semantics into basic elements and recompose them to generate images or texts. With these synthetic data, the performance of CLIP-based model... | Rebuttal 1:
Rebuttal: Thank you for highlighting that our experiments are comprehensive. We have added a detailed explanation below:
> Did the author try using both CtrlSynth-mix and original image-text pairs?
**Response 1**: Yes, all our reported results for CtrlSynth-mix include both synthetic and original image-te... | Summary: The paper introduces CtrlSynth, a controllable image-text synthesis framework designed to enhance data efficiency and address challenges in training robust vision-language models. By decomposing visual semantics into modular elements (objects, attributes, relations) and enabling fine-grained control over synth... | Rebuttal 1:
Rebuttal: Thank you for your review. We clarify the filtering threshold and computation costs below:
>The paper does not clarify whether the “label existence ratio threshold” (used for filtering visual tags) generalizes across datasets. Experiments focus on common benchmarks (e.g., ImageNet, COCO), but dom... | Summary: The paper introduces CtrlSynth, a closed loop framework to generate synthetic data in both text and images. The core idea of the work is to decompose an image into granular components (objects and relationships) and re-compose them based on user-specified controls. This is facilitated through the use of founda... | Rebuttal 1:
Rebuttal: Thank you for acknowledging the effectiveness of our method in the current setting.
>The method lacks any comparison to other relevant tasks, such as text to image generation or text-based image editing. Firstly, since CtrlSynth can generate both images and text as part of its pipeline, comparin... | Summary: This paper proposes CtrlSynth to build a closed-loop data generation pipeline.
Building upon powerful foundation models, this approach generates diverse synthetic data samples depending on the text or image.
It first breaks down the visual elements into visual tags, and exploits them with a user control to... | Rebuttal 1:
Rebuttal: We appreciate your feedback and have provided additional clarification below.
>It is unclear why the re-synthesized data from existing images helps address the long-tail problem.
**Response 1**: Our visual tagging model (VTM) identifies and extracts fine-grained, long-tail concepts from existin... | null | null | null | null | null | null |
Thinking LLMs: General Instruction Following with Thought Generation | Accept (poster) | Summary: This paper introduces Thinking LLMs, a novel approach aimed at improving general instruction following in large language models (LLMs) by explicitly incorporating internal thought processes before generating responses. Traditional LLMs respond directly to user instructions without intermediate reasoning steps,... | Rebuttal 1:
Rebuttal: We thank the reviewer for the constructive feedback. Below, we address your concerns and propose our revisions.
---
> Failure case analysis is limited. The paper does not discuss scenarios where TPO might fail, such as backtracking-heavy tasks or reasoning tasks requiring multiple revisions. Au... | Summary: Authors propose TPO, a method that finetunes an instruction-tuned LLM to output discrete thought tokens for harder tasks, without any supervision signal. The model undergoes iterative RLFAI preference learning, where the reward model comes from a judge model that judges based on the LLM’s final answer. Finally... | Rebuttal 1:
Rebuttal: We sincerely thank the reviewer for the valuable and insightful feedback. We address your concerns as follows:
---
> How would simply doing this SFT compare with using TPO?
That's an insightful question regarding potential training alternatives. The Llama 70B model in our study functions strict... | Summary: The paper proposes a method to enhance LLMs by enabling them to "think" explicitly before generating responses. This is aimed at improving performance on complex tasks requiring reasoning and planning, as well as general instruction-following tasks. The authors introduce the so-called Thought Preference Optimi... | Rebuttal 1:
Rebuttal: We sincerely thank the reviewer for the thoughtful and constructive feedback. We address each concern raised and propose revisions:
---
> Are the thoughts interpretable and aligned with the optimal or correct human-like reasoning steps?
While our methodology does not impose explicit constraints... | Summary: This paper presents a method and studies how to get LLMs to output initial thought traces before a final answer on instruction-following tasks. Their main idea is to prompt LLMs to initially produce these thought traces before a final response, score just the final response with an LLM-as-a-judge, and train ov... | Rebuttal 1:
Rebuttal: We sincerely thank the reviewer for the thoughtful and constructive feedback. We address your concerns and propose corresponding revisions:
---
> Lack of model support. All the experiments for the model generation are done on 1 model: Llama 3.1 8B Instruct...
We acknowledge the reviewer's conce... | null | null | null | null | null | null |
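One mechanical detail in the TPO setup described above is that the judge scores only the final response, not the preceding thought. A toy sketch of that separation step, assuming an illustrative `<response>` delimiter convention (not necessarily the paper's actual format):

```python
def split_thought(generation, marker="<response>"):
    """Separate the hidden 'thought' prefix from the final response; only the
    response text would be passed to the judge model."""
    thought, _, response = generation.partition(marker)
    return thought.strip(), response.strip()

gen = "Let me consider edge cases first... <response> The answer is 42."
thought, response = split_thought(gen)
```

Because preference rewards are computed on `response` alone, the thought text is optimized only indirectly, which is the core of the method's unsupervised thought learning.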
DRAG: Data Reconstruction Attack using Guided Diffusion | Accept (poster) | Summary: This paper proposes DRAG, a new data reconstruction attack under the guidance of diffusion models. This method utilizes the rich prior knowledge embedded in the latent diffusion model and firstly reconstructs data from vision foundation models. Experiments have shown the superiority of DRAG to some extent.
Cl... | Rebuttal 1:
Rebuttal: Thank you for your valuable feedback. We address your concerns point by point below.
---
> 1. Related to the evaluation metrics
The choice of metrics is highly application dependent, and our selections were guided by prior works in this area. In our study, we focused on MS-SSIM, LPIPS, and DINO ... | Summary: - This paper is about reconstruction attacks in split inference (SI) configurations. Specifically, this paper studies reconstructing a datapoint given the intermediate representation of that datapoint in a deep models
- The paper proposed guided diffusion to do this attack (DRAG), where the guidance term is gi... | Rebuttal 1:
Rebuttal: Thank you for your valuable feedback. We address your concerns point by point below.
---
> it is unclear what dataset the GAN used in GLASS is trained on, and how that compared to the dataset that the diffusion model DRAG is trained on, and should be mentioned in more detail.
In our evaluation o... | Summary: This paper proposes a data reconstruction attack in split inference. The proposed method is based on guided diffusion, which leverages the rich prior knowledge embedded in a latent diffusion model (LDM) pre-trained on a large-scale dataset. The proposed method performs iterative reconstruction on the LDM’s lea... | Rebuttal 1:
Rebuttal: Thank you for your valuable feedback. We address your concerns point by point below.
---
> The proposed method makes sense for the problem and application. However, the paper lacks details on the attack framework. The authors only refer to Figure 2 for the attack framework. In Figure 2, why all...
Rebuttal: Thank you for your valuable feedback. We address your concerns point by point below.
---
> Fig. 2 is not very effective for understanding, even though it is drawn simply. The caption should include additional explanations.
We agree that enhancing the caption and associated text will improve clar... | null | null | null | null | null | null |
Editable Noise Map Inversion: Encoding Target-image into Noise For High-Fidelity Image Manipulation | Accept (poster) | Summary: This paper proposed a new inversion-based image/video editing method called ENM inversion. The motivation is to improve text alignment with the target text prompt. The authors proposed editable noise refinement, which conducts inference-time optimization on the intermediate latents. The proposed results achieved...
Rebuttal: We sincerely appreciate you taking the time to review our research. Below, we have provided responses to points raised.
**Claims And Evidence:**
> I'd like to see if it works for more editing tasks: (1) adding object, e.g., adding a hat. (2) multi object editing: e.g., you have a blue toy holding... | Summary: This paper propose ENM Inversion, a technique for high-quality real image editing. By refining noise maps to align with both the source and target images, ENM Inversion encodes the target image more effectively into the noise maps, allowing for high-quality edits while preserving the source image's details.
#... | Rebuttal 1:
Rebuttal: We sincerely thank the reviewer for taking the time to evaluate our research. Below are our responses to all the points raised:
**Paper Weaknesses:**
> What does $Z_t^s$ represent in Figure 2? There is no clear definition of this symbol.
**Answer:**
We sincerely thank the reviewer for the detail... | Summary: The paper introduces Editable Noise Map Inversion (ENM Inversion), a technique that improves both reconstruction quality and editing capabilities in diffusion-based image editing. ENM optimizes noise maps during inversion by minimizing the differences between reconstructed and edited versions, effectively enco... | Rebuttal 1:
Rebuttal: First of all, we sincerely appreciate your time and effort in reviewing our research. Below, we provide responses to all the points raised.
**Other Strengths And Weaknesses:**
> The discussion of method efficiency is insufficient. While competing methods require only one inversion calculation per... | null | null | null | null | null | null | null | null |
Unified Screening for Multiple Diseases | Accept (poster) | Summary: The problem of screening for multiple diseases is formalized as an optimization problem, specifically for the case where policies for each disease are predefined and the task is to decide which policies to activate given a vector of prior risks. Under a fixed budget and a few simplifying assumptions like seque... | Rebuttal 1:
Rebuttal: We thank the reviewer for the thorough review of our paper and constructive comments.
**Limited results:** In (1), we formalize the joint screening and diagnosis problem. This is distinct from (2), which focuses on the referral problem. We choose to solve (2) rather than (1) because our aim is n... | Summary: This paper proposes a framework for unified screening of multiple diseases under budget constraints and competing risks. The authors formulate this as a referral problem where they choose which screening policies to activate based on patient risk profiles. They characterize optimal decision boundaries for the ... | Rebuttal 1:
Rebuttal: Thank you for the thorough review of our paper and constructive comments.
**Figure 1** 1(a) shows the decision boundaries for independent screening (current standard). 2(a) shows the boundaries that characterize the optimal policy for our referral problem (2). Our main contribution is to mathema... | Summary: This article offers a novel optimization framework for the complex task of unified screening for multiple diseases, attempting to balance multiple factors including disease risk, budget, and diagnostic test characteristics.
Claims And Evidence: The author... | Rebuttal 1:
Rebuttal: We thank the reviewer for the thorough review of our paper and the constructive comments.
**Why screening of one disease depends on the screening of another:** In the example of pulmonary hypertension and cardiac disease, risks are clearly contingent. Our approach is not limited to such examples... | null | null | null | null | null | null | null | null |
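To make the budgeted-referral idea in this row concrete, here is a toy greedy allocation — explicitly not the paper's optimal policy, and with entirely made-up numbers — that activates per-patient screening policies in order of expected detections per unit cost until the budget is exhausted:

```python
def greedy_referral(candidates, budget):
    """candidates: list of (patient, disease, risk, cost, test_sensitivity).
    Greedily activate screenings ranked by expected detections per unit cost."""
    ranked = sorted(candidates, key=lambda c: c[2] * c[4] / c[3], reverse=True)
    chosen, spent = [], 0.0
    for c in ranked:
        if spent + c[3] <= budget:  # activate only if it fits the budget
            chosen.append(c)
            spent += c[3]
    return chosen, spent

candidates = [
    ("p1", "cardiac", 0.30, 2.0, 0.9),
    ("p1", "pulm_ht", 0.05, 1.0, 0.8),
    ("p2", "cardiac", 0.10, 2.0, 0.9),
    ("p2", "pulm_ht", 0.20, 1.0, 0.8),
]
chosen, spent = greedy_referral(candidates, budget=3.0)
```

The paper's contribution is characterizing the *optimal* decision boundaries for this kind of competing-risks, budget-constrained problem; a greedy heuristic like the above merely illustrates why activating one screening can crowd out another.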
Self-Improving Language Models for Evolutionary Program Synthesis: A Case Study on ARC-AGI | Accept (poster) | Summary: This paper proposes SOAR, a framework for program synthesis that enhances language models through a self-improving evolutionary loop. Specifically, SOAR alternates between using an LLM for evolutionary search and applying hindsight learning to fine-tune its generation and refinement capabilities. This process ... | Rebuttal 1:
Rebuttal: We thank the reviewers for their thoughtful comments.
Reviewer cz1F noted that SOAR achieved state-of-the-art results among open-source inductive approaches on the ARC benchmark by transcending the limitations of base models through iterative self-improvement. They also noted the quality of our... | Summary: The paper introduces SOAR, a method for program synthesis that extends existing LLM-based methods by introducing an iterative fine-tuning approach. Recent LLM-based program synthesis work has relied on two methods: (1) directly querying the LLM in-context by expressing the task as language (possibly after fine... | Rebuttal 1:
Rebuttal: We thank the reviewer for their helpful feedback.
The Reviewer noted the importance of the problem we tackle and the quality and pedagogy of our experimental section but raised several concerns which we address below.
**Controlling for compute costs:** We thank the reviewer for raising this poi... | Summary: The paper introduces a novel framework for program synthesis that integrates large language models (LLMs) into a self-improving evolutionary loop. The framework alternates between two phases: (1) an evolutionary search phase using an LLM to generate and refine candidate programs for a given task, and (2) a lea... | Rebuttal 1:
Rebuttal: We thank Reviewer FfTX for their helpful feedback.
The reviewer noted that the approach is reasonable, intuitive and novel. They commented on the strength and extensive details of our experimental studies, acknowledging that they supported our claims. This said, they raised several concerns that... | Summary: This paper introduces SOAR, a framework for self-improving program synthesis tested exclusively on the Abstraction and Reasoning Corpus (ARC). SOAR operates in two phases: program search phase and learning phase. In the program search phase, it generates lots of candidate Python programs and selectively refine... | Rebuttal 1:
Rebuttal: We thank Reviewer 5bhq for their time and constructive feedback.
The reviewer understood our work and noted the strength of our experimental study and how its results support our claims. This said, the reviewer raised several concerns that we have addressed below.
**Are results controlled for ... | Summary: This paper introduces SOAR (Self-improving Operators for Automated program Refinements), a framework that enhances language models' program synthesis capabilities through an iterative self-improvement process.
- SOAR alternates between a search phase (using a language model to generate and refine candidate so... | Rebuttal 1:
Rebuttal: We thank the reviewer xjKq for their review of our manuscript and appreciate the recognition of the key aspects of our approach.
The review itself did not include any critique or suggestion for improvement, but it did not come with the highest recommendation either ("Weak accept").
This decisio... | null | null | null | null |
Learnware Specification via Dual Alignment | Accept (poster) | Summary: The learnware system is a model reuse system which is designed to choose the optimal model from a model repository based on rules derived from user datasets. The core of this system lies in the use of specifications for model selection. This paper introduces a novel specifications generation method called Dual... | Rebuttal 1:
Rebuttal: Thanks for the valuable feedback and appreciation of our work. We hope that our responses could mitigate your concerns.
Q1: Question on the mixed task setting
Ans: In the experiments of this paper, we set up four label spaces. It is not difficult to find that the label space A(B) contains the lab...
Rebuttal: Thanks for the insightful feedback and the interest in our work! We hope our responses can address your concerns.
Q1: Questions on differences with similar works.
Ans: The method proposed by [Tan et al. 2024a] adds conditional distributions from the pre-trained model's output labels to the margi... | Summary: The paper shows that existing specification methods primarily rely on distribution alignment to generate specifications and introduces DALI, which incorporates both discriminative and distribution alignments in the process. Theoretical and empirical results demonstrate that DALI improves specification quality,... | Rebuttal 1:
Rebuttal: Thanks for your detailed feedback, and we hope our responses will address your concerns.
Q1: The theoretical analysis is unrelated to the subsequent model search and reuse.
Ans: Our theoretical analysis is closely related to subsequent model search and reuse. Notably, the quality of the generate... | null | null | null | null | null | null | null | null |
On the Importance of Gaussianizing Representations | Accept (poster) | Summary: The authors propose adding a "Gaussianizing" step into normalisation layers such as batchnorm, which transforms the features so that they are approximately Gaussian-distributed. Specifically, they use the "power transform" originally proposed in the field of hypothesis testing, but propose approximating its ob... | Rebuttal 1:
Rebuttal: Dear Reviewer ps2s,
We address all of your comments below.
>
>Regarding the use of data augmentations in the ResNet experiments.
>
We have run an experiment to verify the performance of ResNet18 x CIFAR10 using BatchNormalNorm (BNN), and contrast this with a well-documented baseline - which is in... | Summary: Full disclosure: I was a reviewer for a paper for ICLR 2025 which seems to be largely mirroring this paper and I assume that this is a resubmission of that paper (I'm reviewer G4ZL here: https://openreview.net/forum?id=9ut3QBscB0)
This paper introduces normality normalization as a new type of normalization... | Rebuttal 1:
Rebuttal: Dear Reviewer 7TPa,
We address all of your comments below.
>
>Regarding the baseline performance levels, and the code snippet you provided.
>
To address your inquiry regarding the baselines, we ran experiments with the additional use of mixup (Zhang et al. 2017) for several of the model & dataset... | Summary: The paper proposes a normality normalization that enforces Gaussian feature distribution using a power transform and additive Gaussian noise. The motivation for using the normal distribution is to enhance the model's robustness to random perturbations, improving generalization.
## update after rebuttal
The a... | Rebuttal 1:
Rebuttal: Dear Reviewer zhYV,
We address all of your comments below.
>
>"The Q-Q plots and $R^2$ metrics do not serve as a proper multivariate Gaussianization metric. Perhaps, special statistical tests should be employed, e.g., the Henze-Zirkler test"
>
We very kindly note that we in fact already did precis... | Summary: The paper presents a novel approach to improving the feature representations in deep neural networks by encouraging normality in activations. The authors introduce Normality Normalization (NormalNorm), a normalization technique based on the power transform to Gaussianize feature distributions and enhance robus... | Rebuttal 1:
Rebuttal: Dear Reviewer Eseb,
We address all of your comments below.
>
>"While Normality Normalization improves robustness to random noise, its effectiveness against adversarial perturbations is not fully examined."
>
and
>
>"Adversarial Robustness Claims: The paper suggests that Normality Normalization im... | null | null | null | null | null | null |
Beyond Matryoshka: Revisiting Sparse Coding for Adaptive Representation | Accept (oral) | Summary: This paper presents a novel approach to the problem of learning adaptive-length representations. While previous methods, particularly MRL, have shown good performance, this work carefully studies the utility of high-dimensional but sparse representations, as opposed to lower dimensional but dense representatio... | Rebuttal 1:
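As a rough illustration of the sparse-but-high-dimensional idea contrasted with MRL above, a TopK-style sparse encoding of a frozen dense embedding might look as follows; the random encoder `W`, the sizes, and `k` are hypothetical (CSR itself learns its encoder with a contrastive objective):

```python
import numpy as np

def topk_sparse_code(z, W, k):
    """Map a dense embedding z (d,) to a high-dimensional code (m,) with at
    most k nonzeros; varying k at inference trades accuracy for cost."""
    a = np.maximum(W @ z, 0.0)           # ReLU pre-activations, m >> d
    idx = np.argpartition(a, -k)[-k:]    # positions of the k largest values
    s = np.zeros_like(a)
    s[idx] = a[idx]
    return s

rng = np.random.default_rng(0)
z = rng.normal(size=64)                  # frozen dense embedding (illustrative)
W = rng.normal(size=(1024, 64))          # overcomplete encoder (illustrative)
s = topk_sparse_code(z, W, k=32)
```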
Rebuttal: Thanks for your detailed reading and valuable comments. We will address your concerns as follows.
---
**Q1** Typos in lines 69, 272 and 297
**A1.** Thanks for your suggestions and we will fix the typos in the revision.
---
**Q2** The experiment that requires careful attention is the one proposed... | Summary: In this paper, the authors propose Contrastive Sparse Representation (CSR) as an alternative to Matryoshka Representation Learning (MRL) for adaptive embeddings. MRL requires retraining models and suffers from performance drops at shorter embedding lengths, while CSR achieves adaptive representation through sp... | Rebuttal 1:
Rebuttal: Thanks for your careful reading and critical review. Following your suggestions, we have added more discussions on complex multimodal generation ability and the scalability of CSR. We further address each of your concerns below and hope you find them satisfactory.
---
**Q1** Move extensive techn... | Summary: The paper presents Contrastive Sparse Representation (CSR) as a novel approach to adaptive representation learning, addressing the limitations of Matryoshka Representation Learning (MRL), which requires extensive retraining and suffers from performance degradation at shorter representation lengths.
Claims And... | Rebuttal 1:
Rebuttal: We sincerely thank you for your thoughtful assessment of our paper. We appreciate the recognition of our work's contributions, particularly noting the clear improvements of CSR over MRL by ``Outperforms MRL in Accuracy and Speed Evidence``, ``Reduces Training Time Significantly``.
Meanwhile, tha... | Summary: The authors propose a method for converting pretrained dense embedding vectors into sparse embedding vectors and show that it often outperforms standard approaches such as Matryoshka Representation Learning (MRL) in terms of both accuracy, training time and retrieval speed.
Their CSR method is inspired by Spa... | Rebuttal 1:
Rebuttal: Thank you for your thoughtful review and for appreciating the quality of our work. The concerns have been addressed as below:
---
**Q1** Typos in Figure 1 caption, Section C.3, and Table 5.
**A1.** Thank you for pointing this out! Following your great suggestions, we will fix them in the revised man... | Summary: This paper focuses on the problem of creating adaptive representations from foundation models, focusing on contrastive sparse coding (CSR) as a novel method applied after pre-training to produce efficient representations for a range of downstream tasks. CSR is compared with Matryoshka Representation Learning (... | Rebuttal 1:
Rebuttal: We appreciate your constructive comments and suggestions, which are helpful for us to improve the quality of our paper further. The concerns have been addressed as below:
---
**Q1** May be useful to add a discussion on pruning, quantization, and distillation methods.
**A1.** Indeed, CSR, pruning... | null | null | null | null |
SGD Jittering: A Training Strategy for Robust and Accurate Model-Based Architectures | Accept (poster) | Summary: The paper introduces SGD Jittering, a training method for model-based architectures (MBAs) solving inverse problems. By adding small, random noise to gradient updates during training, SGD Jittering improves robustness and generalization accuracy without modifying input data or increasing computational cost lik... | Rebuttal 1:
Rebuttal: We thank the reviewer for their thoughtful comments and helpful questions. Please find detailed responses below.
> Comparing to SGLD
We thank R-x3Tj for mentioning SGLD, but we clarify that our SGD jittering is fundamentally different from SGLD in both goal and mechanisms.
SGLD adds noise d... | Summary: The paper introduces "SGD Jittering," a new training strategy designed to enhance the robustness and generalization of Model-Based Architectures (MBAs) for image inverse problems. Specifically, the authors propose to inject random zero-mean Gaussian noises into gradient updates at each iteration within deep u... | Rebuttal 1:
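A toy analogue of the jittering idea on a linear inverse problem, adding zero-mean Gaussian noise to every gradient update; the step size, noise scale, and problem sizes are illustrative, not taken from the paper:

```python
import numpy as np

def jittered_gradient_recon(A, y, n_iter=500, lr=0.2, jitter_std=1e-3, rng=None):
    """Recover x from y = A x with plain gradient steps on the data-fit term,
    perturbing every update with small Gaussian noise ('jittering')."""
    rng = np.random.default_rng(0) if rng is None else rng
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        grad = A.T @ (A @ x - y)
        x = x - lr * grad + jitter_std * rng.normal(size=x.shape)
    return x

rng = np.random.default_rng(1)
A = rng.normal(size=(30, 10)) / np.sqrt(30)   # well-conditioned forward model
x_true = rng.normal(size=10)
x_hat = jittered_gradient_recon(A, A @ x_true)
```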
Rebuttal: We thank the reviewer for their thoughtful comments and helpful questions. Please find detailed responses below.
> Generality of Theoretical Results
We agree that Theorems 7.4 and 7.5 were established specifically in the denoising setting. Extending the theoretical analysis to more complex inver... | Summary: The authors study the robustness and generalization properties of model-based architectures. The goal is to solve inverse problems with interpretable algorithms, such as loop-unrolling networks, and maintain two desirable properties: i) robustness to adversarial attacks, ii) generalization to small natural shi... | Rebuttal 1:
Rebuttal: We thank the reviewer for the insightful suggestion, and please find the point-to-point response below.
> Stronger baselines such as diffusion models (DM)
We agree that diffusion models (DMs) have demonstrated impressive results in image generation. In response, we added DDPM-based experiments f... | Summary: The paper investigates robustness-accuracy tradeoffs, where the authors focus on unrolling-based methods. The authors consider different training strategies for increasing the robustness to average-case perturbations or distribution-shifts. As a specific solution for unrolling-based methods, the authors propos... | Rebuttal 1:
Rebuttal: We thank the reviewer for the thoughtful feedback and helpful suggestions. We address the reviewer’s comments and questions below.
>Jittering outperforms MSE training in in-distribution results
We acknowledge the reviewer’s observation and appreciate the opportunity to elaborate further. As note... | null | null | null | null | null | null |
TuCo: Measuring the Contribution of Fine-Tuning to Individual Responses of LLMs | Accept (poster) | Summary: This paper proposes a novel method, Tuning Contribution (TuCo), to measure the contribution of fine-tuning to individual responses of large language models (LLMs). The authors introduce a decomposition framework that splits an LLM’s response into a Pre-Training Component (PTC) and a Fine-Tuning Component (FTC)... | Rebuttal 1:
Rebuttal: We thank the reviewer for their thoughtful comments and constructive feedback. We would like to clarify some points:
> Appendix B compares OutputCo and TuCo. But which one is better, and why?
They have different interpretations, and are most appropriate for answering different research questions... | Summary: The authors seek to understand the effect of finetuning on a model. They propose to decompose the forward pass of a finetuned model into the pretrained component (PTC) and fine-tuned component (FTC). They then propose Tuning Contribution (TuCo) as a measure of the relative effect sizes. They subsequently analy... | Rebuttal 1:
Rebuttal: We would like to address the queries raised. Some claims dismissing our experiments are incorrect and unjustified.
> No direct evidence [...] FTC approximates finetuning
FTC is exactly the difference in layer outputs between the finetuned and pretrained models; if FTC is zero then FT=PT. Therefo... | Summary: This paper investigates the impact that fine-tuning has on the forward pass representations of large language models (LLMs). The authors define the Tuning Contribution (TuCo) as a metric measuring the contribution of fine-tuned model representations as compared to pre-trained representations on the model’s for... | Rebuttal 1:
Rebuttal: We sincerely thank the reviewer for recognizing the original contributions of our work, the comprehensiveness and soundness of our experiments, and the quality of our technical exposition.
In the following, we address the reviewer's points regarding the allocation of space to background and exper... | Summary: This paper introduces “Tuning Contribution” (TuCo), a new method to measure how much fine-tuning affects the outputs of a large language model (LLM) on a per-prompt basis. Formally, TuCo is calculated by the ratio of the total magnitude of the "fine-tuning component" to the sum of "pre-training component" and... | Rebuttal 1:
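Going only by the ratio stated in the summary above, a simplified per-prompt computation from paired layer outputs could be sketched as below; the aggregation over layers is an assumption, and the paper's exact definition may differ:

```python
import numpy as np

def tuco(h_pre, h_ft):
    """h_pre / h_ft: lists of layer-output vectors from the pre-trained and
    fine-tuned model on the same prompt. FTC is their difference; TuCo is
    the fine-tuning component's share of the total magnitude."""
    ptc = sum(np.linalg.norm(p) for p in h_pre)
    ftc = sum(np.linalg.norm(f - p) for p, f in zip(h_pre, h_ft))
    return ftc / (ptc + ftc + 1e-12)

rng = np.random.default_rng(0)
h_pre = [rng.normal(size=16) for _ in range(4)]
h_ft = [h + 0.05 * rng.normal(size=16) for h in h_pre]  # mild fine-tuning shift
score = tuco(h_pre, h_ft)
```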
Rebuttal: We thank the reviewer for their recognition of our extensive experimental suites and the relevance of our method to interpretability, as well as their thoughtful suggestions on areas of improvement. We would like to address some of the points raised:
> (1) Lipschitzness of the layers may be stron... | null | null | null | null | null | null |
From Individual Experience to Collective Evidence: A Reporting-Based Framework for Identifying Systemic Harms | Accept (poster) | Summary: This paper introduces a method for identifying systemic discrimination or harm by aggregating individual reports of adverse events. The authors formalize this as the incident database problem, where reports arrive sequentially and are analyzed to detect subgroups that experience disproportionate harm.
The au... | Rebuttal 1:
Rebuttal: Thanks for your time writing the review! We have grouped responses to your comments below. If there are any further weaknesses in the work that are concerning for you, please don’t hesitate to let us know.
**Experiments**
> _“For the simulated experiment, it would be better to show the result f... | Summary: The authors propose a framework to identify subgroups that are more likely to experience adverse events in an incident database. To this end, they construct two algorithms that can handle sequentially arriving events to perform hypothesis testing. They show that their algorithms work well in empirical pr... | Rebuttal 1:
Rebuttal: Thanks for your time writing the review and reading our paper! It is a good catch on the 4.1 proof, and we agree it would be clearer to break it up as you suggested — we’ll do so in the revision!
To answer your question about handling variations in $\mu_G^0$, [1] show in Section 3 how to extend t... | Summary: This paper introduces methods for identifying subgroups disproportionately affected by AI-related harms. It does so by applying sequential hypothesis testing methods to a stream of incidents coming into a database. Two methods are proposed: sequential Z testing and a “betting-style” approach where the test ess... | Rebuttal 1:
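A toy version of the sequential Z-testing idea mentioned above, updated as Bernoulli reports stream in; note that the paper's tests additionally control error under optional stopping, which this sketch does not:

```python
import math

def running_z(reports, mu0):
    """Running z-statistics for whether a subgroup's harm rate exceeds a
    baseline rate mu0, as binary reports arrive one at a time."""
    n = s = 0
    zs = []
    for r in reports:
        n += 1
        s += r
        se = math.sqrt(mu0 * (1 - mu0) / n)   # normal approximation
        zs.append((s / n - mu0) / se)
    return zs

# a subgroup reporting harm far above a 10% baseline rate
zs = running_z([1, 1, 0, 1, 1, 1, 0, 1, 1, 1] * 10, mu0=0.10)
```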
Rebuttal: Thanks for your time writing the review and the thoughtful questions!
(Q1) We are overall optimistic about how our methods might work for a real-world system, and in future work we hope to develop and/or highlight collaborations with practitioners with real incident reporting databases. The linke... | Summary: This paper studies the problem of identifying systemic harms through individual reporting mechanisms using incident databases where individuals can report negative interactions with a system (such as loan denials or vaccine side effects) to identify subgroups disproportionately experiencing harm. The auth... | Rebuttal 1:
Rebuttal: Thanks for your time writing this review and the thoughtful questions!
(Q1) This is a good question --- differential rates of (access to) reporting is something that we’ve thought about a lot. In the current version of this work, this can be modeled with the group-specific reporting parameters d... | null | null | null | null | null | null |
Understanding Model Ensemble in Transferable Adversarial Attack | Accept (poster) | Summary: The authors investigated the issue of transfer attacks based on ensembles. They provided a theoretical framework for the transferability of adversarial examples, which can be controlled by their loss and variance among models. The authors conducted some experiments to validate their theoretical findings.
Clai... | Rebuttal 1:
Rebuttal: Thank you very much for your constructive comments! We address all your questions and concerns in the following responses.
>**Q1**: In transfer attacks, the surrogate model and the target model have different model architectures and parameter numbers. However, the authors implicitly assume that t... | Summary: The paper presents a theoretical framework for model ensemble adversarial attacks, focusing on transferable adversarial examples. It defines transferability error, diversity, and Rademacher complexity, and decomposes transferability error into vulnerability and diversity. The authors apply information theory t... | Rebuttal 1:
Rebuttal: Thank you very much for your insightful review of our work!
>**Q1**: The authors should include direct comparisons with state-of-the-art adversarial attack methods in the experiments.
**A1**: We sincerely appreciate the reviewer’s constructive feedback. In direct response to Reviewer uk2i’s Ques... | Summary: The paper provides a theoretical study on transferability of model ensemble adversarial attacks. The authors formulate the problem by considering the expected value of the attacked loss over the distribution of model ensemble (equation 1) and the averaged attacked loss over the set of considered models (equati... | Rebuttal 1:
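To illustrate the averaged attacked loss the summary refers to, here is a minimal FGSM-style step against an ensemble of linear logistic surrogates; the models, `eps`, and loss here are toy stand-ins, not the paper's setup:

```python
import numpy as np

def ensemble_fgsm(x, y, models, eps=0.1):
    """One FGSM-style step on the log-loss averaged over an ensemble of
    linear logistic surrogates (w, b)."""
    g = np.zeros_like(x)
    for w, b in models:
        p = 1.0 / (1.0 + np.exp(-(w @ x + b)))   # predicted P(y=1)
        g += (p - y) * w                          # gradient of log-loss w.r.t. x
    return x + eps * np.sign(g / len(models))

models = [(np.array([1.0, 2.0]), 0.0), (np.array([2.0, -1.0]), 0.0)]
x, y = np.array([0.1, 0.1]), 1.0

def avg_loss(pt):
    return float(np.mean([-np.log(1.0 / (1.0 + np.exp(-(w @ pt + b))))
                          for w, b in models]))

x_adv = ensemble_fgsm(x, y, models)
```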
Rebuttal: Thank you very much for your insightful review of our work!
>**Q1**: ...However, in the standard model ensemble attack scenario, the models may have been trained with fully or partially identical training data, and therefore the models could be quite correlated. Therefore, it seems to me that the... | Summary: This paper proposes novel definitions for theoretically analyzing the adversarial transferability of adversarial attacks with a model ensemble; then, it provides three practical guidelines to improve the transferability of the model ensemble attacks. Specifically, the paper first defines the transferability er... | Rebuttal 1:
Rebuttal: Thank you very much for your insightful review of our work!
>**Q1**: ...Comparing another attack method (either extremely strong or weak) would give us more insights about the vulnerability-variance tradeoffs.
**A1**: We sincerely appreciate the reviewer's constructive suggestion. We have conduc... | null | null | null | null | null | null |
TANGO: Clustering with Typicality-Aware Nonlocal Mode-Seeking and Graph-Cut Optimization | Accept (poster) | Summary: The paper introduces TANGO (Typicality-Aware Nonlocal Mode-Seeking and Graph-Cut Optimization), a clustering algorithm that leverages typicality, a global measure of a point's confidence to be a mode, to address the limitations of traditional mode-seeking methods that rely on local data characteristics and ca... | Rebuttal 1:
Rebuttal: Thank you so much for reviewing our paper. We answer your main concerns below.
Concern about the validation on highly noisy or imbalanced datasets: TANGO can indeed perform well on noisy and imbalanced datasets such as "cluto-t4-8k", "cluto-t5-8k", "cluto-t8-8k", "cluto-t7-10k" and "unbalance". T... | Summary: This paper introduced the notion of "typicality" in density-based clustering, which measures the likelihood or confidence that a certain point should be a mode (a center) of a cluster. Existing techniques determine modes based on local measures (e.g., density of a point), but the premise of the paper is that i... | Rebuttal 1:
Rebuttal: Thank you so much for reviewing our paper. We answer your main concerns below.
Overstatement about power law distribution: Thank you for pointing this out. We will correct it.
Weakness 1: Thank you for the comment. To demonstrate scalability, we have expanded our evaluation to a substantially la... | Summary: The paper introduces TANGO, a novel clustering algorithm that integrates typicality with graph-cut optimization. The primary contribution is the concept of typicality, a novel measure to quantify the confidence of a point being a mode for a cluster. Experimental results demonstrate the efficacy of the proposed... | Rebuttal 1:
Rebuttal: Thank you so much for reviewing our paper. We answer your main concerns below.
Weaknesses:
1: Sorry for the confusion. $k$ in Line 331 is the number of nearest neighbors to define similarity (Line 198) and density (Line 211), which are the same $k$ that is the input parameter of TANGO. $k$ in Li... | Summary: The authors first propose a global perspective metric, typicality, to quantify the confidence of a point being a mode. This addresses the limitation of current mode-seeking methods, which require manually setting thresholds or human intervention to identify modes. They also design an efficient and effective al... | Rebuttal 1:
Rebuttal: Thank you so much for reviewing our paper. We answer your questions below.
Weakness 1: The aggregation operation is done by considering each tree-like subcluster as a vertex in a similarity graph, where the similarity between these subclusters is determined by a path-based connectivity, and final... | null | null | null | null | null | null |
A Stronger Mixture of Low-Rank Experts for Fine-Tuning Foundation Models | Accept (poster) | Summary: The authors propose to apply the Riemannian Preconditioner introduced in prior work to improve the Mixture of LoRA framework. The Riemannian Preconditioner enhances LoRA training by projecting the full matrix gradient to the subspace of LoRA matrices, which better approximates full fine-tuning compared ... | Rebuttal 1:
Rebuttal: We appreciate your reviews and thank you for **acknowledging our efforts on theoretical and experimental analysis**. For your concerns mentioned in the review, we provide corresponding responses below:
**Response to your concern** of AdamW performances under Multi-Task scenarios
Our supplementar... | Summary: This paper introduces a new approach to enhance the performance of MoE-LoRA for fine-tuning foundation models by incorporating Riemannian Preconditioners. This approach ensures that the gradient updates align more closely with the full-rank optimization, thereby stabilizing and accelerating the training proces... | Rebuttal 1:
Rebuttal: Thanks for your valuable reviews and your **agreement on our proposed method and our efforts on literature review**. We have conducted several new experiments and provided responses to all your concerns:
**Response to W1** about the limitation of LLaVA experiments
In our revision, we conducted... | Summary: This paper introduces a training strategy for Mixture-of-Experts (MoE) models with LoRA. It uses Riemannian preconditioning and gate-value scaling to address gradient sub-optimality and representation limitations. The proposed method modifies traditional preconditioners to stabilize gradient updates and improv... | Rebuttal 1:
Rebuttal: Thank you for providing **positive feedback on our presentation, derivations, and experiments**. We highly value the weaknesses and suggestions you pointed out and provide responses below:
**Response to W1** about further explaining the convergence and fixing the issues in Figure ... | Summary: This work proposes an improved training strategy for MoE-LoRA, aiming to address the limited representation and suboptimal gradient issues when fine-tuning foundation models with plain MoE-LoRA. They first analyze the limitations of LoRA, including the insufficient representation capacity of low-rank matrices ... | Rebuttal 1:
Rebuttal: Thank you for your **acknowledgment of our innovations and theoretical value**. We’ve checked our paper again carefully to address any issues you mentioned. Here are our responses to your valuable concerns:
**Response to W1 and W2** about the notations and abbreviations issues
We have carefully ... | null | null | null | null | null | null |
Learning to Quantize for Training Vector-Quantized Networks | Accept (poster) | Summary: This paper proposes an improvement to the STE method for training VQ networks. While the backpropagated gradient bypasses the codebook in the STE framework, this paper proposes Meta Quantization (MQ), which adopts a bi-level optimization strategy and learns quantization with a hyper-net in a meta-learning fashi... | Rebuttal 1:
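For context on the STE behavior the summary describes, here is a minimal sketch of nearest-neighbor vector quantization (the toy codebook and inputs are illustrative):

```python
import numpy as np

def quantize(z, codebook):
    """Nearest-codebook-vector quantization. With the straight-through
    estimator (STE) one writes z_q = z + stop_grad(quantize(z) - z), so the
    backward pass copies gradients from z_q to z and bypasses the codebook --
    the behavior this paper's Meta-Quantization is designed to avoid."""
    d = ((z[:, None, :] - codebook[None, :, :]) ** 2).sum(-1)  # (B, K) distances
    idx = d.argmin(axis=1)
    return codebook[idx], idx

codebook = np.array([[0.0, 0.0], [1.0, 1.0]])
z = np.array([[0.1, 0.2], [0.9, 0.8]])
z_q, idx = quantize(z, codebook)
```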
Rebuttal: We appreciate your constructive feedback very much. We provide our response to your review as follows.
> Computational Cost and Memory Overheads
We conducted additional experiments to address your concerns. When evaluated on the CelebA dataset with a batch size of 128, the increase in memory us... | Summary: This paper proposes a novel vector quantization training framework Meta-Quantization inspired by meta-learning, which decouples the optimization of codebook and autoencoder into two stages, enabling dynamic codebook generation and task-specific training. The proposed method outperforms existing vector quantiza... | Rebuttal 1:
Rebuttal: We appreciate your constructive feedback very much. We provide our response to your review as follows.
> Is the bi-level optimization approach adopted in this paper necessary?
Yes, it is necessary. The two components of our method address distinct challenges. Specifically, the hypernet resolves ... | Summary: The paper proposes to train VQ-VAE under a meta-learning framework. To be more specific, the paper introduces a hyper-network to replace the embedding-parameterized codebook and trains the model with bi-level optimization. Experiments are conducted on image reconstruction and generation tasks. The proposed MQ-VAE impro... | Rebuttal 1:
Rebuttal: We appreciate your constructive feedback very much. We provide our response to your review as follows.
> Magnitude Comparison
In our experiment, we have found that the magnitude of both indirect and direct are around $10^{-2}$, so neither dominates the other. Please follow this anonymous li... | Summary: This paper introduces Meta-Quantization (MQ), by using a hyper-net and bi-level optimization to alternately train the codebook with the autoencoder in Vector Quantization Networks (VQN). Experiments show MQ has better codebook utilization, image reconstruction and generation performance.
Claims And Evidenc... | Rebuttal 1:
Rebuttal: We appreciate your constructive feedback very much. We provide our response to your review as follows.
> Why not use a simpler method such as clustering the embeddings or EMA update
One of the advantages and novel aspects of MQ, compared to simpler methods, is that the codebook update follows a ... | null | null | null | null | null | null |
ULPT: Prompt Tuning with Ultra-Low-Dimensional Optimization | Reject | Summary: The paper proposes a more efficient prompt tuning method in that they need to optimize over fewer variables. They achieve this efficiency through a kind of sketching with the Johnson-Lindenstrauss Lemma. They experiment on NLP tasks.
Claims And Evidence: The most problematic claim is wrt efficiency. In fact, ... | Rebuttal 1:
Rebuttal: We thank the reviewer for their thoughtful comments, especially for recognizing our clear paper writing, effective parameter reduction through sketching, and useful GLUE experiments. We now provide detailed responses to each of the concerns.
> “You still need to reconstruct the large matrix $\tilde{P}$ at... | Summary: This paper proposes a new low-dimensional parameterization for prompt tuning that could achieve better performance than the original prompt tuning with only 2% of the parameters.
Claims And Evidence: The claims are in general clear and convincing.
One issue regarding the claims is the introduction of shift e... | Rebuttal 1:
Rebuttal: We thank the reviewer for their detailed feedback. We appreciate that the reviewer says “The claims are in general clear and convincing” and that “The experiment design involves multiple fine-tuning tasks”. Below we address each of the comments in detail.
> “One issue regarding the claims is the ... | Summary: This work proposes a change to prompt tuning where they first decompose the standard n x d parameters into two matrices that are multiplied together, n x r @ r x d, but the second matrix is random and frozen, thus vastly reducing the number of learnable parameters.
Additionally they add new shift and scale learn... | Rebuttal 1:
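The decomposition described in this summary can be sketched in a few lines; the sizes below are illustrative assumptions, not the paper's settings:

```python
import numpy as np

rng = np.random.default_rng(0)
n, d, r = 20, 768, 2                       # prompt length, model dim, ultra-low rank
Z = np.zeros((n, r))                       # trainable low-dimensional prompt
G = rng.normal(size=(r, d)) / np.sqrt(r)   # frozen random (JL-style) projection
shift, scale = np.zeros(d), np.ones(d)     # trainable shift/scale vectors
P = (Z @ G) * scale + shift                # reconstructed n x d soft prompt

trainable = n * r + 2 * d                  # vs. n * d for vanilla prompt tuning
```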
Rebuttal: We thank the reviewer for their thorough evaluation and the “strong accept” recommendation! The reviewer fully recognizes the contributions of our work, as well as the comprehensive analysis and clear writing.
> “It would have been nice to see how their approach fared in this more challengin... | null | null | null | null | null | null | null | null |
Approximate Differential Privacy of the $\ell_2$ Mechanism | Accept (poster) | Summary: This paper studies the $\ell_2$ mechanism for releasing a $d$-dimensional statistic with bounded $\ell_2$ sensitivity under approximate differential privacy.
To release a $d$-dimensional statistic $T(x)$, the $\ell_2$ mechanism samples an output $y$ with density $f_X(y)\propto \exp(-\lVert y - T(x)\rVert_2 / \sigma)$ for suita... | Rebuttal 1:
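A standard way to sample from a density of this form is a Gamma-distributed radius times a uniform direction; the sketch below assumes that decomposition (parameters are illustrative):

```python
import numpy as np

def l2_mechanism(t, sigma, rng):
    """Sample y with density proportional to exp(-||y - t||_2 / sigma):
    the radius ||y - t|| is Gamma(shape=d, scale=sigma) distributed and the
    direction is uniform on the unit sphere."""
    d = t.shape[0]
    u = rng.normal(size=d)
    u /= np.linalg.norm(u)
    return t + rng.gamma(shape=d, scale=sigma) * u

rng = np.random.default_rng(0)
t = np.zeros(3)
radii = [float(np.linalg.norm(l2_mechanism(t, 1.0, rng))) for _ in range(20000)]
```

The empirical mean radius should be near $d\sigma$ (here, 3), reflecting the Gamma radial law.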
Rebuttal: Thanks for the review!
> Some parameter ranges for the experiments could need more motivation, e.g. why $d=100$ everywhere?
Our experiments focus on the $d \leq 100$ setting to highlight the range where the $\ell_2$ mechanism offers the largest improvement over both Laplace and Gaussian noise. A... | Summary: The paper studies the L2 mechanism with bounded L2 sensitivity in d dimensions, demonstrating improvements over the Laplace and Gaussian mechanisms under approximate differential privacy. It presents algorithms for computing approximation bounds for privacy loss random variables and introduces a parallel sampl... | Rebuttal 1:
Rebuttal: Thanks for the review!
> One missing aspect is the composition of the L2 mechanism. The Gaussian mechanism is widely used in DP-SGD due to the availability of numerical composition analysis. To better demonstrate the practicality of the proposed L2 mechanism, its composition must be studied.
We ... | Summary: The authors consider a specific instantiation of the K-norm mechanism using an L2 norm. They establish conditions for achieving approximate DP as opposed to pure DP as was done in the original K-norm paper. Theory and experiments are provided.
Claims And Evidence: Yes, the paper provides both theory and simu... | Rebuttal 1:
Rebuttal: Thanks for the review!
> I checked some of the early lemmas and the results seemed sound. However, the communication of the theory I found to be unpleasant. I'm not sure I've ever seen so many lemmas without a single theorem. The bulk of the paper reads more like an appendix … [t]he theory of the... | null | null | null | null | null | null | null | null |
Representations Shape Weak-to-Strong Generalization: Theoretical Insights and Empirical Predictions | Accept (poster) | Summary: This paper studies weak to strong generalization, where a strong model is fine-tuned on a task using data labeled by a weaker supervisor model (it is known that perhaps surprisingly the strong model can outperform its weak supervisor). Specifically, the paper introduces estimators, depending only on internal r... | Rebuttal 1:
Rebuttal: We thank the reviewer for their positive feedback on our theoretical analysis, extensive experiments, novelty and broader impact.
> Cor 5.1-2: why label agnostic
As noted in L306 right, once we factor the label-dependent term out of the operator norm (becoming the label variance C), it can be tr... | Summary: This work provides a theoretical analysis of how a strong model can surpass its weak supervisor by studying the structure of their representations. The key insight (beyond prior analyses) is that even when a strong model perfectly fits the weak model’s predictions at train time, it surpasses its weak supervi... | Rebuttal 1:
Rebuttal: We thank the reviewer for finding our theoretical claims clear and convincing, and for acknowledging the novel insights of our paper.
> “...relative representation structures of the weak teacher and strong student matter…Without controlling for weak supervisor quality, it’s hard to know whether t... | Summary: This paper provides a theoretical analysis for weak-to-strong generalization (W2SG) from a representation-based perspective. In particular, the authors consider finetuning over fixed representations with mild structural assumptions.
- It is shown that the overlap between the principal subspace of the strong (... | Rebuttal 1:
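One simple instantiation of the principal-subspace overlap mentioned above (a sketch, not the paper's exact estimator):

```python
import numpy as np

def principal_overlap(Hw, Hs, k):
    """Overlap of the top-k principal subspaces of two representation
    matrices (rows = samples): ||Uw^T Us||_F^2 / k, which lies in [0, 1]
    and equals 1 when the subspaces coincide."""
    Uw = np.linalg.svd(Hw - Hw.mean(0), full_matrices=False)[2][:k].T
    Us = np.linalg.svd(Hs - Hs.mean(0), full_matrices=False)[2][:k].T
    return np.linalg.norm(Uw.T @ Us) ** 2 / k

rng = np.random.default_rng(0)
H1 = rng.normal(size=(100, 10))
H2 = rng.normal(size=(100, 10))
same = principal_overlap(H1, H1, k=3)    # identical representations
other = principal_overlap(H1, H2, k=3)   # independent representations
```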
Rebuttal: We thank the reviewer for finding our claims well-supported, our explanations insightful and intuitive, the theory and experiments extensive and convincing, and the proposed metric novel. We respond to the comments below.
> the distinction between Sec 4 and Wu & Sahai (2024)
The main differences... | null | null | null | null | null | null | null | null |
Improving the Variance of Differentially Private Randomized Experiments through Clustering | Accept (poster) | Summary: This paper proposes a differentially private algorithm for causal effect estimation, which leverages cluster structure in the data in order to reduce the variance (i.e., improve utility) while maintaining the same privacy guarantee.
## update after rebuttal
I’m bumping up my score, after reading the rebuttal... | Rebuttal 1:
Rebuttal: We would like to thank the reviewer for their careful review of our paper. We hope to have addressed their questions, and would be happy to clarify these points in a camera-ready version.
_Response to weaknesses:_
Although we focus the presentation on a motivating application in the advertising ... | Summary: Authors give an algorithm they call Cluster-DP, which is a pure/approximate DP mechanism (label DP) for causal effect estimation. Its main insight is that you can reduce the variance of the estimates by leveraging known clustering structure in the data. At a high level, they add Laplace noise to the empirical ... | Rebuttal 1:
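As a toy illustration of the Laplace-noise-on-cluster-aggregates idea described above (the sensitivity argument and estimator here are simplified assumptions, not Cluster-DP itself):

```python
import numpy as np

def dp_cluster_rates(y, c, n_clusters, eps, rng):
    """Toy label-DP release of per-cluster mean outcomes: changing one
    user's binary label moves a single cluster sum by at most 1, so
    Laplace(1/eps) noise on each sum suffices when memberships are public."""
    sums = np.bincount(c, weights=y, minlength=n_clusters)
    cnts = np.bincount(c, minlength=n_clusters).clip(min=1)
    return (sums + rng.laplace(scale=1.0 / eps, size=n_clusters)) / cnts

rng = np.random.default_rng(0)
c = np.repeat([0, 1], 1000)                          # cluster memberships (public)
y = np.concatenate([np.zeros(1000), np.ones(1000)])  # sensitive binary labels
rates = dp_cluster_rates(y, c, n_clusters=2, eps=1.0, rng=rng)
```

With large clusters the noise averages out, which is the variance benefit the summary highlights.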
Rebuttal: We thank the reviewer for their careful and positive review of our paper. We would be happy to clean up the notational remarks made by the reviewer, and clarify the points below in a camera-ready version. We now address their questions:
_Q1._
In our proof of Theorem 3.1, we use the composition ... | Summary: This paper introduces Clustered-DP, a differentially private mechanism designed to improve the privacy-variance trade-off in randomized experiments. The proposed method improves the privacy-variance trade-off compared to the traditional method, which introduces noise to the sensitive variables.
The paper pro... | Rebuttal 1:
Rebuttal: We thank the reviewer for their careful and overall positive review. We address below their two questions:
_Q1._
These derivations aim to establish the differential privacy guarantee of a mechanism $M_2$ which resamples labels at random with probability $\lambda$ from the true distribution or a... | Summary: The paper proposes CLUSTER-DP, a differentially private mechanism aimed at improving the variance of causal effect estimation in randomized experiments by utilizing clustering structures within data. Traditional differential privacy (DP) approaches introduce noise to protect privacy, resulting in increased est... | Rebuttal 1:
Rebuttal: We would like to thank the reviewer for taking the time to read our paper and for their thoughtful comments. We would be happy to include clarifications of the points below in a camera-ready version.
*Correlation between non-sensitive and private data:*
It seems that your comment is about the m... | null | null | null | null | null | null |
Generative Point Cloud Registration | Accept (poster) | Summary: This work introduces a novel method for point cloud registration that aims to generate geometry-consistent RGB image pairs from paired point sets. These generated RGB image pairs are then used to enhance the performance of point-based registration methods. The proposed approach incorporates two key innovations... | Rebuttal 1:
Rebuttal: **Q1: Clarification on rigid transformations.**
**A1:** Thank you for your valuable suggestion. In the revised introduction, we will explicitly state that our work focuses exclusively on the rigid point cloud registration problem to set clear expectations for the readers.
**Q2: The discrepancy... | Summary: The paper proposes a new perspective on Point Cloud Registration: Generative Point Cloud Registration. Compared to traditional methods or purely geometry-based learning methods, the paper incorporates image generative models. The input is a point cloud pair with unknown pose, and the output is the transformati... | Rebuttal 1:
Rebuttal: **Q1: Lack of joint distribution modeling of multi-view images?**
**A1:** We respectfully clarify that our coupled denoising mechanism has implicitly modeled the joint distribution of multi-view images (i.e., cross-view images in our task).
Formally, the likelihood over the cross-view image pai... | Summary: This paper introduces a novel approach to point cloud registration by leveraging generative models to synthesize 2D images from 3D point clouds, enabling better feature extraction and matching for registration tasks. Traditional methods primarily rely on 3D feature matching, which often struggles in scenarios ... | Rebuttal 1:
Rebuttal: **Q1: The approach heavily relies on the quality of the generated 2D images, which could introduce artifacts or inconsistencies in certain scenarios?**
**A1:** The image generation quality is indeed crucial to the overall performance. Notably, our Match-ControlNet successfully unlocks the genera... | Summary: This paper proposes a new 3D Registration method, Generative Point Cloud Registration, which connects advanced 2D generative models with 3D matching tasks to improve registration performance. The key idea in this paper is to generate cross-view consistent image pairs that are well aligned with source and targe... | Rebuttal 1:
Rebuttal: **Q1: Discussion on applicability in outdoor LiDAR scenes.**
**A1:** We sincerely appreciate this insightful comment. Our current Match-ControlNet indeed targets leveraging depth maps rather than LiDAR data for image generation. Compared to forward-facing depth maps, outdoor LiDAR point clouds p... | null | null | null | null | null | null |
ABNet: Adaptive explicit-Barrier Net for Safe and Scalable Robot Learning | Accept (poster) | Summary: This paper proposes ABNet, an adaptive explicit-barrier net for safe and scalable robot learning. The ABNet is a combination of multiple safe control nets such as BarrierNet, dMPC, as well as the proposed explicit-barrier net. The authors claim that ABNet has the potential to scale to a larger safe foundation ... | Rebuttal 1:
Rebuttal: We really appreciate the reviewer for all the positive and helpful comments. We address the remaining comments below.
(1) The part of references/related works is okay, but it would be nice to have more recent papers included, like the ones published in 2023 and 2024.
**Response:** We will add m... | Summary: The paper proposes to embed control barrier constraints into neural layers to enforce safety assurance to network output. In contrast to implicit formulation with differentiable optimization, the paper argues for a specific QP admitting explicit solution form so as to avoid inefficient batching through multi-t... | Rebuttal 1:
Rebuttal: We thank the reviewer for all the positive and constructive comments. We address the remaining comments below.
(1) The paper needs a clarification on the scope of systems. The target dynamics is of a general control affine form while an assumption is made on relative degree m...
**Response:** Th... | Summary: This paper addresses a critical challenge in AI-enabled robotics—safe learning—by introducing the Adaptive explicit-Barrier Net (ABNet). The authors highlight the limitations of existing safe learning methods, including poor scalability, inefficiency, and instability under noisy inputs. ABNet overcomes these i... | Rebuttal 1:
Rebuttal: We appreciate the reviewer for all the helpful and constructive comments. We address all the concerns below.
(1) Claims are not well-supported by experiments, and the writing is not particularly clear.
**Response:** Our main claim is safety guarantee of the model, and this is supported by the SAFETY or ... | Summary: The paper presents ABNet, a novel framework that utilizes attention mechanisms to handle diverse input patterns, while incorporating barrier functions to maintain the system state within a safety set, ensuring forward invariance. This approach aims to improve the scalability and robustness of robot learning by... | Rebuttal 1:
Rebuttal: We appreciate the reviewer for all the positive and constructive comments.
(1) Specific technical challenges
**Response:** There are two main technical challenges: (a) The training and testing efficiency of the scalable robot learning model; (b) Formal proof for the safety of the composed model ... | null | null | null | null | null | null |
Policy Gradient with Tree Expansion | Accept (poster) | Summary: The paper introduces SoftTreeMax, a new approach that combines policy gradient (PG) with tree search. The goal is to address the inherent high gradient variance in traditional PG algorithms.
The authors present theoretical analysis showing that the gradient variance of SoftTreeMax decays with the depth of th... | Rebuttal 1:
Rebuttal: Thank you for your detailed and thoughtful review.
**Dependency of bound on $S$ and lower bound**
RL analyses on tabular MDPs often include $S$ terms, usually stemming from the triangle inequality applied to a summation over states. These dependencies can sometimes be replaced by structural assum... | Summary: The authors propose a generalization of the softmax parametrization for policy gradient methods that utilizes the breadth-first search tree of future states. This type of parametrization combines planning with policy gradient methods to reduce the latter's variance. The authors proved that, given some assumptio... | Rebuttal 1:
Rebuttal: Thank you for your detailed and thoughtful review. We appreciate the careful reading of our work and the constructive feedback. Below, we address your concerns:
**Additional MCTS baselines**
Following your comment, we added comparisons with a strong baseline to our experiments: EfficientZero [Ye... | Summary: This paper extends softmax policy gradient methods by integrating planning through tree expansion. The authors introduce two implementation variants—C-SoftTreeMax and E-SoftTreeMax—which differ in whether the expectation is computed inside or outside the exponent. They analyze the policy gradient variance for ... | Rebuttal 1:
Rebuttal: Thank you for your thoughtful review. We appreciate the careful reading of our proofs and the constructive feedback. We address your concerns below.
**Proof issues in Lemma A.5 and Theorem A.6**
You're correct that $\gamma^{-d}$ from $C_{s,d}$ should appear in this bound. This oversight makes ... | Summary: The paper introduces the SoftTreeMax algorithm, a softmax policy gradient algorithm extended with planning. The main idea of the extension is that estimating the gradient from longer paths reduces the variance of the gradient. Two variants are considered, depending on whether the expectation is inside or outside of t... | Rebuttal 1:
Rebuttal: Thank you for your thoughtful review and for recognizing the value of our theoretical and empirical contributions. We appreciate your feedback and address your questions below.
**MCTS vs. our search approach**
Thank you for this important point of clarification. SoftTreeMax and MCTS represent fu... | null | null | null | null | null | null |
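For intuition about the parametrization discussed in this row, the idea can be sketched schematically: expand all depth-$d$ action sequences with a toy deterministic model, score each first action by the mean discounted return of its subtree, and softmax the scores. This is only a C-SoftTreeMax-flavored illustration; the paper's actual estimators (the E-variant with the expectation outside the exponent, leaf value terms, stochastic dynamics) are richer, and `step` below is an assumed toy model:

```python
import itertools
import math

def softtreemax_policy(s0, step, actions, depth, gamma=0.99, beta=1.0):
    """Schematic tree-expansion softmax policy: logit of each first action
    is the mean discounted return over its depth-d subtree, under a toy
    deterministic model step(s, a) -> (s', r) (an assumption for this sketch)."""
    logits = {}
    for a0 in actions:
        total, n = 0.0, 0
        for tail in itertools.product(actions, repeat=depth - 1):
            s, ret, disc = s0, 0.0, 1.0
            for a in (a0,) + tail:
                s, r = step(s, a)
                ret += disc * r
                disc *= gamma
            total += ret
            n += 1
        logits[a0] = beta * total / n   # mean return over the subtree
    zmax = max(logits.values())         # stabilize the softmax
    exps = {a: math.exp(v - zmax) for a, v in logits.items()}
    norm = sum(exps.values())
    return {a: e / norm for a, e in exps.items()}
```

Deeper expansion concentrates the logits around the model's look-ahead values, which is the mechanism behind the variance reduction claimed in the row above.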
Hyper-Transforming Latent Diffusion Models | Accept (poster) | Summary: This work introduces a novel "LDMI" framework which empowers latent diffusion models to generate Implicit Neural Representations (INRs). The proposed Hyper-Transformer Decoder enables the space of INR parameters to be learned in a flexible and probabilistic manner. Empirical tests are conducted on a range of ... | Rebuttal 1:
Rebuttal: We thank the reviewer for the constructive feedback and encouraging remarks. Your comments helped us significantly improve the manuscript. Below, we address all the concerns raised.
## On Our Claims Regarding Unconstrained Resolution
We agree that validating our model’s ability to generalize to u... | Summary: This paper proposes a new framework for INR generation (LDMI) which combines latent diffusion models and a transformer based hyper network for learning the distributions over INR parameters. The hyper network transforms the latent variables through a transformer encoder and decoder and generates the INR parame... | Rebuttal 1:
Rebuttal: We thank the reviewer for the constructive feedback and for highlighting relevant connections to related work. Below, we address each of the raised concerns in detail.
## On the relation to HyperDreamBooth
We appreciate your suggestion to consider HyperDreamBooth, which we have now cited and disc... | Summary: The authors propose a novel method for generating the parameters of implicit neural representations (INRs) representing real data. They use a latent diffusion framework, which first trains a VAE to learn a rich latent representation of data, then trains a diffusion generative model on the learned representatio... | Rebuttal 1:
Rebuttal: We thank the reviewer for the thoughtful and constructive feedback, and for recognizing the clarity, motivation, and contributions of our work. Your comments helped improve the paper significantly. Below, we address each of the concerns raised in your review.
## On the notation used for weights
... | Summary: This paper introduces a new generative framework called Latent Diffusion Models of Implicit Neural Representations (LDMI), which integrates Implicit Neural Representations (INRs) into transformer-based latent diffusion models. The key component is to use a Hyper-Transformer Decoder (HD) to replace traditional ... | Rebuttal 1:
Rebuttal: We thank the reviewer for the detailed and thoughtful feedback, as well as for recognizing the novelty and clarity of our work. Below, we address the key concerns raised.
## On the scope and nature of the contribution
$\texttt{LDMI}$ is not intended to compete with standard diffusion models that ... | null | null | null | null | null | null |
KernelBench: Can LLMs Write Efficient GPU Kernels? | Accept (poster) | Summary: The major contributions of this paper are as follows:
- This paper introduced a benchmark framework to evaluate how well a modern LLM can write efficient GPU kernels. The core of this benchmark framework consists of 250 tasks with 3 levels of granularity: single primitive, sequence of ops, and the overall model... | Rebuttal 1:
Rebuttal: Updated Paper: https://storage.googleapis.com/anonymous-files/kernelbench.pdf
We thank you for appreciating KernelBench design and suggesting further improvements. As you noted, automatic GPU code generation is an underexplored area with many interesting research questions; KernelBench facilitate... | Summary: This paper introduces KernelBench, a benchmarking framework designed specifically to evaluate the correctness and performance of GPU CUDA kernels generated by large language models (LLMs). KernelBench compiles a representative set of PyTorch code snippets, categorizing them into three distinct complexity level... | Rebuttal 1:
Rebuttal: Updated Paper: https://storage.googleapis.com/anonymous-files/kernelbench.pdf
We sincerely thank you for the detailed and insightful review! We are truly encouraged by the positive feedback, particularly that the work is seen as "well-supported by strong evidence" and a "comprehensive benchmark.... | Summary: This paper proposes KernelBench, which is a new benchmark for evaluating LLMs' performance in writing correct and fast kernels. Specifically, KernelBench gathers three different levels of tasks, including individual operations, sequences of operations, and end-to-end architectures, and introduces a novel fast_p... | Rebuttal 1:
Rebuttal: Updated Paper: https://storage.googleapis.com/anonymous-files/kernelbench.pdf
We thank reviewer AoMw for your review! We are glad the reviewer appreciates our "fast_p evaluation metric for KernelBench" and finds that our "claims are supported by clear and convincing evidence." Below, we address...
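The fast_p metric mentioned in this row can be read as the fraction of tasks whose generated kernel is both functionally correct and at least $p\times$ faster than the reference implementation. A minimal sketch under that reading (the benchmark's exact definition and edge-case handling may differ):

```python
def fast_p(results, p):
    """results: list of (correct: bool, speedup: float), one per task.
    Returns the fraction of tasks that are correct AND achieve speedup >= p."""
    if not results:
        return 0.0
    hits = sum(1 for correct, speedup in results if correct and speedup >= p)
    return hits / len(results)
```

Sweeping `p` from 0 upward interpolates between plain correctness rate and an increasingly demanding performance bar.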
AdaPTS: Adapting Univariate Foundation Models to Probabilistic Multivariate Time Series Forecasting | Accept (poster) | Summary: The paper introduces AdaPTS, a framework to adapt pre-trained time series models (univariate) for probabilistic multivariate forecasting. The authors use adapters to project multivariate inputs into a latent space where a frozen pre-trained model is applied independently to each channel. To enforce the inverti... | Rebuttal 1:
Rebuttal: We would like to thank Reviewer 4t4s for their detailed feedback and constructive comments. We now address the concerns raised in their review:
> Claim in L147: no requirement of fine-tuning due to feature-level transformations
The claim in line 147 regarding "no requirement of fine-tuning" pert... | Summary: The paper presents AdaPTS, a novel framework for adapting pre-trained univariate foundation models (FMs) to probabilistic multivariate time series forecasting. AdaPTS introduces adapters—feature-space transformations that project multivariate series into latent spaces, where predictions are made independently ... | Rebuttal 1:
Rebuttal: We appreciate the reviewer’s detailed and insightful feedback. We are particularly grateful for the recognition of our work’s strong empirical results, the effectiveness of AdaPTS in improving forecasting accuracy and uncertainty quantification.
We would like to clarify and address the raised con... | Summary: The paper introduces AdaPTS, a framework designed to adapt pretrained univariate time series Foundation Models (FMs) to multivariate probabilistic forecasting tasks. The core challenge addressed is the inherent limitation of existing FMs (e.g., Moment, Chronos), which are typically trained on univariate data a... | Rebuttal 1:
Rebuttal: We deeply appreciate the reviewer’s thoughtful and constructive feedback. The recognition of the theoretical novelty and empirical strengths of our approach is greatly appreciated.
In this rebuttal, we address the reviewer’s specific concerns and questions in detail to further clarify our method... | Summary: The paper introduces a variational-autoencoder-style encoder and decoder around a foundational model to enable it to perform forecasting in probabilistic and multivariate settings.
Claims And Evidence: The claim of the paper is that any univariate time-series foundational model can be adapted to perform much h... | Rebuttal 1:
Rebuttal: We thank Reviewer AXte for their feedback. We appreciate the acknowledgment of our method’s simplicity and validity, as well as the soundness of our theoretical justification.
We would like to address the concerns raised in the review:
> Lack of enough baselines
Our primary objective is to enha... | Summary: The paper proposed AdaPTS, an adapter for univariate time series foundation models, which makes them both multivariate and produce probabilistic predictions. The authors first provide a theoretical framework for adapters for time series foundation models, and discuss many adapters (encoder-decoder combinations... | Rebuttal 1:
Rebuttal: We appreciate the thoughtful feedback provided by Reviewer 267f and would like to address the concerns raised in their review.
> Multivariate Baselines: Beyond PCA, the paper does not compare to some other existing ways of imbuing multivariate context to time series foundation models.
We acknowl... | null | null | null | null |
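The adapter idea in this row — project the multivariate series into a latent space, run the frozen univariate foundation model independently per latent channel, then map back — can be sketched with a plain linear, invertible adapter. The mixing matrix `W` and the naive last-value forecaster below are stand-ins for the learned adapter and the pretrained model (assumptions for illustration, not AdaPTS itself):

```python
import numpy as np

def adapted_forecast(x, univariate_forecast, W):
    """Linear-adapter sketch around a frozen univariate model.

    x: (T, C) multivariate history; W: (C, C) invertible mixing matrix
    (stand-in for the learned adapter); univariate_forecast maps a
    length-T series to a length-H forecast."""
    z = x @ W                                    # project into latent channels
    z_hat = np.stack([univariate_forecast(z[:, c]) for c in range(z.shape[1])],
                     axis=1)                     # frozen FM, channel by channel
    return z_hat @ np.linalg.inv(W)              # map latent forecasts back
```

Because `W` is invertible, the decoder exactly undoes the encoder, which mirrors the invertibility requirement the reviewers discuss for the feature-space transformations.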
Improved Regret Analysis in Gaussian Process Bandits: Optimality for Noiseless Reward, RKHS norm, and Non-Stationary Variance | Accept (oral) | Summary: This paper provides refined regret analyses of maximum variance reduction (MVR) type of algorithms in Gaussian Process (GP) bandits. It first establishes a general upper bound on the maximum posterior variance for MVR algorithm (Lemma 3.1), and applies it to obtain upper bounds on the cumulative regret and the... | Rebuttal 1:
Rebuttal: Thank you for taking the time to review our paper and for the overall positive feedback. We will carefully incorporate the reviewer's feedback in the revision. Below is our answer to the reviewer's question.
**In Section 5, the paper analyzes the algorithms with $\lambda$ chosen to be different f... | Summary: The paper studies the classic problem of Bayesian optimization under the frequentist setting (where the target function lies in the RKHS of a known kernel). It derives a novel bound on the maximum variance after $T$ observations (Lemma 3.1). This bound has several consequences and applies to various Bayesian o... | Rebuttal 1:
Rebuttal: Thank you for taking the time to review our paper and for the overall positive feedback. Below are our answers to the reviewer's questions.
**1. Are there additional technical challenges that should be emphasized, or ...**
We clarify the technical challenges and novelty of the results in Section... | Summary: The paper develops a novel bound for the posterior variance of the Gaussian process. Such bounds are used to obtain a tighter regret bound of noise-free simple/cumulative regret bounds of Bayesian optimization algorithms. Furthermore, these bounds facilitates establishing novel regret bounds for MVR/PE algorit... | Rebuttal 1:
Rebuttal: Thank you for taking the time to review our paper and for the overall positive feedback. We will carefully incorporate your suggestions about the clarity into the revision. Below is our answer to the reviewer's question.
**Is the condition $\mathcal{X} = [0, 1]^d$ necessary for corollary 6.1 and... | Summary: This paper presents improved theoretical guarantees for Gaussian Process (GP) bandit algorithms, with a particular focus on reducing regret under three key scenarios: the noiseless setting, dependence on the RKHS norm of the underlying reward function, and non-stationary noise variance. The main contribution i... | Rebuttal 1:
Rebuttal: Thank you for taking the time to review our paper and for the overall positive feedback. We will carefully incorporate your comments into the revision. Below are the answers to your questions.
**1. Have you considered evaluating your bounds under the polynomial kernel? Given its finite-dimensiona... | null | null | null | null | null | null |
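The quantity this row's analysis hinges on is the GP posterior variance, $\sigma^2(x) = k(x,x) - k(x,X)(K + \lambda I)^{-1} k(X,x)$, and an MVR-style algorithm simply queries its maximizer. A small numpy sketch, where the RBF kernel, lengthscale, and $\lambda$ are illustrative choices rather than the paper's settings:

```python
import numpy as np

def rbf(A, B, ls=0.5):
    """Squared-exponential kernel between row-stacked points A and B."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * ls ** 2))

def posterior_variance(x, X, lam=1e-2):
    """sigma^2(x) = k(x,x) - k(x,X)(K + lam I)^{-1} k(X,x);
    the noiseless setting corresponds to taking lam -> 0."""
    if len(X) == 0:
        return 1.0
    K = rbf(X, X) + lam * np.eye(len(X))
    kx = rbf(x[None, :], X)[0]
    return float(rbf(x[None, :], x[None, :])[0, 0] - kx @ np.linalg.solve(K, kx))

def mvr_next_point(candidates, X):
    """Maximum variance reduction: query the point of largest posterior variance."""
    vs = [posterior_variance(c, X) for c in candidates]
    return candidates[int(np.argmax(vs))]
```

The variance collapses near observed points and stays near the prior far from the data, which is why bounding the maximum posterior variance (as in the row's Lemma 3.1) directly controls regret.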
PatchPilot: A Cost-Efficient Software Engineering Agent with Early Attempts on Formal Verification | Accept (poster) | Summary: This paper presents an improvement to the "Agentless" approach to solving SWE-bench tasks. They manage to solve 3-5% more problems on SWE-bench Lite/Verified while using up to 20% *less* money, with Claude 3.5. They provide detailed analysis and ablations.
Claims And Evidence: Yes, there is a wide array of e... | Rebuttal 1:
Rebuttal: Thanks for the constructive and positive comments.
## 1. Extra Clarification on Terminology: "Human-Based Planning"
We thank the reviewer for pointing this out. We agree that the term "human-based planning" may be misleading, as it could imply a human-in-the-loop system. To avoid this confusion... | Summary: The paper proposes PatchPilot, an agentic patching framework designed to address the trade-offs among patching efficacy, stability, and cost. It introduces a novel human-based planning workflow, incorporating 6 components, with special emphasis on refinement as a unique contribution.
Claims And Evidence: Their claims, ... | Rebuttal 1:
Rebuttal: Thanks for the constructive and positive comments.
## 1. Better experiment analysis
We will follow the reviewer’s suggestion and include more visualizations to show the comparison between our method and baselines in effectiveness, stability, and cost. To better demonstrate the advantage of Pat... | Summary: This paper proposes PatchPilot, an agentic framework for autonomous software patching. It relies on human-based planning and consists of five workflow components: reproduction, localization, generation, validation, and refinement. The overall workflow as well as each component (except for the final refinement ... | Rebuttal 1:
Rebuttal: Thanks for the constructive comments.
## 1. Novelty and Differences from Existing Tools
As discussed in Section 2, [a-g] are all agent-based, differing from our workflow (we included [b-g] and will add [a]).
- Search tools: We acknowledge that AutoCodeRover and CodeR also have search tools, and... | Summary: In this paper, the authors describe PatchPilot, a novel human-based planning workflow for solving Github issues. The innovations include generating reproduction tests to help locate the root cause; a planning and generation task division for patch generation, and a refinement loop to iteratively improve a patc... | Rebuttal 1:
Rebuttal: Thanks for the positive and constructive comments.
## 1. Stability Comparison
We chose GPT-4o because of our budget limits before. We reran the stability comparison. First, we changed GPT-4o to Claude-3.5-Sonnet. Second, in response to Reviewer HrpY’s concern about small sample size, we increas... | null | null | null | null | null | null |
Dynamical phases of short-term memory mechanisms in RNNs | Accept (poster) | Summary: The paper investigates the strategies that recurrent neural networks (RNNs) use to maintain short-term memories via sequential firing. The authors trained low-rank and full-rank RNNs on delay-response tasks and identified two distinct mechanisms: slow-point (SP) manifolds and limit cycles. They found that intr... | Rebuttal 1:
Rebuttal: We thank the reviewer for their constructive feedback, as well as their recognition of our methodological rigor and insightful contributions to understanding short-term memory mechanisms in artificial networks. Please find our responses below:
**Q1** While learning rate is an abstract optimizatio... | Summary: This paper analyzes the emergent mechanisms of short-term memory maintenance in task-optimized recurrent neural networks. The paper presents an analysis of a toy model and performs large-scale experiments to show that similar features emerge in actual task-optimized networks.
Claims And Evidence: The theory a... | Rebuttal 1:
Rebuttal: We thank the reviewer for their thoughtful comments and for recognizing that both the theoretical and empirical components are well-executed. We understand the main concerns to be (1) the clarity of the connection between theory and experiments, and (2) the perceived novelty of our contributions. ... | Summary: This paper studies computational RNN models of a classic neuroscience working memory task–the delayed response task–along with two, very simplified and tractable, dynamical system models capable of learning the task through adaptation of a scalar parameter. The paper studies the role that changes in the delay ... | Rebuttal 1:
Rebuttal: We thank the reviewer for their thoughtful comments and highlighting our approach's strengths—especially our effort to simplify a complex problem into an interpretable framework. Their suggestions have greatly shaped our revisions. Below, we address all specific concerns and weaknesses.
**Q1** Th... | null | null | null | null | null | null | null | null |
Plan-and-Act: Improving Planning of Agents for Long-Horizon Tasks | Accept (poster) | Summary: The paper proposes a Plan-and-Act methodology for long-horizon web-tasks. The basic premise of the Plan-and-Act method is that it decomposes the long-horizon planning into two modules: planning and executing. The planner module creates a long-horizon plan and the executor executes actions relevant to completin... | Rebuttal 1:
Rebuttal: > R4-1: The experimental process for reproducing the tables is not provided in the paper (seeds, temperature, etc.). This makes it slightly harder to judge the effectiveness of the paper as I am not sure if the best results obtained with the LLMs were included in the paper. I would urge the author... | Summary: This paper proposes Plan-and-Act, an agent for web environments which separates planning from execution. A planner generates the overall plan, and a separate executor carries out the plan by issuing low-level actions. In order to train the planner, a synthetic data generation method is introduces to annotate t... | Rebuttal 1:
Rebuttal: > R3-1: It is difficult to understand the contributions of the paper within the broader literature of planning in LLM agents, as discussion of related works in agents with planning is missing
We thank the reviewer for their feedback. Please see response R3-4.
> R3-2: The paper does not discuss ... | Summary: The authors propose Plan-and-Act, which consists of two separate modules for planning and acting (execution), with dynamic replanning for better adaption to different situations. The Planner generates high-level plans, which are taken as input for the Executor to generate low-level actions. Importantly, for th... | Rebuttal 1:
Rebuttal: > R2-1: One primary weakness of this work is its empirical evaluation. It only provides the evaluation on WebArena-Lite, which employs non-real-world websites as part of the environment. The experimental results may be strengthened by evaluating the proposed approach on more realistic benchmarks, ... | Summary: This paper introduces Plan-and-Act, a framework consisting of a planner that generates high-level task plans and an executor that translates these plans into specific actions. To deal with unexpected failures, the planner will be involved in updating the plan after each execution step. Besides, a synthetic dat... | Rebuttal 1:
Rebuttal: > R1-1: The paper only reports the success rate of the methods on the WebArena-Lite benchmark. Additional metrics, such as the average number of steps required to complete a task, would provide a more comprehensive assessment.
Below are additional metrics, including average steps and a success/f... | null | null | null | null | null | null |
Dynamic Range Reduction via Branch-and-Bound | Reject | Summary: This paper tackles the numerical precision challenges of solving NP-hard QUBO problems on low-precision hardware accelerators (e.g., quantum annealers, FPGAs) by introducing a dynamic range (DR)-aware optimization framework. The authors propose a hybrid Branch-and-Bound algorithm with policy rollout to iterati... | Rebuttal 1:
Rebuttal: We thank the reviewer for the thoughtful and detailed feedback, as well as for highlighting the originality, theoretical soundness, and practical relevance of our contributions. We respond to the raised concerns below.
### Scalability and Small-Scale Evaluation ($n \le 20$)
We acknowledge that t... | Summary: For given QUBO instances, the presented approach produces new QUBO instances which feature the same solutions but whose parameters have a reduced dynamic range. This is achieved by formulating the problem as an MDP and running a branch-and-bound strategy. Results show an improved number of found global optima ... | Rebuttal 1:
Rebuttal: We thank the reviewer for the detailed and constructive review. Below, we address the key concerns and clarify aspects related to the scope, comparisons, and experimental setup. We will incorporate the suggested corrections (e.g., references to equations, citation style, and terminology) in the ca... | Summary: This paper presents a Branch-and-Bound algorithm designed to reduce the numerical precision requirements of NP-hard Quadratic Unconstrained Binary Optimization (QUBO) problems, which are critical in real-time AI applications. By utilizing dynamic range as a measure of complexity, the algorithm aims to enhance ... | Rebuttal 1:
Rebuttal: We thank the reviewer for the positive and encouraging evaluation of our paper, and for acknowledging the strength of our theoretical and empirical contributions. Below, we address the noted concerns regarding dataset details and experimental evaluation.
### Clarification on Data Input and QUBO E... | Summary: The focus of this paper is the Quadratic Unconstrained Binary Optimization problem (QUBO), and in particular on methods to reduce the precision of the input entries. This is motivated by applications in hardware acceleration, where small input (e.g. 8 bits) can result in better parallelization. QUBO is an NP-h... | Rebuttal 1:
Rebuttal: We thank the reviewer for the thoughtful and constructive feedback. Below, we respond to the raised concerns regarding theoretical guarantees, runtime feasibility, and novelty.
### Theoretical Guarantees and Rigor
While the overall optimization procedure is heuristic in nature, key components of... | null | null | null | null | null | null |
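The complexity measure used throughout this row, dynamic range, is commonly taken to be the log-ratio between the largest and smallest nonzero coefficient magnitudes of the QUBO matrix — roughly, the number of bits a low-precision solver needs to represent the parameters. A sketch under that standard definition (the paper's exact convention may differ):

```python
import math
import numpy as np

def dynamic_range_bits(Q):
    """Dynamic range of a QUBO matrix in bits:
    log2(max |q| / min nonzero |q|) over all entries of Q."""
    mags = np.abs(np.asarray(Q, dtype=float))
    nz = mags[mags > 0]
    if nz.size == 0:
        return 0.0
    return math.log2(nz.max() / nz.min())
```

A DR-reducing rewrite of a QUBO instance, as studied in this row, seeks an equivalent matrix (same optima) for which this number is smaller, so the instance fits 8-bit hardware.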
EEG-Language Pretraining for Highly Label-Efficient Clinical Phenotyping | Accept (poster) | Summary: This paper introduces EEG-Language Models (ELMs), a multimodal framework that integrates EEG signals with clinical text reports for various downstream tasks, including retrieval, abnormality classification, and event classification, across multiple datasets. The method employs time-series cropping, text segmen... | Rebuttal 1:
Rebuttal: Dear Reviewer,
Thank you for your thorough and supportive review of our manuscript. We are grateful for your positive assessment of our work’s novelty, methodological soundness, and comprehensive evaluation, as well as your recommendation to accept. We have used your constructive suggestions to r... | Summary: This paper introduces an approach for pretraining multimodal EEG-language models (ELMs) to improve pathology detection. The authors propose combining EEG data with clinical reports using a sub-unit alignment strategy, which involves cropping EEG time series and segmenting medical reports to create multiple non... | Rebuttal 1:
Rebuttal: Dear Reviewer,
Thank you for your detailed and constructive review of our manuscript. We greatly appreciate your recognition of our approach’s innovation, performance improvements, and clinical relevance, as well as your thoughtful suggestions and questions, which have helped us strengthen our wo... | Summary: This paper presents a multi-modality model that integrates EEG recordings and clinical reports for neural event detection. The proposed method segments an EEG recording and its corresponding report into sequences of epochs and words, then constructs epoch-word pairs and an alignment matrix for representation l... | Rebuttal 1:
Rebuttal: Dear Reviewer,
Thank you for your detailed and insightful review of our manuscript. We appreciate your feedback as it has helped us refine our presentation and clarify the motivations behind our work. We are encouraged by your recognition of EEG-text alignment as a promising research direction an... | Summary: The manuscript describes EEG-Language (CLIP-like) pretraining on medical EEG recordings and the accompanying textual medical reports. They used a pretrained medical langauge model and a from-scratch-trained EEG encoder to map temporal crops of EEG and subsections of medical reports to the same latent space, wi... | Rebuttal 1:
Rebuttal: Dear Reviewer,
Thank you sincerely for your positive feedback on our manuscript. We are grateful to hear about the value of our contribution to the research community, as well as the clarity of our manuscript.
Regarding your question about $L_{orth}$, $h_e$ is indeed L2-normalized. We apologize ... | null | null | null | null | null | null |
NEAR: Neural Electromagnetic Array Response | Accept (poster) | Summary: Multi-antenna radar systems face challenges in achieving high angular resolution due to hardware constraints, noise, and limited physical antennas. Traditional supervised learning methods for super-resolution struggle with generalization in unseen environments and require extensive training data.
The authors... | Rebuttal 1:
Rebuttal: We are grateful for your recognition of our work’s strengths, notably our theoretical analysis that precisely characterizes the expressive power of INR, as well as our development of an efficient and effective regularization strategy. In response to your concern regarding the rationale behind dire... | Summary: This paper addresses the challenge of achieving high-resolution angular estimation in multi-antenna radar systems using sparse measurements. The authors propose NEAR (Neural Electromagnetic Array Response), an innovative framework that leverages implicit neural representations (INRs) to predict complete antenn... | Rebuttal 1:
Rebuttal: We sincerely thank the reviewer xAKz for the constructive comments and suggestions. We provide additional experimental comparisons below to address your concerns:
**1. Experimental comparison between our approach and data-driven method (NeRF$^2$).**
We add a state-of-the-art data-driven baseline... | Summary: The authors utilize a new INR-based framework to achieve angular super resolution in multi-antennae radar systems. The authors further propose a physics-informed regularizer and provide theoretical insights into what functions can be represented by INRs under certain, in previous literature established, constr... | Rebuttal 1:
Rebuttal: We sincerely thank the reviewer MVwc for the time and effort in reviewing our paper. We appreciate your positive comments on our work and have fixed the typo you pointed out. If there are any additional areas where you believe we could further improve our manuscript, we would greatly appreciate yo... | Summary: Problem Statement:
The paper tackles the challenge of achieving angular super-resolution in multi-antenna radar systems using only sparse measurements. In radar systems, hardware constraints (i.e. having only a few physical antennas) and noise limit the achievable angular resolution. Traditional supervised met... | Rebuttal 1:
Rebuttal: We sincerely thank the reviewer 5fdJ for the time and effort in reviewing our paper. We greatly appreciate the positive feedback. We hope the following responses can resolve your questions and concerns.
**1. I'm not sure why Theorem 4.5 is needed --- if the Hankel matrices are low rank and the fi... | null | null | null | null | null | null |
P(all-atom) Is Unlocking New Path For Protein Design | Accept (spotlight poster) | Summary: The paper introduces Pallatom, a protein generation model that generates protein structures with all-atom coordinates. The model uses a dual-track framework with residue and atomic-level representations and introduces atom14 representation for modelling variable side-chain coordinates. Pallatom learns a diffus... | Rebuttal 1:
Rebuttal: Thank you to the reviewer for your affirmation of our work. We have replied to your questions as follows.
**Q1:** Paragraph 3.3; why can't the amino acid be extracted from the atomic coordinates without using an AA classifier? A proper all-atom representation would yield the required information...
Rebuttal: Thank you to the reviewer for your affirmation of our work. We have summarized and replied to your questions as follows.
**Q1:** The purpose of computing $f^{template-distogram}$ during inference in Algorithm 1.
This describes our self-conditioning mechanism: during sampling, the predicted stru... | Summary: This paper introduces a novel diffusion model for generating all-atom protein backbones and side-chains, enabling simultaneous sampling of protein structures and their corresponding amino acid sequences. The approach for all-atom protein generation relies on two key elements. First, an atom14 representation, w... | Rebuttal 1:
Rebuttal: We sincerely thank the reviewers for their positive evaluation of our work. Regarding the suggestions and questions raised, we provide point-by-point responses below.
**Q1:** Is it possible to use guidance with sequence-based information?
Based on our ablation experiments, `hybrid14` exhibits ex... | Summary: The paper presents Pallatom, an end-to-end all-atom generative model that jointly learns protein sequences and their 3D coordinates. It uses an “atom14” representation to standardize side-chain atoms and employs a diffusion-based approach on Cartesian coordinates. A dual-track architecture updates residue-leve... | Rebuttal 1:
Rebuttal: Supp. figures/tables: [LINK](https://anonymous.4open.science/r/Pallatom-rebuttal-114C/README.md). Not repeated hereafter.
**Q1:** The authors' self-defined 128-residue training crop size leads to inferior performance, and no validation evidence with larger crop sizes is provided.
We sincerely s... | null | null | null | null | null | null |
MTSTRec: Multimodal Time-Aligned Shared Token Recommender | Accept (poster) | Summary: The paper introduces MTSTRec, a transformer-based multimodal recommendation framework that temporally aligns different modalities to improve sequential recommendations. Unlike existing methods that perform either early or late fusion, MTSTRec employs a Time-aligned Shared Token (TST) m...
Rebuttal: We sincerely thank the reviewer for the thoughtful and constructive feedback. We are especially grateful for the recognition of the strengths of our work, particularly the innovation of the time-aligned multimodal fusion module and its effectiveness demonstrated through our experiments. We also ap... | Summary: This paper proposes a unified multimodal recommendation framework with a Temporally-aligned Shared Token (TST) fusion module to learn cross-modal interactions, ensuring time-consistent alignment and modality fusion. Comprehensive experiments are conducted to compare the framework with existing works and to val... | Rebuttal 1:
Rebuttal: Thank you for your thoughtful feedback. We appreciate your recognition of our work’s clarity, the thoroughness of the experimental evaluation, and a well-written presentation with interesting ideas.
Below, we address each comment and concern in detail:
[W1] Redundancy between TST and positional... | Summary: This paper introduces MTSTRec, a multimodal sequential recommendation model that integrates textual, visual, and price information into a unified, time-aligned shared token representation.
Claims And Evidence: The claims made in the paper are generally supported by the evidence.
Methods And Evaluation Criter... | Rebuttal 1:
Rebuttal: Thank you for your thoughtful feedback. We greatly appreciate your recognition of the strengths of our work, particularly your acknowledgment of our conceptual motivation for time-aligned shared token (TST) fusion, the well-structured experiments and ablation studies, and the comprehensive supplem... | Summary: The authors propose a sequential recommendation framework focusing on multi modal feature fusion. In the proposed models, the authors include feature sets like product IDs, images, text, and prices.The main contribution comes from authors proposing a new block named Time-aligned Shared Token Fusion module. Eac... | Rebuttal 1:
Rebuttal: We sincerely thank the reviewer for the thoughtful and constructive feedback. We appreciate the recognition of the practicality and clarity of our proposed method, the comprehensiveness of our experiments, and the potential impact of releasing two of our datasets to the public. We are especially g... | null | null | null | null | null | null |
BiMaCoSR: Binary One-Step Diffusion Model Leveraging Flexible Matrix Compression for Real Super-Resolution | Accept (poster) | Summary: BiMaCoSR is a method that combines binarization and one-step distillation to significantly compress and accelerate super-resolution (SR) diffusion models. It prevents model collapse from binarization using two auxiliary branches: Sparse Matrix Branch (SMB) and Low Rank Matrix Branch (LRMB). SMB captures high-r... | Rebuttal 1:
Rebuttal: > Q3-1:The authors claim that BMB is responsible for most of the high-frequency information. However, as demonstrated in Fig. 2 of the supplementary material, LRBM appears to play a more significant role in contributing to the high-frequency information in the MLP.
A3-1: This is because the high-... | Summary: This work BiMaCoSR introduces the first binarized one-step diffusion model for real-world single image super-resolution (Real-SR). The paper addresses the heavy memory and computation demands of diffusion-based SR by combining 1-bit model binarization with one-step diffusion distillation. The core idea is to ... | Rebuttal 1:
Rebuttal: > Q2-1:Cite LoRA and KD
A2-1: Thank you for your advice. We will cite LoRA and KD in the revised version.
> Q2-2:One minor claim that could use more direct evidence is the suggestion that BiMaCoSR enables diffusion SR on resource-limited edge devices.
A2-2: In Table 2 in the main paper, our BiM... | Summary: This paper presents BiMaCoSR, a binary one-step diffusion model for efficient real-world image super-resolution (SR), which integrates 1-bit quantization and one-step distillation to address the high computational and memory costs of conventional diffusion models. To mitigate performance degradation caused by ... | Rebuttal 1:
Rebuttal: > Q1-1: The claims about naive binarization and skip connection lack citations.
A1-1: It is well known that naive binarization leads to model collapse, and we provide experiments in the table below. Moreover, we will add citations [1], [3], and [4] for naive binarization and [3] and [4] for skip c...
Inverse Reinforcement Learning with Switching Rewards and History Dependency for Characterizing Animal Behaviors | Accept (poster) | Summary: The paper introduces Switching Inverse RL (SWIRL), an inverse reinforcement learning framework for characterizing animal behaviors. In this problem setting, the goal is to infer reward functions and policies from animal behavior trajectories. To achieve this, SWIRL introduces two main design choices: time-vary... | Rebuttal 1:
Rebuttal: We greatly appreciate the time and effort reviewer 5en4 dedicated to analyzing our work and providing such constructive feedback. We are pleased that the reviewer recognized the novelty of our work, the technical soundness, its implications for both neuroscience and ML community and the experiment... | Summary: This paper addresses the limitation of traditional IRL, which assumes rewards depend only on the current state, making it insufficient for modeling long-term, history-dependent decision-making in animals. To capture this dependency, the paper introduces SWIRL, an IRL framework that models behavior as transitio... | Rebuttal 1:
Rebuttal: We sincerely appreciate the time and effort reviewer CRRd dedicated to analyzing our work. Below, we address each of the questions and concerns:
**1. Hierarchical RL, RL+RNN and POMDPs:** We will refer to the works listed in the “Essential References Not Discussed” section using bracketed citatio... | Summary: In this work, the Authors extend the IRL framework, designed previously to consider multiple goal maps in real-world agents concurrently (Ashwood et al & Zhu et al), by explicitly incorporating history-dependent policies and rewards into the model. Using this new framework, the Authors model several standard d... | Rebuttal 1:
Rebuttal: We greatly appreciate the time and effort reviewer FWjW dedicated to analyzing our work. We are sorry that the key contributions of SWIRL are likely misunderstood by the reviewer and apologize for not clarifying this fact more clearly in the paper. Below we will provide a thorough discussion on th... | Summary: This paper presents an EM-based IRL algorithm SWIRL (SWItching IRL), for learning time-varying reward functions to model animal behavior. The paper extends IRL by incorporating time-varying, history-dependent reward functions.
A key contribution of this work is that it incorporates capturing the shifting mo... | Rebuttal 1:
Rebuttal: We sincerely thank the reviewer 9ThW for their detailed and thoughtful comments on our paper. We are pleased that the reviewer recognized the novelty of our work, the technical soundness, its potential impact on intelligent behavior research, and the overall paper presentation. Below, we address e... | null | null | null | null | null | null |
Adversarial Inception Backdoor Attacks against Reinforcement Learning | Accept (poster) | Summary: This paper introduces a new backdoor attack against deep reinforcement learning agents, specifically addressing the constraint that an attacker cannot arbitrarily modify the reward function to some extremely large value. The key insight is to selectively poison high-return time steps in the agent’s training da... | Rebuttal 1:
Rebuttal: Thank you for your thoughtful review and questions!
**“Existing works like TrojDRL do not introduce arbitrarily large reward values either”**
It is true that TrojDRL perturbs the agent’s reward by a fixed value $\pm c$, but this $c$ may need to be arbitrarily large for attack success. Let’s retu... | Summary: This paper proposed a novel backdoor attack framework called Q-Incept to attack the deep reinforcement learning training process by changing the state, reward, and action stored in the replay buffer. The proposed method designed new transition and reward functions for the MDP under the backdoor attack. The exp... | Rebuttal 1:
Rebuttal: Thank you for your review and questions, we look forward to further discussion.
**“Why the backdoored BR performance can outperform No Poisoning BR scores?”**
For Q-Incept, our theoretical results show that the optimal policy for benign states in $M’$ (the poisoned MDP) is the same as in $M$ (th... | Summary: The paper proposes a new method, Q-Incept, for backdoor poisoning attacks.
Previous work assumes the ability to arbitrarily change the reward within some “poisoned” states in the dataset. The authors rightly point out this is not necessarily realistic, as they arbitrarily manipulate the magnitude of the reward... | Rebuttal 1:
Rebuttal: Thank you for your helpful feedback and questions, we look forward to further discussion with you. Based upon our response we kindly ask you to consider increasing your assessment of our paper.
**"In related work, some alternative poisoning methods are mentioned, I would also include [Lu et. al ... | null | null | null | null | null | null | null | null |
Divide and Conquer: Exploring Language-centric Tree Reasoning for Video Question-Answering | Accept (poster) | Summary: The paper introduces Language-centric Tree Reasoning, a framework for VideoQA that hierarchically decomposes complex questions into a logical tree. It first recursively splits questions into perceptual sub-questions using linguistic cues and retrieval-augmented generation (RAG). Then, answers are aggregated bo... | Rebuttal 1:
Rebuttal: Thank you for carefully reviewing our paper, acknowledging its strengths, and providing valuable suggestions for improvement
If you have any further concerns, please feel free to raise them during the second-round rebuttal phase.
As recommended by the official FAQ, we provide all figures and table... | Summary: This paper proposes Language-centric Tree Reasoning (LTR), a training-free, model-agnostic framework that enhances reasoning capabilities and interpretability in Video Question Answering (VideoQA) by using MLLMs. LTR addresses the limitations of existing MLLMs, such as opacity and lack of controllability in th... | Rebuttal 1:
Rebuttal: Thank you for carefully reviewing our paper, acknowledging its strengths, and providing valuable suggestions for improvement
If you have any further concerns, please feel free to raise them in the rebuttal comment.
As recommended by the official FAQ, we provide all figures via the [anonymous link]... | Summary: This paper introduces Language-centric Tree Reasoning (LTR), a framework to enhance the reasoning of MLLMs. It uses MLLMs to first hierarchically break down a question into sub-questions, then conquer the question by answering the sub-questions in a bottom-up way. In the experiments, LTR is applied to four sta... | Rebuttal 1:
Rebuttal: Thank you for carefully reviewing our paper, acknowledging its strengths, and providing valuable suggestions for improvement
If you have any further concerns, please feel free to raise them during the second-round rebuttal phase.
As recommended by the official FAQ, we provide all figures and table... | null | null | null | null | null | null | null | null |
Representation Preserving Multiclass Agnostic to Realizable Reduction | Accept (poster) | Summary: In the PAC learning model, one is trying to learn a function f over a distribution D. One is given samples and attempts to return a hypothesis h so that Pr_{x ~ D}(f(x) neq h(x)) is small. In the realizable setting, one obtains samples of the form (x,f(x)) where f is guaranteed to be in some function class C a... | Rebuttal 1:
Rebuttal: We thank the reviewer for dedicating their time to assess our work. Below, we address the comments provided by the reviewer.
**1. The significance of our work:** We establish the first representation-preserving reduction from agnostic to realizable learning for multiclass classification with an ... | Summary: The paper studies a representation preserving agnostic to realizable reduction. The reduction can nicely be described as, splitting the training data into two parts $V$ and $T$. Now on the first part of the training data $V$ the learner runs on all subsets the realizable learning algorithm, getting $ 2^{|V|} $... | Rebuttal 1:
Rebuttal: We thank the reviewer for dedicating their time to assess our work. In particular, we thank the reviewer for taking the time to verify that the work is technically sound. We are delighted that the reviewer found that we solved an interesting open problem in an elegant way, and moreover mentioned t... | Summary: The authors study agnostic learning with black-box realizable learners, extending the work of Hopkins et al. (2022).
They adapt the simple reduction from Hopkins et al. in a very general PAC learning setting (encompassing list learning and many more). They prove that their reduction algorithm achieves a sampl... | Rebuttal 1:
Rebuttal: We thank the reviewer for dedicating their time to assess our work. In particular, we thank the reviewer for taking the time to verify that the work is technically correct. We are delighted that the reviewer found our work nice and important, and moreover mentioned that our paper should be interes... | Summary: This paper studies the PAC learning of the problem of multiclass classification with unbounded numbers of labels. The primary contribution is a novel reduction from the agnostic learning setting to the realizable setting that preserves the structure of the output space, which resolves an open problem posed by ... | Rebuttal 1:
Rebuttal: We thank the reviewer for dedicating their time to assess our work. We are delighted that the reviewer found our algorithm novel, and mentioned that our paper is well written.
We would be happy to provide additional clarification on any aspect of the paper that could help inform the review score. | null | null | null | null | null | null |
Promoting Ensemble Diversity with Interactive Bayesian Distributional Robustness for Fine-tuning Foundation Models | Accept (poster) | Summary: The authors propose a new Bayesian inference framework called “Interactive Bayesian Distributional Robustness” (IBDR). IBDR is designed to improve the quality and diversity of model ensembles by modelling interactions between individual models in the ensemble in order to prevent them from collapsing into simil... | Rebuttal 1:
Rebuttal: We would like to thank reviewer KcS3 for their supportive review and feedback. We would like to address some questions and concerns as follows:
1. **What is the limit on IBDR performance improvement as the number of particles increases?**
Thank you for the helpful suggestion. Following your comment... | Summary: The authors introduce a method to encourage diversity in an ensemble of Bayesian neural-net particles.
To achieve this, they combine results from distributional robustness and determinantal point processes
to derive a PAC-Bayesian-style upper bound on their target objective. An approximation of this bound be... | Rebuttal 1:
Rebuttal: We thank the reviewer for your feedback. We provide detailed responses to the main concerns as follows:
1. **l195: 'conventional Bayesian frameworks' can't enforce diversity**
- Indeed, in Section 3.3, we introduce what we meant by traditional Bayesian framework. Given a training set $S$, we have ... | Summary: The paper introduces Interactive Bayesian Distributional Robustness, a novel Bayesian inference framework designed to improve ensemble diversity and robustness in fine-tuning foundation models. The core idea of IBDR is to explicitly model interactions between multiple sampled particles in the Bayesian inferenc... | Rebuttal 1:
Rebuttal: We appreciate the reviewer’s comments and respond to the key concerns as follows:
**"While IBDR is evaluated on multiple datasets, computational overhead is not thoroughly analyzed. Given that the method involves interactive Bayesian sampling, a study on efficiency would be valuable."**
Thank yo... | Summary: This paper introduces a distributionally robust method for Bayesian estimation, aimed primarily at fine-tuning foundation models. Central to the contribution is a term to promote particle diversity during optimization. Theoretical results of the proposed method are provided, and extensive fine-tuning experimen... | Rebuttal 1:
Rebuttal: We thank the reviewer for the constructive feedback and would like to address the concerns as follows:
1. **Regarding the Theoretical Claims:**
- **Regarding the citation of prior work in the proof of Theorem 4.1** : Thank you for the suggestion. Our proof relies on Theorem 2.1 from the cited stud... | null | null | null | null | null | null |
How Transformers Learn Regular Language Recognition: A Theoretical Study on Training Dynamics and Implicit Bias | Accept (poster) | Summary: The work shows a theoretical analysis studying how a single layer of a Transformer (more precisely, an attention layer with a linear layer on top) learns to solve "even pairs" and "parity check" - two regular language recognition tasks. The authors begin by analyzing the even pairs task, showing that the Trans... | Rebuttal 1:
Rebuttal: We thank the reviewer for the helpful comments. Please note that all our new experiment results (i.e., the figures we quote below) can be accessed via the link https://anonymous.4open.science/r/icml2025-4BCC/Figures%20for%20ICML.pdf
Q1: Main takeaway about transformers: (i) How analysis of parity... | Summary: This paper presents a detailed theoretical analysis of how a one-layer transformer learns two sequence recognition/classification tasks: even pairs and parity check. The analysis decomposes the factors driving the attention weights and token score, with a discussion of the training dynamics in detail (e.g. att... | Rebuttal 1:
Rebuttal: We thank the reviewer for the helpful comments. Please note that all our new experiment results (i.e., the figures we quote below) can be accessed via the link https://anonymous.4open.science/r/icml2025-4BCC/Figures%20for%20ICML.pdf
Q: Add more diagrams, or interleave the theoretical analyses wi... | Summary: The authors theoretically study language recognition tasks with transformer. Formally, they study the training dynamics of transformers trained with gradient descent on the parity check and even pairs problems. Considering a single-layer simplified transformer, they first show that the even pairs problem can b... | Rebuttal 1:
Rebuttal: We thank the reviewer for the helpful comments. Please note that all our new experiment results (i.e., the figures we quote below) can be accessed via the link https://anonymous.4open.science/r/icml2025-4BCC/Figures%20for%20ICML.pdf
Q1: It seems that the embedding strategy does not depend on the ... | Summary: This paper focuses on two typical regular languages: even pairs and parity check. The authors show that one-layer transformers can learn even pairs directly without CoT. For parity check, it is shown that one-layer transformers can learn it with CoT and with a small amount of data mixing with even pairs.
Clai... | Rebuttal 1:
Rebuttal: We thank the reviewer for the helpful comments. Please note that all our new experiment results (i.e., the figures we quote below) can be accessed via the link https://anonymous.4open.science/r/icml2025-4BCC/Figures%20for%20ICML.pdf
Q1: It is not clear how the results can be extended to deep tran... | null | null | null | null | null | null |
Think Twice, Act Once: A Co-Evolution Framework of LLM and RL for Large-Scale Decision Making | Accept (poster) | Summary: This paper introduces a novel framework, termed Agents Co-Evolution (ACE), which combines Large Language Models (LLMs) and Reinforcement Learning (RL) for large-scale decision-making in the context of power grid operations. In this framework, the Soft Actor-Critic (SAC) algorithm (Haarnoja et al., 2018) is fir... | Rebuttal 1:
Rebuttal: > LLM4Teach
Thank you very much for your insightful question. You raise a valid point regarding the original version of LLM4Teach requiring environmental interaction after policy-level alignment. However, due to the significant time cost associated with LLM-environment interactions, we adopted th... | Summary: This paper proposes the ACE framework, which co-evolves LLMs and RL agents for industrial-scale decision-making. ACE decouples the high-level reasoning and fine-grained control by employing a "Think Twice, Act Once" strategy. The framework is evaluated on multiple power grid operation challenges from the L2RPN... | Rebuttal 1:
Rebuttal: > Cross-Domain Applicability of ACE
We appreciate the reviewer's question about ACE's generalizability. While our study focuses on power grid control, the core modules of ACE are domain-agnostic and not constrained by specific environment characteristics. We believe ACE's potential applications i... | Summary: The paper proposes Agents Co-Evolution (ACE), a synergistic framework that integrates Large Language Models (LLMs) and Reinforcement Learning (RL) agents to address challenges in large-scale decision-making problems. While LLMs struggle with long-sequence, real-time decision-making, and RL faces inefficiency i... | Rebuttal 1:
Rebuttal: > Computational and Memory Overheads
Thank you for raising the important question regarding computational costs. We conduct experiments in the NeurIPS 2020 competition environment. For expert-guided RL, we trained for 100K timesteps with a total duration of 6h 4m14s. Below are the computational c... | Summary: The paper introduces **Agents Co-Evolution (ACE), a framework that leverages Large Language Models (LLMs) to enhance the sample efficiency of large-scale Reinforcement Learning (RL) decision-making systems**. The core principle of ACE involves using the reasoning capabilities of LLMs to guide the RL training p... | Rebuttal 1:
Rebuttal: > Baselines
We sincerely appreciate your insightful suggestion. In response, we explain our baseline selection criteria and add new baselines for comparison:
- **Original baseline**: We choose SMAAC because it is the winning solution in WCCI 2020 and relies less on predefined rules than other me... | null | null | null | null | null | null |
Importance Corrected Neural JKO Sampling | Accept (poster) | Summary: This paper presents a method to sample from an probability distribution known through its density, up to an unknown normalizing constant. The method follows the trend of neural parameterizations to solve the proximal steps of the JKO scheme to compute the Wasserstein Gradient Flow of the reverse Kullback-Leibl... | Rebuttal 1:
Rebuttal: We would like to thank the reviewer for the detailed and valuable feedback.
## Convexity Assumption
There appears to be a misunderstanding regarding the assumptions for the theoretical part: We **do not assume that the density is log concave**. Instead, Assumption 3.1 assumes that the functional... | Summary: This paper contributes a method called "Importance corrected neural JKO sampling", based on the well-established Jordan-Kinderlehrer-Otto (JKO) scheme. The method is constituted by a flow-based ordinary differential equation (ODE), which is parameterized by neural networks and learned using standard neural ODE... | Rebuttal 1:
Rebuttal: Many thanks for the detailed and thoughtful review. Please find our answers below. For the final version, we will additionally correct the typos, extend the literature part (e.g. with SVGD), improve the visual accessibility based on your comments and add the definitions of lsc/coercive/$\lambda$-c... | Summary: This paper applies the Wasserstein Gradient Flow (WGF) framework to the sampling problem, i.e. sampling from a given target distribution. The proposed approach consists of two key stages:
**Stage 1**: JKO Steps with Continuous Normalizing Flows (CNFs)
- Given a terminal density, the authors perform Jordan–Kin... | Rebuttal 1:
Rebuttal: We would like to thank the reviewer for the detailed evaluation of our paper. Please find our comments below.
## Literature
The process of research is an active field and therefore many interesting contributions are published frequently in particular in this modern field. However, we want to kin... | Summary: This paper proposes to sample from an unnormalized probability density via a sequence of interleaved continuous normalizing flows (CNFs) and importance accept/reject steps. The CNFs, which are penalized with a velocity norm regularizer as in OT-Flow (Onken et al. 2021), are interpreted as Wasserstein proximal ... | Rebuttal 1:
Rebuttal: Thank you very much for your very detailed and thoughtful review. Please find the answers to your questions and comments below. Additionally, we will correct the typos and grammar errors.
## Questions
1. Our choice of the schedule is based on the following heuristic: Since the rejection layers a... | null | null | null | null | null | null |
Diffusion Sampling Correction via Approximately 10 Parameters | Accept (poster) | Summary: This paper proposes PCA-based Adaptive Search (PAS) to optimize the sampling process of diffusion probabilistic models (DPMs). The key of the method is leveraging Principal Component Analysis to identify a low-dimensional subspace for sampling correction. The method also includes an adaptive search strategy to... | Rebuttal 1:
Rebuttal: We sincerely appreciate the reviewer's recognition of our work and meticulous review. Please find below our responses to all the questions. We would greatly appreciate it if you could consider increasing the score if you are satisfied with our response.
**Abbreviation: CaE (Claims And Evidence), ... | Summary: The paper proposes a novel PCA-based Adaptive Search (PAS) method to accelerate diffusion model sampling with minimal additional computational and parameter costs. The key idea of PAS rests on the observation that the sampling trajectory of a parameterized reverse ODE of diffusion model almost lies in a 3D sub... | Rebuttal 1:
Rebuttal: We sincerely appreciate the reviewer's efforts and valuable review. Below are our responses to all questions. We kindly hope you could consider increasing the score if you are satisfied.
>**Experimental Designs Or Analyses: Direct performance comparisons with training-based methods are lacking, h... | Summary: The authors leveraged the previous finding that the diffusion sampling trajectories are low dimensional, and that part of it is more curvy. Then they developed a method to learn the PCA basis of current ongoing trajectory during sampling and then learn coefficient to recombine the PC vectors to correct for the... | Rebuttal 1:
Rebuttal: We sincerely appreciate the reviewer's detailed review and insightful suggestions!
**Abbreviation: CaE (Claims And Evidence), MaEC (Methods And Evaluation Criteria), ERND (Essential References Not Discussed)**
>***CaE1: The later trajectory may not be linear but instead takes smaller steps.***
... | null | null | null | null | null | null | null | null |