Dataset schema (column name, dtype, and string-length range or number of distinct values):

| Column | Dtype | Lengths / distinct values |
|---|---|---|
| title | stringlengths | 15–163 |
| paper_decision | stringclasses | 4 values |
| review_1 | stringlengths | 853–32.6k |
| rebuttals_1 | stringlengths | 0–15.1k |
| review_2 | stringlengths | 1.03k–35.6k |
| rebuttals_2 | stringlengths | 0–15.1k |
| review_3 | stringlengths | 807–27.4k |
| rebuttals_3 | stringlengths | 0–15k |
| review_4 | stringlengths | 780–22.2k |
| rebuttals_4 | stringlengths | 0–15.1k |
| review_5 | stringclasses | 171 values |
| rebuttals_5 | stringclasses | 166 values |
| review_6 | stringclasses | 25 values |
| rebuttals_6 | stringclasses | 24 values |
| review_7 | stringclasses | 4 values |
| rebuttals_7 | stringclasses | 4 values |
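Each record in this schema is a flat row holding a title, a decision, and up to seven review/rebuttal pairs, with absent slots stored as null. A minimal sketch in plain Python of how the viewer-style summary statistics (string-length ranges, distinct decision classes, null counts) can be recomputed; the two sample records are hypothetical abbreviations of rows shown below, not the actual dataset contents:

```python
# Hypothetical records mirroring the schema above: each row carries a paper
# title, its decision, and review/rebuttal columns; missing slots are None
# (rendered as "null" in the preview).
records = [
    {"title": "Robust Autonomy Emerges from Self-Play",
     "paper_decision": "Accept (poster)",
     "review_1": "Summary: The authors introduce a simulator ...",
     "review_4": None},
    {"title": "Conditional Lagrangian Wasserstein Flow for Time Series Imputation",
     "paper_decision": "Reject",
     "review_1": "Summary: The paper proposes a time-series imputation method ...",
     "review_4": None},
]

# Recompute the viewer's summary statistics for one text column and one
# class column.
review_lengths = [len(r["review_1"]) for r in records if r["review_1"]]
decisions = {r["paper_decision"] for r in records}
missing_fourth = sum(r["review_4"] is None for r in records)

print(min(review_lengths), max(review_lengths))  # min/max string length
print(len(decisions))                            # distinct decision classes
print(missing_fourth)                            # rows lacking a 4th review
```

On the real dataset the same aggregation would reproduce the "stringlengths" ranges and "stringclasses" counts listed in the schema table.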
Title: Robust Autonomy Emerges from Self-Play
Decision: Accept (poster)
Summary: The authors introduce a simulator which is capable of efficiently simulating joint traffic scenarios at scale. Using this simulator, they train a population of driving policies using self-play reinforcement learning. When evaluated on CARLA, NuPlan and Waymax, the authors report state of the art performance de...
Rebuttal 1: Rebuttal: We thank the reviewer for their thoughtful review, and list a detailed reply for all questions raised. To contextualize this rebuttal we would like to clarify one of the main differences between Agent and Environment Simulation (WOSAC) and learning a driving policy (this work). The goal of WOSAC ...
Summary: This paper presents a batched driving simulator called GIGAFLOW that enables large-scale self-play, i.e., randomly initializing scenarios and learning robust driving behaviors with pre-defined RL rewards. GIGAFLOW creates worlds based on eight maps and spawns agents at random locations with randomly per...
Rebuttal 1: Rebuttal: We thank the reviewer for their thoughtful review, and list a detailed reply for all questions raised. > GPUDrive We are aware of the work, and have been in close communication with the authors of GPUDrive from the start of their project. We will cross-cite each other's work for the camera-ready...
Summary: This paper presents GIGAFLOW, a batched simulator that supports large-scale simulation to train robust driving policies via self-play. Through learning from a massive scale of self-play, the learned policy demonstrates superior performance compared to the prior state-of-the-art on multiple challenging benchmar...
Rebuttal 1: Rebuttal: We thank the reviewer for their thoughtful review, and list a detailed reply for all questions raised. > The intuition and motivation to use one policy for all agents is under-discussed in this work. Furthermore, the ablation study of comparing shared policy and separate policies for different ty...
(reviews 4-7 and rebuttals 4-7: null)
Title: Rethinking the Stability-Plasticity Trade-off in Continual Learning from an Architectural Perspective
Decision: Accept (poster)
Summary: This paper tackles the problem of offline Class Incremental Learning by leveraging a dual-architecture strategy. The goal is to design one architecture that would be more plastic (focus on new knowledge) and another that would be more stable (focus on older knowledge) and combine both capabilities during trai...
Rebuttal 1: Rebuttal: We sincerely thank Reviewer 2FFZ for the recognition of our insightful empirical findings and interesting idea. We are also grateful for the valuable and constructive feedback. > Conclusions regarding deeper network Our main conclusion is that "existing architectural designs typically exhibit go...
Summary: This paper studies the stability-plasticity trade-off in continual learning from an architectural perspective. It finds that increasing depth improves plasticity, while increasing width enhances stability. Motivated by this, it proposes a dual-architecture framework, DualArch, comprising two distinct networks ...
Rebuttal 1: Rebuttal: We sincerely thank Reviewer Kz16 for the recognition of our intriguing research perspective and interesting idea. We are also grateful for the valuable and constructive feedback. > Computation overhead. We would like to clarify that the total FLOPs of two models (Sta-Net and Pla-Net) in Dual-Arc...
Summary: The paper investigates the stability-plasticity trade-off in continual learning from an architectural perspective. Through empirical studies, the authors find that depth enhances plasticity while width favors stability. Building on this insight, they propose an approach that leverages two specialized networks ...
Rebuttal 1: Rebuttal: We sincerely thank Reviewer MGyK for the recognition of our insightful research perspective, simple yet effective method, and impressive performance improvements. We are also grateful for the valuable and constructive feedback. > Validation on replay-free settings. While our current experiments ...
Summary: This paper investigates the stability-plasticity trade-off in continual learning (CL) from an architectural perspective. The authors empirically demonstrate that deeper networks favor plasticity, while wider networks enhance stability under fixed parameter constraints. To address this, they propose Dual-Arch, ...
Rebuttal 1: Rebuttal: We sincerely thank Reviewer xKgH for the valuable and constructive feedback. > Generalizability of architectural insight / Concern about supplementary material. There may be a misunderstanding regarding Tab. 5 and 6 results: - **Shallower yet wider** ViT (5×49, Tab. 5): Lower AAN/FAF value → ...
(reviews 5-7 and rebuttals 5-7: null)
Title: Fleet of Agents: Coordinated Problem Solving with Large Language Models
Decision: Accept (poster)
Summary: The work introduces a novel framework, FOA, that employs LLM agents for dynamic tree search using a genetic-type particle filtering approach. The multiple agents provide dynamic branching and adapt the exploration strategy. A major claim is to improve the cost-efficiency of multi-query methods. The paper comp...
Rebuttal 1: Rebuttal: We thank the reviewer for their careful and thoughtful review. We appreciate the positive assessment, recognizing the new state-of-the-art performance of our method and its extensive evaluation. We are grateful for the reviewer’s thoughtful engagement, and hope that our clarifications help address...
Summary: The paper proposes Fleet of Agents (FOA), a novel multi-agent framework leveraging genetic particle filtering to enhance problem-solving capabilities of Large Language Models (LLMs). FOA achieves improved reasoning quality and significantly reduces computational costs compared to state-of-the-art methods, demo...
Rebuttal 1: Rebuttal: We thank the reviewer for their thoughtful feedback. We are encouraged by their overall positive assessment, recognizing the novelty, strong performance, and cost-effectiveness of our method, and the extensive nature of our experiments involving diverse benchmark tasks, multiple baselines, and LLM...
Summary: The paper introduces Fleet of Agents, a framework that coordinates multiple LLM agents using a genetic-type particle filtering approach to optimize the balance between exploration and exploitation in problem-solving tasks. Claims And Evidence: Yes Methods And Evaluation Criteria: Yes Theoretical Claims: Ye...
Rebuttal 1: Rebuttal: We thank the reviewer for their feedback and overall positive assessment. We hope to have comprehensively clarified all the questions and concerns of the reviewer with our responses below. We will be happy to answer any further questions that the reviewer may have and hope that the reviewer consid...
(reviews 4-7 and rebuttals 4-7: null)
Title: Self-supervised Adversarial Purification for Graph Neural Networks
Decision: Accept (poster)
Summary: The paper proposes a method to defend GNNs that is based on a separate GNN classifier and a purifier. The main idea is to decouple the classifier and purifier and learn a multi-step purifier using generalized pagerank. Extensive experiments are provided showing that this approach outperforms state-of-the-art d...
Rebuttal 1: Rebuttal: Thank you for your thoughtful and constructive comments. We would like to address your concerns (C) with the following responses: --- > **C1.** Aren't the static approaches such as Jaccard-GCN or SVD-GCN also already "decoupled"? Unlike Jaccard-GCN and SVD-GCN, which apply fixed heuristics for ...
Summary: This paper studies the robustness of GNNs against adversarial attacks from the perspective of adversarial purification. The authors introduce a self-supervised adversarial purification framework that preprocesses input data to remove adversarial perturbations before classification. Experimental results on a wi...
Rebuttal 1: Rebuttal: Thank you for your thoughtful and constructive comments. We would like to address your concerns (C) with the following responses: --- > **C1.** The claim in Proposition 3.1 is clear. However, according to Proposition 3.1, with a deep GNN (e.g., when the number of layers ≥ 3), most nodes in the g...
Summary: Traditional defense strategies for Graph Neural Networks (GNNs), such as adversarial training, often struggle to balance accuracy and robustness, as they entangle these competing objectives within a single classifier. This paper challenges that approach and introduces a novel self-supervised adversarial purifi...
Rebuttal 1: Rebuttal: Thank you for your thoughtful and constructive comments. We address your concerns (C) as follows: --- > **C1.** The core idea closely resembles test-time graph transformation [1]. We understand the concerns regarding resemblance with TTGT [1] and wish to highlight four key differences: 1. Dedic...
Summary: This study introduces a self-supervised adversarial purification framework to enhance the robustness of GNNs against attacks. Unlike traditional methods that merge accuracy and robustness in a single classifier, their approach (GPR-GAE) employs a dedicated purifier to cleanse input data prior to classification...
Rebuttal 1: Rebuttal: Thank you for your thoughtful and constructive comments regarding our work. We would like to address your concerns (C) with the following responses: --- > **C1.** The authors should further elaborate on the connections to existing work on GNNs with spectral filtering (a.k.a. spectral GNNs). We t...
(reviews 5-7 and rebuttals 5-7: null)
Title: VideoJAM: Joint Appearance-Motion Representations for Enhanced Motion Generation in Video Models
Decision: Accept (oral)
Summary: The paper proposes a joint appearance-motion learning framework for video generation. The authors are motivated by the key observation that the common pixel-based training objective is invariant to temporal perturbations. Therefore, they propose to equip the model with an explicit motion learning objective via...
Rebuttal 1: Rebuttal: We thank the reviewer for the comprehensive feedback and the useful points for discussion. Please find below our response. __VBench prompts:__ Both VBench and Movie Gen are common benchmarks for general video evaluation. The VBench prompts are not used in our work since out of the 11 dimensions f...
Summary: Despite recent advancements, generative video models still exhibit significant limitations in temporal coherence, especially when modeling real-world dynamic interactions and physics. The authors identify that this issue arises fundamentally from the traditional pixel-based reconstruction objectives, which pri...
Rebuttal 1: Rebuttal: We thank the reviewer for the comprehensive feedback and the interesting points for discussion. Please find below our response to the points raised in the review. __VideoJAM adaptability:__ We appreciate the feedback. While all concepts of our work can be easily generalized to any backbone, we ac...
Summary: This paper presents VideoJAM, a framework that improves motion coherence in generative video models by learning a joint appearance-motion representation. It introduces two key components: predicting both pixels and motion during training, and Inner-Guidance for coherent motion during inference. VideoJAM outper...
Rebuttal 1: Rebuttal: We thank the reviewer for the comments and points for discussion. Please find below our response. __Human evaluation:__ Thank you for your feedback. There appears to be a misunderstanding in the review. Importantly, _VideoJAM appears in all human evaluations_. As highlighted in all table captions...
Summary: The authors present Video-JAM (Joint Appearance-Motion representation), with the aim of capturing real-world motion, dynamics, and physics, which existing video generative models struggle to handle. In particular, the authors discover that the current video model training objective biases models towards fideli...
Rebuttal 1: Rebuttal: We thank the reviewer for the comprehensive review of our work and the insightful suggestions. Please find below our response to the points raised in the review. __Related work on text-to-3D:__ Thank you for bringing this work to our attention. We will revise the related works section to referenc...
(reviews 5-7 and rebuttals 5-7: null)
Title: LieRE: Lie Rotational Positional Encodings
Decision: Accept (poster)
Summary: The authors introduce a type of positional embedding which extends the RoPE embeddings by introducing learnable rotation matrices. ## update after rebuttal I thank the authors for their thorough response. In light of this, I will increase my score to weak accept. Claims And Evidence: The authors present reas...
Rebuttal 1: Rebuttal: Dear Reviewer yH2U, Thank you for your thoughtful review and detailed feedback. We understand your concerns about the incremental nature and effectiveness of our work, and would like to address these directly: **On novelty and the primary contribution of the work**: We split up our contributions...
Summary: LieRE extends the popular RoPE by replacing its block-diagonal 2D rotation matrices with learned, dense, high-dimensional rotation matrices derived from Lie group theory. The authors show that LieRE addresses key limitations of RoPE, particularly for multi-dimensional data like images and videos. Specifically,...
Rebuttal 1: Rebuttal: Dear Reviewer gAmX, Thank you for your thorough and supportive review. We particularly appreciate your recognition of our mathematical foundations and empirical results. We have built upon your feedback to further improve the paper. In addition to the changes below, we have also expanded the pape...
Summary: The authors mainly proposed a new positional encoding method called LieRE, to replace the previously widely used RoPE. It is used to improve the spatial relationship representation, especially in 2D and 3D images. Extensive experiments are conducted on classification tasks, and with the proposed PE, the accuracy v...
Rebuttal 1: Rebuttal: Dear Reviewer udrb, We appreciate your recognition of our work's theoretical merits and experimental contributions. We have carefully considered your feedback and would like to address each point. **Long Context for 1D**: LieRE is primarily focused on inputs with dimensionality greater than one....
Summary: The paper introduces a positional embedding encoding based on Lie Groups. The idea of the paper is to parameterize the positional embeddings using skew symmetric matrices. The authors show the benefit of the proposed method in terms of generalization, data efficiency and compute needed. Overall, the idea is n...
Rebuttal 1: Rebuttal: Thank you for the thoughtful and thorough review and writing feedback which has helped strengthen the paper. We have addressed the typos and writing style in the revision based on your comments. **Equivariant work comparison**: We are excited about the equivariant line of work! We believe it is k...
(reviews 5-7 and rebuttals 5-7: null)
Title: Conditional Lagrangian Wasserstein Flow for Time Series Imputation
Decision: Reject
Summary: The paper proposes a time-series imputation method based on optimal transport flow matching. To improve point-estimation of the imputations, the paper suggests learning an additional denoising autoencoder, which when used during sampling reduces imputation variance. Evaluation with common point-estimation metr...
Rebuttal 1: Rebuttal: Thank you very much for the insightful comments (due to the length limit, the response is concise). ### **Claims And Evidence** 1. > - However, the addition of the drift term... #### **Response**: The proposed theoretical framework can be used to analyze the learning process of the conditional distribution. H...
Summary: In this paper, the authors proposed a novel time-series imputation approach named `Conditional Lagrangian Wasserstein Flow` (CLWF) based on functional optimization approaches, for example, the Schrödinger bridge, optimal transport, flow matching, etc. First, the authors reformulated the data imputatio...
Rebuttal 1: Rebuttal: Thank you very much for the insightful comments (due to the length limit, we have to keep the response concise here). ### **Claims And Evidence** > 1. To the reviewer's knowledge, the acceleration of diffusion models ... #### **Response**: Here, we mainly refer to the diffusion models for time series i...
Summary: The article proposes a methodology for time series imputation using Wasserstein flows. The paper presents a number of theoretical elements required for their contribution, to then validate their method via simulations. ## update after rebuttal As I posted early in the discussion period, the rebuttal does not ...
Rebuttal 1: Rebuttal: ### **Methods And Evaluation Criteria** > line 149 (right) states to "solve" Eq. (7) - however, note that Eq. (7) is a definition, how can it be "solved"? #### **Response**: In Eq. (7) $\mu_t$ is unknown. ### **Supplementary Material** > For instance, what is the point of Fig 3? #### **Response**...
Summary: This paper introduces Conditional Lagrangian Wasserstein Flow (CLWF), time series imputation model that leverages optimal transport theory and Lagrangian mechanics. Following the principle of least action, CLWF learns a velocity field by minimizing kinetic energy, effectively finding the shortest path in proba...
Rebuttal 1: Rebuttal: Thank you very much for the insightful comments. ### **Claims And Evidence** > 1. The authors claim that they show the connection between the proposed method and SOC, path measures. But in my opinion, more supporting detail should be included. In this version of manuscript, I cannot find detailed relatio...
(reviews 5-7 and rebuttals 5-7: null)
Title: Convex Markov Games: A New Frontier for Multi-Agent Reinforcement Learning
Decision: Accept (poster)
Summary: The paper proposes a new model for multi-agent interaction called Convex Markov Games (CMGs), that generalizes the concept of Markov Games to convex objectives of the induced state distribution. The authors characterise the existence of mixed and pure Nash Equilibria and propose a simple algorithm to find them...
Rebuttal 1: Rebuttal: Dear reviewer, thank you for your positive endorsement and helpful comments. We are pleased to hear you not only appreciate our proposed generalization of Markov games, but also checked and found **“the proofs are rigorous and the empirical evidence is convincing”**. Thank you for saying the **“p...
Summary: The paper presents convex Markov games, a framework that extends Markov games by generalizing the players' linear utilities with any convex function of their state-action occupancy measures. A similar generalization recently studied in single-agent problems, from MDP to convex MDPs. Here the same extension is ...
Rebuttal 1: Rebuttal: Dear reviewer, thanks for your comments and highlighting our work as a **“Natural extension of prior work in single-agent convex utilities decision making”** with **“several interesting applications”**. Your feedback will help us greatly improve the paper. Pure/Mixed Strategies and Deterministic...
Summary: The paper studies a generalized model of Markov games, called convex Markov games (CMG). The difference between CMGs and standard Markov games is that the former adopt convex functions as the players' utility functions, which are more general than linear functions used by the latter. More specifically, each pl...
Rebuttal 1: Rebuttal: Dear reviewer, thank you for your constructive feedback. We are pleased to hear you find the proposed convex Markov Game model **“well-motivated”** and the proof of existence of pure Nash equilibria an **“important result”**. We appreciate the need for clearly explaining this important result and ...
Summary: The authors introduce a class of convex Markov games that allow general convex preferences, prove that pure strategy Nash equilibria exists, and provide a gradient-based approach to approximate this equilibria, noting the computational difficulty of finding the actual equilibria with general solvers. The signi...
Rebuttal 1: Rebuttal: Dear reviewer, thank you for your constructive feedback. We are glad to hear that the significance of our work carried through and that you think we **“did a great job pointing out the gap in literature”** (convex MDP + multi-agent). We also understand your comment regarding the layout of the expe...
(reviews 5-7 and rebuttals 5-7: null)
Title: On the Impact of Performative Risk Minimization for Binary Random Variables
Decision: Accept (poster)
Summary: The paper devises two metrics to study the path taken for models in a performative setting and applies them in the study of binary random variables (shifted Bernoulli). Claims And Evidence: - They claim that they analyze the impact of Performative Risk Minimization. However, as far as I can tell, there is n...
Rebuttal 1: Rebuttal: Thank you for the detailed and constructive review! We address your concerns and questions below and remain available for further discussion. **The paper never defines what they understand by PRM … They cite three papers but those solve different optimization problems** Thank you for your commen...
Summary: The paper studies settings where predictions actively shape the data distribution, a formalism known as performative prediction. Beyond just predictive accuracy, their main focus is on understanding how different update procedures and learning algorithms influence the data distribution and understanding to wha...
Rebuttal 1: Rebuttal: Thank you for the thoughtful and constructive review! We address your concerns and questions below and remain available for further discussion. **I think that the paper would benefit by including a discussion to the following papers.** Thank you for the suggested references! We agree that, while...
Summary: This paper studied the long-term impact of performative prediction on the predictor quality (as measured by bias) and population distribution (as measured by shift). It examined separately the settings of slow and rapid classifier updates with or without perfect information. The paper analyzed the long-term bi...
Rebuttal 1: Rebuttal: Thank you for the thoughtful and constructive review! We address your concerns and questions below and remain available for further discussion. **Setting $\lambda = 0$ seems too restrictive, no? ...** Thank you for your comment. Indeed, in Section 4.2.2 we had to make assumptions in order to ens...
(reviews 4-7 and rebuttals 4-7: null)
Title: Principal-Agent Bandit Games with Self-Interested and Exploratory Learning Agents
Decision: Accept (poster)
Summary: The paper studies a principal-agent bandit game where the principal first provides an incentive, and then the agent selects the arm based on estimation and the provided incentive. The authors propose a novel elimination algorithm for the i.i.d. setting and the linear bandit setting. The corresponding regret up...
Rebuttal 1: Rebuttal: We thank the reviewer for the valuable feedback. Below we address your concerns. --- **W:** I understand that the paper is of theoretical nature, but I believe that the paper can benefit from experiments on simulated data and real-world data. **A:** Thank you for the suggestion. We do agree that...
Summary: This paper studies the bandit principal-agent problem, where a principal tries to incentivize an agent playing a bandit instance so as to maximize their own cumulative reward. It extends the previous works of Scheid 2023 and Dogan 2023 by considering an agent who selects the arm based on *empirical* reward mea...
Rebuttal 1: Rebuttal: We thank the reviewer for the valuable feedback. Below we address your concerns. --- **W1:** It seems that this paper employs many technics from these papers (Dogan et al., 2023a) and (Scheid et al., 2024b) without bringing a lot novelties from a mathematical point of view (except using online el...
Summary: This paper studies the problem of principal-agent interactions with self-interested agents. Different from previous studies like Dogan et al.(2023a, 2023b) and Scheid et al. (2024b), this paper assumes an empirical mean maximizer agent behavior model rather than true mean maximizer. The authors’ elimination fr...
Rebuttal 1: Rebuttal: We thank the reviewer for the valuable feedback. Below we address your concerns. --- **W1:** I think the outperformance of soft $O(T^{11/12})$ is not a fair comparison, since their regret is defined with respect to different behavior models, i.e., true mean versus empirical mean. **A1:** We woul...
(reviews 4-7 and rebuttals 4-7: null)
Title: Geometric Algebra Planes: Convex Implicit Neural Volumes
Decision: Accept (poster)
Summary: The paper aims to improve grid-based representations for neural fields. Inspired by principles from geometric algebra, the paper introduces a set of formulations, specifically, convex and semi-convex representations, to enhance the expressiveness and efficiency of grid-based neural fields. Experimental results...
Rebuttal 1: Rebuttal: We thank the reviewer for their thoughtful engagement with our work. Response to other comments and suggestions: Thank you for noticing the inconsistency with figure 1. This is a typo, the full figure is in the appendix (figure 6) due to space limitations in the main text. Response to questions:...
Summary: The paper reviews existing literature on INRs, noticing that each method presents a trade off between its representation power and its size and optimizability. Based on Clifford algebra, Geometric Algebra Planes are introduced, generalizing some of the existing approaches which use 2 or 3-dimensional feature g...
Rebuttal 1: Rebuttal: Thank you for your comments and thoughtful engagement with our work. We address your concerns below. Response to E1 (other baselines and nonconvex decoders): Thank you for referencing [1]. In our 3D segmentation experiments, we use a different dataset compared to data types mentioned by the surve...
Summary: This paper provides an analysis of the mixture of the n-dimensional (n<3) voxel representations for learning neural fields. As the authors mentioned, this voxel representation can be line, plane, or volume, which can be viewed as a low-rank or low-resolution representation to encapsulate the target scenes or i...
Rebuttal 1: Rebuttal: Thank you for your thorough review. Response to 1 (comparison with volume-based method): The pink dashed line marked as “GA-Planes ablation (volume only)” is a volume-based method. In the notation we use, this is D($e_{123}$). We believe the inferior performance is caused by the coarser resolutio...
(reviews 4-7 and rebuttals 4-7: null)
Title: A Non-isotropic Time Series Diffusion Model with Moving Average Transitions
Decision: Accept (poster)
Summary: This paper proposes a non-isotropic time series diffusion model with moving average transitions (MA-TSD). First, the authors empirically found that, when directly applying DDPM to time series data, the directions of model gradients at different diffusion steps conflict during training, which leads to u...
Rebuttal 1: Rebuttal: We are grateful for your valuable suggestions and insightful comments. We'd like to reply as follows: **1. Uniform sampling strategy** We've added the default DDIM sampling strategy to our experiments. Specifically, for a given sampling budget, we start with t=T and uniformly select the sampling...
Summary: This paper proposes MA-TSD (Moving Average Time Series Diffusion), a novel time series diffusion model that replaces the standard isotropic diffusion process with a moving average transition. The key motivation is that existing isotropic diffusion models degrade low and high-frequency components identically, w...
Rebuttal 1: Rebuttal: Thank you for the valuable feedback and the opportunity to strengthen our experiments. Below, we address each of the weaknesses: **1. Standard Time Series Generation** We agreed that the standard generation task is necessary for evaluating our model. Thus, we follow the setting of your mentioned...
Summary: When training a standard diffusion model on time series dataset, (Contribution 1) the authors identified that gradients conflict between small t and large t values, which hinders training. To address this issue, they propose (Contribution 2) a heuristic solution by adding "moving average" as an additional corr...
Rebuttal 1: Rebuttal: We greatly appreciate your acknowledgement of our work. We would like to address your questions as follows. **Q1: The difference between Blurring Diffusion Model (BDM) and ours** From a high-level perspective, BDM and ours shared a similar idea, i.e. building the degradation process with low-pas...
Summary: This paper presents a non-isotropic time series diffusion model (MA-TSD) for time series analysis. The key idea is to use a moving average in the forward process to better preserve low-frequency information, thereby avoiding gradient conflicts during training. The model also features an accelerable backward pr...
Rebuttal 1: Rebuttal: We sincerely appreciate your time and effort in reviewing, as well as your constructive feedback, which helps strengthen our work. We would like to address your questions as follows. **1. Lack of non-diffusion benchmarks for comparison** We agree that non-diffusion benchmarks are also necessary...
(reviews 5-7 and rebuttals 5-7: null)
Title: Learning with Expected Signatures: Theory and Applications
Decision: Accept (oral)
Summary: This paper establishes a rigorous framework for the expected signature of stochastic processes, proving consistency and asymptotic normality under a double asymptotic regime—where the discretization mesh ($\pi$) tends to zero (in-fill asymptotics) and the number of observations $N$ increases (long-span asympto...
Rebuttal 1: Rebuttal: We would like to thank the reviewer for the comments. We believe the reviewer well understood the main contributions of the paper, which they highlighted in "Claims and Evidence". **Methods And Evaluation Criteria** In practical applications the (expected) signature transform may face the curse...
Summary: The authors explore an interesting and young topic within ML, that is, signature-based methods for ML. Signature methods have been quite useful as a sort of preprocessing stage in ML pipelines to synthesize long time-series data among other applications. The authors theoretically explore the expected signatur...
Rebuttal 1: Rebuttal: We would like to thank the reviewer for the comments. We believe the reviewer well understood the main contributions of the paper, which they highlighted in "Summary". **Other Strengths And Weaknesses** - Conclusions were not added due to space constraints, in case of acceptance we shall include...
Summary: An empirical estimate of the expected signature of a stochastic process depends on the number of observed paths $N$ and the partition $\pi$ on which the paths are observed. This paper shows that under suitable conditions, the empirical estimate of the expected signature of a canonical geometric stochastic proc...
Rebuttal 1: Rebuttal: We would like to thank the reviewer for the comments. We believe the reviewer has well understood the main contributions of the paper, which they summarized in "Theoretical Claims". **Minor comments** - Line 37: typo, the informational content (i.e. the norm) of signature terms decays factoriall...
(reviews 4-7 and rebuttals 4-7: null)
Title: Reinforcement Learning Control of a Physical Robot Device for Assisted Human Walking without a Simulator
Decision: Accept (poster)
Summary: This paper develops an RL application for controlling a soft wearable exosuit for normative human walking. Grounded in the motivation that this type of system lacks robust simulators or dynamic models, the paper approaches it from a model-free RL perspective. Furthermore, given the natural lack of data from this pr...
Rebuttal 1: Rebuttal: # We thank the reviewer for their thoughtful feedback; please check our [new results](https://www.dropbox.com/scl/fo/rgonc4oohtzgf87jqlq3y/AKOxgA5jW9PHt3NKRFLAPcw?rlkey=04igqadzdmyojb9y48zlds5gf&st=g4pizvbd&dl=0) >Q1 Fig.2 concern 1) DIRECT & RIIV are different methods resulting in different p...
Summary: This paper presents an innovative approach to controlling soft exosuits for assisted human walking using reinforcement learning (RL) without relying on a simulator. The authors propose an online Adaptation from an offline Imitating Expert Policy (AIP) approach that addresses key challenges in RL-based control ...
Rebuttal 1: Rebuttal: # We thank the reviewer for their thoughtful feedback; please check our [new results](https://www.dropbox.com/scl/fo/rgonc4oohtzgf87jqlq3y/AKOxgA5jW9PHt3NKRFLAPcw?rlkey=04igqadzdmyojb9y48zlds5gf&st=g4pizvbd&dl=0) >Q1 Compare with sota method We thank the reviewer for the suggestion: 1. We have ...
Summary: This paper presents an RL-based control framework for a soft exosuit that assists human walking without a simulator. The proposed Adaptation from an offline Imitating Expert Policy (AIP) approach learns from human walking demonstrations and refines control using dHDP. AIP prioritizes data quality over large-sc...
Rebuttal 1: Rebuttal: # We thank the reviewer for their thoughtful feedback; please check our [new results](https://www.dropbox.com/scl/fo/rgonc4oohtzgf87jqlq3y/AKOxgA5jW9PHt3NKRFLAPcw?rlkey=04igqadzdmyojb9y48zlds5gf&st=g4pizvbd&dl=0) >Q1 Compare with other methods We thank the reviewer for the suggestion. In respons...
Summary: This paper presents AIP (Adaptation from an offline Imitating expert Policy) for controlling a soft inflatable exosuit to assist human walking without relying on a simulator. The approach first learns from human walking demonstrations (offline phase), then adapts this policy online to personalize assistance. T...
Rebuttal 1: Rebuttal: # We thank the reviewer for their thoughtful feedback; please check our [new results](https://www.dropbox.com/scl/fo/rgonc4oohtzgf87jqlq3y/AKOxgA5jW9PHt3NKRFLAPcw?rlkey=04igqadzdmyojb9y48zlds5gf&st=g4pizvbd&dl=0) >Q1 Isolating the benefit of co-adaptation To address the issue of isolating the be...
Contrastive Visual Data Augmentation
Accept (poster)
Summary: This paper proposes a novel data augmentation technique aimed at improving the recognition capabilities of Large Multimodal Models (LMMs) on rare classes/concepts that are underrepresented in the training set. In particular, authors leverage text-to-image diffusion models to synthesize images of the rare conce...
Rebuttal 1: Rebuttal: We are pleased that reviewer DV9x finds: * our **idea original and sensible** * our **technique novel** * our **references sufficient** * and the **modularity of our approach beneficial** We value your constructive comments and address them in detail: --- **Related Data Augmentation Methods** W...
Summary: In this paper, the authors propose Contrastive Visual Data Augmentation (CoDA), a novel approach to improve LMMs' ability to recognize novel and easily confused visual concepts. CoDA extracts contrastive features between target concepts and the confusable counterparts, generating synthetic training data with test...
Rebuttal 1: Rebuttal: We sincerely thank Reviewer bmru for their insightful and encouraging comments. We are pleased that you found: * our paper **well-written to follow**, * our proposed **data augmentation strategy effective**, * our **method's performances competitive**, * and our new **NovelSpecies dataset inter...
Summary: The paper proposes a data augmentation technique for the tuning of large multimodal models on unseen concepts that are in a way 'very close' to known concepts included in the training. The method thus aims at expanding the knowledge of an existing model, including new concepts when they are encountered. The a...
Rebuttal 1: Rebuttal: We sincerely thank Reviewer pV1R for their detailed review. We are delighted you recognize that: * our **method is novel and provides concrete contributions to the state of the art**, * our **claims are extensively demonstrated**, * our **datasets are solid choices to provide extensive validation...
Summary: The current submission addresses a known issue in LMMs (Large Multimodal Models), that of recognizing novel or even confusing visual concepts, due to their reliance on pre-trained knowledge and their limited ability to capture subtle visual details. To this end, the authors introduce CoDA (Contrastive visual da...
Rebuttal 1: Rebuttal: We sincerely appreciate Reviewer pV1R for the insightful review. We are encouraged that you find: * **our claims well-supported by evidence**, * **our evaluation criteria and benchmarks appropriate**, * **our experiments sound and well-executed**. We are also glad that you **enjoyed reading ou...
Modulated Diffusion: Accelerating Generative Modeling with Modulated Quantization
Accept (poster)
Summary: This work introduces MoDiff, a novel framework for accelerating diffusion models by combining modulated quantization and error compensation. It enhances existing techniques like caching and quantization, offering a more efficient approach without sacrificing generation quality. MoDiff reduces activation quanti...
Rebuttal 1: Rebuttal: Thanks for recognizing the novelty of our paper. We believe there are some misunderstandings about our implementation, and we will address your questions with the following experiments. **Methods And Evaluation Criteria: Q1** MoDiff focuses on quantization rather than the latest models or datase...
Summary: The paper investigates the shortcomings of current acceleration methods for diffusion models, such as caching and quantization, which suffer from error accumulation and high approximation errors, and introduces MoDiff—a novel framework that accelerates diffusion models through modulated quantization combined w...
Rebuttal 1: Rebuttal: Thank you for recognizing the novelty, effectiveness, and clarity of our paper. We are glad to address your questions. **Experimental Designs Or Analyses: Q1. All experiments were conducted on small-scale datasets such as CIFAR-10, LSUN-Churches, and LSUN-Bedroom, whereas it is standard practice ...
Summary: The authors introduce MoDiff, a framework designed to accelerate generative modeling by addressing challenges in caching and post-training quantization (PTQ). MoDiff incorporates modulated quantization and error compensation to reduce quantization errors and mitigate error accumulation. Theoretical analysis su...
Rebuttal 1: Rebuttal: Thank you for recognizing the novelty, effectiveness, and clarity of our paper. We believe there are some misunderstandings about our implementation, and we are glad to address your questions. **Essential References Not Discussed: Q1. Compare with PTQD.** Compared to PTQD, MoDiff is (1) more gen...
Summary: This paper introduces Modulated Diffusion (MoDiff), an approach that combines caching and quantization techniques while addressing their limitations. By leveraging the differences in activations across diffusion timesteps for quantization and incorporating an error compensation mechanism, MoDiff effectively mi...
Rebuttal 1: Rebuttal: **Experimental Designs Or Analyses: Q1** Although the goal of this paper is to validate our method, not to benchmark it across all diffusion architectures, we conduct experiments following [1] to address your concern, using DiT on ImageNet. The results demonstrate that our method consistently imp...
Summary: This paper proposes a method for accelerating diffusion model sampling via modulated quantization and a carefully designed error compensation mechanism; the method is able to significantly reduce the accumulated error of previous methods. Claims And Evidence: Yes. Methods And Evaluation Criteria: Yes. Theor...
Rebuttal 1: Rebuttal: Thank you for recognizing the novelty, effectiveness, and clarity of our paper. We are glad to address your questions. **1. In Table 1, FID column, some values are incorrectly bolded, e.g., 4.21** Thank you for pointing out the incorrect bold formatting. We will revise it in a new version. **2....
FedPHA: Federated Prompt Learning for Heterogeneous Client Adaptation
Accept (poster)
Summary: This paper proposes a method called FedPHA (Federated Prompt Learning for Heterogeneous Client Adaptation) to enhance federated prompt learning in diverse client environments. It addresses two key challenges: the limitation of uniform prompt lengths in existing methods and the conflict between global and local...
Rebuttal 1: Rebuttal: Dear Reviewer TNUk: We sincerely thank the reviewer for the constructive and encouraging feedback. We are especially grateful for your positive recognition of our contributions to federated prompt learning, the novelty of the proposed architecture, and the overall experimental design. Below, we a...
Summary: The authors introduce Federated Prompt Learning for Heterogeneous Client Adaptation (FedPHA), a novel approach to adapting pre-trained Vision-Language Models (VLMs) within federated learning. The primary motivation is to tackle the persistent heterogeneity challenge by integrating a uniform global prompt for e...
Rebuttal 1: Rebuttal: Dear Reviewer oKVN: Thank you for your thoughtful review and for raising key concerns regarding our work. We aim to address your concerns in our detailed responses below, hoping to provide clarity and demonstrate the effectiveness of our proposed approach. ### Weakness **W1: Theoretical Explan...
Summary: FedPHA is a novel FPL framework designed to address heterogeneous client adaptation in federated learning. Traditional FPL methods enforce uniform prompt lengths, which limits their adaptability to clients with diverse data distributions. To overcome this limitation, FedPHA proposes a dual-layer prompt archite...
Rebuttal 1: Rebuttal: Dear Reviewer tY2b: We sincerely thank the reviewer for the thoughtful and detailed evaluation of our work. We are especially grateful for the recognition of our contributions in addressing client heterogeneity through the dual-layer prompt architecture, the introduction of SVD-based projection a...
Summary: This paper introduces FedPHA, a novel Federated Prompt Learning (FPL) approach that enables heterogeneous client adaptation using Vision-Language Models (VLMs). The key contributions include: A dual-layer architecture combining a fixed-length global prompt for efficient aggregation and variable-length local pr...
Rebuttal 1: Rebuttal: Dear Reviewer A325: We appreciate your recognition of our technical contributions—specifically the dual-layer prompt design, SVD-based projection, and bidirectional alignment—as well as your acknowledgment of our method’s strong performance and adaptability to heterogeneous clients. Below, we add...
Breaking Barriers: Combinatorial Algorithms for Non-Monotone Submodular Maximization with Sublinear Adaptivity and $1/e$ Approximation
Accept (poster)
Summary: This paper studies the problem of maximizing a non-monotone submodular function subject to a size constraint. In the problem, we are given a set $\mathcal{U}$ of $n$ elements, (a value-oracle of) a non-monotone submodular function $f$, and a positive integer $k$, and the goal is to find a subset $S$ of $\mathc...
Rebuttal 1: Rebuttal: We thank the reviewer for the thoughtful feedback. Regarding the suggestions related to paper presentation and organization, we have provided detailed responses to each point in our reply to Reviewer KJuz. For other questions, we address each point below. ``` The authors state that they only prov...
Summary: This work introduces enhanced solutions for maximizing a non-monotone submodular function under a cardinality constraint. The authors concentrate on solution quality, query count, and adaptivity, which are the key performance indicators in this area of research. Claims And Evidence: Yes. Methods And Evaluati...
Rebuttal 1: Rebuttal: We thank the reviewer for the constructive comments. Regarding the suggestions related to paper presentation and organization, we have provided detailed responses to each point in our reply to Reviewer KJuz. ``` The results provide marginal improvement compared to the state of the art. ``` We sin...
Summary: The problem of submodular maximization under a cardinality constraint is a fundamental topic with numerous applications. The submodular function, defined on a ground set $U$, can be either monotone or non-monotone, with the non-monotone case being notably more challenging. There is a wealth of results on non-...
Rebuttal 1: Rebuttal: We thank the reviewers for their constructive suggestions. We will revise the manuscript to improve clarity and presentation. In Section 1.1, we clarify the technical contributions, focusing on how the simplified InterpolatedGreedy enables parallelization and enhancing the explanation of our paral...
Summary: The paper studies the non-monotone submodular maximization in the parallel model. They provide two algorithms with the best one having a $1/e$ approximation factor, $O(\log(n)\log(k))$ adaptivity and almost linear query complexity. Previously, the only algorithm with a $1/e$ approximation factor and logarithmic a...
Rebuttal 1: Rebuttal: ``` Their experiments seem reasonable, but the graph on solution value is missing the data for the ATG algorithm, which appears to have better adaptivity and query complexity. ``` Thank you for pointing this out. Since the objective value is normalized by ATG, the ATG data would appear as a horizo...
Gradient Flow Provably Learns Robust Classifiers for Orthonormal GMMs
Accept (poster)
Summary: This paper investigates the problem of adversarial robustness in deep learning classifiers and provides a theoretical framework demonstrating that standard training methods, specifically gradient flow, can lead to a provably robust classifier under certain conditions. Unlike existing approaches that require ad...
Rebuttal 1: Rebuttal: Thank you for the valuable comments and suggestions. Here are our responses to your questions: **Experiments on real datasets**: As we stated in Remark 2 (and we will expand it as per reviewer MTUy's suggestion), we focus on developing theoretical results in this paper, and the related experiment...
Summary: A common concern in the design of deep learning systems is their susceptibility to imperceptible noise. The authors approach this problem from the angle of finding the maximum adversarial perturbation tolerated by a neural network, without needing adversarial training. It is clarified that this often condition...
Rebuttal 1: Rebuttal: Thank you for the valuable comments and suggestions. Here are our responses to your questions and concerns: **Structure of this paper**: We have shared our view on the current structure of our paper in our rebuttal *"Organization/presentation of this paper"* to reviewer MTUy, and we will make rev...
Summary: This paper presents new theoretical findings regarding the feasibility of achieving robustness without adversarial training. Specifically, the paper focuses on a specific data model: a mixture of Gaussian distributions whose cluster centers (i.e., mean vectors of each Gaussian distribution) are orthonormal vec...
Rebuttal 1: Rebuttal: Thank you for the valuable comments and suggestions. Here are our responses to your questions and concerns: **Organization/presentation of this paper**: Although this is a concern raised by two reviewers (MTUy and hboM), we do not think there is any major issue with the organization of our manusc...
Summary: This paper analyzes the gradient flow dynamics of training a pReLU neural network. It is shown for data coming from Gaussian mixture models with orthonormal cluster centers that, under some technical initialization conditions, the dynamics converge to a particular pReLU model that acts similar to a nearest-clu...
Rebuttal 1: Rebuttal: Thank you for the valuable comments and suggestions, and for the encouraging words acknowledging the strengths of our manuscript. Here are our responses to your questions and concerns: **Concentration of Gaussians**: The claim that a Gaussian $\mathcal{N}(\mu,\frac{\alpha^2}{D}I)$ concentrates ar...
video-SALMONN-o1: Reasoning-enhanced Audio-visual Large Language Model
Accept (poster)
Summary: This paper introduces video-SALMONN-o1, the first open-source reasoning-enhanced audio-visual LLM designed to address the underexplored challenge of general video understanding, which requires complex multimodal (audio-visual-text) reasoning. Current reasoning-optimized LLMs focus narrowly on mathematical/text...
Rebuttal 1: Rebuttal: We deeply appreciate Reviewer dh9B for the positive comments and acknowledgement of our contribution. We would like to address the following questions: 1. __Inference mechanics__: - We use greedy decoding during inference. - We use contrastive step selection only to construct training preferen...
Summary: The paper introduces video-SALMONN-o1, an open-source audio-visual LLM enhanced for general video understanding. It proposes pDPO for reasoning optimization and RivaBench, a new benchmark. The model shows improved accuracy over baselines and zero-shot synthetic video detection capabilities. However, the benchm...
Rebuttal 1: Rebuttal: We thank the reviewer for the comments and would like to resolve concerns and misunderstandings as follows: 1. __Regarding the reliability and bias of the benchmark__: - As stated in section 5 paragraphs 2 and 3, we __always__ use human annotators to generate questions and answers, and __always_...
Summary: This paper introduces video-SALMONN-o1, an open-source reasoning-enhanced audio-visual LLM designed for general video understanding tasks. The authors claim that existing reasoning models focus merely on either math problems or visual graphical inputs, without sufficient attention to general audio-video...
Rebuttal 1: Rebuttal: We deeply thank you for acknowledging our effort and contribution! --- Rebuttal Comment 1.1: Comment: Thanks for the response from the authors. I'm keeping my original rating.
Summary: In this work, an RL-based optimization and reasoning-aware framework is proposed for training a large audio-video multi-modal model called Video-SALMONN-o1. This work emphasizes that significant effort has been invested in improving the mathematical and visual graphical inputs from the RL perspective, leading ...
Rebuttal 1: Rebuttal: We sincerely appreciate the detailed and constructive reviews provided by Reviewer BoZJ. We would like to address the concerns and suggestions as follows: 1. We follow the evaluation described in VideoHallucer [1] and report the overall accuracy (when the entire pair is correct) for each category ...
Addressing Misspecification in Simulation-based Inference through Data-driven Calibration
Accept (oral)
Summary: The paper deals with the important issue of making neural posterior estimation methods more robust against model misspecification. The authors suggest to use a small set of labeled real-world data to calibrate the posterior inference in the face of a misspecified simulator. I am very short on time for ICML r...
Rebuttal 1: Rebuttal: We sincerely appreciate your review and your recognition of our contribution to the literature on simulation-based inference (SBI) under model misspecification. Regarding your concern about the availability of labeled real datasets, we acknowledge that such datasets are not always accessible. How...
Summary: This paper introduces Robust Posterior Estimation (RoPE), a method for addressing model misspecification in simulation-based inference (SBI). Standard SBI algorithms often assume a well-specified simulator, leading to biased posterior approximations when this assumption is violated. RoPE mitigates this problem...
Rebuttal 1: Rebuttal: Thank you for the thoughtful and constructive feedback. We appreciate your insights and will use them to improve the final version of our manuscript. Below, we detail the modifications we plan to make in response to your comments. ## Figure 1 is overly dense We acknowledge that Figure 1 contains t...
Summary: Because it is challenging to use simulation-based inference (SBI) under model mis-specification, this paper proposes optimal transport to model the distribution between SBI-simulated data and a set of observed data, and then constructs posterior distributions with neural posterior estimation (NPE). It relies ...
Rebuttal 1: Rebuttal: We sincerely thank the reviewer for carefully reading our paper and appreciate the very constructive feedback regarding the plausibility of finding a calibration set sampled from $p^\star$ in practical settings. We now discuss this question and other comments in detail. ## Source distribution of ...
Summary: This paper presents a method for improving simulation-based inference when the simulator is misspecified. It combines neural posterior estimation with optimal transport, using a potentially small labeled calibration set of real data paired with corresponding parameters to correct for the misspecified simulator. It then fin...
Rebuttal 1: Rebuttal: We sincerely thank the reviewer for their thoughtful and constructive feedback. We greatly appreciate the careful assessment of our work and the valuable insights provided. Below, we address the key points raised. ## Additional References We completely agree that RoPE is closely related to semi-su...
Emergent Symbolic Mechanisms Support Abstract Reasoning in Large Language Models
Accept (poster)
Summary: This paper studies an important topic of whether symbolic representations and behaviors emerge in LLMs when performing abstract reasoning tasks. The paper argues that symbolic abstraction and operations appear at different levels of attention (low-level abstraction, mid-level prediction, high-level retrieval)...
Rebuttal 1: Rebuttal: Thank you very much for the thoughtful and detailed feedback. We present detailed responses below to address each of the issues raised. Throughout these responses, we refer to new results that can be viewed here: https://anonymous.4open.science/r/RB-F30A/13386.pdf ## Additional models and tasks...
Summary: This paper investigates the mechanisms behind how Large Language Models (LLMs) perform two simple abstract reasoning tasks related to algebraic identity rules (left and right). They identify three types of attention heads: abstraction heads, symbolic induction heads, and retrieval heads, which are implicated i...
Rebuttal 1: Rebuttal: Thank you very much for the thoughtful and detailed feedback. We present detailed responses below to address each of the issues raised. Throughout these responses, we refer to new results that can be viewed here: https://anonymous.4open.science/r/RB-F30A/13386.pdf ## Additional models and tasks...
Summary: The paper studies the internal mechanisms of a Llama3-70B on an in-context learning task. Specifically, they study an abstract reasoning task in which the model is given multiple demonstrations of the form ABA or ABB, where A and B correspond to randomly selected tokens, and on the final example the model has ...
Rebuttal 1: Rebuttal: Thank you very much for the thoughtful and detailed feedback. We present detailed responses below to address each of the issues raised. Throughout these responses, we refer to new results that can be viewed here: https://anonymous.4open.science/r/RB-F30A/13386.pdf ## Additional related work Tha...
Summary: This paper investigates the internal mechanisms that support abstract reasoning in LLMs, focusing on the open-source model Llama3-70B. The paper makes a contribution to the ongoing debate about the reasoning capabilities of LLMs by proposing a novel three-stage symbolic architecture and providing empirical evi...
Rebuttal 1: Rebuttal: Thank you very much for the thoughtful and detailed feedback. We present detailed responses below to address each of the issues raised. Throughout these responses, we refer to new results that can be viewed here: https://anonymous.4open.science/r/RB-F30A/13386.pdf ## Additional models and tasks...
ADHMR: Aligning Diffusion-based Human Mesh Recovery via Direct Preference Optimization
Accept (poster)
Summary: The authors propose a framework that integrates a diffusion-based human mesh recovery model with direct preference optimization. The core idea is to train HMR-Scorer, a model that evaluates the quality of human mesh predictions without requiring 3D annotations, and use it to create a preference dataset. This d...
Rebuttal 1: Rebuttal: Thanks for recognizing our SOTA performance, and the potential for pseudo-label generation. We truly appreciate your constructive comments and address them below. ### **Q1. Clarification of claims** Thank you for pointing this out. We would like to clarify that: end-to-end diffusion models predict...
Summary: The paper adapts the works of Diffusion-based DPO (DDPO) to HMR by proposing ADHMR. Specifically, the paper introduces an HMR-scorer model that generates a reward for image-mesh alignment. This module is given local image features, sampled at the UV joint locations, together with global image features, and outputs a score. The ne...
Rebuttal 1: Rebuttal: Thank you for recognizing the effectiveness of our scoring strategy, significant improvements over the base model, and good experiment design. We deeply appreciate your valuable comments and address them below. ### **Q1. Generalizability to unseen data & Bias caused by DPO** To further support o...
Summary: This paper proposes the first method to use preference optimziation to improve the Human Mesh Recovery (HMR) models. The paper first introduces a HMR-scorer model to rank the human mesh result produced by (an arbitrary) HMR method. Experiments show the score is strongly correlated to the reconstruction metrics...
Rebuttal 1: Rebuttal: Thanks for highlighting the novelty of applying preference optimization to HMR, the effectiveness of our framework, and the comprehensive experiments. We truly appreciate your encouraging feedback and respond to your points below. ### **Q1. Equation 5 elucidation** Thank you for your helpful com...
Summary: This paper targets improving HMR methods with preference prediction. Therefore, the authors present a prediction assessment model named HMR-Scorer. Further, the authors create a preference dataset using HMR-scorer, which is used to finetune base model and existing HMR methods. The full method, called ADHMR, sh...
Rebuttal 1: Rebuttal: Thanks for recognizing the strong performance of our method and the clear presentation of our paper. We deeply appreciate your constructive comments and address them below. ### **Q1. Equation 5 elucidation** Thank you for your helpful comment. Our formulation follows the approach introduced in D...
Covered Forest: Fine-grained generalization analysis of graph neural networks
Accept (spotlight poster)
Summary: This paper presents a study on the generalization abilities of sum-aggregation message passing graph neural networks (GNNs) based on a covering number approach. Towards this goal, they employ the so-called forest distance pseudometric, which is an intuitive re-formulation of the tree mover’s distance by Chuang...
Rebuttal 1: Rebuttal: **We thank the reviewer for their fair and constructive review.** > the paper is very heavy and difficult to understand, and at times it lacks clarity in what it wants to achieve and how to prove it. It might be beneficial to add some more intuitive discussion between the various propositions. Fo...
Summary: This paper presents a new framework for analyzing generalization properties of Message-Passing Neural Networks (MPNNs) via fine-grained graph pseudo-metrics. These distances capture subtle structural similarities that the usual 1-WL equivalence classes overlook. The key theoretical results show that MPNNs of v...
Rebuttal 1: Rebuttal: **We thank the reviewer for their detailed and constructive review.** > One potential weakness is the computational complexity of computing pseudo-metrics at scale on large graphs. The authors do mention computational complexity but might elaborate more on possible approximations. Thank you for ...
Summary: This paper first defines three pseudo-distances on graphs compatible with 1-WL or its variants. First, the labeled tree distance, which is an extension of the tree distance to graphs with node features, is defined, and the equivalence with 1-WL indistinguishability is shown. Next, the forest distance is define...
Rebuttal 1: Rebuttal: **We thank the reviewer for their fair and constructive review.** > Q1. We can expect the experiment's results from the definition of the covering number and WL-indistinguishability. Therefore, I have a question whether the observations from this experiment are new. You are correct that, by defi...
Summary: This paper studies generalization bounds for message-passing networks. The authors extend the pseudo-metric-based generalization framework to graphs, focusing on which graph pseudo-metric is suitable for obtaining a tight bound on the generalization error of MPNNs. Specifically, the authors studied ...
Rebuttal 1: Rebuttal: **We thank the reviewer for their detailed and constructive review.** > Some part is not clear enough, and the main content is not self contained without looking at the appendix. For example, the equation in line 187 does not explain what is the definition of unr. Also, the Figure 4 referenced i...
FlexiClip: Locality-Preserving Free-Form Character Animation
Accept (poster)
Summary: In this paper, the authors propose a new method, named FlexiClip, to achieve better temporal coherence and geometric consistency in animated clipart. To better preserve motion smoothness without introducing geometric distortions, FlexiClip utilizes the probability flow ODE (pfODE) to model the evolution of temp...
Rebuttal 1: Rebuttal: Thank you for your thoughtful review. Below, we provide detailed responses to each of the weaknesses you raised: 1. **Figure 3 Analysis**: Upon closer inspection, you can observe that AniClipart distorts objects (e.g., hand distortion in the boy/girl jumping and woman dancing examples), lacks pro...
Summary: The paper proposes FlexiClip, a novel approach designed to overcome these limitations by addressing the intertwined challenges of temporal consistency and geometric integrity, which extends traditional Bezier curve-based trajectory modeling with (1) temporal Jacobians to correct motion dynamics incrementally...
Rebuttal 1: Rebuttal: **Sensitivity of λ in Balancing SDS Loss and Flow Matching Loss** Thank you for your thoughtful review and for raising this important question regarding the sensitivity of our method to the hyperparameter $\lambda$, which balances the SDS loss and flow matching loss. Through our experiments, we ...
Summary: The paper proposes FlexiClip, a novel method for animating clipart images while preserving temporal coherence and geometric integrity. It extends existing approaches by incorporating temporal Jacobians for incremental motion correction, probability flow ODEs (pfODEs) for continuous-time modeling, and flow matc...
Rebuttal 1: Rebuttal: Thanks for your review. We will include the following impact statement. We originally omitted it because we thought it would count toward the page limit during the review process and could easily be added upon acceptance, since one extra page is given for submitting the accepted paper ...
Summary: This paper addresses several key challenges in the the problem of clipart animation. To address the noise accumulation along the animation, the paper proses the novel concept of temporal Jacobians to correct the temporal noise. To ensure the smooth temporal transitions between frames, the paper proposes pfODE ...
Rebuttal 1: Rebuttal: Many thanks for reviewing our paper and for your detailed and thoughtful evaluation. I truly appreciate the time and effort you put into this, as well as your unbiased rating of our work. Regarding your question about the 3D extension, FlexiClip can easily be adapted for 3D animation. This requir...
Novelty Detection in Reinforcement Learning with World Models
Accept (spotlight poster)
Summary: This work proposes a novelty detection technique for model-based reinforcement learning with world models. Sudden changes in the world model's visual properties or dynamics are treated as novelties. They use the KL divergence between latent predictions with and without observable ground truth to design the...
Rebuttal 1: Rebuttal: We thank the reviewer for their suggestions on improving our work. In response, we outline provisional revisions below: First, we note that we have fixed each of the typos and syntax errors that were identified. Specifically, we have: * Moved the period to after the citation. * Rephrased Ln 92 t...
Summary: The paper seeks to determine when there is novelty in an environment by using world models, and particularly when there is high prediction error with such world models. Such an approach aligns strongly with existing neuroscience work. The paper demonstrates with strong results their approach, exceeding the per...
Rebuttal 1: Rebuttal: Thank you for your review, hopefully we can interpret your questions and clarify as well as possible: * Why is the ground truth for Table 1 so noisy? * The ground truth component of the table shows the true observation given to the agent, specifically the noisy observation is a sample from the ...
Summary: The work proposes a principled method for detecting novelty in RL agents that use latent dynamics models, such as DreamerV2. The central idea is that when an agent encounters novel observations or dynamics, the latent state inferred from the current observation (posterior) will significantly differ from the on...
Rebuttal 1: Rebuttal: We thank the reviewer for their time in working to improve our presentation and their encouraging feedback. We are delighted that you took the time to assist the quality of our work. In response, we outline provisional revisions below: * We agree that the discussion of C.2 may be more informative ...
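The posterior-vs-prior KL signal described in these reviews can be sketched in a few lines. This is a minimal sketch, assuming the world model exposes diagonal-Gaussian prior (imagined) and posterior (observation-conditioned) latents, as in DreamerV2-style models; all names here are illustrative, not the authors' implementation.

```python
import numpy as np

def gaussian_kl(mu_q, std_q, mu_p, std_p):
    """KL(q || p) between diagonal Gaussians, summed over latent dimensions."""
    var_q, var_p = std_q ** 2, std_p ** 2
    return np.sum(np.log(std_p / std_q) + (var_q + (mu_q - mu_p) ** 2) / (2 * var_p) - 0.5)

def novelty_score(posterior, prior):
    """Large posterior-vs-prior divergence flags a novel observation or dynamics shift."""
    return gaussian_kl(posterior["mu"], posterior["std"], prior["mu"], prior["std"])

# In-distribution: the observation agrees with the model's prediction -> low score.
prior = {"mu": np.zeros(8), "std": np.ones(8)}
normal = {"mu": np.zeros(8) + 0.05, "std": np.ones(8)}
# Novel: the observation pulls the latent far from where the model predicted.
novel = {"mu": np.zeros(8) + 3.0, "std": np.ones(8)}

print(novelty_score(normal, prior), novelty_score(novel, prior))
```

Thresholding this score (e.g., against a quantile of in-distribution scores) then yields a binary novelty detector.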
Fluctuations of the largest eigenvalues of transformed spiked Wigner matrices
Accept (poster)
Summary: This work investigates the asymptotic properties of the largest eigenvalue of entry-wise, non-linear transformations of spiked Wigner matrices. Its main contribution is a BBP-like result: below a certain critical SNR (which is explicitly given in terms of the original SNR), the top eigenvalue has asymptotic...
Rebuttal 1: Rebuttal: We thank the reviewer for the constructive review and helpful feedback with which we can greatly improve our paper. Below, we address the concerns and questions, and also outline some important revisions we will make. **References**: Thank you for the references that we missed. We will include th...
Summary: The authors study the largest eigenvalue of a spiked Wigner matrix model under elementwise function transformation. They show that the largest eigenvalue of the transformed matrix undergoes a phase transition when an effective SNR variable is tuned. At high SNR, they find that the largest eigenvalue is distribu...
Rebuttal 1: Rebuttal: We thank the reviewer for the constructive review and helpful feedback with which we can greatly improve our paper. Below, we address the concerns and questions, and also outline some important revisions we will make. **Experimental design**: Thank you for your comment. We will include the graphs...
Summary: This paper studies the fluctuations of the largest eigenvalues in transformed spiked Wigner matrices and discusses the Baik–Ben Arous–Péché (BBP)-type phase transition arising in this problem. While a great deal is known about these matrices, the contribution is the analysis of the asymptotic fluctuations of th...
Rebuttal 1: Rebuttal: We thank the reviewer for the constructive review and helpful feedback with which we can greatly improve our paper. Below, we address the concerns and questions, and also outline some important revisions we will make. **Relevance to the ICML audience**: Thank you for the comment. In the revision,...
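The BBP-type transition these reviews discuss is easy to observe numerically. Below is a minimal sketch for the plain (untransformed) spiked GOE case, purely for illustration — the paper's object is the entrywise-transformed matrix. Above the critical SNR θ = 1, the top eigenvalue detaches from the bulk to θ + 1/θ; below it, it sticks to the semicircle edge at 2.

```python
import numpy as np

def top_eigenvalue(n, theta, rng):
    """Largest eigenvalue of a rank-one spiked GOE matrix H = W/sqrt(n) + theta*v v^T."""
    A = rng.standard_normal((n, n))
    W = (A + A.T) / np.sqrt(2)        # GOE: off-diagonal entries have variance 1
    v = np.ones(n) / np.sqrt(n)       # unit-norm spike direction
    H = W / np.sqrt(n) + theta * np.outer(v, v)
    return np.linalg.eigvalsh(H)[-1]  # eigvalsh returns eigenvalues in ascending order

rng = np.random.default_rng(0)
n = 1000
above = top_eigenvalue(n, 3.0, rng)   # supercritical: outlier near theta + 1/theta = 3.333...
below = top_eigenvalue(n, 0.5, rng)   # subcritical: stuck at the bulk edge, near 2
print(above, below)
```

The supercritical case fluctuates at scale n^{-1/2} (Gaussian-type), while the subcritical edge fluctuates at scale n^{-2/3} (Tracy–Widom-type), matching the dichotomy analyzed in the paper.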
OWLS: Scaling Laws for Multilingual Speech Recognition and Translation Models
Accept (poster)
Summary: This paper introduces **OWLS, open-source Whisper-style models for multilingual speech recognition (ASR) and translation tasks**, and releases the trained models. The authors empirically derive scaling laws for multilingual speech processing by training OWLS models at varying scales. Experimental results demon...
Rebuttal 1: Rebuttal: Thank you for your insights and comments. > I'm not quite convinced with the claim related to ICL ability. Whisper-style models like OWLS are trained primarily for ASR and translation, not instruction-following... While Whisper-style models are not trained for instruction-following, instructi...
Summary: This paper investigates the effect of model size and dataset size on multilingual Automatic Speech Recognition (ASR) and Speech Translation (ST) tasks for 150 languages. The model sizes vary from 0.25B to 18B parameters. The WER vs. size curves are fitted to power law functions and the correlations are repor...
Rebuttal 1: Rebuttal: Thank you for your insights and comments. We organize our response by section: ## Supplementary Material We believe that we have all the possible information for the embedding extraction process in the appendix, although we acknowledge that the description may be unclear and hard to follow thro...
Summary: The paper introduces OWLS, a suite of multilingual speech recognition and translation models ranging from 0.25B to 18B parameters. It systematically studies scaling laws for speech tasks, demonstrating how model, data, and compute scaling impact performance. The paper claims that larger models improve low-reso...
Rebuttal 1: Rebuttal: Thank you for the review and insights. > some details about data preprocessing are missing Reviewer uTgG raised similar concerns. Please refer to our response to their question 3. > code-switching improvements appear inconsistent across languages. Since we are performing multilingual multi-dom...
Summary: This paper investigates scaling laws for multilingual, multi-task speech-to-text models. To achieve this, the authors introduce OWLS, a collection of ASR/ST models ranging from 0.25B to 18B parameters, with the 18B model being the largest publicly known ASR/ST model to date. The study examines three dimensi...
Rebuttal 1: Rebuttal: Thank you for your insights and comments. We organize our response by section: ## Claims And Evidence: 2. We understand the reviewer’s concern. While our definition was indeed focusing on the fact that scaling leads to a better model for many languages, we do acknowledge that this definition ma...
Summary: This paper empirically evaluates the scaling law for speech recognition and translation in terms of training data, model size, and compute cost, using a total of 350K hours of multilingual training data. Claims And Evidence: Yes Methods And Evaluation Criteria: N/A Theoretical Claims: This paper is primaril...
Rebuttal 1: Rebuttal: Thank you for your valuable insights. > most of the conclusions are predictable, limiting the overall insight gained from the study We respectfully clarify that many of our insights are not predictable. While the notion that a larger model leads to better performance is indeed obvious, showing ...
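The scaling-law fits described in these reviews reduce to linear regression in log-log space. A generic sketch with synthetic numbers (not OWLS data): fit WER(N) ≈ a · N^(−b) across model sizes.

```python
import numpy as np

# Hypothetical (model size in B params, WER%) pairs; here an exact power law for illustration.
sizes = np.array([0.25, 0.5, 1.0, 4.0, 9.0, 18.0])
wer = 12.0 * sizes ** -0.28

# Fit log(wer) = log(a) - b*log(N) by least squares; polyfit returns [slope, intercept].
slope, log_a = np.polyfit(np.log(sizes), np.log(wer), 1)
a_fit, exponent = np.exp(log_a), -slope
print(f"WER(N) ~ {a_fit:.2f} * N^(-{exponent:.2f})")
```

On real measurements the points scatter around the line, and the reported correlation quantifies how well the power law holds.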
Demystifying Cost-Efficiency in LLM Serving over Heterogeneous GPUs
Accept (poster)
Summary: The paper investigated a highly practical problem of co-optimizing GPU composition, deployment configurations, and workload assignment in a heterogeneous GPU environment under a budget constraint to maximize Large Language Model (LLM) serving efficiency. The authors proposed a Mixed Integer Linear Programming ...
Rebuttal 1: Rebuttal: > 1. How the throughput h is profiled or defined? We agree with the reviewer that, because $h$ is recursively defined, it is impractical to profile every possible configuration exhaustively. However, in practice, this can be solved by employing a one-time profiling strategy that captures the foll...
Summary: This paper investigates cost-efficient LLM serving on heterogeneous GPUs. It benchmarks various GPU types and develops a MILP-based scheduling algorithm to optimize deployment configurations. The study shows that leveraging heterogeneous GPUs improves cost efficiency compared to homogeneous setups. Claims And...
Rebuttal 1: Rebuttal: > 1. Workload spikes and dynamic GPU availability fluctuations. Online rescheduling to adapt to workload changes and GPU drops is an interesting idea that can easily be integrated into our current solution. We introduce this approach and present some preliminary experimental results. **Solution:...
Summary: This paper focuses on the cost efficiency of LLM services on heterogeneous GPUs, proposing ways to improve efficiency by optimizing GPU composition, deployment configurations, and workload allocation. Claims And Evidence: When discussing related work, the paper mainly emphasizes that other methods do not cons...
Rebuttal 1: Rebuttal: > 1. Specific shortcomings of existing methods. Existing methods require a heavy redesign of the scheduling algorithms or demand significant additional system development to achieve similar cost optimization. **Compare with HexGen and Helix.** Both approaches (1) fail to consider workload hetero...
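The co-optimization problem discussed in these reviews — choosing a GPU composition under a budget to maximize serving throughput — is solved in the paper as a MILP. A toy brute-force version conveys the structure; all prices and throughput numbers below are made up.

```python
from itertools import product

# Hypothetical GPU types: name -> (hourly cost in $, tokens/s for the target model).
gpus = {"A": (1.0, 100.0), "B": (2.5, 300.0), "C": (4.0, 420.0)}
budget = 10.0        # $/hour
max_count = 10       # cap per GPU type to bound the search

best = (0.0, None)
for counts in product(range(max_count + 1), repeat=len(gpus)):
    cost = sum(c * gpus[g][0] for c, g in zip(counts, gpus))
    if cost > budget:
        continue  # infeasible composition
    throughput = sum(c * gpus[g][1] for c, g in zip(counts, gpus))
    if throughput > best[0]:
        best = (throughput, dict(zip(gpus, counts)))

print(best)
```

The real formulation additionally chooses parallelism configurations per deployment and assigns heterogeneous workloads, which is why a MILP solver is needed instead of enumeration.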
AffectGPT: A New Dataset, Model, and Benchmark for Emotion Understanding with Multimodal Large Language Models
Accept (oral)
Summary: This paper focuses on descriptive emotion understanding. Compared with discriminative emotion recognition, descriptive emotion understanding provides the possibility of modeling complex emotions. To promote the development of this field, they proposed new datasets (MER-Caption and MER-Caption+) and new models ...
Rebuttal 1: Rebuttal: **Q1:** It would be beneficial to also explore and discuss the impact of different audio and video encoders. **A1:** **(1) Impact of Audio encoders.** The choice of audio encoder does not significantly impact performance. This confirms that AffectGPT's remarkable performance is primarily attribut...
Summary: This paper introduces a new dataset for the multimodal emotion recognition (MER) task. The dataset is constructed using a model-driven, human-assisted approach. Initially, a coarse-grained dataset is generated through data description, followed by fine-grained data refinement through both low-level and high-le...
Rebuttal 1: Rebuttal: **Q1:** Limited discussion on computational costs of pre-fusion operations. The pre-fusion mechanism lacks theoretical analysis of modality interaction dynamics. The structural innovation of AffectGPT is relatively insufficient, and it does not fully explain why the "pre-fusion" operation can serv...
Summary: This paper presents a new video content description dataset with emotional words and highlights a novel annotation method for the dataset. Additionally, it proposes a model that enhances multimodal emotion recognition. The primary innovation of this model lies in its pre-fusion strategy for multimodal inputs. ...
Rebuttal 1: Rebuttal: We sincerely appreciate your recognition of our fair comparative experiments, innovative dataset, effective key components, and comprehensive evaluation benchmark. **Q1:** In Appendix F, experiments have shown that using a combination of SALMONN and mPLUG-Owl results in better performance. Howeve...
Summary: This paper introduces a new dataset, pre-fusion model, and evaluation benchmark to advance multimodal, natural language-based emotion understanding. It proposes a model-led, human-assisted strategy to minimize human effort while constructing the largest multimodal emotion dataset to date. The model features a ...
Rebuttal 1: Rebuttal: We sincerely appreciate your recognition of our work's significance and your acknowledgment of our contributions—including the novel dataset, model architecture, and comprehensive benchmark for descriptive emotion understanding. These innovations enable richer, more flexible emotion representation...
TypyBench: Evaluating LLM Type Inference for Untyped Python Repositories
Accept (poster)
Summary: The paper introduces TYPYBENCH, a benchmark for evaluating the capability of Large Language Models (LLMs) in type inference for Python repositories without explicit type annotations. It defines two novel metrics: TYPESIM, which captures semantic relationships between predicted and ground truth types using synt...
Rebuttal 1: Rebuttal: Thanks for your valuable feedback and suggestions. We would like to address your remaining concerns with the following responses. We will improve the manuscript accordingly to address them. > LLMs are evaluated file by file. Can the authors discuss the potential limitations of using a single file...
Summary: In this work, the authors evaluate the ability of LLMs to perform type inference in Python codebases. They introduce two type inference evaluation metrics: (1) TypeSim, which extends prior work focused on exact matching to consider semantic similarity between LLM-inferred vs. human-annotated types, and (2) Typ...
Rebuttal 1: Rebuttal: Thanks for your valuable feedback and suggestions. We would like to address your remaining concerns with the following responses. We will improve the manuscript accordingly to address them. > Have you considered alternative methods for assessing type/argument similarity? (eg, embedding-based appr...
Summary: The paper introduces TypyBench, a benchmark aimed at evaluating large language models (LLMs) on their ability to perform type inference for Python code. Recognizing limitations in existing benchmarks and exact-matching metrics, the authors propose two novel evaluation measures: TypeSim, which assesses semantic...
Rebuttal 1: Rebuttal: Thanks for your valuable feedback and suggestions. We would like to address your remaining concerns with the following responses. We will improve the manuscript accordingly to address them. ### Need to Justify Why We Need LLMs to Do Type Inference As shown in [previous work](https://github.com/s...
Summary: The paper introduces TYPYBENCH, a benchmark designed to evaluate the type inference capabilities of large language models (LLMs) across entire Python repositories. The benchmark features two novel metrics: TYPESIM, which measures the semantic similarity between predicted and ground truth types, and TY...
Rebuttal 1: Rebuttal: Thanks for your valuable feedback and suggestions. We would like to clarify the metrics design and address your remaining concerns with the following responses. We will improve the manuscript accordingly to address them. > The motivation behind this work is worth discussing. … Why do we need eval...
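A metric like TYPESIM rewards partial matches between nested types instead of all-or-nothing exact matching. A toy recursive version (illustrative only, not the paper's exact definition): compare outer constructors, then recurse into type arguments with a depth discount.

```python
def parse(t):
    """Parse 'list[dict[str, int]]' into ('list', [('dict', [('str', []), ('int', [])])])."""
    t = t.strip()
    if "[" not in t:
        return (t, [])
    head, body = t[: t.index("[")], t[t.index("[") + 1 : t.rindex("]")]
    args, depth, start = [], 0, 0
    for i, ch in enumerate(body):
        if ch == "[":
            depth += 1
        elif ch == "]":
            depth -= 1
        elif ch == "," and depth == 0:  # top-level argument separator
            args.append(parse(body[start:i]))
            start = i + 1
    args.append(parse(body[start:]))
    return (head, args)

def type_sim(a, b, decay=0.5):
    """1.0 for exact match; partial credit when constructors match but arguments differ."""
    (ha, aa), (hb, ab) = parse(a) if isinstance(a, str) else a, parse(b) if isinstance(b, str) else b
    if ha != hb:
        return 0.0
    if not aa and not ab:
        return 1.0
    if len(aa) != len(ab):
        return 1.0 - decay
    child = sum(type_sim(x, y, decay) for x, y in zip(aa, ab)) / len(aa)
    return (1.0 - decay) + decay * child
```

For example, `list[int]` vs `list[str]` scores 0.5 (matching constructor, mismatched argument), where exact matching would give 0.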
Strategic Planning: A Top-Down Approach to Option Generation
Accept (poster)
Summary: This paper proposes a top-down learning method for Reinforcement Learning (RL). The method leverages a Large Language Model (LLM) to decompose a complex task from high-level goals into fine-grained plans, considering specificity, value and feasibility. Following decomposition, these plans are transformed into ...
Rebuttal 1: Rebuttal: We thank the reviewer for their insightful review and for recognizing the novel perspective. We appreciate the reviewer's concern about lacking comparisons between our top-down approach and bottom-up approaches that also leverage LLMs. This is a valid point, and we have now expanded our evaluati...
Summary: The paper proposes Strategic Planning, a top-down approach for decomposing complicated reinforcement learning (RL) tasks into natural language-described sub-tasks. The paper also designs a reward shaping methodology that translates these strategies expressed in natural language into quantitative feedback for RL...
Rebuttal 1: Rebuttal: We sincerely thank you for your detailed and insightful feedback. ## Additional RL baselines Following your suggestion, we have added **two competitive RL baselines**: DreamerV3 [1] and Exploration via Distributional Ensemble (EDE) [2], both high ranking on the original Crafter leaderboard. Drea...
Summary: The paper considers a top-down approach to hierarchical planning/option generation. The proposed Strategist Agent builds a tree structure that specifies alternative plans or sequential plans (approach and plan nodes), which are broken further, if necessary. The tree structure is generated by a sufficiently `st...
Rebuttal 1: Rebuttal: We appreciate the reviewer's feedback regarding our evaluation methodology. To address these concerns, we have substantially expanded our experimental comparisons to include **two additional RL baselines specifically designed for sparse reward settings and long episodes (DreamerV3 [1], EDE [2]), a...
Summary: This paper defines a new hierarchical RL framework through introducing a Strategy Problem: finding distributions over policies that balance specificity and value. This involves using LLM to generate sub-goals, and a reward shaping method to translate these sub-goals to quantitative feedback for RL. The propose...
Rebuttal 1: Rebuttal: We thank you for recognizing the strengths of our paper, including the clarity of writing and insightfulness of the formalism. ## Clarifying the "human-inspired" framing We acknowledge your concern regarding our use of "human-like" to describe the approach. Upon reflection, we agree that the term...
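Reward shaping from natural-language sub-goals, as discussed in these reviews, typically bottoms out in adding a bonus when a sub-goal predicate fires. A generic potential-based sketch (not the paper's exact scheme) — this form is known to preserve the optimal policy (Ng et al., 1999):

```python
def shaped_reward(r, s, s_next, potential, gamma=0.99):
    """Potential-based shaping: r' = r + gamma*Phi(s') - Phi(s)."""
    return r + gamma * potential(s_next) - potential(s)

# Toy potential: number of satisfied sub-goal predicates, e.g. derived from an
# LLM-generated task decomposition. Field names here are illustrative.
subgoals = [lambda s: s.get("has_wood", False), lambda s: s.get("has_table", False)]
potential = lambda s: float(sum(g(s) for g in subgoals))

s0 = {"has_wood": False, "has_table": False}
s1 = {"has_wood": True, "has_table": False}
print(shaped_reward(0.0, s0, s1, potential))  # positive bonus for completing a sub-goal
```

The agent receives dense feedback for progressing through sub-goals even when the environment's own reward is sparse.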
Nonparametric Modern Hopfield Models
Accept (poster)
Summary: This work proposes a non-parametric procedure to construct retrieval dynamics maps for modern Hopfield networks, based on a supervised support vector regression-like problem where contaminated patterns serve as training data. The proposed procedure is shown to recover the standard dense retrieval dynamics fr...
Rebuttal 1: Rebuttal: ### We thank the reviewer for the valuable comments. We have revised our paper to address each point. The revised draft is in this [anonymous Dropbox folder](https://www.dropbox.com/scl/fo/81sygn18f4ma3ridm1xlf/AHFYlvzMMlYZnNRhBN9U8mw?rlkey=e1tvpqs6v83kx2rspfvmfgswh&st=z9iuk4iu&dl=0). All modifi...
Summary: This is a theoretical paper that introduces a non-parametric interpretation of Hopfield Nets. The proposed method uses SVR to learn a parameter matrix $W$ mapping from feature space to data space. Additionally, rather than focusing on "memorizing data" as the original Hopfield Nets do, the authors propose to f...
Rebuttal 1: Rebuttal: Thanks for the constructive feedback. We have revised our paper to address each concern in detail. The revised draft is in this [anonymous Dropbox folder](https://www.dropbox.com/scl/fo/81sygn18f4ma3ridm1xlf/AHFYlvzMMlYZnNRhBN9U8mw?rlkey=e1tvpqs6v83kx2rspfvmfgswh&st=z9iuk4iu&dl=0). All modificat...
Summary: This work leverages the concept of soft-margin Support Vector Regression (SVR) to reformulate modern Hopfield models as a non-parametric regression task, where a noisy target pattern is mapped to a reconstructed target pattern using Support Vector Machines (SVMs). By applying the Lagrange multiplier method, th...
Rebuttal 1: Rebuttal: ## Reviewer’s Comment (Claims and Evidence & Experimental Designs or Analyses) > **Concerns** > - Claim2: The theoretical results (noise robustness, tighter retrieval error bounds) do not appear numerically verified in detail. > - Claim3: The experimental section doesn’t explicitly connect to ...
Summary: In this work, the authors replace the energy minimization step with support vector regression, which trains on pairs of a pattern and its perturbed versions, regressing the true pattern in terms of the perturbed ones. They also provide a sparse version that uses a subset of patterns. In addition, they perform s...
Rebuttal 1: Rebuttal: The revised draft is in this [anonymous Dropbox folder](https://www.dropbox.com/scl/fo/81sygn18f4ma3ridm1xlf/AHFYlvzMMlYZnNRhBN9U8mw?rlkey=e1tvpqs6v83kx2rspfvmfgswh&st=z9iuk4iu&dl=0). All modifications are marked in BLUE color. Thanks! --- ### **Reviewer’s Comment (Claims and Evidence 1)** > *I...
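The regression view of retrieval described in these reviews — learn a map from contaminated patterns back to their clean versions — can be sketched with kernel ridge regression as a simpler stand-in for the paper's soft-margin SVR. All sizes, the kernel, and noise levels below are illustrative choices, not the authors' setup.

```python
import numpy as np

rng = np.random.default_rng(0)
d, n_copies = 16, 50
patterns = rng.choice([-1.0, 1.0], size=(3, d))   # stored memories

# Training pairs: (contaminated pattern -> clean pattern).
clean = np.repeat(patterns, n_copies, axis=0)
flips = rng.random(clean.shape) < 0.1             # flip ~10% of the bits
noisy = np.where(flips, -clean, clean)

def rbf(A, B, gamma=1.0 / 16):
    """Gaussian kernel between row vectors of A and B."""
    sq = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * sq)

# Kernel ridge regression (stand-in for SVR): alpha = (K + lam*I)^{-1} Y.
K = rbf(noisy, noisy)
alpha = np.linalg.solve(K + 1e-3 * np.eye(len(K)), clean)

def retrieve(query):
    """One retrieval step: map a contaminated query toward its stored pattern."""
    return (rbf(query[None, :], noisy) @ alpha)[0]

query = np.where(rng.random(d) < 0.1, -patterns[0], patterns[0])
out = retrieve(query)
```

A fresh corrupted copy of a stored pattern is mapped to a vector closest (in overlap) to that pattern, i.e., the learned regression acts as a retrieval dynamics map.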
Graph World Model
Accept (poster)
Summary: This paper introduces the first world model in the graph domain. The proposed method is capable of handling multimodal graph data within a unified framework, accomplishing tasks across multiple domains, including prediction, generation, and optimization. The graph world model demonstrates exceptional performan...
Rebuttal 1: Rebuttal: **Response to Issue 1 (Missing details):** Thanks for the comments. The settings for the baselines primarily follow LLAGA (Chen et al., 2024a) and OFA (Liu et al., 2023a). As stated in Appendix A.3, we convert all nodes and labels in the Cora, PubMed, and HIV datasets into text. For all methods, w...
Summary: This paper proposes Graph World Model (GWM), a novel framework that integrates graph-structured data and multi-modal information into a unified world model for diverse tasks including prediction, generation, and planning. The authors present two variants (GWM-T and GWM-E) with distinct message-passing strategi...
Rebuttal 1: Rebuttal: **Response to Weaknesses 1:** Thanks for your questions. We answer your questions one by one: ***[Rationale of cross-modal fusion of GWM.]*** There are many ways to perform multimodal fusion. We selected two representative methods, not to avoid advanced fusion techniques. One is a simple and dir...
Summary: In this paper, the authors propose Graph World Model (GWM), a framework designed to integrate both unstructured and graph-structured data with multi-modal information. They introduce two GWM variants: a token-based method that transforms multi-modal data into textual representations prior to message passing, a...
Rebuttal 1: Rebuttal: **Q1. Can the author supplement the baseline that tuned on the task-specific dataset (e.g., Table 6).** **Response:** Thanks for the comments. Indeed, we have detailed the specific settings of the baselines for Table 6 in Appendix A.4 (***[lines 779-796]***). We primarily selected three classic ...
Summary: This paper proposes a Graph World Model that supports both unstructured and graph-structured states with multi-modal data. The proposed model can tackle diverse sets of tasks and act as a graph-based foundation model. The results on numerous datasets and tasks show SOTA or comparable results on most tasks comp...
Rebuttal 1: Rebuttal: **Q1. The method, although different in nature, solves a similar task to LANISTR [a], which would be nice to cite and contrast against.** **Response:** Thanks for the valuable suggestions. Indeed, we have already compared two baselines that, like LANISTR, were pre-trained by modality alignment...
Ultra Lowrate Image Compression with Semantic Residual Coding and Compression-aware Diffusion
Accept (poster)
Summary: The paper introduces an ultra-low rate image compression method combining Semantic Residual Coding and a Compression-aware Diffusion Model. SRC efficiently encodes semantic differences between original and compressed latent images, minimizing redundancy, while CDM aligns diffusion steps with compression levels...
Rebuttal 1: Rebuttal: Thank you for your kind recognition and these insightful questions—they are indeed crucial for understanding our work. Q1: **Regarding the computational cost of PFO and practical use cases** A1: For PFO, our goal is to demonstrate the potential of prompt optimization in further improving the ove...
Summary: This paper introduces ResULIC, a novel framework for ultra-low-bitrate image compression that integrates semantic residual coding (SRC) and a compression-aware diffusion model (CDM). SRC is proposed to capture the semantic disparity between the original image and its compressed latent representation. CDM is us...
Rebuttal 1: Rebuttal: We sincerely appreciate the reviewer’s insightful comments and recognition of our work’s potential. The questions raised are highly valuable for improving the robustness and impact of our research. Q1: **Implementation details about compared methods** A1: Thank you for your reminder. To ensure a...
Summary: The present paper describes a perception-oriented image compression framework that utilizes the residual multi-modal semantics (w.r.t. what could be recovered by the latent decoder) as guidance conditioning for the diffusion denoising process towards improved fidelity at lower rate costs. The insight that fid...
Rebuttal 1: Rebuttal: Q1: **What is the difference between conditioning the diffusion model directly with visual cues instead of explicit texts** A1: In our method, explicit texts primarily compensate for semantic gaps in the visual condition, especially at ultra-low bitrates. For example: - **Low-bitrate regime**: In...
Summary: This paper proposes a diffusion model-based image compression method. First, a pretrained codec is used to obtain a latent representation, which guides the generation process of the diffusion model. Second, semantic information is introduced as additional guidance. To reduce the overhead of transmitting semant...
Rebuttal 1: Rebuttal: Thank you for your thoughtful feedback. Regarding your inquiry, we would like to provide the following clarification: Q1: **Comparing with GAN-based GLC** A1: We highlight our advantage over GLC as below: 1. **Ultra-low Rate Support**: GLC is a very competitive and representative GAN-based compr...
Interpolating Neural Network-Tensor Decomposition (INN-TD): a scalable and interpretable approach for large-scale physics-based problems
Accept (poster)
Summary: This paper presents Interpolating Neural Network-Tensor Decomposition (INN-TD), a framework combining neural network approximation of PDE solutions with tensor decomposition methods. By incorporating locally supported interpolation functions, the authors claim that INN-TD enhances accuracy, speeds up training ...
Rebuttal 1: Rebuttal: We greatly appreciate the reviewer's valuable comments. > Running statistics and comparison We have added the statistics for all training examples in *[Table 1](https://anonymous.4open.science/r/i7_/t0.png)*. The statistics for the solving examples are summarized in *[Table 2](https://anonymous....
Summary: This paper proposes the Interpolating Neural Network-Tensor Decomposition (INN-TD), which relies on learnable locally supported interpolation functions for finite element methods and functional tensor decomposition for approximating high-dimensional multivariate functions. The authors show that INN-TD outperfo...
Rebuttal 1: Rebuttal: We greatly appreciate your kind comments, and they are very helpful to improve the manuscript. > GPU memory footprint in the tables is missing. We have added the GPU memory and computational costs for all cases solved by INN-TD, as shown in *[Table 1](https://anonymous.4open.science/r/i7_/t1....
Summary: This paper introduces a new framework, Interpolation Neural Network-Tensor Decomposition (INN-TD), designed to efficiently and interpretably solve high-dimensional partial differential equations (PDEs) encountered in large-scale physics-based problems. The key innovation lies in integrating the local interpola...
Rebuttal 1: Rebuttal: We would like to thank the reviewer for the constructive suggestions, which can greatly improve the current manuscript. > Explanation of the interpretability As stated by *Doshi-Velez et al. (arXiv:1702.08608)*, model interpretability refers to the ability to explain a model’s behavior in a way ...
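The core INN-TD idea described in these reviews — representing a multivariate function as a sum of products of one-dimensional, locally supported interpolants — can be sketched for a rank-1 example. The grid sizes and target function below are illustrative, and piecewise-linear (hat-function) interpolation stands in for the learnable interpolation functions.

```python
import numpy as np

# Target: f(x, y) = exp(-x^2) * exp(-y^2), an exactly rank-1 (separable) function.
nodes = np.linspace(0.0, 1.0, 21)    # 1D interpolation nodes per dimension
gx = np.exp(-nodes ** 2)             # nodal values of the x-factor
gy = np.exp(-nodes ** 2)             # nodal values of the y-factor

def f_approx(x, y):
    """Piecewise-linear interpolation in each dimension, then the tensor product."""
    return np.interp(x, nodes, gx) * np.interp(y, nodes, gy)

xs = np.linspace(0.0, 1.0, 101)
X, Y = np.meshgrid(xs, xs)
err = np.abs(f_approx(X, Y) - np.exp(-(X ** 2 + Y ** 2))).max()
print(err)
```

Because only 1D interpolants are stored per dimension (and per rank), the cost grows linearly in the number of dimensions instead of exponentially, which is the scalability argument behind tensor-decomposed solvers.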
Continuous Semi-Implicit Models
Accept (poster)
Summary: The paper studies semi-implicit models, and proposes an extension from hierarchical semi-implicit models (hSIM) to continuous-time SIMs (cSIM). The continuous-time SIM has the advantage of being simulation-free, as opposed to hierarchical SIM, and enables multi-step sampling at the same time. This is achieved...
Rebuttal 1: Rebuttal: Thank you for your valuable feedback! Below are our responses to your concerns. `Q1`: I checked the experimental design ... Stein et al. “Exposing flaws of generative model evaluation metrics and their unfair treatment of diffusion models”, NeurIPS 2023. `A1`: Exactly! The paper “Exposing flaws ...
Summary: CoSIM is a continuous extension of hierarchical semi-implicit models, designed to enhance expressiveness and accelerate diffusion models with pretrained score networks. Unlike traditional hierarchical semi-implicit models, which suffer from slow convergence due to sequential training, CoSIM introduces a contin...
Rebuttal 1: Rebuttal: Thank you for your valuable feedback! We will address your concerns below. `Q1`: It is recommended to provide empirical evidence supporting this claim, such as FID/FD vs. training iterations or runtime comparisons. `A1`: We give the details of our training iterations in Table 4 in Appendix C. Fo...
Summary: The paper proposes Continuous Semi-Implicit Models (CoSIM), extending hierarchical semi-implicit models into a continuous-time framework for diffusion model acceleration. CoSIM introduces a continuous transition kernel that allows the simulation-free training. It uses semi-implicit variational inference (SIVI)...
Rebuttal 1: Rebuttal: Thank you for providing valuable feedback! Here are our responses. `Q1`: I personally like this paper. It extends HSIVI ... paper's contributions. `A1`: Thank you for your interest in our work! We acknowledge that CoSIM can be regarded as a continuous extension of HSIVI, but our contributions go...
Summary: This paper proposes CoSIM a continuous hierarchical variational inference framework for semi-implicit distribution aiming to accelerate the sampling speed of diffusion models. The main contribution of this paper includes: * The authors propose a score-based diffusion distillation approach demonstrating super...
Rebuttal 1: Rebuttal: Thank you for your thoughtful review and valuable feedback. We address your specific questions and comments below. `Q1`: It does not seem very clear to me what is the bias in the SiD loss ... on $\lambda$. `A1`: Thank you for your valuable suggestion! The significance of our regularization comes...
SWE-Lancer: Can Frontier LLMs Earn $1 Million from Real-World Freelance Software Engineering?
Accept (oral)
Summary: The paper introduces a benchmark of 1488 software engineering freelance tasks consisting of managerial and coding tasks to evaluate LLM performance against real world tasks. The benchmark helps get a clearer picture about the potential social and economic impacts of AI. For individual contributor/coding tasks,...
Rebuttal 1: Rebuttal: Thank you very much for your careful and thoughtful review! We’ll address your points below. > There are certain relevant and seminal works that are not cited in the paper. For example, Evaluating Large Language Models Trained on Code by Chen et al. and Program Synthesis with Large Language Model...
Summary: This paper introduces SWE-Lancer, a benchmark for evaluating language models' capabilities in real-world software engineering tasks. The benchmark comprises 1,488 freelance software engineering tasks from Upwork, collectively valued at $1 million USD in actual payouts. SWE-Lancer includes two distinct task cat...
Rebuttal 1: Rebuttal: Thank you very much for your careful and thoughtful review! We’ll address your points below. > Limited description of the agent framework: Although the details of the agent framework are mentioned in the appendix of the paper, it is still difficult to fully understand its implementation. The pape...
Summary: This paper introduces SWE-Lancer, a benchmark of 1488 real-world freelance software engineering tasks from Upwork valued at $1 million USD in actual payouts. The benchmark includes both Individual Contributor (IC) tasks where models generate code patches to fix issues, and Software Engineering Manager tasks whe...
Rebuttal 1: Rebuttal: Thank you for your generous review and strong endorsement! Below we address your points. > Unbiased data collection vs. platform bias Great point! SWE-Lancer focuses on a single repository and freelance tasks, so it isn’t free from platform bias. By ‘unbiased data collection,’ we mean we didn’t...
Summary: The paper introduces SWE-Lancer, a benchmark built from 1488 freelancing tasks on Upwork. The benchmark offers many advantages compared to existing SWE benchmarks: it connects solving an SWE task directly to economic benefits, poses more challenging problems, and provides a diverse data set including UI/UX tasks. There are two ty...
Rebuttal 1: Rebuttal: Thank you for this thoughtful review! We appreciate your feedback and address your points below. > Multiple runs and CIs Excellent point. In response to your comment, we performed 3 runs of GPT-4o and o1 on the IC SWE and SWE Manager Diamond subsets to provide confidence intervals in the camera-r...
Vector Grimoire: Codebook-based Shape Generation under Raster Image Supervision
Accept (poster)
Summary: This work focuses on SVG generation. Its model consists of two modules: a visual shape quantizer learns to map raster images onto a discrete codebook by reconstructing them as vector shapes, and an auto-regressive Transformer model jointly learns the distribution over shape tokens, positions and textual descri...
Rebuttal 1: Rebuttal: We sincerely appreciate the reviewer's time and thoughtful feedback on our manuscript. Given the time constraints of this rebuttal, we have focused on addressing the major concerns as follows. --- ## **Vector-based baselines** We have extended our analysis to two vector-supervised methods – **...
Summary: This paper introduces a text-guided SVG generation model, i.e., GRIMOIRE, using only raster image supervision. The SVG generation task is formulated as the prediction of a series of individual shapes and positions. The experiments demonstrate the effectiveness of the proposed method. ## update after rebuttal ...
Rebuttal 1: Rebuttal: We sincerely thank you for taking the time to review our manuscript and providing valuable feedback. --- ## **Discussion on SDS-based Methods** In the final version of the manuscript, we plan to more clearly highlight the differences between SDS approaches and Grimoire at the end of section 2...
Summary: The authors propose a SVG generative model GRIMOIRE which can be conditioned on a text prompt or a partially completed SVG. The primary innovation in the paper is training a VQ-VAE which tokenizes patches of rasterized SVGs into discrete tokens which crucially can be reconstructed into SVG primitives (primaril...
Rebuttal 1: Rebuttal: We sincerely appreciate your time and thoughtful feedback on our manuscript. Given the time constraints of this rebuttal, we have focused on addressing the major concerns as follows. --- ## **Code Length of the SVG** For Im2Vec, the number of paths and control points per path is fixed at e...
Summary: This paper presents GRIMOIRE, a novel text-guided generative model for scalable vector graphics (SVG). The model consists of two main components: a Visual Shape Quantizer (VSQ), which learns to reconstruct raster images as vector shapes through a discrete codebook; and an Auto-Regressive Transformer (ART), whi...
Rebuttal 1: Rebuttal: We sincerely appreciate the time and effort you have dedicated to reviewing our manuscript and providing insightful feedback. **We conducted ablation experiments to address your concerns.** We have explored **different patch and grid sizes** on the MNIST dataset, analyzed the **impact of stroke l...
WMAdapter: Adding WaterMark Control to Latent Diffusion Models
Accept (poster)
Summary: This paper introduces WMAdapter, a plug-and-play watermarking solution for latent diffusion models that embeds watermarks during the image generation process without modifying the original diffusion components. The authors propose two key innovations: (1) a contextual adapter that conditions on the content of ...
Rebuttal 1: Rebuttal: **Q: Discrepancy between qualitative and quantitative evaluation in Figure 1.** Different types of watermarks introduce different types of artifacts. Although FID is one of the most widely used quantitative metrics for image quality, it often struggles to accurately reflect the visual impact of d...
Summary: This paper introduces WMAdapter, a watermarking plugin for AI-generated images that seamlessly embeds user-specified watermark information during the diffusion generation process. Unlike previous methods that modify diffusion modules to embed watermarks, WMAdapter preserves the integrity of diffusion component...
Rebuttal 1: Rebuttal: **Q: Essential References Not Discussed: Gaussian Shading and FSW** Thank you for the suggestion. In fact, we have already referenced and discussed both works in the Introduction section. Specifically, they are cited as [Yang et al., 2024] and [Xiong et al., 2023], corresponding to Gaussian Shadi...
Summary: This paper proposes the **WMAdapter**, which generates content-aware watermark embeddings using the contextual adapter and embeds watermarks with a hybrid fine-tuning strategy. Specifically, the contextual adapter comprises a series of fuser modules, each of which is attached before a corresponding VAE decoder...
Rebuttal 1: Rebuttal: **Q: Is it possible to select a better $\lambda$ or find better checkpoints to enhance image quality?** Thank you for your insightful question. In practice, selecting a better $\lambda$ or checkpoint to enhance image quality proves very challenging. During our joint training experiments (Adapt...
Scalable Model Merging with Progressive Layer-wise Distillation
Accept (poster)
Summary: The paper introduces ProDistill, a progressive layer-wise distillation algorithm for merging multiple fine-tuned models into a single high-performing model. It theoretically demonstrates the necessity of task-specific data for effective merging and proposes a layer-by-layer distillation approach that mini...
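As a rough illustration of the layer-wise merging idea summarized above (a minimal sketch under our own simplifications, with hypothetical names such as `merge_layer`; ProDistill's actual merging coefficients are element-wise, the same size as the weights, rather than one scalar per task), per-task coefficients for a single linear layer can be fit by least squares so that the merged layer reproduces each fine-tuned layer's features on that task's inputs:

```python
import numpy as np

def merge_layer(W0, taus, Xs):
    """Fit one coefficient per task vector so that the merged layer
    (W0 + sum_j a_j * tau_j) reproduces each fine-tuned layer's features
    (W0 + tau_i) @ X_i as closely as possible, in the least-squares sense."""
    cols, rhs = [], []
    for tau_i, X_i in zip(taus, Xs):
        # target residual for task i: the fine-tuned delta applied to its inputs
        rhs.append((tau_i @ X_i).ravel())
        # one column per task vector: its effect on task i's inputs
        cols.append(np.stack([(tau_j @ X_i).ravel() for tau_j in taus], axis=1))
    M, b = np.vstack(cols), np.concatenate(rhs)
    a, *_ = np.linalg.lstsq(M, b, rcond=None)
    merged = W0 + sum(a_j * tau_j for a_j, tau_j in zip(a, taus))
    return merged, a
```

With a single task the least-squares fit recovers coefficient 1, i.e. the fine-tuned layer itself; with several tasks it trades them off separately at every layer, which is the per-layer granularity the paper argues matters.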
Rebuttal 1: Rebuttal: We would like to express our sincere gratitude to the reviewer for their valuable feedback. The provided suggestions are extremely helpful and constructive, and we will revise the paper accordingly. We address the reviewer's questions as follows. >Q1: The experiments focus on vision and NLP tasks...
Summary: The paper presents a new method for model merging based on progressive feature alignment. It proposes to learn merging coefficients by progressively aligning the representation of the merged model and the constituent models (the finetuned ones) layer by layer. This reduces the computational requirements of the...
Rebuttal 1: Rebuttal: We would like to express our gratitude for the reviewer's helpful and positive comments. The suggestions provided have been instrumental in refining our work, and we will incorporate the necessary revisions accordingly. Below, we address each of the reviewer’s questions in detail. >Q1: Honesty, c...
Summary: Model merging is an emerging paradigm that combines multiple models into **a single, versatile model, eliminating the need for extensive retraining and substantial weight storage**. However, it is commonly observed that the performance of merged models degrades as the number of models increases. To mitigate th...
Rebuttal 1: Rebuttal: We sincerely appreciate the reviewer’s valuable feedback. **We follow the reviewer's advice and conduct additional experiments, with the results provided at https://anonymous.4open.science/r/Experiments-for-Reviewer-CuV6-8476.** We address the reviewer's specific questions as follows. >Q1: What h...
Summary: The paper introduces ProDistill, a model merging algorithm leveraging progressive layer-wise distillation. A key contribution is the use of merging coefficients that are the same size as the model weights, enabling a fine-grained control of the merging process through element-wise operations. ProDistill effici...
Rebuttal 1: Rebuttal: We greatly appreciate the reviewer's comments and valuable suggestions. **We conduct additional experiments to clarify the reviewer's question, with the results given in https://anonymous.4open.science/r/Experiments-for-Reviewer-ZoNc-9701.** We address the reviewer's questions in more detail as fo...
Audio Flamingo 2: An Audio-Language Model with Long-Audio Understanding and Expert Reasoning Abilities
Accept (poster)
Summary: This paper introduces Audio Flamingo 2 (AF2), an advanced Audio-Language Model (ALM) designed for long-audio understanding and expert-level reasoning. AF2 leverages a custom CLAP model, synthetic Audio Question Answering (AQA) data, and a multi-stage curriculum learning strategy to achieve state-of-the-art per...
Rebuttal 1: Rebuttal: We thank you for your thorough review and constructive feedback. We have tried to address each of your concerns point by point. > Will the dataset be released? It seems to be a huge project if the followers of this paper want to reproduce the dataset. **Ans.**: Yes, absolutely! As stated on Page...
Summary: This paper proposes Audio Flamingo 2, an audio-language model with advanced audio understanding and reasoning abilities, demonstrated by state-of-the-art performance on several benchmarks. The authors develop a custom CLAP model, a dataset called LongAudio to extend ALMs to 30s-5 minute audios, and another call...
Rebuttal 1: Rebuttal: Thank you for the encouraging review. We are happy you liked our paper. We'd just like to clarify that, in addition to the dataset contributions, our paper also presents several modeling insights that we believe are novel and impactful: - **Dynamic batching for efficient training:** As described...
Summary: This paper introduces Audio Flamingo 2 (AF2), a small yet powerful Audio-Language Model (ALM) with advanced audio understanding and reasoning capabilities. AF2 leverages a custom CLAP model, synthetic AQA data, and a multi-stage curriculum learning strategy to achieve state-of-the-art performance across 20+ be...
Rebuttal 1: Rebuttal: We thank you for your thorough review and constructive feedback. We have tried to address each of your concerns point by point. > Data Accessibility: While the authors mention open-sourcing code and data, providing a clear timeline or repository link would enhance accessibility. **Ans.** As stat...
Summary: This paper introduces a state-of-the-art audio understanding LLM, with a focus on long and complex acoustic scenes. Audio understanding has so far been limited to superficial captioning of individual sound events, often generating artificially inflated captions that try to give an illusion of complexity while h...
Rebuttal 1: Rebuttal: We thank you for your thorough review and constructive feedback. We have tried to address each of your concerns point by point. > Thus, I would like to ask the authors to explicitly explain their plan in releasing the training data and the evaluation benchmark during the rebuttal period. **Ans.*...
Relational Invariant Learning for Robust Solvation Free Energy Prediction
Accept (spotlight poster)
Summary: The paper proposes Relational Invariant Learning framework for solvation free energy prediction. RILOOD consists of three key components: a mixed conditional modeling module to integrate data from different environments, a multi-granularity refinement strategy for context-aware representation learning, and an ...
Rebuttal 1: Rebuttal: Dear reviewer b6ge: Thank you for your thoughtful suggestions and questions. We have provided point-by-point answers to each weakness and question. ## Weakness **W1. Personally, I do not find CVAE to be particularly innovative.** While we understand your perspective on the novelty of the CVAE ...
Summary: This paper investigates the challenge of out-of-distribution generalization across different environments in molecular solvation free energy prediction and introduces the RILOOD framework. RILOOD integrates mixup-based conditional modeling, a multi-granularity refinement strategy, and an invariant relational l...
Rebuttal 1: Rebuttal: Dear reviewer 9m7P: Thank you for your thoughtful suggestions and questions.We have provided point-by-point answers to each weakness and question. **W1. RILOOD integrates several modules to capture complex molecular interactions, but this results in a relatively complex model structure:** We u...
Summary: In this paper, the authors present the Relational Invariant Learning framework (RILOOD) to improve OOD generalization in solvation free energy prediction. RILOOD learns invariant molecular representations in varied environments and applies mixed-enhanced molecular features for modeling environmental diversity...
Rebuttal 1: Rebuttal: Dear reviewer zysq: Thank you for your thoughtful suggestions and questions. We have provided point-by-point answers to each weakness and question. ## Weakness **W1. About theoretical innovation:** We acknowledge that our study may not introduce entirely new theoretical concepts, we believe th...
Summary: This paper presents a novel out-of-distribution learning method for addressing the challenge of predicting solvation free energy in molecular interactions. The key innovation lies in the authors' approach to modeling the distribution of molecular interactions. They validated the effectiveness of their proposed...
Rebuttal 1: Rebuttal: Dear reviewer Jwej: We sincerely appreciate the reviewer's thoughtful suggestions and questions. We have provided point-by-point answers to each weakness and question. ## Weakness **W1. Details about datasets:** We provided splitting details in appendix. The method of scaffold split is availa...
Variance-Reduced Forward-Reflected-Backward Splitting Methods for Nonmonotone Generalized Equations
Accept (poster)
Summary: This paper proposes two stochastic variance-reduction algorithms to solve a class of nonmonotone equations. The key technical tool used in this paper is the intermediate object $S_{\gamma}$. The authors apply classical variance reduction techniques on $S_{\gamma}$ instead of the operator $G$, and they show th...
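For readers unfamiliar with the variance-reduction machinery mentioned above, here is the standard SVRG-style estimator of a finite-sum operator $G(w)=\frac{1}{n}\sum_i G_i(w)$ (a generic sketch with hypothetical names; the paper instead applies the technique to the intermediate object $S_{\gamma}$, and a practical implementation would cache the snapshot term rather than recompute it):

```python
import numpy as np

def svrg_estimate(G_comps, w, w_snap, i):
    """SVRG-style estimate of the averaged operator G(w) = mean_i G_i(w):
    evaluate one component at the current point and at a snapshot point,
    then correct with the full operator evaluated at the snapshot."""
    full_snap = np.mean([G(w_snap) for G in G_comps], axis=0)
    return G_comps[i](w) - G_comps[i](w_snap) + full_snap
```

Averaged over the component index, the estimator equals the full operator exactly, which is the unbiasedness property such analyses rely on.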
Rebuttal 1: Rebuttal: First of all, we acknowledge the reviewers for his/her comments and feedback on our work. Below is our response to each point. P1. Weakness: + Q1.1: Paper organization is not good enough. The comparison with (Cai et al. 2023) can be presented in the main text instead. The comparison with existin...
Summary: 1. inspired by SVRG & SAGA, construct two variance-reduced estimators for the forward-reflected operator 2. show that VFR and VFRBS methods achieve SOTA oracle complexity for non-monotone operator splitting problems Claims And Evidence: 1. Does the convergence of your splitting algorithm have strong connection...
Rebuttal 1: Rebuttal: First of all, we highly acknowledge the comments and questions from the reviewer. Below is our detailed response. Q1. "Does the convergence ... estimator? If I replace SVRG/SAGA ... converge? >R1. The answer is "no". In lines 233-240 and 297-304, we have stated that any estimator $S^k$ satisfie...
Summary: This paper studies the forward-reflected operator in two types of variance-reduced estimators: SVRG and SAGA. Using these estimators, the authors propose the Variance-Reduced Forward-Reflected Method and the Variance-Reduced Forward-Reflected-Backward Splitting, which solve nonlinear and generalized equations,...
Rebuttal 1: Rebuttal: First of all, we highly acknowledge the reviewer for constructive comments and feedback. Below is our response to each point. Q1: The authors ... results. I request that authors verify this by providing a table or a comparison with prior works in terms of complexity and assumptions. This is nece...
Summary: In this paper, the author proposes two novel algorithms for solving a class of non-monotone equations, building upon the Forward-Reflected Backward Splitting framework and incorporating variance reduction techniques such as SVRG and SAGA. The proposed methods are accompanied by rigorous convergence guarantees ...
Rebuttal 1: Rebuttal: First of all, we highly acknowledge the reviewer for comments and feedback. Below is our response to each point. C1: In this paper, the author proposes two novel algorithms for solving a class of non-monotone equations, building upon the Forward-Reflected Backward Splitting framework and incorpor...
Whoever Started the Interference Should End It: Guiding Data-Free Model Merging via Task Vectors
Accept (poster)
Summary: The paper offers very satisfying results and method in a nice writeup that could be communicated much better. Claims And Evidence: You refer to Task Arithmetic as data-free which is mostly true, except that it reweights the models with data. Classic merging does not and Ties also has a scaling factor but it i...
Rebuttal 1: Rebuttal: We appreciate the reviewers’ valuable feedback and have addressed each point as follows: ### Concern 1: The Use of Validation Data 1. **Whether task arithmetic is data-free**: We summarize TA as a data-free method as it can select empirical parameters for merging without requiring data. Although i...
Summary: This paper introduces WUDI-merging, a new data-free model merging method. The authors provide a theory-backed idea that task vectors for a linear layer represent a linear subspace corresponding to its inputs. They use this knowledge to construct a merging method that aims to minimize the interference of the merged ...
Rebuttal 1: Rebuttal: We appreciate the reviewers’ valuable feedback and have addressed each point as follows: ### W1: Reconstruction Error For calculating the reconstruction error in Equation (13), we first obtain the input for each layer from a set of samples and then compute the reconstruction coefficients using the...
Summary: This paper proposes WUDI-Merging, a data-free model merging method where the merged model weights are optimized via SGD using the Adam optimizer. The optimization objective leverages the insight that task vectors form an approximate linear subspace of the corresponding input space. Additionally, the authors pr...
Rebuttal 1: Rebuttal: We appreciate the reviewers’ valuable feedback and have addressed each point as follows: ---- ### W1: Results on More Models and LoRA To further demonstrate the generalizability of our method on different models and LoRA, we supplemented the experiments on Flan-T5-base and Qwen-14B. For mergin...
Nonlinear transformers can perform inference-time feature learning
Accept (poster)
Summary: This paper studies the in-context learning capacities of transformers when the prompt sequences are given by a (possibly low-dimensional) Gaussian single-index model. When the length of the prompt sequences exceeds a certain (information-theoretic) limit, a transformer trained with a modified gradient descent method ...
Rebuttal 1: Rebuttal: We thank the reviewer for the helpful feedback. We address the technical comments and questions below. **On Algorithm 1** We make the following remarks on Algorithm 1. - The strategy of merging attention matrices and zeroing out some submatrices is extensively used in theoretical analysis of ...
Summary: This paper studies the tasking of learning single-index models $y = \sigma(x \cdot \beta)$ using a two-layer single-head softmax transformer. The authors prove that a pretrained transformer can solve this task in context (different $\beta$ in different prompts). When $\beta$ is sampled from the unit sphere i...
Rebuttal 1: Rebuttal: We thank the reviewer for the helpful feedback. We will correct the typos and improve the writing of the manuscript. We address the technical points below. **Properties of single-index target** We agree with the reviewer that our current learnability result relies on the Euclidean inner product...
Summary: This paper studies the optimization and statistical guarantees for the in-context learning of the Gaussian single-index function class using a Transformer with softmax attention. The derived inference-time sample complexity is tighter than the existing works, which indicates that pre-trained transformers can i...
Rebuttal 1: Rebuttal: We thank the reviewer for the helpful feedback. We address the technical comments below. **Additional experiments** Thank you for the suggestions on experiments. We have conducted an additional experiment to probe the test time sample complexity of GPT-2 models for learning single-index target...
Summary: This work studies in-context learning (ICL) of single-index models $y = \sigma(\langle x, \beta \rangle)$ using nonlinear transformers, focusing on inference-time sample complexity. The authors propose a two-stage training approach: (1) a single gradient descent step on the attention matrix to capture fe...
Rebuttal 1: Rebuttal: We thank the reviewer for the helpful feedback. We address the technical comments and questions below. **Distinctions from Oko et al. (2024b)** Based on your suggestion, we will include a dedicated comparison section in the paper; here we briefly summarize the key differences. - **Improved in...
Unconstrained Robust Online Convex Optimization
Accept (poster)
Summary: The paper presents an algorithm to solve unconstrained OCO when the observed gradient might be corrupted. They first present an algorithm when $G := \max_t ||g_t||$ is known, by truncating the observed gradient to ensure the norm is less than a specified $h_t$, and adding a regularizer to limit the growth of $...
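The truncation step in the summary above is easy to state concretely (a minimal sketch; `truncate_gradient` and the fixed threshold `h` are our hypothetical stand-ins for the time-varying $h_t$, and the paper's algorithm additionally adds a regularizer):

```python
import numpy as np

def truncate_gradient(g, h):
    """Rescale g so its Euclidean norm is at most h, preserving direction.
    A corrupted gradient can then bias a single round by at most h,
    rather than by an arbitrarily large amount."""
    norm = np.linalg.norm(g)
    if norm <= h:
        return g
    return (h / norm) * g
```
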
Rebuttal 1: Rebuttal: We appreciate the positive feedback. Regarding norms, our algorithm extends to other norm settings as long as $k$ is measured accordingly and dual norms. We will also revise the manuscript to fix typos.
Summary: The paper addresses the case of online learning on the unconstrained domain with corrupted gradient feedback and no assumptions on the nature of the corruptions. The paper provides an algorithm with regret guarantee $\|u\|G(\sqrt{T} + k)$ for the case when the Lipschitz constant is known, and provides the algorithm ...
Rebuttal 1: Rebuttal: We thank the reviewer for the positive feedback. On the non-differentiable case: we agree that OCO via linearized losses naturally extends to subgradients, and thus our results apply to non-differentiable convex functions as well. Regarding the term $E_{\bar P}$ (line 331, left), this should inde...
Summary: The authors investigate online convex optimization (OCO) in an unconstrained domain under corrupted gradient feedback. They introduce a new measure of corruption, denoted as $k$, which accounts for both the number of corrupted rounds and the magnitude of gradient deviations. Given $k$, their proposed algorithm...
Rebuttal 1: Rebuttal: We thank the reviewer for validating the theoretical contributions. Regarding the constant regret interpretation after Corollary 6.2, it relies on setting $\tau_D = O(1/k)$, which is an initialization parameter used in the doubling trick to track $\max_{i \le t} |w_t|$. Note that if $k$ is unknown...
Summary: The paper studies the challenging problem of online convex optimization (OCO) in an unconstrained domain under the presence of adversarially corrupted gradient feedback. Unlike classical OCO, where gradient estimates are assumed to be accurate or only subject to benign noise, this work makes no statistical ass...
Rebuttal 1: Rebuttal: We thank the reviewer for the constructive feedback. As this work focuses on the theoretical foundations of robust online convex optimization, we look forward to systematically investigate practical applications in future work. In terms of hyper-parameter, the only required user input is the corr...
Improving the Continuity of Goal-Achievement Ability via Policy Self-Regularization for Goal-Conditioned Reinforcement Learning
Accept (poster)
Summary: This paper addresses the issue of discontinuity in goal-achievement capabilities in Goal-Conditioned Reinforcement Learning (GCRL) algorithms. First, this paper theoretically proves that reusing successful trajectories can help achieve adjacent goals, but policy discrepancies must be controlled to avoid perform...
Rebuttal 1: Rebuttal: `... discuss the relationship between MSR and TRPO, PPO methods in designing the MSR loss.` We denote the policy at the t-th iteration as $\pi_{\theta_t}$, with $\theta$ denoting policy parameters. $\pi_{\theta_t}(\cdot|s,g)$ maps state $s$ and goal $g$ to an action distribution. TRPO and PPO's ...
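To make the KL-based regularization discussed in this rebuttal concrete, here is a toy goal-smoothness penalty for a discrete-action policy (our illustration with hypothetical names, not the paper's exact MSR loss): it penalizes the divergence between the action distributions the policy produces for a goal $g$ and for a slightly perturbed goal $g+\epsilon$.

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def goal_smoothness_loss(logits_fn, s, g, eps):
    """KL( pi(.|s,g) || pi(.|s,g+eps) ) for a discrete-action policy
    whose logits are given by logits_fn(state, goal)."""
    p = softmax(logits_fn(s, g))
    q = softmax(logits_fn(s, g + eps))
    return float(np.sum(p * np.log(p / q)))
```

The penalty vanishes when the perturbation is zero and grows with the policy's sensitivity to the goal, which is the smoothness property the constraint is meant to enforce.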
Summary: Reaching adjacent goals utilizing the same policy is non-trivial due to the limited robustness of policy improvement. The paper studies discontinuity in goal-achievement observed in Goal-Conditioned Reinforcement Learning (GCRL). Theoretically, the paper identifies constraints between goal reaching policies of...
Rebuttal 1: Rebuttal: `W1: Theory To Practice` In our work, we aim to address the issue of ensuring that if a policy can achieve a goal $g$, it should also be capable of achieving goals in the vicinity of $g$, which we denote as $g+\epsilon$. From the perspective of cumulative rewards, our objective is to minimize $E_...
Summary: In their present paper, the authors address an evident issue appearing in goal-conditioned RL: discontinuity between control policies even in cases of adjacent goals, i.e. where their respective goals are only marginally separated by some distance $\epsilon$. The insights generated by a comprehensive analysis of the...
Rebuttal 1: Rebuttal: We sincerely appreciate your valuable feedback and careful review of our paper. We address your concerns in the following: `My only remark is that a higher number of random seed varying trials would improve their statistical evaluation of the experimental results, as the current number (5 trials)...
Summary: This paper presents a regularization technique to improve the capabilities of Goal-Conditioned Reinforcement Learning (GCRL) algorithms. The authors start by motivating the need for their approach and presented prelimaries about GCRL. Next, the authors present a cohesive theoretical analysis displaying that mo...
Rebuttal 1: Rebuttal: Thank you for your thorough review and valuable feedback on our work. We address your concerns in the following: `W1: Many of the figure captions could be improved.` We greatly appreciate your suggestions. In accordance with the principles of accuracy and conciseness, we have re-formulated the ...
Dimensionality Reduction on Complex Vector Spaces for Euclidean Distance with Dynamic Weights
Accept (poster)
Summary: This paper presents an embedding of a d-dimensional vector x into k dimension such that, for any d-dimensional weight vector w $\sum_{i=1}^d w_i^2 x_i^2$ is preserved. Specifically, the authors give an additive error in terms of $\varepsilon \|x\|_2^2 \cdot \|w\|_4^2$. The classic Johnson-Lindenstrauss guarant...
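The reason a linear sketch of $x$ can support weights applied only at query time is the standard unbiasedness identity $\mathbb{E}\big[(\sum_i s_i w_i x_i)^2\big] = \sum_i w_i^2 x_i^2$ for independent Rademacher signs $s_i$ (our illustration of the underlying principle, not the paper's complex-valued construction). For small $d$ it can be verified exactly by averaging over all $2^d$ sign patterns:

```python
from itertools import product

def avg_sq_over_signs(x, w):
    """Average of (sum_i s_i * w_i * x_i)^2 over all Rademacher sign
    patterns s in {-1, +1}^d; cross terms cancel, leaving sum_i w_i^2 x_i^2."""
    d = len(x)
    total = 0.0
    for signs in product((-1, 1), repeat=d):
        total += sum(s * wi * xi for s, wi, xi in zip(signs, w, x)) ** 2
    return total / 2 ** d
```
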
Rebuttal 1: Rebuttal: We thank Reviewer 3 for the useful feedback. We provide below answers to reviewer’s comments. --- **There are no experiments. Comparing the performance with the result by Kaban would have been nice, if only to also see whether the theoretical bounds could be improved.** Thank you for the sugges...
Summary: This paper explores dimensionality reduction on complex vectors for Euclidean distances. The authors decompose the complex dimensionality reduction into several Rademacher chaos random variables, where novel concentration inequalities for sums of independent Rademacher chaoses are derived. Claims And Evidence...
Rebuttal 1: Rebuttal: We thank Reviewer 2 for the useful feedback. We provide below detailed answers to comments. ---- **Few experiments should be provided.** Thank you for the suggestion, we added some proof of concept experiments where we show the empirical distribution of the estimates $\rho$ of the weighted norm...
Summary: The main result of the paper is the following (Theorem 1.1): Let $\epsilon, \delta \in (0,1)$ and $\Delta \ge 0$ be given parameters. There is a function $g : \mathbb{R}^d \rightarrow \mathbb{R}^{O(\Delta^2\log(1/\delta)/\epsilon^2)}$ and an estimator $\rho(g(x), w)$ such that for any vectors $x,w \in \math...
Rebuttal 1: Rebuttal: We are very confused by this review, and we believe that it might be due to a misunderstanding. The summary that the reviewer provides is misleading and incomplete, as in their re-formulation of our main result, Theorem 1.1, they crucially omit that the function $g$ is linear. Because of this, the r...
Projection Optimization: A General Framework for Multi-Objective and Multi-Group RLHF
Accept (poster)
Summary: This paper primarily focuses on aligning large language models (LLMs) to multiple objectives using per-objective preference data. Prior works on this topic primarily aim to achieve Pareto optimal alignment across all objectives by linearly aggregating all objectives into a single unified form and optimizing th...
Rebuttal 1: Rebuttal: Thanks for your detailed review! We are happy to address your questions as follows. > 1. The discussion of the expectation over prompt Thanks for the good question! Our goal is to find the model $\pi$ that maximize the total expected reward $u(\pi)=\sum_{i=1}^m(\mathbb{E}\_{x\sim \rho, y\sim \p...
Summary: This paper introduces a novel Multi-Objective RLHF (MORLHF) framework that leverages per-objective preference feedback to achieve Pareto optimality by aggregating multiple objectives into a single unified optimization target. Unlike existing approaches that rely on linear aggregation, this work overcomes their...
Rebuttal 1: Rebuttal: Thanks for your positive response and time in reviewing our paper! We will address your questions as follows. >1. The paper assumes that the type/group of each human is known in advance. In this paper, we assume the group information is known. However, if the group information is unknown, we can...
Summary: This paper introduces a projection-based optimization framework for Multi-Objective Reinforcement Learning with Human Feedback (MORLHF). The approach reformulates non-linear reward aggregation as a series of linear sub-problems, enabling computationally efficient Pareto-optimal solutions. The framework is exte...
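To see in miniature why a non-linear aggregation can be reduced to linear sub-problems, consider the $p$-norm aggregation $u(r)=(\sum_i r_i^p)^{1/p}$ over positive per-objective rewards (one common choice; the paper's framework is more general, and this numeric check is our own sketch): its gradient supplies per-objective weights $\lambda_i=(r_i/\lVert r\rVert_p)^{p-1}$, so locally the non-linear objective behaves like the linear aggregation $\sum_i \lambda_i r_i$.

```python
import numpy as np

def pnorm_weights(r, p):
    """Gradient of u(r) = ||r||_p for positive rewards r: the weights
    of the local linear surrogate sum_i lambda_i * r_i."""
    norm = np.sum(r ** p) ** (1.0 / p)
    return (r / norm) ** (p - 1)

def numeric_grad(f, r, h=1e-6):
    """Central finite-difference gradient, used to sanity-check the formula."""
    g = np.zeros_like(r)
    for i in range(len(r)):
        e = np.zeros_like(r)
        e[i] = h
        g[i] = (f(r + e) - f(r - e)) / (2 * h)
    return g
```
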
Rebuttal 1: Rebuttal: Thanks for your positive response and time in reviewing our paper! We will address your questions as follows. 1. Analysis of hyperparameter impacts and selection. Since $\alpha$ and $p$ is assumed to be given in the experiment, the only hyperparameter specifically included in our algorithm is ...
Summary: The paper proposes a general framework for multi-objective multi-group RLHF. The authors creatively draw inspiration from RL with Blackwell approachability to handle the non-linear structure of the aggregated reward. Claims And Evidence: The authors provided theoretical and empirical guarantees for th...
Rebuttal 1: Rebuttal: Thanks for your positive response and meaningful review! We will address your question as follows. >1. If negative values occur, the paper should discuss how p-norm aggregation applies. Both in theory and practice, our algorithm can handle both the negative reward and the positive reward. In fa...
PaperBench: Evaluating AI’s Ability to Replicate AI Research
Accept (poster)
Summary: The authors have created a benchmark called PaperBench that tests model ability to reproduce modern machine learning research papers. The dataset consists of 18 papers (and a small dev set). To track granular progress on result reproduction, the authors create a hierarchical rubric for each paper. Judging outp...
Rebuttal 1: Rebuttal: Thank you for your insightful feedback! We address your comments below: 1. Concerns About Longevity/Usefulness of Benchmark Thank you for raising this important point! We are happy to announce that we now have author approval on 20 rubrics and so now the dataset and results are on a 20 paper dat...
Summary: This paper contributes PaperBench, a new benchmark for replicating top-tier AI conference research papers: reproducing their code implementations and the papers' results from running the generated code, including their analysis. It uses 18 Spotlight and Oral papers. It leverages LLMs to automate gra...
Rebuttal 1: Rebuttal: Thank you for taking the time to carefully review our paper! The rebuttal is subject to a character limit; we address what we saw as the highest-priority comments below: > Code is not open sourced, or available to reviewers for review. We will open source our codebase for the camera-ready releas...
Summary: The paper introduces PaperBench, a benchmark for evaluating AI agents’ ability to replicate SOTA ML research. The dataset comprises 18 papers from ICML 2024. Each paper is accompanied by a manually curated rubric, which hierarchically decomposes each replication task into smaller gradable subtasks. An LLM-b...
Rebuttal 1: Rebuttal: Thank you for your constructive feedback! We address your comments below: > Line 47 mentions the UK AI Safety Institute’s Basic Agent scaffolding without an explicit citation, which should be included to clarify the source and structure of the agent scaffolding. Thank you for catching this! We’v...
What is Adversarial Training for Diffusion Models?
Reject
Summary: This paper investigates AT tailored specifically for DMs, emphasizing that adversarial robustness for DMs should enforce equivariance rather than invariance. The authors introduce a new approach where perturbations, either random or adversarial, are added to enforce smoothness in the diffusion trajectories. Em...
Rebuttal 1: Rebuttal: We thank the reviewer for the feedback and constructive criticism. We are glad the reviewer mentioned that our method and analysis are sound, appropriate, and sufficiently clear, giving comments such as: the paper is clearly structured and easy to follow; that the conceptual logic is presented cle...
Summary: This work endeavors to construct a novel adversarial training approach for diffusion models. By comparing with the AT process of traditional classification models, the authors suggest that the key to AT for DMs resides in equivariance. Consequently, the perturbation process and adversarial training loss in...
Rebuttal 1: Rebuttal: Thank you for the feedback and constructive criticism. We are glad the reviewer acknowledged that we provided sufficient evidence to support our motivation, our method could be helpful for the future robustness of SD and our experimental results and insights are highly professional and elegant. Be...
Summary: This work studies adversarial training for diffusion models, highlighting its fundamental differences from adversarial training for classifiers. Unlike adversarial training for classifiers enforcing invariance, adversarial training for diffusion models requires equivariance to ensure the diffusion process rema...
Rebuttal 1: Rebuttal: Thank you for the feedback and constructive criticism. We are glad the reviewer appreciated the novelty and original idea that could pave the way for future studies and the significance of our work. Below we respond to the remaining remarks: **1. Clarification for rev. `riys` and rev. `a6SF`: Our...
Latent Thought Models with Variational Bayes Inference-Time Computation
Accept (poster)
Summary: This paper presents a novel method called the latent-thought language model (LTM) for autoregressive language modeling. LTM introduces an additional family of scaling dimensions, latent thought vectors, to implicitly learn a sequence representation and guide the generation. Training LTMs requires sampling from the ...
Rebuttal 1: Rebuttal: *Thank you for your thoughtful review acknowledging the novelty of our work. We appreciate your recognition that our claims are supported by clear and convincing evidence. We will address your concerns as follows.* **1. Clarification on the cross-attention mechanism.** We wish to humbly clarify ...
Summary: This paper proposes probabilistic language models called Latent-Thought Language Models (LTMs), which introduce explicit latent vectors to layers of transformers. The authors claim that this setup yields new “scaling dimensions” beyond traditional LLMs, allowing more efficient use of training compute per to...
Rebuttal 1: Rebuttal: *Thank you for your constructive feedbacks. Below are our responses.* **1. Comparison to amortized VI and clarification on multi-step inference** We wish to humbly clarify a possible misunderstanding in terms of the Variational Inference (VI) in our Latent Thought Models (LTMs). LTMs employ the ...
Summary: This work presents the novel Latent-Thought Language Model (LTM) class of models, where an explicit latent vector is introduced to guide the generation of tokens. The model is optimized within the variational Bayes framework, using a faster learning rate for the latent vector distribution parameters and a slower rate for lear...
Rebuttal 1: Rebuttal: *Thank you for your insightful comments and for recognizing the novelty and strong performance of our work. We address your concerns point-by-point as follows.* **1. Scalability.** Our Latent Thought Models (LTMs) scale along two primary dimensions: model size and the number of inference steps (...
Diffusion on Language Model Encodings for Protein Sequence Generation
Accept (poster)
Summary: The paper introduces DiMA, a latent diffusion framework that works on protein language model representations. While protein sequence design has advanced with discrete and autoregressive methods, the potential of continuous diffusion has been under-explored. DiMA is developed through a systematic exploration of...
Rebuttal 1: Rebuttal: Thank you for your review and positive assessment of our work. We appreciate your recognition of our systematic ablation studies and approach to developing protein-specific diffusion parameterizations rather than simply adopting techniques from the image domain. Below, we address each point raised...
Summary: This paper introduces DiMA, a continuous latent diffusion model that creates (novel) protein sequences using protein language model (PLM) hidden representations. Unlike other approaches that use discrete diffusion or step-by-step generation, DiMA explores continuous diffusion to make better sequences. It works...
Rebuttal 1: Rebuttal: Thank you for your thorough review and constructive comments. We appreciate your recognition of our work's comprehensive experimental design and clear methodology. > **W1.** Limited theory behind the proposed method, making it a straightforward application of well-established methods Whi...
Summary: The paper introduces DiMA, a latent diffusion approach for protein sequence generation leveraging pre-trained embeddings. The authors consider sequence-only, structural, and sequence-structure joint embeddings. DiMA produces novel and high pLDDT samples. Conditional generation tasks based on protein family, mo...
Rebuttal 1: Rebuttal: Thank you for your review and the valuable suggestions. We appreciate your recognition of DiMA's thorough evaluation and benchmarking approach. Below, we address each point raised. > Concurrent work that the authors will be interested in. Regarding the work by Lu et al. (2024), we became aware o...
Summary: The authors have proposed a continuous diffusion framework, named DiMA. DiMA consists of three modules, i.e. 1) frozen pLMs like ESM2 to extract latent embeddings for a given protein sequence, 2) a continuous diffusion module to generate latent embeddings from noise, and 3) a decoder that maps the latent embedding...
Rebuttal 1: Rebuttal: We sincerely thank you for your thoughtful and detailed feedback. We appreciate the time you have taken to review our work and your positive comments about our paper. We aim to address your concerns and questions below. > **W1.** On technical novelty. While diffusion models are not new, our work...
Ex-VAD: Explainable Fine-grained Video Anomaly Detection Based on Visual-Language Models
Accept (poster)
Summary: This paper proposed an explainable approach named Ex-VAD for fine-grained video anomaly detection, which consists of three modules, Anomaly Explanation Generation Module (AEGM), Multimodal Anomaly Detection Module (MADM), and Label Augment and Alignment Module (LAAM). AEGM tries to extract and refine frame-lev...
Rebuttal 1: Rebuttal: We sincerely thank the reviewers for their valuable comments. We will add these valuable comments to the revised manuscript. **R1: Evaluation of Inference Time.** Thanks for your suggestion. We would like to point out that Table 7 in our main paper provides a comparison of relevant computational ...
Summary: This paper introduces Ex-VAD, an explainable fine-grained video anomaly detection method based on visual-language models and large language models. By integrating modules for anomaly explanation generation, multi-modal feature fusion, and label augmentation and alignment, Ex-VAD achieves both fine-grained clas...
Rebuttal 1: Rebuttal: We sincerely thank the reviewers for their valuable comments. We will add these valuable comments to the revised manuscript. **R1: Robustness of the label.** We use the SOTA large language model GPT to generate M phrases for label expansion and select the top-k among them as the final labels. To ...
Summary: This paper proposes Ex-VAD, an explainable fine-grained video anomaly detection method that integrates visual-language models (VLMs) and large language models (LLMs). The approach consists of three main modules: the Anomaly Explanation Generation Module (AEGM), the Multi-modal Anomaly Detection Module (MADM), ...
Rebuttal 1: Rebuttal: We sincerely thank the reviewers for their valuable comments. We will add these valuable comments to the revised manuscript. **R1: Resource consumption.** We appreciate your concern regarding the resource consumption of Ex-VAD due to the integration of VLMs and LLMs. We also recognize that due to...
Summary: Paper proposes an explainable VAD approach which combines fine-grained classification with explanations. The approaches use pre-trained VLM and LLM to extract the relevant features. The approach employed 3 linear combination of 3 loss functions for the fine-grained classification of anomalous videos. A novel l...
Rebuttal 1: Rebuttal: We sincerely thank the reviewers for their valuable comments. We will add these valuable comments to the revised manuscript. **R1:Novelty of the Proposed Pipeline.** We apologize for failing to highlight our contributions and novelty. Different from existing coarse-grained VAD, our method is uniq...
On the Adversarial Robustness of Multi-Kernel Clustering
Accept (poster)
Summary: This paper examines the vulnerability of MKC methods to adversarial perturbations—an area that remains understudied. The authors introduce AdvMKC, a reinforcement learning framework that generates subtle perturbations to deceive MKC methods in black-box settings. Using proximal policy optimization and an innov...
Rebuttal 1: Rebuttal: **We sincerely appreciate Reviewer HijP’s thorough and constructive review. We provide point-by-point responses to the raised weaknesses as follows:** --- **W1:** In Table I, some cases show improved MKC method performance under adversarial conditions. The authors should clarify why this phenome...
Summary: AdvMKC proposes a novel black-box adversarial attack for multi-kernel clustering that employs reinforcement learning—specifically, proximal policy optimization with an advantage function—within a generator-clusterer framework. This approach introduces minimal perturbations to mislead multi-kernel clustering wh...
Rebuttal 1: Rebuttal: **We sincerely appreciate Reviewer rAQk's thorough and constructive review. We provide point-by-point responses to the raised weaknesses as follows:** --- **W1:** Although the generator-clusterer framework is claimed to reduce computational costs, prior research has already enhanced multi-view c...
Summary: This manuscript addresses the underexplored vulnerability of MKC methods to adversarial perturbations. To evaluate the adversarial robustness of MKC in a black-box setting, the authors propose AdvMKC, a novel framework grounded in reinforcement learning. AdvMKC employs proximal policy optimization with an adva...
Rebuttal 1: Rebuttal: **We sincerely appreciate Reviewer W7KP’s thorough and constructive review. We provide point-by-point responses to the raised weaknesses as follows:** --- **W1:** The authors do not provide the code. If released, it would enhance reproducibility and be beneficial for the research community. **R...
Summary: The paper investigates the adversarial robustness of MKC in a black-box setting, a largely unexplored area. It introduces AdvMKC, a novel reinforcement-learning-based attack framework that injects imperceptible perturbations to mislead MKC methods. AdvMKC employs proximal policy optimization with an advantage ...
Rebuttal 1: Rebuttal: **We sincerely appreciate Reviewer yajj’s thorough and constructive review. We provide point-by-point responses to the raised weaknesses as follows:** --- **W1:** This paper assumes the victim MKC method operates as a black box with no direct access. Given this realistic constraint, where freque...
GoIRL: Graph-Oriented Inverse Reinforcement Learning for Multimodal Trajectory Prediction
Accept (poster)
Summary: In this paper, the authors introduce a Graph-oriented Inverse Reinforcement Learning (GoIRL) framework for multimodal trajectory prediction. Specifically, (1) to capture the complex scene context in a structured manner, they use vectorized representations of the environment (scene features), (2) to integrate d...
Rebuttal 1: Rebuttal: We sincerely appreciate the reviewer’s thorough and thoughtful feedback. We are grateful for the recognition of our work’s motivations, technical contributions, framework designs, and experimental results, as well as for the constructive suggestions for improvement. Below, we address each of the r...
Summary: This paper focuses on the problem of trajectory prediction, which is hard because of the inherent uncertainty and underlying multimodality. Previous methods mainly focus on behavior cloning, which has been shown to have a covariate shift problem. Therefore, this paper proposes to use IRL to solve this problem...
Rebuttal 1: Rebuttal: We sincerely appreciate the reviewer’s recognition of our research contributions and writing clarity, as well as the constructive feedback and valuable suggestions for improvement. Below, we address the reviewer’s concerns in detail. 1. **Recent Trajectory Prediction Methods.** Thank you for your...
Summary: GoIRL is a graph-based inverse reinforcement learning framework for predicting multiple possible future trajectories in autonomous driving. It integrates lane-graph features into IRL, uses a hierarchical decoder for accurate predictions, and outperforms supervised models on Argoverse and nuScenes benchmarks. ...
Rebuttal 1: Rebuttal: We sincerely appreciate the reviewer’s recognition of our work’s motivations, technical contributions, and experimental results, as well as the constructive suggestions for improvement. Below, we address the reviewer’s concerns in detail. ### **1. Choice of Benchmark Datasets & Recent Baselines.*...
Summary: The Graph-oriented Inverse Reinforcement Learning (GoIRL) framework is an IRL-based predictor that utilizes vectorized context representations. The authors state that the proposed method overcomes the drawbacks of supervised learning techniques. Additionally, a hierarchical parameterized trajectory generator h...
Rebuttal 1: Rebuttal: We sincerely appreciate the reviewer’s affirmative comments on the experimentation, results, and writing, as well as constructive suggestions and valuable references, which have helped us strengthen our manuscript. Below, we provide detailed responses to each of the concerns. 1. **Clarification o...
Summary: This paper presents Graph-oriented Inverse Reinforcement Learning (GoIRL), a novel IRL-based prediction framework that leverages vectorized context representations. The framework first extracts features from the vectorized inputs and then transforms them into grid space using a feature adaptor to ensure compat...
Rebuttal 1: Rebuttal: We sincerely appreciate the reviewer’s thorough and professional evaluation of our work. We are grateful for the recognition of our work’s motivations, novelty, contributions, and experimental validation, as well as for the constructive feedback and valuable suggestions for improvement. Below, we ...
Phase transitions for the existence of unregularized M-estimators in single index models
Accept (poster)
Summary: This work considers the problem of the existence of M-estimators in the proportional high-dimensional regime, where the number of samples $n$ and the covariate dimension $p$ diverge at a fixed ratio $n/p\to\delta$. The main result is to establish a sharp frontier $\delta_{\infty}$ separating regimes where the probability of...
Rebuttal 1: Rebuttal: Thank you for the additional references. We agree that the phase transition phenomena studied in our paper are connected to these earlier works in statistical physics. We addressed this point in our rebuttal to reviewer LDvE; in a way our results are complementary and fill a gap. We will add the ...
Summary: This paper studies the phase transitions for M-estimators in single index models. Prior work has demonstrated that there exists a threshold $\delta_{\infty}$ such that, when $n/p \to \delta$, the M-estimator exists with high probability when $\delta > \delta_{\infty}$ while the M-estimator does not exist w...
Rebuttal 1: Rebuttal: Thanks for your suggestion regarding the introduction. The single index model is a flexible yet interpretable framework for modeling nonlinear relationships while avoiding the curse of dimensionality. Single index models are useful because they impose only a weak assumption on the modeling of $y_i\mid x_i$...
Summary: This paper investigates the existence of solutions to the nonlinear system of equations that characterize the asymptotic behavior of the M-estimator. Notably, the existence of a solution for $\delta>\delta_{\infty}$ remains largely unproven when the assumption of independence between $x_i$ and $y_i$ is removed...
Rebuttal 1: Rebuttal: Thanks for the careful reading of the paper and the kind words. We will be happy to provide clarifications if needed in later discussions.
Summary: This paper studies phase transitions for the existence of unregularized M-estimators in single-index models under proportional asymptotics, where the sample size n and feature dimension p grow proportionally with n/p → δ ∈ (1, ∞). The authors generalize results previously established for binary logistic regres...
Rebuttal 1: Rebuttal: We thank the reviewer for highlighting the importance of earlier foundational work from the information theory and statistical physics communities. We agree that the phase transition phenomena studied in our paper are connected in spirit to classical results such as Cover (1965) on the geometry of...
Do Bayesian Neural Networks Actually Behave Like Bayesian Models?
Accept (poster)
Summary: The paper **"Do Bayesian Neural Networks Actually Behave Like Bayesian Models?"** investigates whether common **approximate inference algorithms** for Bayesian Neural Networks (BNNs) adhere to the theoretical principles of Bayesian belief updating. It empirically evaluates methods such as **Variational Inferen...
Rebuttal 1: Rebuttal: Thank you for your careful review and helpful feedback! We are glad you found our paper “engaging” and “well argued”. We hope our responses and new experiments below answer your questions and alleviate the concerns you raised. > The proposed alternative (martingale posteriors) is presented in a so...
Summary: This paper investigates the properties of several popular algorithms for approximate posterior inference in Bayesian neural networks (BNNs). The main experimental findings are that common approximate inference algorithms in BNNs: (a) do not exhibit functional consistency of posteriors, (b) do not propagate info...
Rebuttal 1: Rebuttal: Thank you for your careful review and helpful feedback! We hope that the responses and new results below answer your questions and alleviate any remaining concerns. > In a set of experiments like this, it would be nice to have a small version where exact inference is possible. This would help clar...
Summary: The paper investigates the alignment of Bayesian neural networks wrt rigorous Bayesian principles/ideals. To do so, tasks like synthetic regression and classification on CIFAR datasets are considered. The main claimed findings are focused on 1) the lack of "functional consistency" shown by approximate posterio...
Rebuttal 1: Rebuttal: Thank you for your careful review and helpful feedback! We are glad you found our paper interesting, and hope that the responses and new results presented below help increase your confidence in backing acceptance of the paper. > I don't really understand why references on Bayesian inference […] ar...
Summary: The paper empirically investigates how well popular approximate inference algorithms for BNNs respect the theoretical properties of Bayesian belief updating. The study tries to examine whether different Bayesian neural network (BNN) posterior approximations adequately capture epistemic uncertainty by analyzi...
Rebuttal 1: Rebuttal: Thank you for your work in reviewing our paper. However, we believe there may have been some significant misunderstandings about our work and we are a little puzzled by both your summary of our work and the conclusions of your review, neither of which reflect our actual contributions. For instanc...
Orient Anything: Learning Robust Object Orientation Estimation from Rendering 3D Models
Accept (poster)
Summary: This paper proposes Orient Anything, a foundation model for predicting object orientation from a single image in a zero-shot manner. The key contribution of the paper is curating a large-scale dataset for the orientation estimation task, which is rendered from Objaverse and includes 2M images. The authors propose to deal wit...
Rebuttal 1: Rebuttal: Thank you for your valuable review and suggestions. Below we respond to the comments in **Weaknesses (W)** and **Questions (Q)**. --- ***W1.1 & Q1.1: Quantitative orientation error on real images.*** Currently, the 6 benchmarks in Table 2 are evaluated on real-world data with quantitative 3D or...
Summary: The paper introduces Orient Anything, a foundational model designed for zero-shot estimation of object orientation from monocular images. Due to the scarcity of orientation annotations for open-world objects, the authors develop an automated 3D object orientation annotation pipeline that effectively utilizes t...
Rebuttal 1: Rebuttal: Thank you for your positive review for recognizing the significance of our paper and invaluable suggestions. Below we respond to the comments in **Weaknesses (W)** and **Questions (Q)**. --- ***Q1: Improvement of "Orient Anything+LLM".*** "Orient Anything + LLM" is designed to demonstrate the a...
Summary: The paper introduces Orient Anything, a foundation model for zero-shot object orientation estimation. The key contributions include: 1) Leveraging 3D models and VLMs to annotate front faces, generating 2M synthetic images with orientation labels; 2) Modeling orientation as Gaussian distributions over angles (a...
Rebuttal 1: Rebuttal: Thank you for your positive review for recognizing the significance of our paper and invaluable suggestions. Below we respond to the comments in **Weaknesses (W)** and **Questions (Q)**. --- ***W1&Q2&Q3: Limited ablation studies for key components: 1. distribution fitting, 2. augmentation, 3. ra...
Summary: This paper proposes Orient Anything, a method that obtains orientation through 3D assets and distilled VLM annotation. Although this paper is somewhat overclaimed, it is pioneering. Claims And Evidence: Yes Methods And Evaluation Criteria: The paper is meaningful, most previous academic research has focused ...
Rebuttal 1: Rebuttal: Thank you for your positive review for recognizing the significance of our paper and invaluable suggestions. Below we respond to the comments in **Weaknesses (W)** and **Questions (Q)**. --- ***W1&W2: Scaling up Orient Anything.*** Thank you for your suggestion. The SoFar dataset is really help...
From Weight-Based to State-Based Fine-Tuning: Further Memory Reduction on LoRA with Parallel Control
Accept (oral)
Summary: This paper discusses PEFT from a new perspective of control theory. From this perspective, a new State-Based Fine-Tuning (State-FT) framework is proposed, where the network is modeled as a graph with each edge representing weights and each node representing activations. Thus, any components such as MLP or a couple of layers...
Rebuttal 1: Rebuttal: Thank you for your thoughtful and positive comments. We would like to clarify on a few points to address the concerns raised. ### 1.Discussion about the difference between the current work and the pioneering work (Zhang et al., 2024b). We agree with the reviewer that it is important to clearly h...
Summary: This paper presents a state-based fine-tuning framework, which can avoid storing large intermediate states during training. Empirical results show its effectiveness. Claims And Evidence: The claims made in the submission are supported by clear and convincing evidence. Methods And Evaluation Criteria: The pro...
Rebuttal 1: Rebuttal: Thank you for your valuable suggestion. ### 1. The idea of state-based fine-tuning seems to apply the LoRA from the QKV matrix to the FFN/ATTN block We understand the reviewer’s concern that our contribution may seem limited to proposing an algorithm, and we would like to clarify on this. Fir...
Summary: This paper proposes a novel state-based fine-tuning framework named State-FT for parameter-efficient algorithms. The authors shift the focus from traditional weight-based adaptations (e.g., LoRA and its variants) to directly optimizing the model’s intermediate forward states. Based on the inspiration of the co...
Rebuttal 1: Rebuttal: We sincerely thank the reviewer for your positive and insightful feedback. ### 1. Quantization Method like QLoRA/QA-LoRA. Can State-FT use QLoRA method independently? Yes, the State-FT method can independently leverage the QLoRA/QA-LoRA approach. GPU memory usage mainly arises from three sources...
Summary: This paper proposes a new state-based fine-tuning framework that allows tuning entire residual blocks or multiple sequential sub-layers instead of adding adapters to each layer. The method significantly reduces the memory footprint by avoiding the storage of large intermediate activations while maintaining fin...
Rebuttal 1: Rebuttal: We sincerely thank reviewer for the insightful and constructive feedback. ### 1. Sensitivity to the Choice of Controlled Blocks Controlling either the MLP or attention layer yields comparable performance, as demonstrated by the ViT model results: ||MLP|Attn|Full Block| |-|-|-|-| |Performance|$91...
Competitively Consistent Clustering
Accept (poster)
Summary: When clustering dynamic data (with insertions and/or deletions), consistency of updated solutions is a concern that can be more relevant than optimal cost in practice. This work studies fully dynamic clustering algorithms for k-center, facility location, and k-median with competitive recourse. For fully dynam...
Rebuttal 1: Rebuttal: Thanks for the thorough and very positive review. We will mention in the abstract that the algorithm are bi-criteria. Indeed, theoretically, the gap between a solution that uses k and k+1 centers may be large. However, we believe this rarely happens in practical instances. The rebuttal platform ...
Summary: This paper studies fully dynamic consistent clustering, specifically focusing on the $k$-center, facility location, and $k$-median problems. Previous work has focused on algorithms maintaining solutions close to optimal while minimizing recourse—the number of changes to centers over time. The key innovation he...
Rebuttal 1: Rebuttal: Thanks for the thorough review. Indeed, in the statement of Theorem 1.1, the result for the k-center is for every \eps\in(0,1/2). We will add this to the statement. Our algorithms use 'resource augmentation', which is very common in competitive analysis (e.g. in caching, network routing, sched...
Summary: This paper considers dynamic clustering problems, including dynamic k-center, facility location, and k-median. The goal is to maintain a constant-factor approximation with small recourse (the total number of changes made to the solution). All existing works on this problem obtain an absolute recourse guarantee, namely, ...
Rebuttal 1: Rebuttal: Thanks for the thorough and very positive review. In our comments to Reviewer 1 (iX2S), we argue that the fractional OPT against which we compare our algorithm is the strongest possible dynamic benchmark (up to the constant slack factor of beta). See our comments to Reviewer 3 (G5B5) about the u...
Summary: This paper studies fully dynamic clustering algorithms with competitive recourse guarantees. The authors focus on three classic clustering problems: k-center, facility location, and k-median. In the fully dynamic setting, given a metric space with n data points, a different set of points is chosen as clients a...
Rebuttal 1: Rebuttal: Thanks for the thorough and very positive review. We fixed the comments, and will try to include a more formal proof of the remark. In Theorem 1.1, for facility location and k-median, we already lose a constant in the cost (or the number of servers). Hence, one may simply use \eps=1. We will add...
Invariance Makes LLM Unlearning Resilient Even to Unanticipated Downstream Fine-Tuning
Accept (poster)
Summary: Incorporating an invariance objective into unlearning tasks can prevent accidentally relearning unlearned WMDP when trained on standard fine-tuning datasets. Claims And Evidence: While there is some evidence that the method is robust against relearning using unrelated fine-tuning tasks, I don’t think there is...
Rebuttal 1: Rebuttal: **Q1: Derivation on (3), connection with (4)-(5), question on stationarity, motivation on IRM, and why not just gradient norm.** **A1:** Eq. (3) follows the standard IRMv1 relaxation from [R1], which approximates the bi-level IRM in Eq. (2) using a single-level gradient penalty. This promotes inv...
Summary: The paper introduces a novel approach to enhance the resilience of LLMs against the re-emergence of unlearned knowledge during downstream fine-tuning tasks. This is achieved through invariant LLM unlearning, which incorporates IRM principles into the unlearning process. The contributions of this paper are summ...
Rebuttal 1: Rebuttal: Thank you for the reviewer’s review. We respond to the key questions raised below. **Q1: Weak motivation for why relearning occurs (in Sec. 4) and under-unlearning in NPO and RMU?** **A1:** Thank you for the question. We provide further clarification below. First, we added experiments showing ...
Summary: The authors propose a novel method to enhance the robustness of language model unlearning against fine-tuning. The core contribution is the introduction of invariance regularization, inspired by Invariant Risk Minimization, which aims to make unlearning effects resilient to subsequent fine-tuning. The paper de...
Rebuttal 1: Rebuttal: **Q1: Robust unlearning evaluation on aggressive approaches (like targeted relearning attacks).** **A1:** We appreciate the reviewer’s suggestion and have incorporated additional experiments to evaluate our method under the **relearning attack** setting [R1]. Specifically, the attack involves fin...
Summary: This paper addresses the challenge of machine unlearning in large language models (LLMs) by improving the robustness of removing targeted knowledge while preserving model utility. Existing unlearning methods are highly sensitive to downstream fine-tuning, often leading to the unintended recovery of unlearned i...
Rebuttal 1: Rebuttal: We thank the reviewer for the detailed summary of our work and contributions. We also appreciate the insightful comments and provide our detailed responses below. **Q1: Confusion in Fig. 4 task vector.** **A1:** We apologize for any confusion caused in Fig. 4 and our presentation. We will impr...
null
null
null
null
null
null
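The rebuttals in the record above reference the IRMv1 relaxation: the bi-level IRM objective is replaced by a single-level gradient penalty taken with respect to a frozen scalar classifier fixed at w = 1 (Arjovsky et al.'s standard formulation, not the paper's code). A minimal numpy sketch for squared loss, with illustrative function names:

```python
import numpy as np

def irmv1_penalty(f, y):
    """IRMv1 penalty for squared loss l(w*f, y) = (w*f - y)^2:
    the squared gradient w.r.t. a frozen scalar classifier w at w = 1.
    Analytically, d/dw (w*f - y)^2 |_{w=1} = 2*(f - y)*f, batch-averaged."""
    grad_w = np.mean(2.0 * (f - y) * f)
    return grad_w ** 2

def irm_objective(envs, lam=1.0):
    """Average per-environment risk plus the invariance penalty.
    envs: list of (predictions f, targets y) pairs, one per environment."""
    risk = np.mean([np.mean((f - y) ** 2) for f, y in envs])
    penalty = np.mean([irmv1_penalty(f, y) for f, y in envs])
    return risk + lam * penalty
```

When the predictor is exactly right in every environment, both the risk and the penalty vanish; a predictor that is biased in some environment pays a penalty even if its average risk is low, which is the invariance pressure the unlearning method exploits.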
Heads up! Large Language Models Can Perform Tasks Without Your Instruction via Selective Attention Head Masking
Accept (poster)
Summary: This article explores the ability of large language models (LLMs) to perform tasks without relying on explicit instructions through selective attention head masks. It is found that there exists a "functional path" composed of attention head combinations within the model, which is crucial for task execution. Ex...
Rebuttal 1: Rebuttal: Many thanks for your appreciation and valuable remarks on our work. We would like to answer and clarify your concerns as below. > W1. Theoretical analysis Since our conclusions are primarily derived from experiments and observations rather than strict theoretical analysis, we sincerely apologize...
Summary: This paper proposes a simple yet effective attention head masking method for large language models. Specifically, it trains head weights that indicate the importance of each head to the task. After training, the trained head weights can be mapped to a head mask, which is used as the final mask for inference. Moreover...
Rebuttal 1: Rebuttal: Thank you for your thoughtful comments and suggestions on our work. We would like to answer and clarify your concerns as below. As the text length is constrained, we provide the results in the anonymous GitHub repository (rebuttal.pdf). First, we would like to clarify that the primary focus of this paper is...
Summary: The authors study the case in which switching off several attention heads steers the model to perform a specific task without fine-tuning. Claims And Evidence: - Switching off attention heads leads to similar results as prompting (experiments on language translation) - Masks can be ...
Rebuttal 1: Rebuttal: Thank you for your review and comments. We provide our clarifications and responses below. First and foremost, we would like to clarify certain statements in your Summary, Methods and Experimental Designs sections. Our experiments are not limited to language translation tasks; they also...
null
null
null
null
null
null
null
null
Safety-Polarized and Prioritized Reinforcement Learning
Accept (poster)
Summary: Main findings: - The paper introduces MAXSAFE, a chance-constrained bi-level optimization framework for safe reinforcement learning to achieve hard-constraint satisfaction and near-zero costs in the sparse-cost setting. In particular, MAXSAFE minimizes the unsafe probability and maximizes the return under safe poli...
Rebuttal 1: Rebuttal: We would like to sincerely thank the reviewer for the thorough reading of our paper. We address your valuable questions in the following responses. Q1: Sparse cost challenge In our setup, episodes terminate immediately upon safety violations, as we treat safety as a hard constraint. The learner ...
Summary: The paper proposed MAXSAFE, a safe RL algorithm which aims to maximize the return while minimizing (reducing to zero) the probability of visiting an unsafe state. MAXSAFE builds on the Q-learning algorithm, and is thus applicable to discrete-action MDP problems. The major contribution of this paper is tha...
Rebuttal 1: Rebuttal: We would like to sincerely thank the reviewer for the valuable feedback of our work. Q1: The soundness of our assumption that there exists a sufficiently large policy space with minimal unsafe probability. We assume that in the environments we consider, there exists a sufficiently large policy s...
Summary: The paper has a clear motivation: improving safe RL through action masking. To avoid directly applying infinity to Q, the paper introduces polarization functions. To improve the learning of REF, the paper uses prioritized learning. The results show significant improvement in both reward and safety ...
Rebuttal 1: Rebuttal: We would like to sincerely thank the reviewer for the positive recognition of our work. Regarding the concern about why we did not choose Safety-Gym as our benchmark suite, the primary reason is that our setting fundamentally differs from that of Safety-Gym. In our framework, safety is treated as...
Summary: This work introduces a chance-constrained bi-level optimization framework, called MaxSafe, for the maximal-safety RL problem. MaxSafe first minimizes the unsafe probability and then maximizes the return among the safest policies. Claims And Evidence: 1) The authors assume that there is a sufficiently large policy...
Rebuttal 1: Rebuttal: We would like to express our sincere gratitude to the reviewer for the valuable feedback and constructive comments. To better quantify the trade-off between reward and safety, we adopt the SWU score introduced by Yu, H., Xu, W., and Zhang, H. in Towards Safe Reinforcement Learning with a Safety Ed...
null
null
null
null
null
null
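The reviews above discuss safe RL via action masking, noting that the paper avoids "directly applying infinity to Q" by introducing polarization functions. For contrast, here is the naive selection-time alternative the paper improves on (a generic sketch, not the paper's polarization mechanism):

```python
import numpy as np

def masked_greedy_action(q_values, safe_mask):
    """Greedy action selection restricted to actions flagged as safe.

    Instead of pushing Q-values toward -infinity during learning (which
    the paper avoids via polarization functions), this naive baseline
    simply excludes unsafe actions at selection time. If the mask rules
    out every action, it falls back to a plain argmax.
    """
    q = np.asarray(q_values, dtype=float)
    mask = np.asarray(safe_mask, dtype=bool)
    if not mask.any():
        return int(np.argmax(q))          # no safe action: unconstrained fallback
    q_masked = np.where(mask, q, -np.inf)  # unsafe actions can never win the argmax
    return int(np.argmax(q_masked))
```

The weakness of this baseline, and the motivation for learned polarization, is that the -inf masking only acts at selection time and contributes nothing to shaping the learned Q-function itself.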
Sparse Autoencoders, Again?
Accept (poster)
Summary: The paper proposes sparse variational autoencoders as an analog to sparse autoencoders by adding a sample-wise sparsity mask. The authors then consider low-dimensional data and show that only these low dimensions are active for optimal parameters. They then compare this model on multiple real-world data sets to...
Rebuttal 1: Rebuttal: Thanks for the comprehensive review of our work, particularly with respect to checking proof details and pointing out multiple valuable corrections. **Comment:** *The benchmarks seems sensible ... but the only comparison made is against other sparse autoencoder models.* **Response:** Actually, w...
Summary: After a discussion of the strengths and weaknesses of variational autoencoders (VAEs) and sparse autoencoders (SAEs), this paper proposes an adaptation of the VAE architecture to enhance sparsity and target the interpretability objective that motivates SAEs. The idea is conceptually simple: use the learned vari...
Rebuttal 1: Rebuttal: Thanks for the quite comprehensive and accurate summary of our work. Indeed this description captures the essence of what we hoped to convey. **Comment:** *...additional empirical validation of language model data (e.g., on a more recent, larger model) would make the paper stronger.* **Response...
Summary: The paper introduces a method to explicitly model sparsity in variational autoencoders (VAEs) that leverages a simple (parameter-free) transformation of the latents before decoding. The method is introduced by intuitive construction and supported by theoretical arguments and results. Experimental comparisons s...
Rebuttal 1: Rebuttal: We are appreciative of the detailed, constructive comments, and for pointing out the high potential of our work being used by those interested in sparse autoencoders (which is our intended audience). We address main reviewer points in turn below. **Comment:** *As far as I understand, the masking ...
Summary: This paper addresses the limitations of traditional Sparse Autoencoders (SAEs) and Variational Autoencoders (VAEs) in sparse representation learning, particularly their inability to adaptively adjust sparsity patterns and sensitivity to hyperparameters. The authors propose a novel model called VAEase, which co...
Rebuttal 1: Rebuttal: Thanks for acknowledging the many positive aspects of our work, including the novel design, comprehensive proofs, solid theoretical foundation, well-designed experiments, and the robustness and generalizability of our proposed model. We also appreciate the reviewer's statement that there is clear...
null
null
null
null
null
null
Relational Conformal Prediction for Correlated Time Series
Accept (poster)
Summary: This paper introduces a method for improving uncertainty quantification in time series forecasting by leveraging correlations between sequences. The authors propose Conformal Relational Prediction (COREL) that integrates conformal prediction and quantile regression with graph-based deep learning. The method ca...
Rebuttal 1: Rebuttal: Thanks for the review, please find point-by-point answers below. >Unclear if smaller PIs (or lower Winkler) simply stem from anti-conservativeness. There might be a misunderstanding. The Winkler score (see Eq. 26 in the Appendix) encompasses both coverage and efficiency, i.e., a smaller PI wid...
Summary: The work introduces Conformal Relational Prediction (CoREL), a novel distribution-free uncertainty quantification method for time series forecasting that leverages graph deep learning. CoREL integrates spatiotemporal graph neural networks with conformal prediction (CP) to capture relational structures among co...
Rebuttal 1: Rebuttal: Thank you for the review. Please find our comments below. > Comparisons against standard CP methods in terms of runtime are needed to fully validate efficiency claims. The two main baselines to compare to here would be SCPI and HopCPT. Scalability issues for SCPI come from training a different...
Summary: The paper presents Conformal Relational Prediction (COREL), which integrates graph deep learning (GDL) operators into the CP framework, allowing relational structures to improve uncertainty estimation for spatio-temporal time series. The method utilizes an STGNN to provide structural embeddings and applies qua...
Rebuttal 1: Rebuttal: Thank you for the review! Please find our answers to your questions below. > Missing reference. We agree it is a relevant reference. We will include it in the updated version of the paper. Thank you. > Typos. Thank you for spotting those! > Prop 3.1 [...] blankets everything that could u...
Summary: This work is on uncertainty quantification in time series forecasting. The authors proposed a conformal prediction method based on graph neural networks. Their approach is based on quantile regression. A spatiotemporal graph neural network was trained on the residuals of the calibration dataset to predict the ...
Rebuttal 1: Rebuttal: Thank you for your review! Please find our point-by-point answers below. > Results show that the proposed model is better on Winkler score but not on the other two metrics. Hence, cannot claim that the proposed model is superior. Coverage and PI Width on their own do not say much about UQ per...
Summary: The paper introduces Conformal Relational Prediction (COREL), a novel approach for uncertainty quantification in correlated time series forecasting using graph deep learning frameworks. COREL overcomes the data exchangeability limitation by employing a spatiotemporal graph neural network (STGNN) to model relat...
Rebuttal 1: Rebuttal: Thanks for the review; please find our comments below. > Further comparisons with existing methods and additional concrete examples of scenarios where COREL outperforms other methods can bolster the robustness. [...] This comparative approach is valid but could be strengthened by including a wide...
null
null
null
null
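The rebuttals above lean on the Winkler score as a metric that jointly captures coverage and prediction-interval efficiency. A minimal sketch of its standard definition (the paper's own Eq. 26 is not reproduced in this excerpt):

```python
def winkler_score(lower, upper, y, alpha):
    """Winkler (interval) score at miscoverage level alpha for a
    prediction interval [lower, upper] and realized value y.

    Interval width, plus a 2/alpha penalty per unit of violation when y
    falls outside the interval. Lower is better, so the score rewards
    narrow intervals (efficiency) while punishing missed coverage."""
    width = upper - lower
    if y < lower:
        return width + (2.0 / alpha) * (lower - y)
    if y > upper:
        return width + (2.0 / alpha) * (y - upper)
    return width
```

This is why a smaller average Winkler score cannot come purely from anti-conservative (too-narrow) intervals: every missed observation is charged 2/alpha times the miss distance, which dominates the width savings at typical alpha levels.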
Semi-gradient DICE for Offline Constrained Reinforcement Learning
Reject
Summary: This paper investigates the limitations of SemiDICE in offline constrained reinforcement learning, revealing that it outputs policy corrections rather than stationary distribution corrections. This fundamental flaw significantly impairs SemiDICE's effectiveness in off-policy evaluation (OPE). Based on these fi...
Rebuttal 1: Rebuttal: We appreciate the reviewer’s comment and provide empirical evidence in Section 4 to better substantiate our work. ## Q1. Empirical evidence on the claim from Section 4 We demonstrate that OptiDICE can yield a state $s$ where $d_{\pi^*}(s,a)=0\;\forall a$. The figure provided in https://imgur.com...
Summary: The paper developed a new offline RL algorithm that applies semi-gradient DICE, addressing the challenge of constraint violation when applying semi-gradient DICE in the context of constrained RL. The paper provides theoretical analysis on the characteristics of the correction term (i.e., the ratio of the stati...
Rebuttal 1: Rebuttal: We appreciate the reviewer’s acknowledgment of our contribution: extending semi-gradient DICE to constrained offline RL, grounded in theoretical analysis of its optimal solution and stationary distribution correction approach to address the issue of Bellman flow constraint violation.
Summary: This paper proposes a DICE-based algorithm for offline constrained RL. The proposed method can be seen as a SemiDICE version of COptiDICE, with some extra designs. The paper is generally well-written, but I also feel there is some overclaiming of contributions and a lack of adequate acknowledgment of existing ...
Rebuttal 1: Rebuttal: We thank the reviewer for the thorough and constructive comments. We hope we can address your concerns below. ## Q1. Violation of Bellman flow (BF) constraint and the originality of Proposition 4.1 We respectfully disagree with the reviewer’s claim that replacing the term $(1-\gamma)p_0$​ with t...
null
null
null
null
null
null
null
null
Scaling Inference-Efficient Language Models
Accept (poster)
Summary: This paper proposes to modify the Chinchilla scaling laws to also include the model aspect ratio (embedding dim / number of layers). This accounts for the fact that wider and shallower models are faster at inference. Additionally, the paper suggests including latency as a key metric f...
Rebuttal 1: Rebuttal: **Claims** **C1:** This paper only looks... **Answer:** From Figure 3 in [2], prior work has also observed that a smaller loss does not necessarily mean better performance on downstream tasks. See more results here: https://anonymous.4open.science/r/ICML25-Rebuttal-3B34 **C2:** Just the reference...
Summary: The paper observes that architecture modifications significantly affect inference latency whilst holding total model size fixed. The paper primarily uses model aspect ratio $r=d_{model}/n_{layers}$ as the parameterization of architecture. The paper then introduces an inference-efficient scaling law, which is ...
Rebuttal 1: Rebuttal: **Q1:** In line 437, you say that your approach "avoids estimating tokens generated". I would rephrase this, as it implies your approach is circumventing a problem in earlier work. To me it seems more like there is only a philosophical difference. The current work is concerned with model latency, ...
Summary: The paper presents revised inference-time scaling laws based on model architecture choices, relying on the observation that models of the same size but different architecture choices can have up to a 3.5 times difference in inference latency. Using that, they train models of varying sizes up to 1B parameters, ...
Rebuttal 1: Rebuttal: **Q1:** Table 1 seems to only rely on 6 data points, how statistically valid is extrapolating to larger sizes in this case? **A1:** As detailed in Table 4 of the Appendix, each model size includes several variants. We use 27 data points to fit the scaling laws in Figure 7. **Q2:** The Spearman c...
Summary: Traditional scaling laws (like Chinchilla) do not account for model architectures in their modeling of the loss. This paper first highlights that the model architecture (like hidden dim, #layers) affects the downstream loss as well as the latency of the models (also studied in multiple previous works). They pr...
Rebuttal 1: Rebuttal: **Q1:** The authors emphasize the importance of estimating the downstream inference latency of models... **Q2:** Moreover, inference latency depends on the number of tokens the model generates to answer a question at inference... **Q3:** I strongly encourage the authors to define precisely how t...
null
null
null
null
null
null
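For context on the record above: the Chinchilla-style parametric law that the summaries describe extending takes the standard form below, with the aspect ratio $r$ being the quantity the paper adds as a conditioning variable (its exact functional dependence on $r$ is not shown in this excerpt):

```latex
% Chinchilla parametric loss in model size N and training tokens D,
% and the aspect ratio r the paper conditions on
L(N, D) = E + \frac{A}{N^{\alpha}} + \frac{B}{D^{\beta}},
\qquad
r = \frac{d_{\text{model}}}{n_{\text{layers}}}.
```

Since wider, shallower models (larger $r$) can be faster at inference for the same parameter count $N$, fitting the loss jointly in $N$, $D$, and $r$ lets one trade a small loss increase for a large latency reduction.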
Mixed-curvature decision trees and random forests
Accept (poster)
Summary: The paper introduces mixed-curvature decision trees (DTs) and random forests (RFs), which can be used to analyse data living on product manifolds: combinations of Euclidean, hyperspherical and hyperbolic spaces, allowing for heterogeneous curvature. DTs are reformulated using angles to respect the manifold ge...
Rebuttal 1: Rebuttal: We thank the reviewer for their careful attention, and are grateful for their favorable assessment of our “**well-written**, clear, convincing text”, our core idea being “**novel and interesting**,” and our method’s potential to be an “**effective, interpretable** tool.” **Relationship to signat...
Summary: This paper presents a novel extension of Decision Trees (DTs) and Random Forests (RFs) to product manifolds, which are Cartesian products of hyperbolic, hyperspherical, and Euclidean spaces. The authors introduce an angular reformulation of DTs that respects the geometry of product manifolds, resulting in geod...
Rebuttal 1: Rebuttal: We thank the reviewer for their attention to our manuscript, and in particular for praising the strong performance of our method, the thoroughness of our benchmarks, and the “clear and convincing evidence” of our claims. **Motivation for a tree-based method and our contribution** Tree-based methods...
Summary: The manuscript develops a methodology for creating decision trees and random forests (classifiers or regressors) by assuming the data coordinates can be decomposed into products of hyperbolic, hyperspherical, or Euclidean components. It is shown that each of those spaces belongs to a class of "constant curvatu...
Rebuttal 1: Rebuttal: We thank the reviewer for their favorable comments on our manuscript, including our “**clear and convincing**” claims, “**good results**,” and the “**extra effort** required to establish” performance on real-world benchmarks. **Selection of benchmarks.** Our work makes the admittedly strong assu...
Summary: This paper proposes mixed-curvature decision trees (DTs) and random forests (RFs) for data embedded in product manifolds—combinations of hyperbolic, spherical, and Euclidean spaces. The core algorithm selects the geodesic split from three options (hyperbolic, spherical, Euclidean) with highest information gain...
Rebuttal 1: Rebuttal: We thank the reviewer for their time and for acknowledging our “novel integration of mixed-curvature geometries into DTs/RFs” and its “potential for applications in hierarchical or graph-structured data,” and for praising its effectiveness in single-manifold settings. **Added benchmarks** Per the...
null
null
null
null
null
null
Hierarchical Graph Tokenization for Molecule-Language Alignment
Accept (poster)
Summary: Previous LGLMs usually focus on the node level of molecules, ignoring the structural information in molecules. To address this, the paper proposes a novel strategy called Hierarchical Graph Tokenization (HIGHT), which uses a hierarchical graph tokenizer to encode the hierarchy of atom, motif, and molecular leve...
Rebuttal 1: Rebuttal: Dear Reviewer VbFG, Thank you for your time and suggestions to our work. Please find our detailed responses to your questions below: > Related work section Thank you for acknowledging our contribution as the first to incorporate the hierarchical graph information. As for the referred works, we ...
Summary: This paper proposes HIGHT, a novel molecular graph tokenization and post-training framework for applying large language models to molecular graphs. The paper proposes a novel hierarchical tokenization method incorporating molecular motif information, and uses novel alignment pretraining strategy to train mode...
Rebuttal 1: Rebuttal: Dear Reviewer zQSo, Thank you for your time and insightful suggestions, as well as your acknowledgment of the value and convinceness of our work. Following your suggestions, we have revised our manuscript to include a discussion on future work regarding extending HIGHT to incorporate 3D informati...
Summary: The paper introduces a new representation of graphs (specifically, molecules) for the purpose of tokenization for LLMs. The key aspect of the new representation is that it not entirely just node-based, rather it captures features in the graph at both the node and the motif level. It is not clear to me whether ...
Rebuttal 1: Rebuttal: Dear Reviewer brfq, Thank you for acknowledging our performance improvements and constructive suggestions. Please find our explanations to your questions below: > it is not entirely clear what fraction of the improvement comes from the hierarchical tokenization itself. We kindly refer Reviewer...
Summary: This paper aims to address the issue of tokenization in existing LGLMs (large graph-language models) that neglect the essential hierarchical structures inherent in molecules. The hierarchical structures are reflected as motifs or functional groups that are subgraphs within the larger molecular graph. The prop...
Rebuttal 1: Rebuttal: Dear Reviewer eiUy, Thank you for your time and insightful suggestions for our paper. Please find our responses to your concerns below. > In molecular property prediction and chemical reaction tasks, HIGHT does not demonstrate substantial advantages. We need to clarify that the performance gaps...
null
null
null
null
null
null
MixMin: Finding Data Mixtures via Convex Minimization
Accept (poster)
Summary: This submission addresses the optimization of data source mixtures, formulating it as a bi-level optimization problem. The key result is Theorem 3.1, which states that under certain conditions (cross-entropy or mean squared error loss, hypothesis class contains Bayes optimal models), the optimal mixture weight...
Rebuttal 1: Rebuttal: We thank the reviewer for their time and feedback. We discuss the main questions below, and will incorporate the other suggestions to our revised draft. > For the proof of Lemma 3.2, it is not clear how the first displayed inequality is obtained… We describe the derivation below, and will add t...
Summary: This paper proposes MixMin, a simple but effective method for solving the data mixture coefficients in large language model pretraining. The authors identify that the bi-level optimization objective for solving data mixture is intractable. But luckily, such an objective tends to be convex when model classes be...
Rebuttal 1: Rebuttal: We thank the reviewer for their time and feedback. We discuss specific questions below. > Only models under the parameter size of 1B have been evaluated. It's not clear if the method scales well to larger model sizes, such as around 7B. We agree it would be nice to extend our analysis to larger ...
Summary: The authors propose a novel method (called MixMin) for the problem of optimizing data mixtures for pre-training of large ML models in order to improve performance in downstream tasks. The MixMin method proposed by the authors solves this optimization problem with the following approach: First, MixMin trains a ...
Rebuttal 1: Rebuttal: We thank the reviewer for their time and feedback! We elaborate on questions raised in the review below, and will update the draft with the typos pointed out by the reviewer. > Can the authors provide any additional insight into when they expect MixMin to work well in practice? In what scenarios ...
null
null
null
null
null
null
null
null
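The MixMin record above describes reducing data-mixture search to a convex minimization over the simplex, using cheap proxy models per source. A generic sketch of that idea (assumed details, not the authors' code): fit weights minimizing the cross-entropy of a weighted mixture of per-source proxy predictions on target data, using exponentiated-gradient updates, which keep the weights on the simplex:

```python
import numpy as np

def fit_mixture_weights(probs, steps=500, lr=0.5):
    """probs: array of shape (n_sources, n_examples), where probs[i, j] is
    proxy model i's probability of the correct target label on example j.
    Minimizes -mean(log(lambda @ probs)) over the simplex, a convex
    problem in lambda, via exponentiated-gradient updates."""
    k = probs.shape[0]
    lam = np.full(k, 1.0 / k)                  # start from the uniform mixture
    for _ in range(steps):
        mix = lam @ probs                      # mixture probability per example
        grad = -np.mean(probs / mix, axis=1)   # gradient of the cross-entropy
        lam = lam * np.exp(-lr * grad)         # multiplicative (EG) update
        lam /= lam.sum()                       # renormalize onto the simplex
    return lam
```

Because the objective is convex in the weights, this cheap post-hoc optimization finds the global optimum for the proxies; the paper's claim is that the resulting mixture transfers to the expensive full-scale training run.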
Oracle-MoE: Locality-preserving Routing in the Oracle Space for Memory-constrained Large Language Model Inference
Accept (poster)
Summary: This paper proposes a new MoE architecture, Oracle-MoE, to address the latency issues associated with deploying large language models (LLMs) on edge devices with limited memory. The key idea is to route tokens in a compact space, called the oracle space, which is derived from attention scores to maintain sema...
Rebuttal 1: Rebuttal: Thank you for your professional review comments and suggestions. We will add them in the updated version. Figures mentioned can be accessed at: https://anonymous.4open.science/r/ICML2025REBUTTAL-E158/README.md _Q1_ Semantic locality: While the paper claims that tokens with higher mutual attention...
Summary: This paper proposes Oracle-MoE, which improves the MoE inference efficiency by exploiting semantic locality to reduce swapping demands. Claims And Evidence: Please see **Other Strengths And Weaknesses** Methods And Evaluation Criteria: Please see **Other Strengths And Weaknesses** Theoretical Claims: Please...
Rebuttal 1: Rebuttal: Thank you for your professional review comments and suggestions. We will add them in the updated version. Figures mentioned can be accessed at: https://anonymous.4open.science/r/ICML2025REBUTTAL-E158/README.md _Q1_ The main concern I have with this paper is the issue of scalability. In the paper,...
Summary: This paper presents Oracle-MoE, a novel Mixture-of-Experts (MoE) architecture aimed at efficiently deploying Large Language Models (LLMs) on memory-constrained edge devices. Current MoE models, despite theoretical advantages for memory efficiency, suffer from high latency during inference due to frequent swapp...
Rebuttal 1: Rebuttal: Thank you for your professional review comments and suggestions. We will add them to the updated version. The figures mentioned can be accessed at: https://anonymous.4open.science/r/ICML2025REBUTTAL-E158/README.md _Q1_ The paper could benefit from further discussions on practical deployment consi...
Summary: The paper introduces Oracle-MoE, a novel Mixture-of-Experts (MoE) architecture designed specifically for memory-constrained inference on edge devices. The main idea is to replace conventional token-level routing with an oracle-space routing mechanism that leverages semantic locality. By grouping tokens based o...
Rebuttal 1: Rebuttal: Thank you for your professional review comments and suggestions. We will add them in the updated version. Figures mentioned can be accessed at: https://anonymous.4open.science/r/ICML2025REBUTTAL-E158/README.md _Q1_ It would be better to measure the temporal inconsistencies for the whole dataset a...
null
null
null
null
null
null
Less is More: Federated Graph Learning with Alleviating Topology Heterogeneity from A Causal Perspective
Accept (poster)
Summary: This work proposes a causal subgraph learning method for graph federated learning. The work consists of three critical components. First, the edge evaluator separates the local subgraph into a causal subgraph and a biased subgraph. Second, the dual-GNN is developed to encode the corresponding subgraphs. Third, the...
Rebuttal 1: Rebuttal: Many thanks for your valuable feedback; we carefully reply to your concerns as follows. **Response to Weakness(a):** The topology of a graph has a significant influence on node embedding. However, inspired by causal learning, we argue that only the critical topological information is a direct determ...
Summary: To address the topology heterogeneity of FGL, the authors proposed an interesting idea, namely, Less is More. Concretely, the unnecessary edges are discarded while the necessary edges are maintained. The CE loss and NE loss are separately used to train the corresponding GNNs. The HSIC loss is adopted to enforc...
Rebuttal 1: Rebuttal: Thank you very much for your valuable comments; the detailed responses are provided as follows. **Response to Weakness1:** We have collected two new federated graph learning methods, FGGP [1] and FedGTA [2], and compared them with the proposed FedATH. The experimental results are reported in the foll...
Summary: This paper proposed a reduced-edge based federated graph learning method that aims to mitigate the effects of topological heterogeneity on federated learning. Specifically, the proposed FedATH assesses the importance of each edge via an edge evaluator. Thus, the local subgraph is divided into causal and biased...
Rebuttal 1: Rebuttal: Thank you for your professional advice; the detailed responses are provided as follows. **Response to Weakness1:** The difference between causal and biased graphs is whether they contain critical edge information. We use an edge evaluator to assess the importance of each edge. Subgraphs that cont...
Summary: The paper proposes to divide the local graph into a causal subgraph and a biased subgraph to alleviate the topology heterogeneity: the causal subgraph possesses the key information for the downstream task, and the biased subgraph possesses the confusing information. Thus, only the causal graph neural networks ...
Rebuttal 1: Rebuttal: Thank you very much for your valuable comments; we respond to your concerns as follows. **Response to Weakness1:** We test the performance of several federated learning methods when all the edges are removed on the Cora dataset. It can be seen that although topological heterogeneity causes a decline in...
null
null
null
null
null
null
Sample, Scrutinize and Scale: Effective Inference-Time Search by Scaling Verification
Accept (poster)
Summary: The paper studies a minimalist implementation of test-time scaling that uses only random sampling and direct self-verification. The contributions include: 1. the paper shows that the sample-verification method is surprisingly effective, and it is beneficial to scale both the number of solution samples per question...
Rebuttal 1: Rebuttal: Thank you for your review. > The main contribution of the paper. We view our main contribution as showing that scaling search with self-verification works on frontier models and providing an explanation for why: implicit scaling. While it may seem obvious that “increasing verification samples le...
Summary: This paper studies the inference-time scaling of LLMs for reasoning tasks in a sampling-based search setting. The authors first study the test-time scaling along two important dimensions, search (number of sampled candidates) and verification (number of verification scores computed). While scaling in both axes...
Rebuttal 1: Rebuttal: Thank you for your review! We address your questions/comments below. > “I would suggest the authors to also report the token consumption of baselines (e.g., Consistency@k) as well for a more fair comparison”. We appreciate the suggestion and agree it would be useful. As our focus was identifying...
Summary: Overall Evaluation: This paper investigates the scalability of sampling-based search methods in inference tasks and proposes a minimal yet effective Sampling-based Search with Self-Verification approach. The key contributions of this work include: 1. A systematic analysis of inference performance scaling wit...
Rebuttal 1: Rebuttal: Thank you for your review! We address your questions below. > Computational cost compared to alternative reasoning frameworks such as Tree-of-Thoughts (ToT) and reinforcement learning-based approaches. Optimizing computational efficiency was not the main focus of this paper, which focuses on und...
Summary: This paper examines scaling test-time compute through a sampling and self-verification approach (“Verification@k"). The authors demonstrate that with sufficient sampling and self-verification, even standard models (Gemini v1.5 Pro) can outperform specialized reasoning models (o1-Preview), and Verification@k im...
Rebuttal 1: Rebuttal: Thank you for your review. We address your questions below. > “I’m skeptical of the author’s implicit scaling claim… it is unclear if increased generations improves generation quality or simply increases generation coverage.” We understand this concern and provided Figure 2 for exactly this reas...
Summary: This paper claims that while self-consistency can greatly improve LLM performance, leveraging additional test time compute to verify/compare generated responses can break the plateau for self-consistency and further enhance model performance, the paper conducts extensive experiments to validate their findings ...
Rebuttal 1: Rebuttal: Thank you for your review. To address your comments on “predictability”: Many papers have studied self-consistency and self-verification, and attempted to scale them up. None have reported the same success that we have; in fact, we aren’t aware of any prior works that have successfully applied se...
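The sampling-plus-self-verification loop discussed in these reviews (Verification@k) can be sketched minimally. The stub `generate`/`verify` functions below stand in for LLM calls and are illustrative assumptions, not the paper's implementation; the toy scoring criterion is arbitrary.

```python
import random

def generate(prompt, rng):
    # Stub candidate generator: stands in for sampling one LLM response.
    return rng.gauss(0.0, 1.0)

def verify(prompt, candidate, rng):
    # Stub self-verification call: returns a noisy score for the candidate.
    # In this toy setup, closer to 0 counts as "more correct".
    return -abs(candidate) + rng.gauss(0.0, 0.1)

def verification_at_k(prompt, k, num_verifications, seed=0):
    """Sample k candidates, score each with repeated self-verification,
    and return the candidate with the highest mean verification score."""
    rng = random.Random(seed)
    candidates = [generate(prompt, rng) for _ in range(k)]
    def mean_score(c):
        return sum(verify(prompt, c, rng) for _ in range(num_verifications)) / num_verifications
    return max(candidates, key=mean_score)
```

Scaling `k` widens the search (coverage), while scaling `num_verifications` sharpens selection among the sampled candidates, which is the two-axis trade-off the paper studies.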
Learning Mixtures of Experts with EM: A Mirror Descent Perspective
Accept (poster)
Summary: In this paper, the authors discussed the relationship between the EM algorithm and mirror descent in the context of the mixture of experts (MOE) learning. In the beginning, the authors proposed an overview of the EM-based parameter learning procedure in the MOE. On this basis, the authors then introduced the p...
Rebuttal 1: Rebuttal: We sincerely thank the reviewer for their constructive feedback in highlighting key strengths and limitations of our work. Below, we address the main points raised: - **Theory:** **The reviewer raised concerns about the nomenclature, use of KL vs L2 regularizer, the relationship between the M...
Summary: This paper focuses on integrating MoE optimization and EM algorithm. The authors first proved the theoretical guarantees of EM algorithm for training MoE models. Then, the authors focus on the special case of mixture of 2 linear or logistic experts and analyze the guarantees for the linear convergence. Next, t...
Rebuttal 1: Rebuttal: We sincerely thank the reviewer for their constructive feedback in highlighting key strengths and limitations of our work. Below, we address the main points raised: - **The provided results are synthetic and small-scale datasets. It is hard to distinguish whether the proposed method is still effe...
Summary: This paper studies the relationship between EM for general MoE with projected mirror descent algorithms. Based on the equivalence between EM and mirror descent, this work provides non-asymptotic convergence rates for training MoE with EM. Claims And Evidence: Yes, this work provides solid proofs to the theore...
Rebuttal 1: Rebuttal: We begin by extending our thanks to the reviewer for the very constructive feedback in highlighting important strengths and limitations of our work. We address the main points raised below: - **Concerns:** **1) The assumptions on the distribution are restricted to the exponential family. 2) Th...
Summary: This paper discusses the relationship between the EM algorithm and MoE models. In particular, a relationship is shown between the EM update of the experts and router parameters and Mirror Descent (MD) with a specific expression for the Bregman divergence regulariser. Claims And Evidence: The paper claims to e...
Rebuttal 1: Rebuttal: We begin by extending our thanks to the reviewer for the very constructive feedback in highlighting important strengths and limitations of our work. We address the main points raised below: - **The limitations of the work are not mentioned** Thanks for your feedback. We would like to mention tha...
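The EM-as-mirror-descent correspondence discussed above can be stated compactly. As a hedged sketch (notation assumed, not taken verbatim from the paper), the claim is that the EM update of the MoE parameters $\theta$ coincides with a mirror descent step on the log-likelihood $\mathcal{L}$:

```latex
\theta^{(t+1)} \;=\; \arg\min_{\theta}\;
\big\langle -\nabla \mathcal{L}(\theta^{(t)}),\, \theta \big\rangle
\;+\; \frac{1}{\eta}\, D_{\Phi}\!\big(\theta,\, \theta^{(t)}\big),
```

where $D_{\Phi}$ is a Bregman divergence (a KL-type regularizer in the setting analyzed) and $\eta$ a step size; standard mirror descent machinery then yields the non-asymptotic convergence rates the reviewers mention.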
FlashTP: Fused, Sparsity-Aware Tensor Product for Machine Learning Interatomic Potentials
Accept (spotlight poster)
Summary: The paper presents FlashTP, a highly optimized tensor-product library designed to address computational inefficiencies through kernel fusion, sparse computation, and path-aggregated execution. The proposed approach significantly accelerates tensor-product operations in equivariant neural networks. Claims And ...
Rebuttal 1: Rebuttal: Thank you for your valuable feedback. # R1. Why Diffdock/MACE was not selected for end-to-end evaluation [Q1] - DiffDock is a diffusion-based molecular docking model, not a MLIP model. Its CGTP configuration is different from the configurations used in MLIPs, so it falls outside the scope of Flas...
Summary: This paper presents FlashTP, an optimized tensor-product library designed to improve the computational efficiency of equivariant machine-learning models that employ spherical tensors. The authors identify three key inefficiencies in existing tensor-product layers: excessive memory traffic from intermediate dat...
Rebuttal 1: Rebuttal: Thank you for your valuable feedback. # R1. Discussion regarding Cartesian-based models - Please refer to the last paragraph of our response R3 to Reviewer hLSR. # R2. Choice of SevenNet over MACE for end-to-end evaluation - We chose SevenNet over MACE for end-to-end evaluation because 1) we beli...
Summary: In this paper, the authors develop FlashTP, an optimized tensor-product library that uses kernel fusion, sparse computation, and path-aggregated execution. FlashTP achieves significant performance improvement in terms of increasing throughput and decreasing memory usage compared to common libraries, e3nn and c...
Rebuttal 1: Rebuttal: Thank you for your valuable feedback. # R1. Evaluation on multi-GPU training [W1, Q1] - Multi-GPU training with FlashTP can be performed using PyTorch's Distributed Data Parallel (DDP). The table below shows the one-epoch training time (in seconds) for SevenNet-l3i5 on the MPF dataset, using varyi...
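The operation FlashTP optimizes is, at heart, a bilinear contraction with a sparse coefficient tensor (the Clebsch-Gordan coefficients). A toy numpy sketch of why sparsity helps, using a generic sparse coefficient dict rather than real CG coefficients (which come from libraries such as e3nn):

```python
import numpy as np

def sparse_tensor_product(a, b, coeffs, out_dim):
    """out[k] = sum over nonzero (i, j, k) of C[i, j, k] * a[i] * b[j],
    iterating only over the stored nonzero coefficients."""
    out = np.zeros(out_dim)
    for (i, j, k), c in coeffs.items():
        out[k] += c * a[i] * b[j]
    return out

# Toy inputs; real CG tensors are highly sparse, so skipping zeros saves
# both compute and the memory traffic of materializing a dense C.
rng = np.random.default_rng(0)
a, b = rng.standard_normal(4), rng.standard_normal(4)
coeffs = {(0, 0, 0): 1.0, (1, 2, 1): 0.5, (3, 3, 2): -2.0}

# Dense reference for comparison.
dense = np.zeros((4, 4, 3))
for (i, j, k), c in coeffs.items():
    dense[i, j, k] = c
```

Kernel fusion then amounts to performing this contraction (and the path aggregation across many such blocks) without writing the intermediate outer products to memory.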
Bayesian Optimization from Human Feedback: Near-Optimal Regret Bounds
Accept (poster)
Summary: This paper studies kernel-based decision-making problems under preferential feedback. The authors propose a phased-elimination-style algorithm (referred to as MR-LPF in the paper) for this problem, which leads to $O(\sqrt{T \gamma_T})$ cumulative regret, while the existing algorithm suffers from...
Rebuttal 1: Rebuttal: Thank you for the positive feedback and careful review of the technical material, which is truly invaluable for us. Below, we address your comments and questions, which we hope will enhance your evaluation of the paper. > Interpretation of the lower bound of Scarlett et al., 2017 provided for sta...
Summary: This paper studies Bayesian optimization (BO) with preference feedback, in which every time a pair of inputs are selected and only a binary preference feedback is observed. The paper incorporates preference feedback into a multi-round structure inspired by previous works and prove that the resulting algorithm ...
Rebuttal 1: Rebuttal: We thank the reviewer for the detailed, comprehensive, and constructive feedback. We are glad that you found the theoretical results strong and the provided insights useful. Below, we address the questions, which we hope will help clarify and enhance your evaluation of the paper. > Could the cons...
Summary: This paper proposes a new algorithm, Multi-Round Learning from Preference-based Feedback (MR-LPF), for Bayesian Optimization from Human Feedback (BOHF). MR-LPF achieves a significantly improved regret bound of $\tilde{O}(\sqrt{\Gamma(T)T})$, matching the optimal regret bounds of conventional Bayesian optimization a...
Rebuttal 1: Rebuttal: We thank the reviewer for the comprehensive review and positive feedback on our work. We are glad that you found our theoretical contributions substantial and the algorithm well-motivated and clearly presented. Below, we respond to your questions and comments, which we hope will further clarify an...
Summary: This paper proposes a Bayesian optimisation method with only human preference-based feedback instead of classical scalar values. The order-optimal sample complexities of conventional BO are recovered. That means the number of preferential feedback samples is of the same order as the number of scalar feedback. ...
Rebuttal 1: Rebuttal: We thank the reviewer for the positive feedback and constructive comments. Below, we provide detailed responses and will incorporate these suggestions to further improve the presentation of the paper. > In Section 3, the main algorithm is introduced but how to train the utility function kernel r...
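In the BOHF setting above, each query is a pair of points and the learner observes only a binary preference. A common modeling assumption (used here as an illustrative sketch, not necessarily the paper's exact likelihood) is a Bradley-Terry/logistic link on the latent utility gap:

```python
import math
import random

def preference_feedback(f, x1, x2, rng):
    """Return 1 if x1 is preferred to x2, drawn from a Bernoulli whose
    mean is the logistic of the utility gap f(x1) - f(x2)."""
    p = 1.0 / (1.0 + math.exp(-(f(x1) - f(x2))))
    return 1 if rng.random() < p else 0

# Toy utility: the learner never observes f directly, only comparisons.
f = lambda x: -(x - 0.3) ** 2
rng = random.Random(0)
wins = sum(preference_feedback(f, 0.3, 0.9, rng) for _ in range(2000))
```

The maximizer x = 0.3 wins a comparison against x = 0.9 with probability sigma(0.36), roughly 0.59, which illustrates why each binary sample carries much less information than a scalar observation, and why matching the scalar-feedback regret order is nontrivial.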
Rethinking Addressing in Language Models via Contextualized Equivariant Positional Encoding
Accept (poster)
Summary: This paper proposes a new positional encoding method for LLMs, which could enhance the position-addressing ability of transformers. Permutation and orthogonal equivariance are also applied to enforce the positional encoding. This method demonstrates superior performance on various tasks, especially long-contex...
Rebuttal 1: Rebuttal: We greatly thank Reviewer udZG for appreciating our contributions, providing valuable suggestions on improving the work, and supporting the acceptance of this work. We address the questions as follows. > W1: Existing positional encoding methods have introduced improvements to RoPE to better adapt...
Summary: This paper introduces TAPE ,a novel approach to enhancing position-based addressing in Transformers by dynamically adapting positional encodings across layers based on sequence context. TAPE ensures stability and robustness by enforcing permutation and orthogonal equivariance. Experimental results demonstrate ...
Rebuttal 1: Rebuttal: We greatly thank Reviewer poAq for appreciating our contributions, providing valuable suggestions on improving the work, and supporting the acceptance of this work. We address the questions as follows. >W1: The authors provide the running time of attention layers as experimental results in Table ...
Summary: This paper introduces a new approach to processing language sequences using transformer blocks, where token features and positional embeddings are combined and contextualized. The authors extend traditional positional encoding by dividing it into multiple blocks, allowing for more flexible associations between...
Rebuttal 1: Rebuttal: We sincerely appreciate Reviewer G28F's positive assessment of our contributions and strong endorsement for acceptance. The reviewer provided one suggestion, to which we respond below: > I do not see any major weaknesses. One suggestion is about the study of hyper-parameters. I think the authors ...
Summary: This paper proposes a new method for learnable positional encodings, where they are allowed to depend on context/content. The positional encodings, termed TAPE (“conTextualized equivariAnt Position Encoding”), can be added to pre-trained transformers, with only the TAPE-relevant parameters fine-tuned. TAPE is ...
Rebuttal 1: Rebuttal: We greatly thank Reviewer qpJK for appreciating our contributions. We address the concerns as follows. > W1: The motivation of the technique is a bit confusing. The authors claim that relative positional encodings are crucial for “stability and generalization to varying sequence lengths” (L223-22...
DocVXQA: Context-Aware Visual Explanations for Document Question Answering
Accept (poster)
Summary: The paper introduces DocVXQA, a method that generates visual explanations (in the form of a mask) that highlight parts of documents that are relevant for OCR-free document question answering. DocVXQA builds on the Pix2Struct model and learns a mask that, when combined to the input image, must lead the document...
Rebuttal 1: Rebuttal: The authors sincerely appreciate Reviewer Y25T’s constructive feedback. We are pleased that the reviewer recognized the importance of explainability in Document VQA and the strong motivation behind our loss components. We also appreciate the positive remarks on the clarity and coherence of our wri...
Summary: - This paper presents DocVXQA, a novel self-explainable framework for document question answering that learns to provide context-aware visual explanations. It builds on the Pix2Struct model and incorporates a learnable mask to enhance transparency. The approach is based on the information bottleneck principle ...
Summary: This paper proposes DocVXQA, a novel self-explainable framework for Document Visual Question Answering (DocVQA), designed to not only answer questions from document images but also provide context-aware visual explanations via learned relevance heatmaps. The core contribution lies in integrating explainability...
Rebuttal 1: Rebuttal: The authors sincerely appreciate Reviewer ovki’s insightful comments. We are pleased that the reviewer recognized the paper’s key strengths, as they mentioned its originality in integrating interpretability into DocVQA, its effective balance between interpretability and prediction performance, and t...
Summary: First, it introduces DocVXQA, a novel self-explainable framework that not only answers questions about documents but also provides visual explanations highlighting relevant regions that justify the answers. Second, it quantitatively formulates explainability principles (sufficiency and minimality) as explicit...
Rebuttal 1: Rebuttal: The authors thank Reviewer 9NT1 for their feedback. We appreciate the recognition of our approach’s novelty in addressing the challenge of transparency in DocVQA systems, and the acknowledgment that our approach provides a strong theoretical foundation through the quantitative formulation of expla...
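The sufficiency/minimality trade-off described in these reviews can be sketched as a composite objective: a task loss on the masked input plus a sparsity penalty on the mask. The names, toy task loss, and weight below are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def masked_objective(task_loss_fn, image, mask, lam=0.1):
    """Sufficiency: the masked image should still support a correct answer
    (low task loss). Minimality: the mask should highlight as little of the
    document as possible (mean penalty on mask values in [0, 1])."""
    masked = image * mask
    return task_loss_fn(masked) + lam * np.mean(mask)

# Toy example: the "answer evidence" lives in the top-left patch.
image = np.zeros((4, 4)); image[0, 0] = 1.0
task_loss_fn = lambda m: float((m[0, 0] - 1.0) ** 2)  # needs the evidence pixel

full_mask = np.ones((4, 4))
sparse_mask = np.zeros((4, 4)); sparse_mask[0, 0] = 1.0
```

Both masks are sufficient (zero task loss), but the sparse one pays a far smaller minimality penalty, so the objective prefers the minimal explanation, which is the information-bottleneck intuition behind the learned heatmaps.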
Nonparametric Teaching for Graph Property Learners
Accept (spotlight poster)
Summary: The paper introduces GraNT, a novel paradigm that applies nonparametric teaching principles to accelerate the training of graph property learners (specifically GCN). By establishing a theoretical link between traditional parameter-based gradient descent and functional gradient descent, the authors design a gre...
Rebuttal 1: Rebuttal: Thanks for many constructive comments. We are deeply appreciative of the reviewer’s efforts to help us improve our paper. We take all comments seriously and try our best to address every raised concern. We sincerely hope that our response can resolve your concerns. Any follow-up questions are welc...
Summary: This paper presents GraNT (Graph Nonparametric Teaching), a novel framework that improves the learning efficiency of graph property learners (GCNs) using nonparametric teaching. Claims And Evidence: Yes Methods And Evaluation Criteria: Yes Theoretical Claims: Yes Experimental Designs Or Analyses: Yes Supp...
Rebuttal 1: Rebuttal: Thanks for the encouraging comments and constructive suggestions. We sincerely thank the reviewer's efforts for helping us improve the paper. We hope that our response resolves your concerns. **[W3]** Very thoughtful question! We believe the idea and analysis behind GraNT have significant potenti...
Summary: In this paper, the authors innovatively introduce a training paradigm termed Graph Nonparametric Teaching (GraNT) designed for graph property learners. Their main idea is to reinterpret the training of GCNs through the lens of nonparametric teaching, which selects training examples (graphs) strategically to ac...
Rebuttal 1: Rebuttal: Thanks for the useful comments. We are deeply appreciative of the reviewer’s efforts to improve our paper. We take all comments seriously and try our best to address every raised concern. We sincerely hope that our response resolves your concerns. **[W1]** Thank you for your helpful feedback in i...
Bayesian Active Learning for Bivariate Causal Discovery
Accept (poster)
Summary: This paper investigates the problem of identifying the causal direction between two variables, i.e., identifying whether x -> y or y -> x, through Bayesian active intervention. The paper addresses this problem as a hypothesis testing problem, with a testing statistic called "probability of decisive and correct evide...
Rebuttal 1: Rebuttal: We appreciate your efforts and feedback regarding our paper. We address your concerns below. **Validity of PDC for sample selection.** Our framework is valid since the factorization of BF still holds if $x\_i$ depends on the historical data. Intuitively, this is because the dependency only happen...
Summary: This paper presents a Bayesian active learning framework for identifying causal directions between variables through interventional strategies. Different from traditional information-theoretic approaches, it introduces an objective based on Bayes factors, which directly quantify the strength of evidence suppor...
Rebuttal 1: Rebuttal: Thanks for your efforts and feedback in reviewing our paper. We address your questions below. **Why estimating priors using observation data?** We do not update priors using interventional data since if $k\_0 > 1$, it is difficult for $\mathrm{BF}\_{01}$ to be greater than $k\_0$ to identify $\ma...
Summary: This paper investigates the problem of determining the direction of relationships between variables by active intervention. The previous literature tries to maximize information-theoretic gain for deciding the intervention value, which may not effectively measure the reliability of direction determination. ...
Rebuttal 1: Rebuttal: We appreciate your efforts and positive feedback regarding our paper. We address your concerns below. **Definition of $\mathbf{x\_{-k}}$ in Proposition 4.2.** Thank you for your clarification. We will correct this in the updated version. **Conditional Probability in equations (1b), (1c), and (6...
Summary: This paper focuses on causal discovery through the Bayes factor. Instead of using information-theoretic gains to determine the direction of causal relationships, this paper adopts the Bayes factor, formulating the task as hypothesis testing. Furthermore, it uses sequential experiment design to selectively gather ...
Rebuttal 1: Rebuttal: We appreciate your efforts and valuable feedback in reviewing our paper. Here are our responses. **About benefits of using Bayes Factor.** We choose the Bayes factor because it naturally aligns well with our goal and hence is more efficient for optimization. Bayes factors are commonly used in exp...
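As a toy illustration of deciding between hypotheses with a Bayes factor (here reduced to a plain likelihood ratio between two fully specified Gaussian models; the paper's method additionally uses priors, interventions, and the PDC criterion on top of this):

```python
import math
import random

def log_lik_gauss(data, mean_fn, sigma):
    """Gaussian log-likelihood of (x, y) pairs under y | x ~ N(mean_fn(x), sigma^2)."""
    return sum(-0.5 * math.log(2 * math.pi * sigma**2)
               - (y - mean_fn(x)) ** 2 / (2 * sigma**2) for x, y in data)

rng = random.Random(0)
# Generate data where y truly depends on x.
xs = [rng.gauss(0, 1) for _ in range(200)]
data = [(x, 2 * x + rng.gauss(0, 1)) for x in xs]

# H1: y | x ~ N(2x, 1)   vs   H0: y independent of x, y ~ N(0, 1)
log_bf_10 = log_lik_gauss(data, lambda x: 2 * x, 1.0) \
          - log_lik_gauss(data, lambda x: 0.0, 1.0)
```

A large positive log Bayes factor is "decisive" evidence for H1; the active-learning loop above chooses interventions so this quantity crosses the decision threshold $k_0$ with as few samples as possible.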
Ensemble Distribution Distillation via Flow Matching
Accept (poster)
Summary: The paper presents an ensemble distribution distillation method leveraging flow matching to efficiently transfer knowledge from an ensemble teacher to a smaller student model. Modeling ensemble distribution distillation via flow matching enables student models to better capture the dive...
Rebuttal 1: Rebuttal: Thank you for your thoughtful review and for recognizing our work as a new and promising direction. We are pleased that you found our extensive experimental results valuable and especially appreciate your recognition of our fidelity and diversity analyses, which are central to our contribution. We...
Summary: The paper presents an ensemble distillation method based on flow matching named EDFM. The core idea is to learn a mapping between Gaussian noise and the logits of a (Bayesian) teacher model conditioned on the input data. The authors first analyze the importance of diversity in the predictions of the teacher wh...
Rebuttal 1: Rebuttal: We appreciate the positive feedback highlighting our paper’s clarity and coherence. We hope our responses address any remaining concerns, and please reach out if you have any further questions. > the paper is a combination of previous ideas in the distillation literature We respectfully differ i...
Summary: This paper proposes a novel ensemble distribution distillation method (EDFM) that utilizes flow matching to efficiently transfer the diversity of ensembled teacher models to a smaller student model. Key challenges in ensemble distribution distillation are addressed, including the high computational cost of lar...
Rebuttal 1: Rebuttal: Thank you for the thoughtful review and clear understanding of our work, particularly in terms of diversity, efficiency, and scalability. We are glad that you appreciated our extensive experimental results and recognized our approach as both a novel framework and a conceptual innovation. We hope o...
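A minimal numpy sketch of the conditional flow matching objective underlying EDFM: regress a velocity field onto the velocity of a linear interpolation path from noise to (stand-in) teacher logits. The 1-D setup, the point-mass target, and its closed-form optimal velocity are illustrative assumptions, not the paper's model.

```python
import numpy as np

def cfm_loss(v_fn, x0, x1, t):
    """Conditional flow matching: regress v(x_t, t) onto the path velocity
    u_t = x1 - x0 along the linear path x_t = (1 - t) * x0 + t * x1."""
    xt = (1 - t) * x0 + t * x1
    return float(np.mean((v_fn(xt, t) - (x1 - x0)) ** 2))

rng = np.random.default_rng(0)
x0 = rng.standard_normal(1000)          # noise samples
x1 = np.full(1000, 2.0)                 # stand-in "teacher logit" target
t = rng.uniform(0.0, 0.9, size=1000)    # avoid t = 1 for the closed form

# With a point-mass target at 2, the optimal velocity field has the
# closed form v*(x, t) = (2 - x) / (1 - t), which attains zero loss.
v_opt = lambda x, t: (2.0 - x) / (1.0 - t)
v_zero = lambda x, t: np.zeros_like(x)
```

In EDFM the velocity network would additionally be conditioned on the input so that, at inference, sampling different noise draws yields a distribution over student logits mimicking the ensemble's diversity.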
Learning to (Learn at Test Time): RNNs with Expressive Hidden States
Accept (spotlight poster)
Summary: This paper casts the sequence modeling problem as a meta-learning problem at training time. The resulting model is a model which minimizes a loss, i.e. learns at test time. The authors show that Linear Attention and Attention are special instances in their Learning to Learn at Test Time Framework. Building on ...
Rebuttal 1: Rebuttal: We thank the reviewer for recognizing the impact of our framework. We also thank the reviewer for the concrete suggestions and questions, which we address below. ***Explicit update rule for the final instantiations*** Sorry we did not include these formulas in the main text due to space constrai...
Summary: The paper proposes the Test-Time-Training (TTT) layer, with the goal to overcome the limitations on the expressive power of modern RNNs. The idea consists in linking the hidden state of an RNN to the parameters of a layer, so that input-driven updates on the hidden state translate into updates of model paramet...
Rebuttal 1: Rebuttal: We thank the reviewer for the insightful questions, which we answer below. ***1 - Flexibility w.r.t. inner model*** We agree. We will add the discussion below to the final version if the paper is accepted (ICML does not allow submitting a revision): In principle, any differentiable model and lo...
Summary: This paper introduces Test-Time Training (TTT) layers, a 'clever' way to handle long sequences without the heavy cost of Transformers. The key idea is to treat the hidden state as a learning model that updates itself during inference, allowing it to capture complex patterns over long contexts. The authors prop...
Rebuttal 1: Rebuttal: We thank the reviewer for appreciating our paper as a sufficient proof-of-concept. We also thank the reviewer for the insightful questions, which we answer below. ***1 - Comparison with Mamba 2*** Mamba 2 130M with 2k context trained with Chinchilla scaling law on the Pile has perplexity 10.77, ...
Summary: This paper presents a novel approach to sequence modeling by enhancing the hidden states of RNNs through a general method called Test-Time Training (TTT). The key idea is to frame the online updating of hidden states as a self-supervised learning process, using the update loss $\ell: W_t = W_{t-1} - \eta \nabl...
Rebuttal 1: Rebuttal: We thank the reviewer for recognizing our framework’s potential to guide the development of RNNs with better memory. We also thank the reviewer for the concrete suggestions and questions, which we address below. ***Benchmark on downstream tasks beyond perplexity*** The focus of our paper is on l...
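A toy one-layer version of the TTT idea described above: the "hidden state" is the weight matrix of a small inner model, updated by one gradient step on a self-supervised loss per token. The linear inner model and the identity-reconstruction loss are simplifying assumptions for illustration.

```python
import numpy as np

def ttt_scan(tokens, dim, lr=0.1):
    """Hidden state W is itself a linear model; for each token x we take one
    gradient step on ell(W; x) = ||W x - x||^2 before emitting the output W x."""
    W = np.zeros((dim, dim))
    outputs, losses = [], []
    for x in tokens:
        losses.append(float(np.sum((W @ x - x) ** 2)))
        grad = 2.0 * np.outer(W @ x - x, x)   # d ell / d W
        W = W - lr * grad
        outputs.append(W @ x)
    return outputs, losses

# Repeated token: the inner model reconstructs it better over time,
# i.e., the hidden state "learns at test time".
x = np.array([1.0, -1.0])
outputs, losses = ttt_scan([x] * 20, dim=2)
```

In the paper's framing, linear attention corresponds to a particular choice of inner model and update, and the expressivity of the layer grows with the expressivity of the inner learner.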
Automatic Differentiation of Optimization Algorithms with Time-Varying Updates
Accept (poster)
Summary: The paper studies the convergence of the gradient of algorithm iterates with respect to the hyperparameters in settings where optimization algorithms employ time-varying update rules, such as changing step sizes or momentum parameters. The authors provide convergence guarantees for the derivative of the iterat...
Rebuttal 1: Rebuttal: We thank the reviewer for their valuable comments and suggestions. We will fix minor typos and incorporate the suggested changes to the next version. > It would be more understandable if the authors added more details on the results of Beck (1994) that they claim to improve upon. In the appendix...
Summary: Automatic differentiation (AD) is a little-studied workhorse behind all major deep-learning frameworks (PyTorch, TensorFlow, JAX). The authors show linear convergence for forward-mode AD for algorithms where the iteration's parameters change over time. This is a very neat theoretical result. Claims And Evide...
Rebuttal 1: Rebuttal: We thank the reviewer for their valuable comments and suggestions. We will fix minor typos and incorporate the suggested changes to the next version. > ... the authors could ... Riis (2020) and Mehmood & Ochs (2022) better. The key difference presented in Section 1.2, Paragraph 4 (last on the pa...
Summary: This paper studies automatic differentiation which is widely used and fundamental in bilevel optimization. The focus of this work is on the case where the algorithm may have changing parameters at each iteration, such as step-sizes. Under this setting and some assumptions, they analyzed the convergence of the ...
Rebuttal 1: Rebuttal: We thank the reviewer for their valuable comments and suggestions. We will incorporate the suggested changes to the next version. > I did not check the proof, while I have some concerns regarding the assumptions 3.1 - 3.3, which seem quite strong. Can the authors list some functions that satisfy ...
Summary: When, for example, solving a bilevel optimization $\min_{u\in\mathcal{U}}l(\psi(u),u)$ by gradient descent, a derivative of a solution mapping $\psi(u)$, that is $D\psi(u)$, needs to be computed. While either the implicit differentiation (ID), using chain rule, or the automatic differentiation (AD) can be used...
Rebuttal 1: Rebuttal: We thank the reviewer for their valuable comments and suggestions. We will fix minor typos and incorporate the suggested changes to the next version. > Was there a convergence rate analysis for the AD of non-time-varying update? I was not able to figure that out in the paper. We highlight them i...
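A hand-rolled forward-mode sketch of the setting: differentiate the iterates of gradient descent with time-varying step sizes through the iteration, for the toy inner problem f(x, u) = 0.5 * (x - u)^2, whose solution map is psi(u) = u (so D psi(u) = 1). The step-size schedule is an illustrative assumption, not the paper's algorithm.

```python
def ad_through_gd(u, num_iters):
    """Iterate x_{k+1} = x_k - a_k * (x_k - u) with time-varying steps a_k,
    propagating dx/du alongside (forward-mode AD of the iteration)."""
    x, dx_du = 0.0, 0.0
    for k in range(num_iters):
        a_k = 0.5 / (1 + 0.01 * k)           # time-varying step size
        grad = x - u                          # df/dx for f(x, u) = 0.5*(x-u)^2
        x = x - a_k * grad
        dx_du = dx_du - a_k * (dx_du - 1.0)   # differentiate the update w.r.t. u
    return x, dx_du

x, dx_du = ad_through_gd(u=3.0, num_iters=100)
```

Because each update is a contraction, both the iterate and its propagated derivative converge linearly, to psi(u) = u and D psi(u) = 1 respectively, which is the flavor of guarantee the paper establishes for general time-varying updates.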
How Contaminated Is Your Benchmark? Measuring Dataset Leakage in Large Language Models with Kernel Divergence
Accept (poster)
Summary: The paper proposes the Kernel Divergence Score to estimate data contamination in large language models. The method makes use of a kernel with layer embeddings to estimate the similarities between samples before and after finetuning. If these embeddings remain similar, the data is likely contaminated. The autho...
Rebuttal 1: Rebuttal: We sincerely thank the reviewer for the thorough and constructive feedback. Below, we address the concerns in detail. --- _A1. Baseline comparison in Table 5 (Pile dataset)_ We provide the comparison below. KDS achieves the highest average correlation. | Spearman Corr. | Wikipedia | PhilPapers ...
Summary: This paper proposes the Kernel Divergence Score (KDS) as a measure of dataset-level benchmark contamination for LLMs. KDS is computed as the weighted average of pairwise embedding vector changes across finetuning the model under investigation on the benchmark test dataset. KDS is shown to be robust to many des...
Rebuttal 1: Rebuttal: We thank the reviewer for the thoughtful and constructive feedback. Below, we address your concerns in detail. --- _A1. Related work_ We thank the reviewer for pointing out the work by Dekoninck et al. [1], which appears to be highly relevant! It's our oversight at the time of submission and wil...
Summary: The paper introduces the Kernel Divergence Score (KDS), a method to quantify dataset contamination in LLMs by measuring changes in kernel similarity matrices of sample embeddings before and after fine-tuning. Claims And Evidence: The central claim—that KDS effectively quantifies contamination—is supported by ...
Rebuttal 1: Rebuttal: We thank the reviewer for the thoughtful and constructive feedback. Below, we address the key concerns: --- _A1. Clarification on problem setting_ We absolutely agree with your viewpoint that contamination in real LLMs occurs during pre-training! We'd like to clarify that **our central goal is i...
Summary: This paper investigates how to quantify dataset leakage in large language models. The proposed method is inspired by the fact that fine-tuning affects the embedding relationships of unseen samples more significantly than those of seen samples. The authors propose the Kernel Divergence Score, using kernel similarity ma...
Rebuttal 1: Rebuttal: Dear Reviewer J5f8, We sincerely appreciate your positive feedback and the time you've dedicated to reviewing our manuscript. Your insights are invaluable to us. Please let us know if you have any further questions. --- Rebuttal Comment 1.1: Comment: More details. This paper's idea is inspired...
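A toy numpy sketch of the kernel-divergence idea: build kernel similarity matrices from sample embeddings before and after fine-tuning and score how much the pairwise structure moved. The RBF kernel and the plain mean absolute change are illustrative stand-ins for the paper's exact kernel and weighting.

```python
import numpy as np

def rbf_kernel(E, sigma=1.0):
    """Pairwise RBF similarity matrix of row embeddings E."""
    sq = np.sum((E[:, None, :] - E[None, :, :]) ** 2, axis=-1)
    return np.exp(-sq / (2 * sigma**2))

def kernel_divergence(E_before, E_after):
    """Average change in pairwise kernel similarity across fine-tuning.
    Seen (contaminated) data should move less than unseen data."""
    return float(np.mean(np.abs(rbf_kernel(E_after) - rbf_kernel(E_before))))

rng = np.random.default_rng(0)
E = rng.standard_normal((32, 8))
E_seen = E + 0.01 * rng.standard_normal((32, 8))    # barely moves
E_unseen = E + 1.0 * rng.standard_normal((32, 8))   # restructured a lot
```

The smaller the divergence after fine-tuning on a benchmark, the more that benchmark behaves like already-seen (contaminated) data, which is the signal KDS aggregates.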
Mixture of Lookup Experts
Accept (oral)
Summary: This paper introduces a new LLM architecture, MoLoE. MoLoE converts its experts into lookup tables before inference; the tables require no computation and can be placed outside of VRAM, reducing VRAM usage to levels comparable to dense models. At the same time, MoLoE only needs to transfe...
Rebuttal 1: Rebuttal: We thank the reviewer for their thoughtful feedback. We address specific concerns and questions below. > Q1. As the model scales up, the usability of MoLoE may come into question. Firstly, the number of LUT parameters that need to be offloaded by our method is $dN|V|$, whereas MoE needs to offlo...
Summary: This paper presents MoLoE, a new MoE architecture designed to address the high VRAM usage of traditional MoE models. MoLoE uses the output of the Embedding layer as input for the experts during training. Before inference, it pre-computes the output of each expert for each token ID, reparameterizing the experts...
Rebuttal 1: Rebuttal: We thank the reviewer for their thoughtful feedback. We address specific concerns and questions below. > Q1. Pruning LUT entries corresponding to rare token IDs. We prune half entries of the LUTs of MoLoE-16E with 160M activated parameters based on frequency. When the input token ID is pruned, t...
Summary: This paper proposes MoLoE architecture to address the high memory overhead of MoE architectures. The key difference is that MoLoE converts experts into external LUTs before inference, eliminating expert computation and allowing experts to be stored outside of VRAM. Additionally, since only the lookup results n...
Rebuttal 1: Rebuttal: We thank the reviewer for their thoughtful feedback. We address specific concerns and questions below. > Q1. Will MoLoE require more training time than MoE? Since MoLoE activates all parameters during training, its training cost is approximately equal to that of a dense model with the same num...
Summary: The paper proposes Mixture of Lookup Experts (MoLoE), a new variation of Mixture-of-Experts (MoE) architectures that significantly reduces GPU inference latency for batched generation. Claims And Evidence: I think the claims and evidence are good. Methods And Evaluation Criteria: Something is not aligned wit...
Rebuttal 1: Rebuttal: We thank the reviewer for their thoughtful feedback. We address specific concerns and questions below. > Q1. The comparison between MoE and MoLoE is inconsistent. The different settings for MoE and MoLoE are chosen in order to prioritize aligning both **total parameter count in training** and *...
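The reparameterization described in these reviews can be sketched directly: because the experts' input during training is the embedding of the current token ID alone, every expert output can be precomputed into a lookup table over the vocabulary before inference. The tiny sizes and linear experts below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
vocab, d, n_experts = 10, 4, 3
embedding = rng.standard_normal((vocab, d))
experts = [rng.standard_normal((d, d)) for _ in range(n_experts)]  # toy linear experts

def expert_out(e, token_id):
    # Training-time path: expert applied to the embedding of the token ID.
    return embedding[token_id] @ experts[e]

# Before inference: precompute each expert's output for every token ID.
# The LUT needs no FLOPs at inference and can live outside VRAM.
lut = np.stack([[expert_out(e, t) for t in range(vocab)]
                for e in range(n_experts)])   # (n_experts, vocab, d)

def moloe_infer(token_id, routing_weights):
    # Inference: a weighted lookup replaces all expert computation;
    # only the looked-up vectors need to be transferred to the GPU.
    return np.einsum('e,ed->d', routing_weights, lut[:, token_id, :])
```

This is also why the design constrains expert inputs to depend only on the token ID: any dependence on contextualized hidden states would make the precomputed table impossible.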
Revisiting Chain-of-Thought in Code Generation: Do Language Models Need to Learn Reasoning before Coding?
Accept (poster)
Summary: The paper explores the impact of the data formatting when finetuning LLMs with synthetically generated data. Specifically, the data--composed of code and corresponding reasoning steps for a NL-to-code problem--is generated by a stronger "teacher" model (the authors state that it is a DeepSeek model in Appendix...
Rebuttal 1: Rebuttal: Thanks for the helpful comments! We would like to summarize your concerns and provide our responses below: 1. **which is the teacher model**: We select **DeepSeek-V2.5-1210** as our teacher model because of its strong capabilities and acceptable cost. It is the most suitable version available during ...
Summary: This paper studies the order of CoT and code in code generation tasks. The authors collect pairs of programming questions and response code from six code datasets, then prompt an LM to generate CoT, finally creating a dataset of (question, code, natural-language CoT) triplets. Their experiments show that ...
Rebuttal 1: Rebuttal: We address the concerns as below: Q1: How well do the paper's conclusions hold with large-scale training and a powerful model? R1:**Table A. Extent to a larger scale training dataset:** We construct a large dataset of 800K samples. We train Llama-3.1-8B and compare it with DeepSeek-R1-Distill-...
Summary: This paper primarily investigates how chain-of-thought reasoning affects code generation performance. The paper first constructs a dataset of 50k pairs for code generation. A series of experiments is then conducted to investigate how the presence and position of a chain-of-thought affect resulting code generat...
Rebuttal 1: Rebuttal: Thanks for the helpful comments! We appreciate the reviewer pointing out additional references for the related work, and these relevant references will be included in the revision.
Summary: The paper argues that in the context of fine-tuning a code generation LLM, appending the CoT after the code solution works better than the typical setting of prepending it. To show this, they generate 50k code generation problems with code solutions and CoTs. Experimentally, they demonstrate SFT...
Rebuttal 1: Rebuttal: Thanks for the helpful comments! We will summarize your concerns and provide our responses below. We explored many other training data distributions: 1. **different teacher model**: We take **GPT-4o-2024-08-06** as another teacher model to synthesize the CoT&Code dataset again. We perform experim...
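The variable these reviews revolve around is where the CoT appears relative to the code in each fine-tuning sample. A toy sketch of the two orderings (the section headers and function name here are illustrative assumptions, not the paper's exact data format):

```python
# Sketch of the two data orderings discussed above: CoT-before-code
# (the typical setting) vs. code-before-CoT (the paper's proposal).
# Templates are illustrative, not the paper's actual serialization.

def format_sample(question: str, cot: str, code: str, cot_first: bool) -> str:
    """Serialize one (question, CoT, code) triplet into an SFT target."""
    if cot_first:
        body = f"### Reasoning\n{cot}\n\n### Code\n{code}"
    else:
        body = f"### Code\n{code}\n\n### Reasoning\n{cot}"
    return f"### Question\n{question}\n\n{body}"

sample = format_sample("Reverse a list.", "Use slicing with step -1.",
                       "def rev(xs): return xs[::-1]", cot_first=False)
```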
A Theoretical Study of (Hyper) Self-Attention through the Lens of Interactions: Representation, Training, Generalization
Accept (poster)
Summary: The paper presents a theoretical study of self-attention mechanisms, specifically focusing on their representation, training, and generalization capabilities through the lens of mutual interaction among entities. The authors introduce a novel perspective called "interacting entities," demonstrating that a sing...
Rebuttal 1: Rebuttal: *RESPONSE 1* (About the references): In the introduction we referenced several works on "theoretically understanding attention", but we agree that attention modifications could be discussed more. In the final version of the paper we will mention these approaches, i.e., papers such as (DEBERTA) ...
Summary: This paper proposes a theoretical framework viewing self-attention tokens as interacting entities. The key theoretical findings include proving the representational power of single-layer linear self-attention for pairwise interactions, demonstrating convergence of training under mild assumptions, and establish...
Rebuttal 1: Rebuttal: LINK (please open in a private window): https://drive.google.com/file/d/1lJ3HYR6i02jpm3CAxg4cu4J6icQXubJ8/view?usp=share_link *RESPONSE 1* (On the experiments for "further validation in more complex, real-world scenarios", "more complex or empirical scenarios", "well-established benchmark dataset...
Summary: The paper studies the self-attention mechanism that is central to many of today's ML models (such as in NLP, computer vision and multi-agent systems). Instead of Transformers (which involve several layers of multi-head attention and other components), the paper examines a simplified model that consists of a un...
Rebuttal 1: Rebuttal: *RESPONSE 1* (the concerns on practicality of proposed models and what is their use while self-attention works in NLP): We agree that traditional self-attention architectures can also represent these complex interactions "in surprising ways". However, the main distinction is that the proposed mode...
Summary: This paper is separated into two broad sections. The first is a theoretical study of linear self-attention, which explores the expressivity and generalisability of a single linear self-attention layer. The second is more empirical, proposing two new architectures based on self-attention. The performance of the...
Rebuttal 1: Rebuttal: LINK (OPEN IN PRIVATE WINDOW): https://drive.google.com/file/d/1lJ3HYR6i02jpm3CAxg4cu4J6icQXubJ8/view?usp=share_link *RESPONSE 1* (the results of one-hot embedding): The results were explained as "negligible error $\Theta(10^{-7})$ on test and out of distribution data". However, we agree and for ...
Summary: The paper introduces a broad theoretical perspective to analyze the capabilities of self-attention mechanisms, particularly focusing on the interactions between entities. The paper extends the traditional pairwise self-attention to higher-order interactions and presents two novel mechanisms: HyperFeatureAttent...
Rebuttal 1: Rebuttal: LINK (please open in a private window): https://drive.google.com/file/d/1lJ3HYR6i02jpm3CAxg4cu4J6icQXubJ8/view?usp=share_link *RESPONSE 1* (On Computational Complexity of Models and Response to Question 1): In our appendix, we showed that by leveraging linear self-attention approximations, the co...
EnsLoss: Stochastic Calibrated Loss Ensembles for Preventing Overfitting in Classification
Accept (poster)
Summary: This paper proposes a novel ensemble framework, EnsLoss, for mitigating overfitting in binary classification tasks. EnsLoss is motivated by the calibration property studied in the literature and combines it with the idea of ensembling, which keeps the 'equivalent loss' calibrated for ensemble training pr...
Rebuttal 1: Rebuttal: > (No theoretical evidence EnsLoss reduces overfitting - missing analysis of generalization error bounds or excess risk) **Reply.** Thank you for this insightful question. We do not explicitly provide the generalization bound, yet following your suggestion, we can show the advantages of *ensLoss*...
Summary: This paper considers the problem of binary classification and proposes a method called EnsLoss that is combined with stochastic gradient descent (SGD) to form the optimization objective used to train a classifier. The key idea of EnsLoss is based on the convenient classification-calibrated condition (Bartlett+...
Rebuttal 1: Rebuttal: > (No clear explanation of why EnsLoss outperforms fixed losses) > > (Intuitive explanation) **Reply.** We appreciate the opportunity to clarify why ensLoss outperforms fixed losses. 1. An intuitive explanation: - ensLoss is a method conceptually similar to Dropout, offering benefits through ense...
Summary: The paper introduces EnsLoss, a novel ensemble learning method that applies the idea of ensembling to loss functions during model training within the empirical risk minimization (ERM) framework. Specifically, instead of explicitly using one fixed loss function, the authors propose to randomly sample loss functions on...
Rebuttal 1: Rebuttal: > (Empirical verification weak - only tested on CIFAR2, not CIFAR10.) **Reply.** To clarify, our current work focuses specifically on *binary classification*. Since CIFAR10 is a multi-class dataset with 10 categories, we derived binary classification problems from it by creating all possible pair...
Summary: This paper introduces EnsLoss, a stochastic ensemble learning method specifically designed to mitigate overfitting in classification tasks. The main idea is to ensemble different surrogate loss functions during SGD. The authors provide theoretical analysis and empirical results across various datasets. The...
Rebuttal 1: Rebuttal: > (No ablation studies on invBC transformation) **Reply.** Thank you for the comment. Table 5 partially shows our invBC ablation studies: (1) Fixed case ($\lambda=0$) used in main experiments, where invBC is somewhat "deactivated" with gradients sampling from a fixed log-normal distribution; (2) ...
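The core idea described in these summaries, drawing a different classification-calibrated surrogate loss at each SGD step, can be illustrated with a toy sketch. The candidate set below is an illustrative assumption: the paper samples losses more generally (e.g., via gradients drawn from a log-normal distribution, per the rebuttal above), not from a fixed menu.

```python
import math
import random

# Toy sketch of loss ensembling over SGD steps: each batch uses a freshly
# sampled classification-calibrated, convex margin-based surrogate loss.
# The three candidates below are illustrative stand-ins.
SURROGATES = {
    "logistic": lambda m: math.log1p(math.exp(-m)),
    "hinge": lambda m: max(0.0, 1.0 - m),
    "exponential": lambda m: math.exp(-m),
}

def ensloss_step(margins, rng):
    """Average loss of one batch under a randomly drawn surrogate."""
    name = rng.choice(sorted(SURROGATES))
    loss = SURROGATES[name]
    return name, sum(loss(m) for m in margins) / len(margins)

rng = random.Random(0)
name, value = ensloss_step([0.5, -1.0, 2.0], rng)
```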
CRANE: Reasoning with constrained LLM generation
Accept (poster)
Summary: The authors prove that constrained LLM generation diminishes the capabilities of LLMs. The (not precise) essence of this result is that 1. Under constrained decoding, logspace-uniform threshold circuits (in $TC^0$) can obtain the same outputs (solve the same problems) as LLMs. 2. Under unconstrained decodin...
Rebuttal 1: Rebuttal: Dear reviewer 6Prs, Thanks for your constructive feedback. > Q1. In CRANE, these two are freely interleaved with multiple constrained decoding parts. Do the results in Proposition 3.3 generalize naturally to this case? Is it always true that the final answer is at the end? **R1:** Proposition 3...
Summary: The paper introduces CRANE (Constrained Reasoning Augmented Generation), a decoding algorithm for grammar-constrained large language model (LLM) generation that aims to balance syntactic correctness with reasoning capabilities. The work first provides a theoretical explanation of why constrained generation di...
Rebuttal 1: Rebuttal: Dear reviewer ssfH, Thanks for the constructive feedback. We have included additional experiments with different delimiters and larger and newer reasoning models. All experiments below use the same setup as Section 5. In all cases, CRANE consistently outperformed the baselines. We will add these ...
Summary: The paper examines the expressivity of large language models (LLMs) under grammar constraints. It first demonstrates that there are problems LLMs can solve without constraints but fail to solve when under grammar constraints. The paper introduces the CRANE algorithm, which first generates an unconstrained reas...
Rebuttal 1: Rebuttal: Dear reviewer jTKi, Thanks for your constructive feedback. > Q1. The paper provides a theoretical explanation of why COT is essential for LLMs across various tasks. However, COT has recently become a widely adopted technique. The proposed method integrates CoT with output constraints and thus h...
Collapse-Proof Non-Contrastive Self-Supervised Learning
Accept (poster)
Summary: This work studies non-contrastive self-supervised representation learning. The authors identify known collapse modes in non-contrastive SSL and propose a collapse-proof approach (CPLearn). They show that CPLearn jointly decorrelates and clusters embeddings, avoiding common collapse modes without the need for h...
Rebuttal 1: Rebuttal: Thank you for the appreciation of our work and the time dedicated to review our paper. Please find below the answers to your questions: **Comparison with VicReg** We agree that VicReg is another non-contrastive self-supervised method. Given the limited amount of time available for the rebuttal, w...
Summary: This paper theoretically establishes the conditions for avoiding four kinds of collapse in non-contrastive self-supervised learning based on the CPLearn projector design. Specifically, the authors prove that minimizing invariance to data augmentations while matching priors suffices to avoid representation and c...
Rebuttal 1: Rebuttal: Thank you for appreciating the theoretical nature of our paper and providing constructive suggestions. **Baselines for Table 1 and Table 2** Thank you for the suggestion. We did an extra effort in the limited time available to provide additional baselines and make Table 2 more complete. Please fi...
Summary: This paper introduces CPLearn, a novel non-contrastive self-supervised approach that avoids heuristics such as stop-gradient or momentum encoders for preventing feature collapse. CPLearn does this by utilizing a projector module and a special loss function which minimizes the invariance between augmented views while enfo...
Rebuttal 1: Rebuttal: Thank you for appreciating the theoretical nature of our paper and for the time dedicated to review it. Please find below the answers to the major concerns. **Experiments on ResNet-18** Thank you for the suggestion. Please find below the table with the results on CIFAR10 using ResNet-18. We used ...
Summary: The paper introduces CPLearn, a non-contrastive self-supervised learning method designed to avoid common failure modes—namely, representation, dimensional, cluster, and intracluster collapses. The authors propose a simple projector design and loss function, leveraging ideas from hyperdimensional computing, tha...
Rebuttal 1: Rebuttal: Thank you for appreciating our work and the constructive review. **Validation of quasi-orthogonality** $W^TW=fI$ holds in probabilistic terms, formally governed by Eq. 5 in the paper, namely in expectation we have $E_W[cos(w_i,w_j)]=\delta(i-j)$, with $\delta$ being a Kronecker delta function (eq...
Relating Misfit to Gain in Weak-to-Strong Generalization Beyond the Squared Loss
Accept (poster)
Summary: This paper extends theoretical understanding of weak-to-strong generalization—where a strong model trained on weakly labeled data can surpass the weak model's performance—beyond regression with squared loss to general loss functions defined by Bregman divergences, including classification tasks with cross-entr...
Rebuttal 1: Rebuttal: Thank you so much for reading our paper, and for your comments. We are glad that you like our work! We address your concerns ahead: >...submission is not self-contained While the high-level geometrical framework of viewing weak-to-strong generalization (WTSG) as “projections onto a convex space” ...
Summary: The paper characterizes the gain in weak-to-strong generalization by relating it to misfit, extending the results of Charikar et al. (2024) to general Bregman divergence. This work also weakens the condition on the strong model class, which was considered convex in Charikar et al. (2024), by allowing it to be ...
Rebuttal 1: Rebuttal: Thank you so much for reading our paper, and for your comments. We address your concerns ahead: >...recent references, concurrent works As the reviewer notes, **the concurrent works listed were released after the ICML submission (and some fairly recently).** Nevertheless, we will be sure to cite ...
Summary: This paper generalizes the conclusion that "performance gain correlates with misfit in weak-to-strong generalization" from prior work on squared loss to Bregman divergence loss. It provides empirical evidence through experiments on synthetic tasks, language tasks, and vision tasks. Claims And Evidence: The cl...
Rebuttal 1: Rebuttal: Thank you so much for reading our paper, and for your comments. We are glad that you like our work! We address your concerns ahead: >1...additional insights beyond confirming the correlation between misfit and gain. >2..intuitive interpretation for reverse KL... We address both 1) and 2) in our...
Summary: This paper generalizes the recent theoretical analysis of weak-to-strong generalization beyond squared loss regression to arbitrary Bregman divergence-based loss functions in the fixed-representation finetuning setting when the strong class is convex. - For classification tasks, the authors propose to minimize...
Rebuttal 1: Rebuttal: Thank you so much for reading our paper, and for your comments. We address your concerns ahead: > ...intuition behind key contribution not clearly explained Thank you for bringing this up, we want to make sure that the ideas can be understood since they are counterintuitive! Our understanding of ...
On Exact Bit-level Reversible Transformers Without Changing Architecture
Accept (poster)
Summary: The paper proposes BDIA-transformer, a novel approach combining the bidirectional integration approximation (BDIA) method and activation quantization to achieve exact bit-level reversibility in standard transformer architectures. This combination significantly reduces memory usage during training via online ba...
Rebuttal 1: Rebuttal: The authors thank all four reviewers for their appreciation of the novelty, simplicity, and effectiveness of our BDIA training technique for transformers. Notably, reviewer 7zwY states that __the novelty and effectiveness of the approach make this a strong paper__. Reviewer tmy4 states __minimally...
Summary: The paper introduces the BDIA-transformer, a novel reversible transformer that maintains the standard transformer architecture during inference while leveraging a technique called bidirectional integration approximation (BDIA) for reversibility. The key idea is to treat each transformer block as an Euler integ...
Rebuttal 1: Rebuttal: The authors thank all four reviewers for their appreciation of the novelty, simplicity, and effectiveness of our BDIA training technique for transformers. Notably, reviewer 7zwY states that __the novelty and effectiveness of the approach make this a strong paper__. Reviewer tmy4 states __minimally...
Summary: This paper introduces BDIA-transformer, an exact bit-level reversible transformer that maintains the standard architecture for inference while reducing memory consumption during training. The approach adopts bidirectional integration approximation (BDIA), allowing the authors to consider each transformer block...
Rebuttal 1: Rebuttal: The authors thank all four reviewers for their appreciation of the novelty, simplicity, and effectiveness of our BDIA training technique for transformers. Notably, reviewer 7zwY states that __the novelty and effectiveness of the approach make this a strong paper__. Reviewer tmy4 states __minimally...
Summary: The paper proposes a novel type of reversible transformers with the aim to reduce the memory during training. To this end, this work treats each transformer block as the Euler integration approximation in a manner similar to Neural ODEs. There are two main contributions. Firstly, the authors borrow a technique...
Rebuttal 1: Rebuttal: The authors thank all four reviewers for their appreciation of the novelty, simplicity, and effectiveness of our BDIA training technique for transformers. Notably, reviewer 7zwY states that __the novelty and effectiveness of the approach make this a strong paper__. Reviewer tmy4 states __minimally...
Clipped SGD Algorithms for Performative Prediction: Tight Bounds for Stochastic Bias and Remedies
Accept (poster)
Summary: The paper studies the problem of Performative Prediction: that is, when the data distribution also depends on the model weights. This problem also includes the case of differentially private algorithms. The paper studies clipped SGD for finding a stable solution. Under various assumptions, the paper provides u...
Rebuttal 1: Rebuttal: > To demonstrate practical applications of the algorithm, the paper needs to include real datasets with clear justifications/considerations for how the decision-dependent distributions are modeled. Please refer to the response to Reviewer svSm. > The appearance of $\zeta_{t}$ and $\sigma_{DP}^2...
Summary: The paper claims error bounds for the estimate obtained using projected clipped SGD (PCSGD) and the DiceSGD algorithm in the problem of performative prediction, where artificial noise can be added to preserve data privacy. While PCSGD is known for its stability, the output of the algorithm exhibits bias from th...
Rebuttal 1: Rebuttal: > Experiments: There is no setting where the loss function is nonconvex, so the reported experiments do not validate the claim regarding nonconvex loss (Theorem 5). We conducted an additional binary classification experiment in **[https://ibb.co/Jjx0XMRg ]**, where we simulated PCSGD, DiceSGD, C...
Summary: This paper examines the convergence behavior of clipped stochastic gradient descent algorithms in the performative prediction setting, where the subsampling distribution depends on the previous iterate. The theoretical analysis addresses both strongly convex and non-convex objective functions. Claims And Evid...
Rebuttal 1: Rebuttal: > Difference in analysis vs existing works. Indeed, some of the techniques we applied are standard and grounded in existing works, yet we emphasize that this is the first rigorous study of clipping bias with performative prediction. The lower bounding result in Theorem 4 is new as it emphasizes o...
Summary: In this work the authors study the convergence of clipped stochastic gradient descent (SGD) algorithms with decision-dependent data distribution. They explain the performative prediction problem, which is a more general and challenging problem than standard optimization, and they define the performative stable...
Rebuttal 1: Rebuttal: We thank you for your comments and careful review. Our point-by-point replies are listed below. > The definition of $\sigma_{DP}^2$ in Theorem 3? We apologize for the careless typo. Indeed, Theorem 3 should be presented without $\sigma_{DP}^2$. This term is actually introduced later in (14) and ...
The Hidden Dimensions of LLM Alignment: A Multi-Dimensional Analysis of Orthogonal Safety Directions
Accept (poster)
Summary: **Post-rebuttal edit: the authors have provided detailed and convincing responses to my concerns during the discussion phase, meaning I'm happy to increase my score from 3 to 4. I believe this paper deserves to be at ICML.** --- This work performs a multi-dimensional analysis of the shift in representations ...
Rebuttal 1: Rebuttal: We thank the reviewer for the insightful comments. We are encouraged that the reviewer acknowledges our novelty and well-designed experiments. We will carefully address your concerns below: > **Q1:** How accurate are the approximations in these experiments? You raised a good point. We did not in...
Summary: The paper focuses on the mechanisms of safety alignment in large language models, exploring how the internal representations of the model's refusal of harmful inputs manifest as multiple orthogonal directions in activation space. It introduces the concept of the "safety residual space" and identifies dominant ...
Rebuttal 1: Rebuttal: Thank you for your valuable suggestions. We will carefully address your questions in the following. > **Q1:** The current analysis primarily focuses on a specific model (Llama 3 8B) and a specific dataset, and it remains unclear whether the findings can be generalized to other models or applicati...
Summary: This paper investigates which feature directions are used by safety-tuned language models to determine whether or not to refuse a request. This is done by optimizing an affine mapping to approximate the activations of a safety-tuned model given the corresponding activations in the pretrained model before safet...
Rebuttal 1: Rebuttal: We thank the reviewer for the thorough and insightful feedback. We will clarify all your questions in the revised paper. We will start by answering your questions and then address the remaining issues. > **Q1:** Does the term "safety residual space" refer to the affine map, linear $W$ or $W-I$? ...
Summary: This paper investigates the multi-dimensional nature of safety-aligned behaviors in LLMs, challenging the traditional single-direction representation of safety features. The authors introduce the concept of a safety residual space, analyzing activation shifts during safety fine-tuning of Llama 3 8B. Through si...
Rebuttal 1: Rebuttal: Thank you for the constructive feedback. We appreciate that the reviewer found our core claims well-supported and our analysis novel. We will address each of your concerns below. >**Q1:** Limited Evaluation: The empirical evaluation is not extensive enough. Thank you for these valuable suggestio...
GrokFormer: Graph Fourier Kolmogorov-Arnold Transformers
Accept (poster)
Summary: This paper proposes to introduce novel Kolmogorov-Arnold network (KAN)-based spectral filters into the graph Transformer framework to enhance the flexibility to perform low-/high-/band-pass filtering. Compared to previous polynomial spectral graph neural networks (GNNs) as well as the proposed graph KAN spectral...
Rebuttal 1: Rebuttal: We sincerely appreciate your constructive and positive comments on the filter design. Please see our response to your comments one by one below. >**Question #1** In Specformer, MLP applied to each eigenvalue $\lambda$, which can learn to output for $\lambda^k$ theoretically. We appreciate your i...
Summary: This paper proposes GrokFormer, a Transformer-based graph spectral model that introduces an expressive graph filter into the Transformer architecture, effectively capturing a wide range of frequency signals in an order- and spectrum-adaptive manner. Experiments on synthetic and real-world datasets show the effe...
Rebuttal 1: Rebuttal: We sincerely appreciate your constructive and positive comments on methodology and experiment designs. Please see our response to your comments one by one below. >**Weakness #1** The time consumption of preprocessing should be included in Table 8, and whether the proposed method achieves a trade-...
Summary: The paper introduces GrokFormer, a novel Graph Transformer (GT) model that addresses limitations in existing graph learning methods, particularly in capturing diverse frequency signals in graph data. GrokFormer incorporates a Graph Fourier Kolmogorov-Arnold Network (KAN) to design spectral filters that are bot...
Rebuttal 1: Rebuttal: We sincerely appreciate your constructive and positive comments on methodology design and theoretical analysis. Please see our response to your comments one by one below. > **Weakness #1** Limited discussion on potential limitations or failure cases. Thank you very much for the suggestion and qu...
Summary: This paper proposes GrokFormer, a novel graph transformer (GT), with superior capability in modelling complex spectral filters. The filter design is both order and spectrum adaptive and is implemented using a specific instantiation of Kolmogorov-Arnold Network in the spectral domain. Results on several node an...
Rebuttal 1: Rebuttal: We sincerely appreciate your constructive and positive comments on the presentation, visualization, and experiment design. Please see our response to your comments one by one below. >**Weaknesses #1 and #2** Lack of a few challenging benchmark GNN datasets (datasets in [Ref1] and LRGB datasets [...
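The spectrum-adaptive filtering these summaries describe operates in the graph Fourier domain: decompose the normalized Laplacian and apply a learnable scalar function to each eigenvalue. A minimal sketch, where the filter function is a simple stand-in rather than GrokFormer's learned KAN-style basis:

```python
import numpy as np

# Toy sketch of spectral graph filtering on a 3-node path graph:
# eigendecompose the normalized Laplacian and reshape the signal's spectrum
# with a scalar function h(lambda). GrokFormer learns h; here h is fixed.
A = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]], dtype=float)
deg = A.sum(1)
L = np.eye(3) - A / np.sqrt(np.outer(deg, deg))  # normalized Laplacian
lam, U = np.linalg.eigh(L)                       # graph Fourier basis

def spectral_filter(x, h):
    """Filter signal x through h(lambda) in the graph Fourier domain."""
    return U @ (h(lam) * (U.T @ x))

x = np.array([1.0, 0.0, -1.0])
low_pass = spectral_filter(x, lambda l: np.exp(-l))  # damps high frequencies
```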
Temporally Sparse Attack for Fooling Large Language Models in Time Series Forecasting
Reject
Summary: The authors propose a black-box attack for LLM time-series forecasters. They use sparse perturbations, i.e., only a subset of time steps is perturbed to fool the model's prediction. Additionally, since the true label is unknown, they use the model's own prediction as the target. Claims And Evidence: The authors claim t...
Rebuttal 1: Rebuttal: Thanks sincerely for your time and review. > Your **major concern** is **the relation to [1] Liu, F. et al. Adversarial vulnerabilities in large language models for time series forecasting in AISTATS 2025.** 1. Liu, F. et al began exploring the vulnerabilities of LLMs in time series forecasting...
Summary: This paper proposes Temporally Sparse Attack (TSA), an adversarial attack on LLM-based time series forecasting. Unlike existing methods that modify the entire input, TSA perturbs only a small fraction of time steps, significantly degrading forecasting accuracy. The attack is formulated as a Cardinality-Constra...
Rebuttal 1: Rebuttal: Thanks for taking the time to review our paper. Your **major concerns** are: 1. Evaluation on real-world applications, such as financial or medical forecasting. 2. Discussion on adaptive adversarial defenses that could mitigate TSA’s impact. 3. Analysis of the computational overhead of the S...
Summary: This paper proposes a Temporally Sparse Attack (TSA) for LLM-based time series forecasting. Previous studies achieve adversarial attacks by modifying the entire time series with perturbations. This paper proposes modifying only a sparse portion of the sequence. Specifically, it formulates the task as a cardinality...
Rebuttal 1: Rebuttal: Thanks sincerely for your reviews. > Your **major concern** is: **forecasting attacks vs. classification attacks** in time series. Please allow us to present our conclusion first: **Attacking forecasting and classification are fundamentally different, and attacks for one are not directly appli...
Summary: The paper presents Temporally Sparse Attack (TSA), a novel adversarial attack method for LLM-based time series forecasting models that requires manipulating only a small subset of input time steps. The authors formulate the problem as a CCOP and develop an SP-based algorithm to generate these sparse perturbatio...
Rebuttal 1: Rebuttal: Thank you sincerely for dedicating your time to reviewing this paper. To respond to your comments with the respect they deserve, we have carefully addressed each of your points individually. > *There are many methods that solve the optimization problem. Why choose Subspace Pursuit (SP)?* CCOP is...
TtBA: Two-third Bridge Approach for Decision-Based Adversarial Attack
Accept (poster)
Summary: The paper proposes a novel decision-based black box attack against image classifiers. The attack is called TtBA and it is based upon exploiting the geometry of the decision boundary. It introduces a notion of the $k_{bridge}$ metric and discusses how it helps in constructing an efficient adversarial example. T...
Rebuttal 1: Rebuttal: We sincerely appreciate the reviewer's meticulous evaluation and valuable comments, which have greatly helped improve our manuscript. 1. The **problem formulation is reasonable** for two key reasons. **First**, in hard-label black-box attacks (e.g., HSJA, TA, CGBA), where only the model’s output...
Summary: This paper introduces a decision-based black-box adversarial attack, termed Two-third Bridge Approach---TtBA, that focuses on optimizing perturbation directions for attack queries by leveraging normal vectors and the *bridge* direction, to reduce query complexity. This *bridge* direction is a weighted combo o...
Rebuttal 1: Rebuttal: Thank you for your comments. 1. SOTA studies, including HSJA, TA, CGBA, strongly support the hypothesis that **the decision boundary of DNNs remains smooth and locally concave even for many robustly trained models**. This is because the robust training process does not interfere with normal vect...
Summary: The manuscript introduces an innovative bridge direction to optimize the adversarial perturbation by linearly combining the current unit perturbation direction with its unit normal vector. Through experimental observation, $k = \frac{2}{3} k_\text{bridge}$ yields a near-optimal perturbation direction. Besides, the paper designs ...
Rebuttal 1: Rebuttal: Thank you for your valuable comments. 1. We **perform a binary search** of $k = k_\text{bridge}^{i} \in (0,1]$ to identify $d_k = d_\text{bridge}^{i}$ which **have identical decision boundary** as $\hat{d}^{i}$. - According to Figure 1, when $k$ is very small, direction $d_k$ approaches $\hat{...
Summary: The paper proposes the TtBA method for decision-based black-box adversarial attacks. It introduces a new bridge direction, a weighted combination of the current direction and its normal vector, controlled by a weight parameter $k$. Experiments on multiple datasets and models show that TtBA outperforms state-o...
Rebuttal 1: Rebuttal: 1. Thank you for raising concerns regarding **the strength of our contributions**. We introduce a fundamentally new and practically valuable metric, $k_\text{bridge}$, specifically designed to quantify decision boundary curvature, a critical but previously unexplored factor in adversarial attacks....
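The bridge direction described in this record is a renormalized weighted combination of the current unit perturbation direction and its unit normal vector; per the rebuttal above, small $k$ recovers the current direction. A minimal 2-D sketch (the exact weighting convention is an assumption here, chosen to match that limiting behavior):

```python
import math

def bridge_direction(d, n, k):
    """Unit vector in the direction (1-k)*d + k*n: the 'bridge' combination.
    d: current unit perturbation direction; n: unit normal vector; k in (0, 1].
    As k -> 0 the result approaches d; as k -> 1 it approaches n."""
    v = [(1.0 - k) * di + k * ni for di, ni in zip(d, n)]
    norm = math.sqrt(sum(x * x for x in v))
    return [x / norm for x in v]

# With k = 2/3 * k_bridge (k_bridge located by binary search in the paper),
# the direction interpolates between the current direction and the normal.
d, n = [1.0, 0.0], [0.0, 1.0]
b = bridge_direction(d, n, k=2.0 / 3.0)
```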
Dissecting Submission Limit in Desk-Rejections: A Mathematical Analysis of Fairness in AI Conference Policies
Accept (poster)
Summary: In this paper, the authors highlight that random desk rejection based on per-author submission limits might be unfair. They propose individual and group unfairness definitions to make the AI conference desk rejection policy more fair. The authors propose an LP optimization algorithm to reduce group unfairness ...
Rebuttal 1: Rebuttal: We sincerely appreciate the reviewers' acknowledgement of this paper’s writing quality and real-world impact. Below, we provide clarifications addressing the weaknesses and questions: ### **Weakness 1 & Question 1: Group Fairness Definition** Our definitions of group fairness are inspired by uti...
Summary: This paper studies the problem of fairly desk-rejecting papers from conferences, where some of the authors have exceeded per-author submission limits. The paper establishes that this can’t be done without desk-rejecting papers from authors who haven’t violated the limit (since their co-authors might have viola...
Rebuttal 1: Rebuttal: We would like to thank the reviewer for acknowledging the novelty of our work and the strong support for our claims. We are pleased to further clarify the motivation behind our paper and to discuss the equilibrium perspective. ### **Weakness 1 & Question 1: Equilibrium Effects** Thank you for th...
Summary: This paper discusses an interesting fairness issue that occurs in AI conference paper submission scenarios and reveals that the current desk-rejection policy (reject papers when submission limits are exceeded) can unfairly disadvantage early-career researchers, whose submissions may be rejected due to senior c...
Rebuttal 1: Rebuttal: We sincerely thank the reviewer for recognizing the novelty, mathematical rigor, and potential social impact of our work. We appreciate the constructive feedback and address the concerns as follows: ### **Weakness 1: The Severity of the Problem** We acknowledge that direct evaluation on real con...