| title | paper_decision | review_1 | rebuttals_1 | review_2 | rebuttals_2 | review_3 | rebuttals_3 | review_4 | rebuttals_4 | review_5 | rebuttals_5 | review_6 | rebuttals_6 | review_7 | rebuttals_7 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
Optimizing Noise Distributions for Differential Privacy | Accept (poster) | Summary: The paper studies non-canonical (i.e., not Laplace or Gaussian) noise distributions for answering $d$ queries under $(\varepsilon, \delta)$-DP. It casts the overall problem as follows: the user provides $\delta$, $d$, and the sensitivity and error constraint $\sigma$ for each query. Then the provided algorithm... | Rebuttal 1:
Rebuttal: We thank the reviewer for their constructive feedback. Below are our responses to their concerns.
**The abstract claims "significant[ly]":** Since "significant" is subjective, we will clearly specify the gains in the abstract if accepted. We addressed our framework's practicality in our response ... | Summary: This paper addresses the optimization of noise distributions under the RDP framework. Compared to classic approaches, such as Laplace or Gaussian mechanisms, the derived distribution achieves a lower overall cost.
Claims And Evidence: Yes.
Methods And Evaluation Criteria: Yes.
Theoretical Claims: Yes.
Expe... | Rebuttal 1:
Rebuttal: We appreciate the reviewer's thoughtful feedback. Below, we provide detailed responses to their concerns.
**From the numerical results presented in the figures, it appears that there is little difference for smaller values of ϵ (e.g., ϵ<2) compared to the Gaussian distribution. Meanwhile, adding ... | Summary: The authors of the paper introduce an optimization framework that optimizes noise distribution for $\alpha$-RDP, where the optimal distribution can be obtained by a finite-dimensional convex optimization problem. Their main contribution is the proposal of optimized distribution for a moderate composition regim... | Rebuttal 1:
Rebuttal: We would like to thank the reviewer for acknowledging the novelty of our work and its mathematical contributions. Our responses to the concerns are provided below.
**Q1) In the preliminaries (line 120-130), the definition of $\sim$ is duplicated ( for probability distribution and for the neighbor... | Summary: The paper proposed a novel framework for optimizing noise distributions for (epsilon, delta)-DP using the Renyi differential privacy formulation. Experiments are shown to showcase the benefits of the approach.
Overall: The paper is easy to follow and the main results are well laid out. The experimental result... | Rebuttal 1:
Rebuttal: We first would like to thank the reviewer for recognizing the novelty of our work and for their constructive comments. Below, we provide a detailed response to their concerns.
**Q1) The main issue is that the experiments do not convey the power of the proposed approach and except for very narrow ... | null | null | null | null | null | null |
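The paper above casts noise design as an optimization over Rényi-DP (RDP) guarantees. As background for that framework only, here is a minimal sketch of the standard Gaussian-mechanism RDP bound and the usual RDP-to-$(\varepsilon, \delta)$ conversion; the function names are my own, and the paper's optimized distributions are not reproduced here.

```python
import numpy as np

def gaussian_rdp(alpha, sensitivity, sigma):
    """Renyi divergence of order alpha for the Gaussian mechanism:
    R_alpha(N(0, s^2) || N(d, s^2)) = alpha * d^2 / (2 * s^2)."""
    return alpha * sensitivity**2 / (2.0 * sigma**2)

def rdp_to_dp(sensitivity, sigma, delta, alphas=np.arange(1.25, 256, 0.25)):
    """Standard conversion: eps = min over alpha of RDP(alpha) + log(1/delta)/(alpha - 1)."""
    eps = gaussian_rdp(alphas, sensitivity, sigma) + np.log(1.0 / delta) / (alphas - 1.0)
    return float(eps.min())

eps = rdp_to_dp(sensitivity=1.0, sigma=5.0, delta=1e-5)
```

Because RDP composes additively across queries, the same conversion applied to a summed RDP budget gives the moderate-composition behavior the reviews discuss.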
Beyond the Permutation Symmetry of Transformers: The Role of Rotation for Model Fusion | Accept (spotlight poster) | Summary: In this paper, the authors identify a neural network (NN) parameter symmetry beyond the well-studied permutation symmetry. In particular, they show that the weights of self-attention layers are governed by *rotation symmetry*, i.e. one can transform the query, key, value, output matrices by appropriate rotatio... | Rebuttal 1:
Rebuttal: Thanks for your constructive feedback. We will include additional results and discussions in the revision. We believe that our paper will be much stronger thanks to your efforts.
**Response to claims**: We thank the reviewer for the suggestion, and we will adjust the wording in the next version o... | Summary: The paper studies transformer parameter symmetries. Specifically, it explains how to weight space average two attention layers modulo not only permutation symmetries but also rotation symmetries. Experimental results show that considering this extra symmetry leads to better alignment between different trained ... | Rebuttal 1:
Rebuttal: Thank you very much for your thoughtful feedback. We are honored to have this valuable chance to address your raised concerns and questions. We believe that our paper will be much stronger thanks to your efforts.
**Response to comments and Q1**:
We thank the reviewer for this important question.
... | Summary: The paper introduces rotation symmetry in transformers, extending permutation symmetry from discrete to continuous spaces. It demonstrates theoretically and empirically that rotating query-key and value-output parameter matrices preserves functional equivalence. The main contribution is a theoretically optimal... | Rebuttal 1:
Rebuttal: Thank you very much for your thoughtful feedback. We are honored to have this valuable chance to address your raised concerns and questions. We believe that our paper will be much stronger thanks to your efforts.
**Response to theoretical claims**:
Thank you for this thoughtful point. We would l... | Summary: The paper extends the concept of permutation symmetry in MLPs to rotation symmetry for the self-attention layer in the transformers. The authors show that due to the inherent design of self-attention layers, each of the query, key, and value vectors can be rotated without changing the functional representation... | Rebuttal 1:
Rebuttal: Thank you very much for your thoughtful feedback. We are honored to have this valuable chance to address your raised concerns and questions. We believe that our paper will be much stronger thanks to your efforts.
**Response to claims and W2**:
We thank the reviewer for this insightful suggestion.... | null | null | null | null | null | null |
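The rotation symmetry described in the summaries above is easy to verify numerically: jointly rotating the query/key (and value/output) matrices by any orthogonal matrix leaves the attention computation unchanged. A minimal sketch (single head, no softmax scaling; all names are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
d, d_k = 8, 4
X  = rng.normal(size=(5, d))        # token embeddings
Wq = rng.normal(size=(d, d_k))
Wk = rng.normal(size=(d, d_k))

# Any orthogonal R (here obtained via a QR decomposition) can rotate Wq and
# Wk jointly without changing the attention logits, since R @ R.T = I.
R, _ = np.linalg.qr(rng.normal(size=(d_k, d_k)))

logits_original = (X @ Wq) @ (X @ Wk).T
logits_rotated  = (X @ Wq @ R) @ (X @ Wk @ R).T
assert np.allclose(logits_original, logits_rotated)

# The same cancellation applies to the value/output pair: (Wv R)(R.T Wo) = Wv Wo.
Wv = rng.normal(size=(d, d_k))
Wo = rng.normal(size=(d_k, d))
assert np.allclose(X @ Wv @ Wo, (X @ Wv @ R) @ (R.T @ Wo))
```

This continuous symmetry is exactly why permutation-only alignment (a discrete subgroup of it) can miss matches between independently trained attention layers.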
What Makes a Good Feedforward Computational Graph? | Accept (poster) | Summary: The authors are motivated by the recent surge of feedforward networks, and analyze the underlying computational graphs. Their core question is: What characterizes a “good” computational graph? To address this, they propose two metrics:
a) Mixing time: Assesses how quickly information from various nodes reaches... | Rebuttal 1:
Rebuttal: Dear Reviewer QmmN,
Thank you for your careful review! We hope to provide useful clarifications, and that you may reconsider the relevance of our work:
### **On feedforward networks**
Your comments focus on our work not easily representing multi-layer feedforward NNs.
For us, the term **“feedf... | Summary: The paper studies the impact of feedforward graph structures on information flow, introducing two metrics — mixing time and minimax fidelity— to assess speed and accuracy respectively.
The study reveals a trade-off between fast information propagation and high-fidelity signal propagation among various graph t... | Rebuttal 1:
Rebuttal: Dear Reviewer y3Tf,
We are delighted that you have found our foundations to be strong and our results to be interesting. We hope that our responses will strengthen your view of our contributions even further!
> However, Theorem 6.1's proof relies on strong assumptions
Thank you for raising this... | Summary: the paper proposed two metrics to measure the quality of computational graph, experiments demonstrate the correlation between those metrics and actual performance.
## Update after rebuttal
The authors' response addresses my question. The paper looks good to me; I will keep my rating.
Claims And Evidence: YES
... | Rebuttal 1:
Rebuttal: Dear Reviewer GmC2,
We are very thankful for your kind review and recognising the strengths of our work!
To address your questions:
> The paper discusses the asymptotic behaviour of various graphs. why certain structures perform better from an ML perspective.
This is an excellent question. We ... | null | null | null | null | null | null | null | null |
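Mixing time and minimax fidelity are the paper's own metrics and are not reproduced here. As a toy proxy for the "speed" axis only, the sketch below (helper name hypothetical) computes how many propagation steps each node of a topologically ordered feedforward DAG needs to reach the output node:

```python
def steps_to_sink(edges, n):
    """Shortest-path distance from each node to the final node (n - 1) in a
    DAG whose nodes are topologically ordered 0..n-1: a crude proxy for how
    quickly a node's information can reach the output."""
    succ = {u: [] for u in range(n)}
    for u, v in edges:
        succ[u].append(v)
    dist = [None] * n
    dist[n - 1] = 0
    for u in range(n - 2, -1, -1):
        ds = [dist[v] for v in succ[u] if dist[v] is not None]
        dist[u] = 1 + min(ds) if ds else None   # None: sink unreachable
    return dist

# A plain chain versus the same chain with a skip connection 0 -> 3:
chain = steps_to_sink([(0, 1), (1, 2), (2, 3)], 4)
skip  = steps_to_sink([(0, 1), (1, 2), (2, 3), (0, 3)], 4)
```

The skip connection shortens node 0's path from three steps to one, illustrating (in a very reduced form) why graph structure alone already changes how fast information mixes.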
From Thousands to Billions: 3D Visual Language Grounding via Render-Supervised Distillation from 2D VLMs | Accept (poster) | Summary: This paper proposes an approach for 3D vision-language understanding by leveraging rendered RGB images, grounding masks, and 2D feature loss for model training, rather than incorporating explicit 3D supervision. The model follows a pretrain-finetune paradigm, with evaluations conducted on open-vocab 3D instanc... | Rebuttal 1:
Rebuttal: We appreciate that the reviewer likes our results and data scaling performance. We answer the questions below and will improve the writing given the valuable suggestions.
---
> Model architecture and its operational flow…
**Encoder Backbone**: is a SparseConv UNet [1] (Sec. 3.4), following P... | Summary: The work addresses the problem of open-vocabulary 3D segmentation, i.e., predicting 3D masks for an RGB point cloud that adhere to a language-based query.
In order to do so the authors propose a feedforward architecture that predicts 3D Gaussians which carry information about their belonging to seg... | Rebuttal 1:
Rebuttal: We sincerely appreciate the reviewer’s recognition of LIFT-GS, including “the method is well presented and builds upon existing building blocks”, and believing “As such the proposed 3d supervised pre-training task is valuable to the wider community.” We answer the question below and will make t...
Rebuttal: We thank the reviewer for their thoughtful feedback and appreciate the recognition that using 2D supervision is “very useful for scaling up the training data,” “the claim… is supported by the experiments.”, “cross-scene render-supervision is innovative…”. We address the questions below and will ... | Summary: The paper presents LIFT-GS, a feedforward 3D vision–language grounding model that accepts a point cloud and a language query as inputs. It converts the point cloud into 3D Gaussians and uses differentiable rendering to supervise training with only 2D losses. The system is distilled from 2D foundation models to... | Rebuttal 1:
Rebuttal: First, we would like to thank the reviewer for taking the time and effort to engage with the paper, and we look forward to a productive discussion. Before addressing the individual questions, we would like to clarify the core focus of our work.
While our method involves 3D scene reconstruction a... | null | null | null | null | null | null |
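The render-supervised idea above — predict Gaussians, render them, and train with only 2D losses — can be illustrated with a toy 2D stand-in. This is not LIFT-GS's differentiable 3D renderer; it merely splats isotropic 2D Gaussians and scores them against a 2D mask, standing in for the foundation-model targets used in the real pipeline (all names and numbers are illustrative):

```python
import numpy as np

def splat(means_xy, opacities, H=32, W=32, sigma=1.5):
    """Render isotropic 2D Gaussians onto an H x W grid by summing their
    densities: a toy stand-in for differentiable Gaussian rendering."""
    ys, xs = np.mgrid[0:H, 0:W]
    img = np.zeros((H, W))
    for (x, y), o in zip(means_xy, opacities):
        img += o * np.exp(-((xs - x) ** 2 + (ys - y) ** 2) / (2 * sigma ** 2))
    return np.clip(img, 0, 1)

# "Render-supervised" 2D loss: compare the rendering to a 2D target mask.
target = np.zeros((32, 32))
target[10:20, 10:20] = 1.0
pred = splat([(15, 15), (12, 18)], [1.0, 0.8])
loss = float(((pred - target) ** 2).mean())
```

Gradients of such a loss flow back through the rendering into the Gaussian parameters, which is what lets 2D masks supervise 3D predictions without any 3D labels.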
A Cross Modal Knowledge Distillation & Data Augmentation Recipe for Improving Transcriptomics Representations through Morphological Features | Accept (poster) | Summary: This paper proposes a cross-modal knowledge distillation framework (Semi-Clipped) and a biologically inspired data augmentation method (PEA). The aim is to enhance the biological significance and predictive power of transcriptomic representations using weakly paired multimodal data (microscopy images + transcr... | Rebuttal 1:
Rebuttal: We thank the reviewer DZgQ for their review, and for acknowledging the robustness of our experimental design and the innovation behind our approach. We respond below to the comments of the reviewer:
- __*“The visualization is not clear, and the y-axis of Figure 1 is not labeled.” :*__ We thank... | Summary: Understanding how cells respond to stimuli such as genetic perturbations or chemical compounds forms a crucial part of drug discovery. This work proposes a method to enrich representations of transcriptomic data with paired morphological data. Measuring paired transcriptomic and morphological features of cells... | Rebuttal 1:
Rebuttal: We thank the reviewer T8LP for all their comments, and for their acknowledgement of the quality of the paper and the contribution. We aim to acknowledge and answer below the questions and suggestions of the reviewer:
- __*“Additional reference” :*__ We thank the reviewer for pointing us toward ... | Summary: The paper introduces Semi-Clipped, a method that transfers morphological information from microscopy images to transcriptomic data through cross-modal knowledge distillation. The authors adapt the CLIP loss by freezing a pretrained teacher encoder (for images) and learning a trainable adapter for transcriptomi... | Rebuttal 1:
Rebuttal: We thank the reviewer rL4s for their review, and for acknowledging the robustness demonstration of our claims. We will respond below to all their comments:
- __*“The reviewer is concerned that Semi-Clipped outperforms CLIP in a context-dependent way” :*__ We thank the reviewer for pointing this ...
Claims And Eviden... | Rebuttal 1:
Rebuttal: We thank reviewer rGLV for their review. We address the reviewer’s comments below in order to improve the clarity of the contribution of our submission:
- __*"This paper doesn't show empirically that the learnt representation of transcriptomics is relatively comprehensive." :*__ We respectfully di... | null | null | null | null | null | null |
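The Semi-Clipped recipe described above freezes a pretrained teacher (image) encoder and trains an adapter with a CLIP-style contrastive loss. A self-contained sketch of a symmetric InfoNCE loss over paired embeddings — the frozen-teacher/trainable-adapter split is only indicated in the comments, and all names are illustrative:

```python
import numpy as np

def clip_style_loss(z_teacher, z_student, temperature=0.07):
    """Symmetric InfoNCE over a batch of paired embeddings. In a
    Semi-Clipped-style setup, z_teacher would come from the frozen image
    encoder and z_student from the trainable transcriptomics adapter."""
    zt = z_teacher / np.linalg.norm(z_teacher, axis=1, keepdims=True)
    zs = z_student / np.linalg.norm(z_student, axis=1, keepdims=True)
    logits = zt @ zs.T / temperature       # (batch, batch) cosine similarities
    labels = np.arange(len(zt))            # matching pairs sit on the diagonal

    def xent(lg):                          # row-wise softmax cross-entropy
        lg = lg - lg.max(axis=1, keepdims=True)
        logp = lg - np.log(np.exp(lg).sum(axis=1, keepdims=True))
        return -logp[labels, labels].mean()

    return 0.5 * (xent(logits) + xent(logits.T))
```

With the teacher frozen, only the adapter receives gradients, so morphological structure is distilled into the transcriptomic representation rather than co-adapted.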
OD³: Optimization-free Dataset Distillation for Object Detection | Reject | Summary: This work presents an optimization-free data distillation framework for object detection. It addresses the challenges of training large neural networks on large-scale datasets by synthesizing compact datasets. The framework consists of two main stages: candidate selection, where object instances are iterativel... | Rebuttal 1:
Rebuttal: Thank you for your valuable suggestions and for giving us the opportunity to address the points you raised!
>**Q1: The experiment comparison is unfair for the baseline methods (random, uniform, k-center, herding, and DCOD).**
We appreciate this concern. Below are our own runs of the core-set s... | Summary: The work proposes a new framework called OD3 (Optimization-free Dataset Distillation for Object Detection), specifically designed for dataset distillation in object detection tasks. It aims to reduce training time and computational resources by selecting and generating a high-quality compact dataset from a lar... | Rebuttal 1:
Rebuttal: We sincerely appreciate your thorough review and for recognizing our contributions!
>**Q1: How does this method perform on the latest transformer-based detectors?**
We agree that evaluating performance on transformer-based detectors is important. To further demonstrate the generalizability of o... | Summary: The paper proposes a dataset distillation method for object detection datasets, aiming at condensing the number of training images down to 0.25 - 5% of the original training dataset. This is achieved by first copy-pasting objects from the training set onto blank backgrounds. In a second step, objects that are ... | Rebuttal 1:
Rebuttal: Thank you for your thoughtful review and valuable suggestions!
>**Q1: The evaluation is limited to a single backbone architecture (ResNet-50) and two detectors (RetinaNet & Faster R-CNN), which severely limits its generality. The paper should additionally evaluate the method on more modern backb... | Summary: Dataset distillation for object detection is a under-explored task. This paper proposes a new optimization-free dataset distillation method tailored for object detection, named OD$^3$. OD$^3$ consists of two steps: (1) an iterative candidate selection process that strategically places object instances in synt... | Rebuttal 1:
Rebuttal: Thank you for your constructive feedback and for giving us the opportunity to address your concerns!
>**Q1: The AP performance for different object sizes is omitted.**
The AP performance for different object sizes is reported in Table 3 and Table 5 of the main paper, as well as Table 7 and Tabl... | null | null | null | null | null | null |
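The candidate-selection stage summarized above places object instances on blank canvases. Below is a simplified sketch of such a placement step, using plain rejection sampling to avoid overlaps; it is not OD³'s selection criterion, which also scores candidate quality, and all names are illustrative:

```python
import numpy as np

def place_candidates(canvas_hw, boxes_wh, rng):
    """Greedily place object crops (given as (width, height)) at random
    positions on a blank canvas, rejecting positions that would overlap an
    already-placed box. Returns accepted boxes as (x1, y1, x2, y2)."""
    H, W = canvas_hw
    placed = []
    for (w, h) in boxes_wh:
        for _ in range(50):                    # rejection-sampling attempts
            x1 = int(rng.integers(0, W - w + 1))
            y1 = int(rng.integers(0, H - h + 1))
            box = (x1, y1, x1 + w, y1 + h)
            if all(box[2] <= b[0] or b[2] <= box[0] or
                   box[3] <= b[1] or b[3] <= box[1] for b in placed):
                placed.append(box)
                break
    return placed

rng = np.random.default_rng(0)
boxes = place_candidates((128, 128), [(20, 30), (40, 10), (15, 15)], rng)
```

Packing many non-overlapping instances per synthetic image is what lets a distilled set at 0.25–5% of the original size still expose the detector to a dense training signal.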
Efficient Robust Conformal Prediction via Lipschitz-Bounded Networks | Accept (poster) | Summary: This paper explores how to leverage LipNet and a novel robust conformal score algorithm for robust prediction. Previously proposed robust conformal prediction methods each have their own limitations, such as high computational complexity, making them difficult to scale to large datasets like ImageNet. By utili... | Rebuttal 1:
Rebuttal: **Common response:**
First of all, we would like to thank our reviewers for their time spent reviewing our paper along with the insightful comments they provided. Their reviews highlight that our method is “highly efficient and scalable” while underlining its innovative nature and our comprehensi... | Summary: - This paper proposes a novel method, lip-rcp, for efficient robust conformal prediction (CP) by leveraging Lipschitz-bounded neural networks. The key contributions include:
- Theoretical analysis: Deriving worst-case coverage bounds for vanilla CP under l2 adversarial attacks, valid simultaneously for all pe... | Rebuttal 1:
Rebuttal: **Common response:**
First of all, we would like to thank our reviewers for their time spent reviewing our paper along with the insightful comments they provided. Their reviews highlight that our method is “highly efficient and scalable” while underlining its innovative nature and our comprehensi... | Summary: This paper addresses the limitations of robust conformal prediction (CP) under adversarial attacks. Traditional robust CP methods typically generate prediction sets that are either excessively large or computationally expensive for large-scale scenarios. To tackle these challenges, the authors introduce lip-rc... | Rebuttal 1:
Rebuttal: **Common response:**
First of all, we would like to thank our reviewers for their time spent reviewing our paper along with the insightful comments they provided. Their reviews highlight that our method is “highly efficient and scalable” while underlining its innovative nature and our comprehensi... | Summary: This paper uses 1-Lipschitz networks to estimate robust conformal prediction (CP) sets, leading to the new lip-rcp method. The proposed method achieves SOTA results in the size of the robust CP sets and computational efficiency. In addition, the authors also study vanilla CP under attack, and derive new worst-... | Rebuttal 1:
Rebuttal: **Common response:**
First of all, we would like to thank our reviewers for their time spent reviewing our paper along with the insightful comments they provided. Their reviews highlight that our method is “highly efficient and scalable” while underlining its innovative nature and our comprehens... | null | null | null | null | null | null |
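The key property exploited above is that a Lipschitz-bounded score map limits how far an l2-bounded attack can move any conformal score, so robustness reduces to inflating the split-conformal threshold. A simplified sketch of that reduction — not the paper's exact lip-rcp construction — with illustrative names:

```python
import numpy as np

def split_conformal_qhat(cal_scores, alpha):
    """Standard split-conformal quantile of calibration nonconformity scores."""
    n = len(cal_scores)
    k = int(np.ceil((n + 1) * (1 - alpha)))
    return np.sort(cal_scores)[k - 1]

def robust_set(scores, qhat, eps, lipschitz=1.0):
    """If the score map is L-Lipschitz in the input, an l2 perturbation of
    radius eps changes each score by at most L * eps, so inflating the
    threshold preserves coverage for every perturbed input."""
    return np.where(scores <= qhat + lipschitz * eps)[0]

rng = np.random.default_rng(0)
cal = rng.uniform(size=200)
qhat = split_conformal_qhat(cal, alpha=0.1)
test_scores = rng.uniform(size=10)
vanilla = robust_set(test_scores, qhat, eps=0.0)
robust  = robust_set(test_scores, qhat, eps=0.1)
```

The cost of a small Lipschitz constant is a small threshold inflation, which is why the robust sets stay close in size to the vanilla ones instead of blowing up.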
Outlier-Aware Post-Training Quantization for Discrete Graph Diffusion Models | Accept (poster) | Summary: This paper introduces **Bit-DGDM**, a post-training quantization framework for **Discrete Graph Diffusion Models (DGDMs)**, addressing the long inference times caused by huge computational load and the presence of outliers in weights and activations. It proposes decomposing activations into dense, easily quant... | Rebuttal 1:
Rebuttal: We very much appreciate your positive comments on our paper.
**Q1:** Some quantization methods [a,b,c,d], low-rank decomposition and importance-based weight (e.g. PB-LLM) dividing methods should be discussed.
**A1:** We sincerely appreciate your valuable suggestion. Due to the page limit of the ... | Summary: This paper focuses on the quantization of discrete diffusion models for graph data. To achieve this, the authors introduce sparse-dense activation quantization and low-rank decomposition with hardware support. Experimental results demonstrate that the proposed method enhances quantization performance while imp... | Rebuttal 1:
Rebuttal: We highly appreciate your positive reviews and constructive suggestions.
**Q1:** These methods [1,2,3] should be discussed in this context to provide a more comprehensive comparison.
**A1:** Thank you for your constructive suggestion. We note that these methods [1,2,3] were designed for image di...
Rebuttal: We very much appreciate your constructive comments. For your concerns:
**Q1:** The font in the figures of the paper should be consistent to make it easier to read.
**A1:** Thank you for your helpful feedback. We have updated all figures to use Times New Roman font, ensuring formatting consistenc... | Summary: This paper proposes post-training quantization (PTQ) methods to quantize discrete graph diffusion models (DGDM). The paper first analyzes outlier distributions of weights and activations in DGDM. For activations, the proposed method split activation matrices into high-precision sparse matrix (outliers) and low... | Rebuttal 1:
Rebuttal: We very much appreciate your positive comments and constructive suggestions.
**Q1:** What are the differences between the proposed method and SVDQuant?
**A1:** Thank you for your insightful question. Our method introduces two key innovations over SVDQuant. (i) Recognizing the presence of signif... | null | null | null | null | null | null |
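The sparse-dense activation quantization described above can be sketched as: keep the few largest-magnitude activations in full precision as a sparse residual, and quantize the remaining dense part at low bit-width. A minimal numpy illustration — the thresholding rule and outlier fraction here are illustrative, not the paper's:

```python
import numpy as np

def sparse_dense_quantize(x, outlier_frac=0.01, bits=8):
    """Split activations into a high-precision sparse part (the
    largest-magnitude entries) and a dense part quantized to `bits` bits."""
    k = max(1, int(outlier_frac * x.size))
    thresh = np.partition(np.abs(x).ravel(), -k)[-k]
    outlier_mask = np.abs(x) >= thresh
    dense = np.where(outlier_mask, 0.0, x)
    scale = float(np.abs(dense).max()) / (2 ** (bits - 1) - 1) or 1.0
    q = np.round(dense / scale).astype(np.int8)
    return q, scale, x * outlier_mask          # int8 dense + fp sparse outliers

def dequantize(q, scale, sparse):
    return q.astype(np.float32) * scale + sparse

rng = np.random.default_rng(0)
x = rng.normal(size=(64, 64)).astype(np.float32)
x[3, 5] = 40.0                                 # inject an activation outlier

q, s, sp = sparse_dense_quantize(x)
err_split = np.abs(dequantize(q, s, sp) - x).mean()

# Naive int8 quantization of the full tensor, for comparison.
s_naive = np.abs(x).max() / 127
err_naive = np.abs(np.round(x / s_naive) * s_naive - x).mean()
```

Removing outliers before choosing the quantization scale shrinks the dense range, which is exactly why the split representation reconstructs with lower error than naive per-tensor int8.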
ReverB-SNN: Reversing Bit of the Weight and Activation for Spiking Neural Networks | Accept (poster) | Summary: This paper introduces a novel binary design in SNN termed ReverB, which uses real-value activation and binary weights, merging the characteristics of both BNN and SNN. This innovative approach retains the energy efficiency advantages of SNNs in inference and their temporal properties. Additionally, the paper p... | Rebuttal 1:
Rebuttal: Thanks for your efforts in reviewing our paper and your recognition of our novel method and effective results. The responses to your questions are given piece by piece as follows.
**Question 1**: Does the ReverB network have a reset mechanism, and how does it function?
**A1**: Thanks for the ques... | Summary: The paper proposes an SNN design with real-valued activations and binary weights to boost information capacity while keeping energy efficiency. Its novel bit-reversal strategy and adaptive weight scaling are key innovations. However, the paper’s motivation and presentation lack clarity and could benefit from v... | Rebuttal 1:
Rebuttal: Thanks for your efforts in reviewing our paper and your recognition of our novel bit-reversal strategy and adaptive weight scaling. The responses to your weaknesses and questions are given piece by piece as follows.
**Weakness 1**: The paper does not clearly explain the rationale behind reversing... | Summary: This paper addresses the issue of information loss in Spiking Neural Networks (SNNs) due to the binarization of activations. The main contribution in the paper is to use binary weights (in $\\{-1,1\\}$) and real-valued spikes, instead of binary spikes and real-valued weights. This initial contribution is exten... | Rebuttal 1:
Rebuttal: Thanks for your efforts in reviewing our paper. We will try to make the work clearer for you. The responses to your concerns and questions are given as follows.
**Concern 1**: The only required addition is not for the learnable version.
**R1**: Sorry for the confusion. Since the $\alpha$ will be... | Summary: In this paper, the authors propose to make the weights of a spiking neural network ternary and the spikes of the units in the network real valued, essentially swapping what is done in SNNs usually. This swap preserves the advantages of SNNs, while improving its expressivity. The authors demonstrate their metho... | Rebuttal 1:
Rebuttal: Thanks for your efforts in reviewing our paper and your recognition of our novel method and notable results. The responses to your concerns and questions are given piece by piece as follows.
**Concern 1**: It would have been useful to also see evaluations on sequence tasks since SNNs are inherent... | null | null | null | null | null | null |
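The core swap above — binary weights paired with real-valued activations — can be illustrated with XNOR-Net-style weight binarization, where a per-row scale α = mean(|w|) minimizes the binarization error. This resembles, but is not identical to, the paper's adaptive weight scaling; all names are illustrative:

```python
import numpy as np

def binarize_weights(W):
    """XNOR-Net-style weight binarization: per-row sign plus a scale
    alpha = mean(|w|), which minimizes ||W - alpha * sign(W)||_F per row."""
    alpha = np.abs(W).mean(axis=1, keepdims=True)
    return alpha * np.sign(W)

rng = np.random.default_rng(0)
W = rng.normal(size=(16, 32))
x = rng.normal(size=32)                 # real-valued activations, as in ReverB

y_full = W @ x                          # full-precision reference
y_bin  = binarize_weights(W) @ x        # binary-weight forward pass

err_scaled   = np.linalg.norm(W - binarize_weights(W))
err_unscaled = np.linalg.norm(W - np.sign(W))
```

Because the optimal per-row α is a closed-form mean, the adaptive scaling adds essentially no inference cost while strictly improving on unscaled ±1 weights.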
Rethinking Confidence Scores and Thresholds in Pseudolabeling-based SSL | Accept (poster) | Summary: This paper proposes a method for selecting points to be pseudolabeled in pseudolabeling-based semi-supervised learning. Contrasting previous works which use confidence-based thresholding, PaBlo trains a selector function with an optimization objective which balances coverage with pseudolabeling error, ... | Rebuttal 1:
Rebuttal: We appreciate the feedback and questions. Our response is as follows,
**On baselines that do not rely on confidence scores and thresholds.**
While this would be interesting, our paper's focus is on pseudolabeling methods based on confidence scores and thresholds. For this reason, we chose baselin... | Summary: This paper introduces a principled framework for improving pseudolabeling-based semi-supervised learning (SSL) by explicitly controlling confidence scores and thresholds to manage pseudolabel quality and quantity. The approach addresses limitations of heuristic-driven methods, offering a systematic way to bala... | Rebuttal 1:
Rebuttal: Thanks for the careful review and positive feedback. We appreciate the recognition of the strengths of our work — *a flexible and principled approach for learning confidence scores and thresholds for pseudolabeling and its empirical effectiveness*. Our response to the queries is as follows,
**Di... | Summary: This paper proposes PabLo, a novel method for semi-supervised learning. The authors conceive their approach through noting that the threshold for selecting pseudolabels from the teacher model should be both permissive enough to allow for a large degree of supervision, while not being so permissive as to introd... | Rebuttal 1:
Rebuttal: We thank the reviewer for the constructive feedback and the noted strengths — *a lightweight, intuitive, and theoretical framework to learn scores and thresholds that can be integrated with existing SSL approaches to improve their performance*. Our response to the queries is as follows,
**More la... | Summary: The paper proposes PabLO, a framework for improving pseudolabeling-based semi-supervised learning (SSL) by learning confidence scores and thresholds with explicit control over pseudolabeling error tolerance. The core idea is to formulate pseudolabeling as an optimization problem that maximizes coverage while b... | Rebuttal 1:
Rebuttal: We appreciate the feedback and the noted strengths of our paper. Our work is well-positioned in the literature on SSL and confidence calibration. Our principled methods to learn confidence scores and thresholds with error bounds replace the heuristic-based choices and enhance the prior SSL methods... | null | null | null | null | null | null |
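The coverage-versus-error trade-off described above can be sketched in one dimension: choose the confidence threshold that maximizes coverage subject to the selected pool's estimated pseudolabel error staying within a tolerance. This is a simplified stand-in for the paper's joint learning of scores and thresholds; names and the synthetic data are illustrative:

```python
import numpy as np

def pick_threshold(conf, correct, error_tolerance):
    """Return (threshold, coverage): the lowest confidence cutoff whose
    selected pool keeps estimated pseudolabel error within tolerance,
    thereby maximizing the fraction of points that get pseudolabeled."""
    order = np.argsort(-conf)                          # most confident first
    err = np.cumsum(~correct[order]) / np.arange(1, len(conf) + 1)
    ok = np.where(err <= error_tolerance)[0]
    if len(ok) == 0:
        return np.inf, 0.0                             # select nothing
    cut = ok[-1]                                       # largest admissible pool
    return float(conf[order][cut]), (cut + 1) / len(conf)

rng = np.random.default_rng(0)
conf = rng.uniform(0.5, 1.0, size=1000)
correct = rng.uniform(size=1000) < conf    # higher confidence, more often right

thr_strict, cov_strict = pick_threshold(conf, correct, 0.05)
thr_loose, cov_loose = pick_threshold(conf, correct, 0.25)
```

Tightening the error tolerance can only shrink the admissible pool, so coverage falls monotonically — the quantity/quality dial the reviews highlight.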
Fast Min-$\epsilon$ Segmented Regression using Constant-Time Segment Merging | Accept (poster) | Summary: This paper provides a heuristic method to compute segmented regression. Instead of looking for the best segments directly, the algorithm finds as many segments as possible and then merges them until only k segments are left. The authors evaluate the algorithm on the synthetic datasets. The authors' method show... | Rebuttal 1:
Rebuttal: Thank you for reviewing our work, for the constructive improvement ideas and for pointing out the interesting direction of isotonic regression.
**Relation to isotonic regression** (we assume that the review refers to isotonic regression): While isotonic regression is also based on an ordered samp... | Summary: The paper addresses min-epsilon segmented regression, where the goal is to minimize the mean squared error (MSE) for a given number of segments. While the optimal solution has O(n^2) complexity (Bai & Perron, 1998), heuristics like Acharya et al. (2016) improve efficiency to O(n) but often introduce significant... | Rebuttal 1:
Rebuttal: Thank you for your feedback.
**Theoretical guarantees:** We would like to highlight that contrary to the reviewer's summary, the approach by Acharya et al. (2016), for a fixed value of $d$, improves the runtime of the approach to $\mathcal{O}(n\log{n})$, not $\mathcal{O}(n)$. This can be seen in ... | Summary: This paper proposes a new heuristic method for the $\\min$-$\\epsilon$ segmented regression problem. Some prior works propose two types of algorithms for this problem. One line of work (Bai & Perron, 1998; Yamamoto & Perron, 2013) gives optimal solutions for this problem with computational complexity $\\mathca... | Rebuttal 1:
Rebuttal: Thank you for reviewing our work and for the suggestions for improvement.
**Broader subject relevance:** We consider the topic of regression to be a fundamental building block for statistical analysis and machine learning.
As mentioned in Section 7, Diakonikolas et al. (2020) have shown that an a... | Summary: The authors present a new method and algorithm for min-$\epsilon$ segmented
regression. The main contributions are primarily algorithmic but also related
to software engineering, as the authors implement highly efficient programming
techniques to enhance their implementation. The greedy algorithm they propose
... | Rebuttal 1:
Rebuttal: Thank you very much for your valuable and constructive feedback regarding our paper, including the code and experiment setting.
**Evaluation design and constraints:** The alternatives in our 'related work' section solve a slightly different problem, e.g., by enforcing continuity of the resulting ... | null | null | null | null | null | null |
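The merge-based heuristic summarized above starts from many segments and merges until k remain. Below is a simplified sketch of bottom-up merging for piecewise-constant fits, using prefix sums so each segment's SSE costs O(1) to evaluate; the paper handles general segmented regression with constant-time merge updates, whereas this greedy toy recomputes all pair costs each round:

```python
import numpy as np

def segment_sse(prefix, prefix_sq, i, j):
    """SSE of fitting a constant (the mean) to y[i:j], in O(1) via prefix sums."""
    n = j - i
    s = prefix[j] - prefix[i]
    return prefix_sq[j] - prefix_sq[i] - s * s / n

def greedy_merge(y, k):
    """Bottom-up merging: start from length-1 segments and repeatedly merge the
    adjacent pair whose union increases total SSE the least, until k segments
    remain. Returns the segment boundaries."""
    prefix = np.concatenate([[0.0], np.cumsum(y)])
    prefix_sq = np.concatenate([[0.0], np.cumsum(np.asarray(y, float) ** 2)])
    bounds = list(range(len(y) + 1))
    while len(bounds) - 1 > k:
        costs = [segment_sse(prefix, prefix_sq, bounds[i], bounds[i + 2])
                 - segment_sse(prefix, prefix_sq, bounds[i], bounds[i + 1])
                 - segment_sse(prefix, prefix_sq, bounds[i + 1], bounds[i + 2])
                 for i in range(len(bounds) - 2)]
        del bounds[int(np.argmin(costs)) + 1]
    return bounds
```

On piecewise-constant data such as `[1, 1, 1, 5, 5, 5, 9, 9]`, zero-cost merges within each constant run happen first, so the true breakpoints survive until only k segments remain.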
MTL-UE: Learning to Learn Nothing for Multi-Task Learning | Accept (poster) | Summary: This paper introduces MTL-UE, the first unified framework for creating unlearnable examples tailored for multi-task data and models. By leveraging a generator-based structure with label priors and class-wise embeddings, MTL-UE enhances attack robustness through intra-task and inter-task regularization. It supp... | Rebuttal 1:
Rebuttal: **Q1**. Scenarios involving fine-tuning (using a pretrained feature encoder) and advanced augmentations.
**A1**. Thanks for the suggestions. We conducted experiments on the proposed scenarios. The table presents results of training MTL models on UTKFace with ImageNet-pretrained-encoder fine-tunin... | Summary: This paper introduces MTL-UE, the first framework for generating unlearnable examples (UEs) tailored for multi-task learning (MTL) models. While existing UE methods focus on single-task learning (STL) to prevent unauthorized training on personal data, modern AI increasingly relies on generalist MTL models. Thi... | Rebuttal 1:
Rebuttal: **Q1**. How would these methods perform if ViT-B were used as the backbone for the surrogate models?
**A1**. In addition to the results in Table 6, we conducted experiments using ViT-B as the backbone for the surrogate models on the CelebA dataset. The results in the table below show that this ch... | Summary: In this paper, the authors propose an effective method for generating unlearnable samples for multi-task learning (MTL), which uses a generator to produce perturbations instead of the traditional iterative method. The effectiveness of the method is analyzed and validated in terms of both accuracy and ro...
Rebuttal: **Q1**. Insufficient coverage of existing literature.
**A1**. Thank you for the advice! We’ll add recent works on **data poisoning** in the related work. As we focus on UE, we add a comparison between UE and other poisoning attacks (the table below) in the updated paper to broaden the literature cove...
Rebuttal: **Q1**. Further clarify the experimental results. What does STL mean in Figure 2, and why do the baseline methods perform poorly, even on STL?
A1. The pipeline for UE has two stages:
- **Stage 1**: UE generation process (Section 4.2).
- **Stage 2**: UE performance evaluation, where generated UEs... | null | null | null | null | null | null |
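The two-stage pipeline above builds on the standard error-minimizing ("min-min") formulation of unlearnable examples. Below is a toy numpy sketch of that bilevel idea on a synthetic logistic-regression problem; the linear surrogate model, step sizes, and budget `eps` are illustrative assumptions, not MTL-UE's generator-based Stage 1.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d, eps = 64, 10, 0.5
X = rng.normal(size=(n, d))
y = rng.choice([-1.0, 1.0], size=n)
w = np.zeros(d)                 # surrogate model
delta = np.zeros((n, d))        # per-sample unlearnable perturbation

def loss(w, delta):
    margins = y * ((X + delta) @ w)
    return float(np.mean(np.log1p(np.exp(-margins))))

for _ in range(100):
    # inner step: surrogate model minimizes loss on the perturbed data
    s = 1.0 / (1.0 + np.exp(y * ((X + delta) @ w)))   # sigmoid(-margin)
    w -= 0.5 * -((y * s)[:, None] * (X + delta)).mean(axis=0)
    # outer step: the perturbation ALSO minimizes the loss (error-minimizing
    # noise), projected onto the L-infinity ball of radius eps
    s = 1.0 / (1.0 + np.exp(y * ((X + delta) @ w)))
    delta = np.clip(delta - 5.0 * (-(y * s)[:, None] * w[None, :] / n), -eps, eps)
```

Because the noise makes every sample "easy," the surrogate's loss collapses while the perturbation stays imperceptibly small in the L-infinity norm.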
Mitigating Object Hallucination in Large Vision-Language Models via Image-Grounded Guidance | Accept (spotlight poster) | Summary: This paper proposes the MARINE framework to address the object hallucination issue in Large Vision-Language Models (LVLMs). This framework introduces visual guidance from image-grounded models to effectively reduce hallucinations during inference. Experiments show that MARINE outperforms baseline methods on mu... | Rebuttal 1:
Rebuttal: Thank you for your insightful feedback and acknowledgement of our extensive experiments and the overall clarity and structure of our paper. We detail our response as follows.
### Q1: Using the DETR trained on MSCOCO may be unfair.
MSCOCO (train) is a widely-used open-source image-caption dataset ... | Summary: The paper proposes the MARINE method for mitigating object hallucination in LVLMs. The method uses results from external object detection models and adds it in the form of an extra textual prompt into the LVLM’s generation. The method is compared with several baselines on object hallucination benchmarks, as we... | Rebuttal 1:
Rebuttal: Thank you very much for your insightful review and valuable feedback. We sincerely appreciate your recognition of the simplicity and thorough experimental validation of our MARINE approach.
### Q1.1 Clarify the claim regarding misalignment.
In the original claim, by "visual encoder," we referred ... | Summary: The paper presents a novel method called MARINE to reduce hallucination in large vision-language models (LVLMs).
The method can be applied to LVLMs without any training. When auto-regressively generating individual tokens, logits are computed twice: once with the normal LVLM input ("unconditional"), and once ... | Rebuttal 1:
Rebuttal: Thank you very much for your thoughtful and constructive review. We appreciate your recognition of MARINE’s effectiveness and comprehensive evaluation. We provide detailed responses to your questions below:
### Q1. Effect of sampling temperatures.
In our paper, we opted for greedy sampling (tempe... | Summary: This paper proposes a framework (MARINE) that aggregates a VLM and traditional vision tools such as object detection and image-text alignment. Concretely, given an input image, MARINE uses vision tools as guidance models, achieved through a linear combination of unconditional and conditional logits over the vo... | Rebuttal 1:
Rebuttal: We sincerely appreciate your thoughtful review and encouraging feedback. Thank you for recognizing the clarity and practical design of our approach, our emphasis on mitigating hallucinations, and the strong empirical support we provided. Below, we provide detailed responses to your comments:
### ... | null | null | null | null | null | null |
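The guidance these reviews describe — steering next-token generation with a linear combination of unconditional and guidance-conditioned logits — can be sketched in a few lines. The classifier-free-guidance-style weighting and the `gamma` value below are assumptions for illustration, not MARINE's published formula.

```python
import numpy as np

def softmax(z):
    z = np.asarray(z, dtype=float)
    e = np.exp(z - z.max())
    return e / e.sum()

def guided_logits(uncond, cond, gamma):
    """CFG-style combination: move the next-token distribution toward the
    logits conditioned on the image-grounded guidance (e.g., detected objects)."""
    return (1.0 - gamma) * np.asarray(uncond) + gamma * np.asarray(cond)

uncond = np.array([2.0, 1.0, 0.0])  # logits from the plain LVLM input
cond = np.array([0.0, 1.0, 2.0])    # logits with grounding guidance in the prompt
probs = softmax(guided_logits(uncond, cond, gamma=1.5))
```

With `gamma = 0` the plain LVLM distribution is recovered; `gamma > 1` extrapolates past the conditional logits, pushing the model further toward the grounded prediction.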
Adaptive Sensitivity Analysis for Robust Augmentation against Natural Corruptions in Image Segmentation | Accept (poster) | Summary: This work proposes a sensitivity-guided method to improve model robustness against image corruptions. The sensitivity measure enables a selection of proper model-free augmentation policies. The experiments show that the method improves robustness of models on both real and synthetic datasets, compared to SOTA ... | Rebuttal 1:
Rebuttal: Thank you for your review! We clarify some misunderstandings below.
- "Figure 1 not support equal spacing along function g, and there is no proof."
By equal spacing, we meant that given the set of α values that fulfills Q, the values g(α_i) are at equal intervals along the y-axis of the functio... | Summary: The paper addresses a practical challenge of enhancing model robustness to natural corruptions in semantic segmentation, a critical area for real-time perception applications. It proposes a novel, computationally efficient online adaptive sensitivity analysis approach (10x faster and 200x less storage than exi... | Rebuttal 1:
Rebuttal: Thank you! We appreciate the suggestions and will improve clarity in the revision, adding more references and notation. We address points below:
- “Efficiency improvement claims are missing benchmarks like inference runtime benchmarks and memory usage.”
We would like to clarify that our effici... | Summary: This paper introduces an adaptive, sensitivity-guided augmentation method to improve the robustness of image segmentation models against natural corruptions. The idea is to perform a lightweight, online sensitivity analysis during training to identify the most impactful perturbations. This approach aims to bri... | Rebuttal 1:
Rebuttal: Thank you for your comments, we are grateful to hear that you find our work impactful in real-world robotics applications and our analyses interesting for model sensitivity! We will be sure to add writing fixes in paper revision regarding Table 1 and increase visibility of results related to our c... | Summary: This paper proposes an adaptive, on-the-fly sensitivity analysis approach to design data augmentation for increasing the robustness of the semantic segmentation models under naturally occurring corruptions. The proposed approach attempts to bridge the gap between choosing random augmentations like Trivial Aug... | Rebuttal 1:
Rebuttal: Thank you! We appreciate your feedback, and address points below:
- "Could meta-learning offer a more direct optimization?"
Yes. NOTE: the difference is mostly wrt the choice between data augmentation vs meta-learning as the training approach, rather than an alternative for the sensitivity anal... | null | null | null | null | null | null |
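The "equal spacing along g" idea from the first rebuttal above — choosing perturbation strengths α so that the effects g(α_i) are equally spaced on the y-axis, rather than spacing the α_i themselves equally — can be sketched as a simple grid inversion. The quadratic g, grid resolution, and monotonicity assumption are illustrative, not the paper's procedure.

```python
import numpy as np

def equal_effect_levels(g, alpha_grid, n_levels):
    """Pick alphas whose responses g(alpha) are (approximately) equally
    spaced between min and max, via nearest-neighbor search on a grid."""
    ys = np.array([g(a) for a in alpha_grid])
    targets = np.linspace(ys.min(), ys.max(), n_levels)
    return [float(alpha_grid[int(np.argmin(np.abs(ys - t)))]) for t in targets]

# For g(a) = a^2 on [0, 1], equal y-spacing yields alphas near the square roots
# of the targets, i.e. denser sampling where g changes slowly in alpha.
alphas = equal_effect_levels(lambda a: a * a, np.linspace(0.0, 1.0, 1001), 5)
```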
No-Regret is not enough! Bandits with General Constraints through Adaptive Regret Minimization | Accept (poster) | Summary: The authors study the BwK setting where a learner is tasked with repeatedly performing actions and gain high cumulative reward while also satisfying multiple general long-term constraints. Specifically, they consider a best-of-both worlds objective in which a given algorithm has to perform optimally whether or... | Rebuttal 1:
Rebuttal: We thank the reviewer for the positive feedback about our paper.
* The weak adaptivity property was used in a simplified setting in the very recent paper by Castiglioni et al. (2024). Here, the authors only study the case in which they have a single budget constraint and a single consumption con... | Summary: The paper addresses the problem of bandits with general constraints, extending beyond the traditional bandits with knapsacks (BwK) framework. The authors generalize the setting where the learner does not know the Slater's parameter $\rho$ and give an algorithm following the primal-dual framework.
Previous work... | Rebuttal 1:
Rebuttal: **On the questions about the Theoretical proofs:**
Thanks for taking the time to carefully read our proofs. We really appreciate the effort.
* Thanks. We meant to cite the following works:
- At line 660: Hazan, Elad. "Introduction to online convex optimization." Foundations and Trends® in Op... | Summary: This paper studies the general constrained optimization problem where the reward and cost functions can either be stochastic or adversarial. By extending the LagrangeBwK framework to require the primal & dual algorithms to be weakly adaptive in addition to being no-regret, the authors designed a best-of-bot... | Rebuttal 1:
Rebuttal: ### **On the competitive ratio**
Thank you for raising this possible source of confusion about the competitive ratio. The issue here is primarily one of nomenclature rather than our choice of a stronger benchmark. We agree that adding further clarification in the final version will be beneficial.... | null | null | null | null | null | null | null | null |
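The dual side of a LagrangeBwK-style primal-dual scheme, which both reviews reference, reduces to projected gradient ascent on each Lagrange multiplier: the "price" of a resource rises while its long-term constraint is violated. The step size, budget, and cost sequence below are illustrative, not values from the paper.

```python
def dual_step(lam, cost, budget, eta):
    """Projected gradient ascent on one Lagrange multiplier: the price of the
    resource increases when per-round consumption exceeds the budget, and is
    clipped at zero otherwise."""
    return max(0.0, lam + eta * (cost - budget))

lam, eta, budget = 0.0, 0.1, 0.5
trace = []
for cost in [1.0, 1.0, 0.0, 1.0, 0.0, 0.0]:   # per-round resource consumption
    lam = dual_step(lam, cost, budget, eta)
    trace.append(lam)
```

The primal (bandit) algorithm would then face Lagrangian rewards penalized by the current multipliers, which is where the weak-adaptivity requirement discussed above enters.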
Feature out! Let Raw Image as Your Condition for Blind Face Restoration | Accept (poster) | Summary: This paper proposes the Pseudo-Hashing Image-to-image Schrodinger Bridge (P-I2SB) framework to enhance the restoration potential of Schrodinger Bridge (SB) by correcting data distributions and effectively learning the optimal transport path between any two data distributions. This approach preprocesses HQ images ... | Rebuttal 1:
Rebuttal: Thank you for your careful review of the paper structure and formatting, which is greatly appreciated.
> Q1. In Table 1, the reference numbers for SoTA methods should be listed.
- Due to the ICML citation format not using numbers, the references were too lengthy to include directly in the table. ... | Summary: The authors present Pseudo-Hashing Image-to-Image Schrödinger Bridge (P-I2SB), a novel framework inspired by optimal mass transport. By correcting data distributions and effectively learning the optimal transport path between them, it enhances the restoration capabilities of Schrödinger Bridge (SB). Experiment... | Rebuttal 1:
Rebuttal: Thank you for your comprehensive review and valuable insights.
> Q1. Their approach resembles a data augmentation technique, like in DiffBIR, suggesting that pairing I2SB with it might yield similar results, indicating potential simplicity.
- **Our P-I2SB is not a data augmentation method.** Data... | Summary: This paper proposes the Pseudo-Hashing Image-to-Image Schrödinger Bridge (P-I2SB), a novel framework for blind face restoration (BFR). The key insight of this paper is that using raw LQ images directly as the starting point for the reverse diffusion process is theoretically optimal.
The authors argue that Schr... | Rebuttal 1:
Rebuttal: Thank you for your detailed feedback and constructive suggestions. Your input is crucial to refining and enhancing the quality of our paper.
> Q1. The computational complexity of PHM preprocessing is not well analyzed. What is the computational cost of PHM preprocessing relative to standard diffus... | Summary: This paper proposes P-I2SB, a novel framework for blind face restoration that leverages a pseudo-hashing strategy to preprocess image pairs and a Schrödinger Bridge Module (SBM) to learn optimal transport paths between LQ and HQ distributions. The key innovation lies in directly using raw LQ images as endpoint... | Rebuttal 1:
Rebuttal: We sincerely appreciate the thorough review and insightful comments you have provided.
> Q1. The paper does not thoroughly analyze the computational overhead of the pseudo-hashing strategies (Cat/Res/Noise-I2SB) compared to baseline methods, despite claiming retained inference speed.
- **Computat... | null | null | null | null | null | null |
EraseAnything: Enabling Concept Erasure in Rectified Flow Transformers | Accept (poster) | Summary: This paper highlights the limitations of existing concept-erasing methods, such as CA, ESD, and UCE, which were developed for Stable Diffusion models utilizing U-Net, cross-attention, and CLIP text encoders. The authors argue that these methods are ineffective for Flux, a modern multi-modal diffusion transform... | Rebuttal 1:
Rebuttal: Thank you for your detailed comments and interest in our work!
- **(A) Limited Fine-Tuning due to VRAM Constraints**
We acknowledge the reviewer's concern regarding limited fine-tuning. Due to 80GB VRAM constraints on our single A100, full fine-tuning was infeasible. We opted for LoRA, priorit... | Summary: Given that current text-to-image models can generate inappropriate content related to pornography, violence, or copyright violations, the problem of effective concept erasure has become a critical research topic. Existing methods have proven effective for Stable Diffusion but are challenging to directly adapt ... | Rebuttal 1:
Rebuttal: We sincerely appreciate your insightful feedback. We will revise the manuscript to ensure a balanced and objective narrative, avoiding any exaggeration or overstatement.
- **(A) Difference between bi-level optimization and multi-objective optimization**
We frame unsafe concept erasing as a bi-le... | Summary: In this paper, the authors propose a methodology for concept unlearning while ensuring the preservation of unrelated concepts in the latest text-to-image (T2I) models based on Flow Matching and Transformer-based diffusion models such as Flux. The authors introduce a bi-level optimization (BO) framework. The lo... | Rebuttal 1:
Rebuttal: Thank you for your kind words and recognition!
- **(A) Adversarial attack experiments**
Thank you for the suggestion to include adversarial attack experiments, which we consider very important. Following the paper's methodology, we used `NudeNet` (Bedapudi, 2019) with a detection threshold... | Summary: This paper introduces EraseAnything, a flux-based concept erasing method designed to selectively remove target concepts while preserving irrelevant ones. The authors employ a bi-level optimization strategy to mitigate overfitting and catastrophic forgetting—key challenges in concept erasure. Experimental evalu... | Rebuttal 1:
Rebuttal: Thank you for your kind words and review!
To be concise:
* **UCE**'s aggressive nudity removal significantly distorts images.
* **EraseAnything** prioritizes image quality and text alignment, offering a better trade-off.
As shown in [this image](https://imgur.com/a/at2lkh8), optimizing **'K'** ... | null | null | null | null | null | null |
Mahalanobis++: Improving OOD Detection via Feature Normalization | Accept (poster) | Summary: The paper proposes a simple fix to the Post-Hoc OOD detection technique based on the Mahalanobis distance computed on the feature space of the neural network of interest. This simple fix consists of normalizing the features by their $l_2$ norm before computing the distance. The authors emphasize how the sample... | Rebuttal 1:
Rebuttal: We thank the reviewer for the thoughtful comments and appreciate the positive feedback. Below we address the reviewer's remarks:
- __“The fix intends to alleviate the difference between the feature norms of samples from different classes”__
We would like to clarify that different feature norms... | Summary: This submission focuses on the OOD detection task and it proposes a simple yet effective method for improving the Mahalanobis distance approach.
## update after rebuttal
The authors' rebuttal has largely addressed my concerns, and I thus maintain my positive rating.
Claims And Evidence: While in a mixture of the... | Rebuttal 1:
Rebuttal: We thank the reviewer for their positive feedback, and for appreciating our work. We address the remarks below:
1. __"organize related work section"__ and __"elaborate difference to existing similar methods"__
We will extend the discussion about related work, and emphasize the differences to... | Summary: The paper revisits the Mahalanobis distance for out-of-distribution detection. It first examines how the assumptions underlying the Mahalanobis distance for OOD detection are violated by a variety of models. It then proposes a maximally simple but effective remedy by applying l2-normalization to the pre-logit ... | Rebuttal 1:
Rebuttal: We thank the reviewer for carefully reading and evaluating our paper, and we are glad that the reviewer finds that our claims are __“supported by clear and convincing evidence”__, that our method is __“well motivated”__, and that they appreciate the __“wide variety of model types, architectures a... | Summary: This paper presents a holistic empirical analysis illustrating how the representations of most vision backbones violate the assumed Gaussian distribution. From this observation, the paper introduces a variation of the Mahalanobis distance for OOD detection called Mahalanobis++. Extensive experiments ... | Rebuttal 1:
Rebuttal: We thank the reviewer for their valuable comments and address the remarks below:
- __"the features should be concentrated around $\sqrt{\mathrm{tr}(\Sigma)-\|{\mu}\|^2_2}$" (in Lemma 3.1)__
We thank the reviewer for checking our proof, but we strongly believe that the term $\sqrt{\mathrm{tr}(... | null | null | null | null | null | null |
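The fix discussed throughout this row is easy to state in code: l2-normalize the pre-logit features before fitting the class means and shared covariance, and again before scoring. The sketch below is a generic version with an illustrative ridge term and synthetic clusters, not the authors' exact implementation.

```python
import numpy as np

def mahalanobis_pp_scores(feats, train_feats, train_labels):
    """Min class-wise Mahalanobis distance on l2-normalized features
    (lower score = more in-distribution)."""
    norm = lambda z: z / np.linalg.norm(z, axis=-1, keepdims=True)
    f, tf = norm(np.asarray(feats, float)), norm(np.asarray(train_feats, float))
    classes = np.unique(train_labels)
    mus = np.stack([tf[train_labels == c].mean(axis=0) for c in classes])
    centered = tf - mus[np.searchsorted(classes, train_labels)]
    cov = centered.T @ centered / len(tf)          # shared (pooled) covariance
    prec = np.linalg.pinv(cov + 1e-6 * np.eye(cov.shape[0]))  # ridge for stability
    diff = f[:, None, :] - mus[None, :, :]
    return np.einsum('nkd,de,nke->nk', diff, prec, diff).min(axis=1)

rng = np.random.default_rng(0)
train = np.concatenate([rng.normal([5, 0, 0], 0.1, (50, 3)),
                        rng.normal([0, 5, 0], 0.1, (50, 3))])
labels = np.array([0] * 50 + [1] * 50)
scores = mahalanobis_pp_scores([[5.0, 0, 0], [0, 0, 5.0]], train, labels)
```

On this toy data the in-distribution query (near class 0) scores far lower than the off-manifold one, and the normalization removes the per-class feature-norm differences the rebuttals discuss.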
ABKD: Pursuing a Proper Allocation of the Probability Mass in Knowledge Distillation via $\alpha$-$\beta$-Divergence | Accept (oral) | Summary: This paper investigates a fundamental challenge in Knowledge Distillation (KD): the improper allocation of probability mass when using traditional divergences like Forward KL Divergence (FKLD) and Reverse KL Divergence (RKLD). FKLD tends to spread probability mass too broadly, failing to pay sufficient attenti... | Rebuttal 1:
Rebuttal: We sincerely thank the reviewer for recognizing the theoretical foundations, clarity of contributions and experiments, and the improved performance demonstrated by our method. Our response follows:
> Q1: I wonder whether the observation is consistent across different datasets. I hope the author c... | Summary: The paper introduces ABKD, a knowledge distillation (KD) framework using alpha-beta-divergence to balance the "hardness-concentration" (focus on high-error classes) and "confidence-concentration" (focus on high-confidence classes) effects. Theoretical analysis shows that FKLD and RKLD represent extreme cases o... | Rebuttal 1:
Rebuttal: We thank the reviewer for the thoughtful comments and suggestions. Our response follows:
> Q1: The experiments should include a comparison of the performance of ABKD in these degenerate cases with the original FKLD and RKLD
**A1**: When ABKD degenerates to FKLD and RKLD, its performance matches t... | Summary: The paper discusses the main challenges in knowledge distillation, which lie in the proper balance between two modes: (1) hardness concentration and (2) confidence concentration. They provide a smoother transition between the reverse and forward KL divergences via the integration of alpha-beta divergence Th... | Rebuttal 1:
Rebuttal: We thank the reviewer for taking the time to provide their helpful and valuable feedback. Our response follows (**please see https://anonymous.4open.science/r/ICML-rebuttal-experiments/results.md for all rebuttal experiments**):
>Q1: Essential References Not Discussed
**A1:** We will include the ... | null | null | null | null | null | null | null | null |
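The two "extreme cases" referred to throughout this row are plain forward and reverse KL between teacher and student distributions; ABKD's α-β-divergence family interpolates between them. A toy numpy illustration of the asymmetry (the logits are made up for illustration):

```python
import numpy as np

def softmax(z, temperature=1.0):
    z = np.asarray(z, dtype=float) / temperature
    e = np.exp(z - z.max())
    return e / e.sum()

def kl(p, q):
    """KL divergence between two discrete distributions."""
    return float(np.sum(p * np.log(p / q)))

teacher = softmax([4.0, 2.0, 0.5, 0.1])
student = softmax([1.0, 1.0, 1.0, 1.0])      # uninformed (uniform) student

fkld = kl(teacher, student)   # FKLD: mass-covering, penalizes missed teacher mass
rkld = kl(student, teacher)   # RKLD: mode-seeking, penalizes mass on teacher tails
```

The two directions disagree on the same teacher/student pair, which is exactly the hardness- vs confidence-concentration tension the summary describes.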
Convergence of Consistency Model with Multistep Sampling under General Data Assumptions | Accept (poster) | Summary: This paper analyzes the convergence of consistency models under approximate self-consistency. With mild data assumptions, it proves sample closeness to the target distribution in Wasserstein or total variation distance. The study applies to various forward processes and highlights the benefits of multistep sam... | Rebuttal 1:
Rebuttal: Thank you for your feedback. We address your points in detail below:
1. **non-uniform discretization:** in this paper, we adopt a uniform discretization for clarity and ease of presentation. However, our results can be extended to the non-uniform discretization setting as well. Suppose $\tau_{0:M}... | Summary: The paper studies the convergence of consistency models with assumptions on the consistency property. It further assumes that the target data distribution has bounded support. In this case, it shows the convergence result in Wasserstein distance and total variation distance. The theoretical results indicate th... | Rebuttal 1:
Rebuttal: Thank you for your feedback. We address your points in detail below:
1. **Interpretation of Theorem 2:** the trade-off means that increasing the number of sampling steps does not necessarily lead to improved performance due to the influence of term (ii). This is in contrast to standard diffusion models, wh... | Summary: This paper analyzes consistency models—a recently introduced approach for accelerating sampling in diffusion-based generative models. Unlike classical diffusion models that rely on multiple iterative score-based updates, consistency models learn a direct mapping (“consistency function”) from noise to data whil... | Rebuttal 1:
Rebuttal: Thank you for your feedback. Please see our detailed responses below:
1. **Regarding high-dimension issue:** even when accounting for the implicit dependency on the dimension, our upper bound remains at most polynomial in dimension and thus does not suffer from the curse of dimensionality. For exa... | null | null | null | null | null | null | null | null |
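The multistep sampler analyzed in this row alternates "map to a clean sample, then re-noise at a smaller time." A toy runnable sketch, assuming a VE-style forward process x_t = x_0 + t·z and an oracle consistency function for a point-mass target at 2 — both assumptions for illustration, not the paper's setting:

```python
import numpy as np

def multistep_consistency_sample(f, x_T, times, rng):
    """Multistep consistency sampling: apply the consistency function, then
    re-noise to the next (smaller) time level and apply it again."""
    x = f(x_T, times[0])
    for t in times[1:]:
        x = f(x + t * rng.normal(size=np.shape(x)), t)
    return x

# Oracle consistency map for a point-mass target at 2: every noisy input
# is sent straight to the target.
f = lambda x, t: np.full_like(np.asarray(x, dtype=float), 2.0)
rng = np.random.default_rng(0)
sample = multistep_consistency_sample(f, rng.normal(size=3), [80.0, 10.0, 1.0], rng)
```

With a learned (approximately self-consistent) f, each extra round trades off error contraction against the re-injected noise, which is the tension behind term (ii) in the rebuttal above.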
Universal Approximation of Mean-Field Models via Transformers | Accept (poster) | Summary: The papers consider a mild variant of the transformer model, as part of a larger literature connecting transformers and maps to/from probability measures. Their main result, from my perspective, is Theorem 4.14 which provides a sort of "small time" approximation guarantees that their version of the transforme... | Rebuttal 1:
Rebuttal: We thank the reviewer for the comments and feedback, and we thank the reviewer for finding that our results show that "transformer model can efficiently approximate certain MF ODEs." We hope that this response answers the reviewer's concerns.
> Assumption 4.3
Assumption 4.3 a) is an assumption ... | Summary: The authors study how transformers can be used to approximate mean-field models. The analysis is both theoretical and empirical.
Empirically, they test the transformers on two different mean-field models. Theoretically, they provide bounds in terms of the $L_\infty$ distance between the expected transformer and th... | Rebuttal 1:
Rebuttal: We thank the reviewer for the comments and feedback. We thank the reviewer for finding that the problem we study has "very important implications in machine learning" and that "the paper looks original to me and the question the authors try to answer is fundamental". We hope that this response ans... | Summary: This paper shows, both empirically and with theoretical guarantees, that mean-field dynamics ("transport-type" dynamical systems over the space of probability measures, i.e., which take the form of a continuity equation $\partial_t \mu_t = -\nabla_z \cdot (\mu_t \mathcal{F}(z,\mu_t))$) can be approximated up t... | Rebuttal 1:
Rebuttal: We thank the reviewer for the comments and feedback. We thank the reviewer for finding that "the paper shows end-to-end guarantees" and that it is "worthwhile to write those bounds down properly, which this paper does well," and for finding that the paper takes a novel, "arguably simpler approach... | Summary: This paper explores the application of transformers in modeling the mean-field dynamics of interacting particle systems. The study empirically shows that transformers can effectively approximate diverse mean field models, such as the Cucker-Smale model and systems for training two-layer neural networks. It su... | Rebuttal 1:
Rebuttal: We thank the reviewers for their questions and comments, which help improve the paper. We thank the reviewer for finding that our paper "supports these empirical findings with mathematical theory" and "establishes theoretical bounds on these errors, enhancing the understanding of transformer capab... | null | null | null | null | null | null |
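The Cucker-Smale model the reviews mention is a convenient concrete mean-field system to keep in mind: velocities align through a distance-weighted interaction kernel. A small Euler-discretized simulation (the kernel constants and step size are illustrative, not taken from the paper):

```python
import numpy as np

def cucker_smale_step(x, v, dt, K=1.0, beta=0.5):
    """One Euler step of the Cucker-Smale flocking model: each particle pulls
    its velocity toward neighbors, weighted by a decaying kernel psi."""
    diff = x[None, :, :] - x[:, None, :]            # pairwise displacements
    dist2 = (diff ** 2).sum(-1)
    psi = K / (1.0 + dist2) ** beta                 # communication kernel
    dv = (psi[:, :, None] * (v[None, :, :] - v[:, None, :])).mean(axis=1)
    return x + dt * v, v + dt * dv

rng = np.random.default_rng(0)
x, v = rng.normal(size=(20, 2)), rng.normal(size=(20, 2))
spread0 = float(np.var(v, axis=0).sum())
for _ in range(200):
    x, v = cucker_smale_step(x, v, dt=0.05)
spread = float(np.var(v, axis=0).sum())             # shrinks as the flock aligns
```

This transport-type dynamic is exactly the kind of measure-to-measure map the papers above ask a transformer to approximate.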
Gated Integration of Low-Rank Adaptation for Continual Learning of Language Models | Reject | Summary: This manuscript focuses on the continual learning of language models. Unlike existing continual learning studies based on LoRA, which treat the new and old LoRA branches as contributing equally to old tasks, the authors propose a new method, gated integration of low-rank adaptation (GainLoRA). Specifically,... | Rebuttal 1:
Rebuttal: **Q1: memory and computational overhead regarding the subspace construction**
**A1:** The memory and computational overhead of subspace construction in GainLoRA is minimal due to the small size of the gating module (only 3 layers, see Appendix B.3). We provide detailed analyses below.
Memory: Th... | Summary: This paper introduces GainLoRA, which integrates LoRA with gating mechanisms. GainLoRA expands a new LoRA branch for each task while incorporating task-specific gating modules, for mitigating catastrophic forgetting. Experimental results demonstrate strong performance and provide comprehensive ablations.
Clai... | Rebuttal 1:
Rebuttal: **Q1: The idea of using a mixture of LoRA branches is not novel, as it closely resembles the MoE LoRA framework.**
**A1:** Our method is fundamentally different from existing MoE LoRA frameworks, as it specifically addresses continual learning (CL) in a rehearsal-free setting where task identitie... | Summary: The paper introduces GainLoRA, an approach to mitigate catastrophic forgetting in task incremental continual learning scenarios leveraging gated integration of low-rank adapters. This approach expands LoRA branches for each task and introduces gating modules to dynamically control the impact of each branch. Un... | Rebuttal 1:
Rebuttal: **Q1: the cumulative parameter count increases ... potentially limiting scalability in ... a large number of tasks**
**A1:** We admit that cumulative parameters increase with more tasks, but scaling to a large number of tasks remains a challenge in CL. To the best of our knowledge, our 15-task se... | Summary: The paper proposes a method for computing the weighting factor of different LoRA components in a continual learning setting. The approach is based on training a new set of LoRA parameters for each new task alongside a gating network. This network is constructed such that it outputs a value of 0 at 0. The metho... | Rebuttal 1:
Rebuttal: **Q1: the constraint to not carry any data forward seems a bit artificial to me when a new lora module + gate function is added for each task (hence memory use scale linearly with the number of tasks anyway).**
**A1:** The constraint to not carry any data forward is not merely about saving memory ... | null | null | null | null | null | null
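The mechanism discussed across these reviews — per-task LoRA branches whose contributions are scaled by learned gates, with a closed gate reducing to the frozen base model — can be sketched as follows. The shapes and scalar gates are illustrative; GainLoRA's gating modules are small networks, not fixed scalars.

```python
import numpy as np

def gated_lora_forward(x, W0, branches, gates):
    """Gated sum of LoRA branches on top of a frozen base weight W0.
    Each branch is (A, B) with low rank r; each gate is a scalar in [0, 1]."""
    y = W0 @ x
    for (A, B), g in zip(branches, gates):
        y = y + g * (B @ (A @ x))                  # rank-r update, scaled by gate
    return y

rng = np.random.default_rng(0)
d, r = 8, 2
W0 = rng.normal(size=(d, d))
branch = (rng.normal(size=(r, d)), rng.normal(size=(d, r)))
x = rng.normal(size=d)
y_off = gated_lora_forward(x, W0, [branch], gates=[0.0])  # gate closed: base only
y_on = gated_lora_forward(x, W0, [branch], gates=[1.0])
```

A new branch whose gate outputs 0 on old-task inputs leaves old-task behavior untouched, which is the forgetting-mitigation argument the rebuttals make.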
How Classifiers Extract General Features for Downstream Tasks: An Asymptotic Analysis in Two-Layer Models | Reject | Summary: The paper investigates how classifiers learn general features that can be directly applied to new tasks without further training. It considers a two-layer neural network trained with a single gradient descent step on a mean‐squared error loss. In an asymptotic regime—where the number of samples, input dimensio... | Rebuttal 1:
Rebuttal: Attachment link: anonymous.4open.science/r/icmlrebuttal-3B6F
Thank you for recognizing the theory as standard and for positively evaluating the experiments. We understand that you wanted us to **clarify the relationship between the relevant studies and ours** in order to highlight the **novelty**... | Summary: This paper studies how a two-layer classifier, trained by mean-squared error for multi-class problems, learns features that can cluster unseen data. The main theoretical result is an exact characterization of a single-step gradient update of the network features, derived under proportional asymptotics (sampl... | Rebuttal 1:
Rebuttal: Attachment link: anonymous.4open.science/r/icmlrebuttal-3B6F
Thank you for your comments. We also appreciate your feedback that the motivation of our work and the phenomenology are very interesting.
After reviewing your comments, we found that you suggested improvements for readability, understo... | Summary: This paper explores how classifiers extract general features for transfer to new distributions. It analyzes a two-layer network in the proportional regime, decomposing features into components like random initialization and spikes related to training classes. In binary classification, train-unseen similari... | Rebuttal 1:
Rebuttal: Attachment link: anonymous.4open.science/r/icmlrebuttal-3B6F
Your thoughtful feedback is a great encouragement and reaffirms our commitment to furthering this research.
We understand that your primary concern lies in fairness during experimental validation.
Thus, the primary purpose of this rebu... | null | null | null | null | null | null | null | null |
Sliding Puzzles Gym: A Scalable Benchmark for State Representation in Visual Reinforcement Learning | Accept (poster) | Summary: The paper introduces a sliding puzzle based environment for evaluating visual RL. It provides a number of baselines on the environment.
Claims And Evidence: Yes.
Methods And Evaluation Criteria: Yes.
Theoretical Claims: N/A
Experimental Designs Or Analyses: Yes. It is sound.
Supplementary Material: Yes, I... | Rebuttal 1:
Rebuttal: We thank the reviewer for their positive assessment of our experimental design and literature review, and for the opportunity to clarify the core motivation behind SPGym regarding the evaluation of visual representation capabilities.
## Why SPGym Tests Visual Representation Capabilities
The cent... | Summary: This paper presents SPGym, a new benchmark for visual reinforcement learning (RL) based on the classic 8-tile puzzle. SPGym uses a visual observation space derived from large datasets and allows researchers to manipulate representation complexity by adjusting visual diversity. Experiments using model-free and ... | Rebuttal 1:
Rebuttal: We thank the reviewer for the positive feedback on SPGym's potential and the detailed, constructive comments regarding the evaluation settings and experimental design. We address the concerns below:
## 1. In-Distribution Evaluation
We understand the concern that Table 2 results might seem like o... | Summary: The paper introduces SPGym, a novel benchmark for visual RL that extends the classic sliding puzzle by replacing numbered tiles with image patches. This enables scaling visual diversity while keeping the puzzle dynamics fixed, with the aim of isolating representation learning from policy learning. The authors ... | Rebuttal 1:
Rebuttal: We sincerely thank the reviewer for their thorough and constructive feedback. We appreciate the recognition of SPGym's potential and the detailed suggestions, which will significantly improve the paper.
## 1. Answers to Direct Questions
1. **Missing Tile:** The missing tile's starting position ... | null | null | null | null | null | null | null | null |
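The dynamics SPGym builds on are just the classic sliding-tile puzzle; the benchmark's contribution is replacing numbered tiles with image patches of controllable visual diversity. A minimal version of the tile dynamics (rendering omitted; the reward convention here is an assumption, not SPGym's API):

```python
import numpy as np

class SlidingPuzzle:
    """Minimal 3x3 sliding-tile dynamics (0 = blank). SPGym additionally
    renders each tile as an image patch, which is omitted here."""
    MOVES = {"up": (-1, 0), "down": (1, 0), "left": (0, -1), "right": (0, 1)}

    def __init__(self):
        self.board = np.arange(9).reshape(3, 3)     # start from the solved state

    def step(self, action):
        r, c = map(int, np.argwhere(self.board == 0)[0])
        dr, dc = self.MOVES[action]
        nr, nc = r + dr, c + dc
        if 0 <= nr < 3 and 0 <= nc < 3:             # slide the neighbor into the blank
            self.board[r, c], self.board[nr, nc] = self.board[nr, nc], self.board[r, c]
        return self.board.copy(), float(self.is_solved())

    def is_solved(self):
        return bool((self.board == np.arange(9).reshape(3, 3)).all())
```

Because the transition function is fixed, scaling only the tile images isolates representation difficulty from control difficulty — the design point the rebuttals defend.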
VTGaussian-SLAM: RGBD SLAM for Large Scale Scenes with Splatting View-Tied 3D Gaussians | Accept (poster) | Summary: To address a high memory consumption issue of 3DGS, this paper proposes view-tied 3DGS, which determines Gaussians based on the views.
The Gaussians from the last frame are tracked and processed in sections. Since the method is view-dependent, it efficiently reduces the storage required for location, rotation,... | Rebuttal 1:
Rebuttal: Thanks for your review and positive comments on our idea, contributions, evaluations, and supplementary materials.
### **1. Overview**
We will revise the overview accordingly to make it easier to follow.
### **2. Runtime comparison in Tab.11**
As stated in Lines 434-438 left, we manage to optimi... | Summary: - This paper addresses the limitation of traditional 3DGS-SLAM methods, which struggle to scale up to extremely large scenes due to inefficient tracking and mapping strategies.
- The authors propose tracking and mapping strategies based on a new 3D representation called view-tied 3D Gaussians, which simplifie... | Rebuttal 1:
Rebuttal: Thanks for your review and positive comments on motivation and performance.
### **1. Results on city-level scenes**
Following SplaTAM, we evaluate on the widely used benchmarks such as ScanNet++ and demonstrate superior storage efficiency, learning 20 times more Gaussians for more detailed renderi... | Summary: The paper presents VTGaussian-SLAM, a novel RGBD SLAM system that utilizes view-tied 3D Gaussians for efficient mapping and tracking in large-scale scenes. It introduces the representation of Gaussians tied to depth pixels, thus improving optimization efficiency and reconstruction quality while enabling better... | Rebuttal 1:
Rebuttal: Thanks for your review and positive comments on our idea, contributions, evaluations, and supplementary materials.
### **1. Impact of section length**
For fair comparisons with previous methods in rendering quality, we adopt the same number of iterations for mapping. But our Gaussians are view-t... | Summary: This work presents VTGaussian-SLAM, a novel method for RGB-D SLAM by a novel view-tied 3D Gaussian representation, with corresponding tracking and mapping methods.
This method does reduce parameter optimization (e.g., exact localization, rotation and covariance parameters), so the system can store many more G... | Rebuttal 1:
Rebuttal: Thanks for your review and positive comments on our idea and evaluations.
### **1. Benchmark selection and large-scale scenes**
We follow previous methods like SplaTAM to report our evaluations on the widely used benchmarks such as ScanNet++. We also show our advantages in storage complexity, wh...
Hyperflows: Pruning Reveals the Importance of Weights | Reject | Summary: The paper proposes a 'prune and regrow' approach during training. The concept of hyperflows and pressure is introduced. Hyperflows behave as a sort of saliency measure for each neural network weight. Pressure is used to control the sparsity of the network. Pruning during training behavior is analyzed in order ... | Rebuttal 1:
Rebuttal: Thank you for the insightful questions. We would like to clarify the inspiration, novelty, and practical advantages of Hyperflows:
**C:** “Is the idea behind this approach coming from max-flows?”
**R:** While the notion of “flow” might remind us of max-flow formulations in network theory, our a... | Summary: This work proposes a novel algorithm that measures the importance of weights by observing their gradients during a dynamic pruning process. Weights believed to be important regrow at a later stage. Overall, the proposed algorithm shows better performance than the methods it is compared against in this work.
## ... | Rebuttal 1:
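The prune-and-regrow idea summarized in these reviews can be illustrated with a generic magnitude-prune plus gradient-regrow step (an editorial sketch in the same spirit, not Hyperflows' flow/pressure mechanism; the function name and fractions are hypothetical):

```python
import numpy as np

def prune_and_regrow(w, grad, sparsity, regrow_frac):
    # Drop the smallest-magnitude weights, then revive the pruned positions
    # whose gradient magnitude is largest (they "want" to change the most).
    n_keep = int(round(w.size * (1 - sparsity)))
    keep = np.zeros(w.size, dtype=bool)
    keep[np.argsort(-np.abs(w))[:n_keep]] = True
    n_regrow = int(round(w.size * regrow_frac))
    pruned_idx = np.flatnonzero(~keep)
    revive = pruned_idx[np.argsort(-np.abs(grad[pruned_idx]))[:n_regrow]]
    keep[revive] = True
    return w * keep, keep

rng = np.random.default_rng(0)
w, g = rng.normal(size=100), rng.normal(size=100)
w_sparse, mask = prune_and_regrow(w, g, sparsity=0.9, regrow_frac=0.05)
print(mask.sum())  # 10 kept by magnitude + 5 regrown = 15
```

In a training loop this step would alternate with ordinary gradient updates, with the sparsity target playing the role the reviews attribute to the global "pressure".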
Rebuttal: We thank the reviewer for the analysis and highlighting potential issues of the manuscript. We address them below.
**C:** “The author might want to clearly compare the actual computing cost associated with this new algorithm.”
**R:** We compared Hyperflows with methods that do not require additi... | Summary: The authors propose Hyperflows, a pruning-during-training method. It assigns each parameter a learnable parameter to determine if a certain parameter should be pruned. The effectiveness of Hyperflows is tested across multiple datasets, including CIFAR10, CIFAR100, and ImageNet. It outperforms baseline methods ... | Rebuttal 1:
Rebuttal: We thank the reviewer for the constructive feedback. We address the concerns below:
**C:** "differences between Hyperflows and Learnable masks with STE dense-to-sparse methods."
**R:**
**Common aspects:**
- Learnable Masks, L0 global pressure, STE for mask parameters.
**Technical differences:*... | Summary: This work proposes a novel method for the pruning of parameters from deep neural network models. It focuses upon the principle of defining a network topology based upon each parameter having a measure which captures a tradeoff between a pruning 'pressure' which is applied to every node, as well as a measure of... | Rebuttal 1:
Rebuttal: We thank the reviewer for their detailed feedback and for recognizing the potential and novelty of our proposed Hyperflows method. Below we address the main concerns raised:
**C:** “There is inconsistency, for example the methods mention that 160 epochs of training are used, however according to ...
Certification for Differentially Private Prediction in Gradient-Based Training | Accept (poster) | Summary: This paper presents a certification algorithm for assessing the stability of model predictions, which helps reduce the smooth sensitivity of the predictions. By providing a tighter bound on smooth predictions, the algorithm enhances the accuracy of private predictions. Empirical experiments demonstrate that th... | Rebuttal 1:
Rebuttal: We thank the reviewer for their time and consideration of our work.
* Could you also give the memory overhead for the Algorithm 1? For example, when running gpt2, what's the memory consumption compared to DP-SGD?
Both DP-SGD and AGT require the computation of per-sample gradients, incurring a l... | Summary: This paper introduces a new approach for improving differential privacy in machine learning predictions. The authors propose a method to compute tighter dataset-specific upper bounds on prediction sensitivity by using convex relaxation and bound propagation techniques. Their approach called abstract gradient t... | Rebuttal 1:
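For background on the bound-propagation technique this review mentions, here is a minimal interval-bound-propagation step through a single affine + ReLU layer (a generic IBP sketch, not the paper's AGT implementation):

```python
import numpy as np

def ibp_affine(lo, hi, W, b):
    # Interval arithmetic for x -> W @ x + b when x is elementwise in [lo, hi].
    W_pos, W_neg = np.maximum(W, 0.0), np.minimum(W, 0.0)
    return W_pos @ lo + W_neg @ hi + b, W_pos @ hi + W_neg @ lo + b

def ibp_relu(lo, hi):
    # ReLU is monotone, so it maps interval bounds elementwise.
    return np.maximum(lo, 0.0), np.maximum(hi, 0.0)

rng = np.random.default_rng(0)
W, b = rng.normal(size=(3, 2)), rng.normal(size=3)
lo, hi = np.array([-0.1, 0.2]), np.array([0.1, 0.4])
l1, h1 = ibp_relu(*ibp_affine(lo, hi, W, b))

# Soundness: any concrete point in the input box stays inside the output bounds.
x = np.array([0.0, 0.3])
z = np.maximum(W @ x + b, 0.0)
print(bool(np.all((l1 <= z) & (z <= h1))))  # True
```

Propagating such boxes through all layers yields certified output bounds; tighter relaxations of the kind the review describes shrink these boxes further.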
Rebuttal: We thank the reviewer for their time and careful review of our work.
* The authors claim that fewer than 10 runs of the AGT algorithm are ... sufficient ... [this] lacks a more systematic sensitivity analysis.
We appreciate the reviewer's point that, though empirically we find a small number o... | Summary: This paper studies upper bounds on the sensitivity of prediction in machine learning models. By doing that, the paper presents tighter privacy analysis. After which, experimental results showing a wide improvement in the tightness of the privacy bounds.
## update after rebuttal
I raised my score to a 4.
Claim... | Rebuttal 1:
Rebuttal: We thank the reviewer for their kind words and for the careful work of checking the proof and technical steps of our work. In their review they did not necessarily provide a strong signal of the weaknesses they would like to see addressed in relation to their score. We hope that both our response ... | Summary: The paper proposes to bound local sensitivity of predictions of models learned with gradient-based methods using interval bound propagation. Further, the paper uses the result to construct a sample-and-aggregate procedure for prediction ensembles. The paper then demonstrates that using the proposed bounds enab... | Rebuttal 1:
Rebuttal: We thank the reviewer for their time and consideration of our work.
* It is quite strange to not see a comparison with PATE. Indeed, this looks like one additional step of training student models on top of the experiment in Fig. 5.
Fig. 4 illustrates the privacy-utility tradeoff of our method ...
SCENT: Robust Spatiotemporal Learning for Continuous Scientific Data via Scalable Conditioned Neural Fields | Accept (poster) | Summary: This paper presents SCENT, which is a scalable and continuity-informed spatiotemporal learning framework designed to model complex scientific data. Using a transformer-based architecture with learnable queries and sparse attention, it unifies interpolation, reconstruction, and forecasting. Extensive experiment... | Rebuttal 1:
Rebuttal: We sincerely thank the reviewer for the insightful discussions, suggested papers, and constructive comments. It was a pleasant surprise to find substantial similarities as well as subtle yet important distinctions between SCENT, STFNN, and the referenced works. We found STFNN's inference mechanism... | Summary: The authors introduce a new model called SCENT for spatiotemporal modelling such as for differential equations like Navier-stokes. This model can take irregular input data and generate outputs at arbitrary locations and times, and so is capable of forecasting and spatial interpolation. This model has an encode... | Rebuttal 1:
Rebuttal: **1. Hyperparameters**
We appreciate the reviewer’s thoughtful questions regarding the extent of hyperparameter tuning conducted for SCENT in comparison to the baseline methods. This is indeed a crucial aspect when evaluating model performance fairly across methods. We included detailed hyperpar... | Summary: This paper introduces SCENT, a framework for spatiotemporal learning using Scalable Conditioned Neural Fields (CNFs). The model is built on a Transformer-based encoder-processor-decoder architecture, incorporating learnable queries and a query-wise cross-attention mechanism to capture multi-scale dependencies.... | Rebuttal 1:
Rebuttal: We truly appreciate your valuable comments. We agree on your concerns and suggestions, hence provide below our thoughts and additional experimental results for each of the questions.
**1. On Fourier feature**
Although Fourier features are well established, their formulation can vary. Her... | Summary: This paper addresses common issues in scientific data, such as sparsity, noise, and multi-scale problems, by proposing a method called SCENT (Scalable Conditioned Neural Field) that can handle various spatio-temporal learning tasks like interpolation, reconstruction, and prediction. The paper is well-structure... | Rebuttal 1:
Rebuttal: We appreciate the reviewer's suggestion. We agree that the Kuroshio current provides an excellent yet challenging testbed for evaluating SCENT. We use 50-year records from the China Ocean Reanalysis (CORA) [1] as our benchmark and follow the data processing guidelines established by Wu et al. (202...
How to Train Your Multi-Exit Model? Analyzing the Impact of Training Strategies | Accept (poster) | Summary: A multi-exit neural network is a model that can exit at different layers. It remains a challenge to find an optimal way to increase the accuracy of earlier exits without reducing the accuracy of the last layer.
This paper focuses on one angle: what is the best way to train a multi-exit model. The p... | Rebuttal 1:
Rebuttal: We sincerely appreciate the reviewer’s thoughtful evaluation of our work and their recognition of its significance. We apologize for brief answers necessitated by the response length limit. Kindly if we resolved the concerns raised, we would be grateful if the reviewer would consider raising their... | Summary: The authors study different training regimes for early-exit networks (EENNs). To this end, they propose a framework consisting of 4 different metrics (gradient dominance, mode connectivity, numerical rank, mutual information) for studying the tranining dynamics of EENNs. They use the framework to explore the d... | Rebuttal 1:
Rebuttal: We sincerely appreciate and agree with the reviewer’s assessment that our work addresses a meaningful gap in the early-exiting literature.
> numerical rank informativeness
Firstly, the key insight of the numerical rank metric is that placing multiple early exits increases the rank and also the e... | Summary: The paper presents an enhanced early-exit training approach that combines two phases: initial backbone training followed by full multi-exit network training. This mixed strategy addresses the shortcomings found in both joint and disjoint training methods. While the paper presents its methodology clearly and pr... | Rebuttal 1:
Rebuttal: We thank the reviewer for the effort spent on reviewing our paper and the valuable insights. If we further addressed the remaining concerns, we would kindly ask for possible score reconsideration.
> The authors do not experiment on SOTA settings.
> The results presented in Table 1~6 are prob... | Summary: This submission analyses different training strategies for early-exit models, namely disjoint (frozen backbone), joint (end-to-end) and the proposed mixed (backbone pretraining + joint) approach. Several metrics are proposed, including gradient dominance, mode connectivity, numeric rank and mutual information ... | Rebuttal 1:
Rebuttal: We thank the reviewer for assessing our work and recognizing the importance of this previously overlooked aspect. We hope that the answers below adequately answered the reviewer's questions and concerns. If that is the case, we kindly ask for a reconsideration of the score.
> number and placemen...
Latent Diffusion Planning for Imitation Learning | Accept (spotlight poster) | Summary: The paper proposes latent diffusion planning (LDP), a method for imitation learning featuring 3 components: 1) A variational autoencoder, mapping images to a latent spaces 2) A latent diffusion planner, which generates a sequence of latent states that the policy should visit 3) An inverse dynamics model, also ... | Rebuttal 1:
Rebuttal: Dear Reviewer,
Thank you for your detailed feedback on our project. We are happy that you found the paper to be well-written and clear. We address your questions and comments below. Please let us know whether there are any other concerns you have that prevent you from increasing your score.
**Q1... | Summary: - This paper ultimately aims to do some form of imitation learning in robotic settings
- It does this with a modular approach, using: 1) a 'planner' to predict sequences of observations from those provided by an expert demonstrator. 2) an IDM predicting actions from past and future observations.
- At inferenc... | Rebuttal 1:
Rebuttal: Dear Reviewer,
Thank you for your detailed feedback on our project. We provide the requested experiments and address the comments and questions in detail below. Please let us know whether there are any other concerns you have that prevent you from increasing your score.
**Q1: Improvement in Expe... | Summary: The work proposes a novel approach for imitation learning that combines an inverse dynamics model (IDM) with a planner that proposes future goal states in latent space. The approach first trains a variational autoencoder (VAE) that encodes visual representations of states into a lower dimensional latent space.... | Rebuttal 1:
Rebuttal: Dear Reviewer,
Thank you for your detailed feedback on our project. We are happy that you found the paper well-presented with clear experimental evaluations. Please let us know whether there are any other concerns you have that prevent you from increasing your score.
**Q1: Would the authors be a... | Summary: This paper presents Latent Diffusion Planning (LDP), an algorithm aimed at performing imitation learning with the presence of additional suboptimal and action-free demonstrations.
----
Problem Setting and Key Assumptions:
- Vision-based imitation learning for table top manipulation
- Aside from expert demos,... | Rebuttal 1:
Rebuttal: Dear Reviewer,
Thank you for your detailed feedback on our project. We are happy that you found the paper easy to follow and relevant to learning from demonstrations. Please let us know whether there are any other concerns you have that prevent you from increasing your score.
**Q1: Pretrained Un...
Occult: Optimizing Collaborative Communications across Experts for Accelerated Parallel MoE Training and Inference | Accept (poster) | Summary: All-to-all communication is a major bottleneck in training and inference for mixture-of-experts (MoE) large language models. While existing MoE kernels have improved computational efficiency, all-to-all communication remains a bottleneck. The authors propose Occult, which aims to (1) reduce redundant communica... | Rebuttal 1:
Rebuttal: We thank reviewer NL62 for the dedicated and professional comments. To address your concerns, we provide detailed pointwise responses below:
**[Claims and Evidence]**
We provide code at https://anonymous.4open.science/r/Occult-D802.
**[Theoretical claim 1: Communication complexity in Fig. 2]**
... | Summary: The paper introduces Occult, an algorithm-system co-design approach to optimize collaborative communication in MoE models for large-scale training and inference. The key idea is to reduce inter-device communication costs by maximizing intra-device expert collaboration, using expert placement rescheduling and c... | Rebuttal 1:
Rebuttal: We thank Reviewer 1GwW for recognizing that "the paper addresses a key limitation in MoE scalability", “experiments generally robust”, and that "Occult provides a practical, well-validated optimization." To address your questions, we provide pointwise responses below.
**[Potential concerns 1: Dif... | Summary: In this paper, the author proposes Occult, an MoE training and inference framework designed to reduce communication costs by effectively managing intra- and inter-collaboration among experts. The evaluation results demonstrate that the proposed method achieves significant speedup compared to the state-of-the-a... | Rebuttal 1:
Rebuttal: We sincerely thank reviewer geV9 for recognizing that our approach "enhances the training and inference efficiency of MoE models, which is crucial for future LLM deployment." To address your questions, we provide pointwise responses below.
**[Experiments & Analysis 1: Model configuration]**
Tha... | Summary: This paper introduces Occult, an algorithm-system co-design approach aimed at reducing the communication overhead of Mixture-of-Experts (MoE) large language models (LLMs). Specifically, the authors first propose BRIM, a data structure designed to support fundamental MoE operations efficiently. Next, they optim... | Rebuttal 1:
Rebuttal: We thank tR7y for recognizing that "the motivation of the paper is clear" and that "experiments show that the proposed method achieves faster speed and higher performance compared to baseline methods." To address your questions, we provide pointwise responses below.
**[Experiments & Analysis 1: L...
Hierarchical Planning for Complex Tasks with Knowledge Graph-RAG and Symbolic Verification | Accept (poster) | Summary: This paper introduces HVR, a neuro-symbolic approach that enhances LLM-based planning by integrating hierarchical planning, retrieval-augmented generation (RAG) over knowledge graphs, and symbolic verification. The proposed method tackles long-horizon and complex task planning by decomposing tasks into macro a... | Rebuttal 1:
Rebuttal: We thank the reviewer for their valuable feedback, which has greatly improved our paper. Below, we summarize the main concerns and detail the revisions and clarifications made to address them.
**Q1: Missing comparison with the state-of-the-art**
We did not include direct comparisons with existin... | Summary: The authors propose a neuro-symbolic approach that combines LLMs-based planners with Knowledge Graph-based RAG for hierarchical plan generation. It breaks down complex tasks into subtasks and then into executable atomic action sequences. A symbolic validator is integrated to ensure formal correctness, task dec... | Rebuttal 1:
Rebuttal: We thank the reviewer for their valuable feedback, which has greatly improved our paper. Below, we summarize the main concerns and detail the revisions and clarifications made to address them.
**Q1: Missing comparison and discussion of efficiency**
We have included a study of the efficiency of o... | Summary: This paper introduces HVR, a task planning method that integrates hierarchical planning, retrieval-augmented generation (RAG) over symbolic knowledge graphs, and formal verification to enhance the performance of large language models (LLMs) in complex task planning. The proposed method decomposes the language-... | Rebuttal 1:
Rebuttal: We thank the reviewer for their valuable feedback, which has greatly improved our paper. Below, we summarize the main concerns and detail the revisions made to address them.
**Q1: Why is it necessary to use an LLM for planning in this setting?**
While prior works such as [1] and [2] show how nat... | Summary: This paper proposes a LLM-based approach (HAR) to tackle long-horizon and complex robotic planning, which integrates hierarchical planning and Retrieval-Augmented Generation (RAG). Specifically, HAR leverages the LLM to decompose complex tasks into subtasks at different abstraction levels while integrates the ... | Rebuttal 1:
Rebuttal: We thank the reviewer for their valuable feedback, which has greatly improved our paper. Below, we summarize the main concerns and detail the revisions and clarifications made to address them.
**Q1: Missing details and experiments regarding the reusable library of macro actions**
The macro actio...
AKORN: Adaptive Knots generated Online for RegressioN splines | Accept (poster) | Summary: This paper introduces AKORN (Adaptive Knots generated Online for RegressioN splines), a parameter-free algorithm for offline non-parametric regression over total variation (TV1)-bounded functions. AKORN leverages online learning techniques to automatically adapt knot selection for spline regression, eliminatin... | Rebuttal 1:
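AKORN's contribution is choosing the knots adaptively; as background, here is how a degree-1 regression spline with a *given* knot set is fit by least squares over a truncated power basis (an illustrative sketch with a hand-picked knot, not AKORN itself):

```python
import numpy as np

def spline_design(x, knots):
    # Truncated power basis of degree 1: [1, x, (x - t)_+ for each knot t].
    cols = [np.ones_like(x), x] + [np.maximum(x - t, 0.0) for t in knots]
    return np.stack(cols, axis=1)

def fit_spline(x, y, knots):
    B = spline_design(x, knots)
    coef, *_ = np.linalg.lstsq(B, y, rcond=None)
    return lambda q: spline_design(q, knots) @ coef

x = np.linspace(0, 1, 201)
y = np.abs(x - 0.5)                # piecewise-linear target with a kink at 0.5
f = fit_spline(x, y, knots=[0.5])  # one well-placed knot recovers it exactly
print(round(float(np.max(np.abs(f(x) - y))), 6))  # 0.0
```

A misplaced knot would leave a large residual around the kink, which is exactly why adaptive, data-driven knot selection matters for TV-bounded targets.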
Rebuttal: Thank you very much for your consideration.
### Weaknesses 2 and 4
The weaknesses you point out in 2 and 4 are fully accurate, although we feel that the multivariate problem you mention in 4 is outside of the scope of this paper. High-dimensional nonparametric regression is often considered sepa... | Summary: This paper proposes AKORN, a novel approach for offline non-parametric regression that adaptively selects spline knots without requiring manual hyperparameter tuning. The proposed method yields estimators competitive with oracle-enhanced Trend Filtering, attaining near-optimal theoretical performance for TV-bo... | Rebuttal 1:
Rebuttal: Thank you for your attention to this work!
### Non-evenly spaced design points
Since submitting this paper, we have discovered that we can generalize AKORN to handle uneven covariates by tweaking our proof in a few places. Specifically, ADDLE/AKORN can achieve the same rates for covariates $x_1,... | Summary: This paper studies the non-parametric regression over TV_1-bounded functions. The paper proposes a parameter-free algorithm (AKORN) which leverages online learning techniques to select knots for regression splines. The algorithm proposed achieves near-optimal rates without hyperparameter tuning. Both theoretic... | Rebuttal 1:
Rebuttal: Thank you very much for this detailed review. Firstly, thank you for mentioning the typos. We will make appropriate adjustments (e.g. remove the use of $T:=n$ to avoid confusion with the transpose operation).
### Uniform spacing:
Since submitting this paper, we have discovered that we can genera... | Summary: The authors consider the problem of nonparametric regression over the class of $TV_1$-bounded functions. Crucially, the authors aim to overcome the issue of needing oracle knowledge regarding certain features of the data-generating process, while still achieving optimal error rates. Despite being in an offline... | Rebuttal 1:
Rebuttal: We appreciate your time and consideration.
Canonical Rank Adaptation: An Efficient Fine-Tuning Strategy for Vision Transformers | Accept (poster) | Summary: This paper introduces Canonical Rank Adaptation (CaRA), a PEFT method specifically designed for ViTs. The core idea of CaRA is to tensorise transformer weights across layers and to directly optimize the stack using a Canonical-Polyadic Decomposition.
The authors report minimized trainable parameters and... | Rebuttal 1:
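As background on the tensor machinery: a rank-r canonical polyadic (CP) reconstruction builds a 3-way tensor from three small factor matrices, which is where the parameter savings over a full tensor come from (an illustrative sketch with assumed shapes, not CaRA's exact parameterization):

```python
import numpy as np

def cp_tensor(A, B, C):
    # Rank-r CP reconstruction: T[i, j, k] = sum_r A[i, r] * B[j, r] * C[k, r].
    return np.einsum('ir,jr,kr->ijk', A, B, C)

rng = np.random.default_rng(0)
d1, d2, d3, r = 64, 64, 12, 4          # e.g. (dim, dim, heads) with CP rank 4
A, B, C = (rng.normal(size=(d, r)) for d in (d1, d2, d3))
delta = cp_tensor(A, B, C)             # dense update tensor, shape (64, 64, 12)

full_params = d1 * d2 * d3             # 49152 entries if stored densely
cp_params = r * (d1 + d2 + d3)         # only 560 factor parameters
print(delta.shape, cp_params, full_params)
```

Note that each slice `delta[:, :, k]` has rank at most r, so the head dimension is coupled to the low-rank structure rather than handled by independent matrix updates as in LoRA.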
Rebuttal: Thank you for the thoughtful review and for recognising CaRA's relevance in PEFT methods. We appreciate your insights on broader evaluation and efficiency comparisons. We address the questions below in detail.
***The proposed evaluation makes sense although I would have expected more experimental... | Summary: This paper proposes CaRA, which uses the canonical polyadic decomposition (CPD) to replace the matrix multiplication in LoRA. There are two advantages of using CPD. Firstly, the multi-dimensional formulation can capture the structure of the head-dimension in the projection matrices in multi-head attention (MHA... | Rebuttal 1:
Rebuttal: Thank you for your thoughtful review and for finding our experimentation sound. Below are our responses to the points raised in the review.
***performance baseline marginal***
Considering SPT-LoRA as the best baseline in both VTAB-1k and FGVC benchmarks, we want to highlight that with only $\ap... | Summary: This paper introduces Canonical Rank Adaptation (CaRA), an efficient fine-tuning strategy for Vision Transformers (ViT). The key finding is that leveraging tensor mathematics can effectively address the high-dimensionality of Multi-Head Attention (MHA), enhancing fine-tuning performance. The main results demon... | Rebuttal 1:
Rebuttal: Thank you for the insightful review and for recognising the innovation in our method and finding it meaningful for the vision classification problem. We appreciate your positive feedback. Below are the responses to your review.
***The method has not been fine-tuned and tested on larger models suc...
Recommendations with Sparse Comparison Data: Provably Fast Convergence for Nonconvex Matrix Factorization | Accept (poster) | Summary: This paper addresses comparison-based recommendations using non-convex matrix factorization optimization. While this approach is more efficient than convex optimization, it remains challenging. The authors observe that although finding the global minimum may be difficult, the non-convex function behaves convex... | Rebuttal 1:
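Learning from pairwise comparisons with a factorized score matrix can be sketched with a Bradley-Terry likelihood and a manual gradient step on the factors (an illustrative toy under assumed notation U, V; not the paper's algorithm or its guarantees):

```python
import numpy as np

rng = np.random.default_rng(0)
n_users, n_items, r = 30, 20, 3
U = rng.normal(size=(n_users, r)) * 0.1
V = rng.normal(size=(n_items, r)) * 0.1
# Comparisons: user u preferred item j over item k.
data = [(rng.integers(n_users), *rng.permutation(n_items)[:2]) for _ in range(200)]

def neg_log_lik(U, V):
    # Bradley-Terry: P(j beats k for user u) = sigmoid(U[u] @ (V[j] - V[k])).
    s = np.array([U[u] @ (V[j] - V[k]) for u, j, k in data])
    return np.logaddexp(0.0, -s).sum()

def grad_step(U, V, lr=0.05):
    Ug, Vg = np.zeros_like(U), np.zeros_like(V)
    for u, j, k in data:
        p = 1.0 / (1.0 + np.exp(-(U[u] @ (V[j] - V[k]))))
        Ug[u] += (p - 1.0) * (V[j] - V[k])
        Vg[j] += (p - 1.0) * U[u]
        Vg[k] -= (p - 1.0) * U[u]
    return U - lr * Ug, V - lr * Vg

before = neg_log_lik(U, V)
U2, V2 = grad_step(U, V)
print(neg_log_lik(U2, V2) < before)  # a single gradient step decreases the loss
```

The objective is nonconvex in (U, V) jointly, which is exactly the difficulty the paper's landscape analysis addresses.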
Rebuttal: Thank you for your careful review and appreciation of our writing and theoretical analysis. We understand your concerns regarding the warm start and noiseless assumptions and address them below. We also address your questions about with the projection step.
Before addressing your concerns, we res... | Summary: This paper focuses on the nonconvex learning problem in recommendation systems based on pairwise user comparison feedback, which has often been formulated as a convex optimization over utility matrices in prior literature. The authors propose a nonconvex matrix factorization approach to model pairwise comparis... | Rebuttal 1:
Rebuttal: We sincerely appreciate your time and effort in reviewing our paper and recognizing its theoretical contributions. We also acknowledge your suggestion that experiments on real-world datasets would further strengthen our work. While we generally agree, we have chosen not to include such experiments... | Summary: In practical settings, users are often picking between their favorite of a few items. As such, we learn about a user’s preferences via the comparisons they made. Given features about the users and the items, the objective is to recover the low-rank matrix of information given data points of the format (user, (... | Rebuttal 1:
Rebuttal: We thank you for a thorough and positive review of our paper. Here, we address the couple of concerns you have raised. First, you mentioned "it would be enlightening also to include different values for $r$, the rank of the underlying matrix." We agree. Below, we give a table highlighting the esti...
Clone-Robust AI Alignment | Accept (poster) | Summary: The paper evaluates the robustness of current RLHF algorithms in the presence of approximate clones and develops RLHF algorithms to improve robustness in this regard.
Claims And Evidence: Yes, I think most of the claims made in the submission are clear and convincing. However, the empirical experiments (case st... | Rebuttal 1:
Rebuttal: Thank you for your comments! Below we address your specific questions:
> why the focus of the paper is RLHF rather than reward model itself?
Our paper focuses on the step of the RLHF pipeline that takes as input a preference dataset and outputs a reward model. In RLHF, this reward model is then... | Summary: The paper considers axiomatic AI alignment. More precisely, the paper is about Reinforcement Learning with human feedback (RLHF). As motivated by Conitzer et al. (2024), consistency with respect to clones is an interesting property for RLHF algorithms. In this paper, each alternative is identified with its con... | Rebuttal 1:
Rebuttal: Thank you for your helpful comments and feedback! Below we address your specific questions:
> It is unclear to me why it is desirable that the average win rate is the sum of the empirical win rate and the reward function itself. Is there some motivation for this?
The original motivation for this... | Summary: The paper addresses a key challenge in LLM alignment, that of making sure that the RLHF model is unbiased. Specifically, authors show that the distribution of data used to train the model can have a significant impact on how the RLHF model behaves, and as such it is prone to intentional or unintentional biases... | Rebuttal 1:
Rebuttal: Thank you for your comments! Below we address your specific questions:
> Have you tested Weighted MLE on real-world RLHF datasets? Please share results if so. How does Weighted MLE perform across different types of RLHF tasks (e.g., factual questions, multi-turn dialogue)? Could you provide real-... | Summary: This paper mainly focus on the problem of unbalanced input datasets in RLHF, which is caused by adversarial manipulation or inadvertent repetition. The key motivation is to make RLHF robust towards non uniformly distributed datasets. Inspired by social choice theory, they introduced robustness to approximate c... | Rebuttal 1:
Rebuttal: Thank you for your helpful comments and feedback! Below we address your specific questions:
> As an extension, if n>2 in Theorem 2.3, will the proof still stand?
Yes, the results do extend for $n > 2$. Learning a reward function can only become information theoretically harder for $n > 2$ becaus...
Objective drives the consistency of representational similarity across datasets | Accept (poster) | Summary: To compare representation spaces through representational similarity analysis (RSA) or its close relative in machine learning, centered kernel alignment (CKA), a sample of data is embedded in two different spaces, and the pairwise similarities of all representations in each space is used as a fingerprint for i... | Rebuttal 1:
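The CKA fingerprint described in this review can be made concrete with a minimal linear-CKA implementation (an editorial sketch, not the authors' code):

```python
import numpy as np

def center(K):
    # Double-center a Gram matrix.
    n = K.shape[0]
    H = np.eye(n) - np.ones((n, n)) / n
    return H @ K @ H

def linear_cka(X, Y):
    # X: (n, d1), Y: (n, d2) are two embeddings of the same n samples.
    # CKA compares the pairwise-similarity "fingerprints" of the two spaces.
    Kx, Ky = center(X @ X.T), center(Y @ Y.T)
    hsic = (Kx * Ky).sum()
    return hsic / (np.linalg.norm(Kx) * np.linalg.norm(Ky))

rng = np.random.default_rng(0)
X = rng.normal(size=(50, 8))
Q, _ = np.linalg.qr(rng.normal(size=(8, 8)))      # random orthogonal map
print(round(linear_cka(X, X @ Q), 4))             # invariant to rotations: 1.0
```

Because the Gram matrix of a rotated embedding is unchanged, CKA depends only on the geometry of the sample set, which is what makes its dataset-dependence worth studying.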
Rebuttal: We thank the reviewer for acknowledging that _the paper is well-written_ and that our claims are _supported by the results_, which _are presented clearly_. We are grateful for the valuable feedback that helped us improve our paper. We will address each concern point by point. All new figures/table... | Summary: The paper proposes a way of measuring the consistency of pairwise similarities across datasets and transferability of similarities between them. The authors provide many observations regarding these aspects.
## update after rebuttal
The authors provided some additional discussions and results, which further ... | Rebuttal 1:
Rebuttal: First, we thank the reviewer for their overall positive feedback and for pointing us to two papers that helped strengthen the integration of our work into the existing literature. We agree with their relevance to our work due to using Representational Similarity Analysis [RSA; 2] or other similari... | Summary: This is more of an analytical paper that analyzes the cross-domain representation similarity among models trained with different objectives. The analytical framework is fairly simple, as it is a combination of kernelized CKA and a Spearman correlation measure. The methodology description is concise. Experiment... | Rebuttal 1:
Rebuttal: We would like to thank the reviewer for the valuable feedback and appreciate the assessment of our manuscript as being _well-written_ and _interesting_ work.
First, we agree that the individual components of our analysis framework are well-established rather than novel by themselves. We see this... | Summary: The paper sets out to challenge the Platonic representation hypothesis by reexamining similarities between representation of models using multiple datasets. Their key finding is that training objective is a dominant factor driving representations, as opposed to model architecture and model size.
Claims And Ev... | Rebuttal 1:
Rebuttal: We would like to thank the reviewer for their helpful suggestions and for acknowledging the _relevance of our work_ and the _soundness of our experiments_. We believe that following the reviewer’s suggestions allowed us to notably improve our analyses. Two points stood out in particular, which we ...
Active Learning of Deep Neural Networks via Gradient-Free Cutting Planes | Accept (poster) | Summary: This paper introduces a novel theoretical framework that extends cutting-plane optimization methods to active learning for deep neural networks. The authors bridge two previously separate domains: deep neural network training and cutting-plane optimization techniques. The primary contribution is showing that c... | Rebuttal 1:
Rebuttal: > The experimental evaluation, while sufficient, could include comparisons to more recent active learning methods
Thank you for the suggestion. We've compared against 8–10 standard baselines from scikit-activeml and DeepAL. Since our method builds on a cutting-plane training scheme, we adapted th... | Summary: This paper proposes a novel method for training ReLU deep neural networks and selecting data queries within an active learning framework. Extending previous work on cutting plane algorithms to multi-layer ReLU networks, the authors formulate network training as a linear programming problem, decomposing the tas... | Rebuttal 1:
Rebuttal: > Theorem 6.3, the main theoretical result in this work, is quite similar to the convergence analysis in (Louche & Ralaivola, 2015)
Dear reviewer, we first cite L&R’s work in Section 2 under “Cutting-Plane-Based Active Learning with Linear Models.” Our contribution goes well beyond theirs, which ... | Summary: This paper provides a very interesting results for training ReLu neural networks. The authors show that training a binary classification problem using ReLu neural networks is essentially solving a linear program (LP), and therefore in the context of active learning, adding a new data point in the training set ... | Rebuttal 1:
Rebuttal: > I am not able to draw solid conclusion that this method will surpass current SGD-based training of neural networks.
Thank you for raising this point. We do not claim that our method currently outperforms gradient-based training for deep NNs, especially as it remains in an early stage compared t... | null | null | null | null | null | null | null | null |
Avoiding spurious sharpness minimization broadens applicability of SAM | Accept (poster) | Summary: The authors investigate the Sharpness-Aware Minimization (SAM) algorithm for language tasks and find deteriorated performance compared to vision tasks. They explain this by re-writing the SAM update as a gradient norm penalty, and decompose the gradient of the gradient norm into a functional part and a logit p... | Rebuttal 1:
Rebuttal: We thank the reviewer for their positive assessment, insightful comments, and detailed feedback. We are glad that you were able to contextualize our contribution quite spot on.
----
> ### Same compute budget comparisons
As it stands, at an equal number of FLOPs, well-tuned Ad... | Summary: This paper presents an intriguing exploration of the distinction between logit-space and functional-space perturbations within the context of Sharpness-Aware Minimization (SAM). The authors' identification of this subtle difference is interestimng, and while the observed effects might appear minor, the potenti... | Rebuttal 1:
Rebuttal: We thank you for your feedback and for sharing the interesting works. We are also pleased to hear that you find the exploration intriguing and recognize its potentially substantial ramifications.
----
> ### 1. Significance of Empirical Gains (0.03 loss):
We understand the concern about... | Summary: The paper investigates the limitations of SAM in NLP tasks, where it often degrades performance despite its success in vision tasks. The authors find that SAM's effectiveness varies across domains due to differences in sharpness minimization pathways: the logit path and the functional path. In NLP, the logit p... | Rebuttal 1:
Rebuttal: We thank the reviewer for their detailed comments and address some of their concerns here.
----
> ### 1. Computational Overhead:
We agree that in its current form, Functional SAM is not as FLOPs efficient as Adam.
- However, FLOPs are not the only limiting factor in training; ... | Summary: The paper introduces Functionnal-SAM (F-SAM), an alternative to Sharpness-Aware Minimization (SAM) that aims to address its poor performance in NLP tasks. The authors argue that SAM's failure in language modeling is due to its focus on regularizing logit statistics rather than modifying the functional properti... | Rebuttal 1:
Rebuttal: We thank you for your thorough review. We address the primary concerns below:
----
> ### 1. Significance of Performance Improvements:
- In LLM pre-training (100M-1B+ params), achieving consistent validation loss improvements of 0.03-0.06 (as seen in Tables 1, 2, 3) is **highly s... | Summary: This paper investigates why Sharpness Aware Minimization (SAM), effective in vision tasks, underperforms in natural language processing (NLP). The authors identify that SAM in NLP overly focuses on reducing sharpness via logit manipulation rather than improving the model's functional geometry, leading to spuri... | Rebuttal 1:
Rebuttal: We thank the reviewer for their detailed comments and their positive view of our paper. We address their concerns below
----
> ### Theoretical justification of the decomposition
- The decomposition into logit vs. functional sharpness is valid in *any setup involving the composition o... | null | null | null | null |
Curvature-aware Graph Attention for PDEs on Manifolds | Accept (poster) | Summary: This paper introduces a Curvature-aware Graph Attention method specifically designed for solving PDEs on manifolds. It addresses the limitations of previous approaches that focused on Euclidean spaces or overlooked the intrinsic geometry of manifolds. The proposed method uses fast parallel transport and tensor... | Rebuttal 1:
Rebuttal: # Response To Reviewer DBfv
We sincerely appreciate your constructive feedback and meticulous evaluation of our work. Below, we provide responses to each point raised.
> **Q1**. In _Preliminaries_ section, there is a typo: (2,0)-tensor $u^∗⊗v^∗$ should be (0,2)-tensor.
**A:** Thank you for cat... | Summary: The authors propose a new PDE-solver based on neural nets for PDE's on manifolds. They claim that taking into account the curvature of the manifold plays a significant role in computing accurately the dynamics of the process to solve.
The authors align the tangent spaces on a manifold via parallel transport u... | Rebuttal 1:
Rebuttal: # Response To Reviewer Ao56
We are sincerely grateful for the time and effort you have dedicated to reviewing our manuscript. Below, we address each of your comments in detail. Should additional revisions be necessary, we are more than willing to make further adjustments.
> **Q1**. Time considera... | Summary: This paper focus on solving pdes on 2-dim manifolds. It generalizes message passing algorithms to manifolds by adding Gaussian curvature in to consideration. It approximate the complex manifold by constant curvature surfaces in Eq. 11. Such approach using parallel transport on constant curvature surfaces is a ... | Rebuttal 1:
Rebuttal: # Response To Reviewer nE4H
We sincerely thank you for your insightful feedback and recognition of our work. We remain fully open to implementing any additional revisions. Below, we address each of your comments in detail.
> **Q1**. Such approach requires constant curvature.
**A:** This approac... | Summary: The paper proposes a curvature-aware graph attention architecture and applies it to produce a supervised neural time-stepper for PDEs on surfaces embedded in $\mathbb{R}^3$. This architecture leverages the concept of parallel transport on surfaces, and proposes embedding an edge into a constant curvature surfa... | Rebuttal 1:
Rebuttal: # Response To Reviewer y7yT
Thank you for your time and constructive feedback. We appreciate your thorough evaluation and valuable suggestions, which have helped improve our work. Below is our point-by-point response.
>**Q1.** Why use this GNN for surface PDEs? It is less accurate than FEM. It r... | null | null | null | null | null | null |
Towards Learning to Complete Anything in Lidar | Accept (poster) | Summary: The paper proposes a zero-short learning method CAL (Complete Anything in Lidar) to use the temporal context from multi-modal sensor sequences to mine object shapes and semantic features that are then distilled into a Lidar-only instance-level completion and recognition model. The experiments on real-world lid... | Rebuttal 1:
Rebuttal: We thank the reviewer for their feedback. We are happy to hear that the reviewer found that our experiments demonstrate promising results for zero-shot shape completion. We are also glad that the reviewer has found our ablation studies thorough, and our method well-documented. Below, we address th... | Summary: The paper introduces CAL (Complete Anything in Lidar), a zero-shot panoptic scene completion framework that infers dense 3D object and scene geometry from sparse Lidar scans without relying on predefined class vocabularies. To achieve this, the authors propose a pseudo-labeling engine that mines 3D shape prior... | Rebuttal 1:
Rebuttal: We’re delighted that the reviewer found our paper well-structured with clear motivation and methodology. We appreciate the detailed feedback, and we are excited to address the concerns raised by the reviewer.
**Q1. Novelty of distilling vision foundation models (VFMs) to Lidar**
We agree with th... | Summary: This paper introduces a novel zero-shot approach for completing data from a single LiDAR scan, including both object and instance completion. The method is potentially scalable as it leverages a pre-trained foundational video segmentation model, eliminating the need for labeled video data. CLIP features are ex... | Rebuttal 1:
Rebuttal: We are thrilled that the reviewer finds our task of zero-shot Lidar-based panoptic scene completion challenging and novel. We are particularly happy that the reviewer recognizes our method's scalability potential (wrt. data) and appreciates the extensive experimental results and sound methodology.... | null | null | null | null | null | null | null | null |
Scaling Laws for Forgetting during Finetuning with Pretraining Data Injection | Accept (poster) | Summary: This paper presents a study of scaling laws for fine-tuning, in the particular case where replay data (in the form of pretraining data) is available. The paper models the forgetting loss as a function of the replay data, fine-tuning data, and number of parameters.
It also extends the scaling law of existing wo... | Rebuttal 1:
Rebuttal: Dear reviewer,
We thank you warmly for your detailed and thorough feedback on our work. We are glad to read that "The experiments are comprehensive and validate the hypotheses", that "most claims and the evidence presented are pretty solid", and that " experimental design choices are clear". ... | Summary: The paper studies the domain adaptation and forgetting effects of language model finetuning by deriving scaling laws that quantify these two phenomena. It shows that one can accurately predict the finetuning performance and the forgetting of the pretraining set of large language models, as a function of the mo... | Rebuttal 1:
Rebuttal: Dear reviewer,
We thank you warmly for your detailed and thorough feedback on our work. We are glad to read that "The claims made in the submission are supported by clear evidence" and that our method "makes sense and provides novel insights."
> While the proposed curve fits these points well,... | Summary: This paper studies a setting (examined previously by Liu 2022, Kang et al 2024, Ibrahim et al 2024) where a small amount of pre-training data is injected in fine-tuning to prevent catastrophic forgetting of the pre-training domain and provide regularization in the target domain. In this setting, the paper deve... | Rebuttal 1:
Rebuttal: Dear reviewer,
We thank you warmly for your detailed and thorough feedback on our work. We are glad to see that you found that "claims are mostly reasonable", that "the methods and evaluation criteria make sense," and that we study "a problem which is valuable to the community".
> Prior work... | Summary: The paper addresses two key challenges in finetuning large language models: (1) overfitting when target domain data is limited and (2) forgetting of pretraining knowledge as the model drifts from its original parameters. The paper studies pretraining data injection as a solution to these challenges, and quanti... | Rebuttal 1:
Rebuttal: Dear reviewer,
We thank you warmly for your detailed and thorough feedback on our work. We are happy to read that you found that our work is a “simple and effective solution”, that "claims are supported by the evidence presented", and that we have "a central point that is [...] new and releva... | null | null | null | null | null | null |
On Linear Convergence in Smooth Convex-Concave Bilinearly-Coupled Saddle-Point Optimization: Lower Bounds and Optimal Algorithms | Accept (poster) | Summary: This paper studies the first-order methods for solving smooth convex-concave saddle-point problems with bilinear coupling i.e. $ \min_x \max_y f(x) + \langle y, Bx \rangle - g(y)$. It establishes the first lower bounds on the number of gradient evaluations $\nabla f(x), \nabla g(y)$ and matrix-vector multiplic... | Rebuttal 1:
Rebuttal: We thank the reviewer for the time and effort. Unfortunately, the development of the *optimal algorithm* for solving problem (1), which is one of the key contributions of our paper acknowledged by other reviewers, is missing from the "strengths" list. Moreover, the criticism of our paper is based ... | Summary: This paper considers deterministic convex-concave minimax optimization problems. In particular, the main focus is on the case where we can obtain linear convergence, as characterized in Assumption 2.6.
* First, the authors establish fine-grained lower bounds by separately counting oracle calls for the gradient... | Rebuttal 1:
Rebuttal: We thank the reviewer for the detailed comments, valuable feedback, and high evaluation of our work. Below, we provide our detailed response to the review.
### Other Comments Or Suggestions
- Fixed.
- Please refer to the separate paragraph below.
- Fixed.
### Questions For Authors
- All function... | Summary: This paper develops tight lower complexity bounds and matching optimal algorithms for smooth saddle-point problems with bilinear coupling. The work unifies existing results in different regimes (strongly-convex-strongly-concave, bilinear saddle-point, strongly convex with affine constraints) as well as gives n... | Rebuttal 1:
Rebuttal: We thank the reviewer for the detailed comments, valuable feedback, and high evaluation of our work. Below, we provide our detailed response to the review.
### Question about the proof of Theorem 3.3 in the case $\mu_x = 0$ or $\mu_y = 0$
Thank you for the question! This indeed may need addit... | Summary: This work studied the smooth (strongly)-convex-(strongly)-concave bilinearly-coupled saddle-point problem, provided lower complexity bounds in terms of computation time, and achieved a separation of complexities. They further proposed an optimal algorithm that matches the lower bound.
Claims A... | Rebuttal 1:
Rebuttal: We thank the reviewer for the high evaluation of our work and the useful references. We provide our answers to the questions below.
1. Thank you for pointing this out. The statement "to the best of our knowledge, there are no lower complexity bounds that would cover these cases" is indeed a bit i... | null | null | null | null | null | null |
LOGO --- Long cOntext aliGnment via efficient preference Optimization | Accept (poster) | Summary: The paper introduces LOGO, a novel and efficient preference optimization strategy designed for long-context alignment in large language models (LLMs). LOGO addresses issues of misaligned responses in long-context models (LCMs) by introducing:
- A Reference-Free Preference Optimization Strategy.
- Efficient Dat... | Rebuttal 1:
Rebuttal: Dear Reviewer 6VnN, thanks for your insightful comments and suggestions. Below is our detailed response.
---
**[Question 1]** Reasoning Capability Evaluation
**[Re]** Thanks for raising this important point. However, our primary objective is **not to enhance reasoning per se**, but rather to **... | Summary: The paper addresses the challenge that open-source long-context models (LCMs) struggle with generation quality in long-context tasks, despite having strong information retrieval capabilities. These models often produce misaligned results, such as hallucinations and instruction-following errors, leading to low ... | Rebuttal 1:
Rebuttal: Dear Reviewer 2obo, we sincerely appreciate your thorough review of our work and the detailed feedback provided!
---
**[Concern 1]** A fairer comparison of YaRN
**[Re]** We have conducted the experiment and found that incorporating YaRN indeed leads to further performance improvements:
| Model... | Summary: The paper addresses the issue of long-context models struggling with generating coherent and accurate responses in real-world tasks.
It proposes LOGO, a preference optimization-based training strategy for long-context alignment, which includes efficient preference data synthesis and a reference-free training o... | Rebuttal 1:
Rebuttal: Dear Reviewer icM3, thanks for your insightful comments and suggestions. Below is our detailed response:
------
**[Concern 1]** Rigor of the main claim that increasing training data generally improves model effectiveness
**[Re]** We acknowledge that increasing high-quality training data can imp... | null | null | null | null | null | null | null | null |
Reidentify: Context-Aware Identity Generation for Contextual Multi-Agent Reinforcement Learning | Accept (poster) | Summary: The paper proposes an algorithm (CAID) for contextual/multi-scenario (where each scenario is defined by a different MDP) multi-agent reinforcement learning (MARL) in the centralized training decentralized execution (CTDE) paradigm. Each scenario is characterized by a context vector which is unobservable (even ... | Rebuttal 1:
Rebuttal: We sincerely thank the reviewer for the thoughtful and constructive feedback. We respond below to the key concerns raised:
**Q1**: Why is the context vector treated as a latent variable? Why is it allowed to vary within an episode?
**A1**: Thank you for raising this point. In realistic Contextua... | Summary: This paper introduces a novel approach called Context-Aware Identity Generation (CAID) to improve the generalization ability and sample efficiency of Multi-Agent Reinforcement Learning in contextual environments. CAID leverages a causal Transformer structure to generate dynamic agent identities, while incorpor... | Rebuttal 1:
Rebuttal: We sincerely thank the reviewer for the constructive and thoughtful feedback. We are encouraged that you found the CAID framework innovative and recognized its potential for improving MARL generalization. Below, we address your concerns in detail:
**Q1**: Lack of ablation or alternative identity ... | Summary: In multi-agent reinforcement learning (MARL), generalization poses a significant challenge. Existing MARL methods exhibit vulnerability when confronted with even slight variations in task settings, requiring the retraining of policies for each task variant. This paper introduces a Context-Aware Identity Genera... | Rebuttal 1:
Rebuttal: We sincerely thank the reviewer for the thoughtful and detailed comments. Below we respond to the key points raised in the "Other Strengths and Weaknesses" section:
**Q1**: How is policy generalization defined? How does it differ from definitions in [1] [2]?
**A1**: Thank you for this important ... | Summary: The authors introduce Context-Aware Identity Generation framework which is able to generalize between tasks in one Contextual MARL domain. CAID integrates dynamically assigned identity information into action decoding for each agent, which is claimed to provide smooth adaptation to varying contexts. Combined w... | Rebuttal 1:
Rebuttal: We sincerely thank the reviewer for the detailed and constructive comments! We are grateful for your careful reading of both the main paper and the supplementary material. Below, we respond to your concerns point by point:
**Q1**: The paper does not compare with dynamic role assignment methods.
... | null | null | null | null | null | null |
Watch Out Your Album! On the Inadvertent Privacy Memorization in Multi-Modal Large Language Models | Accept (poster) | Summary: This paper investigates how MLLMs inadvertently memorize privacy that is irrelevant to the training objectives. The authors introduce a layer-wise probing framework to test whether task-irrelevant privacy is embedded in images during fine-tuning. They provide a formal mathematical proof to demonstrate that MLL... | Rebuttal 1:
Rebuttal: We thank the reviewer for the thoughtful feedback. Below we address the main points raised in this review.
* * *
> **Claims And Evidence (W1)**: Privacy-embedded data has less impact on training than natural modality transformations.
We clarify that embedding task-irrelevant privacy significantl... | Summary: The paper explores the effects of incorporating synthetic task-irrelevant private content into training datasets on multimodal large language models (MLLMs). The authors analyze how such content influences gradient updates, model memorization, and the ability to differentiate between injected private informati... | Rebuttal 1:
Rebuttal: We thank the reviewer for the thoughtful feedback. We first summarize all the issues and suggestions raised by the reviewer, and address the main points raised in this review.
* * *
> **Issue 1**: While Table 2 suggests significant performance degradation in OCR-VQA, TextVQA, and Visual Genome du... | Summary: The paper examines how MLLMs inadvertently memorize task-irrelevant private content due to spurious correlations during mini-batch training. It begins with a preliminary analysis that formalizes the conditions under which such memorization occurs, followed by a rigorous mathematical proof demonstrating how tas... | Rebuttal 1:
Rebuttal: We thank the reviewer for the thoughtful feedback. Below we address the main points raised in this review.
* * *
> **Claims And Evidence (W1)**: High gradient similarity with privacy-embedded data raises concerns about the claims.
We clarify that embedding task-irrelevant privacy significantly i... | Summary: This paper demonstrates that MLLMs can inadvertently memorize private content entirely unrelated to their training tasks. The authors provide a rigorous mathematical proof explaining how mini-batches introduce spurious correlations, leading MLLMs to store even random private data. Through a novel probing metho... | Rebuttal 1:
Rebuttal: We thank the reviewer for the thoughtful feedback. Below we address the main points raised in this review.
* * *
> **Claims And Evidence (W1)**: Probing accuracy is similar, but why do visualizations differ so much?
We thank the reviewer for this insightful observation, which we also find an int... | null | null | null | null | null | null |
Accelerating LLM Inference with Lossless Speculative Decoding Algorithms for Heterogeneous Vocabularies | Accept (oral) | Summary: This work proposes three methods for using a draft model with a different vocabulary than the target model in a typical speculative decoding framework. The authors propose: 1) string level exact matching (SLEM) in which the draft tokens are decoded back into string representations and reencoded by the target m... | Rebuttal 1:
Rebuttal: We are so grateful for your solid endorsement, rating our paper with the **highest score of 5 out of 5**! We are particularly thankful for your insightful acknowledgment of this work as a significant breakthrough:
>“I am not aware of any other work that has tackled the heterogeneous vocabulary pro... | Summary: The authors provide a comprehensive view of the challenges in performing speculative decoding with different vocabularies.
They come up with several solutions to address this, each with its own benefits and weaknesses.
In my personal experience, this is often a real headache, as training drafters for specific... | Rebuttal 1:
Rebuttal: Thank you for your thoughtful review. We appreciate your recognition that our work provides **“a comprehensive view”** of speculative decoding with different vocabularies, that **“The exposition is rigorous, the algorithms are well motivated and the lossless-ness is proved,”** and that we do a **“... | Summary: This paper explores possible solutions for speculative decoding with a drafter model that does not share the vocabulary with the target model. Such methods, if successful, can enable the use of many more models as the drafter model for a large model to reduce the inference cost of large language models. The au... | Rebuttal 1:
Rebuttal: Thank you for your thorough review and for underscoring several positive aspects of this work. We appreciate your noting that **“Benchmarks demonstrated the success of the proposed methods when Gemma-2-9B-IT is used as the target model,”** including the observation that **“Table 1 shows that with ... | Summary: This paper addresses a key limitation in existing speculative decoding (SD) methods for large language models (LLMs): the assumption that the drafter and target models share the same vocabulary. The authors propose three novel lossless SD algorithms—String-Level Exact Match (SLEM), String-Level Rejection Sampl... | Rebuttal 1:
Rebuttal: Thank you for your detailed review and for highlighting so many strengths in our work. We appreciate your acknowledgment that **“this paper addresses a key limitation in existing speculative decoding,”** and that **“the paper presents thorough theoretical guarantees and empirical evaluations acros... | null | null | null | null | null | null |
Sampling from Binary Quadratic Distributions via Stochastic Localization | Accept (poster) | Summary: This work addresses the problem of sampling from binary quadratic distributions. The authors apply a stochastic localization framework and focus on a key component—the counting/expectation of the posterior distribution. To this end, they establish Poincaré inequalities for the posterior, from which they derive... | Rebuttal 1:
Rebuttal: We sincerely thank the reviewer for these insightful comments.
> Overstating the novelty by emphasizing the use of SL in binary quadratic distributions
We appreciate the feedback on framing. While SL *concepts* have appeared in discrete settings (as discussed in Appendix A), prior works often foc... | Summary: This paper introduces a sampling method for binary quadratic distributions using stochastic localization. It is claimed to be the first theoretical that extends stochastic localization to discrete MCMC samplers. They show polynomial mixing in Glauber dynamics and MH algorithm. Some experiments are provided.
C... | Rebuttal 1:
Rebuttal: Thanks for your valuable feedback. We have consolidated the main points and respond as follows:
> No complexity bounds of SL and cost analysis comparisons
SL introduces only minimal additional computation compared to MCMC methods. Please refer to our response to ```Reviewer mA2w's last point```.
... | Summary: The paper proposes a generic localization sampler for binary quadratic distributions. By simulating the observation process similar to [EAM23], the authors propose an unbiased scheme which is capable of sampling from the target faster than a generic MCMC scheme.
Claims And Evidence: The authors provide proofs... | Rebuttal 1:
Rebuttal: We are grateful to the reviewer for the thoughtful feedback.
## Q1
The core challenge stems from the path-dependent behavior of Brownian motion. Given the observation process $Y_t = \alpha(t)X + \sigma B_t$, we know that $\frac{Y_t}{\alpha(t)} - X = \frac{\sigma B_t}{\alpha(t)}$, which converges ... | Summary: This paper studies stochastic localization for sampling from binary quadratic distributions.
As a main theoretical contribution, the authors prove Poincaré inequalities for the sampling procedure from the (discrete) posterior distribution \(q_t(x \mid y)\) in stochastic localization, and thus establish the c...
Rebuttal: We sincerely thank the reviewer for these insightful comments.
## W1
From an experimental perspective, the absolute improvements are indeed modest. However, we would like to emphasize two key points:
1. **Strong Baselines:** Our comparison is between standard DMCMC and SL combined with the *same... | null | null | null | null | null | null |
MetaAgent: Automatically Constructing Multi-Agent Systems Based on Finite State Machines | Accept (poster) | Summary: The paper proposes MetaAgent, an approach to automatically construct multi-agent systems using finite state machines (FSMs). Instead of hand-coding roles and workflows, MetaAgent uses a prompt-driven “Designer” to:
1. Identify which agents (roles) are needed to complete a family of tasks.
2. Build a finite st... | Rebuttal 1:
Rebuttal: Thanks for the reviewer’s appreciation of our finite state machine design as well as the thorough discussion and experiments.
## Re Reference Material:
Thank you for your valuable feedback. To the best of our knowledge, our work is the first to introduce finite state machines for the automatic d... | Summary: This paper proposes a novel framework, MetaAgent, for the automatic generation of multi-agent systems based on finite state machines. The framework comprises three key steps: (1) Agent Design: The designer model defines the roles and tools for each agent according to task discriptions; (2) Finite State Machine... | Rebuttal 1:
Rebuttal: Thanks for the thoughtful feedback. We are encouraged that the reviewer agrees the Finite State Machine(FSM) is an inspiring method to the Multi-Agent System field and appreciates our theoretical analysis.
# Re Weakness:
Our framework is independent of the foundation model’s capabilities. Our met... | Summary: The paper introduces MetaAgent, a framework for automatically designing multi-agent systems using finite state machines (FSMs). The paper conceptualizes FSM within LLM agent design. The proposed method allows traceback ability to solve complex tasks. The paper also develops an optimization approach to merge th... | Rebuttal 1:
Rebuttal: Thank the reviewer for appreciating the finite state machine as a unified framework of a Multi-Agent System.
# Re Reference:
CAMEL is a simple two-agent chat system resembling a Decentralized Debate structure (Section 3.3).
AgentVerse employs two cooperation structures: Horizontal: a Linear... | Summary: This paper primarily discusses the automated construction of multi-agent systems. Its highlight is the introduction of the finite state machine (FSM) concept, incorporating null-transition states and state traceback into multi-agent systems. This allows the system to more flexibly address two issues: (1) when ...
Rebuttal: Thanks for the reviewer’s effort and the appreciation of the discussion in Section 3.3. We believe the finite state machine has the potential to be a unified structure of the Multi-Agent System.
# Re Claims 1:
Our optimization method is inherently self-iterative, meaning it does not rely on extern... | null | null | null | null | null | null |
Mitigating Heterogeneous Token Overfitting in LLM Knowledge Editing | Accept (poster) | Summary: This paper addresses the problem of heterogeneous token overfitting (HTO) in knowledge editing (KE) for large language models (LLMs). The authors identify that existing KE methods, which indiscriminately optimize cross-entropy loss across all tokens, lead to varying overfitting rates for different tokens, degr... | Rebuttal 1:
Rebuttal: We highly appreciate your effort and time spent reviewing our paper and thank you for your expertise and constructive comments. In the following, we address your comments and questions one by one.
>**The relation between the portability loss and the underfitting degree (UD).**
Yes, portability l... | Summary: This paper investigates the Heterogeneous Token Overfitting problem in knowledge editing. The authors first analyze the root cause of this issue, attributing it to the training paradigm that indiscriminately optimizes the probabilities of all tokens. To address this, they propose OVERTONE, which refines the tr... | Rebuttal 1:
Rebuttal: We highly appreciate your effort and time spent reviewing our paper and thank you for your expertise and constructive comments. In the following, we address your comments and questions one by one.
>**OVERTONE can be applied to ROME (or MEMIT) to improve the loss function (Eq 4 in the original pap... | Summary: This paper investigates the challenge of heterogeneous token overfitting in knowledge editing of large language models, where different tokens in the target knowledge generalize at varying rates during selective parameter updates. To address this, the authors propose OVERTONE—a token-level smoothing approach t... | Rebuttal 1:
Rebuttal: We highly appreciate your effort and time spent reviewing our paper and thank you for your expertise and constructive comments. In the following, we address your comments and questions one by one.
>**More model architecture.**
We follow recent works (e.g. EasyEdit survey) to study the representat... | Summary: This paper proposes OVERTONE, a token-level smoothing method to address heterogeneous token overfitting (HTO) in knowledge editing (KE) for large language models (LLMs), enabling specific knowledge updates without compromising pre-trained capabilities. Experiments across multiple methods, LLMs, and scenarios s... | Rebuttal 1:
Rebuttal: We highly appreciate your effort and time spent reviewing our paper and thank you for your expertise and constructive comments. In the following, we address your comments and questions one by one.
>**Why does negative UD (NUD) represent overfitting?**
NUD indicates that a *token* is overfitted, as its... | null | null | null | null | null | null
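Token-level smoothing of the training target, as discussed above, can be illustrated with plain label smoothing on a single token's distribution. This is a generic sketch, not OVERTONE's exact objective:

```python
import math

def cross_entropy(target, probs):
    # CE between a target distribution and predicted token probabilities.
    return -sum(t * math.log(p) for t, p in zip(target, probs))

def smoothed_target(num_classes, true_idx, eps):
    # Mix the one-hot label with a uniform distribution over the vocabulary.
    target = [eps / num_classes] * num_classes
    target[true_idx] += 1.0 - eps
    return target

probs = [0.7, 0.2, 0.1]  # model's distribution over a toy 3-token vocabulary
hard = cross_entropy(smoothed_target(3, 0, 0.0), probs)   # plain CE
soft = cross_entropy(smoothed_target(3, 0, 0.1), probs)   # smoothed CE
# The smoothed loss never drives the correct token's probability to 1,
# removing the per-token pressure that causes overfitting.
```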
Neural Encoding and Decoding at Scale | Accept (spotlight poster) | Summary: This article introduces a multimodal, multi-task model named "Neural Encoding and Decoding at Scale (NEDS)" for large-scale neural encoding and decoding. The model employs a novel multi-task masking strategy, enabling simultaneous bidirectional prediction between neural activity and behavior—predicting neural ... | Rebuttal 1:
Rebuttal: > The use of mask-based pre-training for Transformers is quite common. I am curious whether the three embeddings—Modality Embedding, Temporal Embedding, and Session Embedding—are truly effective...
While masked modeling is a common objective for training transformers, it is not yet clear which ma... | Summary: This paper introduces Neural Encoding and Decoding at Scale (NEDS), a multimodal, multi-task model that simultaneously performs neural encoding (predicting neural activity from behavior) and neural decoding (predicting behavior from neural activity) by bridging behaviors and neural activity with a shared maske... | Rebuttal 1:
Rebuttal: > It could be better to extend the results to other recordings, or you may discuss the heterogeneity of the neuropixel data to show the capability of the model.
We agree with the reviewer's suggestion to extend NEDS to other datasets. To address this, we are currently training NEDS on a primate m... | Summary: This paper proposes NEDS, a multimodal, multi-task auto-encoder to learn meaningful representations of neural activity and behavior. In brief, the model is based on an encoder-only transformer that tokenized spikes in a similar scheme to NDT (linear projection of binned spikes), as well as both continuous disc... | Rebuttal 1:
Rebuttal: We appreciate that the reviewer found our paper well-written and our evaluation convincing. We agree that evaluating NEDS on additional datasets would make our paper stronger. In response, we are currently applying NEDS to a primate motor task dataset (MC-RTT [1]) to demonstrate its generalizabili... | Summary: The paper introduces Neural Encoding and Decoding at Scale (NEDS), a multimodal, multi-task model designed to simultaneously predict neural activity from behavior (encoding) and behavior from neural activity (decoding) using large-scale, multi-animal datasets. NEDS employs a novel multitask-masking strategy th... | Rebuttal 1:
Rebuttal: We appreciate the reviewer’s suggestion to evaluate the generalizability of our model on additional datasets, tasks, and unaligned data. Given the complexity of the neural recordings we analyze in our paper, spanning multiple brain regions and animals, we intentionally focused on a small set of we... | null | null | null | null | null | null |
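A multitask-masking scheme of the kind discussed above can be sketched as a per-step choice of which modality's tokens to hide. This is a toy illustration (function name and schemes are assumptions), not the NEDS code:

```python
import random

def multitask_mask(modalities, scheme, p=0.5, rng=random):
    """Boolean mask (True = token hidden from the encoder) for one step.
    'encoding': hide neural tokens and predict them from behavior;
    'decoding': hide behavior tokens and predict them from neural activity;
    anything else: hide each token independently with probability p."""
    if scheme == "encoding":
        return [m == "neural" for m in modalities]
    if scheme == "decoding":
        return [m == "behavior" for m in modalities]
    return [rng.random() < p for _ in modalities]

tokens = ["neural", "behavior", "neural", "behavior"]
print(multitask_mask(tokens, "encoding"))  # [True, False, True, False]
```

Alternating the scheme across training steps is what lets one model serve as both an encoder and a decoder.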
CLARIFY: Contrastive Preference Reinforcement Learning for Untangling Ambiguous Queries | Accept (poster) | Summary: This paper addresses the query selection problem of PbRL. The authors propose a representation learning algorithm that embeds trajectories into high-dimensional vectors and enlarges the distance between unambiguous trajectory pairs. The authors compare their proposed method with existing PbRL methods.
Claims A... | Rebuttal 1:
Rebuttal: Dear Reviewer,
Thanks for your valuable and detailed comments.
We hope the following statements clear your concerns.
**We conducted additional experiments and the results are shown in the [link](https://docs.google.com/document/d/e/2PACX-1vS0ZIKigh-syAaNtcr2Udzk8katEE6AtC0OA23Xveb1dUqzFtMws64U6o6... | Summary: This paper presents CLARIFY, a method that selects unambiguous queries that humans can more easily label. It does this by learning a meaningful embedding space using two contrastive losses. This allows for weaker teachers to provide meaningful feedback on the selected trajectories. Experimental results in cont... | Rebuttal 1:
Rebuttal: Dear Reviewer,
Thanks for recognizing our paper's nice presentation, novelty, thorough experiments, and solid improvement. We hope the following statements clear your concerns.
**Claim and W1: Comparison to [1] and its offline applicability.**
**A for Claim and W1:**
- While both CLARIFY and [1] attempt ... | Summary: This paper proposes an offline PbRL framework, CLARIFY, to address challenges arising from ambiguous queries. The method learns a trajectory embedding space through contrastive learning and utilizes the learned embedding to maximize the selection of clearly distinguished queries via rejection sampling, improvi... | Rebuttal 1:
Rebuttal: Dear Reviewer,
Thanks for your valuable and detailed comments.
We hope the following statements clear your concerns.
**We conducted additional experiments and the results are shown in the [link](https://docs.google.com/document/d/e/2PACX-1vQX0KIRCSWV8LrON718raf-d_BL75LRXMY5yB-Ts28kW0BZIVyWHan0kgw5... | Summary: This paper presents CLARIFY, an offline preference-based reinforcement learning (PbRL) algorithm, that leverages contrastive learning to organise the embedding space which is used to learn the reward function.
During the reward-learning phase, CLARIFY alternates between learning a reward via Bradley-Terry and... | Rebuttal 1:
Rebuttal: Dear Reviewer,
Thanks for your valuable and detailed comments.
**We conducted additional experiments and the results are shown in the [link](https://docs.google.com/document/d/e/2PACX-1vS7v9XEpXMFrH0skymO1RQUiXP2lcnnRoP114HpluBSSpvxE3vuRHNYJ1RwlggWB-rlihxrpdeVv53O/pub).**
**C1.1: Robustness to n... | null | null | null | null | null | null |
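The Bradley-Terry preference model mentioned above reduces to a logistic comparison of trajectory returns; a minimal sketch (illustrative, not CLARIFY's reward-learning code):

```python
import math

def bradley_terry(r1, r2):
    """P(segment 1 preferred over segment 2) under the Bradley-Terry model
    with scalar returns: exp(r1) / (exp(r1) + exp(r2))."""
    return 1.0 / (1.0 + math.exp(r2 - r1))

print(bradley_terry(1.0, 1.0))  # 0.5: equal returns -> maximally ambiguous query
```

Queries whose preference probability sits near 0.5 are exactly the ambiguous ones a query-selection method would want to avoid presenting to a weak teacher.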
Doubly Protected Estimation for Survival Outcomes Utilizing External Controls for Randomized Clinical Trials | Accept (poster) | Summary: Estimating the average treatment effect (ATE) from both trial and external control datasets is challenging due to data heterogeneity, specifically covariate shift and outcome shift. This paper proposes a doubly protected estimation framework to address these challenges.
1. When the external control dataset is ... | Rebuttal 1:
Rebuttal: Thanks for the careful reviews. Here are our detailed responses to your questions.
**Methods And Evaluation Criteria**
1. The selection of the cutoff value $\tau$ for computing RMST is crucial in practice since the tail distribution after $\tau$ is neglected. Typically, the event rates at this c... | Summary: Authors study the estimation of restricted mean survival time in a randomized controlled trial where external controls are leveraged to increase statistical power. Since there may be a conditional shift (outcome drift) between trial controls and external controls, it is well understood that doing this is not t... | Rebuttal 1:
Rebuttal: We thank the reviewer for the thorough assessment. We provide detailed responses to each of these points below.
**Experimental Designs Or Analyses:**
1. We have **refined and reorganized the experiment sections** to align with the objectives of the simulation, including how to design the data gener... | Summary: The paper introduces a new way to estimate treatment effects in survival analysis using external controls, which is especially helpful when clinical trials have small control groups, like in rare diseases. It introduces a doubly protected estimator for the restricted mean survival time (RMST) difference, combi... | Rebuttal 1:
Rebuttal: We thank the reviewer for the comments. Here is a detailed response to your concerns.
**Other Comments Or Suggestions**
1. We will restate each theorem and lemma in the supplementary material. In the main text, we will **cross-reference the proof** of each theorem and lemma in the Appendix. "The... | Summary: The authors propose a "doubly protected" estimator for treatment-specific restricted mean survival time difference in RCTs, focusing on alleviating biases commonly encountered when employing additional (i.e., non-trial-derived) external control data. Their estimator accounts for both covariate shift and outcom... | Rebuttal 1:
Rebuttal: Thanks for the careful review and kind words. We hereby provide point-by-point responses to your concerns.
**Methods And Evaluation Criteria**
1. The current three settings represent three typical scenarios we often encounter in practice: Setting 1, where all the ECs are comparable after adjusting fo... | null | null | null | null | null | null |
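The role of the cutoff $\tau$ discussed above — RMST is the area under the survival curve only up to $\tau$, so everything after $\tau$ is ignored — can be made concrete with a short sketch. This is a brute-force illustration; survival packages integrate the Kaplan-Meier step function the same way:

```python
def rmst(times, surv_probs, tau):
    """Restricted mean survival time: the area under the step survival
    curve S(t) up to the cutoff tau. `surv_probs[i]` is S(t) just after
    `times[i]`; S(t) = 1 before the first event time."""
    area, prev_t, prev_s = 0.0, 0.0, 1.0
    for t, s in zip(times, surv_probs):
        area += (min(t, tau) - prev_t) * prev_s
        prev_t, prev_s = min(t, tau), s
        if t >= tau:
            break
    if prev_t < tau:
        area += (tau - prev_t) * prev_s
    return area

times = [1.0, 2.0, 3.0]
surv = [0.9, 0.8, 0.7]
print(rmst(times, surv, 2.5))  # area = 1.0*1 + 1.0*0.9 + 0.5*0.8, i.e. about 2.3
```

Shrinking $\tau$ from 2.5 to 1.5 drops the contribution of the later intervals entirely, which is why the choice of cutoff matters so much in practice.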
Topo-Miner: CRISPR-Enhanced DNA Computing for Accelerated Topological Feature Extraction | Reject | Summary: This paper presents Topo-Miner, a CRISPR-enhanced DNA computer designed for rapid and accurate topological feature extraction. The key contributions include CRISPR-enhanced DNA computing for TDA, novel encoding of graph topology into DNA sequences, computational speedup over Ripser, integration with the TopoCo... | Rebuttal 1:
Rebuttal: Dear Reviewer KA9Q,
Thank you for your detailed and insightful review of our manuscript (Submission 14396). We appreciate your recognition of our work's vision and novelty, the positive comments on the supplementary material, and the constructive feedback, including the Weak Accept (3) recommenda... | Summary: The paper introduces Topo-Miner, a computational framework leveraging CRISPR-enhanced DNA computing to accelerate topological data analysis (TDA). The proposed method encodes graph structures into DNA sequences and utilizes CRISPR to perform parallel boundary operations and matrix reductions, which are critica... | Rebuttal 1:
Rebuttal: Dear Reviewer 4ZuV,
Thank you for your time and for providing a critical evaluation of our manuscript (Submission 14396). We acknowledge your recommendation for Reject (1) and have carefully considered the significant concerns raised regarding the speculative nature of our claims due to the relia... | Summary: The paper presents Topo-Miner, a CRISPR-enhanced DNA computing framework designed to improve topological data analysis (TDA) by leveraging DNA computing’s parallelism and CRISPR-Cas systems' precision. The authors claim 50x-200x speedups over existing tools like Ripser and suggest broad applications. However, ... | Rebuttal 1:
Rebuttal: Dear Reviewer Gzh8,
Thank you for your review of our manuscript (Submission 14396) and the Reject (1) recommendation. We have carefully considered your feedback. We understand and acknowledge your concerns regarding the paper's presentation—specifically its organization, clarity, formatting, and ... | Summary: This paper presents a CRISPR-based DNA computing approach designed to accelerate persistent homology computations in topological data analysis (TDA). Specifically, the authors encode nodes, edges, and simplices as DNA molecules and leverage CRISPR to perform operations, thereby exploiting the massive paralleli... | Rebuttal 1:
Rebuttal: Dear Reviewer M9zw,
Thank you very much for your time and for providing detailed critical feedback on our manuscript (Submission 14396). We sincerely appreciate the effort involved in reviewing our work.
**1. Response to Question on Reaction Kinetics**
Thank you for highlighting the crucial im... | null | null | null | null | null | null |
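The boundary-matrix reduction at the heart of persistent homology — the step such hardware aims to parallelize — can be sketched over Z/2 as follows. This is the standard reduction algorithm, not the Topo-Miner DNA encoding:

```python
def reduce_boundary_matrix(columns):
    """Standard persistence reduction over Z/2.
    Each column is the set of row indices holding a 1 (the simplex's faces).
    While a column shares its lowest 1 with an earlier reduced column,
    add that column to it (symmetric difference = Z/2 addition)."""
    reduced = [set(c) for c in columns]
    low_to_col = {}
    for j, col in enumerate(reduced):
        while col and max(col) in low_to_col:
            col ^= reduced[low_to_col[max(col)]]
        if col:
            low_to_col[max(col)] = j
    return reduced, low_to_col

# Filtration of a filled triangle: vertices 0-2, edges 3-5, triangle 6.
cols = [set(), set(), set(), {0, 1}, {0, 2}, {1, 2}, {3, 4, 5}]
reduced, pairs = reduce_boundary_matrix(cols)
print(pairs)  # {1: 3, 2: 4, 5: 6}: edge 5's cycle is killed by triangle 6
```

Each entry of `pairs` is a (birth simplex, death simplex) pair; the zeroed column for edge 5 is the 1-cycle that the triangle fills in.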
Adversarial Robustness via Deformable Convolution with Stochasticity | Accept (poster) | Summary: This paper introduces DCS (Defensive Convolution with Stochasticity), a novel adversarial defense method that integrates randomness directly into convolution operations to obscure gradient directions. By embedding stochasticity within the network architecture, DCS enhances robustness against both white-box and... | Rebuttal 1:
Rebuttal: Thank you for your detailed comments and your interest in the content of our experiments. We summarize and address your 8 major concerns below.
## Re 0. The notion of "data independence" needs clarification.[Claims,Q3]
Thank you for correcting our statement. The "data independence" means... | Summary: This paper proposes a random structural defense method called Deformable Convolution with Stochasticity (DCS) to improve adversarial robustness of convolutional neural networks. DCS replaces fixed convolutional kernels with randomly sampled deformable kernels to reduce adversarial transferability between infer... | Rebuttal 1:
Rebuttal: Your expert comments are constructive for our paper. We summarize and address your 4 major concerns below.
## Re 0. Claim of generalization and data independence requires empirical support.[Claims,W2]
Thank you for suggesting additional experiments to validate our claim. To verify the sen... | Summary: This paper introduces deformable convolution with stochasticity (DCS) to enhance the adversarial robustness of deep neural networks. Unlike traditional random defense methods that inject randomness into input data, this work incorporates randomness directly into the network architecture by replacing fixed conv... | Rebuttal 1:
Rebuttal: Thank you for your pertinent review and your interest. We summarize and address your 3 major concerns below.
## Re 0. Ablation study by replacing downsampling layers.[Experiment,Q1]
This is an interesting problem. Your subtle experimental design helps us to study the effect of stride in a limite... | Summary: This paper introduces Deformable Convolution with Stochasticity (DCS), a defense method that injects randomness into convolutional layers by replacing fixed offsets with random masks, thereby creating a data-independent random space for deformed kernels. This paper provides a theoretical analysis using gradien... | Rebuttal 1:
Rebuttal: Thank you for your expert comments and your interest in the content of our experiments. We summarize and address your 6 major concerns below.
## Re 0. Evaluation under the BPDA+EOT attack.[Method,Experiment,W2,Q3]
Thank you for raising this concern. We evaluated DCS under BPDA and BPDA+EOT att... | null | null | null | null | null | null
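EOT (Expectation over Transformation), referenced in the BPDA+EOT evaluation above, averages gradients over the defense's randomness. A toy sketch on a randomized scalar function (illustrative only; not an attack implementation):

```python
import random

def eot_gradient(x, sigma=0.5, samples=10000, seed=0):
    """Monte-Carlo EOT gradient of f(x) = (x + n)^2 with n ~ N(0, sigma):
    average the per-draw gradient 2*(x + n). The expectation is 2*x, so
    averaging recovers the gradient of the *expected* randomized loss."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(samples):
        n = rng.gauss(0.0, sigma)
        total += 2.0 * (x + n)
    return total / samples

print(eot_gradient(1.5))  # close to d/dx E[f] = 2 * 1.5 = 3.0
```

This is why single-sample gradients against a stochastic defense are noisy, while EOT's averaged estimate converges to the gradient of the expected output.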
S2-Track: A Simple yet Strong Approach for End-to-End 3D Multi-Object Tracking | Accept (poster) | Summary: This paper proposes a novel end-to-end 3D multi-object tracking method, aimed at addressing complex scenarios in autonomous driving perception, such as occlusions and small object tracking. The authors decompose the existing end-to-end 3D MOT framework into three core components: query initialization, query pr... | Rebuttal 1:
Rebuttal: Thanks for your time and insightful feedback. We especially appreciate your recognition of our well-designed modules with innovative improvements, and your view that our impressive performance validates the potential of end-to-end methods. We respond in detail below and will add our responses to the revision.
> ... | Summary: This paper presents a new method called S2-Track for 3D multiple object tracking (MOT), an essential component for the perception of autonomous driving systems. Existing methods adopt end-to-end query-based trackers to simultaneously detect and track objects, but they fail to track objects in complex scenarios... | Rebuttal 1:
Rebuttal: Thanks for your time and insightful feedback. We especially appreciate your recognition of our three useful improvements and superior tracking performance. We responded in detail below and will add them to the revision.
> Q1: How to initialize object queries with 3D location.
Thanks for your com... | Summary: The paper aim to improve the existing end-to-end 3D multi-object tracking framework. Specifically, the authors propose 2D-prompted query initialization, uncertainty-aware probabilistic decoder, and hierarchical query denoising. Experimental results on nuScenes benchmark show the effectiveness of the proposed f... | Rebuttal 1:
Rebuttal: Thanks for your time and insightful feedback. We especially appreciate your recognition of our effective framework with newly-designed modules and well-written paper. We responded in detail below and will add them to the revision.
> Q1: The proposed method is only evaluated on nuScenes dataset
T... | Summary: This paper proposes an end-to-end stronger yet simple 3D multi-object tracking framework named S2-Track, which decomposes the tracking pipeline into three core modules: query initialization, propagation, and matching. Experiments show the effectiveness of each module in complex scenarios, including 2D-Prompted... | Rebuttal 1:
Rebuttal: Thanks for your time and insightful feedback. We especially appreciate your recognition of our simple and strong framework with effective modules and SOTA tracking performance. We responded in detail below and will add them to the revision.
> Q1: More analysis of other extreme scenarios
Thanks f... | null | null | null | null | null | null |
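One standard way to seed a 3D object query from a 2D detection, as asked in Q1 above, is pinhole unprojection of the box center at an estimated depth. This is an illustrative sketch, not necessarily S2-Track's exact scheme:

```python
def unproject(u, v, depth, fx, fy, cx, cy):
    """Lift pixel (u, v) at the given depth to a 3D point in camera
    coordinates using the pinhole intrinsics (fx, fy, cx, cy)."""
    x = (u - cx) / fx * depth
    y = (v - cy) / fy * depth
    return (x, y, depth)

# A detection at the principal point maps onto the optical axis.
print(unproject(960.0, 540.0, 5.0, fx=1000.0, fy=1000.0, cx=960.0, cy=540.0))
```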
unMORE: Unsupervised Multi-Object Segmentation via Center-Boundary Reasoning | Accept (poster) | Summary: The paper proposes a novel framework for unsupervised object segmentation. It proposes a two-stage solution by incorporating an objectness network that is trained on an object-centric dataset (ImageNet) to predict the existence, location, and boundary of each object, and a reasoning module to generate fina... | Rebuttal 1:
Rebuttal: We appreciate the reviewer's thoughtful comments and address all concerns below. An anonymous PDF with figures and tables is available at: <https://github.com/icml5450/icml5450/blob/main/FiguresTables.pdf>
# Q1: Include UnSAM in Table 2
A1: We report zero-shot results of UnSAM in the attached ***T... | Summary: This paper proposes a multi-object segmentation approach that first trains objectness networks to identify the existence, object center, and object boundary of individual objects, and then use the trained networks to discover objects on images without further training modules. The paper claims that the approac... | Rebuttal 1:
Rebuttal: We appreciate the reviewer's thoughtful comments and address all concerns below. An anonymous PDF with figures and tables is available at: <https://github.com/icml5450/icml5450/blob/main/FiguresTables.pdf>
# Q1: Representations for segmentation and generation
A1: This is an interesting point. Firs... | Summary: This paper presents OCN, a new two-stage framework for unsupervised multi-object segmentation in images. The proposed pipeline consists of two stages: the first stage involves learning three levels of object-centric representations—object existence, object center field, and object boundary distance field. In t... | Rebuttal 1:
Rebuttal: We appreciate the reviewer's thoughtful comments and address all concerns below. An anonymous PDF with figures and tables is available at: <https://github.com/icml5450/icml5450/blob/main/FiguresTables.pdf>
# Q1: Applied on videos
A1: We agree with the reviewer and conduct the following experiments... | Summary: The paper proposes OCN, which improves unsupervised multi-object discovery by introducing three objectness scores to measure existence, centers, and boundaries, along with a reasoning module to distinguish objects. The model is trained by bootstrapping rough masks from DINOv2 and refined through distillation w... | Rebuttal 1:
Rebuttal: We appreciate the reviewer's thoughtful comments and address all concerns below. An anonymous PDF with figures and tables is available at: <https://github.com/icml5450/icml5450/blob/main/FiguresTables.pdf>
# Q1: Title and terminology
A1: Thanks for this advice. We will consider an alternative titl... | null | null | null | null | null | null |
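The object boundary distance field named in the reviews above can be illustrated on a binary mask. This is a brute-force sketch; real implementations use fast distance transforms:

```python
import math

def boundary_distance_field(mask):
    """For each foreground cell of a binary mask, the Euclidean distance
    to the nearest background cell; background cells get 0."""
    H, W = len(mask), len(mask[0])
    bg = [(i, j) for i in range(H) for j in range(W) if mask[i][j] == 0]
    return [
        [min(math.hypot(i - bi, j - bj) for bi, bj in bg) if mask[i][j] else 0.0
         for j in range(W)]
        for i in range(H)
    ]

mask = [[0, 0, 0, 0, 0],
        [0, 1, 1, 1, 0],
        [0, 1, 1, 1, 0],
        [0, 1, 1, 1, 0],
        [0, 0, 0, 0, 0]]
field = boundary_distance_field(mask)
print(field[2][2])  # 2.0: the object's center is farthest from its boundary
```

The ridge of this field peaks at object centers, which is why pairing a center field with a boundary distance field helps separate touching objects.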
An Error Analysis of Flow Matching for Deep Generative Modeling | Accept (spotlight poster) | Summary: This paper presents the first end-to-end analysis of Continuous Normalizing Flows (CNFs) built upon Flow Matching. The theoretical results demonstrate that the generated distribution is guaranteed to converge to the true distribution under a mild assumption. Furthermore, the convergence rate is significantly i... | Rebuttal 1:
Rebuttal: Thank you for your positive evaluation.
**1. Is assuming early stopping $\sigma_{min}$ equivalent to considering as the original FM paper? Can the analysis be simplified for some predefined small $\sigma_{min}$ where we resort to a noisy approximation of the target distribution?**
**A:** Thank y... | Summary: This paper presents an analysis of Continuous Normalizing Flows (CNFs) built upon Flow Matching (FM) for deep generative modeling. It proves the generated distribution of FM converges to the target distribution in the Wasserstein-2 distance for general target distributions with bounded support. The convergence... | Rebuttal 1:
Rebuttal: Thank you for your positive evaluation and insightful questions.
**1. There are no experiments to support the theoretical results.**
**A:** We appreciate the reviewer’s concern regarding the absence of experiments. Our primary focus in this work is to establish a rigorous theoretical foundation f... | Summary: This paper presents the first comprehensive analysis of Continuous Normalizing Flows (CNFs) based on Flow Matching. The theoretical results establish that the generated distribution converges to the true distribution under a mild assumption. Additionally, the convergence rate is notably improved when a mild Li... | Rebuttal 1:
Rebuttal: Thank you for your positive evaluation.
**1. The results depend on specific assumptions (bounded support, Lipschitz continuity) which may not hold in all practical scenarios.**
**A:** We appreciate the reviewer’s insightful comments on the assumptions of bounded support and Lipschitz continuity.... | Summary: This paper provides an analysis of flow matching. The authors prove that generative models based on flow matching converge to the target distribution under mild assumption.
## update after rebuttal ##
I have reviewed the rebuttal and decided to maintain the original score.
Claims And Evidence: I'm very unfam... | Rebuttal 1:
Rebuttal: Thank you for your valuable suggestions.
**1. Many existing works have analyzed FM, and the introduction should clearly explain how this work differs from previous theory-related works.**
**A:** While recent works [1,2,3] have analyzed ODE-based generative models, they typically assume that the ... | null | null | null | null | null | null |
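For context, the $\sigma_{\min}$ discussed in the rebuttals above enters through the optimal-transport conditional probability path of the original flow matching paper (Lipman et al., 2023), whose conditional path and target vector field are:

```latex
p_t(x \mid x_1) = \mathcal{N}\!\left(x \;\middle|\; t\,x_1,\ \bigl(1 - (1 - \sigma_{\min})\,t\bigr)^2 I\right),
\qquad
u_t(x \mid x_1) = \frac{x_1 - (1 - \sigma_{\min})\,x}{1 - (1 - \sigma_{\min})\,t}.
```

At $t = 1$ the path terminates at $\mathcal{N}(x_1, \sigma_{\min}^2 I)$, so a nonzero $\sigma_{\min}$ means the model targets a slightly smoothed version of each data point, which is why a predefined small $\sigma_{\min}$ and an early-stopping analysis play closely related roles.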
DCTdiff: Intriguing Properties of Image Generative Modeling in the DCT Space | Accept (poster) | Summary: This paper introduces a new paradigm in diffusion models by using DCT coefficients, specifically low-frequency components, as operands instead of pixel or latent representations. Inspired by JPEG compression, this method aims to improve efficiency. The model achieves 512x512 resolution without latent represent... | Rebuttal 1:
Rebuttal: Thank you very much for the reviews and constructive suggestions.
**Q1: the main paper lacks qualitative figures, and the appendix only includes Figs. 9-11, missing key results for FFHQ 512**
We will add more qualitative samples in the appendix, including the randomly drawn ones, and compare the... | Summary: In this work authors introduce a novel idea to model images in their frequency spaces with diffusion models. Authors show that they can use Diffusion Transformer architectures to model the frequencies of images in a smart way without changing the architecture. There are several new observations on how to achie... | Rebuttal 1:
Rebuttal: We sincerely appreciate the insightful suggestions and comments provided by the reviewer.
**Q1: The benchmarks used for the evaluation are sufficient in size, authors even include some high-resolution datasets. Nevertheless the method is only compared with the baseline model with the same archite... | Summary: The paper introduces an end-to-end diffusion modeling framework in the frequency space, instead of in the original pixel space. It shows that the DCT (discrete cosine transform) space could be an effective and near-lossless compression for diffusion modeling, mitigating pixel redundancy and enabling efficient ... | Rebuttal 1:
Rebuttal: Thank you very much for the detailed and helpful reviews.
**Q1: The paper only evaluates some generative tasks, while the capability of DCT space on other generative and discriminative tasks is still unknown**
We will first rephrase this sentence to avoid any misunderstanding. Our work primarily... | Summary: The paper proposes DCTdiff, which models images in the discrete cosine transform (DCT) space. The paper discusses the design space of DCTdiff and reveals interesting properties of image modeling in the DCT space, such as the spectral autoregression nature of pixel diffusion models.
Claims And Evidence: The paper clai... | Rebuttal 1:
Rebuttal: We thank the reviewer for the valuable comments which help improve the paper.
**Q1: The paper claims that "DCT Upsampling Outperforms Pixel Upsampling". However, it is only compared against interpolation methods**
The upsampling we mean in the paper is indeed interpolation. We will make this sta... | null | null | null | null | null | null |
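DCT-domain upsampling by zero-padding the coefficient spectrum is one common frequency-domain interpolation; the paper's exact scheme may differ. A pure-Python sketch:

```python
import math

def dct_ii(x):
    # Orthonormal DCT-II of a 1-D sequence.
    N = len(x)
    return [
        (math.sqrt(1.0 / N) if k == 0 else math.sqrt(2.0 / N))
        * sum(x[n] * math.cos(math.pi * (n + 0.5) * k / N) for n in range(N))
        for k in range(N)
    ]

def idct(c):
    # Inverse of the orthonormal DCT-II (an orthonormal DCT-III).
    M = len(c)
    return [
        sum(
            (math.sqrt(1.0 / M) if k == 0 else math.sqrt(2.0 / M))
            * c[k] * math.cos(math.pi * (n + 0.5) * k / M)
            for k in range(M)
        )
        for n in range(M)
    ]

def dct_upsample(x, factor):
    # Zero-pad the DCT spectrum and rescale: a smooth interpolation that
    # adds no new high-frequency content.
    N, M = len(x), len(x) * factor
    padded = [ck * math.sqrt(M / N) for ck in dct_ii(x)] + [0.0] * (M - len(x))
    return idct(padded)

print(dct_upsample([5.0, 5.0, 5.0, 5.0], 2))  # eight values, all ~= 5.0
```

Unlike pixel-space interpolation kernels, this operates directly on the representation the model already uses, which is one intuition behind preferring DCT-space upsampling.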
Unlocking the Capabilities of Large Vision-Language Models for Generalizable and Explainable Deepfake Detection | Accept (poster) | Summary: This paper proposes a novel framework that leverages Vision-Language Models (VLMs) for deepfake detection, addressing their current limitations in forensic analysis. The core innovation is a three-component approach: (1) a knowledge-guided forgery adaptation module that aligns VLM semantic space with forensic ... | Rebuttal 1:
Rebuttal: **1) Multi-turn Dialogue Capabilities:** See response to KySN Q1.
**2) Learnable Context:** Textual descriptions are generated via GPT-4 and validated by human annotators. Examples include “Inconsistent head poses” or “Mismatched skin texture”. These annotations are available in https://anonymous.4o... | Summary: This paper proposes leveraging LLMs and VLMs to improve model generalization and explainability. It is achieved by a two-stage pipeline: a Knowledge-guided Detection module that uses human priors to generate feature embeddings, followed by an LLM that takes these embeddings to output detection results. The experimental results sh... | Rebuttal 1:
Rebuttal: **1) AUC Calculation from LLM Output:** To ensure rigorous and reproducible evaluation of text-level AUC, we implemented a deterministic rule-based parsing strategy for extracting binary labels ("yes"/"no") from model output. If the output contains "yes" or "is deepfake", the frame is labeled fake. I... | Summary: This paper introduces a method based on large vision language models (LVLMs) for the task of deepfake detection. To this end, the authors proposed a number of modules to enhance LVLMs' performance on deepfake detection, including a knowledge-guided forgery adaptation module (KFD), a multi-modal prompt tuning f... | Rebuttal 1:
Rebuttal: **1) Multi-turn Dialogue Capabilities:** We appreciate your feedback. Following the strategy in AnomalyGPT, the alternating training strategy (Section 3.3, Implementation Details) inherently preserves Vicuna-7B’s multi-turn dialogue capabilities. While Figure 5 illustrates single-turn examples for... | null | null | null | null | null | null | null | null |
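The deterministic rule-based label parsing described in the rebuttal above can be sketched directly; the function name is hypothetical, and any tie-breaking rules beyond the stated substring checks are assumptions:

```python
def parse_deepfake_label(output: str) -> int:
    """Rule-based extraction of a binary label from free-form model output,
    following the rule stated in the rebuttal: if the text contains "yes"
    or "is deepfake", label the frame fake (1), otherwise real (0)."""
    text = output.lower()
    return 1 if ("yes" in text or "is deepfake" in text) else 0

print(parse_deepfake_label("Yes, the face shows blending artifacts."))  # 1
print(parse_deepfake_label("No, this frame appears authentic."))        # 0
```

Because the rule is deterministic, the resulting labels (and hence the text-level AUC) are fully reproducible given the same model outputs.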
A Chaotic Dynamics Framework Inspired by Dorsal Stream for Event Signal Processing | Accept (poster) | Summary: Current state-of-the-art event stream processing methods are data-driven deep learning methods. Although these models have achieved high accuracy, they are heavily dependent on the structure of the training dataset. At a time when event sensors are not yet popular and there is a lack of large-scale event strea... | Rebuttal 1:
Rebuttal: We sincerely appreciate your thorough evaluation and valuable feedback on our manuscript. We are also grateful for the constructive suggestions, which have helped us further refine the theoretical derivations, experimental design, and analysis in our paper. In response to your comments, we have re... | Summary: This paper proposes a chaotic dynamical framework inspired by the dorsal visual pathway for processing event signals and generating stable and generalizable event representations. By integrating it with deep neural networks, the authors achieved high accuracy on multiple event-based object classification datas... | Rebuttal 1:
Rebuttal: We sincerely appreciate your thorough evaluation and valuable feedback on our paper. We are pleased that our chaotic dynamical framework and experimental results have been recognized, and we are grateful for the insightful questions that have helped us further improve the paper. In response to you... | Summary: The methods combining event cameras and deep learning mainly involve integrating traditional deep learning techniques with the high temporal resolution and low latency characteristics of event cameras, aiming to process the event stream data. However, the limitation of existing methods for event cameras is the... | Rebuttal 1:
Rebuttal: We would like to express our sincere gratitude for your in-depth review of our paper and your valuable feedback. We greatly appreciate your recognition of the proposed method and experimental results, and we also thank you for raising some important questions that will help us further improve the ... | Summary: This paper proposes a chaotic dynamics framework inspired by the dorsal visual pathway of the brain for processing event camera signals. By encoding event streams into periodic or chaotic signals using Continuous Coupled Neural Networks (CCNN) and analyzing dynamic states via Continuous Wavelet Transform (CWT)... | Rebuttal 1:
Rebuttal: We sincerely appreciate your time and effort in reviewing our paper. We are grateful for your constructive feedback and insightful questions, which have helped us refine our work and clarify its contributions. We acknowledge your concerns regarding the theoretical foundations, biological plausibil... | null | null | null | null | null | null |
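The periodic-versus-chaotic regimes central to the framework above can be illustrated with the logistic map, a standard toy chaotic system (a stand-in for intuition only, not the CCNN model):

```python
def logistic_orbit(r, x0=0.2, burn=500, keep=8):
    """Iterate the logistic map x <- r*x*(1-x), discard a burn-in, and
    return the next `keep` values rounded for comparison."""
    x = x0
    for _ in range(burn):
        x = r * x * (1.0 - x)
    tail = []
    for _ in range(keep):
        x = r * x * (1.0 - x)
        tail.append(round(x, 6))
    return tail

periodic = logistic_orbit(3.2)  # settles onto a period-2 cycle
chaotic = logistic_orbit(3.9)   # wanders without a short repeating cycle
```

A signal that settles onto a short cycle is trivially distinguishable from one that keeps wandering, which is the kind of dynamical-state distinction the framework exploits.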
Understanding the Unfairness in Network Quantization | Accept (poster) | Summary: This work unveils the potential risk of exacerbating the unfairness in model accuracy among various groups. By theoretical analysis and empirical experiments with both Post-Training Quantization (PTQ) and Quantization-Aware Training (QAT), this work identifies several observations, including group White has le... | Rebuttal 1:
Rebuttal: Thank you for kindly evaluating that "the key contributions are novel." We also sincerely appreciate your constructive suggestions, and believe that the additional experiments and explanations can address your concerns. The new experimental results are available at https://anonymous.4open.science/... | Summary: They used data enhancement to mitigate the unfairness of quantification of unbalanced data set models
Claims And Evidence: convincing
Methods And Evaluation Criteria: make sense
Theoretical Claims: correctness
Experimental Designs Or Analyses: soundness
Supplementary Material: yes
Relation To Broader Sci... | Rebuttal 1:
Rebuttal: Many thanks for kindly evaluating that "the content of this article is very interesting." We also sincerely appreciate your valuable feedback, and believe that the additional experiments and explanations can address your concerns. The additional experimental results are available at https://anon... | Summary: Network quantization, a widely studied model compression method, effectively converts floating-point models to fixed-point models with negligible accuracy loss. Despite its success in reducing model size, it can exacerbate fairness issues across different dataset groups. This paper examines Post-Training Quant... | Rebuttal 1:
Rebuttal: Thank you sincerely for commenting that "the detailed analysis and experiments provided for this issue are very convincing." We also truly appreciate your constructive suggestions. We have conducted additional experiments and provided further explanations to address your concerns. The additional e... | Summary: The paper investigates the fairness implications of network quantization, focusing on two widely used algorithms: Post-Training Quantization (PTQ) and Quantization-Aware Training (QAT).
The authors identify two key factors that exacerbate unfairness in model accuracy across different groups: the gradient nor... | Rebuttal 1:
Rebuttal: We sincerely appreciate your acknowledgment that “the paper addresses an important and timely issue, the theoretical analysis is rigorous, and the empirical evaluation is thorough.” We believe that our experimental results strongly support our theoretical findings. In response to your concerns, w... | null | null | null | null | null | null |
Improved Lower Bounds for First-order Stochastic Non-convex Optimization under Markov Sampling | Accept (poster) | Summary: This paper studies non-convex stochastic optimization when the data is generated from a Markov chain. This is unlike most papers on the topic where one usually assumes that the noise process affecting the gradients is an i.i.d. process. The goal of this paper is to establish information-theoretic lower bounds ... | Rebuttal 1:
Rebuttal: **Comments on subsampling**: We really appreciate the reviewer for initiating such an interesting discussion. In the following we briefly present our understanding and hopefully provide some insight into the comments. We think the reviewer’s intuition is correct and is aligned with ours. Howe... | Summary: This paper studies the sample complexity of stochastic optimization for smooth, non-convex functions when the noise variables form a Markov chain instead of being i.i.d. The authors obtain a lower bound of $\Omega(\tau\epsilon^{-4})$ for stationary Markov processes with a countable state space, where $\tau$ is... | Rebuttal 1:
Rebuttal: ## Response to "Theoretical Claims"
1. The proof of Theorem B.3: We think the proof of Theorem B.3 is correct and we will add details in the updated version. We explain in detail how line 912 is derived as follows:
First, we derive line 907 from line 906:
Denoting $v_{max} := \max_i \Vert v(i) ... | Summary: The paper proves a lower complexity bound of $ O(\tau_{mix} \varepsilon^{-4}) $ for smooth, non-convex stochastic optimization under Markovian noise with countable states. For finite state space, a lower complexity bound $ O( \varepsilon^{-2}) $ is also given, and a proposed method to match the lower bound to... | Rebuttal 1:
Rebuttal: We sincerely thank the reviewer for the positive and encouraging comments on our paper. In our updated version, we will improve the writing to make it clearer and easier to follow for readers.
**For zero-respecting algorithms we consider in the paper**, we note that zero-respecting algorithms req... | Summary: This paper studies the lower bound of sample complexity of general first-order algorithms for stochastic non-convex optimization problems under Markov sampling. They first show that for samples drawn from a stationary Markov chain with countable state space, the sample complexity is at least $\Omega(\epsilon^{... | Rebuttal 1:
Rebuttal: 1. By using $:=$, we actually define $g(\theta; s, s’) := (\phi(s)^T \theta - r(s, s’) - \gamma \phi(s’)^T \theta)\phi(s)$.
2. We will fix it in the updated version.
3. We thank the reviewer for the suggestion. By our notation, we mean that $x_{t,i}$ is the $i$-th point at the $t$-th iteration, whose... | Summary: This paper presents sample complexity lower bounds for stochastic gradient descent under a Markovian sampling assumption. In particular, there are two theorems in the paper showing lower bounds $\Omega(\epsilon^{-4})$ for Markov chains with countably infinite state space and $\Omega(\epsilon^{-2})$ for finite ... | Rebuttal 1:
Rebuttal: **TC1**: After double-checking, we modified the theorem. Now the rate scales with $\max(\tau_{mix}, \tau_{hit})$, but note MaC-SAGE remains optimal (up to constants).
**TC2**: We clarify that we are **not** claiming $B_l$s are i.i.d. Bernoulli r.v., but we claim $z_i$s (see its definition in the... | null | null | null | null |
Latent Variable Estimation in Bayesian Black-Litterman Models | Accept (poster) | Summary: The paper proposes Bayesian models for portfolio management with good theoretical results and empirical validation. From what I can understand, the contribution of the work is the computational efficiency of portfolio management. Could the code be provided to do this?
Given no code, I see no academic value for thi... | Rebuttal 1:
Rebuttal: >**Reviewer's Comment:** Could the code be provided to do this? Given no code, I see no academic value for this work.
We provide code for reproducibility and the latest paper revision in this **[anonymous dropbox folder](https://www.dropbox.com/scl/fo/i13bhu138gjk76cf5r44v/ACog6LpDbYdBQbB87jtJA-Q... | Summary: The paper extends the classical Black-Litterman model by incorporating asset features. In the traditional model, investor views and their associated uncertainty are assumed to be given. The author proposes leveraging asset features to estimate both investor views and their uncertainty. Two models are introduce... | Rebuttal 1:
Rebuttal: Thanks for the reviews.
The latest revision is readily available in the **[anonymous dropbox folder](https://www.dropbox.com/scl/fo/i13bhu138gjk76cf5r44v/ACog6LpDbYdBQbB87jtJA-Q?rlkey=mpv3b4xgbmr1cohaen6roxoyf&st=zndq6hnx&dl=0).**
Any changes made from the submitted version are highlighted in blue... | Summary: The paper presents a new formulation for the well-known Black-Litterman model, introducing a Bayesian reinterpretation of the model for portfolio optimization, eliminating the need for subjective investor views and their associated uncertainties. The authors analyse the problem from a theoretical perspective a... | Rebuttal 1:
Rebuttal: Thanks for the reviews.
The latest revision is readily available in the **[anonymous dropbox folder](https://www.dropbox.com/scl/fo/i13bhu138gjk76cf5r44v/ACog6LpDbYdBQbB87jtJA-Q?rlkey=mpv3b4xgbmr1cohaen6roxoyf&st=zndq6hnx&dl=0).**
Any changes made from the submitted version are highlighted in blue... | Summary: Paper removes the need for heuristic investor views while maintaining a Bayesian framework.
It makes the Black-Litterman model more data-driven, robust, and automated.
Claims And Evidence: Claims are well supported.
Methods And Evaluation Criteria: The methods and evaluation criteria are reasonable, but the... | Rebuttal 1:
Rebuttal: Thanks for the insightful questions and reviews.
The latest revision is readily available in the **[anonymous dropbox folder](https://www.dropbox.com/scl/fo/i13bhu138gjk76cf5r44v/ACog6LpDbYdBQbB87jtJA-Q?rlkey=mpv3b4xgbmr1cohaen6roxoyf&st=zndq6hnx&dl=0).**
Any changes or modifications made from the... | null | null | null | null | null | null |
CUPS: Improving Human Pose-Shape Estimators with Conformalized Deep Uncertainty | Accept (poster) | Summary: The paper introduces CUPS, a video-based HMR approach with uncertainty quantification. Specifically, the method uses GLoT as a base model to extract global and local features from videos. An adversarial loss is defined on the output meshes during training, similar to VIBE. The discriminator output (from a sigm... | Rebuttal 1:
Rebuttal: We sincerely thank Reviewer wgxo for their thoughtful feedback and insightful questions. We especially appreciate their recognition of our **creativity** and **theoretical soundness**. Below, we address their comments in detail.
> On calibration’s computational cost:
Calibration in CUPS is **lig... | Summary: This paper introduces CUPS, a method that integrates conformal prediction with deep uncertainty learning for 3D human pose-shape estimation from monocular videos. The key innovation lies in training an end-to-end deep uncertainty function alongside the reconstruction model, which serves as a conformity score f... | Rebuttal 1:
Rebuttal: We sincerely thank Reviewer aBRZ for their thoughtful feedback and insightful questions. We especially appreciate their recognition of the **strength of our experiments** and **practical and theoretical contributions,** offering **novel synergies to advance safety critical vision systems**. Below,... | Summary: This paper presents CUPS, an approach to infer 3D human shapes and poses from videos. At the core is a deep uncertainty function that is trained with 3D pose estimation, and it computes a conformity score to optimize the pose prediction in inference. Experimental results on different datasets and metrics demon... | Rebuttal 1:
Rebuttal: We sincerely thank Reviewer Xj5H for their thoughtful feedback and insightful questions. We especially appreciate their recognition of the **strength of our experiments**, the **theoretical contributions** of our work, and the **effectiveness** of the approach. Below, we address each of the review... | Summary: This paper introduces a novel method for human pose and shape estimation, utilizing the SMPL representation, from video sequences. The proposed approach incorporates conformalized deep uncertainty modeling, which allows for the generation of multiple samples, in contrast to the single-output methods commonly f... | Rebuttal 1:
Rebuttal: We sincerely thank Reviewer Xcn3 for their thoughtful feedback and insightful questions. We particularly appreciate their recognition of the **theoretical soundness** of our work, **the need for uncertainty prediction in human pose estimation in the community**, and the applicability to **safety-c... | null | null | null | null | null | null |
Outlier Gradient Analysis: Efficiently Identifying Detrimental Training Samples for Deep Learning Models | Accept (oral) | Summary: This paper introduces a novel approach called "Outlier Gradient Analysis" for identifying detrimental training samples in deep learning models. The authors establish a conceptual bridge between influence functions (a traditional method for assessing training data impact) and outlier detection in the gradient s... | Rebuttal 1:
Rebuttal: Dear Reviewer MZJ6,
Thank you for your efforts in reviewing our work, we are grateful for your insights. We provide answers to the questions raised, below:
- **Although this work conducted experiments on LLMs, I find it strange that they only used LLMs for classification tasks. It would make mo... | Summary: This paper proposed Outlier Gradient Analysis, establishing a theoretical bridge between influence functions (a common tool for this task) and outlier detection in the gradient space. The key insight is that detrimental samples can be effectively identified as outliers in the gradient space without computing t... | Rebuttal 1:
Rebuttal: Dear Reviewer r7gY,
Thank you for your thoughtful review and feedback, we appreciate it. We have answered questions raised, below:
- **The paper would be in stronger position if it included comparisons with: sample selection methods for learning with noisy labels and data pruning for training ef... | Summary: This paper addresses the challenge of identifying training samples that negatively impact deep learning model performance. The authors draw a connection between identifying detrimental training samples using influence functions and outlier detection in the gradient space. This connection leads to a Hessian-f... | Rebuttal 1:
Rebuttal: Dear Reviewer p9JH,
Thank you for your insightful review, comments, and suggestions. We answer the questions raised, below:
- **Essential References Not Discussed**:
Thank you for providing these references on gradient-based outlier/anomaly detection, we appreciate it. As the reviewer cor... | Summary: The paper introduces a simple yet powerful alternative to traditional influence functions by leveraging outlier detection in gradient space. This method—Outlier Gradient Analysis—provides a scalable, efficient, and accurate way to identify harmful training samples, with broad utility across diverse deep learni... | Rebuttal 1:
Rebuttal: Dear Reviewer 91NL,
Thank you for your insightful review and feedback. We answer the questions raised, below:
- **Q1: Could you elaborate on the computational specifics of Outlier Gradient (L1) and Outlier Gradient (L2)?**
Thank you for your question. Our outlier gradient approach (Algorit... | null | null | null | null | null | null |
Modified K-means Algorithm with Local Optimality Guarantees | Accept (poster) | Summary: This paper generalizes necessary and sufficient conditions for local optimality of a solution of the continuous relaxation of the k-means problem; they generalize these conditions from the case using the Euclidean dissimilarity (Peng & Xia, 2005) to Bregman divergence. Similarly, they then use these observatio... | Rebuttal 1:
Rebuttal: We thank Reviewer fHBw for their review.
**1. High-level comparison with Peng & Xia:**
**R1:** Both of our works study the K-means problem, but our focus is on convergence of Lloyd's algorithm, whereas they consider completely different methods to obtain local minima, to ultimately find a glob... | Summary: This paper investigates the local optimality properties of the K-means clustering algorithm and proposes modifications that guarantee local optimality in both continuous and discrete senses. The authors introduce theoretical results that highlight scenarios where the standard K-means algorithm does not always ... | Rebuttal 1:
Rebuttal: We thank Reviewer bhiT for their suggestions and the positive feedback, that our work has "strong theoretical contributions with clear mathematical formulations".
**1. Test on larger datasets (N > 1000):**
**R1:** We want to highlight that several experiments were done for datasets with N>1000: ... | Summary: This paper considers a (natural) notion of local-optimality for the k-means problem, and shows that Lloyd's algorithm can lead to solutions that are not locally optimal. Generally when anyone discusses Lloyd's algorithm, they often claim that Lloyd's gets stuck in a "local minima" so this result is interesting... | Rebuttal 1:
Rebuttal: We thank the reviewer for their comments and suggestions, and are happy that they found our result to be interesting.
**1. Local search algorithms:**
**R1:** Similar to K-means++, Kanungo et al. present a heuristic to initialize centers, with a guarantee that the objective value (distortion) o... | Summary: The paper shows that the traditional K-means algorithm does not always converge to a local optimum (by a 1D counterexample). The paper proves the conditions for K-means to converge to a local optimum. By modifying the termination conditions of K-means (adding a new step), we can guarantee convergence to either... | Rebuttal 1:
Rebuttal: We thank the reviewer for their thoughtful review and positive comments, namely that "the claims are well-supported by theoretical analysis and experiments", "the algorithms and conclusions are simple and elegant", and that "overall...this is a solid contribution".
**1. X-means & G-means:**
**... | null | null | null | null | null | null |
Beyond Communication Overhead: A Multilevel Monte Carlo Approach for Mitigating Compression Bias in Distributed Learning | Accept (poster) | Summary: The paper introduces a Multilevel Monte Carlo compression scheme that leverages biased compressors to construct unbiased gradient estimates. The proposed approach aims to combine the empirical efficiency of biased compressors (Top-k, bitwise compression) with the theoretical guarantees of unbiased methods.
Cl... | Rebuttal 1:
Rebuttal: We thank the reviewer for their positive evaluation of our paper and for the constructive feedback. Below, we address the questions raised:
**1. Scaling with number of machines**
You are right that the performance gains from MLMC grow with the level of parallelization. When using only 4 machines... | Summary: The work proposed to consider Multilevel Monte Carlo (MLMC) in distributed learning to mitigate problems with the analysis of unbiased compressors. The work introduced a novel Multilevel Monte Carlo (MLMC) compression scheme that leverages biased compressors to construct statistically unbiased estimates.
Claims And E... | Rebuttal 1:
Rebuttal: We thank the reviewer for the thoughtful and encouraging feedback, and for pointing out areas where the theoretical analysis could be clarified. We respond to each concern below and will revise the paper accordingly.
**1. Variance Analysis and Convergence Guarantee**
We appreciate your observati... | Summary: The article presents a new compression method that uses the MLMC algorithm to turn biased compressors into unbiased ones.
Claims And Evidence: The claims in the paper are correct and verified.
Methods And Evaluation Criteria: The proposed methods are proved under generally accepted assumptions on the target ... | Rebuttal 1:
Rebuttal: We thank the reviewer for the thoughtful and detailed feedback. Below, we address each concern and clarify the relationship between our MLMC framework and IS.
**1. MLMC vs. Importance Sampling**
We thank the reviewer for the insightful observation regarding the similarity between our MLMC constr... | Summary: This paper introduces a novel Multilevel Monte Carlo (MLMC) compression scheme that leverages biased compressors to construct statistically unbiased estimates. The proposed algorithm effectively bridges the gap between biased and unbiased methods, combining the strengths of both. The empirical results show tha... | Rebuttal 1:
Rebuttal: We thank the reviewer for the positive evaluation of our contributions, including the novelty of our MLMC compression scheme and the theoretical analysis. We address the concerns below and will incorporate these improvements into the final version.
**1. Implementation**
We appreciate the reviewe... | null | null | null | null | null | null |
Convergence of Mean-Field Langevin Stochastic Descent-Ascent for Distributional Minimax Optimization | Accept (spotlight poster) | Summary: This paper studies the mean-field Langevin (stochastic) descent-ascent (MFL-DA) algorithm for solving distributional minimax optimization problems. The authors demonstrate that the infinite-particle limit of discrete-time MFL-DA is able to converge to the unique stationary point of the problem with a convergen... | Rebuttal 1:
Rebuttal: Thank you for your thoughtful and detailed feedback! Below, we address the comments and questions point-by-point.
**Pessimistic LSI bound**: We will explicitly comment on the weakness of this type of analysis, and we are trying to overcome this in our ongoing research.
**Explicit dependence o... | Summary: This paper studies the convergence rate of discrete-time mean-field Langevin stochastic descent-ascent for min-max problems in distributional optimization under log-Sobolev inequality condition. The authors claim that the derived convergence rate is near-optimal compared to its Euclidean counterpart. The paper... | Rebuttal 1:
Rebuttal: Thank you for your thoughtful comments! Below, we respond to the concerns and clarify the novelty and contributions of our work.
**On our analysis**: We would clarify that both our proof technique and the resulting conclusions differ from the aforementioned papers in several fundamental ways. ... | Summary: This paper analyzes a natural algorithm for distribution min-max optimization, which consists in taking alternating Langevin steps. The main contribution of the paper is theoretical analysis of this algorithm for the case where the gradients are exact as well as the case where the gradients are in-exact. Their... | Rebuttal 1:
Rebuttal: Thank you very much for your critical feedback, especially your sharp observations about the order of the bias term. Below, we address your concerns.
$\newcommand{\bE}{\mathbb{E}} \newcommand{\o}{\omega}$
## Addressing Weaknesses
**Bias term of second moment**: Indeed, the norm control in the sub... | Summary: The paper analyzes a Langevin-type scheme for finding equilibria in mean-field games under convexity and smoothness assumptions. The rates obtained scale as $\widetilde{O}(1/\varepsilon)$, which agrees with the rate in Euclidean space. An extension to stochastic gradients is also considered.
Claims And Eviden... | Rebuttal 1:
Rebuttal: Thank you very much for your thorough and constructive feedback. Below, we address your main points raised.
$\newcommand{\bs}{\boldsymbol}$
## Addressing Weakness:
**Particle Discretization**: For this problem, we have verified that our method is feasible in the particle setting. As an illustra... | null | null | null | null | null | null |
Knowledge-Guided Wasserstein Distributionally Robust Optimization | Accept (poster) | Summary: This paper investigates distributionally robust optimization (DRO), focusing on Wasserstein distance-based DRO (W-DRO) while introducing a novel knowledge-guided cost function to further enhance the robustness and performance of DRO frameworks. The authors provide an extensive and thorough review of DRO, elabo... | Rebuttal 1:
Rebuttal: We sincerely thank the reviewer for the encouraging and detailed feedback. Here we list our responses to the weaknesses and questions suggested by the reviewer.
**W1: Lack of Convergence and Guarantees**
We acknowledge the absence of an explicit discussion on convergence rates and statistical g... | Summary: This work introduces a framework for transfer learning called Knowledge-Guided Wasserstein Distributionally Robust Optimization. In face of the overly conservative property of WDRO, the proposed framework adapts the Wasserstein ambiguity set using external knowledge (augment the transport cost function with an... | Rebuttal 1:
Rebuttal: We sincerely thank the reviewer for the encouraging and detailed feedback.
**W1: Inconsistent Writings**
We acknowledge that the connection between WDRO and transfer learning may not be immediately clear. We will refine our writing to ensure a smoother and more intuitive transition between thes... | Summary: The authors believe that traditional Wasserstein Distributionally Robust Optimization (WDRO) has a conservative tendency, which can lead to suboptimal performance. They argue that in real-world scenarios, prior knowledge can be leveraged to enhance model performance and robustness. Therefore, they propose that... | Rebuttal 1:
Rebuttal: We sincerely thank the reviewer for the encouraging and detailed feedback.
**W1: Explanation of Using Prior Knowledge.**
Our transfer learning approach falls under *Domain Adaptation*, which adapts models trained on a source domain to perform well on a related target domain with limited labeled... | Summary: The paper introduces a transfer-learning variant of Wasserstein Distributionally Robust Optimization. Given some external knowledge, which the authors represent by a vector $\theta$, they construct an ambiguity region based on a Wasserstein distance with $\theta$-dependent cost function. This makes the ambigui... | Rebuttal 1:
Rebuttal: We sincerely thank the reviewer for the encouraging and detailed feedback. Here we list our responses to the weaknesses and questions suggested by the reviewer.
**W1: Selection of $\lambda$.**
The additional parameter $\lambda$ is introduced to model the decision maker’s confidence in using the... | null | null | null | null | null | null |
RIFLEx: A Free Lunch for Length Extrapolation in Video Diffusion Transformers | Accept (poster) | Summary: This paper proposes a training-free method for video extrapolation. It argues that existing extrapolation strategies, originally developed for text and image generation, fail on videos because of temporal repetition and slow motion. It analyzes the frequency components in positional encoding, isolating individu... | Rebuttal 1:
Rebuttal: We sincerely thank Reviewer DWAu for the recognition of our work. The further questions are addressed as follows.
### Q1: If only 20,000 videos are needed, how to select them? Does the selection make any difference?
The 20K videos in this paper were randomly sampled without selection. Fine-tunin... | Summary: This paper focused on video length extrapolation in Video Diffusion Transformers. The authors provided a comprehensive understanding of video length extrapolation by analyzing the role of frequency components in RoPE. Furthermore, a minimal yet effective method named RIFLEx is proposed to prevent repetition by... | Rebuttal 1:
Rebuttal: We thank reviewer Cp7v for the valuable comments. We address the concerns as follows.
### Q1: The experimental results shown in the supplementary materials show that some cases may still suffer from temporal inconsistency and may lead the camera to switch in the playing video. Could the ... | Summary: This paper focuses on a challenging question: How to do the length extrapolation for a trained video diffusion model? After some systematic analyses, they found a metric, named intrinsic frequency, that governs the extrapolation property of a video diffusion model. Then, they propose RIFLEx to reduce the intri... | Rebuttal 1:
Rebuttal: We appreciate Reviewer JAfh for the acknowledgement of our contributions.
### Q1: Missing related work on repetition issues in autoregressive video generation models
Unlike diffusion models, autoregressive video generation models typically quantize videos into discrete tokens and generate video... | Summary: This work solves the problem of repetitiveness in long video generation from a new perspective. This work first analyzes and experiments the frequency component of the video position encoding ROPE, and concludes that the period of the frequency component directly affects the periodicity of certain characterist... | Rebuttal 1:
Rebuttal: We sincerely thank Reviewer EXby for the valuable suggestions. We have thoroughly addressed the detailed comments as follows.
### Q1: Explain why the proposed method is slightly worse than the best method in automatic metrics.
We kindly clarify that **only through a comprehensive consideration of mu... | null | null | null | null | null | null |
Are LLMs Prescient? A Continuous Evaluation using Daily News as the Oracle | Accept (poster) | Summary: The manuscript introduces Daily Oracle, a benchmark dataset composed of automatically-generated question-answer pairs concerning daily news over a 4 year period. The questions are all phrased in a "forecast" manner (e.g., "Will X happen?", "What will Y be on DD-MM-YY?") and are either yes/no or multiple (4) ch... | Rebuttal 1:
Rebuttal: We sincerely thank the reviewer for the thoughtful feedback and constructive suggestions, which we will incorporate into the future version of our manuscript.
***
### Concerns about refusal rate
We appreciate the reviewer’s thoughtful comments regarding the refusal cases of Mistral and Mixtral mod... | Summary: The paper uses the task of forecasting real-world events to demonstrate that LLM knowledge deteriorates on more recent questions, and this trend also holds for retrieval. It generates these forecasting questions between January 2020 and December 2024 using LLMs, sourcing information from news articles.
Claims... | Rebuttal 1:
Rebuttal: We sincerely thank the reviewer for the detailed feedback.
***
### Brier score
While we agree that Brier score is valuable to account for uncertainty in binary predictions, we clarify that accuracy remains a valid metric in this setting, revealing a clear performance degradation trend in our exper... | Summary: This paper proposes a benchmark dataset for assessing a model’s generalization ability in predicting future events and analyzes how model performance evolves over time. Specifically, it compares model performance under three conditions: no access to external information, access to retrieved recent news article... | Rebuttal 1:
Rebuttal: We appreciate your thoughtful feedback, and hope to address the concerns below:
***
### Sample size of human evaluation
While the evaluation involved 60 questions, we respectfully note that **this sample size aligns with standard practices in similar dataset validation studies**. For example, *TLB... | Summary: The authors propose a method of constructing a continuous temporal knowledge & temporal prediction efficacy benchmark for LLMs. They show results of an implementation of the benchmark, and they describe the release of the benchmark for public use.
Claims And Evidence: Yes, to the best of my knowledge.
Method... | Rebuttal 1:
Rebuttal: Thank you for your recognition of our work! | null | null | null | null | null | null |
Geometric Contact Flows: Contactomorphisms for Dynamics and Control | Accept (poster) | Summary: This paper introduces Geometric Contact Flows, a framework that models dynamical systems by incorporating Riemannian and contact geometry as the inductive bias. The learned latent space captures the dynamics by contactomorphically preserving the structure of the ambient space. An additional ensemble approach i... | Rebuttal 1:
Rebuttal: > The proposed system seems able to model trajectories with intersecting paths. However, it is unclear whether it is overfitting to a specific trajectory
Our framework reconstructs intersecting paths in position space using the full state of the system to resolve directional ambiguities. Extensive... | Summary: This paper introduces a geometric contact flows model based on Riemannian and contact geometry, which introduces a robust and interpretable inductive bias over the previous MLP based methods. Furthermore, the authors propose a novel framework to learn latent dynamics of contactomorphisms and generalization mec... | Rebuttal 1:
Rebuttal: > The application of the system can generally be applied to contact-related tasks such as robot-object interaction and trajectory synthesis. No further broader impacts identified.
We clarify that the contact Hamiltonian biases in our framework extend beyond interaction tasks in control-based approa... | Summary: The paper proposes to learn in the latent contact Hamiltonian space to inject inductive biases and encoding desirable physical properties. Additionally, the paper developed an ensemble method that aims to identify the unseen states and drive the dynamics to avoid these states. Experiments in character writing ... | Rebuttal 1:
Rebuttal: > The evaluation tasks seem like simple trajectory generalization tasks. These tasks are state based and have very limited variations (4 characters and 1 robot trajectory).
> Does the proposed method scale to more complex, real-world problems, for example ... ?
Yes, as emphasized in the introduc... | null | null | null | null | null | null | null | null |
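For background on the contact Hamiltonian bias discussed in the Geometric Contact Flows row above, a small sketch may help. It is a generic textbook example, not the paper's model: with contact Hamiltonian H(q, p, s) = p^2/2 + q^2/2 + gamma*s, the contact equations q' = H_p, p' = -H_q - p*H_s, s' = p*H_p - H reduce to a damped oscillator, so mechanical energy dissipates along the flow.

```python
# Illustrative forward-Euler integration of a contact Hamiltonian system
# (generic example, not the paper's learned GCF dynamics).
gamma, dt = 0.3, 1e-3
q, p, s = 1.0, 0.0, 0.0
energy = lambda q, p: 0.5 * (p * p + q * q)   # mechanical energy
E0 = energy(q, p)

for _ in range(10_000):                        # integrate 10 time units
    H = energy(q, p) + gamma * s
    q, p, s = (q + dt * p,                     # q' =  H_p
               p + dt * (-q - gamma * p),      # p' = -H_q - p * H_s
               s + dt * (p * p - H))           # s' =  p * H_p - H

print(energy(q, p) < E0)  # dissipation -> True
```

The point of the sketch is the built-in dissipation: the contact term gamma*s produces the -gamma*p damping automatically, which is the kind of physical inductive bias the review describes.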
TimeDART: A Diffusion Autoregressive Transformer for Self-Supervised Time Series Representation | Accept (poster) | Summary: This paper introduces TimeDART, a self-supervised time series representation learning framework that integrates autoregressive modeling with a denoising diffusion process. The method consists of a causal Transformer encoder with a patch-based embedding strategy to capture global trends, while a denoising diffu... | Rebuttal 1:
Rebuttal: > All full results are at https://anonymous.4open.science/r/TimeDART-results-ECBD/
>
> [A1 for Q1]--->para4,para5, [A3 for W1] —> para3
**A1 for Q1**
Please refer to `A1 for Q1` in our rebuttal to the second Reviewer TNxq.
**A2 for Q2**
Of course, we are willing to elaborate on our understandi... | Summary: This paper presents a self-supervised time series representation learning method. It combines autoregressive modeling with the denoising diffusion process. Key ideas involve normalizing and patch-embedding data, using a causal Transformer encoder for long-term evolution and a patch-level diffusion/denoising me... | Rebuttal 1:
Rebuttal: > All full results are at https://anonymous.4open.science/r/TimeDART-results-ECBD/
>
> [Table1]--->para7, [Table2]--->para8
**A1 for Q1**
We conduct detailed few-shot experiments on the performance of the model with 5% or 10% fine-tuning data, including forecasting and classification tasks. The ... | Summary: The paper introduces TimeDART, a novel self-supervised learning framework for time series analysis that integrates autoregressive modeling with diffusion-based denoising. The framework aims to address the limitations of existing methods, such as masked autoencoders, contrastive learning, and autoregressive app... | Rebuttal 1:
Rebuttal: > All full results are at https://anonymous.4open.science/r/TimeDART-results-ECBD/
>
> [Table1]--->para4,para5, [Table2]--->para6, [A3 for Q3] --->para1, [A5 for Q5]--->para2
**A1 for Q1**
We abandon autoregressive modeling and diffusion in the downstream task based on three considerations: Fir... | Summary: Authors propose a novel self-supervised time series representation pre-training framework that integrates two popular generative paradigms to enhance representation transferability. Specifically, they employ a causal Transformer encoder for autoregressive prediction while incorporating a denoising diffusion pr... | Rebuttal 1:
Rebuttal: > All full results are at https://anonymous.4open.science/r/TimeDART-results-ECBD/
>
> [Table1] —>para1,[Table2] —>para2,[Table3] —> para3
**A1 for Q1**
1. For efficiency, we used a lightweight denoising decoder during pre-training. After training, similar to masked autoencoders, only the embedd... | null | null | null | null | null | null |
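The denoising-diffusion pre-training signal described in the TimeDART reviews above rests on the standard forward noising rule. A minimal sketch of that generic DDPM-style step (a linear beta schedule and patch size 16 are my assumptions, not the paper's exact configuration):

```python
import numpy as np

# Forward noising of one time-series "patch": x_t = sqrt(abar_t) * x0
# + sqrt(1 - abar_t) * eps, with abar_t the cumulative product of (1 - beta).
rng = np.random.default_rng(0)
T = 100
betas = np.linspace(1e-4, 0.02, T)       # assumed linear schedule
abar = np.cumprod(1.0 - betas)           # signal-retention coefficients

x0 = rng.standard_normal(16)             # one patch of a series
eps = rng.standard_normal(16)            # injected Gaussian noise
t = 50
x_t = np.sqrt(abar[t]) * x0 + np.sqrt(1.0 - abar[t]) * eps

print(abar[0] > abar[-1])                # signal fraction shrinks with t -> True
```

A denoiser trained to recover x0 (or eps) from x_t at random t provides the self-supervised objective the reviews refer to.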
Can MLLMs Reason in Multimodality? EMMA: An Enhanced MultiModal ReAsoning Benchmark | Accept (oral) | Summary: This paper introduces the EMMA benchmark to evaluate the reasoning capabilities of multimodal LLMs that require the integration of both text and visual cues. The benchmark is curated from existing datasets and supplemented with 1796 newly created questions covering math, chemistry, physics, and coding. A filte... | Rebuttal 1:
Rebuttal: Thank you for the encouraging and thoughtful review! Below is our response to your questions and suggestions.
**Q1: Why did we filter out questions that can be answered using the text and generated image captions?**
Our enhanced filtering pipeline targets questions requiring deep multimodal reas... | Summary: This paper introduces EMMA, a visual question answering benchmark requiring multimodal reasoning. EMMA includes questions in four domains: math, physics, chemistry, and coding. The questions in EMMA are filtered so that they are not answerable based on only the image captions and questions. The experiments sho... | Rebuttal 1:
Rebuttal: Thank you for your detailed review and great questions! We hope our response below helps address them.
**Concern 1: Only 3 models are used in filtering. Other models might be able to solve the retained questions in the text-only setting**
(1) The 3 models used were among the strongest available ... | Summary: This paper introduces EMMA, a novel benchmark designed to evaluate the vision-language reasoning capabilities of MLLMs.
Unlike existing benchmarks that focus on shallow visual understanding or text-dominated problem-solving, EMMA emphasizes tasks where solutions inherently require iterative interaction between... | Rebuttal 1:
Rebuttal: Thank you for your encouraging review and insightful questions. We provide our responses below.
**Q1: What does "organically reason" mean?**
By "organically reason over and with both text and images", we refer to the integrated way humans seamlessly blend visual and textual information during re... | Summary: This paper proposed a benchmark EMMA (Enhanced MultiModal reAsoning) to feature questions that are difficult to solve by relying solely on text-based reasoning or a single visual pass, covering math, physics, chemistry, and coding domains with 2,788 questions. Ten state-of-the-art MLLMs are further evaluated o... | Rebuttal 1:
Rebuttal: Thank you very much for your thoughtful and careful reading!
As you have pointed out, our benchmark provides a test suite that reveals significant limitations of even the most advanced MLLMs in handling complex multimodal reasoning tasks. Although state-of-the-art MLLMs have recently achieved str... | null | null | null | null | null | null |
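The filtering pipeline discussed in the EMMA rebuttals above (discard questions answerable from text plus captions alone) can be sketched as a simple rule. All names here are illustrative stand-ins, not the authors' code:

```python
# Hypothetical sketch of caption-based filtering: a question survives only if
# no text-only solver answers it correctly from question + caption alone.
def keep_question(question, caption, answer, text_only_solvers):
    """Return True if the question requires more than text-only reasoning."""
    for solve in text_only_solvers:
        if solve(question, caption) == answer:
            return False          # solvable without the image -> discard
    return True

# Toy solvers standing in for the strong text-only models in the rebuttal.
solver_a = lambda q, c: "B"       # always guesses B
solver_b = lambda q, c: "A"       # always guesses A

print(keep_question("Which region is larger?", "a chart", "C",
                    [solver_a, solver_b]))   # no solver is right -> True
print(keep_question("Which region is larger?", "a chart", "A",
                    [solver_a, solver_b]))   # solver_b is right -> False
```

In practice the rebuttal describes using several of the strongest available models as the text-only solvers, which makes the retained set a conservative estimate of genuinely multimodal questions.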
Quantum Optimization via Gradient-Based Hamiltonian Descent | Accept (poster) | Summary: This paper proposed gradient-based Quantum Hamiltonian descent, which is motivated by insights from high-resolution differential equations and based on quantum Hamiltonian descent. They proved a faster convergence rate of gb-qhd under some reasonable assumptions and conducted numerical simulations that demonstra... | Rebuttal 1:
Rebuttal: We sincerely thank Reviewer 1Ka2 for their detailed comments and insightful suggestions. In particular, we appreciated the Reviewer's observation that our work could be beneficial to "understand the relationship between quantum dynamics and optimization".
We address each of the Reviewer's questi... | Summary: This work proposes a gradient-based quantum hamiltonian descent (QHD), which generalizes the previously proposed based on function values. Theoretical and simulation results are also provided.
## After rebuttal
The authors clarified most of my concerns during the rebuttal. Hence, I increased my score. Howeve... | Rebuttal 1:
Rebuttal: We sincerely thank Reviewer F84Z for their detailed comments and insightful suggestions.
First, we would like to clarify the primary contribution of this submission, as it appears to have been misinterpreted by Reviewer F84Z.
The primary objective of this work is to propose a *novel* quantum Ham... | Summary: In this submission, the authors presented a variant of the prominent quantum Hamiltonian descent (QHD) algorithm by adding the help of the gradient information. More specifically, the authors proposed a new time-dependent Hamiltonian as in Eq (4) which, unlike the original QHD, contains the gradient informatio... | Rebuttal 1:
Rebuttal: We sincerely thank Reviewer NP8c for their detailed comments and insightful suggestions. In particular, we appreciate the Reviewer's observation that our submission "makes a solid contribution to quantum machine learning and optimization."
Below, we address each of the Reviewer's questions indivi... | Summary: This paper explores quantum algorithms for solving unconstrained optimization problems. Given that Nesterov's accelerated gradient descent admits a classical Hamiltonian dynamics interpretation, it is natural to consider leveraging quantum Hamiltonian dynamics for algorithm design. In particular, Leng et al. p... | Rebuttal 1:
Rebuttal: We sincerely appreciate Reviewer Wa1N's thorough feedback and valuable insights. In particular, we thank the Reviewer for recognizing our techniques as "elegant" and acknowledging that this work "proposes a novel idea and enriches the literature on solving unbounded continuous optimization problem... | null | null | null | null | null | null |
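The review above notes that Nesterov's accelerated gradient descent admits a classical (damped second-order / Hamiltonian) dynamics interpretation, which motivates the quantum variants. A purely classical sketch of that starting point, not the paper's quantum algorithm, on an ill-conditioned quadratic:

```python
import numpy as np

# Nesterov's accelerated gradient vs. plain gradient descent on
# f(x) = 0.5 * x^T A x with condition number kappa = 100.
A = np.diag([1.0, 100.0])
f = lambda x: 0.5 * x @ A @ x
grad = lambda x: A @ x

eta = 1.0 / 100.0                    # step size 1/L
beta = (10.0 - 1.0) / (10.0 + 1.0)   # (sqrt(kappa)-1)/(sqrt(kappa)+1)

x_gd = np.array([1.0, 1.0])
x_prev = y = np.array([1.0, 1.0])
for _ in range(200):
    x_gd = x_gd - eta * grad(x_gd)          # plain gradient step
    x_new = y - eta * grad(y)               # gradient step at lookahead point
    y = x_new + beta * (x_new - x_prev)     # momentum extrapolation
    x_prev = x_new

print(f(x_prev) < f(x_gd))  # acceleration reaches a lower objective -> True
```

The momentum term is exactly the "kinetic" part that becomes a Hamiltonian in the continuous-time limit; the quantum proposals in this row replace that classical dynamics with quantum Hamiltonian evolution.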
AGAV-Rater: Adapting Large Multimodal Model for AI-Generated Audio-Visual Quality Assessment | Accept (poster) | Summary: This paper studies the use of LMMs to assess the quality of AI-generated audio-visual content, evaluating AGAVs from three dimensions: audio perceptual quality, A/V content consistency, and overall A/V quality. The authors introduce a novel AI-generated audio-visual quality assessment dataset, AGAVQA, and propose an L...
Rebuttal: We would like to thank the reviewer for the constructive and valuable feedback. We have addressed your concerns point-by-point below.
**1. Definition of evaluation metrics**
SRCC and KRCC measure the prediction monotonicity, while PLCC and RMSE measure the prediction accuracy. Better AGAVQA meth... | Summary: This paper introduces a AI-generated audio-visual (AGAV) quality assessment dataset (AGAVQA) and AGAV-Rater, a large multimodal model (LMM)-based approach for evaluating AGAV. The AGAVQA dataset containing two subsets: AGAVQA-MOS (multi-dimensional score prediction) and AGAVQA-Pair (optimal AGAV selection). AG... | Rebuttal 1:
Rebuttal: We would like to thank the reviewer for the constructive and valuable feedback. We have addressed your concerns point-by-point below.
**1. Correlation of 3-dimensional MOSs**
**SRCC between audio quality and content consistency is 0.6860, indicating that the two dimensions are independent**. SRC... | Summary: This work introduces a new quality assessment dataset and network for the AI-Generated Audio-Visual task. The database additionally handles multimodal challenges like A/V content inconsistency, and the quality assessment model leverages LMM to predict multi-dimensional scores.
Claims And Evidence: The claims ... | Rebuttal 1:
Rebuttal: We would like to thank the reviewer for the constructive and valuable feedback. We have addressed your concerns point-by-point below.
**1. Details of the auto-labeling process**
We manually verify 500 auto-labeling results. Among them, the accuracy for content consistency related instruction-res... | Summary: This paper addresses a challenging and important question for the VTA methods: whether LMMs can be utilized to assess the quality of audio-visual content generated by VTA methods. To tackle this problem, the authors first establish a large-scale AGAV quality assessment dataset, AGAVQA, which includes two subse... | Rebuttal 1:
Rebuttal: We would like to thank the reviewer for the constructive and valuable feedback. We have addressed your concerns point-by-point below.
**1. Details of human evaluation**
We invited subjects familiar with AVQA and AGAV for on-site training. We provided detailed explanations of the scoring criteria... | null | null | null | null | null | null |
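The rebuttal above distinguishes the quality-assessment metrics by role: SRCC/KRCC measure prediction monotonicity while PLCC/RMSE measure prediction accuracy. A minimal NumPy-only sketch of three of them (toy scores, not the paper's data; SRCC is computed as Pearson correlation of ranks, valid when there are no ties):

```python
import numpy as np

def plcc(a, b):                      # Pearson linear correlation coefficient
    a, b = a - a.mean(), b - b.mean()
    return float((a @ b) / np.sqrt((a @ a) * (b @ b)))

def srcc(a, b):                      # Spearman rank correlation (no ties)
    rank = lambda v: np.argsort(np.argsort(v)).astype(float)
    return plcc(rank(a), rank(b))

mos  = np.array([1.0, 2.0, 3.0, 4.0, 5.0])   # ground-truth mean opinion scores
pred = np.array([1.2, 1.9, 3.3, 3.8, 5.1])   # monotone but imperfect predictions

rmse = float(np.sqrt(np.mean((mos - pred) ** 2)))
print(srcc(mos, pred))               # 1.0: perfectly monotonic predictions
```

Here SRCC is exactly 1.0 because the predicted ordering matches the MOS ordering, while PLCC stays just below 1.0 and RMSE is nonzero, illustrating why monotonicity and accuracy are reported separately.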
A-PSRO: A Unified Strategy Learning Method with Advantage Metric for Normal-form Games | Accept (poster) | Summary: This paper proposes Advantage Policy Space Response Oracle (A-PSRO), a new framework for learning Nash equilibria in normal-form games with large strategy spaces, applicable to both zero-sum and general-sum settings. The key contribution is the Advantage function, a new evaluative metric that guides strategy u... | Rebuttal 1:
Rebuttal: Thank you for reviewing our paper and providing valuable feedback. Our responses are as follows. We hope these responses address your concerns and that you will consider raising the score of this paper.
Regarding the computational complexity of the LookAhead module, we will explain it from both t... | Summary: This paper defines “Advantage” in 0s games and 2p simplified games as the value a policy can achieve given that all other policies in the strategy profile are playing their best response. The authors derive A-PSRO with Diversity and LookAhead for large-scale games. The authors thus proposed A-PSRO based ...
Rebuttal: Thank you for reviewing our paper and providing valuable feedback. We appreciate your recognition of our work. Our responses and modifications are as follows. We hope these responses address your concerns.
Due to page limitations in the main text, we have placed the algorithmic details of A-PSRO ... | Summary: The authors propose an extension of PSRO to normal-form games with large-scale action spaces. They incorporate an advantage function to guide strategy exploration and speed up convergence to NE and improve joint rewards in general-sum normal-form games.
Claims And Evidence: - The authors claim to establish an... | Rebuttal 1:
Rebuttal: Thank you for reviewing our paper and providing valuable feedback. Our responses and modifications are as follows. We hope these responses address your concerns and that you will consider raising the score of this paper.
We would like to emphasize that the motivation of this paper is the improvem... | Summary: The paper addresses the challenge of solving Nash equilibria in normal-form games, particularly for games with large strategy spaces. Traditional PSROs and their variants have been effective in learning equilibria but often lack an efficient metric to evaluate and guide strategy improvement. This limitation af... | Rebuttal 1:
Rebuttal: Thank you for reviewing our paper and providing valuable feedback. Our responses and modifications are as follows. We hope these responses address your concerns and that you will consider raising the score of this paper.
Regarding the deterministic convergence rate, the explanation is as follows.... | null | null | null | null | null | null |
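For background on the PSRO setting these A-PSRO reviews discuss, the basic object is a best response in a normal-form game. A generic sketch (standard game theory, not the paper's Advantage metric) using rock-paper-scissors for the row player:

```python
import numpy as np

# Row player's payoff matrix for rock-paper-scissors (zero-sum).
A = np.array([[ 0.0, -1.0,  1.0],
              [ 1.0,  0.0, -1.0],
              [-1.0,  1.0,  0.0]])

def best_response_value(A, col_strategy):
    """Best payoff a row player can get against a fixed column strategy."""
    return float(np.max(A @ col_strategy))

uniform = np.ones(3) / 3
print(best_response_value(A, uniform))   # 0.0 -> uniform is unexploitable
```

Quantities like this (the gap between a best response and the current strategy's value) are what PSRO-style oracles iterate on; the paper's Advantage function refines how that guidance is computed.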
Scaling Probabilistic Circuits via Monarch Matrices | Accept (poster) | Summary: This paper replaces dense matrices with sparse Monarch matrices, reducing the computation cost and maintaining accuracy.
Claims And Evidence: Yes, the claims made in the submission are supported by clear and convincing evidence.
Methods And Evaluation Criteria: The proposed methods may be suitable for the pr... | Rebuttal 1:
Rebuttal: Thank you for your feedback.
```
To be honest, I do not know much about hybrid models since I have only read several papers such as Mamba. However, I think the main issue of the hybrid model is that we cannot scale up the size of the hybrid model. That is my understanding. Consequently, almost n... | Summary: This paper proposes a novel parameterization for probabilistic circuits (PCs) to improve their scalability, using structured sparse matrices called Monarch matrices. By replacing dense matrices in sum blocks of PCs with Monarch matrices, the proposed methods can reduce computational costs and allow larger scal... | Rebuttal 1:
Rebuttal: Thank you for your insightful feedback.
```
I find the introduction of Butterfly matrices in line 205 a bit redundant and irrelevant. Why are they mentioned in the methods section rather than the related work? Have they been applied in PCs?
```
One of the main contributions of this work is ident... | Summary: Despite many advantages of probabilistic circuits (PC), their implementations are often difficult due to computational burden, even with block structures. In this paper, the authors proposed an alternative method that replaces dense sum blocks with Monarch matrices, and the method significantly reduce the memo... | Rebuttal 1:
Rebuttal: Thank you for your encouraging feedback. Please feel free to follow up if you have any questions. | Summary: This paper introduces a novel method for scaling Probabilistic Circuits (PCs) by replacing dense matrices in sum nodes with Monarch matrices, which are a type of structured sparse matrix constructed by Kronecker products. The key idea is to leverage the sparsity and structure of Monarch matrices to reduce memor... | Rebuttal 1:
Rebuttal: Thank you for your feedback.
```
The paper does not compare Monarch matrices to alternative structured representations like Block Tensor-Train (BTT) decomposition [1] or Toeplitz-like structured layers.
```
Our study is not only limited to the Monarch matrices defined in [1]. Our construction o... | null | null | null | null | null | null |
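The Monarch structure named throughout this row can be made concrete with a small sketch. It follows the generic Monarch layout (two block-diagonal factors joined by a fixed "transpose" permutation, as in Dao et al., 2022), so a matrix-vector product costs O(n·sqrt(n)) rather than O(n^2); the paper's PC-specific sum-block construction may differ in details:

```python
import numpy as np

n, b = 16, 4                                  # n = b*b, blocks of size b x b
rng = np.random.default_rng(0)
L = [rng.standard_normal((b, b)) for _ in range(b)]   # left block-diagonal
R = [rng.standard_normal((b, b)) for _ in range(b)]   # right block-diagonal
perm = np.arange(n).reshape(b, b).T.reshape(-1)       # "transpose" permutation

def blockdiag_matvec(blocks, x):
    return np.concatenate([B @ x[i*b:(i+1)*b] for i, B in enumerate(blocks)])

def monarch_matvec(x):                        # O(n^1.5) multiplications total
    x = blockdiag_matvec(R, x)
    x = x[perm]                               # interleave the block outputs
    x = blockdiag_matvec(L, x)
    return x[np.argsort(perm)]                # undo the permutation

def blockdiag(blocks):                        # dense reference for checking
    M = np.zeros((n, n))
    for i, B in enumerate(blocks):
        M[i*b:(i+1)*b, i*b:(i+1)*b] = B
    return M

P = np.eye(n)[perm]                           # permutation matrix
M = P.T @ blockdiag(L) @ P @ blockdiag(R)     # the dense Monarch matrix
x = rng.standard_normal(n)
print(np.allclose(monarch_matvec(x), M @ x))  # True
```

The check at the end confirms the factored matvec equals multiplication by the dense matrix M, while only touching 2·b·b² = 2n·sqrt(n) parameters.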
In-Context Fine-Tuning for Time-Series Foundation Models | Accept (poster) | Summary: The paper extends a foundational forecasting model so that it can be conditioned on additional time-series information. Using in-context learning, the values of other time-series and also the value of the time-series to be predicted are added to the model input. Then the model is trained with the initial objecti...
Rebuttal: We thank the reviewer for their helpful comments and suggestions. We address the main points below. Please let us know if you have any further comments or concerns. If our response sufficiently addresses your concerns, we hope that you will consider updating your score accordingly.
> Given that t... | Summary: This paper proposes a novel in-context finetuning strategy for a specific Time Series Foundation Model (TSFM). By continual pretraining a TSFM (TimesFM in the paper) with in-context examples, the updated model is able to be prompted with related past time series examples at inference time, enhancing forecastin... | Rebuttal 1:
Rebuttal: We thank the reviewer for their helpful comments and suggestions. We address the main points below. Please let us know if you have any further questions. If our response addresses your concerns, we hope you consider raising your score accordingly.
> Section 6.3: Moirai
Thank you for clarifying t... | Summary: The paper proposes a "fine-tuning strategy via in-context learning" for pre-trained time series forecasting models. Essentially, the approach is similar to few-shot learning in LLM as multiple time series are added to the context in addition to the forecasted context.
The authors modify an existing architectur... | Rebuttal 1:
Rebuttal: We thank the reviewer for their helpful comments and suggestions. We address the main points below. Please let us know if you have any further comments or concerns.
> Only the subset of zero-shot benchmark is utilized. While this is a good idea to preserve the "zero-shot" setting, it would be ad... | Summary: The authors propose a framework to obtain pretrained models for time series forecasting that are capable of doing in-context learning. The authors approach is verified on top of TimesFM (a decoder only pretrained model for time series forecasting) accompanied with extensive evaluations. The authors show that t... | Rebuttal 1:
Rebuttal: We thank the reviewer for their helpful comments and suggestions. We address the main points below. Please let us know if you have any further comments or concerns. If our response sufficiently addresses your concerns, we hope that you will consider raising your score accordingly.
> I would sug... | null | null | null | null | null | null |
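The in-context idea these reviews describe, conditioning a forecast on related example series added to the context, can be caricatured with a toy nearest-neighbour "forecaster". This is an illustration of the conditioning interface only, not TimesFM or the paper's method:

```python
import numpy as np

# Toy in-context forecaster: find the support series whose history best
# matches the target history, and reuse its continuation as the forecast.
def icl_forecast(target_history, support_series, horizon):
    h = len(target_history)
    best = min(support_series,
               key=lambda s: np.linalg.norm(np.asarray(s[:h]) - target_history))
    return np.asarray(best[h:h + horizon])

support = [np.sin(np.arange(20)), np.cos(np.arange(20))]  # in-context examples
target = np.sin(np.arange(8))            # matches the first support series

forecast = icl_forecast(target, support, 4)
print(np.allclose(forecast, np.sin(np.arange(8, 12))))   # True
```

A trained model replaces the nearest-neighbour lookup with learned attention over the in-context series, but the input format (support series plus target history in one context) is the same.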
MOGIC: Metadata-infused Oracle Guidance for Improved Extreme Classification | Accept (poster) | Summary: This paper mainly explores methods to enhance classification performance using metadata in the task of Extreme Classification. Experiments on six popular benchmark datasets show that the method significantly improves the model performance.
Claims And Evidence: Yes
Methods And Evaluation Criteria: Yes
Theore... | Rebuttal 1:
Rebuttal: We thank the reviewer for their detailed review. Please find our response to your comments below.
1. **The two-phase training process involving Oracle training and Oracle-guided disciple training might introduce additional complexity.**
* **Response**: We agree with the reviewer that the two-stag... | Summary: The paper introduces MOGIC, a framework for improving extreme classification (XC) by leveraging metadata through a two-phase training approach. XC involves tasks with extremely large label spaces (e.g., product recommendations, Wikipedia tagging) where metadata can enhance accuracy but faces challenges like no... | Rebuttal 1:
Rebuttal: Thank you for your detailed review. Below are our responses to your comments.
1. **Comparisons with LLM-based approaches**
* **Response**: We have already included comparisons of MOGIC against LLaMA and Phi-based Oracles without disciple models, when LoRA-finetuned for label generation (XC task) ... | Summary: The authors propose a framework for building a disciple model which can perform extreme multi-label classification with the assistance of RAG-like metadata. This pipeline, MOGIC, is two-phase: in phase (1) an oracle with access to high-quality, ground-truth metadata is trained. In phase (2), a smaller, "discip... | Rebuttal 1:
Rebuttal: Thank you for your detailed review. Please find our response to your comments below.
1. **Clarity on Rademacher constants**
* **Response**: The Rademacher complexity constants $R_q$ and $R_l$ in Theorem 1 are scalar values which quantify the complexity or capacity of the hypothesis classes corres... | null | null | null | null | null | null | null | null |
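For readers unfamiliar with the role of such constants: Rademacher complexities typically enter a standard uniform generalization bound. A generic textbook form (for a loss class $\mathcal{F}$ with values in $[0,1]$ and $n$ i.i.d. samples; this is the classical bound, not the paper's Theorem 1) is:

```latex
% Standard Rademacher generalization bound: with probability at least
% 1 - \delta over the sample, for all f in the loss class F,
\mathbb{E}[f(z)] \;\le\; \frac{1}{n}\sum_{i=1}^{n} f(z_i)
  \;+\; 2\,\mathfrak{R}_n(\mathcal{F})
  \;+\; \sqrt{\frac{\ln(1/\delta)}{2n}} .
```

Smaller complexity terms $\mathfrak{R}_n$ for the query and label hypothesis classes therefore translate directly into tighter guarantees, which is why the rebuttal emphasizes what $R_q$ and $R_l$ quantify.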
Few-Shot Learner Generalizes Across AI-Generated Image Detection | Accept (poster) | Summary: This paper adopts the concept of traditional few-shot learning (prior to 2022) and utilizes a prototype network to construct an AIGC image detector, with experimental validation demonstrating improved generalization performance.
Claims And Evidence: Yes.
Methods And Evaluation Criteria: Strength:
- This pap... | Rebuttal 1:
Rebuttal: Dear Reviewer fx2j,
Thank you for your feedback and constructive comments. We appreciate the time and effort you invested in reviewing our work. Here are our responses to your concerns:
Q1. Academic contribution is quite limited.
To the best of our knowledge, our work is the first to systematic... | Summary: The paper presents the Few-Shot Detector (FSD), an innovative approach to detect AI-generated images, particularly from unseen generative models. Traditional fake image detectors often struggle with generalization to new models due to the scarcity and high cost of collecting training data. FSD circumvents this... | Rebuttal 1:
Rebuttal: Dear Reviewer N8UV,
Thank you for your detailed feedback and constructive comments. We have carefully considered each point you raised and would like to address them as follows:
Q1: The lack of strategy for obtaining test images from the same domain in real-world applications.
While current dif... | Summary: This paper introduces an approach to detecting AI-generated images by reframing the task as a few-shot classification problem. The Few-Shot Detector (FSD) uses a prototypical network to learn a specialized metric space, distinguishing between unseen fake images and real ones only using very few samples. By tre... | Rebuttal 1:
Rebuttal: Dear Reviewer aJMC,
Many thanks for your careful reading and valuable comments. We hope our reply further reduces potential misunderstandings.
Q1. How does FSD perform when faced with generative models that are significantly different from those in the training set?
Our comprehensive evaluation... | null | null | null | null | null | null | null | null |
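The prototypical-network classification rule the FSD reviews attribute to the method is simple to state: each class prototype is the mean of its support embeddings, and a query is assigned to the nearest prototype. A sketch with illustrative 2-D features (the paper uses a learned embedding, not raw coordinates):

```python
import numpy as np

def classify(query, support, labels):
    """Nearest-prototype classification over a labelled support set."""
    classes = sorted(set(labels))
    protos = np.stack([support[np.array(labels) == c].mean(axis=0)
                       for c in classes])          # one prototype per class
    dists = np.linalg.norm(protos - query, axis=1) # Euclidean distances
    return classes[int(np.argmin(dists))]

support = np.array([[0.0, 0.1], [0.1, 0.0],        # "real" embeddings
                    [1.0, 0.9], [0.9, 1.1]])       # "fake" embeddings
labels = ["real", "real", "fake", "fake"]

print(classify(np.array([0.05, 0.05]), support, labels))  # real
print(classify(np.array([0.95, 1.00]), support, labels))  # fake
```

Because only the support means change when a new generator appears, adapting to an unseen model needs just a handful of its images, which is the few-shot property the reviews highlight.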
Multi-Modal Object Re-identification via Sparse Mixture-of-Experts | Accept (poster) | Summary: This work introduces MFRNet, which mitigates insufficient interaction and feature imbalance via two modules. The Feature Fusion Module (FFM) uses a mixture-of-generators for pixel-level alignment, while the Feature Representation Module (FRM) employs a mixture-of-experts for balanced modality-shared and modali... | Rebuttal 1:
Rebuttal: > **Q1:** While the experiments are comprehensive, the analysis lacks depth. For instance, Table 3 presents results for M(TIR) without providing a corresponding discussion in which the performance of MFRNet is lower than TOPReID.
>
**A1:** Thank you for your suggestion. TOP-ReID benefits from sp... | Summary: This paper introduces MFRNet for multi-modal object re-identification. This approach addresses two core issues: insufficient pixel-level feature interaction and difficulty balancing between shared and specific modality features. The proposed Feature Fusion Module (FFM) fosters fine-grained cross-modal interact... | Rebuttal 1:
Rebuttal: > **Q1:** Specifically, in Section 3.1, the description of the FFM states that transformations for all three modalities occur simultaneously. Yet, Figure 2 only illustrates interactions involving the RGB image, which may not fully represent the fusion process.
>
**A1:** Thank you for your sugges... | Summary: This paper proposes a novel Multi-modal Fusion and Representation Network (MFRNet) approach for multi-modal object re-identification, inspired by the sparse Mixture-of-Experts (MoE) paradigm. The proposed framework enhances performance by introducing a Feature Fusion Module (FFM) for fine-grained pixel-level c... | Rebuttal 1:
Rebuttal: > **Q1:** While the MFRNet presents a novel perspective within multi-modal ReID by introducing a sparse Mixture-of-Experts (MoE) framework, its overall novelty remains moderate. It is more like a representing of an effective combination of existing techniques.
>
**A1:** Our novelty primarily lies... | Summary: This work presents the Modality Fusion and Representation Network (MFRNet) aiming to address the limitations in modality interaction and representation of recent works. Two modules named Feature Fusion Module (FFM) and Feature Representation Module (FRM) are proposed to tackle the interaction and representatio... | Rebuttal 1:
Rebuttal: ## Response to Reviewer PMp6:
> **Q1:** From Table 4, we can observe that the FRM has decreased the 'Params' and 'FLOPs'. Generally, the MoE structure should keep similar or slightly higher 'Params' and 'FLOPs' with the baseline during the inference.
>
**A1:** Thank you for your suggestion. As d... | null | null | null | null | null | null |
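The sparse Mixture-of-Experts paradigm that MFRNet builds on can be sketched generically: a gate scores the experts, only the top-k are evaluated, and their outputs are mixed with renormalized gate weights. This is standard MoE routing, not the paper's FFM/FRM modules:

```python
import numpy as np

def softmax(z):
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

def sparse_moe(x, experts, gate_w, k=2):
    scores = softmax(gate_w @ x)
    top = np.argsort(scores)[-k:]            # indices of the top-k experts
    w = scores[top] / scores[top].sum()      # renormalize over selected experts
    return sum(wi * experts[i](x) for wi, i in zip(w, top))

rng = np.random.default_rng(0)
experts = [lambda x, W=rng.standard_normal((3, 3)): W @ x for _ in range(4)]
gate_w = rng.standard_normal((4, 3))
x = rng.standard_normal(3)
y = sparse_moe(x, experts, gate_w, k=2)
print(y.shape)  # (3,)
```

With k equal to the number of experts this reduces to an ordinary dense mixture; sparsity (small k) is what keeps inference cost low while letting different experts specialize, e.g. on modality-shared versus modality-specific features.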
Global-Local Dirichlet Processes for Clustering Grouped Data in the Presence of Group-Specific Idiosyncratic Variables | Accept (poster) | Summary: The article presents a new method for performing Bayesian nonparametric clustering on data sets with both global and local variables, i.e. data sets for which some variables are only observed for a subset of the individuals. The paper presents a novel formulation of a clustering model that allows for global cl... | Rebuttal 1:
Rebuttal: We sincerely thank the reviewer for all the comments and questions, responses to some of which are in the textbox below.
- **Question related to allowing our algorithm to make use of information on presence/absence of some variable**
**Response**: The presence of PSA measurement indicates pros... | Summary: This paper considers the problem of clustering grouped data for which the observations may include group-specific variables in addition to the variables that are shared across groups.
To allow for these group-specific variables to aid in the clustering, the paper proposes a novel Bayesian nonparametric approa... | Rebuttal 1:
Rebuttal: We sincerely thank the reviewer for the comment.
- **The derivations seem straightforward, and the paper's impacts do not seem significant for ICML. What were the main technical innovations that overcame previously unresolved technical challenges?**
**Response**: In this paper, our contribution... | Summary: This paper addresses the problem of clustering grouped data where observations may include both group-specific and shared variables across groups. The authors propose a novel Bayesian non-parametric approach called the global-local (GLocal) Dirichlet process. Unlike HDP, where clusters are derived from a comm... | Rebuttal 1:
Rebuttal: We sincerely thank the reviewer for all the comments.
- **Comparison, differences and gap between this work and Dinari and Freifeld, UAI 2020.**
**Response**: Dinari and Freifeld, 2020 discusses a setting in which data arises from multiple pre-defined groups and consists of variables that are s... | null | null | null | null | null | null | null | null |
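For background on the Dirichlet-process machinery underlying the GLocal model, the standard truncated stick-breaking construction samples mixture weights as pi_k = v_k * prod_{j<k}(1 - v_j) with v_j ~ Beta(1, alpha). This sketch shows the generic DP only, not the paper's global-local extension:

```python
import numpy as np

def stick_breaking(alpha, truncation, rng):
    """Truncated stick-breaking sample of Dirichlet-process weights."""
    v = rng.beta(1.0, alpha, size=truncation)
    # Length of stick remaining before each break: prod_{j<k} (1 - v_j).
    remaining = np.concatenate([[1.0], np.cumprod(1.0 - v)[:-1]])
    return v * remaining

rng = np.random.default_rng(0)
pi = stick_breaking(alpha=2.0, truncation=50, rng=rng)
print(pi.shape, bool(np.all(pi >= 0)))   # (50,) True
```

Larger alpha spreads mass over more components; hierarchical variants (such as the HDP the review contrasts with) share such weight distributions across groups, which is the design space the GLocal process modifies.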
RULEBREAKERS: Challenging LLMs at the Crossroads between Formal Logic and Human-like Reasoning | Accept (poster) | Summary: The paper "RULEBREAKERS: Challenging Large Language Models at the Crossroads between Formal Logic and Human-like Reasoning" introduces RULEBREAKERS, a dataset designed to evaluate large language models' (LLMs) ability to distinguish between logical rule-based conclusions and conclusions that align with human rea...
Rebuttal: We thank Reviewer 8kkH for commending our “**methodology is well-designed**”, our claims are “**largely supported by empirical evidence**”, and that our experimental design and analyses are “**thoughtfully constructed, with several mechanisms to ensure validity**”. We further appreciate and agree ... | Summary: The authors propose a new dataset for single step reasoning, which consists of pairs of premises and a conclusion, which are answered in a binary way. The premise and the conclusion always are true if one only follows the logical reasoning. However the pair is divided into "rulebreakers", where the reasoning c... | Rebuttal 1:
Rebuttal: We thank Reviewer fmAM for their helpful feedback. We appreciate their positive comments that our evaluation is “**well executed and broad**”, our analyses are “**convincing**” and that our paper is “**clearly written, well presented and easy to follow**”.
**Q1: “How can GPT-4o be in Fig 6, if ca... | Summary: This paper introduces RULEBREAKERS, a dataset specifically created to assess LLMs on reasoning scenarios that emphasize "human-like reasoning" over logic reasoning. The study demonstrates that state-of-the-art LLMs, including GPT-4o, frequently apply logical rules, which is inconsistent with human reasoning.
... | Rebuttal 1:
Rebuttal: We thank Reviewer QkHy for their helpful feedback and recognizing that our study was “**extensive**” in having “**compared multiple advanced LLMs**”.
**Weakness 1 (W1): “human reasoning is undefined”/Question 1 (Q1): “how to define human-reasoning and why?”**
We will **replace references to “hum... | Summary: The authors introduce RULEBREAKERS, a dataset designed to assess LLMs' ability to reason using common sense and factual knowledge rather than strictly following formal logic. The experimental evaluation proposed in the paper spans over seven LLMs, and its findings uncover a notable weakness in these models' ab... | Rebuttal 1:
Rebuttal: We thank Reviewer twqU for commending that our “**methods are reliable**”, our “**dataset represents a valid addition to the LLM community**”, our experiments are “**sound**”, and results “**highlight an open challenge**”. We are glad they found our paper “**primarily well-written**”, echoing Revi... | null | null | null | null | null | null |
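The formal-logic side of the RULEBREAKERS setup can be illustrated with a brute-force entailment check: the benchmark's items present premises that *logically* entail a conclusion (e.g. by modus tollens) while commonsense knowledge argues against it. A sketch of the logic check only, not the dataset construction:

```python
from itertools import product

def entails(premises, conclusion, variables):
    """True iff every assignment satisfying all premises satisfies the conclusion."""
    for values in product([False, True], repeat=len(variables)):
        env = dict(zip(variables, values))
        if all(p(env) for p in premises) and not conclusion(env):
            return False   # found a counterexample
    return True

# Modus tollens: from (p -> q) and (not q), infer (not p).
premises = [lambda e: (not e["p"]) or e["q"],   # p -> q
            lambda e: not e["q"]]               # not q
conclusion = lambda e: not e["p"]               # therefore not p

print(entails(premises, conclusion, ["p", "q"]))  # True
```

A "rulebreaker" item keeps this formal entailment intact while choosing contents (e.g. "if it is a bird, it can fly") for which humans reject the literal conclusion, which is exactly the tension the benchmark probes.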
UniDB: A Unified Diffusion Bridge Framework via Stochastic Optimal Control | Accept (spotlight poster) | Summary: This paper proposes a unified framework, UniDB, of diffusion bridge models based on Stochastic Optimal Control. This framework enhances the quality and detail of generated images by balancing control cost and terminal penalty.
Claims And Evidence: Claim 1: UniDB helps to understand and generalize Doob’s $h$-t... | Rebuttal 1:
Rebuttal: Thank you for your feedbacks and comments.
> Claim 1: Choice of $L_1$ norm in the training objective (Equation 19): The paper does not provide a clear justification for using the $L_1$ norm instead of other alternatives like $L_2$ norm. The authors should explain whether this choice is based on e... | Summary: This paper introduces UniDB, a diffusion bridge model framework that utilizes
Stochastic Optimal Control (SOC) for process optimization, providing an analytical solution for the optimal controller. UniDB generalizes existing diffusion
bridge models by showing that Doob’s h-transform is a special case where th... | Rebuttal 1:
Rebuttal: We greatly appreciate your feedback and inquiries.
> Claims And Evidence 1: Additional explanation is needed regarding whether the optimal controller in LQ SOC directly contributes to producing sharper and more detailed images.
The over-control in Doob's h-transform violates the natural statisti... | Summary: This paper proposes a framework that unifies and extends various diffusion bridge methods by way of stochastic optimal control. In the case of linear dynamics, they derive a computationally tractable method, which can be thought of as a regularization of previous methods by the introduction of a new hyperparam... | Rebuttal 1:
Rebuttal: We sincerely thank you for your comment.
> Claim & Question 2: The reviewer finds that result to be mathematically trivial and also to be disconnected from saying anything about how well the method performs in practice. However, The reviewer doesn’t think this proposition is integral to their wor... | Summary: This paper proposed a diffusion-based method for image restoration problems, e.g., super-resolution, deraining, and inpainting. Given a dataset of corrupted and clean image pairs, the goal is to construct a diffusion model that at inference generates clean images given corrupted images. The proposed method is ... | Rebuttal 1:
Rebuttal: We sincerely thank you for your comments and questions. We will provide a detailed response to these concerns.
> Claim: The reviewer is not convinced by the implication of Prop 4.3: $\mathcal{J}$ is smaller for finite $\gamma$ does not imply suboptimality in empirical performance. Intuitively, it... | null | null | null | null | null | null |
What Makes In-context Learning Effective for Mathematical Reasoning | Accept (poster) | Summary: In this paper, the authors investigate the theoretical explanation of in-context learning (ICL). They prove that the influence of the demonstrations can be estimated by two factors: LLM-oriented semantic similarity and inference stability of demonstrations. Based on this, they propose the LMS3 method, and the expe...
Rebuttal: We sincerely appreciate your recognition of our reasonable, correct, and self-contained theoretical analysis, the novelty and effectiveness of our method, and our convincing and strong empirical validation.
$\bf{Q1}$: The generalizability of the theory and methods in this paper could be further ... | Summary: This paper aims to explore the underlying mechanism of in-context learning (ICL). To this end, the authors first theoretically analyze the influence of the demonstrations on inference performance, where they prove that the performance is bounded by an LLM-oriented semantic similarity and the demonstration stab... | Rebuttal 1:
Rebuttal: We sincerely appreciate your recognition of our theoretical analysis, the clarity and good writing of our paper, and the strong performance of our method. As for your concerns:
$\bf{Q1}$: The theoretical analyses in this paper are highly general. Therefore, I suggest the authors to discuss the po... | Summary: In-context learning has been a key driver of LLM performance over the past few years. However, the performance of a model can vary (and sometimes even be negatively impacted) based on the content of the few-shot demonstrations provided in-context. This work provides a theoretical analysis of the conditions und... | Rebuttal 1:
Rebuttal: We sincerely appreciate your affirmation of the effectiveness of our experiments, the good writing of our paper, and the significance of our work. As for your concerns:
$\bf{Q1}$: The algorithm presented requires white box access to the LLM. However, given the strong generalization performance ac... | null | null | null | null | null | null | null | null |
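The similarity-driven demonstration selection this row discusses can be illustrated with a toy cosine-similarity ranker. This is a hypothetical sketch covering only the semantic-similarity factor; the paper's LMS3 method additionally weighs an inference-stability term, and all names here are illustrative:

```python
import numpy as np

def select_demos(query_emb, demo_embs, k):
    """Rank candidate demonstrations by cosine similarity to the query
    embedding and return the indices of the top k. Covers only the
    semantic-similarity factor discussed in the review."""
    q = query_emb / np.linalg.norm(query_emb)
    D = demo_embs / np.linalg.norm(demo_embs, axis=1, keepdims=True)
    return np.argsort(-(D @ q))[:k]

# The query points along the first axis, so demos 0 and 2 rank highest.
top = select_demos(np.array([1.0, 0.0]),
                   np.array([[1.0, 0.0], [0.0, 1.0], [0.9, 0.1]]), 2)
```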
Fully Dynamic Euclidean Bi-Chromatic Matching in Sublinear Update Time | Accept (oral) | Summary: This paper studied geometric bi-chromatic matching (aka the optimal transportation problem) for discrete distributions in the dynamic setting, where points could undergo insertions and deletions. Here, we are given $n$ points inserted and deleted dynamically, and we want to always maintain an approximation to ... | Rebuttal 1:
Rebuttal: We would like to thank the reviewer for their time and for providing valuable comments.
- “The texts in Figures 2 and 3 are generally very small. My suggestion would be to have the same figures with better resolutions in the appendix and add a pointer.”
All figures will be replicated in the appe... | Summary: This papers presents a dynamic data structure that maintains a bipartite matching and supports insertions ans deletions of points. Given two point sets in $2$-dimensional space and a parameter $\varepsilon>0$, the data structure computes an $O(1/\varepsilon)$-approximate matching and handles updates in $O(n^{-... | Rebuttal 1:
Rebuttal: We would like to thank the reviewer for their time and for providing valuable comments. For the minor comments and typos not addressed below, we will incorporate them in the next version of the paper.
- “Missing the impact statement.”
We will include the following impact statement, and we apolog... | Summary: This paper gives an algorithm for dynamic bi-chromatic matching in Euclidean space with $O(\frac{1}{\varepsilon})$-approximation ratio and sublinear update time $O(\frac{n^\varepsilon}{\varepsilon})$ with theoretical guarantee and this algorithm is the frist sublinear update time algorithm for geometric dynami... | Rebuttal 1:
Rebuttal: We would like to thank the reviewer for their time and for providing valuable comments.
- "The contribution is not clear enough in relation to [Xu-Ding 2024]."
The Xu-Ding paper studies a slightly different problem: dynamic maintenance of optimal transport. Specifically, the goal is to design a ... | Summary: The paper studies the euclidean minimum cost bipartite matching problem: given n blue points and n red points in the 2d euclidean plane, we wish to compute a minimum cost bipartite matching between them, where the cost is measured in terms of euclidean distance. The novel component of the paper is to introduce... | Rebuttal 1:
Rebuttal: We would like to thank the reviewer for their time and for providing valuable comments.
- “Do the authors have an estimate for the constant in the $O(1/ \varepsilon)$ approximation. I understand that asymptotically it doesn’t matter but it would be nice to understand what the factor is for say $... | null | null | null | null | null | null |
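As a static reference for the problem this row dynamizes, the exact minimum-cost Euclidean bipartite (bi-chromatic) matching can be computed by brute force for small point sets. A minimal sketch under the assumption of equal-size red and blue sets; it is not the paper's sublinear dynamic algorithm:

```python
import itertools
import math

def min_cost_matching(red, blue):
    """Exact minimum-cost Euclidean bipartite matching between small,
    equal-size red and blue point sets, by brute force over all
    permutations (exponential; reference implementation only)."""
    n = len(red)
    best = math.inf
    for perm in itertools.permutations(range(n)):
        cost = sum(math.dist(red[i], blue[perm[i]]) for i in range(n))
        best = min(best, cost)
    return best
```

The dynamic data structure in the paper maintains an O(1/ε)-approximation of this quantity under point insertions and deletions.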
COSDA: Counterfactual-based Susceptibility Risk Framework for Open-Set Domain Adaptation | Accept (poster) | Summary: This paper establishes a novel causal-inspired theoretical framework for Open-Set Domain Adaption by exploring the susceptibility between two visual samples. Based on the theoretical analysis, the authors propose three components: the SRE for estimating the causal relativity; the CFA module to facilitate cross... | Rebuttal 1:
Rebuttal: **Dear Reviewer rce5,**
Thank you for your decision and constructive feedback. These detailed and professional comments have highly enlightened and encouraged us to make every effort to improve our work. We hope our responses could resolve the concerns.
>**Claims And Evidence**. What is the mea... | Summary: This paper introduces an adversarial adaptation framework called COSDA, which aims to address the challenges of unknown category recognition and domain drift in the open domain adaptation problem. The framework is based on causality theory and includes three novel components: (i) Susceptibility Risk Estimator ... | Rebuttal 1:
Rebuttal: **Dear Reviewer zv4D,**
Thank you for your decision and constructive feedback. We have studied the comments carefully and made thorough revisions. We also greatly appreciate your insightful questions and hope that our responses have helped to clarify them.
> **Weakness 1**: As reported in the ma... | Summary: This paper addresses the Open-Set Domain Adaptation problem which is useful in real-world applications. They propose a novel Counterfactual-based susceptibility risk framework, consists of Susceptibility Risk Estimator, Contrastive Feature Alignment, and Virtual Multi-unknown-categories Prototype. Experiments ... | Rebuttal 1:
Rebuttal: **Dear Reviewer pQYc,**
We sincerely appreciate your constructive comments on our work. We have carefully addressed each point raised and incorporated corresponding improvements. We hope our responses have adequately addressed the concerns.
>**Weaknesses 1 & Weaknesses 3**. More ablations on Vi... | Summary: This paper introduces COSDA, a novel causal-based Open-Set Domain Adaptation (OSDA) framework. It proposes Susceptibility Risk, a theoretical approach to measuring and mitigating the risk associated with domain shifts and unknown category recognition. Then, three core components are developed: Susceptibility R... | Rebuttal 1:
Rebuttal: **Dear Reviewer 6vAQ,**
Thank you for your decision and constructive feedback. We have studied the comments carefully and made thorough revisions. We hope that our responses have helped to clarify the concerns.
> **Experimental Designs Or Analyses & Q1**. whether the method can perform well on hard... | null | null | null | null | null | null |
Constrained Exploitability Descent: An Offline Reinforcement Learning Method for Finding Mixed-Strategy Nash Equilibrium | Accept (poster) | Summary: This paper proposes an offline RL method to solve mix-strategy Nash Equilibrium via a game-theoretic method, exploitability descent.
Claims And Evidence: Is best-iterate convergence better than average-iterate convergence? It would be good to get more detailed comments on this.
Methods And Evaluation Criteria: ... | Rebuttal 1:
Rebuttal: Thank you for reviewing this paper and providing valuable comments. We are updating the manuscript according to the comments from all reviewers. Here, we reply to your questions and concerns about the paper.
**[Claims And Evidence]**
Yes, average-iterate convergence means that we have to preserv... | Summary: The authors extend Exploitability Descent to the offline setting by applying a regularization constraint to minimize distance to the behavior policy. They provide theoretical guarantees for convergence under uniform concentration assumptions, and they provide experiments empirically validate their method, CED,... | Rebuttal 1:
Rebuttal: Thank you for reviewing this paper and providing valuable comments. We are updating the manuscript according to the comments from all reviewers. Here, we reply to your questions and concerns about the paper.
**[Questions For Authors]**
_**Question**_: A critical limitation to scaling up ED with ... | Summary: This paper introduces Constrained Exploitability Descent (CED), a novel model-free offline reinforcement learning algorithm for adversarial Markov games (MGs). The authors demonstrate, both theoretically and empirically, that, unlike in MDPs, an optimal policy can be learned under policy constraints in adversa... | Rebuttal 1:
Rebuttal: Thank you for reviewing this paper and providing valuable comments. We are updating the manuscript according to the comments from all reviewers. Here, we reply to your question and concern about the paper.
**[Questions For Authors]**
_**Question**_: Could you elaborate more on the novelty of the... | null | null | null | null | null | null | null | null |
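The exploitability that exploitability-descent methods drive down can be made concrete for the simplest case, a zero-sum matrix game. A minimal sketch with illustrative names; the paper works in the harder setting of Markov games with offline data:

```python
import numpy as np

def exploitability(A, x, y):
    """Exploitability of a mixed-strategy profile (x, y) in the zero-sum
    matrix game with row-player payoff matrix A: the total gain both
    players could obtain by best-responding. It is zero exactly at a
    Nash equilibrium."""
    value = x @ A @ y
    br_row = np.max(A @ y)   # row player's best-response payoff
    br_col = np.min(x @ A)   # column player best-responds by minimizing
    return (br_row - value) + (value - br_col)

A = np.array([[1.0, -1.0], [-1.0, 1.0]])  # matching pennies
uniform = np.array([0.5, 0.5])            # the unique mixed NE
```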
Non-asymptotic Error Bounds in $\mathcal{W}_2$-Distance with Sqrt(d) Dimension Dependence and First Order Convergence for Langevin Monte Carlo beyond Log-Concavity | Accept (poster) | Summary: When generating samples from a target distribution $\pi$ from a large
dimension $d$ -- including when the normalization constant is unknown -- one
often employs Langevin Monte Carlo (LMC). This method starts by constructing
a Langevin diffusion where its invariant distribution matches the desired
target distri... | Rebuttal 1:
Rebuttal: ## Response to Reviewer rDrb
Thank you for your valuable feedback on our paper. We are grateful for your thoughtful comments, which have guided us in refining the manuscript. Here, we address each of your questions in detail and highlight the changes made accordingly.
### About *Weakness*
> Thi... | Summary: This paper addresses the challenge of sampling from non-log-concave distributions, including those that satisfy a dissipativity condition or a log-Sobolev inequality. The authors approach this problem by discretizing the Langevin dynamics and establish a state-of-the-art convergence rate of d^{1/2}\varepsilon^... | Rebuttal 1:
Rebuttal: ## Response to Reviewer H65o
We sincerely appreciate your time and effort in reviewing our manuscript. Your insightful comments and constructive suggestions have greatly helped us improve the quality of our work. Below, we provide a point-by-point response to each of your concerns, along with the... | Summary: This paper establishes an almost optimal convergence rate of $\tilde{O}(\sqrt{d}/\epsilon)$ in $W_2$-distance for Langevin Monte Carlo (LMC) when the target measure satisfies the log-Sobolev inequality, along with dissipativity and smoothness conditions. The authors establish this result through a new discreti... | Rebuttal 1:
Rebuttal: ## Response to Reviewer cxoz
We really appreciate your carefully reading and insightful comments. We will respond to each comment below and revise the manuscript according to these suggestions.
### About *Weakness*
>1. There is not sufficient discussion on the implications of Assumption 2.2, …... | Summary: The authors derive a sampling error bound in Wasserstein-2 for a discrete time discretization of Langevin Monte Carlo. The bound contains two parts, an error term due to finite time truncation of the Langevin dynamics, and an error term due to discretization over a finite time horizon. The innovation of this w... | Rebuttal 1:
Rebuttal: ## Response to Reviewer DL9W
We sincerely thank the reviewer for constructive suggestions and comments. Next we address all comments point-by-point and will revise the manuscript to incorporate these suggestions.
### About *Summary*
> However, to carry out … Assumption 2.8 … which appears to ... | null | null | null | null | null | null |
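The discretization of Langevin dynamics that this row's convergence bounds concern is short to write down. A minimal sketch of the unadjusted Langevin algorithm on a toy Gaussian target (step size and chain counts are illustrative, not tuned to the paper's theory):

```python
import numpy as np

def lmc_sample(grad_log_pi, x0, step, n_steps, rng):
    """Unadjusted Langevin Monte Carlo: discretize the Langevin
    diffusion as x_{k+1} = x_k + step * grad_log_pi(x_k)
    + sqrt(2 * step) * N(0, I)."""
    x = np.array(x0, dtype=float)
    for _ in range(n_steps):
        noise = rng.standard_normal(x.shape)
        x = x + step * grad_log_pi(x) + np.sqrt(2.0 * step) * noise
    return x

# Toy target: standard Gaussian in d = 5 dimensions, grad log pi(x) = -x.
rng = np.random.default_rng(1)
samples = np.stack([lmc_sample(lambda x: -x, np.zeros(5), 0.05, 500, rng)
                    for _ in range(200)])
```

The paper's contribution is a non-asymptotic $\mathcal{W}_2$ error bound for this scheme with $\sqrt{d}$ dimension dependence beyond log-concavity, not the scheme itself.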
DEFAME: Dynamic Evidence-based FAct-checking with Multimodal Experts | Accept (poster) | Summary: The authors introduce DEFAME, an automated fact-checking framework designed to process multimodal claims using multimodal evidence. DEFAME operates within a zero-shot MLLM pipeline structured into six stages: action planning, action execution (via multimodal web retrieval and GeoClip tool use), result summariz... | Rebuttal 1:
Rebuttal: We thank the reviewer for the time invested in the review. Please find our response to your concerns below.
### First to handle multimodal claims and evidence
The work by Tahmasebi et al. (CIKM 2024), i.e. LVLM4F, was covered in our paper, most notably in the prior work overview (Table 1) and as a baseline i... | Summary: This paper tackles the challenge of scalable and explainable fact-checking in the presence of disinformation, particularly in multimodal contexts. The authors propose DEFAME, a modular, zero-shot multimodal large language model (MLLM) pipeline for open-domain claim verification. Unlike prior methods that are e... | Rebuttal 1:
Rebuttal: ### Claim on Novelty / Related Work
Thanks for pointing out SNIFFER, MMD-Agent (from the MMFakeBench paper), and LEMMA. Please refer to our response to Reviewer K7Ta.
### No Theoretical Claims
We believe that the absence of theoretical claims is not unusual for papers under “Application-Driven Machine...
Rebuttal: We value the time invested in the review and thank you for the feedback and suggestions! Please find our response to your concerns below.
### Comparison to Prior Work
Thanks for pointing out **SNIFFER**, a related work that was unintentionally omitted. Critically, unlike DEFAME, SNIFFER is **incapabl... | Summary: This paper introduces DEFAME, a multimodal pipeline for open-domain text-image claim verification. DEFAME operates as a six-stage process that handles both multimodal claims and evidence while generating structured reports. It dynamically selects appropriate tools to extract and evaluate both textual and visua... | Rebuttal 1:
Rebuttal: We thank the reviewer for highlighting that the “paper has great potential” and are grateful for pointing out the usefulness of the information provided in the appendix. We appreciate the time invested in the review and are happy to address the concerns in the following.
### Summarize, Develop, and Justify ... | null | null | null | null | null | null |
Towards Efficient and Scalable Implementation of Differentially Private Deep Learning | Reject | Summary: This focus of the paper is on computational efficiency of implementing Differentially Private Stochastic Gradient Descent (DP-SGD), which is commonly used for private ML model training. In particular, the paper focuses on
1. _Poisson subsampling for generating batches_: while typical implementation have used s... | Rebuttal 1:
Rebuttal: Thanks for your time and your insightful review.
> Figure 5:
Thanks for the comments! That is correct: the ratio of the throughput is TF32/FP32, and this should be clarified in the figure caption. It will be updated for the camera-ready version.
> One thing that was not clear to me in Masked DP-S... | Summary: The paper provides a comprehensive empirical study of Differentially Private Stochastic Gradient Descent (DP-SGD) implementations that properly incorporate Poisson subsampling, which is crucial for maintaining theoretical privacy guarantees. Recent research has demonstrated that many implementations ignore the... | Rebuttal 1:
Rebuttal: Thanks for your time and your careful review.
We respectfully disagree with your statement that “While it's a very solid technical work and important for practitioners, the paper lacks scientific novelty that would be expected at ICML.” because the call for papers of ICML mentions as topic of int... | Summary: This paper investigates the computational efficiency of DP-SGD, with a focus on analyzing the computational cost of using Poisson subsampling for DP training, and comparing a series of DPSGD schemes. To reduce computational costs, the author proposed Masked DPSGD algorithm by addressing the frequent recompilat... | Rebuttal 1:
Rebuttal: Thanks for your time and your thorough review.
> What is the privacy guarantee of your proposed algorithm? How can you prove it?
The DP guarantees of the **Masked DP-SGD** follow from the standard analysis of Poisson subsampled DP-SGD in the add/remove adjacency. Note that the only difference we... | null | null | null | null | null | null | null | null |
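The Poisson subsampling discussed throughout this row differs from the common fixed-size shuffled batching in a way that is easy to show in code. A minimal sketch (function name is hypothetical; this is the standard sampling scheme, not the paper's Masked DP-SGD implementation):

```python
import numpy as np

def poisson_subsample(n_examples, expected_batch_size, rng):
    """Poisson subsampling as assumed by the DP-SGD privacy analysis:
    each example joins the batch independently with probability
    q = expected_batch_size / n_examples, so the realized batch size
    varies from step to step."""
    q = expected_batch_size / n_examples
    return np.nonzero(rng.random(n_examples) < q)[0]

rng = np.random.default_rng(0)
batch = poisson_subsample(10_000, 128, rng)  # size fluctuates around 128
```

The variable batch size is exactly what causes the recompilation overhead the paper's masked variant addresses.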
Model Immunization from a Condition Number Perspective | Accept (oral) | Summary: This paper proposes to achieve model immunization, that is, pretraining a model that is hard to fine-tune on some harmful tasks while preserving the performance of other tasks, by maximizing the condition number of the corresponding harmful fine-tuning objective so that the convergence is slow and numerically ... | Rebuttal 1:
Rebuttal: We thank the reviewer for the thoughtful and encouraging feedback. We are glad the reviewer found the theoretical formulation sound, the proposed regularizers technically solid, and the empirical results convincing. We address the specific concerns and questions below.
> Q12. Manipulating the con... | Summary: The paper reframes the immunization task, i.e., make models robust against finetuning on specific task, (Contribution 1: Section 3) from a novel perspective using condition number.
Through this insight, the authors propose a regularization method at the pretraining stage that makes finetuning for specific ta... | Rebuttal 1:
Rebuttal: We thank the reviewer for the thoughtful review and respond to questions below.
> Q5. Generative task
We consider the immunization against linear probing (Sec. 3). For generative tasks, linear probing is not commonly used for transfer learning. Instead, other techniques, e.g., LoRA, are more com... | Summary: This paper proposed a framework for studying model immunization, i.e., the task to make fine-tuning on harmful datasets harder. The authors proposed that for linear models, the difficulty of fine-tuning can be characterized by the condition number of the Hessian matrix. Based on this theory, the authors propos... | Rebuttal 1:
Rebuttal: We thank the reviewer for the positive and constructive review. We answer individual questions below.
> Q1. Justification for task and dataset selection
Our experiments consist of two settings: (a) a linear model setup, where the setting matches our proposed theoretical framework, and (b) a deep... | null | null | null | null | null | null | null | null |
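The condition-number quantity this row's immunization framework manipulates is concrete for the linear-model setting the theory covers. A minimal sketch, assuming a least-squares probing objective (names are illustrative):

```python
import numpy as np

def ls_condition_number(X):
    """Condition number lambda_max / lambda_min of the least-squares
    Hessian X^T X. Gradient-based fine-tuning on this task converges
    at a rate governed by the ratio, which is the lever the
    immunization regularizer pushes on for the harmful task."""
    eigvals = np.linalg.eigvalsh(X.T @ X)  # ascending order
    return eigvals[-1] / eigvals[0]

rng = np.random.default_rng(0)
kappa = ls_condition_number(rng.standard_normal((100, 5)))
```

A larger `kappa` means slower, more ill-conditioned fine-tuning on the corresponding task.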
High-Dimensional Tensor Regression With Oracle Properties | Accept (poster) | Summary: The paper introduces a high-dimensional tensor-response tensor regression model under low-dimensional structural assumptions, such as sparsity and low-rankness.
The authors propose a least squares estimation framework with non-convex penalties and derive general risk bounds for the resulting estimators.
Th... | Rebuttal 1:
Rebuttal: **Broader Scientific Literature**
Thank you for introducing these relevant references. They are very helpful, and we will cite them appropriately in the final version.
**Response to Claims and Evidence**
1. To the best of our knowledge, the proposed bound is tight in terms of its order, with ce... | Summary: This paper addresses tensor regression models for high-dimensional tensor data. Specifically, it proposes a tensor-response tensor regression model, assuming low-dimensional structures such as sparsity and low rankness. While conventional convex penalties are easy to optimize, they often fail to model the data... | Rebuttal 1:
Rebuttal: Thank you for your positive feedback.
**Hyperparameters for the Convex Penalties**
All hyperparameters are selected via ten-fold cross-validation, as noted in the first paragraph of Section 5. This includes the regularization parameter $\lambda$ used in convex penalties. Specifically, $\lambda... | Summary: This paper studies tensor regression models with non-convex penalties and provides general risk bounds for the resulting estimators, demonstrating that they achieve oracle statistical rates under mild technical conditions. The authors also introduce an accelerated proximal gradient algorithm to estimate the pr... | Rebuttal 1:
Rebuttal: **More discussions on decomposition-based tensor regression methods**
We appreciate the comment regarding tensor regression methods based on tensor decomposition techniques such as CP and Tucker. Both decomposition-based and regularization-based approaches have been actively explored in the tenso... | Summary: This paper proposes a framework for tensor on tensor regression, introducing novel nonconvex regularizers and an accelerated proximal gradient algorithm for estimation. The authors propose a class of penalties that depend on the singular values of each tensor dimension and give Frobenius norm rates of converge... | Rebuttal 1:
Rebuttal: We sincerely appreciate your positive comments and insightful feedback. We have carefully addressed your concerns as follows:
**1. Justification for Claim 2**
In this paper, we focus on nonconvex sparse learning in tensor regression. Nonconvex regularizers such as SCAD and MCP can achieve the or... | null | null | null | null | null | null |
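The proximal gradient template underlying this row's estimator can be sketched with the simplest penalty, the L1 norm, whose prox is soft-thresholding. This is an illustrative stand-in: the paper uses an accelerated variant with nonconvex SCAD/MCP penalties, whose prox operators replace `soft_threshold` below:

```python
import numpy as np

def soft_threshold(v, t):
    """Soft-thresholding: the proximal operator of t * ||.||_1."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def lasso_prox_grad(b, lam, step=0.5, n_iters=200):
    """Proximal gradient descent on 0.5 * ||x - b||^2 + lam * ||x||_1:
    a gradient step on the smooth loss, then the penalty's prox."""
    x = np.zeros_like(b)
    for _ in range(n_iters):
        x = soft_threshold(x - step * (x - b), step * lam)
    return x
```

For this toy objective the minimizer has the closed form `soft_threshold(b, lam)`, which the iteration recovers.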
SIMPLEMIX: Frustratingly Simple Mixing of Off- and On-policy Data in Language Model Preference Learning | Accept (poster) | Summary: Authors' first claim that on-policy data are most effective on tasks with objective answers, while off-policy data are most effective on open-ended tasks like creative writing. Analysis on AlpacaEval supports the claim. Then, authors propose a simple method of mixing on-policy and off-policy: just sample from ... | Rebuttal 1:
Rebuttal: We sincerely thank the reviewer for the insightful comments and suggestions. We appreciate that the reviewer finds our work simple and has a strong potential to be adopted in practice. We are also encouraged that the reviewer finds our paper “clearly written”.
> It could've been nicer if it was d... | Summary: This paper studies the effect of mixing on-policy and off-policy data when fine-tuning language models using direct preference optimization (DPO). They observe that on-policy data (data generated by the current policy) tends to work better for tasks with clear correct answers, like math and coding, whereas off... | Rebuttal 1:
Rebuttal: We thank the reviewer for the thoughtful review and we would like to extend our gratitude for the reviewer finding our method straightforward and our experiment results clear. We hope to answer the questions and resolve the concerns of the reviewers below:
> The idea itself isn’t very novel—just ... | Summary: The paper investigates the interplay between on-policy and off-policy preference data in aligning large language models (LLMs) with human preferences. It presents the key finding that on-policy data is more effective for reasoning tasks (e.g., math, coding), whereas off-policy data performs better in open-ende... | Rebuttal 1:
Rebuttal: We sincerely thank the reviewer for the thoughtful comments and suggestions. We are thankful that the reviewer finds our method “straightforward but effective” and our results “demonstrate clear trends”. We hope to resolve the concerns below:
> Benchmarks like Alpaca Eval 2.0 and Ifeval involve L... | null | null | null | null | null | null | null | null |
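The "frustratingly simple" mixing idea this row describes, sampling each training example from the on- or off-policy pool with a fixed probability, can be sketched in a few lines. A hypothetical illustration of the mixing scheme, not the paper's training code:

```python
import random

def simple_mix(on_policy, off_policy, p_on, n, rng):
    """Naive data mixing: for each of n draws, take an on-policy
    example with probability p_on, otherwise an off-policy one."""
    batch = []
    for _ in range(n):
        pool = on_policy if rng.random() < p_on else off_policy
        batch.append(rng.choice(pool))
    return batch

rng = random.Random(0)
mixed = simple_mix(["on1", "on2"], ["off1", "off2"], 0.5, 10, rng)
```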
PASER: Post-Training Data Selection for Efficient Pruned Large Language Model Recovery | Reject | Summary: This paper proposes PASER, a post-training data selection method for efficient pruned model recovery. PASER involves (i) Semantic-Structural Recovery Instruction Clustering to identify and group data points that focus on similar capabilities. (ii) Capability Degradation-aware Instruction Selection to enable mo... | Rebuttal 1:
Rebuttal: **W1:** In fact, the paper presents empirical evidence supporting this statement in multiple places: 1) In Table 1 and throughout our experiments, we demonstrate that using general-purpose instruction tuning data selection methods-which focus on selecting "high-quality" instruction data in general... | Summary: This paper proposes PASER for efficient recovery of pruned large language models (i.e., fine-tuning pruned large language models to recover their performance). It uses SentenceBERT to embed data, a diffusion kernel to reduce dimensions, and then applies non-negative matrix factorization-based spectral clusteri... | Rebuttal 1:
Rebuttal: **C1: Theoretical Claims**
**A1:** In our time complexity analysis (Section 3.5, page 5), we considered the following:
1. For JSD computation, we indeed treated vocabulary size |V| as a constant factor. The practical vocabulary size (typically 32K-100K tokens) remains fixed regardless of instruc... | Summary: This paper proposes a data selection method for effectively recovering model performance after pruning. Beyond the efficiency argument, the need for such a method is well justified through experimental results, where the authors show that simply training on the full dataset or randomly selected data not only p... | Rebuttal 1:
Rebuttal: **C1: Relation To Broader Scientific Literature**
**A1:** In the revised version, we will add relevant foundational references for the manifold learning techniques applied in our work, including:
1. For manifold learning in high-dimensional spaces: Belkin & Niyogi (2003) "Laplacian Eigenmaps for ... | Summary: This paper proposes a data selection method for recovery fine-tuning: additional training for pruned LLMs to recover the original performance as well as possible. The method consists of three subcomponents: (1) clustering instruction data, (2) assign data budget for each cluster according to the probabilistic ... | Rebuttal 1:
Rebuttal: **C1: Claims And Evidence**
**A1:** While PASER lacks strong theoretical guarantees, our approach is guided by sound principles: targeting severely impaired capabilities is intuitively more efficient than general data selection methods not designed for recovery. The difficulty in establishing th... | Summary: The paper introduces PASER, a novel method for selecting instruction tuning data to recover the performance of pruned large language models (LLMs). Pruning, particularly structured pruning, often degrades model capabilities, and instruction tuning has shown promise for efficient recovery. PASER comprises three... | Rebuttal 1:
Rebuttal: **C1: Claims and Evidence**
**A1:** *Time complexity validation*: In fact, we have provided the empirical runtime data in Figure 2 (Page 8) and compared the efficiency of our PASER with baselines. In practice, for selecting 20% from Alpaca (52K samples) and 4% from LaMini (2.58M samples), the dat... | null | null | null | null |
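The JSD computation mentioned in the rebuttal's complexity discussion is the standard Jensen-Shannon divergence over output distributions. A minimal stdlib sketch (how PASER applies it to model outputs is described in the paper, not here):

```python
import math

def js_divergence(p, q):
    """Jensen-Shannon divergence between two discrete distributions:
    the average KL of each distribution to their mixture. Symmetric,
    bounded by log(2), and zero iff p == q."""
    m = [(pi + qi) / 2 for pi, qi in zip(p, q)]
    def kl(a, b):
        return sum(ai * math.log(ai / bi) for ai, bi in zip(a, b) if ai > 0)
    return 0.5 * kl(p, m) + 0.5 * kl(q, m)
```

With a fixed vocabulary size, each evaluation is O(|V|), which is why the rebuttal treats |V| as a constant factor.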
FourierMamba: Fourier Learning Integration with State Space Models for Image Deraining | Accept (poster) | Summary: The authors propose a novel framework, FourierMamba, which integrates Fourier priors with a state-space model to associate different frequencies in the Fourier domain for image deraining.
## update after rebuttal
The authors' second-round response has addressed most of my concerns. I now find the motivation o... | Rebuttal 1:
Rebuttal: **R1**: Clarification on Motivation
Motivation: Mamba, a state-space model, offers global modeling and linear complexity, making it ideal for sequential data. Frequency information in the Fourier domain is inherently global, and Mamba’s sequence modeling efficiently captures inter-frequency depen... | Summary: The paper applies Mamba to dual-domain spatial dimensions and channels for image deraining. For spatial-dimensional Fourier processing, the authors introduce a new scanning method that better models the correlations between different frequencies. The constructed network is evaluated on multiple image restorati... | Rebuttal 1:
Rebuttal: **R1**: Response to “Insufficient Novelty”
The distinctions between our method and FreqMamba are detailed in Appendix A.9.
Here, we further clarify the novelty:
**Spatial-Domain Zigzag Scanning**: Unlike FreqMamba’s simple wavelet-domain scanning, FourierMamba’s zigzag scanning, inspired by JP... | Summary: This paper proposes FourierMamba that addresses the problem of single image deraining by introducing a scanning encoding mechanism that correlates different frequencies in both spatial and channel dimensions. Specifically, it employs zigzag coding in the spatial dimension to reorganize frequency orders and imp... | Rebuttal 1:
Rebuttal: **R1**: Details of Zigzag Scanning Implementation and Its Impact on Inference Speed
Section 3.2 briefly outlines the motivation and design of zigzag scanning, inspired by JPEG’s zigzag encoding, to order frequencies in the Fourier domain from low to high. For the spatial Fourier spectrum, it is d... | Summary: This paper introduces FourierMamba, a novel framework for image deraining that leverages the Mamba technique within the Fourier space to effectively correlate different frequency components. Unlike existing Fourier-based methods that fail to fully exploit the dependencies between low and high frequencies, Four... | Rebuttal 1:
Rebuttal: **R1**: The Necessity of Channel-Dimensional Fourier Transform
In Appendix A.6 of the paper, titled "Reasons for Using Channel-Dimensional Fourier," we note that different channels typically exhibit distinct degradation characteristics, which collectively determine the global information of an im... | null | null | null | null | null | null |
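The JPEG-inspired zigzag scanning that this row credits for ordering Fourier frequencies from low to high is a classic traversal. A minimal sketch of the ordering itself (how FourierMamba feeds the resulting sequence to its state-space blocks is in the paper):

```python
def zigzag_order(n):
    """JPEG-style zigzag traversal of an n x n grid: visit
    anti-diagonals in order, alternating direction, so 2-D indices
    are flattened roughly from low to high frequency."""
    order = []
    for s in range(2 * n - 1):
        diag = [(i, s - i) for i in range(n) if 0 <= s - i < n]
        order.extend(diag if s % 2 else diag[::-1])
    return order
```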
Counting atoms faster: policy-based nuclear magnetic resonance pulse sequencing for atomic abundance measurement | Accept (poster) | Summary: The paper uses reinforcement learning (Proximal Policy Optimization algorithm) to learn policies to modulate NMR pulses for rapid atomic abundance quantification. The authors developed three interacting agents to (1) align nuclear spins for measurement, (2) facilitate rapid relaxation to equilibrium, and (3) c... | Rebuttal 1:
Rebuttal: We thank the reviewer for their comments and feedback. Our responses are:
1. **Regarding Claims and Methods.** The reviewer comments that the simulated approach is not satisfying. There is a precedent for the positive impact of simulated results in this field. Please refer to bullet point 4 in ou... | Summary: The paper presents another novel approach to performing NMR studies using reinforcement learning to modulate pulses. The authors study three different formulations: standard control of NMR pulse for spin alignment, relaxation of spins towards equilibrium, and allowing an agent to determine when relaxation (spo... | Rebuttal 1:
Rebuttal: We thank the reviewer for their comments and feedback. Our responses are:
1. **Regarding Weaknesses** and the inclusion of additional real-world equipment deficiencies. We agree that the inclusion of thermal instrument noise is only part of the picture. A variety of limitations are worth modeling... | Summary: Counting atoms matters because it allows us to monitor and control the elemental makeup of materials, which is important for environmental sustainability applications. This paper presents a method that uses RL to optimise nuclear magnetic resonance (NMR) pulse sequences for faster and cheaper atomic abundance ... | Rebuttal 1:
Rebuttal: We thank the reviewer for their comments and feedback. Our responses are:
1. **Regarding Weakness 1:** lack of a baseline. In fact, we do use the standard 1D NMR pulse as a baseline, and we compare against it in all experiments. The baseline we use is the simplest and most common approach to anal... | Summary: The paper applies reinforcement learning to NMR data, to infer the elemental composition of a sample. They learn optimal policies to modulate NMR pulses for elemental abundance quantification. They present promising results on simulated data.
## update after rebuttal
I have raised my score.
Claims And Evide... | Rebuttal 1:
Rebuttal: We thank the reviewer for their comments and feedback. Our responses are:
1. **Regarding Weakness 1.** The reviewer states that the paper only presents application to simulated data. This is not totally accurate, for 2 reasons. **(I)** Firstly, in Experiment 1, the spin sets used in both training... | null | null | null | null | null | null |
Balancing Interference and Correlation in Spatial Experimental Designs: A Causal Graph Cut Approach | Accept (poster) | Summary: This paper studies optimal cluster-randomized designs for spatial A/B testing problems. The authors investigate the decomposition of the mean squared error (MSE) and show that interference and correlation are two key driving factors, contributing in opposite directions: when interference is strong, it is optim... | Rebuttal 1:
Rebuttal: >Applications to other data examples
We sincerely appreciate your valuable feedback. In response, we conducted extensive investigations during the rebuttal, including: (1) empirical validation on additional data, (2) methodological extensions, (3) theoretical investigations. The methodological i... | Summary: This paper proposes a graph cut approach for design of spatial experiments to estimate Average Treatment Effect (ATE) in settings with both interference where SUTVA assumption does not hold as well as when there exists correlation between units. The authors present a method which builds flexible surrogate func... | Rebuttal 1:
Rebuttal: We sincerely appreciate your thoughtful review and valuable feedback on our work. Your positive assessment is particularly encouraging to us. Below, we provide detailed responses to each of your comments.
> **Real data analyses**
While we do have a real-world dataset, it cannot be directly used ... | Summary: The paper proposes a new method for experimental design in settings with repeated experiments, spatial interference and error correlation. The authors characterize the MSE of the doubly-robust average treatment effect estimator, and propose an upper bound objective function that depends on estimable quantities... | Rebuttal 1:
Rebuttal: We greatly appreciate your constructive comments and your positive assessment of our paper. We focus on your major comments below.
>Comparison with Viviano et al. (2023)
**Difference in objective**: One of the key difference lies in the choice of the estimator. Specifically, our estimator explic... | Summary: This work addresses the problem of experimental design under interference. The authors assume that the interference structure can be accounted for by a covariance matrix which is proposed to be estimated from data via repeated experimentation. The key insight of this work is a decomposition of the global avera... | Rebuttal 1:
Rebuttal: > Methods And Evaluation Criteria
We appreciate your thoughtful comments, which are mainly about our settings with repeated and independent experiments. We believe your concerns may arise from certain misunderstandings of our paper, and we appreciate the opportunity to clarify them. We address th... | null | null | null | null | null | null
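To make the interference side of the trade-off discussed above concrete, here is a toy sketch (our own illustration, not the paper's algorithm): given pairwise interference weights between spatial units, the balanced two-cluster design minimizing the weight crossing the cut keeps strongly interfering units inside the same treatment cluster. The paper additionally balances this cut objective against the correlation term in its MSE decomposition, which this brute-force sketch omits.

```python
from itertools import combinations

def min_balanced_cut(nodes, weights):
    """Brute-force the balanced 2-partition of `nodes` that minimizes the
    total interference weight crossing the cut.  `weights` maps
    frozenset({u, v}) to the interference strength between units u and v."""
    best_cut, best_group = float("inf"), None
    for group in combinations(nodes, len(nodes) // 2):
        a = set(group)
        # An edge is cut when exactly one endpoint lies in cluster `a`.
        cut = sum(w for edge, w in weights.items() if len(edge & a) == 1)
        if cut < best_cut:
            best_cut, best_group = cut, a
    return best_cut, best_group

# Four units on a line: strong interference inside pairs (0,1) and (2,3).
w = {frozenset({0, 1}): 1.0, frozenset({1, 2}): 0.1, frozenset({2, 3}): 1.0}
cut, cluster = min_balanced_cut([0, 1, 2, 3], w)  # keeps each strong pair together
```

Here the minimizing design puts units 0 and 1 in one cluster and 2 and 3 in the other, so only the weak middle edge (weight 0.1) is cut; with strong error correlation, the paper's objective would instead push toward finer clusters.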