title string | paper_decision string | review_1 string | rebuttals_1 string | review_2 string | rebuttals_2 string | review_3 string | rebuttals_3 string | review_4 string | rebuttals_4 string | global_rebuttals string | dataset_source string | conference_year int64 | review_5 string | rebuttals_5 string | review_6 string | rebuttals_6 string | review_7 string | rebuttals_7 string | review_8 string | rebuttals_8 string |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
Temporally Consistent Atmospheric Turbulence Mitigation with Neural Representations | Accept (poster) | Summary: The study focuses on Video Atmospheric Turbulence Mitigation (ATM), which aims to restore videos that are affected by distortions caused by atmospheric turbulence. Specifically, the proposed ConVRT introduces a neural video representation that decouples spatial and temporal information, allowing targeted regul... | Rebuttal 1:
Rebuttal: **`R4-Q1`**: **Capability of handling longer video sequences and significant motion**.
Our method effectively handles videos with significant motion. We've included additional experimental results and a detailed analysis in our global response to **`shared question A`**. This demonstrates our app... | Summary: This paper proposes a method for improving the temporal consistency of turbulence-affected videos. The proposed method uses neural representations (MLP layers) to separately model the spatial and temporal deformations caused by air turbulence and is able to improve the temporal consistency of restoration resul... | Rebuttal 1:
Rebuttal: **`R3-weakness 1`**: **Forward Model of Turbulence.**
Indeed, mitigating turbulence requires overcoming color aberrations, blurriness, deformations, and other effects. Fortunately, existing techniques have successfully addressed many of these challenges, and arguably the largest limitation of e... | Summary: This paper introduced ConVRT, a novel method for video atmospheric turbulence mitigation.
This paper has a good structure and is well-written.
Strengths: This paper proposed a new method to deal with turbulence mitigation.
Weaknesses: 1. limited real-world case visualization
2. limited proof of algorithm ef... | Rebuttal 1:
Rebuttal: **`R2-Q1 & R2-Q4`**: **Capability of handling moving objects in more cases**
Yes, our method can handle moving objects. More than half of the cases in our main paper and supplementary materials are dynamic videos. This is evident as the lines in X-t Slice or Y-t Slice are not perfectly vertical or horizont... | Summary: This paper presents an implicit neural representation (INR) framework for taking a pre-trained supervised video atmospheric turbulence mitigation (ATM) model and regularizing its output to be more temporally consistent. The main components are (1) an INR called the temporal deformation field; and (2) a subsequ... | Rebuttal 1:
Rebuttal: **`R1-Q1 / R1 - weakness 1`** : **Visualization of Canonical Spatial Field and Representation Field Design**
The canonical spatial field C serves as a base spatial representation, containing all spatial content of the video. We can obtain a canonical image by deriving it from this field without a... | Rebuttal 1:
Rebuttal: **`Shared Question A`** : **How does the proposed method handle large object motion?**
Since our method relies on moderating the temporal regularity of motion in the video, it is natural to ask whether it can distinguish between large object motion and turbulence motion. We provided additional e... | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
Aligning Large Language Models with Representation Editing: A Control Perspective | Accept (poster) | Summary: In their paper, the authors introduce RE-CONTROL, a novel approach designed to align Large Language Models (LLMs) through representation editing. They view LLMs as discrete-time stochastic dynamical systems and propose the insertion of control signals into the internal representations. This technique allows fo... | Rebuttal 1:
Rebuttal: **Q1: Some parts of the paper are confusing, especially certain expressions. For example, they did not clarify some notations like a_t, V_{phi} etc. The legend in figure 1 seems mismatched. And some figures are not mentioned in the paper.**
A: $a_t$ is a typo; we meant $u_t$, which is the contro... | Summary: The paper suggests editing language model features for alignment tasks. The authors first learn a value function of a language model from a human-preference dataset. They then increment feature representations in model layers to maximize test-time utility. Empirical evidence shows that this feature editing met... | Rebuttal 1:
Rebuttal: **Q1. First, a compute-performance tradeoff analysis would clarify the behavior of RE-CONTROL. RE-CONTROL is more compute-intensive than other test-time decoding alternatives because it requires gradient ascent steps at decoding time (Section 4.4). These steps add up and can become quite intensive... | Summary: The paper introduces an alternative procedure for LLM alignment that does not fine-tune LLM weights, but instead learns a separate value function that is used to update hidden states. The value function is learned using a variation of temporal difference, then applied at inference time to modify hidden states ... | Rebuttal 1:
Rebuttal: **Q1a: Choice of baselines**
A: **We have compared our work with ARGS [26]**. Both [26] and [39] are controlled decoding methods. Specifically, [26] directly uses a pre-trained reward model, while [39] further trains a value function that can predict the reward from partial responses. In our pape... | Summary: The paper "Aligning Large Language Models with Representation Editing: A Control Perspective" proposes a method for aligning large language models (LLMs) with human objectives through representation editing. Unlike fine-tuning, which is resource-intensive and unstable, or test-time alignment techniques like pr... | Rebuttal 1:
Rebuttal: **Q1: Complexity: The method involves sophisticated control theory and optimization techniques, which might be challenging to implement and understand for practitioners without a strong background in these areas.**
A: Though our work is theoretically grounded, the implementation of our method is ... | Rebuttal 1:
Rebuttal: We thank the reviewers for their valuable feedback and the time they spent on our manuscript. We would like to highlight that all reviewers agree that RE-control is an innovative approach and that viewing LLM as a dynamical system is novel. Additionally, all reviewers have noted that RE-control is... | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
Exogenous Matching: Learning Good Proposals for Tractable Counterfactual Estimation | Accept (poster) | Summary: The manuscript introduces exogenous matching, an importance sampling method for efficient estimation of counterfactual expressions in various settings. This method transforms the variance minimization problem into a conditional distribution learning problem, allowing integration with existing modeling approach... | Rebuttal 1:
Rebuttal: We thank the reviewer for the patience and valuable feedback. We acknowledge that, due to page limitations, we had to move some material to the appendix, which may have affected the clarity of the manuscript. We understand that this may have made the paper appear somewhat disorganized. To address ... | Summary: This paper introduces an importance sampling method for efficient estimation of counterfactual expressions within general settings. It transforms the variance minimization problem into a conditional distribution learning issue, allowing integration with existing modeling approaches. The paper also explores the... | Rebuttal 1:
Rebuttal: We thank the reviewer for highlighting the issues and providing valuable feedback to help us improve the manuscript. Below are our responses to the identified weaknesses and questions:
> **Weakness 1**: Contributions are not disentangled well. All three points involve experimental or empirical fi... | Summary: This paper presents Exogenous Matching (EXOM), a new importance sampling method for estimating counterfactual probabilities in Structural Causal Models (SCMs). EXOM transforms variance minimization into a conditional distribution learning problem, providing an upper bound on counterfactual estimator variance a... | Rebuttal 1:
Rebuttal: We thank the reviewer's patience in reading our manuscript and the attention to issues concerning assumptions, generalization, scalability, and experimental aspects. Below are our responses to these concerns:
> **Weakness 1**: Could the authors elaborate on the assumption about density ratio in T... | Summary: Based on the importance sampling methods, the authors propose an exogenous matching approach to estimate counterfactual probability in general settings. They derive the variance upper bound of counterfactual estimators and transform it into the conditional learning problem. They also employ the Markov boundari... | Rebuttal 1:
Rebuttal: We thank the reviewer’s positive evaluation and valuable feedback. Below are our responses to the questions raised in the Weaknesses section:
> **Weakness 1**: Regarding assumption ii), if the proposed method would be sensitive to the specified distribution $P_\mathbf{U}$ of $\mathbf{U}$?
**Resp... | Rebuttal 1:
Rebuttal: We thank all the reviewers for their valuable feedback, which will help us improve the manuscript. Here, we summarize and address some common concerns, then list the changes to be made in the next updated manuscript.
### **Discussion on Assumptions**
- **Assumptions Required for Exogenous Match... | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
$SE(3)$ Equivariant Ray Embeddings for Implicit Multi-View Depth Estimation | Accept (poster) | Summary: This paper introduces an SE(3)-equivariant multi-view depth estimation model based on the Perceiver IO framework. Specifically, each feature ray is treated as a token, and the feature vector of each ray is concatenated with an equivariant positional embedding. To achieve equivariance, the authors propose using... | Rebuttal 1:
Rebuttal: *W1. The authors introduced a new equivariant nonlinearity inspired by [4], but the motivation and benefits are not clearly demonstrated. What is the distinctive advantage of this new nonlinearity, compared to existing SE(3)-equivariant nonlinearities?*
Our nonlinearity, unlike norm and Gate nonl... | Summary: This paper introduces a ray embedding representation with rotational and translational equivariance, integrating the existing Perceiver IO architecture to achieve robust multi-view implicit depth estimation. The paper first utilizes the mean shift and spherical harmonics to achieve translational equivariance, ... | Rebuttal 1:
Rebuttal: *W1. Since the equivariance consists of two parts, namely translation and rotation, what would be the quantitative impact of removing these two parts respectively?*
Thank you for the valuable suggestion. We conducted an ablation study where we individually integrated only rotation equivariance an... | Summary: This paper presents a SE(3) rotational and translational equivariant variation of Perceive IO for multi-view depth estimation with known camera poses. The authors first encode both the pixel-wise ray direction and the camera translation using spherical harmonics as the position encoding, and then to maintain e... | Rebuttal 1:
Rebuttal: *W1. Many important details from Sections 3.4 to 3.6 are placed in the appendix, making the main paper not self-contained.*
We appreciate and thank the reviewer for the valuable feedback. We decided to organize the paper this way not only due to limited space but also because we want readers to ... | null | null | Rebuttal 1:
Rebuttal: We sincerely thank the reviewers for the valuable comments and positive feedback regarding our submission.
As mentioned by reviewer **xRYu**, we are the “first to address SE(3)-equivariance in the transformer-based Perceiver IO architecture for multi-view applications.” They also praise the sign... | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
Diffusion-based Curriculum Reinforcement Learning | Accept (poster) | Summary: The paper presents an intuitive way to apply curriculum learning using diffusion based models to learn a goal distribution that can interpolate between the state-visitation distribution to states with high-value and high-intrinsic reward. As a result, the curriculum generates goals that lie at the edge of the ... | Rebuttal 1:
Rebuttal: **W1**: The introduction dumps too much related work together [...]
We will restructure the introduction to separate and clarify the related work.
**W2**: [...] Why can't another method (like a VAE, or GAN) do this through modelling the state-visitation distribution? [...]
We acknowledge that o... | Summary: This paper studies curriculum reinforcement learning (RL) in the context of multi-goal RL, which aims to generate a series of goals with increasing difficulty to facilitate guiding learning policies. To this end, the paper proposes a framework that employs a conditional diffusion model that learns to generate ... | Rebuttal 1:
Rebuttal: **W1**: The first paragraph of the introduction is unnecessarily long [...]
We will restructure the introduction to ensure it is more concise and better organized, which we believe will significantly improve the clarity and readability of the paper.
**W2**: While the related work section describ... | Summary: This work presents a novel diffusion model-based curriculum learning approach, called DiCURL, for multi-goal reinforcement learning, namely goal-conditioned RL. The proposed conditional diffusion model leverages a Q-function and a learned reward function based on the Adversarial Intrinsic Motivation principle ... | Rebuttal 1:
Rebuttal: **W1**: The introduction section should be improved in terms of writing [...]
We agree with the reviewer's suggestions: we will restructure the introduction to ensure it is more concise and better organized.
**W2**: [...], the paper does not demonstrate the curricula generated by OUTPACE [...]
... | Summary: This work introduces DiCuRL, a novel approach that uses diffusion models to generate curriculum goals for reinforcement learning agents. The method trains a model to capture the distribution of visited states, focusing on those with higher Q-values and intrinsic motivation rewards (i.e., AIM rewards). This app... | Rebuttal 1:
Rebuttal: **W1**: The approach is quite complicated [...]
We carried out additional experiments on two robot manipulation tasks, please see the **General comment** and its **attached PDF** (Fig. 16) for more details.
**W2**: They missed citing a rich literature [...]
We will carefully review these papers ... | Rebuttal 1:
Rebuttal: ### **General Comment**
We sincerely thank all reviewers for the time and effort devoted to reviewing our manuscript. To address the key points raised, we have provided detailed responses to each reviewer. All responses are organized into questions and weaknesses. For example, **Q1** refers to th... | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
Generative Modeling of Molecular Dynamics Trajectories | Accept (poster) | Summary: The paper suggests a flow-based generative framework on molecular trajectories, with various downstream tasks such as forward simulation and transition path sampling. Additionally, the model is trained in a transferable setting, across tetrapeptides.
Strengths: 1. Extensive experiments over various downstream... | Rebuttal 1:
Rebuttal: Thank you for the review! To address your questions and concerns:
---
**Experimental baselines**
We now provide new results comparing our work to Timewarp and ITO. Emphatically, these comparisons are limited to the forward simulation task as Timewarp and ITO are not capable of solving the other... | Summary: The authors propose MDGen -- a generative model to sample molecular dynamics trajectory conditioned on key frames. This is a direct application of video generation techniques to solve domain challenges in protein modeling. Specifically, SiT and flow matching models are used to sample SE(3)-invariant representa... | Rebuttal 1:
Rebuttal: Thank you for the review! To address your questions and concerns:
---
**Limited to tetrapeptides**
We have focused on tetrapeptides as model systems in this work for two key reasons:
* We can run simulations for thousands of systems in order to properly test the generalization abilities of our m... | Summary: The paper presents a new framework for generating trajectory of molecular geometries, ie, generative modeling for molecular dynamics. The paper proposes tokenization methods to tokenize the trajectory and learn flow models on the data. Experiments demonstrate the effectiveness of several tasks including forwar... | Rebuttal 1:
Rebuttal: Thank you for the review! To address your questions and concerns:
---
**Limited ML technical contribution**
In writing the paper, we opted to place more emphasis on the experimental results. Nonetheless, we respectfully disagree that our work has limited ML technical contribution.
We highligh... | Summary: In this work, the authors proposed MDGen, a new framework that aims to model molecular dynamics trajectories via generative modeling techniques. By properly encoding the Protein MD trajectories according to the characteristics of key frames, MDGen adopts flow matching techniques (both continuous and discrete f... | Rebuttal 1:
Rebuttal: Thank you for the review! To address your questions and concerns:
---
**Further discussion on related works**
Thanks for mentioning these works. We are happy to discuss them in the revision alongside the existing related work in the Background which we will also expand as per your suggestion. B... | Rebuttal 1:
Rebuttal: # Overall Response
We thank all reviewers for their time taken in providing constructive feedback!
In addition to the individual responses, we also provide new **figures and visualizations in the PDF file** attached to this global response.
* In Figure 1, we compare the distributions of additio... | NeurIPS_2024_submissions_huggingface | 2,024 | Summary: The paper presents a novel generative model for molecular dynamics (MD) trajectories called MDGEN. This model aims to serve as a flexible surrogate for MD simulations by generating entire trajectories conditioned on initial frames. It addresses tasks such as forward simulation, transition path sampling, trajec... | Rebuttal 1:
Rebuttal: Thank you for the review! To address your questions and concerns:
---
**Complexity and accessibility**
We have aimed to provide a clear and reproducible method and exposition accessible to the average reader familiar with molecular machine learning. We aimed to make modeling choices that were a... | Summary: The authors introduce MDGen as a novel approach for modeling MD trajectories. They demonstrate the capabilities of this method in tasks such as interpolation, upsampling, and inpainting of small peptides. The accuracy as well as speed of the new approach compared to the ground truth baseline is quantitatively ... | Rebuttal 1:
Rebuttal: Thank you for the review! To address your questions and concerns:
---
**Parts of Section 3 hard to follow**
Due to space limitations, the description of our method in Section 3 was indeed a bit condensed. We will expand the exposition with the extra page allotted in the revision.
**Suitability... | null | null | null | null |
Toward a Stable, Fair, and Comprehensive Evaluation of Object Hallucination in Large Vision-Language Models | Accept (poster) | Summary: This paper explores the stable evaluation of object hallucinations, which is a crucial challenge in large vision-language models. The authors provide the first systematic analysis of the underlying mechanism through which instructions affect hallucinations, based on comprehensive experiments. They report a lin... | Rebuttal 1:
Rebuttal: >**W1:** The proposed method realizes consistent evaluation by calculating the hallucination rate at a uniform length. However, the length distributions of descriptions generated by different LVLMs exhibit variations. In other words, some models tend to produce shorter descriptions while others ge... | Summary: This work aims to establish a stable, fair, and comprehensive evaluation method for object hallucinations in large vision-language models. The authors discovered a positive correlation between the length of image descriptions and the degree of object hallucination. Building upon this observation, they develope... | Rebuttal 1:
Rebuttal: > **W1:** Although the rationale behind the length-hallucination curve is compelling, it is fitted using a relatively simplistic linear approach. Exploring more flexible and intricate fitting approaches is worth considering, as it has the potential to achieve higher fitting accuracy and more effec... | Summary: The paper identifies a pitfall regarding the length of image descriptions in the current average-based LVLM hallucination evaluation framework. To address this, they propose a new Length-Hallucination Curve Based evaluation framework to enhance the fairness of evaluations. The paper observes that the degree of... | Rebuttal 1:
Rebuttal: >**W1:** Although the paper observes the linear relation between the length of the image description and object hallucination, there are still unanswered questions regarding the justification of the claim. Please see questions below.
>**Q2:** The paper claimed that object hallucination is primarily... | Summary: This work presents comprehensive experiments to study the relationship between description lengths and hallucinations in LVLMs. Based on the observed positive correlation, authors propose an approach of fitting a length-hallucination curve to evaluate object hallucinations. Speciffically, the curve allows for ... | Rebuttal 1:
Rebuttal: > **W1**: The authors conduct experiments using only the beam search setting. Although I understand that beam search is widely used in hallucination evaluation of LVLMs/LLMs, it remains uncertain whether the observed correlation between the hallucination degree and the description length holds tru... | Rebuttal 1:
Rebuttal: We thank all the reviewers and area chairs for your time and effort during the review process. We are encouraged to hear that our work has **clear and well-written presentations** (by all Reviewers), **good motivation** (by Reviewer Pvzh and Ffbd), **convincing analysis** (by Reviewer Pvzh and onE... | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
Local Curvature Smoothing with Stein's Identity for Efficient Score Matching | Accept (poster) | Summary: The paper proposes a novel score matching variant called Local Curvature Smoothing with Stein’s Identity (LCSS). This method addresses the computational challenges associated with the Jacobian trace in score matching, particularly for high-dimensional data, by leveraging Stein’s identity. LCSS aims to bypass t... | Rebuttal 1:
Rebuttal: We appreciate the reviewer for thoroughly reading our paper and asking important questions, which we believe will clarify the contributions of our work.
-----------------
Response to Q.1
-----------------
Interchangeability holds when the score function $S_{\theta}$ is both integrable and differe... | Summary: This paper provides a new way for score matching with the purpose of resolving some of the limitations of the existing methods such as high variance of sliced score matching and Gaussian constraints of denoising score matching (DSM). The new method is based on the local curvature smoothing proposed in [15]. A ... | Rebuttal 1:
Rebuttal: The detailed questions from the reviewer reflect her/his careful reading of our paper.
We are grateful for the constructive questions and hope our responses address their concerns.
-----------------
Response to Q.1
-----------------
The loss function of DSM includes $\nabla _ {x} \log q _ {\sigm... | Summary: The paper proposes to use Stein's lemma to obtain a computationally efficient way in implementing a local-curvature regularized variant of the score matching objective. The main idea is to rewrite the Jacobian-trace term in a way that requires no Jacobian evaluations. In numerical experiments, the effectivenes... | Rebuttal 1:
Rebuttal: We appreciate the reviewer's meaningful question; addressing it will clarify our paper's contributions. For reader comprehension, we plan to include the following argument in the camera-ready version (maybe in the appendix).
----
## Question: a formal argument that the proposed estimator has low... | Summary: This manuscript proposes a new score matching method that bypasses the Jacobian trace by applying Stein’s identity, enabling effective regularization and efficient computation.
Strengths: 1. The method is computationally efficient compared to other SSM variants.
2. Experimental results demonstrate the effecti... | Rebuttal 1:
Rebuttal: We appreciate the reviewer taking the time to read our paper thoroughly.
We hope the following responses clarify our contribution. Additionally, our response to Reviewer tfBV below provides an argument about the advantage over SSM and FD-SSM, which we would like the reviewer to examine.
--------... | null | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
AUC Maximization under Positive Distribution Shift | Accept (poster) | Summary: Due to a positive distribution shift, training and test distributions are not identical. However, existing AUC maximization methods don’t take it into account. To address this shift, this paper theoretically shows a new way to maximize the AUC on the test distribution by using positive and unlabeled data in th... | Rebuttal 1:
Rebuttal: Thank you for your positive comments and constructive feedback.
> The effect of the proposed methods on the MNIST and Fashion MNIST datasets is not significant, which is inconsistent with those on the other datasets. The authors don’t give any explanation.
As described in Line 279, since MNIST and F... | Summary: The paper proposes a method for AUC maximization in binary classification problems under positive distribution shift. They introduce their method, which is simple and easy to implement/understand, and then show it works well in some experiments.
Strengths: - The paper is well written and easy to understand;
-... | Rebuttal 1:
Rebuttal: Thank you for your positive comments and constructive feedback.
> How should the practitioner choose classification threshold after training their classifiers using your method?
Thank you for the insightful question.
In practical use, there are many situations where it is beneficial just to be ... | Summary: This paper considers AUC maximization when the conditional probability distribution of the positive class changes in the test phase. To this end, the unbiased loss function is derived. The loss is approximated by positive and unlabeled data from training distribution, unlabeled data from test distribution, and... | Rebuttal 1:
Rebuttal: Thank you for your insightful comments and constructive feedback.
> Unlike the existing study [15, 42], the negative distribution is not considered.
The method in [15] assumes the positive distribution shift as in our method. Thus, it does not consider the negative distribution change.
The meth... | Summary: This paper addresses the challenge of maximizing the Area Under the Receiver Operating Characteristic Curve (AUC) in imbalanced binary classification problems where there is a positive distribution shift--this shift is where negative data remains constant, but positive data varies. A new method is proposed tha... | Rebuttal 1:
Rebuttal: Thank you for your insightful comments and constructive feedback.
> An expansion to include various metrics, such as F-1 and G-mean of TPR and TNR, which are also relevant for imbalanced data classification, could enrich this paper.
Thank you for the suggestion. We agree that extensions to maxim... | Rebuttal 1:
Rebuttal: Dear all reviewers,
Thank you very much for the detailed and constructive feedback on our paper. We would like to revise the paper based on the comments. A pdf file with additional experiments is attached to this global response.
Best regards,
Authors
Pdf: /pdf/b619aba9f8d582053efbff25b4b1a9f22... | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
A Simple and Optimal Approach for Universal Online Learning with Gradient Variations | Accept (poster) | Summary: This paper studies universal Online Convex Optimization (OCO) with gradient-variation-dependent regret bounds. That is, to design one single algorithm that is unaware of but is able to adapt to both two following groundtruth: 1) the type of curvatures: the loss functions could be convex, strongly convex, or ex... | Rebuttal 1:
Rebuttal: Thank you for the positive feedback and appreciation of our work! We answer your questions in the following.
**Q1.** When the authors introduce the notion of $F_T$ and small-loss bound for the first time (around Eq. (1.2)), they may want to add that now the loss functions are non-negative (which ... | Summary: The paper studied the problem of regret minimization of a set of functions $\{f_t\}_{t=1}^{T}$ over a compact and convex constraint set $\mathcal{X}$, i.e.,
$\sum_{t=1}^{T}f_{t}(x_t) - \min_{x\in\mathcal{X}}\sum_{t=1}^{T}f_{t}(x),$
where $x_t$ is the output of the proposed algorithm at round $1\leq t\leq T$... | Rebuttal 1:
Rebuttal: Thank you for the positive feedback and appreciation of our work! Due to the 6,000-character limit of the rebuttal, we address your major questions below and respond to other minor issues in the next reply after the discussion period starts.
---
**Q1.** I do not understand the comment on the sma... | Summary: This paper investigates the problem of universal online convex optimization to achieve problem dependent regret guarantees for different classes of convex functions (strongly convex, exp-concave, and convex) simultaneously. Problem/function/data dependent regret guarantees have become popular in literature to ... | Rebuttal 1:
Rebuttal: We sincerely appreciate your feedback. Below, we aim to address your concerns about the number of base learners, the significance of our contributions, and the algorithmic improvements.
---
**Q1-a.** Why is $\log^2 T$ computational complexity claimed to be inefficient throughout the paper?
**A1... | Summary: The authors study the regret minimization problem in online convex optimization without access to curvature information. They tackle the task of achieving problem-dependent optimal regret while requiring no prior knowledge of the function class (convex, exp-concave, or strongly convex). They propose an efficie... | Rebuttal 1:
Rebuttal: Thank you for the positive feedback and appreciation of our work! In the following, we answer your questions about feasible analytical parameters and the statements of the $\log \log T$ term.
---
**Q1.** In my opinion, the bottleneck of the proof is in showing the existence of an appropriate cho... | null | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
Lips Are Lying: Spotting the Temporal Inconsistency between Audio and Visual in Lip-Syncing DeepFakes | Accept (poster) | Summary: This paper tackle deepfake detector problem with audio-visual data focusing on lipsync fake which generally a higher quality fake data. For that, this paper propose a dataset and a method. The dataset (AVLips) is formed using available datasets and 3 methods for the lipsync methods. The method (LipFD) extracts... | Rebuttal 1:
Rebuttal: Dear Reviewer Ze8K, we are genuinely grateful for your valuable feedback. We sincerely hope our clarifications below can address your concerns.
---
**W1:** Ablation removing one or two out of the 3 branches for local feature extraction is missing.
**R1:** Thank you for your suggestion. Followi... | Summary: This paper focuses on a new setting in Deepfake detection called lip-syncing fraud, which only contains fewer minor cues on the leap region. To tackle this issue, the authors provide a novel method called LipFD to obtain the features from both a global view and a regional view. Also, with the new AVLips datase... | Rebuttal 1:
Rebuttal: Dear Reviewer 9Sa1, we sincerely thank you for your valuable time and feedback. We are encouraged by your positive comments on our novel explorations, insightful investigations, extensive experiments, and good motivation. We sincerely hope our following clarifications and new experiments can addre... | Summary: The proposed work introduces a pioneering method for detecting lip-syncing forgery, an often overlooked threat in current research. By leveraging discrepancies between lip movements and audio signals, a dual-headed detection architecture significantly enhances detection accuracy. This work also contributes to ... | Rebuttal 1:
Rebuttal: Dear Reviewer Qots, we sincerely appreciate your precious time and valuable comments. Your positive comments of our interesting and relevant topic, clear and simple presentation, novel ideas, convincing experimental evaluation, and impressive application are very encouraging to us. We sincerely ho... | Summary: The paper introduces a novel method, LipFD, dedicated to detecting lip-syncing forgeries by exploiting temporal inconsistencies between lip movements and audio signals. This unique approach addresses a significant gap in existing DeepFake detection methods. Experimental results demonstrate that LipFD achieves ... | Rebuttal 1:
Rebuttal: Dear Reviewer K6BS, we sincerely thank you for your valuable time and comments. We are encouraged by your positive comments on our novel task, interesting idea, good writing and high effectiveness. We sincerely hope our clarifications and new experiments can address your concerns. We are happy to ... | null | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
Safe Time-Varying Optimization based on Gaussian Processes with Spatio-Temporal Kernel | Accept (poster) | Summary: The authors propose a time-varying extension of SAFEOPT to overcome the problems of time-varying rewards under time-varying safety constraints.
Under stationarity conditions, optimality guarantees are provided, and the numerical simulation shows a (favorable) comparison to SAFEOPT.
Strengths: 1. The paper... | Rebuttal 1:
Rebuttal: **W1**
We thank the reviewer for the insight. We refer the reviewer to Table 1 in the pdf. The main difference between our method and context-based methods lies in how time is handled in the safe sets. Context-based methods require a safe seed to be available for every context or at every iterat... | Summary: This paper presents a safe Bayesian optimization algorithm TVSAFEOPT with a spatial-temporal kernel and time Lipschitz constants, which improves on SAFEOPT with time-varying reward and safety constraints. The optimality guarantee is proved for the stationary case and the safety guarantee for more general setti... | Rebuttal 1:
Rebuttal: We thank the reviewer for the useful feedback, and the positive assessment of the paper. We provide point-by-point answers to all raised suggestions, comments, and questions.
**W1**
We thank the reviewer for this suggestion. In the paper we are focused on safety critical systems where satisfy... | Summary: The paper introduces the TVSAFEOPT algorithm, which is based on Gaussian processes with spatio-temporal kernels, designed specifically for optimizing time-varying rewards under time-varying safety constraints. The algorithm provides formal safety guarantees in a general time-varying setting, ensuring safety ev... | Rebuttal 1:
Rebuttal: **W1: They extend the Safeopt algorithm from the literature. However, it is not clear what the additional contributions and differences between these two approaches are.**
We thank the reviewer for the positive assessment of our paper, and for their constructive feedback. We now provide a tabl... | null | null | Rebuttal 1:
Rebuttal: Dear Chairs,
Dear Reviewers,
Thank you for the thoughtful feedback on our manuscript. All three reviewers found our results of interest to the wide readership of NeurIPS. In particular, the reviewers appreciated the theoretical guarantees for safety and optimality of our proposed TVSafeOPT al... | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
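The spatio-temporal kernel in the row above is typically built by combining a spatial and a temporal covariance factor. As a hedged illustration only (the separable product form, function names, and lengthscales below are our assumptions, not taken from the paper), a minimal NumPy sketch:

```python
import numpy as np

def rbf(a, b, lengthscale):
    """Squared-exponential (RBF) kernel between two 1-D point sets."""
    d = a[:, None] - b[None, :]
    return np.exp(-0.5 * (d / lengthscale) ** 2)

def spatio_temporal_kernel(x, t, xp, tp, lx=1.0, lt=5.0):
    """Separable product kernel k((x,t),(x',t')) = k_space(x,x') * k_time(t,t')."""
    return rbf(x, xp, lx) * rbf(t, tp, lt)

# Gram matrix over a few (location, time) samples; it stays symmetric and PSD
# because an elementwise (Schur) product of PSD matrices is PSD.
x = np.array([0.0, 0.5, 1.0])
t = np.array([0.0, 1.0, 2.0])
K = spatio_temporal_kernel(x, t, x, t)
```

A GP with such a kernel lets correlations decay both across space and across time, which is what allows safe-set information to be carried forward as conditions drift.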
Diffusion Forcing: Next-token Prediction Meets Full-Sequence Diffusion | Accept (poster) | Summary: This work presents Diffusion Forcing, a new framework for probabilistic sequence modeling that combines diffusion models with Bayesian filtering. This framework builds on state of the art approaches to sequence modeling using diffusion models, but has several novel contributions.
First, it allows the model to ... | Rebuttal 1:
Rebuttal: > Independent noise vs AR-Diffusion
We thank the reviewer for highlighting the need for a more explicit discussion relative to AR Diffusion.
We would first like to clarify that the stabilization discussion in Appendix B.2 is orthogonal to AR-diffusion. The key to AR-Diffusion is training and sa... | Summary: The authors introduce Diffusion Forcing (DF), a method for diffusion of sequential data where the noise level at each token can be different (“independent”). The authors show that DF provides more flexible steerability properties and more stable rollouts compared to full-sequence diffusion and teacher forcing.... | Rebuttal 1:
Rebuttal: We thank the reviewer for their in-depth review - we are particularly happy to be able to address some of the limitations of Diffuser that the reviewer had to contend with themselves in the past!
> Clarification on Classifier Guidance Term
Sorry about the confusion. The reviewer is exactly corre... | Summary: This paper proposes to augment autoregressive models with diffusion. Specifically, rather than generating every token in one shot (one neural network evaluation), the paper proposes to gradually denoise the tokens following an autoregressive order. That is, every token is given a different noise level (lower f... | Rebuttal 1:
Rebuttal: Thank you for your helpful feedback. We respond to your comments below:
> Can we train with arbitrary noise schedules and find a good schedule at evaluation time?
Yes - at the core of Diffusion Forcing lies exactly the idea that by training with **arbitrary, random** noise schedules, we can exp... | Summary: This paper introduces Diffusion Forcing, a novel training paradigm for sequential generative modeling using diffusion models. Diffusion Forcing learns from sequential tokens with varying independent noise levels, enabling more flexible sampling strategies and general capabilities such as guidance. The experime... | Rebuttal 1:
Rebuttal: Thank you for your helpful feedback! We respond to your comments below:
> Writing clarity and style & typos
Sorry about the confusion. It’s true that we opted out of an independent “Method” section to introduce the intuitions first. With the extra 1 page for the camera-ready version, we promise ... | Rebuttal 1:
Rebuttal: ## General Response
We thank the reviewers for their comments and suggestions. We are pleased that the reviewers find our paper original & interesting (Reviewers 35x2,tz82), general & flexible (Reviewers a6Fo, tz82), that it has great performance (Reviewers a6Fo, tz82,35x2) with substantial improv... | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
Transferable Adversarial Attacks on SAM and Its Downstream Models | Accept (poster) | Summary: This work discusses an interesting security issue of deploying a model fine-tuned on a large foundational model in private downstream applications. It proposes a universal meta-initialized and gradient robust adversarial attack (UMI-GRAT) to break the powerful SAM and its various downstream models, without req... | Rebuttal 1:
Rebuttal: Dear Reviewer aXPE,
Thank you so much for taking the time to read this paper and giving constructive feedback. Please find our response below.
> Give a more comprehensive analysis of the UMI noise.
Following your valuable suggestion, we conducted an evaluation to analyze the impact of the natur... | Summary: In this paper, the authors present a new approach for adversarial attacks on Segment Anything Model (SAM)-based downstream models, addressing the challenge of attacking without prior knowledge of the downstream task or data distribution. Their key contribution is a universal meta initialization-based algorithm... | Rebuttal 1:
Rebuttal: Dear reviewer iuQV
Thank you so much for taking the time to read this paper and giving constructive feedback. Please find our response below.
> Q1, Q2, and Q3: The novelty of the proposed method. The authors should discuss how their method is different from meta-learning-based approaches [2,3,4]... | Summary: This paper proposes an adversarial attack against fine-tuned derivatives to a publicly available foundation model, such as the Segment Anything Model (SAM). In the proposed threat model, attackers can potentially manipulate these downstream models even without knowing the specific task or data they are used fo... | Rebuttal 1:
Rebuttal: Dear Reviewer Q28B,
Thank you so much for taking the time to read this paper and giving constructive feedback. Please find our response below.
> 1. More experiment settings (e.g. against pretrained MAE models) are warranted to demonstrate the generalizability.
Following your good suggestion, w... | Summary: This paper investigates the vulnerability of Segment Anything Model (SAM) and its downstream models to transferable adversarial attacks. The authors propose a novel attack method called Universal Meta-Initialized and Gradient Robust Adversarial attack (UMI-GRAT) that leverages the open-sourced SAM to generate ... | Rebuttal 1:
Rebuttal: Dear Reviewer Qxcd,
Thank you so much for taking the time to read this paper and giving constructive feedback. Please find our response below.
> 1. How does the performance of UMI-GRAT vary with different hyperparameters, such as the bound $\epsilon$ and iterations?
Following your good suggesti... | Rebuttal 1:
Rebuttal: We express our sincere appreciation to all the reviewers for their detailed and constructive feedback. We summarize our rebuttal as follows:
1. As suggested by reviewer **pDnz**, we conducted the **randomness experiment** and presented the experimental results in **Table R1** of the PDF document... | NeurIPS_2024_submissions_huggingface | 2,024 | Summary: In this paper, the authors propose an adversarial attack method that can contaminate downstream tasks from the perspective of adversarial transferability. They address the problem that SAM models do not have similar optimisation routes after fine-tuning for different downstream tasks by designing universal met... | Rebuttal 1:
Rebuttal: Dear Reviewer pDnz,
Thank you so much for taking the time to read this paper and giving constructive feedback. Please find our response below.
> 1.The readability of the Methodology section of this article is somewhat poor. The authors define the problem to be solved through the form of proposit... | null | null | null | null | null | null |
Stepping Forward on the Last Mile | Accept (poster) | Summary: **Context**. The focus of the present paper is on-device fine-tuning (gradient computation and weight update **starting from a pre-trained model**) under limited memory budget. One way to cut the memory cost of storing the computational graph for gradient computation by standard backprop is the Memory Efficien... | Rebuttal 1:
Rebuttal: We would like to express our gratitude to the reviewers for their careful review of our paper, their interest in the core idea of our work, and valuable suggestions. The comments regarding the detailed experimental setups, precision used in the algorithm, explanations of enhancement mechanisms and... | Summary: This paper explores the feasibility of on-device training using fixed-point forward gradients. The authors propose methods including sign-m-SPSA, Momentum Guided Sampling, Sharpness-aware Perturbation, Sparse Update, and Kernel-wise Normalization to reduce memory footprint and accuracy gaps and conduct experim... | Rebuttal 1:
Rebuttal: We would like to express our gratitude to the reviewers for their careful review of our paper, their interest in the core idea of our work, and valuable suggestions. The comments regarding more detailed explanations of techniques, experimental setups and comparisons are well taken, and the manuscr... | Summary: The authors investigate fixed-point forward gradients for quantized training. They conduct experiments across various deep learning tasks in vision and audio to assess if this method yields competitive models while conserving memory and computational resources.
They introduce algorithm enhancements to reduce m... | Rebuttal 1:
Rebuttal: We would like to express our gratitude to the reviewers for their careful review of our paper, their interest in the core idea of our work, and valuable suggestions. The comments regarding the motivation, novelty, impact of our work, and detailed comparisons of hardware complexity are well taken, ... | Summary: The paper proposes a quantization approach for fine-tuning pretrained data to new local data on resource-constrained devices. In particular, the weights perturbation, gradients estimation, and weights updates are quantized to either 8-bit or 16-bit. This quantization approach is combined with Momentum Guided S... | Rebuttal 1:
Rebuttal: We would like to express our gratitude to the reviewers for their careful review of our paper, their interest in the core idea of our work, and valuable suggestions. The comments regarding accuracy discussions and comparisons of hardware complexity are well taken, and the manuscript will be revise... | Rebuttal 1:
Rebuttal: We would like to express our gratitude to the reviewers for their careful review of our paper, and their interest in the core idea of our work. We appreciate all the feedback, valuable suggestions and recommendations. The comments regarding notations, technical discussions, experimental clarificat... | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
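The forward-gradient estimators discussed in this row build on SPSA-style perturbations. Below is a hedged floating-point sketch of a plain m-sample SPSA estimator (the paper's fixed-point sign-m-SPSA and its enhancements are not reproduced; `spsa_grad` and all constants are our own illustration):

```python
import numpy as np

def spsa_grad(f, w, m=2000, eps=1e-3, seed=0):
    """Average m SPSA probes: each probe uses one random +/-1 direction v and
    two function evaluations, so no backward pass (and no stored activation
    graph) is needed."""
    rng = np.random.default_rng(seed)
    g = np.zeros_like(w)
    for _ in range(m):
        v = rng.choice([-1.0, 1.0], size=w.shape)
        g += (f(w + eps * v) - f(w - eps * v)) / (2 * eps) * v
    return g / m

# Sanity check on f(w) = 0.5 * ||w||^2, whose true gradient is w itself.
w = np.array([1.0, -2.0, 3.0])
g = spsa_grad(lambda z: 0.5 * np.dot(z, z), w)
```

For this quadratic the estimator is unbiased (E[v v^T] = I for Rademacher directions) and its variance shrinks as 1/m, which is why multi-sample averaging and variance-reduction tricks matter for on-device fine-tuning.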
Text-DiFuse: An Interactive Multi-Modal Image Fusion Framework based on Text-modulated Diffusion Model | Accept (spotlight) | Summary: A new paradigm of multi-modal image fusion named Text-DiFuse is introduced, based on the diffusion model. The paradigm embeds a mechanism for aggregating feature-level multi-modal image information into the diffusion process of degrading multi-modal images, addressing the optimization gap between "degradation ... | Rebuttal 1:
Rebuttal: Q1: Sampling interval and its impact.\
Reply: In our method, image restoration and information integration are mutually coupled. This is reflected in the physical connection, where a fusion control module is embedded within the internal structure of the diffusion model. Once all the networks are t... | Summary: This work focuses on the topic of multi-modal image fusion. Two innovations enhance the performance of the fusion. One is the clever integration of information fusion into the diffusion process. This coupling way enables the fusion function to resist degradation. The other is the introduction of a text-based f... | Rebuttal 1:
Rebuttal: Q1: Clean data for the loss construction.\
Reply: Constructing Eqs. (9) and (10) actually involves very stringent data requirements. Specifically, they require a pair of degraded multi-modal images describing the same scene, along with their corresponding clean versions. Unfortunately, such a data... | Summary: This paper addresses two primary challenges in multimodal image fusion: the mixed degradation of modalities and the insufficient salience of target objects. It proposes two methods to tackle these challenges: feature-level fusion diffusion and the re-modulation of fusion rules in target areas using a zero-shot... | Rebuttal 1:
Rebuttal: Q1: Dataset for training diffusion model.\
Reply: In our work, acquiring image restoration capability depends on pre-training a conditional diffusion model, which needs paired clean and degraded data. The clean data are used to build the loss function for supervision, while the degraded data act a... | Summary: This paper proposes an interactive framework that can exploit the intrinsic connection between image restoration and multi-modal image fusion.
The authors embed information fusion within the diffusion process and address the "composite degradation challenge" i.e., multi-modal information integration with
effec... | Rebuttal 1:
Rebuttal: Q1: Discussion on the necessity of the brightness-chrominance separation. \
Reply: Unlike the image generation task emphasizing diversity, image fusion demands high color fidelity. For instance, in infrared and visible image fusion, the fused image should closely match the colors of the visible im... | Rebuttal 1:
Rebuttal: We sincerely thank each of the reviewers, area chairs, and program chairs for investing their time and effort into our paper. These valuable comments have enriched our understanding of the research problem and will greatly improve the quality of our manuscript. \
According to the reviewers' commen... | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
In Pursuit of Causal Label Correlations for Multi-label Image Recognition | Accept (poster) | Summary: This paper proposes a simple yet effective method based to address the issue of contextual bias for multi-label image recognition. It utilizes the casual intervention theory to pursue causal label correlations and suppress spurious label correlations. It utilizes the k-means to model the confounders, and emplo... | Rebuttal 1:
Rebuttal: We thank reviewer BNe7 for the positive comments on our work. In the following, we present our responses addressing the raised concerns.
**(Weakness 1)** In this work, we apply K-means clustering on the spatial features extracted from a pre-trained classification network for confounder modeling. ... | Summary: This paper presents a novel approach to addressing label correlations in multi-label image recognition by using causal intervention. The method involves decoupling features, modeling confounders, and implementing causal interventions to capture useful contextual information while suppressing spurious label cor... | Rebuttal 1:
Rebuttal: We thank reviewer wRCW for the detailed feedback on our work. In the following, we present our responses addressing the raised concerns. Should our rebuttal effectively address the concerns, we kindly hope you can raise your score.
**(Weakness 1)** We agree with you that a detailed analysis of t... | Summary: This paper proposes a causal intervention mechanism for multi-label image classification, where causal label correlations are pursued and spurious label correlations are suppressed. To achieve this, the authors frame a pipeline consisting of a branch for decoupling label-specific features and a branch for summ... | Rebuttal 1:
Rebuttal: We thank reviewer NEe5 for the constructive comments and suggestions. In the following, we present our responses addressing the raised concerns. Should our rebuttal effectively address the concerns, we kindly hope you can raise your score.
**(Weakness 1 and 2)** Thank you for the reminder, and we ... | Summary: This paper proposes a causal intervention mechanism for multi-label image classification, where causal label correlations are pursued and spurious label correlations are suppressed. To achieve this, the authors frame a pipeline consisting of a branch for decoupling label-specific features and a branch for summ... | Rebuttal 1:
Rebuttal: We thank the reviewers for their thoughtful analysis and feedback, which are invaluable for understanding how to improve our paper. We address the questions and concerns raised by each reviewer point-by-point in the respective threads below. We also attach a PDF containing one updated Figure in... | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
Proving Theorems Recursively | Accept (poster) | Summary: This paper designs a novel hierarchical search algorithm (POETRY) for generating formal proofs with large language models step-by-step. In particular, POETRY will first search for proof steps with proof level 0 (these steps typically correspond to subgoals in the proof), and check the correctness of the level ... | Rebuttal 1:
Rebuttal: Thank you for your detailed and constructive comments, and for your acknowledgment of POETRY. We hope our responses and rebuttal materials (RM) address your concerns.
## Weaknesses
### w1. How POETRY scales with more computing resources.
Indeed, a large portion of the improvement comes ... | Summary: This paper proposes POETRY, a method for formal theorem proving using language models by training the model to iteratively decompose the problem into sketches, recursively. The authors focus on Isabelle. At each step, POETRY takes a proof state and goal and predicts either a formal sketch (a proof using sorry ... | Rebuttal 1:
Rebuttal: Thank you for your detailed and constructive comments. We hope our responses and rebuttal material address your concerns.
## Clarifications on the summary
There are a few misunderstandings in the summary that we would like to clarify: Within each proof sketch, POETRY operates step-by-step, searching ... | Summary: The authors introduce a method called POETRY (proving theorems recursively) for constructing formal proofs in Isabelle/HOL. POETRY performs best-first search on proof sketches guided by a language model fine-tuned on proof sketches. POETRY outperforms other algorithms guided by language models that prove theor... | Rebuttal 1:
Rebuttal: Thank you for your detailed and constructive comments. We hope our responses and rebuttal material (RM) address your concerns.
## Clarification on strengths.
There are a few misunderstandings regarding our strengths that we would like to clarify. While we are not the first to use the term `proof ... | Summary: This paper introduces POETRY, a new method to prove theorems recursively. The key ideas are to use a modified best first search algorithm for the search part, and a *sorry* tactic for assumptions at the current level (to be proven later). The authors provide the intuition that this recursive structure allows P... | Rebuttal 1:
Rebuttal: Thank you for your detailed and constructive comments. We hope our responses and rebuttal material address your concerns. As you mentioned in the strengths section, we will release all the code, models, and data on the POETRY system to support further research on this topic.
## Weaknesses
### w1.... | Rebuttal 1:
Rebuttal: Dear Reviewers and ACs,
Thank you very much for the time and effort you have dedicated to reviewing our paper. We appreciate the thorough suggestions and constructive feedback on our manuscript.
We are also grateful for the positive recognition from the reviewers regarding our motivation (eTZY, ... | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
Neural Synaptic Balance | Reject | Summary: This paper aims to study and explain the phenomenon of neural synaptic balance, where a balanced neuron means that the total norm of its input weights is equal to the total norm of its output weights. Particularly, the authors study the reasons why and when randomly initialized balanced models (so, models whos... | Rebuttal 1:
Rebuttal: We thank reviewer VcLX for the positive review of this work and insightful comments.
While we have focused here on developing the theory of neural synaptic balance, neural synaptic balance has practical applications. It can be viewed as an additional, complementary, method of regularization on p... | Summary: The authors present a theory of neural synaptic balance, defined as the condition in which a total loss achieves the same value for the input weights to a neuron and its output weights. This is different from the well studied E/I balance in neuroscience and machine learning literature. The authors show mathem... | Rebuttal 1:
Rebuttal: We thank reviewer QTyq for the positive review of this work and insightful comments.
"Why is the energy consumption of physical neurons lower when they are balanced?" Because the balancing algorithm also decreases the norm of weights.
Why not just have a regularizer to keep the overall activatio... | Summary: This paper provides a thorough characterization of regularizers which lead to synaptic balance (when the "cost" of input weights to a neuron or pool of neurons is tied to the cost of output weights) in trained neural networks. Their results apply to many different activation functions and architectures.
Stren... | Rebuttal 1:
Rebuttal: We thank reviewer cT2m for the positive review of this work and insightful comments.
Synaptic balance does not necessarily emerge in networks trained with a regularizer (unless they are trained very carefully, with very small learning rates, etc). Our work shows that one can obtain synaptic bala... | Summary: The authors provide a theoretical approach to the analysis of balanced neurons and networks. Their theoretical work includes proof of the convergence of stochastic balancing. In addition, they investigate the effect of different regularizers and learning rates on balance, training loss, and network weights, in... | Rebuttal 1:
Rebuttal: We thank reviewer TDzF for their positive review of this work and insightful comments.
Regarding Theorem 5.1, the reviewer has mentioned a fair point. In the revised version we will shorten Theorem 5.1, and move Proposition 5.4 and its proof outside of the proof of Theorem 5.1.
-----------
... | Rebuttal 1:
Rebuttal: We thank the reviewers for appreciating our work and for their insightful comments. We have provided a separate response to each reviewer. The primary goal of our paper is to present the theory of synaptic balancing in neural architectures and the main theorem (Theorem 5.1) connects synaptic bal... | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
Sketchy Moment Matching: Toward Fast and Provable Data Selection for Finetuning | Accept (poster) | Summary: This paper addresses the problem of data selection for finetuning large pre-trained models. The key contributions are:
1. A theoretical analysis of data selection for finetuning that reveals a variance-bias tradeoff in high dimensions.
2. A provable result showing that gradient sketching can efficiently find ... | Rebuttal 1:
Rebuttal: We would like to thank the reviewer for their insightful questions and suggestions. We are glad that they found our theory solid and our method effective. On the questions raised in the review:
1. __Low intrinsic dimensions of fine-tuning__:
Recalling the references [2,72] from the introducti... | Summary: The authors study the task of data selection. They extend the classical variance reduction to the high dimensional case and provide a variance-bias tradeoff analysis. Based on the theoretical results, they propose sketchy moment matching, which first utilizes gradient sketching to form a low-dimensional space an... | Rebuttal 1:
Rebuttal: We would like to thank the reviewer for their helpful questions and suggestions, and we are glad that they found this work well-presented and theoretically sound. Nevertheless, we believe there have been misconceptions regarding some key notions and the focus of this work. Before diving into the s... | Summary: This paper concerns the data selection problem: given a collection of $N$ embeddings of dimension $r$ for $r\gg N$, the goal is to pick a subset $S$ of points of size $n$ so that one could run any downstream algorithm on $S$ with a regularization term, so that the empirical risk is small even on the entire fin... | Rebuttal 1:
Rebuttal: We appreciate the insightful questions and suggestions from the reviewer. However, we believe there have been misunderstandings regarding the focus and contribution of this work. We hope that the following responses will help clarify these confusions.
1. __Our theoretical contributions are explana... | Summary: This paper studies the problem of data selection in the over-parametrized fine-tuning regime, i.e. when the number of fine-tuning parameters $r$ is larger than the amount $N$ of available examples. We want to subsample $n\ll N$ examples that form a representative set to train on, and hopefully achieve quality ... | Rebuttal 1:
Rebuttal: We would like to thank the reviewer for their constructive questions and suggestions, and we are glad that they found this work interesting and well-presented. On the questions raised in the review:
1. __Cost of gradient computation and SkMM:__
First, we kindly emphasize the __ubiquitous role ... | Rebuttal 1:
Rebuttal: First, we would like to thank all the reviewers for their time, efforts, and valuable suggestions. In the general response, we address some common questions raised in the reviews and summarize important revisions we made
1. __Computational efficiency of SkMM__:
SkMM is efficient in both memory... | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
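As a hedged illustration of the gradient-sketching idea in this row (the dimensions, the Gaussian sketch, and the variable names are our assumptions, not the authors' code): per-sample gradients are compressed with a random projection, which approximately preserves their geometry in the Johnson-Lindenstrauss sense, so downstream selection can run in the low-dimensional sketch.

```python
import numpy as np

rng = np.random.default_rng(0)
N, r, k = 100, 2000, 256            # samples, gradient dim, sketch dim (k << r)
G = rng.standard_normal((N, r))     # stand-in for per-sample gradients

S = rng.standard_normal((r, k)) / np.sqrt(k)  # Gaussian sketching matrix
Gs = G @ S                          # sketched gradients, N x k

# Johnson-Lindenstrauss: squared distances survive the projection up to a
# small multiplicative error with high probability.
orig = np.linalg.norm(G[0] - G[1]) ** 2
sketched = np.linalg.norm(Gs[0] - Gs[1]) ** 2
```

Any selection routine that only needs inner products or distances between gradients can then operate on `Gs` at a fraction of the memory cost of the full gradients.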
Causal Discovery from Event Sequences by Local Cause-Effect Attribution | Accept (poster) | Summary: This paper introduces a new causal model in which individual events of the cause variable trigger events of the effect variable with dynamic delays. The authors propose a cause-effect matching approach to learn a fully directed acyclic graph, named the CASCADE algorithm. The algorithm performs a topological se... | Rebuttal 1:
Rebuttal: Dear Reviewer, thank you for your time and valuable feedback.
We want to follow up on the applicability of our causal model and its implications. Mechanisms where multiple events are triggered, e.g. Hawkes processes, can still be modeled in parts by CASCADE. To this end, we supplement an experime... | Summary: The article employs the Algorithmic Markov Condition alongside Kolmogorov
complexity for causal discovery from event sequences. It focuses on a specific scenario in
which the sequence of events is divided into source and effect variables. The principal
contribution of this study is its innovative application o... | Rebuttal 1:
Rebuttal: Dear Reviewer, thank you for your time and valuable feedback. We would like to address your concerns first.
1. **Assumptions**: Mechanisms where multiple events are triggered, e.g. Hawkes processes, can still be modeled in parts by CASCADE. To this end, we supplement an experiment with a Hawkes p... | Summary: In their work, the authors are concerned with recovering causal relations, where cause and corresponding effects occur in varying temporal distances. The authors leverage information theoretic formulations and properties of the algorithmic Markov condition to recover the causal graph via minimum description le... | Rebuttal 1:
Rebuttal: Dear Reviewer, thank you for your time and valuable feedback for the main paper as well as the Appendix. We would like to address your concerns and questions in detail.
1. **Assumptions**: First, we would like to elaborate on our assumptions and their implications.
- *Multiple effect events*: ... | Summary: The paper introduces a method for identifying causal relationships in event sequences. The authors presents a causal model that handles both instantaneous and delayed effects, contrasting it with existing methods like Granger causality. This algorithm is evaluated on both synthetic and real-world datasets.
St... | Rebuttal 1:
Rebuttal: Dear Reviewer, thank you for your time and valuable feedback. We would like to address your concerns and questions in detail.
1. **Assumptions**: Mechanisms where multiple events are triggered, e.g. Hawkes processes, can still be modeled in parts by CASCADE. To this end, we supplement an experime... | Rebuttal 1:
Rebuttal: We thank the reviewers for their detailed and thoughtful comments. All reviewers appreciate the proposed causal model with “theoretical foundation based on the AMC and MDL”, with “strong theoretical identifiability results”. In particular the identifiability of instant effects, which Granger causa... | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
MC-DiT: Contextual Enhancement via Clean-to-Clean Reconstruction for Masked Diffusion Models | Accept (poster) | Summary: The paper introduces MC-DiT, a training paradigm for Diffusion Transformers (DiT) in the field of generative diffusion models for image generation. By utilizing the proposed clean-to-clean mask-reconstruction approach, the model can better leverage contextual information at different noise variances.
Strengt... | Rebuttal 1:
Rebuttal: ### Q1: Training overhead of extra branches.
With the two additional branches, the training cost of MC-DiT is slightly higher than that of MaskDiT and MDT. As shown in the table below, MC-DiT-XL/2 has more parameters, since the EMA branches introduce an additional 56M parameters and these additional para... | Summary: This paper observes that reconstructing masked noisy patches from unmasked noisy patches harms contextual information extraction during the training of DiT and then proposes a novel training paradigm named MC-DiT with clean-to-clean mask-reconstruction. Two EMA branches of DiT decoders are designed to avoid mo... | Rebuttal 1:
Rebuttal: ### Q1: Visual quality comparison.
In Figures~R-1 and R-2(a) in the global rebuttal file, we further provide various $256\times 256$ and $512\times 512$ images generated by our MC-DiT and compare with SOTA methods MaskDiT and MDT. Our generated images are more realistic and have more consistent ... | Summary: This paper introduces MC-DiT, a novel training paradigm for Diffusion Transformers (DiT) in image generation. It addresses the limitations of current masked-reconstruction strategies, which fail to effectively extract **contextual information** due to noisy-to-noisy reconstruction. MC-DiT employs clean-to-clea... | Rebuttal 1:
Rebuttal: ### Q1: Generalization to other domains or datasets.
We adopt the ImageNet dataset in the experiments for a fair comparison, since MaskDiT, SD-DiT and MDT are all evaluated on the ImageNet dataset. In fact, our MC-DiT can be generalized to different domains or datasets for improved image generatio... | Summary: The paper introduces a novel training paradigm for Diffusion Transformers (DiT) in the context of generative diffusion models for image generation. The authors propose MC-DiT, which focuses on enhancing contextual information extraction by reconstructing clean unmasked patches from clean masked patches, as opp... | Rebuttal 1:
Rebuttal: ### Q1: Complexity.
Our MC-DiT has the same main branch and training objective as existing methods like MaskDiT, MDT, and SD-DiT. The additional complexity of MC-DiT lies in the two extra EMA branches and unmasked tuning.
1) The two extra branches increase parameters by only 7.6\% and FLOPs by 8\%, a... | Summary: The paper introduces a novel training paradigm for Diffusion Transformers (DiT) in the context of generative diffusion models for image generation. The authors propose MC-DiT, which focuses on enhancing contextual information extraction by reconstructing clean unmasked patches from clean masked patches, as opp... | Rebuttal 1:
Rebuttal: We thank all reviewers for their valuable comments. We appreciate the reviewers' recognition of our work, including **excellent motivation (ZkfZ, RKJM, Ngpd, cs4r and gqjf), reasonable method (gqjf), thorough theoretical analysis (cs4r, Ngpd, YSTZ), state-of-the-art results (Ngpd, gqjf, cs4r and Z... | NeurIPS_2024_submissions_huggingface | 2,024 | Summary: In this work, the authors reveal the issues of Diffusion transformers of having semantic inconsistency as they fail to learn the contextual information. Based on their theoretical analysis, they proposed a novel training paradigm to fully learn contextual information with clean-to-clean mask reconstruction. Th... | Rebuttal 1:
Rebuttal: ### Q1: Motivation for MC-DiT
In Section 3 of the manuscript, sufficient analysis is provided to claim that reconstructing masked noisy patches from unmasked noisy patches is insufficient for contextual information extraction. In detail, the information used in noisy-to-noisy patch reconstruction... | Summary: This paper proposes a training strategy for diffusion transformers that fully learns contextual information by introducing clean to clean mask reconstruction during training, and designs complementary DiT decoder branches as well as corresponding supervisory losses to avoid the problem of model collapse, givin... | Rebuttal 1:
Rebuttal: ### Q1: Writing error in line 107.
We thank the reviewer for pointing out the writing error. The unmasked patches $x_1$ and masked patches $x_2$ in line 107 are corrected to $x_1=x[m]$ and $x_2=x[1-m]$.
### Q2: Visualization results.
We have provided visualization results of generated $256\times 256... | Summary: This paper critiques previous masked-reconstruction strategies in DiT training for their poor contextual information extraction, attributing this to noisy-to-noisy reconstruction. The authors theoretically and empirically validate that this approach limits mutual information between unmasked and masked patches... | Rebuttal 1:
Rebuttal: ### Q1: Claim in Line 38
We provide both theoretical and empirical evidence in Proposition~2 and Figure 1(a) in the manuscript to support the claim. We consider the mutual information $\mathcal{I}(x_0^1;x_0^2)$ between unmasked patches $x_0^1$ and masked patches $x_0^2$ as the contextual informati... | null | null |
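The mutual-information claim in the MC-DiT record above (adding noise to patches shrinks the contextual information $\mathcal{I}(x_0^1;x_0^2)$) can be illustrated with a toy bivariate Gaussian model. This is only a hedged analogy under assumed Gaussianity, not the paper's Proposition 2 about image patches:

```python
import math

# Toy Gaussian analogue of the claim that adding independent noise to two
# correlated signals reduces their mutual information. The bivariate Gaussian
# model is an illustrative assumption, not the paper's image-patch setting.

def gaussian_mi(rho):
    """Mutual information (in nats) of a unit-variance bivariate Gaussian."""
    return -0.5 * math.log(1.0 - rho ** 2)

def mi_after_noise(rho, noise_var):
    """MI after adding independent N(0, noise_var) noise to each variable:
    the correlation shrinks to rho / (1 + noise_var)."""
    return gaussian_mi(rho / (1.0 + noise_var))
```

For any `rho != 0` and `noise_var > 0`, `mi_after_noise(rho, noise_var) < gaussian_mi(rho)`, mirroring the noisy-to-noisy versus clean-to-clean argument.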
Continuously Learning, Adapting, and Improving: A Dual-Process Approach to Autonomous Driving | Accept (poster) | Summary: The paper introduces **LeapAD**, an interesting paradigm for autonomous driving inspired by human cognitive processes, addressing the limitations of prevailing data-driven methods in complex scenarios. LeapAD incorporates a dual-process decision-making module consisting of an Analytic Process (System-II) for l... | Rebuttal 1:
Rebuttal: Dear Reviewer:
Thank you for your constructive comments. We provide discussions and explanations about your concerns as follows.
**Q1:** The paper should clearly distinguish between the Analytic Process and the Heuristic Process. How are these processes defined, and why is the Heuristic Process calle... | Summary: This paper presents LeapAD, a dual-process closed-loop autonomous driving system.
LeapAD first uses a VLM to analyze the scene by selecting and locating critical objects in the scene, and then it uses a dual-process learning approach to learn driving behaviors.
The dual-process learning system contains an An... | Rebuttal 1:
Rebuttal: Dear Reviewer:
Thanks a lot for your acknowledgement, and we appreciate the time and effort you dedicated to enhancing the quality and clarity of our manuscript.
**Q:** The performance improvement is not very significant compared to the baseline.
**A:** Thanks for your feedback. As you mentione... | Summary: This paper introduces a paradigm to design an annotation-efficient end-to-end autonomous driving system that harnesses the power and generalizability of open-source LLM models. It proves that critical frame/instance selection are critical to a decision-making module training. This method is evaluated by closed... | Rebuttal 1:
Rebuttal: Dear Reviewer:
Thank you for your constructive comments. We will discuss and explain your concerns as follows.
**Q1:** No quantitative benchmark on its VLM module on simulation and the real world. Only some samples are listed in the paper.
**A1**: Thank you for your valuable suggestions. We h... | Summary: The paper "LeapAD" introduces a new approach to autonomous driving that addresses key challenges in adaptability and interpretability. It draws inspiration from human cognition to enhance decision-making processes in complex environments.
The system incorporates two complementary processes:
- Analytic Process:... | Rebuttal 1:
Rebuttal: Dear Reviewer:
Thank you for your constructive comments. We provide discussions and explanations about your concerns as follows.
**Q1:** My primary concern lies in the setup of data and models for generating scene descriptions into text to identify critical objects. Operating within the text dom... | Rebuttal 1:
Rebuttal: Dear Reviewers,
Thank you very much for taking the time to review this manuscript and for helping to improve our work. I greatly appreciate all your comments and suggestions. Please find my detailed responses below.
As suggested by Reviewer eSfn (Q2), we have included visualizations of the failu... | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
MomentumSMoE: Integrating Momentum into Sparse Mixture of Experts | Accept (poster) | Summary: The paper introduces MomentumSMoE, a novel integration of heavy-ball momentum into Sparse Mixture of Experts (SMoE) to enhance stability and robustness. It establishes a connection between SMoE and gradient descent on multi-objective optimization problems.
The paper demonstrates theoretical and empirical impro... | Rebuttal 1:
Rebuttal: **Q1. Unfounded connection between the SMoE and gradient descent. Connection to accelerating fixed-point iterations. The authors should work with complex eigenvalues [Azizian et. al.] using negative (Gidel et. al.) or complex momentum (Lorraine et. al.). Empirical (or theoretical) investigation an... | Summary: This paper proposes a variant of sparse mixture of experts, MomentumSMoE, by incorporating momentum into the traditional sparse mixture of experts framework. The authors provide both theoretical proofs and empirical evidence demonstrating that MomentumSMoE offers greater stability and robustness compared to th... | Rebuttal 1:
Rebuttal: **Q1. Why do MomentumV-MoE and Robust MomentumV-MoE have only marginal gains on clean IN-1K data? Is there any in-depth analysis available on this?**
**Answer:** V-MoE's result reported in Table 2 in our manuscript is among the state-of-the-art results on clean IN-1k data for the models that have ar... | Summary: The paper introduces a novel approach to enhancing the robustness and stability of Sparse Mixture of Experts (SMoE) models. Inspired by the analogy of gradient descent and SMoE, the authors develop a family of models by incorporating momentum into the training process. The key idea is that training SMoE is a m... | Rebuttal 1:
Rebuttal: **Q1. Formulating SMoE as a multi-objective optimization problem is doubtful to me.**
**Answer:** We believe there is a misunderstanding of our formulation of SMoE as a multi-objective optimization problem. Please allow us to clear this misunderstanding by clarifying the role of expert networks ... | Summary: This paper addresses the instability problem of training SMoE models. By establishing a relationship between SMoE and multi-objective optimization, the authors integrate momentum into SMoE and propose MomentumSMoE. Experimental results show that MomentumSMoE is more stable than SMoE during training.
Strengths... | Rebuttal 1:
Rebuttal: **Q1. This method has little effect on models with few layers.**
**Answer:** A momentum-based approach like MomentumSMoE needs more layers to show its advantages, just as heavy-ball momentum or Adam needs a couple of iterations to start showing its faster convergence compared to gradient des... | Rebuttal 1:
Rebuttal: ## Global Rebuttal
Dear AC and reviewers,
Thanks for your thoughtful reviews and valuable comments, which have helped us improve the paper significantly. We are encouraged by the endorsements that: 1) The idea of integrating momentum into Sparse Mixture of Experts (SMoE) is original, interesting... | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
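The heavy-ball view of SMoE summarized in the MomentumSMoE record above can be sketched numerically. All names (`smoe_step`, `momentum_smoe_layer`) and the toy experts/gate are illustrative assumptions, not the paper's implementation:

```python
import numpy as np

# Toy sketch of the MomentumSMoE idea: treat the SMoE layer output as a
# descent direction and accumulate it with heavy-ball momentum.

def smoe_step(x, experts, gate):
    """Sparsely gated sum of expert outputs for input x."""
    weights = gate(x)
    return sum(w * f(x) for w, f in zip(weights, experts) if w != 0.0)

def momentum_smoe_layer(x, p, experts, gate, mu=0.9, step=1.0):
    """Heavy-ball update: p <- mu*p + SMoE(x); x <- x + step*p."""
    p_new = mu * p + smoe_step(x, experts, gate)
    return x + step * p_new, p_new
```

With `mu=0` the layer reduces to the plain residual SMoE update `x + SMoE(x)`; the momentum state `p` is what the record credits with improved stability over depth.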
Parallelizing Linear Transformers with the Delta Rule over Sequence Length | Accept (poster) | Summary: This paper proposes the Delta Rule method to construct the state updates for Linear Attention. Furthermore, the paper introduces a chunk-wise training approach, allowing the computational cost of training to grow subquadratically with the text length. Experimentally, the paper validates the effectiveness of th... | Rebuttal 1:
Rebuttal: Thanks for your review!
## W1 Long context experiments
Thanks for your suggestion. Models at 1B scale are currently not powerful enough to provide meaningful results on needle-in-the-haystack style long-range benchmarks. We are currently training models at larger scale (3B parameters), and will ... | Summary: This paper introduces a novel algorithm for the efficient training of DeltaNet Linear Transformers. DeltaNet enhances contextual associative recall using a delta rule-like update but was previously limited by inefficient parallelization in its training algorithm. The work described in this paper presents a har... | Rebuttal 1:
Rebuttal: Thanks for your review!
We are adding some additional results in case they are of interest.
First, we have preliminary results with 3B models trained for 167B tokens:
| Model | # Tokens | wikitext PPL | arc-c | arc-e | boolq | hellaswag | openbookqa | piqa | sciq | winogrande | averag... | Summary: This paper proposes a hardware-efficient algorithm for training linear transformers with a delta update (DeltaNet; SMS21). This architecture has an attention formulation that prevents the direct application of chunk-wise parallel algorithms for computing its output. To address this issue, the authors introduce... | Rebuttal 1:
Rebuttal: Thanks for your review.
## W1: larger scale experiments
As noted by the reviewer, it is difficult to conduct experiments at 1B+ scale. Nonetheless, we are currently running some larger-scale experiments at the 3B parameter scale. Here are some preliminary results:
| Model | # Tokens | wiki... | null | null | null | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
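The delta-rule state update behind DeltaNet, as described in this record, can be written as a simple sequential recurrence; the paper's contribution is a hardware-efficient chunk-wise parallel reformulation of this reference scan. The sketch below is only the naive sequential version, with illustrative names:

```python
import numpy as np

# Sequential reference for the delta-rule update:
#   S_t = S_{t-1} + beta_t * (v_t - S_{t-1} k_t) k_t^T
# i.e. the value currently stored under key k_t is moved toward v_t.

def delta_rule_step(S, k, v, beta):
    v_old = S @ k                          # value currently associated with k
    return S + beta * np.outer(v - v_old, k)

def delta_rule_scan(keys, values, betas, d_k, d_v):
    S = np.zeros((d_v, d_k))
    for k, v, b in zip(keys, values, betas):
        S = delta_rule_step(S, k, v, b)
    return S
```

With `beta = 1` and a unit-norm key, the update exactly overwrites the stored value, which is the associative-recall behavior the reviews highlight over plain additive linear attention.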
Equivariant Machine Learning on Graphs with Nonlinear Spectral Filters | Accept (poster) | Summary: This paper addresses the issue of inadequate modeling of graph equivariance in existing spectral GNNs due to nonlinear operations. The authors investigate the concept of domain translation in graph space as functional translations, drawing from the convolutional operations defined on images. Based on a series ... | Rebuttal 1:
Rebuttal: We thank the reviewer for the careful and constructive review.
>**Additional Comparison with Spectral GNNs:**
Thank you for the comment.
Following the reviewer's request, we included JacobiConv, BernNet, Specformer, and OptBasisGNN in our node classification experiment.
Moreover, motivated by ... | Summary: This paper proposes a spectral GNN called non-linear spectral filters (NLSF), which aims to enhance GNNs with nonlinear functions. Since general GNNs with nonlinear functions do not commute with unitary operators, this paper defines Graph Functional Shifts, which is a set of unitary matrices commuting with a n... | Rebuttal 1:
Rebuttal: Thank you for your comment.
Please note that in Section 4.1, we discussed the complexity of our method. We elaborated on the efficiency of the Lanczos algorithm, which is well-known for its computational efficiency. For estimating the leading $J$ eigenvectors, the Lanczos algorithm takes $O(JE)$... | Summary: The authors introduce spectral GNNs that are equivariant to functional symmetries. Specifically, they introduce node-level, graph-level and pooling non-linear spectral filters and show that these are able to outperform standard convolutional GNNs on (semi-supervised) node classification and graph classificatio... | Rebuttal 1:
Rebuttal: We thank the reviewer for the positive and encouraging comments.
>**Enhancing Accessibility of Theoretical Contributions:**
Thank you for your comment.
We recognize the importance of making our manuscript accessible to a broader audience. To improve accessibility, we will provide more det... | Summary: The authors propose nonlinear spectral filters (NLSFs) that achieve full equivariance to graph functional shifts, demonstrating that these filters have universal approximation properties. These NLSFs are designed based on transferable spectral domain, potentially improving GNN performance in node and graph cla... | Rebuttal 1:
Rebuttal: We thank the reviewer for the insightful comments and questions.
>**Complexity, Efficiency, and Runtime Analysis:**
In Section 4.1, we discussed our method's complexity and efficiency, highlighting the efficiency of the Lanczos algorithm. This algorithm estimates the leading $J$ eigenvectors in ... | Rebuttal 1:
Rebuttal: # General Response to All the Reviewers
We thank the reviewers for their valuable input and criticism. We highlight the main revisions to the paper below.
>**Enhancing NLSFs with Orthogonal Complements:**
Our original method projected the signal's information to the leading (low) frequencies. T... | NeurIPS_2024_submissions_huggingface | 2,024 | Summary: The paper tackles the task of Network design for graph neural networks. The suggested approach is based on spectral properties of graphs. So far in the literature spectral methods were limited in assuming that the graph domain is fixed. To address this, a relaxed version of symmetry is proposed based on band-l... | Rebuttal 1:
Rebuttal: We thank the reviewer for the positive feedback.
>**Enhancing Writing Quality for Improved Readability and Clarity:**
Thank you for your valuable feedback.
In response to the reviewer’s suggestions, we will make the following revisions:
- Introduction Section: We will reorder the paragraph... | null | null | null | null | null | null |
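The NLSF rebuttals above lean on the Lanczos algorithm for estimating the leading $J$ eigenvectors of a sparse graph operator in roughly $O(JE)$ work. A minimal sketch follows; the 4-node cycle graph and all names are illustrative assumptions, and reorthogonalization is omitted:

```python
import numpy as np

# Minimal Lanczos iteration for the leading eigenpairs of a symmetric
# operator given only a matvec. No reorthogonalization: a sketch, not a
# production eigensolver.

def lanczos(matvec, v0, m):
    n = v0.size
    Q = np.zeros((n, m))
    alpha = np.zeros(m)
    beta = np.zeros(max(m - 1, 0))
    q, q_prev, b_prev = v0 / np.linalg.norm(v0), np.zeros(n), 0.0
    for j in range(m):
        Q[:, j] = q
        w = matvec(q) - b_prev * q_prev
        alpha[j] = q @ w
        w = w - alpha[j] * q
        if j < m - 1:
            b_prev = beta[j] = np.linalg.norm(w)
            q_prev, q = q, w / b_prev
    T = np.diag(alpha) + np.diag(beta, 1) + np.diag(beta, -1)
    theta, U = np.linalg.eigh(T)   # Ritz values approximate eigenvalues
    return theta, Q @ U            # Ritz vectors approximate eigenvectors

# Combinatorial Laplacian of a made-up 4-node cycle graph (eigenvalues 0,2,2,4).
A = np.zeros((4, 4))
for i, j in [(0, 1), (1, 2), (2, 3), (3, 0)]:
    A[i, j] = A[j, i] = 1.0
L = np.diag(A.sum(axis=1)) - A
theta, V = lanczos(lambda x: L @ x, np.array([1.0, 2.0, 3.0, 4.0]), m=3)
```

Each iteration costs one sparse matvec, so `J` iterations scale like the $O(JE)$ quoted in the rebuttal when the operator has $E$ nonzeros.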
ChronoEpilogi: Scalable Time Series Selection with Multiple Solutions | Accept (poster) | Summary: The authors consider the problem of feature selection when forecasting multivariate time series. They propose a novel algorithm called ChronoEpilogi based on identifying a Markov boundary of the time series variables. They experimentally and theoretically validate the findings.
Strengths: 1. A significant pro... | Rebuttal 1:
Rebuttal: We thank you very much for all the constructive comments, remarks, and questions that helped us to improve the quality of our manuscript. We hope this response addresses your concerns effectively!
**Q1**
We argue that finding one MB primarily serves to build a surrogate model with improved perfo... | Summary: The authors propose a scalable algorithm called ChronoEpilogi that aims to select multiple subsets (Markov Boundaries) of time series (TS) features in order to better understand the underlying data generation process and to provide better explanations of downstream forecasting tasks. Through extensive experime... | Rebuttal 1:
Rebuttal: We thank you very much for all the constructive comments, remarks, and questions that helped us to improve the quality of our manuscript. Hereafter, we provide answers to all your questions and comments pointed out in the weaknesses and limitations sections of your review.
**Question 1)**
In a p... | Summary: The authors presents **ChronoEpilogi**, an algorithm for multiple feature selection in multivariate time-series (TS) forecasting. This approach aims to identify all minimal-size subsets of TS variables (Markov Boundaries) that optimally predict a given target variable's future. The key contributions are:
1. *... | Rebuttal 1:
Rebuttal: We thank you very much for all the constructive comments, remarks, and questions that helped us to improve the quality of our manuscript. Hereafter, we provide answers to all your questions and comments pointed out in the weaknesses and limitations sections of your review.
**Question 1) It woul... | Summary: This paper handles the problem of selecting all the minimal-size subsets of multivariate time series variables such that the past leads to an optimal predictive model for the forecast of a given target variable, which is essentially a time series feature selection problem. Past algorithms have worked to select... | Rebuttal 1:
Rebuttal: We thank you very much for all the constructive comments, remarks, and questions that helped us to improve the quality of our manuscript. Hereafter, we provide answers to all your questions and comments pointed out in the weaknesses and limitations sections of your review.
**Question 1) Line 17... | Rebuttal 1:
Rebuttal: We thank all the reviewers for their constructive comments, remarks, and questions that helped us to improve the quality of our manuscript. Hereafter, we provide a summary of the main modifications and additions we will bring to the final version of the paper.
**1) Extension of the empirical dat... | NeurIPS_2024_submissions_huggingface | 2,024 | Summary: This paper considers the problem of finding all minimal subsets of variables for optimal prediction of time series data, coining the term "Markov Boundaries" for those minimal subsets constituting Markov Blankets for the target time series variables in question.
The paper then proposes novel algorithms for th... | Rebuttal 1:
Rebuttal: We thank you very much for all the constructive comments, remarks, and questions that helped us to improve the quality of our manuscript. Hereafter, we provide answers to all your questions and comments pointed out in the weaknesses and limitations sections of your review.
**Question 1) One won... | null | null | null | null | null | null |
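The kind of forward-backward subset search that Markov-boundary selection algorithms such as ChronoEpilogi build on can be sketched generically. The `score` oracle and all names are assumptions; the paper's actual conditional-independence tests and multiple-solution bookkeeping are not reproduced here:

```python
# Generic greedy forward-backward subset selection: grow the selected set with
# the feature whose addition most improves a predictive score, then drop
# features that have become redundant. `score` (higher is better) is a
# user-supplied oracle over feature subsets.

def forward_backward_select(score, candidates):
    selected = []
    improved = True
    while improved:
        improved = False
        # Forward phase: add the single best-improving candidate, if any.
        base = score(selected)
        best, best_gain = None, 0.0
        for f in candidates:
            if f in selected:
                continue
            gain = score(selected + [f]) - base
            if gain > best_gain:
                best, best_gain = f, gain
        if best is not None:
            selected.append(best)
            improved = True
        # Backward phase: drop features whose removal does not hurt the score.
        for f in list(selected):
            rest = [g for g in selected if g != f]
            if score(rest) >= score(selected):
                selected = rest
    return selected
```

The backward phase is what makes the returned set minimal in this sketch; ChronoEpilogi's contribution, per the summaries, is additionally enumerating *all* such minimal subsets rather than a single one.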
RobIR: Robust Inverse Rendering for High-Illumination Scenes | Accept (poster) | Summary: This paper addresses inverse rendering in high-illumination scenes with strong shadows where past methods bake shadows and highlights into estimation results. This paper proposes to use ACES tone mapping and makes it scene-dependent for inverse rendering in high-illumination scenes. This paper also proposes to... | Rebuttal 1:
Rebuttal: We are glad that the reviewer recognizes the novel ideas of RobIR and that the proposed method successfully estimates BRDF and illumination. Since many of the questions have already been answered in the common response, our additional response to the reviewer’s comments is below:
**Q1: ... | Summary: This paper proposes a method for the inverse rendering of high-illumination and highly reflective scenes. There are two training phases: in the first, it trains NeuS to obtain geometry and computes visibility via octrees; in the second, it decomposes lighting into SGs and materials via MLPs.
Strengths:... | Rebuttal 1:
Rebuttal: We thank the reviewer for the positive review as well as the suggestions for improvement. Our response to the reviewer’s comments is below:
**Q1: How does the relighting work?**
We imported the reconstructed albedo and roughness maps into Blender and performed relighting using Blender's scriptin... | Summary: This paper introduces RobIR, an inverse rendering approach that can better tackle “high-illumination” scenes. RobIR first leverages the existing neural field model (NeuS) to represent 3D geometry information including normal, visibility, and indirect illumination. It then utilizes these geometry priors to deco... | Rebuttal 1:
Rebuttal: We thank the reviewer for the positive review as well as the suggestions for improvement. We will revise the typo errors in the paper based on these insightful suggestions. Our response to the reviewer’s comments is below:
**Q1: Comparison with inverse rendering methods with differentiable path t... | Summary: This paper introduces RobIR, an inverse rendering approach designed to handle strong or directional illumination scenes with strong shadows and specular reflections.
The proposed method aims to decouple environment lighting and object materials, with the goal of producing high-quality albedo without baked sha... | Rebuttal 1:
Rebuttal: We are glad that the reviewer recognizes that the results of RobIR are thorough. Our response to the reviewer’s comments is below:
**Q1: Visualization of the tone-mapping curve.**
Great suggestion. We highly appreciate the suggestion, which can improve the readability of the article... | Rebuttal 1:
Rebuttal: We thank all the reviewers for their valuable comments. We are glad and appreciate that the reviewers recognize that our proposed regularized visibility estimation and ACES tone mapping are novel, and our experiments are thorough and impressive. We will further polish our paper and release our cod... | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
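For context on the ACES tone mapping discussed in the RobIR record above, a widely used analytic fit (Narkowicz's approximation) is shown below. This fixed curve is only background: RobIR's contribution, per the reviews, is making the curve scene-dependent, which this sketch does not capture.

```python
import numpy as np

# Narkowicz's analytic fit of the ACES filmic tone-mapping curve: maps linear
# HDR radiance to [0, 1], compressing highlights while keeping midtones
# roughly linear. Included for context only; not RobIR's learned variant.
def aces_tonemap(x):
    a, b, c, d, e = 2.51, 0.03, 2.43, 0.59, 0.14
    return np.clip((x * (a * x + b)) / (x * (c * x + d) + e), 0.0, 1.0)
```

Inverting such a fixed curve during inverse rendering is what bakes shadows and highlights into albedo in high-illumination scenes, which motivates the scene-dependent version described in the record.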
NeuroBOLT: Resting-state EEG-to-fMRI Synthesis with Multi-dimensional Feature Mapping | Accept (poster) | Summary: This paper introduces NeuroBOLT, a transformer-based model. NeuroBOLT utilizes multi-dimensional representation learning across temporal, spatial, and spectral domains to translate raw EEG data into comprehensive fMRI activity signals across the entire brain. Experimental results showcase NeuroBOLT's ability t... | Rebuttal 1:
Rebuttal: We sincerely appreciate your valuable suggestions, which truly help us enhance the quality and readability of our paper from multiple aspects. Responses to your concerns are presented as follows:
> **Consistency and presentation details (W3-5, W8, W12):**
* **W3**: We restructured the model ove... | Summary: The manuscript proposes an EEG-to-fMRI synthesis model. The framework implements a transformer architecture and uses a multi-channel feature combination expanded across the temporal axis. To evaluate the proposed model, EEG and fMRI data from 22 participants were recorded while they were in the resting state w... | Rebuttal 1:
Rebuttal: We thank the reviewer for the thoughtful review and encouraging feedback on our manuscript! We are delighted that the reviewer found our work novel and of interest to the community. We are very encouraged by the reviewers’ evaluation on this work opening up opportunities for multimodal neuroimagin... | Summary: In this work, the authors present a deep learning architecture for inferring functional magnetic resonance imaging (fMRI) signal from electroencephalography (EEG) data. The proposed model, named NeuroBOLT, utilizes transformer backbones to provide spatial, temporal, and frequency-based features from the EEG ar... | Rebuttal 1:
Rebuttal: We truly appreciate your excellent suggestions! We address specific concerns below. Please see the PDF in the general response for additional details and figures.
> **Evaluation on task fMRI**
Our motivation for focusing on resting-state fMRI stems from the rich information that can be gained fr... | null | null | Rebuttal 1:
Rebuttal: We thank all the reviewers for their time and their insightful, constructive suggestions. We are excited that all reviewers find the topic of our paper important and fascinating. Reviewers found our study to be novel and well-motivated with promising results, and also noted that our manuscript is ... | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
Towards Diverse Device Heterogeneous Federated Learning via Task Arithmetic Knowledge Integration | Accept (poster) | Summary: The authors aim to develop a knowledge distillation method that addresses the challenges posed by heterogeneous device prototypes in federated learning. By capturing the knowledge transfer among device prototypes, the proposed TAKFL tries to preserve each device's unique contribution and prevent knowledge dilu... | Rebuttal 1:
Rebuttal: Thank you for your invaluable review and pointing out that our proposed method shows **some primary theoretical analysis of the learning efficiency**.
### **Response to 1**
To clarify, we use the concept of task arithmetic to address limitations of previous approaches and enhance knowledge transf... | Summary: The paper focus on a problem that traditional federated learning methods fail to effectively handle scenarios where devices have widely varying capabilities. It improve existing Knowledge Distillation (KD) methods that are inadequate in these heterogeneous environments. Experimental results show the validity o... | Rebuttal 1:
Rebuttal: Thank you for your invaluable review and positive remarks on the strengths of our paper in **improving existing KD methods that are inadequate in diverse device heterogeneous environments** and **comprehensive experiments**.
### **Response to 1**
Studies [1,2] primarily address parameter interfer... | Summary: The paper presents a novel framework called TAKFL, which addresses the challenge of transferring knowledge in federated learning across heterogeneous devices, ranging from small IoT devices to large workstations. TAKFL uniquely handles the knowledge distillation by treating the transfer from each device protot... | Rebuttal 1:
Rebuttal: Thank you for your invaluable review and pointing out that our work makes a **substantial contribution to the field**. We also appreciate the reviewer for the positive remarks on the **practicality, innovation, and clarity of our TAKFL framework**, as well as the **strong experimental and theoreti... | Summary: This paper introduced a KD-based framework (TAKFL) to address the dilution and diversity issues in heterogeneous FL knowledge transfer learning. The TAKFL distills knowledge from prototypes of varying sizes and incorporates a self-regularization to mitigate noise simultaneously, then integrates these separatel... | Rebuttal 1:
Rebuttal: Thank you for your invaluable review and pointing out the **novelty of our framework and theoretical model** and **effectiveness of our framework on both CV and NLP tasks**.
### **Response to 1**
We appreciate the reviewer's suggestion and would like to highlight a few differences about FedProto ... | Rebuttal 1:
Rebuttal: We sincerely appreciate all the reviewers' efforts in reviewing and commenting on our work. We are particularly grateful for the positive feedback highlighting the following aspects:
* **The novel theoretical model and framework illustrating the efficacy of knowledge distillation in heterogeneous... | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
HEPrune: Fast Private Training of Deep Neural Networks With Encrypted Data Pruning | Accept (poster) | Summary: This paper proposes a data pruning algorithm for the training of Homomorphic Encryption (HE)-based neural networks. The authors introduce an HE-friendly importance score and client-aided masking to prune samples in the dataset. The authors further propose ciphertext-wise pruning to merge ciphertexts with empty... | Rebuttal 1:
Rebuttal: We thank Reviewer ADwT for his/her thorough reading of the manuscript and constructive comments.
Q1 Clarity and consistency
We thank the reviewer for the thorough reading. We will fix the typos in the future version.
Q2 The overhead of the proposed methods
The results in Section 4.2 include th... | Summary: This paper focuses on the scenario where the client encrypts the model and dataset with homomorphic encryption and outsources them to the server for training. It accelerates the training process through dynamic data pruning. This paper makes the following three contributions:
First, this paper is the first to... | Rebuttal 1:
Rebuttal: We thank Reviewer ADwT for his/her thorough reading of the manuscript and constructive comments.
Q1 The effectiveness of HEFS
The proposed HE-friendly score relies on the observation that the importance of a data sample can be quantified by its gradients [1]. We denote the input vector as $x\in\... | Summary: 1. The paper introduces a Homomorphic Encryption (HE)-based confidential training framework that enhances training efficiency through encrypted data pruning.
2. The paper proposes HE-Friendly Score (HEFS), an enhancement over the existing EL2N score, to efficiently assess the importance of encrypted data sampl... | Rebuttal 1:
Rebuttal: Q1 Clarification on the threat model
We would like to clarify some misunderstandings about the privacy threats this paper focuses on. The proposed method protects both the data privacy and the model privacy. In our threat model, both training data and the model weights belong to the client. The c... | Summary: This paper presents a method for pruning data in a utility-preserving way under homomorphic encryption, evaluating the method to demonstrate that the savings from training on pruned data outweighs the costs of encrypted data pruning computations. The methods for determining how relevant data items are to impro... | Rebuttal 1:
Rebuttal: We thank Reviewer 98ig for his/her careful reading of the manuscript and constructive comments.
Q1 Deciding the pruning ratio.
In practice, the pruning ratio should be determined by the client during the client aided masking process. As shown in Figure 1 (in rebuttal pdf), a moderate pruning rat... | Rebuttal 1:
Rebuttal: We thank all reviewers for their constructive and insightful feedback.
We are glad that all reviewers unanimously agree that our encrypted data pruning methods are of great significance for improving private training. We appreciate the reviewers recognizing the novelty of our work as the first fr... | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
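The HEPrune rebuttal above rests on the observation that a sample's importance can be quantified from its gradients, with the proposed HE-Friendly Score building on the plaintext EL2N score. As a point of reference, here is a minimal plaintext sketch of EL2N-style data pruning; all function names are hypothetical, and the encrypted, client-aided variant in the paper is considerably more involved:

```python
import numpy as np

def el2n_scores(probs: np.ndarray, labels: np.ndarray) -> np.ndarray:
    """EL2N score: L2 norm of the error vector (softmax output - one-hot label).
    Larger scores mark harder, more informative samples."""
    onehot = np.eye(probs.shape[1])[labels]
    return np.linalg.norm(probs - onehot, axis=1)

def keep_indices(scores: np.ndarray, prune_ratio: float) -> np.ndarray:
    """Drop the lowest-scoring (easiest) fraction of the dataset."""
    n_keep = int(round(len(scores) * (1.0 - prune_ratio)))
    return np.argsort(scores)[::-1][:n_keep]
```

In the plaintext setting this is a two-line computation; the paper's contribution is making an analogous score cheap to evaluate under homomorphic encryption, where sorting and comparisons are expensive.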
Learning diffusion at lightspeed | Accept (oral) | Summary: This paper considers learning diffusion dynamics from observational data of populations over time, identified as learning the energy functional in Equation 3. Past research has confronted this inverse problem via complex bilevel optimization, limited to potential energies. This paper proposes an alternative ... | Rebuttal 1:
Rebuttal: ### Weakness 1
We appreciate the suggestions to improve the exposition and agree with the reviewer's comment. Specifically, we now removed "phenomenal", "runs at lightspeed" and "few weeks old advancements" from the abstract and introduction, rephrasing so to be more factual in the exposition, foc... | Summary: The authors study diffusion processes from the perspective of Wasserstein gradient flows. Based on the recent fixed-point characterisation for Wasserstein proximal operator methods, they introduce Jordan-Kinderlehrer-Otto (JKO) type methods for learning potential and interaction energies that govern the diffus... | Rebuttal 1:
Rebuttal: ### Weakness 1
We thank the reviewer for the suggestion. We prepared a revision of the paper in which we added a reference to the appendix when referencing to content related to the appendix.
### Weakness 2
Good catch, thank you! We prepared a revision of the paper in which we added the definiti... | Summary: This paper introduces JKOnet*, a new method for learning diffusion processes from data. It uses first-order optimality conditions of the JKO scheme instead of complex bilevel optimization. JKOnet* can recover potential, interaction, and internal energy components of diffusion processes. The authors provide the... | Rebuttal 1:
Rebuttal: ### Weakness 1
We thank the reviewer for suggesting us one way to strengthen the presentation of our contributions.
We deployed our method to learn the diffusion dynamics of embryoid body single-cell RNA sequencing (scRNA-seq) data [1], a popular benchmark in the literature, and compared our re... | Summary: This paper studies the problem of learning a diffusion process from samples. It proposes a new scheme based on learning the "causes mismatch" of the process, rather than the "effects mismatch" as in previous works. The new method is significantly more efficient than the schemes from prior works, and works well... | Rebuttal 1:
Rebuttal: We believe the problem of score-matching in diffusion models to be fundamentally different from the one in our paper.
In score-matching, one tries to "reverse" the time of a known diffusion process, e.g., to recover the uncorrupted state of a corrupted image.
In our setting, instead, we use obse... | Rebuttal 1:
Rebuttal: We thank the reviewer for their time and constructive feedback. Our main changes can be summarized as follows:
First, we applied our methodology to real data in single-cell diffusion dynamics and compared our results with nine existing methods, as requested by reviewer vNRb. In short, our model,... | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
Random Function Descent | Accept (poster) | Summary: The authors derive a novel gradient descent step schedule from a Bayesian point of view, establishing a connection between Bayesian optimization and classical optimization. The theory gives support to some commonly chosen step schedules and is validated on MNIST dataset.
Strengths: 1. The paper is well writte... | Rebuttal 1:
Rebuttal: Thank you for your review, we are happy you found the paper a pleasure to read!
### Question (Complexity of RFD)
> The authors mention that classic BO is limited to relatively small dimensions.
Does RFD improve upon that?
While classical Bayesian optimization has computational complexity $O(n^3... | Summary: The current paper studies random function descent, draws connection between RFD and SGD, and derives an adaptive step size scheduler. More specifically, the authors study minimizing a stochastic first Taylor approximation of random functions, which has similar form of gradient descent when the random function... | Rebuttal 1:
Rebuttal: Thank you so much for taking the time to review our paper so thoroughly, even reading the appendix. We are glad you found it insightful.
### Related work/Literature review
In previous drafts the paragraphs on related work in the introduction were more
extensive but were cut becau... | Summary: Many machine learning model have parameters that are optimized by some form of gradient descent. Given a parameters $\omega$ in a space $\Omega$ and a loss function $\textbf{J}: \Omega \to \mathbb{R}$, typical gradient descent proceeds by picking a starting point $\omega_0$ and iteratively taking steps in the ... | Rebuttal 1:
Rebuttal: Thank you for taking the time to review our paper. We understand that it is
a difficult paper to read due to its unconventional approach which takes some
time to get used to. We took your thoughts into account and will improve the paper (see general rebuttal); the paper is essentially a theor...
The paper "Random Function Descent" explores the limitations of classical worst-case optimization theory in explaining the success of optimization in machine learning and selecting appropriate step sizes. It establishes a connection between Bayesian Optimization and classical optimization through ... | Rebuttal 1:
Rebuttal: Thank you for taking the time to review our work, we are glad you find it
innovative!
## Question (final message and vision)
Most optimisers in ML have their root in convex function optimisation, a property that is far from being satisfied in reality. Modifications and tricks work well but not m... | Rebuttal 1:
Rebuttal: We warmly thank all our reviewers for their interest, time, insight and constructive criticism.
Writing the paper was a challenge, as it requires familiarity with different mathematical concepts but should remain accessible to practitioners at the same
time. Our approach was to simplify the main text... | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
On the Use of Anchoring for Training Vision Models | Accept (spotlight) | Summary: This paper identifies a major problem with anchored training, that the performance of anchored training does not increase with increasing reference set size, and proposes a simple regularization approach to overcome this problem. This approach is evaluated on OOD generalization, calibration and anomaly rejecti... | Rebuttal 1:
Rebuttal: We thank you for your positive feedback. We hope our responses address the questions you have raised.
**1. Accuracy Remains Constant Regardless of Reference Set Size**
We would like to clarify that this is precisely the problem with the original anchored training protocol, which we solve in this... | Summary: In this paper, the authors propose a new strategy to train anchoring-based models, significantly improving performance, training efficiency, and model generalization compared to previous approaches. The key to the method is the added masking strategy that allows the model to better profit from anchoring-based ... | Rebuttal 1:
Rebuttal: We thank you for your positive feedback. We hope our responses address your questions.
**1. Generic Applicability of Anchoring**
Thank you for this question. We concur with you that anchoring is a protocol for training deep neural networks for use with any domain for any application (e.g., text... | Summary: This paper presents a thorough discussion on the use of anchoring for training vision models. In particular, the paper tackles 1) the problem of reference diversity when training with anchoring to explain how superior generalization can be achieved 2) addresses the problem of spurious correlations learnt betwe... | Rebuttal 1:
Rebuttal: We thank you for your positive comments and feedback. We hope our response addresses your concern.
**Domain Generalization Benchmarks**
Thank you for this question. We would like to highlight that we performed experiments on DomainNet which is one of the benchmarks from DomainBed [1] (Line 293 o... | Summary: The authors analyze the effect of anchored training through a series of small experiments and find that, contrary to claims in prior works, increasing the size of the reference set is not beneficial and that this shortcoming cannot be mitigated through existing inference strategies. The authors provide a simpl... | Rebuttal 1:
Rebuttal: We thank you for your positive feedback. Here are our responses to your questions. We plan to incorporate some of these clarifying comments to the manuscript as well.
**1. Choice of $\alpha$**
We want to clarify that at low reference set sizes, there is a high likelihood of exposing the model t... | null | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
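The anchoring protocol discussed in this row reparameterizes each input as a (reference, residual) pair instead of feeding the raw sample. A minimal sketch of that reparameterization follows; the paper's masking-based regularizer is not shown, and all names are illustrative:

```python
import numpy as np

def anchored_input(x: np.ndarray, reference: np.ndarray) -> np.ndarray:
    """Anchoring: represent a sample as [reference, x - reference], concatenated
    along the feature axis, so the network sees a relative rather than absolute input."""
    return np.concatenate([reference, x - reference], axis=-1)

def recover(anchored: np.ndarray) -> np.ndarray:
    """The original sample is exactly recoverable as reference + residual."""
    d = anchored.shape[-1] // 2
    return anchored[..., :d] + anchored[..., d:]
```

Because any sample from the reference set can serve as the anchor, each input admits many equivalent (reference, residual) encodings, which is the source of the diversity the paper's regularizer tries to exploit.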
ManiPose: Manifold-Constrained Multi-Hypothesis 3D Human Pose Estimation | Accept (poster) | Summary: This paper presents a method to estimate 3D human keypoints from a sequence of monocular 2D keypoints observations. It builds upon an existing sequence-to-sequence architecture (MixSTE), with a different output parameterization exploiting a kinematic skeletton prior, and different training losses. Lengths of t... | Rebuttal 1:
Rebuttal: We thank the reviewer for their comments. We answer their remarks below, following the same order.
1. **Citation of SMPL-based methods:**
- _We will properly cite these important works._ We agree with the reviewer that SMPL-based methods share the same constant-bone-length assumption that we pre... | Summary: This paper proposes a MCL-based framework for multi-hypothesis 3D human pose estimation. This framework predicts skeletal parameters so that the predicted 3D poses in a sequence are constrained to one smooth manifold. To prove the superiority of such a framework, the paper presents detailed theoretical analysi... | Rebuttal 1:
Rebuttal: We thank the reviewer for their comments. We answer their remarks below, following the same order.
1. **New theoretical analysis on the advantage of multi-hypothesis methods over single-hypothesis:**
We agree with the reviewer and provide the proposed theoretical result hereafter.
- Let $\mathc... | Summary: This paper presents a new method to estimate 3D human pose from 2D observations (lifting). To ensure the body symmetry and temporal consistency, the authors disentangle human skeleton to two parts: temporally consistency bone scales and temporally variable bone rotations. The authors use fancy formulas to prov... | Rebuttal 1:
Rebuttal: We thank the reviewer for their comments and provide hereafter our response to their concern, in the same order.
1. **How to constrain the rotation space during training?**
- _Our method can be adapted to incorporate angle constraints._ This is possible for example if one chooses to use rotation... | Summary: This paper propose ManiPose, a manifold-constrained multi-hypothesis model for 3D human pose lifting. The authors provide empirical and experimental evidence to show that joint position regression leads to inconsistent skeleton lengths. And they propose to predict globally consistent pose scale and individual ... | Rebuttal 1:
Rebuttal: We thank the reviewer for their comments and answer their concerns here in the same order.
1. **Diversity and fixed number of hypotheses concern:**
- _SOTA oracle MPJPE is evidence of good diversity:_ It is true that ManiPose produces a fixed number of poses per forward pass. While methods based ... | Rebuttal 1:
Rebuttal: We thank the reviewers for their work. We provide answers to all their concerns individually, referring sometimes to the pdf attached to this general answer.
We would like to highlight that our rebuttal includes:
- a new theoretical result, together with its proof sketch,
- a new ablation study r... | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
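The ManiPose discussion above hinges on predicting bone scales and rotations rather than raw joint coordinates, so that bone lengths stay constant across frames by construction. A toy 2D kinematic-chain sketch illustrates why this parameterization guarantees consistent lengths; it is illustrative only, as the paper works with full 3D skeletons:

```python
import numpy as np

def forward_kinematics_2d(bone_lengths, joint_angles):
    """Map fixed bone lengths + per-joint angles to 2D joint positions.
    Every pose produced by this map has exactly the prescribed bone lengths,
    unlike free-form joint-coordinate regression."""
    joints, theta = [np.zeros(2)], 0.0
    for length, angle in zip(bone_lengths, joint_angles):
        theta += angle  # accumulate rotation along the chain
        joints.append(joints[-1] + length * np.array([np.cos(theta), np.sin(theta)]))
    return np.stack(joints)
```

Varying the angles over time changes the pose but never the inter-joint distances, which is the temporal-consistency property the reviewers highlight.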
Measuring Mutual Policy Divergence for Multi-Agent Sequential Exploration | Accept (poster) | Summary: The authors study MARL in heterogeneous settings, where agents are not allowed to share their parameters, and make use of the sequential updating scheme under the CTDE schema. They propose a method which exploits the preceding information to improve exploration and heterogeneity sequentially. This method is eq... | Rebuttal 1:
Rebuttal: Thanks for your positive comments. We address your concerns as follows:
## (1) The improvement over baselines
We have re-evaluated our method against the baselines using an aggregate statistical test. We have quantified the interquartile mean (IQM) across tasks of our method and baselines. Please... | Summary: The paper proposes a novel training objective where it encourages the policies to diverge from each other and from the previous policy under heterogeneous multi-agent tasks based on sequential recently proposed sequential policy update. It utilizes CS divergence for calculation of "distance" between policies f... | Rebuttal 1:
Rebuttal: Thanks for your positive comments. We address your concerns as follows:
## (1) The aggregate evaluation metrics
Thanks for your constructive suggestion. We have re-evaluated our method by using this powerful toolbox ***rliable***. Please refer to the author rebuttal and the attached pdf for more i... | Summary: This paper is situated in the problem setting of heterogeneous cooperative agents, under the sequential update framework. The paper introduces the novel MADPO algorithm, in which agents maximize the Cauchy Schwarz divergence between agents and between episodes of data gathered by the same agent, to improve exp... | Rebuttal 1:
Rebuttal: First, we would like to express our gratitude for your careful review of our work, as well as for your positive comments and insightful suggestions. We address your concerns as follows:
## (1) Motivation of our work
The sequential updating scheme offers a novel solution to heterogeneous MARL, en... | Summary: This paper introduces a novel multi-agent reinforcement learning (MARL) method called Multi-Agent Divergence Policy Optimization (MADPO), which enhances exploration and heterogeneity through a mutual policy divergence maximization framework. MADPO leverages a sequential updating scheme and quantifies discrepan... | Rebuttal 1:
Rebuttal: Thanks for your positive suggestions. We address your concerns as follows:
## (1) Comparison with HASAC
Liu et al. proposed Heterogeneous Agent SAC (HASAC) by extending maximum entropy RL into heterogeneous MARL [1]. However, we would like to clarify that our MADPO is an on-policy MARL method, whi... | Rebuttal 1:
Rebuttal: We thank all reviewers for their encouraging comments and constructive feedback. We are glad to note that the reviewers recognized our work as innovative, appealing and easy-to-follow *[sHA3, jcAK, 3yBc, gpQ4]*, theoretically nice and interesting to RL community *[sHA3, jcAK, 3yBc, gpQ4]*, well-or... | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
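MADPO, per the reviews above, measures mutual policy divergence with the Cauchy–Schwarz (CS) divergence, which has a simple closed form for discrete distributions. A minimal sketch of that quantity follows; the paper estimates it over parameterized policies, and this discrete version only illustrates the definition:

```python
import numpy as np

def cs_divergence(p: np.ndarray, q: np.ndarray, eps: float = 1e-12) -> float:
    """Cauchy-Schwarz divergence between two discrete distributions:
    D_CS(p, q) = -log( <p, q>^2 / (<p, p> <q, q>) ).
    Nonnegative by the Cauchy-Schwarz inequality; zero iff p == q."""
    num = np.dot(p, q) ** 2
    den = np.dot(p, p) * np.dot(q, q)
    return float(-np.log((num + eps) / (den + eps)))
```

Maximizing this divergence between agents (and between an agent's current and previous policies) is what drives the exploration and heterogeneity objectives described in the summaries.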
RCDN: Towards Robust Camera-Insensitivity Collaborative Perception via Dynamic Feature-based 3D Neural Modeling | Accept (poster) | Summary: In this paper, the authors pose an essential problem: how to overcome the issues caused by failed camera perspectives, while stabilizing high collaborative performance with low calibration cost? The authors present a robust camera-insensitivity collaborative perception with a novel dynamic feature-b... | Rebuttal 1:
Rebuttal: Thank you for your appreciation of our work. We will take your suggestions into account when preparing the final version. Below please find responses to specific questions.
>"W.1: Can the authors showcase the motivation of using Nerf for the static and dynamic fields, are there any dominant advan... | Summary: The paper introduces a new problem: how to overcome the issues caused by the failed camera perspectives, while stabilizing high collaborative performance with low calibration cost? Therefore, RCDN, a Robust Camera-insensitivity collaborative perception with a novel Dynamic feature-based 3D Neural modeling mech... | Rebuttal 1:
Rebuttal: Thank you for your insightful comments. We have carefully addressed all the questions raised. Please find our responses below.
>"W.1: What is difference between sfw, sbw in equation (7)?"
**A.1** Apologies for any confusion regarding the terms s\_fw and s\_bw. The term s\_fw stands for forward s... | Summary: The paper presents RCDN, a method to aggregate multi-sensor perception signals in dynamic environment.
The key idea is to improve the aggregated multi-agent feature with the multi-view rendering loss.
At its core, RCDN gathers input streams at varying timesteps of multiple agents. The gathered images are fuse... | Rebuttal 1:
Rebuttal: Thank you for your detailed review and valuable comments. Please note our top-level comment with additional experimental and theoretical results. Below we address specific questions.
>"W.1: Have authors evaluated the method on different down tasks other than segmentation?"
**W.1** Our proposed R... | Summary: The paper proposed Bird Eye View (BEV) semantic segmentation pipeline from collaborative perception, robust to motion blur, sensor noise, occlusion and even failure. The proposed a pipeline that adapts neural rendering techniques to overcome the noise/malfunction in camera capture and occlusion. With the propo... | Rebuttal 1:
Rebuttal: Thank you for your detailed review and thoughtful comments. Please note our top-level comment with additional experimental and theoretical results. Below we address specific questions.
>"W.1: Evaluation is only performed with OPV2V-N dataset which may result in overfitting. More evaluation with d... | Rebuttal 1:
Rebuttal: **Please see the attached PDF for a one-page PDF with a summary of added experimental results.**
We thank all reviewers for their constructive comments on our work. We found one comment that was common amongst more than one reviewer, hence we highlight it here.
>"Have you tested RCDN on any data... | NeurIPS_2024_submissions_huggingface | 2,024 | Summary: The paper introduces RCDN, a novel method for robust camera-insensitivity collaborative perception. This method aims to overcome challenges associated with noisy, obscured, or failed camera perspectives by using dynamic feature-based 3D neural modeling. RCDN constructs collaborative neural rendering field repr... | Rebuttal 1:
Rebuttal: We appreciate your time in reviewing our work and your feedback on the paper's value and clarity! Please note our top-level comment with additional experimental and theoretical results. Below we address specific questions.
>"Q.1: Have you tested RCDN on any datasets other than OPV2V-N? How does i... | null | null | null | null | null | null |
On-Road Object Importance Estimation: A New Dataset and A Model with Multi-Fold Top-Down Guidance | Accept (poster) | Summary: 1. This paper contributes a new large-scale dataset named Traffic Object Importance (TOI) to address the problem of on-road object importance estimation, which utilizes video sequences captured from the driver’s perspective as the input.
2. The author also proposes a model that integrates multi-fold top-d... | Rebuttal 1:
Rebuttal: # To Reviewer mvxo
Thank you very much for your positive comments and we appreciate your thoughtful feedback and suggestions.
> **Q1:**
In page 3 the author mentions that the traffic rule is crucial for object importance and focus on the traffic line rules, but the influence of traffic rules is... | Summary: This paper collects a new large-scale dataset and proposes a novel method that integrates multi-fold top-down guidance with the bottom feature to address the problem of on-road object importance estimation. Specifically, the dataset is almost three times larger than the current publicly dataset for on-road obj... | Rebuttal 1:
Rebuttal: # To Reviewer Yrjv
Thank you very much for your positive comments and we appreciate your thoughtful feedback and suggestions.
> **Q1:**
The paper does not provide a detailed discussion on the computational efficiency of the proposed method, which is crucial for real driving scenarios. Moreover,... | Summary: This paper presents a novel dataset for on-road object importance estimation. More data about which objects are important for self-driving is included and is promised to be released. Moreover, a novel method that integrates driven intention, semantic context, and traffic rule is devised to tackle the related p... | Rebuttal 1:
Rebuttal: # To Reviewer 8N7U
Thank you very much for your positive comments and we appreciate your thoughtful feedback and suggestions.
> **Q1:**
Regarding the task, my major concern is the definition of importance. It is shown that surrounding objects that follow the traffic rules are not considered as ... | Summary: This work addresses the issue of estimating the importance of on-road objects using video sequences from a driver’s perspective, a critical task for enhancing driving safety. The authors introduce the Traffic Object Importance (TOI) dataset, which is significantly larger and more diverse than existing datasets... | Rebuttal 1:
Rebuttal: # To Reviewer xrYx
Thank you very much for your positive comments on our proposed dataset addressing a major limitation in the field and on our model showing good performance.
> **Q1:**
Lack of description of the annotation details. How many annotators are involved in the annotation procedure?... | Rebuttal 1:
Rebuttal: # General Response
We thank reviewers for their valuable feedback. We are encouraged by the reviewers’ positive comments on our work. Specifically, they find our model novel (8N7U) and effective (xrYx, 8N7U), our idea well-motivated (Yrjv), our proposed dataset sound (8N7U), our paper detailed (8N... | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
CIFD: Controlled Information Flow to Enhance Knowledge Distillation | Accept (poster) | Summary: Some existing methods alleviate the capacity gap between the teacher and student by setting up Teacher Assistants (TAs), introducing a large number of additional parameters and computational costs. Based on this, this paper proposes to train multiple RDM modules and connect multiple independent classification ... | Rebuttal 1:
Rebuttal: We thank the reviewer for their time and effort. Detailed comments follow the summary.
**Summary**
- We showed that our proposed method outperforms TAKD and DGKD even when there is only 1 RDM. This ensures fairness. However, in terms of training cost, our method with 3 RDMs which is far superior... | Summary: Inspired by Shannon’s rate-distortion theory, this paper proposes two modules, namely the Rate-Distortion Module and the Information Bottleneck Module, to construct intermediate representations for knowledge distillation. Extensive experiments on various datasets demonstrate the effectiveness of this method.
... | Rebuttal 1:
Rebuttal: We thank the reviewer for their time and effort. Detailed comments follow the summary.
**Summary**
- By using one RDM without IBM, we showed that the key to our superior performance compared to Factor Transfer (FT) [28] is the principled loss function used to train the RDM. This is in addition t... | Summary: The paper presents a new distillation method, CIFD, designed based on *Shannon’s Rate-Distortion theory* and *Information Bottleneck Principle (IBP)*. CIFD contains Rate-Distortion Modules (RDM) for the teacher to substitute the heavy Teacher Assistant (TA) and Information Bottleneck Module (IBM) for the student... | Rebuttal 1:
Rebuttal: We thank the reviewer for their time and effort in providing feedback. Detailed response after summary.
**Summary**
- We directly addressed the concern on the efficacy of CIFD over large-student teacher gap. We distilled RN18 using RN152, RN101, RN50 (Table 13) as teachers and showed that propose... | null | null | Rebuttal 1:
Rebuttal: We thank all the reviewers for their time and effort for their feedback. We summarize the main points raised and our response. Detailed responses can be found in the reviewers' individual responses. Tables 13 - 17 and Fig. 7 are in the response PDF.
---
### Summary
- Teacher Assistants have ha... | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
Double-Bayesian Learning | Reject | Summary: This paper appears to suggest that any decision is composed of two Bayesian decisions and it tries to evaluate the implications of this idea.
I am very confused by this paper and really don't know what to make out of it. For example, the conclusion seems to be only a brainstorming session of random ideas and ... | null | Summary: The paper discusses the implications of Bayes' theorem, making assumptions inspired by a thought experiment of communicating a message. Prior (and model) elicitation by solving a fixed point equation is discussed.
Strengths: * The paper takes a fresh look at decision marking under uncertainty, which is at the... | Rebuttal 1:
Comment: I don't see a rebuttal in OpenReview. In any case, I believe that my score would have been hard to move at this stage, and that the manuscript needs a thorough revision before being resubmitted. | Summary: The purpose of this paper is to investigate the optimality of a classifier. It is known that the Bayes classifier is optimal, and it is likewise known that an explicit computation of the Bayes classifier is often very challenging if not impossible. This paper offers an analysis of the Bayes classifier as a seq... | null | null | null | null | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
Towards Human-AI Complementarity with Prediction Sets | Accept (poster) | Summary: The paper analyzes decision support systems based on prediction set algorithms. The authors show that: (i) the usage of conformal prediction techniques is generally sub-optimal in terms of accuracy; (ii) the problem of finding the optimal prediction sets under human assistance is NP-hard. Moreover, they provid... | Rebuttal 1:
Rebuttal: **[Lines 125-146 & Algorithm 1]** To ease readability, we will rewrite 135-146 and we will add comments to the pseudo-code in Algorithm 1.
**[Limitation section]** In the limitation section under "Evaluation", we will add a discussion of and citation to Stutz et al., 2023.
**[Other popular datas... | Summary: The authors first show that conformal prediction sets may not lead to human decision optimality. The authors then introduce a greedy algorithm to generate candidate prediction sets that improve human decisions regarding the accuracy metric.
Strengths: The authors find the sub-optimality of conformal predicti... | Rebuttal 1:
Rebuttal: **[Optimal]** We will define what we mean by optimal the first time we mention optimal in the revised version of the paper.
**[Role of $a$]** In the generative model we used in the synthetic experiments, out of 20 features per sample, $d=4$ of these features correlate with the label value and thu... | Summary: The paper shows the conformal prediction set may not be the optimal set recommendation to humans if humans follow certain choice models. The authors then propose a greedy algorithm by modeling $P(y|x)$ and the choice model of humans assuming it follows MNL model. Authors compare the proposed method against the... | Rebuttal 1:
Rebuttal: **[Problem setting]** Straitouri et al. (ICML 2024) [19] conducted a large-scale human subject study where they compared the setting we adopted, where users are not allowed to select label values outside the conformal prediction sets, against the setting the reviewer suggests, where users are allo... | Summary: This paper aims to construct optimal prediction sets under which experts can achieve the highest accuracy. The authors claim that human experts cannot attain maximum accuracy with the prediction sets generated by conformal predictors. To address this issue, the paper proposes an efficient greedy algorithm base... | Rebuttal 1:
Rebuttal: **[More realistic datasets]** The dataset ImageNet-16H is among the only publicly available datasets that we found containing multiple expert predictions per sample, a relatively large number of samples, more than two/three classes and a reasonable level of difficulty. The suggested datasets, Imag... | Rebuttal 1:
Rebuttal: We would like to thank the reviewers for their careful and insightful comments, which will help improve our paper. Please, find a point-by-point response below and a one-page pdf with additional results attached.
Pdf: /pdf/44abba03dc7f080cdc9489bfbca32ca7899ed3d9.pdf | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
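The baseline this paper argues is sub-optimal for human decision support is a standard split conformal predictor. For context, here is a minimal sketch of split conformal prediction sets; the paper's greedy algorithm instead optimizes the sets for human accuracy under a choice model, and all names below are hypothetical:

```python
import numpy as np

def conformal_quantile(cal_probs, cal_labels, alpha=0.1):
    """Nonconformity = 1 - model probability of the true label; the threshold is
    the ceil((n+1)(1-alpha))/n empirical quantile of calibration scores."""
    scores = 1.0 - cal_probs[np.arange(len(cal_labels)), cal_labels]
    n = len(scores)
    level = min(1.0, np.ceil((n + 1) * (1 - alpha)) / n)
    return np.quantile(scores, level, method="higher")

def prediction_set(test_probs, qhat):
    """All labels whose nonconformity score falls below the threshold."""
    return np.flatnonzero(1.0 - test_probs <= qhat)
```

Sets built this way guarantee marginal coverage of the true label, but nothing about how a human choosing from the set will perform, which is precisely the gap the paper targets.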
Toward Approaches to Scalability in 3D Human Pose Estimation | Accept (poster) | Summary: Existing data in 3D human pose estimation are typically collected indoors with human actors. To address this scalability issue, the authors propose to synthesize 3D human pose data via an Osteo-kinematic model and introduce biomechanical constraints for better physical plausibility. Additionally, to deal with th... | Rebuttal 1:
Rebuttal: Thank you for your valuable and detailed review. Your feedback on the use of biomechanical knowledge and the practical limitations of our approach provides important guidance for further improving our work.
## Repetition in the First Paragraph of Section 2
We sincerely thank the reviewer for poi... | Summary: This paper introduces two components aimed at addressing challenges in 3D human pose estimation, specifically in terms of scalability and generalization. The authors propose a Biomechanical Pose Generator (BPG), which incorporates biomechanical principles to generate plausible new 3D poses. They also introduce... | Rebuttal 1:
Rebuttal: Thank you for your comprehensive and constructive review. Your insights on the scalability and generalization aspects, along with suggestions for improved clarity, are invaluable and will be instrumental in enhancing our paper.
## Suggestions for Improving Clarity
We appreciate the reviewer's po... | Summary: The authors propose a 3D human pose estimation framework that incorporates data augmentation and depth ordering information. The main contributions are two-fold: First, the proposed Biomechanical Pose Generator (BPG) generates plausible body poses based on kinematic constraints, which is used for data augmenta... | Rebuttal 1:
Rebuttal: Thank you for your thorough and thoughtful review. Your comments on the novelty and clarity of our contributions, as well as your specific questions, are extremely valuable and will guide us in refining our manuscript.
## **How does BPG differ from existing kinematic constraint-based methods?**
... | Summary: This paper address the task of 3D Human Pose Estimation from monocular RGB. The authors make two main contributions: The Biomechanical Pose Generator (BPG) and the Binary Depth Coordinates (BDC). BPG is a 3D human pose generator that leverages the "Normal Range of Motion" (NROM) that is used in the medical fie... | Rebuttal 1:
Rebuttal: Thank you for your detailed and insightful review. Your feedback on the similarities to existing research and suggestions for further comparisons are greatly appreciated and will help strengthen our work.
## How does BDC differ from "Hand Pose Estimation via Latent 2.5D Heatmap Regression" by Iqb... | Rebuttal 1:
Rebuttal: We sincerely thank all reviewers for their insightful feedback and thought-provoking questions regarding our work. We greatly appreciate the recognition of the clarity, relevance, and novelty of our contributions.
We were pleased to receive positive comments from many reviewers.
**Reviewer 696C**... | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
Inverse Factorized Soft Q-Learning for Cooperative Multi-agent Imitation Learning | Accept (poster) | Summary: This paper extends the IQ-Learn method to cooperative multi-agent settings. The main insight is to use mixing networks to enable centralized training via decentralized Q functions.
Strengths: - The paper is quite relevant to NeurIPS and it is indeed important to extend IQ-Learn (or similar inverse learning al... | Rebuttal 1:
Rebuttal: > How do the agent have access to the global state information. If this is the case, why does the paper even define observations? Is the global state information available only in training or after deployment, too? In what settings is this applicable?
Thank you for the question! We would like to... | Summary: This paper addresses the problem of extending a single-agent imitation learning algorithm, inverse soft-Q learning (IQ-learn, Garg et al. Neurips 21) to the multi-agent cooperative setting. The proposed algorithm, MIFQ, leverages the ideas of mixing networks and the individual-global-max (IGM) principle, to pe... | Rebuttal 1:
Rebuttal: We thank the reviewer for carefully reading our paper and providing us with valuable questions and suggestions.
> I wonder if the authors could discuss whether a simple state-only extension of IQ Learn ...
Our argument in lines 143-148 simply means that directly using the global Q, V, and globa... | Summary: This paper presents a novel algorithm, Multi-agent Inverse Factorized Q-learning (MIFQ), for cooperative multi-agent imitation learning (IL). It extends the inverse soft-Q learning framework to multi-agent settings by introducing a mixing network architecture for centralized training with decentralized executi... | Rebuttal 1:
Rebuttal: We thank the reviewer for reading our paper and for the positive feedback.
> In Figure 2, the semi-transparent curves are not standardly explained. If these do not represent standard deviations, what statistical measure do they depict?
They represent standard deviations. We will clarify this in th...
Rebuttal: We thank the reviewer for reading our paper and the insightful comments and questions.
> The paper's organization could be improved.
Thank you for the feedback! We will revise our writing and improve our exposition.
> The similarity between IGC and IGO[1] requires further clarification.
Than... | Rebuttal 1:
Rebuttal: We thank the reviewers for carefully reading our paper and providing constructive feedback and questions, which we have been happy to consider and clarify. Please find a summary of our responses below.
**Reviewer GGqd** raised a concern about the fact that equation (2) does not hold under our mix... | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
Approximated Orthogonal Projection Unit: Stabilizing Regression Network Training Using Natural Gradient | Accept (poster) | Summary: The paper proposes a novel training framework for regression tasks called the Approximated Orthogonal Projection Unit (AOPU), optimized using truncated natural gradients. The authors utilize the Rank Rate (RR) of the augmented data covariance matrix as a metric. They demonstrate that their method offers more s... | Rebuttal 1:
Rebuttal: **Response to reviewer SnqU:**
We are grateful for the time and effort you have invested in reviewing our manuscript. We take each of your concerns seriously and are confident that we can address all issues raised to your satisfaction.
**Response to weaknesses 1:**
Due to our focus on rigorous ... | Summary: The paper introduces the Approximated Orthogonal Projection Unit, the basis for a new neural network, designed to enhance the stability and interpretability of regression models, particularly in industrial soft sensor applications. The primary aim is to address the need for stable and immediate optimization in... | Rebuttal 1:
Rebuttal: **Response to reviewer PyNt:**
We are deeply grateful for the time and effort the reviewer has invested in reviewing our manuscript. Your recognition and support are crucial to us. We take each of your concerns seriously and have addressed them thoroughly.
**Response to weaknesses 1:**
To expan... | Summary: This paper introduces a new model for soft sensor tasks, the Approximated Orthogonal Projection Unit (AOPU), to enhance the stability and interpretability of regression networks. AOPU incorporates trackable and dual parameters, which are treated differently during the inference and training processes. AOPU tr... | Rebuttal 1:
Rebuttal: **Response to reviewer 8Lrh:**
We greatly appreciate the time and effort the reviewer has dedicated to reviewing our manuscript and thank you for recognizing our work. We will do our best to address the weaknesses and hope that our answers will convince you to raise the rating of AOPU.
**Respo... | null | null | Rebuttal 1:
Rebuttal: We want to thank all reviewers for dedicating their time and effort to scrutinizing the manuscript. We have noted that the reviewers have some concerns and misunderstandings regarding the manuscript's presentation. We wish to clarify the contributions and impact of AOPU on the soft sensor deep learnin...
Is Programming by Example Solved by LLMs? | Accept (poster) | Summary: This paper investigates the effectiveness of Large Language Models (LLMs) in solving Programming-by-Example (PBE) tasks. Evaluations are conducted on three classic PBE domains including lists and strings, as well as a graphics programming domain. The findings suggest that while pretrained LLMs are not inherent... | Rebuttal 1:
Rebuttal: Thank you for the thoughtful review. Please see below for a new experimental results that you suggested we run, together with our responses to your questions.
> In the experiments, there are no LLM competitors in the graphics domain. Any reasons?
Thank you for your suggestion! We added the GPT-4... | Summary: The paper focuses on the classical task of Programming By Example (PBE): given some (input,output) pairs, the goal is to generate a program that "fits" these examples (producing the outputs when given the inputs), and also generalizes well to new inputs.
The paper evaluates mostly 7B and also 33B LLMs on three... | Rebuttal 1:
Rebuttal: Thank you for the thoughtful review. We really believe that our responses and new experiment can address your concerns, and hope that you will agree. Please see below.
> empirical results was not surprising or unusual
Papers in the past year find negative results for LLMs on PBE [1-3], and none ... | Summary: The paper performs a relatively thorough study on using LLM for example-guided program synthesis tasks. The results presented in the paper suggest that LLMs make strong progress toward solving the typical suite of example-guided synthesis tasks, potentially increasingly the flexibility and applicability of PBE... | Rebuttal 1:
Rebuttal: We thank Reviewer e1ch for the thoughtful review. Please see the global response PDF for the requested LOGO graphics details, and below for other new experiments and responses to your specific questions.
> CoT and other simple prompting methods are not evaluated
Thanks for the suggestion. We e... | Summary: This paper investigates whether the long-studied programming by example task is "solved" by large language models with Turing-complete languages like python.
Their evaluation is on three domains: lists, strings, and LOGO/Turtle graphics.
They evaluate three LLM-based approaches, including a self-instruct-like ... | Rebuttal 1:
Rebuttal: Thank you for the detailed review, and for your support. Please see the global review for a PDF with new results (including a fun new LOGO experiment). We address your specific questions below.
> Regarding Contamination
We avoided contamination as follows:
1. String dataset: The datasets contain... | Rebuttal 1:
Rebuttal: Thank you all for the helpful reviews. Please see your individual responses, but here we wish to include a PDF illustrating:
1. The conversion to ASCII art requested by reviewer e1ch. Interestingly, we also found that by down sampling the image to ASCII, it is able to somewhat generalize to hand d... | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
Scene Graph Generation with Role-Playing Large Language Models | Accept (poster) | Summary: This paper proposes SDSGG, a novel open vocabulary scene graph generation(OVSGG) algorithm that leverages the reasoning capability of a LLM to better determine the relations between objects in the scene. It achieves this goal by first prompting a LLM with multiple persona prompts to expand a simple relational ... | Rebuttal 1:
Rebuttal: We thank reviewer xMLQ for the valuable time and constructive feedback. We provide a point-by-point response below.
**Q1**: **Clarifying the rule of prompt construction.**
**A1**: The prompts **are like example #1** and **are generated offline**. The "{scene content to be discussed}" is constructe... | Summary: This paper aims to solve the open-vocabulary scene graph generation problem. Previous methods mainly adopt scene-agnostic prompts as text classifiers. The authors argue that using the fixed text classifiers not only struggles to model visual relations with high variance, but also falls short in adapting to dis... | Rebuttal 1:
Rebuttal: We thank reviewer RK4c for the valuable time and constructive feedback. We provide a point-by-point response below.
**Q1**: **Computational complexity of description generation (offline).**
**A1**: Suppose there are 3 common object categories (*i.e.*, human, animal, and product [a,b]) and 50 predi... | Summary: This paper starts by discussing methods for Open-vocabulary Scene Graph Generation (OVSGG) based on the CLIP model, highlighting the issue that current OVSGG methods do not differentiate between various scenes, which limits their effectiveness. The authors introduce SDSGG, a scene-specific description-based OV... | Rebuttal 1:
Rebuttal: We thank reviewer qa1m for the valuable time and constructive feedback. We provide a point-by-point response below.
**Q1**: **Presentation.**
**A1**: Our apologies. We will revise Section 3.1 to improve clarity and coherence. Our revisions will focus on:
1. Streamlining the naming conventions: We... | null | null | Rebuttal 1:
Rebuttal: To all reviewers:
Thank you so much for your careful review and constructive comments. We have revised our paper accordingly. The major changes are as follows:
1. We improve the presentation of Sec. 3.1, according to Reviewer qa1m's comments.
2. We add an experiment to evaluate the ... | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
LLM-ESR: Large Language Models Enhancement for Long-tailed Sequential Recommendation | Accept (spotlight) | Summary: The paper presents a framework that integrates large language models (LLMs) into sequential recommendation systems (SRS) to tackle the long-tail challenges. The framework includes dual-view modeling, which combines semantic embeddings from LLMs with collaborative signals, and a retrieval-augmented self-distill... | Rebuttal 1:
Rebuttal: We appreciate the reviewer's valuable time and insightful suggestions, which are important to our paper. We provide a point-by-point response as follows.
> W1 & Q1
Thank you for highlighting the potential overfitting issue when textual descriptions are not diverse enough. We agree with the rev... | Summary: This paper introduces a novel framework designed to address the long-tail challenges in sequential recommendation systems (SRS). By leveraging semantic embeddings from large language models (LLMs) and combining them with collaborative signals, the authors propose a dual-view modeling framework and a retrieval-... | Rebuttal 1:
Rebuttal: We appreciate that the reviewer has raised these valuable questions to help us refine our paper. We have discussed these questions carefully and responded to them as follows.
> W1 & Q1
We appreciate the reviewer's advice on providing more details regarding practical implementation. There are two... | Summary: The paper addresses the challenges in sequential recommender systems (SRS), particularly the long-tail user and long-tail item issues, which complicate user experience and seller benefits in real-world applications. The authors propose the Large Language Models Enhancement framework for Sequential Recommendati... | Rebuttal 1:
Rebuttal: We appreciate the meticulous and insightful comments that help us polish the paper. Please find our point-by-point responses to the reviewer's concerns below.
> W1 & Q1
We greatly appreciate the reviewer's suggestion on refining the motivation of our paper. Existing research on long-tail issues, includi... | null | null | Rebuttal 1:
Rebuttal: We really appreciate your valuable time and insightful suggestions. The figures and tables of results referred to in the rebuttal are included in the supplementary PDF (i.e., __Rebuttal PDF__).
Pdf: /pdf/249b73a4192de7232766b14590739771e58f2163.pdf | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
Evaluate then Cooperate: Shapley-based View Cooperation Enhancement for Multi-view Clustering | Accept (poster) | Summary: This paper studies multi-view clustering and seeks to investigate the view cooperation issue. The authors consider DMVC as an unsupervised cooperative game and regard each view as a participant. Compared with the existing methods, this consideration is new and interesting. Based on the novel idea, the authors ... | Rebuttal 1:
Rebuttal: **1. Illustration in Figure 2:**
Thanks. Improvements have been made to the View Cooperation Enhancing Module of Figure 2, shown in Figure 1 of the global response, presenting the details of gradient modulation within this module.
**2. Notification table:**
Thanks for your suggestions. A notification table ... | Summary: The author introduces a Shapley-based cooperation enhancement framework aimed at fostering collaboration among different views. The SCE-MVC method incorporates cooperative game theory, considering each view as a participant in the model and assessing their contributions using the Shapley Value.
Strengths: Vie... | Rebuttal 1:
Rebuttal: **1. the use of SCE module on alignment-based methods:**
Thanks. The SCE method consists of two modules: the View Contribution Evaluation Module and the View Cooperation Enhancing Module. For alignment-based methods, the View Contribution Evaluation Module obtains the contributions of views, wher... | Summary: The study centers on improving task performance via deep multi-view clustering (DMVC) and fostering cooperation among different views. Specifically, the study evaluates view contributions, emphasizing the significance of strengthening cooperation among views.
Strengths: Considering multi-view tasks from a col... | Rebuttal 1:
Rebuttal: **1. Quantification of the equilibrium level of view contributions:**
Thanks. Due to the normalized characteristic of view contributions, the cooperation level among views with/without SCE can be compared by calculating the variance of view contributions, denoted as $D(\phi)$. A smaller variance ... | Summary: This research merges game theory with multi-view clustering by introducing the Shapley-based Cooperation Enhancing (SCE) approach. It features a module to systematically evaluate each view's contribution. The approach promotes view cooperation by adjusting the training convergence rate of view parameters based... | Rebuttal 1:
Rebuttal: **1. Different characteristics of the joint method (Figure 3(a)) and the alignment-based method (Table 2):**
Thanks. Figure 3(a) illustrates the change of view contributions of DMJC (a joint method) with/without SCE. In the joint methods, views' representations are optimized in their respective spaces, le... | Rebuttal 1:
Rebuttal: We thank the SAC, AC, and PCs for their efforts and constructive comments, which are helpful in further improving the quality of our manuscript. We respond to your questions carefully one by one, and we hope our responses can address your concerns.
Note that there are five tables and on... | NeurIPS_2024_submissions_huggingface | 2,024 | Summary: This paper firstly considered DMVC as an unsupervised cooperative game where each view can be regarded as a participant. Then, the authors introduced the shapley value and propose a novel MVC framework termed Shapley-based Cooperation Enhancing Multi-view Clustering (SCE-MVC), which evaluates view cooperation ... | Rebuttal 1:
Rebuttal: **1. The relationship between $\phi$ and $w$:**
Thanks. Evaluating the view contribution using $\phi$ calculated by shapley value, instead of relying solely on pre-set weights $w$, stems from a systemic perspective on the process of multi-view clustering. In multi-view of representation learning ... | null | null | null | null | null | null |
Unified Lexical Representation for Interpretable Visual-Language Alignment | Accept (poster) | Summary: The authors propose a method based on lexical representation for Visual-Language Alignment (VLA). The method relies on aligning two strong unimodal models, namely DINOv2 for the visual modality and Llama 2 for the text modality. Each backbone is fine-tuned with a few adapters or additional layers. The two moda... | Rebuttal 1:
Rebuttal: ## W1. Tokenizer-based vocabulary is not perfect.
Thanks. Please note that LexVLA has achieved SOTA performance in most experiments, demonstrating its effectiveness. While it is nontrivial to design a perfect vocabulary to handle all the corner cases generally, we would take it as an important fut... | Summary: The paper proposes LexVLA, a more interpretable VLA framework that learns a unified lexical representation for both modalities without complex design.
LexVLA uses DINOv2 as the visual model and Llama 2 as the language model, proposing an overuse penalty to avoid false discoveries.
LexVLA outperforms baselines ... | Rebuttal 1:
Rebuttal: ## Q1-1. Is lexical representation a way to select important information and map it to the codebook?
Thank you for your question. We respectfully disagree with this characterization. Lexical representation is not a codebook strategy in [1]. Learning a unified codebook for multi-modal data is fund... | Summary: This paper presents LexVLA, a vision language alignment method integrating a pretrained vision model and a pretrained language model. To retain the original capabilities of pretrained single-modal models, it adopts a unified lexical representation with unique codebooks. Moreover, the vision model is tuned with... | Rebuttal 1:
Rebuttal: ## Q1. How accurately or reliably does the proposed PatchDis metric reflect the interpretability of patch-level visual lexical representation?
Thank you for your concern. We have discussed and analyzed this in the main paper. Our proposed PatchDis is a direct metric for assessing the interpretabi... | null | null | Rebuttal 1:
Rebuttal: We would like to express our sincere gratitude to all the reviewers for their insightful and constructive feedback on our paper. We are delighted that our work has been positively received, and we appreciate the time and effort each reviewer has put into evaluating our submission.
We are particul... | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
Hybrid Top-Down Global Causal Discovery with Local Search for Linear and Nonlinear Additive Noise Models | Accept (poster) | Summary: This paper studies structure learning problem for additive noise model (ANM) in both linear and nonlinear settings. It proposes a hybrid constraint based approach to learn the DAG by leveraging the local ancestral relationships. The algorithm consists of ordering search and edge discovery these two steps. Corr... | Rebuttal 1:
Rebuttal: We thank the Reviewer for their suggestions on how we might better emphasize the contributions of our work. We respond to Weakness 2 in the general response, adding to our preliminary experiments by comparing our algorithms against many state-of-the-art baselines (CAM, NoGAM, GES, GRaSP, GSP, etc.... | Summary: In this paper, the authors present a causal discovery method by firstly determining the order of the causal variables, then determining the existence of edges between any two variables. The experimental results demonstrate the superiority of the proposed method compared to relevant methods.
Strengths: I thank... | Rebuttal 1:
Rebuttal: We thank the Reviewer for bringing [1] to our attention. We respond to Question 1 in the general response. We provide clarification on Weakness 1 and 2 below.
**Necessary Discussion of Related Literature**
Methods in [1] tackle a different setting than our paper: they obtain an MEC under the Spa... | Summary: The paper presents theoretical results about extensions of the partial order induced by a causal DAG and uses these results to propose new constraint-based algorithms for ANMs.
**Edit**: increased rating from 3 to 5, soundness from 1 to 2, and contribution from 2 to 3.
**Edit 2**: increase rating from 5 to 6... | Rebuttal 1:
Rebuttal: We thank the Reviewer for their detailed comments on how we might improve our notation, and better support our theoretical results. We respond to Weakness 1 in the general response, providing many new experiments that compare our algorithms against many state-of-the-art baselines (CAM, NoGAM, GES,... | Summary: The paper mainly focuses on proposing efficient search algorithms for finding the hierarchical sort ordering (linear topological sort) of variables. As mentioned in Section 5, finding such hierarchical orders can significantly improve the efficiency of causal discovery of edges, making the algorithm tractable ... | Rebuttal 1:
Rebuttal: We thank the Reviewer for their insightful questions about how our methods work, and suggestions for how to clarify our explanations and results. We respond to Weakness 2 in the general response and address everything else below (following the order of the review).
**Explanatory Examples**
We th... | Rebuttal 1:
Rebuttal: We thank the Reviewers for their insightful comments and questions, as they have helped improve the clarity of our paper. We have addressed all raised concerns in this rebuttal, and incorporated the feedback into our manuscript.
We thank the Reviewers for unanimously acknowledging the novelty an... | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
Take A Shortcut Back: Mitigating the Gradient Vanishing for Training Spiking Neural Networks | Accept (poster) | Summary: The paper trains SNNs using surrogate gradient learning. In order to mitigate the gradient vanishing problem, the paper proposed the Shortcut Back-propagation method and utilizes an evolutionary algorithm framework to balance the training of shallow and deep layers. The effectiveness of the proposed method is ... | Rebuttal 1:
Rebuttal: Thanks for your efforts in reviewing our paper and your recognition of our novel method, notable results, and good writing. We respond to your questions point by point as follows.
**W1**: The author should add more mathematical proof to demonstrate that the mentioned residual structur...
Rebuttal: Thanks for your efforts in reviewing our paper and your recognition of our simple method, notable results, and good writing. We respond to your questions point by point as follows.
**W1**: In this paper, the author only demonstrates a change in gradient distribution in the first layer...
Rebuttal: Thanks for your efforts in reviewing our paper and your recognition of our interesting ideas, notable results, and good writing. We respond to your questions point by point as follows.
**W1**: The proposed method will increase the training time.
**A1**: Thanks for this question. The ... | null | null | Rebuttal 1:
Rebuttal: Thanks for your efforts in reviewing our paper and your recognition of our interesting ideas, notable results, and good writing. Here we provide the revised figure in the PDF.
Pdf: /pdf/1db36038c0ae23510f9a5e9ec08a7cc70ee30036.pdf | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
Reasons and Solutions for the Decline in Model Performance after Editing | Accept (poster) | Summary: This paper addresses the challenges associated with the decline in performance of LLMs after undergoing knowledge editing. The study identifies the primary factors contributing to performance degradation from both data and model perspectives. By constructing a Multi-Question Dataset (MQD) and analyzing the imp... | Rebuttal 1:
Rebuttal: Thank you for your constructive feedback on the paper! We have added detailed explanations for the important questions asked in the review.
$\textbf{Q1: }$ How adaptable is the D4C method to different types of LLMs and knowledge editing tasks beyond those tested in your experiments?
$\textbf{W1... | Summary: Recent research has shown varying degrees of decline in model performance following small changes made by certain model editing methods. This paper is the first to comprehensively analyze the reasons behind such performance declines. Through extensive experiments, it identifies two main factors: data and model... | Rebuttal 1:
Rebuttal: Thank you for recognizing the importance, effort in method, and applications of our work. We outline our response to the main concerns:
$\textbf{Q1: }$ Can the authors add a section in the appendix to expand on the dataset mentioned in 3.1
$\textbf{W1: }$ Thank you for your suggestion. We will ... | Summary: The paper investigates the reasons behind performance decline in sequential model editing approaches that selectively update parameters based on both data and model factors. To address the issues causing this decline, the authors propose a method to save editing history, thereby transforming sequential editing... | Rebuttal 1:
Rebuttal: Thank you for your valuable feedback and for recognizing the novelty of the our method. Below, we address some of the weaknesses raised:
$\textbf{Q1: }$ The study is restricted to two closely related editing approaches.
$\textbf{W1: }$ First and foremost, our innovation surpasses the mere transl... | Summary: This paper investigates the reasons and solutions for the decline in model performance of model editing. The authors conduct experiments from two perspectives: data and model. Specifically, to clarify the impact of data on the performance of edited models, the authors first evaluate how editing different type... | Rebuttal 1:
Rebuttal: Thank you for the positive recommendations and valuable feedback!
$\textbf{Q1: }$ There is no overview of this paper, which makes it hard to follow the details of Section 3 and 4.
$\textbf{W1: }$ We provided an overview of this paper using text and figures. Firstly, in the Introduction section,... | Rebuttal 1:
Rebuttal: # Global Response
We thank the reviewers for their thoughtful feedback. We are glad the reviewers find that
* Our motivation is innovation and has great significance
* "The paper is well-motivated: Exploring the reasons behind and impact of small changes made by model editing techniques on ... | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
DDK: Distilling Domain Knowledge for Efficient Large Language Models | Accept (poster) | Summary: This paper proposes DDK, a knowledge distillation (KD) framework that distills large language models (LMs) into small LMs. Unlike previous KD methods, DDK dynamically adjusts the domain weights during distillation. Experiments show that DDK outperforms other KD baselines across various tasks.
Strengths: 1. Th... | Rebuttal 1:
Rebuttal: Thanks for your careful reading and constructive suggestions. We address each of your concerns in detail below.
**Q1: Extra computation introduced by DDK should be considered. Compare the performance of the distilled model and the baselines given the same FLOPs.**
**A1**: Thanks for your insi... | Summary: The paper introduces a new framework called Dynamic Domain Knowledge Distillation (DDK) to enhance the efficiency of knowledge distillation for large language models (LLMs). Unlike traditional methods that overlook domain performance differences between student and teacher models, DDK dynamically adjusts the d... | Rebuttal 1:
Rebuttal: Thanks for your careful reading and constructive suggestions. We address each of your concerns in detail below.
**Q1: Require knowing the training data distribution and category beforehand.**
**A1**: Our DDK requires the training data distribution and category to divide the domains and we ack... | Summary: The work introduces a novel framework for knowledge distillation (KD) for LLMs. The key innovation of DDK is its dynamic adjustment of the distillation dataset composition based on domain performance differences between the teacher and student models. The paper presents extensive evaluations demonstrating that... | Rebuttal 1:
Rebuttal: Thank you for your valuable comments.
**Q1: Discuss with Sheared LLaMA. Novelty.**
**A1:** Please See **General Response** (**G.Q1** and **G.Q2**).
**Q2: Qwen1.5 results on MMLU and Humaneval.**
**A2:** For MMLU, the accuracy from Qwen Blog is based on a 5-shot setting, while the result in Tab... | Summary: This work proposed a KD strategy for LLMs. Specifically, with assess to the domain-specific performance of both the teacher and student LLMs, DDK uses domain knowledge guided sampling to dynamically update the data mixture. In addition the paper also conducts a statistical analysis of the domain distribution o... | Rebuttal 1:
Rebuttal: Thank you for your nice comments and suggestions.
**Q1: Compare domain-enhanced KD methods.**
**A1:** After investigating KD [R1] and its applications to LLMs [R2], we have observed that existing domain-enhanced KD methods can be divided into two categories. The first is cross-domain KD for domain adapt... | Rebuttal 1:
Rebuttal: ## **General Response**
Thanks a lot for handling/reviewing our submitted manuscript. We would like to thank the reviewers for their thoughtful and constructive comments and suggestions. By addressing each of the issues raised by the reviewers, we believe that the quality and clarity of our DDK c... | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
RestoreAgent: Autonomous Image Restoration Agent via Multimodal Large Language Models | Accept (poster) | Summary: For real-world images corrupted by multiple simultaneous degradations, this paper first analyzes the limitations of using all-in-one restoration models and various task-specific models. The authors then introduce RestoreAgent, which automatically identifies the types of degradation in a degraded image, determi... | Rebuttal 1:
Rebuttal: `Q1`: **Degradation order of JPEG compression.**
Thank you for bringing up this important point regarding the degradation order. In our study, the order of JPEG compression is not fixed and is entirely random, unlike the sequence suggested in references [1,2]. The strategy of placing JPEG compres... | Summary: This paper proposes a new pipeline to address multiple degradations, such as noise, blur, and low light. In addition, a RestoreAgent based on multimodal large language models is introduced to assess the type and extent of degradations in the input images and perform dynamic restorations.
Strengths: 1. The paper is well-wr... | Rebuttal 1:
Rebuttal: `Q1`: **How the order of different enhancement techniques is defined. For example, if the input has noise and rain streaks, how is the order of dehazing and denoising techniques determined? Will this affect performance?**
Thank you for highlighting this important aspect. The order of applying dif... | Summary: This paper presents an image restoration pipeline designed to handle various degradation types and levels by leveraging MLLM’s capabilities to select the appropriate model and determine the execution order. It begins with an analysis of why execution order and utilizing multiple models for different degradatio... | Rebuttal 1:
Rebuttal: `Q1`: **In the introduction, it would be helpful to explain how the MLLM excels at understanding different types and levels of image degradation.**
Thank you for your valuable suggestion. We will incorporate the following explanation into our introduction to clarify how MLLMs excel at understandi... | Summary: This paper introduces RestoreAgent, an innovative image restoration system that leverages multimodal large language models to autonomously handle images with multiple types of degradation. The system addresses limitations of existing all-in-one models and fixed task sequences by dynamically adapting to each im... | Rebuttal 1:
Rebuttal: `Q1`. **More details regarding the construction methods of these datasets**
Thank you for your feedback. We have significantly expanded the relevant sections in the revised version of our paper to offer a much more comprehensive explanation of our data preparation process.
Regarding the training... | Rebuttal 1:
Rebuttal: Dear AC and all reviewers,
We sincerely appreciate your time and efforts in reviewing our paper. We are glad to find that reviewers recognized the following merits of our work:
- **Innovative contribution and strong motivation [DJ9b, NhBR, Ux72, 4Dbc]**:
The proposed RestoreAgent addresses the ... | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
FedGMark: Certifiably Robust Watermarking for Federated Graph Learning | Accept (poster) | Summary: This paper investigated the problem of watermarking the Federated Graph Learning (FGL) models. This paper proposed the first backdoor-based FGL watermarking framework, called FedGMark. Specifically, to tackle the issues of ineffectiveness and vulnerability of existing methods, FedGMark designed two modules res... | Rebuttal 1:
Rebuttal: **We thank the reviewer for appreciating the novelty of the studied problem and the proposed certified robust watermarking scheme against attacks.**
**W1: Clearly define the Threat Model and Problem; who is the adversary that steals the FedGL model? Are clients and the central server an adversary? Mak... | Summary: This manuscript introduces FedGMark, a backdoor-based watermarking method specifically designed to protect Federated Graph Learning (FedGL) models from illegal copying and model theft. The authors claim that the proposed FedGMark is the first method to safeguard the intellectual property of FedGL models, offering cer... | Rebuttal 1:
Rebuttal: **We thank the reviewer for appreciating the intuition and motivation of the proposed solution and the comprehensive evaluations to support the solution.**
**W1: Clearly define the "Threat Model"**
Thanks for the suggestion! See Response to Comment#1 in the global rebuttal.
**W2: Consider bla... | Summary: This paper addresses the problem of protecting model ownership in the emerging domain of Federated Graph Learning (FedGL) by proposing FedGMark, a backdoor-based watermarking technique. The authors argue that existing watermarking approaches are either inapplicable to graph data or exhibit weaknesses in terms ... | Rebuttal 1:
Rebuttal: **We thank the reviewer for appreciating the well-motivated approach, promising performance, and robustness guarantees.**
**W1: Alternative key management methods**
We clarify the predefined key is used by the Watermark Generator to know which local watermark is learnt for which client. It is l... | Summary: This work studies watermarking for federated graph learning (FGL) to protect the ownership of participants. It proposes a customized watermark generator for local clients that can capture the local graph structure and private client information, and a robust model loader consisting of multiple GL submodels and... | Rebuttal 1:
Rebuttal: **We thank the reviewer for appreciating the motivation and novelty of this work (first to study robust watermarking for FedGL models).**
**W1: Clarify the concept of ownership in FedGL**
In typical FL, a server and multiple clients collaboratively train a global model stored in the server, wh... | Rebuttal 1:
Rebuttal: **We thank all reviewers for their constructive comments! We first summarize the global response to the common comments raised by the reviewers and then reply to individual reviewers’ comments.**
**Comment#1:Threat Model (djtK-W1 and yEMY-Q1)**
**Response:** Thanks for the suggestion! We add mo... | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
Combining Statistical Depth and Fermat Distance for Uncertainty Quantification | Accept (poster) | Summary: This paper introduces a new method for Out-of-Distribution detection based on the concepts of Lens Depth and Fermat distance. This method is used to see whether a sample has a similar representation in the penultimate layer of a Neural Network as the samples in the training data. The method is subjected to var... | Rebuttal 1:
Rebuttal: First of all, thank you for your time and for this review. Here are our answers.
### Q1. How computationally expensive is LD after the improvements? Is inference faster or slower than, e.g., DDU?
The computational expense depends on the number n of points in the proposed reduced LD, and the number... | Summary: The paper presents a non-parametric approach to out-of-distribution (OOD) detection. Given a trained neural network classifier, it is proposed to combine the Lens Depth (LD) with the Fermat distance (in an improved form) to capture the geometry and density of the data in feature space. Without assuming any pri... | Rebuttal 1:
Rebuttal: First of all, thank you for your kind review. Here are our answers.
### Weaknesses
>1. Related work to include papers on OOD:
Thank you for your recommendation. We will add more references on OOD in related work.
> 2. An additional evaluation metric:
We do appreciate your kind recommendatio... | Summary: This paper proposes a new method for OOD detection/scoring based on the lens depth and Fermat distance, arguing that it has advantages over prior methods by being non-parametric, non-invasive, (almost) tuning-parameter-free, and quite effective in adapting to the unknown structure of the data to identify OOD p... | Rebuttal 1:
Rebuttal: First of all, thank you for your kind and insightful reviews. Here are our answers.
Many aspects of the discussion will be added to the paper.
### Weaknesses
> W1: Two disjoint clusters for data?
Very natural question. You are perfectly right that the population (ideal) Fermat distance in Eq. (3.... | Summary: The authors address the problem of out-of-distribution detection in supervised learning with a particular focus on neural network models. The developed method works in some feature (embedding) space by measuring the statistical depth of the query point with respect to some reference set of points. The particular... | Rebuttal 1:
Rebuttal: ### Comment on weakness
We believe that in [3], one uses GDA and not a Gaussian Mixture Model (GMM). More precisely, GMM consists of calculating the density for a point $x$ as $p(x) = \sum_{i=1}^{C} w_i p_i(x)$, and so one needs to fit both $w_i$ (weight) and the parameters $\theta_i$ of each $p_i$. (Here $\the... | Rebuttal 1:
Rebuttal: First of all, thank you all for your time and insightful reviews. Here are the main points in the rebuttal.
- We answered all the questions to best of our capability. In particular, the questions of Reviewer NNtx brought extensive mathematical discussions.
- We shall add / nuance multiple points... | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
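The GMM density formula quoted in the rebuttal above, $p(x) = \sum_{i=1}^{C} w_i p_i(x)$, can be illustrated with a minimal numeric sketch. The weights, means, and covariances below are toy values chosen for illustration, not parameters from the paper:

```python
# Sketch of the distinction drawn in the rebuttal: a Gaussian mixture density
# requires fitting both the weights w_i and the parameters of each component
# p_i, whereas GDA fits a single Gaussian per class. Toy numbers only.
import numpy as np

def gaussian_pdf(x, mean, cov):
    """Density of a multivariate Gaussian at point x."""
    d = len(mean)
    diff = x - mean
    norm = np.sqrt((2 * np.pi) ** d * np.linalg.det(cov))
    return np.exp(-0.5 * diff @ np.linalg.solve(cov, diff)) / norm

def gmm_density(x, weights, means, covs):
    """p(x) = sum_i w_i p_i(x): both weights and component params are fitted."""
    return sum(w * gaussian_pdf(x, m, c)
               for w, m, c in zip(weights, means, covs))

# Toy two-component mixture in 2D (illustrative values only).
weights = [0.7, 0.3]
means = [np.zeros(2), np.ones(2) * 3.0]
covs = [np.eye(2), np.eye(2)]
x = np.zeros(2)
p = gmm_density(x, weights, means, covs)
```

At `x = 0` the density is dominated by the first component, which is why fitting the weights $w_i$ matters in addition to the per-component parameters.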
Simplified and Generalized Masked Diffusion for Discrete Data | Accept (poster) | Summary: This paper proposes a new framework for masked diffusion models for generative modeling of discrete data. Masked diffusion models offer an alternative to autoregressive models for discrete data but have faced challenges due to complex formulations and unclear relationships between different approaches. This pa... | Rebuttal 1:
Rebuttal: Thank you for the positive feedback! We are glad that you find our contribution notable, our methodology thorough and our experimental results strong and robust. We address each comment below
## Detailed pseudocode and specific implementation challenges
We thank the reviewer for the suggestion. P... | Summary: The paper simplifies the mathematical formula for the absorbing state diffusion process. By doing so, the authors derive a continuous-time ELBO for masked diffusion models. Their method, MD4, achieves better perplexity scores than SEDD on text8 and zero-shot perplexity on numerous datasets.
Strengths: Simplif... | Rebuttal 1:
Rebuttal: Thank you for the time and the feedback! We clarify a few points.
## D3PM results & differences between MD4 and D3PM
* The reviewer is concerned if the D3PM results for zero-shot transfer tasks are comparable to MD4. We clarify that these D3PM results are from the SEDD paper (Lou et al., ICML 20... | Summary: The paper proposes a streamlined and generalized framework for masked diffusion models, addressing the complexities and inefficiencies of existing models, including those based on Score Entropy Discrete Diffusion (SEDD). It introduces a continuous-time variational objective for masked diffusion models, simplif... | Rebuttal 1:
Rebuttal: Thank you for the time you’ve taken to review our work and for the positive and constructive feedback! We are glad that you find our work "offers a novel theoretical formulation" and "achieve state-of-the-art performance" with " comprehensive experimental validation". We address each individual co... | Summary: Summary: This paper introduces a framework for masked diffusions that consolidates previous research on the topic and organizes it into a cohesive structure. The authors also present a generalized model within this framework, which enables the use of state-dependent masking schedules and optimization of schedu... | Rebuttal 1:
Rebuttal: Thank you for the time you’ve taken to review our work and for the positive and constructive feedback! We are glad that you found our paper "offers a valuable approach to optimize the forward process”, “organizes prior work into a cohesive structure”, and "may serve as a source of inspiration" for... | Rebuttal 1:
Rebuttal: # Response to comments shared by reviewers:
We thank the reviewers for their feedback. Below we address the questions shared by reviewers. We also uploaded a rebuttal pdf that contains figures used to address individual questions/comments of the reviewers.
## bny3,Y9JA: Details of training and s... | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
SocialGPT: Prompting LLMs for Social Relation Reasoning via Greedy Segment Optimization | Accept (poster) | Summary: The paper proposes a pipeline method that orchestrates pre-trained foundation models to solve the social relationship classification problem. It uses vision models to extract textual information about the scene in the form of a caption. Relevant information, i.e., age, gender, and general description, of individual person... | Rebuttal 1:
Rebuttal: Thank you for the constructive feedback and the positive assessment of our work! Below, we detail our responses to the review concerns.
**W1: Technical novelty of GSPO**
Thank you for acknowledging the "clever" design of our pipeline. We agree that our major technical contribution lies in ou... | Summary: This paper introduces SocialGPT, a modular framework for social relation reasoning that integrates the perception capabilities of Vision Foundation Models (VFMs) with the reasoning capabilities of Large Language Models (LLMs). To optimize prompts, the authors propose GSPO, a segment-based optimization algorith... | Rebuttal 1:
Rebuttal: Thank you for the constructive feedback and the positive assessment of our work! We are happy that you find our framework **innovative** and our experimental evaluation **comprehensive**. Below, we detail our responses to the review concerns.
**W1-1**
> Does this coordinate-based inference l... | Summary: This paper proposes a framework called SocialGPT for social relation reasoning, which combines vision foundation models and large language models. A greedy segment prompt optimization method is also proposed to prompt the LLM. Experimental results show the effectiveness of the proposed method.
Strengths: ---The... | Rebuttal 1:
Rebuttal: Thanks for the comments! Below we address the detailed questions. We hope that our responses will reflect positively on your final decision.
**W1-1: Common solution**
We are fully aware that leveraging foundation models for vision tasks is a growing trend, which also motivates our work. We would... | Summary: This manuscript introduces SocialGPT, a modular framework designed to enhance social relation reasoning by combining Vision Foundation Models (VFMs) and Large Language Models (LLMs). SocialGPT utilizes VFMs to convert image content into a textual social story, followed by LLMs performing text-based reasoning. ... | Rebuttal 1:
Rebuttal: Thank you for your valuable comments! Please find our responses to specific queries below. We hope that our responses will reflect positively on your final decision.
**W1: Substantial computational resources**
Our method leverages multiple foundation models, which may initially appear computatio... | null | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
Customized Multiple Clustering via Multi-Modal Subspace Proxy Learning | Accept (poster) | Summary: This paper incorporates multi-modal subspace proxy learning (Multi-Sub) to design a novel end-to-end multiple clustering approach and utilizes the synergistic capabilities of CLIP and GPT-4 to align textual prompts expressing user preferences with corresponding visual representations. The main contributions ... | Rebuttal 1:
Rebuttal: ### **W1: I wonder if this is another form of two-stage task.**
Thanks for your invaluable feedback. We will make it clear in the revision as follows. A two-stage process used by previous methods separates the representation learning and clustering entirely, where the representation learning is f... | Summary: This paper presents an innovative approach for addressing the limitations of existing multiple clustering methods. By leveraging the synergistic capabilities of CLIP and GPT-4, Multi-Sub aligns textual prompts with visual representations to cater to diverse user-specific clustering needs. This method introduce... | Rebuttal 1:
Rebuttal: ### **W1: A more detailed analysis and discussion on the sensitivity of the method to these parameters.**
We greatly appreciate your suggestion. To show the sensitivity of the balancing factor $\lambda$ that is the only hyper-parameter in our proposal, the experiments were conducted on CIFAR-10.... | Summary: The paper is about Multiple Clustering, which is an interesting topic. The authors propose a novel end-to-end multiple clustering approach that incorporates a multi-modal subspace proxy learning framework. The paper is well written and well organized. However, there are several concerns in the current version ... | Rebuttal 1:
Rebuttal: ### **W1: Several associations.**
Thanks for your insightful comments. Multi-Sub works by learning in the user-preferred subspace. Therefore, it is theoretically unlikely that the learned representations are completely opposite to the user's demand under such an aligned subspace. We will make it cl... | Summary: This paper introduces an end-to-end multi-clustering method that integrates a multimodal subspace proxy learning framework. It combines text prompts expressing user preferences with corresponding visual representations to achieve clustering based on user interests.
Strengths: 1.The clustering task, driven by ... | Rebuttal 1:
Rebuttal: ### **W1: The contributions of the paper.**
Thank you for the suggestion. We will carefully emphasize our contribution in the revision as follows:
Given only a high-level user interest in an unsupervised scenario without any class labels or names, we cannot directly apply CLIP. Instead, we must ... | Rebuttal 1:
Rebuttal: Dear Reviewers,
We sincerely thank your invaluable time and efforts invested in reviewing our submission. Your constructive and insightful feedback are greatly appreciated for improving our revision.
We have carefully responded to all the questions and concerns raised in the individual rebuttal ... | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
Feedback control guides credit assignment in recurrent neural networks | Accept (poster) | Summary: The authors explore the relationship between feedback control and learning with recurrent neural networks (RNNs). Specifically, they enforce a control signal onto an RNN that is used to generate a trajectory for an outreaching task, and then propose to use local learning rules on the neurons in the RNN. They show... | Rebuttal 1:
Rebuttal: Thank you for taking the time to review our work.
> My main concern is that the task chosen consists of bringing a system to a desired static target...
Thank you for flagging this - we also trained RNNs on an additional, commonly used task, where the network has to generate both the sine and c... | Summary: Feedback controllers are ubiquitous in neuroscience but their functions are not fully understood. This paper studies how feedback control interplays with biologically plausible online learning on a standard motor control task. The authors show that:
- feedback control enables adaptation to task variations withou... | Rebuttal 1:
Rebuttal: Thank you for taking the time to review our paper.
> In the appendix, it is written that the learning rate is taken to be constant. To make claims about e.g. learning speed, the optimizer, in particular its learning rate, has to be tuned.
Thank you for pointing this out. To address your concern... | Summary: Recent work has shown that feedback signals can be critical to rapid adaptation in control tasks, and may explain how biological intelligence can make rapid adjustments when solving such tasks. This paper studies how feedback control achieves this. To do so, the authors train an RNN enhanced with feedback cont... | Rebuttal 1:
Rebuttal: Thank you for taking the time to review our paper.
> Several sections of the paper seem to just present results from previous work ...
While our work builds upon the task adaptation findings of previous work (Feulner et al. 2022), it offers a substantially distinct focus and set of contribution... | Summary: The paper studies the effect of feedback control on motor learning in recurrent neural networks, finding that feedback control improves learning performance and better aligns with the true gradient w.r.t. the task.
Strengths: - Alignment with the true gradient is an interesting result and helps explain why fe... | Rebuttal 1:
Rebuttal: Thank you for taking the time to review our paper.
> The training setup is rather limited; it would be interesting to see training done for other tasks and architectures (or RNN sizes).
Thank you for raising this important point. We also trained RNNs on an additional, commonly used task, where t... | Rebuttal 1:
Rebuttal: We thank all the reviewers for their careful review of our manuscript and all the insightful comments!
Here, we attach a single page with extra figures to support our individual rebuttals below.
Pdf: /pdf/78611e01e7397e8abbe130e031a6fcbd24668fab.pdf | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
Beyond Redundancy: Information-aware Unsupervised Multiplex Graph Structure Learning | Accept (poster) | Summary: - This paper presents an information-theoretic approach to obtain a single graph fused from a multiplex graph, which preserves
- sufficient task-relevant information
- while removing task-irrelevant noise.
- A learnable graph augmentation strategy is also developed.
- The learned graph and representa... | Rebuttal 1:
Rebuttal: We appreciate your thoughtful feedback. Your constructive criticism is invaluable in refining our work. Below, we give point-by-point responses to your comments.
1. **The difference between the existing non-redundancy principle and multiplex graph non-redundancy is unclear. Please clarify it.**
... | Summary: The paper introduces InfoMGF (Information-aware Unsupervised Multiplex Graph Fusion), a novel framework aimed at addressing the issue of graph structure reliability in Multiplex Graphs. The primary goal is to refine graph structures to eliminate noise and maximize task-relevant information. Theoretical analysi... | Rebuttal 1:
Rebuttal: We appreciate your thoughtful feedback. Your constructive criticism is invaluable in refining our work. Below, we give point-by-point responses to your comments.
1. **Scalability: The framework involves several steps. Though the paper provides the complexity analysis in Appendix for each step, it... | Summary: The paper introduces InfoMGF, an innovative framework for Unsupervised Multiplex Graph Learning (UMGL) that addresses the often-overlooked issue of graph structure reliability. InfoMGF refines graph structures by removing task-irrelevant noise and maximizing task-relevant information through mutual information... | Rebuttal 1:
Rebuttal: We appreciate your thoughtful feedback. Your constructive criticism is invaluable in refining our work. Below, we give point-by-point responses to your comments.
**W1. Fig.4 shows that the proposed method is very robust to structure noise. However, more analysis is needed. Both InfoMGF and SUBLIM... | Summary: The authors develop a novel approach to improve Unsupervised Multiplex Graph Learning by refining graph structures to eliminate noise and maximize relevant information. The method utilizes mutual information maximization to integrate multiple graph views effectively. Theoretical validation and comprehensive ex... | Rebuttal 1:
Rebuttal: We appreciate your thoughtful feedback. Your constructive criticism is invaluable in refining our work. Below, we give point-by-point responses to your comments.
**Q1 & W1 & Q2. According to Table 1 and 2, it seems that the proposed method improves more on clustering than classification. Is there... | Rebuttal 1:
Rebuttal: We sincerely thank all the reviewers for their valuable and insightful comments. We are glad that the reviewers find that our studied problem is novel and significant. Here, we provide a PDF file to further address the reviewers’ concerns regarding the clarity of the paper and the completeness of ... | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
Co-occurrence is not Factual Association in Language Models | Accept (spotlight) | Summary: This paper distinguishes two forms of knowledge learning in the model:
1. co-occurrence statistics: from modeling the co-occurrence of entities in the text.
2. factual associations: from modeling entity relations established through implicit associations.
They synthesize two datasets where knowledge is repre... | Rebuttal 1:
Rebuttal: We thank the reviewer for the insightful comments, and we really appreciate the suggestions. Please find our response to the comments below:
**More than one piece of knowledge in a sentence**: we agree that the current datasets are limited in the number of pieces of knowledge in a sentence. We th... | Summary: This paper studies how language models acquires factual knowledge during finetuning. It shows that narrative input tends to teach a model co-occurrence between entities, while referencing input teaches more about factual association. Models that learn factual association generalizes better to various question ... | Rebuttal 1:
Rebuttal: We thank the reviewer for the insightful comments, and we really appreciate the suggestions. Please find our response to the comments below:
**Forms of knowledge**: we agree that it will be interesting to study other forms of knowledge, such as quantitative knowledge, procedural knowledge, and pr... | Summary: The work investigates the deficiencies of pretrained language models in learning factual knowledge, highlighting that these models tend to learn word co-occurrence statistics rather than true factual associations. The authors find that language models, when dealing with explicit relationships, are prone to mer... | Rebuttal 1:
Rebuttal: We thank the reviewer for the insightful comments, and we really appreciate the suggestions. Please find our response to the comments below:
**Generalization of results across domains**: although we performed analysis mainly with the synthetic Country-City-Animals dataset (in order to ensure the ... | Summary: This paper investigates the learning of factual knowledge in pretrained language models, distinguishing between knowledge represented as word co-occurrence statistics and true factual associations. The authors find that language models tend to learn co-occurrence statistics, which do not generalize well to rea... | Rebuttal 1:
Rebuttal: We thank the reviewer for the insightful comments, and we really appreciate the suggestions. Please find our response to the comments below:
**Data split**: The training and evaluation data are always disjoint and are of different types. The training data is plain text, while the evaluation data ... | null | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
Dynamic 3D Gaussian Fields for Urban Areas | Accept (spotlight) | Summary: This paper aims to perform view synthesis for dynamic urban scenes. This paper adopts 3DGS as scene geometry and uses neural fields to model the dynamic appearance of urban scenes. The neural scene graph is introduced to handle the movement of dynamic objects, and a deformation field is used to handle local ar... | Rebuttal 1:
Rebuttal: We thank the reviewer for the helpful feedback and for taking the time to review our manuscript. Below we address the concerns raised.
1. Rendering speed compared to 3DGS: We hope to address this concern in our global response, where we show a) an improved runtime of 0.074 seconds when rendering f... | Summary: This paper proposes a hybrid neural scene representation for dynamic urban driving scene modelling. The method utilizes 3D Gaussians as an efficient geometric scaffold and neural fields to represent appearance, thereby reducing memory. To account for transient scene geometry variations caused by weather, seaso... | Rebuttal 1:
Rebuttal: We thank the reviewer for taking the time to review our manuscript and the constructive feedback on our work. Below we address the concerns raised.
1. Impact of neural field query on rendering speed: We address this point in the global rebuttal posted above. In a nutshell, we show that the neural ... | Summary: The paper presents a novel 3D scene representation for novel view synthesis (nvs) in dynamic urban environments where, in particular, under heterogeneous imaging environments. The proposed representation relies on existing ingredients: 3D Gaussian Splatting, learned static/dynamic object instances, and a glob... | Rebuttal 1:
Rebuttal: We thank the reviewer for the helpful feedback and for taking the time to review our manuscript. Below we address the concerns raised.
1. Positioning of the conceptual contribution vs. the competitive landscape: We addressed this in L114-117 of our paper, but we will make the distinction clearer ... | Summary: This paper works on novel view synthesis (NVS) for large-scale, dynamic urban scenes. This paper proposes a neural scene representation called 4DGF, which uses 3D Gaussians as an efficient geometry scaffold while relying on neural fields as a compact and flexible appearance model. The proposed method integrate... | Rebuttal 1:
Rebuttal: We thank the reviewer for taking the time to review our manuscript and the constructive feedback on our work. Below we address the concerns raised.
1. Foggy regions in the demonstration video: While these artifacts may be caused by the limitations discussed in Sec. 5 like white balance or focus bl... | Rebuttal 1:
Rebuttal: We thank all reviewers for their helpful and constructive feedback. We appreciate the positive reception of our work. The consensus is that the combination of neural fields and 3D Gaussians as scene representation constitutes an interesting (f8M6), conceptually simple (6enk), and effective (51SN) ... | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
Causal vs. Anticausal merging of predictors | Accept (poster) | Summary: The paper explores the potential differences in predictor merging when approached from causal versus anti-causal directions. The results from MAXENT and CMAXENT indicate that in the causal direction, the solution converges to logistic regression, whereas in the anti-causal direction, it converges to Linear Dis... | Rebuttal 1:
Rebuttal: We thank the reviewer for their comments. We will first give some general comments, then answer each one of the points in the weaknesses and last answer the questions.
We noticed the strengths of the paper are the same as the summary. Is this correct?
Weaknesses:
1. The question about scaling is... | Summary: The authors give a treatment of the mixture-of-experts problem using the idea of MAXENT; they use this as a tool to discuss how to merge causal and anti-causal inferences on the same data, in part as a way to assess the quality of the data being analyzed.
Strengths: The discussion of the differences and merg... | Rebuttal 1:
Rebuttal: We thank the reviewer for taking the time to read our paper in detail and appreciate that they ranked the soundness and contribution of the paper as excellent. We would like to answer the reviewer’s concerns outlined in the weakness section, clarify some points that might have not been clear in th... | Summary: This paper studies the problem of learning a mixture of experts (predictors) where individual predictors have been learned with different causal constraints. It studies different asymmetries that arise when we merge different predictors using the Causal Maximum Entropy (CMAXENT) objective. It goes on to show t... | Rebuttal 1:
Rebuttal: We thank the reviewer for the careful reading of our paper. We also thank the reviewer for noting that the soundness and the contribution of the paper are good and the presentation is excellent. We will start with a short comment on the reviewer's summary and then answer the weaknesses ... | Summary: This paper studies the differences and properties that emerge when one uses causal, anticausal features for prediction.
Strengths: **S1.** This work makes several interesting observations of causal and anticausal predictors under their parametric assumptions.
**S2.** This work suggests some potential conside... | Rebuttal 1:
Rebuttal: We thank the reviewer for their comments. We would like to clarify certain points related to the weaknesses and respond to their questions as best we can.
W1 (about high dimensionality): in the examples we studied, we used two covariates but the results would be similar if we had predictors of mo... | Rebuttal 1:
Rebuttal: We thank all the reviewers for reading our paper and the interesting questions they asked. We also appreciate some of the reviewers considering the paper a valuable contribution to the community, sound, and well presented. We invite the reviewers to increase the score if they feel their q... | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
SelfCodeAlign: Self-Alignment for Code Generation | Accept (poster) | Summary: - The paper introduces SelfCodeAlign, a fully transparent and permissive self-alignment pipeline for code generation in LLMs without relying on extensive human annotations or distillation from larger models. SelfCodeAlign generates instruction-response pairs from seed snippets, evaluates responses with test ca... | Rebuttal 1:
Rebuttal: Thank you for your comprehensive review and important suggestions! Also thanks for pointing out the presentation issues in Appendix, which we will fix in the revision. We provide our responses to your questions and concerns as follows.
> Q1: What about experiments/benchmarking on models that uses... | Summary: The authors propose SelfCodeAlign, which finetunes the model on filtered data generated by the model itself. The authors conduct experiments to show that SelfCodeAlign outperforms most open-sourced models that were finetuned on public code datasets.
Strengths: The code generation problem is impor... | Rebuttal 1:
Rebuttal: Thank you for your valuable review and suggestions! We provide our response as follows.
> Q1: Can you properly highlight the row in table 1?
Thanks for the feedback. We appreciate your suggestions for improving Table 1. Could you kindly provide more specific details regarding your concerns, suc... | Summary: This paper introduces SelfCodeAlign, an entirely transparent and permissive pipeline designed for self-aligning code large language models without the need for human annotations or distillation. By applying SelfCodeAlign to CodeQwen1.5-7B, the authors generated a dataset containing 74k instruction-response pai... | Rebuttal 1:
Rebuttal: Thank you for your comprehensive review and suggestions! We address your questions as follows.
> Q1: Lack of Diversity in Generated Tasks: While the method aims to produce a variety of coding tasks, it is unclear how this diversity is achieved or measured…
Good question. We ensure task diversity... | Summary: This paper proposes a pipeline for generating synthetic instruction tuning data. The method consists of the following steps: 1. data filtering is applied to seed coding data to select high quality examples; 2. base LLM is used to generate a set of coding concept and category based on the seed data; 3. base LLM... | Rebuttal 1:
Rebuttal: Thank you for your valuable review and suggestions! We put our responses to your questions as follows.
> Q1: Have you tried this framework using stronger LLM to generate synthetic data?
Thank you for your question. We want to kindly highlight that we explored this point in Section 4.1, which exa... | Rebuttal 1:
Rebuttal: We deeply appreciate all the reviewers for their insightful feedback and suggestions for our work. In our responses below, we address each primary question (denoted as Q) or comment (denoted as C) raised by the individual reviewers. Additionally, we will revise our paper to incorporate editorial s... | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
pFedClub: Controllable Heterogeneous Model Aggregation for Personalized Federated Learning | Accept (poster) | Summary: The authors address a key issue in personalized federated learning, which enables clients with heterogeneous model structures to participate in federated learning with consideration of effectiveness and efficiency. This method is based on model assembly and reassembly, in which the blocks and layers can be tre... | Rebuttal 1:
Rebuttal: `>>> W1`
Thank you for the reviewer's valuable feedback. We addressed the computational cost comparison in Section 4.5, using computation time as a metric against pFedHR. The results, presented in Figure 4, show that pFedClub generally requires less time than pFedHR and offers more consistent per... | Summary: This paper presents a controllable model reassembly approach to enable heterogeneous model cooperation in federated learning. The designed CMSR algorithm provides the control of the space to save the computational cost. Furthermore, the approach also achieves model personalization for each local client. They ... | Rebuttal 1:
Rebuttal: `>>> W1`
Thank you for the reviewer’s question. In the main paper, we set K to 4, following the methodology outlined in [1]. To further explore how the value of K affects our results, we conducted a hyperparameter study on K. The results of this study are presented in Table 5, with detailed analy... | Summary: The paper proposes a `pFedClub` method for personalized federated learning that enables controllable heterogeneous model aggregation, addressing limitations of existing approaches such as lack of personalization, privacy concerns, and uncontrolled model size growth.
Extensive experiments conducted on three be... | Rebuttal 1:
Rebuttal: `>>> W1`
Thank you for pointing this out. We follow existing work [1] to maintain the natural order of the blocks, as we aim for the generated candidate models to be similar to handcrafted network structures. Here, the natural order index is defined as the position of each block. For example, CNN... | Summary: This paper addresses heterogeneous model aggregation in federated learning. To this end, the authors introduce pFedClub, which aims to generate personalized models for federated clients while ensuring that the models remain within size constraints. Specifically, pFedClub consists of three main steps: first, it... | Rebuttal 1:
Rebuttal: `>>> W1`
Thank you for your comments. We would like to emphasize that our research question addresses a significantly challenging problem, where each client maintains a unique model. Aggregating these heterogeneous models on the server is particularly difficult without the use of public data. Add... | null | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
Interpreting Learned Feedback Patterns in Large Language Models | Accept (poster) | Summary: This submission tries to tackle one big question in the field of interpreting the data-driven preferences learned by RLHF in human language. The technical path this submission takes is to train probes on SAE features to distinguish between good and bad RLHF features.
Strengths: + The attempt to interpret what hap... | Rebuttal 1:
Rebuttal: Thank you for the insightful review.
We are pleased that you found our research direction good, and that releasing our SAE infrastructure would be beneficial to the community.
## Clarification on SAE Feature Probing
> "Unclear why have to probe on top of SAE feature. SAE greatly increase the di... | Summary: The goal of this paper is to predict where patterns in LLM activations learned from RLHF diverge from the human preferences used for the RLHF training.
Given a base model and an RLHF tuned version of it, the method involves first identifying the 5 layers with highest parameter difference according to an L2 no... | Rebuttal 1:
Rebuttal: We thank the reviewer for their thoughtful review and appreciate that you found our paper accessible.
## Clarifying the Paper's Objectives and Takeaways
> "…the takeaways of this paper are somewhat unclear. … if we finetuned a new model on one of the datasets used in this paper and trained probe... | Summary: The authors propose an approach for measuring and interpreting the divergence between learned feedback patterns (LFPs, or simply the model's activation patterns) and the feedback reward distribution of the preference training dataset. To do so, they identify layers whose activations have moved the most during ... | Rebuttal 1:
Rebuttal: Thank you for your thoughtful review.
We are pleased that you found the question our paper studies interesting, and our explanation for using sparse autoencoders good.
## Key Assumptions
> "The effectiveness of this probing method seems to rely on many key assumptions being true…"
While we agr... | Summary: The paper investigates how large language models (LLMs) learn preferences from human feedback during fine-tuning using reinforcement learning (RLHF). The authors introduce the concept of Learned Feedback Patterns (LFPs) to describe activation patterns in LLMs that align with human feedback. They aim to measure... | Rebuttal 1:
Rebuttal: We thank the reviewer for their insight and time.
We appreciate that you found LFPs gave a new perspective on how LLMs learn from human feedback, and that our use of synthetic data and GPT-4 contributed well to the paper.
## Model and Task Selection
> "The study primarily focuses on a few speci... | Rebuttal 1:
Rebuttal: # Global Rebuttal
We thank the reviewers for their incisive feedback. In this comment, we summarize the additional results in the PDF attached to this comment, points made across multiple reviews, and our responses to those points. Note that unless otherwise specified, the figures referred to in ... | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
ReVideo: Remake a Video with Motion and Content Control | Accept (poster) | Summary: ReVideo presents a novel view of video editing by modifying content with an input trajectory to create new content. It designs a three-stage strategy to overcome the problem of motion control being ignored during direct training. The main contribution of this work lies in the new task of editing motion via user-spe... | Rebuttal 1:
Rebuttal: ## Q1 The Editing of the First Frame
Thanks for this suggestion. The editing method for the first frame is arbitrary, like the setting in AnyV2V. The results presented in the paper utilize text-guided image inpainting tools and InstructPix2Pix. Note that in our framework, editing the first frame i... | Summary: The paper presents a video editing method that enables precise localized adjustments to content and motion within specific areas of a video. It introduces a three-stage training strategy and a spatiotemporal adaptive fusion module to integrate edits across frames and locations effectively. This method allows f... | Rebuttal 1:
Rebuttal: ## Q1 About Artifact
We agree that our method still has room for improvement. We want to clarify this concern from two points:
(1) **The challenge of this task and our novelty.** Our method is the first attempt at local content and motion editing for videos. In Section 3.2 of the main paper, we c... | Summary: This paper presents ReVideo, a new approach for precise local video editing of both content and motion. It introduces a coarse-to-fine training strategy to progressively decouple content and motion control, and a spatiotemporal adaptive fusion module to integrate them effectively. Experiments show ReVideo can ... | Rebuttal 1:
Rebuttal: ## Q1 Practicality of Workflow
**Fig.1** and **Fig.2** in the attached PDF show that our method can still produce smooth results without a specified trajectory. This is due to our inherent capability to predict the motion in the editing area from the unedited content, enabling automatic motion generation wh... | null | null | Rebuttal 1:
Rebuttal: We appreciate the efforts of all the reviewers, ACs, and PCs. We have carefully read and addressed all concerns. **Since we are limited to 6,000 characters per reviewer during the rebuttal phase, we could only provide brief responses to some questions.** If there are any further issues, we are hap... | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
Implicit Optimization Bias of Next-token Prediction in Linear Models | Accept (poster) | Summary: This paper studies the implicit bias of gradient descent on the Next-Token Prediction (NTP) problem in linear models. They first formulate this NTP problem as minimizing the cross-entropy (CE) loss over distinct contexts, each tied with a sparse conditional probability over the token space. They then provi... | Rebuttal 1:
Rebuttal: Thank you for your thoughtful review and the constructive questions raised. We appreciate the careful read.
We hope that our responses below answer your questions.
**Q1. for the NTP-compatible and separable conditions to hold, one needs d > m.**
Here are the key points to consider about the co... | Summary: This work studies the implicit bias of optimization in next token prediction tasks by analyzing the structure of the decoding matrix at infinite time. The paper introduces two novel conditions under which the loss reaches its minimum theoretical value and demonstrates that if these conditions hold (which can b... | Rebuttal 1:
Rebuttal: We are grateful for your encouraging feedback and for endorsing our paper.
**Q: A weakness, which the authors do acknowledge in their work, that prevented me from giving a higher score is that there is no clear connection between the structure of the weights and generalization, as there exists i... | Summary: This paper studies the structural properties of the solutions selected by gradient-based optimizers among the many possible minimizers of the NTP objective, the central challenge being to discern the "implicit bias" of the optimizer towards particular solutions.
Strengths: - The paper is generally well writte... | Rebuttal 1:
Rebuttal: Thank you for your time and for the positive feedback and score.
**Q: While the paper provides a very interesting starting point for studying the solutions found by gradient descent in NTP settings, it's not very clear whether margin maximization practically corresponds to any meaningful takea... | Summary: This study investigates the structural properties of solutions chosen by gradient-based optimizers for next-token prediction (NTP), framing NTP as cross-entropy minimization across various contexts with sparse conditional probability distributions over a finite vocabulary. It focuses on the optimization bias o... | Rebuttal 1:
Rebuttal: Thank you for your review.
Below, we clarify the distinctions from the references you mention and explain why our problem setting differs from studying self-attention/transformers, focusing instead on the NTP paradigm. While we have detailed these discussions in the submission, we repeat them he... | null | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
On $f$-Divergence Principled Domain Adaptation: An Improved Framework | Accept (poster) | Summary: This study addresses the gap in the theory and algorithms of unsupervised domain adaptation based on f-divergence proposed by Acuna et al. 2021. Specifically, while the theory uses absolute values, the algorithms do not, and this issue is resolved by introducing a single scaling factor. The newly proposed f-DD... | Rebuttal 1:
Rebuttal: We thank you sincerely for the valuable feedback on our paper. Our responses follow.
>- While the empirical validation is strong, it is limited to specific benchmarks. Broader validation across diverse datasets and tasks would strengthen the findings. It is nice to present some insight into what... | Summary: This paper studies the learning theory aspect of the domain adaptation problem, where the key is to bound the estimation errors between expectations over shifting distributions. Specifically, this work improves the recently developed $f$-divergence-based generalization analysis, where the main results ensure a... | Rebuttal 1:
Rebuttal: We thank you sincerely for your careful reading and valuable feedback on our paper. Our responses follow.
>- The proposed algorithm needs further justifications.
>- Q1. Theory and methodology. The major result ... the consistency between Eq. (4) and Eq. (5) seems to be important. Some justific... | Summary: This paper aims to develop an improved version of f-divergence-based unsupervised domain adaptation (UDA) learning theory. In particular, the authors introduce a novel f-divergence-based domain discrepancy measure (f-DD) by combining the two existing concepts, which are f-divergence and domain discrepancy. Bas... | Rebuttal 1:
Rebuttal: We thank you sincerely for your constructive comments. Our responses follow.
>- The novelty of the paper is quite limited since the f-divergence-based domain discrepancy measure (f-DD) is proposed by combining the two existing concepts, which are f-divergence and domain discrepancy.
**Response.*... | Summary: This paper improves the theoretical foundations of UDA proposed by previous work, named f-DD. By removing the absolute value function and incorporating a scaling parameter, f-DD yields novel target error and sample complexity bounds, allowing us to recover previous KL-based results and bridging the gap between... | Rebuttal 1:
Rebuttal: We thank you sincerely for your valuable feedback on our paper. Our responses follow.
>- The readability of the paper is poor. It is almost entirely composed of definitions, remarks, lemmas and theorems, lacking a figure to introduce the motivation of this paper and explain why the improved fram... | Rebuttal 1:
Rebuttal: We would like to thank all the reviewers for their constructive comments and valuable feedback. In addition to addressing your individual comments separately, we have also uploaded a PDF file that contains a figure and a table. Specifically:
1. Figure: In response to Reviewer HkGQ's comment that ... | NeurIPS_2024_submissions_huggingface | 2,024 | Summary: In this paper, new expected risk analysis based on f-divergence is provided for the unsupervised domain adaptation problem. Although there are prior researches on expected risk analysis based on f-divergence, several issues have been pointed out, such as the fact that the variational representation of f-diverg... | Rebuttal 1:
Rebuttal: We thank you sincerely for your positive evaluation and constructive comments. Our responses follow.
>- Are no assumptions ...?
**Response.** Our theoretical results, except for Proposition 1 which requires $\mathcal{H}$ to be sufficiently large, do not rely on any specific assumptions about $\m... | null | null | null | null | null | null |
Enhancing Semi-Supervised Learning via Representative and Diverse Sample Selection | Accept (poster) | Summary: The paper suggests a new sampling method for the labeled set of semi-supervised learning. This sampling method, termed RDSS, selects a set of examples that is both representative of the data and diverse. The paper shows that using such a sampling function improves both FreeMatch and FlexMatch, and compares it... | Rebuttal 1:
Rebuttal: Thank you very much for your careful review of our work. We hope our responses can address all your concerns.
**W1. Some of the claims made by paper already appeared in previous art [1-3]. The proposed manuscript does not reference or compare to any of these works.**
**Response:** Thank you for ... | Summary: This paper proposes a Representative and Diverse Sample Selection approach (RDSS) that utilizes a modified Frank-Wolfe algorithm to minimize a novel α-Maximum Mean Discrepancy (α-MMD) criterion, aiming to select a representative and diverse subset from unlabeled data for annotation. Experimental results demons... | Rebuttal 1:
Rebuttal: Thank you very much for your insightful comments. Since Weakness 1 and Question 4, as well as Weakness 2 and Question 7, address the same issue, we have consolidated them accordingly.
**Q1. The definition and usage of variable X in the article are inconsistent.**
**Response:** Thank you for your... | Summary: This paper proposes a new sample selection method, RDSS, for the SSL task. RDSS considers both the representativeness and diversity of the selected sample and achieves state-of-the-art performance. This is achieved by the proposed α-MMD criterion and an efficient optimization algorithm GKHR.
Strengths: 1. RDS... | Rebuttal 1:
Rebuttal: Thank you very much for your insightful comments, which have greatly contributed to improving the quality of our paper. We hope our responses can address all your concerns.
**W1. I would like to see images of the actual selected samples and visualizations of the feature distribution to demonstrat... | Summary: Choice of the labeled set in the semi supervised learning is critical for the final performance of the model. This problem can also be looked as AL with SSL, or single shot AL with SSL (in other words similar to experimental design). This works provides a way to select the seed set which is representative, as ... | Rebuttal 1:
Rebuttal: Thank you very much for your constructive comments, which have definitely helped us enhance the paper and highlight its contributions in a better way.
**W1. Given the vast literature on submodular/supermodular functions, is it not possible to get an algorithm purely from that standpoint? If so, h... | Rebuttal 1:
Rebuttal: Here is a visualization of the sampling results for Reviewer GzC9.
Pdf: /pdf/d3630f75c4a8a7bb8107fcbcb2b336aac5f52636.pdf | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
The Implicit Bias of Adam on Separable Data | Accept (poster) | Summary: The main focus of this paper is on the implicit bias of Adam for a single layer linear model which performs binary classification on separable data. In particular, assuming a zero stability constant $\epsilon$, this paper reveals that Adam finds the solution that achieves maximum-$\ell_\infty$-margin and chara... | Rebuttal 1:
Rebuttal: We appreciate your support, and address your questions as follows.
>**Q1**:
Adam without $\epsilon$ sometimes does not converge. Contradiction with [1, 2]? Study a non-zero $\epsilon$ and let $\epsilon=0$ be a special case?
**A1**:
Our goal is to study the implicit bias of Adam when the stabilit... | Summary: This paper examines the implicit bias of the Adam optimizer in the context of linear logistic regression, demonstrating that it converges to the maximum $\ell_\infty$-margin solution under certain mild conditions. The authors note that omitting the stability constant in Adam updates results in a different impl... | Rebuttal 1:
Rebuttal: We appreciate your positive comments. Your comments and questions are addressed as follows.
>**Q1**:
The paper does not present results for a fixed learning rate.
**A1**:
When considering fixed learning rate $\eta_t=\eta$ for some small $\eta$, our analysis can imply that $\lim_{t\to \infty}\big... | Summary: In this work, the author studies the implicit bias of Adam optimizer for a single layer neural network on separable data. The author's work suggests that, compared to the implicit bias of gradient descent which is the max $ \ell_2 $ margin solution, Adam solution converges to the maximum $ \ell_\infty $ margin... | Rebuttal 1:
Rebuttal: Thank you for your supportive comments! We address them in detail as follows:
>**Q1**:
Can the authors expand on how they arrive at the right side of inequality after line 292 using 6.1 ? Perhaps take me through the inequality step by step ?
**A1**:
Thanks for your question. We would like to exp... | Summary: This paper studies the implicit bias of the Adam optimizer for logistic regression on linearly separable data. The authors prove that Adam converges to the linear classifier with the maximum $\ell_\infty$-margin. This result contrasts with the classical results on (stochastic) gradient descent (with or without... | Rebuttal 1:
Rebuttal: Thank you for your constructive feedback! We address your comments as follows:
>**Q1**:
The paper does not provide an intuition why Adam and GD have different implicit biases. Relation to SignGD?
**A1**:
Thanks for your suggestion. Several recent works have discussed that Adam and SignGD are clo... | Rebuttal 1:
Rebuttal: Dear Reviewers,
We appreciate your supportive and constructive comments on our paper. We have addressed all your questions in detail in our individual responses to you. Here, as suggested by Reviewer Gj1V, we include a pdf page presenting some preliminary experiment results on training homogeneou... | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
Structural Inference of Dynamical Systems with Conjoined State Space Models | Accept (poster) | Summary: The paper introduces the SICSM framework, integrating Selective State Space Models (SSMs) with Generative Flow Networks (GFNs) to tackle challenges in dynamical systems characterized by irregularly sampled trajectories and partial observations. SICSM leverages the adaptive temporal modeling capabilities of SSM... | Rebuttal 1:
Rebuttal: We would like to thank Reviewer uuWM for the motivating review! Here are our answers to the concerns:
> The implementation of SICSM is computationally intensive, requiring significant resources and expertise. This complexity may limit its accessibility and widespread adoption.
Many thanks! We ac... | Summary: This paper proposes to combine State Space Models and Generative Flow Networks to perform structural inference in an irregular time series context. The proposed method is evaluated on a series of different tasks where it performs well, and compared to a number of baselines. The method's robustness to short tim... | Rebuttal 1:
Rebuttal: We would like to thank Reviewer 3YzK for the detailed and thoughtful comments. Here are our answers to the questions:
> My main concerns for the paper are its novelty and its low number of ablations, which make it hard to understand how specific pieces contribute to the performance of the method.... | Summary: The authors consider the problem of structure learning of dynamical systems from irregularly sampled trajectories and partially observed systems. They propose Structural Inference with Conjoined State Space Models (SICSM), a method based on selective state space models (SSMs) and generative flow network (GFNs)... | Rebuttal 1:
Rebuttal: We would like to thank Reviewer hoUX for the thoughtful comments. Here are our answers to the questions:
> The method has 3 key components: [...]. It is not entirely clear how these individual components interact and the explicit need for the GFN.
The state space model in our approach handles th... | Summary: Processes of scientific interest which are representable as graphs, in biology, chemistry, material sciences, mechanics, are an important application for machine learning. Nodes often represent physical objects, some of which influence each other. Nodes exhibit a set of features which can be observed over time... | Rebuttal 1:
Rebuttal: We would like to thank the reviewer for the inspiring review. Here are our answers to your concerns.
> To improve the writing, a running example might help bridge the abstractions (node, edge, state...) to physical reality [...]
We would like to sincerely thank the reviewer for this advice.... | Rebuttal 1:
Rebuttal: Dear Program Chairs, Senior Area Chairs, Area Chairs, and Reviewers,
We are deeply grateful for the detailed reviews and constructive feedback provided by Reviewers 2uCG, hoUX, 3YzK, and uuWM. We appreciate the recognition of the novelty and applicability of our work in addressing the complex cha... | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
Assembly Fuzzy Representation on Hypergraph for Open-Set 3D Object Retrieval | Accept (poster) | Summary: This paper presents a novel 3D object retrieval method. First, to facilitate this task, the authors build 3 datasets for training and evaluation, which may significantly benefit the community. Then the paper proposes the Isomorphic Assembly Embedding (IAE) and the Structured Fuzzy Reconstruction (SFR) modules, ... | Rebuttal 1:
Rebuttal: 1. **Visualization (Weakness 1)**
We apologize for the lack of sufficient visualization results. We provide some visualized examples of the retrieval results in Fig. R3 of the rebuttal PDF and we will provide more.
2. **Equations (Weakness 2)**
We have revised the expression and explanation of the... | Summary: The manuscript introduces a framework (HAFR) for addressing the challenge of open-set 3D object retrieval. The authors propose a bottom-up approach focusing on part assembly, leveraging both geometric and semantic information of object parts to enhance retrieval performance across categories, including those u... | Rebuttal 1:
Rebuttal: 1. **Varying numbers of parts (Weakness 1)**
In this manuscript, HAFR currently takes 4 part features as input for each object. As shown in Fig. R2 of the rebuttal PDF, the steps for part feature generation are as follows:
a) Input the point clouds of an object.
b) For each point, obta... | Summary: This paper proposes to utilize the part-assembly representation method to mitigate the distribution skew of unseen categories, enhancing the generalization performance for open-set 3D object retrieval. Compared to previous methods, this paper benefits from part-level representation learning rather than object-... | Rebuttal 1:
Rebuttal: **Response for Reviewer ZtnM**
We sincerely thank you for the valuable comments and advice, which provided important guidance for us to enhance the rigor and coherence of our paper and directed the focus of our future work.
1. **About the generalization ability of the model (Answer for Weakness ... | Summary: This paper presents a method for finding similar samples from a set of 3D objects given query objects in an open setting, where objects can belong to both already seen and new categories. This method is based on considering 3D objects as hypergraphs consisting of individual geometric and semantic parts of obje... | Rebuttal 1:
Rebuttal: 1. **Generalization ability and comparison (Weakness 1)**
All categories of the testing set are unseen during training (widely accepted of open-set retrieval [1-2]), the retrieval results in this paper are experimented on the unseen categories. The compared results between different methods are ... | Rebuttal 1:
Rebuttal: We thank all reviewers for your insightful feedback and for your valuable time and effort. We address all the questions and weaknesses raised by each reviewer in the rebuttal sections below. The attached PDF contains our additional experimental results and figures.
Pdf: /pdf/9b8638f5e91d96e35326ff58... | NeurIPS_2024_submissions_huggingface | 2,024 | Summary: This paper proposes a framework for open-set 3D object retrieval, called the Hypergraph-Based Assembly Fuzzy Representation (HAFR) framework. This model leverages an Isomorphic Assembly Embedding (IAE) to integrate geometric and semantic consistency. Furthermore, a Structured Fuzzy Reconstruction (SFR) is used... | Rebuttal 1:
Rebuttal: **Response for Reviewer 3mhr**
We sincerely thank you for the valuable comments and advice, which provided important guidance for us to enhance the rigor and coherence of our paper and directed the focus of our future work.
1. **About the ablation study on $K$-value (Answer for Weakness 1)**:
We... | null | null | null | null | null | null |
Infinite-Dimensional Feature Interaction | Accept (poster) | Summary: This work proposes a novel approach for enhancing neural network performance by scaling feature interaction spaces to infinite dimensions using kernel methods. Recent advancements have introduced feature interaction spaces, but these are often limited to finite dimensions, primarily through element-wise multip... | Rebuttal 1:
Rebuttal: We sincerely appreciate the constructive comments from reviewer u52R and the time spent on reviewing this paper. We address the questions and clarify the issues accordingly as described below.
>**[Weakness 1]**: There is no theoretical justification that increasing the dimension of the feature-fe... | Summary: This paper studies placing a kernel function inside of a neural network architecture to facilitate interaction of features/dimensional expansion. They consider deep convolutional networks with parallel pathway features $x$ and $x'$ and a kernel function computed with both pathways' features as inputs $k(x,x')$... | Rebuttal 1:
Rebuttal: We sincerely appreciate the constructive comments from reviewer USts and the time spent on reviewing this paper. We address the questions and clarify the issues accordingly as described below.
>**[Weakness 1]**: Notation is not explained...
**[Re: W1]**: We'll add a detailed explanation of notat... | Summary: The authors present a new architecture for computer vision applications that models high-order interactions between features. The architecture is similar to an attention block, but introduces an RBF Kernel layer that captures interactions of order higher than two. The resulting method has strong empirical perf... | Rebuttal 1:
Rebuttal: We sincerely appreciate the constructive comments from reviewer Jbm2 and the time spent on reviewing this paper. We address the questions and clarify the issues accordingly as described below.
>**[Weakness 1]**: The presentation of the method seems overly complex in some places. For example, prov... | Summary: The paper shifts the focus from traditional neural network design, which emphasizes feature representation space scaling, to feature interaction space scaling. It introduces a new model architecture, InfiNet, that enables feature interaction within an infinite-dimensional space using the RBF kernel, leading to... | Rebuttal 1:
Rebuttal: We sincerely appreciate the constructive comments from reviewer Tii5 and the time spent on reviewing this paper. We address the questions and clarify the issues accordingly as described below.
>**[Weakness 1]**: The paper builds on the simple use of kernel methods. The novelty of the method is mi... | Rebuttal 1:
Rebuttal: Dear Area Chair and Reviewers,
We appreciate the reviewers' precious time and valuable advice. We are happy that most of the reviewers acknowledged our novel idea (Tii5, Jbm2, USts, u52R) and experiments (Tii5, Jbm2, USts, u52R).
At the same time, we note the concerns and suggestions of the reviewers on... | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
Universal Online Convex Optimization with $1$ Projection per Round | Accept (poster) | Summary: This paper introduces methods for constrained OCO
which automatically achieve the optimal rate without knowing
in advance whether the losses are convex, strongly convex,
exp-concave, or smooth, while using only 1 projection per round.
This is notable because the standard approach proceeds by combining
several ... | Rebuttal 1:
Rebuttal: Many thanks for the constructive reviews! We will revise our paper accordingly.
---
**Q1:** The main weakness is that the paper feels poorly factored.
**A1:** We apologize for the confusion caused by the numerous references to previous equations in this paper. Due to the page limit, we attempted ...
Rebuttal: Many thanks for the constructive reviews! We will revise our paper accordingly.
---
**Q1:** Would a simple doubling trick allow avoiding prior knowledge of the parameters $G$ and $T$?
**A1:** In fact, the doubling trick enables our proposed algorithm to avoid the prior knowledge of $T$, at the co... | Summary: This paper studies universal OCO algorithms with fewer projections. Previous work either use $O(\log T)$ projections per round, or have a sub-optimal dependence on $d$ for strongly-convex loss. This work designs a new surrogate loss to achieve tight regret for Lipschitz convex/exp-concave/strongly-convex losse... | Rebuttal 1:
Rebuttal: Many thanks for the constructive reviews. We will revise our paper accordingly.
---
**Q1:** The significance of our result.
**A1:** We emphasize that the $d$-dependence is significant in online learning studies. We provide the following elaborations and will also revise the paper to highlight t... | null | null | null | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
Keeping LLMs Aligned After Fine-tuning: The Crucial Role of Prompt Templates | Accept (poster) | Summary: This paper proposes a mitigation strategy called "pure tuning, safe testing" to mitigate harmful finetuning issues for LLMs. The strategy is very simple, basically to use a safety system prompt for inference and do finetuning without such a prompt. The core philosophy is that harmful knowledge in the finetunin... | Rebuttal 1:
Rebuttal: We sincerely thank the reviewer for the positive and constructive feedback. Below, we address the reviewer’s questions.
**Q1** (rephrased): It would be appreciated if the authors could provide an explanation for why using different templates does not simultaneously lower the helpfulness while th... | Summary: This paper shows that the prompt templates used during fine-tuning and inference play a crucial role in safety alignment. Then, the authors propose to fine-tune models without a safety prompt, but include it at test time (user inference), which is counter to intuition. The authors demonstrate their method in t... | Rebuttal 1:
Rebuttal: We sincerely thank the reviewer for acknowledging that the paper is clearly written and that the proposed PTST strategy is novel. Below, we address the reviewer’s questions.
**Q1:** This paper argues that training and testing on the same prompt template makes attacking easier. However, the trainT... | Summary: This paper addresses a critical issue, i,e., LLMs' loss of safety after being fine-tuned. The authors pay their attention to the prompt templates used during fine-tuning and testing, which leads to the main observation that fine-tuning with the valina template and testing with the safe template yields the best... | Rebuttal 1:
Rebuttal: We sincerely thank the reviewer for acknowledging the novelty and promising performance of PTST. Below, we address the reviewer’s questions.
**Q1:** I feel it necessary for the authors to at least propose some hypotheses on the underlying mechanism of PTST and try to verify them with concrete exp... | Summary: This paper discusses the issue of maintaining model consistency after fine-tuning large language models (LLMs). The research team, through extensive experiments, found that the prompt templates used during fine-tuning and inference play a crucial role in maintaining model safety. The paper proposes the "Pure T... | Rebuttal 1:
Rebuttal: We sincerely thank the reviewer for reviewing our paper and for acknowledging our experiments as extensive. However, we’d like to point out that the reviewer’s main comment (1, 2 below) under “weakness” (as well as Question 2, Question 3 and Limitation 2) suggests they **may not** have absorbed th... | Rebuttal 1:
Rebuttal: We sincerely thank all the reviewers for their time and effort in reviewing our paper. Below we address some common questions that are raised by more than one reviewer.
**Q1:** Can the authors provide some discussion on why PTST works and how this method might inspire future explorations? (by SwB... | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
Communication-Efficient Federated Group Distributionally Robust Optimization | Accept (poster) | Summary: This work introduces three algorithms for communication-efficient Federated Group Distributionally Robust Optimization. The effectiveness of the proposed algorithms is verified through both theoretical and experimental results.
Strengths: 1) This work studies an important problem of federated group distribut... | Rebuttal 1:
Rebuttal: Thank you for the review! We believe we can address your concerns as follows.
***Q1: There lacks a comparison between the three proposed algorithms. What are the connections and differences between these algorithms?***
***A:*** FGDRO-CVaR and FGDRO-KL employ well-established regularization tech... | Summary: This paper addresses the challenge of reducing communication costs and sample complexity in Federated Group Distributionally Robust Optimization (FGDRO). The authors present the FGDRO-CVaR algorithm and the FGDRO-KL algorithm to address different constraints. Subsequently, they conduct extensive experiments ac... | Rebuttal 1:
Rebuttal: Thank you for the review! We address your suggestions and concerns as follows.
***Q1: The introduction's treatment of the concept of generalization appears incomplete. It is evident that there are two levels of generalization in Federated Learning, as delineated in two prior works.***
***A:***
Th... | Summary: The paper presents three methods for Federated Learning Group Distributionally Robust Optimization: (i) one tailored to reduce the CVaR which optimizes the top K-losses, (ii) another one tailored to tackle the KL divergence, and finally (iii) one that uses Adam locally. The paper is well written and the ideas ... | Rebuttal 1:
Rebuttal: Thank you for your time to review! Below we address your concerns and suggestions.
***Q1: Why should solutions be designed to be distributionally robust? And at what cost? If we compare with a method that simply maximizes/minimizes the FL objective, what is the overall loss?***
***A:*** Designing dis... | Summary: This paper aims to improve the efficiency of existing federated group distributionally robust optimization (FGDRO) when considering two specific types of regularization, condition value at risk and KL divergence. To address the first type of problem, the authors propose FGDRO-CVaR that reduces the sample compl... | Rebuttal 1:
Rebuttal: Thank you for the review! We address your questions as below and will include the discussion in the revision.
***Q1: Why is FGDRO an important problem or technique?***
***A:*** In federated learning, data is distributed across multiple clients, each with its own unique data distribution. FGDRO ... | null | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
Can an AI Agent Safely Run a Government? Existence of Probably Approximately Aligned Policies | Accept (poster) | Summary: This paper defines social markov decision processes (SMDPs) as an MDP generalization incorporating a population of individuals with distinct utility profiles aggregated by a social welfare function. It provides a novel quantitative definition of alignment in this context, then leverages this definition to char... | Rebuttal 1:
Rebuttal: We thank the reviewer for their comments and for the time spent reviewing our paper.
**Feasibility of safeguarding black-box policies in SMDPs**
We argue that the main issue is not whether safeguarding a black-box policy is feasible, but whether it produces a useful policy. Indeed, the practical... | Summary: This paper applies ideas from the Probably Approximately Correct framework to agent alignment. The paper defines a new idea of a policy which is Probably Approximately Aligned and explores the existence of such policies under certain assumptions of social welfare and models of the world. The authors show that ... | Rebuttal 1:
Rebuttal: We thank the reviewer for their comments and for the time spent reviewing our paper.
**Usefulness of the theoretical results**
We understand that the reviewer's primary concern is the lack of clarity regarding the applicability of the theoretical results to real-world scenarios. While we do not ... | Summary: The paper aims to define alignment quantitatively and ensure AI agents' actions are predictable and safe. The paper start by outlines the basics of utility and social choice theory, focusing on quantifying social satisfaction and the conditions under which it is measurable. Next, the paper defines probably app... | Rebuttal 1:
Rebuttal: We thank the reviewer for their comments and for the time spent reviewing our paper.
We understand that the primary concern of the reviewer is the gap between the theoretical results presented in the paper and their practical implementation in real-world scenarios (in particular for safe poli...
Rebuttal: We thank the reviewer for their comments and for the time spent reviewing our paper.
It is primarily mentioned that the problem is not well presented. We agree that clarity in presentation is crucial and would greatly appreciate specific suggestions on how to improve it. Below is a detailed discu... | null | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
Regularized Adaptive Momentum Dual Averaging with an Efficient Inexact Subproblem Solver for Training Structured Neural Network | Accept (poster) | Summary: This article proposes an optimization algorithm RAMDA for training structured neural networks, which combines a number of optimization techniques including dual averaging, momentum, and coordinate-wise preconditioners. Similar to the existing RMDA algorithm, RAMDA also has the capacity to identify the local ma... | Rebuttal 1:
Rebuttal: We would like to thank the reviewer for the careful checking of our paper, including the proofs.
These days, it has become increasingly rare to see a reviewer spend this much effort and time reviewing papers, and we truly appreciate it.
And we thank the reviewer for pointing out our careless... | Summary: This paper develops regularized adaptive momentum dual averaging (RAMDA) for structured neural networks. The method uses the preconditioning matrix to accelerate the convergence of a regularized momentum dual averaging (RMDA) method at the price of requiring the local solver (e.g. standard proximal gradient me... | Rebuttal 1:
Rebuttal: We would like to thank the reviewer for the careful evaluation of our paper and the invaluable suggestions. Our response is as follows.
**Q1.**
Equation 3 is indeed correct: we are using the cube root.
This choice of the preconditioner follows the empirical success of the MADGRAD algorith... | Summary: #### Summary
The paper introduces the Regularized Adaptive Momentum Dual Averaging (RAMDA) algorithm for training structured neural networks. RAMDA addresses the challenge of solving the subproblem involved in the regularized adaptive methods, which typically lacks a closed-form solution. The paper presents an... | Rebuttal 1:
Rebuttal: We thank the reviewer for the evaluation of our paper. Our reply is as follows.
1. Computational Complexity: The complexity of the subproblem (Eq. 4) depends on the regularizer and especially its associated
proximal operation. Let the subproblem dimension, which is also the model size, be $n... | null | null | Rebuttal 1:
Rebuttal: We thank the reviewers for their careful evaluations of our paper.
We will individually reply to the reviewers to address their specific questions, and here we would like to highlight some changes that we will make in our revision.
- For Theorem 1, the rate when $\psi$ is convex can be improved t... | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
Multi-modal Transfer Learning between Biological Foundation Models | Accept (poster) | Summary: The paper introduces a novel multi-modal model, IsoFormer, designed to integrate DNA, RNA, and protein sequences for predicting RNA transcript isoform expression across different tissues. It utilizes pre-trained modality-specific encoders to generate embeddings that are then combined using a sophisticated aggr... | Rebuttal 1:
Rebuttal: We thank the reviewer for the insightful feedback and positive comments of our work.
> For Tab 5., wonder what’s the performance for “DNA and RNA encoder not pre-trained”
We have now completed this ablation study with full evaluation of the effect of pre-training for each encoder (detailed in th... | Summary: The paper introduces a new framework for a multi-modality pretrained model based on the central dogma of biology. The method encodes DNA, RNA, and protein at the same time. The proposed method can transfer knowledge from encoder pretraining and across modalities.
Strengths: The paper is well-organized and eas... | Rebuttal 1:
Rebuttal: We appreciate this reviewer's comments and suggestions that will improve the revised manuscript.
> More ablation studies should be conducted about removing different modalities of the model in Table 2 ( e.g. we observe only RNA can achieve a high performance, what about protein+RNA? ).
We have n... | Summary: The paper models isoform relative abundance across tissues with a multimodal approach based on 3 pretrained encoders for DNA, RNA, and AA sequences. DNA encoder uses a sequence centered on the gene’s TSS, RNA encoder uses the known isoform sequence from RNAseq and the protein encoder uses corresponding AA seq... | Rebuttal 1:
Rebuttal: Many thanks for the constructive comments and positive assessment of our work.
> I don’t think the authors can claim this is the first attempt to combine DNA, RNA, and AA modalities with techniques from NLP. See the recent Evo work
Evo is a model based solely on DNA sequences and has been applie... | null | null | Rebuttal 1:
Rebuttal: We would like to thank the reviewers for their time reading our manuscript and for providing constructive feedback in their reviews.
We are glad the reviewers value positively our approach to combine different biological modalities together and emphasize that the experimental results support the ... | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
Fast T2T: Optimization Consistency Speeds Up Diffusion-Based Training-to-Testing Solving for Combinatorial Optimization | Accept (poster) | Summary: The paper introduces Optimization Consistency Models (OptCM) as a novel method for solving combinatorial optimization (CO) problems efficiently. Traditional diffusion models, although powerful, are computationally intensive due to their iterative denoising processes. OptCM overcomes this limitation by learning... | Rebuttal 1:
Rebuttal: Thanks for your insightful comments and for acknowledging novelty, model design, and empirical performance. Below we respond to the specific comments.
> **Q1: How does the computational complexity of OptCM compare with traditional diffusion models and other state-of-the-art neural solvers?**
Sin... | Summary: This paper advances CO DM-based neural solvers under the setting where labeled training graphs are available by considering Consistency Models and gradient search (which was adopted from T2T).
Strengths: 1- The paper is in general well-written and technically sound.
2- The use of CMs to accelerate the sampli... | Rebuttal 1:
Rebuttal: Thanks for the valuable comments, and for acknowledging our writing and technical soundness. Nonetheless, we believe there may exist some misunderstandings, especially regarding the value of the research line of data-driven learning-based solvers. It seems the major concern of the comment is about... | Summary: This paper presents Optimization Consistency Models (OptCM) for solving combinatorial optimization (CO) problems efficiently. By leveraging the consistency model, OptCM maps varying noise levels to optimal solutions in a single step, significantly reducing computational overhead. This approach is validated thr... | Rebuttal 1:
Rebuttal: Thanks for your valuable comment, nice suggestions, and for acknowledging our soundness, presentation, experimental extensiveness, and empirical performance. We seriously value the main novelty concern reflected in the comment and carefully address it in the general response. Below we respond to y... | Summary: This paper introduced a new algorithm for solving some classic combinatorial optimization problems. The method falls into the category of learn-based generative solvers. More specifically, it is a direct extension of the DIFUSCO [1] and T2t [2] solver, which are diffusion-based generative solvers. The improvem... | Rebuttal 1:
Rebuttal: Thank you for your valuable comments and insightful suggestions, as well as for acknowledging our novelty, motivation, non-trivial contributions, and convincing evaluations. Your questions and suggestions are instrumental in further strengthening our paper. Below, we respond to your specific comm... | Rebuttal 1:
Rebuttal: We sincerely appreciate the reviewers’ time, valuable feedback, and constructive suggestions. Overall, the reviewers deem our work as "well-motivated" (3pWn), "well-written" (3pWn, rPB6), and "technically sound" (rPB6), acknowledging our "novel" "robust" "versatile" methodology and model design (3... | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
Med-Real2Sim: Non-Invasive Medical Digital Twins using Physics-Informed Self-Supervised Learning | Accept (poster) | Summary: This paper proposes a novel method for creating patient-specific digital twins using non-invasive patient health data. The authors introduce a physics-informed self-supervised learning (SSL) algorithm that pretrains a neural network on learning a differentiable simulator of the cardiac process. Then, another m... | Rebuttal 1:
Rebuttal: We thank the reviewer for the very thoughtful comments and feedback!
**Weaknesses:**
Thank you for highlighting these issues. Please see below a point-by-point response to your concerns.
* While the Windkessel model does not fully capture the complexity of cardiac dynamics, this simplicity is ... | Summary: This paper introduces a novel methodology for identifying patient-specific digital twins using noninvasive medical imaging, particularly focusing on cardiac hemodynamics. By leveraging a physics-informed self-supervised learning approach, the research addresses the challenge of modeling digital twins without i... | Rebuttal 1:
Rebuttal: We thank the reviewer for your very thoughtful comments! Below, we provide a detailed response to the weaknesses and questions.
**Weaknesses:**
Please see below a point-by-point response to your concerns.
* The limited baseline comparisons stem from the novelty of our problem setup. While ther... | Summary: I have read this manuscript during ICML review. It looks the same so I copied my previous review.
The authors presented a method to infer the physical parameters θ of physiological process (heart pumping blood) from noninvasive observation y (the echo image). The mapping from y to θ cannot be directly learned... | Rebuttal 1:
Rebuttal: Thank you for your review of our paper. We greatly appreciate your and other reviewers feedback from our ICML submission, and we have implemented changes to address these comments in our new submission. We are happy to share that we have **added additional validation and comparison of the physics ... | Summary: The paper proposes a method to identify parameters for digital twin models of patients using non-invasive health data, eliminating the need for invasive procedures. This method focuses on scenarios like cardiac hemodynamics, where traditionally invasive measurements (e.g., through catheterization) can be predi... | Rebuttal 1:
Rebuttal: Thank you for the feedback on our paper!
The proposed method is indeed general and not limited to the cardiovascular system example. Any physical or biological system that can be described through ordinary differential equations can utilize our approach. The problem setup in Section 2.1 and the ... | Rebuttal 1:
Rebuttal: We thank all reviewers for their valuable feedback. We would like to take this opportunity to summarize the key contributions of our work, address common concerns across the reviews, and clarify some aspects of our methodology that may have not been fully appreciated.
**Summary of Contributions**... | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
Trading off Consistency and Dimensionality of Convex Surrogates for Multiclass Classification | Accept (poster) | Summary: This paper studies surrogate loss design and the trade-off between surrogate consistency and loss dimension. The contributions are three-fold: (1) the characterization of the hallucination region, where the decoded prediction from the surrogate loss minimizer gives a class with no target probability mass, indi... | Rebuttal 1:
Rebuttal: To clarify Corollary 7, the number of outcomes $n$ is $n=2^d$, and we embed said outcomes into the vertices of the d-dimensional hypercube, which has $2^d$ vertices. It turns out that for this choice of d, we obtain consistency for any value of $\alpha < 0.5$. In other words, for this choice of $d... | Summary: This paper proposes a method called polytope embedding, which embeds multiclass predictions onto real numbers. The paper studies the properties of this embedding, like hallucination and calibration. Further, with low-noise assumptions, the authors showed more calibration results for their embedding in some cas... | Rebuttal 1:
Rebuttal: Currently, the definition of partial consistency is implicit via Definition 2, but we say that a surrogate and link $(L, \psi)$ are partially consistent if it is calibrated only over some $\mathcal{P} \subsetneq \Delta_{\mathcal{Y}}$ and not calibrated over $\Delta_{\mathcal{Y}}$. We will make thi... | Summary: The paper examines the trade-off between consistency and dimensionality in multi-class classification. It has been known that the lower bound on the dimension for consistent surrogate losses under any distribution is $n - 1$, wheren $n$ is the dimension of the input space. The authors propose the notion of par... | Rebuttal 1:
Rebuttal: We briefly respond to individual questions in order:
Q1: In general, finding the $\min_{\alpha \in [0, \frac 1 2)} \alpha$ that (almost*) guarantees consistency is possible through property elicitation, though it is laborious.
*Working directly with consistency conditions is often much more diff... | Summary: In this paper, the problem of constructing consistent multiclass surrogate losses for the 0-1 loss while reducing the dimension of the scoring function is studied. The concept of partial consistency, which can be dated back to the study of multiclass SVM, is used as a crucial part of this work. It is first rev... | Rebuttal 1:
Rebuttal: The cost of inference can be broken down into two parts: (a) the cost of a forward pass through a model, and (b) the cost of computing the link function. For (a), this is typically unchanged or even reduced by loss function design (reduced when prediction dimension is lowered). For (b), in the pap... | Rebuttal 1:
Rebuttal: Thank you to all of the reviewers for their feedback. We sincerely appreciate the time and effort you put into these reviews. We see them as very constructive and believe they will help us improve this work. We have written a response for each reviewer addressing individual concerns and questions ... | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
From Unstructured Data to In-Context Learning: Exploring What Tasks Can Be Learned and When | Accept (poster) | Summary: The paper presents three theoretical analyses related to ICL. Section 2 shows that we can use CBOW to do the (country)-(capital) kind of ICL. Section 3 shows that positional embeddings, multiple layers in autoregressive LMs, and blocked noise structures are important for ICL. Section 4 sho...
Rebuttal: Thanks for reviewing our paper. We are delighted that you found it both easy to follow and supported by strong theoretical conclusions and empirical simulations. Below, we address your comments.
> It would be clearer to also illustrate the correct answer for each prompt and provide some brief e...
Rebuttal: Thanks for reviewing our paper. Below, we address your comments.
> The paper states that "ICL is achievable by only modeling co-occurrence information using CBOW". However, this seems to miss the generality with which the term ICL is used. … So to say that "ICL is achievable" seems like a misuse ... | Summary: The paper studies the emergence of ICL using a synthetic setting. Particularly, it focuses on the importance of concurrence statistics to ICL, and shows that under some simplified conditions, a CBOW-styled model is proven to complete the correct completion for an ICL example. The paper additionally proves the ... | Rebuttal 1:
Rebuttal: Thank you for reviewing our paper. We are pleased that you considered the problem we addressed important and found the paper well-presented and very readable. Below, we address your comments.
> It would be interesting to try deriving results on cases where the input consists of valid grammatical ... | Summary: The paper investigates the emergence of ICL from training on unstructured data. It explores two types of ICL tasks: the first involves input-output pairings that frequently co-occur within sentences, and the second comprises recognizable patterns that do not commonly co-occur. The authors demonstrate that the ... | Rebuttal 1:
Rebuttal: Thank you for reviewing our paper. We are pleased that you found our work valuable in enhancing the understanding of ICL on unstructured training data, and our theoretical and empirical results well-supported. Below, we address your comments.
> lack of experiment details in the paper, ... number ... | Rebuttal 1:
Rebuttal: We would like to thank all reviewers for providing constructive and insightful reviews. Please find our response to each reviewer in the "Rebuttal" section. Also, the updated Figure 1 (as requested by Reviewer xShA) is attached here.
Pdf: /pdf/29af091604ae57951a34989ef4894d6558264083.pdf | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
On Convergence of Adam for Stochastic Optimization under Relaxed Assumptions | Accept (poster) | Summary: This paper provides two probabilistic convergence rates for Adam with generalized affine variance noise under smoothness and generalized smoothness conditions, respectively, which achieve results comparable to many prior works.
Strengths: Please see the above Summary.
Weaknesses: 1. I suggest that authors shou... | Rebuttal 1:
Rebuttal: We thank the reviewer for the effort invested in our paper. Below are our responses to the major concerns.
**Response to Weaknesses 1-3: Thank you for the suggestions on the presentation. We will revise as follows accordingly.**
1. In Line 14, we will replace '$g_t$' with '$g_t = \frac{\par... | Summary: In this paper, the authors analyze the convergence of Adam under milder noise conditions (affine variance) and milder smoothness conditions (both $L$-smoothness and $(L_0,L_q)$-smoothness) and establish an $O(\text{polylog}(T)/\sqrt T)$ convergence rate.
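For reference, the two relaxed conditions named in this summary are commonly stated in the literature as follows; this is a standard formulation, and the exact constants and form used in the paper may differ. The affine variance noise condition bounds the stochastic gradient error by

$$\mathbb{E}\left[\|g_t - \nabla f(x_t)\|^2 \mid x_t\right] \le \sigma_0^2 + \sigma_1^2 \|\nabla f(x_t)\|^2,$$

and generalized $(L_0, L_q)$-smoothness relaxes $L$-smoothness by letting the local smoothness constant grow with the gradient norm:

$$\|\nabla f(x) - \nabla f(y)\| \le \left(L_0 + L_q \|\nabla f(x)\|^q\right)\|x - y\| \quad \text{for } \|x - y\| \text{ sufficiently small.}$$

Setting $\sigma_1 = 0$ recovers the bounded-variance noise condition, and setting $L_q = 0$ recovers standard $L$-smoothness.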
Strengths: This paper analyses the convergence of Adam un... | Rebuttal 1:
Rebuttal: Thanks a lot for your valuable feedback and suggestions!
**Response to Weakness**
1. Indeed, numerical results on Adam with/without corrective terms can be found in [10]. We also performed a simple experiment in Table 1 (in the attached PDF), which roughly aligns with these results.
2. Thanks a lot... | Summary: This paper studies the high-probability convergence of Adam in the non-convex setting under relaxed assumptions. The authors consider a general noise condition that governs affine, sub-Gaussian, and bounded noise conditions. They also consider a generalized smoothness condition motivated by language model expe... | Rebuttal 1:
Rebuttal: We thank the reviewer for the effort and valuable suggestions on our manuscript.
**Response to Weakness: We clarify as follows, and we will revise the presentation issue accordingly.**
- First, in the non-asymptotic analysis (see Table 1), which we follow, the total iteration number $T... | null | null | Rebuttal 1:
Rebuttal: We thank all reviewers for their comments and suggestions! In the global rebuttal, we have:
- **a summary of the proof novelty**
- **a simple experiment in the attached PDF as supplementary material for our main results.**
While our proof borrows some ideas from [42, 10, 14, 2, 38, 19] as we ment... | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
MeshFormer : High-Quality Mesh Generation with 3D-Guided Reconstruction Model | Accept (oral) | Summary: This paper introduces MeshFormer, a sparse-view reconstruction model designed to generate high-quality 3D textured meshes from sparse RGB images and their corresponding normal maps. By leveraging voxel representation, 3D inductive biases, SDF loss, and normal information, the model shows comparable inference p... | Rebuttal 1:
Rebuttal: ## More mathematical symbols and equations
Thank you for pointing this out. We will follow your suggestion to include more mathematical symbols and equations in our revision when introducing the method.
## More implementation details
We will follow the suggestion to include more implementation de... | Summary: The paper proposes a high-quality feed-forward 3D object reconstruction method from sparse view RGB images. It uses an explicit voxel structure for better geometric inductive bias, auxiliary inputs such as 2D diffusion generated normal images and SDF representation for better geometric details, and an end-to-e... | Rebuttal 1:
Rebuttal: ## Geometry supervision for real-world training datasets
We agree that image supervision is easier to add when extending to real-world training datasets. However, it is not impossible to obtain corresponding depth maps and even meshes for real-world RGB images, such as through depth sensors or Str... | Summary: In this work, the authors propose a sparse view reconstruction model that utilizes a set of images (with camera poses) and corresponding normal maps to produce a reconstructed textured mesh. The primary contribution lies in adopting voxel-based 3D representation and employing a network architecture that integr... | Rebuttal 1:
Rebuttal: # No real-world images tested?
We would like to clarify that one of our main testing datasets, OmniObject3D, is a real-world scanned 3D dataset. In addition, we also include some qualitative examples with real-world input in our rebuttal PDF (see Fig. 3), where MeshFormer performs quite well.
Th... | Summary: This paper proposes an improved framework for feed-forward reconstruction models. The authors advocate a number of improvements over the initial design of Large Reconstruction Model, including model architecture and training schemes. Experiments show that the method reconstructs better geometry and texture on ... | Rebuttal 1:
Rebuttal: ## Experiments tried/ablated but did not show significant differences
We are happy to follow the reviewer's suggestions to include more discussions about the experiments we have conducted in our revision, such as:
- the difference between joint training and separate training of the dense model and... | Rebuttal 1:
Rebuttal: We thank all reviewers for their insightful comments and valuable suggestions. We are pleased to note that all five reviewers were supportive of our work:
- They complimented the impressive mesh quality with fine-grained geometric details (r7MY, bHQc, 3423, mWQL, V11k).
- They praised our fast tr... | NeurIPS_2024_submissions_huggingface | 2,024 | Summary: In this work, the authors propose MeshFormer, a sparse-view reconstruction model that explicitly leverages 3D native structure, input guidance, and training supervision. They leverage 3D sparse voxels as their representation and combine transformers with 3D (sparse) convolutions to inject 3D prior. Additionall... | Rebuttal 1:
Rebuttal: ## Thin structures
We would like to clarify that the loose thread of the toy was not displayed due to a slight pose mismatch when visualizing the results. In fact, the loose thread is reconstructed by our MeshFormer. We have included additional views of our generated results (see Figure 1 of the r... | null | null | null | null | null | null |
Limits of Transformer Language Models on Learning to Compose Algorithms | Accept (poster) | Summary: This paper studies whether transformers can efficiently learn compositional discrete tasks. In particular, the paper introduces two new tasks: pointer execution neighbor and pointer execution reverse multicount as well as using multiplication and highest subsequence sum from prior work. First, small models are... | Rebuttal 1:
Rebuttal: Thank you for your insightful comments and observations.
---
**(W1)** As shown in our results, H1 can be fixed to a very large range, and 100 was chosen for illustrative purposes; what is crucial is that we assume it is empirically much lower than H2. This means that H1 is defined to make the ... | Summary: This paper focuses on analyzing Transformer language models' learning and transferability on compositional discrete tasks. Specifically, it poses four hypotheses, and the authors study, for a variety of language models, whether these hypotheses hold.
H1. An LLM can learn to perform a compositional task... | Rebuttal 1:
Rebuttal: Thank you for your positive feedback, insightful comments, and observations.
**PEN and PERM: clarification and motivation**
To better understand the PEN and PERM tasks, we start the exposition by explaining the original Pointer Execution (PE) task using our encoding scheme.
The PE task is simil... | Summary: This paper evaluates the compositional learning abilities of Transformer-based models with LLaMA-like architecture on tasks requiring the composition of several discrete sub-tasks. To this end, the paper reuses two existing compositional algorithmic tasks and introduces two new ones, focusing on how many sampl... | Rebuttal 1:
Rebuttal: Thank you for your insightful comments and observations.
---
**(W1)** We thank the reviewer for raising potential concerns regarding tokenizations. We already searched through different task designs to improve the performance of GPT-4 and Gemini-Pro. The reviews motivated us to conduct further i... | Summary: The paper investigates the capabilities of Transformer-based language models in learning compositional discrete tasks. The authors evaluate both training LLaMA models and prompting GPT-4 and Gemini-Pro on tasks that require the learning of compositions of several discrete sub-tasks. The results indicate that t... | Rebuttal 1:
Rebuttal: Thank you for your insightful comments and questions.
---
**(W1)** We acknowledge that the investigated collection of tasks does not cover the full range of real-world applications. Nonetheless, this is a common choice across different works in this domain [paper references 16, 29, 30]. It allow... | Rebuttal 1:
Rebuttal: We want to thank all the reviewers for their helpful and supporting comments. We are encouraged that they acknowledge the importance of addressing current Transformer models' limitation in learning compositional tasks (DGs4, teuc), and that w.r.t. other benchmarks such as BIG-bench, our work prov... | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
Multi-Reward Best Policy Identification | Accept (poster) | Summary: The present article extends the track-and-stop approach of Garivier et al. to a multi-reward MDP setup. Given an MDP problem with a finite number of reward functions, the aim is to develop an algorithm that learns optimal policies for all reward functions simultaneously. Under (drastic) assumptions, the authors ...
Rebuttal: We thank the reviewer for their thoughtful review and the time spent on our paper. Below, we address each concern in detail.
>I was pushed into first reading other articles to get a rough understanding of what is going on
We acknowledge the current format assumes familiarity with related litera... | Summary: This paper studies the problem of best policy identification for RL with multiple rewards. The goal is to efficiently identify the best policy for given rewards with a high-level confidence. Authors provide an instance-dependent lower bound for the studied problem and introduce a provably-correct algorithm for... | Rebuttal 1:
Rebuttal: We thank the reviewer for their feedback and for the time and effort spent reviewing our paper. Below we provide detailed responses to the main concerns raised by the reviewer.
> Simple environments for deep RL
We appreciate the reviewer's concern. However, the selected environments are intentio... | Summary: The paper addresses the challenge of identifying the best policy in RL when there are multiple rewards. The authors get a lower bound on the sample complexity and design an optimal exploration policy. The authors propose two algorithms: MR-NaS for tabular environments and DBMR-BPI for Deep RL. These algorithms... | Rebuttal 1:
Rebuttal: Thank you for your comments and valuable feedback. We appreciate the time and effort spent reviewing our paper, as well as your positive comments on the comprehensive analysis of theoretical and empirical results.
Below, we address each of your concerns and outline the corresponding revisions we ... | null | null | Rebuttal 1:
Rebuttal: We would like to thank the reviewers for their comprehensive reviews. We are pleased that you recognize the scientific quality and the rigor of our theoretical and empirical contributions.
Our work strives to bring a comprehensive and well-balanced analysis, bridging both theoretical analysis a... | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
Pard: Permutation-Invariant Autoregressive Diffusion for Graph Generation | Accept (poster) | Summary: The paper introduces PARD, a graph generation model that combines autoregressive and diffusion models. Traditional autoregressive models are effective but sensitive to order, while diffusion models are permutation-invariant but need many denoising steps and extra features. PARD overcomes these issues by genera... | Rebuttal 1:
Rebuttal: ### Thank you for these detailed questions and feedback for improving the presentation. Let us provide detailed responses to all your questions.
>1. **It is unclear how the diffusion model is employed in PARD. Sec 3.1 and the second part of Eq. 6 are not quite relevant to each other. Can you elab... | Summary: This paper proposes a graph generation method that combines AutoRegressive (AR) models and diffusion models. By utilizing a unique partial order, it addresses the issue of non-exchangeable probabilities in AR models and the efficiency problem in diffusion models.
Strengths: 1. The proposed block-wise AR diffu... | Rebuttal 1:
Rebuttal: ### Thank you for the positive feedback on our paper; we would like to address all your questions in detail.
>1. **Why does diffusion based on an equivariant network solve the flaw in equivariant modeling?**
The underlying magic behind the randomness introduced in the diffusion process is re... | Summary: The work proposes a new graph generative model based on an autoregressive procedure. It proposes an approach to deciding a partial order of graph nodes according to their degrees in a node-removal procedure. Based on the partial order, the work devises a new graph generative model.
Strengths: The graph algori... | Rebuttal 1:
Rebuttal: ### Thank you for your questions. We have conducted extensive new ablation studies. We want to show that our analysis/motivation is not just a fancy story, but indeed the primary driver behind performance.
>1. **Lack of justification: it is less clear about the advantage of designing a complex a... | Summary: This paper proposes to integrate autoregression models with diffusion models seamlessly to harnesses the effectiveness and efficiency of the autoregressive model while maintaining permutation invariance without order sensitivity. It also proposes architectural improvement to make the model and algorithm effici... | Rebuttal 1:
Rebuttal: ### Thank you for the positive feedback on our paper; we would like to answer your question in further detail.
>**Provide some insights about the hyperparameter $K_h$, the maximum number of hops**
Let us first lay out $K_h$'s impact on the model: There is a relation between $K_h$ and block-size (he... | Rebuttal 1:
Rebuttal: ### We thank all the reviewers for their feedback and suggestions on our work. Here we first re-iterate the key contributions of our work, and then summarize the list of additional experiments we performed.
---
### **Why a Hybrid (AR+Diffusion) Approach for (Graph) Generation?**
Our proposed Pa... | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
$\text{ID}^3$: Identity-Preserving-yet-Diversified Diffusion Models for Synthetic Face Recognition | Accept (poster) | Summary: This paper presents a method called $\text{ID}^3$ for the task of synthetic face recognition. The authors highlight that the accuracy of face recognition using generated data still lags behind that of training directly on real face data. They propose optimizing the generation process from the perspectives of d... | Rebuttal 1:
Rebuttal: We are glad that Reviewer enp6 finds our formulas and algorithm flow clear and that ID$^3$ has advantages over existing models from the past two years. Here we respond to your questions as follows. Hopefully this will address your concerns.
**Response to Weakness 1**
Thanks for pointing out these ... | Summary: This paper focuses on synthetic face recognition and proposes to concentrate on three aspects: inter-class diversity, intra-class diversity, and intra-class identity preservation. Based on those, an ID-preserving loss is employed to generate diverse but identity-preserving facial images. This paper also demons... | Rebuttal 1:
Rebuttal: We are glad that Reviewer 37yP appreciates our work for its insight, generality and effectiveness. Here we respond to your questions as follows. Hopefully it will address your concerns.
**Response to Weakness 1:**
Thanks for pointing it out. Factors contributing to solid FR training are complica... | Summary: This paper proposes ID3, an identity-preserving-yet-diversified diffusion model for generating synthetic face data for face recognition. ID3 leverages identity embeddings and facial attributes to control inter-class and intra-class diversity of generated faces while preserving intra-class identity consistency,... | Rebuttal 1:
Rebuttal: **Question: One area that could benefit from further clarification is the explanation of notations and symbols used in the mathematical formulas. Additionally, the formatting and typesetting of some equations, such as Equation 3, could be enhanced to improve readability and aesthetic appeal.**
**... | Summary: The paper "ID3: Identity-Preserving-yet-Diversified Diffusion Models for Synthetic Face Recognition" introduces a novel synthetic face recognition (SFR) approach using diffusion models. It focuses on maintaining identity consistency while providing high diversity in generated face images. The proposed ID3 mode... | Rebuttal 1:
Rebuttal: We are glad that Reviewer qiUe appreciates our work in terms of originality, quality, clarity and significance. Here we respond to your questions as follows. Hopefully it will address your concerns.
**Weakness 1: Additional tests on further diversified real-world datasets could strengthen the gen... | Rebuttal 1:
Rebuttal: We thank all reviewers for their time and effort in reviewing our work and their valuable comments about the paper. Attached is the one-page PDF that contains some figures and results for the rebuttal.
Pdf: /pdf/3c2aa4647e6e2f46090e5e2529b27d5676f4d4b1.pdf | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
DropBP: Accelerating Fine-Tuning of Large Language Models by Dropping Backward Propagation | Accept (poster) | Summary: The paper introduces DropBP, an innovative approach to accelerate the fine-tuning of Large Language Models (LLMs) by selectively dropping layers during backward propagation. This method is presented as a means to reduce computational costs and activation memory, significant challenges in the efficient fine-tun... | Rebuttal 1:
Rebuttal: We thank the reviewer for carefully reviewing our submission and providing valuable feedback. Please see below for our response to the questions and comments.
**Q3.1.** Previous work like LayerDrop and others omit the layer computation in the forward pass. Then the computation could be removed ... | Summary: The paper proposes a novel method to reduce the computational and memory costs associated with fine-tuning large language models (LLMs). The authors introduce DropBP, a technique that randomly drops layers during backward propagation, effectively reducing the computational operations (FLOPs) and activation mem... | Rebuttal 1:
Rebuttal: We thank the reviewer for carefully reviewing our submission and providing valuable feedback. Please see below for our response to the questions and comments.
**Q2.1.** Can you provide more details on the sensitivity calculation process? Specifically, how is the sensitivity of each layer compute... | Summary: The paper proposed to drop layers during backward prop (BP) based on layer sensitivity. The method aims to reduce the cost for gradient computation and storage for intermediate activation in full BP.
Strengths: 1. Reducing the cost of full BP in PEFT has been an important challenge.
2. The method is simple a... | Rebuttal 1:
Rebuttal: We thank the reviewer for carefully reviewing our submission and providing valuable feedback. Please see below for our response to the questions and comments.
**Q1.1.** The idea of optimizing NNs with sparse gradient is not new.
**A1.1.** We acknowledge that the idea of optimizing neural netw... | null | null | Rebuttal 1:
Rebuttal: We thank all the reviewers for carefully reviewing our submission and providing valuable feedback. We would like to address several common and important questions in the following global response.
**GQ1.** It is unclear if the method works well for generation tasks and domain-specific transfer lea... | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
Unsupervised Object Detection with Theoretical Guarantees | Accept (poster) | Summary: This paper proposes an autoencoder-based object detection model that makes predictions about object positions in an unsupervised manner. Importantly, the authors can provide theoretical guarantees/bounds about the degree of the model's detection error.
Strengths: The paper is well written and it is easy to fo... | Rebuttal 1:
Rebuttal: > In the context of the CLEVR experiments, I am wondering why the authors don’t evaluate concerning the Gaussian standard deviation as they did for the first dataset?
We thank the reviewer for their suggestion and have performed this experiment – please see fig. 11 of the rebuttal PDF. As all our... | Summary: This paper explores Unsupervised Object Detection with Theoretical Guarantees. This method is a significant advancement in the field of object detection as it provides theoretical guarantees on the accuracy of the detected object positions. By introducing a new approach that ensures reliable object localizatio... | Rebuttal 1:
Rebuttal: > In the experiments, the datasets for evaluation is the CLEVR data, please explain why choose it, not other popular object detection datasets?
We chose to base our dataset on CLEVR because it is a dataset commonly used in unsupervised learning, and because it allows us to generate images of the ... | Summary: The paper proposes a new idea for unsupervised object detection where an CNN based auto-encoder architecture is employed and the latent representation is trained to learn position of objects in images. They further provide theoretical analysis of the proposed idea under strong assumption about the input data a... | Rebuttal 1:
Rebuttal: > Can the authors provide more clarification on the training procedure and the important aspects that are necessary for the model work? For example, it is not clear how the authors processes input data during training, how the min-batch sampling is done, what input-target pairs are?, what regulati... | Summary: This paper presents the first unsupervised object detection approach that is theoretically shown to recover the true object positions up to quantifiable small deviations that are related to the encoder and decoder receptive field sizes, the object sizes, and the widths of the Gaussians used in the rendering pr... | Rebuttal 1:
Rebuttal: > It is interesting to learn that SAM and CutLER's errors are sometimes much higher than the bound derived by the proposed method.
We note that all the error plots in the paper contain the maximum position errors, as opposed to average position errors (as described in Appendix C). So, while SAM a... | Rebuttal 1:
Rebuttal: In response to reviewer’s ghv9 question, we have performed CLEVR experiments showing the position error as a function of the Gaussian standard deviation (see fig. 11 in the rebuttal PDF). As all our data points (red) lie within our theoretical bounds (blue), this successfully validates our theory.... | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
Dynamic Model Predictive Shielding for Provably Safe Reinforcement Learning | Accept (poster) | Summary: Naive model predictive shielding may overly restrict exploration thereby preventing an RL agent from learning a policy with good performance. In order to prevent this, the authors propose a method to optimise a backup policy that is provably safe using an online planner. An approximate model such as double int... | Rebuttal 1:
Rebuttal: We thank the reviewer for their many insightful comments and suggestions. We respond to their questions and concerns below.
**Adding traditional control and non-RL methods to literature review**
We appreciate the reviewer's paper recommendations and will include them, along with classical contro... | Summary: This paper proposes a new method for safety shielding. More precisely, the authors extend Model Predictive Shielding (MPS), where an agent reverts to a safe backup policy if, for the next predicted state, this policy would not be able to guarantee safety anymore. MPS is often overly conservative, particularly ... | Rebuttal 1:
Rebuttal: We thank the reviewer for their many insightful comments and suggestions. We respond to their questions and concerns below.
**Questions over computational complexity**
We analyze the question of the planner’s computational cost in more depth in the global rebuttal. We restate a short summary of ... | Summary: The authors introduce Dynamic Model Predictive Shielding (DMPS), an extension of Model Predictive Shielding (MPS) that addresses some of its key limitations, such as overconservatism when deploying the backup policy, which consequently hinders exploration of the neural 'task' agent and slows down convergence. The ke... | Rebuttal 1:
Rebuttal: We thank the reviewer for their many insightful comments and suggestions. We respond to their questions and concerns below.
**Construction of backup policies and determination of the invariant sets**
In static environments, the backup policy involves braking as hard as possible. In dynamic envir... | Summary: The approach called dynamic model-predictive shielding for safe reinforcement learning is proposed as an improvement over its static counterpart. The main idea is to optimize for expected return on action with respect to the reinforcement-learning task when choosing a shielding backup action, and to incorporat... | Rebuttal 1:
Rebuttal: We thank the reviewer for their many insightful comments and suggestions. We respond to their questions and concerns below.
**Figure 2 clarification**
We assume that the reviewer’s question is referring to Figure 2. Figure 2 demonstrates the example described in the text, and in particular, part... | Rebuttal 1:
Rebuttal: We thank the reviewers for their insightful feedback. We summarize the responses to common questions.
**Computational cost of the planner**
There is a tradeoff between the quality of the recovery plan, and the computational cost incurred in the planner searching for it. The look-ahead controls t... | NeurIPS_2024_submissions_huggingface | 2,024 | Summary: The paper seeks to address provably safe RL problems where safety must be ensured even during training. It proposed DMPS, which enhances prior Model Predictive Shielding approach, to dynamically select safe actions when danger is imminent. DMPS employs local planner to plan for recovery actions and the planner... | Rebuttal 1:
Rebuttal: We thank the reviewer for their many insightful comments and suggestions. We respond to their questions and concerns below.
**Concerns over originality and significance of DMPS**
The main novelty of our approach is the synergistic relationship between a local planner tasked with finding good rec... | null | null | null | null | null | null |
A Polar coordinate system represents syntax in large language models | Accept (poster) | Summary: This paper proposes polar probes, a kind of structural probe that learns a distance and rotation function that can more accurately classify syntactic structure from language model representations than previous approaches. In particular, the question of whether direction can represent the type of syntactic rela... | Rebuttal 1:
Rebuttal: We thank reviewer rWeD for their insightful and constructive comments.
### Controlled dataset
We agree that reporting performance metrics such as UUAS, LAS, and Balanced Accuracy on the controlled dataset is an important addition. To address this, we will include a new figure in the appendix of... | Summary: Whereas prior work (Hewitt and Manning 2018) probed syntactic distance and depth, this work proposed to push that forward by also probing headedness and dependency type. Specifically, this doesn't separately probe those three, but aims for a single vector space where euclidean distance defines syntactic dista... | Rebuttal 1:
Rebuttal: We thank tm4o for their thorough constructive feedback.
We agree that our discussion of the work by Muller-Eberstein et al. (2022) was not sufficiently detailed, which may have made the novelty of our contributions seem less apparent.
To rectify this, we have expanded our discussion and incorpo... | Summary: Previous work introduced linear probes to explore how syntactic relationships are encoded in LLM embeddings. This work aims to take it a step further and examine how types of syntactic relationships are encoded in the LLMs. They introduce a polar probe that when optimized can predict the type of syntactic re... | Rebuttal 1:
Rebuttal: We thank reviewer 9UbP for their insightful review.
## Weaknesses
### Probing vs. Parsing:
We agree that the distinction between probing and parsing is insufficiently clear. We will amend the discussion as follows:
“The current 'probing' work is related to extensive research on 'parsing'. Howeve... | null | null | Rebuttal 1:
Rebuttal: We would like to thank the reviewers for the detailed and relevant comments.
## The strengths pointed by the reviewers are:
* The paper is well-written (tm4o, rWeD)
* Clear contribution, the work opens several avenues of research (9UbP, rWeD)
* Convincing results (9UbP, rWeD)
* Paper provides wi... | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
Hierarchical Selective Classification | Accept (poster) | Summary: The authors propose hierarchical selective classification, a method that selects the hierarchical granularity of its prediction based on uncertainty.
Strengths: * The paper is well-written, and the proposed method is quite intuitive.
* I like the idea that if uncertain, it makes sense to predict at a higher l... | Rebuttal 1:
Rebuttal: Thank you for your feedback on our paper. We're glad you liked the paper and its underlying idea.
*"My biggest uncertainty is the similarity of this work to conformal prediction. To me, it seems that this method is very similar to conformal prediction, where the set of possible prediction sets is... | Summary: The paper introduces a hierarchical selective classification technique that incorporates hierarchical risk and coverage. The authors additionally propose an algorithm that guarantees target accuracy. Experimental results demonstrate the method's effectiveness.
Strengths: Hierarchical selective classification... | Rebuttal 1:
Rebuttal: Thank you for your detailed feedback on our paper.
*"The need of a prior tree among classes can limit its usage for complex scenarios. The construction of such tree can be a non-trivial step for the applicability of the approach."*
While it's true that our algorithms require a tree structure, t... | Summary: The paper introduces a new framework for selective classification called hierarchical selective classification. In a setting where a hierarchy in the classification task is present, the authors devise a selection strategy that considers confidence at different levels of the classification hierarchy. Extensive ... | Rebuttal 1:
Rebuttal: Thank you for your constructive feedback.
*"I do not fully understand why the authors focus so much on showing how different training regimes affect HSC performance. I guess this improves the overall predictive performance of the (hierarchical) classifier, which is expected to impact the HSC task... | Summary: The paper proposes an extension of selective classification following a class hierarchy to reduce the specificity of model prediction when there is a high uncertainty. In particular, if the prediction confidence of a class is smaller than a predefined threshold, the proposed algorithm would proceed towards a h... | Rebuttal 1:
Rebuttal: Thank you for your positive reply,
*"It is an inference rule. This means that the algorithm is used at test time only. If this could be even integrated into training is a plus.", "Could the authors clarify if it can also be integrated into training a whole model to perform hierarchical selective ... | null | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
Focus On What Matters: Separated Models For Visual-Based RL Generalization | Accept (poster) | Summary: Visual-based Reinforcement Learning (RL) often fails to generalize across unseen environments. This work proposes SMG (Separated Models for Generalization) to improve the generalization in VRL by introducing two models to separately extract task-relevant and task-irrelevant representations through image recons... | Rebuttal 1:
Rebuttal: We sincerely thank the reviewer for your constructive comments and suggestions. We address each of your comments as follows.
### Q1: I think the novelty of learning mask models to distinguish noise from the environment is limited.
---
A1:
Thank you for your professional analysis. However, we wou... | Summary: This paper presents a novel method that utilizes two model branches to extract task-relevant and task-irrelevant representations separately from visual observations, aiming to enhance the zero-shot generalization ability of RL agents. The approach introduces four additional loss terms and two consistency losse... | Rebuttal 1:
Rebuttal: We sincerely thank the reviewer for your constructive comments and suggestions. We address each of your comments as follows.
### Q1: It would be beneficial to also include a comparison with model-based RL methods.
---
A1:
To the best of our knowledge, there are still no model-based methods that e... | Summary: This paper presents a novel approach called SMG (Separated Models for Generalization) to improve generalization in visual-based reinforcement learning (RL). The approach works by using separate foreground and background encoders/decoders and employing a mask to isolate task-relevant regions. In addition, it al... | Rebuttal 1:
Rebuttal: We sincerely thank the reviewer for your constructive comments and suggestions. We address each of your comments as follows.
### Q1: Some testing scenarios appear to be overlooked in this paper.
---
A1:
We admit that real-world deployment scenarios can be more diverse and complex. **However, the ... | Summary: The authors propose a novel objective to improve robustness of the visual encoder in RL to background noise and to color perturbations. First, the authors split the visual encoder into two models: background encoder/decoder and foreground encoder/decoder. The proposed training objective contains multiple compo... | Rebuttal 1:
Rebuttal: We sincerely thank the reviewer for your constructive comments and suggestions. We address each of your comments as follows.
### Q1: The writing is a bit sloppy, with many typos and confusing sentences
---
A1:
Thank you for your careful review. We have thoroughly reviewed the paper multiple times... | Rebuttal 1:
Rebuttal: We revised the paper and added suggested experiments according to the reviewer’s comments. The detailed revisions are described as follows. The additional figures and a table are attached in the pdf file.
# 1. Revisions
### 1.1. Revised some typos and sentences
line 13: achieving free from overfi... | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
LoRANN: Low-Rank Matrix Factorization for Approximate Nearest Neighbor Search | Accept (poster) | Summary: This paper investigates approximate nearest neighbor (ANN) search, where, given a collection $\mathcal{X}$ of points in $\mathbb{R}^d$, the task is to find the top $k$ data points that are closest to a query point $q$ according to some similarity or dissimilarity measure (denoted by $\delta(\cdot, \cdot)$), su... | Rebuttal 1:
Rebuttal: We thank the Reviewer for constructive feedback and good suggestions for improving the manuscript. In particular, clarifying the main contribution of our article as improving the accuracy of score computation in clustering-based ANN search would indeed make the presentation cleaner and our argumen... | Summary: This paper introduces a new method for the nearest neighbor search problem. Leveraging the low-rank assumption, the authors combine low-rank matrix factorization, clustering, and quantization to enhance the speed of nearest neighbor search. The authors conducted extensive experiments to demonstrate the advanta... | Rebuttal 1:
Rebuttal: We thank the Reviewer for their feedback. However, we want to clarify that our method does not reduce to techniques that have already been used in the earlier ANN literature. As nicely summarized by Reviewer udkL, the main novel contribution of the manuscript is a new supervised method (reduced-ra... | Summary: The paper describes a method for computing approximate nearest neighbors in
high dimensions. Computing nearest neighbors is a classical problem in
computational geometry, with applications in many areas of computer science.
The classical solutions in low dimensions do not generalize to high dimensions.
The app... | Rebuttal 1:
Rebuttal: We acknowledge that it is not easy to review an article without field-specific knowledge and appreciate the effort. The ANN-benchmarks project (the link is provided on page 14 in Appendix B of the original manuscript) is the de facto standard for performance evaluation in the field of ANN search.... | Summary: The paper presents LoRANN, a novel algorithm for Approximate Nearest Neighbor (ANN) search that leverages low-rank matrix factorization and k-means clustering. The core idea is to approximate the ordinary least squares solution of the inner product computation via reduced-rank regression. The authors also intr... | Rebuttal 1:
Rebuttal: We thank the reviewer for the suggestion on improving the presentation of our graphs and will incorporate this change. We would also be happy to hear any additional suggestions regarding the visual presentation.
As mentioned in Section 8 (Limitations), the reason for the lower relative performanc... | Rebuttal 1:
Rebuttal: We thank the reviewers for their constructive feedback. Here we address the most important concerns about novelty (Reviewer bJwE) and experimental methodology (Reviewer udkL) by clarifying our contribution and experimental setup, and performing new experiments:
- We clarify that our method does n... | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
A provable control of sensitivity of neural networks through a direct parameterization of the overall bi-Lipschitzness | Accept (poster) | Summary: This paper investigates and proposes a novel bi-Lipschitz neural network architecture. This architecture provides a simple, direct and tight control of the Lipschitz and inverse Lipschitz constants through the use of two parameters, the ideal minimum, equipped with theoretical guarantees. To devise their archi... | Rebuttal 1:
Rebuttal: Thank you very much for spending your time on carefully reviewing our paper. We really appreciate all the advice you provide to improve our paper. Please find below answers to your questions. We also summarized the comparison of the time and space complexity of our models in an independent thread.... | Summary: This paper proposes a novel neural network architecture called BLNN (Bi-Lipschitz Neural Network) that allows direct control and parameterization of the overall bi-Lipschitzness of the network. The main contributions include: i) a framework that allows tight control of Lipschitz and inverse Lipschitz constants... | Rebuttal 1:
Rebuttal: Thank you very much for spending your time on carefully reviewing our paper. We really appreciate all the advice you provide to improve our paper. Please find below answers to your questions. We also summarized the comparison of the time and space complexity of our models in an independent thread.... | Summary: This paper proposes to control the bi-Lipschitzness of a neural-network by parameterizing the output by the Legendre-Fenchel-Dual. This involves parameterizing a strongly convex function and computing the minimum of that function in the forward pass. Several benchmarks are studied in simple regression tasks an... | Rebuttal 1:
Rebuttal: Thank you very much for spending your time on carefully reviewing our paper. We really appreciate all the questions highlighting the significant potential and future directions of our work. Please find below answers to your questions and some clarifications to other important points of your review... | null | null | Rebuttal 1:
Rebuttal: # Time and Space Complexity of the BLNN and its Variants
This global rebuttal discusses the time and space complexity of the BLNN and its variants. Figures can be found in the attached PDF. This discussion will be added to the updated version of the paper.
## Theoretical Discussion
Concernin... | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
Solving Zero-Sum Markov Games with Continuous State via Spectral Dynamic Embedding | Accept (poster) | Summary: This paper studies the two-player zero-sum stochastic Markov games (2p0s-MGs) with large scale or continuous state spaces. These problems have a large cardinality and function approximation methods are needed. The paper consider a spectral dynamic embedding method and proposed SDEPO. This methods utilized the ... | Rebuttal 1:
Rebuttal: We are grateful for your positive feedback on our manuscript. We give point-by-point responses to the weaknesses and questions as follows.
[Empirical evaluation:]
We add a simulated experiment to validate the effectiveness of SDEPO directly (please see the Global Response for further detail... | Summary: This paper proposes a new algorithm named Spectral Dynamic Embedding Policy Optimization (SDEPO) to solve the zero-sum Markov games with continous state and finite actions. The convergence analysis indicates that the proposed method achieves the best-known sample complexity as the case of finite-state space; t... | Rebuttal 1:
Rebuttal: We are grateful for the effort you have dedicated to our paper. We give point-by-point responses to the weaknesses and questions as follows.
Response to W1&Q1:
In Assumption 1, we assume that the transition function satisfies $s_{t+1}=f(s_t,a_t,b_t)+\epsilon_t$, which means that the next stat...
Rebuttal: We are grateful for your positive feedback on our manuscript. We give point-by-point responses to the weaknesses and questions as follows.
Response to Weakness 1:
It is really a good question to consider the computational overhead of the spectral dynamic embeddings. Actually, in our SDEPO a... | null | null | Rebuttal 1:
Rebuttal: We thank all of the reviewers for their detailed comments. Here we numerically verify the convergence of SDEPO: we designed a simple zero-sum Markov game with a continuous state space and a finite action space ($\mathcal{S} = \mathbb{R}$, $|\mathcal{A}| = 5$). As for the transition probability and r... | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |