title string | paper_decision string | review_1 string | rebuttals_1 string | review_2 string | rebuttals_2 string | review_3 string | rebuttals_3 string | review_4 string | rebuttals_4 string | global_rebuttals string | dataset_source string | conference_year int64 | review_5 string | rebuttals_5 string | review_6 string | rebuttals_6 string | review_7 string | rebuttals_7 string | review_8 string | rebuttals_8 string |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
SAMoSSA: Multivariate Singular Spectrum Analysis with Stochastic Autoregressive Noise | Accept (poster) | Summary: The authors propose SAMoSSA, an algorithm that combines deterministic trend estimation via mSSA with estimation of an autoregressive component of a time series. They provide error rates for trend estimation, estimation of the AR coefficients, as well as the prediction error. In addition, they consider real ... | Rebuttal 1:
Rebuttal: We thank the reviewer for their constructive feedback and valuable questions. Below we address the specific questions and comments they raise.
> The more general ARMA model is first discussed in the introduction. What challenges does estimating the MA components pose for the analysis?
Thank you fo... | Summary: This paper proposed SAMoSSA, a two-stage procedure that effectively handles mixtures of deterministic nonstationary and stationary AR processes with minimal model assumptions. The authors analyze SAMoSSA’s ability to estimate non-stationary components under stationary AR noise, the error rate of AR system iden... | Rebuttal 1:
Rebuttal: We thank the reviewer for their positive and constructive feedback. Below we address the specific questions and comments they raise.
> The biggest difference between this paper and [3] lies in the different noise setting, i.e., this paper replaces the i.i.d. noise with stationary AR noise. However... | Summary: This paper proposes a two-stage approach based on multivariate Singular Spectrum Analysis (mSSA) to estimate the non-stationary components in a time series in the presence of a correlated stationary AR noise, which is subsequently estimated from the residual time series. Theoretical results on the performance... | Rebuttal 1:
Rebuttal:
We thank the reviewer for their positive and constructive feedback. Below we address the specific questions and comments they raise.
> ... A few sentences to convey the big picture behind the steps would have been very helpful in the algorithm section.
Thank you, we will revise the algorithm de... | Summary: This is a comprehensive work on a new extended variant of multivariate Singular Spectrum Analysis (mSSA), which manages to handle time series of deterministic trend/seasonality with AR stationary components, with rigorous theoretical guarantees. The algorithm is a natural extension of the variant of mSSA using... | Rebuttal 1:
Rebuttal: We thank the reviewer for their constructive feedback and valuable questions. Below we address the specific questions and comments they raise.
> The numerical experiments could be richer; if this step fails completely, the whole method would fail. We would like to know its performance boundary and reliability... | Rebuttal 1:
Rebuttal: We thank all the reviewers for constructive feedback. Here's a succinct highlight of our paper's key contributions, beyond our individual responses to reviewers:
The main contribution of this paper is to showcase the effectiveness of a simple multi-stage algorithm in time series forecasting. For ... | NeurIPS_2023_submissions_huggingface | 2,023 | Summary: This paper extends previous work on multivariate Singular Spectrum Analysis (mSSA) to observations with autoregressive (AR) noise. The method constructs a sliding window representation of the target univariate or multivariate time series called the Page matrix and learns the deterministic non-stationary compon... | Rebuttal 1:
Rebuttal: We thank the reviewer for their detailed and constructive feedback. In the following sections, we address each of the questions and comments they have raised.
> However, the quantitative evaluation is very brief and does not give much insight into the performance of the method relative to either ... | Summary: The paper discusses a two-stage algorithm for time series analysis, which involves estimating deterministic, non-stationary trend and seasonality components, followed by learning the residual stochastic, stationary components. The first stage involves using multivariate Singular Spectrum Analysis (mSSA) to est... | Rebuttal 1:
Rebuttal: We express our gratitude to the reviewer for their positive and insightful feedback. In what follows, we address the specific questions and comments raised.
> The authors should provide simpler explanations or visual aids alongside the more complex mathematical definitions and proofs to make the ... | null | null | null | null |
Score-based Generative Models with Lévy Processes | Accept (spotlight) | Summary: Score-based generative models (SBGMs) generally employ Brownian motion, also known as the Wiener process, for noise injection. However, using Brownian motion in SBGMs often leads to issues such as mode collapse or slow sampling. To address these problems, the authors propose SBGMs with an isotropic α-stable Le... | Rebuttal 1:
Rebuttal: Thank you for providing valuable and keen insights to enhance the completeness of our paper.
> Question 1
>
LIM, Heavy-tailed DSM [Deasy et al., 2021], and Denoising Diffusion Gamma Models [Nachmani et al., 2021] all share the advantage of a faster convergence rate for sampling compared to DDPM... | Summary: The paper introduces the Levy-Ito Model, a novel score-based generative model (SBGM) that utilizes the isotropic $\alpha$-Levy process as perturbation noise. The authors highlight that their proposed method is the first continuous-time SBGM to incorporate a heavy-tailed process. They aim to leverage the advant... | Rebuttal 1:
Rebuttal: > Weaknesses
>
We sincerely appreciate your detailed suggestions for possible improvements. As you suggested, we will include detailed experimental results on how the convergence rate varies based on 1) architecture and 2) noise scheduling in the next paper revision. While we aimed to investigat... | Summary: Prior score-based/diffusion generative models have been defined by Brownian motion. This paper proposes a method of replacing the continuous Gaussian processes with different processes dependent on the characteristic exponent value. The heavy tail property defined by the Levy process allows for higher chance o... | Rebuttal 1:
Rebuttal: > Weakness 1
>
Thank you for your valuable feedback. We conducted additional experiments regarding $\alpha$-selection for CIFAR10 and CelebA. Below are the FID results based on different values of $\alpha$ for CIFAR10 and CelebA:
| $\alpha$ | CIFAR10 (32x32) | CelebA (64x64) |
| --- | --- | --... | Summary: This paper presents a new score-based generative model called Lévy-Ito ̄ Model (LIM) that tackles the challenges of slow convergence rate of Number of Function Evaluation (NFE) and mode collapse in diffusion models when applied to imbalanced data. This model leverages isotropic-stable Lévy processes. Initially... | Rebuttal 1:
Rebuttal: We appreciate your feedback on our paper. We have done our best to answer your keen questions.
> Weakness 1
>
<Slow convergence>
The reason for the slow convergence of Diffusion models is that the Brownian motion follows a light-tailed distribution and has a continuous path. Various methods ha... | null | NeurIPS_2023_submissions_huggingface | 2,023 | null | null | null | null | null | null | null | null |
No-Regret Online Reinforcement Learning with Adversarial Losses and Transitions | Accept (poster) | Summary: This paper introduces the challenge of online learning in adversarial MDPs where the loss functions and transition functions are chosen by a malicious adversary. Although previous algorithms achieving $O(\sqrt{T})$ regret with fixed transition functions could not handle adversarial transitions, in this paper, ... | Rebuttal 1:
Rebuttal: Thanks for your helpful feedback. Please see our response below:
***
**Q:** Issues of paper's organization.
**A:** Thanks for the suggestion. In the submission phase, we were unable to fit the algorithms and the conclusion section into the main text because of the space limit. We will use the... | Summary: This paper studies online reinforcement learning in tabular MDPs when the losses and transitions can be adversarially changing from round to round. They show that one can achieve regret guarantees which are $O(\sqrt{T} + C^P)$ where $C^P$ measures the degree to which transitions are changing.
Specifically the... | Rebuttal 1:
Rebuttal: Thanks for your helpful feedback. Please see our responses below:
***
**Q1:** Issues of writing, organization, and presentation.
**A1:** Thanks for your suggestions. We will consider re-organizing the content in the future version.
***
**Q2:** Line 116: should this be $S$ instead of $X$?
**A2... | Summary: The authors consider no-regret learning in adversarial MDPs, when the dynamics may change adversarially across episodes. They design an algorithm which provides a regret guarantee of $\tilde{O}(\sqrt{T} + C)$ where $C$ is the total deviation from some fixed transition function, and the benchmark is the best fi... | Rebuttal 1:
Rebuttal: Thanks for your valuable feedback. Please see our responses below:
***
**Q1:**
Since the dynamics of the MDP changes adversarially across episodes, it makes less sense to set the benchmark as a Markov policy, as it is no longer the case that the optimal policy on a sequence of MDPs is WLOG Mar... | Summary: This paper studies the problem of reinforcement learning under adversarial reward and transitions. When evaluating the regret against the best fixed policy in hindsight, the algorithm proposed in this paper achieves the optimal regret O(\sqrt{T} + C^P), which is followed by other favorable extensions including... | Rebuttal 1:
Rebuttal: Thanks for your helpful feedback. Please see our responses below:
***
**Q1:** Lack of justification for the specific type of regret studied in this paper.
**A1:** First, when the MDPs are time-varying, the ``underlying uncorrupted MDP'' is not always well-defined. On the other hand, the best-in... | null | NeurIPS_2023_submissions_huggingface | 2,023 | Summary: This paper studies learning algorithms for adversarial MDP with adversarial transition functions. The authors developed an algorithm that enjoys O(\sqrt{T} + C^P) regret where C^P measures how adversarial the transition functions are. The developed algorithm could work without knowing C^P. Finally, the authors... | Rebuttal 1:
Rebuttal: Thanks for your positive comments. | null | null | null | null | null | null |
Improved Frequency Estimation Algorithms with and without Predictions | Accept (spotlight) | Summary: This paper studied frequency estimation and learning-augmented frequency estimation. CountMin and CountSketch are the most popular algorithms for this task. With the addition of learning augmentation, an algorithm is given access to a learned prediction, in this case the prediction of the heavy hitters. This p... | Rebuttal 1:
Rebuttal: We thank you for your thorough review and your comments. Below we address your questions and concerns.
>I am confused about the prediction model. Normally, in learning-augmented algorithms, we measure an algorithm’s performance based on the error in the prediction. Here, as far as I could tell, ... | Summary: Summary of the Paper
==================
* This work follows (Hsu Indyk Katabi Vakilian 2019) in trying to improve the performance of hashing-based frequency estimation algorithms (such as Count-Min, CountSketch) by making use of "advice" in the form of a learning model's predictions which classify the input el... | Rebuttal 1:
Rebuttal: We are happy to hear that you found our paper interesting and thank you for your time and comments. | Summary: Authors study frequency estimation algorithms CountMin and CountSketch
and propose their modifications tailored to heavy-tailed distributions.
They first analyze CountMin and CountSketch, showing that the second
one achieves better theoretical bounds on such distributions which explains
experimental results i... | Rebuttal 1:
Rebuttal: >I did not see lower bounds for the problem in their setting. It is not clear whether better algorithms are possible.
Proving lower bounds for learning-augmented frequency estimation, or even frequency estimation under our expected error metric, is an interesting future research direction.
>It ... | Summary: The authors present a new error analysis for Count-Sketch (CS) and Count-Min Sketch (CMS) for heavy-tailed distributions. They propose a novel Count-Sketch-based algorithm and its learned variant to estimate the frequencies of items in a data stream. Empirically, they show that both algorithms outperform the s... | Rebuttal 1:
Rebuttal: We thank you for your interest in our paper and your comments. We address your questions and concerns below.
>It appears that for non-Zipfian distributions, the non-simplified Algorithm 2 would have to perform two passes over the data stream since Algorithm 6 would need to first output an estimate of t... | null | NeurIPS_2023_submissions_huggingface | 2,023 | Summary: The authors study frequency estimation in a streaming setting using CountMin and CountSketches, both their classic and learning augmented variants. They prove tight theoretical bounds for the expected error when the frequencies follow the Zipf distribution.
They also introduce and analyze a new algorithm with ... | Rebuttal 1:
Rebuttal: We are glad to hear you found our paper interesting and appreciate your comments! We address them below:
>No experiments with the theoretically analyzed algorithm, no theory for the simpler variant in the experiments.
The specific setting of parameters for the theoretical algorithm (number of CS... | null | null | null | null | null | null |
A Fast and Accurate Estimator for Large Scale Linear Model via Data Averaging | Accept (poster) | Summary: This paper studies the linear regression problem and proposes a new sketching method based on data averaging.
Strengths: Please see the "questions" section.
Weaknesses: Please see the "questions" section.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: This topic falls outside my current ... | Rebuttal 1:
Rebuttal:
**Comment:** We sincerely appreciate your valuable comments and suggestions, particularly the positive feedback regarding the proposed method.
Below, we will provide a response that centers around **questions** related to our paper. These relevant sections have been indicated with italicized ... | Summary: This paper considers a new estimation method for a large scale linear regression model. Specifically, the regression coefficients are estimated by least squares estimation of averaged observations for which data are partitioned via a method similar to the information-based optimal subdata selection (IBOSS) alg... | Rebuttal 1:
Rebuttal: We sincerely appreciate your valuable comments and suggestions, particularly the positive feedback regarding the proposed method.
Below, we will provide a response that centers around three main aspects: **weaknesses, questions, and limitations** related to our paper. We have taken note that yo... | Summary: This paper gives lower bounds of the conditional mean squared error for sketching methods. They focus on least square estimator, and show that when the problem dimension is sufficiently large, the optimal error rate among all sampling reductions is achieved by uniform sampling. They also propose a sketching me... | Rebuttal 1:
Rebuttal:
**Comment:** We sincerely appreciate your valuable comments and suggestions, particularly the positive feedback regarding the proposed method.
Below, we will provide a response that centers around two main aspects: **weaknesses, questions** related to our paper. These relevant sections have be... | Summary: This submission studies the asymptotic estimation risk of linear regression
with various data sketching strategies.
The authors start by refining an existing lower bound to show that sketching
by uniform sub-sampling is only minimax optimal when the feature dimension
is very large; otherwise, improvement is ... | Rebuttal 1:
Rebuttal: We sincerely appreciate your valuable comments and suggestions. Due to space constraints, we display the discussion about non-normal cases, presentation and tables in the global rebuttal. Below, we will provide a response for main **questions and limitations**.
**Questions:**
* *Assumption 1..... | Rebuttal 1:
Rebuttal: We warmly thank all reviewers for the time you took to review and understand our paper.
Most reviewers pointed out that the presentation of the paper should be improved. The discussion is dense and mathematical. Due to space constraints, the experimental results are all deferred to the appendix. Follow... | Summary: This paper studies linear regression where the number of samples N is much larger than the number of predictors p (N>>p), which is computationally costly due to large N. The paper investigates lower bounds for existing sampling-based methods, and proposes a novel sketching method based on data averaging which ... | Rebuttal 1:
Rebuttal: We sincerely appreciate your valuable comments and suggestions, particularly the positive feedback regarding the proposed method.
Below, we will provide a response that centers around three main aspects: **weaknesses, questions, and suggestions** related to our paper. These relevant sections hav... | null | null | null | null | null | null |
Language Models Implement Simple Word2Vec-style Vector Arithmetic | Reject | Summary: The paper presents evidence that LMs sometimes use a computational mechanism similar to traditional word embeddings, specifically using simple vector arithmetic to encode abstract relations. Experiments show that this mechanism is specific to tasks that require retrieval from pretraining memory rather than loc... | Rebuttal 1:
Rebuttal: Thank you for the thorough review and questions. We agree with the point that a more in-depth error analysis would be helpful and can prepare this for the camera-ready version. We received similar feedback from other reviewers and address this and related points in our rebuttal and accompanying pd... | Summary: The paper offers new findings on interpreting the internal processes of language models. Specifically, the authors identify a particular mechanism that is similar to word2vec-style vector arithmetic. By decoding the next token after each attention layer and FFN layer, they examine the structure within the embe... | Rebuttal 1:
Rebuttal: Thank you for the thorough review and questions. The question about the choice of relations is a good one and so we added results for six additional tasks in the rebuttal. We received similar concerns from the other reviewers and addressed those in the rebuttal and accompanying pdf. We find that t... | Summary: This paper proposes the conjecture that Transformer-based large language models also implement the vector arithmetic (namely vector subtraction and addition) for word analogy tasks, similarly as the well-known property of word embeddings. Experiments on three word analogy tasks support the conjecture. Such fin... | Rebuttal 1:
Rebuttal: Thank you for your review.
\>\>Have you thought about other mechanisms, e.g., vector rotation, to represent word analogy?
We did not, but could you please explain why we would do this? As you mentioned, the residual structure of the transformer architecture naturally implements ‘vector arithmet... | Summary: This paper investigates how a large language model (LLM) computes the vector representation of an output token. In particular, the authors focus on tasks in which the LLM is required to output a token that is related to an input token in a certain kind of relation (e.g. Country-Capital relation). The authors s... | Rebuttal 1:
Rebuttal: Thank you for your review and for pointing out typos.
\>\> Would it be possible to add possible explanations or hypotheses for the findings?
We received similar feedback from other reviewers and addressed it in the rebuttal and pdf. To summarize, we found that this behavior does not extend to m... | Rebuttal 1:
Rebuttal: Thank you to the reviewers for thorough and thoughtful reviews. The main concern that was raised by all reviewers was when/if this behavior extends to other relations and how this explains the argument-function processing signature. We attempt to address these concerns for all reviewers below.
Fi... | NeurIPS_2023_submissions_huggingface | 2,023 | Summary: This paper examines whether the residual representations in GPT models obey word2vec-style arithmetic. For three different (head, relation, tail) relations, the authors find evidence that the transformer:
1. Writes the head into the residual stream
2. Transforms the head into the tail, observed via a sudden "i... | Rebuttal 1:
Rebuttal: Thank you for the review. We received similar feedback from the reviewers about negative results and additional relations to test, so we address this in the rebuttal and accompanying pdf. We liked your idea to test tasks according to relation type and indeed find that many-to-one and many-to-many ... | null | null | null | null | null | null |
Efficient Diffusion Policies For Offline Reinforcement Learning | Accept (poster) | Summary: This paper focuses on the improvement of computation efficiency of Diffusion-QL by adopting the property of marginal distribution in the diffusion model and the variance control scheme proposed by DPM-Solver. Besides, this paper extends the scope of compatibility with other offline RL methods, from value-based... | Rebuttal 1:
Rebuttal: > Q1. In lines 233-235 of Section 4.5, why can adopting $\hat{a}^0$ not reduce high variance?
A1. Here is an intuitive explanation. Given an actual action $a$, action approximation $\hat{a}^0$ represents the mean of the action Gaussian distribution that can be denoised from $a^k$. Therefore, it i... | Summary: The authors propose a method to efficiently train diffusion based policies in the offline-RL setting. The authors suggest three main tricks to enable this: 1) Removing the need to backpropagate through the diffusion sampling chain to update the policy by using what the authors call action approximation; 2) Rep... | Rebuttal 1:
Rebuttal: > Q1. The effect of action approximation
A1. We compare with and without action approximation on the following three environments using the OMS metric (Table 4). In the following table, the DDPM column will forward and backward a policy network 100 times at training time, but action approximation... | Summary: The paper proposes to learn a policy for several offline RL tasks in the D4RL benchmark by parameterizing a policy with a diffusion model. The authors claim that their approach is computationally efficient and more compatible with several other RL approaches like maximum likelihood based approaches when compar... | Rebuttal 1:
Rebuttal: > Q1. Eqn. (12) and Eqn. (13) are empirically similar to each other, but no evidence is given.
A1. We ran experiments with Eqn. (13) on three environments with TD3 as the base algorithm. These two approximations are compared in the following table. We can observe that they indeed perform similarly.
| Base... | Summary: The focus of this paper is to enhance the diffusion policies introduced in Diffusion-QL for offline reinforcement learning. The authors address the challenges of training and sampling efficiency by incorporating action approximation and employing an advanced ODE solver for diffusion policies. They conducted ex... | Rebuttal 1:
Rebuttal: > Q1. About the training time of Diffusion-QL.
A1. Thank you for bringing this valuable problem into discussion. We noticed that in Diffusion-QL’s official code repo, they default the number of diffusion steps to 100 (K=100). Please refer to their official `diffusion-rl` repo belonging to organi... | Rebuttal 1:
Rebuttal: We sincerely thank all the reviewers for recognizing the novelty and contributions of our work, as well as for providing valuable questions for discussion and constructive suggestions.
As there are limited shared questions from the reviewers, we address them individually in the corresponding resp... | NeurIPS_2023_submissions_huggingface | 2,023 | Summary: This paper proposed EDP to address the existing limitation of diffusion policy in offline RL. EDP relies on an action approximation to construct actions from corrupted ones, thus avoiding running the Markov chain for action sampling at training. shows that EDP achieves 25+ speedup over Diffusion-QL at training... | Rebuttal 1:
Rebuttal: > Q1. The baselines are a little bit weak. The superiority might be overclaimed.
A1. Thank you for reminding us of these important works. We’d like to clarify that our paper focuses on policy representation in offline RL, which is orthogonal to the algorithmic developments by these related... | null | null | null | null | null | null |
Pruning vs Quantization: Which is Better? | Accept (poster) | Summary: This submission conducts a series of empirical experiments and analyses comparing neural network pruning and quantization. It first uses statistical methods to compare pruning and quantization. Then it measures the per-layer error based on a post-training compression framework. Finally, it conducts some experiments ... | Rebuttal 1:
Rebuttal: We thank the reviewer for their comments. We answer each of their comments below.
**W1**. We agree that our work does not propose a new quantization or pruning method. However, we respectfully disagree that our paper does not bring new insights. We refer to the general comment on the novelty abov... | Summary: The authors compare the performance of post-training quantization and pruning methods with the same compression ratio using a signal-to-noise metric, a kurtosis metric, and, ultimately, model accuracy. They study the expected performance analytically and in simple toy problems, i.e., Gaussian- and Student's t-... | Rebuttal 1:
Rebuttal: We thank the reviewer for the detailed comments and additional references, please find our comments below.
**W1**. The relation between SNR and model accuracy for both pruning and quantization is demonstrated in Appendix D (see Figure 6).
**W2**. We agree that naturally appearing sparsity in th... | Summary: This paper sets out to answer the question whether quantization or pruning is better. It first provides an analytical analysis of the two methods in terms of signal-to-noise ratio (SNR) and establishes an early relationship between kurtosis and SNR. It then provides a mathematical breakdown of the compression ... | Rebuttal 1:
Rebuttal: We thank the reviewer for the detailed review and useful suggestions. Please find our answers and comments below.
**W1**. Block sparsity is a subset of structured sparsity, and therefore we indeed expect it to have strictly worse accuracy for the same model size. This was confirmed in our experimen... | Summary: In this paper, the authors try to answer whether pruning or quantization is better for network compression. The paper starts by analyzing the quantization error and pruning error under the standard normal distribution and heavy-tailed distributions. Full-model comparisons are done between quantization and pruning with a... | Rebuttal 1:
Rebuttal: We thank the reviewer for comments and suggestions. We answer each point below.
**W1**. We totally agree that pruning is useful, and we did not state the opposite in our paper. Rather, for the setups where both methods are supported, using quantization leads to more accurate models. As we mention... | Rebuttal 1:
Rebuttal: We thank the reviewers for their thoughtful comments and useful feedback. We are happy to see that they found the paper well written (S4q6), that it has a thorough empirical evaluation on various tasks (S4p6, jvrg), that the comparison is performed on various levels (XswT), and that they found our finding... | NeurIPS_2023_submissions_huggingface | 2,023 | null | null | null | null | null | null | null | null |
Improved Communication Efficiency in Federated Natural Policy Gradient via ADMM-based Gradient Updates | Accept (poster) | Summary: This paper proposes a communication-efficient algorithm, FedNPG-ADMM, for federated natural policy gradient by using a reformulation of the quadratic problem. It reduces the communication complexity from $\mathcal{O}(d^2)$ to $\mathcal{O}(d)$. The convergence analysis is provided accordingly.
Strengths: 1. The pa... | Rebuttal 1:
Rebuttal: **Q1.a**. Yes, data heterogeneity is a key issue in FL. However, this issue only exists in model aggregation methods (with local updates). As proven in [43], gradient aggregation methods are *immune* to whether collected data is i.i.d. or not. In summary, data heterogeneity will not influence our... | Summary: The paper studies how to train a global policy using distributed data in reinforcement learning. The authors propose a distributed natural policy gradient method by employing ADMM to approximately compute a natural policy gradient direction. The communication complexity is linear in the dimension of policy par... | Rebuttal 1:
Rebuttal: **W1**. This is actually an important bottleneck for the following reasons. Generally, for a method to scale to large problems, its complexity must be at most $O(n)$ [53]. Thus, the $O(n^2)$ complexity of the naive method is not acceptable in large-scale FL. In DRL, a policy is approximated... | Summary: This paper applies the ADMM technique to the Fed-NPG algorithm in reinforcement learning and reduces the communication cost from $O(d^2)$ to $O(d)$, where $d$ is the number of parameters, which nearly maintains the convergence results of Fed-NPG. Empirical results verify the theoretical analysis.
Strengths: Th... | Rebuttal 1:
Rebuttal: **Q1**. Thank you for these constructive suggestions. We will add PG to the main content, and communication comparisons are added in Figure 4 (see added pdf), where communication overhead is measured by the number of transmitted parameters with double precision in each agent. FedNPG-ADMM keeps the... | Summary: The work proposed a new algorithm for federated policy optimization. The proposed work uses a primal-dual update to replace the primal update of FedNPG, so that the communication cost reduces from $d^2$ to $d$. The proposed work enjoys the same rate of convergence as FedNPG under certain assumptions, and numerical ex... | Rebuttal 1:
Rebuttal: **Q1**. This is a thoughtful suggestion. We had the same thought at the beginning. However, FedAvg-type local updates might not bring advantages compared to gradient aggregation [46, 43]. On the other hand, unlike supervised learning, local updates in RL bring different **local policies**. ... | Rebuttal 1:
Rebuttal: Dear Area Chairs and Reviewers,
We appreciate your organization and valuable feedback.
In the one-page pdf, we add communication costs in Figure 4 and agent selection in Figure 5.
In the rebuttals, references [1-45] are from the main paper, and references [46-62] are listed as follows:
---
[46... | NeurIPS_2023_submissions_huggingface | 2,023 | null | null | null | null | null | null | null | null |
SaVeNet: A Scalable Vector Network for Enhanced Molecular Representation Learning | Accept (poster) | Summary: This paper introduces a novel molecule representation network that enhances the learning capacity and scalability through the integration of innovative initialization techniques and activation functions for vector features. The conducted experiments validate the network's proficiency in three distinct molecule... | Rebuttal 1:
Rebuttal: Dear Reviewer FRcS,
Thank you for your thorough assessment of our work. We value your feedback and would like to address your concerns and suggestions as follows:
> Presentation and Equation Clarifications
1. **Eq. 4 Ambiguity**: We've refined Eq. 4 to clarify the scalar-vector tuple representa... | Summary: This paper proposes an effective and efficient equivariant graph neural network for geometric learning on molecules. The model encodes 3D graphs with node types and coordinates and outputs scalar and vector representations. The message passing process is purely scalar-based, which enjoys more efficiency than b... | Rebuttal 1:
Rebuttal: Dear Reviewer jP4K,
Firstly, we extend our gratitude for the careful review and insightful feedback on our manuscript. We address each comment in detail to offer a clearer understanding of our work.
> Numerical stability and model convergence
1. **Relevance of Vector Initialization**: Recent wo... | Summary: This paper proposes an SE(3)-equivariant model called SaVeNet, designed to accommodate various geometric requirements. The proposed framework can effectively scale with the introduction of directional noise. Theoretical analysis and empirical results on several datasets are provided to validate the efficiency ... | Rebuttal 1:
Rebuttal: Dear Reviewer QLtV,
Thank you for taking the time to review our manuscript and for your invaluable feedback. We acknowledge the importance of clarity in our presentation, and we address your concerns as follows.
> The metrics std. MAE and log MAE
The standardized MAE is derived by normalizing t... | Summary: This paper proposes a framework called SaVeNet for geometric representation learning of molecules. The paper includes theoretical analysis and empirical experiments to demonstrate the superiority of SaVeNet over existing methods in terms of efficiency and expressiveness.
Strengths: 1. This paper proposes an e... | Rebuttal 1:
Rebuttal: Dear Reviewer T8Du,
Firstly, we'd like to extend our gratitude for your detailed review and the constructive feedback provided on our manuscript. We would like to address your questions as follows:
> Applicability of direction noise and vector activation to other equivariant models
We appreciat... | Rebuttal 1:
Rebuttal: Dear Reviewers,
Firstly, we'd like to extend our sincere gratitude for your diligent review of our work and your invaluable feedback. Based on your insights and suggestions, we have undertaken significant efforts to improve our manuscript, making it both more comprehensive and accessible to the w... | NeurIPS_2023_submissions_huggingface | 2,023 | Summary: The authors propose an efficient and scalable equivariant GNN (SaVeNet) for 3D molecular conformations. The architecture follows an encoder-decoder style framework, where the encoder is composed of "Inter-atomic Interactions" and "Atom-wise blocks" learning mechanisms. Several other modeling augmentations are ... | Rebuttal 1:
Rebuttal: Dear Reviewer 6Z9X,
We would like to express our gratitude for taking the time to review our manuscript and providing detailed and constructive feedback. We appreciate your positive reception of our work and carefully considered each of your points and would like to address your comments as follo... | null | null | null | null | null | null |
Gaussian Mixture Solvers for Diffusion Models | Accept (poster) | Summary: The authors point out that $q(x_s|x_t)$ is not necessarily Gaussian when $t$ is significantly bigger than $s$, and propose to use a mixture of Gaussians in order to model better the reverse process, when the number of integration steps is not large. Such a selection guarantees that with an increasing number of... | Rebuttal 1:
Rebuttal: Thank you for your supportive review and suggestions.
***Main Weakness 1: Presentation***
We appreciate your careful reading of our paper and your helpful suggestions. We will address these issues in the final version, including but not limited to:
1. We will revise the sentence in line 158 int... | Summary: The authors address the efficiency-effectiveness dilemma faced by existing SDE-based solvers in diffusion models during inference. They observe that the Gaussian assumption in the reverse transition kernel is frequently violated, even with a limited number of discretization steps. To overcome this limitation, ... | Rebuttal 1:
Rebuttal: Thank you for your valuable comments and questions.
***Question 1: Clarification regarding the claim that a limited number of discretization steps amplifies the violation of assumptions for the Gaussian transition kernel used in SN-DDPM***
Thanks for the insightful comment. We clarify this issu... | Summary: Sampling from diffusion models is equivalent to solving the reverse diffusion SDEs or the corresponding probability flow ODEs. In comparison, SDE-based solvers can generate samples of higher quality and are suited for image translation tasks. However, during inference, existing SDE-based solvers are severely c... | Rebuttal 1:
Rebuttal: Thank you for your supportive review and valuable comments.
***Weakness (a): Design of the Gaussian mixture model & How can it degrade to a Gaussian***
Our design choice of the Gaussian mixture model for the reverse transition kernel is $p(x\_s|x\_t)=\\frac{1}{3} \\mathcal{N}(\\mu\^{(1)}\_t(x\_... | Summary: The paper proposes to weaken the Gaussian assumption of the transition probability in the reverse SDE used in deep diffusion models. They first illustrates how and when the Gaussian assumption is wrong. Then they suggest to approximate the non-Gaussian transition probability by a Gaussian Mixture which is adju... | Rebuttal 1:
Rebuttal: Thank you for your supportive review and valuable suggestions.
***Weakness 1: The interest of the proposed method***
We understand that your concern may relate to the theoretical contributions and practical benefits of our method. (We acknowledge the possibility of potential misunderstanding an... | Rebuttal 1:
Rebuttal: We thank all reviewers for their valuable and constructive feedback, and we have responded to each reviewer individually. We have also uploaded a rebuttal PDF that includes:
- **Fig. A**: The relation between the sample quality (in FID) and the sampling time (in seconds) of GMS and SN-DDPM on CIFA... | NeurIPS_2023_submissions_huggingface | 2,023 | null | null | null | null | null | null | null | null |
ScaleLong: Towards More Stable Training of Diffusion Model via Scaling Network Long Skip Connection | Accept (poster) | Summary: They state that diffusion models using Unet suffer from unstable training
and oscillations of features and gradients. They also state that this is sensitive to coefficients related to scaling the skip connections of Unet. They set out to provide an explanation and more robust scaling methods for the skip... | Rebuttal 1:
Rebuttal: **`(1)` Thank you for your encouragement and taking the time to review our article.** We will further improve our paper based on the suggestions of other reviewers.
---
Rebuttal Comment 1.1:
Title: Rebuttal read
Comment: I confirm I have read the rebuttal and would like to keep my score. | Summary: U-Net is the most popular neural network backbone for diffusion models. In U-Net, the long skip connection (LSC) links the long-distant information near to the input and the intermediate network outputs. However, they suffers from unstable training, which is resolved by scaling down the LSC coefficients. This ... | Rebuttal 1:
Rebuttal: Thank you for the insightful and positive comments! In the following, we provide our point-by-point response and hope our response helps address your concerns. We also look forward to the subsequent discussion which may further help solve the current issues.
**`(1)` About the baseline EDMs.** Th... | Summary: This paper proposes to scale the skip connection in diffusion model Unets by an exponential factor. The authors should that the feature norms of vanilla Unets oscillate across batches, and that their proposed method results in much smaller feature output oscillations. They conjecture that this method stabilize... | Rebuttal 1:
Rebuttal: Thank you for the insightful and positive comments! In the following, we provide our point-by-point response and hope our response helps address your concerns. We also look forward to the subsequent discussion which may further help solve the current issues.
**`(1)` For parameter oscillation,** ... | Summary: In this paper, the authors focus on the challenge of the instability arising from the commonly adapted U-Net architecture for diffusion models. In particular, the authors start by theoretically analyzing the influence of the coefficients of long skip connects in U-Net-based diffusion models, specifically on th... | Rebuttal 1:
Rebuttal: Thank you for the insightful and very positive comments! In the following, we provide our point-by-point response and hope our response helps address your concerns. We also look forward to the subsequent discussion which may further help solve the current issues.
**`(1)` Per your suggestion, here... | Rebuttal 1:
Rebuttal: We provide the necessary charts for the rebuttal stage in the attached PDF. Reviewers are kindly requested to refer to them.
Pdf: /pdf/4ab6cb4d89415647e2a0a994b1c6a0a6d00f92ee.pdf | NeurIPS_2023_submissions_huggingface | 2,023 | Summary: The paper discusses the stability issues observed while training UNet in diffusion models, and theorizes on the role of Long Skip Connections (LSCs) in causing this instability.
Diffusion models (DMs), lauded for their ability to model realistic data distributions, involve a forward and a reverse diffusion p... | Rebuttal 1:
Rebuttal: Thank you for your insightful and positive comments. In the following, we provide our point-by-point response and hope our response helps address your concerns. We also look forward to the subsequent discussion which may further help solve the current issues.
**`(1)` In addition to LSCs, we ha... | Summary: The paper presents a study and algorithm for scaling UNet's long-range connections such that convergence and stability can be improved. The results are strong in the setup of training diffusion models.
Strengths: - The paper is very well written. The text and graphs are polished and it's easy to follow the id... | Rebuttal 1:
Rebuttal: Thank you for the insightful and very positive comments! In the following, we provide our point-by-point response and hope our response helps address your concerns. We also look forward to the subsequent discussion which may further help solve the current issues.
**`(1)` For the direction of the... | null | null | null | null |
Universal Prompt Tuning for Graph Neural Networks | Accept (poster) | Summary: The paper introduces a universal prompt-based tuning method called Graph Prompt Feature (GPF) and its variation (GPF-plus) for pre-trained Graph Neural Network (GNN) models. GPF is a universal method that can be applied to any pre-trained GNN model under any pre-training strategy. It operates on the input grap... | Rebuttal 1:
Rebuttal: ### **Response**
Dear reviewer KwHM,
We hope our point-to-point responses can address your concerns and provide you with better clarification.
1. (Weakness 1 \& Question 1) Comparison with linear probing.
The difference between linear probing and GPF lies in the introduction of additional lear... | Summary: This paper proposed a universal prompt-based tuning method, called GPF, for pre-trained GNN models. The idea is to operate on the feature space of the downstream input graph. The authors theoretically showed that GPF can achieve results equivalent to any prompting function, and is not weaker than full fine-tun... | Rebuttal 1:
Rebuttal: ### **Response**
Dear reviewer dbJu,
We really appreciate your comments on our work. We hope our response can address your concerns.
1. (Weakness 1) The motivation and advantages of universal prompting.
The universal graph prompt tuning method that we proposed offers three main advantages ove... | Summary: This paper aims for efficient adaptation of pre-trained graph neural networks to downstream tasks. A simple prompt tuning method (i.e. GPF) is proposed for adaptation and is applicable to GNN pretrained with any objectives. The main idea of GPF is to add a learnable vector on all the node features in the input... | Rebuttal 1:
Rebuttal: ### **Response**
Dear reviewer Cfab,
We hope our point-to-point responses can address your concerns and better clarify the contributions and value of our work.
1. (Weakness 1) The novelty of GPF and the comparison with existing methods.
The main contributions of our work compared to existing ... | Summary: This paper introduces the Graph Prompt Feature (GPF) approach, which aims to adapt pre-trained Graph Neural Networks (GNNs) for downstream tasks by appending tunable embeddings onto the frozen node embeddings. By doing so, the authors achieve a significant reduction in the number of parameters required for the... | Rebuttal 1:
Rebuttal: ### **Response**
Dear reviewer GUF8,
We hope our point-to-point responses can address your concerns and provide you with better clarification of the contributions and value of our work.
1. (Weakness 1) The relationship between our methods and prompting methods.
Our method, GPF, is a general ... | Rebuttal 1:
Rebuttal: ### **Global Response**
Dear all reviewers,
We appreciate your valuable comments on our work. We provide the following clarification and additional experimental results based on feedback.
**A. Contributions and influence**
We propose a universal graph prompt tuning method that can be applied ... | NeurIPS_2023_submissions_huggingface | 2,023 | Summary: This paper proposes the Graph Prompt Feature (GPF) to improve Graph Neural Networks (GNNs) performance amidst scarce labeled data and low out-of-distribution generalization. GPF, a universal prompt-based tuning method, operates on the input graph's feature space and is applicable to any GNN architecture. The a... | Rebuttal 1:
Rebuttal: ### **Response**
Dear reviewer aqDC,
We appreciate your comments and your support for our work. We hope our response can address your concerns. Please find our detailed response below.
1. (Weakness 1 \& Question 1) Dealing with scenarios where the feature space is noisy or inadequately represe... | null | null | null | null | null | null |
Dynamic Non-monotone Submodular Maximization | Accept (poster) | Summary: This work studies non-monotone submodular maximization subject to a cardinality
constraint in a fully dynamic setting, i.e., maintaining a good solution as
elements are inserted and deleted from the "current" ground set. Studying
non-monotone submodular maximization in this model is the natural follow-up to
th... | Rebuttal 1:
Rebuttal: Thank you for your valuable comments. We will make sure to fix the issues you pointed out and incorporate your suggestions in the revised version of the paper.
---
Rebuttal Comment 1.1:
Comment: I read all the reviews and author rebuttals, and will keep my rating the same.
I would like confirma... | Summary: In this paper, the authors consider the non-monotone submodular maximization problem under the cardinality constraint and dynamic model. Here, the dynamic model means that the ground set of the submodular function changes every time step where one element is inserted into or deleted from the ground set, and th... | Rebuttal 1:
Rebuttal: Thank you for your review. Please see below for our answers to your comments:
>I’m confused with the relation between parameter $\tau$ and the results in theorem ...
Thank you for pointing out this issue. It appears that the writing here has been confusing, and you may have misunderstood the th... | Summary: The paper studies the dynamic submodular maximization problem and gives the first efficient dynamic algorithm that maintains a constant approximation solution.
In the problem of dynamic submodular maximization, there is a sequence of updates (insertions/deletions) of ground set element, and one wants to maint... | Rebuttal 1:
Rebuttal: Thank you for pointing out the issues and listing the extra references. We will make sure to incorporate them in the revised version of the paper. | Summary: The authors consider a submodular non-monotone optimization problem in a fully dynamic setting. The authors propose an 8+epsilon approximation algorithm by combining several existing methods.
Strengths: - The technique behind the proposed algorithm is interesting.
- Proven approximation guarantee.
- A nice cont... | Rebuttal 1:
Rebuttal: Thank you for your review. Please see below for our answers to your comments:
> 8 + epsilon is a rather weak approximation guarantee.
Our primary goal in this paper was to obtain the first dynamic constant-factor approximation algorithm for non-monotone submodular maximization. This answers aff... | null | NeurIPS_2023_submissions_huggingface | 2,023 | null | null | null | null | null | null | null | null |
Multi-Agent Meta-Reinforcement Learning: Sharper Convergence Rates with Task Similarity | Accept (poster) | Summary: This paper studies the interdependence between the convergence of MARL and the quality of policy initialization.
Strengths: 1. It proposes a new algorithm that has an initialization-dependent convergence guarantee.
2. It establishes several theoretical results that connect policy initialization and convergen... | Rebuttal 1:
Rebuttal: We thank the reviewer for the appreciation of our work. We would be happy to discuss if the reviewer has any questions about the paper. | Summary: The paper introduces a meta-learning method to initialize the OOMD algorithm. Combined with the introduced initialization-dependent convergence guarantees, authors then can show faster convergence when the meta-learning initialization is close.
Strengths: The paper is well written, and the prior work is well ... | Rebuttal 1:
Rebuttal: We thank the reviewer for the helpful feedback. Our detailed responses are as follows.
1. We thank the reviewer for the appreciation of our Theorem 1, which is indeed one of our most interesting results. While we agree that Theorem 2 can be considered as an application of Theorem 1 (as the reviewer men... | Summary: The authors proposed a meta-learning approach based on MAML for multi-agent domains where tasks with similar NE policies, when learned sequentially, converge faster to desired equilibria solutions.
Strengths: * Originality
The authors investigated theoretical convergence properties of multi-agent learning ... | Rebuttal 1:
Rebuttal: We thank the reviewer for the insightful comments and the valuable suggestions on improving our work from multiple different perspectives. Our detailed responses are as follows.
1. We appreciate the reviewer’s concern of our task similarity metric. In Section 3.2, we choose the closeness of the N... | Summary: This paper establish theoretical results for meta-learning in a wide range of fundamental MARL settings, including learning Nash equilibria in two-player zero-sum Markov games and Markov potential games. Numerical results are shown to demonstrate the advantages of meta-learning.
Strengths: This paper establi... | Rebuttal 1:
Rebuttal: We thank the reviewer for the valuable feedback. The focus of our work is mostly theoretical, and the simulations were primarily used as proof of concept. But given the reviewer’s interests in the empirical performances of our results, we have added new and larger-scale simulations in this rebutta... | Rebuttal 1:
Rebuttal: We thank all the reviewers for the insightful feedback. In this “global” response, we would like to share some new and larger-scale simulations that we conducted in this rebuttal phase following some of the reviewers’ advice. We believe these new simulations can help address the reviewers’ questio... | NeurIPS_2023_submissions_huggingface | 2,023 | Summary: The authors introduce theoretical results on Model-Agnostic Meta-Learning in a multi-agent reinforcement learning setting. In particular they show that meta-learning can achieve stronger convergence guarantees than an RL baseline when tasks are similar. The results hold for zero-sum, potential and general-sum ... | Rebuttal 1:
Rebuttal: We thank the reviewer for the appreciation of our work and the valuable feedback. Our detailed responses are as follows.
1. We thank the reviewer for the comments on our empirical results. The focus of our work is mostly theoretical, and the simulations were primarily used as proof of concept. Bu... | null | null | null | null | null | null |
Efficient Potential-based Exploration in Reinforcement Learning using Inverse Dynamic Bisimulation Metric | Accept (poster) | Summary: This paper introduces a novel approach that combines bisimulation metrics with inverse dynamics modeling to formulate potential functions for reward shaping. The integration of these techniques offers potential-based exploration, and the paper provides theoretical analyses highlighting the benefits of this pro... | Rebuttal 1:
Rebuttal: Thank you for your insightful reviews. For the citations in the response, please refer to the reference list in the global comment.
*W1: The authors should also evaluate their method on DMC tasks... are known to pose more challenging exploration scenarios...*
**Response**: Thank you for your sugg... | Summary: This paper focuses on the topic of reward shaping in reinforcement learning to encourage exploration. Different from previous methods that heavily rely on the count-based episodic term in the exploration bonus, they provide an end-to-end potential-based exploration bonus. This paper proposes to use the bisimul... | Rebuttal 1:
Rebuttal: Thank you for your insightful reviews. We have included the experiments requested in W2 and Q2 in the PDF of the global comment. For all the references mentioned in the response, please find the reference list in the global comment.
*W1: There are 4 theorems in the main context without enough... | Summary: This paper proposes the automatic construction of a potential function for policy-invariant reward transformation. The basic idea is adding the discrepancy in action outcomes from the inverse dynamic model to the on-policy bisimulation metric proposed by Castro [2020]. Then, the authors propose a method to tra... | Rebuttal 1:
Rebuttal: Thank you for your insightful reviews. For all the references mentioned in the response, please find the reference list in the global comment.
*W1: ...because it explicitly estimates the transition and reward functions. Therefore, comparing the proposed method and model-based approaches is import... | Summary: This paper proposes to use inverse dynamic bisimulation metric for potential-based reward-shaping (PBRS). Specifically, the authors introduce the inverse dynamic bisimulation metric, which augments the bisimulation metric with an inverse dynamics term to account for state differences caused by actions. They th... | Rebuttal 1:
Rebuttal: Thank you for your insightful reviews. The requested experiments have been included in the PDF file. For all the references mentioned in the response, please find the reference list in the global comment.
*W1:.. the intuition of bisimulation metric for doing so remains largely unclear... However, it is u... | Rebuttal 1:
Rebuttal: We thank all reviewers for their valuable comments, and we summarize the major concerns regarding to the reviewers as follows:
### Sparse reward setting and more challenging environments
According to the review of the 1st reviewer naKC, there are questions about how our method can achieve good pe... | NeurIPS_2023_submissions_huggingface | 2,023 | null | null | null | null | null | null | null | null |
AR-Diffusion: Auto-Regressive Diffusion Model for Text Generation | Accept (poster) | Summary: This paper presents a diffusion model for text generation. The idea is generally interesting. It performs both sentence-level and token-level diffusion, where the latter is diffused with dynamic movement speeds. Its experiments are well-designed and its empirical results are strong.
Strengths: 1. The me... | Rebuttal 1:
Rebuttal: Thank you very much for taking the time to review our paper, in response to your concerns, we will give the following explanations.
**Q1: The paper is hard to follow, and the writing should be improved.**
A1: Our approach is designed to apply the inherent sequential features of natural language ... | Summary: This paper presents AR-DIFFUSION, a diffusion model that displays auto-regression-like generation behavior. The primary contributions of this work can be summarized as follows:
1) A multi-level diffusion strategy is proposed, encompassing both sentence-level and token-level diffusion.
2) A skipping mechanism i... | Rebuttal 1:
Rebuttal: Thank you very much for taking the time to review our paper; we will address your concerns in detail below.
**Q1: My primary concern is that the main gains appear to be derived from MBR. With the skipping mechanism, the per-sentence generation step is reduced from 2000 to 20. This gives you ch... | Summary: This paper introduces a diffusion method optimized for the autoregressive text generation scheme. They employ different movement speeds for denoising with respect to the token positions. Specifically, they apply a lower movement speed to right-side tokens to guide models to reflect information in left-side tok... | Rebuttal 1:
Rebuttal: Thank you very much for your valuable suggestions, and we will reply to your questions one by one below.
**Q1: Additional case studies comparing with GENIE or (N)AR models would provide further insights.**
A1: The following two tables are the results generated by GENIE and AR-Diffusion for the s... | Summary: This work introduces left-to-right sequential characteristics into diffusion models, enhancing the text generation performance of diffusion models. By considering the AR model as a diffusion model with two states: to be decoded and already decoded, AR-Diffusion defines a continuous diffusion model with decreas... | Rebuttal 1:
Rebuttal: Thank you very much for your careful review; we will elaborate on each of your concerns below.
**Q1: I think the authors overclaim the decoding speedup. First of all, most diffusion baseline models in the paper have no advantage in both generation quality and efficiency compared with the Trans... | Rebuttal 1:
Rebuttal: **Q1: Compare with more diffusion language models.**
A1: We have compared AR-Diffusion with SeqdiffSeq in the Table 3 , and compare with DINOISER and Diffusion-LM in the appendix Table 8. Furthermore, we enrich the comparision with more baselines in the following table. Due to the unavailability... | NeurIPS_2023_submissions_huggingface | 2,023 | null | null | null | null | null | null | null | null |
Adaptive Uncertainty Estimation via High-Dimensional Testing on Latent Representations | Accept (poster) | Summary: The paper presents a new framework for uncertainty estimation in baysian neural networks. The core contribution is formulating uncertainty estimation as multiple high-dimensional hypothesis testing problem and deriving the test statistics necessary. The paper then present multiple empirical results, showing go... | Rebuttal 1:
Rebuttal: We greatly appreciate your positive comments on our manuscript and your insightful summary of the impacts and novelty of our work. We take great pleasure to respond to several intriguing discussions you raised as follows.
>**Q1. Improving Clarity of Claims.**
Thank you for your suggestions on i... | Summary: This paper proposes a framework to detect out-of-distribution (OOD) data via high-dimensional testing on latent representations.
The proposed framework consists of:
- a Bayesian Neural Network that, for any input, can produce an ensemble of latent presentations by sampling the posterior of the weights
- a hyp... | Rebuttal 1:
Rebuttal: We greatly appreciate your positive and insightful comments on our manuscript. We would address your comments as follows.
>**Q1. More Explanation on ARHT.**
We understand the audience without a solid statistical background may find it difficult to understand the proposed ARHT. We provide more e... | Summary: The paper proposes BNN-ARHT, which introduces a uncertainty estimation framework that uses high dimensional hypothesis testing in the feature space of a network. The key idea is to use ARHT to determine in vs outliers in a feature space, and so it is generic and broadly applicable to any kind of task and netwo... | Rebuttal 1:
Rebuttal: Thank you for your positive comments on the novelty and the appealing aspects of our methods. And thank you for indicating an intriguing future direction for this work using conformal inference. We will address your concerns and questions as follows.
>**Q1. More Comprehensive Evaluation.**
Acco... | Summary: This paper proposes an OOD detection procedure by applying adaptable regularized Hotelling’s T-square (ARHT) test [24] on the feature representation of learned BNN networks. Authors introduced the application o ARHT on BNN encoder, and proposed a procedure to adaptively calibrate detection threshold based on B... | Rebuttal 1:
Rebuttal: Thank you for your detailed and constructive comments. We will address your questions and concerns as follows.
>**Q1. A More accurate description of the literature.**
Thank you for the suggestions. We believe that there might be some misunderstanding. We focused on comparing the previous uncer... | Rebuttal 1:
Rebuttal: We thank all the reviewers for your time and efforts on our manuscript. According to the reviewers' comments, we conduct additional experiments to more extensively evaluate our method, including adding more baselines, experiments on more diverse datasets, and experiments with larger architectures.... | NeurIPS_2023_submissions_huggingface | 2,023 | null | null | null | null | null | null | null | null |
Partial Label Learning with Dissimilarity Propagation guided Candidate Label Shrinkage | Accept (poster) | Summary: In this submission, the authors propose a novel partial label learning method named DPCLS by realizing the effectiveness of the dissimilarity relationship. They develop semantic similarity and dissimilarity matrices, which form an adversarial relationship, which is further utilized to shrink the solution space of t... | Rebuttal 1:
Rebuttal: **Thank you for your time and effort in reviewing our paper.**
---
**W1: The motivation is unclear and the novelty might be limited.**
**Answer to W1:**
* **Motivation**: please refer to the **Global Response** for the **Motivation** of our work, and in the final version, we will improve the i... | Summary: This paper proposes a new approach to partial label learning, called DPCLS, which learns the similarity and dissimilarity matrices to improve labeling accuracy in an adversarial relationship. The proposed method is compared to several existing methods on a variety of datasets, and the results demonstrate its s... | Rebuttal 1:
Rebuttal: **Thank you for your time and effort in reviewing our paper.**
---
**W1: Small-scale data sets and motivation of this paper**
**Answer to W1**
* **Motivation**: Thank you for your valuable comments and suggestions. Please refer to **Global Response** for the detailed **Motivation** of our work,... | Summary: This paper constructs a second-order similarity matrix and a semantic dissimilarity matrix. The similarity matrix is obtained by leveraging the confidence obtained from the underlying model, while the semantic dissimilarity matrix is determined based on the label candidate set and the distribution of samples i... | Rebuttal 1:
Rebuttal: **Thank you for your time and effort in reviewing our paper.**
---
**W1: The research motivation of the paper is not clearly stated.**
**W2: The paper mainly focuses on describing the proposed method, without summarizing and extracting issues from existing PLL research.**
**Answer to W1 and W... | Summary: The paper proposes a new method for partial label learning called Dissimilarity Propagation guided Candidate Label Shrinkage(DPCLS). The method captures the confidence of candidate labels by constructing a constrained regression model and uses the product of the label confidence matrix and its transpose to bui... | Rebuttal 1:
Rebuttal: **Thank you for your time and effort in reviewing our paper.**
---
**W1: The structure and logic of the paper need improvement.**
**Answer to W1:**
Thank you for your suggestion. In the final version, we will improve the introduction to make the logic clearer. Specifically, the **Background (... | Rebuttal 1:
Rebuttal: Thanks to all the reviewers and the area chair for handling our paper and the valuable comments and suggestions to improve its quality. In the initial comments, we received 4 positive recommendations (1 Accept, 1 Weak Accept, 2 Borderline Accept) and 1 negative recommendation (1 Borderline Reject)... | NeurIPS_2023_submissions_huggingface | 2,023 | Summary: The manuscript delineates an innovative method, termed as DPCLS, which is designed for partial label learning. In addressing the issue - namely label disambiguation - the DPCLS method exhibits an amalgamation of similarity relationship and dissimilarity relationship in an adversarial manner that endows the me... | Rebuttal 1:
Rebuttal: **Thank you for your time and effort in reviewing our paper.**
---
**W1: Steps of Algorithm 1 and explanation of the auxiliary matrix $A$**
**Answer to W1:**
* **The effect of the different orders:**
Steps 4-9 in **Algorithm 1** solve four subproblems, and the order of them will not affect the ... | null | null | null | null | null | null |
Gaussian Process Probes (GPP) for Uncertainty-Aware Probing | Accept (poster) | Summary: This paper provides Gaussian process probes (GPP), a probabilistic method to evaluate uncertainty for a binary classification task over a (pre-trained) feature extractor. The core idea is to use a GP instead of a linear probe on the feature extractor. The use of a GP provides a natural way to estimate two uncertai... | Rebuttal 1:
Rebuttal: We thank the reviewer for the positive feedback.
> LPE results in Figure 5, “the well-trained classifier will output 0.5 as the judged probability for positive and 0 for negative examples. This should occur when the classification problem is linearly separable on the feature space, and the numb... | Summary: This work introduces a unified framework called Gaussian process probes (GPP) for probing and quantifying uncertainty in models' representations of concepts. GPP extends linear probing methods and uses a Bayesian approach to estimate the distribution of classifiers induced by the model. This distribution measu... | Rebuttal 1:
Rebuttal: We thank the reviewer for the positive feedback.
> “Can you explain in more detail the choice of prior as outlined in Section 2.3.1. I'm not sure I understand what you mean by matching the Beta prior and matching the normal distribution in lines 125-126.”
The goal is to use a Log-normal distr... | Summary: The manuscript proposes a Gaussian process-based probing (monitoring a layer using only the layer’s activations without influencing the model itself) method that can estimate uncertainty in prediction. In the experimental result section, the proposed method is applied to several example datasets that can check... | Rebuttal 1:
Rebuttal: We thank the reviewer for the review and constructive feedback.
We would like to emphasize that the novelty of this work comes from applying different kinds of uncertainty and GPs in the context of probing. Our goal is not simply to measure the uncertainty of predictions from a neural net, where we... | Summary: - The authors propose a probabilistic probing method to understand a given pre-trained classifier.
- The authors describe how looking at classifier predicted class probabilities is not enough since "0.5" in a binary task can happen for several reasons spanning the aleatoric/epistemic uncertainty spectrum
- On... | Rebuttal 1:
Rebuttal: We thank the reviewer for the review and constructive feedback.
We would like to first clarify that our primary goal is not to perform OOD detection but to understand which concepts a model can and cannot represent (i.e., probing). We designed GPP (Gaussian process probes) to measure uncertainty... | Rebuttal 1:
Rebuttal: We are encouraged that the reviewers found our work novel (**Reviewers 5Gmc, VzJz, zzXk**), significant (**3D6e**), well-motivated (**iFt6, VzJz**), well-written and easy to follow (**zzXk, 3D6e**). Moreover, **Reviewer 5Gmc** acknowledged that our measures of uncertainty for probing are interesti... | NeurIPS_2023_submissions_huggingface | 2,023 | Summary: The authors introduce Gaussian process probes, a probabilistic probing method. They use this method to obtain additional insights into the internals of deep learning models, using the concepts of aleatoric and epistemic uncertainty.
Strengths: * This is an interesting and useful addition to the literature on ... | Rebuttal 1:
Rebuttal: We thank the reviewer for the positive feedback.
> “writing”
Thank you for these suggestions. We will polish both the intro and experiment sections.
> “test the method on more than one dataset”
In the paper, we conducted experiments on 2 standard datasets and 1 set of photographic ima... | null | null | null | null | null | null |
Stochastic Multi-armed Bandits: Optimal Trade-off among Optimality, Consistency, and Tail Risk | Accept (spotlight) | Summary: This paper tackles the problem of trading off problem-dependent and worst-case regret and "tail risk" of the regret in bandits. Here, the tail risk means the probability that the regret is larger than $\Omega(T^\delta)$ for some $\delta>0$. Recent studies have shown that the usual algorithms in bandits have un... | Rebuttal 1:
Rebuttal: We sincerely thank you for your valuable time and review. We would like to provide responses to your comments, which are helpful and enlightening.
- We’ve indeed tried adopting what you suggested, but ended up not doing so for two reasons.
- The relation between the tail probability and the tail thr... | Summary: This paper explores the stochastic multi-armed bandit (MAB) problem. The authors investigate the relationship between worst-case optimality, instance-dependent consistency, and light-tailed risk in policy design. Three main properties are considered for policy design: worst-case optimality, instance-dependent ... | Rebuttal 1:
Rebuttal: We sincerely thank you for your valuable time and review. We would like to provide responses to your comments and questions, which we find very helpful to us.
- Incorporating empirical validation: In the appendix, we provide detailed numerical experiments. We would like to emphasize that in both ... | Summary: The submission studies the stochastic multi-armed bandits. The arm-selection policy is required to be worst-case optimal, instance-dependent consistency, and have low tail risks (worst-case and instance-dependent) simultaneously. Lower bounds for achieving these goals at the same time are provided in Theorem 3... | Rebuttal 1:
Rebuttal: We sincerely thank you for your valuable time and review. We would like to provide responses to the three questions (f), (g), and (h), which we find very helpful.
- Yes, indeed you are correct. We will add more discussion for illustration in the next version.
- Yes. Our model assumes subGaussian ... | Summary: This paper presents an insightful investigation into the trade-off between optimality, measured by expected regret, and risk, defined as the probability of large regret, in the context of Multi-Armed Bandit problem algorithm design. The authors have made several significant contributions:
* They have demonstr... | Rebuttal 1:
Rebuttal: We sincerely thank you for your valuable time and review. We would like to provide responses to the four questions, which we find very helpful to improve our work.
- Thanks for the question. In Table 1, we provide the critical values of the log tail probabilities, which serves as a more intuitive... | null | NeurIPS_2023_submissions_huggingface | 2,023 | null | null | null | null | null | null | null | null |
Task Arithmetic in the Tangent Space: Improved Editing of Pre-Trained Models | Accept (oral) | Summary: This paper studies the "task vectors" framework where the weights of models can be perturbed in specified directions corresponding to tasks which result in improvements on those tasks. They attribute the success of this framework to "weight disentanglement" which means that adding a task vector for task i does... | Rebuttal 1:
Rebuttal: We appreciate that the reviewer reported that our paper is very interesting and has impressive results, and their engagement to improve it. Below, we address their comments.
**Adding more tasks**
For the paper, we are using the experimental setting of Ilharco et al. [39], where task arithmetic... | Summary: This paper presents a comprehensive theoretical and empirical analysis of task arithmetic for model editing, where adding different task vectors (obtained by taking the difference between fine-tuned and pretrained model checkpoints) could improve the model’s performance on these tasks and vice versa. The autho... | Rebuttal 1:
Rebuttal: We sincerely appreciate the reviewer’s recognition of our work and their engagement to improve it! Below, we address their comments.
**Effect of scale on weight disentanglement**
Our results reveal that by scaling the number of model parameters, the performance of linearized fine-tuning becom... | Summary: This paper theoretically and empirically investigates the reasons why task arithmetic (an emerging technique for editing pre-trained neural networks) works.
The paper shows that, contrary to previous hypotheses [39,79,80], linearity of the fine-tuning on individual tasks is not sufficient to fully explain the ... | Rebuttal 1:
Rebuttal: We really appreciate the reviewer’s enthusiasm and acknowledgment of the significance of our work and their engagement to improve it! Below, we address their comments.
**Generality of our results beyond CLIP/ViT models**
We would like to emphasize that all our theoretical results are directly... | Summary: This paper presents a comprehensive analysis of task arithmetic using pre-trained CLIP models. It challenges the early hypothesis that task arithmetic arises from linear fine-tuning in the NTK regime and introduces weight disentanglement as a necessary condition for enabling task arithmetic. Further experimen... | Rebuttal 1:
Rebuttal: We appreciate that the reviewer recognized that our paper is well-written, our analysis is motivating, and our experiments are convincing and their engagement to improve it. Below, we address their comments.
**Generality of our results beyond CLIP/ViT models**
We would like to emphasize that a... | Rebuttal 1:
Rebuttal: We kindly thank all the reviewers for their time and for providing valuable feedback on our work. We appreciate that reviewers have pointed out that our work is interesting (Reviewer [eVrq](https://openreview.net/forum?id=0A9f2jZDGW&noteId=kbSkLPUU32)), intriguing (Reviewer [qb7d](https://openrevi...
Rebuttal: We sincerely appreciate the reviewer’s recognition of our work and their engagement to improve it! In what follows, we address their comments.
**Generality of our results beyond CLIP/ViT models**
We would like to emphasize that all our theoretical results are directly applicable to any model ... | null | null | null | null | null | null |
Online Ad Procurement in Non-stationary Autobidding Worlds | Accept (poster) | Summary: This work studies an advertiser's online high-dimensional lever decision problem with long-term constraints under limited bandit feedback for different input models. The authors' main contributions include: (1) model formulation; (2) proposing an algorithm universally applicable across input models; (3) theoretica... | Rebuttal 1:
Rebuttal: We thank the reviewer for the constructive and valuable feedback.
Response to weakness: The key difference between our paper and [1] is that the reward function and the constraint functions are known in [1] before making the decision (i.e. [1] studies the full information setting), while in this ... | Summary: This work studies the problem of dynamic online allocation under constraints with bandit feedback, and derives a generic algorithm applicable to various input settings (stochastic, adversarial, $\delta$-corrupted, ergodic, periodic). It recovers regret rates close to those of the lower bounds in each of these ... | Rebuttal 1:
Rebuttal: We thank the reviewer for the constructive and valuable feedback.
Regarding Weakness 1 on the safe action: We would like to point out that the existence of a “safe action” is quite common in online advertising. Take for example the simple case where an advertiser only has a long-term budget cons... | Summary: This paper concerns a two-stage autobidding scenario, such as an advertising platform environment. Each advertiser wants to maximize value received (e.g., clicks) subject to long-run constraints (e.g., budget or ROI). As actions, the advertiser can specify certain instructions to an autobidding agent (e.g., ... | Rebuttal 1:
Rebuttal: We thank the reviewer for the constructive and positive feedback!
Regarding Weakness 1 and Question 1: The game-theoretic interactions between actions among multiple agents is indeed an interesting yet challenging future direction. For this work, one can view the various environments of interest ... | Summary: The paper proposes a universally constrained online learning framework for ad procurement in non-stationary autobidding worlds. The paper makes contributions to the field by addressing the challenges of ad procurement in non-stationary autobidding worlds and developing a unified algorithm that can perform well... | Rebuttal 1:
Rebuttal: We thank the reviewer for the constructive and valuable feedback.
Regarding Weakness 1 and Question 1 on experimental results: we agree that having experimental results would strengthen our paper’s key messages as well as contributions. We did not include experimental results in our paper due to... | null | NeurIPS_2023_submissions_huggingface | 2,023 | null | null | null | null | null | null | null | null |
On the Powerfulness of Textual Outlier Exposure for Visual OoD Detection | Accept (poster) | Summary: The paper notices that while outlier exposure has shown promising potential in improving OoD detection performance, all previous studies on outlier exposure have been limited to utilizing visual outliers. The paper uncovers the benefits of using textual outliers by replacing real or virtual outliers in the ima... | Rebuttal 1:
Rebuttal: Thank you for all of the constructive feedback and suggestions. In particular, we appreciate your recognition of the novelty and efficiency of our work and the insightfulness of our analyses. Here, we show additional experimental results to address your concerns.
> W1. Why do the authors choose ... | Summary: The paper proposes a new method that takes the textual outliers to help the model better detect out-of-distribution samples. Specifically, they build the pipeline on top of CLIP models with a classifier and use different representations of the text sample to synthesize outliers in the CLIP space, then use them... | Rebuttal 1:
Rebuttal: Thank you for acknowledging the novelty of our method and the extensive experimental results. Here, we show various newly-conducted experimental results to further verify the effectiveness of our method.
> Q1 (W1). How can you prove the OOD robustness comes from your method and not from CLIP it... | Summary: This paper studies visual OOD detection by introducing the textual outlier under the outlier exposure paradigm. Different from previous research focused on utilizing visual outliers, this work explores the benefits of textual outliers in the image domain. Specifically, they propose different ways to generate t... | Rebuttal 1:
Rebuttal: Thank you for providing constructive feedback and suggestions. In particular, we appreciate your recognition of the novelty and efficiency of our work and the insightfulness of our analyses. Here, we hope that a more detailed explanation can address your concerns.
> Q1 (W1, W2). Could the authors... | Summary:
This paper addresses the challenge of detecting Out-of-Distribution (OOD) data by introducing "textual outlier exposure" as an alternative to visual outliers. Instead of relying on visual examples, the authors explore the benefits of using textual equivalents in OOD detection. They propose various methods for... | Rebuttal 1:
Rebuttal: Thank you for acknowledging the novelty of our method and the extensive experiments. Here, based on your comments, we show various newly-conducted experimental results to further verify our method’s effectiveness.
> Q1 (W1). It appears that the recent state-of-the-art methods, such as ASH [1] an... | Rebuttal 1:
Rebuttal:
We sincerely appreciate the reviewers' time and invaluable feedback. The unanimous consensus among reviewers highlights our paper's insightful contribution on textual outliers (T6K7, Lz76, JSZe, yebH). The reviewers also commend the novelty and significance of the addressed problem (T6K7, JSZe, y... | NeurIPS_2023_submissions_huggingface | 2,023 | null | null | null | null | null | null | null | null |
Exposing flaws of generative model evaluation metrics and their unfair treatment of diffusion models | Accept (poster) | Summary: The authors performed extensive experimental study on various image-based generative models. Based on the study, it showed that no existing metric strongly correlated with human evaluations. The authors also included alternative self-supervised features extractors for evaluation. Additionally, data memorizatio... | Rebuttal 1:
Rebuttal: Thank you for your review, we appreciate the time and effort that went into evaluating our work, and we are glad you found that the “results are expected to have a high impact on evaluating existing generative models”.
- On concerns with the “human error rate” metric: The human error rate metric ... | Summary: This paper try to get rid of the limitations of current evaluation metrics for generative models and focuses on the perceptual fidelity of diffusion models. The authors conduct an extensive study using a wide range of image-based generative models across diverse datasets. They employ psychophysics to measure h... | Rebuttal 1:
Rebuttal: Thank you for your review, we appreciate the time and effort that went into evaluating our work, and we are glad you found it “extensive” and “comprehensive”. Please see our general rebuttal for an answer to your question about potential bias of the participants, where we provide additional examin... | Summary: In this paper, the authors conduct a thorough investigation into the limitations of the Frechet Inception Distance (FID) metric for evaluating generative models. They address this issue by performing human evaluation and proposing a superior alternative for automatic generative model evaluation. Through dedica... | Rebuttal 1:
Rebuttal: Thank you for the detailed and helpful review. We are thrilled to hear you frame our insights as “valuable guidance on the appropriate approach for evaluating image generative models.” You rightly mentioned that including more details on the metrics in our paper will make it more accessible - plea... | Summary: This paper initially demonstrates which embedding space is similar to human evaluation criteria by utilizing various datasets and image generation models. It reveals that the embedding space of DINOv2 aligns most closely with the tendencies identified through a large-scale human survey. Moreover, it highlights... | Rebuttal 1:
Rebuttal: Thank you for the positive review and helpful feedback! To address your questions:
1. *Is there an intention to make all the features publicly available?*
Yes, we will be making all the features of the code publicly available, and all of the datasets. Hopefully this will help facilitate furthe... | Rebuttal 1:
Rebuttal: We thank all the reviewers for their thoughtful feedback and the time they spent assessing our work. We are very encouraged by the largely positive feedback, including that our paper provides “significant contributions” (uifn), “presents a considerable number of insights” (uTCz), and “delivers a c... | NeurIPS_2023_submissions_huggingface | 2,023 | Summary: This paper constructs an image dataset sampled from various generative models and scored by human participants in terms of their fidelity, and argues that existing metrics do not correlate well with this notion of fidelity. Then, it investigates how different choices of embedding space (i.e. different encoders... | Rebuttal 1:
Rebuttal: Thank you for your detailed review. We are happy our paper “provides many valuable experiments with unique insights on the limitation of existing metrics”. We believe the replies below address your concerns, and kindly ask that if you agree, to consider raising your score.
On our work being “thre... | null | null | null | null | null | null |
Knowledge Diffusion for Distillation | Accept (poster) | Summary: The authors propose to explicitly eliminate the noise in the student feature with a diffusion model, reducing the discrepancy between the student and teacher models for better knowledge distillation. Specifically, they build a lightweight diffusion model to reduce computation cost and introduce an adaptive noise match... | Rebuttal 1:
Rebuttal: > 1.1 The technical contribution of this work is not significant.
We respectfully believe our contribution is sufficient. Directly applying diffusion models to KD is difficult and not straightforward, and we provide an effective solution with the following adaptations. (1) We assume the stude... | Summary: This paper proposes a novel method of knowledge distillation. It uses a diffusion model to denoise the student model features, reducing the gap between the teacher and student models. An auto-encoder is also designed to reduce the computational effort, and an adaptive noise module to improve the denoising effe... | Rebuttal 1:
Rebuttal: We sincerely thank the reviewer for the efforts in reviewing our paper and the positive evaluation. Our responses according to the reviewer's comments are summarized as follows.
---
> 1. Hyperparameters of $\lambda_1$, $\lambda_2$, and $\lambda_3$.
There is a typo in line 220, and $\lambda_3$ s... | Summary: The paper introduces a new knowledge distillation (KD) method named DiffKD, which aims to bridge the representations between teacher and student features via a diffusion model. The motivation is based on the finding that the student feature is noisier than the teacher feature, and therefore diffusion models ca... | Rebuttal 1:
Rebuttal: We sincerely thank the reviewer for the efforts in reviewing our paper and the positive evaluation. Our responses according to the reviewer's comments are summarized as follows.
---
> 1. Train diffusion model with teacher feature vs. Student feature.
Training the diffusion model requires the fe... | Summary: This paper presents a novel knowledge distillation (KD) approach. The difference from the existing methods lies in the computation of discrepancy between the teacher and student signals. This paper formulates it using a diffusion model and uses a denoising procedure to reconstruct the teacher features from the... | Rebuttal 1:
Rebuttal: We sincerely thank the reviewer for the efforts in reviewing our paper and the positive evaluation. Our responses according to the reviewer's comments are summarized as follows.
---
> 1. Does this paper mean that the diffusion procedure finds the shortest path in a distorted feature space (rathe... | Rebuttal 1:
Rebuttal: Dear Reviewers,
We thank all the reviewers for their valuable comments and efforts in reviewing our paper.
We are delighted to see that Reviewer tFex, pbtu, hqhp, and 78h4 stated that our method is interesting and novel; Reviewer hqhp and 78h4 acknowledged that our method is widely applicable ... | NeurIPS_2023_submissions_huggingface | 2,023 | Summary: The authors propose DiffKD, a knowledge distillation technique based on the hypothesis that the student's feature is a noisy version of the teacher's feature. Based on this assumption, they use a diffusion model to iteratively denoise the student's features before matching with the teacher. In addition, they p... | Rebuttal 1:
Rebuttal: We sincerely thank the reviewer for the efforts in reviewing our paper and positive evaluation. Our responses according to the reviewer's comments are summarized as follows.
---
> 1. Evidence for the claim that the student feature is the noisy version of the teacher feature.
In knowledge distil... | Summary: This work proposes the use of a diffusion model for denoising the noisy feature of the student.
It tackles the issues of employing the diffusion model for knowledge distillation, namely heavy computation and the inexact noise level of the student feature.
To tackle these issues, it proposes a lightweight diffusion mod... | Rebuttal 1:
Rebuttal: We sincerely thank the reviewer for the efforts in reviewing our paper. Our responses according to the reviewer's comments are summarized as follows.
---
> 1. Applying diffusion model to denoise the student feature is not novel.
We summarize our novelties as follows. (1) The existing methods so... | null | null | null | null |
Exploiting hidden structures in non-convex games for convergence to Nash equilibrium | Accept (poster) | Summary: This paper proposes a preconditioned hidden gradient descent method to provide strong formal convergence guarantees in a general class of multi-agent settings referred to as hidden monotone games. Theoretical analyses and synthetic experiments are also provided.
Strengths: 1. The method seems novel t... | Rebuttal 1:
Rebuttal: Thank you for your input and positive evaluation. We reply to your questions point-by-point below:
1. **Discussion about the experiments included in the paper:**
Our work distinctly stands out for its depth and novelty in experimental design within the realm of hidden games. A review of recent st... | Summary: It is known that a convergence guarantee exists for monotone games. However, most games are not monotone. This paper considers a new scenario where the monotone structure is presented in latent space. It then proposes the 'Preconditioned Hidden Gradient Dynamics' to design the preconditioned hidden gradient de... | Rebuttal 1:
Rebuttal: Thank you for your support and your helpful comments.
1. **This paper is very well-written. But it may lack some real-world examples to help the reader understand the importance of studying the hidden games.**
We are glad to hear that you found our paper very well written! A natural exampl... | Summary: The paper uses hidden structure to provide continuous time and algorithmic theoretical learning guarantees for certain non-convex games. Specifically, they provide Preconditioned Hidden Gradient Flow and its discrete-time variant Preconditioned Hidden Gradient Descent that can be proved to converge when analyz... | Rebuttal 1:
Rebuttal: Thank you for your encouraging remarks and your positive evaluation. We reply to your questions point-by-point below:
1. **A discussion of the relevance of the faithfulness assumption on the hidden map could be discussed.**
We will be happy to add a discussion, however, we believe that it ... | Summary: The paper focuses on studying non-convex games with hidden structures, where latent variables can be seen as a function of control variables and are decoupled. The authors propose a discrete algorithm called Preconditioned Hidden Gradient Descent (PHGD) to exploit the hidden structure and achieve convergence t... | Rebuttal 1:
Rebuttal: Thank you for your input and detailed remarks. We address each of your questions point-by-point below and we will revise our manuscript accordingly at the first revision opportunity.
1. **Lack of diverse examples: The paper could benefit from including additional examples beyond the matching Penn... | Rebuttal 1:
Rebuttal: Dear AC, dear reviewers,
We are sincerely grateful for your time and constructive input. To streamline the discussion phase, we reply to each reviewer’s questions in a separate point-by-point thread below.
Kind regards,
The authors | NeurIPS_2023_submissions_huggingface | 2,023 | Summary: This paper studies a hidden preconditioned stochastic gradient descent method for finding Nash equilibrium in games with hidden monotone structures. It demonstrates non-asymptotic convergence bound of the proposed algorithm. The complexity bound of the hidden monotone game matches that of monotone games. The t... | Rebuttal 1:
Rebuttal: Thank you for your input. Before our point-by-point replies, we would only like to respectfully point out a potential misunderstanding: the fact that the latent structure may be known to the players does not imply that the problem can be solved in the latent space and the solution transferred back... | null | null | null | null | null | null |
Robustness Guarantees for Adversarially Trained Neural Networks | Accept (poster) | Summary: This paper studies the optimization convergence of adversarial training in two-layer neural networks. It also proposes a reflecting loss which searches for a better attack.
Strengths: The paper is clear and easy to understand.
Weaknesses: My major concern regarding this paper is that the condition on t... | Rebuttal 1:
Rebuttal: 1.*Regarding attack strength*, the constraint $\nu \leq \beta/(2 (1-\alpha) \kappa \sqrt{m})$ is not too small since **$\kappa = 1/\sqrt{m}$; see line 134**.
In fact, it is exactly what we should hope for. Consider, for example, the setting with $\alpha=0$ (i.e., linear unit). Then the constrain... | Summary: The paper studies robust training in 2-layer networks. This problem is studied as a 2-step procedure: finding adversarial samples, then training over these samples. The main result is that, given a linearly separable dataset, a 2-layer network with leaky-ReLU activation trained robustly with SGD converges to a ... | Rebuttal 1:
Rebuttal: [Q1] We think that the proof should go through even with bias terms as well, but it may require some extra work. In particular, see some of the recent work (e.g., https://arxiv.org/pdf/2102.11840.pdf and https://arxiv.org/pdf/2301.00327.pdf). The idea here is similar to how the proof goes through ... | Summary: This paper studies the convergence of adversarial training of a two-layer neural network with a gradient descent ascent (GDA)-type algorithm. The algorithm considered here solves the inner max problem on a surrogate concave loss, and the outer minimization problem on a log-exp loss. Finally, the authors show the co... | Rebuttal 1:
Rebuttal: [W1] Regarding the “*main technical novelty is the idea of running PGD on a concave surrogate, and show the convergence of PGD*”. That is a fair characterization of the main contribution. The computational learning guarantee for the end-to-end adversarial training follows as a corollary once we have... | Summary: This paper investigates the adversarial training of two-layer neural networks on linearly separable data. The authors propose to reflect the commonly used convex surrogate loss during the inner loop that generates adversarial attacks via the PGD method, and derive guarantees on the convergence of the attack. Me... | Rebuttal 1:
Rebuttal: [W1] Regarding “*The empirical results with MNIST and CIFAR-10 do not show significant difference between the performance of standard adversarial training and adversarial training using proposed reflected loss function.*” That is actually what we were hoping for.
The goal here is not to improve ... | null | NeurIPS_2023_submissions_huggingface | 2,023 | null | null | null | null | null | null | null | null |
Can Neural Networks Improve Classical Optimization of Inverse Problems? | Reject | Summary: In this paper the authors explore whether better optimization solutions can be found by jointly optimizing several inverse problems together. These inverse problems share a connection as they all can be formulated through a differentiable function $F(\xi_i | x_i)$. The authors implement the joint optimization ... | Rebuttal 1:
Rebuttal: > I'm not familiar with any ML application that has similar optimization problems that can be pooled together.
We have addressed this point in our general rebuttal above. Our approach can be applied whenever an experiment is performed multiple times or records multiple instances, such as time ser... | Summary: This paper develops a novel approach to gradient-based non-convex optimization. The proposed methodology begins with the reparameterization of the parameter space utilizing neural networks, followed by the application of classical techniques such as BFGS, or alternative Neural Network surrogate models for the ... | Rebuttal 1:
Rebuttal: > The authors do not clearly define the inverse problem that they are attempting to solve
Our inverse problems have very simple definitions: in all experiments, the objective is the $L^2$ loss between the target and the simulation output from the solution estimate, see Eq. 2. In the wave packet e... | Summary: The manuscript presents a method to reparameterize and solve multiple inverse problems jointly using neural networks. The manuscript tests the proposed method on multiple inverse problems (including some chaotic problems) and compares against Neural Adjoint and BFGS baselines to show measurable performance imp... | Rebuttal 1:
Rebuttal: > I would recommend adding the training wall clock times + solving times in a table to give potential users of this method a proper estimate.
We have assembled a table that includes all training and optimization times. It is provided in the general rebuttal above.
> Adding benchmarks for the sam... | Summary: This paper discusses a novel approach to finding model parameters from data, a crucial task in science. Traditional iterative optimization algorithms like BFGS can accurately solve simple inverse problems, but their reliance on local information can limit their effectiveness in complex situations with local mi... | Rebuttal 1:
Rebuttal: > For the 4th setting, [...] reparameterization method gives much higher mean losses than BFGS despite the fact that the majority of problems actually improve over BFGS.
This is an artifact of the $L^2$ loss: a few examples with a high loss can dominate the mean even though most examples have a lower ... | Rebuttal 1:
Rebuttal: We thank all reviewers for their valuable feedback and helpful comments!
In answering the reviewer’s questions, we performed additional experiments and created many new figures and tables. The attached PDF page shows gradient descent as an additional baseline for all our experiments (Fig 1), visu... | NeurIPS_2023_submissions_huggingface | 2,023 | null | null | null | null | null | null | null | null |
Complexity Matters: Rethinking the Latent Space for Generative Modeling | Accept (spotlight) | Summary: This work investigates what constitutes a good latent space for generative models, and proposes a new training paradigm for generative models – DAE. Simply put, with DAE generative models are trained as Autoencoder in two stages. First, a relatively weak decoder is employed, whose purpose is to aid the encoder... | Rebuttal 1:
Rebuttal: Thank you for the valuable comments and questions. Below we address them separately:
### 1. "The proposed method, DAE, could be introduced as an empirically-supported design, while some of the mathematical formulation could be described to serve as intuition ... In my opinion, most readers would... | Summary: This paper presents an approach with theoretical analysis to explore a more suitable latent distribution for generation. For this purpose, this paper proposes a novel distance between the latent and target distributions and tries to minimize it to obtain the optimal data-dependent latent distribution. In pract... | Rebuttal 1:
Rebuttal: Thanks for your time and effort in reviewing our work. There might be some misunderstanding and we have summarized and clarified the contributions of this work in the overall response. Hopefully, it can address some of your concerns.
It is worth noting that although our motivation involves vanill... | Summary: The paper first proposes a new framework to analyze latent spaces in the context of generative models.
This framework takes inspiration from prior results on GANs, which interpreted the min-max training objective as minimizing a distance between distributions, to define a similar dis... | Rebuttal 1:
Rebuttal: Thanks for the valuable comments. Below we address them separately:
### 1. "While the fresh view on latent codes is interesting, it doesn't provide any theoretical guarantees."
> Thanks for the question.
You are correct that the main idea of our DAE approach is to balance the encoder and the dec... | Summary: This paper proposes an asymmetric training scheme for auto-encoders that double as image generator. Based on analytical insights that the decoder should have less capacity than the encoder for the encoder to capture correctly the data distribution, they propose a first training cycle where a strong encoder and... | Rebuttal 1:
Rebuttal: Thanks for the valuable comments and questions. Below we address them separately:
### 1. "... most of the paper is about the analytical part ... lacks a good structure."
> Thanks for the feedback.
As is acknowledged, the analytical part mainly serves illustrative purposes and the proposed DAE ... | Rebuttal 1:
Rebuttal: # Overall Response:
We thank all the reviewers for their time and efforts in reviewing our work. Before we address the questions and concerns of each reviewer, we would like to provide a summary of our work.
Our work aims to **characterize the ideal/optimal low-dimensional latent distribution fo... | NeurIPS_2023_submissions_huggingface | 2,023 | null | null | null | null | null | null | null | null |
Finite Population Regression Adjustment and Non-asymptotic Guarantees for Treatment Effect Estimation | Accept (poster) | Summary: In this paper, the authors present regression-adjusted estimators for the average treatment effect under the Bernoulli design.
In particular, they show that by using leverage scores and a ridge regression adjustment, favorable finite-sample bounds on the (conditional) variance may be obtained.
This ... | Rebuttal 1:
Rebuttal: We thank the reviewer for their positive assessment of our work and our contributions to finite sample analysis of regression adjustment. We thank them also for their detailed suggestions, especially those on the presentation, which we will address in a revised version of the paper.
> The major w... | Summary: This paper focuses on estimation of individual and average treatment effects (ITE and ATE) with regression adjustment, which is combined with the method of ridge leverage score sampling in order to obtain the desirable variance bounds. The leading case is algorithm 1, which estimates ATE with leverage score sa... | Rebuttal 1:
Rebuttal: We thank the reviewer very much for their review and their questions/suggestions, which we address below.
> To my reading of the paper, the most important issue is that it is unclear whether the advances in this paper are substantial relative to Harshaw et al [16]. It seems that theoretically, th... | Summary: This paper explores the design and analysis of randomized experiments for treatment effect estimation, which is an important problem in causal inference. The goal of treatment effect estimation is to estimate the effect of a specific treatment on individual subjects or the average effect in the population, usi... | Rebuttal 1:
Rebuttal: We thank the reviewer for their in-depth review and valuable suggestions, especially those regarding improving the clarity of the writing and the paper's organization, which we will incorporate in a revised version.
> To improve clarity, the authors should clearly identify the specific problems they are f... | Summary: The paper addresses the problem of ATE and ITE estimation in the presence of covariates. In particular, the authors provide finite-sample variance bounds for regression adjustment method-based estimators and novel variants thereof. The core of the methodology is using leverage scores, a randomized numerical li... | Rebuttal 1:
Rebuttal: We thank the reviewer for their positive assessment of our technical contributions and for recognizing the novelty of applying leverage scores to regression adjustment. We agree with the reviewer that the paper can be made more readable and appreciate the reviewer’s suggestions towards doing so, which w... | Rebuttal 1:
Rebuttal: We would like to thank all reviewers for the positive assessment of our contributions to finite-population treatment effect estimation and its non-asymptotic analysis. We appreciate the comments on the presentation and organization of the paper. We believe these would make the paper stronger, and ... | NeurIPS_2023_submissions_huggingface | 2,023 | null | null | null | null | null | null | null | null |