title: string
paper_decision: string
review_1: string
rebuttals_1: string
review_2: string
rebuttals_2: string
review_3: string
rebuttals_3: string
review_4: string
rebuttals_4: string
global_rebuttals: string
dataset_source: string
conference_year: int64
review_5: string
rebuttals_5: string
review_6: string
rebuttals_6: string
review_7: string
rebuttals_7: string
review_8: string
rebuttals_8: string
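The schema above maps one row per paper; the columns review_5 through rebuttals_8 are null whenever a paper received fewer than five reviews. A minimal loading sketch, assuming the data is published as a Hugging Face dataset; the repo id below is a hypothetical placeholder.

```python
# Hedged sketch: iterate rows of this schema with the `datasets` library.
# "user/neurips-2024-reviews" is a hypothetical repo id, not a real dataset path.
from datasets import load_dataset

ds = load_dataset("user/neurips-2024-reviews", split="train")

for row in ds.select(range(3)):
    print(row["title"], "->", row["paper_decision"])
    # Collect the non-null reviews; review_5..review_8 are None for most papers.
    reviews = [row[f"review_{i}"] for i in range(1, 9) if row.get(f"review_{i}")]
    print(f"  {len(reviews)} reviews, year {row['conference_year']}")
```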
On the Ability of Developers' Training Data Preservation of Learnware
Accept (poster)
Summary: The authors theoretically analyze the properties of the learnware paradigm. In the learnware paradigm, a model developer can provide their trained models for other developers to use. To enable re-use, along with the model the developer provides a model specification that adequately represents the model's train...
Rebuttal 1: Rebuttal: Many thanks for the constructive reviews! --- Q1: I am a bit confused with regards to Theorem 3.4: Is Theorem 3.4 proven for the $\delta=0$ case or is it proven for a specific $\delta$ ? Intuitively, the overlap of a continuous distribution and a discrete distribution of synthetic data should be...
Summary: The paper presents the "Reduced Kernel Mean Embedding (RKME)" specification, which represents a model's capabilities while ideally preserving the privacy of the original training data. The paper provides a theoretical analysis and proves that the RKME specification can protect the training data against common ...
Rebuttal 1: Rebuttal: Many thanks for the constructive reviews! --- Q1: The paper focuses on theoretical proofs and lacks extensive empirical evidence to support the effectiveness of the RKME specification in real-world scenarios. A1: Thanks for the feedback! We are not entirely sure whether the "effectiveness of th...
Summary: The paper analyzes the data-preserving properties of Learnware, an interesting idea involving a marketplace of pretrained ML models. In Learnware, new inference tasks are matched to ML models capable of solving that task without any raw data being shared. Rather, the method leverages RKME to construct a small...
Rebuttal 1: Rebuttal: Many thanks for the constructive reviews! We provide detailed responses below, and hope the reviewer could reassess the significance of our results. We are looking forward to addressing any further question in the reviewer-author discussion period. --- Q1: Is the Learnware market currently opera...
null
null
Rebuttal 1: Rebuttal: We have conducted validation experiments to further illustrate the tradeoff between data privacy and search quality in our work. Below, we present the experimental setting and empirical results. All related figures can be found in the accompanying PDF. --- ### **Datasets** We use six real-world...
NeurIPS_2024_submissions_huggingface
2024
null
null
null
null
null
null
null
null
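The RKME specification discussed in this entry compresses a training set into a few weighted synthetic points whose kernel mean embedding approximates the empirical one. A minimal sketch of that reduction step; the RBF kernel and the random choice of reduced points (in place of the paper's optimized ones) are illustrative assumptions.

```python
# Hedged sketch of a reduced kernel mean embedding (RKME): compress a dataset
# into a few synthetic points z_j with weights beta_j whose weighted embedding
# approximates the empirical kernel mean embedding.
import numpy as np

def rbf(A, B, gamma=1.0):
    # Pairwise kernel matrix k(a, b) = exp(-gamma * ||a - b||^2).
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def rkme_weights(X, Z):
    # With reduced points Z fixed, the weights minimizing
    # || (1/n) sum_i k(x_i, .) - sum_j beta_j k(z_j, .) ||_H^2
    # solve the linear system K_zz beta = (1/n) K_zx 1.
    K_zz = rbf(Z, Z)
    K_zx = rbf(Z, X)
    return np.linalg.solve(K_zz + 1e-8 * np.eye(len(Z)), K_zx.mean(axis=1))

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 5))                    # raw training data (never shared)
Z = X[rng.choice(len(X), 10, replace=False)]     # stand-in for optimized points
print(rkme_weights(X, Z))                        # (Z, beta) forms the specification
```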
The Feature Speed Formula: a flexible approach to scale hyper-parameters of deep neural networks
Accept (poster)
Summary: The paper introduces the BFA - a novel quantity to predict and control feature learning in DNNs, as well as the feature speed formula which allows expressing the magnitude of feature updates after one GD step. The paper recovers key properties of known HP scalings, and also extends these results by introducing...
Rebuttal 1: Rebuttal: We thank the reviewer for his/her detailed and encouraging review! Here is our answer to the main weaknesses raised by the reviewer: - in this paper, we focus on introducing a theoretical methodology and do not introduce new (useful) HP scalings indeed. In current work we are tackling other asympt...
Summary: The paper presents a novel perspective on infinite width and depth feature learning networks. It introduces the backward-to-feature kernel (BFK) as a central quantity determining the evolution of the intermediate layer features. The paper shows that the movement of the hidden layer features can be exactly rela...
Rebuttal 1: Rebuttal: We thank the reviewer for his/her detailed and encouraging review. - yes the scalings for ResNets are precisely those predicted by Bordelon et al, and Yang et al, this is mentioned in the manuscript but we'll make this more visible (note that the first version of our work appeared in November 2023...
Summary: The authors propose a technical strategy for deriving neural net parameterizations that relies on controlling the angle between the activation gradient and the feature update. The authors derive various theoretical results about this quantity, including a formula for computing it, and some analyses in the cont...
Rebuttal 1: Rebuttal: Thank you for your detailed review. We have replied to your first 2 criticisms in the main rebuttal. Concerning the limitation to batch-size 1: this is an assumption made by all related works on feature learning. However, note that the "feature speed formula" applies to any architecture -- and in ...
Summary: This paper studies the feature learning speed of the layers of Neural Networks (NNs). Specifically, it proposes to measure it through the quantity *Backward-Feature Angle* (BFA), denoted by $\theta_l$ for a layer $l$. This quantity is directly related to the layer-wise decomposition of the Neural Tangent Kerne...
Rebuttal 1: Rebuttal: We have replied to this review in the main comment. Can you please specify what are the references [1,2,3,4] in your review? To the best of our knowledge our approach to derive HP scalings is new (the closest work being Jelassi et al), but we would appreciate precise pointers to related results. ...
Rebuttal 1: Rebuttal: We thank the reviewers for their time and their comments and we appreciate their encouraging remarks. We also disagree with a few comments which, we believe, result from a misunderstanding of the content of our paper, the state of the theory on feature learning, and perhaps from a disagreement on ...
NeurIPS_2024_submissions_huggingface
2024
null
null
null
null
null
null
null
null
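The backward-to-feature angle (BFA) in this entry compares the loss gradient at a hidden feature with the feature's actual movement after one gradient step. A toy measurement sketch with batch size 1, matching the setting discussed in the rebuttals; the architecture, step size, and sign convention are illustrative assumptions.

```python
# Hedged sketch of a backward-to-feature angle measurement: cosine between the
# loss gradient at a hidden feature and the feature's actual update after one
# SGD step (batch size 1).
import torch

torch.manual_seed(0)
net = torch.nn.Sequential(torch.nn.Linear(10, 64), torch.nn.Tanh(), torch.nn.Linear(64, 1))
x, y = torch.randn(1, 10), torch.randn(1, 1)

def hidden(model, inp):
    return model[1](model[0](inp))      # feature after the first nonlinearity

h = hidden(net, x)
h.retain_grad()
loss = (net[2](h) - y).pow(2).mean()
loss.backward()
backward_vec = h.grad.detach().clone()  # dL/dh before the step

with torch.no_grad():
    for p in net.parameters():
        p -= 0.1 * p.grad               # one SGD step
    delta_h = hidden(net, x) - h.detach()   # how the feature actually moved

cos = torch.nn.functional.cosine_similarity(
    -backward_vec.flatten(), delta_h.flatten(), dim=0)
print(f"cos(angle between -dL/dh and feature update): {cos.item():.3f}")
```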
Quantum Algorithms for Non-smooth Non-convex Optimization
Accept (poster)
Summary: This paper considers using quantum methods for stochastic optimization with zeroth-order queries. It looks like the main idea is that, using quantum methods, one can aggregate finite-difference calculations quickly and efficiently to arrive at approximate subgradients; this would usually be ...
Rebuttal 1: Rebuttal: We thank the reviewer for the positive and helpful comments. **To Weakness 1 (the presentation of Section 3)**: Thank you for this question. Hopefully, the following explanation will give you a better picture of the structure of Section 3. The main goal in Section 3 is to construct unbiased qua...
Summary: This paper investigates quantum algorithms for finding the $(\delta,\epsilon)$-Goldstein stationary point of a potentially nonconvex and nonsmooth objective function $f$. Utilizing quantum variance reduction techniques as outlined in [42], the authors have developed a zeroth-order quantum estimator for the gra...
Rebuttal 1: Rebuttal: We thank the reviewer for the positive and helpful comments. **To Weakness 1 (discussion on the technical novelty):** We appreciate the feedback. We highlight our technical novelty in the construction of the zeroth-order estimator and the design of quantum algorithms as follows: * In terms of the ze...
Summary: This paper studies quantum algorithm for non-smooth non-convex stochastic optimization with zeroth-order oracle. It introduces an effective quantum estimator that reduces the variance compared to classical zeroth-order estimators. Upon substituting this estimator into known zeroth-order non-smooth optimizers, ...
Rebuttal 1: Rebuttal: We thank the reviewer for the positive and helpful comments. **To Weakness 1**: We think such sub-optimal dependency on $d$ is reasonable due to the following reasons: 1. The sub-optimality on dimension $d$ is a common trade-off in quantum optimization. [49] proved that there are no quantum spee...
Summary: This paper introduces new quantum algorithms for non-smooth non-convex optimization problems. The authors propose a quantum gradient estimator for smoothed objectives and develop the Quantum Gradient-Free Method (QGFM) and its enhanced version, QGFM+, which achieve better query complexities than their classica...
Rebuttal 1: Rebuttal: We thank the reviewer for the positive and helpful comments. **To Weakness 1**: We think the assumption of having a quantum stochastic function value oracle is reasonable and not strong due to the following reasons: 1. It is very common to assume having a classical stochastic function value ora...
null
NeurIPS_2024_submissions_huggingface
2024
null
null
null
null
null
null
null
null
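For context on the oracle model in this entry, a sketch of the classical two-point zeroth-order estimator for the gradient of the randomized-smoothing surrogate $f_\delta(x) = \mathbb{E}_u f(x+\delta u)$; the quantum estimators above target the variance and query cost of exactly this kind of primitive. The test function, $\delta$, and sample counts are illustrative assumptions.

```python
# Hedged sketch of the classical two-point zeroth-order gradient estimator
# for the sphere-smoothed surrogate f_delta.
import numpy as np

def zo_gradient(f, x, delta=0.05, samples=64, seed=0):
    rng = np.random.default_rng(seed)
    d, g = len(x), np.zeros(len(x))
    for _ in range(samples):
        u = rng.normal(size=d)
        u /= np.linalg.norm(u)      # uniform direction on the unit sphere
        g += d * (f(x + delta * u) - f(x - delta * u)) / (2 * delta) * u
    return g / samples              # unbiased for the gradient of f_delta

f = lambda z: np.abs(z).sum() + 0.1 * np.linalg.norm(z) ** 2   # non-smooth test fn
print(zo_gradient(f, np.ones(5)))
```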
Corruption-Robust Linear Bandits: Minimax Optimality and Gap-Dependent Misspecification
Accept (poster)
Summary: This paper studies corruption-robust linear bandit optimization and characterizes the regret bound in terms of both weak and strong corruption measures. Under the stochastic setting, this paper proposes a phased elimination algorithm, and the regret bounds match the lower bound. Under the adversarial setting, ...
Rebuttal 1: Rebuttal: We thank the reviewer for the valuable feedback. The global response includes a summary of our paper and potential future work, which we will incorporate as a conclusion in the future version. Your questions are answered below. **Q1**: What is the computational cost of the proposed algorithms? ...
Summary: In this work, the authors characterize the problem of learning the presence of reward corruption in the linear bandit setting. They provide matching upper and lower bounds in the corrupted stochastic setting, and initiate the study on the corrupted adversarial setting, for which they obtain optimal scaling in ...
Rebuttal 1: Rebuttal: We thank the reviewer for the valuable feedback. The global response includes potential future works, which we will incorporate into the future version. Your questions are answered below. **Q1**: The algorithms are not seriously different from the previous works as they mentioned **A**: Althoug...
Summary: This paper studies corrupted linear bandits. The authors propose four different metrics to evaluate the total corruption in Eq. (1). Many settings are considered in this paper. For stochastic LB, the proposed algorithm achieves a regret bound of $d\sqrt{T}+\sqrt{d} C_{\infty}$. For adversarial LB, the prop...
Rebuttal 1: Rebuttal: We thank the reviewer for the valuable feedback. Your questions are answered below. **Q1**: The strong adversary (AA) seems not equivalent to the CM viewpoint. **A**: The equivalence between the "strong adversary" in the AA viewpoint and the "strong measure" in the CM viewpoint is based on the ...
null
null
Rebuttal 1: Rebuttal: ## Global Response: We thank all reviewers for their time and valuable feedback. As suggested, we summarize our paper here together with possible future directions. We will incorporate them into our future versions. Our paper contributes to three research lines. 1. For stochastic linear bandits ...
NeurIPS_2024_submissions_huggingface
2024
null
null
null
null
null
null
null
null
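The stochastic-setting algorithm in this entry is a phased elimination scheme. Below is a heavily simplified, uncorrupted skeleton of phased elimination for linear bandits; the uniform per-arm exploration (instead of an optimal design), the confidence widths, and the phase schedule are simplified assumptions, and corruption-robust variants add robust estimation on top of this loop.

```python
# Hedged sketch of the phased-elimination skeleton for linear bandits.
import numpy as np

rng = np.random.default_rng(1)
arms = rng.normal(size=(20, 3))
theta = np.array([1.0, -0.5, 0.2])      # unknown parameter (used only to simulate)
active = list(range(len(arms)))

for phase in range(1, 8):
    n = 2 ** phase                      # pulls per active arm this phase
    V, b = 1e-3 * np.eye(3), np.zeros(3)
    for i in active:
        x = arms[i]
        rewards = x @ theta + rng.normal(scale=0.1, size=n)
        V += n * np.outer(x, x)
        b += x * rewards.sum()
    theta_hat = np.linalg.solve(V, b)   # least-squares estimate from this phase
    est = arms[active] @ theta_hat
    width = np.array([np.sqrt(a @ np.linalg.solve(V, a)) for a in arms[active]])
    keep = est + 2 * width >= (est - 2 * width).max()   # drop provably bad arms
    active = [i for i, k in zip(active, keep) if k]

print("surviving arms:", active, "| best true arm:", int(np.argmax(arms @ theta)))
```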
Solving Inverse Problems via Diffusion Optimal Control
Accept (poster)
Summary: The paper addresses the limitations of existing diffusion-based inverse problem solvers, which typically frame signal recovery as a probabilistic sampling task. The authors propose a novel approach that redefines the generative process as a discrete optimal control task. Inspired by the iterative Linear Quadra...
Rebuttal 1: Rebuttal: We are grateful that the reviewer appreciates both the theoretical and empirical results of our work and has brought several insightful shortcomings to our attention. Below we respond to the reviewer's concerns on a point-by-point basis. **Mathematical formulations are com...
Summary: This paper proposes diffusion optimal control that solves inverse problems via posterior sampling by combining the power of a pre-trained unconditional diffusion model and the iterative Linear Quadratic Regulator algorithm to produce optimal controls that steer the reverse diffusion process to correctly recove...
Rebuttal 1: Rebuttal: We greatly appreciate the reviewer’s positive assessment of our theoretical contributions, their in-depth reading of our manuscript, and their suggested improvements. We respond to comments in detail below. **The runtime of Algorithm 1 seems high.** Without taking any approximations of the He...
Summary: This paper proposes a new approach to conditional generation tasks through score-based diffusion models, with a focus on inverse problems. As an alternative to using the likelihood $p(y | x_t)$ to guide the time-reversed SDE towards the posterior distribution, the authors reformulate this as an optimal contr...
Rebuttal 1: Rebuttal: We thank the reviewer for their positive assessment of our work, and their insightful critique. Below we provide a point-by-point response to the reviewer's discussion points. **Main weakness is the computational cost of the algorithm, e.g., computing Hessians. Approximation error is not well und...
Summary: The paper uses the optimal control theory to solve the diffusion posterior sampling problem by iterative Linear Quadratic Regulator (iLQR) algorithm. The method could be utilized to solve both linear and nonlinear inverse problems. Experiments on MNIST and FFHQ demonstrate the outperformance of the proposed m...
Rebuttal 1: Rebuttal: We thank the reviewer for their appreciation of the optimal control perspective, and for their insightful discussion and bringing many recent works to our attention. We respond to comments in detail, on a point-by-point basis below. **The method is well-backed but might be computationally exhaust...
Rebuttal 1: Rebuttal: We thank all reviewers for their thoroughness and diligence in reading our manuscript. We received a lot of sound, constructive criticism and positive feedback. This guided our revisions and further experiments in this rebuttal period, and we believe that the paper has meaningfully improved as a r...
NeurIPS_2024_submissions_huggingface
2024
Summary: This paper tackles inverse problems from the perspective of optimal control. By treating the diffusion process (ODE) as a non-linear dynamical system and the extra guidance term as a control signal, the authors manage to optimize the diffusion trajectory via the iterative Linear Quadratic Regulator (iLQR) algorithm....
Rebuttal 1: Rebuttal: We would like to express our gratitude that the reviewer appreciates both the theoretical and empirical results of our work and has brought several insightful shortcomings to our attention. Below, we address the reviewer's concerns on a point-by-point basis. **High computational cost...
Summary: The paper uses tools from optimal control to introduce a novel approach for solving inverse problems with diffusion models. The authors propose reframing the generative process of diffusion models as a discrete optimal control problem allowing to leverage the iterative Linear Quadratic Regulator (iLQR) algorit...
Rebuttal 1: Rebuttal: We thank the reviewer for their helpful discussion and insightful critique. We hope to clarify some of the points of the paper, and alleviate the reviewer's concerns below in a point-by-point response. **Only a single image is provided in Figs. 3 and 6.** In the rebuttal PDF Figures 9 an...
null
null
null
null
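The iLQR solver referenced throughout this entry repeatedly solves a linear-quadratic control problem around the current trajectory. A self-contained sketch of that inner solve, the finite-horizon discrete LQR backward Riccati recursion, on generic linear dynamics; the diffusion-model linearization and its specific costs are not reproduced here.

```python
# Hedged sketch of the finite-horizon discrete LQR backward pass that iLQR
# solves repeatedly; dynamics and costs are generic stand-ins by assumption.
import numpy as np

np.random.seed(0)
T, n, m = 20, 4, 2
A = np.eye(n) + 0.01 * np.random.randn(n, n)
B = 0.1 * np.random.randn(n, m)
Q, R, Qf = np.eye(n), 0.1 * np.eye(m), 10 * np.eye(n)

P, gains = Qf, []
for _ in range(T):                       # backward Riccati recursion
    K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)   # optimal u_t = -K x_t
    P = Q + A.T @ P @ (A - B @ K)
    gains.append(K)
gains = gains[::-1]                      # time-ordered gains K_0 ... K_{T-1}

x = np.random.randn(n)                   # forward rollout under the policy
for K in gains:
    x = A @ x - B @ (K @ x)
print("terminal state norm:", np.linalg.norm(x))
```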
Human Expertise in Algorithmic Prediction
Accept (oral)
Summary: This paper introduces a new framework for algorithmic predictions. The paper asks and answers the question "how can we incorporate human input, which may not even be captured in the training data, into the prediction algorithm?" The authors develop a method that first runs the predictor, and then runs a second...
Rebuttal 1: Rebuttal: **In response to:** *The paper would be even more satisfying if the method is presented as a framework rather than a specific instantiation...* And: *My main comment is that the authors should comment more about the future work and implications of this method.* Thank you for your feedback, we agr...
Summary: The paper proposes a framework to incorporate human expert knowledge in algorithmic predictions. Under this framework, the authors introduce a meta-algorithm that uses a training dataset including human expert predictions together with a multi calibrated partition of the data; a partition of the dataset into b...
Rebuttal 1: Rebuttal: **In response to:** *Since the theoretical results of section 6 complement the ones of section 4, it would be perhaps more natural to follow them, rather than placing them after the experimental evaluation, which appears a bit odd.* Thank you for your feedback --- we agree that this portion of th...
Summary: The paper first presents some theory for the modelling of how to identify when human judgements may offer a better diagnosis - through access to additional information - than machine predictions, despite the latter typically being more accurate. This is followed by exploring how to integrate the human input wi...
Rebuttal 1: Rebuttal: **In response to:** *...it would be helpful if the interpretation of...definitions (3.1, 3.2) went into a bit more detail for accessibility.* Thank you for your feedback. We agree that these definitions could use a bit more exposition; we will add more background and provide a concrete interpreta...
Summary: This paper introduces a framework for joint human-AI prediction, where human experts can augment AI predictions in particular ex ante identifiable subsets. Strengths: This paper makes a lot of interesting contributions. First, its scope is broad and important: it tackles the question of how and whether human ...
Rebuttal 1: Rebuttal: **In response to:** *How might you model decision makers with richer preferences than mean squared error?* Thank you for your feedback, we agree that we should address this possibility in more detail (particularly in the final discussion section). We provide our thoughts below, and will plan to u...
Rebuttal 1: Rebuttal: We are grateful to all four reviewers for their thoughtful and constructive feedback. Below we describe how we intend to incorporate this feedback into our manuscript and include responses to specific reviewer questions and concerns.
NeurIPS_2024_submissions_huggingface
2024
null
null
null
null
null
null
null
null
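The framework in this entry asks, within cells of a multicalibrated partition, whether human predictions carry signal the algorithm misses. A toy diagnostic in that spirit; the quantile partition of the model score (a stand-in for a multicalibrated partition) and the synthetic data are assumptions.

```python
# Hedged toy diagnostic: within partition cells, test whether human
# predictions correlate with what the model leaves unexplained.
import numpy as np

rng = np.random.default_rng(0)
n = 5000
model = rng.uniform(size=n)                       # model predictions
human = model + 0.3 * rng.normal(size=n)          # human predictions (dummy)
y = model + 0.2 * (human - model) + 0.1 * rng.normal(size=n)   # outcomes

cells = np.digitize(model, np.quantile(model, [0.25, 0.5, 0.75]))
for c in range(4):
    idx = cells == c
    resid = y[idx] - model[idx]                   # model's unexplained part
    signal = human[idx] - model[idx]
    r = np.corrcoef(signal, resid)[0, 1]
    print(f"cell {c}: corr(human - model, residual) = {r:.2f}")  # > 0: humans add info
```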
Alignment at Pre-training! Towards Native Alignment for Arabic LLMs
Accept (poster)
Summary: This paper proposed a new method for LLM alignment during pre-training. The proposed method is called "native alignment". This method includes three steps: pretrain data duplication, alignment rewriting, and model training. They trained a small-size alignment expert model for alignment rewriting and use the model t...
Rebuttal 1: Rebuttal: We sincerely appreciate your valuable comments. Below, we provide responses to your concerns. ## Weakness 1: Comparison between Native Alignment and Post-Alignment To meet your curiosity, we conducted an experiment **comparing *native alignment* and *post-alignment***. The results show that the ...
Summary: The paper introduces a method called "native alignment", which is a set of procedures to create data and train an LLM to rewrite raw text into "useful" texts for pretraining. They apply this technique specifically to Arabic LLMs and conduct experiments to show that this pre-processing of pre-training data help...
Rebuttal 1: Rebuttal: ## Weakness: Why the proposed data cleaning method (native alignment) is a kind of alignment. ### What is alignment? “*Alignment*” refers to the process of ensuring that LLMs act in accordance with user intentions. Models are considered *aligned* if they are *helpful, honest, and harmless* [1]. ...
Summary: This paper proposes a data augmentation pipeline which modifies the pre-training data for large language models in key aspects such as formatting, values, content moderation and knowledge preservation. The resulting pipeline, termed native alignment, is applied to Arabic LLMs due to the relatively small pretraini...
Rebuttal 1: Rebuttal: ## Question I: Scalability of Alignment LLM Expert The *alignment LLM expert* is fine-tuned on pre-trained LLMs (Qwen-1.5-4B-Chat). To ensure the rewriting quality of the trained alignment LLM expert, we randomly sampled 50 data points from the pre-training corpus and processed them through the a...
Summary: This paper focuses on alignment of LLMs to human preferences and suggests shifting the alignment step from instruction-tuning (post-alignment) to the earlier stage of continued pre-training (native alignment). To that end, it proposes an approach to creating aligned pre-training data, consisting of three steps...
Rebuttal 1: Rebuttal: ## Weakness 1: Comparison between native alignment and post-alignment We have added an experiment showing the comparison between native alignment and post-alignment; please check ‘**Author Rebuttal - Additional Experiment I**’ for more details. ## Weakness 2: Typo Issues: Thank you for your cor...
Rebuttal 1: Rebuttal: ## Clarification of Open Source We have made the following resources publicly available from our research: 1. **English and Arabic Seed Rewriting Data**: Annotated pairs generated by GPT-4. 2. **Native-Aligned Arabic Language Base Models**: *LLaMA3-Tamed-8B* and *LLaMA3-Tamed-70B*. 3. **Chat Ver...
NeurIPS_2024_submissions_huggingface
2024
null
null
null
null
null
null
null
null
Automated Multi-level Preference for MLLMs
Accept (poster)
Summary: This paper presents the Automated Multi-level Preference (AMP) framework for improving MLLMs by addressing hallucination issues. The framework introduces a multi-level preference system for RLHF, aiming to enhance the learning process by providing more granular feedback. Strengths: - The introduction of multi...
Rebuttal 1: Rebuttal: **Q1.** The contribution of the paper heavily relies on the preference fine-tuning algorithm, showing limited innovation beyond this aspect. **A1.** In this paper, we introduce the Automated Multi-level Preference (AMP) framework, which involves generating high-quality multi-level preference data...
Summary: In this paper, the authors develop an automated dataset generation pipeline capable of producing multi-level preference datasets without the need for human annotators. This paper introduces a novel multi-round dialogues hallucination benchmark, MRHal-Bench. Additionally, the authors design the Multi-level Dire...
Rebuttal 1: Rebuttal: **Q1.** It is recommended to provide more quantitative information on the preference dataset generated by the automated dataset generation pipeline. For instance, the authors could use a subset of the dataset to demonstrate the similarity results compared to human annotators. **A1.** Thanks for r...
Summary: This work aims to mitigate hallucinations in Multimodal Large Language Models through preference optimization. Motivated by two limitations of binary preferences widely used in existing work, authors proposed a multi-level preference framework. The framework consists of 1) an automated dataset generation pipel...
Rebuttal 1: Rebuttal: **Q1.** Lack intrinsic evaluation of the AMP dataset. **A1.** Thanks for your advice. We provide more evaluation for the AMP dataset and the auto-check mechanism. **Evaluation of the AMP dataset.** We estimate the inconsistency rate of our AMP dataset to be 2.25% (by manual evaluation on 2k rand...
null
null
Rebuttal 1: Rebuttal: We would like to express our sincere gratitude to the reviewers for their valuable comments. We are encouraged that they found our method "**novel**" (Reviewer 2sCy), that our method "**provides a broader range of comparisons**" (Reviewer 2sCy, Reviewer iDRq), and that our automated pipeline is a "**significant c...
NeurIPS_2024_submissions_huggingface
2024
null
null
null
null
null
null
null
null
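A minimal sketch of a multi-level (listwise) preference objective of the kind this entry describes: responses are ordered best-to-worst and every ordered pair contributes a DPO-style term. The dummy log-probability ratios and the temperature `beta` are illustrative assumptions, not necessarily the paper's exact objective.

```python
# Hedged sketch of a multi-level preference loss over an ordered response list.
import torch
import torch.nn.functional as F

beta = 0.1
# log pi_theta(y|x) - log pi_ref(y|x) for four responses, ordered best -> worst
log_ratios = torch.tensor([1.2, 0.5, -0.1, -0.9], requires_grad=True)

loss, m = 0.0, len(log_ratios)
for i in range(m):
    for j in range(i + 1, m):           # response i is preferred over response j
        loss = loss - F.logsigmoid(beta * (log_ratios[i] - log_ratios[j]))
loss = loss / (m * (m - 1) / 2)         # average over all ordered pairs
loss.backward()
print(loss.item(), log_ratios.grad)
```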
KFNN: K-Free Nearest Neighbor For Crowdsourcing
Accept (poster)
Summary: This paper proposes a novel algorithm, KFNN (K-free Nearest Neighbor), which is specifically designed to enhance label integration for crowdsourcing. KFNN integrates two key components named label distribution enhancement and K-free optimization, which significantly contribute to improving the effectiveness an...
Rebuttal 1: Rebuttal: **Reviewer 4Kyj:** **Q1:** While the paper provides strong theoretical and experimental results, there is limited discussion on the computational efficiency and scalability of the proposed KFNN algorithm. I suggest moving the algorithmic flow and time complexity analysis from Appendix A to the ma...
Summary: The paper presents a novel label integration algorithm, KFNN (K-Free Nearest Neighbor), designed to enhance the performance of crowdsourcing platforms by intelligently determining the optimal neighborhood size for each instance based on its attributes and noisy labels. The authors propose a two-component solut...
Rebuttal 1: Rebuttal: **Reviewer B2kv:** **Q1:** Simulation experiment results The symbol • indicates that the algorithm in the row significantly outperforms the algorithm in the corresponding column. How is "significantly outperforms" defined for Macro-F1 score and integration accuracy? **Author Response:** Thanks f...
Summary: This paper proposes a novel label integration approach KFNN by adaptively determining the optimal neighborhood size. KFNN utilizes a Mahalanobis distance distribution to model the relationship between each instance and all classes. The authors also provide adequate theoretical analysis to illustrate the effect...
Rebuttal 1: Rebuttal: **Reviewer qVhb:** **Q1:** In section 2, the authors introduce two categories of label integration algorithms. And the proposed KFNN belongs to the algorithms which leverage neighbor instance. I suggest adding some discussion about the pros and cons of these two categories of approaches. **Autho...
Summary: This paper introduces a new algorithm for label integration called KFNN. Existing methods related to KNN produce more noisy labels; however, they fix the neighborhood size, regardless of the fact that instances close to the center of classes should have more neighbors than instances close to the boundary of cl...
Rebuttal 1: Rebuttal: **Reviewer MQSy:** Thanks a lot for your comments. Please find our detailed responses to your seven questions as follows. **Q1:** First, our research focuses on label integration in crowdsourcing, which differs from other research domains such as noisy label learning (NLL). Crowdsourcing typical...
null
NeurIPS_2024_submissions_huggingface
2024
null
null
null
null
null
null
null
null
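KFNN models how central an instance is via a Mahalanobis-distance distribution over classes. A minimal sketch of that ingredient, computing an instance's Mahalanobis distance to each class centroid under the class covariance; the covariance regularization and synthetic data are illustrative assumptions.

```python
# Hedged sketch: Mahalanobis distance of an instance to each class, a signal a
# KFNN-style method can use to judge centrality before sizing the neighborhood.
import numpy as np

def mahalanobis_to_classes(x, X, labels):
    dists = {}
    for c in np.unique(labels):
        Xc = X[labels == c]
        mu = Xc.mean(axis=0)
        cov = np.cov(Xc, rowvar=False) + 1e-6 * np.eye(X.shape[1])
        diff = x - mu
        dists[c] = float(np.sqrt(diff @ np.linalg.solve(cov, diff)))
    return dists

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 1, (50, 3)), rng.normal(3, 1, (50, 3))])
labels = np.array([0] * 50 + [1] * 50)
print(mahalanobis_to_classes(X[0], X, labels))  # smaller distance -> more central
```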
An Analysis of Tokenization: Transformers under Markov Data
Accept (spotlight)
Summary: This paper presents a study on tokenization by investigating the behavior of transformers on simple data-generating processes. It shows that, in the absence of any tokenization, transformers trained on $k$th-order Markov processes predict characters according to a unigram model, which is quite problematic giv...
Rebuttal 1: Rebuttal: We thank the reviewer for the feedback and questions. Below we addressed the key points mentioned. ### **[W1] Analysis for commonly-used tokenizers such as BPE** The guarantees in section 3.2 study guarantees for the LZW tokenizer, which is arguably not used much in practice. However, in Section...
Summary: The authors show that tokenization is a fundamental property of transformer-based models, in the sense that without it, it is hard (if not impossible) to achieve low cross-entropy loss on next-word prediction. They show that tokenization helps break the unigram barrier (i.e., the best loss a unigram model c...
Rebuttal 1: Rebuttal: We thank the reviewer for the feedback. Below we address the weaknesses/questions pointed out: ### **[W1] Fig. 5 mitigates the impact of the theory** On datasets like Wikitext-103, transformers trained with these tokenizers (BPE, Unigram, Wordpiece) indeed perform similarly in ablations. The pur...
Summary: This paper offers theoretical insights into the importance of tokenization in language models. Tokenization is ostensibly the artifact that makes training LMs not an end-to-end procedure. This design choice introduces biases, as it is not optimized for exactly the same criterion as the full model. Yet training...
Rebuttal 1: Rebuttal: We thank the reviewer for the feedback. Below we address the main questions and weaknesses pointed out. *(references cited in the common rebuttal)* ### **[W1] How well do $k$-th order Markov processes extrapolate to linguistic distributions?** There are two points to mention here, 1. $k$-th ord...
Summary: This paper investigates the learning dynamics of unigram language models on top of tokenised vs non-tokenised data, comparing these models’ expected cross-entropy to the distribution’s entropy. The paper performs this analysis while considering different data generating distributions (mainly focusing on relati...
Rebuttal 1: Rebuttal: We thank the reviewer for the constructive criticism. (*references cited in the common rebuttal*) **TLDR;** We are happy to see how to change the wording of the title + general rhetoric of the paper in a way which fit the contributions in the paper best. *Our proposed changes incorporating the re...
Rebuttal 1: Rebuttal: ## **Common rebuttal** We thank all the reviewers for taking the time to go through our paper and suggest constructive criticism. Please find attached a pdf containing additional plots to aid in answering reviewers' questions. We begin with the suggested changes to the paper, and then address som...
NeurIPS_2024_submissions_huggingface
2024
null
null
null
null
null
null
null
null
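The "unigram barrier" in this entry is concrete on a first-order binary Markov source: a unigram predictor's best cross-entropy is the entropy of the stationary marginal, while context (or suitable tokens) achieves the lower entropy rate. A worked check with assumed switch probabilities.

```python
# Worked check of the unigram barrier on a two-state Markov chain; the switch
# probabilities below are assumed for illustration.
import numpy as np

p, q = 0.1, 0.2                 # P(next=1 | cur=0) = p, P(next=0 | cur=1) = q
pi1 = p / (p + q)               # stationary probability of symbol 1

def H(x):                       # binary entropy in bits
    return -x * np.log2(x) - (1 - x) * np.log2(1 - x)

unigram_loss = H(pi1)                          # best any unigram model can do
entropy_rate = (1 - pi1) * H(p) + pi1 * H(q)   # achievable with context (or tokens)
print(f"unigram: {unigram_loss:.3f} bits/char  vs  entropy rate: {entropy_rate:.3f} bits/char")
```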
Reinforcement Learning with Lookahead Information
Accept (poster)
Summary: This paper introduces reinforcement learning (RL) problems where agents observe one-step lookahead information (either rewards or transitions) before choosing actions in episodic tabular MDPs. Two relevant lines of work exist: the control literature, which studies a similar lookahead concept in the continuous ...
Rebuttal 1: Rebuttal: We thank the reviewer for the positive feedback and refer to the general comment regarding experiments. In particular, MVP converges to the no-lookahead value, so there would be a linear difference between its performance and our algorithms, so the algorithms are not really comparable. While the ...
Summary: The authors proposed new forms of Bellman equations for environments where the agent knows the reward or transition outcomes one step ahead (without knowing the full model). Strengths: While previous papers (e.g., Boutilier et al. 2018) discussed utilizing lookahead information (and proved convergence), the a...
Rebuttal 1: Rebuttal: We thank the reviewer for the feedback and refer to the general response on simulations. * As we explained in the general response, we agree that evaluation is important and believe that the best way to do so is to adapt our work to a deep RL setting, but this is outside the scope of this paper....
Summary: This manuscript proposes an RL method with lookahead information. The authors discuss two scenarios: reward lookahead and transition lookahead. Under such scenarios, the proposed method estimates the reward distribution and transition distribution, respectively. Then the monotonic value propagation skill is a...
Rebuttal 1: Rebuttal: We thank the reviewer for the response and refer to the general comment on empirical simulations. We apologize for any clarity issue and will make an effort to clarify the setting and the algorithm. Our paper studies an online setting where we repeatedly interact with an unknown environment in ep...
Summary: The paper considers the setting where the agent can see the possible next rewards and next states without assuming prior knowledge of the environment dynamics. The predicted next rewards and next states are estimated by the empirical distribution. The paper considers extending Monotonic Value Propagation to such...
Rebuttal 1: Rebuttal: We thank the reviewer for the positive comments and refer to the general response for a detailed discussion on simulations in our paper. We also provide an additional example of the advantage of lookahead information in the response to reviewer MArZ, and discuss how the distribution affects the lo...
Rebuttal 1: Rebuttal: ## Experiments Some of the reviewers expressed concern due to the lack of experiments in the paper. While conducting experiments is always interesting, our paper theoretically studies a new setting for which there are no existing algorithms with theoretical regret guarantees. Thus, when compari...
NeurIPS_2024_submissions_huggingface
2024
Summary: The paper studies an RL problem with a special setting, called one-step lookahead, where the agent can observe the reward or the state at the next step before the current action is taken. The paper focuses on the problem with an unknown environment (transition function). The authors proposed an efficient algor...
Rebuttal 1: Rebuttal: We thank the reviewer for the feedback. # Weaknesses 1. There are numerous applications where exact or approximate lookahead information is present: * Transactions/Market interaction - whenever the agent performs transactions, the traded items and their prices are mostly observed before the trade t...
null
null
null
null
null
null
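The value of one-step reward lookahead discussed in this entry shows up already in a one-step example: observing realized rewards before acting yields $\mathbb{E}[\max_a r_a]$ instead of $\max_a \mathbb{E}[r_a]$, and the gap need not vanish, which is why the rebuttals note a linear difference against no-lookahead baselines. A sketch with assumed Gaussian rewards.

```python
# Hedged toy illustration of the one-step reward-lookahead advantage.
import numpy as np

rng = np.random.default_rng(0)
n_actions, trials = 3, 100_000
means = np.full(n_actions, 0.5)

no_lookahead = means.max()                              # max_a E[r_a]
samples = rng.normal(means, 1.0, size=(trials, n_actions))
lookahead = samples.max(axis=1).mean()                  # estimate of E[max_a r_a]
print(f"per-step value: {no_lookahead:.3f} without vs {lookahead:.3f} with lookahead")
```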
Advancing Training Efficiency of Deep Spiking Neural Networks through Rate-based Backpropagation
Accept (poster)
Summary: Recent research indicates that rate-coding is crucial for information representation in deep Spiking Neural Networks (SNNs) trained via Backpropagation Through Time (BPTT). Building on this insight, a new training strategy called rate-based backpropagation has been developed to leverage rate-based representati...
Rebuttal 1: Rebuttal: #### **W1: Some important details (such as the top-level algorithm of the proposed rate-based backpropagation method and details of the experimental setup) are reported in the appendix, while, due to their importance, they should be moved to the main manuscript.** Thank you for your suggestion. We ...
Summary: This paper presents a novel rate-based backpropagation method for spiking neural network (SNNS) training, which effectively separates the time-dependent backpropagation (BPTT) process and thus reduces computational and memory costs. The method employs a rate-encoded approximation to capture the basic informati...
Rebuttal 1: Rebuttal: #### **W1: In lines 53-55, this paper mentions that the proposed method reduces training time, but there is no relevant experimental proof in the experiments section.** Thank you for your suggestion. We have added more experiments on training costs to strengthen the experimental proof of training ...
Summary: This work falls into the category of efficient SNN training methods. This paper proposes a reduced computational graph to reduce the memory and computational demands of SNNs training. This work has the potential to train SNNs on resource-limited devices. The paper evaluates the methods on CIFAR-10, CIFAR-100, ...
Rebuttal 1: Rebuttal: #### **W1: Not a clear comparison of the differences with existing e-prop methods in terms of methodology.** Thank you for your comments. In our paper, we compare our method with online-learning methods akin to e-prop [1], such as OTTT [2,3,4], SLTT [3], and OS [4], with results shown in Table 1. Descriptions of...
Summary: This paper proposes a rate-based SNN training method, which can effectively reduce memory and time cost during training. They proved the efficiency of the rate-based back-propagation training and demonstrate that the rate-based training outperforms other back-propagation methods. Strengths: The rate-based met...
Rebuttal 1: Rebuttal: #### **W1: The novelty is weak. There are two previous works that share a similar idea with this paper, since they all use rate-based backpropagation [1,2]. The author needs to briefly explain the differences between these papers.** Thank you for your comments, which have prompted further clarifica...
Rebuttal 1: Rebuttal: Thank you to all the reviewers for your constructive comments and suggestions. We have addressed all the weaknesses and questions raised by the reviewers; detailed responses can be found in the corresponding sections of the rebuttal for each reviewer. In response to the reviewers' feedback, we hav...
NeurIPS_2024_submissions_huggingface
2024
null
null
null
null
null
null
null
null
Efficient LLM Scheduling by Learning to Rank
Accept (poster)
Summary: This paper proposes a learning-based rank predictor for scheduling LLM inference to reduce Head-of-Line (HoL) blocking issues, which significantly outperforms state-of-the-art LLM serving systems. Strengths: 1. This paper addresses an important question in LLM serving. 2. This paper is easy to follow with a g...
Rebuttal 1: Rebuttal: We thank the reviewer for the constructive and detailed suggestions. We appreciate your generous comments! **Q1: One potential issue with preemptive scheduling for LLM inference is the accumulated unused KV cache. How do you handle them when the GPU reaches the maximum memory limit?** To address...
Summary: This paper proposes an approach for optimizing scheduling in LLM serving by learning a generated-token-length ranking model. The authors demonstrate that understanding the relative order of generation lengths can effectively guide the scheduling process, specifically through the use of SJF/SRTF scheduling st...
Rebuttal 1: Rebuttal: We thank the reviewer for the very insightful and helpful comments! We would like to address your questions in the below response. **Q1: While the approach is effective, it builds upon existing work that has already identified the benefits of SJF/SRTF scheduling for LLMs[1][2]. The novelty is som...
Summary: This paper reveals the Head-of-Line (HOL) blocking problems caused by the first-come-first-serve (FCFS) scheduling strategy in LLM services. To alleviate these problems, the authors train an OPT model to generate scores for evaluating the relative text length of given prompts. Based on these scores, the author...
Rebuttal 1: Rebuttal: We thank the reviewer for the insightful and helpful feedback! We address all your questions and concerns below. **Q1: Since the request queue Q is re-ranked after each batch of data is scored, the ranking scheduler may be sensitive to the batch size.** **A1:** The ranking scheduler is not sensi...
Summary: The paper addresses the inefficiencies in scheduling LLM inference requests, which often use a first-come-first-serve (FCFS) strategy, leading to Head-Of-Line (HOL) blocking and reduced throughput. The authors propose a novel scheduling method based on predicting the relative ranks of output lengths in a batch...
Rebuttal 1: Rebuttal: We thank the reviewer for the insightful and helpful feedback! We would like to address your questions in the below response. **Q1.1: The current scheduling approach only considers output length. Would you also consider other dimensions, such as prompt length? Longer prompt lengths can consume mo...
null
NeurIPS_2024_submissions_huggingface
2024
null
null
null
null
null
null
null
null
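A minimal sketch of the rank-based scheduling idea in this entry: order waiting requests by a predicted output-length rank rather than arrival time, which curbs head-of-line blocking. The prompt-length "predictor" and the tie-breaking rule are stand-in assumptions for the learned ranking model.

```python
# Hedged sketch of SRTF-style scheduling driven by predicted length ranks.
import heapq

def predicted_rank(prompt: str) -> float:
    return float(len(prompt))        # stand-in for the learned length-rank model

class Scheduler:
    def __init__(self):
        self._heap, self._seq = [], 0

    def submit(self, prompt: str):
        # Lower predicted rank (shorter expected output) is served first;
        # the sequence number breaks ties FCFS-style.
        heapq.heappush(self._heap, (predicted_rank(prompt), self._seq, prompt))
        self._seq += 1

    def next_batch(self, k: int):
        return [heapq.heappop(self._heap)[2] for _ in range(min(k, len(self._heap)))]

s = Scheduler()
for p in ["long " * 50, "hi", "summarize this paragraph"]:
    s.submit(p)
print(s.next_batch(2))   # short jobs jump ahead of the long one
```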
Learning Infinitesimal Generators of Continuous Symmetries from Data
Accept (poster)
Summary: This paper proposes using neural ODEs to parameterize symmetries by viewing the ODE's flow as an element of a one-parameter group. They show that by learning the parameters of the neural ODEs, they are able to recover ground-truth symmetries in image classification and PDE tasks. Strengths: The paper is easy t...
Rebuttal 1: Rebuttal: Thank you for your feedback. **Q1. Necessity of phrase in line 174.** Yes. For all $f \in \mathcal{D}$ and for all $s \in [-\sigma, \sigma]$. **Q2. Clarification in Section 4.1.** To be more rigorous, our formulation leaves us a set of constraints $S(\vartheta^*_s,f) <C$ for all $f \in \mathcal{...
Summary: The paper pertains to the topic of data-driven symmetry discovery. The authors propose a method allowing symmetry discovery beyond pre-defined Lie groups, by learning to transform datapoints, potentially in a non-affine manner, via a learned ODE (referred to as the *one-parameter group*, where the single param...
Rebuttal 1: Rebuttal: Thank you for your feedback. **Weakness 1, 2. Reliance on the validity score.** We see the requirement of a validity score as a trade-off for not requiring the predefined set of symmetry generators, and searching for symmetries across the entire class of continuous transformations, which increase...
Summary: This paper proposes a symmetry learning algorithm based on transformations defined via infinitesimal generators. Using Neural ODE, an infinitesimal generator is learned that is capable of producing a sequence of transformed data through ODE integration. Validity score has been defined to check if the transform...
Rebuttal 1: Rebuttal: Thank you for your feedback. **Question 1. Validity score in equivariant task.** We take a two-step approach: first, we learn symmetries based on validity scores, and second, we use these learned symmetries as augmentation and solve machine learning tasks. The validity score is designed to measur...
null
null
Rebuttal 1: Rebuttal: We thank the reviewers for their insightful comments and suggestions. We are pleased that all the reviewers agree on the importance of symmetry discovery and find our research novel. Below, we address some commonly raised questions and present additional experimental results. # Comparison with ba...
NeurIPS_2024_submissions_huggingface
2024
null
null
null
null
null
null
null
null
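The method in this entry parameterizes a continuous symmetry as the flow of a learned vector field, so a scalar $s$ indexes a one-parameter group of transformations. A minimal sketch using explicit Euler integration; the dummy generator network and step counts are illustrative assumptions.

```python
# Hedged sketch: transform data along a learned infinitesimal generator by
# integrating dx/ds = g_theta(x); negative s runs the (approximate) inverse.
import torch

g = torch.nn.Sequential(torch.nn.Linear(2, 32), torch.nn.Tanh(), torch.nn.Linear(32, 2))

def flow(x, s, n_steps=50):
    # Euler approximation of the time-s flow of the ODE.
    h = s / n_steps
    for _ in range(n_steps):
        x = x + h * g(x)
    return x

with torch.no_grad():
    x = torch.randn(128, 2)
    x_aug = flow(x, s=0.5)           # transformed copy, usable as augmentation
    x_back = flow(x_aug, s=-0.5)     # approximate inverse transformation
    print((x - x_back).abs().max())  # small when the discretization is fine enough
```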
Scaling White-Box Transformers for Vision
Accept (poster)
Summary: This paper introduces CRATE-α, an enhanced variant of the CRATE (Coding RATE Transformer) architecture, designed to scale efficiently while maintaining mathematical interpretability. The authors address the open question of CRATE's scalability by proposing strategic modifications to the sparse coding block and...
Rebuttal 1: Rebuttal: Thank you for your review. Below we attempt to resolve the questions you posed. >**Q1**: *Could the proposed architecture work well on other tasks like NLP?* **A1**: Thank you for your suggestions on new experiments on NLP. Please refer to our response to '**Q3: Performance of CRATE-α on NLP ta...
Summary: This paper explores how to train white-box Transformers at scale for visual tasks. The authors propose a new model architecture called CRATE-$\alpha$, which extends the sparse coding block of the original CRATE model. A series of CRATE-$\alpha$ models were trained with varying model sizes, data sizes, and patc...
Rebuttal 1: Rebuttal: Thank you for your review. Below we attempt to resolve the questions you posed. >**Q1**: *The paper is heavily symbolized, ... it severely hampers understanding of the paper's details ...* **A1**: Thank you for the paper presentation suggestion. We've added a new diagram to our rebuttal pdf (Fig...
Summary: This paper studies the scalability problem of white-box transformer CRATE and proposes CRATE-$\alpha$ to enhance the scaling ability of CRATE. To be specific, the authors propose three strategic but minimal modifications for the CRATE model architecture: Overparameterized sparse coding block, Decoupled diction...
Rebuttal 1: Rebuttal: Thank you for your review. Below we attempt to resolve the questions you posed. >**Q1**: *Performance gaps with vanilla ViT. As shown in Figure 1, CRATE-α still lags behind vanilla ViT across different scales remarkably which may limit its application in real scenarios. Besides, it is suggested t...
Summary: This paper aims to train CRATE at a large scale for vision tasks. The contribution includes an architecture modification to the sparse coding block and a light training recipe. The new model, called CRATE-alpha, shows large improvements compared with the previous CRATE model. The experiments also show promisin...
Rebuttal 1: Rebuttal: Thank you for your review. Below we attempt to resolve the questions you posed. >**Q1**: *The paper is highly centered on improving CRATE. Most of the findings might not be transferable to other models. This may limit its impact to the general audience in the NeurIPS community.* **A1**: We agree tha...
Rebuttal 1: Rebuttal: ### **Common response to all reviewers**: We thank all reviewers for their insightful feedback. We are especially encouraged by their recognition of: - The novelty and impact of our central ideas (` Reviewer YH3U `: “The paper presents a novel architecture, CRATE-α, …, enhancing scalability with...
NeurIPS_2024_submissions_huggingface
2024
null
null
null
null
null
null
null
null
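CRATE's sparse coding block unrolls proximal-gradient (ISTA-style) iterations. A minimal sketch of one such step on a generic dictionary, assuming a squared-error data term with an $\ell_1$ penalty; the dictionary, step size, and threshold are illustrative, not CRATE-α's exact parameterization.

```python
# Hedged sketch of ISTA-style sparse coding steps on a random dictionary.
import torch

def soft_threshold(z, lam):
    return torch.sign(z) * torch.clamp(z.abs() - lam, min=0.0)

def ista_step(z, x, D, eta=0.1, lam=0.05):
    # Gradient step on 0.5*||x - D z||^2, then the proximal operator of
    # lam*||z||_1, which induces sparsity.
    grad = D.t() @ (D @ z - x)
    return soft_threshold(z - eta * grad, eta * lam)

d, k = 64, 256
D = torch.nn.functional.normalize(torch.randn(d, k), dim=0)  # overcomplete dictionary
x, z = torch.randn(d), torch.zeros(k)
for _ in range(30):
    z = ista_step(z, x, D)
print(f"nonzeros: {(z != 0).sum().item()} / {k}")
```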
TSDS: Data Selection for Task-Specific Model Finetuning
Accept (poster)
Summary: This paper proposes a method for data selection in foundation model fine-tuning. The proposal contains a distribution alignment loss based on optimal transport to capture the discrepancy between the selected data and the target distribution, a regularizer to encourage the diversity of the selected data, and ke...
Rebuttal 1: Rebuttal: We believe that you have missed key points of our work and we would like to correct several factually incorrect points in the provided review. > W1 - novelty To the best of our knowledge, our work is the first one that presents a unified framework for task-specific data selection which considers...
Summary: This paper formulates data selection for task-specific fine-tuning as an optimization problem based on optimal transport for distribution alignment. It proposes two KNN-based implementation methods and evaluates them on datasets for task-specific instruction fine-tuning and domain-specific continued pretrainin...
Rebuttal 1: Rebuttal: Thank you for your feedback. We address the concerns and answer the questions below: >W1 - connection to optimal transport As mentioned in lines 38-44, we want the selected data to match the distribution of the representative data from the target distribution, and optimal transport is a powerful...
Summary: This paper presents a method for data selection for task-specific model finetuning. The method relies on a small, representative sample of data from the target task to select matching, relevant data from a corresponding corpus. The method relies on framing this task as an optimization problem, utilizing an opt...
Rebuttal 1: Rebuttal: We appreciate the feedback. Here is our response to the questions and concerns: >W1 - other LLMs Thank you for your suggestions. We chose llama-7b and mistral-7b since they achieved state-of-the-art performance in various tasks at the time of submission among the 7b-size models and are shown to ...
Summary: This paper proposes task-specific training data selection for language model fine-tuning. Given a (small) set of representative examples for a task and a large set $D$ of possible training examples, the proposed method uses (regularized) optimal transport to assign a probability distribution over $D$ that matc...
Rebuttal 1: Rebuttal: Thank you for the feedback. We address the concerns and clarify the questions as follows: >W1 - ablation study using embeddings We proposed a framework where the data can be embedded in any metric space with any distance function that supports efficient nearest neighbor search. Finding the best ...
Rebuttal 1: Rebuttal: Following the reviews, we would like to expand on our choice of using optimal transport as a means to solve the problem of task-specific selection. We use optimal transport to capture the discrepancy between the distribution we will sample from and the target distribution. We include probability t...
NeurIPS_2024_submissions_huggingface
2024
null
null
null
null
null
null
null
null
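The distribution-alignment loss in this entry is built on optimal transport between the candidate pool and a small representative target set. A minimal sketch that scores candidates by their transported mass under an entropic (Sinkhorn) plan; the random embeddings, squared-Euclidean cost, and selection rule are illustrative assumptions.

```python
# Hedged sketch: score candidates by entropic-OT mass sent to a target set.
import numpy as np

def sinkhorn(C, a, b, reg=0.1, iters=200):
    # Entropic OT: alternating scaling updates for the plan diag(u) K diag(v).
    K = np.exp(-C / reg)
    u = np.ones_like(a)
    for _ in range(iters):
        v = b / (K.T @ u)
        u = a / (K @ v)
    return u[:, None] * K * v[None, :]

rng = np.random.default_rng(0)
cand = rng.normal(size=(200, 16))     # candidate-pool embeddings (dummy)
target = rng.normal(size=(10, 16))    # small representative target set (dummy)
C = ((cand[:, None, :] - target[None, :, :]) ** 2).sum(-1)
C /= C.max()                          # normalize cost scale for numerical stability
P = sinkhorn(C, np.ones(200) / 200, np.ones(10) / 10)
scores = P.sum(axis=1)                # mass each candidate sends toward the target
selected = np.argsort(scores)[::-1][:50]   # keep the best-aligned candidates
print(selected[:10])
```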
Implicit Bias of Mirror Flow on Separable Data
Accept (poster)
Summary: In this paper, the authors study the implicit bias of the mirror descent algorithm from the perspective of the optimization trajectory of its continuous-flow version. They propose the concepts of horizon shape and horizon function $\phi_\infty$ to help characterize the properties of mirror flow at infinity. Si...
Rebuttal 1: Rebuttal: We thank the reviewer for their thorough report and relevant comments. We will take them into account and make the appropriate changes in the revised version. Please find below our response to the comments. **Weaknesses** **W1:** Indeed, our result has connections with Theorem 5 from Gunasekar e...
Summary: This paper examines the implicit bias of mirror flow (the continuous-time counterpart of mirror descent) in the context of binary classification for linearly separable data. Given that the problem has infinitely many solutions, obtained at infinity, the authors aim to identify which solution is achieved throug...
Rebuttal 1: Rebuttal: We thank the reviewer for their report and relevant comments. **Weaknesses** The comment we make on line 294 concerns the **training loss** convergence rate. We provide empirical evidence in Figure 1 (left), where we observe that the exact rate indeed depends on the potential. Also note that standard result...
Summary: This manuscript examines the implicit bias of mirror descent on a classification problem when the dataset is linearly separable. Assuming a coercive gradient, it demonstrates that the implicit bias is characterized by the shape of the level set of the mirror potential near infinity. Their analysis successfully...
Rebuttal 1: Rebuttal: We thank the reviewer for their positive report and relevant comments. It can be verified that the logistic loss indeed satisfies the exponential-tail condition of Assumption 1, since $\ln(1+\exp(-z))$ is equivalent to $\exp(-z)$ as $z \rightarrow + \infty$. --- Rebuttal Comment 1.1: Comment: I...
Summary: This paper considers the asymptotic behaviour of the mirror descent (MD) algorithm for a linear classification task. It is shown that the classifier (hyperplane orthogonal to $\beta$) will be a max-margin classifier, where the margin is determined by some unknown horizon function $\phi_\infty$. This work exten...
Rebuttal 1: Rebuttal: We thank the reviewer for their report and relevant comments. **Q1:** In Theorem 3, we provide a formula for computing the horizon function when the potential $\phi$ is **separable**. In that case, $\phi_\infty$ can be obtained by computing the limit at infinity of a simple one-dimensional functi...
null
NeurIPS_2024_submissions_huggingface
2024
null
null
null
null
null
null
null
null
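A minimal discrete-time sketch of the mirror flow studied in this entry, on separable logistic data with a separable potential so the mirror map inverts coordinate-wise in closed form (the separable case is the one Theorem 3 of the paper covers analytically); the hypentropy-like potential, step size, and synthetic data are illustrative assumptions.

```python
# Hedged sketch of discrete-time mirror descent on linearly separable data.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(50, 2))
y = np.sign(X @ np.array([1.0, 2.0]))          # linearly separable labels

def grad_loss(beta):
    m = np.clip(y * (X @ beta), -50, 50)       # margins, clipped for stability
    return -(X * (y / (1 + np.exp(m)))[:, None]).mean(axis=0)  # logistic gradient

# Potential phi(b) = sum_i [b_i*arcsinh(b_i) - sqrt(1 + b_i^2)], so that
# grad phi = arcsinh and the mirror update inverts coordinate-wise with sinh.
beta, eta = np.zeros(2), 0.5
for _ in range(5000):
    beta = np.sinh(np.arcsinh(beta) - eta * grad_loss(beta))
print(beta / np.linalg.norm(beta))             # limiting direction depends on the potential
```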
MonoMAE: Enhancing Monocular 3D Detection through Depth-Aware Masked Autoencoders
Accept (poster)
Summary: This paper proposes a monocular 3D detection framework inspired by Masked Autoencoders (MAE), designed to address the challenge of object occlusions in 3D object detection. It utilizes a unique depth-aware masking module that simulates occlusions by adaptively masking non-occluded object features based on dept...
Rebuttal 1: Rebuttal: We thank the valuable comments and insightful suggestions, and we hope our detailed responses below can address your concerns. **Weakness 1: Performance Improvement** We would clarify that the proposed MonoMAE achieves clear performance improvement over the state-of-the-art. As shown in Table 1...
Summary: This paper introduces a novel framework for improving monocular 3D object detection, particularly in handling object occlusions. The proposed MonoMAE leverages depth-aware masking to simulate occlusions in the feature space and employs a lightweight completion network to reconstruct occluded object regions, th...
Rebuttal 1: Rebuttal: We thank the valuable comments and insightful suggestions, and we hope our detailed responses below can address your concerns. **Weakness 1: Depth-Aware Masking for Occlusion Simulation** We acknowledge that the proposed depth-aware masking may not perfectly replicate natural occlusion patterns,...
Summary: This paper applies Masked Autoencoder to 3D object detection. It distinguishes object queries into occluded and non-occluded categories, and during training, it applies depth-aware masking to the non-occluded queries and learns by completing them. At test time, the completion is applied to the occluded queries...
Rebuttal 1: Rebuttal: We thank the valuable comments and insightful suggestions, and we hope our detailed responses below can address your concerns. **Weakness 1: Feature-Level Masking vs. Image-Level Masking** We would clarify that the strategy of masking and completion aims to generate pairs of non-occluded and oc...
Summary: This paper introduces MonoMAE, a novel monocular 3D object detection framework designed to improve detection performance in the presence of object occlusions. MonoMAE leverages the concept of Masked Autoencoders, treating object occlusions as natural masking and training the network to complete occluded region...
Rebuttal 1: Rebuttal: We thank the valuable comments and insightful suggestions, and we hope our detailed responses below can address your concerns. **Weakness 1: Occlusion Levels: Definition and Usage** We appreciate the reviewers' insightful comments regarding the use of occlusion levels. Our method only uses the b...
Rebuttal 1: Rebuttal: # General Response We thank all the reviewers for your time, insightful suggestions, and valuable comments. We are highly encouraged by the reviewers' acknowledgment of our method in its innovative idea and novel design (xT7k, 8dEd, g4DX), superior performance (xT7k, 8dEd), exhaustive experiments...
NeurIPS_2024_submissions_huggingface
2024
null
null
null
null
null
null
null
null
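A minimal sketch of the depth-aware masking plus completion idea this entry revolves around: non-occluded query features are masked as a function of depth to simulate occlusion, and a lightweight network learns to complete them. The masking rule, ratios, and completion head are illustrative assumptions, not MonoMAE's exact modules.

```python
# Hedged sketch: depth-dependent masking of object query features, with a
# lightweight completion network trained to restore them.
import torch

feats = torch.randn(8, 256)                 # 8 object queries, 256-d features
depths = torch.rand(8) * 60                 # per-object depths in meters (dummy)
mask_ratio = (depths / depths.max()).clamp(0.1, 0.7)   # depth-dependent masking
mask = torch.rand(8, 256) < mask_ratio[:, None]
masked = feats.masked_fill(mask, 0.0)

completion = torch.nn.Sequential(           # lightweight completion network
    torch.nn.Linear(256, 256), torch.nn.ReLU(), torch.nn.Linear(256, 256))
recon = completion(masked)
loss = torch.nn.functional.mse_loss(recon, feats)   # learn to complete masked features
loss.backward()
```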
Parsimony or Capability? Decomposition Delivers Both in Long-term Time Series Forecasting
Accept (spotlight)
Summary: This paper proposes a Selective Structured Components-based Neural Network for Long-term Time Series Forecasting. Strengths: 1. This paper demonstrates originality by addressing a crucial limitation in existing SOTA methods, maintains high quality through thorough experimentation and clear presentation, offer...
Rebuttal 1: Rebuttal: Thank you for your thoughtful review and constructive comments. --- **W1:** We chose the ECL and Traffic datasets for our experiments primarily because they present more challenging tasks due to their larger data scale. Specifically, these datasets include a mass of variables and cover an extens...
Summary: This paper identifies data decomposition as a core bottleneck in time series forecasting and proposes a novel model named SSCNN, a decomposition-based model innovatively enhanced with a selection mechanism. SSCNN is specifically designed to adeptly capture complex regularities in data while maintaining a minim...
Rebuttal 1: Rebuttal: Thank you for your thoughtful review and constructive comments. --- **W1:** We acknowledge the importance of including 720-step predictions. In our experiments, we observed that the ranking of models for 720 output steps differed only slightly from the ranking for 336 output steps. This suggests ...
Summary: This paper addresses long-term time series forecasting and critiques the reliance on complex models with extensive parameters. It proposes a decomposition method specifically designed for time series dynamics, achieving better forecasting performance across various datasets. Remarkably, the new model uses over...
Rebuttal 1: Rebuttal: Thank you for your thoughtful review and constructive comments. --- **W1:** We have addressed the concerns raised in your questions. Please kindly refer to the response to Q1 and Q2 below. --- **W2:** We have conducted a comparative analysis between SSCNN and MTSF models based on LLM [1] [2]. ...
Summary: This study unveils a groundbreaking approach to time series forecasting, notable for its minimal parameter count. It stands as the first model to consistently outperform state-of-the-art (SOTA) techniques while remaining compact. Unlike prevalent methods such as PatchTST and iTransformer, which are powerful bu...
Rebuttal 1: Rebuttal: Thank you for your thoughtful review and constructive comments. --- **W1:** Thank you for highlighting this important point. We apologize for not providing sufficient context on the four components used in our model. Disentangling these components has been shown to be effective for time series fo...
Rebuttal 1: Rebuttal: We are grateful for the detailed and constructive feedback provided by the reviewers. The positive reception of our work, particularly the recognition of its innovative approach and significant contributions to the field of time series forecasting, is highly encouraging. Below, we summarize the ke...
NeurIPS_2024_submissions_huggingface
2024
Summary: Title: Parsimony or Capability? Decomposition delivers both in long term time series forecasting. Long term time series forecasting has been an important research problem which applies to different problem domains. This paper proposes a decomposition method which shows significant performance on the benchmark...
Rebuttal 1: Rebuttal: Thank you for your thoughtful review and constructive comments. --- **W1:** We apologize for the lack of captions for Figure 2 and Figure 3. We also include the caption for Figure 1, as requested by Reviewer aS9R. The captions for these figures are shown below. Figure 2: Examination of parameter ...
Summary: The paper approaches the problem of long term time series forecasting (LTSF) using a compositional technique to reduce the model size without compromising the quality of solution. The proposed technique is a transformer based architecture with a lower number of parameters, and delivers similar performance as s...
Rebuttal 1: Rebuttal: Thank you for your thoughtful review and constructive comments. --- **W1:** We apologize if the current symbolic formulas present challenges to comprehension. To enhance understanding, we have tried to make the selection mechanisms formulated by Equations (3)-(7) more accessible with the example ...
null
null
null
null
Sparse-view Pose Estimation and Reconstruction via Analysis by Generative Synthesis
Accept (poster)
Summary: The paper introduces a novel optimization-based method for sparse-view 3D reconstruction from unposed images. The method uses an off-the-shelf pose estimator to get a pose initialization; it then uses a rendering loss and generative priors to optimize the pose and the 3D reconstruction. In detail, the generative priors i...
Rebuttal 1: Rebuttal: > Missing baseline. We would like to thank the reviewer for the suggestion on baselines. We include the comparison between the proposed approach and SPARF in General Response. In addition, we also compare our method with UpFusion on novel view synthesis: | Method | Init Cam Pose | (N=6) PSNR ...
Summary: This paper proposes a framework for joint 3D reconstruction and pose refinement. Specifically, given estimated camera poses from off-the-shelf models, the proposed method first leverages diffusion priors and rendering loss for 3D reconstruction. The 3D reconstruction is further used to refine the current pose ...
Rebuttal 1: Rebuttal: > The proposed method is compared with SPARF only in the setting of using pose from different pose estimation baselines. However, it would be more convincing to also present the results using the same setting of SPARF, which adds noise into the GT camera pose. This will be a direct comparison with...
Summary: This paper proposes a method for the joint reconstruction of camera poses and 3D objects given sparse input views. The core idea is to use a pose-conditioned diffusion model (Zero-123) as a prior, impose the SDS loss, and jointly optimize the poses and objects, similar to the approach in ID-pose. To improve th...
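As a rough picture of the alternating scheme these summaries describe, here is a self-contained toy loop that jointly updates a "pose" and an "object" under a rendering loss plus a prior penalty. The `render` and `prior_penalty` functions are stand-ins we define ourselves; the actual method uses Gaussian splatting and an SDS loss from a pose-conditioned diffusion model.

```python
# Toy joint optimization of pose and geometry; everything here is a stand-in.
import torch

torch.manual_seed(0)
points = torch.randn(50, 2, requires_grad=True)   # toy "3D" object (2D here)
pose = torch.zeros(1, requires_grad=True)         # toy camera pose (rotation angle)
target = torch.randn(50, 2)                       # "observed" view

def render(pts, angle):
    c, s = torch.cos(angle), torch.sin(angle)
    rot = torch.stack([torch.cat([c, -s]), torch.cat([s, c])])
    return pts @ rot.T

def prior_penalty(pts):
    # Stand-in for a generative prior: pull points toward the unit circle.
    return ((pts.norm(dim=1) - 1.0) ** 2).mean()

opt = torch.optim.Adam([points, pose], lr=1e-2)
for step in range(200):
    opt.zero_grad()
    loss = ((render(points, pose) - target) ** 2).mean() + 0.1 * prior_penalty(points)
    loss.backward()
    opt.step()
print(float(loss))
```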
Rebuttal 1: Rebuttal: > This optimization-based method requires more time compared to a feed-forward model, taking about 5-10 minutes. Additionally, the writing discussing this aspect is somewhat unclear: the paper states, “with increased inference time depending on the number of outliers.” Could this statement be more...
Summary: This paper presents a method named MV-DreamGaussian for tackling the problem of 3D reconstruction from sparse multi-view inputs. In particular, the paper extends the DreamGaussian work to use multi-view images as the inputs and proposes a scheme to optimize the inaccurate camera poses of the multi-view images....
Rebuttal 1: Rebuttal: > This paper presents very limited novelty in the reconstruction part with a trivial extension to DreamGaussian to use multi-view images, which is already implemented in a public repository stable-dreamfusion. We respectfully disagree with the reviewer's assessment that our extension to DreamGaus...
Rebuttal 1: Rebuttal: # General Response We appreciate the reviewers' insightful comments and valuable feedback. We are glad that the reviewers appreciated the results, the practicality of the setup, and found the paper well written. In this response, we address some of the common points raised by the reviewers (addit...
NeurIPS_2024_submissions_huggingface
2024
null
null
null
null
null
null
null
null
Stochastic Optimal Control Matching
Accept (poster)
Summary: In this paper, the authors propose a novel learning algorithm, Stochastic Optimal Control Matching (SOCM), to numerically solve general formulations of Stochastic Optimal Control (SOC) problems involving affine-controlled diffusion processes. They build upon the Iterative Diffusion Optimization (IDO) [1] fram...
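For readers new to the setting, the control-affine SOC problem targeted by IDO-style methods such as SOCM is typically written as below; the notation is ours, not copied from the paper.

```latex
% Notation (ours): u is the control, b the base drift, sigma the diffusion
% coefficient, f a running state cost, g a terminal cost, B_t Brownian motion.
\begin{aligned}
&\min_{u}\; \mathbb{E}\!\left[\int_0^T \Big(\tfrac{1}{2}\,\lVert u(X_t,t)\rVert^2 + f(X_t,t)\Big)\,\mathrm{d}t \;+\; g(X_T)\right] \\
&\text{s.t.}\quad \mathrm{d}X_t = \big(b(X_t,t) + \sigma(t)\,u(X_t,t)\big)\,\mathrm{d}t + \sigma(t)\,\mathrm{d}B_t .
\end{aligned}
```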
Rebuttal 1: Rebuttal: We thank the reviewer for providing an extremely succinct description of our paper. We completely agree with the reviewer’s concerns regarding limitations, but we just want to highlight our perspective that this paper proposes a rather novel way of thinking about constructing methods for SOC probl...
Summary: This paper proposes a novel algorithm for approximating the solution to the Hamilton-Jacobi-Bellman (HJB) equation with a neural network control policy. Rather than backpropagating through rollouts of the dynamics, the authors develop a least-squares objective which resembles the score-matching loss used in di...
Rebuttal 1: Rebuttal: We thank the reviewer for their comments. **“The evaluations only consider simple toy problems. Moreover, they only plot the L2 error with respect to the optimal control. However, this does not necessarily tell us about the actual task performance due to compounding errors.” “How do all the metho...
Summary: This paper presents stochastic optimal control matching (SOCM), which is an iterative diffusion optimization for optimal control aiming to fit a matching vector field. The authors introduce a new loss function and address the analysis and design of a learning-based control method. Strengths: The work is nicel...
Rebuttal 1: Rebuttal: We thank the reviewers for their encouraging rating and for providing us the opportunity to clarify the key importance of the proposed method below. **“Could the authors comment and/or perform some motivating experiments to show the stability of the training by SOCM? They can emphasize the contri...
Summary: **Summary** This paper introduces Stochastic Optimal Control Matching (SOCM), a novel algorithm for solving stochastic optimal control problems. Key contributions include: 1. SOCM algorithm, adapting ideas from conditional score matching in diffusion models 2. A new "path-wise reparameterization trick" for g...
Rebuttal 1: Rebuttal: We thank the reviewer for an accurate list of our contributions; however, we think there may be a misunderstanding regarding the scope of applications. We detail our responses to the reviewer’s concerns and questions below: **“The method is currently restricted to linear Gaussian models and requi...
Rebuttal 1: Rebuttal: We thank the reviewers for their helpful comments. We would like to clarify and reemphasize our contributions, and to provide a global response to some issues that have been raised by multiple reviewers. We acknowledge that our method has limitations due to the use of importance weighting and is...
NeurIPS_2024_submissions_huggingface
2024
null
null
null
null
null
null
null
null
Linear Causal Representation Learning from Unknown Multi-node Interventions
Accept (poster)
Summary: This paper studies identifiability under unknown multi-node interventions (soft/hard), with general models (parametric/nonparametric) and **linear** mixing functions. This work provides both a detailed proof which justifies the main theoretical statement, and a step-by-step algorithm which guides how to achie...
Rebuttal 1: Rebuttal: We thank the reviewer for finding our results an important step towards more realistic CRL settings and noting the clarity of the paper. **General causal models**: We did not provide results using non-linear causal models since our algorithm, due to its combinatorial nature, is sensitive to inpu...
Summary: This paper advances Causal Representation Learning (CRL) by addressing the challenge of using unknown multi-node (UMN) interventions to identify latent causal variables and their structures. The authors develop a score-based CRL algorithm that leverages UMN interventions to guarantee identifiability of latent...
Rebuttal 1: Rebuttal: We thank the reviewer for their thoughtful review and accurate summary. We address the questions as follows. **Computational complexity**: Stage 1 (score difference estimation) is only performed once before the main algorithm starts. Stage 4 (unmixing procedure for hard interventions) essentiall...
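As a toy illustration of the score-difference primitive mentioned in Stage 1, the snippet below computes exact score differences between an observational and an intervened Gaussian environment. The Gaussian closed form and the single-coordinate intervention are our simplifications; real instances would estimate scores from samples.

```python
# Score differences between two Gaussian environments, in closed form.
import numpy as np

rng = np.random.default_rng(0)
n, d = 5000, 3
A = rng.normal(size=(d, d))
cov_obs = A @ A.T + d * np.eye(d)           # observational covariance
cov_int = cov_obs.copy()
cov_int[0, 0] += 5.0                         # a (soft) intervention on one latent

def gaussian_score(x, cov):
    return -np.linalg.solve(cov, x.T).T      # score of N(0, cov): -cov^{-1} x

x = rng.multivariate_normal(np.zeros(d), cov_obs, size=n)
diff = gaussian_score(x, cov_obs) - gaussian_score(x, cov_int)
# Coordinates whose score difference varies most point to intervened mechanisms.
print(np.abs(diff).mean(axis=0))
```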
Summary: This work studies interventional causal representation learning, where one has access to interventional data, to identify latent causal factors and latent DAG in the unknown multi-node interventions regime. The authors consider a setting where the mixing function is linear and the latent causal model is nonpar...
Rebuttal 1: Rebuttal: We thank the reviewer for acknowledging our strong theoretical and algorithmic contributions. We address the questions as follows. **Empirical results**: - *Increasing $n$*: Thanks for the suggestion. Please refer to the general response for experiment results for up to $n=8$ nodes. - *Basel...
Summary: This paper extends previous results on using score functions for causal representation learning to settings with unknown multi-node interventions. This new setting poses significant new challenges as opposed to the single-node intervention case. The authors first present a theoretical identifiability result on...
Rebuttal 1: Rebuttal: We thank the reviewer for acknowledging the challenges of the unknown multi-node intervention setting and noting the clarity of the paper. We address the raised concerns as follows. **Noiseless transformation:** The current scope of the paper cannot handle noisy transformations. We also kindly no...
Rebuttal 1: Rebuttal: We thank all the reviewers for their thorough feedback and thoughtful questions. Below we address some shared questions by the reviewers. ### **Additional experiments** We address the shared concerns of the reviewers regarding the scalability of the algorithm via the following additional experime...
NeurIPS_2024_submissions_huggingface
2024
Summary: This paper introduces new identifiability results for CRL in environments with unknown multi-node interventions. It shows that, with sufficiently diverse interventional environments, one can achieve identifiability up to ancestors using soft interventions and perfect identifiability using hard interventions. T...
Rebuttal 1: Rebuttal: We thank the reviewer for finding our results crucial for extending CRL into more practical contexts, for finding our algorithm insightful, and for noting the clarity of the paper. We address the raised questions about the algorithm’s scalability as follows. - **Dimension of $X$**: Our algori...
null
null
null
null
null
null
Video Token Merging for Long Video Understanding
Accept (poster)
Summary: - This paper carries out an analysis of token merging [1] in the context of long-form video understanding and proposes learnable video token merging (VTM) to select semantics/saliency-guided tokens for merging. - In token merging, at each layer, the tokens are divided into two sets, source S and target T, throug...
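To make the merging step concrete, here is a minimal sketch of the generic bipartite scheme this summary refers to: an alternating source/target split, cosine-similarity matching, and averaging of the r most similar sources into their targets. The split rule and `merge_tokens` helper are our illustrative choices, not the paper's learnable VTM.

```python
# Generic ToMe-style bipartite token merging (illustrative, assumes r >= 1).
import torch

def merge_tokens(tokens: torch.Tensor, r: int) -> torch.Tensor:
    """tokens: (N, C). Merges the r most similar source tokens into targets."""
    src, tgt = tokens[0::2], tokens[1::2]
    src_n = torch.nn.functional.normalize(src, dim=-1)
    tgt_n = torch.nn.functional.normalize(tgt, dim=-1)
    sim = src_n @ tgt_n.T                          # (S, T) cosine similarities
    best_sim, best_tgt = sim.max(dim=-1)           # each source's best target
    order = best_sim.argsort()
    keep, merge = order[:-r], order[-r:]           # r most similar get merged
    merged_tgt = tgt.clone()
    for i in merge:                                # average sources into targets
        j = best_tgt[i]
        merged_tgt[j] = (merged_tgt[j] + src[i]) / 2
    return torch.cat([src[keep], merged_tgt], dim=0)

out = merge_tokens(torch.randn(196, 64), r=32)
print(out.shape)                                   # torch.Size([164, 64])
```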
Rebuttal 1: Rebuttal: Thank you for your positive and valuable comments. Please find our responses below. *** > **Exploration** Compared to image token merging methods, token merging for video is relatively under-researched. In this work, we investigate various video token merging methods and finally propose a lear...
Summary: This paper explores various video token merging strategies in the context of long-form video classification and finally proposes a learnable Video Token Merging algorithm that dynamically merges video tokens based on visually salient areas. The contributions are summarized as follows: 1. Explore various video tok...
Rebuttal 1: Rebuttal: We do appreciate your constructive comments and will address them faithfully in the final paper. Please find our responses below. *** > **Difference from CTS** CTS is not similar to the proposed algorithm, since it is not even a token merging method. CTS is a semantic segmentation algorithm whi...
Summary: The paper approaches the task of long-video understanding from a token reduction perspective. Specifically, Transformer-based approaches suffer from a memory bottleneck and quadratic computation complexity with an increasing number of tokens, which is even more pressing with long videos as input. The paper builds on...
Rebuttal 1: Rebuttal: Thank you for your positive review and insightful suggestions, all of which will be addressed faithfully in the final paper. Please find our responses below. *** > **Difference from EVEREST** The proposed learnable VTM is quite different from EVEREST. EVEREST measures the cosine similarity of t...
Summary: This paper builds on Token Merging (ToMe) to improve its performance. In particular, the authors explore different ways to partition tokens so that the merging operation can lead to better performance while maintaining speed. They explore region-concentrated merging, motion-vector based merging and a learnable...
Rebuttal 1: Rebuttal: We do appreciate your constructive comments and will address them faithfully in the final paper. Please find our responses below. *** > **Token merging for video** To the best of our knowledge, there are only a few techniques for video token merging, such as TESTA(EMNLP2023), ToMe(ICLR2023), a...
Rebuttal 1: Rebuttal: We would like to thank all reviewers for their time and efforts for providing constructive reviews. Also, we extend our thanks to the program and area chairs. We have faithfully responded to all comments below.
NeurIPS_2024_submissions_huggingface
2024
null
null
null
null
null
null
null
null
Learning Cortico-Muscular Dependence through Orthonormal Decomposition of Density Ratios
Accept (poster)
Summary: The paper presents a novel approach called Functional Maximal Correlation Algorithm with Trace cost (FMCA-T) for estimating cortico-muscular dependence by leveraging orthonormal decomposition of density ratios. This method is designed to model the relationship between EEG (electroencephalography) and EMG (elec...
Rebuttal 1: Rebuttal: We thank the reviewer for the comments. Please find below our replies to the concerns/questions. **1. Limited baselines.** We have added EEG-Conformer (available on GitHub) and Deep4 (from the braindecode repository on GitHub) in the attached pdf. All baselines were implemented following the off...
Summary: The paper presents a new method to model the relationship between cortical and muscular oscillations using EEG and EMG recordings. Traditional methods like Cortico-Muscular Coherence (CMC) have limitations, so the authors propose using statistical dependence estimators to learn eigenvalues, eigenfunctions, and...
Rebuttal 1: Rebuttal: We thank the reviewer for the instructive comments. Please find below our replies to the concerns/questions. **1. Mathematical and algorithmic complexity.** We thank the reviewer for the suggestion. We will add more explanations of our methodology to the supplementary. **2. Interpretation of res...
Summary: The authors apply a novel but already existing machinery-based approach (https://www.sciencedirect.com/science/article/pii/S0047259X2300074X, https://arxiv.org/pdf/2212.04631), built on the orthonormal decomposition of density ratios, to decipher the relationship between cortical brain activity and the electromyographic signal...
Rebuttal 1: Rebuttal: We thank the reviewer for the detailed insightful feedback. **1.1 Novelty over CCA.** We acknowledge the reviewer's observation that the final cost is the trace of a normalized canonical correlation matrix between two multivariate neural networks. But we emphasize the link between cost optimiz...
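Our reading of the cost sketched in this rebuttal, as runnable code: the trace of a normalized cross-correlation matrix between two feature maps (random tensors stand in for the EEG and EMG network outputs). The `eps` ridge and the exact normalization are our assumptions and may differ from FMCA-T.

```python
# Trace of a normalized canonical correlation matrix (illustrative).
import torch

def trace_cost(f: torch.Tensor, g: torch.Tensor, eps: float = 1e-4) -> torch.Tensor:
    """f: (N, p), g: (N, q) feature outputs; returns a scalar trace objective."""
    f = f - f.mean(dim=0)
    g = g - g.mean(dim=0)
    n = f.shape[0]
    cff = f.T @ f / n + eps * torch.eye(f.shape[1])
    cgg = g.T @ g / n + eps * torch.eye(g.shape[1])
    cfg = f.T @ g / n
    # Canonical correlation matrix: Cff^{-1} Cfg Cgg^{-1} Cgf.
    m = torch.linalg.solve(cff, cfg) @ torch.linalg.solve(cgg, cfg.T)
    return torch.trace(m)

print(float(trace_cost(torch.randn(256, 8), torch.randn(256, 8))))
```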
Summary: This paper introduces a novel approach to analyzing cortico-muscular connectivity using statistical dependence measures based on density ratio decomposition. The authors apply a method called Functional Maximal Correlation Algorithm with Trace cost (FMCA-T) to paired EEG and EMG recordings. The key idea is to ...
Rebuttal 1: Rebuttal: We thank the reviewer for the comments. **1. Classification baselines could be stronger.** We have now added EEG-Conformer (available on GitHub) and Deep4 (from the braindecode repository on GitHub) as new baselines in the general response letter. The results show that FMCA-T surpasses the added...
Rebuttal 1: Rebuttal: We appreciate the reviewers' feedback. The following responses address their shared concerns. We have attached a letter containing the additional results, including the requested classification baselines, a frequency analysis of brain activations, full maps for Subject 3, maps for simulated EEG-E...
NeurIPS_2024_submissions_huggingface
2024
null
null
null
null
null
null
null
null
GaussianMarker: Uncertainty-Aware Copyright Protection of 3D Gaussian Splatting
Accept (poster)
Summary: This paper proposes GaussianMarker, a novel method for embedding invisible watermarks into 3D Gaussian Splatting (3DGS) models to protect their copyright. The key idea is to use uncertainty estimation to add imperceptible perturbations to 3D Gaussian parameters with high uncertainty. The method enables extract...
Rebuttal 1: Rebuttal: Dear Reviewer BKx3, We will address your comments below and in the revised paper. ### Weakness: > W1: Generalizability of decoders. We agree that the generalization ability is very important for practical use. In our designs, we employ a two-layer protection approach to achieve generalizable a...
Summary: 3D Gaussian Splatting (3DGS) has gradually become the mainstream method for acquiring 3D assets, which has led to a demand for copyright protection of 3DGS. In this paper, a watermarking method based on uncertainty, called GaussianMarker, is proposed. Firstly, 3DGS is partitioned based on uncertainty, and the wat...
Rebuttal 1: Rebuttal: Dear Reviewer VdSM, Thank you for your valuable feedback and constructive comments. We will address your comments below and in the revised paper. ### Weakness > W1: Parameters for uncertainty calculation and partitioning. As we have mentioned in the main paper (Section 4.1), the model paramete...
Summary: The paper presents a new method for embedding digital watermarks in 3D Gaussian Splatting (3DGS) models to protect the copyright of 3D assets. Traditional watermarking techniques for mesh, point cloud, and implicit radiance fields are not suitable for 3DGS, as they can cause distortions in rendered images. The...
Rebuttal 1: Rebuttal: Dear Reviewer QvjQ, Thank you for your valuable feedback and constructive comments. We appreciate your suggestion about considering more sophisticated scenarios, and we will address your comments below and in the revised paper. > W1: Model fine-tuning and auto-encoder attack. By following your ...
Summary: This paper proposes an uncertainty-based method to achieve watermarking for 3D Gaussian Splatting. Specifically, the Hessian matrix is used to estimate the parameter uncertainty. Then, the 3D Gaussians with high uncertainty are densified. The densified 3D Gaussians are trained to embed watermarking using a pre...
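A toy stand-in for the uncertainty-guided step described in these summaries: score each parameter with a crude one-sample diagonal Fisher proxy, treat low curvature as high uncertainty, and perturb only the top of that ranking. The quadratic loss, the Fisher proxy, and the top-k rule are our assumptions, not GaussianMarker's actual pipeline.

```python
# Uncertainty-ranked parameter perturbation on a toy quadratic loss.
import torch

torch.manual_seed(0)
params = torch.randn(1000, requires_grad=True)   # stand-in for 3DGS parameters
data = torch.randn(64, 1000)

loss = ((data @ params) ** 2).mean()
grad, = torch.autograd.grad(loss, params)
fisher_diag = grad ** 2                          # one-sample Fisher estimate
uncertainty = 1.0 / (fisher_diag + 1e-8)         # flat directions = uncertain
topk = uncertainty.topk(100).indices             # most uncertain 10%

with torch.no_grad():                            # embed a tiny perturbation there
    params[topk] += 1e-3 * torch.randn(100)
print(params[topk][:5])
```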
Rebuttal 1: Rebuttal: Dear Reviewer fyQG, We will address your comments below and in the revised paper. > W1. One concern about this paper is its novelty. Thanks for raising the concern. While our approach incorporates elements from classical techniques, our contributions extend beyond these traditional frameworks. ...
Rebuttal 1: Rebuttal: Dear Reviewers, We sincerely thank all reviewers for their comprehensive evaluations and valuable feedback. We are pleased to address any additional questions during the discussion period. Best Regards, Authors of Paper 3674 Pdf: /pdf/ce627dfca129bbba37fee1203b579757119efd4b.pdf
NeurIPS_2024_submissions_huggingface
2024
null
null
null
null
null
null
null
null
LiT: Unifying LiDAR "Languages" with LiDAR Translator
Accept (poster)
Summary: In this paper, the authors propose a method to help alleviate the domain gaps among different datasets with different LiDAR sensors, which can enable zero-shot detection on a new dataset. The proposed method includes Scene Modeling for foreground and background reconstruction and LiDAR Modeling with statistic...
Rebuttal 1: Rebuttal: ### W1. Usage of terms and naming of the method We thank the reviewer for bringing up the point on the naming and terms used in the paper. Regarding the terms "language" and "translator", "language" refers to a particular LiDAR pattern of a specific LiDAR sensor, and the "translator" refers to th...
Summary: To address the significant gap between different LiDAR datasets (related to sensors, environments, etc.), this paper proposes a solution that differs from the existing model-based adaptation approach. By employing a scene-reconstruction-data-simulation approach, it achieves consistent representation of differe...
Rebuttal 1: Rebuttal: > W1. Does "foreground" only refer to vehicles? Do pedestrians, bicycles, and similar entities fall into this category? Currently, we only focus on the vehicle category in the foreground modeling. In this paper, the main motivation is to demonstrate the effectiveness of model-based domain adaptat...
Summary: This paper proposes a unifying LiDAR Translator named LiT to achieve LiDAR domain adaptation. Differing from current model-driven approaches, LiT adopts a novel data-driven approach, embedding disparate LiDAR attributes into a common representation. LiT proposes a generalizable scene modeling and LiDAR statist...
Rebuttal 1: Rebuttal: ### W1. About dataset normalization - **Dataset normalization is non-trivial.** It is actually non-trivial to normalize different datasets into a unified representation. The LiDAR sensors have different specifications, such as the number of beams, vertical and horizontal resolution, and field of ...
Summary: The paper presents a novel framework designed to unify LiDAR data into a single target “language”, enabling unified-domain detection across diverse LiDAR datasets and marking a step toward domain unification for LiDAR-based autonomous driving systems. Experiments on the KITTI, Waymo, and nuScenes datasets demon...
Rebuttal 1: Rebuttal: ### W1. Motivation of the work We thank the reviewer for highlighting the importance of clarifying the motivation for our work. - **Background:** Imagine an autonomous driving company that has collected a substantial amount of LiDAR data from different sensors (LiDAR-A and LiDAR-B). The company...
Rebuttal 1: Rebuttal: Please refer to individual rebuttal comments. The rebuttal PDF is attached. Pdf: /pdf/97e81d42a8bc29e2986cc2890c567ed34d653215.pdf
NeurIPS_2024_submissions_huggingface
2024
null
null
null
null
null
null
null
null
Real-Time Selection Under General Constraints via Predictive Inference
Accept (poster)
Summary: The paper proposes a method for online sample selection. The authors introduce the concepts of _individual_ and _interactive constraints_, and demonstrate theoretically and empirically that their method satisfies both. Strengths: The problem seems important and the formulation and approach novel. The authors ...
Rebuttal 1: Rebuttal: > W1: I am not familiar with the FDR control literature, and had to read parts of the paper (specifically sections 2.2, 2.3 and 4) multiple times to get a gist for the logic of the method and its empirical perfomance....I highly recommend the authors revise the paper to make it easier to follow. ...
Summary: In this paper, the authors quantify the uncertainty of response predictions using predictive inference, and systematically address individual and interactive constraints in a sequential manner. An online selection rule is developed to ensure the above two types of constraints are under control at pre-speci...
Rebuttal 1: Rebuttal: > Q: Section 4.1 would be good to tell reader how many replications are used. **To Q**: We have mentioned '500 replications' at the beginning of Section 4 (line 270) when introducing our evaluation measures. To avoid repetition, we did not restate this in Section 4.1. > L: Comments are needed on ...
Summary: The paper studies online sample selection with individual and interaction constraints simultaneously. Specifically, the goal is to control variants of the false selection rate (FDR) and the expected similarity (ES) under the empirical Bayes framework. Under distributional conditions, the proposed method ...
Rebuttal 1: Rebuttal: > Q1: In motivating examples such as candidate selection, we get to observe the ground truth after time t. ... it is reasonable to use the observed $Y_t$'s to update the estimation. **To Q1**: Thank you for the constructive suggestion. We'd like to clarify a few points: 1. **Feedback and Model U...
Summary: This paper introduces a framework to perform online sample selection such that the unseen outcomes are in specific target range while also optimizing for constraints like diversity that are dependent on the input covariates. The additional constraint involving input covariates can help ensure properties like t...
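For background, a generic split-conformal selection rule for the event "the response exceeds a threshold" looks like the sketch below. This is a textbook construction for intuition only; it implements neither the paper's individual-constraint calibration nor its interactive (e.g., diversity) constraints.

```python
# Generic illustration only: split-conformal p-values for "Y exceeds c".
import numpy as np

rng = np.random.default_rng(1)
predict = lambda x: 2.0 * x                      # stand-in fitted regressor

x_cal = rng.normal(size=500)                     # held-out calibration data
y_cal = predict(x_cal) + rng.normal(size=500)
v_cal = y_cal - predict(x_cal)                   # calibration residuals

def pvalue(x_new: float, c: float = 1.5) -> float:
    """Conformal p-value for H0: Y_new <= c; small values favor selection."""
    v_hat = c - predict(x_new)                   # residual needed to violate H0
    return (1 + np.sum(v_cal <= v_hat)) / (len(v_cal) + 1)

stream = rng.normal(size=10)                     # incoming online candidates
selected = [t for t, x in enumerate(stream) if pvalue(x) <= 0.1]
print(selected)
```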
Rebuttal 1: Rebuttal: > W1: It would be interesting to understand the gaps between an ideal diversity profile and the profile obtained by the proposed method in Fig 3. Analysing the gap w.r.t changing g(X_i, X_j) function choice could be helpful. Would it be helpful to increase the weight of the g(X_i, X_j) term to red...
null
NeurIPS_2024_submissions_huggingface
2024
null
null
null
null
null
null
null
null
Constructing Semantics-Aware Adversarial Examples with a Probabilistic Perspective
Accept (poster)
Summary: This paper tackles the field of adversarial image generation by proposing an unrestricted attack method that can be applied to both targeted and untargeted attacks. The innovative approach considers a probabilistic perspective, treating the victim classifier and geometric constraints as distinct distributions....
Rebuttal 1: Rebuttal: We appreciate your review and your recognition of our probabilistic perspective and proposed method. Thank you for indicating the oversights and shortcomings in our submission. Your feedback has effectively improved the quality of this work. We respond to your questions and concerns as follows: #...
Summary: This paper proposes a new type of adversarial attack, which generates adversarial examples by solving a box-constrained non-convex optimization problem. Different from the traditional norm-bounded attacks, this paper focuses on unrestricted adversarial attacks by replacing the geometrical distance measure with...
Rebuttal 1: Rebuttal: We appreciate the reviewer's thorough evaluation and acknowledgment of our probabilistic approach and proposed methodology. We are grateful for the identification of weaknesses in our submission. The reviewer's insights have significantly enhanced the quality of this work. We address the review po...
Summary: This paper introduces a probabilistic framework for generating adversarial examples, focusing on maintaining the semantic integrity of the original images while implementing substantial pixel-level modifications. Unlike conventional adversarial techniques that rely heavily on minimal geometric perturbations, t...
Rebuttal 1: Rebuttal: We appreciate the reviewer's time and valuable feedback, which has significantly contributed to improving our work. We address the identified weaknesses as follows: ### About Equation (4) Our proposed framework assumes that the adversarial distribution is proportional to the product of the distan...
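To illustrate the probabilistic construction referenced around Equation (4), here is a toy Langevin sampler for a product density p(x) proportional to p_victim(y_target | x) times p_dist(x | x_orig). The logistic victim, the Gaussian distance term, and the step size are all our assumptions.

```python
# Unadjusted Langevin dynamics on a toy product distribution.
import torch

torch.manual_seed(0)
w = torch.randn(2)                       # toy victim: logistic classifier weights
x_orig = torch.tensor([2.0, 0.0])

def log_density(x, target=0.0, sigma=0.5):
    logit = x @ w
    # log p(target | x) for a logistic classifier, target in {0, 1}.
    log_victim = -torch.nn.functional.softplus((1 - 2 * target) * logit)
    log_dist = -((x - x_orig) ** 2).sum() / (2 * sigma ** 2)
    return log_victim + log_dist

x = x_orig.clone().requires_grad_(True)
step = 1e-2
for _ in range(500):                     # Langevin updates toward the product
    logp = log_density(x)
    grad, = torch.autograd.grad(logp, x)
    with torch.no_grad():
        x += 0.5 * step * grad + (step ** 0.5) * torch.randn(2)
print(x.detach(), torch.sigmoid(x.detach() @ w))
```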
null
null
Rebuttal 1: Rebuttal: We sincerely thank all the reviewers for their recognition and encouragement of the probabilistic perspective and related approaches we proposed. We are also very grateful for the valuable suggestions made by the reviewers. Your insightful suggestions have significantly contributed to the refineme...
NeurIPS_2024_submissions_huggingface
2024
null
null
null
null
null
null
null
null
Stabilize the Latent Space for Image Autoregressive Modeling: A Unified Perspective
Accept (poster)
Summary: This paper designs a better quantized autoencoder on top of VQGAN. It builds an image autoencoder which is able to both achieve good recognition performance for linear probing, and have a latent space which is suitable for training a generative model. It studies the existing autoencoders from a high-level theo...
Rebuttal 1: Rebuttal: We appreciate your constructive feedback and would like to clarify several points to enhance the understanding of our contributions. ## 1. Regarding the analysis of AE and PCA and the claim of the observation We want to emphasize that the analysis of AE and PCA is not the core focus of our theor...
Summary: Latent-based image generative models, such as LDMs and MIMs, have achieved success, but autoregressive models lag behind in image generation. Our research introduces a unified perspective on latent space stability and proposes a discrete image tokenizer, DiGIT, that significantly improves autoregressive image ...
Rebuttal 1: Rebuttal: Thank you for your thoughtful consideration of the paper and your constructive feedback. ## 1. Factual Clarification We respectfully disagree with the points raised and would like to clarify our positions. **1.1** Regarding the MIM models like MaskGIT and MAGE, as well as diffusion models, it i...
Summary: This paper tries to understand why latent autoregressive image models perform worse than latent diffusion models. The key insight is that existing tokenizers are trained primarily with the reconstruction objective, whose latent space is unstable and thus may not be easy to model autoregressively. To solve this...
Rebuttal 1: Rebuttal: Thank you for your thoughtful consideration of the paper and your constructive feedback. ## 1. Performance of the Proposed Method on Image Reconstruction We conduct an experiment to assess the reconstruction performance of the proposed discriminative tokenizer. We use the golden tokens obtained ...
Summary: The paper proposes to disentangle the encoder and decoder learning for an image tokenizer, which will ultimately be used to provide the latent space of an AR generative model. In particular, an SSL model such as DinoV2 is used for the encoder (plus k-means clustering). Strengths: 1. The idea of disentangling the encoder ...
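A minimal sketch of the tokenizer recipe this summary describes, with random features standing in for a frozen SSL encoder such as DinoV2: discretize patch features with k-means so each patch gets a cluster index as its token. The cluster count and `tokenize` helper are illustrative.

```python
# Frozen-feature + k-means tokenizer (illustrative stand-in).
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
patch_features = rng.normal(size=(10_000, 768))   # stand-in for SSL patch features

codebook = KMeans(n_clusters=512, n_init=1, random_state=0).fit(patch_features)

def tokenize(features: np.ndarray) -> np.ndarray:
    """Map each patch feature to its nearest cluster id (the discrete token)."""
    return codebook.predict(features)

tokens = tokenize(rng.normal(size=(196, 768)))    # one image: 14x14 patches
print(tokens[:10])
```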
Rebuttal 1: Rebuttal: Thank you for your thoughtful consideration of the paper and your constructive feedback. ## 1. Motivation for Adopting a Self-Supervised Model as Encoder/Tokenizer In section 2.1, we first conduct a theoretical analysis that demonstrates the necessity of considering both $\mathcal{D}^{H}$ and $\...
Rebuttal 1: Rebuttal: We thank the reviewers for their thoughtful comments. We appreciate that reviewers highlight the novelty and effectiveness of our method, e.g. "The idea is interesting and novel... Strong empirical results ... really impressive" (Vc7B), "A new perspective...which was neglected in previous works .....
NeurIPS_2024_submissions_huggingface
2024
null
null
null
null
null
null
null
null
RoPINN: Region Optimized Physics-Informed Neural Networks
Accept (poster)
Summary: The preprint proposes to replace the collocation-based PINN loss by a sum of local continuous integrals over regions around the collocation points. These continuous integrals are then again discretized using Monte Carlo integration with a single quadrature point. The authors furthermore propose to adapt the reg...
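A toy rendering of the region-optimization idea: each step evaluates the residual at one Monte Carlo sample drawn from a small region around every collocation point. The 1-D ODE u' = u and the fixed region half-width are our simplifications (the paper furthermore adapts the region, which this sketch omits).

```python
# Region-sampled PINN training on the toy ODE u' = u with u(0) = 1.
import torch

torch.manual_seed(0)
net = torch.nn.Sequential(torch.nn.Linear(1, 64), torch.nn.Tanh(), torch.nn.Linear(64, 1))
collocation = torch.linspace(0, 1, 50).unsqueeze(1)
opt = torch.optim.Adam(net.parameters(), lr=1e-3)
region = 0.02                                   # half-width of each local region

for step in range(1000):
    x = collocation + region * (2 * torch.rand_like(collocation) - 1)
    x.requires_grad_(True)
    u = net(x)
    du, = torch.autograd.grad(u.sum(), x, create_graph=True)
    residual = ((du - u) ** 2).mean()           # PDE residual for u' = u
    bc = (net(torch.zeros(1, 1)) - 1.0).pow(2).mean()   # boundary: u(0) = 1
    loss = residual + bc
    opt.zero_grad(); loss.backward(); opt.step()

print(float(net(torch.ones(1, 1))))             # should approach e = 2.718...
```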
Rebuttal 1: Rebuttal: # To Reviewer zNBp Many thanks to Reviewer zNBp for providing an insightful review and valuable suggestions. > **Clarify misconception.** > > "After all, the loss function in PINNs is already a Monte Carlo discretization over the whole computational domain." Firstly, we want to highlight that o...
Summary: This paper extends the optimization process of PINNs from isolated points to their continuous neighborhood regions, which can theoretically decrease the generalization error, especially for hidden high-order constraints of PDEs. A practical training algorithm, Region Optimized PINN (RoPINN), is seamlessly deri...
Rebuttal 1: Rebuttal: # To Reviewer KCE6 Many thanks to Reviewer KCE6 for providing the insightful review and questions. > **Q1:** "It is better to include the main proof idea of theoretical results in the main text." Following your suggestion, we will add the following descriptions into the main text as a brief pro...
Summary: The authors developed a region-optimized PINN to improve the prediction accuracy compared to the scatter-point-based PINN. Strengths: The authors proposed the region optimization paradigm and conducted a theoretical analysis. Weaknesses: The practical application scope is limited. Technical Quality: 4 Clar...
Rebuttal 1: Rebuttal: # To Reviewer mhfV We sincerely thank Reviewer mhfV for providing valuable feedback and suggestions in new experiments. > **Q1:** "Add some descriptions of training difficulty factors for the canonical PINN on 1D-Reaction in Section 4.2." We will add "Previous research [22 of our paper] demonst...
Summary: The paper proposes a novel optimization method for training physics-informed neural networks (PINNs): Region optimization, which extends a regular pointwise optimization of PINNs to neighborhood regions, named RoPINNs. The paper provides theoretical analysis explaining the decrease of generalization error with...
Rebuttal 1: Rebuttal: # To Reviewer aU22 We would like to sincerely thank Reviewer aU22 for providing a detailed review and insightful questions. > **Q1:** "The paper does not seem to provide general guidelines on how to set some important hyper-parameters. It would be great to see some experts’ guidelines." "More di...
Rebuttal 1: Rebuttal: ## Global Response and Summary of Revisions We sincerely thank all the reviewers for their insightful reviews and valuable comments, which are instructive for us to improve our paper further. This paper proposes and theoretically studies a **new training paradigm as region optimization**, which ...
NeurIPS_2024_submissions_huggingface
2024
null
null
null
null
null
null
null
null
Incentivizing Quality Text Generation via Statistical Contracts
Accept (poster)
Summary: The authors formulate a theoretical setup for an LLM text-generation service, to incentivize the service to output high-quality text to the consumer. The authors formulate this setup as the service having a set of models whose quality (as rated by an evaluator on the consumer's end) increases wit...
Rebuttal 1: Rebuttal: Thank you for the insightful and encouraging review! We address your questions and remarks below: > **The main issue is the theoretical setup does require an assumption that the bounds on cost are known, which seems somewhat impractical.** This is a very good point. We note that virtually all ro...
Summary: This paper addresses the issue of moral hazard in pay-per-token pricing for large language model (LLM) services. Firms may use cheaper, lower-quality models to cut costs, compromising text quality. Moreover, the firms' costs may be unknown to the clients. To counter this, the authors propose a pay-for-perform...
Rebuttal 1: Rebuttal: Thank you for the insightful review, and for the excellent questions! > **What is the computational complexity of finding the optimal cost-robust contract that incentivizes (equivalently, the complexity of finding the optimal test)?** This is a very good question. Optimal cost-robust contracts (...
Summary: * The paper concerns the problem of incentivizing LLMs to use the most costly model, which is assumed to be the model with the best performance. Without a proper incentive, the LLM company is incentivized to charge customers the highest payment but deliver the service using a lower-cost model, because the...
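In generic contract-theory notation (ours, hedged), the kind of pay-for-performance problem described in these summaries can be written as: choose the cheapest payment rule t over evaluator outcomes o that makes the best model the firm's best response.

```latex
% Notation (ours): model i has outcome distribution q_i and cost c_i; i* is the
% intended (best) model; t(o) >= 0 is the payment for evaluator outcome o.
\begin{aligned}
&\min_{t \,\ge\, 0}\;\; \mathbb{E}_{o \sim q_{i^\star}}\big[t(o)\big] \\
&\;\text{s.t.}\;\; \mathbb{E}_{o \sim q_{i^\star}}\big[t(o)\big] - c_{i^\star}
\;\ge\; \mathbb{E}_{o \sim q_{i}}\big[t(o)\big] - c_{i}
\qquad \text{for all models } i .
\end{aligned}
```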
Rebuttal 1: Rebuttal: Thanks for the insightful review! We address the points below: > **In practice, each company pricing its own AIs, so who should be the principal? In other words, the paper assumes there is a trust-worthy third party who can run the quality-detector and commits to a contract with the LLM companies...
null
null
null
NeurIPS_2024_submissions_huggingface
2024
null
null
null
null
null
null
null
null
Rethinking Patch Dependence for Masked Autoencoders
Reject
Summary: This paper reveals the role that inter-patch dependencies in the MAE decoder play in representation learning. The paper shows that MAE achieves coherent image reconstruction through global representations learned in the encoder rather than through interactions between patches in the decoder. Based on this, the authors pro...
Rebuttal 1: Rebuttal: Thank you for the review and we appreciate these suggestions. We have performed the requested experiments and will revise the paper accordingly. > [W1] “The claim that MAE reconstruction is achieved through global representation learning within the encoder rather than interactions between patche...
Summary: The paper introduces a novel pre-training approach called CrossMAE. Instead of concatenating the masked and visible tokens for the decoder, the authors add cross-attention to decode the masked tokens by using them and the visible patch embeddings as separate inputs to the decoder. Further, the authors introduc...
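The core decoding change described here, as a minimal sketch: mask-token queries cross-attend to visible-token embeddings rather than self-attending over the concatenated sequence. The dimensions, mask ratio, and single attention block are our choices; CrossMAE's full decoder differs in detail.

```python
# Cross-attention decoding of mask tokens from visible tokens (illustrative).
import torch

d_model, n_heads = 256, 8
attn = torch.nn.MultiheadAttention(d_model, n_heads, batch_first=True)

visible = torch.randn(2, 49, d_model)            # encoder outputs (25% kept)
mask_queries = torch.randn(2, 147, d_model)      # mask tokens + positional info

decoded, _ = attn(query=mask_queries, key=visible, value=visible)
print(decoded.shape)                             # torch.Size([2, 147, 256])
```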
Rebuttal 1: Rebuttal: We want to thank the reviewer for the detailed review. We provide responses via the discussion below. The most critical concern that the reviewer had was outlined in **Weakness 2**: > [W2] I don’t understand how the prediction ratio provides any benefit for better downstream performance. We apo...
Summary: This paper presents CrossMAE, a methodology for improving pre-training efficiency over that of MAE for an encoder. The paper motivates its approach by presenting visual evidence that, in standard MAE pre-training, masked tokens attend to other masked tokens significantly less than to non-masked (aka, visible) ...
Rebuttal 1: Rebuttal: Thank you for your valuable questions and suggestions! We provide responses via the discussion below: > [W1] “By averaging over all transformer blocks, variations in the attention may be hidden. Naively, (the reviewer) would think that for early blocks, the attention due to masked tokens would be...
null
null
Rebuttal 1: Rebuttal: We would like to thank all the reviewers for their thoughtful reviews as well as encouraging feedback. We are especially glad that the reviewers believe that our ablations are **“fairly thorough”** (Reviewer gKk3), the paper is **“well motivated through a practical observation”** and **“well writt...
NeurIPS_2024_submissions_huggingface
2024
null
null
null
null
null
null
null
null
A Sober Look at the Robustness of CLIPs to Spurious Features
Accept (poster)
Summary: The authors aim to investigate spurious correlations learned by CLIP models. For this, they curate a novel dataset where animals are organized into common and uncommon backgrounds, e.g. a polar bear is more likely encountered in snow than on grass. The authors then perform experiments where they benchmark vari...
Rebuttal 1: Rebuttal: Thank you for your constructive comments and suggestions! Please find our responses below. >Q1. The authors missed important previous works. A1. Many thanks for your suggestion. CounterAnimal utilizes high-quality data available on the internet, whereas Terralcognita relies on camera trapping ...
Summary: This paper presents CounterAnimal, an evaluation dataset featuring two subsets: animals with common backgrounds and those with unusual backgrounds. The images were sourced from iNaturalist. Data with high CLIP accuracy are categorized as "Common", while those with low CLIP accuracy are labeled as "Counter". ...
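Schematically, the common/counter construction described here can be read as the grouping logic below; the classes, backgrounds, and correctness flags are fabricated placeholders, not data from the benchmark.

```python
# Group images by (class, background), score each group with zero-shot
# correctness, and label the best background "common" and the worst "counter".
from collections import defaultdict

# (class, background) -> list of per-image 0/1 CLIP correctness flags
flags = {
    ("polar bear", "snow"):  [1, 1, 1, 0, 1],
    ("polar bear", "grass"): [0, 1, 0, 0, 1],
}

groups = defaultdict(dict)
for (cls, bg), hits in flags.items():
    groups[cls][bg] = sum(hits) / len(hits)

for cls, accs in groups.items():
    common = max(accs, key=accs.get)
    counter = min(accs, key=accs.get)
    print(cls, "common:", common, "counter:", counter)
```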
Rebuttal 1: Rebuttal: Thank you for your constructive comments and suggestions! Please find our responses below. > Q1. The proposed dataset is not sufficiently robust to analyze the influence of spurious bias, as this is not the only difference between the common and counter datasets. A1. In our study, we primaril...
Summary: In this work, the authors create an evaluation dataset comprising two groups, one with animals in usual backgrounds (common group) and another with unusual backgrounds (counter group). They then evaluate a suite of models of different backbones, model sizes, and datasets. They find that CLIP models do poorly t...
Rebuttal 1: Rebuttal: Thank you for your constructive comments and suggestions! Please find our responses below. > Q1. Biased dataset: The dataset is split into common and counter groups using a CLIP model. Therefore, by construction, the CLIP models will perform poorly, and it is no surprise that the ImageNet-traine...
Summary: This work asks one interesting question: "Do CLIP models always generalize better than ImageNet models?" Driven by this question, this work proposes a new benchmark dataset named CounterAnimal. This dataset consists of a) the common group: comprising animals in common backgrounds, and b) the counter group: inc...
Rebuttal 1: Rebuttal: Thank you for your constructive comments and suggestions! Please find our responses below. > Q1. I think the claim is somewhat "obvious": there exists a relatively strong correlation between the object captions and the parts of image backgrounds, CLIP will learn to align the backgrounds, i.e., ...
Rebuttal 1: Rebuttal: We sincerely appreciate all the reviewers for their careful reviews and constructive feedback. We are also grateful for reviewers’ recognitions of our efforts on dataset constructions, the empirical findings, as well as the theoretical analysis. In response, we would like to emphasize the contribu...
NeurIPS_2024_submissions_huggingface
2024
null
null
null
null
null
null
null
null
Reflective Multi-Agent Collaboration based on Large Language Models
Accept (poster)
Summary: The paper introduces COPPER, a novel framework designed to enhance collaboration in multi-agent systems using a learnable self-reflection mechanism. COPPER utilizes a shared reflector fine-tuned to adjust actor model prompts via a counterfactual PPO mechanism. This approach includes counterfactual rewards to a...
Rebuttal 1: Rebuttal: To Reviewer tdnV: Thanks for your comments. We will try to alleviate your concerns one by one in the following. **Q1: My main concern is this paper involves a combination of various components and I could not clearly infer from the paper which part is most important. This makes the improvement f...
Summary: The paper proposes a multi-agent reflection framework COPPER to solve reasoning tasks on several datasets such as HotPotQA, GSM8K, and Checkmate in One Move. The two main contributions are: 1. designing counterfactual rewards to alleviate the credit assignment problem; 2. training a shared reflector to persona...
Rebuttal 1: Rebuttal: To Reviewer ogh3: Thanks for your comments. In the following, we try to alleviate your concerns one by one. **Q1: The motivation of the shared reflector may not align with reality. Embodied scenarios do not allow complete information sharing with a central reflector.** Thanks for this comment. ...
Summary: This paper proposes COPPER to enhance the collaboration ability of multi-agent systems through a learnable self-reflection mechanism. It involves reflections from different agent-specific profiles. The contribution of each agent-specific reflector is measured based on their marginal reward. This reflector is s...
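A toy rendering of the counterfactual-reward idea these summaries describe: credit each agent's reflector with the team reward minus the reward obtained when that agent's reflection is replaced by a default one. The `team_reward` function is a made-up stand-in, not COPPER's reward model.

```python
# Marginal (counterfactual) credit assignment over agent reflections.
def team_reward(reflections):
    # Pretend reward: count reflections that contain an actionable suggestion.
    return sum(1.0 for r in reflections if "try" in r)

reflections = ["try tool A", "looks fine", "try smaller steps"]
baseline = "no reflection"

for k, r in enumerate(reflections):
    counterfactual = reflections[:k] + [baseline] + reflections[k + 1:]
    credit = team_reward(reflections) - team_reward(counterfactual)
    print(f"agent {k}: counterfactual reward {credit}")
```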
Rebuttal 1: Rebuttal: To Reviewer gdf9: Thanks so much for your positive comments on our manuscript. In the following, we try to alleviate your concerns in detail (we combine all the questions in the weaknesses and questions). **Q1: Including the Retroformer under the multi-agent setting as one of the baselines would...
null
null
Rebuttal 1: Rebuttal: Dear reviewers: Thanks for your detailed reviews. Additional tables and figures mentioned in the rebuttals are shown in the submitted one-page pdf. Pdf: /pdf/5f5f64c530b1a9d18678029e435df512c30efe7d.pdf
NeurIPS_2024_submissions_huggingface
2024
null
null
null
null
null
null
null
null
Neural Experts: Mixture of Experts for Implicit Neural Representations
Accept (poster)
Summary: This paper proposes a mixture of experts (MoE) approach for INRs, which allows the learning of local piece-wise continuous functions by subdividing the domain and fitting locally. The incorporation of a MoE architecture enhances speed, accuracy, and memory efficiency. They also propose a novel manager architec...
Rebuttal 1: Rebuttal: W1: Thank you for bringing these important works to our attention, we will address them in our paper. W2: We compare with [4,new] on their dataset (300 test images from LSUN bedrooms). We use the same learning rate and same measure (PSNR after 300 steps for SIREN). Note that these have not conver...
Summary: This paper proposes a new architecture for implicit neural representations (INRs) based on the mixture-of-experts (MoE) architecture. This new architecture differs from traditional MoE architectures in that all the experts have a shared encoder and expert-specific decoders, while the manager also now ha...
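Our sketch of the architecture this summary describes: a shared encoder feeding expert-specific decoders, with a manager conditioned on its own encoding plus the shared one. The widths, depths, and the soft (rather than hard) expert choice are assumptions.

```python
# Shared-encoder MoE for an INR (illustrative, not the paper's exact design).
import torch

class NeuralExpertsSketch(torch.nn.Module):
    def __init__(self, n_experts=4, width=64):
        super().__init__()
        self.encoder = torch.nn.Sequential(torch.nn.Linear(3, width), torch.nn.ReLU())
        self.experts = torch.nn.ModuleList(
            [torch.nn.Linear(width, 1) for _ in range(n_experts)]
        )
        self.manager_enc = torch.nn.Sequential(torch.nn.Linear(3, width), torch.nn.ReLU())
        self.manager = torch.nn.Linear(2 * width, n_experts)

    def forward(self, coords):
        h = self.encoder(coords)                                   # shared features
        m = self.manager_enc(coords)
        gate = torch.softmax(self.manager(torch.cat([h, m], -1)), -1)
        outs = torch.stack([e(h) for e in self.experts], dim=-1)   # (N, 1, E)
        return (outs * gate.unsqueeze(1)).sum(-1)                  # gated mixture

model = NeuralExpertsSketch()
print(model(torch.randn(128, 3)).shape)   # torch.Size([128, 1])
```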
Rebuttal 1: Rebuttal: **One of the major weaknesses of this paper is the experimental evaluation ...** - **vs other architectures**: All the listed architectures are about changing the activation function. Our proposal is orthogonal to that as it can work with any activation function, and we have shown our method with ...
Summary: The paper presents a novel INR framework that leverages the Mixture of Experts (MoE) technique. The proposed strategy consists of an expert and a manager branch. Each branch has an encoder that processes the input coordinate and extracts an embedding. By processing the two encoder embeddings, the manager predi...
Rebuttal 1: Rebuttal: W1)a: The limitation we are mainly interested in is learning capacity rather than computation time. For traditional INRs, as each coordinate needs to be processed by the whole network, all parameters have to contribute to the output of every point in the domain, making parameter optimization diffi...
Summary: This paper introduces a MoE architecture for INRs, enhancing scalability, local fitting, and parallelization. Traditional INRs use a single network, imposing global constraints, while this method learns several local expert functions, subdividing the domain. Key contributions include a novel manager architectu...
Rebuttal 1: Rebuttal: W1: We have added results in the updated Table 1 in General Comments (see "Ours Neural Experts SIREN small **(New)**"). This version has a width of 68 in each layer (encoding, experts and manager) instead of 128, leading to 98,686 parameters. Note that it still outperforms the SIREN baseline, whic...
Rebuttal 1: Rebuttal: ## General Comments We thank the reviewers for their insightful comments. We include requested additional experimental results here and response to common questions. Specific questions are addressed to each reviewer below. **New Table 1** (page 5 of submission) now updated to have comparison wit...
NeurIPS_2024_submissions_huggingface
2024
null
null
null
null
null
null
null
null
Probabilistic Weather Forecasting with Hierarchical Graph Neural Networks
Accept (spotlight)
Summary: This work introduces a VAE variant of GraphCast for global medium-range weather forecasting and a VAE variant of a UNet (that is formulated as a GNN) for limited area modeling over Scandinavia. For this, they adapt GraphCast to have a similar hierarchical structure to UNets, and then treat the coarsest hierarc...
Rebuttal 1: Rebuttal: We thank reviewer PLfK for useful comments. See our response below: 1. Scores for GraphCast* As we state clearly when introducing this baseline, this is a version of GraphCast trained on the same 1.5° dataset as the other models. It thus has different scores than the original GraphCast model ev...
Summary: This paper introduces a new method for predicting weather using advanced deep learning models. The approach, called Graph-EFM, improves accuracy and better handles uncertainties in weather forecasts. It uses a 1.5 degree version of ERA5, making weather predictions more reliable and useful for real-world app...
Rebuttal 1: Rebuttal: We thank reviewer 568u for useful comments. See our response below: 1. I need to see some figures during extreme events, e.g. cyclones like Yaku. We have now included such a case study for Hurricane Laura, which can be found in the global author rebuttal. 2. About higher resolution data and ERA6 ...
Summary: The authors propose a graph-based ensemble forecasting model (Graph-EFM) to provide weather prediction with a hierarchical GNN framework. They used a hierarchical mesh graph to handle the challenges of capturing processes unfolding over different spatial scales and modeling the uncertainty in the chaotic syste...
Rebuttal 1: Rebuttal: We thank reviewer LjpL for useful comments. See our response below: 1. In Figure 3, it seems like the selected ensemble members vary a lot, and how close is your forecast to the ground truth? Note that Figure 3 shows the forecasts for 10 days in the future. At such lead times there is indeed a...
Summary: The paper proposes Graph-EFM, a method that combines a hierarchical multi-scale graph neural network with a variational objective for probabilistic weather forecasting. The method performs on par with Graphcast on deterministic metrics with the extra benefit of uncertainty estimation. Strengths: - The paper i...
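For reference, the standard (fair) ensemble CRPS estimator used to score probabilistic forecasts of this kind is shown below; this is the textbook formula, not code from the paper.

```python
# Fair CRPS for a finite ensemble: E|X - y| - (1/2) E|X - X'|.
import numpy as np

def crps_ensemble(members: np.ndarray, obs: float) -> float:
    """members: (M,) ensemble forecasts for one variable/location/lead time."""
    m = len(members)
    term1 = np.abs(members - obs).mean()
    term2 = np.abs(members[:, None] - members[None, :]).sum() / (2 * m * (m - 1))
    return term1 - term2

ens = np.random.default_rng(0).normal(loc=1.0, scale=0.5, size=16)
print(crps_ensemble(ens, obs=1.2))
```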
Rebuttal 1: Rebuttal: We thank reviewer i1dL for useful comments. See our response below: 1. The authors should replace Table 1 with a line graph figure instead, as it allows comparison across different variables and lead times. Given the limited space in the main paper we did not find a way to fit line plots for a...
Rebuttal 1: Rebuttal: We thank all reviewers for valuable comments and questions that we are sure will improve the overall quality of our paper. We have responded to the points raised by each reviewer separately, but also include this general rebuttal with a few points that we think could be relevant to all. ### Extr...
NeurIPS_2024_submissions_huggingface
2024
null
null
null
null
null
null
null
null
Enhancing In-Context Learning Performance with just SVD-Based Weight Pruning: A Theoretical Perspective
Accept (poster)
Summary: The authors provide a theoretical perspective on the stability of in-context learning via implicit gradient descent trajectories. Ultimately, the analysis suggests that weight matrices with high condition numbers in the deeper layers can be pruned in order to achieve a model which performs b...
Rebuttal 1: Rebuttal: We appreciate your response on finding our work thorough and informative. Below we address specific questions. > **W1:** It would be good to define deep and shallow, as these are subjective terms depending on the reference frame. > **A:** Indeed, there is no universally accepted definition for t...
Summary: This paper investigates the effect of singular value decomposition (SVD)-based weight pruning on the in-context learning (ICL) performance of large language models. The authors show that SVD-based pruning can enhance ICL performance, with deeper layers showing more stable improvements. They provide theoreti...
Rebuttal 1: Rebuttal: Thanks so much for your time and insightful comments. Please find our point-by-point response below. > **W1:** The theoretical analysis primarily focuses on linear attention, which may not fully capture the complexities of standard Softmax attention used in most transformer models > **A:** Firs...
Summary: This paper demonstrates that (1) SVD-based weight pruning can sometimes achieve better in-context learning performance, and (2) pruning weights in deeper layers often results in more stable outcomes compared to shallow layers. The authors explain their findings through theoretical analysis and propose an intui...
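The pruning operation under study reduces, in essence, to truncating the smallest singular values of a weight matrix and reconstructing it; a minimal sketch follows, where the clipping rate is the free hyper-parameter the paper searches over.

```python
# SVD-based weight pruning: keep the top singular values, reconstruct.
import torch

def svd_prune(weight: torch.Tensor, clip_rate: float) -> torch.Tensor:
    u, s, vh = torch.linalg.svd(weight, full_matrices=False)
    k = int(len(s) * (1 - clip_rate))            # number of singular values kept
    return u[:, :k] @ torch.diag(s[:k]) @ vh[:k, :]

w = torch.randn(512, 512)
w_pruned = svd_prune(w, clip_rate=0.3)
print(torch.linalg.matrix_rank(w).item(), torch.linalg.matrix_rank(w_pruned).item())
```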
Rebuttal 1: Rebuttal: Thank you for your review, and for your thoughtful comments and questions. They will certainly help improve the revised paper. Please see our response below. > **A to Weaknesses:** Thanks to Reviewer Zh6a for raising the issue, which gives us the opportunity to clarify this matter. > > (1) Our g...
Summary: This paper discusses the phenomenon that SVD-based weight pruning can increase the in-context learning abilities of transformer-based LLMs. In this paper, the authors conduct a theoretical analysis by presenting the implicit gradient descent trajectories of ICL and providing generalization bounds via full implicit...
Rebuttal 1: Rebuttal: Thank you for your detailed review and valuable comments. Below we address specific questions. > **W1:** More details of algorithms is not shared. e.g. the range / number of clipping rate candidates set. > **A:** Firstly, the details of the algorithm can be reviewed in the **code** provided. Spe...
Rebuttal 1: Rebuttal: We are grateful to all reviewers for their detailed and constructive feedback! We are encouraged to see that reviewers find: > - **Reviewer itEx**: It provides a detailed theoretical analysis on why SVD based weight pruning will improve ICL performance...... It provides the theoretical insight of...
NeurIPS_2024_submissions_huggingface
2024
null
null
null
null
null
null
null
null
Aligning Vision Models with Human Aesthetics in Retrieval: Benchmarks and Algorithms
Accept (poster)
Summary: This paper studies the problem of aligning vision models with human aesthetic standards in a retrieval system. There are three key parts in the proposed model including LLM rephrasing, re-ranking, and RL fine-tuning. Two novel benchmarks are also introduced to integrate aesthetic quality into evaluation metric...
Rebuttal 1: Rebuttal: ## W1: Cumbersome description of the method **R:** Thanks for your suggestion. We will find a way to further simplify the description of the method. The purpose of Fig. 2 is to illustrate the consistency of our approach and aesthetic concepts, and we will add more specific details to Fig. 3 that i...
Summary: The paper looks into the alignment task for vision and language models within retrieval models, where properties such as visual aesthetics come into play. To achieve this, the paper collects some data to design a metric suitable for taking human aesthetic evaluation into account, and employs an RL-based technique...
Rebuttal 1: Rebuttal: ## W: The design of the proposed metric **R:** Thanks. We proposed two metrics in the paper: HPIR weighted accuracy and win rate. 1. The construction of HPIR requires retrieving and filtering images according to the query, and then manually picking representative images to label. In order to exc...
Summary: This work aims to align vision models with human aesthetic standards in a retrieval system. To do this, the authors propose a preference-based reinforcement learning method that fine-tunes the vision models to distill the knowledge from both LLMs reasoning and the aesthetic models to better align the vision mo...
Rebuttal 1: Rebuttal: ## W1 & Q1: Human variance **R:** Thank you for the suggestion. We provide a metric 'confidence' for representing the robustness of the label. The confidence score means the degree of agreement among all annotators, rather than a value provided by labeler. It is calculated through Equation 10. Th...
Summary: This paper aligns the vision models with human values by leveraging LLM for query rephrasing and introducing preference-based reinforcement learning. The paper also presents a novel dataset named HPIR to benchmark the alignment with human aesthetics. Strengths: This paper introduces a novel approach to align ...
Rebuttal 1: Rebuttal: ## W1 & Q: Lack of user study **R:** In Table 2 on page 8 of the main paper, we have presented a user study (last two rows), where we let multiple human labelers judge the images retrieved from models w. and w/o. alignment (using the same queries). These labelers are expert search engine users. U...
null
NeurIPS_2024_submissions_huggingface
2024
null
null
null
null
null
null
null
null
Taming the Long Tail in Human Mobility Prediction
Accept (poster)
Summary: This paper addresses the challenge of predicting less frequently visited points-of-interest (POIs) in human mobility data, a problem known as the long-tail issue in spatial distribution. The authors introduce a new framework called Long-Tailed Adjusted POI Prediction (LoTNext), which includes two main componen...
Rebuttal 1: Rebuttal: # Response to Reviewer 7hGF: **Weaknesses:** > Q1. The presentation quality of this paper can be further enhanced. **A1:** Thank you for your suggestions. We will check and polish the entire paper to ensure the presentation can be more clearly. > Q2. The authors are encouraged to conduct experim...
Summary: The paper presents the Long-Tail Adjusted Next POI Prediction (LoTNext) framework to address the long-tail problem in next POI prediction. This problem refers to the uneven spatial and temporal distribution of POI visits, making it challenging for prediction models to predict less frequently visited POIs. LoTN...
Rebuttal 1: Rebuttal: # Response to Reviewer mZab: **Weaknesses:** > Q1. The proposed model is complex and involves multiple components and adjustments, but it is not clear how computationally expensive it would be to make predictions in services and elsewhere. **A1:** Thank you for your insightful questions. Table 3 ...
Summary: This paper introduces the LoTNext framework, which is designed to improve the prediction of human mobility patterns, specifically addressing the challenge of long-tail distribution in POI visitations. The authors propose a novel approach that includes a Long-Tailed Graph Adjustment module and a Long-Tailed Los...
Rebuttal 1: Rebuttal: # Response to Reviewer Et5T: **Weaknesses:** > Q1. The evaluation could be expanded to include a broader range of metrics to further validate the generalizability of the LoTNext framework. **A1:** Thank you for the valuable suggestions. We add Normalized Discounted Cumulative Gain (NDCG) as a new met...
Summary: This study proposes the Long-Tail Adjusted Next Point-of-Interest Prediction (LoTNext) framework. By combining a Long-Tailed Graph Adjustment module and a Long-Tailed Loss Adjustment module, it reduces the impact of long-tailed nodes in the user-POI interaction graph and adjusts loss through logit score and sa...
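To make the loss-adjustment idea above concrete, here is a minimal sketch of a generic logit-adjusted cross-entropy for long-tailed classification (in the style of Menon et al., 2021). It is only an illustration of shifting logits by log class priors; the exact LoTNext logit-score and sample-weighting terms are not reproduced, and `tau` is an assumed hyper-parameter.

```python
import torch
import torch.nn.functional as F

def logit_adjusted_loss(logits, targets, class_counts, tau=1.0):
    """Cross-entropy with logits shifted by log class priors.

    A generic long-tail adjustment, shown only as an illustrative
    sketch; the actual LoTNext loss may differ.
    """
    priors = class_counts / class_counts.sum()           # empirical class frequencies
    adjusted = logits + tau * torch.log(priors + 1e-12)  # penalizes head classes
    return F.cross_entropy(adjusted, targets)

# toy usage: 3 classes with a heavy head class
logits = torch.randn(8, 3)
targets = torch.randint(0, 3, (8,))
counts = torch.tensor([1000.0, 100.0, 10.0])
loss = logit_adjusted_loss(logits, targets, counts)
```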
Rebuttal 1: Rebuttal: # Response to Reviewer 7ppu: **Weaknesses:** > Q1. In the related work section, the authors review common methods for addressing the long-tail problem in recommendation systems. Since this paper focuses on addressing the long-tail problem, adding several baselines that tackle the long-tail issue i...
Rebuttal 1: Rebuttal: # Response to All Reviewers: We thank the reviewers for the very valuable, detailed, and constructive feedback on our work. We especially appreciate the positive words: * work is meaningful and valuable, a worthwhile issue to study (Reviewer #7ppu & #Et5T) * filling the gap in addressing the long-tail iss...
NeurIPS_2024_submissions_huggingface
2,024
null
null
null
null
null
null
null
null
Iterative Methods via Locally Evolving Set Process
Accept (poster)
Summary: This paper considers the study of local algorithms for graph clustering, which is an important problem in the field of graph data analysis. In particular, this paper considers the task of computing Personalized PageRank (PPR) vectors for a given graph. In this problem the algorithm is given a graph in the fo...
Rebuttal 1: Rebuttal: Thank you for taking the time and effort to review our paper carefully. We appreciate the positive perspective on our work. Your concern on the range of $\epsilon$ can be effectively addressed as follows: **Q1.** What is the core reason for not obtaining convergence results for accelerated method...
Summary: This paper uses the evolving set procedure to give a local PageRank algorithm whose dependence on $\alpha$ (the reset probability) is $\sqrt{\alpha}$. It proposes accelerated local iterative methods with coefficients given by Chebyshev iteration. The convergence of this algorithm in both graph theoretic and gener...
Rebuttal 1: Rebuttal: Thank you for taking the time and effort to review our paper carefully. Your positive perspective on our work is so inspiring. We also believe our work is novel and that some interesting new problems are worth exploring. Your main concerns and our responses are as follows: --- **Q1.** The gains only...
Summary: This paper considers the approximate personalized page rank. Classical results for this problem have a runtime that is linear in $1/\alpha\epsilon$ where $\alpha$ is the damping factor and $\epsilon$ is the error parameter. The authors show that APPR is simply a local variant of Gauss-Seidel Successive Overrel...
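For context on the local Gauss-Seidel view above, here is a minimal sketch of the classical APPR "push" method (Andersen-Chung-Lang style), whose total work scales like $1/(\alpha\epsilon)$ while touching only a local neighbourhood of the seed. Function names and the termination threshold are illustrative, not taken from the paper.

```python
from collections import defaultdict, deque

def appr_push(graph, seed, alpha=0.15, eps=1e-6):
    """Classic APPR push sketch. graph: dict node -> neighbour list."""
    p = defaultdict(float)               # approximate PPR vector
    r = defaultdict(float, {seed: 1.0})  # residual probability mass
    queue = deque([seed])
    while queue:
        u = queue.popleft()
        du = len(graph[u])
        if du == 0 or r[u] < eps * du:   # nothing (left) to push here
            continue
        mass = r[u]
        p[u] += alpha * mass             # keep an alpha-fraction at u
        r[u] = 0.0
        for v in graph[u]:               # spread the rest to neighbours
            r[v] += (1 - alpha) * mass / du
            if r[v] >= eps * len(graph[v]):
                queue.append(v)
    return dict(p)

# toy usage on a 4-cycle
graph = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [2, 0]}
ppr = appr_push(graph, seed=0, alpha=0.2, eps=1e-4)
```

The paper discussed above improves the $\alpha$-dependence of such local solvers to $\sqrt{\alpha}$ via acceleration; this sketch is only the unaccelerated baseline.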
Rebuttal 1: Rebuttal: Thank you for taking the time and effort to review our paper carefully. Your positive perspective on our work is inspiring. The main concern on the assumption we made is addressed as follows: **Q1.** Comments on the assumptions we made in analyzing the local Chebyshev method. **A:** (Ignore this...
null
null
Rebuttal 1: Rebuttal: **General Responses** We thank all reviewers for their time and effort in carefully reading our paper. We are very happy that you like our work. Some general concerns are worth discussing as follows: --- **Q1.** Comments on the assumption of the local Chebyshev (LocCH) method. **A:** Let us re...
NeurIPS_2024_submissions_huggingface
2,024
null
null
null
null
null
null
null
null
A Theoretical Perspective for Speculative Decoding Algorithm
Accept (poster)
Summary: This paper presents a theoretical study on speculative decoding, an efficient inference method for large autoregressive models. It highlights practical implications, proposing a Pareto-optimal solution for the rejection-distribution bias tradeoff. Strengths: - The authors provide a robust theoretical foundati...
Rebuttal 1: Rebuttal: We appreciate the reviewer for the positive judgement and the great questions. Here are the detailed responses. ***Q1:*** The main figure does not clearly communicate the core concept of speculative decoding. It might lead readers to believe that speculative decoding primarily addresses hallucina...
Summary: The paper presents a theoretical perspective on speculative sampling. Through Theorems 1 and 2, the authors demonstrate that the sampling method employed by speculative sampling is optimal and unbiased. Subsequently, Theorem 3 introduces a multi-candidate approach to enhance the acceptance rate of speculative ...
Rebuttal 1: Rebuttal: We appreciate the reviewer for the positive comments and for the precise understanding of the theoretical contribution of our paper! Here are the detailed responses. ***Q1:*** I would like to see improvements in batch speculative sampling in real-world scenarios. Response: Thank you for the ques...
Summary: The authors aim to develop a theoretical understanding of speculative decoding. The authors assume that given a large and small model participating in speculative decoding, the computation complexity of the small model is negligible. Under this assumption, they characterize the expected rejection rate of speculat...
Rebuttal 1: Rebuttal: We appreciate the reviewer for the positive judgement and the detailed feedback. Here are the detailed responses. ***Q1:*** It is not completely clear why making the assumption about negligible compute of the small model is not a strong assumption. Since the small model needs to generate the to...
Summary: This paper provides detailed analysis to speculative decoding and batch speculative decoding. The conclusions of the paper are: (1) speculative decoding is unbiased and it shows the expected rejection rate; (2) speculative decoding has the lowest rejection rate in all the unbiased algorithm that belongs to the...
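As a reference point for the unbiasedness results summarized above, here is a minimal sketch of the textbook speculative-sampling accept/reject rule for a single token. The distributions `p` (target model) and `q` (draft model) are assumed given; this is the standard rule, not the paper's batch variants.

```python
import numpy as np

def speculative_step(p, q, rng):
    """One token of speculative sampling (sketch).

    Accepts the draft token x ~ q with probability min(1, p[x]/q[x]);
    on rejection, resamples from normalize(max(p - q, 0)). The output
    is exactly distributed as p, which is the unbiasedness property.
    """
    x = rng.choice(len(q), p=q)                   # draft proposal
    if rng.random() < min(1.0, p[x] / q[x]):
        return x                                  # accepted
    residual = np.maximum(p - q, 0.0)
    residual /= residual.sum()
    return rng.choice(len(p), p=residual)         # rejected: resample

rng = np.random.default_rng(0)
p = np.array([0.5, 0.3, 0.2])
q = np.array([0.2, 0.4, 0.4])
samples = [speculative_step(p, q, rng) for _ in range(5)]
```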
Rebuttal 1: Rebuttal: We thank the reviewer for providing detailed feedback. We have read your comments carefully and below are our detailed responses. **Q1:** ---- Although ... Theorem 1 derived in the original speculative decoding paper. For Theorem 2, is there any algorithm that belongs to Algorithm 2 and is unbias...
null
NeurIPS_2024_submissions_huggingface
2,024
null
null
null
null
null
null
null
null
Pretraining Codomain Attention Neural Operators for Solving Multiphysics PDEs
Accept (poster)
Summary: The authors introduced an innovative attention-based neural operator and evaluated it against various baselines. They employed masked pretraining and finetuning techniques, comparing the model's performance to multiple benchmarks. Their study included interesting problems such as fluid-structure interactions. ...
Rebuttal 1: Rebuttal: We appreciate that the reviewer values our work and recognizes its novelty and the mathematical formulation of the proposed model. ``` Q1. Evaluation on additional PDE datasets ``` In addition to the coupled fluid-solid interaction problem, we also provide experiments on different PDE systems (ple...
Summary: This paper presents a new operator learning method for solving multiphysics PDEs. The attention scheme is designed on channel space to capture multiple physical variables, which is called co-domain. Moreover, positional encoding and normalization layers are considered. Such a strategy enables self-supervised p...
Rebuttal 1: Rebuttal: We appreciate that the reviewer values our work and recognizes the fact that this work is of interest to the scientific machine learning (SciML) community, presents a strategy that enables self-supervised pre-training of PDE systems, and states the importance of our experiments that have shown the...
Summary: This paper introduces Codomain Attention Neural Operator, which tokenizes function along the channel dimension. It allows to learn representations of different PDE systems within a single model. The authors shows that finetuning a pretrained CoDA-NO on different physics yields good accuracy. Strengths: - I li...
Rebuttal 1: Rebuttal: We appreciate that the reviewer values our work and recognizes that this work presented a method for learning representations of different PDE systems within a single model. ``` Q1. Formulation of the Model in the Function Space and Clarity of the Paper ``` We politely disagree with the reviewer....
Summary: The authors propose CoDA-NO, a neural operator architecture that captures interactions across different physical variables of coupled PDE systems. The method involves a generalization of the transformer architecture, including self-attention, positional encodings, and normalization, to function spaces. On two ...
Rebuttal 1: Rebuttal: We appreciate that the reviewer values our work and recognizes the main contributions to the novel generalization of the transformer architecture, including self-attention, positional encodings, and normalization to function space, along with the introduction of two new challenging datasets. We fu...
Rebuttal 1: Rebuttal: We thank the reviewers for their valuable comments. We appreciate reviewers for recognizing the presentation of a novel neural operator architecture that “captures interactions across different physical variables of coupled PDE systems and the fact that the method involves a generalization of the ...
NeurIPS_2024_submissions_huggingface
2,024
null
null
null
null
null
null
null
null
Neural Collapse Inspired Feature Alignment for Out-of-Distribution Generalization
Accept (poster)
Summary: This paper addresses the problem of spurious correlations caused by environments from where data are collected. The proposed method applies a mask to input data to separate spurious and semantic features. The masked input data are fed into a local model specialized to each environment. Each local model is trai...
Rebuttal 1: Rebuttal: ****Respond to W1:**** Thanks for the comments. We would like to explain as follows: The OOD methods encompass techniques for addressing spurious correlations. Our comparison methods are comprehensive, covering both approaches that handle spurious correlations, such as IRM, VREx, GroupDRO, and ot...
Summary: The paper leverages the neural collapse inspired ETF behavior to simulate different environments in datasets, and uses it for OOD classification. Strengths: The paper uses a phenomenon that's apparent in the standard setting, for a task that varies from the standard setting. It uses intuitive notions to tackl...
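For readers unfamiliar with the ETF geometry referenced above, the sketch below constructs simplex equiangular tight frame (ETF) prototypes, the fixed classifier geometry that neural collapse converges to. This is the standard construction only, not the paper's full environment-simulation method.

```python
import numpy as np

def simplex_etf(num_classes, dim, seed=0):
    """Simplex ETF prototypes: a (dim, K) matrix whose columns have
    unit norm and pairwise inner products of -1/(K-1)."""
    K = num_classes
    rng = np.random.default_rng(seed)
    U, _ = np.linalg.qr(rng.standard_normal((dim, K)))  # U^T U = I_K
    M = np.sqrt(K / (K - 1)) * U @ (np.eye(K) - np.ones((K, K)) / K)
    return M

M = simplex_etf(num_classes=4, dim=16)
G = M.T @ M   # diagonal ~1, off-diagonal ~ -1/3 for K = 4
```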
Rebuttal 1: Rebuttal: **Respond to Q1:** Thank you for the reviewer's suggestions. We will include citations to these papers in a subsequent version. --- Rebuttal 2: Title: We would like to supplement detailed discussion on neural collapse literature and more convincing experiments. Comment: We sincerely appreciate t...
Summary: The spurious correlation between image background features and their labels is a significant research problem, and the existing research suffers from the issue of difficult decoupling. In this paper, we propose a new approach to solve the spurious association problem by alternately performing environment segme...
Rebuttal 1: Rebuttal: ****Respond to W1:**** Thank you for pointing out the issue. We indeed omitted the comparisons in our manuscript. We have actually demonstrated this in Figure 1, where we used the F-norm to measure the degree of alignment. A smaller F-norm indicates that, after training, the feature prototypes are...
null
null
null
NeurIPS_2024_submissions_huggingface
2,024
null
null
null
null
null
null
null
null
Batched Energy-Entropy acquisition for Bayesian Optimization
Accept (poster)
Summary: This paper introduces a new acquisition function, BEEBO, for batched BO. BEEBO tries to build a (negative) free-energy-like acquisition function, enabling gradient-based optimization, tight exploration-exploitation control, and risk-averse BO under heteroskedastic noise. It tries to improve existing parallel acq...
Rebuttal 1: Rebuttal: Thank you for your comments and questions. We respond to them individually below, and are looking forward to further discussion. > 1. The idea is straightforward [..] We are convinced that BEEBO is a novel acquisition function. In section B.2, we extensively compare BEEBO to the, to our knowledge...
Summary: Proposing a new acquisition function inspired by statistical physics, which allows explicit control of exploration-exploitation trade-offs in a batch BO setting. Strengths: Drawing inspiration from statistical physics is a promising direction, as it naturally aligns with Bayesian approaches. Weaknesses: **Ma...
Rebuttal 1: Rebuttal: We thank the reviewer for their detailed comments. We have answered them individually below, and are happy to discuss further. >**Lack of Unique Selling Point** We would like to point out that as mentioned in line 275, supplementary section D.1 provides a comprehensive benchmark beyond q-UCB, inc...
Summary: The paper introduces a new approach to batch Bayesian optimization that explicitly trades off between exploration and exploitation via energy and entropy terms. The method is able to efficiently generate large batches for evaluation that outperform comparable methods for Bayesian optimization. Strengths: The ...
Rebuttal 1: Rebuttal: We thank the reviewer for their encouraging comments! Having a compelling experimental evaluation is of key importance to us, and we have performed the requested experiments to further demonstrate BEEBO's performance. > **Issue 1: Evaluation limited to large-batch setting** Thank you for the hel...
Summary: This work introduces a batched acquisition function that balances exploration and exploitation by using a weighted sum of mutual information and expected value, with the weights defining the trade-off. The discussion links the proposed algorithm to UCB and asserts that it naturally addresses heteroskedastic no...
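To illustrate the general shape of such a free-energy trade-off, here is a sketch of a batch score under an assumed GP posterior: a sum of posterior means (the "energy" side) plus a temperature-weighted joint Gaussian entropy (the "exploration" side). This is only an illustration of the weighted trade-off discussed above, not the exact BEEBO definition.

```python
import numpy as np

def energy_entropy_acquisition(mu, cov, temperature=1.0):
    """Free-energy-style batch acquisition (illustrative sketch).

    mu:  (q,) posterior means at the q batch points.
    cov: (q, q) posterior covariance at the batch points.
    """
    q = len(mu)
    _, logdet = np.linalg.slogdet(cov + 1e-9 * np.eye(q))
    entropy = 0.5 * (q * np.log(2 * np.pi * np.e) + logdet)
    return mu.sum() + temperature * entropy

# toy usage with a hypothetical GP posterior over a 3-point batch
mu = np.array([0.2, 0.5, 0.1])
cov = np.array([[1.0, 0.3, 0.1], [0.3, 1.0, 0.2], [0.1, 0.2, 1.0]])
score = energy_entropy_acquisition(mu, cov, temperature=0.5)
```

A large temperature rewards batches whose joint posterior is uncertain (exploration); a small one rewards high predicted values (exploitation), which matches the explicit trade-off control described in the summaries above.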
Rebuttal 1: Rebuttal: Thank you for your comments! We have incorporated the feedback in the updated manuscript, and look forward to further discussion. > The introduced parameter controlling the trade-off lacks interpretation as in previous methods. We fully agree that an entropy quantity is in principle harder to in...
Rebuttal 1: Rebuttal: We thank all reviewers for their helpful feedback! We are excited to hear that they find BEEBO - *Helpful and practical* (mXVT) - *Novel and with strong context* (x4Ge) - *Effective with heteroskedastic noise* (CMG8, x4Ge) - Has a *promising Statistical physics motivation* (4oKm, CMG8) We have r...
NeurIPS_2024_submissions_huggingface
2,024
null
null
null
null
null
null
null
null
Space-Time Continuous PDE Forecasting using Equivariant Neural Fields
Accept (poster)
Summary: The paper presents a novel framework for solving Partial Differential Equations (PDEs) by leveraging the power of Equivariant Neural Fields (ENFs). The authors propose a space-time continuous approach utilizing the symmetry of the PDEs, which is crucial for improving generalization and data-efficiency. The fra...
Rebuttal 1: Rebuttal: We thank the reviewer for their thorough assessment, and for the valuable criticism of our work. We address each of the reviewers' concerns in detail here, and hope to continue the discussion if any details remain unclear. **Error accumulation** The reviewer is concerned about error accumulation ...
Summary: This work proposes a space-time continuous method for solving PDEs that respects the inherent symmetries of the PDE via equivariance constraints. Building upon prior work which (a) fits a conditional neural field to output latent vectors and (b) evolves the latent state through time via a Neural ODE, the contr...
Rebuttal 1: Rebuttal: We thank the reviewer for their assessment of our work, and appreciate their recognition of the benefits of incorporating the symmetry constraints in PDE solving. We elaborate on the raised concerns below. **Computational cost**. We provide a comparison of the computational cost of our method to al...
Summary: The paper attempts to learn the dynamics of certain PDEs from time series data using implicit neural representations, while encoding symmetry information of the domain. In fact, constructing a neural model that is aware of Euclidean transformations is the primary focus of this paper. To this end, the authors d...
Rebuttal 1: Rebuttal: We thank the reviewer for their very thorough treatment of our manuscript, and appreciate the recognition of the importance of encoding symmetry information in neural PDE solvers. We also acknowledge the reviewer's constructive feedback and address their concerns in detail below. **How will this ...
Summary: The work proposes a novel framework combining equivariant neural fields and neural ODEs, providing a continuous space-time solution for PDEs while respecting associated equivariance constraints. The author uses PDE-specific bi-invariant attributes for equivariant neural fields and a meta-learning approach for ...
Rebuttal 1: Rebuttal: We thank reviewer kxDM for their effort in reviewing our work. We’re glad to see that the reviewer agrees on the value of adding equivariance constraints to neural PDE solving. **Improving comparative analysis** The reviewer raised concerns about comparison with a wider set of baselines. To this ...
Rebuttal 1: Rebuttal: We thank the reviewers for their thorough investigation of our work, and for investing the time to write out valuable criticism. We’re happy to see that reviewers regard our equivariant space-time continuous PDE solving method as valuable and effective for scientific research applications. Additio...
NeurIPS_2024_submissions_huggingface
2,024
Summary: The paper introduces a novel framework that leverages Equivariant Neural Fields (ENFs) to solve Partial Differential Equations (PDEs). By preserving geometric information in the latent space, the proposed method respects the known symmetries of the PDE, enhancing generalization and data efficiency. The framewo...
Rebuttal 1: Rebuttal: We thank the reviewer for their thorough assessment of our work, and we’re happy to see that the reviewer deems our approach innovative and appreciates the experimental validation we provide. We thank the reviewer for highlighting a number of important considerations with regards to our proposed a...
null
null
null
null
null
null
CRAYM: Neural Field Optimization via Camera RAY Matching
Accept (poster)
Summary: The manuscript #3263 entitled "CRAYM: Neural Field Optimization via Camera RAY Matching" proposes a novel uncalibrated NeRF strategy based on prior keypoints matching across images. Specifically, the authors propose two novelties to improve the quality of the reconstruction and the pose estimation of the camer...
Rebuttal 1: Rebuttal: Thank you for the insightful comments and suggestions, especially the series of detailed questions, which we will all answer below. We hope the answers will address your concerns. Q: The robustness of the approach against outlier matches is not evaluated. Introducing artificial outliers (wrongly ...
Summary: This paper presents a new technique called camera ray matching, which is integrated into the joint optimization of camera poses and a neural field. The method utilizes an uncalibrated set of images as input, incorporating photometric and geometric constraints through key points and key rays matching, with the ...
Rebuttal 1: Rebuttal: Thank you for the careful review and insightful questions. The reviewer's various suggestions are right on and we hope the rebuttal will alleviate the concerns raised. Q: Additional results on accurate regression of camera poses? A: Please refer to Table 9 in the supplemental material. It shows...
Summary: This work suggests a novel neural representation and training scheme that jointly solves for the scene representation and the multi-view camera localization. It is done using several new ideas that generalize existing NeRF based methods. The representation itself is a combination of a geometry-network, which ...
Rebuttal 1: Rebuttal: Thank you for your care and insights in the reviews and the encouraging remarks! Q: Reproducibility A: All the implementation details mentioned along with other useful information will be added to the supplemental material in the revision. The source code and any data used will surely be release...
Summary: This paper introduces Camera Ray Matching for optimizing camera poses and neural fields from multi-view images. The optimized feature volume supports novel view synthesis and 3D geometry reconstruction by probing camera rays, which carry both geometric and photometric information. CRAYM claims to improve effi...
Rebuttal 1: Rebuttal: We appreciate the suggestions in the review and hope the rebuttal will help address the concerns raised. Q: Experiments were only conducted on NeRF-synthetic datasets and not on LLFF datasets. Also add Neural Image Alignment to enhance the evaluation. A: We have evaluated our method on the LLFF ...
Rebuttal 1: Rebuttal: We want to thank all the reviewers for their comprehensive reviews of our paper. The insightful questions and various suggestions for additional experiments and clarifications will surely strengthen this work. Here, let us first start with some quick remarks on common reviewer comments. The indiv...
NeurIPS_2024_submissions_huggingface
2,024
null
null
null
null
null
null
null
null
Sharpness-diversity tradeoff: improving flat ensembles with SharpBalance
Accept (poster)
Summary: This paper introduces a training approach for ensemble learning called SharpBalance to balance sharpness and diversity within ensembles. This paper shows theoretically that SharpBalance achieves a better sharpness-diversity trade-off. Strengths: 1. Ensemble learning is an important research direction...
Rebuttal 1: Rebuttal: ## Weakness 1 We conducted an additional experiment to verify the effectiveness of the proposed method on small datasets, with results shown in Table 9 of the rebuttal PDF. The small datasets were generated by randomly subsampling the training set with ratios of 0.3 and 0.5. The experiments used a...
Summary: This paper investigates the sharpness and diversity within deep ensembles. Specifically, it identifies the trade-off phenomenon between sharpness and diversity with both theoretical and empirical evidence. Additionally, it proposes a method called SharpBalance, which trains individuals using selective 'sharp' ...
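For reference, the sharpness-aware ingredient that SharpBalance applies selectively looks roughly like a vanilla SAM step; the PyTorch sketch below shows one such step (the selection rule over 'sharp' examples is not reproduced here).

```python
import torch

def sam_step(model, loss_fn, inputs, targets, optimizer, rho=0.05):
    """One vanilla sharpness-aware minimization (SAM) step (sketch)."""
    # 1) ascent: move to the approximate worst-case point in an L2 ball
    loss_fn(model(inputs), targets).backward()
    grads = [p.grad for p in model.parameters() if p.grad is not None]
    grad_norm = torch.norm(torch.stack([g.norm() for g in grads]))
    perturbations = []
    with torch.no_grad():
        for p in model.parameters():
            if p.grad is None:
                continue
            e = rho * p.grad / (grad_norm + 1e-12)
            p.add_(e)
            perturbations.append((p, e))
    optimizer.zero_grad()
    # 2) descent: gradient at the perturbed point, applied to the
    #    original weights after undoing the perturbation
    loss_fn(model(inputs), targets).backward()
    with torch.no_grad():
        for p, e in perturbations:
            p.sub_(e)
    optimizer.step()
    optimizer.zero_grad()
```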
Rebuttal 1: Rebuttal: ## Weakness 1 In addition to the main experiments, we compared SharpBalance with ensemble baselines, including those in the appendix and new experiments in the rebuttal PDF. 1. **Ensemble with models trained with different hyperparameters.** In Appendix F.4, we compared with the "SAM+" baseline, w...
Summary: The paper proposes SharpBalance, a method aiming to investigate the relationship between sharpness and diversity for deep ensembles. Strengths: - SharpBalance looks quite effective for the out-of-distribution setting. The goal of balancing sharpness and diversity within ensembles is an important idea....
Rebuttal 1: Rebuttal: ## Weakness1 and Question We present three key distinctions between DASH in [1] and SharpBalance. First, SharpBalance offers a comprehensive identification and rigorous analysis of the sharpness-diversity trade-off phenomenon. Second, our novel theoretical approach using random matrix theory provi...
Summary: Ensemble methods and sharpness-aware optimization techniques are well-known strategies for improving generalization. This work identifies a trade-off between sharpness and diversity, observing that reducing sharpness can diminish diversity and harm ensemble performance. Through theoretical and empirical analys...
Rebuttal 1: Rebuttal: ## Weakness 1 and 2 In Figure 14 of the rebuttal PDF, we present the results for negative log-likelihood and expected calibration error. These uncertainty metrics exhibit trends similar to the accuracy metrics reported in the main paper: "Deep Ensemble + SAM" outperforms "Deep Ensemble", and our m...
Rebuttal 1: Rebuttal: We want to thank all the reviewers for the constructive feedback, which helps us improve our paper. Please refer to the attached PDF for our new experiments and see below for our responses to each comment. Pdf: /pdf/135158339f771e13c5c8f839b1ede0b6ea5dfa6b.pdf
NeurIPS_2024_submissions_huggingface
2,024
null
null
null
null
null
null
null
null
S-MolSearch: 3D Semi-supervised Contrastive Learning for Bioactive Molecule Search
Accept (poster)
Summary: The paper introduces S-MolSearch, a framework for ligand-based virtual screening in drug discovery that addresses the challenges of limited and noisy binding affinity data. By utilizing molecular 3D information and semi-supervised contrastive learning, S-MolSearch processes both labeled and unlabeled data to t...
Rebuttal 1: Rebuttal: We appreciate the thoughtful questions and feedback provided by you. We have carefully considered your queries and provide detailed responses below. **Consideration of Screening Time** We have conducted additional experiments to measure the screening time of S-MolSearch compared to traditional m...
Summary: The paper introduces "S-MolSearch," a semi-supervised contrastive learning framework designed for ligand-based virtual screening in drug discovery. This framework uniquely leverages labeled binding affinity information to produce soft labels for unlabeled molecules, integrating 3D molecular structures and bind...
Rebuttal 1: Rebuttal: We appreciate your thorough review and constructive feedback. Below, we address each of your comments and questions, aiming to clarify and enhance the understanding of our work. **Memory Consumption Concerns:** We measure memory consumption under different scenarios, as shown in the table below....
Summary: This paper proposes a ligand-based virtual screening method, S-MolSearch, which leverages molecular 3D information and affinity information in semi-supervised contrastive learning. Strengths: 1. The method is able to leverage both labeled and unlabeled data simultaneously and achieves excellent performanc...
Rebuttal 1: Rebuttal: Thank you very much for supporting our work and careful review! We have considered each of your questions, and we provide detailed responses below. **Inference Process with Encoders** During inference, only the encoder $g_{\psi}$ is used to generate the molecular embeddings. The encoder $f_{\theta}$...
Summary: The paper introduces a new method for ligand-based virtual screening based on contrastive learning and inverse optimal transport. Two molecule encoders are trained. The first encoder is trained using a contrastive loss function on the ChEMBL data by pairing compounds that are active toward the same protein, an...
Rebuttal 1: Rebuttal: Thank you very much for supporting our work and careful review! We appreciate the detailed review and constructive feedback. We have addressed each of your comments and questions below, aiming to clarify and enhance the understanding of our work. **Qualitative Examples and Similarities** We agre...
Rebuttal 1: Rebuttal: We sincerely appreciate the valuable feedback and insightful comments provided by each of you. Your input has been instrumental in refining our work and enhancing the clarity and depth of our manuscript. In response to your suggestions, we prepare an additional PDF document. It includes qualitati...
NeurIPS_2024_submissions_huggingface
2,024
null
null
null
null
null
null
null
null
Bandits with Preference Feedback: A Stackelberg Game Perspective
Accept (poster)
Summary: This paper considers bandits with preference feedback. It first constructs a novel confidence set that covers the ground truth with high probability. Then from a Stackelberg game perspective, it proposes an efficient algorithm that enjoys tighter regret bound than SOTA. Strengths: 1. The technique used to con...
Rebuttal 1: Rebuttal: Thank you for your time and the points you raised. We addressed them in the paper and we believe it will help the paper to reach a broader community. Follows our response to your questions. > The major concern is the practical applicability of the algorithm. Seems that the proposed algorithm can ...
Summary: This paper considers novel game-theoretic acquisition function for pairwise action selection with preference feedback. It is tailored to the setting with infinite domains and nonlinear kernelized rewards. The preference-based confidence sequences for kernelized utility functions are shown to be tight and anyti...
Rebuttal 1: Rebuttal: Thank you for your time and comments. We are glad that you find our contributions to be timely and strong. We have reflected your suggestions in the paper, making it clearer for future readers. > In practice, how to determine the hyper-parameters like $\gamma_t$, $L$, and $B$ in (5)? Is there any data...
Summary: The paper examines the problem of bandit optimization with preference feedback in large domains and nonlinear (kernelized) rewards. It introduces MAXMINLCB, which adopts a game-theoretic approach to action selection under comparative feedback. Additionally, it proposes kernelized preference-based confidence se...
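On a finite candidate grid, the game-theoretic selection described above can be sketched as a max-min over lower confidence bounds on pairwise win probabilities. The paper handles continuous domains and kernelized utilities; this toy version only illustrates the Stackelberg (leader-follower) structure.

```python
import numpy as np

def maxmin_duel(lcb):
    """Max-min duel selection (finite-grid sketch).

    lcb[i, j]: lower confidence bound on P(action i beats action j).
    The leader picks the action with the best worst-case bound; the
    follower is the minimizing (adversarial) response.
    """
    worst_case = lcb.min(axis=1)          # pessimistic value per action
    i = int(worst_case.argmax())          # max-min leader
    j = int(lcb[i].argmin())              # adversarial follower
    return i, j

lcb = np.array([[0.5, 0.4, 0.6],
                [0.6, 0.5, 0.3],
                [0.5, 0.7, 0.5]])
pair = maxmin_duel(lcb)  # the pair of actions to duel next
```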
Rebuttal 1: Rebuttal: Thank you for your time and the points you raised. We addressed them in the paper and we believe it will help the paper to reach a broader community. In light of these updates, we would appreciate it if you could reconsider your assessment of the scope and relevance of the paper. > Although the paper uses ...
Summary: This paper considers bandit optimization with preference feedback over continuous action spaces and kernelized reward function. The goal in this problem is to minimize the dueling regret against an optimal action over a finite time-horizon. Previous works on this problem are either restricted to finite action ...
Rebuttal 1: Rebuttal: Thank you for the valuable review and constructive suggestions. Please find below our comments on the raised questions. > Experimental evaluation can include other algorithms that are known to perform better than RUCB such as RMED (Komiyama et al., 2015) and Double Thompson Sampling (Wu and Liu, ...
Rebuttal 1: Rebuttal: We thank all reviewers and chairs for their work reviewing our paper. We are delighted to receive high-quality constructive feedback highlighting that our contributions are “clear” and our “ideas are likely to be relevant to other learning problems with preference feedback such as RLHF”. We are c...
NeurIPS_2024_submissions_huggingface
2,024
null
null
null
null
null
null
null
null
FasMe: Fast and Sample-efficient Meta Estimator for Precision Matrix Learning in Small Sample Settings
Accept (poster)
Summary: This paper proposes a meta-learning method for estimating the precision matrix on a new task with small data. The proposed method uses common edges estimated from multiple auxiliary datasets as meta-knowledge. Then, it estimates the precision matrix on the new task, assuming its true edges contain all the esti...
Rebuttal 1: Rebuttal: ### Response to Weakness Thank you for your detailed review and for highlighting this crucial aspect of our research. As mentioned in Line 149-150: "The assumption has been widely adopted [28, 18, 30] and has proven feasible and applicable in the biological and genetic domains." We have validated ...
Summary: This paper introduces FasMe, a meta-learning approach for efficient precision matrix estimation in small sample settings. By leveraging meta-knowledge and maximum determinant matrix completion, FasMe reduces sample size requirements and improves computational efficiency. Experimental results show FasMe to be s...
Rebuttal 1: Rebuttal: ### Response to Question Thank you for your positive feedback and insightful question. Yes, our work is primarily applied to biological scenarios. For one thing, high-dimensional, small-sample settings are more common in biological research, as exemplified by our case study on Cholangiocarcinom...
Summary: The authors propose a method to estimate sparse precision matrices from few samples. Theoretical properties of the proposed method are studied, and experiments on synthetic and brain fMRI data are presented. Strengths: Strengths: * The paper is overall well written, and fairly easy to follow and comprehend. *...
Rebuttal 1: Rebuttal: We appreciate your kind feedback and perceptive questions. ### 1. Response to Weakness In addition to the synthetic datasets, we have conducted extensive experiments on real-world datasets. Specifically, we used the ChIP-Seq dataset from the ENCODE project and the fMRI dataset from the OpenfMRI ...
null
null
Rebuttal 1: Rebuttal: ## References [1] Mitra R, Müller P, Liang S, et al. A Bayesian graphical model for chip-seq data on histone modifications[J]. Journal of the American Statistical Association, 2013, 108(501): 69-80. [2] Lundberg S M, Tu W B, Raught B, et al. Learning the human chromatin network from all ENCODE Ch...
NeurIPS_2024_submissions_huggingface
2,024
null
null
null
null
null
null
null
null
Trained Models Tell Us How to Make Them Robust to Spurious Correlation without Group Annotation
Reject
Summary: This paper addresses the problem of subpopulation generalization, also known as spurious correlations. Building on the Last Layer Retraining (DFR) method, it removes the constraints on a small subset of annotations. The paper introduces the Environment-based Validation and Loss-based Sampling (EVaLS) method. U...
Rebuttal 1: Rebuttal: Thank you for highlighting the relevant works and for your insightful requests regarding the method’s sensitivity and the compatibility of its theoretical and practical aspects. # Weakness 1 Please refer to the general response in the Author Rebuttal for a review of the contributions of this work....
Summary: To address the issue of spurious correlations when group labels are unavailable, this paper proposes a new method called EVaLS. It first creates a balanced training dataset using loss-based sampling. Then, it evaluates the accuracy of the balanced training set based on the inferred environments from the valida...
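A minimal sketch of the loss-based sampling plus last-layer retraining pipeline described above: high-loss examples from an ERM model act as a proxy for minority groups and low-loss examples for majority groups, so taking `k` of each per class yields an approximately group-balanced set without annotations. The exact EVaLS rule and its environment-based validation are not reproduced; `k` and the per-class split are illustrative assumptions.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def loss_balanced_last_layer(features, labels, losses, k, seed=0):
    """Retrain a linear head on a loss-balanced subset (sketch)."""
    idx = []
    for c in np.unique(labels):
        cls = np.where(labels == c)[0]
        order = cls[np.argsort(losses[cls])]
        idx.extend(order[:k])    # k lowest-loss (majority proxy)
        idx.extend(order[-k:])   # k highest-loss (minority proxy)
    idx = np.array(idx)
    clf = LogisticRegression(max_iter=1000, random_state=seed)
    clf.fit(features[idx], labels[idx])  # only the last layer is refit
    return clf

# toy usage with random features and per-example ERM losses
rng = np.random.default_rng(0)
feats = rng.normal(size=(200, 5))
labels = rng.integers(0, 2, size=200)
losses = rng.random(200)
clf = loss_balanced_last_layer(feats, labels, losses, k=20)
```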
Rebuttal 1: Rebuttal: Thank you for your constructive and insightful feedback. Our response to the mentioned weaknesses is as follows: # Weakness 1 1. We must emphasize that the optimal number of selected high/low loss samples for retraining the last layer is chosen from a set of various values, based on the worst va...
Summary: The paper studies how to improve the model’s robustness to multiple spurious correlations when the group labels (indicator for spurious correlation) are unknown in general. The proposed approach, EVaLS, leverages the loss from a base ERM model to sample a balanced subset to prevent learning from spurious corre...
Rebuttal 1: Rebuttal: Thank you for your detailed reviews and insightful questions. # Weakness 1 Figure 2 in the paper **does not** show the proportion of minority (majority) samples that have high (low) loss. Instead, it depicts the proportion of minority/majority samples among the top x% of samples with the highest/l...
null
null
Rebuttal 1: Rebuttal: We are thankful for the time and consideration that reviewers have dedicated to reviewing our work. Our work is in continuation of numerous efforts towards annotation-free group robustness (see Appendix A, L538-L549). The main contributions are as follows: 1. **Environment-based validation drops ...
NeurIPS_2024_submissions_huggingface
2,024
null
null
null
null
null
null
null
null
Algorithmic Collective Action in Recommender Systems: Promoting Songs by Reordering Playlists
Accept (poster)
Summary: This paper explores the possibility of boosting the recommendation of a song in an automatic playlist continuation system by using a collective strategy of adding the song into the training playlists of the APC at a specific position. The paper shows that adopting a strategy that targets low frequency context ...
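One way to picture the low-frequency-context strategy described above: each participating user inserts the promoted song immediately after a rare "anchor" song, so the target dominates the continuations of that rare context. The sketch below is an illustrative guess at such a placement rule, not the paper's exact strategy.

```python
from collections import Counter

def place_after_rarest(playlists, target):
    """Insert `target` after the globally rarest song in each playlist
    (sketch of a low-frequency-context placement rule)."""
    counts = Counter(s for pl in playlists for s in pl)
    out = []
    for pl in playlists:
        anchor = min(pl, key=lambda s: counts[s])   # rarest context song
        i = pl.index(anchor) + 1
        out.append(pl[:i] + [target] + pl[i:])
    return out

playlists = [["a", "b", "c"], ["b", "c", "d"], ["c", "d", "a"]]
promoted = place_after_rarest(playlists, target="NEW")
```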
Rebuttal 1: Rebuttal: Thank you for the careful reading and the detailed feedback. We have performed the additional investigations in response to your suggestions. Let us elaborate on the individual comments below. **Other baselines.** We have implemented baselines where the song is placed in earlier positions in the ...
Summary: They propose a strategy for streaming-platform users to act collectively in order to promote targeted songs. The promotion efficacy is measured by the targeted songs' recommendation frequency boost at testing time. This strategy is shown to be effective through simulation experiments. Another finding i...
Rebuttal 1: Rebuttal: Thank you for your feedback. Let us elaborate on your comments below. **Intuition for the proposed strategies.** The design of our strategies relies on the idealized assumption that transformer-based models perform next song prediction by learning to model the conditional probability of songs, gi...
Summary: The paper shows that strategic collective action by a small fraction of the population can lead to significant amplification of a particular song in a recommender system. The authors propose two strategies for the collective (for a transformer-based song recommender) that achieve this amplification. Strength...
Rebuttal 1: Rebuttal: Thank you for the feedback, we will incorporate your suggestions and adjust the notation. In response to your comment, we also decided to add pseudo code to make the strategies more clear. It can be found in the supplementary PDF. In the following let us elaborate on your questions: **Our strate...
Summary: This research work proposes a novel solution to promote songs on music streaming platforms strategically, under the following assumptions: 1. Fans can collaborate to promote a specific song by collectively reordering playlists. 2. The visibility of a song in a playlist affects its recommendation frequency. ...
Rebuttal 1: Rebuttal: Thank you for your feedback and the positive assessment of our work. To respond to your question we relate our work to Bendada et al. (2023). The authors describe a transformer-based recommender system for automatic playlist continuation (APC) that Deezer has deployed in production. The model i...
Rebuttal 1: Rebuttal: We would like to thank all reviewers for their insightful feedback. Based on the reviewers' comments, we ran additional experiments and made some updates to the write-up, which we describe in detail in the individual rebuttals. To support our discussion, we provide additional experiments and illu...
NeurIPS_2024_submissions_huggingface
2,024
null
null
null
null
null
null
null
null
Diffusion Models are Certifiably Robust Classifiers
Accept (poster)
Summary: This paper derives an upper bound on the Lipschitz constant of diffusion classifiers. Then, it proposes the Exact Posterior Noised Diffusion Classifier (EPNDC) and the Approximated Posterior Noised Diffusion Classifier (APNDC) by deriving ELBO bounds on $\log p(x_\tau)$, thereby enabling the classification of noisy i...
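As background for the ELBO-as-logit construction, a generic diffusion-classifier sketch (not the EPNDC/APNDC variants themselves) scores each class by the negative expected denoising error of a conditional noise predictor; `eps_model` is a hypothetical conditional network and `alphas_bar` its noise schedule.

```python
import torch

@torch.no_grad()
def diffusion_classifier_logits(x, eps_model, alphas_bar, num_classes, n_samples=16):
    """Class logits from a conditional diffusion model (generic sketch).

    The logit of class y is minus the Monte Carlo estimate of the
    denoising error, which (up to constants) estimates the ELBO of
    log p(x | y); Bayes' rule over classes then gives the classifier.
    """
    logits = torch.zeros(num_classes)
    T = len(alphas_bar)
    for _ in range(n_samples):
        t = torch.randint(0, T, (1,))
        eps = torch.randn_like(x)
        x_t = alphas_bar[t].sqrt() * x + (1 - alphas_bar[t]).sqrt() * eps
        for y in range(num_classes):
            pred = eps_model(x_t, t, torch.tensor([y]))
            logits[y] -= ((eps - pred) ** 2).mean() / n_samples
    return logits  # feed to a softmax to obtain class probabilities
```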
Rebuttal 1: Rebuttal: Thank you for recognizing the topics and contributions of our paper. We are greatly encouraged by your appreciation. Below, we address your detailed comments and hope that you find our responses satisfactory. ***Weakness 1: Table 4 should be put in the main text.*** Thank you for your advice. We...
Summary: The authors investigate the certified robustness of diffusion classifiers. For this purpose, they first show that these classifiers have O(1) Lipschitzness and subsequently achieve tighter robustness bounds through Bayes' theorem and the ELBO. Strengths: S1: Using diffusion models to generate large amounts of...
Rebuttal 1: Rebuttal: Thank you for recognizing the contribution of our work and for providing valuable feedback. Below we address your detailed comments and hope that you find our responses satisfactory. ***Weakness 1: References could be ordered by appearance.*** Thank you for your suggestion. We will revise the re...
Summary: This work proves that diffusion classifiers possess inherent robustness to adversarial attacks by demonstrating their O(1) Lipschitzness and establishing their certified resilience. By generalizing these classifiers to handle Gaussian-corrupted data and using evidence lower bounds for likelihood approximation,...
Rebuttal 1: Rebuttal: Thank you for appreciating the strong results of our methods. Below we address the detailed comments, and hope you may find our response satisfactory and update the score accordingly. ***Weakness 1: Insufficient Novelty.*** With the highest respect, we disagree. We justify the novelty from two a...
Summary: This paper presents a theoretical analysis of the enhanced robustness in diffusion-based classifiers and introduces a generalized Noised Diffusion Classifier, EPNDC. The authors utilize the Evidence Lower Bound (ELBO) of each conditional log-likelihood $\log p(x_\tau | y) $and Bayes' theorem as the logits for ...
Rebuttal 1: Rebuttal: Thank you for appreciating the writing and contribution of our paper. We are deeply encouraged by your kind words. Below we address the detailed comments, and hope you may find our response satisfactory. ***Weakness 1: Nabla operator is unclear.*** Thank you for pointing this out. In our paper, the...
null
NeurIPS_2024_submissions_huggingface
2,024
null
null
null
null
null
null
null
null
Decision Mamba: Reinforcement Learning via Hybrid Selective Sequence Modeling
Accept (poster)
Summary: This paper investigates an emerging foundation model, Mamba, in Reinforcement Learning (RL) scenarios and compares it with Transformer in terms of effectiveness and efficiency. The authors find that in-context RL methods with Mamba as the backbone are generally more efficient than Transformer, but there is no ...
Rebuttal 1: Rebuttal: We are particularly encouraged that Reviewer UASH finds our method effective. ### [I].Reply to the weakness >**[1/2]W1. The baseline AD (Mamba) in Figure 2 and the baseline DM in Figure 3, which appear to be AD (Transformer) and DT variants, are crucial for the readers' understanding of how Mamba...
Summary: This paper presents Hybrid Mamba (HM), a method that combines the Mamba model and Transformer to enhance reinforcement learning (RL) performance. HM leverages Mamba to generate high-value sub-goals, which then condition the transformer, leading to significant improvements in online testing efficiency and task-...
Rebuttal 1: Rebuttal: Thanks for the reviewer's positive appraisal, insightful comment, and criticism of our paper. ### [I].Reply to the Weakness >**[1/7]W1. This paper claims to present a in-context RL approach. The motivation of this paper is concerned with the problems encountered with the no-gradient updates in-co...
Summary: This paper investigates utilizing the Mamba [1] architecture for the in-context RL task. Addressing this task with the Transformer architecture is effective but very inefficient due to the quadratic computation overhead of self-attention. Mamba can reduce this overhead dramatically while sustaining the performa...
Rebuttal 1: Rebuttal: We are particularly encouraged that Reviewer wVw1 finds our method effective. ### [I].Reply to the Weakness >**[1/2]W1. The high-level encoding is done by encoding the intervalled trajectories (e.g., every $c$-th trajectory), which might miss important information in the middle of the interval.**...
Summary: The paper proposes Hybrid Mamba (HM) for in-context RL. Existing in-context RL methods are predominantly based on the Transformer architecture. Transformers come with quadratic complexity of self-attention and are computationally costly. Consequently, the authors propose a hybrid architecture that uses Mamba t...
Rebuttal 1: Rebuttal: Thanks for the reviewer's positive appraisal, insightful comment, and criticism of our paper. ### [I].Reply to the Weakness >**[1/6]W1.What is the reasoning behind sampling the sub-goal from a multi-variate Gaussian? How does this compare to using a fixed representation?** It is possible to pred...
Rebuttal 1: Rebuttal: Dear Reviewers, We are very grateful to the reviewers for their valuable suggestions, which further improved our work. We provide the learning curve of our HM and ablation studies in d4rl tasks with a submitted 1-page pdf. Thank you again for your careful review and helpful comments. Kind regar...
NeurIPS_2024_submissions_huggingface
2,024
null
null
null
null
null
null
null
null
Communication Efficient Distributed Training with Distributed Lion
Accept (poster)
Summary: The paper introduces Distributed Lion, a variant of the Lion optimizer, tailored for distributed training environments. Lion, known for its memory and computational efficiency, is adapted to reduce communication costs between workers and a central server. This is achieved by communicating binary or low-precisi...
Rebuttal 1: Rebuttal: **1. Incompatible with all reduce.** Our current algorithm indeed requires a customized all_reduce, but we believe the code should be relatively simple to apply to various real-world applications. Additionally, we are exploring ways to optimize the communication process for low-bit information. Y...
Summary: Large-scale AI model training places ever-higher demands on time, cost, and environmental impact, so it is crucial to develop efficient optimizers. As an emerging optimizer, Lion has advantages in memory, computation, and sample efficiency compared with AdamW. Distributed Lion: The paper prop...
Rebuttal 1: Rebuttal: **1. Comparison to quantization methods.** The actual update on each worker is actually not the gradient, but rather the Lion’s update (the sign() plus weight decay). To our knowledge, the quantization methods often quantize the gradients before feeding the quantized gradient to the optimizer. In...
Summary: This paper extends the Lion optimizer to data parallel distributed training. Unlike optimizers like SGD and Adam, the binary update in Lion can be exploited to minimize the communication. They investigate two cost effective methods for the communication of binary updates; averaging and majority vote. Experime...
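The majority-vote aggregation of binary updates can be sketched in a few lines; worker updates are assumed to be the sign vectors produced by local Lion steps, so each worker communicates roughly one bit per parameter in each direction.

```python
import numpy as np

def majority_vote_aggregate(worker_updates):
    """Server-side majority vote over binary Lion updates (sketch).

    Returns the elementwise majority sign, which is again binary and
    therefore cheap to broadcast back; ties (vote sum 0) map to 0,
    i.e. no update for that parameter.
    """
    votes = np.sum([np.sign(u) for u in worker_updates], axis=0)
    return np.sign(votes)

# toy usage with three workers and four parameters
updates = [np.array([1, -1, 1, 1]),
           np.array([1, 1, -1, 1]),
           np.array([-1, -1, -1, 1])]
agg = majority_vote_aggregate(updates)   # -> [ 1 -1 -1  1]
```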
Rebuttal 1: Rebuttal: **1. Wall-clock time reduction.** We refer the reviewer to the common response for this question. **2. Compatibility with ZeRO3 data parallelism.** Although large-scale parallelism techniques such as ZeRO3 and FSDP require additional inter-node gather operations that cannot be accelerated by o...
Summary: This paper proposes Distributed Lion, a new variant of Lion optimizer for distributed training. The proposed algorithm only requires to communicate binary or lower-precision vectors between workers to the center server, significantly reducing the communication cost. The theoretical analysis proves the converge...
Rebuttal 1: Rebuttal: **1. i.i.d assumption.** Indeed, currently, we assume data are i.i.d (the dataset on each worker is pre-sharded before training). We leave it as a future work to show the convergence of distributed Lion under a non-i.i.d setting. **2. Wall-clock time comparison and communication reduction.** P...
Rebuttal 1: Rebuttal: We sincerely thank the reviewers for their valuable feedback. In the following, we provide the general response and address individual concerns separately in individual responses. **1. Wall-clock time comparison.** Several reviewers have requested a wall-clock time comparison. Our study primaril...
NeurIPS_2024_submissions_huggingface
2,024
null
null
null
null
null
null
null
null
Learning Spatially-Aware Language and Audio Embeddings
Accept (poster)
Summary: This paper presents an approach to learning spatially aware language representations. The authors propose a contrastive representation model that integrates spatial context into language representations, aiming to enhance the performance of tasks that require spatial reasoning. The model combines visual and te...
Rebuttal 1: Rebuttal: Thank you for taking the time to provide comment and feedback on our submission. We address the questions and concerns below. Please let us know if further clarification is required. ***The reliance on synthetic datasets may limit the generalizability of the findings. The authors could explore ...
Summary: This paper describes a method for learning to represent spatial audio (and text). The proposed model is trained on synthetically spatialized audio data with corresponding text prompts. The authors evaluate the system on audio captioning/retrieval and localization tasks, showing that the proposed model effect...
Rebuttal 1: Rebuttal: Thank you for your time reviewing the paper and providing valuable feedback and suggestions. We will do our best to clarify the points that you have raised below. Please let us know if there are further questions. ***While the spatial representation part of the work (ie FOA-derived input) is ex...
Summary: The paper presents ELSA (Embeddings for Language and Spatial Audio), a novel model designed to learn spatially-aware audio and text embeddings using multimodal contrastive learning. The primary aim is to address the limitations of existing audio foundation models, which lack spatial awareness, and sound event ...
Rebuttal 1: Rebuttal: Thank you for taking the time to provide comments and suggestions. We address the points raised below. ***Would it be possible to test ELSA in other real scenarios, for example, in some of the tasks in the latest DCase competition, e.g. Sound Event Localization?*** ***Model performance in real ...
Summary: The paper presents ELSA (Embeddings for Language and Spatial Audio), a spatially-aware audio and text embedding model. The training data is created by synthesizing spatial audio in ambisonic format and augmenting text captions with spatial information. A small real-world dataset is also collected for evaluations...
Rebuttal 1: Rebuttal: Thank you for your suggestions and comments, which we address here. Please let us know if anything remains unclear. ***For table 2, I would be curious to see what CLAP on its own can achieve.*** Performance using a pre-trained CLAP checkpoint is close to random for all tasks, which is expected ...
Rebuttal 1: Rebuttal: We are pleased to see such a strong positive sentiment from the reviewers about our work. Reviewers highlighted how **interesting and rewarding** (SQBX), **strong and significant** (rKMo), and how **rigorous** (rKMo) our work is. Reviewers also mentioned how **well-written** (JMmV, USXb) and **i...
NeurIPS_2024_submissions_huggingface
2,024
null
null
null
null
null
null
null
null
FactorSim: Generative Simulation via Factorized Representation
Accept (poster)
Summary: This work presents FACTORSIM, a framework that converts any language specification into a complete simulation for training RL agents. FACTORSIM decomposes the input prompt into steps and uses a factored Partially Observable Markov Decision Process (POMDP) to minimize the context needed for each generation step...
Rebuttal 1: Rebuttal: Thank you for your insightful feedback. We are glad that you share the excitement of the implication of this to the potential of scaling up the training of embodied generalist agents. # How can this be applied to the field of robotics and embodied AI? We present a novel method for factorizing sim...
Summary: The paper proposes a LLM prompting method to generate full game / robot simulations in code based on text descriptions. Given a long text description, the method first utilizes an LLM to decompose it into multiple sentences, and then use them to iteratively generate and update simulation code. For each iterat...
Rebuttal 1: Rebuttal: Thank you for dedicating your time to review our paper and for providing insightful feedback. We are glad to learn that you find our paper to be well-written and comprehensively evaluated. # Novelty We present a novel method for generating coded simulations that allows for efficient context selec...
Summary: The paper proposes a factorized approach to generate simulated games via LLM code synthesis. The core idea is that one doesn't need to generate the entire code at once, but can rather generate different parts of a POMDP game, such as the controller, model, and view. The generated simulation allows RL policies to train ...
Rebuttal 1: Rebuttal: Thank you for your feedback! We are glad that you find this an important problem and that you find our evaluation solid and comprehensive. # Clarification of our Motivation and Novelty and why we chose Robotics as the primary area Recent advancements in foundational models have demonstrated their...
Summary: The paper introduces an LLM-based method for generating code for simulations. After generating the simulations of famous games based on their manuals and descriptions, the authors show that policies trained in these environments transfer well to the real games. Strengths: - **S.1 Great results.** I think that the...
Rebuttal 1: Rebuttal: Thank you for taking the time to review our paper and for providing constructive feedback. We are glad you share our excitement about FactorSim's performance on the challenging task of zero-shot transfer. We appreciate your feedback and would like to address your questions. # Missing examples Tha...
Rebuttal 1: Rebuttal: We are grateful for the insightful feedback provided on our paper. We are encouraged to find that the reviewers found our paper well presented, comprehensively evaluated, achieved impressive zero-shot results, and shared our excitement about its applicability to training generalized agents. Below,...
NeurIPS_2024_submissions_huggingface
2,024
null
null
null
null
null
null
null
null
Learning Structured Representations with Hyperbolic Embeddings
Accept (poster)
Summary: The paper introduces a novel regularization method, HypStructure, which utilizes hyperbolic geometry to improve the embedding of hierarchical relationships within feature representations. This approach enhances the learning of structured representations, reducing distortion and boosting generalization in low-d...
Rebuttal 1: Rebuttal: We thank the reviewer for their detailed comments, and address each of the questions and weaknesses below. ## Concerns about the novelty We humbly disagree with the reviewer's statement since our learning setting is **different from the existing works** in the hyperbolic geometry literature, wh...
Summary: The paper presents a novel approach, HypStructure, for learning structured representations. Comparing with the existing method, the proposed method adds an regularizer calculated from hyperbolic geometry. This approach aims to reduce distortion and improve generalization performance, particularly in low-dimens...
Rebuttal 1: Rebuttal: We thank the reviewer for their detailed comments, and address each of the questions and weaknesses below. ## Hyperbolic $L_{flat}$ We choose the Supervised Contrastive Loss (SupCon) loss as the $L_{flat}$ loss in our experiments primarily to provide a fair comparison with the prior works, wher...
Summary: This work introduces a regularization scheme based on the Cophenetic Correlation Coefficient (CPCC) to more appropriately embed the semantic label hierarchy in the representation. The method exploits the hierarchical benefits of hyperbolic space by reformulating the CPCC regularization term to operate on the Poinca...
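For reference, the Poincaré-ball geodesic distance on which such a hyperbolic reformulation operates has a standard closed form, sketched below (shown only as background; the paper's full loss and centering terms are not reproduced).

```python
import numpy as np

def poincare_distance(u, v, eps=1e-9):
    """Geodesic distance between two points inside the unit Poincare ball:
    d(u, v) = arcosh(1 + 2 ||u - v||^2 / ((1 - ||u||^2)(1 - ||v||^2)))."""
    uu = np.dot(u, u)
    vv = np.dot(v, v)
    duv = np.dot(u - v, u - v)
    x = 1.0 + 2.0 * duv / max((1.0 - uu) * (1.0 - vv), eps)
    return np.arccosh(x)

d = poincare_distance(np.array([0.1, 0.0]), np.array([0.0, 0.5]))
```

Points near the boundary of the ball are exponentially far apart, which is why this metric embeds tree-like label hierarchies with low distortion.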
Rebuttal 1: Rebuttal: We thank the reviewer for their detailed comments, and address each of the questions and weaknesses below. ## Boundary collapse Part of the design of our HypStructure methodology, **including the centering loss and embedding of internal nodes**, mitigates boundary collapse. Embedding the intern...
Summary: The paper introduces HypStructure, a novel approach for learning structured representations using hyperbolic embeddings, which are well-suited for modeling hierarchical relationships due to their tree-like structure. The method incorporates a hyperbolic tree-based representation loss and a centering loss to em...
Rebuttal 1: Rebuttal: We thank the reviewer for detailed comments, and address each of the questions and weaknesses below. ## Contribution While our work builds on prior research, this does not diminish the contribution of our work and we request the reviewer to kindly refer to our note on novelty in the global resp...
Rebuttal 1: Rebuttal: We sincerely thank the reviewers for your valuable reviews and constructive feedback that has helped us improve our work. We first share the results of additional experiments we conducted based on these suggestions which demonstrate the wide applicability of our method, and then summarize our tech...
NeurIPS_2024_submissions_huggingface
2,024
null
null
null
null
null
null
null
null
Energy-Based Modelling for Discrete and Mixed Data via Heat Equations on Structured Spaces
Accept (poster)
Summary: This article, "Energy-based modelling for discrete and mixed data via heat equations on structured spaces", proposes to perform the training on EBM, using the Energy Discrepancy (ED) loss, in the case where having multi-modal dataset mixing eventually continuous inputs but also discrete (categorical) ones. The...
Rebuttal 1: Rebuttal: Thank you for your review. We first want to comment on the weaknesses of our paper mentioned in your review: > The authors extend the formalism of Energy Discrepancy to the case of including discrete state in addition to continuous features. Whether or not this justifies an entire publication can...
Summary: This paper extends the Energy Discrepancies framework introduced by Schroder et al. to the setting of discrete data. In order to do this, the authors first describe ways to perturb discrete data by modeling the perturbation process as a CTMC. They describe suitable perturbation choices for different types of d...
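One simple way to realize the perturbation described here, sketched under assumptions: for a CTMC that jumps to a uniformly random state, the time-t marginal resamples each categorical coordinate independently with probability 1 - exp(-t). The paper proposes several structured perturbations; this uniform case is only illustrative.

```python
import numpy as np

def perturb_categorical(x, t, num_states, rng=None):
    """Marginal of a uniform-jump CTMC at time t: every coordinate of the
    categorical vector x is resampled uniformly with prob. 1 - exp(-t)."""
    rng = rng or np.random.default_rng()
    x = np.asarray(x).copy()
    resample = rng.random(x.shape) < 1.0 - np.exp(-t)
    x[resample] = rng.integers(0, num_states, size=int(resample.sum()))
    return x
```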
Rebuttal 1: Rebuttal: Thank you for your review and your questions! Due to lack of space we focus on questions of most interest for all reviewers. Citations and responses to the rest can be found in the comment. > Significance: [...] Are there better baselines to compare against [...]? > EBMs are appealing because t...
Summary: The paper proposes a suite of methods for training energy-based models for discrete and mixed data using the Energy Discrepancy loss, a recently proposed method for training EBMs. Compared to contrastive divergence, it does not require MCMC sampling from the model distribution during training, improving traini...
Rebuttal 1: Rebuttal: Thank you for your helpful comments and questions. > Although the energy discrepancy method has already been proposed and published in previous work, I found the justification for the method slightly confusing while reading this paper. What is Theorem 1 exactly saying? (see questions) > > How s...
Summary: The paper introduces a novel method for training energy-based models (EBMs) on discrete and mixed data using heat equations on structured spaces. This method employs the Energy Discrepancy (ED) loss function, which eliminates the need for Markov chain Monte Carlo (MCMC) by using graph-structured perturbation...
Rebuttal 1: Rebuttal: Thank you for your review and your helpful comments. > Despite the method's solid contributions and experimental design, the motivations behind each step and their presentations are not very clear, making it hard to follow. For instance, in Section 3.1, the paper discusses different structured an...
Rebuttal 1: Rebuttal: We thank all reviewers for their constructive comments. First, we would like to summarise the strengths of the paper according to the reviewers. The reviewers agree that our paper is a successful extension of energy discrepancy to discrete and mixed data where energy-based modelling is challengin...
NeurIPS_2024_submissions_huggingface
2,024
null
null
null
null
null
null
null
null
Self-Calibrated Tuning of Vision-Language Models for Out-of-Distribution Detection
Accept (poster)
Summary: This paper first reveals the relationship between the quality of out-of-distribution (OOD) features and the prediction uncertainty of in-distribution (ID) data. Then, the paper introduces modulating factors to weight the ID loss and OOD loss, with the weights being related to the ID data prediction confidence....
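A hedged sketch of the weighting idea in this summary; `ood_reg` stands in for a per-sample OOD regularization term (e.g., as produced by a LoCoOp-style objective), and the exact form of the modulating factors below is one plausible choice, not necessarily the paper's:

```python
import torch
import torch.nn.functional as F

def confidence_modulated_loss(logits, target, ood_reg, lam=1.0):
    # True-class probability of each ID sample acts as the modulating factor.
    conf = F.softmax(logits, dim=-1).gather(1, target.unsqueeze(1)).squeeze(1)
    id_loss = F.cross_entropy(logits, target, reduction="none")
    # Uncertain samples emphasize the ID loss; confident ones the OOD term.
    return ((1.0 - conf) * id_loss + lam * conf * ood_reg).mean()
```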
Rebuttal 1: Rebuttal: Thank you for your time devoted to reviewing this paper and your constructive suggestions. Here are our detailed replies to your questions. >**Q1:** Overall, the technical contribution of this paper is relatively incremental, primarily focusing on how to weight the two loss components. > Thanks...
Summary: This paper presents a novel few-shot approach to regularizing prompt tuning-based OOD detection methods called Self-Calibrated Tuning (SCT). SCT is specifically built to address the problems of incorrect OOD features being used in prompt tuning-based OOD detection methods. More specifically, by weighting regio...
Rebuttal 1: Rebuttal: Thank you for your time devoted to reviewing this paper and your constructive suggestions. Here are our detailed replies to your questions. >**W1:** A primary concern of the reviewer is the lack of evaluations against the more traditional CIFAR set of benchmarks for OOD detection. > Thanks for ...
Summary: Based on the observation that CLIP's undercalibration affects the existing prompt-tuning-based methods' OOD regularization, i.e., samples with uncertain True-Class Probability (referred to as ID uncertainty in this paper) may provide false OOD features and harm the negative training used in the existing method...
Rebuttal 1: Rebuttal: Thank you for your time devoted to reviewing this paper and your constructive suggestions. Here are our detailed replies to your questions. > **W1:** Inaccurate motivation verifications Thanks for the constructive comments! Although true-class probability (TCP) and accuracy are not two identical ...
Summary: In response to challenges in OOD detection using CLIP-based methods, this paper introduces Self-Calibrated Tuning (SCT), a novel framework that addresses issues with unreliable OOD features extracted from ID data. SCT dynamically adjusts the influence of OOD regularization during model training based on the pr...
Rebuttal 1: Rebuttal: Thank you for your time devoted to reviewing this paper and your constructive suggestions. Here are our detailed replies to your questions. > **Q1:** My concern is mainly about the computational cost and training cost of SCT, since it involves operations on dense/local features. > Thanks for you...
Rebuttal 1: Rebuttal: # General Response We appreciate all the reviewers for their thoughtful comments and suggestions on our paper. We are very glad to see that the reviewers find our focused problem is **important** (R1,R2,R3,R4) within the OOD detection research, and **simple but adaptable** (R1,R2,R4) to various o...
NeurIPS_2024_submissions_huggingface
2,024
Summary: This paper focuses on an open-set detection method based on CLIP. The authors propose an additional weighting mechanism based on the LoCoOp method to alleviate the problem that the outlier-related regions extracted by the LoCoOp method are not trustworthy in some cases. Strengths: Outlier detection with VLM is an i...
Rebuttal 1: Rebuttal: Thank you for your time devoted to reviewing this paper and your constructive suggestions. Here are our detailed replies to your questions. > **W1,Q1:** The contribution over LoCoOp is incremental. The only difference is an extra reweighting term based on the current prediction score. And the rewe...
null
null
null
null
null
null
Training Dynamics of Transformers to Recognize Word Co-occurrence via Gradient Flow Analysis
Accept (poster)
Summary: This paper investigates the training dynamics of a single-layer transformer followed by a single MLP layer on a synthetic binary classification task, where the objective is to identify the co-occurrence of two specific tokens in the input sequence. They analyze the gradient flow dynamics for the case that all ...
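To make the synthetic setup concrete, here is a sketch of a data generator for the co-occurrence task under the idealized assumptions in this summary (orthogonal, one-hot token embeddings; all names are hypothetical):

```python
import numpy as np

def make_cooccurrence_data(n, d, L, a=0, b=1, seed=0):
    """Label is +1 iff tokens `a` and `b` both occur in the length-L sequence."""
    rng = np.random.default_rng(seed)
    tokens = rng.integers(0, d, size=(n, L))
    y = np.where((tokens == a).any(1) & (tokens == b).any(1), 1, -1)
    X = np.eye(d)[tokens]          # (n, L, d): orthogonal token embeddings
    return X, y
```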
Rebuttal 1: Rebuttal: We thank the reviewer very much for your time and efforts on providing helpful review comments. **Comment:** There are some restrictive assumptions on the synthetic data model: The vocabulary set $d$ is considered to be larger than the number of training tokens, which is not the case in realisti...
Summary: This paper studies the training dynamics of a single hidden layer transformer network (self-attention + linear MLP) trained on a binary word cooccurrence task. Specifically, given a data matrix $X \in R^{d \times L}$ representing L "words" (each column of X is a word vector of dimension d), the model must outp...
Rebuttal 1: Rebuttal: We thank the reviewer very much for your time and efforts on providing helpful review comments. **Comment:** This word co-occurrence task is very simple, and thus it is not surprising that a single transformer layer can easily learn it. **Response:** We agree that this is a simple task from a r...
Summary: This article delves into the gradient flow dynamics for detecting word co-occurrence, demonstrating that the gradient flow approach can achieve minimal loss. The training process commences with random initialization and can be delineated into two distinct phases. Strengths: - This article noticed an interesti...
Rebuttal 1: Rebuttal: We thank the reviewer very much for your time and efforts on providing helpful review comments. **Comment:** The setting of empirical experiments is also simple and ideal and readers may have no idea if this is a general phenomenon during training for detecting word co-occurrence. **Response:** ...
null
null
Rebuttal 1: Rebuttal: Dear Reviewers, Please find all the additional experiment results mentioned in the rebuttals (to Reviewer G3j5 and Reviewer cFaT) in the attached PDF. Thank you. Best, Authors Pdf: /pdf/f47595ec6729d6dec536104ee266e733d15692e3.pdf
NeurIPS_2024_submissions_huggingface
2,024
null
null
null
null
null
null
null
null
Unified Generative and Discriminative Training for Multi-modal Large Language Models
Accept (poster)
Summary: This paper proposes a novel learning paradigm to learn MLLMs based on interleaved image-text corpora. It introduces a structure-induced training strategy that imposes semantic relationships between input samples and the MLLM’s hidden state. This work applies the dynamic time warping framework to calculate the ...
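For reference, the standard dynamic time warping recurrence this summary alludes to is sketched below; the paper adapts DTW to align semantic relationships with hidden states, so this textbook form is only the underlying primitive:

```python
import numpy as np

def dtw(cost):
    """Minimal DTW: `cost` is an (m, n) pairwise cost matrix; returns the
    minimal cumulative cost of a monotone alignment path."""
    m, n = cost.shape
    D = np.full((m + 1, n + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            D[i, j] = cost[i - 1, j - 1] + min(
                D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[m, n]
```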
Rebuttal 1: Rebuttal: We sincerely thank you for your comprehensive comments and constructive advice. We will explain your concern as follows. > **Q1:** This paper did not discussed the impact of including interleaved image-text pairs in MLLM learning. For example, how will it affect the performance on basic visual-la...
Summary: The paper addresses the limitations of Vision-Language Models (VLMs) by proposing a unified approach that combines generative and discriminative training paradigms. This new method leverages interleaved image-text sequences and introduces a structure-induced training strategy. It aims to enhance the MLLM's abi...
Rebuttal 1: Rebuttal: We sincerely thank you for the valuable comments and we will explain your concern as follows. > **Q1:** While the paper shows impressive results, there is limited discussion on the potential limitations and areas where the model might underperform. **A1**: Thank you very much for your valuable q...
Summary: This paper proposes a method for unifying generative training and discriminative training of multi-modal LLMs. Generative training mainly uses an auto-regressive formulation while discriminative training mainly performs contrastive representation matching. The goal of this paper is to use discriminative tr...
Rebuttal 1: Rebuttal: We sincerely appreciate your constructive and insightful comments. We will explain your concerns point by point. First, we must clarify that we only used **a very limited amount of data** (see **Table A of the Rebuttal PDF**). Despite this, we have effectively integrated the generative and discri...
null
null
Rebuttal 1: Rebuttal: We sincerely thank all the reviewers for their insightful and valuable comments! We thank all the reviewers for agreeing that this paper presents a very interesting idea of **addressing the limitations of the original generative paradigm** in comprehensively capturing global information and keen...
NeurIPS_2024_submissions_huggingface
2,024
null
null
null
null
null
null
null
null
Discovery of the Hidden World with Large Language Models
Accept (poster)
Summary: This paper presents Causal representatiOn AssistanT (COAT), which introduces large language models (LLMs) to bridge this gap. LLMs are trained on massive observations of the world and have shown great capability in extracting key information from unstructured data. Thus, employing LLMs to propose useful high-l...
Rebuttal 1: Rebuttal: > W1 ‘We will release an anonymous link during the discussion period.’ I will consider raising my score if the code is reasonable. We send an anonymized link of code to the AC in a separate comment, as we are not allowed to include any links to external pages in the responses this year. > W2 The...
Summary: The paper tackles the problem of discovering relevant features for recovering the underlying causal graph in the absence and/or in lieu of a human domain expert. The proposed method, COAT, first queries an LLM through a prompt elucidating the task (e.g., discovering relevant features that affect a product r...
Rebuttal 1: Rebuttal: Thanks for the insightful and constructive comments on our work. We hope our response can sufficiently address your concerns. > W1.1 "strong assumptions" of “sufficiently powerful” LLMs... **Thank you for pointing out these potentially confusing words. We revised the paper to clarify that "suff...
Summary: This work proposes COAT (Causal representation AssistanT), a novel framework to leverage LLMs to assist with causal discovery from unstructured data. COAT aims to combine the advantages of LLMs and causal discovery algorithms. To do so, COAT employs LLMs to identify high-level variables and parse unstructured ...
Rebuttal 1: Rebuttal: Thanks for your support and constructive comments on our work. We hope our response can sufficiently address your concerns. > W1 More comparisons with advanced prompting techniques such as CoT. **We construct a CoT baseline** based on *DATA*, where the LLM is prompted to "Think step by step to c...
Summary: This paper combines the power of LLMs with that of causal discovery by proposing a Causal representatiOn AssistanT (COAT) approach. Specifically, it considers datasets with textual descriptions, and tries to identify the Markov blanket with respect to a target variable (such as customer ratings and medical dia...
Rebuttal 1: Rebuttal: Thanks for the detailed and insightful comments on our work. We hope our response can sufficiently address your concerns. > W1. Reality of benchmarks **The choice of synthetic and realistic benchmarks is because of the evaluation purpose**. Since we usually do not have access to the ground truth...
Rebuttal 1: Rebuttal: Dear Reviewers, Thank you for your time and constructive comments on our work. To summarize, all reviewers agree **the paper's proposal to reliably advance rigorous causal discovery methods with the advantages of foundation models like LLMs is novel and valuable** (AG9s, szEm, 1cJQ, PF9K). The me...
NeurIPS_2024_submissions_huggingface
2,024
null
null
null
null
null
null
null
null
Can Language Models Learn to Skip Steps?
Accept (poster)
Summary: The paper explores the ability of language models to skip steps in their reasoning processes. The authors introduce a controlled framework to stimulate step-skipping behavior by iteratively refining models to generate shorter and accurate reasoning paths. The study demonstrates that models can develop this abi...
Rebuttal 1: Rebuttal: Thank you for your valuable and constructive feedback. We provide specific responses and clarifications as follows. **[W1]** Thank you for the suggestion. We acknowledge the importance of testing our method across different models. Following your advice, we performed additional experiments on P...
Summary: This paper proposes an iterative training method that helps sequence models learn to skip steps. The method starts from a training set containing full-length solutions, possibly mixed with some skipped-step solutions. At each stage a model learns these solutions with the instruction “Solve it in n steps” and is prompted ...
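The iterative refinement described here can be sketched as the loop below; `train`, `solve`, and `is_correct` are hypothetical callables standing in for fine-tuning, generation with a step budget, and answer checking:

```python
def iterative_step_skipping(model, dataset, rounds, train, solve, is_correct):
    """Each round: fine-tune on the current pool, ask for shorter solutions,
    and keep generated answers that are both shorter and still correct."""
    pool = list(dataset)
    for _ in range(rounds):
        model = train(model, pool)
        for problem, solution_steps in dataset:
            shorter = solve(model, problem, n_steps=len(solution_steps) - 1)
            if shorter is not None and is_correct(problem, shorter):
                pool.append((problem, shorter))
    return model, pool
```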
Rebuttal 1: Rebuttal: Thank you for your valuable and constructive feedback. We provide specific responses and clarifications as follows. **[W1]** Thank you for your valuable suggestion! We acknowledge the importance of evaluating the generalizability of our method. We are actively working on additional experiments ...
Summary: This paper proposes to teach LLMs to deliberately skip steps when doing complex tasks involving multi-step reasoning. The authors use self-generated inference paths with fewer steps to fine-tune the models, which is similar to self-distillation. The authors conduct experiments on a few controlled tasks that show th...
Rebuttal 1: Rebuttal: Thank you for your constructive feedback and valuable suggestion! We acknowledge the importance of using established practical benchmarks. We are currently conducting additional experiments with these datasets and will provide an update on the results once the experiments are completed.
Summary: The paper proposes a method for training an LLM to solve reasoning problems using fewer verbalized reasoning steps than it is naturally encouraged to by a fixed training dataset. The resulting model is shown to maintain or improve performance on in-distribution data and OOD data testing extrapolation w.r.t. le...
Rebuttal 1: Rebuttal: Thank you for your insightful and encouraging feedback. We are pleased that you found our approach and results promising and will continue to refine and expand upon these ideas in future revisions. **[W1]** We only require the step number as input when generating the skipped data for the trainin...
null
NeurIPS_2024_submissions_huggingface
2,024
null
null
null
null
null
null
null
null
Optimization Can Learn Johnson Lindenstrauss Embeddings
Accept (poster)
Summary: This work shows that a deterministic optimization procedure can find a matrix $A$ that satisfies the Johnson Lindenstrauss guarantee. That is, a matrix $A$ maps a set of $n$ vectors to a lower dimensional space while preserving all pairwise distances up to some chosen multiplicative distortion. Typically, $A$ ...
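As a concrete reading of the guarantee, the quantity being controlled is the worst-case multiplicative distortion over all pairs; a brute-force check (not the paper's optimization procedure) looks like this:

```python
import numpy as np

def max_pairwise_distortion(A, X):
    """Worst | ||A(x_i - x_j)||^2 / ||x_i - x_j||^2 - 1 | over all pairs of
    distinct rows of X; the JL guarantee bounds this by a chosen epsilon."""
    worst = 0.0
    for i in range(len(X)):
        for j in range(i + 1, len(X)):
            diff = X[i] - X[j]
            ratio = np.linalg.norm(A @ diff) ** 2 / np.linalg.norm(diff) ** 2
            worst = max(worst, abs(ratio - 1.0))
    return worst
```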
Rebuttal 1: Rebuttal: Thank you for your detailed and thoughtful review of our paper. We appreciate your recognition of the strengths and innovation of our work. We understand your concerns regarding the practical efficiency of our method and the explicit determination of its complexity. While our paper primarily focus...
Summary: The paper proposes to calculate the embedding matrices used in the statement of the Johnson-Lindenstrauss lemma using optimization instead of randomization. The proposed algorithm is a Hessian descent. Authors prove that the algorithm finds the matrix of minimum distortion. Numerical results display the findin...
Rebuttal 1: Rebuttal: Thank you for the review and valuable comments. We appreciate your recognition of the strengths of our work and have addressed your concerns below. You raise a valid point about the computational expense of second-order methods. We should clarify here that our primary result is that second-order ...
Summary: The paper considers using an optimization method to "learn" the Johnson Lindenstrauss transform. The paper first shows that the naive objective may not be good enough -- there are stationary points that are sub-optimal. Instead, they consider optimizing over the random Gaussian space rather than the con...
Rebuttal 1: Rebuttal: Thank you for your constructive comments and for appreciating our analysis. We agree with you that there are not many results analyzing the landscape of the learned sketching matrix. We address your questions below. **Q: [...] it gives a deterministic construction of the JL lemma, or it gives a b...
Summary: This paper investigates the problem of using optimization-based approaches to learn Johnson Lindenstrauss (JL) embeddings. The authors propose a new framework to achieve the JL guarantee via optimization, instead of the traditional randomized methods. Similar to diffusion models, the authors propose a novel ...
Rebuttal 1: Rebuttal: Thank you for your thoughtful feedback and for recognizing the innovation in our work. **Q: Could you explain the statement in lines 215-216? Are the values of 1/(3n) and 1/3 derived based on the chosen value of** $\epsilon$**?** A: Yes, that's exactly correct: you can choose $\epsilon$ appropri...
Rebuttal 1: Rebuttal: We thank the reviewers for their insightful and constructive feedback on our submission. We appreciate the time and effort they have dedicated to reviewing our work. We are encouraged by their positive reception, noting that they found our contribution innovative (pTrj, WAK9), our analysis strong ...
NeurIPS_2024_submissions_huggingface
2,024
null
null
null
null
null
null
null
null
Adversarially Robust Multi-task Representation Learning
Accept (poster)
Summary: In this study, the authors explore adversarial multi-task representation learning, where a predictor and feature extractor are trained on multiple source tasks with an adversary, and then another predictor following the feature extractor is trained on a target task with an adversary. They provide bounds on the excess...
Rebuttal 1: Rebuttal: We are thankful for your thoughtful comments on our work. > One might (easily) predict this result from [38]. Under Assumption 4, the sample complexity of the perturbed dataset can be regarded as the finitely scaled sample complexity of the original dataset (as the authors exploited this concept)...
Summary: This paper conducts theoretical studies on adversarially robust transfer learning, which is to learn a model with small robust error on a downstream (target) task from a model pretrained on multiple other (source) tasks. Considering the specific multi-task representation learning (MTRL) setting, this paper pro...
Rebuttal 1: Rebuttal: Thank you for your careful reading of our work. > The proposed theoretical results are interesting, but empirical experiments are missing to support the presented theories, such as the benefits of adversarial pretraining to downstream tasks and that it takes fewer samples to learn a good predictor...
Summary: The paper studies adversarially robust transfer learning, wherein, given labeled data on multiple (source) tasks, the goal is to train a model with small robust error on a previously unseen (target) task. The paper considers a multi-task representation learning (MTRL) setting, i.e., assuming that the source an...
Rebuttal 1: Rebuttal: We are grateful for your insights and recognition of our work. > 1. What are the experimental results of the proposed theory? We agree the paper would benefit from experiments, as many theory papers would. Although we do not have experimental results, we emphasize that our results are complete and...
Summary: This work studies the adversarially robust multi-task representation learning. They introduce the definition of robust $(\nu, \epsilon, \mathcal{A})$-task diversity and the algorithm of two-stage adversarial MTRL. Using these, they show novel results on excess transfer risk for adversarial loss under mild cond...
Rebuttal 1: Rebuttal: Thank you for the feedback and appreciating our work. > The proofs shown in section F.1 are not clear. The authors do not show the formal proofs of these theoretical results. We will revisit this section to improve the clarity and rigor. For Theorem 1 and Theorem 4, we were careful to identify e...
null
NeurIPS_2024_submissions_huggingface
2,024
null
null
null
null
null
null
null
null
High Rank Path Development: an approach to learning the filtration of stochastic processes
Accept (poster)
Summary: The paper addresses the issue of weak convergence of stochastic processes, whereby evolving information is generally unaccounted for. This can lead to discontinuities when applying these processes to multi-period decision-making problems. Prior work has proposed the concept of extended weak convergence, as int...
Rebuttal 1: Rebuttal: We address the reviewer's comments and questions in detail as below. --- **Comments:** *Unfortunately this paper lies well outside my area of expertise and I am unable to review it effectively. The mathematical framework around extended weak convergence is not an area I’m familiar with, and I co...
Summary: Time series are ubiquitous in machine learning. They are modeled as stochastic processes, and therefore notions of distance between stochastic processes, and more generally convergence of stochastic processes, are fundamental ideas. Weak convergence of probability measures occupies a central position in this area,...
Rebuttal 1: Rebuttal: We are pleased to know that the reviewer enjoyed reading our paper. We address the reviewer's comments as follows. **Comments:** *Although as mentioned above the paper defines everything clearly, the exposition on PCF and HRPCF could be improved.* *It took me quite some time after re-reading the ...
Summary: The paper constructs a computationally-implementable metric which metrizes an "extended" weak convergence for stochastic processes, which more plausibly accounts for the convergence of the processes with respect to their filtrations. The result can apparently more effectively account for similarities between con...
Rebuttal 1: Rebuttal: We address the reviewer's comments and questions in detail as below. --- **Comments:** *The results seem to be an improvement both theoretically and empirically over the main antecedents* *[18] Hang Lou, Siran Li, and Hao Ni. PCF-GAN: generating sequential data via the characteristic function o...
Summary: This paper proposes the High Rank Path Development method, motivated by the extended weak convergence notion and rough path theory, to generate (conditional) time-series data. A new metric, HRPCFD, is introduced, and experiments are conducted for Brownian motion and GANs with applications in finance. Strengths: The paper is rigoro...
Rebuttal 1: Rebuttal: We thank the reviewer for the helpful comments. We address each question in detail as follows. --- **Answer to Q(1) / Why this work fits NeurIPS** Our paper addresses the crucial problem of defining the computationally feasible metric on the law of stochastic processes to capture the extended...
Rebuttal 1: Rebuttal: We deeply appreciate all the reviewers for their helpful comments and constructive suggestions. We are pleased that all the reviewers find our work sound and well-presented. In the following, we provide detailed responses to the questions raised by each reviewer individually.
NeurIPS_2024_submissions_huggingface
2,024
null
null
null
null
null
null
null
null
Conditional Generative Models are Sufficient to Sample from Any Causal Effect Estimand
Accept (poster)
Summary: The paper leverages state-of-the-art conditional generative models and algorithms from causal do calculus to perform "approximately correct" high-dimensional interventional sampling. Their contribution is ID-GEN, a recursive algorithm that uses diffusion models (among other generative models) to sample from an...
Rebuttal 1: Rebuttal: We thank the reviewer for their valuable effort. We are happy that they acknowledged our algorithm as novel and sound. Below we address their concerns. > ..., Obstacles for moving towards automatic causal inference with images?. > How feasible is this prospect in the near future? We interpret t...
Summary: This paper proposes an algorithm for sampling from intervention distributions under known causal models with hidden confounders, using conditional generative models. This allows nonparametric distributional estimation of high-dimensional outcomes such as X-ray image generation, unlike existing methods. The pro...
Rebuttal 1: Rebuttal: We thank the reviewer for their efforts and useful feedback. We are happy that they found our work important and our theoretical background solid. Below we address their concerns. >Is it important in cases where there are bidirectional edges ..., but the causal orientations are all > identified ...
Summary: This paper studies the problem of sampling from an interventional distribution of high-dimensional data, where the causal relationships are described by an acyclic directed mixed graph (ADMG). Motivated by the ID algorithm that provides a recursive way of identifying any causal effect from a conditional probabi...
Rebuttal 1: Rebuttal: We thank the reviewer for their effort on our paper. We are really happy that they found our work broadly applicable, our paper well-written and our experiments extensive. Below, we address their concerns. >In some identification formulas e.g. Eq.(1) in the paper, the probability on the denomina...
Summary: This paper provides an algorithm for sampling from a causal interventional distribution using conditional generative models, building on Shpitser and Pearl's ID algorithm. They discuss how their algorithm, ID-GEN, can sample from any identifiable distribution given a causal graph, and handles the presence of u...
Rebuttal 1: Rebuttal: We thank the reviewer for their valuable efforts on our paper. We are happy to receive their appreciation for our theoretical contribution and experimental setup. Below we address their concerns. > Step 1 of ID-GEN ... why we can't just learn a model of P(y) directly? ..., how is the sum over va...
null
NeurIPS_2024_submissions_huggingface
2,024
null
null
null
null
null
null
null
null
Revive Re-weighting in Imbalanced Learning by Density Ratio Estimation
Accept (poster)
Summary: This paper presents a dynamic re-weighting method for imbalanced learning. The author defines the ratio of the balanced dataset distribution to the training set distribution, and tries to estimate it with an iterative update method. The effectiveness of this method is demonstrated by experiments. Strengths: 1. Thi...
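A static simplification of the ratio described here, for intuition only (the paper estimates and updates this ratio dynamically during training rather than computing it once from label counts):

```python
import numpy as np

def density_ratio_weights(labels, num_classes):
    """Weight of class c = balanced probability (1/C) over its empirical
    training frequency; rare classes get up-weighted."""
    counts = np.bincount(labels, minlength=num_classes).astype(float)
    empirical = counts / counts.sum()
    balanced = np.full(num_classes, 1.0 / num_classes)
    return balanced / np.maximum(empirical, 1e-12)
```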
Rebuttal 1: Rebuttal: > The formula derivation in Sec. 3.3 can be more detailed. It is suggested to explain how formula (7) is obtained in the appendix. Thank you for the suggestion. We will include detailed derivations of Eq. (7) in the appendix of revision, which we simply summarize the deduction as follows for clar...
Summary: The paper introduces a novel approach called Re-weighting with Density Ratio (RDR) to address the challenges posed by imbalanced data distributions in machine learning. The RDR approach aims to mitigate overfitting on majority classes and enhance adaptability across diverse datasets by continuously updating t...
Rebuttal 1: Rebuttal: > The paper does not provide a detailed theoretical analysis or justification for the proposed Re-weighting with Density Ratio (RDR) method, beyond the intuition that it can mitigate overfitting on majority classes and enhance adaptability across diverse datasets. Thank you for your suggestion. T...
Summary: The paper presents a weighting strategy to handle class imbalance. Contrary to existing methods, they propose to adapt the weights throughout the training procedure. Their method estimates the discrepancy between the sample distribution and the balanced sample distribution for parameterization w and u...
Rebuttal 1: Rebuttal: > Row 125, the authors refer to the distribution of training set, which get parameterized by w. Thus, my understanding is that the authors refer to the distribution of the training set "captured by the model". Yes, you are right. We use the distribution parameterized by $w$ to represent "the dist...
null
null
Rebuttal 1: Rebuttal: We thank reviewers for your valuable feedback, and appreciate the great efforts made by all reviewers, ACs, SACs and PCs. Please refer to our detailed responses to each reviewer, where we addressed each question and concern point by point. In the **attached PDF**, we have included a notation sum...
NeurIPS_2024_submissions_huggingface
2,024
null
null
null
null
null
null
null
null
Rethinking Misalignment in Vision-Language Model Adaptation from a Causal Perspective
Accept (poster)
Summary: This paper investigates the two different misalignment issues between CLIP and downstream tasks, i.e., task misalignment and data misalignment. The author designed several experiments that demonstrated that over-fitting occurs when tuning with the learnable prompt. They propose the Causality-Guided Semantic D...
Rebuttal 1: Rebuttal: We thank Reviewer 7jjJ for the valuable feedback and constructive suggestions. The mentioned issues are addressed as follows: *** **W1**: There is no obvious evidence to show that the CDC will improve the prediction in certain cases, i.e., which categories were previously predicted wrongly and are corrected with CDC. It would be ...
Summary: This paper addresses the two-level misalignment (task and data) issue in adapting CLIP to specific tasks. The authors develop a structural causal model to analyze CLIP's pre-training and adaptation processes, revealing how task-irrelevant knowledge interferes with predictions. To mitigate this, they propose Ca...
Rebuttal 1: Rebuttal: We thank Reviewer JJqY for the valuable comments and constructive suggestions. The mentioned issues are addressed as follows: *** **W1**: Figure 1(a) appears to illustrate task misalignment. Consider enhancing the caption of Figure 1 with more detailed explanations to clarify this concept. **A1**...
Summary: This paper investigates the task and data misalignment issues in pre-trained vision-language models such as CLIP. It discovers that the task-irrelevant information significantly affects the prediction of CLIP and soft prompt tuning cannot mitigate the data misalignment issue. The authors propose a novel Causal...
Rebuttal 1: Rebuttal: We thank Reviewer Ptf4 for the valuable suggestions. The mentioned issues are addressed as follows: *** **W1**: In the experiments section, the method is currently adapted solely to the CLIP model. This limitation may not fully demonstrate the model's universality. The authors can adapt the method...
null
null
Rebuttal 1: Rebuttal: Response to **Weakness 1** of **Reviewer** **Ptf4** and **Weakness 2** of **Reviewer** **7jjJ**. *** Thank you for your suggestions on exploring the impact of the misalignment issues we proposed across different models. We believe that most current VLMs can suffer from the misalignment problem whe...
NeurIPS_2024_submissions_huggingface
2,024
null
null
null
null
null
null
null
null
Low Precision Local Training is Enough for Federated Learning
Accept (poster)
Summary: This paper proposes an efficient federated learning (FL) paradigm, where the local models in the clients are trained with low-precision operations and communicated with the server in low precision format, while only the model aggregation in the server is performed with high-precision computation. The performan...
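A toy sketch of the paradigm in this summary. The paper uses block floating point for the local computation; the uniform quantizer below is a stand-in, and `fl_round` compresses the client updates while keeping the server-side aggregation in full precision:

```python
import numpy as np

def quantize(w, bits=8):
    # Uniform quantization stand-in for the paper's block floating point.
    scale = np.max(np.abs(w)) / (2 ** (bits - 1) - 1) + 1e-12
    return np.round(w / scale) * scale

def fl_round(global_w, client_grads, lr=0.1, bits=8):
    """Clients take a low-precision local step and communicate quantized
    models; the server averages them in high precision."""
    local_models = [quantize(global_w - lr * g, bits) for g in client_grads]
    return np.mean(local_models, axis=0)   # full-precision aggregation
```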
Rebuttal 1: Rebuttal: **Q1: The biggest concern is regarding the novelty w.r.t. SWALP [40].** **A1:** We summarize the differences between our method and SWALP as follows: 1) SWALP is designed for standard sgd in centralized training, our approach focuses on FL. 2) We show both empirically and theoretically the effe...
Summary: The paper proposes a federated learning approach that performs local training on low precision through quantization combined with a high-precision averaging and a moving average at the server. The paper guarantees convergence and empirically compares several levels of low-precision local training to full-preci...
Rebuttal 1: Rebuttal: **Q1: Contribution in terms of test accuracy.** **A1:** It is a misunderstanding. We do not aim to improve test accuracy, but rather to demonstrate that low precision local training is sufficient for FL and can be used to reduce training and communication cost. Our method, which performs low pr...
Summary: The paper studies an FL system with data heterogeneity, a topic that has been extensively studied in the past few years. The idea is to perform local training with lower precision by applying block floating point quantization. The idea per se is not new, but proving that convergence can be achieved using l...
Rebuttal 1: Rebuttal: **Q1: What about resource heterogeneity? It would be important to have a discussion (or better some experimental results) on how the proposed solution perform in such setting. Should we use the same quantization level for all clients, or can we adjust the precision according to the resource avail...
Summary: The paper proposes an efficient Federated Learning (FL) paradigm where local models are trained using low-precision operations and communicated with the central server in low precision format. The aggregation on the server, however, is performed with high-precision computation to ensure accuracy. The authors d...
Rebuttal 1: Rebuttal: **Q1: The integration of low precision training and high precision aggregation may add complexity to the implementation.** **A1:** Actually, in our method, the transformation between low and high-precision parameters is performed on the server, and the computation in the clients is standard for ...
null
NeurIPS_2024_submissions_huggingface
2,024
null
null
null
null
null
null
null
null
GREAT Score: Global Robustness Evaluation of Adversarial Perturbation using Generative Models
Accept (poster)
Summary: This paper presents an innovative adversarial robustness measure, which leverages a generative model to produce data samples, records the marginal confidence score as a local statistic, and averages it over the data distribution. The proposed measure is designed to be efficient, scalable, and potentially applic...
Rebuttal 1: Rebuttal: We express sincere gratitude for your constructive and detailed comments. ## Weakness 1: Definition of adversarial robustness; Algorithm in appendix reduces coherence; Clarification on Figure 2. (1) We apologize for the lack of definition. To clarify, adversarial robustness evaluation refer...
Summary: The authors propose the GREAT score that uses conditional generative models to mimic the data generating distribution. Thereafter, the classification margin on a set of generated images can be used to obtain a global robustness score. For this, the authors make the connection between local and global robustnes...
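A hedged sketch of the local statistic being averaged: a clipped classification margin on samples from a conditional generator. The `generator` and `classifier` callables are hypothetical placeholders, and the exact statistic in the paper may differ:

```python
import torch

def margin_score(classifier, generator, labels):
    """Average clipped margin f_y(x) - max_{c != y} f_c(x) over generated x."""
    x = generator(labels)                       # conditional samples
    logits = classifier(x)
    true = logits.gather(1, labels.unsqueeze(1)).squeeze(1)
    masked = logits.clone()
    masked.scatter_(1, labels.unsqueeze(1), float("-inf"))
    runner_up = masked.max(dim=1).values
    return torch.clamp(true - runner_up, min=0.0).mean()
```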
Rebuttal 1: Rebuttal: We appreciate your detailed and constructive comments, and we are encouraged that you find our work “well written, easy to follow”. ## Weakness 1: Generative model limitations: (a) valid class instance, (b) unambiguous class, (c) data approximation. Thank you for your feedback regarding the a...
Summary: The paper introduces a novel framework called GREAT Score (Global Robustness Evaluation of Adversarial Perturbation using Generative Models), aimed at evaluating the global robustness of machine learning models against adversarial perturbations. Unlike traditional methods that aggregate local robustness result...
Rebuttal 1: Rebuttal: We appreciate your detailed and constructive comments. ## Weakness 1 & Question 1: Ablation studies for other norms (e.g., L∞)? Thank you for bringing up the limitations of our theorem. As stated in Section 5, our framework currently focuses on the $\mathcal{L}_2$ norm due to limitations in extendi...
Summary: The paper addresses the important and under-explored problem of "global robustness evaluation" for neural networks. It proposes GREAT Score, a novel framework for assessing global robustness using generative models (GMs). Besides, through Monte Carlo sampling from GMs and using Hoeffding's concentration bound...
Rebuttal 1: Rebuttal: We express sincere gratitude for your valuable feedback and constructive comments. ## Question 1: GAN reliability and distribution coverage? Thank you for your feedback on the reliance on GANs as a proxy for the true data distribution. We acknowledge the concerns about the method's accuracy, par...
Rebuttal 1: Rebuttal: We appreciate the valuable feedback from the reviewers . Below is a high-level summary of our rebuttal, addressing the major concerns raised: ### Performance and reliability of the generative model * **Concern:** The reviewers were concerned about the dependency of our metric on the generati...
NeurIPS_2024_submissions_huggingface
2,024
null
null
null
null
null
null
null
null
Average gradient outer product as a mechanism for deep neural collapse
Accept (poster)
Summary: Given the complexity of the process of neural network training, any understanding of robust phenomena that can be identified in the training process has potential value that can guide the design of models and algorithms. Neural Collapse (and its deep counterpart) is one such phenomenon that has been identified...
Rebuttal 1: Rebuttal: Thank you for your review. We address all your concerns below: **Given that the main results apply both to a non-standard kernel method and a non-standard training algorithm [...]** Our first motivation for studying DNC with Deep RFM is that, unlike the standard analytical approaches for neural...
Summary: This paper studies deep neural collapse (DNC) in deep neural networks (DNN) through the prism of the neural feature ansatz (NFA) and deep recursive feature machines (RFM). It is comprised of several results: - empirical evidence that DNC occurs in deep RFMs, - a theoretical analysis of DNC in a high-dimensiona...
Rebuttal 1: Rebuttal: Thank you for your feedback. We note that we will significantly improve the presentation of our paper, and specifically Sections 4.2 and 4.3. Please see the global response for a summary of our changes. We now proceed to address individual comments. **I could not follow most of section 4.2 [...]...
Summary: The submission introduces a mechanism for Deep Neural Collapse (DNC) using the average gradient outer product (AGOP). The authors also propose the Deep Recursive Feature Machines (Deep RFM) model, which employs AGOP in its architecture to empirically and theoretically demonstrate DNC. The main contribution is ...
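For readers unfamiliar with the central object, the AGOP of a function f at inputs {x_i} is (1/n) sum_i J_f(x_i)^T J_f(x_i); a direct computation (a sketch, not the Deep RFM pipeline) is:

```python
import torch
from torch.autograd.functional import jacobian

def agop(f, X):
    """Average gradient outer product of f over the rows of X."""
    d = X.shape[1]
    G = torch.zeros(d, d)
    for x in X:
        J = torch.atleast_2d(jacobian(f, x))   # (out_dim, d)
        G += J.T @ J
    return G / X.shape[0]
```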
Rebuttal 1: Rebuttal: Thank you for your review. We address your concerns below. **I found the paper challenging to read.** We will make a number of changes to our presentation. Please see our global response to all reviewers for a summarized description of these changes. **Can the authors clarify if other metrics,...
Summary: The authors study two effects associated with neural collapse: the within class variability going to zero and the orthogonality/tight-frame of the class means. They study the deep recursive feature machine model, and show that neural collapse forms in that setting as well, due to the projection of the data ont...
Rebuttal 1: Rebuttal: Thank you very much for your detailed review. In our response here, we will pay significant attention to improving the writing and organization of our work, especially Section 4. Please also read our global response where we discuss the changes in presentation in detail to all reviewers. We procee...
Rebuttal 1: Rebuttal: We thank the reviewers for their thorough feedback on our manuscript. We will make a number of clarifying changes to the organization and presentation of our results. We list major changes here. First, we will split Section 4 into two new sections. The first section will contain the empirical res...
NeurIPS_2024_submissions_huggingface
2,024
null
null
null
null
null
null
null
null
Learning Segmentation from Point Trajectories
Accept (spotlight)
Summary: The authors propose a loss function that seeks to group the trajectories into low-rank matrices where the motion of object points can be approximately explained as a linear combination of other point tracks. Experiments on the synthetic MOVi-F variant of the Kubric dataset and the real datasets DAVIS 2016, Se...
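A rough rendering of the grouping objective described here, under simplifying assumptions: trajectories assigned to one group should stack into a near low-rank matrix, measured by the singular-value energy beyond a target rank. The paper's loss is a more refined, differentiable formulation; names and shapes here are hypothetical.

```python
import torch

def lowrank_grouping_loss(traj, masks, rank=3):
    """traj: (P, 2T) stacked x/y point tracks; masks: (K, P) soft group
    assignments. Penalize singular-value energy beyond `rank` per group."""
    loss = traj.new_zeros(())
    for m in masks:
        M = traj * m.unsqueeze(1)        # rows weighted by group assignment
        S = torch.linalg.svdvals(M)
        loss = loss + S[rank:].pow(2).sum()
    return loss
```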
Rebuttal 1: Rebuttal: We are happy the reviewer has found our work to be original, detailed, reproducible, addressing key issues, and with well-conducted experiments. We also thank the reviewer for suggestions to improve the clarity of our work. > About the presentation, please clearly state a name/acronym to the prop...
Summary: This paper introduces a method for training a segmentation network using long-term point trajectories as a supervisory signal to enhance optical flow. It proposes a novel loss function aimed at grouping these trajectories into low-rank matrices, allowing the motion of object points to be approximately represen...
Rebuttal 1: Rebuttal: We are happy the reviewer has found our work detailed, well-structured and showing significant improvement. We thank the reviewer for their thoughtful questions and suggestions. We reply to each comment below. > The contribution of the paper in Subspace Clustering is not described clearly. [...]...
Summary: This paper proposes a novel loss function that allows training image object segmentation models based on object motion in videos. Motivated by recent work on self-supervised learning of segmentation using optical flow, the authors propose to use longer point trajectories as additional self-supervision signal. ...
Rebuttal 1: Rebuttal: We thank the reviewer for thoughtful comments and suggestions and are happy that they recognized our work as well motivated, well explained, and easy to compare. > To my understanding the task is multi-object segmentation for MOVi and binary segmentation for all other datasets. This should be cle...
Summary: This paper proposes a model to process long-term motion and short-term motion simultaneously to achieve motion-based segmentation. Specifically, motivated by subspace clustering, this work proposes a loss function that enables training a neural network to learn motion grouping from both optical flows and point...
Rebuttal 1: Rebuttal: We are glad that the reviewer finds the motivation and method clear, easy to follow, convincing and reasonable, with strong results and comprehensive ablation. We thank the reviewer for constructive comments and helpful suggestions. > the paper's principle assumes that the object is rigid. Howev...
Rebuttal 1: Rebuttal: We thank the Reviewers for their thoughtful comments and suggestions. We are happy they found our presentation clear, well-flowing, well-motivated, convincing and reasonable, our results strong and our experiments comprehensive. We reply to each comment individually. To aid replies, we also provi...
NeurIPS_2024_submissions_huggingface
2,024
Summary: The paper tackles video object segmentation by incorporating into the loss function not only instantaneous optical flow information but also long-term pixel tracking information. RAFT was used for optical flow and CoTracker was used for long-term pixel tracking in the experiments. The experiments show a marg...
Rebuttal 1: Rebuttal: We thank the Reviewer for constructive comments and suggestions. We reply to concerns below. > Table 2 where the experimental results are presented lists a collection of methods categorized into different groupings. Perhaps these groupings and methods could be better discussed in the lit review. ...
null
null
null
null
null
null
MoTE: Reconciling Generalization with Specialization for Visual-Language to Video Knowledge Transfer
Accept (poster)
Summary: The paper introduces a novel framework called MoTE. This framework addresses the trade-off between zero-shot generalization and close-set performance in video recognition tasks by tuning a mixture of temporal experts. The key contributions include: - Introducing Weight Merging Regularization to balance genera...
Rebuttal 1: Rebuttal: We appreciate the valuable comments and the time you have dedicated to this paper. Please find the responses to your comments below. *** **1. Expanding the semantic space with large-scale generative models.** Thanks for your constructive suggestion! Following your suggestion, we replace the categ...
Summary: This paper addresses the issue of Video-Language Models (VLMs), such as CLIP, experiencing reduced generalization performance to unseen categories when learning domain-specific knowledge for video understanding tasks. The authors propose the MoTE framework, which introduces temporal experts and employs a Mixtu...
Rebuttal 1: Rebuttal: We appreciate the valuable comments and the time you have dedicated to this paper. Please find the responses to your comments below. *** **1. Ambiguity in the use of certain symbols.** Thanks for your kind reminder, and sorry for the confusion. We double-checked the usage of all symbols and will ...
Summary: This paper introduces MoTE (Mixture-of-Temporal-Experts) to improve the generalization and specialization capabilities of visual-language models (VLMs) when adapting to video tasks. MoTE addresses two main questions: how to enhance the generalization of additional parameters during fine-tuning, and how to bala...
Rebuttal 1: Rebuttal: We appreciate the valuable comments and the time you have dedicated to this paper. Please find the responses to your comments below. *** **1. Discussion on FROSTER.** Thanks for your kind reminder! Our motivation does resemble FROSTER's in some way but differs in the following aspects. *Motivati...
Summary: To preserve the generalization ability of a general visual-language model (VLM) while boosting its performance when fine-tuned with task-specific data, this paper proposes a new framework and training strategy to learn a unified model with both task-specific performance and generalization ability. Three...
Rebuttal 1: Rebuttal: We appreciate the valuable comments and the time you have dedicated to this paper. Please find the responses to your comments below. *** **1. The experimental setting may hide the weakness of the proposed method. It would be great to also fine-tune the model on the small-scale UCF-101 and evaluate...
Rebuttal 1: Rebuttal: We are grateful to all reviewers for their valuable and constructive comments. We have carefully considered the points raised by each reviewer and provided comprehensive responses to each question. Besides, we attach an additional PDF file containing a detailed analysis of the category-wise perfor...
NeurIPS_2024_submissions_huggingface
2,024
null
null
null
null
null
null
null
null
UltraPixel: Advancing Ultra High-Resolution Image Synthesis to New Peaks
Accept (poster)
Summary: The paper presents UltraPixel, an innovative architecture for ultra-high-resolution image generation that tackles semantic planning, detail synthesis, and high resource demands. UltraPixel uses cascade diffusion models to generate images at multiple resolutions within a single model, efficiently guiding high-r...
Rebuttal 1: Rebuttal: `Q1`: **The manuscript's layout requires some refinement.** Thank you for your constructive suggestion. We have carefully revised the format accordingly. `Q2`: **Paper would benefit from a more comprehensive set of visual results, including additional comparisons with state-of-the-art methods.**...
Summary: This paper introduces UltraPixel, a method for generating high-quality ultra-high-resolution images. It utilizes the semantics-rich representations of lower-resolution images in a later denoising stage to guide the overall generation of highly detailed high-resolution images. The method incorporates implicit n...
Rebuttal 1: Rebuttal: `Q1` **Clarification on INR and analysis of SAN feature** Compared to discrete grid pixels, INR represents data as a neural function, mapping continuous coordinates to signals. Its representation capacity depends not on grid resolution but on the neural network's ability to capture underlying dat...
Summary: This paper presents a method for Ultra-High-Resolution image generation from text prompts. The method is based on StableCascade. The original StableCascade can generate 1024x1024 images. This paper proposes another HR latent diffusion model that can utilize the guidance from 1024 x 1024 images and generate 409...
Rebuttal 1: Rebuttal: `Q1` **Comparison with the SOTA generative upsampler** Thank you for the valuable suggestion. We compare our method with SOTA diffusion-based SR methods, namely **SUPIR** and **StableSR**. The visual results in **Figure 1** of the rebuttal PDF demonstrate that our method produces more reasonable ...
null
null
Rebuttal 1: Rebuttal: **Response to AC and reviewers (with PDF)** We sincerely appreciate your time and efforts in reviewing our paper. We are glad to find that reviewers recognized the following merits of our work: - **Innovative and effective solution [rwnV, z6xV, UnLF]**: The proposed UltraPixel introduces a Low-R...
NeurIPS_2024_submissions_huggingface
2,024
null
null
null
null
null
null
null
null
Remix-DiT: Mixing Diffusion Transformers for Multi-Expert Denoising
Accept (poster)
Summary: This paper proposes Remix-DiT, which creates multiple experts by mixing fewer basis diffusion transformers, allowing each expert to specialize in the denoising task for corresponding timestep intervals. It achieves performance improvements by having each expert responsible for a larger number of timestep inter...
Rebuttal 1: Rebuttal: > **Q1: Lack of experiments. The authors have to validate the performance of Remix-DiT by reporting comparisons with previous methodologies on the FFHQ or MS-COCO datasets. It would make the manuscript more solid if Remix-DiT achieves consistent performance improvements on multiple datasets.** Th...
Summary: The paper introduces Remix-DiT, a modification to the diffusion transformer architecture that incorporates the multi-expert denoiser framework during both training and inference. Unlike traditional multi-expert methods that train $N$ separate individual experts independently for each time interval, Remix-DiT e...
Rebuttal 1: Rebuttal: > **Q1: While the authors show the benefits of Remix-DiT on finetuning a pretrained DiT model, it would be interesting to see its effect when training all components from scratch. If the compute budget allows, I suggest that the authors also add this experiment for better insights into what happen...
Summary: The paper proposes Remix-DiT, a model architecture designed to enhance the capacity of a standard DiT model without significantly increasing inference costs. This is accomplished by training mixing coefficients to adaptively fuse multiple DiT models and developing specialized experts for multi-expert denoising....
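The mixing operation at the heart of this summary can be sketched per parameter tensor: each of the N experts is a learned convex combination of K basis tensors (a simplification treating one tensor at a time; real models mix every parameter):

```python
import torch

def mix_experts(basis_params, alpha_logits):
    """basis_params: list of K same-shape tensors; alpha_logits: (N, K).
    Returns N expert tensors, each a softmax-weighted mix of the bases."""
    weights = torch.softmax(alpha_logits, dim=-1)   # (N, K)
    stacked = torch.stack(basis_params)             # (K, ...)
    return [torch.tensordot(w, stacked, dims=1) for w in weights]
```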
Rebuttal 1: Rebuttal: > **Q1: The visualization results in Figure 4 are very interesting. It seems that the model has a certain preference in allocating the capacity of basis models, with clear segmentation across the timesteps. Additionally, a high coefficient is observed at early timesteps, such as 0-150. Does this i...
Summary: To improve the generation quality of diffusion transformers, Remix-DiT proposes to enhance output quality at a lower cost and aims to create N diffusion experts for different denoising timesteps without the need for expensive training of N independent models. Remix-DiT achieves this by employing K basis models...
Rebuttal 1: Rebuttal: > **Q1: Lack of Visualization Results: The paper does not include any visualization results. Providing visual examples of generated outputs is crucial for qualitatively evaluating the effectiveness of the proposed method.** Thanks for the suggestion. We supplement visualization results in the att...
Rebuttal 1: Rebuttal: We would like to extend our sincere gratitude to all the reviewers for their time, effort, and insightful feedback on our submission. In response to reviewers' questions, we included some visualization results in the attached PDF file to compare the RemixDiT-B to a standard DiT-B, where our method...
NeurIPS_2024_submissions_huggingface
2,024
null
null
null
null
null
null
null
null
PaDeLLM-NER: Parallel Decoding in Large Language Models for Named Entity Recognition
Accept (poster)
Summary: This paper introduces a novel approach to reduce generation latency in Named Entity Recognition (NER) using Large Language Models (LLMs). The primary issue addressed is the high latency caused by the sequential decoding process in LLMs, which significantly lengthens the sequence by autoregressively generating ...
Rebuttal 1: Rebuttal: We thank the reviewer for the thoughtful feedback and appreciate the recognition of our paper’s novelty and writing. We are grateful for the opportunity to address the concerns raised. **W1-speedup is weak when only one entity type**: Even if the entity type is one, if there are many mentions in ...
Summary: They create an NER system where an LLM first outputs the number of mentions there are of a given type (for all possible types). Then all mentions can be generated in parallel. This results in faster inference times as each generation is short, and they can be done in parallel. Strengths: They compare to seve...
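A hedged sketch of that two-step scheme; the prompt strings and the `generate` callable are hypothetical, and thread-level parallelism stands in for batched parallel decoding on an accelerator:

```python
from concurrent.futures import ThreadPoolExecutor

def parallel_ner(generate, text, entity_types):
    # Step 1: query the mention count for every entity type.
    counts = {t: int(generate(f"{text}\nNumber of {t} mentions:"))
              for t in entity_types}
    # Step 2: decode each (type, index) mention independently, in parallel.
    jobs = [(t, i) for t, c in counts.items() for i in range(c)]
    with ThreadPoolExecutor() as pool:
        return list(pool.map(
            lambda tj: (tj[0], generate(f"{text}\n{tj[0]} mention #{tj[1] + 1}:")),
            jobs))
```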
Rebuttal 1: Rebuttal: We thank the reviewer for the thoughtful feedback and appreciate the recognition of our paper’s contributions. We are grateful for the opportunity to address the concerns raised. **W1-loss of alignment of mentions to the tokens**: We acknowledge that PaDeLLM loses token position information, and we re...
Summary: This paper presents PaDeLLM-NER, a novel approach for accelerating Named Entity Recognition (NER) inference in Large Language Models (LLMs) through parallel decoding. A reformulation of the NER task that enables parallel generation of label-mention pairs, significantly reducing inference latency. A two-step in...
Rebuttal 1: Rebuttal: We thank the reviewer for the thoughtful feedback and appreciate the recognition of our paper’s contributions, writing and novelty. We are grateful for the opportunity to address the concerns raised. **W1-token location information**: - NER traditionally relies on sequence labeling, where each ...
Summary: This paper proposes an interesting extension of the parallel text generation paradigm, where the authors tackle the NER task and propose to generate the labels independently. For each label prediction, the proposed method first predicts the number of mentions and then predicts the exact entity. The results sho...
Rebuttal 1: Rebuttal: We thank the reviewer for the thoughtful feedback and appreciate the recognition of our paper’s contributions. We are grateful for the opportunity to address the concerns raised. **W1-Justification of the importance of two-step prediction**: We conducted an additional experiment using one-step ...
null
NeurIPS_2024_submissions_huggingface
2,024
null
null
null
null
null
null
null
null
$\epsilon$-Softmax: Approximating One-Hot Vectors for Mitigating Label Noise
Accept (poster)
Summary: This paper proposes $\epsilon$-softmax to deal with label noise. $\epsilon$-softmax modifies the outputs of the softmax layer to approximate one-hot vectors with a controllable error $\epsilon$. Both theoretical and empirical studies show the effectiveness of the proposed method. Strengths: 1. The writing of...
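The summary does not spell out the exact form of the modification, but the idea admits a simple reading: mix the softmax output with the one-hot vector of its argmax so that $\epsilon$ controls the residual error. The sketch below is that hypothetical reading, not the paper's exact formula.

```python
import numpy as np

def softmax(z: np.ndarray) -> np.ndarray:
    e = np.exp(z - z.max())
    return e / e.sum()

def eps_softmax(z: np.ndarray, eps: float) -> np.ndarray:
    """One plausible instantiation of the idea above (an assumption, not the
    paper's definition): pull the softmax output toward the one-hot vector
    of its argmax, leaving a controllable error of size eps."""
    p = softmax(z)
    one_hot = np.zeros_like(p)
    one_hot[p.argmax()] = 1.0
    return (1.0 - eps) * one_hot + eps * p  # exactly one-hot as eps -> 0

print(eps_softmax(np.array([2.0, 1.0, 0.1]), eps=0.1))
```

Under this reading, shrinking eps makes predictions peakier (closer to one-hot) while the eps-weighted softmax term keeps the output differentiable almost everywhere.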
Rebuttal 1: Rebuttal: Thanks very much for your valuable comments. We would like to offer the following responses to your concerns. **1. Response to Weakness 1 and Question 3** Thanks for your kind comment. Previous work indicates that, for a fixed vector $\mathbf{v}$ and $\forall L \in \mathcal{L}$, we have $$\sum ...
Summary: This submission proposes an enhanced softmax layer for label-noise learning, namely $\epsilon$-softmax. By incorporating the well-known $\epsilon$-relaxation, the proposed $\epsilon$-softmax can regularize the outputs of the model and avoid fitting label-noise samples. This simple and plug-and-play meth...
Rebuttal 1: Rebuttal: Thanks very much for your positive comments. We would like to offer the following responses to your concerns. **1. Response to Weakness 1** Thanks for your kind comment. --- About the ablation study on gradient clipping We fully followed the experimental setup of previous work [1], and we used gra...
Summary: This manuscript proposes a novel method to approximate the symmetric condition of the loss function, which is necessary for robustness to label noise. Specifically, the proposed method, named $\epsilon$-softmax, can adjust the model output to approximate a one-hot vector. However, the proposed method alon...
Rebuttal 1: Rebuttal: Thanks very much for your positive comments. We would like to offer the following responses to your concerns. **1. Response to Weakness 1** Thanks for your insightful comment. In the following, we give the theoretical discussion for temperature-dependent softmax. For model with temperature-...
Summary: The paper introduces “ϵ-softmax,” a method to adjust softmax outputs for better approximation to one-hot vectors, thereby mitigating the impact of label noise in classification tasks. The approach modifies the softmax layer outputs to include a controllable error term ϵ, aiming to improve noise robustness with...
Rebuttal 1: Rebuttal: Thanks very much for your positive comments. We would like to offer the following responses to your concerns. **1. Response to Weakness** Thanks for your kind comment. For Figures 2 and 3, we extract the high-dimensional features of the test set at the second-to-last fully connected layer, then ...
Rebuttal 1: Rebuttal: We appreciate all reviewers for their valuable time and insightful comments. We have carefully considered the suggestions and will revise our manuscript accordingly. We conducted some additional experiments in this global rebuttal space. Responses to other specific comments can be found in the in...
NeurIPS_2024_submissions_huggingface
2,024
Summary: The author proposes the epsilon-softmax technique as a method to address label noise. Epsilon-softmax facilitates peaky predictions by increasing the value of the highest prediction, and it also functions to reduce the magnitude of the gradient when the prediction aligns with the given label. The author introd...
Rebuttal 1: Rebuttal: Thanks very much for your positive comments. We would like to offer the following responses to your concerns. **1. Response to Weakness 1** Thanks for your insightful comments. --- About the better trade-off The better trade-off means that we achieve better performance on both fitting ability...
null
null
null
null
null
null
Bileve: Securing Text Provenance in Large Language Models Against Spoofing with Bi-level Signature
Accept (poster)
Summary: The robustness of previous watermark algorithms can lead to a type of spoofing attack in which an attacker modifies the watermarked text to contain harmful content while ensuring the watermark can still be detected. This paper introduces a bi-level signature scheme called Bileve to mitigate spoofing attacks an...
Rebuttal 1: Rebuttal: Thank you for your questions and feedback. We appreciate the opportunity to address your concerns here. **W1: Quality** The key baseline of this work is SLS, as it also focuses on defeating spoofing attacks. Bileve has already shown improvements over SLS, indicating potential for further enhance...
Summary: The paper presents a novel approach to secure the provenance of texts generated by large language models (LLMs) through a bi-level signature scheme. This method aims to mitigate spoofing attacks—where malicious actors alter the content generated by LLMs to forge harmful content or misattribute blame—by integra...
Rebuttal 1: Rebuttal: Thanks for your feedback, and we would like to address the weakness (**W**) below. **W1: Generalizability** The primary goal of our experiments was to establish a proof of concept for the Bileve scheme. We believe demonstrating effectiveness with models like OPT-1.3B and LLaMA-7B would provide a...
Summary: This paper proposes to consider spoofing attack, where an attacker wants to prove the proposition like "The person holding this watermark private key used an LLM to write this text A." where text A is constructed by the attacker. The paper proposes a defense against spoofing attacks. Strengths: This paper poi...
Rebuttal 1: Rebuttal: Thanks for your comments and we would like to address the weaknesses (**W**) and questions (**Q**) individually. **W1** >I have doubt ... specific text A. First, we understand that you are emphasizing the possibility of false positives in watermark detection, suggesting that non-watermarked ...
Summary: The submission proposes a spoofing attack on LLM watermarks and a new bi-level scheme meant to protect against spoofing by distinguishing five possible scenarios. The scheme is based on signature bits for integrity checks and rank-based sampling on top of a Kuditipudi-like random key sequence. Strengths: - Th...
Rebuttal 1: Rebuttal: Thanks for your input. We address the weaknesses (**W**) and questions (**Q**) below. **W1: Experimental Results** It is important to clarify that the increase in perplexity observed is **not an order-of-magnitude increase**. Additionally, **high perplexity does not necessarily indicate bad qual...
null
NeurIPS_2024_submissions_huggingface
2,024
null
null
null
null
null
null
null
null
Newton Losses: Using Curvature Information for Learning with Differentiable Algorithms
Accept (poster)
Summary: The paper presents an alternative backpropagation scheme for deep learning with algorithmic losses that combines a preconditioned step on the loss with a gradient step on a least square objective. Two preconditioning methods are investigated: using the Hessian, or the empirical Fisher. Experiments demonstrate ...
Rebuttal 1: Rebuttal: We sincerely appreciate the reviewer for dedicating their time to evaluate our work and helping us improve it further. Below, we have addressed your insightful comments. **Weaknesses** > The soundness of the approach from a theoretical viewpoint is lacking. However, it is probably better to hav...
Summary: The paper proposes second-order optimization with splitting for hard objectives that arise as smoothings of hard problems such as sorting and ranking, in order to address vanishing/exploding gradients. Strengths: It is a well-written and very complete description of algorithms for reproducibility, which i...
Rebuttal 1: Rebuttal: We sincerely appreciate the reviewer for dedicating their time to evaluate our work and helping us improve it further. Below, we have addressed your insightful comments. **Weaknesses** > 1. Insufficient experiments. I'd appreciate adding a comparison here with the SFA technique from there, as i...
Summary: The paper proposes a new method to optimize complex possibly non-smooth and algorithmic losses for neural networks. The approach is based on splitting the problem into two-step procedure, where in the first step we construct and optimize the so-called Newton loss and the second step is based on SGD-type proced...
Rebuttal 1: Rebuttal: We sincerely appreciate the reviewer for dedicating their time to evaluate our work and helping us improve it further. Below, we have addressed your insightful comments. **Weaknesses** > The paper does not contain any proofs or convergence guarantees. It is important to keep in mind that the me...
null
null
Rebuttal 1: Rebuttal: 1 page rebuttal attachment: Illustrations of the gradient of the NeuralSort and logistic DSN losses. Pdf: /pdf/2fd9c91b7402bb975771152d2e88e3c2fa1ebdae.pdf
NeurIPS_2024_submissions_huggingface
2,024
null
null
null
null
null
null
null
null
Improved off-policy training of diffusion samplers
Accept (poster)
Summary: The paper studies the problem of training diffusion models to sample from a target distribution. The contributions are summarized as follows: 1. A codebase is provided for the study of diffusion-based samplers, due to the issue of inconsistent experimental settings in previous research; 2. Exploration in t...
Rebuttal 1: Rebuttal: Thank you for your comments and positive assessment of the paper. ### Evaluation metrics Thank you for your question regarding the different aspects of the methods' behaviours, as measured by various evaluation metrics (here: $\log \hat Z^{\rm RW}$ and $\log \hat Z$). We agree with your suggesti...
Summary: This paper proposes an off-policy diffusion-based sampler training method to match a target distribution and a corresponding exploration strategy and credit assignment to improve it. Strengths: 1. The proposed idea of this paper is interesting, which connects the Euler-Maruyama sampler and GFlowNets. Weaknes...
Rebuttal 1: Rebuttal: Thank you for your comments. Below we've tried to address what we believe to a number of misunderstandings. > Strength: The proposed idea of this paper is interesting, which connects the Euler-Maruyama sampler and GFlowNets First, we'd like to point out that this paper is not the first to connec...
Summary: This paper focuses on the problem of sampling with distributions defined by a black-box and unnormalized energy function. This work provides a comprehensive review of existing works, including both variational methods and policy-based methods, and offers a codebase and benchmark to replicate and evaluate the e...
Rebuttal 1: Rebuttal: Thank you for your comments, in particular, for acknowledging the strength of our comprehensive benchmarking. Regarding your questions about the benchmarks and their dimensionality, first, we kindly direct you to the response to all reviewers for discussion of the choice of target densities. Se...
Summary: The paper presents a variety of improvements to off-policy strategies for training diffusion models to sample from unnormalized densities. These include maintaining a replay buffer (obtained with Langevin sampling) to enable efficient off-policy exploration and incorporating an inductive bias into...
Rebuttal 1: Rebuttal: Thank you for your review. Below we will answer your questions and concerns. ### Statistically significant improvement over the baselines Firstly, we want to highlight that our method achieved comparable or better results to current SOTA models. In Tables 1 and 2, we highlighted the best results...
Rebuttal 1: Rebuttal: We thank all the reviewers for the effort they put into reviewing our paper and are grateful for the constructive feedback. We appreciate the reviewers remarking that the paper is well-written and well-organized (FAJh), studies an important problem (5SPJ, 6eYp), and does comprehensive benchmarking ...
NeurIPS_2024_submissions_huggingface
2,024
null
null
null
null
null
null
null
null
Can We Leave Deepfake Data Behind in Training Deepfake Detector?
Accept (poster)
Summary: This study introduces a novel training strategy for Deepfake detection using real, blendfake, and deepfake datasets. By designing an oriented progressive regularizer and a feature bridging module, the proposed approach effectively extracts forgery information from the training data, resulting in enhanced gener...
Rebuttal 1: Rebuttal: Thanks for the insightful comments. Here, we carefully clarify each issue mentioned by the respected reviewer. **Q1. What is the motivation for conceptualizing real to fake as a progressive transition? Why it should be continuous rather than discrete?** **R1.** Thanks for this concern. We ende...
Summary: The authors introduced a method aimed at detecting deepfakes. Their approach, known as the Oriented Progressive Regularizer (OPR), employs a progressive transition strategy. This strategy is designed to enable the model to effectively train on a combination of blendfake and deepfake data, ultimately leading to imp...
Rebuttal 1: Rebuttal: # **Response to Reviewer Ssp1** We are thankful for the reviewer's positive comments and interest in our research. We hope that the following point-by-point responses will enable the respected reviewer to further recognize our work. **Q1. The argument of 'blendfake data is sufficient' is based on...
Summary: This paper investigates the generalization ability of deepfake detectors and proposes a novel training approach using "blendfake" data to enhance the model's learning of generic forgery artifacts. The authors point out that existing state-of-the-art methods do not incorporate deepfake data in their training pr...
Rebuttal 1: Rebuttal: # **Response to Reviewer t2ao** Thanks for your comments. Below we provide a point-by-point response to address the concern from the respected reviewer. **Q1. The attribution of the unorganized latent-space distribution lacks comprehensive experiments.** **R1.** Thanks for the concern. Please re...
Summary: The paper explores the utilization of blendfake and pseudo-fake data in training deepfake detectors. It argues that the significance of deepfake samples has been underestimated due to insufficient exploration. To better exploit both pseudo-fake and deepfake data, the paper introduces a progressive transition f...
Rebuttal 1: Rebuttal: # **Response to Reviewer LWpa** We sincerely appreciate the reviewer's positive comments and rating on our paper, and the following are our point-to-point responses **Q1. Choice of Blend Algorithms: Why apply SBI and CBI instead of other Blendfake methods?** **R1.** Thanks for your thoughtful su...
Rebuttal 1: Rebuttal: We sincerely thank all reviewers for their valuable time and constructive comments, and we are strongly encouraged by their recognition of several strengths of our submission, including: - **Fresh perspective/Well-motivated** (Reviewers LWpa, Ssp1) - **Extensive/Robust Evaluations** (Reviewers L...
NeurIPS_2024_submissions_huggingface
2,024
null
null
null
null
null
null
null
null
Homology Consistency Constrained Efficient Tuning for Vision-Language Models
Accept (poster)
Summary: A Homology Consistency (HC) constraint for efficient transfer on VLMs is proposed in this paper, which explicitly constrains the correspondence of image and text latent manifolds through structural equivalence based on persistent homology in downstream tuning. The proposed method tracks the persistence of the ...
Rebuttal 1: Rebuttal: Thank you for your valuable concerns. **W1&Q1**: Referring to the widely-acknowledged evaluation standard in the same field, we conduct experiments on few-shot learning in the 1-/2-/4-/8-/16-shot setting. We find that perturbations near the optimal scaling hyper-parameters $\eta$, $\lambda$, $\om...
Summary: The paper identifies a key issue with existing methods for tuning pre-trained vision-language models to downstream tasks with limited data: they adjust the alignment between image and text based solely on observed samples, which may not generalize well beyond the training data. To address this issue, the paper...
Rebuttal 1: Rebuttal: Thank you for your constructive comments and insightful questions. **W1&Q1**: Our main differences and advantages over existing image-text alignment techniques are that we explicitly constrain the structural equivalence between image and text latent manifolds, and achieve a topological consistenc...
Summary: The paper introduces a Homology Consistency (HC) constraint for efficient transfer learning on vision-language models (VLMs), ensuring task-specific image-text alignment while preserving general knowledge by using structural equivalence based on persistent homology. This approach mimics the topology of latent ...
Rebuttal 1: Rebuttal: Thank you for your constructive comments. **W1**: Our main results, the performance comparisons between baselines and our proposed HC/HC* on 1-/2-/4-/8-/16-shot settings over 11 benchmark datasets are shown in Fig. 3, and the corresponding detailed numerical results are in Appendix C. It can be f...
Summary: This paper proposes Homology Consistency (HC) constraint for transfer learning on VLMs, and it explicitly constrains the correspondence of image and text latent manifolds by structural equivalence based on persistent homology in downstream tuning. Strengths: 1. The proposed method is well-founded and clearly ...
Rebuttal 1: Rebuttal: Thank you for your valuable concerns, and we would like to clarify as follows. **W1**: Since our proposed HC constraint incurs no additional cost in inference and the cost increase in offline training is marginal (less than 1.0%), we do not discuss computational cost in our paper. Following your ...
Rebuttal 1: Rebuttal: We sincerely thank all reviewers for reviewing this paper, and have provided detailed responses to all the concerns raised by the reviewers. Pdf: /pdf/b4c1b66a0b7a9a1d93a54da846fb17a1195c5050.pdf
NeurIPS_2024_submissions_huggingface
2,024
null
null
null
null
null
null
null
null
Template-free Articulated Gaussian Splatting for Real-time Reposable Dynamic View Synthesis
Accept (poster)
Summary: The authors propose to combine a reposable 4D reconstruction from multi-view video based on a skeletal LBS model with 3D Gaussian splatting. To this goal they introduce a novel strategy for estimation of the skeletal model from a superpoint clustering. The results demonstrate a superior image quality and, than...
Rebuttal 1: Rebuttal: Thank you for the detailed comments. - **Limitations/W2**: Limitations and broader impacts **Answer**: We will move the limitations and broader impacts to the main paper in the final version. - **Q1**: More discussion about Eq. 13. **Answer**: Based on $g_i$ calculated by Eq. 13, we ca...
Summary: This paper presents a novel approach for learning articulated objects, alongside the skeletons/kinematic trees directly from input videos, eliminating the need for pre-defined meshes or hand-crafted skeleton priors. Specifically, the paper introduces a hierarchical 3D Gaussian representation, where a set of s...
Rebuttal 1: Rebuttal: We thank the reviewer for the useful suggestions. - **Q1**: Higher resolution **Answer**: We provide the results at the same resolution as WIM and AP-NeRF in the attached `PDF`. We believe the improved performance mainly comes from the powerful representation ability of 3D-GS and the better ...
Summary: The paper introduces a method combining 3D Gaussian Splatting and superpoints for dynamic object modeling, achieving real-time rendering and high visual fidelity. Empirical results show that the proposed method achieves state-of-the-art results on several benchmarks. Strengths: 1. The paper is well-written an...
Rebuttal 1: Rebuttal: We sincerely thank you for your time and efforts. - **W1**: Lacking in innovation **Answer**: While our work builds upon previous works, to the best of our knowledge, it is the first work to discover the skeleton of articulated objects represented by 3D Gaussian Splatting. - **W2**: Distingu...
Summary: The paper proposes a novel approach for reconstructing reposable dynamic 3D objects from RGB videos using Gaussian Splatting, without requiring any template as input. To achieve this, the paper suggests grouping Gaussians around superpoints, which are intended to represent rigid parts of the scene. By optimiz...
Rebuttal 1: Rebuttal: Thanks for your careful and valuable comments. Below are our responses to the specific points raised. - **Q1/W2**: Optimization time and required resources **Answer**: Similar to 3D-GS, the optimization time and required resources are dependent on the number of Gaussians. For the D-NeRF datase...
Rebuttal 1: Rebuttal: We thank the reviewers for their positive and constructive feedbacks. The attached `PDF` contains 3 tables and 2 figures. Pdf: /pdf/2c72137e9137f6bedad6dae3ce1b3b67fa96ef28.pdf
NeurIPS_2024_submissions_huggingface
2,024
null
null
null
null
null
null
null
null
An Empirical Analysis of Compute-Optimal Inference for Problem-Solving with Language Models
Reject
Summary: This paper explores compute-optimal inference for large language models (LLMs), focusing on designing models and strategies that balance additional inference-time computation with improved performance. The study evaluates the effectiveness and efficiency of various inference strategies, including Greedy Search...
Rebuttal 1: Rebuttal: > Concern 1: Although the paper offers quite thorough experimental analysis, it does not look deep in terms of theoretical ideas (although there are 2 theorems), which may be a problem for a flagship venue like NeurIPS. Our main focus is on formalizing the compute-optimal inference problem, desig...
Summary: The paper presents an approach to select an optimal inference strategy for LLMs and empirical analysis on Math problem solving tasks. The main idea is to select an inference strategy based on a computational budget (FLOPs). The underlying policy model samples solutions by generating tokens based on the budget ...
Rebuttal 1: Rebuttal: > Concern 1: In terms of the method itself, I was not sure if it is very novel. It seems to be a smaller variation on the tree search methods that search for solutions in the generated space Our emphasis in this work is on formulating and studying a new setting of compute-optimal inference. As pa...
Summary: This paper investigates the optimal training configurations of large language models (LLMs) during inference. The proposed inference strategy, REward BAlanced SEarch (REBASE), combines the strengths of Monte Carlo Tree Search (MCTS) with reduced inference costs, resulting in improved performance on math-domain...
Rebuttal 1: Rebuttal: > Concern 1: Did you take into account the inference cost of the reward model (RM) in your analysis? As REBASE uses the RM to judge the quality of intermediate solutions more frequently than other sampling strategies, such as weighted majority voting, it's crucial to consider this aspect to provide a holist...
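For reference, reward-weighted majority voting, the baseline strategy named in this exchange, is straightforward to sketch. The `sample` and `reward` callables below are hypothetical stand-ins for a policy model and a reward model; `n_samples` plays the role of a fixed compute budget.

```python
from collections import defaultdict
from typing import Callable

def weighted_majority_vote(question: str,
                           sample: Callable[[str], tuple[str, str]],
                           reward: Callable[[str, str], float],
                           n_samples: int) -> str:
    """Reward-weighted majority voting (sketch). `sample` draws one
    candidate solution as (reasoning, final_answer); `reward` scores it.
    Votes for each distinct final answer are weighted by reward, and the
    highest-scoring answer wins."""
    votes: dict[str, float] = defaultdict(float)
    for _ in range(n_samples):
        reasoning, answer = sample(question)
        votes[answer] += reward(question, reasoning)
    return max(votes, key=votes.get)
```

The rebuttal's point is that a fair FLOPs accounting must also count the reward-model calls, which a tree-search strategy such as REBASE issues more often than this one-score-per-sample loop.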
null
null
Rebuttal 1: Rebuttal: **General Response** We are grateful to all reviews for their insightful comments. We appreciate that reviewers found our method to be novel (PMgX), basis for analyzing inference scaling law to be comprehensive (PMgX, pgJ7, drh5), and our topic to be interesting (pgJ7, drh5). We summarize our c...
NeurIPS_2024_submissions_huggingface
2,024
null
null
null
null
null
null
null
null
Curriculum Fine-tuning of Vision Foundation Model for Medical Image Classification Under Label Noise
Accept (poster)
Summary: The authors propose Cufit, a curriculum fine-tuning method for improving the performance of medical image classification under the noisy labels setting. The method shows strong performance against other baselines on several medical datasets. The authors have also provided results on a non-medical dataset. Str...
Rebuttal 1: Rebuttal: ### **Q2, W6: Experimental results with LoRA** We greatly appreciate your suggestion. We have conducted the experiment primarily using the Rein adapter, as it achieves excellent performance in domain-generalized semantic segmentation, and we believe it has a chance to achieve excellent performance in ...
Summary: In this paper, the authors propose a curriculum learning strategy for fine-tuning on noisy medical datasets. The key insight is that linear probing with limited training samples can be more robust to label noise. The performance is good compared to previous methods. Strengths: Generally, I think thi...
Rebuttal 1: Rebuttal: ### **Q1, Q2: Clear description of our method.** We apologize for the unclear term usage. Our method trains the model like a multi-task training approach (i.e., all modules are trained simultaneously for a current given batch). We use the term “curriculum” to represent the order of the agreement c...
Summary: This paper presents a curriculum fine-tuning paradigm called Cufit. This method is designed to fine-tune Vision Foundation Models (VFMs) for medical image classification tasks under the presence of noisy labels. The approach leverages the robustness of linear probing and the generalization capabilities of fine...
Rebuttal 1: Rebuttal: ### **Q1. Computational cost of Cufit compare to other noise-robust training methods.** We appreciate your feedback. Since the audience may be curious about the computational cost of our method and other training methods, we will provide the resource usage of these methods in the supplementary mat...
Summary: The paper presents Cufit, a curriculum fine-tuning paradigm for Vision Foundation Models (VFM) aimed at improving medical image classification under label noise. This method leverages the robust feature extraction capabilities of pre-trained VFMs and employs a linear probing strategy to mitigate the impact of ...
Rebuttal 1: Rebuttal: ### **Q1: Discussion of previous methods** We greatly appreciate your valuable feedback about including strengths and weaknesses of previous methods in the paper. Our method outperforms previous methods in the medical image classification using VFMs. As shown in Figure 3 in the paper, our modules...
Rebuttal 1: Rebuttal: We appreciate the reviewers for their valuable comments and constructive feedback on our paper. As summarized by all reviewers, we propose a novel parameter-efficient fine-tuning (PEFT) framework for medical image classification under noisy labels. We believe our framework can outperform previous ...
NeurIPS_2024_submissions_huggingface
2,024
null
null
null
null
null
null
null
null
Unified Speech Recognition: A Single Model for Auditory, Visual, and Audiovisual Inputs
Accept (poster)
Summary: This paper proposes a unified architecture and training method for auditory/visual speech recognition. Building upon this model, the authors introduce a semi-supervised pseudo-labeling method to leverage unlabeled audio-visual data, as well as self-supervised pre-training to enhance model performance. Experime...
Rebuttal 1: Rebuttal: Thank you for your detailed review and time. Below we address the key concerns raised. > How is the weight of the teacher model in self-supervised pretraining initialized? Is it initialized randomly or with pretrained weight on another task? The teacher model is randomly initialised in pre-trai...
Summary: This paper proposes a training methodology for a *single* model which can use *either* audio, visual, or audiovisual features as input for automatic speech recognition. This is done by enforcing that a training batch always includes (feature, label) pairs of all three modalities, using a 1D/2D ResNet-18 feature extr...
Rebuttal 1: Rebuttal: Thank you for your detailed review and time. Below we address the key concerns raised. > Line 104 states a single FC layer on top of the encoder for vocabulary predictions, while line 107 states to use the decoder output sequence, which is subsequently not used as $1-\lambda_{\text{ctc}}=0$. So the decoder is not...
Summary: This paper proposes USR, a unified speech recognition model that leverages pseudo labels during fine-tuning. It introduces a single model capable of handling three tasks—ASR, VSR, and AVSR—simultaneously, delivering state-of-the-art performance. Strengths: 1. The paper is well-organized. Although the USR syst...
Rebuttal 1: Rebuttal: Thank you for your detailed review and time. Below we address the key concerns raised. > The complexity of training current SSL-based VSR or AVSR systems remains a challenge. We recognise that VSR and AVSR systems present unique challenges compared to audio-only systems, and one of our future ...
Summary: This paper unifies the ASR, VSR, and AVSR tasks in a single model and shows the performance benefits of a single model on the LRS3 data. There have been several attempts at unifying these three models, but I think this is the first successful realization. The paper proposes an effective training strategy to av...
Rebuttal 1: Rebuttal: Thank you for your detailed review and time. Below we address the key concerns raised. > the technical novelty is not very strong. Most techniques are well-known or straightforward (e.g., the use of CTC, pseudo-label filtering, etc.). While individual components of our work have been previously...
Rebuttal 1: Rebuttal: We sincerely thank the reviewers for their thoughtful comments, which have greatly contributed to improving our paper. We are pleased that the reviewers recognise the effectiveness of our method (Reviewers d2RY, d9WG, WdRe), the quality of our experiments (Reviewers d2RY, d9WG, Fi7g, WdRe), and th...
NeurIPS_2024_submissions_huggingface
2,024
null
null
null
null
null
null
null
null
DynaMo: In-Domain Dynamics Pretraining for Visuo-Motor Control
Accept (poster)
Summary: This paper presents a way of pre-training vision encoder for robot control. Specifically, instead of using vanilla contrastive or masked autoencoder approaches, this method creates two models: 1) an inverse dynamics model that estimates the transition latent (actions) and 2) a forward dynamics model that takes...
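The two-model setup in this summary can be made concrete with a few lines of PyTorch. The dimensions, the linear stand-in for the vision encoder, and the stop-gradient target below are illustrative assumptions, not the paper's architecture.

```python
import torch
import torch.nn as nn

class DynaMoStyle(nn.Module):
    """Sketch of joint inverse/forward dynamics pretraining as described
    above. The inverse model infers a latent "action" from consecutive
    frame embeddings; the forward model must predict the next embedding
    from the current one plus that latent."""

    def __init__(self, obs_dim: int = 128, latent_dim: int = 16):
        super().__init__()
        self.encoder = nn.Linear(512, obs_dim)             # stand-in vision encoder
        self.inverse = nn.Linear(2 * obs_dim, latent_dim)  # (z_t, z_{t+1}) -> a_t
        self.dynamics = nn.Linear(obs_dim + latent_dim, obs_dim)

    def loss(self, frame_t: torch.Tensor, frame_t1: torch.Tensor) -> torch.Tensor:
        z_t, z_t1 = self.encoder(frame_t), self.encoder(frame_t1)
        a_t = self.inverse(torch.cat([z_t, z_t1], dim=-1))
        z_t1_pred = self.dynamics(torch.cat([z_t, a_t], dim=-1))
        # Stop-gradient on the target is one common choice (an assumption
        # here); the prediction error trains the encoder and both heads.
        return ((z_t1_pred - z_t1.detach()) ** 2).mean()

model = DynaMoStyle()
model.loss(torch.randn(4, 512), torch.randn(4, 512)).backward()
```

No ground-truth actions appear anywhere in the objective, which is what makes the pretraining action-free.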
Rebuttal 1: Rebuttal: Thank you for your thorough review and constructive feedback. We are glad that you found our approach to visual pretraining novel. We will address each of your concerns below. **"Instead of having pre-training and fine-tuning using the same dataset… train on a mass amount of data… finetune to a s...
Summary: This paper presents a self-supervised model, DynaMo, for pretraining visual encoders adopted for visuo-motor control. The targeted downstream task is imitation learning for robotic manipulation. Instead of using an out-of-domain dataset for pretraining and then transferring to a new domain using alternative te...
Rebuttal 1: Rebuttal: Thank you for your insightful review and for suggesting these papers. We are glad that you found our action-free assumption innovative. After reading these papers, it is clear that our work and indeed many others in the field of representation learning and imitation learning have been inspired by ...
Summary: This paper presents a self-supervised learning method for robot learning that learns representations by using data from demonstrations. The objective is based on learning latent actions from inverse dynamics, and learning forward dynamics model that uses such latent actions as inputs. Several techniques are ut...
Rebuttal 1: Rebuttal: Thank you for your thoughtful review, and pointers to missing baselines. We are glad that you find our in-domain visual pretraining setting important. We will address each of your concerns below. **"Is there a way to ensure that baseline methods are well-tuned?"**: We monitor the observation embe...
Summary: This paper presents DynaMo, which uses in-domain data for self-supervision. It jointly learns a latent inverse dynamics model and a forward dynamics model over a sequence of image embeddings. Strengths: This paper is easy to follow. Weaknesses: Simplified Real-World Setup: The real-robot experiments appear ov...
Rebuttal 1: Rebuttal: Thank you for your detailed review and constructive comments. We are glad that you consider our robot experiments a notable strength. We will address each of your concerns below. **Simplified real-world setup**: We would like to clarify that the red marker on the table is for setup reference only...
Rebuttal 1: Rebuttal: We thank the reviewers for your insightful and constructive comments, and for finding our robot results strong (96in, kzSJ) and the approach novel (B6gm, MZTu). To address your concerns, we have run all requested experiments within our compute budget. You can find our detailed response in individu...
NeurIPS_2024_submissions_huggingface
2,024
null
null
null
null
null
null
null
null
Unveiling LoRA Intrinsic Ranks via Salience Analysis
Accept (poster)
Summary: The work presents an algorithm for adapting the rank of the LoRA matrices according to a novel “saliency metric” assigned to each singular value of the LoRA matrices. The saliency measure is computed over a sequence of steps (a time window) during training by computing two quantities at the en...
Rebuttal 1: Rebuttal: We appreciate the constructive suggestions provided by the reviewer. We provide a detailed explanation and experimental analysis as follows. ***W1: the authors must reference the algorithm they use for “de-cycling” the graph, describing its steps at least in the appendix.*** We provide a detaile...
Summary: The paper introduces SalientLoRA, an approach designed to optimize the intrinsic ranks of LoRA components in LLMs through salience measurement. The method first utilizes salience measurement to analyze the variations and inter-dependencies of singular value magnitudes over time, which helps assess matrix impor...
Rebuttal 1: Rebuttal: ***Question: To fully evaluate the robustness of the proposed method, could you provide detailed ablation studies and analyses for the hyperparameters, including β, γ, Ti, and Tf?*** We appreciate the insightful suggestions provided by the reviewer. In response, we conduct additional experimental...
Summary: This paper proposes SalientLoRA, a new method for adaptively optimizing the intrinsic ranks of low-rank adaptation (LoRA) matrices. The key ideas are: Using singular value decomposition (SVD) to decompose the LoRA matrices and measure the salience/importance of each singular value based on its magnitude, orth...
Rebuttal 1: Rebuttal: We appreciate the constructive feedback provided by the reviewer. We provide a detailed explanation and experimental analysis as follows. ***W1: The article contains some details that are not clearly explained, such as how the R function on line 145 is calculated, and what specifically is done in...
null
null
Rebuttal 1: Rebuttal: ***1. A Detailed Explanation of the De-cycling Process.*** Since the dependency graph between singular values is a directed cyclic graph, we use a depth-first search (DFS) algorithm to detect and remove cycles. Specifically, we begin by performing a depth-first traversal of each node in the graph...
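The traversal described in this rebuttal is the textbook back-edge removal; a compact version is below. Which edge inside a detected cycle gets dropped is a design choice: the sketch drops the first back edge encountered, whereas the paper may break ties by salience.

```python
def remove_cycles(graph: dict[int, list[int]]) -> dict[int, list[int]]:
    """DFS-based de-cycling as outlined above: traverse each node
    depth-first and drop any back edge (an edge into a node still on the
    current DFS stack), which is exactly what closes a cycle. Keeping
    tree, forward, and cross edges yields an acyclic graph. Assumes every
    node appears as a key in `graph`."""
    WHITE, GRAY, BLACK = 0, 1, 2          # unvisited / on stack / finished
    color = {u: WHITE for u in graph}
    kept: dict[int, list[int]] = {u: [] for u in graph}

    def dfs(u: int) -> None:
        color[u] = GRAY
        for v in graph[u]:
            if color[v] == GRAY:          # back edge: dropping it breaks the cycle
                continue
            kept[u].append(v)
            if color[v] == WHITE:
                dfs(v)
        color[u] = BLACK

    for u in graph:
        if color[u] == WHITE:
            dfs(u)
    return kept

print(remove_cycles({0: [1], 1: [2], 2: [0, 3], 3: []}))  # edge 2 -> 0 is dropped
```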
NeurIPS_2024_submissions_huggingface
2,024
null
null
null
null
null
null
null
null
Learning Multimodal Behaviors from Scratch with Diffusion Policy Gradient
Accept (poster)
Summary: This paper introduces DDiffPG for online reinforcement learning with multi-modal behaviour discovery. DDiffPG consists of two parts: 1) a new policy improvement method to stabilise the diffusion policy by cloning a target action; 2) a mode discovery mechanism to train mode-specific and intrinsic Q functions. I...
Rebuttal 1: Rebuttal: We thank the reviewer for their feedback. Due to the rebuttal limit and the number of questions, our responses are concise. We are happy to provide more detailed answers during the discussion period. > The paper is hard to follow ... for improvement. Based on the reviewer's feedback, we will: 1....
Summary: This paper addresses the challenges associated with employing diffusion policy in online reinforcement learning (RL), particularly the intractability of policy likelihood approximation and the bias towards a single mode. The author introduces the Deep Diffusion Policy Gradient (DDiffPG) method, which decouples...
Rebuttal 1: Rebuttal: We thank the reviewer for their positive feedback and insightful comments. We want to address the reviewer's concerns and questions as follows. > Several claims require additional support. ... enhance their reliability. We thank the reviewer for the constructive comment. First, given the RL objec...
Summary: This paper aims to solve online RL problems with diffusion policy. It includes: 1. a diffusion policy optimization method for online diffusion training; 2. a combination of an intrinsic-reward-motivated skill discovery method and mode-seeking Q-learning to facilitate exploration and prevent mode-collapse behavio...
Rebuttal 1: Rebuttal: We thank the reviewer for the positive feedback and insightful comments. We address the reviewer's concerns and questions as follows. > The proposed diffusion training objective ... further application. We proposed diffusion policy gradient, a method that combines RL and behavioral cloning (BC) ...
null
null
Rebuttal 1: Rebuttal: We would like to thank all reviewers for their feedback and constructive suggestions on our manuscript. We are glad that the reviewers find our paper to be: * introducing a novel and interesting idea (Reviewer HLNL, Reviewer TVD7) * having informative and good-quality visualizations and experiment...
NeurIPS_2024_submissions_huggingface
2,024
null
null
null
null
null
null
null
null
AdaNeg: Adaptive Negative Proxy Guided OOD Detection with Vision-Language Models
Accept (poster)
Summary: The paper "AdaNeg: Adaptive Negative Proxy Guided OOD Detection with Vision-Language Models" presents a novel approach to out-of-distribution (OOD) detection using pre-trained vision-language models (VLMs). The primary innovation is the introduction of adaptive negative proxies, which are dynamically generated...
Rebuttal 1: Rebuttal: Dear Reviewer *eV4A*, We sincerely thank you for the constructive comments and recognition on our work! Please find our responses below. ### **Q1: Potential Overhead in Memory Management: The implementation of a memory bank for caching features may introduce significant overhead in memory manage...
Summary: In this paper, the authors propose AdaNeg, a test-time adaptation method for CLIP-based post-hoc OOD detection. AdaNeg is an extension of NegLabel and introduces a class-wise memory bank for each ID and negative label. The memory bank is gradually filled with ID and OOD features during model deployment. The...
Rebuttal 1: Rebuttal: Dear Reviewer *Pmx9*, We sincerely thank you for the constructive comments and recognition on our work! Please find our responses below. ### **Q1: Analyses on the stability of our method with different ID and OOD sample mixture ratios** **A1:** Many thanks for the detailed and valuable comment...
Summary: This paper introduces a new algorithm for Out-Of-Distribution (OOD) sample detection. First, it analyzes the shortcomings of previous Vision-Language OOD detection methods and proposes improvements based on these findings. Specifically, the paper presents a scheme for online updating of the memory bank during ...
Rebuttal 1: Rebuttal: Dear Reviewer *yxd9*, We sincerely thank you for the constructive comments! We hope our following responses can address this reviewer's concerns. ### **Q1: The motivation in this paper is not very clear. Specifically, in Figure 1(a), it is not evident why the newly proposed AdaNeg is better tha...
Summary: The authors introduce a new approach to leverage the pre-trained vision-language model for identifying out-of-distribution (OOD) samples. Compared to prior works that employ consistent negative labels across different OOD datasets, they introduce adaptive negative proxies to dynamically generate text labels du...
Rebuttal 1: Rebuttal: Dear Reviewer *ACdF*, We sincerely thank you for the constructive comments and recognition on our work! Please find our responses below. ### **Q1: While the proposed AdaNeg shows clear improvements over training-free baselines, its overall performance on ImageNet still lags behind training-base...
Rebuttal 1: Rebuttal: ## **Common Responses to All Reviewers** **Dear Reviewers, Area Chairs, and Program Chairs:** We are grateful for the constructive comments and valuable feedback from the reviewers. We are glad that the reviewers found our idea novel (Reviewers ACdF and eV4A) and our design interesting (Reviewer P...
NeurIPS_2024_submissions_huggingface
2,024
null
null
null
null
null
null
null
null
Supra-Laplacian Encoding for Transformer on Dynamic Graphs
Accept (poster)
Summary: This paper introduces a new method called Supra-Laplacian Encoding for spatio-temporal Transformers (SLATE) to deal with dynamic graph challenges. Its core approach is to enhance the graph transformer (GT) architecture by integrating spatio-temporal information more efficiently. It deploys a new technique to co...
Rebuttal 1: Rebuttal: **We thank reviewer n3Tf for their meaningful and valuable comments.** **Q.1) How does the model perform as the size of the graph increases?** Scaling our SLATE method to graphs with ~10, 000 nodes has been one central objective in our submission. Through engineering techniques like using FlashA...
Summary: This paper proposes SLATE, a novel method for link prediction in dynamic graphs. SLATE transforms dynamic graphs into multi-layer networks and generates a unified spatio-temporal encoding by leveraging the spectral properties of the supra-Laplacian matrix. It uses a fully connected transformer architecture to ...
Rebuttal 1: Rebuttal: **We thank reviewer G1C1 for their questions and remarks, we hope the explanations below will answer their concerns.** **Q1 & W1.) The explanation for adding temporal connections seems inadequate, especially the description of *AddTempConnection()* in Algorithm 1. The explanation for adding tempo...
Summary: This paper proposes a spatial-temporal encoding for transformers on dynamic graphs. Specifically, graphs at each time step are treated as a single multilayer graph and packed into a larger adjacency matrix, with temporal self-connections between each node and its past. Eigenvectors of the constructed Laplacian...
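The construction in this summary is mechanical enough to write down. The sketch below builds the block supra-adjacency, adds the temporal self-connections, and returns the leading Laplacian eigenvectors; the choice of the combinatorial (unnormalized) Laplacian and of which eigenvectors to keep is an assumption, not necessarily SLATE's exact recipe.

```python
import numpy as np

def supra_laplacian_encoding(adjs: list[np.ndarray], k: int) -> np.ndarray:
    """Pack T per-step adjacency matrices of an n-node dynamic graph into
    one (T*n) x (T*n) supra-adjacency: spatial edges on the block diagonal,
    plus temporal self-connections linking each node to its own copy at the
    previous step. The first k non-trivial Laplacian eigenvectors then act
    as a joint spatio-temporal positional encoding."""
    T, n = len(adjs), adjs[0].shape[0]
    A = np.zeros((T * n, T * n))
    for t, a in enumerate(adjs):
        A[t * n:(t + 1) * n, t * n:(t + 1) * n] = a      # spatial edges at step t
    idx = np.arange(n)
    for t in range(1, T):
        A[t * n + idx, (t - 1) * n + idx] = 1.0          # temporal self-links
        A[(t - 1) * n + idx, t * n + idx] = 1.0
    L = np.diag(A.sum(axis=1)) - A                        # combinatorial Laplacian
    _, eigvecs = np.linalg.eigh(L)                        # ascending eigenvalues
    return eigvecs[:, 1:k + 1]                            # skip the constant mode

# Two snapshots of a 3-node path graph -> encoding of shape (6, 2):
a = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]], dtype=float)
print(supra_laplacian_encoding([a, a], k=2).shape)
```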
Rebuttal 1: Rebuttal: **We thank reviewer nC2C for their meaningful and valuable comments.** **W1 on scalability:** As any Transformer model, SLATE’s main bottleneck in terms of scalability is the quadratic complexity of the attention matrix. However, as you pointed out, we use Flash attention to mitigate this issue....
Summary: This work introduces Supra-Laplacian encoding for spatio-temporal Transformers (SLATE) which aims to learn both spatio and temporal information in a dynamic graph with a transformer architecture. The key is to convert Discrete Time Dynamic Graphs into multi-layer networks and then extract the spectral features...
Rebuttal 1: Rebuttal: **We thank reviewer kt1z for their meaningful and valuable comments.** **W1 on scalability:** Like any Transformer model, SLATE's primary scalability bottleneck is the quadratic complexity of the attention matrix. However, as noted by reviewer nC2C, we mitigate this with FlashAttention, allowing ...
Rebuttal 1: Rebuttal: # Global response to reviewers We would like to thank all the reviewers for their excellent feedback, their relevant questions, the enthusiastic reception of our SLATE method and their encouragement. We would like to clarify here some key points raised by the reviewers along the two main lines o...
NeurIPS_2024_submissions_huggingface
2,024
null
null
null
null
null
null
null
null
Large Pre-trained time series models for cross-domain Time series analysis tasks
Accept (poster)
Summary: Training large time series (TS) models is often limited by the scarce data available for a specific application. Existing pretraining methods use a simplistic tokenization scheme where the TS is cut up into equally sized parts, independent of its content. The newly proposed method *Large Pre-trained Time-serie...
Rebuttal 1: Rebuttal: We thank the reviewer for their comments. We will address them as follows: **Domain-generalization settings** We wish to emphasize that we generalize to a wide range of domains, but we need to learn a segmentation module for each of the domains. Generalizing to unseen domains is an important re...
Summary: The paper introduces a new approach for creating pre-trained models for time-series data, similar to those used in language and vision tasks. The authors propose a model called Large Pre-trained Time-series Models (LPTM), which includes an innovative adaptive segmentation module to handle diverse time-series d...
Rebuttal 1: Rebuttal: We thank the reviewer for the comments and suggestions. We address them as follows: **Aggregate rank metric** We thank the reviewer for the suggestion. We will add the average rank of each model as:

| Model | Score |
|-----------------------|-------------|
| AutoARIMA ...
Summary: The paper proposes Large Pre-trained Time-series Models (LPTM), a novel method designed to improve the efficiency and performance of time-series analysis across multiple domains. The key contribution is an adaptive segmentation module that automatically identifies optimal segmentation strategies for diverse d...
Rebuttal 1: Rebuttal: We thank the reviewer for their comments, which we address as below: **Intuition on segmentation** We note that even for vision models such as ConvNets and ViT, images are ingested as fixed-size patches (such as 16 x 16), which have some similarity to segments in time-series. Moreo...
Summary: This paper proposes a novel contribution to pretrained time series models for forecasting and classification by paying attention to the fact that currently several transformer models take time series segmentations of the same size, regardless of the particular characteristics of the time series in consideratio...
Rebuttal 1: Rebuttal: We thank the reviewer for the comments. We address them as follows: **The proposed framework is not differentiable..** As stated in the paper we did not observe any instability in the training. Yes, the segmentation of time-series is a discrete operation. LPTM tackles this challenge via a novel ...
Rebuttal 1: Rebuttal: Tables for the standard deviation of RMSE across 10 runs, mean rank of the models and 3 additional classification tasks. Pdf: /pdf/523e5175f69e0f08c5403ba64aa64981b1c4d2e4.pdf
NeurIPS_2024_submissions_huggingface
2,024
null
null
null
null
null
null
null
null
Weakly-Supervised Cortical Surfaces Reconstruction from Brain Ribbon Segmentations
Reject
Summary: The submission presents a deep learning-based approach for cortical surface reconstruction (CSR) from brain MRI data using weak supervision derived from cortical brain segmentation maps. The claimed contributions are: 1. Weak Supervision Paradigm: The authors introduce a new weakly supervised paradigm for re...
Rebuttal 1: Rebuttal: **C1: Limited novelty: this work combines [1] and [2]. Explain methodology and exp setup of [1-3]** **A1**: Our SegCSR framework is model agnostic, and we choose CoCSR [1] as the baseline b/c it is the SOTA and able to reconstruct multiple cortical surfaces simultaneously. SegCSR is weakly super...
Summary: The authors proposed a novel method to jointly reconstruct multiple cortical surfaces using weak supervision from brain MRI ribbon segmentation results, in which the midthickness surface is deformed inward and outward to form the inner (white matter) and outer (pial) cortical surfaces. The proposed method is...
Rebuttal 1: Rebuttal: **C1: Overclaim. The pseudo ground truth (pGT) surface mentioned in the manuscript seems the GT mesh in other approaches, obtained by Marching Cubes (MC)/FreeSurfer. Why is the proposed method weakly supervised?** **A1**: We have summarized the supervision signals used by our method and others in...
Summary: The paper presents a deep learning approach to jointly reconstruct multiple cortical surfaces using weak supervision from brain ribbon segmentations derived from brain MRIs. The method leverages the midthickness surface and deforms it inward and outward to fit the inner and outer cortical surfaces by jointly l...
Rebuttal 1: Rebuttal: **C1: The paper's central contribution of weak supervision is undermined by the fact that the model is trained on pseudo ground truth (pGT) surfaces for WM and pial surfaces.** **A1**: Previous DL methods typically rely on pGT surfaces from conventional pipelines as optimization targets, which we...
Summary: The paper presents a novel deep learning method for the reconstruction of cortical surfaces from 3D MRI. The proposed method follows an approach learning explicit surface deformations, in which a CNN is used to predict three velocity fields, corresponding to the pial, white matter and midthickness surfaces. Un...
Rebuttal 1: Rebuttal: **C1: Main motivation (prolonged time for generating pGT surfaces) is doubtful b/c the pGT surfaces can be generated automatically offline. Recent pipelines, e.g. FastSurfer, can extract surfaces in a fraction of the time** **A1**: The lengthy time to generate pGT surfaces is not our only motivat...
Rebuttal 1: Rebuttal: **We thank all reviewers for their efforts in reviewing our paper and providing comments.** **1. Motivation (Reviewers Guru)** - Conventional pipelines involve multiple processing steps, leading to lengthy processing time. - Each pipeline requires meticulously tuned parameters, posing challenges...
NeurIPS_2024_submissions_huggingface
2,024
null
null
null
null
null
null
null
null
Flaws can be Applause: Unleashing Potential of Segmenting Ambiguous Objects in SAM
Accept (poster)
Summary: The paper presents a novel approach to handling the inherent ambiguities in the SAM used for image segmentation. SAM, despite its robustness, often exhibits sensitivity to slight variations in prompts and object granularity, leading to inconsistent predictions. The authors propose a new framework leveraging a ...
Rebuttal 1: Rebuttal: Thanks for appreciating our paper as addressing a critical challenge and contributing to the advancement of robust and adaptable segmentation models. We provide pointwise responses to your concerns below. ## Q1. Applicability to real-world non-synthetic and non-medical datasets As shown in Fig. ...
Summary: This paper proposes a SAM-based framework to address the ambiguous image segmentation problem. The authors present an optimization framework based on a conditional variational autoencoder, which simultaneously models the prompt and the granularity of the object using a latent probability distribution. This app...
Rebuttal 1: Rebuttal: We are very glad and appreciate that you had a positive initial impression. Thanks for appreciating our paper as the first work that leverages the inherent properties in vision foundation models for ambiguous image segmentation, demonstrating impressive advantages and value in practical applicatio...
Summary: This paper builds a framework for ambiguous object segmentation on top of SAM prompted with bounding boxes, which is known to be sensitive to small prompt changes. The framework is based on a VAE, and the main idea is to jointly model the prompt and the object granularity with a latent probability distribution...
Rebuttal 1: Rebuttal: We appreciate your positive initial impression and valuable feedback. We look forward to revising our manuscript based on your suggestions. Below are our point-by-point responses to your concerns. For brevity, we address recurring issues only once. ## Q1. General remarks **<Applicable to real-wo...
Summary: This paper aims to convert the flaws in the vision foundation model (e.g., SAM) into advantages for ambiguous object segmentation. To this end, the authors propose a novel framework that employs latent distribution and an optimization architecture. The authors validated the performance of the proposed methods ...
Rebuttal 1: Rebuttal: Thanks for appreciating our paper as harnessing SAM's sensitivity, usually deemed a weakness, to address ambiguous and uncertain predictions. We provide pointwise responses to your concerns below. ## Q1. Method details **<How to extract the mean and standard deviation from networks?>** As noted in ...
Rebuttal 1: Rebuttal: ## Global Response 1. Results of original SAM As suggested by **Reviewer YPmC** and **Reviewer 2dZP**, we have added the results of the original SAM for comparison. As shown in the figure below, SAM (point) and SAM (box) represent the results of the original SAM obtained using different prompts. ...
NeurIPS_2024_submissions_huggingface
2,024
null
null
null
null
null
null
null
null
The Benefits of Balance: From Information Projections to Variance Reduction
Accept (poster)
Summary: This paper introduces a technique called iterative data balancing—altering data distributions to match predefined marginal distributions—that can lead to variance reduction in model predictions. The authors highlight its utility for self-supervised learning, which has been used to train several foundation mode...
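For orientation, the balancing operation itself is classical: iterative proportional fitting (Sinkhorn scaling) alternately rescales rows and columns of a joint distribution until both marginals hit prescribed targets. The sketch below is this textbook procedure, not the paper's specific estimator or analysis.

```python
import numpy as np

def balance(P: np.ndarray, row_marg: np.ndarray, col_marg: np.ndarray,
            n_iters: int = 100) -> np.ndarray:
    """Iterative data balancing in its generic form: alternate row and
    column rescalings of a (strictly positive) joint distribution P until
    its marginals match the given targets."""
    Q = P.copy()
    for _ in range(n_iters):
        Q *= (row_marg / Q.sum(axis=1))[:, None]   # fix the row marginal
        Q *= (col_marg / Q.sum(axis=0))[None, :]   # fix the column marginal
    return Q

P = np.array([[0.5, 0.1], [0.1, 0.3]])
Q = balance(P, row_marg=np.array([0.5, 0.5]), col_marg=np.array([0.5, 0.5]))
print(Q.sum(axis=1), Q.sum(axis=0))  # both close to [0.5, 0.5]
```

The paper's contribution is not this procedure but the statistical claim about it: that such balancing also reduces the variance of empirical functionals of the balanced distribution.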
Rebuttal 1: Rebuttal: Thank you for your insightful comments. We address them below. >**The authors could expand the range of experiments to include a more diverse set of tasks.** Thank you for raising this point. We also show performance on an image retrieval task in Figure 8 of the attached PDF using the Pascal VOC...
Summary: This paper explores the use of data balancing in various self-supervised learning (SSL) frameworks. The authors argue that this iterative algorithm, which is typically used to avoid representation collapse in SSL models, also provides a benefit of reducing the variance of empirical functionals of the distribut...
Rebuttal 1: Rebuttal: Thank you for your helpful comments and suggestions. We address them below. The upcoming comments concern the interpretation of the spectral gap condition (and the second largest singular value $s_2$). To facilitate this discussion, we introduce a simple example by starting with an arbitrary valu...
Summary: This work focusses on data balancing strategies in context of self-supervised learning. The main claim of the paper is that data balancing, commonly used to avoid representation collapse, has a variance reduction effect. The authors introduce an upper bound on the MSE of a balancing estimator, relating it to e...
Rebuttal 1: Rebuttal: Thank you for your comments and questions. We address them below. >**The theory is very extensive. The Appendix contains several pages of proofs that are difficult to parse and come on top of the formalism presented in the main paper.** We provide the complete, self-contained, proofs of all of o...
null
null
Rebuttal 1: Rebuttal: We thank the reviewers for their hard work reviewing our paper and providing concrete comments! We collect the broad points made below and address other reviewer concerns in the individual responses. To summarize, our paper provides three theoretical innovations: 1. The first quantitative and non-...
NeurIPS_2024_submissions_huggingface
2,024
null
null
null
null
null
null
null
null
Improving Subgroup Robustness via Data Selection
Accept (poster)
Summary: This paper proposes a data-centric model debiasing technique to identify and remove data which harm worst-group accuracy. This method removes fewer data than standard balancing techniques and can be adapted for settings with and without group annotations. Experiments are provided on standard group robustness b...
Rebuttal 1: Rebuttal: We thank the reviewer for their review, and address their questions below. **[Further comparisons to the no-information regime]** Our goal was to include the strongest baselines (to the best of our knowledge) and show that our methods perform better/comparably to them. For instance, since Auto-D...
Summary: The paper introduces a method called Data Debiasing with Datamodels (D3M) that addresses the problem of model bias (using the worst-case loss over groups as the metric). The approach leverages a process known as datamodeling to predict model behavior based on training data influence, focusing on removing data ...
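A schematic sketch of the removal step described in these summaries: given precomputed per-example attribution scores toward the worst group's loss (assumed to come from a datamodel-style estimator, which is not shown here), the k most harmful training examples are dropped. The score array and k below are stand-in assumptions, not the paper's exact procedure.

```python
import numpy as np

def select_training_subset(influence_on_worst_group, k):
    """Drop the k training examples whose estimated influence most
    increases the worst group's loss; keep the rest.

    influence_on_worst_group: precomputed per-example attribution
    scores (assumed; e.g., from a datamodel-style estimator).
    """
    n = len(influence_on_worst_group)
    harmful = np.argsort(influence_on_worst_group)[-k:]  # largest scores
    keep = np.setdiff1d(np.arange(n), harmful)
    return keep

scores = np.random.default_rng(1).normal(size=1000)  # stand-in scores
kept = select_training_subset(scores, k=50)
print(len(kept), "examples retained")
```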
Rebuttal 1: Rebuttal: We thank the reviewer for providing a thorough review of our paper. Below, we address the feedback points raised by the reviewer: **[Focus on worst-group accuracy (WGA)]** The reviewer raises the concern of evaluating only WGA without reporting overall accuracy. We note that we report balanced (...
Summary: This paper introduces a new data debiasing technique called Debiasing with Data Attribution (DDA). DDA utilizes the datamodeling framework to identify and eliminate training examples that negatively impact the accuracy of the worst-performing groups. Additionally, the paper presents AUTO-DDA, an extension of DDA...
Rebuttal 1: Rebuttal: We would like to thank the reviewer for their feedback on our work. Below, we address the concerns raised by the reviewer: **[Selection of specific classes chosen for ImageNet experiments]** We selected classes that previous work found to have biases in the ImageNet dataset. Specifically, biases...
Summary: The paper proposes Data Debiasing with Datamodels (D3M), a method to improve machine learning model performance on underrepresented subgroups by removing specific training examples that cause failures. Unlike traditional balancing methods, D3M efficiently debiases classifiers without needing group annotations,...
Rebuttal 1: Title: Please read rebuttal and provide more substantive comments Comment: Dear reviewer FTVd, Your review appears to mostly mention formatting issues. Please read the authors' response to other reviews and provide comments regarding the content of the paper, if you have any. Thanks, AC --- Rebuttal Co...
Rebuttal 1: Rebuttal: Thank you for your reviews! We respond to the questions of each reviewer individually below. We additionally include results with error bars in the attached PDF as requested by Reviewer yCP5. Pdf: /pdf/a9bf007e68534ce595870fb5bcc21d558e941ad5.pdf
NeurIPS_2024_submissions_huggingface
2,024
null
null
null
null
null
null
null
null
Speaking Your Language: Spatial Relationships in Interpretable Emergent Communication
Accept (poster)
Summary: This work investigates the presence of spatial deixis (i.e., spatial references in language that depend on the context of the utterance) in a signalling game within the paradigm of emergent communication. It begins by introducing a variant of the signalling game that requires the sender to communicate the relat...
Rebuttal 1: Rebuttal: # Response to Reviewer vZ26 We thank the reviewer for the insightful feedback and constructive criticisms. We appreciate that the reviewer found our experimental design and overall approach to be of high quality and relevant to the field of EC. We also appreciate that the code was found to be of ...
Summary: This paper proposes a new communication game in the emergent communication framework to analyze the emergence of _deictic reference_, i.e., expressions akin to demonstratives like "this" and "that". These are important expressions in natural language and especially in this emergence literature, since their mean...
Rebuttal 1: Rebuttal: # Response to Reviewer JyuC We thank the reviewer for their comprehensive comments and thoughtful critique. We appreciate that the reviewer has found our paper interesting and that the investigation of deictic references is perceived to be of high value. To address the points raised by the revie...
Summary: The authors design a referential game environment intended to motivate the emergence of spatial references, cast in the form of a task where the target integer must be selected from an integer sequence. The character vocabulary for the message is smaller than the set of integers in the list, and this ...
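A rough, self-contained illustration of the game setup described above: each round samples a sequence of distinct integers and a target position, with the message alphabet deliberately smaller than the integer range so the sender cannot simply name the target and must instead convey its position. All sizes are placeholder assumptions, not the paper's configuration.

```python
import random

VOCAB_SIZE = 5     # message alphabet, much smaller than the integer range
INT_RANGE = 100    # integers the sequence is drawn from
SEQ_LEN = 10

def sample_round(rng=random):
    """One round of the (assumed) game: the sender sees the sequence and
    the target index; the receiver sees the sequence plus the message and
    must pick the target integer."""
    seq = rng.sample(range(INT_RANGE), SEQ_LEN)   # distinct integers
    target_idx = rng.randrange(SEQ_LEN)
    return seq, target_idx

seq, target_idx = sample_round()
print("message alphabet size:", VOCAB_SIZE)
print(seq, "-> target:", seq[target_idx], "at position", target_idx)
```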
Rebuttal 1: Rebuttal: # Response to Reviewer VwFJ We would like to thank the reviewer for the insightful feedback and detailed criticism. We also appreciate that the reviewer found our experimental setup and analysis interesting. To address the points raised by the reviewer: ## Weaknesses > Overall I find the biggest...
Summary: This paper shows that agents can learn spatial references through emergent communication. The authors first create a modified referential game that requires the agents to communicate via messages indicating the relative position of a number. Experiments with the proposed agent architecture show that GRU-based agents can achieve good perform...
Rebuttal 1: Rebuttal: # Response to Reviewer YdNG We would like to thank the reviewer for their insightful comments and feedback. We appreciate that the reviewer found our game setting novel and the measure effective. We address the concerns and weaknesses raised below. ## Weaknesses > The paper is not very easy to f...
Rebuttal 1: Rebuttal: # General Response We would like to thank the reviewers for their constructive comments. We address some common themes in the general response, with more detailed comments in each reviewer rebuttal. Where it was needed, the quoted parts of the review texts were shortened to (...) for brevity. W...
NeurIPS_2024_submissions_huggingface
2,024
null
null
null
null
null
null
null
null
LOVA3: Learning to Visual Question Answering, Asking and Assessment
Accept (poster)
Summary: This paper presents a data augmentation / multi-task learning technique to improve model quality for Visual Question Answering (VQA). The key idea of the paper, motivated by analogy to humans, is that asking questions and assessing answers are key skills alongside answering questions. The ...
Rebuttal 1: Rebuttal: Thank you for your thorough review! We sincerely appreciate your acknowledgment of LOVA3’s motivation, clarity, novelty, effectiveness, and consistent performance gains. **W1: The model size.** We sincerely thank you for your valuable comments. Due to limited GPU resources, it is hard for us to train wi...
Summary: This paper enhances an MLLM's visual understanding capability by training it to ask questions about an image and to evaluate the correctness of given question-answer pairs. To achieve this goal, new training data are extracted from existing datasets and a model is fine-tuned on these data. The experim...
Rebuttal 1: Rebuttal: Thank you for your thoughtful and valuable review. We thank you for your suggestions about extending the research scope to other domains. **W1-1: Why do we apply the three key capabilities in the current static environment rather than in an interactive environment?** Firstly, it should be noted that these t...
Summary: The paper introduces LOVA3, a framework designed to enhance Multimodal Large Language Models (MLLMs) by incorporating not only visual question answering (VQA) but also the capabilities of generating questions (GenQA) and evaluating question-answer pairs (EvalQA). The primary objective is to improve the compreh...
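To make the three task types named above concrete, here is a hypothetical sketch that turns one image's QA pairs into VQA, GenQA, and EvalQA training records; the field names, the prompt string, and the negative-answer sampling are illustrative assumptions, not the paper's actual data format or templates.

```python
import random

def build_multitask_samples(image_id, qa_pairs):
    """Construct VQA / GenQA / EvalQA records from existing QA pairs.
    Schematic only: prompts and negatives are placeholders."""
    samples = []
    answers = [a for _, a in qa_pairs]
    for q, a in qa_pairs:
        # Answering: given the question, produce the answer.
        samples.append({"task": "VQA", "image": image_id,
                        "input": q, "target": a})
        # Asking: given only the image, produce a question.
        samples.append({"task": "GenQA", "image": image_id,
                        "input": "Ask a question about the image.",
                        "target": q})
        # Assessing: judge a (question, answer) pair; a wrong answer
        # drawn from the other pairs serves as a negative example.
        wrong = random.choice([x for x in answers if x != a] or [a])
        samples.append({"task": "EvalQA", "image": image_id,
                        "input": (q, a), "target": "yes"})
        samples.append({"task": "EvalQA", "image": image_id,
                        "input": (q, wrong), "target": "no"})
    return samples

print(build_multitask_samples("img_001",
                              [("What color is the car?", "red"),
                               ("How many dogs are there?", "two")]))
```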
Rebuttal 1: Rebuttal: Thank you for thoroughly reviewing our work! We have carefully considered all your concerns and addressed them in the following responses. **W1: Comparison with SEED-Bench and LLaVA-Bench in using LLMs or MLLMs.** (1) GenQA and EvalQA are two new training tasks, whereas SEED-Bench and LLaVA-Benc...
null
null
null
NeurIPS_2024_submissions_huggingface
2,024
null
null
null
null
null
null
null
null