title
string
paper_decision
string
review_1
string
rebuttals_1
string
review_2
string
rebuttals_2
string
review_3
string
rebuttals_3
string
review_4
string
rebuttals_4
string
global_rebuttals
string
dataset_source
string
conference_year
int64
review_5
string
rebuttals_5
string
review_6
string
rebuttals_6
string
review_7
string
rebuttals_7
string
review_8
string
rebuttals_8
string
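The column listing above (title, decision, up to eight review/rebuttal pairs, a global rebuttal, source, and year) can be treated as a flat record schema. The sketch below is a hypothetical illustration of how one such row might be represented and inspected in Python; the field names follow the schema above, but the sample record values and the `count_reviews` helper are illustrative assumptions, not part of the dataset itself.

```python
# Illustrative sketch: one row of the review dataset as a flat dict.
# Field names mirror the schema above; values here are made up.

SCHEMA = [
    ("title", "string"),
    ("paper_decision", "string"),
    *[(f"review_{i}", "string") for i in range(1, 9)],
    *[(f"rebuttals_{i}", "string") for i in range(1, 9)],
    ("global_rebuttals", "string"),
    ("dataset_source", "string"),
    ("conference_year", "int64"),
]


def count_reviews(record: dict) -> int:
    """Count how many of the review_1..review_8 fields are non-null.

    Papers with fewer than eight reviews leave the remaining
    review/rebuttal columns as null, as in the records below.
    """
    return sum(
        1 for i in range(1, 9) if record.get(f"review_{i}") is not None
    )


# A minimal example record: four populated reviews, the rest null.
record = {
    "title": "Example Paper Title",
    "paper_decision": "Accept (poster)",
    "dataset_source": "NeurIPS_2024_submissions_huggingface",
    "conference_year": 2024,
    **{f"review_{i}": "Summary: ..." for i in range(1, 5)},
    **{f"review_{i}": None for i in range(5, 9)},
}

print(count_reviews(record))  # 4
```

In practice such a dataset would typically be loaded with a library like HuggingFace `datasets` rather than built by hand; the dict form above is just the per-row shape implied by the schema.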
Stress-Testing Capability Elicitation With Password-Locked Models
Accept (poster)
Summary: The paper studies whether fine-tuning can elicit the hidden capabilities of LLMs, especially motivated by the setting of dangerous capabilities evaluations. To provide a specific experimental setup, the paper considers password-locked models, which respond with weak answers unless provided with a fixed pass...
Rebuttal 1: Rebuttal: Thanks for the review! We are glad you found our study well-motivated, our main lines of experiments solid, and our results novel. > I can imagine that we get the results in section 5, not (only) because SFT is good in capability elicitation, but because it's relatively weak in robustly password-...
Summary: To determine whether models are safe to deploy, it's important for labs and regulators to be able to "elicit" model capabilities, including ones that are somehow "hidden" (whether by a malicious deployer, by some unforeseen property of the training data, or by an unaligned model that is able to plan). One way ...
Rebuttal 1: Rebuttal: Thanks for the review! We’re happy to hear you found our framing useful, liked our experimental methodology, and found some of our results interesting/surprising. > [...] I'd say password-locking is a fairly simple way of modeling hidden capabilities and could be quite unlike the threat models th...
Summary: The authors study the problem of testing advanced LLMs for non-obvious capabilities in a toy setting, where a model has been fine-tuned to only perform well when presented with a fixed password, and to perform poorly otherwise. This is meant to simulate the difficulty of finding the correct prompt for elicitin...
Rebuttal 1: Rebuttal: Thank you very much for the detailed feedback! We are glad you found the problem our paper addresses interesting, our reduction to password-locked models creative, and our experiments diverse. >Editorially, I do not think RL is a very important setting, since it seems unlikely that we would ever ...
Summary: This paper studies the problem of eliciting hidden capabilities from language models, through fine-tuning. In order to plant 'locked' capabilities and then do controlled test of elicitation, the authors lock the capabilities using passwords they bake into the prompts at fine-tuning. Then, they try to elicit th...
Rebuttal 1: Rebuttal: Thanks for the thoughtful review! > Is model locking something that people really do in practice? Using a set of words in a prompt is pretty weak in practice, there might be stronger ways to lock a model, such as posthoc methods at decoding time. [...] Using fine-tuning and RL at decoding time is...
Rebuttal 1: Rebuttal: We would like to thank the reviewers for their thoughtful feedback, and are pleased to see that the vote for acceptance is unanimous. Below we address two threads common across several reviews. We are looking forward to further discussion! ## 1. Can password-locked models be unlocked using jailbr...
NeurIPS_2024_submissions_huggingface
2024
null
null
null
null
null
null
null
null
Cooperative Hardware-Prompt Learning for Snapshot Compressive Imaging
Accept (poster)
Summary: The authors present a Federated Hardware-Prompt learning (FedHP) framework to address the fact that compressive snapshot spectral imaging devices may not be easily tuneable against changes in the coded aperture, and that in fact the said access to coded apertures may not be possible due to privacy reasons. The...
Rebuttal 1: Rebuttal: We much appreciate that the Reviewer `aeby` provides valuable comments and finds the method is designed with clear purpose. `R4.1`: The biggest weakness is arguably that the paper covers a somewhat very niche topic, which is the application of a federated learning scheme to compressive snapshot ...
Summary: The paper addresses the challenges faced in snapshot compressive imaging (SCI) systems due to hardware shifts and the need for adaptability across multiple hardware configurations. By introducing a hardware-prompt network and leveraging federated learning, the framework enhances the adaptability and performanc...
Rebuttal 1: Rebuttal: We much appreciate that the Reviewer `1X5M` provides valuable comments and finds the proposed work has significant practical relevance to the study and that the collected SSHD dataset can benefit future research. We will release the dataset and the training/testing code. `R3.1`: The literature r...
Summary: Most existing reconstruction models in snapshot compressive imaging systems are trained using a single hardware configuration, making them highly susceptible to hardware variations. Previous approaches attempted to address this issue by centralizing data from multiple hardware configurations for training, but ...
Rebuttal 1: Rebuttal: We much appreciate that the Reviewer `Jxiy` provides valuable comments and finds the proposed method addresses the issue from the new perspective of hardware with good performance. `R2.1`: The number of clients used in the experiments is still relatively small. Although a simple comparison of th...
Summary: The paper introduces FedHP, a reconstruction method for snapshot compressive imaging systems, which addresses the challenge of cross-hardware learning by proposing a federated learning approach. The key contribution lies in using a hardware-conditioned prompter to align data distributions across different hard...
Rebuttal 1: Rebuttal: We much appreciate that the Reviewer `BdgG` provides valuable comments and finds the problem novel and the method convincing. `R1.1`: There are some typos in the writing. For example, the caption of Figure 3 and the bold parts in the second row of Table 1 and the eighth row of Table 2 are confus...
null
NeurIPS_2024_submissions_huggingface
2024
null
null
null
null
null
null
null
null
PERIA: Perceive, Reason, Imagine, Act via Holistic Language and Vision Planning for Manipulation
Accept (poster)
Summary: The paper proposes a framework that integrates large multimodal language models (MLLMs) and diffusion models to enable holistic language planning and vision planning for long-horizon robotic manipulation tasks with complex instructions. The authors jointly train the MLLM and diffusion model for language reason...
Rebuttal 1: Rebuttal: # Q1 More analysis of Training Cost & Performance between LLM backbone Thanks for the insightful suggestions! Considering most PERIA's computational cost is to align between vision and language, and bridge the gap between pretrained LLMs and image editing models in general domains versus robotics...
Summary: The paper tackles the problem of long-horizon task planning on pick-and-place tasks in the Ravens domain. Given a dataset of trajectories, it first learns the projection to align the vision and language encoder for a multimodal LLM. Then it finetunes both the multimodal LLM and a diffusion model to generate a ...
Rebuttal 1: Rebuttal: # Q1 Effectiveness of Diffusion model & MLLMs & Tasks better described in image Thank you for insightful questions! We appreciate the opportunity to respond point by point: 1. **Tasks better described in image** We highly agree that tasks with subgoals better described in visual space are ...
Summary: The paper proposes a holistic vision-language planning method for long-horizon robot manipulation, by learning a multi-modal large language model (MLLM). The MLLM generates interleaved language actions and keyframe images based on language goal and the initial image. Each pair of generated language and keyfram...
Rebuttal 1: Rebuttal: # Q1 Details of Low-level Policy Sorry for the confusion. Due to limited space, we placed the training details of low-level policy in Appendix E. Thanks for bringing the attention to critical importance of this section for a comprehensive understanding of the PERIA architecture and we plan to in...
Summary: This paper focuses on robotic manipulation with complex instructions. It proposes PERIA, a framework that integrates MLLM and diffusion models to incorporate both language planning and visual planning for long-horizon language-instructed manipulation tasks. Specifically, PERIA first performs a lightweight mult...
Rebuttal 1: Rebuttal: # Q1 Computation resources Sorry for the ambiguity arising from distributed presentation of computational resource requirements across Appendix. The computational cost of PERIA across three primary stages: Perceive (8 V100 GPUs * 8 hours ), Reason & Imagine (8 V100 GPUs * 42 hours), and Act (sing...
Rebuttal 1: Rebuttal: # **General Response** --- **Sincere thanks to all the Reviewers for the valuable suggestions and recognition of our work!** We sincerely appreciate all reviewers' time and efforts in reviewing our paper. We are glad to find that reviewers generally recognized our key contributions and clear p...
NeurIPS_2024_submissions_huggingface
2024
null
null
null
null
null
null
null
null
Toward Global Convergence of Gradient EM for Over-Parameterized Gaussian Mixture Models
Accept (poster)
Summary: The paper studies the convergence of EM for learning mixtures of Gaussians. Specifically, they consider a simplified setting where the Gaussians are in $d$-dimensions and all have covariance $I_d$. They consider an overparameterized version of the problem where they parametrize the mixture they are trying to...
Rebuttal 1: Rebuttal: Thank you for the positive review. We have addressed your concern below. > The results of the paper only work when the ground truth is "trivial" i.e. a single Gaussian. We agree that the single Gaussian ground truth is a simpler case compared to the most general problem. But our setting is nonet...
Summary: This paper talks about the gradient-EM algorithm for over-parameterized GMM. The paper mostly shows the GLOBAL convergence and its rate when using this model to learn a single Gaussian. Strengths: I believe any non-convex global convergence optimization problem is valuable. It is an extension of Dwivedi et al...
Rebuttal 1: Rebuttal: Thanks for your detailed review! We have addressed your questions below. > The over-parametrized model may have severe overfitting problem. We believe this is a misunderstanding. The aim of this paper is not to propose a new algorithm/model, but to understand the convergence behavior of the wide...
Summary: The paper focuses on the setting of a Gaussian Mixture Model with several summands and an input vector produced by one Gaussian distribution, where it employs the Expectation-Maximization rule to infer the model's parameters. Since the problem of having arbitrary number of summands has been unsolved, the paper...
Rebuttal 1: Rebuttal: Thanks for your review and positive comment! We have addressed your question below. > The experimental evaluation is used as a proof of concept and thus is limited. The authors could have (potentially) experimented with several datasets, with varying weights in the GMM, and try to benchmark their...
Summary: The paper considers fitting a single Gaussian with multiple-component Gaussian mixture models (GMM) through the Gradient EM algorithm. While the two balanced over-specified Gaussian setting has been widely studied in the previous work, generalizing it to multiple-component GMM requires significant algebraic ef...
Rebuttal 1: Rebuttal: Thank you for the detailed review. We answer each of your questions below. > The gap between this lower bound and the upper bound is large. Thank you for pointing out this problem. In the initial version we didn't optimize the exponent. Indeed, we can obtain significantly refined results which r...
Rebuttal 1: Rebuttal: We appreciate all the reviewers for their detailed and positive feedback. In the uploaded pdf file, we add several experiments: - Experiment of statistical rates, for questions of Reviewer 6yVv (Figure 1). - Impact of initialization on the convergence speed, for questions of Reviewer DCG2 (Figur...
NeurIPS_2024_submissions_huggingface
2024
null
null
null
null
null
null
null
null
IDGen: Item Discrimination Induced Prompt Generation for LLM Evaluation
Accept (poster)
Summary: This paper proposes a method of generating prompts for evaluating large language models such that the prompts are dynamic and allow for showing meaningful performance gaps between different language models. The authors show that the generated data is more challenging and discriminative than prior datasets. Str...
Rebuttal 1: Rebuttal: **Q:** *If a particular language model is used to generate data using the proposed method, is there any bias where that model will perform better at solving those problems? For example, if Claude generates the prompt set, will the prompt set be easier for Claude than GPT?* **A:** Thank you for yo...
Summary: The paper proposes a prompt synthesis framework for evaluating LLMs to accurately reflect different Large Language Model abilities. The authors develop two models to measure LLMs’ question discriminative power and difficulty. This study presents “instruction gradient” and “response gradient” methods to exploit...
Rebuttal 1: Rebuttal: **Q:** *The proposed methods - “Instruction gradient” and “response gradient” are not properly described in the manuscript. Authors should write the working procedure of these methods in detail in the main manuscript, as these are the centerpiece of the whole question generation process.* **A:** ...
Summary: The paper introduces a novel framework for evaluating Large Language Models (LLMs) based on Item Discrimination (ID) theory, which generates adaptive, high-quality prompts to effectively differentiate model performance. Key contributions include a dynamic evaluation set that evolves with LLM advancements, a s...
Rebuttal 1: Rebuttal: **Q:** *The paper only used one LLM (Hunyuan) to generalize data and did not verify whether the proposed method can generalize to other LLMs.* **A:** Thank you for your question about our paper. Our proposed method is designed for existing LLMs and is **not limited to a particular model**. The wor...
null
null
null
NeurIPS_2024_submissions_huggingface
2024
null
null
null
null
null
null
null
null
Procedure-Aware Surgical Video-language Pretraining with Hierarchical Knowledge Augmentation
Accept (spotlight)
Summary: The paper addresses challenges in surgical video-language pretraining (VLP) due to the knowledge domain gap and scarcity of multi-modal data. It proposes a hierarchical knowledge augmentation approach and the Procedure-Encoded Surgical Knowledge-Augmented Video-Language Pretraining (PeskaVLP) framework. This a...
Rebuttal 1: Rebuttal: **[Q1. Plan to Expand Dataset]** Scaling and diversifying the surgical vision-language pretraining dataset is challenging due to privacy concerns and the cost of expert annotations. Even though the SVL pretraining dataset covers diverse laparoscopic surgeries, it lacks surgeries in different organ...
Summary: The paper presents a novel approach for enhancing surgical video analysis by incorporating procedural awareness. The authors propose a system that integrates knowledge of surgical procedures to improve the identification, segmentation, and annotation of surgical activities in video footage. This approach aims ...
Rebuttal 1: Rebuttal: **[Q1. Dataset Limitations]** Thank you for the insightful suggestion. In the rebuttal letter pdf, we have added a table to summarize the top 42 types of surgical videos and their amounts in the pretraining dataset. As shown in Table 1 of the rebuttal letter PDF, the SVL dataset predominantly cons...
Summary: This paper proposes a Procedure-Encoded Surgical Knowledge-Augmented Video-Language Pretraining (PeskaVLP) method that enriches language supervision with LLM-refined surgical concepts. It further constructs hard negative samples by reversing the text orders at the phase and video levels and employs a Dynamic T...
Rebuttal 1: Rebuttal: **[Q1. Augmentation Removes Variation]** Thank you for pointing out one of the key insights of this work, i.e., using LLM to build a large, versatile, and accurate surgical knowledge base to enrich and correct narrations of different types of videos during the pretraining. Since we enrich the narr...
Summary: The paper presents a new framework called PeskaVLP for surgical video-language pretraining. A hierarchical knowledge augmentation approach is used for enriching text information. The pretraining is implemented with the proposed language supervision and visual self-supervision. A new training objective is propo...
Rebuttal 1: Rebuttal: **[Q1 SVL Dataset]** **[Q1.1. Types of surgeries in SVL dataset]** In the rebuttal letter PDF, we have added a table summarizing the top 42 types of surgical videos in the pretraining dataset. As shown in Table 1 in the rebuttal letter PDF, the SVL dataset predominantly contains laparoscopic surg...
Rebuttal 1: Rebuttal: We thank all the reviewers for the insightful comments to improve our work. We are encouraged that the reviewer finds our work an interesting contribution to the community. We have carefully considered each comment from the reviewers and tried to provide detailed answers, clarifying all the issues...
NeurIPS_2024_submissions_huggingface
2024
null
null
null
null
null
null
null
null
A Separation in Heavy-Tailed Sampling: Gaussian vs. Stable Oracles for Proximal Samplers
Accept (poster)
Summary: The paper investigates the complexity of sampling from heavy-tailed distributions and presents a distinction between obtaining high-accuracy and low-accuracy guarantees. It analyzes two types of proximal samplers: those based on Gaussian oracles and those based on stable oracles. The main findings are that Gau...
Rebuttal 1: Rebuttal: We sincerely thank the reviewer for their valuable advice and comments and greatly appreciate the positive evaluation. >Weakness: There is no experiment to verify the theoretical findings. >Question 2: Is it possible to run some experiments to verify your results? Following the reviewer's sugges...
Summary: This paper studies the problem of heavy-tailed sampling. First, the paper shows that while the gaussian proximal samplers are efficient for light-tailed targets, they are not accurate for heavy-tailed ones; the paper develops a lower bounds for the Gaussian proximal samplers, which reveals a fundamental challe...
Rebuttal 1: Rebuttal: We sincerely thank the reviewer for their valuable advice and comments and greatly appreciate the positive evaluation. >The paper is purely theoretical and lacks experimental evaluation; it would be nice to at least have a toy illustration for the implementable algorithm 2+3 in the $\alpha=1$ cas...
Summary: The paper focus on studying the complexity of heavy-tailed sampling and present a separation result in terms of obtaining high-accuracy versus low-accuracy guarantees. Their results are presented for proximal samplers that are based on Gaussian versus stable oracles. Authors show that proximal samplers based o...
Rebuttal 1: Rebuttal: We sincerely thank the reviewer for their valuable advice and comments and greatly appreciate the positive evaluation. >The paper is not tested in any way on a numerical experiment. I am convinced that a paper presented at this type of conference should be both motivated by a real-world applicati...
Summary: The authors provide a lower bound for sampling from heavy tailed distributions under the Gaussian oracle of order $O(\textup{poly}(1/\varepsilon))$. They then propose an alternative proximal sampling algorithm using the $\alpha$-stable oracle that achieves a convergence rate of $O(\log(1/\varepsilon))$ for hea...
Rebuttal 1: Rebuttal: We sincerely thank the reviewer for their valuable advice and comments and greatly appreciate the positive evaluation. >I have no major concerns about this paper. The presentation is somewhat dense in places, though this is mostly just a consequence of it being a very technical paper and not a fl...
Rebuttal 1: Rebuttal: We sincerely thank the reviewer for their valuable advice and comments and greatly appreciate the positive evaluation. Results from the added experiments are included in the pdf file. Pdf: /pdf/bc4091796415ba3d7391c96e453543e5ea7487e9.pdf
NeurIPS_2024_submissions_huggingface
2024
Summary: This paper studies the complexity of sampling heavy-tailed distributions. It provides lower bounds on the complexity of Gaussian-based samplers for a class of heavy-tailed targets. Then, the paper constructs proximal samplers based on stable oracles, which improve the sampling complexity. Strengths: * This pa...
Rebuttal 1: Rebuttal: We sincerely thank the reviewer for their valuable advice and comments and greatly appreciate the positive evaluation. >The contribution of the paper could be improved with empirical experiments to evaluate the sampling algorithms and their complexity. Following the reviewer's suggestion, we hav...
null
null
null
null
null
null
How DNNs break the Curse of Dimensionality: Compositionality and Symmetry Learning
Reject
Summary: This paper introduces Accordion Networks (AccNets), a novel neural network structure composed of multiple shallow networks. The authors propose a generalization bound for AccNets that leverages the F1-norms and Lipschitz constants of the subnetworks, demonstrating that these networks can break the curse of dim...
null
Summary: The authors present a generalization bound for deep neural networks that describes how depth enables models to learn functions that are compositions of Sobolev functions. To do this, they both prove a generalization bound for compositions of accordion networks (densely connected networks with a low-rank weight...
null
Summary: The authors introduce accordion networks (AccNets), which are compositions of multiple shallow networks. By leveraging prior work that computes norm-based generalization bounds for shallow two-layer networks, the authors bound the complexity of a deep AccNet (as measured by its F1 norm) by the sum of the compl...
null
null
null
null
NeurIPS_2024_submissions_huggingface
2024
null
null
null
null
null
null
null
null
OxonFair: A Flexible Toolkit for Algorithmic Fairness
Accept (poster)
Summary: The paper introduces "AnonFair," a toolkit designed to enforce algorithmic fairness across various domains, including NLP, computer vision, and traditional tabular data. It is compatible with popular machine learning frameworks like sklearn, AutoGluon, and PyTorch. Unlike well-established fairness tools like F...
Rebuttal 1: Rebuttal: Thank you for taking the time to review our manuscript and for providing detailed, helpful, and constructive feedback. We hope to address outstanding weaknesses and concerns below. **Improving presentation:** The idea of a figure/flow chart is a good one, but there is insufficient space in the ...
Summary: This paper describes a new toolkit for algorithmic fairness, enabling the optimization of any fairness measure that is a function of the confusion matrix. Experiments on vision and NLP demonstrated the effectiveness of the proposed toolkit. Strengths: An easy-to-use toolkit for enforcing algorithmic fairness....
Rebuttal 1: Rebuttal: We thank the reviewer for the feedback and helpful suggestions that will be integrated to improve the paper. **Improvements in presentation:** We will add the definition of equal opportunity (the most common fairness definition, corresponding to difference in recall between groups) to the work....
Summary: The paper introduces a new toolkit designed to enhance algorithmic fairness with greater expressiveness. Unlike existing toolkits, this one offers more customization options to optimize user-defined objectives and fairness constraints. Although the proposed toolkit currently includes only one method, it suppor...
Rebuttal 1: Rebuttal: We thank the reviewer for their detailed feedback. We are happy to see that the reviewer appreciates the versatility of our toolkit beyond existing solutions and the efficiency of optimization in the toolkit. **Clarification on notation:** The equations in section 4.2 contain a function, $B(x)$...
Summary: The paper describes details of a fairness toolkit ("AnonFair"), which confers fairness to any given machine learning classifier by exploring a wide range of prediction thresholds for different groups (which are either provided upfront or inferred through an auxiliary classifier). The toolkit is designed to be ...
Rebuttal 1: Rebuttal: We thank the reviewer for their review, and we hope to address the issues raised. In brief, there are two main issues we wish to discuss. 1. what this toolkit does 2. The limited novelty of any toolkit/library, and that such libraries are explicitly covered in the call for papers. 1. We are c...
Rebuttal 1: Rebuttal: We thank the reviewers for their helpful and largely positive comments (**overall scores 7,6,6,6,3**). The suggestions are informative, and we will adjust presentation in the paper wherever an issue has been raised. \ \ Our toolkit provides a “robust and adaptable solution for implementing fairnes...
NeurIPS_2024_submissions_huggingface
2024
Summary: This paper presents AnonFair, a cutting-edge open-source toolkit designed to promote algorithmic fairness. Authors claim the following contributions: (1) Comprehensive support for NLP and Computer Vision classification, as well as standard tabular problems. (2) Enhanced robustness against overfitting challenge...
Rebuttal 1: Rebuttal: We thank the reviewer for the positive comments and constructive feedback. --- # Additional clarity in presentation We will use arrows in tables to indicate if larger **(↑)**, or lower **(↓)** scores are better. We will also discuss this when mentioning the different fairness metrics to im...
null
null
null
null
null
null
G2D: From Global to Dense Radiography Representation Learning via Vision-Language Pre-training
Accept (poster)
Summary: This paper proposes G2D, a novel vision-language pre-training (VLP) framework for medical imaging that aims to learn both global and dense visual representations from radiography images and their associated radiology reports. The key innovation is a pretext task called Pseudo Segmentation (PS), which uses a ps...
Rebuttal 1: Rebuttal: We thank the reviewer for the questions! >Theoretical analysis for why pseudo segmentation task leads to improve dense representations - Methods like ConVIRT, GLoRIA, BioViL, MedKLIP, and KAD primarily use an image encoder to extract visual features, aligning them with text embeddings through cont...
Summary: This manuscript describes a medical vision-language pre-training framework called Global to Dense level representation learning (G2D), that learns global and dense visual features simultaneously with only image-text pairs, by exploiting the aggregated attention map from the vision encoder for a pseudo segmenta...
Rebuttal 1: Rebuttal: We thank the reviewer for the positive feedback! > Unclear if specific sentence/phrase to individual image region alignment is achieved, for dense learning (W1) - Since the MIMIC-CXR pretraining dataset does not establish a direct relationship between specific sentences or phrases and image regio...
Summary: The paper proposes an encoder-decoder medical VLP approach for global-to-dense visual representation learning. Pseudo segmentation is adopted for dense level learning. Rich experiments validate the effectiveness of the proposed method. Strengths: 1. The motivation behind the work is clear. Pseudo-segmentation...
Rebuttal 1: Rebuttal: We thank the reviewer for the positive feedback! >Comparing with MGCA and MRM on CXR14 datasets (W1,Q1) - In Table 3, we directly reference the results from the KAD study to ensure a fair comparison, as KAD[1] uses the official data split for CXR14. It's important to note that the KAD[1] study do...
Summary: The paper proposes a new medical vision-language model, G2D, which employs vision-language alignment (VLA) and pixel alignment (PA) strategies, combined with a pseudo segmentation (PS) pre-training task, to learn global and dense visual representations from medical images. The VLA strategy is used to learn glo...
Rebuttal 1: Rebuttal: We thank the reviewer for the positive feedback! >Detecting and measuring the error of pseudo mask (W1, Q1) - In G2D, we aim to design pseudo mask for learning dense visual feature from pseudo segmentation task during medical vision-language pre-training (VLP), rather than directly guess the sema...
null
NeurIPS_2024_submissions_huggingface
2024
null
null
null
null
null
null
null
null
Generative Semi-supervised Graph Anomaly Detection
Accept (poster)
Summary: This paper works on node anomaly detection in the novel semi-supervised setting where few labeled normal nodes are given and proposes to generate new anomaly nodes to boost the training data. The anomaly generation algorithm is inspired by the empirical observation that: (1) Anomaly nodes have lower affinity ...
Rebuttal 1: Rebuttal: Thank you very much for the constructive comments and questions. We are grateful for the positive comments on the novelty and soundness of the experiments. Please see our detailed one-by-one responses below. > **Weaknesses #1** The regularization is heavily based on the empirical analysis, which ...
Summary: The paper proposes a novel approach called GGAD aimed at improving anomaly detection in graphs under a semi-supervised framework. GGAD generates pseudo anomaly nodes that serve as negative samples for training a one-class classifier. This method is built on two key priors: asymmetric local affinity and egocent...
Rebuttal 1: Rebuttal: Thank you very much for the constructive comments. We are grateful for the positive comments on our paper clarity, research motivation, and empirical justification. Please see our response to your comments one-by-one below. > **Questions #1** The anomalies do not conform to the prior knowledge P...
Summary: This paper introduces a novel generative-based GAD approach, named GGAD, tailored for the semi-supervised scenario. Unlike existing GAD frameworks, the authors highlight the feasibility and importance of a semi-supervised setting where labels for normal nodes are relatively easy to obtain during training, but ...
Rebuttal 1: Rebuttal: Thank you very much for the constructive suggestions. We are grateful for the positive comments on our readability and empirical justification. Please see our response to your comments one by one below. > **Weakness in Summary** Minimal differentiation with existing generation pseudo-anomaly sa...
Summary: This paper explores the problem of semi-supervised graph anomaly detection (GAD), where some nodes are known to be normal, in contrast to the typical unsupervised setting with no labeled data. The authors show that even a small percentage of labeled normal nodes can improve the performance of existing unsuperv...
Rebuttal 1: Rebuttal: Thank you very much for the constructive comments. We are grateful for the positive comments on our studied problem, technical contribution, and empirical justification. Please see our detailed response below > **Weaknesses #1** There is no theoretical analysis to guarantee the effectiveness of ...
Rebuttal 1: Rebuttal: Dear All Reviewers, Thank you very much for the time and effort in reviewing our paper, and for the constructive and positive comments. Our rebuttal consists of two parts: **Global Response** where we address shared concerns from two or more reviewers and **Individual Response** where we provide ...
NeurIPS_2024_submissions_huggingface
2024
Summary: The paper studies an under-explored graph anomaly detection problem where the detection models have access to a set of labeled normal nodes. To tackle this problem, it introduces a generative approach namely GGAD that generates pseudo anomaly nodes, called outlier nodes, to support the training of a discrimina...
Rebuttal 1: Rebuttal: Thank you very much for the constructive comments. We are grateful for the positive comments on our studied problem, technical contribution, and empirical justification. Please see our detailed response below > **Weaknesses #1** The generation may cause non-trivial computational We agree that G...
null
null
null
null
null
null
RashomonGB: Analyzing the Rashomon Effect and Mitigating Predictive Multiplicity in Gradient Boosting
Accept (poster)
Summary: This paper proposes a method (RashomonGB) to estimate the Rashomon sets/predictive multiplicity of gradient boosting models. It estimates multiple ($m$) models at each stage (effectively performing a local exploration) and then combines all such models in the end to construct $m^T$ models for Rashomon set comp...
Rebuttal 1: Rebuttal: We appreciate the reviewer's constructive feedback and encouragement. Below, we systematically address each weakness and question raised in the review. For Weakness 1, the estimates in Figure 3, derived from both re-training and RashomonGB, utilized the same training cost. Specifically, each met...
Summary: This paper presents an approach that compute Rashomon set for gradient boosting algorithm where the set can be obtained through products over weak learners at each step rather than sampling them through retraining. The authors further proposed a dataset related Rashomon bound through sub-Gaussian assumption, w...
Rebuttal 1: Rebuttal: We thank the reviewer for the feedback! We clarify the weakness and answer the reviewer's question below. To address the weakness pointed out, it would be helpful if the reviewer could specify which parts of the second paragraph in the Introduction are unclear or difficult to understand during th...
Summary: The paper studies the Rashomon effect in gradient boosting, a commonly used algorithm for tabular datasets, but something that has not received enough attention in multiplicity literature. The paper provides several theoretical discussions on the size of the Rashomon set and the impact of the number of iterati...
Rebuttal 1: Rebuttal: We appreciate the reviewer’s constructive feedback. We address the weaknesses, questions, and limitations point-by-point below. For Weakness 1, please refer to our responses to Question 1 and Question 2. For Weakness 2: as indicated, prediction uncertainty indeed differs fundamentally from predi...
Summary: The paper explores the concept of predictive multiplicity in gradient boosting models. The Rashomon effect refers to the existence of multiple models that perform similarly well on a given dataset. The authors formalize this effect in the context of gradient boosting, introduce a new method called RashomonGB t...
Rebuttal 1: Rebuttal: We appreciate the reviewer's feedback and questions. For Weakness 1, in the Introduction (Lines 26-29), we discuss the beneficial aspects of the Rashomon effect within the framework of responsible machine learning, highlighting its role in fairness by imposing additional constraints on models. T...
Rebuttal 1: Rebuttal: We would like to thank the reviewers for their time and effort in reading and commenting on the manuscript. We appreciate that the reviewers found that the paper **“study a novel problem”** (Reviewer HS14, JGsC, XEPy, and E8Ec), **“has robust and interesting analysis on dataset-related Rashomon se...
NeurIPS_2024_submissions_huggingface
2024
null
null
null
null
null
null
null
null
TAS-GNN: Topology-Aware Spiking Graph Neural Networks for Graph Classification
Reject
Summary: There is a large performance gap between spiking neural networks and artificial neural networks on graph tasks, especially graph classification tasks. The authors attribute this problem to neurons under starvation and illustrate its cause. To solve the problem, TAS-GNN was proposed. ...
Rebuttal 1: Rebuttal: Thank you for acknowledging our contributions, along with positive and constructive feedback. We respond to the comments below. ### **W1. Gap between graph topology and node degree** We used node degree information as one of the representative graph topology properties. As the reviewer men...
Summary: This paper primarily discusses integrating Spiking Neural Networks (SNNs) into Graph Neural Networks (GNNs) to address several key challenges in graph classification tasks. Specifically, the paper proposes a new method called TAS-GNN (Topology-Aware Spiking Graph Neural Networks) which leverages the topology o...
Rebuttal 1: Rebuttal: Thank you. We appreciate your acknowledging the strengths of our work and providing detailed feedback. We would like to answer the questions as follows. ### **W1/Q7/L3. Extensibility to other datasets and application areas.** Thank you. We extended the evaluation to more datasets (IMDB-MULTI, and R...
Summary: The paper presents a novel approach called TAS-GNN (Topology-Aware Spiking Graph Neural Networks) to address the performance gap between spiking neural networks (SNNs) and artificial neural networks (ANNs) in graph classification tasks. The authors identify a "starvation" problem in spiking neurons within GNNs...
Rebuttal 1: Rebuttal: We thank the reviewer for acknowledging the novelty of our work and providing constructive feedback. We have addressed the comments below. We will revise our paper according to the rebuttal. ### **W1. Gap between graph topology and node degree** We used node degree information as one of...
Summary: This paper proposes topology-aware spiking graph neural networks with adaptive thresholds based on a group of neurons for graph classification. The paper first diagnoses the poor performance as the existence of neurons under starvation caused by the graph structure. Then the paper proposes the adaptive thresho...
Rebuttal 1: Rebuttal: We thank the reviewer for acknowledging our contributions and positive feedback. We faithfully address the comments below. ### **W1: The proposed method seems to be a hybrid ANN-SNN model rather than a pure SNN design.** We proposed TAS-GNN as a pure SNN design, which shares almost the same bac...
Rebuttal 1: Rebuttal: We sincerely thank all the reviewers for dedicating their time to evaluate our work. We are encouraged that they found our approach to be novel in developing TAS-GNN (MTu6, 4pSD, icJz, 5K64), with clear motivation demonstrated by diagnosing neuron starvation (4pSD, icJz) and competitive performanc...
NeurIPS_2024_submissions_huggingface
2024
null
null
null
null
null
null
null
null
GraphCroc: Cross-Correlation Autoencoder for Graph Structural Reconstruction
Accept (poster)
Summary: This paper proposes a cross-correlation autoencoder for graph structural reconstruction. The authors first analyze the problems of existing self-correlation encoder. Then, a cross-correlation autoencoder is designed. Experimental results show the effectiveness of the cross-correlation autoencoder. Strengths: ...
Rebuttal 1: Rebuttal: > Evaluate the proposed cross-correlation autoencoder given specific graph structures, e.g., islands and symmetric structures. In Sec.2.2 and 2.3, we explore the limitations of self-correlation and the capabilities of cross-correlation in accurately representing specific graph structures, such as...
Summary: This paper proposes a method to address the limitations of existing graph autoencoder (GAE) models that primarily rely on self-correlation for graph structure representation. The authors claim that existing GAEs often fail to accurately represent complex structures like islands, symmetrical structures, and directional edge...
Rebuttal 1: Rebuttal: > This paper lacks discussion on related works. There already exists some works trying to solve the graph autoencoder structure recovering issues. For example, including position encoding or adding extra node labels. How the proposed method is compared with these methods, from the perspective of e...
Summary: This paper theoretically analyzes the limitations of existing graph autoencoders (GAE) in representing special graph features such as islands, symmetrical structures, and directional edges. To address this, the paper proposes a new GAE method, GraphCroc, which employs a cross-correlation mechanism that signifi...
Rebuttal 1: Rebuttal: > In Table 1, the improvements of GraphCroc are evident only on two datasets. AUC is widely used to evaluate graph structural reconstruction tasks in GAE, due to its unbiased performance on positive and negative edges. Thus, we adopt this metric to assess the adjacency matrix reconstruction in ou...
null
null
Rebuttal 1: Rebuttal: We appreciate the time and effort the reviewers have spent in providing valuable feedback! We are grateful for the reviewers' recognition of our clear writing, reasonable motivation, and sound experiments. Graph structural reconstruction is a pivotal application for graph autoencoders (GAEs), and ...
NeurIPS_2024_submissions_huggingface
2024
null
null
null
null
null
null
null
null
Segment, Shuffle, and Stitch: A Simple Layer for Improving Time-Series Representations
Accept (poster)
Summary: This paper introduces a new method for time-series representation learning that enhances the modeling of non-adjacent segment dependencies. Specifically, the proposed method segments, shuffles in a learned manner and stitches the shuffled segments to combine with original time series. The proposed method is mo...
Rebuttal 1: Rebuttal: We thank the reviewer for their valuable feedback. Below we provide a careful point-by-point response to each question. We would be happy to provide additional discussions/information in the author-reviewer discussion period should you have any follow-up questions. > Clarification on the differen...
Summary: This paper introduces a plug-and-play mechanism called Segment, Shuffle, and Stitch (S3) designed to enhance time-series representation learning in existing models. S3 operates by dividing the original sequence into non-overlapping segments and shuffling them in a learned manner that is optimal for the given t...
Rebuttal 1: Rebuttal: We thank the reviewer for their valuable feedback. Below we provide a careful point-by-point response to each question. We would be happy to provide additional discussions/information in the author-reviewer discussion period should you have any follow-up questions. > Comparison with data augmenta...
Summary: This paper proposes a new neural network design element which segments, shuffles, and stitches time series for improved representation learning. They evaluate their methods on forecasting and classification tasks, and show that S3 benefits some widely used baselines. Strengths: 1. To the best of my knowledge,...
Rebuttal 1: Rebuttal: We thank the reviewer for their valuable feedback. Below we provide a careful point-by-point response to each question. We would be happy to provide additional discussions/information in the author-reviewer discussion period should you have any follow-up questions. > Visualisation of S3 We have ...
Summary: The paper introduces a new approach called Segment, Shuffle, and Stitch (S3) to enhance time-series representation learning. The method involves segmenting the time-series into non-overlapping parts, shuffling them optimally, and stitching them back together along with the original sequence. Key contrib...
Rebuttal 1: Rebuttal: We thank the reviewer for their valuable feedback. Below we provide a careful point-by-point response to each question. We would be happy to provide additional discussions/information in the author-reviewer discussion period should you have any follow-up questions. > Other forecasting datasets. ...
Rebuttal 1: Rebuttal: We sincerely thank the reviewers for their time and for providing us with constructive feedback. We are happy to see the engaging comments given by all the reviewers. We have carefully addressed all the concerns raised under the individual response section. Following, we provide a summary of our r...
NeurIPS_2024_submissions_huggingface
2024
Summary: The paper introduces a simple but effective differentiable module that performs pre-processing on input multivariate time-series before they are fed into any differentiable model for an arbitrary task. The pre-processing involves segmenting, shuffling the segments, and stitching them together. The novelty includes maki...
Rebuttal 1: Rebuttal: We thank the reviewer for their valuable feedback. Below we provide a careful point-by-point response to each question. We would be happy to provide additional discussions/information in the author-reviewer discussion period should you have any follow-up questions. > Visualizations and qualitativ...
null
null
null
null
null
null
Rethinking Deep Thinking: Stable Learning of Algorithms using Lipschitz Constraints
Accept (poster)
Summary: To address the stability of Deep Thinking models, this paper proposes to constrain activation functions to be Lipschitz-1 functions. The original DT and DT-R models have a training stability problem, essentially because of scale explosion or vanishing. The authors reveal the stability problem, attribute the problem...
Rebuttal 1: Rebuttal: We thank the reviewer for taking the time to read the paper and for raising some important points in regards to our submission that we agree should be addressed. ### Response to Weaknesses > *"The idea is quite straight-forward (may not be a bad thing, but make > technical contributions smaller)...
Summary: This paper identifies and rectifies an issue with a particular type of iterative neural network called Deep Thinking Networks. The problem arises in exploding latent representations and unstable training routines. The authors of this work propose an update to the architecture where they add Lipschitz constrain...
Rebuttal 1: Rebuttal: We would like to thank the reviewer for their support and helpful suggestions, and are glad they are as excited about this direction of research as we are. ### Response to Weaknesses > *"Clarity: A couple things could be more clear. > i. I think IPT stands for Incremental Progress Training, but ...
Summary: The paper addresses the positive feedback issue in the so-called Deep Thinking networks, where the inference computation may involve more recurrent computations than encountered in training. The proposed solution is to normalise the state vector that undergoes the recurrence, i.e. make the mapping contractive...
Rebuttal 1: Rebuttal: We thank the reviewer for their comments. ### Response to Weaknesses > *"As far as I can tell, it is pretty straight forward control theory stuff for > addressing positive feedback. Nothing wrong with the proposed solution, but I > would assume this is such a fundamentally well known issue in an...
Summary: The paper introduces Deep Thinking with Lipschitz Constraints (DT-L), an improved version of the Deep Thinking (DT) networks, designed to enhance the stability and performance of iterative algorithm learning models. The authors address the instability issues inherent in DT networks by analyzing intermediate re...
Rebuttal 1: Rebuttal: We thank the reviewer for their positive and constructive comments. ### Response to Weaknesses > *"The modifications and theoretical underpinnings of the DT-L model, such as > the Lipschitz constraints and orthogonal transformations, add complexity to > the model, which might hinder its adoption...
Rebuttal 1: Rebuttal: We would like to thank all reviewers for their careful reading of the paper and their insightful comments. We are pleased that overall the reviewers found the paper clear, but we will integrate the helpful suggestions that have been made - thanks! We have responded to the reviewers individual c...
NeurIPS_2024_submissions_huggingface
2024
null
null
null
null
null
null
null
null
Unsupervised Homography Estimation on Multimodal Image Pair via Alternating Optimization
Accept (poster)
Summary: The paper proposes an unsupervised homography estimation method for multimodal image pairs using an alternating optimization approach. The claimed key innovation is the introduction of the Geometry Barlow Twins loss function for the alternating optimization. The authors show that their approach works on 3 mult...
Rebuttal 1: Rebuttal: Thank you for taking the time to review our work. We apologize for the lack of detailed explanations regarding the proposed method. Below are some additional clarifications: **Weakness 1.** **Weakness 1.1.** *No other direct supervisions* Increasing the similarity of local features between two ...
Summary: This paper proposes a new unsupervised homography estimation approach for multimodal images. This method is designed as a two-phase optimization framework named AltO. The first phase named "Geometry Learning" trains a registration network to align the input multimodal images geometrically. The second phase nam...
Rebuttal 1: Rebuttal: Thank you for taking the time to review our work in detail. Below are our responses to your comments and concerns. **Weakness 1.** **Weakness 1.1.** *Why is alternating necessary?* To prevent unintended collaborations and a collapse into trivial solutions, we introduced an alternating training ...
Summary: The paper addresses unsupervised homography estimation from multi-modal image pairs. The authors propose to cope with the issue of 1) modality, 2) registration in two distinct networks that are trained in an interleaved fashion. The networks architecture derives from the Barlow Twins framework, with changes in...
Rebuttal 1: Rebuttal: Thank you for your detailed review and for taking the time to provide your feedback. Below is our rebuttal. **Weakness 1.** *Why not use an edge-based approach as a baseline?* Edge-based approaches have limitations when used as baselines with different modalities. When dealing with two images of...
null
null
Rebuttal 1: Rebuttal: We sincerely thank the reviewers for taking the time to thoroughly review our paper. Reviewers highlighted the strengths of our paper. + Proposed method is an interesting, intuitive, and fresh approach. (PrYW, sFim, EsVS) + The paper tackles an important problem and proposes an original solution...
NeurIPS_2024_submissions_huggingface
2024
null
null
null
null
null
null
null
null
TPR: Topology-Preserving Reservoirs for Generalized Zero-Shot Learning
Accept (poster)
Summary: This paper proposes a new task, "generalized zero-shot learning (GZSL)," in which both seen and unseen objects should be recognized for vision-language tasks. It also proposes a new method based on CLIP that uses the loss in the "attribute space" to perform better in both seen and unseen classes. This method i...
Rebuttal 1: Rebuttal: ## Response to Reviewer 8m4c ### Response Q1-Q3 Thank you for the positive and insightful comments. The reviewer appreciated *the novelty of our approach, the creation of the attribute space, the effectiveness of our method*, and *the well-designed ablation studies*. We address the mentioned conc...
Summary: In this paper the authors propose a dual-space feature alignment module to keep the semantic consistency between visual and attribute spaces. In addition, the authors propose Topology-Preserving Reservoir (TPR) to tackle this issue in the generalized zero-shot learning (GZSL) setting, which utilizes the Pearson cor...
Rebuttal 1: Rebuttal: ## Response to Reviewer k9s8 ### Response Q1-Q3 Thank you for the valuable comments and recognizing the various strengths of our paper: "*well-written", "intuitive and easy to understand", "the method better fits the seen and unseen classes", "reasonable*", and "*sufficient and significant exper...
Summary: The proposed approach targets the generalized zero-shot learning (GZSL) problem for the vision language model (VLM). It is observed that a strong VLM model shows promising results for novel class generalization. Fine-tuning these models for seen classes leads to a loss in generalization capability and poor res...
Rebuttal 1: Rebuttal: ## Response to Reviewer TcJo ### Response Q1-Q6 Thank you for the valuable comments with many kind words to our work: *a critical problem, significant impact, interesting, improve generalization, wide-ranging experiments, satisfactory ablations studies*. Below, we address the raised questions: ...
Summary: This paper is a new study that introduces the Generalized Zero-Shot Learning (GZSL) framework within VLMs, aiming to classify both known and novel classes without class partitioning. Key innovations include a dual-space feature alignment module, enhancing latent representations with an attribute reservoir for ...
Rebuttal 1: Rebuttal: ## Response to Reviewer BqNV We sincerely thank the reviewer BqNV for the very positive and helpful comments. Thank you for acknowledging *the novelty of our idea*, *our contribution to the VLM community*, *the writing and organization of our paper*, and *the extensive experiments & ablation stu...
Rebuttal 1: Rebuttal: ## Global Rebuttal We thank all reviewers for their insightful and positive feedback. We are encouraged that the reviewers acknowledge our paper: - **Novel and impactful**. Reviewer BqNV -- "*introduces a novel research aspect*", "*a great contribution to VLM community*"; Reviewer TcJo -- "*the p...
NeurIPS_2024_submissions_huggingface
2024
null
null
null
null
null
null
null
null
Enhancing Robustness of Graph Neural Networks on Social Media with Explainable Inverse Reinforcement Learning
Accept (spotlight)
Summary: The paper presents a novel approach to enhancing the robustness of Graph Neural Networks (GNNs) against adversarial attacks, specifically in social media contexts such as rumor detection. The authors propose an enhanced maximum entropy inverse reinforcement learning (IRL) method with a mixture-of-experts appro...
Rebuttal 1: Rebuttal: **Q1: Could this approach be transferred to bit flips?** **A1:** Thank you for your constructive question. We believe that it is feasible theoretically. **(1)** Bit flip attack: This attack disrupts neural network operations by flipping bits in parameters or intermediate results. For example, [1...
Summary: This paper addresses the challenge of adversarial attacks on Graph Neural Networks (GNNs) employed in social media tasks, such as rumor detection. The authors introduce MoE-BiEntIRL, a method that leverages a mixture-of-experts approach combined with inverse reinforcement learning (IRL) to reconstruct and expl...
Rebuttal 1: Rebuttal: **Q1: Scalability Analysis: Could you elaborate on how your method scales when applied to very large social media graphs? Any additional insights or preliminary results on this matter would be highly informative.** **A1:** Thank you for your valuable suggestion. For the scalability in the larg...
Summary: This work studies the problem of reconstructing attack policies using collected adversarial samples to enhance the robustness of GNN-based models in social network tasks, specifically rumor detection. The authors propose the MoE-BiEntIRL framework, which employs a mixture-of-experts approach to learn optimal p...
Rebuttal 1: Rebuttal: **Q1: What makes the policies on Pheme significantly harder to recover than the policies on Weibo in Table 2?** **A1:** Thank you for your insightful question. We posit that the difficulty in policy recovery is related to the complexity of the underlying graph structures. As indicated in Table II...
Summary: The paper presents a novel method, MoE-BiEntIRL, which combines a mixture-of-experts approach with inverse reinforcement learning to enhance the robustness and explainability of adversarial attacks on GNNs. The method addresses the critical issue of stabilizing GNNs used in social media for rumor detection, de...
Rebuttal 1: Rebuttal: **Q1: Could you provide a simple illustrative example or additional details to clarify how the precise sample guidance mechanism and the bidirectional update mechanism work in the MoE-BiEntIRL method?** **A1:** Thank you for your suggestion. We have provided code and algorithms in global rebuttal...
Rebuttal 1: Rebuttal: **Q1: The complexity and scalability analysis of the proposed model.** **A1:** Thank you for your valuable suggestion. Please refer to Table I in the attached PDF, where we detail the time complexity and runtime of the MoE-BiEntIRL, alongside two baseline models. Herein, we will present and discu...
NeurIPS_2024_submissions_huggingface
2024
null
null
null
null
null
null
null
null
Entrywise error bounds for low-rank approximations of kernel matrices
Accept (poster)
Summary: This paper is the first to establish entrywise guarantees for low-rank approximation of kernel matrices when the kernel eigenvalues satisfy either polynomial or exponential decay. More specifically, in the $\alpha$-polynomial decay setting, the entrywise error scales as $O(n^{-\frac{\alpha-1}{\alpha}} \log n)$ for rank...
Rebuttal 1: Rebuttal: I thank the reviewer for their time and effort reviewing my paper. The reviewer argues that Lemma 1 is only a slight generalisation of Lemma 68 in Tao and Vu (2011), although understanding the conditions which one must place on the mean vector when it is non-zero for the lemma to work required s...
Summary: The paper focuses on deriving entrywise error bounds for low-rank approximations of kernel matrices using truncated eigen-decomposition. It addresses the statistical behavior of individual entries in such approximations under assumptions of polynomial eigenvalue decay or exponential decay. The authors also pro...
Rebuttal 1: Rebuttal: To begin, I would like to thank the reviewer for taking the time to review my paper. They considered the writing and proofs to be clear and accurate, and the theoretical result to be new to the community. The reviewer mentions that the assumptions on the eigenvalue decay and the eigenfunction gr...
Summary: The authors consider the kernel matrices, formed by $n$ vectors i.i.d. drawn from a $p$-dimensional probability distribution $\rho$. Under several assumptions on the associated kernel operator on $L^2_{\rho}$, including the positive definiteness of the kernel and decay condition on the eigenvalues of the kerne...
Rebuttal 1: Rebuttal: I would like to start by thanking the reviewer for taking the time to work through my paper, in particular for working through the proofs in the appendix and for noticing a mistake in Lemma 1 which I address below. I was happy to read that they consider the problem to be very fundamental and that ...
null
null
Rebuttal 1: Rebuttal: I would like to start by thanking all the reviewers for taking the time to work through my paper and to write their reviews. I was happy to read that the reviewers consider the problem to be a very fundamental one, that the main results in the paper are new to the community, that the paper is clea...
NeurIPS_2024_submissions_huggingface
2024
null
null
null
null
null
null
null
null
Learning the Optimal Policy for Balancing Short-Term and Long-Term Rewards
Accept (poster)
Summary: This paper introduces a new way to balance multiple rewards when some long-term rewards are potentially missing. It does so by using Pareto policy learning, optimizing each reward up to the tradeoff frontier. This can be more practical than simple linear weighting since the linear weighting strategy appl...
Rebuttal 1: Rebuttal: Thank you for approving our work and for the helpful suggestions. Below, we address your concerns and questions. >**W1**: The experiment uses partial real data with synthetic generation of short-term and long-term rewards. **Response:** Thanks for your comments. We fully agree that applying the...
Summary: This paper attempts to address the challenge of learning the optimal policy for balancing multiple long-term and short-term rewards. The authors point out that the existing linear weighting method leads to a sub-optimal policy. To address this limitation, the authors propose formulating the problem a...
Rebuttal 1: Rebuttal: Below, we hope to address your concerns and questions. >**W1** No enough explanation/experiments to demonstrate that the proposed method is optimal. > **W2**: When some of the rewards are interrelated, the linear weighting method can only achieve a suboptimal solution. The claim may not be rigor...
Summary: This paper studies the tradeoff between short-term and long-term rewards. The authors formulate the policy learning problem as a multi-objective optimization problem and propose a decomposition-based Pareto policy learning method. I only had experience in reinforcement learning in robotics five years ago. I tr...
Rebuttal 1: Rebuttal: We sincerely appreciate your comments and thank you for the helpful suggestions. Below, we hope to address your concerns and questions. > **W1**: - Only the linear weighting method is used as the baseline. I am wondering if there are any other methods that can be used for comparison. If not, why...
Summary: This paper proposes a framework for solving multi-objective optimization problems: multi-objective optimization problems are divided into sub-problems in different regions by setting different preference vectors. The parameter optimization direction of the sub-problem can be easily solved by transforming it in...
Rebuttal 1: Rebuttal: We sincerely appreciate your approval of the idea and the novelty of this work and thank you for the helpful suggestions. Below, we hope to address your concerns and questions. > **W1**: This paper proposes an important multi-objective optimization algorithm. But the title of this paper seems t...
Rebuttal 1: Rebuttal: Dear Reviewer hVvL, we provide the responses to W1-W4 below your Official Review. Here, we further respond to W5-W7. > **W5**: In experiment, why choose 10 preference vectors? why are some parameters truncated normal distributions. **Response:** Thanks for your comments. **We would like to clari...
NeurIPS_2024_submissions_huggingface
2024
null
null
null
null
null
null
null
null
SeeClear: Semantic Distillation Enhances Pixel Condensation for Video Super-Resolution
Accept (poster)
Summary: The authors propose SeeClear for Video Super-Resolution (VSR). SeeClear is a diffusion-based method that improves restoration performance by introducing semantic priors. The authors design an Instance-Centric Alignment Module (InCAM) and Channel-wise Texture Aggregation Memory (CaTeGory) to utilize semantic in...
Rebuttal 1: Rebuttal: > Q1. Why do the comparison methods in Table 1 use different numbers of frames? If the same frame is used, what is the performance like? The selection of numbers of frames for training depends on the architecture, such as sliding-window-based (e.g., EDVR-M) and recurrent-based (e.g., IconVSR) met...
Summary: The paper introduces a novel video super-resolution framework leveraging semantic distillation to enhance pixel condensation in diffusion-based models. SeeClear addresses stochastic fluctuations by using a Semantic Distiller and a Pixel Condenser to extract and upscale semantic details from LR frames. The fram...
Rebuttal 1: Rebuttal: > Q1. Diffusion-based models usually show poor performance on PSNR (e.g., StableSR and Reshift), but SeeClear demonstrates a significant improvement. Could you analyze which parts of SeeClear contribute to this improvement? Please refer to the weaknesses part above. Diffusion-based image super-re...
Summary: This paper presents a diffusion-based video super-resolution method, and proposes Instance-Centric Alignment Module and Channel-wise Texture Aggregation Memory. The former leverages a pre-trained open-vocabulary segmentation model (i.e., OpenSeeD), which is utilized to perform alignment within video clips by m...
Rebuttal 1: Rebuttal: Thanks for your careful reading and detailed comments. We will rectify some confusing statements and formulas in the subsequent edition. Nevertheless, we deem it necessary to highlight our novelty and restate the proposed method. Different from the **text** in the realm of T2I and **segmentation m...
Summary: The paper presents a framework for video super-resolution (VSR) that improves temporal coherence and high-resolution detail generation. The proposed method, SeeClear, integrates a Semantic Distiller and a Pixel Condenser to extract and upscale semantic details from low-resolution frames. The framework employs ...
Rebuttal 1: Rebuttal: > Q1. The performance of the proposed method is not significant. In Table 1, the improvement is very marginal or is worse than other methods. Moreover, in Figure 4, the generated texture is comparable to other methods. For the sake of fair comparison, SeeClear is trained only on five frames and a...
Rebuttal 1: Rebuttal: Dear AC and reviewers, We sincerely thank all reviewers for your constructive comments. We are glad that the reviewers appreciate the **novelty** (srYj, ERJ1), **writing** (ERJ1), **impressive experimental results** (vLVk, srYj, ERJ1, NXC1) and limitations adequately discussed (vLVk, srYj, ERJ1) ...
NeurIPS_2024_submissions_huggingface
2024
null
null
null
null
null
null
null
null
Exploring Context Window of Large Language Models via Decomposed Positional Vectors
Accept (spotlight)
Summary: This paper disentangles positional vectors from the hidden states of a pretrained Transformer language model to facilitate the understanding of length extrapolation. After a series of analyses, this paper proposes two context extending techniques. Experiments show that the proposed methods lower the perplexity...
Rebuttal 1: Rebuttal: Thank you for your insightful comments! # Q1: Training from Scratch We initially performed from-scratch pretraining on smaller models and found that the properties of the positional vectors were largely similar to continually-trained models, but the models trained from scratch had inferior perfor...
Summary: This paper proposes a mean-based decomposition technique to analyze the formation and effect of positional encodings in LLMs. It then uses these results to propose methods to extend the context window, resulting in models that generalize better to longer texts. Strengths: 1. This paper is very well-written, a...
Rebuttal 1: Rebuttal: Thank you for your helpful comments! We will revise Figure 4 in the final version. --- Rebuttal Comment 1.1: Comment: Thanks for the response!
Summary: This paper dives into the inner workings of how transformer-based language models handle positional information. By decomposing hidden states into semantic and positional vectors, the authors give a series of analysis about how the positional information are encoded and propagated through layers. I believe thi...
Rebuttal 1: Rebuttal: Thank you for your insightful comments! # W1: Unnecessariness of Section 4 In Section 4, the proposed methods provide significant evidence for our analysis of the relationship between positional vectors and the context window. Our experiments substantiate our previous viewpoints. For instance, interpo...
null
null
Rebuttal 1: Rebuttal: Thank you for your insightful comments! The supplementary PDF includes figures that support our rebuttals. Pdf: /pdf/2cd556ebc2c13e813daaf99710c90812816e66c5.pdf
NeurIPS_2024_submissions_huggingface
2024
null
null
null
null
null
null
null
null
Estimating Transition Matrix with Diffusion Models for Instance-Dependent Label Noise
Reject
Summary: This paper deals with the problem of supervised learning from noisy labels, where the label noise is modeled using instance-dependent label transition probability matrix. Mainly, this work attempts to leverage conditional diffusion model in order to obtain a generative model of transition matrix conditioned on...
null
Summary: This paper focuses on the estimation of the transition matrix with instance-dependent label noise. They used a diffusion model for this estimation. By applying a diffusion process to the transition matrix, the diffusion model is trained to generate transition matrices from a prior distribution. The instance-wi...
null
Summary: In this work, the authors proposed an approach to estimate the instance-dependent transition matrix in order to reliably learn from noisy labels. The idea is to use a condition diffusion model to estimate the transition matrix by using the pretrained extracted image features as the conditions. Once the transit...
null
null
null
null
NeurIPS_2024_submissions_huggingface
2024
null
null
null
null
null
null
null
null
Bridging Geometric States via Geometric Diffusion Bridge
Accept (poster)
Summary: The paper introduces the Geometric Diffusion Bridge (GDB), a novel framework designed to generate the evolution of geometric states in geometric (coordinate) systems. GDB uses a diffusion bridge connecting initial and target geometric states with equivariant transition kernels, preserving symmetry and joint st...
Rebuttal 1: Rebuttal: Thank you for recognizing both the theoretical analysis and practical effectiveness of our GDB framework. We also appreciate your suggestions which can improve our work further. Our proposed method has the following advantages compared to your list of works [1, 2]. - First, our proposed method...
Summary: This paper proposes a generative model for bridging initial and target geometric states using diffusion bridge. This work introduces an equivariant diffusion bridge based on equivariant transition kernels for symmetry constraints. The proposed method was validated on diverse settings including simple molecules...
Rebuttal 1: Rebuttal: Thank you for recognizing the motivation, contributions, and theoretical analysis of our GDB framework. We also appreciate your suggestions which can improve our work further. Here are our responses to your questions. >**Regarding discussions of related works.** Thank you for listing these relat...
Summary: This paper proposes a type of diffusion model that captures the evolution of geometric states. The model is characterized by a diffusion SDE that couples the initial state with the target state, in the middle of which trajectory guidance is enabled when such data present. The framework is designed to yield equ...
Rebuttal 1: Rebuttal: Thank you for spending time reviewing our paper. We would like to first address your misunderstanding by **clarifying the task of our interest**: to capture/predict the evolution of geometric states, i.e., *predicting future states from initial states*. This goal has been carefully stated at the v...
Summary: In this paper, the authors introduce a Geometric Diffusion Bridge (GDB) framework, which aims to predict the evolution of geometric states in complex systems accurately, crucial for fields such as quantum chemistry and material modeling. Traditional methods face computational challenges, while deep learning ap...
Rebuttal 1: Rebuttal: Thank you for recognizing both the theoretical analysis and practical effectiveness of our GDB framework. We also appreciate your suggestions which can improve our work further. Here are our responses to your questions. >**Regarding the computational cost of our GDB framework** Thanks for the qu...
null
NeurIPS_2024_submissions_huggingface
2024
null
null
null
null
null
null
null
null
Inference of Neural Dynamics Using Switching Recurrent Neural Networks
Accept (poster)
Summary: This paper develops a switching RNN (SRNN) framework to model neural activity. It builds upon switching linear dynamical system models that are used in neuroscience to segment and extract underlying dynamics of observed neural activity. The different segments corresponding to unique dynamics often reflect dis...
Rebuttal 1: Rebuttal: We thank you deeply for your time and attention in reading our paper, and for your valuable comments. Below is our response to specific weaknesses and questions. **Summary of Weakness 1**: *comparison to SNLDS.* We thank you for pointing out this weakness. In addition to the existing SLDS, rSLDS...
Summary: The authors develop a new class of probabilistic nonlinear state space models called switching RNNs. In essence, this extends the well-known switching linear dynamical system (SLDS) model to switch between nonlinear dynamics governed by a stochastic RNN. Strengths: * The results shown in panels A of Figs 3, 4...
Rebuttal 1: Rebuttal: We thank you very much for giving us a positive rating, and for your extremely relevant comments. Below we respond to specific weaknesses pointed out. **Summary of Weakness 1**: *Like many other deep learning based approaches, the model is not particularly interpretable. 2D flow fields poorly ca...
Summary: The authors propose to model time series neural population activity using switching recurrent neural networks. The generative model includes discrete latent states Strengths: The proposed method does appear to outperform related switching linear dynamical systems approaches in certain contexts. Weaknesses: H...
Rebuttal 1: Rebuttal: We thank you deeply for your time and attention in reading our paper, and for your valuable comments. Below is our response to specific weaknesses and questions. **Weakness 1**: *Comparison between SNLDS and mrSDS* Thank you for raising this weakness. As also mentioned in the global rebuttal, we...
Summary: The paper proposes switching recurrent neural networks (SRNN), which allow the RNN weights to switch across time. The RNN weights switch based on a latent Markovian process of discrete states. The authors apply SRNN to a simulated dataset following the Lorenz attractor and three real-world neural recordings. ...
Rebuttal 1: Rebuttal: We thank you very much for giving us a positive rating, and for your very helpful comments. Below we respond to specific weaknesses pointed out. **Weakness 1**: *Lack of comparison with other methods, for example, ARHMMs and their extensions, as well as SNLDS and mrSDS* We have now included the t...
Rebuttal 1: Rebuttal: Dear Reviewers, We would like to thank you for providing constructive feedback that helped us improve the paper. As a reminder, in this submission, we propose ‘Switching Recurrent Neural Networks’ (SRNNs) for discovery of switching neural dynamics that leads to behaviorally-relevant discrete stat...
NeurIPS_2024_submissions_huggingface
2024
null
null
null
null
null
null
null
null
Mesa-Extrapolation: A Weave Position Encoding Method for Enhanced Extrapolation in LLMs
Accept (poster)
Summary: The paper conducts a theoretical analysis to help understand No Position Encoding (NoPE). It also proposes weave position encoding to achieve improved extrapolation performance without additional cost, and introduces the weave PE method, Mesa-Extrapolation, which recalculates the position ID to ...
Rebuttal 1: Rebuttal: Thanks for your work. ## major concern Thank you very much for your suggestions. The differences between our method and Self-Extend can be categorized into three aspects: Firstly, from a **methodological** perspective, our designed Stair PE is not only applicable to RoPE but also to other posi...
Summary: This paper studies the length extrapolation of LLMs. 1. It provides a theoretical analysis of why NoPE and PE fail to extrapolate beyond a certain length. Previous work has shown that this failure is related to the explosion of hidden states as positions increase. This paper demonstrates that both NoPE and PE ...
Rebuttal 1: Rebuttal: Thank you very much for recognizing our work and providing your suggestions. ## weakness 1 For decoder-only architectures, the model's output is based on the next-token prediction. The last token in the input is crucial, as it generates the next token. For the last token to output the correct ne...
Summary: The paper proposes a positional embedding scheme to address the extrapolation issue: train on short sequences, evaluate on longer sequences. Authors propose a theoretical framing of the positional embeddings contribution to attention. They apply their analysis to NoPE (No Positional Embedding) and to standard ...
Rebuttal 1: Rebuttal: Thanks for your work. ## Weakness 1 & question 1: “Theory is hard to read and unclear definition of the threshold H” **Definition of the extrapolation success or failure:** When a large model continuously produces valid next-tokens for a given long input sequence, we define this as successful e...
Summary: This paper introduces a new LLM length extrapolation method, called Mesa-extrapolation, which utilizes a chunk-based triangular attention matrix and applies stair PE. The proposed method is based on theoretical analysis. The paper conducts extensive experiments on passkey, PPL, summarization to demonstrate the...
Rebuttal 1: Rebuttal: Thanks for your work. ## Weakness 1: We add evaluations on Ruler, and the results are available in the uploaded PDF (see Fig 1, 2 and 4). These results indicate that our method also performs well. ## Q1 & weakness 2: We perform experiments on long context window "microsoft/Phi-3-mini-128k-instr...
Rebuttal 1: Rebuttal: Dear reviewers, Thank you very much for your review. We have provided additional experimental supplements in the uploaded PDF. Please check it out. Pdf: /pdf/f0f359fb459877376a284d73a25b6112167dcd5e.pdf
NeurIPS_2024_submissions_huggingface
2024
Summary: The authors propose a weave position encoding method to enhance LLMs’ inference performance when the input context window exceeds the training context window. This method can be integrated into existing pretrained LLMs without additional finetuning. To support their findings, the authors conducted theoretical ...
Rebuttal 1: Rebuttal: Thanks for your work. ## weakness Thank you for your suggestions. Our method shows good extrapolation performance on accuracy-related tasks, but we observe slight variability in extrapolation performance within mid-length (8k-11k) in the summary task. We will adjust our claims accordingly in the ...
null
null
null
null
null
null
GenRL: Multimodal-foundation world models for generalization in embodied agents
Accept (poster)
Summary: In this work, the authors propose learning a pixel-based reconstructive world model, and then separately learn networks to convert the representations of a pretrained VLM into the learned world model latent space. By using a VLM trained via contrastive alignment, this essentially enables the projection of bot...
Rebuttal 1: Rebuttal: We thank the reviewer for the helpful comments. **Experiments** We added new experiments, including stronger model-based baselines. Details are in the main rebuttal message. We also tested all baselines with LIV's representation in the Kitchen tasks. We used LIV's open-source code to download a...
Summary: The paper looks at a method for leveraging foundation multimodal models for learning world models in RL. They do so by aligning the latent space of a video language model with that of a generative model that can be used for learning in imagination. This is done by training connector-and-aligner networks . The ...
Rebuttal 1: Rebuttal: We thank the reviewer for the helpful comments. **Image-language CLIP results** > simple tasks with clearly distinguishable static end states (such as standing) should have worked equally well with CLIP rewards We agree with the reviewer's intuition and we believe the results confirm their stat...
Summary: This paper proposes to combine a DreamerV3-style world model with a pretrained vision language model (VLM). By training two small adaptors to align the latent space of the VLM with that of the world model, the aligned representations from the VLM can be used as a reward signal to train agents in the world mode...
Rebuttal 1: Rebuttal: We thank the reviewer for the helpful comments. **Learning from visual prompts** In order to support our claims with empirical evaluation, we have provided results of behavior learning from video prompts. The results can be found in our main rebuttal message and the videos on the website. We ha...
Summary: The paper wants to leverage the large-scale pre-training of foundation models trained on internet data to train a world model for embodied agents that generalizes across tasks and domains. This is done by training a world model in the standard way, but in addition training aligner and connector networks that (...
Rebuttal 1: Rebuttal: We thank the reviewer for the helpful comments. **Connector-aligner generalization** First, we would like to make clear that, as stated in multiple parts of the paper, the connector and aligner networks are trained using **vision-only data** and **no language annotations**. We have provided addi...
Rebuttal 1: Rebuttal: ## Training with no language annotations We stated several times that the system is trained with vision-only data (Fig. 1, Line 46, Line 469) and no language annotations (Line 11, Line 42). Nonetheless, some reviewers expressed doubts on this matter. We believe the source of confusion is the stat...
NeurIPS_2024_submissions_huggingface
2024
null
null
null
null
null
null
null
null
Semantic Feature Learning for Universal Unsupervised Cross-Domain Retrieval
Accept (poster)
Summary: This paper introduces the problem of Universal Unsupervised Cross-Domain Retrieval (U2CDR) and proposes a two-stage semantic feature learning framework to address it. The framework includes a cross-domain unified prototypical structure established through an instance-prototype-mixed contrastive loss and a sema...
Rebuttal 1: Rebuttal: > Description of the methodology novelty Please refer to the novelty illustration in the Global Response. In addition to the ablation study, to further validate the novelty of semantic structure preservation and cross-domain matching, we carry out experiments with the replacement of another state...
Summary: This paper tackles the problem of unsupervised cross-domain retrieval. This is the problem where the query and retrieval domains are distinct. For example, in sketch to real retrieval, the system must retrieve the most relevant real images to a query sketch. "Unsupervised" refers to the fact that no labels are...
Rebuttal 1: Rebuttal: > The necessity illustration of each loss and the theory justification Firstly, we did not use six versions of contrastive loss. **IPM** combines INCE and PNCE with the intuitive goal of performing categorical semantic learning on unlabeled domain data. **INCE** forms the basis of unsupervised se...
Summary: This paper proposes Universal Unsupervised Cross-Domain Retrieval for the first time and designs a two-stage semantic feature learning framework to address it. Strengths: This paper proposes a new approach in universal unsupervised domain adaptation, with sufficient experiments to verify its motivation. Weak...
Rebuttal 1: Rebuttal: > Any handling of instances belonging to uncommon categories In the first stage of our UEM framework, we aim to build a unified prototypical structure across domains via the IPM loss. The IPM loss is a combination of instance and prototype contrastive losses. Given that the IPM loss is computed s...
null
null
Rebuttal 1: Rebuttal: ## Global Response We would like to thank all the reviewers for their constructive comments and suggestions. In the global response below, we respond to some common questions and present more visualization in the attached PDF. > [For Reviewers gZ1R and ssYY] Methodology novelty + **Unified Proto...
NeurIPS_2024_submissions_huggingface
2024
null
null
null
null
null
null
null
null
Neural Conditional Probability for Uncertainty Quantification
Accept (poster)
Summary: This paper proposes Neural Conditional Probability (NCP), a novel operator-theoretic approach for learning conditional probability distributions. Extensive theoretical results are provided to support the optimization consistency and statistical accuracy of NCP. NCP can be used to extract conditional density an...
Rebuttal 1: Rebuttal: We appreciate the reviewer's insightful evaluation and valuable comments. In what follows, we aim to address the highlighted weaknesses and respond to the reviewer's questions. ## Weaknesses - __W1:__ We thank the reviewer for this comment. __We have added a detailed comparison of the NCP method...
Summary: I am not qualified to review this paper Strengths: I am not qualified to review this paper Weaknesses: I am not qualified to review this paper Technical Quality: 3 Clarity: 3 Questions for Authors: I am not qualified to review this paper Confidence: 1 Soundness: 3 Presentation: 3 Contribution: 3 Limi...
Rebuttal 1: Rebuttal: We outline our contributions and offer further context in the global response, hoping these additions will better highlight the value of our work on the inference of conditional probability and uncertainty quantification.
Summary: The authors propose a method (Neural Conditional Probability, NCP) for learning a conditional distribution P(Y | X) from a finite sample from a distribution. The method is based on following observations: (1) it is sufficient to learn the conditional expectation operator E_{Y | X}[f](x) = E[f(Y) | X = x]; (2) ...
Rebuttal 1: Rebuttal: We appreciate the reviewer's insightful evaluation and valuable comments. In what follows, we aim to address the highlighted weaknesses and respond to the reviewer's questions. ## Weaknesses - __W1.__ Thank you for this remark. __We added several high-dimensional experiments focused on UQ tasks...
Summary: The paper proposes Neural Conditional Probability, a novel operator-theoretic approach to learning conditional probability distributions by learning parameters of the truncated SVD of the conditional expectation operator with a neural network. The authors provide a rigorous mathematical derivation and argue fo...
Rebuttal 1: Rebuttal: Thank you for your detailed and thoughtful feedback on our submission. We appreciate your recognition of our contribution to the field of operator learning and ML theory which was the primary objective of our work. We would like to address your concerns regarding the empirical evaluations and prov...
Rebuttal 1: Rebuttal: We thank all reviewers for their insightful evaluation of our paper. We appreciate all their comments and remarks, which we will incorporate in our revision. Before addressing each review in detail, we would like to point out some general remarks that apply to all of them. ## Positioning Our ma...
NeurIPS_2024_submissions_huggingface
2024
null
null
null
null
null
null
null
null
FLAME : Factuality-Aware Alignment for Large Language Models
Accept (poster)
Summary: This work studies how to do alignment for large language models to improve their factuality. The focus of this work is on SFT and DPO. The motivation behind this work is a pilot study which shows more factual data does not always lead to a more factual model. To resolve this issue, the proposed Flame framework...
Rebuttal 1: Rebuttal: 1. Re: No external baselines are used in the comparison. We thank you for the suggestion. As far as we know, the existing best approach to factual alignment is the method introduced by Tian et al. [1]. Note that this approach conducts factual alignment directly on the target task (e.g., biograp...
Summary: This paper shows that training on new or unfamiliar knowledge can promote hallucination and that reward functions in standard RL often inadequately capture factuality. The authors propose a factuality-aware alignment method that first identifies instructions as fact-based or non-fact-based. For fact-based inst...
Rebuttal 1: Rebuttal: 1. Re: This approach may struggle with instructions that the original model cannot generate factual answers for. We thank you for the insight. This is exactly what we found in the paper; that is, it is challenging to teach LLMs to learn new knowledge in the finetuning stage. Inevitably, forcing ...
Summary: This paper addresses the issue of factual inaccuracy, or "hallucination," in Large Language Models (LLMs). The authors identify factors that lead to the generation of false facts during supervised fine-tuning (SFT) and reinforcement learning (RL). They propose FLAME, a novel alignment method that incorporates ...
Rebuttal 1: Rebuttal: 1. Re: The baselines compared in this work are limited to different settings of SFT and DPO only. We thank your suggestion to compare with the baseline from Tian et al. [1]. First of all, we want to clarify that Tian et al. [1] mainly focus on fine-tuning LLMs on a specific task (e.g. biography g...
Summary: The paper discusses a novel alignment method to enhance the factual accuracy of LLMs. The authors observe that conventional alignment processes, which include SFT and RL, often result in the generation of false facts or 'hallucinations'. To address this, they introduce factuality-aware alignment (FLAME), which...
Rebuttal 1: Rebuttal: 1. Re: Model Size and Generalizability. It would be beneficial to investigate whether FLAME's effectiveness extends to smaller models, such as 7B or even smaller, given that the factuality-aware SFT relies on self-supervision through few-shot prompting. We thank you for the helpful suggestion. We...
null
NeurIPS_2024_submissions_huggingface
2024
null
null
null
null
null
null
null
null
InstructG2I: Synthesizing Images from Multimodal Attributed Graphs
Accept (poster)
Summary: The authors propose an approach to enhance image synthesis using multimodal attributed graphs, adopting a strategy to condition image generation via a tokenization scheme on graph structure. Strengths: - The paper studies an intersectional topic: leveraging graph learning techniques for image generation, whic...
Rebuttal 1: Rebuttal: Thank you so much for your thoughtful review! Regarding your questions: 1. **The scenario the authors discuss in lines 28-30 seems like it could be well-handled by only text.** We would like to answer this question from three aspects. 1) On one hand, the problem introduced in this paper can be g...
Summary: This paper focuses on the problem of image synthesis on multimodal attributed graphs (MMAGs) and proposes a graph context-conditioned diffusion model, INSTRUCTG2I, to address the challenge in this setting. In particular, it proposes a semantic personalized PageRank-based method to sample related neighbors in t...
Rebuttal 1: Rebuttal: Thank you so much for your thoughtful review! Regarding your questions: 1. **The description in Eq.10 may be incorrect.** Thank you so much for your comment. We have found the typos and will correct them in the revision. 2. **Descriptions of symbols in subsection 3.4.** This section mainly disc...
Summary: The paper introduces a new task graph2image which is to generate images conditioned on both text descriptions and graph information, which improves consistency of generated images compared to conditioned only on texts or images. To address combinatorial complexity of graphs and dependencies among graph entitie...
Rebuttal 1: Rebuttal: Thank you so much for your thoughtful review and support of our work! Regarding your questions: 1. **How large graphs can the method be applied?** Thank you so much for your question. Our method can be adapted to large-scale graphs with millions or even trillions of nodes. In InstructG2I, we on...
Summary: This paper introduces a novel approach for controllable image generation using both graph and text conditions. The authors propose that additional context information from multimodal attributed graphs (MMAGs) can enhance the performance of diffusion models. Specifically, they formulate the Graph2Image problem ...
Rebuttal 1: Rebuttal: Thank you so much for your thoughtful review! Regarding your questions: 1. **Question about the problem setting.** Thank you for your comment. We would like to answer this from two aspects. **Why graph is important?** 1) *Graph structure helps discover multiple informative neighbors*: We agre...
Rebuttal 1: Rebuttal: Dear Reviewers, We sincerely appreciate your valuable feedback and suggestions. We will revise our work based on your reviews. We also want to thank the Reviewers for noting the strengths of our paper, namely: - The problem addressed in our paper is important and well-motivated. (5RsF, nWct, 7...
NeurIPS_2024_submissions_huggingface
2024
null
null
null
null
null
null
null
null
Rethinking No-reference Image Exposure Assessment from Holism to Pixel: Models, Datasets and Benchmarks
Accept (poster)
Summary: The paper introduces a novel paradigm that extends Image Exposure Assessment (IEA) from an image-level to a pixel-level framework. This paradigm comprises three components: model, dataset, and benchmark. Concerning the model, the study introduces the Pixel-level IEA Network (P-IEANet). This network processes i...
Rebuttal 1: Rebuttal: Q1: The terminology lacks clarity. >A1: Thanks for the reviewer's thought-provoking questions. >1) In the context of **evaluating images**, the term "exposure" is **no longer a global attribute of an image**. Even in the context of **capturing images**, as exemplified by the reviewer, the term ...
Summary: This work tackles the challenges in image exposure assessment from three aspects: models, datasets, and benchmarks. Specifically, a P-IEANet model based on DWT is proposed, which can generate pixel-level assessment results in a no-reference manner. An exposure-oriented dataset IEA40K is collected to cover vari...
Rebuttal 1: Rebuttal: >We sincerely appreciate the reviewer's positive feedback, characterizing our paper as "theoretically reasonable and empirically effective" and noting that it "provides valuable insights to the related community." We have thoroughly addressed the reviewer's inquiries, which we believe will signifi...
Summary: This paper proposes a new no-reference image exposure assessment method, Pixel-level IEA Network (P-IEANet), which analyzes and evaluates image exposure from the perspectives of brightness and structure using discrete wavelet transform (Haar DWT). Also, a dataset exclusively tailored for IEA, called IEA40K, is...
Rebuttal 1: Rebuttal: >We greatly appreciate the reviewer's positive feedback on our paper, especially for acknowledging that it "not only proposes a new IEA method but also contributes a new dataset and benchmark, providing a significant boost to the IEA and exposure-related community." We hope the following responses...
Summary: This paper proposes an innovative no-reference image exposure assessment method, transitioning from traditional holistic image evaluation to fine-grained pixel-level assessment. This approach effectively addresses the shortcomings of existing techniques in terms of accuracy and generalization. Researchers have...
Rebuttal 1: Rebuttal: >We appreciate the reviewer's positive feedback on our paper, particularly for acknowledging "an innovative no-reference image exposure assessment method." We have addressed the questions raised below. --- **Q1:** The author mentions in the abstract that the code and dataset can be found in the ...
Rebuttal 1: Rebuttal: General Response: We sincerely thank the reviewers for their efforts in reviewing our work and providing valuable comments. We highly appreciate the comments received, e.g., the positive comments on our contributions (4/4 reviewers), methods' performance (4/4 reviewers), our presentations (3/...
NeurIPS_2024_submissions_huggingface
2024
null
null
null
null
null
null
null
null
Neural Gaffer: Relighting Any Object via Diffusion
Accept (poster)
Summary: This paper presents a method for relighting objects observed from a single image. While existing approaches rely on specific capture conditions using flashlight illumination or portrait captures, or require explicitly decomposing the scene into geometry and reflectance, the proposed method aims to generate ima...
Rebuttal 1: Rebuttal: Thank you for your insightful comments and valuable suggestions. We will revise our paper based on your feedback. Here are our responses to your comments: **1. Technical contribution is incremental considering DiLightNet** * Although our method and DiLightNet both approach single-image-relightin...
Summary: The paper introduces Neural Gaffer, an end-to-end 2D relighting diffusion model designed for single-image relighting without the need for explicit scene decomposition. Neural Gaffer can synthesize high-quality relit images of any object under novel environmental lighting conditions by conditioning on a target ...
Rebuttal 1: Rebuttal: Thank you for your insightful comments and valuable suggestions. We will revise our paper based on your feedback. Here are our responses to your comments: **1. How the method performs when the target is not centered and has a complex background or varied lighting conditions, especially with objec...
Summary: Neural Gaffer presents an approach to object-centric image relighting using diffusion models. The method adapts a pre-trained diffusion model and fine-tunes it on a synthetic dataset designed for relighting tasks. The main feature is its ability to condition the diffusion process on target environment maps, al...
Rebuttal 1: Rebuttal: Thank you for your detailed comments and insightful suggestions. We will revise our paper based on your feedback. Here are our responses to your comments: **1. Evaluating our relighting model on real-world dataset** (in response to weaknesses 1 and question 1) We evaluate our diffusion model on in-...
Summary: The paper proposes a novel method for single-image relighting, which takes an image of an object and a target environmental map as inputs. The authors fine-tune Stable Diffusion on a synthetic relighting dataset to output relit images, conditioning on both the input object image and the target environmental ma...
Rebuttal 1: Rebuttal: Thank you for your insightful comments and valuable suggestions. We will revise our paper based on your feedback. Here are our responses to your comments: **1. Inherent color ambiguity** Color ambiguity is an inherent issue in the single-image relighting task. That said, we found that our d...
Rebuttal 1: Rebuttal: Dear Reviewers, Thank you for dedicating your time to review our paper and offering insightful feedback. We sincerely appreciate your efforts to help enhance the quality of our research. We are also pleased to note that all reviewers were supportive of our work: (a)Recognize our methods are effe...
NeurIPS_2024_submissions_huggingface
2024
null
null
null
null
null
null
null
null
Towards Understanding the Working Mechanism of Text-to-Image Diffusion Model
Accept (poster)
Summary: This paper aims to understand two mechanisms of diffusion models. First, the denoising process is analyzed, and it is found that shapes in an image are constructed in the beginning of the denoising process, while textures and details are filled in later. This empirical observation is justified with a mathemati...
Rebuttal 1: Rebuttal: We thank you for your valuable comments. Here we address your concerns as follows. **Q1**: “As mentioned in the Strengths section above, the findings are not completely surprising (for instance, the shape reconstruction or reliance on text in the early denoising steps, then detail-filling in the ...
Summary: This paper explores the mechanism in the text-to-image diffusion model, including the generation order of image components, the influence of various tokens, and the steps in which tokens work. These observations bring some insight into understanding the diffusion model. Besides, the authors also design a sampl...
Rebuttal 1: Rebuttal: We thank you for your valuable comments. Here we address your concerns as follows. **Q1**: The other conclusions in this paper, e.g., shape first then details, have been discussed in previous works. **A1**: Yes, and we have mentioned in line 161, footnote 5. However, the existing literature only...
Summary: The paper investigates the denoising process in DPM, identifying that the overall shape of the image is formed early in the process while details are added later. It further examines the influence of different text prompt tokens, finding that the end-of-sequence token [EOS] plays a crucial role in shaping the ...
Rebuttal 1: Rebuttal: We thank you for your valuable comments. Here we address your concerns as follows. **Q1**: The paper might lack clarity in explaining the theoretical aspects of frequency signal analysis. **A1**: The theoretical aspects of frequency signal are in Proposition 1, where we have actually proved that...
Summary: This paper studies how the EOS token plays a role in the generation process of diffusion models. In particular, it finds that diffusion models tend to first generate the low-frequency part of the image at the beginning of the generation process, then gradually add high-frequency signal to it. Experiments show...
Rebuttal 1: Rebuttal: We thank you for your valuable comments. Here we address your concerns as follows. **Q1**: “It is not clear that how the "computational cost" is defined in this paper. If the computational cost is GPU VRAM, then the claimed efficiency improvement might be invalid, as the required GPU VRAM for com...
Rebuttal 1: Rebuttal: General Response: We thank all reviewers for their valuable comments. It seems a common question is whether our sampling strategy can be applied to the other conditional generation tasks. To verify this, we further apply our sampling strategy to the other two conditional generation tasks: subje...
NeurIPS_2024_submissions_huggingface
2024
null
null
null
null
null
null
null
null
Deep Correlated Prompting for Visual Recognition with Missing Modalities
Accept (poster)
Summary: The paper proposes a prompt optimization approach to the missing modality issues in multimodal learning. Inspired by the missing-aware prompt (MMP), this paper adds more prompts, including correlated, dynamic and modal-common prompts, to each encoder to improve the performance. The experiment on three datasets...
Rebuttal 1: Rebuttal: Reviewer# YNgk 1. Novelty compared to MMP Many thanks for your question. MMP first introduced prompt learning to handle the missing-modality setting. It inserts learnable tensors, i.e., prompts, at each layer while keeping the image encoder and text encoder fixed to guide the model to ...
Summary: The model proposes prompting strategy where both modalities (image and text) are prompted, and the prompt for both modalities are correlated. The strategy is to use multiple prompts, namely correlated prompts, dynamic prompts, and modal-common prompts. As the backbone itself is multimodal (CLIP), it is a good ...
Rebuttal 1: Rebuttal: 1. Ablation studies by using other multimodal backbones We provide the results by comparing our method with the baseline method upon the single-stream ViLT backbone, and also comparing them upon the two-stream CoCa backbone as below. We first provide the results on the ViLT backbone. Our method ...
Summary: This paper addresses the challenge of generalized missing modalities in multimodal learning, where a modality can be absent during any learning phase (e.g., training, testing, or both). The authors investigate prompt learning with missing modalities and propose deep correlated prompts designed to capture variou...
Rebuttal 1: Rebuttal: 1. Efficacy of each proposed prompt. We place each proposed prompt upon the baseline method, and show the results as below on the MMIMDb dataset upon the missing-both setting with η=70%. It’s observed that each proposed prompt could notably boost the performance. | Configurations| Extra brou...
Summary: This paper proposes to address the missing modality problem for the multimodal recognition model (i.e. the multi-modal data could be incomplete). There are three techniques of prompting being proposed (while the recognition model, i.e. two-stream multimodal method CLIP in this paper, is kept fixed), including:...
Rebuttal 1: Rebuttal: 1. Proposed prompts not directly connected to the missing modality problem. Sorry for the mis-clarification in the manuscript to mislead you. In line 146-150 of our manuscript, we state that we set different prompts for various missing modalities. Specifically, for correlated prompts, we independ...
Rebuttal 1: Rebuttal: We provide (1) a figure to further illustrate our three proposed prompts by comparing them with our baseline and MMP[17]. (2) Visualizations for the dynamic prompts using the T-SNE method on the Food101 dataset upon the missing-both setting with η=70% and η=50%. Pdf: /pdf/86b7115517be30193e9978ba6...
NeurIPS_2024_submissions_huggingface
2024
Summary: This paper proposes a new method to handle missing modalities in visual and language recognition systems. The paper proposes a very similar method to the one proposed by MMP [17] but using different way of getting the prompts to feed them into the transformer layers. Comparison with other works show that the ...
Rebuttal 1: Rebuttal: 1. The mechanism of different prompts. Many thanks for your question. We have plotted a figure to further illustrate our proposed prompts by comparing them with our baseline and MMP[17], which can be found in the pdf file of Author Rebuttal. The baseline simply uses fixed image encoder and text...
null
null
null
null
null
null
ResAD: A Simple Framework for Class Generalizable Anomaly Detection
Accept (spotlight)
Summary: The paper analyzes the class-generalizable anomaly detection problem and introduces residual feature learning. Based on the residual features, the paper proposes a simple AD framework, i.e., ResAD, which incorporates OCC loss and distribution estimating to distinguish normal and abnormal data. The experimenta...
Rebuttal 1: Rebuttal: **[To W1 and Q1].** Thanks for your professional review. We think that our work and InCTRL should be concurrent work. We also initially submitted our work to CVPR2024, and received two weak accepts and one reject (you can see the relevant materials in the rebuttal pdf file). Regretfully, our work ...
Summary: This paper proposes a simple but effective framework that can be directly applied to detect anomalies in new classes. The main insight is learning the residual feature distribution rather than the initial one. In this way, we can significantly reduce feature variations. Even in new classes, the distribution of...
Rebuttal 1: Rebuttal: **[To W1].** Thanks for your suggestion. We then checked the formula writing in several other papers and found that there is punctuation at the end of a formula. This is a quite good detail suggestion. We will make modifications in the revised version. **[To W2].** Thanks for your suggestion. Usi...
Summary: This paper proposed a simple yet effective framework ResAD for class-generalizable anomaly detection by leveraging residual feature learning and a hypersphere constraint. The framework's ability to generalize to new classes without retraining or fine-tuning makes it valuable for real-world applications, provid...
Rebuttal 1: Rebuttal: **[To W1].** We greatly appreciate your suggestion. Under the 4-shot setting, we further evaluate our method on a medical image dataset, BraTS (for brain tumor segmentation) and a video AD dataset, ShanghaiTech (as our method is image-based, we extract video frames as images for use). The comparis...
Summary: This paper proposes to address the cross-class anomaly detection problem. To this end, this study introduces a residual learning framework, ResAD. The ResAD framework aims to learn the residual feature distribution between the target image and a reference image. Experiments are conducted to validate the effectiveness of the p...
Rebuttal 1: Rebuttal: **[To W1].** Thanks for your professional review. We think that our work and InCTRL should be concurrent work. We also initially submitted our work to CVPR2024, and received two weak accepts and one reject (you can see the relevant materials in the rebuttal pdf file). Regretfully, our work was rej...
Rebuttal 1: Rebuttal: We are very grateful for all your constructive suggestions. Please see our specific responses to each reviewer. In the Author Rebuttal pdf file, we provide some relevant materials. We recommend that Reviewer fL7E and 6bzj can download the pdf file and see the contents in it. Pdf: /pdf/a4a7fd593d...
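The core residual-feature idea summarized above can be sketched in a few lines: map each test feature to its nearest reference (normal) feature and keep only the difference, so that normal features cluster near zero regardless of class. This is a minimal illustrative sketch, not the paper's implementation; the function name and nearest-neighbour matching are assumptions.

```python
import numpy as np

def residual_features(feats, ref_feats):
    # ResAD-style residuals (sketch): for each feature vector, subtract its
    # nearest neighbour among the reference (normal) features, reducing
    # class-specific variation so new classes share one "normal" distribution.
    d = np.linalg.norm(feats[:, None, :] - ref_feats[None, :, :], axis=-1)
    nearest = ref_feats[d.argmin(axis=1)]
    return feats - nearest
```

A one-class or distribution-estimation head would then score these residuals, with small residuals indicating normality.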
NeurIPS_2024_submissions_huggingface
2024
null
null
null
null
null
null
null
null
Continuous Spatiotemporal Events Decoupling through Spike-based Bayesian Computation
Accept (poster)
Summary: This paper presents a spike-based Bayesian inference framework for motion segmentation with event cameras. By designing neurons that utilize STDP for online learning of motion patterns, the framework can perform the M-step of the EM algorithm in motion segmentation of event streams. Additionally, the WTA circu...
Rebuttal 1: Rebuttal: We appreciate your constructive suggestions and have supplemented our work with further results on object detection based on motion segmentation. Specifically, we calculated the detection success rate on the EED dataset, corresponding to Fig. 6 in the main text. Our detection success rates across ...
Summary: This work proposes a spike Bayesian computational framework for continuous motion segmentation in event streams and demonstrates that the constructed network can implement an EM-based event stream motion segmentation model. The proposed model uses WTA circuits in the network to achieve an equivalent E-step, wh...
Rebuttal 1: Rebuttal: Thank you for your acknowledgment of our validation of event datasets featuring challenging scenarios, including mixed camera self-motion and high-speed moving objects, which is highly valued. We are pleased that you find our spike Bayesian inference framework to be **highly interpretable** and **...
Summary: The paper proposes to address motion segmentation at very high temporal resolution via an event-based or spiking implementation of expectation-maximization in a generative model. It demonstrates the performance of the resulting spiking neural networks on example experiments. Strengths: The strength of the pap...
Rebuttal 1: Rebuttal: We appreciate the reviewer's recognition of our approach to high-resolution motion segmentation using an event-based implementation of the EM algorithm and acknowledge **our deep engagement with the spiking neural network literature**. Here, we aim to provide further clarification on the EM framew...
Summary: This paper demonstrates that WTA circuits along with STDP learning resembles EM algorithm-like Bayesian inference and could be used for motion segmentation from event streams by contrast maximization of warped events. Strengths: The paper proposes an interesting approach for event motion segmentation based on...
Rebuttal 1: Rebuttal: Thank you for recognizing our **innovative** approach to event motion segmentation using event-based dynamic vision sensors and an EM-like framework. We appreciate your acknowledgment of our method's use of WTA circuits combined with STDP-based learning. The following are the main issues addresse...
Rebuttal 1: Rebuttal: We sincerely appreciate the valuable feedback from all reviewers. Thank you for recognizing the strengths of our work. Reviewer **WNgv** praised our innovative approach to event motion segmentation using event-based dynamic vision sensors and an EM-like framework, highlighting the use of Winner-Ta...
NeurIPS_2024_submissions_huggingface
2024
null
null
null
null
null
null
null
null
Improved Few-Shot Jailbreaking Can Circumvent Aligned Language Models and Their Defenses
Accept (poster)
Summary: This paper proposes a new way to jailbreak LLMs through an improved version of few-shot jailbreaking. They propose to use a random search to select examples that are most effective at jailbreaking the model from a pre-defined pool generated with Mistral-7B. On top of that, they alternate the steps of each example ...
Rebuttal 1: Rebuttal: Thank you for your valuable review and suggestions. Below we respond to the comments in ***Weaknesses (W)*** and ***Questions (Q)***. --- ***W1: No comparison to few/many-shots baselines.*** According to the ICA paper [1*], even ICA (10-shots) has a lower ASR than our I-FSJ (2-shots) against Lla...
Summary: This paper proposes two improved techniques for in-context few-shot jailbreaking: demo-level random search and the injection of special tokens from the system prompt. The authors conduct extensive experiments across a series of aligned language models. Ablation studies demonstrate the effectiveness of both pro...
Rebuttal 1: Rebuttal: Thank you for your valuable review and suggestions. Below we respond to the comments in ***Weaknesses (W)*** and ***Questions (Q)***. --- ***W1&Q2: Potential leakage due to using AdvBench to generate the demonstration pool and the concern that only using 50 requests from AdvBench for evaluation is insuffi...
Summary: This work proposes a new method to jailbreak LLM to elicit harmful responses. The proposed method follows a line of works on using the demonstrations of harmful responses in the context of prompt to jailbreak. It improves the previous works regarding reducing the number of demonstrations in the context and inc...
Rebuttal 1: Rebuttal: Thank you for your valuable review and suggestions. Below we respond to the comments in ***Weaknesses (W)***. --- ***W1: Potential leakage due to using AdvBench to generate demonstration pool and the limited scale of AdvBench.*** To prevent leakage or overfitting, we measure the cosine similari...
Summary: This paper proposes several ICL (in-context learning)-based techniques to improve the effectiveness and efficiency of jailbreaking prompts, including adding system special tokens and random search on the demonstrations. Strengths: - The discovery that using special tokens can enhance the effectiveness of harm...
Rebuttal 1: Rebuttal: Thank you for your valuable review and suggestions. Below we respond to the comments in ***Weaknesses (W)***. --- ***W1: Should take ICA as the main target, rather than refining MSJ. The difference between the used baseline (FSJ) and ICA is not indicated.*** According to the ICA paper [1*], eve...
Rebuttal 1: Rebuttal: We thank all reviewers for their constructive feedback, and we have responded to each reviewer individually. We have also uploaded a **Rebuttal PDF** that includes: - $\\textrm{\\color{blue}Figure A}$: Harmful loss of our I-FSJ using different public special tokens against GPT-4; - $\\textrm{\\co...
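The demo-level random search described in the reviews can be sketched generically: start from a random set of demonstrations and greedily accept single-slot swaps that improve an attack score. All names below are hypothetical illustrations; in the actual I-FSJ method the score would come from the target model (e.g., loss on a harmful prefix), which is abstracted here as `score_fn`.

```python
import random

def demo_level_random_search(pool, k, score_fn, iters=100, seed=0):
    # Greedy random search over demonstration slots (sketch): re-sample one
    # slot at a time and keep the change only if the score improves.
    rng = random.Random(seed)
    demos = rng.sample(pool, k)
    best = score_fn(demos)
    for _ in range(iters):
        i = rng.randrange(k)
        cand = demos.copy()
        cand[i] = rng.choice(pool)   # swap a single demonstration
        s = score_fn(cand)
        if s > best:                  # accept only improvements
            demos, best = cand, s
    return demos, best
```

Because only improving swaps are accepted, the returned score is monotonically non-decreasing in the number of iterations.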
NeurIPS_2024_submissions_huggingface
2024
Summary: This paper proposes jailbreak attacks via few-shot demonstrations. The authors introduce a three-step method to achieve this goal, which includes constructing a demo pool, injecting special tokens, and demo-level random search. The proposed method demonstrates strong attack performance against aligned LLMs and...
Rebuttal 1: Rebuttal: Thank you for your valuable review and suggestions. Below we respond to the comments in ***Weaknesses (W)***. --- ***W1: How does the attacker know the special tokens used in the LLMs, especially for closed-source models such as ChatGPT?*** The special tokens for open-source LLMs can be publicl...
null
null
null
null
null
null
DART-Math: Difficulty-Aware Rejection Tuning for Mathematical Problem-Solving
Accept (poster)
Summary: This paper synthesizes a math reasoning dataset with a designed way of rejection sampling. Many base models show performance improvements on math reasoning tasks after instruction-tuning on this dataset. They promise to release the dataset and models. Strengths: Their curated dataset achieves relatively good ...
Rebuttal 1: Rebuttal: Thanks for your comments! We address your concerns below. > **Q1**: The proposed sampling technique is trivial and incremental …… the uniform method is used in ToRA, and the prop2diff method is used in MARIO. **A1**: We respectfully disagree with the reviewer. Both ToRA and MARIO are distinct f...
Summary: The paper introduces Difficulty-Aware Rejection Tuning (DART), a novel approach for enhancing the mathematical problem-solving capabilities of large language models (LLMs). Traditional methods often produce datasets biased towards easier queries, limiting the models' ability to learn from challenging examples....
Rebuttal 1: Rebuttal: Thanks for your positive comments! We address your concerns below. > **Q1**: The success of DART relies heavily on the ability of models to generate correct responses for difficult queries, which may not always be feasible for extremely challenging problems. **A1**: This is indeed a limitation ...
Summary: The paper proposes a rejection sampling pipeline for automatically generating SFT data, emphasizing that harder data requires more trials. The difficulty is heuristically determined using the ratio of incorrect trials for each question. Experiments demonstrate that this method can outperform traditional reject...
Rebuttal 1: Rebuttal: Thanks for your comments! We address your concerns below: > **Q1**: Assigning more budget to more complex questions in data synthesis is a common practice. For instance, in [1], which successfully annotated 83.1% of MATH questions **A1**: First, we note that most recent works in mathematical dat...
Summary: The paper presents an approach to improving the performance of LLMs in mathematical problem-solving. The authors identify that current datasets synthesized using proprietary models like GPT-4, are biased towards easier queries. To address this, they introduce Difficulty-Aware Rejection Tuning (DART), which all...
Rebuttal 1: Rebuttal: Thank you for the positive comments! We address your concerns below. > **Q1**: It is unclear how the hyperparameters of the baseline, VRT, were tuned. For instance ……, sampling temperature is searched from 0.3 to 1.7 for DART. **A1**: We searched temperature from 0.3 to 1.7 according to accura...
Rebuttal 1: Rebuttal: We thank all the reviewers for the insightful comments! While we address most concerns in the individual rebuttals, here in the general rebuttal we would like to clarify the difference between our approach and ToRA [1] / MARIO [2], a concern raised by Reviewer neCQ and Reviewer quxz. The most im...
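The difficulty-aware allocation discussed in the reviews (harder queries get more sampling trials, with difficulty estimated from the failure rate) reduces to a simple proportional split. This is only a sketch under that assumption; the paper's actual prop2diff/uniform schemes may handle further details such as per-query minimums.

```python
def allocate_budget(fail_rates, total_budget):
    # prop2diff-style allocation (sketch): each query's share of the sampling
    # budget is proportional to its estimated difficulty (failure rate).
    total = sum(fail_rates)
    if total == 0:  # all queries easy: fall back to a uniform split
        return [total_budget // len(fail_rates)] * len(fail_rates)
    return [round(total_budget * f / total) for f in fail_rates]
```

For example, a query that failed 90% of trial generations would receive roughly nine times the budget of one that failed 10%.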
NeurIPS_2024_submissions_huggingface
2024
null
null
null
null
null
null
null
null
Logarithmic Smoothing for Pessimistic Off-Policy Evaluation, Selection and Learning
Accept (spotlight)
Summary: The paper considers the offline contextual bandit problem. The authors consider a class of reward estimators for this setting that is a regularization of Inverse Propensity Scoring (IPS - aka importance sampling). A general concentration result is provided for this class of estimators. This is used to provide ...
Rebuttal 1: Rebuttal: First, we would like to thank you very much for your positive review acknowledging the quality of our work. We hope our response addresses your questions and increases your confidence in our work. We will consider the points raised when updating the manuscript. **(1) Bound Comparisons** Given th...
Summary: The authors propose empirical concentration inequalities for off-policy evaluation that apply to several forms of (smoothed) IPS, which are claimed to be tighter than the results in existing works. These bounds are then used to derive policy learning guarantees that inherit the properties of the concentration ...
Rebuttal 1: Rebuttal: First of all, we would like to thank you for your review and we hope that our response answers your questions and clears out misunderstandings. We think that your comments can be completely addressed and we hope this will lead you to increase your score. **Answer to the main criticism** First, ...
Summary: This paper studies log-algorithmic smoothing of importance weight for off-policy learning. The proposed smoothing technique can be seen as a differentiable variant of clipping, which is useful for variance reduction for OPL. The paper also analyzes the PAC-Bayes learning bound of the proposed OPL method, chara...
Rebuttal 1: Rebuttal: First of all, we would like to thank you for your positive review, and we hope that our response addresses your questions and clears up any misunderstandings. **(1) Connection to Metelli et al. 2021** Our Logarithmic Smoothing (LS) estimator and the Harmonic estimator of Metelli et al. (2021) sh...
Summary: Policy evaluation, selection and optimization are considered in the context of offline contextual bandits, where i.i.d. data with a known behavior policy is given. The authors set out to study a generalization of importance weighted policy evaluation; for this they start from a general formulation that compute...
Rebuttal 1: Rebuttal: First of all, we would like to thank you for your positive feedback acknowledging the quality of our work, and we hope that our response addresses your question. In our paper, we adopt the experimental design of [31] for both pessimistic policy evaluation and selection. The authors of [31] identi...
Rebuttal 1: Rebuttal: We are very grateful to the reviewers and AC for their valuable time. We attach to our rebuttal additional plots comparing the properties of the bounds for three different datasets and supporting empirically our theoretical findings, showing that the LS bound is tighter than its competitors. **Fi...
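The family of regularized IPS estimators discussed above can be illustrated with a toy sketch. The log-based transform below is one natural differentiable soft-clipping of the importance weights; the paper's exact Logarithmic Smoothing estimator may differ (e.g., in where the reward enters), so treat this as an assumption-laden illustration rather than the paper's formula.

```python
import numpy as np

def ips(w, r):
    # vanilla inverse propensity scoring: unbiased but heavy-tailed
    return np.mean(w * r)

def clipped_ips(w, r, tau):
    # hard clipping: lower variance, some bias, non-differentiable at tau
    return np.mean(np.minimum(w, tau) * r)

def log_smoothed_ips(w, r, lam):
    # log-based soft clipping: behaves like w for small lam*w, grows only
    # logarithmically for large weights, and is differentiable everywhere,
    # making it usable inside gradient-based off-policy learning
    return np.mean(np.log1p(lam * w) / lam * r)
```

As `lam -> 0` the smoothed estimate recovers plain IPS, while larger `lam` trades bias for variance reduction, mirroring the clipping threshold `tau`.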
NeurIPS_2024_submissions_huggingface
2024
null
null
null
null
null
null
null
null
Improving Sparse Decomposition of Language Model Activations with Gated Sparse Autoencoders
Accept (poster)
Summary: This work proposed a Gated Sparse Autoencoder (Gated SAE) to mitigate standard SAEs' biases, such as shrinkage, which systematically underestimates feature activations. The key difference between a Gated SAE and a standard SAE is that the Gated SAE separates affine transformations within the encoder in ord...
Rebuttal 1: Rebuttal: We are grateful for your thoughtful review and helpful feedback, and are pleased you found our experiments comparing Gated and baseline SAEs comprehensive. We indeed used sae-vis, which you referenced in your review, to produce the visualization used in the interpretability study, citing this lib...
Summary: The paper attempts to resolve the issue of feature shrinkage in sparse autoencoders (SAEs) by replacing the SAE ReLU activation function with a gated ReLU unit. The weight-tying scheme they use for the gated unit effectively turns it into a jump ReLU activation function. They train gated SAEs and baseline SAEs...
Rebuttal 1: Rebuttal: Thank you for taking the time to review our paper, we are heartened that you agree that Gated SAEs attempt to address a substantive practical problem with current SAE training methods and that you find our evaluations and ablations extensive. Regarding your questions about the raw (unspliced mode...
Summary: This work introduces a new technique for sparse autoencoders in mechanistic interpretability. By using a less naive SAE with a gating mechanism and a little extra computation, the paper shows a decent improvement over the baseline. Strengths: This work addresses the important issue of interpreting transform...
Rebuttal 1: Rebuttal: We are grateful for your review and valuable feedback, and are encouraged that you found the paper fairly easy to follow, with results clearly presented and key aspects of the method appropriately ablated. We appreciate that the explanation of the architecture and loss function is somewhat dense,...
Summary: This paper introduces Gated Sparse Autoencoders (Gated SAEs), an improvement over standard sparse autoencoders (SAEs) for decomposing language model activations. The key idea is to separate the tasks of detecting which features are active and estimating their magnitudes, allowing the sparsity penalty to be app...
Rebuttal 1: Rebuttal: Thank you for your detailed review of our paper and your feedback and questions. We are glad you appreciated our explanation of the motivation behind the gated SAE architectural modification and found our evaluations and ablation studies thorough. We agree that the exposition is fairly dense and ...
Rebuttal 1: Rebuttal: We thank the area chair and all our reviewers for taking the time to read our paper and for their insightful comments and suggestions. We are encouraged by all four reviewers recommending that our paper be accepted, with reviewer Qtwo recognizing that our paper “attempts to address a substantive ...
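The architectural idea the summaries describe, separating *which* features are active (gate) from *how strong* they are (magnitude), can be sketched as a forward pass. This is a minimal sketch: parameter names (`b_gate`, `b_mag`, `r_mag`) and the exact weight-tying are assumptions, not the paper's notation, though the tied form behaving like a jump-ReLU matches the reviewers' description.

```python
import numpy as np

def gated_sae_forward(x, W_enc, b_gate, b_mag, r_mag, W_dec, b_dec):
    pre = x @ W_enc                                   # shared encoder projection
    gate = (pre + b_gate) > 0                         # binary: which features fire
    mag = np.maximum(pre * np.exp(r_mag) + b_mag, 0)  # ReLU magnitude estimate
    f = gate * mag                                    # gated feature activations
    x_hat = f @ W_dec + b_dec                         # reconstruction
    return f, x_hat
```

The sparsity penalty can then be applied to the gate path alone, so pushing features toward inactivity no longer shrinks the magnitudes of the features that do fire.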
NeurIPS_2024_submissions_huggingface
2024
null
null
null
null
null
null
null
null
Marginal Causal Flows for Validation and Inference
Accept (poster)
Summary: This paper introduces _Frugal Flows_, a method that learns the distribution of the data used for causal effect estimation; namely outcome $Y$, binary treatment $X$, and pretreatment covariates $\mathbf{Z}$. Through a combination of the frugal parametrisation, normalizing flows, and copulas, separate components for the m...
Rebuttal 1: Rebuttal: We thank the reviewer for their comprehensive assessment, thoughtful commentary, and suggestions. We share your aspiration that frugal flows can be engineered and presented in a user-friendly manner for the causality community. # Weaknesses We would like to address the comments under the Weaknesse...
Summary: The paper introduces a generative modeling approach called Frugal Flows, designed to learn the data generation process with an explicit parametrization for the marginal causal effect of treatment on outcomes. Inspired by the frugal parametrization of marginal structural models, this approach models the margina...
Rebuttal 1: Rebuttal: We thank the reviewer for their detailed analysis of our paper. Before addressing the specific questions, we would like to comment on the important points raised in the weaknesses section. ## Weaknesses We agree that a more comprehensive validation of our proposed approach would strengthen the n...
Summary: This paper proposes a generative model called Frugal Flows that makes use of copula flows to infer marginal causal effects by simulating the data-generating process. Strengths: - The problem of inferring marginal causal effects is an interesting and important problem - The idea of using generative models ...
Rebuttal 1: Rebuttal: We appreciate the reviewer’s comments and suggestions for our work. Indeed, we agree that frugal flows are an interesting addition to inference algorithms for estimating marginal causal densities in large datasets using Normalizing Flow models. In addition, we believe a key contribution of frugal...
Summary: This work proposes to leverage existing neural density estimators (specifically, normalizing flows) to exploit a newly-proposed "frugal parametrization" that can capture the causal marginal distribution of an underlying causal model. Under this parametrization, the authors show how to specify and train each co...
Rebuttal 1: Rebuttal: We are grateful for the reviewer’s feedback and suggestions for our submission. Below are our responses to the noted weaknesses and questions: # Weakness 1 We acknowledge that the frugal parameterization was briefly introduced. Due to page constraints, we provided a brief overview in the main text...
Rebuttal 1: Rebuttal: We thank all the reviewers for their comprehensive commentary and very helpful suggestions for our paper. In addition to the individual responses to each reviewer, we provide a more global summary of what we believe were the core themes across all four reviews. These centre around 1) clarity and t...
NeurIPS_2024_submissions_huggingface
2024
null
null
null
null
null
null
null
null
Slicing Vision Transformer for Flexible Inference
Accept (poster)
Summary: The paper targets scaling down Vision Transformers (ViT) to fit environments with dynamically changing resource constraints. The authors propose Scala, a framework enabling a single network to represent multiple smaller ViTs with flexible inference capability by activating various subnets during training. Scal...
Rebuttal 1: Rebuttal: We appreciate the Reviewer's feedback. We provide further explanations to clarify the Reviewer's concerns based on several key points as below. *** **Weakness 1: Discussion with dynamic networks.** We thank the Reviewer for the valuable suggestion and we will add discussion with those dynamic...
Summary: The paper presents Scala, a novel framework for scalable representation learning developed from US-Net. It identifies the issues of directly applying US-Net to ViTs and proposes solutions including Isolated Activation, Scale Coordination, and Stable Sampling. These innovations enable Scala to output several su...
Rebuttal 1: Rebuttal: We appreciate the Reviewer's approval and valuable comments. We respond to the Reviewer's concerns as below. *** **Weakness 1: Fixed scaling ratio.** Thank you for the comment. The hidden dimension of ViT has to be an integer multiple of the number of heads (e.g., 6/12) so ViT cannot support ...
Summary: The paper introduces Scala, a novel framework designed to effectively scale down Vision Transformers (ViTs) for use in environments with fluctuating resource constraints. The key insight is that smaller ViTs can function as sub-networks within a larger ViT, differing mainly in width. Scala enables a singular n...
Rebuttal 1: Rebuttal: We appreciate the Reviewer's comments to point out the confusing description and we make the response as below. *** **Weakness 1: Phrase.** Thank you for your insightful suggestion. We will revise our presentation in the final version to prevent any potential misunderstanding. Our motivation ...
Summary: This paper advances an approach for training Vision Transfomers (ViTs) such that at inference time they can be dynamically adjusted to fit different budget constraints with reduced drops of performance. To this end, the authors introduce Scala, a framework that allows a single network to encapsulate and train ...
Rebuttal 1: Rebuttal: We sincerely appreciate the Reviewer’s detailed comments and constructive suggestions for us to improve our work. We make the response as below. *** **Weakness 1: Phrase.** Thanks for the great suggestion and we will modify the presentation in our final version. Our motivation for adopting 's...
Rebuttal 1: Rebuttal: Dear Reviewers: Thanks for your valuable comments in the review process. We have an exciting experiment added during rebuttal which supports that Scala can effectively inherit the generalization ability from foundation models like DINOv2 while maintaining the flexible inference capability. This i...
NeurIPS_2024_submissions_huggingface
2024
null
null
null
null
null
null
null
null
Bridging Model-Based Optimization and Generative Modeling via Conservative Fine-Tuning of Diffusion Models
Accept (poster)
Summary: This paper presents a conservative fine-tuning method called BRAID, which integrates the strengths of diffusion models and model-based optimization (MBO) to improve the performance of pre-trained diffusion models on offline datasets. BRAID optimizes a conservative reward model that includes penalties outside t...
Rebuttal 1: Rebuttal: Thank you for the detailed and insightful feedback. We have addressed the reviewer's concern by clarifying that (1) our goals differ significantly from those in standard offline RL works, (2) we have compared our method with recent works that align with our objectives, such as Yuan et al. (2023) ...
Summary: The paper tackles the task of black box optimization in an offline setting. Given a pretrained diffusion model, they first train a surrogate model on the offline data and use it to tilt the diffusion model distribution via finetuning it. The authors distinctly focus on an uncertainty quantification based proce...
Rebuttal 1: Rebuttal: We appreciate your feedback. We have addressed your concern by explaining a more detailed evaluation plan for biological tasks. **Weakness: Evaluations are inherently limited in their computational nature and the conclusions that can be drawn for the procedures effectiveness in biological seque...
Summary: This paper proposes a conservative approach for fine-tuning diffusion models with a reward model learned from offline data. Specifically, the ideas are two-fold: The first idea is to replace the reward model with a conservative estimate based on classical generalization bounds. The second idea is to leverage t...
Rebuttal 1: Rebuttal: We appreciate your feedback. We have addressed your concern by explaining (1) the bootstrap/RKHS bonus term's practical usefulness in our scenario and its widespread use in various real applications/papers, and (2) how our experiments are intentionally designed to demonstrate that conservatism (ra...
Summary: 1) This paper analyzes the two mainstream angles of computational design. 2) It proposes a hybrid one that fine-tunes generative models offline. 3) It conducts experiments on two tasks to show the performance of their method. Strengths: 1) Sufficient theoretical analysis and detailed preliminaries. 2) The idea is st...
Rebuttal 1: Rebuttal: Thank you for your positive feedback! We have addressed your concerns by providing additional explanations on (1) the disadvantages and advantages of pure generative model/MBO approaches, and (2) experimental results on diversity metrics based on CLIP scores. **Weaknesses: In the introduction, t...
Rebuttal 1: Rebuttal: We appreciate feedback from all reviewers. We respond to raised weaknesses/questions as much as possible. **Papers we cite:** We have added the papers we cited in our response here. * Deringer, V. L., Bartók, A. P., Bernstein, N., Wilkins, D. M., Ceriotti, M., & Csányi, G. (2021). Gaussian proc...
NeurIPS_2024_submissions_huggingface
2024
null
null
null
null
null
null
null
null
Scene Graph Disentanglement and Composition for Generalizable Complex Image Generation
Accept (spotlight)
Summary: This paper employs scene graphs for image generation. Unlike previous methods, it exploits the generative capabilities of variational autoencoders and diffusion models in a generalizable manner, compositing diverse disentangled visual cues from scene graphs. The authors propose a Semantics-Layout V...
Rebuttal 1: Rebuttal: Thank you for your positive comments and valuable feedback on our work! We are excited and encouraged by your support! Below we address your concern. **Q1: About the clarification of the multi-layered sampler section.** **R1**: We sincerely appreciate the valuable suggestion, and we will make ou...
Summary: The paper proposes DisCo (Disentangled Compositional image generation), which integrates both layout and semantic information derived from scene graphs to improve the quality and controllability of generated images. In particular, DisCo has three main components: Semantics-Layout Variational AutoEncoder (SL-VA...
Rebuttal 1: Rebuttal: We greatly appreciate all of your valuable suggestions, which play a pivotal role in enhancing the quality of our paper. Below we address all your concerns. **Q1: Discussion about the complexity and quality.** **R1**: Thanks for your valuable suggestions. To comprehensively evaluate the complex...
Summary: This paper presents "DisCo," a novel framework for generating complex images from structured scene graphs. Unlike traditional text-to-image or layout-to-image methods, DisCo utilizes a Semantics-Layout Variational AutoEncoder (SL-VAE) to disentangle and generate diverse spatial layouts and interactive semantic...
Rebuttal 1: Rebuttal: We sincerely appreciate the affirmation from the reviewer for our work. It serves as a strong motivation for us! Below we address your concerns sequentially. **Q1: More quantitative comparisons with related baselines, such as R3CD.** **R1**: Actually, in Table 1 of the manuscript, we have alrea...
Summary: This paper proposes a method that uses a scene graph and integrates variational autoencoders (VAEs) and diffusion models to address complex scene generation. Specifically, a Semantics-Layout Variational AutoEncoder (SL-VAE) is used to derive diverse layouts and semantics from the scene graph, while a Compositi...
Rebuttal 1: Rebuttal: We greatly appreciate all of your valuable suggestions, which play a pivotal role in enhancing the quality of our paper. Below we address all your concerns. **Q1: Details of scene graph construction.** **R1**: Thank you for the questions raised by the reviewer, and we will add the details of sce...
Rebuttal 1: Rebuttal: Dear reviewers, We thank all reviewers for their time and efforts in reviewing our paper. These constructive reviews can bring multiple improvements to our manuscript. We are encouraged that the reviewers appreciate our method, including: - structure design that makes sense *[Reviewer 3rPJ]* -...
NeurIPS_2024_submissions_huggingface
2024
null
null
null
null
null
null
null
null
Direct Language Model Alignment from Online AI Feedback
Reject
Summary: This paper proposes OAIF, an online method to align language models with human preferences, where feedback from language models serves as a surrogate for human feedback. The key to OAIF is to use online-generated preference pairs throughout the training process. Experiment results show that, by switching offline preferen...
Rebuttal 1: Rebuttal: We reply to each of the review’s concerns below. > My first concern is regarding the novelty of the paper. It seems that the language model annotator is essentially a preference model. Therefore, OAIF can be seen as a method of online direct alignment algorithm with access to a preference model. ...
Summary: This work extends offline preference learning methods, i.e., DPO, to an online variant by using an LLM as annotator to collect new datasets for further preference learning. The results show that Direct Alignment from Preferences (DAP) methods achieve a win rate over the offline methods beyond 60%. Strengths: 1. Paper is go...
Rebuttal 1: Rebuttal: We reply to each of the reviewer's concerns individually below. > The improvement by extending online is under expectation as it introduces more datasets and training budgets. We may disagree with this argument as it is unclear how much improvement would be considered as “expected”. We showed em...
Summary: This paper applies direct alignment from preferences (DAP) methods, particularly DPO, to online settings where responses are sampled in an on-policy manner and feedback is provided by the LLM annotator in real-time. Extensive experiments demonstrate the effectiveness of these simple ideas. Strengths: The pape...
Rebuttal 1: Rebuttal: Thanks for identifying our contribution to addressing the off-policy sampling issue via the proposed online AI feedback (OAIF) method. We have addressed each concern as described below. > The rationale for why on-policy learning brings performance gains is not well clarified. The cited reference ...
Summary: The paper presents a new method called Online AI Feedback (OAIF) for direct alignment from preferences (DAP) that addresses the limitations of existing DAP methods, which rely on static, offline feedback datasets. By using an LLM as an online annotator to provide real-time feedback during each training iterati...
Rebuttal 1: Rebuttal: We respond to the questions you have as follows. > The idea is straightforward but lacks theoretical proof. The proposed method combines DPO and AI feedback, unlike the constitutional AI paper, which integrates PPO with AI feedback. However, this point is minor. Given the abundance of concurrent ...
null
NeurIPS_2024_submissions_huggingface
2024
null
null
null
null
null
null
null
null
A teacher-teacher framework for clinical language representation learning
Accept (poster)
Summary: The paper introduces a novel teacher-teacher framework named LIghtweight kNowledge alignmEnt (LINE), which facilitates knowledge exchange between two pre-existing large language models (LLMs) to enhance clinical language representation. By leveraging complementary knowledge from general-purpose and domain-spec...
Rebuttal 1: Rebuttal: > **Data Requirements and Availability**: A notable limitation of the proposed LINE framework is its dependency on well-aligned and specific types of data sources, which may not be readily available or commonly found in practical settings. For example, integrating data from disparate modalities li...
Summary: This paper presents an interesting topic on LLMs, but the importance of this problem is not convincing and the methods here are not novel. Strengths: The teacher-teacher concept is novel to some extent. Weaknesses: 1. The problem's importance is not significant. 2. It lacks the inclusion of SOTA models like ...
Rebuttal 1: Rebuttal: Thank you for your comment. We will address your comments point-by-point. > The problem's importance is not significant. Thank you for raising this important question. While our motivating example originated from the medical domain, our LINE framework is broadly applicable to a wide range of sce...
Summary: The authors look to address the question representational alignment between language models trained on different textual domains to improve performance of potentially both models on their out-of-domain text. The authors propose to specifically investigate this in the context of EHR text, and choose as their mo...
Rebuttal 1: Rebuttal: Thank you for your detailed comments. In the following, we will address your questions point-by-point. > The project's scope is incredibly narrow. Thank you for raising this important question. While our motivating example comes from the medical domain, our LINE framework is broadly applicable ...
Summary: This paper introduces a teacher-teacher framework for clinical language representation learning. The framework uses a lightweight knowledge alignment module to harmonize the knowledge of both models within a unified space, which includes two steps: the first step involves initial training to define residuals a...
Rebuttal 1: Rebuttal: Thank you for your insightful comments! In the following, we will address them point-by-point. > Figure 1 is somewhat confusing. From my understanding, Teacher 1 should be a strong LLM, while Teacher 2 should be an LLM with existing domain-specific knowledge. However, Figure 1 gives the impressio...
Rebuttal 1: Rebuttal: Thank you all for your comments and questions! Based on your suggestions, we have made the following major changes during the rebuttal phase: ### Additional Experiment 1. **New Teacher Model**: We have adopted the OpenAI text embedding model "text-embedding-v3-small" as Teacher 1. Since it was r...
NeurIPS_2024_submissions_huggingface
2024
Summary: The paper proposes a mutual learning framework, called LINE, between two pre-existing LLMs in the healthcare domains. By harmonizing the knowledge of two distinct LLMs into a unified representation space, the model achieves better performance on intrinsic and extrinsic downstream evaluations of clinical tasks....
Rebuttal 1: Rebuttal: Thank you for your insightful comments! Below are our responses to your questions, addressed point-by-point: > It is unclear if LINE will work on combinations of other LLMs. Thank you for raising this important question. To address your concern, we have extended our experiments to include additi...
null
null
null
null
null
null
UIR-LoRA: Achieving Universal Image Restoration through Multiple Low-Rank Adaptation
Reject
Summary: This paper introduces a universal image restoration framework UIR-LoRA based on multiple low-rank adapters. UIR-LoRA employs the pre-trained text-to-image diffusion model SD-turbo as the shared component. It utilizes a LoRA composing strategy based on the degradation similarity predicted by CLIP encoder to com...
Rebuttal 1: Rebuttal: Thanks for your thorough review and valuable feedback. 1. Detail distortion arises from operations such as downsampling or pooling. Using skip connections has become a standard and commonly used method to address this issue, as seen in the bypass decoder in [R1], the skip connections in [R2] and [...
Summary: This paper proposes to perform universal image restoration via multiple low-rank adaptation. The key idea is to leverage a pre-trained stable diffusion model as the shared component and transfer it to specific degradations with LoRA adaptation. A degradation-aware router is further proposed to generate weights...
Rebuttal 1: Rebuttal: We are truly grateful for your positive feedback on our work. 1. ControlNet is used in DiffBIR, and it adds a single encoder to handle various degradations, but its performance is still limited by task conflict. However, LoRA can be applied to any layer of a pre-trained model with a small number ...
Summary: This submission proposes a transfer-learning-based strategy to address challenges related to image-degradation restoration. The premise is that a pre-trained generative model can be employed as a common starting component for multiple degradation types, upon which distinct sets of trainable parameters (i.e., low...
Rebuttal 1: Rebuttal: Thanks for your thorough review and valuable feedback. 1. Core technical contributions: The core idea of our method is to introduce the paradigm of **multi-domain transfer learning** into multi-task image restoration, which aims to address the issues of task conflict and feature sharing in multi-t...
Summary: The paper proposes a universal image restoration framework using multiple low-rank adapters that learn task-specific weights to perform multi-domain transfer learning. The proposed method leverages the pre-trained generative model weights as the shared component and adapts them with task-specific low-rank adapter...
Rebuttal 1: Rebuttal: Thanks for your valuable suggestions and for acknowledging our work. 1. Why REDS and LOLBlur? We used the REDS and LOLBlur mixed datasets because the mixed degradation scenarios in these datasets are common, whereas the mixing in MID6 is not commonly seen in real-world scenarios. Since MID6 has no...
null
NeurIPS_2024_submissions_huggingface
2024
Summary: This paper presents a framework to improve image restoration across various degradation types using Low-Rank Adapters (LoRA). The proposed method adapts a pre-trained generative model to each degradation type. It performs a weighted sum of the output of adapted models using the estimated degradation of input i...
Rebuttal 1: Rebuttal: Thanks for your thorough review and valuable feedback. 1. Motivation of the weighted sum: When the image has only one type of deterioration, the “top-1” strategy and “all” strategy perform similarly, as indicated in the “Multiple Degradation” column of Table 3. However, when the degraded image has...
null
null
null
null
null
null
FedNE: Surrogate-Assisted Federated Neighbor Embedding for Dimensionality Reduction
Accept (poster)
Summary: The paper introduces a novel approach to address the challenge of collaboratively visualizing high-dimensional data in a federated learning (FL) environment. The proposed method, FEDNE, integrates the FEDAVG framework with contrastive neighbor embedding (NE) techniques, aiming to preserve data privacy while en...
Rebuttal 1: Rebuttal: > Questions regarding scalability and complexity Please see the general response. > The paper proposes intra-client data mixing … However, this approach might not entirely mitigate the issue … More detailed comparisons with alternative methods … Thank you for the insightful comment. As we point...
Summary: The paper "FEDNE: Surrogate-Assisted Federated Neighbor Embedding for Privacy-Preserving Dimensionality Reduction" presents a method for visualizing high-dimensional data while maintaining privacy without requiring any shareable reference data. Federated Neighbor Embedding (FEDNE): A framework combining fede...
Rebuttal 1: Rebuttal: > Questions related to privacy concerns We found that major concerns mentioned in the weakness and questions are related to “privacy-preserving”, and these concerns may arise from the “privacy-preserving” term in our paper title. First, we want to apologize for the confusion caused by our title. ...
Summary: The paper presents a new federated learning approach named FEDNE for dimension reduction using contrastive neighbor embedding (NE). The key idea is the introduction of a surrogate loss function that each client learns and shares, which compensates for the lack of inter-client repulsion essential for global ali...
Rebuttal 1: Rebuttal: > How would the parameter k affect the performance of FEDNE? How to set k for different settings? Thank you for the valuable comment. First, we want to reiterate that the value k is used for building local kNN graphs to capture the neighboring data structures. In general, as k increases, we may ...
Summary: This paper addresses the challenge of distributed neighbor embedding (NE) with a focus on privacy protection. To achieve this, the authors extend the concept of federated learning (FL) to NE. However, NE tends to diverge because FL prevents clients from accessing each other's data, leading to inconsistent featur...
Rebuttal 1: Rebuttal: > Communication complexity ... this design results in a communication complexity of $O(N^2)$ … This might be manageable in some cross-silo settings, where only a few clients participate. Thanks for the thoughtful comment. Since each client will receive the surrogate models of all other clients fr...
Rebuttal 1: Rebuttal: We thank the reviewers for all the valuable comments and constructive suggestions. We are glad that the reviewers found that our paper is “well-motivated” and “well-presented” (Reviewer BJv7, 4ZYS, mP1Q), and our approach is “novel” (Reviewer mP1Q, y2ZN). In the following, we want to first reiter...
NeurIPS_2024_submissions_huggingface
2024
null
null
null
null
null
null
null
null
Amortized Bayesian Experimental Design for Decision-Making
Accept (poster)
Summary: This paper proposes a method for decision-aware Bayesian experimental design, where the design is not optimized with respect to the most accurate posterior distribution of the latent parameters but rather with respect to the expected utility gain of the actual (down-stream) decision task. Strengths: This is a...
Rebuttal 1: Rebuttal: Thank you for your positive comments and thoughtful questions. We address your remarks and questions below. > 1. The presentation of p(y_Xi | h_t) between Eq 3 and 4 is partially unclear to me… Thanks for the question. $p(y_\Xi | h_t)$ is a joint distribution and is well-defined as a stochastic ...
Summary: The paper looks at the problem of designing Bayesian optimal experiments taking into account the downstream decision making. At the core is a Transformer Neural Decision Process (TNDP) architecture that is trained to amortise the experimental design process whilst simultaneously inferring the optimal downstrea...
Rebuttal 1: Rebuttal: Thank you for your detailed review and the valuable references you provided. We address your questions and points raised below. **Weaknesses** > 1. My main issue with the paper is the presentation of DUG and EDUG as novel. We greatly appreciate your provided references and insights. We will inc...
Summary: The paper proposes a transformer-based architecture for jointly sampling designs and decisions in Bayesian Experiment Design (BED) using a forward-looking criterion. The latter considers the improvement in maximum expected utility brought about by a new design-outcome pair, where the expectation is taken with ...
Rebuttal 1: Rebuttal: Thank you for your detailed review and the valuable comments. We address your remarks and questions below. **Weaknesses** > 1. Some notational confusion can be avoided… Thanks, we will improve the notations according to your suggestions in the revised paper. Specifically, we will replace $h_t$ w...
Summary: This paper tackles an important problem of designing experiments in a way that directly optimizes downstream decision-making tasks, going beyond just inferring parameters of interest. The authors make several valuable contributions: 1. They introduce the concept of Decision Utility Gain (DUG) to quantify how ...
Rebuttal 1: Rebuttal: Thank you for your positive assessment of our work and the points you raised. In the following we address your questions and points raised. **Weaknesses** > 1. The authors could provide a more rigorous analysis of the properties and characteristics of the TNDP architecture, such as its convergen...
Rebuttal 1: Rebuttal: We thank the reviewers for their thoughtful comments and suggestions. We are glad to see that all reviewers have a positive view of the paper. Specifically, the reviewers agreed on the following strengths of the paper: * **Relevance**: Zp5w: “tackles an important problem”. Ctfm: “relevant and inte...
NeurIPS_2024_submissions_huggingface
2024
null
null
null
null
null
null
null
null
Multi-Label Learning with Stronger Consistency Guarantees
Accept (poster)
Summary: This paper proposes an improved approach to multi-label learning using $\mathcal{H}$-consistency bounds by introducing the multi-label logistic loss to effectively handle label correlations. It extends to various multi-label losses, ensuring Bayes-consistency across diverse settings, and includes efficient gra...
Rebuttal 1: Rebuttal: Thank you for your encouraging review. We will take your suggestions into account when preparing the final version. Below please find responses to specific questions. **Weaknesses: The motivation and background of this paper lack clear logic and hierarchy. It is suggested to first outline the sho...
Summary: The paper explores surrogate losses and algorithms for multi-label learning, focusing on $\mathcal{H}$-consistency bounds. It identifies the limitations of Hamming loss and introduces a new multi-label logistic loss that accounts for label correlations. The study extends this to a broader family of multi-l...
Rebuttal 1: Rebuttal: Thank you for your appreciation of our work. We will take your suggestions into account when preparing the final version. Below please find responses to specific questions. **Weakness 1. In section 4, although the excellent properties of the proposed multi-label logistic loss are proven, providin...
Summary: The authors study surrogate losses and algorithms for multi-label learning via H-consistency bounds and introduce a novel surrogate loss, multi-label logistic loss in this paper. By broadening the H-consistency bounds analyses to more general multi-label losses and extending to multi-label comp-sum losses, the...
Rebuttal 1: Rebuttal: Thank you for your appreciation of our work. We will take your suggestions into account when preparing the final version. Below please find responses to specific questions. **1. I understand that this is a theoretical work, and experiments of empirical evaluations are not its focus. However, addi...
Summary: The paper derives H-consistency bounds for binary-relevance-style surrogate losses, as well as a new surrogate, for multi-label learning problems, showing that the proposed multi-label logistic loss has an upper bound on the Hamming loss that is independent of the number of labels. Strengths: The $H$-consistency bo...
Rebuttal 1: Rebuttal: Thank you for your thoughtful feedback and suggestions on improving the readability. We will take them all into account when preparing the final version. Below, please find our responses to specific questions. **Weaknesses:** **1. The paper does not ...** **Response:** Thank you for your insigh...
null
NeurIPS_2024_submissions_huggingface
2024
null
null
null
null
null
null
null
null
On Differentially Private U Statistics
Accept (poster)
Summary: This paper addresses the problem of estimating U statistics under central differential privacy. U statistics are established minimum variance unbiased estimators for estimable parameters in the form $\mathbb{E} h (X_1, ..., X_k)$, where $h$ is a kernel and for all $i$ $X_i$ is i.i.d. from some underlying distr...
Rebuttal 1: Rebuttal: Thank you for mentioning that our work addresses a notable gap in differential privacy research and for your kind words on its wide applicability. In a revision, we will add pointers to the proofs of all theorems immediately after their statements. **[Re: Connection between [1] and our algorithm]...
Summary: The paper addresses the problem of private estimation of U-statistics. The authors propose a new thresholding-based approach using local Hájek projections to achieve nearly optimal private error in both non-degenerate and degenerate settings. Strengths: 1. The paper provides solid theoretical foundations, inc...
Rebuttal 1: Rebuttal: Thank you for your kind words regarding the solid theoretical foundations and wide applicability of our work. **[Re: Asymptotic distribution]:** To our knowledge, differential privacy results typically focus on finite sample guarantees. _We show under mild conditions on $n,k,$ and $\epsilon$ th...
Summary: This paper introduces a new algorithm for constructing U-statistics under central DP. Compared to the naive method, the proposed estimator exhibits lower variance. The authors also derive a lower bound for private algorithms. Several statistical applications are presented to illustrate the methodology. Streng...
Rebuttal 1: Rebuttal: **[Re: Privacy budget]:** Lemma 3 (line 130) shows that the CoinPress algorithm from [2], adapted to the all-tuples family, is $2\epsilon$-DP. The following argument shows that Algorithm 2 is $10\epsilon$-DP as stated. Corollary 2.4 in [1] shows that for any function $f:\mathcal{X} \to \mathbb{R}$...
Summary: This paper studies differentially private estimation of U-statistics (estimators for such statistics are averages of functions $h$ that depend on a number of i.i.d. samples $X_1,\dots,X_k$). This is a generalization of the commonly studied mean estimation problem where $k=1$ and such estimators with $k>1$ are ...
Rebuttal 1: Rebuttal: Thank you for your valuable feedback and for noting that our estimator based on local Hájek projections (and smooth sensitivity) is technically novel and interesting. **[Re: Comparison of our applications with existing private algorithms]:** The setting we consider, where the probabilities of the...
Rebuttal 1: Rebuttal: We thank the reviewers for their valuable feedback and suggestions. We think we have addressed most of the questions adequately, and summarize our responses here. We will fix all typographical errors and we do not address them here. ### **Connections between [1] and our algorithm (Reviewer vFH3 a...
NeurIPS_2024_submissions_huggingface
2024
null
null
null
null
null
null
null
null
PURE: Prompt Evolution with Graph ODE for Out-of-distribution Fluid Dynamics Modeling
Accept (poster)
Summary: The paper presents a new approach called Prompt Evolution with Graph ODE (PURE) for out-of-distribution fluid dynamics modeling. PURE first learns from historical observations and system parameters in the frequency domain to explore multi-view contextual information, which can efficiently initialize the prompt embedd...
Rebuttal 1: Rebuttal: We are truly grateful for the time you have taken to review our paper, your insightful comments and support. Your positive feedback is incredibly encouraging for us! In the following response, we would like to address your major concern and provide additional clarification. > Q1. The contribution...
Summary: - The paper aims to improve the out-of-distribution (OOD) generalization of fluid dynamics modeling. - Two types of OOD scenarios are targeted: OOD across different systems and OOD within the same system across different timestamps. - The paper proposes a framework named PURE, composed of modules including: ...
Rebuttal 1: Rebuttal: We are truly grateful for the time you have taken to review our paper and your insightful review. Here we address your comments in the following. >Q1. My major concern with the paper is that the OOD challenge in dynamics modeling is not well-formulated. The paper describes the OOD scenario verbal...
Summary: This paper pioneers the connection of prompt learning with dynamical system modeling to address the challenge of out-of-distribution shifts. The proposed PURE method initializes prompt embeddings by learning from historical observations and system parameters. Strengths: 1. The paper is easy to follow. 2. The p...
Rebuttal 1: Rebuttal: We are truly grateful for the time you have taken to review our paper, your insightful comments and support. Your positive feedback is incredibly encouraging for us! In the following response, we would like to address your major concern and provide additional clarification. > Q1. Some results m...
Summary: The paper proposes a graph ODE-based approach for OOD fluid dynamics modeling. PURE aims to learn time-evolving prompts via graph ODE for adaptation of spatio-temporal forecasting models on OOD scenarios. To address temporal distribution shifts, the interpolation of observation sequences is combined into grap...
Rebuttal 1: Rebuttal: We are truly grateful for the time you have taken to review our paper, your insightful comments and support. Your positive feedback is incredibly encouraging for us! In the following response, we would like to address your major concern and provide additional clarification. > Q1. As the method is...
Rebuttal 1: Rebuttal: Dear Reviewers, Thanks for your time and valuable feedback. We acknowledge **three reviewers'** (Reviewer Sh2w, Reviewer MBSt, and Reviewer MFD5) comments that **our work is novel or new**. We acknowledge the positive comments such as "a new approach" (Reviewer Sh2w), "enhance model robustness" (...
NeurIPS_2024_submissions_huggingface
2024
null
null
null
null
null
null
null
null
The Many Faces of Optimal Weak-to-Strong Learning
Accept (poster)
Summary: This paper presents an efficient and simple weak-to-strong learner that has optimal in-expectation error. In weak-to-strong learning, we are given a dataset of $m$ points from a distribution, and a $\gamma$-weak learner that returns hypotheses from a class of VC dimension $d$. AdaBoost, which is a textbook wea...
Rebuttal 1: Rebuttal: We thank you for taking the time to thoroughly assess the article, asking interesting questions, and suggesting concrete improvements. Experiments: As alluded to in the answer to reviewer KvEC, and as you correctly point out, we should have made it more clear that the experiments are very muc...
Summary: This paper introduces a new Boosting algorithm, MAJORITY-OF-29, which achieves provably optimal sample complexity and is remarkably simple to implement. The algorithm partitions the training data into 29 disjoint subsets, applies AdaBoost to each subset, and combines the resulting classifiers through a majorit...
Rebuttal 1: Rebuttal: Regarding the question about experiments: Let us first re-iterate that our main focus in this work is on the theoretical results, which we believe are strong (the paper is also submitted with a primary area of "Learning Theory"). Perhaps we were not clear enough when claiming that our experimenta...
Summary: The authors present a new boosting algorithm: partition training data into 29 pieces of equal size, run AdaBoost on each, and output the majority vote over them. The authors prove that the sample complexity of MajorityVote29 is optimal and its running time is the same order as AdaBoost. Experimental results ar...
null
null
null
Rebuttal 1: Rebuttal: We would like to thank all reviewers for their thoughtful reviews. Let us add one general remark that we will leave to the reviewers whether to include in their evaluation of the submission or not. Recently we found a way to improve the result to the majority of 5 instead of majority of 29. The r...
NeurIPS_2024_submissions_huggingface
2024
null
null
null
null
null
null
null
null
Learning diverse causally emergent representations from time series data
Accept (poster)
Summary: The article proposes a learning scheme aimed at detecting emergent quantities from time series data of systems made of many interacting parts, such as the brain. To this end the authors combine "minimum mutual information", a previously introduced emergence criterion, with SMILE, a differentiable lower bound e...
Rebuttal 1: Rebuttal: Many thanks for the feedback. We apologise for the lack of clarity in some of our explanations. We will improve these and add pseudo-code in the camera-ready version of the paper. Regarding the concern about not having enough examples, as we describe in more detail below and in the global rebutta...
Summary: This paper introduces a method for learning the causally emergent representation of time series data. Based on the Partial Information Decomposition (PID) and ΦID definition of emergent variables, the paper utilizes variational information lower bounds to estimate and optimize the emergence objective function....
Rebuttal 1: Rebuttal: As mentioned in the overall rebuttal, we have now conducted a wider range of evaluations, both using the same architecture on more datasets and using other architectures on the same datasets. * Specifically, we add two more brain activity datasets which capture different aspects of neural dynamic...
Summary: The paper presents a method for identifying emergent variables in time series data through a novel machine learning architecture. It uses unsupervised learning for representation and information theory to find emergent properties in systems, which are often complex and not easily describable at the microscale ...
Rebuttal 1: Rebuttal: In response to the specific weaknesses identified: > It’s not immediately clear how to interpret results. The paper shows figures, but it doesn’t explain them much. Interpreting them requires a lot of re-reading the methods section We apologise for the lack of clarity in our explanations. For th...
Summary: The paper introduces a novel objective function and deep learning architecture that are targeted to extract emergent latent representations from time series data. Motivation is very clear. The definition of emergent latent representation interesting and useful. The utilization of mutual information estimators ...
Rebuttal 1: Rebuttal: As mentioned in the global rebuttal, we have now conducted extensive additional evaluations, including: * Two new real-world datasets and one new synthetic dataset; * New analyses on the existing synthetic dataset with lower correlation coefficients; and * Comparisons against baseline algorithms ...
Rebuttal 1: Rebuttal: We would like to thank all the reviewers for their time and thoughtful comments on our paper. We are encouraged by the positive feedback, and are thankful for the constructive suggestions that have let us identify and address several limitations of our paper. We have added responses to each revie...
NeurIPS_2024_submissions_huggingface
2024
null
null
null
null
null
null
null
null
Vision Model Pre-training on Interleaved Image-Text Data via Latent Compression Learning
Accept (poster)
Summary: This paper introduces a vision backbone pre-training method named Latent Compression Learning (LCL) to utilize interleaved image-text data. The proposed LCL approach maximizes mutual information between the inputs and outputs of a GPT-like model in autoregressive manner. The proposed method integrate both disc...
Rebuttal 1: Rebuttal: Thanks for your good questions and constructive suggestions. ___ **Q1:** In Tab. 5, LCL is on par with CLIP baseline solely with image-text pairs but is significantly better when using the MMC4 dataset. Whether this performance gain is from the increased number of training samples. **A1:** It is ...
Summary: The paper tackles the problem of vision model pre-training. More exactly, it aims to exploit the interleaved image-text data that is very prevalent on the Internet. It proposes Latent Compression Learning that maximises the mutual information between the inputs and outputs of a causal attention model. When vis...
Rebuttal 1: Rebuttal: We thank the reviewer for the reviews and questions. But there is a misunderstanding here. First, as discussed in Fig. 1, we would like to clarify that our proposed LCL aims to pre-train a vision encoder from scratch using interleaved image-text data, rather than incrementally training a multi-mo...
Summary: The paper pre-trains models with a combination of a contrastive image-text objective and a generative language objective. The authors provide many results on image classification and vision-language tasks suggesting the competitiveness of the method in controlled settings. Strengths: S1. The paper is well fra...
Rebuttal 1: Rebuttal: We thank the reviewer for the careful reviews and constructive suggestions. We answer the questions as follows. ___ **Q1: (a)** The proposed objective seems similar to CoCa, which also employs a contrastive loss and a next token prediction loss. Clarify the differences and why the formulation is n...
Summary: This paper aims to explore the use of weak supervision signals in multimodal interleaved image-text data to pretrain visual encoder, compressing the distribution of high-level features into the visual encoder. The paper employs contrastive loss and autoregressive loss to train the model. To prevent the collaps...
Rebuttal 1: Rebuttal: Thanks for your good questions and constructive suggestions. ___ **Q1:** In some cases, the textual context may have little relevance to the image. It is worth investigating whether such data could harm the model's performance. **A1:** This is a very good question. It is inevitable that image-te...
Rebuttal 1: Rebuttal: We thank all the reviewers for the careful reviews and constructive suggestions. We respond to your questions respectively. The PDF contains supplementary figures for rebuttal. Pdf: /pdf/0246ee7812b55b519c2e455843c003d1f5c25cb4.pdf
NeurIPS_2024_submissions_huggingface
2024
null
null
null
null
null
null
null
null
Unveiling the Hidden: Online Vectorized HD Map Construction with Clip-Level Token Interaction and Propagation
Accept (poster)
Summary: This paper aims to improve vectorized HD map construction for autonomous driving. Inspired by the global feature association in traditional offline HD mapping, the proposed MapUnveiler processes input frames in a clip-based manner and hopes to resolve occlusions using information from previous frames. Built up...
Rebuttal 1: Rebuttal: We deeply appreciate your thorough and insightful feedback. We believe we can enhance the paper's quality and present more concrete results based on your comments. Below are point-by-point responses for your comments, and these will be included in the revised paper. ___ **W1-1. Explanation of TTM ...
Summary: The authors propose a new approach for constructing vectorized high-definition maps that exploits temporal information across adjacent input frames. The model, which they call MapUnveiler, operates at the clip-level and consists of an intra-clip unveiler which generates vectorized maps for T frames and an int...
Rebuttal 1: Rebuttal: We are particularly encouraged that the reviewer finds our method novel and well-motivated. And we highly appreciate your constructive comments and suggestions. Below are our responses to each of your queries, and we will include them in the revised paper. ___ **W1. Weave more intuition into the t...
Summary: This work presents a method called MapUnveiler, which aims to improve the construction of vectorized HD maps for autonomous driving. MapUnveiler uses a novel clip-level pipeline to unveil occluded map elements by relating dense image representations with efficient clip tokens and propagating inter-clip informa...
Rebuttal 1: Rebuttal: We thank the reviewer for providing thorough feedback and interesting suggestions. We are grateful for your acknowledgment that the introduction of a clip-level pipeline for vectorized HD map construction is effective and the proposed clip tokens propagate map information efficiently. Below are ou...
Summary: This paper proposes a clip-based vectorized HD map construction paradigm for the processing of long temporal sequence, in which occluded map elements are unveiled explicitly by efficient clip tokens. Through clip token propagation, MapUnveiler achieves effective utilization of long-term temporal map informatio...
Rebuttal 1: Rebuttal: Thank you for providing the insightful and constructive feedback. We appreciate your acknowledgment that the paper is easy to follow and that the proposed approach is effective. Below are our responses to each comment, and we will include all the results and comments in the revised version. ___ **...
Rebuttal 1: Rebuttal: We thank all the reviewers for their constructive and thorough comments. We are particularly excited that all reviewers acknowledged the idea of a clip-level pipeline as reasonable, novel, or effective for online vectorized HD mapping. We believe this rebuttal further enhances the paper through the...
NeurIPS_2024_submissions_huggingface
2024
null
null
null
null
null
null
null
null
Gated Inference Network: Inference and Learning State-Space Models
Accept (poster)
Summary: The paper presents a deep state-space model architecture with non-linear transitions and emissions. The model disentangles the latent representation for the dynamics and the one for the observed data at each time step - allowing therefore effective state estimation at future time steps and the ability to deal ...
Rebuttal 1: Rebuttal: Thank you for the detailed review. We appreciate the time and energy you put into this work. We reply to your questions one by one. --- **q0**: *In Section 4 you use $o^+$ notation... + What is s in line 219 ...* **a0**: Thank you for your question regarding the $o^+$ and $s$ notations used in...
Summary: The paper introduces a very well theoretically motivated State-Space Model learning approach, which is implemented by a gated inference network. The network implements a Hammerstein-Wiener model within a modularized deep learning architecture. It uses GRU cells to mimic Kalman Filtering operations. Forward as ...
Rebuttal 1: Rebuttal: Thank you for your valuable feedback and time. We have answered your questions one by one and revised the draft to address your concerns. --- **q0**: *The theorems 3 and 4 are not really ...* **a0**: We appreciate your concern regarding the experimental evaluation of Theorems 3 and 4. Given th...
Summary: This paper advances temporal reasoning in dynamic, high-dimensional, noisy environments by introducing a novel architecture for latent variable state space models. The architecture permits efficient Bayesian inference with nonlinear transitions and emissions. Experiments are performed on toy datasets and a sim...
Rebuttal 1: Rebuttal: Thank you for your insightful feedback and the time you dedicated. We have responded to your questions individually and revised the draft to address your concerns. --- **q0**: *I think one thing that could really strengthen...* **a0**: We performed our evaluations and experiments in accordance ...
null
null
Rebuttal 1: Rebuttal: We appreciate the reviewers for their detailed comments and questions. Our rebuttal response is mainly organized into three sections. 1. To address the concern of one of the reviewers regarding the sufficiency and complexity of our experiments, we created a table listing the most relevant studies...
NeurIPS_2024_submissions_huggingface
2024
null
null
null
null
null
null
null
null
Guiding Neural Collapse: Optimising Towards the Nearest Simplex Equiangular Tight Frame
Accept (poster)
Summary: The paper uses Riemannian optimization to guide the final layer weights (the linear classifier) toward the nearest simplex ETF orientation. In particular, consider the two common approaches of training a deep classifier network: 1. The standard training strategy where the final layer weights are updated by ba...
Rebuttal 1: Rebuttal: Weaknesses: Regarding the computational and memory costs of our approach, please refer to our general response. Avoid backward pass: Theory suggests that incorporating the DDN layer's backward pass should provide additional gradient information for updating the features’ parameters in the back...
Summary: This paper proposed a novel algorithm for neural network training. The algorithm is motivated by the recent discovery on the neural collapse phenomenon, which demonstrates that the last layer of neural network classifier will converge to a specific structure named simplex ETF. The authors propose to guide the ...
Rebuttal 1: Rebuttal: Weakness 1 with question 1 & 2: Eq. 8 is optimised as a bi-level optimisation problem. At each gradient update step, we first solve the inner optimisation problem to obtain the nearest ETF solution from the Riemannian problem. This gives the classifier weights directly. Subsequently, we perform t...
Summary: One of the key aspects of neural collapse (NC) is that the penultimate class feature means form a simplex Equiangular Tight Frame (ETF). The main idea of this paper is to leverage this insight and improve training by further encouraging this property during training. The authors suggest doing this by solving a...
Rebuttal 1: Rebuttal: Weaknesses: Misleading results: The reviewer correctly observed that, particularly on smaller datasets, the performance converges to be approximately equivalent by the end of training. Any observed deviations are likely due to random effects. This is to be expected, as all properly trained ETF so...
Summary: This paper presents a novel approach to utilizing ETF geometry. Instead of fixing the weights or making them learnable, their approach dynamically adjusts the weights by solving a Riemannian optimization problem while allowing end-to-end training. They show that their approach outperforms both the fixed and le...
Rebuttal 1: Rebuttal: Overhead Cost: Please refer to the tables in our general response and the discussion around computational concerns. Standard Procedure: Our new experiments show that the standard method, which excludes feature and weight normalisation, achieves similar performance (found in the attached pdf). How...
Rebuttal 1: Rebuttal: We thank all the reviewers for their time and thoughtful feedback. Here we address the common question regarding the computational costs of our method, and we will address individual comments for each reviewer separately. Please refer to Table 1 for computational cost and Table 2 for memory cost...
NeurIPS_2024_submissions_huggingface
2024
null
null
null
null
null
null
null
null
Provably Efficient Reinforcement Learning with Multinomial Logit Function Approximation
Accept (poster)
Summary: This paper considers MDPs employing the MNL function for transition probability, following Hwang and Oh [2023]. The authors suggest efficient algorithms based on online Newton steps, inspired by [Hazan et al., 2014; Zhang et al., 2016; Oh and Iyengar, 2021]. Furthermore, to improve $\kappa$ dependency, they pr...
Rebuttal 1: Rebuttal: Thank you for your comment. We will address your questions and clarify any misunderstandings, which may be due to our inadequate emphasis on the technical contributions in the presentation. We will improve the clarity in the revised version. --- **Q1:** "Their suggested algorithms do not seem no...
Summary: In this paper, the author analyzes a Markov Decision Process (MDP) model with non-linear function approximation. Specifically, in the finite-time horizon inhomogeneous episodic MDPs setting, the transition dynamics are unknown but the reward function is known. The author proposes using a multinomial logit (MNL...
Rebuttal 1: Rebuttal: Thank you for your helpful feedback. We will address your questions below. --- **Q1:** "Although this paper focuses on reducing the computation complexity, I am curious about the sample complexity of UCRL-MNL-OL." **A1:** Thanks for your question. This work not only reduces computational comple...
Summary: This work studies the MNL function approximation inhomogeneous RL, achieves the $O(1)$ computation cost, and improves the regret guarantee with regard to $\kappa$. To improve the computation cost, this work employs the online Newton step instead of MLE estimation to estimate $\theta$. Then, they design a novel...
Rebuttal 1: Rebuttal: Thanks for your insightful review. We will address your questions below. --- **Q1:** "Oh & Iyengar (2021) also use the ONS to improve the computation cost..." **A1:** As mentioned in Line 184, the parameter estimation of UCRL-MNL-OL algorithm is inspired by the work of Oh & Iyengar (2021). Howe...
Summary: The problem considered in this paper is online learning in MDPs where transition probabilities are modelled with a log-linear model (with "multinomial logit function approximation"). The finite horizon, time-inhomogenous setting is considered. The problem is motivated by allowing a nonlinear transformation in ...
Rebuttal 1: Rebuttal: Thanks for your constructive review. Below, we will address your main questions, especially regarding the dependence on $U$ (see A1-a,b,c), technical challenges (see A2), the difference to prior work (see A3), and presentation issues (see A4). --- **Q1-a:** "The regret and compute cost depend on...
null
NeurIPS_2024_submissions_huggingface
2024
Summary: The paper studies the recently proposed MDPs that use multinomial logit function approximation for state distribution validity. The results and algorithms improve the prior work of Hwang and Oh [2023] in multiple aspects, including computation efficiency, storage, and statistical dependence on the problem-dep...
Rebuttal 1: Rebuttal: Thanks for your constructive review. We will address your concerns below. --- **Q1:** "The primary high-level techniques and tools (seem to) come from existing works and relevant fields..." **A1:** While similar ideas have been explored in the bandit setting, there are several unique challenges...
null
null
null
null
null
null
XMask3D: Cross-modal Mask Reasoning for Open Vocabulary 3D Semantic Segmentation
Accept (poster)
Summary: This paper introduces XMask3D, a framework developed for open vocabulary 3D semantic segmentation. They propose the integration of the denoising UNet, derived from a pre-trained diffusion model, to generate geometry-aware segmentation masks conditioned on learnable implicit 3D embeddings. These binary 2D masks...
Rebuttal 1: Rebuttal: Thanks for your careful review and constructive comments! Hopefully the following response will address your concerns. ### **1. Paper organization.** > The organization should be improved ... Thanks for your advice! We will reorganize the first two sections in our revised paper. ### **2. Clari...
Summary: The paper proposes a precise and consistent mask-level alignment between 3D features and the 2D-text embedding space through a method called cross-modal mask reasoning. The proposed XMask3D model includes a 3D branch for capturing geometric features, a 2D branch for generating vision-language aligned masks, an...
Rebuttal 1: Rebuttal: Thanks for your careful review and constructive comments! Hopefully the following response will address your concerns. ### **1. Results comparison.** > The authors don't compare with state-of-the-art 3D semantic segmentation OV3D. Thanks for your suggestion! We will include this outstanding met...
Summary: The paper addresses the limitations of current open vocabulary 3D semantic segmentation methods, which primarily focus on creating a unified feature space for 3D, 2D, and textual modalities but struggle with fine-grained segmentation boundaries. To overcome these limitations, the authors propose XMask3D, a cro...
Rebuttal 1: Rebuttal: Thanks for your careful review and constructive comments! Hopefully the following response will address your concerns. ### **1. Application of XMask3D on other scenarios.** > The paper evaluates the proposed method on a limited set of benchmarks (ScanNet20, ScanNet200, S3DIS), all of which are i...
Summary: This paper addresses the challenge of open-vocabulary 3D semantic segmentation by utilizing 3D geometric features, 2D semantic embeddings, and text modality. The proposed approach adapts the ODISE method to the 3D domain, aiming to distill open-vocabulary semantic segmentation knowledge from a pre-trained text...
Rebuttal 1: Rebuttal: Thanks for your careful review and constructive comments! Hopefully the following response will address your concerns. ### **1. About the less satisfactory results of classes that cover large areas.** > While the method exhibits superior performance w.r.t. competing methods, it seems that the out...
Rebuttal 1: Rebuttal: We would like to thank all the reviewers for taking the time to review our submission and provide constructive feedback on our work. We are encouraged by the consensus among reviewers regarding the strengths of our approach, which aligns with our intentions and efforts: 1. **Novelty and Significa...
NeurIPS_2024_submissions_huggingface
2024
null
null
null
null
null
null
null
null
DiffusionPDE: Generative PDE-Solving under Partial Observation
Accept (poster)
Summary: This paper introduces diffusion methods to tackle the partially observed PDEs, named DiffusionPDE. By learning the joint distribution of solution and coefficient space, the proposed model can handle both forward and inverse problems. The authors experiment with diverse PDEs and settings to demonstrate the mode...
Rebuttal 1: Rebuttal: Thank you for your thoughtful comments! We are happy that you find our paper provides diverse experiments and is well-written. > **“The technical contribution is limited.”** Please refer to the common response above. > **“Some powerful baselines are missing.”** 1. U-Net: We trained a U-Net mod...
Summary: The paper proposes to solve PDEs given only sparse measurements by jointly modeling the solution and coefficient space (e.g. the initial conditions) using a diffusion model. By applying diffusion posterior sampling (DPS) the authors obtain samples that are consistent with the sparse measurements and the underl...
Rebuttal 1: Rebuttal: Thank you for agreeing with us that we consider an important problem and propose a technically sound method. > **“The main weakness of the method is the limited novelty”** Please see our common response above. > **“The experiments do not take into account any stochasticity or uncertainty”** On...
Summary: The work uses a guided diffusion process to solve the PDE forward and inverse problems with partial observations. Instead of learning the parameter-to-solution map ($a\rightarrow u$) as in Neural Operators, the method learns the diffusion process on the joint distribution $(a,u)$, and use guided diffusion for ...
Rebuttal 1: Rebuttal: Thank you for the detailed and constructive review! > **“The method’s success may heavily depend on the strong regularization from the training dataset.”** We use the same data generation methods as other studies [1, 2, 3], ensuring fair comparisons. The process does not favor any specific subse...
Summary: The paper uses score based generative diffusion models to find the forward and backwards solution of a set of PDEs given partial observations of the solution and/or incomplete knowledge of the coefficients. The method performs well, and outperforms other ML methods such as FNO, as well as 'standard' FE type m...
Rebuttal 1: Rebuttal: Thank you for your positive comments on our work! We feel much encouraged that you recognize the novelty of our work. > **a fairer comparison with other methods that work with incomplete data and measurements** In addition to GraphPDE, we further compare our method with OFormer [1], Shu et al....
Rebuttal 1: Rebuttal: We would like to thank all reviewers for their constructive feedback! We will first clarify common concerns from the reviewers. > **The method drops time derivatives and cannot solve for full time intervals (Reviewer nBsW, AaET)** Our method can in theory support time derivatives and solve for f...
NeurIPS_2024_submissions_huggingface
2024
null
null
null
null
null
null
null
null
ReLIZO: Sample Reusable Linear Interpolation-based Zeroth-order Optimization
Accept (poster)
Summary: From my understanding, this paper gives a zeroth-order algorithm with applications to popular vision tasks: neural architecture search and black-box adversarial attacks. The authors derive a closed-form solution after modeling the gradient estimation as a quadratically constrained linear program problem. The key...
Rebuttal 1: Rebuttal: Thank you for your careful and valuable comments. We hope to address your concerns by answering your questions below. **Q1:** As noted by the authors, the method imposes few constraints on sample size, similar to smoothing techniques, but requires gradient estimation through solving a linear pro...
Summary: The paper introduces ReLIZO, a novel zeroth-order optimization method leveraging linear interpolation to estimate gradients efficiently. It reduces the complexity of gradient estimation by reusing prior queries without additional conditions on sample size, decoupling it from variable dimension constraints. ReL...
Rebuttal 1: Rebuttal: Thank you for your careful and valuable comments. We hope to address your concerns by answering your questions below. **Q1:** The effectiveness of reusing queries depends on the choice of the reusable distance bound, which might require fine-tuning for different applications, adding complexity to...
Summary: This study introduces a novel gradient estimation algorithm that operates solely on forward function evaluations. The method employs a Quadratically Constrained Linear Program (QCLP) to determine the optimal linear approximation of sample vectors. The authors present performance enhancement strategies, includi...
Rebuttal 1: Rebuttal: Thank you for your careful and valuable comments. We hope to address your concerns by answering your questions below. **Q1:** Zeroth-order gradient estimation has a relatively limited impact. This limitation is exemplified in the NAS evaluation, where ReLIZO does not consistently achieve optimal ...
null
null
Rebuttal 1: Rebuttal: ## General Responses Dear Area Chair and Reviewers, We sincerely thank you for the time and effort you dedicated to the reviewing process. We are delighted that reviewers acknowledged the novelty of rethinking the gradient estimation in the ZO method as a QCLP and our reusing strategy (YYyS, VWEi,...
NeurIPS_2024_submissions_huggingface
2024
null
null
null
null
null
null
null
null
Changing the Training Data Distribution to Reduce Simplicity Bias Improves In-distribution Generalization
Accept (poster)
Summary: It is known that deep neural networks will usually learn “easy examples" that contain fast-learnable features first, while learning more complex examples later. The authors argue that mitigating such simplicity bias is the reason methods like SAM outperform SGD. Based on such analysis, the auth...
Rebuttal 1: Rebuttal: We thank the reviewer for the valuable feedback, and acknowledging our well-motivated work and our comprehensive experiments and ablations. 1. When and why one should choose the last output activation vector to define the clustering instead of intermediate activation vector? - Our theorems 3.2 an...
Summary: This work aims to modify the training data distribution to improve in-distribution generalization. First, the authors theoretically analyse a 2-layer CNN and compare the feature learning dynamics (fast learnable and slow-learnable features) of Gradient Descent (GD) and Sharpness-Aware Minimization (SAM). It is...
Rebuttal 1: Rebuttal: We thank the reviewer for their valuable feedback and recognizing the originality and novelty of our work, and our comprehensive experiments. 1. Our work (ID) vs [1-5] (OOD). - As the reviewers correctly mentioned and we discussed in our [general comment](https://openreview.net/forum?id=yySpldUsU...
Summary: - Proves for a 2-layer CNN with fixed second layer weights trained on a toy dataset, SAM learns slow-learnable and fast-learnable features more uniformly in the early epochs compared to SGD - Based on this analysis, proposes a simple clustering-based upsampling strategy for reducing simplicity bias / excessi...
Rebuttal 1: Rebuttal: Thanks for your feedback! 1. Comparison with simplicity bias baselines - Prior work, including papers referred to by the reviewer, showed the benefits of reducing the simplicity bias to **out-of-distribution (OOD), where there is a shift between training and test distribution** (c.f. [general com...
Summary: This paper proposes an algorithm for changing the distribution of training data to improve the generalization of the model on origin data distribution. The paper is inspired by Sharpness Aware Minimization, which aims at finding a flat minimum meaning that it has a good generalization capability. This paper di...
Rebuttal 1: Rebuttal: We thank the reviewer for the valuable feedback and acknowledging our theoretical results and comprehensive experiments. We discuss the questions below. 1. **USEFUL vs resampling methods for long-tail data & example difficulty.** - As discussed in the [general comment](https://openreview.net/for...
Rebuttal 1: Rebuttal: We thank the reviewers for their valuable feedback and recognizing the originality of our work. We’d like to first briefly emphasize the scope and contribution of our work: - Our work shows, for the first time, that reducing the simplicity bias benefits **in-distribution (ID)**. Previously, the b...
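The clustering-based upsampling strategy described in these reviews can be illustrated with a toy sketch: cluster one class's output activations into two groups, then duplicate the smaller (presumed slow-learnable) group. This is not the paper's implementation — the 2-means routine, the deterministic initialization, and the single-duplication factor are illustrative assumptions.

```python
import numpy as np

def upsample_slow_cluster(activations, iters=20):
    """Toy sketch of a clustering-based upsampling step (illustrative only).

    Cluster the activation vectors of one class into two groups with 2-means,
    then return sample indices with the smaller (presumed slow-learnable)
    cluster duplicated once.
    """
    X = np.asarray(activations, dtype=float)
    centers = np.stack([X.min(axis=0), X.max(axis=0)])  # deterministic init
    for _ in range(iters):
        # assign each sample to its nearest center, then recompute centers
        assign = ((X[:, None] - centers[None]) ** 2).sum(-1).argmin(axis=1)
        for k in range(2):
            if (assign == k).any():
                centers[k] = X[assign == k].mean(axis=0)
    minority = np.bincount(assign, minlength=2).argmin()
    extra = np.flatnonzero(assign == minority)
    # original indices plus one extra copy of the minority cluster
    return np.concatenate([np.arange(len(X)), extra])
```

On activations with two well-separated groups, the returned index list contains every sample once plus a second copy of the smaller group.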
NeurIPS_2024_submissions_huggingface
2024
null
null
null
null
null
null
null
null
REBEL: Reinforcement Learning via Regressing Relative Rewards
Accept (poster)
Summary: This paper reduces the complex policy optimization procedure of alignment to a simple regression objective, using the relation between the optimal policy and the reward. The paper conducts a detailed theoretical analysis revealing the relation between the proposed algorithm *REBEL* and *NPG/MD*. Comprehensive experime...
Rebuttal 1: Rebuttal: Thank you for your valuable review of our paper. > The statement "REBEL ... be extended to handle intransitive preferences ...." in the abstract is not adequately presented in the main content of the paper. As the major influence brought by intransitive preferences is the degradation of reward s...
Summary: This paper proposes the REBEL algorithm that reduces policy optimization to iteratively solving squared loss regression problems on the difference in rewards between trajectories, based on DPO's analysis. The paper transforms the resulting equation for r(x, y) presented in DPO to a regression loss function, an...
Rebuttal 1: Rebuttal: Thank you for your constructive feedback. We address each of your points below. > Insufficient experimental validation and limited baseline comparisons > Performance comparison with baselines Our experimental section is comprehensive compared to previous works on RLHF [1, 2], incorporating a ge...
Summary: This work presents REBEL, a minimalist reinforcement learning algorithm that does policy optimization by solving a sequence of regression problems using relative rewards as targets. Theoretical analysis shows that Natural Policy Gradient (NPG) is a variant of REBEL, and thus theoretical guarantees for NPG can ...
Rebuttal 1: Rebuttal: Thank you for your valuable review of our paper. We respond to your individual questions below. > I believe that at least a brief section on related work should be included in the main paper, the in-depth one can be deferred to the appendix. In terms of space, I personally do not think Section 2....
Summary: The authors present REBEL, a method for solving contextual bandit problems (such as the alignment of language models) via regressing relative rewards. They first derive their objective by demonstrating that the use of paired responses means that you can get rid of the partition function, which is impossible to...
Rebuttal 1: Rebuttal: Thank you for your encouraging review and comments. We respond to your individual questions below. > Do the authors have any idea why REBEL seems to have a slightly higher KL than the other methods? The KL divergence is generally close across methods. For the TL;DR experiments, following previou...
Rebuttal 1: Rebuttal: We thank all the reviewers for their time and insightful comments, which have significantly improved our paper. We are pleased that the reviewers appreciated our algorithm's simplicity, the detailed theoretical connections to prior methods, and the thorough empirical results. We summarize the main...
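The reduction the reviews describe — regressing the relative reward of a paired response onto the scaled difference of log-probability ratios, with the partition function cancelling out across the pair — can be sketched as a squared loss. The batch layout and the placement of the scaling parameter η are assumptions drawn from the summaries, not the paper's code.

```python
import numpy as np

def rebel_loss(logp_new, logp_old, rewards, eta=1.0):
    """Squared-loss REBEL-style objective on a batch of response pairs.

    logp_new, logp_old: (batch, 2) log-probabilities of the two responses
    under the current and previous policy. rewards: (batch, 2) rewards.
    """
    # Implicit reward difference predicted by the log-ratio difference;
    # the partition function cancels because both responses share x.
    pred = ((logp_new[:, 0] - logp_old[:, 0])
            - (logp_new[:, 1] - logp_old[:, 1])) / eta
    # Regression target: observed relative reward of the pair.
    target = rewards[:, 0] - rewards[:, 1]
    return float(np.mean((pred - target) ** 2))
```

When the policy's log-ratio difference equals η times the reward difference, the loss is zero; otherwise it penalizes the squared gap.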
NeurIPS_2024_submissions_huggingface
2024
null
null
null
null
null
null
null
null
AV-Cloud: Spatial Audio Rendering Through Audio-Visual Cloud Splatting
Accept (poster)
Summary: The paper proposes AV-Cloud, a framework for high-quality spatial audio rendering in 3D scenes without relying on visual cues. AV-Cloud addresses issues in current audio-visual rendering methods, such as audio lag and dependence on visual rendering quality, by introducing Audio-Visual Anchors and the Audio-Vis...
Rebuttal 1: Rebuttal: We thank the reviewer for a thoughtful review, valuable feedback, and recognizing the innovative use of Audio-Visual Anchors and Cloud Splatting, comprehensive experimentation, and clear presentation. We address the questions and specify the intended revisions below. **W1: AVCS Explanation** In...
Summary: A novel approach for rendering high-quality spatial audio in 3D scenes, called AV-Cloud, is proposed. This method synchronizes with the visual stream without relying on or being explicitly conditioned by visual rendering, enabling immersive virtual tourism through real-time dynamic navigation of both audio and...
Rebuttal 1: Rebuttal: We thank the reviewer for a thoughtful review, valuable feedback and recognizing the novelty of AV Anchors for 3D audio-visual scene reconstruction. **W1 & Q1 Difference between AVCS and Q-former** While AVCS and Q-former are transformer-based structures, they serve different purposes and utiliz...
Summary: The paper explores the problem of generating 3D audiovisual scenes – that is, generating 3D scenes with spatial audio. The proposed approach, AV Cloud, uses anchor points obtained from Structure-from-Motion (SfM) points. The anchors are then used with an AV Cloud splatting module which decodes the visuals and ...
Rebuttal 1: Rebuttal: We thank the reviewer for a thoughtful review, valuable feedback and recognizing the importance of 3D audio visual scene synthesis and our contribution of proposing the novel parallel pipeline for audio and visual rendering. We address the questions and specify the intended revisions below. **W1 ...
null
null
Rebuttal 1: Rebuttal: We thank the reviewers for their thoughtful reviews and valuable feedback. In this general section, we wanted to provide a more **detailed explanation of the Audio-Visual Cloud Splatting (AVCS) module**, as several reviewers have suggested. The AVCS module is one of the key contributions of our ...
NeurIPS_2024_submissions_huggingface
2024
null
null
null
null
null
null
null
null
Towards Multi-Domain Learning for Generalizable Video Anomaly Detection
Accept (poster)
Summary: This work proposes a new task named Multi-Domain Learning Video Anomaly Detection, which aims to learn a general VAD model across domains. The work finds that abnormal conflict is a critical challenge in the task. Then, the work establishes a new benchmark, designs an effective baseline and conducts extensive e...
Rebuttal 1: Rebuttal: # [W1] AC Classifier Thank you for highlighting this important aspect. **Role of the AC Classifier:** By training with the AC Classifier, the domain-agnostic layer learns Conflict-Aware features, which helps in resolving conflicts. To achieve a general VAD model through multiple domain learning ...
Summary: In this paper, the authors propose a new task called Multiple Domain VAD (MDVAD), along with a benchmark and new evaluation protocols. The authors' goal is to construct a general VAD model by conducting multi-domain learning while recognizing abnormal conflicts and exploring representations of general normality and a...
Rebuttal 1: Rebuttal: # [W1-1] when adding a new dataset Thank you for highlighting this important consideration. Our framework consists of domain-agnostic layers and domain-specific heads, with each head being the final layer of the entire model, $W_{D_d}\in \mathbb{R}^{T\times 1}$ where $T=128$, which is a very small...
Summary: The manuscript addresses the limitations of existing Video Anomaly Detection (VAD) models that are confined to single-domain learning. The primary contribution of the paper is the introduction of a new task called Multi-Domain Learning for VAD (MDVAD), which aims to develop a general model capable of identifyi...
Rebuttal 1: Rebuttal: # [W1, W2, Q1] Complexity and computational cost In our proposed framework, only the final layer, $W_{D_d}\in \mathbb{R}^{T\times 1}$ where $T=128$, corresponds to the head and is added based on M number of datasets ($T \times M$). This constitutes a very small parameter and computational load co...
Summary: This paper proposes a new task called MDVAD, the goal of which is to effectively learn from multiple domains with different data distributions and definitions of abnormality without confusion, resulting in a general VAD model. To achieve this, the authors expand the traditional single-head framework to multipl...
Rebuttal 1: Rebuttal: # [W1, Q1] Practical scenarios of MDVAD **MDVAD’s practical relevance** In real-world scenarios, performance degradation due to domain shift is a persistent issue for deep learning models. Consequently, various tasks have seen the introduction of domain adaptation and generalization methods. As r...
Rebuttal 1: Rebuttal: We thank the reviewers for their thoughtful feedback. We tried to address all the questions with references to weaknesses (**W**) and questions (**Q**). We are glad the reviewers found that * Addressing an Important and Generalizable Problem * Novelty of Method and Effectively Solving Identified...
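The rebuttals' parameter-count argument — each domain-specific head is a single T × 1 linear layer (T = 128) on top of shared domain-agnostic layers, so adding a dataset is cheap — can be illustrated with a minimal sketch. The backbone here is a stand-in random projection; the class and method names are hypothetical, not the paper's API.

```python
import numpy as np

class MultiDomainVAD:
    """Shared domain-agnostic features + one tiny linear head per domain.

    Illustrates the rebuttals' argument: each head is a single T x 1 weight
    vector (T = 128 by default), so a new domain adds only T parameters.
    The "backbone" is a fixed random projection standing in for the real
    feature extractor.
    """

    def __init__(self, in_dim, n_domains, T=128, seed=0):
        rng = np.random.default_rng(seed)
        self.backbone = rng.standard_normal((in_dim, T)) / np.sqrt(in_dim)
        self.heads = [rng.standard_normal((T, 1)) * 0.01 for _ in range(n_domains)]

    def score(self, x, domain):
        feats = np.tanh(x @ self.backbone)   # domain-agnostic features
        logits = feats @ self.heads[domain]  # domain-specific head
        return 1.0 / (1.0 + np.exp(-logits))  # anomaly score in (0, 1)

    def add_domain(self):
        # Adding a dataset adds only one T x 1 head.
        self.heads.append(np.zeros_like(self.heads[0]))
        return len(self.heads) - 1
```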
NeurIPS_2024_submissions_huggingface
2024
null
null
null
null
null
null
null
null
Personalized Federated Learning with Mixture of Models for Adaptive Prediction and Model Fine-Tuning
Accept (poster)
Summary: This paper introduces a personalized federated learning algorithm to address the challenges of real-time predictions in non-stationary environments. Clients fine-tune models online, combining their locally fine-tuned models with multiple federated models learned over time. This approach ensures efficient adapt...
Rebuttal 1: Rebuttal: Thank you very much for taking the time to review our paper and providing your valuable comments. Please find below our responses to your comments and questions. We will revise the presentation of the contributions section in the introduction by breaking down the last paragraph into itemized poin...
Summary: This paper proposes a novel personalized federated learning algorithm, Fed-POE, which is designed for adaptive prediction and model fine-tuning in dynamic environments. It addresses the challenge of real-time predictions on streaming data by constructing a personalized model that combines a locally fine-tuned ...
Rebuttal 1: Rebuttal: Thank you very much for taking the time to review our paper and providing your valuable comments. Please find below our responses to your comments and questions. We would like to briefly review the main contributions of this paper, which we believe are of interest and utility to the community. We...
Summary: The paper introduces an interesting perspective on the role of ensembles of models in federated learning. The provocative claim is that federated learning is not always better than locally trained models. This is contextualized in the setting of non-IID data and time-varying data-generating processes...
Rebuttal 1: Rebuttal: Thank you very much for taking the time to review our paper and providing your valuable comments. Please find below our responses to your comments and questions. ## Relations between Federated Models and Local Models The federated model can differ significantly from the local models, especially w...
Summary: This paper introduces Fed-POE, a novel personalized federated learning algorithm tailored for online prediction and model fine-tuning. Fed-POE creates an ensemble by integrating local models with those periodically contributed by the server over time. Theoretical analysis confirms that Fed-POE attains sublinea...
Rebuttal 1: Rebuttal: Thank you very much for taking the time to review our paper and providing your valuable comments. Please find below our response to your review. The main advantage of the proposed Fed-POE compared to the straightforward online gradient descent approach is its ability to provide sublinear regret u...
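Fed-POE's exact mixture rule is not shown in these excerpts; a generic exponentially weighted (Hedge-style) ensemble — the standard construction behind sublinear-regret guarantees for mixing a local model with federated models received over time — can be sketched as follows. The function name and η default are assumptions.

```python
import numpy as np

def hedge_predict(preds, cum_losses, eta=1.0):
    """Exponentially weighted mixture over a pool of models (Hedge-style).

    preds:      (m,) current predictions of the m pool models
                (e.g. a local model plus several federated models).
    cum_losses: (m,) cumulative loss incurred so far by each model.
    """
    # Subtract the min loss before exponentiating for numerical stability.
    w = np.exp(-eta * (cum_losses - cum_losses.min()))
    w /= w.sum()
    return float(w @ preds)
```

Models with smaller cumulative loss dominate the mixture, so the ensemble tracks whichever pool member (local or federated) performs best on the client's stream.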
null
NeurIPS_2024_submissions_huggingface
2024
null
null
null
null
null
null
null
null
MutaPLM: Protein Language Modeling for Mutation Explanation and Engineering
Accept (poster)
Summary: The paper presents MutaPLM, a framework designed to interpret and navigate protein mutations using protein language models. This approach utilizes a protein delta network to capture mutation representations and employs a transfer learning pipeline with a chain-of-thought strategy to leverage knowledge from bio...
Rebuttal 1: Rebuttal: Thank you for your appreciation of our model, dataset, and code. We address your concerns below. > Q1: Details of PLM representations and *delta* features As detailed in Appendix A.1, **the PLM representations used in this study are residue-level embeddings**. Regarding the scale of $h_{\Delta}...
Summary: In the paper entitled "MutaPLM: Protein Language Modeling for Mutation Explanation and Engineering," the authors proposed multimodal protein-textual language models for understanding the effect of mutation and performing protein engineering. They also build MutaDescribe, the first large-scale protein mutation ...
Rebuttal 1: Rebuttal: Thank you for your positive comments on our presentation, dataset, methodology, and application values. We address your concerns and answer your questions below. > Q1: Misleading statement of PLMs in mutation explanation and engineering We apologize for this misleading statement in our abstract....
Summary: The paper proposes a framework to 1). generate text-based mutation effects for mutated proteins and 2). propose new mutated sequences based on the function descriptions. The main module is an encoder-decoder network, which encodes the representations of mutated sequences and outputs the position and amino acid...
Rebuttal 1: Rebuttal: Thank you for your appreciation of our task, methodology, and writing. We address your concerns in evaluation as follows. > Q1: Additional supervised baselines. We have added supervised baselines, including fine-tuned PLMs, for both mutation explanation and engineering. Please refer to our globa...
null
null
Rebuttal 1: Rebuttal: We extend our gratitude to all reviewers for their positive comments and constructive feedback. We hope that our responses and additional experiments could address the shared concerns satisfactorily. > (R1) Additional supervised baselines While no prior work is specifically designed for text-bas...
NeurIPS_2024_submissions_huggingface
2024
null
null
null
null
null
null
null
null
GTBench: Uncovering the Strategic Reasoning Capabilities of LLMs via Game-Theoretic Evaluations
Accept (poster)
Summary: This paper evaluates the strategic reasoning abilities of LLMs. To this end, 10 games are chosen for LLMs to solve. The paper takes various open- and closed-source LLMs into consideration and builds a benchmark for easy evaluation. Strengths: Evaluating the strategic reasoning is ...
Rebuttal 1: Rebuttal: We thank you for your valuable and insightful comments! >Q1: Does the evaluation really evaluate the strategic reasoning? Basically, the evaluation lets the LLM play as one of the players in the game. However, this is much like a decision-making problem, especially when the opponent is als...
Summary: This paper proposes a benchmark for evaluating the strategic reasoning of LLMs. The benchmark includes ten games of various types. The authors use these games to conduct competitive experiments between LLMs and traditional methods, as well as LLM-vs.-LLM. The paper then analyzes the experimental results and mo...
Rebuttal 1: Rebuttal: We thank you for your valuable and insightful comments. >Q1: In Section 4.1, why does the tree-like prompting strategy ToT still lag significantly behind MCTS? There are two potential reasons: 1. **Exploration Space**: MCTS has a significantly larger exploration space compared to ToT. In our e...
Summary: The paper proposes a benchmark to understand the strategic reasoning capabilities of llms. The authors present a suite of game theoretic tasks with different structures to do this. They use different evaluation metrics like ELOs and Relative advantage to compare different llms and prompting methods. Strengths...
Rebuttal 1: Rebuttal: We thank you for your valuable and insightful comments! >W1: Characterizing human performance would strengthen the paper We provide a preliminary human opponent evaluation. Specifically, we selected 5 games from GTBench and organized matches with 5 graduate students. These participants are famil...
Summary: This paper introduces GTBench, a set of 10 different games to test how well large language models can think strategically. The authors found that while LLMs struggle with complete-information, deterministic games like Tic-Tac-Toe and Connect-4, they perform better in incomplete-information, uncertain games like poker and negotiation...
Rebuttal 1: Rebuttal: We thank you for your valuable and insightful comments! >W1: The paper claims that measuring strategic reasoning capabilities with games is missing in existing benchmarks. However, there are other benchmarks, such as MAgIC released last year. In Line 119, we meant to convey that some of the game...
Rebuttal 1: Rebuttal: ## General Response We appreciate all the valuable comments from the reviewers. We are pleased to know that our work is considered meaningful (Reviewer **VyBq**, **L1Yq**), valuable (Reviewer **VyBq**), comprehensive (Reviewer **VyBq**, **PA1v**), and insightful (Reviewer **PA1v**, **HsvL**). He...
NeurIPS_2024_submissions_huggingface
2024
null
null
null
null
null
null
null
null
Federated Ensemble-Directed Offline Reinforcement Learning
Accept (poster)
Summary: This paper proposes the Federated Ensemble-Directed Offline Reinforcement Learning Algorithm. The combination of offline RL and federated learning is interesting for addressing the training-data insufficiency issue caused by small pre-collected datasets. Strengths: The originality of this paper is relatively good...
Rebuttal 1: Rebuttal: We thank the reviewer for their valuable feedback. We are delighted to know that the reviewer finds our work original, our problem significant, and our paper well written. Below, we address the reviewer's concerns and hope they will consider increasing their score. *1. Some technical details nee...
Summary: The authors identify fundamental challenges for Federated Offline Reinforcement Learning and present Fedora, an approach that tackles each of them. They perform extensive evaluation of the approach on Mujoco and real-world datasets showing improved performance over existing work. Strengths: The paper is well-...
Rebuttal 1: Rebuttal: We thank the reviewer for their comments and are happy to note that they find our work novel, our experiments extensive, and our paper well-written. Below, we address their concerns and hope that they consider increasing their score. *1. No theoretical guarantees have been given for the algorithm...
Summary: This paper presents the Federated Ensemble-Directed Offline Reinforcement Learning Algorithm (FEDORA), a novel approach for collaborative learning of high-quality control policies in a federated offline reinforcement learning (RL) setting. The paper identifies key challenges in federated offline RL, including ...
Rebuttal 1: Rebuttal: We thank the reviewer for their positive endorsement of our work. We are happy to know that the reviewer finds our work novel, our experiments extensive and our paper well written. Below we address the concerns of the reviewer. *1. "Collect wisdom" can be replaced by more rigorous exposition. Sa...
null
null
Rebuttal 1: Rebuttal: ### Joint Response We would like to express our gratitude to all the reviewers for their time and feedback. We are delighted that the reviewers recognize the novelty of our work (hv6n, EkXX, EGVU), find our paper well-written (hv6n, EkXX, EGVU), and appreciate the comprehensiveness of our experim...
NeurIPS_2024_submissions_huggingface
2024
null
null
null
null
null
null
null
null
Neural Concept Binder
Accept (poster)
Summary: The paper proposes a novel approach to unsupervised concept learning based on both continuous and discrete encodings. Neural Concept Binder (NCB) allows humans to inspect and revise the learnt concepts. In the experiments, NCB’s discrete concept encodings prove as expressive as the continuous encodings. Als...
Rebuttal 1: Rebuttal: **W1** (More background): We agree that adding more information on Sysbinder and Slot Attention can help the reader and make the paper overall more self-contained. We have added an additional section with details for the camera-ready version and provide it in a comment below. **W2** (Clarity of...
Summary: This paper introduces neural concept binder, a neural symbolic framework that utilizes both soft and hard binding. Building on top of the sysbinder model, it can additionally do exemplar-based hard binding and revise concepts. Evaluations made on CLEVR and the proposed CLEVR-Sudoku dataset proved the method's ...
Rebuttal 1: Rebuttal: **W1.1** (NeSy contribution): We agree that the field of neuro-symbolic AI is rapidly evolving with different focuses, e.g., visual reasoning in real-world images. However, works that are focussing on higher-level neuro-symbolic problems still heavily rely on mapping raw input images to symbolic...
Summary: The authors introduced a pioneering framework that combines an object-centric learning module with a retrieval-based module to address visual reasoning tasks and a new visual reasoning task, CLEVR Sudoku. The proposed method demonstrated significant potential in effectively acquiring inspectable and revisable ...
Rebuttal 1: Rebuttal: **W1** (dependence on continuous encoder): Indeed, the quality of the initial continuous concept encodings is important for the resulting discrete concept representation. We had remarked on this in the context of our ablations. We have now added an ablation study to highlight this empirically (c...
Summary: This paper introduces Neural Concept Binder, a framework for obtaining discrete concept representations from images without any supervision. The method is an extension of Neural Systematic Binder (SysBinder), adding a clustering step on top of the block-slot representations to obtain discrete concept represent...
Rebuttal 1: Rebuttal: **W1** (regarding RQ2 (CLEVR Sudoku)): A determining factor for the performance on CLEVR-Sudoku is the classification of the digits. We agree that this has, to some degree, already been investigated in the context of Q1, where we tested the suitability of NCB’s concept representations for few-sh...
Rebuttal 1: Rebuttal: We thank all the reviewers for their time and valuable feedback. We are especially happy to receive so much positive feedback concerning the importance of the tackled problem ("**investigates an important problem**" - edmj), the contribution of our work overall ("**pioneering framework**" - VMaX, ...
NeurIPS_2024_submissions_huggingface
2024
null
null
null
null
null
null
null
null
Decoupling Semantic Similarity from Spatial Alignment for Neural Networks.
Accept (poster)
Summary: In this paper, the authors propose a new method to measure similarity between responses of deep neural networks in vision. They reformulate the commonly used strategy to compute Representational Similarity Matrices (RSMs) by acknowledging the superiority of the semantic information over the spatio-semantic inf...
Rebuttal 1: Rebuttal: > _Why do the authors use Pearson correlation to examine the relationship between the Jensen-Shannon Divergence and the representational similarity? E.g. Kendall/Spearman correlation can be more robust._ and _"The use of some methods at work is not well justified (e.g. Pearson correlation)."_ We ...
Summary: This paper proposes Semantic RSMs to understand the internal representations in deep neural networks. The authors argue that the current RSMs are limited by their coupling of semantic and spatial information, which restricts the assessment of similarity. The proposed semantic RSMs are spatial permutation invar...
Rebuttal 1: Rebuttal: We appreciate the time and effort you invested in reviewing our manuscript. Regarding your comment that _"it would benefit from a more detailed discussion on the scalability of the proposed method to larger models and datasets and the approximation error."_ We addressed a part of this regarding ...
Summary: The authors introduce semantic RSMs, which are designed to be invariant to the spatial arrangement of elements within images. These semantic RSMs assess similarity by treating the problem as one of set-matching, where the focus is on matching semantic content rather than spatial details. This approach not only...
Rebuttal 1: Rebuttal: Thank you for the detailed feedback. We can see that a great deal of time and thought went into it. However, we believe there may be a few misunderstandings that we would like to clarify: ### Questions **Q1: How well does the method scale to very large datasets[...]** R1: The scaling behavior de...
Summary: This paper makes a contribution to the construction of RSMs in the field of vision neural networks and puts forward the concept of semantic RSMs, which is innovative and theoretically grounded. Strengths: The proposed semantic RSMs are used for spatial alignment by means of optimal permutation, which is a relatively new...
Rebuttal 1: Rebuttal: We would like to express our sincere gratitude for taking the time to read our paper and provide valuable feedback and constructive criticism: 1. _“This paper lacks the experimental verification of specific downstream tasks, such as detection and segmentation, on semantic RSMs. I need to know whi...
Rebuttal 1: Rebuttal: Thank you to all the reviewers for their time and effort in reviewing our paper. We appreciate the feedback and tried to mitigate issues to the best of our abilities. We recognize different viewpoints, but some criticisms seem based on misunderstandings, which may have led to undeservedly lower ra...
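The set-matching view of similarity described in these reviews — comparing two sets of spatial feature vectors under the best permutation of one set, so that semantic content is matched regardless of spatial arrangement — can be sketched as follows. The brute-force permutation search stands in for an optimal-assignment solver and is only usable for tiny inputs.

```python
import itertools
import numpy as np

def semantic_similarity(A, B):
    """Permutation-invariant similarity between two sets of feature vectors.

    A, B: arrays of shape (n, d) -- n spatial positions, d channels.
    Returns the best mean cosine similarity over all spatial permutations
    of B's positions (brute force; an assignment solver scales better).
    """
    A = A / np.linalg.norm(A, axis=1, keepdims=True)
    B = B / np.linalg.norm(B, axis=1, keepdims=True)
    C = A @ B.T  # pairwise cosine similarities
    n = C.shape[0]
    return max(C[np.arange(n), list(perm)].mean()
               for perm in itertools.permutations(range(n)))
```

Two responses that contain the same feature vectors in shuffled spatial order score a perfect 1.0, which an alignment-sensitive comparison would miss.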
NeurIPS_2024_submissions_huggingface
2024
null
null
null
null
null
null
null
null
PGN: The RNN's New Successor is Effective for Long-Range Time Series Forecasting
Accept (poster)
Summary: This paper proposes a Parallel Gated Network (PGN) as a successor to RNNs, featuring a Historical Information Extraction (HIE) layer to directly capture information from previous time steps. Additionally, it introduces a Temporal PGN (TPGN) framework with two branches to capture both long-term periodic and shor...
Rebuttal 1: Rebuttal: We sincerely appreciate the comprehensive review, detailed feedback, and valuable suggestions from the reviewer Bhsw. > Q1: The major issue with this paper is the lack of analysis and comparison with significant literature. Please compare your method with SegRNN and clarify your differences and a...
Summary: This paper focuses on long-range time series forecasting problems. To address the limitations of RNNs, a novel paradigm called PGN is introduced as an alternative, providing shorter information propagation paths. Building upon PGN, the paper further presents a generic temporal modeling framework named TPGN,...
Rebuttal 1: Rebuttal: We extend our sincere appreciation to Reviewer t9pC for providing valuable feedback and acknowledgment of our research. > Q1: For tables with a large amount of content, such as Table 1, it may be beneficial to consider using different colors for highlighting, as it could enhance clarity. Addition...
Summary: The paper introduces a new model paradigm which aims to solve the traditional bottlenecks of RNN models, such as non-parallel computation, gradient explosion/vanishing issues, etc. Strengths: 1. An important problem is studied in this paper. 2. The overall representation is clear and easy to follow. 3. A comp...
Rebuttal 1: Rebuttal: We express our sincere gratitude to Reviewer fqFX for providing comprehensive review, insightful perspectives, and thought-provoking questions. > Q1: I think the total amount of computation done in the PGN should be O(L^2) ... Can the authors kindly address this issue? Sincerest gratitude for yo...
Summary: This paper proposes a new network called PGN to capture the long-term dependencies of time series. Based on PGN, this paper further designs TPGN for long-range time series forecasting. TPGN consists of two branches that respectively capture the long-term periodic patterns and short-term information of time series...
Rebuttal 1: Rebuttal: We sincerely appreciate Reviewer K1qz for offering valuable insights and recognizing of our work. > Q1: The computational complexity of TPGN is not well discussed in this paper, and it would be better if the inference efficiency was adequately discussed as the time series size increases. The TPG...
Rebuttal 1: Rebuttal: We sincerely thank reviewers for their thorough review and valuable suggestions. # A Ablation analysis of the normalization layer on the Traffic and ETTh1 datasets **The experimental results can be found in Table A in the PDF file (newly submitted).** We observed a noticeable decrease in perfor...
NeurIPS_2024_submissions_huggingface
2024
null
null
null
null
null
null
null
null
ReGS: Reference-based Controllable Scene Stylization with Gaussian Splatting
Accept (poster)
Summary: This paper presents a method for stylizing 3D Gaussian Splatting (3DGS) using a single reference image. Unlike NeRF, which uses a structured representation, 3DGS is an unstructured discrete representation that tightly binds geometry and appearance to each Gaussian splat. To address this challenge, the paper in...
Rebuttal 1: Rebuttal: Thank you for your constructive feedback and thoughtful comments! Please note our top-level comment and additional experimental results in the rebuttal PDF. Below we address your questions and concerns. --- **wrt to novelty.** The core contribution of this paper is to enable stylizing 3DGS using...
Summary: The paper presents an optimization-based approach for style transfer of a (pre-baked) 3D scene represented by a 3D Gaussian splatting (3DGS). In order to fine-tune the given 3D scene with a style reference image of a single view, the authors suggest using a texture-guided controlling algorithm, which modifies ...
Rebuttal 1: Rebuttal: Thank you for your valuable comments and constructive feedback! Please note our top-level comment and additional experimental results in the rebuttal PDF. Below we address your questions and concerns. --- **wrt presentation of the materials.** Thanks for your constructive suggestion. We have car...
Summary: The paper proposes a method to stylize 3D Gaussians using a texture guidance. The method takes a pretrained 3D Gaussian model and one content-aligned reference image as inputs and outputs a stylized 3DGS model which could be rendered at real-time framerate. Several techniques, including structured densificatio...
Rebuttal 1: Rebuttal: Thank you for your constructive feedback and thoughtful comments! Please note our top-level comment and additional experimental results in the rebuttal PDF. Below we address your questions and concerns. --- **wrt inspired by Ref-NPR.** Thanks for pointing this out. We will include a more detail...
Summary: The paper proposes a texture-guided Gaussian densification strategy for exemplar-based 3DGS style transfer with a content-aligned reference, while preserving the original geometry via depth supervision. During 3D stylization with a style template reference, the introduced texture-guided Gaussian control strategy can ...
Rebuttal 1: Rebuttal: Thank you for your detailed review and constructive suggestions! Please note our top-level comment and additional experimental results in the rebuttal PDF. Below we address your questions and concerns. --- **wrt geometric stylization.** Thanks for your suggestion. We agree that also editing the...
Rebuttal 1: Rebuttal: We would like to thank all reviewers for providing constructive feedback that helped us improve the paper. We are encouraged that the reviewers think - our approach is decent (BehR), neat (JWsE), and interesting (TJDG) - designs and insights are effective (3qBC) and works well (BehR, TJDG, hmWz)...
NeurIPS_2024_submissions_huggingface
2024
Summary: The paper introduces ReGS, a new reference-based 3D style transfer method that utilizes 3DGs as the 3D representation. To capture fine-grained details from the reference view, the method employs texture-guided Gaussian control to enhance density in areas where texture is under-represented. Additionally, the ap...
Rebuttal 1: Rebuttal: Thank you for your detailed review and constructive comments! Please note our top-level comment with additional experimental results in the rebuttal PDF. Below we address your questions and concerns. --- **wrt novelty on style transfer techniques.** The core contribution of this paper is to enab...
null
null
null
null
null
null
Information-theoretic Generalization Analysis for Expected Calibration Error
Accept (poster)
Summary: This paper analyzes the estimation bias and generalization error of the expected calibration error (ECE). Specifically, in a binary classification setting, the authors provide an upper bound for the total bias with an improved convergence rate, applicable to both uniform mass and uniform width binning strategi...
Rebuttal 1: Rebuttal: We would like to express our deepest appreciation for your insightful reviews and suggestions. We sincerely summarize our responses to you as follows. ### Q.1: Regarding the strictness of Assumption 2 and the assumption of $n_e\geq 2B$ **A.** First, please refer to the global response for the di...
Summary: This paper investigates the estimation bias in expected calibration error (ECE) for binary classification models, focusing on uniform mass binning (UMB) and uniform width binning (UWB). The authors present a comprehensive theoretical analysis, establishing upper bounds for the bias and the generalization error...
Rebuttal 1: Rebuttal: We would like to express our deepest appreciation for your insightful reviews and suggestions. We sincerely summarize our responses to you as follows. ### Q.1: Regarding the setting of binary classification and the Lipschitz continuity. **A.** Although our study focuses on binary classification,...
Summary: The paper studies the expected calibration error using information-theoretical tools. They derive different tight fCMI and eCMI bounds in this setting. Empirical results show that the results are nonvacuous. Strengths: 1/ The paper is in general well written. Adequate discussions are given in the main body ...
Rebuttal 1: Rebuttal: We would like to express our deepest appreciation for your insightful reviews and suggestions. We sincerely summarize our responses to you as follows. ### Q.1: Regarding the novelty of proof techniques ### A. First, it is important to clarify that our techniques are fundamentally different from ...
Summary: This paper presents a comprehensive analysis of the estimation bias for expected calibration error (ECE), focusing on two common binning strategies: uniform mass and uniform width binning. The analysis establishes upper bounds on the bias, resulting in an improved convergence rate. Furthermore, these bounds re...
Rebuttal 1: Rebuttal: We would like to express our deepest appreciation for your insightful reviews and suggestions. We sincerely summarize our responses to you as follows. ### Q.1: Derive a minimax lower bound for the total bias. **A.** Please see our global response for a minimax lower bound. ### Q.2: Regarding the...
Rebuttal 1: Rebuttal: We would like to express our sincere appreciation for your insightful reviews and suggestions. First, we will address the common concerns raised by the reviewers. Following that, we will address each individual question. ## Discussion about the lower bound of the total bias As pointed out by Revi...
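The binned ECE estimators discussed throughout these reviews, uniform-width binning (UWB) and uniform-mass binning (UMB), can be sketched in a few lines. This is a minimal illustration of the two binning schemes, not the paper's code; the function name and interface are our own.

```python
import numpy as np

def ece(confidences, labels, num_bins=10, scheme="width"):
    """Binned ECE estimate for binary classification.

    scheme="width": uniform-width bins (UWB); scheme="mass": uniform-mass bins (UMB),
    whose edges are empirical quantiles of the confidences.
    """
    confidences = np.asarray(confidences, dtype=float)
    labels = np.asarray(labels, dtype=float)
    if scheme == "width":
        edges = np.linspace(0.0, 1.0, num_bins + 1)
    else:
        edges = np.quantile(confidences, np.linspace(0.0, 1.0, num_bins + 1))
    # assign each point to exactly one bin (clip so conf == 1.0 lands in the last bin)
    bin_idx = np.clip(np.digitize(confidences, edges[1:-1]), 0, num_bins - 1)
    total, n = 0.0, len(confidences)
    for j in range(num_bins):
        m = bin_idx == j
        if m.any():
            # per-bin gap between mean confidence and empirical accuracy, mass-weighted
            total += (m.sum() / n) * abs(confidences[m].mean() - labels[m].mean())
    return total
```

The bias the paper bounds is exactly the gap between this plug-in estimate and the population ECE, which behaves differently under the two edge choices.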
NeurIPS_2024_submissions_huggingface
2024
null
null
null
null
null
null
null
null
Fair Wasserstein Coresets
Accept (poster)
Summary: This paper introduces a new data distillation technique called Fair Wasserstein Coresets. The general idea is to create a synthetic coreset along with sample weights to represent a larger dataset, by minimizing the Wasserstein distance between the coreset and the dataset, while ensuring a fairness constraint is sati...
Rebuttal 1: Rebuttal: We thank the reviewer for the careful review and comments; we answer each question below. 1. It is indeed true that the MLP $g_\psi$ satisfies the Wasserstein inequality for $p_{(x,d)}$ rather than $p_{z}=p_{(y,x,d)}$ (with the first inequality being ultimately what we are interested in), so tha...
Summary: The paper gives an algorithm to generate smaller weighted synthetic dataset from real data set such that the synthetic data can enforce demographic parity when used for downstream tasks. This is achieved by solving an optimization problem of minimizing the Wasserstein distance between the two dataset distribut...
Rebuttal 1: Rebuttal: We are grateful to the reviewer for their time and comments in the review of our work; we answer each point below. 1. While we acknowledge that the ideas for the reformulation and part of the complexities considerations are adapted from [1, 2] ([56, 71] in the original references) we would like t...
Summary: This paper proposes to extract coresets from a set of data samples using Wasserstein distance with fairness constraints. The authors formulate this problem as a minimization with linear constraints. The coreset selection is over the whole input space, not just from original data samples. The importance / weig...
Rebuttal 1: Rebuttal: We are grateful to the reviewer for the effort spent reading our paper and for the encouraging comments. Below are our responses to the questions raised: 1. First of all, we apologize for a typo in (13). The minimum over $\hat{X}_i \in \mathcal{X}$ in (13) should be corrected to the minimum over ...
Summary: This paper talks about "fair Wasserstein coresets", weighted representative points generated to represent the original datasets. The goal is to meet two purposes: 1) the Wasserstein distance of the coreset and the input data set is minimized, 2) fairness in terms of demographic parity. Having a small Wasserste...
Rebuttal 1: Rebuttal: We are thankful to the reviewer for their time and thoughtful feedback. As the reviewer has pointed out, several notions of fairness exist in the literature. Focusing on the classification setting, chapter 3 in [2] classifies these notions into independence, separation, and sufficiency. Demographi...
Rebuttal 1: Rebuttal: We would like to thank again all reviewers for their comments, detailed feedback and questions which have improved the quality of our paper. While we have addressed each reviewer individually, we are using the global rebuttal to upload a .pdf with the new version of Figure 1 which includes Paret...
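Demographic parity, the fairness notion the reviews and rebuttals center on, reduces to a simple weighted statistic on the coreset: the weighted rate of positive outcomes should match across demographic groups. A minimal sketch (the function name and interface are illustrative, not from the paper):

```python
import numpy as np

def dp_gap(y, d, w):
    """Demographic parity gap of a weighted dataset.

    y: binary outcomes, d: demographic group labels, w: sample weights.
    Returns the largest difference in weighted positive-outcome rate between groups.
    """
    y, d, w = map(np.asarray, (y, d, w))
    rates = []
    for g in np.unique(d):
        m = d == g
        rates.append(np.sum(w[m] * y[m]) / np.sum(w[m]))
    return max(rates) - min(rates)
```

In the paper's formulation this kind of group-rate constraint enters the Wasserstein minimization as a linear constraint on the coreset weights; the sketch only shows the quantity being controlled.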
NeurIPS_2024_submissions_huggingface
2024
null
null
null
null
null
null
null
null
Towards Flexible 3D Perception: Object-Centric Occupancy Completion Augments 3D Object Detection
Accept (poster)
Summary: This paper presents a new task named object-centric occupancy completion as a fine-grained object representation to supplement the coarse-grained 3D bounding boxes. To accomplish this task, a new dataset, which annotates instance-level high-resolution occupancy, is created in an automated pipeline. This paper ...
Rebuttal 1: Rebuttal: 1. **Why not just predict foreground instance-level occupancy in the whole scene, instead of pursuing higher detection accuracy by using the occupancy results?** 1. To predict foreground instance-level occupancy for the entire scene, it is essential to distinguish the foreground from ...
Summary: In this work, the authors propose a novel task called object-centric occupancy. It extends the 3D detected bounding box representation to provide a more detailed description of the internal object shape. The method provides higher voxel resolution in large scenes by focusing on foreground objects only. It not...
Rebuttal 1: Rebuttal: 1. **The experimental results are only obtained on the Waymo Open Dataset. It will be nicer to conduct the experiments on nuScenes or Argoverse 2 to validate its robustness for different datasets.** Thanks for the suggestions. Currently, we are not able to train/test our method on nuScenes o...
Summary: The manuscript introduces the idea of representing the shape of objects at higher fidelity (and independent of) the rest of the scene. This is explored in the context of autonomous vehicles research on 3d car detection and representation. The proposed model regresses a shape code and an updated 3d bounding box...
Rebuttal 1: Rebuttal: 1. **Missing related works** Thank you for the suggestion. We will include a discussion of these works in our revised version and provide a more thorough related works section. 2. **Renderings of the shape codes;** In the uploaded PDF, we’ve included several renderings. Th...
Summary: This paper addresses the limitations of 3D object bounding box representations in autonomous driving by introducing object-centric occupancy. It uses an implicit shape decoder to manage dynamic-size occupancy generation. The method demonstrates robust performance under noisy conditions, significantly enhancing...
Rebuttal 1: Rebuttal: 1. **Creating detailed occupancy for each object seems unnecessary. In most downstream tasks in autonomous driving, using bounding boxes (bboxes) is sufficient.** We respectfully disagree with this statement. As highlighted in our introduction, using bboxes alone “fails to capture the ...
Rebuttal 1: Rebuttal: We would like to express our sincere gratitude to the reviewers for their thorough and thoughtful review of our paper. We are encouraged to learn that all reviewers found our paper well-written and recognized its impressive performance. We also extend our thanks to reviewers **fcuc**, **GRiU**, an...
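The object-centric representation the reviews describe is, at base, a per-box voxel grid rather than a scene-level one. An illustrative sketch with an axis-aligned box (the paper's pipeline uses oriented boxes and a learned implicit shape decoder rather than direct voxelization):

```python
import numpy as np

def object_occupancy(points, box_center, box_size, res=8):
    """Voxelize the points falling inside one (axis-aligned) object box.

    Resolution is spent only on the object, not on the whole scene.
    """
    points, box_center, box_size = map(np.asarray, (points, box_center, box_size))
    local = (points - box_center) / box_size + 0.5   # map box interior to [0, 1)^3
    inside = np.all((local >= 0) & (local < 1), axis=1)
    idx = np.floor(local[inside] * res).astype(int)
    occ = np.zeros((res, res, res), dtype=bool)
    occ[idx[:, 0], idx[:, 1], idx[:, 2]] = True
    return occ
```

The point of the object-centric design is visible in the interface: a fixed `res^3` grid per object gives far finer effective resolution than the same budget spread over a full driving scene.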
NeurIPS_2024_submissions_huggingface
2024
null
null
null
null
null
null
null
null
CemiFace: Center-based Semi-hard Synthetic Face Generation for Face Recognition
Accept (poster)
Summary: This paper proposes CemiFace, a novel diffusion-based approach for generating synthetic face images with varying levels of similarity to their identity centers. The authors argue that semi-hard negative samples, those with moderate similarity to the center, are crucial for training effective face recognition m...
Rebuttal 1: Rebuttal: **W1:** Thank you for your insightful suggestion and positive feedback. We assume the benefits of the semi-hard training face images could be attributed to: **(1)** easy training samples are typically images where the face is clear, well-lit, and faces the camera directly, and thus training on ...
Summary: The paper introduces an approach called CemiFace for generating synthetic face images to enhance face recognition (FR) models. The paper provides the first in-depth analysis of how FR model performance is influenced by samples with varying levels of similarity to the identity center, focusing particularly on c...
Rebuttal 1: Rebuttal: **W1:** Low GtR is not attributed to real inquiry data. For instance, even with **the same synthetic data DDPM**, our CemiFace would surpass the previous state-of-the-art method DCFace inquired by DDPM data (clearly illustrated in the upper part of Tab. 4), and DDPM provides close performance...
Summary: The paper proposes a novel approach named CemiFace to address privacy concerns in face recognition technology. The authors propose CemiFace, a diffusion-based method that generates synthetic face images with controlled similarity to a subject's identity center, enhancing the discriminative quality of the samples. Thi...
Rebuttal 1: Rebuttal: **W1**: In the last part of the Introduction section, we have clearly and specifically listed four contributions of our work, including: 1. a new and crucial finding; 2. a technical contribution (i.e., CemiFace face image generator) inspired by the finding; 3. an application contribution of our pr...
Summary: The paper titled "CemiFace: Center-based Semi-hard Synthetic Face Generation for Face Recognition" addresses a critical issue in face recognition (FR) related to privacy and performance degradation when using synthetic face images. The authors propose a diffusion-based approach, CemiFace, which generates facia...
Rebuttal 1: Rebuttal: **W1**: We first define some basic calculation complexities: **Time Step $ T $**: This represents the total number of time steps required for a complete diffusion process. **UNet Complexity $C_{\text{UNet}}$**: The UNet model accepts the input image and outputs the estimated noise. **Pretraine...
Rebuttal 1: Rebuttal: We thank all reviewers for their valuable feedback. Reviewers acknowledged that our: (i) **method** is new/innovative (tciQ, L3p9, 5xKY, Crof), interesting (tciQ), effective for SFR (fJPL, L3p9, 5xKY, Crof), and addresses privacy concerns (fJPL, L3p9, Crof); (ii) **discovery** is important (Crof);...
NeurIPS_2024_submissions_huggingface
2024
Summary: The paper proposes a new Face Recognition diffusion-based generation method. The diffusion process is completed with a semi-hard constraint on the synthetic reconstructed image: for each inquiry image of the (real) training set, the reconstructed image after the forward-backward diffusion process must have a s...
Rebuttal 1: Rebuttal: --- **W1:** The displayed similarities are the input cosine similarities, based on which the displayed face images were generated. However, the actual similarities between the generated images and their inquiry images may not be exactly the same as the input cosine similarities as DL models typica...
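The center-similarity band at the heart of the method can be illustrated with a toy selection rule over embeddings. The thresholds and names here are illustrative, and note the paper *generates* images at a target similarity via diffusion rather than filtering an existing pool:

```python
import numpy as np

def semi_hard_mask(embeddings, center, lo=0.3, hi=0.7):
    """Mark samples whose cosine similarity to their identity-center embedding
    falls in a middle band: neither trivially easy nor adversarially hard."""
    e = embeddings / np.linalg.norm(embeddings, axis=1, keepdims=True)
    c = center / np.linalg.norm(center)
    sim = e @ c
    return (sim >= lo) & (sim <= hi)
```

The reviews' core empirical finding is that face recognition models trained on samples in such a middle band outperform those trained on very-high-similarity (easy) or very-low-similarity (hard) samples.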
null
null
null
null
null
null
Controlling Continuous Relaxation for Combinatorial Optimization
Accept (poster)
Summary: This article finds that existing unsupervised-learning (UL) solvers can become trapped in local optima and face rounding issues. This study proposes a continuous relaxation annealing (CRA) strategy and an auxiliary function to facilitate training. Strengths: 1. The method proposed in the article is sound, easy to implement, and effecti...
Rebuttal 1: Rebuttal: Thank you for your thorough and thoughtful review. We appreciate your recognition of our method's strengths, noting that it is "sound, easy to implement, and effective." Additionally, we are grateful for your comment that this article has no major drawbacks. **Weakness:** Thank you for your valua...
Summary: The proposed approach is an optimization method for each graph over GNN parameters where each output corresponds to the likelihood of the node belonging to the solution. The objective function consists of a penalty term along with a parameter scheduled to control the non-convexity of the objective. Strengths:...
Rebuttal 1: Rebuttal: Thank you so much for reviewing our paper, appreciating the strengths in our approach, particularly noting the validity of the convex annealing method to control non-convexity and avoid local minima, as well as the theoretical results of the limiting points with different $\gamma$ and the generali...
Summary: This paper aims to tackle shortcomings of the existing unsupervised learning-based solvers for combinatorial optimization, namely the local optima issue and the rounding issue. It proposes a novel technique called continuous relaxation annealing (CRA) strategy which introduces an additional penalty term to smo...
Rebuttal 1: Rebuttal: Thank you for reviewing our paper; we appreciate your recognition of the simplicity and effectiveness of our proposed method, the consistent improvement of CRA over PI-GNN, and the extensive qualitative and quantitative analysis. **Weakness (Contribution):** Please refer to "Main Contribution an...
Summary: This paper presents a heuristic method for producing solutions to combinatorial optimization problems, which is based around solving a continuous relaxation of the problem. The main focus of the paper is on an additional penalty term to add to the objective of this relaxation which aims to reward solutions tha...
Rebuttal 1: Rebuttal: Thank you for your thorough and thoughtful review. We appreciate the time and effort you have invested in providing valuable feedback. **Understanding Difficult Sections:** Thank you for your insightful feedback and for pointing out the areas that need clarification. We understand the importance...
Rebuttal 1: Rebuttal: ## Unified response to all reviewers We sincerely thank the reviewers for their thorough and insightful reviews. Reviewer J6G3 found our idea interesting and promising, and Reviewer fKtw appreciated the comprehensiveness of our numerical experiments. However, Reviewer qMFs and Reviewer BuGW expr...
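Our reading of the CRA idea, in a deliberately tiny form. The penalty shape, annealing schedule, finite-difference optimizer, and toy MaxCut instance below are illustrative stand-ins for the paper's GNN-based setup; the one faithful element is the annealed coefficient that starts negative (smoothing the landscape) and ends positive (pushing the relaxation toward binary, which mitigates the rounding issue):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 8
W = rng.random((n, n)); W = (W + W.T) / 2; np.fill_diagonal(W, 0.0)  # toy MaxCut weights

def cut_value(p):
    # relaxed MaxCut objective (to be maximized); exact when p is binary
    s = 2 * p - 1
    return 0.25 * np.sum(W * (1 - np.outer(s, s)))

def loss(theta, gamma, alpha=2):
    p = 1 / (1 + np.exp(-theta))
    # gamma < 0 smooths the landscape; gamma > 0 pushes p toward {0, 1}
    return -cut_value(p) + gamma * np.sum(1 - (2 * p - 1) ** alpha)

theta = 0.1 * rng.normal(size=n)
steps, eps, lr = 300, 1e-5, 0.1
for t in range(steps):
    gamma = -1.0 + 2.0 * t / steps  # anneal gamma from -1 to +1
    grad = np.array([(loss(theta + eps * np.eye(n)[i], gamma)
                      - loss(theta - eps * np.eye(n)[i], gamma)) / (2 * eps)
                     for i in range(n)])
    theta -= lr * grad

p = 1 / (1 + np.exp(-theta))
x = (p > 0.5).astype(int)  # rounding is close to a no-op once gamma > 0
```

In the paper the relaxed variables come from a GNN over the problem graph (as in PI-GNN) and gradients are exact; the annealed penalty is the piece this sketch isolates.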
NeurIPS_2024_submissions_huggingface
2024
null
null
null
null
null
null
null
null
From Trojan Horses to Castle Walls: Unveiling Bilateral Data Poisoning Effects in Diffusion Models
Accept (poster)
Summary: The paper proposes a new poisoning attack for diffusion models (DMs). While previous work tried to poison/backdoor DMs by altering the training process or the optimization objective, the paper proposes a poisoning attack by only altering the training data. To poison DMs, a trigger is inserted into training im...
Rebuttal 1: Rebuttal: Thank you for your thorough review on our submission. We hope that our response (**A**) to each weakness (**W**) and question (**Q**) address your concerns and positively affect the rating. **W1 & Q1**: Training details of ImageNette & CIFAR-10. **A**: The training details of ImageNette & CIFA...
Summary: The paper investigates the impact of BadNets-like data poisoning attacks on state-of-the-art diffusion models (DMs) used for image generation. Unlike previous studies that required modifications to the diffusion training and sampling procedures, this work examines the effects of poisoning the training dataset ...
Rebuttal 1: Rebuttal: Thank you for your thorough summary, as well as the recognition of the originality, quality, clarity, and significance of our work. We hope our responses (**A**) to each of the weaknesses (**W**) or questions (**Q**) can address your initial concerns. **W1 & W2 & Q2**: How can we ensure the exper...
Summary: This paper investigates backdoor attacks against diffusion models. Unlike previous works that require both injecting poisoned data samples and manipulating the training loss function, this study focuses solely on poisoning training data samples during the training phase. The research demonstrates that backdoor...
Rebuttal 1: Rebuttal: Thank you for your recognition of the readability, the comprehensive experiments, and the contribution of our study. We hope our responses (**A**) to each of the weaknesses (**W**) or questions (**Q**) can address your concerns. **W1 & Q1**: The evaluation of the proposed attacks is limited to 3 ...
Summary: The paper studies BadNet-like poisoning attacks in diffusion models from both attack and defense perspectives. Strengths: 1. I think the paper makes interesting observations for the community, especially regarding the phenomenon of trigger amplification. 2. The evaluation seems quite comprehensive, consideri...
Rebuttal 1: Rebuttal: Thank you for your recognition of the interesting observations and comprehensive experiments of our study. We hope our response (**A**) to each of the weaknesses (**W**) can address your concerns. **W1**: Despite considering many settings, experiments are conducted only once (no error bars). *...
Rebuttal 1: Rebuttal: # General Response We sincerely thank all the reviewers for their meticulous review and valuable feedback on our submission. Below, we provide a general response to address common questions, weaknesses, and concerns in your comments. Please refer to the figures and tables in the attached PDF as F...
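The distinguishing property of the attack class studied here is that it touches only the data, never the diffusion training or sampling code. A BadNets-style poisoning step in minimal form (array shapes and the white-square trigger are illustrative choices, not the paper's exact configuration):

```python
import numpy as np

def poison(images, labels, target_label, frac=0.1, patch=3, seed=0):
    """Stamp a fixed trigger patch onto a random fraction of training images
    and flip their labels to the attacker's target; training itself is untouched."""
    rng = np.random.default_rng(seed)
    images, labels = images.copy(), labels.copy()
    idx = rng.choice(len(images), size=int(frac * len(images)), replace=False)
    images[idx, -patch:, -patch:, :] = 1.0   # white square trigger, bottom-right corner
    labels[idx] = target_label
    return images, labels, idx
```

The paper's "bilateral" framing then asks what a diffusion model trained on such data does at generation time (the attack side, e.g. trigger amplification) and how that behavior can be turned against the attacker (the defense side).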
NeurIPS_2024_submissions_huggingface
2024
null
null
null
null
null
null
null
null
Avoiding Undesired Future with Minimal Cost in Non-Stationary Environments
Accept (poster)
Summary: The paper studies the non-stationary setting in avoiding undesired future (AUF) problems, where environmental shifts can cause the failure of existing AUF methods. It introduces an optimization problem for AUF with minimal action cost in non-stationary environments, formulated as a convex quadratically constra...
Rebuttal 1: Rebuttal: Thanks for your detailed feedback, and we hope our responses will address your concerns. **W1&Q2.** Theoretical guarantees (regret bound) for the cost. **A1.** Thanks for your insightful question. In fact, theoretical guarantees of the cost can be inferred from existing results (Lemma 3.1 and Th...
Summary: In this paper, the authors address decision-making problems in which sufficient interactions with the environment are not available. In this case, RL is not suitable. The authors model the structure among the observed variables, and use the structure to help the decisions. Compared to the previous studies [Qin et al. 37], the method ca...
Rebuttal 1: Rebuttal: Thanks for the insightful feedback and the interest in our work! We hope our responses can address your concerns. **W1&Q1.** Discussion on the offline RL. **A1.** Thanks for your question. Generally speaking, online-offline hybrid RL methods can reduce the number of interactions by leveraging of...
Summary: The authors formulate the Avoiding Undesired Future (AUF) problem in real-world scenarios of decision-making, especially in non-stationary environments, and propose a method to avoid undesired outcomes with minimal costs. Here the non-stationarity majorly comes from the different costs corresponding to differe...
Rebuttal 1: Rebuttal: Thanks for the valuable feedback and appreciation of our work. We hope that our responses could mitigate your concerns. **Q1.** The difference between SRM and SCM. **A1.** Thank you for your insightful question. To some extent, the SRM and Rh(.) operations [1, 2] are indeed similar to their coun...
null
null
null
NeurIPS_2024_submissions_huggingface
2024
null
null
null
null
null
null
null
null
Optimal Algorithms for Learning Partitions with Faulty Oracles
Accept (poster)
Summary: This paper studies the problem of exactly recovering a $k$-partition of a set, given access to a same-cluster oracle that is allowed to lie $\ell$ times. This paper gives an algorithm with query complexity optimal up to constants, and a lower bound. Strengths: 1. The result of this paper is clean and complete. The ...
Rebuttal 1: Rebuttal: We thank you for your thorough review. We address your questions and comments in order. Regarding the writing: we agree that the details of the lower bound proof are technically more challenging and arguably more mathematically interesting, but we had previously decided to prioritize the algorith...
Summary: **[Setting]**: This paper studies the problem of clustering n items into k clusters using an oracle that adversarially answers same-cluster queries for item pairs under the constraint that it makes at most $\ell$ errors for a known constant $\ell$. The goal is to exactly recover all clusters always (instead of...
Rebuttal 1: Rebuttal: We thank you for taking the time to provide thorough feedback on our work. We begin by addressing your main question. After that, we discuss some of the weaknesses you mentioned. **"... it is not clear why the oracle will make at most $\ell$ errors ... or why $\ell$ will be known in advance. Do t...
Summary: The paper studies the problem of finding a hidden partition into $k$ clusters of a given universe. In many applications an algorithm has only access to a same-cluster oracle. A query to this oracle reveals whether two elements belong to the same cluster or not. This problem has been previously studied and tigh...
Rebuttal 1: Rebuttal: We thank you for your response. We begin by answering your question: **"Is there anything known for the setting where the number $\ell$ is unknown to the algorithm, and it only appears in the analysis?"**. In fact in the $k$-known setting, the algorithm presented in the paper does not require kno...
Summary: This paper studies the query complexity of clustering with a faulty oracle. Given a set of $n$ points $V$, which is partitioned into $k$ hidden clusters, the learner wants to recover the hidden partition by querying whether two points are in the same clusters or not. There has been a line of work that studies ...
Rebuttal 1: Rebuttal: We thank you for your helpful comments and questions. We begin by answering your questions, and we then discuss some of the weaknesses that were raised. **1. "Can you provide any real applications that motivated the study of such a learning model? (Only a constant number of mistakes are made ove...
Rebuttal 1: Rebuttal: We thank all the reviewers for their time. Multiple reviewers have asked for more examples in which our model could be applied. Below, we give two more general motivating examples, in the tech and scientific domains respectively, illustrating the role of $\ell$ in learning tasks. **Example 1: Rob...
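A useful baseline to contrast with the paper's optimal algorithm: when the oracle's total lie budget is $\ell$, repeating any single same-cluster query $2\ell+1$ times and taking the majority is always correct, since the adversary can corrupt at most $\ell$ of the repeats. This pays a multiplicative $2\ell+1$ overhead on every query, which is exactly the inefficiency the paper's algorithm avoids. The oracle interface below is illustrative:

```python
def reliable_query(oracle, u, v, l):
    """Majority vote over 2l+1 repeats; correct whenever the oracle lies at most l times in total."""
    votes = [oracle(u, v) for _ in range(2 * l + 1)]
    return votes.count(True) > l

def recover_partition(items, oracle, l):
    """Greedy clustering: compare each item against one representative per known cluster."""
    clusters = []  # each cluster is a list; its first element serves as representative
    for x in items:
        for c in clusters:
            if reliable_query(oracle, c[0], x, l):
                c.append(x)
                break
        else:
            clusters.append([x])
    return clusters
```

This baseline uses $O((2\ell+1) \cdot nk)$ queries; the paper's contribution is matching the information-theoretic lower bound by spending the repetition budget more carefully.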
NeurIPS_2024_submissions_huggingface
2024
null
null
null
null
null
null
null
null
Color-Oriented Redundancy Reduction in Dataset Distillation
Accept (poster)
Summary: The authors propose AutoPalette, which reduces color redundancy in dataset distillation. They use a palette network and color-guided initialization to enhance training efficiency and performance by minimizing redundant color information in synthetic images and datasets. Strengths: Color redundancy is a fundam...
Rebuttal 1: Rebuttal: **Weakness 1**: *In the abstract, the authors summarize their framework as the one that minimizes color redundancy at the individual image and overall dataset levels. I think that’s a good summary. However, the description is not utilized when they introduce their framework in the main text. Altho...
Summary: This paper introduces a straightforward yet effective dataset distillation method called AutoPalette. The method minimizes color redundancy at both the individual image level and the entire dataset level. At the image level, it trains the palette network by maximizing color loss and palette balance loss, there...
Rebuttal 1: Rebuttal: **Weakness 1**: *The paper could benefit from a more detailed explanation of the color loss and palette balance loss. It would be helpful to include an explanation of why the palette balance loss might achieve a more balanced color palette.* Thank you for your insightful comment. The palette bala...
Summary: The paper introduces AutoPalette, a novel framework for dataset distillation (DD) that focuses on minimizing color redundancy at both the individual image and overall dataset levels. The authors propose a palette network to dynamically allocate colors from a reduced color space to each pixel, ensuring essen...
Rebuttal 1: Rebuttal: **Weakness 1**: *The paper does not discuss the potential impact of the method on the performance of larger dataset beyond the CIFAR-10 and CIFAR-100. These 2 datasets are two small and could not show the effectiveness of the proposed method.* We appreciate the concern regarding the need for expe...
Summary: This paper introduces AutoPalette, a framework that minimizes color redundancy at the individual image and overall dataset levels. At the image level, the palette networks generate condensed images in reduced color bit-width while at the dataset level, a color-guided initialization strategy is proposed. The e...
Rebuttal 1: Rebuttal: **Weakness 1**: *AutoPalette seems like it is built on top of [1] with DC loss*: Thank you for bringing up this important question! While color reduction plays a significant role in our methodology, our work primarily focuses on addressing two unique challenges inherent in dataset distillation wi...
Rebuttal 1: Rebuttal: We would like to extend our sincere gratitude to all the reviewers for their time and effort in reviewing our work. We deeply appreciate the insightful suggestions and feedback provided. We also thank the reviewers for acknowledging that 1) our color-oriented redundancy reduction provides **a new perspective...
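The color reduction itself is easy to picture: quantize each synthetic image to a small palette so distillation spends its pixel budget on structure rather than redundant colors. A plain k-means palette serves as a stand-in here; the paper instead *learns* the pixel-to-palette assignment with a palette network and additionally balances palette usage:

```python
import numpy as np

def quantize(img, k=8, iters=20, seed=0):
    """Quantize an HxWx3 image to k colors with naive k-means over RGB values."""
    rng = np.random.default_rng(seed)
    pixels = img.reshape(-1, 3).astype(float)
    palette = pixels[rng.choice(len(pixels), size=k, replace=False)]
    for _ in range(iters):
        d = ((pixels[:, None, :] - palette[None, :, :]) ** 2).sum(-1)
        assign = d.argmin(1)                       # nearest palette entry per pixel
        for j in range(k):
            if (assign == j).any():
                palette[j] = pixels[assign == j].mean(0)
    return palette[assign].reshape(img.shape), palette
```

The resulting image stores only `k` distinct colors plus per-pixel indices, which is the bit-width saving the reviews refer to.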
NeurIPS_2024_submissions_huggingface
2024
null
null
null
null
null
null
null
null
Learning 3D Garment Animation from Trajectories of A Piece of Cloth
Accept (poster)
Summary: The authors propose a method to transfer the deformations of the observed garments to any other garment. Previous methods either rely on a large-scale dataset for training or on analytical physics models with limited expressive ability. In contrast, the proposed method first learns the constitutive relations f...
Rebuttal 1: Rebuttal: We thank the reviewer for their valuable feedback. # W1: Possibility of Using The Method in Real Applications Firstly, synthetic data is commonly used and facilitates research on garment animation, such as TailorsNet [1], Cloth3D [2], MotionGuided [3], Clo3D [4]. Secondly, as shown in the da...
Summary: This submission presents a method that could effectively learn the dynamic patterns of different garments from a single piece of cloth. The key insight is that the motion of different cloths is governed by both external forces and the constitutive relations rather than specific garment topologies. Thus, an Ene...
Rebuttal 1: Rebuttal: We thank the reviewer for the positive feedback and recognizing the value of our work. We believe the reviewer has sufficient understandings of our framework and pipeline. # W1: Details Regarding The Design of EUNet We introduce the formulations and training procedures of our EUNet in Section 3.2...
Summary: This work proposes a method to learn the constitutive model of cloth materials from observed cloth trajectory using a neural network. It adopts an MLP that operates on individual edges and predicts per-edge distortion based on the deviation of edge geometry from rest shape and trains the network using a combin...
Rebuttal 1: Rebuttal: We thank the reviewer for their valuable feedback. # W1: Design of Dissipative Energy As commonly adopted in physics simulation, such as the Rayleigh dissipation, the dissipation can be approximated by a function of objects' velocities, which can be calculated from $X^t-X^{t-1}$ in our work. In ...
Summary: The paper proposes a novel method for animating garments by learning from a single piece of cloth. This approach circumvents the need for large-scale garment datasets, which are resource-intensive and time-consuming to create. The core idea is to use a disentangled scheme where constitutive behaviors are learn...
Rebuttal 1: Rebuttal: We thank the reviewer for the positive and valuable feedback. # W1: Computationally Intensive Energy Optimization Process In this paper, we primarily focus on the challenge of modeling constitutive relations from observations. The energy-based simulation is used solely as a tool to solve the dyna...
Rebuttal 1: Rebuttal: We thank the reviewers for their insightful feedback. We emphasize our contributions and clarify the main points as follows. To mimic the dynamic patterns from observed clothes, some methods [1, 2] focus on estimating the **PHYSICS PARAMETERS** that best fit the known analytical models or simula...
NeurIPS_2024_submissions_huggingface
2024
Summary: This paper proposes to learn garment dynamics using a disentangled learning framework and the Energy Unit Network (EUNet). Instead of relying on extensive garment datasets, the approach learns constitutive behaviors from a single cloth piece and dynamically animates garments through energy optimization. Stren...
Rebuttal 1: Rebuttal: We thank the reviewer for the detailed feedback. # W1: Insufficient Literature Review We argue that our focus is quite different from **Physics Parameter Estimation**. The misunderstanding may come from the similarity in data format, where a piece of hanged cloth deforms given external forces. Un...
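The topology-independence argument the rebuttals make rests on the energy being a sum over edges: any garment mesh can be scored by the same per-edge potential, regardless of its connectivity. A hand-written quadratic potential stands in here for the learned EUNet energy (which also models dissipation from velocities):

```python
import numpy as np

def edge_energy(X, edges, rest_len, k=1.0):
    """Total elastic energy of a mesh as a sum of per-edge terms.

    X: (V, 3) vertex positions; edges: (E, 2) vertex index pairs;
    rest_len: (E,) rest lengths. The quadratic in length deviation is a
    toy constitutive relation; EUNet learns this mapping from cloth trajectories.
    """
    e = X[edges[:, 0]] - X[edges[:, 1]]
    length = np.linalg.norm(e, axis=1)
    return np.sum(0.5 * k * (length - rest_len) ** 2)
```

Because the energy decomposes edge-wise, a potential fit on trajectories of a single cloth piece can be evaluated on arbitrary garments, and dynamics follow by optimizing this energy over time, which is the disentangled scheme the reviews summarize.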
null
null
null
null
null
null
Policy Aggregation
Accept (poster)
Summary: This paper joins a long list of recent work that studies how to aggregate the preferences of several agents (e.g., humans) in a reinforcement learning framework inspired by social choice theory. The problem is modeled as a multi-objective MDP with $n$ different reward functions. The authors propose to use the ...
Rebuttal 1: Rebuttal: > The primary justification of this work (which is repeatedly mentioned in the paper) is that prior work on policy aggregation and fair RL is not invariant to affine transformations of the reward function. Essentially, agents can have differently scaled reward functions, which makes, e.g., maximiz...
Summary: The paper solves the problems that arise in preference aggregation of individual policies into a collective policy – (1) summation-based aggregation is sensitive to affine transformations and (2) voting-rule-based aggregation faces the problem of the number of policies being exponential in S. Towards solving this, the paper pro...
Rebuttal 1: Rebuttal: > In Def. 4 should the expression be vol(O’)/vol(O) >= 1 – veto(S) + epsilon instead of vol(O’) >= 1 – veto(S) + epsilon as currently stated? Yes, you are right. Thank you for catching this typo. > Why can't yours be a special case of Noothigattu et al. [27]? Noothigattu et al. assume that ther...
Summary: This paper studies aggregating multiple policies–which can be seen as a formalization of the task of aligning an AI system to the values of multiple individuals. When the number of states is small (such as, when multiple individuals have to select one out of a few candidates), this problem has been widely stud...
Rebuttal 1: Rebuttal: > I am not sure if the empirical results section is adding any value to this paper: it evaluates different aggregation rules, but I think this is not the focus of this work–I think the focus is to design efficient algorithms and/or prove existential results. If other reviews and the area chairs ag...
null
null
null
NeurIPS_2024_submissions_huggingface
2024
null
null
null
null
null
null
null
null
Auditing Local Explanations is Hard
Accept (poster)
Summary: The paper addresses the challenges in verifying the accuracy of local explanations for machine learning models, especially when the model is not fully known and access to it is limited. The primary focus is on minimizing the number of times the model and explainer are accessed during the auditing process. The ...
Rebuttal 1: Rebuttal: We thank the reviewer for their thoughtful review. We will respond to each weakness in order. W1. We agree that empirical validation is an important direction for further work, and will emphasize this in the final version of our paper. We chose to take a theoretical focus because the main result ...
Summary: This work studies an auditing framework in the eXplainable Artificial Intelligence (XAI) area. Specifically, the authors consider the scenario where a group of third-party auditors or users attempt to perform a sanity check on the provided explanations. The framework allows the auditors to query the model pred...
Rebuttal 1: Rebuttal: We thank the reviewer for their review. We will begin by addressing the listed weaknesses 1. As we mention in our global rebuttal, we respectfully disagree that the considered explanations are too limited. LIME and Anchors are both reasonably utilized methods in practice, and more generally we be...
Summary: The paper proposes an auditing framework to verify the truthfulness of explanations by a third-party in scenarios where there is no trust. Bounds on sample complexity are provided that depend on the locality (minimum local mass) of the explanation. Further, the authors discuss that for gradient-based explanati...
Rebuttal 1: Rebuttal: We thank the reviewer for their detailed review. We will first respond to the listed weaknesses. 1. As we mention in our global rebuttal, we believe our setting reflects cases where an Auditor might have access to a set of explained cases by default. For example, applicants to a bank for loans co...
Summary: This paper provides theoretical results on how many queries are required for an auditing framework for local explanations of machine learning algorithms (e.g., neural networks). Strengths: The paper is well motivated with a widely interesting and relevant topic. The approach is theoretical, and rigor is provi...
Rebuttal 1: Rebuttal: We appreciate the review and the detailed questions. In order 1. As we mention in our global rebuttal, we do not believe the local loss is sufficient for preventing adversarial attacks. We rather believe that maintaining a low local loss is one of many necessary components required for a trustwor...
Rebuttal 1: Rebuttal: We thank the reviewers for their detailed and thoughtful reviews. It appears that there are 3 main points of contention regarding our paper. First, that the set of "local explanations" being considered is either too limited or not carefully enough analyzed, second, that the local loss is not a goo...
NeurIPS_2024_submissions_huggingface
2024
null
null
null
null
null
null
null
null
Predicting Ground State Properties: Constant Sample Complexity and Deep Learning Algorithms
Accept (poster)
Summary: The paper presents two novel machine learning algorithms for predicting ground state properties of quantum systems with constant sample complexity, independent of system size. The first algorithm modifies an existing ML model, while the second introduces a deep neural network model, both showing improved scali...
Rebuttal 1: Rebuttal: **Reviewer Comment:** The training objective for the neural network is non-convex, which poses challenges in finding a global optimum efficiently. The paper does not address how to overcome this issue or guarantee convergence to optimal weights. **Author response:** To address non-convexity of th...
Summary: In this paper, the authors focused on utilizing deep learning methods to predict the ground states. They made an important assumption that brings theoretical improvement to achieve constant sample complexity in the training data. They also made two main alterations to the learning model compared to previous l...
Rebuttal 1: Rebuttal: We thank the reviewer for the careful reading of our paper. The reviewer's comments only apply to our first result in Section 3.1, and we acknowledge that the reviewer's comments are accurate for this portion of our paper. However, the improvement is significant - reducing sample complexity from l...
Summary: This paper studies the sample-efficient learnability of properties of grounds states of local Hamiltonians. Ground states of local Hamiltonians are hard to compute, even for quantum computers and to circumvent this hardness, several recent works proposed learning the trace inner product of local observables wi...
Rebuttal 1: Rebuttal: We thank the reviewer for the careful reading of our paper.\ First, we would like to clarify some statements made by the reviewer. We remark that our first answer to Question 1, namely our result discussed in Section 3.1, does *not* make any assumptions on the data distribution (as in the PAC lear...
Summary: This work builds upon the work of Huang et al. and Lewis et al. by introducing two new approaches to get constant sample complexity for predicting properties of a ground state of a many-body local Hamiltonian. The two new approaches are a modified ML model that requires knowledge of the property of interest an...
Rebuttal 1: Rebuttal: Our work aims to solve an important physics problem by leveraging machine learning. Thus, we expect it to be of broad interest to physicists, theoretical computer scientists, and machine learning practitioners, as our algorithms not only have rigorous proofs but are also readily implementable, as...
Rebuttal 1: Rebuttal: We thank all the reviewers for their consideration and feedback. We're gratified to see appreciation from most reviewers: several described the work as important and appreciated the novelty of the deep learning approach with both theoretical guarantees and strong practical performance. We believe...
NeurIPS_2024_submissions_huggingface
2024
Summary: In this work, the authors give two algorithms that predict (geometrically local) properties of ground states of gapped geometrically local Hamiltonians. This problem has been introduced by Huang et al. [HKT+22], and the previous best known algorithm is given by Lewis et al. [LHT+24], which uses $\log(n)$ sampl...
Rebuttal 1: Rebuttal: **Bug** The claim by the reviewer is incorrect, as *our observables satisfy exactly the same conditions* as those considered in [LHT+24].\ In particular, we state throughout the paper that - we only consider observables that satisfy $\lVert O\rVert_\infty \leq 1$, e.g., in lines 122, 225, 301, e...
Summary: The authors propose an ML based method to predict properties of ground states of quantum systems which comes with provable guarantees. Improving on recent work by Huang et al and Lewis et al, they give sample complexity bounds which are independent of the number of qubits. This approach is applicable when the ...
Rebuttal 1: Rebuttal: We thank the reviewer for the careful reading of our paper and their constructive comments. **Reviewer Comment:** It is unclear to me how the Neural Network generalization result compares to known results in the literature.\ **Author Response:** This is the first rigorous sample complexity bound ...
null
null
null
null
Large Spatial Model: End-to-end Unposed Images to Semantic 3D
Accept (poster)
Summary: The authors proposed the Large Scene Model (LSM), a novel 3D scene understanding framework that unifies multiple vision tasks within a single model. LSM represents a scene using pixel-aligned point maps, integrating geometric, appearance, and semantic information into a unified representation. By leveraging a ...
Rebuttal 1: Rebuttal: We thank reviewer 4 (8Lhg) for recognizing the contribution of our paper and offering insightful comments. Please find our response to the feedback below. **[W1]: Ablate the multi-task design choice?** We ablate the “novel view feature synthesis (Eq.4)” and “geometry prediction (Eq.1)” t...
Summary: This paper presents the Large Scene Model (LSM), which generates semantic radiance fields from uncalibrated RGB images using a unified Transformer-based framework. LSM can infer geometry, appearance, and semantics simultaneously and synthesize label maps in real-time. The model integrates multi-scale fusion an...
Rebuttal 1: Rebuttal: We thank reviewer 3 (MCB6) for recognizing the contribution of our paper and offering insightful comments. Please find our response to the feedback below. **[W1.1]: Reasons for deviation from Replica, and performance discrepancy.** The reasons for the deviation are threefold: 1). The proces...
Summary: The paper aims to train a network that takes in a set of unposed images and directly produces a semantic radiance field. The method utilizes a single Transformer-based model that learns the attributes of a 3D scene represented by a point-based radiance field. A decoder produces 3D Gaussians that can be splatt...
Rebuttal 1: Rebuttal: We thank reviewer 2 (fir7) for recognizing the contribution of our paper and offering insightful comments. Please find our response to the feedback below. **[W1] Discussion w/ SRT and RUST.** Scene Representation Transformer (SRT)[1] and RUST[3] have pioneered the exploration of representing ...
Summary: This paper solves the sparse-view scene reconstruction problem by Large Scene Model, a unified scene reconstruction model via unposed RGB images. The model utilizes a ViT backbone for extracting the feature and uses cross-view attention to align the multi-pose feature for consistent features. The 3D scene is f...
Rebuttal 1: Rebuttal: We thank reviewer 1 (K6pm) for recognizing the contribution of our paper and offering insightful comments. Please find our response to the feedback below. **[W1]: No significant new problem has arisen and novel solutions proposed?** We acknowledge that our work builds upon the contributions o...
Rebuttal 1: Rebuttal: We thank all reviewers for acknowledging that the work is sound and clearly presented (8Lhg). The presented Transformer-based design is very valuable (fir7) and general (K6pm), running lightning-fast (K6pm, fir7, MCB6, 8Lhg) while achieving compelling quality (K6pm, 8Lhg). We have addressed all the...
NeurIPS_2024_submissions_huggingface
2024
null
null
null
null
null
null
null
null
Metalearning to Continually Learn In Context
Reject
Summary: The paper focuses on Automated Continual Learning, which is different from handcrafted continual learning. It uses self-referential neural networks to meta-learn their own in-context continual learning algorithm. First, the paper shows the emergence of in-context catastrophic forgetting. Second, the paper analy...
Rebuttal 1: Rebuttal: We would first like to thank the reviewer for their valuable time reviewing our work and for many positive comments. Thank you very much. > The paper claims to do in-context continual learning but the concept of in-context learning is not clearly explained. We actually describe and highlight the...
Summary: The paper describes a method for in-context continual learning (CL) by using a type of meta-learning neural architecture based on ‘self-referential weight matrices’ (SRWM). Proposed in prior work, these models learn to modify weight matrices iteratively as they process more and more inputs. In this work, they ...
Rebuttal 1: Rebuttal: We thank the reviewer for their valuable time reviewing our work and for many positive comments. We also acknowledge the reviewer’s thorough reading through the details of our work. Thank you very much. > One weakness of the proposed method is that the number of loss function terms increases wit...
Summary: The paper studies the problem of catastrophic forgetting (CF) by formulating continual learning (CL) as learning from a sequence of demonstrations of tasks. The paper proposes a meta-learning objective function that includes backward transfer terms. These terms compute the error of the predictor on previous ta...
Rebuttal 1: Rebuttal: We thank the reviewer for their valuable time spent on reviewing our work. We believe we have good responses to resolve all the main concerns. **== Factual clarifications ==** Before providing our clarifications to the reviewer’s concerns, we would first like to resolve some factual misunderst...
Summary: The paper proposes a novel technique to automatically discover in-context continual learning dynamics for image classification task sequences through meta-learning. In order to achieve this purpose, the approach relies on 2 main novelties: * Using self referential weight matrices on top of an image encoder - ...
Rebuttal 1: Rebuttal: We thank the reviewer for their valuable time reviewing our work and for many positive comments. Thank you very much. **== Factual error corrections ==** Before providing our clarifications to the reviewer’s concerns, we would first like to resolve some factual errors in the review. > in a clas...
Rebuttal 1: Rebuttal: **== General Response to all the Reviewers ==** We would first like to sincerely thank all the reviewers for their valuable time reviewing our work. We would like to emphasize that this work has *two facets*: On the one hand, we explore a novel perspective/approach to *continual learning* (CL). ...
NeurIPS_2024_submissions_huggingface
2024
null
null
null
null
null
null
null
null
Binarized Diffusion Model for Image Super-Resolution
Accept (poster)
Summary: The paper introduces BI-DiffSR, a novel binarized diffusion model for image super-resolution, designed to accelerate the inference speed and reduce computational costs of diffusion models while maintaining high performance. It proposes a UNet architecture optimized for binarization, featuring consistent-pixel ...
Rebuttal 1: Rebuttal: # Response to Reviewer MRyP (denoted as R4) `Q4-1` The basic BI-Conv block lacks novelty, which is the same as the binarized module in ReActNet that contains RSign and RPReLU. `A4-1` Thanks for pointing it out. We clarify it below. 1. Indeed, our basic BI-Conv block utilizes RSign and RPRe...
Summary: This work present a novel binarized diffusion model for improving the efficiency of super resolution tasks. Compared with the existing works, this work first pointed out the specific challenges of binarized DMs for SR, including the dimension mismatch and fusion difficulty of representations. Then this work pr...
Rebuttal 1: Rebuttal: # Response to Reviewer KWS7 (denoted as R3) `Q3-1` The writing and presentation of the paper should be improved, including but not limited to the grammar and description. For example, some basic knowledge about quantization, SR, and DMs seems to be summarized as a preliminaries section; and let...
Summary: The authors propose BI-DiffSR to binarize diffusion based image super-resolution (SR) model. They design a UNet architecture for the whole binarized model structure. To maintain dimension consistency, they propose two modules, CP-Down and CP-Up, which can further help transfer full-precision information. To en...
Rebuttal 1: Rebuttal: # Response to Reviewer Z33c (denoted as R2) `Q2-1` When binarizing full-precision model from 32-bit to 1-bit, ideally we can reduce the parameters by 32 times. But, as shown in Table 2, the authors reduce parameters from 55.41M to 4.58M (for scale 2). There is a gap between ideal case and pract...
Summary: This paper introduces a novel binarized diffusion model, BI-DiffSR, for image SR. A UNet architecture optimized for binarization, channel shuffle fusion, and time-step-aware redistribution and activation functions are designed. The experimental results prove the effectiveness of the method. Strengths: 1. This...
Rebuttal 1: Rebuttal: # Response to Reviewer mEsa (denoted as R1) `Q1-1` Lack of discussion with some related works[1, 2, 3, 4], in particular [1] which is also for binary SR networks. Please analyze and discuss the differences with [1,2]. `A1-1` Thanks for your advice. We add more analyses and discussions of relat...
Rebuttal 1: Rebuttal: # Response to all reviewers and area chairs Dear Reviewers and Area Chairs, We thank all reviewers (**R1-mEsa**, **R2-Z33c**, **R3-KWS7**, **R4-MRyP**) and area chairs for their insightful comments and valuable time. We are pleased that: - R2 and R3 appreciate our intuitive motivation and ...
NeurIPS_2024_submissions_huggingface
2024
null
null
null
null
null
null
null
null
Controlled maximal variability along with reliable performance in recurrent neural networks
Accept (poster)
Summary: The authors propose a principle for selecting actions to drive recurrent neural network activities which aims at maximizing the variability of the neural activity while avoiding unwanted states. They define unwanted states as states where no action is possible, and use a reinforcement learning framework to sel...
Rebuttal 1: Rebuttal: $\textit{Weaknesses: I have […] controller.}$ The reviewer is correct, the only goal of the MOP agent is to maximize future action-path entropy, Eq. 3. The ‘task’ of filling out space was never given but it emerges naturally as the MOP agent seeks to maximize action-path entropy while avoiding t...
Summary: In natural behaviors, there’s usually variability despite being able to perform tasks with high performance. This paper aims to understand whether it’s possible for neural networks to have high variability while maintaining high task performance and being able to switch to deterministic behavior modes when nee...
Rebuttal 1: Rebuttal: $\textit{ Weaknesses: 1. The tasks are …}$ We thank the reviewer for giving us the possibility to clarify the role of terminal states in our algorithm and to highlight the general applicability of our framework to various RL tasks that are not typically formulated in this manner. To illustrate th...
Summary: This paper applies the maximum occupancy principle (MOP) -- previously introduced as a normative theory of behavioural variability -- to recurrent neural networks, thereby proposing MOP as a normative theory of neural variability. The MOP postulates that an agent seeks to maximize future occupancy of its state...
Rebuttal 1: Rebuttal: $\textit{There is some, […] discuss these limitations in more depth.}$ We appreciate this criticism, which allows us to discuss more deeply some fundamental features of our MOP network. Let us take the example of the inverted pendulum that the reviewer has brought up. Our intrinsic motivation approa...
Summary: This paper proposes a mechanism to induce variability in "reservoir" recurrent neural networks without impinging upon task performance, by maximising the cumulative entropy of future states and actions/behaviors. These actions are provided by a controller network to the reservoir as input currents. The authors...
Rebuttal 1: Rebuttal: $\textit{Weaknesses: This is […] multi-task settings.}$ We agree with the reviewer’s comments regarding the high computational cost of our framework, as acknowledged in the manuscript. This complexity arises from a specific choice we are committed to in our current approach, which employs an exac...
Rebuttal 1: Rebuttal: We sincerely thank the reviewers for their insightful comments and for giving us the opportunity to clarify important aspects of our framework. Based on the feedback received, we would propose to modify the Discussion section incorporating the following additional paragraphs. $\textbf{Addition P1...
NeurIPS_2024_submissions_huggingface
2024
null
null
null
null
null
null
null
null
User-Creator Feature Polarization in Recommender Systems with Dual Influence
Accept (poster)
Summary: This paper models dynamics of both users and creators in a recommender system. The user features shift in the direction of the content recommended to them. The creator dynamics are strategically motivated i.e. they try to align content to attract their audience, to increase profit. The authors then provide s...
Rebuttal 1: Rebuttal: > Q1: Can you motivate the update in Eq (4)? Is this myopically optimal for the creator to do, and how does it generalize [Eilat & Rosenfeld]? [[Eilat & Rosenfeld]](https://arxiv.org/pdf/2302.04336) assumes that creators aim to maximize exposure (defined as the sum of inner products between user ...
Summary: The paper explores the dynamics between users and content creators in recommender systems, highlighting the dual influence where users’ preferences are shaped by recommendations and creators modify their content to align with what is more likely to be recommended. The study defines a model called user-creator ...
Rebuttal 1: Rebuttal: > Weakness 1: related works. Thank you for listing those related works! We will discuss them in the revision. We also provide comparisons between some of those works and our work in a table in our global response. > Weakness 2: It would be better to include some detailed discussions regarding ...
Summary: This paper studies how recommendations become polarized over the long run when user and creator features dynamically change over time. The authors theoretically prove that, under the assumption that every creator can be recommended to every user with some non-zero probability, recommender systems will eventual...
Rebuttal 1: Rebuttal: > Q1: Please address the points I raised in Weaknesses. > W1: The assumption that all items can be recommended to users is not realistic. ... Customers can only see a certain number of items on the webpage (i.e., p=0 for items that users can't see). First, we note that customers not seeing some...
null
null
Rebuttal 1: Rebuttal: We thank the reviewers for the helpful comments, especially the provided related works. Here, we provide a table to compare our work with those works (and some works that were already cited in our paper). We will add this table to an additional related work section in our appendix. We want to high...
NeurIPS_2024_submissions_huggingface
2024
null
null
null
null
null
null
null
null
Is Mamba Compatible with Trajectory Optimization in Offline Reinforcement Learning?
Accept (poster)
Summary: This paper comprehensively investigates the possibility of leveraging Mamba for trajectory learning. The authors take Decision Mamba as a playground and analyse the performance of this model over trajectory learning scenarios (gym/mujoco) from several aspects. A group of conclusions are attained through rigoro...
Rebuttal 1: Rebuttal: We sincerely thank you for your insightful comments, which have catalyzed numerous enhancements and refinements to the paper. In the following, we reply to the questions one by one for the convenience of checking. --- **Weakness 1**: *Most discoveries in this paper have been implicitly discussed ...
Summary: This paper investigates how Mamba performs in trajectory optimization in offline RL with ablation analysis on Mamba's data input structures and architectural structures and shows Mamba DT can achieve SOTA performance with fewer parameters. Strengths: 1. The paper writing is good, the visualizations look good. 2...
Rebuttal 1: Rebuttal: We sincerely thank you for your insightful comments, which have catalyzed numerous enhancements and refinements to the paper. In the following, we reply to the questions one by one for the convenience of checking. --- **Weakness 1**: *Finding 3 is not very surprising on the tested MDP environmen...
Summary: The work introduces Decision Mamba (DeMa) to address the challenges in offline RL posed by the large parameter size and limited scalability of Transformer-based methods. DeMa aims to achieve similar performance to Transformers with significantly fewer parameters. DeMa surpasses the DT with significantly fewer...
Rebuttal 1: Rebuttal: We sincerely thank you for your insightful comments, which have catalyzed numerous enhancements and refinements to the paper. In the following, we reply to the questions one by one for the convenience of checking. --- **Weakness 1 & Question 1**: *Some symbols are not defined before use.* **Resp...
null
null
Rebuttal 1: Rebuttal: **We want to thank all the reviewers for their thoughtful suggestions on our submission**, and we appreciate that the reviewers have multiple positive opinions of our work, including: * novelty (BojN, 85kG) * good writing, good visualizations (qC8w) * the detailed analysis provides useful practic...
NeurIPS_2024_submissions_huggingface
2024
null
null
null
null
null
null
null
null
Accelerating ERM for data-driven algorithm design using output-sensitive techniques
Accept (poster)
Summary: This paper addresses the problem of learning optimal parameters for data-driven algorithm design. A characteristic of the problem is that the dual loss function, which measures the performance of an algorithm as a function of parameters, is discontinuous. Nevertheless, the dual loss is typically piecewise stru...
Rebuttal 1: Rebuttal: We thank the reviewer for their time and insightful comments. Re experiments, we note that prior empirical work already suggests usefulness of output-sensitive guarantees on typical instances, which we elaborate below. ``Experiments:`` Our work is motivated from prior empirical research (lines 91...
Summary: In data-driven algorithm design, we are given a collection of problem instances sampled from an unknown distribution, and a family of algorithms for solving the problem, typically parameterized by a real-valued multivariate parameter. The goal is to find a setting of parameters such that the performance of the...
Rebuttal 1: Rebuttal: We respectfully disagree with the reviewer that the main result is not strong. Since its inception (around 2016), the field of theoretical guarantees of data driven algorithm design has been focused on sample complexity results. The *major* direction that has been left open in this field is to als...
Summary: The paper explores computational aspects of implementing ERM in data-driven algorithm design. The paper contributes an efficient algorithm to enumerate cells induced by a collection of hyperplanes. The paper then shows how to utilize this as a subprocedure to solve ERM problems for algorithm design, focusin...
Rebuttal 1: Rebuttal: We thank the reviewer for their time and useful comments. ``Generalizability to other data-driven algorithm design problems:`` Our approach is applicable to a fairly large number of problems, for example the various mechanism design problems in [1] are (d,t)-delineable which is a special case of ...
null
null
Rebuttal 1: Rebuttal: Our work is a first major step in making the growing field of data-driven algorithm design a practically feasible endeavor by addressing the frequently noted open question of computational complexity [1, 2]. Our proposed algorithms provide concrete and significant improvements in the running time ...
NeurIPS_2024_submissions_huggingface
2024
null
null
null
null
null
null
null
null
Excluding the Irrelevant: Focusing Reinforcement Learning through Continuous Action Masking
Accept (poster)
Summary: Learn a state specific mask for actions. Rather than simply a state specific interval, extend the action mask to different convex set representations. Then, derive a policy gradient for each of these masking schemes. The masking schemes are ray masks, hypercube transform mask and distributional masks. Applies...
Rebuttal 1: Rebuttal: We sincerely thank you for your valuable feedback on our manuscript, and for highlighting the broad applicability of our proposed action masking approach. We would like to address your concerns and questions in the following. # Weaknesses ## Action masking criteria Thank you for sharing your asse...
Summary: The paper addresses challenges in RL with continuous action spaces, typically defined as interval sets. These spaces often lead to inefficient exploration due to irrelevant actions. The authors propose three continuous action masking methods to focus learning on relevant actions based on current state, improvi...
Rebuttal 1: Rebuttal: We sincerely thank you for your valuable feedback on our manuscript. We are particularly grateful for the acknowledgment of our manuscript's originality and significance. In the following, we outline how we incorporated your feedback and clarify open questions. ## Improvements to the mathematica...
Summary: This paper discusses methods for action masking in continuous action spaces to improve convergence stability and sample efficiency in reinforcement learning. The paper introduces three methods for action masking with convex relevant action sets, proves their convergence, and experimentally verifies their effec...
Rebuttal 1: Rebuttal: Thank you for your insightful comments and for recognizing the novelty and theoretical grounding of our proposed methods. In the following, we respond to the weaknesses (W1 and W2) stated and questions (Q1 - Q6) raised. # Weaknesses ## W1. Distributional mask being off-policy by nature Thank you...
Summary: This paper proposes mathematical formulations for continuous action masking in reinforcement learning, to incorporate domain-knowledge in the form of state-specific sets of relevant actions. It introduces 3 functional forms to extract relevant actions from the original action space, and consider its effect on ...
Rebuttal 1: Rebuttal: We thank you for your thoughtful and critical comments which helped us to strengthen our arguments for the utility of action masking. We address your questions below. ## 1. General applicability of this paper's ideas Thank you for pointing this out. We agree that the two statements appear contrad...
Rebuttal 1: Rebuttal: Dear reviewers, Thank you for your thoughtful comments and questions. We address general points below. ## A1. Relevance of continuous action masking as part of the policy Action masking enforces task knowledge by focusing learning on relevant actions, thereby increasing sample efficiency and re...
NeurIPS_2024_submissions_huggingface
2024
null
null
null
null
null
null
null
null
MAmmoTH2: Scaling Instructions from the Web
Accept (poster)
Summary: The paper proposes a 3-stage pipeline to harvest ex-large-scale instruction data from the pre-training web corpus to enhance LLM reasoning, which involves 1) recalling relevant documents, 2) extracting instruction-response pairs using LLM, and 3) refining the extracted pairs by completing the intermediate reas...
Rebuttal 1: Rebuttal: We thank the reviewer for the positive feedback on our effective pipeline, our large-scale instruction dataset for reasoning tasks, and the many useful insights from our extensive experiments. > "Compatibility with existing continual-training pipelines and impact investigation" We appreciate this valuable sugg...
Summary: The paper introduces MAmmoTH2, a novel approach to instruction tuning for large language models (LLMs) by harvesting naturally existing instruction data from the web. The authors develop a three-step pipeline (recall, extract, refine) to collect 10 million high-quality instruction-response pairs without relyin...
Rebuttal 1: Rebuttal: We thank the reviewer for positive feedback on our cost-effective approach, significant performance gains, and comprehensive evaluation! > "Novelty of approach" Our method's novelty lies in **its unique pipeline to mine naturally existing instruction data at scale**, offering a new paradigm for ...
Summary: This paper proposes an approach to automatically harvest large-scale instruction data from pre-training corpora for reasoning tasks. The main steps include: (1) Recall: training a fastText model to recall relevant documents from the pre-training corpus, similar to DeepSeekMath; (2) Extract: using open-source m...
Rebuttal 1: Rebuttal: Thank you for your positive feedback on our work's clarity, novelty, and comprehensive experiments! > “Additional results for the effectiveness of WebInstruct” We've included early-stage results using **Qwen-1.5-1.8B** to demonstrate **the usefulness of our "extraction" and "refinement" steps**:...
Summary: This paper proposes a method to synthesize instruction tuning data at scale from the pretraining web corpus. The proposed method first recalls relevant documents from the corpus, and then extracts QA pairs, and finally refines the extracted QA pairs with an LLM. The synthesized instruction data proves to be he...
Rebuttal 1: Rebuttal: Thank you for your positive feedback on our work's novelty, comprehensive experiments, and clear writing! > “Lack a discussion and comparison with Humpback [1]” Thanks for the note! Humpback does not release its implementation, data, and models, which makes the replication and head-to-head comp...
null
NeurIPS_2024_submissions_huggingface
2024
null
null
null
null
null
null
null
null
Class Distribution Shifts in Zero-Shot Learning: Learning Robust Representations
Accept (poster)
Summary: This paper first investigates the effect of class distribution shifts on contrastive zero-shot learning by proposing and analysing a parametric model of class distribution shifts, leading to the insight that plain loss minimisation yields representations that perform poorly under such shifts. Based on...
Rebuttal 1: Rebuttal: 1. **Soft-AUC:** The use of soft-AUC instead of the standard AUC score is intended to make the complete loss (including the penalty) differentiable, thus enabling gradient-based optimization. As mentioned in lines 218-219, the soft-AUC pointwise converges to the standard AUC score as $\beta$ appro...
Summary: This paper proposes a robust representation learning method that can handle the shift between seen classes and unseen classes. Strengths: Good presentation and a sound method. Weaknesses: Lacks experiments on the most popular benchmark of zero-shot learning [1] and comparisons to some SOTAs, e.g. [2][3]. ...
Rebuttal 1: Rebuttal: We thank the reviewer for their review and the provided references. We did not use the datasets mentioned in your references since they either (i) do not have labeled attributes (e.g., the SUN dataset), (ii) the provided attributes correlate with the data-point label such that shifts in them do ...
Summary: Zero-shot learning classifiers face the challenge of distribution shifts, where the distribution of new classes differs significantly from that of the training data. In this paper, the authors introduce a novel algorithm to address this problem by creating robust representations through hierarchical sampling a...
Rebuttal 1: Rebuttal: We thank the reviewer for their time and comments. Below we address the raised weaknesses and questions: **Weaknesses:** 1. *Definition of parameters:* $\rho_{tr}$ and $\rho_{te}$ correspond to the proportion of type $a_1$ classes in the train and test sets, respectively, and are defined in ...
Summary: The paper treats the problem of learning models for zero-shot open-world classification settings (open-world meaning previously unseen classes might appear at test time) that are robust to distribution shifts. The proposed approach consists of two stages. In the first stage, synthetic environments $S_i$ are s...
Rebuttal 1: Rebuttal: We thank the reviewer for their review and questions. Below we address the raised weaknesses and questions. **Weaknesses:** 1. *Comparison with Creager et al. (2021):* Thank you for referring us to Creager et al. (2021). We found their work very interesting and will cite it in our related work...
Rebuttal 1: Rebuttal: We thank the reviewers for their efforts in reviewing our paper. We address the concerns raised by each reviewer separately. We attach here the rebuttal figures file, which contains the additional figure referenced in our individual responses. Pdf: /pdf/1072b5c61539a6beecd0f9c6340884b543567200.pdf
NeurIPS_2024_submissions_huggingface
2024
null
null
null
null
null
null
null
null
Learning Bregman Divergences with Application to Robustness
Accept (poster)
Summary: This paper proposes to use input-convex neural networks to learn Bregman divergences as a means to distinguish semantically meaningful image corruptions from random noise perturbations. The approach is linked to classifier robustness by showing how the associated mirror descent algorithm can be used to perform...
Rebuttal 1: Rebuttal: We thank the reviewer for the thorough review and for finding our work "novel and performative" and "of interest to the ML and robustness communities". The questions **Q2**, **Q3**, **Q4**, **Q5**, **Q6** and **Q10** are directly incorporated for the next revision. We answer the rest of the questi...
Summary: The authors present an approach to learn Bregman divergences that capture perceptual image similarities according to a given dataset. Relying on two input-convex neural networks, they present a procedure that mimics mirror descent over the learned Bregman divergence. The procedure is used to learn networks tha...
Rebuttal 1: Rebuttal: The goal is to generate Bregman divergences from learned base functions $\phi$, that are parametrized neural networks. As a result, the gradients of the base functions $\Psi$ and their inverses $\Psi^{-1}$ are also learned and thus approximated by definition. We see your point on calling it mirror...
Summary: The authors propose a new method to learn Bregman divergences from raw, high-dimensional data. This method measures similarity between images in pixel space, and considers two images as similar even if one image is corrupted by real-world corruptions, such as blur, changes in contrast, or weather conditions su...
Rebuttal 1: Rebuttal: We thank the reviewer for acknowledging the "great explanations" of the choices we made (especially the approximation of the conjugate $\nabla \overline \phi$), for finding the use of the Bregman divergences for metric learning "interesting direction", and for acknowledging that each step of the p...
null
null
Rebuttal 1: Rebuttal: We thank the reviewers for their comments. We responded to each concern in detail in our individual responses. These discussions and also corrections will be incorporated in the next revision. We have strengthened the results of our work by performing an evaluation on the BAPPS dataset (suggested ...
NeurIPS_2024_submissions_huggingface
2024
null
null
null
null
null
null
null
null