ResearchArcade
Collection
23 items • Updated
venue | review_openreview_id | replyto_openreview_id | writer | title | content | time
|---|---|---|---|---|---|---|
ICLR.cc/2025/Conference | Is5Qh2Gs5x | zzR1Uskhj0 | Reviewer_8LjP | Official Review by Reviewer_8LjP | {"Rating": 6, "Summary": "The submission studies contextual bandits with cross learning. Previously, the existing regret bound held in expectation. The submission refines the regret analysis so that the regret bound holds with high probability. The main contribution is to show how the weak dependency structure can be e... | 2024-11-04 01:05:41 |
ICLR.cc/2025/Conference | Hsumvt7DeH | zzR1Uskhj0 | Reviewer_Sstz | Official Review by Reviewer_Sstz | {"Rating": 6, "Summary": "The paper studied adversarial context bandits in a special setting where the losses of arm $a_i$ could be observed under all contexts when the algorithm plays arm $a_i$. The goal, like in classical adversarial bandit problems, is to minimize the regret compared to the loss of the best arm in h... | 2024-11-07 22:01:20 |
ICLR.cc/2025/Conference | smWIsNwjkv | zzR1Uskhj0 | Reviewer_uDyZ | Official Review by Reviewer_uDyZ | {"Rating": 8, "Summary": "The paper studies cross learning in contextual adversarial linear bandits where the learner observes the losses of all contexts in each round. Recent work in Schneider et al. proposed an algorithm with a regret upper bound only in expectation. The paper studies the same algorithm and proves th... | 2024-11-08 08:31:50 |
ICLR.cc/2025/Conference | UfYuXDF8OO | zzR1Uskhj0 | Reviewer_NZtQ | Official Review by Reviewer_NZtQ | {"Rating": 5, "Summary": "This paper addresses the challenge of achieving high-probability regret bounds in the adversarial contextual bandit framework, where the learner encounters varying contexts and must minimize cumulative loss over time. The focus is on \"cross-learning\" contextual bandits, where learners can ob... | 2024-11-08 09:01:46 |
ICLR.cc/2025/Conference | NLrlOlSugS | zzR1Uskhj0 | Reviewer_NaxM | Official Review by Reviewer_NaxM | {"Rating": 5, "Summary": "The paper proposes an algorithm that achieves high probability regret bound (which is stronger than the expected regret bound) for the cross-learning contextual bandits under unknown context distribution by developing refined martingale inequalities.", "Questions": "1. What is the intuition be... | 2024-11-12 13:14:27 |
ICLR.cc/2025/Conference | WQLpTsquBi | NLrlOlSugS | Authors | Response by Authors | {"Title": "Rebuttal by Authors", "Comment": "Dear reviewer NaxM:Thank you for your valuable feedback. We address the comments below in detail:---**Question 1: Lack of explanation of the algorithm**The reviewer suggested that we proposed an algorithm, but did not elaborate on its intuition, making it challenging to unde... | 2024-11-19 15:11:20 |
ICLR.cc/2025/Conference | 2LhrQBBkiK | UfYuXDF8OO | Authors | Response by Authors | {"Title": "Rebuttal by Authors", "Comment": "Dear reviewer NZtQ:We sincerely thank the reviewer for their valuable suggestions. Below, we address the reviewer\u2019s concerns in detail. ---**Question: Significance of our results** **Response:** We appreciate the reviewer\u2019s accurate understanding of our results ... | 2024-11-20 04:39:57 |
ICLR.cc/2025/Conference | wR6AlCqdjL | Is5Qh2Gs5x | Authors | Response by Authors | {"Title": "Rebuttal by Authors", "Comment": "Dear reviewer 8LjP:We sincerely thank the reviewer for their suggestions and positive feedback. Below, we provide detailed responses to the reviewer\u2019s comments.---**Question: What is the reason for assuming a finite concept class?** **Response:** We would like to poin... | 2024-11-20 06:04:06 |
ICLR.cc/2025/Conference | 6LIz7vD6vJ | Hsumvt7DeH | Authors | Response by Authors | {"Title": "Rebuttal by Authors", "Comment": "Dear Reviewer Sstz, Thank you for your positive, thorough, and thoughtful review. Your feedback has greatly helped us improve our paper. Below, we provide detailed responses to your comments: ---**Question: Should the definition of regret on page 3 be reversed?** **Respon... | 2024-11-20 08:01:55 |
ICLR.cc/2025/Conference | p1r3CeMT7X | smWIsNwjkv | Authors | Response by Authors | {"Title": "Rebuttal by Authors", "Comment": "Dear Reviewer uDyz,Thank you for your positive and encouraging feedback. We greatly appreciate you bringing to our attention an interesting piece of work that we had previously overlooked\u2014it has truly broadened our perspective.Unfortunately, the mentioned work cannot be... | 2024-11-20 11:31:23 |
ICLR.cc/2025/Conference | hrek1FOSJ7 | zzR1Uskhj0 | Authors | Response by Authors | {"Title": "The revised version of the paper", "Comment": "Dear Reviewers,We sincerely thank all the reviewers for their valuable suggestions. Based on your feedback, we have restructured the paper, and the revised version has been uploaded. In the new version, we have made the following changes:- **Section 1**: We adde... | 2024-11-21 07:14:28 |
ICLR.cc/2025/Conference | jAvCQROBD4 | 6LIz7vD6vJ | Reviewer_Sstz | Response by Reviewer | {"Title": "", "Comment": "Thanks for the response and the updated paper. I took a look at the revised paper, and I believe the presentation of the paper has been improved. The paper is now much more accessible for readers unfamiliar with the work of SZ [NeurIPS\u201923].Due to my comments on the combined novelty for sc... | 2024-11-21 21:26:42 |
ICLR.cc/2025/Conference | mAqJKkBBov | NLrlOlSugS | Reviewer_NaxM | Response by Reviewer | {"Title": "", "Comment": "Thank you for detailed response. I increase my score to 5 for the revised manuscript but unfortunately, I have concerns to raise score more due to theoretical novelties.While deriving high-probability bound is an interesting problem, the technical novelty to derive the bound is limited.The rev... | 2024-11-25 06:01:47 |
ICLR.cc/2025/Conference | ozYvTN3mlR | wR6AlCqdjL | Reviewer_8LjP | Response by Reviewer | {"Title": "", "Comment": "Thank you for the feedback. After going through the reviews and all feedback replies, I will keep my score for now. Thank you!"} | 2024-11-26 12:01:35 |
ICLR.cc/2025/Conference | rA3OQEiDA0 | p1r3CeMT7X | Reviewer_uDyZ | Response by Reviewer | {"Title": "", "Comment": "Thank you for your response."} | 2024-11-27 02:09:51 |
ICLR.cc/2025/Conference | AUQCPco8Ia | mAqJKkBBov | Authors | Response by Authors | {"Title": "Response by Authors", "Comment": "Dear Reviewer,Thank you very much for your time, attention, and for improving our score. However, we respectfully disagree with your comments regarding the theoretical novelty of our work. You stated that the novelty of our paper is \"limited to replacing the random variable... | 2024-11-28 08:12:43 |
ICLR.cc/2025/Conference | w9jjG2HQky | UfYuXDF8OO | Authors | Response by Authors | {"Title": "", "Comment": "Dear Reviewer NZtQ,We sincerely appreciate your time and attention. We hope our responses have addressed your concerns and that you might consider increasing your support for our paper. If you have any further questions, please don't hesitate to ask us!"} | 2024-12-02 02:24:33 |
ICLR.cc/2025/Conference | QJB25lfeno | zzR1Uskhj0 | Area_Chair_zibT | Meta Review of Submission11051 by Area_Chair_zibT | {"Meta Review": "This paper explores adversarial context bandits, specifically focusing on a scenario where the losses of each arm are observable under all contexts when the algorithm selects that arm. The objective is to minimize regret by comparing the algorithm's performance to the best arm in hindsight, similar to ... | 2024-12-21 13:32:41 |
ICLR.cc/2025/Conference | atAHt9GNWL | zzR1Uskhj0 | Program_Chairs | Paper Decision | {"Comment": "", "Decision": "Reject"} | 2025-01-22 05:34:59 |
ICLR.cc/2025/Conference | KgCysAmNli | zyGrziIVdE | Reviewer_nQGL | Official Review by Reviewer_nQGL | {"Rating": 3, "Summary": "The paper proposes a new intrinsic exploration objective for maximizing state entropy. The objective uses a discounted mixture of past state occupancy measures and encourages policies that maximize distance from the discounted mixture. As statistical distance, the KL divergence and Wasserstein... | 2024-11-01 01:01:19 |
ICLR.cc/2025/Conference | mQtVHAFXdK | zyGrziIVdE | Reviewer_YHsc | Official Review by Reviewer_YHsc | {"Rating": 5, "Summary": "The paper proposes an exploration paradigm of \"running away from the past\" (RAMP), which encourages the RL algorithm to generate trajectories in distribution different from the past. This is instantiated as an intrinsic exploration bonus that estimates the discrepancy between the current and... | 2024-11-03 18:05:04 |
ICLR.cc/2025/Conference | 1xdvlSq9RV | zyGrziIVdE | Reviewer_FV3w | Official Review by Reviewer_FV3w | {"Rating": 3, "Summary": "The authors present a new algorithm for learning policies where the marginal distribution of states in a trajectory of length $T$ has a high entropy. Their method consists in iteratively maximizing intrinsic reward bonuses that measure a distance (metric) between the distribution of states of ... | 2024-11-03 20:09:07 |
ICLR.cc/2025/Conference | Gkkip187zs | zyGrziIVdE | Reviewer_65e4 | Official Review by Reviewer_65e4 | {"Rating": 3, "Summary": "The paper proposes RAMP (Running away from the past), an RL-based method for performing state space exploration by approximately maximizing either the KL divergence or Wasserstein distance between the current policy's state occupancy measure and the discounted sum of the state occupancy measur... | 2024-11-04 22:35:59 |
ICLR.cc/2025/Conference | 3SH3N8EZUm | Gkkip187zs | Authors | Response by Authors | {"Title": "", "Comment": "# Answer to Reviewer 65e4We thank the reviewer for their thorough feedback.## State coverageWe acknowledge Reviewer 65e4's point that using Euclidean coordinates may not be the most accurate way to measure state space coverage. As the reviewer suggests, we quantify coverage by discretizing spe... | 2024-11-21 08:22:26 |
ICLR.cc/2025/Conference | Y47yIBZbvJ | 3SH3N8EZUm | Authors | Response by Authors | {"Title": "", "Comment": "## Proof in the paperTheorems 2 and 3 were introduced to confirm that maximizing the reward models defined in the paper indeed maximizes the KL Divergence and Wasserstein distance described in Section 2. As a result, the assumptions underlying these theorems involve bounding estimation errors ... | 2024-11-21 08:22:40 |
ICLR.cc/2025/Conference | jvHqM1MqbG | 1xdvlSq9RV | Authors | Response by Authors | {"Title": "", "Comment": "# Answer to Reviewer FV3wWe thank the reviewer for their thorough feedback.## Section 2### Assumption on Entropy of the Occupancy MeasureWe agree with Reviewer FV3w regarding the assumption that maximizing the expected entropy of the policies on the occupancy measure leads to high entropy of t... | 2024-11-21 08:25:07 |
ICLR.cc/2025/Conference | uXl59HfDoY | mQtVHAFXdK | Authors | Response by Authors | {"Title": "", "Comment": "# Answer to Reviewer YHscWe thank the reviewer for their thorough feedback. To ensure we fully understand each of the comments, we would appreciate it if the reviewer could confirm that our interpretations are correct.## Intrinsic Reward Alone* **Table 1**:Table 1 shows the final coverage reac... | 2024-11-21 08:25:54 |
ICLR.cc/2025/Conference | JwSUh2nrMF | KgCysAmNli | Authors | Response by Authors | {"Title": "", "Comment": "# Answer to Reviewer nQGLWe thank the reviewer for their thorough feedback.## Related Work on Epistemic UncertaintyWe appreciate Reviewer nQGL's suggestion to discuss the relationship between our work and methods that leverage epistemic uncertainty. We agree that this is an interesting aspect ... | 2024-11-21 08:26:19 |
ICLR.cc/2025/Conference | zhflrolxM7 | JwSUh2nrMF | Reviewer_nQGL | Response by Reviewer | {"Title": "Response to the author's rebuttal", "Comment": "I thank the authors for their response.**Epistemic Uncertainty**: While I appreciate the authors including epistemic uncertainty-based exploration methods in the discussion. I do not understand how they are considered out of scope for this paper. Particularly, ... | 2024-11-21 19:42:04 |
ICLR.cc/2025/Conference | hO6YX4PH33 | Y47yIBZbvJ | Reviewer_65e4 | Response by Reviewer | {"Title": "", "Comment": "Thanks to the authors for their response. First, I appreciate the sharing of the code for the submission, which partially addresses my concerns raised in Weakness 2. I am also grateful for the clarification regarding the differences between $r_W$ and $r_{KL}$, which provides some intuition tha... | 2024-11-25 21:51:03 |
ICLR.cc/2025/Conference | axDhB9sZck | jvHqM1MqbG | Reviewer_FV3w | Response by Reviewer | {"Title": "Follow-up", "Comment": "Thank you for responding to my review. I will clarify some elements.1. I don't believe that the KL divergence objective is irrelevant at all, sorry for the misinterpretation. I believe that the argument about the 'geometry ' is not completely correct. In my opinion, the KL objective i... | 2024-11-26 15:45:53 |
ICLR.cc/2025/Conference | ZqwXdV3JLP | uXl59HfDoY | Reviewer_YHsc | Response by Reviewer | {"Title": "reply", "Comment": "Thank you to the authors for the reply.Thanks for clarifying on some technical details that I missed from first reading the paper. It will indeed be helpful having access to a more detailed explanations of the hyper-parameters used for experiments.I will adjust my scores after the discuss... | 2024-11-27 14:53:11 |
ICLR.cc/2025/Conference | WHGGAhftAV | zyGrziIVdE | Area_Chair_dHcV | Meta Review of Submission7454 by Area_Chair_dHcV | {"Meta Review": "This paper proposes an exploration strategy by maximizing the Shannon entropy of the state occupancy measure. This is achieved by maximizing a measure of divergence between successive state occupancy measures. The authors argue for the efficacy of their method by evaluating on a set of mazes and roboti... | 2024-12-19 01:51:56 |
ICLR.cc/2025/Conference | YZIp2aNPWj | zyGrziIVdE | Program_Chairs | Paper Decision | {"Comment": "", "Decision": "Reject"} | 2025-01-22 05:30:39 |
ICLR.cc/2025/Conference | fxl5YNtkzp | zxqdVo9FjY | Reviewer_YeXr | Official Review by Reviewer_YeXr | {"Rating": 6, "Summary": "Motivated by the problem of training the readout of a two-layer network after on large gradient step on the first layer, the authors consider the problem of linear regression on a spiked data model. They provide a characterization of the test error, for two linear target functions, respectivel... | 2024-10-21 17:29:33 |
ICLR.cc/2025/Conference | PoAfoEMac0 | zxqdVo9FjY | Reviewer_xFzE | Official Review by Reviewer_xFzE | {"Rating": 5, "Summary": "This paper analyses the generalization error of linear regression with spiked covariance. Previous literature has been using asymptotic limit of the empirical spectral density to analyse the generalization error of linear regression. At the limit, the effect of the spike vanishes. However, it ... | 2024-10-24 13:54:32 |
ICLR.cc/2025/Conference | nnuWhTQ0sX | zxqdVo9FjY | Reviewer_LgJ3 | Official Review by Reviewer_LgJ3 | {"Rating": 5, "Summary": "The paper considers the linear least squares regression for data with simple spiked covariance. They quantify the empirical risk of test data.", "Questions": "1. Could you provide a reference for the statement, 'It has been shown that to understand the generalization...' on line 39?2. Is your ... | 2024-11-01 16:43:37 |
ICLR.cc/2025/Conference | 5x6iVSaqHT | zxqdVo9FjY | Reviewer_qVHv | Official Review by Reviewer_qVHv | {"Rating": 3, "Summary": "Motivated by a recent work studying two-layer neural networks (Moniri et al., 2023), the paper studies linear regression under a data model with a spiked covariance (Couillet & Liao, 2022). The spiked covariance consists of a spike component (signal) and a bulk component (noise). Thus, the aut... | 2024-11-01 18:13:30 |
ICLR.cc/2025/Conference | WqthOZZiOY | zxqdVo9FjY | Reviewer_weJN | Official Review by Reviewer_weJN | {"Rating": 5, "Summary": "The authors analyze the generalization properties of spiked covariate models. The theoretical analysis is motivated by recent works on two-layer networks trained with a single gradient step that showed how the feature matrix possesses different spikes associated with the learning rate scaling ... | 2024-11-07 14:46:20 |
ICLR.cc/2025/Conference | eztAgKUYJ0 | zxqdVo9FjY | Authors | Response by Authors | {"Title": "Introducing Dependency Between Bulk and Spike", "Comment": "A common criticism among reviewers was our abstraction of the dependency between bulk and spike components. Here we demonstrate how our proof framework extends to handle the dependent case from Moniri et al. 2023.Recall that Moniri et al.'s spike st... | 2024-11-14 01:45:48 |
ICLR.cc/2025/Conference | MQlWbuDYAI | WqthOZZiOY | Authors | Response by Authors | {"Title": "", "Comment": "We thank the reviewer for the feedback and comments. Key differences between our work and important prior research are that we (1) provide finite matrix correction terms and (2) offer simplified closed-form expressions.> Could the authors comment on the link between their results and (Ba et al... | 2024-11-14 01:45:55 |
ICLR.cc/2025/Conference | fSyf3ogiBb | nnuWhTQ0sX | Authors | Response by Authors | {"Title": "", "Comment": "We thank the reviewer for their comments. > They reference the work of Moniri et al., but this work is unrelated to neural networks or gradient descent; it addresses a purely linear regression problem for data with simple spiked covariances.We respectfully disagree. Our work is directly motiva... | 2024-11-14 01:46:02 |
ICLR.cc/2025/Conference | 0HcKHqAtaa | PoAfoEMac0 | Authors | Response by Authors | {"Title": "", "Comment": "We thank the reviewer for the feedback. > The paper is motivated by the spiked covariance from the one-step gradient feature learning in neural networks (Section 1). However, it did not show how the results can be applied to the feature learning scenario. I question the amount of contribution ... | 2024-11-14 01:46:05 |
ICLR.cc/2025/Conference | 5KSuT56TbI | fxl5YNtkzp | Authors | Response by Authors | {"Title": "", "Comment": "> The authors claim l.083 that Moniri et al. (2023) do not quantify the test error after one gradient step. To the best of my understanding, they do provide such a tight characterization (Theorem 4.5). Could the authors clarify their claim, and emphasize how their work is positioned with respe... | 2024-11-14 01:46:10 |
ICLR.cc/2025/Conference | v0dyh2j6Us | 5x6iVSaqHT | Authors | Response by Authors | {"Title": "Part 1", "Comment": "We thank the reviewer for their detailed feedback. Let us address the key points:> Limited contribution/novelty... Most of the results in this paper are trivial extensions of the results by Hastie et al. (2022) and Li & Sonthalia (2024)We respectfully disagree. Our contributions extend b... | 2024-11-14 01:46:26 |
ICLR.cc/2025/Conference | 4gNRRKAfIm | v0dyh2j6Us | Authors | Response by Authors | {"Title": "Part 2", "Comment": "> In footnote 3 (Line 266), the authors say \"... If we use these results, then similar to Eqautuons C.23 in Ba et al. 2022 and Equation (5) in Moniri et al. 2023, we would have that the value of Stieljtes transform is given to us as the unique solution to a set of consistency equations.... | 2024-11-14 01:46:37 |
ICLR.cc/2025/Conference | 4Xxtq8X6cH | zxqdVo9FjY | Authors | Response by Authors | {"Title": "", "Comment": "We thank the reviewers for their comments and help in improving the paper and hope that our responses with the new results have improved their opinions. If there are further questions that we can answer, we would be happy to continue the discussion."} | 2024-11-14 01:47:59 |
ICLR.cc/2025/Conference | vjcieT0Djg | 0HcKHqAtaa | Reviewer_xFzE | Response by Reviewer | {"Title": "", "Comment": "Thank you for your detailed reply. I will raise my score accordingly."} | 2024-11-20 10:43:20 |
ICLR.cc/2025/Conference | 1wvMEgfJkt | vjcieT0Djg | Authors | Response by Authors | {"Title": "", "Comment": "We thank the reviewer for the discussion and for increasing their score. If there are more aspects of the work that the reviewer would like to discuss, we would be delighted to continue the discussion."} | 2024-11-21 04:13:39 |
ICLR.cc/2025/Conference | ysTPYKPMOg | 4gNRRKAfIm | Reviewer_qVHv | Response by Reviewer | {"Title": "", "Comment": "Thank you for the detailed responses. I appreciate that the authors introduced a dependency between bulk and spike during the rebuttal to address discrepancies with the motivating work by Moniri et al. (2023). However, I still believe the paper requires significant revision, particularly in it... | 2024-11-23 12:15:58 |
ICLR.cc/2025/Conference | nNJiZ7vHzS | zxqdVo9FjY | Authors | Response by Authors | {"Title": "Interpolation between signal-plus-noise and signal-only models", "Comment": "One other implicit criticism seems to be the lack of connection between the two models we are studying. We would like to point out that there can be a way to interpolate the signal-plus-noise and signal-only models. Intuitively, we ... | 2024-11-24 05:49:00 |
ICLR.cc/2025/Conference | tyfVRPmVAX | zxqdVo9FjY | Authors | Response by Authors | {"Title": "Revised Version", "Comment": "We have posted the revised version, where we fixed the typos and ambiguities pointed out by the reviewers. We would like to thank the reviewers for their invaluable feedback and their time to help improve our work. If there is anything else we can answer, please let us know, and... | 2024-11-24 07:56:12 |
ICLR.cc/2025/Conference | KuKFxWuwMO | fSyf3ogiBb | Reviewer_LgJ3 | Response by Reviewer | {"Title": "", "Comment": "Thanks so much for all your careful reply and the new updated version draft! I'll raise my score a little bit!"} | 2024-11-24 20:00:43 |
ICLR.cc/2025/Conference | GwAF3v9v9u | KuKFxWuwMO | Authors | Response by Authors | {"Title": "", "Comment": "We thank the reviewer for increasing their score and valuable contributions"} | 2024-11-25 08:10:05 |
ICLR.cc/2025/Conference | gWY128H3mk | ysTPYKPMOg | Authors | Response by Authors | {"Title": "", "Comment": "We thank the reviewer for their valuable feedback. We hope that our new results help expand on contribution (2)."} | 2024-11-25 08:10:53 |
ICLR.cc/2025/Conference | ihrSCQhQgj | fxl5YNtkzp | Reviewer_YeXr | Response by Reviewer | {"Title": "Acknowledgement of rebuttal", "Comment": "I thank the authors for taking the time to provide all the detailed clarifications and answers to my interrogations. I think the paper is scientifically sound, and thus increase slightly my score, although I have not checked the proofs."} | 2024-11-25 18:21:46 |
ICLR.cc/2025/Conference | w70pJI4hmK | ihrSCQhQgj | Authors | Response by Authors | {"Title": "", "Comment": "We thank the reviewer for their valuable feedback and for increasing their score"} | 2024-11-26 00:53:54 |
ICLR.cc/2025/Conference | 2dFBTjIXgw | WqthOZZiOY | Reviewer_weJN | Response by Reviewer | {"Title": "", "Comment": "I warmly thank the authors for addressing my concerns. After reading carefully the other reviewers' comments, I believe that the paper still heavily relies on previously published works, and I would like therefore to keep my original score. On the writing side,I believe the authors should expa... | 2024-11-26 15:18:15 |
ICLR.cc/2025/Conference | t8lTNn6Pwj | 2dFBTjIXgw | Authors | Response by Authors | {"Title": "", "Comment": "We thank the reviewer for the feedback and help in improving the paper."} | 2024-11-28 02:56:19 |
ICLR.cc/2025/Conference | GzIdpBrTa1 | zxqdVo9FjY | Area_Chair_aazW | Meta Review of Submission13673 by Area_Chair_aazW | {"Meta Review": "Summary of Scientific Claims and Findings:The paper investigates the generalization properties of least squares regression with spiked covariance matrices, motivated by neural network training dynamics after one gradient step. The authors provide asymptotic analyses and derive corrections for finite-sa... | 2024-12-20 10:36:30 |
ICLR.cc/2025/Conference | V2Rtla8eUt | zxqdVo9FjY | Program_Chairs | Paper Decision | {"Comment": "", "Decision": "Reject"} | 2025-01-22 05:37:55 |
ICLR.cc/2025/Conference | ZIZqJZi9Au | zxg6601zoc | Reviewer_RZ9b | Official Review by Reviewer_RZ9b | {"Rating": 5, "Summary": "This paper adopted the parameter-efficient fine-tuning method Representation Tuning to the multimodal large language model domain. This paper used different representation editors for the vision encoder, LLM, and cross-modality projectors to optimize the visual representation, cross-modality r... | 2024-10-30 05:55:43 |
ICLR.cc/2025/Conference | C0hVt0e9Y5 | zxg6601zoc | Reviewer_PDvR | Official Review by Reviewer_PDvR | {"Rating": 6, "Summary": "This paper introduces a method for tuning large multi-modal models (LMM) in a efficient but effective way so that it can achieve similar performance to full fine-tuning, with an additional objective of having a controllability. The key idea of this paper is based on a prior technique that lear... | 2024-11-02 19:18:09 |
ICLR.cc/2025/Conference | A2b4OZEZV3 | zxg6601zoc | Reviewer_Yzen | Official Review by Reviewer_Yzen | {"Rating": 6, "Summary": "This paper introduces Multimodal Representation Tuning (MRT), a parameter-efficient fine-tuning method to enhance controllability and interpretability in multimodal large language models (LMMs). MRT addresses the challenge of adapting LMMs effectively with fewer parameters by leveraging token-... | 2024-11-04 02:32:43 |
ICLR.cc/2025/Conference | 4mcwYbkgth | zxg6601zoc | Reviewer_uVrb | Official Review by Reviewer_uVrb | {"Rating": 6, "Summary": "The paper proposes a novel Multimodal Representation Tuning which can editing LMM representation and provide control. The paper introduces a representation editor $\\phi$ based on linear representation hypothesis and interchange interventions, which can apply to different representations in LM... | 2024-11-05 05:33:18 |
ICLR.cc/2025/Conference | iS2px2xmKO | 4mcwYbkgth | Authors | Response by Authors | {"Title": "To Reviewer uVrb", "Comment": "Dear Reviewer uVrb,We sincerely appreciate your time and effort in reviewing our paper and providing valuable comments. We provide explanations to your questions point-by-point in the following.**Q1: Regarding the typos.****A1:** Thank you for pointing it out. We have revised a... | 2024-11-19 23:33:49 |
ICLR.cc/2025/Conference | uA3W381dSn | A2b4OZEZV3 | Authors | Response by Authors | {"Title": "To Reviewer Yzen (Part I)", "Comment": "Dear Reviewer Yzen,We sincerely appreciate the time and effort you've devoted to reviewing our work and providing helpful feedback!**Q1: Regarding the plan to automate the rank-tuning process.****A1:** Thank you for the great insights. We completely agree that manual r... | 2024-11-19 23:50:28 |
ICLR.cc/2025/Conference | GGnZTLB3zl | A2b4OZEZV3 | Authors | Response by Authors | {"Title": "To Reviewer Yzen (Part II)", "Comment": "**Q4: Changing the order of text instruction can break the controllability.****A4:** We would like to clarify that simply changing the order of text instruction can\u2019t break the controllability. MRT is able to accommodate variations in prompt formats by training c... | 2024-11-19 23:52:03 |
ICLR.cc/2025/Conference | wIbwx616M2 | A2b4OZEZV3 | Authors | Response by Authors | {"Title": "To Reviewer Yzen (Part III)", "Comment": "**Q8: Typos.****A8:** Thank you for pointing it out. We have fixed it accordingly in the revised version.[ref1] Zhang, R., et al. AutoLoRA: Automatically Tuning Matrix Ranks in Low-Rank Adaptation Based on Meta Learning. ArXiv, 2024.[ref2] Moe, C., et al. Bayesian-Lo... | 2024-11-19 23:55:20 |
ICLR.cc/2025/Conference | 3bPd5joLzI | C0hVt0e9Y5 | Authors | Response by Authors | {"Title": "To Reviewer PDvR (Part I)", "Comment": "Dear Reviewer PDvR,We sincerely thank reviewer PDvR for the valuable time and constructive feedback! We provide explanations to your questions point-by-point in the following.**Q1: Don\u2019t use RoI but train with all tokens.****A1:** Although editing more tokens (i.e... | 2024-11-19 23:57:23 |
ICLR.cc/2025/Conference | 8oidjT7Yws | C0hVt0e9Y5 | Authors | Response by Authors | {"Title": "To Reviewer PDvR (Part II)", "Comment": "**Q4: Optimization landscape.****A4:** Thank you for your positive assessment. We want to address your concerns from two perspectives. **How do we pick the optimization landscape?**It is not a cherry-picked landscape, but instead a general visualization. Figure 5 illu... | 2024-11-19 23:58:57 |
ICLR.cc/2025/Conference | mUe8p5h0BE | ZIZqJZi9Au | Authors | Response by Authors | {"Title": "To reviewer RZ9b (Part I)", "Comment": "Dear Reviewer RZ9b,We sincerely thank you for the valuable time and constructive feedback, which are crucial for improving our work. We provide explanations to each question as follows.**Q1: Regarding MRT\u2019s technical contributions.****A1:** While representation tu... | 2024-11-20 00:05:32 |
ICLR.cc/2025/Conference | SskQ2cv2pk | ZIZqJZi9Au | Authors | Response by Authors | {"Title": "To reviewer RZ9b (Part II)", "Comment": "**Q4: Memory & Time Efficiency in training.****A4:** Following the suggestion, we have included the efficiency in the training stage w.r.t. trainable parameters, memory usage, and training time in the table below. It can be seen that MRT enjoys a competitive training ... | 2024-11-20 00:09:22 |
ICLR.cc/2025/Conference | TpExpE7hGG | ZIZqJZi9Au | Authors | Response by Authors | {"Title": "To reviewer RZ9b (Part III)", "Comment": "**Q8.1: The reason for applying prefix and suffix editors on textual tokens.****A8.1:** We would like to explain the reason for choosing prefix and suffix tokens. Prefix tokens are vital for conditioning the model to specific tasks or behaviors [ref10, ref12], while ... | 2024-11-20 00:13:44 |
ICLR.cc/2025/Conference | syI2Phhkeo | zxg6601zoc | Authors | Response by Authors | {"Title": "Summary of Revisions", "Comment": "To all reviewers:Thank you for your thorough review and insightful comments. We have revised our paper according to the suggestions. The major changes are summarized as follows:* We have performed more ablation experiments to explore applying MRT to a single modality at a t... | 2024-11-20 00:30:29 |
ICLR.cc/2025/Conference | ZxqFDasycC | zxg6601zoc | Authors | Response by Authors | {"Title": "Looking forward to the discussion", "Comment": "Dear Reviewers,We sincerely appreciate the time and effort you've devoted to reviewing our work. We understand that your schedule may be quite busy, and we are truly grateful for your valuable feedback. As we are presently in the discussion phase, we would grea... | 2024-11-22 06:22:30 |
ICLR.cc/2025/Conference | tGTEHkarQ4 | wIbwx616M2 | Reviewer_Yzen | Response by Reviewer | {"Title": "", "Comment": "Thanks for the detailed responses and additional results. Most of my concerns are well addressed. I think the mentioned contribution in A2 is intuitive but not novel enough to increase the score. I hope to see some generalization results leveraging LLM regarding Q4. I will keep the original sc... | 2024-11-22 19:45:03 |
ICLR.cc/2025/Conference | hx6I2itEp7 | tGTEHkarQ4 | Authors | Response by Authors | {"Title": "Thank you for the prompt response", "Comment": "Thank you for your valuable feedback. To further address your comment on generalization, we have leveraged **a lightweight rephraser** based on T5-small (i.e., 60M parameters), and customized a dataset for fine-tuning the rephraser, containing **6 different var... | 2024-11-23 01:58:34 |
ICLR.cc/2025/Conference | jJ0XvUFotn | 8oidjT7Yws | Reviewer_PDvR | Response by Reviewer | {"Title": "", "Comment": "Thank you for your response. I read the response other review. I agree with Reviewer RZ9b in that the controllability experimental setup is a bit contrived, and it could be nice to think of how to design more natural setups. But I still think this paper deserves the score of 6 so I maintain my... | 2024-11-25 03:17:46 |
ICLR.cc/2025/Conference | 3IoD9MV4fM | jJ0XvUFotn | Authors | Response by Authors | {"Title": "Thank you for your response", "Comment": "We sincerely thank the reviewer for their prompt response and thoughtful feedback. To address controllability, we have included additional experiments in Appendix S6, covering the _robustness of token-level control_, _extensions to other multimodal tasks_, and _gener... | 2024-11-25 03:38:08 |
ICLR.cc/2025/Conference | h8PgljGBak | ZIZqJZi9Au | Authors | Response by Authors | {"Title": "Looking forward to the discussion", "Comment": "Dear Reviewer RZ9b,We deeply appreciate the time and effort you\u2019ve taken to review our work, especially given your busy schedule. As the authors-reviewer discussion phase draws to a close, we would be grateful for the opportunity to engage in dialogue with... | 2024-11-25 19:24:31 |
ICLR.cc/2025/Conference | oFEIHo3fcf | h8PgljGBak | Reviewer_RZ9b | Response by Reviewer | {"Title": "Response to rebuttal", "Comment": "Thanks for the authors' detailed responses. Part of my concerns are addressed. I still have concerns about the generalizability of the proposed method as the authors suggest in Q3.1 that different training data would lead to significant performance differences for some meth... | 2024-11-26 01:39:10 |
ICLR.cc/2025/Conference | vwsxcMRayn | ZIZqJZi9Au | Authors | Response by Authors | {"Title": "Thank you for the prompt response", "Comment": "Dear Reviewer RZ9b,We sincerely appreciate your engagement in the discussion and your valuable feedback, which are crucial in enhancing the quality of our work. We are pleased that our response addresses most of your concerns and would like to take this opportu... | 2024-11-26 23:26:47 |
ICLR.cc/2025/Conference | EThjHK8HGs | iS2px2xmKO | Authors | Response by Authors | {"Title": "Looking forward to the discussion", "Comment": "Dear Reviewer uVrb,We sincerely appreciate your dedicated time and effort in reviewing our submission. We understand how demanding your schedule might be and are genuinely grateful for your valuable insights. As the discussion phase nears its conclusion, we kin... | 2024-11-29 03:50:15 |
ICLR.cc/2025/Conference | mIjca6M5Ug | ZIZqJZi9Au | Authors | Response by Authors | {"Title": "", "Comment": "Dear Reviewer RZ9b,We've updated the revised paper based on your suggestions by adding the full fine-tuning and LoRA results to those additional experiments.As the end of the discussion phase is approaching, we would be truly grateful if you could inform us whether our recent response has adeq... | 2024-12-02 02:05:18 |
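Rows in the table above follow a simple shape: seven pipe-delimited fields, with the `content` field holding a JSON object. A minimal sketch of parsing one such row in Python, assuming the fields are separated by `" | "` and that the JSON content itself contains no such separator (truncated cells ending in "..." would not parse):

```python
import json

# One complete (non-truncated) row in the seven-field format used above:
# venue | review_openreview_id | replyto_openreview_id | writer | title | content | time
row = ('ICLR.cc/2025/Conference | atAHt9GNWL | zzR1Uskhj0 | Program_Chairs'
       ' | Paper Decision | {"Comment": "", "Decision": "Reject"}'
       ' | 2025-01-22 05:34:59')

# Split into the seven named fields; strip surrounding whitespace.
venue, review_id, replyto_id, writer, title, content, time = (
    field.strip() for field in row.split(" | "))

# The content field is JSON; json.loads would raise on truncated cells.
record = json.loads(content)
print(record["Decision"])  # → Reject
```

The `replyto_openreview_id` field lets the threads be reconstructed: a reply's `replyto_openreview_id` matches the `review_openreview_id` of its parent, and top-level reviews point at the submission id.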