Breaking Physical and Linguistic Borders: Multilingual Federated Prompt Tuning for Low-Resource Languages
zzqn5G9fjn
ICLR-2024
UQfBBoocAY
Although the paper is generally well-structured, the title mentions `low-resource` languages. However, the two tasks leveraged are primarily on high-resource languages, rather than low-resourced language. I would suggest to the authors to include more tasks - there are many low-resource language datasets (for instance ...
Experiments
Thank you for recommending these excellent datasets for our evaluation. We agree that diversifying our dataset to include African and Indic languages will significantly strengthen our paper's scope and alignment with its title. To address this, we have initiated experiments with MasakhaNEWS and plan to conduct further...
CRP
Breaking Physical and Linguistic Borders: Multilingual Federated Prompt Tuning for Low-Resource Languages
zzqn5G9fjn
ICLR-2024
YhvDQa0GKX
The proposed is a very trivial combination of federated learning and prompt tuning, which both are established methodology in their own realm. There is no novelty, such as modification or adjustment to the method that may have give a better results. In other words, people with an objective to do federated learning for ...
Novelty
We appreciate the opportunity to address the concerns raised by the reviewer and would like to defend our proposal, emphasizing its novelty and significance. In summary, we would like to clarify that our paper introduces federated prompt tuning as a solution to help address the **linguistic and geographic boundaries** ...
DWC
Breaking Physical and Linguistic Borders: Multilingual Federated Prompt Tuning for Low-Resource Languages
zzqn5G9fjn
ICLR-2024
YhvDQa0GKX
Though it may have implicitly inferred by the concept of FL, the paper did not mention why and how federated learning helps with privacy and in which case one should use FL for their application.
Writing
We thank the reviewer for the insightful comments and concerns regarding privacy! We appreciate the opportunity to clarify this aspect of our work. It's important to note that multilingual finetuning here is not an approach for preserving privacy but rather a problem we aim to solve. 1. Our approach inherently support...
CRP
Breaking Physical and Linguistic Borders: Multilingual Federated Prompt Tuning for Low-Resource Languages
zzqn5G9fjn
ICLR-2024
YhvDQa0GKX
There are better parameter-efficient finetuning methods, such as LORA/QLora, that the authors should conduct experiments on and do comparision with prompt tuning.
Experiments
Thank you for your valuable suggestions! Following the reviewer's constructive feedback, we have implemented experiments with LoRA (r=8, lora_alpha=16, lora_dropout=0.1) and summarized the results in the table below. Table 4 in our revised paper presents the preliminary results of experiments on the NC task. Bold scor...
CRP
Breaking Physical and Linguistic Borders: Multilingual Federated Prompt Tuning for Low-Resource Languages
zzqn5G9fjn
ICLR-2024
YhvDQa0GKX
The results show prompt tuning are much worse than full-federated tuning, thus casting doubt if the cost-saving is worth it.
Evaluation
Thank you for your valuable suggestions! Following the reviewer's constructive feedback, we have implemented experiments with LoRA (r=8, lora_alpha=16, lora_dropout=0.1) and summarized the results in the table below. Table 4 in our revised paper presents the preliminary results of experiments on the NC task. Bold scor...
CRP
Breaking Physical and Linguistic Borders: Multilingual Federated Prompt Tuning for Low-Resource Languages
zzqn5G9fjn
ICLR-2024
YhvDQa0GKX
Other generative and knowledge-based tasks, such as QA, translations and summarizations should be performed.
Experiments
We appreciate the feedback. Our current paradigm is general-purpose and can be easily adapted to other generative and knowledge-based tasks. In response, we have expanded our evaluations to encompass a broader range of scenarios, addressing the concern of limited task selection. This rebuttal is part of a series, and w...
SRP
Breaking Physical and Linguistic Borders: Multilingual Federated Prompt Tuning for Low-Resource Languages
zzqn5G9fjn
ICLR-2024
YhvDQa0GKX
Citation format is incorrect; \citep{} should be used to produce something like (Abc, et al., 2023) and not Abc, et al., 2023 everywhere.
Presentation
Thanks for pointing it out. We have corrected the citations for all references in our revised paper.
CRP
Breaking Physical and Linguistic Borders: Multilingual Federated Prompt Tuning for Low-Resource Languages
zzqn5G9fjn
ICLR-2024
YhvDQa0GKX
Many grammatical errors exist, such as in the phrase "Throughout the fine-tuning...".
Writing
We appreciate your feedback on the grammatical errors. We have revised the grammar to avoid any confusion in our updated version.
CRP
Breaking Physical and Linguistic Borders: Multilingual Federated Prompt Tuning for Low-Resource Languages
zzqn5G9fjn
ICLR-2024
yJ6uMWYzMY
poor presentation: the citations are not separable enough from the main text, e.g., without any parenthesis, rendering the submission unreadable. Against the tradition and ease of reading, abbreviations are not defined in advance, e.g., NLI, PFL, PLM.
Presentation
We apologize for any confusion caused by the current citation format. We have corrected the citations for all references in our revised paper. We realize the oversight in not defining certain abbreviations, such as NLI (Natural Language Inference), PFL (Prompt Federated Learning), and PLM (Pre-trained Language Models...
CRP
Breaking Physical and Linguistic Borders: Multilingual Federated Prompt Tuning for Low-Resource Languages
zzqn5G9fjn
ICLR-2024
yJ6uMWYzMY
claims unverifiable: no code release.
Reproducibility
We provide an anonymized version of the code repository, accessible through this link: https://anonymous.4open.science/r/Breaking_Physical_and_Linguistic_Borders-F1C5.
CRP
Breaking Physical and Linguistic Borders: Multilingual Federated Prompt Tuning for Low-Resource Languages
zzqn5G9fjn
ICLR-2024
yJ6uMWYzMY
conflating existing metrics with innovation: language distance is not a new concept.
Novelty
Thank you for your insightful comments on our paper. We acknowledge and agree with your review that the concept of language distance is not novel, having been explored in various contexts previously. However, we emphasize that our work introduces this concept within a unique and specific scenario: multilingual federa...
CRP
Breaking Physical and Linguistic Borders: Multilingual Federated Prompt Tuning for Low-Resource Languages
zzqn5G9fjn
ICLR-2024
yJ6uMWYzMY
conceptual weakness: the contrived baseline was bound to give the proposed approach an edge due to lack of federated learning.
Experiments
We appreciate the opportunity to clarify this aspect of our work. In previous cases, data transmission was always one-directional. Existing approaches focus on solving this locally, for example, through local transfer with monolingual data. In our paper, we approach it from a collaborative perspective, which we call ...
CRP
Breaking Physical and Linguistic Borders: Multilingual Federated Prompt Tuning for Low-Resource Languages
zzqn5G9fjn
ICLR-2024
yJ6uMWYzMY
conceptual weakness: what the paper refers to as prompts are just classifier model input, which are different from decoders-style LLM prompts as commonly acknowledged.
Theory
We would like to clarify that the prompts in our paper are NOT the same as classifier model input, and they are suited for all decoders-style LLMs. To further clarify the prompt tuning procedure and the prompt construction, we've added more details in Section 3 and Appendix B, and C in the revised version. Instead of ...
CRP
Breaking Physical and Linguistic Borders: Multilingual Federated Prompt Tuning for Low-Resource Languages
zzqn5G9fjn
ICLR-2024
yJ6uMWYzMY
conceptual weakness: the approach has absolutely nothing to do with privacy which the abstract and the main body consistently bolsters.
Theory
We appreciate the opportunity to clarify this aspect of our work. - Our approach inherently supports data privacy. Specifically, it complies with international data privacy regulations by minimizing the need for cross-border data transmission. This not only ensures legal compliance but also facilitates collaboration a...
DWC
Breaking Physical and Linguistic Borders: Multilingual Federated Prompt Tuning for Low-Resource Languages
zzqn5G9fjn
ICLR-2024
yJ6uMWYzMY
evaluation weakness: only two tasks (new classification and XNLI) was used in evaluation.
Evaluation
we would like to highlight additional evaluation results that we have been conducting to substantiate our claims further. These additional evaluations encompass a broader range of tasks and scenarios, which we believe address the concern of limited task selection. This response is the first in a series of comprehensive...
SRP
Breaking Physical and Linguistic Borders: Multilingual Federated Prompt Tuning for Low-Resource Languages
zzqn5G9fjn
ICLR-2024
yJ6uMWYzMY
In section 5.4.1, regarding the statement: "In both the NC and XNLI tasks, despite the total number of parameters exceeding 278 million, the trainable parameters are only around 1.2 million, accounting for less than 0.5% of the total." β€” Could the authors clarify which part of the model is being fine-tuned?
Reproducibility
Yes, we clarify that we only update the prompt encoders. This includes the parameters of the local prompt encoders $h_k$ on Client k, and the parameters of the global encoder $h_g$ on the server in the revised paper (referred to as $h_{global}$ in the original paper). During this process, we keep the pre-trained langua...
CRP
Breaking Physical and Linguistic Borders: Multilingual Federated Prompt Tuning for Low-Resource Languages
zzqn5G9fjn
ICLR-2024
DwcYUFIxnh
In terms of novelty, the proposed idea is not new, and it is only a further investigation of the multilingual setting.
Novelty
## W1: > In terms of novelty, the proposed idea is not new, and it is only a further investigation of the multilingual setting. We would like to kindly defend our proposal. To further clarify the significance and originality of our work, we've added our motivation with Multilingual NLP Background in Appendix A and t...
DWC
Breaking Physical and Linguistic Borders: Multilingual Federated Prompt Tuning for Low-Resource Languages
zzqn5G9fjn
ICLR-2024
DwcYUFIxnh
Lack of clarity. The paper does not provide enough information about how the prompts are constructed or look like and hyperparameters for all settings. I suggest adding the information to the paper or appendix.
Reproducibility
## Q2 & W2: > Lack of clarity. The paper does not provide enough information about how the prompts are constructed or look like and hyperparameters for all settings. I suggest adding the information to the paper or appendix. > How did you tune the training and parameter averaging? To further clarify the prompt tunin...
CRP
Breaking Physical and Linguistic Borders: Multilingual Federated Prompt Tuning for Low-Resource Languages
zzqn5G9fjn
ICLR-2024
DwcYUFIxnh
Do you have any findings on why multilingual centralized learning is far worse than federated learning in Table 2?
Evaluation
## Q1 > Do you have any findings on why multilingual centralized learning is far worse than federated learning in Table 2? Yes. This phenomenon has also been observed in previous works on Federated Learning [1]. Here are some possible reasons (Section 5.1, Page 7): Firstly, **Federated Learning** has a **weight aver...
DWC
Breaking Physical and Linguistic Borders: Multilingual Federated Prompt Tuning for Low-Resource Languages
zzqn5G9fjn
ICLR-2024
DwcYUFIxnh
Figure number is missing on Page 2 β€” "As depicted in Figure , "
Presentation
## Suggestions: > Figure number is missing on Page 2 > "As depicted in Figure , " > Missing Figure/Table > "This translates to over 99% reduction in the communication overhead shown in 3" > Typo > "Finetuning accuracy across different lanugages on the NC task." > We appreciate your detailed suggestions on the typos ...
CRP
Breaking Physical and Linguistic Borders: Multilingual Federated Prompt Tuning for Low-Resource Languages
zzqn5G9fjn
ICLR-2024
DwcYUFIxnh
Missing Figure/Table β€” "This translates to over 99% reduction in the communication overhead shown in 3"
Presentation
## Suggestions: > Figure number is missing on Page 2 > "As depicted in Figure , " > Missing Figure/Table > "This translates to over 99% reduction in the communication overhead shown in 3" > Typo > "Finetuning accuracy across different lanugages on the NC task." > We appreciate your detailed suggestions on the typos ...
CRP
Breaking Physical and Linguistic Borders: Multilingual Federated Prompt Tuning for Low-Resource Languages
zzqn5G9fjn
ICLR-2024
DwcYUFIxnh
Typo β€” "Finetuning accuracy across different lanugages on the NC task."
Writing
## Suggestions: > Figure number is missing on Page 2 > "As depicted in Figure , " > Missing Figure/Table > "This translates to over 99% reduction in the communication overhead shown in 3" > Typo > "Finetuning accuracy across different lanugages on the NC task." > We appreciate your detailed suggestions on the typos ...
CRP
On the generalization capacity of neural networks during generic multimodal reasoning
zyBJodMrn5
ICLR-2024
c4bD4kpXHW
While the paper’s studies show that certain designs (e.g. cross-attention) seem to confer multi-modal generalization, there are still some key questions that can be more thoroughly studied to uncover the reasons why this is the case.
Experiments
In response to the Reviewer’s concerns (and the related comment by Reviewer a4Su), we have now performed additional experiments that focus on how model scale and complexity can influence multimodal generalization. While the original manuscript was focused on understanding how a class of base neural architectures would ...
CRP
On the generalization capacity of neural networks during generic multimodal reasoning
zyBJodMrn5
ICLR-2024
c4bD4kpXHW
Important discussions such as why the (cross-attention) transformers might fail at productive generalization is lacking.
Evaluation
This is a challenging question to tackle. Our ongoing hypothesis is that productive generalization is a fundamentally distinct type of generalization relative to systematic compositional generalization. We have now included a brief discussion in the Results section of Productive Compositional Generalization (Section 3....
CRP
On the generalization capacity of neural networks during generic multimodal reasoning
zyBJodMrn5
ICLR-2024
c4bD4kpXHW
What is the key architectural difference between dual stream transformer and transformers with cross attn that can explain their generalization performance? Is it only the lack of a cross attention between the different modalities?
Theory
The short answer is yes. When comparing the Dual Stream Transformer with the models with cross attention, indeed, the only distinction is the lack of an attention mechanism to explicitly integrate outputs from the two input streams. The longer answer, which became clear after performing the new experiments (that sca...
CRP
On the generalization capacity of neural networks during generic multimodal reasoning
zyBJodMrn5
ICLR-2024
c4bD4kpXHW
Possible typo: β€œFinally, we included a Perceiver-like model (Jaegle et al., 2021), an architecture designed to generically process multimodal inputs (Fig. 2f).”: (Fig. 2f) > (Fig. 2e).
Writing
We thank the Reviewer for spotting this error. The manuscript has now been updated.
CRP
On the generalization capacity of neural networks during generic multimodal reasoning
zyBJodMrn5
ICLR-2024
WaCqOkvd4I
I'm concerned about the strength of the baselines used in the paper (see my related questions below). While the primary contribution of the paper is the dataset, it is also important to establish strong baselines for this new dataset and to ensure that the conclusions from the empirical results are valid. The appendix ...
Experiments
We thank the Reviewer for their thorough and thoughtful feedback. Below, we have worked to address some of the weaknesses the Reviewer raised, particularly the strength of the baselines. We have included new experiments to directly address these concerns, taking into consideration this Reviewer’s suggestion. Below, we ...
CRP
On the generalization capacity of neural networks during generic multimodal reasoning
zyBJodMrn5
ICLR-2024
WaCqOkvd4I
The qualitative difference between gCOG and datasets from prior work such as gSCAN was not very clearly described. For example, one of the key claims seemed to be gCOG "employs generic feature sets that are not tied to any specific modality". However, it seems like it is a useful property for a multimodal dataset to ha...
Novelty
The second weakness was the lack of clear distinction between our presented task, gCOG, and prior tasks such as the gSCAN task. We thank the Reviewer for this comment, and have now emphasized the primary distinctions between the two tasks. In brief, the two tasks require different neural network architectures. In terms...
CRP
On the generalization capacity of neural networks during generic multimodal reasoning
zyBJodMrn5
ICLR-2024
WaCqOkvd4I
Appendix A.2.1 - Maybe reference Tables 8 and 9 where you discuss different positional embeddings.
Presentation
*Position embeddings - Since you are representing 10x10 grids as 1D sequences, 1D relative positions may not capture this structure well. On the other hand, absolute position embeddings seem potentially problematic in the case of the SSTrfmr model, since they will not be consistently assigned to the same grid position ...
DWC
On the generalization capacity of neural networks during generic multimodal reasoning
zyBJodMrn5
ICLR-2024
WaCqOkvd4I
Consider discussing [3] in related work. [3] demonstrated the importance of cross-modal attention for gSCAN, and similarly studied the relative difficulty of various aspects of generalization, including distractors.
Novelty
We have additionally included discussion of Qiu et al., 2021 in the results section reporting that distractor generalization is improved using cross-modal attention: β€œWhile all models performed IID generalization well, only models that contained cross-attention mechanisms (CrossAttn and Perceiver models) exhibited exce...
CRP
On the generalization capacity of neural networks during generic multimodal reasoning
zyBJodMrn5
ICLR-2024
umBGrmnYm6
**Pre-trained models** The paper focuses on models trained from scratch rather than pre-trained. This could be a strength and a weakness. On the one hand, it allows for isolating the contribution of the architectural choices from other factors of optimization, and training data. On the other hand, it has been observed ...
Experiments
Weakness 1: Lack of evaluation using pre-trained models. We agree with the Reviewer that there is utility in assessing how a pretrained model performs on a new task to the literature. However, when trying to address this question regarding our specific task, we realized that most models would not be able to perform thi...
CRP
On the generalization capacity of neural networks during generic multimodal reasoning
zyBJodMrn5
ICLR-2024
umBGrmnYm6
**COG task**: It will be useful to discuss the COG task (rather than just mentioning it) before describing the new gCOG one, so that it will be clearer to the reader what are new contributions of the new benchmark compared to COG and the degree of their importance. In the overview diagram I would also recommend showing...
Presentation
*COG task: It will be useful to discuss the COG task (rather than just mentioning it) before describing the new gCOG one, so that it will be clearer to the reader what are new contributions of the new benchmark compared to COG and the degree of their importance. In the overview diagram I would also recommend showing a ...
CRP
On the generalization capacity of neural networks during generic multimodal reasoning
zyBJodMrn5
ICLR-2024
umBGrmnYm6
**Figures**: Would be good to increase the size of the plots in Figure 3b. It will also be good the increase the distance and visual separation between the sub-figures in each figure throughout the paper.
Presentation
*Figures: Would be good to increase the size of the plots in Figure 3b. It will also be good the increase the distance and visual separation between the sub-figures in each figure throughout the paper.* We have now increased the size of the plots in Figure 3b, splitting panel 3b in to 3b and 3c. We have also worked t...
CRP
Learning Mean Field Games on Sparse Graphs: A Hybrid Graphex Approach
zwU9scoU4A
ICLR-2024
KFdqHGoMzN
Even though the authors explained in the paper, I didn't like the fact that the proposed GXMFGs have no baseline competitors to compare against. While I agree that one could argue on the contrary that the ability to work with sparse graphs is precisely the unique advantage of GXMGFs, I think that the authors should at ...
Experiments
Thank you for bringing up the important topic of an empirical comparison with existing approaches. As you mention, the ability of GXMFGs to work with sparse realistic graphs can be seen as a major conceptual advantage over existing approaches such as GMFGs and LPGMFGs. We agree that there should be an empirical compari...
CRP
Learning Mean Field Games on Sparse Graphs: A Hybrid Graphex Approach
zwU9scoU4A
ICLR-2024
KFdqHGoMzN
In Figure 3a, it looks like the curves are diverging rather than converging as k increases? Are the curves coloured correctly?
Presentation
The colors in Figure 3a are correct and all curves converge as the graph size $\nu$ increases. For higher $k$ the curves tend to converge slower than for low $k$ which might seem counter-intuitive. The reason for the different convergence speed is that if we sample finite graphs from the power law graphex, with high pr...
DWC
Learning Mean Field Games on Sparse Graphs: A Hybrid Graphex Approach
zwU9scoU4A
ICLR-2024
Q5LsF6DBEl
Providing an intuitive explanation for assumptions 1(b) and 1(c) would greatly enhance the paper's overall readability and accessibility.
Writing
Thank you for the valuable suggestion! To increase the accessibility of Assumptions 1 b) and c), we have added a more detailed explanation for the respective assumptions in the updated paper draft. The intuitive interpretation of Assumption 1 b) is that it describes the behavior of $\xi_W$ at infinity. More specificall...
CRP
Learning Mean Field Games on Sparse Graphs: A Hybrid Graphex Approach
zwU9scoU4A
ICLR-2024
Q5LsF6DBEl
While the paper assumes finite state and action spaces, it may be beneficial to explore whether the proposed approach can be extended to scenarios with infinite action spaces.
Theory
In our opinion, it is worthwhile to extend the GXMFG approach to continuous state and action spaces (and also continuous time) to increase the generality of the learning method. Since the extension to a continuous setting will require different and adapted mathematical and algorithmic approaches, it is outside the scop...
DRF
Learning Mean Field Games on Sparse Graphs: A Hybrid Graphex Approach
zwU9scoU4A
ICLR-2024
Q5LsF6DBEl
Including the code for the simulations would enhance reproducibility.
Reproducibility
We have uploaded the code and will add a link in the final, deanonymized version of the paper.
CRP
Learning Mean Field Games on Sparse Graphs: A Hybrid Graphex Approach
zwU9scoU4A
ICLR-2024
gBU7uwifxA
The model is quite abstract at some places. For the theoretical results, they are mostly about the analysis of the game and I am not sure how relevant they are for this conference (although they are certainly interesting for a certain community). It might have been more interesting to focus more on the learning algorit...
Theory
A: The analysis of the game provides the key insights into complex agent systems that are necessary to eventually provide the equilibrium learning algorithm. Only through a thorough understanding of the core periphery structure and its implications it is possible to state a principled equilibrium learning approach. The...
DWC
Learning Mean Field Games on Sparse Graphs: A Hybrid Graphex Approach
zwU9scoU4A
ICLR-2024
gBU7uwifxA
Assumption 2 as used for instance in Lemma 1 does not seem to make much sense (unless I missed something): What is \( \boldsymbol{\pi} \)? We do not know in advance the equilibrium policy and even if we did, we would still need to define the set of admissible deviations for the Nash equilibrium. Could you please clarif...
Theory
A: We completely agree with the reviewer: the policy $\boldsymbol \pi$ should not be part of Assumption 2 and (of course) we do not assume the equilibrium policy to be known in advance; the set of admissible deviation policies from the Nash equilibrium is not restricted. Instead, we have added the Lipschitz condition (...
CRP
Learning Mean Field Games on Sparse Graphs: A Hybrid Graphex Approach
zwU9scoU4A
ICLR-2024
gBU7uwifxA
Algorithm 1, line 14: Could you please explain or recall what is \( Q^{k, \mu^{\tau_{\mathrm{max}}}} \)?
Writing
A: In Algorithm 1, $Q^{k, \mu^{\tau_{\max}}}$ is defined similar to $Q_{i,t}^{\pi, \mu}$, except that we substitute the reward function $r$ by $r'_k$ and use the transition kernel $P'_k$ instead of $P$. We have added the definition in the updated paper.
CRP
Learning Mean Field Games on Sparse Graphs: A Hybrid Graphex Approach
zwU9scoU4A
ICLR-2024
gBU7uwifxA
Some typos: Should the state space be either \( \mathcal{X} \) or \( X \) (see section 3 for instance)? Does \( \mathbb{G}^\infty_{\alpha,t} \) depend on \( \boldsymbol{\mu} \) or not (see bottom of page 4)? Etc.
Writing
A: The state space should be $\mathcal{X}$. We used $X$ in the beginning of Section 3 to denote an arbitrary finite set. Thanks for pointing out the ambiguous notation; we have corrected it in the updated paper version. A: In our framework, the neighborhood distribution $\mathbb{G}^\infty_{\alpha, t}$ always depends...
CRP
Out-of-Variable Generalisation for Discriminative Models
zwMfg9PfPs
ICLR-2024
EOWYeM4s8B
It would be helpful to include more intuitive discussion throughout the paper providing more analysis on the sections. For example, more discussion on the assumptions of the settings/theorems would be helpful, and it's not clear exactly under what assumptions the proposed predictor is appropriate.
Theory
> It would be helpful to include more intuitive discussion throughout the paper providing more analysis on the sections. For example, more discussion on the assumptions of the settings/theorems would be helpful, and it's not clear exactly under what assumptions the proposed predictor is appropriate. We thank the revie...
CRP
Out-of-Variable Generalisation for Discriminative Models
zwMfg9PfPs
ICLR-2024
fZAon7Cssu
For future work, is there a more complicated/realistic dataset to validate the algorithm?
Experiments
> For future work, is there a more complicated/realistic dataset to validate the algorithm? We thank the reviewer for the suggestion and included results on real-world dataset in Table 2 (Appendix D.4).
CRP
Out-of-Variable Generalisation for Discriminative Models
zwMfg9PfPs
ICLR-2024
fZAon7Cssu
Theorem 3 connects all moments of the residual distribution to the partial derivatives with respect to the unique variable of the target environment. If additional moments were to be calculated as part of the proposed algorithm, would it improve results (for the general function case)?
Theory
> Theorem 3 connects all moments of the residual distribution to the partial derivatives with respect to the unique variable of the target environment. If additional moments were to be calculated as part of the proposed algorithm, would it improve results (for the general function case)? Thank you for the question, ye...
DWC
Out-of-Variable Generalisation for Discriminative Models
zwMfg9PfPs
ICLR-2024
fZAon7Cssu
In general, since the paper's main claim is that in the real world, it is likely to encounter both aspects of OOD and OOV - How simple is it to combine state-of-the-art OOD methods with the proposed approach? I cannot imagine at the moment a straightforward way to do that.
Experiments
> In general, since the paper's main claim is that in the real world, it is likely to encounter both aspects of OOD and OOV - How simple is it to combine state-of-the-art OOD methods with the proposed approach? I cannot imagine at the moment a straightforward way to do that. The reviewer asks re insights on combining ...
DWC
Out-of-Variable Generalisation for Discriminative Models
zwMfg9PfPs
ICLR-2024
bF4y5DIQ4i
The main weakness is the applicability of the method. The authors only showed results for proof-of-concept, not for real-world usage.
Experiments
We thank the reviewer for the suggestion and here include results on a real-world dataset in Table 2 (Appendix D.4). We observe our method outperforms the other baselines. Overall, we thank the reviewer for taking the time and pushing us to be more explicit on our method’s robustness and applicability. Following your...
CRP
Out-of-Variable Generalisation for Discriminative Models
zwMfg9PfPs
ICLR-2024
bF4y5DIQ4i
It is not yet clear what realistic problem can be well modeled by OOV generalization.
Novelty
We think OOV generalization is a general ability present in both human and animal’s navigation towards Nature. For example, AI in medicine is a relevant area where we often face strong limitations in guaranteeing dataset consistency. To begin with, patients have unique circumstances, and some diseases/symptoms are rare...
DWC
Out-of-Variable Generalisation for Discriminative Models
zwMfg9PfPs
ICLR-2024
bF4y5DIQ4i
It seems OOV fits very well the frame of missing-not-at-random and covariate-dependent missingness. Could the authors comment on that?
Theory
We thank the reviewer for the question. Missing-not-at-random and covariate-dependent missingness refer to a scenario where whether a variable is observed or not contains information about other covariates, or about certain other properties of the data point. One can thus hope to exploit this assumption to recover more...
DWC
Out-of-Variable Generalisation for Discriminative Models
zwMfg9PfPs
ICLR-2024
bF4y5DIQ4i
Theorem 2 is slightly confusing for me at first glance because I thought PA_Y by definition includes all parents of Y (so x1, x2, x3 in the example) and not just those in the target environment (x2, x3). It may be helpful to clarify.
Writing
Thank you for your feedback on improving the paper. We have incorporated an explanation on the difference of Theorem 2 setting with the original setup in the paragraph between section 3.3 and section 3.3.1 in the updated version.
CRP
Out-of-Variable Generalisation for Discriminative Models
zwMfg9PfPs
ICLR-2024
bF4y5DIQ4i
How does the method fair with the oracle as the magnitude of the noise increases?
Evaluation
Thank you for the question, we performed an additional systematic analysis similar in Table 1 with increase in noise standard deviation from 0.01 to 1 in an interval of 0.2. We averaged results over 5 repeated runs and observe our method are robust to increasing noise level and consistently outperforms remaining benchm...
CRP
Out-of-Variable Generalisation for Discriminative Models
zwMfg9PfPs
ICLR-2024
bF4y5DIQ4i
What if the noise is not gaussian but more heavy tailed?
Theory
Thank you for your question, we included an additional systemic noise when noise is heavy tailed. Specifically, when noise follows a log-normal distribution with mean 0 and sigma 0.5. We repeated the experiment over 5 runs and averaged over a hyperparameter sweep. Table 4 (Appendix D.6) shows the results. We observe as...
CRP
Out-of-Variable Generalisation for Discriminative Models
zwMfg9PfPs
ICLR-2024
bF4y5DIQ4i
Does the performance degrade or improve with increasing number of variables?
Experiments
Thank you for your question. When the number of variables increases in the source environment, we expect the performance remains the same as the base network $f_s$’s accuracy is indifferent to the number of variables given sufficient sample size; when the number of variables increases in the space of missing variables,...
CRP
Out-of-Variable Generalisation for Discriminative Models
zwMfg9PfPs
ICLR-2024
bF4y5DIQ4i
Figure 3: I don’t quite understand the figure. It would be helpful to define OOV loss, be explicit about the number of samples on the y-axis being (x2, x3, y) or (x1, x2, y) or something else. I also don’t understand why relative loss is zero means the method is on par with the oracle predictor. Why not just show how t...
Presentation
We apologize for the confusion, the number of samples on the x-axis refer to (x2, x3, y), and the relative loss is calculated as $log(loss_{pred}/loss_{oracle})$, the log ratio of predictor’s loss divided by the oracle’s loss. If the predictor achieves the same loss as the oracle loss then the ratio would be 1 and $log...
DWC
Out-of-Variable Generalisation for Discriminative Models
zwMfg9PfPs
ICLR-2024
gmtb8dbv8B
Referring to Figure 1, in the first paragraph in page 3, the claim "it would seem all but impossible...(orange box)" could be better explained.
Writing
We thank the reviewer for the suggestions to help us clarify Figure 1. We have incorporated the proposed changes in the updated version, by explaining the claim further and changing the border of the orange box. We also thank the reviewer for their detailed observations on typos and have corrected them in the updated ...
CRP
Out-of-Variable Generalisation for Discriminative Models
zwMfg9PfPs
ICLR-2024
gmtb8dbv8B
In Figure 1, it is unclear whether "With $Y$ not observed in the target domain" is an assumption made or is somehow indicated in the diagram or earlier in the paper. Eventually I realized that it's an assumption made, but the illustration Figure 1a alone isn't enough to show this assumption. This ambiguity may clear fo...
Presentation
We thank the reviewer for the suggestions to help us clarify Figure 1. We have incorporated the proposed changes in the updated version, by explaining the claim further and changing the border of the orange box. We also thank the reviewer for their detailed observations on typos and have corrected them in the updated ...
CRP
Out-of-Variable Generalisation for Discriminative Models
zwMfg9PfPs
ICLR-2024
gmtb8dbv8B
The abstract states "merely considering differences in data distributions is inadequate for fully capturing differences between learning environments." Doesn't out-of-variable technically fall under out-of-distribution, so shouldn't this be adequate? Perhaps more specificity is needed here.
Writing
Here we refer to settings exhibit in-distribution if the environments share the same data-generating process. For example, in Fig 1a, though the environments share the same data generating process (in-distribution) but observed different sets of variables (out-of-variable). To give an example on OOD but not OOV, consid...
SRP
Out-of-Variable Generalisation for Discriminative Models
zwMfg9PfPs
ICLR-2024
gmtb8dbv8B
On page 2, should "modal" be "model?"
Writing
We thank the reviewer for the suggestions to help us clarify Figure 1. We have incorporated the proposed changes in the updated version, by explaining the claim further and changing the border of the orange box. We also thank the reviewer for their detailed observations on typos and have corrected them in the updated ...
CRP
Out-of-Variable Generalisation for Discriminative Models
zwMfg9PfPs
ICLR-2024
gmtb8dbv8B
On page 6, do you mean "parentheses" instead of "brackets" between Eq (9) and Eq (10)?
Writing
We thank the reviewer for the suggestions to help us clarify Figure 1. We have incorporated the proposed changes in the updated version, by explaining the claim further and changing the border of the orange box. We also thank the reviewer for their detailed observations on typos and have corrected them in the updated ...
CRP
Out-of-Variable Generalisation for Discriminative Models
zwMfg9PfPs
ICLR-2024
gmtb8dbv8B
Why is the joint predictor considered an oracle predictor if MomentLearn outperforms it?
Theory
We apologize for the confusion. In Table 1, we refer to the joint predictor as oracle predictor in cases where there are enough data samples in the target environment such that regressing on the joint variable will lead to a near optimal predictor. In Figure 3, we showed that when the sample size is small (~100 samples...
DWC
Out-of-Variable Generalisation for Discriminative Models
zwMfg9PfPs
ICLR-2024
gmtb8dbv8B
Could you explain why MomentLearn is reliably more sample efficient than the oracle predictor for "few"-shot prediction?
Evaluation
MomentLearn is more sample efficient than the joint predictor in β€œfew”-shot prediction because with small sample size, the joint predictor would lead to estimation error, whereas MomentLearn leverages the information observed in the source environment (due to large sample size in source) are able to mitigate the proble...
DWC
Neural Sinkhorn Gradient Flow
ztuCObOc2i
ICLR-2024
M64FIZKM1B
The article is full with typos. Just to name a few: "piror", "Sinkhron", "Experimrnts", "speedest descent", question mark in the appendix and so on. Please fix those.
Writing
We have fixed all the typos, thank you!
CRP
Neural Sinkhorn Gradient Flow
ztuCObOc2i
ICLR-2024
M64FIZKM1B
The authors write "We do not compare with extant neural WGF methods on MNIST because most of the neural WGF methods only show generative power and trajectories on this dataset and lack the criteria to make comparisons." There are several papers (also gradient flow based ones), which evaluate a FID on MNIST. Please prov...
Evaluation
In fact, we are very willing to make quantitative comparisons with different methods, but due to limited time and computational resources, we did not replicate related algorithms on MNIST and test FID. The literature we reviewed that conducted MNIST experiments with neural WGF-based methods did not provide relevant dat...
DWC
Neural Sinkhorn Gradient Flow
ztuCObOc2i
ICLR-2024
M64FIZKM1B
Although the CIFAR10 value seems good, there are unfortunately no generated images provided. It is standard practice to sample many images in the appendix.
Presentation
Due to space constraints, we have added sampling images in the Appendix.
CRP
Neural Sinkhorn Gradient Flow
ztuCObOc2i
ICLR-2024
M64FIZKM1B
The statement of theorem 2 is incorrect. I guess the authors do not want to sample the Euler scheme (eq 14) but the continuous gradient flow, otherwise the statement would need to depend on the step size $\eta$.
Theory
We have revised Equation 14 to an ODE (Ordinary Differential Equation) form in the amended version.
CRP
Neural Sinkhorn Gradient Flow
ztuCObOc2i
ICLR-2024
M64FIZKM1B
In the proof of Theorem 2: Please provide a proof (or reference) why the mean field limit exists. Or do you mean the gradient flow starting at $\mu_0$ with target $\mu$ (first two sentences).
Theory
Apologies for the previous lack of clarity. We have re-examined and revised the proof of Theorem 2, and these modifications are included in the revised version of our paper (see Appendix for details). To prove that the empirical distribution $\tilde{\mu}^M_t$ evolving with equation 14 weakly converges to $\mu_t$ which ...
CRP
Neural Sinkhorn Gradient Flow
ztuCObOc2i
ICLR-2024
M64FIZKM1B
Later in that proof: why does there exists a weakly convergent subsequence of $\mu_t^M$? Further, I cant find the definition of $U_{\mu}$.
Theory
Apologies for the previous lack of clarity. We have re-examined and revised the proof of Theorem 2, and these modifications are included in the revised version of our paper (see Appendix for details). To prove that the empirical distribution $\tilde{\mu}^M_t$ evolving with equation 14 weakly converges to $\mu_t$ which ...
CRP
Neural Sinkhorn Gradient Flow
ztuCObOc2i
ICLR-2024
M64FIZKM1B
The code is not runnable, as the model (or any checkpoints) are not provided.
Reproducibility
We have added a readme and the missing neural network structure components to ensure the completeness of the code.
CRP
Neural Sinkhorn Gradient Flow
ztuCObOc2i
ICLR-2024
OPYpDioiBD
With the provided code, there are several insights that should be discussed in the paper. In the provided cifar experiments, the number of Gaussian samples used is 50000 samples. This number is extremely low to approximate the semi-discrete OT. Therefore, a discussion regarding the statistical performance of the method...
Evaluation
In our experiments, we indeed used significantly more than 50,000 Gaussian samples. As detailed in the revised version of our Algorithm 1, we clarified that our method comprises two distinct phases: building the trajectory pool and velocity field matching. The number of Gaussian samples utilized during training is dete...
CRP
Neural Sinkhorn Gradient Flow
ztuCObOc2i
ICLR-2024
OPYpDioiBD
As your method requires the simulation of the probability path, I wonder about the training time between your method and the recent Flow Matching approaches which are simulation free.
Experiments
Our algorithm takes longer in training time compared to simulation-free Flow Matching approaches [A],[B], and [C] (we tested the time required to reach the same $W_2$ in 2D experiments, and the table is attached below), but since our model follows the steepest descent flow, we use fewer steps in the inference process t...
CRP
Neural Sinkhorn Gradient Flow
ztuCObOc2i
ICLR-2024
OPYpDioiBD
There are many typos in the paper (including in titles: ie ExperimRnts, Notaions) that lead to poor clarity.
Writing
We have fixed all the typos, thank you!
CRP
Neural Sinkhorn Gradient Flow
ztuCObOc2i
ICLR-2024
OPYpDioiBD
The experiments include two toy datasets (synthetic 2D and MNIST). I would like to know how the method performs on other big datasets (Flowers, CelebA) or on other tasks such as single-cell dynamics [4].
Experiments
Due to time and computational power limitations, we will construct experiments with big datasets and single-cell dynamics in our future work.
SRP
Neural Sinkhorn Gradient Flow
ztuCObOc2i
ICLR-2024
lVRWOCrXf2
There are minor typos throughout, such as "euclidean" instead of "Euclidean".
Writing
**Answer to Weakness 2:** "Typos" We have fixed all the typos, thank you!
CRP
Neural Sinkhorn Gradient Flow
ztuCObOc2i
ICLR-2024
lVRWOCrXf2
There are minor typos throughout, such as "$lim$" instead of "$\lim$" atop page 15 in the appendix.
Writing
**Answer to Weakness 2:** "Typos" We have fixed all the typos, thank you!
CRP
Neural Sinkhorn Gradient Flow
ztuCObOc2i
ICLR-2024
lVRWOCrXf2
There are minor typos throughout, such as the positive scalar $\delta$ not being defined in the proof of Theorem $1$.
Writing
**Answer to Weakness 2:** "Typos" We have fixed all the typos, thank you!
CRP
Neural Sinkhorn Gradient Flow
ztuCObOc2i
ICLR-2024
lVRWOCrXf2
There are minor typos throughout, such as in the statement of Lemma 3: "teh" should read "the".
Writing
**Answer to Weakness 2:** "Typos" We have fixed all the typos, thank you!
CRP
Neural Sinkhorn Gradient Flow
ztuCObOc2i
ICLR-2024
lVRWOCrXf2
Some references are obscure. For example, for the fact that $\mu + t\delta \mu$ converges weakly to $\mu$, it may be worth simply noting that this is due to the linearity of integration (with respect to the measure term).
Theory
**Answer to Weakness 3:** "Some references are obscure" Apologies for the unclear referencing in proof of Theorem 1. We have revised this reference in the revised version of our paper.
CRP
Neural Sinkhorn Gradient Flow
ztuCObOc2i
ICLR-2024
lVRWOCrXf2
Can it be shown that approximate vector field matching yields approximate solutions for all time $t$?
Theory
**Answer to Weakness 1 & Question 1:** "Approximate vector field matching yields approximate solutions for all timeΒ $t$" It is challenging to rigorously analyze the error bounds of neural networks fitting vector fields at all times $t$, as it is related to properties like the smoothness of the flow and the expressive...
DRF
Neural Sinkhorn Gradient Flow
ztuCObOc2i
ICLR-2024
rb1Sv7qIos
Some theoretical results from the paper are known. For example, the statement of Theorem 1 could be found in [B] (eq. 26) or [C] (eq. 8).
Novelty
**Answer to Weakness 1:** β€œSome theoretical results from the paper are known” For the sake of completeness of our paper, we have included the theory of Sinkhorn WGF derivation in our methods section. Obviously, regarding the discussion of $\mathcal{W}_{\varepsilon}$-potential, we are not the first to do so, and we ha...
CRP
Neural Sinkhorn Gradient Flow
ztuCObOc2i
ICLR-2024
rb1Sv7qIos
The quality of the code provided is not good. There is no README/or other instruction to run the code. There are imports of non-existing classes. So, there is no possibility of checking (at least, qualitatively) the provided experimental results.
Reproducibility
**Answer to Weakness 2:** "The quality of the code provided is not good." Apologies that only the algorithm part was provided in the code. We have added a readme and the missing neural network structure components to ensure the completeness of the code in the latest version.
CRP
Neural Sinkhorn Gradient Flow
ztuCObOc2i
ICLR-2024
rb1Sv7qIos
The main weakness of the proposed paper is the limited methodological contribution. The authors simulate the particles of data following Sinkhorn divergence β€” as already mentioned, this is not a super fresh idea. To make a generative model from these simulated trajectories, the authors simply solve the regression task ...
Novelty
**Answer to Weakness 3:** "Limited methodological contribution" We observed that existing neural SDE/ODE-based models necessitate multiple iterations through a neural network to produce high-quality samples. We noticed that the generative flows used in these models, such as reverse Ornstein-Uhlenbeck processes or line...
DWC
Neural Sinkhorn Gradient Flow
ztuCObOc2i
ICLR-2024
rb1Sv7qIos
The authors propose to compute certain $\mathcal{W}_{\varepsilon}$ potentials (on discrete support of available samples) and then somehow take the gradients of these potentials w.r.t. the corresponding samples (eq. (13)). From the paper it is not clear how to compute the gradients, because the obtained potentials look ...
Reproducibility
**Answer to Question 1:** "explicitly use SampleLoss in the algorithm's listing" We have added the detailed computation procedure of $\mathcal{W}_{\varepsilon}$-potentials in section 4.2 of the revised version. In the original paper, we only explained the calculation of $\mathcal{W}_{\varepsilon}$-potentials and th...
CRP
Neural Sinkhorn Gradient Flow
ztuCObOc2i
ICLR-2024
rb1Sv7qIos
The vector field of the Sinkhorn gradient flow is estimated by empirical samples. It is not clear how well this sample estimate approximates the true vector field. This point should be clarified. Note that Theorem 2 works only for the mean-field limit.
Theory
**Answer to Question 2:** "How well does this sample estimate approximate the true vector field" In our paper, we concentrate on the mean-field limit situation and plan to address the analysis of the exact approximation error in our future work. It is important to note that conducting error analysis and theoretical pr...
DRF
Neural Sinkhorn Gradient Flow
ztuCObOc2i
ICLR-2024
rb1Sv7qIos
In the Introduction section, the authors consider a taxonomy of divergences used for gradient flow modelling, namely, "divergences [...] with the same support" and "divergences [...] with possible different support". As understood, the first class is about $f$-divergences and the second class is about the other types (...
Theory
**Answer to Question 3:** "The taxonomy in Introduction section" Based on our understanding, papers [I] and [J] develop algorithms designed for a broad range of functionals, not limited to KL divergence. For instance, [I] uses the Free Energy Functional in the experiment section. [K] utilizes the Free Energy Functiona...
CRP
Neural Sinkhorn Gradient Flow
ztuCObOc2i
ICLR-2024
rb1Sv7qIos
What is the β€œground” set ($\S$ 3.1, first line).
Writing
**Answer to Question 5:** "What is the β€œground” set" The "ground set" is simply the underlying set or space under consideration.
DWC
Enhancing Small Medical Learners with Privacy-preserving Contextual Prompting
ztpy1gsUpT
ICLR-2024
DT9YN4xa8A
Privacy Protection as an Innovation Point. Regarding the extraction of key words for privacy protection, the paper uses a medical NER model proposed by Neumann et al in 2019. We suggest further improvement of this model, for example, considering age as a crucial keyword for certain diseases and extracting it as necessa...
Novelty
In our paper, we utilize NER to extract keywords automatically, and we mention in the introduction that keywords can also be extracted by other methods, e.g., a manually created dictionary based on domain expertise. In practice, we can also use rules to post-process the extracted keywords or keep clinicians in the loo...
SRP
Enhancing Small Medical Learners with Privacy-preserving Contextual Prompting
ztpy1gsUpT
ICLR-2024
DT9YN4xa8A
The overall innovation of the methodology needs improvement, as the majority of the content relies on existing methods, such as the medical NER (Named Entity Recognition) model.
Novelty
We appreciate the opportunity to further elucidate the innovative aspects of our work. Our paper introduces a novel approach that simplifies the integration of medical knowledge into SLMs by leveraging LLMs as a knowledge base. Unlike prior research that often relies on complex training algorithms and the intricate pro...
DWC
Enhancing Small Medical Learners with Privacy-preserving Contextual Prompting
ztpy1gsUpT
ICLR-2024
x343VevwM7
As this research utilized a named entity recognition model to extract keywords, it is possible that the NER model can extract privacy information such as patient names. Is there any filtering or postprocessing step to avoid that? In addition, it is not guaranteed that NER system will never extract sensitive patient inf...
Reproducibility
**1. Concerns on privacy preserving in practical usage.** The data we utilized in experiments have already undergone post-processing; however, even well-processed data cannot be directly shared with third parties in a real-hospital setting. Here, we adopt NER methods directly, solely for automation, to show that LLM ca...
DWC
Enhancing Small Medical Learners with Privacy-preserving Contextual Prompting
ztpy1gsUpT
ICLR-2024
x343VevwM7
As the LLM already provides a preliminary decision, I am curious about the performance if we only feed the preliminary decision from LLM to SLM. It is worth knowing which part of the LLM-generated information improves the SLM most.
Experiments
**2.Question about what SLM learns for decision making.** We feed preliminary decisions (PD) as context into SLM with backbone BioLinkBert-Base on three datasets. Three separate runs for each setting are conducted and the average results along with the standard deviation are reported. The results are shown in the Tabl...
CRP
Enhancing Small Medical Learners with Privacy-preserving Contextual Prompting
ztpy1gsUpT
ICLR-2024
x343VevwM7
The related work section need to discuss more LLM application in the clinical area, especially the knowledge-enhanced LLM in clinical settings. For example, paper "Qualifying Chinese Medical Licensing Examination with Knowledge Enhanced Generative Pre-training Model." also utilized external knowledge for clinical quest...
Novelty
**3. Suggestion about related work in LLM application in the clinical domain.** Thanks for the suggestion in the related work. We will add the suggested work into the related work section in the revision.
SRP
Enhancing Small Medical Learners with Privacy-preserving Contextual Prompting
ztpy1gsUpT
ICLR-2024
x343VevwM7
By adding the LLM-generated content, will the new concatenated input be too long and out of the word window in SLM? How do you deal with the long content problem?
Reproducibility
**4. Question abut address long medical context generated by LLM.** We utilize the Fusion-in-Decoder [1] approach in our general domain experiments. This strategy is also effective for encoding long contexts. It works by dividing the input into smaller passages, encoding each one separately, and then combining the enc...
DWC
Enhancing Small Medical Learners with Privacy-preserving Contextual Prompting
ztpy1gsUpT
ICLR-2024
78lQNasrZS
(Clarity) There is no specific definition of the private information. From Figure 1, it seems that privacy definition is restricted to private identifiable information (PII). The authors should clarify the scope of privacy risks. Importantly, the proposed method cannot address general private information leakage that i...
Writing
**1. Concern on privacy definition.** Privacy is a multifaceted concept with varying definitions that depend on the context and use cases. Our definition of privacy differs from that of differential privacy, which offers a theoretical guarantee but may not always be practically applicable. In our setting, privacy issu...
SRP
Enhancing Small Medical Learners with Privacy-preserving Contextual Prompting
ztpy1gsUpT
ICLR-2024
78lQNasrZS
(Quality - Risks) The evaluation of privacy is not strict. It is possible that the keyword extraction includes private identifiable information (PII), for instance, names and dates as shown in Figure 1. There is no theoretical guarantee for privacy protection or empirical evaluation of the leakage rates of such PII.
Evaluation
In practice, we can employ additional de-identification models and rules to in the preprocessing or postprocessing stages to avoid the PII information leakage. Additionally, we can involve a human review to double-check the data before it is transmitted to an API for practical applications. The concept of 'privacy-pre...
DWC
Enhancing Small Medical Learners with Privacy-preserving Contextual Prompting
ztpy1gsUpT
ICLR-2024
78lQNasrZS
(Quality - Metric) The evaluation of privacy is not strict. The authors used the privacy budget for quantifying privacy risks: the ratio of the number of words provided to the LLM to the total words in the original question. However, I doubt if the metric can imply some privacy risks. There essentially lacks an intuiti...
Evaluation
**1. Concern on privacy definition.** Privacy is a multifaceted concept with varying definitions that depend on the context and use cases. Our definition of privacy differs from that of differential privacy, which offers a theoretical guarantee but may not always be practically applicable. In our setting, privacy issu...
SRP
Enhancing Small Medical Learners with Privacy-preserving Contextual Prompting
ztpy1gsUpT
ICLR-2024
78lQNasrZS
(Motivation) As the authors said, SLM presents a large gap compared to LLMs and thus there is no clear motivation to use SLM for prediction. Although the authors mention that ChatGPT requires access to data, it is essentially ignored that open-source LLMs, for example, Llama, can be used. In the paper, there is no refe...
Experiments
**2. Concerns about motivation to utilize ChatGPT instead of open-source LLMs.** We evaluate medical domain specific LLMs based on LLaMA [5] : AlpaCare [6], PMC_LLAMA [7], ChatDoctor [8], Medalpaca [9] and Baize-healthcare [10] on the three QA datasets following [11]. The results are shown in the table below. | ...
CRP
Enhancing Small Medical Learners with Privacy-preserving Contextual Prompting
ztpy1gsUpT
ICLR-2024
78lQNasrZS
There is no clear motivation to see SLM for prediction. Although the authors mention that ChatGPT requires access to data, it is essentially ignored that open-source LLMs, for example, Llama, can be used. Is there any evidence for the large gap between open-source LLMs and ChatGPT on the concerned medical tasks?
Experiments
**2. Concerns about motivation to utilize ChatGPT instead of open-source LLMs.** We evaluate medical domain specific LLMs based on LLaMA [5] : AlpaCare [6], PMC_LLAMA [7], ChatDoctor [8], Medalpaca [9] and Baize-healthcare [10] on the three QA datasets following [11]. The results are shown in the table below. | ...
CRP
Enhancing Small Medical Learners with Privacy-preserving Contextual Prompting
ztpy1gsUpT
ICLR-2024
9sLDVBo2SI
The contribution of this paper to the algorithm and the significance of the clinical problems it addresses seem not to be very high.
Novelty
The reviewer asserts that there seems to be room for improvement in our algorithm, but it is not clear to us from which perspective we should improve. In addition, we wish to highlight that the contributions of a paper can be evaluated from various perspectives, not limited to algorithms alone. The primary contributio...
DWC
Pricing with Contextual Elasticity and Heteroscedastic Valuation
zt8bb6vC4m
ICLR-2024
d7eMHE2Gje
The motivation for this contextual price elasticity seems unclear.
Theory
As in the generalized linear demand model $S(\alpha p + x_t^\top\beta)$, the real-world motivations of assuming the price elasticity $\alpha=x_t^{\top}\eta^*$ as contextual are mainly from the fact that different products have different price elasticities [Anderson et al. 1997], given how crucial they are for our daily...
SRP
Pricing with Contextual Elasticity and Heteroscedastic Valuation
zt8bb6vC4m
ICLR-2024
d7eMHE2Gje
Certain assumptions, such as $x^\top \eta$ having a positive lower bound, lack a real-world explanation.
Theory
The assumption that $\alpha=x_t^\top\eta^*>0$, i.e. a negative elasticity, is necessary for the monotonicity of a demand-price function, given that the link function $S(\cdot)=1-F(\cdot)$ is non-increasing. Stated as *the law of demand* [Gale, 1955] [Hildenbrand, 1983], that the quantity purchased varies inversely with...
SRP
Pricing with Contextual Elasticity and Heteroscedastic Valuation
zt8bb6vC4m
ICLR-2024
d7eMHE2Gje
Lack of applying this framework to real-data studies.
Experiments
We are actually motivated by real-world scenarios to consider a heteroscedastic setting where the price elasticity is feature-based. However, it is unfortunate that we are unable to have real-world evaluations of our algorithm, which requires either massive investments or confidential commercial-use data. On the one ha...
DWC

🚀 RMR-75K

📌 RMR-75K (Review-Map-Rebuttal) is a large-scale segment-level mapping dataset that links each review weakness/question key point to the specific rebuttal span that addresses it, and annotates each pair with

  • a review perspective label (7 categories) and
  • a rebuttal impact category (5 levels) reflecting the author's reaction and degree of uptake.

📊 Dataset size

  • Total mappings: 75,542
  • Total papers: 4,825
  • Distinct reviews: 16,583
  • Avg. mappings per paper: 15.66
  • Avg. mappings per review: 4.56
  • Conference source: ICLR 2024

πŸ“ Data format

Each line is a JSON object (JSONL). One object corresponds to one mapped review key point and its aligned rebuttal response span, with labels.
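
For example, the raw JSONL file can be parsed line by line with the standard library. This is a minimal sketch; the file name rmr75k.jsonl is an assumption about how the export is named, so substitute your actual path.

```python
import json

# Minimal sketch: stream the raw JSONL export one mapping at a time.
# The file name "rmr75k.jsonl" is an assumption; substitute your path.
with open("rmr75k.jsonl", encoding="utf-8") as f:
    for line in f:
        record = json.loads(line)          # one mapping per line
        print(record["perspective"], record["rebuttal_label"])
        break                              # inspect only the first record
```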

🔑 Fields

  • paper_title: The paper title.
  • paper_id: The OpenReview submission id.
  • conference: The source venue and year, for example ICLR-2024.
  • review_id: Identifier of the review the segment comes from.
  • weakness_content: The atomic weakness or question segment extracted from the review.
  • perspective: One of 7 review perspective labels.
  • rebuttal_content: The rebuttal span that addresses weakness_content.
  • rebuttal_label: One of 5 rebuttal impact categories.

🧪 Example

```json
{
  "paper_title": "Breaking Physical and Linguistic Borders: Multilingual Federated Prompt Tuning for Low-Resource Languages",
  "paper_id": "zzqn5G9fjn",
  "conference": "ICLR-2024",
  "review_id": "UQfBBoocAY",
  "weakness_content": "Although the paper is generally well-structured, the title mentions `low-resource` languages ... I would suggest ... include more tasks ... MasakhaNEWS ...",
  "perspective": "Experiments",
  "rebuttal_content": "Thank you for recommending these excellent datasets for our evaluation. ... we have initiated experiments with MasakhaNEWS ... Table 2 ...",
  "rebuttal_label": "CRP"
}
```
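
The dataset can also be loaded directly from the Hub with the datasets library. This is a minimal sketch, assuming the repository id shwu22/RMR-75K and a single train split.

```python
from datasets import load_dataset

# Load from the Hugging Face Hub. The split name "train" is an
# assumption about how the release is configured.
ds = load_dataset("shwu22/RMR-75K", split="train")

example = ds[0]
print(example["paper_title"], example["perspective"], example["rebuttal_label"])
```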

🧭 Label taxonomy

Review perspective labels (7)

Each review segment has exactly one perspective label:

| Perspective | Definition (brief) |
| --- | --- |
| Experiments | Experimental setup/design: missing/insufficient experiments, weak baselines, missing ablations, unclear datasets/splits, hyperparameters/seeds, compute/training details. |
| Evaluation | Metrics/analysis/interpretation: missing or inappropriate metrics, lack of statistical testing or error bars, insufficient analysis, inconsistencies between claims and results. |
| Reproducibility | Reproducibility details: missing code/data/links, missing hyperparameters, unclear preprocessing, seeds, hardware, insufficient instructions to replicate results. |
| Novelty | Originality/positioning vs prior work: incremental contribution, overlap, unclear differentiation, missing related work. |
| Theory | Theoretical correctness/justification: flawed assumptions, gaps in proofs, incorrect derivations, mismatch between theorems and algorithms. |
| Writing | Clarity/readability: grammar/style, ambiguous phrasing, undefined terms/symbols, confusing explanations. |
| Presentation | Figures/tables/organization: unclear plots/legends, formatting issues, misplaced/redundant content, overall structure hard to follow. |
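
To work with a single perspective, the loaded dataset can be filtered on this field. A minimal sketch, again assuming the train split:

```python
from datasets import load_dataset

ds = load_dataset("shwu22/RMR-75K", split="train")  # split name assumed

# Keep only the mappings whose review segment was labeled "Theory".
theory_ds = ds.filter(lambda ex: ex["perspective"] == "Theory")
print(len(theory_ds))  # expected: 12,822 per the distribution table below
```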

Rebuttal impact categories (5)

Each aligned rebuttal span has exactly one impact label:

| Label | Meaning (brief) |
| --- | --- |
| CRP | Concrete Revision Performed: authors point to specific changes or verifiable artifacts already added. |
| SRP | Specific Revision Plan: concrete future edits are committed with where/what to revise, but not yet implemented. |
| VCR | Vague Commitment to Revise: promises to improve without actionable details. |
| DWC | Defend Without Change: argues the paper already addresses the point; no edits proposed. |
| DRF | Deflect/Reframe: shifts responsibility or reframes the issue; no change offered. |
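
For coarse analyses, the five impact categories can be collapsed into an uptake signal. The grouping below is one illustrative reading, not part of the dataset's official taxonomy.

```python
# Illustrative grouping of the five impact labels into a coarse uptake
# signal; this mapping is an assumption, not a label shipped with the data.
UPTAKE = {
    "CRP": "revised",    # concrete change already made
    "SRP": "revised",    # concrete change committed
    "VCR": "promised",   # vague promise only
    "DWC": "unchanged",  # defended, no edit
    "DRF": "unchanged",  # deflected, no edit
}

def uptake(label: str) -> str:
    """Map a rebuttal impact label to its coarse uptake bucket."""
    return UPTAKE[label]

print(uptake("CRP"))  # -> revised
```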

📉 Label distribution (RMR-75K)

Counts and percentages for Perspective × Impact:

| Perspective (total) | CRP | SRP | VCR | DWC | DRF |
| --- | --- | --- | --- | --- | --- |
| Evaluation (11,257) | 4,766 (42.3%) | 903 (8.0%) | 171 (1.5%) | 5,249 (46.6%) | 168 (1.5%) |
| Experiments (25,160) | 12,059 (47.9%) | 2,272 (9.0%) | 401 (1.6%) | 9,833 (39.1%) | 595 (2.4%) |
| Novelty (8,585) | 2,828 (32.9%) | 872 (10.2%) | 185 (2.2%) | 4,578 (53.3%) | 122 (1.4%) |
| Presentation (4,776) | 2,894 (60.6%) | 803 (16.8%) | 256 (5.4%) | 784 (16.4%) | 39 (0.8%) |
| Reproducibility (4,402) | 2,009 (45.6%) | 465 (10.6%) | 120 (2.7%) | 1,747 (39.7%) | 61 (1.4%) |
| Theory (12,822) | 4,253 (33.2%) | 1,110 (8.7%) | 282 (2.2%) | 6,859 (53.5%) | 318 (2.5%) |
| Writing (8,540) | 4,693 (55.0%) | 1,149 (13.5%) | 631 (7.4%) | 1,997 (23.4%) | 70 (0.8%) |
| Overall (75,542) | 33,502 (44.3%) | 7,574 (10.0%) | 2,046 (2.7%) | 31,047 (41.1%) | 1,373 (1.8%) |
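
The cross-tabulation above can be reproduced directly from the records. A minimal sketch, again assuming the train split:

```python
from collections import Counter
from datasets import load_dataset

ds = load_dataset("shwu22/RMR-75K", split="train")  # split name assumed

# Row totals per perspective, then perspective x impact cell counts.
row_totals = Counter(ex["perspective"] for ex in ds)
cells = Counter((ex["perspective"], ex["rebuttal_label"]) for ex in ds)

for (persp, label), n in sorted(cells.items()):
    pct = 100 * n / row_totals[persp]
    print(f"{persp:15s} {label}  {n:6d}  ({pct:.1f}%)")
```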

🎯 Intended use

RMR-75K is designed for:

  • training and evaluating perspective-conditioned review feedback generation
  • leveraging rebuttal outcomes as weak supervision for multiple dimensions such as actionability (see the sketch after this list)
  • studying the relationship between review and rebuttal responses
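
As one concrete instance of the weak-supervision use case, a binary actionability target can be derived from the impact label. The CRP/SRP-positive heuristic below is an assumption for illustration, not an annotation shipped with the dataset.

```python
from datasets import load_dataset

ds = load_dataset("shwu22/RMR-75K", split="train")  # split name assumed

# Heuristic weak label: a review point counts as "actionable" if the
# rebuttal performed (CRP) or concretely planned (SRP) a revision.
# This mapping is an assumption, not an official label of the dataset.
ds = ds.map(lambda ex: {"actionable": int(ex["rebuttal_label"] in ("CRP", "SRP"))})

print(sum(ds["actionable"]) / len(ds))  # fraction of actionable points
```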

πŸ“ Citation

If you find this dataset useful in your research, please cite:

```bibtex
@misc{wu2026rbtactrebuttalsupervisionactionable,
  title={RbtAct: Rebuttal as Supervision for Actionable Review Feedback Generation},
  author={Sihong Wu and Yiling Ma and Yilun Zhao and Tiansheng Hu and Owen Jiang and Manasi Patwardhan and Arman Cohan},
  year={2026},
  eprint={2603.09723},
  archivePrefix={arXiv},
  primaryClass={cs.CL},
  url={https://arxiv.org/abs/2603.09723},
}
```