| paper_title | paper_id | conference | review_id | weakness_content | perspective | rebuttal_content | rebuttal_label |
|---|---|---|---|---|---|---|---|
ScaLearn: Simple and Highly Parameter-Efficient Task Transfer by Learning to Scale | 6cMmSnOpCs | ICLR-2024 | yFbDQXSSp2 | In table 3, the version of the ScaLearn that achieves best performance varies a lot across tasks. Why is this the case? Can you provide some intuition around when would each version work better? | Evaluation | **[Q1] Variability in ScaLearn\* performance**: Overall, the performance of the different variations of ScaLearn is rather similar. Moreover, the performance of ScaLearn and ScaLearn++ tends to be highly similar on most tasks (cf. Table 2, 3, and 4), with both performing the best overall. The performance of the variant... | DWC |
ScaLearn: Simple and Highly Parameter-Efficient Task Transfer by Learning to Scale | 6cMmSnOpCs | ICLR-2024 | 99fH2Lf3D1 | The experimental section is not very strong and is missing some very strong baselines, relevant settings, and analysis. Please refer to the questions below. | Experiments | **[W1] Experimental section and baselines**: We acknowledge the reviewer's emphasis on the importance of a diverse evaluation to demonstrate the merit of our method. We have already included a broad range of strong single-task learning (STL) and multi-task learning (MTL) baselines, and we appreciate the reviewer's pers... | CRP |
ScaLearn: Simple and Highly Parameter-Efficient Task Transfer by Learning to Scale | 6cMmSnOpCs | ICLR-2024 | 99fH2Lf3D1 | The paper is slightly harder to read, the motivation and the analysis on scaling part was not very clear to me until I read section 4 about the method. | Writing | **[W2] Readability**: We thank the reviewer for raising this point. We have walked through the paper to ensure clarity and readability throughout. Based on this, we have made a number of alterations (highlighted in blue), including an introductory sentence to our analyses (Section 3). | CRP |
ScaLearn: Simple and Highly Parameter-Efficient Task Transfer by Learning to Scale | 6cMmSnOpCs | ICLR-2024 | 99fH2Lf3D1 | The only modular composition baseline in this paper is AdaptorFusion from 2021, which is quite old now and there have been other works that tackle the same problem from different angles that should act as baselines here. [1] Combining Modular Skills in Multitask Learning, [2] AdapterSoup: Weight Averaging to Improve Ge... | Experiments | **[Q1] Modular composition baseline**: We acknowledge the inclusion of additional baselines. To address the reviewer's concerns, we have included two additional baselines (see first comment). Regarding the proposed baselines, neither of them tackles the very same problem; all the proposed works are generally related to... | CRP |
ScaLearn: Simple and Highly Parameter-Efficient Task Transfer by Learning to Scale | 6cMmSnOpCs | ICLR-2024 | 99fH2Lf3D1 | The experimental setup used in the paper can be significantly improved by studying this problem on good seq2seq zero-shot models like T0, or maybe LLaMA family models and then comparing the zero-shot performance, few-shot performance, and the performance obtained via this kind of composition of learned modules. | Experiments | **[Q2] Experiments with seq2seq models**: We acknowledge the potential of expanding our method to seq2seq models. Our current focus is on encoder PLMs, a point that we make more explicit in the revised draft. This focus was chosen based on our initial analyses of scaling the output representations of adapters, and it a... | DWC |
ScaLearn: Simple and Highly Parameter-Efficient Task Transfer by Learning to Scale | 6cMmSnOpCs | ICLR-2024 | 99fH2Lf3D1 | How is the classification layer for the target task finetuned? In Table 1 and everywhere in the paper, it seems like you do not count these parameters when counting for the number of trainable parameters. Can you clarify this if the classifier parameters are also learned on each source task then this needs to be clarif... | Reproducibility | **[Q3] Clarification on the classification layer**: Indeed, the classifier (task head) parameters are learned on each target task (also for two-stage MTL methods), but we do not count them when considering the number of trainable parameters. We chose not to include them to focus on the efficiency of the main method, as... | CRP |
ScaLearn: Simple and Highly Parameter-Efficient Task Transfer by Learning to Scale | 6cMmSnOpCs | ICLR-2024 | 99fH2Lf3D1 | The experiments in the paper should add IA3 as a baseline. IA3 paper shares very high similarity to the proposed method, the ScaLEARN method is like adapting source task adapter modules using IA3 on a downstream task. Hence, for all the experiments, IA3 would be a good baseline as it would learn a lower number of param... | Experiments | **[Q4] Adding (IA)^3 [4] as a baseline**: We thank the reviewer for raising this point. Indeed, the approach of scaling key and value as well as the intermediate feed-forward output representations as done in (IA)^3 is conceptually related to ScaLearn. As suggested by the reviewer, we also added (IA)^3 as an additional... | CRP |
ScaLearn: Simple and Highly Parameter-Efficient Task Transfer by Learning to Scale | 6cMmSnOpCs | ICLR-2024 | 99fH2Lf3D1 | At multiple places, the paper talks about how the scaling coefficient does not need to sum to 1 and I agree that for ScaLearn this might be the case however, I am not sure if there is enough evidence in the paper, to claim that this is how it should be for all other methods and this has been talked about at multiple pl... | Evaluation | **[Q5] Scaling coefficient summation constraint**: The insights gained in our Analysis (Section 3) provide the motivation for learning scaling coefficients without imposing distributional constraints, and we do not think we claim that this is how it should be in other methods. Rather, our point is that enforcing the sc... | DWC |
ScaLearn: Simple and Highly Parameter-Efficient Task Transfer by Learning to Scale | 6cMmSnOpCs | ICLR-2024 | OCBu4WOBCd | Analysis limited to adapter-based methods. Unclear how well it will perform to other PEFT architectures (e.g. Prompt Tuning). | Experiments | **[W1] Focus on adapter-based methods**: We acknowledge the reviewer's observation regarding the focus on adapter-based methods. This focus was intentional, following Pfeiffer et al. [1]. Our method is strongly motivated by our analyses of scaling the output representations of adapters (Section 3). Furthermore, we chos... | DWC |
ScaLearn: Simple and Highly Parameter-Efficient Task Transfer by Learning to Scale | 6cMmSnOpCs | ICLR-2024 | OCBu4WOBCd | To my best knowledge, IA3 [1] achieves stronger fine-tuning performance by scaling the weighted activations in the activation layer using learned vectors. This is similar to your method, but I did not find it in the PEFT baselines you compared. | Experiments | **[W2] Comparison with (IA)^3 [3]**: We thank the reviewer for raising this point. Indeed, the approach of scaling key and value as well as the intermediate feed-forward output representations as done in (IA)^3 is conceptually related to ScaLearn. As suggested by the reviewer, we performed new experiments using (IA)^3.... | CRP |
ScaLearn: Simple and Highly Parameter-Efficient Task Transfer by Learning to Scale | 6cMmSnOpCs | ICLR-2024 | OCBu4WOBCd | It seems that many source tasks are closely related to each other. I would suggest authors use benchmarks such as CrossFit [2] to do a more large-scale analysis, where the transferring is more challenging as some source tasks can be relatively less related to the target tasks. | Experiments | **[W3] Use of different benchmarks such as CrossFit [5]**: We agree with the importance of diversity in target and source tasks as well as large-scale analyses. To address diversity, we included SuperGLUE and HumSet in addition to the commonly used GLUE benchmark. SuperGLUE presents considerably more challenging tasks ... | CRP |
ScaLearn: Simple and Highly Parameter-Efficient Task Transfer by Learning to Scale | 6cMmSnOpCs | ICLR-2024 | OCBu4WOBCd | For the second stage during training, the output representations of multiple source adapters are scaled and combined, which reminds me of MoE (Mixture of Experts), where each source adapter corresponds to an expert. It is a well-known phenomenon that learnable MoE can lead to overfitting and collapse. However, in your ... | Theory | **[W4] Comparison to MoEs and overfitting**: We appreciate the insightful comment and acknowledge the conceptual similarity between the scaling in our method and Mixture of Experts (MoEs) models. A key difference between our method and MoEs is that, in our setup, the backbone language model remains frozen during both s... | DWC |
ScaLearn: Simple and Highly Parameter-Efficient Task Transfer by Learning to Scale | 6cMmSnOpCs | ICLR-2024 | pFHczL41FY | I don't think this is a glaring weakness, but I do think the paper could benefit from more diverse source/target tasks, especially sequence-generation tasks. It could be possible that the simpler ScaLearn parameterization doesn't work as well for different configurations of source and target tasks. I don't particularly... | Experiments | We agree with the importance of diversity in target and source tasks. We included SuperGLUE for this very purpose, as it presents considerably more challenging tasks than GLUE, which are also more varied in terms of tasks and corpora sizes. We relied on GLUE and SuperGLUE as we wanted to use benchmarks that are most co... | CRP |
ScaLearn: Simple and Highly Parameter-Efficient Task Transfer by Learning to Scale | 6cMmSnOpCs | ICLR-2024 | pFHczL41FY | Some paragraphs are very long and hard to parse (e.g. "Models and Baselines" on page 6), and could be written in a more organized manner in my opinion. | Writing | We thank the reviewer for highlighting the importance of clarity in our paper. In response to the reviewer's valuable feedback, we have thoroughly revised the "Models and baselines" section on page 6 to enhance its readability. Additionally, we have reviewed the entire manuscript to ensure clarity in all sections. We a... | CRP |
ScaLearn: Simple and Highly Parameter-Efficient Task Transfer by Learning to Scale | 6cMmSnOpCs | ICLR-2024 | pFHczL41FY | Did the authors try other parameterizations of the fusion module? For example, a low-rank MLP per task instead of a vector per task would be a step towards finding a sweet spot (if it exists) between attention-based fusing and ScaLearn. It's also not clear to me whether a task-vector-per-layer would be better than an M... | Experiments | We acknowledge the idea of a broader study encompassing a larger range of parametrizations; however, it would meaningfully deviate from the core contribution of the paper. Our experiments on scaling the output representations of adapters have provided a strong motivation for learning the scaling coefficients to combine... | DWC |
Efficient Point Cloud Matching for 3D Geometric Shape Assembly | 6cGiRiExUd | ICLR-2024 | glhEEqqWAz | The whole pipeline is developed upon GeoTransformer (CVPR 2022) and uses a majority of the previous design. The differences are the use of PMT to replace the Self- and Cross-Attention mechanisms in the coare level, as well as the use of PMT after each stage in decoder. I think this weakens the contribution and novelty ... | Novelty | We recognize that the presentation of our method might raise questions about the uniqueness of our technical contribution. It's important to clarify that our approach is built upon the coarse-to-fine matching framework, which serves as a flexible and shared platform for efficient matching across diverse domains [1,2,3,... | DWC |
Efficient Point Cloud Matching for 3D Geometric Shape Assembly | 6cGiRiExUd | ICLR-2024 | glhEEqqWAz | The methodology part is overly complex, and I do not think it is organized well and easy to follow. | Presentation | We acknowledge our method section was not easy to follow, and the concerns raised about its complexity and organization. In response, we have revised Section 3 and added an overview in the early part of the section.<br>This revision is primarily focused on improving the clarity of notations and presenting the methods in ... | CRP |
Efficient Point Cloud Matching for 3D Geometric Shape Assembly | 6cGiRiExUd | ICLR-2024 | glhEEqqWAz | In Eq. (2) and the following equations, the calculation of attention matrix $\mathbf{A}$ is unclear. | Writing | We would like to thank the reviewer for the careful reading and the constructive feedback.<br>To enhance readability and organizational clarity, we have included the absent descriptions of the attention matrix in this section. Additionally, we have made revisions to the methodology section of the manuscript. Kindly review... | CRP |
Efficient Point Cloud Matching for 3D Geometric Shape Assembly | 6cGiRiExUd | ICLR-2024 | glhEEqqWAz | It is misleading that the proposed PMT is used to replace the attention mechanisms used in GeoTransformer, while in Fig. 1 it is compared to the convolutions. Also, in many other places convolutions are introduced, but all the computation of PMT seems like attention-based. | Presentation | We apologize for any confusion in our explanation. The relation between convolution and attention is described in Lemma 1 of the appendix.<br>To clarify it in the main paper, we added the related theorem in the preliminary of Section 3, saying that multi-head self-attention (MHSA) can express convolution, as theoretic... | CRP |
Efficient Point Cloud Matching for 3D Geometric Shape Assembly | 6cGiRiExUd | ICLR-2024 | glhEEqqWAz | In Eq. (2), it seems the output of PMT is the enhanced features, while in Eq. (4), the output is some correlation scores. Do I make a mistake in understanding this? | Theory | $\text{PMT}(\mathbf{F}_{\mathcal{X}}) \coloneqq \sum_{h \in [N_h]} \mathbf{A}_{\mathcal{X}}^{(h)} \mathbf{F}_{\mathcal{X}} \mathbf{P}^{(h)\top} w_{\mathcal{X}}^{(h)}.$<br>In Eq. 2 (above), $\text{PMT}(\mathbf{F}_{\mathcal{X}})$ is indeed intended to represent the enhanced feature, rather than any form of... | DWC |
Efficient Point Cloud Matching for 3D Geometric Shape Assembly | 6cGiRiExUd | ICLR-2024 | glhEEqqWAz | The experiments are only conducted on synthetic dataset, which makes me doubt its value in real applications. Therefore, it is better to include some real data. If there is no real data in this task, as this method is strongly based on GeoTransformer, simply running on GeoTransformer's benchmark also makes sense. | Experiments | We have extended our experimental evaluation to include the Fantastic Breaks dataset (CVPR 2023) [1], which consists of real data samples for shape re-assembling. We acknowledge that this dataset is relatively small, containing only 150 samples. Still, we believe it provides a valuable preliminary indication of our mod... | CRP |
Efficient Point Cloud Matching for 3D Geometric Shape Assembly | 6cGiRiExUd | ICLR-2024 | glhEEqqWAz | The methodology part should be re-organized and the symbols simplified. Fig. 1 does not help understand the main contributions. | Presentation | We appreciate pointing this out again. In response to the concerns, we have revised the method section in our manuscript to improve its readability. We have also made efforts to simplify the symbols and re-organize the content to make it more accessible to readers. Please refer to our revised manuscript for the changes... | CRP |
Efficient Point Cloud Matching for 3D Geometric Shape Assembly | 6cGiRiExUd | ICLR-2024 | SwhV0f9ekf | The paper is written in a way that is rather difficult to follow -- I would suggest doing another pass (maybe with feedback from some external readers) to make the language more simple and streamlined. | Writing | We acknowledge our method section was not easy to follow, and the concerns raised about its complexity and organization. In response, we have revised Section 3 and added an overview in the early part of the section.<br>This revision is primarily focused on improving the clarity of notations and presenting the methods in ... | CRP |
Efficient Point Cloud Matching for 3D Geometric Shape Assembly | 6cGiRiExUd | ICLR-2024 | SwhV0f9ekf | I am also not entirely sure of the magnitude of the contribution. The proxy match transform is a clever trick that significantly enhances efficiency and accuracy in real-world training scenarios. But it sits inside a large pipeline that draws heavily upon previous work, and it is difficult to gauge its conceptual contr... | Novelty | We recognize that the presentation of our method might raise questions about the uniqueness of our technical contribution. It's important to clarify that our approach is built upon the coarse-to-fine matching framework, which serves as a flexible and shared platform for efficient matching across diverse domains [1,2,3,... | DWC |
Efficient Point Cloud Matching for 3D Geometric Shape Assembly | 6cGiRiExUd | ICLR-2024 | SwhV0f9ekf | Since "shape assembly" is more commonly used to refer to assembling shapes from parts (e.g. a chair from seat, back and legs), it might be clearer to use "shape re-assembly" instead, or even "fractured shape re-assembly". | Writing | Thank you for your suggestion to use "shape re-assembly" or "fractured shape re-assembly" in place of "shape assembly." We understand the reasoning behind this suggestion.<br>However, our decision to use the term "geometric shape assembly" was guided by the precedent set in pioneering research by Sellán et al. (2022)... | DWC |
Efficient Point Cloud Matching for 3D Geometric Shape Assembly | 6cGiRiExUd | ICLR-2024 | SwhV0f9ekf | "... two sets of features, $\mathcal{F}_P$ and $\mathcal{F}_Q$, associated with each point cloud" --> this reads as: each point cloud has two sets of features. You might want to rephrase as "... two sets of features $\mathcal{F}_P$ and $\mathcal{F}_Q$ associated with the two point clouds respectively" or something like... | Writing | Appreciate the constructive feedback. We've refined the wording to eliminate ambiguity. Kindly consult the updated version of our manuscript for reference. | VCR |
Efficient Point Cloud Matching for 3D Geometric Shape Assembly | 6cGiRiExUd | ICLR-2024 | SwhV0f9ekf | Please don't use $P$, $\mathbf{P}$ and $\mathcal{P}$ to denote totally different things (near Eq. 3). It's super-confusing. | Writing | Thanks for the suggestions. In response, we've revised the notations within the equations. Notably, we've adjusted the notations as follows: $\mathbf{P}$ represents the proxy, same as before. The spatial resolution of the proxy is now denoted by $D_{proxy}$ rather than $P$. Additionally, we've introduced $\mathcal{X}$ ... | CRP |
Efficient Point Cloud Matching for 3D Geometric Shape Assembly | 6cGiRiExUd | ICLR-2024 | WBdR4AoPZu | Limited theoretical insight: the paper feels a bit ad hoc, in the sense of "I have done this and it works." It is not clear how the architecture was derived. | Theory | **[W1&2]. Lack of motivation & theoretical insight**<br>**Answer:**<br>**Motivation and derivation:**<br>As outlined in our paper's introduction, high-order feature transform methods have been known to excel in addressing matching problems by capturing structural patterns of correlations in high-dimensional spaces. It is mor... | DWC |
Local Search GFlowNets | 6cFcw1Rxww | ICLR-2024 | DN3NBQldAd | The proposed method adds some computational overhead | Evaluation | **W1: About computational overhead**<br>LS-GFN does not introduce any additional computational overhead. Below are the wall clock times for each training round in the QM9 task.<br>\| \| $I$ \| $M$ \| Wall clock time per round \|<br>\| -------- \| -------- \| -------- \| -------- \|<br>\| TB \| 0 \| 32 \| 1.93 ± 0.13 seconds \|<br>\| T... | CRP |
Local Search GFlowNets | 6cFcw1Rxww | ICLR-2024 | DN3NBQldAd | How is the number of steps K picked for backtracking? If it's fixed, is there a way to pick it automatically? | Experiments | **Q1: How is the number of steps K picked for backtracking? If it's fixed, is there a way to pick it automatically?**<br>We used $K=\lfloor (L+1) / 2\rfloor$, where $L$ is the length of the trajectory. We also provided a hyperparameter analysis in the original manuscript in Appendix B.5. As you mentioned, we can automatically pi... | DWC |
Local Search GFlowNets | 6cFcw1Rxww | ICLR-2024 | JXChBRaaIB | The paper does not declare the sampling complexity of the proposed method. The local search may require more sampling, which can lead to an unfair comparison. | Experiments | Thank you for highlighting this for clarification. All our experiments were conducted under a fair setting, as we used exactly the same number of samples across all experiments. Note this is already outlined in Section 5.3.<br>The calculation of sampling complexity follows this formula: $(I+1) \times M$, where $I$ stands... | DWC |
Local Search GFlowNets | 6cFcw1Rxww | ICLR-2024 | JXChBRaaIB | The limitations of the proposed algorithm and potential drawbacks are not discussed in detail. | Evaluation | A limitation of LS-GFN lies in the potential impact of the quality of the backward policy on its performance, particularly when the acceptance rate of the local search becomes excessively low. One immediate remedy is to introduce an exploratory element into the backward policy, utilizing techniques like $\epsilon$-gree... | SRP |
Local Search GFlowNets | 6cFcw1Rxww | ICLR-2024 | JXChBRaaIB | What about the sampling complexity of local search? Can you conduct experiments under the same sampling complexity or compare your algorithm with an upper bound sampling times? | Experiments | Thank you for highlighting this for clarification. All our experiments were conducted under a fair setting, as we used exactly the same number of samples across all experiments. Note this is already outlined in Section 5.3.<br>The calculation of sampling complexity follows this formula: $(I+1) \times M$, where $I$ stands... | DWC |
Local Search GFlowNets | 6cFcw1Rxww | ICLR-2024 | JXChBRaaIB | How to decide the local search interaction I? What about the results with different I? | Experiments | We have conducted an ablation study on the hyperparameter $I$, as now detailed in Appendix B.3.<br>It's worth noting that across various hyperparameter candidates, namely $I \in \{1, 3, 7, 15, 31\}$, LS-GFN ($I > 0$) consistently outperforms GFN ($I = 0$). This observation suggests that choosing the appropriate hyperpar... | CRP |
Local Search GFlowNets | 6cFcw1Rxww | ICLR-2024 | JXChBRaaIB | As PRT is adopted in LS-GFN training, is PER used for RL methods in the experiments? | Experiments | Taking into consideration your feedback, we implemented reward-prioritized experience replay training in conjunction with the off-policy RL baseline, SQL. It's worth noting that PPO and A2C are on-policy RL methods in which replay training is not directly utilized. Here are new experiment results that compare replay-tr... | CRP |
Local Search GFlowNets | 6cFcw1Rxww | ICLR-2024 | jazpW7vg2h | In addition to the hyper-parameter $I$ (number of revisions with local searches), I wished the number of backtracking steps and the acceptance rate were also studied — how much tuning did they require in order to get these good results? | Experiments | **W2: In addition to the hyper-parameter
(number of revisions with local searches), I wished the number of backtracking steps and the acceptance rate were also studied — how much tuning did they require in order to get these good results?**
We've already conducted experiments on hyperparameter $I$ in Appendix B.3, ... | DWC |
Local Search GFlowNets | 6cFcw1Rxww | ICLR-2024 | jazpW7vg2h | Wall-clock time is never mentioned — how much overhead does the refining process of LS-GFNs incur? How about in terms of sampled states (instead of training rounds)? | Evaluation | **W3: Wall-clock time is never mentioned — how much overhead does the refining process of LS-GFNs incur? How about in terms of sampled states (instead of training rounds)?**<br>There's negligible additional wall clock time overhead.<br>Here are the wall clock times for each training round in the QM9 task. Wall clock time ... | CRP |
Local Search GFlowNets | 6cFcw1Rxww | ICLR-2024 | jazpW7vg2h | One baseline I wished were included: given a first trajectory, resample the last K steps and only keeping the best candidate. This is akin to beam search in LLMs or go-explore in RL. Other nice-to-have baselines include top-p and top-k sampling. | Experiments | **W4: One baseline I wished was included: given a first trajectory, resample the last K steps, and only keeping the best candidate. This is akin to beam search in LLMs or go-explore in RL. Other nice-to-have baselines include top-p and top-k sampling.**<br>Thank you for recommending baselines that can effectively showcas... | CRP |
Local Search GFlowNets | 6cFcw1Rxww | ICLR-2024 | jazpW7vg2h | Why call it “destroy” since the original trajectory isn’t always discarded? A more intuitive name could be “backtrack”, “rewind”, or anything that doesn’t suggest destruction. | Writing | **Q1: Why call it “destroy” since the original trajectory isn’t always discarded? A more intuitive name could be “backtrack,” “rewind,” or anything that doesn’t suggest destruction.**<br>We are in agreement on this matter. We have updated our manuscript to incorporate the term "backtrack" instead of "destroy." | CRP |
Local Search GFlowNets | 6cFcw1Rxww | ICLR-2024 | jazpW7vg2h | The top plots in Figure 5 are strange: it looks like some curves go beyond 100% accuracy. Could you either fix them so we can still see the curves on the plot, or explain what is happening? | Presentation | **Q2: The top plots in Figure 5 are strange: it looks like some curves go beyond 100% accuracy. Could you either fix them so we can still see the curves on the plot or explain what is happening?**<br>Please note that Figure 5 is plotted accurately in accordance with the baseline from the paper by Shen et al., 2023, where... | CRP |
Local Search GFlowNets | 6cFcw1Rxww | ICLR-2024 | jazpW7vg2h | How can LS-GFNs recover from a biased backward policy? In other words, assume the forward policy is fine but the backward policy always backtracks to states which yield the same (high reward) candidate objects — how can LS-GFNs overcome this lack of exploration? | Theory | **Q3: How can LS-GFNs recover from a biased backward policy? In other words, assume the forward policy is fine but the backward policy always backtracks to states that yield the same (high reward) candidate objects — how can LS-GFNs overcome this lack of exploration?**<br>Biased backward policies can indeed pose challeng... | VCR |
Local Search GFlowNets | 6cFcw1Rxww | ICLR-2024 | jazpW7vg2h | Please confirm that lines 7 and 8 in Algorithm 1 aren’t swapped. If they aren’t (which partially addresses my question above), wouldn’t swapping them and extending $\mathcal{D}$ with $\tau_m$ further improve exploitation? Maybe this should be added as an ablation as well. | Experiments | **Q4: Please confirm that lines 7 and 8 in Algorithm 1 aren’t swapped. If they aren’t (which partially addresses my question above), wouldn’t swapping them and extending them further improve exploitation? Maybe this should be added as an ablation as well.**<br>No swapping is involved here. We update every sample to the ... | CRP |