Dataset schema (one record per row below; fields appear in this order):
- review_point: string (lengths 45–642)
- paper_id: string (lengths 10–19)
- venue: string (15 classes)
- focused_review: string (lengths 200–10.5k)
- batch: int64 (range 2–10)
- actionability: dict
- actionability_label: string (5 classes)
- actionability_label_type: string (1 class)
- id: int64 (range 31–1.53k)
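A minimal sketch of how records with this schema might be loaded and inspected in Python. The file name `review_points.jsonl` and the JSON-lines layout are assumptions made for illustration; they are not stated in the dataset description above.

```python
# Hedged sketch: read one JSON object per line and inspect the fields listed
# in the schema above. The path "review_points.jsonl" is hypothetical.
import json

records = []
with open("review_points.jsonl", "r", encoding="utf-8") as f:
    for line in f:
        records.append(json.loads(line))

first = records[0]
# Short extracted point, its source identifiers, and the annotation label.
print(first["paper_id"], first["venue"], first["actionability_label"])
print(first["review_point"][:80])
```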
- Line 226-238 seem to suggest that the authors selected sentences from raw data of these sources, but line 242-244 say these already have syntactic information. If I understand correctly, the data selected is a subset of Li et al. (2019a)’s dataset. If this is the case, I think this description can be revised, e.g. me...
ARR_2022_65_review
ARR_2022
1. The paper covers little qualitative aspects of the domains, so it is hard to understand how they differ in linguistic properties. For example, I think it is vague to say that the fantasy novel is more “canonical” (line 355). Text from a novel may be similar to that from news articles in that sentences tend to be com...
2
{ "annotators": [ "6740484e188a64793529ee77", "6686ebe474531e4a1975636f", "boda" ], "labels": [ "5", "5", "5" ] }
5
gold
31
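The first record above stores the actionability field as a dict of annotator IDs and their labels, alongside actionability_label "5" with label type "gold". Below is a small sketch of aggregating those per-annotator votes into a single label by majority vote; treating the released label as a majority vote is an assumption made only for illustration (in the rows shown, all three annotators agree anyway).

```python
# Hedged sketch: aggregate per-annotator votes into a single label.
# Assumes the actionability dict layout shown in the record above.
from collections import Counter

def aggregate_label(actionability: dict) -> str:
    """Return the most frequent label among the annotators' votes."""
    label, _count = Counter(actionability["labels"]).most_common(1)[0]
    return label

example = {
    "annotators": ["6740484e188a64793529ee77", "6686ebe474531e4a1975636f", "boda"],
    "labels": ["5", "5", "5"],
}
assert aggregate_label(example) == "5"
```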
- Table 4 needs a little more clarification, what splits are used for obtaining the ATIS numbers? I thank the authors for their response.
ACL_2017_726_review
ACL_2017
- Claims of being comparable to state of the art when the results on GeoQuery and ATIS do not support it. General Discussion: This is a sound work of research and could have future potential in the way semantic parsing for downstream applications is done. I was a little disappointed with the claims of “near-state-of-th...
2
{ "annotators": [ "6740484e188a64793529ee77", "6686ebe474531e4a1975636f", "boda" ], "labels": [ "5", "5", "5" ] }
5
gold
33
781 "both tasks": antecedent missing The references should be checked for format, e.g. Grice, Sorower et al for capitalization, the verbnet reference for bibliographic details.
ACL_2017_818_review
ACL_2017
1) Many aspects of the approach need to be clarified (see detailed comments below). What worries me the most is that I did not understand how the approach makes knowledge about objects interact with knowledge about verbs such that it allows us to overcome reporting bias. The paper gets very quickly into highly technica...
2
{ "annotators": [ "6740484e188a64793529ee77", "6686ebe474531e4a1975636f", "boda" ], "labels": [ "5", "5", "5" ] }
5
gold
37
- The abstract is written well and invokes intrigue early - could potentially be made even better if, for "evaluating with gold answers is inconsistent with human evaluation" - an example of the inconsistency, such as models get ranked differently is also given there.
ARR_2022_227_review
ARR_2022
1. The case made for adopting the proposed strategy for a new automated evaluation paradigm - auto-rewrite (where the questions that are not valid due to a coreference resolution failure in terms of the previous answer get their entity replaced to be made consistent with the gold conversational history) - seems weak. W...
2
{ "annotators": [ "6740484e188a64793529ee77", "6686ebe474531e4a1975636f", "boda" ], "labels": [ "5", "5", "5" ] }
5
gold
44
1. Some discussions are required on the convergence of the proposed joint learning process (for RNN and CopyRNN), so that readers can understand, how the stable points in probabilistic metric space are obtained? Otherwise, it may be tough to repeat the results.
ACL_2017_699_review
ACL_2017
1. Some discussions are required on the convergence of the proposed joint learning process (for RNN and CopyRNN), so that readers can understand, how the stable points in probabilistic metric space are obtained? Otherwise, it may be tough to repeat the results. 2. The evaluation process shows that the current system (w...
2
{ "annotators": [ "6740484e188a64793529ee77", "6686ebe474531e4a1975636f", "boda" ], "labels": [ "4", "4", "4" ] }
4
gold
46
- In figure 5, the y-axis label may use "Exact Match ratio" directly.
ARR_2022_113_review
ARR_2022
The methodology part is a little bit unclear. The author could describe clearly how the depth-first path completion really works using Figure 3. Also, I'm not sure if the ZIP algorithm is proposed by the authors and also confused about how the ZIP algorithm handles multiple sequence cases. - Figure 2, it is not clear a...
2
{ "annotators": [ "6740484e188a64793529ee77", "6686ebe474531e4a1975636f", "boda" ], "labels": [ "5", "5", "5" ] }
5
gold
51
- In section 2.3 the authors use Lample et al. Bi-LSTM-CRF model, it might be beneficial to add that the input is word embeddings (similarly to Lample et al.) - Figure 3, KNs in source language or in English? ( since the mentions have been translated to English). In the authors' response, the authors stated that they w...
ACL_2017_71_review
ACL_2017
-The explanation of methods in some paragraphs is too detailed and there is no mention of other work and it is repeated in the corresponding method sections, the authors committed to address this issue in the final version. -README file for the dataset [Authors committed to add README file] - General Discussion: - Sect...
2
{ "annotators": [ "6740484e188a64793529ee77", "6686ebe474531e4a1975636f", "boda" ], "labels": [ "5", "5", "5" ] }
5
gold
56
- Two things must be improved in the presentation of the model: (1) What is the pooling method used for embedding features (line 397)? and (2) Equation (7) in line 472 is not clear enough: is E_i the random variable representing the *type* of AC i, or its *identity*? Both are supposedly modeled (the latter by feature r...
ACL_2017_483_review
ACL_2017
- 071: This formulation of argumentation mining is just one of several proposed subtask divisions, and this should be mentioned. For example, in [1], claims are detected and classified before any supporting evidence is detected. Furthermore, [2] applied neural networks to this task, so it is inaccurate to say (as is cl...
2
{ "annotators": [ "6740484e188a64793529ee77", "6686ebe474531e4a1975636f", "boda" ], "labels": [ "5", "5", "5" ] }
5
gold
64
1. The paper raises two hypotheses in lines 078-086 about multilinguality and country/language-specific bias. While I don't think the hypotheses are phrased optimally (could they be tested as given?), their underlying ideas are valuable. However, the paper actually does not really study these hypotheses (nor are they e...
ARR_2022_215_review
ARR_2022
1. The paper raises two hypotheses in lines 078-086 about multilinguality and country/language-specific bias. While I don't think the hypotheses are phrased optimally (could they be tested as given?), their underlying ideas are valuable. However, the paper actually does not really study these hypotheses (nor are they e...
2
{ "annotators": [ "6740484e188a64793529ee77", "6686ebe474531e4a1975636f", "boda" ], "labels": [ "4", "4", "4" ] }
4
gold
65
2. Would the use of feature engineering help in improving the performance? Uto et al. (2020)'s system reaches a QWK of 0.801 by using a set of hand-crafted features. Perhaps using Uto et al. (2020)'s same feature set could also improve the results of this work.
ARR_2022_121_review
ARR_2022
1. The writing needs to be improved. Structurally, there should be a "Related Work" section which would inform the reader that this is where prior research has been done, as well as what differentiates the current work with earlier work. A clear separation between the "Introduction" and "Related Work" sections would ce...
2
{ "annotators": [ "6740484e188a64793529ee77", "6686ebe474531e4a1975636f", "boda" ], "labels": [ "5", "5", "5" ] }
5
gold
66
2. It would be nice to include the hard prompt baseline in Table 1 to see the increase in performance of each method.
gybvlVXT6z
EMNLP_2023
1. I feel that paper has insufficiant baseline. For example, CoCoOp (https://arxiv.org/abs/2203.05557) is a widely used baseline for prompt tuning research in CLIP. Moreover, it would be nice to include the natural data shift setting as in most other prompt tuning papers for CLIP. 2. It would be nice to include the har...
3
{ "annotators": [ "6740484e188a64793529ee77", "6686ebe474531e4a1975636f", "boda" ], "labels": [ "5", "5", "5" ] }
5
gold
75
1. The experimental comparisons are not enough. Some methods like MoCo and SimCLR also test the results with wider backbones like ResNet50 (2×) and ResNet50 (4×). It would be interesting to see the results of proposed InvP with these wider backbones.
NIPS_2020_295
NIPS_2020
1. The experimental comparisons are not enough. Some methods like MoCo and SimCLR also test the results with wider backbones like ResNet50 (2×) and ResNet50 (4×). It would be interesting to see the results of proposed InvP with these wider backbones. 2. Some methods use epochs and pretrain epochs as 200, while the repo...
3
{ "annotators": [ "6740484e188a64793529ee77", "6686ebe474531e4a1975636f", "boda" ], "labels": [ "5", "5", "5" ] }
5
gold
77
2: the callout to table 5 should go to table 3, instead. Page 7, section 5, last par.: figure 6 callout is not directing properly
ICLR_2023_977
ICLR_2023
the evaluation section has 2 experiments, but only 2 very insightful detailed examples. The paper can use a few more examples to illustrate more differences of the output sequences. This would allow the reader to internalize how the non-monotonicity in a deeper way. Questions: In details, how does the decoding algorith...
3
{ "annotators": [ "6740484e188a64793529ee77", "6686ebe474531e4a1975636f", "boda" ], "labels": [ "5", "5", "5" ] }
5
gold
83
2 More analysis and comments are recommended on the performance trending of increasing the number of parameters for ViT (DeiT) in the Figure 3. I disagree with authors' viewpoint that "Both CNNs and ViTs seem to benefit similarly from increased model capacity". In the Figure 3, the DeiT-B models does not outperform Dei...
ICLR_2022_1794
ICLR_2022
1 Medical imaging are often obtained in 3D volumes, not only limited to 2D images. So experiments should include the 3D volume data as well for the general community, rather than all on 2D images. And the lesion detection is another important task for the medical community, which has not been studied in this work. 2 Mo...
3
{ "annotators": [ "6740484e188a64793529ee77", "6686ebe474531e4a1975636f", "boda" ], "labels": [ "5", "5", "5" ] }
5
gold
85
- The paper is not difficult to follow, but there are several places that are may cause confusion. (listed in point 3).
ICLR_2022_3352
ICLR_2022
+ The problem studied in this paper is definitely important in many real-world applications, such as robotics decision-making and autonomous driving. Discovering the underlying causation is important for agents to make reasonable decisions, especially in dynamic environments. + The method proposed in this paper is inte...
3
{ "annotators": [ "6740484e188a64793529ee77", "6686ebe474531e4a1975636f", "boda" ], "labels": [ "1", "1", "1" ] }
1
gold
86
3: To further backup the proposed visual reference resolution model works in real dataset, please also conduct ablation study on visDial dataset. One experiment I'm really interested is the performance of ATT(+H) (in figure 4 left). What is the result if the proposed model didn't consider the relevant attention retriev...
NIPS_2017_356
NIPS_2017
] My major concerns about this paper is the experiment on visual dialog dataset. The authors only show the proposed model's performance on discriminative setting without any ablation studies. There is not enough experiment result to show how the proposed model works on the real dataset. If possible, please answer my fo...
3
{ "annotators": [ "6740484e188a64793529ee77", "6686ebe474531e4a1975636f", "boda" ], "labels": [ "5", "5", "5" ] }
5
gold
92
- As mentioned in the previous question, the distribution of videos of different lengths within the benchmark is crucial for the assessment of reasoning ability and robustness, and the paper does not provide relevant explanations. The authors should include a table showing the distribution of video lengths across the d...
BTr3PSlT0T
ICLR_2025
- I express skepticism about whether the number of videos in the benchmark can achieve a robust assessment. The CVRR-ES benchmark includes only 214 videos, with the shortest video being just 2 seconds. Upon reviewing several videos from the anonymous link, I noticed a significant proportion of short videos. I question ...
3
{ "annotators": [ "6740484e188a64793529ee77", "6686ebe474531e4a1975636f", "boda" ], "labels": [ "5", "5", "5" ] }
5
gold
100
8.L290: it would be good to clarify how the implemented billinear layer is different from other approaches which do billinear pooling. Is the major difference the dimensionality of embeddings? How is the billinear layer swapped out with the hadarmard product and MCB approaches? Is the compression of the representations...
NIPS_2017_53
NIPS_2017
Weakness 1. When discussing related work it is crucial to mention related work on modular networks for VQA such as [A], otherwise the introduction right now seems to paint a picture that no one does modular architectures for VQA. 2. Given that the paper uses a billinear layer to combine representations, it should menti...
3
{ "annotators": [ "6740484e188a64793529ee77", "6686ebe474531e4a1975636f", "boda" ], "labels": [ "5", "5", "5" ] }
5
gold
101
1. The novelty is limited. The proposed method is too similar to other attentional modules proposed in previous works [1, 2, 3]. The group attention design seems to be related to ResNeSt [4] but it is not discussed in the paper. Although these works did not evaluate their performance on object detection and instance se...
ICLR_2023_3203
ICLR_2023
1. The novelty is limited. The proposed method is too similar to other attentional modules proposed in previous works [1, 2, 3]. The group attention design seems to be related to ResNeSt [4] but it is not discussed in the paper. Although these works did not evaluate their performance on object detection and instance se...
3
{ "annotators": [ "6740484e188a64793529ee77", "6686ebe474531e4a1975636f", "boda" ], "labels": [ "1", "1", "1" ] }
1
gold
103
1. Fig. 3 e. Since the preactivation values of two networks are the same membrane potentials, their output cosine similarity will be very high. Why not directly illustrate the results of the latter loss term of Eqn 13?
ICLR_2023_2283
ICLR_2023
1. 1. The symbols in Section 4.3 are not very clearly explained. 2. This paper only experiments on the very small time steps (e.g.1、2) and lack of some experiments on slightly larger time steps (e.g. 4、6) to make better comparisons with other methods. I think it is necessary to analyze the impact of the time step on th...
3
{ "annotators": [ "6740484e188a64793529ee77", "6686ebe474531e4a1975636f", "boda" ], "labels": [ "5", "5", "5" ] }
5
gold
104
- This paper investigates the issue of robustness in video action recognition, but it lacks comparison with test-time adaptation (TTA) methods, such as [A-B]. These TTA methods also aim to adapt to out-of-distribution data when the input data is disturbed by noise. Although these TTA methods mainly focus on updating mo...
eI6ajU2esa
ICLR_2024
- This paper investigates the issue of robustness in video action recognition, but it lacks comparison with test-time adaptation (TTA) methods, such as [A-B]. These TTA methods also aim to adapt to out-of-distribution data when the input data is disturbed by noise. Although these TTA methods mainly focus on updating mo...
3
{ "annotators": [ "6740484e188a64793529ee77", "6686ebe474531e4a1975636f", "boda" ], "labels": [ "5", "5", "5" ] }
5
gold
105
4. Section 3.2.1: The first expression for $J(\theta)$ is incorrect, which should be $Q(s_{t_0}, \pi_\theta(s_{t_0}))$.
ICLR_2021_863
ICLR_2021
Weakness 1. The presentation of the paper should be improved. Right now all the model details are placed in the appendix. This can cause confusion for readers reading the main text. 2. The necessity of using techniques includes Distributional RL and Deep Sets should be explained more thoroughly. From this paper, the il...
3
{ "annotators": [ "6740484e188a64793529ee77", "6686ebe474531e4a1975636f", "boda" ], "labels": [ "5", "5", "5" ] }
5
gold
106
8: s/expensive approaches2) allows/expensive approaches,2) allows/ p.8: s/estimates3) is/estimates, and3) is/ In the references: Various words in many of the references need capitalization, such as "ai" in Amodei et al. (2016), "bayesian" in many of the papers, and "Advances in neural information processing systems" in...
ICLR_2021_872
ICLR_2021
The authors push on the idea of scalable approximate inference, yet the largest experiment shown is on CIFAR-10. Given this focus on scalability, and the experiments in recent literature in this space, I think experiments on ImageNet would greatly strengthen the paper (though I sympathize with the idea that this can a ...
3
{ "annotators": [ "6740484e188a64793529ee77", "6686ebe474531e4a1975636f", "boda" ], "labels": [ "5", "5", "5" ] }
5
gold
107
3. I didn't find all parameter values. What are the model parameters for task 1? What lambda was chosen for the Boltzmann policy. But more importantly: How were the parameters chosen? Maximum likelihood estimates?
NIPS_2016_339
NIPS_2016
weakness of the model. How would the values in table 1 change without this extra assumption? 3. I didn't find all parameter values. What are the model parameters for task 1? What lambda was chosen for the Boltzmann policy. But more importantly: How were the parameters chosen? Maximum likelihood estimates? 4. An answer ...
3
{ "annotators": [ "6740484e188a64793529ee77", "6686ebe474531e4a1975636f", "boda" ], "labels": [ "5", "5", "5" ] }
5
gold
108
1. The authors should make clear the distinction of when the proposed method is trained using only weak supervision and when it is semi-supervised trained. For instance, in Table 1, I think the proposed framework row refers to the semi-supervised version of the method, thus the authors should rename the column to ‘Full...
4N97bz1sP6
ICLR_2024
1. The authors should make clear the distinction of when the proposed method is trained using only weak supervision and when it is semi-supervised trained. For instance, in Table 1, I think the proposed framework row refers to the semi-supervised version of the method, thus the authors should rename the column to ‘Full...
3
{ "annotators": [ "6740484e188a64793529ee77", "6686ebe474531e4a1975636f", "boda" ], "labels": [ "5", "5", "5" ] }
5
gold
121
- Small contributions over previous methods (NCNet [6] and Sparse NCNet [21]). Mostly (good) engineering. And despite that it seems hard to differentiate it from its predecessors, as it performs very similarly in practice.
NIPS_2020_1454
NIPS_2020
- Small contributions over previous methods (NCNet [6] and Sparse NCNet [21]). Mostly (good) engineering. And despite that it seems hard to differentiate it from its predecessors, as it performs very similarly in practice. - Claims to be SOTA on three datasets, but this does not seem to be the case. Does not evaluate o...
3
{ "annotators": [ "6740484e188a64793529ee77", "6686ebe474531e4a1975636f", "boda" ], "labels": [ "1", "1", "1" ] }
1
gold
122
- "semantic" segmentation is not low-level since the categories are specified for each pixel so the statements about semantic segmentation being a low-level cue should be removed from the paper.
NIPS_2018_25
NIPS_2018
- My understanding is that R,t and K (the extrinsic and intrinsic parameters of the camera) are provided to the model at test time for the re-projection layer. Correct me in the rebuttal if I am wrong. If that is the case, the model will be very limited and it cannot be applied to general settings. If that is not the c...
3
{ "annotators": [ "6740484e188a64793529ee77", "6686ebe474531e4a1975636f", "boda" ], "labels": [ "5", "5", "5" ] }
5
gold
123
2.b) On lines 182-183 the authors note measuring manifold capacity for unperturbed images, i.e. clean exemplar manifolds. Earlier they state that the exemplar manifolds are constructed using either adversarial perturbations or from stochasticity of the network. So I’m wondering how one constructs images for a clean exe...
NIPS_2021_1222
NIPS_2021
Claims: 1.a) I think the paper falls short of the high-level contributions claimed in the last sentence of the abstract. As the authors note in the background section, there are a number of published works that demonstrate the tradeoffs between clean accuracy, training with noise perturbations, and adversarial robustne...
3
{ "annotators": [ "6740484e188a64793529ee77", "6686ebe474531e4a1975636f", "boda" ], "labels": [ "5", "5", "5" ] }
5
gold
130
1) Originality is limited because the main idea of variable splitting is not new and the algorithm is also not new.
NIPS_2018_476
NIPS_2018
Weakness] 1) Originality is limited because the main idea of variable splitting is not new and the algorithm is also not new. 2) Theoretical proofs of existing algorithm might be regarded as some incremental contributions. 3) Experiments are somewhat weak: 3-1) I was wondering why Authors conducted experiments with lam...
3
{ "annotators": [ "6740484e188a64793529ee77", "6686ebe474531e4a1975636f", "boda" ], "labels": [ "1", "1", "1" ] }
1
gold
134
2. Some ablation study is missing, which could cause confusion and extra experimentation for practitioners. For example, the \sigma in the RBF kernel seems to play a crucial role, but no analysis is given on it. Figure 4 analyzes how changing \lambda changes the performance, but it would be nice to see how \eta and \...
NIPS_2019_1131
NIPS_2019
1. There is no discussion on the choice of "proximity" and the nature of the task. On the proposed tasks, proximity on the fingertip Cartesian positions is strongly correlated with proximity in the solution space. However, this relationship doesn't hold for certain tasks. For example, in a complicated maze, two nearby ...
3
{ "annotators": [ "6740484e188a64793529ee77", "6686ebe474531e4a1975636f", "boda" ], "labels": [ "5", "5", "5" ] }
5
gold
138
1) The paper was extremely hard to follow. I read it multiple times and still had trouble following the exact experimental procedures and evaluations that the authors conducted.
5UW6Mivj9M
EMNLP_2023
1) The paper was extremely hard to follow. I read it multiple times and still had trouble following the exact experimental procedures and evaluations that the authors conducted. 2) Relatedly, it was hard to discern what was novel in the paper and what had already been tried by others. 3) Since the improvement in number...
3
{ "annotators": [ "6740484e188a64793529ee77", "6686ebe474531e4a1975636f", "boda" ], "labels": [ "1", "1", "1" ] }
1
gold
139
- Limited Experiments - Most of the experiments (excluding Section 4.1.1) are limited to RoBERTa-base only, and it is unclear if the results can be generalized to other models adopting learnable APEs. It is important to investigate whether the results can be generalized to differences in model size, objective function,...
zpayaLaUhL
EMNLP_2023
- Limited Experiments - Most of the experiments (excluding Section 4.1.1) are limited to RoBERTa-base only, and it is unclear if the results can be generalized to other models adopting learnable APEs. It is important to investigate whether the results can be generalized to differences in model size, objective function,...
3
{ "annotators": [ "6740484e188a64793529ee77", "6686ebe474531e4a1975636f", "boda" ], "labels": [ "5", "5", "5" ] }
5
gold
142
5. **(Performance of TTA methods)** This is an interesting observation, that using non-standard benchmarks breaks a lot of popular TTA methods. If the authors can evaluate TTA on more conditions of natural distribution shift, like WILDS [9], it could really strengthen the paper.
X4ATu1huMJ
ICLR_2024
**Overall comment** The paper discusses evaluating TTA methods across multiple settings, and how to choose the correct method during test-time. I would argue most of the methods/model selection strategies that are discussed in the paper are not novel and/or existed before, and the paper does not have a lot of algorithm...
3
{ "annotators": [ "6740484e188a64793529ee77", "6686ebe474531e4a1975636f", "boda" ], "labels": [ "5", "5", "5" ] }
5
gold
144
2. The authors need to show a graph showing the plot of T vs number of images, and Expectation(T) over the imagenet test set. It is important to understand whether the performance improvement stems solely from the network design to exploit spatial redundancies, or whether the redudancies stem from the nature of ImageNe...
NIPS_2020_204
NIPS_2020
1.The authors have done a good job with placing their work appropriately. One point of weakness is insufficient comparison to approaches that aim to reduce spatial redudancy, or make the networks more efficient specifically the ones skipping layers/channels. Comparison to OctConv and SkipNet even for a single datapoint...
3
{ "annotators": [ "6740484e188a64793529ee77", "6686ebe474531e4a1975636f", "boda" ], "labels": [ "5", "5", "5" ] }
5
gold
157
- It would be good to include in the left graph in fig 3 the learning curve for a model without any mean teacher or pi regularization for comparison, to see if mean teacher accelerates learning or slows it down.
NIPS_2017_114
NIPS_2017
- More evaluation would have been welcome, especially on CIFAR-10 in the full label and lower label scenarios. - The CIFAR-10 results are a little disappointing with respect to temporal ensembles (although the results are comparable and the proposed approach has other advantages) - An evaluation on the more challenging...
3
{ "annotators": [ "6740484e188a64793529ee77", "6686ebe474531e4a1975636f", "boda" ], "labels": [ "5", "5", "5" ] }
5
gold
163
2. To utilize a volumetric representation in the deformation field is not a novel idea. In the real-time dynamic reconstruction task, VolumeDeform [1] has proposed volumetric grids to encode both the geometry and motion, respectively.
NIPS_2022_728
NIPS_2022
Weakness 1. The setup of capturing strategy is complicated and is not easy for applications in real life. To initialize the canonical space, the first stage is to capture the static state using a moving camera. Then to model motions, the second stage is to capture dynamic states using a few (4) fixed cameras. Such a 2-...
3
{ "annotators": [ "6740484e188a64793529ee77", "6686ebe474531e4a1975636f", "boda" ], "labels": [ "1", "1", "1" ] }
1
gold
169
4) The rock-paper-scissors example is clearly inspired by an example that appeared in many previous work. Please, cite the source appropriately.
NIPS_2018_707
NIPS_2018
weakness of the paper is the lack of experimental comparison with the state of the art. The paper spends whole page explaining reasons why the presented approach might perform better under some circumstances, but there is no hard evidence at all. What is the reason not to perform an empirical comparison to the joint be...
3
{ "annotators": [ "6740484e188a64793529ee77", "6686ebe474531e4a1975636f", "boda" ], "labels": [ "5", "5", "5" ] }
5
gold
171
3. The innovations of network architecture design and constraint embedding are rather limited. The authors discussed that the performance is limited by the performance of the oracle expert.
NIPS_2022_69
NIPS_2022
1. This work uses an antiquated GNN model and method, it seriously impacts the performance of this framework. The baseline algorithms/methods are also antiquated. 2. The experimental results did not show that this work model obviously outperforms other variant comparison algorithms/models. 3. The innovations of network...
3
{ "annotators": [ "6740484e188a64793529ee77", "6686ebe474531e4a1975636f", "boda" ], "labels": [ "1", "1", "1" ] }
1
gold
172
12. It would seem that this update would have to integrate over all possible environments in order to be meaningful, assuming that the true environment is not known at update time. Is that correct? I guess this was probably for space reasons, but the bolded sections in page 6 should really be broken out into \paragraph...
ICLR_2022_3205
ICLR_2022
This method trades one intractible problem for another: it requires the learning of cross-values $v_{e'}(x_t; e)$ for all pairs of possible environments $e, e'$. It is not clear that this will be an improvement when scaling up. At a few points the paper introduces approximations, but the gap to the true value and the...
4
{ "annotators": [ "6740484e188a64793529ee77", "6686ebe474531e4a1975636f", "boda" ], "labels": [ "5", "5", "5" ] }
5
gold
177
1) there is a drop of correlation after a short period of training, which goes up with more training iterations;
NIPS_2022_1770
NIPS_2022
Weakness: There are still several concerns with the finding that the perplexity is highly correlated with the number of decoder parameters. According to Figure 4, the correlation decreases as top-10% architectures are chosen instead of top-100%, which indicates that the training-free proxy is less accurate for paramete...
4
{ "annotators": [ "6740484e188a64793529ee77", "6686ebe474531e4a1975636f", "boda" ], "labels": [ "1", "1", "1" ] }
1
gold
179
2)The derivation from Eqn. 3 to Eqn. 4 misses the temperature τ , τ should be shown in a rigorous way or this paper mention it.
ICLR_2023_650
ICLR_2023
1.One severe problem of this paper is that it misses several important related work/baselines to compare[1,2,3,4], either in discussion [1,2,3,4]or experiments[1,2]. This paper addresses to design a normalization layer that can be plugged in the network for avoiding the dimensional collapse of representation (in interm...
4
{ "annotators": [ "6740484e188a64793529ee77", "6686ebe474531e4a1975636f", "boda" ], "labels": [ "5", "5", "5" ] }
5
gold
181
1. Line 156. It'd be useful to the reader to add a citation on differential privacy, e.g. one of the standard works like [2].
3vXpZpOn29
ICLR_2025
It is unclear that linear datamodels extend to other kinds of tasks, e.g. language modeling or regression problems. I believe this to be a major weakness of the paper. While linear datamodels lead to simple algorithms in this paper, the previous work [1] does not have a good argument for why linear datamodels work [1; ...
4
{ "annotators": [ "6740484e188a64793529ee77", "6686ebe474531e4a1975636f", "boda" ], "labels": [ "5", "5", "5" ] }
5
gold
182
2) Shapely values over other methods. I think the authors need to back up their argument for using Shapely value explanations over other methods by comparing experimentally with other methods such as CaCE or even raw gradients. In addition, I think the paper would benefit a lot by including a significant discussion on ...
ICLR_2021_1504
ICLR_2021
W1) The authors should compare their approach (methodologically as well as experimentally) to other concept-based explanations for high-dimensional data such as (Kim et al., 2018), (Ghorbani et al., 2019) and (Goyal et al., 2019). The related work claims that (Kim et al., 2018) requires large sets of annotated data. I ...
4
{ "annotators": [ "6740484e188a64793529ee77", "6686ebe474531e4a1975636f", "boda" ], "labels": [ "5", "5", "5" ] }
5
gold
184
3. The authors conduct comprehensive experiments to validate the efficacy of CATER in various settings, including an architectural mismatch between the victim and the imitation model and cross-domain imitation.
NIPS_2022_2373
NIPS_2022
weakness in He et al., and proposes a more invisible watermarking algorithm, making their method more appealing to the community. 2. Instead of using a heuristic search, the authors elegantly cast the watermark search issue into an optimization problem and provide rigorous proof. 3. The authors conduct comprehensive ex...
4
{ "annotators": [ "6740484e188a64793529ee77", "6686ebe474531e4a1975636f", "boda" ], "labels": [ "1", "1", "1" ] }
1
gold
188
- Generally, this seems like only a very first step towards real strategic settings: in light of what they claim ("strategic predictions", l28), their setting is only partially strategic/game theoretic as the opponent doesn't behave strategically (i.e., take into account the other strategic player).
NIPS_2017_143
NIPS_2017
For me the main issue with this paper is that the relevance of the *specific* problem that they study -- maximizing the "best response" payoff (l127) on test data -- remains unclear. I don't see a substantial motivation in terms of a link to settings (real or theoretical) that are relevant: - In which real scenarios is...
4
{ "annotators": [ "6740484e188a64793529ee77", "6686ebe474531e4a1975636f", "boda" ], "labels": [ "1", "1", "1" ] }
1
gold
189
- The authors approach is only applicable for problems that are small or medium scale. Truly large problems will overwhelm current LP-solvers.
NIPS_2018_430
NIPS_2018
- The authors approach is only applicable for problems that are small or medium scale. Truly large problems will overwhelm current LP-solvers. - The authors only applied their method on peculiar types of machine learning applications that were already used for testing boolean classifier generation. It is unclear whethe...
4
{ "annotators": [ "6740484e188a64793529ee77", "6686ebe474531e4a1975636f", "boda" ], "labels": [ "1", "1", "1" ] }
1
gold
191
- Multiscale modeling:- The aggregation operation after "Integration" needs further clarification. Please provide more details in the main paper, and if you refer to other architectures, acknowledge their structure properly.
8HG2QrtXXB
ICLR_2024
- Source of Improvement and Ablation Study: - Given the presence of various complex architectural choices, it's difficult to determine whether the Helmholtz decomposition is the primary source of the observed performance improvement. Notably, the absence of the multi-head mechanism leads to a performance drop (0.1261 -...
4
{ "annotators": [ "6740484e188a64793529ee77", "6686ebe474531e4a1975636f", "boda" ], "labels": [ "5", "5", "5" ] }
5
gold
195
3) mentioned above would become even more important. If the figures do not show results for untrained networks then please run the corresponding experiments and add them to the figures and Table 1. Clarify: Random data (Fig 3c). Was the network trained on random data, or do the dotted lines show networks trained on una...
ICLR_2021_1716
ICLR_2021
Results are on MNIST only. Historically it’s often been the case that strong results on MNIST would not carry over to more complex data. Additionally, at least some core parts of the analysis does not require training networks (but could even be performed e.g. with pre-trained classifiers on ImageNet) - there is thus n...
4
{ "annotators": [ "6740484e188a64793529ee77", "6686ebe474531e4a1975636f", "boda" ], "labels": [ "5", "5", "5" ] }
5
gold
198
- The improvement over previous methods is small, about 0.2%-1%. Also the results in Table 1 and Fig.5 don't report the mean and standard deviation, and whether the difference is statistically significant is hard to know. I will suggest to repeat the experiments and conduct statistical significance analysis on the numb...
NIPS_2018_985
NIPS_2018
Weakness: - One drawback is that the idea of dropping a spatial region in training is not new. Cutout [22] and [a] have been explored this direction. The difference towards previous dropout variants is marginal. [a] CVPR'17. A-Fast-RCNN: Hard Positive Generation via Adversary for Object Detection. - The improvement ove...
4
{ "annotators": [ "6740484e188a64793529ee77", "6686ebe474531e4a1975636f", "boda" ], "labels": [ "5", "5", "5" ] }
5
gold
202
1. The task setup is not described clearly. For example, which notes in the EHR (only the current admission or all previous admissions) do you use as input and how far away are the outcomes from the last note date?
ARR_2022_209_review
ARR_2022
1. The task setup is not described clearly. For example, which notes in the EHR (only the current admission or all previous admissions) do you use as input and how far away are the outcomes from the last note date? 2. There isn't one clear aggregation strategy that gives consistent performance gains across all tasks. S...
4
{ "annotators": [ "6740484e188a64793529ee77", "6686ebe474531e4a1975636f", "boda" ], "labels": [ "5", "5", "5" ] }
5
gold
206
1. The proposed algorithm DMLCBO is based on double momentum technique. In previous works, e.g., SUSTAIN[1] and MRBO[2], double momentum technique improves the convergence rate to $\mathcal{\widetilde O}(\epsilon^{-3})$ while proposed algorithm only achieves the $\mathcal{\widetilde O}(\epsilon^{-4})$. The authors are ...
K98byXpOpU
ICLR_2024
1. The proposed algorithm DMLCBO is based on double momentum technique. In previous works, e.g., SUSTAIN[1] and MRBO[2], double momentum technique improves the convergence rate to $\mathcal{\widetilde O}(\epsilon^{-3})$ while proposed algorithm only achieves the $\mathcal{\widetilde O}(\epsilon^{-4})$. The authors are ...
4
{ "annotators": [ "6740484e188a64793529ee77", "6686ebe474531e4a1975636f", "boda" ], "labels": [ "5", "5", "5" ] }
5
gold
207
3. The approach section is missing in the main paper. The reviewer did go through the “parallelization descriptions” in the supplementary material but the supplementary should be used more like additional information and not as an extension to the paper as it is. Timothy Nguyen, Zhourong Chen, and Jaehoon Lee. Dataset ...
NIPS_2021_386
NIPS_2021
1. It is unclear if this proposed method will lead to any improvement for hyper-parameter search or NAS kind of works for large scale datasets since even going from CIFAR-10 to CIFAR-100, the model's performance reduced below prior art (if #samples are beyond 1). Hence, it is unlikely that this will help tasks like NAS...
4
{ "annotators": [ "6740484e188a64793529ee77", "6686ebe474531e4a1975636f", "boda" ], "labels": [ "5", "5", "5" ] }
5
gold
208
35. No.3. 2021. Competing dynamic-pruning methods are kind of out-of-date. More recent works should be included. Only results on small scale datasets are provided. Results on large scale datasets including ImageNet should be included to further verify the effectiveness of the proposed method.
ICLR_2023_1599
ICLR_2023
of the proposed method are listed as below: There are two key components of the method, namely, the attention computation and learn-to-rank module. For the first component, it is a common practice to compute importance using SE blocks. Therefore, the novelty of this component is limited. Some important SOTAs are missin...
4
{ "annotators": [ "6740484e188a64793529ee77", "6686ebe474531e4a1975636f", "boda" ], "labels": [ "5", "5", "5" ] }
5
gold
224
1. Although the paper argue that proposed method finds the flat minima, the analysis about flatness is missing. The loss used for training base model is the averaged loss for the noise injected models, and the authors provided convergence analysis on this loss. However, minimizing the averaged loss across the noise inj...
NIPS_2021_121
NIPS_2021
Weakness] 1. Although the paper argue that proposed method finds the flat minima, the analysis about flatness is missing. The loss used for training base model is the averaged loss for the noise injected models, and the authors provided convergence analysis on this loss. However, minimizing the averaged loss across the...
4
{ "annotators": [ "6740484e188a64793529ee77", "6686ebe474531e4a1975636f", "boda" ], "labels": [ "5", "5", "5" ] }
5
gold
229
- The text inside the figure and the labels are too small to read without zooming. This text should be roughly the same size as the manuscript text.
ICLR_2023_1765
ICLR_2023
weakness, which are summarized in the following points: Important limitations of the quasi-convex architecture are not addressed in the main text. The proposed architecture can only represent non-negative functions, which is a significant weakness for regression problems. However, this is completed elided and could be ...
4
{ "annotators": [ "6740484e188a64793529ee77", "6686ebe474531e4a1975636f", "boda" ], "labels": [ "5", "5", "5" ] }
5
gold
230
* The model is somewhat complicated and its presentation in section 4 requires careful reading, perhaps with reference to the supplement. If possible, try to improve this presentation. Replacing some of the natural language description with notation and adding breakout diagrams showing the attention mechanisms might he...
NIPS_2017_104
NIPS_2017
--- There aren't any major weaknesses, but there are some additional questions that could be answered and the presentation might be improved a bit. * More details about the hard-coded demonstration policy should be included. Were different versions of the hard-coded policy tried? How human-like is the hard-coded policy...
4
{ "annotators": [ "6740484e188a64793529ee77", "6686ebe474531e4a1975636f", "boda" ], "labels": [ "5", "5", "5" ] }
5
gold
234
1. Symbols are a little bit complicated and takes a lot of time to understand.
NIPS_2018_461
NIPS_2018
1. Symbols are a little bit complicated and takes a lot of time to understand. 2. The author should probably focus more on the proposed problem and framework, instead of spending much space on the applications. 3. No conclusion section Generally I think this paper is good, but my main concern is the originality. If thi...
4
{ "annotators": [ "6740484e188a64793529ee77", "6686ebe474531e4a1975636f", "boda" ], "labels": [ "1", "1", "1" ] }
1
gold
236
1. The introduction to orthogonality in Part 2 could be more detailed.
oKn2eMAdfc
ICLR_2024
1. The introduction to orthogonality in Part 2 could be more detailed. 2. No details on how the capsule blocks are connected to each other. 3. The fourth line of Algorithm 1 does not state why the flatten operation is performed. 4.The presentation of the α-enmax function is not clear. 5. Eq. (4) does not specify why Ba...
4
{ "annotators": [ "6740484e188a64793529ee77", "6686ebe474531e4a1975636f", "boda" ], "labels": [ "3", "3", "3" ] }
3
gold
240
1) The proposed methods - contrastive training objective and contrastive search - are two independent methods that have little inner connection on both the intuition and the algorithm.
NIPS_2022_2315
NIPS_2022
Weakness: 1) The proposed methods - contrastive training objective and contrastive search - are two independent methods that have little inner connection on both the intuition and the algorithm. 2) The justification for isotropic representation and contractive search could be more solid.
4
{ "annotators": [ "6740484e188a64793529ee77", "6686ebe474531e4a1975636f", "boda" ], "labels": [ "1", "1", "1" ] }
1
gold
242
- Similar analyses are already present in prior works, although on a (sometimes much) smaller scale, and then the results are not particularly surprising. For example, the robustness of CIFAR-10 models on distributions shifts (CIFAR-10.1, CINIC-10, CIFAR-10-C, which are also included in this work) was studied on the in...
RnYd44LR2v
ICLR_2024
- Similar analyses are already present in prior works, although on a (sometimes much) smaller scale, and then the results are not particularly surprising. For example, the robustness of CIFAR-10 models on distributions shifts (CIFAR-10.1, CINIC-10, CIFAR-10-C, which are also included in this work) was studied on the in...
4
{ "annotators": [ "6740484e188a64793529ee77", "6686ebe474531e4a1975636f", "boda" ], "labels": [ "1", "1", "1" ] }
1
gold
245
1. It is not very clear how exactly is the attention module attached to the backbone ResNet-20 architecture when performing the search. How many attention modules are used? Where are they placed? After each block? After each stage? It would be good to clarify this.
NIPS_2020_125
NIPS_2020
1. It is not very clear how exactly is the attention module attached to the backbone ResNet-20 architecture when performing the search. How many attention modules are used? Where are they placed? After each block? After each stage? It would be good to clarify this. 2. Similar to above, it would be good to provide more ...
4
{ "annotators": [ "6740484e188a64793529ee77", "6686ebe474531e4a1975636f", "boda" ], "labels": [ "5", "5", "5" ] }
5
gold
259
2). The proposed method looks stronger at high bitrate but close to the baselines at low bitrate. What is the precise bitrate range used for BD-rate comparison? Besides, a related work about implementing content adaptive algorithm in learned video compression is suggested for discussion or comparison: Guo Lu, et al., "...
ICLR_2022_1522
ICLR_2022
Weakness: The overall novelty seems limited since the instance-adaptive method is from existing work with no primary changes. Here are some main questions and concerns: 1). How many optimization steps are used to produce the final reported performance in Figure.1 as well as in some other figs and tables? 2). The propos...
4
{ "annotators": [ "6740484e188a64793529ee77", "6686ebe474531e4a1975636f", "boda" ], "labels": [ "5", "5", "5" ] }
5
gold
260
- If I understand correctly, In Tables 1 and 2, the authors report the best results on the **dev set** with the hyper-parameter search and model selection on **dev set**, which is not enough to be convincing. I strongly suggest that the paper should present the **average** results on the **test set** with clearly defin...
ARR_2022_59_review
ARR_2022
- If I understand correctly, In Tables 1 and 2, the authors report the best results on the **dev set** with the hyper-parameter search and model selection on **dev set**, which is not enough to be convincing. I strongly suggest that the paper should present the **average** results on the **test set** with clearly defin...
4
{ "annotators": [ "6740484e188a64793529ee77", "6686ebe474531e4a1975636f", "boda" ], "labels": [ "5", "5", "5" ] }
5
gold
268
3. Insufficient ablation study on \alpha. \alpha is only set to 1e-4, 1e-1, 5e-1 in section 5.4 with a large gap between 1e-4 and 1e-1. The author is recommended to provide more values of \alpha, at least 1e-2 and 1e-3.
ICLR_2023_2396
ICLR_2023
1. Lack of the explanation about the importance and the necessity to design deep GNN models . In this paper, the author tries to address the issue of over-smoothing and build deeper GNN models. However, there is no explanation about why should we build a deep GNN model. For CNN, it could be built for thousands of layer...
4
{ "annotators": [ "6740484e188a64793529ee77", "6686ebe474531e4a1975636f", "boda" ], "labels": [ "5", "5", "5" ] }
5
gold
269
6: How many topics were used? How did you get topic-word parameters for this "real" dataset? How big is the AG news dataset? Main paper should at least describe how many documents in train/test, and how many vocabulary words.
ICLR_2022_1872
ICLR_2022
I list 5 concerns here, with detailed discussion and questions for the authors below W1: While theorems suggest "existence" of a linear transformation that will approximate the posterior, the actual construction procedure for the "recovered topic posterior" is unclear W2: Many steps are difficult to understand / replic...
4
{ "annotators": [ "6740484e188a64793529ee77", "6686ebe474531e4a1975636f", "boda" ], "labels": [ "5", "5", "5" ] }
5
gold
270
3. For evaluation, since the claim of this paper is to reduce exposure bias, training a discriminator on generations from the learned model is needed to confirm if it is the case, in a way similar to Figure 1. Note that it is different from Figure 4, since during training the discriminator is co-adapting with the gener...
NIPS_2020_1592
NIPS_2020
Major concerns: 1. While it is impressive that this work gets slightly better results than MLE, there are more hyper-parameters to tune, including mixture weight, proposal temperature, nucleus cutoff, importance weight clipping, MLE pretraining (according to appendix). I find it disappointing that so many tricks are ne...
4
{ "annotators": [ "6740484e188a64793529ee77", "6686ebe474531e4a1975636f", "boda" ], "labels": [ "5", "5", "5" ] }
5
gold
275
- The Related Work section is lacking details. The paragraph on long-context language models should provide a more comprehensive overview of existing methods and their limitations, positioning SSMs appropriately. This includes discussing sparse-attention mechanisms [1, 2], segmentation-based approaches [3, 4, 5], memor...
NJUzUq2OIi
ICLR_2025
I found the proposed idea, experiments, and analyses conducted by the authors to be valuable, especially in terms of their potential impact on low-resource scenarios. However, for the paper to fully meet the ICLR standards, there are still areas that need additional work and detail. Below, I outline several key points ...
4
{ "annotators": [ "6740484e188a64793529ee77", "6686ebe474531e4a1975636f", "boda" ], "labels": [ "5", "5", "5" ] }
5
gold
287
1)Less Novelty: The algorithm for construction of coresets itself is not novel. Existing coreset frameworks for classical k-means and (k,z) clusterings are extended to the kernelized setting.
ICLR_2022_2425
ICLR_2022
1)Less Novelty: The algorithm for construction of coresets itself is not novel. Existing coreset frameworks for classical k-means and (k,z) clusterings are extended to the kernelized setting. 2)Clarity: Since the coreset construction algorithm is built up on previous works, a reader without the background in literature...
4
{ "annotators": [ "6740484e188a64793529ee77", "6686ebe474531e4a1975636f", "boda" ], "labels": [ "1", "1", "1" ] }
1
gold
294
- The proposed solution is an incremental step considering the relaxation proposed by Guzman. et. al. Minor suggestions:
NIPS_2016_238
NIPS_2016
- My biggest concern with this paper is the fact that it motivates “diversity” extensively (even the word diversity is in the title) but the model does not enforce diversity explicitly. I was all excited to see how the authors managed to get the diversity term into their model and got disappointed when I learned th...
4
{ "annotators": [ "6740484e188a64793529ee77", "6686ebe474531e4a1975636f", "boda" ], "labels": [ "1", "1", "1" ] }
1
gold
299
- Results presentation can be improved. For example, in Figure 2 and 3, the y-axis is labeled as “performance” which is ambiguous, and the runtime is not represented in those figure. A scatter plot with x/y axes being runtime/performance could help the reader better understand and interpret the results. Best results in...
RsnWEcuymH
ICLR_2024
- My main concern is that the performance improvement, though generally better, is not particularly too significant, not to mention that those proxy-based method achieves also pretty good IM results while using only a negligible amount of time compared to BOIM (or other simulation-based method in general) - Other choic...
4
{ "annotators": [ "6740484e188a64793529ee77", "6686ebe474531e4a1975636f", "boda" ], "labels": [ "5", "5", "5" ] }
5
gold
302
- The time complexity will be too high if the reply buffer is too large. [1] PRM-RL: Long-range Robotic Navigation Tasks by Combining Reinforcement Learning and Sampling-based Planning
NIPS_2019_1366
NIPS_2019
Weakness: - Although the method discussed by the paper can be applied in general MDP, the paper is limited in navigation problems. Combining RL and planning has already been discussed in PRM-RL~[1]. It would be interesting whether we can apply such algorithms in more general tasks. - The paper has shown that pure RL al...
4
{ "annotators": [ "6740484e188a64793529ee77", "6686ebe474531e4a1975636f", "boda" ], "labels": [ "1", "1", "1" ] }
1
gold
309
4: Even though authors compare their framework with an advanced defense APE-GAN, they can further compare the proposed framework with a method that is designed to defend against multiple attacks (maybe the research on defense against multiple attacks is relatively rare). The results would be more meaningful if the auth...
ICLR_2021_2717
ICLR_2021
1: The writing could be further improved, e.g., “via being matched to” should be “via matching to” in Abstract. 2: The “Def-adv” needs to be clarified. 3: The accuracies of the target model using different defenses against the FGSM attack are not shown in Figure 1. Hence, it is unclear the difference between the known ...
4
{ "annotators": [ "6740484e188a64793529ee77", "6686ebe474531e4a1975636f", "boda" ], "labels": [ "5", "5", "5" ] }
5
gold
313
- l111: Please define the bounds for \tau_i^l because it is important for understanding the time-warp function.
NIPS_2017_110
NIPS_2017
of this work include that it is a not-too-distant variation of prior work (see Schiratti et al, NIPS 2015), the search for hyperparameters for the prior distributions and sampling method do not seem to be performed on a separate test set, the simultion demonstrated that the parameters that are perhaps most critical to ...
4
{ "annotators": [ "6740484e188a64793529ee77", "6686ebe474531e4a1975636f", "boda" ], "labels": [ "5", "5", "5" ] }
5
gold
315
5)There are some writing errors in the paper, such as "informative informative" on page 5 and "performance" on page 1, which lacks a title.
ICLR_2023_1980
ICLR_2023
Motivated by the fact that local learning can limit memory when training the network and the adaptive nature of each individual block, the paper extends local learning to the ResNet-50 to handle large datasets. However, it seems that the results of the paper do not demonstrate the benefits of doing so. The detailed wea...
4
{ "annotators": [ "6740484e188a64793529ee77", "6686ebe474531e4a1975636f", "boda" ], "labels": [ "5", "5", "5" ] }
5
gold
316
2. The efficiency of such pairwise matching is very low, making it difficult to be used in practical application systems.
NIPS_2021_2304
NIPS_2021
There are four limitations: 1. In this experiment, single dataset training and single dataset testing cannot verify the generalizable ability of models, it should conduct experiments on large-scale datasets. 2. The efficiency of such pairwise matching is very low, making it difficult to be used in practical application...
4
{ "annotators": [ "6740484e188a64793529ee77", "6686ebe474531e4a1975636f", "boda" ], "labels": [ "1", "1", "1" ] }
1
gold
318
- The first sentence of the abstract needs to be re-written.
NIPS_2016_238
NIPS_2016
- My biggest concern with this paper is the fact that it motivates “diversity” extensively (even the word diversity is in the title) but the model does not enforce diversity explicitly. I was all excited to see how the authors managed to get the diversity term into their model and got disappointed when I learned th...
4
{ "annotators": [ "6740484e188a64793529ee77", "6686ebe474531e4a1975636f", "boda" ], "labels": [ "5", "5", "5" ] }
5
gold
321
- The method seems more involved that it needs to be. One would suspect that there is an underlying, simpler, principle that is propulsing the quality gains.
NIPS_2020_335
NIPS_2020
- The paper reads too much like LTF-V1++, and at some points assumes too much familiarity of the reader to LTF-V1. Since this method is not well known, I wish the paper was a bit more pedagogical/self-contained. - The method seems more involved that it needs to be. One would suspect that there is an underlying, simpler...
4
{ "annotators": [ "6740484e188a64793529ee77", "6686ebe474531e4a1975636f", "boda" ], "labels": [ "1", "1", "1" ] }
1
gold
323
6) Adding a method on the top of other methods to improve transferability is good but cannot be considered a significant contribution.
ICLR_2022_3330
ICLR_2022
1) One very serious problem is that this paper is full of grammatical errors. It is too many and many of them can be detected and corrected by grammatical checker. I only list some in here to justify my observations, instead of all because I don’t want to proofread the authors’ paper. Page 1, learned,, Page 2 and Kurak...
4
{ "annotators": [ "6740484e188a64793529ee77", "6686ebe474531e4a1975636f", "boda" ], "labels": [ "1", "1", "1" ] }
1
gold
324
- The hGRU architecture seems pretty ad-hoc and not very well motivated.
NIPS_2018_15
NIPS_2018
- The hGRU architecture seems pretty ad-hoc and not very well motivated. - The comparison with state-of-the-art deep architectures may not be entirely fair. - Given the actual implementation, the link to biology and the interpretation in terms of excitatory and inhibitory connections seem a bit overstated. Conclusion: ...
4
{ "annotators": [ "6740484e188a64793529ee77", "6686ebe474531e4a1975636f", "boda" ], "labels": [ "1", "1", "1" ] }
1
gold
325
2) On algorithm 1 Line 8, shouldn't we use s_n instead of s_t? Questions I am curious of the asymptotic performance of the proposed method. If possible, can the authors provide average return results with more env steps? [1] https://github.com/watchernyu/REDQ
ICLR_2023_1214
ICLR_2023
As the authors note, it seems the method still requires a few tweaks to work well empirically. For example, we need to omit the log of the true rewards and scale the KL term in the policy objective to 0.1. While the authors provide a brief intuition on why those modifications are needed, I think the authors should prov...
4
{ "annotators": [ "6740484e188a64793529ee77", "6686ebe474531e4a1975636f", "boda" ], "labels": [ "5", "5", "5" ] }
5
gold
326
3) It is not clear what the challenges are when the authors analyze Adam under the (L0,L1)-smoothness condition. It seems one can directly apply the standard analysis under the (L0,L1)-smoothness condition, so it would be better to explain the challenges, especially the difference between this analysis and that of Zhang et al.
ICLR_2023_3705
ICLR_2023
1) The main assumption is borrowed from other works but is actually rarely used in the optimization field. Moreover, the benefits of this assumption are not well investigated. For example, a) why is it more reasonable than the previous one? b) why can it add the gradient-norm term L_1 \nabla f(w_1) in Eqn. (3), or why do we not add...
4
{ "annotators": [ "6740484e188a64793529ee77", "6686ebe474531e4a1975636f", "boda" ], "labels": [ "5", "5", "5" ] }
5
gold
327
1) Most person re-ID methods build on top of a pedestrian detector (two-step methods), and there are also end-to-end methods that combine detection and re-ID [5];
ICLR_2022_2791
ICLR_2022
The technical contribution of this paper is limited and falls far short of a decent ICLR paper. In particular, all kinds of evaluations, i.e., the single-dataset setting (most existing person re-ID methods), the cross-dataset setting [1, 2, 3], and the live re-ID setting [4], have been discussed in previous works. This paper simply m...
4
{ "annotators": [ "6740484e188a64793529ee77", "6686ebe474531e4a1975636f", "boda" ], "labels": [ "1", "1", "1" ] }
1
gold
329
• Section 3.2 - I suggest adding a first sentence to introduce what this section is about.
ICLR_2021_1740
ICLR_2021
are in its clarity and the experimental part. Strong points: Novelty: The paper provides a novel approach for estimating the likelihood p(class | image) by developing a new variational approach for modelling the causal direction (s,v->x). Correctness: Although I didn’t verify the details of the proofs, the approach se...
4
{ "annotators": [ "6740484e188a64793529ee77", "6686ebe474531e4a1975636f", "boda" ], "labels": [ "5", "5", "5" ] }
5
gold
330
1: "The uncertainty is defined based on the posterior distribution." For more clarity it could be helpful to update this to say that the epistemic model uncertainty is represented in the prior distribution, and upon observing data, those beliefs can be updated in the form of a posterior distribution, which yields model...
ICLR_2021_2047
ICLR_2021
As noted below, I have concerns around the experimental results. More specifically, I feel that there is a relative lack of discussion around the (somewhat surprising) outperformance of baselines that VPBNN is aiming to approximate, and I feel that the experiments are missing what I see as key VPBNN results that otherw...
4
{ "annotators": [ "6740484e188a64793529ee77", "6686ebe474531e4a1975636f", "boda" ], "labels": [ "5", "5", "5" ] }
5
gold
332
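A generic worked form of the Bayesian update that the review point in record 332 above asks the authors to spell out. This is a hedged sketch in standard notation (prior p(\theta), observed data D, new input x), not taken from the reviewed paper:

```latex
% Epistemic uncertainty sits in the prior p(\theta); observing D updates it,
% and the posterior induces predictive (model) uncertainty for a new input x.
p(\theta \mid D) \;=\; \frac{p(D \mid \theta)\, p(\theta)}{\int p(D \mid \theta')\, p(\theta')\, d\theta'},
\qquad
p(y \mid x, D) \;=\; \int p(y \mid x, \theta)\, p(\theta \mid D)\, d\theta .
```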
2. Regarding the abstention process, it appears to be based on a prediction probability threshold, where if the probability is lower than the threshold, the prediction is abstained from? How does it differ from a decision threshold used by the models? Can the authors clarify that?
t8cBsT9mcg
ICLR_2024
1. The abstract should be expanded to encompass key concepts that effectively summarize the paper's contributions. In the introduction, the authors emphasize the significance of interpretability and the challenges it poses in achieving high accuracy. By including these vital points in the abstract, the paper can provid...
5
{ "annotators": [ "6686ebe474531e4a1975636f", "6740484e188a64793529ee77", "boda" ], "labels": [ "5", "5", "5" ] }
5
gold
337
4) The analysis from lines 128 to 149 is not convincing enough. From the histogram shown in Fig. 3, the GS-P-50 model has a smaller class selectivity score, which means GS-P-50 shares more features while ResNet-50 learns more class-specific features. And the authors hypothesize that additional context may allow the network to...
NIPS_2018_865
NIPS_2018
weaknesses of this paper are listed: 1) The proposed method is very similar to Squeeze-and-Excitation Networks [1], but there is no quantitative comparison to that related work. 2) There are only results on the image classification task. However, one of the successes of deep learning is that it allows people to leverage pretrain...
5
{ "annotators": [ "6686ebe474531e4a1975636f", "6740484e188a64793529ee77", "boda" ], "labels": [ "5", "5", "5" ] }
5
gold
339
* Similarly, many important experimental details are missing or relegated to the Appendix, and the Appendix also includes almost no explanations or interpretations. For example, the PCA experiments in Figures 3, 7, and 8 aren't explained.
39n570rxyO
ICLR_2025
This paper has weaknesses to address: * The major weakness of this paper is the extremely limited experiments section. There are many experiments, yet almost no explanation of how they're run or interpretation of the results. Most of the results are written like an advertisement, mostly just stating the method outperfo...
5
{ "annotators": [ "6686ebe474531e4a1975636f", "6740484e188a64793529ee77", "boda" ], "labels": [ "4", "4", "4" ] }
4
gold
346
- The authors need to perform ablation experiments to compare the proposed method with other methods (e.g., TubeR) in terms of the number of learnable parameters and GFLOPs.
Va4t6R8cGG
ICLR_2024
- This paper does not seem to be the first work on fully end-to-end spatio-temporal localization, since TubeR previously proposed to directly detect an action tubelet in a video by simultaneously performing action localization and recognition. This weakens the novelty of this paper. The authors claim the differences wi...
5
{ "annotators": [ "6686ebe474531e4a1975636f", "6740484e188a64793529ee77", "boda" ], "labels": [ "5", "5", "5" ] }
5
gold
353
1. Please define the dashed lines in Figs. 2A-B and 4B.
NIPS_2016_153
NIPS_2016
weakness of previous models. Thus I find these results novel and exciting. Modeling studies of neural responses are usually measured on two scales: a. Their contribution to our understanding of the neural physiology, architecture or any other biological aspect. b. Model accuracy, where the aim is to provide a model whic...
5
{ "annotators": [ "6686ebe474531e4a1975636f", "6740484e188a64793529ee77", "boda" ], "labels": [ "5", "5", "5" ] }
5
gold
368
- Since the results are not comparable to those of existing methods, the proposed method does not seem to have much significance.
ICLR_2022_2196
ICLR_2022
weakness] Modeling: The rewards are designed based on a discriminator. As we know, generative adversarial networks are not easy to train since the generative and discriminative networks are trained alternately. In the proposed method, the policy network and the discriminator are trained alternately. I doubt if...
5
{ "annotators": [ "6686ebe474531e4a1975636f", "6740484e188a64793529ee77", "boda" ], "labels": [ "1", "1", "1" ] }
1
gold
369
- The paper is incremental and does not have much technical substance. It just adds a new loss to [31].
NIPS_2017_217
NIPS_2017
- The paper is incremental and does not have much technical substance. It just adds a new loss to [31]. - "Embedding" is an overloaded word for a scalar value that represents object ID. - The model of [31] is used in a post-processing stage to refine the detection. Ideally, the proposed model should be end-to-end witho...
5
{ "annotators": [ "6686ebe474531e4a1975636f", "6740484e188a64793529ee77", "boda" ], "labels": [ "1", "1", "1" ] }
1
gold
377
- The main contribution of combining attention with other linear mechanisms is not novel, and, as noted in the paper, a lot of alternatives exist.
bIlnpVM4bc
ICLR_2025
- The main contribution of combining attention with other linear mechanisms is not novel, and, as noted in the paper, a lot of alternatives exist. - A comprehensive benchmarking against existing alternatives is lacking. Comparisons are only made to their proposed variants and Sliding Window Attention in fair setups. A ...
5
{ "annotators": [ "6686ebe474531e4a1975636f", "6740484e188a64793529ee77", "boda" ], "labels": [ "1", "1", "1" ] }
1
gold
385
7. Since the method is applied to each layer, the authors should provide a plot of how the different weights of the model move; for instance, plot the relative weight change after unlearning to see which layers are affected the most (see the sketch after this record).
pUOesbrlw4
ICLR_2024
1. The paper is lacking a clear and precise definition of unlearning. It is important to state the definition of unlearning that you want to achieve through your algorithm. 2. The proposed algorithm is an empirical algorithm without any theoretical guarantees. It is important for unlearning papers to provide unlearning...
5
{ "annotators": [ "6686ebe474531e4a1975636f", "6740484e188a64793529ee77", "boda" ], "labels": [ "5", "5", "5" ] }
5
gold
386
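A minimal sketch of the per-layer diagnostic that the review point in record 386 above asks for, assuming two PyTorch models `model_before` and `model_after` captured around the unlearning step (hypothetical names; the unlearning procedure itself is not shown):

```python
import torch
import matplotlib.pyplot as plt

def relative_weight_change(state_before, state_after):
    """Per-parameter relative change ||w_after - w_before|| / ||w_before||."""
    changes = {}
    for name, w_before in state_before.items():
        if not torch.is_floating_point(w_before):
            continue  # skip non-float buffers such as integer counters
        w_after = state_after[name]
        denom = w_before.norm().clamp_min(1e-12)  # guard against zero norm
        changes[name] = ((w_after - w_before).norm() / denom).item()
    return changes

# Usage sketch (state dicts taken before and after unlearning):
# changes = relative_weight_change(model_before.state_dict(), model_after.state_dict())
# plt.bar(range(len(changes)), list(changes.values()))
# plt.xticks(range(len(changes)), list(changes.keys()), rotation=90, fontsize=5)
# plt.ylabel("relative weight change"); plt.tight_layout(); plt.show()
```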
2) Technical details and formulations are limited. It seems that the main novelty is reflected in the scheme or procedure.
ICLR_2022_2110
ICLR_2022
Weakness: 1) Although each part of the proposed method is effective, the overall algorithm is still cumbersome. It has multiple stages. In contrast, many existing pruning methods do not need fine-tuning. 2) Technical details and formulations are limited. It seems that the main novelty is reflected in the scheme or proc...
5
{ "annotators": [ "6686ebe474531e4a1975636f", "6740484e188a64793529ee77", "boda" ], "labels": [ "1", "1", "1" ] }
1
gold
396
7. The creation of the prompt dataset (for the few-shot case), together with its source, should be discussed.
qb2QRoE4W3
ICLR_2025
Despite the idea being interesting, I have found some technical issues that weaken the overall soundness. I enumerate them as follows: 1. The assumption that generated URLs are always meaningfully related to the core content of the document from which the premises are to be fetched is, by and large, not true. It works ...
5
{ "annotators": [ "6686ebe474531e4a1975636f", "6740484e188a64793529ee77", "boda" ], "labels": [ "5", "5", "5" ] }
5
gold
406
1. Add a few more sentences explaining the experimental setting for continual learning 2. In Fig. 3, explain the correspondence between the learning curves and M-PHATE. Why do you want me to look at the learning curves? Does a worse-performing model always result in structural collapse? What is the accuracy number...
NIPS_2019_165
NIPS_2019
of the approach and experiments or list future directions for readers. The writeup is exceptionally clear and well organized -- full marks! I have only minor feedback to improve clarity: 1. Add a few more sentences explaining the experimental setting for continual learning 2. In Fig. 3, explain the correspondence between ...
5
{ "annotators": [ "6686ebe474531e4a1975636f", "6740484e188a64793529ee77", "boda" ], "labels": [ "5", "5", "5" ] }
5
gold
413
1. The authors propose a new classification network, but I somewhat doubt that its classification error is universally as good as that of the standard softmax network. It is a bit dangerous to build a new model that better detects out-of-distribution samples while losing classification accuracy. Could the authors report the...
NIPS_2018_681
NIPS_2018
Weakness: However, I'm not very convinced by the experimental results, and I somewhat doubt that this method works in general or is useful in any sense. 1. The authors propose a new classification network, but I somewhat doubt that its classification error is universally as good as that of the standard softmax network. It is a bi...
5
{ "annotators": [ "6686ebe474531e4a1975636f", "6740484e188a64793529ee77", "boda" ], "labels": [ "5", "5", "5" ] }
5
gold
416
1. **Limited Applicability**: While the paper claims that SGC offers a more flexible, fine-grained tradeoff, PEFT methods typically target compute-constrained scenarios, where such granular control may require extra tuning that reduces practicality. It would be beneficial to include a plot with sparsity on the x-axis a...
xrtM8r0zdU
ICLR_2025
1. **Limited Applicability**: While the paper claims that SGC offers a more flexible, fine-grained tradeoff, PEFT methods typically target compute-constrained scenarios, where such granular control may require extra tuning that reduces practicality. It would be beneficial to include a plot with sparsity on the x-axis a...
5
{ "annotators": [ "6686ebe474531e4a1975636f", "6740484e188a64793529ee77", "boda" ], "labels": [ "5", "5", "5" ] }
5
gold
420
1. They stack the method of Mirzasoleiman et al. (2020) with the group-learning setting, and then use the classical method DBSCAN to cluster.
ICLR_2023_1823
ICLR_2023
Weakness: 1. They stack the method of Mirzasoleiman et al. (2020) with the group-learning setting, and then use the classical method DBSCAN to cluster. 2. In the comparison of gradient space and feature space, the normalization of the data in Figure 2 is not so clear. I think you do not need to normalize the data, sinc...
5
{ "annotators": [ "6686ebe474531e4a1975636f", "6740484e188a64793529ee77", "boda" ], "labels": [ "1", "1", "1" ] }
1
gold
429
* synthetic data: Could you give an example of what kind of data could look like this? In Figure 1, what is meant by "support data" and what by "predicted training count data"? Could you write down the model used here explicitly, e.g. add it to the appendix?
NIPS_2019_1350
NIPS_2019
of the method. CLARITY: The paper is well organized, in parts well written and easy to follow, in other parts with quite some potential for improvement, specifically in the experiments section. Suggestions for more clarity are below. SIGNIFICANCE: I consider the work significant, because there might be many settings in wh...
5
{ "annotators": [ "6686ebe474531e4a1975636f", "6740484e188a64793529ee77", "boda" ], "labels": [ "5", "5", "5" ] }
5
gold
432