Dataset schema:

review_point: string (length 45 to 642)
paper_id: string (length 10 to 19)
venue: string (15 classes)
focused_review: string (length 200 to 10.5k)
batch: int64 (values 2 to 10)
actionability: dict
actionability_label: string (5 classes)
actionability_label_type: string (1 value)
id: int64 (values 31 to 1.53k)
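The schema above can be sketched as a small record type plus a parsing helper. This is a hedged illustration, not part of the dataset release: the names `ReviewPoint` and `parse_record` are my own, and the guard for `actionability` arriving as a JSON string is an assumption; only the field names and types come from the schema.

```python
from dataclasses import dataclass
import json

# Hypothetical record type mirroring the columns listed in the schema above.
@dataclass
class ReviewPoint:
    review_point: str              # 45-642 chars
    paper_id: str                  # 10-19 chars, e.g. "NIPS_2016_117"
    venue: str                     # one of 15 venue classes, e.g. "ICLR_2024"
    focused_review: str            # 200-10.5k chars
    batch: int                     # 2-10
    actionability: dict            # {"annotators": [...], "labels": [...]}
    actionability_label: str       # one of 5 classes ("1".."5")
    actionability_label_type: str  # "gold" in the rows shown here

def parse_record(raw: dict) -> ReviewPoint:
    """Build a ReviewPoint from one raw row; the actionability dict
    may arrive serialized as a JSON string (an assumption)."""
    act = raw["actionability"]
    if isinstance(act, str):
        act = json.loads(act)
    return ReviewPoint(
        review_point=raw["review_point"],
        paper_id=raw["paper_id"],
        venue=raw["venue"],
        focused_review=raw["focused_review"],
        batch=int(raw["batch"]),
        actionability=act,
        actionability_label=str(raw["actionability_label"]),
        actionability_label_type=raw["actionability_label_type"],
    )

# Minimal usage sketch with values transcribed from one row below.
row = {
    "review_point": "- Figure 2 right. I found it difficult to distinguish ...",
    "paper_id": "NIPS_2016_117",
    "venue": "NIPS_2016",
    "focused_review": "weakness of this work is impact. ...",
    "batch": 7,
    "actionability": '{ "annotators": ["a", "b", "c"], "labels": ["5", "5", "5"] }',
    "actionability_label": "5",
    "actionability_label_type": "gold",
}
rec = parse_record(row)
print(rec.venue, rec.actionability["labels"])  # NIPS_2016 ['5', '5', '5']
```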
review_point: 3. It is suggested that the authors provide a brief introduction to energy models in the related work section. In Figure 1, it is not mentioned which points different learning rates in the left graph and different steps in the right graph correspond to. [1] Context-aware robust fine-tuning. [2] Fine-tuning can cripple ...
paper_id: 2JF8mJRJ7M
venue: ICLR_2024
focused_review: 1. Utilizing energy models to explain the fine-tuning of pre-trained models seems not to be essential. As per my understanding, the objective of the method in this paper as well as related methods ([1,2,3], etc.) is to reduce the difference in features extracted by the models before and after fine-tuning. 2. The author...
batch: 7
actionability: { "annotators": [ "6686ebe474531e4a1975636f", "6740484e188a64793529ee77", "boda" ], "labels": [ "5", "5", "5" ] }
actionability_label: 5
actionability_label_type: gold
id: 841

review_point: - Figure 2 right. I found it difficult to distinguish between the different curves. Maybe make use of styles (e.g. dashed lines) or add color.
paper_id: NIPS_2016_117
venue: NIPS_2016
focused_review: weakness of this work is impact. The idea of "direct feedback alignment" follows fairly straightforwardly from the original FA alignment work. Its notable that it is useful in training very deep networks (e.g. 100 layers) but its not clear that this results in an advantage for function approximation (the error rate is ...
batch: 7
actionability: { "annotators": [ "6686ebe474531e4a1975636f", "6740484e188a64793529ee77", "boda" ], "labels": [ "5", "5", "5" ] }
actionability_label: 5
actionability_label_type: gold
id: 849

review_point: - The claims made in the introduction are far from what has been achieved by the tasks and the models. The authors call this task language learning, but evaluate on question answering. I recommend the authors tone-down the intro and not call this language learning. It is rather a feedback driven QA in the form of a dia...
paper_id: NIPS_2016_93
venue: NIPS_2016
focused_review: - The claims made in the introduction are far from what has been achieved by the tasks and the models. The authors call this task language learning, but evaluate on question answering. I recommend the authors tone-down the intro and not call this language learning. It is rather a feedback driven QA in the form of a dia...
batch: 7
actionability: { "annotators": [ "6686ebe474531e4a1975636f", "6740484e188a64793529ee77", "boda" ], "labels": [ "5", "5", "5" ] }
actionability_label: 5
actionability_label_type: gold
id: 850

review_point: 2. Also, proving lower bounds for round complexity is the major chuck of work involved in proving results for batched ranking problems. However, this paper exploits an easy reduction from the problem of collaborative ranking, and hence, the lower bound results follow as an easy corollary of these collaborative ranking ...
paper_id: NIPS_2020_1344
venue: NIPS_2020
focused_review: 1. There have been several results on the problems of batched top-k ranking and fully adaptive coarse ranking in recent years. From that point of view the results in this paper are not particularly surprising. Even the idea that one can reduce the size of active arm set by a factor of n^{1/R} has appeared in [37] for t...
batch: 7
actionability: { "annotators": [ "6686ebe474531e4a1975636f", "6740484e188a64793529ee77", "boda" ], "labels": [ "1", "1", "1" ] }
actionability_label: 1
actionability_label_type: gold
id: 853

review_point: 5 shows evidence that some information is learned before the model is able to use the concepts." --> I think "evidence" may be too strong here, and would say something more like "Fig.
paper_id: NIPS_2022_331
venue: NIPS_2022
focused_review: I believe that one small (but important) part of the paper could use some clarifications in the writing: Section 3.2 (on Representational Probing). I will elaborate below. I think that a couple of claims in the paper may be slightly too strong and need a bit more nuance. I will elaborate below. A lot of the details des...
batch: 7
actionability: { "annotators": [ "6686ebe474531e4a1975636f", "6740484e188a64793529ee77", "boda" ], "labels": [ "5", "5", "5" ] }
actionability_label: 5
actionability_label_type: gold
id: 857

review_point: 1. **Limited Novelty in Video Storyboarding**: The innovation of the proposed video storyboarding approach is limited. The primary method relies on frame-wise SDSA, which largely mirrors the approach used in ConsiStory. The only notable difference lies in the mask source, utilizing CLIPseg and OTSU segmentation rather ...
paper_id: 0zRuk3QdiH
venue: ICLR_2025
focused_review: 1. **Limited Novelty in Video Storyboarding**: The innovation of the proposed video storyboarding approach is limited. The primary method relies on frame-wise SDSA, which largely mirrors the approach used in ConsiStory. The only notable difference lies in the mask source, utilizing CLIPseg and OTSU segmentation rather ...
batch: 7
actionability: { "annotators": [ "6686ebe474531e4a1975636f", "6740484e188a64793529ee77", "boda" ], "labels": [ "1", "1", "1" ] }
actionability_label: 1
actionability_label_type: gold
id: 858

review_point: 2. Although there is good performance on imageNet classification with ResNet50/34/18, there are no results with larger models like ResNet101/152.
paper_id: NIPS_2020_125
venue: NIPS_2020
focused_review: - In section 3.1, the logic of extending HOGA from second order is not consistent with the extension from first order to second order; i.e., second order attention creates one more intermediate state U compared to the first order attention module. However, from the second order to higher order attention module, althoug...
batch: 7
actionability: { "annotators": [ "6686ebe474531e4a1975636f", "6740484e188a64793529ee77", "boda" ], "labels": [ "4", "4", "4" ] }
actionability_label: 4
actionability_label_type: gold
id: 867

review_point: 3. There is not much novelty in the methodology. The proposed meta algorithm is basically a direct extension of existing methods.
paper_id: NIPS_2017_401
venue: NIPS_2017
focused_review: Weakness: 1. There are no collaborative games in experiments. It would be interesting to see how the evaluated methods behave in both collaborative and competitive settings. 2. The meta solvers seem to be centralized controllers. The authors should clarify the difference between the meta solvers and the centralized RL ...
batch: 7
actionability: { "annotators": [ "6686ebe474531e4a1975636f", "6740484e188a64793529ee77", "boda" ], "labels": [ "1", "1", "1" ] }
actionability_label: 1
actionability_label_type: gold
id: 875

review_point: - The performance is only compared with few methods. And the proposed is not consistently better than other methods. For those inferior results, some analysis should be provided since the results violate the motivation. I am willing to change my rating according to the feedback from authors and the comments from other ...
paper_id: ICLR_2021_1014
venue: ICLR_2021
focused_review: - I am not an expert in the area of pruning. I think this motivation is quite good but the results seem to be less impressive. Moreover, I believe the results should be evaluated from more aspects, e.g., the actual latency on target device, the memory consumption during the inference time and the actual network size. -...
batch: 7
actionability: { "annotators": [ "6686ebe474531e4a1975636f", "6740484e188a64793529ee77", "boda" ], "labels": [ "3", "3", "3" ] }
actionability_label: 3
actionability_label_type: gold
id: 879

- "D" is used to represent both dimensionality of points and dilation factor. Better to use different notation to avoid confusion. Review summary:
NIPS_2018_76
NIPS_2018
- A main weakness of this work is its technical novelty with respect to spatial transformer networks (STN) and also the missing comparison to the same. The proposed X-transformation seems quite similar to STN, but applied locally in a neighborhood. There are also existing works that propose to apply STN in a local pixe...
7
{ "annotators": [ "6686ebe474531e4a1975636f", "6740484e188a64793529ee77", "boda" ], "labels": [ "5", "5", "5" ] }
5
gold
883
review_point: 2) I noticed that in Sec 5.3, a generator equipped with a standard R-GCN as discriminator tends to collapse after several (around 20), while the proposed module will not. The reason behind this fact can be essential to show the mechanism how the proposed method differs from previous one. However, this part is missing i...
paper_id: NIPS_2020_0
venue: NIPS_2020
focused_review: I mainly have the following concerns. 1) In general, this paper is incremental to GIN [1], which limits the contribution of this paper. While GIN is well motivated by WL test with solid theoretical background, this paper lacks deeper analysis and new motivation behind the algorithm design. I suggest the authors to give...
batch: 7
actionability: { "annotators": [ "6686ebe474531e4a1975636f", "6740484e188a64793529ee77", "boda" ], "labels": [ "5", "5", "5" ] }
actionability_label: 5
actionability_label_type: gold
id: 886

review_point: 4. **Originality Concerns**: The article's reasoning and writing logic bear similarities to those found in "How Do Nonlinear Transformers Learn and Generalize in In-Context Learning." It raises the question of whether this work is merely an extension of the previous study or if it introduces novel contributions.
paper_id: n7n8McETXw
venue: ICLR_2025
focused_review: 1. **Limitations in Model Complexity**: The paper primarily analyzes a single-head attention Transformer, which may not encapsulate the full complexity and performance characteristics of multi-layer and multi-head attention models. The validation was confined to binary classification tasks, thereby restricting the gene...
batch: 7
actionability: { "annotators": [ "6686ebe474531e4a1975636f", "6740484e188a64793529ee77", "boda" ], "labels": [ "1", "1", "1" ] }
actionability_label: 1
actionability_label_type: gold
id: 887

review_point: 7. Experiments - Ablation - missing components: There should be experiments and explanation regarding the different queries used in spatio-temporal representation, i.e., spatial, temporal and summary. That is the key difference to VideoChatGPT and other works. What if only have spatial one, or temporal and summary one?
paper_id: R6sIi9Kbxv
venue: ICLR_2025
focused_review: 1. The approach of decomposing video representation into spatial and temporal representation for efficient and effective spatio-temporal modelling is a general idea in video understanding. I'm not going to blame using this in large video language models, however, I think proper credit and literature reviews should be i...
batch: 7
actionability: { "annotators": [ "6686ebe474531e4a1975636f", "6740484e188a64793529ee77", "boda" ], "labels": [ "5", "5", "5" ] }
actionability_label: 5
actionability_label_type: gold
id: 891

review_point: 6 Societal impact The authors state that they foresee no negative social impacts of their work (line 379). While I do not believe this work has the potential for significant negative social impact (and I'm not quite sure if/how I'm meant to review this aspect of their work), the authors could always mention the social ...
paper_id: NIPS_2021_780
venue: NIPS_2021
focused_review: 5 Limitations a. The authors briefly talk about the limitations of the approach in section 5. The main limitation they draw attention to is the challenge of moving closer to the local maxima of the reward function in the latter stages of optimization. To resolve this they discuss combining their method with local optim...
batch: 7
actionability: { "annotators": [ "6686ebe474531e4a1975636f", "6740484e188a64793529ee77", "boda" ], "labels": [ "5", "5", "5" ] }
actionability_label: 5
actionability_label_type: gold
id: 893

review_point: - Section 3 and Section 4 are slightly redundant: maybe putting the first paragraph of sec 4 in sec 3 and putting the remainder of sec 4 before section 3 would help.
paper_id: NIPS_2018_606
venue: NIPS_2018
focused_review: , I tend to vote in favor of this paper. * Detailed remarks: - The analysis in Figure 4 is very interesting. What is a possible explanation for the behaviour in Figure 4(d), showing that the number of function evaluations automatically increases with the epochs? Consequently, how is it possible to control the tradeoff ...
batch: 7
actionability: { "annotators": [ "6686ebe474531e4a1975636f", "6740484e188a64793529ee77", "boda" ], "labels": [ "5", "5", "5" ] }
actionability_label: 5
actionability_label_type: gold
id: 894

review_point: - The applicability of the methods to real world problems is rather limited as strong assumptions are made about the availability of camera parameters (extrinsics and intrinsics are known) and object segmentation.
paper_id: NIPS_2017_35
venue: NIPS_2017
focused_review: - The applicability of the methods to real world problems is rather limited as strong assumptions are made about the availability of camera parameters (extrinsics and intrinsics are known) and object segmentation. - The numerical evaluation is not fully convincing as the method is only evaluated on synthetic data. The ...
batch: 7
actionability: { "annotators": [ "6686ebe474531e4a1975636f", "6740484e188a64793529ee77", "boda" ], "labels": [ "1", "1", "1" ] }
actionability_label: 1
actionability_label_type: gold
id: 900

review_point: - L006, as later written in the main text, "thousands" is not accurate here. Maybe add "on the subword level"?
paper_id: ARR_2022_286_review
venue: ARR_2022
focused_review: While there exist many papers discussing the softmax bottleneck or the stolen probability problem, similar to what the authors found, I personally have not found enough evidence in my work that the problem is really severe. After all, there are intrinsic uncertainties in the empirical distributions of the training data...
batch: 7
actionability: { "annotators": [ "6686ebe474531e4a1975636f", "6740484e188a64793529ee77", "boda" ], "labels": [ "5", "5", "5" ] }
actionability_label: 5
actionability_label_type: gold
id: 901

review_point: - A number of hyperparameters (e.g. regularization) are not given - For all the latent path figures (eg Fig 3) why is the y value at x= 0 always 0? Is it normalized to this? Be clear in your description (or maybe I missed it) - I would be interested in seeing some further analysis on this model, perhaps using the inter...
paper_id: ICLR_2021_634
venue: ICLR_2021
focused_review: + Clarifications: - The question of the latent variable model seems relevant and interesting. It seems that the mixup method is only as good as the model, and also the trained model might add its own biases to the classification task. It would be nice to see some discussion of this in the paper - I am surprised that mi...
batch: 7
actionability: { "annotators": [ "6686ebe474531e4a1975636f", "6740484e188a64793529ee77", "boda" ], "labels": [ "5", "5", "5" ] }
actionability_label: 5
actionability_label_type: gold
id: 902

review_point: - In the paper it is mentioned that the obtained core tensors can be rounded to smaller ranks with a given accuracy by clustering the values of the domain sets or imposing some error decision epsilon if the values are not discrete. It is not clear what is, in theory, the effect on the approximation in the full tensor e...
paper_id: ICLR_2023_1195
venue: ICLR_2023
focused_review: - The assumption that a set of analytical derivative functions is available is a very strong hypothesis so the number of cases where this method can be applied seems limited. - The high dimensional tensor can be also compactly represented by the set of derivative functions avoiding the curse of dimensionality, so it is...
batch: 7
actionability: { "annotators": [ "6686ebe474531e4a1975636f", "6740484e188a64793529ee77", "boda" ], "labels": [ "5", "5", "5" ] }
actionability_label: 5
actionability_label_type: gold
id: 904

review_point: * It would help if the form of p was described somewhere near line 135. As per my above comment, I assume it is a Gaussian distribution, but it's not explicitly stated.
paper_id: NIPS_2019_663
venue: NIPS_2019
focused_review: of their work?"] The submission is overall reasonably sound, although I have some comments and questions: * Regarding the model itself, I am confused by the GRU-Bayes component. I must be missing something, but why is it not possible to ingest observed data using the GRU itself, as in equation 2? This confusion would p...
batch: 7
actionability: { "annotators": [ "6686ebe474531e4a1975636f", "6740484e188a64793529ee77", "boda" ], "labels": [ "5", "5", "5" ] }
actionability_label: 5
actionability_label_type: gold
id: 909

review_point: - Typically, expected performance under observation noise is used for evaluation because the decision-maker is interested in the true objective function and the noise is assumed to be noise (misleading, not representative). In the formulation in this paper, the decision maker does care about the noise; rather the objec...
paper_id: NIPS_2021_1251
venue: NIPS_2021
focused_review: - Typically, expected performance under observation noise is used for evaluation because the decision-maker is interested in the true objective function and the noise is assumed to be noise (misleading, not representative). In the formulation in this paper, the decision maker does care about the noise; rather the objec...
batch: 7
actionability: { "annotators": [ "6686ebe474531e4a1975636f", "6740484e188a64793529ee77", "boda" ], "labels": [ "5", "5", "5" ] }
actionability_label: 5
actionability_label_type: gold
id: 919

review_point: - Can we run VGAE with a vamp prior to more accurately match the doubly stochastic construction in this work? That would help inform if the benefits are coming from a better generative model or better inference due to doubly-semi implicit variational inference. Minor Points - Figure 3: It might be nice to keep the gene...
paper_id: NIPS_2019_961
venue: NIPS_2019
focused_review: - It would be good to better justify and understand the bernoulli poisson link. Why are the number of layers used in the link in the poisson part? The motivation for the original paper [40] seems to be that one can capture communities and the sum in the exponential is over r_k coefficientst where each coefficient corre...
batch: 7
actionability: { "annotators": [ "6686ebe474531e4a1975636f", "6740484e188a64793529ee77", "boda" ], "labels": [ "5", "5", "5" ] }
actionability_label: 5
actionability_label_type: gold
id: 920

review_point: - Though the performance of method is pretty good especially in Table 2, the novelty/contribution of the method is somewhat incremental. The main contribution of the work is a new network design drawing inspirations from prior work for the sound source localization task.
paper_id: NIPS_2020_844
venue: NIPS_2020
focused_review: - It is claimed that the proposed method aims to discrminatively localize the sounding objects from their mixed sound without any manual annotations. However, the method aslo aims to do class-aware localization. As shown in Figure 4, the object categories are labeled for the localized regions for the proposed method. I...
batch: 7
actionability: { "annotators": [ "6686ebe474531e4a1975636f", "6740484e188a64793529ee77", "boda" ], "labels": [ "1", "1", "1" ] }
actionability_label: 1
actionability_label_type: gold
id: 922

review_point: - The paper opens that learning long-range dependencies is important for powerful predictors. In the example of semantic segmentation I can see that this is actually happening, e.g., in the visualisations in table 3; but I am not sure if it is fully required. Probably the truth lies somewhere in between and I miss a di...
paper_id: NIPS_2018_849
venue: NIPS_2018
focused_review: - The presented node count for the graphs is quite low. How is performance affected if the count is increased? In the example of semantic segmentation: how does it affect the number of predicted classes? - Ablation study: how much of the learned pixel to node association is responsible for the performance boost. Previo...
batch: 7
actionability: { "annotators": [ "6686ebe474531e4a1975636f", "6740484e188a64793529ee77", "boda" ], "labels": [ "2", "2", "2" ] }
actionability_label: 2
actionability_label_type: gold
id: 923

review_point: - Do you have a demonstration or result related to your model collapsing less than other methods? In line 159, you mentioned gradients become 0 and collapse; it was a good point, is it commonly encountered, did you observe it in your experiments?
paper_id: NIPS_2021_2445
venue: NIPS_2021
focused_review: and strengths in their analysis with sufficient experimental detail, it is admirable, but they could provide more intuition why other methods do better than theirs. The claims could be better supported. Some examples and questions(if I did not miss out anything) Why using normalization is a problem for a network or a t...
batch: 7
actionability: { "annotators": [ "6686ebe474531e4a1975636f", "6740484e188a64793529ee77", "boda" ], "labels": [ "5", "5", "5" ] }
actionability_label: 5
actionability_label_type: gold
id: 926

review_point: - This method seems to only work for generative models that can be fine-tuned as an in/outpainting model.
paper_id: F61IzZl5jw
venue: ICLR_2025
focused_review: - The paper uses 5,000 images as the training set (am I correct?) . I think the training set size is too small, and is easily memorized with sufficient long learning by large models such as SD 2 . What I am concerned about is what proportion of data is memorized when training with a huge set. - This method seems to onl...
batch: 7
actionability: { "annotators": [ "6686ebe474531e4a1975636f", "6740484e188a64793529ee77", "boda" ], "labels": [ "1", "1", "1" ] }
actionability_label: 1
actionability_label_type: gold
id: 930

review_point: 2. The experimental setup, tasks, and other details are also moved to the appendix which makes it hard to interpret this anyway. I would suggest moving some of these details back in and moving some background from Section 2 to the appendix instead.
paper_id: NIPS_2018_494
venue: NIPS_2018
focused_review: 1. The biggest weakness is that there is little empirical validation provided for the constructed methods. A single table presents some mixed results where in some cases hyperbolic networks perform better and in others their euclidean counterparts or a mixture of the two work best. It seems that more work is needed to ...
batch: 7
actionability: { "annotators": [ "6686ebe474531e4a1975636f", "6740484e188a64793529ee77", "boda" ], "labels": [ "5", "5", "5" ] }
actionability_label: 5
actionability_label_type: gold
id: 934

review_point: - It would be helpful if you provided glosses in Figure 2.
paper_id: ACL_2017_433_review
venue: ACL_2017
focused_review: - The annotation quality seems to be rather poor. They performed double annotation of 100 sentences and their inter-annotator agreement is just 75.72% in terms of LAS. This makes it hard to assess how reliable the estimate of the LAS of their model is, and the LAS of their model is in fact slightly higher than the inte...
batch: 8
actionability: { "annotators": [ "boda", "6686ebe474531e4a1975636f", "6740484e188a64793529ee77" ], "labels": [ "5", "5", "5" ] }
actionability_label: 5
actionability_label_type: gold
id: 935

review_point: 6) In line 135, the author says "Initially the network only has a few active vertices, due to sparsity." How is "active vertices" defined here?
paper_id: NIPS_2018_700
venue: NIPS_2018
focused_review: Weakness: The major quality problem of this paper is clarity. In terms of clarity, there are several confusing places in the paper, especially in equation 9, 10, 11, 12. 1) What is s_{i,j} in these equations? In definition 1, the author mentions that s_{i,j} denotes edge weights in the graph, but what are their values ...
batch: 8
actionability: { "annotators": [ "boda", "6686ebe474531e4a1975636f", "6740484e188a64793529ee77" ], "labels": [ "5", "5", "5" ] }
actionability_label: 5
actionability_label_type: gold
id: 940

review_point: 1) The proposed method can be viewed as a direct combination of GCN and normalizing flow, with the ultimate transformed distribution, which is Gaussian in conventional NF, replaced by Gaussian mixture distribution, encouraging the latent representation to be more clustered. Technically, there is no enough new stuffs he...
paper_id: ICLR_2023_2698
venue: ICLR_2023
focused_review: 1) The proposed method can be viewed as a direct combination of GCN and normalizing flow, with the ultimate transformed distribution, which is Gaussian in conventional NF, replaced by Gaussian mixture distribution, encouraging the latent representation to be more clustered. Technically, there is no enough new stuffs he...
batch: 8
actionability: { "annotators": [ "boda", "6686ebe474531e4a1975636f", "6740484e188a64793529ee77" ], "labels": [ "1", "1", "1" ] }
actionability_label: 1
actionability_label_type: gold
id: 944

review_point: 1). Only projection head (CNN layers) are affected but not classification head (FCN layer);
paper_id: ICLR_2022_2470
venue: ICLR_2022
focused_review: Weakness: The idea is a bit simple -- which in of itself is not a true weakness. ResNet as an idea is not complicated at all. I find it disheartening that the paper did not really tell readers how to construct a white paper in section 3 (if I simply missed it, please let me know). However, the code in the supplementary...
batch: 8
actionability: { "annotators": [ "boda", "6686ebe474531e4a1975636f", "6740484e188a64793529ee77" ], "labels": [ "1", "1", "1" ] }
actionability_label: 1
actionability_label_type: gold
id: 945

review_point: - The analogy between HOI analysis and Harmonic analysis is interesting at first glance, but the link is quite weak. In the problem contexts, there is only two “basis” (human and object) to form an HOI. The decomposition/integration steps introduced in this paper also do not have a close connection with the Fourier ana...
paper_id: NIPS_2020_420
venue: NIPS_2020
focused_review: **Exposition** - I think the paper contains interesting ideas with good empirical results. However, the exposition of the method is not easy to follow and require significant revision. Here are a couple of examples that were unclear. - L6: “coherent HOI.” What does it mean to have “coherent HOI”? What are the incoheren...
batch: 8
actionability: { "annotators": [ "boda", "6686ebe474531e4a1975636f", "6740484e188a64793529ee77" ], "labels": [ "2", "2", "2" ] }
actionability_label: 2
actionability_label_type: gold
id: 947

review_point: - fig 8 shows images with different focusing distance, but it only shows 1m and 5m, which both exist in the training data. How about focusing distance other than those appeared in training? does it generalize well?
paper_id: ICLR_2021_2674
venue: ICLR_2021
focused_review: Though the training procedure is novel, a part of the algorithm is not well-justified to follow the physics and optics nature of this problem. A few key challenges in depth from defocus are missing, and the results lack a full analysis. See details below: - the authors leverage multiple datasets, including building the...
batch: 8
actionability: { "annotators": [ "boda", "6686ebe474531e4a1975636f", "6740484e188a64793529ee77" ], "labels": [ "5", "5", "5" ] }
actionability_label: 5
actionability_label_type: gold
id: 949

review_point: - The authors should also consider defining content and style more broadly as it relates to their specific neural application (e.g., as in Gabbay &Hosehn (2018)) where style is instance-specific(?) and content includes information that can be transferred among groups. More specifically, since their model is not sequent...
paper_id: NIPS_2021_28
venue: NIPS_2021
focused_review: The paper is overall interesting, well-written and makes a valuable contribution. I do, however, have some comments for the authors to consider (which in my mind, are potential limitations of the study): - Comparison of the proposed unsupervised method with the supervised baseline is not suggestive because of the absen...
batch: 8
actionability: { "annotators": [ "boda", "6686ebe474531e4a1975636f", "6740484e188a64793529ee77" ], "labels": [ "5", "5", "5" ] }
actionability_label: 5
actionability_label_type: gold
id: 950

review_point: 8. Eq. 12 is confusing. Where does the reward come from at each trial? Is one of the r_i taken from Eq. 11? Explaining the network model in Sec. 4.2 with equations would greatly improve clarity. [1] https://www.sciencedirect.com/science/article/pii/S0893608019301741 [2] https://www.frontiersin.org/articles/10.3389/fnin...
paper_id: ICLR_2022_1824
venue: ICLR_2022
focused_review: . However, I struggle to see the novelty in the author’s approach: spikes and local connections alone have been tried many times (Tab.3 and also [1]). Training the output layer (rather than the whole network) with an RL-based rule is somewhat new, but I find this approach unreasonable for the following reasons: The las...
batch: 8
actionability: { "annotators": [ "boda", "6686ebe474531e4a1975636f", "6740484e188a64793529ee77" ], "labels": [ "5", "5", "5" ] }
actionability_label: 5
actionability_label_type: gold
id: 954

248 "Above definition": determiner missing Section 3 "Action verbs": Which 50 classes do you pick, and you do you choose them? Are the verbs that you pick all explicitly tagged as action verbs by Levin? 306ff What are "action frames"? How do you pick them?
ACL_2017_818_review
ACL_2017
1) Many aspects of the approach need to be clarified (see detailed comments below). What worries me the most is that I did not understand how the approach makes knowledge about objects interact with knowledge about verbs such that it allows us to overcome reporting bias. The paper gets very quickly into highly technica...
8
{ "annotators": [ "boda", "6686ebe474531e4a1975636f", "6740484e188a64793529ee77" ], "labels": [ "5", "5", "5" ] }
5
gold
956
review_point: 3 Minor Issues: Ln 32 on Page 1, ‘Empiically’ should be ‘Empirically’
paper_id: mhCNUP4Udw
venue: ICLR_2025
focused_review: 1 The motivation for incorporating vision modality into MPNNs for link prediction should be better clarified and discussed. Why is this design effective? Any theoretical evidence? Maybe a dedicated section for this discussion could be valuable. 2 The counterpart methods used for experimental comparison seem not SOTA en...
batch: 8
actionability: { "annotators": [ "boda", "6686ebe474531e4a1975636f", "6740484e188a64793529ee77" ], "labels": [ "5", "5", "5" ] }
actionability_label: 5
actionability_label_type: gold
id: 957

review_point: * The number of entities is fixed and it's not clear how to generalize a model to different numbers of entities (e.g., as shown in figure 3 of INs).
paper_id: NIPS_2017_434
venue: NIPS_2017
focused_review: --- This paper is very clean, so I mainly have nits to pick and suggestions for material that would be interesting to see. In roughly decreasing order of importance: 1. A seemingly important novel feature of the model is the use of multiple INs at different speeds in the dynamics predictor. This design choice is not ab...
batch: 8
actionability: { "annotators": [ "boda", "6686ebe474531e4a1975636f", "6740484e188a64793529ee77" ], "labels": [ "4", "4", "4" ] }
actionability_label: 4
actionability_label_type: gold
id: 961

review_point: 3. The novelty of the idea is not enough. In addition to the limitations pointed out above, both new metric and method are relatively straightforward.
paper_id: NIPS_2022_2182
venue: NIPS_2022
focused_review: Weakness: 1. Contribution is not convincing. They argue that the traditional adaptive filterbank uses a scalar weight shared by all nodes, and their proposed method learns different weights for different nodes. However, in my opinion, FAGCN can do the same thing. 2. There is a gap between the proposed metric and method...
batch: 8
actionability: { "annotators": [ "boda", "6686ebe474531e4a1975636f", "6740484e188a64793529ee77" ], "labels": [ "1", "1", "1" ] }
actionability_label: 1
actionability_label_type: gold
id: 964

review_point: - Identifying rationales is not a simple problem, specifically for more complicated NLP tasks like machine translation. The paper is well organized and easy to follow. Figure 2 is a bit cluttered and the "bold" text is hard to see, perhaps another color or a bigger font could help in highlighting the human identified r...
paper_id: ARR_2022_252_review
venue: ARR_2022
focused_review: - The proposed approach is not fully automatic, and still requires human annotations for identifying rationales and correcting errors from the static semi-factual generation phase. While this annotation effort could be less significant that other data augmentation methods, it still presents a significant cost overhead....
batch: 8
actionability: { "annotators": [ "boda", "6686ebe474531e4a1975636f", "6740484e188a64793529ee77" ], "labels": [ "5", "5", "5" ] }
actionability_label: 5
actionability_label_type: gold
id: 968

review_point: - In the proposed E2W algorithm, what is the intuition behind the very specific choice of $\lambda_t$ for encouraging exploration? What if the exploration parameter $\epsilon$ is not included? Also, why is $\sum_a N(s, a)$ (but not $N(s, a)$) used for $\lambda_s$ in Equation (7)?
paper_id: NIPS_2019_854
venue: NIPS_2019
focused_review: weakness I found in the paper is that the experimental results for Atari games are not significant enough. Here are my questions: - In the proposed E2W algorithm, what is the intuition behind the very specific choice of $\lambda_t$ for encouraging exploration? What if the exploration parameter $\epsilon$ is not include...
batch: 8
actionability: { "annotators": [ "boda", "6686ebe474531e4a1975636f", "6740484e188a64793529ee77" ], "labels": [ "5", "5", "5" ] }
actionability_label: 5
actionability_label_type: gold
id: 979

review_point: * L75. Maybe say that pi is a function from R^m \to \Delta^{K+1} * In (2) you have X pi(X), but the dimensions do not match because you dropped the no-op action. Why not just assume the 1st column of X_t is always 0?
paper_id: NIPS_2016_386
venue: NIPS_2016
focused_review: , however. For of all, there is a lot of sloppy writing, typos and undefined notation. See the long list of minor comments below. A larger concern is that some parts of the proof I could not understand, despite trying quite hard. The authors should focus their response to this review on these technical concerns, which ...
batch: 8
actionability: { "annotators": [ "boda", "6686ebe474531e4a1975636f", "6740484e188a64793529ee77" ], "labels": [ "5", "5", "5" ] }
actionability_label: 5
actionability_label_type: gold
id: 981

review_point: 1. I am concerned about the importance of this result. Since [15] says that perturbed gradient descent is able to find second-order stationary with almost-dimension free (with polylog factors of dimension) polynomial iteration complexity, it is not surprising to me that the decentralized algorithm with occasionally add...
paper_id: NIPS_2020_1776
venue: NIPS_2020
focused_review: 1. I am concerned about the importance of this result. Since [15] says that perturbed gradient descent is able to find second-order stationary with almost-dimension free (with polylog factors of dimension) polynomial iteration complexity, it is not surprising to me that the decentralized algorithm with occasionally add...
batch: 8
actionability: { "annotators": [ "boda", "6686ebe474531e4a1975636f", "6740484e188a64793529ee77" ], "labels": [ "1", "1", "1" ] }
actionability_label: 1
actionability_label_type: gold
id: 986

review_point: - Keypoint detection results should be included in the experiments section.
paper_id: NIPS_2017_217
venue: NIPS_2017
focused_review: - The paper is incremental and does not have much technical substance. It just adds a new loss to [31]. - "Embedding" is an overloaded word for a scalar value that represents object ID. - The model of [31] is used in a post-processing stage to refine the detection. Ideally, the proposed model should be end-to-end witho...
batch: 8
actionability: { "annotators": [ "boda", "6686ebe474531e4a1975636f", "6740484e188a64793529ee77" ], "labels": [ "5", "5", "5" ] }
actionability_label: 5
actionability_label_type: gold
id: 987

review_point: 3. Since ternary potential seems to be the main factor in the performance improvement of the proposed model, I would like the authors to compare the proposed model with existing models where answers are also used as inputs such as Revisiting Visual Question Answering Baselines (Jabri et al., ECCV16).
paper_id: NIPS_2017_351
venue: NIPS_2017
focused_review: 1. The approach mentions attention over 3 modalities – image, question and answer. However, it is not clear what attention over answers mean because most of the answers are single words and even if they are multiword, they are treated as single word. The paper does not present any visualizations for attention over an...
batch: 8
actionability: { "annotators": [ "boda", "6686ebe474531e4a1975636f", "6740484e188a64793529ee77" ], "labels": [ "5", "5", "5" ] }
actionability_label: 5
actionability_label_type: gold
id: 988

- The simple/traditional experiment for unseen characters is a nice idea, but is presented as an afterthought. I would have liked to see more eval in this direction, i.e. on classifying unseen words - Maybe add translations to Figure 6, for people who do not speak Chinese?
ACL_2017_543_review
ACL_2017
- Experimental results show only incremental improvement over baseline, and the choice of evaluation makes it hard to verify one of the central arguments: that visual features improve performance when processing rare/unseen words. - Some details about the baseline are missing, which makes it difficult to interpret the ...
8
{ "annotators": [ "boda", "6686ebe474531e4a1975636f", "6740484e188a64793529ee77" ], "labels": [ "5", "5", "5" ] }
5
gold
994
9. The equation below Line 502: I think the '+' sign after \nu_j should be a '-' sign. In the definition of B under Line 503, there should be a '-' sign before \sum_{j=1}^m, and the '-' sign after \nu_j should be a '+' sign. In Line 504, we should have \nu_{X_i|Z} = - B/(2A). Minor comments:
NIPS_2019_1397
NIPS_2019
weakness of the manuscript. Clarity: The manuscript is well-written in general. It does a good job in explaining many results and subtle points (e.g., blessing of dimensionality). On the other hand, I think there is still room for improvement in the structure of the manuscript. The methodology seems fully explainable b...
8
{ "annotators": [ "boda", "6686ebe474531e4a1975636f", "6740484e188a64793529ee77" ], "labels": [ "5", "5", "5" ] }
5
gold
1003
2) the authors did not compare any models other than GPT2. Several sections of the paper read confusing to me. There is a missing citation / reference in Line 99, section 3.1. The notation \hat{D}(c) from Line 165, section 3.4 is unreferenced. The authors made great effort to acknowledge the limitations of their work.
NIPS_2021_725
NIPS_2021
Comparing the occupational statistics computed by GPT2 vs those by the United States is very interesting and informative. However, the presentation on the methodology and the subsequent discussion is confusing to me. Particularly from section 3.4, I am not sure what “adj.” in equation (1) means and why “adj. Pred” is a...
8
{ "annotators": [ "boda", "6686ebe474531e4a1975636f", "6740484e188a64793529ee77" ], "labels": [ "5", "5", "5" ] }
5
gold
1011
- Authors mention in the abstract and other times that "online learning formulation overlooks key practical considerations. A proper comparison (in evaluation results) against online learning approaches is missing (as well as against RL, mentioned below). In such way, it would be clear why online learning cannot be use...
iGX0lwpUYj
ICLR_2025
- Very important: It is not mentioned in the title or main parts of the work (Abstract) that the work focuses on classification only. It is important, as there is no discussion or result on how this solution could work for regression tasks. Please make sure you explicitly mention this, including a possible modification...
8
{ "annotators": [ "boda", "6686ebe474531e4a1975636f", "6740484e188a64793529ee77" ], "labels": [ "5", "5", "5" ] }
5
gold
1016
3). Important references are missing. The GFF[1] and EfficientFCN[2] both aims to implement the fast semantic segmentation method in the encode-decoder architecture. I encourage the authors to have a comprehensive comparison with these work. [1]. Gated Fully Fusion for Semantic Segmentation, AAAI'20. [2]. EfficientFCN:...
NIPS_2021_2247
NIPS_2021
1). Lack of speed analysis, the experiments have compared GFLOPs of different segmentation networks. However, there is no comparisons of inference speed between the proposed network and prior work. The improvement on inference speed will be more interesting than reducing FLOPs. 2). For the detail of the proposed NRD, i...
8
{ "annotators": [ "boda", "6686ebe474531e4a1975636f", "6740484e188a64793529ee77" ], "labels": [ "5", "5", "5" ] }
5
gold
1021
2 It is also recommended to compare the performance with "Multilingual unsupervised neural machine translation with denoising adapters. " and "MAD-X: An Adapter-Based Framework for Multi-Task Cross-Lingual Transfer", which trains adapters on top of a well trained multilingual pretrained model.
u9Fvsy8Brx
EMNLP_2023
1 The technical contribution is incremental. The model architecture is very similar to that of "Multilingual unsupervised neural machine translation with denoising adapters. " However, the authors fail to talk about the relations and differences. However, it is interesting to see the performance of such model on NLU an...
8
{ "annotators": [ "boda", "6686ebe474531e4a1975636f", "6740484e188a64793529ee77" ], "labels": [ "5", "5", "5" ] }
5
gold
1023
- I am concerned about the practical performance of the proposed algorithms. Specifically, Algorithm 1 uses Vandermonde matrix, which is known to be very ill-conditioned and numerically unstable, especially for large $n$ and small $\alpha$. Since $\alpha$ can be as small as $1/k^2L^2$, I am worried that the algorithm m...
NIPS_2019_945
NIPS_2019
Weakness: - I am concerned about the practical performance of the proposed algorithms. Specifically, Algorithm 1 uses Vandermonde matrix, which is known to be very ill-conditioned and numerically unstable, especially for large $n$ and small $\alpha$. Since $\alpha$ can be as small as $1/k^2L^2$, I am worried that the a...
8
{ "annotators": [ "boda", "6686ebe474531e4a1975636f", "6740484e188a64793529ee77" ], "labels": [ "5", "5", "5" ] }
5
gold
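The record above (NIPS_2019_945) claims that Vandermonde matrices are "very ill-conditioned and numerically unstable, especially for large $n$". That claim is easy to check numerically; the sketch below is an editorial illustration only, not part of the dataset, and the choice of equispaced nodes in [0, 1] is an assumption made for the demo.

```python
import numpy as np

# Editorial illustration (not from the dataset): the condition number of a
# square Vandermonde matrix grows rapidly with n, supporting the reviewer's
# numerical-stability concern. Equispaced nodes in [0, 1] are an assumed,
# representative choice.
for n in (4, 8, 12):
    nodes = np.linspace(0.0, 1.0, n)   # n sample nodes in [0, 1]
    V = np.vander(nodes)               # n x n Vandermonde matrix
    print(n, np.linalg.cond(V))        # condition number blows up as n grows
```

Even by n = 12 the condition number is already far above 1e4 for these nodes, which is consistent with the reviewer's worry for the much larger n implied by a step size as small as $1/k^2L^2$.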
1025
- Your setting is very specific: you need to know the model or/and have access to a generative model (for expanding or generating trajectories), the problem should be episodic and the reward should be given just at the end of a task (i.e., reaching the target goal). Can you extend this approach to more general settings...
NIPS_2018_296
NIPS_2018
weakness of the proposed approach. Model-based algorithms (LevinTS is model-based) for planning do not have such requirements. On the other hand, if the goal is to refine a policy at the end of some optimization procedure I understand the choice of using a policy-guided heuristic. - Concerning LubyTS it is hard to quan...
8
{ "annotators": [ "boda", "6686ebe474531e4a1975636f", "6740484e188a64793529ee77" ], "labels": [ "5", "5", "5" ] }
5
gold
1027
- In Section 5, the authors present Figure 1, Figure 2, and Figure 3, yet fail to provide explanations or analysis for them. The authors need to clarify why there are negative numbers in Figure 1, as well as the implications of Figure 2 and Figure 3.
RWH1WazQqE
EMNLP_2023
- It appears that this paper merely employs a simplified version of the Self-Refine method (https://arxiv.org/abs/2303.17651) initially devised for GPT, and applies it to open-source LLMs, without presenting any substantial differentiation. - Recent research indicates a considerable discrepancy between the auto-evaluat...
8
{ "annotators": [ "boda", "6686ebe474531e4a1975636f", "6740484e188a64793529ee77" ], "labels": [ "5", "5", "5" ] }
5
gold
1031
2) learning with MMD. Without an ablation study, it is hard to see the net effect of each component. For instance, we can try learning the proposed model with typical knowledge distillation loss, or try distilling a Hydra architecture with MMD loss.
NIPS_2022_1622
NIPS_2022
Only evaluated with accuracy and ECE, so the results do not fully reflect the uncertainty quantification aspect of the models. Somewhat unfair comparison environment for DM (see below). No experiments on larger scale datasets such as TinyImageNet or ImageNet. As far as I see from the paper, DM and Hydra use only 8 head...
8
{ "annotators": [ "boda", "6686ebe474531e4a1975636f", "6740484e188a64793529ee77" ], "labels": [ "5", "5", "5" ] }
5
gold
1034
* The proposed NC measure takes the whole training and test datasets as input. I can hardly imagine how this method can be learned and applied to large scale datasets (e.g. ImageNet). Is there any solution to address the scalability issue? Otherwise, the practical contribution of this paper will be significantly reduce...
NIPS_2020_813
NIPS_2020
* The proposed NC measure takes the whole training and test datasets as input. I can hardly imagine how this method can be learned and applied to large scale datasets (e.g. ImageNet). Is there any solution to address the scalability issue? Otherwise, the practical contribution of this paper will be significantly reduce...
8
{ "annotators": [ "boda", "6686ebe474531e4a1975636f", "6740484e188a64793529ee77" ], "labels": [ "5", "5", "5" ] }
5
gold
1037
- There is almost no discussion or analysis on the 'filter manifold network' (FMN) which forms the main part of the technique. Did authors experiment with any other architectures for FMN? How does the adaptive convolutions scale with the number of filter parameters? It seems that in all the experiments, the number of i...
NIPS_2017_370
NIPS_2017
- There is almost no discussion or analysis on the 'filter manifold network' (FMN) which forms the main part of the technique. Did authors experiment with any other architectures for FMN? How does the adaptive convolutions scale with the number of filter parameters? It seems that in all the experiments, the number of i...
8
{ "annotators": [ "boda", "6686ebe474531e4a1975636f", "6740484e188a64793529ee77" ], "labels": [ "5", "5", "5" ] }
5
gold
1040
4. The proposed PSA method requires more computation than baselines. In algorithm 1, when feeding forward, the PSA requires the calculation of all the flipped previous layer output into the current layer. The comparison of computation complexity is expected in the experiment part.
NIPS_2020_1080
NIPS_2020
1. The experiments setups are not persuasive. For the gradient estimation accuracy, the author conduct experiment only on 2 classes 2D simulation data. The author does not mention how the 100 training data generated, which is in quite a small amount even in the simulation study. The network is in special design as 5-3-...
8
{ "annotators": [ "boda", "6686ebe474531e4a1975636f", "6740484e188a64793529ee77" ], "labels": [ "5", "5", "5" ] }
5
gold
1042
- The proposed model benefits from two factors : noise and keeping an exponential moving average. It would be good to see how much each factor contributes on its own. The \Pi model captures just the noise part, so it would be useful to know how much gain can be obtained by just using a noise-free exponential moving ave...
NIPS_2017_114
NIPS_2017
Weakness- - Comparison to other semi-supervised approaches : Other approaches such as variants of Ladder networks would be relevant models to compare to. Questions/Comments- - In Table 3, what is the difference between \Pi and \Pi (ours) ? - In Table 3, is EMA-weighting used for other baseline models ("Supervised", \Pi...
8
{ "annotators": [ "boda", "6686ebe474531e4a1975636f", "6740484e188a64793529ee77" ], "labels": [ "5", "5", "5" ] }
5
gold
1043
3. Excellent drawing on figures. However, fonts could be larger fig 1. The words in grey box may be larger. V_mem, Th_i, U_i^t too small. “CTRL” long form explanation. Also, font in figure 2 is too small. (Conv5 +BN) 4. Lack of details comparison, such as epochs and number of params, with other state-of-the-art Transfo...
wPK65O4pqS
ICLR_2024
1. What is the baseline model on the ablation experiments? Is the baseline model for your own architecture or other study’s baseline? The study has shown that without the STCore and SGA, the trained model already has excellent performance (80.9% on DVS-CIFAR10) while general accuracy from other studies as shown in your...
8
{ "annotators": [ "boda", "6686ebe474531e4a1975636f", "6740484e188a64793529ee77" ], "labels": [ "5", "5", "5" ] }
5
gold
1044
5: I can't understand the meaning of the sentence "While a smaller j to simulate more accumulate errors along with the inference steps.", please rewrite it. P. 5, p. 3, l.
ICLR_2023_3811
ICLR_2023
Most of the paper is poorly written and difficult to understand. The idea of scheduled sampling is not new, so I would categorize this paper a purely empirical contribution. However the amount of inconsistencies, and overall lack of rigor in reporting and interpreting the results, paired with the lack of clarity in the...
8
{ "annotators": [ "boda", "6686ebe474531e4a1975636f", "6740484e188a64793529ee77" ], "labels": [ "5", "5", "5" ] }
5
gold
1046
- For DocRED, did you consider the documents as an entire sentence? How do you deal with concepts (multiple entity mentions referring to the same entity)? This information is currently missing from the manuscript.
ARR_2022_237_review
ARR_2022
of the paper include: - The introduction of relation embeddings for relation extraction is not new, for example look at all Knowledge graph completion approaches that explicitly model relation embeddings or works on distantly supervised relation extraction. However, an interesting experiment would be to show the impact...
8
{ "annotators": [ "boda", "6686ebe474531e4a1975636f", "6740484e188a64793529ee77" ], "labels": [ "5", "5", "5" ] }
5
gold
1047
- The authors claimed that they used active learning in step 2. Is the “active learning pipeline” method the same as traditional active learning that select informative samples to label? If not, the description can mislead the readers.
F0XXA9OG13
ICLR_2024
- The framework is quite straightforward, and there is not much technical contribution. It is mostly a combination of multiple existing models. And the idea of transferring tabular data into text is not novel at all. There are a bunch of existing works [1][2][3], including one of their baselines TabLLM[4]. The further ...
8
{ "annotators": [ "boda", "6686ebe474531e4a1975636f", "6740484e188a64793529ee77" ], "labels": [ "5", "5", "5" ] }
5
gold
1052
- in the abstract, I think it should say something like "...attain greater expressivity, as measured by the change in linear regions in output space after [citation]. " instead of just "attain greater expressivity" - it would be nice to see learning curves for all experiments, at least in an appendix.
NIPS_2018_743
NIPS_2018
- quality: It seems to me that the chosen "algorithm" for choosing dendrite synapses is very much like dropout with a fixed mask. Introducing this sparsity is a form of regularization, and a more fair comparison would be to do a similar regularization for the feed-forward nets (e.g. dropout, instead of bn/ln; for small...
8
{ "annotators": [ "boda", "6686ebe474531e4a1975636f", "6740484e188a64793529ee77" ], "labels": [ "5", "5", "5" ] }
5
gold
1056
- It's unclear what this paper's motivation is as I do not see a clear application from the proposed method. The paper showed results mapping one RGB image to another RGB image (with a different style). When do we need this domain adaptation, and how would this be useful? For example, it would have been better to demon...
ICLR_2021_2929
ICLR_2021
Weakness - It's unclear what this paper's motivation is as I do not see a clear application from the proposed method. The paper showed results mapping one RGB image to another RGB image (with a different style). When do we need this domain adaptation, and how would this be useful? For example, it would have been better...
8
{ "annotators": [ "boda", "6686ebe474531e4a1975636f", "6740484e188a64793529ee77" ], "labels": [ "5", "5", "5" ] }
5
gold
1057
4. In Table 2, under the leave one out setting the proposed method only be compared to “+LFP”. As ATA is a bit better than FP according to the results in Table 1, it would be more convincing to also including it in the comparison.
ICLR_2022_2725
ICLR_2022
1. The differences of the proposed instance normalization from other normalization methods, such as batch/group/layer normalization, should be explained in detail. What’s more, its advantages should be elaborated. 2. The memorized restitution sounds to be the most important contribution of the proposed method. For memo...
8
{ "annotators": [ "boda", "6686ebe474531e4a1975636f", "6740484e188a64793529ee77" ], "labels": [ "5", "5", "5" ] }
5
gold
1060
- The comparison with some baselines is somehow unfair since they lack the prior knowledge of users or any language embedding computation. A better comparison should be considered.
i3e92uSZCp
ICLR_2025
- The experimental scenarios are simple, in which the exampled prompts and semantically controlled spaces are easy to follow yet fail to demonstrate the generalizablity and scalability --- after all, the method relies much on the description of states. LGSD’s dependence on LLMs for real-time distance evaluation might l...
8
{ "annotators": [ "boda", "6686ebe474531e4a1975636f", "6740484e188a64793529ee77" ], "labels": [ "3", "3", "3" ] }
3
gold
1064
3) why the output-side layers do not benefit from it? Furthermore, Figure 4 is not clearly illustrated. The details of Pixel-shuffle are not clearly presented. Is it the pixel-shuffle operation used in the super-resolution field? Then, why the dimensionality remains the same after upsampling in Figure 2. (b)? The autho...
NIPS_2022_2523
NIPS_2022
Novelty is incremental. The major change over the baseline ResTv1 is the pixel-shuffle only, and the rest of the modifications are not new and cannot be one of the contributions. Any intuitions or insights of why the architecture should be designed like this are missing: why the upsampling module should be involved? Wh...
8
{ "annotators": [ "boda", "6686ebe474531e4a1975636f", "6740484e188a64793529ee77" ], "labels": [ "5", "5", "5" ] }
5
gold
1065
3. Are the negative chips fixed after being generated from the lightweight RPN? Or they will be updated while the RPN is trained in the later stage? Would this (alternating between generating negative chips and train the network) help the performance?
NIPS_2018_857
NIPS_2018
Weakness: - Long range contexts may be helpful for object detection as shown in [a, b]. For example, the sofa in Figure 1 may help detect the monitor. But in the SNIPER, images are cropped into chips, which makes the detector cannot benefit from long range contexts. Is there any idea to address this? - The writing shou...
8
{ "annotators": [ "boda", "6686ebe474531e4a1975636f", "6740484e188a64793529ee77" ], "labels": [ "5", "5", "5" ] }
5
gold
1066
2. What if the patients are the first time visitors without historical reports. The authors need to evaluate the proposed approach on new patients and old patients respectively.
tSfZo6nSN1
EMNLP_2023
1. The proposed approach fails to outperform existing works. For example, in Table 1, the B-4 of proposed approach is lower than the basic baseline ViT-transformer on MIMIC-ABN. Why the ViT-transformer is not evaluated on MIMIC-CXR data set. 2. What if the patients are the first time visitors without historical reports...
8
{ "annotators": [ "boda", "6686ebe474531e4a1975636f", "6740484e188a64793529ee77" ], "labels": [ "5", "5", "5" ] }
5
gold
1067
- While the general architecture of the model is described well and is illustrated by figures, architectural details lack mathematical definition, for example multi-head attention. Why is there a split arrow in Figure 2 right, bottom right? I assume these are the inputs for the attention layer, namely query, keys, and ...
NIPS_2017_575
NIPS_2017
- While the general architecture of the model is described well and is illustrated by figures, architectural details lack mathematical definition, for example multi-head attention. Why is there a split arrow in Figure 2 right, bottom right? I assume these are the inputs for the attention layer, namely query, keys, and ...
8
{ "annotators": [ "boda", "6686ebe474531e4a1975636f", "6740484e188a64793529ee77" ], "labels": [ "5", "5", "5" ] }
5
gold
1074
2. The presentation of this paper is hard to follow for the reviewer.
NIPS_2022_670
NIPS_2022
1. Lack of numerical results. The reviewer is curious about how to apply it to some popular algorithms and their performance compared with existing DP algorithms. 2. The presentation of this paper is hard to follow for the reviewer.
8
{ "annotators": [ "boda", "6686ebe474531e4a1975636f", "6740484e188a64793529ee77" ], "labels": [ "1", "1", "1" ] }
1
gold
1080
- While not familiar with the compared models DMM and DVBF in details, the reviewer understood from the paper their differences with KVAE. However, the reviewer would appreciate a little bit more detailed presentation of the compared models. Specifically, the KVAE is simpler as the state space transition are linear, bu...
NIPS_2017_345
NIPS_2017
of the paper are mainly on the experiments: - While not familiar with the compared models DMM and DVBF in details, the reviewer understood from the paper their differences with KVAE. However, the reviewer would appreciate a little bit more detailed presentation of the compared models. Specifically, the KVAE is simpler ...
8
{ "annotators": [ "boda", "6686ebe474531e4a1975636f", "6740484e188a64793529ee77" ], "labels": [ "5", "5", "5" ] }
5
gold
1082
1. The authors conduct experiments on T5, PaLM and GPT series LLMs and show the influence of parameter size on benchmark score. However, I think more experiments on different famous LLMs like LLaMA, Falcon, etc are needed as benchmark baselines.
q38SZkUmUh
ICLR_2024
1. The authors conduct experiments on T5, PaLM and GPT series LLMs and show the influence of parameter size on benchmark score. However, I think more experiments on different famous LLMs like LLaMA, Falcon, etc are needed as benchmark baselines. 2. For better visualization, the best results in Table 1 need to be displa...
8
{ "annotators": [ "boda", "6686ebe474531e4a1975636f", "6740484e188a64793529ee77" ], "labels": [ "5", "5", "5" ] }
5
gold
1083
2. In Table 3, besides the number of queries, it would be better to compare the real search cost (e.g. in terms of GPU days).
ICLR_2022_2660
ICLR_2022
1. There is an assumption “graphs are topological close should have also comparable performance”. Nevertheless, it may not hold for architectures. For example, by only modifying one node/edge (add or remove skip connection), the architecture may incur significant performance drop. Thus, it is questionable to use spectr...
8
{ "annotators": [ "boda", "6686ebe474531e4a1975636f", "6740484e188a64793529ee77" ], "labels": [ "5", "5", "5" ] }
5
gold
1088
- As this work has the perspective of task-oriented recommendation, it seems that works such as [] Li, Xiujun, et al. "End-to-end task-completion neural dialogue systems." arXiv preprint arXiv:1703.01008 (2017). are important to include, and compare to, at least conceptually. Also, discussion in general on how their wo...
NIPS_2018_894
NIPS_2018
- As this work has the perspective of task-oriented recommendation, it seems that works such as [] Li, Xiujun, et al. "End-to-end task-completion neural dialogue systems." arXiv preprint arXiv:1703.01008 (2017). are important to include, and compare to, at least conceptually. Also, discussion in general on how their wo...
8
{ "annotators": [ "boda", "6686ebe474531e4a1975636f", "6740484e188a64793529ee77" ], "labels": [ "5", "5", "5" ] }
5
gold
1090
* The proposed methods (DualIS and DualDIS) are not generic on some cross-model retrieval tasks, i.e., the performance in MSVD (Table 3) shows minor improvements.
Md1YdfqAed
EMNLP_2023
* The proposed methods (DualIS and DualDIS) are not generic on some cross-model retrieval tasks, i.e., the performance in MSVD (Table 3) shows minor improvements. * I think the proposed gallery bank is supplementary and less effective compared to the query bank to address hubness issues in cross-model retrieval tasks. ...
8
{ "annotators": [ "boda", "6686ebe474531e4a1975636f", "6740484e188a64793529ee77" ], "labels": [ "1", "1", "1" ] }
1
gold
1091
6: In the phrase "for 'in-between' uncertainty", the first quotation mark on 'in-between' needs to be the forward mark rather than the backward mark (i.e., ‘in-between′). p.
ICLR_2021_872
ICLR_2021
The authors push on the idea of scalable approximate inference, yet the largest experiment shown is on CIFAR-10. Given this focus on scalability, and the experiments in recent literature in this space, I think experiments on ImageNet would greatly strengthen the paper (though I sympathize with the idea that this can a ...
8
{ "annotators": [ "boda", "6686ebe474531e4a1975636f", "6740484e188a64793529ee77" ], "labels": [ "5", "5", "5" ] }
5
gold
1094
2. If the theme is mainly about FedSP, the performance of FedSP is not the best in Table 1 and Table 2 on some datasets.
57yfvVESPE
EMNLP_2023
1. The writing is hard to follow. There is no contribution list at the end of Introduction part. I read it several times but I am sorry that I cannot catch your theme. Is this paper mainly about model privacy in FL (FedSP) or soft prompt usage? 2. If the theme is mainly about FedSP, the performance of FedSP is not the ...
8
{ "annotators": [ "boda", "6686ebe474531e4a1975636f", "6740484e188a64793529ee77" ], "labels": [ "1", "1", "1" ] }
1
gold
1095
1) the models are learned directly from pixels without a Markovian state
ICLR_2023_226
ICLR_2023
The world modelling task is definitely interesting but it is hard to see how it is directly relevant outside of this environment. We would likely never have access to a Markovian state in such a controlled setting. The section appears to be motivated by works such as World Models and Dreamer, but in those cases 1) the ...
8
{ "annotators": [ "boda", "6686ebe474531e4a1975636f", "6740484e188a64793529ee77" ], "labels": [ "1", "1", "1" ] }
1
gold
1098
- The use of the sequence example at different step of the paper is really useful, however I'm a bit surprised that you mention in Example 2 a 'common' practice in the context of CRF corresponding to using as a scoring loss the Hamming distance over entire parts of the sequence. I've never seen this type of approach an...
NIPS_2019_656
NIPS_2019
Despite the shown results and the details added in the appendix K, I think that the experimental part remains the weak part of this paper. The results displayed are convincing but I am disappointed that the authors did not tried their approach on more popular problems mentioned in the supplementary such as hierarchical...
8
{ "annotators": [ "boda", "6686ebe474531e4a1975636f", "6740484e188a64793529ee77" ], "labels": [ "5", "5", "5" ] }
5
gold
1099
1) The name of the "Evaluation" element can be changed to "Metrics" since 'evaluation' can have a more general meaning. Even better, the corresponding sections can be removed and the metrics can be briefly mentioned along with the datasets or in the captions of the tables since most, if not all, of the metrics are well...
ARR_2022_329_review
ARR_2022
Although the paper is mostly easy to follow due to its simple and clear organization, it is not very clearly written. Some sentences are not clear and the text contains many typos. Although the included tasks can definitely be helpful, the proposed benchmark does not include many important tasks that require higher-lev...
8
{ "annotators": [ "boda", "6686ebe474531e4a1975636f", "6740484e188a64793529ee77" ], "labels": [ "5", "5", "5" ] }
5
gold
1100
- The new proposed dataset, DRRI, could have been explored more in the paper.
ARR_2022_295_review
ARR_2022
- The paper would be easy to follow with an English-proofreading even though the overall idea is still understandable. - The new proposed dataset, DRRI, could have been explored more in the paper. - It is not clear how named entities were extracted from the datasets. An English-proofreading would significantly improve ...
8
{ "annotators": [ "boda", "6686ebe474531e4a1975636f", "6740484e188a64793529ee77" ], "labels": [ "3", "3", "3" ] }
3
gold
1101
- [218] Please use more objective terms than remarkable: "and remarkable accuracy improvement with same size of networks". Looking at the axes, which are rather squished, the improvement is definitely there but it would be difficult to characterize it as remarkable.
NIPS_2018_83
NIPS_2018
- An argument against DEN, a competitor, is hyper-parameter sensitivity. First, this isn't really shown, but second (and more importantly) reinforcement learning is well-known to be extremely unstable and require a great deal of tuning. For example, even random seed changes are known to change the behavior of the same ...
8
{ "annotators": [ "boda", "6686ebe474531e4a1975636f", "6740484e188a64793529ee77" ], "labels": [ "5", "5", "5" ] }
5
gold
1102
- As with most work on pruning, it is not yet possible to realize efficiency gains on GPU.
NIPS_2020_1710
NIPS_2020
- While the baselines are strong, the way they are reported may be a bit misleading. In particular, models are compared based on the sparsity percentage, which puts models with fewer parameters (e.g., MiniBERT) at a disadvantage. - As with most work on pruning, it is not yet possible to realize efficiency gains on GPU.
8
{ "annotators": [ "boda", "6686ebe474531e4a1975636f", "6740484e188a64793529ee77" ], "labels": [ "1", "1", "1" ] }
1
gold
1104
* How much do men and women pay for insurance after this method is applied?
Q7uE3M5aMD
ICLR_2025
The experiments and evaluation section in this paper claims to show that the method "achieves fair pricing effectively", but it answers none of the questions that would allow us to determine if such pricing is fair, effective, or desirable. * How much do men and women pay for insurance after this method is applied? * H...
8
{ "annotators": [ "boda", "6686ebe474531e4a1975636f", "6740484e188a64793529ee77" ], "labels": [ "5", "5", "5" ] }
5
gold
1107
- Section 3.2, lines 230-234 and 234-235: please provide references for the following two passages: "In the sequence-to-sequence machine translation (MT) model, the encoder and decoder are responsible for encoding input sentence information and decoding the sentence representation to generate an output sentence"; "Some...
ACL_2017_333_review
ACL_2017
There are some few details on the implementation and on the systems to which the authors compared their work that need to be better explained. - General Discussion: - Major review: - I wonder if the summaries obtained using the proposed methods are indeed abstractive. I understand that the target vocabulary is build ou...
8
{ "annotators": [ "boda", "6686ebe474531e4a1975636f", "6740484e188a64793529ee77" ], "labels": [ "5", "5", "5" ] }
5
gold
1113
- Discrepancy between eq. 9 and Figure 1. From eq. 9, it seems like the output patches are not cropped parts of the input image but just masked versions of the input image where most pixels are black. Is this correct? In this case, Figure 1 is misleading. And if so, wouldn't zooming on the region of interest using bili...
NIPS_2019_1338
NIPS_2019
, this paper is a solid submission. The idea is interesting and effective. It outperforms the state of the art. Strength: + The paper is well written and the explanations are clear. + The quantitative results (especially Table 2) clearly demonstrate the effectiveness of the proposed method. + Figure 1 is well designed ...
8
{ "annotators": [ "boda", "6686ebe474531e4a1975636f", "6740484e188a64793529ee77" ], "labels": [ "5", "5", "5" ] }
5
gold
1119
- Pg.5: “The training time reduction is less drastic than the parameter reduction because most gradients are still computed for early down-sampling layers (Discussion).” This seems not to have been revisited in the Discussion (which is fine, just delete “Discussion”).
ICLR_2021_973
ICLR_2021
. Clearly state your recommendation (accept or reject) with one or two key reasons for this choice. I recommend acceptance. The number of updates needed to learn realistic brain-like representations is a fair criticism of current models, and this paper demonstrates that this number can be greatly reduced, with moderate...
8
{ "annotators": [ "boda", "6686ebe474531e4a1975636f", "6740484e188a64793529ee77" ], "labels": [ "5", "5", "5" ] }
5
gold
1,122
1) that problem applies to other downstream tasks or is just specific to binding affinity prediction — and if so, why?
ICLR_2023_879
ICLR_2023
The ablations for the different pre-training tasks in section 4.5 / Figure 6 are a bit puzzling. It does seem that the CRD task has destructive value on that particular binding affinity prediction task since: a) CRD + MLM or CRD + PPI leads to lower performance vs. MLM or PPI alone, respectively b...
8
{ "annotators": [ "boda", "6686ebe474531e4a1975636f", "6740484e188a64793529ee77" ], "labels": [ "5", "5", "5" ] }
5
gold
1,123
2 The technical contribution is limited. For example, the contents of Section 4 are not about a formal and principled solution, but most about heuristics.
ICLR_2023_2640
ICLR_2023
Weakness: 1 For key issues in federated recommendation, the authors do not contribute/discuss much, e.g., communication cost, privacy protection, time complexity. 2 The technical contribution is limited. For example, the contents of Section 4 are not about a formal and principled solution, but most about heuristics. 3 ...
8
{ "annotators": [ "boda", "6686ebe474531e4a1975636f", "6740484e188a64793529ee77" ], "labels": [ "1", "1", "1" ] }
1
gold
1,125
- It's regrettable that the probability mass function is practically unexploited. In MixBoost it is set to a quasi-uniform distribution, which depends on only one single parameter. Intuitively, each learner class should be considered individually, even in the case of BDT of different depths. I think that considering va...
NIPS_2020_936
NIPS_2020
I have a few comments on this paper, even though it would be unfair to call them weaknesses. They are listed below in no particular order. - It's regrettable that the probability mass function is practically unexploited. In MixBoost it is set to a quasi-uniform distribution, which depends on only one single parameter. ...
8
{ "annotators": [ "boda", "6686ebe474531e4a1975636f", "6740484e188a64793529ee77" ], "labels": [ "5", "5", "5" ] }
5
gold
1,127
- Missing references: the references below are relevant to your topic, especially [a]. Please discuss connections with [a], which uses supervised learning in QBF solving, where QBF generalizes SMT, in my understanding. [a] Samulowitz, Horst, and Roland Memisevic. "Learning to solve QBF." AAAI. Vol.
NIPS_2018_947
NIPS_2018
weakness of the paper, in its current version, is the experimental results. This is not to say that the proposed method is not promising - it definitely is. However, I have some questions that I hope the authors can address. - Time limit of 10 seconds: I am quite intrigued as to the particular choice of time limit, whi...
8
{ "annotators": [ "boda", "6686ebe474531e4a1975636f", "6740484e188a64793529ee77" ], "labels": [ "5", "5", "5" ] }
5
gold
1,129
* Why is this approach more privacy-preserving than other federated learning approaches? Is privacy preservation an issue for traffic signal control, i.e., for one traffic signal not to know the color of the next one? One would think that this is a very bad example of an application of federated learning.
tUiYbVqcuQ
ICLR_2024
* The claims of the paper are unclear. What does it mean that the "optimal actions are personalized"? How do we measure personalization? * What kind of communication overhead are we talking about, during training or during inference? How big is the communication cost for a stoplight? Although this is one of the three ...
8
{ "annotators": [ "boda", "6686ebe474531e4a1975636f", "6740484e188a64793529ee77" ], "labels": [ "5", "5", "5" ] }
5
gold
1,133
3) Lack of fair comparison: In Figure 3, the authors compared CPEF with PMEF to demonstrate the advantages of the pre-trained question representation model under data scarcity conditions (lines 529-534). However, emphasizing the advantages of CPEF through this comparison is unjust since PMEF lacks a pre-training modu...
0DkaimvWs0
EMNLP_2023
1) Inadequate method details: The "expert ID embedding" mentioned in Section 4.3 is somewhat confusing as it lacks specific clarification. It remains unclear whether this ID refers to the registered name of the expert or some other form of identification. If it simply represents the expert's registered name, its abilit...
8
{ "annotators": [ "boda", "6686ebe474531e4a1975636f", "6740484e188a64793529ee77" ], "labels": [ "5", "5", "5" ] }
5
gold
1,134
- To me, the task looks closer to Argument Mining rather than Summarization. In any case, the paper should further clarify the differences against Argument Mining/Discussion Summarization.
pFTBsdZ1UM
EMNLP_2023
(W1) (Task definition and novelty are not clear.) The notion of indicative summarization is not clear. According to Footnote 2, it is not clear how it differs from extreme summarization (e.g., single-document such as XSum and multi-document such as TLDR). The TIFU Reddit dataset [Ref 1] is not cited or mentioned in ...
9
{ "annotators": [ "boda", "6740484e188a64793529ee77", "6686ebe474531e4a1975636f" ], "labels": [ "5", "5", "5" ] }
5
gold
1,146
4. The linear program in Theorem 3 needs to be explained intuitively. I understand that this is a main theorem, but it would help the reader a lot if the authors can explain what the objective and the constraints in (3) are.
NIPS_2020_25
NIPS_2020
1. The proposed method is inapplicable to data from an absolutely continuous probability distribution. The number of possible values of a data point in this case will be infinite. However, the paper relies on the vectorization of the probability distribution. For truly real-world continuous data, huge matrices will have t...
9
{ "annotators": [ "boda", "6740484e188a64793529ee77", "6686ebe474531e4a1975636f" ], "labels": [ "5", "5", "5" ] }
5
gold
1,149
3. Does the bound in Theorem 2, Eq. (30), converge to 0 when T goes to infinity? The bound in [Grunewalder et al., 2010], Eq. (27), does converge to 0. The first term in Eq. (30) does converge to 0, but it is not trivial to derive that the 2nd term in Eq. (30) also converges to 0. Can the authors prove this? Note: I'm ...
NIPS_2021_2307
NIPS_2021
1. It is unclear what exact setting the paper considers in the continuous domain and how prior work would fail in that setting (please see the Questions). 2. Even though the paper proposes new algorithms (EI2 and UCB2) for the theoretical analysis, it would still benefit if we could see some experimental results about h...
9
{ "annotators": [ "boda", "6740484e188a64793529ee77", "6686ebe474531e4a1975636f" ], "labels": [ "5", "5", "5" ] }
5
gold
1,151
1. Is it possible to update one node based on the results from multiple connected nodes (i.e., one node is activated)? Algorithm 2 is unclear. 'avg' is computed but not used. What are 'j' and 'i'? Update: The authors' response addresses some concerns, and I would like to keep the initial scores.
ICLR_2021_961
ICLR_2021
Weakness: The number of graphs satisfying the property is very limited. It requires an r-regular graph. That is, the number of edges connected to one node is the same for all nodes. This condition is very difficult to satisfy in applications. Therefore, its applicability would be limited too. The quantization part is lim...
9
{ "annotators": [ "boda", "6740484e188a64793529ee77", "6686ebe474531e4a1975636f" ], "labels": [ "5", "5", "5" ] }
5
gold
1,152
1. The specific definition of the sparsity of the residual term in this paper is unclear. Does it mean that the residual term includes many zero elements? Besides, could the authors provide some evidence to support the sparsity assumption across various noisy cases? I think it's necessary to show the advantages of the ...
lHtNW6xqCd
ICLR_2024
1. The specific definition of the sparsity of the residual term in this paper is unclear. Does it mean that the residual term includes many zero elements? Besides, could the authors provide some evidence to support the sparsity assumption across various noisy cases? I think it's necessary to show the advantages of the ...
9
{ "annotators": [ "boda", "6740484e188a64793529ee77", "6686ebe474531e4a1975636f" ], "labels": [ "5", "5", "5" ] }
5
gold
1,153