Dataset Viewer
Auto-converted to Parquet
Columns:
  paper_id        string, lengths 10–19
  venue           string, 15 classes
  focused_review  string, lengths 7–9.67k
  point           string, lengths 55–634
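Each row below pairs a source peer review (truncated in this preview) with one weakness point extracted from it; paper_id and venue identify the reviewed submission. The sketch below shows one way to load and inspect the data with the Hugging Face `datasets` library; the repository id and split name are placeholders, not taken from this page.

```python
from datasets import load_dataset

# Hypothetical repo id -- replace with the actual `user/dataset` path
# shown at the top of the dataset page.
ds = load_dataset("user/review-points", split="train")

row = ds[0]
print(row["paper_id"])              # e.g. "NIPS_2019_899"
print(row["venue"])                 # one of 15 venue classes, e.g. "NIPS_2019"
print(row["focused_review"][:200])  # full review text (7 to ~9.67k chars)
print(row["point"])                 # one extracted weakness point (55-634 chars)
```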
NIPS_2019_899
NIPS_2019
Weakness: - Latent language seems to add a more complex intermediate problem. You are now introducing text understanding which might be a harder problem. Do we really need text? why not use a program for guidance? Surely, a program is more expressive than macro-action and interpretable and you dont have language unders...
- Many missing citations (see Tellex et al., AAAI 2011, Tellex et al., RSS 2014, Chaplot et al., AAAI 2017, Bahdanau et al., ICLR 2017, Misra et al., EMNLP 2018, Mirowski et al., 2019, Chen et al., CVPR 2019) etc.
NIPS_2018_710
NIPS_2018
- My general reservation about this paper is that while it was helpful in clarifying my own understanding of BN, a lot of the conclusions are consistent with folk wisdom understanding of BN (e.g. well-conditioned optimization), and the experimental results were not particularly surprising. Questions: - Taking Sharp Min...
- My general reservation about this paper is that while it was helpful in clarifying my own understanding of BN, a lot of the conclusions are consistent with folk wisdom understanding of BN (e.g. well-conditioned optimization), and the experimental results were not particularly surprising. Questions:
dapU3n7yfp
ICLR_2024
- Lack of baselines: The attack algorithm is based on HotFlip (2018), which is a bit old and less effective than the recently proposed baselines. I am wondering if the authors have compared with adversarial attack baselines proposed more recently such as Seq2sick [1], which shows better optimization effectiveness for t...
- Lack of defense models (detoxified models): While I appreciate the authors’ efforts in comparing different pretrained models, it would also be interesting to evaluate against different defense approaches/detoxification approaches, such as [2,3,4], and confirm whether the attack is still effective.
ICLR_2023_3305
ICLR_2023
1.The review of related work on uncertainty in meta-learning is small and it is difficult to locate the main contribution of this paper. 2.The Ood detection only extends from classification to regression, which is not very innovative. 3.The experiments only focus on the comparison between various models proposed by aut...
2.The Ood detection only extends from classification to regression, which is not very innovative.
NIPS_2020_1507
NIPS_2020
1. The proposed method is based on a pre-defined causal graph, which has limitations if the causal graph is unavailable. In the experimental results sections, the authors only showed the results with the graph constructed by the PC algorithm. It is not clear how the way of graph construction affects the final results. ...
1. The proposed method is based on a pre-defined causal graph, which has limitations if the causal graph is unavailable. In the experimental results sections, the authors only showed the results with the graph constructed by the PC algorithm. It is not clear how the way of graph construction affects the final results.
oIwoBDsJJI
ICLR_2024
1. The graph Foster distance is a direct application of the optimal transport problem on the graph Foster distributions. 2. Compared with the Fused Gromov-Wasserstein Distance (FGW), the improvement in the computation time and the classification accuracy for the graph Foster distance in the experiments is very marginal...
2. Compared with the Fused Gromov-Wasserstein Distance (FGW), the improvement in the computation time and the classification accuracy for the graph Foster distance in the experiments is very marginal.
3LdaPmAnji
EMNLP_2023
1. Experimental results do not show the effectiveness of fine-grained classes. 2. Pherpas lacks Inter-annotator Agreement to better demonstrate the quality of EDeR.
1. Experimental results do not show the effectiveness of fine-grained classes.
NIPS_2017_349
NIPS_2017
- The paper is not self contained Understandable given the NIPS format, but the supplementary is necessary to understand large parts of the main paper and allow reproducibility. I also hereby request the authors to release the source code of their experiments to allow reproduction of their results. - Use of deep-reinfo...
- Consider increasing the axes margins? Markers at 0 and 12 are cut off.
ACL_2017_333_review
ACL_2017
The criticisms are very minor: - It would be best to report ROUGE F-Score for all three datasets. The reasons for reporting recall on one are understandable (the summaries are all the same length), but in that case you could simply report both recall and F-Score. - The Related Work should come earlier in the paper. - T...
- It would be best to report ROUGE F-Score for all three datasets. The reasons for reporting recall on one are understandable (the summaries are all the same length), but in that case you could simply report both recall and F-Score.
NIPS_2016_208
NIPS_2016
1. The novelty is a little weak. It is not clear what's the significant difference and advantage compared to NCA [6] and "Small codes and large image databases for recognition", A. Torralba et al., 2008, which used NCA in deep learning. 2. In the experiment of face recognition, some state-of-the art references are miss...
1. The novelty is a little weak. It is not clear what's the significant difference and advantage compared to NCA [6] and "Small codes and large image databases for recognition", A. Torralba et al., 2008, which used NCA in deep learning.
NIPS_2021_1872
NIPS_2021
Weakness: 1. The background on linear GCNs may not be clear. Why do we need to study linear GCN? Most of current models are non-linear. 2. The difference between with self loop and without self loop? 3. The authors used the step-size to be T/K. Why exactly T/K? can we use larger or smaller than T/K?
2. The difference between with self loop and without self loop?
NIPS_2021_1907
NIPS_2021
There is little improvement empirically. Furthermore, it is unclear if the gains in this paper are due solely to the confidence widths or if the design of the algorithm is important too. For the empirical study, it is unclear how the other experiments would perform if they had access to the same confidence widths prese...
- Does theorem 1 hold for an adaptive sequence of x_n’s or a fixed sequence? The theorem just seems to specify a set of (x,y)’s that have been collected. Ie, is this a truly anytime result or for a fixed sequence? In the case of a linear kernel, the gap in the confidence widths between an anytime and fixed confidence b...
NIPS_2020_773
NIPS_2020
* It's not clear how the Kalman Filtering perspective provides any new insight. Both the global query-specific prior and frequency capping are trivial to specify in the standard attention framework. The Kalman Filtering perspective seems like an unnecessary distraction from what is in reality two simple modifications t...
* There are only two benchmark results. While the improvements are statistically significant, it's unclear whether they are nontrivial improvements. The authors need to provide more context here, especially for the real-world system. For example, is a +4.4% CTRgain big or small for this system?
NIPS_2017_320
NIPS_2017
#ERROR!
- I believe in section 2 it could be made clearer when gradients are calculated by a solver and when not.
bxltAqTJe2
EMNLP_2023
1. I strongly recommend that the authors validate the quality of the GFC API output to ensure the accuracy of the ground truth. This step is crucial in ensuring the reliability of the findings. 2. Since the ground truth output can be assess on the Internet, which may lead to test data leakage problem. I encourage autho...
2. Since the ground truth output can be assess on the Internet, which may lead to test data leakage problem. I encourage authors to discuss the data leakage problem and its implications in this work.
ARR_2022_63_review
ARR_2022
1. There are some other contemporary state-of-the-art models, the authors can consider citing and including them for an extensive comparison. 2. It will be good to see some analysis and insights on different combinations of pre-training datasets introduced in Table 1. Here are some questions: 1. Since some of the sub-t...
1. Since some of the sub-tasks, like dialogue state tracking, require a fixed format of the output, if the model generation is incomplete or in an incorrect format, how can we tackle this issue?
NIPS_2020_1491
NIPS_2020
1. The algorithms require some prior knowledge of the problem such as the number of tasks and switches, time horizon, and the full-information feedback, which is due to the binary loss. 2. I think the authors need a further survey in the contextual bandit with switching regret. Here the task index resembles the context...
2. I think the authors need a further survey in the contextual bandit with switching regret. Here the task index resembles the context. [Luo et al, 2018] would be close to this paper. Moreover, the idea of "meta-experts" can also be seen in [Wu et al, 2019]. [Luo et al, 2018] Luo, Haipeng, et al. "Efficient contextual ...
ICLR_2022_3267
ICLR_2022
Weakness: Rigorousness: This paper is a purely theoretical work in my opinion (though it has numerical simulations). Unfortunately, I did not find the claims in the paper is rigorous enough for a theoretical work. Specifically, the paper made several strong claims without rigorous mathematical analysis. For example, in...
1) state clearly which parts are rigorous math and which are just intuitions/illustrations
NIPS_2017_53
NIPS_2017
Weakness 1. When discussing related work it is crucial to mention related work on modular networks for VQA such as [A], otherwise the introduction right now seems to paint a picture that no one does modular architectures for VQA. 2. Given that the paper uses a billinear layer to combine representations, it should menti...
2. Given that the paper uses a billinear layer to combine representations, it should mention in related work the rich line of work in VQA, starting with [B] which uses billinear pooling for learning joint question image representations. Right now the manner in which things are presented a novice reader might think this...
NIPS_2019_634
NIPS_2019
see section 5 ("improvements") below. Originality: while the methods are not particularly novel (autoregressive and masked language modelling pretraining have both been used before for ELMo and BERT; this work extends these objectives to the multi-lingual case), the performance gains on all four tasks are still very im...
- Quality: This paper's contributions are mostly empirical. The empirical results are strong, and the methodology is sound and explained in sufficient technical details.
NIPS_2022_768
NIPS_2022
. W1: The presentation can be improved. There is no overview of the approach to explain the components, and a few components and concepts appear without much prior context. For example, "encoder" appears without where it is exactly being used. Same for "topic aggregation". "count vector" was used only once without defi...
.W1: The presentation can be improved. There is no overview of the approach to explain the components, and a few components and concepts appear without much prior context. For example, "encoder" appears without where it is exactly being used. Same for "topic aggregation". "count vector" was used only once without defin...
ICLR_2022_1919
ICLR_2022
The proposed Plug-In inversion method is introduced very late in the paper (at the end of Section 3.4) even though it consists only of combining the augmentation and search space restriction techniques provided in the previous sections (3.1 - 3.3). It would have been much clearer for the reader if this fact was fully d...
- As a minor comment related to the above, I believe the authors should indicate in the captions of the figures which model the respective images come from.
NIPS_2018_840
NIPS_2018
1. It is confusing to me what the exact goal of this paper is. Are we claiming the multi-prototype model is superior to other binary classification models (such as linear SVM, kNN, etc.) in terms of interpretability? Why do we have two sets of baselines for higher-dimensional and lower-dimensional data? 2. In Figure 3,...
3. Since the parameter for sparsity constraint has to be manually picked, can the authors provide any experimental results on the sensitivity of this parameter? Similar issue arises when picking the number of prototypes. Update after Author's Feedback: All my concerns are addressed by the authors's additional results. ...
ZyAwBqJ9aP
ICLR_2025
1. The paper does not introduce a novel method, it simply applies a typical graph neural network (GAT) and protein language model (ESM) to solve a binary classification problem. If the paper is to be improved, it will need to introduce a novel approach that brings with it significantly improved performance on this task...
3. The authors compare the performance of their model to the performance of other models and do not achieve the best results across all of the Isoforms. They claim this is because the DeepP450 model is trained on a smaller dataset which might have an impact on generalizability but this claim is not substantiated with e...
NIPS_2020_1671
NIPS_2020
I have some major concerns with the evaluation part of the paper. 1. Paper compared their method with influence functions and representer selection. A simple baseline could be a loss based selection method. Simply select training points based on loss change. A recent paper [DataLens IJCNN 20] shows that a simple loss b...
1. Paper compared their method with influence functions and representer selection. A simple baseline could be a loss based selection method. Simply select training points based on loss change. A recent paper [DataLens IJCNN 20] shows that a simple loss based selection outperforms both influence functions and represente...
4wAKqlfV5t
EMNLP_2023
1. The authors say that ''Although previous methods have proposed multimodal representations and achieved promising results, most of them focus on forming positive and negative pairs, neglecting the variation in sentiment scores within the same class'', do you mean previous MSA research mainly use contrastive learning ...
4. The modality-losing problem has been solved by several previous works, but the authors don't discuss them. For example, Tag-assisted Multimodal Sentiment Analysis under Uncertain Missing Modalities and Missing Modality Imagination Network for Emotion Recognition with Uncertain Missing Modalities.
ACL_2017_395_review
ACL_2017
My main concern with the paper is the magnification of its central claims, beyond their actual worth. 1) The authors use the term "deep" in their title and then several times in the paper. But they use a skip-gram architecture (which is not deep). This is misrepresentation. 2) Also reinforcement learning is one of the ...
4) They claim linear-time sense selection in their model. Again, it is not clear to me how this is the case. A highlighting of this fact in the relevant part of the paper would be helpful.
ICLR_2022_1895
ICLR_2022
1.It is obvious that this paper applies CVAE to the OOD data detection. The question is why to select CVAE as the efficient model to generate the OOD data. What is the motivation? 2.This paper claims that we can already produce comparable results to existing SOTA contrastive learning models but much more efficient. Why...
1.It is obvious that this paper applies CVAE to the OOD data detection. The question is why to select CVAE as the efficient model to generate the OOD data. What is the motivation?
NIPS_2019_573
NIPS_2019
of the paper: - no theoretical guarantees for convergence/pruning - though experiments on the small networks (LeNet300 and LeNet5) are very promising: similar to DNS [16] on LeNet300, significantly better than DNS [16] on LeNet5, the ultimate goal of pruning is to reduce the compute needed for large networks. - on the ...
- Authors state that GSM can be used for automated pruning sensitivity estimation.
NkmJotfL42
ICLR_2024
I find the formal results stated in Sections 5 and 6 to be extremely difficult to follow. While the informal statements in Section 2 are understandably vague, Sections 5 and 6 failed to clarify my confusions from Section 2. I think this is due to two issues: 1. Section 4 did a poor job at explaining the formal notation...
2. Certain points in the intro were not explained properly in later sections. a) The term "vacuous" was never formally defined, b) the connection between tightness of generalization bound (eq
NIPS_2021_776
NIPS_2021
weakness: 1 The theoretical parts (Section 3.2 and 3.3) are a bit hard to follow. 1-1. too many symbols are used. It would be more clear if the table list of all symbols is provided. 1-2. I am not sure if the following assumption is really valid: 1-2-1. lines 143-144: "Besides, under mild assumptions, if (E) ! 0 144 th...
1 The theoretical parts (Section 3.2 and 3.3) are a bit hard to follow. 1-1. too many symbols are used. It would be more clear if the table list of all symbols is provided. 1-2. I am not sure if the following assumption is really valid: 1-2-1. lines 143-144: "Besides, under mild assumptions, if (E) !
ICLR_2023_1584
ICLR_2023
Weakness: 1. The proposed method relies on a pretrained object detection network that contains the sufficient semantic information for the in-distribution data. When the semantic of in-distribution data like medical images is not covered by the object detection network (pre-trained on natural image), the built semantic...
1. The proposed method relies on a pretrained object detection network that contains the sufficient semantic information for the in-distribution data. When the semantic of in-distribution data like medical images is not covered by the object detection network (pre-trained on natural image), the built semantic graph can...
hn0B3jTlwE
EMNLP_2023
- The paper lacks some results on the performance of the models on other tasks. While perplexity does not increase significantly, it would be interesting to know the impact of the method on other probing tasks. - A limitation of the method is its dependency on human-annotated datastores. The paper lacks some results on...
- A limitation of the method is its dependency on human-annotated datastores. The paper lacks some results on the quality of the datastores data on the final performance of the model.
ICLR_2022_1648
ICLR_2022
Weakness/concerns: 1. The theoretical analysis is limited to regular graph for influence score and non-attribute graph for expressiveness of link representation. Could them be generalized to more applicable graphs? 2. The authors concatenate node representation as link representation. In this way, the expressiveness of...
4. The proposed method seems to rely heavily on sophisticated hyperparameter searching. (as shown in Appendix D)
ICLR_2023_1400
ICLR_2023
- While the paper shows improvements on CIFAR derivatives, it lacks analysis or results on other datasets (e.g., ImageNet derivatives). Verifying the effectiveness of the framework on ImageNet-1k or even ImageNet-100 is important. These results ideally can be presented in the main paper. - The authors should add some d...
- Some baselines such as [1] are not considered and should be added. I feel that influence function can be replaced by other influence estimation methods such as datamodels[2] or tracin[3]. It will be beneficial to understand if the updated framework results in better pruning than the baselines. I am assuming it would ...
NIPS_2016_482
NIPS_2016
of the method (see above) would clearly help in making the case for its impact. Clarity: The paper is very clearly written and easy to follow. It would be interesting to see a version of Fig. 1 including error bars estimated from the method - it seems that currently only the estimated means are ever used. More emphasis...
2. It would be nice to know the source of the variance seen in Fig.
ICLR_2022_3056
ICLR_2022
. It is claimed that the generated OOD samples cover larger diversity ranges. If the generated OOD samples all belong to the same classes as those in the training set, how to quantify such diversity? . The experiments show that with more synthesized data the OOD generalization improves. Then the question is what if we ...
. The authors emphasized the superior performance of the proposed algorithm. However, with more synthetic data the training time would be significantly increased. It's better to add discussion on both the advantage and limitations of the proposed algorithm for a fair comparison with benchmarks.
HZtBP6DZah
ICLR_2024
One major weakness is in clarity and presentation. This is a complicated model with many components, and I found it difficult to follow. Specific suggestions: - The authors may consider rewriting or reorganizing the last two paragraphs in Introduction. - Please provide a table of notations and variables in Appendix. - ...
- (1) does not make sense mathematically. Is Group_j a set or a vector or a scalar? For others, please see the list of questions below.
l8zRnvD95l
ICLR_2025
1.The dataset was compiled from multiple sources with various modalities, which may introduce inconsistency or OOD samples when doing model training. Careful data analysis can be helpful 2. The experiment shows the proposed EcoPerceiver outperformed the current SOTA approach for most IGBP types especially WET, WAT, and...
3.There is only one baseline compared and there is no model with single modalities.
en3NwykrHW
ICLR_2025
1. There is no experiment. 2. In the upper bound of Theorem 7, the last three terms dominate. In contrast, the abstract and the introduction claim that the upper bound is determined by the first term. The authors should clarify under what conditions, if any, the first term dominates. If the first term is not asymptotic...
3. The introduction claims that the developed algorithm for RL with trajectory feedback achieves the same asymptotically optimal regret bound as the standard RL. The authors should explain why trajectory feedback does not lead to a worse regret bound and what properties of their algorithm allow them to overcome the inf...
XhdckVyXKg
ICLR_2025
* In general, I believe the quality of the writing, presentation and conclusions in the paper can improve significantly. There are several unbacked claims and missing details throughout the paper (see below), which make the paper very hard to follow. I highly suggest authors consider revising the manuscript write up to...
* Details of masking strategies in Table 8 are missing.
NIPS_2021_2024
NIPS_2021
below). Using the related literature on active interventions would require full identification of the underlying DAG. It is emphasized that matching only the means can be done with significantly smaller number of interventions, and this is the difference from previous works. - Identifiability in terms of Markov equival...
- The paper is organized clearly, and the theoretical claims are well supported. Weaknesses: I have several concerns on the importance of the proposed settings and usefulness of the results.
OhTzuWzO6Q
ICLR_2024
- The proposed method seems to heavily depend on how good AD is. Indeed, for common image and text tasks, it might be easy to find such a public dataset. But for more sensitive tasks on devices, such a public dataset might not exist. - Scale of experiments is small, where the tasks such as MNIST or CIFAR10 are relative...
- Also since the authors considered a public dataset is available, then the DP baseline should also be those with such assumptions, such as [1].
NIPS_2019_1348
NIPS_2019
0. My first concern is the assumption that a human risk measure is gold standard when it comes to fairness. There are many reasons to question this assumption. First, humans are the worst random number generators, e.g. the distribution over random integers from 1 to 10 is highly skewed in the center. Similarly, if huma...
2. In the Introduction, the authors choose to over-sell their work by presenting their work as a "very natural if simple solution to addressing these varied desiderata" where the desiderata include "fairness, safety, and robustness". This is a strong statement but incorrect at the same time. The paper lacks any connect...
ICLR_2023_1957
ICLR_2023
• The experimental datasets were very simple. I would like to see more complex datasets, such as ImageNet/Tiny Imagenet. • Please expand on the contribution of the compromised clients in the model update. It’s not clear whether the attack success rate is low because the compromised clients have a low genuine score or i...
• Please expand on the contribution of the compromised clients in the model update. It’s not clear whether the attack success rate is low because the compromised clients have a low genuine score or if their updates result in weak backdoor success.
ICLR_2023_2664
ICLR_2023
1.There is no evidence showing that the relationship between NC and transferability is robust. As the authors already mentioned, a large NC leads might not lead to good transfer performance as well, e.g. the model is randomly initialized and not trained. This is a simple sanity check that the correlation between NC and...
5.Lacking justifications on larger datasets. It’s better to provide the NC results on ImageNet apart from CIFAR100 and CIFAR-10. Evaluating the numbers based on ImageNet should not be difficult with the publicly available pre-trained models? This gives readers more confidence on this phenomenon.
ICLR_2023_4133
ICLR_2023
1.The structure of this paper is confused and difficult to understand. 2.The motivation of introducing graphic information into attention calculation is not clear, and the model is not novel enough. 3.More explanation is needed for the experiment to calculate the standard deviation of attention scores of 1,2,3 hop neig...
4.Some important methods [1,2,3] for dealing with heterophily graphs should be either discussed in related works or compared.
NIPS_2017_349
NIPS_2017
- The paper is not self contained Understandable given the NIPS format, but the supplementary is necessary to understand large parts of the main paper and allow reproducibility. I also hereby request the authors to release the source code of their experiments to allow reproduction of their results. - Use of deep-reinfo...
- Increase space between the main caption and sub-caption. ** Line 299: From Fig 5b, it's not clear that |R|=7 is the maximum. To my eyes, 6 seems higher.
ICLR_2023_2869
ICLR_2023
Weakness: 1.The technical quality of this paper is not enough, and it seems like a direct combination with Evidential Theory and Reinforcement Learning. 2.The paper is not sound as there are many exploration methods in RL literature, such as count-based methods and intrinsic motivations(RND,ICM). But the paper does not...
5.The paper does not provide a specification of the experimental setup. Did the authors bulid simulator? If not, how to evaluate the performance of each policy in the offline setting?
kN25ggeq1J
ICLR_2025
This paper is not suitable for a computer-science/ML conference like ICLR. It seems best suited to a cognitive psychology or philosophy conference. The paper starts off in line 42 with "From the perspective of human cognitive psychology, reasoning can be viewed as a process of memory retrieval," and this is the perspec...
- LLMs perform better on System 1 tasks than on System 2 tasks. ???? Must define System 1 tasks and System 2 tasks for this to make any sense.
NIPS_2017_40
NIPS_2017
. Are other methods such as Barak, Kelner, Steuer 2014 "Rounding sum-of-squares relaxations" relevant? 6. Sec 4 Experiments. When you run BP-SP, you obtain marginals. How do you then compute your approximate MAP solution? Do you use the same CLAP rounding approach or something else? This may be important since in your ...
. Are other methods such as Barak, Kelner, Steuer 2014 "Rounding sum-of-squares relaxations" relevant?
AQiuwWLvim
EMNLP_2023
* Empathy is very difficult to be captured by automatic metrics, hence human evaluation is a must to verify the improvements. However, the authors only report automatic metrics. Moreover, the difference of the automatic scores between approaches are small. Therefore it is hard to tell whether the approach in the paper ...
* Although the authors highlight dialogue act labels to be their main contribution, the results do not favor their claim, because the scores are better when the dialogue act label for the source target is not given (implicit vs. explicit). Moreover, the plain Target prompting, which do not include dialogue act labels, ...
ACL_2017_145_review
ACL_2017
The comparison against similar approaches could be extended. - General Discussion: The main focus of this paper is the introduction of a new model for learning multimodal word distributions formed from Gaussian mixtures for multiple word meanings. i. e. representing a word by a set of many Gaussian distributions. The a...
_ There are some missing citations that could me mentioned in related work as : Efficient Non-parametric Estimation of Multiple Embeddings per Word in Vector Space Neelakantan, A., Shankar. J. Passos, A., McCallum. EMNLP 2014 Do Multi-Sense Embeddings Improve Natural Language Understanding? Li and Jurafsky, EMNLP 2015 ...
NIPS_2021_2082
NIPS_2021
weakness of the existing works. 3. In Table 1 we can see that, besides the CAMELYON16 dataset, the baseline MIL-based methods showed much lower performances than max-pooling. Please give some discussions about the reason. 4. For ablation study, The Table 2 and Fig. 5 were not mentioned in the manuscript. What do the va...
5. For Fig.6, What is the purpose to show the zoom-in view of heatmap? I cannot see anything special in this area.
NIPS_2019_374
NIPS_2019
---------- 1. Except the new definition of the Generalized Gauss-Newton matrix (that is not pursued), no other proposition in the paper is original. 2. As the authors point themselves, analyzing the EF as a variance adaptation method would have explained its efficiency and strengthened the paper: "This perspective on t...
2. As the authors point themselves, analyzing the EF as a variance adaptation method would have explained its efficiency and strengthened the paper: "This perspective on the empirical Fisher is currently not well studied. Of course, there are obvious difficulties ahead:" Overcoming these difficulties is what a research...
NIPS_2017_217
NIPS_2017
- The model seems to really require the final refinement step to achieve state-of-the-art performance. - How does the size of the model (in terms of depth or number of parameters) compare to competing approaches? The authors mention that the model consists of 4 hourglass modules, but do not say how big each hourglass m...
- There are some implementation details that are curious and will benefit from some intuition: for example, lines 158-160: why not just impose a pairwise relationship across all pairs of keypoints? the concept of anchor joints seems needlessly complex.
ACL_2017_108_review
ACL_2017
The problem itself is not really well motivated. Why is it important to detect China as an entity within the entity Bank of China, to stay with the example in the introduction? I do see a point for crossing entities but what is the use case for nested entities? This could be much more motivated to make the reader inter...
- I would place commas after the enumeration items at the end of page 2 and a period after the last one - what are child nodes in a hypergraph?
ICLR_2022_200
ICLR_2022
(1) The progressive distillation process seems to need a much larger computational cost than many previous fast sampling methods, such as DDIM and DDPM respacing. As stated in the paper, its training budget is almost the same as training a diffusion model from scratch. I wonder how this concern can be addressed in prac...
3) Confusing sentences. For example, what do you mean by saying “... unlike the original data point x, since multiple different data points x could conceivably have led to observing noisy data z t ”? Also, when saying “we found this to work slightly better than starting from a non-zero signal-to-noise ratio as used by ...
TKzERU0kq1
EMNLP_2023
- There are many basic writing or grammatical errors. Some sentences are not fluent (for example L259-262, it makes me hard to understand the motivation of Sec5.2). - Current pipeline is a sequence of five types of editing, but it is not clear the contribution of each type of editing. - I’m skeptical about the quality ...
- There are many basic writing or grammatical errors. Some sentences are not fluent (for example L259-262, it makes me hard to understand the motivation of Sec5.2).
YvOq7jHT6R
ICLR_2025
1. The experiments are somewhat weak. - The main paper only presents ridge regression experiments, while important black-box adversarial experiments are deferred to the appendix. - I recommend moving key adversarial attack results to the main paper, particularly those demonstrating the practical benefits of bias cancel...
- The main paper only presents ridge regression experiments, while important black-box adversarial experiments are deferred to the appendix.
NIPS_2019_494
NIPS_2019
of the approach, it may be interesting to do that. Clarity: The paper is well written but clarity could be improved in several cases: - I found the notation / the explicit split between "static" and temporal features into two variables confusing, at least initially. In my view this requires more information than is pro...
- the paper is presented well, e.g., quality of graphs is good (though labels on the graphs in Fig 3 could be slightly bigger) Significance:
ACL_2017_676_review
ACL_2017
The most annoying point to me is that in the relatively large dataset (ASPEC), the best proposed model is still 1 BLEU point lower than the softmax model. What about some even larger dataset, like the French-English? There are at most 12 million sentences there. Will the gap be even larger? Similarly, what's the perfor...
-General Discussion: The paper describes a parameter reducing method for large vocabulary softmax. By applying the error-corrected code and hybrid with softmax, its BLEU approaches that of the orignal full vocab softmax model. One quick question: what is the hidden dimension size of the models? I couldn't find this in ...
NIPS_2020_530
NIPS_2020
There are multiple issues with the claims and evaluations presented in the paper. In particular, as a reader, I am not convinced that reported gains are due to exploiting gaze information. 1. An improvement over SOTA? : For paraphrasing task, the paper claims Patro et al. (2018) as SOTA which is an outdated baseline. [...
6. Missing important implementation details: For the seq2seq model, author mentioned that they used greedy search. Is there any reason for not using a standard beam-search?
NIPS_2022_1340
NIPS_2022
. The claim regarding the ability of the proposed method to alleviate the popularity bias is not well supported in the paper, nor by theoretical analyses, neither by convincing targeted experiments. For examples, I would recommend reporting statistics about the popularity distribution of the recommended items by the di...
13. This is due to using different notations for the mutual information terms in the proposition and in eq.
ACL_2017_524_review
ACL_2017
- The evaluation datasets used are small and hence results are not very convincing (particularly wrt to the alchemy45 dataset on which the best results have been obtained) - It is disappointing to see only F1 scores and coverage scores, but virtually no deeper analysis of the results. For instance, a breakdown by type ...
- it is still not clear to this reviewer what is the proportion of out of coverage items due to various factors (running out of resources, lack of coverage for "genuine" grammatical constructions in the long tail, lack of coverage due to extra-grammatical factors like interjections, disfluencies, lack of lexical covera...
NIPS_2021_1604
NIPS_2021
). Weaknesses - Some parts of the paper are difficult to follow, see also Typos etc below. - Ideally other baselines would also be included, such as the other works discussed in related work [29, 5, 6]. After the Authors' Response My weakness points after been addressed in the authors' response. Consequently I raised m...
- Section 2.3: Before line 116 mentioned the change when adding the counterfactual example, it would be helpful to first state what I(X_2, Y_1) and I(X_1, Y_1) are without it. Minor points - Line 29: How is desired relationship between input text and target labels defined?
ARR_2022_101_review
ARR_2022
1. The method in this paper is quite similar to BERTScore, but the authors have not cited that paper. 2. Figure 2 does not show the time complexity of SimCSE_{CLS} method. 3. I am confused about the definition of "\vec \mathbf{1}" in Equation(1). Missing citation: BERTScore: Evaluating Text Generation with BERT (Zhang ...
2. Figure 2 does not show the time complexity of SimCSE_{CLS} method.
ICLR_2023_1935
ICLR_2023
Missing literature and baselines: there are many learning-based approaches for heuristic search that are not based on L_2 and are not cited in the paper [e.g., 1-4]. [1-2] have specifically focused on Sokoban. [3][4] are older works that avoid problems with L_2 by focusing on learning to rank. Optimality: the paper see...
35. No.14. 2021. [2] Feng, Dieqiao, Carla P. Gomes, and Bart Selman. "The Remarkable Effectiveness of Combining Policy and Value Networks in A*-based Deep RL for AI Planning." (2021). [3] Garrett, Caelan Reed, Leslie Pack Kaelbling, and Tomás Lozano-Pérez. "Learning to rank for synthesizing planning heuristics." Procee...
NIPS_2018_243
NIPS_2018
I had trouble understanding this paper, but I'm not sure if that's due to the exposition or because it's a little ways outside my normal area of expertise. In particular, I don't immediately see the answer to: 1. When, if ever, is SITE a consistent estimator of the true average ITE? What assumptions are required for th...
2. Does the SGD actually converge? It seems possible that selecting triplets within the mini-batch may lead to issues.
NIPS_2021_2326
NIPS_2021
Weakness Overview of the main concerns, which are detailed in the paragraphs below: Improper evaluation of the universal controller. Missing insights into the data and the strong performance of the pose estimation, which perform better than state-of-the-art pose estimation from third-person view on established benchmar...
1) statistical data on their dataset to be able to better assess their quantitative results (e.g., data on per joint trajectories, pose diversity against available 3D datasets),
NIPS_2022_80
NIPS_2022
weakness: 1.There is not enough theory in this article to explain the effectiveness of Structural Knowledge Distillation. 2.In sec4.4, only two SOTA KD methods in detections used for comparing. The explanation and theory analysis of the proposed method is limited. The compared SOTA methods is not enough.
2.In sec4.4, only two SOTA KD methods in detections used for comparing. The explanation and theory analysis of the proposed method is limited. The compared SOTA methods is not enough.
2tIyA5cri8
ICLR_2025
Only minor weaknesses. 1. In the background section on RL, TD is presented for a fixed policy, and then the paper switches to Q-learning, assuming the policy chooses \argmax_a Q(s,a). But this will change the policy as the Q function is updated, so it's not technically the same setting. 2. It was a bit unclear what "co...
4. Line 458, mangled sentence "our study is, we have explored".
ICLR_2021_2568
ICLR_2021
** Unfortunately, the proposed approach is not described clearly enough for it to be widely useful. In general, I believe that when formal tools (like group theory) are applied to prove anything outside of their original domain (i.e. when we are using group theory to reason about compositional representations in machin...
1) clearly define all involved notions (not only mathematical, but also the ones to which mathematical tools are applied)
NIPS_2018_8
NIPS_2018
I personally think the paper does not do justice to weight adaptation. The proposed setup is only valid for classification using same kind of data (modality, appearance etc.) between training and adaptation; however, weight adaptation is a simple method which be used for any problem regardless of change of appearance s...
- Averaging the accuracies over different tasks in Figure 6 does not seem right to me since going from 90% to 95% accuracy and going from 10% to 15% should ideally be valued differently. Authors should try to give the same plot for each dataset separately in addition to combining datasets.
dVOXsyVcik
EMNLP_2023
* The equation of agreement@k, which is used to compute most of the results in this study, is not clear. Based on the response of the authors, I will need to revisit my assessment regarding this weakness. * The authors overlook existing work in event/anomaly detection (e.g.: https://arxiv.org/abs/2007.02500 or https://...
* The authors overlook existing work in event/anomaly detection (e.g.: https://arxiv.org/abs/2007.02500 or https://www.nature.com/articles/s41598-021-03526-y) and implement a new algorithm to define the optimum k dynamically.
NIPS_2021_34
NIPS_2021
of this paper are summarized as follows. Strengths: 1. The experiments are comprehensive. To show the effectiveness of the proposed method, the authors take SGR as an example and provide comparisons with other SOTA methods. The results with varying noise ratios and real-world noisy data demonstrated the effectiveness o...
1. It seems that the proposed method NCR is only applicable to the triplet loss in cross-modal matching as defined in Eq.5-6. How to achieve robustness to other loss formulations like the softmax loss in ALIGN (Scaling Up Visual and Vision-Language Representation Learning With Noisy Text Supervision)?
NIPS_2019_663
NIPS_2019
of their work?"] The submission is overall reasonably sound, although I have some comments and questions: * Regarding the model itself, I am confused by the GRU-Bayes component. I must be missing something, but why is it not possible to ingest observed data using the GRU itself, as in equation 2? This confusion would p...
* Section 4 is placed quite far away from the Figure it refers to (Figure 1). I realise this is because Figure 1 is mentioned in the introduction of the paper, but it makes section 4 somewhat hard to follow. A possible solution would be to place section 4 before the related research, since the only related work it draw...
Zes7Wyif8G
ICLR_2025
1) Although the paper indicated in 3rd paragraph that it focused on a "particular flavor of neurosymbolic AI", i.e., a neural network feeding into probabilistic inference based on arithmetic circuits, it would be beneficial to reflect it explicitly in other parts, e.g., in the abstract, or a more specific title (Accele...
2) The actual "interface" and details between the neural network and the used arithmetic circuits remain largely a secret for readers(of course there are pointers to prior arts). It would be beneficial to open up and explain how exactly a neural network is interfaced to the arithmetic circuits, what are the assumptions...
NIPS_2016_314
NIPS_2016
I found in the paper includes: 1. The paper mentions that their model can work well for a variety of image noise, but they show results only on images corrupted using Gaussian noise. Is there any particular reason for the same? 2. I can't find details on how they make the network fit the residual instead of directly le...
- Is it through the use of skip connections? If so, this argument would make more sense if the skip connections exist after every layer (not every 2 layers) 3. It would have been nice if there was an ablation study on what plays the most important factor on the improvement in performance. Whether it is the number of la...
NIPS_2022_2182
NIPS_2022
Weakness: 1. Contribution is not convincing. They argue that the traditional adaptive filterbank uses a scalar weight shared by all nodes, and their proposed method learns different weights for different nodes. However, in my opinion, FAGCN can do the same thing. 2. There is a gap between the proposed metric and method...
1. Contribution is not convincing. They argue that the traditional adaptive filterbank uses a scalar weight shared by all nodes, and their proposed method learns different weights for different nodes. However, in my opinion, FAGCN can do the same thing.
ICLR_2023_3879
ICLR_2023
- There is no theoretical result or analysis supporting the proposed method. - The method applies only when testing data is incomplete requiring complete training datasets with limits its application in many practical situations where training datasets are also incomplete. - The paper compares the proposed method only ...
- There is no explanation on why a maximum of P/2 features can be removed. Is there any theoretical explanation for such constraint? How the algorithms behaves when missing entries are more than P/2?
NIPS_2017_122
NIPS_2017
* It is not clear if the ability of the model to detect fall height is because of the absolute timing of the simulations. Falling from a greater height leads to a longer delay before the first impact. This is obvious to an algorithm analyzing fixed-sized wav files, but not to a human listening to sound files with somew...
* It is not clear if the ability of the model to detect fall height is because of the absolute timing of the simulations. Falling from a greater height leads to a longer delay before the first impact. This is obvious to an algorithm analyzing fixed-sized wav files, but not to a human listening to sound files with somew...
CP1PLnFzbr
EMNLP_2023
Two major reasons to reject this paper. - The proposed approaches are combinations of well-known algorithms and the metrics are already popular in the community. The Author's contributions are marginal at best. - No theory behind. Formal definition and theoretical background of the proposed method are missing in the pa...
- The proposed approaches are combinations of well-known algorithms and the metrics are already popular in the community. The Author's contributions are marginal at best.
yNJEyP4Jv2
ICLR_2024
### Correctness and clarity of the theoretical results The paper formulates an adversarial optimization problem particularly tailored for the latent diffusion models (LDM). The analysis guides the algorithm design to some degree (more on this later). However, due to the lack of clarity and various approximations being ...
7. I do not quite see the purpose of Proposition 1. It acts as either a definition or an assumption to me. The last sentence “one can sample $x \sim w(x)$ from $p_{\theta(x)}(x)$” is also very unclear. Is the assumption that the true distribution is exactly the same as the distribution of outputs of the fine-tuned LDM?
ICLR_2022_3218
ICLR_2022
Weakness: 1) Since this paper focuses on biometric verification learning, the comparison against the state-of-the-art loss functions widely used in face/iris verification should be added (e.g., Center-Loss, A-Softmax, AM-Softmax, ArcFace). 2) Cosine similarity score is more often used in biometric verification, so I wo...
4) Why triplet loss cannot convergent on CASIA-V4? I guess many previous iris verification works have employed such loss.
GVhfWu5L8D
ICLR_2025
## The motivation is good but some method details are quite strange 1. Eq. (7) tries to ensure $(1-\alpha) c + \alpha(c + V_c)=Q_c$. This is not a common Bellman equation for the cost value Q and V function. Instead, this equation is similar to the one for the feasible value function but is still different: $(1-\alpha)...
2. The definition of $A_r^\pi$ in Eq. (9) is somehow ad-hoc, solely providing some intuitive explanations.
NIPS_2017_110
NIPS_2017
weakness of this paper in my opinion (and one that does not seem to be resolved in Schiratti et al., 2015 either), is that it makes no attempt to answer this question, either theoretically, or by comparing the model with a classical longitudinal approach. If we take the advantage of the manifold approach on faith, then...
245 "the biggest the sample size", p.7l.257 "a Symetric Random Walk", p.8 l.269 "the escapement of a patient".
ZhZFUOV5hb
EMNLP_2023
- The major concern with this paper is the performance of the proposed model. Since one doc id could be matched to multiple documents, the recall/MRR metric computation is unfair for baselines. Though the authors compute the expectation of Recall/MRR, essentially the proposed model still look for documents at more posi...
- The generative retrieval baseline in ADS dataset is weak. From the experiment in MS MARCO dataset, SEAL is not the best performing generative retrieval baseline. The authors should report the results for stronger baseline like the Ultron-Atomic.
NIPS_2018_328
NIPS_2018
Weakness: - This paper's approach proposes multi-layer representation learning via gradient boosted trees, and on the top of that, linear regression/softmax regression are placed for supervised learning. But, we can do the opposite representation learning via neural nets, and on the top of that, decision trees can be u...
- Something is wrong in the descriptions of Algorithm 1. If we initialize G_{2:M}^0 <- null, then the first run of "G_j^t <- G_j^{t-1}" for the j=M to 2 loop would become G_M^1 <- G_M^0 = null, and thus the computation of L_j^inv (as well as the residuals r_k) can be problematic.
NIPS_2021_1743
NIPS_2021
1. While the paper claim the importance of language modeling capability of pre-trained models, the authors did not conduct experments on generation tasks that are more likely to require a well-performing language model. Experiments on word similarity and SquAD in section 5.3 cannot really reflect the capability of lang...
3. In section 5.1, the authors say that the benefits of the stop gradient operation are more on stability. What stability, the training process? If so, are there any learning curves of COCO-LM with and without stop gradient during pre-training to support this claim?