| paper_id | venue | focused_review | point |
|---|---|---|---|
ARR_2022_219_review | ARR_2022 | - The paper hypothesizes that SimCSE suffers from the cue of sentence length and syntax. However, the experiments only target sentence length but not syntax. - The writing of this paper could benefit from some work (see more below). Specifically, I find Section 3 difficult to understand as someone who does not directly wo... | - (136): abbreviations like "MoCo" should not appear in the section header, since a reader might not know what it means. |
QQvhOyIldg | ICLR_2025 | 1. This paper is poorly written and presented. A lot of the content can be found in undergraduate textbooks. A substantial part of the results are stated only informally, e.g., Lemmas 6.1-6.3. Also, there is hardly any interpretation of the main results. The presentation style does not seem serious.
2. The technical co... | 2. The technical contribution is unclear. Most of the analysis is quite standard. |
xtOydkE1Ku | ICLR_2024 | - The core innovation claimed by the paper is the reduction in computational complexity through a two-stage solution, first estimating marginals and then dependencies. However, this approach isn't novel, as seen in references [1,2]. The paper would benefit from a clearer distinction of how its methodology differs signi... | - The paper's primary contribution seems to be an incremental advancement in efficiency over the TACTiS approach. More substantial evidence or arguments are needed to establish this as a significant contribution to the field. |
aRlH9AkiEA | EMNLP_2023 | 1. It is still unclear how topic entities can improve the relationship representations. This claim is not intuitive.
2. The improvements on different datasets are trivial and the novelty of this paper is limited. Lots of previous works focus on this topic. Just adding topic entities seems incremental.
3. Missed related... | 2. The improvements on different datasets are trivial and the novelty of this paper is limited. Lots of previous works focus on this topic. Just adding topic entities seems incremental. |
yCAigmDGVy | ICLR_2025 | 1. As the paper primarily focuses on applying quantum computing to global Lipschitz constant estimation, it is uncertain whether the ICLR community will find this topic compelling.
2. The paper lacks discussion on the theoretical guarantee about the approximation ratio of the hierarchical strategy relative to the global optimum... | 2. The paper lacks discussion on the theoretical guarantee about the approximation ratio of the hierarchical strategy relative to the global optimum of the original QUBO. |
NIPS_2022_246 | NIPS_2022 | Weakness: 1) Generally lacking a quantitative measure to evaluate the generated VCEs. Evaluation is mainly performed with visual inspection. 2) While the integration of the cone projection is shown to be helpful, it is not clear why this particular projection is chosen. Are there other projections that are also helpful? ... | 1) Generally lacking a quantitative measure to evaluate the generated VCEs. Evaluation is mainly performed with visual inspection. |
NIPS_2021_275 | NIPS_2021 | Weaknesses: Originality
+ Novel setting. As far as I am aware, the paper proposes a novel setting - Few-shot Hypothesis Adaptation (FHA) - a combination of existing problems - Hypothesis Transfer Learning and the Few-Shot Domain Adaptation.
+/- Somewhat novel method. As far as I am aware, the paper also proposes a novel m... | - Only marginal improvements over baselines, mostly within the error bar range. Although the authors claim the method performs better than the baselines, the error range is rather high, suggesting that the performance differences between some methods are not very significant. |
NIPS_2016_69 | NIPS_2016 | - The paper is somewhat incremental. The developed model is a fairly straightforward extension of the GAN for static images. - The generated videos have significant artifacts. Only some of the beach videos are kind of convincing. The action recognition performance is much below the current state-of-the-art on the UCF da... | - The generated videos have significant artifacts. Only some of the beach videos are kind of convincing. The action recognition performance is much below the current state-of-the-art on the UCF dataset, which uses more complex (deeper, also processing optic flow) architectures. Questions: |
ICLR_2023_4659 | ICLR_2023 | Weakness: 1. It would make this paper stronger if the authors can show some adversarial robustness of some SOTA defended recognition models on the new set. 2. I would like to see clearer details of how to use DALL-E2 or stable diffusion models to generate hard examples, e.g., how to design prompts and how to filter ... | 4. It is still unclear how to make the new proposed evaluation set more diverse and representative than the previous method and how to select those representative images. |
Y4iaDU4yMi | ICLR_2025 | - The paper's presentation is difficult to follow, with numerous typos and inconsistencies in notation. For example:
- Line 84, "In summery" -> "In summary".
- In Figure 1, "LLaVA as dicision model" -> "LLaVA as decision model."
- Line 215, "donate" should be "denote"; additionally, $\pi_{ref}$ is duplicated.
- The def... | - The authors should include a background section to introduce the basic RL framework, including elements of the MDP, trajectories, and policy, to clarify the RL context being considered. Without this, it is difficult to follow the subsequent sections. Additionally, a brief overview of the original DPO algorithm should... |
Gzuzpl4Jje | EMNLP_2023 | 1. The original tasks’ performance degrades to some extent and underperforms the Adapter baseline, which indicates the negative influence of removing some parts of the original networks.
2. The proposed method may encounter a limitation if the users continuously add new languages because of the limited model ... | 2. The proposed method may encounter a limitation if the users continuously add new languages because of the limited model capacity. |
NIPS_2020_593 | NIPS_2020 | - Line 54: Is 'interpretable' program relevant to the notion described in the work of 'Doshi-Velez, F., & Kim, B. (2017). Towards a rigorous science of interpretable machine learning. arXiv preprint arXiv:1702.08608'? - Line 19, 37, 39: A reference for the 'Influence maximization' problem may be provided. The distribut... | - Line 54: Is 'interpretable' program relevant to the notion described in the work of 'Doshi-Velez, F., & Kim, B. (2017). Towards a rigorous science of interpretable machine learning. arXiv preprint arXiv:1702.08608'? |
ICLR_2021_2892 | ICLR_2021 | - Proposition 2 seems to lack an argument why Eq 16 forms a complete basis for all functions h. The function h appears to be defined as any family of spherical signals parameterized by a parameter in [-pi/2, pi/2]. If that’s the case, why eq 16? As a concrete example, let \hat{h}^\theta_lm = 1 if l=m=1 and 0 otherwise,... | - The experiments are limited to MNIST and a single real-world dataset. |
ICLR_2022_912 | ICLR_2022 | 1. The paper in general does not read well, and more careful proofreading is needed. 2. In S2D structure, it is not clear why the number of parameters does not change. If the kernel height/width stay the same, then its depth will increase, resulting in more parameters. I agree the efficiency could be improved since the... | 2. In S2D structure, it is not clear why the number of parameters does not change. If the kernel height/width stay the same, then its depth will increase, resulting in more parameters. I agree the efficiency could be improved since the FLOP is quadratic on activation side length. But in terms of parameters, more detail... |
NIPS_2016_499 | NIPS_2016 | - The proposed method is very similar in spirit to the approach in [10]. It seems that the method in [10] can also be equipped with scoring of causal predictions and with interventional data. If not, why can [10] not use this side information? - The proposed method reduces the computation time drastically compared to... | - The proposed method is very similar in spirit to the approach in [10]. It seems that the method in [10] can also be equipped with scoring of causal predictions and with interventional data. If not, why can [10] not use this side information? |
NIPS_2019_390 | NIPS_2019 | 1. The distinction between modeling uncertainty about the Q-values and modeling stochasticity of the reward (lines 119-121) makes some sense philosophically but the text should make clearer the practical distinction between this and distributional reinforcement learning. 2. It is not explained (Section 5) why the modif... | 3. The Atari game result (Section 7.2) is limited to a single game and a single baseline. It is very hard to interpret this. Less major weaknesses: |
ICLR_2022_3248 | ICLR_2022 | compared to [1], which are advantages of the IBP. 1) Unlike a factorized model with an IBP prior, the proposed method lacks a sparsity constraint on the number of factors used by subsequent tasks. As such, the model will not be incentivized to use fewer factors, leading to an increasing number of factors and increased comp... | 1) Unlike a factorized model with an IBP prior, the proposed method lacks a sparsity constraint on the number of factors used by subsequent tasks. As such, the model will not be incentivized to use fewer factors, leading to an increasing number of factors and increased computation with more tasks. |
NIPS_2020_902 | NIPS_2020 | - The paper could benefit from a better practical motivation; in its current form it will be quite hard for someone who is not at home in this field to understand why they should care about this work. What are specific practical examples in which the proposed algorithm would be beneficial? - The presentation of the sim... | - The presentation of the simulation study does not really do the authors any favors. Specifically, the authors do not really comment on why the GPC (benchmark) is performing better than BPC (their method). It would be worth re-iterating that this is because of the bandit feedback and not using information about the form ... |
NIPS_2016_117 | NIPS_2016 | weakness of this work is impact. The idea of "direct feedback alignment" follows fairly straightforwardly from the original FA alignment work. It's notable that it is useful in training very deep networks (e.g. 100 layers) but it's not clear that this results in an advantage for function approximation (the error rate is ... | - It might be helpful to quantify and clarify the claim "ReLU does not work very well in very deep or in convolutional networks." ReLUs were used in the AlexNet paper which, at the time, was considered deep and makes use of convolution (with pooling rather than ReLUs for the convolutional layers). |
NIPS_2020_839 | NIPS_2020 | - In Table 2, what about the performance of vanilla Transformer with the proposed approach? It's clearer to report the baseline + proposed approach, not only aiming at reporting state-of-the-art performance. - In Figure 1, the reported perplexities are over 30, which looks pretty high. This high perplexity contradicts ... | - In Figure 1, the reported perplexities are over 30, which looks pretty high. This high perplexity contradicts better BLEU scores in my experience. How did you calculate perplexity? |
fL8AKDvELp | EMNLP_2023 | 1. The paper needs a comprehensive analysis of sparse MoE, including the communication overhead (all-to-all). Currently, it is not clear where the performance gain comes from, since different numbers of experts incur different communication overheads.
2. The evaluation needs experiments on distributed deployment and... | 2. The evaluation needs experiments on distributed deployment and a larger model. |
NIPS_2021_815 | NIPS_2021 | - In my opinion, the paper is a bit hard to follow. Although this is expected when discussing more involved concepts, I think it would be beneficial for the exposition of the manuscript and in order to reach a larger audience, to try to make it more didactic. Some suggestions: - A visualization showing a counting of ho... | - The authors define rooted patterns (in a similar way to the orbit counting in GSN), but do not elaborate on why it is important for the patterns to be rooted, nor how they choose the roots. A brief discussion is expected, or if non-rooted patterns are sufficient, it might be better for the sake of exposition to d... |
ARR_2022_178_review | ARR_2022 | __1. The relation between instance difficulty and training-inference consistency remains vague:__ This paper seems to try to decouple the concepts of instance difficulty and training-inference consistency in current early-exiting works. However, I don't think these two things are orthogonal and can be directly decoupled... | 2. The paper states many times (Lines 95-97, Lines 308-310) that the consistency between training and inference can be easily satisfied due to the smoothness of neural models. I would suggest giving more explanation of this. |
NIPS_2017_356 | NIPS_2017 | ]
My major concern about this paper is the experiment on the visual dialog dataset. The authors only show the proposed model's performance in the discriminative setting without any ablation studies. There are not enough experimental results to show how the proposed model works on the real dataset. If possible, please answer my fo... | 1: The authors claim their model can achieve superior performance with significantly fewer parameters than the baseline [1]. This is mainly achieved by using a much smaller word embedding size and LSTM size. To me, it could be that the authors in [1] just tested the model with a standard parameter setting. To back up this claim, is there ... |
NIPS_2022_1598 | NIPS_2022 | Weakness:
It is unclear whether the gain of BooT comes from (1) extra data, (2) a different architecture (pretrained GPT-2 vs. not), or (3) some inherent property of the sequence model, as opposed to other world models that may only predict the observation and the reward.
It is unclear from the paper whether bootstrapping is novel ... | 1. The two extra hyperparameters introduced, k and η, require fine-tuning, which depends on access to the environment or a good OPE method. |
NIPS_2016_153 | NIPS_2016 | weakness of previous models. Thus I find these results novel and exciting. Modeling studies of neural responses are usually measured on two scales: a. Their contribution to our understanding of the neural physiology, architecture or any other biological aspect. b. Model accuracy, where the aim is to provide a model whic... | 2. The authors note that the LN model needed regularization, but then they apply regularization (in the form of a cropped stimulus) to both LN models and GLMs. To the best of my recollection, the GLM presented by Pillow et al. did not crop the image but used L1 regularization for the filters and a low-rank approximation... |
NIPS_2016_43 | NIPS_2016 | Weakness: 1. The organization of this paper could be further improved, such as giving more background knowledge of the proposed method and bringing the description of the related literature forward. 2. It would be good to see some failure cases and related discussion. | 2. It would be good to see some failure cases and related discussion. |
Fg04yPK0BH | ICLR_2025 | 1. There is some disconnection between Proposition 2.2, which states that the adjacency matrix of the line graph has the same support as some unitary matrix, and the proposed method, which finds the projection of a weighted adjacency matrix onto the set of unitary matrices. It is unclear to me if the result in Proposition 2.3 has the sa... | 5. It's unclear why there is a base layer GNN encoding in the proposed method. An ablation study on the necessity of the base layer GNN encoding would be helpful. |
ICLR_2021_1189 | ICLR_2021 | weakness of the paper is its experiments section. 1. Lack of large-scale experiments: The models trained in the experiments section are quite small (80 hidden neurons for the MNIST experiments and a single convolutional layer with 40 channels for the SVHN experiments). It would be nice if there were at least some exper... | 4. In Section 4.1, \epsilon is not used in equation (10), but in equation (11). It might be clearer to introduce \epsilon when (11) is discussed. |
NIPS_2021_815 | NIPS_2021 | - In my opinion, the paper is a bit hard to follow. Although this is expected when discussing more involved concepts, I think it would be beneficial for the exposition of the manuscript and in order to reach a larger audience, to try to make it more didactic. Some suggestions: - A visualization showing a counting of ho... | - The authors do not adequately discuss the computational complexity of counting homomorphisms. They make brief statements (e.g., L 145 “Better still, homomorphism counts of small graph patterns can be efficiently computed even on large datasets”), but I think it will be beneficial for the paper to explicitly add the u... |
NIPS_2016_238 | NIPS_2016 | - My biggest concern with this paper is the fact that it motivates "diversity" extensively (even the word diversity is in the title) but the model does not enforce diversity explicitly. I was all excited to see how the authors managed to get the diversity term into their model and got disappointed when I learned th... | - line 108, the first "f" should be "g" in "we fixed the form of .." - extra "." in the middle of a sentence in line 115. One question: For the baseline MCL with deep learning, how did the authors ensure that each of the networks has converged to a reasonable result? Cutting the learners early on might... |
NIPS_2018_122 | NIPS_2018 | - Figures 1 and 2 motivate this work well, but in the main body of this paper I cannot see what happens to these figures after applying the proposed adversarial training. It would be better to put the images before and after applying your method together in the same place. Figure 2 does not say anything about details (we can un... | * Additional comments after reading the author response Thank you for your kind reply to my comments and questions. I believe that the draft will be further improved in the camera-ready version. One additional suggestion is that the title seems to be too general. The term "adversarial training" has a wide range of mean... |
NIPS_2021_2191 | NIPS_2021 | of the paper: [Strengths]
The problem is relevant.
Good ablation study.
[Weaknesses] - The statement in the intro about bottom-up methods is not necessarily true (Line 28). Bottom-up methods do have receptive fields that can infer from all the information in the scene and can still predict invisible keypoints. - Seve... | - No study of inference time. Since this is a pose estimation method that is direct and does not require detection or keypoint grouping, it is worth comparing its inference speed to previous top-down and bottom-up pose estimation methods. |
ICLR_2023_642 | ICLR_2023 | Unclear notations. The authors used the same notation for vectors and scalars. These notations would be challenging for many readers to follow. Please consider updating your notation and refer to the notation section in the Formatting Instructions template for ICLR 23.
The framework impact is unclear. Th... | 27. In the proof of Theorem A.3, why does the input x have two indices? The input is a vector, not a matrix. Moreover, shouldn't $\sum_k (W_k^{(2)})^2 = 1/d$, not $d$? |
tqhAA26vXE | ICLR_2024 | - In sections 4.3 and 4.4, words such as “somewhat” and “good generative ability” appear in the description, yet I am concerned that even with beam search, only 77% of the result lists contain the ground-truth logical forms. If the relationships and entities were replaced, how do we ensure that the plugged-in entities/r... | - In sections 4.3 and 4.4, words such as “somewhat” and “good generative ability” appear in the description, yet I am concerned that even with beam search, only 77% of the result lists contain the ground-truth logical forms. If the relationships and entities were replaced, how do we ensure that the plugged-in entities/r... |
NIPS_2016_395 | NIPS_2016 | - I found the application to differential privacy unconvincing (see comments below) - Experimental validation was a bit light and felt preliminary RECOMMENDATION: I think this paper should be accepted into the NIPS program on the basis of the online algorithm and analysis. However, I think the application to differenti... | - I found the application to differential privacy unconvincing (see comments below) - Experimental validation was a bit light and felt preliminary RECOMMENDATION: I think this paper should be accepted into the NIPS program on the basis of the online algorithm and analysis. However, I think the application to differenti... |
NIPS_2016_283 | NIPS_2016 | weakness of the paper are the empirical evaluation, which lacks some rigor, and the presentation thereof: - First off: The plots are terrible. They are too small, the colors are hard to distinguish (e.g. pink vs red), the axes are poorly labeled (what "error"?), and the labels are visually too similar (s-dropout(tr) vs ... | - In my opinion the claim that evolutional dropout addresses internal covariate shift is very limited: it can only increase the variance of some low-variance units. Batch Normalization on the other hand standardizes the variance and centers the activation. These limitations should be discussed explicitly. Minor: |
NIPS_2022_947 | NIPS_2022 | 1. Apart from the multiple pre-trained models, FedPCL is built on the ideas of prototypical learning and contrastive learning, which are not new in federated learning. 2. The performance of FedPCL heavily relies on the selection of different pre-trained models, limiting its applications to wider areas. As shown in T... | 2. The performance of FedPCL heavily relies on the selection of different pre-trained models, limiting its applications to wider areas. As shown in Table 4, the model accuracy is quite sensitive to the pre-trained models. This work adequately addressed the limitations. The authors developed a lightweight federated ... |
NIPS_2017_356 | NIPS_2017 | - I would have liked to see some analysis about the distribution of the addressing coefficients (Betas) with and without the bias towards sequential addressing. This difference seems to be very important for the synthetic task (likely because each question is based on the answer set of the previous one). Also I don't t... | - It would have been interesting to see not only the retrieved and final attentions but also the tentative attention maps in the qualitative figures. |
ICLR_2023_3208 | ICLR_2023 | 1. The typesetting in some places is out of order, such as equations (23), (25), and (31). 2. On Page 4, "A4 bounds the degree of non-stationarity between consecutive iterations"; why does this assumption hold? 3. The authors should add more description of the contribution of this paper. | 3. The authors should add more description of the contribution of this paper. |
ICLR_2022_1012 | ICLR_2022 | a. The paper lacks structure and clarity
b. The paper lacks a more qualitative study of the model:
it would be interesting to see what layers the layer-wise attention mechanism attends to.
it would be great to understand how this model uses the latent variables, for instance by measuring the KL divergence at each layer... | 4. Minor comments / suggestions a. The main contributions are introducing two types of attention for deep VAEs, it might help to describe them in a separate section, and only then describe the generative and inference models. Right now the description of the layer-wise attention mechanism is scattered across sections 2... |
ICLR_2022_1393 | ICLR_2022 | I think that:
The comparison to baselines could be improved.
Some of the claims are not carefully backed up.
The explanation of the relationship to the existing literature could be improved.
More details on the above weaknesses:
Comparison to baselines:
"We did not find good benchmarks to compare our unsupervised, iter... | - Either I don't understand Figure 5 or the labels are wrong. |
NIPS_2021_2257 | NIPS_2021 | - Missing supervised baselines. Since most experiments are done on datasets of scale ~100k images, it is reasonable to assume that full annotation is available for a dataset at this scale in practice. Even if it isn’t, it’s an informative baseline to show where these self-supervised methods stand compared to a fully ... | - Missing supervised baselines. Since most experiments are done on datasets of scale ~100k images, it is reasonable to assume that full annotation is available for a dataset at this scale in practice. Even if it isn’t, it’s an informative baseline to show where these self-supervised methods stand compared to a fully ... |
WzUPae4WnA | ICLR_2025 | 1. **The motivation of this paper appears to be questionable.** The authors claim that DoRA increases the risk of overfitting, basing this on two pieces of evidence:
- DoRA introduces additional parameters compared to LoRA.
- The gap between training and test accuracy curves for DoRA is larger than that of BiDoRA.
Howe... | 3. **Performance differences between methods are minimal across evaluations**. In nearly all results, the performance differences between the methods are less than 1 percentage point, which may be attributable to random variation. Furthermore, the benchmarks selected are outdated and likely saturated. [1] [LoRA Learns ... |
NIPS_2020_106 | NIPS_2020 | I also feel that the paper could have benefited from a discussion of these, as compared to just outright saying that existing methods do not give us good results. In particular, the conditions under which existing methods work vs. do not work should have been discussed more explicitly than they are right now in the p... | - Why does the method help on Hopper, which has deterministic dynamics, so given (s, a), there is a unique s', and in this case, it simply reduces to action-conditional masking? Can it be evaluated on some other domains with non-deterministic dynamics to evaluate its empirical efficacy? Otherwise empirically it seems l... |
NIPS_2020_1706 | NIPS_2020 | 1. The memorization effect is not new to the community. Therefore, the novelty of this paper is not sufficiently demonstrated. The authors need to be clearer about what extra insights this paper gives. 2. It would be better if the authors could provide some theoretical justification for why co-training and weight ave... | 2. It would be better if the authors could provide some theoretical justification for why co-training and weight averaging can improve results, since they are important for the performance. |
NIPS_2016_182 | NIPS_2016 | weakness of the technique in my view is that the kernel values will be dependent on the dataset that is being used. Thus, the effectiveness of the kernel will require a rich enough dataset to work well. In this respect, the method should be compared to the basic trick that is used to allow non-PSD similarity metrics to ... | - it would seem to me that in section 4, "X" should be a multiset (and [\cal X]**n the set of multisets of size n) instead of a set, since in order for the histogram to honestly represent a graph that has repeated vertex or edge labels, you need to include the multiplicities of the labels in the graph as well. |
ICLR_2022_1998 | ICLR_2022 | In Section 3, the paper uses a measure ρ that is essentially the fraction of examples at which local monotonicity (in any of the prescribed directions in M) is violated, and then shows that this measure decreases when using the paper's method over the baselines. However, I'm not certain that this measure corresponds to ... | 1) Only the sum of the gradient is taken into account (so it could be that a component a_w_{i,j} has a very negative gradient, but still the sum will be positive), and |
lesQevLmgD | ICLR_2024 | I believe the authors' results merit publication in a specialized journal rather than in ICLR. The main reasons are the following:
1. The authors do not give any compelling numerical evidence that their bound is tight or even "log-tight".
2. The authors' derivation falls into classical learning theory-based bounds, whic... | 2. The authors' derivation falls into classical learning theory-based bounds, which, to the best of my knowledge, does not yield realistic bounds, unless Bayesian considerations are taken into account (e.g. Bayesian-PAC based bounds). |
ICLR_2023_3693 | ICLR_2023 | Weakness: 1. Some details of the proposed method are missing, such as the definition of \mathcal{L}_{kl} and the representation of the augmentation samples in the function. 2. In the proposed method, a novel augmentation strategy with masking is proposed. In the ablation study, the effectiveness of the proposed strategy h... | 3. Meanwhile, more details about the proposed method should be presented, such as how the implicit distribution characterizes the uncertainty of each label value and how the model mitigates the uncertainty of the label distribution. |
RwzFNbJ3Ez | EMNLP_2023 | 1. The method presented relies on extracting multiple responses from the LLM. For the variant with optimal performance, LLM prompting, 20 samples are needed to achieve the best reported results. Assuming a response contains 5 sentences, this requires 100 API calls to obtain a passage-level score (if I understand correc... | 3. The proposed method might struggle to detect hallucinations in open-ended responses, for example, the prompt "introduce a sports celebrity to me". In this case, the sampled responses could pertain to different individuals, making it challenging to identify shared information for consistency checking. |
VoI4d6uhdr | ICLR_2025 | 1. Although the authors present the exact formulation of the risk in the main text, it is difficult to understand the implications of those formulas. It would be helpful to include more discussion explaining each term to better understand the results.
2. The paper's main contribution is to examine the bias amplificat... | 3. It is unclear how these theoretical findings relate to real-world deep learning models, I would suggest the authors verify the conclusion about the label noise and model size on MNIST and CNN as well. |
ICLR_2023_903 | ICLR_2023 | of different chain-of-thought prompting methods: including zero-shot-CoT, few-shot-CoT, manual-CoT, and Auto-CoT. The paper conducts case study experiments to look into the limitations of existing methods and proposes improvement directions. Finally, the paper proposes an improved method for Auto-CoT that could achieve... | 3. The paper is well-organized. The writing is good and most of the content is very clear to me. Weaknesses/Feedback 1. The writing could be improved. It would be helpful to draw a table to compare different CoT prompting methods across different dimensions. How and why shall we make an assumption that “questions of al... |
NIPS_2021_1822 | NIPS_2021 | of the paper. Organization could definitely be improved and I oftentimes had a bit of a hard time following the discussed steps. But in general, I think the included background is informative and well selected. Though, I could see people having trouble understanding the state-space GP-regression when coming from the mo... | • It should be mentioned that $p(y \mid H\bar{f}(t_n))$ has to be chosen Gaussian, as otherwise Kalman filtering and smoothing and CVI are not possible. Later on in the ELBOs this is assumed anyway. |
JWwvC7As4S | ICLR_2024 | ### Theory
The main theoretical results are Theorem 2.1 and 2.2. They state that if the "average last-layer feature norm and the last-layer weight matrix norm are both bounded, then achieving near-optimal loss implies that most classes have intra-class cosine similarity near one and most pairs of classes have inter-cla... | 3. As (suboptimally) weight decay is applied to all layers, we would expect a large training loss and thus suboptimal cosine similarities for large weight decay parameters. Conveniently, cosine similarities for such large weight decay strengths are not reported and the plots end at a weight decay strength where cosine ... |
NIPS_2018_276 | NIPS_2018 | . Strengths: * This is the first inconsistency analysis for random forests. (Verified by quick Google scholar search.) * Clearly written to make results (mostly) approachable. This is a major accomplishment for such a technical topic. * The analysis is relevant to published random forest variations; these include paper... | * The title, abstract, introduction, and discussion do not explain that the results are for unsupervised random forests. This is a fairly serious omission, and casual readers would remember the wrong conclusions. This must be fixed for publication, but I think it would be straightforward to fix. Officially, NIPS review... |
NIPS_2022_1813 | NIPS_2022 | 1. The innovation of the article seems limited to me, mainly since the work shares the same perspective as [2]. Both models build upon the probabilistic formulation and apply the Hilbert-Schmidt Independence Criterion (HSIC). It may be good to clarify a bit more how novel the paper is compared with [2].
2. There ... | 2. There is a lack of qualitative experiments to demonstrate the validity of the conditional independence model. a) It is better to provide some illustrative experimental results to demonstrate that minimising HSICcond-i could indeed perform better than minimising HSIC_HOOD. Possibly, one toy dataset can be used to demon...
NIPS_2018_172 | NIPS_2018 | 1. The writing is not clear. The descriptions of the technical part cannot be easily followed by the reviewer, which makes it very hard to reimplement the techniques. 2. An incomplete sequence is represented by a finite state automaton. In this paper, only a two-out-of-three finite state automaton is used. Is it poss... | 3. The authors describe an online version of the algorithm because it is impractical to train multiple iterations/epochs with large models and datasets. Is it true that the proposed method requires much more computation than other methods? Please compare the computational complexity with other methods.
NIPS_2020_1274 | NIPS_2020 | - It would be helpful if the paper’s definition of “decentralized” were stated more explicitly in the paper, instead of in a footnote. Another way of defining “decentralized” is one where agents do not have access to the global state and actions of other agents during both training and execution, which LIO seems to do. - Syste... | - Systematically studying the impact of the cost of incentivization on performance would have been a helpful analysis (e.g., for various values of \alpha, what are the reward incentives each agent receives, and what is the collective return?). It seems like roles between “winners” and “cooperators” emerge because the c...
sXErPfdA7Q | EMNLP_2023 | UPDATE: The authors addressed most of my concerns however, I believe that the first and second points are still valid and should be discussed as potential limitations (i.e., there are too many confounding variables to claim that one is investigating an impact of different training methods; and the datasets might have b... | - The authors discuss how certain methods are significantly different from others, yet no significance testing is done to support these claims. For example, in line 486 the authors write "The conversational ability of ChatGPT and GPT-4 significantly boosts translation quality and discourse awareness" -- the difference ... |
ARR_2022_141_review | ARR_2022 | - The approach description (§ 3) is partially difficult to follow and should be revised. The additional page of the camera-ready version should be used to extend the approach description (rather than adding more experiments).
- CSFCube results are not reported with the same metrics as in the original publication making... | - The approach description (§ 3) is partially difficult to follow and should be revised. The additional page of the camera-ready version should be used to extend the approach description (rather than adding more experiments). |
ZPwX1FL4yp | ICLR_2025 | 1. The application of gyro-structures on SPD manifolds and correlation matrices is indeed novel, but the paper does not clearly articulate the theoretical significance or unique advantages of using Power-Euclidean (PE) geometry over existing approaches like Affine-Invariant (AI) or Log-Euclidean (LE) methods. The work s... | 3. On the experimental side, the related discussion lacks interpretive insights that would elucidate why the proposed gyro-structures outperform existing methods. In addition, while the paper compares its methods against SPD-based models and a few gyro-structure-based approaches, it lacks comparison with other state-of-t...
4WrqZlEK3K | EMNLP_2023 | 1. It is desired to have more evidence or analysis supporting the training effectiveness property of the dataset or other key properties
that will explain the importance and possible use-cases of _LMGQS_ over other QFS datasets.
2. Several unclear methods affecting readability and reproducibility:
* "To use LMGQS in th... | 1. It is desired to have more evidence or analysis supporting the training effectiveness property of the dataset or other key properties that will explain the importance and possible use-cases of _LMGQS_ over other QFS datasets. |
ICLR_2021_2527 | ICLR_2021 | Duplicate task settings. The proposed new task, cross-supervised object detection, is almost the same as the task defined in (Hoffman et al. 2014, Tang et al. 2016, Uijlings et al. 2018). These previous works study the task of training object detectors on the combination of base class images with instance-level... | 2) giving higher weights to intuitions of why previous works fail on challenging datasets like COCO and motivations of the proposed method. [a] YOLO9000: Better, Faster, Stronger, In CVPR, 2017 [b] Detecting 11K Classes: Large Scale Object Detection without Fine-Grained Bounding Boxes, In ICCV, 2019
NIPS_2022_2373 | NIPS_2022 | weakness in He et al., and proposes a more invisible watermarking algorithm, making their method more appealing to the community. 2. Instead of using a heuristic search, the authors elegantly cast the watermark search issue into an optimization problem and provide rigorous proof. 3. The authors conduct comprehensive ex... | 2. Figure 5 is hard to comprehend. I would like to see more details about the two baselines presented in Figure 5. The authors only study CATER for the English-centric datasets. However, as we know, the widespread text generation APIs are for translation, which supports multiple languages. Probably, the authors could e... |
viNQSOadLg | ICLR_2024 | * Lack of Training Details: The paper lacks sufficient information regarding the training process of the policy. It should provide more details on the training data used, the methodology for updating parameters, and the specific hyperparameters employed in the process.
* Unclear Literature Review: The literature review... | * Unclear Literature Review: The literature review in the paper needs improvement. It is not adequately clear what the main contribution of the proposed method is, and how it distinguishes itself from existing work, particularly in relation to the utilization of GFlowNet for sequence generation. The paper should provid... |
NIPS_2018_66 | NIPS_2018 | of their proposed method for disentangling discrete features in different datasets. I think that the main strength of the paper lies in the relatively thorough experimentation. I thought the results in Figure 6 were particularly interesting in that they suggest that there is an ordering in features in terms of mutual informatio... | - I would say that section 3.2 can be eliminated - I think that at this point readers can be presumed to know about the Gumbel-Softmax/Concrete distribution.
XX73vFMemG | EMNLP_2023 | 1. The paper studies the case where the student distills knowledge to the teacher, which improves the teacher's performance. However, the improvements could potentially be due to regularization effects rather than distillation as claimed, since all fine-tuning is performed for 10 epochs and without early-stopping. The ... | 1. The paper studies the case where the student distills knowledge to the teacher, which improves the teacher's performance. However, the improvements could potentially be due to regularization effects rather than distillation as claimed, since all fine-tuning is performed for 10 epochs and without early-stopping. The ... |
NIPS_2018_688 | NIPS_2018 | weakness of the paper is that experiments are limited to a single task. That said, they compare against two reasonable baselines (CPO, and including the constraint as a negative reward). While the formal definition of the constrained objective in L149 - L155 is appreciated, it might be made a bit more clear by avoiding... | * Label the X and Y axes of plots. This paper makes an important step towards safe RL. While this paper builds upon much previous work, it clearly documents and discusses comparisons to previous work. While the results are principally theoretical, I believe it will inspire both more theoretical work and practical appli... |
Kjs0mpGJwb | EMNLP_2023 | 1. Although the structural information has not been explicitly used in the current problem statement, it has been implicitly used in few previous works on bilingual mapping induction. Please see:
"Multi-Stage Framework with Refinement based Point Set Registration for Unsupervised Bi-Lingual Word Alignment". Oprea et al... | 3. For experiments, I have 2 comments - (i) addition of performance on word similarity and sentence translation tasks as in the MUSE paper (and others) would lend more credibility to the robustness and effectiveness of the framework. (ii) addition of morphologically rich languages like Finnish, Hebrew, etc and low-reso... |
ICLR_2023_2658 | ICLR_2023 | Weakness:
1. I think the work lacks novelty, as GPN [1] has already proposed adding a node importance score in the calculation of the class prototype, and this paper only gives a theoretical analysis of it.
2. The experimental part is not sufficient. (1) For the few-shot graph node classification problem to predict n... | 1. The paper considers the node importance among nodes with the same label in the support set. In the 1-shot scenario, how can node importance be used? I also find that the experimental part of the paper does not include the 1-shot setting, but related works such as RALE have a 1-shot setting; why?
Q2IInBu2kz | EMNLP_2023 | 1. You should compare your model with more recent models [1-5].
2. Contrastive learning has been widely used in Intent Detection [6-9], although the tasks are not identical. I think the novelty of this simple modification is not suitable for EMNLP.
3. You should provide more details about the formula in the text, e.g. ... | 3. You should provide more details about the formula in the text, e.g. $\ell_{BCE}$, even if it is simple; give specific details.
nuPp6jdCgg | EMNLP_2023 | 1. While this paper shows many findings, few of them are new to the community.
2. There should be more discussion about why LLMs struggle with fine-grained hard constraints and how to address these problems.
3. It would be better to include vicuna and falcon in Table-2, Table-3, and Table-5. | 2. There should be more discussion about why LLMs struggle with fine-grained hard constraints and how to address these problems.
NIPS_2020_396 | NIPS_2020 | 1- While the experimental results suggest that the proposed approach is valuable for self-supervised learning on 360 video data that have spatial audio, little insight is given about why we need to do self-supervised learning on this kind of data. In particular, 1) There are currently several large audio-video da... | 1- While the experimental results suggest that the proposed approach is valuable for self-supervised learning on 360 video data that have spatial audio, little insight is given about why we need to do self-supervised learning on this kind of data. In particular,
NIPS_2016_478 | NIPS_2016 | weakness is in the evaluation. The datasets used are very simple (whether artificial or real). Furthermore, there is no particularly convincing direct demonstration on real data (e.g. MNIST digits) that the network is actually robust to gain variation. Figure 3 shows that performance is worse without IP, but this is no... | - The link between IP and the terms/equations could be explained more explicitly and prominently - Pls include labels for subfigures in Figs 3 and 4, and not just state in the captions. |
NIPS_2018_543 | NIPS_2018 | Weakness: The main idea of the paper is not original. The entire Section 2.1 is classical results in Gaussian process modeling. There are many papers and books describing it. I only point out one such source, Chapters 3 and 4 of Santner, Thomas J., Brian J. Williams, and William I. Notz. The design and analysis of comput... | 17. Cambridge University Press, 2004. Or Theorem 14.5 of Fasshauer, Gregory E. Meshfree approximation methods with MATLAB. Vol.
NIPS_2019_1089 | NIPS_2019 | - The paper can be seen as incremental improvements on previous work that has used simple tensor products to represent multimodal data. This paper largely follows previous setups but instead proposes to use higher-order tensor products. ****************************Quality**************************** Strengths: - T... | - Results should be averaged over multiple runs to determine statistical significance.
NIPS_2021_2235 | NIPS_2021 | (and questions):
A) The biggest weakness I think is that the analysis happens in a very restricted scenario, with no transfer: the authors study only the case where we have a single dataset and learn the encoder without using the labels that we know exist and use to learn the classifiers - this is suboptimal and would n... | 1) only the SimCLR case is covered, and yet there is no analysis of a seemingly important part of that approach (see SimCLR-v2 and other recent papers that show that), i.e. the projection head.
NIPS_2018_87 | NIPS_2018 | weakness/questions: 1. Description of the framework: It's not very clear what Bs is in the formulation. It's not introduced in the formulation, but later on the paper talks about how to form Bs along with Os and Zs for different supervision signals. And it's very confusing what Bs's role is in the formulation. 2. compu... | 4. The observation and conclusions are hidden in the experimental section. It would be great if the paper could highlight those observations and conclusions, which are very useful for understanding the trade-offs of annotation effort and corresponding training performance.
ICLR_2021_243 | ICLR_2021 | Weakness: 1. As several modifications mentioned in Section 3.4 were used, it would be better to provide some ablation experiments of these tricks to validate the model performance further. 2. The model involves many hyperparameters. Thus, the selection of the hyperparameters in the paper needs further explanation. 3. A... | 1. As several modifications mentioned in Section 3.4 were used, it would be better to provide some ablation experiments of these tricks to validate the model performance further. |
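All rows above share the four-column schema described in the header (paper_id, venue, focused_review, point). As a minimal, hedged sketch of how rows in this schema might be queried once loaded, the snippet below filters review points by venue and reports the length range of the point column; the sample rows are hypothetical stand-ins, not actual dataset entries.

```python
# Minimal sketch for querying rows in this preview's schema.
# The sample rows below are hypothetical stand-ins, not actual dataset entries.
rows = [
    {"paper_id": "NIPS_2018_276", "venue": "NIPS_2018",
     "focused_review": "Strengths: first inconsistency analysis for random forests.",
     "point": "The title and abstract omit that results are for unsupervised forests."},
    {"paper_id": "ICLR_2023_903", "venue": "ICLR_2023",
     "focused_review": "Case study of chain-of-thought prompting methods.",
     "point": "The writing could be improved with a comparison table."},
]

def points_for_venue(rows, venue):
    """Return the 'point' strings of all rows from the given venue."""
    return [r["point"] for r in rows if r["venue"] == venue]

def point_length_range(rows):
    """Min and max character length of the 'point' column."""
    lengths = [len(r["point"]) for r in rows]
    return min(lengths), max(lengths)

print(points_for_venue(rows, "NIPS_2018"))
print(point_length_range(rows))
```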