| review_point | paper_id | venue | focused_review | batch | actionability | actionability_label | actionability_label_type | id |
|---|---|---|---|---|---|---|---|---|
- It is required to analyze the time complexity of the proposed policies mentioned in Section 4. | NIPS_2020_1296 | NIPS_2020 | ===After rebuttal=== I read the rebuttal and I think the proposed method has computational complexity issues and it should be compared with the naive solution of intervening on the target variable (estimating MEC from finite sample size). Thus, I decided to keep my score unchanged. ================ - The main assumpti... | 10 | {
"annotators": [
"boda",
"6740484e188a64793529ee77",
"6686ebe474531e4a1975636f"
],
"labels": [
"5",
"5",
"5"
]
} | 5 | gold | 1,423 |
3. The experiments succinctly prove the point that the authors try to make. That said, it would strengthen the paper to include experiments across more diverse domains (those in TD-MPC 2). | i7jAYFYDcM | ICLR_2025 | 1. The expert imitation procedure introduces overhead into the training pipeline, as each training step requires replanning. Although this is sidestepped by Lazy Reanalyze, it remains a fundamental limitation of the method.
2. The experiments are run with a small number of seeds (3 seeds).
3. The experiments succinctly... | 10 | {
"annotators": [
"boda",
"6740484e188a64793529ee77",
"6686ebe474531e4a1975636f"
],
"labels": [
"5",
"5",
"5"
]
} | 5 | gold | 1,426 |
- The paper does not evaluate the magnitude of interpretability tax associated with the method. | 7FXgefa9lU | EMNLP_2023 | I like the paper overall, and I think the contribution is probably sufficient for a short paper. The concerns below could be addressed in follow-up work.
- The method is only evaluated on a single encoder model, and on two classification datasets. Experiments on a larger range of models/datasets would be necessary for ... | 10 | {
"annotators": [
"boda",
"6740484e188a64793529ee77",
"6686ebe474531e4a1975636f"
],
"labels": [
"4",
"4",
"4"
]
} | 4 | gold | 1,428 |
4. But the LUQ itself is rather straightforward to design, once the goal of designing logarithmic and unbiased quantizer is clear. The approaches in Sec. 5 are also rather standard and to some extent explored in previous literature. I'd say the main contribution of this paper is showing that such a simple combination o... | ICLR_2023_1088 | ICLR_2023 | The novelty is somewhat thin: Until the second half of page 5, the paper is mostly presenting existing backgrounds. The novelty mainly falls in Sec. 4. But the LUQ itself is rather straightforward to design, once the goal of designing logarithmic and unbiased quantizer is clear. The approaches in Sec. 5 are also rather... | 10 | {
"annotators": [
"boda",
"6740484e188a64793529ee77",
"6686ebe474531e4a1975636f"
],
"labels": [
"1",
"1",
"1"
]
} | 1 | gold | 1,429 |
- l81-82: Do you mean to write t_R^m or t_R^{m-1} in this unnumbered equation? If it is correct, please define t_R^m. It is used subsequently and its meaning is unclear. | NIPS_2017_110 | NIPS_2017 | of this work include that it is a not-too-distant variation of prior work (see Schiratti et al, NIPS 2015), the search for hyperparameters for the prior distributions and sampling method do not seem to be performed on a separate test set, the simulation demonstrated that the parameters that are perhaps most critical to ... | 10 | {
"annotators": [
"boda",
"6740484e188a64793529ee77",
"6686ebe474531e4a1975636f"
],
"labels": [
"5",
"5",
"5"
]
} | 5 | gold | 1,434 |
2. The improvements of the proposed model over the RL without feedback model are not so high (row3 vs. row4 in table 6), in fact a bit worse for BLEU-1. So, I would like the authors to verify if the improvements are statistically significant. | NIPS_2017_486 | NIPS_2017 | 1. The paper is motivated with using natural language feedback just as humans would provide while teaching a child. However, in addition to natural language feedback, the proposed feedback network also uses three additional pieces of information: which phrase is incorrect, what is the correct phrase, and what is the... | 10 | {
"annotators": [
"boda",
"6740484e188a64793529ee77",
"6686ebe474531e4a1975636f"
],
"labels": [
"5",
"5",
"5"
]
} | 5 | gold | 1,437 |
- In continuation to the above remark, what do you think can be done (i.e. what minimal assumptions are needed) to relax the need of visiting all ball-action pairs with each iteration? Alternatively, what would happen if you partially cover them? | NIPS_2018_288 | NIPS_2018 | . Given bellow is a list of remarks regarding these weaknesses and requests for clarifications and updates to the manuscript. - The algorithmâs O(1/(\esiplon^3 (1-\gamma)^7)) complexity is extremely high. Of course, this is not practical. Notice that as opposed to the nice recovery time O(\epsilon^{-(d+3)}) result, w... | 10 | {
"annotators": [
"boda",
"6740484e188a64793529ee77",
"6686ebe474531e4a1975636f"
],
"labels": [
"5",
"5",
"5"
]
} | 5 | gold | 1,438 |
1) The choice of the baseline methods can be improved. Especially to evaluate the appearance decomposition part, it would be good to compare to other existing methods, as an example Ref-NeRF would be a good baseline that contains appearance decomposition. For the larger outdoor scene, MipNerf would be a good baseline. | YHqEWF5gt8 | ICLR_2024 | 1) The choice of the baseline methods can be improved. Especially to evaluate the appearance decomposition part, it would be good to compare to other existing methods, as an example Ref-NeRF would be a good baseline that contains appearance decomposition. For the larger outdoor scene, MipNerf would be a good baseline.
... | 10 | {
"annotators": [
"boda",
"6740484e188a64793529ee77",
"6686ebe474531e4a1975636f"
],
"labels": [
"5",
"5",
"5"
]
} | 5 | gold | 1,441 |
- section 3.1, line 143" "Then the state changes and environment gives a reward". This is not true of standard MDP formulations. You may not get a reward after each action, but this makes it sound like that. Also, line 154, it's not clear if each action is a single feature or the power set. Maybe make the description m... | NIPS_2018_125 | NIPS_2018 | - Some missing references and somewhat weak baseline comparisons (see below) - Writing style needs some improvement, although, it is overall well written and easy to understand. Technical comments and questions: - The idea of active feature acquisition, especially in the medical domain was studied early on by Ashish Ka... | 10 | {
"annotators": [
"boda",
"6740484e188a64793529ee77",
"6686ebe474531e4a1975636f"
],
"labels": [
"5",
"5",
"5"
]
} | 5 | gold | 1,450 |
- What is \delta in the statement of Lemma 5? | NIPS_2017_201 | NIPS_2017 | ++++++++++
Novelty/Significance: The reformulation of the robust regression problem (Eq 6 in the paper) shows that robust regression is reducible to standard k-sparse recovery. Therefore, the proposed CRR algorithm is basically the well-known IHT algorithm (with a modified design matrix), and IHT has been (re)introduce... | 10 | {
"annotators": [
"boda",
"6740484e188a64793529ee77",
"6686ebe474531e4a1975636f"
],
"labels": [
"5",
"5",
"5"
]
} | 5 | gold | 1,452 |
1. line 113: You set \alpha_m uniformly to be 1/M which implies that the contributions from all modalities are the same. However, works in multimodal fusion have shown that dynamically weighting the modalities is quite important because | NIPS_2019_1408 | NIPS_2019 | - The paper is not that original given the amount of work in learning multimodal generative models: For example, from the perspective of the model, the paper builds on top of the work by Wu and Goodman (2018) except that they learn a mixture of experts rather than a product of experts variational posterior. In ... | 10 | {
"annotators": [
"boda",
"6740484e188a64793529ee77",
"6686ebe474531e4a1975636f"
],
"labels": [
"1",
"1",
"1"
]
} | 1 | gold | 1,453 |
* The effectiveness of the proposed two-stage optimization approach needs further justifications. Only showing the performance drop on fusion models is not enough. Comparisons with other single-stage attacks are also needed to demonstrate the effectiveness. Without proper benchmarks and comparisons with other SOTA algo... | 3VD4PNEt5q | ICLR_2024 | * The effectiveness of the proposed two-stage optimization approach needs further justifications. Only showing the performance drop on fusion models is not enough. Comparisons with other single-stage attacks are also needed to demonstrate the effectiveness. Without proper benchmarks and comparisons with other SOTA algo... | 10 | {
"annotators": [
"boda",
"6740484e188a64793529ee77",
"6686ebe474531e4a1975636f"
],
"labels": [
"5",
"5",
"5"
]
} | 5 | gold | 1,459 |
1), 2) and 4) Less important points: In table 1 it seems that overall performance of RS-D4PG monotonically increases w.r.t. λ values. I am curious to see what happens when λ is even smaller. Page 3, Line 2, J_obj^π(θ) -> τ and η are missing in the bracket. Line 4 at paragraph D4PG: Q_T(s, ...) -> s' | ICLR_2021_242 | ICLR_2021 | In the paper the motivation of using meta-gradient to solve the formulated Lagrangian optimization is only explained once at the beginning of Page 4 "Our intuition is that a learning rate gradient that takes into account the overall task objective and constraint thresholds will lead to improved overall performance." Ho... | 10 | {
"annotators": [
"boda",
"6740484e188a64793529ee77",
"6686ebe474531e4a1975636f"
],
"labels": [
"5",
"5",
"5"
]
} | 5 | gold | 1,461 |
- This sentence is confusing [93-95] "After we have trained the model for task t, we memorize each newly added filter by the shape of every layer to prevent the caused semantic drift." I believe I understood it after re-reading it and the subsequent sentences but it is not immediately obvious what is meant. | NIPS_2018_83 | NIPS_2018 | - An argument against DEN, a competitor, is hyper-parameter sensitivity. First, this isn't really shown, but second (and more importantly) reinforcement learning is well-known to be extremely unstable and require a great deal of tuning. For example, even random seed changes are known to change the behavior of the same ... | 10 | {
"annotators": [
"boda",
"6740484e188a64793529ee77",
"6686ebe474531e4a1975636f"
],
"labels": [
"4",
"4",
"4"
]
} | 4 | gold | 1,463 |
- Line 78-79: "diffusion models have been able to outperform generative adversarial networks on image generation benchmarks": yes, but you need a citation there - Lines 129-130: "Previous work has tried to tackle... but with limited success": citation needed - Lines 156-158: "This improves the reliability and efficienc... | Xe6UmKMInx | ICLR_2025 | - Many sentences and paragraphs are unclear. I have given my best to collect most examples but I might have forgotten some of them (examples listed below)
- Many claims are insufficiently supported by evidence (either from other papers or experiments). Similarly, I listed multiple examples below.
- The experiments are ... | 10 | {
"annotators": [
"boda",
"6740484e188a64793529ee77",
"6686ebe474531e4a1975636f"
],
"labels": [
"5",
"5",
"5"
]
} | 5 | gold | 1,464 |
- How does the ineq. after l433 follow from Lemma 7? It seems to follow somehow from a combination of the previous inequalities, but please facilitate the reading a bit by stating how Lemma 7 comes into play here. | NIPS_2016_133 | NIPS_2016 | --- The clarity of the main parts has clearly improved compared to the last version I saw as an ICML reviewer. Generally, it seems natural to investigate the direction of how causal models can help for autonomous agents. The authors present an interesting proposal for how this can be done in case of simple bandits, del... | 10 | {
"annotators": [
"boda",
"6740484e188a64793529ee77",
"6686ebe474531e4a1975636f"
],
"labels": [
"5",
"5",
"5"
]
} | 5 | gold | 1,467 |
- The experiment comparison is weak, the author only compare their method to the BERT-baseline. The author should compare their method to token pruning and token combination baselines. | 8l2m7jctGv | EMNLP_2023 | - The contribution is a combination of current methods from computer vision and incremental.
- The illustration in fig. 1 is confusing, the symbol definition is different from what the authors use in the text.
- The motivation for fuzzy-based token pruning is not clear. Even in imbalanced distributions, discarding toke... | 10 | {
"annotators": [
"boda",
"6740484e188a64793529ee77",
"6686ebe474531e4a1975636f"
],
"labels": [
"5",
"5",
"5"
]
} | 5 | gold | 1,471 |
- The experimental section only compares to methods that in their convolution are unaware of the point coordinates (except for in the input features). A comparison to coordinate-aware methods, such as TFN or SchNet seems appropriate. | ICLR_2021_512 | ICLR_2021 | - Important pieces of prior work are missing from the related work section. The paper seems to be strongly related to Tensor Field Networks (TFN) (Thomas et al. 2018), as both define Euclidean and permutation equivariant convolutions on point clouds / graphs. Furthermore, there are several other methods that operate on... | 10 | {
"annotators": [
"boda",
"6740484e188a64793529ee77",
"6686ebe474531e4a1975636f"
],
"labels": [
"5",
"5",
"5"
]
} | 5 | gold | 1,472 |
-The idea makes sense for the long document summarization, but I’m wondering what the others have done in this area with a similar methodology? What does the system offer over the previous extract-then-generate methodologies? This is troublesome considering that the paper does not have any Related Work section, nor exp... | ARR_2022_14_review | ARR_2022 | -The idea makes sense for the long document summarization, but I’m wondering what the others have done in this area with a similar methodology? What does the system offer over the previous extract-then-generate methodologies? This is troublesome considering that the paper does not have any Related Work section, nor exp... | 10 | {
"annotators": [
"boda",
"6740484e188a64793529ee77",
"6686ebe474531e4a1975636f"
],
"labels": [
"5",
"5",
"5"
]
} | 5 | gold | 1,474 |
2) Larger-scale experiments. Why were there no experiments with larger state-action spaces and non-trivial dynamics included (at least grid-worlds with walls, and other non-trivial tiles)? Currently it is hard to judge whether this was simply due to a lack of time or because the method has severe scalability issues. Ve... | NIPS_2019_1270 | NIPS_2019 | of the two variants (does the VI version run faster in terms of wall-clock time, is it more sample efficient, does it generalize better, �). Given the small size of the toy domain, other (brute-force, or inefficient sampling-based) methods could potentially be included as well, but it would be OK to dismiss them by ... | 10 | {
"annotators": [
"boda",
"6740484e188a64793529ee77",
"6686ebe474531e4a1975636f"
],
"labels": [
"5",
"5",
"5"
]
} | 5 | gold | 1,478 |
1) the authors did not propose any quantitative measurement to the extent of occupation bias relative to real distributions in society; | NIPS_2021_725 | NIPS_2021 | Comparing the occupational statistics computed by GPT2 vs those by the United States is very interesting and informative. However, the presentation on the methodology and the subsequent discussion is confusing to me. Particularly from section 3.4, I am not sure what “adj.” in equation (1) means and why “adj. Pred” is a... | 10 | {
"annotators": [
"boda",
"6740484e188a64793529ee77",
"6686ebe474531e4a1975636f"
],
"labels": [
"4",
"4",
"4"
]
} | 4 | gold | 1,479 |
3. We normally use adaptive gradient methods rather than SGD. Would such a method, which rescales gradient components, affect the findings? For instance, might it amplify updates for weights associated with hard features (i.e., x2)? | hNkXTqDrfb | ICLR_2025 | 1. The generative process description for the data is somewhat unclear. Based on the current explanation, it seems that the Bayes optimal classifier might not need to rely on semantic features; syntactic features appear sufficient to solve the task in an asymptotic setup. This would havew undermined the notion that x2 ... | 10 | {
"annotators": [
"boda",
"6740484e188a64793529ee77",
"6686ebe474531e4a1975636f"
],
"labels": [
"5",
"5",
"5"
]
} | 5 | gold | 1,480 |
2. However, overall, no information from 2-hop neighbors is included. Again, this method is simple, but it is highly unclear why it is effective. | oSdrJyb4UH | ICLR_2025 | W1. This paper claims that the proposed method has good expressiveness. However, I found no (theoretical) analysis regarding the expressiveness.
W2. The proposed method is actually pretty simple, and the rationale is simple, monophyly (lines 50-52), i.e., 2-hop neighbors are helpful for node classification on both homo... | 10 | {
"annotators": [
"boda",
"6740484e188a64793529ee77",
"6686ebe474531e4a1975636f"
],
"labels": [
"4",
"4",
"4"
]
} | 4 | gold | 1,483 |
9. The real dicom image is recommended to use as experiment data, not the png image. FastMRI challenge dataset would be a good choice. Inference speed should be compared between different methods. | ICLR_2023_4641 | ICLR_2023 | Weakness:
Experiments are not sufficient. Table 1 only shows the comparison with MADUN, other state-of-the-art methods should be included.
Experimental results are not convincing, particularly the CS-MRI reconstruction problem. The difference between different methods can hardly be observed in Fig. 8 and Fig. 9. The re... | 10 | {
"annotators": [
"boda",
"6740484e188a64793529ee77",
"6686ebe474531e4a1975636f"
],
"labels": [
"5",
"5",
"5"
]
} | 5 | gold | 1,484 |
4. Please clarify how the model in fig. 7 was trained. Was it on full field flicker stimulus changing contrast with a fixed cycle? If the duration of the cycle changes (shortens, since as the authors mention the model cannot handle longer time scales), will the time scale of adaptation shorten as reported in e.g Smirna... | NIPS_2016_153 | NIPS_2016 | weakness of previous models. Thus I find these results novel and exciting.Modeling studies of neural responses are usually measured on two scales: a. Their contribution to our understanding of the neural physiology, architecture or any other biological aspect. b. Model accuracy, where the aim is to provide a model whic... | 10 | {
"annotators": [
"boda",
"6740484e188a64793529ee77",
"6686ebe474531e4a1975636f"
],
"labels": [
"5",
"5",
"5"
]
} | 5 | gold | 1,486 |
1)what happens if the original CAD model is already associated with spatially-varying (SV) BRDF maps? | AkL2ID5rRV | ICLR_2025 | [1] Clarification. Several details need to be clarified to better understand the model and the training strategy.
(a) What does the model estimate w.r.t. the PBR parameters? Does it only estimate albedo?
(b) With sampled metallic, roughness and lighting envmaps, do we apply the metallic and roughness globally? If yes: ... | 10 | {
"annotators": [
"boda",
"6740484e188a64793529ee77",
"6686ebe474531e4a1975636f"
],
"labels": [
"5",
"5",
"5"
]
} | 5 | gold | 1,487 |
1. The first question is that the evidence of the motivation is not direct. Since the problem to be solved is that “a predictor suffers from the accuracy decline due to long-term and continuous usage”, the authors need to plot a figure about the decline in accuracy of a predictor over time (search steps) in different s... | ICLR_2023_2880 | ICLR_2023 | 1. The first question is that the evidence of the motivation is not direct. Since the problem to be solved is that “a predictor suffers from the accuracy decline due to long-term and continuous usage”, the authors need to plot a figure about the decline in accuracy of a predictor over time (search steps) in different s... | 10 | {
"annotators": [
"boda",
"6740484e188a64793529ee77",
"6686ebe474531e4a1975636f"
],
"labels": [
"5",
"5",
"5"
]
} | 5 | gold | 1,494 |
3 Can authors explain more about the definition of excessive risk in line 103 and how to calculate it in practice, in terms of expectation? Since the optimal solution θ* is not the optimal solution for the loss function w.r.t. data of group a. It can be negative values, right? But I see all excessive risk values in Figur... | NIPS_2021_1554 | NIPS_2021 | 1 Why does Theorem 2 only show a second-order Taylor expansion of the excessive risk for group a, rather than similar result showing in Theorem 1? Since the unfairness defined in line 107 is based on excessive risk gap ξ_a, it is more meaningful and consistent to see the theoretical results with respect to ξ_a for DP-... | 10 | {
"annotators": [
"boda",
"6740484e188a64793529ee77",
"6686ebe474531e4a1975636f"
],
"labels": [
"5",
"5",
"5"
]
} | 5 | gold | 1,495 |
* L200: "for every arm a" implies there is a single optimistic parameter, but of course it depends on a ** L303: Why not choose T_0 = m Sqrt(T)? Then the condition becomes B > Sqrt(m) T^(3/4), which improves slightly on what you give. | NIPS_2016_386 | NIPS_2016 | , however. For of all, there is a lot of sloppy writing, typos and undefined notation. See the long list of minor comments below. A larger concern is that some parts of the proof I could not understand, despite trying quite hard. The authors should focus their response to this review on these technical concerns, which ... | 10 | {
"annotators": [
"boda",
"6740484e188a64793529ee77",
"6686ebe474531e4a1975636f"
],
"labels": [
"5",
"5",
"5"
]
} | 5 | gold | 1,498 |
2. the experimental section is a little weak. More experiments are required. | ICLR_2022_3089 | ICLR_2022 | 1. it is usually difficult to get the rules in real-world applications. Statistical rules learnt from data may be feasible. 2. the experimental section is a little weak. More experiments are required. | 10 | {
"annotators": [
"boda",
"6740484e188a64793529ee77",
"6686ebe474531e4a1975636f"
],
"labels": [
"3",
"3",
"3"
]
} | 3 | gold | 1,500 |
- The paper is somewhat incremental. The developed model is a fairly straighforward extension of the GAN for static images. | NIPS_2016_69 | NIPS_2016 | - The paper is somewhat incremental. The developed model is a fairly straighforward extension of the GAN for static images. - The generated videos have significant artifacts. Only some of the beach videos are kind of convincing. The action recognition performance is much below the current state-of-the-art on the UCF da... | 10 | {
"annotators": [
"boda",
"6740484e188a64793529ee77",
"6686ebe474531e4a1975636f"
],
"labels": [
"1",
"1",
"1"
]
} | 1 | gold | 1,504 |
2. The motivation of this task is unclear to me. When an object is totally occluded, its state, including position, size and motion, is very difficult to predict. Although authors consume much time to annotate such objects, but the quality can not be guaranteed because we do not know their real states. What are the pot... | cfuZKjGDW7 | ICLR_2025 | 1. The contributions of this work seem small. TAO dataset is existing, and the contribution to the benchmark is large-scale annotations. The designed expander is also a simple regressor and data augmentation schemes are also based on existing ones.
2. The motivation of this task is unclear to me. When an object is tota... | 10 | {
"annotators": [
"boda",
"6740484e188a64793529ee77",
"6686ebe474531e4a1975636f"
],
"labels": [
"5",
"5",
"5"
]
} | 5 | gold | 1,507 |
2. While I understand that GPT-4 is expensive, I suggest that you should include experiments with GPT-3.5, which is a more affordable option. This would provide a more comprehensive evaluation of your proposed approach. | 7D4TPisEBk | EMNLP_2023 | 1. The paper lacks experiments on the Spider test set. Without experiments on this dataset, it is difficult to evaluate the generalizability of your proposed approach.
2. While I understand that GPT-4 is expensive, I suggest that you should include experiments with GPT-3.5, which is a more affordable option. This would... | 10 | {
"annotators": [
"boda",
"6740484e188a64793529ee77",
"6686ebe474531e4a1975636f"
],
"labels": [
"5",
"5",
"5"
]
} | 5 | gold | 1,508 |
- Table 4: Please also include bold numbers for the baselines of previous work. Specifically for WMT17-WIKT the best result in terms of BLEU is actually in the baselines. | ARR_2022_276_review | ARR_2022 | - The evaluation of the paper could be made stronger by using some of the standard datasets for terminology translation (e.g. wmt21 shared task) and evaluation metrics (Alam et al. 2021).
- The description of the alignment embedding seems a bit under-specified. Am I understanding it correctly that the constraints in th... | 10 | {
"annotators": [
"boda",
"6740484e188a64793529ee77",
"6686ebe474531e4a1975636f"
],
"labels": [
"5",
"5",
"5"
]
} | 5 | gold | 1,510 |
6. Lines 170 to 171, “unreliable neighbors” any examples of “unreliable neighbors”? | ARR_2022_143_review | ARR_2022 | Weak: 1. More examples are preferred to understand the motivations, the novel part of the proposed method and the baselines (see “detailed questions and comments”); 2. Some higher level comparisons, such as between parametric and non- parametric solutions are preferred. Currently, most baselines are in the same techn... | 10 | {
"annotators": [
"boda",
"6740484e188a64793529ee77",
"6686ebe474531e4a1975636f"
],
"labels": [
"5",
"5",
"5"
]
} | 5 | gold | 1,511 |
1. It appears in Sections 6.1 and 6.2 that the tree-sliced Wasserstein distance outperforms the original optimal transport distance, which is surprising. Could you explain why this occurs? | NIPS_2019_1102 | NIPS_2019 | 1. It appears in Sections 6.1 and 6.2 that the tree-sliced Wasserstein distance outperforms the original optimal transport distance, which is surprising. Could you explain why this occurs? 2. The proof in the main text of Proposition 1 looks more like a proof sketch, particularly as the existence of a function f having... | 10 | {
"annotators": [
"boda",
"6740484e188a64793529ee77",
"6686ebe474531e4a1975636f"
],
"labels": [
"5",
"5",
"5"
]
} | 5 | gold | 1,514 |
3. It would make for a stronger case if the paper reports the numbers observed when the label noise experiment is performed on image-net with 1000 classes as well (at least on the non-tail classes). This would further stress test the conjecture. Even if the phenomenon significantly weakens in this setting, the numbers ... | ICLR_2022_2213 | ICLR_2022 | 1. Based on some efforts to reproduce the results on my end, it is not clear how strongly the proposed observation holds which might limit the significance of the contributions of the paper.
Comments and questions: 1. What would constitute distributional generalization in the setting of regression? If I consider the se... | 10 | {
"annotators": [
"boda",
"6740484e188a64793529ee77",
"6686ebe474531e4a1975636f"
],
"labels": [
"5",
"5",
"5"
]
} | 5 | gold | 1,519 |
- Given that the brain does everything in parallel, why is the number of weight updates a better metric than the number of network updates? Provide additional feedback with the aim to improve the paper. | ICLR_2021_973 | ICLR_2021 | .
Clearly state your recommendation (accept or reject) with one or two key reasons for this choice. I recommend acceptance. The number of updates needed to learn realistic brain-like representations is a fair criticism of current models, and this paper demonstrates that this number can be greatly reduced, with moderate... | 10 | {
"annotators": [
"boda",
"6740484e188a64793529ee77",
"6686ebe474531e4a1975636f"
],
"labels": [
"5",
"5",
"5"
]
} | 5 | gold | 1,520 |
- The contrastive learning framework is the same as SimCLR. | NIPS_2020_487 | NIPS_2020 | - The contrastive learning framework is the same as SimCLR. - Graph augmentation methods, such as DropNode, DropEdge, FeatureMask, have been adopted in previous GNNs work, such as [1,2]. [1] DROPEDGE: TOWARDS DEEP GRAPH CONVOLUTIONAL NETWORKS ON NODE CLASSIFICATION. [2] STRATEGIES FOR PRE-TRAINING GRAPH NEURAL NETWORKS... | 10 | {
"annotators": [
"boda",
"6740484e188a64793529ee77",
"6686ebe474531e4a1975636f"
],
"labels": [
"1",
"1",
"1"
]
} | 1 | gold | 1,523 |
- There is no comparison against existing text GANs , many of which have open source implentations. While SeqGAN is mentioned, they do not test it with the pretrained version. | NIPS_2019_387 | NIPS_2019 | - The main weakness is empirical---scratchGAN appreciably underperforms an MLE model in terms of LM score and reverse LM score. Further, samples from Table 7 are ungrammatical and incoherent, especially when compared to the (relatively) coherent MLE samples. - I find this statement in the supplemental section D.4 quest... | 10 | {
"annotators": [
"boda",
"6740484e188a64793529ee77",
"6686ebe474531e4a1975636f"
],
"labels": [
"4",
"4",
"4"
]
} | 4 | gold | 1,526 |
- Table 4 and 5 would be more readable if they were split into two tables each, to have one table per measure. E.g. first put the 8 SFII columns and then the 8 SPDI columns rather than alternating between them. | ARR_2022_13_review | ARR_2022 | - Some design choices could have been justified in more detail and explained with more examples - The formalization is hard to read at times, with multiple Greek letters and subscripts for somewhat easy-to-grasp concepts - Multilingual coverage could of course be better, but the current limitation is understandable and... | 10 | {
"annotators": [
"boda",
"6740484e188a64793529ee77",
"6686ebe474531e4a1975636f"
],
"labels": [
"5",
"5",
"5"
]
} | 5 | gold | 1,528 |
- In Fig. 5, it would be helpful to specify what does "valid" and "orig" differ in. | NIPS_2019_424 | NIPS_2019 | weakness of the current watermarking methods, namely the fact that they are prone to ambiuity attacks, - offers an analysis of the issue investigating the requirements that have to be fullfiled by any method that should withstand such attacks, - proposes such a method based on "passport layers" which are appended after... | 10 | {
"annotators": [
"boda",
"6740484e188a64793529ee77",
"6686ebe474531e4a1975636f"
],
"labels": [
"5",
"5",
"5"
]
} | 5 | gold | 1,529 |
* Including a comparison to one of the methods mentioned in the computer vision setting would have been more useful than comparing to, e.g. loss-based sampling. I understand that these are not always applicable and typically require a supervised set-up, but some of them can probably be adapted to language tasks relativ... | bWXIut4pNM | EMNLP_2023 | There are 3 potential changes that would improve this work:
* First, something that didn't come across was the importance and intuition behind the choice of the similarity kernel. What types of kernels work best? Are there, e.g., cheap empirical metrics that can effectively estimate the clustering kernel in eq. 2? Coul... | 10 | {
"annotators": [
"boda",
"6740484e188a64793529ee77",
"6686ebe474531e4a1975636f"
],
"labels": [
"5",
"5",
"5"
]
} | 5 | gold | 1,531 |
1. The time complexity of the learning algorithm should be explicitly estimated to proof the scalability properties. | NIPS_2016_95 | NIPS_2016 | 1. The time complexity of the learning algorithm should be explicitly estimated to proof the scalability properties. 2. In Figure 4, the time complexity for TRMF-AR({1,8}) and TRMF-AR({1,2,â¦,8}) seems to be the same. The reason should be explained. | 10 | {
"annotators": [
"boda",
"6740484e188a64793529ee77",
"6686ebe474531e4a1975636f"
],
"labels": [
"5",
"5",
"5"
]
} | 5 | gold | 1,532 |