Dataset Preview
The full dataset viewer is not available; only a preview of the rows is shown below.
The dataset generation failed
Error code:   DatasetGenerationError
Exception:    ArrowInvalid
Message:      Failed to parse string: '2: Several of the paper’s claims are incorrect or not well-supported.' as a scalar of type double
Traceback:    Traceback (most recent call last):
                File "/usr/local/lib/python3.12/site-packages/datasets/builder.py", line 1893, in _prepare_split_single
                  writer.write_table(table)
                File "/usr/local/lib/python3.12/site-packages/datasets/arrow_writer.py", line 765, in write_table
                  self._write_table(pa_table, writer_batch_size=writer_batch_size)
                File "/usr/local/lib/python3.12/site-packages/datasets/arrow_writer.py", line 773, in _write_table
                  pa_table = table_cast(pa_table, self._schema)
                             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
                File "/usr/local/lib/python3.12/site-packages/datasets/table.py", line 2272, in table_cast
                  return cast_table_to_schema(table, schema)
                         ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
                File "/usr/local/lib/python3.12/site-packages/datasets/table.py", line 2224, in cast_table_to_schema
                  cast_array_to_feature(
                File "/usr/local/lib/python3.12/site-packages/datasets/table.py", line 1795, in wrapper
                  return pa.chunked_array([func(chunk, *args, **kwargs) for chunk in array.chunks])
                                           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^
                File "/usr/local/lib/python3.12/site-packages/datasets/table.py", line 2086, in cast_array_to_feature
                  return array_cast(
                         ^^^^^^^^^^^
                File "/usr/local/lib/python3.12/site-packages/datasets/table.py", line 1797, in wrapper
                  return func(array, *args, **kwargs)
                         ^^^^^^^^^^^^^^^^^^^^^^^^^^^^
                File "/usr/local/lib/python3.12/site-packages/datasets/table.py", line 1949, in array_cast
                  return array.cast(pa_type)
                         ^^^^^^^^^^^^^^^^^^^
                File "pyarrow/array.pxi", line 1135, in pyarrow.lib.Array.cast
                File "/usr/local/lib/python3.12/site-packages/pyarrow/compute.py", line 412, in cast
                  return call_function("cast", [arr], options, memory_pool)
                         ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
                File "pyarrow/_compute.pyx", line 604, in pyarrow._compute.call_function
                File "pyarrow/_compute.pyx", line 399, in pyarrow._compute.Function.call
                File "pyarrow/error.pxi", line 155, in pyarrow.lib.pyarrow_internal_check_status
                File "pyarrow/error.pxi", line 92, in pyarrow.lib.check_status
              pyarrow.lib.ArrowInvalid: Failed to parse string: '2: Several of the paper’s claims are incorrect or not well-supported.' as a scalar of type double
              
              The above exception was the direct cause of the following exception:
              
              Traceback (most recent call last):
                File "/src/services/worker/src/worker/job_runners/config/parquet_and_info.py", line 1347, in compute_config_parquet_and_info_response
                  parquet_operations = convert_to_parquet(builder)
                                       ^^^^^^^^^^^^^^^^^^^^^^^^^^^
                File "/src/services/worker/src/worker/job_runners/config/parquet_and_info.py", line 980, in convert_to_parquet
                  builder.download_and_prepare(
                File "/usr/local/lib/python3.12/site-packages/datasets/builder.py", line 884, in download_and_prepare
                  self._download_and_prepare(
                File "/usr/local/lib/python3.12/site-packages/datasets/builder.py", line 947, in _download_and_prepare
                  self._prepare_split(split_generator, **prepare_split_kwargs)
                File "/usr/local/lib/python3.12/site-packages/datasets/builder.py", line 1739, in _prepare_split
                  for job_id, done, content in self._prepare_split_single(
                                               ^^^^^^^^^^^^^^^^^^^^^^^^^^^
                File "/usr/local/lib/python3.12/site-packages/datasets/builder.py", line 1925, in _prepare_split_single
                  raise DatasetGenerationError("An error occurred while generating the dataset") from e
              datasets.exceptions.DatasetGenerationError: An error occurred while generating the dataset
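The root cause is a schema mismatch: a column that Arrow expects to hold doubles actually contains full rating strings such as "2: Several of the paper’s claims are incorrect or not well-supported.". A minimal sketch of one possible workaround is to strip the value down to its leading numeric score before the cast; the helper name below is hypothetical, not part of the datasets API:

```python
def leading_score(cell: str) -> float:
    """Extract the numeric score that precedes the first colon in an
    OpenReview-style rating/confidence string, e.g.
    '5: Marginally below acceptance threshold' -> 5.0."""
    return float(cell.split(":", 1)[0].strip())

# The exact string Arrow failed to cast parses cleanly once reduced to its prefix:
print(leading_score("2: Several of the paper's claims are incorrect or not well-supported."))
# prints 2.0
```

Mapping such a function over the offending column (or simply declaring the column as string in the features spec) would let the Arrow cast succeed.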


Column schema, name (type):

forum (string)
paper_title (string)
review_id (string)
cdate (int64)
confidence (string)
rating (string)
review (string)
title (string)
filter_source (string)
passed_filter (bool)
failed_rule (null)
llm_generated_flag (bool)
llm_generated_reasoning (string)
experience_assessment (null)
review_assessment:_checking_correctness_of_derivations_and_theory (null)
review_assessment:_checking_correctness_of_experiments (null)
review_assessment:_thoroughness_in_paper_reading (null)
correctness (null)
details_of_ethics_concerns (null)
empirical_novelty_and_significance (null)
flag_for_ethics_review (null)
main_review (null)
recommendation (null)
summary_of_the_paper (null)
summary_of_the_review (null)
technical_novelty_and_significance (null)
clarity,_quality,_novelty_and_reproducibility (null)
strength_and_weaknesses (null)
code_of_conduct (null)
contribution (null)
presentation (null)
questions (null)
soundness (null)
strengths (null)
summary (null)
weaknesses (null)
ethical_concerns (null)
ethics_review_area (null)
limitations_and_societal_impact (null)
needs_ethics_review (null)
time_spent_reviewing (null)
ethics_flag (null)
limitations (null)
strengths_and_weaknesses (null)
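Since `confidence` and `rating` are typed `string` in this schema, the ArrowInvalid failure above most likely comes from a conflicting schema elsewhere (e.g. a `double` dtype inferred from or declared for another config or split). One way to pin the dtype is the `dataset_info` block in the repo's README front matter; the fragment below is a sketch of that convention, and the exact field layout should be checked against the Hub documentation:

```yaml
dataset_info:
  features:
    - name: confidence
      dtype: string
    - name: rating
      dtype: string
```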
forum: ryzm6BATZ
paper_title: Image Quality Assessment Techniques Improve Training and Evaluation of Energy-Based Generative Adversarial Networks
review_id: HJZIu0Kef
cdate: 1,511,807,049,322
confidence: 3: The reviewer is fairly confident that the evaluation is correct
rating: 5: Marginally below acceptance threshold
review: This paper proposed some new energy function in the BEGAN (boundary equilibrium GAN framework), including l_1 score, Gradient magnitude similarity score, and chrominance score, which are motivated and borrowed from the image quality assessment techniques. These energy component in the objective function allows learning...
title: Novelty of the paper is a bit restricted, and design choices appear to be lacking strong justifications.
filter_source: iclr_2018
passed_filter: true
failed_rule: null
llm_generated_flag: false
llm_generated_reasoning: The review uses a very human, blunt tone ('outrageous', 'bad side of recent deep nets hype') and includes specific BibTeX entries and manual page/column references that are typical of human peer reviewers.
(all remaining assessment/rubric columns: null)
forum: ryzm6BATZ
paper_title: Image Quality Assessment Techniques Improve Training and Evaluation of Energy-Based Generative Adversarial Networks
review_id: Bk8udEEeM
cdate: 1,511,438,445,887
confidence: 3: The reviewer is fairly confident that the evaluation is correct
rating: 5: Marginally below acceptance threshold
review: Quick summary: This paper proposes an energy based formulation to the BEGAN model and modifies it to include an image quality assessment based term. The model is then trained with CelebA under different parameters settings and results are analyzed. Quality and significance: This is quite a technical paper, written in ...
title: A very technical paper with unclear significance.
filter_source: iclr_2018
passed_filter: true
failed_rule: null
llm_generated_flag: false
llm_generated_reasoning: The review uses natural, human-like phrasing, includes a specific technical question for the authors, and lacks typical LLM filler words or formulaic structures.
(all remaining assessment/rubric columns: null)
forum: ryykVe-0W
paper_title: Learning Independent Features with Adversarial Nets for Non-linear ICA
review_id: HyoEDdvxG
cdate: 1,511,651,123,102
confidence: 5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature
rating: 3: Clear rejection
review: The focus of the paper is independent component analysis (ICA) and its nonlinear variants such as the post non-linear (PNL) ICA model. Motivated by the fact that estimating mutual information and similar dependency measures require density estimates and hard to optimize, the authors propose a Wasserstein GAN (generativ...
title: Proposed Wasserstein GAN: not well-suited to ICA
filter_source: iclr_2018
passed_filter: true
failed_rule: null
llm_generated_flag: false
llm_generated_reasoning: The review uses natural, direct, and critical language that lacks the formulaic structure or flowery vocabulary typical of LLM-generated text.
(all remaining assessment/rubric columns: null)
forum: ryykVe-0W
paper_title: Learning Independent Features with Adversarial Nets for Non-linear ICA
review_id: H1hlWndxM
cdate: 1,511,731,443,944
confidence: 5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature
rating: 5: Marginally below acceptance threshold
review: The paper proposes a GAN variant for solving the nonlinear independent component analysis (ICA) problem. The method seems interesting, but the presentation has a severe lack of focus. First, the authors should focus their discussion instead of trying to address a broad range of ICA problems from linear to post-nonline...
title: Interesting nonlinear ICA method, but unfocused presentation and poor comparisons
filter_source: iclr_2018
passed_filter: true
failed_rule: null
llm_generated_flag: false
llm_generated_reasoning: The review contains several grammatical idiosyncrasies and typos (e.g., 'fo comparisons') that suggest human authorship rather than LLM generation.
(all remaining assessment/rubric columns: null)
forum: ryykVe-0W
paper_title: Learning Independent Features with Adversarial Nets for Non-linear ICA
review_id: ry2lpp_ez
cdate: 1,511,738,612,355
confidence: 3: The reviewer is fairly confident that the evaluation is correct
rating: 6: Marginally above acceptance threshold
review: The idea of ICA is constructing a mapping from dependent inputs to outputs (=the derived features) such that the outputs are as independent as possible. As the input/output densities are often not known and/or are intractable, natural independence measures such as mutual information are hard to estimate. In practice, ...
title: Thought provoking paper but lacks more detailed analysis
filter_source: iclr_2018
passed_filter: true
failed_rule: null
llm_generated_flag: false
llm_generated_reasoning: The review contains minor grammatical errors and specific technical references that are consistent with human writing and lacks typical LLM stylistic markers.
(all remaining assessment/rubric columns: null)
forum: rywHCPkAW
paper_title: Noisy Networks For Exploration
review_id: H14gEaFxG
cdate: 1,511,801,835,995
confidence: 4: The reviewer is confident but not absolutely certain that the evaluation is correct
rating: 6: Marginally above acceptance threshold
review: A new exploration method for deep RL is presented, based on the idea of injecting noise into the deep networks’ weights. The noise may take various forms (either uncorrelated or factored) and its magnitude is trained by gradient descent along other parameters. It is shown how to implement this idea both in DQN (and its...
title: Good paper but lack of empirical comparison & analysis
filter_source: iclr_2018
passed_filter: true
failed_rule: null
llm_generated_flag: false
llm_generated_reasoning: The review does not exhibit typical LLM markers and includes specific details such as LaTeX spelling corrections and precise citations.
(all remaining assessment/rubric columns: null)
forum: rywHCPkAW
paper_title: Noisy Networks For Exploration
review_id: Hyf0aUVeM
cdate: 1,511,448,010,505
confidence: 3: The reviewer is fairly confident that the evaluation is correct
rating: 5: Marginally below acceptance threshold
review: In this paper, a new heuristic is introduced with the purpose of controlling the exploration in deep reinforcement learning. The proposed approach, NoisyNet, seems very simple and smart: a noise of zero mean and unknown variance is added to each weight of the deep network. The matrices of unknown variances are consid...
title: The proposed approach is interesting and has strengths, but the paper has weaknesses. I am somewhat divided for acceptance.
filter_source: iclr_2018
passed_filter: true
failed_rule: null
llm_generated_flag: false
llm_generated_reasoning: The review lacks formulaic LLM structures and contains specific human-like critiques regarding figure ordering and the realism of linear interpolation.
(all remaining assessment/rubric columns: null)
forum: rywHCPkAW
paper_title: Noisy Networks For Exploration
review_id: rJ6Z7prxf
cdate: 1,511,539,460,976
confidence: 4: The reviewer is confident but not absolutely certain that the evaluation is correct
rating: 7: Good paper, accept
review: This paper introdues NoisyNets, that are neural networks whose parameters are perturbed by a parametric noise function, and they apply them to 3 state-of-the-art deep reinforcement learning algorithms: DQN, Dueling networks and A3C. They obtain a substantial performance improvement over the baseline algorithms, without...
title: A good paper, despite a weak analysis
filter_source: iclr_2018
passed_filter: true
failed_rule: null
llm_generated_flag: false
llm_generated_reasoning: The review exhibits a natural human tone, references specific niche technical concepts correctly, and lacks the formulaic structure or repetitive vocabulary associated with AI generation.
(all remaining assessment/rubric columns: null)
forum: rywDjg-RW
paper_title: Neural-Guided Deductive Search for Real-Time Program Synthesis from Examples
review_id: S1qCIfJWz
cdate: 1,512,150,737,563
confidence: 3: The reviewer is fairly confident that the evaluation is correct
rating: 8: Top 50% of accepted papers, clear accept
review: This is a strong paper. It focuses on an important problem (speeding up program synthesis), it’s generally very well-written, and it features thorough evaluation. The results are impressive: the proposed system synthesizes programs from a single example that generalize better than prior state-of-the-art, and it does so...
title: Strong paper; accept
filter_source: iclr_2018
passed_filter: true
failed_rule: null
llm_generated_flag: false
llm_generated_reasoning: The review lacks typical LLM patterns; it uses a natural, direct tone and includes specific, critical observations about figure layout and technical definitions.
(all remaining assessment/rubric columns: null)
forum: rywDjg-RW
paper_title: Neural-Guided Deductive Search for Real-Time Program Synthesis from Examples
review_id: SkPNib9ez
cdate: 1,511,820,079,417
confidence: 3: The reviewer is fairly confident that the evaluation is correct
rating: 6: Marginally above acceptance threshold
review: This paper extends and speeds up PROSE, a programming by example system, by posing the selection of the next production rule in the grammar as a supervised learning problem. This paper requires a large amount of background knowledge as it depends on understanding program synthesis as it is done in the programming lang...
title: Incremental paper but well-written
filter_source: iclr_2018
passed_filter: true
failed_rule: null
llm_generated_flag: false
llm_generated_reasoning: The review is written in a direct, professional tone without the formulaic structure or verbose filler common in LLM-generated text.
(all remaining assessment/rubric columns: null)
forum: rywDjg-RW
paper_title: Neural-Guided Deductive Search for Real-Time Program Synthesis from Examples
review_id: SyFsGdSlM
cdate: 1,511,518,880,762
confidence: 4: The reviewer is confident but not absolutely certain that the evaluation is correct
rating: 6: Marginally above acceptance threshold
review: The paper presents a branch-and-bound approach to learn good programs (consistent with data, expected to generalise well), where an LSTM is used to predict which branches in the search tree should lead to good programs (at the leaves of the search tree). The LSTM learns from inputs of program spec + candidate branch (g...
title: Although the search method chosen was reasonable, the only real innovation here is to use the LSTM to learn a search heuristic.
filter_source: iclr_2018
passed_filter: true
failed_rule: null
llm_generated_flag: false
llm_generated_reasoning: The review uses natural language, specific mathematical references, and lacks the formulaic buzzwords or overly positive tone often associated with LLM-generated reviews.
(all remaining assessment/rubric columns: null)
forum: ryvxcPeAb
paper_title: Enhancing the Transferability of Adversarial Examples with Noise Reduced Gradient
review_id: SJIOPWdgf
cdate: 1,511,688,046,234
confidence: 3: The reviewer is fairly confident that the evaluation is correct
rating: 5: Marginally below acceptance threshold
review: This paper focuses on enhancing the transferability of adversarial examples from one model to another model. The main contribution of this paper is to factorize the adversarial perturbation direction into model-specific and data-dependent. Motivated by finding the data-dependent direction, the paper proposes the noise ...
title: Some arguments are not well justified
filter_source: iclr_2018
passed_filter: true
failed_rule: null
llm_generated_flag: false
llm_generated_reasoning: The review uses standard academic phrasing and lacks the flowery, formulaic style typical of LLMs.
(all remaining assessment/rubric columns: null)
forum: ryvxcPeAb
paper_title: Enhancing the Transferability of Adversarial Examples with Noise Reduced Gradient
review_id: rkzeadBxf
cdate: 1,511,521,513,976
confidence: 4: The reviewer is confident but not absolutely certain that the evaluation is correct
rating: 4: Ok but not good enough - rejection
review: This paper postulates that an adversarial perturbation consists of a model-specific and data-specific component, and that amplification of the latter is best suited for adversarial attacks. This paper has many grammatical errors. The article is almost always missing from nouns. Some of the sentences need changing. For...
title: Review
filter_source: iclr_2018
passed_filter: true
failed_rule: null
llm_generated_flag: false
llm_generated_reasoning: The review lacks LLM-typical buzzwords and exhibits slight grammatical imperfections characteristic of human writing.
(all remaining assessment/rubric columns: null)
forum: ryvxcPeAb
paper_title: Enhancing the Transferability of Adversarial Examples with Noise Reduced Gradient
review_id: rkKt2t2xz
cdate: 1,511,984,257,528
confidence: 4: The reviewer is confident but not absolutely certain that the evaluation is correct
rating: 5: Marginally below acceptance threshold
review: The problem of exploring the cross-model (and cross-dataset) generalization of adversarial examples is relatively neglected topic. However the paper's list of related work on that toopic is a bit lacking as in section 3.1 it omits referencing the "Explaining and Harnessing..." paper by Goodfellow et al., which presente...
title: Interesting study of the most intriguing but lesser studied aspect of adversarial examples.
filter_source: iclr_2018
passed_filter: true
failed_rule: null
llm_generated_flag: false
llm_generated_reasoning: The review lacks typical LLM patterns and demonstrates a specific, critical human-like engagement with the paper's logic and comparative data.
(all remaining assessment/rubric columns: null)
forum: ryup8-WCW
paper_title: Measuring the Intrinsic Dimension of Objective Landscapes
review_id: B1IwI-2xz
cdate: 1,511,949,918,452
confidence: 3: The reviewer is fairly confident that the evaluation is correct
rating: 7: Good paper, accept
review: This paper proposes an empirical measure of the intrinsic dimensionality of a neural network problem. Taking the full dimensionality to be the total number of parameters of the network model, the authors assess intrinsic dimensionality by randomly projecting the network to a domain with fewer parameters (corresponding ...
title: ICLR 2018 official review (Reviewer 2)
filter_source: iclr_2018
passed_filter: true
failed_rule: null
llm_generated_flag: false
llm_generated_reasoning: The review uses natural, conversational phrasing and lacks the characteristic repetitive structures or overly sophisticated vocabulary typically associated with LLM-generated text.
(all remaining assessment/rubric columns: null)
forum: ryup8-WCW
paper_title: Measuring the Intrinsic Dimension of Objective Landscapes
review_id: BkJsM2vgf
cdate: 1,511,666,326,601
confidence: 4: The reviewer is confident but not absolutely certain that the evaluation is correct
rating: 7: Good paper, accept
review: [ =============================== REVISION =========================================================] My questions are answered, paper undergone some revision to clarify the presentation. I still maintain that it is a good paper and argue for acceptance - it provides a witty way of checking whether the network is overp...
title: Good paper
filter_source: iclr_2018
passed_filter: true
failed_rule: null
llm_generated_flag: false
llm_generated_reasoning: The review uses natural academic language and specific context-aware recommendations without the formulaic structure or buzzwords typically associated with LLMs.
(all remaining assessment/rubric columns: null)
forum: rytstxWAW
paper_title: FastGCN: Fast Learning with Graph Convolutional Networks via Importance Sampling
review_id: SJce_4YlM
cdate: 1,511,766,002,084
confidence: 4: The reviewer is confident but not absolutely certain that the evaluation is correct
rating: 6: Marginally above acceptance threshold
review: Update: I have read the rebuttal and the revised manuscript. Additionally I had a brief discussion with the authors regarding some aspects of their probabilistic framework. I think that batch training of GCN is an important problem and authors have proposed an interesting solution to this problem. I appreciated all th...
title: Interesting ideas, but I have both theoretical and practical concerns
filter_source: iclr_2018
passed_filter: true
failed_rule: null
llm_generated_flag: false
llm_generated_reasoning: The review uses natural language and provides specific, context-aware critiques of the model's performance relative to older literature without typical LLM formatting markers.
(all remaining assessment/rubric columns: null)
forum: rytstxWAW
paper_title: FastGCN: Fast Learning with Graph Convolutional Networks via Importance Sampling
review_id: H1IdT6AlG
cdate: 1,512,131,950,169
confidence: 4: The reviewer is confident but not absolutely certain that the evaluation is correct
rating: 8: Top 50% of accepted papers, clear accept
review: This paper addresses the memory bottleneck problem in graph neural networks and proposes a novel importance sampling scheme that is based on sampling vertices (instead of sampling local neighbors as in [1]). Experimental results demonstrate a significant speedup in per-batch training time compared to previous works whi...
title: Fast solution for the memory bottleneck issue in graph neural networks
filter_source: iclr_2018
passed_filter: true
failed_rule: null
llm_generated_flag: false
llm_generated_reasoning: The review is written in a direct, human-like style without the flowery prose or formulaic structure typical of LLM-generated feedback.
(all remaining assessment/rubric columns: null)
forum: rytstxWAW
paper_title: FastGCN: Fast Learning with Graph Convolutional Networks via Importance Sampling
review_id: HJDVPNYgf
cdate: 1,511,765,807,138
confidence: 4: The reviewer is confident but not absolutely certain that the evaluation is correct
rating: 7: Good paper, accept
review: The paper focuses on the recently graph convolutional network (GCN) framework. They authors identify a couple of issues with GCN: the fact that both training and test data need to be present at training time, making it transductive in nature and the fact that the notion of ‘neighborhood’ grows as the signal propagates ...
title: Solid idea, excellent presentation, questions about experiments
filter_source: iclr_2018
passed_filter: true
failed_rule: null
llm_generated_flag: false
llm_generated_reasoning: The language is natural and contains personal phrasing like 'the elephant in the room' and 'clicked for me' that is not typical of formulaic LLM responses.
(all remaining assessment/rubric columns: null)
forum: rytNfI1AZ
paper_title: Training wide residual networks for deployment using a single bit for each weight
review_id: SkGtH2Kxf
cdate: 1,511,798,137,959
confidence: 3: The reviewer is fairly confident that the evaluation is correct
rating: 6: Marginally above acceptance threshold
review: The authors propose to train neural networks with 1bit weights by storing and updating full precision weights in training, but using the reduced 1bit version of the network to compute predictions and gradients in training. They add a few tricks to keep the optimization numerically efficient. Since right now more and mo...
title: Solid work
filter_source: iclr_2018
passed_filter: true
failed_rule: null
llm_generated_flag: false
llm_generated_reasoning: The review has a natural, conversational tone ('I reckon', 'isn't it?') and identifies specific missing placeholders that a human reader would notice.
(all remaining assessment/rubric columns: null)
forum: rytNfI1AZ
paper_title: Training wide residual networks for deployment using a single bit for each weight
review_id: BJyxkbFxz
cdate: 1,511,751,402,736
confidence: 4: The reviewer is confident but not absolutely certain that the evaluation is correct
rating: 6: Marginally above acceptance threshold
review: This paper introduces several ideas: scaling, warm-restarting learning rate, cutout augmentation. I would like to see detailed ablation studies: how the performance is influenced by the warm-restarting learning rates, how the performance is influenced by cutout. Is the scaling scheme helpful for existing single-bit a...
title: Mixed ideas
filter_source: iclr_2018
passed_filter: true
failed_rule: null
llm_generated_flag: false
llm_generated_reasoning: The review is concise and lacks the characteristic verbosity and buzzwords common in LLM-generated text.
(all remaining assessment/rubric columns: null)
forum: rytNfI1AZ
paper_title: Training wide residual networks for deployment using a single bit for each weight
review_id: HJ0pVRqxM
cdate: 1,511,871,686,185
confidence: 4: The reviewer is confident but not absolutely certain that the evaluation is correct
rating: 6: Marginally above acceptance threshold
review: The paper trains wide ResNets for 1-bit per weight deployment. The experiments are conducted on CIFAR-10, CIFAR-100, SVHN and ImageNet32. +the paper reads well +the reported performance is compelling Perhaps the authors should make it clear in the abstract by replacing: "Here, we report methodological innovations th...
title: a single bit for each weight
filter_source: iclr_2018
passed_filter: true
failed_rule: null
llm_generated_flag: false
llm_generated_reasoning: The review is written in a direct, blunt style with specific page references and external links that are characteristic of human peer review rather than an LLM's typical structure.
(all remaining assessment/rubric columns: null)
forum: ryserbZR-
paper_title: Classification and Disease Localization in Histopathology Using Only Global Labels: A Weakly-Supervised Approach
review_id: Bk72o4NWM
cdate: 1,512,487,850,795
confidence: 3: The reviewer is fairly confident that the evaluation is correct
rating: 5: Marginally below acceptance threshold
review: The authors approach the task of labeling histology images with just a single global label, with promising results on two different data sets. This is of high relevance given the difficulty in obtaining expert annotated data. At the same time the key elements of the presented approach remain identical to those in a pre...
title: Interesting application and results with incremental innovation on exististing work
filter_source: iclr_2018
passed_filter: true
failed_rule: null
llm_generated_flag: false
llm_generated_reasoning: The review exhibits a personal tone, specific critiques of terminology, and direct references to older NLP methodologies (MEMMs, Rosenfeld's models) that suggest human expertise.
(all remaining assessment/rubric columns: null)
forum: ryserbZR-
paper_title: Classification and Disease Localization in Histopathology Using Only Global Labels: A Weakly-Supervised Approach
review_id: SkWQLvebf
cdate: 1,512,236,569,311
confidence: 3: The reviewer is fairly confident that the evaluation is correct
rating: 6: Marginally above acceptance threshold
review: This paper proposes a deep learning (DL) approach (pre-trained CNNs) to the analysis of histopathological images for disease localization. It correctly identifies the problem that DL usually requires large image databases to provide competitive results, while annotated histopathological data repositories are costly to ...
title: Down-to-earth practical application of DL in a medico-clinical context
filter_source: iclr_2018
passed_filter: true
failed_rule: null
llm_generated_flag: false
llm_generated_reasoning: The review uses natural academic phrasing, provides highly specific and dated citations, and lacks the common linguistic markers or formulaic structure associated with LLMs.
(all remaining assessment/rubric columns: null)
forum: ryserbZR-
paper_title: Classification and Disease Localization in Histopathology Using Only Global Labels: A Weakly-Supervised Approach
review_id: S1O8uhkxf
cdate: 1,511,143,504,077
confidence: 4: The reviewer is confident but not absolutely certain that the evaluation is correct
rating: 5: Marginally below acceptance threshold
review: This paper describes a semi-supervised method to classify and segment WSI histological images that are only labeled at the whole image level. Images are tiled and tiles are sampled and encoded into a feature vector via a ResNET-50 pretrained on ImageNET. A 1D convolutional layer followed by a min-max layer and 2 fully ...
title: Interesting MIL approach, lacks technical depth for this conference
filter_source: iclr_2018
passed_filter: true
failed_rule: null
llm_generated_flag: false
llm_generated_reasoning: The review uses natural, technical language specific to the paper's content and includes a personalized revision section that reflects engagement with author responses.
(all remaining assessment/rubric columns: null)
forum: rypT3fb0b
paper_title: LEARNING TO SHARE: SIMULTANEOUS PARAMETER TYING AND SPARSIFICATION IN DEEP LEARNING
review_id: rkPj2vjeM
cdate: 1,511,910,560,326
confidence: 5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature
rating: 8: Top 50% of accepted papers, clear accept
review: SUMMARY The paper proposes to apply GrOWL regularization to the tensors of parameters between each pair of layers. The groups are composed of all coefficients associated to inputs coming from the same neuron in the previous layer. The proposed algorithm is a simple proximal gradient algorithm using the proximal operato...
title: A nice paper that would be more compelling with a comparison with the group elastic net
filter_source: iclr_2018
passed_filter: true
failed_rule: null
llm_generated_flag: false
llm_generated_reasoning: The review is concise and lacks the typical stylistic markers, flowery language, or formulaic structure common in LLM-generated text.
(all remaining assessment/rubric columns: null)
forum: rypT3fb0b
paper_title: LEARNING TO SHARE: SIMULTANEOUS PARAMETER TYING AND SPARSIFICATION IN DEEP LEARNING
review_id: H1fONf_gG
cdate: 1,511,691,370,290
confidence: 3: The reviewer is fairly confident that the evaluation is correct
rating: 6: Marginally above acceptance threshold
review: This paper proposes to apply a group ordered weighted l1 (GrOWL) regularization term to promote sparsity and parameter sharing in training deep neural networks and hence compress the model to a light version. The GrOWL regularizer (Oswal et al., 2016) penalizes the sorted l2 norms of the rows in a parameter matrix wit...
title: Unclear motivation and insufficient experimental results
filter_source: iclr_2018
passed_filter: true
failed_rule: null
llm_generated_flag: false
llm_generated_reasoning: The review uses natural, direct language and includes specific page references and minor typos, which are characteristic of human-written critiques.
(all remaining assessment/rubric columns: null)
rypT3fb0b
LEARNING TO SHARE: SIMULTANEOUS PARAMETER TYING AND SPARSIFICATION IN DEEP LEARNING
rkJfM20eG
1,512,124,935,161
4: The reviewer is confident but not absolutely certain that the evaluation is correct
7: Good paper, accept
The authors propose to use the group ordered weighted l1 regulariser (GrOWL) combined with clustering of correlated features to select and tie parameters, leading to a sparser representation with a reduced parameter space. They apply the proposed method two well-known benchmark datasets under a fully connected and a co...
An incremental improvement for compressing deep neural networks
iclr_2018
true
false
The review uses informal phrasing like 'I haven't checked all the details' and lacks the typical formulaic structure and verbose vocabulary of LLM-generated text.
rylejExC-
Stochastic Training of Graph Convolutional Networks
rJA4cxJlf
1511094837764
4: The reviewer is confident but not absolutely certain that the evaluation is correct
3: Clear rejection
This paper proposes a new training method for graph convolutional networks. The experimental results look interesting. However, this paper has some issues. This paper is hard to read. There are some undefined or overloaded notations. For instance, sigma is used with two different meanings: an activation function and va...
A new training method of graph convolutional networks. Good but there are some errors.
iclr_2018
true
false
The review uses natural academic phrasing, includes specific citations like [A] correctly integrated into the critique, and lacks the typical formulaic patterns or buzzwords associated with LLM-generated reviews.
rylejExC-
Stochastic Training of Graph Convolutional Networks
S1g5R5Ogz
1511726728086
3: The reviewer is fairly confident that the evaluation is correct
7: Good paper, accept
Existing training algorithms for graph convolutional nets are slow. This paper develops novel methods, with a nice mix of theory, practicalities, and experiments. Let me caution that I am not familiar with convolutional nets applied to graph data. Clearly, the existing best algorithm, neighborhood sampling, is sl...
Existing training algorithms for graph convolutional nets are slow. This paper develops novel methods, with a nice mix of theory, practicalities and experiments.
iclr_2018
true
false
The review uses natural, direct language and includes a specific admission of limited familiarity with a sub-field, which is uncharacteristic of standard LLM templates.
rylSzl-R-
On Unifying Deep Generative Models
SJAtVYteG
1511785605896
4: The reviewer is confident but not absolutely certain that the evaluation is correct
6: Marginally above acceptance threshold
The authors develop a framework interpreting GAN algorithms as performing a form of variational inference on a generative model reconstructing an indicator variable of whether a sample is from the true or the generative data distribution. Starting from the ‘non-saturated’ GAN loss, the key result (Lemma 1) shows that GANs...
Review of On Unifying Deep Generative Models
iclr_2018
true
false
The review uses a natural, critical tone with specific references to equations and figures without the stylistic markers or formulaic structure typical of LLM output.
rylSzl-R-
On Unifying Deep Generative Models
SJIHn0tlz
1511808061836
3: The reviewer is fairly confident that the evaluation is correct
7: Good paper, accept
The paper provides a symmetric modeling perspective ("generation" and "inference" are just different naming, the underlying techniques can be exchanged) to unify existing deep generative models, particularly VAEs and GANs. Someone had to formally do this, and the paper did a good job in describing the new view (by borr...
Good paper
iclr_2018
true
false
The review uses natural, direct language and lacks the typical verbose, repetitive, and overly polite markers often seen in LLM-generated reviews.
rylSzl-R-
On Unifying Deep Generative Models
BkONJetlM
1511747375839
4: The reviewer is confident but not absolutely certain that the evaluation is correct
7: Good paper, accept
Update 1/11/18: I'm happy with the comments from the authors. I think the explanation of non-saturating vs saturating objective is nice, and I've increased the score. Note though: I absolutely expect a revision at camera-ready if the paper gets accepted (we did not get one). Original review: The paper is overall a g...
Overall good perspective on GANs that connect them to other variational methods
iclr_2018
true
false
The review uses natural language and specific references to tables and figures without the formulaic or repetitive structure common in LLM-generated text.
ryk77mbRZ
Noise-Based Regularizers for Recurrent Neural Networks
Syr4moYxG
1511793452781
5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature
2: Strong rejection
The authors of the paper advocate injecting noise into the activations of recurrent networks for regularisation. This is done by replacing the deterministic units with stochastic ones. The paper has several issues with respect to the method and related work. - The paper needs to mention [Graves 2011], which is one ...
Severe issues with prior work and justification
iclr_2018
true
false
The review is concise and lacks the typical verbose, formulaic, and overly polite structure often found in LLM-generated text.
ryk77mbRZ
Noise-Based Regularizers for Recurrent Neural Networks
Sk6_QZcgM
1511818100676
4: The reviewer is confident but not absolutely certain that the evaluation is correct
5: Marginally below acceptance threshold
The RNN transition function is: h_{t+1} = f(h_t, x_t). This paper proposes using a stochastic transition function instead of a deterministic one, i.e. h_{t+1} \sim expfam(mean = f(h_t, x_t), gamma), where expfam denotes a distribution from the exponential family. The experimental results consider text modeling (evaluating on...
Sample hidden states of an RNN instead of predicting them deterministically. Interesting idea that is insufficiently explored.
iclr_2018
true
false
The review uses informal human-like phrasing such as 'hammer in need of a nail' and 'not sure how much I buy' and lacks LLM-typical stylistic markers.
ryk77mbRZ
Noise-Based Regularizers for Recurrent Neural Networks
ry22qzclM
1511824052525
3: The reviewer is fairly confident that the evaluation is correct
3: Clear rejection
In order to regularize RNNs, the paper suggests injecting noise into hidden units. More specifically, the suggested technique resembles optimizing the expected log-likelihood under the hidden states' prior, a lower bound on the data log-likelihood. The described approach seems to be simple. Yet, several details are unc...
Running an RNN for one step from noisy hidden states is a valid regularizer
iclr_2018
true
false
The review lacks the typical flowery language and formulaic structure of an LLM, and it contains very specific technical critiques that show a deep understanding of the paper's mathematical details.
ryjw_eAaZ
Unsupervised Deep Structure Learning by Recursive Dependency Analysis
ryilanteG
1511800055530
4: The reviewer is confident but not absolutely certain that the evaluation is correct
4: Ok but not good enough - rejection
The paper proposes an unsupervised structure learning method for deep neural networks. It first constructs a fully visible DAG by learning from data, and decomposes variables into autonomous sets. Then latent variables are introduced and a stochastic inverse is generated. Later a deep neural network structure is construc...
There is a major technical flaw in this paper. And some experiment settings are not convincing.
iclr_2018
true
false
The review uses natural, technical language and identifies specific paper components without the stylistic markers or generic praise common in LLM-generated text.
ryjw_eAaZ
Unsupervised Deep Structure Learning by Recursive Dependency Analysis
SJGyhgwZz
1512668122231
3: The reviewer is fairly confident that the evaluation is correct
5: Marginally below acceptance threshold
The authors propose a deep architecture learning algorithm in an unsupervised fashion. By finding conditional independencies in the input as a Bayesian network and using a stochastic inverse mechanism that preserves the conditional dependencies, they suggest an optimal structure of fully connected hidden layers (depth, number...
Promising method, inconclusive results
iclr_2018
true
false
The review is concise and lacks the typical flowery language, repetitive structure, or over-extended polite phrasing characteristic of LLM-generated text.
ryj38zWRb
Optimizing the Latent Space of Generative Networks
SynXdTKeM
1511802915972
4: The reviewer is confident but not absolutely certain that the evaluation is correct
6: Marginally above acceptance threshold
In this paper, the authors propose a new architecture for generative neural networks. Rather than the typical adversarial training procedure used to train a generator and a discriminator, the authors train a generator only. To ensure that noise vectors get mapped to images from the target distribution, the generator is...
This paper is a potentially interesting alternative training procedure to GANs.
iclr_2018
true
false
The review exhibits expert-level domain knowledge, specific citations from 2017, and a natural human argumentative structure without typical LLM buzzwords or formulaic positivity.
ryj38zWRb
Optimizing the Latent Space of Generative Networks
BkILtntlz
1511799117545
4: The reviewer is confident but not absolutely certain that the evaluation is correct
4: Ok but not good enough - rejection
Summary: The authors observe that the success of GANs can be attributed to two factors: leveraging the inductive bias of deep CNNs and the adversarial training protocol. In order to disentangle the factors of success, they propose to eliminate the adversarial training protocol while maintaining the first factor. Th...
OPTIMIZING THE LATENT SPACE OF GENERATIVE NETWORKS
iclr_2018
true
false
The review lacks common LLM markers, such as a polite or overly structured tone, and uses human-like critical language without formulaic buzzwords.
ryj38zWRb
Optimizing the Latent Space of Generative Networks
HyE2oHixz
1511902124122
3: The reviewer is fairly confident that the evaluation is correct
6: Marginally above acceptance threshold
The paper is well written and easy to follow. I find the results very interesting. In particular, the paper shows that many properties of GAN (or generative) models (e.g. interpolation, feature arithmetic) are in great part a result of the inductive bias of deep CNNs and can be obtained with simple reconstruction losse...
Official review
iclr_2018
true
false
The review exhibits a natural human tone, specific domain-specific citations, and lacks the formulaic structure or buzzwords typically associated with LLM-generated text.
ryj0790hb
Incremental Learning through Deep Adaptation
HyK6w83xM
1511970753360
4: The reviewer is confident but not absolutely certain that the evaluation is correct
5: Marginally below acceptance threshold
----------------- Summary ----------------- The paper tackles the problem of task-incremental learning using deep networks. It devises an architecture and a training procedure aiming for some desirable properties; a) it does not require retraining using previous tasks’ data, b) the number of network parameters grows on...
Extensive experiments but inconclusive for the main message (task-incremental learning)
iclr_2018
true
false
The language used is direct and lacks the characteristic repetitive structures or buzzwords common in LLM-generated reviews.
ryj0790hb
Incremental Learning through Deep Adaptation
BJJTve9gM
1511815094629
4: The reviewer is confident but not absolutely certain that the evaluation is correct
6: Marginally above acceptance threshold
This paper proposes to adapt convnet representations to new tasks while avoiding catastrophic forgetting by learning a per-task “controller” specifying weightings of the convolutional filters throughout the network while keeping the filters themselves fixed. Pros: The proposed approach is novel and broadly applicabl...
-
iclr_2018
true
false
The review uses natural, first-person academic phrasing and lacks the formulaic structure or repetitive buzzwords common in LLM-generated text.
ryj0790hb
Incremental Learning through Deep Adaptation
HyOveS5gf
1511833696471
4: The reviewer is confident but not absolutely certain that the evaluation is correct
4: Ok but not good enough - rejection
This paper proposes a new idea of using controller modules for incremental learning. Instead of finetuning the whole network, only the added parameters of the controller modules are learned while the output of the old task stays the same. Experiments are conducted on multiple image classification datasets. I found the id...
Interesting idea but missing some simple baselines.
iclr_2018
true
false
The review lacks typical LLM buzzwords and follows a natural, human-like structure with specific, contextual critiques of equations and tables.
ryiAv2xAZ
Training Confidence-calibrated Classifiers for Detecting Out-of-Distribution Samples
By_HQdCeG
1512108863682
3: The reviewer is fairly confident that the evaluation is correct
6: Marginally above acceptance threshold
This paper proposes a new method of detecting in vs. out of distribution samples. Most existing approaches for this deal with detecting out-of-distribution samples at *test time* by augmenting input data and/or temperature scaling the softmax and applying a simple classification rule based on the output. This paper proposes a...
simple, effective method, some discussion/understanding missing
iclr_2018
true
false
The review contains a mid-sentence cutoff in the summary section, which is a common human clerical error and suggests a lack of LLM post-processing or generation.
ryiAv2xAZ
Training Confidence-calibrated Classifiers for Detecting Out-of-Distribution Samples
B1ja8-9lf
1511818946645
4: The reviewer is confident but not absolutely certain that the evaluation is correct
6: Marginally above acceptance threshold
I have read the authors' reply. In response to their comprehensive reply and feedback, I upgrade my score to 6. ----------------------------- This paper presents a novel approach to calibrating classifiers for out-of-distribution samples. In addition to the original cross-entropy loss, the “confidence loss” was prop...
Interesting idea, but not yet convinced
iclr_2018
true
false
The review lacks typical LLM buzzwords and follows a concise, human-like structure with specific technical inquiries.
ryiAv2xAZ
Training Confidence-calibrated Classifiers for Detecting Out-of-Distribution Samples
B1klq-5lG
1511819751148
3: The reviewer is fairly confident that the evaluation is correct
7: Good paper, accept
The manuscript proposes a generative approach to detect which samples are within vs. out of the sample space of the training distribution. This distribution is used to adjust the classifier so it makes confident predictions within sample, and less confident predictions out of sample, where presumably it is prone to mis...
interesting idea for robust classification
iclr_2018
true
false
The review lacks typical LLM buzzwords and follows a concise, direct, and human-like argumentative structure.
End of preview.
