Dataset columns (name, type, and observed length range or number of distinct values):

paper_id            string  (length 19 – 21)
paper_title         string  (length 8 – 170)
paper_abstract      string  (length 8 – 5.01k)
paper_acceptance    string  (18 distinct values)
meta_review         string  (length 29 – 10k)
label               string  (3 distinct values)
review_ids          list
review_writers      list
review_contents     list
review_ratings      list
review_confidences  list
review_reply_tos    list
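A minimal sketch of working with one record under this schema. The record below is abbreviated from the first row (iclr_2018_BydjJte0-); the -1 entries in `review_ratings` / `review_confidences` are the sentinel used for non-review comments (author or public replies), as the rows below show.

```python
def mean_review_rating(record):
    """Average rating over actual reviews, skipping the -1 sentinel
    used for entries that are replies rather than scored reviews."""
    ratings = [r for r in record["review_ratings"] if r != -1]
    return sum(ratings) / len(ratings) if ratings else None

# Abbreviated record from the first row of the dataset.
record = {
    "paper_id": "iclr_2018_BydjJte0-",
    "label": "test",
    "review_ratings": [-1, 7, 7, 5, -1, -1, -1, -1, -1, -1, -1],
    "review_confidences": [-1, 3, 4, 4, -1, -1, -1, -1, -1, -1, -1],
}

print(mean_review_rating(record))  # averages the three scored reviews (7, 7, 5)
```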
iclr_2018_BydjJte0-
Towards Reverse-Engineering Black-Box Neural Networks
Many deployed learned models are black boxes: given input, returns output. Internal information about the model, such as the architecture, optimisation procedure, or training data, is not disclosed explicitly as it might contain proprietary information or make the system more vulnerable. This work shows that such attri...
accepted-poster-papers
Novel way of analyzing neural networks to predict NN attributes such as architecture, training method, batch size etc. And the method works surprisingly well on MNIST and ImageNet.
test
[ "HJ3gesPSf", "B1h4qp9xz", "rJGK3urgz", "Hyqnu-clf", "BJpZGOoQM", "Sk7ej-KQM", "SyGyjWKmz", "HkfR9ZtmM", "r1GocWtmf", "H1W59-F7z", "rye_qZtXG" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "author" ]
[ "Thanks to the authors for the extensive response with further results and analysis. Table 7 was very helpful in understanding how similar/different various architectures are, and the expanded table 3 was helpful in evaluating kennen-o in the extrapolation setting. I didn't find the results in section 4.3 to be con...
[ -1, 7, 7, 5, -1, -1, -1, -1, -1, -1, -1 ]
[ -1, 3, 4, 4, -1, -1, -1, -1, -1, -1, -1 ]
[ "HkfR9ZtmM", "iclr_2018_BydjJte0-", "iclr_2018_BydjJte0-", "iclr_2018_BydjJte0-", "iclr_2018_BydjJte0-", "r1GocWtmf", "H1W59-F7z", "rye_qZtXG", "rJGK3urgz", "Hyqnu-clf", "B1h4qp9xz" ]
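The parallel `review_ids` / `review_reply_tos` lists encode the comment thread: an entry whose reply_to equals the paper_id is a top-level review or comment, and any other entry replies to the comment it names. A sketch reconstructing that tree for the first record (ids copied from the row above):

```python
# Rebuild the discussion tree from the parallel id / reply_to lists.
paper_id = "iclr_2018_BydjJte0-"
review_ids = ["HJ3gesPSf", "B1h4qp9xz", "rJGK3urgz", "Hyqnu-clf",
              "BJpZGOoQM", "Sk7ej-KQM", "SyGyjWKmz", "HkfR9ZtmM",
              "r1GocWtmf", "H1W59-F7z", "rye_qZtXG"]
reply_tos = ["HkfR9ZtmM", paper_id, paper_id, paper_id, paper_id,
             "r1GocWtmf", "H1W59-F7z", "rye_qZtXG", "rJGK3urgz",
             "Hyqnu-clf", "B1h4qp9xz"]

children = {}
for rid, parent in zip(review_ids, reply_tos):
    children.setdefault(parent, []).append(rid)

top_level = children[paper_id]  # comments made directly on the paper
print(top_level)
```

For this record the four entries replying directly to the paper are the three official reviews plus one author post; everything else hangs off another comment.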
iclr_2018_B1J_rgWRW
Understanding Deep Neural Networks with Rectified Linear Units
In this paper we investigate the family of functions representable by deep neural networks (DNN) with rectified linear units (ReLU). We give an algorithm to train a ReLU DNN with one hidden layer to {\em global optimality} with runtime polynomial in the data size albeit exponential in the input dimension. Further, we i...
accepted-poster-papers
Theoretical analysis and understanding of DNNs is a crucial area for ML community. This paper studies characteristics of the relu DNNs and makes several important contributions.
train
[ "rJUiN3DeM", "BkQ3IWcxM", "Sy66Z9sgG" ]
[ "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "This paper presents several theoretical results regarding the expressiveness and learnability of ReLU-activated deep neural networks. I summarize the main results as below:\n\n(1) Any piece-wise linear function can be represented by a ReLU-acteivated DNN. Any smooth function can be approximated by such networks.\n...
[ 6, 6, 7 ]
[ 4, 5, 4 ]
[ "iclr_2018_B1J_rgWRW", "iclr_2018_B1J_rgWRW", "iclr_2018_B1J_rgWRW" ]
iclr_2018_rytNfI1AZ
Training wide residual networks for deployment using a single bit for each weight
For fast and energy-efficient deployment of trained deep neural networks on resource-constrained embedded hardware, each learned weight parameter should ideally be represented and stored using a single bit. Error-rates usually increase when this requirement is imposed. Here, we report large improvements in error rate...
accepted-poster-papers
The paper presents a way of training 1bit wide resnet to reduce the model footprint while maintaining good performance. The revisions added more comparisons and discussions, which make it much better. Overall, the committee feels this work will bring value to the conference.
train
[ "ByDm13EBG", "SJ2dCsEHz", "SkaoIrVHz", "HkvE8wI4G", "BJyxkbFxz", "SkGtH2Kxf", "HJ0pVRqxM", "rymXcljQz", "rkAgKpHQG", "rkzl-zvXG", "ryrbVpBmG" ]
[ "author", "official_reviewer", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author" ]
[ "Good suggestion - we will do that at the next opportunity for revision.", "Ah, that is an impressive speedup! Thanks! (I would suggest citing this in the paper since it serves nicely at making the motivation of using 1bit weights obvious. But I leave that decision up to you).", "Regarding speedup on GPUs: we d...
[ -1, -1, -1, -1, 6, 6, 6, -1, -1, -1, -1 ]
[ -1, -1, -1, -1, 4, 3, 4, -1, -1, -1, -1 ]
[ "SJ2dCsEHz", "SkaoIrVHz", "HkvE8wI4G", "rkzl-zvXG", "iclr_2018_rytNfI1AZ", "iclr_2018_rytNfI1AZ", "iclr_2018_rytNfI1AZ", "iclr_2018_rytNfI1AZ", "BJyxkbFxz", "SkGtH2Kxf", "HJ0pVRqxM" ]
iclr_2018_HyzbhfWRW
Learn to Pay Attention
We propose an end-to-end-trainable attention module for convolutional neural network (CNN) architectures built for image classification. The module takes as input the 2D feature vector maps which form the intermediate representations of the input image at different stages in the CNN pipeline, and outputs a 2D matrix of...
accepted-poster-papers
quality: interesting idea to train an end-to-end attention together with CNNs and solid experiments to justify the benefits of using such attentions. clarity: the presentation has been updated according to review comments and improved a lot significance: highly relevant topic, good improvements over other methods
train
[ "ryqX5FFlG", "rJ7NDl9xM", "ry-5adjxG", "rkGzeIaXz", "Sk9_cra7z", "Bkdd8avXM", "BJv3NaIzz", "BkG4XT8GM", "BJ0XfTUGG", "Skdm7jJ-G", "SkJcqL5lz", "r1-vztUlz", "SJD74cJxM", "Bk6Jkhnyf", "HyKA1qdkf", "rJN1a7CCZ", "B1eI20uCb" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "public", "public", "author", "public", "author", "public", "author", "public" ]
[ "This paper proposes a network with the standard soft-attention mechanism for classification tasks, where the global feature is used to attend on multiple feature maps of local features at different intermediate layers of CNN. The attended features at different feature maps are then used to predict the final classe...
[ 5, 6, 6, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1 ]
[ 4, 4, 4, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1 ]
[ "iclr_2018_HyzbhfWRW", "iclr_2018_HyzbhfWRW", "iclr_2018_HyzbhfWRW", "iclr_2018_HyzbhfWRW", "SkJcqL5lz", "iclr_2018_HyzbhfWRW", "ryqX5FFlG", "rJ7NDl9xM", "ry-5adjxG", "ry-5adjxG", "iclr_2018_HyzbhfWRW", "SJD74cJxM", "iclr_2018_HyzbhfWRW", "HyKA1qdkf", "iclr_2018_HyzbhfWRW", "B1eI20uCb"...
iclr_2018_Hko85plCW
Monotonic Chunkwise Attention
Sequence-to-sequence models with soft attention have been successfully applied to a wide variety of problems, but their decoding process incurs a quadratic time and space cost and is inapplicable to real-time sequence transduction. To address these issues, we propose Monotonic Chunkwise Attention (MoChA), which adaptiv...
accepted-poster-papers
This clearly written paper describes a simple extension to hard monotonic attention -- the addition of a soft attention mechanism that operates over a fixed length window of inputs that ends at the point selected by the hard attention mechanism. Experiments on speech recognition (WSJ) and on a document summarization t...
train
[ "HJS8P6Vgf", "ryr2L0FeG", "H1J9s-cef", "S1hbW4Pmz", "Byr5xd-mG", "rJ5me_-7G", "Hkx_jvZmM" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author" ]
[ "This paper proposes a small modification to the monotonic attention in [1] by adding a soft attention to the segment predicted by the monotonic attention. The paper is very well written and easy to follow. The experiments are also convincing. Here are a few suggestions and questions to make the paper stronger.\n\n...
[ 7, 6, 8, -1, -1, -1, -1 ]
[ 5, 4, 4, -1, -1, -1, -1 ]
[ "iclr_2018_Hko85plCW", "iclr_2018_Hko85plCW", "iclr_2018_Hko85plCW", "rJ5me_-7G", "HJS8P6Vgf", "ryr2L0FeG", "H1J9s-cef" ]
iclr_2018_BJ_UL-k0b
Recasting Gradient-Based Meta-Learning as Hierarchical Bayes
Meta-learning allows an intelligent agent to leverage prior learning episodes as a basis for quickly improving performance on a novel task. Bayesian hierarchical modeling provides a theoretical framework for formalizing meta-learning as inference for a set of parameters that are shared across tasks. Here, we reformulat...
accepted-poster-papers
Pros: + The paper introduces a non-trivial interpretation of MAML as hierarchical Bayesian learning and uses this perspective to develop a new variation of MAML that accounts for curvature information. Cons: - Relatively small gains over MAML on mini-Imagenet. - No direct comparison against the state-of-the-art on min...
train
[ "SycLo2FxG", "SJPGTRSEz", "HJiZYRT7z", "Hkv54AYeM", "HJBzB65xf", "rJBEH-fVf", "H1hUKApmG", "rJ1GD0T7f", "SkauLRaXz", "HkYWBCTQz", "r1aR9l5lG", "rJOy7ZYxG" ]
[ "official_reviewer", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "public" ]
[ "Summary\nThe paper presents an interesting view on the recently proposed MAML formulation of meta-learning (Finn et al). The main contribution is a) insight into the connection between the MAML procedure and MAP estimation in an equivalent linear hierarchical Bayes model with explicit priors, b) insight into the c...
[ 6, -1, -1, 7, 7, -1, -1, -1, -1, -1, -1, -1 ]
[ 3, -1, -1, 3, 3, -1, -1, -1, -1, -1, -1, -1 ]
[ "iclr_2018_BJ_UL-k0b", "rJBEH-fVf", "SycLo2FxG", "iclr_2018_BJ_UL-k0b", "iclr_2018_BJ_UL-k0b", "SkauLRaXz", "SycLo2FxG", "Hkv54AYeM", "HJBzB65xf", "iclr_2018_BJ_UL-k0b", "rJOy7ZYxG", "iclr_2018_BJ_UL-k0b" ]
iclr_2018_B1Yy1BxCZ
Don't Decay the Learning Rate, Increase the Batch Size
It is common practice to decay the learning rate. Here we show one can usually obtain the same learning curve on both training and test sets by instead increasing the batch size during training. This procedure is successful for stochastic gradient descent (SGD), SGD with momentum, Nesterov momentum, and Adam. It reache...
accepted-poster-papers
Pros: + Nice demonstration of the equivalence between scaling the learning rate and increasing the batch size in SGD optimization. Cons: - While reporting convergence as a function of number of parameter updates is consistent, the paper would be more compelling if wall-clock times were given in some cases, as that wil...
train
[ "SyoR0j7BG", "H17TutI4G", "r1SNNxFlf", "B1i1vxqgz", "SJJrhg5lf", "BkCOZEamG", "SJdPl-KMf", "HkYkRxtzz", "B1FJsgFfG", "BJtuH2Qlf", "rJIErsI1M" ]
[ "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "public" ]
[ "We apologize for any confusion. We do not claim that most big data problems can be solved using single machine SGD; our experiments use distributed (but synchronous) SGD. We will edit the text to clarify that the incentive to use asynchronous training is reduced when the synchronous batch size can be scaled to a s...
[ -1, -1, 6, 7, 6, -1, -1, -1, -1, -1, -1 ]
[ -1, -1, 4, 4, 4, -1, -1, -1, -1, -1, -1 ]
[ "H17TutI4G", "HkYkRxtzz", "iclr_2018_B1Yy1BxCZ", "iclr_2018_B1Yy1BxCZ", "iclr_2018_B1Yy1BxCZ", "iclr_2018_B1Yy1BxCZ", "r1SNNxFlf", "SJJrhg5lf", "B1i1vxqgz", "rJIErsI1M", "iclr_2018_B1Yy1BxCZ" ]
iclr_2018_HyMTkQZAb
Kronecker-factored Curvature Approximations for Recurrent Neural Networks
Kronecker-factor Approximate Curvature (Martens & Grosse, 2015) (K-FAC) is a 2nd-order optimization method which has been shown to give state-of-the-art performance on large-scale neural network optimization tasks (Ba et al., 2017). It is based on an approximation to the Fisher information matrix (FIM) that makes assu...
accepted-poster-papers
This clearly written paper extends the Kronecker-factored approximate curvature optimizer to recurrent networks. Experiments on Penn Treebank language modeling and training of differentiable neural computers on a repeated copy task show that the proposed K-FAC optimizers are stronger than SGD, Adam, and Adam with laye...
train
[ "B1IfZi4mM", "rk1lre5xG", "HkH1mlrVz", "ryIhKEFxM", "Sk-wrg9gM", "Hy60-o47f", "SyubZTI7z", "Syd3NrumM", "BkyTlaL7G", "SyxdyRI7G", "S18lVjVQz", "rJ_n-jNQz", "S1RLEw4xf" ]
[ "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "author" ]
[ "Thank you for your detailed comments. We will address each of your major points in the sections below, followed by your remaining questions/comments.\n\n\nEmpirical / theoretical analysis of approximation quality\n=========================\n\nA detailed discussion, empirical study, and analysis of the main approx...
[ -1, 7, -1, 5, 7, -1, -1, -1, -1, -1, -1, -1, -1 ]
[ -1, 4, -1, 4, 3, -1, -1, -1, -1, -1, -1, -1, -1 ]
[ "ryIhKEFxM", "iclr_2018_HyMTkQZAb", "SyubZTI7z", "iclr_2018_HyMTkQZAb", "iclr_2018_HyMTkQZAb", "rJ_n-jNQz", "BkyTlaL7G", "iclr_2018_HyMTkQZAb", "rk1lre5xG", "iclr_2018_HyMTkQZAb", "Sk-wrg9gM", "B1IfZi4mM", "iclr_2018_HyMTkQZAb" ]
iclr_2018_ByeqORgAW
Proximal Backpropagation
We propose proximal backpropagation (ProxProp) as a novel algorithm that takes implicit instead of explicit gradient steps to update the network parameters during neural network training. Our algorithm is motivated by the step size limitation of explicit gradient descent, which poses an impediment for optimization. Pro...
accepted-poster-papers
Pros: + Clear, well-written paper that tackles an interesting problem. + Interesting potential connections to other approaches in the literature such as Carreira-Perpiñán and Wang, 2014 and Taylor et al., 2016. + Paper shows good understanding of the literature, has serious experiments, and does not overstate the resul...
train
[ "BkX4GPilz", "HJGe_n84f", "rJWf9IOgM", "rJX7hXKgG", "rk6DsGhWM", "rJ42Kf2-z", "By3PKf2-f", "B1vCOGhWG" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author" ]
[ "Summary:\n\nUsing a penalty formulation of backpropagation introduced in a paper of Carreira-Perpinan and Wang (2014), the current submission proposes to minimize this formulation using explicit step for the update of the variables corresponding to the backward pass, but implicit steps for the update of the parame...
[ 6, -1, 5, 7, -1, -1, -1, -1 ]
[ 4, -1, 4, 4, -1, -1, -1, -1 ]
[ "iclr_2018_ByeqORgAW", "B1vCOGhWG", "iclr_2018_ByeqORgAW", "iclr_2018_ByeqORgAW", "iclr_2018_ByeqORgAW", "rJWf9IOgM", "rJX7hXKgG", "BkX4GPilz" ]
iclr_2018_rkLyJl-0-
Neumann Optimizer: A Practical Optimization Algorithm for Deep Neural Networks
Progress in deep learning is slowed by the days or weeks it takes to train large models. The natural solution of using more hardware is limited by diminishing returns, and leads to inefficient use of additional resources. In this paper, we present a large batch, stochastic optimization algorithm that is both faster tha...
accepted-poster-papers
Pros: + Clearly written paper. + Easily implemented algorithm that appears to have excellent scaling properties and can even improve on validation error in some cases. + Thorough evaluation against the state of the art. Cons: - No theoretical guarantees for the algorithm. This paper belongs in ICLR if there is enough...
train
[ "ByPYAMtgG", "Hy5t5WIxG", "BkEmOSvef", "SyVWZD6QM", "ByNx4IpXf", "Hkg9QITXG", "BJXgmLamM", "SkFuMIpXM", "ByNdWUp7M", "HJLT-LpXG", "rJc1-IpmG", "rkXFpDlzf", "HJ1hYPI0-" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "public", "public" ]
[ "The paper proposes a new algorithm, where they claim to use Hessian implicitly and are using a motivation from power-series. In general, I like the paper.\n\nTo me, Algorithm 1 looks like some kind of proximal-point type algorithm. Algorithm 2 is more heuristic approach, with a couple of parameters to tune it. Gi...
[ 6, 6, 6, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1 ]
[ 4, 3, 3, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1 ]
[ "iclr_2018_rkLyJl-0-", "iclr_2018_rkLyJl-0-", "iclr_2018_rkLyJl-0-", "HJLT-LpXG", "iclr_2018_rkLyJl-0-", "iclr_2018_rkLyJl-0-", "HJ1hYPI0-", "Hy5t5WIxG", "ByPYAMtgG", "BkEmOSvef", "rkXFpDlzf", "iclr_2018_rkLyJl-0-", "iclr_2018_rkLyJl-0-" ]
iclr_2018_rJ33wwxRb
SGD Learns Over-parameterized Networks that Provably Generalize on Linearly Separable Data
Neural networks exhibit good generalization behavior in the over-parameterized regime, where the number of network parameters exceeds the number of observations. Nonetheless, current generalization bounds for neural networks fail to explain this phenomenon. In an attempt to bridge this gap, we s...
accepted-poster-papers
This is a high quality paper, clearly written, highly original, and clearly significant. The paper gives a complete analysis of SGD in a two layer network where the second layer does not undergo training and the data are linearly separable. Experimental results confirm the theoretical suggestion that the second layer ...
train
[ "HJ9LXfvlz", "rJV8Y8ulf", "BJBRHQkWf", "Bkn9F8dfz" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author" ]
[ "Paper studies an interesting phenomenon of overparameterised models being able to learn well-generalising solutions. It focuses on a setting with three crucial simplifications:\n- data is linearly separable\n- model is 1-hidden layer feed forward network with homogenous activations\n- **only input-hidden layer wei...
[ 7, 7, 8, -1 ]
[ 3, 3, 4, -1 ]
[ "iclr_2018_rJ33wwxRb", "iclr_2018_rJ33wwxRb", "iclr_2018_rJ33wwxRb", "iclr_2018_rJ33wwxRb" ]
iclr_2018_Skz_WfbCZ
A PAC-Bayesian Approach to Spectrally-Normalized Margin Bounds for Neural Networks
We present a generalization bound for feedforward neural networks in terms of the product of the spectral norm of the layers and the Frobenius norm of the weights. The generalization bound is derived using a PAC-Bayes analysis.
accepted-poster-papers
This is a strong paper presenting a very clean proof of a result that is similar, though now incomparable to one due to Bartlett et al. These bounds (and Bartlett's) are among the most promising norm-based bounds for NNs. I would simply add that the citation of Dziugaite and Roy (2017) could be improved. Their work al...
train
[ "SJehrIf1f", "rkn5xHFlf", "Hyi_2MTxz" ]
[ "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "The authors prove a generalization guarantee for deep\nneural networks with ReLU activations, in terms of margins of the\nclassifications and norms of the weight matrices. They compare this\nbound with a similar recent bound proved by Bartlett, et al. While,\nstrictly speaking, the bounds are incomparable in str...
[ 9, 6, 7 ]
[ 4, 3, 4 ]
[ "iclr_2018_Skz_WfbCZ", "iclr_2018_Skz_WfbCZ", "iclr_2018_Skz_WfbCZ" ]
iclr_2018_r1iuQjxCZ
On the importance of single directions for generalization
Despite their ability to memorize large datasets, deep neural networks often achieve good generalization performance. However, the differences between the learned solutions of networks which generalize and those which do not remain unclear. Additionally, the tuning properties of single directions (defined as the activa...
accepted-poster-papers
The paper contributes to a body of empirical work towards understanding generalization in deep learning. They do this through a battery of experiments studying "single directions" or selectivity of small groups of neurons. The reviewers that have actively participated agree that the revision is of high quality, impact...
train
[ "H1gh0U_lG", "SyGCUouxf", "r1On1W5xf", "H1qU4p5GM", "HyoMFT9fz", "Hk6Twpcfz", "S1TldacMM", "HyKcH6czz" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author" ]
[ "\nSummary:\n- nets that rely on single directions are probably overfitting\n- batch norm helps not having large single directions\n- high class selectivity of single units is a bad measure to find \"important\" neurons that help a NN generalize.\n\nThe experiments that this paper does are quite interesting, somewh...
[ 7, 5, 9, -1, -1, -1, -1, -1 ]
[ 3, 4, 3, -1, -1, -1, -1, -1 ]
[ "iclr_2018_r1iuQjxCZ", "iclr_2018_r1iuQjxCZ", "iclr_2018_r1iuQjxCZ", "iclr_2018_r1iuQjxCZ", "H1gh0U_lG", "SyGCUouxf", "Hk6Twpcfz", "r1On1W5xf" ]
iclr_2018_r1q7n9gAb
The Implicit Bias of Gradient Descent on Separable Data
We show that gradient descent on an unregularized logistic regression problem, for almost all separable datasets, converges to the same direction as the max-margin solution. The result generalizes also to other monotone decreasing loss functions with an infimum at infinity, and we also discuss a multi-class gener...
accepted-poster-papers
The paper is tackling an important open problem. AnonReviewer3 identified some technical issues that led them to rate the manuscript 5 (i.e., just below the acceptance threshold). Many of these issues are resolved by the reviewer in their review, and the author response makes it clear that these fixes are indeed corre...
train
[ "S1jezarxG", "HyBrwGweG", "HkS9oWtef", "SymFQAPfz", "ByMhM0wGz", "HJxIf0wGz" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author" ]
[ "The paper offers a formal proof that gradient descent on the logistic\nloss converges very slowly to the hard SVM solution in the case where\nthe data are linearly separable. This result should be viewed in the\ncontext of recent attempts at trying to understand the generalization\nability of neural networks, whic...
[ 5, 7, 8, -1, -1, -1 ]
[ 5, 4, 4, -1, -1, -1 ]
[ "iclr_2018_r1q7n9gAb", "iclr_2018_r1q7n9gAb", "iclr_2018_r1q7n9gAb", "S1jezarxG", "HyBrwGweG", "HkS9oWtef" ]
iclr_2018_ByQpn1ZA-
Many Paths to Equilibrium: GANs Do Not Need to Decrease a Divergence At Every Step
Generative adversarial networks (GANs) are a family of generative models that do not minimize a single training criterion. Unlike other generative models, the data distribution is learned via a game between a generator (the generative model) and a discriminator (a teacher providing training signal) that each minimize t...
accepted-poster-papers
AnonReviewers 2 and AnonReviewer 3 rated the paper highly, with AR3 even upgrading their score. AnonReviewer1 was less generous: " Overall, it is a good empirical study, raising a healthy set of questions. In this regard, the paper is worth accepting. However, I am still uncomfortable with the lack of answers and giv...
train
[ "r12l4YDBz", "H1iaR_vSf", "SkqpgbLgM", "B1RLg-DNz", "BJAGbdrVf", "Sks895Fxz", "SkiI_Bixz", "HyqX2aMGG", "H1oisTzMf", "HkfYcazGz", "HksiG0DA-", "BySwLOUkM", "BkUrF7rJG", "ryDZ5ySJz", "HytAS7W1z", "H1N7QxeJM" ]
[ "official_reviewer", "author", "official_reviewer", "author", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "public", "author", "public", "author", "author", "public" ]
[ "Bugs were fixed yesterday and a flood of old emails were sent. But rest assured, I've been reading your comments actively. Thanks for your message.", "Yesterday, some of the authors got an e-mail asking us to respond ASAP to a comment from the AC. When we visited the webpage, we could not see a comment, even whe...
[ -1, -1, 8, -1, -1, 4, 7, -1, -1, -1, -1, -1, -1, -1, -1, -1 ]
[ -1, -1, 5, -1, -1, 4, 3, -1, -1, -1, -1, -1, -1, -1, -1, -1 ]
[ "H1iaR_vSf", "iclr_2018_ByQpn1ZA-", "iclr_2018_ByQpn1ZA-", "H1oisTzMf", "SkiI_Bixz", "iclr_2018_ByQpn1ZA-", "iclr_2018_ByQpn1ZA-", "SkqpgbLgM", "Sks895Fxz", "SkiI_Bixz", "iclr_2018_ByQpn1ZA-", "BkUrF7rJG", "ryDZ5ySJz", "H1N7QxeJM", "HksiG0DA-", "iclr_2018_ByQpn1ZA-" ]
iclr_2018_S1uxsye0Z
Adaptive Dropout with Rademacher Complexity Regularization
We propose a novel framework to adaptively adjust the dropout rates for the deep neural network based on a Rademacher complexity bound. The state-of-the-art deep learning algorithms impose dropout strategy to prevent feature co-adaptation. However, choosing the dropout rates remains an art of heuristics or relies on em...
accepted-poster-papers
The reviewers agreed that the work addresses an important problem. There was disagreement as to the correctness of the arguments in the paper: one of these reviewers was eventually convinced. The other pointed out another two issue in their final post, but it seems that 1. the first is easily adopted and does not affec...
val
[ "Bk15lpF1G", "HyUUnfKHz", "By4ny9FHG", "r1cQ9zYSf", "rkZCWLOHf", "HJkIwe_Sz", "BkYWorDrf", "rJlVd7PrM", "rJ3xKc8BM", "r1wH68UHz", "Syx39Bsgf", "BksvXe8rG", "BJefd2BBf", "HJHht9rHM", "rJYg3KLEf", "HkOUXRFlz", "S1CZ6BnQM", "ry1sRp-ff", "SJkzxRWzz", "S18-Tp-zz" ]
[ "official_reviewer", "author", "official_reviewer", "author", "official_reviewer", "author", "official_reviewer", "official_reviewer", "official_reviewer", "author", "official_reviewer", "official_reviewer", "author", "official_reviewer", "author", "official_reviewer", "author", "a...
[ "This paper studies the adjustment of dropout rates which is a useful tool to prevent the overfitting of deep neural networks. The authors derive a generalization error bound in terms of dropout rates. Based on this, the authors propose a regularization framework to adaptively select dropout rates. Experimental res...
[ 6, -1, -1, -1, -1, -1, -1, -1, -1, -1, 6, -1, -1, -1, -1, 7, -1, -1, -1, -1 ]
[ 3, -1, -1, -1, -1, -1, -1, -1, -1, -1, 3, -1, -1, -1, -1, 5, -1, -1, -1, -1 ]
[ "iclr_2018_S1uxsye0Z", "BkYWorDrf", "r1cQ9zYSf", "rkZCWLOHf", "HJkIwe_Sz", "rJlVd7PrM", "r1wH68UHz", "rJYg3KLEf", "BJefd2BBf", "BksvXe8rG", "iclr_2018_S1uxsye0Z", "iclr_2018_S1uxsye0Z", "HJHht9rHM", "S18-Tp-zz", "Bk15lpF1G", "iclr_2018_S1uxsye0Z", "iclr_2018_S1uxsye0Z", "HkOUXRFlz"...
iclr_2018_BJij4yg0Z
A Bayesian Perspective on Generalization and Stochastic Gradient Descent
We consider two questions at the heart of machine learning; how can we predict if a minimum will generalize to the test set, and why does stochastic gradient descent find minima that generalize well? Our work responds to \citet{zhang2016understanding}, who showed deep neural networks can easily memorize randomly labele...
accepted-poster-papers
I'm inclined to recommend accepting this paper, although it is borderline given the strong dissenting opinion. The revisions have addressed many of the concerns about quality, clarity, and significance. The paper gives an end to end explanation in Bayesian terms of generalization in neural networks using SGD. However,...
train
[ "SJAadCKxz", "By74tbqxf", "HkoLoX5lf", "ry5JIu5GG", "ryszMFGWG", "BkhiKJ1zG", "ry8dvMBZf", "SyQvIePbz", "SJC1KfSWf", "HyvtmFzWM", "S1XazYfZG", "SJUvMa-bf", "HJiDf7WZf", "HJBT-QZWf", "r1tVifbbz", "H1YF5RlyM", "HyDLn-pRZ" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "public", "author", "author", "author", "author", "public", "public", "official_reviewer", "public", "public", "public", "author", "public" ]
[ "The paper takes a recent paper of Zhang et al 2016 as the starting point to investigate the generalization capabilities of models trained by stochastic gradient descent. The main contribution are scaling rules that relate the batch size k used in SGD with the learning rate \\epsilon, most notably \\epsilon/k = con...
[ 3, 7, 7, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1 ]
[ 4, 4, 3, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1 ]
[ "iclr_2018_BJij4yg0Z", "iclr_2018_BJij4yg0Z", "iclr_2018_BJij4yg0Z", "iclr_2018_BJij4yg0Z", "SJUvMa-bf", "By74tbqxf", "HkoLoX5lf", "SJAadCKxz", "HkoLoX5lf", "SJUvMa-bf", "SJUvMa-bf", "r1tVifbbz", "HJBT-QZWf", "r1tVifbbz", "HkoLoX5lf", "HyDLn-pRZ", "iclr_2018_BJij4yg0Z" ]
iclr_2018_SyELrEeAb
Implicit Causal Models for Genome-wide Association Studies
Progress in probabilistic generative models has accelerated, developing richer models with neural architectures, implicit densities, and with scalable algorithms for their Bayesian inference. However, there has been limited progress in models that capture causal relationships, for example, how individual genetic factor...
accepted-poster-papers
The reviewers agree that the work is high quality, clear, original, and could be significant. Despite this, the scores are borderline. The reason is due to rough agreement that the empirical evaluations are not quite there yet. In particular, two reviewers agree that, in the synthetic experiments, the method is evalua...
train
[ "BJaMh7FSM", "SynpHhUBf", "ByKzdqIBM", "HJ80r5Brf", "HkjOg7SBM", "S1EEVuwlG", "SyBCAMcgG", "HJxrNo3xz", "S15hIOT7M", "rJwfLOpQf", "S1488upQG", "HJR3vuamM" ]
[ "author", "official_reviewer", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author" ]
[ "Unfortunately we weren't able to finish the experiments by today, which is the deadline. Regardless if the paper is accepted, we hope to finish these experiments and get them into the paper by camera-ready and/or the next arxiv update. (And thanks again to all the reviewers for the helpful feedback.)\n\nre:rescale...
[ -1, -1, -1, -1, -1, 5, 6, 6, -1, -1, -1, -1 ]
[ -1, -1, -1, -1, -1, 5, 5, 5, -1, -1, -1, -1 ]
[ "SynpHhUBf", "ByKzdqIBM", "HkjOg7SBM", "HkjOg7SBM", "S1488upQG", "iclr_2018_SyELrEeAb", "iclr_2018_SyELrEeAb", "iclr_2018_SyELrEeAb", "S1EEVuwlG", "HJxrNo3xz", "SyBCAMcgG", "iclr_2018_SyELrEeAb" ]
iclr_2018_HJC2SzZCW
Sensitivity and Generalization in Neural Networks: an Empirical Study
In practice it is often found that large over-parameterized neural networks generalize better than their smaller counterparts, an observation that appears to conflict with classical notions of function complexity, which typically favor smaller models. In this work, we investigate this tension between complexity and gen...
accepted-poster-papers
Reviewers always find problems in papers like this. AnonReviewer1 would have preferred to have seen a study of traditional architectures, rather than fully connected ones, which are now less frequently used. They thought the paper was too long, the figures too cluttered, and were not convinced by the discussion around...
train
[ "rJOMWJmSf", "HyVOdjxHf", "H1gNqp1BG", "HkwqeuYeG", "rJzvIiKlf", "rJtlOoqlG", "S1p14OT7M", "ryrJixIMG", "rkLQyWIGf", "Byq8l-IfM", "SyLS0eLzM", "SkJsigUGz" ]
[ "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author" ]
[ "Thank you for taking the time to consider and respond to our rebuttal!\n------------------------------------------------------------------------------------------------------------------------------------------------------\n\n(1)\n>> Indeed, it is clearly expected that the performance of the networks sensibly drop...
[ -1, -1, -1, 8, 5, 4, -1, -1, -1, -1, -1, -1 ]
[ -1, -1, -1, 4, 3, 5, -1, -1, -1, -1, -1, -1 ]
[ "H1gNqp1BG", "S1p14OT7M", "rkLQyWIGf", "iclr_2018_HJC2SzZCW", "iclr_2018_HJC2SzZCW", "iclr_2018_HJC2SzZCW", "rJtlOoqlG", "rJtlOoqlG", "rJzvIiKlf", "HkwqeuYeG", "rJzvIiKlf", "rJtlOoqlG" ]
iclr_2018_SyyGPP0TZ
Regularizing and Optimizing LSTM Language Models
In this paper, we consider the specific problem of word-level language modeling and investigate strategies for regularizing and optimizing LSTM-based models. We propose the weight-dropped LSTM, which uses DropConnect on hidden-to-hidden weights, as a form of recurrent regularization. Further, we introduce NT-ASGD, a no...
accepted-poster-papers
This paper presents a simple yet effective method for weight dropping for an LSTM that requires no modification of an RNN cell's formulation. Experimental results shows good perplexity results on benchmarks compared to many baselines. All reviewers agree that the paper will bring good contribution to the conference.
test
[ "HyQVY2Bxz", "ByC55Gcxz", "BJlf_TmWG", "r18OaJsGG", "S1YYPyjMz", "BkhO4kifz" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author" ]
[ "The paper sets a new state of the art on word level language modelling on the Penn Treebank and Wikitext-2 datasets using various optimization and regularization techniques. These already very good results are further improved, by a large margin, using a Neural Cache.\n\nThe paper is well written, easy to follow a...
[ 7, 7, 7, -1, -1, -1 ]
[ 4, 4, 5, -1, -1, -1 ]
[ "iclr_2018_SyyGPP0TZ", "iclr_2018_SyyGPP0TZ", "iclr_2018_SyyGPP0TZ", "ByC55Gcxz", "BJlf_TmWG", "HyQVY2Bxz" ]
iclr_2018_H1meywxRW
DCN+: Mixed Objective And Deep Residual Coattention for Question Answering
Traditional models for question answering optimize using cross entropy loss, which encourages exact answers at the cost of penalizing nearby or overlapping answers that are sometimes equally accurate. We propose a mixed objective that combines cross entropy loss with self-critical policy learning, using rewards derived...
accepted-poster-papers
This is an interesting paper that provides modeling improvements over several strong baselines and presents state-of-the-art results on SQuAD. One criticism of the paper is that it evaluates only on SQuAD, which is a somewhat artificial task, but we think for publication purposes at ICLR, the paper has a reasonable set of components.
train
[ "BJ5BhWlSG", "HyeSGKwgf", "rkf-R8qlz", "rJ9UPyaeG", "SJIhjddQG", "rkUPcK4mG", "HyABctE7G", "SkPEcKVQG", "Bk4rc8Ebf", "HkzhLBVZf", "Sk7GJVNZf", "Skuf674Wf", "S1UPKfXbM", "H1UVzfmZf", "SJtORWmZM", "BJfdokG-M", "rJdarOXyG" ]
[ "public", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "public", "author", "public", "public", "author", "author", "public", "public", "author" ]
[ "The performance of the model on SQuAD dataset is impressive. In addition to the performance on the test set, we are also interested in the sample complexity of the proposed model. Currently, the SQuAD dataset splits the collection of passages into a training set, a development set, and a test set in a ratio of 80%...
[ -1, 7, 6, 8, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1 ]
[ -1, 4, 4, 2, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1 ]
[ "iclr_2018_H1meywxRW", "iclr_2018_H1meywxRW", "iclr_2018_H1meywxRW", "iclr_2018_H1meywxRW", "Sk7GJVNZf", "HyeSGKwgf", "rkf-R8qlz", "rJ9UPyaeG", "HkzhLBVZf", "Skuf674Wf", "H1UVzfmZf", "S1UPKfXbM", "BJfdokG-M", "SJtORWmZM", "iclr_2018_H1meywxRW", "iclr_2018_H1meywxRW", "iclr_2018_H1mey...
iclr_2018_H196sainb
Word translation without parallel data
State-of-the-art methods for learning cross-lingual word embeddings have relied on bilingual dictionaries or parallel corpora. Recent studies showed that the need for parallel data supervision can be alleviated with character-level information. While these methods showed encouraging results, they are not on par with th...
accepted-poster-papers
There is significant discussion on this paper and high variance between reviewers: one reviewer gave the paper a low score. However, the committee feels that this paper should be accepted at the conference, since it provides a better framework for reproducibility and performs larger-scale experiments than prior work. ...
train
[ "SyE3AHgxG", "rJEg3TtxM", "H1Qhqm9ez", "Sy_UZ--4f", "SJfsiaQ7z", "Skw7wkXQG", "B1yRBYGQM", "H1RBrtfmG", "rJ0n4tGQG", "HkFHDhiGz", "rkJTKT9lz", "BJ4hFZ5ez", "SkMy4hKlz", "rJcdzzcCb", "HkD8ivF0-" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "public", "public", "author", "public", "author", "public" ]
[ "This paper presents a new method for obtaining a bilingual dictionary, without requiring any parallel data between the source and target languages. The method consists of an adversarial approach for aligning two monolingual word embedding spaces, followed by a refinement step using frequent aligned words (accordin...
[ 9, 3, 8, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1 ]
[ 4, 5, 3, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1 ]
[ "iclr_2018_H196sainb", "iclr_2018_H196sainb", "iclr_2018_H196sainb", "rJ0n4tGQG", "HkFHDhiGz", "iclr_2018_H196sainb", "H1Qhqm9ez", "rJEg3TtxM", "SyE3AHgxG", "iclr_2018_H196sainb", "BJ4hFZ5ez", "SkMy4hKlz", "iclr_2018_H196sainb", "HkD8ivF0-", "iclr_2018_H196sainb" ]
iclr_2018_HkuGJ3kCb
All-but-the-Top: Simple and Effective Postprocessing for Word Representations
Real-valued word representations have transformed NLP applications; popular examples are word2vec and GloVe, recognized for their ability to capture linguistic regularities. In this paper, we demonstrate a {\em very simple}, and yet counter-intuitive, postprocessing technique -- eliminate the common mean vector and a f...
accepted-poster-papers
This is a good paper with strong results via a set of simple steps for post-processing off-the-shelf word embeddings. Reviewers are enthusiastic about it and the author responses are satisfactory.
train
[ "HyvB9RKez", "rkmdzXigz", "S1DUyihgz", "B1ab6bXMz", "Hksjh-mfM", "S1d8nWXMG" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author" ]
[ "This paper provides theoretical and empirical motivations for removing the top few principle components of commonly-used word embeddings.\n\nThe paper is well-written and I enjoyed reading it. However, it does not explain how significant this result is beyond that of (Bullinaria and Levy, 2012), who also removed t...
[ 6, 7, 7, -1, -1, -1 ]
[ 5, 4, 4, -1, -1, -1 ]
[ "iclr_2018_HkuGJ3kCb", "iclr_2018_HkuGJ3kCb", "iclr_2018_HkuGJ3kCb", "HyvB9RKez", "S1DUyihgz", "rkmdzXigz" ]
iclr_2018_B18WgG-CZ
Learning General Purpose Distributed Sentence Representations via Large Scale Multi-task Learning
A lot of the recent success in natural language processing (NLP) has been driven by distributed vector representations of words trained on large amounts of text in an unsupervised manner. These representations are typically used as general purpose features for words across a range of NLP problems. However, extending th...
accepted-poster-papers
This paper presents a very cool setup for multi-task learning of fixed-length representations for sentences. Although the authors acknowledge that fixed-length representations may not be suitable for complex, long pieces of text (often, sentences), such representations may be useful for several tasks. T...
train
[ "H1qLBusxz", "BJOtf_JlG", "BkSxEc5xz", "HJP15O6XG", "By7S_wamz", "r1df_v6XM", "HkW2Lwamz", "SyxDHPTmM", "By43QwTmM", "HJ-F7vamM", "B19f7vp7G", "B1u5zP6XG", "Skog6b2XM", "HyNahb37M", "Bk1qKe3XG", "S1mE8yRxz", "SJlxJ4ixG", "HkHMeSHez", "Syi8hpvCZ" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "public", "public", "public", "public", "public" ]
[ "---- updates: ----\n\nI had a ton of comments and concerns, and I think the authors did an admirable job in addressing them. I think the paper represents a solid empirical contribution to this area and is worth publishing in ICLR. \n\n---- original review follows: ----\n\nThis paper is about learning sentence emb...
[ 8, 8, 4, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1 ]
[ 5, 5, 5, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1 ]
[ "iclr_2018_B18WgG-CZ", "iclr_2018_B18WgG-CZ", "iclr_2018_B18WgG-CZ", "iclr_2018_B18WgG-CZ", "BJOtf_JlG", "BJOtf_JlG", "BkSxEc5xz", "SJlxJ4ixG", "H1qLBusxz", "H1qLBusxz", "H1qLBusxz", "H1qLBusxz", "HkHMeSHez", "Bk1qKe3XG", "S1mE8yRxz", "iclr_2018_B18WgG-CZ", "iclr_2018_B18WgG-CZ", "...
iclr_2018_r1dHXnH6-
Natural Language Inference over Interaction Space
The Natural Language Inference (NLI) task requires an agent to determine the logical relationship between a natural language premise and a natural language hypothesis. We introduce Interactive Inference Network (IIN), a novel class of neural network architectures that is able to achieve high-level understanding of the sent...
accepted-poster-papers
This paper presents a marginally interesting idea -- that of an interaction tensor that compares two sentence representations word by word, and feeds the interaction tensor into a higher level feature extraction mechanism. It produces good results on multi-NLI and SNLI datasets. There is some criticism about comparin...
train
[ "B1td1m1Gf", "r1c8O_eSM", "rkFAbDamf", "SJeSa6_ez", "HJIhPadgf", "r1SU3CYeM", "H1YPRv6Xf", "S1r9UvTXz", "Hymo5L67G", "rySbtLamM" ]
[ "public", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author" ]
[ "Thank you for the detailed description of the architecture. I choose this paper for ICLR Reproducibility Challenge and I think I could reproduce the main parts of the architecture. Still, there are a few details that I wasn't able to understand from the paper:\n1. How are the weights of the network initialized?\n2...
[ -1, -1, -1, 5, 6, 6, -1, -1, -1, -1 ]
[ -1, -1, -1, 5, 4, 4, -1, -1, -1, -1 ]
[ "iclr_2018_r1dHXnH6-", "Hymo5L67G", "r1SU3CYeM", "iclr_2018_r1dHXnH6-", "iclr_2018_r1dHXnH6-", "iclr_2018_r1dHXnH6-", "iclr_2018_r1dHXnH6-", "B1td1m1Gf", "SJeSa6_ez", "HJIhPadgf" ]
iclr_2018_SJ1nzBeA-
Multi-Task Learning for Document Ranking and Query Suggestion
We propose a multi-task learning framework to jointly learn document ranking and query suggestion for web search. It consists of two major components: a document ranker and a query recommender. The document ranker combines the current query and session information and compares the combined representation with document represe...
accepted-poster-papers
Overall, the committee finds this paper interesting and well written; it proposes an end-to-end model for a very relevant task. The comparisons are also interesting and well rounded. Reviewer 2 is critical of the paper, but the committee finds the answers to the criticisms to be satisfactory. The paper will bring...
train
[ "SJe5w-Ylz", "HkHVBVYxz", "BkYFdiqez", "H1B4TT37z", "By3-RP-Qf", "B1S_ta3mM", "H1iKCw-Qz", "S1PDTwW7z" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author" ]
[ "Novelty: It looks quite straightforward to combine document ranking and query suggestion. For the model architecture, it is a standard multi-task learning framework. For the “session encoder”, it is also proposed (at least, used) in (Sordoni et al., CIKM 2015). Therefore, I think the technical novelty of the work...
[ 4, 6, 7, -1, -1, -1, -1, -1 ]
[ 4, 4, 4, -1, -1, -1, -1, -1 ]
[ "iclr_2018_SJ1nzBeA-", "iclr_2018_SJ1nzBeA-", "iclr_2018_SJ1nzBeA-", "By3-RP-Qf", "SJe5w-Ylz", "H1iKCw-Qz", "BkYFdiqez", "HkHVBVYxz" ]
iclr_2018_HkgNdt26Z
Distributed Fine-tuning of Language Models on Private Data
One of the big challenges in machine learning applications is that training data can be different from the real-world data faced by the algorithm. In language modeling, users’ language (e.g. in private messaging) could change in a year and be completely different from what we observe in publicly available data. At the ...
accepted-poster-papers
The committee feels that this paper presents a simple, yet effective way to adapt language models from various users in a sufficiently privacy preserving way. Empirical results are quite strong. Reviewer 3 says that the novelty of the paper is not great, but does not provide any references to prior work that are simi...
train
[ "ByBJfdKxM", "H1ywUV9gf", "ryEbgBTxG", "HkRPphPff", "Bk-IT3vMz", "ryrx6hvGf", "SyebtQVWG", "S10QJAz-z", "HJHV20GZf" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "public", "public", "public" ]
[ "This paper deals with improving language models on mobile equipments\nbased on small portion of text that the user has ever input. For this\npurpose, authors employed a linearly interpolated objectives between user\nspecific text and general English, and investigated which method (learning\nwithout forgetting and ...
[ 5, 4, 4, -1, -1, -1, -1, -1, -1 ]
[ 4, 3, 4, -1, -1, -1, -1, -1, -1 ]
[ "iclr_2018_HkgNdt26Z", "iclr_2018_HkgNdt26Z", "iclr_2018_HkgNdt26Z", "ByBJfdKxM", "H1ywUV9gf", "ryEbgBTxG", "ByBJfdKxM", "ryEbgBTxG", "H1ywUV9gf" ]
iclr_2018_SkT5Yg-RZ
Intrinsic Motivation and Automatic Curricula via Asymmetric Self-Play
We describe a simple scheme that allows an agent to learn about its environment in an unsupervised manner. Our scheme pits two versions of the same agent, Alice and Bob, against one another. Alice proposes a task for Bob to complete; and then Bob attempts to complete the task. In this work we will focus on two kinds o...
accepted-poster-papers
I fully agree with the strong positive statements in the reviews. All reviewers agree that the paper introduces a novel and elegant twist on standard RL, wherein one agent proposes a sequence of diverse tasks to a second agent so as to accelerate the second agent's learning of models of the environment. I also concur that t...
train
[ "S1Fy0bqlG", "H14gZYsgG", "SkaVNc2gz", "rkKgampmz", "ByzcQscQf" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author" ]
[ "In this paper, the authors describe a new formulation for exploring the environment in an unsupervised way to aid a specific task later. Using two “minds”, Alice and Bob, where the former proposes increasingly difficult tasks and the latter tries to accomplish them as fast as possible, the learning agent Bob can l...
[ 8, 5, 8, -1, -1 ]
[ 4, 3, 4, -1, -1 ]
[ "iclr_2018_SkT5Yg-RZ", "iclr_2018_SkT5Yg-RZ", "iclr_2018_SkT5Yg-RZ", "SkaVNc2gz", "H14gZYsgG" ]
iclr_2018_SyoDInJ0-
Reinforcement Learning Algorithm Selection
This paper formalises the problem of online algorithm selection in the context of Reinforcement Learning (RL). The setup is as follows: given an episodic task and a finite number of off-policy RL algorithms, a meta-algorithm has to decide which RL algorithm is in control during the next episode so as to maximize the ex...
accepted-poster-papers
The reviewers are unanimous in accepting the paper. They generally view it as introducing an original approach to online RL using bandit-style selection from a fixed portfolio of off-policy algorithms. Furthermore, rigorous theoretical analysis shows that the algorithm achieves near-optimal performance. The only rea...
train
[ "rkTUeUKef", "SJX3im5ez", "ryC1UZsgz" ]
[ "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "SUMMARY\nThe paper considers a meta-algorithm, in the form of a UCB algorithms, that selects base-learners in a pool of reinforcement learning agents.\n\nHIGH LEVEL COMMENTS\nIn this paper, T refers to the total number of meta-decisions. This is very different from the total number of interactions with the system ...
[ 6, 6, 7 ]
[ 5, 3, 4 ]
[ "iclr_2018_SyoDInJ0-", "iclr_2018_SyoDInJ0-", "iclr_2018_SyoDInJ0-" ]
iclr_2018_S1vuO-bCW
Leave no Trace: Learning to Reset for Safe and Autonomous Reinforcement Learning
Deep reinforcement learning algorithms can learn complex behavioral skills, but real-world application of these methods requires a considerable amount of experience to be collected by the agent. In practical settings, such as robotics, this involves repeatedly attempting a task, resetting the environment between each a...
accepted-poster-papers
This paper is an easy accept -- three reviewers have above-threshold scores, while one reviewer is slightly below threshold, though that score was based on the submitted manuscript. It appears that the paper has substantially improved based on reviewer comments. Pros: All reviews had positive sentiment: "very elegant and general idea...
train
[ "BJ0qmr9xf", "ByMUO4qxG", "Hko8GaGff", "SyqiSbomf", "BkOs5gCmz", "Hy80b8TXz", "By-F6QpmM", "B1Hs_K37G", "HyhCJN3Xf", "HkNYoQhQf", "H1ufXR3MM", "ByCgNR6Zz", "SJ067R6Zf", "By2lMCaWf" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "author", "author" ]
[ "The paper solves the problem of how to do autonomous resets, which is an important problem in real world RL. The method is novel, the explanation is clear, and has good experimental results.\n \nPros:\n1. The approach is simple, solves a task of practical importance, and performs well in the experiments. \n2. The ...
[ 7, 6, 5, 7, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1 ]
[ 4, 5, 4, 4, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1 ]
[ "iclr_2018_S1vuO-bCW", "iclr_2018_S1vuO-bCW", "iclr_2018_S1vuO-bCW", "iclr_2018_S1vuO-bCW", "Hy80b8TXz", "By-F6QpmM", "B1Hs_K37G", "HyhCJN3Xf", "HkNYoQhQf", "SyqiSbomf", "Hko8GaGff", "ByMUO4qxG", "BJ0qmr9xf", "iclr_2018_S1vuO-bCW" ]
iclr_2018_BkabRiQpb
Consequentialist conditional cooperation in social dilemmas with imperfect information
Social dilemmas, where mutual cooperation can lead to high payoffs but participants face incentives to cheat, are ubiquitous in multi-agent interaction. We wish to construct agents that cooperate with pure cooperators, avoid exploitation by pure defectors, and incentivize cooperation from the rest. However, often the a...
accepted-poster-papers
The reviewer reactions to the initial manuscript were generally positive. They considered the paper to be well written and clear, providing an original contribution to learning to cooperate in multi-agent deep RL in imperfect domains. The reviewers raised a number of specific issues to address, including improved def...
train
[ "r1CyEt8Nf", "r1uWD5txM", "HkCdXWqlM", "B1GljO9xM", "Sy0tFIhMf", "rJYL9Uhzf", "Byb0FUnfz" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author" ]
[ "Thanks for clarifying some of the mentioned issues. With the introduced revision, you cover your ground very well. I believe that this paper offers a great basis for interesting further studies in this direction.", "This paper proposes a novel adaptive learning mechanism to improve results in ergodic cooperation...
[ -1, 7, 5, 6, -1, -1, -1 ]
[ -1, 3, 4, 4, -1, -1, -1 ]
[ "rJYL9Uhzf", "iclr_2018_BkabRiQpb", "iclr_2018_BkabRiQpb", "iclr_2018_BkabRiQpb", "B1GljO9xM", "r1uWD5txM", "HkCdXWqlM" ]
iclr_2018_SkZxCk-0Z
Can Neural Networks Understand Logical Entailment?
We introduce a new dataset of logical entailments for the purpose of measuring models' ability to capture and exploit the structure of logical expressions against an entailment prediction task. We use this task to compare a series of architectures which are ubiquitous in the sequence-processing literature, in addition ...
accepted-poster-papers
This paper studies the problem of modeling logical structure in a neural model. It introduces a data set for probing various existing models and proposes a new model that addresses shortcomings in existing ones. The reviewers point out that there is a bit of a tautology in introducing a new task and a new model that ...
train
[ "HyKlaaFxf", "ByyL5O_ef", "Synyi3YxM", "SJaK9o5fz", "B19KA55GM", "HJitaq9zz", "rk5QwLh1f", "BJiNJMsyf", "Skm7Jfokz", "ryDJkMoyz", "rJWTC-s1G", "BkaIlVcCZ" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "author", "public" ]
[ "Overall, the paper is well-written and the proposed model is quite intuitive. Specifically, the idea is to represent entailment as a product of continuous functions over possible worlds. Specifically, the idea is to generate possible worlds, and compute the functions that encode entailment in those worlds. The fun...
[ 7, 7, 4, -1, -1, -1, -1, -1, -1, -1, -1, -1 ]
[ 3, 3, 4, -1, -1, -1, -1, -1, -1, -1, -1, -1 ]
[ "iclr_2018_SkZxCk-0Z", "iclr_2018_SkZxCk-0Z", "iclr_2018_SkZxCk-0Z", "Synyi3YxM", "ByyL5O_ef", "HyKlaaFxf", "Skm7Jfokz", "BkaIlVcCZ", "BkaIlVcCZ", "BkaIlVcCZ", "BkaIlVcCZ", "iclr_2018_SkZxCk-0Z" ]
iclr_2018_HyRVBzap-
Cascade Adversarial Machine Learning Regularized with a Unified Embedding
Injecting adversarial examples during training, known as adversarial training, can improve robustness against one-step attacks, but not for unknown iterative attacks. To address this challenge, we first show iteratively generated adversarial images easily transfer between networks trained with the same strategy. Inspir...
accepted-poster-papers
This paper forms a good contribution to the active area of adversarial training. The main issue with the original submission was presentation quality and excessive length. The revised version is much improved. However, it still needs some work on the writing, in large part in the transferability section but also to ...
train
[ "Hy7Gjh9eM", "HyPfhQ2ez", "HkdIMPhez", "HkBR-RcXz", "Hko4LECMG", "S1nCrEAMM", "SJ0PSV0GM", "SywnZV0MG" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author" ]
[ "The authors proposed to supplement adversarial training with an additional regularization that forces the embeddings of clean and adversarial inputs to be similar. The authors demonstrate on MNIST and CIFAR that the added regularization leads to more robustness to various kinds of attacks. The authors further prop...
[ 6, 6, 5, -1, -1, -1, -1, -1 ]
[ 4, 4, 4, -1, -1, -1, -1, -1 ]
[ "iclr_2018_HyRVBzap-", "iclr_2018_HyRVBzap-", "iclr_2018_HyRVBzap-", "iclr_2018_HyRVBzap-", "iclr_2018_HyRVBzap-", "Hy7Gjh9eM", "HyPfhQ2ez", "HkdIMPhez" ]
iclr_2018_Sk9yuql0Z
Mitigating Adversarial Effects Through Randomization
Convolutional neural networks have demonstrated high accuracy on various tasks in recent years. However, they are extremely vulnerable to adversarial examples. For example, imperceptible perturbations added to clean images can cause convolutional neural networks to fail. In this paper, we propose to utilize randomizati...
accepted-poster-papers
The paper proposes adding randomization steps at inference time to CNNs in order to defend against adversarial attacks. Pros: - Results demonstrate good performance, and the team achieves a high rank (2nd place) on a public benchmark. - The benefit of the proposed approach is that it does not require any additional tr...
val
[ "BJDaCQFxM", "ByRWmWAxM", "B104VQCgM", "BJrUXqS7f", "BJpj_KSXG", "Hkxu_YH7f", "SJkSdtSmG", "rJ70Io8kf", "rJXvqfIJG", "Sy3pQrByM", "r13-LSB1M", "rJkSBVByG" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "public", "author", "author", "public" ]
[ "The authors propose a simple defense against adversarial attacks, which is to add randomization in the input of the CNNs. They experiment with different CNNs and published adversarial training techniques and show that randomized inputs mitigate adversarial attacks. \n\nPros:\n(+) The idea introduced is simple and ...
[ 6, 7, 6, -1, -1, -1, -1, -1, -1, -1, -1, -1 ]
[ 4, 4, 3, -1, -1, -1, -1, -1, -1, -1, -1, -1 ]
[ "iclr_2018_Sk9yuql0Z", "iclr_2018_Sk9yuql0Z", "iclr_2018_Sk9yuql0Z", "iclr_2018_Sk9yuql0Z", "BJDaCQFxM", "ByRWmWAxM", "B104VQCgM", "rJXvqfIJG", "Sy3pQrByM", "rJkSBVByG", "iclr_2018_Sk9yuql0Z", "iclr_2018_Sk9yuql0Z" ]
iclr_2018_BkpiPMbA-
Decision Boundary Analysis of Adversarial Examples
Deep neural networks (DNNs) are vulnerable to adversarial examples, which are carefully crafted instances aiming to cause prediction errors for DNNs. Recent research on adversarial examples has examined local neighborhoods in the input space of DNN models. However, previous work has limited what regions to consider, fo...
accepted-poster-papers
The authors propose an approach to generating adversarial examples that jointly examines the effects on classification within a local neighborhood, yielding a more robust example. This idea is taken a step further for defense, whereby the classification boundaries within a local neighborhood of a presented example are ex...
train
[ "SkxHS_vlz", "rJNyagigz", "ryhMljRxz", "HyQTSJyEG", "S1wjr_a7G", "H1tdBO6Xz", "SkwUBd6Qz" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author" ]
[ "Summary of paper:\n\nThe authors present a novel attack for generating adversarial examples, deemed OptMargin, in which the authors attack an ensemble of classifiers created by classifying at random L2 small perturbations. They compare this optimization method with two baselines in MNIST and CIFAR, and provide an ...
[ 6, 6, 6, -1, -1, -1, -1 ]
[ 3, 2, 3, -1, -1, -1, -1 ]
[ "iclr_2018_BkpiPMbA-", "iclr_2018_BkpiPMbA-", "iclr_2018_BkpiPMbA-", "iclr_2018_BkpiPMbA-", "SkxHS_vlz", "rJNyagigz", "ryhMljRxz" ]
iclr_2018_HJWLfGWRb
Matrix capsules with EM routing
A capsule is a group of neurons whose outputs represent different properties of the same entity. Each layer in a capsule network contains many capsules. We describe a version of capsules in which each capsule has a logistic unit to represent the presence of an entity and a 4x4 matrix which could learn to represent the ...
accepted-poster-papers
The authors present a new multi-layered capsule network architecture, implement an EM routing procedure, and introduce "Coordinate Addition". Capsule architectures are gaining interest because of their ability to achieve equivariance of parts and their use of a new form of pooling called "routing" (as opposed to max pooling...
train
[ "HyvJKULxM", "Hykw8iKxG", "ry1nhoKgM", "ByZRu4ClG", "HyguZD-Vf", "SJAWbD-4G", "HJkSzIZEf", "rk5MadsMf", "rJFRG6oWG", "rJUY2VdbM", "ryTPZJd-f", "SJQqV1JWz", "HkAVUc3gz", "rJnxEL2xf", "r17t2UIgf", "BkFS5LLxf", "Hy9EvktkG", "Hkc2c4HyM", "ryM_Fi4JM", "ByAqs7VJf", "ByVzDRDRW", "...
[ "author", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "public", "public", "author", "public", "public", "public", "public", "author", "author", "public", "public", "public", "public", "author", "public", "public" ]
[ "The objective function in details is:\n\\sum_c a'_c (-\\beta_a) + a'_c ln(a'_c) + (1-a'_c)ln(1-a'_c)+\\sum_h cost_{ch} + \\sum_i a_i * r_{ic} * ln(r_{ic})\n\na'_c is the activation for capsule c in layer L+1 and a_i is the activation probability for capsule i in layer L. The rest of the notations follow paper. \n...
[ -1, 7, 6, 4, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1 ]
[ -1, 3, 3, 2, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1 ]
[ "ByAqs7VJf", "iclr_2018_HJWLfGWRb", "iclr_2018_HJWLfGWRb", "iclr_2018_HJWLfGWRb", "ByZRu4ClG", "Hykw8iKxG", "ry1nhoKgM", "iclr_2018_HJWLfGWRb", "iclr_2018_HJWLfGWRb", "ryTPZJd-f", "iclr_2018_HJWLfGWRb", "iclr_2018_HJWLfGWRb", "iclr_2018_HJWLfGWRb", "iclr_2018_HJWLfGWRb", "Hy9EvktkG", "...
iclr_2018_BJE-4xW0W
CausalGAN: Learning Causal Implicit Generative Models with Adversarial Training
We introduce causal implicit generative models (CiGMs): models that allow sampling from not only the true observational but also the true interventional distributions. We show that adversarial training can be used to learn a CiGM, if the generator architecture is structured based on a given causal graph. We consider th...
accepted-poster-papers
This paper proposes interesting machinery around Generative Adversarial Networks to enable sampling not only from conditional observational distributions but also from interventional distributions. This is an important contribution as this means that we can obtain samples with desired properties that may not be pre...
test
[ "Bk3mPx0xM", "rkBmo9ryM", "ryv9d98lf", "HJlNEQvGf", "r1ZoQQPMz", "ryPEmQwMf", "H151-7mlG", "HJV1pD3eG" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "public", "author" ]
[ "The paper describes a way of combining a causal graph describing the dependency structure of labels with two conditional GAN architectures (causalGAN and causalBEGAN) that generate images conditioning on the binary labels. Ideally, this type of approach should allow not only to generate images from an observation...
[ 6, 7, 9, -1, -1, -1, -1, -1 ]
[ 3, 3, 3, -1, -1, -1, -1, -1 ]
[ "iclr_2018_BJE-4xW0W", "iclr_2018_BJE-4xW0W", "iclr_2018_BJE-4xW0W", "rkBmo9ryM", "ryv9d98lf", "Bk3mPx0xM", "iclr_2018_BJE-4xW0W", "H151-7mlG" ]
iclr_2018_SJyEH91A-
Learning Wasserstein Embeddings
The Wasserstein distance received a lot of attention recently in the community of machine learning, especially for its principled way of comparing distributions. It has found numerous applications in several hard problems, such as domain adaptation, dimensionality reduction or generative models. However, its use is sti...
accepted-poster-papers
The paper presents a practical approach to compute Wasserstein distance based image embeddings. The Euclidean distance in the embedded space approximates the true Wasserstein distance, thus reducing the high computation cost associated with the latter. Pros: - Reviewers agree that the proposed solution is novel, strai...
train
[ "S1FE0K2eG", "r11xXR3xf", "SkpXB7TlM", "Hk4iVgFff", "ryBDExKGM", "SylcXetzz", "BywLGeYMz", "HyE_4duzG", "B11F7OOfM", "rkpfmO_fG", "SJqdGdufM", "SJddZ_OzM", "Byrxn34Wf" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "author", "author", "public" ]
[ "The paper proposes to use a deep neural network to embed probability distributions in a vector space, where the Euclidean distance in that space matches the Wasserstein distance in the original space of probability distributions. A dataset of pairs of probability distributions and their Wasserstein distance is col...
[ 7, 7, 7, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1 ]
[ 3, 3, 4, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1 ]
[ "iclr_2018_SJyEH91A-", "iclr_2018_SJyEH91A-", "iclr_2018_SJyEH91A-", "S1FE0K2eG", "S1FE0K2eG", "S1FE0K2eG", "r11xXR3xf", "SkpXB7TlM", "SkpXB7TlM", "SkpXB7TlM", "SkpXB7TlM", "Byrxn34Wf", "iclr_2018_SJyEH91A-" ]
iclr_2018_BJNRFNlRW
TRAINING GENERATIVE ADVERSARIAL NETWORKS VIA PRIMAL-DUAL SUBGRADIENT METHODS: A LAGRANGIAN PERSPECTIVE ON GAN
We relate the minimax game of generative adversarial networks (GANs) to finding the saddle points of the Lagrangian function for a convex optimization problem, where the discriminator outputs and the distribution of generator outputs play the roles of primal variables and dual variables, respectively. This formulation ...
accepted-poster-papers
The paper makes a good theoretical contribution by formulating GAN training as a primal-dual subgradient method for convex optimization and providing a convergence proof. The authors then propose a modification to the standard GAN training objective, based on this formulation, that helps address the mode-collapse issue. One we...
train
[ "SkmK6TUxz", "SkvyRa3lG", "SJHM1GxbG", "Bkv1X-3Qz", "SyfZHx5Xz", "H1yaEeqXG", "SJMKDw_7f", "B1haOAOXG", "r1o4ORO7z", "SJ6GI0_XG", "H1X2ld_7G", "HJlcnP_XG", "B1dsSPOXG", "rybMrduXG", "r1FDmdumM", "ry3SXdO7M", "Hy0GRPumz" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "author" ]
[ "This paper formulates GAN as a Lagrangian of a primal convex constrained optimization problem. They then suggest to modify the updates used in the standard GAN training to be similar to the primal-dual updates typically used by primal-dual subgradient methods.\n\nTechnically, the paper is sound. It mostly leverage...
[ 7, 6, 7, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1 ]
[ 4, 4, 3, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1 ]
[ "iclr_2018_BJNRFNlRW", "iclr_2018_BJNRFNlRW", "iclr_2018_BJNRFNlRW", "H1yaEeqXG", "r1o4ORO7z", "SJ6GI0_XG", "SJHM1GxbG", "H1X2ld_7G", "rybMrduXG", "r1FDmdumM", "SkmK6TUxz", "SkvyRa3lG", "iclr_2018_BJNRFNlRW", "r1FDmdumM", "ry3SXdO7M", "H1X2ld_7G", "HJlcnP_XG" ]
iclr_2018_HyyP33gAZ
Activation Maximization Generative Adversarial Nets
Class labels have been empirically shown useful in improving the sample quality of generative adversarial nets (GANs). In this paper, we mathematically study the properties of the current variants of GANs that make use of class label information. With class aware gradient and cross-entropy decomposition, we reveal how ...
accepted-poster-papers
The authors investigate various class aware GANs and provide extensive analysis of their ability to address mode collapse and sample quality issues. Based on this analysis they propose an extension called Activation Maximization-GAN which tries to push each generated sample to a specific class indicated by the Discrimi...
train
[ "HJZyvxT1G", "HyUOlCNlf", "SJ5g2WcgM", "rJGlj7pmf", "SkIE9XHbG", "HJ7_9mrWG", "r1F_FQBZG", "rknnXVKxz", "B1ZnkAYkz", "Sy7107h0b", "SJC9zbqCZ" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "public", "public", "author", "public" ]
[ "\nI thank the authors for the thoughtful responses and updated manuscript. Although the manuscript is improved, I still feel it is unfocused and may be substantially improved, thus my review score remains unchanged.\n\n===============\n\nThe authors describe a new version of a generative adversarial network (GAN) ...
[ 5, 7, 8, -1, -1, -1, -1, -1, -1, -1, -1 ]
[ 4, 4, 4, -1, -1, -1, -1, -1, -1, -1, -1 ]
[ "iclr_2018_HyyP33gAZ", "iclr_2018_HyyP33gAZ", "iclr_2018_HyyP33gAZ", "r1F_FQBZG", "SJ5g2WcgM", "HyUOlCNlf", "HJZyvxT1G", "B1ZnkAYkz", "iclr_2018_HyyP33gAZ", "SJC9zbqCZ", "iclr_2018_HyyP33gAZ" ]
iclr_2018_SkVqXOxCb
Coulomb GANs: Provably Optimal Nash Equilibria via Potential Fields
Generative adversarial networks (GANs) evolved into one of the most successful unsupervised techniques for generating realistic images. Even though it has recently been shown that GAN training converges, GAN models often end up in local Nash equilibria that are associated with mode collapse or otherwise fail to model t...
accepted-poster-papers
The paper provides an interesting take on GAN training based on Coulomb dynamics. The proposed formulation is theoretically well motivated, and the authors provide convergence guarantees. Reviewers agree that the theoretical analysis is interesting but are not completely impressed by the results. The method addresses mo...
test
[ "BkPeeQ5lf", "r1EJEK6lG", "HkdTXw1bM", "ByCZjzt7G", "BkzCcfFQz", "H1bh5MtXz", "Hyi0SB3lM", "ryPND-5gG", "SkQ_1mQ1f", "S1wB0slyf", "BJI5By0RZ", "rJDaKf0Rb", "Sy5L6uoC-" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "public", "author", "public", "author", "public", "public" ]
[ "\nIn this paper, the authors interpret the training of GAN by potential field and inspired from which to provide new training procedure for GAN. They claim that under the condition that global optima are achieved for discriminator and generator in each iteration, the Coulomb GAN converges to the global solution. \...
[ 5, 7, 7, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1 ]
[ 4, 3, 2, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1 ]
[ "iclr_2018_SkVqXOxCb", "iclr_2018_SkVqXOxCb", "iclr_2018_SkVqXOxCb", "BkPeeQ5lf", "r1EJEK6lG", "HkdTXw1bM", "ryPND-5gG", "iclr_2018_SkVqXOxCb", "S1wB0slyf", "rJDaKf0Rb", "Sy5L6uoC-", "BJI5By0RZ", "iclr_2018_SkVqXOxCb" ]
iclr_2018_SJx9GQb0-
Improving the Improved Training of Wasserstein GANs: A Consistency Term and Its Dual Effect
Despite being impactful on a variety of problems and applications, the generative adversarial nets (GANs) are remarkably difficult to train. This issue is formally analyzed by \cite{arjovsky2017towards}, who also propose an alternative direction to avoid the caveats in the minmax two-player training of GANs. The corre...
accepted-poster-papers
The paper proposes various improvements to Wasserstein distance based GAN training. Reviewers agree that the method produces good quality samples and are impressed by the state of the art results in several semi-supervised learning benchmarks. The paper is well written and the authors have further improved the empirica...
train
[ "SkNSwnmrG", "r17cU27HG", "rJJbI3QHM", "BJNCqPqxM", "B11HQF84M", "HkbRNKwef", "ryT2f8KgM", "SywG98VGM", "r1QGQhVfG", "ryr5SEEfG", "Bkee-E4zG", "SksBJ-fzz", "SkTmQbkbz", "BJhxQVieM", "HkLJt_QxG", "BJopYw5yM", "BynjqwxJf", "SkpgYPxyz", "Hk9rHlS0-", "BJhcHerRb", "BkWTPbHCW", "...
[ "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "public", "author", "public", "public", "public", "author", "author", "author", "author", "author", "public", "public"...
[ "AnonReviewer2: \"But still, without the analysis of the temporal ensembling trick [Samuli & Timo, 2017]\" \n\nWe have actually reported the ablation study about this temporal ensemebling technique in the rebuttal. Please read our answer to Q4 in the rebuttal. \n\n=Q4=\n\"... which part of the model works\"\n\nPlea...
[ -1, -1, -1, 4, -1, 6, 7, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1 ]
[ -1, -1, -1, 4, -1, 5, 4, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1 ]
[ "BJNCqPqxM", "BJNCqPqxM", "B11HQF84M", "iclr_2018_SJx9GQb0-", "SywG98VGM", "iclr_2018_SJx9GQb0-", "iclr_2018_SJx9GQb0-", "HkbRNKwef", "SksBJ-fzz", "ryT2f8KgM", "BJNCqPqxM", "iclr_2018_SJx9GQb0-", "BJhxQVieM", "iclr_2018_SJx9GQb0-", "BynjqwxJf", "iclr_2018_SJx9GQb0-", "r1mlEqyyz", "...
iclr_2018_BJIgi_eCZ
FusionNet: Fusing via Fully-aware Attention with Application to Machine Comprehension
This paper introduces a new neural structure called FusionNet, which extends existing attention approaches from three perspectives. First, it puts forward a novel concept of "History of Word" to characterize attention information from the lowest word-level embedding up to the highest semantic-level representation. Seco...
accepted-poster-papers
State-of-the-art results on SQuAD (at least at the time of submission) with a nice model. The authors have since applied the model to additional tasks (SNLI). Good discussion with reviewers, a well-written submission, and all reviewers suggest acceptance.
train
[ "r1HLacEHf", "r1AM2beHM", "r1ApvdPxG", "SJxIVpkZM", "S1UrbZQ-f", "H1NajF9mf", "BJi3jC4-f", "SJeISQYWM", "rJtar37Zf", "HyLoVfF-f", "ryjOHJHbG", "H1qsigSWz", "rJK-1DBZM", "r13F2uHZM", "HJ3IoLB-z", "SJxG58rZM", "S1OQa84Wf", "rJYnDN4-f", "rkqkzzm-G", "HkeZo1MZG", "rJBpWyR1z", "...
[ "public", "public", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "public", "author", "author", "author", "author", "author", "public", "public", "public", "public", "author", "public", "public", "author", "author", "public", "public...
[ "I loved reading your paper. Very well written. Great work!\nI would like to know more about your ensemble model. You have mentioned that your ensemble contains 39 models. Can you please comment on what these models are?\n\nHow long did the training take during your experiments? What is the batch size? And GPU memo...
[ -1, -1, 7, 8, 7, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1 ]
[ -1, -1, 5, 3, 4, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1 ]
[ "iclr_2018_BJIgi_eCZ", "iclr_2018_BJIgi_eCZ", "iclr_2018_BJIgi_eCZ", "iclr_2018_BJIgi_eCZ", "iclr_2018_BJIgi_eCZ", "iclr_2018_BJIgi_eCZ", "S1OQa84Wf", "HJ3IoLB-z", "S1UrbZQ-f", "SJxIVpkZM", "r1ApvdPxG", "rkqkzzm-G", "SJxG58rZM", "rJK-1DBZM", "BJi3jC4-f", "H1qsigSWz", "rJYnDN4-f", "...
iclr_2018_rkgOLb-0W
Neural Language Modeling by Jointly Learning Syntax and Lexicon
We propose a neural language model capable of unsupervised syntactic structure induction. The model leverages the structure information to form better semantic representations and better language modeling. Standard recurrent neural networks are limited by their structure and fail to efficiently use syntactic informatio...
accepted-poster-papers
Nice language modeling paper with consistently high scores. The model structure is neat and the results are solid. Good ICLR-type paper with contributions mostly on the ML side and experiments on a (simple) NLP task.
train
[ "ByUN4M9xM", "rkJwIctlf", "HkLBrbaef", "Sk9e3S2fz", "r1NnBSnfz", "SkVObH3Mz", "B1_Q-r2fM", "rkX6IwPzM", "rkemARblM", "SJdy1V01z" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "public", "author", "official_reviewer" ]
[ "** UPDATE ** upgraded my score to 7 based on the new version of the paper.\n\nThe main contribution of this paper is to introduce a new recurrent neural network for language modeling, which incorporates a tree structure More precisely, the model learns constituency trees (without any supervision), to capture synta...
[ 7, 8, 7, -1, -1, -1, -1, -1, -1, -1 ]
[ 3, 4, 4, -1, -1, -1, -1, -1, -1, -1 ]
[ "iclr_2018_rkgOLb-0W", "iclr_2018_rkgOLb-0W", "iclr_2018_rkgOLb-0W", "rkX6IwPzM", "rkJwIctlf", "ByUN4M9xM", "HkLBrbaef", "iclr_2018_rkgOLb-0W", "SJdy1V01z", "iclr_2018_rkgOLb-0W" ]
iclr_2018_rk6cfpRjZ
Learning Intrinsic Sparse Structures within Long Short-Term Memory
Model compression is significant for the wide adoption of Recurrent Neural Networks (RNNs) in both user devices possessing limited resources and business clusters requiring quick responses to large-scale service requests. This work aims to learn structurally-sparse Long Short-Term Memory (LSTM) by reducing the sizes of...
accepted-poster-papers
The reviewers really liked this paper. This paper presents a tweak to the LSTM cell that introduces sparsity, thus reducing the number of parameters in the model. The authors show that their sparse models match the performance of the non-sparse baselines. The results are not state-of-the-art but vanilla implemen...
train
[ "SJ15MyGeG", "SySivF84M", "B15mUMdef", "r14eWGtez", "SJsqUdpzG", "Hyt62VvGG", "SyjB0Vvfz", "SkkbFvaGz", "By6GmBPGz", "rkmp_naTZ" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "public" ]
[ "Quality: \nThe motivation and experimentation is sound.\n\nOriginality:\nThis work is a natural follow up on previous work that used group lasso for CNNs, namely learning sparse RNNs with group-lasso. Not very original, but nevertheless important.\n\nClarity:\nThe fact that the method is using a group-lasso regula...
[ 7, -1, 6, 7, -1, -1, -1, -1, -1, -1 ]
[ 4, -1, 4, 4, -1, -1, -1, -1, -1, -1 ]
[ "iclr_2018_rk6cfpRjZ", "SJsqUdpzG", "iclr_2018_rk6cfpRjZ", "iclr_2018_rk6cfpRjZ", "By6GmBPGz", "r14eWGtez", "B15mUMdef", "rkmp_naTZ", "SJ15MyGeG", "iclr_2018_rk6cfpRjZ" ]
iclr_2018_ry018WZAZ
Deep Active Learning for Named Entity Recognition
Deep learning has yielded state-of-the-art performance on many natural language processing tasks including named entity recognition (NER). However, this typically requires large amounts of labeled data. In this work, we demonstrate that the amount of labeled training data can be drastically reduced when deep learning i...
accepted-poster-papers
The reviewers liked this paper quite a bit. The novelty seems modest and the results are limited to a fairly simple NER task, but there is nothing wrong with the paper, hence recommending acceptance.
train
[ "HklWpb5lz", "S1G9tRFgG", "ry7bzE8eG", "S1S83QJVz", "B1cNJbz7G", "H1Y_RgzXG", "B1M7plzmz", "rJJs3gG7M" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author" ]
[ "This paper introduces a lightweight neural network that achieves state-of-the-art performance on NER. The network allows efficient active incremental training, which significantly reduces the amount of training data needed to match state-of-the-art performance.\n\nThe paper is well-written. The ideas are simple, b...
[ 6, 6, 7, -1, -1, -1, -1, -1 ]
[ 3, 4, 4, -1, -1, -1, -1, -1 ]
[ "iclr_2018_ry018WZAZ", "iclr_2018_ry018WZAZ", "iclr_2018_ry018WZAZ", "B1cNJbz7G", "ry7bzE8eG", "S1G9tRFgG", "HklWpb5lz", "iclr_2018_ry018WZAZ" ]
iclr_2018_Syg-YfWCW
Go for a Walk and Arrive at the Answer: Reasoning Over Paths in Knowledge Bases using Reinforcement Learning
Knowledge bases (KB), both automatically and manually constructed, are often incomplete --- many valid facts can be inferred from the KB by synthesizing existing information. A popular approach to KB completion is to infer new relations by combinatory reasoning over the information found along other paths connecting a ...
accepted-poster-papers
Good contribution. There was a (heated) debate over this paper but the authors stayed calm and patiently addressed all comments and supplied additional evaluations, etc.
train
[ "rJPLu4kff", "ByeDuWqEG", "SymbMC2xf", "SJdS6W2ef", "H1-nFopxM", "SJcDk8FzG", "rkQzTQTmf", "r1QV2maXG", "SkOUZl6Xf", "Hk6agA5fM", "ByjZWQ9zf", "By1YR_Kzz", "ry7CRLKfM", "Skn0jadzf", "rJtRtNkfM", "HkOqr4kff", "S1frbpDZM", "B1ZpM7FZM", "HkEYGpulf", "H1tRWqdgf", "BJhGlT8JM", "...
[ "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "public", "author", "author", "author", "author", "public", "author", "public", "author", "author", "author", "public", "public", "public", "author", "author", "author", "public", "public...
[ "Thank you for your helpful reviews!\n\nYou raised an interesting point regarding the performance of MINERVA on KGs with large number of relation types. For a fair comparison, we ran query answering (not fact prediction) experiments on NELL-995 and compared to our implementation of DistMult (which does very well on...
[ -1, -1, 7, 6, 5, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1 ]
[ -1, -1, 4, 4, 4, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1 ]
[ "SJdS6W2ef", "SymbMC2xf", "iclr_2018_Syg-YfWCW", "iclr_2018_Syg-YfWCW", "iclr_2018_Syg-YfWCW", "iclr_2018_Syg-YfWCW", "SJCU15BJf", "S1frbpDZM", "H1-nFopxM", "ByjZWQ9zf", "ry7CRLKfM", "SJcDk8FzG", "SJcDk8FzG", "B1ZpM7FZM", "Hkcn5-Dyz", "SymbMC2xf", "iclr_2018_Syg-YfWCW", "iclr_2018_...
iclr_2018_Sk7KsfW0-
Lifelong Learning with Dynamically Expandable Networks
We propose a novel deep network architecture for lifelong learning which we refer to as Dynamically Expandable Network (DEN), that can dynamically decide its network capacity as it trains on a sequence of tasks, to learn a compact overlapping knowledge sharing structure among tasks. DEN is efficiently trained in an onl...
accepted-poster-papers
PROS: 1. Good results; the authors made it work. 2. The paper is largely well written. CONS: 1. Some found the writing to be unclear and sloppy in places. 2. The algorithm is complicated -- a chain of sub-algorithms. A few small points: - I initially found Algorithm 1 to be confusing because it wasn't clear whether it was in...
train
[ "SJAGR15lz", "SkcwXXqxM", "HJnaoMagf", "ByFTDgp7G", "ryZHNvkmz", "BkOyLPJXG", "HJJ5UP1XG" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author" ]
[ "The paper was clearly written and pleasant to read. I liked the use of sparsity- and group-sparsity-promoting regularizers to select connections and decide how to expand the network.\n\nA strength of the paper is that the proposed algorithm is interesting and intuitive, even if relatively complex, as it requires c...
[ 7, 6, 8, -1, -1, -1, -1 ]
[ 3, 3, 2, -1, -1, -1, -1 ]
[ "iclr_2018_Sk7KsfW0-", "iclr_2018_Sk7KsfW0-", "iclr_2018_Sk7KsfW0-", "iclr_2018_Sk7KsfW0-", "HJnaoMagf", "SkcwXXqxM", "SJAGR15lz" ]
iclr_2018_H1VjBebR-
The Role of Minimal Complexity Functions in Unsupervised Learning of Semantic Mappings
We discuss the feasibility of the following learning problem: given unmatched samples from two domains and nothing else, learn a mapping between the two, which preserves semantics. Due to the lack of paired samples and without any definition of the semantic information, the problem might seem ill-posed. Specifically, i...
accepted-poster-papers
The reviewers were generally positive about this paper with a few caveats: PROS: 1. An important and challenging topic to analyze, and any progress on unsupervised learning is interesting. 2. The paper is clear, although more formalization would sometimes help. 3. The paper presents an analysis for unsupervised learning of...
train
[ "Skz4Z5KlG", "rkXOO8B4f", "SkZzR3E4M", "rJn7jf9xf", "S1wHlhaZM", "B1oTaatGz", "SJhq6pFff", "S1yAtfsbz", "HkHa_foZG" ]
[ "official_reviewer", "author", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author" ]
[ "The paper addresses the problem of learning mappings between different domains without any supervision. It belongs to the recent family of papers based on GANs.\nThe paper states three conjectures (predictions in the paper):\n1. GAN are sufficient to learn « semantic mappings » in an unsupervised way, if the consi...
[ 7, -1, -1, 7, 6, -1, -1, -1, -1 ]
[ 4, -1, -1, 2, 4, -1, -1, -1, -1 ]
[ "iclr_2018_H1VjBebR-", "SkZzR3E4M", "SJhq6pFff", "iclr_2018_H1VjBebR-", "iclr_2018_H1VjBebR-", "S1wHlhaZM", "S1wHlhaZM", "Skz4Z5KlG", "rJn7jf9xf" ]
iclr_2018_BJuWrGW0Z
Dynamic Neural Program Embeddings for Program Repair
Neural program embeddings have shown much promise recently for a variety of program analysis tasks, including program synthesis, program repair, code completion, and fault localization. However, most existing program embeddings are based on syntactic features of programs, such as token sequences or abstract syntax tree...
accepted-poster-papers
PROS: 1. Interesting and clearly useful idea. 2. The paper is clearly written. 3. This work doesn't seem that original from an algorithmic point of view, since Reed & De Freitas (2015) and Cai et al. (2017), among others, have considered using execution traces. However, the application to program repair is novel (as far as...
train
[ "rkdmp2J-f", "r1a9wIjEM", "H1Pyl4sxM", "H1JAev9gz", "S1e4KmgXM", "HyPdjuszz", "BJ5rj_jGz", "ry5liuiGf", "ryPug02Zf", "BJk11A3bz", "H1DM0ah-z" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "author" ]
[ "This paper considers the task of learning program embeddings with neural networks with the ultimate goal of bug detection program repair in the context of students learning to program. Three NN architectures are explored, which leverage program semantics rather than pure syntax. The approach is validated using pr...
[ 6, -1, 7, 7, -1, -1, -1, -1, -1, -1, -1 ]
[ 2, -1, 3, 4, -1, -1, -1, -1, -1, -1, -1 ]
[ "iclr_2018_BJuWrGW0Z", "BJ5rj_jGz", "iclr_2018_BJuWrGW0Z", "iclr_2018_BJuWrGW0Z", "iclr_2018_BJuWrGW0Z", "H1JAev9gz", "H1Pyl4sxM", "rkdmp2J-f", "H1Pyl4sxM", "rkdmp2J-f", "H1JAev9gz" ]
iclr_2018_S1Euwz-Rb
Compositional Attention Networks for Machine Reasoning
We present Compositional Attention Networks, a novel fully differentiable neural network architecture, designed to facilitate explicit and expressive reasoning. While many types of neural networks are effective at learning and generalizing from massive quantities of data, this model moves away from monolithic black-box...
accepted-poster-papers
PROS: 1. Good results on the CLEVR dataset. 2. The writing is clear. 3. The MAC unit is novel and interesting. 4. Ablation experiments are helpful. CONS: The authors overstate the degree to which they are doing "sound" and "transparent" reasoning. In particular, statements such as "Most neural networks are essentially very lar...
train
[ "H1-Icx3Vf", "ByrYKk3VG", "BJoJaojNz", "S1AP0Njxz", "B1Ewp2LVG", "Hyf1B_p7f", "Bkb9w8r4f", "Sk0oVNYlM", "SyKUVctlM", "BJQeEzAWG", "ByjkRDEgz", "SytM0vNlf", "H1gOE_mxM" ]
[ "author", "author", "official_reviewer", "official_reviewer", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "public" ]
[ "\nThank you very much for your review - we truly appreciate it! \nWe have uploaded a revision (by the rebuttal deadline, jan 5) that addresses all your comments:\n\n1. We have revised the description of the writing unit to make it more clear - we have experimented with several variants for this unit - the \"standa...
[ -1, -1, -1, 7, -1, -1, -1, 7, 6, -1, -1, -1, -1 ]
[ -1, -1, -1, 4, -1, -1, -1, 3, 4, -1, -1, -1, -1 ]
[ "SyKUVctlM", "BJoJaojNz", "S1AP0Njxz", "iclr_2018_S1Euwz-Rb", "Bkb9w8r4f", "iclr_2018_S1Euwz-Rb", "Sk0oVNYlM", "iclr_2018_S1Euwz-Rb", "iclr_2018_S1Euwz-Rb", "iclr_2018_S1Euwz-Rb", "H1gOE_mxM", "H1gOE_mxM", "iclr_2018_S1Euwz-Rb" ]
iclr_2018_BkXmYfbAZ
Beyond Shared Hierarchies: Deep Multitask Learning through Soft Layer Ordering
Existing deep multitask learning (MTL) approaches align layers shared between tasks in a parallel ordering. Such an organization significantly constricts the types of shared structure that can be learned. The necessity of parallel ordering for deep MTL is first tested by comparing it with permuted ordering of shared la...
accepted-poster-papers
PROS: 1. Clear, interesting idea. 2. Largely convincing evaluation. 3. Good writing. CONS: 1. The model used in the evaluation is a ResNet-50; the results could have been more convincing with a more SOTA model. 2. There is some concern about whether the comparison of results (Fig. 6c) is really apples to apples.
val
[ "r1q6Vx9lM", "rJbP7B_eG", "r1O8FN5lG", "B1HKW2pQG", "B1KE72qXG", "H1rSSptmf", "HJpV7ptmz", "SJyY-6FQf" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author" ]
[ "Summary: This paper proposes a different approach to deep multi-task learning using “soft ordering.” Multi-task learning encourages the sharing of learned representations across tasks, thus using less parameters and tasks help transfer useful knowledge across. Thus enabling the reuse of universally learned repres...
[ 7, 6, 7, -1, -1, -1, -1, -1 ]
[ 4, 3, 4, -1, -1, -1, -1, -1 ]
[ "iclr_2018_BkXmYfbAZ", "iclr_2018_BkXmYfbAZ", "iclr_2018_BkXmYfbAZ", "H1rSSptmf", "iclr_2018_BkXmYfbAZ", "rJbP7B_eG", "r1q6Vx9lM", "r1O8FN5lG" ]
iclr_2018_BJQRKzbA-
Hierarchical Representations for Efficient Architecture Search
We explore efficient neural architecture search methods and show that a simple yet powerful evolutionary algorithm can discover new architectures with excellent performance. Our approach combines a novel hierarchical genetic representation scheme that imitates the modularized design pattern commonly adopted by human ex...
accepted-poster-papers
PROS: 1. Overall, the paper is well written, clear in its exposition, and technically sound. 2. With some caveats, an independent team concluded that the results were "largely reproducible". 3. The key idea is a smart evolution scheme. It circumvents the traditional tradeoff between search space size and complexity of th...
test
[ "BJNK-sdxz", "SkbQs_cgz", "HkM52nagG", "HJ9nV7amG", "r1mfNm6mG", "H1yO14fzz" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "public" ]
[ "The fundamental contribution of the article is the explicit use of compositionality in the definition of the search space. Instead of merely defining an architecture as a Directed Acyclic Graph (DAG), with nodes corresponding to feature maps and edges to primitive operations, the approach in this paper introduces ...
[ 6, 6, 8, -1, -1, -1 ]
[ 3, 4, 4, -1, -1, -1 ]
[ "iclr_2018_BJQRKzbA-", "iclr_2018_BJQRKzbA-", "iclr_2018_BJQRKzbA-", "H1yO14fzz", "iclr_2018_BJQRKzbA-", "iclr_2018_BJQRKzbA-" ]
iclr_2018_ryTp3f-0-
Reinforcement Learning on Web Interfaces using Workflow-Guided Exploration
Reinforcement learning (RL) agents improve through trial-and-error, but when reward is sparse and the agent cannot discover successful action sequences, learning stagnates. This has been a notable problem in training deep RL agents to perform web-based tasks, such as booking flights or replying to emails, where a singl...
accepted-poster-papers
PROS: 1. Well written and clear. 2. Added an extra comparison to DAgger, which shows success. 3. SOTA results on the OpenAI benchmark problem and a comparison to relevant related work (Shi 2017). 4. Practical applications. 5. Created a new dataset to test harder aspects of the problem. CONS: 1. The algorithmic novelty is somewhat li...
train
[ "Hy6qBvbGM", "BJ448noJM", "H1asng9lG", "Syye3027M", "ByyXe3hQz", "HJ9ltkOQz", "S1KEPmUXM", "HkelwaS7M", "ryFlSzIMG", "S1WASuMbf", "HyInEOM-f" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "official_reviewer", "author", "official_reviewer", "author", "author", "author" ]
[ "Summary:\n\nThe authors propose a method to make exploration in really sparse reward tasks more efficient. They propose a method called Workflow Guided Exploration (WGE) which is learnt from demonstrations but is environment agnostic. Episodes are generated by first turning demonstrations to a workflow lattice. Th...
[ 7, 6, 7, -1, -1, -1, -1, -1, -1, -1, -1 ]
[ 4, 3, 3, -1, -1, -1, -1, -1, -1, -1, -1 ]
[ "iclr_2018_ryTp3f-0-", "iclr_2018_ryTp3f-0-", "iclr_2018_ryTp3f-0-", "iclr_2018_ryTp3f-0-", "HJ9ltkOQz", "S1KEPmUXM", "HkelwaS7M", "ryFlSzIMG", "Hy6qBvbGM", "BJ448noJM", "H1asng9lG" ]
iclr_2018_Hksj2WWAW
Combining Symbolic Expressions and Black-box Function Evaluations in Neural Programs
Neural programming involves training neural networks to learn programs, mathematics, or logic from data. Previous works have failed to achieve good generalization performance, especially on problems and programs with high complexity or on large domains. This is because they mostly rely either on black-box function eval...
accepted-poster-papers
Learn to complete an equation by filling in the blank with a missing function or numeral, and also to evaluate an expression. Along the way, learn to determine whether an identity holds (e.g., sin^2(x) + cos^2(x) = 1). They use a TreeNN with a separate node for each expression in the grammar. PROS: 1. They've put together a n...
train
[ "SJg6Lqp7f", "HkU17bFrM", "rk_xMk8ef", "SyFw0BPgz", "BywjthjgG", "H1fOMFa7z", "r1gRHOpmz", "BklhE_TXf", "HkG84_p7G" ]
[ "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author" ]
[ "\n**Reviewer 1:**\n\nPage 8, Table 3: fixed typo for the number of equations in the test set for depth 1 column\n\nPage 7, parag 1: we have included the explanation about the Sympy baseline as suggested by both reviewers 1 and 2. \n\nReferences: Added Piech et al 2015 \n\n**Reviewer 2:**\n\npage 6, paragraph 2 des...
[ -1, -1, 6, 8, 5, -1, -1, -1, -1 ]
[ -1, -1, 4, 3, 4, -1, -1, -1, -1 ]
[ "iclr_2018_Hksj2WWAW", "rk_xMk8ef", "iclr_2018_Hksj2WWAW", "iclr_2018_Hksj2WWAW", "iclr_2018_Hksj2WWAW", "r1gRHOpmz", "rk_xMk8ef", "SyFw0BPgz", "BywjthjgG" ]
iclr_2018_rkZB1XbRZ
Scalable Private Learning with PATE
The rapid adoption of machine learning has increased concerns about the privacy implications of machine learning models trained on sensitive data, such as medical records or other personal information. To address those concerns, one promising approach is Private Aggregation of Teacher Ensembles, or PATE, which transfer...
accepted-poster-papers
This paper extends last year's paper on PATE to large-scale, real-world datasets. The model works by training multiple "teacher" models -- one per dataset, where a dataset might be, for example, one user's data -- and then distilling those models into a student model. The teachers are all trained on disjoint data. Diff...
train
[ "r1nc91tez", "r1ERFjYef", "H1VNp87bz", "BklAIIiMz", "SkMva0lff", "SJ--T0xGf" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author" ]
[ "The paper proposes novel techniques for private learning with PATE framework. Two key ideas in the paper include the use of Gaussian noise for the aggregation mechanism in PATE instead of Laplace noise and selective answering strategy by teacher ensemble. In the experiments, the efficacy of the proposed techniques...
[ 6, 6, 7, -1, -1, -1 ]
[ 1, 4, 3, -1, -1, -1 ]
[ "iclr_2018_rkZB1XbRZ", "iclr_2018_rkZB1XbRZ", "iclr_2018_rkZB1XbRZ", "H1VNp87bz", "r1ERFjYef", "r1nc91tez" ]
iclr_2018_H1aIuk-RW
Active Learning for Convolutional Neural Networks: A Core-Set Approach
Convolutional neural networks (CNNs) have been successfully applied to many recognition and learning tasks using a universal recipe; training a deep model on a very large dataset of supervised examples. However, this approach is rather restrictive in practice since collecting a large set of labeled images is very expe...
accepted-poster-papers
The effectiveness of active learning techniques for training modern deep learning pipelines in a label-efficient manner is certainly a well-motivated topic. The reviewers unanimously found the contributions of this paper to be of interest, particularly the nice empirical gains over several natural baselines.
train
[ "BJfU-qDeG", "ByFZUzFlf", "BytfdZsxf", "H11TdQ44z", "ByBnqXEVG", "HkWB9LC7z", "BJb693emf", "rJrEFw2QG", "BkQix6emz", "HJh6HmUMz", "BJR9cmLzf", "rkZr6-yMM", "rkhz6ptlG" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "public", "author", "author", "author", "author", "author", "public", "public" ]
[ "After reading rebuttals from the authors: The authors have addressed all of my concerns. THe additional experiments are a good addition.\n\n************************\nThe authors provide an algorithm-agnostic active learning algorithm for multi-class classification. The core technique is to construct a coreset of p...
[ 7, 7, 7, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1 ]
[ 4, 4, 3, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1 ]
[ "iclr_2018_H1aIuk-RW", "iclr_2018_H1aIuk-RW", "iclr_2018_H1aIuk-RW", "HkWB9LC7z", "iclr_2018_H1aIuk-RW", "BJR9cmLzf", "ByFZUzFlf", "BytfdZsxf", "rkZr6-yMM", "BJfU-qDeG", "rkhz6ptlG", "iclr_2018_H1aIuk-RW", "iclr_2018_H1aIuk-RW" ]
iclr_2018_BkrSv0lA-
Loss-aware Weight Quantization of Deep Networks
The huge size of deep networks hinders their use in small computing devices. In this paper, we consider compressing the network by weight quantization. We extend a recently proposed loss-aware weight binarization scheme to ternarization, with possibly different scaling parameters for the positive and negative weights, ...
accepted-poster-papers
While novelty is not the main strength of this paper, there is consensus that the presentation is clear and the experimental results are convincing. Given the practical importance of designing and benchmarking methods to compress deep nets, the paper deserves to be presented at ICLR 2018.
train
[ "B1h7tYSlM", "Sy_vAgOgz", "Bk4fUyieG", "rkaeuvwGM", "rJ5VPPDMf", "S1_DIvDfM" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author" ]
[ "In this paper, the authors propose a method of compressing network by means of weight ternarization. The network weights ternatization is formulated in the form of loss-aware quantization, which originally proposed by Hou et al. (2017).\n\nTo this reviewer’s understanding, the proposed method can be regarded as th...
[ 8, 6, 6, -1, -1, -1 ]
[ 3, 4, 4, -1, -1, -1 ]
[ "iclr_2018_BkrSv0lA-", "iclr_2018_BkrSv0lA-", "iclr_2018_BkrSv0lA-", "B1h7tYSlM", "Sy_vAgOgz", "Bk4fUyieG" ]
iclr_2018_BJk7Gf-CZ
Global Optimality Conditions for Deep Neural Networks
We study the error landscape of deep linear and nonlinear neural networks with the squared error loss. Minimizing the loss of a deep linear neural network is a nonconvex problem, and despite recent progress, our understanding of this loss surface is still incomplete. For deep linear networks, we present necessary and s...
accepted-poster-papers
Understanding global optimality conditions for deep nets, even in the restricted case of linear layers, is a valuable contribution. Please clarify the ways in which the paper goes beyond the results of Kawaguchi'16, which was the main concern expressed by the reviewers.
train
[ "ryd2EvplG", "BJKj87vxG", "B1Mwz5dxM", "BJ5ebDa7M", "HkImgwTQG", "ryHYJwaQG" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author" ]
[ "The paper gives sufficient and necessary conditions for the global optimality of the loss function of deep linear neural networks. The paper is an extension of Kawaguchi'16. It also provides some sufficient conditions for the non-linear cases. \n\nI think the main technical concern with the paper is that the tech...
[ 5, 7, 8, -1, -1, -1 ]
[ 5, 4, 5, -1, -1, -1 ]
[ "iclr_2018_BJk7Gf-CZ", "iclr_2018_BJk7Gf-CZ", "iclr_2018_BJk7Gf-CZ", "BJKj87vxG", "B1Mwz5dxM", "ryd2EvplG" ]
iclr_2018_HJ_aoCyRZ
SpectralNet: Spectral Clustering using Deep Neural Networks
Spectral clustering is a leading and popular technique in unsupervised data analysis. Two of its major limitations are scalability and generalization of the spectral embedding (i.e., out-of-sample-extension). In this paper we introduce a deep learning approach to spectral clustering that overcomes the above shortcomin...
accepted-poster-papers
The paper proposes interesting deep learning based spectral clustering techniques. The use of functional embeddings for enabling spectral clustering to have an out-of-sample extension has of course been explored earlier (e.g., see Manifold Regularization work of Belkin et al, JMLR 2006). For polynomials or kernel-base...
train
[ "S1FGCYFef", "HJx_Ub9ez", "HylbmO3lf", "HkRVrYTmM", "Hy75fd6Qz", "B1fEvV3Xf", "By95P4h7M", "HylaPE3Qf", "S1EtIVnmf", "Hy_4L4nmM", "HJ0ur0V1G", "B1L_so4yz" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "author", "author", "author", "public" ]
[ "The authors study deep neural networks for spectral clustering in combination with stochastic optimization for large datasets. They apply VC theory to find a lower bound on the size of the network. \n\nOverall it is an interesting study, though the connections with the existing literature could be strengthened:\n\...
[ 6, 4, 7, -1, -1, -1, -1, -1, -1, -1, -1, -1 ]
[ 3, 4, 5, -1, -1, -1, -1, -1, -1, -1, -1, -1 ]
[ "iclr_2018_HJ_aoCyRZ", "iclr_2018_HJ_aoCyRZ", "iclr_2018_HJ_aoCyRZ", "Hy75fd6Qz", "S1EtIVnmf", "S1FGCYFef", "iclr_2018_HJ_aoCyRZ", "iclr_2018_HJ_aoCyRZ", "HJx_Ub9ez", "HylbmO3lf", "B1L_so4yz", "iclr_2018_HJ_aoCyRZ" ]
iclr_2018_Hk8XMWgRb
Not-So-Random Features
We propose a principled method for kernel learning, which relies on a Fourier-analytic characterization of translation-invariant or rotation-invariant kernels. Our method produces a sequence of feature maps, iteratively refining the SVM margin. We provide rigorous guarantees for optimality and generalization, interpret...
accepted-poster-papers
New effective kernel learning methods are very well aligned with ICLR's focus on Representation Learning. As a reviewer pointed out, not all aspects of the paper are algorithmically "clean". However, the proposed approach is natural and appears to give consistent improvements over a couple of expected baselines. The pa...
train
[ "SyPGF6dxM", "r1fhwN9eG", "Hyzp__aWM", "HJS-kvamM", "rkWcMz3QG", "HyJQF6S7f", "S1XVxiSGG", "rJGMxiBMf", "ryzCJiBfM" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "public", "author", "author", "author" ]
[ "The paper proposes to learn a custom translation or rotation invariant kernel in the Fourier representation to maximize the margin of SVM. Instead of using Monte Carlo approximation as in the traditional random features literature, the main point of the paper is to learn these Fourier features in a min-max sense. ...
[ 7, 6, 4, -1, -1, -1, -1, -1, -1 ]
[ 3, 5, 5, -1, -1, -1, -1, -1, -1 ]
[ "iclr_2018_Hk8XMWgRb", "iclr_2018_Hk8XMWgRb", "iclr_2018_Hk8XMWgRb", "iclr_2018_Hk8XMWgRb", "HyJQF6S7f", "iclr_2018_Hk8XMWgRb", "SyPGF6dxM", "r1fhwN9eG", "Hyzp__aWM" ]
iclr_2018_Hkn7CBaTW
Learning how to explain neural networks: PatternNet and PatternAttribution
DeConvNet, Guided BackProp, LRP, were invented to better understand deep neural networks. We show that these methods do not produce the theoretically correct explanation for a linear model. Yet they are used on multi-layer networks with millions of parameters. This is a cause for concern since linear models are simple ...
accepted-poster-papers
The paper shows that many current state-of-the-art interpretability methods are inaccurate even for linear models. Based on their analysis of linear models, the authors then propose a technique that is accurate for linear models and also empirically performs well for non-linear models such as DNNs.
train
[ "S1_hZzxBM", "ByUZ0g5lG", "Hk0lS3teG", "H1AKArLNf", "BJ711zqxG", "rJv-eUTQG", "SJZuJ8TQf", "HJ4bkI6mG", "HyG5aL9C-", "rJUSnbE0b", "B12BY_ATZ" ]
[ "public", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "public", "public", "public" ]
[ "Both integrated gradients [1] and DeepLift [2] are good, commonly used methods which are certainly fast enough to be used in the image degradation experiment. A meaningful comparison to them, or any method published within the past two years, is very clearly missing from this paper.\n\nContext: I'm an active resea...
[ -1, 8, 8, -1, 6, -1, -1, -1, -1, -1, -1 ]
[ -1, 3, 4, -1, 4, -1, -1, -1, -1, -1, -1 ]
[ "H1AKArLNf", "iclr_2018_Hkn7CBaTW", "iclr_2018_Hkn7CBaTW", "rJv-eUTQG", "iclr_2018_Hkn7CBaTW", "Hk0lS3teG", "BJ711zqxG", "ByUZ0g5lG", "B12BY_ATZ", "iclr_2018_Hkn7CBaTW", "iclr_2018_Hkn7CBaTW" ]
iclr_2018_ByOfBggRZ
Detecting Statistical Interactions from Neural Network Weights
Interpreting neural networks is a crucial and challenging task in machine learning. In this paper, we develop a novel framework for detecting statistical interactions captured by a feedforward multilayer neural network by directly interpreting its learned weights. Depending on the desired interactions, our method can a...
accepted-poster-papers
The paper proposes a way of detecting statistical interactions in a dataset based on the weights learned by a DNN. The idea is interesting and quite useful as is showcased in the experiments. The reviewers feel that the paper is also quite well written and easy to follow.
test
[ "SyQM6W5gz", "Hy8XC6Kxf", "Hy9gtVoxz", "HkVnyt57M", "HJf2U17mz", "BkR-FYvzf", "rJZYeXMzG", "S1xmKLxzM", "Bk7UD8gMf", "ry4GOIgGM" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "official_reviewer", "author", "author", "author" ]
[ "This paper presents a method to identify high-order interactions from the weights of feedforward neural networks. \n\nThe main benefits of the method are:\n1)\tCan detect high order interactions and there’s no need to specify the order (unlike, for example, in lasso-based methods).\n2)\tCan detect interactions ap...
[ 7, 7, 7, -1, -1, -1, -1, -1, -1, -1 ]
[ 4, 4, 5, -1, -1, -1, -1, -1, -1, -1 ]
[ "iclr_2018_ByOfBggRZ", "iclr_2018_ByOfBggRZ", "iclr_2018_ByOfBggRZ", "BkR-FYvzf", "Hy9gtVoxz", "rJZYeXMzG", "Bk7UD8gMf", "Hy8XC6Kxf", "Hy9gtVoxz", "SyQM6W5gz" ]
iclr_2018_r1ZdKJ-0W
Deep Gaussian Embedding of Graphs: Unsupervised Inductive Learning via Ranking
Methods that learn representations of nodes in a graph play a critical role in network analysis since they enable many downstream learning tasks. We propose Graph2Gauss - an approach that can efficiently learn versatile node embeddings on large scale (attributed) graphs that show strong performance on tasks such as lin...
accepted-poster-papers
The paper proposes a method to embed graph nodes into a Gaussian distribution rather than the standard latent vector embeddings. The reviewers concur that the method is interesting and the paper is well-written, especially after the opportunity to update.
train
[ "BkNHltugM", "HJhWAwsgM", "H1j0rCeZz", "BkijGb67z", "HkE-HMm-z", "BkEdFlQZz", "ry0Hue7bG", "BJKpLlX-M", "Syo1sTSgz", "HyzfYL4eG" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "author", "public" ]
[ "This paper is well-written and easy to follow. I didn't find any serious concerns and therefore suggest acceptance.\n\nPros\nMethodology\n1. inductive ability: can generalize to unseen nodes without any further training\n2. personalized ranking: the model uses natural ranking that embeddings of closer nodes (considers ...
[ 7, 6, 7, -1, -1, -1, -1, -1, -1, -1 ]
[ 4, 4, 3, -1, -1, -1, -1, -1, -1, -1 ]
[ "iclr_2018_r1ZdKJ-0W", "iclr_2018_r1ZdKJ-0W", "iclr_2018_r1ZdKJ-0W", "iclr_2018_r1ZdKJ-0W", "BJKpLlX-M", "BkNHltugM", "HJhWAwsgM", "H1j0rCeZz", "HyzfYL4eG", "iclr_2018_r1ZdKJ-0W" ]
iclr_2018_H1BLjgZCb
Generating Natural Adversarial Examples
Due to their complex nature, it is hard to characterize the ways in which machine learning models can misbehave or be exploited when deployed. Recent work on adversarial examples, i.e. inputs with minor perturbations that result in substantially different model predictions, is helpful in evaluating the robustness of th...
accepted-poster-papers
The paper proposes a method to generate adversaries close to the (training) data manifold using GANs rather than arbitrary adversaries. They show the effectiveness of their method in terms of human evaluation and success in fooling a deep network. The reviewers feel that this paper is for the most part well-written and...
train
[ "S1c2UjL4M", "By2zFR_gz", "HJLfGN_xM", "rkzZoW5xf", "ByEBICLQG", "HJwr1CWfM", "SyTyJCbzM", "H1RqCpWfz", "HJR1wYX1z", "Bk8fG911z", "HyF4SB6A-" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "public", "author", "public" ]
[ "Most of my concerns have been properly addressed. I agree with the author that the use of GANs to generate adversarial examples in text analysis is indeed novel. The importance and the application of the proposed methodology have now been depicted clearly. \n\nHowever, I still have two small issues: (1) The application o...
[ -1, 6, 7, 6, -1, -1, -1, -1, -1, -1, -1 ]
[ -1, 3, 4, 3, -1, -1, -1, -1, -1, -1, -1 ]
[ "SyTyJCbzM", "iclr_2018_H1BLjgZCb", "iclr_2018_H1BLjgZCb", "iclr_2018_H1BLjgZCb", "iclr_2018_H1BLjgZCb", "HJLfGN_xM", "By2zFR_gz", "rkzZoW5xf", "Bk8fG911z", "HyF4SB6A-", "iclr_2018_H1BLjgZCb" ]
iclr_2018_HyydRMZC-
Spatially Transformed Adversarial Examples
Recent studies show that widely used Deep neural networks (DNNs) are vulnerable to the carefully crafted adversarial examples. Many advanced algorithms have been proposed to generate adversarial examples by leveraging the L_p distance for penalizing perturbations. Different defense methods have also been ex...
accepted-poster-papers
All reviewers gave "accept" ratings. It seems that everyone thinks this is interesting work. The paper generated a large number of anonymous comments, and these were addressed by the authors.
train
[ "SynTtWlBG", "rJ5UfkeSM", "ry_xOQ5ef", "ryO_-j5NM", "S1uJLjCxz", "SJCbAbugf", "SyUPzf7Vz", "HyRHVgfVz", "rkPfBgGNG", "HyRMrDbNz", "HJRHE_aXz", "Hkh-EdTmf", "rJedEdamf", "S13te9vzf", "HJTI0MJMG", "ryqaGWfbf", "H1S0R27lf", "HJUFg7L1f", "ryZVaVr1M", "SyFCytM1z", "SyqTo1-JG", "...
[ "public", "author", "official_reviewer", "public", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "public", "public", "public", "author", "public", "public", "author", "public", "author", "public" ]
[ "Thanks for the reply!\n\nI see. I think it is a valid argument to say that it lies on different data manifold compared to FGSM and/or CW. Perhaps it can be considered to mention that in the future because I initially misunderstood that you want to show how robust the attack is, and I thought it was not a fair test...
[ -1, -1, 7, -1, 9, 7, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1 ]
[ -1, -1, 4, -1, 5, 4, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1 ]
[ "rJ5UfkeSM", "ryO_-j5NM", "iclr_2018_HyydRMZC-", "iclr_2018_HyydRMZC-", "iclr_2018_HyydRMZC-", "iclr_2018_HyydRMZC-", "ryqaGWfbf", "HJTI0MJMG", "S13te9vzf", "iclr_2018_HyydRMZC-", "ry_xOQ5ef", "S1uJLjCxz", "SJCbAbugf", "iclr_2018_HyydRMZC-", "iclr_2018_HyydRMZC-", "iclr_2018_HyydRMZC-"...
iclr_2018_ryBnUWb0b
Predicting Floor-Level for 911 Calls with Neural Networks and Smartphone Sensor Data
In cities with tall buildings, emergency responders need an accurate floor level location to find 911 callers quickly. We introduce a system to estimate a victim's floor level via their mobile device's sensor data in a two-step process. First, we train a neural network to determine when a smartphone enters or exits a b...
accepted-poster-papers
Reviewers agree that the paper is well done and addresses an interesting problem, but uses fairly standard ML techniques. The authors have responded to rebuttals with careful revisions, and improved results.
train
[ "ry1E-75eG", "B11TNj_gM", "ryca0nYef", "S1MIIQpXM", "SyX3mXp7f", "ryex-7aXz", "S1xJ7MTmM", "Bk4vzMTQG", "S1mOIY2mf", "rJp2Dt3XG", "B1kuDK27M", "B1gGvY3Qf", "BkGbWw9Xf", "H1zsR8cmz", "SJw9Jr4-G", "r1gRb9SWG", "Hkesgr4Zf" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "official_reviewer", "author", "author", "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "author", "author", "author" ]
[ "Update: Based on the discussions and the revisions, I have improved my rating. However, I still feel like the novelty is somewhat limited, hence the recommendation.\n\n======================\n\nThe paper introduces a system to estimate a caller's floor level via their mobile device's sensor data, using an LSTM to determine w...
[ 6, 7, 6, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1 ]
[ 4, 5, 3, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1 ]
[ "iclr_2018_ryBnUWb0b", "iclr_2018_ryBnUWb0b", "iclr_2018_ryBnUWb0b", "SyX3mXp7f", "ryex-7aXz", "Bk4vzMTQG", "Bk4vzMTQG", "S1mOIY2mf", "iclr_2018_ryBnUWb0b", "ryca0nYef", "B11TNj_gM", "H1zsR8cmz", "H1zsR8cmz", "iclr_2018_ryBnUWb0b", "ry1E-75eG", "B11TNj_gM", "ryca0nYef" ]
iclr_2018_SJLlmG-AZ
Understanding image motion with group representations
Motion is an important signal for agents in dynamic environments, but learning to represent motion from unlabeled video is a difficult and underconstrained problem. We propose a model of motion based on elementary group properties of transformations and use it to train a representation of image motion. While most metho...
accepted-poster-papers
An interesting model for an interesting problem, but perhaps of limited applicability: it doesn't achieve state-of-the-art results on practical tasks. The paper has other limitations, though the authors have addressed some in their rebuttals.
test
[ "S18F244xz", "SkG-iVtlz", "rJp8iHslM", "BkoQQPTXG", "BJokPHp7z", "rkZIUS6QM", "B1KiUB6mf" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author" ]
[ "The authors propose to learn the rigid motion group (translation and rotation) from a latent representation of image sequences without the need for explicit labels.\nWithin their data-driven approach they pose minimal assumptions on the model, requiring the group properties (associativity, invertibility, identity)...
[ 7, 5, 4, -1, -1, -1, -1 ]
[ 3, 4, 4, -1, -1, -1, -1 ]
[ "iclr_2018_SJLlmG-AZ", "iclr_2018_SJLlmG-AZ", "iclr_2018_SJLlmG-AZ", "iclr_2018_SJLlmG-AZ", "S18F244xz", "SkG-iVtlz", "rJp8iHslM" ]
iclr_2018_r1HhRfWRZ
Learning Awareness Models
We consider the setting of an agent with a fixed body interacting with an unknown and uncertain external world. We show that models trained to predict proprioceptive information about the agent's body come to represent objects in the external world. In spite of being trained with only internally available signals, thes...
accepted-poster-papers
Since this seems interesting, I suggest accepting this paper at the conference. However, there are still some serious issues with the paper, including missing references.
test
[ "BJb0-vDxz", "BkpHBrUNz", "ryAPav5lz", "Skozh1abf", "rkarB4izM", "BkpoxZszM", "H1qYeZjfz", "HJxOWU5MM", "r1MFgUcfz", "SkzIyI5zf" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author" ]
[ "The paper proposes an architecture for internal model learning of a robotic system and applies it to a simulated and a real robotic hand. The model allows making relatively long-term predictions with uncertainties. The models are used to perform model predictive control to achieve informative actions. It is shown...
[ 7, -1, 4, 4, -1, -1, -1, -1, -1, -1 ]
[ 4, -1, 4, 5, -1, -1, -1, -1, -1, -1 ]
[ "iclr_2018_r1HhRfWRZ", "SkzIyI5zf", "iclr_2018_r1HhRfWRZ", "iclr_2018_r1HhRfWRZ", "HJxOWU5MM", "H1qYeZjfz", "iclr_2018_r1HhRfWRZ", "Skozh1abf", "ryAPav5lz", "BJb0-vDxz" ]
iclr_2018_SyzKd1bCW
Backpropagation through the Void: Optimizing control variates for black-box gradient estimation
Gradient-based optimization is the foundation of deep learning and reinforcement learning. Even when the mechanism being optimized is unknown or not differentiable, optimization using high-variance or biased gradient estimates is still often the best strategy. We introduce a general framework for learning low-var...
accepted-poster-papers
This is an interesting and well-written paper introducing two unbiased gradient estimators for optimizing expectations of black box functions. LAX can handle functions of both continuous and discrete random variables, while RELAX is specialized to functions of discrete variables and can be seen as a version of the rece...
train
[ "HyFP5nE1M", "S1JPkCzlG", "S1a0antgG", "B1Gdj6qQM", "r148oh9QG", "Bk9nMTL7f", "rkSpZaImf", "SJLiWTImf", "H1Cjea8mG", "r1U0in8XG", "rkaDiETZf", "BylvIk5lG" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "public", "public" ]
[ "This paper introduces LAX/RELAX, a method to reduce the variance of the REINFORCE gradient estimator. The method builds on and is directly inspired by REBAR. Similarly to REBAR, RELAX is an unbiased estimator, and the idea is to introduce a control variate that leverages the reparameterization gradient. In contras...
[ 8, 7, 6, -1, -1, -1, -1, -1, -1, -1, -1, -1 ]
[ 4, 3, 2, -1, -1, -1, -1, -1, -1, -1, -1, -1 ]
[ "iclr_2018_SyzKd1bCW", "iclr_2018_SyzKd1bCW", "iclr_2018_SyzKd1bCW", "BylvIk5lG", "rkaDiETZf", "HyFP5nE1M", "SJLiWTImf", "H1Cjea8mG", "S1JPkCzlG", "S1a0antgG", "iclr_2018_SyzKd1bCW", "iclr_2018_SyzKd1bCW" ]
iclr_2018_rylSzl-R-
On Unifying Deep Generative Models
Deep generative models have achieved impressive success in recent years. Generative Adversarial Networks (GANs) and Variational Autoencoders (VAEs), as powerful frameworks for deep generative model learning, have largely been considered as two distinct paradigms and received extensive independent studies respectively. ...
accepted-poster-papers
This is a thought-provoking paper that places GANs and VAEs in a single framework and, motivated by this perspective, proposes several novel extensions to them. The reviewers made several good suggestions for improving the paper and the authors are expected to make the revisions they promised. The current title of the ...
train
[ "BkONJetlM", "SJAtVYteG", "SJIHn0tlz", "BJm4dObMz", "SJOCPubMz", "rknHPu-zM" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author" ]
[ "Update 1/11/18:\n\nI'm happy with the comments from the authors. I think the explanation of non-saturating vs saturating objective is nice, and I've increased the score.\n\nNote though: I absolutely expect a revision at camera-ready if the paper gets accepted (we did not get one).\n\nOriginal review:\nThe paper is...
[ 7, 6, 7, -1, -1, -1 ]
[ 4, 4, 3, -1, -1, -1 ]
[ "iclr_2018_rylSzl-R-", "iclr_2018_rylSzl-R-", "iclr_2018_rylSzl-R-", "BkONJetlM", "SJAtVYteG", "SJIHn0tlz" ]
iclr_2018_HyZoi-WRb
Debiasing Evidence Approximations: On Importance-weighted Autoencoders and Jackknife Variational Inference
The importance-weighted autoencoder (IWAE) approach of Burda et al. defines a sequence of increasingly tighter bounds on the marginal likelihood of latent variable models. Recently, Cremer et al. reinterpreted the IWAE bounds as ordinary variational evidence lower bounds (ELBO) applied to increasingly accurate variatio...
accepted-poster-papers
The authors analyze the IWAE bound as an estimator of the marginal log-likelihood and show how to reduce its bias by using the jackknife. They then evaluate the effect of using the resulting estimator (JVI) for training and evaluating VAEs on MNIST. This is an interesting and well written paper. It could be improved by...
test
[ "HyUn5KYxz", "B11KATDMf", "SkuC4bqez", "S1MVzM5ez", "ryjOLrWzG", "B1xfLS-zz", "H1gTrB-GG", "ry-rrBWMz", "H164Zf9eG", "B1xZQ0DCZ", "rkSCR0HCW" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "author", "public" ]
[ "[After author feedback]\nI think this is an interesting paper and recommend acceptance. My remaining main comments are described in the response to author feedback below.\n\n[Original review]\nThe authors introduce jackknife variational inference (JVI), a method for debiasing Monte Carlo objectives such as the imp...
[ 7, -1, 6, 7, -1, -1, -1, -1, -1, -1, -1 ]
[ 4, -1, 3, 3, -1, -1, -1, -1, -1, -1, -1 ]
[ "iclr_2018_HyZoi-WRb", "H1gTrB-GG", "iclr_2018_HyZoi-WRb", "iclr_2018_HyZoi-WRb", "H164Zf9eG", "SkuC4bqez", "HyUn5KYxz", "rkSCR0HCW", "iclr_2018_HyZoi-WRb", "rkSCR0HCW", "iclr_2018_HyZoi-WRb" ]
iclr_2018_rkrC3GbRW
Learning a Generative Model for Validity in Complex Discrete Structures
Deep generative models have been successfully used to learn representations for high-dimensional discrete spaces by representing discrete objects as sequences and employing powerful sequence-based deep models. Unfortunately, these sequence-based models often produce invalid sequences: sequences which do not represent a...
accepted-poster-papers
Viewing the problem of determining the validity of high-dimensional discrete sequences as a sequential decision problem, the authors propose learning a Q function that indicates whether the current sequence prefix can lead to a valid sequence. The paper is fairly well written and contains several interesting ideas. The...
val
[ "SJzxBpKeM", "r1bjT3VgM", "B1odDD8gM", "B1frSrpXf", "BkYQBH67f", "SyvGSB6Xz", "SJHWHrT7z", "SypyHBT7z", "HJ4T4Spmz", "H1HcVrTmM", "H1iONBaXG", "Hy_SEST7G", "S1C74HpXf", "H1U0mSp7M" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author" ]
[ "SUMMARY:\nThis work is about learning the validity of sequences in specific application domains like SMILES strings for chemical compounds. In particular, the main emphasis is on predicting if a prefix sequence could possibly be extended to a complete valid sequence. In other words, one tries to predict if there...
[ 6, 7, 7, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1 ]
[ 4, 3, 3, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1 ]
[ "iclr_2018_rkrC3GbRW", "iclr_2018_rkrC3GbRW", "iclr_2018_rkrC3GbRW", "r1bjT3VgM", "r1bjT3VgM", "r1bjT3VgM", "r1bjT3VgM", "r1bjT3VgM", "B1odDD8gM", "B1odDD8gM", "B1odDD8gM", "SJzxBpKeM", "SJzxBpKeM", "SJzxBpKeM" ]
iclr_2018_rkTS8lZAb
Boundary Seeking GANs
Generative adversarial networks are a learning framework that rely on training a discriminator to estimate a measure of difference between a target and generated distributions. GANs, as normally formulated, rely on the generated samples being completely differentiable w.r.t. the generative parameters, and thus do not w...
accepted-poster-papers
Training GANs to generate discrete data is a hard problem. This paper introduces a principled approach to it that uses importance sampling to estimate the gradient of the generator. The quantitative results, though minimal, appear promising and the generated samples look fine. The writing is clear, if unnecessarily hea...
train
[ "BJZMpg8Nf", "rJegU3tef", "r1in0YH4M", "r12KCYBVz", "H1FAQ9Flz", "H1lP_k5ez", "Hywj4vLXM", "BJkW_vUXM", "r1u6rv87f", "H1jZBvUXM", "ryQ3XBXzz", "HypaYt_Zf", "Hk-oKKdbz", "S1-90FdWz", "HJuHqFuWG" ]
[ "official_reviewer", "official_reviewer", "author", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "author", "author", "author", "author" ]
[ "I updated my review and increased my score to 7.", "Thanks for the feedback and for clarifying the 1) algorithm and the assumptions in the multivariate case 2) comparison to RL based methods 3) connection to estimating importance sampling weights using GAN discriminator.\n\nI think the paper contribution is now ...
[ -1, 7, -1, -1, 7, 4, -1, -1, -1, -1, -1, -1, -1, -1, -1 ]
[ -1, 4, -1, -1, 3, 3, -1, -1, -1, -1, -1, -1, -1, -1, -1 ]
[ "r12KCYBVz", "iclr_2018_rkTS8lZAb", "H1lP_k5ez", "rJegU3tef", "iclr_2018_rkTS8lZAb", "iclr_2018_rkTS8lZAb", "iclr_2018_rkTS8lZAb", "H1FAQ9Flz", "rJegU3tef", "H1lP_k5ez", "HypaYt_Zf", "H1FAQ9Flz", "rJegU3tef", "iclr_2018_rkTS8lZAb", "H1lP_k5ez" ]
iclr_2018_Hk0wHx-RW
Learning Sparse Latent Representations with the Deep Copula Information Bottleneck
Deep latent variable models are powerful tools for representation learning. In this paper, we adopt the deep information bottleneck model, identify its shortcomings and propose a model that circumvents them. To this end, we apply a copula transformation which, by restoring the invariance properties of the information b...
accepted-poster-papers
Observing that in contrast to classical information bottleneck, the deep variational information bottleneck (DVIB) model is not invariant to monotonic transformations of input and output marginals, the authors show how to incorporate this invariance along with sparsity in DVIB using the copula transform. The revised ve...
train
[ "ByJ3JKkBG", "S1ZdyY1SG", "H13MWgq4M", "rJYSUovgG", "ByQLos_xM", "ByR8Gr5gf" ]
[ "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "I would have also liked to see a more direct and systemic validation of the claims made in the paper. For example, the shortcomings of DIB identified in Section 3.1, 3.2 could have been verified more directly by plotting I(y,t) for various monotonic transformations of x.\n\nWe verified this for beta transformation...
[ -1, -1, 5, 6, 6, 6 ]
[ -1, -1, 4, 3, 3, 1 ]
[ "H13MWgq4M", "H13MWgq4M", "iclr_2018_Hk0wHx-RW", "iclr_2018_Hk0wHx-RW", "iclr_2018_Hk0wHx-RW", "iclr_2018_Hk0wHx-RW" ]
iclr_2018_S1cZsf-RW
WHAI: Weibull Hybrid Autoencoding Inference for Deep Topic Modeling
To train an inference network jointly with a deep generative topic model, making it both scalable to big corpora and fast in out-of-sample prediction, we develop Weibull hybrid autoencoding inference (WHAI) for deep latent Dirichlet allocation, which infers posterior samples via a hybrid of stochastic-gradient MCMC and...
accepted-poster-papers
The paper proposes a new approach for scalable training of deep topic models based on amortized inference for the local parameters and stochastic-gradient MCMC for the global ones. The key aspect of the method involves using Weibull distributions (instead of Gammas) to model the variational posteriors over the local ...
test
[ "B1gG5N5ez", "S1HoJBilG", "SyESlFoef", "BJoIAaEGz", "HJR836NGf", "B1kjsaEMG" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author" ]
[ "The authors develop a hybrid amortized variational inference MCMC inference framework for deep latent Dirichlet allocation. Their model consists of a stack of gamma factorization layers with a Poisson layer at the bottom. They amortize inference at the observation level using a Weibull approximation. The str...
[ 6, 6, 5, -1, -1, -1 ]
[ 4, 2, 4, -1, -1, -1 ]
[ "iclr_2018_S1cZsf-RW", "iclr_2018_S1cZsf-RW", "iclr_2018_S1cZsf-RW", "B1gG5N5ez", "S1HoJBilG", "SyESlFoef" ]
iclr_2018_H1MczcgR-
Understanding Short-Horizon Bias in Stochastic Meta-Optimization
Careful tuning of the learning rate, or even schedules thereof, can be crucial to effective neural net training. There has been much recent interest in gradient-based meta-optimization, where one tunes hyperparameters, or even learns an optimizer, in order to minimize the expected loss when the training procedure is un...
accepted-poster-papers
An interesting analysis of the issue of short-horizon bias in meta-optimization that highlights a real problem in a number of existing setups. I concur with Reviewer 3 that it would be nice to provide a constructive solution to this issue: if something like K-FAC does indeed work well, it would be a great addition to a...
train
[ "BkZRhnbxz", "Hkhtvm5eM", "B1EVroyWG", "Hkt9Hx9mG", "rJn-Vl9Xf", "H1TfSx5Xz", "SJBhNe9Xz", "By9IIjkZM" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer" ]
[ "The paper discusses the problems of meta-optimization with a small look-ahead: do small runs bias the results of tuning? The answer is yes, and the authors show how different the tuning can be compared to tuning on the full run. Greedy schedules are far inferior to hand-tuned schedules, as they focus on optimizing ...
[ 7, 6, 8, -1, -1, -1, -1, -1 ]
[ 4, 4, 3, -1, -1, -1, -1, -1 ]
[ "iclr_2018_H1MczcgR-", "iclr_2018_H1MczcgR-", "iclr_2018_H1MczcgR-", "BkZRhnbxz", "B1EVroyWG", "By9IIjkZM", "Hkhtvm5eM", "B1EVroyWG" ]
iclr_2018_rkpoTaxA-
Self-ensembling for visual domain adaptation
This paper explores the use of self-ensembling for visual domain adaptation problems. Our technique is derived from the mean teacher variant (Tarvainen et. al 2017) of temporal ensembling (Laine et al. 2017), a technique that achieved state of the art results in the area of semi-supervised learning. We introduce a numb...
accepted-poster-papers
An interesting application of self-ensembling/temporal ensembling for visual domain adaptation that achieves state of the art on the visual domain adaptation challenge. Reviewers noted that the approach is quite engineering-heavy, but I am not sure it's really much worse than making a pixel-to-pixel approach work well ...
train
[ "B1Ih1S54G", "S1HXzycxf", "HJ7P8yYNM", "r1uziOjxf", "S1OIJnigM", "Syu2DDUmz", "ryhrgwLXz", "HJqdkDLQG", "B1vyYcxyG" ]
[ "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author" ]
[ "Thanks for pointing this out, as it's most likely correct; problems will likely arise in situations where there is severe class imbalance in the target dataset.\n\nIf the editors permit it, we may need to add this caveat to our paper.", "This paper presents a domain adaptation algorithm based on the self-ensembl...
[ -1, 7, -1, 7, 7, -1, -1, -1, -1 ]
[ -1, 4, -1, 3, 5, -1, -1, -1, -1 ]
[ "HJ7P8yYNM", "iclr_2018_rkpoTaxA-", "Syu2DDUmz", "iclr_2018_rkpoTaxA-", "iclr_2018_rkpoTaxA-", "S1HXzycxf", "r1uziOjxf", "S1OIJnigM", "iclr_2018_rkpoTaxA-" ]
iclr_2018_SJi9WOeRb
Gradient Estimators for Implicit Models
Implicit models, which allow for the generation of samples but not for point-wise evaluation of probabilities, are omnipresent in real-world problems tackled by machine learning and a hot topic of current research. Some examples include data simulators that are widely used in engineering and scientific research, genera...
accepted-poster-papers
The paper presents the Stein gradient estimator, a kernelized direct estimate of the score function for implicitly defined models. The authors demonstrate the estimator for GANs, meta-learning for approx. inference in Bayesian NNs, and approximating gradient-free MCMC. The reviewers found the method interesting and pri...
train
[ "ryjc6__ez", "H1xEAg3lM", "rJ8QfICez", "Sydc7eZMz", "SJlbOODMG", "r1XbwWZMG", "Sy3KkZWfz", "r1zqfN7Zf", "rJ5a7tg-G", "HJS1ru6gG", "r19JQOTgG", "SJCCQvTgz", "SJxdjXBgM" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "author", "author", "public", "official_reviewer" ]
[ "Post rebuttal phase (see below for original comments)\n================================================================================\nI thank the authors for revising the manuscript. The method makes sense now, and I think it's quite interesting. While I do have some concerns (e.g. choice of eta, batching may n...
[ 7, 7, 6, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1 ]
[ 2, 4, 2, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1 ]
[ "iclr_2018_SJi9WOeRb", "iclr_2018_SJi9WOeRb", "iclr_2018_SJi9WOeRb", "rJ8QfICez", "ryjc6__ez", "ryjc6__ez", "H1xEAg3lM", "rJ5a7tg-G", "r19JQOTgG", "SJCCQvTgz", "SJxdjXBgM", "iclr_2018_SJi9WOeRb", "iclr_2018_SJi9WOeRb" ]
iclr_2018_B1nZ1weCZ
Learning to Multi-Task by Active Sampling
One of the long-standing challenges in Artificial Intelligence for learning goal-directed behavior is to build a single agent which can solve multiple tasks. Recent progress in multi-task learning for goal-directed sequential problems has been in the form of distillation based learning wherein a student network learns ...
accepted-poster-papers
The paper contains an interesting way to do online multi task learning, by borrowing ideas from active learning and comparing and contrasting a number of ways on the arcade learning environment. Like the reviewers, I have some concerns about using the target scores and I think more analysis would be needed to see just...
test
[ "B1lMs83HG", "r1XoHKtlf", "BkTVPAYeG", "HkFWypcgf", "rJj6KMfVG", "Sy6Q3DTXz", "ryLScD6QM", "rkpwKvT7M", "rybYG3Pgz", "H1kXk_weG", "H1KJtmmez" ]
[ "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "public", "author", "public" ]
[ "We thank the reviewer for increasing his score. We further address your comments below:\n\n> The paper should have contained a precise description of the DUA4C algorithm --not only experimental results--.\n\nThe paper does contain a precise description of the DUA4C algorithm. The Algorithm 7 on Page 23 is exactly t...
[ -1, 5, 7, 7, -1, -1, -1, -1, -1, -1, -1 ]
[ -1, 3, 3, 5, -1, -1, -1, -1, -1, -1, -1 ]
[ "rJj6KMfVG", "iclr_2018_B1nZ1weCZ", "iclr_2018_B1nZ1weCZ", "iclr_2018_B1nZ1weCZ", "Sy6Q3DTXz", "r1XoHKtlf", "BkTVPAYeG", "HkFWypcgf", "H1kXk_weG", "H1KJtmmez", "iclr_2018_B1nZ1weCZ" ]
iclr_2018_rkHywl-A-
Learning Robust Rewards with Adversarial Inverse Reinforcement Learning
Reinforcement learning provides a powerful and general framework for decision making and control, but its application in practice is often hindered by the need for extensive feature and reward engineering. Deep reinforcement learning methods can remove the need for explicit engineering of policy or va...
accepted-poster-papers
The AIRL is presented as a scalable inverse reinforcement learning algorithm. A key idea is to produce "disentangled rewards", which are invariant to changing dynamics; this is done by having the rewards depend only on the current state. There are some similarities with GAIL and the authors argue that this is effective...
train
[ "HyKxU-erf", "S1Nj--xSG", "BJ-3TanEM", "SJfeNePVM", "Hyn6kL_xG", "ryZzenclz", "ryyF8NyZM", "BJz9VDTQG", "SJAvEPa7f", "SJnzNvaXz", "B1c_e2Fzf", "S1HsAfGzf", "rkGMGe-bG", "r17fiybWM", "SJbt4SDez", "HJOGkoD1z", "H11JStDyG", "Skxih5LyM" ]
[ "author", "author", "public", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "public", "public", "public", "public", "public", "public", "author", "public" ]
[ "If the ground truth reward depends on both states and actions, the algorithm cannot represent the true reward and thus the performance of the policy will not match that of the experts (we have included new experiments in Section 7.3 for this case). The results will likely depend on the task - in our experiments th...
[ -1, -1, -1, -1, 6, 6, 7, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1 ]
[ -1, -1, -1, -1, 4, 2, 3, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1 ]
[ "B1c_e2Fzf", "BJ-3TanEM", "iclr_2018_rkHywl-A-", "BJz9VDTQG", "iclr_2018_rkHywl-A-", "iclr_2018_rkHywl-A-", "iclr_2018_rkHywl-A-", "ryZzenclz", "Hyn6kL_xG", "ryyF8NyZM", "iclr_2018_rkHywl-A-", "iclr_2018_rkHywl-A-", "ryZzenclz", "ryyF8NyZM", "iclr_2018_rkHywl-A-", "H11JStDyG", "Skxih...
iclr_2018_B1DmUzWAW
A Simple Neural Attentive Meta-Learner
Deep neural networks excel in regimes with large amounts of data, but tend to struggle when data is scarce or when they need to adapt quickly to changes in the task. In response, recent work in meta-learning proposes training a meta-learner on a distribution of similar tasks, in the hopes of generalization to novel but...
accepted-poster-papers
An interesting new approach for doing meta-learning incorporating temporal convolution blocks and soft attention. Achieves impressive SOTA results on few shot learning tasks and a number of RL tasks. I appreciate the authors doing the ablation studies in the appendix as that raises my confidence in the novelty aspect o...
train
[ "r1Ma5fixz", "S1J6xOmgf", "BJDSdqqxM", "BJtifUTQf", "rJC87UpmM", "SydimI6QG", "rJpTzLamf", "BJSLNSp7f", "Sy5QVH6mG", "B1Cg_OA-f", "BJGc5UgxM" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "public", "public" ]
[ "This work proposes an approach to meta-learning in which temporal convolutions and attention are used to synthesize labeled examples (for few-shot classification) or action-reward pairs (for reinforcement learning) in order to take the appropriate action. The resulting model is general-purpose and experiments demo...
[ 7, 6, 6, -1, -1, -1, -1, -1, -1, -1, -1 ]
[ 4, 3, 3, -1, -1, -1, -1, -1, -1, -1, -1 ]
[ "iclr_2018_B1DmUzWAW", "iclr_2018_B1DmUzWAW", "iclr_2018_B1DmUzWAW", "iclr_2018_B1DmUzWAW", "BJDSdqqxM", "S1J6xOmgf", "r1Ma5fixz", "BJGc5UgxM", "B1Cg_OA-f", "iclr_2018_B1DmUzWAW", "iclr_2018_B1DmUzWAW" ]
iclr_2018_SywXXwJAb
Deep Learning and Quantum Entanglement: Fundamental Connections with Implications to Network Design
Formal understanding of the inductive bias behind deep convolutional networks, i.e. the relation between the network's architectural features and the functions it is able to model, is limited. In this work, we establish a fundamental connection between the fields of quantum physics and deep learning, and use it for obt...
accepted-poster-papers
This paper seemingly joins a cohort of ICLR submissions which attempt to port mature concepts from physics to machine learning, make a complex and non-trivial theoretical contribution, and fall short on the empirical front. The one aspect that sets this apart from its peers is that the reviewers agree that the theoreti...
val
[ "r183ZUKVz", "BJX98ixfM", "SJWogYvEM", "SJ45Qm8Zz", "Hk3NX9vWM", "SJw9gV2ZM", "HJbGEWHGf", "HyS9hL6Wz", "B18GHL6Wf", "S1jF4IaWM" ]
[ "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "public", "public", "public" ]
[ "We thank reviewer for the time and effort invested, and for the consideration of our response. \n\nBy \"empirical trends we observed with ConvACs were precisely the same\", we mean that with ConvACs, as with ReLU ConvNets, \"wide-base\" architecture had a clear advantage over \"wide-tip\" in the local task, where...
[ -1, 6, -1, 7, 8, 6, -1, -1, -1, -1 ]
[ -1, 4, -1, 3, 5, 2, -1, -1, -1, -1 ]
[ "SJWogYvEM", "iclr_2018_SywXXwJAb", "HJbGEWHGf", "iclr_2018_SywXXwJAb", "iclr_2018_SywXXwJAb", "iclr_2018_SywXXwJAb", "BJX98ixfM", "SJw9gV2ZM", "Hk3NX9vWM", "SJ45Qm8Zz" ]
iclr_2018_Skp1ESxRZ
Towards Synthesizing Complex Programs From Input-Output Examples
In recent years, deep learning techniques have been developed to improve the performance of program synthesis from input-output examples. Despite its significant progress, the programs that can be synthesized by state-of-the-art approaches are still simple in terms of their complexity. In this work, we move a significan...
accepted-poster-papers
This paper proposes a method for training a neural network to operate a stack-based mechanism as a CFG parser in order to, eventually, improve program synthesis and program induction systems. The reviewers agreed that the paper was compelling and well supported empirically, although one reviewer suggeste...
train
[ "HJ7k9-5ef", "Hyp9KYPez", "SyKcRn9xf", "BJo8MfaQG", "HJlWhynmM", "S17K82EQf", "rkVJZtiMG", "Bko83vXfG", "rJCciDXff", "HJKbiwmMG", "SJcoYPXff", "r1atCHsbG" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "author", "author", "author", "public" ]
[ "Summary:\nI thank the authors for their update and clarifications. They have addressed my concerns, and I will keep my score as it is.\n\n-----------------------------------------------------------------------\nThe authors present a system that parses DSL expressions into syntax trees when trained using input-out...
[ 8, 7, 5, -1, -1, -1, -1, -1, -1, -1, -1, -1 ]
[ 3, 4, 2, -1, -1, -1, -1, -1, -1, -1, -1, -1 ]
[ "iclr_2018_Skp1ESxRZ", "iclr_2018_Skp1ESxRZ", "iclr_2018_Skp1ESxRZ", "HJlWhynmM", "HJKbiwmMG", "SJcoYPXff", "iclr_2018_Skp1ESxRZ", "r1atCHsbG", "SyKcRn9xf", "HJ7k9-5ef", "Hyp9KYPez", "iclr_2018_Skp1ESxRZ" ]
iclr_2018_S1WRibb0Z
Expressive power of recurrent neural networks
Deep neural networks are surprisingly efficient at solving practical tasks, but the theory behind this phenomenon is only starting to catch up with the practice. Numerous works show that depth is the key to this efficiency. A certain class of deep convolutional networks – namely those that correspond ...
accepted-poster-papers
This paper offers a theoretical and empirical analysis of the expressivity of RNNs, in particular in comparison to TT decomposition. The reviewers argued the results were interesting and important, although there were issues with the clarity of some of the explanations. More critical reviewers argued the comparison basis wi...
val
[ "ryxBhTjgM", "SJr9X58lz", "HyjYq-DgM", "HJgCQmtQG", "S1KW7QFQG", "r17DGXtXf", "rk4NWQt7M" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author" ]
[ "The authors of this paper first present a class of networks inspired by various tensor decomposition models. Then they focus on one particular decomposition known as the tensor train decomposition and point out an analogy between tensor train networks and recurrent neural networks. Finally the authors show that al...
[ 6, 6, 6, -1, -1, -1, -1 ]
[ 4, 5, 3, -1, -1, -1, -1 ]
[ "iclr_2018_S1WRibb0Z", "iclr_2018_S1WRibb0Z", "iclr_2018_S1WRibb0Z", "SJr9X58lz", "HyjYq-DgM", "ryxBhTjgM", "iclr_2018_S1WRibb0Z" ]
iclr_2018_rJlMAAeC-
Improving the Universality and Learnability of Neural Programmer-Interpreters with Combinator Abstraction
To overcome the limitations of Neural Programmer-Interpreters (NPI) in its universality and learnability, we propose the incorporation of combinator abstraction into neural programing and a new NPI architecture to support this abstraction, which we call Combinatory Neural Programmer-Interpreter (CNPI). Combinator abstr...
accepted-poster-papers
This paper present a functional extension to NPI, allowing the learning of simpler, more expressive programs. Although the conference does not put explicit bounds on the length of papers, the authors pushed their luck with their initial submission (a body of 14 pages). It is clear, from the discussion and the reviews,...
train
[ "Syil6yaVf", "rkRojIHxz", "BkswAkLlG", "ByvgbYFeG", "rJ7yRfj7M", "HkCyUXWMG", "ryjjDGbMM", "SyKHGQbfz", "rJQJbQWff", "S1HgOM-Gz", "HkID7QZfz", "H15q-7WGM", "ryggtIU-f" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "author", "public" ]
[ "I have now also read the revised version of the paper, i.e., the 12-page version. \n\nThe paper is very interesting to read, however slightly hard to digest when you are not familiar with NPI. \n\nThe paper presents a clear contribution in addition to previous work, i.e., identifying and proposing a set of four co...
[ -1, 3, 7, 7, -1, -1, -1, -1, -1, -1, -1, -1, -1 ]
[ -1, 4, 4, 4, -1, -1, -1, -1, -1, -1, -1, -1, -1 ]
[ "rkRojIHxz", "iclr_2018_rJlMAAeC-", "iclr_2018_rJlMAAeC-", "iclr_2018_rJlMAAeC-", "iclr_2018_rJlMAAeC-", "rkRojIHxz", "ByvgbYFeG", "BkswAkLlG", "BkswAkLlG", "ByvgbYFeG", "ryggtIU-f", "BkswAkLlG", "rkRojIHxz" ]
iclr_2018_HJvvRoe0W
An image representation based convolutional network for DNA classification
The folding structure of the DNA molecule combined with helper molecules, also referred to as the chromatin, is highly relevant for the functional properties of DNA. The chromatin structure is largely determined by the underlying primary DNA sequence, though the interaction is not yet fully understood. In this paper we...
accepted-poster-papers
This paper addresses an important application in genomics, i.e. the prediction of chromatin structure from nucleotide sequences. The authors develop a novel method for converting the nucleotide sequences to a 2D structure that allows a CNN to detect interactions between distant parts of the sequence. The reviewers fo...
val
[ "r1nyJDpZG", "r18RxrXlG", "BkZQ81QlG", "r19CoevgM", "r1IqLpczf", "Hyi4DyMzG", "r1bLW-MWz" ]
[ "author", "official_reviewer", "official_reviewer", "official_reviewer", "author", "public", "public" ]
[ "Dear reviewer,\n\nThank you very much for your comment. We feel that you have made an excellent point here regarding the filter-size difference between the 1D-sequence CNN and the Hilbert CNN. Your suggestion has resulted in further insight into the particular advantages of the components of our approach, which, ...
[ -1, 7, 7, 7, -1, -1, -1 ]
[ -1, 5, 3, 5, -1, -1, -1 ]
[ "r1bLW-MWz", "iclr_2018_HJvvRoe0W", "iclr_2018_HJvvRoe0W", "iclr_2018_HJvvRoe0W", "Hyi4DyMzG", "iclr_2018_HJvvRoe0W", "iclr_2018_HJvvRoe0W" ]
iclr_2018_rydeCEhs-
SMASH: One-Shot Model Architecture Search through HyperNetworks
Designing architectures for deep neural networks requires expert knowledge and substantial computation time. We propose a technique to accelerate architecture selection by learning an auxiliary HyperNet that generates the weights of a main model conditioned on that model's architecture. By comparing the relative valida...
accepted-poster-papers
This paper proposes a method for having a meta deep learning model generate the weights of a main model given a proposed architecture. This allows the authors to search over the space of architectures efficiently. The reviewers agreed that the paper was very well composed, presents an interesting and thought provokin...
train
[ "SkmGrjvlz", "rJ200-5ez", "SycMimAgG", "HysNsD6XM", "ryK_yyjGM" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author" ]
[ "Summary of paper - This paper presents SMASH (or the one-Shot Model Architecture Search through Hypernetworks) which has two training phases (one to quickly train a random sample of network architectures and one to train the best architecture from the first stage). The paper presents a number of interesting experi...
[ 7, 7, 6, -1, -1 ]
[ 3, 4, 2, -1, -1 ]
[ "iclr_2018_rydeCEhs-", "iclr_2018_rydeCEhs-", "iclr_2018_rydeCEhs-", "iclr_2018_rydeCEhs-", "SkmGrjvlz" ]
iclr_2018_ByBAl2eAZ
Parameter Space Noise for Exploration
Deep reinforcement learning (RL) methods generally engage in exploratory behavior through noise injection in the action space. An alternative is to add noise directly to the agent's parameters, which can lead to more consistent exploration and a richer set of behaviors. Methods such as evolutionary strategies use param...
accepted-poster-papers
This paper proposes adding noise to the parameters of a deep network when taking actions in deep reinforcement learning to encourage exploration. The method is simple but the authors demonstrate its effectiveness through thorough empirical analysis across a variety of reinforcement learning tasks (i.e. DQN, DDPG, and ...
train
[ "S1Zd1FUEM", "By9jfdZkM", "r1gBpq_gG", "ryVd3dFgf", "HJxWJ0eEG", "HyUtjoTmG", "HJjO3r97f", "HJUU9kZzf", "ByiX9J-zf", "SkaJ5JWMG" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "public", "author", "author", "author", "author" ]
[ "I read the authors' response and other reviews. I think the authors have addressed most concerns (I'm still curious about the discrepancy in the DDPG result). My rating was already positive so I've left it unchanged.", "This paper explores the idea of adding parameter space noise in service of exploration. The pape...
[ -1, 6, 7, 7, -1, -1, -1, -1, -1, -1 ]
[ -1, 4, 4, 5, -1, -1, -1, -1, -1, -1 ]
[ "ByiX9J-zf", "iclr_2018_ByBAl2eAZ", "iclr_2018_ByBAl2eAZ", "iclr_2018_ByBAl2eAZ", "HyUtjoTmG", "iclr_2018_ByBAl2eAZ", "iclr_2018_ByBAl2eAZ", "By9jfdZkM", "r1gBpq_gG", "ryVd3dFgf" ]
iclr_2018_r1VVsebAZ
Synthesizing realistic neural population activity patterns using Generative Adversarial Networks
The ability to synthesize realistic patterns of neural activity is crucial for studying neural information processing. Here we used the Generative Adversarial Networks (GANs) framework to simulate the concerted activity of a population of neurons. We adapted the Wasserstein-GAN variant to facilitate the generatio...
accepted-poster-papers
This paper proposes a novel application of generative adversarial networks to model neural spiking activity. Their technical contribution, SpikeGAN, generates neural spikes that accurately match the statistics of real recorded spiking behavior from a small number of neurons. The paper is controversial among the revie...
train
[ "H1V6FdKgz", "SkQHi3FgM", "H1cZ1g0ef", "Hylb3fp7M", "BJIXLEYfM", "BkF3VEtff", "HyoJNNtGG", "r1hc5xv0Z" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author" ]
[ "[Summary of paper] The paper presents a method for simulating spike trains from populations of neurons which match empirically measured multi-neuron recordings. They set up a Wasserstein-GAN and train it on both synthetic and real multi-neuron recordings, using data from the salamander retina. They find that their...
[ 8, 4, 6, -1, -1, -1, -1, -1 ]
[ 5, 4, 3, -1, -1, -1, -1, -1 ]
[ "iclr_2018_r1VVsebAZ", "iclr_2018_r1VVsebAZ", "iclr_2018_r1VVsebAZ", "iclr_2018_r1VVsebAZ", "H1V6FdKgz", "SkQHi3FgM", "H1cZ1g0ef", "iclr_2018_r1VVsebAZ" ]
iclr_2018_BJ8c3f-0b
Auto-Encoding Sequential Monte Carlo
We build on auto-encoding sequential Monte Carlo (AESMC): a method for model and proposal learning based on maximizing the lower bound to the log marginal likelihood in a broad family of structured probabilistic models. Our approach relies on the efficiency of sequential Monte Carlo (SMC) for performing inference in st...
accepted-poster-papers
This work develops importance weighted autoencoder-like training but with sequential Monte Carlo. The paper is interesting, well written and the methods are very timely (there are two highly related concurrent papers - Naesseth et al. and Maddison et al.). Initially, the reviewers shared concerns about the technical...
train
[ "rkN3iHo4f", "r132qrjEG", "Syi1Z34ZG", "HkqLp4GeM", "BkaCH2KlG", "SybDOZMEz", "BkgqKr6Xf", "S1Eei4pQM", "H1Uo_Namf", "ByvVq0m-G" ]
[ "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "official_reviewer" ]
[ "Thank you for the further comments.\n\n%%% You argue that increased K ... %%%\n\n>>> The argument about increasing K being detrimental to optimizing q(z|x) is actually distinct from the bound potentially not being tight. Increasing K can be detrimental because it undermines our ability to reliably estimate the gra...
[ -1, -1, 7, 3, 7, -1, -1, -1, -1, -1 ]
[ -1, -1, 3, 2, 4, -1, -1, -1, -1, -1 ]
[ "SybDOZMEz", "Syi1Z34ZG", "iclr_2018_BJ8c3f-0b", "iclr_2018_BJ8c3f-0b", "iclr_2018_BJ8c3f-0b", "H1Uo_Namf", "Syi1Z34ZG", "HkqLp4GeM", "BkaCH2KlG", "HkqLp4GeM" ]
iclr_2018_HJewuJWCZ
Learning to Teach
Teaching plays a very important role in our society, by spreading human knowledge and educating our next generations. A good teacher will select appropriate teaching materials, impart suitable methodologies, and set up targeted examinations, according to the learning behaviors of the students. In the field of artificia...
accepted-poster-papers
The paper addresses the problem of learning a teacher model which selects the training samples for the next mini-batch used by the student model. The proposed solution is to learn the teacher model using policy gradient. It is an interesting training setting, and the evaluation demonstrates that the method outperforms ...
train
[ "SkCogB4Hf", "rkECpdo7M", "HJoIn0tEz", "BkMvqjYgG", "rJPwQZYgM", "SkV5DeTlG", "ByKbNaRWG", "Hy4tuUsmf", "B1GMHpCbM", "HJfPE60-M", "SkDirp7mf", "SJ3_maCWG", "rktvBfRWz" ]
[ "public", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "public" ]
[ "Dear the authors,\n\nI really like this work, it's great and insightful. \nI have some questions about the implementation of the model.\n\n1). Could you provide the exact setting of hyper parameter: T' (the maximum iteration number)?\n\n2). In section 5.1.2, you mentioned that \"the base neural network model will ...
[ -1, -1, -1, 8, 9, 5, -1, -1, -1, -1, -1, -1, -1 ]
[ -1, -1, -1, 4, 3, 4, -1, -1, -1, -1, -1, -1, -1 ]
[ "iclr_2018_HJewuJWCZ", "iclr_2018_HJewuJWCZ", "SkV5DeTlG", "iclr_2018_HJewuJWCZ", "iclr_2018_HJewuJWCZ", "iclr_2018_HJewuJWCZ", "SkV5DeTlG", "rktvBfRWz", "rJPwQZYgM", "BkMvqjYgG", "SkV5DeTlG", "iclr_2018_HJewuJWCZ", "iclr_2018_HJewuJWCZ" ]
iclr_2018_Syhr6pxCW
PixelNN: Example-based Image Synthesis
We present a simple nearest-neighbor (NN) approach that synthesizes high-frequency photorealistic images from an ``incomplete'' signal such as a low-resolution image, a surface normal map, or edges. Current state-of-the-art deep generative models designed for such conditional image synthesis lack two important things: ...
accepted-poster-papers
The paper proposes a novel method for conditional image generation which is based on nearest neighbor matching for transferring high-frequency statistics. The evaluation is carried out on several image synthesis tasks, where the technique is shown to perform better than an adversarial baseline.
train
[ "SJt3bbKgz", "BJ4AfUoeG", "SJt9X9kWz", "BJxxa-XzG", "HJ9SiWmfM", "ry2edZQMf" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author" ]
[ "Overall I like the paper and the results look nice in a diverse set of datasets and tasks such as edge-to-image, super-resolution, etc. Unlike the generative distribution sampling of GANs, the method provides an interesting compositional scheme, where the low frequencies are regressed and the high frequencies are ...
[ 8, 6, 7, -1, -1, -1 ]
[ 4, 4, 3, -1, -1, -1 ]
[ "iclr_2018_Syhr6pxCW", "iclr_2018_Syhr6pxCW", "iclr_2018_Syhr6pxCW", "SJt9X9kWz", "BJ4AfUoeG", "SJt3bbKgz" ]
iclr_2018_B1l8BtlCb
Non-Autoregressive Neural Machine Translation
Existing approaches to neural machine translation condition each output word on previously generated outputs. We introduce a model that avoids this autoregressive property and produces its outputs in parallel, allowing an order of magnitude lower latency during inference. Through knowledge distillation, the use of inpu...
accepted-poster-papers
The paper proposes a novel method for training a non-autoregressive machine translation model based on a pre-trained auto-regressive model. The method is interesting and the evaluation is carried out well. It should be noted, however, that the relative complexity of the training procedure (which involves multiple stage...
train
[ "HJ-eMYYxz", "B1Zh3McgM", "rJKwhzhxM", "S1VYsm9mM", "B1oOt7qQf", "S1U8U7qXM", "Sk-GdGflM", "SJ7QvnZlM" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "public" ]
[ "This work proposes a non-autoregressive decoder for the encoder-decoder framework in which the decision of generating a word does not depend on the prior decision of generated words. The key idea is to model the fertility of each word so that copies of source words are fed as input to the decoder, not the gen...
[ 7, 7, 6, -1, -1, -1, -1, -1 ]
[ 4, 4, 4, -1, -1, -1, -1, -1 ]
[ "iclr_2018_B1l8BtlCb", "iclr_2018_B1l8BtlCb", "iclr_2018_B1l8BtlCb", "HJ-eMYYxz", "B1Zh3McgM", "rJKwhzhxM", "SJ7QvnZlM", "iclr_2018_B1l8BtlCb" ]
iclr_2018_HJtEm4p6Z
Deep Voice 3: Scaling Text-to-Speech with Convolutional Sequence Learning
We present Deep Voice 3, a fully-convolutional attention-based neural text-to-speech (TTS) system. Deep Voice 3 matches state-of-the-art neural speech synthesis systems in naturalness while training an order of magnitude faster. We scale Deep Voice 3 to dataset sizes unprecedented for TTS, training on more than eight h...
accepted-poster-papers
The paper describes a production-ready neural text-to-speech system. The algorithmic novelty is somewhat limited, as the fully-convolutional sequence model with attention is based on the previous work. The main contribution of the paper is the description of the complete system in full detail. I would encourage the aut...
train
[ "S1c4VEXWz", "r1Ps9aPez", "rJo8vWqgM", "SJVdGOpQG", "HJ-WJuTXf", "SJJktDpQz", "r1ciaLa7M" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author" ]
[ "This paper provides an overview of the Deep Voice 3 text-to-speech system. It describes the system in a fair amount of detail and discusses some trade-offs w.r.t. audio quality and computational constraints. Some experimental validation of certain architectural choices is also provided.\n\nMy main concern with thi...
[ 7, 6, 6, -1, -1, -1, -1 ]
[ 4, 5, 3, -1, -1, -1, -1 ]
[ "iclr_2018_HJtEm4p6Z", "iclr_2018_HJtEm4p6Z", "iclr_2018_HJtEm4p6Z", "HJ-WJuTXf", "r1Ps9aPez", "S1c4VEXWz", "rJo8vWqgM" ]
iclr_2018_r1Ddp1-Rb
mixup: Beyond Empirical Risk Minimization
Large deep neural networks are powerful, but exhibit undesirable behaviors such as memorization and sensitivity to adversarial examples. In this work, we propose mixup, a simple learning principle to alleviate these issues. In essence, mixup trains a neural network on convex combinations of pairs of examples and their ...
accepted-poster-papers
The paper presents a simple but surprisingly effective data augmentation technique which is thoroughly evaluated on a variety of classification tasks, leading to improvement over state-of-the-art baselines. The paper is somewhat lacking a theoretical justification beyond intuitions, but extensive evaluation makes up fo...
val
[ "SJsGHZyrz", "BJrLsP6Ez", "HJrOW5zeG", "H1ZcJ0rgz", "S1EEd8e-G", "r1LTMFT7z", "S13cNOaXz", "SyaZqVn7M", "HJjVr_hXz", "H1I9EpTAW" ]
[ "author", "public", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "public" ]
[ "Thanks for your interest in our work!\n\nWe could use the floating point value directly with the cross-entropy loss. Note that the cross-entropy loss function (https://en.wikipedia.org/wiki/Cross_entropy) can be written as:\nl(p, q) = -\\sum_i p_i * log(q_i) = -p^T log(q)           (1)\nwhere i is the category inde...
[ -1, -1, 6, 7, 6, -1, -1, -1, -1, -1 ]
[ -1, -1, 4, 4, 4, -1, -1, -1, -1, -1 ]
[ "BJrLsP6Ez", "S13cNOaXz", "iclr_2018_r1Ddp1-Rb", "iclr_2018_r1Ddp1-Rb", "iclr_2018_r1Ddp1-Rb", "iclr_2018_r1Ddp1-Rb", "H1ZcJ0rgz", "HJrOW5zeG", "S1EEd8e-G", "iclr_2018_r1Ddp1-Rb" ]
iclr_2018_HyiAuyb0b
TD or not TD: Analyzing the Role of Temporal Differencing in Deep Reinforcement Learning
Our understanding of reinforcement learning (RL) has been shaped by theoretical and empirical results that were obtained decades ago using tabular representations and linear function approximators. These results suggest that RL methods that use temporal differencing (TD) are superior to direct Monte Carlo estimation (M...
accepted-poster-papers
This is an interesting piece of work that provides solid evidence on the topic of bootstrapping in deep reinforcement learning.
val
[ "B1ibc3dlM", "HJwSvpOxf", "Sku18wYgz", "rkr8pJ_fG", "H1-LCkuff", "Bylqo1OGz" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author" ]
[ "This paper includes several controlled empirical studies comparing MC and TD methods in predicting the value function with complex DNN function approximators. Such comparison has been carried out both in theory and practice for simple low dimensional environments with linear (and RKHS) value function approximation ...
[ 7, 7, 7, -1, -1, -1 ]
[ 4, 4, 4, -1, -1, -1 ]
[ "iclr_2018_HyiAuyb0b", "iclr_2018_HyiAuyb0b", "iclr_2018_HyiAuyb0b", "HJwSvpOxf", "B1ibc3dlM", "Sku18wYgz" ]
iclr_2018_ry1arUgCW
DORA The Explorer: Directed Outreaching Reinforcement Action-Selection
Exploration is a fundamental aspect of Reinforcement Learning, typically implemented using stochastic action-selection. Exploration, however, can be more efficient if directed toward gaining new world knowledge. Visit-counters have been proven useful both in practice and in theory for directed exploration. However, a m...
accepted-poster-papers
This is a very interesting paper that also seems a little underdeveloped. As noted by the reviewers, it would have been nice to see the idea applied to domains requiring function approximation to confirm that it can scale -- the late addition of Freeway results is nice, but Freeway is also by far the simplest explorati...
train
[ "Hy0gemvxf", "BJeqeaOgM", "r1EghdKeM", "HkdCfLsXz", "Sy5-emIQf", "SJ9GoksGG", "SJ4ghLMzz", "r1rjjLMff" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "public", "author", "author" ]
[ "\n\nThe paper proposes a novel way of trading off exploration and exploitation in model-free reinforcement learning. The idea is to learn a second (kind of) Q-function, which could be called an E-function, that captures the value of exploration (E-value). In contrast to the Q-function of the problem at hand, the E-f...
[ 7, 6, 6, -1, -1, -1, -1, -1 ]
[ 3, 4, 4, -1, -1, -1, -1, -1 ]
[ "iclr_2018_ry1arUgCW", "iclr_2018_ry1arUgCW", "iclr_2018_ry1arUgCW", "iclr_2018_ry1arUgCW", "SJ9GoksGG", "iclr_2018_ry1arUgCW", "BJeqeaOgM", "r1EghdKeM" ]
iclr_2018_Skw0n-W0Z
Temporal Difference Models: Model-Free Deep RL for Model-Based Control
Model-free reinforcement learning (RL) has been proven to be a powerful, general tool for learning complex behaviors. However, its sample efficiency is often impractically large for solving challenging real-world problems, even for off-policy algorithms such as Q-learning. A limiting factor in classic model-free RL is ...
accepted-poster-papers
There is a concern from one of the reviewers that the paper needs deeper analysis. On the other hand, applying finite horizon techniques to deep RL is relatively unexplored, and the paper does provide some interesting results in that direction.
train
[ "BJ25DYuxG", "rJyvKScxz", "rkvII7pef", "rJ9zsPaXM", "BktRDPpXz", "HkCQvwaQG" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author" ]
[ "The paper uses universal value function type ideas to learn models of how long the current policy will take to reach various states (or state features), and then incorporates these into model-predictive control. This looks like a reasonable way to approach the problem of model-based RL in a way that avoids the covariat...
[ 7, 4, 7, -1, -1, -1 ]
[ 4, 4, 3, -1, -1, -1 ]
[ "iclr_2018_Skw0n-W0Z", "iclr_2018_Skw0n-W0Z", "iclr_2018_Skw0n-W0Z", "rJyvKScxz", "rkvII7pef", "BJ25DYuxG" ]
iclr_2018_H1dh6Ax0Z
TreeQN and ATreeC: Differentiable Tree-Structured Models for Deep Reinforcement Learning
Combining deep model-free reinforcement learning with on-line planning is a promising approach to building on the successes of deep RL. On-line planning with look-ahead trees has proven successful in environments where transition models are known a priori. However, in complex environments where transition models need t...
accepted-poster-papers
This is a nicely written paper proposing a reasonably interesting extension to existing work (e.g. VPN). While the Atari results are not particularly convincing, they do show promise. I encourage the authors to carefully take the reviewers' comments into consideration and incorporate them into the final version.
test
[ "ry1vffqeM", "r1cczyqef", "Hkg9o32gG", "B1t83k27M", "B1_hA0W7f", "rkKSHOTZG", "H1fRVuTWM", "Hya_Qdabf", "r1SimQEgf", "rJVUBbXxG", "HJAkFD-xM" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "public", "author", "author", "author", "public", "author", "public" ]
[ "# Update after the rebuttal\nThank you for the rebuttal.\nThe authors claim that the source of objective mismatch comes from n-step Q-learning, and their method is well-justified in 1-step Q-learning. However, there is still a mismatch even with 1-step Q-learning because the bootstrapped target is also computed fr...
[ 4, 8, 5, -1, -1, -1, -1, -1, -1, -1, -1 ]
[ 5, 5, 3, -1, -1, -1, -1, -1, -1, -1, -1 ]
[ "iclr_2018_H1dh6Ax0Z", "iclr_2018_H1dh6Ax0Z", "iclr_2018_H1dh6Ax0Z", "B1_hA0W7f", "iclr_2018_H1dh6Ax0Z", "r1cczyqef", "ry1vffqeM", "Hkg9o32gG", "rJVUBbXxG", "HJAkFD-xM", "iclr_2018_H1dh6Ax0Z" ]