| paper_id | paper_title | paper_abstract | paper_acceptance | meta_review | label | review_ids | review_writers | review_contents | review_ratings | review_confidences | review_reply_tos |
|---|---|---|---|---|---|---|---|---|---|---|---|
iclr_2018_rkr1UDeC- | Large scale distributed neural network training through online distillation | Techniques such as ensembling and distillation promise model quality improvements when paired with almost any base model. However, due to increased test-time cost (for ensembles) and increased complexity of the training pipeline (for distillation), these techniques are challenging to use in industrial settings. In this... | accepted-poster-papers | meta score: 7
The paper introduces an online distillation technique to parallelise large scale training. Although the basic idea is not novel, the presented experimentation indicates that the authors have made the technique work. Thus this paper should be of interest to practitioners.
Pros:
- clearly written, the... | train | [
"SJ7PzWDeM",
"SyOiDTtef",
"Bk09mAnlG",
"rkmWh7jMG",
"SkVb6QsMM",
"B1Hj5YOMG",
"HyFBYTVZf",
"HyCju6EZz",
"H1e5ETNWM"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author"
] | [
"This paper provides a very original & promising method to scale distributed training beyond the current limits of mini-batch stochastic gradient descent. As authors point out, scaling distributed stochastic gradient descent to more workers typically requires larger batch sizes in order to fully utilize computation... | [
8,
4,
6,
-1,
-1,
-1,
-1,
-1,
-1
] | [
4,
3,
3,
-1,
-1,
-1,
-1,
-1,
-1
] | [
"iclr_2018_rkr1UDeC-",
"iclr_2018_rkr1UDeC-",
"iclr_2018_rkr1UDeC-",
"HyCju6EZz",
"SJ7PzWDeM",
"iclr_2018_rkr1UDeC-",
"SyOiDTtef",
"Bk09mAnlG",
"SJ7PzWDeM"
] |
iclr_2018_BJ0hF1Z0b | Learning Differentially Private Recurrent Language Models | We demonstrate that it is possible to train large recurrent language models with user-level differential privacy guarantees with only a negligible cost in predictive accuracy. Our work builds on recent advances in the training of deep networks on user-partitioned data and privacy accounting for stochastic gradient des... | accepted-poster-papers | This paper uses known methods for learning differentially private models and applies them to the task of learning a language model, and finds they are able to maintain accuracy results on large datasets. Reviewers found the method convincing and original saying it was "interesting and very important to the machine learn... | train | [
"rJG5vkH4z",
"BJ1XIR_ef",
"Bkg5_kcxG",
"ryImKM5lG",
"HJRVC6-Gz",
"BkMETpWMz"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author"
] | [
"\n1. Noise\n\nThanks for the reference. It might indeed be an LSTM issue!\n\n2. Clipping\n\nOh right, I didn't thought about the bias introduces, that is a good point!\n\n3. Optimizers\n\"Certainly an interesting direction, but beyond the scope of the current work.\"\n\nIndeed!\n",
"Summary: The paper provides ... | [
-1,
7,
7,
8,
-1,
-1
] | [
-1,
4,
2,
4,
-1,
-1
] | [
"HJRVC6-Gz",
"iclr_2018_BJ0hF1Z0b",
"iclr_2018_BJ0hF1Z0b",
"iclr_2018_BJ0hF1Z0b",
"Bkg5_kcxG",
"ryImKM5lG"
] |
iclr_2018_SJ-C6JbRW | Mastering the Dungeon: Grounded Language Learning by Mechanical Turker Descent | Contrary to most natural language processing research, which makes use of static datasets, humans learn language interactively, grounded in an environment. In this work we propose an interactive learning procedure called Mechanical Turker Descent (MTD) that trains agents to execute natural language commands grounded i... | accepted-poster-papers | This paper provides a game-based interface to have Turkers compete to analyze data for a learning task over multiple rounds. Reviewers found the work interesting and clearly written, saying "the paper is easy to follow and the evaluation is meaningful." They also note that there is a clear empirical benefit: "the results se... | train | [
"r14hglcez",
"ByLXrM9eG",
"SyXWKhaxM",
"rJZaKFhXG",
"SkRXUrXMf",
"HkGz8Bmff",
"BJxl8BQfM"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author"
] | [
"The authors propose a framework for interactive language learning, called Mechanical Turker Descent (MTD). Over multiple iterations, Turkers provide training examples for a language grounding task, and they are incentivized to provide new training examples that quickly improve generalization. The framework is stra... | [
7,
7,
8,
-1,
-1,
-1,
-1
] | [
4,
4,
5,
-1,
-1,
-1,
-1
] | [
"iclr_2018_SJ-C6JbRW",
"iclr_2018_SJ-C6JbRW",
"iclr_2018_SJ-C6JbRW",
"iclr_2018_SJ-C6JbRW",
"r14hglcez",
"ByLXrM9eG",
"SyXWKhaxM"
] |
iclr_2018_Hyg0vbWC- | Generating Wikipedia by Summarizing Long Sequences | We show that generating English Wikipedia articles can be approached as a multi-document summarization of source documents. We use extractive summarization to coarsely identify salient information and a neural abstractive model to generate the article. For the abstractive model, we introduce a decode... | accepted-poster-papers | This paper presents a new multi-document summarization task of trying to write a wikipedia article based on its sources. Reviewers found the paper and the task clear to understand and well-explained. The modeling aspects are clear as well, although lacking justification. Reviewers are split on the originality of the ta... | train | [
"r129mGrxf",
"BJGaExqgz",
"H1VuTvqgG",
"SyEJe4v7M",
"SkUg9xLfz",
"Hy663x8ff",
"SJ3RoxLzz",
"S1npqlIMf"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author"
] | [
"This paper considers the task of generating Wikipedia articles as a combination of extractive and abstractive multi-document summarization task where input is the content of reference articles listed in a Wikipedia page along with the content collected from Web search and output is the generated content for a targ... | [
7,
8,
7,
-1,
-1,
-1,
-1,
-1
] | [
5,
3,
4,
-1,
-1,
-1,
-1,
-1
] | [
"iclr_2018_Hyg0vbWC-",
"iclr_2018_Hyg0vbWC-",
"iclr_2018_Hyg0vbWC-",
"SJ3RoxLzz",
"iclr_2018_Hyg0vbWC-",
"r129mGrxf",
"BJGaExqgz",
"H1VuTvqgG"
] |
iclr_2018_rkYTTf-AZ | Unsupervised Machine Translation Using Monolingual Corpora Only | Machine translation has recently achieved impressive performance thanks to recent advances in deep learning and the availability of large-scale parallel corpora. There have been numerous attempts to extend these successes to low-resource language pairs, yet requiring tens of thousands of parallel sentences. In this wor... | accepted-poster-papers | This work presents some of the first results on unsupervised neural machine translation. The group of reviewers is highly knowledgeable in machine translation, and they were generally very impressed by the results and think it warrants a whole new area of research noting "the fact that this is possible at all is re... | val | [
"B1POjpKef",
"HJlJ_aqgf",
"r1uaaZRxf",
"BkJl1m6mf",
"rkFvP_9mG",
"Sy8bGbVmf",
"BJd4kZNmz",
"r19B0xEmM",
"SJNmPo-Xf",
"Sk7cfh-zG",
"BJQe95agz",
"Hy59njgkG",
"rJPiWDkJf",
"SkrSo6EC-"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"public",
"author",
"author",
"public",
"public"
] | [
"This paper describes an approach to train a neural machine translation system without parallel data. Starting from a word-to-word translation lexicon, which was also learned with unsupervised methods, this approach combines a denoising auto-encoder objective with a back-translation objective, both in two translati... | [
8,
7,
7,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1
] | [
5,
5,
4,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1
] | [
"iclr_2018_rkYTTf-AZ",
"iclr_2018_rkYTTf-AZ",
"iclr_2018_rkYTTf-AZ",
"iclr_2018_rkYTTf-AZ",
"Sk7cfh-zG",
"HJlJ_aqgf",
"r1uaaZRxf",
"B1POjpKef",
"iclr_2018_rkYTTf-AZ",
"iclr_2018_rkYTTf-AZ",
"rJPiWDkJf",
"SkrSo6EC-",
"iclr_2018_rkYTTf-AZ",
"iclr_2018_rkYTTf-AZ"
] |
iclr_2018_HkAClQgA- | A Deep Reinforced Model for Abstractive Summarization | Attentional, RNN-based encoder-decoder models for abstractive summarization have achieved good performance on short input and output sequences. For longer documents and summaries however these models often include repetitive and incoherent phrases. We introduce a neural network model with a novel intra-attention that a... | accepted-poster-papers | This work extends upon recent ideas to build a complete summarization system using clever attention, copying, and RL training. Reviewers like the work but have some criticisms. Particularly in terms of its originality and potential significance noting "It is a good incremental research, but the downside of this paper ... | train | [
"ryxZURtlf",
"HyzQdZqez",
"BkQAkH5lM",
"S1nwDXnQM",
"B1c32sZmM",
"SyqgjnIWM"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"public"
] | [
"The paper proposes a model for abstractive document summarization using a self-critical policy gradient training algorithm, which is mixed with maximum likelihood objective. The Seq2seq architecture incorporates both intra-temporal and intra-decoder attention, and a pointer copying mechanism. A hard constraint is ... | [
8,
7,
6,
-1,
-1,
-1
] | [
3,
5,
4,
-1,
-1,
-1
] | [
"iclr_2018_HkAClQgA-",
"iclr_2018_HkAClQgA-",
"iclr_2018_HkAClQgA-",
"iclr_2018_HkAClQgA-",
"iclr_2018_HkAClQgA-",
"iclr_2018_HkAClQgA-"
] |
iclr_2018_BJRZzFlRb | Compressing Word Embeddings via Deep Compositional Code Learning | Natural language processing (NLP) models often require a massive number of parameters for word embeddings, resulting in a large storage or memory footprint. Deploying neural NLP models to mobile devices requires compressing the word embeddings without any significant sacrifices in performance. For this purpose, we prop... | accepted-poster-papers | This paper proposes an offline neural method using concrete/gumbel for learning a sparse codebook for use in NLP tasks such as sentiment analysis and MT. The method outperforms other methods using pruning and other sparse coding methods, and also produces somewhat interpretable codes. Reviewers found the paper to be si... | train | [
"rk0hvx5xf",
"SyrG5UJ-G",
"ryIqDgXbz",
"Sk5cTlq7f",
"rJL04SDXf",
"SyW6J-AWM",
"By5oXODZG",
"Bk0C4WEZz",
"ry-Ird7Zf",
"HybsIIQbz",
"ryJCJQXZf"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"public",
"public",
"public",
"official_reviewer",
"public",
"public",
"public",
"public"
] | [
"This paper proposed a new method to compress the space complexity of word embedding vectors by introducing summation composition over a limited number of basis vectors, and representing each embedding as a list of the basis indices. The proposed method can reduce more than 90% memory consumption while keeping orig... | [
8,
6,
7,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1
] | [
4,
4,
4,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1
] | [
"iclr_2018_BJRZzFlRb",
"iclr_2018_BJRZzFlRb",
"iclr_2018_BJRZzFlRb",
"rJL04SDXf",
"iclr_2018_BJRZzFlRb",
"By5oXODZG",
"ryJCJQXZf",
"ryIqDgXbz",
"ryJCJQXZf",
"rk0hvx5xf",
"iclr_2018_BJRZzFlRb"
] |
iclr_2018_SkhQHMW0W | Deep Gradient Compression: Reducing the Communication Bandwidth for Distributed Training | Large-scale distributed training requires significant communication bandwidth for gradient exchange that limits the scalability of multi-node training, and requires expensive high-bandwidth network infrastructure. The situation gets even worse with distributed training on mobile devices (federated learning), which suff... | accepted-poster-papers | This work proposes a hybrid system for large-scale distributed and federated training of commonly used deep networks. This problem is of broad interest and these methods have the potential to be significantly impactful, as is attested by the active and interesting discussion on this work. At first there were questions ... | train | [
"SkwY9v4VG",
"rJ9crpElM",
"rJmrmQ5lG",
"B1lk3Ojxf",
"B1HmMiTXz",
"ByemeWoMz",
"S1te8dFmM",
"H1eVtcdff",
"H1o6dcuff",
"HkRogNzGf",
"Byn_brkfG",
"S1mTqdWzM",
"r1U-Go0WG",
"r13BWoAZM",
"rknhZsRbf",
"HJbNlsRbM",
"S1SGgoA-f",
"HySlhQ1GM",
"ryuVTiCWG",
"BkCsfi0bG",
"H18nes0Wz",
"... | [
"public",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"public",
"official_reviewer",
"author",
"author",
"public",
"author",
"public",
"author",
"author",
"author",
"author",
"author",
"public",
"official_reviewer",
"author",
"author",
"author",... | [
"Do the four rows in Table 2 (#GPUs in total = 4, 8, 16, 32) correspond to 1, 2, 4 and 8 training nodes? Could you please also say what is the compression ratio for these four cases? Thank you.",
"I think this is a good work that I am sure will have some influence in the near future. I think it should be accepted... | [
-1,
7,
6,
7,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1
] | [
-1,
4,
5,
4,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1
] | [
"iclr_2018_SkhQHMW0W",
"iclr_2018_SkhQHMW0W",
"iclr_2018_SkhQHMW0W",
"iclr_2018_SkhQHMW0W",
"ryuVTiCWG",
"Byn_brkfG",
"r1U-Go0WG",
"HkRogNzGf",
"S1mTqdWzM",
"iclr_2018_SkhQHMW0W",
"HySlhQ1GM",
"BkCsfi0bG",
"rJ9crpElM",
"Bk5Gd4sxf",
"B1lk3Ojxf",
"S1SGgoA-f",
"SkhtLE9Zz",
"iclr_2018_... |
iclr_2018_B14TlG-RW | QANet: Combining Local Convolution with Global Self-Attention for Reading Comprehension | Current end-to-end machine reading and question answering (Q\&A) models are primarily based on recurrent neural networks (RNNs) with attention. Despite their success, these models are often slow for both training and inference due to the sequential nature of RNNs. We propose a new Q\&A architecture called QANet, which... | accepted-poster-papers | This work replaces the RNN layer with self-attention and convolution, achieving a big speed up and performance gains, particularly with data augmentation. The work is mostly clearly presented; one reviewer found it "well-written" although there was a complaint the work did not clearly separate out the novel asp... | train | [
"B1lKKF8Bz",
"H1hw3bgSz",
"S1yRD584M",
"Hkx2Bz9lM",
"rycJHDIgf",
"Hyqx3y5xz",
"r1xzqspQM",
"BJgkoz9Xz",
"ryYnnsdXG",
"ByOo9od7M",
"HkxgcsdQG",
"Hy4sFiuQM",
"H1d6uidmf",
"rkOlY3WXf",
"BkCXSXOyf",
"HJg2Fk_yf"
] | [
"public",
"public",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"author",
"public"
] | [
"Thank you for your paper! We really liked your approach to accelerate inference and training times in QA. \nI have one question regarding the comparison with BiDAF. On the article, you mention that you batched the training examples by paragraph length in your model, but it is not clear whether you did the same for... | [
-1,
-1,
-1,
8,
5,
6,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1
] | [
-1,
-1,
-1,
5,
4,
3,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1
] | [
"iclr_2018_B14TlG-RW",
"iclr_2018_B14TlG-RW",
"BJgkoz9Xz",
"iclr_2018_B14TlG-RW",
"iclr_2018_B14TlG-RW",
"iclr_2018_B14TlG-RW",
"iclr_2018_B14TlG-RW",
"Hy4sFiuQM",
"rkOlY3WXf",
"rycJHDIgf",
"Hyqx3y5xz",
"Hkx2Bz9lM",
"iclr_2018_B14TlG-RW",
"iclr_2018_B14TlG-RW",
"HJg2Fk_yf",
"iclr_2018_... |
iclr_2018_Sy2ogebAW | Unsupervised Neural Machine Translation | In spite of the recent success of neural machine translation (NMT) in standard benchmarks, the lack of large parallel corpora poses a major practical problem for many language pairs. There have been several proposals to alleviate this issue with, for instance, triangulation and semi-supervised learning techniques, but ... | accepted-poster-papers | This work presents new results on unsupervised machine translation using a clever combination of techniques. In terms of originality, the reviewers find that the paper over-claims, and promises a breakthrough, which they do not feel is justified.
However there is "more than enough new content" and "preliminary" results... | train | [
"BkW3sl8Nz",
"B1pZoxIEf",
"rJdm42ENG",
"SkBWbhN4M",
"S1jAR0Klf",
"SyniKeceM",
"S1BhMb5lG",
"B1sSDPTXf",
"SJHB8vaXM",
"Sy-0ez6MG",
"HkiSezafG",
"ByJTJzaMM",
"r1NHyGpGG",
"BkH86b6fM",
"B1SawIffz"
] | [
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"public"
] | [
"While it is true that we do not analyze any specific linguistic phenomenon in depth, note that our experiments already show that the system is not working like a \"word-for-word gloss\" as speculated in the comment: the baseline system is precisely a word-for-word gloss, and the proposed method beats it with a con... | [
-1,
-1,
-1,
-1,
6,
5,
7,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1
] | [
-1,
-1,
-1,
-1,
4,
4,
5,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1
] | [
"rJdm42ENG",
"SkBWbhN4M",
"r1NHyGpGG",
"r1NHyGpGG",
"iclr_2018_Sy2ogebAW",
"iclr_2018_Sy2ogebAW",
"iclr_2018_Sy2ogebAW",
"iclr_2018_Sy2ogebAW",
"B1SawIffz",
"iclr_2018_Sy2ogebAW",
"S1BhMb5lG",
"r1NHyGpGG",
"SyniKeceM",
"S1jAR0Klf",
"iclr_2018_Sy2ogebAW"
] |
iclr_2018_BkwHObbRZ | Learning One-hidden-layer Neural Networks with Landscape Design | We consider the problem of learning a one-hidden-layer neural network: we assume the input x is from Gaussian distribution and the label y=aσ(Bx)+ξ, where a is a nonnegative vector and B is a full-rank weight matrix, and ξ is a noise vector. We first give an analytic formula for the population risk of the standard squ... | accepted-poster-papers | I recommend acceptance based on the reviews. The paper makes novel contributions to learning one-hidden layer neural networks and designing new objective function with no bad local optima.
There is one point the paper is missing: it only mentions Janzamin et al. in passing. Janzamin et al. propose using score ...
"ry5GSRHNf",
"rk0Ek5vgM",
"SyfsN8tef",
"SJjI7pKlz",
"HkLFssYfM",
"H1jVcsFMf",
"BkFFKsKfz"
] | [
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author"
] | [
"Thanks for the review again!\n\nWe apologize that we didn't know that the paper was expected to be updated. We just added the results for sigmoid that answers the question \"does sigmoid suffer from the same problem?\" as we claimed in the response before. Please see page 9, figure 2 in the current version. \n\nWe... | [
-1,
6,
9,
7,
-1,
-1,
-1
] | [
-1,
3,
3,
3,
-1,
-1,
-1
] | [
"rk0Ek5vgM",
"iclr_2018_BkwHObbRZ",
"iclr_2018_BkwHObbRZ",
"iclr_2018_BkwHObbRZ",
"rk0Ek5vgM",
"SyfsN8tef",
"SJjI7pKlz"
] |
iclr_2018_SysEexbRb | Critical Points of Linear Neural Networks: Analytical Forms and Landscape Properties | Due to the success of deep learning to solving a variety of challenging machine learning tasks, there is a rising interest in understanding loss functions for training neural networks from a theoretical aspect. Particularly, the properties of critical points and the landscape around them are of importance to determine ... | accepted-poster-papers | I recommend acceptance based on the positive reviews. The paper analyzes critical points for linear neural networks and shallow ReLU networks. Getting characterization of critical points for shallow ReLU networks is a great first step. | train | [
"S1BRtK8EM",
"Hyyw_tH4z",
"S1aEzCJxG",
"ryOWEcdlM",
"SJ6btV9gz",
"BydBmeLQf",
"H1oDUDefG",
"rJye8vezz",
"SkBt4Dgff"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author"
] | [
"I am satisfied with the authors response and maintain my rating and acceptance recommendation.",
"Thanks for the clarification. Most of my concerns are addressed. An anonymous reviewer raised a concern about the overlap with existing work, Li et al. 2016b. The authors' comments about this related work sound ok t... | [
-1,
-1,
7,
7,
6,
-1,
-1,
-1,
-1
] | [
-1,
-1,
3,
5,
4,
-1,
-1,
-1,
-1
] | [
"rJye8vezz",
"SkBt4Dgff",
"iclr_2018_SysEexbRb",
"iclr_2018_SysEexbRb",
"iclr_2018_SysEexbRb",
"iclr_2018_SysEexbRb",
"S1aEzCJxG",
"ryOWEcdlM",
"SJ6btV9gz"
] |
iclr_2018_rJm7VfZA- | Learning Parametric Closed-Loop Policies for Markov Potential Games | Multiagent systems where the agents interact among themselves and with a stochastic environment can be formalized as stochastic games. We study a subclass of these games, named Markov potential games (MPGs), that appear often in economic and engineering applications when the agents share some common resource. We consi... | accepted-poster-papers | The paper considers Markov potential games (MPGs), where the agents share some common resource. They consider MPGs with continuous state-action variables, coupled constraints and nonconvex rewards, which is novel. The reviews are all positive and point out the novel contributions in the paper.
"BJLGKD8Mz",
"BkVvEP5gM",
"BJZ6A-clG",
"H1iE5-jQG",
"ry7CMFMmz",
"S1UU_tGQz",
"B1JnBtzXf"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author"
] | [
"While it is not very surprising that in a potential game it is easy to find Nash equilibria (compare to normal form static games, in which local maxima of the potential are pure Nash equilibria), the idea of approaching these stochastic games from this direction is novel and potentially (no pun intended) fruitful.... | [
7,
6,
6,
-1,
-1,
-1,
-1
] | [
2,
3,
1,
-1,
-1,
-1,
-1
] | [
"iclr_2018_rJm7VfZA-",
"iclr_2018_rJm7VfZA-",
"iclr_2018_rJm7VfZA-",
"iclr_2018_rJm7VfZA-",
"BkVvEP5gM",
"BJZ6A-clG",
"BJLGKD8Mz"
] |
iclr_2018_SyProzZAW | The power of deeper networks for expressing natural functions | It is well-known that neural networks are universal approximators, but that deeper networks tend in practice to be more powerful than shallower ones. We shed light on this by proving that the total number of neurons m required to approximate natural classes of multivariate polynomials of n variables grows only linearly... | accepted-poster-papers | All the reviewers agree on the significance of the topic of understanding expressivity of deep networks. This paper makes good progress in analyzing the ability of deep networks to fit multivariate polynomials. They show exponential depth advantage for general sparse polynomials.
I am very surprised that the pape... | train | [
"HJqsNbFez",
"S1z1Zf9xM",
"B1B65zqef",
"HkvXDF2XM",
"HyaCBK3XG",
"B1ebHYnXG"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author"
] | [
"Experimental results have shown that deep networks (many hidden layers) can approximate more complicated functions with less neurons compared to shallow (single hidden layer) networks. \nThis paper gives an explicit proof when the function in question is a sparse polynomial, ie: a polynomial in n variables, which ... | [
7,
6,
6,
-1,
-1,
-1
] | [
4,
4,
4,
-1,
-1,
-1
] | [
"iclr_2018_SyProzZAW",
"iclr_2018_SyProzZAW",
"iclr_2018_SyProzZAW",
"HJqsNbFez",
"S1z1Zf9xM",
"B1B65zqef"
] |
iclr_2018_B1QgVti6Z | Empirical Risk Landscape Analysis for Understanding Deep Neural Networks | This work aims to provide comprehensive landscape analysis of empirical risk in deep neural networks (DNNs), including the convergence behavior of its gradient, its stationary points and the empirical risk itself to their corresponding population counterparts, which reveals how various network parameters determine the... | accepted-poster-papers | Based on the positive reviews, I recommend acceptance. The paper analyzes when empirical risk is close to the population version, when empirical saddle points are close to the population version and empirical gradients are close to the population version. | train | [
"H1Wo7pKgM",
"BJGc-k9xG",
"r13F3TRbM",
"S1T8deYMz",
"S1mZuxYGf",
"S14tDgFGG"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"public",
"public",
"public"
] | [
"This paper studies empirical risk in deep neural networks. Results are provided in Section 4 for linear networks and in Section 5 for nonlinear networks.\nResults for deep linear neural networks are puzzling. Whatever the number of layers, a deep linear NN is simply a matrix multiplication and minimizing the MSE i... | [
3,
7,
7,
-1,
-1,
-1
] | [
3,
3,
3,
-1,
-1,
-1
] | [
"iclr_2018_B1QgVti6Z",
"iclr_2018_B1QgVti6Z",
"iclr_2018_B1QgVti6Z",
"r13F3TRbM",
"BJGc-k9xG",
"H1Wo7pKgM"
] |
iclr_2018_Hk9Xc_lR- | On the Discrimination-Generalization Tradeoff in GANs | Generative adversarial training can be generally understood as minimizing certain moment matching loss defined by a set of discriminator functions, typically neural networks. The discriminator set should be large enough to be able to uniquely identify the true distribution (discriminative), and also be small enough to... | accepted-poster-papers | I recommend acceptance. The two positive reviews point out the theoretical contributions. The authors have responded extensively to the negative review and I see no serious flaw as claimed by the negative review. | train | [
"HJjLXT4gM",
"ByyV3Atez",
"SyRq3ukMf",
"HySPX0V-f",
"BJJee_aXz",
"Skfk1PT7z",
"rkqxkP6XG",
"Hyz1hBp7M",
"rkwy_hVbG",
"S1FLYnVbf",
"S1eOo0sez",
"r16zP8jlG"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"public"
] | [
"== Paper Summary ==\nThe paper addresses the problem of balancing capacities of generator and discriminator classes in generative adversarial nets (GANs) from purely theoretical (function analytical and statistical learning) perspective. In my point of view, the main *novel* contributions are: \n(a) Conditions on ... | [
6,
3,
7,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1
] | [
4,
4,
4,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1
] | [
"iclr_2018_Hk9Xc_lR-",
"iclr_2018_Hk9Xc_lR-",
"iclr_2018_Hk9Xc_lR-",
"ByyV3Atez",
"SyRq3ukMf",
"HJjLXT4gM",
"HJjLXT4gM",
"iclr_2018_Hk9Xc_lR-",
"ByyV3Atez",
"ByyV3Atez",
"r16zP8jlG",
"iclr_2018_Hk9Xc_lR-"
] |
iclr_2018_SyZI0GWCZ | Decision-Based Adversarial Attacks: Reliable Attacks Against Black-Box Machine Learning Models | Many machine learning algorithms are vulnerable to almost imperceptible perturbations of their inputs. So far it was unclear how much risk adversarial perturbations carry for the safety of real-world machine learning applications because most methods used to generate such perturbations rely either on detailed model inf... | accepted-poster-papers | The reviewers all agree this is a well written and interesting paper describing a novel black box adversarial attack. There were missing relevant references in the original submission, but these have been added. I would suggest the authors follow the reviewer suggestions on claims of generality beyond CNN; although ... | test | [
"BkP-T1qgM",
"SyWXJWqgf",
"HJ3OcT3gG",
"S1lew6QNf",
"rJFt8lM4f",
"SkqcaKqfM",
"Bk06LctMz",
"SJcYIcFzf",
"ryCQLcYzG",
"Bkw25o8GM",
"r1uU2GiZM",
"HkPCcEuWM",
"SJvgtpHeM",
"SySvBkSJf"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"public",
"author",
"author",
"author",
"author",
"author",
"public",
"author",
"author",
"public"
] | [
"The authors identify a new security threat for deep learning: Decision-based adversarial attacks. This new class of attacks on deep learning systems requires from an attacker only the knowledge of class labels (previous attacks required more information, e.g., access to a gradient oracle). Unsurprisingly, since th... | [
7,
7,
8,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1
] | [
4,
4,
3,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1
] | [
"iclr_2018_SyZI0GWCZ",
"iclr_2018_SyZI0GWCZ",
"iclr_2018_SyZI0GWCZ",
"rJFt8lM4f",
"iclr_2018_SyZI0GWCZ",
"iclr_2018_SyZI0GWCZ",
"BkP-T1qgM",
"SyWXJWqgf",
"HJ3OcT3gG",
"r1uU2GiZM",
"iclr_2018_SyZI0GWCZ",
"SJvgtpHeM",
"SySvBkSJf",
"iclr_2018_SyZI0GWCZ"
] |
iclr_2018_rJQDjk-0b | Unbiased Online Recurrent Optimization | The novel \emph{Unbiased Online Recurrent Optimization} (UORO) algorithm allows for online learning of general recurrent computational graphs such as recurrent network models. It works in a streaming fashion and avoids backtracking through past activations and inputs. UORO is computationally as costly as \emph{Truncate... | accepted-poster-papers | The reviewers agree that the proposed method is theoretically interesting, but disagree on whether it has been properly experimentally validated. My view is that the theoretical contribution is interesting enough to warrant inclusion in the conference, and so I will err on the side of accepting. | val | [
"B1Ud2yqgz",
"B1QPFb5eG",
"S1zt98K4z",
"r11STCqxG",
"rk7gF0cXG",
"SyaSrR5Xf",
"SJHpNAqQG",
"H1thm0c7G"
] | [
"official_reviewer",
"official_reviewer",
"public",
"official_reviewer",
"author",
"author",
"author",
"author"
] | [
"The authors introduce a novel approach to online learning of the parameters of recurrent neural networks from long sequences that overcomes the limitation of truncated backpropagation through time (BPTT) of providing biased gradient estimates.\n\nThe idea is to use a forward computation of the gradient as in Willi... | [
6,
7,
-1,
8,
-1,
-1,
-1,
-1
] | [
4,
4,
-1,
5,
-1,
-1,
-1,
-1
] | [
"iclr_2018_rJQDjk-0b",
"iclr_2018_rJQDjk-0b",
"iclr_2018_rJQDjk-0b",
"iclr_2018_rJQDjk-0b",
"iclr_2018_rJQDjk-0b",
"B1Ud2yqgz",
"B1QPFb5eG",
"r11STCqxG"
] |
iclr_2018_ryup8-WCW | Measuring the Intrinsic Dimension of Objective Landscapes | Many recently trained neural networks employ large numbers of parameters to achieve good performance. One may intuitively use the number of parameters required as a rough gauge of the difficulty of a problem. But how accurate are such notions? How many parameters are really needed? In this paper we attempt to answer th... | accepted-poster-papers | The authors make an empirical study of the "dimension" of a neural net optimization problem, where the "dimension" is defined by the minimal random linear parameter subspace dimension where a (near) solution to the problem is likely to be found. I agree with reviewers that in light of the authors' revisions, the resu... | train | [
"B1IwI-2xz",
"BkJsM2vgf",
"BJva6gOgM",
"SJohldaXz",
"HkDPl_aXG",
"S1e7luTQM",
"SJ1yeuTmM"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author"
] | [
"This paper proposes an empirical measure of the intrinsic dimensionality of a neural network problem. Taking the full dimensionality to be the total number of parameters of the network model, the authors assess intrinsic dimensionality by randomly projecting the network to a domain with fewer parameters (correspon... | [
7,
7,
6,
-1,
-1,
-1,
-1
] | [
3,
4,
2,
-1,
-1,
-1,
-1
] | [
"iclr_2018_ryup8-WCW",
"iclr_2018_ryup8-WCW",
"iclr_2018_ryup8-WCW",
"BkJsM2vgf",
"BJva6gOgM",
"SJ1yeuTmM",
"B1IwI-2xz"
] |
iclr_2018_rkO3uTkAZ | Memorization Precedes Generation: Learning Unsupervised GANs with Memory Networks | We propose an approach to address two issues that commonly occur during training of unsupervised GANs. First, since GANs use only a continuous latent distribution to embed multiple classes or clusters of data, they often do not correctly handle the structural discontinuity between disparate classes in a latent space. S... | accepted-poster-papers |
I am going to recommend acceptance of this paper despite being worried about the issues raised by reviewer 1. In particular,
1: the best possible inception score would be obtained by copying the training dataset
2: the highest visual quality samples would be obtained by copying the training dataset
3: perturbatio... | train | [
"Bko3dzDlG",
"SyzkuzYxG",
"S1ck4rYxM",
"rkQUyPd7z",
"ryl76gumM",
"BJMceY9Mf",
"ryYDlFcfG",
"S16Vet5Mz"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author"
] | [
"In summary, the paper introduces a memory module to the GANs to address two existing problems: (1) no discrete latent structures and (2) the forgetting problem. The memory provides extra information for both the generation and the discrimination, compared with vanilla GANs. Based on my knowledge, the idea is novel... | [
6,
6,
7,
-1,
-1,
-1,
-1,
-1
] | [
4,
4,
4,
-1,
-1,
-1,
-1,
-1
] | [
"iclr_2018_rkO3uTkAZ",
"iclr_2018_rkO3uTkAZ",
"iclr_2018_rkO3uTkAZ",
"ryl76gumM",
"BJMceY9Mf",
"Bko3dzDlG",
"SyzkuzYxG",
"S1ck4rYxM"
] |
iclr_2018_H1uR4GZRZ | Stochastic Activation Pruning for Robust Adversarial Defense | Neural networks are known to be vulnerable to adversarial examples. Carefully chosen perturbations to real images, while imperceptible to humans, induce misclassification and threaten the reliability of deep learning systems in the wild. To guard against adversarial examples, we take inspiration from game theory and ca... | accepted-poster-papers | This is a borderline paper. The reviewers are happy with the simplicity of the proposed method and the fact that it can be applied after training; but are concerned by the lack of theory explaining the results. I will recommend accepting, but I would ask the authors add the additional experiments they have promised, ... | train | [
"ryrXQ4wyz",
"SJFnpOYxM",
"ry5D1Z5xf",
"HJvA3yQQG",
"rkk5517Qf",
"B1DQ5J77z",
"HJRkw1X7f"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author"
] | [
"This paper investigates a new approach to prevent a given classifier from adversarial examples. The most important contribution is that the proposed algorithm can be applied post-hoc to already trained networks. Hence, the proposed algorithm (Stochastic Activation Pruning) can be combined with algorithms which pre... | [
6,
7,
6,
-1,
-1,
-1,
-1
] | [
3,
4,
4,
-1,
-1,
-1,
-1
] | [
"iclr_2018_H1uR4GZRZ",
"iclr_2018_H1uR4GZRZ",
"iclr_2018_H1uR4GZRZ",
"ryrXQ4wyz",
"SJFnpOYxM",
"ry5D1Z5xf",
"iclr_2018_H1uR4GZRZ"
] |
iclr_2018_HkxF5RgC- | Sparse Persistent RNNs: Squeezing Large Recurrent Networks On-Chip | Recurrent Neural Networks (RNNs) are powerful tools for solving sequence-based problems, but their efficacy and execution time are dependent on the size of the network. Following recent work in simplifying these networks with model pruning and a novel mapping of work onto GPUs, we design an efficient implementation fo... | accepted-poster-papers | The reviewers find the work interesting and well made, but are concerned that ICLR is not the right venue for the work. I will recommend that the paper be accepted, but ask the authors to add the NMT results to the main paper (any other non-synthetic applications they could add would be helpful). | train | [
"rkoKvifef",
"BJ6cxWFlM",
"H1PcMAKeG",
"SkzfXdpmf",
"SJ5iMUt7f",
"BkwIocfQz",
"S1gFF5fmM",
"HJ4pB5M7f"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author"
] | [
"The paper devises a sparse kernel for RNNs which is urgently needed because current GPU deep learning libraries (e.g., CuDNN) cannot exploit sparsity when it is presented and because a number of works have proposed to sparsify/prune RNNs so as to be able to run on devices with limited compute power (e.g., smartpho... | [
6,
6,
6,
-1,
-1,
-1,
-1,
-1
] | [
2,
4,
2,
-1,
-1,
-1,
-1,
-1
] | [
"iclr_2018_HkxF5RgC-",
"iclr_2018_HkxF5RgC-",
"iclr_2018_HkxF5RgC-",
"iclr_2018_HkxF5RgC-",
"S1gFF5fmM",
"rkoKvifef",
"BJ6cxWFlM",
"H1PcMAKeG"
] |
iclr_2018_ByKWUeWA- | GANITE: Estimation of Individualized Treatment Effects using Generative Adversarial Nets | Estimating individualized treatment effects (ITE) is a challenging task due to the need for an individual's potential outcomes to be learned from biased data and without having access to the counterfactuals. We propose a novel method for inferring ITE based on the Generative Adversarial Nets (GANs) framework. Our metho... | accepted-poster-papers | The reviewers agree that the method is original and mostly well communicated, but have some doubts about the significance of the work. | val | [
"rk3S-gKez",
"ryaoluFgG",
"SyIFK-9lG",
"HJAYMyyfz",
"rkl8zJyfM",
"S1VTby1fG"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author"
] | [
"Summary:\nThis paper proposes to estimate the individual treatment effects (ITE) through\ntraining two separate conditional generative adversarial networks (GANs). \n\nFirst, a counterfactual GAN is trained to estimate the conditional distribution \nof the potential outcome vector, which consists of factual outcom... | [
6,
6,
6,
-1,
-1,
-1
] | [
4,
3,
3,
-1,
-1,
-1
] | [
"iclr_2018_ByKWUeWA-",
"iclr_2018_ByKWUeWA-",
"iclr_2018_ByKWUeWA-",
"rk3S-gKez",
"ryaoluFgG",
"SyIFK-9lG"
] |
iclr_2018_S18Su--CW | Thermometer Encoding: One Hot Way To Resist Adversarial Examples | It is well known that it is possible to construct "adversarial examples"
for neural networks: inputs which are misclassified by the network
yet indistinguishable from true data. We propose a simple
modification to standard neural network architectures, thermometer
encoding, which significantly i... | accepted-poster-papers | This paper is borderline. The reviewers agree that the method is novel and interesting, but have concerns about scalability and weakness to attacks with larger epsilon. I will recommend accepting; but I think the paper would be well served by imagenet experiments, and hope the authors are able to include these for th... | val | [
"ByzXBMDxf",
"HJDuim3lM",
"Bk9IXvzWf",
"H16--t2Xz",
"HkYs0dnXz",
"r1fzgYnQM",
"ryPcrG9xG",
"B19zAPflM",
"HkALrvckf",
"SJJbNyK0Z"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"public",
"public",
"author",
"public"
] | [
"This paper studies input discretization and white-box attacks on it to make deep networks robust to adversarial examples. They propose one-hot and thermometer encodings as input discretization and \nalso propose DGA and LS-PGA as white-box attacks on it.\nRobustness to adversarial examples for thermometer encodin... | [
6,
6,
6,
-1,
-1,
-1,
-1,
-1,
-1,
-1
] | [
2,
4,
4,
-1,
-1,
-1,
-1,
-1,
-1,
-1
] | [
"iclr_2018_S18Su--CW",
"iclr_2018_S18Su--CW",
"iclr_2018_S18Su--CW",
"ByzXBMDxf",
"Bk9IXvzWf",
"HJDuim3lM",
"HkALrvckf",
"iclr_2018_S18Su--CW",
"SJJbNyK0Z",
"iclr_2018_S18Su--CW"
] |
iclr_2018_HyrCWeWCb | Trust-PCL: An Off-Policy Trust Region Method for Continuous Control | Trust region methods, such as TRPO, are often used to stabilize policy optimization algorithms in reinforcement learning (RL). While current trust region strategies are effective for continuous control, they typically require a large amount of on-policy interaction with the environment. To address this problem, we prop... | accepted-poster-papers | This paper adapts (Nachum et al 2017) to continuous control via TRPO. The work is incremental (not in the dirty sense of the word popular amongst researchers, but rather in the sense of "building atop a closely related work"), nontrivial, and shows empirical promise. The reviewers would like more exploration of t... | train | [
"rJ-4JL_Vf",
"ByDPYkUxG",
"H11zfWQZf",
"B1tQ10rVG",
"H1ccXfmeG",
"HkF_6L6Qz",
"BJ--aUT7M",
"Hk772U6XM"
] | [
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author"
] | [
"These comments continue to reveal some fundamental misunderstandings we should clarify.\n\nR2: \"Our paper does not present a policy gradient method\" <- This is obviously untrue.\n\n- To correct such a misunderstanding, one first needs to realize that policy gradient algorithms update model parameters along the g... | [
-1,
6,
5,
-1,
5,
-1,
-1,
-1
] | [
-1,
4,
4,
-1,
1,
-1,
-1,
-1
] | [
"B1tQ10rVG",
"iclr_2018_HyrCWeWCb",
"iclr_2018_HyrCWeWCb",
"BJ--aUT7M",
"iclr_2018_HyrCWeWCb",
"H1ccXfmeG",
"H11zfWQZf",
"ByDPYkUxG"
] |
iclr_2018_rk49Mg-CW | Stochastic Variational Video Prediction | Predicting the future in real-world settings, particularly from raw sensory observations such as images, is exceptionally challenging. Real-world events can be stochastic and unpredictable, and the high dimensionality and complexity of natural images requires the predictive model to build an intricate understanding of ... | accepted-poster-papers | Not quite enough for an oral but a very solid poster. | train | [
"SJHVI1WSG",
"S1gH28vgM",
"r17bOI8yG",
"S1riI7OxM",
"H1ajkrZEf",
"S1W4e5U7f",
"HJtZzEW7z",
"Hk7A25bmM",
"r1ULac-Qf",
"rkLWT5Z7f",
"HJ1NygMgM",
"H1i2t1egM"
] | [
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"public",
"author",
"public",
"author",
"author",
"author",
"author",
"public"
] | [
"\nFor comparison with VPN, we did NOT train any model. Instead, the authors of Reed et al. 2017 provided their trained model which we used for evaluation.\n\nIn terms of numbers, the model from Reed et al. 2017 has 119,538,432 while our model has 8,378,497. Hopefully this helps to get a better understanding of the... | [
-1,
7,
7,
7,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1
] | [
-1,
5,
4,
4,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1
] | [
"H1ajkrZEf",
"iclr_2018_rk49Mg-CW",
"iclr_2018_rk49Mg-CW",
"iclr_2018_rk49Mg-CW",
"iclr_2018_rk49Mg-CW",
"HJtZzEW7z",
"iclr_2018_rk49Mg-CW",
"S1riI7OxM",
"r17bOI8yG",
"S1gH28vgM",
"H1i2t1egM",
"iclr_2018_rk49Mg-CW"
] |
iclr_2018_HkXWCMbRW | Towards Image Understanding from Deep Compression Without Decoding | Motivated by recent work on deep neural network (DNN)-based image compression methods showing potential improvements in image quality, savings in storage, and bandwidth reduction, we propose to perform image understanding tasks such as classification and segmentation directly on the compressed representations produced ... | accepted-poster-papers | Some reviewers seem to assign novelty to the compression and classification formulation; however, semi-supervised autoencoders have been used for a long time. Taking the compression task more seriously as is done in this paper is less explored.
The paper provides some extensive experimental evaluation and was edited t... | train | [
"SkE6QMtlG",
"r1A9XDwgG",
"rJx_tnFeM",
"BkCWUB2zM",
"HJB3rBhzG",
"HkvLrShzM"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author"
] | [
"Thanks for addressing most of the issues. I changed my given score from 3 to 6.\n\nSummary:\nThis work explores the use of learned compressed image representation for solving 2 computer vision tasks without employing a decoding step. \n\nThe paper claims to be more computationally and memory efficient compared to ... | [
6,
9,
6,
-1,
-1,
-1
] | [
4,
5,
3,
-1,
-1,
-1
] | [
"iclr_2018_HkXWCMbRW",
"iclr_2018_HkXWCMbRW",
"iclr_2018_HkXWCMbRW",
"r1A9XDwgG",
"SkE6QMtlG",
"rJx_tnFeM"
] |
iclr_2018_ByJIWUnpW | Automatically Inferring Data Quality for Spatiotemporal Forecasting | Spatiotemporal forecasting has become an increasingly important prediction task in machine learning and statistics due to its vast applications, such as climate modeling, traffic prediction, video caching predictions, and so on. While numerous studies have been conducted, most existing works assume that the data from d... | accepted-poster-papers | With an 8-6-6 rating all reviewers agreed that this paper is past the threshold for acceptance.
The quality of the paper appears to have increased during the review cycle due to interactions with the reviewers. The paper addresses issues related to the quality of heterogeneous data sources. The paper does this throug... | train | [
"Hk7kJzcxM",
"B1GH1Kd4f",
"r16AndOEf",
"S1GlLvu4G",
"rJDUzhtxf",
"ry07x_9xG",
"rJCAdVTQM",
"ByIwKojXM",
"rktRm4eQz",
"HyChXVl7G",
"S1eWfNlmz",
"ry6rgEgXG"
] | [
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"author"
] | [
"Update:\n\nI have read the rebuttal and the revised manuscript. Paper reads better and comparison to Auto-regression was added. This work presents a novel way of utilizing GCN and I believe it would be interesting to the community. In this regard, I have updated my rating.\n\nOn the downside, I still remain uncert... | [
6,
-1,
-1,
-1,
6,
8,
-1,
-1,
-1,
-1,
-1,
-1
] | [
3,
-1,
-1,
-1,
4,
4,
-1,
-1,
-1,
-1,
-1,
-1
] | [
"iclr_2018_ByJIWUnpW",
"r16AndOEf",
"S1GlLvu4G",
"S1eWfNlmz",
"iclr_2018_ByJIWUnpW",
"iclr_2018_ByJIWUnpW",
"ByIwKojXM",
"HyChXVl7G",
"rJDUzhtxf",
"rJDUzhtxf",
"Hk7kJzcxM",
"ry07x_9xG"
] |
iclr_2018_Sy21R9JAW | Towards better understanding of gradient-based attribution methods for Deep Neural Networks | Understanding the flow of information in Deep Neural Networks (DNNs) is a challenging problem that has gain increasing attention over the last few years. While several methods have been proposed to explain network predictions, there have been only a few attempts to compare them from a theoretical perspective. What is m... | accepted-poster-papers | With scores of 7-7-6 and the justification below the AC recommends acceptance.
One of the reviewers summarizes why this is a good paper as follows:
"This paper discusses several gradient based attribution methods, which have been popular for the fast computation of saliency maps for interpreting deep neural networks... | train | [
"rJUrhpYxf",
"Byt56W9lM",
"SymYit2xf",
"H1QgktIGG",
"rJoXE_UMG",
"B1NvQ_Izz"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author"
] | [
"This paper discusses several gradient based attribution methods, which have been popular for the fast computation of saliency maps for interpreting deep neural networks. The paper provides several advances:\n- \\epsilon-LRP and DeepLIFT are formulated in a way that can be calculated using the same back-propagation... | [
7,
6,
7,
-1,
-1,
-1
] | [
3,
5,
4,
-1,
-1,
-1
] | [
"iclr_2018_Sy21R9JAW",
"iclr_2018_Sy21R9JAW",
"iclr_2018_Sy21R9JAW",
"Byt56W9lM",
"rJUrhpYxf",
"SymYit2xf"
] |
iclr_2018_SyJ7ClWCb | Countering Adversarial Images using Input Transformations | This paper investigates strategies that defend against adversarial-example attacks on image-classification systems by transforming the inputs before feeding them to the system. Specifically, we study applying image transformations such as bit-depth reduction, JPEG compression, total variance minimization, and image qui... | accepted-poster-papers | A well written paper proposing some reasonable approaches to counter adversarial images. Proposed approaches include non-differentiable and randomized methods. Anonymous commentators pushed upon and cleared up some important issues regarding white, black and gray "box" settings. The approach appears to be a plausible d... | train | [
"HJLVhosrz",
"SJR7osjSG",
"SyYA9jsBz",
"rkx25isrM",
"ryi9KVKBG",
"Bk1rU7YSz",
"HksQPiUVM",
"B1gREqS4f",
"HyulgtSVz",
"S1wXIrVgM",
"Sk47YIYlM",
"SJzYnEqef",
"rk5sqWNNG",
"HkKhk67Nf",
"rkqFThmEM",
"rJX6P09fz",
"r1XVPRqzM",
"SyNlDC9GM",
"r1-ST6olG",
"SJp9ze61G",
"H1mJgiqJG",
"... | [
"public",
"public",
"public",
"public",
"author",
"public",
"public",
"author",
"public",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"public",
"author",
"public",
"author",
"author",
"author",
"public",
"public",
"public",
"author",
"public",
"author... | [
"There is a paper “Adversarial Examples Are Not Easily Detected: Bypassing Ten Detection Methods” (https://nicholas.carlini.com/papers/2017_aisec_breakingdetection.pdf) which explores how stochastic model could be de-randomized and successfully attacked.\n\nThus while randomness makes an attack harder, it does not ... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
8,
7,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
4,
3,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1
] | [
"HksQPiUVM",
"SyYA9jsBz",
"iclr_2018_SyJ7ClWCb",
"iclr_2018_SyJ7ClWCb",
"Bk1rU7YSz",
"iclr_2018_SyJ7ClWCb",
"B1gREqS4f",
"HyulgtSVz",
"iclr_2018_SyJ7ClWCb",
"iclr_2018_SyJ7ClWCb",
"iclr_2018_SyJ7ClWCb",
"iclr_2018_SyJ7ClWCb",
"HkKhk67Nf",
"rkqFThmEM",
"iclr_2018_SyJ7ClWCb",
"S1wXIrVgM"... |
iclr_2018_HkwVAXyCW | Skip RNN: Learning to Skip State Updates in Recurrent Neural Networks | Recurrent Neural Networks (RNNs) continue to show outstanding performance in sequence modeling tasks. However, training RNNs on long sequences often face challenges like slow inference, vanishing gradients and difficulty in capturing long term dependencies. In backpropagation through time settings, these issues are ti... | accepted-poster-papers | This paper explores what might be characterized as an adaptive form of ZoneOut.
With the improvements and clarifications added to the paper during the rebuttal the paper could be accepted.
| train | [
"HkmeI2vxM",
"rkKX3j7SM",
"SyUH1Qjez",
"HJ6Ve2MHz",
"BkfgO-FgG",
"BywMmCPzf",
"SyThG0DMz",
"HJwcGAPMG",
"HJoVzADMM"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author"
] | [
"UPDATE: Following the author's response I've increased my score from 5 to 6. The revised paper includes many of the additional references that I suggested, and the author response clarified my confusion over the Charades experiments; their results are indeed close to state-of-the-art on Charades activity localizat... | [
6,
-1,
6,
-1,
6,
-1,
-1,
-1,
-1
] | [
4,
-1,
4,
-1,
4,
-1,
-1,
-1,
-1
] | [
"iclr_2018_HkwVAXyCW",
"BywMmCPzf",
"iclr_2018_HkwVAXyCW",
"HJwcGAPMG",
"iclr_2018_HkwVAXyCW",
"HkmeI2vxM",
"BkfgO-FgG",
"SyUH1Qjez",
"iclr_2018_HkwVAXyCW"
] |
iclr_2018_rkPLzgZAZ | Modular Continual Learning in a Unified Visual Environment | A core aspect of human intelligence is the ability to learn new tasks quickly and switch between them flexibly. Here, we describe a modular continual reinforcement learning paradigm inspired by these abilities. We first introduce a visual interaction environment that allows many types of tasks to be unified in a singl... | accepted-poster-papers | Important problem (modular continual RL) and novel contributions. The initial submission was judged to be a little dense and hard to read, but the authors have been responsive in responding and updating the paper. I support accepting this paper. | test | [
"rJHuDgAlz",
"HkxSRZcxM",
"BJQgR1aef",
"BJPN4R3WM",
"H1qarChZG",
"ByXcBCnbf",
"BJVOBA3Wz",
"Bkht4Anbf"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author"
] | [
"The authors propose a kind of framework for learning to solve elemental tasks and then learning task switching in a multitask scenario. The individual tasks are inspired by a number of psychological tasks. Specifically, the authors use a pretrained convnet as raw statespace encoding together with previous actions ... | [
6,
8,
8,
-1,
-1,
-1,
-1,
-1
] | [
2,
3,
2,
-1,
-1,
-1,
-1,
-1
] | [
"iclr_2018_rkPLzgZAZ",
"iclr_2018_rkPLzgZAZ",
"iclr_2018_rkPLzgZAZ",
"iclr_2018_rkPLzgZAZ",
"HkxSRZcxM",
"BJQgR1aef",
"BJQgR1aef",
"rJHuDgAlz"
] |
iclr_2018_BydLzGb0Z | Twin Networks: Matching the Future for Sequence Generation | We propose a simple technique for encouraging generative RNNs to plan ahead. We train a ``backward'' recurrent network to generate a given sequence in reverse order, and we encourage states of the forward model to predict cotemporal states of the backward model. The backward network is used only during training, and pl... | accepted-poster-papers | Simple idea (which is a positive) to regularize RNNs, broad applicability, well-written paper. Initially, there were concerns about comparisons, but he authors have provided additional experiments that have made the paper stronger. | train | [
"HyciX9dxM",
"B1Fe0Zqxz",
"Hy2zdEuNz",
"HJzYCPDlf",
"B1VAfclVG",
"HyCbe867z",
"r1SMZBT7z",
"BJ2TeHpXz",
"rJob2Ep7z",
"B1_9I7DzG",
"BJMXIjbGM",
"rk6_yF2-f",
"r11N5P2Zf"
] | [
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"public",
"author",
"author",
"author",
"author",
"author",
"public",
"author",
"author"
] | [
"\n1) Summary\nThis paper proposes a recurrent neural network (RNN) training formulation for encouraging RNN the hidden representations to contain information useful for predicting future timesteps reliably. The authors propose to train a forward and backward RNN in parallel. The forward RNN predicts forward in tim... | [
6,
7,
-1,
8,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1
] | [
4,
4,
-1,
4,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1
] | [
"iclr_2018_BydLzGb0Z",
"iclr_2018_BydLzGb0Z",
"HyCbe867z",
"iclr_2018_BydLzGb0Z",
"B1_9I7DzG",
"HyciX9dxM",
"HJzYCPDlf",
"B1Fe0Zqxz",
"iclr_2018_BydLzGb0Z",
"BJMXIjbGM",
"iclr_2018_BydLzGb0Z",
"B1Fe0Zqxz",
"iclr_2018_BydLzGb0Z"
] |
iclr_2018_S1J2ZyZ0Z | Interpretable Counting for Visual Question Answering | Questions that require counting a variety of objects in images remain a major challenge in visual question answering (VQA). The most common approaches to VQA involve either classifying answers based on fixed length representations of both the image and question or summing fractional counts estimated from each section o... | accepted-poster-papers | Important problem and all reviewers recommend acceptance. I agree. | train | [
"rkvdigi4f",
"Byv9HGFEG",
"SJdWxzoxz",
"ryq-8Y_lG",
"rJ1U7MKef",
"B1bTymoGf",
"S1iUyQjzM",
"H1bYCGofM"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author"
] | [
"I'm satisfied with the authors' responses to the concerns raised by me and my fellow reviewers, I would recommend acceptance of the paper.",
"After reading the authors' responses to the concerns raised by me and my fellow reviewers, I would recommend acceptance of the paper because it presents a novel, interesti... | [
-1,
-1,
6,
7,
7,
-1,
-1,
-1
] | [
-1,
-1,
3,
4,
4,
-1,
-1,
-1
] | [
"S1iUyQjzM",
"H1bYCGofM",
"iclr_2018_S1J2ZyZ0Z",
"iclr_2018_S1J2ZyZ0Z",
"iclr_2018_S1J2ZyZ0Z",
"ryq-8Y_lG",
"rJ1U7MKef",
"SJdWxzoxz"
] |
iclr_2018_H1UOm4gA- | Interactive Grounded Language Acquisition and Generalization in a 2D World | We build a virtual agent for learning language in a 2D maze-like world. The agent sees images of the surrounding environment, listens to a virtual teacher, and takes actions to receive rewards. It interactively learns the teacher’s language from scratch based on two language use cases: sentence-directed navigation and ... | accepted-poster-papers | This manuscript was reviewed by 3 expert reviewers and their evaluation is generally positive. The authors have responded to the questions asked and the reviewers are satisfied with the responses. Although the 2D environments are underwhelming (compared to 3D environments such as SUNCG, Doom, Thor, etc), one thing that... | train | [
"B1JBH18gf",
"B1KF2Z5xf",
"HJLqaM7bz",
"r1-gIAMNM",
"SJGGiTcQM",
"HJi7u9TWf",
"Skq3gyaWf",
"SJnXZJpWz",
"HkQl1ypZf",
"ryVnJ16Wz",
"rkCvlKvkf",
"r1a6A4PJz"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"public"
] | [
"This paper introduces a new task that combines elements of instruction following\nand visual question answering: agents must accomplish particular tasks in an\ninteractive environment while providing one-word answers to questions about\nfeatures of the environment. To solve this task, the paper also presents a new... | [
7,
6,
6,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1
] | [
4,
4,
4,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1
] | [
"iclr_2018_H1UOm4gA-",
"iclr_2018_H1UOm4gA-",
"iclr_2018_H1UOm4gA-",
"HkQl1ypZf",
"iclr_2018_H1UOm4gA-",
"Skq3gyaWf",
"B1JBH18gf",
"r1a6A4PJz",
"HJLqaM7bz",
"B1KF2Z5xf",
"r1a6A4PJz",
"iclr_2018_H1UOm4gA-"
] |
iclr_2018_Sy0GnUxCb | Emergent Complexity via Multi-Agent Competition | Reinforcement learning algorithms can train agents that solve problems in complex, interesting environments. Normally, the complexity of the trained agent is closely related to the complexity of the environment. This suggests that a highly capable agent requires a complex environment for training. In this paper, we p... | accepted-poster-papers | This paper received divergent reviews (7, 3, 9). The main contributions of the paper -- that multi-agent competition serves as a natural curriculum, opponent sampling strategies, and the characterization of emergent complex strategies -- are certainly of broad interest (although the first is essentially the same observ... | train | [
"BJjEeunNf",
"SkFemC-lz",
"By9EwRPxG",
"SyCKd4clM",
"rJI9veRQz",
"B1XRb_6mG",
"r1gtW_p7G",
"S1VyeOTQG"
] | [
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author"
] | [
"We respond to the three points about the paper raised by the reviewer:\n\n1) There are two questions here, one about games being zero sum and another about the plots in Figure 3 being symmetric about 50%. For the first question the answer is games are not zero sum and a draw results in a “negative” reward for both... | [
-1,
3,
9,
7,
-1,
-1,
-1,
-1
] | [
-1,
3,
5,
4,
-1,
-1,
-1,
-1
] | [
"rJI9veRQz",
"iclr_2018_Sy0GnUxCb",
"iclr_2018_Sy0GnUxCb",
"iclr_2018_Sy0GnUxCb",
"iclr_2018_Sy0GnUxCb",
"SyCKd4clM",
"By9EwRPxG",
"SkFemC-lz"
] |
iclr_2018_B1mvVm-C- | Universal Agent for Disentangling Environments and Tasks | Recent state-of-the-art reinforcement learning algorithms are trained under the goal of excelling in one specific task. Hence, both environment and task specific knowledge are entangled into one framework. However, there are often scenarios where the environment (e.g. the physical world) is fixed while only the target ... | accepted-poster-papers | All reviewers recommend accepting this paper, and this AC agrees. | train | [
"rkHr_WFlz",
"rkx8qW9ez",
"H1wc2j2lM"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"The authors propose to decompose reinforcement learning into a PATH function that can learn how to solve reusable sub-goals an agent might have in a specific environment and a GOAL function that chooses subgoals in order to solve a specific task in the environment using path segments. So I guess it can be thought ... | [
6,
7,
6
] | [
3,
4,
3
] | [
"iclr_2018_B1mvVm-C-",
"iclr_2018_B1mvVm-C-",
"iclr_2018_B1mvVm-C-"
] |
iclr_2018_SJa9iHgAZ | Residual Connections Encourage Iterative Inference | Residual networks (Resnets) have become a prominent architecture in deep learning. However, a comprehensive understanding of Resnets is still a topic of ongoing research. A recent view argues that Resnets perform iterative refinement of features. We attempt to further expose properties of this aspect. To this end, we s... | accepted-poster-papers | The paper presents an interesting view of ResNets and the findings should be of broad interest. R1 did not update their score/review, but I am satisfied with the author response, and recommend this paper for acceptance. | train | [
"H1EPgaweG",
"HkeOU0qgf",
"HJyUi3sez",
"SkXbC02XM",
"BJyaPFbff",
"SyGjDt-fM",
"HkFKPt-Gf",
"HkzBwFWzM"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author"
] | [
"This paper investigates residual networks (ResNets) in an empirical way. The authors argue that shallow layers are responsible for learning important feature representations, while deeper layers focus on refining the features. They validate this point by performing a series of lesion study on ResNet.\n\nOverall, t... | [
6,
5,
7,
-1,
-1,
-1,
-1,
-1
] | [
3,
5,
4,
-1,
-1,
-1,
-1,
-1
] | [
"iclr_2018_SJa9iHgAZ",
"iclr_2018_SJa9iHgAZ",
"iclr_2018_SJa9iHgAZ",
"iclr_2018_SJa9iHgAZ",
"H1EPgaweG",
"HJyUi3sez",
"HkeOU0qgf",
"HkeOU0qgf"
] |
iclr_2018_Hk6WhagRW | Emergent Communication through Negotiation | Multi-agent reinforcement learning offers a way to study how communication could emerge in communities of agents needing to solve specific problems. In this paper, we study the emergence of communication in the negotiation environment, a semi-cooperative model of agent interaction. We introduce two communication protoc... | accepted-poster-papers | All reviewers agree the paper proposes an interesting setup and the main finding that "prosocial agents are able to learn to ground symbols using RL, but self-interested agents are not" progresses work in this area. R3 asked a number of detail-oriented questions and while they did not update their review based on the a... | train | [
"B1nNZKU4M",
"SJGBLcYxG",
"Bk7S9ZclM",
"BkoPxj3xz",
"ByxfIR9Xf",
"HyQaw09QG",
"B1RQDA9Qz",
"BkGlPCqQz",
"ryQO17PyM",
"HJ9FF6EAZ",
"r1Nwr02RW",
"HydzSWqCW",
"SyodmvDC-",
"SkM4N2SCW",
"SkW8mhN0-"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"public",
"public",
"public",
"public",
"public"
] | [
"Thank you for responding to the mentioned concerns and addressing those in your latest revision. The topic is interesting and deserves visibility.",
"The authors describe a variant of the negotiation game in which agents of different type, selfish or prosocial, and with different preferences. The central feature... | [
-1,
6,
7,
5,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1
] | [
-1,
3,
4,
4,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1
] | [
"HyQaw09QG",
"iclr_2018_Hk6WhagRW",
"iclr_2018_Hk6WhagRW",
"iclr_2018_Hk6WhagRW",
"iclr_2018_Hk6WhagRW",
"SJGBLcYxG",
"Bk7S9ZclM",
"BkoPxj3xz",
"r1Nwr02RW",
"SkW8mhN0-",
"iclr_2018_Hk6WhagRW",
"HJ9FF6EAZ",
"SkM4N2SCW",
"HJ9FF6EAZ",
"iclr_2018_Hk6WhagRW"
] |
iclr_2018_SygwwGbRW | Semi-parametric topological memory for navigation | We introduce a new memory architecture for navigation in previously unseen environments, inspired by landmark-based navigation in animals. The proposed semi-parametric topological memory (SPTM) consists of a (non-parametric) graph with nodes corresponding to locations in the environment and a (parametric) deep network ... | accepted-poster-papers | Important problem (navigation in unseen 3D environments, Doom in this case), interesting hybrid approach (mixing neural networks and path-planning). Initially, there were concerns about evaluation (proper baselines, ambiguous environments, etc). The authors have responded with updated experiments that are convincing to... | train | [
"BJIvZ_H4M",
"BkOtNnKxM",
"S1H89etgf",
"Hy3ONt2eM",
"S1qTBd6XM",
"ByrCE_aXf",
"HJsFdVbQG",
"SJMgFkxQG",
"rJPdc9JmM",
"rkVudelff",
"rkU3lxgff",
"ByFzpZxzf",
"SJzHaWlfG",
"ryI0-glfM"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author"
] | [
"Taking into account the revision, this is an interesting idea whose limitations have been properly investigated.",
"*** Revision: based on the author's work, we have switched the score to accept (7) ***\n\nClever ideas but not end-to-end navigation.\n\nThis paper presents a hybrid architecture that mixes paramet... | [
-1,
7,
3,
7,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1
] | [
-1,
5,
4,
4,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1
] | [
"S1qTBd6XM",
"iclr_2018_SygwwGbRW",
"iclr_2018_SygwwGbRW",
"iclr_2018_SygwwGbRW",
"SJMgFkxQG",
"iclr_2018_SygwwGbRW",
"SJMgFkxQG",
"ByFzpZxzf",
"iclr_2018_SygwwGbRW",
"S1H89etgf",
"iclr_2018_SygwwGbRW",
"BkOtNnKxM",
"BkOtNnKxM",
"Hy3ONt2eM"
] |
iclr_2018_B12Js_yRb | Learning to Count Objects in Natural Images for Visual Question Answering | Visual Question Answering (VQA) models have struggled with counting objects in natural images so far. We identify a fundamental problem due to soft attention in these models as a cause. To circumvent this problem, we propose a neural network component that allows robust counting from object proposals. Experiments on a ... | accepted-poster-papers | Initially this paper received mixed reviews. After reading the author response, R1 and and R3 recommend acceptance.
R2, who recommended rejecting the paper, did not participate in discussions, did not respond to author explanations, did not respond to AC emails, and did not submit a final recommendation. This AC does ... | train | [
"r13as6sXf",
"HkRBLTo7G",
"S1dXkASrz",
"H1GhmwqgG",
"Hkwum9YgM",
"SJ11jzclf",
"B1bfGw-ff",
"HJt5Gsrbf",
"ByYnNiB-f",
"S15E4oHbM",
"Sy-67iHZz",
"B1-TZiS-M",
"Sk0dZsS-M",
"B17QZjB-M",
"rkkDeiB-M"
] | [
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author"
] | [
"Due to the length of our detailed point-by-point rebuttals, we would like to give a quick summary of our responses to the main concerns that the reviewers had.\n\n# Reviewer 3 (convinced by our rebuttal and increased the rating)\n\n- Too handcrafted\nThe current state-of-art in VQA on real images is nowhere near g... | [
-1,
-1,
-1,
6,
6,
4,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1
] | [
-1,
-1,
-1,
3,
3,
4,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1
] | [
"iclr_2018_B12Js_yRb",
"iclr_2018_B12Js_yRb",
"H1GhmwqgG",
"iclr_2018_B12Js_yRb",
"iclr_2018_B12Js_yRb",
"iclr_2018_B12Js_yRb",
"iclr_2018_B12Js_yRb",
"Hkwum9YgM",
"S15E4oHbM",
"Sy-67iHZz",
"SJ11jzclf",
"Sk0dZsS-M",
"B17QZjB-M",
"rkkDeiB-M",
"H1GhmwqgG"
] |
iclr_2018_HJsjkMb0Z | i-RevNet: Deep Invertible Networks | It is widely believed that the success of deep convolutional networks is based on progressively discarding uninformative variability about the input with respect to the problem at hand. This is supported empirically by the difficulty of recovering images from their hidden representations, in most commonly used network ... | accepted-poster-papers | This paper constructs a variant of deep CNNs which is provably invertible, by replacing spatial pooling with multiple shifted spatial downsampling, and capitalizing on residual layers to define a simple, invertible representation. The authors show that the resulting representation is equally effective at large-scale ob... | train | [
"BJOsVtsNz",
"HyABCoKVz",
"BJzKguLVz",
"HJRdhx5eM",
"rJxrJe9eG",
"HkxP0bceM",
"SyyNsbbNz",
"H1ia7wJ4f",
"ByS7UBGQz",
"HyW2XBf7z",
"By9vGBMXM",
"H1dDbBfQf"
] | [
"author",
"public",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"public",
"author",
"author",
"author",
"author"
] | [
"\n1) As mentioned in section 3.1 and detailed in the 4th paragraph of section 3.2, $\\tilde{S}$ splits the input into two tensors. In our case, we stick to the choice of Revnets and split the number of input channels in half. You will be able to check how this is done in detail in the code we will release alongsid... | [
-1,
-1,
-1,
8,
9,
8,
-1,
-1,
-1,
-1,
-1,
-1
] | [
-1,
-1,
-1,
4,
4,
4,
-1,
-1,
-1,
-1,
-1,
-1
] | [
"HyABCoKVz",
"iclr_2018_HJsjkMb0Z",
"HyW2XBf7z",
"iclr_2018_HJsjkMb0Z",
"iclr_2018_HJsjkMb0Z",
"iclr_2018_HJsjkMb0Z",
"H1ia7wJ4f",
"iclr_2018_HJsjkMb0Z",
"HkxP0bceM",
"HJRdhx5eM",
"rJxrJe9eG",
"iclr_2018_HJsjkMb0Z"
] |
iclr_2018_BkUHlMZ0b | Evaluating the Robustness of Neural Networks: An Extreme Value Theory Approach | The robustness of neural networks to adversarial examples has received great attention due to security implications. Despite various attack approaches to crafting visually imperceptible adversarial examples, little has been developed towards a comprehensive measure of robustness. In this paper, we provide theoretical j... | accepted-poster-papers | This paper proposes a new metric to evaluate the robustness of neural networks to adversarial attacks. This metric comes with theoretical guarantees and can be efficiently computed on large-scale neural networks.
Reviewers were generally positive about the strengths of the paper, especially after major revisions durin... | train | [
"rk8Ucb5gf",
"B1ZlEVXyf",
"BJiW7IkZM",
"BJdX_3LGz",
"H1i4bwD7f",
"Hyu463UzG",
"SyK9an8Gz",
"Hkt4hhIMM",
"Hk0A_3IGG",
"ry18wnIMz"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author"
] | [
"The work claims a measure of robustness of networks that is attack-agnostic. Robustness measure is turned into the problem of finding a local Lipschitz constant which is given by the maximum of the norm of the gradient of the associated function. That quantity is then estimated by sampling from the domain of maxim... | [
7,
7,
7,
-1,
-1,
-1,
-1,
-1,
-1,
-1
] | [
3,
3,
1,
-1,
-1,
-1,
-1,
-1,
-1,
-1
] | [
"iclr_2018_BkUHlMZ0b",
"iclr_2018_BkUHlMZ0b",
"iclr_2018_BkUHlMZ0b",
"BJiW7IkZM",
"B1ZlEVXyf",
"B1ZlEVXyf",
"B1ZlEVXyf",
"B1ZlEVXyf",
"rk8Ucb5gf",
"iclr_2018_BkUHlMZ0b"
] |
iclr_2018_r1vuQG-CW | HexaConv | The effectiveness of Convolutional Neural Networks stems in large part from their ability to exploit the translation invariance that is inherent in many learning problems. Recently, it was shown that CNNs can exploit other invariances, such as rotation invariance, by using group convolutions instead of planar convoluti... | accepted-poster-papers | This paper implements Group convolutions on inputs defined over hexagonal lattices instead of square lattices, using the roto-translation group. The internal symmetries of the hexagonal grid allow for a larger discrete rotation group than when using square pixels, leading to improved performance on CIFAR and aerial dat... | train | [
"SysTGDdxf",
"rJnr8zdgM",
"Syvd8Qcgf",
"BkDvOSdQM",
"Sy1xuHOXM",
"SJrJPrdQG",
"Byo0SHu7z",
"HybbMN0Zf"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"public"
] | [
"\nThe authors took my comments nicely into account in their revision, and their answers are convincing. I increase my rating from 5 to 7. The authors could also integrate their discussion about their results on CIFAR in the paper, I think it would help readers understand better the advantage of the contribution.\n... | [
7,
7,
7,
-1,
-1,
-1,
-1,
-1
] | [
4,
4,
4,
-1,
-1,
-1,
-1,
-1
] | [
"iclr_2018_r1vuQG-CW",
"iclr_2018_r1vuQG-CW",
"iclr_2018_r1vuQG-CW",
"HybbMN0Zf",
"Syvd8Qcgf",
"SysTGDdxf",
"rJnr8zdgM",
"iclr_2018_r1vuQG-CW"
] |
iclr_2018_rJzIBfZAb | Towards Deep Learning Models Resistant to Adversarial Attacks | Recent work has demonstrated that neural networks are vulnerable to adversarial examples, i.e., inputs that are almost indistinguishable from natural data and yet classified incorrectly by the network. To address this problem, we study the adversarial robustness of neural networks through the lens of robust optimizatio... | accepted-poster-papers | This paper presents new results on adversarial training, using the framework of robust optimization. Its minimax nature allows for principled methods of both training and attacking neural networks.
The reviewers were generally positive about its contributions, despite some concerns about 'overclaiming'. The AC recomme... | train | [
"rkdJB4SBG",
"Hy0j8ecgz",
"rkO53U_ez",
"SyRt7SoxG",
"BkTN7DaXz",
"SyUsGvpmG",
"HkRKZDTQM",
"HJJe50sez",
"rkj4yGsgf",
"rJw7T03yG",
"ryQ33hmkf",
"rJsu3TGyG"
] | [
"public",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"public",
"public",
"author",
"public"
] | [
"We have been performing an analysis of the robustness of many of the papers submitted here. This paper provides a substantially stronger defense than many of the other submissions, and we were not able to meaningfully invalidate any of the claims made. Given our analysis so far, it looks like this is the strongest... | [
-1,
7,
6,
7,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1
] | [
-1,
4,
3,
4,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1
] | [
"iclr_2018_rJzIBfZAb",
"iclr_2018_rJzIBfZAb",
"iclr_2018_rJzIBfZAb",
"iclr_2018_rJzIBfZAb",
"rkO53U_ez",
"Hy0j8ecgz",
"SyRt7SoxG",
"rkj4yGsgf",
"iclr_2018_rJzIBfZAb",
"ryQ33hmkf",
"rJsu3TGyG",
"iclr_2018_rJzIBfZAb"
] |
iclr_2018_By4HsfWAZ | Deep Learning for Physical Processes: Incorporating Prior Scientific Knowledge | We consider the use of Deep Learning methods for modeling complex phenomena like those occurring in natural physical processes. With the large amount of data gathered on these phenomena the data intensive paradigm could begin to challenge more traditional approaches elaborated over the years in fields like maths or ph... | accepted-poster-papers | This paper proposes to use data-driven deep convolutional architectures for modeling advection diffusion. It is well motivated and comes with convincing numerical experiments.
Reviewers agreed that this is a worthy contribution to ICLR with the potential to trigger further research in the interplay between deep learnin... | train | [
"BJBU32dgz",
"SyeN_yclz",
"Bk6nbuf-M",
"Skf3ipimM",
"r1saUCzXG",
"rkX5LRzmz",
"Skp8ICG7f",
"BkK-sP4xf",
"H1UFgYQlG",
"HJdRxVkgM",
"H1lXnCjyG",
"Sy569OJ1G",
"SykEv6KRW"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"public",
"author",
"public",
"author",
"public"
] | [
"The paper ‘Deep learning for Physical Process: incorporating prior physical knowledge’ proposes\nto question the use of data-intensive strategies such as deep learning in solving physical \ninverse problems that are traditionally solved through assimilation strategies. They notably show\nhow physical priors on a g... | [
7,
7,
6,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1
] | [
3,
3,
2,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1
] | [
"iclr_2018_By4HsfWAZ",
"iclr_2018_By4HsfWAZ",
"iclr_2018_By4HsfWAZ",
"iclr_2018_By4HsfWAZ",
"SyeN_yclz",
"BJBU32dgz",
"Bk6nbuf-M",
"H1UFgYQlG",
"HJdRxVkgM",
"H1lXnCjyG",
"iclr_2018_By4HsfWAZ",
"SykEv6KRW",
"iclr_2018_By4HsfWAZ"
] |
iclr_2018_ryazCMbR- | Communication Algorithms via Deep Learning | Coding theory is a central discipline underpinning wireline and wireless modems that are the workhorses of the information age. Progress in coding theory is largely driven by individual human ingenuity with sporadic breakthroughs over the past century. In this paper we study whether it is possible to automate the disco... | accepted-poster-papers | This paper studies trainable deep encoders/decoders in the context of coding theory, based on recurrent neural networks. It presents highly promising results showing that one may be able to use learnt encoders and decoders on channels where no predefined codes are known.
Besides these encouraging aspects, there are im... | train | [
"BJKuTzwBf",
"HyBtZlVrf",
"r1u-g-JSf",
"ry10QYxSM",
"ByEN_hFgM",
"S1PB3Ocef",
"BkbcZjAgM",
"S1LKJamVf",
"rygrya7Ez",
"r1oRA2QVG",
"rkAzD4Z4f",
"SyDC5dgEG",
"BJwz5EyEz",
"B1lvp76mG",
"Bk79HmpQf",
"HyGycOnXM",
"SJkduKjXz",
"BydMVxizM",
"Hkm9QejMM",
"r11m9pcfG",
"HyOXBaqzG",
"... | [
"public",
"author",
"public",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"public",
"public",
"public",
"author",
"author",
"public",
"public",
"author",
"author",
"author",
"author",
"author",
"author"... | [
"Thanks!",
"Thanks for your comments. Indeed, the Turbo decoder is not nearest neighbor, and therefore there is no theorem that the turbo decoder will perform better on every other noise distribution with the same variance. Indeed, if such was the case, there would be no way for the turbo decoder to do worse (sin... | [
-1,
-1,
-1,
-1,
2,
6,
9,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1
] | [
-1,
-1,
-1,
-1,
4,
4,
5,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1
] | [
"HyBtZlVrf",
"r1u-g-JSf",
"rygrya7Ez",
"ByEN_hFgM",
"iclr_2018_ryazCMbR-",
"iclr_2018_ryazCMbR-",
"iclr_2018_ryazCMbR-",
"rkAzD4Z4f",
"SyDC5dgEG",
"BJwz5EyEz",
"iclr_2018_ryazCMbR-",
"B1lvp76mG",
"Bk79HmpQf",
"SJkduKjXz",
"HyGycOnXM",
"iclr_2018_ryazCMbR-",
"r11m9pcfG",
"ByEN_hFgM"... |
iclr_2018_rJYFzMZC- | Simulating Action Dynamics with Neural Process Networks | Understanding procedural language requires anticipating the causal effects of actions, even when they are not explicitly stated. In this work, we introduce Neural Process Networks to understand procedural text through (neural) simulation of action dynamics. Our model complements existing memory architectures with dyn... | accepted-poster-papers | this submission proposes a novel extension of existing recurrent networks that focuses on capturing long-term dependencies via tracking entities/their states, and tests it on a new task. there's a concern that the proposed approach is heavily engineered toward the proposed task and may not be applicable to other tasks, wh... | test | [
"SJUEXlDxf",
"r1Hu15Kxz",
"rJQcSB5gG",
"S1d6eY-7f",
"HyGietbmz",
"ryq_xY-7M",
"Hy2mkt-7z",
"HyFjC_-Xz"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author"
] | [
"Summary\n\nThis paper presents Neural Process Networks, an architecture for capturing procedural knowledge stated in texts that makes use of a differentiable memory, a sentence and word attention mechanism, as well as learning action representations and their effect on entity representations. The architecture is t... | [
6,
9,
8,
-1,
-1,
-1,
-1,
-1
] | [
4,
4,
4,
-1,
-1,
-1,
-1,
-1
] | [
"iclr_2018_rJYFzMZC-",
"iclr_2018_rJYFzMZC-",
"iclr_2018_rJYFzMZC-",
"HyGietbmz",
"ryq_xY-7M",
"SJUEXlDxf",
"r1Hu15Kxz",
"rJQcSB5gG"
] |
iclr_2018_BkeqO7x0- | Unsupervised Cipher Cracking Using Discrete GANs | This work details CipherGAN, an architecture inspired by CycleGAN used for inferring the underlying cipher mapping given banks of unpaired ciphertext and plaintext. We demonstrate that CipherGAN is capable of cracking language data enciphered using shift and Vigenere ciphers to a high degree of fidelity and for vocabul... | accepted-poster-papers | this work adapts cycle GAN to the problem of decipherment with some success. it's still an early result, but all the reviewers have found it to be interesting and worthwhile for publication. | test | [
"S1skfxRxM",
"r1TBz6I4M",
"SykysFulM",
"ryn4mW9ef",
"HkGl-GcZf",
"By1ReMc-M",
"HkWfxz9WM",
"By7yez5Zz"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author"
] | [
"SUMMARY\n\nThe paper considers the problem of using cycle GANs to decipher text encrypted with historical ciphers. Also it presents some theory to address the problem that discriminating between the discrete data and continuous prediction is too simple. The model proposed is a variant of the cycle GAN in which in ... | [
7,
-1,
7,
8,
-1,
-1,
-1,
-1
] | [
4,
-1,
1,
4,
-1,
-1,
-1,
-1
] | [
"iclr_2018_BkeqO7x0-",
"HkGl-GcZf",
"iclr_2018_BkeqO7x0-",
"iclr_2018_BkeqO7x0-",
"By1ReMc-M",
"S1skfxRxM",
"ryn4mW9ef",
"SykysFulM"
] |
iclr_2018_Sy-dQG-Rb | Neural Speed Reading via Skim-RNN | Inspired by the principles of speed reading, we introduce Skim-RNN, a recurrent neural network (RNN) that dynamically decides to update only a small fraction of the hidden state for relatively unimportant input tokens. Skim-RNN gives a significant computational advantage over an RNN that always updates the entire hidde... | accepted-poster-papers | this submission proposes an efficient parametrization of a recurrent neural net by using two transition functions (one large and one small) to reduce the amount of computation (though without actual improvement on GPU). the reviewers were very positive about the submission.
please, do not forget to include all the result ... | train | [
"BkjQVC8Sz",
"H1dhmCIrz",
"rJiTSpSSf",
"BkXOd1q4G",
"r1izCPYlG",
"HJpgrTKxf",
"rkZtyy5gf",
"HyDWxCXNf",
"SyMwQNmEG",
"SkC43pf4G",
"ByGiVpuQM",
"BynOEa_Qf",
"SJNL4ad7M",
"SJDcXGVJM",
"SytImi7kG",
"BycR50MyG"
] | [
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"public",
"author",
"public"
] | [
"NT",
"We note that we could not increase FLOP reduction of VCRNN by controlling the hyperparameters on SQuAD. Also, VCRNN performs worse than vanilla RNN (LSTM) without any gain in FLOP reduction, which we believe is due to the difficulty in training (biased gradient, etc.).\n\nWe believe that this supports our... | [
-1,
-1,
-1,
-1,
7,
7,
8,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1
] | [
-1,
-1,
-1,
-1,
3,
3,
3,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1
] | [
"H1dhmCIrz",
"rJiTSpSSf",
"iclr_2018_Sy-dQG-Rb",
"HyDWxCXNf",
"iclr_2018_Sy-dQG-Rb",
"iclr_2018_Sy-dQG-Rb",
"iclr_2018_Sy-dQG-Rb",
"SyMwQNmEG",
"SkC43pf4G",
"iclr_2018_Sy-dQG-Rb",
"rkZtyy5gf",
"HJpgrTKxf",
"r1izCPYlG",
"SytImi7kG",
"BycR50MyG",
"iclr_2018_Sy-dQG-Rb"
] |
iclr_2018_SyJS-OgR- | Multi-level Residual Networks from Dynamical Systems View | Deep residual networks (ResNets) and their variants are widely used in many computer vision applications and natural language processing tasks. However, the theoretical principles for designing and training ResNets are still not fully understood. Recently, several points of view have emerged to try to interpret ResNet... | accepted-poster-papers | this submission proposes a learning algorithm for resnets based on their interpretation of them as a discrete approximation to a continuous-time dynamical system. all the reviewers have found the submission to be clearly written and well motivated, and to propose an interesting and effective learning algorithm for resn... | test | [
"rJiJWZtHz",
"rk40-nDlz",
"SyuwCCKlz",
"HJzVc2sxf",
"HkrMrL2mG",
"rk51SIhmz",
"ByShEI27z",
"SkSwN8n7M"
] | [
"public",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author"
] | [
"> We are currently working on the experiments of ImageNet\n\nAny update on this front ?\nImproved ImageNet training time would significantly increase the impact of this paper.",
"This paper interprets deep residual network as a dynamic system, and proposes a novel training algorithm to train it in a constructive... | [
-1,
7,
7,
7,
-1,
-1,
-1,
-1
] | [
-1,
4,
3,
4,
-1,
-1,
-1,
-1
] | [
"ByShEI27z",
"iclr_2018_SyJS-OgR-",
"iclr_2018_SyJS-OgR-",
"iclr_2018_SyJS-OgR-",
"rk40-nDlz",
"SyuwCCKlz",
"HJzVc2sxf",
"iclr_2018_SyJS-OgR-"
] |
iclr_2018_HktJec1RZ | Towards Neural Phrase-based Machine Translation | In this paper, we present Neural Phrase-based Machine Translation (NPMT). Our method explicitly models the phrase structures in output sequences using Sleep-WAke Networks (SWAN), a recently proposed segmentation-based sequence modeling method. To mitigate the monotonic alignment requirement of SWAN, we introduce a new ... | accepted-poster-papers | this submission introduces soft local reordering to the recently proposed SWAN layer [Wang et al., 2017] to make it suitable for machine translation. although only in small-scale experiments, the results are convincing. | val | [
"Sy2fyR7gG",
"r1IRR2Yez",
"r1PrkGheM",
"rkMojlVzG",
"SyGiIgEff",
"SJzkqxNfz"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author"
] | [
"The paper introduces a neural translation model that automatically discovers phrases. This idea is very interesting and tries to marry phrase-based statistical machine translation with neural methods in a principled way. However, the clarity of the paper could be improved.\n\nThe local reordering layer has the ab... | [
6,
6,
8,
-1,
-1,
-1
] | [
3,
4,
5,
-1,
-1,
-1
] | [
"iclr_2018_HktJec1RZ",
"iclr_2018_HktJec1RZ",
"iclr_2018_HktJec1RZ",
"Sy2fyR7gG",
"r1PrkGheM",
"r1IRR2Yez"
] |
iclr_2018_ByJHuTgA- | On the State of the Art of Evaluation in Neural Language Models | Ongoing innovations in recurrent neural network architectures have provided a steady influx of apparently state-of-the-art results on language modelling benchmarks. However, these have been evaluated using differing codebases and limited computational resources, which represent uncontrolled sources of experimental vari... | accepted-poster-papers | this submission demonstrates an existing loophole (?) in rushing out new neural language models by carefully (and expensively) running hyperparameter tuning of baseline approaches. i feel this is an important contribution, but as pointed out by some reviewers, i would have liked to see whether the conclusion stands ev... | train | [
"S1Mw8jBef",
"rJTcBCtxG",
"HkGW8A2gG",
"rJwjH7HzM",
"SkNXbLNzf",
"HJOf_f2Wz",
"HJfg_G2bz",
"BJM9Df2WM",
"BJXUZl2Wf",
"SJeVnLdbG",
"H14DMe7WG",
"HkGngSGZz",
"BkqLXA1Zz",
"r1naAI6eG",
"r19qHo2gM",
"HJV5juheG",
"Sk5vVdhez",
"HkATqf5xz",
"HksEzG5gf"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"public",
"author",
"author",
"author",
"author",
"public",
"author",
"public",
"public",
"author",
"author",
"public",
"public",
"author",
"public"
] | [
"The submitted manuscript describes an exercise in performance comparison for neural language models under standardization of the hyperparameter tuning and model selection strategies and costs. This type of study is important to give perspective to non-standardized performance scores reported across separate publi... | [
7,
5,
8,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1
] | [
2,
5,
3,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1
] | [
"iclr_2018_ByJHuTgA-",
"iclr_2018_ByJHuTgA-",
"iclr_2018_ByJHuTgA-",
"iclr_2018_ByJHuTgA-",
"iclr_2018_ByJHuTgA-",
"S1Mw8jBef",
"rJTcBCtxG",
"HkGW8A2gG",
"SJeVnLdbG",
"iclr_2018_ByJHuTgA-",
"HkGngSGZz",
"iclr_2018_ByJHuTgA-",
"rJTcBCtxG",
"Sk5vVdhez",
"HJV5juheG",
"HkATqf5xz",
"iclr_... |
iclr_2018_rkfOvGbCW | Memory-based Parameter Adaptation | Deep neural networks have excelled on a wide range of problems, from vision to language and game playing. Neural networks very gradually incorporate information into weights as they process data, requiring very low learning rates. If the training distribution shifts, the network is slow to adapt, and when it does adap... | accepted-poster-papers | the proposed approach nicely incorporates various ideas from recent work into a single meta-learning (or domain adaptation or incremental learning or ...) framework. although better empirical comparison to existing (however recent they are) approaches would have made it stronger, the reviewers all found this submission... | val | [
"HylhQnUNG",
"HkJsPxmxG",
"rktPEKveG",
"ByEeLZ5xz",
"H1cEqB6mz",
"SJAlIN6mf",
"HJLvPs3mz",
"H1ymHgbmz",
"Skiov_-zz",
"HkbqcOZfG",
"BJQOOO-GG",
"H1rrLdWfM",
"rk86VuZGz",
"ByYMHubGz",
"ByINRE1zM",
"rJyW0n4eM",
"B1tJwiMef"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"public",
"author",
"author",
"public",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"author",
"public"
] | [
"Dear Authors and AC\n\nThank you for your detailed answers -- having to split in two comments due to length shows how seriously you take it :)\nBetween them and the fact that my mind kept wandering back to the ideas in this paper during the holidays, I am happy to maintain my score of 8 - Top 50% papers.",
"Over... | [
-1,
6,
6,
8,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1
] | [
-1,
4,
5,
4,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1
] | [
"ByEeLZ5xz",
"iclr_2018_rkfOvGbCW",
"iclr_2018_rkfOvGbCW",
"iclr_2018_rkfOvGbCW",
"SJAlIN6mf",
"H1ymHgbmz",
"iclr_2018_rkfOvGbCW",
"rJyW0n4eM",
"rktPEKveG",
"ByINRE1zM",
"HkJsPxmxG",
"rktPEKveG",
"ByEeLZ5xz",
"ByEeLZ5xz",
"iclr_2018_rkfOvGbCW",
"B1tJwiMef",
"iclr_2018_rkfOvGbCW"
] |
iclr_2018_HJJ23bW0b | Initialization matters: Orthogonal Predictive State Recurrent Neural Networks | Learning to predict complex time-series data is a fundamental challenge in a range of disciplines including Machine Learning, Robotics, and Natural Language Processing. Predictive State Recurrent Neural Networks (PSRNNs) (Downey et al.) are a state-of-the-art approach for modeling time-series data which combine the ben... | accepted-poster-papers | this submission presents the positive impact of using orthogonal random features instead of unstructured random features for predictive state recurrent neural nets. there's been some sentiment by the reviewers that the contribution is rather limited, but after further discussion with another AC and PC's, we have conclu... | train | [
"ByofVOOgG",
"HJzahgcgf",
"rJujSJjgG"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"I was very confused by some parts of the paper that are simple copy-paste from the paper of Downey et al. which has been accepted for publication in NIPS. In particular, in section 3, several sentences are taken as they are from the Downey et al.’s paper. Some examples :\n\n« provide a compact representation of a ... | [
4,
8,
7
] | [
5,
4,
2
] | [
"iclr_2018_HJJ23bW0b",
"iclr_2018_HJJ23bW0b",
"iclr_2018_HJJ23bW0b"
] |
iclr_2018_rJUYGxbCW | PixelDefend: Leveraging Generative Models to Understand and Defend against Adversarial Examples | Adversarial perturbations of normal images are usually imperceptible to humans, but they can seriously confuse state-of-the-art machine learning models. What makes them so special in the eyes of image classifiers? In this paper, we show empirically that adversarial examples mainly lie in the low probability regions of ... | accepted-poster-papers | The paper studies the use of PixelCNN density models for the detection of adversarial images, which tend to lie in low-probability parts of image space. The work is novel, relevant to the ICLR community, and appears to be technically sound.
A downside of the paper is its limited empirical evaluation: there is evidence su... | train | [
"HJ9WQx6JG",
"rJ4_WfuxM",
"rJbiu3lbM",
"BkMlqBTfG",
"rklJKHaMf",
"Sk-nvSTMf"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author"
] | [
"\nI read the rebuttal and thank the authors for the thoughtful responses and revisions. The updated Figure 2 and Section 4.4. addresses my primary concerns. Upwardly revising my review.\n\n====================\n\nThe authors describe a method for detecting adversarial examples by measuring the likelihood in terms ... | [
7,
7,
7,
-1,
-1,
-1
] | [
4,
4,
4,
-1,
-1,
-1
] | [
"iclr_2018_rJUYGxbCW",
"iclr_2018_rJUYGxbCW",
"iclr_2018_rJUYGxbCW",
"rJ4_WfuxM",
"rJbiu3lbM",
"HJ9WQx6JG"
] |
iclr_2018_Bys4ob-Rb | Certified Defenses against Adversarial Examples | While neural networks have achieved high accuracy on standard image classification benchmarks, their accuracy drops to nearly zero in the presence of small adversarial perturbations to test inputs. Defenses based on regularization and adversarial training have been proposed, but often followed by new, stronger attacks ... | accepted-poster-papers | The paper presents a differentiable upper bound on the performance of classifier on an adversarially perturbed example (with small perturbation in the L-infinity sense). The paper presents novel ideas, is well-written, and appears technically sound. It will likely be of interest to the ICLR community.
The only downsid... | train | [
"SJlhZp8gf",
"BJVgLg9xf",
"SkwJQwogM",
"rJ3juc1MM",
"rkaLdqJMM",
"rkjMdqkGM",
"r1kqwc1Gz",
"By245E-Wf",
"r15EJeKyG",
"SyBzxMU1z",
"SJT_HG81G",
"Byvbr5WJM"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"public",
"author",
"public",
"author",
"public"
] | [
"This paper develops a new differentiable upper bound on the performance of classifier when the adversarial input in l_infinity is assumed to be applied.\nWhile the attack model is quite general, the current bound is only valid for linear and NN with one hidden layer model, so the result is quite restrictive.\n\nHo... | [
8,
8,
5,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1
] | [
4,
4,
3,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1
] | [
"iclr_2018_Bys4ob-Rb",
"iclr_2018_Bys4ob-Rb",
"iclr_2018_Bys4ob-Rb",
"SJlhZp8gf",
"BJVgLg9xf",
"SkwJQwogM",
"By245E-Wf",
"iclr_2018_Bys4ob-Rb",
"iclr_2018_Bys4ob-Rb",
"iclr_2018_Bys4ob-Rb",
"Byvbr5WJM",
"iclr_2018_Bys4ob-Rb"
] |
iclr_2018_BkJ3ibb0- | Defense-GAN: Protecting Classifiers Against Adversarial Attacks Using Generative Models | In recent years, deep neural network approaches have been widely adopted for machine learning tasks, including classification. However, they were shown to be vulnerable to adversarial perturbations: carefully crafted small perturbations can cause misclassification of legitimate images. We propose Defense-GAN, a new fra... | accepted-poster-papers | The paper studied defenses against adversarial examples by training a GAN and, at inference time, finding the GAN-generated sample that is nearest to the (adversarial) input example. Next, it classifies the generated example rather than the input example. This defense is interesting and novel. The CelebA experiments th... | train | [
"H17TwR4rM",
"By-CxBKgz",
"BympCwwgf",
"rJOVWxjez",
"SkbvmBamf",
"Hy120kU7f",
"Bkw8Ck8QG",
"r1MMCyImM",
"SyS0aJ8Xz",
"r1D5pJ87f",
"Bkbgpk87z",
"S1bEhkU7G",
"B1wgPVOzG",
"S1c64RJzz",
"ryW5rcl-f",
"SkdMUQaAZ"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"public",
"public",
"public",
"public"
] | [
"B) C) Thanks for the additional experiments, I think they make the paper stronger. In particular they validate that scaling is proportional to L but not (linear in) to image size, and that the method works in RGB.\nD) OK.\nA) E) I still think that these additional experiments would help, but I am now marginally co... | [
-1,
6,
6,
8,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1
] | [
-1,
3,
3,
4,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1
] | [
"Bkbgpk87z",
"iclr_2018_BkJ3ibb0-",
"iclr_2018_BkJ3ibb0-",
"iclr_2018_BkJ3ibb0-",
"iclr_2018_BkJ3ibb0-",
"SkdMUQaAZ",
"ryW5rcl-f",
"S1c64RJzz",
"B1wgPVOzG",
"BympCwwgf",
"By-CxBKgz",
"rJOVWxjez",
"iclr_2018_BkJ3ibb0-",
"iclr_2018_BkJ3ibb0-",
"iclr_2018_BkJ3ibb0-",
"iclr_2018_BkJ3ibb0-"... |
iclr_2018_rkZvSe-RZ | Ensemble Adversarial Training: Attacks and Defenses | Adversarial examples are perturbed inputs designed to fool machine learning models. Adversarial training injects such examples into training data to increase robustness. To scale this technique to large datasets, perturbations are crafted using fast single-step methods that maximize a linear approximation of the model'... | accepted-poster-papers | The paper studies a defense against adversarial examples that re-trains convolutional networks on adversarial examples constructed to attack pre-trained networks. Whilst the proposed approach is not very original, the paper does present a solid empirical baseline for these kinds of defenses. In particular, it goes beyo... | train | [
"BkM3vGDlf",
"SJxF3VsxG",
"S1suPTx-G",
"rJgZKlFGf",
"rySuOxYGG",
"rynrIxYGz",
"r1LmIgFGM"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author"
] | [
"This paper proposes ensemble adversarial training, in which adversarial examples crafted on other static pre-trained models are used in the training phase. Their method makes deep networks robust to black-box attacks, which was empirically demonstrated.\n\nThis is an empirical paper. The ideas are simple and not s... | [
6,
6,
6,
-1,
-1,
-1,
-1
] | [
2,
4,
4,
-1,
-1,
-1,
-1
] | [
"iclr_2018_rkZvSe-RZ",
"iclr_2018_rkZvSe-RZ",
"iclr_2018_rkZvSe-RZ",
"iclr_2018_rkZvSe-RZ",
"S1suPTx-G",
"SJxF3VsxG",
"BkM3vGDlf"
] |
iclr_2018_SJyVzQ-C- | Fraternal Dropout | Recurrent neural networks (RNNs) are important class of architectures among neural networks useful for language modeling and sequential prediction. However, optimizing RNNs is known to be harder compared to feed-forward neural networks. A number of techniques have been proposed in literature to address this problem. In... | accepted-poster-papers | The paper studies a dropout variant, called fraternal dropout. The paper is somewhat incremental in that the proposed approach is closely related to expectation linear dropout. Having said that, fraternal dropout does improve a state-of-the-art language model on PTB and WikiText2 by ~0.5-1.7 perplexity points. The pape... | train | [
"SJGZIlkSz",
"rkblPhrgf",
"SkmNLstxG",
"rJJ2RIigz",
"SkuiLrp7M",
"rJ6YkqBfz",
"ryNO0J2Wz",
"S1Yt6udbz",
"HJ4xA__bz",
"HkN3T__Wf"
] | [
"public",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"official_reviewer",
"author",
"author",
"author"
] | [
"The proposed method, fraternal dropout, is the version of self-ensembles (Pi model) for RNNs. The authors proved the fact that the regularization term for self-ensemble is worth for learning RNN models. The results of the paper show incredible performances from the previous state-of-the-art performances on languag... | [
-1,
5,
5,
6,
-1,
-1,
-1,
-1,
-1,
-1
] | [
-1,
4,
3,
3,
-1,
-1,
-1,
-1,
-1,
-1
] | [
"iclr_2018_SJyVzQ-C-",
"iclr_2018_SJyVzQ-C-",
"iclr_2018_SJyVzQ-C-",
"iclr_2018_SJyVzQ-C-",
"iclr_2018_SJyVzQ-C-",
"ryNO0J2Wz",
"HJ4xA__bz",
"rJJ2RIigz",
"rkblPhrgf",
"SkmNLstxG"
] |
iclr_2018_SJcKhk-Ab | Can recurrent neural networks warp time? | Successful recurrent models such as long short-term memories (LSTMs) and gated recurrent units (GRUs) use \emph{ad hoc} gating mechanisms. Empirically these models have been found to improve the learning of medium to long term temporal dependencies and to help with vanishing gradient issues.
We prove tha... | accepted-poster-papers | All the reviewers like the theoretical result presented in the paper, which relates the gating mechanism of LSTMs (and GRUs) to time invariance / warping. The theoretical result is great and is used to propose a heuristic for setting biases when time invariance scales are known. The experiments are not mind-boggling, but ... | test | [
"Sk2_qmcxf",
"rk10EE5Vf",
"HyqtEE54z",
"SkrzwsKeG",
"Hyb2BDI4G",
"HkIzPXqxM",
"ry_xKKQEM",
"S12wnhz4f",
"HyGJjR9QG",
"ryPwnxFXf",
"HJUl3etXf",
"HyuqsxtXG"
] | [
"official_reviewer",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"author"
] | [
"Summary:\nThis paper shows that incorporating invariance to time transformations in recurrent networks naturally results in a gating mechanism used by LSTMs and their variants. This is then used to develop a simple bias initialization scheme for the gates when the range of temporal dependencies relevant for a prob... | [
8,
-1,
-1,
8,
-1,
8,
-1,
-1,
-1,
-1,
-1,
-1
] | [
4,
-1,
-1,
4,
-1,
4,
-1,
-1,
-1,
-1,
-1,
-1
] | [
"iclr_2018_SJcKhk-Ab",
"iclr_2018_SJcKhk-Ab",
"Hyb2BDI4G",
"iclr_2018_SJcKhk-Ab",
"ry_xKKQEM",
"iclr_2018_SJcKhk-Ab",
"S12wnhz4f",
"HyuqsxtXG",
"iclr_2018_SJcKhk-Ab",
"SkrzwsKeG",
"HkIzPXqxM",
"Sk2_qmcxf"
] |
iclr_2018_HyUNwulC- | Parallelizing Linear Recurrent Neural Nets Over Sequence Length | Recurrent neural networks (RNNs) are widely used to model sequential data but
their non-linear dependencies between sequence elements prevent parallelizing
training over sequence length. We show the training of RNNs with only linear
sequential dependencies can be parallelized over the sequence length ... | accepted-poster-papers | Paper presents a way in which linear RNNs can be computed (fprop, bprop) using parallel scan. They show big improvements in speedups and show application on really long sequences. Reviews were generally favorable. | val | [
"Hkr1wGOeG",
"ry7sCqtgM",
"SyAgjAtgG",
"rJIWI2PWM",
"BycaVhwZz",
"r1T7-3PbG"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author"
] | [
"This paper focuses on accelerating RNN by applying the method from Blelloch (1990). The application is straightforward and thus technical novelty of this paper is limited. But the results are impressive. \n\nOne concern is the proposed technique is only applied for few types of RNNs which may limit its application... | [
6,
7,
7,
-1,
-1,
-1
] | [
3,
2,
4,
-1,
-1,
-1
] | [
"iclr_2018_HyUNwulC-",
"iclr_2018_HyUNwulC-",
"iclr_2018_HyUNwulC-",
"Hkr1wGOeG",
"ry7sCqtgM",
"SyAgjAtgG"
] |
iclr_2018_HkTEFfZRb | Attacking Binarized Neural Networks | Neural networks with low-precision weights and activations offer compelling
efficiency advantages over their full-precision equivalents. The two most
frequently discussed benefits of quantization are reduced memory consumption,
and a faster forward pass when implemented with efficient bitwise
op... | accepted-poster-papers | Paper was well written and rebuttal was well thought out and convincing.
The reviewers agree that the paper showed BNNs were good (relatively speaking) at resisting adversarial examples. Some question was raised about whether the methods would work on larger datasets and models. The authors offered some experiments in... | train | [
"BkSH7A5Qz",
"Sy5hsUOlG",
"H1EWH1KxG",
"HkNP3wvWM",
"BJO40TcmM",
"ByKGyXMfz"
] | [
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"public"
] | [
"We thank the reviewers for their positive and constructive feedback. We believe that we have addressed all of the main questions and concerns in the most recent revision of the paper. These are detailed below:\n\nR2 - Higher dimensional data\n\nTo confirm that our findings hold for higher dimensional data, and fur... | [
-1,
7,
7,
6,
-1,
-1
] | [
-1,
3,
4,
5,
-1,
-1
] | [
"iclr_2018_HkTEFfZRb",
"iclr_2018_HkTEFfZRb",
"iclr_2018_HkTEFfZRb",
"iclr_2018_HkTEFfZRb",
"iclr_2018_HkTEFfZRb",
"iclr_2018_HkTEFfZRb"
] |
iclr_2018_S1jBcueAb | Depthwise Separable Convolutions for Neural Machine Translation | Depthwise separable convolutions reduce the number of parameters and computation used in convolutional operations while increasing representational efficiency.
They have been shown to be successful in image classification models, both in obtaining better models than previously possible for a given parameter count... | accepted-poster-papers | Paper explores depth-wise separable convolutions for sequence to sequence models with convolutional encoders.
R1 and R3 liked the paper and the results. R3 thought the presentation of the convolutional space was nice, but the experiments were hurried. Other reviewers thought the paper as a whole had dense parts and need ... | train | [
"rJeAByPlf",
"BkCwTl9lG",
"rJ9-yZ9lM",
"BJkhyncQM",
"BJrg6o9mz",
"HyW_7BOZM"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"official_reviewer"
] | [
"Pros:\n- new module\n- good performances (not state-of-the-art)\nCons:\n- additional experiments\n\nThe paper is well motivated, and is purely experimental and proposes a new architecture. However, I believe that more experiments should be performed and the explanations could be more concise.\n\nThe section 3 is d... | [
5,
7,
7,
-1,
-1,
-1
] | [
4,
4,
3,
-1,
-1,
-1
] | [
"iclr_2018_S1jBcueAb",
"iclr_2018_S1jBcueAb",
"iclr_2018_S1jBcueAb",
"rJeAByPlf",
"rJ9-yZ9lM",
"rJeAByPlf"
] |
iclr_2018_rywHCPkAW | Noisy Networks For Exploration | We introduce NoisyNet, a deep reinforcement learning agent with parametric noise added to its weights, and show that the induced stochasticity of the agent’s policy can be used to aid efficient exploration. The parameters of the noise are learned with gradient descent along with the remaining network weights. NoisyNet... | accepted-poster-papers | The paper proposes to add noise to the weights of a policy network during learning in Deep-RL settings and finds that this results in better performance on DQN, A3C and other algorithms that use other exploration strategies. Unfortunately, the paper does not do a thorough job of exploring the reasons and doesn't offer ... | train | [
"Hyf0aUVeM",
"rJ6Z7prxf",
"H14gEaFxG",
"SJDBQS5mz",
"BJZhEy5Xf",
"B1o89OFXz",
"B1e_W8uMM",
"H1NaqYFmz",
"r14ytLuMM",
"ry7jv6OQf",
"HJlU1T9GM",
"S1cI98dGM",
"S1pPiLdMG",
"BJRR3paAZ"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"public"
] | [
"In this paper, a new heuristic is introduced with the purpose of controlling the exploration in deep reinforcement learning. \n\nThe proposed approach, NoisyNet, seems very simple and smart: a noise of zero mean and unknown variance is added to each weight of the deep network. The matrices of unknown variances are... | [
5,
7,
6,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1
] | [
3,
4,
4,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1
] | [
"iclr_2018_rywHCPkAW",
"iclr_2018_rywHCPkAW",
"iclr_2018_rywHCPkAW",
"BJZhEy5Xf",
"H1NaqYFmz",
"HJlU1T9GM",
"iclr_2018_rywHCPkAW",
"ry7jv6OQf",
"H14gEaFxG",
"S1cI98dGM",
"r14ytLuMM",
"rJ6Z7prxf",
"Hyf0aUVeM",
"iclr_2018_rywHCPkAW"
] |
iclr_2018_Hkc-TeZ0W | A Hierarchical Model for Device Placement | We introduce a hierarchical model for efficient placement of computational graphs onto hardware devices, especially in heterogeneous environments with a mixture of CPUs, GPUs, and other computational devices. Our method learns to assign graph operations to groups and to allocate those groups to available devices. The g... | accepted-poster-papers | The authors provide an alternative method to [1] for placement of ops in blocks. The results are shown to be an improvement over prior RL based placement in [1] and superior to *some* (maybe not the best) earlier methods for operation placement. The paper seems to have benefited strongly from reviewer feedback and se...
"HytWY1DVG",
"ryazKvH4M",
"BJuGT9zez",
"Sk-qjGYlz",
"rkSREOYgM",
"r1yts_37f",
"HJMiRCdGz",
"SypbR0OGG",
"rJZNkyYzz"
] | [
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author"
] | [
"Thanks for your response!\n\nAlthough we cited Scotch papers from 2009, the software we used was developed in 2012: http://www.labri.fr/perso/pelegrin/scotch/. Thanks to your suggestion, we have found a more recent graph partitioning package called KaHIP, which has publications in 2017 as well as ongoing software ... | [
-1,
-1,
5,
5,
8,
-1,
-1,
-1,
-1
] | [
-1,
-1,
4,
4,
5,
-1,
-1,
-1,
-1
] | [
"ryazKvH4M",
"rJZNkyYzz",
"iclr_2018_Hkc-TeZ0W",
"iclr_2018_Hkc-TeZ0W",
"iclr_2018_Hkc-TeZ0W",
"iclr_2018_Hkc-TeZ0W",
"Sk-qjGYlz",
"rkSREOYgM",
"BJuGT9zez"
] |
iclr_2018_BJJLHbb0- | Deep Autoencoding Gaussian Mixture Model for Unsupervised Anomaly Detection | Unsupervised anomaly detection on multi- or high-dimensional data is of great importance in both fundamental machine learning research and industrial applications, for which density estimation lies at the core. Although previous approaches based on dimensionality reduction followed by density estimation have made fruit... | accepted-poster-papers | + Empirically convincing and clearly explained application: a novel deep learning architecture and approach is shown to significantly outperform state-of-the-art in unsupervised anomaly detection.
- No clear theoretical foundation and justification is provided for the approach
- Connection and differentiation from pr... | train | [
"S1f48huxz",
"r1tvocFgf",
"B1aQ8_2ef",
"S11z079mf",
"B1wAzvEQf",
"Sk4EfvVQf",
"rJq9evE7G",
"HyFqWv4Xf",
"BJXb-w4Qz",
"rkkvR84XM",
"S1g4W3x-f"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"public"
] | [
"1. This is a good paper, makes an interesting algorithmic contribution in the sense of joint clustering-dimension reduction for unsupervised anomaly detection\n2. It demonstrates clear performance improvement via comprehensive comparison with state-of-the-art methods\n3. Is the number of Gaussian Mixtures 'K' a hy... | [
8,
8,
8,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1
] | [
5,
4,
4,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1
] | [
"iclr_2018_BJJLHbb0-",
"iclr_2018_BJJLHbb0-",
"iclr_2018_BJJLHbb0-",
"rJq9evE7G",
"S1g4W3x-f",
"S1f48huxz",
"r1tvocFgf",
"r1tvocFgf",
"r1tvocFgf",
"B1aQ8_2ef",
"iclr_2018_BJJLHbb0-"
] |
iclr_2018_BySRH6CpW | Learning Discrete Weights Using the Local Reparameterization Trick | Recent breakthroughs in computer vision make use of large deep neural networks, utilizing the substantial speedup offered by GPUs. For applications running on limited hardware, however, high precision real-time processing can still be a challenge. One approach to solving this problem is training networks with binary o... | accepted-poster-papers | Well written paper on a novel application of the local reparametrisation trick to learn networks with discrete weights. The approach achieves state-of-the-art results.
Note: I appreciate that the authors added a comparison to the Gumbel-softmax continuous relaxation approach during the review period, following the sug... | train | [
"BJHcawFxM",
"SkOjP3Hlf",
"ryZHzH9gz"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"This paper proposes training binary and ternary weight distribution networks through the local reparametrization trick and continuous optimization. The argument is that due to the central limit theorem (CLT) the distribution on the neuron pre-activations is approximately Gaussian, with a mean given by the inner pr... | [
6,
7,
6
] | [
4,
3,
3
] | [
"iclr_2018_BySRH6CpW",
"iclr_2018_BySRH6CpW",
"iclr_2018_BySRH6CpW"
] |
iclr_2018_BJ_wN01C- | Deep Rewiring: Training very sparse deep networks | Neuromorphic hardware tends to pose limits on the connectivity of deep networks that one can run on them. But also generic hardware and software implementations of deep learning run more efficiently for sparse networks. Several methods exist for pruning connections of a neural network after it was trained without conne... | accepted-poster-papers | Clearly explained, well motivated and empirically supported algorithm for training deep networks while simultaneously learning their sparse connectivity.
The approach is similar to previous work (in particular Welling et al., Bayesian Learning via Stochastic Gradient Langevin Dynamics, ICML 2011) but is novel in that i... | train | [
"Syx4zM9xM",
"H1aEoGAgG",
"r1UOC9lbf"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"In this paper, the authors present an approach to implement deep learning directly on sparsely connected graphs. Previous approaches have focused on transferring trained deep networks to a sparse graph for fast or efficient utilization; using this approach, sparse networks can be trained efficiently online, allowi... | [
8,
5,
6
] | [
4,
5,
4
] | [
"iclr_2018_BJ_wN01C-",
"iclr_2018_BJ_wN01C-",
"iclr_2018_BJ_wN01C-"
] |
iclr_2018_SJQHjzZ0- | Quantitatively Evaluating GANs With Divergences Proposed for Training | Generative adversarial networks (GANs) have been extremely effective in approximating complex distributions of high-dimensional, input data samples, and substantial progress has been made in understanding and improving GAN performance in terms of both theory and application.
However, we currently lack quantitati... | accepted-poster-papers | + clearly written and thorough empirical comparison of several metrics/divergences for evaluating GANs, prominently parametric-critic based divergences.
- little technical novelty with respect to prior work. As noted by reviewers and an anonymous commentator: using an Independent critic for evaluation has been propos... | train | [
"ryX_FSexG",
"H1uFgwqeM",
"H1C2pZplz",
"HJBlWX6mM",
"SyqLlX6QM",
"S1Gex7a7z",
"Hyexk7TmM",
"S1mjkm67z",
"SJm-JvWfz",
"SJ7anZWlM",
"ry8IOs_yM",
"SkCF1x4kG",
"r1GaH_NAW"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"public",
"author",
"public",
"author",
"public"
] | [
"Through evaluation of current popular GAN variants. \n * useful AIS figure\n * useful example of failure mode of inception scores\n * interesting to see that using a metric based on a model’s distance does not make the model better at that distance\nthe main criticism that can be given to the paper is that the... | [
7,
4,
7,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1
] | [
5,
3,
4,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1
] | [
"iclr_2018_SJQHjzZ0-",
"iclr_2018_SJQHjzZ0-",
"iclr_2018_SJQHjzZ0-",
"iclr_2018_SJQHjzZ0-",
"SJm-JvWfz",
"H1C2pZplz",
"H1uFgwqeM",
"ryX_FSexG",
"iclr_2018_SJQHjzZ0-",
"ry8IOs_yM",
"iclr_2018_SJQHjzZ0-",
"r1GaH_NAW",
"iclr_2018_SJQHjzZ0-"
] |
iclr_2018_BkLhaGZRW | Improving GAN Training via Binarized Representation Entropy (BRE) Regularization | We propose a novel regularizer to improve the training of Generative Adversarial Networks (GANs). The motivation is that when the discriminator D spreads out its model capacity in the right way, the learning signals given to the generator G are more informative and diverse, which helps G to explore better and discover ... | accepted-poster-papers | + Original regularizer that encourages discriminator representation entropy is shown to improve GAN training.
+ good supporting empirical validation
- While intuitively reasonable, no compelling theory is given to justify the approach
- The regularizer used in practice is a heap of heuristic approximations (con... | train | [
"B1ssgfcgM",
"SJa7Mu9lf",
"By5aywgZf",
"Hk1j4_p7z",
"HyAkfOTQf",
"Skcy4dTXf",
"r1xcQdaXG",
"Sk8_EdpXz"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author"
] | [
"The paper proposes a regularizer that encourages a GAN discriminator to focus its capacity in the region around the manifolds of real and generated data points, even when it would be easy to discriminate between these manifolds using only a fraction of its capacity, so that the discriminator provides a more inform... | [
6,
7,
4,
-1,
-1,
-1,
-1,
-1
] | [
3,
4,
3,
-1,
-1,
-1,
-1,
-1
] | [
"iclr_2018_BkLhaGZRW",
"iclr_2018_BkLhaGZRW",
"iclr_2018_BkLhaGZRW",
"Sk8_EdpXz",
"iclr_2018_BkLhaGZRW",
"B1ssgfcgM",
"SJa7Mu9lf",
"By5aywgZf"
] |
iclr_2018_r1NYjfbR- | Generative networks as inverse problems with Scattering transforms | Generative Adversarial Nets (GANs) and Variational Auto-Encoders (VAEs) provide impressive image generations from Gaussian white noise, but the underlying mathematics are not well understood. We compute deep convolutional network generators by inverting a fixed embedding operator. Therefore, they do not require to be o... | accepted-poster-papers | The paper got mixed scores of 4 (R1), 6 (R3), 8 (R2). R1 initially gave up after a few pages of reading, due to clarity problems. But on looking over the revised version they were much happier, so raised their score to 7. R2, who is knowledgeable about the area, was very positive about the paper, feeling it is a very interesting id...
"H1WORsdlG",
"SkJxZ1FeG",
"H1QWqHsgz",
"Hy83g4Gmz",
"S1d6aXzmM",
"ryTfQNMXM",
"SkP1GEGmM",
"SkG-ZEzQG"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author"
] | [
"After a first manuscript that needed majors edits, the revised version\noffers an interesting GAN approach based the scattering transform.\n\nApproach is well motivated with proper references to the recent literature.\n\nExperiments are not state of the art but clearly demonstrate that the\nproposed approach does ... | [
7,
8,
6,
-1,
-1,
-1,
-1,
-1
] | [
4,
4,
4,
-1,
-1,
-1,
-1,
-1
] | [
"iclr_2018_r1NYjfbR-",
"iclr_2018_r1NYjfbR-",
"iclr_2018_r1NYjfbR-",
"H1QWqHsgz",
"iclr_2018_r1NYjfbR-",
"H1WORsdlG",
"SkJxZ1FeG",
"Hy83g4Gmz"
] |
iclr_2018_BJGWO9k0Z | Critical Percolation as a Framework to Analyze the Training of Deep Networks | In this paper we approach two relevant deep learning topics: i) tackling of graph structured input data and ii) a better understanding and analysis of deep networks and related learning algorithms. With this in mind we focus on the topological classification of reachability in a particular subset of planar graphs (Maze... | accepted-poster-papers | The paper got generally positive scores of 6,7,7. The reviewers found the paper to be novel but hard to understand. The AC feels the paper should be accepted but the authors should revise their paper to take into account the comments from the reviewers to improve clarity. | val | [
"HJ1MEAYxG",
"HJraEkqlz",
"BkojC46xG",
"r13AbIaWG",
"Byo7BITbM",
"HyZXN8abM"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author"
] | [
"The authors are motivated by two problems: Inputting non-Euclidean data (such as graphs) into deep CNNs, and analyzing optimization properties of deep networks. In particular, they look at the problem of maze testing, where, given a grid of black and white pixels, the goal is to answer whether there is a path from... | [
7,
7,
6,
-1,
-1,
-1
] | [
3,
3,
1,
-1,
-1,
-1
] | [
"iclr_2018_BJGWO9k0Z",
"iclr_2018_BJGWO9k0Z",
"iclr_2018_BJGWO9k0Z",
"BkojC46xG",
"HJ1MEAYxG",
"HJraEkqlz"
] |
iclr_2018_HkNGsseC- | On the Expressive Power of Overlapping Architectures of Deep Learning | Expressive efficiency refers to the relation between two architectures A and B, whereby any function realized by B could be replicated by A, but there exists functions realized by A, which cannot be replicated by B unless its size grows significantly larger. For example, it is known that deep networks are exponentially... | accepted-poster-papers | The paper received scores of 8 (R1), 6 (R2), 6 (R3). R1's review is brief, and also is optimistic that these results demonstrated on ConvACs generalize to real convnets. R2 and R3 feel this might be a potential problem. R2 advocates weak accept and given that R1 is keen on the paper, the AC feels it can be accepted.
| train | [
"BJZ4zdslf",
"HkopHcseG",
"BypvOtGZz",
"HyxMluZfz",
"S1rPmFqZM",
"SyfVBDMbG"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author"
] | [
"The paper studies the expressive power provided by \"overlap\" in convolution layers of DNNs. Instead of ReLU networks with average/max pooling (as is standard in practice), the authors consider linear activations with product pooling. Such networks, which have been known as convolutional arithmetic circuits, ar... | [
6,
8,
6,
-1,
-1,
-1
] | [
4,
3,
4,
-1,
-1,
-1
] | [
"iclr_2018_HkNGsseC-",
"iclr_2018_HkNGsseC-",
"iclr_2018_HkNGsseC-",
"BypvOtGZz",
"HkopHcseG",
"BJZ4zdslf"
] |
iclr_2018_HJ94fqApW | Rethinking the Smaller-Norm-Less-Informative Assumption in Channel Pruning of Convolution Layers | Model pruning has become a useful technique that improves the computational efficiency of deep learning, making it possible to deploy solutions in resource-limited scenarios. A widely-used practice in relevant work assumes that a smaller-norm parameter or feature plays a less informative role at the inference time. In ... | accepted-poster-papers | The paper received scores on either side of the borderline: 6 (R1), 5 (R2), 7 (R3). R1 and R3 felt the idea to be interesting, simple and effective. R2 raised a number of concerns which the rebuttal addressed satisfactorily. Therefore the AC feels the paper can be accepted.
"BJtJ3c_gG",
"B1rak-5eG",
"B1KcBUqlz",
"r1lVkUJGz",
"Skqs6Bh-G",
"r1ud3r3WM",
"HJklnrnbM",
"ByUWjHhZf",
"ByWHEgg-z"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"public",
"author",
"author",
"author",
"author",
"public"
] | [
"In this paper, the authors propose a data-dependent channel pruning approach to simplify CNNs with batch-normalizations. The authors view CNNs as a network flow of information and applies sparsity regularization on the batch-normalization scaling parameter \\gamma which is seen as a “gate” to the information flow.... | [
5,
7,
6,
-1,
-1,
-1,
-1,
-1,
-1
] | [
5,
3,
5,
-1,
-1,
-1,
-1,
-1,
-1
] | [
"iclr_2018_HJ94fqApW",
"iclr_2018_HJ94fqApW",
"iclr_2018_HJ94fqApW",
"ByUWjHhZf",
"BJtJ3c_gG",
"B1rak-5eG",
"B1KcBUqlz",
"ByWHEgg-z",
"iclr_2018_HJ94fqApW"
] |
iclr_2018_SJiHXGWAZ | Diffusion Convolutional Recurrent Neural Network: Data-Driven Traffic Forecasting | Spatiotemporal forecasting has various applications in neuroscience, climate and transportation domain. Traffic forecasting is one canonical example of such learning task. The task is challenging due to (1) complex spatial dependency on road networks, (2) non-linear temporal dynamics with changing road conditions and (... | accepted-poster-papers | The paper received highly diverging scores: 5 (R1), 9 (R2), 4 (R3). Both R1 and R3 complained about the comparisons to related methods. R3 suggested some kNN and GP baselines, while R1 mentioned concurrent work using deepnets for traffic prediction.
R3 is a real expert in the field. R2 and R1, not so.
R2 review very positiv... | train | [
"r1zoeeFgf",
"r1pn22FeG",
"H1AlgBcxf",
"S1W9DvMmM",
"ryjAIwfQG",
"Hy2t8DfQz",
"B1IMLDfmf"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author"
] | [
"The paper proposes to build a graph where the edge weight is defined using the road network distance which is shown to be more realistic than the Euclidean distance. The defined diffusion convolution operation is essentially conducting random walks over the road segment graph. To avoid the expensive matrix operati... | [
5,
4,
9,
-1,
-1,
-1,
-1
] | [
3,
5,
5,
-1,
-1,
-1,
-1
] | [
"iclr_2018_SJiHXGWAZ",
"iclr_2018_SJiHXGWAZ",
"iclr_2018_SJiHXGWAZ",
"H1AlgBcxf",
"r1zoeeFgf",
"B1IMLDfmf",
"r1pn22FeG"
] |
iclr_2018_SkHDoG-Cb | Simulated+Unsupervised Learning With Adaptive Data Generation and Bidirectional Mappings | Collecting a large dataset with high quality annotations is expensive and time-consuming. Recently, Shrivastava et al. (2017) propose Simulated+Unsupervised (S+U) learning: It first learns a mapping from synthetic data to real data, translates a large amount of labeled synthetic data to the ones that resemble real data... | accepted-poster-papers | Split opinions on paper: 6 (R1), 3 (R2), 6 (R3). Much of the debate centered on the novelty of the algorithm. R2 felt that the paper was a straight-forward combination of CycleGAN with S+U, while R3 felt it made a significant contribution. The AC has looked at the paper and the reviews and discussion. The topic is very... | train | [
"S1uLIj8lG",
"BJ__hY9lz",
"BJ7oBjolf",
"HyXoBh3zM",
"Bk_imn2fz",
"ByisG2nMG",
"Hk-y9aJZz"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author"
] | [
"* sec.2.2 is about label-preserving translation and many notations are introduced. However, it is not clear what label here refers to, and it does not shown in the notation so far at all. Only until the end of sec.2.2, the function F(.) is introduced and its revelation - Google Search as label function is discusse... | [
6,
6,
3,
-1,
-1,
-1,
-1
] | [
3,
4,
4,
-1,
-1,
-1,
-1
] | [
"iclr_2018_SkHDoG-Cb",
"iclr_2018_SkHDoG-Cb",
"iclr_2018_SkHDoG-Cb",
"iclr_2018_SkHDoG-Cb",
"S1uLIj8lG",
"BJ__hY9lz",
"BJ7oBjolf"
] |
iclr_2018_ryH20GbRW | Relational Neural Expectation Maximization: Unsupervised Discovery of Objects and their Interactions | Common-sense physical reasoning is an essential ingredient for any intelligent agent operating in the real-world. For example, it can be used to simulate the environment, or to infer the state of parts of the world that are currently unobserved. In order to match real-world conditions this causal knowledge must be lear... | accepted-poster-papers | All three reviewers recommend acceptance. The authors did a good job at the rebuttal which swayed the first reviewer to increase the final rating. This is a clear accept. | train | [
"S1OZaPI4z",
"BkApAXalG",
"ByDGdV9ef",
"B1USI22xz",
"Skl0rrTmz",
"HyR2QHKzM",
"B18VfrYMM",
"HkrHQHFMf",
"ryE0frKzM"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author"
] | [
"The rebuttal and revision addressed enough of my concerns for me to increase the score to 8. \nGood work on the additional experiments and the discussion of limitations in the conclusion!",
"Summary:\nThe manuscript extends the Neural Expectation Maximization framework by integrating an interaction function that... | [
-1,
8,
7,
7,
-1,
-1,
-1,
-1,
-1
] | [
-1,
5,
4,
3,
-1,
-1,
-1,
-1,
-1
] | [
"B18VfrYMM",
"iclr_2018_ryH20GbRW",
"iclr_2018_ryH20GbRW",
"iclr_2018_ryH20GbRW",
"iclr_2018_ryH20GbRW",
"iclr_2018_ryH20GbRW",
"BkApAXalG",
"ByDGdV9ef",
"B1USI22xz"
] |
iclr_2018_HkCsm6lRb | Generative Models of Visually Grounded Imagination | It is easy for people to imagine what a man with pink hair looks like, even if they have never seen such a person before. We call the ability to create images of novel semantic concepts visually grounded imagination. In this paper, we show how we can modify variational auto-encoders to perform this task. Our method use... | accepted-poster-papers | All three reviewers recommend acceptance. Good work, accept | train | [
"BJDxbMvez",
"S1UuHvwgf",
"rky3x-5lG",
"HyzaUOvGz",
"HygQU_DMM",
"B1jeeYPMG",
"HkXT0_Pzf",
"HkvKuGGfG",
"r1W-W7fMM"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"public",
"public"
] | [
"The authors propose a generative method that can produce images along a hierarchy of specificity, i.e. both when all relevant attributes are specified, and when some are left undefined, creating a more abstract generation task. \n\nPros:\n+ The results demonstrating the method's ability to generate results for (1)... | [
7,
7,
7,
-1,
-1,
-1,
-1,
-1,
-1
] | [
3,
4,
3,
-1,
-1,
-1,
-1,
-1,
-1
] | [
"iclr_2018_HkCsm6lRb",
"iclr_2018_HkCsm6lRb",
"iclr_2018_HkCsm6lRb",
"rky3x-5lG",
"iclr_2018_HkCsm6lRb",
"BJDxbMvez",
"S1UuHvwgf",
"iclr_2018_HkCsm6lRb",
"iclr_2018_HkCsm6lRb"
] |
iclr_2018_r1wEFyWCW | Few-shot Autoregressive Density Estimation: Towards Learning to Learn Distributions | Deep autoregressive models have shown state-of-the-art performance in density estimation for natural images on large-scale datasets such as ImageNet. However, such models require many thousands of gradient-based weight updates and unique image examples for training. Ideally, the models would rapidly learn visual conce... | accepted-poster-papers | This paper incorporates attention in the PixelCNN model and shows how to use MAML to enable few-shot density estimation. The paper received mixed reviews (7,6,4). After rebuttal the first reviewer updated the score to accept. The AC shares the concern of novelty with the first reviewer. However, it is also not trivial ... | train | [
"rkHhxN2lG",
"HyKWS3KxM",
"rJ3vXv5xf",
"ryfVtml7f",
"ByMcMEGJG"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"public"
] | [
"This paper focuses on the density estimation when the amount of data available for training is low. The main idea is that a meta-learning model must be learnt, which learns to generate novel density distributions by learn to adapt a basic model on few new samples. The paper presents two independent method.\n\nThe ... | [
6,
7,
6,
-1,
-1
] | [
5,
4,
4,
-1,
-1
] | [
"iclr_2018_r1wEFyWCW",
"iclr_2018_r1wEFyWCW",
"iclr_2018_r1wEFyWCW",
"iclr_2018_r1wEFyWCW",
"iclr_2018_r1wEFyWCW"
] |
iclr_2018_rknt2Be0- | Compositional Obverter Communication Learning from Raw Visual Input | One of the distinguishing aspects of human language is its compositionality, which allows us to describe complex environments with limited vocabulary. Previously, it has been shown that neural network agents can learn to communicate in a highly structured, possibly compositional language based on disentangled input (e.... | accepted-poster-papers | This paper investigates emergence of language from raw pixels in a two-agent setting. The paper received divergent reviews, 3,6,9. Two ACs discussed this paper, due to a strong opinion from both positive and negative reviewers. The ACs agree that the score "9" is too high: the notion of compositionality is used in many... | val | [
"H1T0ZOBEG",
"rkKuB71xM",
"BJsMQVvxM",
"B11XUD_gM",
"B1TvQWpmz",
"rkV7ieaXf",
"BkS1LAiQf",
"B101HAiQG",
"SkTBrAjXz",
"rJcRXRjQf"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"author"
] | [
"Dear authors,\n\nThank you very much for the detailed response. I've spent a while thinking about this, and my score stays the same. 3 points:\n\n1. It is claimed that \"the messages in the gray boxes of Figure 3 do actually follow the patterns nicely\". I disagree. The central problem is that the paper provides n... | [
-1,
9,
3,
6,
-1,
-1,
-1,
-1,
-1,
-1
] | [
-1,
4,
4,
3,
-1,
-1,
-1,
-1,
-1,
-1
] | [
"B101HAiQG",
"iclr_2018_rknt2Be0-",
"iclr_2018_rknt2Be0-",
"iclr_2018_rknt2Be0-",
"rkV7ieaXf",
"BkS1LAiQf",
"rkKuB71xM",
"BJsMQVvxM",
"BJsMQVvxM",
"B11XUD_gM"
] |
iclr_2018_rkN2Il-RZ | SCAN: Learning Hierarchical Compositional Visual Concepts | The seemingly infinite diversity of the natural world arises from a relatively small set of coherent rules, such as the laws of physics or chemistry. We conjecture that these rules give rise to regularities that can be discovered through primarily unsupervised experiences and represented as abstract concepts. If such r... | accepted-poster-papers | This paper initially received borderline reviews. The main concern raised by all reviewers was a limited experimental evaluation (synthetic only). In rebuttal, the authors provided new results on the CelebA dataset, which turned the first reviewer positive. The AC agrees there is merit to this approach, and generally a... | train | [
"Bkyw7hwrf",
"ByLBsIIBM",
"BkeoFCjgG",
"rkzoyZW-M",
"H1vrGEM-G",
"S1hTSkuXz",
"SkPdrTcMG",
"Hk-qNp5fz",
"r1m0QT9zz"
] | [
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author"
] | [
"Dear Reviewer,\n\nThank you for taking the time to comment on the updated version of our paper. You suggest that you do not find our additional experiments convincing enough because we do not train recombination operators on the celebA dataset. However, in our understanding your original review did not ask for the... | [
-1,
-1,
5,
6,
7,
-1,
-1,
-1,
-1
] | [
-1,
-1,
4,
4,
4,
-1,
-1,
-1,
-1
] | [
"ByLBsIIBM",
"r1m0QT9zz",
"iclr_2018_rkN2Il-RZ",
"iclr_2018_rkN2Il-RZ",
"iclr_2018_rkN2Il-RZ",
"SkPdrTcMG",
"H1vrGEM-G",
"rkzoyZW-M",
"BkeoFCjgG"
] |
iclr_2018_HJCXZQbAZ | Hierarchical Density Order Embeddings | By representing words with probability densities rather than point vectors, probabilistic word embeddings can capture rich and interpretable semantic information and uncertainty (Vilnis & McCallum, 2014; Athiwaratkun & Wilson, 2017). The uncertainty information can be particularly meaningful in capturing entailment r... | accepted-poster-papers | This paper marries the idea of Gaussian word embeddings and order embeddings, by imposing order among probabilistic word embeddings. Two reviewers vote for acceptance, and one finds the novelty of the paper incremental. The reviewer stuck to this view even after rebuttal, but acknowledges the improvement in result... | val | [
"HyGYccUxz",
"SyRynl9eM",
"rk63KZixz",
"BJMKZCoXM",
"BJ73RqEXf",
"ryk5RqN7G",
"SJIsJCXQG",
"ryeRnpmmz"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author"
] | [
"The paper presents a method for hierarchical object embedding by Gaussian densities for lexical entailment tasks.Each word is represented by a diagonal Gaussian and the KL divergence is used as a directional distance measure. if D(f||g) < gamma then the concept represented by f entails the concept represented by ... | [
4,
6,
8,
-1,
-1,
-1,
-1,
-1
] | [
3,
4,
5,
-1,
-1,
-1,
-1,
-1
] | [
"iclr_2018_HJCXZQbAZ",
"iclr_2018_HJCXZQbAZ",
"iclr_2018_HJCXZQbAZ",
"iclr_2018_HJCXZQbAZ",
"ryk5RqN7G",
"HyGYccUxz",
"SyRynl9eM",
"rk63KZixz"
] |
iclr_2018_BkN_r2lR- | Identifying Analogies Across Domains | Identifying analogies across domains without supervision is a key task for artificial intelligence. Recent advances in cross domain image mapping have concentrated on translating images across domains. Although the progress made is impressive, the visual fidelity many times does not suffice for identifying the matching... | accepted-poster-papers | This paper builds on top of Cycle GAN ideas where the main idea is to jointly optimize the domain-level translation function with an instance-level matching objective. Initially the paper received two negative reviews (4,5) and a positive (7). After the rebuttal and several back and forth between the first reviewer and... | train | [
"HyJww3cEz",
"BkyDnj5VG",
"ByECSWv4z",
"SkHatuolz",
"HJ08-bCef",
"ryhcYB-bG",
"rJ6aA85QG",
"Hyj4tk1GM",
"Sk0k9JkfG",
"rklEiy1Mz",
"Byqw2JyGf"
] | [
"official_reviewer",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author"
] | [
"I thank the authors for thoroughly responding to my concerns. The 3D alignment experiment looks great, and indeed I did miss the comment about the cell bio experiment. That experiment is also very compelling.\n\nI think with these two experiments added to the revision, along with all the other improvements, the pa... | [
-1,
-1,
-1,
7,
5,
4,
-1,
-1,
-1,
-1,
-1
] | [
-1,
-1,
-1,
4,
4,
3,
-1,
-1,
-1,
-1,
-1
] | [
"BkyDnj5VG",
"ByECSWv4z",
"Byqw2JyGf",
"iclr_2018_BkN_r2lR-",
"iclr_2018_BkN_r2lR-",
"iclr_2018_BkN_r2lR-",
"iclr_2018_BkN_r2lR-",
"SkHatuolz",
"HJ08-bCef",
"ryhcYB-bG",
"ryhcYB-bG"
] |
iclr_2018_B17JTOe0- | Emergence of grid-like representations by training recurrent neural networks to perform spatial localization | Decades of research on the neural code underlying spatial navigation have revealed a diverse set of neural response properties. The Entorhinal Cortex (EC) of the mammalian brain contains a rich set of spatial correlates, including grid cells which encode space using tessellating patterns. However, the mechanisms and fu... | accepted-poster-papers | This work shows how activation patterns of units reminiscent of grid and border cells emerge in RNNs trained on navigation tasks. While the ICLR audience is not mainly focused on neuroscience, the findings of the paper are quite intriguing, and grid cells are sufficiently well-known and "mainstream" that this may inter... | train | [
"SkDHZUXlG",
"rk3jvePlf",
"HyMQMl9eG",
"By7T3PpmM",
"HJ8d9D6Xz",
"ByubuDpmM",
"rktarP6mG"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author"
] | [
"The authors train an RNN to perform deduced reckoning (ded reckoning) for spatial navigation, and then study the responses of the model neurons in the RNN. They find many properties reminiscent of neurons in the mammalian entorhinal cortex (EC): grid cells, border cells, etc. When regularization of the network is ... | [
8,
9,
8,
-1,
-1,
-1,
-1
] | [
4,
4,
4,
-1,
-1,
-1,
-1
] | [
"iclr_2018_B17JTOe0-",
"iclr_2018_B17JTOe0-",
"iclr_2018_B17JTOe0-",
"iclr_2018_B17JTOe0-",
"SkDHZUXlG",
"rk3jvePlf",
"HyMQMl9eG"
] |
iclr_2018_HJhIM0xAW | Learning a neural response metric for retinal prosthesis | Retinal prostheses for treating incurable blindness are designed to electrically stimulate surviving retinal neurons, causing them to send artificial visual signals to the brain. However, electrical stimulation generally cannot precisely reproduce normal patterns of neural activity in the retina. Therefore, an electr... | accepted-poster-papers | This work shows interesting potential applications of known machine learning techniques to the practical problem of how to devise a retina prosthesis that is the most perceptually useful. The paper suffers from a few methodological problems pointed out by the reviewers (e.g., not using the more powerful neural network ... | val | [
"HyzRKw7xf",
"S1AQa7uxz",
"B1OVwz9ez"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"The authors develop new spike train distance metrics that cluster together responses to the same stimulus, and push responses to different stimuli away from each other. Two such metrics are discussed: neural networks, and quadratic metrics. They then show that these metrics can be used to classify neural responses... | [
5,
6,
7
] | [
4,
3,
4
] | [
"iclr_2018_HJhIM0xAW",
"iclr_2018_HJhIM0xAW",
"iclr_2018_HJhIM0xAW"
] |
iclr_2018_BJj6qGbRW | Few-Shot Learning with Graph Neural Networks | We propose to study the problem of few-shot learning with the prism of inference on a partially observed graphical model, constructed from a collection of input images whose label can be either observed or not. By assimilating generic message-passing inference algorithms with their neural-network counterparts, we defin... | accepted-poster-papers | All reviewers agree that the proposed method is novel and experiments do a good job in establishing its value for few-shot learning. Most the concerns raised by the reviewers on experimental protocols have been addressed in the author response and revised version. | train | [
"BJIp_k0xM",
"r1_szu5xM",
"By7ixJ9eG",
"SkzW_xPbG",
"HJ4l-figf",
"r1XiADHmz",
"HkmKSsqzz",
"ByG5VLHQG",
"HkH8YlDZz",
"Hy1Xgqbfz",
"HyGQQPbMM",
"r1CZZbezf",
"Syc4dgwWz",
"SJeQvN3eG",
"BkWNYcseM",
"rJ5B_ucef"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"public",
"author",
"author",
"public",
"author",
"author",
"author",
"public",
"author",
"public",
"public",
"public"
] | [
"This paper proposes to use graph neural networks for the purpose of few-shot learning, as well as semi-supervised learning and active learning. The paper first relies on convolutional neural networks to extract image features. Then, these image features are organized in a fully connected graph. Then, this graph is... | [
7,
7,
7,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1
] | [
4,
4,
4,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1
] | [
"iclr_2018_BJj6qGbRW",
"iclr_2018_BJj6qGbRW",
"iclr_2018_BJj6qGbRW",
"r1_szu5xM",
"rJ5B_ucef",
"ByG5VLHQG",
"By7ixJ9eG",
"HkmKSsqzz",
"BJIp_k0xM",
"HyGQQPbMM",
"r1CZZbezf",
"iclr_2018_BJj6qGbRW",
"r1_szu5xM",
"BkWNYcseM",
"HJ4l-figf",
"iclr_2018_BJj6qGbRW"
] |
iclr_2018_S1nQvfgA- | Semantically Decomposing the Latent Spaces of Generative Adversarial Networks | We propose a new algorithm for training generative adversarial networks to jointly learn latent codes for both identities (e.g. individual humans) and observations (e.g. specific photographs). In practice, this means that by fixing the identity portion of latent codes, we can generate diverse images of the same subject... | accepted-poster-papers | The paper proposes a GAN based approach for disentangling identity (or class information) from style. The supervision needed is the identity label for each image. Overall, the reviewers agree that the paper makes a novel contribution along the line of work on disentangling 'style' from 'content'. | train | [
"SkIH8vIef",
"ryEAzYOlM",
"BkAls-Kgf",
"H19BT53-f",
"SyGahq2WG",
"rkqOhq2bM",
"rJ7G3q3Zz"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author"
] | [
"Quality\nThe paper is well written and the model is simple and clearly explained. The idea for disentangling identity from other factors of variation using identity-matched image pairs is quite simple, but the experimental results on faces and shoes are impressive.\n\nClarity\nThe model and its training objective ... | [
6,
6,
7,
-1,
-1,
-1,
-1
] | [
4,
4,
4,
-1,
-1,
-1,
-1
] | [
"iclr_2018_S1nQvfgA-",
"iclr_2018_S1nQvfgA-",
"iclr_2018_S1nQvfgA-",
"ryEAzYOlM",
"SkIH8vIef",
"BkAls-Kgf",
"iclr_2018_S1nQvfgA-"
] |
iclr_2018_By-7dz-AZ | A Framework for the Quantitative Evaluation of Disentangled Representations | Recent AI research has emphasised the importance of learning disentangled representations of the explanatory factors behind data. Despite the growing interest in models which can learn such representations, visual inspection remains the standard evaluation metric. While various desiderata have been implied in recent d... | accepted-poster-papers | The paper proposes evaluation metrics for quantifying the quality of disentangled representations. There is consensus among reviewers that the paper makes a useful contribution towards this end. Authors have addressed most of reviewers' concerns in their response. | train | [
"ByfkoCtlf",
"rk7pIjceG",
"H1YPgFhlf",
"rJnsAbTQG",
"S1-fkGaXf",
"rk4ik297f",
"H1PQy257z"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author"
] | [
"****\nI acknowledge the author's comments and improve my score to 7.\n****\n\nSummary:\nThe authors propose an experimental framework and metrics for the quantitative evaluation of disentangling representations.\nThe basic idea is to use datasets with known factors of variation, z, and measure how well in an infor... | [
7,
6,
6,
-1,
-1,
-1,
-1
] | [
5,
5,
4,
-1,
-1,
-1,
-1
] | [
"iclr_2018_By-7dz-AZ",
"iclr_2018_By-7dz-AZ",
"iclr_2018_By-7dz-AZ",
"rk7pIjceG",
"ByfkoCtlf",
"H1YPgFhlf",
"H1YPgFhlf"
] |
iclr_2018_HJcSzz-CZ | Meta-Learning for Semi-Supervised Few-Shot Classification | In few-shot classification, we are interested in learning algorithms that train a classifier from only a handful of labeled examples. Recent progress in few-shot classification has featured meta-learning, in which a parameterized model for a learning algorithm is defined and trained on episodes representing different c... | accepted-poster-papers | The paper extends the earlier work on Prototypical networks to the semi-supervised setting. Reviewers largely agree that the paper is well-written. There are some concerns on the incremental nature of the paper with respect to novelty, but in the light of reported empirical results which show clear improvement over earlie... | train | [
"rJzcaGvgf",
"Hyx7bEPez",
"SkW9BQ9lG",
"Hyrfr91fG",
"SJ7s7qkzG",
"Hy4CMckMG",
"BkmDZ-ZZG"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"public"
] | [
"This paper is an extension of the “prototypical network” which will be published in NIPS 2017. The classical few-shot learning has been limited to using the unlabeled data, while this paper considers employing the unlabeled examples available to help train each episode. The paper solves a new semi-supervised situa... | [
6,
6,
6,
-1,
-1,
-1,
-1
] | [
5,
4,
4,
-1,
-1,
-1,
-1
] | [
"iclr_2018_HJcSzz-CZ",
"iclr_2018_HJcSzz-CZ",
"iclr_2018_HJcSzz-CZ",
"rJzcaGvgf",
"Hyx7bEPez",
"SkW9BQ9lG",
"iclr_2018_HJcSzz-CZ"
] |
iclr_2018_H1q-TM-AW | A DIRT-T Approach to Unsupervised Domain Adaptation | Domain adaptation refers to the problem of leveraging labeled data in a source domain to learn an accurate model in a target domain where labels are scarce or unavailable. A recent approach for finding a common representation of the two domains is via domain adversarial training (Ganin & Lempitsky, 2015), which attempt... | accepted-poster-papers | Well motivated and well written, with extensive results. The paper also received positive comments from all reviewers. The AC recommends that the paper be accepted. | train | [
"rknCb19ef",
"Syaz3vINf",
"Hk19swKeM",
"BkhjhvQ-G",
"rJMJD0n7z",
"ryETB02XG",
"SknfSAnQf",
"S17uEA2QG"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author"
] | [
"This paper presents two complementary models for unsupervised domain adaptation (classification task): 1) the Virtual Adversarial Domain Adaptation (VADA) and 2) the Decision-boundary Iterative Refinement Training with a Teacher (DIRT-T). The authors make use of the so-called cluster assumption, i.e., decision bou... | [
8,
-1,
7,
7,
-1,
-1,
-1,
-1
] | [
4,
-1,
4,
2,
-1,
-1,
-1,
-1
] | [
"iclr_2018_H1q-TM-AW",
"rJMJD0n7z",
"iclr_2018_H1q-TM-AW",
"iclr_2018_H1q-TM-AW",
"Hk19swKeM",
"BkhjhvQ-G",
"rknCb19ef",
"iclr_2018_H1q-TM-AW"
] |
iclr_2018_r1Dx7fbCW | Generalizing Across Domains via Cross-Gradient Training | We present CROSSGRAD , a method to use multi-domain training data to learn a classifier that generalizes to new domains. CROSSGRAD does not need an adaptation phase via labeled or unlabeled data, or domain features in the new domain. Most existing domain adaptation methods attempt to erase domain signals using techniqu... | accepted-poster-papers | Well motivated and well received by all of the expert reviewers. The AC recommends that the paper be accepted. | test | [
"SkBWbEhVz",
"rkJNQW5gM",
"Syaaxl0lf",
"SJbAMFl-z",
"rkDrknW-G",
"HyQ8rr6Xf",
"rJ13AnxXG",
"HyalvTemf",
"S1CwzpemG",
"SkE4Ang7M",
"SkHQ6TqWG"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"public"
] | [
"The rebuttal addresses my questions. The authors are recommended to explicitly use \"domain generalization\" in the paper and/or the title to make the language consistent with the literature. ",
"This paper proposed a domain generalization approach by domain-dependent data augmentation. The augmentation is guide... | [
-1,
7,
7,
8,
7,
-1,
-1,
-1,
-1,
-1,
-1
] | [
-1,
5,
4,
4,
5,
-1,
-1,
-1,
-1,
-1,
-1
] | [
"rkJNQW5gM",
"iclr_2018_r1Dx7fbCW",
"iclr_2018_r1Dx7fbCW",
"iclr_2018_r1Dx7fbCW",
"iclr_2018_r1Dx7fbCW",
"iclr_2018_r1Dx7fbCW",
"Syaaxl0lf",
"rkDrknW-G",
"rkJNQW5gM",
"SJbAMFl-z",
"rkJNQW5gM"
] |
iclr_2018_ByRWCqvT- | Learning to cluster in order to transfer across domains and tasks | This paper introduces a novel method to perform transfer learning across domains and tasks, formulating it as a problem of learning to cluster. The key insight is that, in addition to features, we can transfer similarity information and this is sufficient to learn a similarity function and clustering network to perform... | accepted-poster-papers | Pros
-- A novel formulation for cross-task and cross-domain transfer learning.
-- Extensive evaluations.
Cons
-- Presentation a bit confusing, please improve.
The paper received positive reviews from reviewers. But the reviewers pointed out some issues with presentation and flow of the paper. Even though the revised ... | train | [
"BJbSJYcgG",
"Byum0OYlG",
"BJZ2B0agf",
"B16XOpsXf",
"Sk5-cTImG",
"B1D0OaIQz",
"B1oBwTIXM",
"BJkB-Sczf",
"ryA6vo2gf",
"HktVdZdlG",
"rkVJuH4ez"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"public",
"author",
"public"
] | [
"The authors propose a method for performing transfer learning and domain adaptation via a clustering approach. The primary contribution is the introduction of a Learnable Clustering Objective (LCO) that is trained on an auxiliary set of labeled data to correctly identify whether pairs of data belong to the same cl... | [
7,
5,
9,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1
] | [
4,
4,
5,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1
] | [
"iclr_2018_ByRWCqvT-",
"iclr_2018_ByRWCqvT-",
"iclr_2018_ByRWCqvT-",
"iclr_2018_ByRWCqvT-",
"Byum0OYlG",
"BJbSJYcgG",
"BJZ2B0agf",
"ryA6vo2gf",
"HktVdZdlG",
"rkVJuH4ez",
"iclr_2018_ByRWCqvT-"
] |
iclr_2018_H1T2hmZAb | Deep Complex Networks | At present, the vast majority of building blocks, techniques, and architectures for deep learning are based on real-valued operations and representations. However, recent work on recurrent neural networks and older fundamental theoretical analysis suggests that complex numbers could have a richer representational capac... | accepted-poster-papers | The paper received mostly positive comments from experts. To summarize:
Pros:
-- The paper provides complex counterparts for typical architectures / optimization strategies used by real valued networks.
Cons:
-- Although the authors include plots explaining how nonlinearities transform phase, intuition about how phase... | train | [
"r1iYihLEf",
"SyJZuXjlG",
"rJH_dHjeG",
"BJ8VRRhgM",
"H1rRov6mz",
"HJUShwpQf",
"HJC5jDTmG",
"rJkJTEcXG"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"public",
"public",
"public",
"public"
] | [
"Unfortunately I'm not familiar with state of the art in music transcription.\n\nFrom description it sounds that test set is quite small (3 melodies). For a small test set, various hyper-parameters such as model architecture, learning rate schedule and choice of optimization algorithm are expected to have a strong ... | [
-1,
7,
8,
4,
-1,
-1,
-1,
-1
] | [
-1,
4,
4,
4,
-1,
-1,
-1,
-1
] | [
"HJC5jDTmG",
"iclr_2018_H1T2hmZAb",
"iclr_2018_H1T2hmZAb",
"iclr_2018_H1T2hmZAb",
"rJH_dHjeG",
"SyJZuXjlG",
"BJ8VRRhgM",
"iclr_2018_H1T2hmZAb"
] |
iclr_2018_HkwBEMWCZ | Skip Connections Eliminate Singularities | Skip connections made the training of very deep networks possible and have become an indispensable component in a variety of neural architectures. A completely satisfactory explanation for their success remains elusive. Here, we present a novel explanation for the benefits of skip connections in training very deep netw... | accepted-poster-papers | pros:
* novel explanation: skip connections <--> singularities
* thorough analysis
* significant topic in understanding deep nets
cons:
* more rigorous theoretical analysis would be better
overall, the committee feels this paper would be interesting to have at ICLR.
| train | [
"rJkUMYQgz",
"SJWa0g9xM",
"HJiCEsseG",
"rytZt1jXM",
"B1W8I1jmf",
"B11hBJsXM",
"H1JlpxZZf"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"public"
] | [
"The authors show that two types of singularities impede learning in deep neural networks: elimination singularities (where a unit is effectively shut off by a loss of input or output weights, or by an overly-strong negative bias), and overlap singularities, where two or more units have very similar input or output... | [
8,
8,
6,
-1,
-1,
-1,
-1
] | [
3,
3,
4,
-1,
-1,
-1,
-1
] | [
"iclr_2018_HkwBEMWCZ",
"iclr_2018_HkwBEMWCZ",
"iclr_2018_HkwBEMWCZ",
"HJiCEsseG",
"iclr_2018_HkwBEMWCZ",
"H1JlpxZZf",
"iclr_2018_HkwBEMWCZ"
] |
iclr_2018_H1cWzoxA- | Bi-Directional Block Self-Attention for Fast and Memory-Efficient Sequence Modeling | Recurrent neural networks (RNN), convolutional neural networks (CNN) and self-attention networks (SAN) are commonly used to produce context-aware representations. RNN can capture long-range dependency but is hard to parallelize and not time-efficient. CNN focuses on local dependency but does not perform well on some ta... | accepted-poster-papers | The proposed Bi-BloSAN is a two-level block SAN, which has both parallelization efficiency and memory efficiency. The study is thoroughly conducted and well presented. | train | [
"SJz6VRFlG",
"rkcETx9lf",
"ryOYfeaef",
"BJjyKg-mM",
"SyboUpHzz",
"rkzLzZpbM",
"ryi_ebTWM",
"r1Fr-WT-f",
"rk9hKgTZz",
"S1Y3_lTZM"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author"
] | [
"Pros: \nThe paper proposes a “bi-directional block self-attention network (Bi-BloSAN)” for sequence encoding, which inherits the advantages of multi-head (Vaswani et al., 2017) and DiSAN (Shen et al., 2017) network but is claimed to be more memory-efficient. The paper is written clearly and is easy to follow. The ... | [
6,
9,
6,
-1,
-1,
-1,
-1,
-1,
-1,
-1
] | [
4,
4,
4,
-1,
-1,
-1,
-1,
-1,
-1,
-1
] | [
"iclr_2018_H1cWzoxA-",
"iclr_2018_H1cWzoxA-",
"iclr_2018_H1cWzoxA-",
"iclr_2018_H1cWzoxA-",
"SJz6VRFlG",
"iclr_2018_H1cWzoxA-",
"SJz6VRFlG",
"SJz6VRFlG",
"ryOYfeaef",
"rkcETx9lf"
] |
iclr_2018_ry8dvM-R- | Routing Networks: Adaptive Selection of Non-Linear Functions for Multi-Task Learning | Multi-task learning (MTL) with neural networks leverages commonalities in tasks to improve performance, but often suffers from task interference which reduces the benefits of transfer. To address this issue we introduce the routing network paradigm, a novel neural network and training algorithm. A routing network is a ... | accepted-poster-papers | The proposed routing networks, which use RL to automatically learn the optimal network architecture, are very interesting. Solid experimental justification and comparisons. The authors also addressed reviewers' concerns on presentation clarity in revisions. | train | [
"HyNnyzceG",
"r1AHkVdgf",
"Hk65p-5lf",
"SJyFVJhXf",
"SJkOLCs7G",
"HyAwoOF7M",
"HJ2i7kjzf",
"BJIwfksMG",
"rJgSZyozG",
"ByzMZ1ozM",
"BkHpeJoGf",
"ry3NDBHGf"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"public"
] | [
"The paper introduces a routing network for multi-task learning. The routing network consists of a router and a set of function blocks. Router makes a routing decision by either passing the input to a function block or back to the router. This network paradigm is tested on multi-task settings of MNIST, mini-imagene... | [
7,
6,
8,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1
] | [
3,
3,
4,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1
] | [
"iclr_2018_ry8dvM-R-",
"iclr_2018_ry8dvM-R-",
"iclr_2018_ry8dvM-R-",
"r1AHkVdgf",
"r1AHkVdgf",
"ry3NDBHGf",
"BJIwfksMG",
"r1AHkVdgf",
"Hk65p-5lf",
"HyNnyzceG",
"iclr_2018_ry8dvM-R-",
"iclr_2018_ry8dvM-R-"
] |
iclr_2018_rkhlb8lCZ | Wavelet Pooling for Convolutional Neural Networks | Convolutional Neural Networks continuously advance the progress of 2D and 3D image and object classification. The steadfast usage of this algorithm requires constant evaluation and upgrading of foundational concepts to maintain progress. Network regularization techniques typically focus on convolutional layer operation... | accepted-poster-papers | The idea of using wavelet pooling is novel and will bring many interesting research work in this direction. But more thorough experimental justification such as those recommended by the reviewers would make the paper better. Overall, the committee feels this paper will bring value to the conference. | train | [
"SJgEVADSz",
"Sk8IoTPSf",
"rJJWFNNef",
"B1zf5Uvxf",
"B1Q6kMqgM",
"r1JTSdEzf",
"SJv6OO4Mf",
"B1C-Nximz"
] | [
"author",
"public",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author"
] | [
"Short answer is yes. We would love to give access to the code. The longer answer is that it needs to be made more efficient so that the implementation time is reduced. When it was written it wasn't written for CUDA, or MEX, and thus doesn't have the speedups afforded by precompiling, etc. When that happens we will... | [
-1,
-1,
7,
9,
4,
-1,
-1,
-1
] | [
-1,
-1,
4,
3,
4,
-1,
-1,
-1
] | [
"Sk8IoTPSf",
"iclr_2018_rkhlb8lCZ",
"iclr_2018_rkhlb8lCZ",
"iclr_2018_rkhlb8lCZ",
"iclr_2018_rkhlb8lCZ",
"B1zf5Uvxf",
"rJJWFNNef",
"B1Q6kMqgM"
] |
iclr_2018_SJ1Xmf-Rb | FearNet: Brain-Inspired Model for Incremental Learning | Incremental class learning involves sequentially learning classes in bursts of examples from the same class. This violates the assumptions that underlie methods for training standard deep neural networks, and will cause them to suffer from catastrophic forgetting. Arguably, the best method for incremental class learni... | accepted-poster-papers | A novel brain-inspired dual-memory system for the important problem of incremental learning, with very good results. | test | [
"rJ96Jgclf",
"rkvqPjUVz",
"HyirgqDgM",
"HkvoWK6ef",
"SkTJreuzM",
"HkmO7eOGf",
"HyFfVxOzG",
"Bk1Cfe_MM"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author"
] | [
"I quite liked the revival of the dual memory system ideas and the cognitive (neuro) science inspiration. The paper is overall well written and tackles serious modern datasets, which was impressive, even though it relies on a pre-trained, fixed ResNet (see point below).\n\nMy only complaint is that I felt I couldn’... | [
7,
-1,
7,
6,
-1,
-1,
-1,
-1
] | [
2,
-1,
4,
2,
-1,
-1,
-1,
-1
] | [
"iclr_2018_SJ1Xmf-Rb",
"SkTJreuzM",
"iclr_2018_SJ1Xmf-Rb",
"iclr_2018_SJ1Xmf-Rb",
"HyirgqDgM",
"HkvoWK6ef",
"rJ96Jgclf",
"iclr_2018_SJ1Xmf-Rb"
] |
iclr_2018_BJehNfW0- | Do GANs learn the distribution? Some Theory and Empirics | Do GANS (Generative Adversarial Nets) actually learn the target distribution? The foundational paper of Goodfellow et al. (2014) suggested they do, if they were given sufficiently large deep nets, sample size, and computation time. A recent theoretical analysis in Arora et al. (2017) raised doubts whether the same hold... | accepted-poster-papers | * presents a novel way analyzing GANs using the birthday paradox and provides a theoretical construction that shows bidirectional GANs cannot escape specific cases of mode collapse
* significant contribution to the discussion of whether GANs learn the target distribution
* thorough justifications | train | [
"rkhhruYgM",
"B1jWee9eM",
"B1g5pBTxz",
"ByMIdi0mz",
"H1O1IXeff",
"BJImH7xfz",
"SJaoN7gfz"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author"
] | [
"The paper adds to the discussion on the question of whether Generative Adversarial Nets (GANs) learn the target distribution. Recent theoretical analysis of GANs by Arora et al. shows that if the discriminator capacity is bounded, then there is a solution that closely meets the objective but the output distribution ... | [
7,
6,
7,
-1,
-1,
-1,
-1
] | [
4,
4,
3,
-1,
-1,
-1,
-1
] | [
"iclr_2018_BJehNfW0-",
"iclr_2018_BJehNfW0-",
"iclr_2018_BJehNfW0-",
"iclr_2018_BJehNfW0-",
"rkhhruYgM",
"B1jWee9eM",
"B1g5pBTxz"
] |
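Each record above stores its discussion as index-aligned parallel lists (`review_writers`, `review_ratings`, `review_confidences`, `review_reply_tos`), with `-1` as the rating for entries that are not scored reviews (author responses, public comments, and reviewer follow-up replies). A minimal plain-Python sketch of recovering a record's official scores from those lists — the helper names here are illustrative, not part of the dataset:

```python
def official_review_scores(record):
    """Pair each entry's writer with its rating, keeping only scored reviews.

    The parallel lists review_writers / review_ratings are index-aligned;
    a rating of -1 marks a non-review entry (author response, public
    comment, or a reviewer's follow-up reply), so those are dropped.
    """
    return [
        (writer, rating)
        for writer, rating in zip(record["review_writers"], record["review_ratings"])
        if rating != -1
    ]


def mean_rating(record):
    """Average rating over the official review scores, or None if there are none."""
    scores = [rating for _, rating in official_review_scores(record)]
    return sum(scores) / len(scores) if scores else None


# Example: the FearNet record (iclr_2018_SJ1Xmf-Rb) above. Note the second
# official_reviewer entry is a follow-up reply, so its -1 rating is filtered out.
record = {
    "review_writers": [
        "official_reviewer", "official_reviewer", "official_reviewer",
        "official_reviewer", "author", "author", "author", "author",
    ],
    "review_ratings": [7, -1, 7, 6, -1, -1, -1, -1],
}
```

Here `mean_rating(record)` averages only the three posted scores (7, 7, 6), giving roughly 6.67.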