| paper_id (string) | summaries (list) | abstractText (string) | authors (list) | references (list) | sections (list) | year (int64, nullable) | title (string, nullable) |
|---|---|---|---|---|---|---|---|
SP:a85b6e1281b4c5f84e891b0897affe5971d4ff7a | [
"The paper presents an algorithm for performing min-max optimisation without gradients and analyses its convergence. The algorithm is evaluated for the min-max problems that arise in the context of adversarial attacks. The presented algorithm is a natural application of a zeroth-order gradient estimator and the aut... | In this paper, we study the problem of constrained robust (min-max) optimization in a black-box setting, where the desired optimizer cannot access the gradients of the objective function but may query its values. We present a principled optimization framework, integrating a zeroth-order (ZO) gradient estimator with an ... | [] | [
{
"authors": [
"Charu Aggarwal",
"Djallel Bouneffouf",
"Horst Samulowitz",
"Beat Buesser",
"Thanh Hoang",
"Udayan Khurana",
"Sijia Liu",
"Tejaswini Pedapati",
"Parikshit Ram",
"Ambrish Rawat"
],
"title": "How can ai automate end-to-end data sci... | [
{
"heading": "1 INTRODUCTION",
"text": "In numerous real-world applications, one is faced with various forms of adversary that are not accounted for by standard optimization algorithms. For instance, when training a machine learning model on user-provided data, malicious users can carry out a data poisoning... | 2,019 | null |
SP:41867edbd1bb96ff8340c8decefba2127a67dced | [
"The paper proposes a model building off of the generative query network model that takes in as input multiple images, builds a model of the 3D scene, and renders it. This can be trained end to end. The insight of the method is that one can factor the underlying representation into different objects. The system is ... | In this paper, we propose a probabilistic generative model, called ROOTS, for unsupervised learning of object-oriented 3D-scene representation and rendering. ROOTS is based on the Generative Query Network (GQN) framework. However, unlike GQN, ROOTS provides independent, modular, and object-oriented decomposition of the 3D... | [] | [
{
"authors": [
"Jacob Andreas",
"Marcus Rohrbach",
"Trevor Darrell",
"Dan Klein"
],
"title": "Neural module networks",
"venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition,",
"year": 2016
},
{
"authors": [
"Christopher P... | [
{
"heading": "1 INTRODUCTION",
"text": "The shortcomings of contemporary deep learning such as interpretability, sample efficiency, ability for reasoning and causal inference, transferability, and compositionality, are where the symbolic AI has traditionally shown its strengths (Garnelo & Shanahan, 2019). T... | 2,019 | null |
SP:05a329e1e9faa9917c278dd2ba1eb5090189bdf9 | [
"This paper presents a method for single image 3D reconstruction. It is inspired by implicit shape models, like those presented in Park et al. and Mescheder et al., which, given a latent code, map 3D positions to signed distances or occupancy values, respectively. However, instead of a latent vector, the proposed method... | We present a new approach to 3D object representation where a neural network encodes the geometry of an object directly into the weights and biases of a second ‘mapping’ network. This mapping network can be used to reconstruct an object by applying its encoded transformation to points randomly sampled from a simple geo... | [
{
"affiliations": [],
"name": "Eric Mitchell"
},
{
"affiliations": [],
"name": "Selim Engin"
},
{
"affiliations": [],
"name": "Volkan Isler"
},
{
"affiliations": [],
"name": "Daniel D Lee"
}
] | [
{
"authors": [
"Angel X. Chang",
"Thomas A. Funkhouser",
"Leonidas J. Guibas",
"Pat Hanrahan",
"Qi-Xing Huang",
"Zimo Li",
"Silvio Savarese",
"Manolis Savva",
"Shuran Song",
"Hao Su",
"Jianxiong Xiao",
"Li Yi",
"Fisher Yu"
],
... | [
{
"heading": null,
"text": "We present a new approach to 3D object representation where a neural network encodes the geometry of an object directly into the weights and biases of a second ‘mapping’ network. This mapping network can be used to reconstruct an object by applying its encoded transformation to p... | 2,020 | null |
SP:7f6ef5f3fa7627e799377aa06561904b80c5c1c4 | [
"This paper proposes a novel direction for curriculum learning. Previous works in the area of curriculum learning focused on choosing easier samples first and harder samples later when learning the neural network models. This is problematic since we need to first compute how difficult each sample is, which intr... | Like humans, deep networks learn better when samples are organized and introduced in a meaningful order or curriculum (Weinshall et al., 2018). While conventional approaches to curriculum learning emphasize the difficulty of samples as the core incremental strategy, it forces networks to learn from small subsets of dat... | [] | [
{
"authors": [
"Judith Avrahami",
"Yaakov Kareev",
"Yonatan Bogot",
"Ruth Caspi",
"Salomka Dunaevsky",
"Sharon Lerner"
],
"title": "Teaching by examples: Implications for the process of category acquisition",
"venue": "The Quarterly Journal of Experimental Psychol... | [
{
"heading": "1 INTRODUCTION",
"text": "Deep networks have seen rich applications in high-dimensional problems characterized by a large number of labels and a high volume of samples. However, successfully training deep networks to solve problems under such conditions is mystifyingly hard (Erhan et al. (2009... | 2,019 | null |
SP:c3a5a5600463b8f590e9a2b10f7984973410b043 | [
"The paper proposes an imitation learning algorithm that combines support estimation with adversarial training. The key idea is simple: multiply the reward from Random Expert Distillation (RED) with the reward from Generative Adversarial Imitation Learning (GAIL). The new reward combines the best of both methods. L... | We propose Support-guided Adversarial Imitation Learning (SAIL), a generic imitation learning framework that unifies support estimation of the expert policy with the family of Adversarial Imitation Learning (AIL) algorithms. SAIL addresses two important challenges of AIL, including the implicit reward bias and potentia... | [] | [
{
"authors": [
"Pieter Abbeel",
"Andrew Y Ng"
],
"title": "Apprenticeship learning via inverse reinforcement learning",
"venue": "In Proceedings of the twenty-first international conference on Machine learning,",
"year": 2004
},
{
"authors": [
"Martin Arjovsky",
"... | [
{
"heading": "1 INTRODUCTION",
"text": "The class of Adversarial Imitation Learning (AIL) algorithms learns robust policies that imitate an expert’s actions from a small number of expert trajectories, without further access to the expert or environment signals. AIL iterates between refining a reward via adv... | 2,019 | null |
SP:812c4e2bd2b3e6b25fc6869775bea958498cbfd1 | [
"This paper tackles an issue imitation learning approaches face. More specifically, policies learned in this manner can often fail when they encounter new states not seen in demonstrations. The paper proposes a method for learning value functions that are more conservative on unseen states, which encourages the lea... | Imitation learning, followed by reinforcement learning algorithms, is a promising paradigm to solve complex control tasks sample-efficiently. However, learning from demonstrations often suffers from the covariate shift problem, which results in cascading errors of the learned policy. We introduce a notion of conservati... | [
{
"affiliations": [],
"name": "NEGATIVE SAMPLING"
},
{
"affiliations": [],
"name": "Yuping Luo"
}
] | [
{
"authors": [
"Pieter Abbeel",
"Andrew Y Ng"
],
"title": "Apprenticeship learning via inverse reinforcement learning",
"venue": "In Proceedings of the twenty-first international conference on Machine learning,",
"year": 2004
},
{
"authors": [
"Jacopo Aleotti",
"S... | [
{
"heading": null,
"text": "Imitation learning, followed by reinforcement learning algorithms, is a promising paradigm to solve complex control tasks sample-efficiently. However, learning from demonstrations often suffers from the covariate shift problem, which results in cascading errors of the learned pol... | 2,020 | null |
SP:c2dfaba3df490671f8ce20bf69df96d0887aa19d | [
"The authors propose a prediction model for directed acyclic graphs (DAGs) over a fixed set of vertices based on a neural network. The present work follows the previous work on undirected acyclic graphs, where the key constraint is (3), ensuring the acyclic property. The proposed method performed favorably on artif... | We propose a novel score-based approach to learning a directed acyclic graph (DAG) from observational data. We adapt a recently proposed continuous constrained optimization formulation to allow for nonlinear relationships between variables using neural networks. This extension allows modeling of complex interactions while... | [
{
"affiliations": [],
"name": "Sébastien Lachapelle"
},
{
"affiliations": [],
"name": "Philippe Brouillard"
},
{
"affiliations": [],
"name": "Tristan Deleu"
},
{
"affiliations": [],
"name": "Simon Lacoste-Julien"
}
] | [
{
"authors": [
"J. Alayrac",
"P. Bojanowski",
"N. Agrawal",
"J. Sivic",
"I. Laptev",
"S. Lacoste-Julien"
],
"title": "Learning from narrated instruction",
"venue": "videos. TPAMI,",
"year": 2018
},
{
"authors": [
"A.-L. Barabási"
],
"titl... | [
{
"heading": "1 INTRODUCTION",
"text": "Structure learning and causal inference have many important applications in different areas of science such as genetics (Koller & Friedman, 2009; Peters et al., 2017), biology (Sachs et al., 2005) and economics (Pearl, 2009). Bayesian networks (BN), which encode condi... | 2,020 | GRADIENT-BASED NEURAL DAG LEARNING |
SP:4aebddd56e10489765e302e291cf41589d02b530 | [
"The paper presents a new NN architecture designed for life-long learning of natural language processing. As depicted in Figure 2, the proposed network is trained to generate the correct answers and training samples at the same time. This prevents the \"catastrophic forgetting\" of an old task. Compared to the... | Most research on lifelong learning applies to images or games, but not language. We present LAMOL, a simple yet effective method for lifelong language learning (LLL) based on language modeling. LAMOL replays pseudo-samples of previous tasks while requiring no extra memory or model capacity. Specifically, LAMOL is a lan... | [
{
"affiliations": [],
"name": "Fan-Keng Sun"
},
{
"affiliations": [],
"name": "Cheng-Hao Ho"
},
{
"affiliations": [],
"name": "Hung-Yi Lee"
}
] | [
{
"authors": [
"Rahaf Aljundi",
"Francesca Babiloni",
"Mohamed Elhoseiny",
"Marcus Rohrbach",
"Tinne Tuytelaars"
],
"title": "Memory aware synapses: Learning what (not) to forget",
"venue": "In Proceedings of the European Conference on Computer Vision (ECCV),",
"yea... | [
{
"heading": "1 INTRODUCTION",
"text": "The current dominant paradigm for machine learning is to run an algorithm on a given dataset to produce a trained model specifically for a particular purpose; this is isolated learning (Chen & Liu, 2016, p. 150). In isolated learning, the model is unable to retain and... | 2,019 | LAMOL: LANGUAGE MODELING FOR LIFELONG LANGUAGE LEARNING |
SP:bce4d9d2825454f2b345f4650abac10efee7c2fb | [
"The problem addressed by this paper is the estimation of trajectories of moving objects thrown / launched by a user, in particular in computer games like angry birds or basketball simulation games. A deep neural network is trained on a small dataset of ~ 300 trajectories and estimates the underlying physical prope... | In this work we present an approach that combines deep learning together with laws of Newton’s physics for accurate trajectory predictions in physical games. Our model learns to estimate physical properties and forces that generated given observations, learns the relationships between available player’s actions and est... | [] | [
{
"authors": [
"Rene Baillargeon"
],
"title": "Physical reasoning in infancy",
"venue": "Advances in infancy research,",
"year": 1995
},
{
"authors": [
"Peter W. Battaglia",
"Razvan Pascanu",
"Matthew Lai",
"Danilo Jimenez Rezende",
"Koray Kavukcuoglu"... | [
{
"heading": "1 INTRODUCTION",
"text": "Games that follow Newton’s laws of physics despite being a relatively easy task for humans, remain to be a challenging task for artificially intelligent agents due to the requirements for an agent to understand underlying physical laws and relationships between availa... | 2,019 | LEARNING UNDERLYING PHYSICAL PROPERTIES FROM OBSERVATIONS FOR TRAJECTORY PREDICTION |
SP:f6af733aa873bf6ee0f69ec868a2d7a493a0dd0b | [
"They suggest two improvements to boundary detection models: (1) a curriculum learning approach, and (2) augmenting CNNs with features derived from a wavelet transform. For (1), they train half of the epochs with a target boundary that is the intersection between a Canny edge filter and the dilated groundtruth. The ... | This work addresses class-specific object boundary extraction, i.e., retrieving boundary pixels that belong to a class of objects in the given image. Although recent ConvNet-based approaches demonstrate impressive results, we notice that they produce several false-alarms and misdetections when used in real-world applic... | [] | [
{
"authors": [
"David Acuna",
"Amlan Kar",
"Sanja Fidler"
],
"title": "Devil is in the edges: Learning semantic boundaries from noisy annotations",
"venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition,",
"year": 2019
},
{
"authors":... | [
{
"heading": "1 INTRODUCTION",
"text": "Class-specific object boundary extraction from images is a fundamental problem in Computer Vision (CV). It has been used as a basic module for several applications including object localization [Yu et al. (2018a); Wang et al. (2015)], 3D reconstruction [Lee et al. (20... | 2,019 | null |
SP:91fbd1f4774de6619bd92d37e1a1b1e7f2ed96f3 | [
"The paper proposes an extension to the Viper[1] method for interpreting and verifying deep RL policies by learning a mixture of decision trees to mimic the originally learned policy. The proposed approach can imitate the deep policy better compared with Viper while preserving verifiability. Empirically the propose... | Deep Reinforcement Learning (DRL) has led to many recent breakthroughs on complex control tasks, such as defeating the best human player in the game of Go. However, decisions made by the DRL agent are not explainable, hindering its applicability in safety-critical settings. Viper, a recently proposed technique, constru... | [] | [
{
"authors": [
"David Silver",
"Aja Huang",
"Chris J Maddison",
"Arthur Guez",
"Laurent Sifre",
"George Van Den Driessche",
"Julian Schrittwieser",
"Ioannis Antonoglou",
"Veda Panneershelvam",
"Marc Lanctot"
],
"title": "Mastering the game of G... | [
{
"heading": "1 INTRODUCTION",
"text": "Deep Reinforcement Learning (DRL) has achieved many recent breakthroughs in challenging domains such as Go (Silver et al., 2016). While using neural networks for encoding state representations allow DRL agents to learn policies for tasks with large state spaces, the l... | 2,019 | null |
SP:ddc70109c59cf0db7fe020300ab762a5ac57bd93 | [
"This paper studies the internal representations of recurrent neural networks trained on navigation tasks. By varying the weight of different terms in an objective used for supervised pre-training, RNNs are created that either use path integration or landmark memory for navigation. The paper shows that the pretrain... | Navigation is crucial for animal behavior and is assumed to require an internal representation of the external environment, termed a cognitive map. The precise form of this representation is often considered to be a metric representation of space. An internal representation, however, is judged by its contribution to pe... | [
{
"affiliations": [],
"name": "Tie Xu"
},
{
"affiliations": [],
"name": "Omri Barak"
}
] | [
{
"authors": [
"Yoshua Bengio",
"Jérôme Louradour",
"Ronan Collobert",
"Jason Weston"
],
"title": "Curriculum learning",
"venue": "In Proceedings of the 26th annual international conference on machine learning,",
"year": 2009
},
{
"authors": [
"Yoram Burak",... | [
{
"heading": "1 INTRODUCTION",
"text": "Spatial navigation is an important task that requires a correct internal representation of the world, and thus its mechanistic underpinnings have attracted the attention of scientists for a long time (O’Keefe & Nadel, 1978). A standard tool for navigation is a euclide... | 2,020 | IMPLEMENTING INDUCTIVE BIAS FOR DIFFERENT NAVIGATION TASKS THROUGH DIVERSE RNN ATTRACTORS |
SP:faca1e6eda4ad3b91ab99995e420398c01cc0e42 | [
"This paper presents a computational model of motivation for Q learning and relates it to biological models of motivation. Motivation is presented to the agent as a component of its inputs, and is encoded in a vectorised reward function where each component of the reward is weighted. This approach is explored in th... | How can animals behave effectively in conditions involving different motivational contexts? Here, we propose how reinforcement learning neural networks can learn optimal behavior for dynamically changing motivational salience vectors. First, we show that Q-learning neural networks with motivation can navigate in enviro... | [] | [
{
"authors": [
"Marcin Andrychowicz",
"Filip Wolski",
"Alex Ray",
"Jonas Schneider",
"Rachel Fong",
"Peter Welinder",
"Bob McGrew",
"Josh Tobin",
"Pieter Abbeel",
"Wojciech Zaremba"
],
"title": "Hindsight experience replay",
"venue": "In Ad... | [
{
"heading": "1 INTRODUCTION",
"text": "Motivation is a cognitive process that propels an individual’s behavior towards or away from a particular object, perceived event, or outcome (Zhang et al., 2009). Mathematically, motivation can be viewed as subjective modulation of the perceived reward value before t... | 2,019 | null |
SP:5ca4c62eae1c6a5a870524715c3be44c40383f98 | [
"The paper presents an algorithm to match two distributions with latent variables, named expected information maximization (EIM). Specifically, EIM is based on the I-Projection, which basically is equivalent to minimizing the reverse KL divergence (i.e. min KL[p_model || p_data]); to handle latent variables, an upp... | Modelling highly multi-modal data is a challenging problem in machine learning. Most algorithms are based on maximizing the likelihood, which corresponds to the M(oment)-projection of the data distribution to the model distribution. The M-projection forces the model to average over modes it cannot represent. In contras... | [
{
"affiliations": [],
"name": "Philipp Becker"
},
{
"affiliations": [],
"name": "Oleg Arenz"
}
] | [
{
"authors": [
"Abbas Abdolmaleki",
"Rudolf Lioutikov",
"Jan R Peters",
"Nuno Lau",
"Luis Pualo Reis",
"Gerhard Neumann"
],
"title": "Model-based relative entropy stochastic search",
"venue": "In Advances in Neural Information Processing Systems,",
"year": 201... | [
{
"heading": "1 INTRODUCTION",
"text": "Learning the density of highly multi-modal distributions is a challenging machine learning problem relevant to many fields such as modelling human behavior (Pentland & Liu, 1999). Most common methods rely on maximizing the likelihood of the data. It is well known that... | 2,020 | null |
SP:311d2ebcdc0f71789d6c46d23451657519495119 | [
"The paper theoretically investigates the role of “local optima” of the variational objective in ignoring latent variables (leading to posterior collapse) in variational autoencoders. The paper first discusses various potential causes for posterior collapse before diving deeper into a particular cause: local optima... | In narrow asymptotic settings Gaussian VAE models of continuous data have been shown to possess global optima aligned with ground-truth distributions. Even so, it is well known that poor solutions whereby the latent posterior collapses to an uninformative prior are sometimes obtained in practice. However, contrary to c... | [] | [
{
"authors": [
"A. Alemi",
"B. Poole",
"I. Fischer",
"J. Dillon",
"R. Saurous",
"K. Murphy"
],
"title": "Fixing a broken ELBO",
"venue": "arXiv preprint arXiv:1711.00464,",
"year": 2017
},
{
"authors": [
"M. Bauer",
"A. Mnih"
],
"ti... | [
{
"heading": "1 INTRODUCTION",
"text": "The variational autoencoder (VAE) (Kingma & Welling, 2014; Rezende et al., 2014) represents a powerful generative model of data points that are assumed to possess some complex yet unknown latent structure. This assumption is instantiated via the marginalized distribut... | 2,019 | null |
SP:c8c5809f731c2f0c6bf01e24bc4d9eb7cf924ccd | [
"This is an interesting paper, as it tries to understand the role of hierarchical methods (such as Options, higher-level controllers, etc.) in RL. The core contribution of the paper is to understand and evaluate the claimed benefits often proposed by hierarchical methods, and it finds that the core benefit in fact comes fr... | Hierarchical reinforcement learning has demonstrated significant success at solving difficult reinforcement learning (RL) tasks. Previous works have motivated the use of hierarchy by appealing to a number of intuitive benefits, including learning over temporally extended transitions, exploring over temporally extended ... | [
{
"affiliations": [],
"name": "SO WELL"
}
] | [
{
"authors": [
"Mohammad Gheshlaghi Azar",
"Ian Osband",
"Rémi Munos"
],
"title": "Minimax regret bounds for reinforcement learning",
"venue": "In Proceedings of the 34th International Conference on Machine Learning-Volume",
"year": 2017
},
{
"authors": [
"Pierre-... | [
{
"heading": "1 INTRODUCTION",
"text": "Many real-world tasks may be decomposed into natural hierarchical structures. To navigate a large building, one first needs to learn how to walk and turn before combining these behaviors to achieve robust navigation; to wash dishes, one first needs to learn basic obje... | 2,019 | WHY DOES HIERARCHY (SOMETIMES) WORK |
SP:385a392e6d055abd65a737f3c5be58105778ac11 | [
"Stability is one of the important aspects of machine learning. This paper views Jacobian regularization as a scheme to improve the stability, and studies the behavior of Jacobian regularization under random input perturbations, adversarial input perturbations, train/test distribution shift, and simply as a regula... | Design of reliable systems must guarantee stability against input perturbations. In machine learning, such guarantee entails preventing overfitting and ensuring robustness of models against corruption of input data. In order to maximize stability, we analyze and develop a computationally efficient implementation of Jac... | [] | [
{
"authors": [
"Othmar H Amman",
"Theodore von Kármán",
"Glenn B Woodruff"
],
"title": "The failure of the Tacoma Narrows bridge",
"venue": "Report to the Federal Works Agency,",
"year": 1941
},
{
"authors": [
"Richard P Feynman",
"Ralph Leighton"
],
... | [
{
"heading": "1 INTRODUCTION",
"text": "Stability analysis lies at the heart of many scientific and engineering disciplines. In an unstable system, infinitesimal perturbations amplify and have substantial impacts on the performance of the system. It is especially critical to perform a thorough stability ana... | 2,019 | null |
SP:da1e92e9459d9f305f206e309faa8e9bbf8e6afa | [
"This paper proposes a multichannel generative language model (MGLM), which models the joint distribution p(channel_1, ..., channel_k) over k channels. MGLM can be used for both conditional generation (e.g., machine translation) and unconditional sampling. In the experiments, MGLM uses the Multi30k dataset where mu... | A channel corresponds to a viewpoint or transformation of an underlying meaning. A pair of parallel sentences in English and French express the same underlying meaning but through two separate channels corresponding to their languages. In this work, we present Multichannel Generative Language Models (MGLM), which model... | [] | [
{
"authors": [
"Loïc Barrault",
"Fethi Bougares",
"Lucia Specia",
"Chiraag Lala",
"Desmond Elliott",
"Stella Frank"
],
"title": "Findings of the third shared task on multimodal machine translation",
"venue": "In Proceedings of the Third Conference on Machine Tran... | [
{
"heading": "1 INTRODUCTION",
"text": "A natural way to consider two parallel sentences in different languages is that each language is expressing the same underlying meaning under a different viewpoint. Each language can be thought of as a transformation that maps an underlying concept into a view that we... | 2,019 | MULTICHANNEL GENERATIVE LANGUAGE MODELS |
SP:69704bad659d8cc6e35dc5b7f372bf2e39805f4f | [
"This paper studies the convergence of multiple methods (gradient, extragradient, optimistic, and momentum) on a bilinear minmax game. More precisely, this paper uses spectral conditions to study the difference between simultaneous (Jacobi) and alternating (Gau\\ss-Seidel) updates. The analysis is based on Schur theo... | Min-max formulations have attracted great attention in the ML community due to the rise of deep generative models and adversarial methods, while understanding the dynamics of gradient algorithms for solving such formulations has remained a grand challenge. As a first step, we restrict to bilinear zero-sum games and giv... | [
{
"affiliations": [],
"name": "ZERO-SUM GAMES"
},
{
"affiliations": [],
"name": "Guojun Zhang"
},
{
"affiliations": [],
"name": "Yaoliang Yu"
}
] | [
{
"authors": [
"M. Arjovsky",
"S. Chintala",
"L. Bottou"
],
"title": "Wasserstein generative adversarial networks",
"venue": "In International Conference on Machine Learning,",
"year": 2017
},
{
"authors": [
"K.J. Arrow",
"L. Hurwicz",
"H. Uzawa"
]... | [
{
"heading": "1 INTRODUCTION",
"text": "Min-max optimization has received significant attention recently due to the popularity of generative adversarial networks (GANs) (Goodfellow et al., 2014), adversarial training (Madry et al., 2018) and reinforcement learning (Du et al., 2017; Dai et al., 2018), just t... | 2,020 | null |
SP:0a523e5c8790b62fef099d7c5bec61bb18a2703c | [
"In this paper, the authors tackle the problem of multi-modal image-to-image translation by pre-training a style-based encoder. The style-based encoder is trained with a triplet loss that encourages similarity between images with similar styles and dissimilarity between images with different styles. The output of t... | Image-to-image (I2I) translation aims to translate images from one domain to another. To tackle the multi-modal version of I2I translation, where input and output domains have a one-to-many relation, an extra latent input is provided to the generator to specify a particular output. Recent works propose involved trainin... | [] | [
{
"authors": [
"Amjad Almahairi",
"Sai Rajeshwar",
"Alessandro Sordoni",
"Philip Bachman",
"Aaron Courville"
],
"title": "Augmented CycleGAN: Learning many-to-many mappings from unpaired data",
"venue": null,
"year": 2018
},
{
"authors": [
"Qifeng Chen... | [
{
"heading": "1 INTRODUCTION",
"text": "Image-to-Image (I2I) translation is the task of transforming images from one domain to another (e.g., semantic maps→ scenes, sketches→ photo-realistic images, etc.). Many problems in computer vision and graphics can be cast as I2I translation, such as photo-realistic ... | 2,019 | null |
SP:8ec794421e38087b73f7d7fb4fbf373728ea39c7 | [
"This paper considers learning low-dimensional representations from high-dimensional observations for control purposes. The authors extend the E2C framework by introducing the new PCC-Loss function. This new loss function aims to reflect the prediction in the observation space, the consistency between latent and ob... | Many real-world sequential decision-making problems can be formulated as optimal control with high-dimensional observations and unknown dynamics. A promising approach is to embed the high-dimensional observations into a lower-dimensional latent representation space, estimate the latent dynamics model, then utilize this ... | [
{
"affiliations": [],
"name": "LOCALLY-LINEAR CONTROL"
},
{
"affiliations": [],
"name": "Nir Levine"
},
{
"affiliations": [],
"name": "Yinlam Chow"
},
{
"affiliations": [],
"name": "Rui Shu"
},
{
"affiliations": [],
"name": "Ang Li"
},
{
"affiliations"... | [
{
"authors": [
"E. Banijamali",
"R. Shu",
"M. Ghavamzadeh",
"H. Bui",
"A. Ghodsi"
],
"title": "Robust locally-linear controllable embedding",
"venue": "In Proceedings of the Twenty First International Conference on Artificial Intelligence and Statistics,",
"year": 2... | [
{
"heading": "1 INTRODUCTION",
"text": "Decomposing the problem of decision-making in an unknown environment into estimating dynamics followed by planning provides a powerful framework for building intelligent agents. This decomposition confers several notable benefits. First, it enables the handling of spa... | 2,020 | null |
SP:2656017dbf3c1e8b659857d3a44fdbb91e186237 | [
"This paper proposes a neural network architecture to classify graph structure. A graph is specified using its adjacency matrix, and the authors propose to extract features by identifying templates, implemented as small kernels on submatrices of the adjacency matrix. The main problem is how to handle isomorphism: ther... | Deep learning models have achieved huge success in numerous fields, such as computer vision and natural language processing. However, unlike such fields, it is hard to apply traditional deep learning models on the graph data due to the ‘node-orderless’ property. Normally, adjacency matrices will cast an artificial and ... | [] | [
{
"authors": [
"Sami Abu-El-Haija",
"Bryan Perozzi",
"Rami Al-Rfou",
"Alexander A Alemi"
],
"title": "Watch your step: Learning node embeddings via graph attention",
"venue": "In Advances in Neural Information Processing Systems,",
"year": 2018
},
{
"authors": [
... | [
{
"heading": "1 INTRODUCTION",
"text": "The graph structure is attracting increasing interests because of its great representation power on various types of data. Researchers have done many analyses based on different types of graphs, such as social networks, brain networks and biological networks. In this ... | 2,019 | null |
SP:86076eabb48ef1fe9d51b54945bf81ed44bcacd7 | [
"This paper lists several limitations of the translation-based Knowledge Graph embedding method TransE, which have been identified by prior works, and theoretically/empirically shows that all limitations can be addressed by altering the loss function and shifting to the complex domain. The authors propose four varian... | Knowledge graphs (KGs) represent world’s facts in structured forms. KG completion exploits the existing facts in a KG to discover new ones. Translation-based embedding model (TransE) is a prominent formulation to do KG completion. Despite the efficiency of TransE in memory and time, it is claimed that TransE suffers fr... | [] | [
{
"authors": [
"Farahnaz Akrami",
"Lingbing Guo",
"Wei Hu",
"Chengkai Li"
],
"title": "Re-evaluating embedding-based knowledge graph completion methods",
"venue": "In Proceedings of the 27th ACM International Conference on Information and Knowledge Management,",
"year": 2... | [
{
"heading": "1 INTRODUCTION",
"text": "Knowledge is considered as commonsense facts and other information accumulated from different sources. A Knowledge Graph (KG) is collection of facts and is usually represented as a set of triples (h, r, t) where h, t are entities and r is a relation, e.g. (iphone, hyp... | 2,019 | null |
SP:3d3842a5e0816084c5a2406f1b0143d0215b9559 | [
"The authors propose a new gradient-based method (FAB) for constructing adversarial perturbations for deep neural networks. At a high level, the method repeatedly estimates the decision boundary based on the linearization of the classifier at a given point and projects to the closest \"misclassified\" example based... | The evaluation of robustness against adversarial manipulations of neural networks-based classifiers is mainly tested with empirical attacks as the methods for the exact computation, even when available, do not scale to large networks. We propose in this paper a new white-box adversarial attack wrt the lp-norms for p ∈ ... | [] | [
{
"authors": [
"A. Athalye",
"N. Carlini",
"D.A. Wagner"
],
"title": "Obfuscated gradients give a false sense of security: Circumventing defenses to adversarial examples",
"venue": null,
"year": 2018
},
{
"authors": [
"O. Bastani",
"Y. Ioannou",
"L. La... | [
{
"heading": "1 Introduction",
"text": "The finding of the vulnerability of neural networks-based classifiers to adversarial examples, that is small perturbations of the input able to modify the decision of the models, started a fast development of a variety of attack algorithms. The high effectiveness of a... | 2,019 | Minimally distorted Adversarial Examples with a Fast Adaptive Boundary Attack |
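The linearize-and-project step that boundary attacks such as FAB iterate can be sketched for a single linear constraint. `project_to_boundary_l2` is a hypothetical helper name; the full attack adds a biased step toward the original point and restarts, which are not shown:

```python
import numpy as np

def project_to_boundary_l2(x, w, b):
    """Closest point in l2 to x on the hyperplane {z : w.z + b = 0} -- the
    projection applied after linearizing the classifier's decision boundary at x."""
    return x - (w @ x + b) * w / (w @ w)

x = np.array([2.0, 0.0])
w = np.array([1.0, 0.0])
xp = project_to_boundary_l2(x, w, 0.0)
assert np.isclose(w @ xp, 0.0)   # xp lies exactly on the (linearized) boundary
```

For p other than 2, the projection onto a hyperplane has a similar closed form with the dual norm of w in the denominator.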
SP:51a88b77450225e0f80f9fa25510fb4ea64463b2 | [
"The authors present a model for time series which are represented as discrete events in continuous time and describe methods for doing parameter inference, future event prediction and entropy rate estimation for such processes. Their model is based on models for Bayesian Structure prediction where they add the tem... | The inference of models, prediction of future symbols, and entropy rate estimation of discrete-time, discrete-event processes is well-worn ground. However, many time series are better conceptualized as continuous-time, discrete-event processes. Here, we provide new methods for inferring models, predicting future symbol... | [] | [
{
"authors": [
"Evan Archer",
"Il Memming Park",
"Jonathan W Pillow"
],
"title": "Bayesian entropy estimation for countable discrete distributions",
"venue": "The Journal of Machine Learning Research,",
"year": 2014
},
{
"authors": [
"Dieter Arnold",
"H-A Lo... | [
{
"heading": null,
"text": "The inference of models, prediction of future symbols, and entropy rate estimation of discrete-time, discrete-event processes is well-worn ground. However, many time series are better conceptualized as continuous-time, discrete-event processes. Here, we provide new methods for in... | 2,019 | null |
SP:06bbc70edab65f046adb46bc364c3b91f5880845 | [
"This paper proposes to incorporate between-node path information into the inference of conventional graph neural network methods. Specifically, the proposed method treats the nodes in the training set as a reference corpus and, when inferring the label of a specific node, makes this node \"attend\" to the reference co... | In this work, we address semi-supervised classification of graph data, where the categories of those unlabeled nodes are inferred from labeled nodes as well as graph structures. Recent works often solve this problem via advanced graph convolution in a conventionally supervised manner, but the performance could degrade ... | [
{
"affiliations": [],
"name": "Chunyan Xu"
},
{
"affiliations": [],
"name": "Zhen Cui"
},
{
"affiliations": [],
"name": "Xiaobin Hong"
},
{
"affiliations": [],
"name": "Tong Zhang"
},
{
"affiliations": [],
"name": "Jian Yang"
},
{
"affiliations": [],
... | [
{
"authors": [
"Sami Abu-El-Haija",
"Amol Kapoor",
"Bryan Perozzi",
"Joonseok Lee"
],
"title": "N-gcn: Multi-scale graph convolution for semi-supervised node classification",
"venue": "arXiv preprint arXiv:1802.08888,",
"year": 2018
},
{
"authors": [
"James ... | [
{
"heading": "1 INTRODUCTION",
"text": "Graph, which comprises a set of vertices/nodes together with connected edges, is a formal structural representation of non-regular data. Due to the strong representation ability, it accommodates many potential applications, e.g., social network (Orsini et al., 2017), ... | 2,020 | null |
SP:bbcb77fc764f7e90ef6126d97d8195734fcdafe8 | [
"This paper deals with three theoretical properties of ridge regression. First, it proves that the ridge regression estimator is equivalent to a specific representation, which is useful because, for instance, it can be used to derive the training error of the ridge estimator. Second, it provides a bias-correction mechanism for... | We study the following three fundamental problems about ridge regression: (1) what is the structure of the estimator? (2) how to correctly use cross-validation to choose the regularization parameter? and (3) how to accelerate computation without losing too much accuracy? We consider the three problems in a unified larg... | [
{
"affiliations": [],
"name": "Sifan Liu"
}
] | [
{
"authors": [
    "Nir Ailon",
"Bernard Chazelle"
],
"title": "Approximate nearest neighbors and the fast johnson-lindenstrauss transform",
"venue": null,
"year": 2017
},
{
"authors": [
"Theodore W Anderson"
],
"title": "An Introduction to Multivariate Statis... | [
{
"heading": null,
"text": "We study the following three fundamental problems about ridge regression: (1) what is the structure of the estimator? (2) how to correctly use cross-validation to choose the regularization parameter? and (3) how to accelerate computation without losing too much accuracy? We consi... | 2,020 | null |
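Problem (1) above, the structure of the ridge estimator, is visible in the SVD form of the solution, where each singular direction is shrunk by s/(s^2+lambda). A minimal sketch (function name is illustrative):

```python
import numpy as np

def ridge_svd(X, y, lam):
    """Ridge estimator via SVD of X = U diag(s) V^T:
    beta = V diag(s / (s^2 + lam)) U^T y, exposing per-direction shrinkage."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return Vt.T @ ((s / (s**2 + lam)) * (U.T @ y))

rng = np.random.default_rng(0)
X = rng.standard_normal((50, 5)); y = rng.standard_normal(50); lam = 0.7
beta_direct = np.linalg.solve(X.T @ X + lam * np.eye(5), X.T @ y)
assert np.allclose(ridge_svd(X, y, lam), beta_direct)
```

The SVD form also makes the training error and cross-validation shortcuts cheap to evaluate across many lambda values, since only the diagonal shrinkage factors change.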
SP:d5ccf8fdd029c2a99dac0441385f280ed3fc03fb | [
"The authors extend the regular convolution and propose spatially shuffled convolution to use information outside of the RF, inspired by the idea that horizontal connections are believed to be important for visual processing in the visual cortex of the biological brain. The authors propose ss convoluti... | Convolutional Neural Networks (CNNs) are composed of multiple convolution layers and show elegant performance in vision tasks. The design of the regular convolution is based on the Receptive Field (RF) where the information within a specific region is processed. In the view of the regular convolution’s RF, the outputs ... | [
{
"affiliations": [],
"name": "SPATIAL SHUFFLING"
}
] | [
{
"authors": [
"Ossama Abdel-Hamid",
"Abdel-Rahman Mohamed",
"Hui Jiang",
"Li Deng",
"Gerald Penn",
"Dong Yu"
],
"title": "Convolutional neural networks for speech recognition",
"venue": "IEEE/ACM Transactions on Audio, Speech, and Language Processing,",
"year... | [
{
"heading": null,
"text": "INCORPORATING HORIZONTAL CONNECTIONS IN CONVOLUTION BY SPATIAL SHUFFLING\nConvolutional Neural Networks (CNNs) are composed of multiple convolution layers and show elegant performance in vision tasks. The design of the regular co... | 2,019 | null |
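One simplified reading of the spatial-shuffling idea above: permute the spatial positions of a fraction of channels before convolving, so the subsequent convolution sees information from outside its receptive field. This is a sketch under that assumption, not the paper's exact operator:

```python
import numpy as np

def spatial_shuffle(x, frac=0.25, rng=None):
    """Permute the spatial positions of the first `frac` of channels of a
    (C, H, W) feature map; the remaining channels pass through unchanged.
    A simplified sketch of spatially shuffled convolution's preprocessing."""
    rng = rng or np.random.default_rng(0)
    c, h, w = x.shape
    k = int(c * frac)
    out = x.copy()
    perm = rng.permutation(h * w)
    out[:k] = out[:k].reshape(k, -1)[:, perm].reshape(k, h, w)
    return out

x = np.arange(2 * 4 * 4, dtype=float).reshape(2, 4, 4)
y = spatial_shuffle(x, frac=0.5)
assert np.allclose(y[1], x[1])  # untouched channels pass through intact
```

Because the shuffle only permutes positions, channel statistics are preserved; only the spatial arrangement (and hence the effective receptive field) changes.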
SP:aec7ce88f21b38c205522c88b3a3253e24754182 | [
"A method for a refinement loop for program synthesizers operating on input/output specifications is presented. The core idea is to generate several candidate solutions, execute them on several inputs, and then use a learned component to judge which of the resulting input/output pairs are most likely to be correct. ... | A key challenge of existing program synthesizers is ensuring that the synthesized program generalizes well. This can be difficult to achieve as the specification provided by the end user is often limited, containing as few as one or two input-output examples. In this paper we address this challenge via an iterative appr... | [
{
"affiliations": [],
"name": "Larissa Laich"
},
{
"affiliations": [],
"name": "Pavol Bielik"
},
{
"affiliations": [],
"name": "Martin Vechev"
}
] | [
{
"authors": [
"Matej Balog",
"Alexander L. Gaunt",
"Marc Brockschmidt",
"Sebastian Nowozin",
"Daniel Tarlow"
],
"title": "Deepcoder: Learning to write programs",
"venue": "In 5th International Conference on Learning Representations,",
"year": 2017
},
{
"aut... | [
{
"heading": "1 INTRODUCTION",
"text": "Over the years, program synthesis has been applied to a wide variety of different tasks including string, number or date transformations (Gulwani, 2011; Singh & Gulwani, 2012; 2016; Ellis et al., 2019; Menon et al., 2013; Ellis & Gulwani, 2017), layout and graphic pro... | 2,020 | GUIDING PROGRAM SYNTHESIS BY LEARNING TO GENERATE EXAMPLES |
SP:ca085e8e2675fe579df4187290b7b7dc37b8a729 | [
"In this paper, the authors address few-shot learning via a precise collaborative hallucinator. In particular, they follow the framework of (Wang et al., 2018), and introduce two kinds of training regularization. The soft precision-inducing loss follows the spirit of adversarial learning, by using knowledge distill... | Learning to hallucinate additional examples has recently been shown as a promising direction to address few-shot learning tasks, which aim to learn novel concepts from very few examples. The hallucination process, however, is still far from generating effective samples for learning. In this work, we investigate two imp... | [] | [
{
"authors": [
"Marcin Andrychowicz",
"Misha Denil",
"Sergio Gomez",
"Matthew W Hoffman",
"David Pfau",
"Tom Schaul",
"Brendan Shillingford",
"Nando De Freitas"
],
"title": "Learning to learn by gradient descent by gradient descent",
"venue": "In Advan... | [
{
"heading": "1 INTRODUCTION",
"text": "Modern deep learning models rely heavily on large amounts of annotated examples (Deng et al., 2009). Their data-hungry nature limits their applicability to real-world scenarios, where the cost of annotating examples is prohibitive, or they involve rare concepts (Zhu e... | 2,019 | null |
SP:28a2ee0012e23223b2c3501a94a5e72e0c718c66 | [
"The authors propose to use dynamic convolutional kernels as a means to reduce the computation cost in static CNNs while maintaining their performance. The dynamic kernels are obtained by a linear combination of static kernels where the weights of the linear combination are input-dependent (they are obtained simila... | Convolution operator is the core of convolutional neural networks (CNNs) and occupies the most computation cost. To make CNNs more efficient, many methods have been proposed to either design lightweight networks or compress models. Although some efficient network structures have been proposed, such as MobileNet or Shuf... | [] | [
{
"authors": [
"Jimmy Ba",
"Rich Caruana"
],
"title": "Do deep nets really need to be deep?",
"venue": "In Advances in Neural Information Processing Systems,",
"year": 2014
},
{
"authors": [
"Wenlin Chen",
"James Wilson",
"Stephen Tyree",
"Kilian Weinberger"... | [
{
"heading": "1 INTRODUCTION",
"text": "Convolutional neural networks (CNNs) have achieved state-of-the-art performance in many computer vision tasks (Krizhevsky et al., 2012; Szegedy et al., 2013), and the neural architectures of CNNs are evolving over the years (Krizhevsky et al., 2012; Simonyan & Zisserm... | 2,019 | null |
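The review's "linear combination of static kernels with input-dependent weights" can be sketched as a softmax-weighted kernel aggregation; the helper names and the use of softmax here are assumptions, sketching the idea rather than the paper's exact attention module:

```python
import numpy as np

def dynamic_kernel(static_kernels, logits):
    """Aggregate K static kernels with input-dependent softmax weights.
    `logits` would come from a small network over the input (not shown)."""
    w = np.exp(logits - logits.max())
    w /= w.sum()
    return np.tensordot(w, static_kernels, axes=1)  # sum_k w[k] * kernel[k]

K = np.stack([np.eye(3), np.ones((3, 3))])        # two static 3x3 kernels
agg = dynamic_kernel(K, np.array([0.0, 0.0]))     # equal logits -> equal weights
assert np.allclose(agg, 0.5 * np.eye(3) + 0.5 * np.ones((3, 3)))
```

The convolution itself then uses the single aggregated kernel, so the per-input cost is one convolution plus a cheap weighted sum, rather than K convolutions.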
SP:9e712c6f60b19d9309721eea514589755b4ce648 | [
"The paper derives results for nonnegative-matrix factorization along the lines of recent results on SGD for DNNs, showing that the loss is star-convex towards randomized planted solutions. The star-convexity property is also shown to hold to some degree on real world datasets. The paper argues that these results e... | Non-negative matrix factorization (NMF) is a highly celebrated algorithm for matrix decomposition that guarantees non-negative factors. The underlying optimization problem is computationally intractable, yet in practice gradient descent based solvers often find good solutions. This gap between computational hardness an... | [] | [
{
"authors": [
"P. Afshani",
"J. Barbay",
"T.M. Chan"
],
"title": "Instance-optimal geometric algorithms",
"venue": "Journal of the ACM (JACM),",
"year": 2017
},
{
"authors": [
"R. Ahlswede",
"A. Winter"
],
"title": "Strong converse for identificatio... | [
{
"heading": "1 INTRODUCTION",
"text": "Non-negative matrix factorization (NMF) is a ubiquitous technique for data analysis where one attempts to factorize a measurement matrix X into the product of non-negative matrices U,V (Lee and Seung, 1999). This simple problem has applications in recommender systems ... | 2,019 | null |
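For reference, the classical Lee-Seung multiplicative-update solver for the NMF objective discussed above (the paper itself analyzes gradient-descent-based solvers; this is the standard baseline):

```python
import numpy as np

def nmf_multiplicative(X, rank, iters=100, rng=None):
    """Lee-Seung multiplicative updates for min ||X - U V||_F^2, U, V >= 0.
    Updates keep factors non-negative by construction."""
    rng = rng or np.random.default_rng(0)
    n, m = X.shape
    U = rng.random((n, rank))
    V = rng.random((rank, m))
    eps = 1e-12  # guards against division by zero
    for _ in range(iters):
        U *= (X @ V.T) / (U @ V @ V.T + eps)
        V *= (U.T @ X) / (U.T @ U @ V + eps)
    return U, V

X = np.array([[1.0, 2.0], [2.0, 4.0]])            # exactly rank-1, non-negative
U, V = nmf_multiplicative(X, rank=1)
assert np.linalg.norm(X - U @ V) < 1e-3           # exact fit recovered
```

On exactly rank-1 non-negative data the updates collapse to the planted factors almost immediately, illustrating the benign behavior the paper studies for gradient-based solvers.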
SP:37c8908c43beda4efc9db25216225f0106fe009c | [
"The authors describe a method for adversarially modifying a given (test) example that 1) still retains the correct label on the example, but 2) causes a model to make an incorrect prediction on it. The novelty of their proposed method is that their adversarial modifications are along a provided semantic axis (e.g.... | Deep neural networks (DNNs) have achieved great success in various applications due to their strong expressive power. However, recent studies have shown that DNNs are vulnerable to adversarial examples which are manipulated instances targeting to mislead DNNs to make incorrect predictions. Currently, most such adversar... | [
{
"affiliations": [],
"name": "PLES VIA"
}
] | [
{
"authors": [
"Yoshua Bengio",
"Grégoire Mesnil",
"Yann Dauphin",
"Salah Rifai"
],
"title": "Better mixing via deep representations",
"venue": "In ICML,",
"year": 2013
},
{
"authors": [
"Anand Bhattad",
"Min Jin Chong",
"Kaizhao Liang",
"B... | [
{
"heading": "1 INTRODUCTION",
"text": "Deep neural networks (DNNs) have demonstrated great successes in advancing the state-of-the-art performance of discriminative tasks (Krizhevsky et al., 2012; Goodfellow et al., 2016; He et al., 2016; Collobert & Weston, 2008; Deng et al., 2013; Silver et al., 2016). H... | 2,019 | SEMANTICADV: GENERATING ADVERSARIAL EXAM- |
SP:e84523133b0c393a7d673a3faef8cd2d6368830a | [
"The paper proposes to learn an energy based generative model using an ‘annealed’ denoising score matching objective. The main contribution of the paper is to show that denoising score matching can be trained on a range of noise scales concurrently using a small modification to the loss. Compared to approximate lik... | Energy-Based Models (EBMs) assign unnormalized log-probability to data samples. This functionality has a variety of applications, such as sample synthesis, data denoising, sample restoration, outlier detection, Bayesian reasoning, and many more. But training of EBMs using standard maximum likelihood is extremely slow b... | [] | [
{
"authors": [
"Shane Barratt",
"Rishi Sharma"
],
"title": "A note on the inception score",
"venue": "arXiv preprint arXiv:1801.01973,",
"year": 2018
},
{
"authors": [
"Jens Behrmann",
"Will Grathwohl",
"Ricky T.Q. Chen",
"David Duvenaud",
"Jörn-... | [
{
"heading": "1 INTRODUCTION AND MOTIVATION",
"text": "Treating data as stochastic samples from a probability distribution and developing models that can learn such distributions is at the core for solving a large variety of application problems, such as error correction/denoising (Vincent et al., 2010), ou... | 2,019 | null |
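The denoising-score-matching objective underlying the training described above can be sketched for a single noise scale sigma; the multiscale weighting across a range of noise levels that the paper proposes is not reproduced here:

```python
import numpy as np

def dsm_loss(score_fn, x, sigma, rng=None):
    """Denoising score matching at one noise scale: perturb x with N(0, sigma^2)
    noise and regress the model's score toward the score of the Gaussian
    smoothing kernel, -noise / sigma^2."""
    rng = rng or np.random.default_rng(0)
    noise = sigma * rng.standard_normal(x.shape)
    target = -noise / sigma**2
    diff = score_fn(x + noise) - target
    return 0.5 * np.mean(np.sum(diff**2, axis=-1))

# For data concentrated at 0, the true smoothed score is -x / sigma^2,
# so that score function achieves zero loss.
x = np.zeros((16, 2))
loss = dsm_loss(lambda z: -z / 0.5**2, x, sigma=0.5)
assert np.isclose(loss, 0.0)
```

In an EBM the score would be the (negative) gradient of the energy, so this loss trains the energy function without computing a partition function.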
SP:e958fbb0b004f454b79944ca72958254087147d4 | [
"This paper proposes stable GradientLess Descent (GLD) algorithms that do not rely on a gradient estimate. Based on the low-rank assumption on P_A, the iteration complexity is poly-logarithmically dependent on dimensionality. The theoretical analysis of the main results is based on a geometric perspective, which is i... | Zeroth-order optimization is the process of minimizing an objective f(x), given oracle access to evaluations at adaptively chosen inputs x. In this paper, we present two simple yet powerful GradientLess Descent (GLD) algorithms that do not rely on an underlying gradient estimate and are numerically stable. We analyze o... | [
{
"affiliations": [],
"name": "Daniel Golovin"
},
{
"affiliations": [],
"name": "John Karro"
},
{
"affiliations": [],
"name": "Greg Kochanski"
},
{
"affiliations": [],
"name": "Chansoo Lee"
},
{
"affiliations": [],
"name": "Xingyou Song"
},
{
"affiliat... | [
{
"authors": [
"Kenneth J Arrow",
"Alain C Enthoven"
],
"title": "Quasi-concave programming",
"venue": "Econometrica: Journal of the Econometric Society,",
"year": 1961
},
{
"authors": [
"Anne Auger",
"Nikolaus Hansen"
],
"title": "A restart cma evolution ... | [
{
"heading": "1 INTRODUCTION",
"text": "We consider the problem of zeroth-order optimization (also known as gradient-free optimization, or bandit optimization), where our goal is to minimize an objective function f : Rn → R with as few evaluations of f(x) as possible. For many practical and interesting obje... | 2,021 | GRADIENTLESS DESCENT: HIGH-DIMENSIONAL ZEROTH-ORDER OPTIMIZATION |
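The gradient-free search the abstract describes can be sketched as sampling one candidate per geometrically spaced radius and keeping the best point seen. This is a minimal sketch, not the paper's exact algorithm or its condition-number-dependent radius schedule:

```python
import numpy as np

def gld_search(f, x0, max_radius=1.0, min_radius=1e-3, iters=100, rng=None):
    """GradientLess Descent sketch: at each step, draw one candidate on a sphere
    of each geometrically spaced radius; move only if the objective improves.
    No gradient estimate is formed anywhere."""
    rng = rng or np.random.default_rng(0)
    radii, r = [], max_radius
    while r >= min_radius:
        radii.append(r)
        r /= 2.0
    x, fx = np.asarray(x0, float), f(x0)
    for _ in range(iters):
        for r in radii:
            u = rng.standard_normal(len(x))
            cand = x + r * u / np.linalg.norm(u)
            fc = f(cand)
            if fc < fx:
                x, fx = cand, fc
    return x, fx

f = lambda z: float(np.sum(z**2))
x, fx = gld_search(f, np.array([3.0, -2.0]))
assert fx < f(np.array([3.0, -2.0]))   # monotone improvement over the start
```

The geometric sweep of radii is what removes the need for a step-size schedule: some radius in the sweep is always of the right order for the current distance to the optimum.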
SP:9d2476df24b81661dc5ad76b13c8fd5fd1653381 | [
"This paper looks at privacy concerns regarding data for a specific model before and after a single update. It discusses the privacy concerns thoroughly and looks at language modeling as a representative task. They find that there are plenty of cases, namely when the composition of the sequences involves low frequency... | To continuously improve quality and reflect changes in data, machine learning-based services have to regularly re-train and update their core models. In the setting of language models, we show that a comparative analysis of model snapshots before and after an update can reveal a surprising amount of detailed information... | [
{
"authors": [
"Martin Abadi",
"Andy Chu",
"Ian Goodfellow",
"H. Brendan McMahan",
"Ilya Mironov",
"Kunal Talwar",
"Li Zhang"
],
"title": "Deep learning with differential privacy",
"venue": "In 23rd ACM SIGSAC Conference on Computer and Communications Securi... | [
{
"heading": "1 INTRODUCTION",
"text": "Over the last few years, deep learning has made sufficient progress to be integrated into intelligent, user-facing systems, which means that machine learning models are now part of the regular software development lifecycle. As part of this move towards concrete produ... | 2,019 | null |
SP:044d99499c4a9cb383f5e39a28fc7ccb700040d1 | [
"The paper proposes an ensemble method for reinforcement learning in which the policy updates are modulated with a loss which encourages diversity among all experienced policies. It is a combination of SAC, normalizing flow policies, and an approach to diversity considered by Hong et al. (2018). The work seems rath... | In reinforcement learning, robotic control tasks are often useful for understanding how agents perform in environments with deceptive rewards where the agent can easily become trapped into suboptimal solutions. One way to avoid these local optima is to use a population of agents to ensure coverage of the policy space (... | [] | [
{
"authors": [
"Marc G Bellemare",
"Yavar Naddaf",
"Joel Veness",
"Michael Bowling"
],
"title": "The arcade learning environment: An evaluation platform for general agents",
"venue": "Journal of Artificial Intelligence Research,",
"year": 2013
},
{
"authors": [
... | [
{
"heading": null,
"text": "In reinforcement learning, robotic control tasks are often useful for understanding how agents perform in environments with deceptive rewards where the agent can easily become trapped into suboptimal solutions. One way to avoid these local optima is to use a population of agents ... | 2,019 | null |
SP:e4f5ca770474ba98dc7643522ea6435f0586c292 | [
"This paper proposes an extension to deterministic autoencoders. Motivated by VAEs, the authors propose RAEs, which replace the noise injection in the encoders of VAEs with an explicit regularization term on the latent representations. As a result, the model becomes a deterministic autoencoder with an L_2 regulariz... | Variational Autoencoders (VAEs) provide a theoretically-backed and popular framework for deep generative models. However, learning a VAE from data poses still unanswered theoretical questions and considerable practical challenges. In this work, we propose an alternative framework for generative modeling that is simpler... | [
{
"affiliations": [],
"name": "DETERMINISTIC AUTOENCODERS"
},
{
"affiliations": [],
"name": "Partha Ghosh"
},
{
"affiliations": [],
"name": "Mehdi S. M. Sajjadi"
},
{
"affiliations": [],
"name": "Antonio Vergari"
},
{
"affiliations": [],
"name": "Michael Black... | [
{
"authors": [
"Alexander Alemi",
"Ben Poole",
"Ian Fischer",
"Joshua Dillon",
"Rif A Saurous",
"Kevin Murphy"
],
"title": "Fixing a broken ELBO",
"venue": "In ICML,",
"year": 2018
},
{
"authors": [
"Guozhong An"
],
"title": "The effects ... | [
{
"heading": "1 INTRODUCTION",
"text": "Generative models lie at the core of machine learning. By capturing the mechanisms behind the data generation process, one can reason about data probabilistically, access and traverse the lowdimensional manifold the data is assumed to live on, and ultimately generate ... | 2,020 | null |
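The alternative objective the record above hints at — reconstruction plus an explicit L2 penalty on deterministic latents plus decoder regularization in place of the VAE's noise injection — can be sketched as follows (the coefficient names `beta` and `lam` are assumptions):

```python
import numpy as np

def rae_loss(x, z, x_hat, dec_weights, beta=1e-2, lam=1e-4):
    """Regularized (deterministic) autoencoder objective sketched from the
    abstract: MSE reconstruction + beta * ||z||^2 on the deterministic latents
    + lam * decoder weight decay."""
    rec = np.mean((x - x_hat) ** 2)
    z_reg = beta * np.mean(np.sum(z ** 2, axis=1))
    dec_reg = lam * sum(np.sum(w ** 2) for w in dec_weights)
    return rec + z_reg + dec_reg

x = np.ones((4, 3)); x_hat = np.ones((4, 3)); z = np.zeros((4, 2))
assert rae_loss(x, z, x_hat, [np.zeros((2, 3))]) == 0.0  # perfect, fully regularized fit
```

Since the model is deterministic, sampling at test time requires fitting a density (e.g. a Gaussian) to the latents afterwards, which is the ex-post step such frameworks typically add.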
SP:7cd001a35175d8565c046093dcf070ba7fa988d6 | [
"This paper proposes using the features learned through Contrastive Predictive Coding as a means for reward shaping. Specifically, they propose to cluster the embedding and use the clusters to provide feedback to the agent by applying a positive reward when the agent enters the goal cluster. In more complex domains... | While recent progress in deep reinforcement learning has enabled robots to learn complex behaviors, tasks with long horizons and sparse rewards remain an ongoing challenge. In this work, we propose an effective reward shaping method through predictive coding to tackle sparse reward problems. By learning predictive repr... | [
{
"affiliations": [],
"name": "SPARSE REWARDS"
}
] | [
{
"authors": [
"Rishabh Agarwal",
"Chen Liang",
"Dale Schuurmans",
"Mohammad Norouzi"
],
"title": "Learning to generalize from sparse and underspecified rewards",
"venue": "In International Conference on Machine Learning,",
"year": 2019
},
{
"authors": [
"Ma... | [
{
"heading": "1 INTRODUCTION",
"text": "Recent progress in deep reinforcement learning (DRL) has enabled robots to learn and execute complex tasks, ranging from game playing (Jaderberg et al., 2018; OpenAI, 2019), robotic manipulations (Andrychowicz et al., 2017; Haarnoja et al., 2018), to navigation (Zhang... | 2,019 | null |
SP:1e4d48aca131f5ff12775ba51dd1176397038d59 | [
"This paper studies the problem of exploration in reinforcement learning. The key idea is to learn a goal-conditioned agent and do exploration by selecting goals at the frontier of previously visited states. This frontier is estimated using an extension of prior work (Pong 2019). The method is evaluated on two con... | In many reinforcement learning settings, rewards which are extrinsically available to the learning agent are too sparse to train a suitable policy. Beside reward shaping which requires human expertise, utilizing better exploration strategies helps to circumvent the problem of policy training with sparse rewards. In thi... | [] | [
{
"authors": [
"Joshua Achiam",
"Shankar Sastry"
],
"title": "Surprise-based intrinsic motivation for deep reinforcement learning",
"venue": "arXiv preprint arXiv:1703.01732,",
"year": 2017
},
{
"authors": [
"Joshua Achiam",
"Harrison Edwards",
"Dario Amodei... | [
{
"heading": null,
"text": "1 INTRODUCTION\nReinforcement Learning (RL) is based on performing exploratory actions in a trial-and-error manner and reinforcing those actions that result in superior reward outcomes. Exploration plays an important role in solving a given sequential decision-making problem. A R... | 2,019 | SKEW-EXPLORE: LEARN FASTER IN CONTINUOUS SPACES WITH SPARSE REWARDS |
SP:9043128647ca5b26b38c11af6fddf166e012a390 | [
"This paper presents a novel meta reinforcement learning algorithm capable of meta-generalizing to unseen tasks. They make use of a learned objective function in combination with a DDPG-style update. Results are presented on different combinations of meta-training and meta-testing on lunar, half cheetah, and hop... | Biological evolution has distilled the experiences of many learners into the general learning algorithms of humans. Our novel meta reinforcement learning algorithm MetaGenRL is inspired by this process. MetaGenRL distills the experiences of many complex agents to meta-learn a low-complexity neural objective function th... | [
{
"affiliations": [],
"name": "LEARNED OBJECTIVES"
},
{
"affiliations": [],
"name": "Louis Kirsch"
},
{
"affiliations": [],
"name": "Sjoerd van Steenkiste"
},
{
"affiliations": [],
"name": "Jürgen Schmidhuber"
}
] | [
{
"authors": [
"Ferran Alet",
"Martin F Schneider",
"Tomas Lozano-Perez",
"Leslie Pack Kaelbling"
],
"title": "Meta-learning curiosity algorithms",
"venue": "In International Conference on Learning Representations,",
"year": 2020
},
{
"authors": [
"Marcin An... | [
{
"heading": "1 INTRODUCTION",
"text": "The process of evolution has equipped humans with incredibly general learning algorithms. They enable us to solve a wide range of problems, even in the absence of a large number of related prior experiences. The algorithms that give rise to these capabilities are the ... | 2,020 | null |
SP:f48d609519e10cdf6de5dd0341edd5544d96402c | [
"The paper examines the common practice of performing model selection by choosing the model that maximizes validation accuracy. In a setting where there are multiple tasks, the average validation error hides performance on individual tasks, which may be relevant. The paper casts multi-class image classification as ... | The validation curve is widely used for model selection and hyper-parameter search with the curve usually summarized over all the training tasks. However, this summarization tends to lose the intricacies of the per-task curves and it isn’t able to reflect if all the tasks are at their validation optimum even if the sum... | [] | [
{
"authors": [
"Guillaume Alain",
"Yoshua Bengio"
],
"title": "Understanding intermediate layers using linear classifier probes",
"venue": "International Conference on Learning Representations,",
"year": 2016
},
{
"authors": [
"Hadrien Bertrand",
"Mohammad Hashir"... | [
{
"heading": null,
"text": "The validation curve is widely used for model selection and hyper-parameter search with the curve usually summarized over all the training tasks. However, this summarization tends to lose the intricacies of the per-task curves and it isn’t able to reflect if all the tasks are at ... | 2,019 | null |
SP:67c44f33dff59e4d218f753fdbc6296da62cdf62 | [
"This paper compares SGD and SVRG (as a representative variance reduced method) to explore tradeoffs. Although the computational complexity vs overall convergence performance tradeoff is well-known at this point, an interesting new perspective is the comparison in regions of interpolation (where SGD gradient varian... | Stochastic gradient descent (SGD), which trades off noisy gradient updates for computational efficiency, is the de-facto optimization algorithm to solve largescale machine learning problems. SGD can make rapid learning progress by performing updates using subsampled training data, but the noisy updates also lead to slo... | [
{
"affiliations": [],
"name": "A NON-ASYMPTOTIC"
}
] | [
{
"authors": [
"Zeyuan Allen-Zhu"
],
"title": "Katyusha: The first direct acceleration of stochastic gradient methods",
"venue": "The Journal of Machine Learning Research,",
"year": 2017
},
{
"authors": [
"Mikhail Belkin",
"Daniel Hsu",
"Siyuan Ma",
"Soumik ... | [
{
"heading": "1 INTRODUCTION",
"text": "Many large-scale machine learning problems, especially in deep learning, are formulated as minimizing the sum of loss functions on millions of training examples (Krizhevsky et al., 2012; Devlin et al., 2018). Computing exact gradient over the entire training set is in... | 2,019 | null |
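The variance-reduced update of SVRG, the representative method compared against SGD above, uses the corrected gradient g = grad_i(x) - grad_i(x_snap) + full_grad(x_snap); a one-dimensional sketch:

```python
import numpy as np

def svrg_step(x, x_snap, full_grad_snap, grad_i, lr):
    """One SVRG update with the variance-reduced gradient estimate
    g = grad_i(x) - grad_i(x_snap) + full_grad(x_snap)."""
    g = grad_i(x) - grad_i(x_snap) + full_grad_snap
    return x - lr * g

# Least squares on two samples: f_i(x) = 0.5 * (a_i * x - b_i)^2.
a = np.array([1.0, 2.0]); b = np.array([1.0, 4.0])
full_grad = lambda x: np.mean(a * (a * x - b))
gi = lambda i: (lambda x: a[i] * (a[i] * x - b[i]))

x_snap = 0.0
x = svrg_step(x_snap, x_snap, full_grad(x_snap), gi(0), lr=0.1)
# At the snapshot point the estimate equals the full gradient (zero variance),
# regardless of which sample i was drawn.
assert np.isclose(x, x_snap - 0.1 * full_grad(x_snap))
```

Away from the snapshot the correction term shrinks the variance of the stochastic gradient, which is exactly the property that vanishes under interpolation, where plain SGD's gradient noise is already small.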
SP:6022b52e1e160bd034df1a7c71c6ca163bcf4dc0 | [
"This paper proposes a novel form of surprise-minimizing intrinsic reward signal that leads to interesting behavior in the absence of an external reward signal. The proposed approach encourages an agent to visit states with high probability / density under a parametric marginal state distribution that is learned as... | All living organisms struggle against the forces of nature to carve out niches where they can maintain relative stasis. We propose that such a search for order amidst chaos might offer a unifying principle for the emergence of useful behaviors in artificial agents. We formalize this idea into an unsupervised reinforcem... | [] | [
{
"authors": [
"Joshua Achiam",
"Shankar Sastry"
],
"title": "Surprise-based intrinsic motivation for deep reinforcement learning",
"venue": "arXiv preprint arXiv:1703.01732,",
"year": 2017
},
{
"authors": [
"Yusuf Aytar",
"Tobias Pfaff",
"David Budden",
... | [
{
"heading": "1 INTRODUCTION",
"text": "The general struggle for existence of animate beings is not a struggle for raw materials, nor for energy, but a struggle for negative entropy.\n(Ludwig Boltzmann, 1886)\nAll living organisms carve out environmental niches within which they can maintain relative predic... | 2,019 | null |
SP:8bdeb36997d6699e48511d9abac87df8c14bd087 | [
"In this paper, a tensor decomposition method is studied for link prediction problems. The model is based on Tucker decomposition but the core tensor is decomposed as CP decomposition so that it can be seen as an interpolation between Tucker and CP. The performance is evaluated with several NLP data sets (e.g., sub... | The leading approaches to tensor completion and link prediction are based on the canonical polyadic (CP) decomposition of tensors. While these approaches were originally motivated by low rank approximations, the best performances are usually obtained for ranks as high as permitted by computation constraints. For large ... | [] | [
{
"authors": [
"Ivana Balažević",
"Carl Allen",
"Timothy Hospedales"
],
"title": "Multi-relational Poincaré graph embeddings",
"venue": "arXiv preprint arXiv:1905.09791,",
"year": 2019
},
{
"authors": [
"Ivana Balažević",
"Carl Allen",
"Timothy M Ho... | [
{
"heading": "1 INTRODUCTION",
"text": "The problems of representation learning and link prediction in multi-relational data can be formulated as a binary tensor completion problem, where the tensor is obtained by stacking the adjacency matrices of every relations between entities. This tensor can then be i... | 2,019 | null |
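The CP decomposition baseline discussed above scores an entry (i, j, k) of the relation tensor as a trilinear product of factor rows; a minimal sketch (the paper's interpolation between CP and Tucker is not reproduced):

```python
import numpy as np

def cp_score(A, B, C, i, j, k):
    """CP tensor-completion score: X[i, j, k] ~ sum_r A[i, r] * B[j, r] * C[k, r],
    with one factor matrix per mode (entities, relations, entities)."""
    return float(np.sum(A[i] * B[j] * C[k]))

rng = np.random.default_rng(0)
A, B, C = (rng.standard_normal((5, 3)) for _ in range(3))
X = np.einsum('ir,jr,kr->ijk', A, B, C)   # the full rank-3 tensor
assert np.isclose(cp_score(A, B, C, 1, 2, 4), X[1, 2, 4])
```

Link prediction then amounts to ranking candidate k (or i) by this score for a fixed (i, j), which is why the per-mode rank directly controls both memory and expressiveness.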
SP:62a75399aa97a61432385cf1dffabb674741a18a | [
"This paper proposes removing all bias terms in denoising networks to avoid overfitting when different noise levels exist. Through analysis, the paper concludes that the dimensions of the subspaces of image features change adaptively according to the noise level. An interesting result is that the MSE is proportiona... | We study the generalization properties of deep convolutional neural networks for image denoising in the presence of varying noise levels. We provide extensive empirical evidence that current state-of-the-art architectures systematically overfit to the noise levels in the training set, performing very poorly at new nois... | [
{
"affiliations": [],
"name": "Sreyas Mohan"
},
{
"affiliations": [],
"name": "Zahra Kadkhodaie"
},
{
"affiliations": [],
"name": "Eero P. Simoncelli"
},
{
"affiliations": [],
"name": "Carlos Fernandez-Granda"
}
] | [
{
"authors": [
"S Grace Chang",
"Bin Yu",
"Martin Vetterli"
],
"title": "Adaptive wavelet thresholding for image denoising and compression",
"venue": "IEEE Trans. Image Processing,",
"year": 2000
},
{
"authors": [
"Yunjin Chen",
"Thomas Pock"
],
"tit... | [
{
"heading": "1 INTRODUCTION AND CONTRIBUTIONS",
"text": "The problem of denoising consists of recovering a signal from measurements corrupted by noise, and is a canonical application of statistical estimation that has been studied since the 1950’s. Achieving high-quality denoising results requires (at leas... | 2,020 | ROBUST AND INTERPRETABLE BLIND IMAGE DENOISING VIA BIAS-FREE CONVOLUTIONAL NEURAL NETWORKS |
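A property underlying the bias-free networks studied above: with all additive constants removed, a ReLU network is positively homogeneous, so rescaling the noisy input rescales the output, which is what lets the denoiser extrapolate across noise levels. A minimal sketch:

```python
import numpy as np

def bias_free_relu_net(x, weights):
    """A bias-free ReLU network: linear layers with no additive constants.
    The resulting map is positively homogeneous: f(a * x) = a * f(x) for a > 0."""
    h = x
    for W in weights[:-1]:
        h = np.maximum(W @ h, 0.0)   # ReLU, no bias term anywhere
    return weights[-1] @ h

rng = np.random.default_rng(0)
Ws = [rng.standard_normal((8, 4)), rng.standard_normal((4, 8))]
x = rng.standard_normal(4)
assert np.allclose(bias_free_relu_net(3.0 * x, Ws), 3.0 * bias_free_relu_net(x, Ws))
```

Homogeneity holds because ReLU commutes with positive scaling and each layer is purely linear; adding any bias term breaks this identity and ties the network to the noise levels seen in training.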
SP:35407fdffbf982a97312ef16673be781d593ff22 | [
"This paper proposes a method called attentive feature distillation and selection (AFDS) to improve the performance of transfer learning for CNNs. The authors argue that the regularization should constrain the proximity of feature maps, instead of pre-trained model weights. Specifically, the authors propose two m... | Deep convolutional neural networks are now widely deployed in vision applications, but a limited size of training data can restrict their task performance. Transfer learning offers the chance for CNNs to learn with limited data samples by transferring knowledge from models pretrained on large datasets. Blindly transfer... | [
{
"affiliations": [],
"name": "Kafeng Wang"
},
{
"affiliations": [],
"name": "Xitong Gao"
},
{
"affiliations": [],
"name": "Yiren Zhao"
},
{
"affiliations": [],
"name": "Xingjian Li"
},
{
"affiliations": [],
"name": "Dejing Dou"
},
{
"affiliations": []... | [
{
"authors": [
"Jose M Alvarez",
"Mathieu Salzmann"
],
"title": "Learning the number of neurons in deep networks",
"venue": "Advances in Neural Information Processing Systems (NIPS),",
"year": 2016
},
{
"authors": [
"Hossein Azizpour",
"Ali Sharif Razavian",
... | [
{
"heading": "1 Introduction",
"text": "Despite recent successes of CNNs achieving state-of-the-art performance in vision applications (Tan & Le, 2019; Cai & Vasconcelos, 2018; Zhao et al., 2018; Ren et al., 2015), there are two major shortcomings limiting their deployments in real life. First, training CNN... | 2,020 | Pay Attention to Features, Transfer Learn Faster CNNs |
SP:d510a4587befa21d3f6b151d437e9d5272ce03a2 | [
"This paper proposed BOGCN-NAS, which encodes the current architecture with a graph convolutional network (GCN) and uses the features extracted from the GCN as input to perform a Bayesian regression (predicting bias and variance, see Eqn. 5-6). They use Bayesian Optimization to pick the most promising next model with Expect... | Neural Architecture Search (NAS) has shown great potentials in finding a better neural network design than human design. Sample-based NAS is the most fundamental method aiming at exploring the search space and evaluating the most promising architecture. However, few works have focused on improving the sampling efficien... | []
{
"authors": [
"Y. Akimoto",
"S. Shirakawa",
"N. Yoshinari",
"K. Uchida",
"S. Saito",
"K. Nishida"
],
"title": "Adaptive stochastic natural gradient method for one-shot neural architecture search",
"venue": "In International Conference on Machine Learning,",
"... | [
{
"heading": "1 INTRODUCTION",
"text": "Recently Neural Architecture Search (NAS) has aroused a surge of interest by its potentials of freeing the researchers from tedious and time-consuming architecture tuning for each new task and dataset. Specifically, NAS has already shown some competitive results compa... | 2,019 | null |
SP:f719db5d0209fd670518cf1e28a66dfcd9de0a8c | [
"Augments the loss of video generation systems with a discriminator that considers multiple frames (as opposed to single frames independently) and a new objective termed ping-pong loss, which is introduced in order to deal with “artifacts” that appear in video generation. The paper also proposes a few automatic metr... | We focus on temporal self-supervision for GAN-based video generation tasks. While adversarial training successfully yields generative models for a variety of areas, temporal relationship in the generated data is much less explored. This is crucial for sequential generation tasks, e.g. video super-resolution and unpair... | []
{
"authors": [
"Aayush Bansal",
"Shugao Ma",
"Deva Ramanan",
"Yaser Sheikh"
],
"title": "Recycle-gan: Unsupervised video retargeting",
"venue": "In The European Conference on Computer Vision (ECCV),",
"year": 2018
},
{
"authors": [
"Yochai Blau",
"Tome... | [
{
"heading": "1 INTRODUCTION",
"text": "Generative adversarial models (GANs) have been extremely successful at learning complex distributions such as natural images (Zhu et al., 2017; Isola et al., 2017). However, for sequence generation, directly applying GANs without carefully engineered constraints typic... | 2,019 | null |
SP:5c78aac08d907ff07205fe28bf9fa4385c58f40d | [
"This paper proposes a new method for training certifiably robust models that achieves better results than the previous SOTA results by IBP, with a moderate increase in training time. It uses a CROWN-based bound in the warm up phase of IBP, which serves as a better initialization for the later phase of IBP and lead... | Training neural networks with verifiable robustness guarantees is challenging. Several existing approaches utilize linear relaxation based neural network output bounds under perturbation, but they can slow down training by a factor of hundreds depending on the underlying network architectures. Meanwhile, interval bound... | [
{
"affiliations": [],
"name": "Huan Zhang"
},
{
"affiliations": [],
"name": "Hongge Chen"
},
{
"affiliations": [],
"name": "Chaowei Xiao"
},
{
"affiliations": [],
"name": "Sven Gowal"
},
{
"affiliations": [],
"name": "Robert Stanforth"
},
{
"affiliatio... | [
{
"authors": [
"Anish Athalye",
"Nicholas Carlini",
"David Wagner"
],
"title": "Obfuscated gradients give a false sense of security: Circumventing defenses to adversarial examples",
"venue": "International Conference on Machine Learning (ICML),",
"year": 2018
},
{
"auth... | [
{
"heading": "1 INTRODUCTION",
"text": "The success of deep neural networks (DNNs) has motivated their deployment in some safety-critical environments, such as autonomous driving and facial recognition systems. Applications in these areas make understanding the robustness and security of deep neural network... | 2,019 | VERIFIABLY ROBUST NEURAL NETWORKS |
SP:687a3382a219565eb3eb85b707017eb582439565 | [
"Paper summary: This paper argues that reducing the reliance of neural networks on high-frequency components of images could help robustness against adversarial examples. To attain this goal, the authors propose a new regularization scheme that encourages convolutional kernels to be smoother. The authors augment s... | Recent research has shown that CNNs are often overly sensitive to high-frequency textural patterns. Inspired by the intuition that humans tend to be more sensitive to lower-frequency (larger-scale) patterns, we design a regularization scheme that penalizes large differences between adjacent components within each convo... | [] | [
{
"authors": [
"Naveed Akhtar",
"Ajmal Mian"
],
"title": "Threat of adversarial attacks on deep learning in computer vision: A survey",
"venue": "IEEE Access,",
"year": 2018
},
{
"authors": [
"Rima Alaifari",
"Giovanni S. Alberti",
"Tandri Gauksson"
],
... | [
{
"heading": null,
"text": "Recent research has shown that CNNs are often overly sensitive to high-frequency textural patterns. Inspired by the intuition that humans tend to be more sensitive to lower-frequency (larger-scale) patterns, we design a regularization scheme that penalizes large differences betwe... | 2,019 | SMOOTH KERNELS IMPROVE ADVERSARIAL ROBUST- |
SP:b9b8e3efa69342c90b91dcb29bda1e2f8127581e | [
"This paper proposes a neural topic model that aims to discover topics by minimizing a version of the PLSA loss. According to PLSA, a document is represented as a mixture of topics, while a topic is a probability distribution over words, with documents and words assumed independent given topics. Thanks to this assumpt... | In this paper we present a model for unsupervised topic discovery in text corpora. The proposed model uses documents, words, and topics lookup table embedding as neural network model parameters to build probabilities of words given topics, and probabilities of topics given documents. These probabilities are used to re... | []
{
"authors": [
"D.M. Blei",
"J.D. Lafferty"
],
"title": "Dynamic topic models",
"venue": "International Conference on Machine Learning (ICML), pp",
"year": 2006
},
{
"authors": [
"D.M. Blei",
"A.Y. Ng",
"M.I. Jordan"
],
"title": "Latent dirichlet all... | [
{
"heading": "1 INTRODUCTION",
"text": "Nowadays, with the digital era, electronic text corpora are ubiquitous. These corpora can be company emails, news groups articles, online journal articles, Wikipedia articles, video metadata (titles, descriptions, tags). These corpora can be very large, thus requiring... | 2,019 | DISCOVERING TOPICS WITH NEURAL TOPIC MODELS BUILT FROM PLSA LOSS |
SP:a396624adb04f88f4ba9d10a7968be1926b5d226 | [
"In this paper the authors propose an end-to-end policy for graph placement and partitioning of computational graphs produced \"under-the-hood\" by platforms like Tensorflow. As the sizes of the neural networks increase, using distributed deep learning is becoming more and more necessary. Primitives like the one su... | Runtime and scalability of large neural networks can be significantly affected by the placement of operations in their dataflow graphs on suitable devices. With increasingly complex neural network architectures and heterogeneous device characteristics, finding a reasonable placement is extremely challenging even for do... | [
{
"affiliations": [],
"name": "DATAFLOW GRAPHS"
}
] | [
{
"authors": [
"Ravichandra Addanki",
"Shaileshh Bojja Venkatakrishnan",
"Shreyan Gupta",
"Hongzi Mao",
"Mohammad Alizadeh"
],
"title": "Placeto: Learning generalizable device placement algorithms for distributed machine learning",
"venue": "CoRR, abs/1906.08879,",
... | [
{
"heading": "1 INTRODUCTION",
"text": "Neural networks have demonstrated remarkable scalability–improved performance can usually be achieved by training a larger model on a larger dataset (Hestness et al., 2017; Shazeer et al., 2017; Jozefowicz et al., 2016; Mahajan et al., 2018; Radford et al.). Training ... | 2,019 | null |
SP:caca11294236433df3e4a14e0ae263ef332372c9 | [
"The paper modifies existing classifier architectures and training objectives in order to minimize the \"conditional entropy bottleneck\" (CEB) objective, in an attempt to force the representation to maximize the information bottleneck objective. Consequently, the paper claims that this CEB model improves general test ac... | We demonstrate that the Conditional Entropy Bottleneck (CEB) can improve model robustness. CEB is an easy strategy to implement and works in tandem with data augmentation procedures. We report results of a large scale adversarial robustness study on CIFAR-10, as well as the IMAGENET-C Common Corruptions Benchmark, IMAG... | []
{
"authors": [
"Alexander A Alemi",
"Ian Fischer",
"Joshua V Dillon",
"Kevin Murphy"
],
"title": "Deep Variational Information Bottleneck",
"venue": "In International Conference on Learning Representations,",
"year": 2017
},
{
"authors": [
"Anish Athalye",
... | [
{
"heading": "1 INTRODUCTION",
"text": "We aim to make models that make meaningful predictions beyond the data they were trained on. Generally we want our models to be robust. Broadly, robustness is the ability of a model to continue making valid predictions as the distribution the model is tested on moves ... | 2,019 | CEB IMPROVES MODEL ROBUSTNESS |
SP:50073cbe6ab4b44b3c68f141542c1e81df0c5f61 | [
"This paper addresses the problem of representation learning for temporal graphs. That is, graphs where the topology can evolve over time. The contribution is a temporal graph attention (TGAT) layer that aims to exploit learned temporal dynamics of graph evolution in tasks such as node classification and link prediction... | Inductive representation learning on temporal graphs is an important step toward scalable machine learning on real-world dynamic networks. The evolving nature of temporal dynamic graphs requires handling new nodes as well as capturing temporal patterns. The node embeddings, which are now functions of time, should repres... | [
{
"affiliations": [],
"name": "Da Xu"
},
{
"affiliations": [],
"name": "Chuanwei Ruan"
},
{
"affiliations": [],
"name": "Evren Korpeoglu"
},
{
"affiliations": [],
"name": "Kannan Achan"
}
] | [
{
"authors": [
"Dzmitry Bahdanau",
"Kyunghyun Cho",
"Yoshua Bengio"
],
"title": "Neural machine translation by jointly learning to align and translate",
"venue": "arXiv preprint arXiv:1409.0473,",
"year": 2014
},
{
"authors": [
"Peter W Battaglia",
"Jessica ... | [
{
"heading": "1 INTRODUCTION",
"text": "The technique of learning lower-dimensional vector embeddings on graphs have been widely applied to graph analysis tasks (Perozzi et al., 2014; Tang et al., 2015; Wang et al., 2016) and deployed in industrial systems (Ying et al., 2018; Wang et al., 2018a). Most of th... | 2,020 | INDUCTIVE REPRESENTATION LEARNING ON TEMPORAL GRAPHS |
SP:8361d709b85b1c717e2cf742dab0145fae667660 | [
"This paper explores how graph neural networks can be applied to test satisfiability of 2QBF logical formulas. They show that a straightforward extension of a GNN-based SAT solver to 2QBF fails to outperform random chance, and argue that this is because proving either satisfiability or unsatisfiability of 2QBF requ... | It is valuable yet remains challenging to apply neural networks in logical reasoning tasks. Despite some successes witnessed in learning SAT (Boolean Satisfiability) solvers for propositional logic via Graph Neural Networks (GNN), there haven’t been any successes in learning solvers for more complex predicate logic. In... | [] | [
{
"authors": [
"Saeed Amizadeh",
"Sergiy Matusevych",
"Markus Weimer"
],
"title": "Learning to solve circuit-SAT: An unsupervised differentiable approach",
"venue": "In International Conference on Learning Representations,",
"year": 2019
},
{
"authors": [
"Hubie C... | [
{
"heading": "1 INTRODUCTION",
"text": "As deep learning makes astonishing achievements in the domain of image (He et al., 2016) and audio (Hannun et al., 2014) processing, natural languages (Vaswani et al., 2017), and discrete heuristics decisions in games (Silver et al., 2017), there is a profound interes... | 2,019 | GRAPH NEURAL NETWORKS FOR REASONING 2- QUANTIFIED BOOLEAN FORMULAS |
SP:2b8df72b380b893a55a82934afd558d75a3f42f2 | [
"Review: This paper considers the problem of dropping neurons from a neural network. In the case where this is done randomly, this corresponds to the widely studied dropout algorithm. If the goal is to become robust to randomly dropped neurons during evaluation, then it seems sufficient to just train with dropout... | The loss of a few neurons in a brain rarely results in any visible loss of function. However, the insight into what “few” means in this context is unclear. How many random neuron failures will it take to lead to a visible loss of function? In this paper, we address the fundamental question of the impact of the crash of... | [] | [
{
"authors": [
"D. Amodei",
"D. Hernandez"
],
"title": "AI and compute",
"venue": "Downloaded from https://blog.openai.com/ai-and-compute,",
"year": 2018
},
{
"authors": [
"D. Amodei",
"C. Olah",
"J. Steinhardt",
"P. Christiano",
"J. Schulman",
... | [
{
"heading": "1 INTRODUCTION",
"text": "Understanding the inner working of artificial neural networks (NNs) is currently one of the most pressing questions (20) in learning theory. As of now, neural networks are the backbone of the most successful machine learning solutions (37; 18). They are deployed in sa... | null | null |
SP:8d95af673099b1df7b837f583aa55678d67c5bd6 | [
"This paper presents an approach towards extending the capabilities of feedback alignment algorithms, which in essence replace the error backpropagation weights with random matrices. The authors propose a particular type of network where all weights are constrained to positive values except the first layers, a monot... | The family of feedback alignment (FA) algorithms aims to provide a more biologically motivated alternative to backpropagation (BP), by substituting the computations that are unrealistic to be implemented in physical brains. While FA algorithms have been shown to work well in practice, there is a lack of rigorous theory... | [
{
"affiliations": [],
"name": "Mathias Lechner"
}
] | [
{
"authors": [
"Pierre Baldi",
"Fernando Pineda"
],
"title": "Contrastive learning and neural oscillations",
"venue": "Neural Computation,",
"year": 1991
},
{
"authors": [
"Sergey Bartunov",
"Adam Santoro",
"Blake Richards",
"Luke Marris",
"Geoff... | [
{
"heading": "1 INTRODUCTION",
"text": "A key factor enabling the successes of Deep Learning is the backpropagation of error (BP) algorithm (Rumelhart et al., 1986). Since it has been introduced, BP has sparked several discussions on whether physical brains are realizing BP-like learning or not (Grossberg, ... | 2,020 | LEARNING REPRESENTATIONS FOR BINARY- CLASSIFICATION WITHOUT BACKPROPAGATION |
SP:0cfa52672cf34ffafece1171e48d6c344645dcf3 | [
"This paper investigates the impact of using a reduced precision (i.e., quantization) in different deep reinforcement learning (DRL) algorithms. It shows that overall, reducing the precision of the neural network in DRL algorithms from 32 bits to 16 or 8 bits doesn't have much effect on the quality of the learned p... | Recent work has shown that quantization can help reduce the memory, compute, and energy demands of deep neural networks without significantly harming their quality. However, whether these prior techniques, applied traditionally to imagebased models, work with the same efficacy to the sequential decision making process ... | [] | [
{
"authors": [
"Simon Alford",
"Ryan Robinett",
"Lauren Milechin",
"Jeremy Kepner"
],
"title": "Pruned and Structurally Sparse Neural Networks",
"venue": "arXiv e-prints, art",
"year": 2018
},
{
"authors": [
"Kai Arulkumaran",
"Marc Peter Deisenroth",
... | [
{
"heading": null,
"text": "Recent work has shown that quantization can help reduce the memory, compute, and energy demands of deep neural networks without significantly harming their quality. However, whether these prior techniques, applied traditionally to imagebased models, work with the same efficacy to... | 2,019 | QUANTIZED REINFORCEMENT LEARNING (QUARL) |
SP:8283eb652046558e12c67447dddebcb52ee9de94 | [
"The paper studies self-supervised learning from very few unlabeled images, down to the extreme case where only a single image is used for training. From the few/single image(s) available for training, a data set of the same size as some unmodified reference data set (ImageNet, Cifar-10/100) is generated through he... | We look critically at popular self-supervision techniques for learning deep convolutional neural networks without manual labels. We show that three different and representative methods, BiGAN, RotNet and DeepCluster, can learn the first few layers of a convolutional network from a single image as well as using millions... | [
{
"affiliations": [],
"name": "Yuki M. Asano"
},
{
"affiliations": [],
"name": "Christian Rupprecht"
},
{
"affiliations": [],
"name": "Andrea Vedaldi"
}
] | [
{
"authors": [
        "Pulkit Agrawal",
"Joao Carreira",
"Jitendra Malik"
],
"title": "Learning to see by moving",
"venue": "In Proc. ICCV, pp. 37–45",
"year": 2015
},
{
"authors": [
"R. Arandjelović",
"A. Zisserman"
],
"title": "Look, listen and... | [
{
"heading": "1 INTRODUCTION",
"text": "Despite tremendous progress in supervised learning, learning without external supervision remains difficult. Self-supervision has recently emerged as one of the most promising approaches to address this limitation. Self-supervision builds on the fact that convolutiona... | 2,020 | A CRITICAL ANALYSIS OF SELF-SUPERVISION, OR WHAT WE CAN LEARN FROM A SINGLE IMAGE |
SP:5abcf6f6bd3c0079e6f942f614949a3f566afed8 | [
"In this paper, the authors propose a method to perform architecture search on the number of channels in convolutional layers. The proposed method, called AutoSlim, is a one-shot approach based on previous work of Slimmable Networks [2,3]. The authors have tested the proposed methods on a variety of architectures o... | We study how to set the number of channels in a neural network to achieve better accuracy under constrained resources (e.g., FLOPs, latency, memory footprint or model size). A simple and one-shot approach, named AutoSlim, is presented. Instead of training many network samples and searching with reinforcement learning, ... | [] | [
{
"authors": [
"Gabriel Bender",
"Pieter-Jan Kindermans",
"Barret Zoph",
"Vijay Vasudevan",
"Quoc Le"
],
"title": "Understanding and simplifying one-shot architecture search",
"venue": "In International Conference on Machine Learning,",
"year": 2018
},
{
"au... | [
{
"heading": "1 INTRODUCTION",
"text": "The channel configuration (a.k.a.. filter numbers or channel numbers) of a neural network plays a critical role in its affordability on resource constrained platforms, such as mobile phones, wearables and Internet of Things (IoT) devices. The most common constraints (... | 2,019 | null |
SP:6c5368ae026fc1aaf92bdc208d90e4eec999575a | [
"This paper presents an end-to-end approach for clustering. The proposed model is called CNC. It simultaneously learns a data embedding that preserves data affinity using Siamese networks, and clusters data in the embedding space. The model is trained by minimizing a differentiable loss function that is derived from... | We introduce a novel end-to-end approach for learning to cluster in the absence of labeled examples. Our clustering objective is based on optimizing normalized cuts, a criterion which measures both intra-cluster similarity as well as inter-cluster dissimilarity. We define a differentiable loss function equivalent to th... | [
{
"authors": [
"Reid Andersen",
"Fan Chung",
"Kevin Lang"
],
"title": "Local graph partitioning using pagerank vectors",
"venue": "In FOCS,",
"year": 2006
},
{
"authors": [
"Mikhail Belkin",
"Partha Niyogi",
"Vikas Sindhwani"
],
"title": "Manif... | [
{
"heading": null,
"text": "We introduce a novel end-to-end approach for learning to cluster in the absence of labeled examples. Our clustering objective is based on optimizing normalized cuts, a criterion which measures both intra-cluster similarity as well as inter-cluster dissimilarity. We define a diffe... | 2,019 | OPTIMIZE EXPECTED NORMALIZED CUTS |
SP:76a052062e3e4bb707b24a8809c220c8ac1df83a | [
"This paper considers the \"weight transport problem\", which is the problem of ensuring that the feedforward weights $W_{ij}$ are the same as the feedback weights $W_{ji}$ in the spiking NN model of computation. This paper proposes a novel learning method for the feedback weights which depends on accurately estimati... | In artificial neural networks trained with gradient descent, the weights used for processing stimuli are also used during backward passes to calculate gradients. For the real brain to approximate gradients, gradient information would have to be propagated separately, such that one set of synaptic weights is used for pr... | [
{
"affiliations": [],
"name": "Jordan Guerguiev"
},
{
"affiliations": [],
"name": "Konrad P. Kording"
},
{
"affiliations": [],
"name": "Blake A. Richards"
}
] | [
{
"authors": [
"Mohamed Akrout",
"Collin Wilson",
"Peter C Humphreys",
"Timothy Lillicrap",
"Douglas Tweed"
],
"title": "Using weight mirrors to improve feedback alignment",
"venue": null,
      "year": 2019
},
{
"authors": [
"Joshua D Angrist",
"Jörn-... | [
{
"heading": "1 INTRODUCTION",
"text": "Any learning system that makes small changes to its parameters will only improve if the changes are correlated to the gradient of the loss function. Given that people and animals can also show clear behavioral improvements on specific tasks (Shadmehr et al., 2010), ho... | 2,020 | SPIKE-BASED CAUSAL INFERENCE FOR WEIGHT ALIGNMENT |
SP:941824acd2bae699174e6bed954e2938eb4bede1 | [
"This paper presents a voice conversion approach using GANs based on adaptive instance normalization (AdaIN). The authors give the mathematical formulation of the problem and provide the implementation of the so-called AdaGAN. Experiments are carried out on VCTK and the proposed AdaGAN is compared with StarGAN. T... | Voice Conversion (VC) is a task of converting perceived speaker identity from a source speaker to a particular target speaker. The earlier approaches in the literature primarily find a mapping between the given source-target speaker-pairs. Developing mapping techniques for many-to-many VC using non-parallel data, inclu... | [] | [
{
"authors": [
"Sercan Arik",
"Jitong Chen",
"Kainan Peng",
"Wei Ping",
"Yanqi Zhou"
],
"title": "Neural voice cloning with a few samples",
"venue": "In Advances in Neural Information Processing Systems,",
"year": 2018
},
{
"authors": [
"Chad Atalla",
... | [
{
"heading": "1 INTRODUCTION",
"text": "Language is the core of civilization, and speech is the most powerful and natural form of communication. Human voice mimicry has always been considered as one of the most difficult tasks since it involves understanding of the sophisticated human speech production mech... | 2,019 | null |
SP:25106cb1a3e5ead20e58b680eeb6aa361c07e1ff | [
"In ES the goal is to find a distribution pi_theta(x) such that the expected value of f(x) under this distribution is high. This can be optimized with REINFORCE or with more sophisticated methods based on the natural gradient. The functional form of pi_theta is almost always a Gaussian, but this isn't sufficiently ... | Evolutionary Strategies (ES) are a popular family of black-box zeroth-order optimization algorithms which rely on search distributions to efficiently optimize a large variety of objective functions. This paper investigates the potential benefits of using highly flexible search distributions in ES algorithms, in contras... | [] | [
{
"authors": [
"Aman Agarwal",
"Soumya Basu",
"Tobias Schnabel",
"Thorsten Joachims"
],
"title": "Effective evaluation using logged bandit feedback from multiple loggers",
"venue": "In Proceedings of the 23rd ACM SIGKDD International Conference on Knowledge Discovery and Data... | [
{
"heading": "1 INTRODUCTION",
"text": "We are interested in the global minimization of a black-box objective function, only accessible through a zeroth-order oracle. In many instances of this problem the objective is expensive to evaluate, which excludes brute force methods as a reasonable mean of optimiza... | 2,019 | null |
SP:d6218fdd95b48f3e69bf12e96f938cecde8ff7ab | [
"The paper proposes a ‘potential flow generator’ that can be seen as a regularizer for traditional GAN losses. It is based on the idea that samples flowing from one distribution to another should follow a minimum travel cost path. This regularization is expressed as an optimal transport problem with a squared Eucli... | We propose a potential flow generator with L2 optimal transport regularity, which can be easily integrated into a wide range of generative models including different versions of GANs and normalizing flow models. With only a slight augmentation to the original generator loss functions, our generator not only tries to tr... | [] | [
{
"authors": [
        "Luigi Ambrosio",
"Nicola Gigli",
"Giuseppe Savaré"
],
"title": "Gradient flows: in metric spaces and in the space of probability measures",
"venue": "Springer Science & Business Media,",
"year": 2008
},
{
"authors": [
"Martin Arjovsky",
... | [
{
"heading": "1 INTRODUCTION",
"text": "Many of the generative models, for example, generative adversarial networks (GANs) (Goodfellow et al., 2014; Arjovsky et al., 2017; Salimans et al., 2018) and normalizing flow models (Rezende & Mohamed, 2015; Kingma & Dhariwal, 2018; Chen et al., 2018), aim to find a ... | 2,019 | null |
SP:927a1f8069c0347c4d0a8b1b947533f1c508ba42 | [
"The main claim of this paper is that a simple strategy of randomization plus fast gradient sign method (FGSM) adversarial training yields robust neural networks. This is somewhat surprising because previous works indicate that FGSM is not a powerful attack compared to iterative versions of it like projected gradie... | Adversarial training, a method for learning robust deep networks, is typically assumed to be more expensive than traditional training due to the necessity of constructing adversarial examples via a first-order method like projected gradient descent (PGD). In this paper, we make the surprising discovery that it is possib... | [
{
"affiliations": [],
"name": "Eric Wong"
},
{
"affiliations": [],
"name": "Leslie Rice"
}
] | [
{
"authors": [
"Anish Athalye",
"Logan Engstrom",
"Andrew Ilyas",
"Kevin Kwok"
],
"title": "Synthesizing robust adversarial examples",
"venue": "arXiv preprint arXiv:1707.07397,",
"year": 2017
},
{
"authors": [
"Anish Athalye",
"Nicholas Carlini",
... | [
{
"heading": "1 INTRODUCTION",
"text": "Although deep network architectures continue to be successful in a wide range of applications, the problem of learning robust deep networks remains an active area of research. In particular, safety and security focused applications are concerned about robustness to ad... | 2,020 | null |
SP:eb8b8a0bae8d3f488caf70b6103ed3fd9631cb9f | [
"This paper introduces a better searching strategy in the context of automatic neural architecture search (NAS). Especially, they focus on improving the search strategy for previously proposed computationally effective weight sharing methods for NAS. Current search strategies for the weight sharing NAS methods eith... | Automatic neural architecture search techniques are becoming increasingly important in machine learning area. Especially, weight sharing methods have shown remarkable potentials on searching good network architectures with few computational resources. However, existing weight sharing methods mainly suffer limitations o... | [] | [
{
"authors": [
"Irwan Bello",
"Barret Zoph",
"Vijay Vasudevan",
"Quoc V Le"
],
"title": "Neural optimizer search with reinforcement learning",
"venue": "In Proceedings of the 34th International Conference on Machine Learning-Volume",
"year": 2017
},
{
"authors": [... | [
{
"heading": null,
"text": "Automatic neural architecture search techniques are becoming increasingly important in machine learning area. Especially, weight sharing methods have shown remarkable potentials on searching good network architectures with few computational resources. However, existing weight sha... | 2,019 | BETANAS: BALANCED TRAINING AND SELECTIVE DROP FOR NEURAL ARCHITECTURE SEARCH |
SP:1f95868a91ef213ebf3be6ca2a0f059e93b4be37 | [
"The paper proposes to use autoencoder for anomaly localization. The approach learns to project anomalous data on an autoencoder-learned manifold by using gradient descent on energy derived from the autoencoder's loss function. The proposed method is evaluated using the anomaly-localization dataset (Bergmann et al.... | Autoencoder reconstructions are widely used for the task of unsupervised anomaly localization. Indeed, an autoencoder trained on normal data is expected to only be able to reconstruct normal features of the data, allowing the segmentation of anomalous pixels in an image via a simple comparison between the image and its... | [
{
"affiliations": [],
"name": "David Dehaene"
},
{
"affiliations": [],
"name": "Oriel Frigo"
},
{
"affiliations": [],
"name": "Sébastien Combrexelle"
},
{
"affiliations": [],
    "name": "Pierre Eline"
}
] | [
{
"authors": [
"Fazil Altinel",
"Mete Ozay",
"Takayuki Okatani"
],
"title": "Deep structured energy-based image inpainting",
"venue": "24th International Conference on Pattern Recognition (ICPR),",
"year": 2018
},
{
"authors": [
"Jinwon An",
"Sungzoon Cho"
... | [
{
"heading": "1 INTRODUCTION",
"text": "Automating visual inspection on production lines with artificial intelligence has gained popularity and interest in recent years. Indeed, the analysis of images to segment potential manufacturing defects seems well suited to computer vision algorithms. However these s... | 2,020 | MAL DATA MANIFOLD FOR ANOMALY LOCALIZATION |
SP:cf0db5624fc03cd71e331202c16808174b4a9ae7 | [
"The paper proposes a type of recurrent neural network module called Long History Short-Term Memory (LH-STM) for longer-term video generation. This module can be used to replace ConvLSTMs in previously published video prediction models. It expands ConvLSTMs by adding a \"previous history\" term to the ConvLSTM equa... | While video prediction approaches have advanced considerably in recent years, learning to predict long-term future is challenging — ambiguous future or error propagation over time yield blurry predictions. To address this challenge, existing algorithms rely on extra supervision (e.g., action or object pose), motion flo... | [] | [
{
"authors": [
"Mohammad Babaeizadeh",
"Chelsea Finn",
"Dumitru Erhan",
"Roy H Campbell",
"Sergey Levine"
],
"title": "Stochastic variational video prediction",
"venue": "arXiv preprint arXiv:1710.11252,",
"year": 2017
},
{
"authors": [
"Wonmin Byeon",... | [
{
"heading": null,
"text": "While video prediction approaches have advanced considerably in recent years, learning to predict long-term future is challenging — ambiguous future or error propagation over time yield blurry predictions. To address this challenge, existing algorithms rely on extra supervision (... | 2,019 | null |
SP:69da1cecdf9fc25a9e6263943a5396b606cdcfef | [
"In this work, the authors show that the sequence of self-attention and feed-forward layers within a Transformer can be interpreted as an approximate numerical solution to a set of coupled ODEs. Based on this insight, the authors propose to replace the first-order Lie-Trotter splitting scheme by the more accurate, ... | The Transformer architecture is widely used in natural language processing. Despite its success, the design principle of the Transformer remains elusive. In this paper, we provide a novel perspective towards understanding the architecture: we show that the Transformer can be mathematically interpreted as a numerical Or... | [] | [
{
"authors": [
"Karim Ahmed",
"Nitish Shirish Keskar",
"Richard Socher"
],
"title": "Weighted transformer network for machine translation",
"venue": "arXiv preprint arXiv:1711.02132,",
"year": 2017
},
{
"authors": [
"Rami Al-Rfou",
"Dokook Choe",
"Noah... | [
{
"heading": "1 INTRODUCTION",
"text": "The Transformer is one of the most commonly used neural network architectures in natural language processing. Variants of the Transformer have achieved state-of-the-art performance in many tasks including language modeling (Dai et al., 2019; Al-Rfou et al., 2018) and ... | 2,019 | null |
SP:fc98effb95b87ad325f609c31b336c7dafd9ac30 | [
"This paper proposes a novel deep reinforcement learning algorithm at the intersection of model-based and model-free reinforcement learning: Risk Averse Value Expansion (RAVE). Overall, this work represents a significant but incremental step forwards for this \"hybrid\"-RL class of algorithms. However, the paper it... | Model-based Reinforcement Learning(RL) has shown great advantage in sampleefficiency, but suffers from poor asymptotic performance and high inference cost. A promising direction is to combine model-based reinforcement learning with model-free reinforcement learning, such as model-based value expansion(MVE). However, th... | [] | [
{
"authors": [
"Jacob Buckman",
"Danijar Hafner",
"George Tucker",
"Eugene Brevdo",
"Honglak Lee"
],
"title": "Sampleefficient reinforcement learning with stochastic ensemble value expansion",
"venue": "In Advances in Neural Information Processing Systems,",
"year":... | [
{
"heading": "1 INTRODUCTION",
"text": "In contrast to the tremendous progress made by model-free reinforcement learning algorithms in the domain of games(Mnih et al., 2015; Silver et al., 2017; Vinyals et al., 2019), poor sample efficiency has risen up as a great challenge to RL, especially when interactin... | 2,019 | null |
SP:bddd3d499426725b02d3d67ca0a7f8ef0c30e639 | [
"This paper presents a technique for encoding the high level “style” of pieces of symbolic music. The music is represented as a variant of the MIDI format. The main strategy is to condition a Music Transformer architecture on this global “style embedding”. Additionally, the Music Transformer model is also conditio... | We consider the problem of learning high-level controls over the global structure of sequence generation, particularly in the context of symbolic music generation with complex language models. In this work, we present the Transformer autoencoder, which aggregates encodings of the input data across time to obtain a glob... | [
{
"affiliations": [],
"name": "TRANSFORMER AUTOENCODERS"
}
] | [
{
"authors": [
"Pierre Baldi"
],
"title": "Autoencoders, unsupervised learning, and deep architectures",
"venue": "In Proceedings of ICML workshop on unsupervised and transfer learning,",
"year": 2012
},
{
"authors": [
"Samuel R Bowman",
"Luke Vilnis",
"Oriol Viny... | [
{
"heading": "1 INTRODUCTION",
"text": "There has been significant progress in generative modeling, particularly with respect to creative applications such as art and music (Oord et al., 2016; Engel et al., 2017b; Ha & Eck, 2017; Huang et al., 2019a; Payne, 2019). As the number of generative applications in... | 2,019 | null |
SP:e472738b53eec7967504021365ac5b4808028ec1 | [
"This paper introduces a corpus-based approach to build sentiment lexicon for Amharic. In order to save time and costs for the resource-limited language, the lexicon is generated from an Amharic news corpus by the following steps: manually preparing polarized seed words lists (strongly positive and strongly negativ... | Sentiment classification is an active research area with several applications including analysis of political opinions, classifying comments, movie reviews, news reviews and product reviews. To employ rule based sentiment classification, we require sentiment lexicons. However, manual construction of sentiment lexicon i... | [] | [
{
"authors": [
"D Alessia",
"Fernando Ferri",
"Patrizia Grifoni",
"Tiziana Guzzo"
],
"title": "Approaches, tools and applications for sentiment analysis implementation",
"venue": "International Journal of Computer Applications,",
"year": 2015
},
{
"authors": [
... | [
{
"heading": null,
"text": "keywords: Amharic Sentiment lexicon , Amharic Sentiment Classification , Seed words"
},
{
"heading": "1 INTRODUCTION",
"text": "Most of sentiment mining research papers are associated to English languages. Linguistic computational resources in languages other than Eng... | 2,019 | CORPUS BASED AMHARIC SENTIMENT LEXICON GENERA- TION |
SP:77d59e1e726172184249bdfdd81011617dc9c208 | [
"The paper proposes a quantum computer-based algorithm for semi-supervised least squared kernel SVM. This work builds upon LS-SVM of Rebentrost et al (2014b) which developed a quantum algorithm for the supervised version of the problem. While the main selling point of quantum LS-SVM is that it scales logarithmicall... | Quantum machine learning methods have the potential to facilitate learning using extremely large datasets. While the availability of data for training machine learning models is steadily increasing, oftentimes it is much easier to collect feature vectors that to obtain the corresponding labels. One of the approaches fo... | [] | [
{
"authors": [
"Scott Aaronson"
],
"title": "Read the fine print",
"venue": "Nature Physics,",
"year": 2015
},
{
"authors": [
"2019. Srinivasan Arunachalam",
"Ronald de Wolf"
],
"title": "A survey of quantum learning theory",
"venue": null,
"year": 2019
... | [
{
"heading": null,
"text": "Quantum machine learning methods have the potential to facilitate learning using extremely large datasets. While the availability of data for training machine learning models is steadily increasing, oftentimes it is much easier to collect feature vectors that to obtain the corres... | 2,019 | null |
SP:e58dc2d21175a62499405b7f4c3a03b135530838 | [
"This paper proposes to employ the likelihood of the latent representation of images as the optimization target in the Glow (Kingma and Dhariwal, 2018) framework. The authors argue that to optimize the ''proxy for image likelihood'' has two advantages: First, the landscapes of the surface are more smooth; Second, a... | Trained generative models have shown remarkable performance as priors for inverse problems in imaging. For example, Generative Adversarial Network priors permit recovery of test images from 5-10x fewer measurements than sparsity priors. Unfortunately, these models may be unable to represent any particular image because... | [] | [
{
"authors": [
"Tero Karras",
"Timo Aila",
"Samuli Laine",
"Jaakko Lehtinen"
],
"title": "Progressive growing of gans for improved quality, stability, and variation",
"venue": "arXiv preprint arXiv:1710.10196,",
"year": 2017
},
{
"authors": [
"Shervin Minaee... | [
{
"heading": "1 INTRODUCTION",
"text": "Generative deep neural networks have shown remarkable performance as natural signal priors in imaging inverse problems, such as denoising, inpainting, compressed sensing, blind deconvolution, and phase retrieval. These generative models can be trained from datasets co... | 2,019 | null |
SP:0d872fb4321f3a4a3fc61cf4d33b0c7e33f2d695 | [
"This paper presents deep symbolic regression (DSR), which uses a recurrent neural network to learn a distribution over mathematical expressions and uses policy gradient to train the RNN for generating desired expressions given a set of points. The RNN model is used to sample expressions from the learned distributi... | Discovering the underlying mathematical expressions describing a dataset is a core challenge for artificial intelligence. This is the problem of symbolic regression. Despite recent advances in training neural networks to solve complex tasks, deep learning approaches to symbolic regression are lacking. We propose a fram... | [] | [
{
"authors": [
"Thomas Bäck",
"David B Fogel",
"Zbigniew Michalewicz"
],
"title": "Evolutionary Computation 1: Basic Algorithms and Operators",
"venue": "CRC press,",
"year": 2018
},
{
"authors": [
"Irwan Bello",
"Barret Zoph",
"Vijay Vasudevan",
... | [
{
"heading": "1 INTRODUCTION",
"text": "Understanding the mathematical relationships among variables in a physical system is an integral component of the scientific process. Symbolic regression aims to identify these relationships by searching over the space of tractable mathematical expressions to best fit... | 2,019 | null |
SP:4706017e6f8b958c7d0825fed98b285ea2994b59 | [
"This paper proposes a new pointwise convolution layer, which is non-parametric and can be efficient thanks to the fast conventional transforms. Specifically, it could use either DCT or DHWT to do the transforming job and explores the optimal block structure to use this new kind of PC layer. Extensive experimental ... | Some conventional transforms such as Discrete Walsh-Hadamard Transform (DWHT) and Discrete Cosine Transform (DCT) have been widely used as feature extractors in image processing but rarely applied in neural networks. However, we found that these conventional transforms have the ability to capture the cross-channel corr... | [] | [
{
"authors": [
"Alfredo Canziani",
"Adam Paszke",
"Eugenio Culurciello"
],
"title": "An analysis of deep neural network models for practical applications",
"venue": "CoRR, abs/1605.07678,",
"year": 2016
},
{
"authors": [
"Matthieu Courbariaux",
"Yoshua Bengi... | [
{
"heading": "1 INTRODUCTION",
"text": "Large Convolutional Neural Networks (CNNs) (Krizhevsky et al., 2012; Simonyan & Zisserman, 2014; He et al., 2016; Szegedy et al., 2016b;a) and automatic Neural Architecture Search (NAS) based networks (Zoph et al., 2018; Liu et al., 2018; Real et al., 2018) have evolv... | 2,019 | null |
SP:63ad3be1dae7ede5c02a847304072c1cbc91b1cb | [
"This paper proposes to model various uncertainty measures in Graph Convolutional Networks (GCN) by Bayesian MC Dropout. Compared to existing Bayesian GCN methods, this work stands out in two aspects: 1) in terms of prediction, it considers multiple uncertainty measures including aleatoric, epistemic, vacuity and d... | Thanks to graph neural networks (GNNs), semi-supervised node classification has shown the state-of-the-art performance in graph data. However, GNNs have not considered different types of uncertainties associated with the class probabilities to minimize risk increasing misclassification under uncertainty in real life. I... | [] | [
{
"authors": [
"Clarence W De Silva"
],
"title": "Intelligent control: fuzzy logic applications",
"venue": "CRC press,",
"year": 2018
},
{
"authors": [
"Dhivya Eswaran",
"Stephan Günnemann",
"Christos Faloutsos"
],
"title": "The power of certainty: A diric... | [
{
"heading": "1 INTRODUCTION",
"text": "Inherent uncertainties introduced by different root causes have emerged as serious hurdles to find effective solutions for real world problems. Critical safety concerns have been brought due to lack of considering diverse causes of uncertainties, resulting in high ris... | 2,019 | null |