paper_id      string (length 43)
summaries     list
abstractText  string (length 98–40k)
authors       list
references    list
sections      list
year          int64 (range 1.98k–2.02k)
title         string (length 4–183)
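The column schema above can be checked programmatically. A minimal sketch in Python: the record values are abbreviated from the first row of the preview, and the `validate` helper with its field checks is illustrative, not part of the dataset itself.

```python
# One record following the schema above (string values abbreviated for illustration).
record = {
    "paper_id": "SP:9156d551adff4ed16ba1be79014188caefc901c7",  # "SP:" + 40 hex chars = 43
    "summaries": ["the paper proposes to learn a parametric form of ..."],
    "abstractText": "Adiabatic quantum computation is a form of computation ...",
    "authors": [{"affiliations": [], "name": "Eli Ovits"}],
    "references": [{"authors": ["Dorit Aharonov"], "title": "...", "year": 2008}],
    "sections": [{"heading": "1 INTRODUCTION", "text": "..."}],
    "year": 2021,
    "title": "FIDELITY-BASED DEEP ADIABATIC SCHEDULING",
}

def validate(rec: dict) -> bool:
    """Check one record against the schema: field types plus the fixed 43-char paper_id."""
    return (
        isinstance(rec["paper_id"], str) and len(rec["paper_id"]) == 43
        and isinstance(rec["summaries"], list)
        and isinstance(rec["abstractText"], str)
        and all(isinstance(rec[k], list) for k in ("authors", "references", "sections"))
        and isinstance(rec["year"], int)
        and isinstance(rec["title"], str)
    )

print(validate(record))  # True
```

Note the `year` column is a plain int64; values such as 2021 are sometimes rendered with a thousands separator ("2,021") by dataset viewers, which is display formatting only.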
SP:9156d551adff4ed16ba1be79014188caefc901c7
[ "the paper proposes to learn parametric form of optimal quantum annealing schedule. Authors construct 2 versions of neural network parameterizations mapping problem data onto an optimal schedule. They train these networks on artificially generated sets of problems of different sizes and test final models on the Grover...
Adiabatic quantum computation is a form of computation that acts by slowly interpolating a quantum system between an easy to prepare initial state and a final state that represents a solution to a given computational problem. The choice of the interpolation schedule is critical to the performance: if at a certain time ...
[ { "affiliations": [], "name": "Eli Ovits" } ]
[ { "authors": [ "Dorit Aharonov", "Wim van Dam", "Julia Kempe", "Zeph Landau", "Seth Lloyd", "Oded Regev" ], "title": "Adiabatic quantum computation is equivalent to standard quantum computation", "venue": "SIAM Review,", "year": 2008 }, { "authors": [ ...
[ { "heading": "1 INTRODUCTION", "text": "Many of the algorithms developed for quantum computing employ the quantum circuit model, in which a quantum state involving multiple qubits undergoes a series of invertible transformations. However, an alternative model, called Adiabatic Quantum Computation (AQC) (Far...
2021
FIDELITY-BASED DEEP ADIABATIC SCHEDULING
SP:13fb6d0e4b208c11e5d58df1afac2921c02be269
[ "The paper builds upon previous lines of research on multi-task learning problem, such as conditional latent variable models including the Neural Process. As shown by the extensive Related Work section, this seems to be an active research direction. This makes it difficult for me to judge originality and significan...
Formulating scalable probabilistic regression models with reliable uncertainty estimates has been a long-standing challenge in machine learning research. Recently, casting probabilistic regression as a multi-task learning problem in terms of conditional latent variable (CLV) models such as the Neural Process (NP) has s...
[ { "affiliations": [], "name": "Michael Volpp" }, { "affiliations": [], "name": "Fabian Flürenbrock" }, { "affiliations": [], "name": "Lukas Grossberger" }, { "affiliations": [], "name": "Christian Daniel" }, { "affiliations": [], "name": "Gerhard Neumann" } ...
[ { "authors": [ "Takuya Akiba", "Shotaro Sano", "Toshihiko Yanase", "Takeru Ohta", "Masanori Koyama" ], "title": "Optuna: A Next-generation Hyperparameter Optimization Framework", "venue": null, "year": 2019 }, { "authors": [ "Marcin Andrychowicz", ...
[ { "heading": "1 INTRODUCTION", "text": "Estimating statistical relationships between physical quantities from measured data is of central importance in all branches of science and engineering and devising powerful regression models for this purpose forms a major field of study in statistics and machine lear...
2021
BAYESIAN CONTEXT AGGREGATION FOR NEURAL PROCESSES
SP:368ac9d4b7934e68651c1b54286d9332caf16473
[ "Till page 3 the paper was easy to follow, i.e., the analytical expressions in eq(5), and the basic idea of Algorithm 1 (which is same as prior works by Han et al. , Wang et al., Periera et al.) are clear. However, after page 3 the paper is hard to follow. The specific points are as follows:" ]
In this paper we present a deep learning framework for solving large-scale multiagent non-cooperative stochastic games using fictitious play. The Hamilton-Jacobi-Bellman (HJB) PDE associated with each agent is reformulated into a set of Forward-Backward Stochastic Differential Equations (FBSDEs) and solved via forward s...
[]
[ { "authors": [ "George W Brown" ], "title": "Iterative solution of games by fictitious play", "venue": "Activity analysis of production and allocation,", "year": 1951 }, { "authors": [ "Rene Carmona", "Jean-Pierre Fouque", "Li-Hsien Sun" ], "title": "Mean ...
[ { "heading": "1 INTRODUCTION", "text": "Stochastic differential games represent a framework for investigating scenarios where multiple players make decisions while operating in a dynamic and stochastic environment. The theory of differential games dates back to the seminal work of Isaacs (1965) studying two...
2020
MULTI-AGENT DEEP FBSDE REPRESENTATION FOR LARGE SCALE STOCHASTIC DIFFERENTIAL GAMES
SP:e4664a073afd05446cb1ddc217163692a9a12c1c
[ "This paper attempts to answer the four questions raised from the mutual information estimator. To this end, this paper investigates why the MINE succeeds or fails during the optimization on a synthetic dataset. Based on the observations and discussions, the paper then proposes a novel lower bound to regularize the...
With the variational lower bound of mutual information (MI), the estimation of MI can be understood as an optimization task via stochastic gradient descent. In this work, we start by showing how Mutual Information Neural Estimator (MINE) searches for the optimal function T that maximizes the Donsker-Varadhan representa...
[]
[ { "authors": [ "David Barber", "Felix V Agakov" ], "title": "Information maximization in noisy channels: A variational approach", "venue": "In Advances in Neural Information Processing Systems,", "year": 2004 }, { "authors": [ "Mohamed Ishmael Belghazi", "Aristide...
[ { "heading": "1 INTRODUCTION", "text": "Identifying a relationship between two variables of interest is one of the great linchpins in mathematics, statistics, and machine learning (Goodfellow et al., 2014; Ren et al., 2015; He et al., 2016; Vaswani et al., 2017). Not surprisingly, this problem is closely ti...
2020
null
SP:b1c7e0c9656a0ec0399b6602f89f46323ff3436b
[ "The paper proposes contextual dropout as a sample-dependent dropout module, which can be applied to different models at the expense of marginal memory and computational overhead. The authors chose to focus on Visual Question Answering and Image classification tasks. The results in the paper show the contextual dr...
Dropout has been demonstrated as a simple and effective module to not only regularize the training process of deep neural networks, but also provide the uncertainty estimation for prediction. However, the quality of uncertainty estimation is highly dependent on the dropout probabilities. Most current models use the sam...
[ { "affiliations": [], "name": "Xinjie Fan" }, { "affiliations": [], "name": "Shujian Zhang" }, { "affiliations": [], "name": "Korawat Tanwisuth" }, { "affiliations": [], "name": "Xiaoning Qian" }, { "affiliations": [], "name": "Mingyuan Zhou" } ]
[ { "authors": [ "Jimmy Ba", "Brendan Frey" ], "title": "Adaptive dropout for training deep neural networks", "venue": "In Advances in Neural Information Processing Systems,", "year": 2013 }, { "authors": [ "Emmanuel Bengio", "Pierre-Luc Bacon", "Joelle Pineau...
[ { "heading": "1 INTRODUCTION", "text": "Deep neural networks (NNs) have become ubiquitous and achieved state-of-the-art results in a wide variety of research problems (LeCun et al., 2015). To prevent over-parameterized NNs from overfitting, we often need to appropriately regularize their training. One way t...
2021
CONTEXTUAL DROPOUT: AN EFFICIENT SAMPLE- DEPENDENT DROPOUT MODULE
SP:ee9764a48b109b9860c0a6f657a6cdd819237e7e
[ "The authors propose a end-to-end deep learning model called Net-DNF to handle tabular data. The architecture of Net-DNF has four layers: the first layer is a dense layer (learnable weights) with tanh activation eq(1). The second layer (DNNF) is formed by binary conjunctions over literals eq(2). The third layer is ...
A challenging open question in deep learning is how to handle tabular data. Unlike domains such as image and natural language processing, where deep architectures prevail, there is still no widely accepted neural architecture that dominates tabular data. As a step toward bridging this gap, we present Net-DNF, a novel ge...
[ { "affiliations": [], "name": "Liran Katzir" }, { "affiliations": [], "name": "Gal Elidan" } ]
[ { "authors": [ "Martin Anthony" ], "title": "Connections between neural networks and Boolean functions", "venue": "In Boolean Methods and Models,", "year": 2005 }, { "authors": [ "Sercan Ömer Arik", "Tomas Pfister" ], "title": "Tabnet: Attentive interpretable ta...
[ { "heading": "1 INTRODUCTION", "text": "A key point in successfully applying deep neural models is the construction of architecture families that contain inductive bias relevant to the application domain. Architectures such as CNNs and RNNs have become the preeminent favorites for modeling images and sequen...
2021
NET-DNF: EFFECTIVE DEEP MODELING OF TABULAR DATA
SP:9962a592fe8663bbcfe752b83aa9b666fe3a9456
[ "The paper suggests an improvement over double-Q learning by applying the control variates technique to the target Q, in the form of $(q1 - \\beta (q2 - E(q2))$ (eqn (8)). To minimize the variance, it suggests minimizing the correlation between $q1$ and $q2$. In addition, it applies the TD3 trick. The resulting alg...
Q-learning with value function approximation may have the poor performance because of overestimation bias and imprecise estimate. Specifically, overestimation bias is from the maximum operator over noise estimate, which is exaggerated using the estimate of a subsequent state. Inspired by the recent advance of deep rein...
[]
[ { "authors": [ "Oron Anschel", "Nir Baram", "Nahum Shimkin" ], "title": "Averaged-dqn: Variance reduction and stabilization for deep reinforcement learning", "venue": "In Proceedings of the 34th International Conference on Machine Learning, ICML 2017,", "year": 2017 }, { ...
[ { "heading": "1 INTRODUCTION", "text": "Q-learning Watkins & Dayan (1992) as a model free reinforcement learning approach has gained popularity, especially under the advance of deep neural networks Mnih et al. (2013). In general, it combines the neural network approximators with the actor-critic architectur...
2020
DECORRELATED DOUBLE Q-LEARNING
SP:73f0f92f476990989fa8339f789a77fadb5c1e26
[ "This work empirically studies the relationship between robustness and class selectivity, a measure of neuron variability between classes. Robustness to both adversarial (\"worst-case\") perturbations and corruptions (\"average-case\") are considered. This work builds off the recent work of Leavitt and Morcos (2020...
Representational sparsity is known to affect robustness to input perturbations in deep neural networks (DNNs), but less is known about how the semantic content of representations affects robustness. Class selectivity—the variability of a unit’s responses across data classes or dimensions—is one way of quantifying the s...
[]
[ { "authors": [ "Rana Ali Amjad", "Kairen Liu", "Bernhard C. Geiger" ], "title": "Understanding Individual Neuron Importance Using Information Theory. April 2018", "venue": "URL https://arxiv.org/abs/1804.06679v3", "year": 2018 }, { "authors": [ "Alessio Ansuini", ...
[ { "heading": "1 INTRODUCTION", "text": "Methods for understanding deep neural networks (DNNs) often attempt to find individual neurons or small sets of neurons that are representative of a network’s decision (Erhan et al., 2009; Zeiler and Fergus, 2014; Karpathy et al., 2016; Amjad et al., 2018; Lillian et ...
2020
null
SP:8fe8ad33a783b2f98816e57e88d20b67fed50e8d
[ "The authors investigate the token embedding space of a variety of contextual embedding models for natural language. Using techniques based on nearest neighbors, clustering, and PCA, they report a variety of results on local dimensionality / anisotropy / clustering / manifold structure in these embedding models whi...
The geometric properties of contextual embedding spaces for deep language models such as BERT and ERNIE, have attracted considerable attention in recent years. Investigations on the contextual embeddings demonstrate a strong anisotropic space such that most of the vectors fall within a narrow cone, leading to high cosi...
[ { "affiliations": [], "name": "Xingyu Cai" }, { "affiliations": [], "name": "Jiaji Huang" }, { "affiliations": [], "name": "Yuchen Bian" }, { "affiliations": [], "name": "Kenneth Church" } ]
[ { "authors": [ "Laurent Amsaleg", "Oussama Chelly", "Teddy Furon", "Stéphane Girard", "Michael E Houle", "Ken-ichi Kawarabayashi", "Michael Nett" ], "title": "Estimating local intrinsic dimensionality", "venue": "In Proceedings of the 21th ACM SIGKDD Interna...
[ { "heading": "1 INTRODUCTION", "text": "The polysemous English word “bank” has two common senses: 1. the money sense, a place that people save or borrow money; 2. the river sense, a slope of earth that prevents the flooding. In modern usage, the two senses are very different from one another, though interes...
2021
null
SP:9e4a85fa5d76f345b5a38b6f86710a53e1d08503
[ "This paper critically re-examines research in domain generalisation (DG), ie building models that robustly generalise to out-of-distribution data. It observes that existing methods are hard to compare, in particular due to unclear hyper-parameter and model selection criteria. It introduces a common benchmark suite...
The goal of domain generalization algorithms is to predict well on distributions different from those seen during training. While a myriad of domain generalization algorithms exist, inconsistencies in experimental conditions—datasets, network architectures, and model selection criteria—render fair comparisons difficult...
[ { "affiliations": [], "name": "Ishaan Gulrajani" }, { "affiliations": [], "name": "David Lopez-Paz" } ]
[ { "authors": [ "Kartik Ahuja", "Karthikeyan Shanmugam", "Kush Varshney", "Amit Dhurandhar" ], "title": "Invariant risk minimization", "venue": "games. arXiv,", "year": 2020 }, { "authors": [ "Kei Akuzawa", "Yusuke Iwasawa", "Yutaka Matsuo" ],...
[ { "heading": "1 INTRODUCTION", "text": "Machine learning systems often fail to generalize out-of-distribution, crashing in spectacular ways when tested outside the domain of training examples (Torralba and Efros, 2011). The overreliance of learning systems on the training distribution manifests widely. For ...
2021
IN SEARCH OF LOST DOMAIN GENERALIZATION
SP:04abdf6d039513f23e00e6686832cd4b950f1d75
[ "This work proposes a specific parametrisation for the Gaussian prior and approximate posterior distribution in variational Bayesian neural networks in terms of inducing weights. The general idea is an instance of the sparse variational inference scheme for GPs proposed by Titsias back in 2009; for a given model wi...
Bayesian Neural Networks and deep ensembles represent two modern paradigms of uncertainty quantification in deep learning. Yet these approaches struggle to scale mainly due to memory inefficiency, requiring parameter storage several times that of their deterministic counterparts. To address this, we augment each weight...
[ { "affiliations": [], "name": "Hippolyt Ritter" }, { "affiliations": [], "name": "Martin Kukla" }, { "affiliations": [], "name": "Cheng Zhang" }, { "affiliations": [], "name": "Yingzhen Li" } ]
[ { "authors": [ "F.V. Agakov", "D. Barber" ], "title": "An auxiliary variational method", "venue": "In ICONIP,", "year": 2019 }, { "authors": [ "E. Bingham", "J.P. Chen", "M. Jankowiak", "F. Obermeyer", "N. Pradhan", "T. Karaletsos", "...
[ { "heading": "1 Introduction", "text": "Deep learning models are becoming deeper and wider than ever before. From image recognition models such as ResNet-101 (He et al., 2016a) and DenseNet (Huang et al., 2017) to BERT (Xu et al., 2019) and GPT-3 (Brown et al., 2020) for language modelling, deep neural netw...
2022
Sparse Uncertainty Representation in Deep Learning with Inducing Weights
SP:4b4f70092c9fceabdc76c6ed5c5cf83c7791e119
[ "This paper proposes a hybrid-regressive machine translation (HRT) approach—combining autoregressive (AT) and non-autoregressive (NAT) translation paradigms: it first uses an AT model to generate a “gappy” sketch (every other token in a sentence), and then applies a NAT model to fill in the gaps with a single pass....
Although the non-autoregressive translation model based on iterative refinement has achieved comparable performance to the autoregressive counterparts with faster decoding, we empirically found that such aggressive iterations make the acceleration rely heavily on small batch size (e.g., 1) and computing device (e.g., G...
[]
[ { "authors": [ "Nader Akoury", "Kalpesh Krishna", "Mohit Iyyer" ], "title": "Syntactically supervised transformers for faster neural machine translation", "venue": "In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics,", "year": 2019 }, ...
[ { "heading": "1 INTRODUCTION", "text": "Although autoregressive translation (AT) has become the de facto standard for Neural Machine Translation (Bahdanau et al., 2015), its nature of generating target sentences sequentially (e.g., from left to right) makes it challenging to respond quickly in a production ...
2020
null
SP:41b23082a1439aa8601439e27c9abaa33e06959c
[ "This paper proposes a (decentralized) method for online adjustment of agent incentives in multi-agent learning scenarios, as a means to obtain higher outcomes for each agent and for the group as a whole. The paper uses the “price of anarchy” (the worst value of an equilibrium divided by the best value in the game)...
Even in simple multi-agent systems, fixed incentives can lead to outcomes that are poor for the group and each individual agent. We propose a method, D3C, for online adjustment of agent incentives that reduces the loss incurred at a Nash equilibrium. Agents adjust their incentives by learning to mix their incentive wit...
[]
[ { "authors": [ "Blaise Aguera y Arcas" ], "title": "Social intelligence", "venue": "In Talk presented at the 33rd Conference on Neural Information Processing Systems Conference,", "year": 2020 }, { "authors": [ "Richard D. Alexander", "Gerald Bargia" ], "title":...
[ { "heading": "1 INTRODUCTION", "text": "We consider a setting composed of multiple interacting artificially intelligent agents. These agents will be instantiated by humans, corporations, or machines with specific individual incentives. However, it is well known that the interactions between individual agent...
2020
null
SP:87bda29654ffe25cda14e3b27a6e4b53e2a40164
[ "The paper investigates whether languages are equally hard to Conditional-Language-Model (CLM). To do this, the authors perform controlled experiments by modeling text from parallel data from 6 typologically diverse languages. They pair the languages and perform experiments in 30 directions with Transformers, and c...
Inspired by the phenomenon of performance disparity between languages in machine translation, we investigate whether and to what extent languages are equally hard to “conditional-language-model”. Our goal is to improve our understanding and expectation of the relationship between language, data representation, size, an...
[]
[ { "authors": [ "Mikhail Belkin", "Daniel Hsu", "Siyuan Ma", "Soumik Mandal" ], "title": "Reconciling modern machine-learning practice and the classical bias–variance trade-off", "venue": "Proceedings of the National Academy of Sciences,", "year": 2019 }, { "author...
[ { "heading": null, "text": "Inspired by the phenomenon of performance disparity between languages in machine translation, we investigate whether and to what extent languages are equally hard to “conditional-language-model”. Our goal is to improve our understanding and expectation of the relationship between...
2020
null
SP:0cab715d71a765b97066673f3a2d0e00d22ffa3c
[ "The authors propose a neural architecture search (NAS) algorithm inspired by brain physiology. In particular, they propose a NAS algorithm based on neural dendritic branching, and apply it to three different segmentation tasks (namely cell nuclei, electron microscopy, and chest X-ray lung segmentation). The author...
Researchers manually compose most neural networks through painstaking experimentation. This process is taxing and explores only a limited subset of possible architecture. Researchers design architectures to address objectives ranging from low space complexity to high accuracy through hours of experimentation. Neural ar...
[]
[ { "authors": [ "Md Zahangir Alom", "Mahmudul Hasan", "Chris Yakopcic", "Tarek M Taha", "Vijayan K Asari" ], "title": "Recurrent residual convolutional neural network based on u-net (r2u-net) for medical image segmentation", "venue": "arXiv preprint arXiv:1802.06955,", ...
[ { "heading": null, "text": "1 INTRODUCTION\nResearchers manually composing neural networks must juggle multiple goals for their architectures. Architectures must make good decisions; they must be fast, and they should work even with limited computational resources. These goals are challenging to achieve man...
2020
null
SP:232edf223e799126992acd9ee04d88c22ff57110
[ "The authors propose two approaches for pruning: (a) \"Evolution-style\": start with K random masks associated with the weights, update weights on gradient descent corresponding to those active in the “fittest” mask, and overtime throw away all but one masks which are less fit. (b) \"Dissipating-gradients”: Here th...
Post-training dropout based approaches achieve high sparsity and are well established means of deciphering problems relating to computational cost and overfitting in Neural Network architectures (Srivastava et al., 2014), (Pan et al., 2016), Zhu & Gupta (2017), LeCun et al. (1990). Contrastingly, pruning at initializat...
[]
[ { "authors": [ "Simon Alford", "Ryan Robinett", "Lauren Milechin", "Jeremy Kepner" ], "title": "Training behavior of sparse neural network topologies", "venue": "IEEE High Performance Extreme Computing Conference (HPEC),", "year": 2019 }, { "authors": [ "Ayd...
[ { "heading": "1 INTRODUCTION", "text": "Computational complexity and overfitting in neural networks is a well established problem Frankle & Carbin (2018), Han et al. (2015), LeCun et al. (1990), Denil et al. (2013). We utilize pruning approaches for the following two reasons: 1) To reduce the computational ...
2020
null
SP:bb0b99194e5d102320ca4cc7c89c4ae6ee514d83
[ "The paper studies “butterfly networks”, where, a logarithmic number of linear layers with sparse connections resembling the butterfly structure of the FFT algorithm, along with linear layers in smaller dimensions are used to approximate linear layers in larger dimensions. In general, the paper follows the idea of ...
A butterfly network consists of logarithmically many layers, each with a linear number of non-zero weights (pre-specified). The fast Johnson-Lindenstrauss transform (FJLT) can be represented as a butterfly network followed by a projection onto a random subset of the coordinates. Moreover, a random matrix based on FJLT ...
[ { "affiliations": [], "name": "FIXED BUTTER" } ]
[ { "authors": [ "N. Ailon", "B. Chazelle" ], "title": "The fast johnson–lindenstrauss transform and approximate nearest neighbors", "venue": "SIAM J. Comput.,", "year": 2009 }, { "authors": [ "N. Ailon", "E. Liberty" ], "title": "Fast dimension reduction us...
[ { "heading": "1 INTRODUCTION", "text": "A butterfly network (see Figure 6 in Appendix A) is a layered graph connecting a layer of n inputs to a layer of n outputs with O(log n) layers, where each layer contains 2n edges. The edges connecting adjacent layers are organized in disjoint gadgets, each gadget conn...
2020
null
SP:fb0eda1f20d9b0a63164e96a2bf9ab4bee365eea
[ "The paper considers the problem of partitioning the atoms (e.g., pixels of an image) of a reinforcement learning task to latent states (e.g., a grid that determines whether there exists furniture in each cell). The number of states grows exponentially with the number of cells of the grid. So the algorithms that ar...
We propose a novel setting for reinforcement learning that combines two common real-world difficulties: presence of observations (such as camera images) and factored states (such as location of objects). In our setting, the agent receives observations generated stochastically from a latent factored state. These observa...
[ { "affiliations": [], "name": "Dipendra Misra" }, { "affiliations": [], "name": "Qinghua Liu" } ]
[ { "authors": [ "Alekh Agarwal", "Sham Kakade", "Akshay Krishnamurthy", "Wen Sun" ], "title": "Flambe: Structural complexity and representation learning of low rank mdps", "venue": "Advances in Neural Information Processing Systems,", "year": 2020 }, { "authors": [...
[ { "heading": "1 INTRODUCTION", "text": "Most reinforcement learning (RL) algorithms scale polynomially with the size of the state space, which is inadequate for many real world applications. Consider for example a simple navigation task in a room with furniture where the set of furniture pieces and their lo...
2021
PROVABLE RICH OBSERVATION REINFORCEMENT LEARNING WITH COMBINATORIAL LATENT STATES
SP:5908636440ae0162f1bf98b6e7b8969cc163f9a6
[ "Motivated by the observation that prevalent metrics (Inception Score, Frechet Inception Distance) used to assess the quality of samples obtained from generative models are gameable (due to either the metric not correlating well with visually assessed sample quality or the metric being susceptible to training sampl...
Many recent developments on generative models for natural images have relied on heuristically-motivated metrics that can be easily gamed by memorizing a small sample from the true distribution or training a model directly to improve the metric. In this work, we critically evaluate the gameability of such metrics by run...
[]
[ { "authors": [ "Shane Barratt", "Rishi Sharma" ], "title": "A note on the inception score", "venue": "arXiv preprint arXiv:1801.01973,", "year": 2018 }, { "authors": [ "Ali Borji" ], "title": "Pros and cons of gan evaluation measures", "venue": "Computer Vis...
[ { "heading": null, "text": "Many recent developments on generative models for natural images have relied on heuristically-motivated metrics that can be easily gamed by memorizing a small sample from the true distribution or training a model directly to improve the metric. In this work, we critically evaluat...
2020
null
SP:9ce7a60c5f2e40f7d59e98c90171a7b49621c67c
[ "Observing that the existed ER-based sampling method may introduce bias or redundancy in sampled transitions, the paper proposes a new sampling method in the ER learning setting. The idea is to take into consideration the context, i.e. many visited transitions, rather than a single one, based on which one can measu...
Experience replay, which enables the agents to remember and reuse experience from the past, has played a significant role in the success of off-policy reinforcement learning (RL). To utilize the experience replay efficiently, the existing sampling methods allow selecting out more meaningful experiences by imposing prio...
[ { "affiliations": [], "name": "REPLAY BUFFERS" }, { "affiliations": [], "name": "Youngmin Oh" }, { "affiliations": [], "name": "Kimin Lee" }, { "affiliations": [], "name": "Jinwoo Shin" }, { "affiliations": [], "name": "Eunho Yang" }, { "affiliations":...
[ { "authors": [ "Marc G Bellemare", "Yavar Naddaf", "Joel Veness", "Michael Bowling" ], "title": "The arcade learning environment: An evaluation platform for general agents", "venue": "Journal of Artificial Intelligence Research,", "year": 2013 }, { "authors": [ ...
[ { "heading": "1 INTRODUCTION", "text": "Experience replay (Mnih et al., 2015), which is a memory that stores the past experiences to reuse them, has become a popular mechanism for reinforcement learning (RL), since it stabilizes training and improves the sample efficiency. The success of various off-policy ...
2021
null
SP:ca6ab92369346b3d457f575fc652333255f2dfec
[ "The paper considers the problem of slow sampling in autoregressive generative models. Sampling in such models is sequential, so its computational cost scales with the data dimensionality. Existing work speeds up autoregressive sampling by caching activations or distilling into normalizing flows with fast sampling....
Autoregressive models are widely used for tasks such as image and audio generation. The sampling process of these models, however, does not allow interruptions and cannot adapt to real-time computational resources. This challenge impedes the deployment of powerful autoregressive models, which involve a slow sampling pr...
[ { "affiliations": [], "name": "ORDERED AUTOENCODING" }, { "affiliations": [], "name": "Yilun Xu" }, { "affiliations": [], "name": "Yang Song" }, { "affiliations": [], "name": "Linyuan Gong" } ]
[ { "authors": [ "X. Bao", "J. Lucas", "S. Sachdeva", "R.B. Grosse" ], "title": "Regularized linear autoencoders recover the principal components, eventually", "venue": "ArXiv, abs/2007.06731,", "year": 2020 }, { "authors": [ "Y. Bengio", "N. Léonard", ...
[ { "heading": "1 INTRODUCTION", "text": "Autoregressive models are a prominent approach to data generation, and have been widely used to produce high quality samples of images (Oord et al., 2016b; Salimans et al., 2017; Menick & Kalchbrenner, 2018), audio (Oord et al., 2016a), video (Kalchbrenner et al., 201...
2021
null
SP:a4cda983cb5a670c3ad7054b9cd7797107af64b1
[ "This paper presents a one-class classification method using a fully convolutional model and directly using the output map as an explanation map. The method is dubbed FCDD for fully convolutional data descriptor. FCDD uses a hypersphere classifier combined with a pseudo-Huber loss. FCDD is trained using outliers ex...
Deep one-class classification variants for anomaly detection learn a mapping that concentrates nominal samples in feature space causing anomalies to be mapped away. Because this transformation is highly non-linear, finding interpretations poses a significant challenge. In this paper we present an explainable deep one-c...
[ { "affiliations": [], "name": "Philipp Liznerski" }, { "affiliations": [], "name": "Lukas Ruff" }, { "affiliations": [], "name": "Robert A. Vandermeulen" }, { "affiliations": [], "name": "Billy Joe Franks" }, { "affiliations": [], "name": "Marius Kloft" }, ...
[ { "authors": [ "C.J. Anders", "P. Pasliev", "A.-K. Dombrowski", "K.-R. Müller", "P. Kessel" ], "title": "Fairwashing explanations with off-manifold detergent", "venue": "In ICML,", "year": 2020 }, { "authors": [ "V. Barnett", "T. Lewis" ], ...
[ { "heading": "1 INTRODUCTION", "text": "Anomaly detection (AD) is the task of identifying anomalies in a corpus of data (Edgeworth, 1887; Barnett and Lewis, 1994; Chandola et al., 2009; Ruff et al., 2021). Powerful new anomaly detectors based on deep learning have made AD more effective and scalable to larg...
2021
EXPLAINABLE DEEP ONE-CLASS CLASSIFICATION
SP:1d4d75e1bbb4e58273bc027f004aa986a587a6dd
[ "This paper proposes an approach to training deep latent variable models on data that is missing not at random. To learn the parameters of deep latent variable models, the paper adopts importance-weighted variational inference techniques. Experiments on a variety of datasets show that the proposed approach is effec...
When a missing process depends on the missing values themselves, it needs to be explicitly modelled and taken into account while doing likelihood-based inference. We present an approach for building and fitting deep latent variable models (DLVMs) in cases where the missing process is dependent on the missing data. Spec...
[ { "affiliations": [], "name": "Niels Bruun Ipsen" }, { "affiliations": [], "name": "Pierre-Alexandre Mattei" }, { "affiliations": [], "name": "Jes Frellsen" } ]
[ { "authors": [ "Alberto Bietti", "Julien Mairal" ], "title": "Invariance and stability of deep convolutional representations", "venue": "In Advances in Neural Information Processing Systems,", "year": 2017 }, { "authors": [ "Benjamin Bloem-Reddy", "Yee Whye Teh" ...
[ { "heading": null, "text": "1 INTRODUCTION\nFigure 1: (a) Graphical model of the not-MIWAE. (b) Gaussian data with MNAR values. Dots are fully observed, partially observed data are displayed as black crosses. A contour of the true distribution is shown tog...
2021
null
SP:da630280f443afedfacaf7ad1abe20d97ebb60f2
[ "In this work, generative models using a GP as prior and a deep network as likelihood (GP-DGMs) are considered. In the VAE formalism for inference, the novelty of this paper lies in the encoder: it is sparse, and the posterior can be computed even when part of the observations is missing. Sparsity is obtained …
Large, multi-dimensional spatio-temporal datasets are omnipresent in modern science and engineering. An effective framework for handling such data are Gaussian process deep generative models (GP-DGMs), which employ GP priors over the latent variables of DGMs. Existing approaches for performing inference in GP-DGMs do n...
[]
[ { "authors": [ "Mauricio A Álvarez", "Neil D Lawrence" ], "title": "Computationally efficient convolved multiple output Gaussian processes", "venue": "The Journal of Machine Learning Research,", "year": 2011 }, { "authors": [ "Gowtham Atluri", "Anuj Karpatne", ...
[ { "heading": "1 INTRODUCTION", "text": "Increasing amounts of large, multi-dimensional datasets that exhibit strong spatio-temporal dependencies are arising from a wealth of domains, including earth, social and environmental sciences (Atluri et al., 2018). For example, consider modelling daily atmospheric m...
2020
null
SP:30ceb5d450760e9954ac86f091fb97cb14a2d092
[ "The paper considers the problem of creating spatial memory representations, which play important roles in robotics and are crucial for real-world applications of intelligent agents. The paper proposes an ego-centric representation that stores depth values and features at each pixel in a panorama. Given the relativ...
Spatial memory, or the ability to remember and recall specific locations and objects, is central to autonomous agents’ ability to carry out tasks in real environments. However, most existing artificial memory modules are not very adept at storing spatial information. We propose a parameter-free module, Egospheric Spati...
[ { "affiliations": [], "name": "Daniel Lenton" }, { "affiliations": [], "name": "Stephen James" }, { "affiliations": [], "name": "Ronald Clark" }, { "affiliations": [], "name": "Andrew J. Davison" } ]
[ { "authors": [ "Michael Bloesch", "Jan Czarnowski", "Ronald Clark", "Stefan Leutenegger", "Andrew J Davison" ], "title": "Codeslam—learning a compact, optimisable representation for dense visual slam", "venue": "In Proceedings of the IEEE conference on computer vision a...
[ { "heading": "1 INTRODUCTION", "text": "Egocentric spatial memory is central to our understanding of spatial reasoning in biology (Klatzky, 1998; Burgess, 2006), where an embodied agent constantly carries with it a local map of its surrounding geometry. Such representations have particular significance for ...
2021
END-TO-END EGOSPHERIC SPATIAL MEMORY
SP:0cde0537137f3eef6c9c0d6d580a610a07112a39
[ "This paper introduces an algorithm for training neural networks in a way that parameters preserve a given property. The optimization is based on using a transformation R that perturbs parameters in a way that the desired property is preserved. Instead of directly optimizing the parameters of the network, the opti...
Many types of neural network layers rely on matrix properties such as invertibility or orthogonality. Retaining such properties during optimization with gradientbased stochastic optimizers is a challenging task, which is usually addressed by either reparameterization of the affected parameters or by directly optimizing...
[]
[ { "authors": [ "P.A. Absil", "R. Mahony", "R. Sepulchre" ], "title": "Optimization algorithms on matrix manifolds", "venue": "ISBN 9780691132983", "year": 2009 }, { "authors": [ "Jens Behrmann", "Will Grathwohl", "Ricky T.Q. Chen", "David Duvenau...
[ { "heading": null, "text": "1 INTRODUCTION\nMany deep learning applications depend critically on the neural network parameters having a certain mathematical structure. As an important example, reversible generative models rely on invertibility and, in the case of normalizing flows, efficient inversion and c...
2020
null
SP:6ba57dba7e320797ca311e5c7d6e55e130384df2
[ "To summarize, this paper proposed a new noise injection method that is easy to implement and is able to replace the original noise injection method in StyleGAN 2. The approach is supported by detailed theoretical analysis and impactful performance improvement on GAN training and inversion. The results show that th...
Noise injection is an effective way of circumventing overfitting and enhancing generalization in machine learning, the rationale of which has been validated in deep learning as well. Recently, noise injection exhibits surprising effectiveness when generating high-fidelity images in Generative Adversarial Networks (e.g....
[]
[ { "authors": [ "Rameen Abdal", "Yipeng Qin", "Peter Wonka" ], "title": "Image2StyleGAN: How to embed images into the StyleGAN latent space", "venue": "In Proceedings of the IEEE International Conference on Computer Vision,", "year": 2019 }, { "authors": [ "Guozhon...
[ { "heading": "1 INTRODUCTION", "text": "Noise injection is usually applied as regularization to cope with overfitting or facilitate generalization in neural networks (Bishop, 1995; An, 1996). The effectiveness of this simple technique has also been proved in various tasks in deep learning, such as learning ...
2020
null
SP:bdbb12951868ea0864f926192fdbe2e62ecdb0e3
[ "The authors propose in this paper a supervised approach relying on given relative and quantitative attribute discrepancies. A UNet-like generator, trained adversarially, tends to generate realistic images, while a \"ranker\" tends to predict the magnitude of the input parameter used to control the image manipulation.…
We propose a new model to refine image-to-image translation via an adversarial ranking process. In particular, we simultaneously train two modules: a generator that translates an input image to the desired image with smooth subtle changes with respect to some specific attributes; and a ranker that ranks rival preferenc...
[]
[ { "authors": [ "Yazeed Alharbi", "Peter Wonka" ], "title": "Disentangled image generation through structured noise injection", "venue": "In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition,", "year": 2020 }, { "authors": [ "Yunbo Cao", ...
[ { "heading": "1 INTRODUCTION", "text": "Image-to-image (I2I) translation (Isola et al., 2017) aims to translate an input image into the desired ones with changes in some specific attributes. Current literature can be classified into two categories: binary translation (Zhu et al., 2017; Kim et al., 2017), e....
2020
TRIP: REFINING IMAGE-TO-IMAGE TRANSLATION
SP:878a518cb77731b8b376d5fd82542670e195f0d6
[ "This paper aims to develop a transformer-based pre-trained model for multivariate time series representation learning. Specifically, only the transformer’s encoder is used, and a time-series imputation task is constructed as the unsupervised learning objective. This is a bit similar to the BERT model in NLP. But …
In this work we propose for the first time a transformer-based framework for unsupervised representation learning of multivariate time series. Pre-trained models can be potentially used for downstream tasks such as regression and classification, forecasting and missing value imputation. We evaluate our models on severa...
[]
[ { "authors": [ "A. Bagnall", "J. Lines", "A. Bostrom", "J. Large", "E. Keogh" ], "title": "The great time series classification bake off: a review and experimental evaluation of recent algorithmic advances", "venue": "Data Mining and Knowledge Discovery,", "year": 2...
[ { "heading": "1 INTRODUCTION", "text": "Multivariate time series (MTS) are an important type of data that is ubiquitous in a wide variety of domains, including science, medicine, finance, engineering and industrial applications. Despite the recent abundance of MTS data in the much touted era of “Big Data”, ...
2020
null
SP:2fe9ca0b44e57587b94159cb8fa201f79c13db50
[ "In this paper, the authors proposed a novel reparameterization framework of the last network layer that takes semantic hierarchy into account. Specifically, the authors assume a predefined hierarchy graph, and model the classifier of child classes as a parent classifier plus offsets $\\delta$ recursively. The auth...
This paper considers classification problems with hierarchically organized classes. We force the classifier (hyperplane) of each class to belong to a sphere manifold, whose center is the classifier of its super-class. Then, individual sphere manifolds are connected based on their hierarchical relations. Our technique r...
[]
[ { "authors": [ "P.-A. Absil", "R. Mahony", "R. Sepulchre" ], "title": "Optimization Algorithms on Matrix Manifolds", "venue": null, "year": 2007 }, { "authors": [ "Gregor Bachmann", "Gary Bécigneul", "Octavian-Eugen Ganea" ], "title": "Constant...
[ { "heading": "1 INTRODUCTION", "text": "Applying inductive biases or prior knowledge to inference models is a popular strategy to improve their generalization performance (Battaglia et al., 2018). For example, a hierarchical structure is found based on the similarity or shared characteristics between sample...
2020
CONNECTING SPHERE MANIFOLDS HIERARCHICALLY
SP:cb6afa05735201fecf8106b77c2d0a883d5cd996
[ "This paper investigates the role of pre-training as an initialization for meta-learning for few-shot classification. In particular, they look at the extent to which the pre-trained representations are disentangled with respect to the class labels. They hypothesize that this disentanglement property of those repres...
Few-shot learning aims to classify unknown classes of examples with a few new examples per class. There are two key routes for few-shot learning. One is to (pre)train a classifier with examples from known classes, and then transfer the pretrained classifier to unknown classes using the new examples. The other, called m...
[]
[ { "authors": [ "Luca Bertinetto", "Joao F. Henriques", "Philip Torr", "Andrea Vedaldi" ], "title": "Meta-learning with differentiable closed-form solvers", "venue": "In International Conference on Learning Representations,", "year": 2019 }, { "authors": [ "W...
[ { "heading": "1 INTRODUCTION", "text": "In recent years, deep learning methods have outperformed most of the traditional methods in supervised learning, especially in image classification. However, deep learning methods generally require lots of labeled data to achieve decent performance. Some applications,...
2020
null
SP:2cfe676c21709d69aa3bab1480440fda0a365c3f
[ "The paper proposes a method, named RG-Flow, which combines the ideas of the renormalization group (RG) and flow-based models. The RG is applied to separate signal statistics of different scales in the input distribution, and the flow-based idea represents each scale's information in its latent variables with sparse prior …
Flow-based generative models have become an important class of unsupervised learning approaches. In this work, we incorporate the key idea of renormalization group (RG) and sparse prior distribution to design a hierarchical flow-based generative model, called RG-Flow, which can separate information at different scales ...
[]
[ { "authors": [ "Samuel K Ainsworth", "Nicholas J Foti", "Adrian KC Lee", "Emily B Fox" ], "title": "oi-vae: Output interpretable vaes for nonlinear group factor analysis", "venue": "In International Conference on Machine Learning,", "year": 2018 }, { "authors": [ ...
[ { "heading": "1 INTRODUCTION", "text": "One of the most important unsupervised learning tasks is to learn the data distribution and build generative models. Over the past few years, various types of generative models have been proposed. Flow-based generative models are a particular family of generative mode...
2020
RG-FLOW: A HIERARCHICAL AND EXPLAINABLE FLOW MODEL BASED ON RENORMALIZATION GROUP AND SPARSE PRIOR
SP:2f7f3a043edf8bbe4164dc748c7fbfc7c7338a02
[ "The authors propose a discriminator-based approach to inverse reinforcement learning (IRL). The discriminator function is trained to attain large values (\"energy\") on trajectories from the current policy and small values on trajectories from an expert policy. The current policy is then improved by using the nega...
Traditional reinforcement learning methods usually deal with tasks that have explicit reward signals. However, for the vast majority of cases, the environment wouldn’t feed back a reward signal immediately. This turns out to be a bottleneck for modern reinforcement learning approaches to be applied to more realistic scenario…
[]
[ { "authors": [ "Pieter Abbeel", "Varun Ganapathi", "Andrew Y. Ng" ], "title": "A learning vehicular dynamics, with application to modeling helicopters", "venue": "In Advances in Neural Information Processing Systems,", "year": 2006 }, { "authors": [ "Pieter Abbeel...
[ { "heading": "1 INTRODUCTION", "text": "Motivated by applying reinforcement learning algorithms into more realistic tasks, we find that most realistic environments cannot feed an explicit reward signal back to the agent immediately. It becomes a bottleneck for traditional reinforcement learning methods to b...
2020
null
SP:6b06c93bb2394dae7e4d6e76a8c134b6808a46e9
[ "This paper considers solving rank-constrained convex optimization. This is a fairly general problem that contains several special cases such as matrix completion and robust PCA. This paper presents a local search approach along with an interesting theoretical analysis of their approach. Furthermore, this paper pro...
We propose greedy and local search algorithms for rank-constrained convex optimization, namely solving min rank(A)≤r∗ R(A) given a convex function R : Rm×n → R and a parameter r∗. These algorithms consist of repeating two steps: (a) adding a new rank-1 matrix to A and (b) enforcing the rank constraint on A. We refine a...
[ { "affiliations": [], "name": "Kyriakos Axiotis" }, { "affiliations": [], "name": "Maxim Sviridenko" } ]
[ { "authors": [ "Kyriakos Axiotis", "Maxim Sviridenko" ], "title": "Sparse convex optimization via adaptively regularized hard thresholding", "venue": "arXiv preprint arXiv:2006.14571,", "year": 2020 }, { "authors": [ "Thierry Bouwmans", "Sajid Javed", "Hongy...
[ { "heading": null, "text": "min rank(A)≤r∗ R(A) given a convex function R : Rm×n → R and a parameter r∗. These algorithms consist of repeating two steps: (a) adding a new rank-1 matrix to A and (b) enforcing the rank constraint on A. We refine and improve the theoretical analysis of Shalev-Shwartz et al. (2011…
2021
LOCAL SEARCH ALGORITHMS FOR RANK- CONSTRAINED CONVEX OPTIMIZATION
SP:9eeb3b40542889b8a8e196f126a11f80e177f031
[ "The paper uses selective training with pseudo labels. Specifically, the method selects the pseudo-labeled data associated with small loss after performing the data augmentation, and then uses the selected data for training the model. Here, the model computes the confidence of the pseudo labels and then puts a t...
We propose a novel semi-supervised learning (SSL) method that adopts selective training with pseudo labels. In our method, we generate hard pseudo-labels and also estimate their confidence, which represents how likely each pseudo-label is to be correct. Then, we explicitly select which pseudo-labeled data should be use...
[]
[ { "authors": [ "David Berthelot", "Nicholas Carlini", "Ian Goodfellow", "Nicolas Papernot", "Avital Oliver", "Colin A Raffel" ], "title": "Mixmatch: A holistic approach to semi-supervised learning", "venue": "In Advances in Neural Information Processing Systems,",...
[ { "heading": "1 INTRODUCTION", "text": "Semi-supervised learning (SSL) is a powerful technique to deliver a full potential of complex models, such as deep neural networks, by utilizing unlabeled data as well as labeled data to train the model. It is especially useful in some practical situations where obtai...
2020
SEMI-SUPERVISED LEARNING BY SELECTIVE TRAINING WITH PSEUDO LABELS VIA CONFIDENCE ESTIMATION
SP:d818bed28daccbda111c39cdc9d097b5755b3d89
[ "The paper provides an evaluation of the reliability of the confidence levels of well-known uncertainty quantification techniques in deep learning on classification and regression tasks. The question that the authors are trying to answer empirically is: when a model claims accuracy at a confidence level within a certain i…
Uncertainty quantification for complex deep learning models is increasingly important as these techniques see growing use in high-stakes, real-world settings. Currently, the quality of a model’s uncertainty is evaluated using point-prediction metrics such as negative log-likelihood or the Brier score on heldout data. I...
[]
[ { "authors": [ "Rina Foygel Barber", "Emmanuel J Candès", "Aaditya Ramdas", "Ryan J Tibshirani" ], "title": "The limits of distribution-free conditional predictive inference. March 2019", "venue": "URL http://arxiv.org/ abs/1903.04684", "year": 1903 }, { "authors"...
[ { "heading": null, "text": "Uncertainty quantification for complex deep learning models is increasingly important as these techniques see growing use in high-stakes, real-world settings. Currently, the quality of a model’s uncertainty is evaluated using point-prediction metrics such as negative log-likeliho...
2020
DEEP LEARNING UNCERTAINTY QUANTIFICATION PROCEDURES
SP:74f12645ba675ccd4217ebfc0579cb4232406009
[ "This paper proposes a general framework for boosting CNN performance on different tasks by using 'commentary' to learn meta-information. The obtained meta-information can also be used for other purposes, such as the mask of objects within a spurious background and the similarities among classes. The commentary module…
Effective training of deep neural networks can be challenging, and there remain many open questions on how to best learn these models. Recently developed methods to improve neural network training examine teaching: providing learned information during the training process to improve downstream model performance. In thi...
[ { "affiliations": [], "name": "Aniruddh Raghu" }, { "affiliations": [], "name": "Maithra Raghu" }, { "affiliations": [], "name": "Simon Kornblith" } ]
[ { "authors": [ "M.A. Badgeley", "J.R. Zech", "L. Oakden-Rayner", "B.S. Glicksberg", "M. Liu", "W. Gale", "M.V. McConnell", "B. Percha", "T.M. Snyder", "J.T. Dudley" ], "title": "Deep learning predicts hip fracture using confounding patient and ...
[ { "heading": "1 INTRODUCTION", "text": "Training, regularising, and understanding complex neural network models is challenging. There remain central open questions on making training faster and more data-efficient (Kornblith et al., 2019; Raghu et al., 2019a;b), ensuring better generalisation (Zhang et al.,...
2021
null
SP:19e2493d7bdb4be73c3b834affdb925201243aef
[ "It is well-known that neural networks (NN) perform very well in various areas and in particular if one looks at computer vision convolutional neural networks perform very well. Although convolutional neural networks (CNN) are limited in their architecture (since they only allow nearest-neighbour connections) compa...
Convolutional neural networks (CNN) exhibit unmatched performance in a multitude of computer vision tasks. However, the advantage of using convolutional networks over fully-connected networks is not understood from a theoretical perspective. In this work, we show how convolutional networks can leverage locality in the ...
[ { "affiliations": [], "name": "FULLY-CONNECTED NETWORKS" }, { "affiliations": [], "name": "Eran Malach" }, { "affiliations": [], "name": "Shai Shalev-Shwartz" } ]
[ { "authors": [ "Emmanuel Abbe", "Colin Sandon" ], "title": "Provable limitations of deep learning", "venue": "arXiv preprint arXiv:1812.06369,", "year": 2018 }, { "authors": [ "Peter L Bartlett", "Nick Harvey", "Christopher Liaw", "Abbas Mehrabian" ...
[ { "heading": "1 INTRODUCTION", "text": "Convolutional neural networks (LeCun et al., 1998; Krizhevsky et al., 2012) achieve state-of-the-art performance on every possible task in computer vision. However, while the empirical success of convolutional networks is indisputable, the advantage of using them is n...
2021
null
SP:b7b4e29defc84ee37a5a4dcaf2d393363c153b52
[ "This paper studies short, chaotic time series and uses Takens' theorem to discover the causality between two time series. The main challenge is that for short time series, the delay embedding is not possible. Thus, the authors propose to fit a latent neural ODE and theoretically argue that they can use the Neu…
Discovering causal structures of temporal processes is a major tool of scientific inquiry because it helps us better understand and explain the mechanisms driving a phenomenon of interest, thereby facilitating analysis, reasoning, and synthesis for such systems. However, accurately inferring causal structures within a ...
[ { "affiliations": [], "name": "Edward De Brouwer" }, { "affiliations": [], "name": "Adam Arany" }, { "affiliations": [], "name": "Yves Moreau" } ]
[ { "authors": [ "Mohammad Taha Bahadori", "Yan Liu" ], "title": "Granger causality analysis in irregular time series", "venue": "In Proceedings of the 2012 SIAM International Conference on Data Mining,", "year": 2012 }, { "authors": [ "Zsigmond Benkő", "Adám Zlatni...
[ { "heading": "1 INTRODUCTION", "text": "Inferring a right causal model of a physical phenomenon is at the heart of scientific inquiry. It is fundamental to how we understand the world around us and to predict the impact of future interventions (Pearl, 2009). Correctly inferring causal pathways helps us reas...
2021
LATENT CONVERGENT CROSS MAPPING
SP:474e2b9be8a3ec69a48c4ccd04a7e390ebb96347
[ "There have been multiple attempts to use self-attention in computer vision backbones for image classification and object detection. Most of these approaches either tried to combine convolution with global self-attention, or replace it completely with local self-attention operation. The proposed approach naturally ...
Recently, a series of works in computer vision have shown promising results on various image and video understanding tasks using self-attention. However, due to the quadratic computational and memory complexities of self-attention, these works either apply attention only to low-resolution feature maps in later stages o...
[]
[ { "authors": [ "cent Vanhoucke", "Vijay Vasudevan", "Fernanda Viégas", "Oriol Vinyals", "Pete Warden", "Martin Wattenberg", "Martin Wicke", "Yuan Yu", "Xiaoqiang Zheng" ], "title": "TensorFlow: Large-scale machine learning on heterogeneous systems", ...
[ { "heading": "1 INTRODUCTION", "text": "Self-attention is a mechanism in neural networks that focuses on modeling long-range dependencies. Its advantage in terms of establishing global dependencies over other mechanisms, e.g., convolution and recurrence, has made it prevalent in modern deep learning. In com...
2020
null
SP:bf70c9e16933774746d621a5b8475843e723ac24
[ "In the context of deep learning, back-propagation is stochastic at the sample level to attain better efficiency than full-dataset gradient descent. The authors ask whether we can further randomize the gradient computation within each single minibatch/sample with the goal of achieving strong model accuracy. In modern de…
The successes of deep learning, variational inference, and many other fields have been aided by specialized implementations of reverse-mode automatic differentiation (AD) to compute gradients of mega-dimensional objectives. The AD techniques underlying these tools were designed to compute exact gradients to numerical p...
[ { "affiliations": [], "name": "Deniz Oktay" }, { "affiliations": [], "name": "Nick McGreivy" }, { "affiliations": [], "name": "Joshua Aduol" }, { "affiliations": [], "name": "Alex Beatson" }, { "affiliations": [], "name": "Ryan P. Adams" } ]
[ { "authors": [ "Martín Abadi", "Paul Barham", "Jianmin Chen", "Zhifeng Chen", "Andy Davis", "Jeffrey Dean", "Matthieu Devin", "Sanjay Ghemawat", "Geoffrey Irving", "Michael Isard" ], "title": "Tensorflow: A system for large-scale machine learni...
[ { "heading": "1 INTRODUCTION", "text": "Deep neural networks have taken center stage as a powerful way to construct and train massivelyparametric machine learning (ML) models for supervised, unsupervised, and reinforcement learning tasks. There are many reasons for the resurgence of neural networks—large da...
2021
RANDOMIZED AUTOMATIC DIFFERENTIATION
SP:5b707bffe506d9556ffedbe49425c57d0e21c9fa
[ "This paper studies the multi-source domain adaptation problem. The authors examine the existing MDA solutions, i.e. using a domain discriminator for each source-target pair, and argue that the existing ones are likely to distribute the domain-discriminative information across multiple discriminators. By theoretica...
Adversarial learning strategy has demonstrated remarkable performance in dealing with single-source unsupervised Domain Adaptation (DA) problems, and it has recently been applied to multi-source DA problems. Although most existing DA methods use multiple domain discriminators, the effect of using multiple discriminator...
[]
[ { "authors": [ "Alexander A Alemi", "Ian Fischer", "Joshua V Dillon", "Kevin Murphy" ], "title": "Deep variational information bottleneck", "venue": "arXiv preprint arXiv:1612.00410,", "year": 2016 }, { "authors": [ "Orly Alter", "Patrick O Brown", ...
[ { "heading": "1 INTRODUCTION", "text": "Although a large number of studies have demonstrated the ability of deep neural networks to solve challenging tasks, the tasks solved by networks are mostly confined to a similar type or a single domain. One remaining challenge is the problem known as domain shift (Gr...
2020
null
SP:825132782872f2167abd5e45773bfdef83e4bb2e
[ "This paper tackles the problem of geometrical and topological 3D reconstruction of a (botanical) tree using a drone-mounted stereo vision system and deep learning-based/aided tree branch image annotation procedures. This is an interesting computer vision 3D reconstruction task, which has important practical appl...
We tackle the challenging problem of creating full and accurate three dimensional reconstructions of botanical trees with the topological and geometric accuracy required for subsequent physical simulation, e.g. in response to wind forces. Although certain aspects of our approach would benefit from various improvements,...
[ { "affiliations": [], "name": "SIMULATABLE GEOMETRY" } ]
[ { "authors": [ "Sameer Agarwal", "Noah Snavely", "Ian Simon", "Steven M Seitz", "Richard Szeliski" ], "title": "Building rome in a day", "venue": "In Computer Vision,", "year": 2009 }, { "authors": [ "Iro Armeni", "Ozan Sener", "Amir R Zami...
[ { "heading": "1 INTRODUCTION", "text": "Human-inhabited outdoor environments typically contain ground surfaces such as grass and roads, transportation vehicles such as cars and bikes, buildings and structures, and humans themselves, but are also typically intentionally populated by a large number of trees a...
2020
null
SP:8e3a07ed19e7b0c677aae1106da801d246f5aa0c
[ "This paper addresses the task of adversarial defense, particularly against untargeted attacks. It starts from the observation that these attacks mostly minimize the perturbation and the classification loss, and proposes a new training strategy named Target Training. The method duplicates training examples with a spe…
Recent adversarial defense approaches have failed. Untargeted gradient-based attacks cause classifiers to choose any wrong class. Our novel white-box defense tricks untargeted attacks into becoming attacks targeted at designated target classes. From these target classes, we derive the real classes. The Target Training ...
[]
[ { "authors": [ "Dario Amodei", "Chris Olah", "Jacob Steinhardt", "Paul F. Christiano", "John Schulman", "Dan Mané" ], "title": "Concrete problems in AI safety", "venue": "CoRR, abs/1606.06565,", "year": 2016 }, { "authors": [ "Anish Athalye", ...
[ { "heading": "1 INTRODUCTION", "text": "Neural network classifiers are vulnerable to malicious adversarial samples that appear indistinguishable from original samples (Szegedy et al., 2013), for example, an adversarial attack can make a traffic stop sign appear like a speed limit sign (Eykholt et al., 2018)...
2020
TARGET TRAINING: TRICKING ADVERSARIAL ATTACKS TO FAIL
SP:8e2ac7405015f9d2d59c4a511df83d796ac00a9e
[ "This paper proposes the signal propagation plot (spp) which is a tool for analyzing residual networks and analyzes ResNet with/without BN. Based on the investigation, the authors first provide ResNet results without normalization with the proposed scaled weight standardization. Furthermore, the authors provide a b...
Batch Normalization is a key component in almost all state-of-the-art image classifiers, but it also introduces practical challenges: it breaks the independence between training examples within a batch, can incur compute and memory overhead, and often results in unexpected bugs. Building on recent theoretical analyses ...
[ { "affiliations": [], "name": "Andrew Brock" }, { "affiliations": [], "name": "Soham De" }, { "affiliations": [], "name": "Samuel L. Smith" } ]
[ { "authors": [ "Devansh Arpit", "Yingbo Zhou", "Bhargava Kota", "Venu Govindaraju" ], "title": "Normalization propagation: A parametric technique for removing internal covariate shift in deep networks", "venue": "In International Conference on Machine Learning,", "year": ...
[ { "heading": null, "text": "Batch Normalization is a key component in almost all state-of-the-art image classifiers, but it also introduces practical challenges: it breaks the independence between training examples within a batch, can incur compute and memory overhead, and often results in unexpected bugs. ...
2021
CHARACTERIZING SIGNAL PROPAGATION TO CLOSE THE PERFORMANCE GAP IN UNNORMALIZED RESNETS
SP:206600e5bfcc9ccd494b82995a7898ae81a4e0bf
[ "The paper focuses on sample importance in adversarial training. The authors first reveal that over-parameterized deep models on natural data may have insufficient model capacity for adversarial data, because the training loss is hard to drive to zero in adversarial training. Then, the authors argue that limit…
In adversarial machine learning, there was a common belief that robustness and accuracy hurt each other. The belief was challenged by recent studies where we can maintain the robustness and improve the accuracy. However, the other direction, we can keep the accuracy and improve the robustness, is conceptually and pract...
[ { "affiliations": [], "name": "SARIAL TRAINING" }, { "affiliations": [], "name": "Jingfeng Zhang" }, { "affiliations": [], "name": "Jianing Zhu" }, { "affiliations": [], "name": "Gang Niu" }, { "affiliations": [], "name": "Bo Han" }, { "affiliations": ...
[ { "authors": [ "Mislav Balunovic", "Martin Vechev" ], "title": "Adversarial training and provable defenses: Bridging the gap", "venue": "In ICLR,", "year": 2020 }, { "authors": [ "Denni D. Boos", "L.A. Stefanski" ], "title": "M-Estimation (Estimating Equat...
[ { "heading": "1 INTRODUCTION", "text": "Crafted adversarial data can easily fool the standard-trained deep models by adding humanimperceptible noise to the natural data, which leads to the security issue in applications such as medicine, finance, and autonomous driving (Szegedy et al., 2014; Nguyen et al., ...
2021
GEOMETRY-AWARE INSTANCE-REWEIGHTED ADVERSARIAL TRAINING
SP:d729aacc2cd3f97011a04360a252ca7cb0489354
[ "This paper considers adopting continual learning on the problem of causal effect estimation. The paper combines methods and algorithms for storing feature representation and representative samples (herding algorithm), avoiding drifting feature representation when new data is learned (feature representation distill...
The era of real world evidence has witnessed an increasing availability of observational data, which much facilitates the development of causal effect inference. Although significant advances have been made to overcome the challenges in causal effect estimation, such as missing counterfactual outcomes and selection bia...
[]
[ { "authors": [ "Ahmed M Alaa", "Mihaela van der Schaar" ], "title": "Bayesian inference of individualized treatment effects using multi-task gaussian processes", "venue": "In Advances in Neural Information Processing Systems,", "year": 2017 }, { "authors": [ "Hugh A Chi...
[ { "heading": "1 INTRODUCTION", "text": "Causal effect inference is a critical research topic across many domains, such as statistics, computer science, public policy, and economics. Randomized controlled trials (RCT) are usually considered as the gold-standard for causal effect inference, which randomly ass...
2020
CONTINUAL LIFELONG CAUSAL EFFECT INFERENCE
SP:864d98472c237daf2b227692c4765af9a89886cd
[ "In this paper, the authors study the problem of GCN for disassortative graphs. The authors proposed the GNAN method to allow attention on distant nodes indeed of limiting to local neighbors. The authors generalized the idea of graph wavelet with MLP to generate the attention score and utilized it to generate multi...
Graph neural networks (GNNs) have been extensively studied for prediction tasks on graphs. Most GNNs assume local homophily, i.e., strong similarities in local neighborhoods. This assumption limits the generalizability of GNNs, which has been demonstrated by recent work on disassortative graphs with weak local homophil...
[]
[ { "authors": [ "Filippo Maria Bianchi", "Daniele Grattarola", "Lorenzo Livi", "Cesare Alippi" ], "title": "Graph neural networks with convolutional ARMA filters", "venue": null, "year": 1901 }, { "authors": [ "Heng Chang", "Yu Rong", "Tingyang Xu...
[ { "heading": "1 INTRODUCTION", "text": "Graph neural networks (GNNs) have recently demonstrated great power in graph-related learning tasks, such as node classification (Kipf & Welling, 2017), link prediction (Zhang & Chen, 2018) and graph classification (Lee et al., 2018). Most GNNs follow a message-passin...
2020
null
SP:28a5570540fa769396ee73c14c25ada9669dd95f
[ "The paper presents a post-hoc calibration method for deep neural net classification. The method proposes to first reduces the well-known ECE score to a special case of the Kolmogorov-Smirnov (KS) test, and this way solves the dependency of ECE on the limiting binning assumption. The method proposes next to recalib...
Calibrating neural networks is of utmost importance when employing them in safety-critical applications where the downstream decision making depends on the predicted probabilities. Measuring calibration error amounts to comparing two empirical distributions. In this work, we introduce a binning-free calibration measure...
[ { "affiliations": [], "name": "Kartik Gupta" }, { "affiliations": [], "name": "Amir Rahimi" }, { "affiliations": [], "name": "Thalaiyasingam Ajanthan" }, { "affiliations": [], "name": "Thomas Mensink" }, { "affiliations": [], "name": "Cristian Sminchisescu" ...
[ { "authors": [ "Glenn W Brier" ], "title": "Verification of forecasts expressed in terms of probability", "venue": "Monthly weather review,", "year": 1950 }, { "authors": [ "Jia Deng", "Wei Dong", "Richard Socher", "Li-Jia Li", "Kai Li", "Li Fei-...
[ { "heading": "1 INTRODUCTION", "text": "Despite the success of modern neural networks they are shown to be poorly calibrated (Guo et al. (2017)), which has led to a growing interest in the calibration of neural networks over the past few years (Kull et al. (2019); Kumar et al. (2019; 2018); Müller et al. (2...
2021
null
SP:cdc407d403e1008ced29c7cda727db0d631cc966
[ "This paper proposes ProxylessKD method from a novel perspective of knowledge distillation. Instead of minimizing the outputs of teacher and student models, ProxylessKD adopts a shared classifier for two models. The shared classifier yields better aligned embedding space, so the embeddings from teacher and student ...
Knowledge Distillation (KD) refers to transferring knowledge from a large model to a smaller one, which is widely used to enhance model performance in machine learning. It tries to align embedding spaces generated from the teacher and the student model (i.e. to make images corresponding to the same semantics share the ...
[ { "affiliations": [], "name": "FACE RECOGNI" } ]
[ { "authors": [ "Umar Asif", "Jianbin Tang", "Stefan Harrer" ], "title": "Ensemble knowledge distillation for learning improved and efficient networks", "venue": "arXiv preprint arXiv:1909.08097,", "year": 2019 }, { "authors": [ "Guobin Chen", "Wongun Choi", ...
[ { "heading": "1 INTRODUCTION", "text": "Knowledge Distillation (KD) is a process of transferring knowledge from a large model to a smaller one. This technique is widely used to enhance model performance in many machine learning tasks such as image classification (Hinton et al., 2015), object detection (Chen...
2020
PROXYLESSKD: DIRECT KNOWLEDGE DISTILLATION
SP:a15d5230fecc1dad8998905f17c82cf8e05c98d3
[ "This paper proposes a contrastive learning approach where one of the views, x, is converted into two subviews, x' and x'', and then separate InfoNCE style bounds constructed for each of I(x'';y) and I(x';y|x'') before being combined to form an overall training objective. Critically, the second of these is based o...
Many self-supervised representation learning methods maximize mutual information (MI) across views. In this paper, we transform each view into a set of subviews and then decompose the original MI bound into a sum of bounds involving conditional MI between the subviews. E.g., given two views x and y of the same input ex...
[]
[ { "authors": [ "Philip Bachman", "R Devon Hjelm", "William Buchwalter" ], "title": "Learning representations by maximizing mutual information across views", "venue": "In Proc. Conf. on Neural Information Processing Systems (NeurIPS),", "year": 2019 }, { "authors": [ ...
[ { "heading": "1 INTRODUCTION", "text": "The ability to extract actionable information from data in the absence of explicit supervision seems to be a core prerequisite for building systems that can, for instance, learn from few data points or quickly make analogies and transfer to other tasks. Approaches to ...
2020
null
SP:70bed0f6f729c03edcb03678fca53e1d82fc06ab
[ "The paper proposes a continual learning framework based on Bayesian non-parametric approach. The hidden layer is modeled using Indian Buffet Process prior. The inference uses a structured mean-field approximation with a Gaussian family for the weights, and Beta-Bernoulli for the task-masks. The variational infe...
Continual Learning is a learning paradigm where learning systems are trained on a sequence of tasks. The goal here is to perform well on the current task without suffering from a performance drop on the previous tasks. Two notable directions among the recent advances in continual learning with neural networks are (1) v...
[]
[ { "authors": [ "Tameem Adel", "Han Zhao", "Richard E. Turner" ], "title": "Continual learning with adaptive weights (claw)", "venue": "In International Conference on Learning Representations,", "year": 2020 }, { "authors": [ "Hongjoon Ahn", "Sungmin Cha", ...
[ { "heading": "1 INTRODUCTION", "text": "Continual learning (CL) (Ring, 1997; Parisi et al., 2019) is the learning paradigm where a single model is subjected to a sequence of tasks. At any point of time, the model is expected to (i) make predictions for the tasks it has seen so far, (ii) if subjected to trai...
2020
null
SP:d27e98774183ece8d82b87f1e7067bf2a28a4fca
[ "This paper describes a system for separating \"on-screen\" sounds from \"off-screen\" sounds in an audio-visual task, meaning sounds that are associated with objects that are visible in a video versus not. It is trained to do this using mixture invariant training to separate synthetic mixtures of mixtures. It is e...
Recent progress in deep learning has enabled many advances in sound separation and visual scene understanding. However, extracting sound sources which are apparent in natural videos remains an open problem. In this work, we present AudioScope, a novel audio-visual sound separation framework that can be trained without ...
[ { "affiliations": [], "name": "ON-SCREEN SOUNDS" }, { "affiliations": [], "name": "Efthymios Tzinis" }, { "affiliations": [], "name": "Scott Wisdom" }, { "affiliations": [], "name": "Aren Jansen" }, { "affiliations": [], "name": "Shawn Hershey" }, { "a...
[ { "authors": [ "Triantafyllos Afouras", "Andrew Owens", "Joon Son Chung", "Andrew Zisserman" ], "title": "Self-supervised learning of audio-visual objects from video", "venue": "In Proc. European Conference on Computer Vision (ECCV),", "year": 2020 }, { "authors":...
[ { "heading": "1 INTRODUCTION", "text": "Audio-visual machine perception has been undergoing a renaissance in recent years driven by advances in large-scale deep learning. A motivating observation is the interplay in human perception between auditory and visual perception. We understand the world by parsing ...
2021
null
SP:958f2aacb0790ffe7399fd918c023c7e4e4c314c
[ "The paper is generally well presented. However, a main issue is that the optimization algorithms for the l0-norm regularized problems (Section 3.1.2 and Section 3.2) are not correctly presented. Specifically, in the algorithm development to solve the \"Fix $\\boldsymbol{R}$, optimize $\\boldsymbol{Y}$\" subproblem...
Deep models have achieved great success in many applications. However, vanilla deep models are not well-designed against the input perturbation. In this work, we take an initial step to design a simple robust layer as a lightweight plug-in for vanilla deep models. To achieve this goal, we first propose a fast sparse co...
[]
[ { "authors": [ "Chenglong Bao", "Jian-Feng Cai", "Hui Ji" ], "title": "Fast sparsity-based orthogonal dictionary learning for image restoration", "venue": "In Proceedings of the IEEE International Conference on Computer Vision, pp", "year": 2013 }, { "authors": [ ...
[ { "heading": "1 INTRODUCTION", "text": "Deep neural networks have obtained a great success in many applications, including computer vision, reinforcement learning (RL) and natural language processing, etc. However, vanilla deep models are not robust to noise perturbations of the input. Even a small perturba...
2020
A SIMPLE SPARSE DENOISING LAYER FOR ROBUST DEEP LEARNING
SP:33673a515722e1d8288fd3014e7db507b7250b20
[ "The paper under review proposes a new model for multi-dimensional temporal Point processes, allowing efficient estimation of high order interactions. This new model, called additive Poisson process, relies on a log-linear structure of the intensity function that is motivated thanks to the Kolmogorov-Arnold theorem...
We present the Additive Poisson Process (APP), a novel framework that can model the higher-order interaction effects of the intensity functions in point processes using lower dimensional projections. Our model combines the techniques in information geometry to model higher-order interactions on a statistical manifold a...
[]
[ { "authors": [ "Alan Agresti" ], "title": "Categorical Data Analysis", "venue": "Wiley, 3 edition,", "year": 2012 }, { "authors": [ "S. Amari" ], "title": "Information geometry on hierarchy of probability distributions", "venue": "IEEE Transactions on Information ...
[ { "heading": null, "text": "We present the Additive Poisson Process (APP), a novel framework that can model the higher-order interaction effects of the intensity functions in point processes using lower dimensional projections. Our model combines the techniques in information geometry to model higher-order ...
2020
null
SP:e6e46c0563e852189839b2f923788165800a0f17
[ "This paper provides an approach for treatment effect estimation when the observational data is longitudinal (with irregular time stamps) and consists of temporal confounding variables. The proposed method can be categorized under the matching methods, in which, in order to estimate the counterfactual outcomes, a s...
Estimating causal treatment effects using observational data is a problem with few solutions when the confounder has a temporal structure, e.g. the history of disease progression might impact both treatment decisions and clinical outcomes. For such a challenging problem, it is desirable for the method to be transparent...
[]
[ { "authors": [ "Alberto Abadie" ], "title": "Using synthetic controls: Feasibility, data requirements, and methodological aspects", "venue": "Journal of Economic Literature,", "year": 2019 }, { "authors": [ "Alberto Abadie", "Javier Gardeazabal" ], "title": "The...
[ { "heading": "1 INTRODUCTION", "text": "Estimating the causal individual treatment effect (ITE) on patient outcomes using observational data (observational studies) has become a promising alternative to clinical trials as large-scale electronic health records become increasingly available (Booth & Tannock, ...
2020
SYNCTWIN: TRANSPARENT TREATMENT EFFECT ESTIMATION UNDER TEMPORAL CONFOUNDING
SP:8997ab419d35acd51ef50ef6265e5c37c468a2ac
[ "This paper proposes a method for obtaining probably-approximately correct (PAC) predictions given a pre-trained classifier. The PAC intervals are connected to calibration, and take the form of confidence intervals given the bin a prediction falls in. They demonstrate and explore two use cases: applying this techni...
A key challenge for deploying deep neural networks (DNNs) in safety critical settings is the need to provide rigorous ways to quantify their uncertainty. In this paper, we propose a novel algorithm for constructing predicted classification confidences for DNNs that comes with provable correctness guarantees. Our approa...
[ { "affiliations": [], "name": "Sangdon Park" }, { "affiliations": [], "name": "Shuo Li" }, { "affiliations": [], "name": "Insup Lee" }, { "affiliations": [], "name": "Osbert Bastani" } ]
[ { "authors": [ "A.K. Akametalu", "J.F. Fisac", "J.H. Gillula", "S. Kaynama", "M.N. Zeilinger", "C.J. Tomlin" ], "title": "Reachability-based safe learning with gaussian processes", "venue": "In 53rd IEEE Conference on Decision and Control,", "year": 2014 }, ...
[ { "heading": "1 INTRODUCTION", "text": "Due to the recent success of machine learning, there has been increasing interest in using predictive models such as deep neural networks (DNNs) in safety-critical settings, such as robotics (e.g., obstacle detection (Ren et al., 2015) and forecasting (Kitani et al., ...
2021
PAC CONFIDENCE PREDICTIONS FOR DEEP NEURAL NETWORK CLASSIFIERS
SP:4c82d9d12ec6a9f171c4281739776da18bcc2906
[ "of contribution: The authors propose an interesting approach to address the sample-efficiency issue in Neural Architecture Search (NAS). Compared to other existing predictor based methods, the approach distinguishes itself by progressive shrinking the search space. The paper correctly identifies the sampling is an...
Neural Architecture Search (NAS) finds the best network architecture by exploring the architecture-to-performance manifold. It often trains and evaluates a large number of architectures, causing tremendous computation cost. Recent predictor-based NAS approaches attempt to solve this problem with two key steps: sampling ...
[]
[ { "authors": [ "Thomas Chau", "Łukasz Dudziak", "Mohamed S Abdelfattah", "Royson Lee", "Hyeji Kim", "Nicholas D Lane" ], "title": "Brp-nas: Prediction-based nas using gcns", "venue": "arXiv preprint arXiv:2007.08668,", "year": 2020 }, { "authors": [ ...
[ { "heading": "1 INTRODUCTION", "text": "Neural Architecture Search (NAS) has become a central topic in recent years with great progress (Liu et al., 2018b; Luo et al., 2018; Wu et al., 2019; Howard et al., 2019; Ning et al., 2020; Wei et al., 2020; Luo et al., 2018; Wen et al., 2019; Chau et al., 2020; Luo ...
2020
WEAK NAS PREDICTOR IS ALL YOU NEED
SP:720f167592297c58d88272599fb66978f3ae8001
[ "This paper studies the problem of gradient attack in deep learning models. In particular, this paper tries to form a system of linear equations to find a training data point when the gradient of the deep learning model with respect to that data point is available. The algorithm for finding the data point is calle...
Federated learning frameworks have been regarded as a promising approach to break the dilemma between demands on privacy and the promise of learning from large collections of distributed data. Many such frameworks only ask collaborators to share their local update of a common model, i.e. gradients, instead of exposing ...
[ { "affiliations": [], "name": "Junyi Zhu" } ]
[ { "authors": [ "Peter L Bartlett", "Michael I Jordan", "Jon D McAuliffe" ], "title": "Convexity, classification, and risk bounds", "venue": "Journal of the American Statistical Association,", "year": 2006 }, { "authors": [ "Emiliano De Cristofaro" ], "titl...
[ { "heading": "1 INTRODUCTION", "text": "Distributed and federated learning have become common strategies for training neural networks without transferring data (Jochems et al., 2016; 2017; Konečný et al., 2016; McMahan et al., 2017). Instead, model updates, often in the form of gradients, are exchanged be...
2021
R-GAP: RECURSIVE GRADIENT ATTACK ON PRIVACY
SP:6cf84af3e1ae0c84dc251ba41a5acb3dc7f61645
[ "Considering a continuous time RNN with Lipschitz-continuous nonlinearity, the authors formulate sufficient conditions on the parameter matrices for the network to be globally stable, in the sense of a globally attracting fixed point. They provide a specific parameterization for the hidden-to-hidden weight matrices...
Viewing recurrent neural networks (RNNs) as continuous-time dynamical systems, we propose a recurrent unit that describes the hidden state’s evolution with two parts: a well-understood linear component plus a Lipschitz nonlinearity. This particular functional form facilitates stability analysis of the long-term behavio...
[ { "affiliations": [], "name": "Omri Azencot" }, { "affiliations": [], "name": "Alejandro Queiruga" }, { "affiliations": [], "name": "Michael W. Mahoney" } ]
[ { "authors": [ "Martin Arjovsky", "Amar Shah", "Yoshua Bengio" ], "title": "Unitary evolution recurrent neural networks", "venue": "In International Conference on Machine Learning,", "year": 2016 }, { "authors": [ "Yoshua Bengio", "Patrice Simard", "Pa...
[ { "heading": "1 INTRODUCTION", "text": "Many interesting problems exhibit temporal structures that can be modeled with recurrent neural networks (RNNs), including problems in robotics, system identification, natural language processing, and machine learning control. In contrast to feed-forward neural networ...
2021
null
SP:2ad12575818f72f453eb0c04c953a48be56e80e3
[ "In continual learning settings, one of the important technique for avoiding catastrophe forgetting is to replay data points from the past. For memory efficiency purposes, representative samples can be generated from a generative model, such as GANs, rather than replaying the original samples which can be large in ...
The two main impediments to continual learning are catastrophic forgetting and memory limitations on the storage of data. To cope with these challenges, we propose a novel, cognitively-inspired approach which trains autoencoders with Neural Style Transfer to encode and store images. During training on a new task, recon...
[ { "affiliations": [], "name": "CONTINUAL LEARNING" }, { "affiliations": [], "name": "Ali Ayub" }, { "affiliations": [], "name": "Alan R. Wagner" } ]
[ { "authors": [ "Ali Ayub", "Alan R. Wagner" ], "title": "Centroid based concept learning for rgb-d indoor scene classification", "venue": "In British Machine Vision Conference (BMVC),", "year": 2020 }, { "authors": [ "Ali Ayub", "Alan R. Wagner" ], "title"...
[ { "heading": "1 INTRODUCTION", "text": "Humans continue to learn new concepts over their lifetime without the need to relearn most previous concepts. Modern machine learning systems, however, require the complete training data to be available at one time (batch learning) (Girshick, 2015). In this paper, we ...
2021
null
SP:da8ca392a4eb366f4fdedb09d461ef804615b0b2
[ "In this paper, the authors propose a latent space regression method for analyzing and manipulating the latent space of pre-trained GAN models. Unlike existing optimization-based methods, an explicit latent code regressor is learned to map the input to the latent space. The authors apply this approach to several ap...
In recent years, Generative Adversarial Networks have become ubiquitous in both research and public perception, but how GANs convert an unstructured latent code to a high quality output is still an open question. In this work, we investigate regression into the latent space as a probe to understand the compositional pr...
[ { "affiliations": [], "name": "Lucy Chai" }, { "affiliations": [], "name": "Jonas Wulff" }, { "affiliations": [], "name": "Phillip Isola" } ]
[ { "authors": [ "Rameen Abdal", "Yipeng Qin", "Peter Wonka" ], "title": "Image2stylegan: How to embed images into the stylegan latent space", "venue": null, "year": 2019 }, { "authors": [ "Rameen Abdal", "Yipeng Qin", "Peter Wonka" ], "title": "...
[ { "heading": "1 INTRODUCTION", "text": "Natural scenes are comprised of disparate parts and objects that humans can easily segment and interchange (Biederman, 1987). Recently, unconditional generative adversarial networks (Karras et al., 2017; 2019b;a; Radford et al., 2015) have become capable of mimicking ...
2021
USING LATENT SPACE REGRESSION TO ANALYZE AND LEVERAGE COMPOSITIONALITY IN GANS
SP:c0e827c33dbc9378404fe2a0949198cb74f13688
[ "The authors propose a new way to aggregate the embeddings of elements in a set (or sequence) by comparing it with respect to (trainable) reference set(s) via Optimal Transport (OT). The motivation to build such a pooling operation is derived from self-attention and the authors suggest an OT spin to that (e.g., the...
We address the problem of learning on sets of features, motivated by the need of performing pooling operations in long biological sequences of varying sizes, with long-range dependencies, and possibly few labeled data. To address this challenging task, we introduce a parametrized representation of fixed size, which emb...
[ { "affiliations": [], "name": "Grégoire Mialon" }, { "affiliations": [], "name": "Dexiong Chen" }, { "affiliations": [], "name": "Alexandre d’Aspremont" }, { "affiliations": [], "name": "Julien Mairal" } ]
[]
[ { "heading": "1 INTRODUCTION", "text": "Many scientific fields such as bioinformatics or natural language processing (NLP) require processing sets of features with positional information (biological sequences, or sentences represented by a set of local features). These objects are delicate to manipulate due...
2021
A TRAINABLE OPTIMAL TRANSPORT EMBEDDING
SP:a85b6d598513c8e03a013fd20da6b19a1108f71e
[ "This paper extends and explains how to apply the \"free energy principle\" and active inference to RL and imitation learning. They implement a neural network approximation of losses derived this way and test on some control tasks. Importantly the tasks focus on here are imitation + control tasks. That is, there is...
Imitation Learning (IL) and Reinforcement Learning (RL) from high dimensional sensory inputs are often introduced as separate problems, but a more realistic problem setting is how to merge the techniques so that the agent can reduce exploration costs by partially imitating experts at the same time it maximizes its retu...
[]
[ { "authors": [ "Karl Friston" ], "title": "The free-energy principle: a unified brain theory", "venue": "Nature reviews neuroscience,", "year": 2010 }, { "authors": [ "Karl Friston", "James Kilner", "Lee Harrison" ], "title": "A free energy principle for t...
[ { "heading": "1 INTRODUCTION", "text": "Imitation Learning (IL) is a framework to learn a policy to mimic expert trajectories. As the expert specifies model behaviors, there is no need to do exploration or to design complex reward functions. Reinforcement Learning (RL) does not have these features, so RL ag...
2020
null
SP:69855e0bec141e9d15eec5cc37022f313e6600b2
[ "By the first look, this work itself does not introduce any new architecture or novel algorithm. It takes what is considered as the popular choices in generating classifier saliency masks, and conducts quite extensive sets of experiments to dissect the components by their importance. The writing is pretty clear in ...
Saliency maps that identify the most informative regions of an image for a classifier are valuable for model interpretability. A common approach to creating saliency maps involves generating input masks that mask out portions of an image to maximally deteriorate classification performance, or mask in an image to preser...
[ { "affiliations": [], "name": "SIMPLIFYING MASKING-BASED" } ]
[ { "authors": [ "Julius Adebayo", "Justin Gilmer", "Michael Muelly", "Ian Goodfellow", "Moritz Hardt", "Been Kim" ], "title": "Sanity checks for saliency maps", "venue": "In NeurIPS,", "year": 2018 }, { "authors": [ "Chirag Agarwal", "Anh Ng...
[ { "heading": "1 INTRODUCTION", "text": "The success of CNNs (Krizhevsky et al., 2012; Szegedy et al., 2015; He et al., 2016; Tan & Le, 2019) has prompted interest in improving understanding of how these models make their predictions. Particularly in applications such as medical diagnosis, having models expl...
2020
null
SP:e4e5b4e2bee43c920ed719dc331a370129845268
[ "The authors propose a model to improve the output distribution of neural nets in image classification problems. Their model is a post hoc procedure and is based on the tree structure of WordNet. The model revises the classifier output based on the distance of the labels in the tree. Intuitively, their solution is ...
There has been increasing interest in building deep hierarchy-aware classifiers that aim to quantify and reduce the severity of mistakes, and not just reduce the number of errors. The idea is to exploit the label hierarchy (e.g., the WordNet ontology) and consider graph distances as a proxy for mistake severity. Surpri...
[ { "affiliations": [], "name": "DEEP NETWORKS" }, { "affiliations": [], "name": "Shyamgopal Karthik" }, { "affiliations": [], "name": "Ameya Prabhu" }, { "affiliations": [], "name": "Puneet K. Dokania" } ]
[ { "authors": [ "Naoki Abe", "Bianca Zadrozny", "John Langford" ], "title": "An iterative method for multi-class cost-sensitive learning", "venue": "In KDD,", "year": 2004 }, { "authors": [ "Zeynep Akata", "Scott Reed", "Daniel Walter", "Honglak L...
[ { "heading": "1 INTRODUCTION", "text": "The conventional performance measure of accuracy for image classification treats all classes other than ground truth as equally wrong. However, some mistakes may have a much higher impact than others in real-world applications. An intuitive example being an autonomous...
2021
null
SP:7cc59c8f556d03597f7ab391ef14d1a96191a4db
[ "The design of a useful generalization of neural networks on quantum computers has been challenging because the gradient signal will decay exponentially with respect to the depth of the quantum circuit (saturating to exponentially small in system size after the depth is linear in system size). This work provides a ...
Quantum Neural Networks (QNNs) have been recently proposed as generalizations of classical neural networks to achieve the quantum speed-up. Despite the potential to outperform classical models, serious bottlenecks exist for training QNNs; namely, QNNs with random structures have poor trainability due to the vanishing g...
[ { "affiliations": [], "name": "TOWARD TRAINABILITY" } ]
[ { "authors": [ "Frank Arute", "Kunal Arya", "Ryan Babbush", "Dave Bacon", "Joseph C Bardin", "Rami Barends", "Rupak Biswas", "Sergio Boixo", "Fernando GSL Brandao", "David A Buell" ], "title": "Quantum supremacy using a programmable superconduc...
[ { "heading": "1 INTRODUCTION", "text": "Neural Networks (Hecht-Nielsen, 1992) using gradient-based optimizations have dramatically advanced researches in discriminative models, generative models, and reinforcement learning. To efficiently utilize the parameters and practically improve the trainability, neur...
2020
null
SP:8a8aa5f245c2fb82beddb19c82dddb8d67f66f8a
[ "In this paper, the authors introduce a class of games called Hidden Convex-Concave where a (stricly) convex-concave potential is composed with smooth maps. On this class of problems, they show that the continuous gradient dynamics converge to (a neighbordhood of) the minimax solutions of the problem. This is an ex...
Many recent AI architectures are inspired by zero-sum games, however, the behavior of their dynamics is still not well understood. Inspired by this, we study standard gradient descent ascent (GDA) dynamics in a specific class of non-convex nonconcave zero-sum games, that we call hidden zero-sum games. In this class, pl...
[ { "affiliations": [], "name": "Lampros Flokas" } ]
[ { "authors": [ "Jacob Abernethy", "Kevin A Lai", "Kfir Y Levy", "Jun-Kun Wang" ], "title": "Faster rates for convex-concave games", "venue": "In COLT,", "year": 2018 }, { "authors": [ "Jacob Abernethy", "Kevin A Lai", "Andre Wibisono" ], ...
[ { "heading": "1 Introduction", "text": "Traditionally, our understanding of convex-concave games revolves around von Neumann’s celebrated minimax theorem, which implies the existence of saddle point solutions with a uniquely defined value. These solutions are called von Nemann solutions and guarantee to eac...
2022
Solving Min-Max Optimization with Hidden Structure via Gradient Descent Ascent
SP:5e9b5c3ee27cf90eb73e2672a1bbf18a1b12e791
[ "This paper shows a correspondence between deep neural networks (DNN) trained with noisy gradients and NNGP. It provides a general analytical form for the finite width correction (FWC) for NNSP expanding around NNGP. Finally, it argues that this FWC can be used to explain why finite width CNNs can improve the perfo...
A recent line of work studied wide deep neural networks (DNNs) by approximating them as Gaussian Processes (GPs). A DNN trained with gradient flow was shown to map to a GP governed by the Neural Tangent Kernel (NTK), whereas earlier works showed that a DNN with an i.i.d. prior over its weights maps to the so-called Neu...
[ { "affiliations": [], "name": "NOISY GRADIENTS" } ]
[ { "authors": [ "Sanjeev Arora", "Simon S. Du", "Wei Hu", "Zhiyuan Li", "Ruslan Salakhutdinov", "Ruosong Wang" ], "title": "On Exact Computation with an Infinitely Wide Neural Net", "venue": "arXiv e-prints, art", "year": 2019 }, { "authors": [ "R...
[ { "heading": "1 INTRODUCTION", "text": "Deep neural networks (DNNs) have been rapidly advancing the state-of-the-art in machine learning, yet a complete analytic theory remains elusive. Recently, several exact results were obtained in the highly over-parameterized regime (N →∞ where N denotes the width or n...
2020
null
SP:95899f38fd0f1789510e67178b587c08a14203f5
[ "This paper proposes adding regularization terms to encourage diversity of the layer outputs in order to improve the generalization performance. The proposed idea is an extension of Cogswell's work with different regularization terms. In addition, the authors performed detailed generalization analysis based on the ...
During the last decade, neural networks have been intensively used to tackle various problems and they have often led to state-of-the-art results. These networks are composed of multiple jointly optimized layers arranged in a hierarchical structure. At each layer, the aim is to learn to extract hidden patterns needed t...
[]
[ { "authors": [ "Madhu S Advani", "Andrew M Saxe", "Haim Sompolinsky" ], "title": "High-dimensional dynamics of generalization error in neural networks", "venue": "Neural Networks,", "year": 2020 }, { "authors": [ "Sanjeev Arora", "Rong Ge", "Behnam Ney...
[ { "heading": "1 INTRODUCTION", "text": "Neural networks are a powerful class of non-linear function approximators that have been successfully used to tackle a wide range of problems. They have enabled breakthroughs in many tasks, such as image classification (Krizhevsky et al., 2012), speech recognition (Hi...
2,020
ON NEURAL NETWORK GENERALIZATION VIA PROMOTING WITHIN-LAYER ACTIVATION DIVERSITY
SP:4fd499ebe9fddb6a3f57663d76bb7bf3b5f29ef7
[ "The proposed NDP has two main advantages: 1- it has the capability to adapt the incoming data points in time-series (unlike NODE) without retraining, 2- it can provide a measure of uncertainty for the underlying dynamics of the time-series. NDP partitions the global latent context $z$ to a latent position $l$ and ...
Neural Ordinary Differential Equations (NODEs) use a neural network to model the instantaneous rate of change in the state of a system. However, despite their apparent suitability for dynamics-governed time-series, NODEs present a few disadvantages. First, they are unable to adapt to incoming data-points, a fundamental...
[ { "affiliations": [], "name": "Alexander Norcliffe" }, { "affiliations": [], "name": "Cristian Bodnar" }, { "affiliations": [], "name": "Ben Day" }, { "affiliations": [], "name": "Jacob Moss" }, { "affiliations": [], "name": "Pietro Liò" } ]
[ { "authors": [ "Francesco Paolo Casale", "Adrian V Dalca", "Luca Saglietti", "Jennifer Listgarten", "Nicolo Fusi" ], "title": "Gaussian Process Prior Variational Autoencoders", "venue": "arXiv e-prints, art", "year": 2018 }, { "authors": [ "Ricky TQ Ch...
[ { "heading": "1 INTRODUCTION", "text": "Many time-series that arise in the natural world, such as the state of a harmonic oscillator, the populations in an ecological network or the spread of a disease, are the product of some underlying dynamics. Sometimes, as in the case of a video of a swinging pendulum,...
2,021
NEURAL ODE PROCESSES
SP:1c2c08605956eb4660a8f8a33ce13e80276582ed
[ "This paper proposes a data-driven approach to choose an informative surrogate sub-dataset, termed \"a \\epsilon-approximation\", from the original data set. A meta-learning algorithm called Kernel Inducing Points (KIP ) is proposed to obtain such sub-datasets for (Linear) Kernel Ridge Regression (KRR), with the p...
One of the most fundamental aspects of any machine learning algorithm is the training data used by the algorithm. We introduce the novel concept of ε-approximation of datasets, obtaining datasets which are much smaller than or are significant corruptions of the original training data while maintaining similar model perfo...
[ { "affiliations": [], "name": "Timothy Nguyen" }, { "affiliations": [], "name": "Zhourong Chen" }, { "affiliations": [], "name": "Jaehoon Lee" } ]
[ { "authors": [ "Martin Abadi", "Andy Chu", "Ian Goodfellow", "H. Brendan McMahan", "Ilya Mironov", "Kunal Talwar", "Li Zhang" ], "title": "Deep learning with differential privacy", "venue": "Proceedings of the 2016 ACM SIGSAC Conference on Computer and Commu...
[ { "heading": null, "text": "One of the most fundamental aspects of any machine learning algorithm is the training data used by the algorithm. We introduce the novel concept of ε-approximation of datasets, obtaining datasets which are much smaller than or are significant corruptions of the original training ...
2,021
null
SP:c06539b9986064977dec933dcce4b81d42f47cc2
[ "This paper focuses on the problem of multi-agent cooperation in social dilemmas, in which mutual defection is individually rational but collectively suboptimal. The authors use the bias toward status-quo in human psychology to motivate a new training method, called SQLoss: 1) for repeated matrix games, each agent ...
Individual rationality, which involves maximizing expected individual return, does not always lead to optimal individual or group outcomes in multi-agent problems. For instance, in social dilemma situations, Reinforcement Learning (RL) agents trained to maximize individual rewards converge to mutual defection that is i...
[]
[ { "authors": [ "Dilip Abreu", "David Pearce", "Ennio Stacchetti" ], "title": "Toward a theory of discounted repeated games with imperfect monitoring", "venue": "URL http://www.jstor.org/stable/2938299", "year": 1990 }, { "authors": [ "Robert Axelrod" ], "t...
[ { "heading": "1 INTRODUCTION", "text": "In sequential social dilemmas, individually rational behavior leads to outcomes that are sub-optimal for each individual in the group (Hardin, 1968; Ostrom, 1990; Ostrom et al., 1999; Dietz et al., 2003). Current state-of-the-art Multi-Agent Deep Reinforcement Learnin...
2,020
null
SP:72f379cefb57913386cbd76978943bdc8d0545a7
[ "The work uses diffusion probabilistic models for conditional speech synthesis tasks, specifically to convert mel-spectrogram to the raw audio waveform. Results from the proposed approach match the state-of-the-art WaveRNN model. The paper is very well-written and it is quite easy to follow. The study of the total ...
This paper introduces WaveGrad, a conditional model for waveform generation which estimates gradients of the data density. The model is built on prior work on score matching and diffusion probabilistic models. It starts from a Gaussian white noise signal and iteratively refines the signal via a gradient-based sampler c...
[ { "affiliations": [], "name": "Nanxin Chen" }, { "affiliations": [], "name": "Yu Zhang" }, { "affiliations": [], "name": "Heiga Zen" }, { "affiliations": [], "name": "Ron J. Weiss" }, { "affiliations": [], "name": "Mohammad Norouzi" }, { "affiliations"...
[ { "authors": [ "Yang Ai", "Zhen-Hua Ling" ], "title": "Knowledge-and-Data-Driven Amplitude Spectrum Prediction for Hierarchical Neural Vocoders", "venue": "arXiv preprint arXiv:2004.07832,", "year": 2020 }, { "authors": [ "Eric Battenberg", "RJ Skerry-Ryan", ...
[ { "heading": "1 INTRODUCTION", "text": "Deep generative models have revolutionized speech synthesis (Oord et al., 2016; Sotelo et al., 2017; Wang et al., 2017; Biadsy et al., 2019; Jia et al., 2019; Vasquez & Lewis, 2019). Autoregressive models, in particular, have been popular for raw audio generation than...
2,021
WAVEGRAD: ESTIMATING GRADIENTS FOR WAVEFORM GENERATION
SP:11cd869cd8c6dc657c136545fd2029f0c49843ba
[ "The paper presents a benchmark / dataset, HW-NAS-Bench, for evaluating various neural architecture search algorithms. The benchmark is based on extensive measurements on real hardware. An important goal with the proposal is to support neural architecture searches for non-hardware experts. Further, the paper provid...
HardWare-aware Neural Architecture Search (HW-NAS) has recently gained tremendous attention by automating the design of deep neural networks deployed in more resource-constrained daily life devices. Despite its promising performance, developing optimal HW-NAS solutions can be prohibitively challenging as it requires cr...
[ { "affiliations": [], "name": "SEARCH BENCHMARK" }, { "affiliations": [], "name": "Chaojian Li" }, { "affiliations": [], "name": "Zhongzhi Yu" }, { "affiliations": [], "name": "Yonggan Fu" }, { "affiliations": [], "name": "Yongan Zhang" }, { "affiliati...
[ { "authors": [ "Martín Abadi", "Paul Barham", "Jianmin Chen", "Zhifeng Chen", "Andy Davis", "Jeffrey Dean", "Matthieu Devin", "Sanjay Ghemawat", "Geoffrey Irving", "Michael Isard" ], "title": "Tensorflow: A system for large-scale machine learn...
[ { "heading": "1 INTRODUCTION", "text": "The recent performance breakthroughs of deep neural networks (DNNs) have attracted an explosion of research in designing efficient DNNs, aiming to bring powerful yet power-hungry DNNs into more resource-constrained daily life devices for enabling various DNN-powered i...
2,021
null
SP:f65217b47950d0dbf8e77622489d8883211a012d
[ "This paper proposes a novel graph neural network-based architecture. Building upon the theoretical success of graph scattering transforms, the authors propose to learn some aspects of it providing them with more flexibility to adapt to data (recall that graph scattering transforms are built on pre-designed graph w...
Many popular graph neural network (GNN) architectures, which are often considered as the current state of the art, rely on encoding graph structure via smoothness or similarity between neighbors. While this approach performs well on a surprising number of standard benchmarks, the efficacy of such models does not transl...
[]
[ { "authors": [ "Uri Alon", "Eran Yahav" ], "title": "On the bottleneck of graph neural networks and its practical implications", "venue": "arXiv preprint arXiv:2006.05205,", "year": 2020 }, { "authors": [ "Pablo Barceló", "Egor V Kostylev", "Mikael Monet", ...
[ { "heading": "1 INTRODUCTION", "text": "Geometric deep learning has recently emerged as an increasingly prominent branch of machine learning in general, and deep learning in particular (Bronstein et al., 2017). It is based on the observation that many of the impressive achievements of neural networks come i...
2,020
null
SP:c90a894d965bf8e529df296b9d5c76864aa5f4f9
[ "This paper describes a neural vocoder based on a diffusion probabilistic model. The model utilizes a fixed-length markov chain to convert between a latent uncorrelated Gaussian vector and a full-length observation. The conversion from observation to latent is fixed and amounts to adding noise at each step. The con...
In this work, we propose DiffWave, a versatile diffusion probabilistic model for conditional and unconditional waveform generation. The model is non-autoregressive, and converts the white noise signal into structured waveform through a Markov chain with a constant number of steps at synthesis. It is efficiently trained...
[ { "affiliations": [], "name": "Zhifeng Kong" }, { "affiliations": [], "name": "Wei Ping" }, { "affiliations": [], "name": "Jiaji Huang" }, { "affiliations": [], "name": "Kexin Zhao" } ]
[ { "authors": [ "Yang Ai", "Zhen-Hua Ling" ], "title": "A neural vocoder with hierarchical generation of amplitude and phase spectra for statistical parametric speech synthesis", "venue": "IEEE/ACM Transactions on Audio, Speech, and Language Processing,", "year": 2020 }, { "au...
[ { "heading": "1 INTRODUCTION", "text": "Deep generative models have produced high-fidelity raw audio in speech synthesis and music generation. In previous work, likelihood-based models, including autoregressive models (van den Oord et al., 2016; Kalchbrenner et al., 2018; Mehri et al., 2017) and flow-based ...
2,021
DIFFWAVE: A VERSATILE DIFFUSION MODEL FOR AUDIO SYNTHESIS
SP:efbb0e2e944f1d810a6f0b6bc71e636af9ae9c13
[ "The authors present a seq2seq model with a sparse transformer encoder and an LSTM decoder. They utilize a learning curriculum wherein the autoregressive decoder is initially trained using teacher forcing and is gradually fed its past predictions as training progresses. The authors introduce a new dataset for long ...
Dancing to music is one of human’s innate abilities since ancient times. In machine learning research, however, synthesizing dance movements from music is a challenging problem. Recently, researchers synthesize human motion sequences through autoregressive models like recurrent neural network (RNN). Such an approach of...
[ { "affiliations": [], "name": "CURRICULUM LEARNING" }, { "affiliations": [], "name": "Ruozi Huang" }, { "affiliations": [], "name": "Huang Hu" }, { "affiliations": [], "name": "Wei Wu" }, { "affiliations": [], "name": "Kei Sawada" }, { "affiliations": ...
[ { "authors": [ "Samy Bengio", "Oriol Vinyals", "Navdeep Jaitly", "Noam Shazeer" ], "title": "Scheduled sampling for sequence prediction with recurrent neural networks", "venue": "In Advances in Neural Information Processing Systems,", "year": 2015 }, { "authors": ...
[ { "heading": "1 INTRODUCTION", "text": "Arguably, dancing to music is one of human’s innate abilities, as we can spontaneously sway along with the tempo of music we hear. The research in neuropsychology indicates that our brain is hardwired to make us move and synchronize with music regardless of our intent...
2,021
DANCE REVOLUTION: LONG-TERM DANCE GENERATION WITH MUSIC VIA CURRICULUM LEARNING
SP:18e9f58ab4fc8532cbd298730cff5b7f8ec31a5f
[ "This paper presents the \"Block Skim Transformer\" for extractive question answering tasks. The key idea in this model is using a classifier, on the self-attention distributions of a particular layer, to classify whether a large spans of non-contiguous text (blocks) contain the answer. If a block is rejected by th...
Transformer-based encoder models have achieved promising results on natural language processing (NLP) tasks including question answering (QA). Different from sequence classification or language modeling tasks, hidden states at all positions are used for the final classification in QA. However, we do not always need all...
[ { "affiliations": [], "name": "SKIM TRANSFORMER" } ]
[ { "authors": [ "Iz Beltagy", "Matthew E Peters", "Arman Cohan" ], "title": "Longformer: The long-document transformer", "venue": "arXiv preprint arXiv:2004.05150,", "year": 2020 }, { "authors": [ "Victor Campos", "Brendan Jou", "Xavier Giró-i-Nieto", ...
[ { "heading": null, "text": "Transformer-based encoder models have achieved promising results on natural language processing (NLP) tasks including question answering (QA). Different from sequence classification or language modeling tasks, hidden states at all positions are used for the final classification i...
2,020
null
SP:977fc8d3bb7266d1beaecc609a91970783347ed3
[ "The authors discuss how a classifier’s performance over the initial class sample can be used to extrapolate its expected accuracy on a larger, unobserved set of classes by means of the dual of the ROC function, swapping the roles of classes and samples. Grounded on such a function, the authors develop a novel ANN app...
Multiclass classifiers are often designed and evaluated only on a sample from the classes on which they will eventually be applied. Hence, their final accuracy remains unknown. In this work we study how a classifier’s performance over the initial class sample can be used to extrapolate its expected accuracy on a larger...
[ { "affiliations": [], "name": "Yuli Slavutsky" }, { "affiliations": [], "name": "Yuval Benjamini" } ]
[ { "authors": [ "Felix Abramovich", "Marianna Pensky" ], "title": "Classification with many classes: challenges and pluses", "venue": "Journal of Multivariate Analysis,", "year": 2019 }, { "authors": [ "Brandon Amos", "Bartosz Ludwiczuk", "Mahadev Satyanaraya...
[ { "heading": "1 INTRODUCTION", "text": "Advances in machine learning and representation learning led to automatic systems that can identify an individual class from very large candidate sets. Examples are abundant in visual object recognition (Russakovsky et al., 2015; Simonyan & Zisserman, 2014), face iden...
2,021
PREDICTING CLASSIFICATION ACCURACY WHEN ADDING NEW UNOBSERVED CLASSES
SP:eb5f64c7d1e303394f4650a14806e60dba1afdd3
[ "The paper presented an adaptive inference model for efficient action recognition in videos. The core of the model is the dynamic gating of feature channels that controls the fusion between two frame features, whereby the gating is conditioned on the input video and helps to reduce the computational cost at runtime...
Temporal modelling is the key for efficient video action recognition. While understanding temporal information can improve recognition accuracy for dynamic actions, removing temporal redundancy and reusing past features can significantly save computation leading to efficient action recognition. In this paper, we introd...
[ { "affiliations": [], "name": "Yue Meng" }, { "affiliations": [], "name": "Rameswar Panda" }, { "affiliations": [], "name": "Chung-Ching Lin" }, { "affiliations": [], "name": "Prasanna Sattigeri" }, { "affiliations": [], "name": "Leonid Karlinsky" }, { ...
[ { "authors": [ "Sadjad Asghari-Esfeden", "Mario Sznaier", "Octavia Camps" ], "title": "Dynamic motion representation for human action recognition", "venue": "In The IEEE Winter Conference on Applications of Computer Vision,", "year": 2020 }, { "authors": [ "Emmanu...
[ { "heading": "1 INTRODUCTION", "text": "Over the last few years, video action recognition has made rapid progress with the introduction of a number of large-scale video datasets (Carreira & Zisserman, 2017; Monfort et al., 2018; Goyal et al., 2017). Despite impressive results on commonly used benchmark data...
2,021
ADAFUSE: ADAPTIVE TEMPORAL FUSION NETWORK FOR EFFICIENT ACTION RECOGNITION
SP:8b0cee077c1bcdf9a546698dc041654ca6a222ed
[ "This paper is basically unreadable. The sentence structure / grammar is strange, and if that was the only issue it could be overlooked. The paper also does not describe or explain the motivation and interpretation of anything, but instead just lists equations. For example, eta is the parameter that projects a sphe...
We present geometric Bayesian active learning by disagreements (GBALD), a framework that performs BALD on its geometric interpretation interacting with a deep learning model. There are two main components in GBALD: initial acquisitions based on core-set construction and model uncertainty estimation with those initial a...
[]
[ { "authors": [ "Jordan T Ash", "Chicheng Zhang", "Akshay Krishnamurthy", "John Langford", "Alekh Agarwal" ], "title": "Deep batch active learning by diverse, uncertain gradient lower bounds", "venue": "In International Conference on Learning Representations,", "year...
[ { "heading": "1 INTRODUCTION", "text": "Lack of training labels restricts the performance of deep neural networks (DNNs), though prices of GPU resources were falling fast. Recently, leveraging the abundance of unlabeled data has become a potential solution to relieve this bottleneck whereby expert knowledge...
2,020
null
SP:09bbd1a342033a65e751a8878c23e3fa6facc636
[ "The authors propose a convolution as a message passing of node features over edges where messages are aggregated weighted by a \"direction\" edge field. Furthermore, the authors propose to use the gradients of Laplace eigenfunctions as direction fields. Presumably, the aggregation is done with different direction ...
In order to overcome the expressive limitations of graph neural networks (GNNs), we propose the first method that exploits vector flows over graphs to develop globally consistent directional and asymmetric aggregation functions. We show that our directional graph networks (DGNs) generalize convolutional neural networks...
[]
[ { "authors": [ "Uri Alon", "Eran Yahav" ], "title": "On the bottleneck of graph neural networks and its practical implications", "venue": null, "year": 2020 }, { "authors": [ "Xavier Bresson", "Thomas Laurent" ], "title": "Residual gated graph convnets", ...
[ { "heading": "1 INTRODUCTION", "text": "One of the most important distinctions between convolutional neural networks (CNNs) and graph neural networks (GNNs) is that CNNs allow for any convolutional kernel, while most GNN methods are limited to symmetric kernels (also called isotropic kernels in the literatu...
2,020
null
SP:540d8c615b5193239aa43717de8cacc749ccc4c6
[ "The authors describe a method for representing a continuous signal by a pulse code, in a manner inspired by auditory processing in the brain. The resulting framework is somewhat like matching pursuit except that filters are run a single time in a causal manner to find the spike times (which would be faster than MP...
In many animal sensory pathways, the transformation from external stimuli to spike trains is essentially deterministic. In this context, a new mathematical framework for coding and reconstruction, based on a biologically plausible model of the spiking neuron, is presented. The framework considers encoding of a signal t...
[]
[ { "authors": [ "Horace B Barlow" ], "title": "Possible principles underlying the transformations of sensory messages", "venue": "Sensory Communication,", "year": 1961 }, { "authors": [ "Stephen Boyd", "Leon Chua" ], "title": "Fading memory and the problem of app...
[ { "heading": null, "text": "In many animal sensory pathways, the transformation from external stimuli to spike trains is essentially deterministic. In this context, a new mathematical framework for coding and reconstruction, based on a biologically plausible model of the spiking neuron, is presented. The fr...
2,020
null
SP:725d036c0863e59f6bb0b0bb22cc0ad3a0988126
[ "Review: This paper studies how to improve contrastive divergence (CD) training of energy-based models (EBMs) by revisiting the gradient term neglected in the traditional CD learning. This paper also introduces some useful techniques, such as data augmentation, multi-scale energy design, and reservoir sampling to i...
We propose several different techniques to improve contrastive divergence training of energy-based models (EBMs). We first show that a gradient term neglected in the popular contrastive divergence formulation is both tractable to estimate and is important to avoid training instabilities in previous models. We further h...
[]
[ { "authors": [ "Sergey Bartunov", "Jack W Rae", "Simon Osindero", "Timothy P Lillicrap" ], "title": "Meta-learning deep energy-based memory models", "venue": "arXiv preprint arXiv:1910.02720,", "year": 2019 }, { "authors": [ "Jan Beirlant", "E. Dudewic...
[ { "heading": "1 INTRODUCTION", "text": "Energy-Based models (EBMs) have received an influx of interest recently and have been applied to realistic image generation (Han et al., 2019; Du & Mordatch, 2019), 3D shapes synthesis (Xie et al., 2018b) , out of distribution and adversarial robustness (Lee et al., 2...
2,020
null
SP:6d6e083899bc17a2733aa16efd259ad4ed2076d6
[ "This paper falls into a class of continual learning methods which accommodate for new tasks by expanding the network architecture, while freezing existing weights. This freezing trivially resolves forgetting. The (hard) problem of determining how to expand the network is tackled with reinforcement learning, largel...
Continual learning with neural networks is an important learning framework in AI that aims to learn a sequence of tasks well. However, it is often confronted with three challenges: (1) overcome the catastrophic forgetting problem, (2) adapt the current network to new tasks, and meanwhile (3) control its model complexit...
[]
[ { "authors": [ "Rahaf Aljundi", "Francesca Babiloni", "Mohamed Elhoseiny", "Marcus Rohrbach", "Tinne Tuytelaars" ], "title": "Memory aware synapses: Learning what (not) to forget", "venue": "In Proceedings of the European Conference on Computer Vision,", "year": 201...
[ { "heading": "1 INTRODUCTION", "text": "Continual learning, or lifelong learning, refers to the ability of continually learning new tasks and also performing well on learned tasks. It has attracted enormous attention in AI as it mimics a human learning process - constantly acquiring and accumulating knowled...
2,020
null
SP:047761908963bea6350f5d65a253c09f1a626093
[ "The authors contribute an approach to automatically distinguish between good and bad student assignment submissions by modeling the assignment submissions as MDPs. The authors hypothesize that satisfactory assignments modeled as MDPs will be more alike than they are to unsatisfactory assignments. Therefore this ca...
Contemporary coding education often presents students with the task of developing programs that have user interaction and complex dynamic systems, such as mouse based games. While pedagogically compelling, grading such student programs requires dynamic user inputs; therefore they are difficult to grade by unit tests. In...
[]
[ { "authors": [ "Karl Cobbe", "Christopher Hesse", "Jacob Hilton", "John Schulman" ], "title": "Leveraging procedural generation to benchmark reinforcement learning", "venue": "arXiv preprint arXiv:1912.01588,", "year": 2019 }, { "authors": [ "Karl Cobbe", ...
[ { "heading": "1 INTRODUCTION", "text": "The rise of online coding education platforms accelerates the trend to democratize high quality computer science education for millions of students each year. Corbett (2001) suggests that providing feedback to students can have an enormous impact on efficiently and ef...
2,020
null
SP:2eed06887f51560197590d617b1a37ec6d22e943
[ "This paper considers the problem of data-free post-training quantization of classfication networks. It proposes three extensions of an existing framework ZeroQ (Cai et al., 2020): (1). in order to generate distilled data for network sensitivity analysis, the \"Retro Synthesis\" method is proposed to turn a random ...
Existing quantization aware training methods attempt to compensate for the quantization loss by leveraging on training data, like most of the post-training quantization methods, and are also time consuming. Both these methods are not effective for privacy constraint applications as they are tightly coupled with trainin...
[]
[ { "authors": [ "Ron Banner", "Yury Nahshan", "Daniel Soudry" ], "title": "Post training 4-bit quantization of convolutional networks for rapid-deployment", "venue": "In Advances in Neural Information Processing Systems,", "year": 2019 }, { "authors": [ "Chaim Bask...
[ { "heading": "1 INTRODUCTION", "text": "Quantization is a widely used and necessary approach to convert heavy Deep Neural Network (DNN) models in Floating Point (FP32) format to a light-weight lower precision format, compatible with edge device inference. The introduction of lower precision computing hardwa...
2,020
null
SP:259b64e62b640ccba4bc82c50e59db7662677e6b
[ "The authors propose a bootstrap framework for understanding generalization in deep learning. In particular, instead of the usual decomposition of test error as training error plus the generalization gap, the bootstrap framework decomposes the empirical test error as online error plus the bootstrap error (the gap ...
We propose a new framework for reasoning about generalization in deep learning. The core idea is to couple the Real World, where optimizers take stochastic gradient steps on the empirical loss, to an Ideal World, where optimizers take steps on the population loss. This leads to an alternate decomposition of test error ...
[ { "affiliations": [], "name": "OFFLINE GENERALIZERS" }, { "affiliations": [], "name": "Preetum Nakkiran" }, { "affiliations": [], "name": "Behnam Neyshabur" }, { "affiliations": [], "name": "Hanie Sedghi" } ]
[ { "authors": [ "Zeyuan Allen-Zhu", "Yuanzhi Li", "Yingyu Liang" ], "title": "Learning and generalization in overparameterized neural networks, going beyond two layers", "venue": "In Advances in neural information processing systems,", "year": 2019 }, { "authors": [ ...
[ { "heading": "1 INTRODUCTION", "text": "The goal of a generalization theory in supervised learning is to understand when and why trained models have small test error. The classical framework of generalization decomposes the test error of a model ft as:\nTestError(ft) = TrainError(ft) + [TestError(ft)− Train...
2,021
null
SP:1b984693f1a64c86306aff37d58f9ff188bcf67e
[ "This paper presents a general Self-supervised Time Series representation learning framework. It explores the inter-sample relation reasoning and intra-temporal relation reasoning of time series to capture the underlying structure pattern of the unlabeled time series data. The proposed method achieves new state-of...
Self-supervised learning achieves superior performance in many domains by extracting useful representations from the unlabeled data. However, most of traditional self-supervised methods mainly focus on exploring the inter-sample structure while less efforts have been concentrated on the underlying intra-temporal struct...
[]
[ { "authors": [ "Anthony Bagnall", "Jason Lines", "Jon Hills", "Aaron Bostrom" ], "title": "Time-series classification with cote: the collective of transformation-based ensembles", "venue": "IEEE Transactions on Knowledge and Data Engineering,", "year": 2015 }, { "...
[ { "heading": "1 INTRODUCTION", "text": "Time series data is ubiquitous and there has been significant progress for time series analysis (Das, 1994) in machine learning, signal processing, and other related areas, with many real-world applications such as healthcare (Stevner et al., 2019), industrial diagnos...
2,020
null
SP:9513f146a764d9e67b7d054692d0a923622ff007
[ "This paper proposes to use orthogonal weight constraints for autoencoders. The authors demonstrate that under orthogonal weights (hence invertible), more features could be extracted. The theory is conducted under linear cases while the authors claim it can be applied to more complicated scenarios such as higher di...
The pressing need for pretraining algorithms has been diminished by numerous advances in terms of regularization, architectures, and optimizers. Despite this trend, we re-visit the classic idea of unsupervised autoencoder pretraining and propose a modified variant that relies on a full reverse pass trained in conjuncti...
[ { "affiliations": [], "name": "REVIVING AUTOENCODER PRETRAINING" } ]
[ { "authors": [ "Michele Alberti", "Mathias Seuret", "Rolf Ingold", "Marcus Liwicki" ], "title": "A pitfall of unsupervised pretraining", "venue": "arXiv preprint arXiv:1703.04332,", "year": 2017 }, { "authors": [ "Lynton Ardizzone", "Jakob K...
[ { "heading": "1 INTRODUCTION", "text": "While approaches such as greedy layer-wise autoencoder pretraining (Bengio et al., 2007; Vincent et al., 2010; Erhan et al., 2010) arguably paved the way for many fundamental concepts of today’s methodologies in deep learning, the pressing need for pretraining neural ...
2,020
null
SP:70fc08b1b6161c770b5019272c2eaa0d2e3c39ee
[ "This paper raises and studies concerns about the generalization of 3D human motion prediction approaches across unseen motion categories. The authors address this problem by augmenting existing architectures with a VAE framework. More precisely, an encoder network that is responsible for summarizing the seed seque...
The task of predicting human motion is complicated by the natural heterogeneity and compositionality of actions, necessitating robustness to distributional shifts as far as out-of-distribution (OoD). Here we formulate a new OoD benchmark based on the Human3.6M and CMU motion capture datasets, and introduce a hybrid fra...
[]
[ { "authors": [ "Emre Aksan", "Manuel Kaufmann", "Otmar Hilliges" ], "title": "Structured prediction helps 3d human motion modelling", "venue": "In Proceedings of the IEEE International Conference on Computer Vision,", "year": 2019 }, { "authors": [ "Alexandre Alah...
[ { "heading": "1 INTRODUCTION", "text": "Human motion is naturally intelligible as a time-varying graph of connected joints constrained by locomotor anatomy and physiology. Its prediction allows the anticipation of actions with applications across healthcare (Geertsema et al., 2018; Kakar et al., 2005), phys...
2,020
null
SP:8f1c7fabe235bdf095007948007509102dd0c126
[ "The authors address the problem of discrete keypoint matching. For an input pair of images, the task is to match the unannotated (but given as part of the input) keypoints. The main contribution is identifying the bottleneck of the current SOTA algorithm: a fixed connectivity construction given by Delaunay triangu...
Graph matching (GM) has been traditionally modeled as a deterministic optimization problem characterized by an affinity matrix under pre-defined graph topology. Though there have been several attempts on learning more effective node-level affinity/representation for matching, they still heavily rely on the initial grap...
[]
[ { "authors": [ "Yoshua Bengio", "Nicholas Léonard", "Aaron Courville" ], "title": "Estimating or propagating gradients through stochastic neurons for conditional computation", "venue": "arXiv preprint arXiv:1308.3432,", "year": 2013 }, { "authors": [ "Christopher ...
[ { "heading": "1 INTRODUCTION", "text": "Being a long standing NP-hard problem (Loiola et al., 2007), graph matching (GM) has received persistent attention from the machine learning and optimization communities for many years. Concretely, for two graphs with n nodes for each, graph matching seeks to solve1:\...
2,020
null
SP:879ce870f09e422aced7d008abc42fe5a8db29bc
[ "The paper proposes a method for stabilizing the training of GAN as well as overcoming the problem of mode collapse by optimizing several auxiliary models. The first step is to learn a latent space using an autoencoder. Then, this latent space is \"intervened\" by a predefined set of $K$ transformations to generate...
In this paper we propose a novel approach for stabilizing the training process of Generative Adversarial Networks as well as alleviating the mode collapse problem. The main idea is to incorporate a regularization term that we call intervention into the objective. We refer to the resulting generative model as Interventi...
[]
[ { "authors": [ "M Arjovsky", "L Bottou" ], "title": "Towards principled methods for training generative adversarial networks. arxiv 2017", "venue": "arXiv preprint arXiv:1701.04862", "year": 2017 }, { "authors": [ "Martin Arjovsky", "Soumith Chintala", "Léon...
[ { "heading": "1 INTRODUCTION", "text": "As one of the most important advances in generative models in recent years, Generative Adversarial Networks (GANs) (Goodfellow et al., 2014) have been attracting great attention in the machine learning community. GANs aim to train a generator network that transforms s...
2020
null
SP:a9c70bdca13ee3800c633589a6ee028701e5bf51
[ "This work proposed a dimensionality reduction algorithm called Uniform Manifold Approximation with Two-phase Optimization (UMATO), which is an improved version of UMAP (Ref. [3] see below). UMATO has a two-phase optimization approach: global optimization to obtain the overall skeleton of data & local optimization ...
We present a dimensionality reduction algorithm called Uniform Manifold Approximation with Two-phase Optimization (UMATO) which produces less biased global structures in the embedding results and is more robust over diverse initialization methods than previous methods such as t-SNE and UMAP. We divide the optimization into ...
[]
[ { "authors": [ "Josh Barnes", "Piet Hut" ], "title": "A hierarchical o (n log n) force-calculation", "venue": null, "year": 1986 }, { "authors": [ "Mikhail Belkin", "Partha Niyogi" ], "title": "Laplacian eigenmaps and spectral techniques for embedding and ...
[ { "heading": "1 INTRODUCTION", "text": "We present a novel dimensionality reduction method, Uniform Manifold Approximation with Twophase Optimization (UMATO) to obtain less biased and robust embedding over diverse initialization methods. One effective way of understanding high-dimensional data in various do...
2020
null
SP:fd70696898c5c725ad789565265274a37a6c2ca0
[ "This paper presents a reduction approach to tackle the optimization problem of constrained RL. They propose a Frank-Wolfe type algorithm for the task, which avoids many shortcomings of previous methods, such as the memory complexity. They prove that their algorithm can find an $\\epsilon$-approximate solution with...
Many applications of reinforcement learning (RL) optimize a long-term reward subject to risk, safety, budget, diversity or other constraints. Though the constrained RL problem has been studied to incorporate various constraints, existing methods are either tied to specific families of RL algorithms or require storing infinitely...
[]
[ { "authors": [ "Jacob Abernethy", "Peter L Bartlett", "Elad Hazan" ], "title": "Blackwell approachability and no-regret learning are equivalent", "venue": "In Proceedings of the 24th Annual Conference on Learning Theory, pp", "year": 2011 }, { "authors": [ "Jacob ...
[ { "heading": "1 INTRODUCTION", "text": "Contemporary approaches in reinforcement learning (RL) largely focus on optimizing the behavior of an agent against a single reward function. RL algorithms like value function methods (Zou et al., 2019; Zheng et al., 2018) or policy optimization methods (Chen et al., ...
2020
null
SP:df5fec4899d97f7d5df259a013f467e038895669
[ "The paper proposes a post-hoc uncertainty tuning pipeline for Bayesian neural networks. After getting the point estimate, it adds extra dimensions to the weight matrices and hidden layers, which has no effect on the network output, with the hope that it would influence the variance of the original network weights ...
Laplace approximations are classic, computationally lightweight means for constructing Bayesian neural networks (BNNs). As in other approximate BNNs, one cannot necessarily expect the induced predictive uncertainty to be calibrated. Here we develop a formalism to explicitly “train” the uncertainty in a decoupled way to...
[]
[ { "authors": [ "Felix Dangel", "Frederik Kunstner", "Philipp Hennig" ], "title": "BackPACK: Packing more into Backprop", "venue": "In ICLR,", "year": 2020 }, { "authors": [ "Yarin Gal", "Zoubin Ghahramani" ], "title": "Dropout as a Bayesian Approxima...
[ { "heading": null, "text": "Laplace approximations are classic, computationally lightweight means for constructing Bayesian neural networks (BNNs). As in other approximate BNNs, one cannot necessarily expect the induced predictive uncertainty to be calibrated. Here we develop a formalism to explicitly “trai...
2020
null
SP:2a2368b5bc6b59f66af75ea37f4cbc19c8fcf50f
[ "In this paper, the authors studied the possibility of sparsity exploration in Recurrent Neural Networks (RNNs) training. The main contributions include two parts: (1) Selfish-RNN training algorithm in Section 3.1 (2) SNT-ASGD optimizer in Section 3.2. The key idea of the Selfish-RNN training algorithm is a non-uni...
Sparse neural networks have been widely applied to reduce the necessary resource requirements to train and deploy over-parameterized deep neural networks. For inference acceleration, methods that induce sparsity from a pre-trained dense network (dense-to-sparse) work effectively. Recently, dynamic sparse training (DST)...
[]
[ { "authors": [ "Guillaume Bellec", "David Kappel", "Wolfgang Maass", "Robert Legenstein" ], "title": "Deep rewiring: Training very sparse deep networks", "venue": "In International Conference on Learning Representations,", "year": 2018 }, { "authors": [ "Rui...
[ { "heading": "1 INTRODUCTION", "text": "Recurrent neural networks (RNNs) (Elman, 1990), with a variant of long short-term memory (LSTM) (Hochreiter & Schmidhuber, 1997), have been highly successful in various fields, including language modeling (Mikolov et al., 2010), machine translation (Kalchbrenner & Blu...
2020
null
SP:60d704b4a1555e24c09963617c879a15d8f3c805
[ "This paper proposes a spatial-temporal graph neural network, which is designed to adaptively capture the complex spatial-temporal dependency. Further, the authors design a spatial-temporal attention module, which aims to capture multi-scale correlations. For multi-step prediction instead of one-step prediction, th...
Spatial-temporal data forecasting is of great importance for industries such as telecom network operation and transportation management. However, spatial-temporal data inherently exhibit complex spatial-temporal correlations and heterogeneities among the spatial and temporal aspects, which makes the forecasting r...
[]
[ { "authors": [ "Lei Bai", "Lina Yao", "Salil Kanhere", "Xianzhi Wang", "Quan Sheng" ], "title": "Stg2seq: Spatialtemporal graph to sequence model for multi-step passenger demand forecasting", "venue": "arXiv preprint arXiv:1905.10069,", "year": 2019 }, { "au...
[ { "heading": "1 INTRODUCTION", "text": "Spatial-temporal data forecasting has attracted attention from researchers due to its wide range of applications and the same specific characteristics of spatial-temporal data. Typical applications include mobile traffic forecast (He et al., 2019), traffic road condit...
2020
null
SP:a99af0f9e848f4f9068ad407612745a85a262644
[ "This paper extends NTK to RNNs to explain the behavior of RNNs in the overparametrized case. It is a good extension study, and it is interesting to see that an RNN in the infinite-width limit converges to a kernel. The paper proves the same RNTK formula whether the weights are shared or not shared. The proposed sensitivity for computationally...
The study of deep neural networks (DNNs) in the infinite-width limit, via the so-called neural tangent kernel (NTK) approach, has provided new insights into the dynamics of learning, generalization, and the impact of initialization. One key DNN architecture remains to be kernelized, namely, the recurrent neural network...
[ { "affiliations": [], "name": "Sina Alemohammad" }, { "affiliations": [], "name": "Zichao Wang" }, { "affiliations": [], "name": "Randall Balestriero" }, { "affiliations": [], "name": "Richard G. Baraniuk" } ]
[ { "authors": [ "Zeyuan Allen-Zhu", "Yuanzhi Li", "Yingyu Liang" ], "title": "Learning and generalization in overparameterized neural networks, going beyond two layers", "venue": "Advances in Neural Information Processing Systems,", "year": 2019 }, { "authors": [ "...
[ { "heading": "1 INTRODUCTION", "text": "The overparameterization of modern deep neural networks (DNNs) has resulted in not only remarkably good generalization performance on unseen data (Novak et al., 2018; Neyshabur et al., 2019; Belkin et al., 2019) but also guarantees that gradient descent learning can f...
2021
THE RECURRENT NEURAL TANGENT KERNEL