Dataset schema (each record below lists these eight fields, in this order):

paper_id: string (fixed length 43)
summaries: list
abstractText: string (length 98 to 40k)
authors: list
references: list
sections: list
year: int64 (range 1.98k to 2.02k)
title: string (length 4 to 183)
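The schema above maps one-to-one onto the fields of each record that follows. As a rough illustration only — assuming the records are stored as JSON-style objects with exactly these keys, which this dump does not confirm — a record could be sanity-checked like this (the `validate_record` helper and the synthetic example are hypothetical, not part of the dataset):

```python
# Expected record layout, taken from the schema header above.
# Field names come from the header; everything else (storage as dicts,
# e.g. one JSON object per line) is an assumption for illustration.
SCHEMA = {
    "paper_id": str,      # "SP:" + 40-char hex digest (43 chars total)
    "summaries": list,    # reviewer summaries as strings
    "abstractText": str,  # 98 to ~40k characters
    "authors": list,      # [{"affiliations": [...], "name": "..."}] or []
    "references": list,   # [{"authors": [...], "title": ..., "venue": ..., "year": ...}]
    "sections": list,     # [{"heading": ..., "text": ...}]
    "year": int,
    "title": str,         # may be null/None for some records
}

def validate_record(record: dict) -> list:
    """Return a list of schema violations for one record (empty if clean)."""
    problems = []
    for field, expected in SCHEMA.items():
        if field not in record:
            problems.append(f"missing field: {field}")
        elif record[field] is not None and not isinstance(record[field], expected):
            problems.append(f"{field}: expected {expected.__name__}, "
                            f"got {type(record[field]).__name__}")
    if isinstance(record.get("paper_id"), str) and len(record["paper_id"]) != 43:
        problems.append("paper_id: expected length 43")
    return problems

# Minimal synthetic example (not a real record from the dataset):
example = {
    "paper_id": "SP:" + "0" * 40,
    "summaries": ["A short reviewer summary."],
    "abstractText": "An abstract. " * 10,
    "authors": [],
    "references": [],
    "sections": [{"heading": "1 INTRODUCTION", "text": "..."}],
    "year": 2020,
    "title": None,
}
print(validate_record(example))  # -> []
```

A check like this only verifies field presence and types; it does not reproduce the exact length bounds shown in the header, which the viewer reports in abbreviated form.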
SP:4d08cdb2de2044bcb574a425b42963b83fbebfbc
[ "This paper investigates kernel ridge-less regression from a stability viewpoint by deriving its risk bounds. Using stability arguments to derive risk bounds have been widely adopting in machine learning. However, related studies on kernel ridge-less regression are still sparse. The present study fills this gap, wh...
We study the average CVloo stability of kernel ridge-less regression and derive corresponding risk bounds. We show that the interpolating solution with minimum norm minimizes a bound on CVloo stability, which in turn is controlled by the condition number of the empirical kernel matrix. The latter can be characterized i...
[]
[ { "authors": [ "Jerzy K Baksalary", "Oskar Maria Baksalary", "Götz Trenkler" ], "title": "A revisitation of formulae for the moore–penrose inverse of modified matrices", "venue": "Linear Algebra and Its Applications,", "year": 2003 }, { "authors": [ "Peter L. Bart...
[ { "heading": "1 INTRODUCTION", "text": "Statistical learning theory studies the learning properties of machine learning algorithms, and more fundamentally, the conditions under which learning from finite data is possible. In this context, classical learning theory focuses on the size of the hypothesis space...
2020
null
SP:b80bc890180934092cde037b49d94d6e4e06fad9
[ "This paper presents a novel way of making full use of compact episodic memory to alleviate catastrophic forgetting in continual learning. This is done by adding the proposed discriminative representation loss to regularize the gradients produced by new samples. Authors gave insightful analysis on the influence of ...
The use of episodic memories in continual learning has been shown to be effective in terms of alleviating catastrophic forgetting. In recent studies, several gradientbased approaches have been developed to make more efficient use of compact episodic memories, which constrain the gradients resulting from new samples wit...
[]
[ { "authors": [ "Rahaf Aljundi", "Min Lin", "Baptiste Goujaud", "Yoshua Bengio" ], "title": "Gradient based sample selection for online continual learning", "venue": "In Advances in Neural Information Processing Systems,", "year": 2019 }, { "authors": [ "Arsl...
[ { "heading": "1 INTRODUCTION", "text": "In the real world, we are often faced with situations where data distributions are changing over time, and we would like to update our models by new data in time, with bounded growth in system size. These situations fall under the umbrella of “continual learning”, whi...
2020
null
SP:09f2fe6a482bbd6f9bd2c62aa841f995171ba939
[ "This paper proposes a new framework that computes the task-specific representations to modulate the model parameters during the multi-task learning (MTL). This framework uses a single model with shared representations for learning multiple tasks together. Also, explicit task information may not be always available...
Existing Multi-Task Learning(MTL) strategies like joint or meta-learning focus more on shared learning and have little to no scope for task-specific learning. This creates the need for a distinct shared pretraining phase and a task-specific finetuning phase. The finetuning phase creates separate models for each task, w...
[]
[ { "authors": [ "Rosana Ardila", "Megan Branson", "Kelly Davis", "Michael Henretty", "Michael Kohler", "Josh Meyer", "Reuben Morais", "Lindsay Saunders", "Francis M. Tyers", "Gregor Weber" ], "title": "Common voice: A massivelymultilingual speec...
[ { "heading": "1 INTRODUCTION", "text": "The process of Multi-Task Learning (MTL) on a set of related tasks is inspired by the patterns displayed by human learning. It involves a pretraining phase over all the tasks, followed by a finetuning phase. During pretraining, the model tries to grasp the shared know...
2020
null
SP:a1e2218e6943bf138aeb359e23628676b396ed66
[ "This work proposes a deep reinforcement learning-based optimization strategy to the fuel optimization problem for the hybrid electric vehicle. The problem has been formulated as a fully observed stochastic Markov Decision Process (MDP). A deep neural network is used to parameterize the policy and value function. A...
This paper deals with the fuel optimization problem for hybrid electric vehicles in reinforcement learning framework. Firstly, considering the hybrid electric vehicle as a completely observable non-linear system with uncertain dynamics, we solve an open-loop deterministic optimization problem to determine a nominal opt...
[]
[ { "authors": [ "R. Akrour", "A. Abdolmaleki", "H. Abdulsamad", "G. Neumann" ], "title": "Model Free Trajectory Optimization for Reinforcement Learning", "venue": "In Proceedings of the International Conference on Machine Learning (ICML),", "year": 2016 }, { "autho...
[ { "heading": "1 INTRODUCTION", "text": "Hybrid electric vehicles powered by fuel cells and batteries have attracted great enthusiasm in modern days as they have the potential to eliminate emissions from the transport sector. Now, both the fuel cells and batteries have got several operational challenges whic...
2020
A ROBUST FUEL OPTIMIZATION STRATEGY FOR HYBRID ELECTRIC VEHICLES: A DEEP REINFORCEMENT LEARNING BASED CONTINUOUS TIME DESIGN AP-
SP:43e525fb3fa611df7fd44bd3bc9843e57b154c66
[ "This paper proposes 3 deep generative models based on VAEs (with different encoding schemes for RNA secondary structure) for the generation of RNA secondary structures. They test each model on 3 benchmark tasks: unsupervised generation, semi-supervised learning and targeted generation. This paper has many interes...
Our work is concerned with the generation and targeted design of RNA, a type of genetic macromolecule that can adopt complex structures which influence their cellular activities and functions. The design of large scale and complex biological structures spurs dedicated graph-based deep generative modeling techniques, wh...
[ { "affiliations": [], "name": "Zichao Yan" }, { "affiliations": [], "name": "William L. Hamilton" } ]
[ { "authors": [ "Bronwen L Aken", "Premanand Achuthan", "Wasiu Akanni", "M Ridwan Amode", "Friederike Bernsdorff", "Jyothish Bhai", "Konstantinos Billis", "Denise Carvalho-Silva", "Carla Cummins", "Peter Clapham" ], "title": "Ensembl 2017", ...
[ { "heading": "1 INTRODUCTION", "text": "There is an increasing interest in developing deep generative models for biochemical data, especially in the context of generating drug-like molecules. Learning generative models of biochemical molecules can facilitate the development and discovery of novel treatments...
2021
RNA SECONDARY STRUCTURES
SP:0bd749fe44c37b521bd40f701e1428890aaa9c95
[ "This paper presents a benchmark for discourse phenomena in machine translation. Its main novelty lies in the relatively large scale, spanning three translation directions, four discourse phenomena, and 150-5000 data points per language and phenomenon. A relatively large number of systems from previous work is benc...
Despite increasing instances of machine translation (MT) systems including extrasentential context information, the evidence for translation quality improvement is sparse, especially for discourse phenomena. Popular metrics like BLEU are not expressive or sensitive enough to capture quality improvements or drops that a...
[ { "affiliations": [], "name": "MARKS FOR" }, { "affiliations": [], "name": "DISCOURSE PHENOMENA" } ]
[ { "authors": [ "Rachel Bawden", "Rico Sennrich", "Alexandra Birch", "Barry Haddow" ], "title": "Evaluating discourse phenomena in neural machine translation", "venue": null, "year": 2018 }, { "authors": [ "Peter Bourgonje", "Manfred Stede" ], "...
[ { "heading": "1 INTRODUCTION AND RELATED WORK", "text": "The advances in neural machine translation (NMT) systems have led to great achievements in terms of state-of-the-art performance in automatic translation tasks. There have even been claims that their translations are no worse than what an average bili...
2020
DIP BENCHMARK TESTS: EVALUATION BENCH-
SP:b2fc6ca65add04fb32bcf7622d9098de9004ca2b
[ "The authors present a framework that uses a combination of VAE and GAN to recover private user images using Side channel analysis of memory access . A VAE-LP model first reconstructs a coarse image from side channel information which is reshaped and processed using a convolutional network. The output of the VAE-...
System side channels denote effects imposed on the underlying system and hardware when running a program, such as its accessed CPU cache lines. Side channel analysis (SCA) allows attackers to infer program secrets based on observed side channel logs. Given the ever-growing adoption of machine learning as a service (MLa...
[ { "affiliations": [], "name": "Yuanyuan Yuan" }, { "affiliations": [], "name": "Shuai Wang" }, { "affiliations": [], "name": "Junping Zhang" } ]
[ { "authors": [ "Onur Aciicmez", "Cetin Kaya Koc" ], "title": "Trace-driven cache attacks on AES", "venue": "In ICICS,", "year": 2006 }, { "authors": [ "Anish Athalye", "Nicholas Carlini", "David Wagner" ], "title": "Obfuscated gradients give a false ...
[ { "heading": "1 INTRODUCTION", "text": "Side channel analysis (SCA) recovers program secrets based on the victim program’s nonfunctional characteristics (e.g., its execution time) that depend on the values of program secrets. SCA constitutes a major threat in today’s system and hardware security landscape. ...
2021
null
SP:7fb11c941e8d79248ce5ff7caa0535a466303395
[ "This paper proposes a method of learning ensembles that adhere to an \"ensemble version\" of the information bottleneck principle. Whereas the information bottleneck principle says the representation should avoid spurious correlations between the representation (Z) and the training data (X) that is not useful for ...
Deep ensembles perform better than a single network thanks to the diversity among their members. Recent approaches regularize predictions to increase diversity; however, they also drastically decrease individual members’ performances. In this paper, we argue that learning strategies for deep ensembles need to tackle th...
[ { "affiliations": [], "name": "Alexandre Rame" } ]
[ { "authors": [ "Arturo Hernández Aguirre", "Carlos A Coello Coello" ], "title": "Mutual information-based fitness functions for evolutionary circuit synthesis", "venue": "In Proceedings of the 2004 Congress on Evolutionary Computation (IEEE Cat. No. 04TH8753),", "year": 2004 }, {...
[ { "heading": null, "text": "Deep ensembles perform better than a single network thanks to the diversity among their members. Recent approaches regularize predictions to increase diversity; however, they also drastically decrease individual members’ performances. In this paper, we argue that learning strateg...
2021
null
SP:5561773ab024b083be4e362db079e371abf79653
[ "The paper proposed a new training framework, namely GSL, for novel content synthesis. And GSL enables learning of disentangled representations of tangible attributes and achieve novel image synthesis by recombining those swappable components under a zero-shot setting. The framework leverages the underlying semanti...
Visual cognition of primates is superior to that of artificial neural networks in its ability to “envision” a visual object, even a newly-introduced one, in different attributes including pose, position, color, texture, etc. To aid neural networks to envision objects with different attributes, we propose a family of ob...
[ { "affiliations": [], "name": "Yunhao Ge" }, { "affiliations": [], "name": "Sami Abu-El-Haija" }, { "affiliations": [], "name": "Gan Xin" }, { "affiliations": [], "name": "Laurent Itti" } ]
[ { "authors": [ "Yuval Atzmon", "Gal Chechik" ], "title": "Probabilistic and-or attribute grouping for zero-shot learning", "venue": "In Uncertainty in Artificial Intelligence,", "year": 2018 }, { "authors": [ "A. Borji", "S. Izadi", "L. Itti" ], "tit...
[ { "heading": "1 INTRODUCTION", "text": "Primates perform well at generalization tasks. If presented with a single visual instance of an object, they often immediately can generalize and envision the object in different attributes, e.g., in different 3D pose (Logothetis et al., 1995). Primates can readily do...
2021
ZERO-SHOT SYNTHESIS WITH GROUP-SUPERVISED LEARNING
SP:9f70871f0111b58783f731748d8750c635998f32
[ "This paper presents an approach to learn goal conditioned policies by relying on self-play which sets the goals and discovers a curriculum of tasks for learning. Alice and Bob are the agents. Alice's task is to set a goal by following a number of steps in the environment and she is rewarded when the goal is too ch...
We train a single, goal-conditioned policy that can solve many robotic manipulation tasks, including tasks with previously unseen goals and objects. We rely on asymmetric self-play for goal discovery, where two agents, Alice and Bob, play a game. Alice is asked to propose challenging goals and Bob aims to solve them. W...
[]
[ { "authors": [ "Marcin Andrychowicz", "Filip Wolski", "Alex Ray", "Jonas Schneider", "Rachel Fong", "Peter Welinder", "Bob McGrew", "Josh Tobin", "Pieter Abbeel", "Wojciech Zaremba" ], "title": "Hindsight experience replay", "venue": "In Ad...
[ { "heading": "1 INTRODUCTION", "text": "We are motivated to train a single goal-conditioned policy (Kaelbling, 1993) that can solve any robotic manipulation task that a human may request in a given environment. In this work, we make progress towards this goal by solving a robotic manipulation problem in a t...
2020
null
SP:038a1d3066f8273977337262e975d7a7aab5002f
[ "The paper introduces a theoretical framework for analyzing GNN transferability. The main idea is to view a graph as subgraph samples with the information of both the connections and the features. Based on this view, the authors define EGI score of a graph as a learnable function that needs to be optimized by maxim...
Graph neural networks (GNNs) have achieved superior performance in various applications, but training dedicated GNNs can be costly for large-scale graphs. Some recent work started to study the pre-training of GNNs. However, none of them provide theoretical insights into the design of their frameworks, or clear requirem...
[ { "affiliations": [], "name": "Qi Zhu" }, { "affiliations": [], "name": "Carl Yang" }, { "affiliations": [], "name": "Yidan Xu" }, { "affiliations": [], "name": "Haonan Wang" }, { "affiliations": [], "name": "Chao Zhang" }, { "affiliations": [], "n...
[ { "authors": [ "Réka Albert", "Albert-László Barabási" ], "title": "Statistical mechanics of complex networks", "venue": "Reviews of modern physics,", "year": 2002 }, { "authors": [ "Sanjeev Arora", "Elad Hazan", "Satyen Kale" ], "title": "Fast algor...
[ { "heading": "1 Introduction", "text": "Graph neural networks (GNNs) have been intensively studied recently [29, 26, 39, 68], due to their established performance towards various real-world tasks [15, 69, 53], as well as close connections to spectral graph theory [12, 9, 16]. While most GNN architectures ar...
2022
Transfer Learning of Graph Neural Networks with Ego-graph Information Maximization
SP:40cba7b6c04d7e44709baed351382c27fa89a129
[ "The authors perform a descriptive analysis of data by attempting to identify elements in the partial ordering of all partitions on the data which admit a compact definition. Compact definitions are those that are formed by composition of a small number of predefined (prior) set of mathematical operations. Projecti...
Information Lattice Learning (ILL) is a general framework to learn decomposed representations, called rules, of a signal such as an image or a probability distribution. Each rule is a coarsened signal used to gain some human-interpretable insight into what might govern the nature of the original signal. To summarize th...
[]
[ { "authors": [ "Amina Adadi", "Mohammed Berrada" ], "title": "Peeking inside the black-box: A survey on explainable artificial intelligence (XAI)", "venue": "IEEE Access,", "year": 2018 }, { "authors": [ "Yoshua Bengio", "Aaron Courville", "Pascal Vincent" ...
[ { "heading": "1 INTRODUCTION", "text": "With rapid progress in AI, there is an increasing desire for general AI (Goertzel & Pennachin, 2007; Chollet, 2019) and explainable AI (Adadi & Berrada, 2018; Molnar, 2019), which exhibit broad, human-like cognitive capacities. One common pursuit is to move away from ...
2020
null
SP:1ee00313e354c4594bbf6cf8bdbe33e3ec8df62f
[ "This paper proposes searching for an architecture generator that outputs good student architectures for a given teacher. The authors claim that by learning the parameters of the generator instead of relying directly on the search space, it is possible to explore the search space of architectures more effectively, ...
State-of-the-art results in deep learning have been improving steadily, in good part due to the use of larger models. However, widespread use is constrained by device hardware limitations, resulting in a substantial performance gap between state-ofthe-art models and those that can be effectively deployed on small devic...
[]
[ { "authors": [ "Sungsoo Ahn", "Shell Xu Hu", "Andreas Damianou", "Neil D Lawrence", "Zhenwen Dai" ], "title": "Variational information distillation for knowledge transfer", "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition,", ...
[ { "heading": "1 INTRODUCTION", "text": "Recently-developed deep learning models have achieved remarkable performance in a variety of tasks. However, breakthroughs leading to state-of-the-art (SOTA) results often rely on very large models: GPipe, Big Transfer and GPT-3 use 556 million, 928 million and 175 bi...
2020
null
SP:eea3b3ec32cce61d6b6df8574cf7ce9376f2230a
[ "The paper proposes a defense that works by adding multiple targeted adversarial perturbations (with random classes) on the input sample before classifying it. There is little theoretical reasoning for why this is a sensible defense. More importantly though, the defense is only evaluated in an oblivious threat mode...
Studies show that neural networks are susceptible to adversarial attacks. This exposes a potential threat to neural network-based artificial intelligence systems. We observe that the probability of the correct result outputted by the neural network increases by applying small perturbations generated for non-predicted c...
[]
[ { "authors": [ "Anish Athalye", "Nicholas Carlini", "David Wagner" ], "title": "Obfuscated gradients give a false sense of security: Circumventing defenses to adversarial examples", "venue": "International Conference on Machine Learning,", "year": 2018 }, { "authors": [...
[ { "heading": "1 INTRODUCTION", "text": "Deep neural networks (DNNs) have become the dominant approach for various tasks including image understanding, natural language processing and speech recognition (He et al., 2016; Devlin et al., 2018; Park et al., 2018). However, recent studies demonstrate that neural...
2020
null
SP:8badc3f75194e9780063af5a2f26448e41e733d4
[ "The technique is described in sufficient detail and the paper is easy to read. Experimental results involving three datasets: MNIST, street view house numbers, and German traffic signs. The experimental results show that the proposed technique finds significant failures in all datasets, including critical failure ...
With the greater proliferation of machine learning models, the imperative of diagnosing and correcting bugs in models has become increasingly clear. As a route to better discover and fix model bugs, we propose failure scenarios: regions on the data manifold that are incorrectly classified by a model. We propose an end-...
[]
[ { "authors": [ "Antreas Antoniou", "Amos Storkey", "Harrison Edwards" ], "title": "Data augmentation generative adversarial networks", "venue": "International Conference on Artificial Neural Networks and Machine Learning,", "year": 2017 }, { "authors": [ "Christop...
[ { "heading": "1 INTRODUCTION", "text": "Debugging machine learning (ML) models is a critical part of the ML development life cycle. Uncovering bugs helps ML developers make important decisions about both development and deployment. In practice, much of debugging uses aggregate test statistics (like those in...
2020
DEFUSE: DEBUGGING CLASSIFIERS THROUGH DIS-
SP:bbaedd5d8e7591fa3a5587260bf19f3d05779976
[ "The paper proposes a model for *variable selection* in *Mixed Integer Programming (MIP)* solvers. While this problem is clearly a sequential decision making task, modeling it as an MDP is challenging. As a result, existing works use other approaches such as ranking or imitation learning. This paper overcomes these...
Branch-and-Bound (B&B) is a general and widely used algorithm paradigm for solving Mixed Integer Programming (MIP). Recently there is a surge of interest in designing learning-based branching policies as a fast approximation of strong branching, a humandesigned heuristic. In this work, we argue that strong branching is...
[]
[ { "authors": [ "Tobias Achterberg" ], "title": "Conflict analysis in mixed integer programming", "venue": "Discrete Optimization4(1):,", "year": 2007 }, { "authors": [ "Tobias Achterberg" ], "title": "Scip: solving constraint integer programs", "venue": "Mathemati...
[ { "heading": "1 INTRODUCTION", "text": "Mixed Integer Programming (MIP) has been applied widely in many real-world problems, such as scheduling (Barnhart et al., 2003) and transportation (Melo & Wolsey, 2012). Branch and Bound (B&B) is a general and widely used paradigm for solving MIP problems (Wolsey & Ne...
2020
null
SP:a20769de2c7acf390c7e3bece904a17df6a991bd
[ "The work examines properties of Neural Processes (NP). More precisely, of deterministic NPs and how they for finite-dimensional representations of infinite-dimensional function spaces. NP learn functions f that best represent/fit discrete sets of points in space. Based on signal theoretic aspects of discretisation...
Neural Processes are a powerful tool for learning representations of function spaces purely from examples, in a way that allows them to perform predictions at test time conditioned on so-called context observations. The learned representations are finite-dimensional, while function spaces are infinite-dimensional, and ...
[]
[ { "authors": [ "ANP", "Louizos" ], "title": "2019) propose to not merge observations into a global latent space", "venue": null, "year": 2019 }, { "authors": [ "2020). Sitzmann" ], "title": "2020) show that periodic activation functions make it easier for networ...
[ { "heading": null, "text": "Neural Processes are a powerful tool for learning representations of function spaces purely from examples, in a way that allows them to perform predictions at test time conditioned on so-called context observations. The learned representations are finite-dimensional, while functi...
2020
null
SP:ba25b5b02701e01998e9dd22e4230c4e095f4542
[ "The paper deals with the problem of credit assignment and synchronous estimation in cooperative multi-agent reinforcement learning problems. The authors introduce marginal advantage functions and use them for the estimation of the counterfactual advantage function. These functions permit to decompose the Multi-Age...
Cooperative multi-agent tasks require agents to deduce their own contributions with shared global rewards, known as the challenge of credit assignment. General methods for policy based multi-agent reinforcement learning to solve the challenge introduce differentiate value functions or advantage functions for individual...
[]
[ { "authors": [ "Yu-Han Chang", "Tracey Ho", "Leslie P Kaelbling" ], "title": "All learning is local: Multi-agent learning in global reward games", "venue": "In Advances in neural information processing systems,", "year": 2004 }, { "authors": [ "Tianshu Chu", ...
[ { "heading": "1 INTRODUCTION", "text": "Reinforcement learning(RL) algorithms have shown amazing performance on many singleagent(SA) environment tasks (Mnih et al., 2013)(Jaderberg et al., 2016)(Oh et al., 2018). However, for many real-world problems, the environment is much more complex where RL agents oft...
2020
null
SP:37bdb147b866b9e32a94d55dae82d7a42cea8da9
[ "This paper addresses the problem of vertex classification using a new Graph Convolutional Neural Network (NN) architecture. The linear operator within each of the layers of the GNNN is formed by a polynomial graph filter (i.e., a matrix polynomial of either the adjacency or the Laplacian novelty). Rather than work...
We study Graph Convolutional Networks (GCN) from the graph signal processing viewpoint by addressing a difference between learning graph filters with fullyconnected weights versus trainable polynomial coefficients. We find that by stacking graph filters with learnable polynomial parameters, we can build a highly adapti...
[]
[ { "authors": [ "Sami Abu-El-Haija", "Bryan Perozzi", "Amol Kapoor", "Nazanin Alipourfard", "Kristina Lerman", "Hrayr Harutyunyan", "Greg Ver Steeg", "Aram Galstyan" ], "title": "Mixhop: Higher-order graph convolutional architectures via sparsified neighbor...
[ { "heading": null, "text": "We study Graph Convolutional Networks (GCN) from the graph signal processing viewpoint by addressing a difference between learning graph filters with fullyconnected weights versus trainable polynomial coefficients. We find that by stacking graph filters with learnable polynomial ...
2020
null
SP:f19be0fdce321827638f91d57607ba340b1c3e4b
[ "The main objective of this paper is to reduce the model stability, in particular, the prediction churn of neural networks. The prediction churn is defined as the changed prediction w.r.t. model randomness, e.g. multiple runs of networks. The paper proposed to use a interpolated version of global label smoothing an...
Training modern neural networks is an inherently noisy process that can lead to high prediction churn– disagreements between re-trainings of the same model due to factors such as randomization in the parameter initialization and mini-batches– even when the trained models all attain high accuracies. Such prediction chur...
[]
[ { "authors": [ "Ehsan Amid", "Manfred KK Warmuth", "Rohan Anil", "Tomer Koren" ], "title": "Robust bi-tempered logistic loss based on bregman divergences", "venue": "In Advances in Neural Information Processing Systems,", "year": 2019 }, { "authors": [ "Roha...
[ { "heading": "1 INTRODUCTION", "text": "Deep neural networks (DNNs) have proved to be immensely successful at solving complex classification tasks across a range of problems. Much of the effort has been spent towards improving their predictive performance (i.e. accuracy), while comparatively little has been...
2020
DEEP k-NN LABEL SMOOTHING IMPROVES STABIL-
SP:5751b2abad772e44e69e125a769f25892c2a2e30
[ "This paper proposes Adversarial Feature Desensitization (AFD) as a defense against adversarial examples. AFD employs a min-max adversarial learning framework where the classifier learns to encode features of both clean and adversarial images as the same distribution, thereby desensitizing adversarial features. Wit...
Neural networks are known to be vulnerable to adversarial attacks – slight but carefully constructed perturbations of the inputs which can drastically impair the network’s performance. Many defense methods have been proposed for improving robustness of deep networks by training them on adversarially perturbed inputs. H...
[ { "affiliations": [], "name": "Pouya Bashivan" }, { "affiliations": [], "name": "Reza Bayat" }, { "affiliations": [], "name": "Adam Ibrahim" }, { "affiliations": [], "name": "Kartik Ahuja" }, { "affiliations": [], "name": "Mojtaba Faramarzi" }, { "affi...
[ { "authors": [ "Anish Athalye", "Nicholas Carlini", "David Wagner" ], "title": "Obfuscated gradients give a false sense of security: Circumventing defenses to adversarial examples", "venue": "35th International Conference on Machine Learning,", "year": 2018 }, { "author...
[ { "heading": "1 Introduction", "text": "When training a classifier, it is common to assume that the training and test samples are drawn from the same underlying distribution. In adversarial machine learning, however, this assumption is intentionally violated by using the classifier itself to perturb the sam...
2021
Adversarial Feature Desensitization
SP:95ba9ad102adafaabf9671737e6549728d104629
[ "This paper derives various types of graph embeddings to encode aspects of syntactic information that the brain may be processing during real-time sentence comprehension. These embeddings, along with indicators of punctuation, POS and dependency tags, and BERT embeddings, are used to predict brain activity recorded...
We are far from having a complete mechanistic understanding of the brain computations involved in language processing and of the role that syntax plays in those computations. Most language studies do not computationally model syntactic structure and most studies that do model syntactic processing use effort-based metri...
[]
[ { "authors": [ "Link" ], "title": "BERT-Large, Cased: 24-layer, 1024-hidden, 16-heads, 340M parameters. URL https://storage.googleapis.com/bert_models/2018_10_18/cased_ L-24_H-1024_A-16.zip", "venue": null, "year": 2018 }, { "authors": [ "Bijaya Adhikari", "Yao Zhang", ...
[ { "heading": "1 INTRODUCTION", "text": "Neuroscientists have long been interested in how the brain processes syntax. To date, there is no consensus on which brain regions are involved in processing it. Classically, only a small number of regions in the left hemisphere were thought to be involved in language...
2020
null
SP:7327dc440b5c193c1dda156276860f89594721fa
[ "This paper explores the problem of generalizing to novel combinations of verbs and nouns in a task for captioning video stills from videos about cooking. The paper introduces a new dataset based off of EPIC-Kitchens (Damen et al. 2018) which masks out verbs and nouns and splits the evaluation data into seen combin...
Children acquire language subconsciously by observing the surrounding world and listening to descriptions. They can discover the meaning of words even without explicit language knowledge, and generalize to novel compositions effortlessly. In this paper, we bring this ability to AI, by studying the task of multimodal co...
[]
[ { "authors": [ "Chris Baber" ], "title": "Designing smart objects to support affording situations: Exploiting affordance through an understanding of forms of engagement", "venue": "Frontiers in psychology,", "year": 2018 }, { "authors": [ "Fabien Baradel", "Natalia Neve...
[ { "heading": null, "text": "Children acquire language subconsciously by observing the surrounding world and listening to descriptions. They can discover the meaning of words even without explicit language knowledge, and generalize to novel compositions effortlessly. In this paper, we bring this ability to A...
2020
null
SP:5be9a3c39234c10c226c42eec95e29cbddbaf8c0
[ "This paper presents a unified framework for graph convolutional neural networks based on regularized optimization, connecting different variants of graph neural networks including vanilla, attention-based, and topology-based approaches. The authors also propose a novel regularization technique to approach the ov...
Graph Convolutional Networks (GCNs) have attracted a lot of research interest in the machine learning community in recent years. Although many variants have been proposed, we still lack a systematic view of different GCN models and deep understanding of the relations among them. In this paper, we take a step forward to...
[]
[ { "authors": [ "S. Bai", "F. Zhang", "P. Torr" ], "title": "Hypergraph convolution and hypergraph attention", "venue": "ArXiv, abs/1901.08150,", "year": 2019 }, { "authors": [ "Xavier Bresson", "Thomas Laurent" ], "title": "Residual gated graph convn...
[ { "heading": "1 INTRODUCTION", "text": "Recent years have witnessed a fast development in graph processing by generalizing convolution operation to graph-structured data, which is known as Graph Convolutional Networks (GCNs) (Kipf & Welling, 2017). Due to the great success, numerous variants of GCNs have be...
2,020
null
SP:dd2a50abff85d2b52b02dfe27cd42e443ea265cf
[ "This article proposes a benchmark for off-policy evaluation, which provides different metrics for policy ranking, evaluation and selection. Offline metrics are provided by evaluating the value function of logged data, and then evaluating absolute error, rank correlation and regret. It verifies the effectiveness of diffe...
Off-policy evaluation (OPE) holds the promise of being able to leverage large, offline datasets for both evaluating and selecting complex policies for decision making. The ability to learn offline is particularly important in many real-world domains, such as in healthcare, recommender systems, or robotics, where online...
[ { "affiliations": [], "name": "Justin Fu" }, { "affiliations": [], "name": "Mohammad Norouzi" }, { "affiliations": [], "name": "Ofir Nachum" }, { "affiliations": [], "name": "George Tucker" }, { "affiliations": [], "name": "Ziyu Wang" }, { "affiliation...
[ { "authors": [ "Gabriel Barth-Maron", "Matthew W. Hoffman", "David Budden", "Will Dabney", "Dan Horgan", "Dhruva TB", "Alistair Muldal", "Nicolas Heess", "Timothy Lillicrap" ], "title": "Distributional policy gradients", "venue": "In Internationa...
[ { "heading": "1 INTRODUCTION", "text": "Reinforcement learning algorithms can acquire effective policies for a wide range of problems through active online interaction, such as in robotics (Kober et al., 2013), board games and video games (Tesauro, 1995; Mnih et al., 2013; Vinyals et al., 2019), and recomme...
2,021
null
SP:1037f94ce6eae4a42ea7913c76007f5f3c26aeaf
[ "This paper proposes Triple-Search (TRIPS), a differentiable framework of jointly searching for network architecture, quantization precision, and accelerator parameters. To address the dilemma between exploding training memory and biased search, the proposed framework leverages heterogeneous sampling where soft Gum...
The record-breaking performance and prohibitive complexity of deep neural networks (DNNs) have ignited a substantial need for customized DNN accelerators which have the potential to boost DNN acceleration efficiency by orders-of-magnitude. While it has been recognized that maximizing DNNs’ acceleration efficiency requir...
[]
[ { "authors": [ "Mohamed S Abdelfattah", "Łukasz Dudziak", "Thomas Chau", "Royson Lee", "Hyeji Kim", "Nicholas D Lane" ], "title": "Best of both worlds: Automl codesign of a cnn and its hardware accelerator", "venue": null, "year": 2002 }, { "authors": ...
[ { "heading": "1 INTRODUCTION", "text": "The powerful performance and prohibitive complexity of deep neural networks (DNNs) have fueled a tremendous demand for efficient DNN accelerators which could boost DNN acceleration efficiency by orders-of-magnitude (Chen et al., 2016). In response, extensive research ...
2,020
null
SP:d850572819200f79545616fc92e789ce958b30d4
[ "This paper deals with continual learning. Specifically, given a stream of tasks we want to maximise performance across all tasks. Typically neural networks suffer from catastrophic forgetting which results in worse performance on tasks seen earlier in training. There are many proposed solutions to this problem. On...
Continual learning often assumes a knowledge of (strict) task boundaries and identities for the instances in a data stream—i.e., a “task-aware” setting. However, in practice it is rarely the case that practitioners can expose task information to the model; thus needing “task-free” continual learning methods. Recent att...
[]
[ { "authors": [ "Tameem Adel", "Han Zhao", "Richard E. Turner" ], "title": "Continual learning with adaptive weights (claw)", "venue": "In International Conference on Learning Representations,", "year": 2020 }, { "authors": [ "Rahaf Aljundi", "Klaas Kelchterm...
[ { "heading": "1 INTRODUCTION", "text": "Accumulating past knowledge and adapting to evolving environments are among the key traits of human intelligence (McClelland et al., 1995). While contemporary deep neural networks have achieved impressive results in a range of machine learning tasks (Goodfellow et al....
2,020
null
SP:a692e1e43991839e08a02e9122757224e1582cfd
[ "Given one image, the paper first generates different views which are controlled by differentiable parameter \\alpha, and then minimizes the additional \"conditional variance\" term~(expectation of these views' squared differences). Therefore, the paper encourages representations of the same image remain similar un...
We propose methods to strengthen the invariance properties of representations obtained by contrastive learning. While existing approaches implicitly induce a degree of invariance as representations are learned, we look to more directly enforce invariance in the encoding process. To this end, we first introduce a traini...
[ { "affiliations": [], "name": "Adam Foster" }, { "affiliations": [], "name": "Rattana Pukdee" }, { "affiliations": [], "name": "Tom Rainforth" } ]
[ { "authors": [ "Philip Bachman", "R Devon Hjelm", "William Buchwalter" ], "title": "Learning representations by maximizing mutual information across views", "venue": "In Advances in Neural Information Processing Systems,", "year": 2019 }, { "authors": [ "Ben Barre...
[ { "heading": "1 INTRODUCTION", "text": "Learning meaningful representations of data is a central endeavour in artificial intelligence. Such representations should retain important information about the original input whilst using fewer bits to store it (van der Maaten et al., 2009; Gregor et al., 2016). Sem...
2,021
CONTRASTIVE REPRESENTATION LEARNING
SP:a24603a5dbc07070aeba98e1206511799111bec6
[ "This paper studies the potential bias in deep semi-supervised anomaly detection. The bias is evaluated in terms of TPR rate given a fixed FPR rate. It uses the anomaly scores output by unsupervised anomaly detectors as a benchmark to examine the relative scoring bias in deep semi-supervised anomaly detectors. It f...
Anomaly detection presents a unique challenge in machine learning, due to the scarcity of labeled anomaly data. Recent work attempts to mitigate such problems by augmenting training of deep anomaly detection models with additional labeled anomaly samples. However, the labeled data often does not align with the target d...
[]
[ { "authors": [ "Charu C Aggarwal", "Saket Sathe" ], "title": "Theoretical foundations and algorithms for outlier ensembles", "venue": "Acm Sigkdd Explorations Newsletter,", "year": 2015 }, { "authors": [ "Varun Chandola", "Arindam Banerjee", "Vipin Kumar" ...
[ { "heading": "1 INTRODUCTION", "text": "Anomaly detection (Chandola et al., 2009; Pimentel et al., 2014) trains a formal model to identify unexpected or anomalous instances in incoming data, whose behaviors differ from normal instances. It is particularly useful for detecting problematic events such as digi...
2,020
null
SP:cf6c9061542bf9c43a968faa574ce03ad71a859a
[ "The authors present an approach for testing calibration in conditional probability estimation models. They build on a line of work in the kernel estimation literature assessing whether the conditional distributions are well calibrated (i.e. P(Y | f(X)) = f(X), where f is some predictive model). They develop an MMD...
Most supervised machine learning tasks are subject to irreducible prediction errors. Probabilistic predictive models address this limitation by providing probability distributions that represent a belief over plausible targets, rather than point estimates. Such models can be a valuable tool in decision-making under unc...
[ { "affiliations": [], "name": "David Widmann" }, { "affiliations": [], "name": "Fredrik Lindsten" } ]
[ { "authors": [ "M.A. Arcones", "E. Giné" ], "title": "On the bootstrap of U and V statistics", "venue": "The Annals of Statistics,", "year": 1992 }, { "authors": [ "C. Berg", "J.P.R. Christensen", "P. Ressel" ], "title": "Harmonic Analysis on Semigro...
[ { "heading": "1 INTRODUCTION", "text": "We consider the general problem of modelling the relationship between a feature X and a target Y in a probabilistic setting, i.e., we focus on models that approximate the conditional probability distribution P(Y |X) of target Y for a given feature X. The use of probabil...
1,980
CALIBRATION TESTS BEYOND CLASSIFICATION
SP:becb496310e88c1e2e7d03131093b9ebcf075c1d
[ "The authors consider the problem of learning a hash function such that semantically similar elements have high collision probability. They modify the approach Deep Hashing Networks (Zhu et al., 2016) with a new loss function. Rather than use a sigmoid based loss function, the authors argue that a loss function ba...
Semantic hashing methods have been explored for learning transformations into binary vector spaces. These learned binary representations may then be used in hashing based retrieval methods, typically by retrieving all neighboring elements in the Hamming ball with radius 1 or 2. Prior studies focus on tasks with a few d...
[]
[ { "authors": [ "Stanley C Ahalt", "Ashok K Krishnamurthy", "Prakoon Chen", "Douglas E Melton" ], "title": "Competitive learning algorithms for vector quantization", "venue": "Neural networks,", "year": 1990 }, { "authors": [ "Sunil Arya", "David M Moun...
[ { "heading": "1 INTRODUCTION", "text": "One of the most challenging aspects in many Information Retrieval (IR) systems is the discovery and identification of the nearest neighbors of a query element in a vector space. This is typically solved using Approximate Nearest Neighbors (ANN) methods as exact solutions...
2,020
null
SP:7611ee6b9dfabf7ec6a65da58cb6e3892705e1c9
[ "This paper introduces a new method for leveraging auxiliary information and unlabelled data to improve out-of-distribution model performance. Theoretically, in a linear model with latent variables, they demonstrate using auxiliary data as inputs helps in-distribution test-error, but can hurt out-of-distribution er...
Consider a prediction setting with few in-distribution labeled examples and many unlabeled examples both inand out-of-distribution (OOD). The goal is to learn a model which performs well both in-distribution and OOD. In these settings, auxiliary information is often cheaply available for every input. How should we best...
[ { "affiliations": [], "name": "DISTRIBUTION ROBUSTNESS" }, { "affiliations": [], "name": "Sang Michael Xie" }, { "affiliations": [], "name": "Ananya Kumar" }, { "affiliations": [], "name": "Robbie Jones" }, { "affiliations": [], "name": "Fereshte Khani" }, ...
[ { "authors": [ "Sajjad Ahmad", "Ajay Kalra", "Haroon Stephen" ], "title": "Estimating soil moisture using remote sensing data: A machine learning approach", "venue": "Advances in Water Resources,", "year": 2010 }, { "authors": [ "EA AlBadawy", "A Saha", ...
[ { "heading": "1 INTRODUCTION", "text": "When models are tested on distributions that are different from the training distribution, they typically suffer large drops in performance (Blitzer and Pereira, 2007; Szegedy et al., 2014; Jia and Liang, 2017; AlBadawy et al., 2018; Hendrycks et al., 2019a). For exam...
null
null
SP:b6dd62914f7464efb601c6d9f8a4d35e047447d5
[ "This paper studies the training of deep hierarchical VAEs and focuses on the problem of posterior collapse. It is argued that reducing the variance of the gradient estimate may help to overcome posterior collapse. The authors focus on reducing the variance of the functions parameterizing the variational distributi...
Variational autoencoders with deep hierarchies of stochastic layers have been known to suffer from the problem of posterior collapse, where the top layers fall back to the prior and become independent of input. We suggest that the hierarchical VAE objective explicitly includes the variance of the function parameterizin...
[]
[ { "authors": [ "Samuel R. Bowman", "Luke Vilnis", "Oriol Vinyals", "Andrew M. Dai", "Rafal Józefowicz", "Samy Bengio" ], "title": "Generating Sentences from a Continuous Space", "venue": "In CoNLL", "year": 2016 }, { "authors": [ "Xi Chen", ...
[ { "heading": "1 INTRODUCTION", "text": "Variational autoencoders (VAEs) [10] are a popular latent variable model for unsupervised learning that simplifies learning by the introduction of a learned approximate posterior. Given data x and latent variables z, we specify the conditional distribution p(x|z) by pa...
2,020
null
SP:2d25eeb93ba90f9c4064bf794f9a132a6859c8e4
[ "The paper proposes an approximation method, called NEMO (Normalized maximum likelihood Estimation for model-based optimization) to compute the conditional normalized maximum log-likelihood of a query data point as a way to quantify the uncertainty in a forward prediction model in offline model-based optimization ...
In this work we consider data-driven optimization problems where one must maximize a function given only queries at a fixed set of points. This problem setting emerges in many domains where function evaluation is a complex and expensive process, such as in the design of materials, vehicles, or neural network architectu...
[ { "affiliations": [], "name": "Justin Fu" }, { "affiliations": [], "name": "Sergey Levine" } ]
[ { "authors": [ "Andrew Barron", "Jorma Rissanen", "Bin Yu" ], "title": "The minimum description length principle in coding and modeling", "venue": "IEEE Transactions on Information Theory,", "year": 1998 }, { "authors": [ "Endika Bengoetxea", "Pedro Larrañag...
[ { "heading": "1 INTRODUCTION", "text": "Many real-world optimization problems involve function evaluations that are the result of expensive or time-consuming process. Examples occur in the design of materials (Mansouri Tehrani et al., 2018), proteins (Brookes et al., 2019; Kumar & Levine, 2019), neural netw...
2,021
NORMALIZED MAXIMUM LIKELIHOOD ESTIMATION
SP:ce75f565c3c17363695c9e39f28b49a66e3731b8
[ "This paper proposes a simple approach to discover interpretable latent manipulations in trained text VAEs. The method essentially involves performing PCA on the latent representations to find directions that maximize variance. The authors argue that this results in more interpretable directions. The method is appl...
Language generation models are attracting more and more attention due to their constantly increasing quality and remarkable generation results. State-of-the-art NLG models like BART/T5/GPT-3 do not have latent spaces; therefore, there is no natural way to perform controlled generation. In contrast, less popular models w...
[ { "affiliations": [], "name": "LANGUAGE VAES" } ]
[ { "authors": [ "Samuel R. Bowman", "Gabor Angeli", "Christopher Potts", "Christopher D. Manning" ], "title": "A large annotated corpus for learning natural language inference", "venue": "In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing...
[ { "heading": "1 INTRODUCTION", "text": "Transformer-based models yield state-of-the-art results on a number of tasks, including representation learning (Devlin et al., 2019; Liu et al., 2019; Clark et al., 2020) and generation (Radford et al.; Raffel et al., 2019; Lewis et al., 2020). Notably, large languag...
2,020
null
SP:b9d78677e836fddeab78615ad35e9545d9c1d08f
[ "This paper extends results of prior work by Steinke and Zakynthinou, by providing generalization bounds in the PAC-Bayesian and single-draw settings that depend on the conditional mutual information. The emphasis in this work is on obtaining fast rates ($1/n$ vs. $1/\\sqrt{n}$). The authors also conduct empirical ...
We present a framework to derive bounds on the test loss of randomized learning algorithms for the case of bounded loss functions. This framework leads to bounds that depend on the conditional information density between the output hypothesis and the choice of the training set, given a larger set of data samples fr...
[]
[ { "authors": [ "A.R. Asadi", "E. Abbe", "S. Verdú" ], "title": "Chaining mutual information and tightening generalization bounds", "venue": "In Proc. Conf. Neural Inf. Process. Syst. (NeurIPS),", "year": 2018 }, { "authors": [ "R. Bassily", "S. Moran", ...
[ { "heading": null, "text": "√n dependence. We demonstrate the usefulness of our tail bounds by showing that they lead to estimates of the test loss achievable with several neural network architectures trained on MNIST and Fashion-MNIST that match the state-of-the-art bounds available in the literature." ...
2,020
null
SP:29a7b851d3edc2176467adc75ba67cc973a11a37
[ "This work proposes a sequence-to-sequence approach for learning the time evolution of PDEs. The method employs a bi-directional LSTM to predict solutions of a PDE-based formulation for a chosen number of time steps. By itself this is an interesting, and important goal, but the method does not seem to contain any n...
Partial differential equations (PDEs) play a crucial role in studying a vast number of problems in science and engineering. Numerically solving nonlinear and/or high-dimensional PDEs is frequently a challenging task. Inspired by the traditional finite difference and finite element methods and emerging advancements in m...
[]
[ { "authors": [ "Uri M Ascher", "Steven J Ruuth", "Raymond J Spiteri" ], "title": "Implicit-explicit runge-kutta methods for time-dependent partial differential equations", "venue": "Applied Numerical Mathematics,", "year": 1997 }, { "authors": [ "Fischer Black", ...
[ { "heading": "1 INTRODUCTION", "text": "The study of time-dependent partial differential equations (PDEs) is regarded as one of the most important disciplines in applied mathematics. PDEs appear ubiquitously in a broad spectrum of fields including physics, biology, chemistry, and finance, to name a few. ...
2,020
null
SP:797b07cd8142a35333037bb573db0dfe5dde65ac
[ "In this paper, the authors develop a data selection scheme aimed to minimize a notion of Bayes excess risk for overparametrized linear models. The excess Bayes risk is the expected squared error between the prediction and the target. The authors note that solutions such as V-optimality exist for the underparametri...
The impressive performance exhibited by modern machine learning models hinges on the ability to train such models on very large amounts of labeled data. However, since access to large volumes of labeled data is often limited or expensive, it is desirable to alleviate this bottleneck by carefully curating the training...
[]
[ { "authors": [ "Sanjeev Arora", "Simon S Du", "Wei Hu", "Zhiyuan Li", "Russ R Salakhutdinov", "Ruosong Wang" ], "title": "On exact computation with an infinitely wide neural net", "venue": "In Advances in Neural Information Processing Systems, pp. 8141–8150,", ...
[ { "heading": "1 INTRODUCTION", "text": "The impressive performance exhibited by modern machine learning models hinges on the ability to train the aforementioned models on very large amounts of labeled data. In practice, in many real-world scenarios, even when raw data exists aplenty, acquiring labels migh...
2,020
null
SP:4989f7703e106a20401cec0a5058d440720b0379
[ "This paper proposes a novel algorithm for offline policy optimization. The main idea is to prevent overestimation bias by regularizing against the variance of the importance weighted value estimate. There are two key modifications: (1) using an importance weight from the stationary distribution and (2) using Fench...
Learning policies from fixed offline datasets is a key challenge in scaling up reinforcement learning (RL) algorithms towards practical applications. This is often because off-policy RL algorithms suffer from distributional shift, due to a mismatch between the dataset and the target policy, leading to high variance and over-es...
[]
[ { "authors": [ "Prashanth L. A", "Michael C. Fu" ], "title": "Risk-sensitive reinforcement learning: A constrained optimization", "venue": "viewpoint. CoRR,", "year": 2018 }, { "authors": [ "Prashanth L. A", "Mohammad Ghavamzadeh" ], "title": "Variance-con...
[ { "heading": "1 INTRODUCTION", "text": "Offline batch reinforcement learning (RL) algorithms are key to scaling up RL for real-world applications, such as robotics (Levine et al., 2016) and medical problems. This is because offline RL provides the appealing ability for agents to learn from fixed datase...
2,020
null
SP:4e77d43eb99688600f6c2115e1882e0b1e11a751
[ "This paper proposes a novel method to quantify the reliability of DNN-driven hypotheses in a statistical hypothesis testing framework. Naive statistical tests are not appropriate for DNN-driven hypotheses, because the hypotheses are selected by looking at the data (i.e., selection bias exists). To add...
In the past few years, various approaches have been developed to explain and interpret deep neural network (DNN) representations, but it has been pointed out that these representations are sometimes unstable and not reproducible. In this paper, we interpret these representations as hypotheses driven by DNN (called DNN-...
[]
[ { "authors": [ "Sebastian Bach", "Alexander Binder", "Grégoire Montavon", "Frederick Klauschen", "Klaus-Robert Müller", "Wojciech Samek" ], "title": "On pixel-wise explanations for non-linear classifier decisions by layer-wise relevance propagation", "venue": "Plo...
[ { "heading": "1 INTRODUCTION", "text": "The remarkable predictive performance of deep neural networks (DNNs) stems from their ability to learn appropriate representations from data. In order to understand the decision-making process of DNNs, it is thus important to be able to explain and interpret DNN repre...
2,020
null
SP:8a32dfc80f31fd3da97e15ce98193144d03836b5
[ "This paper proposes a variant of the GTD2 algorithm by adding an additional regularization term to the objective function, and the new algorithm is named as Gradient-DD (GDD). The regularization ensures that the value function does not change drastically between consecutive iterations. The authors show that the up...
Off-policy algorithms, in which a behavior policy differs from the target policy and is used to gain experience for learning, have proven to be of great practical value in reinforcement learning. However, even for simple convex problems such as linear value function approximation, these algorithms are not guaranteed to...
[]
[ { "authors": [ "K. Atkinson", "W. Han", "D. Stewart" ], "title": "Numerical Solution of Ordinary Differential Equations", "venue": "JOHN WILEY & SONS,", "year": 2008 }, { "authors": [ "L.C. Baird" ], "title": "Residual algorithms: Reinforcement learning wi...
[ { "heading": "1 INTRODUCTION", "text": "Off-policy algorithms for value function learning enable an agent to use a behavior policy that differs from the target policy in order to gain experience for learning. However, because off-policy methods learn a value function for a target policy given data due to a ...
2,020
null
SP:dcb62a0cc1b03e9ea24b2ed167f14255d9386f95
[ "This paper presents a methodology for incorporating factor-graphs into model-based and model-free RL methods. The work starts by assuming access to a correct factor graph showing the relationship between individual state factors, actions, and rewards. The authors propose to make use of this factor graph by usi...
We propose a simple class of deep reinforcement learning (RL) methods, called FactoredRL, that can leverage factored environment structures to improve the sample efficiency of existing model-based and model-free RL algorithms. In tabular and linear approximation settings, the factored Markov decision process literature...
[]
[ { "authors": [ "Marcin Andrychowicz", "Filip Wolski", "Alex Ray", "Jonas Schneider", "Rachel Fong", "Peter Welinder", "Bob McGrew", "Josh Tobin", "OpenAI Pieter Abbeel", "Wojciech Zaremba" ], "title": "Hindsight experience replay", "venue":...
[ { "heading": null, "text": "We propose a simple class of deep reinforcement learning (RL) methods, called FactoredRL, that can leverage factored environment structures to improve the sample efficiency of existing model-based and model-free RL algorithms. In tabular and linear approximation settings, the fac...
2,020
FACTOREDRL: LEVERAGING FACTORED GRAPHS FOR DEEP REINFORCEMENT LEARNING
SP:ad7eb2bcb3a83153f140e5e8bfaa8b76110e62ab
[ "It is a very poorly written paper. The basic idea of avoiding the wait for a full forward pass is not new. Multiple research papers have been published, ranging from using stale weights to some form of sub-network backprop as a proxy for the full network. This paper proposes no new idea for local up...
Deep learning models trained on large data sets have been widely successful in both vision and language domains. As state-of-the-art deep learning architectures have continued to grow in parameter count so have the compute budgets and times required to train them, increasing the need for compute-efficient methods that ...
[]
[ { "authors": [ "David H. Ackley", "Geoffrey E. Hinton", "Terrence J. Sejnowski" ], "title": "A learning algorithm for Boltzmann machines", "venue": "Cognitive Science,", "year": 1985 }, { "authors": [ "Eugene Belilovsky", "Michael Eickenberg", "Edouard...
[ { "heading": "1 INTRODUCTION", "text": "Backpropagation (Rumelhart et al., 1985) is by far the most common method used to train neural networks. Alternatives to backpropagation are typically used only when backpropagation is impractical due to a non-differentiable loss (Schulman et al., 2015), non-smooth lo...
2,020
null
SP:a3e5acdd322677d019a4582db78dab2dc1102818
[ "This paper discusses a well-known problem of VAE training that decoder produces blurry reconstruction with constant variance. While much existing work addressed this problem by introducing independent variance training (as of the original VAE model) or additional hyper-parameters, those approaches usually come wit...
Variational autoencoders (VAEs) provide an effective and simple method for modeling complex distributions. However, training VAEs often requires considerable hyperparameter tuning to determine the optimal amount of information retained by the latent variable. We study the impact of calibrated decoders, which learn the ...
[]
[ { "authors": [ "Martı́n Abadi", "Paul Barham", "Jianmin Chen", "Zhifeng Chen", "Andy Davis", "Jeffrey Dean", "Matthieu Devin", "Sanjay Ghemawat", "Geoffrey Irving", "Michael Isard" ], "title": "Tensorflow: A system for large-scale machine learn...
[ { "heading": "1 INTRODUCTION", "text": "Deep density models based on the variational autoencoder (VAE) (Kingma & Welling, 2014; Rezende et al., 2014) have found ubiquitous use in probabilistic modeling and representation learning as they are both conceptually simple and are able to scale to very complex dis...
2,020
null
SP:3a1d7f7165762299ba2d9bab4144576660b9a784
[ "This paper proposes a sampling free technique based on variance propagation to model predictive distributions of deep learning models. Estimating uncertainty of deep learning models is an important line of research for understanding the reliability of predictions and ensuring robustness to out-of-distribution data...
Uncertainty evaluation is a core technique when deep neural networks (DNNs) are used in real-world problems. In practical applications, we often encounter unexpected samples that have not been seen in the training process. Not only achieving high prediction accuracy but also detecting uncertain data is significant for s...
[]
[ { "authors": [ "Roberto Cipolla" ], "title": "Bayesian segnet: Model uncertainty in deep convolutional encoder-decoder architectures for scene understanding", "venue": "Proceedings of the British Machine Vision Conference (BMVC),", "year": 2017 }, { "authors": [ "M. Christoph...
[ { "heading": "1 INTRODUCTION", "text": "Uncertainty evaluation is a core technique in practical applications of deep neural networks (DNNs). As an example, let us consider Cyber-Physical Systems (CPS) such as automated driving systems. In the past decade, machine learning methods have been widely utilized ...
2,020
null
SP:72d1283f3602edc22896934271fcec5b03f25d9e
[ "This paper studies differentially private synthetic dataset generation. Unlike previous DP-based GAN models, this paper aims to boost the sample quality after the training stage. In particular, the final synthetic dataset is sampled from the sequence of generators obtained during GAN training. The distributio...
Differentially private GANs have proven to be a promising approach for generating realistic synthetic data without compromising the privacy of individuals. Due to the privacy-protective noise introduced in the training, the convergence of GANs becomes even more elusive, which often leads to poor utility in the output g...
[ { "affiliations": [], "name": "POST-GAN BOOSTING" }, { "affiliations": [], "name": "Marcel Neunhoeffer" }, { "affiliations": [], "name": "Zhiwei Steven Wu" } ]
[ { "authors": [ "Martin Abadi", "Andy Chu", "Ian Goodfellow", "H. Brendan McMahan", "Ilya Mironov", "Kunal Talwar", "Li Zhang" ], "title": "Deep learning with differential privacy", "venue": "In Proceedings of the 2016 ACM SIGSAC Conference on Computer and Co...
[ { "heading": "1 INTRODUCTION", "text": "The vast collection of detailed personal data, including everything from medical history to voting records, to GPS traces, to online behavior, promises to enable researchers from many disciplines to conduct insightful data analyses. However, many of these datasets con...
2,021
null
SP:a6280b6605e621403de6ac4c3fc80fa71184ab6d
[ "In this paper, the authors propose a post-processing method for removing bias from a trained model. The bias is defined as conditional statistical parity — for a given partitioning of the data, the predicted label should be conditionally uncorrelated with the sensitive (bias inducing) attribute for each partition....
We present an efficient and scalable algorithm for debiasing trained models, including deep neural networks (DNNs), which we prove to be near-optimal by bounding its excess Bayes risk. Unlike previous black-box reduction methods to cost-sensitive classification rules, the proposed algorithm operates on models that have...
[]
[ { "authors": [ "M.A. Bruckner" ], "title": "The promise and perils of algorithmic lenders’ use of big data,", "venue": "Chi.-Kent L. Rev.,", "year": 2018 }, { "authors": [ "R.C. Deo" ], "title": "Machine learning in medicine,", "venue": "Circulation,", "year":...
[ { "heading": "1 INTRODUCTION", "text": "Machine learning is increasingly applied to critical decisions which can have a lasting impact on individual lives, such as for credit lending (Bruckner, 2018), medical applications (Deo, 2015), and criminal justice (Brennan et al., 2009). Consequently, it is imperati...
2,020
A NEAR-OPTIMAL ALGORITHM FOR DEBIASING TRAINED MACHINE LEARNING MODELS
SP:90ffef024018f59b3bde23aa2e2a4677602d41e8
[ "This paper presents a variant of Transformer where low-dimension matrix multiplications and single-head attention are used. Stacked group-linear-transformation (GLT) are applied on input of each layer to perform dimension growth and then reduction. The paper is well-written and easy to follow. Experiments demonstr...
We introduce a deep and light-weight transformer, DeLighT, that delivers similar or better performance than standard transformer-based models with significantly fewer parameters. DeLighT more efficiently allocates parameters both (1) within each Transformer block using the DeLighT transformation, a deep and lightweight...
[ { "affiliations": [], "name": "LIGHT-WEIGHT TRANSFORMER" }, { "affiliations": [], "name": "Sachin Mehta" }, { "affiliations": [], "name": "Marjan Ghazvininejad" }, { "affiliations": [], "name": "Srinivasan Iyer" }, { "affiliations": [], "name": "Luke Zettlemoy...
[ { "authors": [ "Ashish Vaswani", "Noam Shazeer", "Niki Parmar", "Jakob Uszkoreit", "Llion Jones", "Aidan N Gomez", "Łukasz Kaiser", "Illia Polosukhin" ], "title": "Attention is all you need", "venue": "In Advances in neural information processing syste...
[ { "heading": "1 INTRODUCTION", "text": "Attention-based transformer networks (Vaswani et al., 2017) are widely used for sequence modeling tasks, including language modeling and machine translation. To improve performance, models are often scaled to be either wider, by increasing the dimension of hidden laye...
2,021
null
SP:c83ecc74eb885df5f29e5a7080a8c60d1ee0a3b0
[ "This paper shows a relationship between the project rule weights of a Hopfield network (HN) and the interaction weights in a corresponding restricted Boltzmann machine (RBM). The mapping from HN to RBM is facilitated by realising that the partition function of BN can be seen as the partition function of a binary-c...
Hopfield networks (HNs) and Restricted Boltzmann Machines (RBMs) are two important models at the interface of statistical physics, machine learning, and neuroscience. Recently, there has been interest in the relationship between HNs and RBMs, due to their similarity under the statistical mechanics formalism. An exact m...
[ { "affiliations": [], "name": "BOLTZMANN MACHINES" }, { "affiliations": [], "name": "Matthew Smart" }, { "affiliations": [], "name": "Anton Zilman" } ]
[ { "authors": [ "David H Ackley", "Geoffrey E Hinton", "Terrence J. Sejnowski" ], "title": "A learning algorithm for boltzmann machines", "venue": "Cognitive Science,", "year": 1985 }, { "authors": [ "Elena Agliari", "Adriano Barra", "Andrea De Antoni",...
[ { "heading": "1 INTRODUCTION", "text": "Hopfield networks (HNs) (Hopfield, 1982; Amit, 1989) are a classical neural network architecture that can store prescribed patterns as fixed-point attractors of a dynamical system. In their standard formulation with binary valued units, HNs can be regarded as spin gla...
2,021
null
SP:3d705a1b70254d2b9d05277efff8ac08b0539086
[ "The authors present a way to learn the action of an arbitrary orthogonal matrix on a vector via a map from $\\mathbb{R}^{n\\times n}$ onto $\\operatorname{O}(n)$. They show that the map is surjective, and give conditions under which they can invert this action. They then compare against previous proposed schemes i...
Orthogonal weight matrices are used in many areas of deep learning. Much previous work attempts to alleviate the additional computational resources required to constrain weight matrices to be orthogonal. One popular approach utilizes many Householder reflections. The only practical drawback is that many reflections c...
[]
[ { "authors": [ "Martin Arjovsky", "Amar Shah", "Yoshua Bengio" ], "title": "Unitary Evolution Recurrent Neural Networks", "venue": "In ICML,", "year": 2016 }, { "authors": [ "Nitin Bansal", "Xiaohan Chen", "Zhangyang Wang" ], "title": "Can We G...
[ { "heading": null, "text": "Orthogonal weight matrices are used in many areas of deep learning. Much previous work attempt to alleviate the additional computational resources it requires to constrain weight matrices to be orthogonal. One popular approach utilizes many Householder reflections. The only pract...
2,020
null
SP:0cb862cf3806c4f04d2d30f200c25841a1cb52a8
[ "This paper proposes to learn patient-specific representation using patient physiological signals. The authors design a PCP representation for each patient, which is learned to agree with signals from the same patients and disagrees with the remaining patients. In the supervised part, the classifier is generated f...
Many clinical deep learning algorithms are population-based and difficult to interpret. Such properties limit their clinical utility as population-based findings may not generalize to individual patients and physicians are reluctant to incorporate opaque models into their clinical workflow. To overcome these obstacles,...
[]
[ { "authors": [ "A K Akobeng" ], "title": "Understanding randomised controlled trials", "venue": "Archives of Disease in Childhood,", "year": 2005 }, { "authors": [ "Erick A Perez Alday", "Annie Gu", "Amit Shah", "Chad Robichaux", "An-Kwok Ian Wong", ...
[ { "heading": "1 INTRODUCTION", "text": "Modern medical research is arguably anchored around the “gold standard” of evidence provided by randomized control trials (RCTs) (Cartwright, 2007). However, RCT-derived conclusions are population-based and fail to capture nuances at the individual patient level (Akob...
null
PCPS: PATIENT CARDIAC PROTOTYPES
SP:b7a45906d972644e9d0e757a83ff50fd3ad7cde3
[ "Either putting the uncertainty on the weights (e.g., Bayes by BP) or on the activation (e.g., fast dropout or variants of natural-parameter networks [2,3] or Bayesian dark knowledge [4]) or both [1] have been investigated before. The idea of moving the uncertainty from the weight to the activation function is not ...
Current approaches for uncertainty estimation in deep learning often produce too confident results. Bayesian Neural Networks (BNNs) model uncertainty in the space of weights, which is usually high-dimensional and limits the quality of variational approximations. The more recent functional BNNs (fBNNs) address this only...
[ { "affiliations": [], "name": "Pablo Morales-Álvarez" }, { "affiliations": [], "name": "Daniel Hernández-Lobato" } ]
[ { "authors": [ "F. Agostinelli", "M. Hoffman", "P. Sadowski", "P. Baldi" ], "title": "Learning activation functions to improve deep neural networks", "venue": "arXiv preprint arXiv:1412.6830,", "year": 2014 }, { "authors": [ "P. Baldi", "P. Sadowski", ...
[ { "heading": "1 INTRODUCTION", "text": "Deep Neural Networks (DNNs) have achieved state-of-the-art performance in many different tasks, such as speech recognition (Hinton et al., 2012), natural language processing (Mikolov et al., 2013) or computer vision (Krizhevsky et al., 2012). In spite of their predict...
2,021
ACTIVATION-LEVEL UNCERTAINTY IN DEEP NEURAL NETWORKS
SP:4d94ef57fdaf5f1100b6b09331d5cff5264fcdf6
[ "In this paper, the authors argue that the mini-batch method and local SGD method suffers generalization performance degradation for large local mini-batch size. An asynchronous method is proposed to improve the generalization performance. A sublinear convergence rate is provided for the non-convex objective. As th...
Distributed variants of stochastic gradient descent (SGD) are central to training deep neural networks on massive datasets. Several scalable versions of data-parallel SGD have been developed, leveraging asynchrony, communication compression, and local gradient steps. Current research seeks a balance between distributed ...
[ { "affiliations": [], "name": "MEETS ASYNCHRONY" } ]
[ { "authors": [ "Dan Alistarh", "Demjan Grubic", "Jerry Li", "Ryota Tomioka", "M. Vojnovic" ], "title": "Qsgd: Communicationefficient sgd via gradient quantization and encoding", "venue": null, "year": 2017 }, { "authors": [ "Nils Berglund" ], "...
[ { "heading": "1 INTRODUCTION", "text": "In this paper, we consider the classic problem of minimizing an empirical risk, defined simply as min x∈Rd ∑ i∈[I] fi(x), (1) where d is the dimension, x ∈ Rd denotes the set of model parameters, [I] is the training set, and fi(x) : Rd → R is the loss on the training ...
2,020
null
SP:3dffd0add054e13be141cfe939e367f6f6785eb8
[ "This paper deals with the problem of natural language generation for a dialogue system involved in complex communication tasks such as negotiation or persuasion. The proposed architecture consists of two encoders: one for the utterance and the other for dialogue acts and negotiation strategies. The decoder is an R...
To successfully negotiate a deal, it is not enough to communicate fluently: pragmatic planning of persuasive negotiation strategies is essential. While modern dialogue agents excel at generating fluent sentences, they still lack pragmatic grounding and cannot reason strategically. We present DIALOGRAPH, a negotiation s...
[ { "affiliations": [], "name": "NEGOTIATION DIALOGUES" }, { "affiliations": [], "name": "Rishabh Joshi" }, { "affiliations": [], "name": "Vidhisha Balachandran" }, { "affiliations": [], "name": "Shikhar Vashishth" }, { "affiliations": [], "name": "Alan W Black"...
[ { "authors": [ "Nicholas Asher", "Julie Hunter", "Mathieu Morey", "Benamara Farah", "Stergos Afantenos" ], "title": "Discourse structure and dialogue acts in multiparty dialogue: the STAC corpus", "venue": "In Proceedings of the Tenth International Conference on Languag...
[ { "heading": "1 INTRODUCTION", "text": "Negotiation is ubiquitous in human interaction, from e-commerce to the multi-billion dollar sales of companies. Learning how to negotiate effectively involves deep pragmatic understanding and planning the dialogue strategically (Thompson; Bazerman et al., 2000b; Pruit...
2,021
null
SP:3b3e7833784c53527eb32d5f6ac8d720f9d764bd
[ "The paper studies a problem of learning step-size policy for L-BFGS algorithm. This paper falls into a general category of meta-learning algorithms that try to derive a data-driven approach to learn one of the parameters of the learning algorithm. In this case, it is the learning rate of L-BFGS. The paper is very ...
We consider the problem of how to learn a step-size policy for the Limited-Memory Broyden-Fletcher-Goldfarb-Shanno (L-BFGS) algorithm. This is a limited computational memory quasi-Newton method widely used for deterministic unconstrained optimization but currently avoided in large-scale problems for requiring step sizes...
[]
[ { "authors": [ "A. Agrawal", "B. Amos", "S. Barratt", "S. Boyd", "S. Diamond", "Z. Kolter" ], "title": "Differentiable convex optimization layers", "venue": "In Advances in Neural Information Processing Systems,", "year": 2019 }, { "authors": [ "...
[ { "heading": "1 INTRODUCTION", "text": "Consider the unconstrained optimization problem\nminimize x f(x) (1)\nwhere f : Rn → R is an objective function that is differentiable for all x ∈ Rn, with n being the number of decision variables forming x. Let ∇xf(x0) be the gradient of f(x) evaluated at some x0 ∈ R...
2,020
null
SP:7a92beaba926a93a627208abebe4a455ae3e0400
[ "This paper proposes a new calibration error measurement named UCE (Uncertainty Calibration Error) for deep classification models. It consists in doing a calibration in order to achieve \"perfect calibration\" (i.e., the uncertainty provided is equivalent to the classification error at all levels in [0, 1]), relyin...
Various metrics have recently been proposed to measure uncertainty calibration of deep models for classification. However, these metrics either fail to capture miscalibration correctly or lack interpretability. We propose to use the normalized entropy as a measure of uncertainty and derive the Uncertainty Calibration E...
[]
[ { "authors": [ "Arsenii Ashukha", "Alexander Lyzhov", "Dmitry Molchanov", "Dmitry Vetrov" ], "title": "Pitfalls of in-domain uncertainty estimation and ensembling in deep learning", "venue": "In ICLR,", "year": 2020 }, { "authors": [ "Christopher M Bishop" ...
[ { "heading": "1 INTRODUCTION", "text": "Advances in deep learning have led to superior accuracy in classification tasks, making deep learning classifiers an attractive choice for safety-critical applications like autonomous driving (Chen et al., 2015) or computer-aided diagnosis (Esteva et al., 2017). Howev...
2,020
null
SP:92d112388a1eac20c2208f0596cdfcdcca685c8f
[ "This paper presents a model and a corresponding training approach for multi-scale invertible models. The presented model is defined on multiple scales with information on finer scales being conditioned on coarser scales. Data generation is hence done sequentially from a coarser to finer scale. The authors argue th...
High-dimensional Bayesian inference problems pose a long-standing challenge in generating samples, especially when the posterior has multiple modes. For a wide class of Bayesian inference problems equipped with a multiscale structure such that a low-dimensional (coarse-scale) surrogate can approximate the original high-dimen...
[]
[ { "authors": [ "Lynton Ardizzone", "Jakob Kruse", "Sebastian Wirkert", "Daniel Rahner", "Eric W Pellegrini", "Ralf S Klessen", "Lena Maier-Hein", "Carsten Rother", "Ullrich Köthe" ], "title": "Analyzing inverse problems with invertible neural network...
[ { "heading": "1 INTRODUCTION", "text": "Bayesian inference provides a powerful framework to blend prior knowledge, data generation process and (possibly small) data for statistical inference. With some prior knowledge ⇢ (distribution) for the quantity of interest x 2 Rd, and some (noisy) measurement y 2 Rdy...
2,020
null
SP:077926a214f87b9fdcd5a5f9d818d6313437cd90
[ "This study is presented clearly, and the core idea is interesting. However, the presented novelty is limited to a globally (for all tasks) and locally (task-specific) learning paradigm using a framework inspired by (Badirli et al., 2020). The authors have presented experimental results for both regression and cla...
Meta-optimization is an effective approach that learns a shared set of parameters across tasks for parameter initialization in meta-learning. A key challenge for meta-optimization-based approaches is to determine whether an initialization condition can be generalized to tasks with diverse distributions to accelerate lea...
[]
[ { "authors": [ "Ferran Alet", "Tomás Lozano-Pérez", "Leslie P Kaelbling" ], "title": "Modular meta-learning", "venue": "arXiv preprint arXiv:1806.10166,", "year": 2018 }, { "authors": [ "Jacob Andreas", "Marcus Rohrbach", "Trevor Darrell", "Dan K...
[ { "heading": "1 INTRODUCTION", "text": "While humans can learn quickly with a few samples with prior knowledge and experiences, artificial intelligent algorithms face challenges in dealing with such situations. Learning to learn (or metalearning) (Vilalta & Drissi, 2002) emerges as the common practice to ad...
2,020
null
SP:2969ff98eb93abe37242a962df458541311090ff
[ "The paper explores adversarial robustness in a new setting of test-time adaptation. It shows this new problem of “test-time-adapted adversarial robustness” is strictly weaker than the “traditional adversarial robustness” when assuming the training data is available for the “test-time-adapted adversarial robustness...
This paper studies test-time adaptation in the context of adversarial robustness. We formulate an adversarial threat model for test-time adaptation, where the defender may have a unique advantage as the adversarial game becomes a maximin game, instead of a minimax game as in the classic adversarial robustness threat mo...
[]
[ { "authors": [ "Anish Athalye", "Nicholas Carlini", "David A. Wagner" ], "title": "Obfuscated gradients give a false sense of security: Circumventing defenses to adversarial examples", "venue": "Proceedings of the 35th International Conference on Machine Learning,", "year": 201...
[ { "heading": "1 INTRODUCTION", "text": "There is a surge of interest to study test-time adaptation to help generalization to unseen domains (e.g., recent work by Sun et al. (2020); Wang et al. (2020); Nado et al. (2020)). At the high level, a generic test-time adaptation can be modeled as an algorithm Γ whi...
2,020
null
SP:b7532fd6e281d88fff5a0a89c73ae3e6651f8827
[ "This paper presents an approach to deep subspace clustering based on minimizing the correntropy induced metric (CIM), with the goal of establishing when training should be stopped and generalizing to unseen data. The main contribution over the existing S2ConfSCN method is a change from squared error loss to CIM wh...
Deep subspace clustering (SC) algorithms recently gained attention due to their ability to successfully handle nonlinearities in data. However, the insufficient capability of existing SC methods to deal with data corruption of unknown (arbitrary) origin hinders their generalization ability and capability to address rea...
[]
[ { "authors": [ "Vincent Vanhoucke", "Vijay Vasudevan", "Fernanda Viégas", "Oriol Vinyals", "Pete Warden", "Martin Wattenberg", "Martin Wicke", "Yuan Yu", "Xiaoqiang Zheng" ], "title": "TensorFlow: Large-scale machine learning on heterogeneous systems...
[ { "heading": "1 INTRODUCTION", "text": "Subspace clustering approaches have achieved encouraging performance when compared with the clustering algorithms that rely on proximity measures between data points. The main idea behind the subspace model is that the data can be drawn from low-dimensional subspaces ...
2,020
null
SP:f0e0d909df518f25eb9243837939225d7db1196e
[ "The authors present a new Algorithm for performing unsupervised anomaly detection in diverse applications such as visual, audio and text data. They propose a two-step method in which first they utilise contrastive learning in order to find a semantically dense map of the data onto the unit-hypersphere. Then, they ...
In this paper we present SemSAD, a simple and generic framework for detecting examples that lie out-of-distribution (OOD) for a given training set. The approach is based on learning a semantic similarity measure to find for a given test example the semantically closest example in the training set and then using a discr...
[]
[ { "authors": [ "Faruk Ahmed", "Aaron C. Courville" ], "title": "Detecting semantic anomalies", "venue": "In AAAI,", "year": 2020 }, { "authors": [ "Samaneh Azadi", "Catherine Olsson", "Trevor Darrell", "Ian J. Goodfellow", "Augustus Odena" ],...
[ { "heading": "1 INTRODUCTION", "text": "Anomaly detection or novelty detection aims at identifying patterns in data that are significantly different to what is expected. This problem is inherently a binary classification problem that classifies examples either as in-distribution or out-of-distribution, give...
2,020
null
SP:7c44bf5a4a8d5e5ee1e86ee4582c42186e2df72c
[ "The paper proposes a generative method for 3D objects (voxels representation). Given an initial voxels configuration (e.g. partial shape, or even a single voxel), the method learns a local transition kernel for a Markov chain to decide how to evolve the configuration; sampling iteratively from these probabilities ...
We present a probabilistic 3D generative model, named Generative Cellular Automata, which is able to produce diverse and high quality shapes. We formulate the shape generation process as sampling from the transition kernel of a Markov chain, where the sampling chain eventually evolves to the full shape of the learned d...
[ { "affiliations": [], "name": "Dongsu Zhang" }, { "affiliations": [], "name": "Changwoon Choi" }, { "affiliations": [], "name": "Jeonghwan Kim" }, { "affiliations": [], "name": "Young Min Kim" } ]
[ { "authors": [ "Panos Achlioptas", "Olga Diamanti", "Ioannis Mitliagkas", "Leonidas Guibas" ], "title": "Learning representations and generative models for 3D point clouds", "venue": "Proceedings of Machine Learning Research,", "year": 2018 }, { "authors": [ ...
[ { "heading": "1 INTRODUCTION", "text": "Probabilistic 3D shape generation aims to learn and sample from the distribution of diverse 3D shapes and has applications including 3D contents generation or robot interaction. Specifically, learning the distribution of shapes or scenes can automate the process of ge...
2,021
GENERATIVE CELLULAR AUTOMATA
SP:9326f169cc5e8d2f4268dcf39af31590ee004d98
[ "This paper extends the results for actor-critic with stochastic policies of [Zhang, ICML 2018] to deterministic policies and offers the proof of convergence under some specific assumptions. The authors consider both the on-policy setting and the off-policy setting and offers some convincing derivation. It provides...
[Zhang, ICML 2018] provided the first decentralized actor-critic algorithm for multi-agent reinforcement learning (MARL) that offers convergence guarantees. In that work, policies are stochastic and are defined on finite action spaces. We extend those results to offer a provably-convergent decentralized actor-cri...
[]
[ { "authors": [ "Zhang" ], "title": "The desired result holds since Step 1 and Step 2 of the proof of Theorem", "venue": null, "year": 2018 }, { "authors": [ "Benveniste" ], "title": "492 proof is now similar to the proof of Lemma 2 on page", "venue": null, "ye...
[ { "heading": null, "text": "[Zhang, ICML 2018] provided the first decentralized actor-critic algorithm for1 multi-agent reinforcement learning (MARL) that offers convergence guarantees. In2 that work, policies are stochastic and are defined on finite action spaces. We extend3 those results to offer a provab...
2,020
Decentralized Deterministic Multi-Agent Reinforcement Learning
SP:cc282126b689c7311c3a28f0d173a004ed24382f
[ "The paper proposes a new training objective for fine-tuning pre-trained models: a weighted sum of the classical cross-entropy (CE) and a new supervised contrastive learning term (SCP). The latter uses the (negated) softmax over the embedding distances (i.e. dot products) between a training instance and all other i...
State-of-the-art natural language understanding classification models follow two stages: pre-training a large language model on an auxiliary task, and then fine-tuning the model on a task-specific labeled dataset using cross-entropy loss. However, the cross-entropy loss has several shortcomings that can lead to sub-optim...
[ { "affiliations": [], "name": "Beliz Gunel" }, { "affiliations": [], "name": "Jingfei Du" }, { "affiliations": [], "name": "Alexis Conneau" }, { "affiliations": [], "name": "Ves Stoyanov" } ]
[ { "authors": [ "Armen Aghajanyan", "Akshat Shrivastava", "Anchit Gupta", "Naman Goyal", "Luke Zettlemoyer", "Sonal Gupta" ], "title": "Better fine-tuning by reducing representational collapse", "venue": null, "year": 2008 }, { "authors": [ "Phili...
[ { "heading": "1 INTRODUCTION", "text": "State-of-the-art for most existing natural language processing (NLP) classification tasks is achieved by models that are first pre-trained on auxiliary language modeling tasks and then fine-tuned on the task of interest with cross-entropy loss (Radford et al., 2019; H...
2,021
PRE-TRAINED LANGUAGE MODEL FINE-TUNING
SP:7eb0d8278168465270570233e4af64ebb3f2f154
[ "Paper proposes to attack the challenging problem of RL with sparse feedback by leveraging a few demonstrations and learnable reward redistribution. The redistributed reward is computed by aligning the key events (a set of clustered symbols) to the demonstrations via PSSM-based seq matching. Experiments on two arti...
Reinforcement Learning algorithms require a large number of samples to solve complex tasks with sparse and delayed rewards. Complex tasks are often hierarchically composed of sub-tasks. Solving a sub-task increases the return expectation and leads to a step in the Q-function. RUDDER identifies these steps and then redi...
[ { "affiliations": [], "name": "STRATIONS BY" }, { "affiliations": [], "name": "REWARD REDISTRIBUTION" } ]
[ { "authors": [ "P. Abbeel", "A.Y. Ng" ], "title": "Apprenticeship learning via inverse reinforcement learning", "venue": "In Proceedings of the Twenty-First International Conference on Machine Learning, pp", "year": 2004 }, { "authors": [ "S.F. Altschul", "W. Gish...
[ { "heading": null, "text": "Reinforcement Learning algorithms require a large number of samples to solve complex tasks with sparse and delayed rewards. Complex tasks are often hierarchically composed of sub-tasks. Solving a sub-task increases the return expectation and leads to a step in the Q-function. RUD...
2,021
ALIGN-RUDDER: LEARNING FROM FEW DEMONSTRATIONS BY REWARD REDISTRIBUTION
SP:233335a3dc327cf153bd2e8d35a9e4594cf5bc67
[ "This paper proposes a novel approach to modeling uncertainty, as an layer added-on to an otherwise black-box system. The ChePAN uses a neural network to estimate per-quantile roots of a chebyshev polynomial, then uses a quantile regression loss to fit these coefficients using backpropagation. Importantly, the Cheb...
Most predictive systems currently in use do not report any useful information for auditing their associated uncertainty and evaluating the corresponding risk. Taking it for granted that their replacement may not be advisable in the short term, in this paper we propose a novel approach to modelling confidence in such sy...
[]
[ { "authors": [ "M. Abadi", "P. Barham", "J. Chen", "Z. Chen", "A. Davis", "J. Dean", "M. Devin", "S. Ghemawat", "G. Irving", "M. Isard" ], "title": "Tensorflow: A system for large-scale machine learning", "venue": "In 12th tUSENIXu Symposiu...
[ { "heading": "1 INTRODUCTION", "text": "The present paper proposes a novel method for adding aleatoric uncertainty estimation to any pointwise predictive system currently in use. Considering the system as a black box, i.e. avoiding any hypothesis about the internal structure of the system, the method offers...
2,020
null
SP:eff774eddcc60e943c0a41207c21a1c9d6d5d950
[ "This paper proposes an approach to improve (supervised and unsupervised) representation learning for text using constrastive learning. The proposed approach augments standard contrastive learning with: (1) Spectral-norm regularization of the critic to estimate the Wasserstein distance instead of the KL (as in the ...
There has been growing interest in representation learning for text data, based on theoretical arguments and empirical evidence. One important direction involves leveraging contrastive learning to improve learned representations. We propose an application of contrastive learning for intermediate textual feature pairs, ...
[]
[ { "authors": [ "Martin Arjovsky", "Soumith Chintala", "Léon Bottou" ], "title": "Wasserstein generative adversarial networks", "venue": "In ICML,", "year": 2017 }, { "authors": [ "Sanjeev Arora", "Hrishikesh Khandeparkar", "Mikhail Khodak", "Ores...
[ { "heading": "1 INTRODUCTION", "text": "Representation learning is one of the pivotal topics in natural language processing (NLP), in both supervised and unsupervised settings. It has been widely recognized that some forms of “general representation” exist beyond specific applications (Oord et al., 2018). T...
2,020
null
SP:a8bb14b514e474691be63b51582544a9befa7125
[ "The paper finds that at extreme sparsities (>95%), existing approaches to pruning neural networks at initialization devolve to worse than random pruning. The paper posits that this degenerate behavior is due to the fact that weights are pruned in groups, though the saliency metrics only capture pointwise changes. ...
Recent studies have shown that skeletonization (pruning parameters) of networks at initialization provides all the practical benefits of sparsity both at inference and training time, while only marginally degrading their performance. However, we observe that beyond a certain level of sparsity (approx 95%), these approa...
[ { "affiliations": [], "name": "Pau de Jorge" }, { "affiliations": [], "name": "Amartya Sanyal" }, { "affiliations": [], "name": "Harkirat S. Behl" }, { "affiliations": [], "name": "Puneet K. Dokania" } ]
[ { "authors": [ "Guillaume Bellec", "David Kappel", "Wolfgang Maass", "Robert Legenstein" ], "title": "Deep rewiring: Training very sparse deep networks", "venue": "In International Conference on Learning Representations,", "year": 2018 }, { "authors": [ "Mig...
[ { "heading": "1 INTRODUCTION", "text": "The majority of pruning algorithms for Deep Neural Networks require training dense models and often fine-tuning sparse sub-networks in order to obtain their pruned counterparts. In Frankle & Carbin (2019), the authors provide empirical evidence to support the hypothes...
2,021
PROGRESSIVE SKELETONIZATION: TRIMMING MORE FAT FROM A NETWORK AT INITIALIZATION
SP:ee89d3273df8b3b082c0e72a8768dff7cd3b7f56
[ "Paper proposed to generate the communication message in MARL with the predicted trajectories of all the agents (include the agent itself). An extra self-attention model is also stacked over the trajectories to trade off the length of prediction and the possible explaining away issue. The whole model is trained vi...
Communication is one of the core components for learning coordinated behavior in multi-agent systems. In this paper, we propose a new communication scheme named Intention Sharing (IS) for multi-agent reinforcement learning in order to enhance the coordination among agents. In the proposed IS scheme, each agent generate...
[ { "affiliations": [], "name": "INTENTION SHARING" }, { "affiliations": [], "name": "Woojun Kim" }, { "affiliations": [], "name": "Jongeui Park" }, { "affiliations": [], "name": "Youngchul Sung" } ]
[ { "authors": [ "Abhishek Das", "Théophile Gervet", "Joshua Romoff", "Dhruv Batra", "Devi Parikh", "Mike Rabbat", "Joelle Pineau" ], "title": "Tarmac: Targeted multi-agent communication", "venue": "In International Conference on Machine Learning,", "year"...
[ { "heading": "1 INTRODUCTION", "text": "Reinforcement learning (RL) has achieved remarkable success in various complex control problems such as robotics and games (Gu et al. (2017); Mnih et al. (2013); Silver et al. (2017)). Multi-agent reinforcement learning (MARL) extends RL to multi-agent systems, which ...
2,021
null
SP:b24e79d30d19c99f1093779bdba8bd8b2aed9ec0
[ "In this paper, the authors focus on keystroke inference attacks in which an attacker leverages machine learning approaches, In particular, a new framework is proposed for low-resource video domain adaptation using supervised disentangled learning, and another method to assess the threat of keystroke inference att...
Keystroke inference attacks are a form of side-channel attacks in which an attacker leverages various techniques to recover a user’s keystrokes as she inputs information into some display (for example, while sending a text message or entering her pin). Typically, these attacks leverage machine learning approaches, but...
[]
[ { "authors": [ "M. Backes", "M. Dürmuth", "D. Unruh" ], "title": "Compromising reflections-or-how to read lcd monitors around the corner", "venue": "IEEE Symposium on Security and Privacy (sp", "year": 2008 }, { "authors": [ "M. Backes", "T. Chen", "M....
[ { "heading": "1 INTRODUCTION", "text": "We are exceedingly reliant on our mobile devices in our everyday lives. Numerous activities, such as banking, communications, and information retrieval, have gone from having separate channels to collapsing into one: through our mobile phones. While this has made many...
2,020
DISENTANGLING STYLE AND CONTENT FOR LOW RESOURCE VIDEO DOMAIN ADAPTATION: A CASE STUDY ON KEYSTROKE INFERENCE ATTACKS
SP:181ce6eaacf4be8ede3fbdd82c63200278f63cc4
[ "The paper considers the problem of approximating Sinkhorn divergence and corresponding transportation plan by combining low-rank and sparse approximation for the Sinkhorn kernel and using Nystrom iterations as a substitute for Sinkhorn's iterations. The corresponding approach is amenable to differentiation and can...
Optimal transport (OT) is a cornerstone of many machine learning tasks. The current best practice for computing OT is via entropy regularization and Sinkhorn iterations. This algorithm runs in quadratic time and requires calculating the full pairwise cost matrix, which is prohibitively expensive for large sets of objec...
[]
[ { "authors": [ "Pierre Ablin", "Gabriel Peyré", "Thomas Moreau" ], "title": "Super-efficiency of automatic differentiation for functions defined as a minimum", "venue": "In ICML,", "year": 2020 }, { "authors": [ "Mokhtar Z. Alaya", "Maxime Berar", "Gil...
[ { "heading": "1 INTRODUCTION", "text": "Measuring the distance between two distributions or sets of objects is a central problem in machine learning. One common method of solving this is optimal transport (OT). OT is concerned with the problem of finding the transport plan for moving a source distribution (...
2,020
null
SP:06414ad3c4b2438227a6d0749755106ee30f1564
[ "The submission presents three contributions. First, the authors show the inconsistencies in the existing annealed Langevin sampling used in score-matching generative models and propose to correct it with the newly proposed Consistent Annealed Sampling (CAS) algorithm. The second contribution claimed is in providin...
Denoising Score Matching with Annealed Langevin Sampling (DSM-ALS) has recently found success in generative modeling. The approach works by first training a neural network to estimate the score of a distribution, and then using Langevin dynamics to sample from the data distribution assumed by the score network. Despite...
[]
[ { "authors": [ "Yang Song", "Stefano Ermon" ], "title": "Generative modeling by estimating gradients of the data distribution", "venue": "In Advances in Neural Information Processing Systems,", "year": 2019 }, { "authors": [ "Aapo Hyvärinen" ], "title": "Estimat...
[ { "heading": "1 INTRODUCTION", "text": "Song and Ermon (2019) recently proposed a novel method of generating samples from a target distribution through a combination of Denoising Score Matching (DSM) (Hyvärinen, 2005; Vincent, 2011; Raphan and Simoncelli, 2011) and Annealed Langevin Sampling (ALS) (Welling ...
null
ADVERSARIAL SCORE MATCHING AND IMPROVED SAMPLING FOR IMAGE GENERATION
SP:f61e427d087e7f8b176a518af6088bde2ab75167
[ "This paper proposes an approach based on Fourier transforms to predict ratings in collaborative filtering problems. The paper’s scope (“smooth reconstruction functions”) gets immediately narrowed down to Fourier transforms--it would be nice to provide some motivation for this choice over alternative smooth functio...
The problem of predicting the rating of a set of users to a set of items in a recommender system based on partial knowledge of the ratings is widely known as collaborative filtering. In this paper, we consider a mapping of the items into a vector space and study the prediction problem by assuming an underlying smooth p...
[]
[ { "authors": [ "Rianne van den Berg", "Thomas N Kipf", "Max Welling" ], "title": "Graph convolutional matrix completion", "venue": "arXiv preprint arXiv:1706.02263,", "year": 2017 }, { "authors": [ "James Davidson", "Benjamin Liebald", "Junning Liu", ...
[ { "heading": null, "text": "The problem of predicting the rating of a set of users to a set of items in a recommender system based on partial knowledge of the ratings is widely known as collaborative filtering. In this paper, we consider a mapping of the items into a vector space and study the prediction pr...
2020
null
SP:97471b69a8e0ce6d2bbb202cc3f9cd786e77ddea
[ "The theoretical analysis is clearly stated in a well-organized way and the derived sparsity bound is reasonable. For FFNNs and CNNs, a theorem is given showing that the model is trainable only when the initialization is on the Edge of Chaos (EOC), and a rescaling method is provided to bring the pruned NN into the EOC regime....
Overparameterized Neural Networks (NN) display state-of-the-art performance. However, there is a growing need for smaller, energy-efficient, neural networks to be able to use machine learning applications on devices with limited computational resources. A popular approach consists of using pruning techniques. While the...
[ { "affiliations": [], "name": "ROBUST PRUNING AT INITIALIZATION" }, { "affiliations": [], "name": "Soufiane Hayou" }, { "affiliations": [], "name": "Jean-Francois Ton" }, { "affiliations": [], "name": "Arnaud Doucet" } ]
[ { "authors": [ "J.M. Alvarez", "M. Salzmann" ], "title": "Compression-aware training of deep networks", "venue": "In 31st Conference in Neural Information Processing Systems,", "year": 2017 }, { "authors": [ "S. Arora", "S. Du", "W. Hu", "Z. Li", ...
[ { "heading": "1 INTRODUCTION", "text": "Overparameterized deep NNs have achieved state of the art (SOTA) performance in many tasks (Nguyen and Hein, 2018; Du et al., 2019; Zhang et al., 2016; Neyshabur et al., 2019). However, it is impractical to implement such models on small devices such as mobile phones....
2021
null
SP:934bf46c7ff0d3a3b1f0b75e48235dd0c902558c
[ "This paper studies the fundamental relationship between adversarial transferability and knowledge transferability. Theoretical analysis is conducted, revealing that adversarial transferability can indicate knowledge transferability. In this procedure, two quantities are formally defined to measure adversarial transf...
Despite the immense success that deep neural networks (DNNs) have achieved, adversarial examples, which are perturbed inputs that aim to mislead DNNs to make mistakes, have recently led to great concerns. On the other hand, adversarial examples exhibit interesting phenomena, such as adversarial transferability. DNNs al...
[]
[ { "authors": [ "Alessandro Achille", "Michael Lam", "Rahul Tewari", "Avinash Ravichandran", "Subhransu Maji", "Charless C Fowlkes", "Stefano Soatto", "Pietro Perona" ], "title": "Task2vec: Task embedding for meta-learning", "venue": "In Proceedings of ...
[ { "heading": null, "text": "Despite the immense success that deep neural networks (DNNs) have achieved, adversarial examples, which are perturbed inputs that aim to mislead DNNs to make mistakes, have recently led to great concerns. On the other hand, adversarial examples exhibit interesting phenomena, such...
2020
DOES ADVERSARIAL TRANSFERABILITY INDICATE KNOWLEDGE TRANSFERABILITY?
SP:4a6f5bb1d0f72df5782a09a1ffc5e19504010e36
[ "This work proposes an effective modification of language model token-level distribution during the training which prevents some forms of degeneration such as repetitions and dullness. The approach is based on the idea of encouraging the model to use tokens which were not observed in the previous context so far. In...
Advanced large-scale neural language models have led to significant success in many natural language generation tasks. However, the most commonly used training objective, Maximum Likelihood Estimation (MLE), has been shown to be problematic, where the trained model prefers using dull and repetitive phrases. In this wor...
[]
[ { "authors": [ "Ilya Sutskever", "Dario Amodei" ], "title": "Language models are few-shot", "venue": "learners. arXiv,", "year": 2020 }, { "authors": [ "Jacob Devlin", "Ming-Wei Chang", "Kenton Lee", "Kristina Toutanova" ], "title": "BERT: Pre-...
[ { "heading": "1 INTRODUCTION", "text": "Text generation has been one of the most important research problems in natural language processing (NLP) (Reiter & Dale, 2000). Thanks to the advances in neural architectures, models are now capable of generating texts that are of better quality than before (Brown et...
2020
null
SP:2062ab9c65e0d10e5d6d0112aaeaca208f131afd
[ "In this paper, the authors augment instance-level self-supervised learning with a cluster-aware learning mechanism during the training procedure. Specifically, for each training batch, the authors project the instances into a clustering space and then utilize a cluster-aware contrastive loss to push the augmente...
Learning visual representations using large-scale unlabelled images is a holy grail for most of computer vision tasks. Recent contrastive learning methods have focused on encouraging the learned visual representations to be linearly separable among the individual items regardless of their semantic similarity; however, ...
[]
[ { "authors": [ "YM Asano", "C Rupprecht", "A Vedaldi" ], "title": "Self-labelling via simultaneous clustering and representation learning", "venue": "In International Conference on Learning Representations,", "year": 2019 }, { "authors": [ "Hyojin Bahng", "S...
[ { "heading": "1 INTRODUCTION", "text": "Learning to extract generalized representations from a high-dimensional image is essential in solving various down-stream tasks in computer vision. Though a supervised learning framework has shown to be useful in learning discriminative representations for pre-trainin...
2020
null
SP:b47032cd0c8bf0189504e1c6562b058ba8f0e8ae
[ "The paper studies generalization under distribution shift, and tries to answer the question: why do ERM-based classifiers learn to rely on \"spurious\" features? They present a class of distributions called \"easy-to-learn\" that rules out several explanations given in recent work and isolates the spurious correla...
Empirical studies suggest that machine learning models often rely on features, such as the background, that may be spuriously correlated with the label only during training time, resulting in poor accuracy during test-time. In this work, we identify the fundamental factors that give rise to this behavior, by explaining...
[ { "affiliations": [], "name": "Vaishnavh Nagarajan" }, { "affiliations": [], "name": "Anders Andreassen" }, { "affiliations": [], "name": "Behnam Neyshabur" } ]
[ { "authors": [ "Isabela Albuquerque", "João Monteiro", "Mohammad Darvishi", "Tiago H. Falk", "Ioannis Mitliagkas" ], "title": "Generalizing to unseen domains via distribution matching, 2020", "venue": null, "year": 2020 }, { "authors": [ "Martı́n Arjov...
[ { "heading": "1 INTRODUCTION", "text": "A machine learning model in the wild (e.g., a self-driving car) must be prepared to make sense of its surroundings in rare conditions that may not have been well-represented in its training set. This could range from conditions such as mild glitches in the camera to s...
2021
UNDERSTANDING THE FAILURE MODES OF OUT-OF-DISTRIBUTION GENERALIZATION
SP:698104525f6955ba58aee1331a9487f77a542f13
[ "This paper proposes a dataset of tasks to help evaluate learned optimizers. The learned optimizers are evaluated by the loss that they achieve on held-out tasks after 10k steps. Using this dataset, the main strategy considered is to use search spaces that parametrize optimizers and learn a list of hyperparameter c...
We present TaskSet, a dataset of tasks for use in training and evaluating optimizers. TaskSet is unique in its size and diversity, containing over a thousand tasks ranging from image classification with fully connected or convolutional neural networks, to variational autoencoders, to non-volume preserving flows on a va...
[]
[ { "authors": [ "Martín Abadi", "Paul Barham", "Jianmin Chen", "Zhifeng Chen", "Andy Davis", "Jeffrey Dean", "Matthieu Devin", "Sanjay Ghemawat", "Geoffrey Irving", "Michael Isard" ], "title": "Tensorflow: A system for large-scale machine learni...
[ { "heading": null, "text": "We present TaskSet, a dataset of tasks for use in training and evaluating optimizers. TaskSet is unique in its size and diversity, containing over a thousand tasks ranging from image classification with fully connected or convolutional neural networks, to variational autoencoders...
2020
TASKSET: A DATASET OF OPTIMIZATION TASKS
SP:4bda50ce81c790cf9b19a24d81db4c07ec3729c1
[ "The purpose of the paper seems clear: it proposes an attack to the recently proposed algorithm called Instahide (ICML 2020) which is a probabilistic algorithm for generating synthetic private data in the distributed setting. The attack proposed in this paper is considered for the case where the private data is i.i...
In this work, we examine the security of InstaHide, a scheme recently proposed by Huang et al. (2020b) for preserving the security of private datasets in the context of distributed learning. To generate a synthetic training example to be shared among the distributed learners, InstaHide takes a convex combination of pri...
[ { "affiliations": [], "name": "SPARSE MATRIX FACTORIZATION" }, { "affiliations": [], "name": "Sitan Chen" }, { "affiliations": [], "name": "Xiaoxiao Li" }, { "affiliations": [], "name": "Danyang Zhuo" } ]
[ { "authors": [ "Sébastien Bubeck", "Yin Tat Lee", "Eric Price", "Ilya Razenshteyn" ], "title": "Adversarial examples from computational constraints", "venue": "In International Conference on Machine Learning (ICML),", "year": 2019 }, { "authors": [ "T Tony C...
[ { "heading": null, "text": "In this work, we examine the security of InstaHide, a scheme recently proposed by Huang et al. (2020b) for preserving the security of private datasets in the context of distributed learning. To generate a synthetic training example to be shared among the distributed learners, Ins...
2021
null
SP:a1c54d5c42097b8ba971ac20470de864ae87dd4e
[ "In this work, the authors propose a framework to perform object detection when there is noise present in the class labels as well as the bounding box annotations. The authors propose a two-step process, where in the first step the bounding boxes are corrected in a class-agnostic way, and in the second step knowledge distilla...
Training deep object detectors requires large amounts of human-annotated images with accurate object labels and bounding box coordinates, which are extremely expensive to acquire. Noisy annotations are much more easily accessible, but they could be detrimental for learning. We address the challenging problem of trainin...
[]
[ { "authors": [ "Eric Arazo", "Diego Ortego", "Paul Albert", "Noel E. O’Connor", "Kevin McGuinness" ], "title": "Unsupervised label noise modeling and loss correction", "venue": "In ICML,", "year": 2019 }, { "authors": [ "Devansh Arpit", "Stanisla...
[ { "heading": "1 INTRODUCTION", "text": "The remarkable success of modern object detectors largely relies on large-scale datasets with extensive bounding box annotations. However, it is extremely expensive and time-consuming to acquire high-quality human annotations. For example, annotating each bounding box...
2020
null
SP:4fde35c9931ca15ab6cd53b171323e1abf0224db
[ "This paper proposes an approach to self-supervised learning from videos. The approach takes advantage of compressed videos, using the encoded residuals and motion vectors within the video codec. Using encoded videos has been shown to reduce computation time required by decoding videos. Previous works have explored...
Self-supervised learning of video representations has received great attention. Existing methods typically require frames to be decoded before being processed, which increases compute and storage requirements and ultimately hinders large-scale training. In this work, we propose an efficient self-supervised approach to l...
[ { "affiliations": [], "name": "Youngjae Yu" }, { "affiliations": [], "name": "Sangho Lee" }, { "affiliations": [], "name": "Gunhee Kim" } ]
[ { "authors": [ "Relja Arandjelovic", "Andrew Zisserman" ], "title": "Look, listen and learn", "venue": "In ICCV,", "year": 2017 }, { "authors": [ "Andrew Brock", "Jeff Donahue", "Karen Simonyan" ], "title": "Large scale GAN training for high fidelity...
[ { "heading": null, "text": "Self-supervised learning of video representations has received great attention. Existing methods typically require frames to be decoded before being processed, which increases compute and storage requirements and ultimately hinders large-scale training. In this work, we propose an...
2021
SELF-SUPERVISED LEARNING OF COMPRESSED VIDEO REPRESENTATIONS
SP:2d804ce6cd9917277ac5c4d6c72cceeb14bf0641
[ "The paper presents two algorithms - one for deterministic and one for stochastic bilevel optimization. The paper claims the methods have lower computational complexity for various terms and are easy to implement. A finite-time convergence proof is provided for the algorithms. Empirical results are presente...
Bilevel optimization has arisen as a powerful tool for many machine learning problems such as meta-learning, hyperparameter optimization, and reinforcement learning. In this paper, we investigate the nonconvex-strongly-convex bilevel optimization problem. For deterministic bilevel optimization, we provide a comprehensi...
[ { "affiliations": [], "name": "BILEVEL OPTIMIZATION" } ]
[ { "authors": [ "Luca Bertinetto", "Joao F Henriques", "Philip Torr", "Andrea Vedaldi" ], "title": "Meta-learning with differentiable closed-form solvers", "venue": "In International Conference on Learning Representations (ICLR),", "year": 2018 }, { "authors": [ ...
[ { "heading": "1 INTRODUCTION", "text": "Bilevel optimization has received significant attention recently and become an influential framework in various machine learning applications including meta-learning (Franceschi et al., 2018; Bertinetto et al., 2018; Rajeswaran et al., 2019; Ji et al., 2020a), hyperpa...
2020
null
SP:2c5537aa2c173582e193c903eb85dd63aabc7366
[ "In this paper, the authors propose a novel manifold learning method, adding a locally isometric smoothness constraint that preserves topological and geometric properties of the data manifold. Empirical results demonstrate the efficacy of their approach. The authors also show that the reliability of tangent space...
It is widely believed that a dimension reduction (DR) process drops information inevitably in most practical scenarios. Thus, most methods try to preserve some essential information of data after DR, as well as manifold based DR methods. However, they usually fail to yield satisfying results, especially in high-dimensi...
[]
[ { "authors": [ "Jens Behrmann", "Will Grathwohl", "Ricky T.Q. Chen", "David Duvenaud", "Jörn-Henrik Jacobsen" ], "title": "Invertible residual networks", "venue": "In Proceedings of the 36th International Conference on Machine Learning (ICML), Proceedings of Machine Lea...
[ { "heading": "1 INTRODUCTION", "text": "In real-world scenarios, it is widely believed that the loss of data information is inevitable after dimension reduction (DR), though the goal of DR is to preserve as much information as possible in the low-dimensional space. In the case of linear DR, compressed sensi...
2020
null
SP:26c214e61671b012baa8824a39772738a861e44b
[ "This paper introduces a Transformer-based image recognition model that is fully built on the Transformer layers (multi-head self-attention + point-wise MLP) without any standard convolution layers. Basically, it splits an image into patches and takes as input the set of linear embeddings of the patches and their p...
While the Transformer architecture has become the de-facto standard for natural language processing tasks, its applications to computer vision remain limited. In vision, attention is either applied in conjunction with convolutional networks, or used to replace certain components of convolutional networks while keeping ...
[ { "affiliations": [], "name": "Alexey Dosovitskiy" }, { "affiliations": [], "name": "Lucas Beyer" }, { "affiliations": [], "name": "Alexander Kolesnikov" }, { "affiliations": [], "name": "Dirk Weissenborn" }, { "affiliations": [], "name": "Xiaohua Zhai" }, ...
[ { "authors": [ "Samira Abnar", "Willem Zuidema" ], "title": "Quantifying attention flow in transformers", "venue": "In ACL,", "year": 2020 }, { "authors": [ "Alexei Baevski", "Michael Auli" ], "title": "Adaptive input representations for neural language mo...
[ { "heading": "1 INTRODUCTION", "text": "Self-attention-based architectures, in particular Transformers (Vaswani et al., 2017), have become the model of choice in natural language processing (NLP). The dominant approach is to pre-train on a large text corpus and then fine-tune on a smaller task-specific data...
2021
null
SP:87507439ef121d5d243502d2cb45eafec175f2bc
[ "Temporal smoothness is a recurring feature of real-world data that has been unaccounted for when training neural networks. Much of the random sampling in training neural networks is done to remove the temporal correlations originally present when the data is collected. This work aims to propose a method to train o...
Events in the real world are correlated across nearby points in time, and we must learn from this temporally “smooth” data. However, when neural networks are trained to categorize or reconstruct single items, the common practice is to randomize the order of training items. What are the effects of temporally smooth trai...
[]
[ { "authors": [ "Christopher Baldassano", "Uri Hasson", "Kenneth A Norman" ], "title": "Representation of real-world event schemas during narrative perception", "venue": "Journal of Neuroscience,", "year": 2018 }, { "authors": [ "Yoshua Bengio", "Jérôme Loura...
[ { "heading": "1 INTRODUCTION", "text": "Events in the world are correlated in time: the information that we receive at one moment is usually similar to the information that we receive at the next. For example, when having a conversation with someone, we see multiple samples of the same face from different a...
2020
null
SP:d460957c05007cafe286b0590ffed111c806dd48
[ "The authors study the problem of global non-convex optimization with access only to function evaluations. Specifically, they propose an approach to automatically control the hyper-parameters of Directional Gaussian Smoothing (DGS), a recently proposed solution for the problem. Their proposed solution trades off some...
The local gradient points to the direction of the steepest slope in an infinitesimal neighborhood. An optimizer guided by the local gradient is often trapped in local optima when the loss landscape is multi-modal. A directional Gaussian smoothing (DGS) approach was recently proposed in (Zhang et al., 2020) and used to ...
[ { "affiliations": [], "name": "SMOOTHING GRADIENT" } ]
[ { "authors": [ "Youhei Akimoto", "Nikolaus Hansen" ], "title": "Projection-based restricted covariance matrix adaptation for high dimension", "venue": "In Proceedings of the Genetic and Evolutionary Computation Conference 2016,", "year": 2016 }, { "authors": [ "Larry Ar...
[ { "heading": "1 INTRODUCTION", "text": "We consider the problem of black-box optimization, where we search for the optima of a loss function F : Rd → R given access to only its function queries. This type of optimization finds applications in many machine learning areas where the loss function’s gradient is...
2020
ADADGS: AN ADAPTIVE BLACK-BOX OPTIMIZATION METHOD WITH A NONLOCAL DIRECTIONAL GAUSSIAN SMOOTHING GRADIENT
SP:253566b5271d22d4d6492ef9def2e67fb99c5d57
[ "The paper is addressing an important and challenging problem of end-to-end training of deep nets in fixed-point, in this case, with 8-bit precision. A good solution to this problem can have a major impact on the deployability of deep nets on embedded hardware. The basic idea is to introduce an additional term (the...
Quantization of neural network parameters and activations has emerged as a successful approach to reducing model size and inference time on hardware that supports native low-precision arithmetic. Fully quantized training would facilitate further computational speed-ups as well as enable model training on embedded devic...
[]
[ { "authors": [ "Pulkit Bhuwalka", "Alan Chiao", "Suharsh Sivakumar", "Raziel Alvarez", "Feng Liu", "Lawrence Chan", "Skirmantas Kligys", "Yunlu Li", "Khanh LeViet", "Billy Lambert", "Mark Daoust", "Tim Davis", "Sarah Sirajuddin", ...
[ { "heading": "1 INTRODUCTION", "text": "As state-of-the-art deep learning models for vision, language understanding and speech grow increasingly large and computationally burdensome (He et al., 2017; Devlin et al., 2018; Karita et al., 2019), there is increasing antithetical demand, motivated by latency, se...
2020
null
SP:d9155553fae947cc53d87a221fdd1d57b44f5ec6
[ "I read this paper with great interest. The authors propose an easy-to-understand, easy-to-implement baseline method for detecting when inputs to an ML model are out of distribution. The method involves augmenting the training dataset with an out-of-distribution dataset and adding an additional class in the classifi...
Refraining from confidently predicting when faced with categories of inputs different from those seen during training is an important requirement for the safe deployment of deep learning systems. While simple to state, this has been a particularly challenging problem in deep learning, where models often end up making o...
[]
[ { "authors": [ "Loïc Barrault", "Fethi Bougares", "Lucia Specia", "Chiraag Lala", "Desmond Elliott", "Stella Frank" ], "title": "Findings of the third shared task on multimodal machine translation", "venue": "In Proceedings of the Third Conference on Machine Trans...
[ { "heading": "1 INTRODUCTION AND RELATED WORK", "text": "Most of supervised machine learning has been developed with the assumption that the distribution of classes seen at train and test time are the same. However, the real-world is unpredictable and open-ended, and making machine learning systems robust t...
2020
null
SP:70b8c75426f18a3dc4a359c8a8cd7dd2076953a0
[ "The authors propose to address the robustness over outliers for optimal transport (OT). They propose a new formulation based on penalizing the contaminated probability measures by a signed measure (which shares a close relation with unbalanced OT). The authors further derive an equivalent formulation by adjusting ...
Optimal transport (OT) provides a way of measuring distances between distributions that depends on the geometry of the sample space. In light of recent advances in solving the OT problem, OT distances are widely used as loss functions in minimum distance estimation. Despite its prevalence and advantages, however, OT is...
[]
[ { "authors": [ "David Alvarez-Melis", "Tommi S Jaakkola" ], "title": "Gromov-Wasserstein alignment of word embedding spaces", "venue": null, "year": 2018 }, { "authors": [ "Yogesh Balaji", "Rama Chellappa", "Soheil Feizi" ], "title": "Robust optimal ...
[ { "heading": "1 INTRODUCTION", "text": "Optimal transport is a fundamental problem in applied mathematics. In its original form (Monge, 1781), the problem entails finding the minimum cost way to transport mass from a prescribed probability distribution µ on X to another prescribed distribution ν on X . Kant...
2020
OUTLIER-ROBUST OPTIMAL TRANSPORT
SP:198d7f650c930a1423f7f30688cd2f73d2719920
[ "This paper aims to extend the continuous optimization approach to causal discovery to handle interventional data as well as observational data. It describes a method for learning the causal structure over a set of categorical variables and reports strong empirical performance. However, no theoretical guarantee or ...
Promising results have driven a recent surge of interest in continuous optimization methods for Bayesian network structure learning from observational data. However, there are theoretical limitations on the identifiability of underlying structures obtained from observational data alone. Interventional data provides muc...
[]
[ { "authors": [ "Bruce Abramson", "John Brown", "Ward Edwards", "Allan Murphy", "Robert L Winkler" ], "title": "Hailfinder: A bayesian system for forecasting severe weather", "venue": "International Journal of Forecasting,", "year": 1996 }, { "authors": [ ...
[ { "heading": "1 INTRODUCTION", "text": "Structure learning concerns itself with the recovery of the graph structure of Bayesian networks (BNs) from data samples. A natural application of Bayesian networks is to describe cause-effect relationships between variables. In that context, one may speak of causal s...
2020
null
SP:d8c4980cf2187b549f2f2a4fbb2fba4101337459
[ "Autoregressive models have demonstrated their potential utility for modeling images and other types of complex data with high flexibility (particularly in density estimation). However, their sampling ability is not as good, as explained in the paper. The authors show that one of the main weaknesses of autoregressive...
While autoregressive models excel at image compression, their sample quality is often lacking. Although not realistic, generated images often have high likelihood according to the model, resembling the case of adversarial examples. Inspired by a successful adversarial defense method, we incorporate randomized smoothing...
[ { "affiliations": [], "name": "Chenlin Meng" }, { "affiliations": [], "name": "Jiaming Song" }, { "affiliations": [], "name": "Yang Song" }, { "affiliations": [], "name": "Shengjia Zhao" }, { "affiliations": [], "name": "Stefano Ermon" } ]
[ { "authors": [ "Guillaume Alain", "Yoshua Bengio" ], "title": "What regularized auto-encoders learn from the data-generating distribution", "venue": "The Journal of Machine Learning Research,", "year": 2014 }, { "authors": [ "Jens Behrmann", "Will Grathwohl", ...
[ { "heading": "1 INTRODUCTION", "text": "Autoregressive models have exhibited promising results in a variety of downstream tasks. For instance, they have shown success in compressing images (Minnen et al., 2018), synthesizing speech (Oord et al., 2016a) and modeling complex decision rules in games (Vinyals e...
2021
IMPROVED AUTOREGRESSIVE MODELING WITH DISTRIBUTION SMOOTHING
SP:5918a2c105a901f8de4bba248dc283a476d9beac
[ "This work considers an important problem of generating adversarial examples to attack a black-box model. The paper proposes a new approach to consider an adversarial example as a result of a sequence of pixel changes from a benign instance. Therefore, the adversarial generation problem can be considered as a bandi...
We present a new method for score-based adversarial attack, where the attacker queries the loss-oracle of the target model. Our method employs a parameterized search space with a structure that captures the relationship of the gradient of the loss function. We show that searching over the structured space can be approx...
[]
[ { "authors": [ "Abdullah Al-Dujaili", "Una-May O’Reilly" ], "title": "Sign bits are all you need for black-box attacks", "venue": "In International Conference on Learning Representations,", "year": 2020 }, { "authors": [ "Abdullah Al-Dujaili", "Una-May O’Reilly" ...
[ { "heading": null, "text": "We present a new method for score-based adversarial attack, where the attacker queries the loss-oracle of the target model. Our method employs a parameterized search space with a structure that captures the relationship of the gradient of the loss function. We show that searching...
2020
CORRATTACK: BLACK-BOX ADVERSARIAL ATTACK
SP:9403fa2679f18af78aed2e81b75eb39abeb722eb
[ "The paper develops a density diffusion theory to reveal how minima selection quantitatively depends on the minima sharpness and the hyperparameters. It shows theoretically and empirically that SGD favors flat minima exponentially more than sharp minima. In particular, the paper analyzed the dependence of mean esca...
Stochastic Gradient Descent (SGD) and its variants are mainstream methods for training deep networks in practice. SGD is known to find a flat minimum that often generalizes well. However, it is mathematically unclear how deep learning can select a flat minimum among so many minima. To answer the question quantitatively...
[ { "affiliations": [], "name": "DESCENT EXPONENTIALLY FAVORS FLAT MINIMA" }, { "affiliations": [], "name": "Zeke Xie" }, { "affiliations": [], "name": "Issei Sato" }, { "affiliations...
[ { "authors": [ "Alessandro Achille", "Stefano Soatto" ], "title": "Where is the information in a deep neural network", "venue": "arXiv preprint arXiv:1905.12213,", "year": 2019 }, { "authors": [ "George B Arfken", "Hans J Weber" ], "title": "Mathematical m...
[ { "heading": "1 INTRODUCTION", "text": "In recent years, deep learning (LeCun et al., 2015) has achieved great empirical success in various application areas. Due to the over-parametrization and the highly complex loss landscape of deep networks, optimizing deep networks is a difficult task. Stochastic Grad...
2021
A DIFFUSION THEORY FOR DEEP LEARNING DYNAMICS: STOCHASTIC GRADIENT DESCENT EXPONENTIALLY FAVORS FLAT MINIMA
SP:d92fe94e29672783f906710a2ecb7a02aa4bd67d
[ "The value of the optimal objective as a function of the cost vector $c$ can be written as $z^*(c) = c^T u^*(c)$ where the optimal solution $u^*$ also depends on $c$. The function $u^*(c)$ is piecewise constant -- there are finitely (resp. countably) many feasible solutions; candidates for $u^*$ -- and so the funct...
Combinatorial problems with linear objective function play a central role in many computer science applications, and efficient algorithms for solving them are well known. However, the solutions to these problems are not differentiable with respect to the parameters specifying the problem instance – for example, shortes...
[]
[ { "authors": [ "Akshay Agrawal", "Brandon Amos", "Shane Barratt", "Stephen Boyd", "Steven Diamond", "J Zico Kolter" ], "title": "Differentiable convex optimization layers", "venue": "In Advances in Neural Information Processing Systems,", "year": 2019 }, {...
[ { "heading": "1 INTRODUCTION", "text": "Combinatorial optimization problems, such as shortest path in a weighted directed graph, minimum spanning tree in a weighted undirected graph, or optimal assignment of tasks to workers, play a central role in many computer science applications. We have highly refined,...
2020
null
SP:16d9ab54eb8e4f24314ceca6e0f86f4ca586d7f1
[ "This paper provides the interesting method that leverages GPU memory resources more efficiently for supernet (meta-graph) of differentiable NAS. For this, this paper proposes binary neural architecture search and consecutive model parallel (CMP). CMP parallelizes one supernet with multiple GPUs, which allows NAS m...
Neural architecture search (NAS) automatically designs effective network architectures. Differentiable NAS with supernets that encompass all potential architectures in a large graph cuts down search overhead to few GPU days or less. However, these algorithms consume massive GPU memory, which will restrain NAS from larg...
[]
[ { "authors": [ "Han Cai", "Ligeng Zhu", "Song Han" ], "title": "ProxylessNAS: Direct neural architecture search on target task and hardware", "venue": "In International Conference on Learning Representations,", "year": 2019 }, { "authors": [ "Tianqi Chen", "...
[ { "heading": "1 INTRODUCTION", "text": "Neural architecture search (NAS) has revolutionized architecture designs of deep learning from manually to automatically in various applications, such as image classification (Zoph & Le, 2016) and semantic segmentation (Liu et al., 2019a). Reinforcement learning (Zoph...
2020
null
SP:d10957cc11891e1aad6ecac21a73d589bfac341d
[ "This paper proposes a method called Temporal Abstract Latent Dynamics (TALD). TALD is built up on RSSM (Hafner et al. 2019) but with hierarchical dynamics. The experiments are conducted on moving MNIST, GQN 3D Mazes, and KTH. Results are qualitatively better than other methods in term of maintaining long-term cons...
Deep learning has shown promise for accurately predicting high-dimensional video sequences. Existing video prediction models succeeded in generating sharp but often short video sequences. Toward improving long-term video prediction, we study hierarchical latent variable models with levels that process at different time...
[]
[ { "authors": [ "Mohammad Babaeizadeh", "Chelsea Finn", "Dumitru Erhan", "Roy H. Campbell", "Sergey Levine" ], "title": "Stochastic variational video", "venue": "prediction. CoRR,", "year": 2017 }, { "authors": [ "Lars Buesing", "Theophane Weber",...
[ { "heading": null, "text": "1 INTRODUCTION\nDeep learning has enabled predicting video sequences from large datasets (Chiappa et al., 2017; Oh et al., 2015; Vondrick et al., 2016). For high-dimensional inputs such as video, there likely exists a more compact representation of the scene that facilitates long...
2020
null
SP:6082a5b51b24315dfdbfe147de1aef2c53cd113d
[ "This paper extends the Wasserstein autoencoder for learning disentangled representations from sequential data. The latent variable model considered contains separate latent variables capturing global and local information respectively, each of which is regularized by a divergence measuring the marginal posterior $...
Learning disentangled representations leads to interpretable models and facilitates data generation with style transfer, which has been extensively studied on static data such as images in an unsupervised learning framework. However, only a few works have explored unsupervised disentangled sequential representation lea...
[ { "affiliations": [], "name": "DISENTANGLED RECURRENT" }, { "affiliations": [], "name": "WASSERSTEIN AUTOEN" }, { "affiliations": [], "name": "Jun Han" }, { "affiliations": [], "name": "Martin Renqiang Min" }, { "affiliations": [], "name": "Ligong Han" }, ...
[ { "authors": [ "Niki Aifanti", "Christos Papachristou", "Anastasios Delopoulos" ], "title": "The mug facial expression database", "venue": "In 11th International Workshop on Image Analysis for Multimedia Interactive Services WIAMIS", "year": 2010 }, { "authors": [ ...
[ { "heading": "1 INTRODUCTION", "text": "Unsupervised representation learning is an important research topic in machine learning. It embeds high-dimensional sensory data such as images and videos into a low-dimensional latent space in an unsupervised learning framework, aiming at extracting essential data va...
2021
null
SP:60894f74f40addd7a2a35a003dcdce6cf70ffef4
[ "The paper extends prior work on equivalence between predictive coding and backprop in layered neural networks to arbitrary computation graphs. This is empirically tested first on a simple nonlinear scalar function, and then on a few commonly used architectures (CNNs, RNNs, LSTMs), confirming the theoretical result...
Backpropagation of error (backprop) is a powerful algorithm for training machine learning architectures through end-to-end differentiation. Recently it has been shown that backprop in multilayer-perceptrons (MLPs) can be approximated using predictive coding, a biologically-plausible process theory of cortical computati...
[ { "affiliations": [], "name": "APPROXIMATES BACKPROP" } ]
[ { "authors": [ "Mohamed Akrout", "Collin Wilson", "Peter Humphreys", "Timothy Lillicrap", "Douglas B Tweed" ], "title": "Deep learning without weight transport", "venue": "In Advances in Neural Information Processing Systems,", "year": 2019 }, { "authors": [...
[ { "heading": "1 INTRODUCTION", "text": "Deep learning has seen stunning successes in the last decade in computer vision (Krizhevsky et al., 2012; Szegedy et al., 2015), natural language processing and translation (Vaswani et al., 2017; Radford et al., 2019; Kaplan et al., 2020), and computer game playing (M...
2020
null
SP:9e6b5b7d9e7459c015130f4b80f7bc75424de050
[ "This paper proposes a simple scheme for training with multiple augmentations of training data in one iteration and reweighting the instances by their relative loss. As authors note in their related works, the idea of reweighting examples based on their relative loss has been widely studied in a variety of machine ...
Data augmentation is an effective technique to improve the generalization of deep neural networks. However, previous data augmentation methods usually treat the augmented samples equally without considering their individual impacts on the model. To address this, for the augmented samples from the same training example,...
[ { "affiliations": [], "name": "ING THE" }, { "affiliations": [], "name": "MAXIMAL EXPECTED LOSS" }, { "affiliations": [], "name": "Mingyang Yi" }, { "affiliations": [], "name": "Lu Hou" }, { "affiliations": [], "name": "Lifeng Shang" }, { "affiliations...
[ { "authors": [ "S. Behpour", "K. Kitani", "B. Ziebart" ], "title": "Ada: Adversarial data augmentation for object detection", "venue": "In IEEE Winter Conference on Applications of Computer Vision,", "year": 2019 }, { "authors": [ "Y. Bengio", "J. Louradour"...
[ { "heading": "1 INTRODUCTION", "text": "Deep neural networks have achieved state-of-the-art results in various tasks in natural language processing (NLP) tasks (Sutskever et al., 2014; Vaswani et al., 2017; Devlin et al., 2019) and computer vision (CV) tasks (He et al., 2016; Goodfellow et al., 2016). One a...
2021
REWEIGHTING AUGMENTED SAMPLES BY MINIMIZING THE MAXIMAL EXPECTED LOSS