Unnamed: 0 (int64, range 0–1.83k) | Clean_Title (string, length 8–153) | Clean_Text (string, length 330–2.26k) | Clean_Summary (string, length 53–295) |
|---|---|---|---|
0 | Critical Points of Linear Neural Networks: Analytical Forms and Landscape Properties | Due to the success of deep learning in solving a variety of challenging machine learning tasks, there is a rising interest in understanding loss functions for training neural networks from a theoretical aspect. Particularly, the properties of critical points and the landscape around them are of importance to determine t... | We provide necessary and sufficient analytical forms for the critical points of the square loss functions for various neural networks, and exploit the analytical forms to characterize the landscape properties for the loss functions of these neural networks. |
1 | Biologically-Plausible Learning Algorithms Can Scale to Large Datasets | The backpropagation algorithm is often thought to be biologically implausible in the brain. One of the main reasons is that BP requires symmetric weight matrices in the feedforward and feedback pathways. To address this “weight transport problem”, two biologically-plausible algorithms, proposed by Liao et al. and Lillicr... | Biologically plausible learning algorithms, particularly sign-symmetry, work well on ImageNet |
2 | Logic and the 2-Simplicial Transformer | We introduce the 2-simplicial Transformer, an extension of the Transformer which includes a form of higher-dimensional attention generalising the dot-product attention, and uses this attention to update entity representations with tensor products of value vectors. We show that this architecture is a useful inductive bia... | We introduce the 2-simplicial Transformer and show that this architecture is a useful inductive bias for logical reasoning in the context of deep reinforcement learning. |
3 | Long-term Forecasting using Tensor-Train RNNs | We present Tensor-Train RNN, a novel family of neural sequence architectures for multivariate forecasting in environments with nonlinear dynamics. Long-term forecasting in such systems is highly challenging, since there exist long-term temporal dependencies, higher-order correlations and sensitivity to error propagation... | Accurate forecasting over very long time horizons using tensor-train RNNs |
4 | Variational Message Passing with Structured Inference Networks | Recent efforts on combining deep models with probabilistic graphical models are promising in providing flexible models that are also easy to interpret. We propose a variational message-passing algorithm for variational inference in such models. We make three contributions. First, we propose structured inference networks t... | We propose a variational message-passing algorithm for models that contain both the deep model and probabilistic graphical model. |
5 | Adaptive Mixture of Low-Rank Factorizations for Compact Neural Modeling | Modern deep neural networks have a large amount of weights, which make them difficult to deploy on computation constrained devices such as mobile phones. One common approach to reduce the model size and computational cost is to use low-rank factorization to approximate a weight matrix. However, performing standard low-ra... | A simple modification to low-rank factorization that improves performances (in both image and language tasks) while still being compact. |
6 | Progressive Compressed Records: Taking a Byte Out of Deep Learning Data | Deep learning training accesses vast amounts of data at high velocity, posing challenges for datasets retrieved over commodity networks and storage devices. We introduce a way to dynamically reduce the overhead of fetching and transporting training data with a method we term Progressive Compressed Records. PCRs deviate f... | We propose a simple, general, and space-efficient data format to accelerate deep learning training by allowing sample fidelity to be dynamically selected at training time |
7 | ROBUST DISCRIMINATIVE REPRESENTATION LEARNING VIA GRADIENT RESCALING: AN EMPHASIS REGULARISATION PERSPECTIVE | It is fundamental and challenging to train robust and accurate Deep Neural Networks when semantically abnormal examples exist. Although great progress has been made, there is still one crucial research question which is not thoroughly explored yet: What training examples should be focused on and how much more should they b... | ROBUST DISCRIMINATIVE REPRESENTATION LEARNING VIA GRADIENT RESCALING: AN EMPHASIS REGULARISATION PERSPECTIVE |
8 | Optimizing the Latent Space of Generative Networks | Generative Adversarial Networks have achieved remarkable results in the task of generating realistic natural images. In most applications, GAN models share two aspects in common. On the one hand, GAN training involves solving a challenging saddle point optimization problem, interpreted as an adversarial game between a g... | Are GANs successful because of adversarial training or the use of ConvNets? We show a ConvNet generator trained with a simple reconstruction loss and learnable noise vectors leads to many of the desirable properties of a GAN. |
9 | Dynamically Balanced Value Estimates for Actor-Critic Methods | Reinforcement learning in an actor-critic setting relies on accurate value estimates of the critic. However, the combination of function approximation, temporal difference learning and off-policy training can lead to an overestimating value function. A solution is to use Clipped Double Q-learning, which is used in the TD... | A method for more accurate critic estimates in reinforcement learning. |
10 | A Systematic Framework for Natural Perturbations from Videos | We introduce a systematic framework for quantifying the robustness of classifiers to naturally occurring perturbations of images found in videos. As part of this framework, we construct ImageNet-Vid-Robust, a human-expert-reviewed dataset of 22,668 images grouped into 1,145 sets of perceptually similar images derived f... | We introduce a systematic framework for quantifying the robustness of classifiers to naturally occurring perturbations of images found in videos. |
11 | SuperTML: Two-Dimensional Word Embedding and Transfer Learning Using ImageNet Pretrained CNN Models for the Classifications on Tabular Data | Structured tabular data is the most commonly used form of data in industry according to a Kaggle ML and DS Survey. Gradient Boosting Trees, Support Vector Machine, Random Forest, and Logistic Regression are typically used for classification tasks on tabular data. The recent work of the Super Characters method using two-dimen... | Deep learning for structured tabular data machine learning using a pre-trained CNN model from ImageNet. |
12 | PatchFormer: A neural architecture for self-supervised representation learning on images | Learning rich representations from predictive learning without labels has been a longstanding challenge in the field of machine learning. Generative pre-training has so far not been as successful as contrastive methods in modeling representations of raw images. In this paper, we propose a neural architecture for self-sup... | Decoding pixels can still work for representation learning on images |
13 | The Case for Full-Matrix Adaptive Regularization | Adaptive regularization methods pre-multiply a descent direction by a preconditioning matrix. Due to the large number of parameters of machine learning problems, full-matrix preconditioning methods are prohibitively expensive. We show how to modify full-matrix adaptive regularization in order to make it practical and eff... | fast, truly scalable full-matrix AdaGrad/Adam, with theory for adaptive stochastic non-convex optimization |
14 | Attention over Parameters for Dialogue Systems | Dialogue systems require a great deal of different but complementary expertise to assist, inform, and entertain humans. For example, different domains of goal-oriented dialogue systems can be viewed as different skills, as do the ordinary chatting abilities of chit-chat dialogue systems. In this paper, we propose to le... | In this paper, we propose to learn a dialogue system that independently parameterizes different dialogue skills, and learns to select and combine each of them through Attention over Parameters (AoP). |
15 | Dataset Distillation | Model distillation aims to distill the knowledge of a complex model into a simpler one. In this paper, we consider an alternative formulation called dataset distillation: we keep the model fixed and instead attempt to distill the knowledge from a large training dataset into a small one. The idea is to synthesize a small ... | We propose to distill a large dataset into a small set of synthetic data that can train networks close to original performance. |
16 | TRAINING GENERATIVE ADVERSARIAL NETWORKS VIA PRIMAL-DUAL SUBGRADIENT METHODS: A LAGRANGIAN PERSPECTIVE ON GAN | We relate the minimax game of generative adversarial networks to finding the saddle points of the Lagrangian function for a convex optimization problem, where the discriminator outputs and the distribution of generator outputs play the roles of primal variables and dual variables, respectively. This formulation shows th... | We propose a primal-dual subgradient method for training GANs and this method effectively alleviates mode collapse. |
17 | Irrationality can help reward inference | Specifying reward functions is difficult, which motivates the area of reward inference: learning rewards from human behavior. The starting assumption in the area is that human behavior is optimal given the desired reward function, but in reality people have many different forms of irrationality, from noise to myopia to ... | We find that irrationality from an expert demonstrator can help a learner infer their preferences. |
18 | Models in the Wild: On Corruption Robustness of NLP Systems | Natural Language Processing models lack a unified approach to robustness testing. In this paper we introduce WildNLP - a framework for testing model stability in a natural setting where text corruptions such as keyboard errors or misspelling occur. We compare robustness of models from 4 popular NLP tasks: Q&A, NLI, NER a... | We compare robustness of models from 4 popular NLP tasks: Q&A, NLI, NER and Sentiment Analysis by testing their performance on perturbed inputs. |
19 | Curriculum Learning for Deep Generative Models with Clustering | Training generative models like Generative Adversarial Networks is challenging for noisy data. A novel curriculum learning algorithm pertaining to clustering is proposed in this paper to address this issue. The curriculum construction is based on the centrality of underlying clusters in data points. The data points of hig... | A novel cluster-based curriculum learning algorithm is proposed for the robust training of generative models. |
20 | DBA: Distributed Backdoor Attacks against Federated Learning | Backdoor attacks aim to manipulate a subset of training data by injecting adversarial triggers such that machine learning models trained on the tampered dataset will make arbitrarily incorrect predictions on the test set with the same trigger embedded. While federated learning is capable of aggregating information provide... | We propose a novel distributed backdoor attack on federated learning and show that it is not only more effective than standard centralized attacks, but also harder to defend against with existing robust FL methods |
21 | Label Propagation Networks | Graph networks have recently attracted considerable interest, and in particular in the context of semi-supervised learning. These methods typically work by generating node representations that are propagated throughout a given weighted graph. Here we argue that for semi-supervised learning, it is more natural to consider... | Neural net for graph-based semi-supervised learning; revisits the classics and propagates *labels* rather than feature representations |
22 | Neural Architecture Search for Natural Language Understanding | Neural architecture search has made rapid progress in computer vision, whereby new state-of-the-art results have been achieved in a series of tasks with automatically searched neural network architectures. In contrast, NAS has not made comparable advances in natural language understanding. Corresponding to encoder-aggregator meta archit... | Neural Architecture Search for a series of Natural Language Understanding tasks. We design the search space for NLU tasks and apply differentiable architecture search to discover new models. |
23 | EvalNE: A Framework for Evaluating Network Embeddings on Link Prediction | Network embedding methods aim to learn low-dimensional representations of network nodes as vectors, typically in Euclidean space. These representations are then used for a variety of downstream prediction tasks. Link prediction is one of the most popular choices for assessing the performance of NE methods. However, the co... | In this paper we introduce EvalNE, a Python toolbox for automating the evaluation of network embedding methods on link prediction and ensuring the reproducibility of results. |
24 | No Spurious Local Minima in a Two Hidden Unit ReLU Network | Deep learning models can be efficiently optimized via stochastic gradient descent, but there is little theoretical evidence to support this. A key question in optimization is to understand when the optimization landscape of a neural network is amenable to gradient-based optimization. We focus on a simple neural network t... | Recovery guarantee of stochastic gradient descent with random initialization for learning a two-layer neural network with two hidden nodes, unit-norm weights, ReLU activation functions and Gaussian inputs. |
25 | Jumpout: Improved Dropout for Deep Neural Networks with Rectified Linear Units | Dropout is a simple yet effective technique to improve generalization performance and prevent overfitting in deep neural networks. In this paper, we discuss three novel observations about dropout to better understand the generalization of DNNs with rectified linear unit activations: 1) dropout is a smoothing technique t... | Jumpout applies three simple yet effective modifications to dropout, based on novel understandings about the generalization performance of DNN with ReLU in local regions. |
26 | Sparsity Emerges Naturally in Neural Language Models | Concerns about interpretability, computational resources, and principled inductive priors have motivated efforts to engineer sparse neural models for NLP tasks. If sparsity is important for NLP, might well-trained neural models naturally become roughly sparse? Using the Taxi-Euclidean norm to measure sparsity, we find th... | We study the natural emergence of sparsity in the activations and gradients for some layers of a dense LSTM language model, over the course of training. |
27 | Aging Memories Generate More Fluent Dialogue Responses with Memory Networks | The integration of a Knowledge Base into a neural dialogue agent is one of the key challenges in Conversational AI. Memory networks have proven effective for encoding KB information into an external memory and thus generating more fluent and informed responses. Unfortunately, such memory becomes full of latent representat... | Conventional memory networks generate many redundant latent vectors resulting in overfitting and the need for larger memories. We introduce memory dropout as an automatic technique that encourages diversity in the latent space. |
28 | Nesterov's method is the discretization of a differential equation with Hessian damping | Su-Boyd-Candes made a connection between Nesterov's method and an ordinary differential equation. We show that if a Hessian damping term is added to the ODE from Su-Boyd-Candes, then Nesterov's method arises as a straightforward discretization of the modified ODE. Analogously, in the strongly convex case, a Hessian d... | We show that Nesterov's method arises as a straightforward discretization of an ODE different from the one in Su-Boyd-Candes and prove acceleration in the stochastic case |
29 | Learning to Transfer Learn | We propose learning to transfer learn to improve transfer learning on a target dataset by judicious extraction of information from a source dataset. L2TL considers joint optimization of vastly-shared weights between models for source and target tasks, and employs adaptive weights for scaling of constituent losses. The ad... | We propose learning to transfer learn (L2TL) to improve transfer learning on a target dataset by judicious extraction of information from a source dataset. |
30 | AMRL: Aggregated Memory For Reinforcement Learning | In many partially observable scenarios, Reinforcement Learning agents must rely on long-term memory in order to learn an optimal policy. We demonstrate that using techniques from NLP and supervised learning fails at RL tasks due to stochasticity from the environment and from exploration. Utilizing our insights on the lim... | In Deep RL, order-invariant functions can be used in conjunction with standard memory modules to improve gradient decay and resilience to noise. |
31 | Optimization on Multiple Manifolds | Optimization on manifolds has been widely used in machine learning to handle optimization problems with constraints. Most previous works focus on the case with a single manifold. However, in practice it is quite common that the optimization problem involves more than one constraint. It is not clear in general how to opti... | This paper introduces an algorithm to handle optimization problems with multiple constraints from a manifold perspective. |
32 | Discrete Sequential Prediction of Continuous Actions for Deep RL | It has long been assumed that high dimensional continuous control problems cannot be solved effectively by discretizing individual dimensions of the action space due to the exponentially large number of bins over which policies would have to be learned. In this paper, we draw inspiration from the recent success of seque... | A method to do Q-learning on continuous action spaces by predicting a sequence of discretized 1-D actions. |
33 | Model Imitation for Model-Based Reinforcement Learning | Model-based reinforcement learning aims to learn a dynamic model to reduce the number of interactions with real-world environments. However, due to estimation error, rollouts in the learned model, especially those of long horizon, fail to match the ones in real-world environments. This mismatch has seriously impacted ... | Our method incorporates WGAN to achieve occupancy measure matching for transition learning. |
34 | Normalization Gradients are Least-squares Residuals | Batch Normalization and its variants have seen widespread adoption in the deep learning community because they improve the training of deep neural networks. Discussions of why this normalization works so well remain unsettled. We make explicit the relationship between ordinary least squares and partial derivatives compu... | Gaussian normalization performs a least-squares fit during back-propagation, which zero-centers and decorrelates partial derivatives from normalized activations. |
35 | Theoretical Analysis of Auto Rate-Tuning by Batch Normalization | Batch Normalization has become a cornerstone of deep learning across diverse architectures, appearing to help optimization as well as generalization. While the idea makes intuitive sense, theoretical analysis of its effectiveness has been lacking. Here theoretical support is provided for one of its conjectured properties... | We give a theoretical analysis of the ability of batch normalization to automatically tune learning rates, in the context of finding stationary points for a deep learning objective. |
36 | Adversarial Video Generation on Complex Datasets | Generative models of natural images have progressed towards high fidelity samples by the strong leveraging of scale. We attempt to carry this success to the field of video modeling by showing that large Generative Adversarial Networks trained on the complex Kinetics-600 dataset are able to produce video samples of subst... | We propose DVD-GAN, a large video generative model that is state of the art on several tasks and produces highly complex videos when trained on large real world datasets. |
37 | Simulating Action Dynamics with Neural Process Networks | Understanding procedural language requires anticipating the causal effects of actions, even when they are not explicitly stated. In this work, we introduce Neural Process Networks to understand procedural text through simulation of action dynamics. Our model complements existing memory architectures with dynamic entity ... | We propose a new recurrent memory architecture that can track common sense state changes of entities by simulating the causal effects of actions. |
38 | Meta-Learning Neural Bloom Filters | There has been a recent trend in training neural networks to replace data structures that have been crafted by hand, with an aim for faster execution, better accuracy, or greater compression. In this setting, a neural data structure is instantiated by training a network over many epochs of its inputs until convergence.... | We investigate the space efficiency of memory-augmented neural nets when learning set membership. |
39 | A Scalable Laplace Approximation for Neural Networks | We leverage recent insights from second-order optimisation for neural networks to construct a Kronecker factored Laplace approximation to the posterior over the weights of a trained network. Our approximation requires no modification of the training procedure, enabling practitioners to estimate the uncertainty of their ... | We construct a Kronecker factored Laplace approximation for neural networks that leads to an efficient matrix normal distribution over the weights. |
40 | Spectral Embedding of Regularized Block Models | Spectral embedding is a popular technique for the representation of graph data. Several regularization techniques have been proposed to improve the quality of the embedding with respect to downstream tasks like clustering. In this paper, we explain on a simple block model the impact of the complete graph regularization, ... | Graph regularization forces spectral embedding to focus on the largest clusters, making the representation less sensitive to noise. |
41 | Quantifying Exposure Bias for Neural Language Generation | The exposure bias problem refers to the training-inference discrepancy caused by teacher forcing in maximum likelihood estimation training for auto-regressive neural network language models. It has been regarded as a central problem for natural language generation model training. Although a lot of algorithms have been pr... | We show that exposure bias could be much less serious than it is currently assumed to be for MLE LM training. |
42 | Emergence of Linguistic Communication from Referential Games with Symbolic and Pixel Input | The ability of algorithms to evolve or learn communication protocols has traditionally been studied in the language evolution literature through the use of emergent communication tasks. Here we scale up this research by using contemporary deep learning methods and by training reinforcement-learning neural network agents... | A controlled study of the role of environments with respect to properties in emergent communication protocols. |
43 | BERTgrid: Contextualized Embedding for 2D Document Representation and Understanding | For understanding generic documents, information like font sizes, column layout, and generally the positioning of words may carry semantic information that is crucial for solving a downstream document intelligence task. Our novel BERTgrid, which is based on Chargrid by Katti et al., represents a document as a grid of co... | Grid-based document representation with contextualized embedding vectors for documents with 2D layouts |
44 | Adversarial Policies: Attacking Deep Reinforcement Learning | Deep reinforcement learning policies are known to be vulnerable to adversarial perturbations to their observations, similar to adversarial examples for classifiers. However, an attacker is not usually able to directly modify another agent's observations. This might lead one to wonder: is it possible to attack an RL ... | Deep RL policies can be attacked by other agents taking actions so as to create natural observations that are adversarial. |
45 | Exponential Family Word Embeddings: An Iterative Approach for Learning Word Vectors | GloVe and Skip-gram word embedding methods learn word vectors by decomposing a denoised matrix of word co-occurrences into a product of low-rank matrices. In this work, we propose an iterative algorithm for computing word vectors based on modeling word co-occurrence matrices with Generalized Low Rank Models. Our algorith... | We present a novel iterative algorithm based on generalized low rank models for computing and interpreting word embedding models. |
46 | Coping With Simulators That Don’t Always Return | Deterministic models are approximations of reality that are often easier to build and interpret than stochastic alternatives. Unfortunately, as nature is capricious, observational data can never be fully explained by deterministic models in practice. Observation and process noise need to be added to adapt deterministic... | We learn a conditional autoregressive flow to propose perturbations that don't induce simulator failure, improving inference performance. |
47 | Multi-hop Question Answering via Reasoning Chains | Multi-hop question answering requires models to gather information from different parts of a text to answer a question. Most current approaches learn to address this task in an end-to-end way with neural networks, without maintaining an explicit representation of the reasoning process. We propose a method to extract a di... | We improve answering of questions that require multi-hop reasoning by extracting an intermediate chain of sentences. |
48 | Normalizing Constant Estimation with Gaussianized Bridge Sampling | Estimating the normalizing constant is one of the central goals of Bayesian inference, yet most of the existing methods are both expensive and inaccurate. Here we develop a new approach, starting from posterior samples obtained with a standard Markov Chain Monte Carlo. We apply a novel Normalizing Flow approach to obtain an analytic de... | We develop a new method for normalization constant (Bayesian evidence) estimation using Optimal Bridge Sampling and a novel Normalizing Flow, which is shown to outperform existing methods in terms of accuracy and computational time. |
49 | A comprehensive, application-oriented study of catastrophic forgetting in DNNs | We present a large-scale empirical study of catastrophic forgetting in modern Deep Neural Network models that perform sequential learning. A new experimental protocol is proposed that takes into account typical constraints encountered in application scenarios. As the investigation is empirical, we evaluate CF behavior on... | We check DNN models for catastrophic forgetting using a new evaluation scheme that reflects typical application conditions, with surprising results. |
50 | Improving Federated Learning Personalization via Model Agnostic Meta Learning | Federated Learning refers to learning a high quality global model based on decentralized data storage, without ever copying the raw data. A natural scenario arises with data created on mobile phones by the activity of their users. Given the typical data heterogeneity in such situations, it is natural to ask how can the g... | Federated Averaging already is a Meta Learning algorithm, while datacenter-trained methods are significantly harder to personalize. |
51 | Downsampling leads to Image Memorization in Convolutional Autoencoders | Memorization of data in deep neural networks has become a subject of significant research interest. In this paper, we link memorization of images in deep convolutional autoencoders to downsampling through strided convolution. To analyze this mechanism in a simpler setting, we train linear convolutional autoencoders and ... | We identify downsampling as a mechanism for memorization in convolutional autoencoders. |
52 | Learning Robust Rewards with Adverserial Inverse Reinforcement Learning | Reinforcement learning provides a powerful and general framework for decision-making and control, but its application in practice is often hindered by the need for extensive feature and reward engineering. Deep reinforcement learning methods can remove the need for explicit engineering of policy or value features, but still... | We propose an adversarial inverse reinforcement learning algorithm capable of learning reward functions which can transfer to new, unseen environments. |
53 | A Bayesian Perspective on Generalization and Stochastic Gradient Descent | We consider two questions at the heart of machine learning: how can we predict if a minimum will generalize to the test set, and why does stochastic gradient descent find minima that generalize well? Our work responds to prior work showing that deep neural networks can easily memorize randomly labeled training data, despite gener... | Generalization is strongly correlated with the Bayesian evidence, and gradient noise drives SGD towards minima whose evidence is large. |
54 | Generative Adversarial Networks For Data Scarcity Industrial Positron Images With Attention | In the industrial field, positron annihilation is not affected by complex environments, and gamma-ray photons have strong penetration, so nondestructive detection of industrial parts can be realized. Due to the poor image quality caused by gamma-ray photon scattering, attenuation and short sampling time in posit... | adversarial nets, attention mechanism, positron images, data scarcity |
55 | Revisit Recurrent Attention Model from an Active Sampling Perspective | We revisit the Recurrent Attention Model, a recurrent neural network for visual attention, from an active information sampling perspective. We borrow ideas from neuroscience research on the role of active information sampling in the context of visual attention and gaze, where the author suggested three types of motives... | Inspired by neuroscience research, we solve three key weaknesses of the widely-cited recurrent attention model by simply adding two terms to the objective function. |
56 | Active Learning Graph Neural Networks via Node Feature Propagation | Graph Neural Networks for prediction tasks like node classification or edge prediction have received increasing attention in recent machine learning on graph-structured data. However, a large quantity of labeled graphs is difficult to obtain, which significantly limits the true success of GNNs. Although active lea... | This paper introduces a clustering-based active learning algorithm on graphs. |
57 | InfoCNF: Efficient Conditional Continuous Normalizing Flow Using Adaptive Solvers | Continuous Normalizing Flows have emerged as promising deep generative models for a wide range of tasks thanks to their invertibility and exact likelihood estimation. However, conditioning CNFs on signals of interest for conditional image generation and downstream predictive tasks is inefficient due to the high-dimensio... | We propose the InfoCNF, an efficient conditional CNF that employs gating networks to learn the error tolerances of the ODE solvers |
58 | Unsupervised Learning via Meta-Learning | A central goal of unsupervised learning is to acquire representations from unlabeled data or experience that can be used for more effective learning of downstream tasks from modest amounts of labeled data. Many prior unsupervised learning works aim to do so by developing proxy objectives based on reconstruction, disenta... | An unsupervised learning method that uses meta-learning to enable efficient learning of downstream image classification tasks, outperforming state-of-the-art methods. |
59 | Latent Domain Transfer: Crossing modalities with Bridging Autoencoders | Domain transfer is an exciting and challenging branch of machine learning because models must learn to smoothly transfer between domains, preserving local variations and capturing many aspects of variation without labels. However, most successful applications to date require the two domains to be closely related, utilizin... | Conditional VAE on top of latent spaces of pre-trained generative models that enables transfer between drastically different domains while preserving locality and semantic alignment. |
60 | Adversarial Inductive Transfer Learning with input and output space adaptation | We propose Adversarial Inductive Transfer Learning, a method for addressing discrepancies in input and output spaces between source and target domains.AITL utilizes adversarial domain adaptation and multi-task learning to address these discrepancies.Our motivating application is pharmacogenomics where the goal is to pr... | A novel method of inductive transfer learning that employs adversarial learning and multi-task learning to address the discrepancy in input and output space |
61 | End-to-end named entity recognition and relation extraction using pre-trained language models | Named entity recognition and relation extraction are two important tasks in information extraction and retrieval.Recent work has demonstrated that it is beneficial to learn these tasks jointly, which avoids the propagation of error inherent in pipeline-based systems and improves performance.However, state-of-the-art jo... | A novel, high-performing architecture for end-to-end named entity recognition and relation extraction that is fast to train. |
62 | Music Transformer: Generating Music with Long-Term Structure | Music relies heavily on repetition to build structure and meaning. Self-reference occurs on multiple timescales, from motifs to phrases to reusing of entire sections of music, such as in pieces with ABA structure. The Transformer, a sequence model based on self-attention, has achieved compelling results in many generat... | We show the first successful use of Transformer in generating music that exhibits long-term structure. |
63 | Width-Based Lookaheads Augmented with Base Policies for Stochastic Shortest Paths | Sequential decision problems for real-world applications often need to be solved in real-time, requiring algorithms to perform well with a restricted computational budget.Width-based lookaheads have shown state-of-the-art performance in classical planning problems as well as over the Atari games with tight budgets.In t... | We propose a new Monte Carlo Tree Search / rollout algorithm that relies on width-based search to construct a lookahead. |
64 | Deep Within-Class Covariance Analysis for Robust Deep Audio Representation Learning | Deep Neural Networks are known for excellent performance in supervised tasks such as classification.Convolutional Neural Networks, in particular, can learn effective features and build high-level representations that can be used forclassification, but also for querying and nearest neighbor search.However, CNNs have als... | We propose a novel deep neural network layer for normalising within-class covariance of an internal representation in a neural network that results in significantly improving the generalisation of the learned representations. |
65 | Learning Diverse Generations using Determinantal Point Processes | Generative models have proven to be an outstanding tool for representing high-dimensional probability distributions and generating realistic looking images.A fundamental characteristic of generative models is their ability to produce multi-modal outputs.However, while training, they are often susceptible to mode collap... | The addition of a diversity criterion inspired from DPP in the GAN objective avoids mode collapse and leads to better generations. |
66 | The role of over-parametrization in generalization of neural networks | Despite existing work on ensuring generalization of neural networks in terms of scale sensitive complexity measures, such as norms, margin and sharpness, these complexity measures do not offer an explanation of why neural networks generalize better with over-parametrization.In this work we suggest a novel complexity me... | We suggest a generalization bound that could partly explain the improvement in generalization with over-parametrization. |
67 | Going Deeper with Lean Point Networks | We introduce three generic point cloud processing blocks that improve both accuracy and memory consumption of multiple state-of-the-art networks, thus allowing to design deeper and more accurate networks.The novel processing blocks that facilitate efficient information flow are a convolution-type operation block for po... | We introduce three generic point cloud processing blocks that improve both accuracy and memory consumption of multiple state-of-the-art networks, thus allowing to design deeper and more accurate networks. |
68 | Learned in Speech Recognition: Contextual Acoustic Word Embeddings | End-to-end acoustic-to-word speech recognition models have recently gained popularity because they are easy to train, scale well to large amounts of training data, and do not require a lexicon.In addition, word models may also be easier to integrate with downstream tasks such as spoken language understanding, because i... | Methods to learn contextual acoustic word embeddings from an end-to-end speech recognition model that perform competitively with text-based word embeddings. |
69 | UNSUPERVISED MONOCULAR DEPTH ESTIMATION WITH CLEAR BOUNDARIES | Unsupervised monocular depth estimation has made great progress after deep learning is involved.Training with binocular stereo images is considered as a good option as the data can be easily obtained.However, the depth or disparity prediction results show poor performance for the object boundaries.The main reason is relate... | This paper proposes a mask method which solves the previously blurred results of unsupervised monocular depth estimation caused by occlusion |
70 | Graph Classification with 2D Convolutional Neural Networks | Graph classification is currently dominated by graph kernels, which, while powerful, suffer some significant limitations.Convolutional Neural Networks offer a very appealing alternative.However, processing graphs with CNNs is not trivial.To address this challenge, many sophisticated extensions of CNNs have recently bee... | We introduce a novel way to represent graphs as multi-channel image-like structures that allows them to be handled by vanilla 2D CNNs. |
71 | Benefits of Depth for Long-Term Memory of Recurrent Networks | The key attribute that drives the unprecedented success of modern Recurrent Neural Networks on learning tasks which involve sequential data, is their ever-improving ability to model intricate long-term temporal dependencies.However, a well-established measure of RNNs' long-term memory capacity is lacking, and thus for... | We propose a measure of long-term memory and prove that deep recurrent networks are much better fit to model long-term temporal dependencies than shallow ones. |
72 | Contextual and neural representations of sequentially complex animal vocalizations | Holistically exploring the perceptual and neural representations underlying animal communication has traditionally been very difficult because of the complexity of the underlying signal.We present here a novel set of techniques to project entire communicative repertoires into low dimensional spaces that can be systemat... | We compare perceptual, neural, and modeled representations of animal communication using machine learning, behavior, and physiology. |
73 | What Information Does a ResNet Compress? | The information bottleneck principle suggests that SGD-based training of deep neural networks results in optimally compressed hidden layers, from an information theoretic perspective.However, this claim was established on toy data.The goal of the work we present here is to test these claims in a realistic setting using... | The Information Bottleneck Principle applied to ResNets, using PixelCNN++ models to decode mutual information and conditionally generate images for information illustration |
74 | Hope For The Best But Prepare For The Worst: Cautious Adaptation In RL Agents | We study the problem of safe adaptation: given a model trained on a variety of past experiences for some task, can this model learn to perform that task in a new situation while avoiding catastrophic failure?This problem setting occurs frequently in real-world reinforcement learning scenarios such as a vehicle adapting... | Adaptation of an RL agent in a target environment with unknown dynamics is fast and safe when we transfer prior experience in a variety of environments and then select risk-averse actions during adaptation. |
75 | The Neuro-Symbolic Concept Learner: Interpreting Scenes, Words, and Sentences From Natural Supervision | We propose the Neuro-Symbolic Concept Learner, a model that learns visual concepts, words, and semantic parsing of sentences without explicit supervision on any of them; instead, our model learns by simply looking at images and reading paired questions and answers.Our model builds an object-based scene representation a... | We propose the Neuro-Symbolic Concept Learner (NS-CL), a model that learns visual concepts, words, and semantic parsing of sentences without explicit supervision on any of them. |
76 | Gaussian Process Meta-Representations For Hierarchical Neural Network Weight Priors | Bayesian inference offers a theoretically grounded and general way to train neural networks and can potentially give calibrated uncertainty.However, it is challenging to specify a meaningful and tractable prior over the network parameters, and deal with the weight correlations in the posterior.To this end, this paper i... | We introduce a Gaussian Process Prior over weights in a neural network and explore its ability to model input-dependent weights with benefits to various tasks, including uncertainty estimation and generalization in the low-sample setting. |
77 | Character-level Translation with Self-attention | We perform an in-depth investigation of the suitability of self-attention models for character-level neural machine translation.We test the standard transformer model, as well as a novel variant in which the encoder block combines information from nearby characters using convolution.We perform extensive experiments on ... | We perform an in-depth investigation of the suitability of self-attention models for character-level neural machine translation. |
78 | Learning to diagnose from scratch by exploiting dependencies among labels | The field of medical diagnostics contains a wealth of challenges which closely resemble classical machine learning problems; practical constraints, however, complicate the translation of these endpoints naively into classical architectures.Many tasks in radiology, for example, are largely problems of multi-label classi... | we present the state-of-the-art results of using neural networks to diagnose chest x-rays |
79 | Analysis of Video Feature Learning in Two-Stream CNNs on the Example of Zebrafish Swim Bout Classification | Semmelhack et al. have achieved high classification accuracy in distinguishing swim bouts of zebrafish using a Support Vector Machine.Convolutional Neural Networks have reached superior performance in various image recognition tasks over SVMs, but these powerful networks remain a black box.Reaching better transparency ... | We demonstrate the utility of a recent AI explainability technique by visualizing the learned features of a CNN trained on binary classification of zebrafish movements. |
80 | INTERNAL-CONSISTENCY CONSTRAINTS FOR EMERGENT COMMUNICATION | When communicating, humans rely on internally-consistent language representations.That is, as speakers, we expect listeners to behave the same way we do when we listen.This work proposes several methods for encouraging such internal consistency in dialog agents in an emergent communication setting.We consider two hypot... | Internal-consistency constraints improve agents' ability to develop emergent protocols that generalize across communicative roles. |
81 | Discovering the compositional structure of vector representations with Role Learning Networks | Neural networks are able to perform tasks that rely on compositional structure even though they lack obvious mechanisms for representing this structure.To analyze the internal representations that enable such success, we propose ROLE, a technique that detects whether these representations implicitly encode symbolic str... | We introduce a new analysis technique that discovers interpretable compositional structure in notoriously hard-to-interpret recurrent neural networks. |
82 | A Unified Theory of Early Visual Representations from Retina to Cortex through Anatomically Constrained Deep CNNs | The vertebrate visual system is hierarchically organized to process visual information in successive stages.Neural representations vary drastically across the first stages of visual processing: at the output of the retina, ganglion cell receptive fields exhibit a clear antagonistic center-surround structure, whereas in... | We reproduced neural representations found in biological visual systems by simulating their neural resource constraints in a deep convolutional model. |
83 | Identifying Generalization Properties in Neural Networks | While it has not yet been proven, empirical evidence suggests that model generalization is related to local properties of the optima which can be described via the Hessian.We connect model generalization with the local property of a solution under the PAC-Bayes paradigm.In particular, we prove that model generalization... | A theory connecting the Hessian of the solution to the generalization power of the model |
84 | EnGAN: Latent Space MCMC and Maximum Entropy Generators for Energy-based Models | Unsupervised learning is about capturing dependencies between variables and is driven by the contrast between the probable vs improbable configurations of these variables, often either via a generative model which only samples probable ones or with an energy function which is low for probable ones and high for improbab... | We introduced entropy maximization to GANs, leading to a reinterpretation of the critic as an energy function. |
85 | “Style” Transfer for Musical Audio Using Multiple Time-Frequency Representations | Neural Style Transfer has become a popular technique for generating images of distinct artistic styles using convolutional neural networks.This recent success in image style transfer has raised the question of whether similar methods can be leveraged to alter the “style” of musical audio.In this work, we attempt long time-... | We present a long time-scale musical audio style transfer algorithm which synthesizes audio in the time-domain, but uses Time-Frequency representations of audio. |
86 | Continual adaptation for efficient machine communication | To communicate with new partners in new contexts, humans rapidly form new linguistic conventions.Recent language models trained with deep neural networks are able to comprehend and produce the existing conventions present in their training data, but are not able to flexibly and interactively adapt those conventions on ... | We propose a repeated reference benchmark task and a regularized continual learning approach for adaptive communication with humans in unfamiliar domains |
87 | FSPool: Learning Set Representations with Featurewise Sort Pooling | Traditional set prediction models can struggle with simple datasets due to an issue we call the responsibility problem.We introduce a pooling method for sets of feature vectors based on sorting features across elements of the set.This can be used to construct a permutation-equivariant auto-encoder that avoids this resp... | Sort in encoder and undo sorting in decoder to avoid responsibility problem in set auto-encoders |
88 | PLEX: PLanner and EXecutor for Embodied Learning in Navigation | We present a method for policy learning to navigate indoor environments.We adopt a hierarchical policy approach, where two agents are trained to work in cohesion with one another to perform a complex navigation task.A Planner agent operates at a higher level and proposes sub-goals for an Executor agent.The Executor re... | We present a hierarchical learning framework for navigation within an embodied learning setting |
89 | RTFM: Generalising to New Environment Dynamics via Reading | Obtaining policies that can generalise to new environments in reinforcement learning is challenging.In this work, we demonstrate that language understanding via a reading policy learner is a promising vehicle for generalisation to new environments.We propose a grounded policy learning problem, Read to Fight Monsters, i... | We show language understanding via reading is promising way to learn policies that generalise to new environments. |
90 | Coherent Gradients: An Approach to Understanding Generalization in Gradient Descent-based Optimization | An open question in the Deep Learning community is why neural networks trained with Gradient Descent generalize well on real datasets even though they are capable of fitting random data.We propose an approach to answering this question based on a hypothesis about the dynamics of gradient descent that we call Coherent G... | We propose a hypothesis for why gradient descent generalizes based on how per-example gradients interact with each other. |
91 | Deep 3D Pan via Local adaptive "t-shaped" convolutions with global and local adaptive dilations | Recent advances in deep learning have shown promising results in many low-level vision tasks.However, solving the single-image-based view synthesis is still an open problem.In particular, the generation of new images at parallel camera views given a single input image is of great interest, as it enables 3D visualizati... | Novel architecture for stereoscopic view synthesis at arbitrary camera shifts utilizing adaptive t-shaped kernels with adaptive dilations. |
92 | Cutting Down Training Memory by Re-forwarding | Deep Neural Networks require huge GPU memory when training on modern image/video databases.Unfortunately, the GPU memory as a hardware resource is always finite, which limits the image resolution, batch size, and learning rate that could be used for better DNN performance.In this paper, we propose a novel training app... | This paper proposes fundamental theory and optimal algorithms for DNN training, which reduce up to 80% of training memory for popular DNNs. |
93 | On the Universal Approximability and Complexity Bounds of Quantized ReLU Neural Networks | Compression is a key step to deploy large neural networks on resource-constrained platforms.As a popular compression technique, quantization constrains the number of distinct weight values and thus reducing the number of bits required to represent and store each weight.In this paper, we study the representation power o... | This paper proves the universal approximability of quantized ReLU neural networks and puts forward the complexity bound given arbitrary error. |
94 | CAQL: Continuous Action Q-Learning | Reinforcement learning with value-based methods has shown success in a variety of domains such as games and recommender systems.When the action space is finite, these algorithms implicitly find a policy by learning the optimal value function, which is often very efficient.However, one major challenge of extending Q-le... | A general framework of value-based reinforcement learning for continuous control |
95 | Generative Adversarial Network Training is a Continual Learning Problem | Generative Adversarial Networks have proven to be a powerful framework for learning to draw samples from complex distributions.However, GANs are also notoriously difficult to train, with mode collapse and oscillations a common problem.We hypothesize that this is at least in part due to the evolution of the generator di... | Generative Adversarial Network Training is a Continual Learning Problem. |
96 | Network Reparameterization for Unseen Class Categorization | Many problems with large-scale labeled training data have been impressively solved by deep learning.However, Unseen Class Categorization with minimal information provided about target classes is the most commonly encountered setting in industry, which remains a challenging research problem in machine learning.Previous ... | A unified frame for both few-shot learning and zero-shot learning based on network reparameterization |
97 | GraphQA: Protein Model Quality Assessment using Graph Convolutional Network | Proteins are ubiquitous molecules whose function in biological processes is determined by their 3D structure.Experimental identification of a protein's structure can be time-consuming, prohibitively expensive, and not always possible.Alternatively, protein folding can be modeled using computational methods, which ... | GraphQA is a graph-based method for protein Quality Assessment that improves the state-of-the-art for both hand-engineered and representation-learning approaches |
98 | Learning in Confusion: Batch Active Learning with Noisy Oracle | We study the problem of training machine learning models incrementally using active learning with access to imperfect or noisy oracles.We specifically consider the setting of batch active learning, in which multiple samples are selected as opposed to a single sample as in classical settings so as to reduce the training... | We address the active learning in batch setting with noisy oracles and use model uncertainty to encode the decision quality of active learning algorithm during acquisition. |
99 | Learning to Understand Goal Specifications by Modelling Reward | Recent work has shown that deep reinforcement-learning agents can learn to follow language-like instructions from infrequent environment rewards.However, this places on environment designers the onus of designing language-conditional reward functions which may not be easily or tractably implemented as the complexity of... | We propose AGILE, a framework for training agents to perform instructions from examples of respective goal-states. |