Dataset columns:
- sentence: string (lengths 373 to 5.09k characters)
- label: string (2 classes: accept, reject)
Title: An Empirical and Comparative Analysis of Data Valuation with Scalable Algorithms. Abstract: This paper focuses on valuating training data for supervised learning tasks and studies the Shapley value, a data value notion originated in cooperative game theory. The Shapley value defines a unique value distribution s...
reject
Title: Congested bandits: Optimal routing via short-term resets. Abstract: For traffic routing platforms, the choice of which route to recommend to a user depends on the congestion on these routes -- indeed, an individual's utility depends on the number of people using the recommended route at that instance. Motivated ...
reject
Title: Distributionally Robust Fair Principal Components via Geodesic Descents. Abstract: Principal component analysis is a simple yet useful dimensionality reduction technique in modern machine learning pipelines. In consequential domains such as college admission, healthcare and credit approval, it is imperative to t...
accept
Title: On Dyadic Fairness: Exploring and Mitigating Bias in Graph Connections. Abstract: Disparate impact has raised serious concerns in machine learning applications and its societal impacts. In response to the need of mitigating discrimination, fairness has been regarded as a crucial property in algorithmic design. I...
accept
Title: Attacking Binarized Neural Networks. Abstract: Neural networks with low-precision weights and activations offer compelling efficiency advantages over their full-precision equivalents. The two most frequently discussed benefits of quantization are reduced memory consumption, and a faster forward pass when impleme...
accept
Title: Additive Poisson Process: Learning Intensity of Higher-Order Interaction in Poisson Processes. Abstract: We present the Additive Poisson Process (APP), a novel framework that can model the higher-order interaction effects of the intensity functions in Poisson processes using projections into lower-dimensional sp...
reject
Title: Optimal ANN-SNN Conversion for High-accuracy and Ultra-low-latency Spiking Neural Networks. Abstract: Spiking Neural Networks (SNNs) have gained great attraction due to their distinctive properties of low power consumption and fast inference on neuromorphic hardware. As the most effective method to get deep SNNs...
accept
Title: The power of deeper networks for expressing natural functions. Abstract: It is well-known that neural networks are universal approximators, but that deeper networks tend in practice to be more powerful than shallower ones. We shed light on this by proving that the total number of neurons m required to approximat...
accept
Title: Mixed-curvature Variational Autoencoders. Abstract: Euclidean space has historically been the typical workhorse geometry for machine learning applications due to its power and simplicity. However, it has recently been shown that geometric spaces with constant non-zero curvature improve representations and perfor...
accept
Title: Fixed Neural Network Steganography: Train the images, not the network. Abstract: Recent attempts at image steganography make use of advances in deep learning to train an encoder-decoder network pair to hide and retrieve secret messages in images. These methods are able to hide large amounts of data, but they als...
accept
Title: Locality-Based Mini Batching for Graph Neural Networks. Abstract: Training graph neural networks on large graphs is challenging since there is no clear way of how to extract mini batches from connected data. To solve this, previous methods have primarily relied on sampling. While this often leads to good converg...
reject
Title: Generative Adversarial Nets for Multiple Text Corpora. Abstract: Generative adversarial nets (GANs) have been successfully applied to the artificial generation of image data. In terms of text data, much has been done on the artificial generation of natural language from a single corpus. We consider multiple text...
reject
Title: Scalable Private Learning with PATE. Abstract: The rapid adoption of machine learning has increased concerns about the privacy implications of machine learning models trained on sensitive data, such as medical records or other personal information. To address those concerns, one promising approach is Private Agg...
accept
Title: EBM Life Cycle: MCMC Strategies for Synthesis, Defense, and Density Modeling. Abstract: This work presents strategies to learn an Energy-Based Model (EBM) according to the desired length of its MCMC sampling trajectories. MCMC trajectories of different lengths correspond to models with different purposes. Our ex...
reject
Title: Towards Nonlinear Disentanglement in Natural Data with Temporal Sparse Coding. Abstract: Disentangling the underlying generative factors from complex data has so far been limited to carefully constructed scenarios. We propose a path towards natural data by first showing that the statistics of natural data provid...
accept
Title: OPTIMAL BINARY QUANTIZATION FOR DEEP NEURAL NETWORKS. Abstract: Quantizing weights and activations of deep neural networks results in significant improvement in inference efficiency at the cost of lower accuracy. A source of the accuracy gap between full precision and quantized models is the quantization error. ...
reject
Title: Dive Deeper Into Integral Pose Regression. Abstract: Integral pose regression combines an implicit heatmap with end-to-end training for human body and hand pose estimation. Unlike detection-based heatmap methods, which decode final joint positions from the heatmap with a non-differentiable argmax operation, inte...
accept
Title: A Modulation Layer to Increase Neural Network Robustness Against Data Quality Issues. Abstract: Data quality is a common problem in machine learning, especially in high-stakes settings such as healthcare. Missing data affects accuracy, calibration, and feature attribution in complex patterns. Developers often tr...
reject
Title: Towards an Adversarially Robust Normalization Approach. Abstract: Batch Normalization (BatchNorm) has shown to be effective for improving and accelerating the training of deep neural networks. However, recently it has been shown that it is also vulnerable to adversarial perturbations. In this work, we aim to inv...
reject
Title: Defense-GAN: Protecting Classifiers Against Adversarial Attacks Using Generative Models. Abstract: In recent years, deep neural network approaches have been widely adopted for machine learning tasks, including classification. However, they were shown to be vulnerable to adversarial perturbations: carefully craft...
accept
Title: Stabilizing Adversarial Nets with Prediction Methods. Abstract: Adversarial neural networks solve many important problems in data science, but are notoriously difficult to train. These difficulties come from the fact that optimal weights for adversarial nets correspond to saddle points, and not minimizers, of th...
accept
Title: Continual Learning with Gated Incremental Memories for Sequential Data Processing. Abstract: The ability to learn over changing task distributions without forgetting previous knowledge, also known as continual learning, is a key enabler for scalable and trustworthy deployments of adaptive solutions. While the im...
reject
Title: LSH Microbatches for Stochastic Gradients: Value in Rearrangement. Abstract: Metric embeddings are immensely useful representations of associations between entities (images, users, search queries, words, and more). Embeddings are learned by optimizing a loss objective of the general form of a sum over ...
reject
Title: Using Synthetic Data to Improve the Long-range Forecasting of Time Series Data. Abstract: Effective long-range forecasting of time series data remains an unsolved and open problem. One possible approach is to use generative models to improve long-range forecasting, but the challenge then is how to generate high-...
reject
Title: X-Forest: Approximate Random Projection Trees for Similarity Measurement. Abstract: Similarity measurement plays a central role in various data mining and machine learning tasks. Generally, a similarity measurement solution should, in an ideal state, possess the following three properties: accuracy, efficiency a...
reject
Title: Assisted Learning for Organizations with Limited Imbalanced Data. Abstract: We develop an assisted learning framework for assisting organization-level learners to improve their learning performance with limited and imbalanced data. In particular, learners at the organization level usually have sufficient computa...
reject
Title: On Learning with Fairness Trade-Offs. Abstract: Previous literature has shown that bias mitigating algorithms were sometimes prone to overfitting and had poor out-of-sample generalisation. This paper is first and foremost concerned with establishing a mathematical framework to tackle the specific issue of genera...
reject
Title: RTFM: Generalising to New Environment Dynamics via Reading. Abstract: Obtaining policies that can generalise to new environments in reinforcement learning is challenging. In this work, we demonstrate that language understanding via a reading policy learner is a promising vehicle for generalisation to new environ...
accept
Title: On Bridging Generic and Personalized Federated Learning for Image Classification. Abstract: Federated learning is promising for its capability to collaboratively train models with multiple clients without accessing their data, but vulnerable when clients' data distributions diverge from each other. This divergen...
accept
Title: On the Importance of Looking at the Manifold. Abstract: Data rarely lies on uniquely Euclidean spaces. Even data typically represented in regular domains, such as images, can have a higher level of relational information, either between data samples or even relations within samples, e.g., how the objects in an i...
reject
Title: Is Attention Better Than Matrix Decomposition?. Abstract: As an essential ingredient of modern deep learning, attention mechanism, especially self-attention, plays a vital role in the global correlation discovery. However, is hand-crafted attention irreplaceable when modeling the global context? Our intriguing f...
accept
Title: Heteroskedastic and Imbalanced Deep Learning with Adaptive Regularization. Abstract: Real-world large-scale datasets are heteroskedastic and imbalanced --- labels have varying levels of uncertainty and label distributions are long-tailed. Heteroskedasticity and imbalance challenge deep learning algorithms due to...
accept
Title: Isotropic Contextual Representations through Variational Regularization. Abstract: Contextual language representations achieve state-of-the-art performance across various natural language processing tasks. However, these representations have been shown to suffer from the degeneration problem, i.e. they occupy a ...
reject
Title: Maximum Likelihood Estimation for Multimodal Learning with Missing Modality. Abstract: Multimodal learning has achieved great successes in many scenarios. Compared with unimodal learning, it can effectively combine the information from different modalities to improve the performance of learning tasks. In reality...
reject
Title: Adaptive Risk Minimization: A Meta-Learning Approach for Tackling Group Shift. Abstract: A fundamental assumption of most machine learning algorithms is that the training and test data are drawn from the same underlying distribution. However, this assumption is violated in almost all practical applications: mach...
reject
Title: TESLA: Task-wise Early Stopping and Loss Aggregation for Dynamic Neural Network Inference. Abstract: For inference operations in deep neural networks on end devices, it is desirable to deploy a single pre-trained neural network model, which can dynamically scale across a computation range without comprising accu...
reject
Title: Poly-encoders: Architectures and Pre-training Strategies for Fast and Accurate Multi-sentence Scoring. Abstract: The use of deep pre-trained transformers has led to remarkable progress in a number of applications (Devlin et al., 2018). For tasks that make pairwise comparisons between sequences, matching a given ...
accept
Title: Learning Physics Priors for Deep Reinforcement Learing. Abstract: While model-based deep reinforcement learning (RL) holds great promise for sample efficiency and generalization, learning an accurate dynamics model is challenging and often requires substantial interactions with the environment. Further, a wide v...
reject
Title: The Logical Expressiveness of Graph Neural Networks. Abstract: The ability of graph neural networks (GNNs) for distinguishing nodes in graphs has been recently characterized in terms of the Weisfeiler-Lehman (WL) test for checking graph isomorphism. This characterization, however, does not settle the issue of wh...
accept
Title: Importance-based Multimodal Autoencoder. Abstract: Integrating information from multiple modalities (e.g., verbal, acoustic and visual data) into meaningful representations has seen great progress in recent years. However, two challenges are not sufficiently addressed by current approaches: (1) computationally...
reject
Title: SpectralNet: Spectral Clustering using Deep Neural Networks. Abstract: Spectral clustering is a leading and popular technique in unsupervised data analysis. Two of its major limitations are scalability and generalization of the spectral embedding (i.e., out-of-sample-extension). In this paper we introduce a dee...
accept
Title: Accelerating first order optimization algorithms. Abstract: There exist several stochastic optimization algorithms. However in most cases, it is difficult to tell for a particular problem which will be the best optimizer to choose as each of them are good. Thus, we present a simple and intuitive technique, when ...
reject
Title: Stabilizing Gradients for Deep Neural Networks via Efficient SVD Parameterization. Abstract: Vanishing and exploding gradients are two of the main obstacles in training deep neural networks, especially in capturing long range dependencies in recurrent neural networks (RNNs). In this paper, we present an efficien...
reject
Title: Detecting Change in Seasonal Pattern via Autoencoder and Temporal Regularization. Abstract: Change-point detection problem consists of discovering abrupt property changes in the generation process of time-series. Most state-of-the-art models are optimizing the power of a kernel two-sample test, with only a few a...
reject
Title: Finding Winning Tickets with Limited (or No) Supervision. Abstract: The lottery ticket hypothesis argues that neural networks contain sparse subnetworks, which, if appropriately initialized (the winning tickets), are capable of matching the accuracy of the full network when trained in isolation. Empirically made...
reject
Title: Hard Masking for Explaining Graph Neural Networks. Abstract: Graph Neural Networks (GNNs) are a flexible and powerful family of models that build nodes' representations on irregular graph-structured data. This paper focuses on explaining or interpreting the rationale underlying a given prediction of already trai...
reject
Title: Learning Manifold Patch-Based Representations of Man-Made Shapes. Abstract: Choosing the right representation for geometry is crucial for making 3D models compatible with existing applications. Focusing on piecewise-smooth man-made shapes, we propose a new representation that is usable in conventional CAD modeli...
accept
Title: Graph-Augmented Normalizing Flows for Anomaly Detection of Multiple Time Series. Abstract: Anomaly detection is a widely studied task for a broad variety of data types; among them, multiple time series appear frequently in applications, including for example, power grids and traffic networks. Detecting anomalies...
accept
Title: Flatness is a False Friend. Abstract: Hessian based measures of flatness, such as the trace, Frobenius and spectral norms, have been argued, used and shown to relate to generalisation. In this paper we demonstrate that, for feed-forward neural networks under the cross-entropy loss, low-loss solutions with large ...
reject
Title: Improving Robustness Without Sacrificing Accuracy with Patch Gaussian Augmentation. Abstract: Deploying machine learning systems in the real world requires both high accuracy on clean data and robustness to naturally occurring corruptions. While architectural advances have led to improved accuracy, building robu...
reject
Title: Influence Estimation for Generative Adversarial Networks. Abstract: Identifying harmful instances, whose absence in a training dataset improves model performance, is important for building better machine learning models. Although previous studies have succeeded in estimating harmful instances under supervised s...
accept
Title: Matrix Multilayer Perceptron. Abstract: Models that output a vector of responses given some inputs, in the form of a conditional mean vector, are at the core of machine learning. This includes neural networks such as the multilayer perceptron (MLP). However, models that output a symmetric positive definite (SPD)...
reject
Title: Image GANs meet Differentiable Rendering for Inverse Graphics and Interpretable 3D Neural Rendering. Abstract: Differentiable rendering has paved the way to training neural networks to perform “inverse graphics” tasks such as predicting 3D geometry from monocular photographs. To train high performing models, mos...
accept
Title: Meta-Learning with Domain Adaptation for Few-Shot Learning under Domain Shift. Abstract: Few-Shot Learning (learning with limited labeled data) aims to overcome the limitations of traditional machine learning approaches which require thousands of labeled examples to train an effective model. Considered as a hall...
reject
Title: Logarithmic landscape and power-law escape rate of SGD. Abstract: Stochastic gradient descent (SGD) undergoes complicated multiplicative noise for the mean-square loss. We use this property of the SGD noise to derive a stochastic differential equation (SDE) with simpler additive noise by performing a random time...
reject
Title: Improving the Unsupervised Disentangled Representation Learning with VAE Ensemble. Abstract: Variational Autoencoder (VAE) based frameworks have achieved the state-of-the-art performance on the unsupervised disentangled representation learning. A recent theoretical analysis shows that such success is mainly due ...
reject
Title: Achieving Strong Regularization for Deep Neural Networks. Abstract: L1 and L2 regularizers are critical tools in machine learning due to their ability to simplify solutions. However, imposing strong L1 or L2 regularization with gradient descent method easily fails, and this limits the generalization ability of t...
reject
Title: Area Attention. Abstract: Existing attention mechanisms, are mostly item-based in that a model is trained to attend to individual items in a collection (the memory) where each item has a predefined, fixed granularity, e.g., a character or a word. Intuitively, an area in the memory consisting of multiple items ca...
reject
Title: Correction Networks: Meta-Learning for Zero-Shot Learning. Abstract: We propose a model that learns to perform zero-shot classification using a meta-learner that is trained to produce a correction to the output of a previously trained learner. The model consists of two modules: a task module that supplies an ini...
reject
Title: Differentiable Perturb-and-Parse: Semi-Supervised Parsing with a Structured Variational Autoencoder. Abstract: Human annotation for syntactic parsing is expensive, and large resources are available only for a fraction of languages. A question we ask is whether one can leverage abundant unlabeled texts to improv...
accept
Title: Learning to Complete Code with Sketches. Abstract: Code completion is usually cast as a language modelling problem, i.e., continuing an input in a left-to-right fashion. However, in practice, some parts of the completion (e.g., string literals) may be very hard to predict, whereas subsequent parts directly follo...
accept
Title: Decoupling Weight Regularization from Batch Size for Model Compression. Abstract: Conventionally, compression-aware training performs weight compression for every mini-batch to compute the impact of compression on the loss function. In this paper, in order to study when would be the right time to compress weight...
reject
Title: Stochastic Neural Physics Predictor. Abstract: Recently, neural-network based forward dynamics models have been proposed that attempt to learn the dynamics of physical systems in a deterministic way. While near-term motion can be predicted accurately, long-term predictions suffer from accumul...
reject
Title: Wasserstein diffusion on graphs with missing attributes. Abstract: Many real-world graphs are attributed graphs where nodes are associated with non-topological features. While attributes can be missing anywhere in an attributed graph, most of existing node representation learning approaches do not consider such ...
reject
Title: Data augmentation instead of explicit regularization. Abstract: Modern deep artificial neural networks have achieved impressive results through models with orders of magnitude more parameters than training examples which control overfitting with the help of regularization. Regularization can be implicit, as is t...
reject
Title: Learning Multi-Level Hierarchies with Hindsight. Abstract: Hierarchical agents have the potential to solve sequential decision making tasks with greater sample efficiency than their non-hierarchical counterparts because hierarchical agents can break down tasks into sets of subtasks that only require short sequen...
accept
Title: Learning to Augment Influential Data. Abstract: Data augmentation is a technique to reduce overfitting and to improve generalization by increasing the number of labeled data samples by performing label preserving transformations; however, it is currently conducted in a trial and error manner. A composition of pr...
reject
Title: Prediction Poisoning: Towards Defenses Against DNN Model Stealing Attacks. Abstract: High-performance Deep Neural Networks (DNNs) are increasingly deployed in many real-world applications e.g., cloud prediction APIs. Recent advances in model functionality stealing attacks via black-box access (i.e., inputs in, p...
accept
Title: FAST OBJECT LOCALIZATION VIA SENSITIVITY ANALYSIS. Abstract: Deep Convolutional Neural Networks (CNNs) have been repeatedly shown to perform well on image classification tasks, successfully recognizing a broad array of objects when given sufficient training data. Methods for object localization, however, are sti...
reject
Title: Low Complexity Approximate Bayesian Logistic Regression for Sparse Online Learning. Abstract: Theoretical results show that Bayesian methods can achieve lower bounds on regret for online logistic regression. In practice, however, such techniques may not be feasible especially for very large feature sets. Vario...
reject
Title: Task-Induced Representation Learning. Abstract: In this work, we evaluate the effectiveness of representation learning approaches for decision making in visually complex environments. Representation learning is essential for effective reinforcement learning (RL) from high-dimensional in- puts. Unsupervised repre...
accept
Title: Decoupling the Layers in Residual Networks. Abstract: We propose a Warped Residual Network (WarpNet) using a parallelizable warp operator for forward and backward propagation to distant layers that trains faster than the original residual neural network. We apply a perturbation theory on residual networks and de...
accept
Title: Learning Private Representations with Focal Entropy. Abstract: How can we learn a representation with good predictive power while preserving user privacy? We present an adversarial representation learning method to sanitize sensitive content from the representation in an adversarial fashion. Specifically, we pro...
reject
Title: Exploring Curvature Noise in Large-Batch Stochastic Optimization. Abstract: Using stochastic gradient descent (SGD) with large batch-sizes to train deep neural networks is an increasingly popular technique. By doing so, one can improve parallelization by scaling to multiple workers (GPUs) and hence leading to si...
reject
Title: Learnability Lock: Authorized Learnability Control Through Adversarial Invertible Transformations. Abstract: Owing much to the revolution of information technology, recent progress of deep learning benefits incredibly from the vastly enhanced access to data available in various digital formats. Yet those publicl...
accept
Title: RL-DARTS: Differentiable Architecture Search for Reinforcement Learning. Abstract: Recently, Differentiable Architecture Search (DARTS) has become one of the most popular Neural Architecture Search (NAS) methods successfully applied in supervised learning (SL). However, its applications in other domains, in part...
reject
Title: Neural Topic Model via Optimal Transport. Abstract: Recently, Neural Topic Models (NTMs) inspired by variational autoencoders have obtained increasingly research interest due to their promising results on text analysis. However, it is usually hard for existing NTMs to achieve good document representation and coh...
accept
Title: Bayesian Relational Generative Model for Scalable Multi-modal Learning. Abstract: The study of complex systems requires the integration of multiple heterogeneous and high-dimensional data types (e.g. multi-omics). However, previous generative approaches for multi-modal inputs suffer from two shortcomings. First,...
reject
Title: Learning Cluster Structured Sparsity by Reweighting. Abstract: Recently, the paradigm of unfolding iterative algorithms into finite-length feed-forward neural networks has achieved a great success in the area of sparse recovery. Benefit from available training data, the learned networks have achieved state-of-th...
reject
Title: A Sharp Analysis of Model-based Reinforcement Learning with Self-Play. Abstract: Model-based algorithms---algorithms that explore the environment through building and utilizing an estimated model---are widely used in reinforcement learning practice and theoretically shown to achieve optimal sample efficiency for...
reject
Title: Adaptive Sample-space & Adaptive Probability coding: a neural-network based approach for compression. Abstract: We propose Adaptive Sample-space & Adaptive Probability (ASAP) coding, an efficient neural-network based method for lossy data compression. Our ASAP coding distinguishes itself from the conventional me...
reject
Title: On the Convergence and Robustness of Batch Normalization. Abstract: Despite its empirical success, the theoretical underpinnings of the stability, convergence and acceleration properties of batch normalization (BN) remain elusive. In this paper, we attack this problem from a modelling approach, where we perform ...
reject
Title: In-N-Out: Pre-Training and Self-Training using Auxiliary Information for Out-of-Distribution Robustness. Abstract: Consider a prediction setting with few in-distribution labeled examples and many unlabeled examples both in- and out-of-distribution (OOD). The goal is to learn a model which performs well both in-d...
accept
Title: Black-Box Optimization Revisited: Improving Algorithm Selection Wizards through Massive Benchmarking. Abstract: Existing studies in black-box optimization for machine learning suffer from low generalizability, caused by a typically selective choice of problem instances used for training and testing different opt...
reject
Title: Reinforcement Learning with Efficient Active Feature Acquisition. Abstract: Solving real-life sequential decision making problems under partial observability involves an exploration-exploitation problem. To be successful, an agent needs to efficiently gather valuable information about the state of the world for ...
reject
Title: A teacher-student framework to distill future trajectories. Abstract: By learning to predict trajectories of dynamical systems, model-based methods can make extensive use of all observations from past experience. However, due to partial observability, stochasticity, compounding errors, and irrelevant dynamics, t...
accept
Title: Understanding Intrinsic Robustness Using Label Uncertainty. Abstract: A fundamental question in adversarial machine learning is whether a robust classifier exists for a given task. A line of research has made some progress towards this goal by studying the concentration of measure, but we argue standard concentr...
accept
Title: Non-Attentive Tacotron: Robust and controllable neural TTS synthesis including unsupervised duration modeling. Abstract: This paper presents Non-Attentive Tacotron based on the Tacotron 2 text-to-speech model, replacing the attention mechanism with an explicit duration predictor. This improves robustness signifi...
reject
Title: BRIDGING ADVERSARIAL SAMPLES AND ADVERSARIAL NETWORKS. Abstract: Generative adversarial networks have achieved remarkable performance on various tasks but suffer from sensitivity to hyper-parameters, training instability, and mode collapse. We find that this is partly due to gradient given by non-robust discrimi...
reject
Title: Disentangling Adversarial Robustness in Directions of the Data Manifold. Abstract: Using generative models (GAN or VAE) to craft adversarial examples, i.e. generative adversarial examples, has received increasing attention in recent years. Previous studies showed that the generative adversarial examples work dif...
reject
Title: Learning transitional skills with intrinsic motivation. Abstract: By maximizing an information theoretic objective, a few recent methods empower the agent to explore the environment and learn useful skills without supervision. However, when considering to use multiple consecutive skills to complete a specific ta...
reject
Title: Feature Interaction Interpretability: A Case for Explaining Ad-Recommendation Systems via Neural Interaction Detection. Abstract: Recommendation is a prevalent application of machine learning that affects many users; therefore, it is important for recommender models to be accurate and interpretable. In this work...
accept
Title: AlgebraNets. Abstract: Neural networks have historically been built layerwise from the set of functions in ${f: \mathbb{R}^n \to \mathbb{R}^m }$, i.e. with activations and weights/parameters represented by real numbers, $\mathbb{R}$. Our work considers a richer set of objects for activations and weights, and und...
reject
Title: Radial Basis Feature Transformation to Arm CNNs Against Adversarial Attacks. Abstract: The linear and non-flexible nature of deep convolutional models makes them vulnerable to carefully crafted adversarial perturbations. To tackle this problem, in this paper, we propose a nonlinear radial basis convolutional fea...
reject
Title: Efficient Inference and Exploration for Reinforcement Learning. Abstract: Despite an ever growing literature on reinforcement learning algorithms and applications, much less is known about their statistical inference. In this paper, we investigate the large-sample behaviors of the Q-value estimates with closed-f...
reject
Title: Planning from Pixels using Inverse Dynamics Models. Abstract: Learning dynamics models in high-dimensional observation spaces can be challenging for model-based RL agents. We propose a novel way to learn models in a latent space by learning to predict sequences of future actions conditioned on task completion. T...
accept
Title: On the Weaknesses of Reinforcement Learning for Neural Machine Translation. Abstract: Reinforcement learning (RL) is frequently used to increase performance in text generation tasks, including machine translation (MT), notably through the use of Minimum Risk Training (MRT) and Generative Adversarial Networks (G...
accept
Title: Learning to Solve Nonlinear Partial Differential Equation Systems To Accelerate MOSFET Simulation. Abstract: Semiconductor device simulation uses numerical analysis, where a set of coupled nonlinear partial differential equations is solved with the iterative Newton-Raphson method. Since an appropriate initial gu...
reject
Title: When Does Self-supervision Improve Few-shot Learning?. Abstract: We present a technique to improve the generalization of deep representations learned on small labeled datasets by introducing self-supervised tasks as auxiliary loss functions. Although recent research has shown benefits of self-supervised learning...
reject
Title: Towards Understanding Regularization in Batch Normalization. Abstract: Batch Normalization (BN) improves both convergence and generalization in training neural networks. This work understands these phenomena theoretically. We analyze BN by using a basic block of neural networks, consisting of a kernel layer, a B...
accept