Dataset columns:
  sentence: string (lengths 373 to 5.09k)
  label: string (2 classes: accept, reject)
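The two-column schema above can be sketched in Python. The sample rows and variable names below are illustrative only (abstracts are abbreviated), and this is not a loading API for the dataset itself, just a minimal picture of its record structure and of tallying the two label classes.

```python
from collections import Counter

# Each record pairs a "sentence" (paper title plus truncated abstract)
# with a "label" drawn from two classes: "accept" or "reject".
records = [
    {"sentence": "Title: Massively Parallel Hyperparameter Tuning. Abstract: ...",
     "label": "reject"},
    {"sentence": "Title: Time-Agnostic Prediction: Predicting Predictable Video Frames. Abstract: ...",
     "label": "accept"},
    {"sentence": "Title: Prior Networks for Detection of Adversarial Attacks. Abstract: ...",
     "label": "reject"},
]

# Tally how many rows fall into each label class.
label_counts = Counter(row["label"] for row in records)
print(label_counts["reject"], label_counts["accept"])  # 2 1
```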
Title: Massively Parallel Hyperparameter Tuning. Abstract: Modern machine learning models are characterized by large hyperparameter search spaces and prohibitively expensive training costs. For such models, we cannot afford to train candidate models sequentially and wait months before finding a suitable hyperparameter...
reject
Title: Cubic Spline Smoothing Compensation for Irregularly Sampled Sequences. Abstract: The marriage of recurrent neural networks and neural ordinary differential networks (ODE-RNN) is effective in modeling irregularly sampled sequences. While ODE produces the smooth hidden states between observation intervals, the RNN...
reject
Title: EqR: Equivariant Representations for Data-Efficient Reinforcement Learning. Abstract: We study different notions of equivariance as an inductive bias in Reinforcement Learning (RL) and propose new mechanisms for recovering representations that are equivariant to both an agent’s action, and symmetry transformatio...
reject
Title: Transferring Hierarchical Structure with Dual Meta Imitation Learning. Abstract: Hierarchical Imitation learning (HIL) is an effective way for robots to learn sub-skills from long-horizon unsegmented demonstrations. However, the learned hierarchical structure lacks the mechanism to transfer across multi-tasks or...
reject
Title: ModeRNN: Harnessing Spatiotemporal Mode Collapse in Unsupervised Predictive Learning. Abstract: Learning predictive models for unlabeled spatiotemporal data is challenging in part because visual dynamics can be highly entangled in real scenes, making existing approaches prone to overfit partial modes of physical...
reject
Title: Factoring out Prior Knowledge from Low-Dimensional Embeddings. Abstract: Low-dimensional embedding techniques such as tSNE and UMAP allow visualizing high-dimensional data and therewith facilitate the discovery of interesting structure. Although they are widely used, they visualize data as is, rather than in lig...
reject
Title: Reducing Computation in Recurrent Networks by Selectively Updating State Neurons. Abstract: Recurrent Neural Networks (RNN) are the state-of-the-art approach to sequential learning. However, standard RNNs use the same amount of computation at each timestep, regardless of the input data. As a result, even for hig...
reject
Title: Time-Agnostic Prediction: Predicting Predictable Video Frames. Abstract: Prediction is arguably one of the most basic functions of an intelligent system. In general, the problem of predicting events in the future or between two waypoints is exceedingly difficult. However, most phenomena naturally pass through re...
accept
Title: Prior Networks for Detection of Adversarial Attacks. Abstract: Adversarial examples are considered a serious issue for safety critical applications of AI, such as finance, autonomous vehicle control and medicinal applications. Though significant work has resulted in increased robustness of systems to these atta...
reject
Title: Proving the Lottery Ticket Hypothesis for Convolutional Neural Networks. Abstract: The lottery ticket hypothesis states that a randomly-initialized neural network contains a small subnetwork which, when trained in isolation, can compete with the performance of the original network. Recent theoretical works prove...
accept
Title: U-GAT-IT: Unsupervised Generative Attentional Networks with Adaptive Layer-Instance Normalization for Image-to-Image Translation. Abstract: We propose a novel method for unsupervised image-to-image translation, which incorporates a new attention module and a new learnable normalization function in an end-to-end ...
accept
Title: Penetrating the Fog: the Path to Efficient CNN Models. Abstract: With the increasing demand to deploy convolutional neural networks (CNNs) on mobile platforms, the sparse kernel approach was proposed, which could save more parameters than the standard convolution while maintaining accuracy. However, despite the ...
reject
Title: NETWORK ROBUSTNESS TO PCA PERTURBATIONS. Abstract: A key challenge in analyzing neural networks' robustness is identifying input features for which networks are robust to perturbations. Existing work focuses on direct perturbations to the inputs, thereby studies network robustness to the lowest-level features. I...
reject
Title: Monge-Amp\`ere Flow for Generative Modeling. Abstract: We present a deep generative model, named Monge-Amp\`ere flow, which builds on continuous-time gradient flow arising from the Monge-Amp\`ere equation in optimal transport theory. The generative map from the latent space to the data space follows a dynamical ...
reject
Title: Regularization Matters in Policy Optimization. Abstract: Deep Reinforcement Learning (Deep RL) has been receiving increasingly more attention thanks to its encouraging performance on a variety of control tasks. Yet, conventional regularization techniques in training neural networks (e.g., $L_2$ regularization, ...
reject
Title: Anytime Dense Prediction with Confidence Adaptivity. Abstract: Anytime inference requires a model to make a progression of predictions which might be halted at any time. Prior research on anytime visual recognition has mostly focused on image classification. We propose the first unified and end-to-end approach fo...
accept
Title: What's new? Summarizing Contributions in Scientific Literature. Abstract: With thousands of academic articles shared on a daily basis, it has become increasingly difficult to keep up with the latest scientific findings. To overcome this problem, we introduce a new task of $\textit{disentangled paper summarizatio...
reject
Title: C+1 Loss: Learn to Classify C Classes of Interest and the Background Class Differentially. Abstract: There is one kind of problem all around the classification area, where we want to classify C+1 classes of samples, including C semantically deterministic classes which we call classes of interest and the (C+1)th ...
reject
Title: Spontaneous Symmetry Breaking in Deep Neural Networks. Abstract: We propose a framework to understand the unprecedented performance and robustness of deep neural networks using field theory. Correlations between the weights within the same layer can be described by symmetries in that layer, and networks generali...
reject
Title: Dynamic Instance Hardness. Abstract: We introduce dynamic instance hardness (DIH) to facilitate the training of machine learning models. DIH is a property of each training sample and is computed as the running mean of the sample's instantaneous hardness as measured over the training history. We use DIH to evalua...
reject
Title: Model-Based Visual Planning with Self-Supervised Functional Distances. Abstract: A generalist robot must be able to complete a variety of tasks in its environment. One appealing way to specify each task is in terms of a goal observation. However, learning goal-reaching policies with reinforcement learning remain...
accept
Title: Feature Map Variational Auto-Encoders. Abstract: There have been multiple attempts with variational auto-encoders (VAE) to learn powerful global representations of complex data using a combination of latent stochastic variables and an autoregressive model over the dimensions of the data. However, for the most ch...
reject
Title: Decoupled Kernel Neural Processes: Neural Network-Parameterized Stochastic Processes using Explicit Data-driven Kernel. Abstract: Neural Processes (NPs) are a class of stochastic processes parametrized by neural networks. Unlike traditional stochastic processes (e.g., Gaussian processes), which require specifyin...
reject
Title: DiffSkill: Skill Abstraction from Differentiable Physics for Deformable Object Manipulations with Tools. Abstract: We consider the problem of sequential robotic manipulation of deformable objects using tools. Previous works have shown that differentiable physics simulators provide gradients to the environment st...
accept
Title: On the Power of Abstention and Data-Driven Decision Making for Adversarial Robustness. Abstract: We formally define a feature-space attack where the adversary can perturb datapoints by arbitrary amounts but in restricted directions. By restricting the attack to a small random subspace, our model provides a clean...
reject
Title: Optimal Transport for Long-Tailed Recognition with Learnable Cost Matrix. Abstract: It is attracting attention to the long-tailed recognition problem, a burning issue that has become very popular recently. Distinctive from conventional recognition is that it posits that the allocation of the training set is supr...
accept
Title: Value Propagation Networks. Abstract: We present Value Propagation (VProp), a parameter-efficient differentiable planning module built on Value Iteration which can successfully be trained in a reinforcement learning fashion to solve unseen tasks, has the capability to generalize to larger map sizes, and can lear...
reject
Title: GraphQA: Protein Model Quality Assessment using Graph Convolutional Network. Abstract: Proteins are ubiquitous molecules whose function in biological processes is determined by their 3D structure. Experimental identification of a protein's structure can be time-consuming, prohibitively expensive, and not always ...
reject
Title: Characterizing Structural Regularities of Labeled Data in Overparameterized Models. Abstract: Humans are accustomed to environments that contain both regularities and exceptions. For example, at most gas stations, one pays prior to pumping, but the occasional rural station does not accept payment in advance. Lik...
reject
Title: Near-Optimal Representation Learning for Hierarchical Reinforcement Learning. Abstract: We study the problem of representation learning in goal-conditioned hierarchical reinforcement learning. In such hierarchical structures, a higher-level controller solves tasks by iteratively communicating goals which a lower...
accept
Title: GINN: Fast GPU-TEE Based Integrity for Neural Network Training. Abstract: Machine learning models based on Deep Neural Networks (DNNs) are increasingly being deployed in a wide range of applications ranging from self-driving cars to Covid-19 diagnostics. The computational power necessary to learn a DNN is non-tr...
reject
Title: Hidden Markov models are recurrent neural networks: A disease progression modeling application. Abstract: Hidden Markov models (HMMs) are commonly used for disease progression modeling when the true state of a patient is not fully known. Since HMMs may have multiple local optima, performance can be improved by i...
reject
Title: Relevance of Unsupervised Metrics in Task-Oriented Dialogue for Evaluating Natural Language Generation. Abstract: Automated metrics such as BLEU are widely used in the machine translation literature. They have also been used recently in the dialogue community for evaluating dialogue response generation. However,...
reject
Title: BLOOD: Bi-level Learning Framework for Out-of-distribution Generalization. Abstract: Empirical risk minimization (ERM) based machine learning algorithms have suffered from weak generalization performance on the out-of-distribution (OOD) data when the training data are collected from separate environments with un...
reject
Title: PAC-Bayes Information Bottleneck. Abstract: Understanding the source of the superior generalization ability of NNs remains one of the most important problems in ML research. There have been a series of theoretical works trying to derive non-vacuous bounds for NNs. Recently, the compression of information stored ...
accept
Title: CausalGAN: Learning Causal Implicit Generative Models with Adversarial Training. Abstract: We introduce causal implicit generative models (CiGMs): models that allow sampling from not only the true observational but also the true interventional distributions. We show that adversarial training can be used to learn...
accept
Title: Genetic Algorithm for Constrained Molecular Inverse Design. Abstract: A genetic algorithm is suitable for exploring large search spaces as it finds an approximate solution. Because of this advantage, genetic algorithm is effective in exploring vast and unknown space such as molecular search space. Though the alg...
reject
Title: Softmax Supervision with Isotropic Normalization. Abstract: The softmax function is widely used to train deep neural networks for multi-class classification. Despite its outstanding performance in classification tasks, the features derived from the supervision of softmax are usually sub-optimal in some scenarios...
reject
Title: Solving Packing Problems by Conditional Query Learning. Abstract: Neural Combinatorial Optimization (NCO) has shown the potential to solve traditional NP-hard problems recently. Previous studies have shown that NCO outperforms heuristic algorithms in many combinatorial optimization problems such as the routing p...
reject
Title: Learning to Compute Word Embeddings On the Fly. Abstract: Words in natural language follow a Zipfian distribution whereby some words are frequent but most are rare. Learning representations for words in the ``long tail'' of this distribution requires enormous amounts of data. Representations of rare words train...
reject
Title: DOUBLY STOCHASTIC ADVERSARIAL AUTOENCODER. Abstract: Any autoencoder network can be turned into a generative model by imposing an arbitrary prior distribution on its hidden code vector. Variational Autoencoder uses a KL divergence penalty to impose the prior, whereas Adversarial Autoencoder uses generative adver...
reject
Title: Exploiting Verified Neural Networks via Floating Point Numerical Error. Abstract: Motivated by the need to reliably characterize the robustness of deep neural networks, researchers have developed verification algorithms for deep neural networks. Given a neural network, the verifiers aim to answer whether certain...
reject
Title: Maximum Entropy competes with Maximum Likelihood. Abstract: Maximum entropy (MAXENT) method has a large number of applications in theoretical and applied machine learning, since it provides a convenient non-parametric tool for estimating unknown probabilities. The method is a major contribution of statistical p...
reject
Title: Graphon based Clustering and Testing of Networks: Algorithms and Theory. Abstract: Network-valued data are encountered in a wide range of applications, and pose challenges in learning due to their complex structure and absence of vertex correspondence. Typical examples of such problems include classification or ...
accept
Title: Progressive Compressed Records: Taking a Byte Out of Deep Learning Data. Abstract: Deep learning training accesses vast amounts of data at high velocity, posing challenges for datasets retrieved over commodity networks and storage devices. We introduce a way to dynamically reduce the overhead of fetching and tra...
reject
Title: The wisdom of the crowd: reliable deep reinforcement learning through ensembles of Q-functions. Abstract: Reinforcement learning agents learn by exploring the environment and then exploiting what they have learned. This frees the human trainers from having to know the preferred action or intrinsic value of each ...
reject
Title: Individually Fair Rankings. Abstract: We develop an algorithm to train individually fair learning-to-rank (LTR) models. The proposed approach ensures items from minority groups appear alongside similar items from majority groups. This notion of fair ranking is based on the definition of individual fairness from ...
accept
Title: PareCO: Pareto-aware Channel Optimization for Slimmable Neural Networks. Abstract: Slimmable neural networks provide a flexible trade-off front between prediction error and computational cost (such as the number of floating-point operations or FLOPs) with the same storage cost as a single model. They have been p...
reject
Title: Open-Ended Content-Style Recombination Via Leakage Filtering. Abstract: We consider visual domains in which a class label specifies the content of an image, and class-irrelevant properties that differentiate instances constitute the style. We present a domain-independent method that permits the open-ended recom...
reject
Title: Zero-shot Fairness with Invisible Demographics. Abstract: In a statistical notion of algorithmic fairness, we partition individuals into groups based on some key demographic factors such as race and gender, and require that some statistics of a classifier be approximately equalized across those groups. Current a...
reject
Title: PC2WF: 3D Wireframe Reconstruction from Raw Point Clouds. Abstract: We introduce PC2WF, the first end-to-end trainable deep network architecture to convert a 3D point cloud into a wireframe model. The network takes as input an unordered set of 3D points sampled from the surface of some object, and outputs a wire...
accept
Title: UMEC: Unified model and embedding compression for efficient recommendation systems. Abstract: The recommendation system (RS) plays an important role in the content recommendation and retrieval scenarios. The core part of the system is the Ranking neural network, which is usually a bottleneck of whole system perf...
accept
Title: DIVERSIFY to Generalize: Learning Generalized Representations for Time Series Classification. Abstract: Time series classification is an important problem in real world. Due to its nonstationary property that the distribution changes over time, it remains challenging to build models for generalization to unseen ...
reject
Title: Piecewise Linear Neural Networks verification: A comparative study. Abstract: The success of Deep Learning and its potential use in many important safety-critical applications has motivated research on formal verification of Neural Network (NN) models. Despite the reputation of learned NN models to behave as ...
reject
Title: Variational Deterministic Uncertainty Quantification. Abstract: Building on recent advances in uncertainty quantification using a single deep deterministic model (DUQ), we introduce variational Deterministic Uncertainty Quantification (vDUQ). We overcome several shortcomings of DUQ by recasting it as a Gaussian ...
reject
Title: Suppressing Outlier Reconstruction in Autoencoders for Out-of-Distribution Detection. Abstract: While only trained to reconstruct training data, autoencoders may produce high-quality reconstructions of inputs that are well outside the training data distribution. This phenomenon, which we refer to as outlier rec...
reject
Title: Analyzing the Role of Model Uncertainty for Electronic Health Records. Abstract: In medicine, both ethical and monetary costs of incorrect predictions can be significant, and the complexity of the problems often necessitates increasingly complex models. Recent work has shown that changing just the random seed is...
reject
Title: Bayesian Learning with Information Gain Provably Bounds Risk for a Robust Adversarial Defense. Abstract: In this paper, we present a novel method to learn a Bayesian neural network robust against adversarial attacks. Previous algorithms have shown an adversarially trained Bayesian Neural Network (BNN) provides i...
reject
Title: Modeling the Second Player in Distributionally Robust Optimization. Abstract: Distributionally robust optimization (DRO) provides a framework for training machine learning models that are able to perform well on a collection of related data distributions (the "uncertainty set"). This is done by solving a min-max...
accept
Title: Do deep networks transfer invariances across classes?. Abstract: In order to generalize well, classifiers must learn to be invariant to nuisance transformations that do not alter an input's class. Many problems have "class-agnostic" nuisance transformations that apply similarly to all classes, such as lighting a...
accept
Title: BANANA: a Benchmark for the Assessment of Neural Architectures for Nucleic Acids. Abstract: Machine learning has always played an important role in bioinformatics and recent applications of deep learning have allowed solving a new spectrum of biologically relevant tasks. However, there is still a gap between th...
reject
Title: Optimized Gated Deep Learning Architectures for Sensor Fusion. Abstract: Sensor fusion is a key technology that integrates various sensory inputs to allow for robust decision making in many applications such as autonomous driving and robot control. Deep neural networks have been adopted for sensor fusion in a bo...
reject
Title: Direct Molecular Conformation Generation. Abstract: Molecular conformation generation, which is to generate 3 dimensional coordinates of all the atoms in a molecule, is an important task for bioinformatics and pharmacology. Most existing machine learning based methods first predict interatomic distances and then...
reject
Title: That Escalated Quickly: Compounding Complexity by Editing Levels at the Frontier of Agent Capabilities. Abstract: Deep Reinforcement Learning (RL) has recently produced impressive results in a series of settings such as games and robotics. However, a key challenge that limits the utility of RL agents for real-wo...
reject
Title: Dynamics-Aware Comparison of Learned Reward Functions. Abstract: The ability to learn reward functions plays an important role in enabling the deployment of intelligent agents in the real world. However, $\textit{comparing}$ reward functions, for example as a means of evaluating reward learning methods, presents...
accept
Title: Meta Adversarial Training. Abstract: Recently demonstrated physical-world adversarial attacks have exposed vulnerabilities in perception systems that pose severe risks for safety-critical applications such as autonomous driving. These attacks place adversarial artifacts in the physical world that indirectly caus...
reject
Title: Oblivious Sketching-based Central Path Method for Solving Linear Programming Problems. Abstract: In this work, we propose a sketching-based central path method for solving linear programmings, whose running time matches the state of art results [Cohen, Lee, Song STOC 19; Lee, Song, Zhang COLT 19]. Our method ope...
reject
Title: Unsupervised Adversarial Anomaly Detection using One-Class Support Vector Machines. Abstract: Anomaly detection discovers regular patterns in unlabeled data and identifies the non-conforming data points, which in some cases are the result of malicious attacks by adversaries. Learners such as One-Class Support V...
reject
Title: BERT-AL: BERT for Arbitrarily Long Document Understanding. Abstract: Pretrained language models attract lots of attentions, and they take advantage of the two-stages training process: pretraining on huge corpus and finetuning on specific tasks. Thereinto, BERT (Devlin et al., 2019) is a Transformer (Vaswani et a...
reject
Title: Wisdom of Committees: An Overlooked Approach To Faster and More Accurate Models. Abstract: Committee-based models (ensembles or cascades) construct models by combining existing pre-trained ones. While ensembles and cascades are well-known techniques that were proposed before deep learning, they are not considere...
accept
Title: Emergent Communication at Scale. Abstract: Emergent communication aims for a better understanding of human language evolution and building more efficient representations. We posit that reaching these goals will require scaling up, in contrast to a significant amount of literature that focuses on setting up small...
accept
Title: Test-Time Adaptation to Distribution Shifts by Confidence Maximization and Input Transformation. Abstract: Deep neural networks often exhibit poor performance on data that is unlikely under the train-time data distribution, for instance data affected by corruptions. Previous works demonstrate that test-time adap...
reject
Title: Surgical Prediction with Interpretable Latent Representation. Abstract: Given the risks and cost of surgeries, there has been significant interest in exploiting predictive models to improve perioperative care. However, due to the high dimensionality and noisiness of perioperative data, it is challenging to devel...
reject
Title: Towards Effective 2-bit Quantization: Pareto-optimal Bit Allocation for Deep CNNs Compression. Abstract: State-of-the-art quantization methods can compress deep neural networks down to 4 bits without losing accuracy. However, when it comes to 2 bits, the performance drop is still noticeable. One problem in these...
reject
Title: Lifelong Learning by Adjusting Priors. Abstract: In representational lifelong learning an agent aims to continually learn to solve novel tasks while updating its representation in light of previous tasks. Under the assumption that future tasks are related to previous tasks, representations should be learned in s...
reject
Title: Attacking Few-Shot Classifiers with Adversarial Support Sets. Abstract: Few-shot learning systems, especially those based on meta-learning, have recently made significant advances, and are now being considered for real world problems in healthcare, personalization, and science. In this paper, we examine the robu...
reject
Title: Expectigrad: Fast Stochastic Optimization with Robust Convergence Properties. Abstract: Many popular adaptive gradient methods such as Adam and RMSProp rely on an exponential moving average (EMA) to normalize their stepsizes. While the EMA makes these methods highly responsive to new gradient information, recent...
reject
Title: Higher-order Structure Prediction in Evolving Graph Simplicial Complexes. Abstract: Dynamic graphs are rife with higher-order interactions, such as co-authorship relationships and protein-protein interactions in biological networks, that naturally arise between more than two nodes at once. In spite of the ubiqui...
reject
Title: Model Inversion Networks for Model-Based Optimization. Abstract: In this work, we aim to solve data-driven optimization problems, where the goal is to find an input that maximizes an unknown score function given access to a dataset of input, score pairs. Inputs may lie on extremely thin manifolds in high-dimensi...
reject
Title: Enhancing Language Emergence through Empathy. Abstract: The emergence of language in multi-agent settings is a promising research direction to ground natural language in simulated agents. If AI would be able to understand the meaning of language through its using it, it could also transfer it to other situations...
reject
Title: Value Function Spaces: Skill-Centric State Abstractions for Long-Horizon Reasoning. Abstract: Reinforcement learning can train policies that effectively perform complex tasks. However for long-horizon tasks, the performance of these methods degrades with horizon, often necessitating reasoning over and chaining l...
accept
Title: On Invariance Penalties for Risk Minimization. Abstract: The Invariant Risk Minimization (IRM) principle was first proposed by Arjovsky et al. (2019) to address the domain generalization problem by leveraging data heterogeneity from differing experimental conditions. Specifically, IRM seeks to find a data repres...
reject
Title: BIGSAGE: unsupervised inductive representation learning of graph via bi-attended sampling and global-biased aggregating. Abstract: Different kinds of representation learning techniques on graph have shown significant effect in downstream machine learning tasks. Recently, in order to inductively learn representat...
reject
Title: SOAR: Second-Order Adversarial Regularization. Abstract: Adversarial training is a common approach to improving the robustness of deep neural networks against adversarial examples. In this work, we propose a novel regularization approach as an alternative. To derive the regularizer, we formulate the adversarial ...
reject
Title: SoftLoc: Robust Temporal Localization under Label Misalignment. Abstract: This work addresses the long-standing problem of robust event localization in the presence of temporally misaligned labels in the training data. We propose a novel versatile loss function that generalizes a number of training regimes fr...
reject
Title: Imitation Learning via Off-Policy Distribution Matching. Abstract: When performing imitation learning from expert demonstrations, distribution matching is a popular approach, in which one alternates between estimating distribution ratios and then using these ratios as rewards in a standard reinforcement learning...
accept
Title: EXPLAINABLE AI-BASED DYNAMIC FILTER PRUNING OF CONVOLUTIONAL NEURAL NETWORKS. Abstract: Filter pruning is one of the most effective ways to accelerate Convolutional Neural Networks (CNNs). Most of the existing works are focused on the static pruning of CNN filters. In dynamic pruning of CNN filters, existing wor...
reject
Title: Understanding Knowledge Integration in Language Models with Graph Convolutions. Abstract: Pretrained language models (LMs) are not very good at robustly capturing factual knowledge. This has led to the development of a number of knowledge integration (KI) methods which aim to incorporate external knowledge into ...
reject
Title: Bayesian Meta Sampling for Fast Uncertainty Adaptation. Abstract: Meta learning has been making impressive progress for fast model adaptation. However, limited work has been done on learning fast uncertainty adaption for Bayesian modeling. In this paper, we propose to achieve the goal by placing meta learning on...
accept
Title: Learning Universal User Representations via Self-Supervised Lifelong Behaviors Modeling. Abstract: Universal user representation is an important research topic in industry, and is widely used in diverse downstream user analysis tasks, such as user profiling and user preference prediction. With the rapid developm...
reject
Title: Bounds on Over-Parameterization for Guaranteed Existence of Descent Paths in Shallow ReLU Networks. Abstract: We study the landscape of squared loss in neural networks with one-hidden layer and ReLU activation functions. Let $m$ and $d$ be the widths of hidden and input layers, respectively. We show that there ...
accept
Title: Fast and Reliable Evaluation of Adversarial Robustness with Minimum-Margin Attack. Abstract: The AutoAttack (AA) has been the most reliable method to evaluate adversarial robustness when considerable computational resources are available. However, the high computational cost (e.g., 100 times more than that of th...
reject
Title: Imitation Learning of Robot Policies using Language, Vision and Motion. Abstract: In this work we propose a novel end-to-end imitation learning approach which combines natural language, vision, and motion information to produce an abstract representation of a task, which in turn can be used to synthesize specifi...
reject
Title: Clustering-friendly Representation Learning via Instance Discrimination and Feature Decorrelation. Abstract: Clustering is one of the most fundamental tasks in machine learning. Recently, deep clustering has become a major trend in clustering techniques. Representation learning often plays an important role in t...
accept
Title: Increasing the Coverage and Balance of Robustness Benchmarks by Using Non-Overlapping Corruptions. Abstract: Neural Networks are sensitive to various corruptions that usually occur in real-world applications such as low-lighting conditions, blurs, noises, etc. To estimate the robustness of neural networks to the...
reject
Title: Local Feature Swapping for Generalization in Reinforcement Learning. Abstract: Over the past few years, the acceleration of computing resources and research in Deep Learning has led to significant practical successes in a range of tasks, including in particular in computer vision. Building on these advances, rei...
accept
Title: SGD with Hardness Weighted Sampling for Distributionally Robust Deep Learning. Abstract: Distributionally Robust Optimization (DRO) has been proposed as an alternative to Empirical Risk Minimization (ERM) in order to account for potential biases in the training data distribution. However, its use in deep learnin...
reject
Title: Learning Hyperbolic Representations of Topological Features. Abstract: Learning task-specific representations of persistence diagrams is an important problem in topological data analysis and machine learning. However, current state of the art methods are restricted in terms of their expressivity as they are focu...
accept
Title: On the Uncomputability of Partition Functions in Energy-Based Sequence Models. Abstract: In this paper, we argue that energy-based sequence models backed by expressive parametric families can result in uncomputable and inapproximable partition functions. Among other things, this makes model selection--and theref...
accept
Title: $G^3$: Representation Learning and Generation for Geometric Graphs. Abstract: A geometric graph is a graph equipped with geometric information (i.e., node coordinates). A notable example is molecular graphs, where the combinatorial bonding is supplement with atomic coordinates that determine the three-dimensiona...
reject