| title | authors | abstract | url | detail_url | abs | OpenReview | Download PDF | tags |
|---|---|---|---|---|---|---|---|---|
Beyond Value-Function Gaps: Improved Instance-Dependent Regret Bounds for Episodic Reinforcement Learning | Christoph Dann, Teodor Vanislavov Marinov, Mehryar Mohri, Julian Zimmert | We provide improved gap-dependent regret bounds for reinforcement learning in finite episodic Markov decision processes. Compared to prior work, our bounds depend on alternative definitions of gaps. These definitions are based on the insight that, in order to achieve a favorable regret, an algorithm does not need to le... | https://papers.nips.cc/paper_files/paper/2021/hash/000c076c390a4c357313fca29e390ece-Abstract.html | https://papers.nips.cc/paper_files/paper/2021/hash/000c076c390a4c357313fca29e390ece-Abstract.html | NIPS 2021 | |||
Learning One Representation to Optimize All Rewards | Ahmed Touati, Yann Ollivier | We introduce the forward-backward (FB) representation of the dynamics of a reward-free Markov decision process. It provides explicit near-optimal policies for any reward specified a posteriori. During an unsupervised phase, we use reward-free interactions with the environment to learn two representations via off-the-sh... | https://papers.nips.cc/paper_files/paper/2021/hash/003dd617c12d444ff9c80f717c3fa982-Abstract.html | https://papers.nips.cc/paper_files/paper/2021/hash/003dd617c12d444ff9c80f717c3fa982-Abstract.html | NIPS 2021 | |||
Matrix factorisation and the interpretation of geodesic distance | Nick Whiteley, Annie Gray, Patrick Rubin-Delanchy | Given a graph or similarity matrix, we consider the problem of recovering a notion of true distance between the nodes, and so their true positions. We show that this can be accomplished in two steps: matrix factorisation, followed by nonlinear dimension reduction. This combination is effective because the point cloud o... | https://papers.nips.cc/paper_files/paper/2021/hash/007ff380ee5ac49ffc34442f5c2a2b86-Abstract.html | https://papers.nips.cc/paper_files/paper/2021/hash/007ff380ee5ac49ffc34442f5c2a2b86-Abstract.html | NIPS 2021 | |||
UniDoc: Unified Pretraining Framework for Document Understanding | Jiuxiang Gu, Jason Kuen, Vlad I Morariu, Handong Zhao, Rajiv Jain, Nikolaos Barmpalios, Ani Nenkova, Tong Sun | Document intelligence automates the extraction of information from documents and supports many business applications. Recent self-supervised learning methods on large-scale unlabeled document datasets have opened up promising directions towards reducing annotation efforts by training models with self-supervised objecti... | https://papers.nips.cc/paper_files/paper/2021/hash/0084ae4bc24c0795d1e6a4f58444d39b-Abstract.html | https://papers.nips.cc/paper_files/paper/2021/hash/0084ae4bc24c0795d1e6a4f58444d39b-Abstract.html | NIPS 2021 | |||
Finding Discriminative Filters for Specific Degradations in Blind Super-Resolution | Liangbin Xie, Xintao Wang, Chao Dong, Zhongang Qi, Ying Shan | Recent blind super-resolution (SR) methods typically consist of two branches, one for degradation prediction and the other for conditional restoration. However, our experiments show that a one-branch network can achieve comparable performance to the two-branch scheme. Then we wonder: how can one-branch networks automat... | https://papers.nips.cc/paper_files/paper/2021/hash/008bd5ad93b754d500338c253d9c1770-Abstract.html | https://papers.nips.cc/paper_files/paper/2021/hash/008bd5ad93b754d500338c253d9c1770-Abstract.html | NIPS 2021 | |||
Counterfactual Explanations Can Be Manipulated | Dylan Slack, Anna Hilgard, Himabindu Lakkaraju, Sameer Singh | Counterfactual explanations are emerging as an attractive option for providing recourse to individuals adversely impacted by algorithmic decisions. As they are deployed in critical applications (e.g. law enforcement, financial lending), it becomes important to ensure that we clearly understand the vulnerabilities of th... | https://papers.nips.cc/paper_files/paper/2021/hash/009c434cab57de48a31f6b669e7ba266-Abstract.html | https://papers.nips.cc/paper_files/paper/2021/hash/009c434cab57de48a31f6b669e7ba266-Abstract.html | NIPS 2021 | |||
From Canonical Correlation Analysis to Self-supervised Graph Neural Networks | Hengrui Zhang, Qitian Wu, Junchi Yan, David Wipf, Philip S Yu | We introduce a conceptually simple yet effective model for self-supervised representation learning with graph data. It follows the previous methods that generate two views of an input graph through data augmentation. However, unlike contrastive methods that focus on instance-level discrimination, we optimize an innovat... | https://papers.nips.cc/paper_files/paper/2021/hash/00ac8ed3b4327bdd4ebbebcb2ba10a00-Abstract.html | https://papers.nips.cc/paper_files/paper/2021/hash/00ac8ed3b4327bdd4ebbebcb2ba10a00-Abstract.html | NIPS 2021 | |||
BAST: Bayesian Additive Regression Spanning Trees for Complex Constrained Domain | Zhao Tang Luo, Huiyan Sang, Bani Mallick | Nonparametric regression on complex domains has been a challenging task as most existing methods, such as ensemble models based on binary decision trees, are not designed to account for intrinsic geometries and domain boundaries. This article proposes a Bayesian additive regression spanning trees (BAST) model for nonpa... | https://papers.nips.cc/paper_files/paper/2021/hash/00b76fddeaaa7d8c2c43d504b2babd8a-Abstract.html | https://papers.nips.cc/paper_files/paper/2021/hash/00b76fddeaaa7d8c2c43d504b2babd8a-Abstract.html | NIPS 2021 | |||
Hyperbolic Busemann Learning with Ideal Prototypes | Mina Ghadimi Atigh, Martin Keller-Ressel, Pascal Mettes | Hyperbolic space has become a popular choice of manifold for representation learning of various datatypes from tree-like structures and text to graphs. Building on the success of deep learning with prototypes in Euclidean and hyperspherical spaces, a few recent works have proposed hyperbolic prototypes for classificati... | https://papers.nips.cc/paper_files/paper/2021/hash/01259a0cb2431834302abe2df60a1327-Abstract.html | https://papers.nips.cc/paper_files/paper/2021/hash/01259a0cb2431834302abe2df60a1327-Abstract.html | NIPS 2021 | |||
Backward-Compatible Prediction Updates: A Probabilistic Approach | Frederik Träuble, Julius von Kügelgen, Matthäus Kleindessner, Francesco Locatello, Bernhard Schölkopf, Peter Gehler | When machine learning systems meet real world applications, accuracy is only one of several requirements. In this paper, we assay a complementary perspective originating from the increasing availability of pre-trained and regularly improving state-of-the-art models. While new improved models develop at a fast pace, dow... | https://papers.nips.cc/paper_files/paper/2021/hash/012d9fe15b2493f21902cd55603382ec-Abstract.html | https://papers.nips.cc/paper_files/paper/2021/hash/012d9fe15b2493f21902cd55603382ec-Abstract.html | NIPS 2021 | |||
Truncated Marginal Neural Ratio Estimation | Benjamin K Miller, Alex Cole, Patrick Forré, Gilles Louppe, Christoph Weniger | Parametric stochastic simulators are ubiquitous in science, often featuring high-dimensional input parameters and/or an intractable likelihood. Performing Bayesian parameter inference in this context can be challenging. We present a neural simulation-based inference algorithm which simultaneously offers simulation effi... | https://papers.nips.cc/paper_files/paper/2021/hash/01632f7b7a127233fa1188bd6c2e42e1-Abstract.html | https://papers.nips.cc/paper_files/paper/2021/hash/01632f7b7a127233fa1188bd6c2e42e1-Abstract.html | NIPS 2021 | |||
ReAct: Out-of-distribution Detection With Rectified Activations | Yiyou Sun, Chuan Guo, Yixuan Li | Out-of-distribution (OOD) detection has received much attention lately due to its practical importance in enhancing the safe deployment of neural networks. One of the primary challenges is that models often produce highly confident predictions on OOD data, which undermines the driving principle in OOD detection that th... | https://papers.nips.cc/paper_files/paper/2021/hash/01894d6f048493d2cacde3c579c315a3-Abstract.html | https://papers.nips.cc/paper_files/paper/2021/hash/01894d6f048493d2cacde3c579c315a3-Abstract.html | NIPS 2021 | |||
Non-local Latent Relation Distillation for Self-Adaptive 3D Human Pose Estimation | Jogendra Nath Kundu, Siddharth Seth, Anirudh Jamkhandi, Pradyumna YM, Varun Jampani, Anirban Chakraborty, Venkatesh Babu R | Available 3D human pose estimation approaches leverage different forms of strong (2D/3D pose) or weak (multi-view or depth) paired supervision. Barring synthetic or in-studio domains, acquiring such supervision for each new target environment is highly inconvenient. To this end, we cast 3D pose learning as a self-super... | https://papers.nips.cc/paper_files/paper/2021/hash/018b59ce1fd616d874afad0f44ba338d-Abstract.html | https://papers.nips.cc/paper_files/paper/2021/hash/018b59ce1fd616d874afad0f44ba338d-Abstract.html | NIPS 2021 | |||
Fast Training of Neural Lumigraph Representations using Meta Learning | Alexander Bergman, Petr Kellnhofer, Gordon Wetzstein | Novel view synthesis is a long-standing problem in machine learning and computer vision. Significant progress has recently been made in developing neural scene representations and rendering techniques that synthesize photorealistic images from arbitrary views. These representations, however, are extremely slow to train... | https://papers.nips.cc/paper_files/paper/2021/hash/01931a6925d3de09e5f87419d9d55055-Abstract.html | https://papers.nips.cc/paper_files/paper/2021/hash/01931a6925d3de09e5f87419d9d55055-Abstract.html | NIPS 2021 | |||
Analytical Study of Momentum-Based Acceleration Methods in Paradigmatic High-Dimensional Non-Convex Problems | Stefano Sarao Mannelli, Pierfrancesco Urbani | The optimization step in many machine learning problems rarely relies on vanilla gradient descent but it is common practice to use momentum-based accelerated methods. Despite these algorithms being widely applied to arbitrary loss functions, their behaviour in generically non-convex, high dimensional landscapes is poor... | https://papers.nips.cc/paper_files/paper/2021/hash/019f8b946a256d9357eadc5ace2c8678-Abstract.html | https://papers.nips.cc/paper_files/paper/2021/hash/019f8b946a256d9357eadc5ace2c8678-Abstract.html | NIPS 2021 | |||
Multimodal Few-Shot Learning with Frozen Language Models | Maria Tsimpoukelli, Jacob L Menick, Serkan Cabi, S. M. Ali Eslami, Oriol Vinyals, Felix Hill | When trained at sufficient scale, auto-regressive language models exhibit the notable ability to learn a new language task after being prompted with just a few examples. Here, we present a simple, yet effective, approach for transferring this few-shot learning ability to a multimodal setting (vision and language). Usin... | https://papers.nips.cc/paper_files/paper/2021/hash/01b7575c38dac42f3cfb7d500438b875-Abstract.html | https://papers.nips.cc/paper_files/paper/2021/hash/01b7575c38dac42f3cfb7d500438b875-Abstract.html | NIPS 2021 | |||
Approximating the Permanent with Deep Rejection Sampling | Juha Harviainen, Antti Röyskö, Mikko Koivisto | We present a randomized approximation scheme for the permanent of a matrix with nonnegative entries. Our scheme extends a recursive rejection sampling method of Huber and Law (SODA 2008) by replacing the permanent upper bound with a linear combination of the subproblem bounds at a moderately large depth of the recursio... | https://papers.nips.cc/paper_files/paper/2021/hash/01d8bae291b1e4724443375634ccfa0e-Abstract.html | https://papers.nips.cc/paper_files/paper/2021/hash/01d8bae291b1e4724443375634ccfa0e-Abstract.html | NIPS 2021 | |||
Revisiting Model Stitching to Compare Neural Representations | Yamini Bansal, Preetum Nakkiran, Boaz Barak | We revisit and extend model stitching (Lenc & Vedaldi 2015) as a methodology to study the internal representations of neural networks. Given two trained and frozen models $A$ and $B$, we consider a "stitched model" formed by connecting the bottom-layers of $A$ to the top-layers of $B$, with a simple trainable layer bet... | https://papers.nips.cc/paper_files/paper/2021/hash/01ded4259d101feb739b06c399e9cd9c-Abstract.html | https://papers.nips.cc/paper_files/paper/2021/hash/01ded4259d101feb739b06c399e9cd9c-Abstract.html | NIPS 2021 | |||
AugMax: Adversarial Composition of Random Augmentations for Robust Training | Haotao Wang, Chaowei Xiao, Jean Kossaifi, Zhiding Yu, Anima Anandkumar, Zhangyang Wang | Data augmentation is a simple yet effective way to improve the robustness of deep neural networks (DNNs). Diversity and hardness are two complementary dimensions of data augmentation to achieve robustness. For example, AugMix explores random compositions of a diverse set of augmentations to enhance broader coverage, wh... | https://papers.nips.cc/paper_files/paper/2021/hash/01e9565cecc4e989123f9620c1d09c09-Abstract.html | https://papers.nips.cc/paper_files/paper/2021/hash/01e9565cecc4e989123f9620c1d09c09-Abstract.html | NIPS 2021 | |||
Habitat 2.0: Training Home Assistants to Rearrange their Habitat | Andrew Szot, Alexander Clegg, Eric Undersander, Erik Wijmans, Yili Zhao, John Turner, Noah Maestre, Mustafa Mukadam, Devendra Singh Chaplot, Oleksandr Maksymets, Aaron Gokaslan, Vladimír Vondruš, Sameer Dharur, Franziska Meier, Wojciech Galuba, Angel Chang, Zsolt Kira, Vladlen Koltun, Jitendra Malik, Manolis Savva, Dhr... | We introduce Habitat 2.0 (H2.0), a simulation platform for training virtual robots in interactive 3D environments and complex physics-enabled scenarios. We make comprehensive contributions to all levels of the embodied AI stack – data, simulation, and benchmark tasks. Specifically, we present: (i) ReplicaCAD: an artist... | https://papers.nips.cc/paper_files/paper/2021/hash/021bbc7ee20b71134d53e20206bd6feb-Abstract.html | https://papers.nips.cc/paper_files/paper/2021/hash/021bbc7ee20b71134d53e20206bd6feb-Abstract.html | NIPS 2021 | |||
Time Discretization-Invariant Safe Action Repetition for Policy Gradient Methods | Seohong Park, Jaekyeom Kim, Gunhee Kim | In reinforcement learning, continuous time is often discretized by a time scale $\delta$, to which the resulting performance is known to be highly sensitive. In this work, we seek to find a $\delta$-invariant algorithm for policy gradient (PG) methods, which performs well regardless of the value of $\delta$. We first i... | https://papers.nips.cc/paper_files/paper/2021/hash/024677efb8e4aee2eaeef17b54695bbe-Abstract.html | https://papers.nips.cc/paper_files/paper/2021/hash/024677efb8e4aee2eaeef17b54695bbe-Abstract.html | NIPS 2021 | |||
Meta-Learning Reliable Priors in the Function Space | Jonas Rothfuss, Dominique Heyn, jinfan Chen, Andreas Krause | Meta-Learning promises to enable more data-efficient inference by harnessing previous experience from related learning tasks. While existing meta-learning methods help us to improve the accuracy of our predictions in face of data scarcity, they fail to supply reliable uncertainty estimates, often being grossly overconf... | https://papers.nips.cc/paper_files/paper/2021/hash/024d2d699e6c1a82c9ba986386f4d824-Abstract.html | https://papers.nips.cc/paper_files/paper/2021/hash/024d2d699e6c1a82c9ba986386f4d824-Abstract.html | NIPS 2021 | |||
VoiceMixer: Adversarial Voice Style Mixup | Sang-Hoon Lee, Ji-Hoon Kim, Hyunseung Chung, Seong-Whan Lee | Although recent advances in voice conversion have shown significant improvement, there still remains a gap between the converted voice and target voice. A key factor that maintains this gap is the insufficient decomposition of content and voice style from the source speech. This insufficiency leads to the converted spe... | https://papers.nips.cc/paper_files/paper/2021/hash/0266e33d3f546cb5436a10798e657d97-Abstract.html | https://papers.nips.cc/paper_files/paper/2021/hash/0266e33d3f546cb5436a10798e657d97-Abstract.html | NIPS 2021 | |||
Predicting What You Already Know Helps: Provable Self-Supervised Learning | Jason D. Lee, Qi Lei, Nikunj Saunshi, JIACHENG ZHUO | Self-supervised representation learning solves auxiliary prediction tasks (known as pretext tasks), that do not require labeled data, to learn semantic representations. These pretext tasks are created solely using the input features, such as predicting a missing image patch, recovering the color channels of an image fr... | https://papers.nips.cc/paper_files/paper/2021/hash/02e656adee09f8394b402d9958389b7d-Abstract.html | https://papers.nips.cc/paper_files/paper/2021/hash/02e656adee09f8394b402d9958389b7d-Abstract.html | NIPS 2021 | |||
Oracle Complexity in Nonsmooth Nonconvex Optimization | Guy Kornowski, Ohad Shamir | It is well-known that given a smooth, bounded-from-below, and possibly nonconvex function, standard gradient-based methods can find $\epsilon$-stationary points (with gradient norm less than $\epsilon$) in $\mathcal{O}(1/\epsilon^2)$ iterations. However, many important nonconvex optimization problems, such as those ass... | https://papers.nips.cc/paper_files/paper/2021/hash/030e65da2b1c944090548d36b244b28d-Abstract.html | https://papers.nips.cc/paper_files/paper/2021/hash/030e65da2b1c944090548d36b244b28d-Abstract.html | NIPS 2021 | |||
CentripetalText: An Efficient Text Instance Representation for Scene Text Detection | Tao Sheng, Jie Chen, Zhouhui Lian | Scene text detection remains a grand challenge due to the variation in text curvatures, orientations, and aspect ratios. One of the hardest problems in this task is how to represent text instances of arbitrary shapes. Although many methods have been proposed to model irregular texts in a flexible manner, most of them l... | https://papers.nips.cc/paper_files/paper/2021/hash/03227b950778ab86436ff79fe975b596-Abstract.html | https://papers.nips.cc/paper_files/paper/2021/hash/03227b950778ab86436ff79fe975b596-Abstract.html | NIPS 2021 | |||
Learning to Select Exogenous Events for Marked Temporal Point Process | Ping Zhang, Rishabh Iyer, Ashish Tendulkar, Gaurav Aggarwal, Abir De | Marked temporal point processes (MTPPs) have emerged as a powerful modeling tool for a wide variety of applications which are characterized using discrete events localized in continuous time. In this context, the events are of two types: endogenous events which occur due to the influence of the previous events and exogenous... | https://papers.nips.cc/paper_files/paper/2021/hash/032abcd424b4312e7087f434ef1c0094-Abstract.html | https://papers.nips.cc/paper_files/paper/2021/hash/032abcd424b4312e7087f434ef1c0094-Abstract.html | NIPS 2021 | |||
DRIVE: One-bit Distributed Mean Estimation | Shay Vargaftik, Ran Ben-Basat, Amit Portnoy, Gal Mendelson, Yaniv Ben-Itzhak, Michael Mitzenmacher | We consider the problem where $n$ clients transmit $d$-dimensional real-valued vectors using $d(1+o(1))$ bits each, in a manner that allows the receiver to approximately reconstruct their mean. Such compression problems naturally arise in distributed and federated learning. We provide novel mathematical results and der... | https://papers.nips.cc/paper_files/paper/2021/hash/0397758f8990c1b41b81b43ac389ab9f-Abstract.html | https://papers.nips.cc/paper_files/paper/2021/hash/0397758f8990c1b41b81b43ac389ab9f-Abstract.html | NIPS 2021 | |||
Learning Space Partitions for Path Planning | Kevin Yang, Tianjun Zhang, Chris Cummins, Brandon Cui, Benoit Steiner, Linnan Wang, Joseph E. Gonzalez, Dan Klein, Yuandong Tian | Path planning, the problem of efficiently discovering high-reward trajectories, often requires optimizing a high-dimensional and multimodal reward function. Popular approaches like CEM and CMA-ES greedily focus on promising regions of the search space and may get trapped in local maxima. DOO and VOOT balance exploratio... | https://papers.nips.cc/paper_files/paper/2021/hash/03a3655fff3e9bdea48de9f49e938e32-Abstract.html | https://papers.nips.cc/paper_files/paper/2021/hash/03a3655fff3e9bdea48de9f49e938e32-Abstract.html | NIPS 2021 | |||
Progressive Feature Interaction Search for Deep Sparse Network | Chen Gao, Yinfeng Li, Quanming Yao, Depeng Jin, Yong Li | Deep sparse networks (DSNs), of which the crux is exploring the high-order feature interactions, have become the state-of-the-art on the prediction task with high-sparsity features. However, these models suffer from low computation efficiency, including large model size and slow model inference, which largely limits th... | https://papers.nips.cc/paper_files/paper/2021/hash/03b2ceb73723f8b53cd533e4fba898ee-Abstract.html | https://papers.nips.cc/paper_files/paper/2021/hash/03b2ceb73723f8b53cd533e4fba898ee-Abstract.html | NIPS 2021 | |||
Local Explanation of Dialogue Response Generation | Yi-Lin Tuan, Connor Pryor, Wenhu Chen, Lise Getoor, William Yang Wang | Compared to the interpretation of classification models, the explanation of sequence generation models is an equally important problem; however, it has seen little attention. In this work, we study model-agnostic explanations of a representative text generation task -- dialogue response generation. Dialog response gen... | https://papers.nips.cc/paper_files/paper/2021/hash/03b92cd507ff5870df0db7f074728830-Abstract.html | https://papers.nips.cc/paper_files/paper/2021/hash/03b92cd507ff5870df0db7f074728830-Abstract.html | NIPS 2021 | |||
Scalable Inference in SDEs by Direct Matching of the Fokker–Planck–Kolmogorov Equation | Arno Solin, Ella Tamir, Prakhar Verma | Simulation-based techniques such as variants of stochastic Runge–Kutta are the de facto approach for inference with stochastic differential equations (SDEs) in machine learning. These methods are general-purpose and used with parametric and non-parametric models, and neural SDEs. Stochastic Runge–Kutta relies on the us... | https://papers.nips.cc/paper_files/paper/2021/hash/03e4d3f831100d4355663f3d425d716b-Abstract.html | https://papers.nips.cc/paper_files/paper/2021/hash/03e4d3f831100d4355663f3d425d716b-Abstract.html | NIPS 2021 | |||
The Complexity of Bayesian Network Learning: Revisiting the Superstructure | Robert Ganian, Viktoriia Korchemna | We investigate the parameterized complexity of Bayesian Network Structure Learning (BNSL), a classical problem that has received significant attention in empirical but also purely theoretical studies. We follow up on previous works that have analyzed the complexity of BNSL w.r.t. the so-called superstructure of the inp... | https://papers.nips.cc/paper_files/paper/2021/hash/040a99f23e8960763e680041c601acab-Abstract.html | https://papers.nips.cc/paper_files/paper/2021/hash/040a99f23e8960763e680041c601acab-Abstract.html | NIPS 2021 | |||
Fast Tucker Rank Reduction for Non-Negative Tensors Using Mean-Field Approximation | Kazu Ghalamkari, Mahito Sugiyama | We present an efficient low-rank approximation algorithm for non-negative tensors. The algorithm is derived from our two findings: First, we show that rank-1 approximation for tensors can be viewed as a mean-field approximation by treating each tensor as a probability distribution. Second, we theoretically provide a s... | https://papers.nips.cc/paper_files/paper/2021/hash/040ca38cefb1d9226d79c05dd25469cb-Abstract.html | https://papers.nips.cc/paper_files/paper/2021/hash/040ca38cefb1d9226d79c05dd25469cb-Abstract.html | NIPS 2021 | |||
Learning Stochastic Majority Votes by Minimizing a PAC-Bayes Generalization Bound | Valentina Zantedeschi, Paul Viallard, Emilie Morvant, Rémi Emonet, Amaury Habrard, Pascal Germain, Benjamin Guedj | We investigate a stochastic counterpart of majority votes over finite ensembles of classifiers, and study its generalization properties. While our approach holds for arbitrary distributions, we instantiate it with Dirichlet distributions: this allows for a closed-form and differentiable expression for the expected risk... | https://papers.nips.cc/paper_files/paper/2021/hash/0415740eaa4d9decbc8da001d3fd805f-Abstract.html | https://papers.nips.cc/paper_files/paper/2021/hash/0415740eaa4d9decbc8da001d3fd805f-Abstract.html | NIPS 2021 | |||
Numerical influence of ReLU’(0) on backpropagation | David Bertoin, Jérôme Bolte, Sébastien Gerchinovitz, Edouard Pauwels | In theory, the choice of ReLU'(0) in [0, 1] for a neural network has a negligible influence both on backpropagation and training. Yet, in the real world, 32-bit default precision combined with the size of deep learning problems makes it a hyperparameter of training methods. We investigate the importance of the value of... | https://papers.nips.cc/paper_files/paper/2021/hash/043ab21fc5a1607b381ac3896176dac6-Abstract.html | https://papers.nips.cc/paper_files/paper/2021/hash/043ab21fc5a1607b381ac3896176dac6-Abstract.html | NIPS 2021 | |||
A Contrastive Learning Approach for Training Variational Autoencoder Priors | Jyoti Aneja, Alex Schwing, Jan Kautz, Arash Vahdat | Variational autoencoders (VAEs) are one of the powerful likelihood-based generative models with applications in many domains. However, they struggle to generate high-quality images, especially when samples are obtained from the prior without any tempering. One explanation for VAEs' poor generative quality is the prior ... | https://papers.nips.cc/paper_files/paper/2021/hash/0496604c1d80f66fbeb963c12e570a26-Abstract.html | https://papers.nips.cc/paper_files/paper/2021/hash/0496604c1d80f66fbeb963c12e570a26-Abstract.html | NIPS 2021 | |||
What training reveals about neural network complexity | Andreas Loukas, Marinos Poiitis, Stefanie Jegelka | This work explores the Benevolent Training Hypothesis (BTH) which argues that the complexity of the function a deep neural network (NN) is learning can be deduced by its training dynamics. Our analysis provides evidence for BTH by relating the NN's Lipschitz constant at different regions of the input space with the beh... | https://papers.nips.cc/paper_files/paper/2021/hash/04a1bf2d968f1ce381cf1f9184a807a9-Abstract.html | https://papers.nips.cc/paper_files/paper/2021/hash/04a1bf2d968f1ce381cf1f9184a807a9-Abstract.html | NIPS 2021 | |||
Class-agnostic Reconstruction of Dynamic Objects from Videos | Zhongzheng Ren, Xiaoming Zhao, Alex Schwing | We introduce REDO, a class-agnostic framework to REconstruct the Dynamic Objects from RGBD or calibrated videos. Compared to prior work, our problem setting is more realistic yet more challenging for three reasons: 1) due to occlusion or camera settings an object of interest may never be entirely visible, but we aim to... | https://papers.nips.cc/paper_files/paper/2021/hash/04da4aea8e38ac933ab23cb2389dddef-Abstract.html | https://papers.nips.cc/paper_files/paper/2021/hash/04da4aea8e38ac933ab23cb2389dddef-Abstract.html | NIPS 2021 | |||
Unique sparse decomposition of low rank matrices | Dian Jin, Xin Bing, Yuqian Zhang | The problem of finding the unique low dimensional decomposition of a given matrix has been a fundamental and recurrent problem in many areas. In this paper, we study the problem of seeking a unique decomposition of a low-rank matrix $Y\in \mathbb{R}^{p\times n}$ that admits a sparse representation. Specifically, we co... | https://papers.nips.cc/paper_files/paper/2021/hash/051928341be67dcba03f0e04104d9047-Abstract.html | https://papers.nips.cc/paper_files/paper/2021/hash/051928341be67dcba03f0e04104d9047-Abstract.html | NIPS 2021 | |||
Neighborhood Reconstructing Autoencoders | Yonghyeon LEE, Hyeokjun Kwon, Frank Park | Vanilla autoencoders often produce manifolds that overfit to noisy training data, or have the wrong local connectivity and geometry. Autoencoder regularization techniques, e.g., the denoising autoencoder, have had some success in reducing overfitting, whereas recent graph-based methods that exploit local connectivity i... | https://papers.nips.cc/paper_files/paper/2021/hash/05311655a15b75fab86956663e1819cd-Abstract.html | https://papers.nips.cc/paper_files/paper/2021/hash/05311655a15b75fab86956663e1819cd-Abstract.html | NIPS 2021 | |||
TopicNet: Semantic Graph-Guided Topic Discovery | Zhibin Duan, Yishi Xu, Bo Chen, Dongsheng Wang, Chaojie Wang, Mingyuan Zhou | Existing deep hierarchical topic models are able to extract semantically meaningful topics from a text corpus in an unsupervised manner and automatically organize them into a topic hierarchy. However, it is unclear how to incorporate prior belief such as knowledge graph to guide the learning of the topic hierarchy. T... | https://papers.nips.cc/paper_files/paper/2021/hash/0537fb40a68c18da59a35c2bfe1ca554-Abstract.html | https://papers.nips.cc/paper_files/paper/2021/hash/0537fb40a68c18da59a35c2bfe1ca554-Abstract.html | NIPS 2021 | |||
(Almost) Free Incentivized Exploration from Decentralized Learning Agents | Chengshuai Shi, Haifeng Xu, Wei Xiong, Cong Shen | Incentivized exploration in multi-armed bandits (MAB) has witnessed increasing interest and much progress in recent years, where a principal offers bonuses to agents to do explorations on her behalf. However, almost all existing studies are confined to temporary myopic agents. In this work, we break this barrier and... | https://papers.nips.cc/paper_files/paper/2021/hash/054ab897023645cd7ad69525c46992a0-Abstract.html | https://papers.nips.cc/paper_files/paper/2021/hash/054ab897023645cd7ad69525c46992a0-Abstract.html | NIPS 2021 | |||
Combining Recurrent, Convolutional, and Continuous-time Models with Linear State Space Layers | Albert Gu, Isys Johnson, Karan Goel, Khaled Saab, Tri Dao, Atri Rudra, Christopher Ré | Recurrent neural networks (RNNs), temporal convolutions, and neural differential equations (NDEs) are popular families of deep learning models for time-series data, each with unique strengths and tradeoffs in modeling power and computational efficiency. We introduce a simple sequence model inspired by control systems ... | https://papers.nips.cc/paper_files/paper/2021/hash/05546b0e38ab9175cd905eebcc6ebb76-Abstract.html | https://papers.nips.cc/paper_files/paper/2021/hash/05546b0e38ab9175cd905eebcc6ebb76-Abstract.html | NIPS 2021 | |||
Revisiting Hilbert-Schmidt Information Bottleneck for Adversarial Robustness | Zifeng Wang, Tong Jian, Aria Masoomi, Stratis Ioannidis, Jennifer Dy | We investigate the HSIC (Hilbert-Schmidt independence criterion) bottleneck as a regularizer for learning an adversarially robust deep neural network classifier. In addition to the usual cross-entropy loss, we add regularization terms for every intermediate layer to ensure that the latent representations retain useful ... | https://papers.nips.cc/paper_files/paper/2021/hash/055e31fa43e652cb4ab6c0ee845c8d36-Abstract.html | https://papers.nips.cc/paper_files/paper/2021/hash/055e31fa43e652cb4ab6c0ee845c8d36-Abstract.html | NIPS 2021 | |||
T-LoHo: A Bayesian Regularization Model for Structured Sparsity and Smoothness on Graphs | Changwoo Lee, Zhao Tang Luo, Huiyan Sang | Graphs have been commonly used to represent complex data structures. In models dealing with graph-structured data, multivariate parameters may not only exhibit sparse patterns but have structured sparsity and smoothness in the sense that both zero and non-zero parameters tend to cluster together. We propose a new prior... | https://papers.nips.cc/paper_files/paper/2021/hash/05a70454516ecd9194c293b0e415777f-Abstract.html | https://papers.nips.cc/paper_files/paper/2021/hash/05a70454516ecd9194c293b0e415777f-Abstract.html | NIPS 2021 | |||
The Utility of Explainable AI in Ad Hoc Human-Machine Teaming | Rohan Paleja, Muyleng Ghuy, Nadun Ranawaka Arachchige, Reed Jensen, Matthew Gombolay | Recent advances in machine learning have led to growing interest in Explainable AI (xAI) to enable humans to gain insight into the decision-making of machine learning models. Despite this recent interest, the utility of xAI techniques has not yet been characterized in human-machine teaming. Importantly, xAI offers the ... | https://papers.nips.cc/paper_files/paper/2021/hash/05d74c48b5b30514d8e9bd60320fc8f6-Abstract.html | https://papers.nips.cc/paper_files/paper/2021/hash/05d74c48b5b30514d8e9bd60320fc8f6-Abstract.html | NIPS 2021 | |||
Subgoal Search For Complex Reasoning Tasks | Konrad Czechowski, Tomasz Odrzygóźdź, Marek Zbysiński, Michał Zawalski, Krzysztof Olejnik, Yuhuai Wu, Łukasz Kuciński, Piotr Miłoś | Humans excel in solving complex reasoning tasks through a mental process of moving from one idea to a related one. Inspired by this, we propose the Subgoal Search (kSubS) method. Its key component is a learned subgoal generator that produces a diversity of subgoals that are both achievable and closer to the solution. Using... | https://papers.nips.cc/paper_files/paper/2021/hash/05d8cccb5f47e5072f0a05b5f514941a-Abstract.html | https://papers.nips.cc/paper_files/paper/2021/hash/05d8cccb5f47e5072f0a05b5f514941a-Abstract.html | NIPS 2021 |||
MCMC Variational Inference via Uncorrected Hamiltonian Annealing | Tomas Geffner, Justin Domke | Given an unnormalized target distribution we want to obtain approximate samples from it and a tight lower bound on its (log) normalization constant log Z. Annealed Importance Sampling (AIS) with Hamiltonian MCMC is a powerful method that can be used to do this. Its main drawback is that it uses non-differentiable trans... | https://papers.nips.cc/paper_files/paper/2021/hash/05f971b5ec196b8c65b75d2ef8267331-Abstract.html | https://papers.nips.cc/paper_files/paper/2021/hash/05f971b5ec196b8c65b75d2ef8267331-Abstract.html | NIPS 2021 | |||
Landmark-RxR: Solving Vision-and-Language Navigation with Fine-Grained Alignment Supervision | Keji He, Yan Huang, Qi Wu, Jianhua Yang, Dong An, Shuanglin Sima, Liang Wang | In Vision-and-Language Navigation (VLN) task, an agent is asked to navigate inside 3D indoor environments following given instructions. Cross-modal alignment is one of the most critical challenges in VLN because the predicted trajectory needs to match the given instruction accurately. In this paper, we address the cros... | https://papers.nips.cc/paper_files/paper/2021/hash/0602940f23884f782058efac46f64b0f-Abstract.html | https://papers.nips.cc/paper_files/paper/2021/hash/0602940f23884f782058efac46f64b0f-Abstract.html | NIPS 2021 | |||
A Winning Hand: Compressing Deep Networks Can Improve Out-of-Distribution Robustness | James Diffenderfer, Brian Bartoldson, Shreya Chaganti, Jize Zhang, Bhavya Kailkhura | Successful adoption of deep learning (DL) in the wild requires models to be: (1) compact, (2) accurate, and (3) robust to distributional shifts. Unfortunately, efforts towards simultaneously meeting these requirements have mostly been unsuccessful. This raises an important question: Is the inability to create Compact, ... | https://papers.nips.cc/paper_files/paper/2021/hash/0607f4c705595b911a4f3e7a127b44e0-Abstract.html | https://papers.nips.cc/paper_files/paper/2021/hash/0607f4c705595b911a4f3e7a127b44e0-Abstract.html | NIPS 2021 | |||
On the Importance of Gradients for Detecting Distributional Shifts in the Wild | Rui Huang, Andrew Geng, Yixuan Li | Detecting out-of-distribution (OOD) data has become a critical component in ensuring the safe deployment of machine learning models in the real world. Existing OOD detection approaches primarily rely on the output or feature space for deriving OOD scores, while largely overlooking information from the gradient space. I... | https://papers.nips.cc/paper_files/paper/2021/hash/063e26c670d07bb7c4d30e6fc69fe056-Abstract.html | https://papers.nips.cc/paper_files/paper/2021/hash/063e26c670d07bb7c4d30e6fc69fe056-Abstract.html | NIPS 2021 | |||
Iterative Methods for Private Synthetic Data: Unifying Framework and New Methods | Terrance Liu, Giuseppe Vietri, Steven Z. Wu | We study private synthetic data generation for query release, where the goal is to construct a sanitized version of a sensitive dataset, subject to differential privacy, that approximately preserves the answers to a large collection of statistical queries. We first present an algorithmic framework that unifies a long l... | https://papers.nips.cc/paper_files/paper/2021/hash/0678c572b0d5597d2d4a6b5bd135754c-Abstract.html | https://papers.nips.cc/paper_files/paper/2021/hash/0678c572b0d5597d2d4a6b5bd135754c-Abstract.html | NIPS 2021 | |||
Understanding End-to-End Model-Based Reinforcement Learning Methods as Implicit Parameterization | Clement Gehring, Kenji Kawaguchi, Jiaoyang Huang, Leslie Kaelbling | Estimating the per-state expected cumulative rewards, however the experience is obtained, is a critical aspect of reinforcement learning approaches, but standard deep neural-network function-approximation methods are often inefficient in this setting. An alternative approach, exemplified by value iteration networks, is ... | https://papers.nips.cc/paper_files/paper/2021/hash/067a26d87265ea39030f5bd82408ce7c-Abstract.html | https://papers.nips.cc/paper_files/paper/2021/hash/067a26d87265ea39030f5bd82408ce7c-Abstract.html | NIPS 2021 |||
Mirror Langevin Monte Carlo: the Case Under Isoperimetry | Qijia Jiang | Motivated by the connection between sampling and optimization, we study a mirror descent analogue of Langevin dynamics and analyze three different discretization schemes, giving nonasymptotic convergence rate under functional inequalities such as Log-Sobolev in the corresponding metric. Compared to the Euclidean settin... | https://papers.nips.cc/paper_files/paper/2021/hash/069090145d54bf4aa3894133f7e89873-Abstract.html | https://papers.nips.cc/paper_files/paper/2021/hash/069090145d54bf4aa3894133f7e89873-Abstract.html | NIPS 2021 | |||
Do Different Tracking Tasks Require Different Appearance Models? | Zhongdao Wang, Hengshuang Zhao, Ya-Li Li, Shengjin Wang, Philip Torr, Luca Bertinetto | Tracking objects of interest in a video is one of the most popular and widely applicable problems in computer vision. However, over the years, a Cambrian explosion of use cases and benchmarks has fragmented the problem into a multitude of different experimental setups. As a consequence, the literature has fragmented too,... | https://papers.nips.cc/paper_files/paper/2021/hash/06997f04a7db92466a2baa6ebc8b872d-Abstract.html | https://papers.nips.cc/paper_files/paper/2021/hash/06997f04a7db92466a2baa6ebc8b872d-Abstract.html | NIPS 2021 |||
Towards robust vision by multi-task learning on monkey visual cortex | Shahd Safarani, Arne Nix, Konstantin Willeke, Santiago Cadena, Kelli Restivo, George Denfield, Andreas Tolias, Fabian Sinz | Deep neural networks set the state-of-the-art across many tasks in computer vision, but their generalization ability to simple image distortions is surprisingly fragile. In contrast, the mammalian visual system is robust to a wide range of perturbations. Recent work suggests that this generalization ability can be expl... | https://papers.nips.cc/paper_files/paper/2021/hash/06a9d51e04213572ef0720dd27a84792-Abstract.html | https://papers.nips.cc/paper_files/paper/2021/hash/06a9d51e04213572ef0720dd27a84792-Abstract.html | NIPS 2021 | |||
Arbitrary Conditional Distributions with Energy | Ryan Strauss, Junier B. Oliva | Modeling distributions of covariates, or density estimation, is a core challenge in unsupervised learning. However, the majority of work only considers the joint distribution, which has limited relevance to practical situations. A more general and useful problem is arbitrary conditional density estimation, which aims t... | https://papers.nips.cc/paper_files/paper/2021/hash/06c284d3f757b15c02f47f3ff06dc275-Abstract.html | https://papers.nips.cc/paper_files/paper/2021/hash/06c284d3f757b15c02f47f3ff06dc275-Abstract.html | NIPS 2021 | |||
Learning Domain Invariant Representations in Goal-conditioned Block MDPs | Beining Han, Chongyi Zheng, Harris Chan, Keiran Paster, Michael Zhang, Jimmy Ba | Deep Reinforcement Learning (RL) is successful in solving many complex Markov Decision Process (MDP) problems. However, agents often face unanticipated environmental changes after deployment in the real world. These changes are often spurious and unrelated to the underlying problem, such as background shifts for vis... | https://papers.nips.cc/paper_files/paper/2021/hash/06d172404821f7d01060cc9629171b2e-Abstract.html | https://papers.nips.cc/paper_files/paper/2021/hash/06d172404821f7d01060cc9629171b2e-Abstract.html | NIPS 2021 |||
Near-Optimal Multi-Perturbation Experimental Design for Causal Structure Learning | Scott Sussex, Caroline Uhler, Andreas Krause | Causal structure learning is a key problem in many domains. Causal structures can be learnt by performing experiments on the system of interest. We address the largely unexplored problem of designing a batch of experiments that each simultaneously intervene on multiple variables. While potentially more informative than... | https://papers.nips.cc/paper_files/paper/2021/hash/06d5ae105ea1bea4d800bc96491876e9-Abstract.html | https://papers.nips.cc/paper_files/paper/2021/hash/06d5ae105ea1bea4d800bc96491876e9-Abstract.html | NIPS 2021 | |||
Fuzzy Clustering with Similarity Queries | Wasim Huleihel, Arya Mazumdar, Soumyabrata Pal | The fuzzy or soft $k$-means objective is a popular generalization of the well-known $k$-means problem, extending the clustering capability of the $k$-means to datasets that are uncertain, vague and otherwise hard to cluster. In this paper, we propose a semi-supervised active clustering framework, where the learner is a... | https://papers.nips.cc/paper_files/paper/2021/hash/06f2e099b4f87109d52e15d7c05f0084-Abstract.html | https://papers.nips.cc/paper_files/paper/2021/hash/06f2e099b4f87109d52e15d7c05f0084-Abstract.html | NIPS 2021 | |||
Improving black-box optimization in VAE latent space using decoder uncertainty | Pascal Notin, José Miguel Hernández-Lobato, Yarin Gal | Optimization in the latent space of variational autoencoders is a promising approach to generate high-dimensional discrete objects that maximize an expensive black-box property (e.g., drug-likeness in molecular generation, function approximation with arithmetic expressions). However, existing methods lack robustness as... | https://papers.nips.cc/paper_files/paper/2021/hash/06fe1c234519f6812fc4c1baae25d6af-Abstract.html | https://papers.nips.cc/paper_files/paper/2021/hash/06fe1c234519f6812fc4c1baae25d6af-Abstract.html | NIPS 2021 | |||
Sample Selection for Fair and Robust Training | Yuji Roh, Kangwook Lee, Steven Whang, Changho Suh | Fairness and robustness are critical elements of Trustworthy AI that need to be addressed together. Fairness is about learning an unbiased model while robustness is about learning from corrupted data, and it is known that addressing only one of them may have an adverse effect on the other. In this work, we propose a sa... | https://papers.nips.cc/paper_files/paper/2021/hash/07563a3fe3bbe7e3ba84431ad9d055af-Abstract.html | https://papers.nips.cc/paper_files/paper/2021/hash/07563a3fe3bbe7e3ba84431ad9d055af-Abstract.html | NIPS 2021 |||
NeurWIN: Neural Whittle Index Network For Restless Bandits Via Deep RL | Khaled Nakhleh, Santosh Ganji, Ping-Chun Hsieh, I-Hong Hou, Srinivas Shakkottai | Whittle index policy is a powerful tool to obtain asymptotically optimal solutions for the notoriously intractable problem of restless bandits. However, finding the Whittle indices remains a difficult problem for many practical restless bandits with convoluted transition kernels. This paper proposes NeurWIN, a neural W... | https://papers.nips.cc/paper_files/paper/2021/hash/0768281a05da9f27df178b5c39a51263-Abstract.html | https://papers.nips.cc/paper_files/paper/2021/hash/0768281a05da9f27df178b5c39a51263-Abstract.html | NIPS 2021 | |||
Sageflow: Robust Federated Learning against Both Stragglers and Adversaries | Jungwuk Park, Dong-Jun Han, Minseok Choi, Jaekyun Moon | While federated learning (FL) allows efficient model training with local data at edge devices, major issues still to be resolved include slow devices, known as stragglers, and malicious attacks launched by adversaries. While the presence of both of these issues raises serious concerns in practical FL systems, no kno... | https://papers.nips.cc/paper_files/paper/2021/hash/076a8133735eb5d7552dc195b125a454-Abstract.html | https://papers.nips.cc/paper_files/paper/2021/hash/076a8133735eb5d7552dc195b125a454-Abstract.html | NIPS 2021 |||
Alias-Free Generative Adversarial Networks | Tero Karras, Miika Aittala, Samuli Laine, Erik Härkönen, Janne Hellsten, Jaakko Lehtinen, Timo Aila | We observe that despite their hierarchical convolutional nature, the synthesis process of typical generative adversarial networks depends on absolute pixel coordinates in an unhealthy manner. This manifests itself as, e.g., detail appearing to be glued to image coordinates instead of the surfaces of depicted objects. W... | https://papers.nips.cc/paper_files/paper/2021/hash/076ccd93ad68be51f23707988e934906-Abstract.html | https://papers.nips.cc/paper_files/paper/2021/hash/076ccd93ad68be51f23707988e934906-Abstract.html | NIPS 2021 | |||
Noise2Score: Tweedie’s Approach to Self-Supervised Image Denoising without Clean Images | Kwanyoung Kim, Jong Chul Ye | Recently, there has been extensive research interest in training deep networks to denoise images without clean reference. However, the representative approaches such as Noise2Noise, Noise2Void, Stein's unbiased risk estimator (SURE), etc. seem to differ from one another and it is difficult to find the coherent mathem... | https://papers.nips.cc/paper_files/paper/2021/hash/077b83af57538aa183971a2fe0971ec1-Abstract.html | https://papers.nips.cc/paper_files/paper/2021/hash/077b83af57538aa183971a2fe0971ec1-Abstract.html | NIPS 2021 |||
Continuous Mean-Covariance Bandits | Yihan Du, Siwei Wang, Zhixuan Fang, Longbo Huang | Existing risk-aware multi-armed bandit models typically focus on risk measures of individual options such as variance. As a result, they cannot be directly applied to important real-world online decision making problems with correlated options. In this paper, we propose a novel Continuous Mean-Covariance Bandit (CMCB) ... | https://papers.nips.cc/paper_files/paper/2021/hash/07811dc6c422334ce36a09ff5cd6fe71-Abstract.html | https://papers.nips.cc/paper_files/paper/2021/hash/07811dc6c422334ce36a09ff5cd6fe71-Abstract.html | NIPS 2021 | |||
Dynamic Visual Reasoning by Learning Differentiable Physics Models from Video and Language | Mingyu Ding, Zhenfang Chen, Tao Du, Ping Luo, Josh Tenenbaum, Chuang Gan | In this work, we propose a unified framework, called Visual Reasoning with Differentiable Physics (VRDP), that can jointly learn visual concepts and infer physics models of objects and their interactions from videos and language. This is achieved by seamlessly integrating three components: a visual perception module, ... | https://papers.nips.cc/paper_files/paper/2021/hash/07845cd9aefa6cde3f8926d25138a3a2-Abstract.html | https://papers.nips.cc/paper_files/paper/2021/hash/07845cd9aefa6cde3f8926d25138a3a2-Abstract.html | NIPS 2021 |||
Solving Soft Clustering Ensemble via $k$-Sparse Discrete Wasserstein Barycenter | Ruizhe Qin, Mengying Li, Hu Ding | Clustering ensemble is one of the most important problems in ensemble learning. Though it has been extensively studied in the past decades, the existing methods often suffer from issues such as high computational complexity and difficulty in understanding the consensus. In this paper, we study the more general so... | https://papers.nips.cc/paper_files/paper/2021/hash/07a4e20a7bbeeb7a736682b26b16ebe8-Abstract.html | https://papers.nips.cc/paper_files/paper/2021/hash/07a4e20a7bbeeb7a736682b26b16ebe8-Abstract.html | NIPS 2021 |||
Bayesian Adaptation for Covariate Shift | Aurick Zhou, Sergey Levine | When faced with distribution shift at test time, deep neural networks often make inaccurate predictions with unreliable uncertainty estimates. While improving the robustness of neural networks is one promising approach to mitigate this issue, an appealing alternative to robustifying networks against all possible test-time... | https://papers.nips.cc/paper_files/paper/2021/hash/07ac7cd13fd0eb1654ccdbd222b81437-Abstract.html | https://papers.nips.cc/paper_files/paper/2021/hash/07ac7cd13fd0eb1654ccdbd222b81437-Abstract.html | NIPS 2021 |||
Perturb-and-max-product: Sampling and learning in discrete energy-based models | Miguel Lazaro-Gredilla, Antoine Dedieu, Dileep George | Perturb-and-MAP offers an elegant approach to approximately sample from an energy-based model (EBM) by computing the maximum-a-posteriori (MAP) configuration of a perturbed version of the model. Sampling in turn enables learning. However, this line of research has been hindered by the general intractability of the MAP c... | https://papers.nips.cc/paper_files/paper/2021/hash/07b1c04a30f798b5506c1ec5acfb9031-Abstract.html | https://papers.nips.cc/paper_files/paper/2021/hash/07b1c04a30f798b5506c1ec5acfb9031-Abstract.html | NIPS 2021 |||
Towards Unifying Behavioral and Response Diversity for Open-ended Learning in Zero-sum Games | Xiangyu Liu, Hangtian Jia, Ying Wen, Yujing Hu, Yingfeng Chen, Changjie Fan, ZHIPENG HU, Yaodong Yang | Measuring and promoting policy diversity is critical for solving games with strong non-transitive dynamics where strategic cycles exist, and there is no consistent winner (e.g., Rock-Paper-Scissors). With that in mind, maintaining a pool of diverse policies via open-ended learning is an attractive solution, which can g... | https://papers.nips.cc/paper_files/paper/2021/hash/07bba581a2dd8d098a3be0f683560643-Abstract.html | https://papers.nips.cc/paper_files/paper/2021/hash/07bba581a2dd8d098a3be0f683560643-Abstract.html | NIPS 2021 | |||
Towards Better Understanding of Training Certifiably Robust Models against Adversarial Examples | Sungyoon Lee, Woojin Lee, Jinseong Park, Jaewook Lee | We study the problem of training certifiably robust models against adversarial examples. Certifiable training minimizes an upper bound on the worst-case loss over the allowed perturbation, and thus the tightness of the upper bound is an important factor in building certifiably robust models. However, many studies have ... | https://papers.nips.cc/paper_files/paper/2021/hash/07c5807d0d927dcd0980f86024e5208b-Abstract.html | https://papers.nips.cc/paper_files/paper/2021/hash/07c5807d0d927dcd0980f86024e5208b-Abstract.html | NIPS 2021 | |||
Mitigating Covariate Shift in Imitation Learning via Offline Data With Partial Coverage | Jonathan Chang, Masatoshi Uehara, Dhruv Sreenivas, Rahul Kidambi, Wen Sun | This paper studies offline Imitation Learning (IL) where an agent learns to imitate an expert demonstrator without additional online environment interactions. Instead, the learner is presented with a static offline dataset of state-action-next state triples from a potentially less proficient behavior policy. We introdu... | https://papers.nips.cc/paper_files/paper/2021/hash/07d5938693cc3903b261e1a3844590ed-Abstract.html | https://papers.nips.cc/paper_files/paper/2021/hash/07d5938693cc3903b261e1a3844590ed-Abstract.html | NIPS 2021 | |||
Global Filter Networks for Image Classification | Yongming Rao, Wenliang Zhao, Zheng Zhu, Jiwen Lu, Jie Zhou | Recent advances in self-attention and pure multi-layer perceptrons (MLP) models for vision have shown great potential in achieving promising performance with fewer inductive biases. These models are generally based on learning interaction among spatial locations from raw data. The complexity of self-attention and MLP g... | https://papers.nips.cc/paper_files/paper/2021/hash/07e87c2f4fc7f7c96116d8e2a92790f5-Abstract.html | https://papers.nips.cc/paper_files/paper/2021/hash/07e87c2f4fc7f7c96116d8e2a92790f5-Abstract.html | NIPS 2021 | |||
CAFE: Catastrophic Data Leakage in Vertical Federated Learning | Xiao Jin, Pin-Yu Chen, Chia-Yi Hsu, Chia-Mu Yu, Tianyi Chen | Recent studies show that private training data can be leaked through the gradients sharing mechanism deployed in distributed machine learning systems, such as federated learning (FL). Increasing batch size to complicate data recovery is often viewed as a promising defense strategy against data leakage. In this paper, w... | https://papers.nips.cc/paper_files/paper/2021/hash/08040837089cdf46631a10aca5258e16-Abstract.html | https://papers.nips.cc/paper_files/paper/2021/hash/08040837089cdf46631a10aca5258e16-Abstract.html | NIPS 2021 | |||
Fault-Tolerant Federated Reinforcement Learning with Theoretical Guarantee | Xiaofeng Fan, Yining Ma, Zhongxiang Dai, Wei Jing, Cheston Tan, Bryan Kian Hsiang Low | The growing literature of Federated Learning (FL) has recently inspired Federated Reinforcement Learning (FRL) to encourage multiple agents to federatively build a better decision-making policy without sharing raw trajectories. Despite its promising applications, existing works on FRL fail to I) provide theoretical ana... | https://papers.nips.cc/paper_files/paper/2021/hash/080acdcce72c06873a773c4311c2e464-Abstract.html | https://papers.nips.cc/paper_files/paper/2021/hash/080acdcce72c06873a773c4311c2e464-Abstract.html | NIPS 2021 | |||
Compacter: Efficient Low-Rank Hypercomplex Adapter Layers | Rabeeh Karimi Mahabadi, James Henderson, Sebastian Ruder | Adapting large-scale pretrained language models to downstream tasks via fine-tuning is the standard method for achieving state-of-the-art performance on NLP benchmarks. However, fine-tuning all weights of models with millions or billions of parameters is sample-inefficient, unstable in low-resource settings, and wastef... | https://papers.nips.cc/paper_files/paper/2021/hash/081be9fdff07f3bc808f935906ef70c0-Abstract.html | https://papers.nips.cc/paper_files/paper/2021/hash/081be9fdff07f3bc808f935906ef70c0-Abstract.html | NIPS 2021 | |||
Distilling Image Classifiers in Object Detectors | Shuxuan Guo, Jose M. Alvarez, Mathieu Salzmann | Knowledge distillation constitutes a simple yet effective way to improve the performance of a compact student network by exploiting the knowledge of a more powerful teacher. Nevertheless, the knowledge distillation literature remains limited to the scenario where the student and the teacher tackle the same task. Here, ... | https://papers.nips.cc/paper_files/paper/2021/hash/082a8bbf2c357c09f26675f9cf5bcba3-Abstract.html | https://papers.nips.cc/paper_files/paper/2021/hash/082a8bbf2c357c09f26675f9cf5bcba3-Abstract.html | NIPS 2021 | |||
Subgroup Generalization and Fairness of Graph Neural Networks | Jiaqi Ma, Junwei Deng, Qiaozhu Mei | Despite enormous successful applications of graph neural networks (GNNs), theoretical understanding of their generalization ability, especially for node-level tasks where data are not independent and identically-distributed (IID), has been sparse. The theoretical investigation of the generalization performance is benef... | https://papers.nips.cc/paper_files/paper/2021/hash/08425b881bcde94a383cd258cea331be-Abstract.html | https://papers.nips.cc/paper_files/paper/2021/hash/08425b881bcde94a383cd258cea331be-Abstract.html | NIPS 2021 | |||
Scaling Neural Tangent Kernels via Sketching and Random Features | Amir Zandieh, Insu Han, Haim Avron, Neta Shoham, Chaewon Kim, Jinwoo Shin | The Neural Tangent Kernel (NTK) characterizes the behavior of infinitely-wide neural networks trained under least squares loss by gradient descent. Recent works also report that NTK regression can outperform finitely-wide neural networks trained on small-scale datasets. However, the computational complexity of kernel m... | https://papers.nips.cc/paper_files/paper/2021/hash/08ae6a26b7cb089ea588e94aed36bd15-Abstract.html | https://papers.nips.cc/paper_files/paper/2021/hash/08ae6a26b7cb089ea588e94aed36bd15-Abstract.html | NIPS 2021 | |||
BatchQuant: Quantized-for-all Architecture Search with Robust Quantizer | Haoping Bai, Meng Cao, Ping Huang, Jiulong Shan | As the applications of deep learning models on edge devices increase at an accelerating pace, fast adaptation to various scenarios with varying resource constraints has become a crucial aspect of model deployment. As a result, model optimization strategies with adaptive configuration are becoming increasingly popular. ... | https://papers.nips.cc/paper_files/paper/2021/hash/08aee6276db142f4b8ac98fb8ee0ed1b-Abstract.html | https://papers.nips.cc/paper_files/paper/2021/hash/08aee6276db142f4b8ac98fb8ee0ed1b-Abstract.html | NIPS 2021 | |||
Long Short-Term Transformer for Online Action Detection | Mingze Xu, Yuanjun Xiong, Hao Chen, Xinyu Li, Wei Xia, Zhuowen Tu, Stefano Soatto | We present Long Short-term TRansformer (LSTR), a temporal modeling algorithm for online action detection, which employs a long- and short-term memory mechanism to model prolonged sequence data. It consists of an LSTR encoder that dynamically leverages coarse-scale historical information from an extended temporal window... | https://papers.nips.cc/paper_files/paper/2021/hash/08b255a5d42b89b0585260b6f2360bdd-Abstract.html | https://papers.nips.cc/paper_files/paper/2021/hash/08b255a5d42b89b0585260b6f2360bdd-Abstract.html | NIPS 2021 | |||
Near Optimal Policy Optimization via REPS | Aldo Pacchiano, Jonathan N Lee, Peter Bartlett, Ofir Nachum | Since its introduction a decade ago, relative entropy policy search (REPS) has demonstrated successful policy learning on a number of simulated and real-world robotic domains, not to mention providing algorithmic components used by many recently proposed reinforcement learning (RL) algorithms. While REPS is commonly kn... | https://papers.nips.cc/paper_files/paper/2021/hash/08d562c1eedd30b15b51e35d8486d14c-Abstract.html | https://papers.nips.cc/paper_files/paper/2021/hash/08d562c1eedd30b15b51e35d8486d14c-Abstract.html | NIPS 2021 | |||
Self-Consistent Models and Values | Greg Farquhar, Kate Baumli, Zita Marinho, Angelos Filos, Matteo Hessel, Hado P. van Hasselt, David Silver | Learned models of the environment provide reinforcement learning (RL) agents with flexible ways of making predictions about the environment. Models enable planning, i.e. using more computation to improve value functions or policies, without requiring additional environment interactions. In this work, we investigate a way... | https://papers.nips.cc/paper_files/paper/2021/hash/08f0efebb1c51aada9430a089a2050cc-Abstract.html | https://papers.nips.cc/paper_files/paper/2021/hash/08f0efebb1c51aada9430a089a2050cc-Abstract.html | NIPS 2021 |||
Learning on Random Balls is Sufficient for Estimating (Some) Graph Parameters | Takanori Maehara, Hoang NT | Theoretical analyses for graph learning methods often assume a complete observation of the input graph. Such an assumption might not be useful for handling any-size graphs due to the scalability issues in practice. In this work, we develop a theoretical framework for graph classification problems in the partial observa... | https://papers.nips.cc/paper_files/paper/2021/hash/08f36fcf88c0a84c19a6ed437b9cbcc9-Abstract.html | https://papers.nips.cc/paper_files/paper/2021/hash/08f36fcf88c0a84c19a6ed437b9cbcc9-Abstract.html | NIPS 2021 | |||
Risk-Averse Bayes-Adaptive Reinforcement Learning | Marc Rigter, Bruno Lacerda, Nick Hawes | In this work, we address risk-averse Bayes-adaptive reinforcement learning. We pose the problem of optimising the conditional value at risk (CVaR) of the total return in Bayes-adaptive Markov decision processes (MDPs). We show that a policy optimising CVaR in this setting is risk-averse to both the epistemic uncertain... | https://papers.nips.cc/paper_files/paper/2021/hash/08f90c1a417155361a5c4b8d297e0d78-Abstract.html | https://papers.nips.cc/paper_files/paper/2021/hash/08f90c1a417155361a5c4b8d297e0d78-Abstract.html | NIPS 2021 | |||
Iterative Connecting Probability Estimation for Networks | Yichen Qin, Linhan Yu, Yang Li | Estimating the probabilities of connections between vertices in a random network using an observed adjacency matrix is an important task for network data analysis. Many existing estimation methods are based on certain assumptions on network structure, which limit their applicability in practice. Without making strong a... | https://papers.nips.cc/paper_files/paper/2021/hash/0919b5c38396c3f0c41f1112d538e42c-Abstract.html | https://papers.nips.cc/paper_files/paper/2021/hash/0919b5c38396c3f0c41f1112d538e42c-Abstract.html | NIPS 2021 | |||
Learning to Adapt via Latent Domains for Adaptive Semantic Segmentation | Yunan Liu, Shanshan Zhang, Yang Li, Jian Yang | Domain adaptive semantic segmentation aims to transfer knowledge learned from labeled source domain to unlabeled target domain. To narrow down the domain gap and ease adaptation difficulty, some recent methods translate source images to target-like images (latent domains), which are used as supplement or substitute to ... | https://papers.nips.cc/paper_files/paper/2021/hash/092cb13c22d51c22b9035a2b4fe76b00-Abstract.html | https://papers.nips.cc/paper_files/paper/2021/hash/092cb13c22d51c22b9035a2b4fe76b00-Abstract.html | NIPS 2021 | |||
Single Layer Predictive Normalized Maximum Likelihood for Out-of-Distribution Detection | Koby Bibas, Meir Feder, Tal Hassner | Detecting out-of-distribution (OOD) samples is vital for developing machine learning based models for critical safety systems. Common approaches for OOD detection assume access to some OOD samples during training which may not be available in a real-life scenario. Instead, we utilize the {\em predictive normalized maxi... | https://papers.nips.cc/paper_files/paper/2021/hash/093b60fd0557804c8ba0cbf1453da22f-Abstract.html | https://papers.nips.cc/paper_files/paper/2021/hash/093b60fd0557804c8ba0cbf1453da22f-Abstract.html | NIPS 2021 | |||
Prototypical Cross-Attention Networks for Multiple Object Tracking and Segmentation | Lei Ke, Xia Li, Martin Danelljan, Yu-Wing Tai, Chi-Keung Tang, Fisher Yu | Multiple object tracking and segmentation requires detecting, tracking, and segmenting objects belonging to a set of given classes. Most approaches only exploit the temporal dimension to address the association problem, while relying on single frame predictions for the segmentation mask itself. We propose Prototypical ... | https://papers.nips.cc/paper_files/paper/2021/hash/093f65e080a295f8076b1c5722a46aa2-Abstract.html | https://papers.nips.cc/paper_files/paper/2021/hash/093f65e080a295f8076b1c5722a46aa2-Abstract.html | NIPS 2021 | |||
Algorithmic Instabilities of Accelerated Gradient Descent | Amit Attia, Tomer Koren | We study the algorithmic stability of Nesterov's accelerated gradient method. For convex quadratic objectives, Chen et al. (2018) proved that the uniform stability of the method grows quadratically with the number of optimization steps, and conjectured that the same is true for the general convex and smooth case. We di... | https://papers.nips.cc/paper_files/paper/2021/hash/094bb65ef46d3eb4be0a87877ec333eb-Abstract.html | https://papers.nips.cc/paper_files/paper/2021/hash/094bb65ef46d3eb4be0a87877ec333eb-Abstract.html | NIPS 2021 | |||
Learning Optimal Predictive Checklists | Haoran Zhang, Quaid Morris, Berk Ustun, Marzyeh Ghassemi | Checklists are simple decision aids that are often used to promote safety and reliability in clinical applications. In this paper, we present a method to learn checklists for clinical decision support. We represent predictive checklists as discrete linear classifiers with binary features and unit weights. We then learn... | https://papers.nips.cc/paper_files/paper/2021/hash/09676fac73eda6cac726c43e43e86c58-Abstract.html | https://papers.nips.cc/paper_files/paper/2021/hash/09676fac73eda6cac726c43e43e86c58-Abstract.html | NIPS 2021 | |||
Finite Sample Analysis of Average-Reward TD Learning and $Q$-Learning | Sheng Zhang, Zhe Zhang, Siva Theja Maguluri | The focus of this paper is on sample complexity guarantees of average-reward reinforcement learning algorithms, which are known to be more challenging to study than their discounted-reward counterparts. To the best of our knowledge, we provide the first known finite sample guarantees using both constant and diminishing... | https://papers.nips.cc/paper_files/paper/2021/hash/096ffc299200f51751b08da6d865ae95-Abstract.html | https://papers.nips.cc/paper_files/paper/2021/hash/096ffc299200f51751b08da6d865ae95-Abstract.html | NIPS 2021 | |||
Generalization Bounds for Graph Embedding Using Negative Sampling: Linear vs Hyperbolic | Atsushi Suzuki, Atsushi Nitanda, jing wang, Linchuan Xu, Kenji Yamanishi, Marc Cavazza | Graph embedding, which represents real-world entities in a mathematical space, has enabled numerous applications such as analyzing natural languages, social networks, biochemical networks, and knowledge bases. It has been experimentally shown that graph embedding in hyperbolic space can represent hierarchical tree-like ... | https://papers.nips.cc/paper_files/paper/2021/hash/09779bb7930c8a0a44360e12b538ae3c-Abstract.html | https://papers.nips.cc/paper_files/paper/2021/hash/09779bb7930c8a0a44360e12b538ae3c-Abstract.html | NIPS 2021 | |||
Gradient Starvation: A Learning Proclivity in Neural Networks | Mohammad Pezeshki, Oumar Kaba, Yoshua Bengio, Aaron C. Courville, Doina Precup, Guillaume Lajoie | We identify and formalize a fundamental gradient descent phenomenon resulting in a learning proclivity in over-parameterized neural networks. Gradient Starvation arises when cross-entropy loss is minimized by capturing only a subset of features relevant for the task, despite the presence of other predictive features th... | https://papers.nips.cc/paper_files/paper/2021/hash/0987b8b338d6c90bbedd8631bc499221-Abstract.html | https://papers.nips.cc/paper_files/paper/2021/hash/0987b8b338d6c90bbedd8631bc499221-Abstract.html | NIPS 2021 | |||
Offline Reinforcement Learning as One Big Sequence Modeling Problem | Michael Janner, Qiyang Li, Sergey Levine | Reinforcement learning (RL) is typically viewed as the problem of estimating single-step policies (for model-free RL) or single-step models (for model-based RL), leveraging the Markov property to factorize the problem in time. However, we can also view RL as a sequence modeling problem: predict a sequence of actions th... | https://papers.nips.cc/paper_files/paper/2021/hash/099fe6b0b444c23836c4a5d07346082b-Abstract.html | https://papers.nips.cc/paper_files/paper/2021/hash/099fe6b0b444c23836c4a5d07346082b-Abstract.html | NIPS 2021 | |||
Optimality and Stability in Federated Learning: A Game-theoretic Approach | Kate Donahue, Jon Kleinberg | Federated learning is a distributed learning paradigm where multiple agents, each only with access to local data, jointly learn a global model. There has recently been an explosion of research aiming not only to improve the accuracy rates of federated learning, but also provide certain guarantees around social good pro... | https://papers.nips.cc/paper_files/paper/2021/hash/09a5e2a11bea20817477e0b1dfe2cc21-Abstract.html | https://papers.nips.cc/paper_files/paper/2021/hash/09a5e2a11bea20817477e0b1dfe2cc21-Abstract.html | NIPS 2021 | |||
Understanding Deflation Process in Over-parametrized Tensor Decomposition | Rong Ge, Yunwei Ren, Xiang Wang, Mo Zhou | In this paper we study the training dynamics for gradient flow on over-parametrized tensor decomposition problems. Empirically, such training process often first fits larger components and then discovers smaller components, which is similar to a tensor deflation process that is commonly used in tensor decomposition alg... | https://papers.nips.cc/paper_files/paper/2021/hash/09a630e07af043e4cae879dd60db1cac-Abstract.html | https://papers.nips.cc/paper_files/paper/2021/hash/09a630e07af043e4cae879dd60db1cac-Abstract.html | NIPS 2021 |