| title | url | authors | detail_url | tags | AuthorFeedback | Bibtex | MetaReview | Paper | Review | Supplemental | abstract |
|---|---|---|---|---|---|---|---|---|---|---|---|
A graph similarity for deep learning | https://papers.nips.cc/paper_files/paper/2020/hash/0004d0b59e19461ff126e3a08a814c33-Abstract.html | Seongmin Ok | https://papers.nips.cc/paper_files/paper/2020/hash/0004d0b59e19461ff126e3a08a814c33-Abstract.html | NIPS 2020 | https://papers.nips.cc/paper_files/paper/2020/file/0004d0b59e19461ff126e3a08a814c33-AuthorFeedback.pdf | https://papers.nips.cc/paper_files/paper/9725-/bibtex | https://papers.nips.cc/paper_files/paper/2020/file/0004d0b59e19461ff126e3a08a814c33-MetaReview.html | https://papers.nips.cc/paper_files/paper/2020/file/0004d0b59e19461ff126e3a08a814c33-Paper.pdf | https://papers.nips.cc/paper_files/paper/2020/file/0004d0b59e19461ff126e3a08a814c33-Review.html | https://papers.nips.cc/paper_files/paper/2020/file/0004d0b59e19461ff126e3a08a814c33-Supplemental.pdf | Graph neural networks (GNNs) have been successful in learning representations from graphs. Many popular GNNs follow the pattern of aggregate-transform: they aggregate the neighbors' attributes and then transform the results of aggregation with a learnable function. Analyses of these GNNs explain which pairs of non-iden... |
An Unsupervised Information-Theoretic Perceptual Quality Metric | https://papers.nips.cc/paper_files/paper/2020/hash/00482b9bed15a272730fcb590ffebddd-Abstract.html | Sangnie Bhardwaj, Ian Fischer, Johannes Ballé, Troy Chinen | https://papers.nips.cc/paper_files/paper/2020/hash/00482b9bed15a272730fcb590ffebddd-Abstract.html | NIPS 2020 | https://papers.nips.cc/paper_files/paper/2020/file/00482b9bed15a272730fcb590ffebddd-AuthorFeedback.pdf | https://papers.nips.cc/paper_files/paper/9726-/bibtex | https://papers.nips.cc/paper_files/paper/2020/file/00482b9bed15a272730fcb590ffebddd-MetaReview.html | https://papers.nips.cc/paper_files/paper/2020/file/00482b9bed15a272730fcb590ffebddd-Paper.pdf | https://papers.nips.cc/paper_files/paper/2020/file/00482b9bed15a272730fcb590ffebddd-Review.html | https://papers.nips.cc/paper_files/paper/2020/file/00482b9bed15a272730fcb590ffebddd-Supplemental.pdf | Tractable models of human perception have proved to be challenging to build. Hand-designed models such as MS-SSIM remain popular predictors of human image quality judgements due to their simplicity and speed. Recent modern deep learning approaches can perform better, but they rely on supervised data which can be costly... |
Self-Supervised MultiModal Versatile Networks | https://papers.nips.cc/paper_files/paper/2020/hash/0060ef47b12160b9198302ebdb144dcf-Abstract.html | Jean-Baptiste Alayrac, Adria Recasens, Rosalia Schneider, Relja Arandjelović, Jason Ramapuram, Jeffrey De Fauw, Lucas Smaira, Sander Dieleman, Andrew Zisserman | https://papers.nips.cc/paper_files/paper/2020/hash/0060ef47b12160b9198302ebdb144dcf-Abstract.html | NIPS 2020 | https://papers.nips.cc/paper_files/paper/2020/file/0060ef47b12160b9198302ebdb144dcf-AuthorFeedback.pdf | https://papers.nips.cc/paper_files/paper/9727-/bibtex | https://papers.nips.cc/paper_files/paper/2020/file/0060ef47b12160b9198302ebdb144dcf-MetaReview.html | https://papers.nips.cc/paper_files/paper/2020/file/0060ef47b12160b9198302ebdb144dcf-Paper.pdf | https://papers.nips.cc/paper_files/paper/2020/file/0060ef47b12160b9198302ebdb144dcf-Review.html | null | Videos are a rich source of multi-modal supervision. In this work, we learn representations using self-supervision by leveraging three modalities naturally present in videos: visual, audio and language streams. To this end, we introduce the notion of a multimodal versatile network -- a network that can ingest multiple ... |
Benchmarking Deep Inverse Models over time, and the Neural-Adjoint method | https://papers.nips.cc/paper_files/paper/2020/hash/007ff380ee5ac49ffc34442f5c2a2b86-Abstract.html | Simiao Ren, Willie Padilla, Jordan Malof | https://papers.nips.cc/paper_files/paper/2020/hash/007ff380ee5ac49ffc34442f5c2a2b86-Abstract.html | NIPS 2020 | https://papers.nips.cc/paper_files/paper/2020/file/007ff380ee5ac49ffc34442f5c2a2b86-AuthorFeedback.pdf | https://papers.nips.cc/paper_files/paper/9728-/bibtex | https://papers.nips.cc/paper_files/paper/2020/file/007ff380ee5ac49ffc34442f5c2a2b86-MetaReview.html | https://papers.nips.cc/paper_files/paper/2020/file/007ff380ee5ac49ffc34442f5c2a2b86-Paper.pdf | https://papers.nips.cc/paper_files/paper/2020/file/007ff380ee5ac49ffc34442f5c2a2b86-Review.html | https://papers.nips.cc/paper_files/paper/2020/file/007ff380ee5ac49ffc34442f5c2a2b86-Supplemental.pdf | We consider the task of solving generic inverse problems, where one wishes to determine the hidden parameters of a natural system that will give rise to a particular set of measurements. Recently many new approaches based upon deep learning have arisen, generating promising results. We conceptualize these models as dif... |
Off-Policy Evaluation and Learning for External Validity under a Covariate Shift | https://papers.nips.cc/paper_files/paper/2020/hash/0084ae4bc24c0795d1e6a4f58444d39b-Abstract.html | Masatoshi Uehara, Masahiro Kato, Shota Yasui | https://papers.nips.cc/paper_files/paper/2020/hash/0084ae4bc24c0795d1e6a4f58444d39b-Abstract.html | NIPS 2020 | https://papers.nips.cc/paper_files/paper/2020/file/0084ae4bc24c0795d1e6a4f58444d39b-AuthorFeedback.pdf | https://papers.nips.cc/paper_files/paper/9729-/bibtex | https://papers.nips.cc/paper_files/paper/2020/file/0084ae4bc24c0795d1e6a4f58444d39b-MetaReview.html | https://papers.nips.cc/paper_files/paper/2020/file/0084ae4bc24c0795d1e6a4f58444d39b-Paper.pdf | https://papers.nips.cc/paper_files/paper/2020/file/0084ae4bc24c0795d1e6a4f58444d39b-Review.html | https://papers.nips.cc/paper_files/paper/2020/file/0084ae4bc24c0795d1e6a4f58444d39b-Supplemental.pdf | We consider the evaluation and training of a new policy for the evaluation data by using the historical data obtained from a different policy. The goal of off-policy evaluation (OPE) is to estimate the expected reward of a new policy over the evaluation data, and that of off-policy learning (OPL) is to find a new polic... |
Neural Methods for Point-wise Dependency Estimation | https://papers.nips.cc/paper_files/paper/2020/hash/00a03ec6533ca7f5c644d198d815329c-Abstract.html | Yao-Hung Hubert Tsai, Han Zhao, Makoto Yamada, Louis-Philippe Morency, Russ R. Salakhutdinov | https://papers.nips.cc/paper_files/paper/2020/hash/00a03ec6533ca7f5c644d198d815329c-Abstract.html | NIPS 2020 | https://papers.nips.cc/paper_files/paper/2020/file/00a03ec6533ca7f5c644d198d815329c-AuthorFeedback.pdf | https://papers.nips.cc/paper_files/paper/9730-/bibtex | https://papers.nips.cc/paper_files/paper/2020/file/00a03ec6533ca7f5c644d198d815329c-MetaReview.html | https://papers.nips.cc/paper_files/paper/2020/file/00a03ec6533ca7f5c644d198d815329c-Paper.pdf | https://papers.nips.cc/paper_files/paper/2020/file/00a03ec6533ca7f5c644d198d815329c-Review.html | https://papers.nips.cc/paper_files/paper/2020/file/00a03ec6533ca7f5c644d198d815329c-Supplemental.pdf | Since its inception, the neural estimation of mutual information (MI) has demonstrated the empirical success of modeling expected dependency between high-dimensional random variables. However, MI is an aggregate statistic and cannot be used to measure point-wise dependency between different events. In this work, instea... |
Fast and Flexible Temporal Point Processes with Triangular Maps | https://papers.nips.cc/paper_files/paper/2020/hash/00ac8ed3b4327bdd4ebbebcb2ba10a00-Abstract.html | Oleksandr Shchur, Nicholas Gao, Marin Biloš, Stephan Günnemann | https://papers.nips.cc/paper_files/paper/2020/hash/00ac8ed3b4327bdd4ebbebcb2ba10a00-Abstract.html | NIPS 2020 | https://papers.nips.cc/paper_files/paper/2020/file/00ac8ed3b4327bdd4ebbebcb2ba10a00-AuthorFeedback.pdf | https://papers.nips.cc/paper_files/paper/9731-/bibtex | https://papers.nips.cc/paper_files/paper/2020/file/00ac8ed3b4327bdd4ebbebcb2ba10a00-MetaReview.html | https://papers.nips.cc/paper_files/paper/2020/file/00ac8ed3b4327bdd4ebbebcb2ba10a00-Paper.pdf | https://papers.nips.cc/paper_files/paper/2020/file/00ac8ed3b4327bdd4ebbebcb2ba10a00-Review.html | https://papers.nips.cc/paper_files/paper/2020/file/00ac8ed3b4327bdd4ebbebcb2ba10a00-Supplemental.pdf | Temporal point process (TPP) models combined with recurrent neural networks provide a powerful framework for modeling continuous-time event data. While such models are flexible, they are inherently sequential and therefore cannot benefit from the parallelism of modern hardware. By exploiting the recent developments in ... |
Backpropagating Linearly Improves Transferability of Adversarial Examples | https://papers.nips.cc/paper_files/paper/2020/hash/00e26af6ac3b1c1c49d7c3d79c60d000-Abstract.html | Yiwen Guo, Qizhang Li, Hao Chen | https://papers.nips.cc/paper_files/paper/2020/hash/00e26af6ac3b1c1c49d7c3d79c60d000-Abstract.html | NIPS 2020 | https://papers.nips.cc/paper_files/paper/2020/file/00e26af6ac3b1c1c49d7c3d79c60d000-AuthorFeedback.pdf | https://papers.nips.cc/paper_files/paper/9732-/bibtex | https://papers.nips.cc/paper_files/paper/2020/file/00e26af6ac3b1c1c49d7c3d79c60d000-MetaReview.html | https://papers.nips.cc/paper_files/paper/2020/file/00e26af6ac3b1c1c49d7c3d79c60d000-Paper.pdf | https://papers.nips.cc/paper_files/paper/2020/file/00e26af6ac3b1c1c49d7c3d79c60d000-Review.html | https://papers.nips.cc/paper_files/paper/2020/file/00e26af6ac3b1c1c49d7c3d79c60d000-Supplemental.pdf | The vulnerability of deep neural networks (DNNs) to adversarial examples has drawn great attention from the community. In this paper, we study the transferability of such examples, which lays the foundation of many black-box attacks on DNNs. We revisit a not so new but definitely noteworthy hypothesis of Goodfellow et ... |
PyGlove: Symbolic Programming for Automated Machine Learning | https://papers.nips.cc/paper_files/paper/2020/hash/012a91467f210472fab4e11359bbfef6-Abstract.html | Daiyi Peng, Xuanyi Dong, Esteban Real, Mingxing Tan, Yifeng Lu, Gabriel Bender, Hanxiao Liu, Adam Kraft, Chen Liang, Quoc Le | https://papers.nips.cc/paper_files/paper/2020/hash/012a91467f210472fab4e11359bbfef6-Abstract.html | NIPS 2020 | https://papers.nips.cc/paper_files/paper/2020/file/012a91467f210472fab4e11359bbfef6-AuthorFeedback.pdf | https://papers.nips.cc/paper_files/paper/9733-/bibtex | https://papers.nips.cc/paper_files/paper/2020/file/012a91467f210472fab4e11359bbfef6-MetaReview.html | https://papers.nips.cc/paper_files/paper/2020/file/012a91467f210472fab4e11359bbfef6-Paper.pdf | https://papers.nips.cc/paper_files/paper/2020/file/012a91467f210472fab4e11359bbfef6-Review.html | https://papers.nips.cc/paper_files/paper/2020/file/012a91467f210472fab4e11359bbfef6-Supplemental.pdf | In this paper, we introduce a new way of programming AutoML based on symbolic programming. Under this paradigm, ML programs are mutable, thus can be manipulated easily by another program. As a result, AutoML can be reformulated as an automated process of symbolic manipulation. With this formulation, we decouple the tri... |
Fourier Sparse Leverage Scores and Approximate Kernel Learning | https://papers.nips.cc/paper_files/paper/2020/hash/012d9fe15b2493f21902cd55603382ec-Abstract.html | Tamas Erdelyi, Cameron Musco, Christopher Musco | https://papers.nips.cc/paper_files/paper/2020/hash/012d9fe15b2493f21902cd55603382ec-Abstract.html | NIPS 2020 | https://papers.nips.cc/paper_files/paper/2020/file/012d9fe15b2493f21902cd55603382ec-AuthorFeedback.pdf | https://papers.nips.cc/paper_files/paper/9734-/bibtex | https://papers.nips.cc/paper_files/paper/2020/file/012d9fe15b2493f21902cd55603382ec-MetaReview.html | https://papers.nips.cc/paper_files/paper/2020/file/012d9fe15b2493f21902cd55603382ec-Paper.pdf | https://papers.nips.cc/paper_files/paper/2020/file/012d9fe15b2493f21902cd55603382ec-Review.html | https://papers.nips.cc/paper_files/paper/2020/file/012d9fe15b2493f21902cd55603382ec-Supplemental.pdf | We prove new explicit upper bounds on the leverage scores of Fourier sparse functions under both the Gaussian and Laplace measures. In particular, we study s-sparse functions of the form $f(x) = \sum_{j=1}^s a_j e^{i \lambda_j x}$ for coefficients $a_j \in C$ and frequencies $\lambda_j \in R$. Bounding Fourier sparse l... |
Improved Algorithms for Online Submodular Maximization via First-order Regret Bounds | https://papers.nips.cc/paper_files/paper/2020/hash/0163cceb20f5ca7b313419c068abd9dc-Abstract.html | Nicholas Harvey, Christopher Liaw, Tasuku Soma | https://papers.nips.cc/paper_files/paper/2020/hash/0163cceb20f5ca7b313419c068abd9dc-Abstract.html | NIPS 2020 | https://papers.nips.cc/paper_files/paper/2020/file/0163cceb20f5ca7b313419c068abd9dc-AuthorFeedback.pdf | https://papers.nips.cc/paper_files/paper/9735-/bibtex | https://papers.nips.cc/paper_files/paper/2020/file/0163cceb20f5ca7b313419c068abd9dc-MetaReview.html | https://papers.nips.cc/paper_files/paper/2020/file/0163cceb20f5ca7b313419c068abd9dc-Paper.pdf | https://papers.nips.cc/paper_files/paper/2020/file/0163cceb20f5ca7b313419c068abd9dc-Review.html | https://papers.nips.cc/paper_files/paper/2020/file/0163cceb20f5ca7b313419c068abd9dc-Supplemental.pdf | We consider the problem of nonnegative submodular maximization in the online setting. At time step t, an algorithm selects a set St ∈ C ⊆ 2^V where C is a feasible family of sets. An adversary then reveals a submodular function ft. The goal is to design an efficient algorithm for minimizing the expected approximate reg... |
Synbols: Probing Learning Algorithms with Synthetic Datasets | https://papers.nips.cc/paper_files/paper/2020/hash/0169cf885f882efd795951253db5cdfb-Abstract.html | Alexandre Lacoste, Pau Rodríguez López, Frederic Branchaud-Charron, Parmida Atighehchian, Massimo Caccia, Issam Hadj Laradji, Alexandre Drouin, Matthew Craddock, Laurent Charlin, David Vázquez | https://papers.nips.cc/paper_files/paper/2020/hash/0169cf885f882efd795951253db5cdfb-Abstract.html | NIPS 2020 | https://papers.nips.cc/paper_files/paper/2020/file/0169cf885f882efd795951253db5cdfb-AuthorFeedback.pdf | https://papers.nips.cc/paper_files/paper/9736-/bibtex | https://papers.nips.cc/paper_files/paper/2020/file/0169cf885f882efd795951253db5cdfb-MetaReview.html | https://papers.nips.cc/paper_files/paper/2020/file/0169cf885f882efd795951253db5cdfb-Paper.pdf | https://papers.nips.cc/paper_files/paper/2020/file/0169cf885f882efd795951253db5cdfb-Review.html | https://papers.nips.cc/paper_files/paper/2020/file/0169cf885f882efd795951253db5cdfb-Supplemental.pdf | Progress in the field of machine learning has been fueled by the introduction of benchmark datasets pushing the limits of existing algorithms. Enabling the design of datasets to test specific properties and failure modes of learning algorithms is thus a problem of high interest, as it has a direct impact on innovation... |
Adversarially Robust Streaming Algorithms via Differential Privacy | https://papers.nips.cc/paper_files/paper/2020/hash/0172d289da48c48de8c5ebf3de9f7ee1-Abstract.html | Avinatan Hasidim, Haim Kaplan, Yishay Mansour, Yossi Matias, Uri Stemmer | https://papers.nips.cc/paper_files/paper/2020/hash/0172d289da48c48de8c5ebf3de9f7ee1-Abstract.html | NIPS 2020 | https://papers.nips.cc/paper_files/paper/2020/file/0172d289da48c48de8c5ebf3de9f7ee1-AuthorFeedback.pdf | https://papers.nips.cc/paper_files/paper/9737-/bibtex | https://papers.nips.cc/paper_files/paper/2020/file/0172d289da48c48de8c5ebf3de9f7ee1-MetaReview.html | https://papers.nips.cc/paper_files/paper/2020/file/0172d289da48c48de8c5ebf3de9f7ee1-Paper.pdf | https://papers.nips.cc/paper_files/paper/2020/file/0172d289da48c48de8c5ebf3de9f7ee1-Review.html | null | A streaming algorithm is said to be adversarially robust if its accuracy guarantees are maintained even when the data stream is chosen maliciously, by an adaptive adversary. We establish a connection between adversarial robustness of streaming algorithms and the notion of differential privacy. This connection allows us... |
Trading Personalization for Accuracy: Data Debugging in Collaborative Filtering | https://papers.nips.cc/paper_files/paper/2020/hash/019fa4fdf1c04cf73ba25aa2223769cd-Abstract.html | Long Chen, Yuan Yao, Feng Xu, Miao Xu, Hanghang Tong | https://papers.nips.cc/paper_files/paper/2020/hash/019fa4fdf1c04cf73ba25aa2223769cd-Abstract.html | NIPS 2020 | https://papers.nips.cc/paper_files/paper/2020/file/019fa4fdf1c04cf73ba25aa2223769cd-AuthorFeedback.pdf | https://papers.nips.cc/paper_files/paper/9738-/bibtex | https://papers.nips.cc/paper_files/paper/2020/file/019fa4fdf1c04cf73ba25aa2223769cd-MetaReview.html | https://papers.nips.cc/paper_files/paper/2020/file/019fa4fdf1c04cf73ba25aa2223769cd-Paper.pdf | https://papers.nips.cc/paper_files/paper/2020/file/019fa4fdf1c04cf73ba25aa2223769cd-Review.html | https://papers.nips.cc/paper_files/paper/2020/file/019fa4fdf1c04cf73ba25aa2223769cd-Supplemental.pdf | Collaborative filtering has been widely used in recommender systems. Existing work has primarily focused on improving the prediction accuracy mainly via either building refined models or incorporating additional side information, yet has largely ignored the inherent distribution of the input rating data. In this paper... |
Cascaded Text Generation with Markov Transformers | https://papers.nips.cc/paper_files/paper/2020/hash/01a0683665f38d8e5e567b3b15ca98bf-Abstract.html | Yuntian Deng, Alexander Rush | https://papers.nips.cc/paper_files/paper/2020/hash/01a0683665f38d8e5e567b3b15ca98bf-Abstract.html | NIPS 2020 | https://papers.nips.cc/paper_files/paper/2020/file/01a0683665f38d8e5e567b3b15ca98bf-AuthorFeedback.pdf | https://papers.nips.cc/paper_files/paper/9739-/bibtex | https://papers.nips.cc/paper_files/paper/2020/file/01a0683665f38d8e5e567b3b15ca98bf-MetaReview.html | https://papers.nips.cc/paper_files/paper/2020/file/01a0683665f38d8e5e567b3b15ca98bf-Paper.pdf | https://papers.nips.cc/paper_files/paper/2020/file/01a0683665f38d8e5e567b3b15ca98bf-Review.html | https://papers.nips.cc/paper_files/paper/2020/file/01a0683665f38d8e5e567b3b15ca98bf-Supplemental.pdf | The two dominant approaches to neural text generation are fully autoregressive models, using serial beam search decoding, and non-autoregressive models, using parallel decoding with no output dependencies. This work proposes an autoregressive model with sub-linear parallel time generation. Noting that conditional rando... |
Improving Local Identifiability in Probabilistic Box Embeddings | https://papers.nips.cc/paper_files/paper/2020/hash/01c9d2c5b3ff5cbba349ec39a570b5e3-Abstract.html | Shib Dasgupta, Michael Boratko, Dongxu Zhang, Luke Vilnis, Xiang Li, Andrew McCallum | https://papers.nips.cc/paper_files/paper/2020/hash/01c9d2c5b3ff5cbba349ec39a570b5e3-Abstract.html | NIPS 2020 | https://papers.nips.cc/paper_files/paper/2020/file/01c9d2c5b3ff5cbba349ec39a570b5e3-AuthorFeedback.pdf | https://papers.nips.cc/paper_files/paper/9740-/bibtex | https://papers.nips.cc/paper_files/paper/2020/file/01c9d2c5b3ff5cbba349ec39a570b5e3-MetaReview.html | https://papers.nips.cc/paper_files/paper/2020/file/01c9d2c5b3ff5cbba349ec39a570b5e3-Paper.pdf | https://papers.nips.cc/paper_files/paper/2020/file/01c9d2c5b3ff5cbba349ec39a570b5e3-Review.html | https://papers.nips.cc/paper_files/paper/2020/file/01c9d2c5b3ff5cbba349ec39a570b5e3-Supplemental.pdf | Geometric embeddings have recently received attention for their natural ability to represent transitive asymmetric relations via containment. Box embeddings, where objects are represented by n-dimensional hyperrectangles, are a particularly promising example of such an embedding as they are closed under intersection a... |
Permute-and-Flip: A new mechanism for differentially private selection | https://papers.nips.cc/paper_files/paper/2020/hash/01e00f2f4bfcbb7505cb641066f2859b-Abstract.html | Ryan McKenna, Daniel R. Sheldon | https://papers.nips.cc/paper_files/paper/2020/hash/01e00f2f4bfcbb7505cb641066f2859b-Abstract.html | NIPS 2020 | https://papers.nips.cc/paper_files/paper/2020/file/01e00f2f4bfcbb7505cb641066f2859b-AuthorFeedback.pdf | https://papers.nips.cc/paper_files/paper/9741-/bibtex | https://papers.nips.cc/paper_files/paper/2020/file/01e00f2f4bfcbb7505cb641066f2859b-MetaReview.html | https://papers.nips.cc/paper_files/paper/2020/file/01e00f2f4bfcbb7505cb641066f2859b-Paper.pdf | https://papers.nips.cc/paper_files/paper/2020/file/01e00f2f4bfcbb7505cb641066f2859b-Review.html | https://papers.nips.cc/paper_files/paper/2020/file/01e00f2f4bfcbb7505cb641066f2859b-Supplemental.pdf | We consider the problem of differentially private selection. Given a finite set of candidate items, and a quality score for each item, our goal is to design a differentially private mechanism that returns an item with a score that is as high as possible. The most commonly used mechanism for this task is the exponenti... |
Deep reconstruction of strange attractors from time series | https://papers.nips.cc/paper_files/paper/2020/hash/021bbc7ee20b71134d53e20206bd6feb-Abstract.html | William Gilpin | https://papers.nips.cc/paper_files/paper/2020/hash/021bbc7ee20b71134d53e20206bd6feb-Abstract.html | NIPS 2020 | https://papers.nips.cc/paper_files/paper/2020/file/021bbc7ee20b71134d53e20206bd6feb-AuthorFeedback.pdf | https://papers.nips.cc/paper_files/paper/9742-/bibtex | https://papers.nips.cc/paper_files/paper/2020/file/021bbc7ee20b71134d53e20206bd6feb-MetaReview.html | https://papers.nips.cc/paper_files/paper/2020/file/021bbc7ee20b71134d53e20206bd6feb-Paper.pdf | https://papers.nips.cc/paper_files/paper/2020/file/021bbc7ee20b71134d53e20206bd6feb-Review.html | https://papers.nips.cc/paper_files/paper/2020/file/021bbc7ee20b71134d53e20206bd6feb-Supplemental.pdf | Experimental measurements of physical systems often have a limited number of independent channels, causing essential dynamical variables to remain unobserved. However, many popular methods for unsupervised inference of latent dynamics from experimental data implicitly assume that the measurements have higher intrinsic ... |
Reciprocal Adversarial Learning via Characteristic Functions | https://papers.nips.cc/paper_files/paper/2020/hash/021f6dd88a11ca489936ae770e4634ad-Abstract.html | Shengxi Li, Zeyang Yu, Min Xiang, Danilo Mandic | https://papers.nips.cc/paper_files/paper/2020/hash/021f6dd88a11ca489936ae770e4634ad-Abstract.html | NIPS 2020 | https://papers.nips.cc/paper_files/paper/2020/file/021f6dd88a11ca489936ae770e4634ad-AuthorFeedback.pdf | https://papers.nips.cc/paper_files/paper/9743-/bibtex | https://papers.nips.cc/paper_files/paper/2020/file/021f6dd88a11ca489936ae770e4634ad-MetaReview.html | https://papers.nips.cc/paper_files/paper/2020/file/021f6dd88a11ca489936ae770e4634ad-Paper.pdf | https://papers.nips.cc/paper_files/paper/2020/file/021f6dd88a11ca489936ae770e4634ad-Review.html | https://papers.nips.cc/paper_files/paper/2020/file/021f6dd88a11ca489936ae770e4634ad-Supplemental.pdf | Generative adversarial nets (GANs) have become a preferred tool for tasks involving complicated distributions. To stabilise the training and reduce the mode collapse of GANs, one of their main variants employs the integral probability metric (IPM) as the loss function. This provides extensive IPM-GANs with theoretical ... |
Statistical Guarantees of Distributed Nearest Neighbor Classification | https://papers.nips.cc/paper_files/paper/2020/hash/022e0ee5162c13d9a7bb3bd00fb032ce-Abstract.html | Jiexin Duan, Xingye Qiao, Guang Cheng | https://papers.nips.cc/paper_files/paper/2020/hash/022e0ee5162c13d9a7bb3bd00fb032ce-Abstract.html | NIPS 2020 | https://papers.nips.cc/paper_files/paper/2020/file/022e0ee5162c13d9a7bb3bd00fb032ce-AuthorFeedback.pdf | https://papers.nips.cc/paper_files/paper/9744-/bibtex | https://papers.nips.cc/paper_files/paper/2020/file/022e0ee5162c13d9a7bb3bd00fb032ce-MetaReview.html | https://papers.nips.cc/paper_files/paper/2020/file/022e0ee5162c13d9a7bb3bd00fb032ce-Paper.pdf | https://papers.nips.cc/paper_files/paper/2020/file/022e0ee5162c13d9a7bb3bd00fb032ce-Review.html | https://papers.nips.cc/paper_files/paper/2020/file/022e0ee5162c13d9a7bb3bd00fb032ce-Supplemental.pdf | Nearest neighbor is a popular nonparametric method for classification and regression with many appealing properties. In the big data era, the sheer volume and spatial/temporal disparity of big data may prohibit centrally processing and storing the data. This has imposed considerable hurdle for nearest neighbor predicti... |
Stein Self-Repulsive Dynamics: Benefits From Past Samples | https://papers.nips.cc/paper_files/paper/2020/hash/023d0a5671efd29e80b4deef8262e297-Abstract.html | Mao Ye, Tongzheng Ren, Qiang Liu | https://papers.nips.cc/paper_files/paper/2020/hash/023d0a5671efd29e80b4deef8262e297-Abstract.html | NIPS 2020 | https://papers.nips.cc/paper_files/paper/2020/file/023d0a5671efd29e80b4deef8262e297-AuthorFeedback.pdf | https://papers.nips.cc/paper_files/paper/9745-/bibtex | https://papers.nips.cc/paper_files/paper/2020/file/023d0a5671efd29e80b4deef8262e297-MetaReview.html | https://papers.nips.cc/paper_files/paper/2020/file/023d0a5671efd29e80b4deef8262e297-Paper.pdf | https://papers.nips.cc/paper_files/paper/2020/file/023d0a5671efd29e80b4deef8262e297-Review.html | https://papers.nips.cc/paper_files/paper/2020/file/023d0a5671efd29e80b4deef8262e297-Supplemental.pdf | We propose a new Stein self-repulsive dynamics for obtaining diversified samples from intractable un-normalized distributions. Our idea is to introduce Stein variational gradient as a repulsive force to push the samples of Langevin dynamics away from the past trajectories. This simple idea allows us to significantly de... |
The Statistical Complexity of Early-Stopped Mirror Descent | https://papers.nips.cc/paper_files/paper/2020/hash/024d2d699e6c1a82c9ba986386f4d824-Abstract.html | Tomas Vaskevicius, Varun Kanade, Patrick Rebeschini | https://papers.nips.cc/paper_files/paper/2020/hash/024d2d699e6c1a82c9ba986386f4d824-Abstract.html | NIPS 2020 | https://papers.nips.cc/paper_files/paper/2020/file/024d2d699e6c1a82c9ba986386f4d824-AuthorFeedback.pdf | https://papers.nips.cc/paper_files/paper/9746-/bibtex | https://papers.nips.cc/paper_files/paper/2020/file/024d2d699e6c1a82c9ba986386f4d824-MetaReview.html | https://papers.nips.cc/paper_files/paper/2020/file/024d2d699e6c1a82c9ba986386f4d824-Paper.pdf | https://papers.nips.cc/paper_files/paper/2020/file/024d2d699e6c1a82c9ba986386f4d824-Review.html | null | Recently there has been a surge of interest in understanding implicit regularization properties of iterative gradient-based optimization algorithms. In this paper, we study the statistical guarantees on the excess risk achieved by early-stopped unconstrained mirror descent algorithms applied to the unregularized empiri... |
Algorithmic recourse under imperfect causal knowledge: a probabilistic approach | https://papers.nips.cc/paper_files/paper/2020/hash/02a3c7fb3f489288ae6942498498db20-Abstract.html | Amir-Hossein Karimi, Julius von Kügelgen, Bernhard Schölkopf, Isabel Valera | https://papers.nips.cc/paper_files/paper/2020/hash/02a3c7fb3f489288ae6942498498db20-Abstract.html | NIPS 2020 | https://papers.nips.cc/paper_files/paper/2020/file/02a3c7fb3f489288ae6942498498db20-AuthorFeedback.pdf | https://papers.nips.cc/paper_files/paper/9747-/bibtex | https://papers.nips.cc/paper_files/paper/2020/file/02a3c7fb3f489288ae6942498498db20-MetaReview.html | https://papers.nips.cc/paper_files/paper/2020/file/02a3c7fb3f489288ae6942498498db20-Paper.pdf | https://papers.nips.cc/paper_files/paper/2020/file/02a3c7fb3f489288ae6942498498db20-Review.html | https://papers.nips.cc/paper_files/paper/2020/file/02a3c7fb3f489288ae6942498498db20-Supplemental.pdf | Recent work has discussed the limitations of counterfactual explanations to recommend actions for algorithmic recourse, and argued for the need of taking causal relationships between features into consideration. Unfortunately, in practice, the true underlying structural causal model is generally unknown. In this work, ... |
Quantitative Propagation of Chaos for SGD in Wide Neural Networks | https://papers.nips.cc/paper_files/paper/2020/hash/02e74f10e0327ad868d138f2b4fdd6f0-Abstract.html | Valentin De Bortoli, Alain Durmus, Xavier Fontaine, Umut Simsekli | https://papers.nips.cc/paper_files/paper/2020/hash/02e74f10e0327ad868d138f2b4fdd6f0-Abstract.html | NIPS 2020 | https://papers.nips.cc/paper_files/paper/2020/file/02e74f10e0327ad868d138f2b4fdd6f0-AuthorFeedback.pdf | https://papers.nips.cc/paper_files/paper/9748-/bibtex | https://papers.nips.cc/paper_files/paper/2020/file/02e74f10e0327ad868d138f2b4fdd6f0-MetaReview.html | https://papers.nips.cc/paper_files/paper/2020/file/02e74f10e0327ad868d138f2b4fdd6f0-Paper.pdf | https://papers.nips.cc/paper_files/paper/2020/file/02e74f10e0327ad868d138f2b4fdd6f0-Review.html | https://papers.nips.cc/paper_files/paper/2020/file/02e74f10e0327ad868d138f2b4fdd6f0-Supplemental.pdf | In this paper, we investigate the limiting behavior of a continuous-time counterpart of the Stochastic Gradient Descent (SGD) algorithm applied to two-layer overparameterized neural networks, as the number of neurons (i.e., the size of the hidden layer) $N \to +\infty$. Following a probabilistic approach,... |
A Causal View on Robustness of Neural Networks | https://papers.nips.cc/paper_files/paper/2020/hash/02ed812220b0705fabb868ddbf17ea20-Abstract.html | Cheng Zhang, Kun Zhang, Yingzhen Li | https://papers.nips.cc/paper_files/paper/2020/hash/02ed812220b0705fabb868ddbf17ea20-Abstract.html | NIPS 2020 | https://papers.nips.cc/paper_files/paper/2020/file/02ed812220b0705fabb868ddbf17ea20-AuthorFeedback.pdf | https://papers.nips.cc/paper_files/paper/9749-/bibtex | https://papers.nips.cc/paper_files/paper/2020/file/02ed812220b0705fabb868ddbf17ea20-MetaReview.html | https://papers.nips.cc/paper_files/paper/2020/file/02ed812220b0705fabb868ddbf17ea20-Paper.pdf | https://papers.nips.cc/paper_files/paper/2020/file/02ed812220b0705fabb868ddbf17ea20-Review.html | https://papers.nips.cc/paper_files/paper/2020/file/02ed812220b0705fabb868ddbf17ea20-Supplemental.pdf | We present a causal view on the robustness of neural networks against input manipulations, which applies not only to traditional classification tasks but also to general measurement data. Based on this view, we design a deep causal manipulation augmented model (deep CAMA) which explicitly models possible manipulations ... |
Minimax Classification with 0-1 Loss and Performance Guarantees | https://papers.nips.cc/paper_files/paper/2020/hash/02f657d55eaf1c4840ce8d66fcdaf90c-Abstract.html | Santiago Mazuelas, Andrea Zanoni, Aritz Pérez | https://papers.nips.cc/paper_files/paper/2020/hash/02f657d55eaf1c4840ce8d66fcdaf90c-Abstract.html | NIPS 2020 | https://papers.nips.cc/paper_files/paper/2020/file/02f657d55eaf1c4840ce8d66fcdaf90c-AuthorFeedback.pdf | https://papers.nips.cc/paper_files/paper/9750-/bibtex | https://papers.nips.cc/paper_files/paper/2020/file/02f657d55eaf1c4840ce8d66fcdaf90c-MetaReview.html | https://papers.nips.cc/paper_files/paper/2020/file/02f657d55eaf1c4840ce8d66fcdaf90c-Paper.pdf | https://papers.nips.cc/paper_files/paper/2020/file/02f657d55eaf1c4840ce8d66fcdaf90c-Review.html | https://papers.nips.cc/paper_files/paper/2020/file/02f657d55eaf1c4840ce8d66fcdaf90c-Supplemental.pdf | Supervised classification techniques use training samples to find classification rules with small expected 0-1 loss. Conventional methods achieve efficient learning and out-of-sample generalization by minimizing surrogate losses over specific families of rules. This paper presents minimax risk classifiers (MRCs) that d... |
How to Learn a Useful Critic? Model-based Action-Gradient-Estimator Policy Optimization | https://papers.nips.cc/paper_files/paper/2020/hash/03255088ed63354a54e0e5ed957e9008-Abstract.html | Pierluca D'Oro, Wojciech Jaśkowski | https://papers.nips.cc/paper_files/paper/2020/hash/03255088ed63354a54e0e5ed957e9008-Abstract.html | NIPS 2020 | https://papers.nips.cc/paper_files/paper/2020/file/03255088ed63354a54e0e5ed957e9008-AuthorFeedback.pdf | https://papers.nips.cc/paper_files/paper/9751-/bibtex | https://papers.nips.cc/paper_files/paper/2020/file/03255088ed63354a54e0e5ed957e9008-MetaReview.html | https://papers.nips.cc/paper_files/paper/2020/file/03255088ed63354a54e0e5ed957e9008-Paper.pdf | https://papers.nips.cc/paper_files/paper/2020/file/03255088ed63354a54e0e5ed957e9008-Review.html | https://papers.nips.cc/paper_files/paper/2020/file/03255088ed63354a54e0e5ed957e9008-Supplemental.pdf | Deterministic-policy actor-critic algorithms for continuous control improve the actor by plugging its actions into the critic and ascending the action-value gradient, which is obtained by chaining the actor's Jacobian matrix with the gradient of the critic with respect to input actions. However, instead of gradients, t... |
Coresets for Regressions with Panel Data | https://papers.nips.cc/paper_files/paper/2020/hash/03287fcce194dbd958c2ec5b33705912-Abstract.html | Lingxiao Huang, K Sudhir, Nisheeth Vishnoi | https://papers.nips.cc/paper_files/paper/2020/hash/03287fcce194dbd958c2ec5b33705912-Abstract.html | NIPS 2020 | https://papers.nips.cc/paper_files/paper/2020/file/03287fcce194dbd958c2ec5b33705912-AuthorFeedback.pdf | https://papers.nips.cc/paper_files/paper/9752-/bibtex | https://papers.nips.cc/paper_files/paper/2020/file/03287fcce194dbd958c2ec5b33705912-MetaReview.html | https://papers.nips.cc/paper_files/paper/2020/file/03287fcce194dbd958c2ec5b33705912-Paper.pdf | https://papers.nips.cc/paper_files/paper/2020/file/03287fcce194dbd958c2ec5b33705912-Review.html | https://papers.nips.cc/paper_files/paper/2020/file/03287fcce194dbd958c2ec5b33705912-Supplemental.pdf | A panel dataset contains features or observations for multiple individuals over multiple time periods and regression problems with panel data are common in statistics and applied ML. When dealing with massive datasets, coresets have emerged as a valuable tool from a computational, storage and privacy perspective, as on... |
Learning Composable Energy Surrogates for PDE Order Reduction | https://papers.nips.cc/paper_files/paper/2020/hash/0332d694daab22e0e0eaf7a5e88433f9-Abstract.html | Alex Beatson, Jordan Ash, Geoffrey Roeder, Tianju Xue, Ryan P. Adams | https://papers.nips.cc/paper_files/paper/2020/hash/0332d694daab22e0e0eaf7a5e88433f9-Abstract.html | NIPS 2020 | https://papers.nips.cc/paper_files/paper/2020/file/0332d694daab22e0e0eaf7a5e88433f9-AuthorFeedback.pdf | https://papers.nips.cc/paper_files/paper/9753-/bibtex | https://papers.nips.cc/paper_files/paper/2020/file/0332d694daab22e0e0eaf7a5e88433f9-MetaReview.html | https://papers.nips.cc/paper_files/paper/2020/file/0332d694daab22e0e0eaf7a5e88433f9-Paper.pdf | https://papers.nips.cc/paper_files/paper/2020/file/0332d694daab22e0e0eaf7a5e88433f9-Review.html | https://papers.nips.cc/paper_files/paper/2020/file/0332d694daab22e0e0eaf7a5e88433f9-Supplemental.zip | Meta-materials are an important emerging class of engineered materials in which complex macroscopic behaviour--whether electromagnetic, thermal, or mechanical--arises from modular substructure. Simulation and optimization of these materials are computationally challenging, as rich substructures necessitate high-fidelit... |
Efficient Contextual Bandits with Continuous Actions | https://papers.nips.cc/paper_files/paper/2020/hash/033cc385728c51d97360020ed57776f0-Abstract.html | Maryam Majzoubi, Chicheng Zhang, Rajan Chari, Akshay Krishnamurthy, John Langford, Aleksandrs Slivkins | https://papers.nips.cc/paper_files/paper/2020/hash/033cc385728c51d97360020ed57776f0-Abstract.html | NIPS 2020 | https://papers.nips.cc/paper_files/paper/2020/file/033cc385728c51d97360020ed57776f0-AuthorFeedback.pdf | https://papers.nips.cc/paper_files/paper/9754-/bibtex | https://papers.nips.cc/paper_files/paper/2020/file/033cc385728c51d97360020ed57776f0-MetaReview.html | https://papers.nips.cc/paper_files/paper/2020/file/033cc385728c51d97360020ed57776f0-Paper.pdf | https://papers.nips.cc/paper_files/paper/2020/file/033cc385728c51d97360020ed57776f0-Review.html | https://papers.nips.cc/paper_files/paper/2020/file/033cc385728c51d97360020ed57776f0-Supplemental.zip | We create a computationally tractable learning algorithm for contextual bandits with continuous actions having unknown structure. The new reduction-style algorithm composes with most supervised learning representations. We prove that this algorithm works in a general sense and verify the new functionality with large-... |
Achieving Equalized Odds by Resampling Sensitive Attributes | https://papers.nips.cc/paper_files/paper/2020/hash/03593ce517feac573fdaafa6dcedef61-Abstract.html | Yaniv Romano, Stephen Bates, Emmanuel Candes | https://papers.nips.cc/paper_files/paper/2020/hash/03593ce517feac573fdaafa6dcedef61-Abstract.html | NIPS 2020 | https://papers.nips.cc/paper_files/paper/2020/file/03593ce517feac573fdaafa6dcedef61-AuthorFeedback.pdf | https://papers.nips.cc/paper_files/paper/9755-/bibtex | https://papers.nips.cc/paper_files/paper/2020/file/03593ce517feac573fdaafa6dcedef61-MetaReview.html | https://papers.nips.cc/paper_files/paper/2020/file/03593ce517feac573fdaafa6dcedef61-Paper.pdf | https://papers.nips.cc/paper_files/paper/2020/file/03593ce517feac573fdaafa6dcedef61-Review.html | https://papers.nips.cc/paper_files/paper/2020/file/03593ce517feac573fdaafa6dcedef61-Supplemental.pdf | We present a flexible framework for learning predictive models that approximately satisfy the equalized odds notion of fairness. This is achieved by introducing a general discrepancy functional that rigorously quantifies violations of this criterion. This differentiable functional is used as a penalty driving the model... |
Multi-Robot Collision Avoidance under Uncertainty with Probabilistic Safety Barrier Certificates | https://papers.nips.cc/paper_files/paper/2020/hash/03793ef7d06ffd63d34ade9d091f1ced-Abstract.html | Wenhao Luo, Wen Sun, Ashish Kapoor | https://papers.nips.cc/paper_files/paper/2020/hash/03793ef7d06ffd63d34ade9d091f1ced-Abstract.html | NIPS 2020 | https://papers.nips.cc/paper_files/paper/2020/file/03793ef7d06ffd63d34ade9d091f1ced-AuthorFeedback.pdf | https://papers.nips.cc/paper_files/paper/9756-/bibtex | https://papers.nips.cc/paper_files/paper/2020/file/03793ef7d06ffd63d34ade9d091f1ced-MetaReview.html | https://papers.nips.cc/paper_files/paper/2020/file/03793ef7d06ffd63d34ade9d091f1ced-Paper.pdf | https://papers.nips.cc/paper_files/paper/2020/file/03793ef7d06ffd63d34ade9d091f1ced-Review.html | https://papers.nips.cc/paper_files/paper/2020/file/03793ef7d06ffd63d34ade9d091f1ced-Supplemental.zip | Safety in terms of collision avoidance for multi-robot systems is a difficult challenge under uncertainty, non-determinism, and lack of complete information. This paper aims to propose a collision avoidance method that accounts for both measurement uncertainty and motion uncertainty. In particular, we propose Probabili... |
Hard Shape-Constrained Kernel Machines | https://papers.nips.cc/paper_files/paper/2020/hash/03fa2f7502f5f6b9169e67d17cbf51bb-Abstract.html | Pierre-Cyril Aubin-Frankowski, Zoltan Szabo | https://papers.nips.cc/paper_files/paper/2020/hash/03fa2f7502f5f6b9169e67d17cbf51bb-Abstract.html | NIPS 2020 | https://papers.nips.cc/paper_files/paper/2020/file/03fa2f7502f5f6b9169e67d17cbf51bb-AuthorFeedback.pdf | https://papers.nips.cc/paper_files/paper/9757-/bibtex | https://papers.nips.cc/paper_files/paper/2020/file/03fa2f7502f5f6b9169e67d17cbf51bb-MetaReview.html | https://papers.nips.cc/paper_files/paper/2020/file/03fa2f7502f5f6b9169e67d17cbf51bb-Paper.pdf | https://papers.nips.cc/paper_files/paper/2020/file/03fa2f7502f5f6b9169e67d17cbf51bb-Review.html | https://papers.nips.cc/paper_files/paper/2020/file/03fa2f7502f5f6b9169e67d17cbf51bb-Supplemental.pdf | Shape constraints (such as non-negativity, monotonicity, convexity) play a central role in a large number of applications, as they usually improve performance for small sample size and help interpretability. However enforcing these shape requirements in a hard fashion is an extremely challenging problem. Classically, t... |
A Closer Look at the Training Strategy for Modern Meta-Learning | https://papers.nips.cc/paper_files/paper/2020/hash/0415740eaa4d9decbc8da001d3fd805f-Abstract.html | JIAXIN CHEN, Xiao-Ming Wu, Yanke Li, Qimai LI, Li-Ming Zhan, Fu-lai Chung | https://papers.nips.cc/paper_files/paper/2020/hash/0415740eaa4d9decbc8da001d3fd805f-Abstract.html | NIPS 2020 | https://papers.nips.cc/paper_files/paper/2020/file/0415740eaa4d9decbc8da001d3fd805f-AuthorFeedback.pdf | https://papers.nips.cc/paper_files/paper/9758-/bibtex | https://papers.nips.cc/paper_files/paper/2020/file/0415740eaa4d9decbc8da001d3fd805f-MetaReview.html | https://papers.nips.cc/paper_files/paper/2020/file/0415740eaa4d9decbc8da001d3fd805f-Paper.pdf | https://papers.nips.cc/paper_files/paper/2020/file/0415740eaa4d9decbc8da001d3fd805f-Review.html | https://papers.nips.cc/paper_files/paper/2020/file/0415740eaa4d9decbc8da001d3fd805f-Supplemental.pdf | The support/query (S/Q) episodic training strategy has been widely used in modern meta-learning algorithms and is believed to improve their generalization ability to test environments. This paper conducts a theoretical investigation of this training strategy on generalization. From a stability perspective, we analyze t... |
On the Value of Out-of-Distribution Testing: An Example of Goodhart's Law | https://papers.nips.cc/paper_files/paper/2020/hash/045117b0e0a11a242b9765e79cbf113f-Abstract.html | Damien Teney, Ehsan Abbasnejad, Kushal Kafle, Robik Shrestha, Christopher Kanan, Anton van den Hengel | https://papers.nips.cc/paper_files/paper/2020/hash/045117b0e0a11a242b9765e79cbf113f-Abstract.html | NIPS 2020 | https://papers.nips.cc/paper_files/paper/2020/file/045117b0e0a11a242b9765e79cbf113f-AuthorFeedback.pdf | https://papers.nips.cc/paper_files/paper/9759-/bibtex | https://papers.nips.cc/paper_files/paper/2020/file/045117b0e0a11a242b9765e79cbf113f-MetaReview.html | https://papers.nips.cc/paper_files/paper/2020/file/045117b0e0a11a242b9765e79cbf113f-Paper.pdf | https://papers.nips.cc/paper_files/paper/2020/file/045117b0e0a11a242b9765e79cbf113f-Review.html | https://papers.nips.cc/paper_files/paper/2020/file/045117b0e0a11a242b9765e79cbf113f-Supplemental.pdf | Out-of-distribution (OOD) testing is increasingly popular for evaluating a machine learning system's ability to generalize beyond the biases of a training set. OOD benchmarks are designed to present a different joint distribution of data and labels between training and test time. VQA-CP has become the standard OOD benc... |
Generalised Bayesian Filtering via Sequential Monte Carlo | https://papers.nips.cc/paper_files/paper/2020/hash/04ecb1fa28506ccb6f72b12c0245ddbc-Abstract.html | Ayman Boustati, Omer Deniz Akyildiz, Theodoros Damoulas, Adam Johansen | https://papers.nips.cc/paper_files/paper/2020/hash/04ecb1fa28506ccb6f72b12c0245ddbc-Abstract.html | NIPS 2020 | https://papers.nips.cc/paper_files/paper/2020/file/04ecb1fa28506ccb6f72b12c0245ddbc-AuthorFeedback.pdf | https://papers.nips.cc/paper_files/paper/9760-/bibtex | https://papers.nips.cc/paper_files/paper/2020/file/04ecb1fa28506ccb6f72b12c0245ddbc-MetaReview.html | https://papers.nips.cc/paper_files/paper/2020/file/04ecb1fa28506ccb6f72b12c0245ddbc-Paper.pdf | https://papers.nips.cc/paper_files/paper/2020/file/04ecb1fa28506ccb6f72b12c0245ddbc-Review.html | https://papers.nips.cc/paper_files/paper/2020/file/04ecb1fa28506ccb6f72b12c0245ddbc-Supplemental.pdf | We introduce a framework for inference in general state-space hidden Markov models (HMMs) under likelihood misspecification. In particular, we leverage the loss-theoretic perspective of Generalized Bayesian Inference (GBI) to define generalised filtering recursions in HMMs, that can tackle the problem of inference unde... |
Deterministic Approximation for Submodular Maximization over a Matroid in Nearly Linear Time | https://papers.nips.cc/paper_files/paper/2020/hash/05128e44e27c36bdba71221bfccf735d-Abstract.html | Kai Han, zongmai Cao, Shuang Cui, Benwei Wu | https://papers.nips.cc/paper_files/paper/2020/hash/05128e44e27c36bdba71221bfccf735d-Abstract.html | NIPS 2020 | https://papers.nips.cc/paper_files/paper/2020/file/05128e44e27c36bdba71221bfccf735d-AuthorFeedback.pdf | https://papers.nips.cc/paper_files/paper/9761-/bibtex | https://papers.nips.cc/paper_files/paper/2020/file/05128e44e27c36bdba71221bfccf735d-MetaReview.html | https://papers.nips.cc/paper_files/paper/2020/file/05128e44e27c36bdba71221bfccf735d-Paper.pdf | https://papers.nips.cc/paper_files/paper/2020/file/05128e44e27c36bdba71221bfccf735d-Review.html | https://papers.nips.cc/paper_files/paper/2020/file/05128e44e27c36bdba71221bfccf735d-Supplemental.pdf | We study the problem of maximizing a non-monotone, non-negative submodular function subject to a matroid constraint. The prior best-known deterministic approximation ratio for this problem is $\frac{1}{4}-\epsilon$ under $\mathcal{O}(({n^4}/{\epsilon})\log n)$ time complexity. We show that this deterministic ratio can ... |
Flows for simultaneous manifold learning and density estimation | https://papers.nips.cc/paper_files/paper/2020/hash/051928341be67dcba03f0e04104d9047-Abstract.html | Johann Brehmer, Kyle Cranmer | https://papers.nips.cc/paper_files/paper/2020/hash/051928341be67dcba03f0e04104d9047-Abstract.html | NIPS 2020 | https://papers.nips.cc/paper_files/paper/2020/file/051928341be67dcba03f0e04104d9047-AuthorFeedback.pdf | https://papers.nips.cc/paper_files/paper/9762-/bibtex | https://papers.nips.cc/paper_files/paper/2020/file/051928341be67dcba03f0e04104d9047-MetaReview.html | https://papers.nips.cc/paper_files/paper/2020/file/051928341be67dcba03f0e04104d9047-Paper.pdf | https://papers.nips.cc/paper_files/paper/2020/file/051928341be67dcba03f0e04104d9047-Review.html | https://papers.nips.cc/paper_files/paper/2020/file/051928341be67dcba03f0e04104d9047-Supplemental.pdf | We introduce manifold-learning flows (ℳ-flows), a new class of generative models that simultaneously learn the data manifold as well as a tractable probability density on that manifold. Combining aspects of normalizing flows, GANs, autoencoders, and energy-based models, they have the potential to represent data sets wi... |
Simultaneous Preference and Metric Learning from Paired Comparisons | https://papers.nips.cc/paper_files/paper/2020/hash/0561bc7ecba98e39ca7994f93311ba23-Abstract.html | Austin Xu, Mark Davenport | https://papers.nips.cc/paper_files/paper/2020/hash/0561bc7ecba98e39ca7994f93311ba23-Abstract.html | NIPS 2020 | https://papers.nips.cc/paper_files/paper/2020/file/0561bc7ecba98e39ca7994f93311ba23-AuthorFeedback.pdf | https://papers.nips.cc/paper_files/paper/9763-/bibtex | https://papers.nips.cc/paper_files/paper/2020/file/0561bc7ecba98e39ca7994f93311ba23-MetaReview.html | https://papers.nips.cc/paper_files/paper/2020/file/0561bc7ecba98e39ca7994f93311ba23-Paper.pdf | https://papers.nips.cc/paper_files/paper/2020/file/0561bc7ecba98e39ca7994f93311ba23-Review.html | https://papers.nips.cc/paper_files/paper/2020/file/0561bc7ecba98e39ca7994f93311ba23-Supplemental.pdf | A popular model of preference in the context of recommendation systems is the so-called ideal point model. In this model, a user is represented as a vector u together with a collection of items x1 ... xN in a common low-dimensional space. The vector u represents the user's "ideal point," or the ideal combination of fea... |
Efficient Variational Inference for Sparse Deep Learning with Theoretical Guarantee | https://papers.nips.cc/paper_files/paper/2020/hash/05a624166c8eb8273b8464e8d9cb5bd9-Abstract.html | Jincheng Bai, Qifan Song, Guang Cheng | https://papers.nips.cc/paper_files/paper/2020/hash/05a624166c8eb8273b8464e8d9cb5bd9-Abstract.html | NIPS 2020 | https://papers.nips.cc/paper_files/paper/2020/file/05a624166c8eb8273b8464e8d9cb5bd9-AuthorFeedback.pdf | https://papers.nips.cc/paper_files/paper/9764-/bibtex | https://papers.nips.cc/paper_files/paper/2020/file/05a624166c8eb8273b8464e8d9cb5bd9-MetaReview.html | https://papers.nips.cc/paper_files/paper/2020/file/05a624166c8eb8273b8464e8d9cb5bd9-Paper.pdf | https://papers.nips.cc/paper_files/paper/2020/file/05a624166c8eb8273b8464e8d9cb5bd9-Review.html | https://papers.nips.cc/paper_files/paper/2020/file/05a624166c8eb8273b8464e8d9cb5bd9-Supplemental.pdf | Sparse deep learning aims to address the challenge of huge storage consumption by deep neural networks, and to recover the sparse structure of target functions. Although tremendous empirical successes have been achieved, most sparse deep learning algorithms are lacking of theoretical supports. On the other hand, anothe... |
Learning Manifold Implicitly via Explicit Heat-Kernel Learning | https://papers.nips.cc/paper_files/paper/2020/hash/05e2a0647e260c355dd2b2175edb45b8-Abstract.html | Yufan Zhou, Changyou Chen, Jinhui Xu | https://papers.nips.cc/paper_files/paper/2020/hash/05e2a0647e260c355dd2b2175edb45b8-Abstract.html | NIPS 2020 | https://papers.nips.cc/paper_files/paper/2020/file/05e2a0647e260c355dd2b2175edb45b8-AuthorFeedback.pdf | https://papers.nips.cc/paper_files/paper/9765-/bibtex | https://papers.nips.cc/paper_files/paper/2020/file/05e2a0647e260c355dd2b2175edb45b8-MetaReview.html | https://papers.nips.cc/paper_files/paper/2020/file/05e2a0647e260c355dd2b2175edb45b8-Paper.pdf | https://papers.nips.cc/paper_files/paper/2020/file/05e2a0647e260c355dd2b2175edb45b8-Review.html | https://papers.nips.cc/paper_files/paper/2020/file/05e2a0647e260c355dd2b2175edb45b8-Supplemental.pdf | Manifold learning is a fundamental problem in machine learning with numerous applications. Most of the existing methods directly learn the low-dimensional embedding of the data in some high-dimensional space, and usually lack the flexibility of being directly applicable to down-stream applications. In this paper, we pr... |
Deep Relational Topic Modeling via Graph Poisson Gamma Belief Network | https://papers.nips.cc/paper_files/paper/2020/hash/05ee45de8d877c3949760a94fa691533-Abstract.html | Chaojie Wang, Hao Zhang, Bo Chen, Dongsheng Wang, Zhengjue Wang, Mingyuan Zhou | https://papers.nips.cc/paper_files/paper/2020/hash/05ee45de8d877c3949760a94fa691533-Abstract.html | NIPS 2020 | https://papers.nips.cc/paper_files/paper/2020/file/05ee45de8d877c3949760a94fa691533-AuthorFeedback.pdf | https://papers.nips.cc/paper_files/paper/9766-/bibtex | https://papers.nips.cc/paper_files/paper/2020/file/05ee45de8d877c3949760a94fa691533-MetaReview.html | https://papers.nips.cc/paper_files/paper/2020/file/05ee45de8d877c3949760a94fa691533-Paper.pdf | https://papers.nips.cc/paper_files/paper/2020/file/05ee45de8d877c3949760a94fa691533-Review.html | https://papers.nips.cc/paper_files/paper/2020/file/05ee45de8d877c3949760a94fa691533-Supplemental.pdf | To analyze a collection of interconnected documents, relational topic models (RTMs) have been developed to describe both the link structure and document content, exploring their underlying relationships via a single-layer latent representation with limited expressive capability. To better utilize the document network, ... |
One-bit Supervision for Image Classification | https://papers.nips.cc/paper_files/paper/2020/hash/05f971b5ec196b8c65b75d2ef8267331-Abstract.html | Hengtong Hu, Lingxi Xie, Zewei Du, Richang Hong, Qi Tian | https://papers.nips.cc/paper_files/paper/2020/hash/05f971b5ec196b8c65b75d2ef8267331-Abstract.html | NIPS 2020 | https://papers.nips.cc/paper_files/paper/2020/file/05f971b5ec196b8c65b75d2ef8267331-AuthorFeedback.pdf | https://papers.nips.cc/paper_files/paper/9767-/bibtex | https://papers.nips.cc/paper_files/paper/2020/file/05f971b5ec196b8c65b75d2ef8267331-MetaReview.html | https://papers.nips.cc/paper_files/paper/2020/file/05f971b5ec196b8c65b75d2ef8267331-Paper.pdf | https://papers.nips.cc/paper_files/paper/2020/file/05f971b5ec196b8c65b75d2ef8267331-Review.html | https://papers.nips.cc/paper_files/paper/2020/file/05f971b5ec196b8c65b75d2ef8267331-Supplemental.zip | This paper presents one-bit supervision, a novel setting of learning from incomplete annotations, in the scenario of image classification. Instead of training a model upon the accurate label of each sample, our setting requires the model to query with a predicted label of each sample and learn from the answer whether t... |
What is being transferred in transfer learning? | https://papers.nips.cc/paper_files/paper/2020/hash/0607f4c705595b911a4f3e7a127b44e0-Abstract.html | Behnam Neyshabur, Hanie Sedghi, Chiyuan Zhang | https://papers.nips.cc/paper_files/paper/2020/hash/0607f4c705595b911a4f3e7a127b44e0-Abstract.html | NIPS 2020 | https://papers.nips.cc/paper_files/paper/2020/file/0607f4c705595b911a4f3e7a127b44e0-AuthorFeedback.pdf | https://papers.nips.cc/paper_files/paper/9768-/bibtex | https://papers.nips.cc/paper_files/paper/2020/file/0607f4c705595b911a4f3e7a127b44e0-MetaReview.html | https://papers.nips.cc/paper_files/paper/2020/file/0607f4c705595b911a4f3e7a127b44e0-Paper.pdf | https://papers.nips.cc/paper_files/paper/2020/file/0607f4c705595b911a4f3e7a127b44e0-Review.html | https://papers.nips.cc/paper_files/paper/2020/file/0607f4c705595b911a4f3e7a127b44e0-Supplemental.zip | One desired capability for machines is the ability to transfer their understanding of one domain to another domain where data is (usually) scarce. Despite ample adaptation of transfer learning in many deep learning applications, we yet do not understand what enables a successful transfer and which part of the network i... |
Submodular Maximization Through Barrier Functions | https://papers.nips.cc/paper_files/paper/2020/hash/061412e4a03c02f9902576ec55ebbe77-Abstract.html | Ashwinkumar Badanidiyuru, Amin Karbasi, Ehsan Kazemi, Jan Vondrak | https://papers.nips.cc/paper_files/paper/2020/hash/061412e4a03c02f9902576ec55ebbe77-Abstract.html | NIPS 2020 | https://papers.nips.cc/paper_files/paper/2020/file/061412e4a03c02f9902576ec55ebbe77-AuthorFeedback.pdf | https://papers.nips.cc/paper_files/paper/9769-/bibtex | https://papers.nips.cc/paper_files/paper/2020/file/061412e4a03c02f9902576ec55ebbe77-MetaReview.html | https://papers.nips.cc/paper_files/paper/2020/file/061412e4a03c02f9902576ec55ebbe77-Paper.pdf | https://papers.nips.cc/paper_files/paper/2020/file/061412e4a03c02f9902576ec55ebbe77-Review.html | https://papers.nips.cc/paper_files/paper/2020/file/061412e4a03c02f9902576ec55ebbe77-Supplemental.pdf | In this paper, we introduce a novel technique for constrained submodular maximization, inspired by barrier functions in continuous optimization. This connection not only improves the running time for constrained submodular maximization but also provides the state of the art guarantee. More precisely, for maximizing a ... |
Neural Networks with Recurrent Generative Feedback | https://papers.nips.cc/paper_files/paper/2020/hash/0660895c22f8a14eb039bfb9beb0778f-Abstract.html | Yujia Huang, James Gornet, Sihui Dai, Zhiding Yu, Tan Nguyen, Doris Tsao, Anima Anandkumar | https://papers.nips.cc/paper_files/paper/2020/hash/0660895c22f8a14eb039bfb9beb0778f-Abstract.html | NIPS 2020 | https://papers.nips.cc/paper_files/paper/2020/file/0660895c22f8a14eb039bfb9beb0778f-AuthorFeedback.pdf | https://papers.nips.cc/paper_files/paper/9770-/bibtex | https://papers.nips.cc/paper_files/paper/2020/file/0660895c22f8a14eb039bfb9beb0778f-MetaReview.html | https://papers.nips.cc/paper_files/paper/2020/file/0660895c22f8a14eb039bfb9beb0778f-Paper.pdf | https://papers.nips.cc/paper_files/paper/2020/file/0660895c22f8a14eb039bfb9beb0778f-Review.html | https://papers.nips.cc/paper_files/paper/2020/file/0660895c22f8a14eb039bfb9beb0778f-Supplemental.pdf | Neural networks are vulnerable to input perturbations such as additive noise and adversarial attacks. In contrast, human perception is much more robust to such perturbations. The Bayesian brain hypothesis states that human brains use an internal generative model to update the posterior beliefs of the sensory input. Thi... |
Learning to Extrapolate Knowledge: Transductive Few-shot Out-of-Graph Link Prediction | https://papers.nips.cc/paper_files/paper/2020/hash/0663a4ddceacb40b095eda264a85f15c-Abstract.html | Jinheon Baek, Dong Bok Lee, Sung Ju Hwang | https://papers.nips.cc/paper_files/paper/2020/hash/0663a4ddceacb40b095eda264a85f15c-Abstract.html | NIPS 2020 | https://papers.nips.cc/paper_files/paper/2020/file/0663a4ddceacb40b095eda264a85f15c-AuthorFeedback.pdf | https://papers.nips.cc/paper_files/paper/9771-/bibtex | https://papers.nips.cc/paper_files/paper/2020/file/0663a4ddceacb40b095eda264a85f15c-MetaReview.html | https://papers.nips.cc/paper_files/paper/2020/file/0663a4ddceacb40b095eda264a85f15c-Paper.pdf | https://papers.nips.cc/paper_files/paper/2020/file/0663a4ddceacb40b095eda264a85f15c-Review.html | https://papers.nips.cc/paper_files/paper/2020/file/0663a4ddceacb40b095eda264a85f15c-Supplemental.zip | Many practical graph problems, such as knowledge graph construction and drug-drug interaction prediction, require to handle multi-relational graphs. However, handling real-world multi-relational graphs with Graph Neural Networks (GNNs) is often challenging due to their evolving nature, as new entities (nodes) can emerg... |
Exploiting weakly supervised visual patterns to learn from partial annotations | https://papers.nips.cc/paper_files/paper/2020/hash/066ca7bf90807fcd8e4f1eaef4e4e8f7-Abstract.html | Kaustav Kundu, Joseph Tighe | https://papers.nips.cc/paper_files/paper/2020/hash/066ca7bf90807fcd8e4f1eaef4e4e8f7-Abstract.html | NIPS 2020 | https://papers.nips.cc/paper_files/paper/2020/file/066ca7bf90807fcd8e4f1eaef4e4e8f7-AuthorFeedback.pdf | https://papers.nips.cc/paper_files/paper/9772-/bibtex | https://papers.nips.cc/paper_files/paper/2020/file/066ca7bf90807fcd8e4f1eaef4e4e8f7-MetaReview.html | https://papers.nips.cc/paper_files/paper/2020/file/066ca7bf90807fcd8e4f1eaef4e4e8f7-Paper.pdf | https://papers.nips.cc/paper_files/paper/2020/file/066ca7bf90807fcd8e4f1eaef4e4e8f7-Review.html | https://papers.nips.cc/paper_files/paper/2020/file/066ca7bf90807fcd8e4f1eaef4e4e8f7-Supplemental.pdf | As classifications datasets progressively get larger in terms of label space and number of examples, annotating them with all labels becomes non-trivial and expensive task. For example, annotating the entire OpenImage test set can cost $6.5M. Hence, in current large-scale benchmarks such as OpenImages and LVIS, less th... |
Improving Inference for Neural Image Compression | https://papers.nips.cc/paper_files/paper/2020/hash/066f182b787111ed4cb65ed437f0855b-Abstract.html | Yibo Yang, Robert Bamler, Stephan Mandt | https://papers.nips.cc/paper_files/paper/2020/hash/066f182b787111ed4cb65ed437f0855b-Abstract.html | NIPS 2020 | https://papers.nips.cc/paper_files/paper/2020/file/066f182b787111ed4cb65ed437f0855b-AuthorFeedback.pdf | https://papers.nips.cc/paper_files/paper/9773-/bibtex | https://papers.nips.cc/paper_files/paper/2020/file/066f182b787111ed4cb65ed437f0855b-MetaReview.html | https://papers.nips.cc/paper_files/paper/2020/file/066f182b787111ed4cb65ed437f0855b-Paper.pdf | https://papers.nips.cc/paper_files/paper/2020/file/066f182b787111ed4cb65ed437f0855b-Review.html | https://papers.nips.cc/paper_files/paper/2020/file/066f182b787111ed4cb65ed437f0855b-Supplemental.pdf | We consider the problem of lossy image compression with deep latent variable models. State-of-the-art methods build on hierarchical variational autoencoders (VAEs) and learn inference networks to predict a compressible latent representation of each data point. Drawing on the variational inference perspective on compres... |
Neuron Merging: Compensating for Pruned Neurons | https://papers.nips.cc/paper_files/paper/2020/hash/0678ca2eae02d542cc931e81b74de122-Abstract.html | Woojeong Kim, Suhyun Kim, Mincheol Park, Geunseok Jeon | https://papers.nips.cc/paper_files/paper/2020/hash/0678ca2eae02d542cc931e81b74de122-Abstract.html | NIPS 2020 | https://papers.nips.cc/paper_files/paper/2020/file/0678ca2eae02d542cc931e81b74de122-AuthorFeedback.pdf | https://papers.nips.cc/paper_files/paper/9774-/bibtex | https://papers.nips.cc/paper_files/paper/2020/file/0678ca2eae02d542cc931e81b74de122-MetaReview.html | https://papers.nips.cc/paper_files/paper/2020/file/0678ca2eae02d542cc931e81b74de122-Paper.pdf | https://papers.nips.cc/paper_files/paper/2020/file/0678ca2eae02d542cc931e81b74de122-Review.html | https://papers.nips.cc/paper_files/paper/2020/file/0678ca2eae02d542cc931e81b74de122-Supplemental.pdf | Network pruning is widely used to lighten and accelerate neural network models. Structured network pruning discards the whole neuron or filter, leading to accuracy loss. In this work, we propose a novel concept of neuron merging applicable to both fully connected layers and convolution layers, which compensates for the... |
FixMatch: Simplifying Semi-Supervised Learning with Consistency and Confidence | https://papers.nips.cc/paper_files/paper/2020/hash/06964dce9addb1c5cb5d6e3d9838f733-Abstract.html | Kihyuk Sohn, David Berthelot, Nicholas Carlini, Zizhao Zhang, Han Zhang, Colin A. Raffel, Ekin Dogus Cubuk, Alexey Kurakin, Chun-Liang Li | https://papers.nips.cc/paper_files/paper/2020/hash/06964dce9addb1c5cb5d6e3d9838f733-Abstract.html | NIPS 2020 | https://papers.nips.cc/paper_files/paper/2020/file/06964dce9addb1c5cb5d6e3d9838f733-AuthorFeedback.pdf | https://papers.nips.cc/paper_files/paper/9775-/bibtex | https://papers.nips.cc/paper_files/paper/2020/file/06964dce9addb1c5cb5d6e3d9838f733-MetaReview.html | https://papers.nips.cc/paper_files/paper/2020/file/06964dce9addb1c5cb5d6e3d9838f733-Paper.pdf | https://papers.nips.cc/paper_files/paper/2020/file/06964dce9addb1c5cb5d6e3d9838f733-Review.html | https://papers.nips.cc/paper_files/paper/2020/file/06964dce9addb1c5cb5d6e3d9838f733-Supplemental.pdf | Semi-supervised learning (SSL) provides an effective means of leveraging unlabeled data to improve a model’s performance. This domain has seen fast progress recently, at the cost of requiring more complex methods. In this paper we propose FixMatch, an algorithm that is a significant simplification of existing SSL metho... |
Reinforcement Learning with Combinatorial Actions: An Application to Vehicle Routing | https://papers.nips.cc/paper_files/paper/2020/hash/06a9d51e04213572ef0720dd27a84792-Abstract.html | Arthur Delarue, Ross Anderson, Christian Tjandraatmadja | https://papers.nips.cc/paper_files/paper/2020/hash/06a9d51e04213572ef0720dd27a84792-Abstract.html | NIPS 2020 | https://papers.nips.cc/paper_files/paper/2020/file/06a9d51e04213572ef0720dd27a84792-AuthorFeedback.pdf | https://papers.nips.cc/paper_files/paper/9776-/bibtex | https://papers.nips.cc/paper_files/paper/2020/file/06a9d51e04213572ef0720dd27a84792-MetaReview.html | https://papers.nips.cc/paper_files/paper/2020/file/06a9d51e04213572ef0720dd27a84792-Paper.pdf | https://papers.nips.cc/paper_files/paper/2020/file/06a9d51e04213572ef0720dd27a84792-Review.html | https://papers.nips.cc/paper_files/paper/2020/file/06a9d51e04213572ef0720dd27a84792-Supplemental.zip | Value-function-based methods have long played an important role in reinforcement learning. However, finding the best next action given a value function of arbitrary complexity is nontrivial when the action space is too large for enumeration. We develop a framework for value-function-based deep reinforcement learning wi... |
Towards Playing Full MOBA Games with Deep Reinforcement Learning | https://papers.nips.cc/paper_files/paper/2020/hash/06d5ae105ea1bea4d800bc96491876e9-Abstract.html | Deheng Ye, Guibin Chen, Wen Zhang, Sheng Chen, Bo Yuan, Bo Liu, Jia Chen, Zhao Liu, Fuhao Qiu, Hongsheng Yu, Yinyuting Yin, Bei Shi, Liang Wang, Tengfei Shi, Qiang Fu, Wei Yang, Lanxiao Huang, Wei Liu | https://papers.nips.cc/paper_files/paper/2020/hash/06d5ae105ea1bea4d800bc96491876e9-Abstract.html | NIPS 2020 | https://papers.nips.cc/paper_files/paper/2020/file/06d5ae105ea1bea4d800bc96491876e9-AuthorFeedback.pdf | https://papers.nips.cc/paper_files/paper/9777-/bibtex | https://papers.nips.cc/paper_files/paper/2020/file/06d5ae105ea1bea4d800bc96491876e9-MetaReview.html | https://papers.nips.cc/paper_files/paper/2020/file/06d5ae105ea1bea4d800bc96491876e9-Paper.pdf | https://papers.nips.cc/paper_files/paper/2020/file/06d5ae105ea1bea4d800bc96491876e9-Review.html | https://papers.nips.cc/paper_files/paper/2020/file/06d5ae105ea1bea4d800bc96491876e9-Supplemental.pdf | MOBA games, e.g., Honor of Kings, League of Legends, and Dota 2, pose grand challenges to AI systems such as multi-agent, enormous state-action space, complex action control, etc. Developing AI for playing MOBA games has raised much attention accordingly. However, existing work falls short in handling the raw game comp... |
Rankmax: An Adaptive Projection Alternative to the Softmax Function | https://papers.nips.cc/paper_files/paper/2020/hash/070dbb6024b5ef93784428afc71f2146-Abstract.html | Weiwei Kong, Walid Krichene, Nicolas Mayoraz, Steffen Rendle, Li Zhang | https://papers.nips.cc/paper_files/paper/2020/hash/070dbb6024b5ef93784428afc71f2146-Abstract.html | NIPS 2020 | https://papers.nips.cc/paper_files/paper/2020/file/070dbb6024b5ef93784428afc71f2146-AuthorFeedback.pdf | https://papers.nips.cc/paper_files/paper/9778-/bibtex | https://papers.nips.cc/paper_files/paper/2020/file/070dbb6024b5ef93784428afc71f2146-MetaReview.html | https://papers.nips.cc/paper_files/paper/2020/file/070dbb6024b5ef93784428afc71f2146-Paper.pdf | https://papers.nips.cc/paper_files/paper/2020/file/070dbb6024b5ef93784428afc71f2146-Review.html | https://papers.nips.cc/paper_files/paper/2020/file/070dbb6024b5ef93784428afc71f2146-Supplemental.pdf | Several machine learning models involve mapping a score vector to a probability vector. Usually, this is done by projecting the score vector onto a probability simplex, and such projections are often characterized as Lipschitz continuous approximations of the argmax function, whose Lipschitz constant is controlled by a... |
Online Agnostic Boosting via Regret Minimization | https://papers.nips.cc/paper_files/paper/2020/hash/07168af6cb0ef9f78dae15739dd73255-Abstract.html | Nataly Brukhim, Xinyi Chen, Elad Hazan, Shay Moran | https://papers.nips.cc/paper_files/paper/2020/hash/07168af6cb0ef9f78dae15739dd73255-Abstract.html | NIPS 2020 | https://papers.nips.cc/paper_files/paper/2020/file/07168af6cb0ef9f78dae15739dd73255-AuthorFeedback.pdf | https://papers.nips.cc/paper_files/paper/9779-/bibtex | https://papers.nips.cc/paper_files/paper/2020/file/07168af6cb0ef9f78dae15739dd73255-MetaReview.html | https://papers.nips.cc/paper_files/paper/2020/file/07168af6cb0ef9f78dae15739dd73255-Paper.pdf | https://papers.nips.cc/paper_files/paper/2020/file/07168af6cb0ef9f78dae15739dd73255-Review.html | https://papers.nips.cc/paper_files/paper/2020/file/07168af6cb0ef9f78dae15739dd73255-Supplemental.pdf | Boosting is a widely used machine learning approach based on the idea of aggregating weak learning rules. While in statistical learning numerous boosting methods exist both in the realizable and agnostic settings, in online learning they exist only in the realizable case. In this work we provide the first agnostic onli... |
Causal Intervention for Weakly-Supervised Semantic Segmentation | https://papers.nips.cc/paper_files/paper/2020/hash/07211688a0869d995947a8fb11b215d6-Abstract.html | Dong Zhang, Hanwang Zhang, Jinhui Tang, Xian-Sheng Hua, Qianru Sun | https://papers.nips.cc/paper_files/paper/2020/hash/07211688a0869d995947a8fb11b215d6-Abstract.html | NIPS 2020 | https://papers.nips.cc/paper_files/paper/2020/file/07211688a0869d995947a8fb11b215d6-AuthorFeedback.pdf | https://papers.nips.cc/paper_files/paper/9780-/bibtex | https://papers.nips.cc/paper_files/paper/2020/file/07211688a0869d995947a8fb11b215d6-MetaReview.html | https://papers.nips.cc/paper_files/paper/2020/file/07211688a0869d995947a8fb11b215d6-Paper.pdf | https://papers.nips.cc/paper_files/paper/2020/file/07211688a0869d995947a8fb11b215d6-Review.html | https://papers.nips.cc/paper_files/paper/2020/file/07211688a0869d995947a8fb11b215d6-Supplemental.pdf | We present a causal inference framework to improve Weakly-Supervised Semantic Segmentation (WSSS). Specifically, we aim to generate better pixel-level pseudo-masks by using only image-level labels -- the most crucial step in WSSS. We attribute the cause of the ambiguous boundaries of pseudo-masks to the confounding con... |
Belief Propagation Neural Networks | https://papers.nips.cc/paper_files/paper/2020/hash/07217414eb3fbe24d4e5b6cafb91ca18-Abstract.html | Jonathan Kuck, Shuvam Chakraborty, Hao Tang, Rachel Luo, Jiaming Song, Ashish Sabharwal, Stefano Ermon | https://papers.nips.cc/paper_files/paper/2020/hash/07217414eb3fbe24d4e5b6cafb91ca18-Abstract.html | NIPS 2020 | https://papers.nips.cc/paper_files/paper/2020/file/07217414eb3fbe24d4e5b6cafb91ca18-AuthorFeedback.pdf | https://papers.nips.cc/paper_files/paper/9781-/bibtex | https://papers.nips.cc/paper_files/paper/2020/file/07217414eb3fbe24d4e5b6cafb91ca18-MetaReview.html | https://papers.nips.cc/paper_files/paper/2020/file/07217414eb3fbe24d4e5b6cafb91ca18-Paper.pdf | https://papers.nips.cc/paper_files/paper/2020/file/07217414eb3fbe24d4e5b6cafb91ca18-Review.html | https://papers.nips.cc/paper_files/paper/2020/file/07217414eb3fbe24d4e5b6cafb91ca18-Supplemental.pdf | Learned neural solvers have successfully been used to solve combinatorial optimization and decision problems. More general counting variants of these problems, however, are still largely solved with hand-crafted solvers. To bridge this gap, we introduce belief propagation neural networks (BPNNs), a class of parameteriz... |
Over-parameterized Adversarial Training: An Analysis Overcoming the Curse of Dimensionality | https://papers.nips.cc/paper_files/paper/2020/hash/0740bb92e583cd2b88ec7c59f985cb41-Abstract.html | Yi Zhang, Orestis Plevrakis, Simon S. Du, Xingguo Li, Zhao Song, Sanjeev Arora | https://papers.nips.cc/paper_files/paper/2020/hash/0740bb92e583cd2b88ec7c59f985cb41-Abstract.html | NIPS 2020 | https://papers.nips.cc/paper_files/paper/2020/file/0740bb92e583cd2b88ec7c59f985cb41-AuthorFeedback.pdf | https://papers.nips.cc/paper_files/paper/9782-/bibtex | https://papers.nips.cc/paper_files/paper/2020/file/0740bb92e583cd2b88ec7c59f985cb41-MetaReview.html | https://papers.nips.cc/paper_files/paper/2020/file/0740bb92e583cd2b88ec7c59f985cb41-Paper.pdf | https://papers.nips.cc/paper_files/paper/2020/file/0740bb92e583cd2b88ec7c59f985cb41-Review.html | https://papers.nips.cc/paper_files/paper/2020/file/0740bb92e583cd2b88ec7c59f985cb41-Supplemental.pdf | Adversarial training is a popular method to give neural nets robustness against adversarial perturbations. In practice adversarial training leads to low robust training loss. However, a rigorous explanation for why this happens under natural conditions is still missing. Recently a convergence theory of standard (non-ad... |
Post-training Iterative Hierarchical Data Augmentation for Deep Networks | https://papers.nips.cc/paper_files/paper/2020/hash/074177d3eb6371e32c16c55a3b8f706b-Abstract.html | Adil Khan, Khadija Fraz | https://papers.nips.cc/paper_files/paper/2020/hash/074177d3eb6371e32c16c55a3b8f706b-Abstract.html | NIPS 2020 | https://papers.nips.cc/paper_files/paper/2020/file/074177d3eb6371e32c16c55a3b8f706b-AuthorFeedback.pdf | https://papers.nips.cc/paper_files/paper/9783-/bibtex | https://papers.nips.cc/paper_files/paper/2020/file/074177d3eb6371e32c16c55a3b8f706b-MetaReview.html | https://papers.nips.cc/paper_files/paper/2020/file/074177d3eb6371e32c16c55a3b8f706b-Paper.pdf | https://papers.nips.cc/paper_files/paper/2020/file/074177d3eb6371e32c16c55a3b8f706b-Review.html | https://papers.nips.cc/paper_files/paper/2020/file/074177d3eb6371e32c16c55a3b8f706b-Supplemental.pdf | In this paper, we propose a new iterative hierarchical data augmentation (IHDA) method to fine-tune trained deep neural networks to improve their generalization performance. The IHDA is motivated by three key insights: (1) Deep networks (DNs) are good at learning multi-level representations from data. (2) Performing da... |
Debugging Tests for Model Explanations | https://papers.nips.cc/paper_files/paper/2020/hash/075b051ec3d22dac7b33f788da631fd4-Abstract.html | Julius Adebayo, Michael Muelly, Ilaria Liccardi, Been Kim | https://papers.nips.cc/paper_files/paper/2020/hash/075b051ec3d22dac7b33f788da631fd4-Abstract.html | NIPS 2020 | https://papers.nips.cc/paper_files/paper/2020/file/075b051ec3d22dac7b33f788da631fd4-AuthorFeedback.pdf | https://papers.nips.cc/paper_files/paper/9784-/bibtex | https://papers.nips.cc/paper_files/paper/2020/file/075b051ec3d22dac7b33f788da631fd4-MetaReview.html | https://papers.nips.cc/paper_files/paper/2020/file/075b051ec3d22dac7b33f788da631fd4-Paper.pdf | https://papers.nips.cc/paper_files/paper/2020/file/075b051ec3d22dac7b33f788da631fd4-Review.html | https://papers.nips.cc/paper_files/paper/2020/file/075b051ec3d22dac7b33f788da631fd4-Supplemental.pdf | We investigate whether post-hoc model explanations are effective for diagnosing model errors--model debugging. In response to the challenge of explaining a model's prediction, a vast array of explanation methods have been proposed. Despite increasing use, it is unclear if they are effective. To start, we categorize \t... |
Robust compressed sensing using generative models | https://papers.nips.cc/paper_files/paper/2020/hash/07cb5f86508f146774a2fac4373a8e50-Abstract.html | Ajil Jalal, Liu Liu, Alexandros G. Dimakis, Constantine Caramanis | https://papers.nips.cc/paper_files/paper/2020/hash/07cb5f86508f146774a2fac4373a8e50-Abstract.html | NIPS 2020 | https://papers.nips.cc/paper_files/paper/2020/file/07cb5f86508f146774a2fac4373a8e50-AuthorFeedback.pdf | https://papers.nips.cc/paper_files/paper/9785-/bibtex | https://papers.nips.cc/paper_files/paper/2020/file/07cb5f86508f146774a2fac4373a8e50-MetaReview.html | https://papers.nips.cc/paper_files/paper/2020/file/07cb5f86508f146774a2fac4373a8e50-Paper.pdf | https://papers.nips.cc/paper_files/paper/2020/file/07cb5f86508f146774a2fac4373a8e50-Review.html | https://papers.nips.cc/paper_files/paper/2020/file/07cb5f86508f146774a2fac4373a8e50-Supplemental.pdf | We consider estimating a high dimensional signal in $\R^n$ using a sublinear number of linear measurements. In analogy to classical compressed sensing, here we assume a generative model as a prior, that is, we assume the signal is represented by a deep generative model $G: \R^k \rightarrow \R^n$. Classical recovery app... |
Fairness without Demographics through Adversarially Reweighted Learning | https://papers.nips.cc/paper_files/paper/2020/hash/07fc15c9d169ee48573edd749d25945d-Abstract.html | Preethi Lahoti, Alex Beutel, Jilin Chen, Kang Lee, Flavien Prost, Nithum Thain, Xuezhi Wang, Ed Chi | https://papers.nips.cc/paper_files/paper/2020/hash/07fc15c9d169ee48573edd749d25945d-Abstract.html | NIPS 2020 | https://papers.nips.cc/paper_files/paper/2020/file/07fc15c9d169ee48573edd749d25945d-AuthorFeedback.pdf | https://papers.nips.cc/paper_files/paper/9786-/bibtex | https://papers.nips.cc/paper_files/paper/2020/file/07fc15c9d169ee48573edd749d25945d-MetaReview.html | https://papers.nips.cc/paper_files/paper/2020/file/07fc15c9d169ee48573edd749d25945d-Paper.pdf | https://papers.nips.cc/paper_files/paper/2020/file/07fc15c9d169ee48573edd749d25945d-Review.html | https://papers.nips.cc/paper_files/paper/2020/file/07fc15c9d169ee48573edd749d25945d-Supplemental.zip | Much of the previous machine learning (ML) fairness literature assumes that protected features such as race and sex are present in the dataset, and relies upon them to mitigate fairness concerns. However, in practice factors like privacy and regulation often preclude the collection of protected features, or their use f... |
Stochastic Latent Actor-Critic: Deep Reinforcement Learning with a Latent Variable Model | https://papers.nips.cc/paper_files/paper/2020/hash/08058bf500242562c0d031ff830ad094-Abstract.html | Alex X. Lee, Anusha Nagabandi, Pieter Abbeel, Sergey Levine | https://papers.nips.cc/paper_files/paper/2020/hash/08058bf500242562c0d031ff830ad094-Abstract.html | NIPS 2020 | https://papers.nips.cc/paper_files/paper/2020/file/08058bf500242562c0d031ff830ad094-AuthorFeedback.pdf | https://papers.nips.cc/paper_files/paper/9787-/bibtex | https://papers.nips.cc/paper_files/paper/2020/file/08058bf500242562c0d031ff830ad094-MetaReview.html | https://papers.nips.cc/paper_files/paper/2020/file/08058bf500242562c0d031ff830ad094-Paper.pdf | https://papers.nips.cc/paper_files/paper/2020/file/08058bf500242562c0d031ff830ad094-Review.html | https://papers.nips.cc/paper_files/paper/2020/file/08058bf500242562c0d031ff830ad094-Supplemental.pdf | Deep reinforcement learning (RL) algorithms can use high-capacity deep networks to learn directly from image observations. However, these high-dimensional observation spaces present a number of challenges in practice, since the policy must now solve two problems: representation learning and task learning. In this work... |
Ridge Rider: Finding Diverse Solutions by Following Eigenvectors of the Hessian | https://papers.nips.cc/paper_files/paper/2020/hash/08425b881bcde94a383cd258cea331be-Abstract.html | Jack Parker-Holder, Luke Metz, Cinjon Resnick, Hengyuan Hu, Adam Lerer, Alistair Letcher, Alexander Peysakhovich, Aldo Pacchiano, Jakob Foerster | https://papers.nips.cc/paper_files/paper/2020/hash/08425b881bcde94a383cd258cea331be-Abstract.html | NIPS 2020 | https://papers.nips.cc/paper_files/paper/2020/file/08425b881bcde94a383cd258cea331be-AuthorFeedback.pdf | https://papers.nips.cc/paper_files/paper/9788-/bibtex | https://papers.nips.cc/paper_files/paper/2020/file/08425b881bcde94a383cd258cea331be-MetaReview.html | https://papers.nips.cc/paper_files/paper/2020/file/08425b881bcde94a383cd258cea331be-Paper.pdf | https://papers.nips.cc/paper_files/paper/2020/file/08425b881bcde94a383cd258cea331be-Review.html | https://papers.nips.cc/paper_files/paper/2020/file/08425b881bcde94a383cd258cea331be-Supplemental.pdf | Over the last decade, a single algorithm has changed many facets of our lives - Stochastic Gradient Descent (SGD). In the era of ever decreasing loss functions, SGD and its various offspring have become the go-to optimization tool in machine learning and are a key component of the success of deep neural networks (DNNs)... |
The route to chaos in routing games: When is price of anarchy too optimistic? | https://papers.nips.cc/paper_files/paper/2020/hash/0887f1a5b9970ad13f46b8c1485f7900-Abstract.html | Thiparat Chotibut, Fryderyk Falniowski, Michał Misiurewicz, Georgios Piliouras | https://papers.nips.cc/paper_files/paper/2020/hash/0887f1a5b9970ad13f46b8c1485f7900-Abstract.html | NIPS 2020 | https://papers.nips.cc/paper_files/paper/2020/file/0887f1a5b9970ad13f46b8c1485f7900-AuthorFeedback.pdf | https://papers.nips.cc/paper_files/paper/9789-/bibtex | https://papers.nips.cc/paper_files/paper/2020/file/0887f1a5b9970ad13f46b8c1485f7900-MetaReview.html | https://papers.nips.cc/paper_files/paper/2020/file/0887f1a5b9970ad13f46b8c1485f7900-Paper.pdf | https://papers.nips.cc/paper_files/paper/2020/file/0887f1a5b9970ad13f46b8c1485f7900-Review.html | https://papers.nips.cc/paper_files/paper/2020/file/0887f1a5b9970ad13f46b8c1485f7900-Supplemental.pdf | Routing games are amongst the most studied classes of games in game theory. Their most well-known property is that learning dynamics typically converge to equilibria implying approximately optimal performance (low Price of Anarchy). We perform a stress test for these classic results by studying the ubiquitous learning ... |
Online Algorithm for Unsupervised Sequential Selection with Contextual Information | https://papers.nips.cc/paper_files/paper/2020/hash/08e5d8066881eab185d0de9db3b36c7f-Abstract.html | Arun Verma, Manjesh Kumar Hanawal, Csaba Szepesvari, Venkatesh Saligrama | https://papers.nips.cc/paper_files/paper/2020/hash/08e5d8066881eab185d0de9db3b36c7f-Abstract.html | NIPS 2020 | https://papers.nips.cc/paper_files/paper/2020/file/08e5d8066881eab185d0de9db3b36c7f-AuthorFeedback.pdf | https://papers.nips.cc/paper_files/paper/9790-/bibtex | https://papers.nips.cc/paper_files/paper/2020/file/08e5d8066881eab185d0de9db3b36c7f-MetaReview.html | https://papers.nips.cc/paper_files/paper/2020/file/08e5d8066881eab185d0de9db3b36c7f-Paper.pdf | https://papers.nips.cc/paper_files/paper/2020/file/08e5d8066881eab185d0de9db3b36c7f-Review.html | https://papers.nips.cc/paper_files/paper/2020/file/08e5d8066881eab185d0de9db3b36c7f-Supplemental.pdf | In this paper, we study Contextual Unsupervised Sequential Selection (USS), a new variant of the stochastic contextual bandits problem where the loss of an arm cannot be inferred from the observed feedback. In our setup, arms are associated with fixed costs and are ordered, forming a cascade. In each round, a context i... |
Adapting Neural Architectures Between Domains | https://papers.nips.cc/paper_files/paper/2020/hash/08f38e0434442128fab5ead6217ca759-Abstract.html | Yanxi Li, Zhaohui Yang, Yunhe Wang, Chang Xu | https://papers.nips.cc/paper_files/paper/2020/hash/08f38e0434442128fab5ead6217ca759-Abstract.html | NIPS 2020 | https://papers.nips.cc/paper_files/paper/2020/file/08f38e0434442128fab5ead6217ca759-AuthorFeedback.pdf | https://papers.nips.cc/paper_files/paper/9791-/bibtex | https://papers.nips.cc/paper_files/paper/2020/file/08f38e0434442128fab5ead6217ca759-MetaReview.html | https://papers.nips.cc/paper_files/paper/2020/file/08f38e0434442128fab5ead6217ca759-Paper.pdf | https://papers.nips.cc/paper_files/paper/2020/file/08f38e0434442128fab5ead6217ca759-Review.html | https://papers.nips.cc/paper_files/paper/2020/file/08f38e0434442128fab5ead6217ca759-Supplemental.pdf | Neural architecture search (NAS) has demonstrated impressive performance in automatically designing high-performance neural networks. The power of deep neural networks is to be unleashed for analyzing a large volume of data (e.g. ImageNet), but the architecture search is often executed on another smaller dataset (e.g. ... |
What went wrong and when? Instance-wise feature importance for time-series black-box models | https://papers.nips.cc/paper_files/paper/2020/hash/08fa43588c2571ade19bc0fa5936e028-Abstract.html | Sana Tonekaboni, Shalmali Joshi, Kieran Campbell, David K. Duvenaud, Anna Goldenberg | https://papers.nips.cc/paper_files/paper/2020/hash/08fa43588c2571ade19bc0fa5936e028-Abstract.html | NIPS 2020 | https://papers.nips.cc/paper_files/paper/2020/file/08fa43588c2571ade19bc0fa5936e028-AuthorFeedback.pdf | https://papers.nips.cc/paper_files/paper/9792-/bibtex | https://papers.nips.cc/paper_files/paper/2020/file/08fa43588c2571ade19bc0fa5936e028-MetaReview.html | https://papers.nips.cc/paper_files/paper/2020/file/08fa43588c2571ade19bc0fa5936e028-Paper.pdf | https://papers.nips.cc/paper_files/paper/2020/file/08fa43588c2571ade19bc0fa5936e028-Review.html | https://papers.nips.cc/paper_files/paper/2020/file/08fa43588c2571ade19bc0fa5936e028-Supplemental.pdf | Explanations of time series models are useful for high stakes applications like healthcare but have received little attention in machine learning literature. We propose FIT, a framework that evaluates the importance of observations for a multivariate time-series black-box model by quantifying the shift in the predictiv... |
Towards Better Generalization of Adaptive Gradient Methods | https://papers.nips.cc/paper_files/paper/2020/hash/08fb104b0f2f838f3ce2d2b3741a12c2-Abstract.html | Yingxue Zhou, Belhal Karimi, Jinxing Yu, Zhiqiang Xu, Ping Li | https://papers.nips.cc/paper_files/paper/2020/hash/08fb104b0f2f838f3ce2d2b3741a12c2-Abstract.html | NIPS 2020 | https://papers.nips.cc/paper_files/paper/2020/file/08fb104b0f2f838f3ce2d2b3741a12c2-AuthorFeedback.pdf | https://papers.nips.cc/paper_files/paper/9793-/bibtex | https://papers.nips.cc/paper_files/paper/2020/file/08fb104b0f2f838f3ce2d2b3741a12c2-MetaReview.html | https://papers.nips.cc/paper_files/paper/2020/file/08fb104b0f2f838f3ce2d2b3741a12c2-Paper.pdf | https://papers.nips.cc/paper_files/paper/2020/file/08fb104b0f2f838f3ce2d2b3741a12c2-Review.html | https://papers.nips.cc/paper_files/paper/2020/file/08fb104b0f2f838f3ce2d2b3741a12c2-Supplemental.pdf | Adaptive gradient methods such as AdaGrad, RMSprop and Adam have been optimizers of choice for deep learning due to their fast training speed. However, it was recently observed that their generalization performance is often worse than that of SGD for over-parameterized neural networks. While new algorithms such as AdaB... |
Learning Guidance Rewards with Trajectory-space Smoothing | https://papers.nips.cc/paper_files/paper/2020/hash/0912d0f15f1394268c66639e39b26215-Abstract.html | Tanmay Gangwani, Yuan Zhou, Jian Peng | https://papers.nips.cc/paper_files/paper/2020/hash/0912d0f15f1394268c66639e39b26215-Abstract.html | NIPS 2020 | https://papers.nips.cc/paper_files/paper/2020/file/0912d0f15f1394268c66639e39b26215-AuthorFeedback.pdf | https://papers.nips.cc/paper_files/paper/9794-/bibtex | https://papers.nips.cc/paper_files/paper/2020/file/0912d0f15f1394268c66639e39b26215-MetaReview.html | https://papers.nips.cc/paper_files/paper/2020/file/0912d0f15f1394268c66639e39b26215-Paper.pdf | https://papers.nips.cc/paper_files/paper/2020/file/0912d0f15f1394268c66639e39b26215-Review.html | https://papers.nips.cc/paper_files/paper/2020/file/0912d0f15f1394268c66639e39b26215-Supplemental.pdf | Long-term temporal credit assignment is an important challenge in deep reinforcement learning (RL). It refers to the ability of the agent to attribute actions to consequences that may occur after a long time interval. Existing policy-gradient and Q-learning algorithms typically rely on dense environmental rewards that ... |
Variance Reduction via Accelerated Dual Averaging for Finite-Sum Optimization | https://papers.nips.cc/paper_files/paper/2020/hash/093b60fd0557804c8ba0cbf1453da22f-Abstract.html | Chaobing Song, Yong Jiang, Yi Ma | https://papers.nips.cc/paper_files/paper/2020/hash/093b60fd0557804c8ba0cbf1453da22f-Abstract.html | NIPS 2020 | https://papers.nips.cc/paper_files/paper/2020/file/093b60fd0557804c8ba0cbf1453da22f-AuthorFeedback.pdf | https://papers.nips.cc/paper_files/paper/9795-/bibtex | https://papers.nips.cc/paper_files/paper/2020/file/093b60fd0557804c8ba0cbf1453da22f-MetaReview.html | https://papers.nips.cc/paper_files/paper/2020/file/093b60fd0557804c8ba0cbf1453da22f-Paper.pdf | https://papers.nips.cc/paper_files/paper/2020/file/093b60fd0557804c8ba0cbf1453da22f-Review.html | https://papers.nips.cc/paper_files/paper/2020/file/093b60fd0557804c8ba0cbf1453da22f-Supplemental.pdf | In this paper, we introduce a simplified and unified method for finite-sum convex optimization, named \emph{Variance Reduction via Accelerated Dual Averaging (VRADA)}. In the general convex and smooth setting, VRADA can attain an $O\big(\frac{1}{n}\big)$-accurate solution in $O(n\log\log n)$ number of stochastic gradie... |
Tree! I am no Tree! I am a low dimensional Hyperbolic Embedding | https://papers.nips.cc/paper_files/paper/2020/hash/093f65e080a295f8076b1c5722a46aa2-Abstract.html | Rishi Sonthalia, Anna Gilbert | https://papers.nips.cc/paper_files/paper/2020/hash/093f65e080a295f8076b1c5722a46aa2-Abstract.html | NIPS 2020 | https://papers.nips.cc/paper_files/paper/2020/file/093f65e080a295f8076b1c5722a46aa2-AuthorFeedback.pdf | https://papers.nips.cc/paper_files/paper/9796-/bibtex | https://papers.nips.cc/paper_files/paper/2020/file/093f65e080a295f8076b1c5722a46aa2-MetaReview.html | https://papers.nips.cc/paper_files/paper/2020/file/093f65e080a295f8076b1c5722a46aa2-Paper.pdf | https://papers.nips.cc/paper_files/paper/2020/file/093f65e080a295f8076b1c5722a46aa2-Review.html | https://papers.nips.cc/paper_files/paper/2020/file/093f65e080a295f8076b1c5722a46aa2-Supplemental.zip | Given data, finding a faithful low-dimensional hyperbolic embedding of the data is a key method by which we can extract hierarchical information or learn representative geometric features of the data. In this paper, we explore a new method for learning hyperbolic representations by taking a metric-first approach. Rathe... |
Deep Structural Causal Models for Tractable Counterfactual Inference | https://papers.nips.cc/paper_files/paper/2020/hash/0987b8b338d6c90bbedd8631bc499221-Abstract.html | Nick Pawlowski, Daniel Coelho de Castro, Ben Glocker | https://papers.nips.cc/paper_files/paper/2020/hash/0987b8b338d6c90bbedd8631bc499221-Abstract.html | NIPS 2020 | https://papers.nips.cc/paper_files/paper/2020/file/0987b8b338d6c90bbedd8631bc499221-AuthorFeedback.pdf | https://papers.nips.cc/paper_files/paper/9797-/bibtex | https://papers.nips.cc/paper_files/paper/2020/file/0987b8b338d6c90bbedd8631bc499221-MetaReview.html | https://papers.nips.cc/paper_files/paper/2020/file/0987b8b338d6c90bbedd8631bc499221-Paper.pdf | https://papers.nips.cc/paper_files/paper/2020/file/0987b8b338d6c90bbedd8631bc499221-Review.html | https://papers.nips.cc/paper_files/paper/2020/file/0987b8b338d6c90bbedd8631bc499221-Supplemental.zip | We formulate a general framework for building structural causal models (SCMs) with deep learning components. The proposed approach employs normalising flows and variational inference to enable tractable inference of exogenous noise variables - a crucial step for counterfactual inference that is missing from existing de... |
Convolutional Generation of Textured 3D Meshes | https://papers.nips.cc/paper_files/paper/2020/hash/098d86c982354a96556bd861823ebfbd-Abstract.html | Dario Pavllo, Graham Spinks, Thomas Hofmann, Marie-Francine Moens, Aurelien Lucchi | https://papers.nips.cc/paper_files/paper/2020/hash/098d86c982354a96556bd861823ebfbd-Abstract.html | NIPS 2020 | https://papers.nips.cc/paper_files/paper/2020/file/098d86c982354a96556bd861823ebfbd-AuthorFeedback.pdf | https://papers.nips.cc/paper_files/paper/9798-/bibtex | https://papers.nips.cc/paper_files/paper/2020/file/098d86c982354a96556bd861823ebfbd-MetaReview.html | https://papers.nips.cc/paper_files/paper/2020/file/098d86c982354a96556bd861823ebfbd-Paper.pdf | https://papers.nips.cc/paper_files/paper/2020/file/098d86c982354a96556bd861823ebfbd-Review.html | https://papers.nips.cc/paper_files/paper/2020/file/098d86c982354a96556bd861823ebfbd-Supplemental.zip | While recent generative models for 2D images achieve impressive visual results, they clearly lack the ability to perform 3D reasoning. This heavily restricts the degree of control over generated objects as well as the possible applications of such models. In this work, we bridge this gap by leveraging recent advances i... |
A Statistical Framework for Low-bitwidth Training of Deep Neural Networks | https://papers.nips.cc/paper_files/paper/2020/hash/099fe6b0b444c23836c4a5d07346082b-Abstract.html | Jianfei Chen, Yu Gai, Zhewei Yao, Michael W. Mahoney, Joseph E. Gonzalez | https://papers.nips.cc/paper_files/paper/2020/hash/099fe6b0b444c23836c4a5d07346082b-Abstract.html | NIPS 2020 | https://papers.nips.cc/paper_files/paper/2020/file/099fe6b0b444c23836c4a5d07346082b-AuthorFeedback.pdf | https://papers.nips.cc/paper_files/paper/9799-/bibtex | https://papers.nips.cc/paper_files/paper/2020/file/099fe6b0b444c23836c4a5d07346082b-MetaReview.html | https://papers.nips.cc/paper_files/paper/2020/file/099fe6b0b444c23836c4a5d07346082b-Paper.pdf | https://papers.nips.cc/paper_files/paper/2020/file/099fe6b0b444c23836c4a5d07346082b-Review.html | https://papers.nips.cc/paper_files/paper/2020/file/099fe6b0b444c23836c4a5d07346082b-Supplemental.pdf | Fully quantized training (FQT), which uses low-bitwidth hardware by quantizing the activations, weights, and gradients of a neural network model, is a promising approach to accelerate the training of deep neural networks. One major challenge with FQT is the lack of theoretical understanding, in particular of how gradie... |
Better Set Representations For Relational Reasoning | https://papers.nips.cc/paper_files/paper/2020/hash/09ccf3183d9e90e5ae1f425d5f9b2c00-Abstract.html | Qian Huang, Horace He, Abhay Singh, Yan Zhang, Ser Nam Lim, Austin R. Benson | https://papers.nips.cc/paper_files/paper/2020/hash/09ccf3183d9e90e5ae1f425d5f9b2c00-Abstract.html | NIPS 2020 | https://papers.nips.cc/paper_files/paper/2020/file/09ccf3183d9e90e5ae1f425d5f9b2c00-AuthorFeedback.pdf | https://papers.nips.cc/paper_files/paper/9800-/bibtex | https://papers.nips.cc/paper_files/paper/2020/file/09ccf3183d9e90e5ae1f425d5f9b2c00-MetaReview.html | https://papers.nips.cc/paper_files/paper/2020/file/09ccf3183d9e90e5ae1f425d5f9b2c00-Paper.pdf | https://papers.nips.cc/paper_files/paper/2020/file/09ccf3183d9e90e5ae1f425d5f9b2c00-Review.html | https://papers.nips.cc/paper_files/paper/2020/file/09ccf3183d9e90e5ae1f425d5f9b2c00-Supplemental.zip | Incorporating relational reasoning into neural networks has greatly expanded their capabilities and scope. One defining trait of relational reasoning is that it operates on a set of entities, as opposed to standard vector representations. Existing end-to-end approaches for relational reasoning typically extract entitie... |
AutoSync: Learning to Synchronize for Data-Parallel Distributed Deep Learning | https://papers.nips.cc/paper_files/paper/2020/hash/0a2298a72858d90d5c4b4fee954b6896-Abstract.html | Hao Zhang, Yuan Li, Zhijie Deng, Xiaodan Liang, Lawrence Carin, Eric Xing | https://papers.nips.cc/paper_files/paper/2020/hash/0a2298a72858d90d5c4b4fee954b6896-Abstract.html | NIPS 2020 | https://papers.nips.cc/paper_files/paper/2020/file/0a2298a72858d90d5c4b4fee954b6896-AuthorFeedback.pdf | https://papers.nips.cc/paper_files/paper/9801-/bibtex | https://papers.nips.cc/paper_files/paper/2020/file/0a2298a72858d90d5c4b4fee954b6896-MetaReview.html | https://papers.nips.cc/paper_files/paper/2020/file/0a2298a72858d90d5c4b4fee954b6896-Paper.pdf | https://papers.nips.cc/paper_files/paper/2020/file/0a2298a72858d90d5c4b4fee954b6896-Review.html | https://papers.nips.cc/paper_files/paper/2020/file/0a2298a72858d90d5c4b4fee954b6896-Supplemental.pdf | Synchronization is a key step in data-parallel distributed machine learning (ML). Different synchronization systems and strategies perform differently, and to achieve optimal parallel training throughput requires synchronization strategies that adapt to model structures and cluster configurations. Existing synchronizat... |
A Combinatorial Perspective on Transfer Learning | https://papers.nips.cc/paper_files/paper/2020/hash/0a3b6f64f0523984e51323fe53b8c504-Abstract.html | Jianan Wang, Eren Sezener, David Budden, Marcus Hutter, Joel Veness | https://papers.nips.cc/paper_files/paper/2020/hash/0a3b6f64f0523984e51323fe53b8c504-Abstract.html | NIPS 2020 | https://papers.nips.cc/paper_files/paper/2020/file/0a3b6f64f0523984e51323fe53b8c504-AuthorFeedback.pdf | https://papers.nips.cc/paper_files/paper/9802-/bibtex | https://papers.nips.cc/paper_files/paper/2020/file/0a3b6f64f0523984e51323fe53b8c504-MetaReview.html | https://papers.nips.cc/paper_files/paper/2020/file/0a3b6f64f0523984e51323fe53b8c504-Paper.pdf | https://papers.nips.cc/paper_files/paper/2020/file/0a3b6f64f0523984e51323fe53b8c504-Review.html | null | Human intelligence is characterized not only by the capacity to learn complex skills, but the ability to rapidly adapt and acquire new skills within an ever-changing environment. In this work we study how the learning of modular solutions can allow for effective generalization to both unseen and potentially differently... |
Hardness of Learning Neural Networks with Natural Weights | https://papers.nips.cc/paper_files/paper/2020/hash/0a4dc6dae338c9cb08947c07581f77a2-Abstract.html | Amit Daniely, Gal Vardi | https://papers.nips.cc/paper_files/paper/2020/hash/0a4dc6dae338c9cb08947c07581f77a2-Abstract.html | NIPS 2020 | https://papers.nips.cc/paper_files/paper/2020/file/0a4dc6dae338c9cb08947c07581f77a2-AuthorFeedback.pdf | https://papers.nips.cc/paper_files/paper/9803-/bibtex | https://papers.nips.cc/paper_files/paper/2020/file/0a4dc6dae338c9cb08947c07581f77a2-MetaReview.html | https://papers.nips.cc/paper_files/paper/2020/file/0a4dc6dae338c9cb08947c07581f77a2-Paper.pdf | https://papers.nips.cc/paper_files/paper/2020/file/0a4dc6dae338c9cb08947c07581f77a2-Review.html | https://papers.nips.cc/paper_files/paper/2020/file/0a4dc6dae338c9cb08947c07581f77a2-Supplemental.pdf | Neural networks are nowadays highly successful despite strong hardness results. The existing hardness results focus on the network architecture, and assume that the network's weights are arbitrary. A natural approach to settle the discrepancy is to assume that the network's weights are ``well-behaved" and posses some g... |
Higher-Order Spectral Clustering of Directed Graphs | https://papers.nips.cc/paper_files/paper/2020/hash/0a5052334511e344f15ae0bfafd47a67-Abstract.html | Steinar Laenen, He Sun | https://papers.nips.cc/paper_files/paper/2020/hash/0a5052334511e344f15ae0bfafd47a67-Abstract.html | NIPS 2020 | https://papers.nips.cc/paper_files/paper/2020/file/0a5052334511e344f15ae0bfafd47a67-AuthorFeedback.pdf | https://papers.nips.cc/paper_files/paper/9804-/bibtex | https://papers.nips.cc/paper_files/paper/2020/file/0a5052334511e344f15ae0bfafd47a67-MetaReview.html | https://papers.nips.cc/paper_files/paper/2020/file/0a5052334511e344f15ae0bfafd47a67-Paper.pdf | https://papers.nips.cc/paper_files/paper/2020/file/0a5052334511e344f15ae0bfafd47a67-Review.html | https://papers.nips.cc/paper_files/paper/2020/file/0a5052334511e344f15ae0bfafd47a67-Supplemental.pdf | Clustering is an important topic in algorithms, and has a number of applications in machine learning, computer vision, statistics, and several other research disciplines. Traditional objectives of graph clustering are to find clusters with low conductance. Not only are these objectives just applicable for undirected gr... |
Primal-Dual Mesh Convolutional Neural Networks | https://papers.nips.cc/paper_files/paper/2020/hash/0a656cc19f3f5b41530182a9e03982a4-Abstract.html | Francesco Milano, Antonio Loquercio, Antoni Rosinol, Davide Scaramuzza, Luca Carlone | https://papers.nips.cc/paper_files/paper/2020/hash/0a656cc19f3f5b41530182a9e03982a4-Abstract.html | NIPS 2020 | https://papers.nips.cc/paper_files/paper/2020/file/0a656cc19f3f5b41530182a9e03982a4-AuthorFeedback.pdf | https://papers.nips.cc/paper_files/paper/9805-/bibtex | https://papers.nips.cc/paper_files/paper/2020/file/0a656cc19f3f5b41530182a9e03982a4-MetaReview.html | https://papers.nips.cc/paper_files/paper/2020/file/0a656cc19f3f5b41530182a9e03982a4-Paper.pdf | https://papers.nips.cc/paper_files/paper/2020/file/0a656cc19f3f5b41530182a9e03982a4-Review.html | https://papers.nips.cc/paper_files/paper/2020/file/0a656cc19f3f5b41530182a9e03982a4-Supplemental.pdf | Recent works in geometric deep learning have introduced neural networks that allow performing inference tasks on three-dimensional geometric data by defining convolution --and sometimes pooling-- operations on triangle meshes. These methods, however, either consider the input mesh as a graph, and do not exploit specifi... |
The Advantage of Conditional Meta-Learning for Biased Regularization and Fine Tuning | https://papers.nips.cc/paper_files/paper/2020/hash/0a716fe8c7745e51a3185fc8be6ca23a-Abstract.html | Giulia Denevi, Massimiliano Pontil, Carlo Ciliberto | https://papers.nips.cc/paper_files/paper/2020/hash/0a716fe8c7745e51a3185fc8be6ca23a-Abstract.html | NIPS 2020 | https://papers.nips.cc/paper_files/paper/2020/file/0a716fe8c7745e51a3185fc8be6ca23a-AuthorFeedback.pdf | https://papers.nips.cc/paper_files/paper/9806-/bibtex | https://papers.nips.cc/paper_files/paper/2020/file/0a716fe8c7745e51a3185fc8be6ca23a-MetaReview.html | https://papers.nips.cc/paper_files/paper/2020/file/0a716fe8c7745e51a3185fc8be6ca23a-Paper.pdf | https://papers.nips.cc/paper_files/paper/2020/file/0a716fe8c7745e51a3185fc8be6ca23a-Review.html | https://papers.nips.cc/paper_files/paper/2020/file/0a716fe8c7745e51a3185fc8be6ca23a-Supplemental.zip | Biased regularization and fine tuning are two recent meta-learning approaches. They have been shown to be effective to tackle distributions of tasks, in which the tasks’ target vectors are all close to a common meta-parameter vector. However, these methods may perform poorly on heterogeneous environments of tasks, wher... |
Watch out! Motion is Blurring the Vision of Your Deep Neural Networks | https://papers.nips.cc/paper_files/paper/2020/hash/0a73de68f10e15626eb98701ecf03adb-Abstract.html | Qing Guo, Felix Juefei-Xu, Xiaofei Xie, Lei Ma, Jian Wang, Bing Yu, Wei Feng, Yang Liu | https://papers.nips.cc/paper_files/paper/2020/hash/0a73de68f10e15626eb98701ecf03adb-Abstract.html | NIPS 2020 | https://papers.nips.cc/paper_files/paper/2020/file/0a73de68f10e15626eb98701ecf03adb-AuthorFeedback.pdf | https://papers.nips.cc/paper_files/paper/9807-/bibtex | https://papers.nips.cc/paper_files/paper/2020/file/0a73de68f10e15626eb98701ecf03adb-MetaReview.html | https://papers.nips.cc/paper_files/paper/2020/file/0a73de68f10e15626eb98701ecf03adb-Paper.pdf | https://papers.nips.cc/paper_files/paper/2020/file/0a73de68f10e15626eb98701ecf03adb-Review.html | https://papers.nips.cc/paper_files/paper/2020/file/0a73de68f10e15626eb98701ecf03adb-Supplemental.pdf | The state-of-the-art deep neural networks (DNNs) are vulnerable against adversarial examples with additive random-like noise perturbations. While such examples are hardly found in the physical world, the image blurring effect caused by object motion, on the other hand, commonly occurs in practice, making the study of w... |
Sinkhorn Barycenter via Functional Gradient Descent | https://papers.nips.cc/paper_files/paper/2020/hash/0a93091da5efb0d9d5649e7f6b2ad9d7-Abstract.html | Zebang Shen, Zhenfu Wang, Alejandro Ribeiro, Hamed Hassani | https://papers.nips.cc/paper_files/paper/2020/hash/0a93091da5efb0d9d5649e7f6b2ad9d7-Abstract.html | NIPS 2020 | https://papers.nips.cc/paper_files/paper/2020/file/0a93091da5efb0d9d5649e7f6b2ad9d7-AuthorFeedback.pdf | https://papers.nips.cc/paper_files/paper/9808-/bibtex | https://papers.nips.cc/paper_files/paper/2020/file/0a93091da5efb0d9d5649e7f6b2ad9d7-MetaReview.html | https://papers.nips.cc/paper_files/paper/2020/file/0a93091da5efb0d9d5649e7f6b2ad9d7-Paper.pdf | https://papers.nips.cc/paper_files/paper/2020/file/0a93091da5efb0d9d5649e7f6b2ad9d7-Review.html | https://papers.nips.cc/paper_files/paper/2020/file/0a93091da5efb0d9d5649e7f6b2ad9d7-Supplemental.pdf | In this paper, we consider the problem of computing the barycenter of a set of probability distributions under the Sinkhorn divergence.
This problem has recently found applications across various domains, including graphics, learning, and vision, as it provides a meaningful mechanism to aggregate knowledge.
Unlike ... |
Coresets for Near-Convex Functions | https://papers.nips.cc/paper_files/paper/2020/hash/0afe095e81a6ac76ff3f69975cb3e7ae-Abstract.html | Murad Tukan, Alaa Maalouf, Dan Feldman | https://papers.nips.cc/paper_files/paper/2020/hash/0afe095e81a6ac76ff3f69975cb3e7ae-Abstract.html | NIPS 2020 | https://papers.nips.cc/paper_files/paper/2020/file/0afe095e81a6ac76ff3f69975cb3e7ae-AuthorFeedback.pdf | https://papers.nips.cc/paper_files/paper/9809-/bibtex | https://papers.nips.cc/paper_files/paper/2020/file/0afe095e81a6ac76ff3f69975cb3e7ae-MetaReview.html | https://papers.nips.cc/paper_files/paper/2020/file/0afe095e81a6ac76ff3f69975cb3e7ae-Paper.pdf | https://papers.nips.cc/paper_files/paper/2020/file/0afe095e81a6ac76ff3f69975cb3e7ae-Review.html | https://papers.nips.cc/paper_files/paper/2020/file/0afe095e81a6ac76ff3f69975cb3e7ae-Supplemental.pdf | Coreset is usually a small weighted subset of $n$ input points in $\mathbb{R}^d$, that provably approximates their loss function for a given set of queries (models, classifiers, etc.). Coresets become increasingly common in machine learning since existing heuristics or inefficient algorithms may be improved by running ... |
Bayesian Deep Ensembles via the Neural Tangent Kernel | https://papers.nips.cc/paper_files/paper/2020/hash/0b1ec366924b26fc98fa7b71a9c249cf-Abstract.html | Bobby He, Balaji Lakshminarayanan, Yee Whye Teh | https://papers.nips.cc/paper_files/paper/2020/hash/0b1ec366924b26fc98fa7b71a9c249cf-Abstract.html | NIPS 2020 | https://papers.nips.cc/paper_files/paper/2020/file/0b1ec366924b26fc98fa7b71a9c249cf-AuthorFeedback.pdf | https://papers.nips.cc/paper_files/paper/9810-/bibtex | https://papers.nips.cc/paper_files/paper/2020/file/0b1ec366924b26fc98fa7b71a9c249cf-MetaReview.html | https://papers.nips.cc/paper_files/paper/2020/file/0b1ec366924b26fc98fa7b71a9c249cf-Paper.pdf | https://papers.nips.cc/paper_files/paper/2020/file/0b1ec366924b26fc98fa7b71a9c249cf-Review.html | https://papers.nips.cc/paper_files/paper/2020/file/0b1ec366924b26fc98fa7b71a9c249cf-Supplemental.pdf | We explore the link between deep ensembles and Gaussian processes (GPs) through the lens of the Neural Tangent Kernel (NTK): a recent development in understanding the training dynamics of wide neural networks (NNs). Previous work has shown that even in the infinite width limit, when NNs become GPs, there is no GP poste... |
Improved Schemes for Episodic Memory-based Lifelong Learning | https://papers.nips.cc/paper_files/paper/2020/hash/0b5e29aa1acf8bdc5d8935d7036fa4f5-Abstract.html | Yunhui Guo, Mingrui Liu, Tianbao Yang, Tajana Rosing | https://papers.nips.cc/paper_files/paper/2020/hash/0b5e29aa1acf8bdc5d8935d7036fa4f5-Abstract.html | NIPS 2020 | https://papers.nips.cc/paper_files/paper/2020/file/0b5e29aa1acf8bdc5d8935d7036fa4f5-AuthorFeedback.pdf | https://papers.nips.cc/paper_files/paper/9811-/bibtex | https://papers.nips.cc/paper_files/paper/2020/file/0b5e29aa1acf8bdc5d8935d7036fa4f5-MetaReview.html | https://papers.nips.cc/paper_files/paper/2020/file/0b5e29aa1acf8bdc5d8935d7036fa4f5-Paper.pdf | https://papers.nips.cc/paper_files/paper/2020/file/0b5e29aa1acf8bdc5d8935d7036fa4f5-Review.html | https://papers.nips.cc/paper_files/paper/2020/file/0b5e29aa1acf8bdc5d8935d7036fa4f5-Supplemental.zip | Current deep neural networks can achieve remarkable performance on a single task. However, when the deep neural network is continually trained on a sequence of tasks, it seems to gradually forget the previous learned knowledge. This phenomenon is referred to as catastrophic forgetting and motivates the field called lif... |
Adaptive Sampling for Stochastic Risk-Averse Learning | https://papers.nips.cc/paper_files/paper/2020/hash/0b6ace9e8971cf36f1782aa982a708db-Abstract.html | Sebastian Curi, Kfir Y. Levy, Stefanie Jegelka, Andreas Krause | https://papers.nips.cc/paper_files/paper/2020/hash/0b6ace9e8971cf36f1782aa982a708db-Abstract.html | NIPS 2020 | https://papers.nips.cc/paper_files/paper/2020/file/0b6ace9e8971cf36f1782aa982a708db-AuthorFeedback.pdf | https://papers.nips.cc/paper_files/paper/9812-/bibtex | https://papers.nips.cc/paper_files/paper/2020/file/0b6ace9e8971cf36f1782aa982a708db-MetaReview.html | https://papers.nips.cc/paper_files/paper/2020/file/0b6ace9e8971cf36f1782aa982a708db-Paper.pdf | https://papers.nips.cc/paper_files/paper/2020/file/0b6ace9e8971cf36f1782aa982a708db-Review.html | https://papers.nips.cc/paper_files/paper/2020/file/0b6ace9e8971cf36f1782aa982a708db-Supplemental.pdf | In high-stakes machine learning applications, it is crucial to not only perform well {\em on average}, but also when restricted to {\em difficult} examples.
To address this, we consider the problem of training models in a risk-averse manner.
We propose an adaptive sampling algorithm for stochastically optimizing the {\... |
Deep Wiener Deconvolution: Wiener Meets Deep Learning for Image Deblurring | https://papers.nips.cc/paper_files/paper/2020/hash/0b8aff0438617c055eb55f0ba5d226fa-Abstract.html | Jiangxin Dong, Stefan Roth, Bernt Schiele | https://papers.nips.cc/paper_files/paper/2020/hash/0b8aff0438617c055eb55f0ba5d226fa-Abstract.html | NIPS 2020 | https://papers.nips.cc/paper_files/paper/2020/file/0b8aff0438617c055eb55f0ba5d226fa-AuthorFeedback.pdf | https://papers.nips.cc/paper_files/paper/9813-/bibtex | https://papers.nips.cc/paper_files/paper/2020/file/0b8aff0438617c055eb55f0ba5d226fa-MetaReview.html | https://papers.nips.cc/paper_files/paper/2020/file/0b8aff0438617c055eb55f0ba5d226fa-Paper.pdf | https://papers.nips.cc/paper_files/paper/2020/file/0b8aff0438617c055eb55f0ba5d226fa-Review.html | https://papers.nips.cc/paper_files/paper/2020/file/0b8aff0438617c055eb55f0ba5d226fa-Supplemental.pdf | We present a simple and effective approach for non-blind image deblurring, combining classical techniques and deep learning. In contrast to existing methods that deblur the image directly in the standard image space, we propose to perform an explicit deconvolution process in a feature space by integrating a classical W... |
Discovering Reinforcement Learning Algorithms | https://papers.nips.cc/paper_files/paper/2020/hash/0b96d81f0494fde5428c7aea243c9157-Abstract.html | Junhyuk Oh, Matteo Hessel, Wojciech M. Czarnecki, Zhongwen Xu, Hado P. van Hasselt, Satinder Singh, David Silver | https://papers.nips.cc/paper_files/paper/2020/hash/0b96d81f0494fde5428c7aea243c9157-Abstract.html | NIPS 2020 | https://papers.nips.cc/paper_files/paper/2020/file/0b96d81f0494fde5428c7aea243c9157-AuthorFeedback.pdf | https://papers.nips.cc/paper_files/paper/9814-/bibtex | https://papers.nips.cc/paper_files/paper/2020/file/0b96d81f0494fde5428c7aea243c9157-MetaReview.html | https://papers.nips.cc/paper_files/paper/2020/file/0b96d81f0494fde5428c7aea243c9157-Paper.pdf | https://papers.nips.cc/paper_files/paper/2020/file/0b96d81f0494fde5428c7aea243c9157-Review.html | https://papers.nips.cc/paper_files/paper/2020/file/0b96d81f0494fde5428c7aea243c9157-Supplemental.pdf | Reinforcement learning (RL) algorithms update an agent’s parameters according to one of several possible rules, discovered manually through years of research. Automating the discovery of update rules from data could lead to more efficient algorithms, or algorithms that are better adapted to specific environments. Altho... |
Taming Discrete Integration via the Boon of Dimensionality | https://papers.nips.cc/paper_files/paper/2020/hash/0baf163c24ed14b515aaf57a9de5501c-Abstract.html | Jeffrey Dudek, Dror Fried, Kuldeep S Meel | https://papers.nips.cc/paper_files/paper/2020/hash/0baf163c24ed14b515aaf57a9de5501c-Abstract.html | NIPS 2020 | https://papers.nips.cc/paper_files/paper/2020/file/0baf163c24ed14b515aaf57a9de5501c-AuthorFeedback.pdf | https://papers.nips.cc/paper_files/paper/9815-/bibtex | https://papers.nips.cc/paper_files/paper/2020/file/0baf163c24ed14b515aaf57a9de5501c-MetaReview.html | https://papers.nips.cc/paper_files/paper/2020/file/0baf163c24ed14b515aaf57a9de5501c-Paper.pdf | https://papers.nips.cc/paper_files/paper/2020/file/0baf163c24ed14b515aaf57a9de5501c-Review.html | https://papers.nips.cc/paper_files/paper/2020/file/0baf163c24ed14b515aaf57a9de5501c-Supplemental.pdf | Building on the promising approach proposed by Chakraborty et al, our work overcomes the key weakness of their approach: a restriction to dyadic weights. We augment our proposed reduction, called DeWeight, with a state of the art efficient approximate model counter and perform detailed empirical analysis over benchmark... |
Neural Information Processing Systems NeurIPS 2020 Accepted Paper Meta Info Dataset
This dataset is collected from the NeurIPS 2020 conference (Advances in Neural Information Processing Systems 33) accepted papers (https://papers.nips.cc/paper_files/paper/2020) as well as the DeepNLP paper arxiv page (http://www.deepnlp.org/content/paper/nips2020). Researchers interested in analyzing NIPS 2020 accepted papers and potential research trends can use the already cleaned-up JSON file in this dataset; each row contains the meta information of one paper accepted at NIPS 2020. To explore more AI & robotics papers (NIPS/ICML/ICLR/IROS/ICRA/etc.) and AI equations, feel free to use the Equation Search Engine (http://www.deepnlp.org/search/equation) as well as the AI Agent Search Engine (http://www.deepnlp.org/search/agent) to find deployed AI apps and agents in your domain.
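Since the exact on-disk layout of the cleaned file is not specified here, the sketch below assumes a JSON Lines layout (one JSON object per row, which is a common choice for such dumps); the file name `nips2020.jsonl` and the two toy rows are placeholders, not the real file:

```python
import json
import os
import tempfile

# Two toy rows standing in for the real dataset rows.
rows = [
    {"title": "A graph similarity for deep learning", "tags": "NIPS 2020"},
    {"title": "Sinkhorn Barycenter via Functional Gradient Descent", "tags": "NIPS 2020"},
]

# Write them out in JSON Lines form (one object per line) to a temp file.
path = os.path.join(tempfile.mkdtemp(), "nips2020.jsonl")
with open(path, "w", encoding="utf-8") as f:
    for r in rows:
        f.write(json.dumps(r) + "\n")

# Load every row back, then filter by a keyword in the title.
with open(path, encoding="utf-8") as f:
    papers = [json.loads(line) for line in f]

graph_titles = [p["title"] for p in papers if "graph" in p["title"].lower()]
print(graph_titles)
```

If the file is instead a single JSON array, replace the line-by-line loop with one `json.load(f)` call; the filtering step is the same either way.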
Meta Information of the JSON File
{
"title": "A graph similarity for deep learning",
"url": "https://papers.nips.cc/paper_files/paper/2020/hash/0004d0b59e19461ff126e3a08a814c33-Abstract.html",
"authors": "Seongmin Ok",
"detail_url": "https://papers.nips.cc/paper_files/paper/2020/hash/0004d0b59e19461ff126e3a08a814c33-Abstract.html",
"tags": "NIPS 2020",
"AuthorFeedback": "https://papers.nips.cc/paper_files/paper/2020/file/0004d0b59e19461ff126e3a08a814c33-AuthorFeedback.pdf",
"Bibtex": "https://papers.nips.cc/paper_files/paper/9725-/bibtex",
"MetaReview": "https://papers.nips.cc/paper_files/paper/2020/file/0004d0b59e19461ff126e3a08a814c33-MetaReview.html",
"Paper": "https://papers.nips.cc/paper_files/paper/2020/file/0004d0b59e19461ff126e3a08a814c33-Paper.pdf",
"Review": "https://papers.nips.cc/paper_files/paper/2020/file/0004d0b59e19461ff126e3a08a814c33-Review.html",
"Supplemental": "https://papers.nips.cc/paper_files/paper/2020/file/0004d0b59e19461ff126e3a08a814c33-Supplemental.pdf",
"abstract": "Graph neural networks (GNNs) have been successful in learning representations from graphs. Many popular GNNs follow the pattern of aggregate-transform: they aggregate the neighbors' attributes and then transform the results of aggregation with a learnable function. Analyses of these GNNs explain which pairs of non-identical graphs have different representations. However, we still lack an understanding of how similar these representations will be. We adopt kernel distance and propose transform-sum-cat as an alternative to aggregate-transform to reflect the continuous similarity between the node neighborhoods in the neighborhood aggregation. The idea leads to a simple and efficient graph similarity, which we name Weisfeiler-Leman similarity (WLS). In contrast to existing graph kernels, WLS is easy to implement with common deep learning frameworks. In graph classification experiments, transform-sum-cat significantly outperforms other neighborhood aggregation methods from popular GNN models. We also develop a simple and fast GNN model based on transform-sum-cat, which obtains, in comparison with widely used GNN models, (1) a higher accuracy in node classification, (2) a lower absolute error in graph regression, and (3) greater stability in adversarial training of graph generation."
}
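Each record is a flat dict of string fields, so it can be read with the standard library alone. A minimal sketch, using a shortened copy of the row above (the abstract is omitted for brevity; optional fields such as "Supplemental" can be null, so `.get()` is the safer accessor):

```python
import json

# Shortened copy of the sample record shown above.
record_json = """{
  "title": "A graph similarity for deep learning",
  "authors": "Seongmin Ok",
  "tags": "NIPS 2020",
  "Paper": "https://papers.nips.cc/paper_files/paper/2020/file/0004d0b59e19461ff126e3a08a814c33-Paper.pdf"
}"""

record = json.loads(record_json)

# Required fields can be indexed directly; optional ones (e.g. "Supplemental",
# "AuthorFeedback") may be missing or null in some rows, so use .get().
print(record["title"])             # paper title
print(record.get("Supplemental"))  # None when the field is absent
```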
Related
AI Agent Marketplace and Search
Robot Search
Equation and Academic search
AI & Robot Comprehensive Search
AI & Robot Question
AI & Robot Community
AI Agent Marketplace Blog
AI Agent Reviews
AI Agent Marketplace Directory
Microsoft AI Agents Reviews
Claude AI Agents Reviews
OpenAI AI Agents Reviews
Salesforce AI Agents Reviews
AI Agent Builder Reviews
AI Equation
List of AI Equations and Latex
List of Math Equations and Latex
List of Physics Equations and Latex
List of Statistics Equations and Latex
List of Machine Learning Equations and Latex