Dataset Viewer (auto-converted to Parquet)

Columns:
- title: string (length 12–151)
- url: string (length 41–43)
- detail_url: string (length 41–43)
- authors: string (length 6–562)
- tags: string (3 classes)
- abstract: string (length 519–2.34k)
- pdf: string (length 71)
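The records below follow this flat seven-column schema, with `authors` and `tags` stored as comma-separated strings. As a minimal sketch, here is one record (the first one, Domino, with its abstract shortened as in the preview) handled as a plain dict, splitting the comma-separated fields for use:

```python
# One record from this listing, in the dataset's seven-column schema.
# The abstract is truncated here, as it is in the viewer preview.
record = {
    "title": "Domino: Discovering Systematic Errors with Cross-Modal Embeddings",
    "url": "https://openreview.net/forum?id=FPCMqjI0jXN",
    "detail_url": "https://openreview.net/forum?id=FPCMqjI0jXN",
    "authors": ("Sabri Eyuboglu,Maya Varma,Khaled Kamal Saab,"
                "Jean-Benoit Delbrouck,Christopher Lee-Messer,"
                "Jared Dunnmon,James Zou,Christopher Re"),
    "tags": "ICLR 2022,Oral",
    "abstract": ("Machine learning models that achieve high overall accuracy "
                 "often make systematic errors on important subsets (or "
                 "slices) of data..."),
    "pdf": "https://openreview.net/pdf/a5ca838a35d810400cfa090453cd85abe02ab6b0.pdf",
}

# authors and tags are comma-separated strings; split them into parts.
authors = record["authors"].split(",")
venue, track = record["tags"].split(",")

print(len(authors))    # 8
print(venue, track)    # ICLR 2022 Oral
```

If the dataset is loaded through the Hugging Face `datasets` library, each row comes back as a dict with exactly these keys; note that `url` and `detail_url` carry the same OpenReview forum link in every record.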
Domino: Discovering Systematic Errors with Cross-Modal Embeddings
https://openreview.net/forum?id=FPCMqjI0jXN
https://openreview.net/forum?id=FPCMqjI0jXN
Sabri Eyuboglu,Maya Varma,Khaled Kamal Saab,Jean-Benoit Delbrouck,Christopher Lee-Messer,Jared Dunnmon,James Zou,Christopher Re
ICLR 2022,Oral
Machine learning models that achieve high overall accuracy often make systematic errors on important subsets (or slices) of data. Identifying underperforming slices is particularly challenging when working with high-dimensional inputs (e.g. images, audio), where important slices are often unlabeled. In order to address...
https://openreview.net/pdf/a5ca838a35d810400cfa090453cd85abe02ab6b0.pdf
Natural Language Descriptions of Deep Visual Features
https://openreview.net/forum?id=NudBMY-tzDr
https://openreview.net/forum?id=NudBMY-tzDr
Evan Hernandez,Sarah Schwettmann,David Bau,Teona Bagashvili,Antonio Torralba,Jacob Andreas
ICLR 2022,Oral
Some neurons in deep networks specialize in recognizing highly specific perceptual, structural, or semantic features of inputs. In computer vision, techniques exist for identifying neurons that respond to individual concept categories like colors, textures, and object classes. But these techniques are limited in scope,...
https://openreview.net/pdf/842234024e58a8d5073a88b3c04282011b8e20a7.pdf
Non-Transferable Learning: A New Approach for Model Ownership Verification and Applicability Authorization
https://openreview.net/forum?id=tYRrOdSnVUy
https://openreview.net/forum?id=tYRrOdSnVUy
Lixu Wang,Shichao Xu,Ruiqi Xu,Xiao Wang,Qi Zhu
ICLR 2022,Oral
As Artificial Intelligence as a Service gains popularity, protecting well-trained models as intellectual property is becoming increasingly important. There are two common types of protection methods: ownership verification and usage authorization. In this paper, we propose Non-Transferable Learning (NTL), a novel appro...
https://openreview.net/pdf/cc0b829e495ebd36c4e0dcce6f5d044ad4dce58d.pdf
Neural Structured Prediction for Inductive Node Classification
https://openreview.net/forum?id=YWNAX0caEjI
https://openreview.net/forum?id=YWNAX0caEjI
Meng Qu,Huiyu Cai,Jian Tang
ICLR 2022,Oral
This paper studies node classification in the inductive setting, i.e., aiming to learn a model on labeled training graphs and generalize it to infer node labels on unlabeled test graphs. This problem has been extensively studied with graph neural networks (GNNs) by learning effective node representations, as well as tr...
https://openreview.net/pdf/df1b628202430dff01a7eeed5b5e5a2e703d1bad.pdf
A New Perspective on "How Graph Neural Networks Go Beyond Weisfeiler-Lehman?"
https://openreview.net/forum?id=uxgg9o7bI_3
https://openreview.net/forum?id=uxgg9o7bI_3
Asiri Wijesinghe,Qing Wang
ICLR 2022,Oral
We propose a new perspective on designing powerful Graph Neural Networks (GNNs). In a nutshell, this enables a general solution to inject structural properties of graphs into a message-passing aggregation scheme of GNNs. As a theoretical basis, we develop a new hierarchy of local isomorphism on neighborhood subgraphs. ...
https://openreview.net/pdf/376e7da3d7f86a2bd40cd51fadfc278e94372443.pdf
Minibatch vs Local SGD with Shuffling: Tight Convergence Bounds and Beyond
https://openreview.net/forum?id=LdlwbBP2mlq
https://openreview.net/forum?id=LdlwbBP2mlq
Chulhee Yun,Shashank Rajput,Suvrit Sra
ICLR 2022,Oral
In distributed learning, local SGD (also known as federated averaging) and its simple baseline minibatch SGD are widely studied optimization methods. Most existing analyses of these methods assume independent and unbiased gradient estimates obtained via with-replacement sampling. In contrast, we study shuffling-based v...
https://openreview.net/pdf/1669f6cc32c853b0d69068b7ed1a230ce3f321d0.pdf
The Hidden Convex Optimization Landscape of Regularized Two-Layer ReLU Networks: an Exact Characterization of Optimal Solutions
https://openreview.net/forum?id=Z7Lk2cQEG8a
https://openreview.net/forum?id=Z7Lk2cQEG8a
Yifei Wang,Jonathan Lacotte,Mert Pilanci
ICLR 2022,Oral
We prove that finding all globally optimal two-layer ReLU neural networks can be performed by solving a convex optimization program with cone constraints. Our analysis is novel, characterizes all optimal solutions, and does not leverage duality-based analysis which was recently used to lift neural network training into...
https://openreview.net/pdf/9733b1623c23b45535cc2c126e6fb496e55e8049.pdf
Provably Filtering Exogenous Distractors using Multistep Inverse Dynamics
https://openreview.net/forum?id=RQLLzMCefQu
https://openreview.net/forum?id=RQLLzMCefQu
Yonathan Efroni,Dipendra Misra,Akshay Krishnamurthy,Alekh Agarwal,John Langford
ICLR 2022,Oral
Many real-world applications of reinforcement learning (RL) require the agent to deal with high-dimensional observations such as those generated from a megapixel camera. Prior work has addressed such problems with representation learning, through which the agent can provably extract endogenous, latent state information...
https://openreview.net/pdf/310151127bcaaee206f6987dfe48a6f9a49ae848.pdf
Bootstrapped Meta-Learning
https://openreview.net/forum?id=b-ny3x071E5
https://openreview.net/forum?id=b-ny3x071E5
Sebastian Flennerhag,Yannick Schroecker,Tom Zahavy,Hado van Hasselt,David Silver,Satinder Singh
ICLR 2022,Oral
Meta-learning empowers artificial intelligence to increase its efficiency by learning how to learn. Unlocking this potential involves overcoming a challenging meta-optimisation problem. We propose an algorithm that tackles this problem by letting the meta-learner teach itself. The algorithm first bootstraps a target fr...
https://openreview.net/pdf/0eccd48eddcbf9cfc77b50cb0e97fb58937aee70.pdf
Coordination Among Neural Modules Through a Shared Global Workspace
https://openreview.net/forum?id=XzTtHjgPDsT
https://openreview.net/forum?id=XzTtHjgPDsT
Anirudh Goyal,Aniket Rajiv Didolkar,Alex Lamb,Kartikeya Badola,Nan Rosemary Ke,Nasim Rahaman,Jonathan Binas,Charles Blundell,Michael Curtis Mozer,Yoshua Bengio
ICLR 2022,Oral
Deep learning has seen a movement away from representing examples with a monolithic hidden state towards a richly structured state. For example, Transformers segment by position, and object-centric architectures decompose images into entities. In all these architectures, interactions between different elements are mod...
https://openreview.net/pdf/19aac83e8824498df7b9d1e6952523f7c068218b.pdf
Data-Efficient Graph Grammar Learning for Molecular Generation
https://openreview.net/forum?id=l4IHywGq6a
https://openreview.net/forum?id=l4IHywGq6a
Minghao Guo,Veronika Thost,Beichen Li,Payel Das,Jie Chen,Wojciech Matusik
ICLR 2022,Oral
The problem of molecular generation has received significant attention recently. Existing methods are typically based on deep neural networks and require training on large datasets with tens of thousands of samples. In practice, however, the size of class-specific chemical datasets is usually limited (e.g., dozens of s...
https://openreview.net/pdf/c17b0db09f98b3279ad677650f18acbf907883ce.pdf
Poisoning and Backdooring Contrastive Learning
https://openreview.net/forum?id=iC4UHbQ01Mp
https://openreview.net/forum?id=iC4UHbQ01Mp
Nicholas Carlini,Andreas Terzis
ICLR 2022,Oral
Multimodal contrastive learning methods like CLIP train on noisy and uncurated training datasets. This is cheaper than labeling datasets manually, and even improves out-of-distribution robustness. We show that this practice makes backdoor and poisoning attacks a significant threat. By poisoning just 0.01% of a dataset ...
https://openreview.net/pdf/abd77f0543a72cd26da355efc5680de233f120af.pdf
Neural Collapse Under MSE Loss: Proximity to and Dynamics on the Central Path
https://openreview.net/forum?id=w1UbdvWH_R3
https://openreview.net/forum?id=w1UbdvWH_R3
X.Y. Han,Vardan Papyan,David L. Donoho
ICLR 2022,Oral
The recently discovered Neural Collapse (NC) phenomenon occurs pervasively in today's deep net training paradigm of driving cross-entropy (CE) loss towards zero. During NC, last-layer features collapse to their class-means, both classifiers and class-means collapse to the same Simplex Equiangular Tight Frame, and class...
https://openreview.net/pdf/75799bbe466f7240935655cbfaa930c9628a915e.pdf
Weighted Training for Cross-Task Learning
https://openreview.net/forum?id=ltM1RMZntpu
https://openreview.net/forum?id=ltM1RMZntpu
Shuxiao Chen,Koby Crammer,Hangfeng He,Dan Roth,Weijie J Su
ICLR 2022,Oral
In this paper, we introduce Target-Aware Weighted Training (TAWT), a weighted training algorithm for cross-task learning based on minimizing a representation-based task distance between the source and target tasks. We show that TAWT is easy to implement, is computationally efficient, requires little hyperparameter tuni...
https://openreview.net/pdf/579ed2f74ecc130396039eae33e13de66b8de08b.pdf
iLQR-VAE : control-based learning of input-driven dynamics with applications to neural data
https://openreview.net/forum?id=wRODLDHaAiW
https://openreview.net/forum?id=wRODLDHaAiW
Marine Schimel,Ta-Chu Kao,Kristopher T Jensen,Guillaume Hennequin
ICLR 2022,Oral
Understanding how neural dynamics give rise to behaviour is one of the most fundamental questions in systems neuroscience. To achieve this, a common approach is to record neural populations in behaving animals, and model these data as emanating from a latent dynamical system whose state trajectories can then be related...
https://openreview.net/pdf/c4b2a10a835b79e5cbaff71f6577c29236e964b5.pdf
Extending the WILDS Benchmark for Unsupervised Adaptation
https://openreview.net/forum?id=z7p2V6KROOV
https://openreview.net/forum?id=z7p2V6KROOV
Shiori Sagawa,Pang Wei Koh,Tony Lee,Irena Gao,Sang Michael Xie,Kendrick Shen,Ananya Kumar,Weihua Hu,Michihiro Yasunaga,Henrik Marklund,Sara Beery,Etienne David,Ian Stavness,Wei Guo,Jure Leskovec,Kate Saenko,Tatsunori Hashimoto,Sergey Levine,Chelsea Finn,Percy Liang
ICLR 2022,Oral
Machine learning systems deployed in the wild are often trained on a source distribution but deployed on a different target distribution. Unlabeled data can be a powerful point of leverage for mitigating these distribution shifts, as it is frequently much more available than labeled data and can often be obtained from ...
https://openreview.net/pdf/16bc69d47c7ff67867bfc50009d6b9fc5043a00f.pdf
Asymmetry Learning for Counterfactually-invariant Classification in OOD Tasks
https://openreview.net/forum?id=avgclFZ221l
https://openreview.net/forum?id=avgclFZ221l
S Chandra Mouli,Bruno Ribeiro
ICLR 2022,Oral
Generalizing from observed to new related environments (out-of-distribution) is central to the reliability of classifiers. However, most classifiers fail to predict label $Y$ from input $X$ when the change in environment is due to a (stochastic) input transformation $T^\text{te} \circ X'$ not observed in training, as in t...
https://openreview.net/pdf/f15da1dc02ded9aba4a26e8ade750b28429da30f.pdf
Comparing Distributions by Measuring Differences that Affect Decision Making
https://openreview.net/forum?id=KB5onONJIAU
https://openreview.net/forum?id=KB5onONJIAU
Shengjia Zhao,Abhishek Sinha,Yutong He,Aidan Perreault,Jiaming Song,Stefano Ermon
ICLR 2022,Oral
Measuring the discrepancy between two probability distributions is a fundamental problem in machine learning and statistics. We propose a new class of discrepancies based on the optimal loss for a decision task -- two distributions are different if the optimal decision loss is higher on their mixture than on each indiv...
https://openreview.net/pdf/e99719a7a6796b569cc6afdf6f42024d0df2fbea.pdf
MIDI-DDSP: Detailed Control of Musical Performance via Hierarchical Modeling
https://openreview.net/forum?id=UseMOjWENv
https://openreview.net/forum?id=UseMOjWENv
Yusong Wu,Ethan Manilow,Yi Deng,Rigel Swavely,Kyle Kastner,Tim Cooijmans,Aaron Courville,Cheng-Zhi Anna Huang,Jesse Engel
ICLR 2022,Oral
Musical expression requires control of both what notes are played, and how they are performed. Conventional audio synthesizers provide detailed expressive controls, but at the cost of realism. Black-box neural audio synthesis and concatenative samplers can produce realistic audio, but have few mechanisms for contr...
https://openreview.net/pdf/e26b385d95d67af36d02a385047be6f7a0d6f47b.pdf
Unsupervised Vision-Language Grammar Induction with Shared Structure Modeling
https://openreview.net/forum?id=N0n_QyQ5lBF
https://openreview.net/forum?id=N0n_QyQ5lBF
Bo Wan,Wenjuan Han,Zilong Zheng,Tinne Tuytelaars
ICLR 2022,Oral
We introduce a new task, unsupervised vision-language (VL) grammar induction. Given an image-caption pair, the goal is to extract a shared hierarchical structure for both image and language simultaneously. We argue that such structured output, grounded in both modalities, is a clear step towards the high-level underst...
https://openreview.net/pdf/5c104842d13e8d6efd55b6d7c04f4373a39eae18.pdf
PiCO: Contrastive Label Disambiguation for Partial Label Learning
https://openreview.net/forum?id=EhYjZy6e1gJ
https://openreview.net/forum?id=EhYjZy6e1gJ
Haobo Wang,Ruixuan Xiao,Yixuan Li,Lei Feng,Gang Niu,Gang Chen,Junbo Zhao
ICLR 2022,Oral
Partial label learning (PLL) is an important problem that allows each training example to be labeled with a coarse candidate set, which well suits many real-world data annotation scenarios with label ambiguity. Despite the promise, the performance of PLL often lags behind the supervised counterpart. In this work, we b...
https://openreview.net/pdf/f9275b96d741f229db4e61a15ce5f2a499c9ee67.pdf
Pyraformer: Low-Complexity Pyramidal Attention for Long-Range Time Series Modeling and Forecasting
https://openreview.net/forum?id=0EXmFzUn5I
https://openreview.net/forum?id=0EXmFzUn5I
Shizhan Liu,Hang Yu,Cong Liao,Jianguo Li,Weiyao Lin,Alex X. Liu,Schahram Dustdar
ICLR 2022,Oral
Accurate prediction of the future given the past based on time series data is of paramount importance, since it opens the door for decision making and risk management ahead of time. In practice, the challenge is to build a flexible but parsimonious model that can capture a wide range of temporal dependencies. In this p...
https://openreview.net/pdf/2ac159853cd001bbca6a8a12da497c8013914b31.pdf
Expressiveness and Approximation Properties of Graph Neural Networks
https://openreview.net/forum?id=wIzUeM3TAU
https://openreview.net/forum?id=wIzUeM3TAU
Floris Geerts,Juan L Reutter
ICLR 2022,Oral
Characterizing the separation power of graph neural networks (GNNs) provides an understanding of their limitations for graph learning tasks. Results regarding separation power are, however, usually geared at specific GNN architectures, and tools for understanding arbitrary GNN architectures are generally lacking. We p...
https://openreview.net/pdf/9d0fe7ff08261aae56611b7f670de9875c2a9cd9.pdf
Filtered-CoPhy: Unsupervised Learning of Counterfactual Physics in Pixel Space
https://openreview.net/forum?id=1L0C5ROtFp
https://openreview.net/forum?id=1L0C5ROtFp
Steeven JANNY,Fabien Baradel,Natalia Neverova,Madiha Nadri,Greg Mori,Christian Wolf
ICLR 2022,Oral
Learning causal relationships in high-dimensional data (images, videos) is a hard task, as they are often defined on low dimensional manifolds and must be extracted from complex signals dominated by appearance, lighting, textures and also spurious correlations in the data. We present a method for learning counterfactua...
https://openreview.net/pdf/cbd75b662eaa377753b892113b221d062f26511e.pdf
BEiT: BERT Pre-Training of Image Transformers
https://openreview.net/forum?id=p-BhZSz59o4
https://openreview.net/forum?id=p-BhZSz59o4
Hangbo Bao,Li Dong,Songhao Piao,Furu Wei
ICLR 2022,Oral
We introduce a self-supervised vision representation model BEiT, which stands for Bidirectional Encoder representation from Image Transformers. Following BERT developed in the natural language processing area, we propose a masked image modeling task to pretrain vision Transformers. Specifically, each image has two view...
https://openreview.net/pdf/1be2cb0e0edf9af45f8ef450b802b459897cec3d.pdf
Fine-Tuning can Distort Pretrained Features and Underperform Out-of-Distribution
https://openreview.net/forum?id=UYneFzXSJWh
https://openreview.net/forum?id=UYneFzXSJWh
Ananya Kumar,Aditi Raghunathan,Robbie Matthew Jones,Tengyu Ma,Percy Liang
ICLR 2022,Oral
When transferring a pretrained model to a downstream task, two popular methods are full fine-tuning (updating all the model parameters) and linear probing (updating only the last linear layer---the "head"). It is well known that fine-tuning leads to better accuracy in-distribution (ID). However, in this paper, we find ...
https://openreview.net/pdf/5d8a4ae4492042b22b07eabc7a9abcfa517f419c.pdf
StyleAlign: Analysis and Applications of Aligned StyleGAN Models
https://openreview.net/forum?id=Qg2vi4ZbHM9
https://openreview.net/forum?id=Qg2vi4ZbHM9
Zongze Wu,Yotam Nitzan,Eli Shechtman,Dani Lischinski
ICLR 2022,Oral
In this paper, we perform an in-depth study of the properties and applications of aligned generative models. We refer to two models as aligned if they share the same architecture, and one of them (the child) is obtained from the other (the parent) via fine-tuning to another domain, a common practice in transfer learnin...
https://openreview.net/pdf/a75f48f49713ac38baaaee51cb3273177975f96b.pdf
Variational Inference for Discriminative Learning with Generative Modeling of Feature Incompletion
https://openreview.net/forum?id=qnQN4yr6FJz
https://openreview.net/forum?id=qnQN4yr6FJz
Kohei Miyaguchi,Takayuki Katsuki,Akira Koseki,Toshiya Iwamori
ICLR 2022,Oral
We are concerned with the problem of distributional prediction with incomplete features: The goal is to estimate the distribution of target variables given feature vectors with some of the elements missing. A typical approach to this problem is to perform missing-value imputation and regression, simultaneously or seque...
https://openreview.net/pdf/537474668e8264be0d7e7963ad009564621ad25e.pdf
Efficiently Modeling Long Sequences with Structured State Spaces
https://openreview.net/forum?id=uYLFoz1vlAC
https://openreview.net/forum?id=uYLFoz1vlAC
Albert Gu,Karan Goel,Christopher Re
ICLR 2022,Oral
A central goal of sequence modeling is designing a single principled model that can address sequence data across a range of modalities and tasks, particularly on long-range dependencies. Although conventional models including RNNs, CNNs, and Transformers have specialized variants for capturing long dependencies, they ...
https://openreview.net/pdf/a8eedf494f6698cb467c310c59d3ea6488546805.pdf
Large Language Models Can Be Strong Differentially Private Learners
https://openreview.net/forum?id=bVuP3ltATMz
https://openreview.net/forum?id=bVuP3ltATMz
Xuechen Li,Florian Tramer,Percy Liang,Tatsunori Hashimoto
ICLR 2022,Oral
Differentially Private (DP) learning has seen limited success for building large deep learning models of text, and straightforward attempts at applying Differentially Private Stochastic Gradient Descent (DP-SGD) to NLP tasks have resulted in large performance drops and high computational overhead. We show that this per...
https://openreview.net/pdf/d88e1e721c4085b8a6403837f45b8c483ad0225b.pdf
GeoDiff: A Geometric Diffusion Model for Molecular Conformation Generation
https://openreview.net/forum?id=PzcvxEMzvQC
https://openreview.net/forum?id=PzcvxEMzvQC
Minkai Xu,Lantao Yu,Yang Song,Chence Shi,Stefano Ermon,Jian Tang
ICLR 2022,Oral
Predicting molecular conformations from molecular graphs is a fundamental problem in cheminformatics and drug discovery. Recently, significant progress has been achieved with machine learning approaches, especially with deep generative models. Inspired by the diffusion process in classical non-equilibrium thermodynamic...
https://openreview.net/pdf/d6be0299d7f2d2bf947d450fffe98c8395458c75.pdf
Frame Averaging for Invariant and Equivariant Network Design
https://openreview.net/forum?id=zIUyj55nXR
https://openreview.net/forum?id=zIUyj55nXR
Omri Puny,Matan Atzmon,Edward J. Smith,Ishan Misra,Aditya Grover,Heli Ben-Hamu,Yaron Lipman
ICLR 2022,Oral
Many machine learning tasks involve learning functions that are known to be invariant or equivariant to certain symmetries of the input data. However, it is often challenging to design neural network architectures that respect these symmetries while being expressive and computationally efficient. For example, Euclidean...
https://openreview.net/pdf/d7849f0ef0f911d06889785dc7116564d5342442.pdf
Einops: Clear and Reliable Tensor Manipulations with Einstein-like Notation
https://openreview.net/forum?id=oapKSVM2bcj
https://openreview.net/forum?id=oapKSVM2bcj
Alex Rogozhnikov
ICLR 2022,Oral
Tensor computations underlie modern scientific computing and deep learning. A number of tensor frameworks emerged varying in execution model, hardware support, memory management, model definition, etc. However, tensor operations in all frameworks follow the same paradigm. Recent neural network architectures demonstrate...
https://openreview.net/pdf/d568f6e36eaa377888611b8e0d84076777edc330.pdf
A Fine-Grained Analysis on Distribution Shift
https://openreview.net/forum?id=Dl4LetuLdyK
https://openreview.net/forum?id=Dl4LetuLdyK
Olivia Wiles,Sven Gowal,Florian Stimberg,Sylvestre-Alvise Rebuffi,Ira Ktena,Krishnamurthy Dj Dvijotham,Ali Taylan Cemgil
ICLR 2022,Oral
Robustness to distribution shifts is critical for deploying machine learning models in the real world. Despite this necessity, there has been little work in defining the underlying mechanisms that cause these shifts and evaluating the robustness of algorithms across multiple, different distribution shifts. To this end,...
https://openreview.net/pdf/6be366539738706234ad0b104ed82361a3c5f6e7.pdf
Open-Set Recognition: A Good Closed-Set Classifier is All You Need
https://openreview.net/forum?id=5hLP5JY9S2d
https://openreview.net/forum?id=5hLP5JY9S2d
Sagar Vaze,Kai Han,Andrea Vedaldi,Andrew Zisserman
ICLR 2022,Oral
The ability to identify whether or not a test sample belongs to one of the semantic classes in a classifier's training set is critical to practical deployment of the model. This task is termed open-set recognition (OSR) and has received significant attention in recent years. In this paper, we first demonstrate that the...
https://openreview.net/pdf/a9e422d293a936fe65575b5e1ea6a86549b84bca.pdf
Learning Strides in Convolutional Neural Networks
https://openreview.net/forum?id=M752z9FKJP
https://openreview.net/forum?id=M752z9FKJP
Rachid Riad,Olivier Teboul,David Grangier,Neil Zeghidour
ICLR 2022,Oral
Convolutional neural networks typically contain several downsampling operators, such as strided convolutions or pooling layers, that progressively reduce the resolution of intermediate representations. This provides some shift-invariance while reducing the computational complexity of the whole architecture. A critical ...
https://openreview.net/pdf/1bc01ea49b5a288387ac5de300847b1d6690d940.pdf
Understanding over-squashing and bottlenecks on graphs via curvature
https://openreview.net/forum?id=7UmjRGzp-A
https://openreview.net/forum?id=7UmjRGzp-A
Jake Topping,Francesco Di Giovanni,Benjamin Paul Chamberlain,Xiaowen Dong,Michael M. Bronstein
ICLR 2022,Oral
Most graph neural networks (GNNs) use the message passing paradigm, in which node features are propagated on the input graph. Recent works pointed to the distortion of information flowing from distant nodes as a factor limiting the efficiency of message passing for tasks relying on long-distance interactions. This phen...
https://openreview.net/pdf/f6b974eac8792a0d8d59633044276dabbf9d01c9.pdf
Diffusion-Based Voice Conversion with Fast Maximum Likelihood Sampling Scheme
https://openreview.net/forum?id=8c50f-DoWAu
https://openreview.net/forum?id=8c50f-DoWAu
Vadim Popov,Ivan Vovk,Vladimir Gogoryan,Tasnima Sadekova,Mikhail Sergeevich Kudinov,Jiansheng Wei
ICLR 2022,Oral
Voice conversion is a common speech synthesis task which can be solved in different ways depending on a particular real-world scenario. The most challenging one, often referred to as one-shot many-to-many voice conversion, consists in copying target voice from only one reference utterance in the most general case when bo...
https://openreview.net/pdf/468145b46e459c5ba69e7017b6ef4eaece277e94.pdf
Resolving Training Biases via Influence-based Data Relabeling
https://openreview.net/forum?id=EskfH0bwNVn
https://openreview.net/forum?id=EskfH0bwNVn
Shuming Kong,Yanyan Shen,Linpeng Huang
ICLR 2022,Oral
The performance of supervised learning methods easily suffers from the training bias issue caused by train-test distribution mismatch or label noise. Influence function is a technique that estimates the impacts of a training sample on the model’s predictions. Recent studies on \emph{data resampling} have employed infl...
https://openreview.net/pdf/64c51657be7bb5a9efecafe39344c719ccb4d394.pdf
Representational Continuity for Unsupervised Continual Learning
https://openreview.net/forum?id=9Hrka5PA7LW
https://openreview.net/forum?id=9Hrka5PA7LW
Divyam Madaan,Jaehong Yoon,Yuanchun Li,Yunxin Liu,Sung Ju Hwang
ICLR 2022,Oral
Continual learning (CL) aims to learn a sequence of tasks without forgetting the previously acquired knowledge. However, recent CL advances are restricted to supervised continual learning (SCL) scenarios. Consequently, they are not scalable to real-world applications where the data distribution is often biased and unan...
https://openreview.net/pdf/947f2c6dc3cd63a83d402bf9cbaddf42e362709e.pdf
Vision-Based Manipulators Need to Also See from Their Hands
https://openreview.net/forum?id=RJkAHKp7kNZ
https://openreview.net/forum?id=RJkAHKp7kNZ
Kyle Hsu,Moo Jin Kim,Rafael Rafailov,Jiajun Wu,Chelsea Finn
ICLR 2022,Oral
We study how the choice of visual perspective affects learning and generalization in the context of physical manipulation from raw sensor observations. Compared with the more commonly used global third-person perspective, a hand-centric (eye-in-hand) perspective affords reduced observability, but we find that it consis...
https://openreview.net/pdf/bf5308ad68220347e7cbf2dcbedbf7bb4e0a21b1.pdf
Meta-Learning with Fewer Tasks through Task Interpolation
https://openreview.net/forum?id=ajXWF7bVR8d
https://openreview.net/forum?id=ajXWF7bVR8d
Huaxiu Yao,Linjun Zhang,Chelsea Finn
ICLR 2022,Oral
Meta-learning enables algorithms to quickly learn a newly encountered task with just a few labeled examples by transferring previously learned knowledge. However, the bottleneck of current meta-learning algorithms is the requirement of a large number of meta-training tasks, which may not be accessible in real-world sce...
https://openreview.net/pdf/ebbfc5841da414394c96beeba92500546061461a.pdf
Discovering and Explaining the Representation Bottleneck of DNNs
https://openreview.net/forum?id=iRCUlgmdfHJ
https://openreview.net/forum?id=iRCUlgmdfHJ
Huiqi Deng,Qihan Ren,Hao Zhang,Quanshi Zhang
ICLR 2022,Oral
This paper explores the bottleneck of feature representations of deep neural networks (DNNs), from the perspective of the complexity of interactions between input variables encoded in DNNs. To this end, we focus on the multi-order interaction between input variables, where the order represents the complexity of interac...
https://openreview.net/pdf/e470657e4d47a20411713a973ed0282f87c9f9a9.pdf
Sparse Communication via Mixed Distributions
https://openreview.net/forum?id=WAid50QschI
https://openreview.net/forum?id=WAid50QschI
António Farinhas,Wilker Aziz,Vlad Niculae,Andre Martins
ICLR 2022,Oral
Neural networks and other machine learning models compute continuous representations, while humans communicate mostly through discrete symbols. Reconciling these two forms of communication is desirable for generating human-readable interpretations or learning discrete latent variable models, while maintaining end-to-en...
https://openreview.net/pdf/f8c966f98befffb0bfbd9af921a4e4dd831d549f.pdf
Finetuned Language Models are Zero-Shot Learners
https://openreview.net/forum?id=gEZrGCozdqR
https://openreview.net/forum?id=gEZrGCozdqR
Jason Wei,Maarten Bosma,Vincent Zhao,Kelvin Guu,Adams Wei Yu,Brian Lester,Nan Du,Andrew M. Dai,Quoc V Le
ICLR 2022,Oral
This paper explores a simple method for improving the zero-shot learning abilities of language models. We show that instruction tuning—finetuning language models on a collection of datasets described via instructions—substantially improves zero-shot performance on unseen tasks. We take a 137B parameter pretrained langu...
https://openreview.net/pdf/16b50405ab1e3ac1e2f76190ee62a48c496c568d.pdf
F8Net: Fixed-Point 8-bit Only Multiplication for Network Quantization
https://openreview.net/forum?id=_CfpJazzXT2
https://openreview.net/forum?id=_CfpJazzXT2
Qing Jin,Jian Ren,Richard Zhuang,Sumant Hanumante,Zhengang Li,Zhiyu Chen,Yanzhi Wang,Kaiyuan Yang,Sergey Tulyakov
ICLR 2022,Oral
Neural network quantization is a promising compression technique to reduce memory footprint and save energy consumption, potentially leading to real-time inference. However, there is a performance gap between quantized and full-precision models. To reduce it, existing quantization approaches require high-precision INT3...
https://openreview.net/pdf/aed69dd0c10990a2c4948e6d230de04c5719fb7d.pdf
Transform2Act: Learning a Transform-and-Control Policy for Efficient Agent Design
https://openreview.net/forum?id=UcDUxjPYWSr
https://openreview.net/forum?id=UcDUxjPYWSr
Ye Yuan,Yuda Song,Zhengyi Luo,Wen Sun,Kris M. Kitani
ICLR 2022,Oral
An agent's functionality is largely determined by its design, i.e., skeletal structure and joint attributes (e.g., length, size, strength). However, finding the optimal agent design for a given function is extremely challenging since the problem is inherently combinatorial and the design space is prohibitively large. A...
https://openreview.net/pdf/511a5c95afacad18125605721a8d1e530c07018b.pdf
ProtoRes: Proto-Residual Network for Pose Authoring via Learned Inverse Kinematics
https://openreview.net/forum?id=s03AQxehtd_
https://openreview.net/forum?id=s03AQxehtd_
Boris N. Oreshkin,Florent Bocquelet,Felix G. Harvey,Bay Raitt,Dominic Laflamme
ICLR 2022,Oral
Our work focuses on the development of a learnable neural representation of human pose for advanced AI assisted animation tooling. Specifically, we tackle the problem of constructing a full static human pose based on sparse and variable user inputs (e.g. locations and/or orientations of a subset of body joints). To sol...
https://openreview.net/pdf/72eadcfe21558f0be18ff071adc50adc3ae85e5e.pdf
Hyperparameter Tuning with Renyi Differential Privacy
https://openreview.net/forum?id=-70L8lpp9DF
https://openreview.net/forum?id=-70L8lpp9DF
Nicolas Papernot,Thomas Steinke
ICLR 2022,Oral
For many differentially private algorithms, such as the prominent noisy stochastic gradient descent (DP-SGD), the analysis needed to bound the privacy leakage of a single training run is well understood. However, few studies have reasoned about the privacy leakage resulting from the multiple training runs needed to fin...
https://openreview.net/pdf/8832d0e112b9fd6c5c8f0be8a093625e4de6e337.pdf
Real-Time Neural Voice Camouflage
https://openreview.net/forum?id=qj1IZ-6TInc
https://openreview.net/forum?id=qj1IZ-6TInc
Mia Chiquier,Chengzhi Mao,Carl Vondrick
ICLR 2022,Oral
Automatic speech recognition systems have created exciting possibilities for applications; however, they also enable opportunities for systematic eavesdropping. We propose a method to camouflage a person's voice from these systems without inconveniencing the conversation between people in the room. Standard adversarial a...
https://openreview.net/pdf/e2b96a38db73636bfa51d5ee4097373ddda15329.pdf
CycleMLP: A MLP-like Architecture for Dense Prediction
https://openreview.net/forum?id=NMEceG4v69Y
https://openreview.net/forum?id=NMEceG4v69Y
Shoufa Chen,Enze Xie,Chongjian GE,Runjian Chen,Ding Liang,Ping Luo
ICLR 2022,Oral
This paper presents a simple MLP-like architecture, CycleMLP, which is a versatile backbone for visual recognition and dense predictions. As compared to modern MLP architectures, e.g., MLP-Mixer, ResMLP, and gMLP, whose architectures are correlated to image size and thus are infeasible in object detection and segmenta...
https://openreview.net/pdf/0ff0f728cbc430b36ea84288793e887e216cff59.pdf
Analytic-DPM: an Analytic Estimate of the Optimal Reverse Variance in Diffusion Probabilistic Models
https://openreview.net/forum?id=0xiJLKH-ufZ
https://openreview.net/forum?id=0xiJLKH-ufZ
Fan Bao,Chongxuan Li,Jun Zhu,Bo Zhang
ICLR 2022,Oral
Diffusion probabilistic models (DPMs) represent a class of powerful generative models. Despite their success, the inference of DPMs is expensive since it generally needs to iterate over thousands of timesteps. A key problem in the inference is to estimate the variance in each timestep of the reverse process. In this wo...
https://openreview.net/pdf/541cdc9e000367bb0bd3fc42201573ed434094c8.pdf
RISP: Rendering-Invariant State Predictor with Differentiable Simulation and Rendering for Cross-Domain Parameter Estimation
https://openreview.net/forum?id=uSE03demja
https://openreview.net/forum?id=uSE03demja
Pingchuan Ma,Tao Du,Joshua B. Tenenbaum,Wojciech Matusik,Chuang Gan
ICLR 2022,Oral
This work considers identifying parameters characterizing a physical system's dynamic motion directly from a video whose rendering configurations are inaccessible. Existing solutions require massive training data or lack generalizability to unknown rendering configurations. We propose a novel approach that marries doma...
https://openreview.net/pdf/999353870633727a2d50bc5b4ee873b50401eba7.pdf
The Information Geometry of Unsupervised Reinforcement Learning
https://openreview.net/forum?id=3wU2UX0voE
https://openreview.net/forum?id=3wU2UX0voE
Benjamin Eysenbach,Ruslan Salakhutdinov,Sergey Levine
ICLR 2022,Oral
How can a reinforcement learning (RL) agent prepare to solve downstream tasks if those tasks are not known a priori? One approach is unsupervised skill discovery, a class of algorithms that learn a set of policies without access to a reward function. Such algorithms bear a close resemblance to representation learning a...
https://openreview.net/pdf/4709236cdf10497a057511e94fe99f87770c5bf6.pdf
Language modeling via stochastic processes
https://openreview.net/forum?id=pMQwKL1yctf
https://openreview.net/forum?id=pMQwKL1yctf
Rose E Wang,Esin Durmus,Noah Goodman,Tatsunori Hashimoto
ICLR 2022,Oral
Modern language models can generate high-quality short texts. However, they often meander or are incoherent when generating longer texts. These issues arise from the next-token-only language modeling objective. To address these issues, we introduce Time Control (TC), a language model that implicitly plans via a latent ...
https://openreview.net/pdf/ceeec650a60b1f87ad4dda26ecd02c9df0e3ed9d.pdf
Learning to Downsample for Segmentation of Ultra-High Resolution Images
https://openreview.net/forum?id=HndgQudNb91
https://openreview.net/forum?id=HndgQudNb91
Chen Jin,Ryutaro Tanno,Thomy Mertzanidou,Eleftheria Panagiotaki,Daniel C. Alexander
ICLR 2022,Poster
Many computer vision systems require low-cost segmentation algorithms based on deep learning, either because of the enormous size of input images or limited computational budget. Common solutions uniformly downsample the input images to meet memory constraints, assuming all pixels are equally informative. In this work,...
https://openreview.net/pdf/d2ade7120315e0521c4b97b593c4a2ebd44b0652.pdf
Variational Neural Cellular Automata
https://openreview.net/forum?id=7fFO4cMBx_9
https://openreview.net/forum?id=7fFO4cMBx_9
Rasmus Berg Palm,Miguel González Duque,Shyam Sudhakaran,Sebastian Risi
ICLR 2022,Poster
In nature, the process of cellular growth and differentiation has led to an amazing diversity of organisms --- algae, starfish, giant sequoia, tardigrades, and orcas are all created by the same generative process. Inspired by the incredible diversity of this biological generative process, we propose a generative model...
https://openreview.net/pdf/abec641c2a0c18536da3345e5cd92d673d90b69d.pdf
Wish you were here: Hindsight Goal Selection for long-horizon dexterous manipulation
https://openreview.net/forum?id=FKp8-pIRo3y
https://openreview.net/forum?id=FKp8-pIRo3y
Todor Davchev,Oleg Olegovich Sushkov,Jean-Baptiste Regli,Stefan Schaal,Yusuf Aytar,Markus Wulfmeier,Jon Scholz
ICLR 2022,Poster
Complex sequential tasks in continuous-control settings often require agents to successfully traverse a set of ``narrow passages'' in their state space. Solving such tasks with a sparse reward in a sample-efficient manner poses a challenge to modern reinforcement learning (RL) due to the associated long-horizon nature ...
https://openreview.net/pdf/524d4c3cacc5ff7803cd7061b33991511fee7db7.pdf
L0-Sparse Canonical Correlation Analysis
https://openreview.net/forum?id=KntaNRo6R48
https://openreview.net/forum?id=KntaNRo6R48
Ofir Lindenbaum,Moshe Salhov,Amir Averbuch,Yuval Kluger
ICLR 2022,Poster
Canonical Correlation Analysis (CCA) models are powerful for studying the associations between two sets of variables. The canonically correlated representations, termed \textit{canonical variates} are widely used in unsupervised learning to analyze unlabeled multi-modal registered datasets. Despite their success, CCA m...
https://openreview.net/pdf/69ae8c04ac43812f7523f009313daec68f09ea3d.pdf
Recycling Model Updates in Federated Learning: Are Gradient Subspaces Low-Rank?
https://openreview.net/forum?id=B7ZbqNLDn-_
https://openreview.net/forum?id=B7ZbqNLDn-_
Sheikh Shams Azam,Seyyedali Hosseinalipour,Qiang Qiu,Christopher Brinton
ICLR 2022,Poster
In this paper, we question the rationale behind propagating large numbers of parameters through a distributed system during federated learning. We start by examining the rank characteristics of the subspace spanned by gradients (i.e., the gradient-space) in centralized model training, and observe that the gradient-spac...
https://openreview.net/pdf/76e2433c08e957e7f19a49e6815d0f6b52da92cd.pdf
Is Homophily a Necessity for Graph Neural Networks?
https://openreview.net/forum?id=ucASPPD9GKN
https://openreview.net/forum?id=ucASPPD9GKN
Yao Ma,Xiaorui Liu,Neil Shah,Jiliang Tang
ICLR 2022,Poster
Graph neural networks (GNNs) have shown great prowess in learning representations suitable for numerous graph-based machine learning tasks. When applied to semi-supervised node classification, GNNs are widely believed to work well due to the homophily assumption (``like attracts like''), and fail to generalize to hete...
https://openreview.net/pdf/dba6b2a528efebfb036a0b908ecfc59201204429.pdf
DEGREE: Decomposition Based Explanation for Graph Neural Networks
https://openreview.net/forum?id=Ve0Wth3ptT_
https://openreview.net/forum?id=Ve0Wth3ptT_
Qizhang Feng,Ninghao Liu,Fan Yang,Ruixiang Tang,Mengnan Du,Xia Hu
ICLR 2022,Poster
Graph Neural Networks (GNNs) are gaining extensive attention for their application in graph data. However, the black-box nature of GNNs prevents users from understanding and trusting the models, thus hampering their applicability. Whereas explaining GNNs remains a challenge, most existing methods fall into approximatio...
https://openreview.net/pdf/fd7de8640028480fa9fe56dd9ed7bcad9182bf31.pdf
Improving Mutual Information Estimation with Annealed and Energy-Based Bounds
https://openreview.net/forum?id=T0B9AoM_bFg
https://openreview.net/forum?id=T0B9AoM_bFg
Rob Brekelmans,Sicong Huang,Marzyeh Ghassemi,Greg Ver Steeg,Roger Baker Grosse,Alireza Makhzani
ICLR 2022,Poster
Mutual information (MI) is a fundamental quantity in information theory and machine learning. However, direct estimation of MI is intractable, even if the true joint probability density for the variables of interest is known, as it involves estimating a potentially high-dimensional log partition function. In this work,...
https://openreview.net/pdf/a68f8e4bbad21f5599f372c94827c5f596c6555b.pdf
Sequence Approximation using Feedforward Spiking Neural Network for Spatiotemporal Learning: Theory and Optimization Methods
https://openreview.net/forum?id=bp-LJ4y_XC
https://openreview.net/forum?id=bp-LJ4y_XC
Xueyuan She,Saurabh Dash,Saibal Mukhopadhyay
ICLR 2022,Poster
A dynamical system of spiking neurons with only feedforward connections can classify spatiotemporal patterns without recurrent connections. However, the theoretical construct of a feedforward spiking neural network (SNN) for approximating a temporal sequence remains unclear, making it challenging to optimize SNN archit...
https://openreview.net/pdf/043f00a3e618d0c71bbd79dffbdfdaf6d9fd4d1b.pdf
Diverse Client Selection for Federated Learning via Submodular Maximization
https://openreview.net/forum?id=nwKXyFvaUm
https://openreview.net/forum?id=nwKXyFvaUm
Ravikumar Balakrishnan,Tian Li,Tianyi Zhou,Nageen Himayat,Virginia Smith,Jeff Bilmes
ICLR 2022,Poster
In every communication round of federated learning, a random subset of clients communicate their model updates back to the server which then aggregates them all. The optimal size of this subset is not known and several studies have shown that typically random selection does not perform very well in terms of ...
https://openreview.net/pdf/4d539789e55d133a96781cda576be4ab34ec5982.pdf
From Intervention to Domain Transportation: A Novel Perspective to Optimize Recommendation
https://openreview.net/forum?id=jT1EwXu-4hj
https://openreview.net/forum?id=jT1EwXu-4hj
Da Xu,Yuting Ye,Chuanwei Ruan,Evren Korpeoglu,Sushant Kumar,Kannan Achan
ICLR 2022,Poster
The interventional nature of recommendation has attracted increasing attention in recent years. It particularly motivates researchers to formulate learning and evaluating recommendation as causal inference and data missing-not-at-random problems. However, few take seriously the consequence of violating the critical ass...
https://openreview.net/pdf/22322b458fd437ff0b3cf13debd29cc381b25ccc.pdf
Variational Predictive Routing with Nested Subjective Timescales
https://openreview.net/forum?id=JxFgJbZ-wft
https://openreview.net/forum?id=JxFgJbZ-wft
Alexey Zakharov,Qinghai Guo,Zafeirios Fountas
ICLR 2022,Poster
Discovery and learning of an underlying spatiotemporal hierarchy in sequential data is an important topic for machine learning. Despite this, little work has been done to explore hierarchical generative models that can flexibly adapt their layerwise representations in response to datasets with different temporal dynami...
https://openreview.net/pdf/712c74938a55973dd0b3f46e154fc0696194b578.pdf
Sample and Computation Redistribution for Efficient Face Detection
https://openreview.net/forum?id=RhB1AdoFfGE
https://openreview.net/forum?id=RhB1AdoFfGE
Jia Guo,Jiankang Deng,Alexandros Lattas,Stefanos Zafeiriou
ICLR 2022,Poster
Although tremendous strides have been made in uncontrolled face detection, accurate face detection with a low computation cost remains an open challenge. In this paper, we point out that computation distribution and scale augmentation are the keys to detecting small faces from low-resolution images. Motivated by these ...
https://openreview.net/pdf/d7b9dd38011f418b1c66bb378aef38a25d8c9bf5.pdf
Sound Adversarial Audio-Visual Navigation
https://openreview.net/forum?id=NkZq4OEYN-
https://openreview.net/forum?id=NkZq4OEYN-
Yinfeng Yu,Wenbing Huang,Fuchun Sun,Changan Chen,Yikai Wang,Xiaohong Liu
ICLR 2022,Poster
Audio-visual navigation task requires an agent to find a sound source in a realistic, unmapped 3D environment by utilizing egocentric audio-visual observations. Existing audio-visual navigation works assume a clean environment that solely contains the target sound, which, however, would not be suitable in most real-wor...
https://openreview.net/pdf/892cdd541646cc28a0880494951fbd89079c2a3d.pdf
Out-of-distribution Generalization in the Presence of Nuisance-Induced Spurious Correlations
https://openreview.net/forum?id=12RoR2o32T
https://openreview.net/forum?id=12RoR2o32T
Aahlad Manas Puli,Lily H Zhang,Eric Karl Oermann,Rajesh Ranganath
ICLR 2022,Poster
In many prediction problems, spurious correlations are induced by a changing relationship between the label and a nuisance variable that is also correlated with the covariates. For example, in classifying animals in natural images, the background, which is a nuisance, can predict the type of animal. This nuisance-label...
https://openreview.net/pdf/7128d52f12e20439db2d07083f3de3995967bb53.pdf
AEVA: Black-box Backdoor Detection Using Adversarial Extreme Value Analysis
https://openreview.net/forum?id=OM_lYiHXiCL
https://openreview.net/forum?id=OM_lYiHXiCL
Junfeng Guo,Ang Li,Cong Liu
ICLR 2022,Poster
Deep neural networks (DNNs) are proved to be vulnerable against backdoor attacks. A backdoor could be embedded in the target DNNs through injecting a backdoor trigger into the training examples, which can cause the target DNNs misclassify an input attached with the backdoor trigger. Recent backdoor detection methods o...
https://openreview.net/pdf/b8ad85b4ddd615a5abac4d7c1d5713fc92b9f0e9.pdf
Resonance in Weight Space: Covariate Shift Can Drive Divergence of SGD with Momentum
https://openreview.net/forum?id=5ECQL05ub0J
https://openreview.net/forum?id=5ECQL05ub0J
Kirby Banman,Garnet Liam Peet-Pare,Nidhi Hegde,Alona Fyshe,Martha White
ICLR 2022,Poster
Most convergence guarantees for stochastic gradient descent with momentum (SGDm) rely on iid sampling. Yet, SGDm is often used outside this regime, in settings with temporally correlated input samples such as continual learning and reinforcement learning. Existing work has shown that SGDm with a decaying step-size can...
https://openreview.net/pdf/967691b8c1cb517500d87dfd7dbf7dd6293c0e89.pdf
Top-label calibration and multiclass-to-binary reductions
https://openreview.net/forum?id=WqoBaaPHS-
https://openreview.net/forum?id=WqoBaaPHS-
Chirag Gupta,Aaditya Ramdas
ICLR 2022,Poster
We propose a new notion of multiclass calibration called top-label calibration. A classifier is said to be top-label calibrated if the reported probability for the predicted class label---the top-label---is calibrated, conditioned on the top-label. This conditioning is essential for practical utility of the calibration...
https://openreview.net/pdf/a580ad8d84d1a31adcccb9f9e2102c3b503121df.pdf
Anisotropic Random Feature Regression in High Dimensions
https://openreview.net/forum?id=JfaWawZ8BmX
https://openreview.net/forum?id=JfaWawZ8BmX
Gabriel Mel,Jeffrey Pennington
ICLR 2022,Poster
In contrast to standard statistical wisdom, modern learning algorithms typically find their best performance in the overparameterized regime in which the model has many more parameters than needed to fit the training data. A growing number of recent works have shown that random feature models can offer a detailed theor...
https://openreview.net/pdf/bc2ddad146bd93609c8510aac28ae824072d1832.pdf
Back2Future: Leveraging Backfill Dynamics for Improving Real-time Predictions in Future
https://openreview.net/forum?id=L01Nn_VJ9i
https://openreview.net/forum?id=L01Nn_VJ9i
Harshavardhan Kamarthi,Alexander Rodríguez,B. Aditya Prakash
ICLR 2022,Poster
For real-time forecasting in domains like public health and macroeconomics, data collection is a non-trivial and demanding task. Often after being initially released, it undergoes several revisions later (maybe due to human or technical constraints) - as a result, it may take weeks until the data reaches a stable value...
https://openreview.net/pdf/5ff5a41a0773c6764d009a86a74cce3dd35e8ec3.pdf
Approximation and Learning with Deep Convolutional Models: a Kernel Perspective
https://openreview.net/forum?id=lrocYB-0ST2
https://openreview.net/forum?id=lrocYB-0ST2
Alberto Bietti
ICLR 2022,Poster
The empirical success of deep convolutional networks on tasks involving high-dimensional data such as images or audio suggests that they can efficiently approximate certain functions that are well-suited for such tasks. In this paper, we study this through the lens of kernel methods, by considering simple hierarchical ...
https://openreview.net/pdf/35eeb8c9531f39eb14e07db8fb296d38b7f1a369.pdf
Value Function Spaces: Skill-Centric State Abstractions for Long-Horizon Reasoning
https://openreview.net/forum?id=vgqS1vkkCbE
https://openreview.net/forum?id=vgqS1vkkCbE
Dhruv Shah,Peng Xu,Yao Lu,Ted Xiao,Alexander T Toshev,Sergey Levine,brian ichter
ICLR 2022,Poster
Reinforcement learning can train policies that effectively perform complex tasks. However for long-horizon tasks, the performance of these methods degrades with horizon, often necessitating reasoning over and chaining lower-level skills. Hierarchical reinforcement learning aims to enable this by providing a bank of low...
https://openreview.net/pdf/c49d03d6fc757e37898cc5399159de2e30589146.pdf
Fast Regression for Structured Inputs
https://openreview.net/forum?id=gNp54NxHUPJ
https://openreview.net/forum?id=gNp54NxHUPJ
Raphael A Meyer,Cameron N Musco,Christopher P Musco,David Woodruff,Samson Zhou
ICLR 2022,Poster
We study the $\ell_p$ regression problem, which requires finding $\mathbf{x}\in\mathbb R^{d}$ that minimizes $\|\mathbf{A}\mathbf{x}-\mathbf{b}\|_p$ for a matrix $\mathbf{A}\in\mathbb R^{n \times d}$ and response vector $\mathbf{b}\in\mathbb R^{n}$. There has been recent interest in developing subsampling methods for t...
https://openreview.net/pdf/a76864e8c343a5dcb3414cc8caa6fc2fdd2afc19.pdf
CrossBeam: Learning to Search in Bottom-Up Program Synthesis
https://openreview.net/forum?id=qhC8mr2LEKq
https://openreview.net/forum?id=qhC8mr2LEKq
Kensen Shi,Hanjun Dai,Kevin Ellis,Charles Sutton
ICLR 2022,Poster
Many approaches to program synthesis perform a search within an enormous space of programs to find one that satisfies a given specification. Prior works have used neural models to guide combinatorial search algorithms, but such approaches still explore a huge portion of the search space and quickly become intractable a...
https://openreview.net/pdf/d098dde7689c9940303ddd8c11f5f44e8b866692.pdf
PEARL: Data Synthesis via Private Embeddings and Adversarial Reconstruction Learning
https://openreview.net/forum?id=M6M8BEmd6dq
https://openreview.net/forum?id=M6M8BEmd6dq
Seng Pei Liew,Tsubasa Takahashi,Michihiko Ueno
ICLR 2022,Poster
We propose a new framework of synthesizing data using deep generative models in a differentially private manner. Within our framework, sensitive data are sanitized with rigorous privacy guarantees in a one-shot fashion, such that training deep generative models is possible without re-using the original data. Hence, no ...
https://openreview.net/pdf/3efedef6ce8396ae22861cd7154606c25bd31e95.pdf
Divisive Feature Normalization Improves Image Recognition Performance in AlexNet
https://openreview.net/forum?id=aOX3a9q3RVV
https://openreview.net/forum?id=aOX3a9q3RVV
Michelle Miller,SueYeon Chung,Kenneth D. Miller
ICLR 2022,Poster
Local divisive normalization provides a phenomenological description of many nonlinear response properties of neurons across visual cortical areas. To gain insight into the utility of this operation, we studied the effects on AlexNet of a local divisive normalization between features, with learned parameters. Developin...
https://openreview.net/pdf/452011d69839dd4fa39ba4bec882b24cb5bb2649.pdf
Evaluating Distributional Distortion in Neural Language Modeling
https://openreview.net/forum?id=bTteFbU99ye
https://openreview.net/forum?id=bTteFbU99ye
Benjamin LeBrun,Alessandro Sordoni,Timothy J. O'Donnell
ICLR 2022,Poster
A fundamental characteristic of natural language is the high rate at which speakers produce novel expressions. Because of this novelty, a heavy-tail of rare events accounts for a significant amount of the total probability mass of distributions in language (Baayen, 2001). Standard language modeling metrics such as perp...
https://openreview.net/pdf/c22ea9d1df97b96c390eb350b4c09eb8e2388128.pdf
MaGNET: Uniform Sampling from Deep Generative Network Manifolds Without Retraining
https://openreview.net/forum?id=r5qumLiYwf9
https://openreview.net/forum?id=r5qumLiYwf9
Ahmed Imtiaz Humayun,Randall Balestriero,Richard Baraniuk
ICLR 2022,Poster
Deep Generative Networks (DGNs) are extensively employed in Generative Adversarial Networks (GANs), Variational Autoencoders (VAEs), and their variants to approximate the data manifold, and data distribution on that manifold. However, training samples are often obtained based on preferences, costs, or convenience produ...
https://openreview.net/pdf/e9c0ccdf7ecc11a5666ac100d75f89816ce7c0f7.pdf
Neural Contextual Bandits with Deep Representation and Shallow Exploration
https://openreview.net/forum?id=xnYACQquaGV
https://openreview.net/forum?id=xnYACQquaGV
Pan Xu,Zheng Wen,Handong Zhao,Quanquan Gu
ICLR 2022,Poster
We study neural contextual bandits, a general class of contextual bandits, where each context-action pair is associated with a raw feature vector, but the specific reward generating function is unknown. We propose a novel learning algorithm that transforms the raw feature vector using the last hidden layer of a deep Re...
https://openreview.net/pdf/c6ee94e7fd22670895280aaf06535b6373d428eb.pdf
PI3NN: Out-of-distribution-aware Prediction Intervals from Three Neural Networks
https://openreview.net/forum?id=NoB8YgRuoFU
https://openreview.net/forum?id=NoB8YgRuoFU
Siyan Liu,Pei Zhang,Dan Lu,Guannan Zhang
ICLR 2022,Poster
We propose a novel prediction interval (PI) method for uncertainty quantification, which addresses three major issues with the state-of-the-art PI methods. First, existing PI methods require retraining of neural networks (NNs) for every given confidence level and suffer from the crossing issue in calculating multiple P...
https://openreview.net/pdf/84a3741f26e65df3c7b232779bcfb5dac283d41e.pdf
Discriminative Similarity for Data Clustering
https://openreview.net/forum?id=kj0_45Y4r9i
https://openreview.net/forum?id=kj0_45Y4r9i
Yingzhen Yang,Ping Li
ICLR 2022,Poster
Similarity-based clustering methods separate data into clusters according to the pairwise similarity between the data, and the pairwise similarity is crucial for their performance. In this paper, we propose {\em Clustering by Discriminative Similarity (CDS)}, a novel method which learns discriminative similarity for d...
https://openreview.net/pdf/b159fb24355dd1bf64f74a757973bbc8cc96d57e.pdf
It Takes Four to Tango: Multiagent Self Play for Automatic Curriculum Generation
https://openreview.net/forum?id=q4tZR1Y-UIs
https://openreview.net/forum?id=q4tZR1Y-UIs
Yuqing Du,Pieter Abbeel,Aditya Grover
ICLR 2022,Poster
We are interested in training general-purpose reinforcement learning agents that can solve a wide variety of goals. Training such agents efficiently requires automatic generation of a goal curriculum. This is challenging as it requires (a) exploring goals of increasing difficulty, while ensuring that the agent (b) is e...
https://openreview.net/pdf/68a6237e79699c723ce9c9c39537422391df3e2b.pdf
CROP: Certifying Robust Policies for Reinforcement Learning through Functional Smoothing
https://openreview.net/forum?id=HOjLHrlZhmx
https://openreview.net/forum?id=HOjLHrlZhmx
Fan Wu,Linyi Li,Zijian Huang,Yevgeniy Vorobeychik,Ding Zhao,Bo Li
ICLR 2022,Poster
As reinforcement learning (RL) has achieved great success and been even adopted in safety-critical domains such as autonomous vehicles, a range of empirical studies have been conducted to improve its robustness against adversarial attacks. However, how to certify its robustness with theoretical guarantees still remains...
https://openreview.net/pdf/b79f87ced196c2a5a13ca10bae3d39a8924b08b8.pdf
Neural Link Prediction with Walk Pooling
https://openreview.net/forum?id=CCu6RcUMwK0
https://openreview.net/forum?id=CCu6RcUMwK0
Liming Pan,Cheng Shi,Ivan Dokmanić
ICLR 2022,Poster
Graph neural networks achieve high accuracy in link prediction by jointly leveraging graph topology and node attributes. Topology, however, is represented indirectly; state-of-the-art methods based on subgraph classification label nodes with distance to the target link, so that, although topological information is pres...
https://openreview.net/pdf/ad031c5e836c55357e2f13cdb18fa502a7eecc80.pdf
On the Convergence of Certified Robust Training with Interval Bound Propagation
https://openreview.net/forum?id=YeShU5mLfLt
https://openreview.net/forum?id=YeShU5mLfLt
Yihan Wang,Zhouxing Shi,Quanquan Gu,Cho-Jui Hsieh
ICLR 2022,Poster
Interval Bound Propagation (IBP) is so far the base of state-of-the-art methods for training neural networks with certifiable robustness guarantees when potential adversarial perturbations present, while the convergence of IBP training remains unknown in existing literature. In this paper, we present a theoretical anal...
https://openreview.net/pdf/4e7f7f34a6f11b062e283b3a04324bb373e39067.pdf
Pretraining Text Encoders with Adversarial Mixture of Training Signal Generators
https://openreview.net/forum?id=sX3XaHwotOg
https://openreview.net/forum?id=sX3XaHwotOg
Yu Meng,Chenyan Xiong,Payal Bajaj,saurabh tiwary,Paul N. Bennett,Jiawei Han,Xia Song
ICLR 2022,Poster
We present a new framework AMOS that pretrains text encoders with an Adversarial learning curriculum via a Mixture Of Signals from multiple auxiliary generators. Following ELECTRA-style pretraining, the main encoder is trained as a discriminator to detect replaced tokens generated by auxiliary masked language models (M...
https://openreview.net/pdf/4127a755f1e5ee998e6423f7a8d734f9e88b8cab.pdf
Towards Training Billion Parameter Graph Neural Networks for Atomic Simulations
https://openreview.net/forum?id=0jP2n0YFmKG
https://openreview.net/forum?id=0jP2n0YFmKG
Anuroop Sriram,Abhishek Das,Brandon M Wood,Siddharth Goyal,C. Lawrence Zitnick
ICLR 2022,Poster
Recent progress in Graph Neural Networks (GNNs) for modeling atomic simulations has the potential to revolutionize catalyst discovery, which is a key step in making progress towards the energy breakthroughs needed to combat climate change. However, the GNNs that have proven most effective for this task are memory inten...
https://openreview.net/pdf/d00345679f2290baeabb225428516fad14fea79e.pdf
Understanding and Leveraging Overparameterization in Recursive Value Estimation
https://openreview.net/forum?id=shbAgEsk3qM
https://openreview.net/forum?id=shbAgEsk3qM
Chenjun Xiao,Bo Dai,Jincheng Mei,Oscar A Ramirez,Ramki Gummadi,Chris Harris,Dale Schuurmans
ICLR 2022,Poster
The theory of function approximation in reinforcement learning (RL) typically considers low capacity representations that incur a tradeoff between approximation error, stability and generalization. Current deep architectures, however, operate in an overparameterized regime where approximation error is not necessarily ...
https://openreview.net/pdf/c5131ad5930c1a9f32ede673f284175158a75792.pdf
Optimization and Adaptive Generalization of Three layer Neural Networks
https://openreview.net/forum?id=dPyRNUlttBv
https://openreview.net/forum?id=dPyRNUlttBv
Khashayar Gatmiry,Stefanie Jegelka,Jonathan Kelner
ICLR 2022,Poster
While there has been substantial recent work studying generalization of neural networks, the ability of deep nets in automating the process of feature extraction still evades a thorough mathematical understanding. As a step toward this goal, we analyze learning and generalization of a three-layer neural network wit...
https://openreview.net/pdf/086ce10c9607a92d59635b0ac0f1f0bd8c86ae5b.pdf
Non-Parallel Text Style Transfer with Self-Parallel Supervision
https://openreview.net/forum?id=-TSe5o7STVR
https://openreview.net/forum?id=-TSe5o7STVR
Ruibo Liu,Chongyang Gao,Chenyan Jia,Guangxuan Xu,Soroush Vosoughi
ICLR 2022,Poster
The performance of existing text style transfer models is severely limited by the non-parallel datasets on which the models are trained. In non-parallel datasets, no direct mapping exists between sentences of the source and target style; the style transfer models thus only receive weak supervision of the target sentenc...
https://openreview.net/pdf/7858e341aa92c11991455a43e9a78c35ee4655a2.pdf
Can an Image Classifier Suffice For Action Recognition?
https://openreview.net/forum?id=qhkFX-HLuHV
https://openreview.net/forum?id=qhkFX-HLuHV
Quanfu Fan,Chun-Fu Chen,Rameswar Panda
ICLR 2022,Poster
We explore a new perspective on video understanding by casting the video recognition problem as an image recognition task. Our approach rearranges input video frames into super images, which allow for training an image classifier directly to fulfill the task of action recognition, in exactly the same way as image class...
https://openreview.net/pdf/30716aa30d9fbd5e0f9a95e4c0e1255607ab8bc4.pdf
Interacting Contour Stochastic Gradient Langevin Dynamics
https://openreview.net/forum?id=IK9ap6nxXr2
https://openreview.net/forum?id=IK9ap6nxXr2
Wei Deng,Siqi Liang,Botao Hao,Guang Lin,Faming Liang
ICLR 2022,Poster
We propose an interacting contour stochastic gradient Langevin dynamics (ICSGLD) sampler, an embarrassingly parallel multiple-chain contour stochastic gradient Langevin dynamics (CSGLD) sampler with efficient interactions. We show that ICSGLD can be theoretically more efficient than a single-chain CSGLD with an equival...
https://openreview.net/pdf/bf454b672f7afe0c72e3a83029c7238309a1b4a0.pdf
NeuPL: Neural Population Learning
https://openreview.net/forum?id=MIX3fJkl_1
https://openreview.net/forum?id=MIX3fJkl_1
Siqi Liu,Luke Marris,Daniel Hennes,Josh Merel,Nicolas Heess,Thore Graepel
ICLR 2022,Poster
Learning in strategy games (e.g. StarCraft, poker) requires the discovery of diverse policies. This is often achieved by iteratively training new policies against existing ones, growing a policy population that is robust to exploit. This iterative approach suffers from two issues in real-world games: a) under finite bu...
https://openreview.net/pdf/eeeb391c4885267d9c80ba3a8ea3dfd9e9ea8832.pdf
DeSKO: Stability-Assured Robust Control with a Deep Stochastic Koopman Operator
https://openreview.net/forum?id=hniLRD_XCA
https://openreview.net/forum?id=hniLRD_XCA
Minghao Han,Jacob Euler-Rolle,Robert K. Katzschmann
ICLR 2022,Poster
The Koopman operator theory linearly describes nonlinear dynamical systems in a high-dimensional functional space and it allows to apply linear control methods to highly nonlinear systems. However, the Koopman operator does not account for any uncertainty in dynamical systems, causing it to perform poorly in real-world...
https://openreview.net/pdf/862602026e43c103de39be4295ff8f7288f3acf2.pdf
Neural Network Approximation based on Hausdorff distance of Tropical Zonotopes
https://openreview.net/forum?id=oiZJwC_fyS
https://openreview.net/forum?id=oiZJwC_fyS
Panagiotis Misiakos,Georgios Smyrnis,George Retsinas,Petros Maragos
ICLR 2022,Poster
In this work we theoretically contribute to neural network approximation by providing a novel tropical geometrical viewpoint to structured neural network compression. In particular, we show that the approximation error between two neural networks with ReLU activations and one hidden layer depends on the Hausdorff dista...
https://openreview.net/pdf/e09efd74b974abec052126ca4cbb787b04fd3265.pdf

ICLR 2022 (International Conference on Learning Representations) Accepted Paper Meta Info Dataset

This dataset was collected from the ICLR 2022 OpenReview website (https://openreview.net/group?id=ICLR.cc/2022/Conference#tab-accept-oral) and the DeepNLP paper index (http://www.deepnlp.org/content/paper/iclr2022). Researchers interested in analyzing ICLR 2022 accepted papers and potential trends can use the already cleaned-up JSON files; each row contains the meta information of one paper from the ICLR 2022 conference. To explore more AI & robotics papers (NeurIPS/ICML/ICLR/IROS/ICRA/etc.) and AI equations, you can also navigate the Equation Search Engine (http://www.deepnlp.org/search/equation) as well as the AI Agent Search Engine for deployed AI apps and agents (http://www.deepnlp.org/search/agent) in your domain.

Meta Information of Json File

{
    "title": "Domino: Discovering Systematic Errors with Cross-Modal Embeddings",
    "url": "https://openreview.net/forum?id=FPCMqjI0jXN",
    "detail_url": "https://openreview.net/forum?id=FPCMqjI0jXN",
    "authors": "Sabri Eyuboglu,Maya Varma,Khaled Kamal Saab,Jean-Benoit Delbrouck,Christopher Lee-Messer,Jared Dunnmon,James Zou,Christopher Re",
    "tags": "ICLR 2022,Oral",
    "abstract": "Machine learning models that achieve high overall accuracy often make systematic errors on important subsets (or slices) of data. Identifying underperforming slices is particularly challenging when working with high-dimensional inputs (e.g. images, audio), where important slices are often unlabeled. In order to address this issue, recent studies have proposed automated slice discovery methods (SDMs), which leverage learned model representations to mine input data for slices on which a model performs poorly. To be useful to a practitioner, these methods must identify slices that are both underperforming and coherent (i.e. united by a human-understandable concept). However, no quantitative evaluation framework currently exists for rigorously assessing SDMs with respect to these criteria. Additionally, prior qualitative evaluations have shown that SDMs often identify slices that are incoherent. In this work, we address these challenges by first designing a principled evaluation framework that enables a quantitative comparison of SDMs across 1,235 slice discovery settings in three input domains (natural images, medical images, and time-series data).\nThen, motivated by the recent development of powerful cross-modal representation learning approaches, we present Domino, an SDM that leverages cross-modal embeddings and a novel error-aware mixture model to discover and describe coherent slices. We find that Domino accurately identifies 36% of the 1,235 slices in our framework -- a 12 percentage point improvement over prior methods. Further, Domino is the first SDM that can provide natural language descriptions of identified slices, correctly generating the exact name of the slice in 35% of settings. ",
    "pdf": "https://openreview.net/pdf/a5ca838a35d810400cfa090453cd85abe02ab6b0.pdf"
}
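The `authors` and `tags` fields are comma-separated strings rather than lists, so a little post-processing is useful before analysis. Below is a minimal sketch using only the Python standard library `json` module; the record is the example row shown above, with the author list and abstract truncated here for brevity.

```python
import json

# One row of the dataset, as in the example above
# (authors and abstract truncated for brevity)
row_json = """{
    "title": "Domino: Discovering Systematic Errors with Cross-Modal Embeddings",
    "url": "https://openreview.net/forum?id=FPCMqjI0jXN",
    "detail_url": "https://openreview.net/forum?id=FPCMqjI0jXN",
    "authors": "Sabri Eyuboglu,Maya Varma,Khaled Kamal Saab",
    "tags": "ICLR 2022,Oral",
    "abstract": "Machine learning models that achieve high overall accuracy ...",
    "pdf": "https://openreview.net/pdf/a5ca838a35d810400cfa090453cd85abe02ab6b0.pdf"
}"""

row = json.loads(row_json)

# Split the comma-separated fields into structured values
authors = row["authors"].split(",")
venue, presentation = row["tags"].split(",")

print(venue)          # ICLR 2022
print(presentation)   # Oral
print(len(authors))   # 3
```

The same splitting logic applies to every row, so for trend analysis (e.g. counting Oral vs. Poster papers) you can map it over the full JSON file.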

Related

AI Equation

List of AI Equations and Latex
List of Math Equations and Latex
List of Physics Equations and Latex
List of Statistics Equations and Latex
List of Machine Learning Equations and Latex

AI Agent Marketplace and Search

AI Agent Marketplace and Search
Robot Search
Equation and Academic search
AI & Robot Comprehensive Search
AI & Robot Question
AI & Robot Community
AI Agent Marketplace Blog

AI Agent Reviews

AI Agent Marketplace Directory
Microsoft AI Agents Reviews
Claude AI Agents Reviews
OpenAI AI Agents Reviews
Salesforce AI Agents Reviews
AI Agent Builder Reviews
