Dataset Viewer
Auto-converted to Parquet
Column schema (string lengths are min–max over the rows shown):

abs            string   45–62
Download PDF   string   50–84
OpenReview     string   42–42
title          string   10–168
url            string   45–62
authors        string   9–704
detail_url     string   45–62
tags           string   1 distinct value ("ICML 2023")
abstract       string   415–5.03k

In every row, url and detail_url are identical to abs, and tags is always "ICML 2023"; each entry below therefore lists the abs link once.
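
A minimal sketch of loading a dataset with this schema through the Hugging Face `datasets` library; the repository id below is a hypothetical placeholder, not this dataset's actual id:

```python
# Load a dataset with the schema above via the `datasets` library.
# "org/icml-2023-papers" is a hypothetical placeholder id.
from datasets import load_dataset

ds = load_dataset("org/icml-2023-papers", split="train")
print(ds.column_names)   # ['abs', 'Download PDF', 'OpenReview', 'title', ...]
print(ds[0]["title"])    # title of the first row
short = ds.filter(lambda row: len(row["abstract"]) < 1000)  # column-based filter
```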
Data Structures for Density Estimation (ICML 2023)
Authors: Anders Aamand, Alexandr Andoni, Justin Y. Chen, Piotr Indyk, Shyam Narayanan, Sandeep Silwal
abs: https://proceedings.mlr.press/v202/aamand23a.html | PDF: https://proceedings.mlr.press/v202/aamand23a/aamand23a.pdf | OpenReview: https://openreview.net/forum?id=BVomXLJQoH
Abstract: We study statistical/computational tradeoffs for the following density estimation problem: given $k$ distributions $v_1, \ldots, v_k$ over a discrete domain of size $n$, and sampling access to a distribution $p$, identify $v_i$ that is "close" to $p$. Our main result is the first data structure that, given a sublinear ...
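
The paper's contribution is a sublinear data structure for this selection problem; as a point of reference, a naive linear-scan baseline for the same task looks as follows (a sketch, not the paper's method):

```python
# Naive baseline for the density selection task: estimate p empirically from
# samples, then return the candidate v_i closest in total variation distance.
# The paper's data structure avoids this O(k * n) scan; this sketch only
# illustrates the problem setup.
import numpy as np

def closest_distribution(candidates, samples, n):
    # candidates: (k, n) array of distributions over {0, ..., n-1}
    # samples: iid draws from the unknown p
    p_hat = np.bincount(samples, minlength=n) / len(samples)
    tv = 0.5 * np.abs(candidates - p_hat).sum(axis=1)  # TV distance to each v_i
    return int(np.argmin(tv))

rng = np.random.default_rng(0)
v = rng.dirichlet(np.ones(100), size=5)        # k = 5 candidates over n = 100
samples = rng.choice(100, size=2000, p=v[3])   # p is secretly v_3
print(closest_distribution(v, samples, 100))   # typically prints 3
```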
ClusterFuG: Clustering Fully connected Graphs by Multicut (ICML 2023)
Authors: Ahmed Abbas, Paul Swoboda
abs: https://proceedings.mlr.press/v202/abbas23a.html | PDF: https://proceedings.mlr.press/v202/abbas23a/abbas23a.pdf | OpenReview: https://openreview.net/forum?id=IK5SlumdGu
Abstract: We propose a graph clustering formulation based on multicut (a.k.a. weighted correlation clustering) on the complete graph. Our formulation does not need specification of the graph topology as in the original sparse formulation of multicut, making our approach simpler and potentially better performing. In contrast to u...

Generalization on the Unseen, Logic Reasoning and Degree Curriculum (ICML 2023)
Authors: Emmanuel Abbe, Samy Bengio, Aryo Lotfi, Kevin Rizk
abs: https://proceedings.mlr.press/v202/abbe23a.html | PDF: https://proceedings.mlr.press/v202/abbe23a/abbe23a.pdf | OpenReview: https://openreview.net/forum?id=3dqwXb1te4
Abstract: This paper considers the learning of logical (Boolean) functions with focus on the generalization on the unseen (GOTU) setting, a strong case of out-of-distribution generalization. This is motivated by the fact that the rich combinatorial nature of data in certain reasoning tasks (e.g., arithmetic/logic) makes represen...

Toward Large Kernel Models (ICML 2023)
Authors: Amirhesam Abedsoltan, Mikhail Belkin, Parthe Pandit
abs: https://proceedings.mlr.press/v202/abedsoltan23a.html | PDF: https://proceedings.mlr.press/v202/abedsoltan23a/abedsoltan23a.pdf | OpenReview: https://openreview.net/forum?id=fCyg20LQsL
Abstract: Recent studies indicate that kernel machines can often perform similarly or better than deep neural networks (DNNs) on small datasets. The interest in kernel machines has been additionally bolstered by the discovery of their equivalence to wide neural networks in certain regimes. However, a key feature of DNNs is their...

Expertise Trees Resolve Knowledge Limitations in Collective Decision-Making (ICML 2023)
Authors: Axel Abels, Tom Lenaerts, Vito Trianni, Ann Nowe
abs: https://proceedings.mlr.press/v202/abels23a.html | PDF: https://proceedings.mlr.press/v202/abels23a/abels23a.pdf | OpenReview: https://openreview.net/forum?id=Fd7NCsKLPF
Abstract: Experts advising decision-makers are likely to display expertise which varies as a function of the problem instance. In practice, this may lead to sub-optimal or discriminatory decisions against minority cases. In this work, we model such changes in depth and breadth of knowledge as a partitioning of the problem space ...
Comparison of meta-learners for estimating multi-valued treatment heterogeneous effects (ICML 2023)
Authors: Naoufal Acharki, Ramiro Lugo, Antoine Bertoncello, Josselin Garnier
abs: https://proceedings.mlr.press/v202/acharki23a.html | PDF: https://proceedings.mlr.press/v202/acharki23a/acharki23a.pdf | OpenReview: https://openreview.net/forum?id=lJaAPdXgxL
Abstract: Conditional Average Treatment Effects (CATE) estimation is one of the main challenges in causal inference with observational data. In addition to Machine Learning-based models, nonparametric estimators called meta-learners have been developed to estimate the CATE with the main advantage of not restraining the estimatio...
BNN-DP: Robustness Certification of Bayesian Neural Networks via Dynamic Programming (ICML 2023)
Authors: Steven Adams, Andrea Patane, Morteza Lahijanian, Luca Laurenti
abs: https://proceedings.mlr.press/v202/adams23a.html | PDF: https://proceedings.mlr.press/v202/adams23a/adams23a.pdf | OpenReview: https://openreview.net/forum?id=wHPDEyYEps
Abstract: In this paper, we introduce BNN-DP, an efficient algorithmic framework for analysis of adversarial robustness of Bayesian Neural Networks (BNNs). Given a compact set of input points $T\subset \mathbb{R}^n$, BNN-DP computes lower and upper bounds on the BNN’s predictions for all the points in $T$. The framework is based...

SAM operates far from home: eigenvalue regularization as a dynamical phenomenon (ICML 2023)
Authors: Atish Agarwala, Yann Dauphin
abs: https://proceedings.mlr.press/v202/agarwala23a.html | PDF: https://proceedings.mlr.press/v202/agarwala23a/agarwala23a.pdf | OpenReview: https://openreview.net/forum?id=5YAP9Ntq3L
Abstract: The Sharpness Aware Minimization (SAM) optimization algorithm has been shown to control large eigenvalues of the loss Hessian and provide generalization benefits in a variety of settings. The original motivation for SAM was a modified loss function which penalized sharp minima; subsequent analyses have also focused on ...

Second-order regression models exhibit progressive sharpening to the edge of stability (ICML 2023)
Authors: Atish Agarwala, Fabian Pedregosa, Jeffrey Pennington
abs: https://proceedings.mlr.press/v202/agarwala23b.html | PDF: https://proceedings.mlr.press/v202/agarwala23b/agarwala23b.pdf | OpenReview: https://openreview.net/forum?id=mP79L3pOke
Abstract: Recent studies of gradient descent with large step sizes have shown that there is often a regime with an initial increase in the largest eigenvalue of the loss Hessian (progressive sharpening), followed by a stabilization of the eigenvalue near the maximum value which allows convergence (edge of stability). These pheno...

Global optimality of Elman-type RNNs in the mean-field regime (ICML 2023)
Authors: Andrea Agazzi, Jianfeng Lu, Sayan Mukherjee
abs: https://proceedings.mlr.press/v202/agazzi23a.html | PDF: https://proceedings.mlr.press/v202/agazzi23a/agazzi23a.pdf | OpenReview: https://openreview.net/forum?id=szQzz2H8er
Abstract: We analyze Elman-type recurrent neural networks (RNNs) and their training in the mean-field regime. Specifically, we show convergence of gradient descent training dynamics of the RNN to the corresponding mean-field formulation in the large width limit. We also show that the fixed points of the limiting infinite-width d...

SemSup-XC: Semantic Supervision for Zero and Few-shot Extreme Classification (ICML 2023)
Authors: Pranjal Aggarwal, Ameet Deshpande, Karthik R Narasimhan
abs: https://proceedings.mlr.press/v202/aggarwal23a.html | PDF: https://proceedings.mlr.press/v202/aggarwal23a/aggarwal23a.pdf | OpenReview: https://openreview.net/forum?id=kwb6T6LP7f
Abstract: Extreme classification (XC) involves predicting over large numbers of classes (thousands to millions), with real-world applications like news article classification and e-commerce product tagging. The zero-shot version of this task requires generalization to novel classes without additional supervision. In this paper, ...

Adaptive IMLE for Few-shot Pretraining-free Generative Modelling (ICML 2023)
Authors: Mehran Aghabozorgi, Shichong Peng, Ke Li
abs: https://proceedings.mlr.press/v202/aghabozorgi23a.html | PDF: https://proceedings.mlr.press/v202/aghabozorgi23a/aghabozorgi23a.pdf | OpenReview: https://openreview.net/forum?id=CNq0JvrDfw
Abstract: Despite their success on large datasets, GANs have been difficult to apply in the few-shot setting, where only a limited number of training examples are provided. Due to mode collapse, GANs tend to ignore some training examples, causing overfitting to a subset of the training dataset, which is small in the first place....

Scaling Laws for Generative Mixed-Modal Language Models (ICML 2023)
Authors: Armen Aghajanyan, Lili Yu, Alexis Conneau, Wei-Ning Hsu, Karen Hambardzumyan, Susan Zhang, Stephen Roller, Naman Goyal, Omer Levy, Luke Zettlemoyer
abs: https://proceedings.mlr.press/v202/aghajanyan23a.html | PDF: https://proceedings.mlr.press/v202/aghajanyan23a/aghajanyan23a.pdf | OpenReview: https://openreview.net/forum?id=2n7dHVhwJf
Abstract: Generative language models define distributions over sequences of tokens that can represent essentially any combination of data modalities (e.g., any permutation of image tokens from VQ-VAEs, speech tokens from HuBERT, BPE tokens for language or code, and so on). To better understand the scaling properties of such mixe...
Hypothesis Transfer Learning with Surrogate Classification Losses: Generalization Bounds through Algorithmic Stability (ICML 2023)
Authors: Anass Aghbalou, Guillaume Staerman
abs: https://proceedings.mlr.press/v202/aghbalou23a.html | PDF: https://proceedings.mlr.press/v202/aghbalou23a/aghbalou23a.pdf | OpenReview: https://openreview.net/forum?id=Dg5H4Qd0dZ
Abstract: Hypothesis transfer learning (HTL) contrasts with domain adaptation by allowing the leverage of a previous task, named the source, in a new one, the target, without requiring access to the source data. Indeed, HTL relies only on a hypothesis learnt from such source data, relieving the hurdle of expensive data storage and pro...
Constrained Causal Bayesian Optimization (ICML 2023)
Authors: Virginia Aglietti, Alan Malek, Ira Ktena, Silvia Chiappa
abs: https://proceedings.mlr.press/v202/aglietti23a.html | PDF: https://proceedings.mlr.press/v202/aglietti23a/aglietti23a.pdf | OpenReview: https://openreview.net/forum?id=60bhXDeTos
Abstract: We propose constrained causal Bayesian optimization (cCBO), an approach for finding interventions in a known causal graph that optimize a target variable under some constraints. cCBO first reduces the search space by exploiting the graph structure and, if available, an observational dataset; and then solves the restric...

Explaining the effects of non-convergent MCMC in the training of Energy-Based Models (ICML 2023)
Authors: Elisabeth Agoritsas, Giovanni Catania, Aurélien Decelle, Beatriz Seoane
abs: https://proceedings.mlr.press/v202/agoritsas23a.html | PDF: https://proceedings.mlr.press/v202/agoritsas23a/agoritsas23a.pdf | OpenReview: https://openreview.net/forum?id=DF9aUqGzsV
Abstract: In this paper, we quantify the impact of using non-convergent Markov chains to train Energy-Based models (EBMs). In particular, we show analytically that EBMs trained with non-persistent short runs to estimate the gradient can perfectly reproduce a set of empirical statistics of the data, not at the level of the equili...

Using Large Language Models to Simulate Multiple Humans and Replicate Human Subject Studies (ICML 2023)
Authors: Gati V Aher, Rosa I. Arriaga, Adam Tauman Kalai
abs: https://proceedings.mlr.press/v202/aher23a.html | PDF: https://proceedings.mlr.press/v202/aher23a/aher23a.pdf | OpenReview: https://openreview.net/forum?id=eYlLlvzngu
Abstract: We introduce a new type of test, called a Turing Experiment (TE), for evaluating to what extent a given language model, such as GPT models, can simulate different aspects of human behavior. A TE can also reveal consistent distortions in a language model’s simulation of a specific human behavior. Unlike the Turing Test,...

Interventional Causal Representation Learning (ICML 2023)
Authors: Kartik Ahuja, Divyat Mahajan, Yixin Wang, Yoshua Bengio
abs: https://proceedings.mlr.press/v202/ahuja23a.html | PDF: https://proceedings.mlr.press/v202/ahuja23a/ahuja23a.pdf | OpenReview: https://openreview.net/forum?id=YiWzhu9pl6
Abstract: Causal representation learning seeks to extract high-level latent factors from low-level sensory data. Most existing methods rely on observational data and structural assumptions (e.g., conditional independence) to identify the latent factors. However, interventional data is prevalent across applications. Can intervent...

Sequential Underspecified Instrument Selection for Cause-Effect Estimation (ICML 2023)
Authors: Elisabeth Ailer, Jason Hartford, Niki Kilbertus
abs: https://proceedings.mlr.press/v202/ailer23a.html | PDF: https://proceedings.mlr.press/v202/ailer23a/ailer23a.pdf | OpenReview: https://openreview.net/forum?id=dT7uMuZJjf
Abstract: Instrumental variable (IV) methods are used to estimate causal effects in settings with unobserved confounding, where we cannot directly experiment on the treatment variable. Instruments are variables which only affect the outcome indirectly via the treatment variable(s). Most IV applications focus on low-dimensional t...

Atari-5: Distilling the Arcade Learning Environment down to Five Games (ICML 2023)
Authors: Matthew Aitchison, Penny Sweetser, Marcus Hutter
abs: https://proceedings.mlr.press/v202/aitchison23a.html | PDF: https://proceedings.mlr.press/v202/aitchison23a/aitchison23a.pdf | OpenReview: https://openreview.net/forum?id=xRDHjO0YBo
Abstract: The Arcade Learning Environment (ALE) has become an essential benchmark for assessing the performance of reinforcement learning algorithms. However, the computational cost of generating results on the entire 57-game dataset limits ALE’s use and makes the reproducibility of many results infeasible. We propose a novel so...
Towards credible visual model interpretation with path attribution (ICML 2023)
Authors: Naveed Akhtar, Mohammad A. A. K. Jalwana
abs: https://proceedings.mlr.press/v202/akhtar23a.html | PDF: https://proceedings.mlr.press/v202/akhtar23a/akhtar23a.pdf | OpenReview: https://openreview.net/forum?id=cHZBCZmfSo
Abstract: With its inspirational roots in game theory, the path attribution framework stands out among the post-hoc model interpretation techniques due to its axiomatic nature. However, recent developments show that despite being axiomatic, path attribution methods can compute counter-intuitive feature attributions. Not only that, f...
Convergence of First-Order Methods for Constrained Nonconvex Optimization with Dependent Data (ICML 2023)
Authors: Ahmet Alacaoglu, Hanbaek Lyu
abs: https://proceedings.mlr.press/v202/alacaoglu23a.html | PDF: https://proceedings.mlr.press/v202/alacaoglu23a/alacaoglu23a.pdf | OpenReview: https://openreview.net/forum?id=UZmfIzyTvW
Abstract: We focus on analyzing the classical stochastic projected gradient methods under a general dependent data sampling scheme for constrained smooth nonconvex optimization. We show the worst-case rate of convergence $\tilde{O}(t^{-1/4})$ and complexity $\tilde{O}(\varepsilon^{-4})$ for achieving an $\varepsilon$-near statio...
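
For reference, the iteration being analyzed is the classical projected stochastic gradient step; a toy sketch on a smooth nonconvex objective over the unit ball follows (with iid minibatches for brevity, whereas the paper's focus is dependent sampling):

```python
# Projected SGD on a toy smooth nonconvex objective f(x) = E[cos(<a, x>)]
# constrained to the unit ball. The paper analyzes this iteration under
# *dependent* (e.g., Markovian) data; the iid sampling here is a simplification.
import numpy as np

def project_ball(x, radius=1.0):
    norm = np.linalg.norm(x)
    return x if norm <= radius else x * (radius / norm)

rng = np.random.default_rng(0)
data = rng.normal(size=(1000, 10))
x = rng.normal(size=10)
for t in range(1, 2001):
    batch = data[rng.integers(0, 1000, size=32)]
    grad = -(np.sin(batch @ x)[:, None] * batch).mean(axis=0)
    x = project_ball(x - 0.1 / np.sqrt(t) * grad)
print(np.linalg.norm(grad))  # stochastic gradient norm, a rough stationarity proxy
```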
Recasting Self-Attention with Holographic Reduced Representations (ICML 2023)
Authors: Mohammad Mahmudul Alam, Edward Raff, Stella Biderman, Tim Oates, James Holt
abs: https://proceedings.mlr.press/v202/alam23a.html | PDF: https://proceedings.mlr.press/v202/alam23a/alam23a.pdf | OpenReview: https://openreview.net/forum?id=CTZHb6PrHF
Abstract: In recent years, self-attention has become the dominant paradigm for sequence modeling in a variety of domains. However, in domains with very long sequence lengths the $\mathcal{O}(T^2)$ memory and $\mathcal{O}(T^2 H)$ compute costs can make using transformers infeasible. Motivated by problems in malware detection, whe...
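
The $\mathcal{O}(T^2)$ cost the abstract refers to comes from the attention score matrix; a plain NumPy sketch of standard dot-product self-attention makes it explicit (this is the baseline being recast, not the paper's HRR-based method):

```python
# Standard dot-product self-attention in NumPy. The (T, T) score matrix is
# the O(T^2) memory cost the abstract refers to; the paper replaces this with
# Holographic Reduced Representations, which this sketch does NOT implement.
import numpy as np

def self_attention(Q, K, V):
    T, H = Q.shape
    scores = Q @ K.T / np.sqrt(H)  # (T, T): quadratic in sequence length
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ V             # (T, H)

rng = np.random.default_rng(0)
T, H = 512, 64
X = rng.normal(size=(T, H))
out = self_attention(X, X, X)
print(out.shape, f"score matrix holds {T * T:,} entries")
```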
The Saddle-Point Method in Differential Privacy (ICML 2023)
Authors: Wael Alghamdi, Juan Felipe Gomez, Shahab Asoodeh, Flavio Calmon, Oliver Kosut, Lalitha Sankar
abs: https://proceedings.mlr.press/v202/alghamdi23a.html | PDF: https://proceedings.mlr.press/v202/alghamdi23a/alghamdi23a.pdf | OpenReview: https://openreview.net/forum?id=IK7UWsjhUp
Abstract: We characterize the differential privacy guarantees of privacy mechanisms in the large-composition regime, i.e., when a privacy mechanism is sequentially applied a large number of times to sensitive data. Via exponentially tilting the privacy loss random variable, we derive a new formula for the privacy curve expressin...
Nonlinear Advantage: Trained Networks Might Not Be As Complex as You Think (ICML 2023)
Authors: Christian H.X. Ali Mehmeti-Göpel, Jan Disselhoff
abs: https://proceedings.mlr.press/v202/ali-mehmeti-gopel23a.html | PDF: https://proceedings.mlr.press/v202/ali-mehmeti-gopel23a/ali-mehmeti-gopel23a.pdf | OpenReview: https://openreview.net/forum?id=tAa6ivLs6D
Abstract: We perform an empirical study of the behaviour of deep networks when fully linearizing some of their feature channels through a sparsity prior on the overall number of nonlinear units in the network. In experiments on image classification and machine translation tasks, we investigate how much we can simplify the network ...
A Simple Zero-shot Prompt Weighting Technique to Improve Prompt Ensembling in Text-Image Models (ICML 2023)
Authors: James Urquhart Allingham, Jie Ren, Michael W Dusenberry, Xiuye Gu, Yin Cui, Dustin Tran, Jeremiah Zhe Liu, Balaji Lakshminarayanan
abs: https://proceedings.mlr.press/v202/allingham23a.html | PDF: https://proceedings.mlr.press/v202/allingham23a/allingham23a.pdf | OpenReview: https://openreview.net/forum?id=6MU5xdrO7t
Abstract: Contrastively trained text-image models have the remarkable ability to perform zero-shot classification, that is, classifying previously unseen images into categories that the model has never been explicitly trained to identify. However, these zero-shot classifiers need prompt engineering to achieve high accuracy. Prom...

On the Privacy-Robustness-Utility Trilemma in Distributed Learning (ICML 2023)
Authors: Youssef Allouah, Rachid Guerraoui, Nirupam Gupta, Rafael Pinot, John Stephan
abs: https://proceedings.mlr.press/v202/allouah23a.html | PDF: https://proceedings.mlr.press/v202/allouah23a/allouah23a.pdf | OpenReview: https://openreview.net/forum?id=5WxdnjlCv7
Abstract: The ubiquity of distributed machine learning (ML) in sensitive public domain applications calls for algorithms that protect data privacy, while being robust to faults and adversarial behaviors. Although privacy and robustness have been extensively studied independently in distributed ML, their synthesis remains poorly ...

Differentially Private Distributed Bayesian Linear Regression with MCMC (ICML 2023)
Authors: Baris Alparslan, Sinan Yıldırım, Ilker Birbil
abs: https://proceedings.mlr.press/v202/alparslan23a.html | PDF: https://proceedings.mlr.press/v202/alparslan23a/alparslan23a.pdf | OpenReview: https://openreview.net/forum?id=O3adXl7uBw
Abstract: We propose a novel Bayesian inference framework for distributed differentially private linear regression. We consider a distributed setting where multiple parties hold parts of the data and share certain summary statistics of their portions in privacy-preserving noise. We develop a novel generative statistical model fo...

Robust and Scalable Bayesian Online Changepoint Detection (ICML 2023)
Authors: Matias Altamirano, Francois-Xavier Briol, Jeremias Knoblauch
abs: https://proceedings.mlr.press/v202/altamirano23a.html | PDF: https://proceedings.mlr.press/v202/altamirano23a/altamirano23a.pdf | OpenReview: https://openreview.net/forum?id=jWmHbfKeQF
Abstract: This paper proposes an online, provably robust, and scalable Bayesian approach for changepoint detection. The resulting algorithm has key advantages over previous work: it provides provable robustness by leveraging the generalised Bayesian perspective, and also addresses the scalability issues of previous attempts. Spe...
Neural Wasserstein Gradient Flows for Discrepancies with Riesz Kernels (ICML 2023)
Authors: Fabian Altekrüger, Johannes Hertrich, Gabriele Steidl
abs: https://proceedings.mlr.press/v202/altekruger23a.html | PDF: https://proceedings.mlr.press/v202/altekruger23a/altekruger23a.pdf | OpenReview: https://openreview.net/forum?id=Ur1Eckuj3V
Abstract: Wasserstein gradient flows of maximum mean discrepancy (MMD) functionals with non-smooth Riesz kernels show a rich structure as singular measures can become absolutely continuous ones and conversely. In this paper we contribute to the understanding of such flows. We propose to approximate the backward scheme of Jordan,...
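
For the $r=1$ Riesz kernel $K(x,y) = -\|x-y\|$, the squared MMD coincides with the energy distance and is simple to evaluate from samples; a sketch of the functional whose Wasserstein gradient flow the paper studies:

```python
# Squared MMD between two samples under the Riesz kernel K(x, y) = -||x - y||
# (the r = 1 case, a.k.a. the energy distance). The paper studies Wasserstein
# gradient *flows* of this functional; this sketch only evaluates it.
import numpy as np

def mean_dist(X, Y):
    # average pairwise Euclidean distance between rows of X and rows of Y
    return np.linalg.norm(X[:, None, :] - Y[None, :, :], axis=-1).mean()

def mmd2_riesz(X, Y):
    return 2 * mean_dist(X, Y) - mean_dist(X, X) - mean_dist(Y, Y)

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 2))
Y = rng.normal(loc=1.5, size=(300, 2))
print(mmd2_riesz(X, Y))                          # clearly positive
print(mmd2_riesz(X, rng.normal(size=(300, 2))))  # near zero
```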
Distributed Contextual Linear Bandits with Minimax Optimal Communication Cost (ICML 2023)
Authors: Sanae Amani, Tor Lattimore, András György, Lin Yang
abs: https://proceedings.mlr.press/v202/amani23a.html | PDF: https://proceedings.mlr.press/v202/amani23a/amani23a.pdf | OpenReview: https://openreview.net/forum?id=vTSLiw1GfJ
Abstract: We study distributed contextual linear bandits with stochastic contexts, where $N$ agents/learners act cooperatively to solve a linear bandit-optimization problem with $d$-dimensional features over the course of $T$ rounds. For this problem, we derive the first ever information-theoretic lower bound $\Omega(dN)$ on the...

A Kernelized Stein Discrepancy for Biological Sequences (ICML 2023)
Authors: Alan Nawzad Amin, Eli N Weinstein, Debora Susan Marks
abs: https://proceedings.mlr.press/v202/amin23a.html | PDF: https://proceedings.mlr.press/v202/amin23a/amin23a.pdf | OpenReview: https://openreview.net/forum?id=8LdBTjylEw
Abstract: Generative models of biological sequences are a powerful tool for learning from complex sequence data, predicting the effects of mutations, and designing novel biomolecules with desired properties. To evaluate generative models it is important to accurately measure differences between high-dimensional distributions. In...

The Optimal Approximation Factors in Misspecified Off-Policy Value Function Estimation (ICML 2023)
Authors: Philip Amortila, Nan Jiang, Csaba Szepesvari
abs: https://proceedings.mlr.press/v202/amortila23a.html | PDF: https://proceedings.mlr.press/v202/amortila23a/amortila23a.pdf | OpenReview: https://openreview.net/forum?id=OT6gRRMmcE
Abstract: Theoretical guarantees in reinforcement learning (RL) are known to suffer multiplicative blow-up factors with respect to the misspecification error of function approximation. Yet, the nature of such approximation factors—especially their optimal form in a given learning problem—is poorly understood. In this paper we st...

Meta Optimal Transport (ICML 2023)
Authors: Brandon Amos, Giulia Luise, Samuel Cohen, Ievgen Redko
abs: https://proceedings.mlr.press/v202/amos23a.html | PDF: https://proceedings.mlr.press/v202/amos23a/amos23a.pdf | OpenReview: https://openreview.net/forum?id=vinsvrSJmd
Abstract: We study the use of amortized optimization to predict optimal transport (OT) maps from the input measures, which we call Meta OT. This helps repeatedly solve similar OT problems between different measures by leveraging the knowledge and information present from past problems to rapidly predict and solve new problems. O...

Near-Optimal $Φ$-Regret Learning in Extensive-Form Games (ICML 2023)
Authors: Ioannis Anagnostides, Gabriele Farina, Tuomas Sandholm
abs: https://proceedings.mlr.press/v202/anagnostides23a.html | PDF: https://proceedings.mlr.press/v202/anagnostides23a/anagnostides23a.pdf | OpenReview: https://openreview.net/forum?id=FK18BRc1vL
Abstract: In this paper, we establish efficient and uncoupled learning dynamics so that, when employed by all players in multiplayer perfect-recall imperfect-information extensive-form games, the trigger regret of each player grows as $O(\log T)$ after $T$ repetitions of play. This improves exponentially over the prior best know...

A Modern Look at the Relationship between Sharpness and Generalization (ICML 2023)
Authors: Maksym Andriushchenko, Francesco Croce, Maximilian Müller, Matthias Hein, Nicolas Flammarion
abs: https://proceedings.mlr.press/v202/andriushchenko23a.html | PDF: https://proceedings.mlr.press/v202/andriushchenko23a/andriushchenko23a.pdf | OpenReview: https://openreview.net/forum?id=VZp9X410D3
Abstract: Sharpness of minima is a promising quantity that can correlate with generalization in deep networks and, when optimized during training, can improve generalization. However, standard sharpness is not invariant under reparametrizations of neural networks, and, to fix this, reparametrization-invariant sharpness definitio...

SGD with Large Step Sizes Learns Sparse Features (ICML 2023)
Authors: Maksym Andriushchenko, Aditya Vardhan Varre, Loucas Pillaud-Vivien, Nicolas Flammarion
abs: https://proceedings.mlr.press/v202/andriushchenko23b.html | PDF: https://proceedings.mlr.press/v202/andriushchenko23b/andriushchenko23b.pdf | OpenReview: https://openreview.net/forum?id=DnTuz0ziwN
Abstract: We showcase important features of the dynamics of the Stochastic Gradient Descent (SGD) in the training of neural networks. We present empirical observations that commonly used large step sizes (i) may lead the iterates to jump from one side of a valley to the other causing loss stabilization, and (ii) this stabilizati...

Neural Continuous-Discrete State Space Models for Irregularly-Sampled Time Series (ICML 2023)
Authors: Abdul Fatir Ansari, Alvin Heng, Andre Lim, Harold Soh
abs: https://proceedings.mlr.press/v202/ansari23a.html | PDF: https://proceedings.mlr.press/v202/ansari23a/ansari23a.pdf | OpenReview: https://openreview.net/forum?id=GTos8jbYUa
Abstract: Learning accurate predictive models of real-world dynamic phenomena (e.g., climate, biological) remains a challenging task. One key issue is that the data generated by both natural and artificial processes often comprise time series that are irregularly sampled and/or contain missing observations. In this work, we prop...

Paging with Succinct Predictions (ICML 2023)
Authors: Antonios Antoniadis, Joan Boyar, Marek Elias, Lene Monrad Favrholdt, Ruben Hoeksma, Kim S. Larsen, Adam Polak, Bertrand Simon
abs: https://proceedings.mlr.press/v202/antoniadis23a.html | PDF: https://proceedings.mlr.press/v202/antoniadis23a/antoniadis23a.pdf | OpenReview: https://openreview.net/forum?id=NG8f2j1EKb
Abstract: Paging is a prototypical problem in the area of online algorithms. It has also played a central role in the development of learning-augmented algorithms. Previous work on learning-augmented paging has investigated predictions on (i) when the current page will be requested again (reoccurrence predictions), (ii) the curr...

Mixing Predictions for Online Metric Algorithms (ICML 2023)
Authors: Antonios Antoniadis, Christian Coester, Marek Elias, Adam Polak, Bertrand Simon
abs: https://proceedings.mlr.press/v202/antoniadis23b.html | PDF: https://proceedings.mlr.press/v202/antoniadis23b/antoniadis23b.pdf | OpenReview: https://openreview.net/forum?id=HqQIt6mt5B
Abstract: A major technique in learning-augmented online algorithms is combining multiple algorithms or predictors. Since the performance of each predictor may vary over time, it is desirable to use not the single best predictor as a benchmark, but rather a dynamic combination which follows different predictors at different time...
Exponential Smoothing for Off-Policy Learning (ICML 2023)
Authors: Imad Aouali, Victor-Emmanuel Brunel, David Rohde, Anna Korba
abs: https://proceedings.mlr.press/v202/aouali23a.html | PDF: https://proceedings.mlr.press/v202/aouali23a/aouali23a.pdf | OpenReview: https://openreview.net/forum?id=LJ9iKElXpl
Abstract: Off-policy learning (OPL) aims at finding improved policies from logged bandit data, often by minimizing the inverse propensity scoring (IPS) estimator of the risk. In this work, we investigate a smooth regularization for IPS, for which we derive a two-sided PAC-Bayes generalization bound. The bound is tractable, scala...
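
A sketch of the IPS risk estimate together with one natural reading of the propensity smoothing, raising the logging propensities to a power $\alpha \in [0,1]$; the exact regularizer and its PAC-Bayes analysis are the paper's, and this parameterization is an assumption:

```python
# Vanilla IPS risk estimate from logged bandit data, plus a smoothed variant
# where the logging propensities are raised to a power alpha in [0, 1]
# (alpha = 1 recovers plain IPS). Hedged sketch: consult the paper for the
# exact regularizer it proposes and analyzes.
import numpy as np

def ips_risk(pi_target, pi_logged, costs, alpha=1.0):
    # pi_target, pi_logged: probabilities each policy assigns to the logged
    # actions; costs: observed losses for those actions.
    weights = pi_target / pi_logged**alpha
    return float(np.mean(weights * costs))

rng = np.random.default_rng(0)
pi0 = rng.uniform(0.05, 0.9, size=1000)               # logging propensities
cost = rng.binomial(1, 0.3, size=1000).astype(float)  # observed losses
pi = np.clip(pi0 + rng.normal(0, 0.1, size=1000), 0.01, 1.0)
print(ips_risk(pi, pi0, cost), ips_risk(pi, pi0, cost, alpha=0.7))
```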
Polynomial Time and Private Learning of Unbounded Gaussian Mixture Models (ICML 2023)
Authors: Jamil Arbas, Hassan Ashtiani, Christopher Liaw
abs: https://proceedings.mlr.press/v202/arbas23a.html | PDF: https://proceedings.mlr.press/v202/arbas23a/arbas23a.pdf | OpenReview: https://openreview.net/forum?id=b6Hxt4Jw10
Abstract: We study the problem of privately estimating the parameters of $d$-dimensional Gaussian Mixture Models (GMMs) with $k$ components. For this, we develop a technique to reduce the problem to its non-private counterpart. This allows us to privatize existing non-private algorithms in a blackbox manner, while incurring only...

Principled Acceleration of Iterative Numerical Methods Using Machine Learning (ICML 2023)
Authors: Sohei Arisaka, Qianxiao Li
abs: https://proceedings.mlr.press/v202/arisaka23a.html | PDF: https://proceedings.mlr.press/v202/arisaka23a/arisaka23a.pdf | OpenReview: https://openreview.net/forum?id=2MbU8qSWL1
Abstract: Iterative methods are ubiquitous in large-scale scientific computing applications, and a number of approaches based on meta-learning have been recently proposed to accelerate them. However, a systematic study of these approaches and how they differ from meta-learning is lacking. In this paper, we propose a framework to...

Faster Rates of Convergence to Stationary Points in Differentially Private Optimization (ICML 2023)
Authors: Raman Arora, Raef Bassily, Tomás González, Cristóbal A Guzmán, Michael Menart, Enayat Ullah
abs: https://proceedings.mlr.press/v202/arora23a.html | PDF: https://proceedings.mlr.press/v202/arora23a/arora23a.pdf | OpenReview: https://openreview.net/forum?id=kOUBFwYd2D
Abstract: We study the problem of approximating stationary points of Lipschitz and smooth functions under $(\varepsilon,\delta)$-differential privacy (DP) in both the finite-sum and stochastic settings. A point $\widehat{w}$ is called an $\alpha$-stationary point of a function $F:\mathbb{R}^d\rightarrow\mathbb{R}$ if $\|\nabla F...
Prototype-Sample Relation Distillation: Towards Replay-Free Continual Learning (ICML 2023)
Authors: Nader Asadi, Mohammadreza Davari, Sudhir Mudur, Rahaf Aljundi, Eugene Belilovsky
abs: https://proceedings.mlr.press/v202/asadi23a.html | PDF: https://proceedings.mlr.press/v202/asadi23a/asadi23a.pdf | OpenReview: https://openreview.net/forum?id=ywwdhhqNj7
Abstract: In continual learning (CL), balancing effective adaptation while combating catastrophic forgetting is a central challenge. Many of the recent best-performing methods utilize various forms of prior task data, e.g. a replay buffer, to tackle the catastrophic forgetting problem. Having access to previous task data can be r...
Near-Optimal Algorithms for Private Online Optimization in the Realizable Regime (ICML 2023)
Authors: Hilal Asi, Vitaly Feldman, Tomer Koren, Kunal Talwar
abs: https://proceedings.mlr.press/v202/asi23a.html | PDF: https://proceedings.mlr.press/v202/asi23a/asi23a.pdf | OpenReview: https://openreview.net/forum?id=SjwWVAyYKh
Abstract: We consider online learning problems in the realizable setting, where there is a zero-loss solution, and propose new Differentially Private (DP) algorithms that obtain near-optimal regret bounds. For the problem of online prediction from experts, we design new algorithms that obtain near-optimal regret $O \big( \vareps...
From Robustness to Privacy and Back (ICML 2023)
Authors: Hilal Asi, Jonathan Ullman, Lydia Zakynthinou
abs: https://proceedings.mlr.press/v202/asi23b.html | PDF: https://proceedings.mlr.press/v202/asi23b/asi23b.pdf | OpenReview: https://openreview.net/forum?id=9viDfxnY3q
Abstract: We study the relationship between two desiderata of algorithms in statistical inference and machine learning: differential privacy and robustness to adversarial data corruptions. Their conceptual similarity was first observed by Dwork and Lei (STOC 2009), who showed that private algorithms satisfy robustness, and gave...
SGD with AdaGrad Stepsizes: Full Adaptivity with High Probability to Unknown Parameters, Unbounded Gradients and Affine Variance (ICML 2023)
Authors: Amit Attia, Tomer Koren
abs: https://proceedings.mlr.press/v202/attia23a.html | PDF: https://proceedings.mlr.press/v202/attia23a/attia23a.pdf | OpenReview: https://openreview.net/forum?id=X7jMTrwuCz
Abstract: We study Stochastic Gradient Descent with AdaGrad stepsizes: a popular adaptive (self-tuning) method for first-order stochastic optimization. Despite being well studied, existing analyses of this method suffer from various shortcomings: they either assume some knowledge of the problem parameters, impose strong global L...

Adversarially Robust PAC Learnability of Real-Valued Functions (ICML 2023)
Authors: Idan Attias, Steve Hanneke
abs: https://proceedings.mlr.press/v202/attias23a.html | PDF: https://proceedings.mlr.press/v202/attias23a/attias23a.pdf | OpenReview: https://openreview.net/forum?id=fcDq3BIbe9
Abstract: We study robustness to test-time adversarial attacks in the regression setting with $\ell_p$ losses and arbitrary perturbation sets. We address the question of which function classes are PAC learnable in this setting. We show that classes of finite fat-shattering dimension are learnable in both the realizable and agnos...

Infusing Lattice Symmetry Priors in Attention Mechanisms for Sample-Efficient Abstract Geometric Reasoning (ICML 2023)
Authors: Mattia Atzeni, Mrinmaya Sachan, Andreas Loukas
abs: https://proceedings.mlr.press/v202/atzeni23a.html | PDF: https://proceedings.mlr.press/v202/atzeni23a/atzeni23a.pdf | OpenReview: https://openreview.net/forum?id=tE3BMOyUl5
Abstract: The Abstraction and Reasoning Corpus (ARC) (Chollet, 2019) and its most recent language-complete instantiation (LARC) has been postulated as an important step towards general AI. Yet, even state-of-the-art machine learning models struggle to achieve meaningful performance on these problems, falling behind non-learning ...

Learning to Initiate and Reason in Event-Driven Cascading Processes (ICML 2023)
Authors: Yuval Atzmon, Eli Meirom, Shie Mannor, Gal Chechik
abs: https://proceedings.mlr.press/v202/atzmon23a.html | PDF: https://proceedings.mlr.press/v202/atzmon23a/atzmon23a.pdf | OpenReview: https://openreview.net/forum?id=BJc95DyFNG
Abstract: Training agents to control a dynamic environment is a fundamental task in AI. In many environments, the dynamics can be summarized by a small set of events that capture the semantic behavior of the system. Typically, these events form chains or cascades. We often wish to change the system behavior using a single interv...

On the convergence of the MLE as an estimator of the learning rate in the Exp3 algorithm (ICML 2023)
Authors: Julien Aubert, Luc Lehéricy, Patricia Reynaud-Bouret
abs: https://proceedings.mlr.press/v202/aubert23a.html | PDF: https://proceedings.mlr.press/v202/aubert23a/aubert23a.pdf | OpenReview: https://openreview.net/forum?id=YvrxWGWg9E
Abstract: When fitting the learning data of an individual to algorithm-like learning models, the observations are so dependent and non-stationary that one may wonder what the classical Maximum Likelihood Estimator (MLE) could do, even if it is the usual tool applied to experimental cognition. Our objective in this work is to sho...

Dirichlet Diffusion Score Model for Biological Sequence Generation (ICML 2023)
Authors: Pavel Avdeyev, Chenlai Shi, Yuhao Tan, Kseniia Dudnyk, Jian Zhou
abs: https://proceedings.mlr.press/v202/avdeyev23a.html | PDF: https://proceedings.mlr.press/v202/avdeyev23a/avdeyev23a.pdf | OpenReview: https://openreview.net/forum?id=O3jUIakvK7
Abstract: Designing biological sequences is an important challenge that requires satisfying complex constraints and thus is a natural problem to address with deep generative modeling. Diffusion generative models have achieved considerable success in many applications. Score-based generative stochastic differential equations (SDE...
Gradient Descent Converges Linearly for Logistic Regression on Separable Data (ICML 2023)
Authors: Kyriakos Axiotis, Maxim Sviridenko
abs: https://proceedings.mlr.press/v202/axiotis23a.html | PDF: https://proceedings.mlr.press/v202/axiotis23a/axiotis23a.pdf | OpenReview: https://openreview.net/forum?id=a4bMHPm0Ji
Abstract: We show that running gradient descent with variable learning rate guarantees loss $f(x) ≤ 1.1 \cdot f(x^*)+\epsilon$ for the logistic regression objective, where the error $\epsilon$ decays exponentially with the number of iterations and polynomially with the magnitude of the entries of an arbitrary fixed solution $x$....
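
A sketch of plain gradient descent on the logistic loss over separable data; note the paper's result uses a variable learning rate, which this fixed-step toy run does not reproduce:

```python
# Gradient descent on the logistic loss for linearly separable data. On such
# data the infimum of the loss is 0 and plain GD drives it down; the paper
# proves a linear rate with a *variable* step size, unlike this fixed step.
import numpy as np

rng = np.random.default_rng(0)
n, d = 200, 5
w_star = rng.normal(size=d)
X = rng.normal(size=(n, d))
y = np.sign(X @ w_star)                 # separable by construction

def loss(w):
    return np.mean(np.log1p(np.exp(-y * (X @ w))))

w = np.zeros(d)
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(y * (X @ w)))  # sigmoid(-y * Xw)
    grad = (X.T @ (-y * p)) / n
    w -= 0.5 * grad
print(loss(w))                          # well below the initial log(2) ≈ 0.693
```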
Naive imputation implicitly regularizes high-dimensional linear models (ICML 2023)
Authors: Alexis Ayme, Claire Boyer, Aymeric Dieuleveut, Erwan Scornet
abs: https://proceedings.mlr.press/v202/ayme23a.html | PDF: https://proceedings.mlr.press/v202/ayme23a/ayme23a.pdf | OpenReview: https://openreview.net/forum?id=gfSLvfVf0w
Abstract: Two different approaches exist to handle missing values for prediction: either imputation, prior to fitting any predictive algorithms, or dedicated methods able to natively incorporate missing values. While imputation is widely (and easily) used, it is unfortunately biased when low-capacity predictors (such as linear mo...
Half-Hop: A graph upsampling approach for slowing down message passing (ICML 2023)
Authors: Mehdi Azabou, Venkataramana Ganesh, Shantanu Thakoor, Chi-Heng Lin, Lakshmi Sathidevi, Ran Liu, Michal Valko, Petar Veličković, Eva L Dyer
abs: https://proceedings.mlr.press/v202/azabou23a.html | PDF: https://proceedings.mlr.press/v202/azabou23a/azabou23a.pdf | OpenReview: https://openreview.net/forum?id=lXczFIwQkv
Abstract: Message passing neural networks have shown a lot of success on graph-structured data. However, there are many instances where message passing can lead to over-smoothing or fail when neighboring nodes belong to different classes. In this work, we introduce a simple yet general framework for improving learning in message...
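
The upsampling idea can be pictured as splitting edges with inserted "slow" nodes so that a message takes two hops instead of one; a schematic sketch follows (edge selection, new-node features, and directionality are simplified away here and follow the paper):

```python
# Schematic of graph upsampling by edge splitting: insert a new node in the
# middle of (here: every) edge so messages take two hops instead of one.
# The actual method's choices (which edges to split, how to featurize the
# inserted nodes, edge directions) follow the paper; this simplifies them all.
def half_hop(num_nodes, edges):
    new_edges, next_id = [], num_nodes
    for u, v in edges:
        w = next_id                  # the inserted "slow" node
        next_id += 1
        new_edges += [(u, w), (w, v)]
    return next_id, new_edges

n, e = half_hop(4, [(0, 1), (1, 2), (2, 3)])
print(n, e)   # 7 nodes; each original edge became a 2-hop path
```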
CLUTR: Curriculum Learning via Unsupervised Task Representation Learning (ICML 2023)
Authors: Abdus Salam Azad, Izzeddin Gur, Jasper Emhoff, Nathaniel Alexis, Aleksandra Faust, Pieter Abbeel, Ion Stoica
abs: https://proceedings.mlr.press/v202/azad23a.html | PDF: https://proceedings.mlr.press/v202/azad23a/azad23a.pdf | OpenReview: https://openreview.net/forum?id=wagsJnR5GO
Abstract: Reinforcement Learning (RL) algorithms are often known for sample inefficiency and difficult generalization. Recently, Unsupervised Environment Design (UED) emerged as a new paradigm for zero-shot generalization by simultaneously learning a task distribution and agent policies on the generated tasks. This is a non-stat...

Personalized Subgraph Federated Learning (ICML 2023)
Authors: Jinheon Baek, Wonyong Jeong, Jiongdao Jin, Jaehong Yoon, Sung Ju Hwang
abs: https://proceedings.mlr.press/v202/baek23a.html | PDF: https://proceedings.mlr.press/v202/baek23a/baek23a.pdf | OpenReview: https://openreview.net/forum?id=GXHL8ZS1GX
Abstract: Subgraphs of a larger global graph may be distributed across multiple devices, and only locally accessible due to privacy restrictions, although there may be links between subgraphs. Recently proposed subgraph Federated Learning (FL) methods deal with those missing links across local subgraphs while distributively trai...

Efficient Self-supervised Learning with Contextualized Target Representations for Vision, Speech and Language (ICML 2023)
Authors: Alexei Baevski, Arun Babu, Wei-Ning Hsu, Michael Auli
abs: https://proceedings.mlr.press/v202/baevski23a.html | PDF: https://proceedings.mlr.press/v202/baevski23a/baevski23a.pdf | OpenReview: https://openreview.net/forum?id=Jc5QwxfyyQ
Abstract: Current self-supervised learning algorithms are often modality-specific and require large amounts of computational resources. To address these issues, we increase the training efficiency of data2vec, a learning objective that generalizes across several modalities. We do not encode masked tokens, use a fast convolutiona...
Efficient preconditioned stochastic gradient descent for estimation in latent variable models (ICML 2023)
Authors: Charlotte Baey, Maud Delattre, Estelle Kuhn, Jean-Benoist Leger, Sarah Lemler
abs: https://proceedings.mlr.press/v202/baey23a.html | PDF: https://proceedings.mlr.press/v202/baey23a/baey23a.pdf | OpenReview: https://openreview.net/forum?id=ikbUw7okHD
Abstract: Latent variable models are powerful tools for modeling complex phenomena involving in particular partially observed data, unobserved variables or underlying complex unknown structures. Inference is often difficult due to the latent structure of the model. To deal with parameter estimation in the presence of latent vari...

Feed Two Birds with One Scone: Exploiting Wild Data for Both Out-of-Distribution Generalization and Detection (ICML 2023)
Authors: Haoyue Bai, Gregory Canal, Xuefeng Du, Jeongyeol Kwon, Robert D Nowak, Yixuan Li
abs: https://proceedings.mlr.press/v202/bai23a.html | PDF: https://proceedings.mlr.press/v202/bai23a/bai23a.pdf | OpenReview: https://openreview.net/forum?id=3FydczZwkJ
Abstract: Modern machine learning models deployed in the wild can encounter both covariate and semantic shifts, giving rise to the problems of out-of-distribution (OOD) generalization and OOD detection respectively. While both problems have received significant research attention lately, they have been pursued independently. Thi...

Answering Complex Logical Queries on Knowledge Graphs via Query Computation Tree Optimization (ICML 2023)
Authors: Yushi Bai, Xin Lv, Juanzi Li, Lei Hou
abs: https://proceedings.mlr.press/v202/bai23b.html | PDF: https://proceedings.mlr.press/v202/bai23b/bai23b.pdf | OpenReview: https://openreview.net/forum?id=KTJ6E8t9Cy
Abstract: Answering complex logical queries on incomplete knowledge graphs is a challenging task, and has been widely studied. Embedding-based methods require training on complex queries and may not generalize well to out-of-distribution query structures. Recent work frames this task as an end-to-end optimization problem, and it...

Linear optimal partial transport embedding (ICML 2023)
Authors: Yikun Bai, Ivan Vladimir Medri, Rocio Diaz Martin, Rana Shahroz, Soheil Kolouri
abs: https://proceedings.mlr.press/v202/bai23c.html | PDF: https://proceedings.mlr.press/v202/bai23c/bai23c.pdf | OpenReview: https://openreview.net/forum?id=ftLm9QAqwc
Abstract: Optimal transport (OT) has gained popularity due to its various applications in fields such as machine learning, statistics, and signal processing. However, the balanced mass requirement limits its performance in practical problems. To address these limitations, variants of the OT problem, including unbalanced OT, Opti...
Implicit Graph Neural Networks: A Monotone Operator Viewpoint (ICML 2023)
Authors: Justin Baker, Qingsong Wang, Cory D Hauck, Bao Wang
abs: https://proceedings.mlr.press/v202/baker23a.html | PDF: https://proceedings.mlr.press/v202/baker23a/baker23a.pdf | OpenReview: https://openreview.net/forum?id=Q8k4WzGgnK
Abstract: Implicit graph neural networks (IGNNs) – that solve a fixed-point equilibrium equation using Picard iteration for representation learning – have shown remarkable performance in learning long-range dependencies (LRD) in the underlying graphs. However, IGNNs suffer from several issues, including 1) their expressivity is ...
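
The fixed-point computation the abstract mentions can be sketched with a Picard iteration on a toy equilibrium equation, with the weight matrix rescaled so the map is a contraction (a generic sketch, not the paper's monotone-operator reformulation):

```python
# Picard iteration for an implicit-GNN-style fixed point Z = tanh(W Z A + B).
# W is rescaled so the map is a contraction and the iteration converges.
import numpy as np

rng = np.random.default_rng(0)
num_nodes, width = 6, 4
A = rng.random((num_nodes, num_nodes)) < 0.4
A = (A | A.T).astype(float)
A /= np.maximum(A.sum(axis=1, keepdims=True), 1.0)        # normalized adjacency
W = rng.normal(size=(width, width))
W *= 0.5 / (np.linalg.norm(W, 2) * np.linalg.norm(A, 2))  # contraction factor 0.5
B = rng.normal(size=(width, num_nodes))

Z = np.zeros((width, num_nodes))
for _ in range(100):
    Z_new = np.tanh(W @ Z @ A + B)
    if np.linalg.norm(Z_new - Z) < 1e-9:
        break
    Z = Z_new
print(np.linalg.norm(np.tanh(W @ Z @ A + B) - Z))  # ~0: Z is the equilibrium
```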
Tensor Decompositions Meet Control Theory: Learning General Mixtures of Linear Dynamical Systems (ICML 2023)
Authors: Ainesh Bakshi, Allen Liu, Ankur Moitra, Morris Yau
abs: https://proceedings.mlr.press/v202/bakshi23a.html | PDF: https://proceedings.mlr.press/v202/bakshi23a/bakshi23a.pdf | OpenReview: https://openreview.net/forum?id=lxRIOSlTbb
Abstract: Recently Chen and Poor initiated the study of learning mixtures of linear dynamical systems. While linear dynamical systems already have wide-ranging applications in modeling time-series data, using mixture models can lead to a better fit or even a richer understanding of underlying subpopulations represented in the da...

Block Subsampled Randomized Hadamard Transform for Nyström Approximation on Distributed Architectures (ICML 2023)
Authors: Oleg Balabanov, Matthias Beaupère, Laura Grigori, Victor Lederer
abs: https://proceedings.mlr.press/v202/balabanov23a.html | PDF: https://proceedings.mlr.press/v202/balabanov23a/balabanov23a.pdf | OpenReview: https://openreview.net/forum?id=EMN99LtfYA
Abstract: This article introduces a novel structured random matrix composed blockwise from subsampled randomized Hadamard transforms (SRHTs). The block SRHT is expected to outperform well-known dimension reduction maps, including SRHT and Gaussian matrices on distributed architectures. We prove that a block SRHT with enough rows...

Efficient Online Reinforcement Learning with Offline Data (ICML 2023)
Authors: Philip J. Ball, Laura Smith, Ilya Kostrikov, Sergey Levine
abs: https://proceedings.mlr.press/v202/ball23a.html | PDF: https://proceedings.mlr.press/v202/ball23a/ball23a.pdf | OpenReview: https://openreview.net/forum?id=h11j9w1ucU
Abstract: Sample efficiency and exploration remain major challenges in online reinforcement learning (RL). A powerful approach that can be applied to address these issues is the inclusion of offline data, such as prior trajectories from a human expert or a sub-optimal exploration policy. Previous methods have relied on extensive...
Mirror Sinkhorn: Fast Online Optimization on Transport Polytopes (ICML 2023)
Authors: Marin Ballu, Quentin Berthet
abs: https://proceedings.mlr.press/v202/ballu23a.html | PDF: https://proceedings.mlr.press/v202/ballu23a/ballu23a.pdf | OpenReview: https://openreview.net/forum?id=ImQC3p9wlm
Abstract: Optimal transport is an important tool in machine learning, allowing to capture geometric properties of the data through a linear program on transport polytopes. We present a single-loop optimization algorithm for minimizing general convex objectives on these domains, utilizing the principles of Sinkhorn matrix scaling...
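
For reference, classical Sinkhorn matrix scaling, the principle the proposed algorithm builds on, computes an entropy-regularized transport plan between two histograms:

```python
# Classic Sinkhorn scaling for entropy-regularized OT between two histograms.
# The paper's Mirror Sinkhorn extends this scaling principle to general convex
# objectives on transport polytopes, which this vanilla sketch does not cover.
import numpy as np

def sinkhorn(mu, nu, C, eps=0.05, iters=500):
    K = np.exp(-C / eps)             # Gibbs kernel
    u = np.ones_like(mu)
    for _ in range(iters):
        v = nu / (K.T @ u)
        u = mu / (K @ v)
    return u[:, None] * K * v[None, :]   # transport plan in the polytope

n = 50
x = np.linspace(0, 1, n)
C = (x[:, None] - x[None, :]) ** 2       # squared-distance cost
mu = np.ones(n) / n
nu = np.exp(-((x - 0.7) ** 2) / 0.01); nu /= nu.sum()
P = sinkhorn(mu, nu, C)
print(P.sum(axis=1)[:3], (P * C).sum())  # row marginals ~ mu; transport cost
```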
On the Functional Similarity of Robust and Non-Robust Neural Representations (ICML 2023)
Authors: András Balogh, Márk Jelasity
abs: https://proceedings.mlr.press/v202/balogh23a.html | PDF: https://proceedings.mlr.press/v202/balogh23a/balogh23a.pdf | OpenReview: https://openreview.net/forum?id=sFqfXphJh5
Abstract: Model stitching—where the internal representations of two neural networks are aligned linearly—helped demonstrate that the representations of different neural networks for the same task are surprisingly similar in a functional sense. At the same time, the representations of adversarially robust networks are considered ...

Robust Budget Pacing with a Single Sample (ICML 2023)
Authors: Santiago R. Balseiro, Rachitesh Kumar, Vahab Mirrokni, Balasubramanian Sivan, Di Wang
abs: https://proceedings.mlr.press/v202/balseiro23a.html | PDF: https://proceedings.mlr.press/v202/balseiro23a/balseiro23a.pdf | OpenReview: https://openreview.net/forum?id=5h42xM0pwn
Abstract: Major Internet advertising platforms offer budget pacing tools as a standard service for advertisers to manage their ad campaigns. Given the inherent non-stationarity in an advertiser’s value and also competing advertisers’ values over time, a commonly used approach is to learn a target expenditure plan that specifies ...
Dynamic Constrained Submodular Optimization with Polylogarithmic Update Time (ICML 2023)
Authors: Kiarash Banihashem, Leyla Biabani, Samira Goudarzi, Mohammadtaghi Hajiaghayi, Peyman Jabbarzade, Morteza Monemizadeh
abs: https://proceedings.mlr.press/v202/banihashem23a.html | PDF: https://proceedings.mlr.press/v202/banihashem23a/banihashem23a.pdf | OpenReview: https://openreview.net/forum?id=2hF9MnBfUk
Abstract: Maximizing a monotone submodular function under cardinality constraint $k$ is a core problem in machine learning and databases, with many basic applications, including video and data summarization, recommendation systems, feature extraction, exemplar clustering, and coverage problems. We study this classic problem in the...
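
As background, the classical greedy algorithm achieves a $(1-1/e)$-approximation for this problem in the static setting; the paper's challenge is maintaining such a solution dynamically in polylogarithmic update time. A static sketch on a toy coverage function:

```python
# Standard greedy for maximizing a monotone submodular function (here:
# coverage) under a cardinality constraint k; it attains a (1 - 1/e)
# approximation. The paper maintains such a solution under insertions and
# deletions in polylog update time, which a static pass does not address.
def greedy_coverage(sets, k):
    covered, chosen = set(), []
    for _ in range(k):
        best = max(sets, key=lambda i: len(sets[i] - covered))
        if not sets[best] - covered:
            break                     # no marginal gain left
        chosen.append(best)
        covered |= sets[best]
    return chosen, covered

sets = {0: {1, 2, 3}, 1: {3, 4}, 2: {4, 5, 6, 7}, 3: {1, 7}}
print(greedy_coverage(sets, k=2))     # picks sets 2 and 0
```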
One Transformer Fits All Distributions in Multi-Modal Diffusion at Scale (ICML 2023)
Authors: Fan Bao, Shen Nie, Kaiwen Xue, Chongxuan Li, Shi Pu, Yaole Wang, Gang Yue, Yue Cao, Hang Su, Jun Zhu
abs: https://proceedings.mlr.press/v202/bao23a.html | PDF: https://proceedings.mlr.press/v202/bao23a/bao23a.pdf | OpenReview: https://openreview.net/forum?id=Urp3atR1Z3
Abstract: This paper proposes a unified diffusion framework (dubbed UniDiffuser) to fit all distributions relevant to a set of multi-modal data in one model. Our key insight is – learning diffusion models for marginal, conditional, and joint distributions can be unified as predicting the noise in the perturbed data, where the pe...

Optimizing the Collaboration Structure in Cross-Silo Federated Learning (ICML 2023)
Authors: Wenxuan Bao, Haohan Wang, Jun Wu, Jingrui He
abs: https://proceedings.mlr.press/v202/bao23b.html | PDF: https://proceedings.mlr.press/v202/bao23b/bao23b.pdf | OpenReview: https://openreview.net/forum?id=rnNBSMOWvA
Abstract: In federated learning (FL), multiple clients collaborate to train machine learning models together while keeping their data decentralized. Through utilizing more training data, FL suffers from the potential negative transfer problem: the global FL model may even perform worse than the models trained with local data onl...

MultiDiffusion: Fusing Diffusion Paths for Controlled Image Generation (ICML 2023)
Authors: Omer Bar-Tal, Lior Yariv, Yaron Lipman, Tali Dekel
abs: https://proceedings.mlr.press/v202/bar-tal23a.html | PDF: https://proceedings.mlr.press/v202/bar-tal23a/bar-tal23a.pdf | OpenReview: https://openreview.net/forum?id=D4ajVWmgLB
Abstract: Recent advances in text-to-image generation with diffusion models present transformative capabilities in image quality. However, user controllability of the generated image, and fast adaptation to new tasks still remains an open challenge, currently mostly addressed by costly and long re-training and fine-tuning or ad-...

Reinforcement Learning with General Utilities: Simpler Variance Reduction and Large State-Action Space (ICML 2023)
Authors: Anas Barakat, Ilyas Fatkhullin, Niao He
abs: https://proceedings.mlr.press/v202/barakat23a.html | PDF: https://proceedings.mlr.press/v202/barakat23a/barakat23a.pdf | OpenReview: https://openreview.net/forum?id=ZnHXYHx70x
Abstract: We consider the reinforcement learning (RL) problem with general utilities which consists in maximizing a function of the state-action occupancy measure. Beyond the standard cumulative reward RL setting, this problem includes as particular cases constrained RL, pure exploration and learning from demonstrations among ot...

Interpretable Neural-Symbolic Concept Reasoning (ICML 2023)
Authors: Pietro Barbiero, Gabriele Ciravegna, Francesco Giannini, Mateo Espinosa Zarlenga, Lucie Charlotte Magister, Alberto Tonda, Pietro Lio, Frederic Precioso, Mateja Jamnik, Giuseppe Marra
abs: https://proceedings.mlr.press/v202/barbiero23a.html | PDF: https://proceedings.mlr.press/v202/barbiero23a/barbiero23a.pdf | OpenReview: https://openreview.net/forum?id=KbvON8xOCJ
Abstract: Deep learning methods are highly accurate, yet their opaque decision process prevents them from earning full human trust. Concept-based models aim to address this issue by learning tasks based on a set of human-understandable concepts. However, state-of-the-art concept-based models rely on high-dimensional concept embe...

Moccasin: Efficient Tensor Rematerialization for Neural Networks (ICML 2023)
Authors: Burak Bartan, Haoming Li, Harris Teague, Christopher Lott, Bistra Dilkina
abs: https://proceedings.mlr.press/v202/bartan23a.html | PDF: https://proceedings.mlr.press/v202/bartan23a/bartan23a.pdf | OpenReview: https://openreview.net/forum?id=GN9bGEWvkx
Abstract: The deployment and training of neural networks on edge computing devices pose many challenges. The low memory nature of edge devices is often one of the biggest limiting factors encountered in the deployment of large neural network models. Tensor rematerialization or recompute is a way to address high memory requiremen...
User-level Private Stochastic Convex Optimization with Optimal Rates (ICML 2023)
Authors: Raef Bassily, Ziteng Sun
abs: https://proceedings.mlr.press/v202/bassily23a.html | PDF: https://proceedings.mlr.press/v202/bassily23a/bassily23a.pdf | OpenReview: https://openreview.net/forum?id=4UStsbnfVT
Abstract: We study the problem of differentially private (DP) stochastic convex optimization (SCO) under the notion of user-level differential privacy. In this problem, there are $n$ users, each contributing $m>1$ samples to the input dataset of the private SCO algorithm, and the notion of indistinguishability embedded in DP is ...

A Statistical Perspective on Retrieval-Based Models (ICML 2023)
Authors: Soumya Basu, Ankit Singh Rawat, Manzil Zaheer
abs: https://proceedings.mlr.press/v202/basu23a.html | PDF: https://proceedings.mlr.press/v202/basu23a/basu23a.pdf | OpenReview: https://openreview.net/forum?id=0bR5JuxaoN
Abstract: Many modern high-performing machine learning models increasingly rely on scaling up models, e.g., transformer networks. Simultaneously, a parallel line of work aims to improve the model performance by augmenting an input instance with other (labeled) instances during inference. Examples of such augmentations include ta...

Human-Timescale Adaptation in an Open-Ended Task Space (ICML 2023)
Authors: Jakob Bauer, Kate Baumli, Feryal Behbahani, Avishkar Bhoopchand, Nathalie Bradley-Schmieg, Michael Chang, Natalie Clay, Adrian Collister, Vibhavari Dasagi, Lucy Gonzalez, Karol Gregor, Edward Hughes, Sheleem Kashem, Maria Loks-Thompson, Hannah Openshaw, Jack Parker-Holder, Shreya Pathak, Nicolas Perez-Nieves, Nemanja R...
abs: https://proceedings.mlr.press/v202/bauer23a.html | PDF: https://proceedings.mlr.press/v202/bauer23a/bauer23a.pdf | OpenReview: https://openreview.net/forum?id=thUjOwfzzv
Abstract: Foundation models have shown impressive adaptation and scalability in supervised and self-supervised learning problems, but so far these successes have not fully translated to reinforcement learning (RL). In this work, we demonstrate that training an RL agent at scale leads to a general in-context learning algorithm th...

A Kernel Stein Test of Goodness of Fit for Sequential Models (ICML 2023)
Authors: Jerome Baum, Heishiro Kanagawa, Arthur Gretton
abs: https://proceedings.mlr.press/v202/baum23a.html | PDF: https://proceedings.mlr.press/v202/baum23a/baum23a.pdf | OpenReview: https://openreview.net/forum?id=XxMRhjbDGq
Abstract: We propose a goodness-of-fit measure for probability densities modeling observations with varying dimensionality, such as text documents of differing lengths or variable-length sequences. The proposed measure is an instance of the kernel Stein discrepancy (KSD), which has been used to construct goodness-of-fit tests fo...

Individually Fair Learning with One-Sided Feedback (ICML 2023)
Authors: Yahav Bechavod, Aaron Roth
abs: https://proceedings.mlr.press/v202/bechavod23a.html | PDF: https://proceedings.mlr.press/v202/bechavod23a/bechavod23a.pdf | OpenReview: https://openreview.net/forum?id=DOdfxTZLyq
Abstract: We consider an online learning problem with one-sided feedback, in which the learner is able to observe the true label only for positively predicted instances. On each round, $k$ instances arrive and receive classification outcomes according to a randomized policy deployed by the learner, whose goal is to maximize accu...

Predicting Ordinary Differential Equations with Transformers (ICML 2023)
Authors: Sören Becker, Michal Klein, Alexander Neitz, Giambattista Parascandolo, Niki Kilbertus
abs: https://proceedings.mlr.press/v202/becker23a.html | PDF: https://proceedings.mlr.press/v202/becker23a/becker23a.pdf | OpenReview: https://openreview.net/forum?id=LztkK0UZxS
Abstract: We develop a transformer-based sequence-to-sequence model that recovers scalar ordinary differential equations (ODEs) in symbolic form from irregularly sampled and noisy observations of a single solution trajectory. We demonstrate in extensive empirical evaluations that our model performs better or on par with existing...
https://proceedings.mlr.press/v202/beechey23a.html
https://proceedings.mlr.press/v202/beechey23a/beechey23a.pdf
https://openreview.net/forum?id=R1blujRwj1
Explaining Reinforcement Learning with Shapley Values
https://proceedings.mlr.press/v202/beechey23a.html
Daniel Beechey, Thomas M. S. Smith, Özgür Şimşek
https://proceedings.mlr.press/v202/beechey23a.html
ICML 2023
For reinforcement learning systems to be widely adopted, their users must understand and trust them. We present a theoretical analysis of explaining reinforcement learning using Shapley values, following a principled approach from game theory for identifying the contribution of individual players to the outcome of a co...
https://proceedings.mlr.press/v202/behmanesh23a.html
https://proceedings.mlr.press/v202/behmanesh23a/behmanesh23a.pdf
https://openreview.net/forum?id=PWRIIwBJFo
TIDE: Time Derivative Diffusion for Deep Learning on Graphs
https://proceedings.mlr.press/v202/behmanesh23a.html
Maysam Behmanesh, Maximilian Krahn, Maks Ovsjanikov
https://proceedings.mlr.press/v202/behmanesh23a.html
ICML 2023
A prominent paradigm for graph neural networks is based on the message-passing framework. In this framework, information communication is realized only between neighboring nodes. The challenge of approaches that use this paradigm is to ensure efficient and accurate long-distance communication between nodes, as deep con...
https://proceedings.mlr.press/v202/benbaki23a.html
https://proceedings.mlr.press/v202/benbaki23a/benbaki23a.pdf
https://openreview.net/forum?id=RAeN6s9RZV
Fast as CHITA: Neural Network Pruning with Combinatorial Optimization
https://proceedings.mlr.press/v202/benbaki23a.html
Riade Benbaki, Wenyu Chen, Xiang Meng, Hussein Hazimeh, Natalia Ponomareva, Zhe Zhao, Rahul Mazumder
https://proceedings.mlr.press/v202/benbaki23a.html
ICML 2023
The sheer size of modern neural networks makes model serving a serious computational challenge. A popular class of compression techniques overcomes this challenge by pruning or sparsifying the weights of pretrained networks. While useful, these techniques often face serious tradeoffs between computational requirements ...
https://proceedings.mlr.press/v202/bender23a.html
https://proceedings.mlr.press/v202/bender23a/bender23a.pdf
https://openreview.net/forum?id=3UHmUaOVWp
Continuously Parameterized Mixture Models
https://proceedings.mlr.press/v202/bender23a.html
Christopher M Bender, Yifeng Shi, Marc Niethammer, Junier Oliva
https://proceedings.mlr.press/v202/bender23a.html
ICML 2023
Mixture models are universal approximators of smooth densities but are difficult to utilize in complicated datasets due to restrictions on typically available modes and challenges with initializations. We show that by continuously parameterizing a mixture of factor analyzers using a learned ordinary differential equatio...
https://proceedings.mlr.press/v202/bendinelli23a.html
https://proceedings.mlr.press/v202/bendinelli23a/bendinelli23a.pdf
https://openreview.net/forum?id=EiHX7MfAG0
Controllable Neural Symbolic Regression
https://proceedings.mlr.press/v202/bendinelli23a.html
Tommaso Bendinelli, Luca Biggio, Pierre-Alexandre Kamienny
https://proceedings.mlr.press/v202/bendinelli23a.html
ICML 2023
In symbolic regression, the objective is to find an analytical expression that accurately fits experimental data with the minimal use of mathematical symbols such as operators, variables, and constants. However, the combinatorial space of possible expressions can make it challenging for traditional evolutionary algorit...
https://proceedings.mlr.press/v202/bengs23a.html
https://proceedings.mlr.press/v202/bengs23a/bengs23a.pdf
https://openreview.net/forum?id=MUC7ASJiBT
On Second-Order Scoring Rules for Epistemic Uncertainty Quantification
https://proceedings.mlr.press/v202/bengs23a.html
Viktor Bengs, Eyke Hüllermeier, Willem Waegeman
https://proceedings.mlr.press/v202/bengs23a.html
ICML 2023
It is well known that accurate probabilistic predictors can be trained through empirical risk minimisation with proper scoring rules as loss functions. While such learners capture so-called aleatoric uncertainty of predictions, various machine learning methods have recently been developed with the goal to let the learn...
https://proceedings.mlr.press/v202/bennouna23a.html
https://proceedings.mlr.press/v202/bennouna23a/bennouna23a.pdf
https://openreview.net/forum?id=4cvSExetbO
Certified Robust Neural Networks: Generalization and Corruption Resistance
https://proceedings.mlr.press/v202/bennouna23a.html
Amine Bennouna, Ryan Lucas, Bart Van Parys
https://proceedings.mlr.press/v202/bennouna23a.html
ICML 2023
Recent work has demonstrated that robustness (to "corruption") can be at odds with generalization. Adversarial training, for instance, aims to reduce the problematic susceptibility of modern neural networks to small data perturbations. Surprisingly, overfitting is a major concern in adversarial training despite being ...
https://proceedings.mlr.press/v202/berlinghieri23a.html
https://proceedings.mlr.press/v202/berlinghieri23a/berlinghieri23a.pdf
https://openreview.net/forum?id=Qtix8HLmDx
Gaussian processes at the Helm(holtz): A more fluid model for ocean currents
https://proceedings.mlr.press/v202/berlinghieri23a.html
Renato Berlinghieri, Brian L. Trippe, David R. Burt, Ryan James Giordano, Kaushik Srinivasan, Tamay Özgökmen, Junfei Xia, Tamara Broderick
https://proceedings.mlr.press/v202/berlinghieri23a.html
ICML 2023
Oceanographers are interested in predicting ocean currents and identifying divergences in a current vector field based on sparse observations of buoy velocities. Since we expect current dynamics to be smooth but highly non-linear, Gaussian processes (GPs) offer an attractive model. But we show that applying a GP with a...
https://proceedings.mlr.press/v202/bernasconi23a.html
https://proceedings.mlr.press/v202/bernasconi23a/bernasconi23a.pdf
https://openreview.net/forum?id=jiC1uCDIEe
Optimal Rates and Efficient Algorithms for Online Bayesian Persuasion
https://proceedings.mlr.press/v202/bernasconi23a.html
Martino Bernasconi, Matteo Castiglioni, Andrea Celli, Alberto Marchesi, Francesco Trovò, Nicola Gatti
https://proceedings.mlr.press/v202/bernasconi23a.html
ICML 2023
Bayesian persuasion studies how an informed sender should influence beliefs of rational receivers that take decisions through Bayesian updating of a common prior. We focus on the online Bayesian persuasion framework, in which the sender repeatedly faces one or more receivers with unknown and adversarially selected type...
https://proceedings.mlr.press/v202/bernasconi23b.html
https://proceedings.mlr.press/v202/bernasconi23b/bernasconi23b.pdf
https://openreview.net/forum?id=RgwqlatND7
Constrained Phi-Equilibria
https://proceedings.mlr.press/v202/bernasconi23b.html
Martino Bernasconi, Matteo Castiglioni, Alberto Marchesi, Francesco Trovò, Nicola Gatti
https://proceedings.mlr.press/v202/bernasconi23b.html
ICML 2023
The computational study of equilibria involving constraints on players’ strategies has been largely neglected. However, in real-world applications, players are usually subject to constraints ruling out the feasibility of some of their strategies, such as, e.g., safety requirements and budget caps. Computational studies...
https://proceedings.mlr.press/v202/berrevoets23a.html
https://proceedings.mlr.press/v202/berrevoets23a/berrevoets23a.pdf
https://openreview.net/forum?id=8pCLQsEMPQ
Differentiable and Transportable Structure Learning
https://proceedings.mlr.press/v202/berrevoets23a.html
Jeroen Berrevoets, Nabeel Seedat, Fergus Imrie, Mihaela Van Der Schaar
https://proceedings.mlr.press/v202/berrevoets23a.html
ICML 2023
Directed acyclic graphs (DAGs) encode a lot of information about a particular distribution in their structure. However, compute required to infer these structures is typically super-exponential in the number of variables, as inference requires a sweep of a combinatorially large space of potential structures. That is, u...
https://proceedings.mlr.press/v202/berzins23a.html
https://proceedings.mlr.press/v202/berzins23a/berzins23a.pdf
https://openreview.net/forum?id=F2OjOG4j55
Polyhedral Complex Extraction from ReLU Networks using Edge Subdivision
https://proceedings.mlr.press/v202/berzins23a.html
Arturs Berzins
https://proceedings.mlr.press/v202/berzins23a.html
ICML 2023
A neural network consisting of piecewise affine building blocks, such as fully-connected layers and ReLU activations, is itself a piecewise affine function supported on a polyhedral complex. This complex has been previously studied to characterize theoretical properties of neural networks, but, in practice, extracting ...
https://proceedings.mlr.press/v202/bethune23a.html
https://proceedings.mlr.press/v202/bethune23a/bethune23a.pdf
https://openreview.net/forum?id=g68Q7mL0P5
Robust One-Class Classification with Signed Distance Function using 1-Lipschitz Neural Networks
https://proceedings.mlr.press/v202/bethune23a.html
Louis Béthune, Paul Novello, Guillaume Coiffier, Thibaut Boissin, Mathieu Serrurier, Quentin Vincenot, Andres Troya-Galvis
https://proceedings.mlr.press/v202/bethune23a.html
ICML 2023
We propose a new method, dubbed One Class Signed Distance Function (OCSDF), to perform One Class Classification (OCC) by provably learning the Signed Distance Function (SDF) to the boundary of the support of any distribution. The distance to the support can be interpreted as a normality score, and its approximation usi...
https://proceedings.mlr.press/v202/bevilacqua23a.html
https://proceedings.mlr.press/v202/bevilacqua23a/bevilacqua23a.pdf
https://openreview.net/forum?id=kP2p67F4G7
Neural Algorithmic Reasoning with Causal Regularisation
https://proceedings.mlr.press/v202/bevilacqua23a.html
Beatrice Bevilacqua, Kyriacos Nikiforou, Borja Ibarz, Ioana Bica, Michela Paganini, Charles Blundell, Jovana Mitrovic, Petar Veličković
https://proceedings.mlr.press/v202/bevilacqua23a.html
ICML 2023
Recent work on neural algorithmic reasoning has investigated the reasoning capabilities of neural networks, effectively demonstrating they can learn to execute classical algorithms on unseen data coming from the train distribution. However, the performance of existing neural reasoners significantly degrades on out-of-d...
https://proceedings.mlr.press/v202/bharti23a.html
https://proceedings.mlr.press/v202/bharti23a/bharti23a.pdf
https://openreview.net/forum?id=s4dX9ymHrP
Optimally-weighted Estimators of the Maximum Mean Discrepancy for Likelihood-Free Inference
https://proceedings.mlr.press/v202/bharti23a.html
Ayush Bharti, Masha Naslidnyk, Oscar Key, Samuel Kaski, Francois-Xavier Briol
https://proceedings.mlr.press/v202/bharti23a.html
ICML 2023
Likelihood-free inference methods typically make use of a distance between simulated and real data. A common example is the maximum mean discrepancy (MMD), which has previously been used for approximate Bayesian computation, minimum distance estimation, generalised Bayesian inference, and within the nonparametric learn...
https://proceedings.mlr.press/v202/bhaskara23a.html
https://proceedings.mlr.press/v202/bhaskara23a/bhaskara23a.pdf
https://openreview.net/forum?id=SgeIqUvo4w
Bandit Online Linear Optimization with Hints and Queries
https://proceedings.mlr.press/v202/bhaskara23a.html
Aditya Bhaskara, Ashok Cutkosky, Ravi Kumar, Manish Purohit
https://proceedings.mlr.press/v202/bhaskara23a.html
ICML 2023
We study variants of the online linear optimization (OLO) problem with bandit feedback, where the algorithm has access to external information about the unknown cost vector. Our motivation is the recent body of work on using such “hints” towards improving regret bounds for OLO problems in the full-information setting. ...
https://proceedings.mlr.press/v202/bhatnagar23a.html
https://proceedings.mlr.press/v202/bhatnagar23a/bhatnagar23a.pdf
https://openreview.net/forum?id=qqMcym6AmS
Improved Online Conformal Prediction via Strongly Adaptive Online Learning
https://proceedings.mlr.press/v202/bhatnagar23a.html
Aadyot Bhatnagar, Huan Wang, Caiming Xiong, Yu Bai
https://proceedings.mlr.press/v202/bhatnagar23a.html
ICML 2023
We study the problem of uncertainty quantification via prediction sets, in an online setting where the data distribution may vary arbitrarily over time. Recent work develops online conformal prediction techniques that leverage regret minimization algorithms from the online learning literature to learn prediction sets w...

ICML 2023 (International Conference on Machine Learning 2023) Accepted Paper Meta Info Dataset

This dataset is collected from the ICML 2023 OpenReview website (https://openreview.net/group?id=ICML.cc/2023/Conference#tab-accept-oral) as well as the DeepNLP paper arxiv page (http://www.deepnlp.org/content/paper/icml2023). Researchers interested in analyzing ICML 2023 accepted papers and potential trends can use the already cleaned-up JSON files; each row contains the meta information of one paper accepted at the ICML 2023 conference. To explore more AI & robotics papers (NIPS/ICML/ICLR/IROS/ICRA/etc.) and AI equations, feel free to navigate the Equation Search Engine (http://www.deepnlp.org/search/equation) as well as the AI Agent Search Engine (http://www.deepnlp.org/search/agent) to find deployed AI apps and agents in your domain.
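
If the dataset is hosted on the Hugging Face Hub, as the page layout suggests, the rows can also be pulled directly with the datasets library. The sketch below is a minimal illustration; the repository id "deepnlp/icml-2023-accepted-papers" is a placeholder, not the confirmed id of this dataset.

from datasets import load_dataset

# Placeholder repository id; replace with this dataset's actual id on the Hub.
ds = load_dataset("deepnlp/icml-2023-accepted-papers", split="train")

# Each row mirrors the JSON schema shown below: abs, title, authors, abstract, ...
print(ds[0]["title"])
print(ds[0]["authors"])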

Meta Information of the JSON File

{
    "abs": "https://proceedings.mlr.press/v202/aamand23a.html",
    "Download PDF": "https://proceedings.mlr.press/v202/aamand23a/aamand23a.pdf",
    "OpenReview": "https://openreview.net/forum?id=BVomXLJQoH",
    "title": "Data Structures for Density Estimation",
    "url": "https://proceedings.mlr.press/v202/aamand23a.html",
    "authors": "Anders Aamand, Alexandr Andoni, Justin Y. Chen, Piotr Indyk, Shyam Narayanan, Sandeep Silwal",
    "detail_url": "https://proceedings.mlr.press/v202/aamand23a.html",
    "tags": "ICML 2023",
    "abstract": "We study statistical/computational tradeoffs for the following density estimation problem: given $k$ distributions $v_1, \\ldots, v_k$ over a discrete domain of size $n$, and sampling access to a distribution $p$, identify $v_i$ that is \"close\" to $p$. Our main result is the first data structure that, given a sublinear (in $n$) number of samples from $p$, identifies $v_i$ in time sublinear in $k$. We also give an improved version of the algorithm of Acharya et al. (2018) that reports $v_i$ in time linear in $k$. The experimental evaluation of the latter algorithm shows that it achieves a significant reduction in the number of operations needed to achieve a given accuracy compared to prior work."
}
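
As a minimal sketch of how these rows could be consumed, the following Python snippet loads the metadata and surfaces the most frequent title keywords as a quick trend probe. The filename icml2023_papers.json and the assumption that the file stores one JSON object per line (JSON Lines) are illustrative; adjust both to match the file you actually download.

import json
from collections import Counter

# Hypothetical filename; substitute the JSON file shipped with this dataset.
PATH = "icml2023_papers.json"

# Assumes one JSON object per line (JSON Lines); if the file is instead a
# single JSON array, use json.load(f) on the whole file.
papers = []
with open(PATH, encoding="utf-8") as f:
    for line in f:
        line = line.strip()
        if line:
            papers.append(json.loads(line))

print(f"Loaded {len(papers)} papers")

# Quick trend probe: most common words across paper titles.
stopwords = {"the", "a", "an", "of", "for", "and", "with", "on", "in", "to", "via", "by"}
title_words = Counter(
    word
    for paper in papers
    for word in paper["title"].lower().split()
    if word not in stopwords and len(word) > 2
)
print(title_words.most_common(15))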

Related

AI Equation

List of AI Equations and Latex
List of Math Equations and Latex
List of Physics Equations and Latex
List of Statistics Equations and Latex
List of Machine Learning Equations and Latex

AI Agent Marketplace and Search

AI Agent Marketplace and Search
Robot Search
Equation and Academic search
AI & Robot Comprehensive Search
AI & Robot Question
AI & Robot Community
AI Agent Marketplace Blog

AI Agent Reviews

AI Agent Marketplace Directory
Microsoft AI Agents Reviews
Claude AI Agents Reviews
OpenAI AI Agents Reviews
Salesforce AI Agents Reviews
AI Agent Builder Reviews
