Dataset Viewer
Auto-converted to Parquet
Columns:
abs: string, length 44 to 64
Download PDF: string, length 75 to 115
OpenReview: string, length 42
title: string, length 15 to 148
url: string, length 44 to 64
authors: string, length 6 to 903
detail_url: string, length 44 to 64
tags: string, 1 class (ICML 2024)
abstract: string, length 422 to 5.84k
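Because the listing is auto-converted to Parquet, the rows below can be read programmatically rather than scraped. A minimal sketch using the Hugging Face `datasets` library; the repository id `user/icml2024-papers` is a placeholder, since this excerpt does not name the actual dataset repo:

```python
from datasets import load_dataset

# Placeholder repo id: substitute the real dataset repository name.
ds = load_dataset("user/icml2024-papers", split="train")

# Each record follows the column schema listed above.
row = ds[0]
print(row["title"])           # paper title (15 to 148 chars)
print(row["authors"])         # comma-separated author string (6 to 903 chars)
print(row["tags"])            # single class: "ICML 2024"
print(row["abstract"][:300])  # abstracts run from 422 to ~5,840 characters
```

Since the conversion yields plain Parquet files, the same rows could also be loaded with any Parquet reader (pandas, DuckDB, Polars) once the file URLs are known.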
https://proceedings.mlr.press/v235/abad-rocamora24a.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/abad-rocamora24a/abad-rocamora24a.pdf
https://openreview.net/forum?id=AZWqXfM6z9
Revisiting Character-level Adversarial Attacks for Language Models
https://proceedings.mlr.press/v235/abad-rocamora24a.html
Elias Abad Rocamora, Yongtao Wu, Fanghui Liu, Grigorios Chrysos, Volkan Cevher
https://proceedings.mlr.press/v235/abad-rocamora24a.html
ICML 2024
Adversarial attacks in Natural Language Processing apply perturbations at the character or token level. Token-level attacks, gaining prominence for their use of gradient-based methods, are susceptible to altering sentence semantics, leading to invalid adversarial examples. While character-level attacks easily maintain...
https://proceedings.mlr.press/v235/abe24a.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/abe24a/abe24a.pdf
https://openreview.net/forum?id=9U29U3cDKq
Adaptively Perturbed Mirror Descent for Learning in Games
https://proceedings.mlr.press/v235/abe24a.html
Kenshi Abe, Kaito Ariu, Mitsuki Sakamoto, Atsushi Iwasaki
https://proceedings.mlr.press/v235/abe24a.html
ICML 2024
This paper proposes a payoff perturbation technique for the Mirror Descent (MD) algorithm in games where the gradient of the payoff functions is monotone in the strategy profile space, potentially containing additive noise. The optimistic family of learning algorithms, exemplified by optimistic MD, successfully achieve...
https://proceedings.mlr.press/v235/abhyankar24a.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/abhyankar24a/abhyankar24a.pdf
https://openreview.net/forum?id=wDDGQabYPQ
InferCept: Efficient Intercept Support for Augmented Large Language Model Inference
https://proceedings.mlr.press/v235/abhyankar24a.html
Reyna Abhyankar, Zijian He, Vikranth Srivatsa, Hao Zhang, Yiying Zhang
https://proceedings.mlr.press/v235/abhyankar24a.html
ICML 2024
Large language models are increasingly integrated with external environments, tools, and agents like ChatGPT plugins to extend their capability beyond language-centric tasks. However, today’s LLM inference systems are designed for standalone LLMs. They treat each external interaction as the end of LLM generation and fo...
https://proceedings.mlr.press/v235/acharya24a.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/acharya24a/acharya24a.pdf
https://openreview.net/forum?id=MurkwIl0h3
Balancing Feature Similarity and Label Variability for Optimal Size-Aware One-shot Subset Selection
https://proceedings.mlr.press/v235/acharya24a.html
Abhinab Acharya, Dayou Yu, Qi Yu, Xumin Liu
https://proceedings.mlr.press/v235/acharya24a.html
ICML 2024
Subset or core-set selection offers a data-efficient way for training deep learning models. One-shot subset selection poses additional challenges as subset selection is only performed once and full set data become unavailable after the selection. However, most existing methods tend to choose either diverse or difficult...
https://proceedings.mlr.press/v235/achituve24a.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/achituve24a/achituve24a.pdf
https://openreview.net/forum?id=GiHo83ozsF
Bayesian Uncertainty for Gradient Aggregation in Multi-Task Learning
https://proceedings.mlr.press/v235/achituve24a.html
Idan Achituve, Idit Diamant, Arnon Netzer, Gal Chechik, Ethan Fetaya
https://proceedings.mlr.press/v235/achituve24a.html
ICML 2024
As machine learning becomes more prominent there is a growing demand to perform several inference tasks in parallel. Multi-task learning (MTL) addresses this challenge by learning a single model that solves several tasks simultaneously and efficiently. Often optimizing MTL models entails first computing the gradient of...
https://proceedings.mlr.press/v235/achtibat24a.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/achtibat24a/achtibat24a.pdf
https://openreview.net/forum?id=emtXYlBrNF
AttnLRP: Attention-Aware Layer-Wise Relevance Propagation for Transformers
https://proceedings.mlr.press/v235/achtibat24a.html
Reduan Achtibat, Sayed Mohammad Vakilzadeh Hatefi, Maximilian Dreyer, Aakriti Jain, Thomas Wiegand, Sebastian Lapuschkin, Wojciech Samek
https://proceedings.mlr.press/v235/achtibat24a.html
ICML 2024
Large Language Models are prone to biased predictions and hallucinations, underlining the paramount importance of understanding their model-internal reasoning process. However, achieving faithful attributions for the entirety of a black-box transformer model and maintaining computational efficiency is an unsolved chall...
https://proceedings.mlr.press/v235/adcock24a.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/adcock24a/adcock24a.pdf
https://openreview.net/forum?id=wG2SgnH6Zv
A Unified Framework for Learning with Nonlinear Model Classes from Arbitrary Linear Samples
https://proceedings.mlr.press/v235/adcock24a.html
Ben Adcock, Juan M. Cardenas, Nick Dexter
https://proceedings.mlr.press/v235/adcock24a.html
ICML 2024
This work considers the fundamental problem of learning an unknown object from training data using a given model class. We introduce a framework that allows for objects in arbitrary Hilbert spaces, general types of (random) linear measurements as training data and general types of nonlinear model classes. We establish ...
https://proceedings.mlr.press/v235/adepu24a.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/adepu24a/adepu24a.pdf
https://openreview.net/forum?id=xPypr0kufs
FrameQuant: Flexible Low-Bit Quantization for Transformers
https://proceedings.mlr.press/v235/adepu24a.html
Harshavardhan Adepu, Zhanpeng Zeng, Li Zhang, Vikas Singh
https://proceedings.mlr.press/v235/adepu24a.html
ICML 2024
Transformers are the backbone of powerful foundation models for many Vision and Natural Language Processing tasks. But their compute and memory/storage footprint is large, and so serving such models is expensive, often requiring high-end hardware. To mitigate this difficulty, Post-Training Quantization seeks to modify ...
https://proceedings.mlr.press/v235/adhikary24a.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/adhikary24a/adhikary24a.pdf
https://openreview.net/forum?id=myCgfQZzbc
BeigeMaps: Behavioral Eigenmaps for Reinforcement Learning from Images
https://proceedings.mlr.press/v235/adhikary24a.html
Sandesh Adhikary, Anqi Li, Byron Boots
https://proceedings.mlr.press/v235/adhikary24a.html
ICML 2024
Training reinforcement learning (RL) agents directly from high-dimensional image observations continues to be a challenging problem. A recent line of work on behavioral distances proposes to learn representations that encode behavioral similarities quantified by the bisimulation metric. By learning an isometric mapping t...
https://proceedings.mlr.press/v235/adila24a.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/adila24a/adila24a.pdf
https://openreview.net/forum?id=dztd61efGy
Discovering Bias in Latent Space: An Unsupervised Debiasing Approach
https://proceedings.mlr.press/v235/adila24a.html
Dyah Adila, Shuai Zhang, Boran Han, Bernie Wang
https://proceedings.mlr.press/v235/adila24a.html
ICML 2024
The question-answering (QA) capabilities of foundation models are highly sensitive to prompt variations, rendering their performance susceptible to superficial, non-meaning-altering changes. This vulnerability often stems from the model’s preference or bias towards specific input characteristics, such as option positio...
https://proceedings.mlr.press/v235/afshani24a.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/afshani24a/afshani24a.pdf
https://openreview.net/forum?id=8iWDWQKxJ1
Optimal Coresets for Low-Dimensional Geometric Median
https://proceedings.mlr.press/v235/afshani24a.html
Peyman Afshani, Chris Schwiegelshohn
https://proceedings.mlr.press/v235/afshani24a.html
ICML 2024
We investigate coresets for approximating the cost with respect to median queries. In this problem, we are given a set of points $P\subset \mathbb{R}^d$ and median queries are $\sum_{p\in P} ||p-c||$ for any point $c\in \mathbb{R}^d$. Our goal is to compute a small weighted summary $S\subset P$ such that the cost of an...
https://proceedings.mlr.press/v235/afzal24a.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/afzal24a/afzal24a.pdf
https://openreview.net/forum?id=9GbAea74O6
REST: Efficient and Accelerated EEG Seizure Analysis through Residual State Updates
https://proceedings.mlr.press/v235/afzal24a.html
Arshia Afzal, Grigorios Chrysos, Volkan Cevher, Mahsa Shoaran
https://proceedings.mlr.press/v235/afzal24a.html
ICML 2024
EEG-based seizure detection models face challenges in terms of inference speed and memory efficiency, limiting their real-time implementation in clinical devices. This paper introduces a novel graph-based residual state update mechanism (REST) for real-time EEG signal analysis in applications such as epileptic seizure ...
https://proceedings.mlr.press/v235/agarwal24a.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/agarwal24a/agarwal24a.pdf
https://openreview.net/forum?id=xcDRx8vzCa
CHAI: Clustered Head Attention for Efficient LLM Inference
https://proceedings.mlr.press/v235/agarwal24a.html
Saurabh Agarwal, Bilge Acun, Basil Hosmer, Mostafa Elhoushi, Yejin Lee, Shivaram Venkataraman, Dimitris Papailiopoulos, Carole-Jean Wu
https://proceedings.mlr.press/v235/agarwal24a.html
ICML 2024
Large Language Models (LLMs) with hundreds of billions of parameters have transformed the field of machine learning. However, serving these models at inference time is both compute and memory intensive, where a single request can require multiple GPUs and tens of Gigabytes of memory. Multi-head attention is one of the ...
https://proceedings.mlr.press/v235/agarwal24b.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/agarwal24b/agarwal24b.pdf
https://openreview.net/forum?id=w8BnKGFIYN
Learning to Play Atari in a World of Tokens
https://proceedings.mlr.press/v235/agarwal24b.html
Pranav Agarwal, Sheldon Andrews, Samira Ebrahimi Kahou
https://proceedings.mlr.press/v235/agarwal24b.html
ICML 2024
Model-based reinforcement learning agents utilizing transformers have shown improved sample efficiency due to their ability to model extended context, resulting in more accurate world models. However, for complex reasoning and planning tasks, these methods primarily rely on continuous representations. This complicates ...
https://proceedings.mlr.press/v235/agarwal24c.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/agarwal24c/agarwal24c.pdf
https://openreview.net/forum?id=EqFxIbGWRU
Probabilistic Generating Circuits - Demystified
https://proceedings.mlr.press/v235/agarwal24c.html
Sanyam Agarwal, Markus Bläser
https://proceedings.mlr.press/v235/agarwal24c.html
ICML 2024
Zhang et al. (ICML 2021, PMLR 139, pp. 12447–12457) introduced probabilistic generating circuits (PGCs) as a probabilistic model to unify probabilistic circuits (PCs) and determinantal point processes (DPPs). At first glance, PGCs store a distribution in a very different way: they compute the probability generating p...
https://proceedings.mlr.press/v235/agarwal24d.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/agarwal24d/agarwal24d.pdf
https://openreview.net/forum?id=xl2yU3dsHK
Improved Differentially Private and Lazy Online Convex Optimization: Lower Regret without Smoothness Requirements
https://proceedings.mlr.press/v235/agarwal24d.html
Naman Agarwal, Satyen Kale, Karan Singh, Abhradeep Guha Thakurta
https://proceedings.mlr.press/v235/agarwal24d.html
ICML 2024
We design differentially private regret-minimizing algorithms in the online convex optimization (OCO) framework. Unlike recent results, our algorithms and analyses do not require smoothness, thus yielding the first private regret bounds with an optimal leading-order term for non-smooth loss functions. Additionally, eve...
https://proceedings.mlr.press/v235/agarwal24e.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/agarwal24e/agarwal24e.pdf
https://openreview.net/forum?id=MMMHufVc2v
The Non-linear $F$-Design and Applications to Interactive Learning
https://proceedings.mlr.press/v235/agarwal24e.html
Alekh Agarwal, Jian Qian, Alexander Rakhlin, Tong Zhang
https://proceedings.mlr.press/v235/agarwal24e.html
ICML 2024
We propose a generalization of the classical G-optimal design concept to non-linear function classes. The criterion, termed F-design, coincides with G-design in the linear case. We compute the value of the optimal design, termed the F-condition number, for several non-linear function classes. We further provide algori...
https://proceedings.mlr.press/v235/agnihotri24a.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/agnihotri24a/agnihotri24a.pdf
https://openreview.net/forum?id=dmfvHU1LNF
ACPO: A Policy Optimization Algorithm for Average MDPs with Constraints
https://proceedings.mlr.press/v235/agnihotri24a.html
Akhil Agnihotri, Rahul Jain, Haipeng Luo
https://proceedings.mlr.press/v235/agnihotri24a.html
ICML 2024
Reinforcement Learning (RL) for constrained MDPs (CMDPs) is an increasingly important problem for various applications. Often, the average criterion is more suitable than the discounted criterion. Yet, RL for average-CMDPs (ACMDPs) remains a challenging problem. Algorithms designed for discounted constrained RL problem...
https://proceedings.mlr.press/v235/agnihotri24b.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/agnihotri24b/agnihotri24b.pdf
https://openreview.net/forum?id=CXZqGJonmt
CosPGD: an efficient white-box adversarial attack for pixel-wise prediction tasks
https://proceedings.mlr.press/v235/agnihotri24b.html
Shashank Agnihotri, Steffen Jung, Margret Keuper
https://proceedings.mlr.press/v235/agnihotri24b.html
ICML 2024
While neural networks allow highly accurate predictions in many tasks, their lack of robustness towards even slight input perturbations often hampers their deployment. Adversarial attacks such as the seminal projected gradient descent (PGD) offer an effective means to evaluate a model’s robustness and dedicated solutio...
https://proceedings.mlr.press/v235/agostinelli-iii24a.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/agostinelli-iii24a/agostinelli-iii24a.pdf
https://openreview.net/forum?id=XhH1OKLANY
LeaPformer: Enabling Linear Transformers for Autoregressive and Simultaneous Tasks via Learned Proportions
https://proceedings.mlr.press/v235/agostinelli-iii24a.html
Victor Agostinelli III, Sanghyun Hong, Lizhong Chen
https://proceedings.mlr.press/v235/agostinelli-iii24a.html
ICML 2024
A promising approach to preserving model performance in linearized transformers is to employ position-based re-weighting functions. However, state-of-the-art re-weighting functions rely heavily on target sequence lengths, making it difficult or impossible to apply them to autoregressive and simultaneous tasks, where th...
https://proceedings.mlr.press/v235/agrawal24a.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/agrawal24a/agrawal24a.pdf
https://openreview.net/forum?id=bID9PiBFpT
Policy Evaluation for Variance in Average Reward Reinforcement Learning
https://proceedings.mlr.press/v235/agrawal24a.html
Shubhada Agrawal, Prashanth L A, Siva Theja Maguluri
https://proceedings.mlr.press/v235/agrawal24a.html
ICML 2024
We consider an average reward reinforcement learning (RL) problem and work with asymptotic variance as a risk measure to model safety-critical applications. We design a temporal-difference (TD) type algorithm tailored for policy evaluation in this context. Our algorithm is based on linear stochastic approximation of an...
https://proceedings.mlr.press/v235/ahdritz24a.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/ahdritz24a/ahdritz24a.pdf
https://openreview.net/forum?id=ud4GSrqUKI
Distinguishing the Knowable from the Unknowable with Language Models
https://proceedings.mlr.press/v235/ahdritz24a.html
Gustaf Ahdritz, Tian Qin, Nikhil Vyas, Boaz Barak, Benjamin L. Edelman
https://proceedings.mlr.press/v235/ahdritz24a.html
ICML 2024
We study the feasibility of identifying epistemic uncertainty (reflecting a lack of knowledge), as opposed to aleatoric uncertainty (reflecting entropy in the underlying distribution), in the outputs of large language models (LLMs) over free-form text. In the absence of ground-truth probabilities, we explore a setting ...
https://proceedings.mlr.press/v235/ahmadian24a.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/ahmadian24a/ahmadian24a.pdf
https://openreview.net/forum?id=jaJxpKkBcL
Unmasking Vulnerabilities: Cardinality Sketches under Adaptive Inputs
https://proceedings.mlr.press/v235/ahmadian24a.html
Sara Ahmadian, Edith Cohen
https://proceedings.mlr.press/v235/ahmadian24a.html
ICML 2024
Cardinality sketches are popular data structures that enhance the efficiency of working with large data sets. The sketches are randomized representations of sets that are only of logarithmic size but can support set merges and approximate cardinality (i.e., distinct count) queries. When queries are not adaptive, that i...
https://proceedings.mlr.press/v235/ahmaditeshnizi24a.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/ahmaditeshnizi24a/ahmaditeshnizi24a.pdf
https://openreview.net/forum?id=YT1dtdLvSN
OptiMUS: Scalable Optimization Modeling with (MI)LP Solvers and Large Language Models
https://proceedings.mlr.press/v235/ahmaditeshnizi24a.html
Ali Ahmaditeshnizi, Wenzhi Gao, Madeleine Udell
https://proceedings.mlr.press/v235/ahmaditeshnizi24a.html
ICML 2024
Optimization problems are pervasive in sectors from manufacturing and distribution to healthcare. However, most such problems are still solved heuristically by hand rather than optimally by state-of-the-art solvers because the expertise required to formulate and solve these problems limits the widespread adoption of op...
https://proceedings.mlr.press/v235/ahn24a.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/ahn24a/ahn24a.pdf
https://openreview.net/forum?id=tpYHbEl7P1
How to Escape Sharp Minima with Random Perturbations
https://proceedings.mlr.press/v235/ahn24a.html
Kwangjun Ahn, Ali Jadbabaie, Suvrit Sra
https://proceedings.mlr.press/v235/ahn24a.html
ICML 2024
Modern machine learning applications have witnessed the remarkable success of optimization algorithms that are designed to find flat minima. Motivated by this design choice, we undertake a formal study that (i) formulates the notion of flat minima, and (ii) studies the complexity of finding them. Specifically, we adopt...
https://proceedings.mlr.press/v235/ahn24b.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/ahn24b/ahn24b.pdf
https://openreview.net/forum?id=iE2lMjeXRR
Understanding Adam Optimizer via Online Learning of Updates: Adam is FTRL in Disguise
https://proceedings.mlr.press/v235/ahn24b.html
Kwangjun Ahn, Zhiyu Zhang, Yunbum Kook, Yan Dai
https://proceedings.mlr.press/v235/ahn24b.html
ICML 2024
Despite the success of the Adam optimizer in practice, the theoretical understanding of its algorithmic components still remains limited. In particular, most existing analyses of Adam show a convergence rate that can be simply achieved by non-adaptive algorithms like SGD. In this work, we provide a different perspecti...
https://proceedings.mlr.press/v235/ai24a.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/ai24a/ai24a.pdf
https://openreview.net/forum?id=1v1oFF3aw0
Not all distributional shifts are equal: Fine-grained robust conformal inference
https://proceedings.mlr.press/v235/ai24a.html
Jiahao Ai, Zhimei Ren
https://proceedings.mlr.press/v235/ai24a.html
ICML 2024
We introduce a fine-grained framework for uncertainty quantification of predictive models under distributional shifts. This framework distinguishes the shift in covariate distributions from that in the conditional relationship between the outcome ($Y$) and the covariates ($X$). We propose to reweight the training sampl...
https://proceedings.mlr.press/v235/akbari24a.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/akbari24a/akbari24a.pdf
https://openreview.net/forum?id=yzNEkTmcoF
Triple Changes Estimator for Targeted Policies
https://proceedings.mlr.press/v235/akbari24a.html
Sina Akbari, Negar Kiyavash
https://proceedings.mlr.press/v235/akbari24a.html
ICML 2024
The renowned difference-in-differences (DiD) estimator relies on the assumption of ‘parallel trends’, which may not hold in many practical applications. To address this issue, economists are increasingly considering the triple difference estimator as a more credible alternative. Both DiD and triple difference are limit...
https://proceedings.mlr.press/v235/akbarian24a.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/akbarian24a/akbarian24a.pdf
https://openreview.net/forum?id=KwgAThfxEd
Improving Computational Complexity in Statistical Models with Local Curvature Information
https://proceedings.mlr.press/v235/akbarian24a.html
Pedram Akbarian, Tongzheng Ren, Jiacheng Zhuo, Sujay Sanghavi, Nhat Ho
https://proceedings.mlr.press/v235/akbarian24a.html
ICML 2024
It is known that when the statistical models are singular, i.e., the Fisher information matrix at the true parameter is degenerate, the fixed step-size gradient descent algorithm takes polynomial number of steps in terms of the sample size $n$ to converge to a final statistical radius around the true parameter, which c...
https://proceedings.mlr.press/v235/akeweje24a.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/akeweje24a/akeweje24a.pdf
https://openreview.net/forum?id=J5Yg7HMy39
Learning Mixtures of Gaussian Processes through Random Projection
https://proceedings.mlr.press/v235/akeweje24a.html
Emmanuel Akeweje, Mimi Zhang
https://proceedings.mlr.press/v235/akeweje24a.html
ICML 2024
We propose an ensemble clustering framework to uncover latent cluster labels in functional data generated from a Gaussian process mixture. Our method exploits the fact that the projection coefficients of the functional data onto any given projection function follow a univariate Gaussian mixture model (GMM). By conducti...
https://proceedings.mlr.press/v235/akhauri24a.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/akhauri24a/akhauri24a.pdf
https://openreview.net/forum?id=fqPH6ejwGi
Encodings for Prediction-based Neural Architecture Search
https://proceedings.mlr.press/v235/akhauri24a.html
Yash Akhauri, Mohamed S Abdelfattah
https://proceedings.mlr.press/v235/akhauri24a.html
ICML 2024
Predictor-based methods have substantially enhanced Neural Architecture Search (NAS) optimization. The efficacy of these predictors is largely influenced by the method of encoding neural network architectures. While traditional encodings used an adjacency matrix describing the graph structure of a neural network, novel...
https://proceedings.mlr.press/v235/akhound-sadegh24a.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/akhound-sadegh24a/akhound-sadegh24a.pdf
https://openreview.net/forum?id=gVjMwLDFoQ
Iterated Denoising Energy Matching for Sampling from Boltzmann Densities
https://proceedings.mlr.press/v235/akhound-sadegh24a.html
Tara Akhound-Sadegh, Jarrid Rector-Brooks, Joey Bose, Sarthak Mittal, Pablo Lemos, Cheng-Hao Liu, Marcin Sendera, Siamak Ravanbakhsh, Gauthier Gidel, Yoshua Bengio, Nikolay Malkin, Alexander Tong
https://proceedings.mlr.press/v235/akhound-sadegh24a.html
ICML 2024
Efficiently generating statistically independent samples from an unnormalized probability distribution, such as equilibrium samples of many-body systems, is a foundational problem in science. In this paper, we propose Iterated Denoising Energy Matching (iDEM), an iterative algorithm that uses a novel stochastic score m...
https://proceedings.mlr.press/v235/akyurek24a.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/akyurek24a/akyurek24a.pdf
https://openreview.net/forum?id=3Z9CRr5srL
In-Context Language Learning: Architectures and Algorithms
https://proceedings.mlr.press/v235/akyurek24a.html
Ekin Akyürek, Bailin Wang, Yoon Kim, Jacob Andreas
https://proceedings.mlr.press/v235/akyurek24a.html
ICML 2024
Some neural language models (LMs) exhibit a remarkable capacity for in-context learning (ICL): they can fit predictors to datasets provided as input. While the mechanisms underlying ICL are well-studied in the context of synthetic problems like in-context linear regression, there is still some divergence between these ...
https://proceedings.mlr.press/v235/al-jarrah24a.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/al-jarrah24a/al-jarrah24a.pdf
https://openreview.net/forum?id=blzDxD6bKt
Nonlinear Filtering with Brenier Optimal Transport Maps
https://proceedings.mlr.press/v235/al-jarrah24a.html
Mohammad Al-Jarrah, Niyizhen Jin, Bamdad Hosseini, Amirhossein Taghvaei
https://proceedings.mlr.press/v235/al-jarrah24a.html
ICML 2024
This paper is concerned with the problem of nonlinear filtering, i.e., computing the conditional distribution of the state of a stochastic dynamical system given a history of noisy partial observations. Conventional sequential importance resampling (SIR) particle filters suffer from fundamental limitations, in scenario...
https://proceedings.mlr.press/v235/alacaoglu24a.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/alacaoglu24a/alacaoglu24a.pdf
https://openreview.net/forum?id=lWy2lCTyJa
Revisiting Inexact Fixed-Point Iterations for Min-Max Problems: Stochasticity and Structured Nonconvexity
https://proceedings.mlr.press/v235/alacaoglu24a.html
Ahmet Alacaoglu, Donghwan Kim, Stephen Wright
https://proceedings.mlr.press/v235/alacaoglu24a.html
ICML 2024
We focus on constrained, $L$-smooth, potentially stochastic and nonconvex-nonconcave min-max problems either satisfying $\rho$-cohypomonotonicity or admitting a solution to the $\rho$-weakly Minty Variational Inequality (MVI), where larger values of the parameter $\rho>0$ correspond to a greater degree of nonconvexity....
https://proceedings.mlr.press/v235/alain24a.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/alain24a/alain24a.pdf
https://openreview.net/forum?id=afnyJfQddk
Gaussian Processes on Cellular Complexes
https://proceedings.mlr.press/v235/alain24a.html
Mathieu Alain, So Takao, Brooks Paige, Marc Peter Deisenroth
https://proceedings.mlr.press/v235/alain24a.html
ICML 2024
In recent years, there has been considerable interest in developing machine learning models on graphs to account for topological inductive biases. In particular, recent attention has been given to Gaussian processes on such structures since they can additionally account for uncertainty. However, graphs are limited to m...
https://proceedings.mlr.press/v235/alamdari24a.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/alamdari24a/alamdari24a.pdf
https://openreview.net/forum?id=4BIOZSz7zU
Remembering to Be Fair: Non-Markovian Fairness in Sequential Decision Making
https://proceedings.mlr.press/v235/alamdari24a.html
Parand A. Alamdari, Toryn Q. Klassen, Elliot Creager, Sheila A. Mcilraith
https://proceedings.mlr.press/v235/alamdari24a.html
ICML 2024
Fair decision making has largely been studied with respect to a single decision. Here we investigate the notion of fairness in the context of sequential decision making where multiple stakeholders can be affected by the outcomes of decisions. We observe that fairness often depends on the history of the sequential decis...
https://proceedings.mlr.press/v235/albergo24a.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/albergo24a/albergo24a.pdf
https://openreview.net/forum?id=FFILRGD0jG
Stochastic Interpolants with Data-Dependent Couplings
https://proceedings.mlr.press/v235/albergo24a.html
Michael Samuel Albergo, Mark Goldstein, Nicholas Matthew Boffi, Rajesh Ranganath, Eric Vanden-Eijnden
https://proceedings.mlr.press/v235/albergo24a.html
ICML 2024
Generative models inspired by dynamical transport of measure – such as flows and diffusions – construct a continuous-time map between two probability densities. Conventionally, one of these is the target density, only accessible through samples, while the other is taken as a simple base density that is data-agnostic. I...
https://proceedings.mlr.press/v235/albuquerque24a.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/albuquerque24a/albuquerque24a.pdf
https://openreview.net/forum?id=idyUNsoZ75
Evaluating Model Bias Requires Characterizing its Mistakes
https://proceedings.mlr.press/v235/albuquerque24a.html
Isabela Albuquerque, Jessica Schrouff, David Warde-Farley, Ali Taylan Cemgil, Sven Gowal, Olivia Wiles
https://proceedings.mlr.press/v235/albuquerque24a.html
ICML 2024
The ability to properly benchmark model performance in the face of spurious correlations is important to both build better predictors and increase confidence that models are operating as intended. We demonstrate that characterizing (as opposed to simply quantifying) model mistakes across subgroups is pivotal to properl...
https://proceedings.mlr.press/v235/alder24a.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/alder24a/alder24a.pdf
https://openreview.net/forum?id=v9tIJW1fzt
Energy-Efficient Gaussian Processes Using Low-Precision Arithmetic
https://proceedings.mlr.press/v235/alder24a.html
Nicolas Alder, Ralf Herbrich
https://proceedings.mlr.press/v235/alder24a.html
ICML 2024
The widespread use of artificial intelligence requires finding energy-efficient paradigms for the field. We propose to reduce the energy consumption of Gaussian process regression using low-precision floating-point representations. We explore how low-precision representations impact the results of Gaussian process regr...
https://proceedings.mlr.press/v235/alfarra24a.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/alfarra24a/alfarra24a.pdf
https://openreview.net/forum?id=6FtAXU4ean
Evaluation of Test-Time Adaptation Under Computational Time Constraints
https://proceedings.mlr.press/v235/alfarra24a.html
Motasem Alfarra, Hani Itani, Alejandro Pardo, Shyma Yaser Alhuwaider, Merey Ramazanova, Juan Camilo Perez, Zhipeng Cai, Matthias Müller, Bernard Ghanem
https://proceedings.mlr.press/v235/alfarra24a.html
ICML 2024
This paper proposes a novel online evaluation protocol for Test Time Adaptation (TTA) methods, which penalizes slower methods by providing them with fewer samples for adaptation. TTA methods leverage unlabeled data at test time to adapt to distribution shifts. Though many effective methods have been proposed, their imp...
https://proceedings.mlr.press/v235/ali-mehmeti-gopel24a.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/ali-mehmeti-gopel24a/ali-mehmeti-gopel24a.pdf
https://openreview.net/forum?id=AzUCfhJ9Bs
On the Weight Dynamics of Deep Normalized Networks
https://proceedings.mlr.press/v235/ali-mehmeti-gopel24a.html
Christian H.X. Ali Mehmeti-Göpel, Michael Wand
https://proceedings.mlr.press/v235/ali-mehmeti-gopel24a.html
ICML 2024
Recent studies have shown that high disparities in effective learning rates (ELRs) across layers in deep neural networks can negatively affect trainability. We formalize how these disparities evolve over time by modeling weight dynamics (evolution of expected gradient and weight norms) of networks with normalization la...
https://proceedings.mlr.press/v235/alishahi24a.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/alishahi24a/alishahi24a.pdf
https://openreview.net/forum?id=jS3CMHtYJD
No Dimensional Sampling Coresets for Classification
https://proceedings.mlr.press/v235/alishahi24a.html
Meysam Alishahi, Jeff M. Phillips
https://proceedings.mlr.press/v235/alishahi24a.html
ICML 2024
We refine and generalize what is known about coresets for classification problems via the sensitivity sampling framework. Such coresets seek the smallest possible subsets of input data, so one can optimize a loss function on the coreset and ensure approximation guarantees with respect to the original data. Our analysis...
https://proceedings.mlr.press/v235/allamanis24a.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/allamanis24a/allamanis24a.pdf
https://openreview.net/forum?id=YnFuUX08CE
Unsupervised Evaluation of Code LLMs with Round-Trip Correctness
https://proceedings.mlr.press/v235/allamanis24a.html
Miltiadis Allamanis, Sheena Panthaplackel, Pengcheng Yin
https://proceedings.mlr.press/v235/allamanis24a.html
ICML 2024
To evaluate code large language models (LLMs), research has relied on a few small manually curated benchmarks, such as HumanEval and MBPP, which represent a narrow part of the real-world software domains. In this work, we introduce round-trip correctness (RTC) as an alternative evaluation method. RTC allows Code LLM ev...
https://proceedings.mlr.press/v235/allen-zhu24a.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/allen-zhu24a/allen-zhu24a.pdf
https://openreview.net/forum?id=5x788rqbcj
Physics of Language Models: Part 3.1, Knowledge Storage and Extraction
https://proceedings.mlr.press/v235/allen-zhu24a.html
Zeyuan Allen-Zhu, Yuanzhi Li
https://proceedings.mlr.press/v235/allen-zhu24a.html
ICML 2024
Large language models (LLMs) can store a vast amount of world knowledge, often extractable via question-answering (e.g., “What is Abraham Lincoln’s birthday?”). However, do they answer such questions based on exposure to similar questions during training (i.e., cheating), or by genuinely learning to extract knowledge f...
https://proceedings.mlr.press/v235/allouah24a.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/allouah24a/allouah24a.pdf
https://openreview.net/forum?id=Izv7gBnap3
Byzantine-Robust Federated Learning: Impact of Client Subsampling and Local Updates
https://proceedings.mlr.press/v235/allouah24a.html
Youssef Allouah, Sadegh Farhadkhani, Rachid Guerraoui, Nirupam Gupta, Rafael Pinot, Geovani Rizk, Sasha Voitovych
https://proceedings.mlr.press/v235/allouah24a.html
ICML 2024
The possibility of adversarial (a.k.a., Byzantine) clients makes federated learning (FL) prone to arbitrary manipulation. The natural approach to robustify FL against adversarial clients is to replace the simple averaging operation at the server in the standard $\mathsf{FedAvg}$ algorithm by a robust averaging rule. Wh...
https://proceedings.mlr.press/v235/allouah24b.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/allouah24b/allouah24b.pdf
https://openreview.net/forum?id=5JrlywYHRi
The Privacy Power of Correlated Noise in Decentralized Learning
https://proceedings.mlr.press/v235/allouah24b.html
Youssef Allouah, Anastasia Koloskova, Aymane El Firdoussi, Martin Jaggi, Rachid Guerraoui
https://proceedings.mlr.press/v235/allouah24b.html
ICML 2024
Decentralized learning is appealing as it enables the scalable usage of large amounts of distributed data and resources without resorting to any central entity, while promoting privacy since every user minimizes the direct exposure of their data. Yet, without additional precautions, curious users can still leverage mod...
https://proceedings.mlr.press/v235/alonso-campana24a.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/alonso-campana24a/alonso-campana24a.pdf
https://openreview.net/forum?id=MDAg5Q7IsI
Predicting Dose-Response Curves with Deep Neural Networks
https://proceedings.mlr.press/v235/alonso-campana24a.html
Pedro Alonso Campana, Paul Prasse, Tobias Scheffer
https://proceedings.mlr.press/v235/alonso-campana24a.html
ICML 2024
Dose-response curves characterize the relationship between the concentration of drugs and their inhibitory effect on the growth of specific types of cells. The predominant Hill-equation model of an ideal enzymatic inhibition unduly simplifies the biochemical reality of many drugs; and for these drugs the widely-used dr...
https://proceedings.mlr.press/v235/altamirano24a.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/altamirano24a/altamirano24a.pdf
https://openreview.net/forum?id=5WnKLIAX4q
Robust and Conjugate Gaussian Process Regression
https://proceedings.mlr.press/v235/altamirano24a.html
Matias Altamirano, Francois-Xavier Briol, Jeremias Knoblauch
https://proceedings.mlr.press/v235/altamirano24a.html
ICML 2024
To enable closed form conditioning, a common assumption in Gaussian process (GP) regression is independent and identically distributed Gaussian observation noise. This strong and simplistic assumption is often violated in practice, which leads to unreliable inferences and uncertainty quantification. Unfortunately, exis...
https://proceedings.mlr.press/v235/altieri24a.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/altieri24a/altieri24a.pdf
https://openreview.net/forum?id=YqIIhl2ToH
Beyond the Norms: Detecting Prediction Errors in Regression Models
https://proceedings.mlr.press/v235/altieri24a.html
Andres Altieri, Marco Romanelli, Georg Pichler, Florence Alberge, Pablo Piantanida
https://proceedings.mlr.press/v235/altieri24a.html
ICML 2024
This paper tackles the challenge of detecting unreliable behavior in regression algorithms, which may arise from intrinsic variability (e.g., aleatoric uncertainty) or modeling errors (e.g., model uncertainty). First, we formally introduce the notion of unreliability in regression, i.e., when the output of the regresso...
https://proceedings.mlr.press/v235/altmeyer24a.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/altmeyer24a/altmeyer24a.pdf
https://openreview.net/forum?id=AIXUuLCuMe
Position: Stop Making Unscientific AGI Performance Claims
https://proceedings.mlr.press/v235/altmeyer24a.html
Patrick Altmeyer, Andrew M. Demetriou, Antony Bartlett, Cynthia C. S. Liem
https://proceedings.mlr.press/v235/altmeyer24a.html
ICML 2024
Developments in the field of Artificial Intelligence (AI), and particularly large language models (LLMs), have created a ‘perfect storm’ for observing ‘sparks’ of Artificial General Intelligence (AGI) that are spurious. Like simpler models, LLMs distill meaningful representations in their latent embeddings that have be...
https://proceedings.mlr.press/v235/alvarado24a.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/alvarado24a/alvarado24a.pdf
https://openreview.net/forum?id=kZKopcDp2q
Hyperbolic Optimizer as a Dynamical System
https://proceedings.mlr.press/v235/alvarado24a.html
Nico Alvarado, Hans Lobel
https://proceedings.mlr.press/v235/alvarado24a.html
ICML 2024
During the last few years, the field of dynamical systems has been developing innovative tools to study the asymptotic behavior of different optimizers in the context of neural networks. In this work, we redefine an extensively studied optimizer, employing classical techniques from hyperbolic geometry. This new definit...
https://proceedings.mlr.press/v235/ambrogioni24a.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/ambrogioni24a/ambrogioni24a.pdf
https://openreview.net/forum?id=6CV1N7hhpA
Stationarity without mean reversion in improper Gaussian processes
https://proceedings.mlr.press/v235/ambrogioni24a.html
Luca Ambrogioni
https://proceedings.mlr.press/v235/ambrogioni24a.html
ICML 2024
The behavior of a GP regression depends on the choice of covariance function. Stationary covariance functions are preferred in machine learning applications. However, (non-periodic) stationary covariance functions are always mean reverting and can therefore exhibit pathological behavior when applied to data that does n...
https://proceedings.mlr.press/v235/ameen24a.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/ameen24a/ameen24a.pdf
https://openreview.net/forum?id=WJn1BAx9aj
Robust Graph Matching when Nodes are Corrupt
https://proceedings.mlr.press/v235/ameen24a.html
Taha Ameen, Bruce Hajek
https://proceedings.mlr.press/v235/ameen24a.html
ICML 2024
Two models are introduced to study the problem of matching two correlated graphs when some of the nodes are corrupt. In the weak model, a random subset of nodes in one or both graphs can interact randomly with their network. For this model, it is shown that no estimator can correctly recover a positive fraction of the ...
https://proceedings.mlr.press/v235/ameranis24a.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/ameranis24a/ameranis24a.pdf
https://openreview.net/forum?id=sfQH4JJ4We
Fast Algorithms for Hypergraph PageRank with Applications to Semi-Supervised Learning
https://proceedings.mlr.press/v235/ameranis24a.html
Konstantinos Ameranis, Adela Frances Depavia, Lorenzo Orecchia, Erasmo Tani
https://proceedings.mlr.press/v235/ameranis24a.html
ICML 2024
A fundamental approach to semi-supervised learning is to leverage the structure of the sample space to diffuse label information from annotated examples to unlabeled points. Traditional methods model the input data points as a graph and rely on fast algorithms for solving Laplacian systems of equations, such as those d...
https://proceedings.mlr.press/v235/amin24a.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/amin24a/amin24a.pdf
https://openreview.net/forum?id=5M4Qa9AqY7
Scalable and Flexible Causal Discovery with an Efficient Test for Adjacency
https://proceedings.mlr.press/v235/amin24a.html
Alan Nawzad Amin, Andrew Gordon Wilson
https://proceedings.mlr.press/v235/amin24a.html
ICML 2024
To make accurate predictions, understand mechanisms, and design interventions in systems of many variables, we wish to learn causal graphs from large scale data. Unfortunately the space of all possible causal graphs is enormous so scalably and accurately searching for the best fit to the data is a challenge. In princip...
https://proceedings.mlr.press/v235/aminian24a.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/aminian24a/aminian24a.pdf
https://openreview.net/forum?id=8h0x12p3zq
Generalization Error of Graph Neural Networks in the Mean-field Regime
https://proceedings.mlr.press/v235/aminian24a.html
Gholamali Aminian, Yixuan He, Gesine Reinert, Lukasz Szpruch, Samuel N. Cohen
https://proceedings.mlr.press/v235/aminian24a.html
ICML 2024
This work provides a theoretical framework for assessing the generalization error of graph neural networks in the over-parameterized regime, where the number of parameters surpasses the quantity of data points. We explore two widely utilized types of graph neural networks: graph convolutional neural networks and messag...
https://proceedings.mlr.press/v235/amortila24a.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/amortila24a/amortila24a.pdf
https://openreview.net/forum?id=C64clssMVU
Scalable Online Exploration via Coverability
https://proceedings.mlr.press/v235/amortila24a.html
Philip Amortila, Dylan J Foster, Akshay Krishnamurthy
https://proceedings.mlr.press/v235/amortila24a.html
ICML 2024
Exploration is a major challenge in reinforcement learning, especially for high-dimensional domains that require function approximation. We propose exploration objectives—policy optimization objectives that enable downstream maximization of any reward function—as a conceptual framework to systematize the study of explo...
https://proceedings.mlr.press/v235/an24a.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/an24a/an24a.pdf
https://openreview.net/forum?id=URtUYfC3GA
WAVES: Benchmarking the Robustness of Image Watermarks
https://proceedings.mlr.press/v235/an24a.html
Bang An, Mucong Ding, Tahseen Rabbani, Aakriti Agrawal, Yuancheng Xu, Chenghao Deng, Sicheng Zhu, Abdirisak Mohamed, Yuxin Wen, Tom Goldstein, Furong Huang
https://proceedings.mlr.press/v235/an24a.html
ICML 2024
In the burgeoning age of generative AI, watermarks act as identifiers of provenance and artificial content. We present WAVES (Watermark Analysis via Enhanced Stress-testing), a benchmark for assessing image watermark robustness, overcoming the limitations of current evaluation methods. WAVES integrates detection and id...
https://proceedings.mlr.press/v235/an24b.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/an24b/an24b.pdf
https://openreview.net/forum?id=If4xW9vF7U
Training-Free Long-Context Scaling of Large Language Models
https://proceedings.mlr.press/v235/an24b.html
Chenxin An, Fei Huang, Jun Zhang, Shansan Gong, Xipeng Qiu, Chang Zhou, Lingpeng Kong
https://proceedings.mlr.press/v235/an24b.html
ICML 2024
The ability of Large Language Models (LLMs) to process and generate coherent text is markedly weakened when the number of input tokens exceeds their pretraining length. Given the expensive overhead of finetuning large-scale models with longer sequences, we propose a training-free approach named Dual Chunk Attention (DC...
https://proceedings.mlr.press/v235/anagnostidis24a.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/anagnostidis24a/anagnostidis24a.pdf
https://openreview.net/forum?id=3KxPo62PYn
Navigating Scaling Laws: Compute Optimality in Adaptive Model Training
https://proceedings.mlr.press/v235/anagnostidis24a.html
Sotiris Anagnostidis, Gregor Bachmann, Imanol Schlag, Thomas Hofmann
https://proceedings.mlr.press/v235/anagnostidis24a.html
ICML 2024
In recent years, the state-of-the-art in deep learning has been dominated by very large models that have been pre-trained on vast amounts of data. The paradigm is very simple: investing more computational resources (optimally) leads to better performance, and even predictably so; neural scaling laws have been derived t...
https://proceedings.mlr.press/v235/anani24a.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/anani24a/anani24a.pdf
https://openreview.net/forum?id=iOEReiiTit
Adaptive Hierarchical Certification for Segmentation using Randomized Smoothing
https://proceedings.mlr.press/v235/anani24a.html
Alaa Anani, Tobias Lorenz, Bernt Schiele, Mario Fritz
https://proceedings.mlr.press/v235/anani24a.html
ICML 2024
Certification for machine learning is proving that no adversarial sample can evade a model within a range under certain conditions, a necessity for safety-critical domains. Common certification methods for segmentation use a flat set of fine-grained classes, leading to high abstain rates due to model uncertainty across...
https://proceedings.mlr.press/v235/anders24a.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/anders24a/anders24a.pdf
https://openreview.net/forum?id=dSrdnhLS2h
Adaptive Observation Cost Control for Variational Quantum Eigensolvers
https://proceedings.mlr.press/v235/anders24a.html
Christopher J. Anders, Kim Andrea Nicoli, Bingting Wu, Naima Elosegui, Samuele Pedrielli, Lena Funcke, Karl Jansen, Stefan Kühn, Shinichi Nakajima
https://proceedings.mlr.press/v235/anders24a.html
ICML 2024
The objective to be minimized in the variational quantum eigensolver (VQE) has a restricted form, which allows a specialized sequential minimal optimization (SMO) that requires only a few observations in each iteration. However, the SMO iteration is still costly due to the observation noise—one observation at a point t...
https://proceedings.mlr.press/v235/angell24a.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/angell24a/angell24a.pdf
https://openreview.net/forum?id=gqA8ZHO0j8
Fast, Scalable, Warm-Start Semidefinite Programming with Spectral Bundling and Sketching
https://proceedings.mlr.press/v235/angell24a.html
Rico Angell, Andrew Mccallum
https://proceedings.mlr.press/v235/angell24a.html
ICML 2024
While semidefinite programming (SDP) has traditionally been limited to moderate-sized problems, recent algorithms augmented with matrix sketching techniques have enabled solving larger SDPs. However, these methods achieve scalability at the cost of an increase in the number of necessary iterations, resulting in slower ...
https://proceedings.mlr.press/v235/angelopoulos24a.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/angelopoulos24a/angelopoulos24a.pdf
https://openreview.net/forum?id=2XkRIijUKw
Online conformal prediction with decaying step sizes
https://proceedings.mlr.press/v235/angelopoulos24a.html
Anastasios Nikolas Angelopoulos, Rina Barber, Stephen Bates
https://proceedings.mlr.press/v235/angelopoulos24a.html
ICML 2024
We introduce a method for online conformal prediction with decaying step sizes. Like previous methods, ours possesses a retrospective guarantee of coverage for arbitrary sequences. However, unlike previous methods, we can simultaneously estimate a population quantile when it exists. Our theory and experiments indicate ...
https://proceedings.mlr.press/v235/apostolopoulou24a.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/apostolopoulou24a/apostolopoulou24a.pdf
https://openreview.net/forum?id=zMGUDsPopK
A Rate-Distortion View of Uncertainty Quantification
https://proceedings.mlr.press/v235/apostolopoulou24a.html
Ifigeneia Apostolopoulou, Benjamin Eysenbach, Frank Nielsen, Artur Dubrawski
https://proceedings.mlr.press/v235/apostolopoulou24a.html
ICML 2024
In supervised learning, understanding an input’s proximity to the training data can help a model decide whether it has sufficient evidence for reaching a reliable prediction. While powerful probabilistic models such as Gaussian Processes naturally have this property, deep neural networks often lack it. In this paper, w...
https://proceedings.mlr.press/v235/archer24a.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/archer24a/archer24a.pdf
https://openreview.net/forum?id=S3xqyEaST9
Practical Performance Guarantees for Pipelined DNN Inference
https://proceedings.mlr.press/v235/archer24a.html
Aaron Archer, Matthew Fahrbach, Kuikui Liu, Prakash Prabhu
https://proceedings.mlr.press/v235/archer24a.html
ICML 2024
We optimize pipeline parallelism for deep neural network (DNN) inference by partitioning model graphs into $k$ stages and minimizing the running time of the bottleneck stage, including communication. We give practical and effective algorithms for this NP-hard problem, but our emphasis is on tackling the practitioner’s ...
https://proceedings.mlr.press/v235/arefin24a.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/arefin24a/arefin24a.pdf
https://openreview.net/forum?id=lQzmDFlsHX
Unsupervised Concept Discovery Mitigates Spurious Correlations
https://proceedings.mlr.press/v235/arefin24a.html
Md Rifat Arefin, Yan Zhang, Aristide Baratin, Francesco Locatello, Irina Rish, Dianbo Liu, Kenji Kawaguchi
https://proceedings.mlr.press/v235/arefin24a.html
ICML 2024
Models prone to spurious correlations in training data often produce brittle predictions and introduce unintended biases. Addressing this challenge typically involves methods relying on prior knowledge and group annotation to remove spurious correlations, which may not be readily available in many applications. In this...
https://proceedings.mlr.press/v235/arisaka24a.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/arisaka24a/arisaka24a.pdf
https://openreview.net/forum?id=yh6Y7ppf46
Accelerating Legacy Numerical Solvers by Non-intrusive Gradient-based Meta-solving
https://proceedings.mlr.press/v235/arisaka24a.html
Sohei Arisaka, Qianxiao Li
https://proceedings.mlr.press/v235/arisaka24a.html
ICML 2024
Scientific computing is an essential tool for scientific discovery and engineering design, and its computational cost is always a main concern in practice. To accelerate scientific computing, it is a promising approach to use machine learning (especially meta-learning) techniques for selecting hyperparameters of tradit...
https://proceedings.mlr.press/v235/armengol-urpi-24a.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/armengol-urpi-24a/armengol-urpi-24a.pdf
https://openreview.net/forum?id=6Zl9rv6PDx
Causal Action Influence Aware Counterfactual Data Augmentation
https://proceedings.mlr.press/v235/armengol-urpi-24a.html
Núria Armengol Urpí, Marco Bagatella, Marin Vlastelica, Georg Martius
https://proceedings.mlr.press/v235/armengol-urpi-24a.html
ICML 2024
Offline data are both valuable and practical resources for teaching robots complex behaviors. Ideally, learning agents should not be constrained by the scarcity of available demonstrations, but rather generalize beyond the training distribution. However, the complexity of real-world scenarios typically requires huge am...
https://proceedings.mlr.press/v235/arnaboldi24a.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/arnaboldi24a/arnaboldi24a.pdf
https://openreview.net/forum?id=ZSQAf5YlvN
Online Learning and Information Exponents: The Importance of Batch size & Time/Complexity Tradeoffs
https://proceedings.mlr.press/v235/arnaboldi24a.html
Luca Arnaboldi, Yatin Dandi, Florent Krzakala, Bruno Loureiro, Luca Pesce, Ludovic Stephan
https://proceedings.mlr.press/v235/arnaboldi24a.html
ICML 2024
We study the impact of the batch size $n_b$ on the iteration time $T$ of training two-layer neural networks with one-pass stochastic gradient descent (SGD) on multi-index target functions of isotropic covariates. We characterize the optimal batch size minimizing the iteration time as a function of the hardness of the t...
https://proceedings.mlr.press/v235/arora24a.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/arora24a/arora24a.pdf
https://openreview.net/forum?id=e93ffDcpH3
Simple linear attention language models balance the recall-throughput tradeoff
https://proceedings.mlr.press/v235/arora24a.html
Simran Arora, Sabri Eyuboglu, Michael Zhang, Aman Timalsina, Silas Alberti, James Zou, Atri Rudra, Christopher Re
https://proceedings.mlr.press/v235/arora24a.html
ICML 2024
Recent work has shown that attention-based language models excel at "recall", the ability to ground generations in tokens previously seen in context. However, the efficiency of attention-based models is bottle-necked during inference by the KV-cache’s aggressive memory consumption. In this work, we explore whether we c...
https://proceedings.mlr.press/v235/arpino24a.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/arpino24a/arpino24a.pdf
https://openreview.net/forum?id=1JgCpZS17T
Inferring Change Points in High-Dimensional Linear Regression via Approximate Message Passing
https://proceedings.mlr.press/v235/arpino24a.html
Gabriel Arpino, Xiaoqi Liu, Ramji Venkataramanan
https://proceedings.mlr.press/v235/arpino24a.html
ICML 2024
We consider the problem of localizing change points in high-dimensional linear regression. We propose an Approximate Message Passing (AMP) algorithm for estimating both the signals and the change point locations. Assuming Gaussian covariates, we give an exact asymptotic characterization of its estimation performance in...
https://proceedings.mlr.press/v235/arruda24a.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/arruda24a/arruda24a.pdf
https://openreview.net/forum?id=uCdcXRuHnC
An amortized approach to non-linear mixed-effects modeling based on neural posterior estimation
https://proceedings.mlr.press/v235/arruda24a.html
Jonas Arruda, Yannik Schälte, Clemens Peiter, Olga Teplytska, Ulrich Jaehde, Jan Hasenauer
https://proceedings.mlr.press/v235/arruda24a.html
ICML 2024
Non-linear mixed-effects models are a powerful tool for studying heterogeneous populations in various fields, including biology, medicine, economics, and engineering. Here, the aim is to find a distribution over the parameters that describe the whole population using a model that can generate simulations for an individ...
https://proceedings.mlr.press/v235/asadi24a.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/asadi24a/asadi24a.pdf
https://openreview.net/forum?id=jP1zeEqHli
Learning the Target Network in Function Space
https://proceedings.mlr.press/v235/asadi24a.html
Kavosh Asadi, Yao Liu, Shoham Sabach, Ming Yin, Rasool Fakoor
https://proceedings.mlr.press/v235/asadi24a.html
ICML 2024
We focus on the task of learning the value function in the reinforcement learning (RL) setting. This task is often solved by updating a pair of online and target networks while ensuring that the parameters of these two networks are equivalent. We propose Lookahead-Replicate (LR), a new value-function approximation algo...
https://proceedings.mlr.press/v235/ashman24a.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/ashman24a/ashman24a.pdf
https://openreview.net/forum?id=pftXzp6Yn3
Translation Equivariant Transformer Neural Processes
https://proceedings.mlr.press/v235/ashman24a.html
Matthew Ashman, Cristiana Diaconu, Junhyuck Kim, Lakee Sivaraya, Stratis Markou, James Requeima, Wessel P Bruinsma, Richard E. Turner
https://proceedings.mlr.press/v235/ashman24a.html
ICML 2024
The effectiveness of neural processes (NPs) in modelling posterior prediction maps—the mapping from data to posterior predictive distributions—has significantly improved since their inception. This improvement can be attributed to two principal factors: (1) advancements in the architecture of permutation invariant set ...
https://proceedings.mlr.press/v235/asi24a.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/asi24a/asi24a.pdf
https://openreview.net/forum?id=PTGJOUlQ68
Private Vector Mean Estimation in the Shuffle Model: Optimal Rates Require Many Messages
https://proceedings.mlr.press/v235/asi24a.html
Hilal Asi, Vitaly Feldman, Jelani Nelson, Huy Nguyen, Kunal Talwar, Samson Zhou
https://proceedings.mlr.press/v235/asi24a.html
ICML 2024
We study the problem of private vector mean estimation in the shuffle model of privacy where $n$ users each have a unit vector $v^{(i)} \in \mathbb{R}^d$. We propose a new multi-message protocol that achieves the optimal error using $O(\min(n\varepsilon^2,d))$ messages per user. Moreover, we show that any (unbiased) pr...
https://proceedings.mlr.press/v235/athiwaratkun24a.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/athiwaratkun24a/athiwaratkun24a.pdf
https://openreview.net/forum?id=JPNBFWQ9H2
Bifurcated Attention for Single-Context Large-Batch Sampling
https://proceedings.mlr.press/v235/athiwaratkun24a.html
Ben Athiwaratkun, Sujan Kumar Gonugondla, Sanjay Krishna Gouda, Haifeng Qian, Hantian Ding, Qing Sun, Jun Wang, Jiacheng Guo, Liangfu Chen, Parminder Bhatia, Ramesh Nallapati, Sudipta Sengupta, Bing Xiang
https://proceedings.mlr.press/v235/athiwaratkun24a.html
ICML 2024
In our study, we present bifurcated attention, a method developed for language model inference in single-context batch sampling contexts. This approach aims to reduce redundant memory IO costs, a significant factor in latency for high batch sizes and long context lengths. Bifurcated attention achieves this by dividing ...
https://proceedings.mlr.press/v235/attali24a.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/attali24a/attali24a.pdf
https://openreview.net/forum?id=uyhjKoaIQa
Delaunay Graph: Addressing Over-Squashing and Over-Smoothing Using Delaunay Triangulation
https://proceedings.mlr.press/v235/attali24a.html
Hugo Attali, Davide Buscaldi, Nathalie Pernelle
https://proceedings.mlr.press/v235/attali24a.html
ICML 2024
GNNs rely on the exchange of messages to distribute information along the edges of the graph. This approach makes the efficiency of architectures highly dependent on the specific structure of the input graph. Certain graph topologies lead to inefficient information propagation, resulting in a phenomenon known as over-s...
https://proceedings.mlr.press/v235/attia24a.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/attia24a/attia24a.pdf
https://openreview.net/forum?id=6L4K5jmSJq
How Free is Parameter-Free Stochastic Optimization?
https://proceedings.mlr.press/v235/attia24a.html
Amit Attia, Tomer Koren
https://proceedings.mlr.press/v235/attia24a.html
ICML 2024
We study the problem of parameter-free stochastic optimization, inquiring whether, and under what conditions, do fully parameter-free methods exist: these are methods that achieve convergence rates competitive with optimally tuned methods, without requiring significant knowledge of the true problem parameters. Existing...
https://proceedings.mlr.press/v235/attias24a.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/attias24a/attias24a.pdf
https://openreview.net/forum?id=CyEJn71Z00
Information Complexity of Stochastic Convex Optimization: Applications to Generalization, Memorization, and Tracing
https://proceedings.mlr.press/v235/attias24a.html
Idan Attias, Gintare Karolina Dziugaite, Mahdi Haghifam, Roi Livni, Daniel M. Roy
https://proceedings.mlr.press/v235/attias24a.html
ICML 2024
In this work, we investigate the interplay between memorization and learning in the context of stochastic convex optimization (SCO). We define memorization via the information a learning algorithm reveals about its training data points. We then quantify this information using the framework of conditional mutual informa...
https://proceedings.mlr.press/v235/attias24b.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/attias24b/attias24b.pdf
https://openreview.net/forum?id=71ktaA3ihI
Agnostic Sample Compression Schemes for Regression
https://proceedings.mlr.press/v235/attias24b.html
Idan Attias, Steve Hanneke, Aryeh Kontorovich, Menachem Sadigurschi
https://proceedings.mlr.press/v235/attias24b.html
ICML 2024
We obtain the first positive results for bounded sample compression in the agnostic regression setting with the $\ell_p$ loss, where $p\in [1,\infty]$. We construct a generic approximate sample compression scheme for real-valued function classes exhibiting exponential size in the fat-shattering dimension but independen...
https://proceedings.mlr.press/v235/axiotis24a.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/axiotis24a/axiotis24a.pdf
https://openreview.net/forum?id=WUQ4YzIQt2
Data-Efficient Learning via Clustering-Based Sensitivity Sampling: Foundation Models and Beyond
https://proceedings.mlr.press/v235/axiotis24a.html
Kyriakos Axiotis, Vincent Cohen-Addad, Monika Henzinger, Sammy Jerome, Vahab Mirrokni, David Saulpic, David Woodruff, Michael Wunder
https://proceedings.mlr.press/v235/axiotis24a.html
ICML 2024
We study the data selection problem, whose aim is to select a small representative subset of data that can be used to efficiently train a machine learning model. We present a new data selection approach based on $k$-means clustering and sensitivity sampling. Assuming access to an embedding representation of the data wi...
https://proceedings.mlr.press/v235/ayme24a.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/ayme24a/ayme24a.pdf
https://openreview.net/forum?id=B5g6y7JlMw
Random features models: a way to study the success of naive imputation
https://proceedings.mlr.press/v235/ayme24a.html
Alexis Ayme, Claire Boyer, Aymeric Dieuleveut, Erwan Scornet
https://proceedings.mlr.press/v235/ayme24a.html
ICML 2024
Constant (naive) imputation is still widely used in practice as this is a first easy-to-use technique to deal with missing data. Yet, this simple method could be expected to induce a large bias for prediction purposes, as the imputed input may strongly differ from the true underlying data. However, recent works suggest...
https://proceedings.mlr.press/v235/ayoub24a.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/ayoub24a/ayoub24a.pdf
https://openreview.net/forum?id=7PXSc5fURu
Switching the Loss Reduces the Cost in Batch Reinforcement Learning
https://proceedings.mlr.press/v235/ayoub24a.html
Alex Ayoub, Kaiwen Wang, Vincent Liu, Samuel Robertson, James Mcinerney, Dawen Liang, Nathan Kallus, Csaba Szepesvari
https://proceedings.mlr.press/v235/ayoub24a.html
ICML 2024
We propose training fitted Q-iteration with log-loss (FQI-LOG) for batch reinforcement learning (RL). We show that the number of samples needed to learn a near-optimal policy with FQI-LOG scales with the accumulated cost of the optimal policy, which is zero in problems where acting optimally achieves the goal and incur...
https://proceedings.mlr.press/v235/azarmehr24a.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/azarmehr24a/azarmehr24a.pdf
https://openreview.net/forum?id=EDEISRmi6X
Bipartite Matching in Massive Graphs: A Tight Analysis of EDCS
https://proceedings.mlr.press/v235/azarmehr24a.html
Amir Azarmehr, Soheil Behnezhad, Mohammad Roghani
https://proceedings.mlr.press/v235/azarmehr24a.html
ICML 2024
Maximum matching is one of the most fundamental combinatorial optimization problems with applications in various contexts such as balanced clustering, data mining, resource allocation, and online advertisement. In many of these applications, the input graph is massive. The sheer size of these inputs makes it impossible...
https://proceedings.mlr.press/v235/azizian24a.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/azizian24a/azizian24a.pdf
https://openreview.net/forum?id=vsOF7qDNhl
What is the Long-Run Distribution of Stochastic Gradient Descent? A Large Deviations Analysis
https://proceedings.mlr.press/v235/azizian24a.html
Waïss Azizian, Franck Iutzeler, Jerome Malick, Panayotis Mertikopoulos
https://proceedings.mlr.press/v235/azizian24a.html
ICML 2024
In this paper, we examine the long-run distribution of stochastic gradient descent (SGD) in general, non-convex problems. Specifically, we seek to understand which regions of the problem’s state space are more likely to be visited by SGD, and by how much. Using an approach based on the theory of large deviations and ra...
https://proceedings.mlr.press/v235/babu24a.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/babu24a/babu24a.pdf
https://openreview.net/forum?id=8STOjGCkfH
HyperFields: Towards Zero-Shot Generation of NeRFs from Text
https://proceedings.mlr.press/v235/babu24a.html
Sudarshan Babu, Richard Liu, Avery Zhou, Michael Maire, Greg Shakhnarovich, Rana Hanocka
https://proceedings.mlr.press/v235/babu24a.html
ICML 2024
We introduce HyperFields, a method for generating text-conditioned Neural Radiance Fields (NeRFs) with a single forward pass and (optionally) some fine-tuning. Key to our approach are: (i) a dynamic hypernetwork, which learns a smooth mapping from text token embeddings to the space of NeRFs; (ii) NeRF distillation trai...
https://proceedings.mlr.press/v235/baby24a.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/baby24a/baby24a.pdf
https://openreview.net/forum?id=7XZKzQtooN
Online Matrix Completion: A Collaborative Approach with Hott Items
https://proceedings.mlr.press/v235/baby24a.html
Dheeraj Baby, Soumyabrata Pal
https://proceedings.mlr.press/v235/baby24a.html
ICML 2024
We investigate the low rank matrix completion problem in an online setting with ${M}$ users, ${N}$ items, ${T}$ rounds, and an unknown rank-$r$ reward matrix ${R}\in \mathbb{R}^{{M}\times {N}}$. This problem has been well-studied in the literature and has several applications in practice. In each round, we recommend ${...
https://proceedings.mlr.press/v235/bacellar24a.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/bacellar24a/bacellar24a.pdf
https://openreview.net/forum?id=GBxflz0qdX
Differentiable Weightless Neural Networks
https://proceedings.mlr.press/v235/bacellar24a.html
Alan Tendler Leibel Bacellar, Zachary Susskind, Mauricio Breternitz Jr, Eugene John, Lizy Kurian John, Priscila Machado Vieira Lima, Felipe M.G. França
https://proceedings.mlr.press/v235/bacellar24a.html
ICML 2024
We introduce the Differentiable Weightless Neural Network (DWN), a model based on interconnected lookup tables. Training of DWNs is enabled by a novel Extended Finite Difference technique for approximate differentiation of binary values. We propose Learnable Mapping, Learnable Reduction, and Spectral Regularization to ...
https://proceedings.mlr.press/v235/bachmann24a.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/bachmann24a/bachmann24a.pdf
https://openreview.net/forum?id=76zq8Wkl6Z
The Pitfalls of Next-Token Prediction
https://proceedings.mlr.press/v235/bachmann24a.html
Gregor Bachmann, Vaishnavh Nagarajan
https://proceedings.mlr.press/v235/bachmann24a.html
ICML 2024
Can a mere next-token predictor faithfully model human thinking? Our work is aimed at crystallizing this intuitive concern, which is currently fragmented in the literature. First, we emphasize isolating the two phases of next-token prediction that are often conflated: autoregression during inference vs. teacher-forcing...
https://proceedings.mlr.press/v235/back-de-luca24a.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/back-de-luca24a/back-de-luca24a.pdf
https://openreview.net/forum?id=aA2326y3hf
Simulation of Graph Algorithms with Looped Transformers
https://proceedings.mlr.press/v235/back-de-luca24a.html
Artur Back De Luca, Kimon Fountoulakis
https://proceedings.mlr.press/v235/back-de-luca24a.html
ICML 2024
The execution of graph algorithms using neural networks has recently attracted significant interest due to promising empirical progress. This motivates further understanding of how neural networks can replicate reasoning steps with relational data. In this work, we study the ability of transformer networks to simulate ...
https://proceedings.mlr.press/v235/bai24a.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/bai24a/bai24a.pdf
https://openreview.net/forum?id=PYDCwWvbG7
QBMK: Quantum-based Matching Kernels for Un-attributed Graphs
https://proceedings.mlr.press/v235/bai24a.html
Lu Bai, Lixin Cui, Ming Li, Yue Wang, Edwin Hancock
https://proceedings.mlr.press/v235/bai24a.html
ICML 2024
In this work, we develop a new Quantum-based Matching Kernel (QBMK) for un-attributed graphs, by computing the kernel-based similarity between the quantum Shannon entropies of aligned vertices through the Continuous-time Quantum Walk (CTQW). The theoretical analysis reveals that the proposed QBMK kernel not only addres...
https://proceedings.mlr.press/v235/bai24b.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/bai24b/bai24b.pdf
https://openreview.net/forum?id=2NUGeV64y2
Diffusion Models Demand Contrastive Guidance for Adversarial Purification to Advance
https://proceedings.mlr.press/v235/bai24b.html
Mingyuan Bai, Wei Huang, Tenghui Li, Andong Wang, Junbin Gao, Cesar F Caiafa, Qibin Zhao
https://proceedings.mlr.press/v235/bai24b.html
ICML 2024
In adversarial defense, adversarial purification can be viewed as a special generation task with the purpose to remove adversarial attacks and diffusion models excel in adversarial purification for their strong generative power. With different predetermined generation requirements, various types of guidance have been p...
https://proceedings.mlr.press/v235/bai24c.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/bai24c/bai24c.pdf
https://openreview.net/forum?id=leJGQCron2
On the Complexity of Finite-Sum Smooth Optimization under the Polyak–Łojasiewicz Condition
https://proceedings.mlr.press/v235/bai24c.html
Yunyan Bai, Yuxing Liu, Luo Luo
https://proceedings.mlr.press/v235/bai24c.html
ICML 2024
This paper considers the optimization problem of the form $\min_{{\bf x}\in{\mathbb R}^d} f({\bf x})\triangleq \frac{1}{n}\sum_{i=1}^n f_i({\bf x})$, where $f(\cdot)$ satisfies the Polyak–Łojasiewicz (PL) condition with parameter $\mu$ and $\{f_i(\cdot)\}_{i=1}^n$ is $L$-mean-squared smooth. We show that any gradient m...
https://proceedings.mlr.press/v235/bai24d.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/bai24d/bai24d.pdf
https://openreview.net/forum?id=AOJCCFTlfJ
Constrained Ensemble Exploration for Unsupervised Skill Discovery
https://proceedings.mlr.press/v235/bai24d.html
Chenjia Bai, Rushuai Yang, Qiaosheng Zhang, Kang Xu, Yi Chen, Ting Xiao, Xuelong Li
https://proceedings.mlr.press/v235/bai24d.html
ICML 2024
Unsupervised Reinforcement Learning (RL) provides a promising paradigm for learning useful behaviors via reward-free pre-training. Existing methods for unsupervised RL mainly conduct empowerment-driven skill discovery or entropy-based exploration. However, empowerment often leads to static skills, and pure exploration ...
https://proceedings.mlr.press/v235/bailey24a.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/bailey24a/bailey24a.pdf
https://openreview.net/forum?id=8ho1l6RZNB
Image Hijacks: Adversarial Images can Control Generative Models at Runtime
https://proceedings.mlr.press/v235/bailey24a.html
Luke Bailey, Euan Ong, Stuart Russell, Scott Emmons
https://proceedings.mlr.press/v235/bailey24a.html
ICML 2024
Are foundation models secure against malicious actors? In this work, we focus on the image input to a vision-language model (VLM). We discover image hijacks, adversarial images that control the behaviour of VLMs at inference time, and introduce the general Behaviour Matching algorithm for training image hijacks. From t...
https://proceedings.mlr.press/v235/baker24a.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/baker24a/baker24a.pdf
https://openreview.net/forum?id=SZ0JnRxi0x
An Explicit Frame Construction for Normalizing 3D Point Clouds
https://proceedings.mlr.press/v235/baker24a.html
Justin Baker, Shih-Hsin Wang, Tommaso De Fernex, Bao Wang
https://proceedings.mlr.press/v235/baker24a.html
ICML 2024
Many real-world datasets are represented as 3D point clouds – yet they often lack a predefined reference frame, posing a challenge for machine learning or general data analysis. Traditional methods for determining reference frames and normalizing 3D point clouds often struggle with specific inputs, lack theoretical gua...
https://proceedings.mlr.press/v235/balabin24a.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/balabin24a/balabin24a.pdf
https://openreview.net/forum?id=q0lxAs5GGO
Disentanglement Learning via Topology
https://proceedings.mlr.press/v235/balabin24a.html
Nikita Balabin, Daria Voronkova, Ilya Trofimov, Evgeny Burnaev, Serguei Barannikov
https://proceedings.mlr.press/v235/balabin24a.html
ICML 2024
We propose TopDis (Topological Disentanglement), a method for learning disentangled representations via adding a multi-scale topological loss term. Disentanglement is a crucial property of data representations substantial for the explainability and robustness of deep learning models and a step towards high-level cognit...
https://proceedings.mlr.press/v235/balasubramanian24a.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/balasubramanian24a/balasubramanian24a.pdf
https://openreview.net/forum?id=0tPBk24xNj
Adversarial Attacks on Combinatorial Multi-Armed Bandits
https://proceedings.mlr.press/v235/balasubramanian24a.html
Rishab Balasubramanian, Jiawei Li, Prasad Tadepalli, Huazheng Wang, Qingyun Wu, Haoyu Zhao
https://proceedings.mlr.press/v235/balasubramanian24a.html
ICML 2024
We study reward poisoning attacks on Combinatorial Multi-armed Bandits (CMAB). We first provide a sufficient and necessary condition for the attackability of CMAB, a notion to capture the vulnerability and robustness of CMAB. The attackability condition depends on the intrinsic properties of the corresponding CMAB inst...

ICML 2024 (International Conference on Machine Learning 2024) Accepted Paper Meta Info Dataset

This dataset is collected from the ICML 2024 OpenReview website (https://openreview.net/group?id=ICML.cc/2024/Conference#tab-accept-oral) as well as the DeepNLP paper index (http://www.deepnlp.org/content/paper/icml2024). Researchers interested in analyzing ICML 2024 accepted papers and potential trends can use the already cleaned-up json files; each row contains the meta information of one paper accepted at the ICML 2024 conference. To explore more AI & robotics papers (NIPS/ICML/ICLR/IROS/ICRA/etc.) and AI equations, feel free to use the Equation Search Engine (http://www.deepnlp.org/search/equation) as well as the AI Agent Search Engine (http://www.deepnlp.org/search/agent) to find deployed AI apps and agents in your domain.

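As a quick illustration, the sketch below loads the rows with the Hugging Face datasets library. This is a minimal sketch, assuming the dataset is hosted on the Hugging Face Hub; the repository id used here is a hypothetical placeholder, so substitute the actual dataset id.

from datasets import load_dataset

# A minimal sketch, assuming the dataset is hosted on the Hugging Face Hub.
# The repository id below is a hypothetical placeholder.
ds = load_dataset("deepnlp/icml-2024-accepted-papers", split="train")

# Each row holds the meta information of one accepted ICML 2024 paper.
for row in ds.select(range(3)):
    print(row["title"], "-", row["authors"])
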
Meta Information of the JSON File

{
    "abs": "https://proceedings.mlr.press/v235/abad-rocamora24a.html",
    "Download PDF": "https://raw.githubusercontent.com/mlresearch/v235/main/assets/abad-rocamora24a/abad-rocamora24a.pdf",
    "OpenReview": "https://openreview.net/forum?id=AZWqXfM6z9",
    "title": "Revisiting Character-level Adversarial Attacks for Language Models",
    "url": "https://proceedings.mlr.press/v235/abad-rocamora24a.html",
    "authors": "Elias Abad Rocamora, Yongtao Wu, Fanghui Liu, Grigorios Chrysos, Volkan Cevher",
    "detail_url": "https://proceedings.mlr.press/v235/abad-rocamora24a.html",
    "tags": "ICML 2024",
    "abstract": "Adversarial attacks in Natural Language Processing apply perturbations in the character or token levels. Token-level attacks, gaining prominence for their use of gradient-based methods, are susceptible to altering sentence semantics, leading to invalid adversarial examples. While character-level attacks easily maintain semantics, they have received less attention as they cannot easily adopt popular gradient-based methods, and are thought to be easy to defend. Challenging these beliefs, we introduce Charmer, an efficient query-based adversarial attack capable of achieving high attack success rate (ASR) while generating highly similar adversarial examples. Our method successfully targets both small (BERT) and large (Llama 2) models. Specifically, on BERT with SST-2, Charmer improves the ASR in $4.84$% points and the USE similarity in $8$% points with respect to the previous art. Our implementation is available in https://github.com/LIONS-EPFL/Charmer."
}
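For researchers working directly with the cleaned-up json files, the sketch below shows one way to parse rows of this shape and filter them by title keyword. It is a minimal sketch, assuming a JSON Lines layout (one object per line); the filename is a hypothetical placeholder.

import json

# A minimal sketch, assuming one JSON object per line (JSON Lines).
# The filename below is a hypothetical placeholder.
with open("icml2024_papers.jsonl", "r", encoding="utf-8") as f:
    papers = [json.loads(line) for line in f]

# Example: titles of papers that mention "attention".
matches = [p["title"] for p in papers if "attention" in p["title"].lower()]
print(len(matches), "papers mention 'attention' in the title")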

Related

AI Equation

List of AI Equations and Latex
List of Math Equations and Latex
List of Physics Equations and Latex
List of Statistics Equations and Latex
List of Machine Learning Equations and Latex

AI Agent Marketplace and Search

AI Agent Marketplace and Search
Robot Search
Equation and Academic search
AI & Robot Comprehensive Search
AI & Robot Question
AI & Robot Community
AI Agent Marketplace Blog

AI Agent Reviews

AI Agent Marketplace Directory
Microsoft AI Agents Reviews
Claude AI Agents Reviews
OpenAI AI Agents Reviews
Salesforce AI Agents Reviews
AI Agent Builder Reviews
