| title | url | detail_url | authors | tags | abstract | pdf |
|---|---|---|---|---|---|---|
Proving Test Set Contamination in Black-Box Language Models | https://openreview.net/forum?id=KS8mIvetg2 | https://openreview.net/forum?id=KS8mIvetg2 | Yonatan Oren,Nicole Meister,Niladri S. Chatterji,Faisal Ladhak,Tatsunori Hashimoto | ICLR 2024,Oral | Large language models are trained on vast amounts of internet data, prompting concerns that they have memorized public benchmarks. Detecting this type of contamination is challenging because the pretraining data used by proprietary models are often not publicly accessible. We propose a procedure for detecting test set... | https://openreview.net/pdf/cfd79aaab7bdcd4f7c032c57fe7e607058042c80.pdf |
BooookScore: A systematic exploration of book-length summarization in the era of LLMs | https://openreview.net/forum?id=7Ttk3RzDeu | https://openreview.net/forum?id=7Ttk3RzDeu | Yapei Chang,Kyle Lo,Tanya Goyal,Mohit Iyyer | ICLR 2024,Oral | Summarizing book-length documents ($>$100K tokens) that exceed the context window size of large language models (LLMs) requires first breaking the input document into smaller chunks and then prompting an LLM to merge, update, and compress chunk-level summaries. Despite the complexity and importance of this task, it ha... | https://openreview.net/pdf/975e393e430362eb39a2c1ceb2c750bd4bb80143.pdf |
Generalization in diffusion models arises from geometry-adaptive harmonic representations | https://openreview.net/forum?id=ANvmVS2Yr0 | https://openreview.net/forum?id=ANvmVS2Yr0 | Zahra Kadkhodaie,Florentin Guth,Eero P Simoncelli,Stéphane Mallat | ICLR 2024,Oral | Deep neural networks (DNNs) trained for image denoising are able to generate high-quality samples with score-based reverse diffusion algorithms. These impressive capabilities seem to imply an escape from the curse of dimensionality, but recent reports of memorization of the training set raise the question of whether th... | https://openreview.net/pdf/84eb681ff8d070ce8c829cb2120dc133901594ce.pdf |
Understanding In-Context Learning in Transformers and LLMs by Learning to Learn Discrete Functions | https://openreview.net/forum?id=ekeyCgeRfC | https://openreview.net/forum?id=ekeyCgeRfC | Satwik Bhattamishra,Arkil Patel,Phil Blunsom,Varun Kanade | ICLR 2024,Oral | In order to understand the in-context learning phenomenon, recent works have adopted a stylized experimental framework and demonstrated that Transformers can match the performance of gradient-based learning algorithms for various classes of real-valued functions. However, the limitations of Transformers in implementing... | https://openreview.net/pdf/816f489eb70fe677c4ebc1cf159cf38b3062956b.pdf |
The mechanistic basis of data dependence and abrupt learning in an in-context classification task | https://openreview.net/forum?id=aN4Jf6Cx69 | https://openreview.net/forum?id=aN4Jf6Cx69 | Gautam Reddy | ICLR 2024,Oral | Transformer models exhibit in-context learning: the ability to accurately predict the response to a novel query based on illustrative examples in the input sequence, which contrasts with traditional in-weights learning of query-output relationships. What aspects of the training data distribution and architecture favor ... | https://openreview.net/pdf/4de2c24997e6d25adcda68f174ed540f41a217e8.pdf |
Improved Techniques for Training Consistency Models | https://openreview.net/forum?id=WNzy9bRDvG | https://openreview.net/forum?id=WNzy9bRDvG | Yang Song,Prafulla Dhariwal | ICLR 2024,Oral | Consistency models are a nascent family of generative models that can sample high quality data in one step without the need for adversarial training. Current consistency models achieve optimal sample quality by distilling from pre-trained diffusion models and employing learned metrics such as LPIPS. However, distillati... | https://openreview.net/pdf/c40d76fe68ec3195a55ba242266828b01fdb06c5.pdf |
Provable Compositional Generalization for Object-Centric Learning | https://openreview.net/forum?id=7VPTUWkiDQ | https://openreview.net/forum?id=7VPTUWkiDQ | Thaddäus Wiedemer,Jack Brady,Alexander Panfilov,Attila Juhos,Matthias Bethge,Wieland Brendel | ICLR 2024,Oral | Learning representations that generalize to novel compositions of known concepts is crucial for bridging the gap between human and machine perception. One prominent effort is learning object-centric representations, which are widely conjectured to enable compositional generalization. Yet, it remains unclear when this c... | https://openreview.net/pdf/70cd6e52cd58ee0e0b07dfea409db6acc228b343.pdf |
Predictive auxiliary objectives in deep RL mimic learning in the brain | https://openreview.net/forum?id=agPpmEgf8C | https://openreview.net/forum?id=agPpmEgf8C | Ching Fang,Kim Stachenfeld | ICLR 2024,Oral | The ability to predict upcoming events has been hypothesized to comprise a key aspect of natural and machine cognition. This is supported by trends in deep reinforcement learning (RL), where self-supervised auxiliary objectives such as prediction are widely used to support representation learning and improve task perfo... | https://openreview.net/pdf/23365fd987e6b67de035adbd3b3bb679d36ddce7.pdf |
Pre-Training Goal-based Models for Sample-Efficient Reinforcement Learning | https://openreview.net/forum?id=o2IEmeLL9r | https://openreview.net/forum?id=o2IEmeLL9r | Haoqi Yuan,Zhancun Mu,Feiyang Xie,Zongqing Lu | ICLR 2024,Oral | Pre-training on task-agnostic large datasets is a promising approach for enhancing the sample efficiency of reinforcement learning (RL) in solving complex tasks. We present PTGM, a novel method that pre-trains goal-based models to augment RL by providing temporal abstractions and behavior regularization. PTGM involves ... | https://openreview.net/pdf/97ae12300fd1715ec484f1be154d49a619911fff.pdf |
Fine-tuning Aligned Language Models Compromises Safety, Even When Users Do Not Intend To! | https://openreview.net/forum?id=hTEGyKf0dZ | https://openreview.net/forum?id=hTEGyKf0dZ | Xiangyu Qi,Yi Zeng,Tinghao Xie,Pin-Yu Chen,Ruoxi Jia,Prateek Mittal,Peter Henderson | ICLR 2024,Oral | Optimizing large language models (LLMs) for downstream use cases often involves the customization of pre-trained LLMs through further fine-tuning. Meta's open-source release of Llama models and OpenAI's APIs for fine-tuning GPT-3.5 Turbo on customized datasets accelerate this trend. But, what are the safety costs assoc... | https://openreview.net/pdf/cf8a15c7b5a808ae67357cdde0c8f2bbd5c4b8ed.pdf |
Never Train from Scratch: Fair Comparison of Long-Sequence Models Requires Data-Driven Priors | https://openreview.net/forum?id=PdaPky8MUn | https://openreview.net/forum?id=PdaPky8MUn | Ido Amos,Jonathan Berant,Ankit Gupta | ICLR 2024,Oral | Modeling long-range dependencies across sequences is a longstanding goal in machine learning and has led to architectures, such as state space models, that dramatically outperform Transformers on long sequences. However, these impressive empirical gains have been by and large demonstrated on benchmarks (e.g. Long Range... | https://openreview.net/pdf/0f82cdb6beb87821d0a243ee526230c73d7ae798.pdf |
LoftQ: LoRA-Fine-Tuning-aware Quantization for Large Language Models | https://openreview.net/forum?id=LzPWWPAdY4 | https://openreview.net/forum?id=LzPWWPAdY4 | Yixiao Li,Yifan Yu,Chen Liang,Nikos Karampatziakis,Pengcheng He,Weizhu Chen,Tuo Zhao | ICLR 2024,Oral | Quantization is an indispensable technique for serving Large Language Models (LLMs) and has recently found its way into LoRA fine-tuning (Dettmers et al., 2023). In this work we focus on the scenario where quantization and LoRA fine- tuning are applied together on a pre-trained model. In such cases it is common to obse... | https://openreview.net/pdf/c8a3b2454c94e0374c1778862e8fca63e370ba5b.pdf |
Graph Neural Networks for Learning Equivariant Representations of Neural Networks | https://openreview.net/forum?id=oO6FsMyDBt | https://openreview.net/forum?id=oO6FsMyDBt | Miltiadis Kofinas,Boris Knyazev,Yan Zhang,Yunlu Chen,Gertjan J. Burghouts,Efstratios Gavves,Cees G. M. Snoek,David W. Zhang | ICLR 2024,Oral | Neural networks that process the parameters of other neural networks find applications in domains as diverse as classifying implicit neural representations, generating neural network weights, and predicting generalization errors. However, existing approaches either overlook the inherent permutation symmetry in the neur... | https://openreview.net/pdf/338609142f1f45e68ec5fc8b5d6c9a3c0247ee30.pdf |
GNNCert: Deterministic Certification of Graph Neural Networks against Adversarial Perturbations | https://openreview.net/forum?id=IGzaH538fz | https://openreview.net/forum?id=IGzaH538fz | zaishuo xia,Han Yang,Binghui Wang,Jinyuan Jia | ICLR 2024,Oral | Graph classification, which aims to predict a label for a graph, has many real-world applications such as malware detection, fraud detection, and healthcare. However, many studies show an attacker could carefully perturb the structure and/or node features in a graph such that a graph classifier misclassifies the pertur... | https://openreview.net/pdf/03ea622e3c66547d24c4da2f725ddf1fe5db2233.pdf |
Efficient Episodic Memory Utilization of Cooperative Multi-Agent Reinforcement Learning | https://openreview.net/forum?id=LjivA1SLZ6 | https://openreview.net/forum?id=LjivA1SLZ6 | Hyungho Na,Yunkyeong Seo,Il-chul Moon | ICLR 2024,Oral | In cooperative multi-agent reinforcement learning (MARL), agents aim to achieve a common goal, such as defeating enemies or scoring a goal. Existing MARL algorithms are effective but still require significant learning time and often get trapped in local optima by complex tasks, subsequently failing to discover a goal-r... | https://openreview.net/pdf/8b2d5ac5539754d00bf99458a60c63157c74fbdb.pdf |
ClimODE: Climate and Weather Forecasting with Physics-informed Neural ODEs | https://openreview.net/forum?id=xuY33XhEGR | https://openreview.net/forum?id=xuY33XhEGR | Yogesh Verma,Markus Heinonen,Vikas Garg | ICLR 2024,Oral | Climate and weather prediction traditionally relies on complex numerical simulations of atmospheric physics. Deep learning approaches, such as transformers, have recently challenged the simulation paradigm with complex network forecasts. However, they often act as data-driven black-box models that neglect the underlyin... | https://openreview.net/pdf/d6e043c8dac8d842d6ba1816e2b687862e46f2bb.pdf |
Mixed-Type Tabular Data Synthesis with Score-based Diffusion in Latent Space | https://openreview.net/forum?id=4Ay23yeuz0 | https://openreview.net/forum?id=4Ay23yeuz0 | Hengrui Zhang,Jiani Zhang,Zhengyuan Shen,Balasubramaniam Srinivasan,Xiao Qin,Christos Faloutsos,Huzefa Rangwala,George Karypis | ICLR 2024,Oral | Recent advances in tabular data generation have greatly enhanced synthetic data quality. However, extending diffusion models to tabular data is challenging due to the intricately varied distributions and a blend of data types of tabular data. This paper introduces TabSyn, a methodology that synthesizes tabular data by ... | https://openreview.net/pdf/a916d9616f8be0fc9c47c323b6afe8398acf898d.pdf |
Phenomenal Yet Puzzling: Testing Inductive Reasoning Capabilities of Language Models with Hypothesis Refinement | https://openreview.net/forum?id=bNt7oajl2a | https://openreview.net/forum?id=bNt7oajl2a | Linlu Qiu,Liwei Jiang,Ximing Lu,Melanie Sclar,Valentina Pyatkin,Chandra Bhagavatula,Bailin Wang,Yoon Kim,Yejin Choi,Nouha Dziri,Xiang Ren | ICLR 2024,Oral | The ability to derive underlying principles from a handful of observations and then generalize to novel situations---known as inductive reasoning---is central to human intelligence. Prior work suggests that language models (LMs) often fall short on inductive reasoning, despite achieving impressive success on research b... | https://openreview.net/pdf/4032df754ed3bcf600b7b70606e1de283e796547.pdf |
Beyond Weisfeiler-Lehman: A Quantitative Framework for GNN Expressiveness | https://openreview.net/forum?id=HSKaGOi7Ar | https://openreview.net/forum?id=HSKaGOi7Ar | Bohang Zhang,Jingchu Gai,Yiheng Du,Qiwei Ye,Di He,Liwei Wang | ICLR 2024,Oral | Designing expressive Graph Neural Networks (GNNs) is a fundamental topic in the graph learning community. So far, GNN expressiveness has been primarily assessed via the Weisfeiler-Lehman (WL) hierarchy. However, such an expressivity measure has notable limitations: it is inherently coarse, qualitative, and may not well... | https://openreview.net/pdf/1cdf9d7930ee08e1c02c2c2819a16e7a2cc56a4b.pdf |
MathVista: Evaluating Mathematical Reasoning of Foundation Models in Visual Contexts | https://openreview.net/forum?id=KUNzEQMWU7 | https://openreview.net/forum?id=KUNzEQMWU7 | Pan Lu,Hritik Bansal,Tony Xia,Jiacheng Liu,Chunyuan Li,Hannaneh Hajishirzi,Hao Cheng,Kai-Wei Chang,Michel Galley,Jianfeng Gao | ICLR 2024,Oral | Large Language Models (LLMs) and Large Multimodal Models (LMMs) exhibit impressive problem-solving skills in many tasks and domains, but their ability in mathematical reasoning in visual contexts has not been systematically studied. To bridge this gap, we present MathVista, a benchmark designed to combine challenges fr... | https://openreview.net/pdf/787a339a2bb6e601216540a43a659322ff3e4e9e.pdf |
Protein Discovery with Discrete Walk-Jump Sampling | https://openreview.net/forum?id=zMPHKOmQNb | https://openreview.net/forum?id=zMPHKOmQNb | Nathan C. Frey,Dan Berenberg,Karina Zadorozhny,Joseph Kleinhenz,Julien Lafrance-Vanasse,Isidro Hotzel,Yan Wu,Stephen Ra,Richard Bonneau,Kyunghyun Cho,Andreas Loukas,Vladimir Gligorijevic,Saeed Saremi | ICLR 2024,Oral | We resolve difficulties in training and sampling from a discrete generative model by learning a smoothed energy function, sampling from the smoothed data manifold with Langevin Markov chain Monte Carlo (MCMC), and projecting back to the true data manifold with one-step denoising. Our $\textit{Discrete Walk-Jump Samplin... | https://openreview.net/pdf/bd2adb2c58bf36a145a6eb40e827467a71d7aaf1.pdf |
ExeDec: Execution Decomposition for Compositional Generalization in Neural Program Synthesis | https://openreview.net/forum?id=oTRwljRgiv | https://openreview.net/forum?id=oTRwljRgiv | Kensen Shi,Joey Hong,Yinlin Deng,Pengcheng Yin,Manzil Zaheer,Charles Sutton | ICLR 2024,Oral | When writing programs, people have the ability to tackle a new complex task by decomposing it into smaller and more familiar subtasks. While it is difficult to measure whether neural program synthesis methods have similar capabilities, we can measure whether they compositionally generalize, that is, whether a model tha... | https://openreview.net/pdf/a69b0e436a40cc8061344c5a3db100f446f53ee6.pdf |
Batched Low-Rank Adaptation of Foundation Models | https://openreview.net/forum?id=w4abltTZ2f | https://openreview.net/forum?id=w4abltTZ2f | Yeming Wen,Swarat Chaudhuri | ICLR 2024,Oral | Low-Rank Adaptation (LoRA) has recently gained attention for fine-tuning foundation models by incorporating trainable low-rank matrices, thereby reducing the number of trainable parameters. While LoRA offers numerous advantages, its applicability for real-time serving to a diverse and global user base is constrained... | https://openreview.net/pdf/49eea165a2219adfe98557e7d54b6ca13ebb7db9.pdf |
Improved Active Learning via Dependent Leverage Score Sampling | https://openreview.net/forum?id=IYxDy2jDFL | https://openreview.net/forum?id=IYxDy2jDFL | Atsushi Shimizu,Xiaoou Cheng,Christopher Musco,Jonathan Weare | ICLR 2024,Oral | We show how to obtain improved active learning methods in the agnostic (adversarial noise) setting by combining marginal leverage score sampling with non-independent sampling strategies that promote spatial coverage. In particular, we propose an easily implemented method based on the \emph{pivotal sampling algorithm}, ... | https://openreview.net/pdf/98b84fd00d5f25df5c6927e10d5e51cde527543e.pdf |
Model Tells You What to Discard: Adaptive KV Cache Compression for LLMs | https://openreview.net/forum?id=uNrFpDPMyo | https://openreview.net/forum?id=uNrFpDPMyo | Suyu Ge,Yunan Zhang,Liyuan Liu,Minjia Zhang,Jiawei Han,Jianfeng Gao | ICLR 2024,Oral | In this study, we introduce adaptive KV cache compression, a plug-and-play method that reduces the memory footprint of generative inference for Large Language Models (LLMs). Different from the conventional KV cache that retains key and value vectors for all context tokens, we conduct targeted profiling to discern the i... | https://openreview.net/pdf/757a55aa24be0345fe1687e09fa5ca448934e52f.pdf |
One-shot Empirical Privacy Estimation for Federated Learning | https://openreview.net/forum?id=0BqyZSWfzo | https://openreview.net/forum?id=0BqyZSWfzo | Galen Andrew,Peter Kairouz,Sewoong Oh,Alina Oprea,Hugh Brendan McMahan,Vinith Menon Suriyakumar | ICLR 2024,Oral | Privacy estimation techniques for differentially private (DP) algorithms are useful for comparing against analytical bounds, or to empirically measure privacy loss in settings where known analytical bounds are not tight. However, existing privacy auditing techniques usually make strong assumptions on the adversary (e.g... | https://openreview.net/pdf/7808a17938a3798b99894957cc00136bbf609c65.pdf |
SWE-bench: Can Language Models Resolve Real-world Github Issues? | https://openreview.net/forum?id=VTF8yNQM66 | https://openreview.net/forum?id=VTF8yNQM66 | Carlos E Jimenez,John Yang,Alexander Wettig,Shunyu Yao,Kexin Pei,Ofir Press,Karthik R Narasimhan | ICLR 2024,Oral | Language models have outpaced our ability to evaluate them effectively, but for their future development it is essential to study the frontier of their capabilities. We find real-world software engineering to be a rich, sustainable, and challenging testbed for evaluating the next generation of language models. To this ... | https://openreview.net/pdf/c2a76eb44300a738cbd7cb95f5bc04df621f4d25.pdf |
ReLU Strikes Back: Exploiting Activation Sparsity in Large Language Models | https://openreview.net/forum?id=osoWxY8q2E | https://openreview.net/forum?id=osoWxY8q2E | Seyed Iman Mirzadeh,Keivan Alizadeh-Vahid,Sachin Mehta,Carlo C del Mundo,Oncel Tuzel,Golnoosh Samei,Mohammad Rastegari,Mehrdad Farajtabar | ICLR 2024,Oral | Large Language Models (LLMs) with billions of parameters have drastically transformed AI applications. However, their demanding computation during inference has raised significant challenges for deployment on resource-constrained devices. Despite recent trends favoring alternative activation functions such as GELU or S... | https://openreview.net/pdf/a407324c94efa754d43a6c1718e24541d34e2f24.pdf |
On the Joint Interaction of Models, Data, and Features | https://openreview.net/forum?id=ze7DOLi394 | https://openreview.net/forum?id=ze7DOLi394 | Yiding Jiang,Christina Baek,J Zico Kolter | ICLR 2024,Oral | Learning features from data is one of the defining characteristics of deep learning, but the theoretical understanding of the role features play in deep learning is still in early development. To address this gap, we introduce a new tool, the interaction tensor, for empirically analyzing the interaction between data an... | https://openreview.net/pdf/86a102e47488a58d90fc222cf560db16f68dc65d.pdf |
Topological data analysis on noisy quantum computers | https://openreview.net/forum?id=dLrhRIMVmB | https://openreview.net/forum?id=dLrhRIMVmB | Ismail Yunus Akhalwaya,Shashanka Ubaru,Kenneth L. Clarkson,Mark S. Squillante,Vishnu Jejjala,Yang-Hui He,Kugendran Naidoo,Vasileios Kalantzis,Lior Horesh | ICLR 2024,Oral | Topological data analysis (TDA) is a powerful technique for extracting complex and valuable shape-related summaries of high-dimensional data. However, the computational demands of classical algorithms for computing TDA are exorbitant, and quickly become impractical for high-order characteristics. Quantum computers offe... | https://openreview.net/pdf/07776ae8b91f82e5061d6b246a4e9aacc7bddb41.pdf |
Self-RAG: Learning to Retrieve, Generate, and Critique through Self-Reflection | https://openreview.net/forum?id=hSyW5go0v8 | https://openreview.net/forum?id=hSyW5go0v8 | Akari Asai,Zeqiu Wu,Yizhong Wang,Avirup Sil,Hannaneh Hajishirzi | ICLR 2024,Oral | Despite their remarkable capabilities, large language models (LLMs) often produce responses containing factual inaccuracies due to their sole reliance on the parametric knowledge they encapsulate. Retrieval-Augmented Generation (RAG), an ad hoc approach that augments LMs with retrieval of relevant knowledge, decreases ... | https://openreview.net/pdf/9a78cf641fab9032078e65ae2734293ae8e2f398.pdf |
"What Data Benefits My Classifier?" Enhancing Model Performance and Interpretability through Influence-Based Data Selection | https://openreview.net/forum?id=HE9eUQlAvo | https://openreview.net/forum?id=HE9eUQlAvo | Anshuman Chhabra,Peizhao Li,Prasant Mohapatra,Hongfu Liu | ICLR 2024,Oral | Classification models are ubiquitously deployed in society and necessitate high utility, fairness, and robustness performance. Current research efforts mainly focus on improving model architectures and learning algorithms on fixed datasets to achieve this goal. In contrast, in this paper, we address an orthogonal yet c... | https://openreview.net/pdf/c9c086d91e0480dcd349f7bb625a5031fabcc53a.pdf |
Generative Modeling with Phase Stochastic Bridge | https://openreview.net/forum?id=tUtGjQEDd4 | https://openreview.net/forum?id=tUtGjQEDd4 | Tianrong Chen,Jiatao Gu,Laurent Dinh,Evangelos Theodorou,Joshua M. Susskind,Shuangfei Zhai | ICLR 2024,Oral | Diffusion models (DMs) represent state-of-the-art generative models for continuous inputs. DMs work by constructing a Stochastic Differential Equation (SDE) in the input space (i.e., position space), and using a neural network to reverse it. In this work, we introduce a novel generative modeling framework grounded in \te... | https://openreview.net/pdf/5d5ddf9cd03dbc97896ca72e62060b33d19f59e7.pdf |
Zipformer: A faster and better encoder for automatic speech recognition | https://openreview.net/forum?id=9WD9KwssyT | https://openreview.net/forum?id=9WD9KwssyT | Zengwei Yao,Liyong Guo,Xiaoyu Yang,Wei Kang,Fangjun Kuang,Yifan Yang,Zengrui Jin,Long Lin,Daniel Povey | ICLR 2024,Oral | The Conformer has become the most popular encoder model for automatic speech recognition (ASR). It adds convolution modules to a transformer to learn both local and global dependencies. In this work we describe a faster, more memory-efficient, and better-performing transformer, called Zipformer. Modeling changes incl... | https://openreview.net/pdf/73f36dfc4a1fa9d3dd37fdb3cb11d5be19364046.pdf |
MetaGPT: Meta Programming for A Multi-Agent Collaborative Framework | https://openreview.net/forum?id=VtmBAGCN7o | https://openreview.net/forum?id=VtmBAGCN7o | Sirui Hong,Mingchen Zhuge,Jonathan Chen,Xiawu Zheng,Yuheng Cheng,Jinlin Wang,Ceyao Zhang,Zili Wang,Steven Ka Shing Yau,Zijuan Lin,Liyang Zhou,Chenyu Ran,Lingfeng Xiao,Chenglin Wu,Jürgen Schmidhuber | ICLR 2024,Oral | Recently, remarkable progress has been made on automated problem solving through societies of agents based on large language models (LLMs). Previous LLM-based multi-agent systems can already solve simple dialogue tasks. More complex tasks, however, face challenges through logic inconsistencies due to cascading hallucin... | https://openreview.net/pdf/474fc6dad3bd9bf7fdb97c7cd72b2cc0649a9647.pdf |
ValUES: A Framework for Systematic Validation of Uncertainty Estimation in Semantic Segmentation | https://openreview.net/forum?id=yV6fD7LYkF | https://openreview.net/forum?id=yV6fD7LYkF | Kim-Celine Kahl,Carsten T. Lüth,Maximilian Zenk,Klaus Maier-Hein,Paul F Jaeger | ICLR 2024,Oral | Uncertainty estimation is an essential and heavily-studied component for the reliable application of semantic segmentation methods. While various studies exist claiming methodological advances on the one hand, and successful application on the other hand, the field is currently hampered by a gap between theory and prac... | https://openreview.net/pdf/f1a6b968ddfb2f0ebdeb46499417239973e92e7e.pdf |
Finetuning Text-to-Image Diffusion Models for Fairness | https://openreview.net/forum?id=hnrB5YHoYu | https://openreview.net/forum?id=hnrB5YHoYu | Xudong Shen,Chao Du,Tianyu Pang,Min Lin,Yongkang Wong,Mohan Kankanhalli | ICLR 2024,Oral | The rapid adoption of text-to-image diffusion models in society underscores an urgent need to address their biases. Without interventions, these biases could propagate a skewed worldview and restrict opportunities for minority groups. In this work, we frame fairness as a distributional alignment problem. Our solution c... | https://openreview.net/pdf/9fa6cd12f622fa7dffccbd1c62d26545e012eafa.pdf |
Knowledge Card: Filling LLMs' Knowledge Gaps with Plug-in Specialized Language Models | https://openreview.net/forum?id=WbWtOYIzIK | https://openreview.net/forum?id=WbWtOYIzIK | Shangbin Feng,Weijia Shi,Yuyang Bai,Vidhisha Balachandran,Tianxing He,Yulia Tsvetkov | ICLR 2024,Oral | By design, large language models (LLMs) are static general-purpose models, expensive to retrain or update frequently. As they are increasingly adopted for knowledge-intensive tasks, it becomes evident that these design choices lead to failures to generate factual, relevant, and up-to-date knowledge. To this end, we pro... | https://openreview.net/pdf/93b8f30fd873a0887265f980d789959bfeb89e40.pdf |
METRA: Scalable Unsupervised RL with Metric-Aware Abstraction | https://openreview.net/forum?id=c5pwL0Soay | https://openreview.net/forum?id=c5pwL0Soay | Seohong Park,Oleh Rybkin,Sergey Levine | ICLR 2024,Oral | Unsupervised pre-training strategies have proven to be highly effective in natural language processing and computer vision. Likewise, unsupervised reinforcement learning (RL) holds the promise of discovering a variety of potentially useful behaviors that can accelerate the learning of a wide array of downstream tasks. ... | https://openreview.net/pdf/957e22f4e911e7ad35fff291d142a0a622982c0a.pdf |
Meta Continual Learning Revisited: Implicitly Enhancing Online Hessian Approximation via Variance Reduction | https://openreview.net/forum?id=TpD2aG1h0D | https://openreview.net/forum?id=TpD2aG1h0D | Yichen Wu,Long-Kai Huang,Renzhen Wang,Deyu Meng,Ying Wei | ICLR 2024,Oral | Regularization-based methods have so far been among the *de facto* choices for continual learning. Recent theoretical studies have revealed that these methods all boil down to relying on the Hessian matrix approximation of model weights. However, these methods suffer from suboptimal trade-offs between knowledge transf... | https://openreview.net/pdf/28a552d86247251eb46610359a599b07e5b3e5eb.pdf |
Improving Convergence and Generalization Using Parameter Symmetries | https://openreview.net/forum?id=L0r0GphlIL | https://openreview.net/forum?id=L0r0GphlIL | Bo Zhao,Robert M. Gower,Robin Walters,Rose Yu | ICLR 2024,Oral | In many neural networks, different values of the parameters may result in the same loss value. Parameter space symmetries are loss-invariant transformations that change the model parameters. Teleportation applies such transformations to accelerate optimization. However, the exact mechanism behind this algorithm's succe... | https://openreview.net/pdf/5c8faf4be06ab48f03f7a0b88199632f8db72f7c.pdf |
Flow Matching on General Geometries | https://openreview.net/forum?id=g7ohDlTITL | https://openreview.net/forum?id=g7ohDlTITL | Ricky T. Q. Chen,Yaron Lipman | ICLR 2024,Oral | We propose Riemannian Flow Matching (RFM), a simple yet powerful framework for training continuous normalizing flows on manifolds. Existing methods for generative modeling on manifolds either require expensive simulation, are inherently unable to scale to high dimensions, or use approximations for limiting quantities t... | https://openreview.net/pdf/00e980dec1d5ee17094141c71986553014f8a41a.pdf |
Ghost on the Shell: An Expressive Representation of General 3D Shapes | https://openreview.net/forum?id=Ad87VjRqUw | https://openreview.net/forum?id=Ad87VjRqUw | Zhen Liu,Yao Feng,Yuliang Xiu,Weiyang Liu,Liam Paull,Michael J. Black,Bernhard Schölkopf | ICLR 2024,Oral | The creation of photorealistic virtual worlds requires the accurate modeling of 3D surface geometry for a wide range of objects. For this, meshes are appealing since they enable 1) fast physics-based rendering with realistic material and lighting, 2) physical simulation, and 3) are memory-efficient for modern graphics ... | https://openreview.net/pdf/97f11bc98d70c1fbee4e5f3325299c53225c6bfc.pdf |
Würstchen: An Efficient Architecture for Large-Scale Text-to-Image Diffusion Models | https://openreview.net/forum?id=gU58d5QeGv | https://openreview.net/forum?id=gU58d5QeGv | Pablo Pernias,Dominic Rampas,Mats Leon Richter,Christopher Pal,Marc Aubreville | ICLR 2024,Oral | We introduce Würstchen, a novel architecture for text-to-image synthesis that combines competitive performance with unprecedented cost-effectiveness for large-scale text-to-image diffusion models. A key contribution of our work is to develop a latent diffusion technique in which we learn a detailed but extremely compac... | https://openreview.net/pdf/31506ae62c31613539a0623777d341cb424cf5b9.pdf |
Unified Generative Modeling of 3D Molecules with Bayesian Flow Networks | https://openreview.net/forum?id=NSVtmmzeRB | https://openreview.net/forum?id=NSVtmmzeRB | Yuxuan Song,Jingjing Gong,Hao Zhou,Mingyue Zheng,Jingjing Liu,Wei-Ying Ma | ICLR 2024,Oral | Advanced generative model (\textit{e.g.}, diffusion model) derived from simplified continuity assumptions of data distribution, though showing promising progress, has been difficult to apply directly to geometry generation applications due to the \textit{multi-modality} and \textit{noise-sensitive} nature of molecule g... | https://openreview.net/pdf/ddfe46bc639f9c1dc849398c8b3d978ffd171431.pdf |
Small-scale proxies for large-scale Transformer training instabilities | https://openreview.net/forum?id=d8w0pmvXbZ | https://openreview.net/forum?id=d8w0pmvXbZ | Mitchell Wortsman,Peter J Liu,Lechao Xiao,Katie E Everett,Alexander A Alemi,Ben Adlam,John D Co-Reyes,Izzeddin Gur,Abhishek Kumar,Roman Novak,Jeffrey Pennington,Jascha Sohl-Dickstein,Kelvin Xu,Jaehoon Lee,Justin Gilmer,Simon Kornblith | ICLR 2024,Oral | Teams that have trained large Transformer-based models have reported training instabilities at large scale that did not appear when training with the same hyperparameters at smaller scales. Although the causes of such instabilities are of scientific interest, the amount of resources required to reproduce them has made ... | https://openreview.net/pdf/779db5974973fe74f026f4a70e3f08d16c11cadb.pdf |
How I Warped Your Noise: a Temporally-Correlated Noise Prior for Diffusion Models | https://openreview.net/forum?id=pzElnMrgSD | https://openreview.net/forum?id=pzElnMrgSD | Pascal Chang,Jingwei Tang,Markus Gross,Vinicius C. Azevedo | ICLR 2024,Oral | Video editing and generation methods often rely on pre-trained image-based diffusion models. During the diffusion process, however, the reliance on rudimentary noise sampling techniques that do not preserve correlations present in subsequent frames of a video is detrimental to the quality of the results. This either pr... | https://openreview.net/pdf/c35a99656514c0312f7f69d2ecda8ffec1a632de.pdf |
Vision Transformers Need Registers | https://openreview.net/forum?id=2dnO3LLiJ1 | https://openreview.net/forum?id=2dnO3LLiJ1 | Timothée Darcet,Maxime Oquab,Julien Mairal,Piotr Bojanowski | ICLR 2024,Oral | Transformers have recently emerged as a powerful tool for learning visual representations. In this paper, we identify and characterize artifacts in feature maps of both supervised and self-supervised ViT networks. The artifacts correspond to high-norm tokens appearing during inference primarily in low-informative backg... | https://openreview.net/pdf/1db45cd6c97acf30b37c4ee9ac6e79d4f3ac7763.pdf |
An Analytical Solution to Gauss-Newton Loss for Direct Image Alignment | https://openreview.net/forum?id=mE52zURNGc | https://openreview.net/forum?id=mE52zURNGc | Sergei Solonets,Daniil Sinitsyn,Lukas Von Stumberg,Nikita Araslanov,Daniel Cremers | ICLR 2024,Oral | Direct image alignment is a widely used technique for relative 6DoF pose estimation between two images, but its accuracy strongly depends on pose initialization. Therefore, recent end-to-end frameworks increase the convergence basin of the learned feature descriptors with special training objectives, such as the Gauss-... | https://openreview.net/pdf/8cc6141bff9dadb82d553ab8ac1b1ff6d4f434a9.pdf |
Learning Energy Decompositions for Partial Inference in GFlowNets | https://openreview.net/forum?id=P15CHILQlg | https://openreview.net/forum?id=P15CHILQlg | Hyosoon Jang,Minsu Kim,Sungsoo Ahn | ICLR 2024,Oral | This paper studies generative flow networks (GFlowNets) to sample objects from the Boltzmann energy distribution via a sequence of actions. In particular, we focus on improving GFlowNet with partial inference: training flow functions with the evaluation of the intermediate states or transitions. To this end, the recent... | https://openreview.net/pdf/54bfe1a393ed4a31554ead18c45d5b62548007be.pdf |
Approximating Nash Equilibria in Normal-Form Games via Stochastic Optimization | https://openreview.net/forum?id=cc8h3I3V4E | https://openreview.net/forum?id=cc8h3I3V4E | Ian Gemp,Luke Marris,Georgios Piliouras | ICLR 2024,Oral | We propose the first loss function for approximate Nash equilibria of normal-form games that is amenable to unbiased Monte Carlo estimation. This construction allows us to deploy standard non-convex stochastic optimization techniques for approximating Nash equilibria, resulting in novel algorithms with provable guaran... | https://openreview.net/pdf/6116af6dc392a3153d1462f038b9dac4f8305ca6.pdf |
Multi-Source Diffusion Models for Simultaneous Music Generation and Separation | https://openreview.net/forum?id=h922Qhkmx1 | https://openreview.net/forum?id=h922Qhkmx1 | Giorgio Mariani,Irene Tallini,Emilian Postolache,Michele Mancusi,Luca Cosmo,Emanuele Rodolà | ICLR 2024,Oral | In this work, we define a diffusion-based generative model capable of both music generation and source separation by learning the score of the joint probability density of sources sharing a context. Alongside the classic total inference tasks (i.e., generating a mixture, separating the sources), we also introduce and e... | https://openreview.net/pdf/e9d4d9aabe25aa6dc764b915d5844871ff4bcd7c.pdf |
LEGO-Prover: Neural Theorem Proving with Growing Libraries | https://openreview.net/forum?id=3f5PALef5B | https://openreview.net/forum?id=3f5PALef5B | Haiming Wang,Huajian Xin,Chuanyang Zheng,Zhengying Liu,Qingxing Cao,Yinya Huang,Jing Xiong,Han Shi,Enze Xie,Jian Yin,Zhenguo Li,Xiaodan Liang | ICLR 2024,Oral | Despite the success of large language models (LLMs), the task of theorem proving still remains one of the hardest reasoning tasks that is far from being fully solved. Prior methods using language models have demonstrated promising results, but they still struggle to prove even middle school level theorems. One common l... | https://openreview.net/pdf/3133380a86db246c6a9e18dabc0a301196b70cd6.pdf |
ASID: Active Exploration for System Identification in Robotic Manipulation | https://openreview.net/forum?id=jNR6s6OSBT | https://openreview.net/forum?id=jNR6s6OSBT | Marius Memmel,Andrew Wagenmaker,Chuning Zhu,Dieter Fox,Abhishek Gupta | ICLR 2024,Oral | Model-free control strategies such as reinforcement learning have shown the ability to learn control strategies without requiring an accurate model or simulator of the world. While this is appealing due to the lack of modeling requirements, such methods can be sample inefficient, making them impractical in many real-wo... | https://openreview.net/pdf/f456ef2115275fac2aa0977b3c7db68ed00add89.pdf |
Towards a statistical theory of data selection under weak supervision | https://openreview.net/forum?id=HhfcNgQn6p | https://openreview.net/forum?id=HhfcNgQn6p | Germain Kolossov,Andrea Montanari,Pulkit Tandon | ICLR 2024,Oral | Given a sample of size $N$, it is often useful to select a subsample of smaller size $n<N$ to be used for statistical estimation or learning. Such a data selection step is useful to reduce the requirements of data labeling and the computational complexity of learning. We assume to be given $N$ unlabeled samples $x_{i}... | https://openreview.net/pdf/3afcd7230f6f462e837b839132c8cdd6cfceb037.pdf |
Mastering Memory Tasks with World Models | https://openreview.net/forum?id=1vDArHJ68h | https://openreview.net/forum?id=1vDArHJ68h | Mohammad Reza Samsami,Artem Zholus,Janarthanan Rajendran,Sarath Chandar | ICLR 2024,Oral | Current model-based reinforcement learning (MBRL) agents struggle with long-term dependencies. This limits their ability to effectively solve tasks involving extended time gaps between actions and outcomes, or tasks demanding the recalling of distant observations to inform current actions. To improve temporal coherence... | https://openreview.net/pdf/152e0fd1736694958db18ece2cda594d14c79969.pdf |
Monte Carlo guided Denoising Diffusion models for Bayesian linear inverse problems. | https://openreview.net/forum?id=nHESwXvxWK | https://openreview.net/forum?id=nHESwXvxWK | Gabriel Cardoso,Yazid Janati el idrissi,Sylvain Le Corff,Eric Moulines | ICLR 2024,Oral | Ill-posed linear inverse problems arise frequently in various applications, from computational photography to medical imaging. A recent line of research exploits Bayesian inference with informative priors to handle the ill-posedness of such problems. Amongst such priors, score-based generative models (SGM) have recentl... | https://openreview.net/pdf/c0015dd72ccf0837042cc9453b2722e3b53f1893.pdf |
Self-Alignment with Instruction Backtranslation | https://openreview.net/forum?id=1oijHJBRsT | https://openreview.net/forum?id=1oijHJBRsT | Xian Li,Ping Yu,Chunting Zhou,Timo Schick,Omer Levy,Luke Zettlemoyer,Jason E Weston,Mike Lewis | ICLR 2024,Oral | We present a scalable method to build a high quality instruction following language model by automatically labelling human-written text with corresponding instructions. Our approach, named instruction backtranslation, starts with a language model finetuned on a small amount of seed data, and a given web corpus. The see... | https://openreview.net/pdf/1d2560a0bb5b83c6bafcac88a94445a60971be31.pdf |
Learning Interactive Real-World Simulators | https://openreview.net/forum?id=sFyTZEqmUY | https://openreview.net/forum?id=sFyTZEqmUY | Sherry Yang,Yilun Du,Seyed Kamyar Seyed Ghasemipour,Jonathan Tompson,Leslie Pack Kaelbling,Dale Schuurmans,Pieter Abbeel | ICLR 2024,Oral | Generative models trained on internet data have revolutionized how text, image, and video content can be created. Perhaps the next milestone for generative models is to simulate realistic experience in response to actions taken by humans, robots, and other interactive agents. Applications of a real-world simulator rang... | https://openreview.net/pdf/ebbd0d77e65c2e2ffb1eef300c8c55e4f2f27c86.pdf |
Candidate Label Set Pruning: A Data-centric Perspective for Deep Partial-label Learning | https://openreview.net/forum?id=Fk5IzauJ7F | https://openreview.net/forum?id=Fk5IzauJ7F | Shuo He,Chaojie Wang,Guowu Yang,Lei Feng | ICLR 2024,Oral | Partial-label learning (PLL) allows each training example to be equipped with a set of candidate labels. Existing deep PLL research focuses on a \emph{learning-centric} perspective to design various training strategies for label disambiguation i.e., identifying the concealed true label from the candidate label set, for... | https://openreview.net/pdf/acca7b23067f28f766cd4bad4ec9bc2875702fc8.pdf |
Robust agents learn causal world models | https://openreview.net/forum?id=pOoKI3ouv1 | https://openreview.net/forum?id=pOoKI3ouv1 | Jonathan Richens,Tom Everitt | ICLR 2024,Oral | It has long been hypothesised that causal reasoning plays a fundamental role in robust and general intelligence. However, it is not known if agents must learn causal models in order to generalise to new domains, or if other inductive biases are sufficient. We answer this question, showing that any agent capable of sati... | https://openreview.net/pdf/82e4b7b89fa93d52b6278d9d868ccb4800abb8ff.pdf |
On the Humanity of Conversational AI: Evaluating the Psychological Portrayal of LLMs | https://openreview.net/forum?id=H3UayAQWoE | https://openreview.net/forum?id=H3UayAQWoE | Jen-tse Huang,Wenxuan Wang,Eric John Li,Man Ho LAM,Shujie Ren,Youliang Yuan,Wenxiang Jiao,Zhaopeng Tu,Michael Lyu | ICLR 2024,Oral | Large Language Models (LLMs) have recently showcased their remarkable capacities, not only in natural language processing tasks but also across diverse domains such as clinical medicine, legal consultation, and education. LLMs become more than mere applications, evolving into assistants capable of addressing diverse us... | https://openreview.net/pdf/b229e8ebcec1e8bef4ab8642d47d29495fdc9534.pdf |
Diffusion Model for Dense Matching | https://openreview.net/forum?id=Zsfiqpft6K | https://openreview.net/forum?id=Zsfiqpft6K | Jisu Nam,Gyuseong Lee,Sunwoo Kim,Hyeonsu Kim,Hyoungwon Cho,Seyeon Kim,Seungryong Kim | ICLR 2024,Oral | The objective for establishing dense correspondence between paired images consists of two terms: a data term and a prior term. While conventional techniques focused on defining hand-designed prior terms, which are difficult to formulate, recent approaches have focused on learning the data term with deep neural netw... | https://openreview.net/pdf/08dc4021186fe33924d3253d7640112693991448.pdf |
Is ImageNet worth 1 video? Learning strong image encoders from 1 long unlabelled video | https://openreview.net/forum?id=Yen1lGns2o | https://openreview.net/forum?id=Yen1lGns2o | Shashanka Venkataramanan,Mamshad Nayeem Rizve,Joao Carreira,Yuki M Asano,Yannis Avrithis | ICLR 2024,Oral | Self-supervised learning has unlocked the potential of scaling up pretraining to billions of images, since annotation is unnecessary. But are we making the best use of data? How more economical can we be? In this work, we attempt to answer this question by making two contributions. First, we investigate first-person vi... | https://openreview.net/pdf/822b36f39680f189e99f3c34413c6b5c89d6b51a.pdf |
Neural Fine-Tuning Search for Few-Shot Learning | https://openreview.net/forum?id=T7YV5UZKBc | https://openreview.net/forum?id=T7YV5UZKBc | Panagiotis Eustratiadis,Łukasz Dudziak,Da Li,Timothy Hospedales | ICLR 2024,Oral | In few-shot recognition, a classifier that has been trained on one set of classes is required to rapidly adapt and generalize to a disjoint, novel set of classes. To that end, recent studies have shown the efficacy of fine-tuning with carefully-crafted adaptation architectures. However this raises the question of: How ... | https://openreview.net/pdf/9878859cc4979dc1552ab1c206ff30122453346b.pdf |
Latent Trajectory Learning for Limited Timestamps under Distribution Shift over Time | https://openreview.net/forum?id=bTMMNT7IdW | https://openreview.net/forum?id=bTMMNT7IdW | QIUHAO Zeng,Changjian Shui,Long-Kai Huang,Peng Liu,Xi Chen,Charles Ling,Boyu Wang | ICLR 2024,Oral | Distribution shifts over time are common in real-world machine-learning applications. This scenario is formulated as Evolving Domain Generalization (EDG), where models aim to generalize well to unseen target domains in a time-varying system by learning and leveraging the underlying evolving pattern of the distribution ... | https://openreview.net/pdf/2d6728dfffe50fd8e8627061ece7a1f07abc5462.pdf |
Less is More: Fewer Interpretable Region via Submodular Subset Selection | https://openreview.net/forum?id=jKTUlxo5zy | https://openreview.net/forum?id=jKTUlxo5zy | Ruoyu Chen,Hua Zhang,Siyuan Liang,Jingzhi Li,Xiaochun Cao | ICLR 2024,Oral | Image attribution algorithms aim to identify important regions that are highly relevant to model decisions. Although existing attribution solutions can effectively assign importance to target elements, they still face the following challenges: 1) existing attribution methods generate inaccurate small regions thus misle... | https://openreview.net/pdf/ab53441cc4465bcbb3d2ffd4fc53dc1b27e76e6e.pdf |
Cameras as Rays: Pose Estimation via Ray Diffusion | https://openreview.net/forum?id=EanCFCwAjM | https://openreview.net/forum?id=EanCFCwAjM | Jason Y. Zhang,Amy Lin,Moneish Kumar,Tzu-Hsuan Yang,Deva Ramanan,Shubham Tulsiani | ICLR 2024,Oral | Estimating camera poses is a fundamental task for 3D reconstruction and remains challenging given sparsely sampled views (<10). In contrast to existing approaches that pursue top-down prediction of global parametrizations of camera extrinsics, we propose a distributed representation of camera pose that treats a camera ... | https://openreview.net/pdf/b94ec4f9e7354e38e14b5a0da4e4f829f20f381a.pdf |
Accelerating Distributed Stochastic Optimization via Self-Repellent Random Walks | https://openreview.net/forum?id=BV1PHbTJzd | https://openreview.net/forum?id=BV1PHbTJzd | Jie Hu,Vishwaraj Doshi,Do Young Eun | ICLR 2024,Oral | We study a family of distributed stochastic optimization algorithms where gradients are sampled by a token traversing a network of agents in random-walk fashion. Typically, these random-walks are chosen to be Markov chains that asymptotically sample from a desired target distribution, and play a critical role in the co... | https://openreview.net/pdf/64bc85abf6c1e89455cfa6d45b1c2c03c4e4ee54.pdf |
Detecting, Explaining, and Mitigating Memorization in Diffusion Models | https://openreview.net/forum?id=84n3UwkH7b | https://openreview.net/forum?id=84n3UwkH7b | Yuxin Wen,Yuchen Liu,Chen Chen,Lingjuan Lyu | ICLR 2024,Oral | Recent breakthroughs in diffusion models have exhibited exceptional image-generation capabilities. However, studies show that some outputs are merely replications of training data. Such replications present potential legal challenges for model owners, especially when the generated content contains proprietary informati... | https://openreview.net/pdf/f7cb8a4a7ba048a0d09bdea01774be1a0676504f.pdf |
Quick-Tune: Quickly Learning Which Pretrained Model to Finetune and How | https://openreview.net/forum?id=tqh1zdXIra | https://openreview.net/forum?id=tqh1zdXIra | Sebastian Pineda Arango,Fabio Ferreira,Arlind Kadra,Frank Hutter,Josif Grabocka | ICLR 2024,Oral | With the ever-increasing number of pretrained models, machine learning practitioners are continuously faced with which pretrained model to use, and how to finetune it for a new dataset. In this paper, we propose a methodology that jointly searches for the optimal pretrained model and the hyperparameters for finetuning ... | https://openreview.net/pdf/0d50254746a68fad8be9e1216532dcd5924e2019.pdf |
LongLoRA: Efficient Fine-tuning of Long-Context Large Language Models | https://openreview.net/forum?id=6PmJoRfdaK | https://openreview.net/forum?id=6PmJoRfdaK | Yukang Chen,Shengju Qian,Haotian Tang,Xin Lai,Zhijian Liu,Song Han,Jiaya Jia | ICLR 2024,Oral | We present LongLoRA, an efficient fine-tuning approach that extends the context sizes of pre-trained large language models (LLMs), with limited computation cost. Typically, training LLMs with long context sizes is computationally expensive, requiring extensive training hours and GPU resources. For example, training on ... | https://openreview.net/pdf/c59a7d7e3b772a1cea62d9bac390273a26c26734.pdf |
Amortizing intractable inference in large language models | https://openreview.net/forum?id=Ouj6p4ca60 | https://openreview.net/forum?id=Ouj6p4ca60 | Edward J Hu,Moksh Jain,Eric Elmoznino,Younesse Kaddar,Guillaume Lajoie,Yoshua Bengio,Nikolay Malkin | ICLR 2024,Oral | Autoregressive large language models (LLMs) compress knowledge from their training data through next-token conditional distributions. This limits tractable querying of this knowledge to start-to-end autoregressive sampling. However, many tasks of interest---including sequence continuation, infilling, and other forms of... | https://openreview.net/pdf/4636785df4e848cf95cee05d7314fcb50e2d4c3c.pdf |
LLMCarbon: Modeling the End-to-End Carbon Footprint of Large Language Models | https://openreview.net/forum?id=aIok3ZD9to | https://openreview.net/forum?id=aIok3ZD9to | Ahmad Faiz,Sotaro Kaneda,Ruhan Wang,Rita Chukwunyere Osi,Prateek Sharma,Fan Chen,Lei Jiang | ICLR 2024,Oral | The carbon footprint associated with large language models (LLMs) is a significant concern, encompassing emissions from their training, inference, experimentation, and storage processes, including operational and embodied carbon emissions. An essential aspect is accurately estimating the carbon impact of emerging LLMs ... | https://openreview.net/pdf/43015130fe7515c37278585d5e156acdb8bba5fb.pdf |
A Real-World WebAgent with Planning, Long Context Understanding, and Program Synthesis | https://openreview.net/forum?id=9JQtrumvg8 | https://openreview.net/forum?id=9JQtrumvg8 | Izzeddin Gur,Hiroki Furuta,Austin V Huang,Mustafa Safdari,Yutaka Matsuo,Douglas Eck,Aleksandra Faust | ICLR 2024,Oral | Pre-trained large language models (LLMs) have recently achieved better generalization and sample efficiency in autonomous web automation. However, the performance on real-world websites has still suffered from (1) open domainness, (2) limited context length, and (3) lack of inductive bias on HTML. We introduce WebAgent... | https://openreview.net/pdf/0b27823f96e3efd0ed6921aafc4fe4643d1aeec5.pdf |
Lipschitz Singularities in Diffusion Models | https://openreview.net/forum?id=WNkW0cOwiz | https://openreview.net/forum?id=WNkW0cOwiz | Zhantao Yang,Ruili Feng,Han Zhang,Yujun Shen,Kai Zhu,Lianghua Huang,Yifei Zhang,Yu Liu,Deli Zhao,Jingren Zhou,Fan Cheng | ICLR 2024,Oral | Diffusion models, which employ stochastic differential equations to sample images through integrals, have emerged as a dominant class of generative models. However, the rationality of the diffusion process itself receives limited attention, leaving the question of whether the problem is well-posed and well-conditioned.... | https://openreview.net/pdf/79d5382f3723bab77cf1931fe0c461eb35d8218a.pdf |
Interpreting CLIP's Image Representation via Text-Based Decomposition | https://openreview.net/forum?id=5Ca9sSzuDp | https://openreview.net/forum?id=5Ca9sSzuDp | Yossi Gandelsman,Alexei A Efros,Jacob Steinhardt | ICLR 2024,Oral | We investigate the CLIP image encoder by analyzing how individual model components affect the final representation. We decompose the image representation as a sum across individual image patches, model layers, and attention heads, and use CLIP's text representation to interpret the summands. Interpreting the attention ... | https://openreview.net/pdf/8570a395fcdad9f81c89c604044a2406efb7dc7b.pdf |
Multisize Dataset Condensation | https://openreview.net/forum?id=FVhmnvqnsI | https://openreview.net/forum?id=FVhmnvqnsI | Yang He,Lingao Xiao,Joey Tianyi Zhou,Ivor Tsang | ICLR 2024,Oral | While dataset condensation effectively enhances training efficiency, its application in on-device scenarios brings unique challenges. 1) Due to the fluctuating computational resources of these devices, there's a demand for a flexible dataset size that diverges from a predefined size. 2) The limited computational power ... | https://openreview.net/pdf/316b7fa983b9fde383169e561c22722abd5b96fb.pdf |
DreamGaussian: Generative Gaussian Splatting for Efficient 3D Content Creation | https://openreview.net/forum?id=UyNXMqnN3c | https://openreview.net/forum?id=UyNXMqnN3c | Jiaxiang Tang,Jiawei Ren,Hang Zhou,Ziwei Liu,Gang Zeng | ICLR 2024,Oral | Recent advances in 3D content creation mostly leverage optimization-based 3D generation via score distillation sampling (SDS). Though promising results have been exhibited, these methods often suffer from slow per-sample optimization, limiting their practical usage. In this paper, we propose DreamGaussian, a novel 3D ... | https://openreview.net/pdf/6070ff46264213801f0d925ab0af21f3c57d8c37.pdf |
LRM: Large Reconstruction Model for Single Image to 3D | https://openreview.net/forum?id=sllU8vvsFF | https://openreview.net/forum?id=sllU8vvsFF | Yicong Hong,Kai Zhang,Jiuxiang Gu,Sai Bi,Yang Zhou,Difan Liu,Feng Liu,Kalyan Sunkavalli,Trung Bui,Hao Tan | ICLR 2024,Oral | We propose the first Large Reconstruction Model (LRM) that predicts the 3D model of an object from a single input image within just 5 seconds. In contrast to many previous methods that are trained on small-scale datasets such as ShapeNet in a category-specific fashion, LRM adopts a highly scalable transformer-based arc... | https://openreview.net/pdf/21831b2594b6b1378c517f290ba90103625d2d55.pdf |
How Well Do Supervised 3D Models Transfer to Medical Imaging Tasks? | https://openreview.net/forum?id=AhizIPytk4 | https://openreview.net/forum?id=AhizIPytk4 | Wenxuan Li,Alan Yuille,Zongwei Zhou | ICLR 2024,Oral | The pre-training and fine-tuning paradigm has become prominent in transfer learning. For example, if the model is pre-trained on ImageNet and then fine-tuned to PASCAL, it can significantly outperform that trained on PASCAL from scratch. While ImageNet pre-training has shown enormous success, it is formed in 2D, and th... | https://openreview.net/pdf/08dee4fe8bfab20e8d683609d546d91345d8cd82.pdf |
Gene Regulatory Network Inference in the Presence of Dropouts: a Causal View | https://openreview.net/forum?id=gFR4QwK53h | https://openreview.net/forum?id=gFR4QwK53h | Haoyue Dai,Ignavier Ng,Gongxu Luo,Peter Spirtes,Petar Stojanov,Kun Zhang | ICLR 2024,Oral | Gene regulatory network inference (GRNI) is a challenging problem, particularly owing to the presence of zeros in single-cell RNA sequencing data: some are biological zeros representing no gene expression, while some others are technical zeros arising from the sequencing procedure (aka dropouts), which may bias GRNI by... | https://openreview.net/pdf/69813094585730931fce711a92e4bb53d955e2dd.pdf |
Statistically Optimal $K$-means Clustering via Nonnegative Low-rank Semidefinite Programming | https://openreview.net/forum?id=v7ZPwoHU1j | https://openreview.net/forum?id=v7ZPwoHU1j | Yubo Zhuang,Xiaohui Chen,Yun Yang,Richard Y. Zhang | ICLR 2024,Oral | $K$-means clustering is a widely used machine learning method for identifying patterns in large datasets. Recently, semidefinite programming (SDP) relaxations have been proposed for solving the $K$-means optimization problem, which enjoy strong statistical optimality guarantees. However, the prohibitive cost of impleme... | https://openreview.net/pdf/4a224d33173cf3086a62083b5dda9cf8d1f4261a.pdf |
Unprocessing Seven Years of Algorithmic Fairness | https://openreview.net/forum?id=jr03SfWsBS | https://openreview.net/forum?id=jr03SfWsBS | André Cruz,Moritz Hardt | ICLR 2024,Oral | Seven years ago, researchers proposed a postprocessing method to equalize the error rates of a model across different demographic groups. The work launched hundreds of papers purporting to improve over the postprocessing baseline. We empirically evaluate these claims through thousands of model evaluations on several ta... | https://openreview.net/pdf/cd3bf8642c6c69bd7fce176fc9e60e2ddc23c58e.pdf |
InfoBatch: Lossless Training Speed Up by Unbiased Dynamic Data Pruning | https://openreview.net/forum?id=C61sk5LsK6 | https://openreview.net/forum?id=C61sk5LsK6 | Ziheng Qin,Kai Wang,Zangwei Zheng,Jianyang Gu,Xiangyu Peng,xu Zhao Pan,Daquan Zhou,Lei Shang,Baigui Sun,Xuansong Xie,Yang You | ICLR 2024,Oral | Data pruning aims to obtain lossless performances with less overall cost. A common approach is to filter out samples that make less contribution to the training. This could lead to gradient expectation bias compared to the original data. To solve this problem, we propose InfoBatch, a novel framework aiming to achieve l... | https://openreview.net/pdf/9d5adb82a04bd07a7baace8a7f619a6d39a4d2a2.pdf |
Multi-granularity Correspondence Learning from Long-term Noisy Videos | https://openreview.net/forum?id=9Cu8MRmhq2 | https://openreview.net/forum?id=9Cu8MRmhq2 | Yijie Lin,Jie Zhang,Zhenyu Huang,Jia Liu,zujie wen,Xi Peng | ICLR 2024,Oral | Existing video-language studies mainly focus on learning short video clips, leaving long-term temporal dependencies rarely explored due to over-high computational cost of modeling long videos. To address this issue, one feasible solution is learning the correspondence between video clips and captions, which however ine... | https://openreview.net/pdf/578b0930059c165430921bc67cd65b6a0657e518.pdf |
SaNN: Simple Yet Powerful Simplicial-aware Neural Networks | https://openreview.net/forum?id=eUgS9Ig8JG | https://openreview.net/forum?id=eUgS9Ig8JG | Sravanthi Gurugubelli,Sundeep Prabhakar Chepuri | ICLR 2024,Spotlight | Simplicial neural networks (SNNs) are deep models for higher-order graph representation learning. SNNs learn low-dimensional embeddings of simplices in a simplicial complex by aggregating features of their respective upper, lower, boundary, and coboundary adjacent simplices. The aggregation in SNNs is carried out durin... | https://openreview.net/pdf/b5b2e785dec69b9ea0c8b01d6e2eca5896246cce.pdf |
Beyond Memorization: Violating Privacy via Inference with Large Language Models | https://openreview.net/forum?id=kmn0BhQk7p | https://openreview.net/forum?id=kmn0BhQk7p | Robin Staab,Mark Vero,Mislav Balunovic,Martin Vechev | ICLR 2024,Spotlight | Current privacy research on large language models (LLMs) primarily focuses on the issue of extracting memorized training data. At the same time, models’ inference capabilities have increased drastically. This raises the key question of whether current LLMs could violate individuals’ privacy by inferring personal attrib... | https://openreview.net/pdf/0174f3bf08c7c34d3feb07ca6e5b488bb3efc21c.pdf |
Controlled Text Generation via Language Model Arithmetic | https://openreview.net/forum?id=SLw9fp4yI6 | https://openreview.net/forum?id=SLw9fp4yI6 | Jasper Dekoninck,Marc Fischer,Luca Beurer-Kellner,Martin Vechev | ICLR 2024,Spotlight | As Large Language Models (LLMs) are deployed more widely, customization with respect to vocabulary, style, and character becomes more important. In this work, we introduce model arithmetic, a novel inference framework for composing and biasing LLMs without the need for model (re)training or highly specific datasets. In... | https://openreview.net/pdf/7b09c9f1a15373444f1e3be2bef23404a9029f8b.pdf |
Consistency Training with Learnable Data Augmentation for Graph Anomaly Detection with Limited Supervision | https://openreview.net/forum?id=elMKXvhhQ9 | https://openreview.net/forum?id=elMKXvhhQ9 | Nan Chen,Zemin Liu,Bryan Hooi,Bingsheng He,Rizal Fathony,Jun Hu,Jia Chen | ICLR 2024,Spotlight | Graph Anomaly Detection (GAD) has surfaced as a significant field of research, predominantly due to its substantial influence in production environments. Although existing approaches for node anomaly detection have shown effectiveness, they have yet to fully address two major challenges: operating in settings with limi... | https://openreview.net/pdf/0a2aa8c65cf0939e67a21de44b554069cb961a25.pdf |
Symmetric Mean-field Langevin Dynamics for Distributional Minimax Problems | https://openreview.net/forum?id=YItWKZci78 | https://openreview.net/forum?id=YItWKZci78 | Juno Kim,Kakei Yamamoto,Kazusato Oko,Zhuoran Yang,Taiji Suzuki | ICLR 2024,Spotlight | In this paper, we extend mean-field Langevin dynamics to minimax optimization over probability distributions for the first time with symmetric and provably convergent updates. We propose \emph{mean-field Langevin averaged gradient} (MFL-AG), a single-loop algorithm that implements gradient descent ascent in the distrib... | https://openreview.net/pdf/ec8abe6d7a5a4ae8d6dbb94cdc4734ed2e28024f.pdf |
Generalized Policy Iteration using Tensor Approximation for Hybrid Control | https://openreview.net/forum?id=csukJcpYDe | https://openreview.net/forum?id=csukJcpYDe | Suhan Shetty,Teng Xue,Sylvain Calinon | ICLR 2024,Spotlight | Control of dynamic systems involving hybrid actions is a challenging task in robotics. To address this, we present a novel algorithm called Generalized Policy Iteration using Tensor Train (TTPI) that belongs to the class of Approximate Dynamic Programming (ADP). We use a low-rank tensor approximation technique called ... | https://openreview.net/pdf/a40c77cc6b21af1cc748441a1c8107b6a7896cc5.pdf |
Generalization error of spectral algorithms | https://openreview.net/forum?id=3SJE1WLB4M | https://openreview.net/forum?id=3SJE1WLB4M | Maksim Velikanov,Maxim Panov,Dmitry Yarotsky | ICLR 2024,Spotlight | The asymptotically precise estimation of the generalization of kernel methods has recently received attention due to the parallels between neural networks and their associated kernels. However, prior works derive such estimates for training by kernel ridge regression (KRR), whereas neural networks are typically trained... | https://openreview.net/pdf/2ec1e556e2638c8f90cf327da20ee024055d1426.pdf |
Debiased Collaborative Filtering with Kernel-Based Causal Balancing | https://openreview.net/forum?id=Ffjc8ApSbt | https://openreview.net/forum?id=Ffjc8ApSbt | Haoxuan Li,Chunyuan Zheng,Yanghao Xiao,Peng Wu,Zhi Geng,Xu Chen,Peng Cui | ICLR 2024,Spotlight | Collaborative filtering builds personalized models from the collected user feedback. However, the collected data is observational rather than experimental, leading to various biases in the data, which can significantly affect the learned model. To address this issue, many studies have focused on propensity-based method... | https://openreview.net/pdf/c070b5e86750458e4f0f6540b5c8528297d494f8.pdf |
The Effective Horizon Explains Deep RL Performance in Stochastic Environments | https://openreview.net/forum?id=5ES5Hdlbxw | https://openreview.net/forum?id=5ES5Hdlbxw | Cassidy Laidlaw,Banghua Zhu,Stuart Russell,Anca Dragan | ICLR 2024,Spotlight | Reinforcement learning (RL) theory has largely focused on proving minimax sample complexity bounds. These require strategic exploration algorithms that use relatively limited function classes for representing the policy or value function. Our goal is to explain why deep RL algorithms often perform well in practice, des... | https://openreview.net/pdf/f82605ccc74bb3de9a73c5e505aaf9276c229c08.pdf |
Selective Visual Representations Improve Convergence and Generalization for Embodied AI | https://openreview.net/forum?id=kC5nZDU5zf | https://openreview.net/forum?id=kC5nZDU5zf | Ainaz Eftekhar,Kuo-Hao Zeng,Jiafei Duan,Ali Farhadi,Aniruddha Kembhavi,Ranjay Krishna | ICLR 2024,Spotlight | Embodied AI models often employ off the shelf vision backbones like CLIP to encode their visual observations. Although such general purpose representations encode rich syntactic and semantic information about the scene, much of this information is often irrelevant to the specific task at hand. This introduces noise wit... | https://openreview.net/pdf/2fbfaec8070dacca6ff1916307768a1f7ce97be6.pdf |
Improving Generalization of Alignment with Human Preferences through Group Invariant Learning | https://openreview.net/forum?id=fwCoLe3TAX | https://openreview.net/forum?id=fwCoLe3TAX | Rui Zheng,Wei Shen,Yuan Hua,Wenbin Lai,Shihan Dou,Yuhao Zhou,Zhiheng Xi,Xiao Wang,Haoran Huang,Tao Gui,Qi Zhang,Xuanjing Huang | ICLR 2024,Spotlight | The success of AI assistants based on language models (LLMs) hinges crucially on Reinforcement Learning from Human Feedback (RLHF), which enables the generation of responses more aligned with human preferences. As universal AI assistants, there's a growing expectation for them to perform consistently across various do... | https://openreview.net/pdf/db0a7ea3470e4d33a4ea3826659ef675921bc697.pdf |
PINNACLE: PINN Adaptive ColLocation and Experimental points selection | https://openreview.net/forum?id=GzNaCp6Vcg | https://openreview.net/forum?id=GzNaCp6Vcg | Gregory Kang Ruey Lau,Apivich Hemachandra,See-Kiong Ng,Bryan Kian Hsiang Low | ICLR 2024,Spotlight | Physics-Informed Neural Networks (PINNs), which incorporate PDEs as soft constraints, train with a composite loss function that contains multiple training point types: different types of collocation points chosen during training to enforce each PDE and initial/boundary conditions, and experimental points which are usua... | https://openreview.net/pdf/1096a49fe85df5e82fac41af2b78c13d13d4455b.pdf |
Learning to Relax: Setting Solver Parameters Across a Sequence of Linear System Instances | https://openreview.net/forum?id=5t57omGVMw | https://openreview.net/forum?id=5t57omGVMw | Mikhail Khodak,Edmond Chow,Maria Florina Balcan,Ameet Talwalkar | ICLR 2024,Spotlight | Solving a linear system ${\bf Ax}={\bf b}$ is a fundamental scientific computing primitive for which numerous solvers and preconditioners have been developed.
These come with parameters whose optimal values depend on the system being solved and are often impossible or too expensive to identify;
thus in practice sub-... | https://openreview.net/pdf/352f7e2fbdc33555b4c784ea0abafc770f9f6836.pdf |
Rotation Has Two Sides: Evaluating Data Augmentation for Deep One-class Classification | https://openreview.net/forum?id=Ad81awoBVS | https://openreview.net/forum?id=Ad81awoBVS | Guodong Wang,Yunhong Wang,Xiuguo Bao,Di Huang | ICLR 2024,Spotlight | One-class classification (OCC) involves predicting whether a new data is normal or anomalous based solely on the data from a single class during training. Various attempts have been made to learn suitable representations for OCC within a self-supervised framework. Notably, discriminative methods that use geometric visu... | https://openreview.net/pdf/7bf21feb9bcffeac0dec9c48b1b50d05663b5a16.pdf |
ICLR 2024 (International Conference on Learning Representations) Accepted Paper Meta Info Dataset
This dataset was collected from the ICLR 2024 OpenReview website (https://openreview.net/group?id=ICLR.cc/2024/Conference#tab-accept-oral) as well as the arxiv website DeepNLP paper arxiv (http://www.deepnlp.org/content/paper/iclr2024). Researchers interested in analyzing the ICLR 2024 accepted papers and potential trends can use the already cleaned-up JSON files. Each row contains the meta information of one paper from the ICLR 2024 conference. To explore more AI & Robotics papers (NIPS/ICML/ICLR/IROS/ICRA/etc.) and AI equations, feel free to use the Equation Search Engine (http://www.deepnlp.org/search/equation) as well as the AI Agent Search Engine, which indexes deployed AI apps and agents (http://www.deepnlp.org/search/agent) in your domain.
Meta Information of the JSON File
{
"title": "Proving Test Set Contamination in Black-Box Language Models",
"url": "https://openreview.net/forum?id=KS8mIvetg2",
"detail_url": "https://openreview.net/forum?id=KS8mIvetg2",
"authors": "Yonatan Oren,Nicole Meister,Niladri S. Chatterji,Faisal Ladhak,Tatsunori Hashimoto",
"tags": "ICLR 2024,Oral",
"abstract": "Large language models are trained on vast amounts of internet data, prompting concerns that they have memorized public benchmarks. Detecting this type of contamination is challenging because the pretraining data used by proprietary models are often not publicly accessible.\n\nWe propose a procedure for detecting test set contamination of language models with exact false positive guarantees and without access to pretraining data or model weights. Our approach leverages the fact that when there is no data contamination, all orderings of an exchangeable benchmark should be equally likely. In contrast, the tendency for language models to memorize example order means that a contaminated language model will find certain canonical orderings to be much more likely than others. Our test flags potential contamination whenever the likelihood of a canonically ordered benchmark dataset is significantly higher than the likelihood after shuffling the examples.\n\nWe demonstrate that our procedure is sensitive enough to reliably detect contamination in challenging situations, including models as small as 1.4 billion parameters, on small test sets only 1000 examples, and datasets that appear only a few times in the pretraining corpus. Finally, we evaluate LLaMA-2 to apply our test in a realistic setting and find our results to be consistent with existing contamination evaluations.",
"pdf": "https://openreview.net/pdf/cfd79aaab7bdcd4f7c032c57fe7e607058042c80.pdf"
}
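The record above can be parsed with Python's standard `json` module. A minimal sketch: the field names match the sample record, but the abstract here is truncated for brevity, and splitting the comma-separated `authors` and `tags` fields into lists is an illustrative convention, not part of the dataset itself.

```python
import json

# One record in the dataset's JSON format (abstract truncated for brevity).
record_json = '''
{
  "title": "Proving Test Set Contamination in Black-Box Language Models",
  "url": "https://openreview.net/forum?id=KS8mIvetg2",
  "detail_url": "https://openreview.net/forum?id=KS8mIvetg2",
  "authors": "Yonatan Oren,Nicole Meister,Niladri S. Chatterji,Faisal Ladhak,Tatsunori Hashimoto",
  "tags": "ICLR 2024,Oral",
  "abstract": "Large language models are trained on vast amounts of internet data...",
  "pdf": "https://openreview.net/pdf/cfd79aaab7bdcd4f7c032c57fe7e607058042c80.pdf"
}
'''

paper = json.loads(record_json)

# The authors and tags fields are comma-separated strings;
# split them into lists for downstream analysis.
authors = paper["authors"].split(",")
tags = paper["tags"].split(",")

print(paper["title"])
print(len(authors), "authors; presentation:", tags[1])
```

The same pattern extends to the full file: read each line (or each array element, depending on the file layout) into `json.loads` and aggregate over the resulting dicts.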
Related
AI Equation
List of AI Equations and Latex
List of Math Equations and Latex
List of Physics Equations and Latex
List of Statistics Equations and Latex
List of Machine Learning Equations and Latex
AI Agent Marketplace and Search
Robot Search
Equation and Academic search
AI & Robot Comprehensive Search
AI & Robot Question
AI & Robot Community
AI Agent Marketplace Blog
AI Agent Reviews
AI Agent Marketplace Directory
Microsoft AI Agents Reviews
Claude AI Agents Reviews
OpenAI AI Agents Reviews
Salesforce AI Agents Reviews
AI Agent Builder Reviews