Dataset schema (each record below lists these eleven fields, in this order):

Column              Type      Values / lengths
paper_id            uint32    0 – 3.7k
title               string    14 – 154 chars
paper_url           string    42 chars (fixed)
authors             list      1 – 21 entries
type                string    3 distinct values
abstract            string    413 – 2.52k chars
keywords            string    4 – 397 chars
TL;DR               string    5 – 250 chars
submission_number   int64     2 – 14.3k
arxiv_id            string    10 chars (fixed)
embedding           list      768 floats (fixed)
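A minimal sketch of loading and inspecting a dataset with this schema via the Hugging Face `datasets` library; the repository id below is a placeholder, not the dataset's actual name.

```python
from datasets import load_dataset

# Hypothetical repository id -- substitute the real one.
ds = load_dataset("your-org/iclr-2025-papers", split="train")

row = ds[0]
print(row["title"])           # paper title
print(row["paper_url"])       # OpenReview forum link
print(len(row["embedding"]))  # 768-dimensional embedding per paper
```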
100
SD-LoRA: Scalable Decoupled Low-Rank Adaptation for Class Incremental Learning
https://openreview.net/forum?id=5U1rlpX68A
[ "Yichen Wu", "Hongming Piao", "Long-Kai Huang", "Renzhen Wang", "Wanhua Li", "Hanspeter Pfister", "Deyu Meng", "Kede Ma", "Ying Wei" ]
Oral
Continual Learning (CL) with foundation models has recently emerged as a promising paradigm to exploit abundant knowledge acquired during pre-training for tackling sequential tasks. However, existing prompt-based and Low-Rank Adaptation-based (LoRA-based) methods often require expanding a prompt/LoRA pool or retaining ...
Continual learning; Low-rank adaptation
null
6,765
null
[ -0.0013854883145540953, -0.04679514467716217, -0.01498033944517374, 0.04210420697927475, 0.033007021993398666, 0.045592211186885834, 0.025468822568655014, -0.004146086052060127, -0.029545675963163376, -0.025940686464309692, 0.011641639284789562, 0.013242674991488457, -0.055900778621435165, ...
101
Improving Probabilistic Diffusion Models With Optimal Diagonal Covariance Matching
https://openreview.net/forum?id=fV0t65OBUu
[ "Zijing Ou", "Mingtian Zhang", "Andi Zhang", "Tim Z. Xiao", "Yingzhen Li", "David Barber" ]
Oral
The probabilistic diffusion model has become highly effective across various domains. Typically, sampling from a diffusion model involves using a denoising distribution characterized by a Gaussian with a learned mean and either fixed or learned covariances. In this paper, we leverage the recently proposed covariance mo...
Diffusion Model, Generative Model, Probabilistic Modelling
We introduce Optimal Covariance Matching (OCM), a novel method that improves sampling efficiency and accuracy in diffusion models by directly regressing optimal analytic covariances.
6,659
2406.10808
[ -0.03809719160199165, 0.018975939601659775, -0.0004686130560003221, 0.02154378406703472, 0.05282618850469589, 0.033629585057497025, 0.037331391125917435, -0.014651256613433361, 0.0026983267161995173, -0.06945189088582993, 0.01052546314895153, -0.03246501460671425, -0.06094609946012497, -0....
102
PathGen-1.6M: 1.6 Million Pathology Image-text Pairs Generation through Multi-agent Collaboration
https://openreview.net/forum?id=rFpZnn11gj
[ "Yuxuan Sun", "Yunlong Zhang", "Yixuan Si", "Chenglu Zhu", "Kai Zhang", "Zhongyi Shui", "Jingxiong Li", "Xuan Gong", "XINHENG LYU", "Tao Lin", "Lin Yang" ]
Oral
Vision Language Models (VLMs) like CLIP have attracted substantial attention in pathology, serving as backbones for applications such as zero-shot image classification and Whole Slide Image (WSI) analysis. Additionally, they can function as vision encoders when combined with large language models (LLMs) to support broa...
Image-text pairs generation, Vision-language models, Multi-agent collaboration
We present PathGen-1.6M, an open-source large-scale pathology dataset with 1.6M high-quality image-caption pairs, enabling the creation of powerful multimodal models for pathology analysis.
6,633
null
[ 0.024650253355503082, -0.02949546091258526, -0.004805399104952812, 0.05490934103727341, 0.0517176128923893, 0.0009044999605976045, 0.04131350666284561, 0.004746213089674711, -0.006117497105151415, -0.03838878124952316, -0.02887914702296257, 0.030419688671827316, -0.053291335701942444, 0.01...
103
Training on the Test Task Confounds Evaluation and Emergence
https://openreview.net/forum?id=jOmk0uS1hl
[ "Ricardo Dominguez-Olmedo", "Florian E. Dorner", "Moritz Hardt" ]
Oral
We study a fundamental problem in the evaluation of large language models that we call training on the test task. Unlike wrongful practices like training on the test data, leakage, or data contamination, training on the test task is not a malpractice. Rather, the term describes a growing set of techniques to include t...
language models, benchmarking, emergence
null
6,619
2407.07890
[ -0.029626065865159035, -0.018090466037392616, -0.03824565187096596, 0.05256965011358261, 0.04856790602207184, -0.005966437980532646, 0.050125300884246826, 0.029668938368558884, -0.023689959198236465, -0.00640155328437686, -0.02793651819229126, 0.0444272980093956, -0.0744738057255745, -0.01...
104
Subgraph Federated Learning for Local Generalization
https://openreview.net/forum?id=cH65nS5sOz
[ "Sungwon Kim", "Yoonho Lee", "Yunhak Oh", "Namkyeong Lee", "Sukwon Yun", "Junseok Lee", "Sein Kim", "Carl Yang", "Chanyoung Park" ]
Oral
Federated Learning (FL) on graphs enables collaborative model training to enhance performance without compromising the privacy of each client. However, existing methods often overlook the mutable nature of graph data, which frequently introduces new nodes and leads to shifts in label distribution. Since they focus sole...
Graph Neural Networks, Graph Federated Learning
null
6,521
2503.03995
[ 0.022723982110619545, -0.061220332980155945, 0.009105358272790909, 0.07026271522045135, 0.04235854744911194, 0.015992971137166023, 0.01960657723248005, -0.010040760971605778, -0.00211891857907176, -0.020929960533976555, 0.012488004751503468, -0.009139725007116795, -0.07465185225009918, 0.0...
105
A Probabilistic Perspective on Unlearning and Alignment for Large Language Models
https://openreview.net/forum?id=51WraMid8K
[ "Yan Scholten", "Stephan Günnemann", "Leo Schwinn" ]
Oral
Comprehensive evaluation of Large Language Models (LLMs) is an open research problem. Existing evaluations rely on deterministic point estimates generated via greedy decoding. However, we find that deterministic evaluations fail to capture the whole output distribution of a model, yielding inaccurate estimations of mod...
Machine Unlearning, Alignment, Large Language Models
We demonstrate that existing deterministic evaluations in large language models are insufficient and propose a novel probabilistic evaluation framework that considers the whole output distribution of a model.
6,509
2410.03523
[ -0.011392238549888134, -0.019520558416843414, -0.010162509977817535, 0.02066637948155403, 0.05275563523173332, 0.011806951835751534, 0.031061463057994843, 0.029875464737415314, -0.015843259170651436, 0.0022992994636297226, -0.01957469806075096, 0.03661501407623291, -0.06353331357240677, -0...
106
MLE-bench: Evaluating Machine Learning Agents on Machine Learning Engineering
https://openreview.net/forum?id=6s5uXNWGIh
[ "Jun Shern Chan", "Neil Chowdhury", "Oliver Jaffe", "James Aung", "Dane Sherburn", "Evan Mays", "Giulio Starace", "Kevin Liu", "Leon Maksin", "Tejal Patwardhan", "Aleksander Madry", "Lilian Weng" ]
Oral
We introduce MLE-bench, a benchmark for measuring how well AI agents perform at machine learning engineering. To this end, we curate 75 ML engineering-related competitions from Kaggle, creating a diverse set of challenging tasks that test real-world ML engineering skills such as training models, preparing datasets, and...
benchmark, evals, evaluations, dataset, tasks, data science, engineering, agents, language agents, scaffold, coding, swe, mle
We introduce MLE-bench, a benchmark for measuring how well AI agents perform on machine learning engineering problems.
6,441
null
[ -0.01181096863001585, -0.008262417279183865, -0.03350260481238365, 0.023668207228183746, 0.02533971145749092, 0.012707795947790146, 0.03665757551789284, -0.0008776792092248797, -0.004596095997840166, -0.00915256142616272, 0.009907378815114498, 0.03028620034456253, -0.06293682754039764, -0....
107
Learning Randomized Algorithms with Transformers
https://openreview.net/forum?id=UV5p3JZMjC
[ "Johannes Von Oswald", "Seijin Kobayashi", "Yassir Akram", "Angelika Steger" ]
Oral
Randomization is a powerful tool that endows algorithms with remarkable properties. For instance, randomized algorithms excel in adversarial settings, often surpassing the worst-case performance of deterministic algorithms by large margins. Furthermore, their success probability can be amplified by simple strategies ...
Randomized algorithms, Learning under adversarial losses, Adversarial robustness, In-context learning algorithms
null
6,351
2408.10818
[ 0.0037203978281468153, -0.05155705660581589, -0.030171485617756844, 0.05826525017619133, 0.003249115077778697, 0.04045279324054718, 0.020643435418605804, 0.0018200896447524428, -0.030650289729237556, -0.033208560198545456, -0.01732589304447174, 0.0011297850869596004, -0.04875733330845833, ...
108
Data Scaling Laws in Imitation Learning for Robotic Manipulation
https://openreview.net/forum?id=pISLZG7ktL
[ "Fanqi Lin", "Yingdong Hu", "Pingyue Sheng", "Chuan Wen", "Jiacheng You", "Yang Gao" ]
Oral
Data scaling has revolutionized fields like natural language processing and computer vision, providing models with remarkable generalization capabilities. In this paper, we investigate whether similar data scaling laws exist in robotics, particularly in robotic manipulation, and whether appropriate data scaling can yie...
Data Scaling Laws, Imitation Learning, Robotic Manipulation
null
6,331
2410.18647
[ -0.030125590041279793, -0.01585790142416954, -0.000990529078990221, 0.02572188712656498, 0.04097786545753479, 0.04595104232430458, 0.018758781254291534, -0.004182495642453432, -0.049281712621450424, -0.01446016225963831, -0.020382415503263474, -0.0016419913154095411, -0.08140502125024796, ...
109
Amortized Control of Continuous State Space Feynman-Kac Model for Irregular Time Series
https://openreview.net/forum?id=8zJRon6k5v
[ "Byoungwoo Park", "Hyungi Lee", "Juho Lee" ]
Oral
Many real-world datasets, such as healthcare, climate, and economics, are often collected as irregular time series, which poses challenges for accurate modeling. In this paper, we propose the Amortized Control of continuous State Space Model (ACSSM) for continuous dynamical modeling of time series for irregular and dis...
stochastic optimal control, variational inference, state space model, irregular time series
We propose a multi-marginal Doob's $h$-transform for irregular time series and variational inference with stochastic optimal control to approximate it.
6,305
2410.05602
[ -0.04831141605973244, -0.006971411406993866, -0.03136640414595604, 0.061257120221853256, 0.053455907851457596, 0.024052821099758148, 0.021489594131708145, 0.021978743374347687, -0.02283092588186264, -0.029637612402439117, 0.016109483316540718, -0.01589379273355007, -0.0823807641863823, 0.0...
110
Cheating Automatic LLM Benchmarks: Null Models Achieve High Win Rates
https://openreview.net/forum?id=syThiTmWWm
[ "Xiaosen Zheng", "Tianyu Pang", "Chao Du", "Qian Liu", "Jing Jiang", "Min Lin" ]
Oral
Automatic LLM benchmarks, such as AlpacaEval 2.0, Arena-Hard-Auto, and MT-Bench, have become popular for evaluating language models due to their cost-effectiveness and scalability compared to human evaluation. Achieving high win rates on these benchmarks can significantly boost the promotional impact of newly released ...
Large Language Models, Cheating, Automatic LLM Benchmarks
We show that null models that always return the same cheating responses can achieve high win rates on automatic LLM benchmarks.
6,258
2410.07137
[ -0.018590370193123817, -0.011671814136207104, -0.0017842123052105308, 0.024016989395022392, 0.03188557177782059, -0.007610441185534, 0.03772801533341408, 0.019418779760599136, -0.03178487718105316, -0.013002238236367702, -0.011375688016414642, 0.03816462680697441, -0.060074880719184875, -0...
111
On the Hölder Stability of Multiset and Graph Neural Networks
https://openreview.net/forum?id=P7KIGdgW8S
[ "Yair Davidson", "Nadav Dym" ]
Oral
Extensive research efforts have been put into characterizing and constructing maximally separating multiset and graph neural networks. However, recent empirical evidence suggests the notion of separation itself doesn't capture several interesting phenomena. On the one hand, the quality of this separation may be very w...
graph neural networks, message passing neural networks, multiset neural networks, neural network stability, expressive power, WL tests
null
5,998
null
[ -0.03040725365281105, 0.0036943270824849606, -0.009319488890469074, 0.07127010077238083, 0.030837174504995346, 0.028317919000983238, 0.005159831140190363, -0.027119850739836693, -0.0463065430521965, -0.04559774696826935, 0.0010534742614254355, 0.004398304037749767, -0.08965947479009628, -0...
112
On Conformal Isometry of Grid Cells: Learning Distance-Preserving Position Embedding
https://openreview.net/forum?id=Xo0Q1N7CGk
[ "Dehong Xu", "Ruiqi Gao", "Wenhao Zhang", "Xue-Xin Wei", "Ying Nian Wu" ]
Oral
This paper investigates the conformal isometry hypothesis as a potential explanation for the hexagonal periodic patterns in grid cell response maps. We posit that grid cell activities form a high-dimensional vector in neural space, encoding the agent's position in 2D physical space. As the agent moves, this vector rota...
grid cells, conformal isometry, distance-preserving, position embedding, representation learning
We investigate the conformal isometry hypothesis that leads to the emergence of hexagon periodic patterns in grid cells, showing that learning a maximally distance-preserving position embedding naturally leads to these patterns.
5,957
2405.16865
[ -0.03525437042117119, -0.003924331162124872, 0.004319417756050825, 0.021355796605348587, 0.03488573431968689, 0.004898551385849714, 0.0032997392117977142, 0.003269677050411701, -0.038809772580862045, -0.05477841570973396, 0.01759980246424675, -0.03777112811803818, -0.06549620628356934, 0.0...
113
Combatting Dimensional Collapse in LLM Pre-Training Data via Submodular File Selection
https://openreview.net/forum?id=f4gF6AIHRy
[ "Ziqing Fan", "Siyuan Du", "Shengchao Hu", "Pingjie Wang", "Li Shen", "Ya Zhang", "Dacheng Tao", "Yanfeng Wang" ]
Oral
Selecting high-quality pre-training data for large language models (LLMs) is crucial for enhancing their overall performance under a limited computation budget, improving both training and sample efficiency. Recent advancements in file selection primarily rely on using an existing or trained proxy model to assess the sim...
file selection, large language model, pre-training, submodular optimization
null
5,918
null
[ -0.029090536758303642, -0.03401511162519455, 0.001999075524508953, 0.05738096684217453, 0.051426246762275696, 0.028109773993492126, 0.03494232892990112, -0.029668817296624184, -0.020521322265267372, -0.03080802597105503, -0.025386208668351173, 0.029899178072810173, -0.08892612904310226, 0....
114
Population Transformer: Learning Population-level Representations of Neural Activity
https://openreview.net/forum?id=FVuqJt3c4L
[ "Geeling Chau", "Christopher Wang", "Sabera J Talukder", "Vighnesh Subramaniam", "Saraswati Soedarmadji", "Yisong Yue", "Boris Katz", "Andrei Barbu" ]
Oral
We present a self-supervised framework that learns population-level codes for arbitrary ensembles of neural recordings at scale. We address key challenges in scaling models with neural time-series data, namely, sparse and variable electrode distribution across subjects and datasets. The Population Transformer (PopT) st...
representation learning, neuroscience, self supervised learning
Representation learning of neural data
5,882
2406.03044
[ 0.016526512801647186, -0.02419550158083439, -0.003992310259491205, 0.047539155930280685, 0.023647192865610123, 0.044139038771390915, 0.015902575105428696, 0.005978450179100037, -0.024559486657381058, -0.011717510409653187, 0.010489491745829582, 0.0105793047696352, -0.05799053609371185, -0....
115
KAN: Kolmogorov–Arnold Networks
https://openreview.net/forum?id=Ozo7qJ5vZi
[ "Ziming Liu", "Yixuan Wang", "Sachin Vaidya", "Fabian Ruehle", "James Halverson", "Marin Soljacic", "Thomas Y. Hou", "Max Tegmark" ]
Oral
Inspired by the Kolmogorov-Arnold representation theorem, we propose Kolmogorov-Arnold Networks (KANs) as promising alternatives to Multi-Layer Perceptrons (MLPs). While MLPs have fixed activation functions on nodes ("neurons"), KANs have learnable activation functions on edges ("weights"). KANs have no linear weight...
Kolmogorov-Arnold networks, Kolmogorov-Arnold representation theorem, learnable activation functions, interpretability, AI + Science
Inspired by the Kolmogorov-Arnold representation theorem, we propose Kolmogorov-Arnold Networks (KANs) as promising alternatives to Multi-Layer Perceptrons (MLPs).
5,796
null
[ -0.025048891082406044, -0.015117136761546135, 0.00184734130743891, 0.037545036524534225, 0.0014936017105355859, 0.037647102028131485, 0.013064945116639137, -0.015986889600753784, -0.051521290093660355, -0.026012100279331207, 0.008220955729484558, 0.0005012372275814414, -0.07433195412158966, ...
116
Problem-Parameter-Free Federated Learning
https://openreview.net/forum?id=ZuazHmXTns
[ "Wenjing Yan", "Kai Zhang", "Xiaolu Wang", "Xuanyu Cao" ]
Oral
Federated learning (FL) has garnered significant attention from academia and industry in recent years due to its advantages in data privacy, scalability, and communication efficiency. However, current FL algorithms face a critical limitation: their performance heavily depends on meticulously tuned hyperparameters, part...
Adaptive federated learning, problem-parameter free, arbitrary data heterogeneity, adaptive stepsize
null
5,729
null
[ -0.0062622493132948875, -0.05214373767375946, 0.013915084302425385, 0.03949429839849472, 0.02851765789091587, 0.04105766490101814, 0.006554591469466686, -0.017356066033244133, -0.024530241265892982, -0.05979030579328537, -0.02055342122912407, -0.03135068714618683, -0.060107115656137466, 0....
117
SymmetricDiffusers: Learning Discrete Diffusion on Finite Symmetric Groups
https://openreview.net/forum?id=EO8xpnW7aX
[ "Yongxing Zhang", "Donglin Yang", "Renjie Liao" ]
Oral
The group of permutations $S_n$, also known as the finite symmetric group, is essential in fields such as combinatorics, physics, and chemistry. However, learning a probability distribution over $S_n$ poses significant challenges due to its intractable size and discrete nature. In this paper, we introduce *SymmetricD...
Finite Symmetric Groups, Discrete Diffusion, Permutations, Riffle Shuffles, Plackett-Luce Distribution, Sorting, Jigsaw Puzzle
null
5,686
2410.02942
[ -0.008352873846888542, -0.03533709794282913, -0.01811743527650833, 0.04968518391251564, 0.03676391765475273, 0.0009848648915067315, 0.0012044996256008744, 0.001121944049373269, -0.01786523126065731, -0.06829434633255005, 0.0298378337174654, -0.041512396186590195, -0.05386299639940262, 0.02...
118
Language Representations Can be What Recommenders Need: Findings and Potentials
https://openreview.net/forum?id=eIJfOIMN9z
[ "Leheng Sheng", "An Zhang", "Yi Zhang", "Yuxin Chen", "Xiang Wang", "Tat-Seng Chua" ]
Oral
Recent studies empirically indicate that language models (LMs) encode rich world knowledge beyond mere semantics, attracting significant attention across various fields. However, in the recommendation domain, it remains uncertain whether LMs implicitly encode user preference information. Contrary to prevailing understa...
Collaborative filtering, Language-representation-based recommendation, Language models, Language model representations
null
5,613
2407.05441
[ 0.004154934547841549, -0.002103702398017049, -0.005268834065645933, 0.029475010931491852, 0.06729292124509811, 0.010635904967784882, 0.029385846108198166, 0.018254363909363747, -0.00564368162304163, -0.017120173200964928, -0.03190166875720024, 0.02169247344136238, -0.04609372466802597, 0.0...
119
HiRA: Parameter-Efficient Hadamard High-Rank Adaptation for Large Language Models
https://openreview.net/forum?id=TwJrTz9cRS
[ "Qiushi Huang", "Tom Ko", "Zhan Zhuang", "Lilian Tang", "Yu Zhang" ]
Oral
We propose Hadamard High-Rank Adaptation (HiRA), a parameter-efficient fine-tuning (PEFT) method that enhances the adaptability of Large Language Models (LLMs). While Low-rank Adaptation (LoRA) is widely used to reduce resource demands, its low-rank updates may limit its expressiveness for new tasks. HiRA addresses thi...
Parametric-efficient fine-tuning, Large Language Model
null
5,572
null
[ -0.03203292936086655, -0.0023158567491918802, -0.011139689944684505, 0.015856826677918434, 0.011738322675228119, 0.02940424159169197, 0.03302667662501335, 0.0007399743772111833, -0.024487139657139778, -0.011892841197550297, -0.011883178725838661, 0.009369432926177979, -0.0620352178812027, ...
120
A Theoretically-Principled Sparse, Connected, and Rigid Graph Representation of Molecules
https://openreview.net/forum?id=OIvg3MqWX2
[ "Shih-Hsin Wang", "Yuhao Huang", "Justin M. Baker", "Yuan-En Sun", "Qi Tang", "Bao Wang" ]
Oral
Graph neural networks (GNNs) -- which learn graph representations by exploiting a graph's sparsity, connectivity, and symmetries -- have become indispensable for learning geometric data like molecules. However, the graphs most commonly used in molecular modeling (e.g., radial cutoff graphs) lack theoretical guarantees for achieving con...
Graph representation, sparsity, connectivity, rigidity, molecules, learning
We introduce a new sparse, connected, and rigid graph representation for molecules.
5,512
null
[ -0.013975172303617, 0.01981315389275551, 0.011158620938658714, 0.0608307421207428, 0.029335053637623787, 0.017883935943245888, 0.0016726873582229018, 0.007108914665877819, -0.00008771897410042584, -0.05798811838030815, 0.03854627534747124, -0.008882695809006691, -0.08686477690935135, 0.030...
121
How much of my dataset did you use? Quantitative Data Usage Inference in Machine Learning
https://openreview.net/forum?id=EUSkm2sVJ6
[ "Yao Tong", "Jiayuan Ye", "Sajjad Zarifzadeh", "Reza Shokri" ]
Oral
How much of my data was used to train a machine learning model? This is a critical question for data owners assessing the risk of unauthorized usage of their data to train models. However, previous work mistakenly treats this as a binary problem—inferring whether all-or-none or any-or-none of the data was used—which is...
Machine Learning, Privacy, Dataset Usage Inference, Dataset Ownership, Membership Inference Attack, Dataset Copyright
The first method to quantitatively and non-binarily answer the question "How much has a dataset been used in the training of a given model?"
5,454
null
[ -0.007379419170320034, -0.03326790779829025, -0.04411854222416878, 0.03597163408994675, 0.06674958765506744, 0.013672162778675556, 0.02994399704039097, 0.010607684031128883, -0.03488994762301445, -0.024905795231461525, -0.005208217538893223, 0.010977746918797493, -0.05593862012028694, -0.0...
122
LARP: Tokenizing Videos with a Learned Autoregressive Generative Prior
https://openreview.net/forum?id=Wr3UuEx72f
[ "Hanyu Wang", "Saksham Suri", "Yixuan Ren", "Hao Chen", "Abhinav Shrivastava" ]
Oral
We present LARP, a novel video tokenizer designed to overcome limitations in current video tokenization methods for autoregressive (AR) generative models. Unlike traditional patchwise tokenizers that directly encode local visual patches into discrete tokens, LARP introduces a holistic tokenization scheme that gathers i...
Video Generation, Visual Tokenization
A holistic video tokenizer with a learned autoregressive generative prior.
5,428
2410.21264
[ 0.05878937616944313, -0.023977870121598244, 0.01591375470161438, 0.048834580928087234, 0.02868965081870556, 0.055878281593322754, 0.022349731996655464, -0.0029723418410867453, -0.033185508102178574, -0.025113612413406372, -0.045893553644418716, -0.011430138722062111, -0.04924776405096054, ...
123
MOS: Model Synergy for Test-Time Adaptation on LiDAR-Based 3D Object Detection
https://openreview.net/forum?id=Y6aHdDNQYD
[ "Zhuoxiao Chen", "Junjie Meng", "Mahsa Baktashmotlagh", "Yonggang Zhang", "Zi Huang", "Yadan Luo" ]
Oral
LiDAR-based 3D object detection is crucial for various applications but often experiences performance degradation in real-world deployments due to domain shifts. While most studies focus on cross-dataset shifts, such as changes in environments and object geometries, practical corruptions from sensor variations and weat...
Test-Time Adaptation, 3D Object Detection
null
5,340
2406.14878
[ -0.004506526980549097, 0.0032768771052360535, 0.00004413389979163185, 0.03852046653628349, 0.057220667600631714, 0.025199707597494125, 0.006020717788487673, -0.005676244385540485, -0.019015593454241753, -0.038607463240623474, -0.01556908804923296, -0.0011118061374872923, -0.05739294365048408...
124
Synthetic continued pretraining
https://openreview.net/forum?id=07yvxWDSla
[ "Zitong Yang", "Neil Band", "Shuangping Li", "Emmanuel Candes", "Tatsunori Hashimoto" ]
Oral
Pretraining on large-scale, unstructured internet text enables language models to acquire a significant amount of world knowledge. However, this knowledge acquisition is data-inefficient---to learn a fact, models must be trained on hundreds to thousands of diverse representations of it. This poses a challenge when adap...
large language model, synthetic data, continued pretraining
null
5,336
2409.07431
[ 0.004875913728028536, -0.028040625154972076, -0.004531542304903269, 0.0735638216137886, 0.046326521784067154, -0.009083588607609272, 0.04003417491912842, 0.0313623771071434, -0.01341547816991806, -0.001994370948523283, -0.0330246277153492, 0.037208035588264465, -0.054790157824754715, -0.02...
125
EmbodiedSAM: Online Segment Any 3D Thing in Real Time
https://openreview.net/forum?id=XFYUwIyTxQ
[ "Xiuwei Xu", "Huangxing Chen", "Linqing Zhao", "Ziwei Wang", "Jie Zhou", "Jiwen Lu" ]
Oral
Embodied tasks require the agent to fully understand 3D scenes simultaneously with its exploration, so an online, real-time, fine-grained and highly-generalized 3D perception model is desperately needed. Since high-quality 3D data is limited, directly training such a model in 3D is infeasible. Meanwhile, vision foundat...
3D instance segmentation, online 3D scene segmentation
We presented EmbodiedSAM, an efficient framework that leverages vision foundation models for online, real-time, fine-grained and generalized 3D instance segmentation.
5,293
2408.11811
[ 0.03574860095977783, -0.024094831198453903, 0.029999978840351105, 0.008997943252325058, 0.02727644518017769, 0.029455801472067833, 0.019566234201192856, 0.048442136496305466, -0.04568031430244446, -0.05483977496623993, -0.035772018134593964, -0.02363659255206585, -0.04820021614432335, 0.00...
126
Tractable Multi-Agent Reinforcement Learning through Behavioral Economics
https://openreview.net/forum?id=stUKwWBuBm
[ "Eric Mazumdar", "Kishan Panaganti", "Laixi Shi" ]
Oral
A significant roadblock to the development of principled multi-agent reinforcement learning (MARL) algorithms is the fact that desired solution concepts like Nash equilibria may be intractable to compute. We show how one can overcome this obstacle by introducing concepts from behavioral economics into MARL. To do so, w...
behavioral economics, risk-aversion, multi-agent reinforcement learning, quantal response, bounded rationality
By incorporating risk aversion and bounded rationality into agents' decision-making processes, we introduced a computationally tractable equilibria class for matrix and Markov games which aligns with observed human behaviors.
5,242
null
[ -0.057189084589481354, -0.006305071525275707, 0.006995043251663446, 0.027457142248749733, 0.04115237295627594, 0.022401249036192894, 0.004545997362583876, 0.019913524389266968, -0.031585995107889175, -0.03889107331633568, -0.005942500662058592, 0.03719472885131836, -0.06126607581973076, -0...
127
Improved Finite-Particle Convergence Rates for Stein Variational Gradient Descent
https://openreview.net/forum?id=sbG8qhMjkZ
[ "Sayan Banerjee", "Krishna Balasubramanian", "PROMIT GHOSAL" ]
Oral
We provide finite-particle convergence rates for the Stein Variational Gradient Descent (SVGD) algorithm in the Kernelized Stein Discrepancy (KSD) and Wasserstein-2 metrics. Our key insight is that the time derivative of the relative entropy between the joint density of $N$ particle locations and the $N$-fold produc...
Stein Variational Gradient Descent, Non-asymptotic Rates, Variational Inference
Near-optimal finite-particle, discrete-time rates for SVGD
5,180
2409.08469
[ -0.041491344571113586, -0.0227967519313097, 0.034151140600442886, 0.028850983828306198, 0.0288339015096426, 0.01666252315044403, 0.019905967637896538, 0.008151347748935223, -0.02096615545451641, -0.038827091455459595, -0.004877416417002678, -0.016347210854291916, -0.04786575585603714, 0.01...
128
Unlocking the Power of Function Vectors for Characterizing and Mitigating Catastrophic Forgetting in Continual Instruction Tuning
https://openreview.net/forum?id=gc8QAQfXv6
[ "Gangwei Jiang", "Caigao JIANG", "Zhaoyi Li", "Siqiao Xue", "JUN ZHOU", "Linqi Song", "Defu Lian", "Ying Wei" ]
Oral
Catastrophic forgetting (CF) poses a significant challenge in machine learning, where a model forgets previously learned information upon learning new tasks. Despite the advanced capabilities of Large Language Models (LLMs), they continue to face challenges with CF during continual learning. The majority of existing r...
Catastrophic forgetting; Large language model; Instruction tuning
null
5,157
2502.11019
[ -0.022396204993128777, -0.017350004985928535, -0.00654103048145771, 0.02337317354977131, 0.03538776561617851, 0.008308552205562592, 0.05013678967952728, 0.015258065424859524, -0.055679284036159515, -0.010465351864695549, 0.005741751287132502, 0.03629111871123314, -0.035842470824718475, 0.0...
129
One Step Diffusion via Shortcut Models
https://openreview.net/forum?id=OlzB6LnXcS
[ "Kevin Frans", "Danijar Hafner", "Sergey Levine", "Pieter Abbeel" ]
Oral
Diffusion models and flow matching models have enabled generating diverse and realistic images by learning to transfer noise to data. However, sampling from these models involves iterative denoising over many neural network passes, making generation slow and expensive. Previous approaches for speeding up sampling requi...
diffusion, flow-matching, fast inference, distillation
null
5,115
2410.12557
[ -0.0026327241212129593, -0.029243383556604385, -0.026953494176268578, 0.0641847476363182, 0.0378279872238636, 0.042793046683073044, 0.03782389685511589, 0.004562579095363617, -0.011047843843698502, -0.07301429659128189, -0.010932950302958488, -0.029040634632110596, -0.05484747141599655, -0...
130
Judge Decoding: Faster Speculative Sampling Requires Going Beyond Model Alignment
https://openreview.net/forum?id=mtSSFiqW6y
[ "Gregor Bachmann", "Sotiris Anagnostidis", "Albert Pumarola", "Markos Georgopoulos", "Artsiom Sanakoyeu", "Yuming Du", "Edgar Schönfeld", "Ali Thabet", "Jonas K Kohler" ]
Oral
The performance of large language models (LLMs) is closely linked to their underlying size, leading to ever-growing networks and hence slower inference. Speculative decoding has been proposed as a technique to accelerate autoregressive generation, leveraging a fast draft model to propose candidate tokens, which are the...
LLM inference, speculative decoding
null
5,114
2501.19309
[ 0.0075920079834759235, -0.0328250527381897, -0.025136616080999374, 0.049784980714321136, 0.03398779407143593, 0.053086984902620316, 0.019384486600756645, 0.025769943371415138, -0.026755571365356445, -0.020199891179800034, 0.010955817997455597, 0.021739019080996513, -0.06483665853738785, -0...
131
Robustness Inspired Graph Backdoor Defense
https://openreview.net/forum?id=trKNi4IUiP
[ "Zhiwei Zhang", "Minhua Lin", "Junjie Xu", "Zongyu Wu", "Enyan Dai", "Suhang Wang" ]
Oral
Graph Neural Networks (GNNs) have achieved promising results in tasks such as node classification and graph classification. However, recent studies reveal that GNNs are vulnerable to backdoor attacks, posing a significant threat to their real-world adoption. Despite initial efforts to defend against specific graph back...
Backdoor Defense, Graph Neural Network
null
5,103
2406.09836
[ -0.017372291535139084, -0.019496893510222435, -0.006910599302500486, 0.042724281549453735, 0.03771533444523811, 0.007744015660136938, 0.05263328552246094, 0.00045362382661551237, -0.02028440684080124, -0.037894099950790405, 0.038882866501808167, -0.030077483505010605, -0.045319005846977234, ...
132
Proxy Denoising for Source-Free Domain Adaptation
https://openreview.net/forum?id=FIj9IEPCKr
[ "Song Tang", "Wenxin Su", "Yan Gan", "Mao Ye", "Jianwei Dr. Zhang", "Xiatian Zhu" ]
Oral
Source-Free Domain Adaptation (SFDA) aims to adapt a pre-trained source model to an unlabeled target domain with no access to the source data. Inspired by the success of large Vision-Language (ViL) models in many applications, the latest research has validated ViL's benefit for SFDA by using their predictions as pseudo...
Domain adaptation, source-free, multimodal proxy space, proxy confidence theory
null
5,075
2406.01658
[ 0.011988372541964054, -0.0030830800533294678, 0.01084798201918602, 0.029035937041044235, 0.046562500298023224, 0.020508304238319397, 0.04076184704899788, -0.00600402569398284, -0.0244942307472229, -0.04031127318739891, -0.021833105012774467, 0.03473713994026184, -0.06720840930938721, 0.014...
133
Cybench: A Framework for Evaluating Cybersecurity Capabilities and Risks of Language Models
https://openreview.net/forum?id=tc90LV0yRL
[ "Andy K Zhang", "Neil Perry", "Riya Dulepet", "Joey Ji", "Celeste Menders", "Justin W Lin", "Eliot Jones", "Gashon Hussein", "Samantha Liu", "Donovan Julian Jasper", "Pura Peetathawatchai", "Ari Glenn", "Vikram Sivashankar", "Daniel Zamoshchin", "Leo Glikbarg", "Derek Askaryar", "Hao...
Oral
Language Model (LM) agents for cybersecurity that are capable of autonomously identifying vulnerabilities and executing exploits have potential to cause real-world impact. Policymakers, model providers, and researchers in the AI and cybersecurity communities are interested in quantifying the capabilities of such agents...
Language Model Agents, Benchmark, Cybersecurity, Risk
Cybench is a cybersecurity agent benchmark with 40 professional-level Capture the Flag tasks that are recent, meaningful, and difficult with subtasks.
5,074
2408.08926
[ 0.02271275222301483, -0.041280027478933334, -0.025380203500390053, 0.06296582520008087, 0.04930872842669487, 0.013071834109723568, 0.021158063784241676, 0.003193108830600977, -0.02205413579940796, -0.030902666971087456, -0.02875232882797718, 0.025303643196821213, -0.06183748319745064, -0.0...
134
Open-YOLO 3D: Towards Fast and Accurate Open-Vocabulary 3D Instance Segmentation
https://openreview.net/forum?id=CRmiX0v16e
[ "Mohamed El Amine Boudjoghra", "Angela Dai", "Jean Lahoud", "Hisham Cholakkal", "Rao Muhammad Anwer", "Salman Khan", "Fahad Shahbaz Khan" ]
Oral
Recent works on open-vocabulary 3D instance segmentation show strong promise but at the cost of slow inference speed and high computation requirements. This high computation cost is typically due to their heavy reliance on aggregated CLIP features from multiple views, which require computationally expensive 2D foundation m...
Open Vocabulary, 3D point cloud instance segmentation
null
4,987
2406.02548
[ -0.013886956498026848, 0.004413907881826162, 0.026065178215503693, 0.038041021674871445, 0.02577286586165428, 0.06824953854084015, 0.009151903912425041, 0.014200238510966301, -0.03725471720099449, -0.026578422635793686, -0.019986191764473915, 0.017040260136127472, -0.08193904161453247, 0.0...
135
Safety Alignment Should be Made More Than Just a Few Tokens Deep
https://openreview.net/forum?id=6Mxhg9PtDE
[ "Xiangyu Qi", "Ashwinee Panda", "Kaifeng Lyu", "Xiao Ma", "Subhrajit Roy", "Ahmad Beirami", "Prateek Mittal", "Peter Henderson" ]
Oral
The safety alignment of current Large Language Models (LLMs) is vulnerable. Simple attacks, or even benign fine-tuning, can jailbreak aligned models. We note that many of these vulnerabilities are related to a shared underlying issue: safety alignment can take shortcuts, wherein the alignment adapts a model's generativ...
Safety Alignment, AI Safety, LLM
We identify an underlying problem (shallow safety alignment) that makes current safety alignment vulnerable, and we also propose mitigation approaches.
4,914
2406.05946
[ -0.020647823810577393, -0.013839769177138805, -0.04227946326136589, 0.04405069723725319, 0.034273918718099594, 0.02261803299188614, 0.061592571437358856, -0.006449935492128134, -0.019978336989879608, -0.024390313774347305, -0.01675751805305481, 0.01907454989850521, -0.06605923175811768, 0....
136
On the Identification of Temporal Causal Representation with Instantaneous Dependence
https://openreview.net/forum?id=2efNHgYRvM
[ "Zijian Li", "Yifan Shen", "Kaitao Zheng", "Ruichu Cai", "Xiangchen Song", "Mingming Gong", "Guangyi Chen", "Kun Zhang" ]
Oral
Temporally causal representation learning aims to identify the latent causal process from time series observations, but most methods require the assumption that the latent causal processes do not have instantaneous relations. Although some recent methods achieve identifiability in the instantaneous causality case, they...
Causal Representation Learning, Instantaneous Dependency, Identification
null
4,912
2405.15325
[ 0.008247315883636475, -0.009738718159496784, -0.016852736473083496, 0.030765768140554428, 0.02331722155213356, 0.0358595997095108, 0.05256331339478493, 0.03005494736135006, -0.0416167676448822, -0.03329073637723923, -0.006334960926324129, -0.007498065009713173, -0.037167806178331375, 0.003...
137
WizardMath: Empowering Mathematical Reasoning for Large Language Models via Reinforced Evol-Instruct
https://openreview.net/forum?id=mMPMHWOdOy
[ "Haipeng Luo", "Qingfeng Sun", "Can Xu", "Pu Zhao", "Jian-Guang Lou", "Chongyang Tao", "Xiubo Geng", "Qingwei Lin", "Shifeng Chen", "Yansong Tang", "Dongmei Zhang" ]
Oral
Large language models (LLMs), such as GPT-4, have shown remarkable performance in natural language processing (NLP) tasks, including challenging mathematical reasoning. However, most existing open-source models are only pre-trained on large-scale internet data, without math-related optimization. In this paper, we pr...
Mathematical Reasoning, Evol-Instruct, Reinforcement Learning
null
4,894
2308.09583
[ -0.02633417397737503, -0.026371264830231667, -0.00585283013060689, 0.05404173210263252, 0.03205054998397827, 0.01864425465464592, 0.03287350386381149, 0.011803866364061832, -0.023029997944831848, -0.0020433939062058926, -0.002359966281801462, 0.050185151398181915, -0.05519343167543411, -0....
138
Faster Cascades via Speculative Decoding
https://openreview.net/forum?id=vo9t20wsmd
[ "Harikrishna Narasimhan", "Wittawat Jitkrittum", "Ankit Singh Rawat", "Seungyeon Kim", "Neha Gupta", "Aditya Krishna Menon", "Sanjiv Kumar" ]
Oral
Cascades and speculative decoding are two common approaches to improving language models' inference efficiency. Both approaches interleave two models, but via fundamentally distinct mechanisms: cascades employ a deferral rule that invokes the larger model only for “hard” inputs, while speculative decoding uses speculative execution to...
Cascades, Speculative Decoding, Speculative execution, LLM, Inference, Adaptive Inference
Faster language model cascades through the use of speculative execution
4,871
2405.19261
[ -0.01441245898604393, -0.01647743210196495, -0.01287281047552824, 0.052057038992643356, 0.030330490320920944, 0.03825549781322479, 0.031523942947387695, 0.02719741314649582, -0.02253115549683571, -0.015429445542395115, 0.03017451986670494, 0.037958528846502304, -0.034742116928100586, -0.00...
139
The Hidden Cost of Waiting for Accurate Predictions
https://openreview.net/forum?id=A3YUPeJTNR
[ "Ali Shirali", "Ariel D. Procaccia", "Rediet Abebe" ]
Oral
Algorithmic predictions are increasingly informing societal resource allocations by identifying individuals for targeting. Policymakers often build these systems with the assumption that by gathering more observations on individuals, they can improve predictive accuracy and, consequently, allocation efficiency. An over...
Algorithmic Decision Making, Prediction, Resource Allocation, Social Welfare, Limits of Prediction
null
4,828
2503.00650
[ -0.007884473539888859, -0.016654230654239655, -0.030379122123122215, 0.02470192313194275, 0.04222995042800903, 0.024876460433006287, 0.008820434100925922, 0.007763285655528307, -0.049028169363737106, -0.03945334255695343, -0.014980747364461422, 0.002327190013602376, -0.04831280559301376, -...
140
Learning Dynamics of LLM Finetuning
https://openreview.net/forum?id=tPNHOoZFl9
[ "Yi Ren", "Danica J. Sutherland" ]
Oral
Learning dynamics, which describes how the learning of specific training examples influences the model's predictions on other examples, gives us a powerful tool for understanding the behavior of deep learning systems. We study the learning dynamics of large language models during different types of finetuning, by anal...
Learning dynamics, LLM, finetuning, DPO
The paper proposes a novel learning dynamics framework to understand LLMs' behavior during finetuning (e.g., SFT, DPO, and other variants). Some counter-intuitive behaviors can be well explained by the proposed framework.
4,818
2407.10490
[ -0.014770962297916412, -0.038005828857421875, 0.012439894489943981, 0.020698266103863716, 0.05295471101999283, 0.01961943693459034, 0.02733188308775425, 0.02551470510661602, -0.0378708615899086, 0.0011553488438948989, -0.007912201806902885, 0.03828543797135353, -0.04907618835568428, 0.0012...
141
Root Cause Analysis of Anomalies in Multivariate Time Series through Granger Causal Discovery
https://openreview.net/forum?id=k38Th3x4d9
[ "Xiao Han", "Saima Absar", "Lu Zhang", "Shuhan Yuan" ]
Oral
Identifying the root causes of anomalies in multivariate time series is challenging due to the complex dependencies among the series. In this paper, we propose a comprehensive approach called AERCA that inherently integrates Granger causal discovery with root cause analysis. By defining anomalies as interventions on th...
root cause analysis, Granger causality, multivariate time series
null
4,815
null
[ -0.005938377697020769, -0.028056636452674866, -0.020213665440678596, 0.018867673352360725, 0.053497880697250366, 0.033068474382162094, 0.05668334290385246, -0.009822807274758816, -0.019540606066584587, -0.056795794516801834, -0.013698192313313484, 0.010336737148463726, -0.04341769963502884, ...
142
ProtComposer: Compositional Protein Structure Generation with 3D Ellipsoids
https://openreview.net/forum?id=0ctvBgKFgc
[ "Hannes Stark", "Bowen Jing", "Tomas Geffner", "Jason Yim", "Tommi Jaakkola", "Arash Vahdat", "Karsten Kreis" ]
Oral
We develop ProtComposer to generate protein structures conditioned on spatial protein layouts that are specified via a set of 3D ellipsoids capturing substructure shapes and semantics. At inference time, we condition on ellipsoids that are hand-constructed, extracted from existing proteins, or from a statistical model,...
protein design, diffusion model, controllable generation, drug discovery, proteins, biology
We develop a framework to generate protein structures conditioned on spatial protein layouts that are specified via a set of 3D ellipsoids.
4,802
2503.05025
[ -0.012907739728689194, -0.02561880089342594, -0.015472661703824997, 0.03812568634748459, 0.03000912442803383, 0.006117785349488258, 0.007611596491187811, -0.0036501751746982336, -0.026370126754045486, -0.03809640184044838, 0.009021450765430927, -0.02130236104130745, -0.08203665167093277, 0...
143
More RLHF, More Trust? On The Impact of Preference Alignment On Trustworthiness
https://openreview.net/forum?id=FpiCLJrSW8
[ "Aaron Jiaxun Li", "Satyapriya Krishna", "Himabindu Lakkaraju" ]
Oral
The trustworthiness of Large Language Models (LLMs) refers to the extent to which their outputs are reliable, safe, and ethically aligned, and it has become a crucial consideration alongside their cognitive performance. In practice, Reinforcement Learning From Human Feedback (RLHF) has been widely used to align LLMs wi...
Large Language Model, Trustworthy ML, Data Attribution
Evaluating the Impact of RLHF on Trustworthiness Aspects
4,767
2404.18870
[ 0.00215368764474988, 0.010866993106901646, -0.013319551944732666, 0.06193395331501961, 0.04958201199769974, 0.01982612907886505, 0.036233723163604736, 0.018185734748840332, -0.01604555733501911, -0.02842109277844429, -0.011913900263607502, 0.06087028607726097, -0.09120120108127594, -0.0173...
144
Geometry-aware RL for Manipulation of Varying Shapes and Deformable Objects
https://openreview.net/forum?id=7BLXhmWvwF
[ "Tai Hoang", "Huy Le", "Philipp Becker", "Vien Anh Ngo", "Gerhard Neumann" ]
Oral
Manipulating objects with varying geometries and deformable objects is a major challenge in robotics. Tasks such as insertion with different objects or cloth hanging require precise control and effective modelling of complex dynamics. In this work, we frame this problem through the lens of a heterogeneous graph that co...
Robotic Manipulation, Equivariance, Graph Neural Networks, Reinforcement Learning, Deformable Objects
Geometry-aware RL with a heterogeneous SE(3)-equivariant backbone policy for robotic manipulation
4,674
2502.07005
[ 0.00029483798425644636, -0.03495783731341362, -0.014988958835601807, 0.021292777732014656, 0.01942077837884426, 0.057013120502233505, -0.0025794473476707935, -0.010890492238104343, -0.03356001898646355, -0.07379645854234695, -0.008061964996159077, -0.037005484104156494, -0.09664452821016312,...
145
Topological Blindspots: Understanding and Extending Topological Deep Learning Through the Lens of Expressivity
https://openreview.net/forum?id=EzjsoomYEb
[ "Yam Eitan", "Yoav Gelberg", "Guy Bar-Shalom", "Fabrizio Frasca", "Michael M. Bronstein", "Haggai Maron" ]
Oral
Topological deep learning (TDL) is a rapidly growing field that seeks to leverage topological structure in data and facilitate learning from data supported on topological objects, ranging from molecules to 3D shapes. Most TDL architectures can be unified under the framework of higher-order message-passing (HOMP), which...
Topological Deep Learning, Message Passing, Higher Order Message Passing, Expressivity, Graph Neural Networks, GNNs, Topology, Homology, Symmetry
null
4,548
2408.05486
[ -0.026550419628620148, -0.0037432806566357613, -0.00600476935505867, 0.045499563217163086, -0.0031225166749209166, 0.0006567356758750975, 0.02801898494362831, 0.013988306745886803, -0.013036920689046383, -0.04395807534456253, 0.014555941335856915, -0.017060428857803345, -0.07197488099336624,...
146
Loopy: Taming Audio-Driven Portrait Avatar with Long-Term Motion Dependency
https://openreview.net/forum?id=weM4YBicIP
[ "Jianwen Jiang", "Chao Liang", "Jiaqi Yang", "Gaojie Lin", "Tianyun Zhong", "Yanbo Zheng" ]
Oral
With the introduction of video diffusion models, audio-conditioned human video generation has recently achieved significant breakthroughs in both the naturalness of motion and the synthesis of portrait details. Due to the limited control of audio signals in driving human motion, existing methods often add auxiliary spat...
Diffusion Model, Avatar, Portrait Animation, Audio-Condition Video Generation
We propose Loopy, an end-to-end audio-conditioned video diffusion model that uses long-term motion information to learn natural motions and improve audio-portrait correlation, eliminating motion constraints and delivering high-quality results.
4,292
2409.02634
[ 0.040748231112957, -0.020861677825450897, 0.011311620473861694, 0.015088643878698349, 0.05429625138640404, 0.03817103058099747, 0.04492909833788872, -0.00804237462580204, -0.052918870002031326, -0.050369467586278915, 0.0019178988877683878, -0.02459723874926567, -0.07066012918949127, -0.003...
147
CyberHost: A One-stage Diffusion Framework for Audio-driven Talking Body Generation
https://openreview.net/forum?id=vaEPihQsAA
[ "Gaojie Lin", "Jianwen Jiang", "Chao Liang", "Tianyun Zhong", "Jiaqi Yang", "Zerong Zheng", "Yanbo Zheng" ]
Oral
Diffusion-based video generation technology has advanced significantly, catalyzing a proliferation of research in human animation. While breakthroughs have been made in driving human animation through various modalities for portraits, most current solutions for human body animation still focus on video-driven method...
Audio-driven Human Animation, Diffusion Model, Generative Model, Human Video Generation
We propose a one-stage audio-driven talking body generation framework, CyberHost, designed to produce human videos that match the input audio with high expressiveness and realism.
4,230
null
[ 0.03262991085648537, -0.030557727441191673, -0.010935095138847828, 0.03456716984510422, 0.06303297728300095, -0.0036998153664171696, 0.03152783587574959, 0.019588695839047432, -0.02032400481402874, -0.0634770393371582, -0.030799956992268562, -0.046046916395425797, -0.04993576928973198, 0.0...
148
Transfusion: Predict the Next Token and Diffuse Images with One Multi-Modal Model
https://openreview.net/forum?id=SI2hI0frk6
[ "Chunting Zhou", "LILI YU", "Arun Babu", "Kushal Tirumala", "Michihiro Yasunaga", "Leonid Shamis", "Jacob Kahn", "Xuezhe Ma", "Luke Zettlemoyer", "Omer Levy" ]
Oral
We introduce Transfusion, a recipe for training a multi-modal model over discrete and continuous data. Transfusion combines the language modeling loss function (next token prediction) with diffusion to train a single transformer over mixed-modality sequences. We pretrain multiple Transfusion models up to 7B parameters ...
multimodal foundation model, multimodal generation and understanding, diffusion, next token prediction
Transfusion is a recipe for training a multi-modal model over discrete and continuous data.
4,134
2408.11039
[ 0.01123978290706873, -0.04030008614063263, 0.0011551470961421728, 0.028509818017482758, 0.029135148972272873, 0.03899955376982689, 0.028372185304760933, 0.024803832173347473, -0.019806263968348503, -0.047113146632909775, -0.005780233535915613, 0.002390551380813122, -0.047640688717365265, -...
149
MoE++: Accelerating Mixture-of-Experts Methods with Zero-Computation Experts
https://openreview.net/forum?id=t7P5BUKcYv
[ "Peng Jin", "Bo Zhu", "Li Yuan", "Shuicheng YAN" ]
Oral
In this work, we aim to simultaneously enhance the effectiveness and efficiency of Mixture-of-Experts (MoE) methods. To achieve this, we propose MoE++, a general and heterogeneous MoE framework that integrates both Feed-Forward Network (FFN) and zero-computation experts. Specifically, we introduce three types of zero-c...
Mixture of Experts, Large Language Models, Efficient Foundation Models
We propose MoE++, a general and heterogeneous mixture-of-experts framework that achieves better performance while delivering 1.1$\sim$2.1$\times$ expert forward throughput compared to a vanilla MoE model of the same size.
4,125
2410.07348
[ 0.012907963246107101, -0.023714622482657433, 0.014710781164467335, 0.036832358688116074, 0.023780537769198418, 0.06114567443728447, 0.015030688606202602, 0.017563477158546448, -0.042089544236660004, -0.03942112252116203, 0.005357551388442516, 0.019723670557141304, -0.042915403842926025, -0...
150
Compositional Entailment Learning for Hyperbolic Vision-Language Models
https://openreview.net/forum?id=3i13Gev2hV
[ "Avik Pal", "Max van Spengler", "Guido Maria D'Amely di Melendugno", "Alessandro Flaborea", "Fabio Galasso", "Pascal Mettes" ]
Oral
Image-text representation learning forms a cornerstone in vision-language models, where pairs of images and textual descriptions are contrastively aligned in a shared embedding space. Since visual and textual concepts are naturally hierarchical, recent work has shown that hyperbolic space can serve as a high-potential ...
Vision-Language Models, Hyperbolic Geometry, Representation Learning, CLIP
We explore the benefits brought in when using visual-semantic compositional hierarchies for learning hyperbolic representations through unsupervised contrastive training.
4,111
2410.06912
[ 0.011464657261967659, 0.024000301957130432, -0.0038960024248808622, 0.06469081342220306, 0.025019776076078415, 0.010319042019546032, 0.021263867616653442, 0.03258049115538597, -0.026808958500623703, -0.028277531266212463, -0.04329589009284973, 0.04203503206372261, -0.05876518413424492, 0.0...
151
Advantage Alignment Algorithms
https://openreview.net/forum?id=QFO1asgas2
[ "Juan Agustin Duque", "Milad Aghajohari", "Tim Cooijmans", "razvan ciuca", "Tianyu Zhang", "Gauthier Gidel", "Aaron Courville" ]
Oral
Artificially intelligent agents are increasingly being integrated into human decision-making: from large language model (LLM) assistants to autonomous vehicles. These systems often optimize their individual objective, leading to conflicts, particularly in general-sum games where naive reinforcement learning agents empi...
Multi-agent Reinforcement Learning, Opponent Shaping, Social Dilemmas, General-Sum Games
We introduce Advantage Alignment, a new family of algorithms for opponent shaping in general-sum games, designed to promote cooperation and avoid suboptimal outcomes.
3,875
2406.14662
[ -0.03438407927751541, -0.0021830282639712095, -0.00532126659527421, 0.03327280655503273, 0.016589097678661346, 0.02690093219280243, 0.02688346616923809, 0.01941572315990925, -0.015337379649281502, -0.057292334735393524, -0.008898904547095299, 0.03205765038728714, -0.08580232411623001, -0.0...
152
Scaling In-the-Wild Training for Diffusion-based Illumination Harmonization and Editing by Imposing Consistent Light Transport
https://openreview.net/forum?id=u1cQYxRI1H
[ "Lvmin Zhang", "Anyi Rao", "Maneesh Agrawala" ]
Oral
Diffusion-based image generators are becoming unique methods for illumination harmonization and editing. The current bottleneck in scaling up the training of diffusion-based illumination editing models is mainly in the difficulty of preserving the underlying image details and maintaining intrinsic properties, such as a...
diffusion model, illumination editing, image editing
Diffusion-based image illumination harmonization and editing model
3,821
null
[ 0.00996484700590372, -0.010281199589371681, -0.02414102666079998, 0.05126495659351349, 0.050666388124227524, 0.008033494465053082, 0.014220804907381535, -0.009957061149179935, -0.03229162469506264, -0.0730702206492424, -0.011685353703796864, -0.03173007816076279, -0.06317996978759766, 0.00...
153
AlphaEdit: Null-Space Constrained Knowledge Editing for Language Models
https://openreview.net/forum?id=HvSytvg3Jh
[ "Junfeng Fang", "Houcheng Jiang", "Kun Wang", "Yunshan Ma", "Jie Shi", "Xiang Wang", "Xiangnan He", "Tat-Seng Chua" ]
Oral
Large language models (LLMs) often exhibit hallucinations, producing incorrect or outdated knowledge. Hence, model editing methods have emerged to enable targeted knowledge updates. To achieve this, a prevailing paradigm is the locating-then-editing approach, which first locates influential parameters and then edits ...
Model Editing, Null-Space, Large Language Model
We propose a novel model editing method named AlphaEdit to minimize the disruption to the preserved knowledge during editing.
3,792
2410.02355
[ -0.03199690207839012, -0.006551577243953943, -0.033615872263908386, 0.042706120759248734, 0.060737982392311096, 0.014815878123044968, 0.05284164473414421, 0.007951739244163036, -0.043736182153224945, -0.015349463559687138, -0.045651815831661224, 0.026876449584960938, -0.0670463889837265, -...
154
DeepLTL: Learning to Efficiently Satisfy Complex LTL Specifications for Multi-Task RL
https://openreview.net/forum?id=9pW2J49flQ
[ "Mathias Jackermeier", "Alessandro Abate" ]
Oral
Linear temporal logic (LTL) has recently been adopted as a powerful formalism for specifying complex, temporally extended tasks in multi-task reinforcement learning (RL). However, learning policies that efficiently satisfy arbitrary specifications not observed during training remains a challenging problem. Existing app...
reinforcement learning, linear temporal logic, ltl, generalization
null
3,756
null
[ -0.006227348931133747, -0.04436841234564781, -0.02255740389227867, 0.025080054998397827, 0.047580618411302567, 0.030241617932915688, 0.021620240062475204, 0.012221957556903362, -0.008687129244208336, -0.011459747329354286, -0.01910182647407055, 0.04347417131066322, -0.07463394850492477, -0...
155
On the Role of Attention Heads in Large Language Model Safety
https://openreview.net/forum?id=h0Ak8A5yqw
[ "Zhenhong Zhou", "Haiyang Yu", "Xinghua Zhang", "Rongwu Xu", "Fei Huang", "Kun Wang", "Yang Liu", "Junfeng Fang", "Yongbin Li" ]
Oral
Large language models (LLMs) achieve state-of-the-art performance on multiple language tasks, yet their safety guardrails can be circumvented, leading to harmful generations. In light of this, recent research on safety mechanisms has emerged, revealing that when safety representations or components are suppressed, the s...
interpretability, large language model, multi-head attention, safety, harmful content
We identify safety-critical attention heads in large language models, and when these heads are ablated, the model safety is significantly compromised.
3,741
2410.13708
[ -0.027358490973711014, 0.02545349672436714, -0.014126898720860481, 0.010016492567956448, 0.018251115456223488, 0.005173825193196535, 0.033509183675050735, -0.0011316250311210752, -0.03356940299272537, -0.01720915548503399, -0.04440632835030556, 0.0393499918282032, -0.06320611387491226, 0.0...
156
Influence Functions for Scalable Data Attribution in Diffusion Models
https://openreview.net/forum?id=esYrEndGsr
[ "Bruno Kacper Mlodozeniec", "Runa Eschenhagen", "Juhan Bae", "Alexander Immer", "David Krueger", "Richard E. Turner" ]
Oral
Diffusion models have led to significant advancements in generative modelling. Yet their widespread adoption poses challenges regarding data attribution and interpretability. In this paper, we aim to help address such challenges in diffusion models by extending influence functions. Influence function-based data attribu...
diffusion models, influence functions, Generalised Gauss Newton, GGN, data attribution, Hessian approximation, interpretability, curvature, Kronecker-Factored Approximate Curvature, K-FAC
We present a method for attributing the influence of training data on a diffusion model’s output by adapting influence functions and a K-FAC approximation to diffusion models, and we explore which measurements we want to attribute in the first place
3,597
2410.13850
[ -0.002559081418439746, -0.024651620537042618, 0.006144994869828224, 0.03935732692480087, 0.04603952914476395, 0.033890292048454285, 0.00466980691999197, -0.0311033483594656, -0.001959393732249737, -0.05721152201294899, -0.004211458843201399, 0.000690596119966358, -0.052405159920454025, 0.0...
157
Second-Order Min-Max Optimization with Lazy Hessians
https://openreview.net/forum?id=ijbA5swmoK
[ "Lesi Chen", "Chengchang Liu", "Jingzhao Zhang" ]
Oral
This paper studies second-order methods for convex-concave minimax optimization. Monteiro & Svaiter (2012) proposed a method to solve the problem with an optimal iteration complexity of $\mathcal{O}(\epsilon^{-3/2})$ to find an $\epsilon$-saddle point. However, it is unclear whether the computational complexity, $...
min-max optimization; second-order methods; computational complexity
We propose novel second-order methods for min-max optimization that are provably better than existing optimal methods
3,596
2410.09568
[ -0.050684697926044464, -0.026412034407258034, 0.013435817323625088, 0.029334967955946922, 0.03134198114275932, 0.055238571017980576, 0.014261519536376, -0.009575573727488518, -0.012788621708750725, -0.05675482004880905, 0.004128718748688698, 0.009614897891879082, -0.04286438599228859, 0.00...
158
Composing Unbalanced Flows for Flexible Docking and Relaxation
https://openreview.net/forum?id=gHLWTzKiZV
[ "Gabriele Corso", "Vignesh Ram Somnath", "Noah Getz", "Regina Barzilay", "Tommi Jaakkola", "Andreas Krause" ]
Oral
Diffusion models have emerged as a successful approach for molecular docking, but they often cannot model protein flexibility or generate nonphysical poses. We argue that both these challenges can be tackled by framing the problem as a transport between distributions. Still, existing paradigms lack the flexibility to d...
molecular docking, flow matching, structure relaxation, unbalanced transport
A new generalized flow matching paradigm and its applications to flexible docking and relaxation
3,566
null
[ -0.0461922250688076, -0.03452812507748604, -0.0015646510291844606, 0.01607530005276203, 0.04808869585394859, 0.009059669449925423, -0.013963036239147186, -0.01322003360837698, -0.008993832394480705, -0.0891730859875679, 0.05790340527892113, -0.02185809798538685, -0.0818590521812439, -0.000...
159
Learning Distributions of Complex Fluid Simulations with Diffusion Graph Networks
https://openreview.net/forum?id=uKZdlihDDn
[ "Mario Lino Valencia", "Tobias Pfaff", "Nils Thuerey" ]
Oral
Physical systems with complex unsteady dynamics, such as fluid flows, are often poorly represented by a single mean solution. For many practical applications, it is crucial to access the full distribution of possible states, from which relevant statistics (e.g., RMS and two-point correlations) can be derived. Here, we ...
Graph Neural Networks, Diffusion Models, Physics Simulations
We propose an efficient graph-based latent diffusion model, which allows us to directly sample unsteady flow states from their equilibrium distribution given a mesh discretisation of the system and its physical parameters.
3,559
null
[ -0.02477024868130684, -0.01005417387932539, 0.007902017794549465, 0.056541237980127335, 0.050891730934381485, 0.030384764075279236, -0.0017631598748266697, -0.013162883929908276, -0.020526673644781113, -0.052574001252651215, 0.04272628203034401, -0.05022265762090683, -0.035747431218624115, ...
160
Training Language Models to Self-Correct via Reinforcement Learning
https://openreview.net/forum?id=CjwERcAU7w
[ "Aviral Kumar", "Vincent Zhuang", "Rishabh Agarwal", "Yi Su", "John D Co-Reyes", "Avi Singh", "Kate Baumli", "Shariq Iqbal", "Colton Bishop", "Rebecca Roelofs", "Lei M Zhang", "Kay McKinney", "Disha Shrivastava", "Cosmin Paduraru", "George Tucker", "Doina Precup", "Feryal Behbahani",...
Oral
Self-correction is a highly desirable capability of large language models (LLMs), yet it has consistently been found to be largely ineffective in modern LLMs. Current methods for training self-correction typically depend on either multiple models, a more advanced model, or additional forms of supervision. To address t...
language models, reinforcement learning
null
3,518
2409.12917
[ -0.01361435279250145, -0.021336710080504417, -0.0002021315594902262, 0.037398017942905426, 0.07103973627090454, 0.02874891646206379, 0.016152560710906982, 0.016461797058582306, -0.02155442349612713, -0.023110710084438324, -0.015439070761203766, 0.06324004381895065, -0.05173564329743385, -0...
161
AI as Humanity’s Salieri: Quantifying Linguistic Creativity of Language Models via Systematic Attribution of Machine Text against Web Text
https://openreview.net/forum?id=ilOEOIqolQ
[ "Ximing Lu", "Melanie Sclar", "Skyler Hallinan", "Niloofar Mireshghallah", "Jiacheng Liu", "Seungju Han", "Allyson Ettinger", "Liwei Jiang", "Khyathi Chandu", "Nouha Dziri", "Yejin Choi" ]
Oral
Creativity has long been considered one of the most difficult aspects of human intelligence for AI to mimic. However, the rise of Large Language Models (LLMs), like ChatGPT, has raised questions about whether AI can match or even surpass human creativity. We present CREATIVITY INDEX as the first step to quantify the lin...
Machine Creativity, Large Language Model, Science of LLM, Machine Text Detection
We present CREATIVITY INDEX, a metric that quantifies the creativity of a text by reconstructing it from existing web snippets, supported by a novel dynamic programming algorithm, DJ SEARCH, for efficient computation.
3,478
null
[ -0.009438997134566307, 0.0002791288716252893, -0.01955309882760048, 0.026055702939629555, 0.0618428997695446, 0.0018609563121572137, 0.03146687522530556, 0.0426558293402195, -0.015646198764443398, -0.01714552752673626, -0.0441608764231205, 0.03729083761572838, -0.05787428840994835, -0.0061...
162
Comparing noisy neural population dynamics using optimal transport distances
https://openreview.net/forum?id=cNmu0hZ4CL
[ "Amin Nejatbakhsh", "Victor Geadah", "Alex H Williams", "David Lipshutz" ]
Oral
Biological and artificial neural systems form high-dimensional neural representations that underpin their computational capabilities. Methods for quantifying geometric similarity in neural representations have become a popular tool for identifying computational principles that are potentially shared across neural syste...
Representational similarity, shape metrics, optimal transport, Wasserstein distance
We propose using optimal transport distances on stochastic processes to compare noisy neural trajectories.
3,439
2412.14421
[ -0.03857683762907982, -0.003256463212892413, -0.027525654062628746, 0.04385042190551758, 0.0308698657900095, 0.04930994287133217, 0.021946895867586136, 0.0140706030651927, -0.04938143491744995, -0.06533446162939072, 0.014698406681418419, -0.012333858758211136, -0.04337494447827339, 0.01049...
163
Learning stochastic dynamics from snapshots through regularized unbalanced optimal transport
https://openreview.net/forum?id=gQlxd3Mtru
[ "Zhenyi Zhang", "Tiejun Li", "Peijie Zhou" ]
Oral
Reconstructing dynamics using samples from sparsely time-resolved snapshots is an important problem in both natural sciences and machine learning. Here, we introduce a new deep learning approach for solving regularized unbalanced optimal transport (RUOT) and inferring continuous unbalanced stochastic dynamics from obse...
optimal transport, Schrödinger bridge, trajectory inference, single-cell
null
3,337
2410.00844
[ -0.013424222357571125, -0.0496273972094059, -0.013450000435113907, 0.034008581191301346, 0.057073961943387985, 0.00006068208676879294, 0.03992754966020584, 0.021304327994585037, -0.023938380181789398, -0.07652896642684937, 0.051388759166002274, -0.022656114771962166, -0.061569053679704666, ...
164
Prioritized Generative Replay
https://openreview.net/forum?id=5IkDAfabuo
[ "Renhao Wang", "Kevin Frans", "Pieter Abbeel", "Sergey Levine", "Alexei A Efros" ]
Oral
Sample-efficient online reinforcement learning often uses replay buffers to store experience for reuse when updating the value function. However, uniform replay is inefficient, since certain classes of transitions can be more relevant to learning. While prioritization of more useful samples is helpful, this strategy c...
online learning, model-based reinforcement learning, generative modeling, synthetic data, continual learning
We construct a conditional generative model of an agent's online memory, allowing us to replay high-priority data at large quantities to accelerate training of online RL agents.
3,226
2410.18082
[ -0.038029417395591736, -0.026495812460780144, -0.00869293324649334, 0.06427215039730072, 0.044952936470508575, 0.020129039883613586, 0.0014365186216309667, 0.025322161614894867, -0.044822875410318375, -0.044743478298187256, -0.002873767167329788, 0.009252441115677357, -0.0656534880399704, ...
165
The Geometry of Categorical and Hierarchical Concepts in Large Language Models
https://openreview.net/forum?id=bVTM2QKYuA
[ "Kiho Park", "Yo Joong Choe", "Yibo Jiang", "Victor Veitch" ]
Oral
The linear representation hypothesis is the informal idea that semantic concepts are encoded as linear directions in the representation spaces of large language models (LLMs). Previous work has shown how to make this notion precise for representing binary concepts that have natural contrasts (e.g., {male, female}) as _...
categorical concepts, hierarchical concepts, linear representation hypothesis, causal inner product, interpretability
We extend the linear representation hypothesis to general concepts and show that hierarchical relationships are encoded as orthogonality.
3,176
2406.01506
[ -0.025107180699706078, -0.0017539234831929207, -0.02064323052763939, 0.033993154764175415, 0.03094629943370819, 0.031445685774087906, 0.03249775990843773, 0.02894251048564911, -0.030417021363973618, -0.0021330956369638443, -0.02140229567885399, -0.021422425284981728, -0.05719782039523125, ...
166
Generator Matching: Generative modeling with arbitrary Markov processes
https://openreview.net/forum?id=RuP17cJtZo
[ "Peter Holderrieth", "Marton Havasi", "Jason Yim", "Neta Shaul", "Itai Gat", "Tommi Jaakkola", "Brian Karrer", "Ricky T. Q. Chen", "Yaron Lipman" ]
Oral
We introduce Generator Matching, a modality-agnostic framework for generative modeling using arbitrary Markov processes. Generators characterize the infinitesimal evolution of a Markov process, which we leverage for generative modeling in a similar vein to flow matching: we construct conditional generators which genera...
Flow matching, Markov process, Diffusion model, Generative Modeling
The core principles of flow matching can be vastly generalized to practically all continuous-time Markov processes using Markov generators, unifying all previous methods and opening the door to new generative models agnostic to data modality.
3,162
2410.20587
[ 0.0004361197352409363, -0.007574766408652067, -0.019560372456908226, 0.06475751101970673, 0.059189390391111374, 0.054846927523612976, 0.0004407609230838716, 0.012155727483332157, -0.006689924746751785, -0.04200601205229759, -0.0009205005480907857, -0.02510157600045204, -0.075786292552948, ...
167
No Pose, No Problem: Surprisingly Simple 3D Gaussian Splats from Sparse Unposed Images
https://openreview.net/forum?id=P4o9akekdf
[ "Botao Ye", "Sifei Liu", "Haofei Xu", "Xueting Li", "Marc Pollefeys", "Ming-Hsuan Yang", "Songyou Peng" ]
Oral
We introduce NoPoSplat, a feed-forward model capable of reconstructing 3D scenes parameterized by 3D Gaussians from unposed sparse multi-view images. Our model, trained exclusively with photometric loss, achieves real-time 3D Gaussian reconstruction during inference. To eliminate the need for accurate pose input during...
3D Gaussian Splatting, Pose Free, Pose Estimation, Novel View Synthesis, 3D Reconstruction
NoPoSplat is a novel feed-forward model that reconstructs scenes from unposed images by predicting Gaussians in a canonical space, demonstrating superior performance in both novel view synthesis and pose estimation.
3,116
2410.24207
[ 0.009094174019992352, -0.015740951523184776, 0.01457470003515482, 0.04789816215634346, 0.020473642274737358, 0.020499110221862793, 0.010584414005279541, 0.01406576856970787, -0.03082558885216713, -0.05072568356990814, -0.006453048903495073, -0.005518066231161356, -0.09061659872531891, -0.0...
168
Variational Diffusion Posterior Sampling with Midpoint Guidance
https://openreview.net/forum?id=6EUtjXAvmj
[ "Badr MOUFAD", "Yazid Janati", "Lisa Bedin", "Alain Oliviero Durmus", "randal douc", "Eric Moulines", "Jimmy Olsson" ]
Oral
Diffusion models have recently shown considerable potential in solving Bayesian inverse problems when used as priors. However, sampling from the resulting denoising posterior distributions remains a challenge as it involves intractable terms. To tackle this issue, state-of-the-art approaches formulate the problem as th...
Diffusion models, Inverse problems, posterior sampling
null
3,058
2410.09945
[ -0.02341112308204174, -0.0074170310981571674, -0.009132299572229385, 0.016020940616726875, 0.06401336938142776, 0.039845824241638184, 0.009257789701223373, -0.03534034639596939, -0.023834431543946266, -0.07225432991981506, 0.011773660778999329, 0.006353507749736309, -0.0010774028487503529, ...
169
Towards Understanding Why FixMatch Generalizes Better Than Supervised Learning
https://openreview.net/forum?id=25kAzqzTrz
[ "Jingyang Li", "Jiachun Pan", "Vincent Y. F. Tan", "Kim-chuan Toh", "Pan Zhou" ]
Oral
Semi-supervised learning (SSL), exemplified by FixMatch (Sohn et al., 2020), has shown significant generalization advantages over supervised learning (SL), particularly in the context of deep neural networks (DNNs). However, it is still unclear, from a theoretical standpoint, why FixMatch-like SSL algorithms generalize...
deep semi-supervised learning, generalization error, feature learning
null
2,984
2410.11206
[ 0.01908169314265251, -0.058533523231744766, -0.0203908272087574, 0.05314628407359123, 0.05805392190814018, 0.021974340081214905, 0.02078353241086006, 0.028040919452905655, -0.016440793871879578, -0.04043226316571236, -0.014711379073560238, 0.0054578594863414764, -0.09565158933401108, 0.002...
170
NeuralPlane: Structured 3D Reconstruction in Planar Primitives with Neural Fields
https://openreview.net/forum?id=5UKrnKuspb
[ "Hanqiao Ye", "Yuzhou Liu", "Yangdong Liu", "Shuhan Shen" ]
Oral
3D maps assembled from planar primitives are compact and expressive in representing man-made environments. In this paper, we present **NeuralPlane**, a novel approach that explores **neural** fields for multi-view 3D **plane** reconstruction. Our method is centered upon the core idea of distilling geometric and semanti...
3D Reconstruction, 3D Scene Understanding, Scene Abstraction, Neural Rendering
NeuralPlane rebuilds indoor scenes as arrangements of planar primitives from multi-view images.
2,933
null
[ -0.006603008136153221, -0.022976508364081383, -0.0076772840693593025, 0.029331713914871216, 0.02652641199529171, 0.04478384554386139, -0.019443700090050697, -0.007254574913531542, -0.015176226384937763, -0.0681699812412262, -0.004066172521561384, -0.02159043774008751, -0.05857740715146065, ...
171
Sparse Feature Circuits: Discovering and Editing Interpretable Causal Graphs in Language Models
https://openreview.net/forum?id=I4e82CIDxv
[ "Samuel Marks", "Can Rager", "Eric J Michaud", "Yonatan Belinkov", "David Bau", "Aaron Mueller" ]
Oral
We introduce methods for discovering and applying **sparse feature circuits**. These are causally implicated subnetworks of human-interpretable features for explaining language model behaviors. Circuits identified in prior work consist of polysemantic and difficult-to-interpret units like attention heads or neurons, re...
Interpretability, mechanistic interpretability, circuits, spurious correlations, generalization, dictionary learning
We automatically discover circuits of interpretable components and apply them to remove sensitivity to spurious correlates
2,718
2403.19647
[ 0.002851861296221614, -0.0060537876561284065, -0.03188232332468033, 0.0445571169257164, 0.041939619928598404, 0.044133249670267105, 0.039609361439943314, 0.021708263084292412, -0.03502894937992096, -0.027925394475460052, 0.013339271768927574, 0.017971616238355637, -0.05653626471757889, 0.0...
172
Retrieval Head Mechanistically Explains Long-Context Factuality
https://openreview.net/forum?id=EytBpUGB1Z
[ "Wenhao Wu", "Yizhong Wang", "Guangxuan Xiao", "Hao Peng", "Yao Fu" ]
Oral
Despite the recent progress in long-context language models, it remains elusive how transformer-based models exhibit the capability to retrieve relevant information from arbitrary locations within the long context. This paper aims to address this question. Our systematic investigation across a wide spectrum of models r...
Large language models, long context, interpretability, attention
We study the retrieval head, a special type of attention head that mechanistically explains long-context factuality
2,659
2404.15574
[ -0.017197996377944946, 0.003378302324563265, -0.011709215119481087, 0.0334000438451767, 0.022558411583304405, -0.015448681078851223, 0.016340922564268112, 0.03359635919332504, -0.04041583091020584, -0.00868156272917986, -0.06442572921514511, 0.03127094358205795, -0.0340406596660614, -0.007...
173
High-Dynamic Radar Sequence Prediction for Weather Nowcasting Using Spatiotemporal Coherent Gaussian Representation
https://openreview.net/forum?id=Cjz9Xhm7sI
[ "Ziye Wang", "Yiran Qin", "Lin Zeng", "Ruimao Zhang" ]
Oral
Weather nowcasting is an essential task that involves predicting future radar echo sequences based on current observations, offering significant benefits for disaster management, transportation, and urban planning. Current prediction methods are limited by training and storage efficiency, mainly focusing on 2D spatial ...
3D Gaussian, Dynamic Reconstruction, Radar Prediction, Weather Nowcasting
null
2,603
2502.14895
[ -0.0027598021551966667, -0.03737422823905945, 0.023493213579058647, 0.014547033235430717, 0.024858856573700905, 0.023586362600326538, 0.04712234437465668, 0.030000394210219383, -0.05561790242791176, -0.06079983338713646, -0.012353962287306786, -0.0004013122234027833, -0.037813592702150345, ...
174
Differential Transformer
https://openreview.net/forum?id=OvoCm1gGhN
[ "Tianzhu Ye", "Li Dong", "Yuqing Xia", "Yutao Sun", "Yi Zhu", "Gao Huang", "Furu Wei" ]
Oral
Transformer tends to overallocate attention to irrelevant context. In this work, we introduce Diff Transformer, which amplifies attention to the relevant context while canceling noise. Specifically, the differential attention mechanism calculates attention scores as the difference between two separate softmax attention...
sequence modeling, language models, model architecture, Transformer
null
2,557
2410.05258
[ 0.026388246566057205, -0.00851830467581749, -0.013001589104533195, 0.045401886105537415, 0.019594833254814148, 0.02199419215321541, 0.01528747845441103, 0.011231077834963799, -0.002515207277610898, -0.011708706617355347, -0.009193873964250088, 0.02458740957081318, -0.06775497645139694, 0.0...
175
Open-Vocabulary Customization from CLIP via Data-Free Knowledge Distillation
https://openreview.net/forum?id=1aF2D2CPHi
[ "Yongxian Wei", "Zixuan Hu", "Li Shen", "Zhenyi Wang", "Chun Yuan", "Dacheng Tao" ]
Oral
Vision-language models such as CLIP have demonstrated strong zero-shot performance, but their considerable size and inefficient inference limit customizable deployment for users. While knowledge distillation is a solution, it still requires the original data, which is not always available due to copyrights and privacy ...
Data-Free Learning, CLIP Model, Customization
Could we distill models from CLIP without data to meet customized tasks?
2,525
null
[ 0.02264239639043808, -0.015413038432598114, 0.005889678839594126, 0.0921197235584259, 0.07192056626081467, 0.013372340239584446, 0.04415609687566757, -0.0008928619208745658, 0.007335658185184002, -0.020002800971269608, -0.051016390323638916, 0.027322303503751755, -0.07642067223787308, -0.0...
176
Trust or Escalate: LLM Judges with Provable Guarantees for Human Agreement
https://openreview.net/forum?id=UHPnqSTBPO
[ "Jaehun Jung", "Faeze Brahman", "Yejin Choi" ]
Oral
We present a principled approach to provide LLM-based evaluation with a rigorous guarantee of human agreement. We first propose that a reliable evaluation method should not uncritically rely on model preferences for pairwise evaluation, but rather assess the confidence of judge models and selectively decide when to tru...
Large Language Model, LLM, LLM Judge, Evaluation, Alignment
We propose Cascaded Selective Evaluation, an LLM-as-Judge framework that dynamically selects when to trust different judge models to reduce evaluation overhead, while providing a provable guarantee of human-judge agreement.
2,430
2407.18370
[ 0.0077982754446566105, -0.027189873158931732, 0.0013822488253936172, 0.05961312726140022, 0.04695773497223854, 0.006237925961613655, 0.0372375063598156, 0.019108517095446587, -0.0014627482742071152, -0.027429083362221718, 0.010336737148463726, 0.05704951286315918, -0.0861314907670021, -0.0...
177
Your Mixture-of-Experts LLM Is Secretly an Embedding Model for Free
https://openreview.net/forum?id=eFGQ97z5Cd
[ "Ziyue Li", "Tianyi Zhou" ]
Oral
While large language models (LLMs) excel on generation tasks, their decoder-only architecture often limits their potential as embedding models if no further representation finetuning is applied. Does this contradict their claim of being generalists? To answer the question, we take a closer look at Mixture-of-Experts (MoE) LL...
Mixture of Experts
null
2,416
2410.10814
[ 0.01866007037460804, -0.04574169963598251, 0.0027300985530018806, 0.03341127187013626, 0.05388783663511276, -0.0011398241622373462, 0.03188943490386009, 0.012038083747029305, -0.014294233173131943, 0.002665274078026414, -0.004699633456766605, 0.04657762870192528, -0.037958137691020966, -0....
178
REEF: Representation Encoding Fingerprints for Large Language Models
https://openreview.net/forum?id=SnDmPkOJ0T
[ "Jie Zhang", "Dongrui Liu", "Chen Qian", "Linfeng Zhang", "Yong Liu", "Yu Qiao", "Jing Shao" ]
Oral
Protecting the intellectual property of open-source Large Language Models (LLMs) is very important, because training LLMs requires extensive computational resources and data. Therefore, model owners and third parties need to identify whether a suspect model is a subsequent development of the victim model. To this end, we ...
Large Language Model, Fingerprint, Representation, Intellectual Property
null
2,401
2410.14273
[ -0.0399414487183094, -0.028650641441345215, -0.03354748338460922, 0.045109398663043976, 0.03066273033618927, 0.030128302052617073, 0.036477766931056976, -0.001094074104912579, -0.037906620651483536, -0.004481864627450705, -0.015442994423210621, 0.05707664415240288, -0.06390377879142761, 0....
179
Flat Reward in Policy Parameter Space Implies Robust Reinforcement Learning
https://openreview.net/forum?id=4OaO3GjP7k
[ "Hyun Kyu Lee", "Sung Whan Yoon" ]
Oral
The investigation of flat minima on loss surfaces in parameter space is well documented in the supervised learning context, highlighting their advantages for model generalization. However, limited attention has been paid to the reinforcement learning (RL) context, where the impact of flatter reward landscapes in policy paramete...
Reinforcement learning, Flat Minima, Robust Reinforcement learning
null
2,326
null
[ -0.03053046204149723, -0.01865505240857601, -0.0025920921470969915, 0.03982967510819435, 0.0359712615609169, 0.02874215878546238, 0.01597285456955433, -0.005274283699691296, -0.032566726207733154, -0.0533156655728817, 0.0033391087781637907, -0.0018990007229149342, -0.06742362678050995, -0....
180
LLM-SR: Scientific Equation Discovery via Programming with Large Language Models
https://openreview.net/forum?id=m2nmp8P5in
[ "Parshin Shojaee", "Kazem Meidani", "Shashank Gupta", "Amir Barati Farimani", "Chandan K. Reddy" ]
Oral
Mathematical equations have been unreasonably effective in describing complex natural phenomena across various scientific disciplines. However, discovering such insightful equations from data presents significant challenges due to the necessity of navigating extremely large combinatorial hypothesis spaces. Current meth...
Symbolic Regression, Equation Discovery, Large Language Models, Evolutionary Search
We introduce LLM-SR, an approach that harnesses Large Language Models (LLMs) to discover governing equations from data in an efficient, knowledge-guided manner.
2,272
null
[ -0.03397534042596817, -0.0035560117103159428, -0.0026082503609359264, 0.024288861081004143, 0.05451451241970062, 0.02923044003546238, 0.029769858345389366, -0.014450125396251678, -0.023196034133434296, -0.006598854903131723, 0.011779905296862125, 0.03381120786070824, -0.05553211271762848, ...
181
Backtracking Improves Generation Safety
https://openreview.net/forum?id=Bo62NeU6VF
[ "Yiming Zhang", "Jianfeng Chi", "Hailey Nguyen", "Kartikeya Upasani", "Daniel M. Bikel", "Jason E Weston", "Eric Michael Smith" ]
Oral
Text generation has a fundamental limitation almost by definition: there is no taking back tokens that have been generated, even when they are clearly problematic. In the context of language model safety, when a partial unsafe generation is produced, language models by their nature tend to happily keep on generating si...
AI safety, Generation algorithm, Backtracking
We introduce a backtracking technique that trains language models to recover from unsafe generations and substantially improves generation safety.
2,265
2409.14586
[ -0.03697201609611511, -0.01572187803685665, -0.025554070249199867, 0.06106230616569519, 0.02921713888645172, 0.011534013785421848, 0.06043552979826927, 0.012444326654076576, -0.036676909774541855, -0.009168464690446854, -0.027732914313673973, 0.032231349498033524, -0.04583045095205307, -0....
182
Rethinking the generalization of drug target affinity prediction algorithms via similarity aware evaluation
https://openreview.net/forum?id=j7cyANIAxV
[ "Chenbin Zhang", "Zhiqiang Hu", "Jiang Chuchu", "Wen Chen", "JIE XU", "Shaoting Zhang" ]
Oral
Drug-target binding affinity prediction is a fundamental task for drug discovery. It has been extensively explored in the literature, and promising results have been reported. However, in this paper, we demonstrate that the results may be misleading and cannot be well generalized to real practice. The core observation is that the...
Drug-Target Affinity Prediction, Similarity-Aware Evaluation
null
2,093
null
[ 0.00020293158013373613, -0.04750838875770569, -0.0033363301772624254, 0.017835278064012527, 0.05591541528701782, 0.0004712123773060739, 0.0196369718760252, -0.04319482296705246, 0.026572424918413162, -0.031080611050128937, -0.010296680964529514, 0.031575724482536316, -0.08923089504241943, ...
183
GridMix: Exploring Spatial Modulation for Neural Fields in PDE Modeling
https://openreview.net/forum?id=Fur0DtynPX
[ "Honghui Wang", "Shiji Song", "Gao Huang" ]
Oral
Significant advancements have been achieved in PDE modeling using neural fields. Despite their effectiveness, existing methods rely on global modulation, limiting their ability to reconstruct local details. While spatial modulation with vanilla grid-based representations offers a promising alternative, it struggles wit...
Partial Differential Equations, Neural Fields
null
2,066
null
[ -0.028564125299453735, -0.04091569036245346, -0.00013346232299227268, 0.02268022485077381, 0.017605286091566086, 0.01880696229636669, -0.008587140589952469, 0.0105228740721941, -0.04415608197450638, -0.06132832542061806, 0.006349900271743536, -0.03537712246179581, -0.03867786377668381, 0.0...
184
Data Selection via Optimal Control for Language Models
https://openreview.net/forum?id=dhAL5fy8wS
[ "Yuxian Gu", "Li Dong", "Hongning Wang", "Yaru Hao", "Qingxiu Dong", "Furu Wei", "Minlie Huang" ]
Oral
This work investigates the selection of high-quality pre-training data from massive corpora to enhance LMs' capabilities for downstream usage. We formulate data selection as a generalized Optimal Control problem, which can be solved theoretically by Pontryagin's Maximum Principle (PMP), yielding a set of necessary con...
Pre-training Language Models, Data Selection, Optimal Control
This paper introduces a framework to select high-quality pre-training data via optimal control.
2,015
2410.07064
[ -0.08673499524593353, -0.005085158161818981, -0.011666552163660526, 0.06433041393756866, 0.0495704784989357, 0.05507770553231239, 0.011679016053676605, 0.01944190077483654, -0.029733354225754738, -0.019804542884230614, -0.021999981254339218, 0.027467790991067886, -0.06963787227869034, -0.0...
185
Simplifying, Stabilizing and Scaling Continuous-time Consistency Models
https://openreview.net/forum?id=LyJi5ugyJx
[ "Cheng Lu", "Yang Song" ]
Oral
Consistency models (CMs) are a powerful class of diffusion-based generative models optimized for fast sampling. Most existing CMs are trained using discretized timesteps, which introduce additional hyperparameters and are prone to discretization errors. While continuous-time formulations can mitigate these issues, thei...
continuous-time consistency models, diffusion models, fast sampling
2-step continuous-time consistency models reduce the gap to within 10% in sample quality (FID) compared to the best diffusion models
1,982
2410.11081
[ 0.003316558199003339, -0.04398882016539574, -0.02340715005993843, 0.06920589506626129, 0.0482357032597065, 0.026895539835095406, 0.007069791201502085, -0.006253985688090324, -0.02186404913663864, -0.0682375431060791, 0.014312943443655968, -0.051250699907541275, -0.0661320835351944, 0.01162...
186
Dynamic Multimodal Evaluation with Flexible Complexity by Vision-Language Bootstrapping
https://openreview.net/forum?id=X1OfiRYCLn
[ "Yue Yang", "Shuibo Zhang", "Kaipeng Zhang", "Yi Bin", "Yu Wang", "Ping Luo", "Wenqi Shao" ]
Oral
Large Vision-Language Models (LVLMs) have demonstrated remarkable capabilities across multimodal tasks such as visual perception and reasoning, leading to good performance on various multimodal evaluation benchmarks. However, these benchmarks are static in nature and overlap with the pre-training data, resulting in fix...
Dynamic Evaluation, Vision-Language Bootstrapping, data contamination, Flexible Complexity, Large Vision-Language Model
We develop the first dynamic multimodal evaluation protocol with flexible complexity via Vision-Language Bootstrapping.
1,837
2410.08695
[ 0.00867460947483778, 0.008789703249931335, 0.013669866137206554, 0.034645650535821915, 0.033932287245988846, 0.015656501054763794, 0.029430696740746498, 0.032082099467515945, -0.03584447503089905, -0.0088704414665699, -0.03994090482592583, 0.04104915261268616, -0.07268528640270233, -0.0057...
187
Mind the Gap: Examining the Self-Improvement Capabilities of Large Language Models
https://openreview.net/forum?id=mtJSMcF3ek
[ "Yuda Song", "Hanlin Zhang", "Carson Eisenach", "Sham M. Kakade", "Dean Foster", "Udaya Ghai" ]
Oral
Self-improvement is a mechanism in Large Language Model (LLM) pre-training, post-training and test-time inference. We explore a framework where the model verifies its own outputs, filters or reweights data based on this verification, and distills the filtered data. Despite several empirical successes, a fundamental un...
LLM, self-improvement, synthetic data, post-training, test-time optimization
We conduct a comprehensive examination on LLM self-improvement capability via the generation-verification gap.
1,706
2412.02674
[ -0.01551152765750885, -0.015235682018101215, -0.002846313873305917, 0.027955645695328712, 0.07019605487585068, 0.0029121642000973225, 0.06252952665090561, 0.018156085163354874, -0.0327078215777874, 0.010989099740982056, -0.008066265843808651, 0.029476724565029144, -0.05646877735853195, -0....
188
SANA: Efficient High-Resolution Text-to-Image Synthesis with Linear Diffusion Transformers
https://openreview.net/forum?id=N8Oj1XhtYZ
[ "Enze Xie", "Junsong Chen", "Junyu Chen", "Han Cai", "Haotian Tang", "Yujun Lin", "Zhekai Zhang", "Muyang Li", "Ligeng Zhu", "Yao Lu", "Song Han" ]
Oral
We introduce Sana, a text-to-image framework that can efficiently generate images up to 4096$\times$4096 resolution. Sana can synthesize high-resolution, high-quality images with strong text-image alignment at a remarkably fast speed, deployable on a laptop GPU. Core designs include: (1) Deep compression autoencoder: unl...
Efficient AI, Diffusion Models, Text to Image generation
Sana can synthesize high-resolution, high-quality images with strong text-image alignment at a remarkably fast speed.
1,682
null
[ 0.0020427831914275885, -0.0353410467505455, -0.024531902745366096, 0.05679412558674812, 0.025865191593766212, 0.04180269315838814, 0.01725098304450512, 0.008806383237242699, -0.008128548040986061, -0.05537410452961922, -0.014300738461315632, -0.010834618471562862, -0.04357776418328285, 0.0...
189
Can a MISL Fly? Analysis and Ingredients for Mutual Information Skill Learning
https://openreview.net/forum?id=xoIeVdFO7U
[ "Chongyi Zheng", "Jens Tuyls", "Joanne Peng", "Benjamin Eysenbach" ]
Oral
Self-supervised learning has the potential to lift several of the key challenges in reinforcement learning today, such as exploration, representation learning, and reward design. Recent work (METRA) has effectively argued that moving away from mutual information and instead optimizing a certain Wasserstein distance ...
unsupervised learning, reinforcement learning, mutual information, successor feature
Through careful analysis of a prior method, we develop a new method called Contrastive Successor Features (CSF), which illustrates that mutual information skill learning can be made highly effective.
1,383
2412.08021
[ -0.036635927855968475, -0.027617041021585464, -0.03510095924139023, 0.05369461700320244, 0.04286420717835426, -0.003436744213104248, 0.02871338091790676, -0.0000070198511821217835, -0.022948039695620537, -0.019053377211093903, -0.006530373822897673, 0.04516557976603508, -0.07319672405719757,...
190
When Selection Meets Intervention: Additional Complexities in Causal Discovery
https://openreview.net/forum?id=xByvdb3DCm
[ "Haoyue Dai", "Ignavier Ng", "Jianle Sun", "Zeyu Tang", "Gongxu Luo", "Xinshuai Dong", "Peter Spirtes", "Kun Zhang" ]
Oral
We address the common yet often-overlooked selection bias in interventional studies, where subjects are selectively enrolled into experiments. For instance, participants in a drug trial are usually patients of the relevant disease; A/B tests on mobile applications target existing users only, and gene perturbation studi...
causal discovery, selection bias, experiments, interventions
null
1,361
2503.07302
[ -0.03803445026278496, -0.03451394662261009, -0.03024286963045597, 0.0057774861343204975, 0.03536522760987282, 0.028723780065774918, 0.04599682241678238, 0.010019383393228054, -0.017537783831357956, -0.04511569067835808, 0.003959920257329941, 0.011458142660558224, -0.048508018255233765, -0....
191
LVSM: A Large View Synthesis Model with Minimal 3D Inductive Bias
https://openreview.net/forum?id=QQBPWtvtcn
[ "Haian Jin", "Hanwen Jiang", "Hao Tan", "Kai Zhang", "Sai Bi", "Tianyuan Zhang", "Fujun Luan", "Noah Snavely", "Zexiang Xu" ]
Oral
We propose the Large View Synthesis Model (LVSM), a novel transformer-based approach for scalable and generalizable novel view synthesis from sparse-view inputs. We introduce two architectures: (1) an encoder-decoder LVSM, which encodes input image tokens into a fixed number of 1D latent tokens, functioning as a fully ...
novel view synthesis, transformer, large model
We put forward a purely transformer-based large view synthesis model, which achieves impressive novel view synthesis results on both object-level and scene-level with minimal 3D inductive bias.
1,355
2410.17242
[ 0.016413509845733643, -0.00033669694676063955, 0.022358926013112068, 0.02272763103246689, 0.007922768592834473, 0.0442461334168911, -0.004105798900127411, 0.02264845184981823, -0.03145633637905121, -0.03165282681584358, -0.014397624880075455, -0.002679596422240138, -0.05785830691456795, 0....
192
Flow Matching with General Discrete Paths: A Kinetic-Optimal Perspective
https://openreview.net/forum?id=tcvMzR2NrP
[ "Neta Shaul", "Itai Gat", "Marton Havasi", "Daniel Severo", "Anuroop Sriram", "Peter Holderrieth", "Brian Karrer", "Yaron Lipman", "Ricky T. Q. Chen" ]
Oral
The design space of discrete-space diffusion or flow generative models is significantly less well understood than that of their continuous-space counterparts, with many works focusing only on a simple masked construction. In this work, we aim to take a holistic approach to the construction of discrete generative models based ...
flow matching, discrete generative modeling
Through the lens of kinetic optimality, we expand the design space of Discrete Flow Matching, allowing the use of any probability path and simultaneously justifying existing mixture paths.
1,351
2412.03487
[ 0.010729641653597355, 0.012492436915636063, -0.022303063422441483, 0.07465232908725739, 0.04840654507279396, 0.02376106008887291, 0.00931857991963625, 0.019636837765574455, 0.0035937069915235043, -0.05650416761636734, -0.0045673297718167305, -0.01244468241930008, -0.06301918625831604, -0.0...
193
Cut Your Losses in Large-Vocabulary Language Models
https://openreview.net/forum?id=E4Fk3YuG56
[ "Erik Wijmans", "Brody Huval", "Alexander Hertzberg", "Vladlen Koltun", "Philipp Kraehenbuehl" ]
Oral
As language models grow ever larger, so do their vocabularies. This has shifted the memory footprint of LLMs during training disproportionately to one single layer: the cross-entropy in the loss computation. Cross-entropy builds up a logit matrix with entries for each pair of input tokens and vocabulary items and, for ...
large language model, large vocabulary, efficient
We propose Cut Cross-Entropy (CCE), a method that computes the cross-entropy loss with negligible memory consumption.
1,344
2411.09009
[ -0.03541720286011696, -0.019945302978157997, -0.00440195482224226, 0.018612168729305267, 0.03752338886260986, 0.030753444880247116, 0.018926797434687614, 0.016316024586558342, -0.018583297729492188, -0.000925474043469876, -0.031161513179540634, 0.04351704195141792, -0.04324941709637642, -0...
194
AFlow: Automating Agentic Workflow Generation
https://openreview.net/forum?id=z5uVAKwmjf
[ "Jiayi Zhang", "Jinyu Xiang", "Zhaoyang Yu", "Fengwei Teng", "Xiong-Hui Chen", "Jiaqi Chen", "Mingchen Zhuge", "Xin Cheng", "Sirui Hong", "Jinlin Wang", "Bingnan Zheng", "Bang Liu", "Yuyu Luo", "Chenglin Wu" ]
Oral
Large language models (LLMs) have demonstrated remarkable potential in solving complex tasks across diverse domains, typically by employing agentic workflows that follow detailed instructions and operational sequences. However, constructing these workflows requires significant human effort, limiting scalability and gen...
LLM Agent; Prompt Optimization; Workflow Generation
We introduce the field of Agentic Workflow Optimization and propose an effective search algorithm called AFLOW, enabling it to surpass manually constructed workflows on six reasoning datasets.
1,308
2410.10762
[ 0.01045062392950058, -0.05524498224258423, -0.0068790726363658905, 0.015626583248376846, 0.0415695421397686, 0.0008456604555249214, 0.03840180113911629, 0.03728412091732025, -0.011685584671795368, -0.03753706067800522, -0.03024861216545105, -0.0002926137822214514, -0.08335328102111816, -0....
195
Two Effects, One Trigger: On the Modality Gap, Object Bias, and Information Imbalance in Contrastive Vision-Language Models
https://openreview.net/forum?id=uAFHCZRmXk
[ "Simon Schrodi", "David T. Hoffmann", "Max Argus", "Volker Fischer", "Thomas Brox" ]
Oral
Contrastive vision-language models (VLMs), like CLIP, have gained popularity for their versatile applicability to various downstream tasks. Despite their successes in some tasks, like zero-shot object recognition, they perform surprisingly poorly on other tasks, like attribute recognition. Previous work has attributed th...
CLIP, modality gap, object bias, contrastive loss, data-centric, vision language models, VLM
We find that an information imbalance between images and texts leads to the modality gap and object bias of contrastive VLMs. We study both phenomena in depth, eliminate common misconceptions, and improve the understanding of both of them.
1,079
2404.07983
[ 0.016726678237318993, -0.006162658333778381, 0.012613823637366295, 0.046974100172519684, 0.023421579971909523, -0.03001248650252819, 0.06355578452348709, 0.044731345027685165, -0.037273306399583817, -0.03099929727613926, -0.0413132943212986, 0.038081955164670944, -0.08204766362905502, -0.0...
196
FlexPrefill: A Context-Aware Sparse Attention Mechanism for Efficient Long-Sequence Inference
https://openreview.net/forum?id=OfjIlbelrT
[ "Xunhao Lai", "Jianqiao Lu", "Yao Luo", "Yiyuan Ma", "Xun Zhou" ]
Oral
Large language models (LLMs) encounter computational challenges during long-sequence inference, especially in the attention pre-filling phase, where the complexity grows quadratically with the prompt length. Previous efforts to mitigate these challenges have relied on fixed sparse attention patterns or identifying spar...
Large Language Models (LLMs), LLM inference, Long-context LLMs, Sparse Attention Mechanism
FlexPrefill is a novel sparse attention mechanism for large language models that dynamically adapts attention patterns and computational budgets in real-time to optimize performance for each input and attention head.
1,022
2502.20766
[ 0.010869160294532776, -0.03466133400797844, 0.0027790952008217573, 0.01680801436305046, 0.0337640717625618, 0.03376683592796326, 0.005038573872298002, 0.01280043926090002, -0.062480077147483826, -0.006130644120275974, -0.015445824712514877, 0.03114846721291542, -0.03785083442926407, -0.006...
197
REGENT: A Retrieval-Augmented Generalist Agent That Can Act In-Context in New Environments
https://openreview.net/forum?id=NxyfSW6mLK
[ "Kaustubh Sridhar", "Souradeep Dutta", "Dinesh Jayaraman", "Insup Lee" ]
Oral
Building generalist agents that can rapidly adapt to new environments is a key challenge for deploying AI in the digital and real worlds. Is scaling current agent architectures the most effective way to build generalist agents? We propose a novel approach to pre-train relatively small policies on relatively small datas...
Generalist Agent, Retrieval, In-Context Learning, VLA, Imitation Learning, Reinforcement Learning
We propose a retrieval-augmented generalist agent that can adapt to new environments via in-context learning
961
2412.04759
[ -0.0340435765683651, -0.03150881081819534, -0.0013505255337804556, 0.04562932625412941, 0.036843977868556976, -0.0063585275784134865, 0.016218138858675957, -0.0021544224582612514, -0.02541651949286461, -0.03261656314134598, -0.04243304580450058, 0.03398275747895241, -0.07733441144227982, -...
198
MMIE: Massive Multimodal Interleaved Comprehension Benchmark for Large Vision-Language Models
https://openreview.net/forum?id=HnhNRrLPwm
[ "Peng Xia", "Siwei Han", "Shi Qiu", "Yiyang Zhou", "Zhaoyang Wang", "Wenhao Zheng", "Zhaorun Chen", "Chenhang Cui", "Mingyu Ding", "Linjie Li", "Lijuan Wang", "Huaxiu Yao" ]
Oral
Interleaved multimodal comprehension and generation, enabling models to produce and interpret both images and text in arbitrary sequences, have become a pivotal area in multimodal learning. Despite significant advancements, the evaluation of this capability remains insufficient. Existing benchmarks suffer from limitati...
large vision-language model, interleaved text-and-image evaluation
null
944
2410.10139
[ 0.0014480737736448646, 0.00006354741344694048, -0.0037072934210300446, 0.023094378411769867, 0.013322955928742886, 0.018104204908013344, 0.015422710217535496, 0.022922247648239136, -0.03879848122596741, 0.0032563682179898024, -0.006946631241589785, 0.04888742044568062, -0.048204921185970306,...
199
Do as We Do, Not as You Think: the Conformity of Large Language Models
https://openreview.net/forum?id=st77ShxP1K
[ "Zhiyuan Weng", "Guikun Chen", "Wenguan Wang" ]
Oral
Recent advancements in large language models (LLMs) revolutionize the field of intelligent agents, enabling collaborative multi-agent systems capable of tackling complex problems across various domains. However, the potential of conformity within these systems, analogous to phenomena like conformity bias and group-thin...
Large Language Models, Conformity, Multi-agent System
null
934
2501.13381
[ -0.010422773659229279, -0.004617293830960989, -0.015834756195545197, 0.0035755946300923824, 0.04446205124258995, -0.02598157711327076, 0.04335503652691841, 0.03815736249089241, -0.02035856992006302, -0.030684905126690865, -0.03062272258102894, 0.021455347537994385, -0.06998851895332336, -0...
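Every record above follows the same row layout: title, OpenReview URL, author list, presentation type, truncated abstract, keywords, TL;DR (or null), submission number, arXiv id (or null), and a truncated embedding vector. As a minimal sketch of how such rows might be used once exported in full, the Python below ranks papers by cosine similarity of their embedding vectors; the JSONL export path, filename, and helper names are assumptions for illustration, not part of the dataset itself.

```python
# Minimal usage sketch, NOT part of the dataset: assumes the rows shown above
# have been exported in full as one JSON object per line, keyed by the field
# names used here ("title", "embedding"). Export format is an assumption.
import json
import numpy as np

def load_records(path):
    """Load one JSON record per line (JSONL)."""
    with open(path) as f:
        return [json.loads(line) for line in f]

def cosine(a, b):
    """Cosine similarity between two embedding vectors."""
    a = np.asarray(a, dtype=np.float32)
    b = np.asarray(b, dtype=np.float32)
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def most_similar(records, query_idx, k=5):
    """Rank all other papers by embedding similarity to records[query_idx]."""
    q = records[query_idx]["embedding"]
    scored = [(cosine(q, r["embedding"]), r["title"])
              for i, r in enumerate(records) if i != query_idx]
    return sorted(scored, key=lambda s: s[0], reverse=True)[:k]

# Example: nearest neighbours of the first paper in the (hypothetical) file.
# records = load_records("orals.jsonl")
# for score, title in most_similar(records, 0):
#     print(f"{score:.3f}  {title}")
```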