paper_id (uint32, 0–3.7k) | title (string, 14–154 chars) | paper_url (string, 42 chars) | authors (list, 1–21 names) | type (string, 3 classes) | abstract (string, 413–2.52k chars) | keywords (string, 4–397 chars) | TL;DR (string, 5–250 chars, nullable) | submission_number (int64, 2–14.3k) | arxiv_id (string, 10 chars, nullable) | embedding (list of 768 floats) |
|---|---|---|---|---|---|---|---|---|---|---|
200 | Artificial Kuramoto Oscillatory Neurons | https://openreview.net/forum?id=nwDRD4AMoN | ["Takeru Miyato", "Sindy Löwe", "Andreas Geiger", "Max Welling"] | Oral | It has long been known in both neuroscience and AI that "binding" between neurons leads to a form of competitive learning where representations are compressed in order to represent more abstract concepts in deeper layers of the network. More recently, it was also hypothesized that dynamic (spatiotemporal) representat... | Oscillatory neurons, Feature binding, Object-centric learning, Reasoning, Adversarial robustness | Oscillatory neurons strongly bind object features, can reason, and are robust to adversarial and natural perturbations | 923 | 2410.13821 | [-0.02339429222047329, -0.01943136565387249, -0.013396000489592552, ...] |
201 | Booster: Tackling Harmful Fine-tuning for Large Language Models via Attenuating Harmful Perturbation | https://openreview.net/forum?id=tTPHgb0EtV | ["Tiansheng Huang", "Sihao Hu", "Fatih Ilhan", "Selim Furkan Tekin", "Ling Liu"] | Oral | Harmful fine-tuning attack poses serious safety concerns for large language models' fine-tuning-as-a-service. While existing defenses have been proposed to mitigate the issue, their performances are still far away from satisfactory, and the root cause of the problem has not been fully recovered. To this end, we in this... | Harmful fine-tuning, LLM, safety alignment | This paper proposes Booster, an alignment stage solution against harmful fine-tuning issues for LLMs | 680 | 2409.01586 | [-0.005447108764201403, -0.040179282426834106, -0.021501902490854263, ...] |
202 | Unlearning-based Neural Interpretations | https://openreview.net/forum?id=PBjCTeDL6o | ["Ching Lam Choi", "Alexandre Duplessis", "Serge Belongie"] | Oral | Gradient-based interpretations often require an anchor point of comparison to avoid saturation in computing feature importance. We show that current baselines defined using static functions—constant mapping, averaging or blurring—inject harmful colour, texture or frequency assumptions that deviate from model behaviour.... | Explainability, Attribution, Debiasing, Bias | UNI computes a debiased, adaptive baseline for gradient-based interpretations by perturbing the input towards an unlearning direction of steepest ascent. | 604 | 2410.08069 | [0.00793299451470375, 0.012481088750064373, 0.010403219610452652, ...] |
203 | ChartMoE: Mixture of Diversely Aligned Expert Connector for Chart Understanding | https://openreview.net/forum?id=o5TsWTUSeF | ["Zhengzhuo Xu", "Bowen Qu", "Yiyan Qi", "SiNan Du", "Chengjin Xu", "Chun Yuan", "Jian Guo"] | Oral | Automatic chart understanding is crucial for content comprehension and document parsing. Multimodal Large Language Models (MLLMs) have demonstrated remarkable capabilities in chart understanding through domain-specific alignment and fine-tuning. However, current MLLMs still struggle to provide faithful data and reliabl... | Multimodal Large Language Models, Chart Reasoning, Mixture of Expert | null | 526 | 2409.03277 | [0.011819127015769482, -0.017962928861379623, 0.016073327511548996, ...] |
204 | Probabilistic Learning to Defer: Handling Missing Expert Annotations and Controlling Workload Distribution | https://openreview.net/forum?id=zl0HLZOJC9 | ["Cuong C. Nguyen", "Thanh-Toan Do", "Gustavo Carneiro"] | Oral | Recent progress in machine learning research is gradually shifting its focus towards *human-AI cooperation* due to the advantages of exploiting the reliability of human experts and the efficiency of AI models. One of the promising approaches in human-AI cooperation is *learning to defer* (L2D), where the system analyse... | learning to defer, expectation-maximisation | null | 451 | null | [-0.024222226813435555, -0.0489116795361042, -0.019497698172926903, ...] |
205 | A Decade's Battle on Dataset Bias: Are We There Yet? | https://openreview.net/forum?id=SctfBCLmWo | ["Zhuang Liu", "Kaiming He"] | Oral | We revisit the "dataset classification" experiment suggested by Torralba & Efros (2011) a decade ago, in the new era with large-scale, diverse, and hopefully less biased datasets as well as more capable neural network architectures. Surprisingly, we observe that modern neural networks can achieve excellent accuracy i... | Vision datasets, Dataset bias, Deep learning | Modern large-scale vision datasets that are supposedly very general and diverse, are in fact still very biased | 407 | 2403.08632 | [0.007584207225590944, -0.0526413768529892, -0.023706592619419098, ...] |
206 | Knowing Your Target: Target-Aware Transformer Makes Better Spatio-Temporal Video Grounding | https://openreview.net/forum?id=WOzffPgVjF | ["Xin Gu", "Yaojie Shen", "Chenxi Luo", "Tiejian Luo", "Yan Huang", "Yuewei Lin", "Heng Fan", "Libo Zhang"] | Oral | Transformer has attracted increasing interest in spatio-temporal video grounding, or STVG, owing to its end-to-end pipeline and promising results. Existing Transformer-based STVG approaches often leverage a set of object queries, which are initialized simply using zeros and then gradually learn target position informati... | Spatio-Temporal Video Grounding | null | 284 | 2502.11168 | [0.028935926035046577, -0.004198055248707533, 0.02762479893863201, ...] |
207 | Open-World Reinforcement Learning over Long Short-Term Imagination | https://openreview.net/forum?id=vzItLaEoDa | ["Jiajian Li", "Qi Wang", "Yunbo Wang", "Xin Jin", "Yang Li", "Wenjun Zeng", "Xiaokang Yang"] | Oral | Training visual reinforcement learning agents in a high-dimensional open world presents significant challenges. While various model-based methods have improved sample efficiency by learning interactive world models, these agents tend to be “short-sighted”, as they are typically trained on short snippets of imagined exp... | World models, reinforcement learning, visual control | null | 242 | 2410.03618 | [-0.03153219819068909, -0.008180930279195309, -0.013168460689485073, ...] |
208 | OLMoE: Open Mixture-of-Experts Language Models | https://openreview.net/forum?id=xXTkbTBmqq | ["Niklas Muennighoff", "Luca Soldaini", "Dirk Groeneveld", "Kyle Lo", "Jacob Morrison", "Sewon Min", "Weijia Shi", "Evan Pete Walsh", "Oyvind Tafjord", "Nathan Lambert", "Yuling Gu", "Shane Arora", "Akshita Bhagia", "Dustin Schwenk", "David Wadden", "Alexander Wettig", "Binyuan Hui", ...] | Oral | We introduce OLMoE, a fully open, state-of-the-art language model leveraging sparse Mixture-of-Experts (MoE). OLMoE-1B-7B has 7 billion (B) parameters but uses only 1B per input token. We pretrain it on 5 trillion tokens and further adapt it to create OLMoE-1B-7B-Instruct. Our models outperform all available models wit... | large language models, mixture-of-experts, open-source | A state-of-the-art Mixture-of-Experts LLM with 1B active and 7B total parameters trained for 5T tokens that is 100% open-source | 211 | 2409.02060 | [-0.013446763157844543, -0.008973155170679092, 0.0034317767713218927, ...] |
209 | Do Vision-Language Models Represent Space and How? Evaluating Spatial Frame of Reference under Ambiguities | https://openreview.net/forum?id=84pDoCD4lH | ["Zheyuan Zhang", "Fengyuan Hu", "Jayjun Lee", "Freda Shi", "Parisa Kordjamshidi", "Joyce Chai", "Ziqiao Ma"] | Oral | Spatial expressions in situated communication can be ambiguous, as their meanings vary depending on the frames of reference (FoR) adopted by speakers and listeners. While spatial language understanding and reasoning by vision-language models (VLMs) have gained increasing attention, potential ambiguities in these models... | vision-language models, spatial reasoning, multimodal reasoning | We present an evaluation protocol to systematically assess the spatial reasoning capabilities of vision language models, and shed light on the ambiguity and cross-cultural diversity of frame of reference in spatial reasoning. | 194 | 2410.17385 | [0.03193424642086029, 0.04985644295811653, 0.0010616299696266651, ...] |
210 | SAM 2: Segment Anything in Images and Videos | https://openreview.net/forum?id=Ha6RTeWMd0 | ["Nikhila Ravi", "Valentin Gabeur", "Yuan-Ting Hu", "Ronghang Hu", "Chaitanya Ryali", "Tengyu Ma", "Haitham Khedr", "Roman Rädle", "Chloe Rolland", "Laura Gustafson", "Eric Mintun", "Junting Pan", "Kalyan Vasudev Alwala", "Nicolas Carion", "Chao-Yuan Wu", "Ross Girshick", "Piotr Dolla...] | Oral | We present Segment Anything Model 2 (SAM 2), a foundation model towards solving promptable visual segmentation in images and videos. We build a data engine, which improves model and data via user interaction, to collect the largest video segmentation dataset to date. Our model is a simple transformer architecture with ... | computer vision, video segmentation, image segmentation | null | 92 | 2408.00714 | [0.0016219484386965632, -0.014239952899515629, 0.00016988060087896883, ...] |
211 | A Computational Framework for Modeling Emergence of Color Vision in the Human Brain | https://openreview.net/forum?id=g3xuCtrG6H | ["Atsunobu Kotani", "Ren Ng"] | Oral | It is a mystery how the brain decodes color vision purely from the optic nerve signals it receives, with a core inferential challenge being how it disentangles internal perception with the correct color dimensionality from the unknown encoding properties of the eye. In this paper, we introduce a computational framewor... | color vision, computational neuroscience, retina simulation, cortical learning, self-supervised learning, color blindness | This paper introduces a novel computational framework for modeling the emergence of human color vision by simulating the eye and the cortex. | 20 | 2408.16916 | [0.006800738163292408, 0.017521068453788757, -0.001819924684241414, ...] |
212 | PhysBench: Benchmarking and Enhancing Vision-Language Models for Physical World Understanding | https://openreview.net/forum?id=Q6a9W6kzv5 | ["Wei Chow", "Jiageng Mao", "Boyi Li", "Daniel Seita", "Vitor Campagnolo Guizilini", "Yue Wang"] | Oral | Understanding the physical world is a fundamental challenge in embodied AI, critical for enabling agents to perform complex tasks and operate safely in real-world environments. While Vision-Language Models (VLMs) have shown great promise in reasoning and task planning for embodied agents, their ability to comprehend ph... | vision-language, multi-modal understanding | We propose PhysBench to evaluate VLMs' physical understanding, highlighting their limitations and introducing PhysAgent to enhance VLMs' physical understanding. | 3 | 2501.16411 | [-0.004851648584008217, 0.011426898650825024, -0.00035570241743698716, ...] |
213 | Beyond Random Masking: When Dropout meets Graph Convolutional Networks | https://openreview.net/forum?id=PwxYoMvmvy | ["Yuankai Luo", "Xiao-Ming Wu", "Hao Zhu"] | Poster | Graph Convolutional Networks (GCNs) have emerged as powerful tools for learning on graph-structured data, yet the behavior of dropout in these models remains poorly understood. This paper presents a comprehensive theoretical analysis of dropout in GCNs, revealing that its primary role differs fundamentally from standar... | Graph neural networks, Dropout | null | 14,284 | null | [0.009391793049871922, -0.041465748101472855, 0.008272549137473106, ...] |
214 | Self-supervised contrastive learning performs non-linear system identification | https://openreview.net/forum?id=ONfWFluZBI | ["Rodrigo González Laiz", "Tobias Schmidt", "Steffen Schneider"] | Poster | Self-supervised learning (SSL) approaches have brought tremendous success across many tasks and domains. It has been argued that these successes can be attributed to a link between SSL and identifiable representation learning: Temporal structure and auxiliary variables ensure that latent representations are related to ... | system identification, dynamics learning, identifiability, self-supervised learning | null | 14,280 | 2410.14673 | [0.012531629763543606, -0.020865073427557945, -0.008165786042809486, ...] |
215 | Sparse autoencoders reveal selective remapping of visual concepts during adaptation | https://openreview.net/forum?id=imT03YXlG2 | ["Hyesu Lim", "Jinho Choi", "Jaegul Choo", "Steffen Schneider"] | Poster | Adapting foundation models for specific purposes has become a standard approach to build machine learning systems for downstream applications. Yet, it is an open question which mechanisms take place during adaptation. Here we develop a new Sparse Autoencoder (SAE) for the CLIP vision transformer, named PatchSAE, to ext... | interpretability, vision-language models, sparse autoencoder, adaptation | null | 14,240 | 2412.05276 | [0.031114311888813972, -0.017943067476153374, -0.005269259680062532, ...] |
216 | PIED: Physics-Informed Experimental Design for Inverse Problems | https://openreview.net/forum?id=w7P92BEsb2 | ["Apivich Hemachandra", "Gregory Kang Ruey Lau", "See-Kiong Ng", "Bryan Kian Hsiang Low"] | Poster | In many science and engineering settings, system dynamics are characterized by governing partial differential equations (PDEs), and a major challenge is to solve inverse problems (IPs) where unknown PDE parameters are inferred based on observational data gathered under limited budget. Due to the high costs of setting ... | Physics-Informed Neural Network, PINNs, Experimental Design, AI For Science, Active Learning, Data Selection | An experimental design framework for PDE-based inverse problems that uses PINNs and its training dynamics, in a fully differentiable architecture to perform continuous optimization of design parameters. | 14,224 | 2503.07070 | [-0.040954217314720154, -0.0004158609954174608, -0.01531934179365635, ...] |
217 | AgentRefine: Enhancing Agent Generalization through Refinement Tuning | https://openreview.net/forum?id=FDimWzmcWn | ["Dayuan Fu", "Keqing He", "Yejie Wang", "Wentao Hong", "Zhuoma GongQue", "Weihao Zeng", "Wei Wang", "Jingang Wang", "Xunliang Cai", "Weiran Xu"] | Poster | Large Language Model (LLM) based agents have proved their ability to perform complex tasks like humans. However, there is still a large gap between open-sourced LLMs and commercial models like the GPT series. In this paper, we focus on improving the agent generalization capabilities of LLMs via instruction tuning. We f... | agent, self-refine, diversity, generalization, data synthesis | The self-refine data can expand the search space of LLM agents and improve the reasoning quality, leading to generalized performance in agent tasks. | 14,212 | 2501.01702 | [-0.014488745480775833, -0.036115068942308426, 0.0017777703469619155, ...] |
218 | TabM: Advancing tabular deep learning with parameter-efficient ensembling | https://openreview.net/forum?id=Sd4wYYOhmY | ["Yury Gorishniy", "Akim Kotelnikov", "Artem Babenko"] | Poster | Deep learning architectures for supervised learning on tabular data range from simple multilayer perceptrons (MLP) to sophisticated Transformers and retrieval-augmented methods. This study highlights a major, yet so far overlooked opportunity for substantially improving tabular MLPs; namely, parameter-efficient ensembl... | tabular, tabular data, deep learning, architecture | Parameter-efficient ensembling has a massive positive impact on tabular MLPs, and TabM is a new SOTA architecture illustrating that. | 14,197 | 2410.24210 | [-0.037913545966148376, -0.017194533720612526, -0.02649553492665291, ...] |
219 | Multi-Label Test-Time Adaptation with Bound Entropy Minimization | https://openreview.net/forum?id=75PhjtbBdr | ["Xiangyu Wu", "Feng Yu", "Yang Yang", "Qing-Guo Chen", "Jianfeng Lu"] | Poster | Mainstream test-time adaptation (TTA) techniques endeavor to mitigate distribution shifts via entropy minimization for multi-class classification, inherently increasing the probability of the most confident class. However, when encountering multi-label instances, the primary challenge stems from the varying number of l... | Vision-Language Models, Zero-Shot Multi-Label Generalization, Test-Time Adaptation | A Multi-Label Test-Time Adaptation method with Bound Entropy Minimization objective. | 14,187 | 2502.03777 | [0.0028729569166898727, -0.004465380217880011, 0.004815447609871626, ...] |
220 | ToolGen: Unified Tool Retrieval and Calling via Generation | https://openreview.net/forum?id=XLMAMmowdY | ["Renxi Wang", "Xudong Han", "Lei Ji", "Shu Wang", "Timothy Baldwin", "Haonan Li"] | Poster | As large language models (LLMs) advance, their inability to autonomously execute tasks by directly interacting with external tools remains a critical limitation. Traditional methods rely on inputting tool descriptions as context, which is constrained by context length and requires separate, often inefficient, retrieval... | Agent, Tool Learning, Virtual Token | Unified tool retrieval and calling by transforming tools into virtual tokens | 14,183 | 2410.03439 | [-0.03235570341348648, -0.02581068128347397, -0.043578993529081345, ...] |
221 | Activation Gradient based Poisoned Sample Detection Against Backdoor Attacks | https://openreview.net/forum?id=VNMJfBBUd5 | ["Danni Yuan", "Mingda Zhang", "Shaokui Wei", "Li Liu", "Baoyuan Wu"] | Poster | This work studies the task of poisoned sample detection for defending against data poisoning based backdoor attacks. Its core challenge is finding a generalizable and discriminative metric to distinguish between clean and various types of poisoned samples (e.g., various triggers, various poisoning ratios). Inspired by ... | Backdoor Defense, Poisoned Sample Detection, AI security | null | 14,155 | 2312.06230 | [-0.007540267426520586, -0.005000596400350332, 0.0064361076802015305, ...] |
222 | Causally Motivated Sycophancy Mitigation for Large Language Models | https://openreview.net/forum?id=yRKelogz5i | ["Haoxi Li", "Xueyang Tang", "Jie ZHANG", "Song Guo", "Sikai Bai", "Peiran Dong", "Yue Yu"] | Poster | Incorporating user preferences into large language models (LLMs) can enhance the personalization and reliability of model outputs and facilitate the application of LLMs to real-world scenarios. However, leveraging user preferences can be a double-edged sword. Recent studies have found that improper utilization can incu... | Large Language Model; Sycophancy; Causal Modeling | null | 14,154 | null | [-0.010289282537996769, -0.029992947354912758, 0.0052740625105798244, ...] |
223 | Compositional simulation-based inference for time series | https://openreview.net/forum?id=uClUUJk05H | ["Manuel Gloeckler", "Shoji Toyota", "Kenji Fukumizu", "Jakob H. Macke"] | Poster | Amortized simulation-based inference (SBI) methods train neural networks on simulated data to perform Bayesian inference. While this strategy avoids the need for tractable likelihoods, it often requires a large number of simulations and has been challenging to scale to time series data. Scientific simulators frequently... | Simulation-based inference, Bayesian inference, time series, markovian simulators, Amortized Bayesian inference | Simulation-based inference for Markovian simulators leveraging the factorization | 14,141 | 2411.02728 | [-0.02133449912071228, 0.004771067760884762, -0.02172723039984703, ...] |
224 | Bayesian Treatment of the Spectrum of the Empirical Kernel in (Sub)Linear-Width Neural Networks | https://openreview.net/forum?id=O6znYvxC1U | ["Ouns El Harzli", "Bernardo Cuenca Grau"] | Poster | We study Bayesian neural networks (BNNs) in the theoretical limits of infinitely increasing number of training examples, network width and input space dimension. Our findings establish new bridges between kernel-theoretic approaches and techniques derived from statistical mechanics through the correspondence between Me... | infinite bayesian neural networks, kernel theory, random matrix theory | null | 14,113 | null | [-0.02618899568915367, 0.00916858296841383, 0.01953289285302162, ...] |
225 | When GNNs meet symmetry in ILPs: an orbit-based feature augmentation approach | https://openreview.net/forum?id=wVTJRnZ11Z | ["Qian Chen", "Lei Li", "Qian Li", "Jianghua Wu", "Akang Wang", "Ruoyu Sun", "Xiaodong Luo", "Tsung-Hui Chang", "Qingjiang Shi"] | Poster | A common characteristic in integer linear programs (ILPs) is symmetry, allowing variables to be permuted without altering the underlying problem structure. Recently, GNNs have emerged as a promising approach for solving ILPs. However, a significant challenge arises when applying GNNs to ILPs with symmetry: classic GNN... | integer linear programming, symmetry, machine learning, graph neural networks | null | 14,111 | 2501.14211 | [-0.011204924434423447, -0.010089682415127754, -0.005827825516462326, ...] |
226 | Optimal Transport for Time Series Imputation | https://openreview.net/forum?id=xPTzjpIQNp | ["Hao Wang", "zhengnan li", "Haoxuan Li", "Xu Chen", "Mingming Gong", "BinChen", "Zhichao Chen"] | Poster | Missing data imputation through distribution alignment has demonstrated advantages for non-temporal datasets but exhibits suboptimal performance in time-series applications. The primary obstacle is crafting a discrepancy measure that simultaneously (1) captures temporal patterns—accounting for periodicity and temporal ... | Time series, Imputation | null | 14,099 | null | [-0.03956788033246994, -0.03762475401163101, -0.01659982278943062, ...] |
227 | Video Action Differencing | https://openreview.net/forum?id=3bcN6xlO6f | ["James Burgess", "Xiaohan Wang", "Yuhui Zhang", "Anita Rau", "Alejandro Lozano", "Lisa Dunlap", "Trevor Darrell", "Serena Yeung-Levy"] | Poster | How do two individuals differ when performing the same action? In this work, we introduce Video Action Differencing (VidDiff), the novel task of identifying subtle differences between videos of the same action, which has numerous applications, such as coaching and skill learning. To enable development on this new task,... | Video, Actions, Differencing, Zero-shot, benchmark, multimodal, lmm, llm | A new task and benchmark for comparing how an action is performed between two videos, with a zero-shot method | 14,085 | 2503.07860 | [0.05496681481599808, -0.0354589968919754, -0.007651247084140778, ...] |
228 | GANDALF: Generative AttentioN based Data Augmentation and predictive modeLing Framework for personalized cancer treatment | https://openreview.net/forum?id=WwmtcGr4lP | ["Aishwarya Jayagopal", "Yanrong Zhang", "Robert John Walsh", "Tuan Zea Tan", "Anand D Jeyasekharan", "Vaibhav Rajan"] | Poster | Effective treatment of cancer is a major challenge faced by healthcare providers, due to the highly individualized nature of patient responses to treatment. This is caused by the heterogeneity seen in cancer-causing alterations (mutations) across patient genomes. Limited availability of response data in patients makes ... | personalized drug response prediction, cancer, genomic data augmentation, diffusion model, pseudolabelling | A cancer drug response prediction model that addresses the problem of limited labelled data through a novel genomic data augmentation technique. | 14,072 | null | [-0.010760142467916012, -0.04536585509777069, -0.006003966089338064, ...] |
229 | RaSA: Rank-Sharing Low-Rank Adaptation | https://openreview.net/forum?id=GdXI5zCoAt | ["Zhiwei He", "Zhaopeng Tu", "Xing Wang", "Xingyu Chen", "Zhijie Wang", "Jiahao Xu", "Tian Liang", "Wenxiang Jiao", "Zhuosheng Zhang", "Rui Wang"] | Poster | Low-rank adaptation (LoRA) has been prominently employed for parameter-efficient fine-tuning of large language models (LLMs). However, the limited expressive capacity of LoRA, stemming from the low-rank constraint, has been recognized as a bottleneck, particularly in rigorous tasks like code generation and mathematical... | parameter-efficient fine-tuning, large language model, low-rank adaptation | null | 14,067 | 2503.12576 | [-0.021881744265556335, -0.03638816997408867, -0.01421371940523386, ...] |
230 | Scaling Speech-Text Pre-training with Synthetic Interleaved Data | https://openreview.net/forum?id=3tukjsVyrE | ["Aohan Zeng", "Zhengxiao Du", "Mingdao Liu", "Lei Zhang", "shengmin jiang", "Yuxiao Dong", "Jie Tang"] | Poster | Speech language models (SpeechLMs) accept speech input and produce speech output, allowing for more natural human-computer interaction compared to text-based large language models (LLMs). Traditional approaches for developing SpeechLMs are constrained by the limited availability of unsupervised speech data and parallel... | large language models; speech language model; spoken chatbots | null | 14,059 | 2411.17607 | [-0.015823401510715485, -0.04168417304754257, -0.001606013742275536, ...] |
231 | Offline Model-Based Optimization by Learning to Rank | https://openreview.net/forum?id=sb1HgVDLjN | ["Rong-Xi Tan", "Ke Xue", "Shen-Huan Lyu", "Haopu Shang", "yaowang", "Yaoyuan Wang", "Fu Sheng", "Chao Qian"] | Poster | Offline model-based optimization (MBO) aims to identify a design that maximizes a black-box function using only a fixed, pre-collected dataset of designs and their corresponding scores. This problem has garnered significant attention from both scientific and industrial domains. A common approach in offline MBO is to tr... | Offline model-based optimization, black-box optimization, learning to rank, learning to optimize | null | 14,057 | 2410.11502 | [-0.013380160555243492, -0.010344978421926498, 0.02872614376246929, ...] |
232 | From Search to Sampling: Generative Models for Robust Algorithmic Recourse | https://openreview.net/forum?id=NtwFghsJne | ["Prateek Garg", "Lokesh Nagalapatti", "Sunita Sarawagi"] | Poster | Algorithmic Recourse provides recommendations to individuals who are adversely impacted by automated model decisions, on how to alter their profiles to achieve a favorable outcome. Effective recourse methods must balance three conflicting goals: proximity to the original profile to minimize cost, plausibility for reali... | Algorithmic recourse, explainability, generative modelling | We propose a generative model for recourse that outputs a distribution over likely recourse instances. | 14,050 | null | [0.004527687095105648, -0.015154095366597176, -0.019548922777175903, ...] |
233 | Neural Wave Equation for Irregularly Sampled Sequence Data | https://openreview.net/forum?id=kbeX97jExm | ["Arkaprava Majumdar", "M Anand Krishna", "P. K. Srijith"] | Poster | Sequence labeling problems arise in several real-world applications such as healthcare and robotics. In many such applications, sequence data are irregularly sampled and are of varying complexities. Recently, efforts have been made to develop neural ODE-based architectures to model the evolution of hidden states contin... | Wave Equation, Neural ODE, Sequence Labelling | Partial Differential Equations parameterised by a Neural Network (like Neural ODE) can be used to solve sequence modeling problems. We hypothesize why this might be the case and demonstrate that it outperforms many known continuous RNN models. | 14,044 | null | [-0.06306847184896469, -0.023253059014678, -0.02543935552239418, ...] |
234 | ComaDICE: Offline Cooperative Multi-Agent Reinforcement Learning with Stationary Distribution Shift Regularization | https://openreview.net/forum?id=5o9JJJPPm6 | ["The Viet Bui", "Thanh Hong Nguyen", "Tien Anh Mai"] | Poster | Offline reinforcement learning (RL) has garnered significant attention for its ability to learn effective policies from pre-collected datasets without the need for further environmental interactions. While promising results have been demonstrated in single-agent settings, offline multi-agent reinforcement learning (MAR... | Offline Reinforcement Learning, Multi-Agent Reinforcement Learning, Stationary Distribution Correction Estimation | This paper introduces ComaDICE, a novel offline cooperative multi-agent reinforcement learning algorithm that uses stationary distribution shift regularization to improve performance in complex environments like MuJoCo and StarCraft II. | 14,042 | 2410.01954 | [-0.041922442615032196, -0.04223010689020157, -0.009166928008198738, ...] |
235 | Probabilistic Conformal Prediction with Approximate Conditional Validity | https://openreview.net/forum?id=Nfd7z9d6Bb | ["Vincent Plassier", "Alexander Fishkov", "Mohsen Guizani", "Maxim Panov", "Eric Moulines"] | Poster | We develop a new method for generating prediction sets that combines the flexibility of conformal methods with an estimate of the conditional distribution $\textup{P}_{Y \mid X}$. Existing methods, such as conformalized quantile regression and probabilistic conformal prediction, usually provide only a marginal coverage... | Conformal Prediction, Conditional coverage, Probabilistic method, Uncertainty Quantification | We introduce a method that effectively integrates conformal approaches with an estimate of the conditional distribution to ensure the approximate conditional validity. | 14,036 | 2407.01794 | [-0.006372830364853144, -0.029997168108820915, 0.017861394211649895, ...] |
236 | Rethinking Neural Multi-Objective Combinatorial Optimization via Neat Weight Embedding | https://openreview.net/forum?id=GM7cmQfk2F | ["Jinbiao Chen", "Zhiguang Cao", "Jiahai Wang", "Yaoxin Wu", "Hanzhang Qin", "Zizhen Zhang", "Yue-Jiao Gong"] | Poster | Recent decomposition-based neural multi-objective combinatorial optimization (MOCO) methods struggle to achieve desirable performance. Even equipped with complex learning techniques, they often suffer from significant optimality gaps in weight-specific subproblems. To address this challenge, we propose a neat weight em... | Neural Multi-Objective Combinatorial Optimization, Weight Embedding, Conditional Attention | We propose a neat weight embedding method for neural multi-objective combinatorial optimization | 14,028 | null | [-0.045013368129730225, -0.026684828102588654, 0.025840753689408302, ...] |
237 | Robust Root Cause Diagnosis using In-Distribution Interventions | https://openreview.net/forum?id=l11DZY5Nxu | ["Lokesh Nagalapatti", "Ashutosh Srivastava", "Sunita Sarawagi", "Amit Sharma"] | Poster | Diagnosing the root cause of an anomaly in a complex interconnected system is a pressing problem in today’s cloud services and industrial operations. We propose In-Distribution Interventions (IDI), a novel algorithm that predicts root cause as nodes that meet two criteria: 1) Anomaly: root cause nodes should take on an... | Root Cause Diagnosis, Causal Inference, Interventional RCD | Identifying root cause of anomalies using interventions rather than counterfactuals estimated from a learned SCM | 14,022 | null | [-0.0266676414757967, -0.014830606058239937, -0.03044712170958519, ...] |
238 | Boosting Neural Combinatorial Optimization for Large-Scale Vehicle Routing Problems | https://openreview.net/forum?id=TbTJJNjumY | ["Fu Luo", "Xi Lin", "Yaoxin Wu", "Zhenkun Wang", "Tong Xialiang", "Mingxuan Yuan", "Qingfu Zhang"] | Poster | Neural Combinatorial Optimization (NCO) methods have exhibited promising performance in solving Vehicle Routing Problems (VRPs). However, most NCO methods rely on the conventional self-attention mechanism that induces excessive computational complexity, thereby struggling to contend with large-scale VRPs and hindering ... | Neural Combinatorial Optimization, Large-Scale Vehicle Routing Problem | null | 14,013 | null | [-0.018857141956686974, -0.044361814856529236, -0.00555088184773922, ...] |
239 | Sensitivity Verification for Additive Decision Tree Ensembles | https://openreview.net/forum?id=h0vC0fm1q7 | ["Arhaan Ahmad", "Tanay Vineet Tayal", "Ashutosh Gupta", "S. Akshay"] | Poster | Tree ensemble models, such as Gradient Boosted Decision Trees (GBDTs) and random forests, are widely popular models for a variety of machine learning tasks. The power of these models comes from the ensemble of decision trees, which makes analysis of such models significantly harder than for single trees. As a result, r... | Robustness verification, Sensitivity analysis, SAT solvers, efficient encodings, NP-hardness, fairness, confidence | We ask if an (additive) decision tree ensemble is sensitive to (potentially small) changes to a given feature or set of features. We show theoretical NP-hardness results, and provide a pseudo-Boolean encoding to solve the problem. | 14,006 | null | [-0.0425553172826767, -0.01561669260263443, -0.019280828535556793, ...] |
240 | Monte Carlo Planning with Large Language Model for Text-Based Game Agents | https://openreview.net/forum?id=r1KcapkzCt | ["Zijing Shi", "Meng Fang", "Ling Chen"] | Poster | Text-based games provide valuable environments for language-based autonomous agents. However, planning-then-learning paradigms, such as those combining Monte Carlo Tree Search (MCTS) and reinforcement learning (RL), are notably time-consuming due to extensive iterations. Additionally, these algorithms perform uncertain... | Large language model, Monte Carlo tree search, Text-based games | null | 14,005 | null | [-0.04841667786240578, -0.0019452822161838412, -0.0152497673407197, ...] |
241 | Actions Speak Louder Than Words: Rate-Reward Trade-off in Markov Decision Processes | https://openreview.net/forum?id=Za3M6OZuCU | ["Haotian Wu", "Gongpu Chen", "Deniz Gunduz"] | Poster | The impact of communication on decision-making systems has been extensively studied under the assumption of dedicated communication channels. We instead consider communicating through actions, where the message is embedded into the actions of an agent which interacts with the environment in a Markov decision process (M... | Markov Decision Process, Channel coding, Rate-Reward Trade-off, Finite state channel | null | 13,994 | 2502.03335 | [-0.06676633656024933, -0.0322052463889122, -0.029687991365790367, ...] |
242 | A Statistical Framework for Ranking LLM-based Chatbots | https://openreview.net/forum?id=rAoEub6Nw2 | ["Siavash Ameli", "Siyuan Zhuang", "Ion Stoica", "Michael W. Mahoney"] | Poster | Large language models (LLMs) have transformed natural language processing, with frameworks like Chatbot Arena providing pioneering platforms for evaluating these models. By facilitating millions of pairwise comparisons based on human judgments, Chatbot Arena has become a cornerstone in LLM evaluation, offering rich dat... | Large Language Models (LLMs), Paired Comparison, Statistical Ranking, Human Preferences, Chatbot Arena, Logistic Regression | We introduce a rigorous statistical framework for ranking large language models (LLMs) using crowdsourced comparisons, improving accuracy for ties, wins, and losses beyond current methods like Elo. | 13,986 | 2412.18407 | [0.0015959810698404908, -0.04645754024386406, -0.01162289921194315, ...] |
243 | ONLINE EPSILON NET & PIERCING SET FOR GEOMETRIC CONCEPTS | https://openreview.net/forum?id=nNiWRRj6r9 | ["Sujoy Bhore", "Devdan Dey", "Satyam Singh"] | Poster | VC-dimension (Vapnik & Chervonenkis (1971)) and $\varepsilon$-nets (Haussler & Welzl (1987)) are key concepts in Statistical Learning Theory. Intuitively, VC-dimension is a measure of the size of a class of sets. The famous $\varepsilon$-net theorem, a fundamental result in Discrete Geometry, asserts that if the VC-di... | Theoretical machine learning, VC-dimension, Geometric sampling | null | 13,983 | 2410.07059 | [0.013985726982355118, 0.006280154921114445, 0.021149054169654846, ...] |
244 | SimulPL: Aligning Human Preferences in Simultaneous Machine Translation | https://openreview.net/forum?id=XBF63bHDZw | ["Donglei Yu", "Yang Zhao", "Jie Zhu", "Yangyifan Xu", "Yu Zhou", "Chengqing Zong"] | Poster | Simultaneous Machine Translation (SiMT) generates translations while receiving streaming source inputs. This requires the SiMT model to learn a read/write policy, deciding when to translate and when to wait for more source input. Numerous linguistic studies indicate that audiences in SiMT scenarios have distinct prefer... | simultaneous machine translation, simultaneous preference optimization, human preferences | null | 13,982 | 2502.00634 | [0.0008274471038021147, -0.011018474586308002, 0.012053404003381729, ...] |
245 | Neural Interactive Proofs | https://openreview.net/forum?id=R2834dhBlo | ["Lewis Hammond", "Sam Adam-Day"] | Poster | We consider the problem of how a trusted, but computationally bounded agent (a 'verifier') can learn to interact with one or more powerful but untrusted agents ('provers') in order to solve a given task. More specifically, we study the case in which agents are represented using neural networks and refer to solutions of... | interactive proofs, game theory, neural networks, safety, multi-agent reinforcement learning | We study how a trusted, weak model can learn to interact with one or more stronger but untrusted models in order to solve a given task. | 13,981 | 2412.08897 | [-0.011210781522095203, -0.009922157973051071, -0.010324407368898392, ...] |
246 | Oracle efficient truncated statistics | https://openreview.net/forum?id=ZS7UEI3vG5 | ["Konstantinos Karatapanis", "Vasilis Kontonis", "Christos Tzamos"] | Poster | We study the problem of learning from truncated samples: instead of observing samples from some underlying population $p^\ast$, we observe only the examples that fall in some survival set $S \subset \mathbb{R}^d$ whose probability mass (measured with respect to $p^\ast$) is at least $\alpha$. Assuming membership oracl... | truncated statistics, exponential family, statistical learning | null | 13,970 | null | [-0.019982386380434036, -0.015522164292633533, -0.023845620453357697, ...] |
247 | Drop-Upcycling: Training Sparse Mixture of Experts with Partial Re-initialization | https://openreview.net/forum?id=gx1wHnf5Vp | ["Taishi Nakamura", "Takuya Akiba", "Kazuki Fujii", "Yusuke Oda", "Rio Yokota", "Jun Suzuki"] | Poster | The Mixture of Experts (MoE) architecture reduces the training and inference cost significantly compared to a dense model of equivalent capacity. Upcycling is an approach that initializes and trains an MoE model using a pre-trained dense model. While upcycling leads to initial performance gains, the training progresses... | mixture of experts, large language models, continual pre-training | null | 13,966 | null | [0.0027877020183950663, -0.04345623776316643, -0.015304883010685444, ...] |
248 | Black-Box Detection of Language Model Watermarks | https://openreview.net/forum?id=E4LAVLXAHW | ["Thibaud Gloaguen", "Nikola Jovanović", "Robin Staab", "Martin Vechev"] | Poster | Watermarking has emerged as a promising way to detect LLM-generated text, by augmenting LLM generations with later detectable signals. Recent work has proposed multiple families of watermarking schemes, several of which focus on preserving the LLM distribution. This distribution-preservation property is motivated by th... | llm, watermarking | null | 13,958 | 2405.20777 | [-0.007994475774466991, -0.016235828399658203, -0.02013363130390644, ...] |
249 | ProAdvPrompter: A Two-Stage Journey to Effective Adversarial Prompting for LLMs | https://openreview.net/forum?id=tpHqsyZ3YX | ["Hao Di", "Tong He", "Haishan Ye", "Yinghui Huang", "Xiangyu Chang", "Guang Dai", "Ivor Tsang"] | Poster | As large language models (LLMs) are increasingly being integrated into various real-world applications, the identification of their vulnerabilities to jailbreaking attacks becomes an essential component of ensuring the safety and reliability of LLMs. Previous studies have developed LLM assistants, known as the adversa... | jailbreaking attacks; large language model | null | 13,954 | null | [-0.027003446593880653, -0.05229748785495758, 0.00867140106856823, ...] |
250 | Ward: Provable RAG Dataset Inference via LLM Watermarks | https://openreview.net/forum?id=kVrwHLAb20 | ["Nikola Jovanović", "Robin Staab", "Maximilian Baader", "Martin Vechev"] | Poster | RAG enables LLMs to easily incorporate external data, raising concerns for data owners regarding unauthorized usage of their content. The challenge of detecting such unauthorized usage remains underexplored, with datasets and methods from adjacent fields being ill-suited for its study. We take several steps to bridge t... | llm, watermarks, dataset inference, rag | We formalize RAG Dataset Inference, introduce a suitable dataset and baselines, and propose Ward, a rigorous method based on LLM watermarks. | 13,947 | 2410.03537 | [0.025905568152666092, -0.033392660319805145, -0.02929246425628662, ...] |
251 | SCOPE: A Self-supervised Framework for Improving Faithfulness in Conditional Text Generation | https://openreview.net/forum?id=dTkqaCKLPp | ["Song Duong", "Florian Le Bronnec", "Alexandre Allauzen", "Vincent Guigue", "Alberto Lumbreras", "Laure Soulier", "Patrick Gallinari"] | Poster | Large Language Models (LLMs), when used for conditional text generation, often produce hallucinations, i.e., information that is unfaithful or not grounded in the input context. This issue arises in typical conditional text generation tasks, such as text summarization and data-to-text generation, where the goal is to p... | faithfulness, hallucination, conditional text generation, natural language processing, large language models | We propose a self-supervised method for faithfulness enhancement for conditional text generation. | 13,935 | 2502.13674 | [-0.007875688374042511, -0.016692476347088814, -0.01902250573039055, ...] |
252 | Clique Number Estimation via Differentiable Functions of Adjacency Matrix Permutations | https://openreview.net/forum?id=DFSb67ksVr | ["Indradyumna Roy", "Eeshaan Jain", "Soumen Chakrabarti", "Abir De"] | Poster | Estimating the clique number in a graph is central to various applications, e.g., community detection, graph retrieval, etc. Existing estimators often rely on non-differentiable combinatorial components. Here, we propose a fully differentiable estimator for clique number estimation, which can be trained from distant su... | Graph neural network, distant supervision | We propose a differentiable model for clique number estimation, learning from distant supervision by searching for dense submatrices in permuted adjacency matrices. | 13,932 | null | [-0.009422268718481064, -0.023360637947916985, 0.001538370968773961, ...] |
253 | Reliable and Diverse Evaluation of LLM Medical Knowledge Mastery | https://openreview.net/forum?id=TXfzH933qV | ["Yuxuan Zhou", "Xien Liu", "Chen Ning", "Xiao Zhang", "Ji Wu"] | Poster | Mastering medical knowledge is crucial for medical-specific LLMs. However, despite the existence of medical benchmarks like MedQA, a unified framework that fully leverages existing knowledge bases to evaluate LLMs' mastery of medical knowledge is still lacking. We propose PretexEval, a novel framework that dynamically ... | LLM Evaluation, Medical Evaluation, Large Language Model | We propose a reliable and diverse evaluation method, aiming to probe the medical knowledge mastery of LLMs. | 13,909 | 2409.14302 | [0.0055589391849935055, -0.016776463016867638, -0.0027570228558033705, ...] |
254 | SWE-Search: Enhancing Software Agents with Monte Carlo Tree Search and Iterative Refinement | https://openreview.net/forum?id=G7sIFXugTX | ["Antonis Antoniades", "Albert Örwall", "Kexun Zhang", "Yuxi Xie", "Anirudh Goyal", "William Yang Wang"] | Poster | Software engineers operating in complex and dynamic environments must continuously adapt to evolving requirements, learn iteratively from experience, and reconsider their approaches based on new insights. However, current large language model (LLM)-based software agents often follow linear, sequential processes that pr... | agents, LLM, SWE-agents, SWE-bench, search, planning, reasoning, self-improvement, open-ended | Introduce an inference-time Monte Carlo Tree Search method for Software Agents. | 13,886 | null | [-0.025377249345183372, -0.027445049956440926, -0.011714668944478035, ...] |
255 | Language Models are Advanced Anonymizers | https://openreview.net/forum?id=82p8VHRsaK | ["Robin Staab", "Mark Vero", "Mislav Balunovic", "Martin Vechev"] | Poster | Recent privacy research on large language models (LLMs) has shown that they achieve near-human-level performance at inferring personal data from online texts. With ever-increasing model capabilities, existing text anonymization methods are currently lacking behind regulatory requirements and adversarial threats. In thi... | privacy, anonymization, large language models | We demonstrate how large language models can be employed in an adversarial framework to surpass state-of-the-art anonymization tools both in terms of privacy and utility. | 13,884 | 2402.13846 | [0.004712380934506655, -0.002681761048734188, -0.002588014118373394, ...] |
256 | ADAM: An Embodied Causal Agent in Open-World Environments | https://openreview.net/forum?id=Ouu3HnIVBc | [
"Shu Yu",
"Chaochao Lu"
] | Poster | In open-world environments like Minecraft, existing agents face challenges in continuously learning structured knowledge, particularly causality. These challenges stem from the opacity inherent in black-box models and an excessive reliance on prior knowledge during training, which impair their interpretability and gene... | embodied agent, causality, large language model, interpretability, vision language navigation, cross-modal application, cross-modal information extraction, multimodality | null | 13,881 | 2410.22194 | [
-0.019106566905975342,
-0.014391065575182438,
-0.02292596362531185,
0.010076467879116535,
0.03406404331326485,
0.01646178960800171,
0.0448429174721241,
0.02096358872950077,
-0.014855076558887959,
-0.05263097956776619,
-0.031907059252262115,
0.004659499507397413,
-0.05002342909574509,
-0.00... |
257 | Expected Return Symmetries | https://openreview.net/forum?id=wFg0shwoRe | [
"Darius Muglich",
"Johannes Forkel",
"Elise van der Pol",
"Jakob Nicolaus Foerster"
] | Poster | Symmetry is an important inductive bias that can improve model robustness and generalization across many deep learning domains. In multi-agent settings, a priori known symmetries have been shown to address a fundamental coordination failure mode known as mutually incompatible symmetry breaking; e.g. in a game where two... | multi-agent reinforcement learning, zero-shot coordination | Discovering a symmetry class over policies that improves coordination between agents | 13,880 | 2502.01711 | [
-0.05113644897937775,
-0.020355558022856712,
-0.004740745294839144,
0.0397506058216095,
0.015398357063531876,
0.005662713665515184,
0.034421157091856,
0.01112459134310484,
-0.03284323215484619,
-0.050463661551475525,
0.009759052656590939,
-0.018241576850414276,
-0.06930691748857498,
-0.013... |
258 | Beware of Calibration Data for Pruning Large Language Models | https://openreview.net/forum?id=x83w6yGIWb | [
"Yixin Ji",
"Yang Xiang",
"Juntao Li",
"Qingrong Xia",
"Ping Li",
"Xinyu Duan",
"Zhefeng Wang",
"Min Zhang"
] | Poster | As large language models (LLMs) are widely applied across various fields, model compression has become increasingly crucial for reducing costs and improving inference efficiency. Post-training pruning is a promising method that does not require resource-intensive iterative training and only needs a small amount of cali... | calibration data, post-training pruning, large language models | null | 13,874 | 2410.17711 | [
-0.013183139264583588,
-0.02494754083454609,
-0.017894458025693893,
0.012674925848841667,
0.0536869615316391,
0.040434252470731735,
0.025817861780524254,
0.013692565262317657,
-0.0394718274474144,
-0.017905108630657196,
-0.02318611554801464,
0.046550653874874115,
-0.051379598677158356,
0.0... |
259 | Herald: A Natural Language Annotated Lean 4 Dataset | https://openreview.net/forum?id=Se6MgCtRhz | [
"Guoxiong Gao",
"Yutong Wang",
"Jiedong Jiang",
"Qi Gao",
"Zihan Qin",
"Tianyi Xu",
"Bin Dong"
] | Poster | Verifiable formal languages like Lean have profoundly impacted mathematical reasoning, particularly through the use of large language models (LLMs) for automated reasoning. A significant challenge in training LLMs for these formal languages is the lack of parallel datasets that align natural language with formal langua... | Lean 4, Autoformalizing, LLM, Retrieval Augmented Generation, Dataset | null | 13,870 | 2410.10878 | [
0.0086351428180933,
-0.024928705766797066,
-0.03444444015622139,
0.03208978474140167,
0.03434162586927414,
0.04206051677465439,
0.021187245845794678,
0.02222820557653904,
-0.03287624567747116,
-0.01866912841796875,
-0.02543855644762516,
0.022547109052538872,
-0.041053783148527145,
-0.00722... |
260 | Efficient Residual Learning with Mixture-of-Experts for Universal Dexterous Grasping | https://openreview.net/forum?id=BUj9VSCoET | [
"Ziye Huang",
"Haoqi Yuan",
"Yuhui Fu",
"Zongqing Lu"
] | Poster | Universal dexterous grasping across diverse objects presents a fundamental yet formidable challenge in robot learning. Existing approaches using reinforcement learning (RL) to develop policies on extensive object datasets face critical limitations, including complex curriculum design for multi-task learning and limited... | dexterous grasping, residual policy learning, reinforcement learning | null | 13,867 | 2410.02475 | [
-0.003756109392270446,
-0.02269729971885681,
0.014151130802929401,
0.055015839636325836,
0.040165673941373825,
0.05891868472099304,
0.007754433434456587,
-0.014315144158899784,
-0.04660746827721596,
-0.05565075948834419,
-0.015128718689084053,
0.03346928581595421,
-0.05906784161925316,
-0.... |
261 | DPLM-2: A Multimodal Diffusion Protein Language Model | https://openreview.net/forum?id=5z9GjHgerY | [
"Xinyou Wang",
"Zaixiang Zheng",
"Fei YE",
"Dongyu Xue",
"Shujian Huang",
"Quanquan Gu"
] | Poster | Proteins are essential macromolecules defined by their amino acid sequences, which determine their three-dimensional structures and, consequently, their functions in all living organisms. Therefore, generative protein modeling necessitates a multimodal approach to simultaneously model, understand, and generate both seq... | protein foundation model, diffusion language model, multimodal language model | null | 13,865 | null | [
-0.03508985787630081,
-0.009045876562595367,
-0.02089516818523407,
0.032680410891771317,
0.04329923912882805,
0.026532087475061417,
0.010503585450351238,
0.01383467111736536,
-0.01047599595040083,
-0.02352503500878811,
0.03237415477633476,
-0.0026129751931875944,
-0.05995037779211998,
0.02... |
262 | Web Agents with World Models: Learning and Leveraging Environment Dynamics in Web Navigation | https://openreview.net/forum?id=moWiYJuSGF | [
"Hyungjoo Chae",
"Namyoung Kim",
"Kai Tzu-iunn Ong",
"Minju Gwak",
"Gwanwoo Song",
"Jihoon Kim",
"Sunghwan Kim",
"Dongha Lee",
"Jinyoung Yeo"
] | Poster | Large language models (LLMs) have recently gained much attention in building autonomous agents. However, the performance of current LLM-based web agents in long-horizon tasks is far from optimal, often yielding errors such as repeatedly buying a non-refundable flight ticket. By contrast, humans can avoid such an irreversib... | Web Agent, World Model, Digital Agent, Planning, LLM | null | 13,861 | 2410.13232 | [
-0.025546742603182793,
0.004466733895242214,
0.01972080208361149,
0.022774022072553635,
0.04261019453406334,
-0.00803869217634201,
0.03933669999241829,
0.04144010320305824,
-0.005139543674886227,
-0.026564817875623703,
-0.04124530404806137,
0.027611806988716125,
-0.0709480494260788,
-0.027... |
263 | HyperFace: Generating Synthetic Face Recognition Datasets by Exploring Face Embedding Hypersphere | https://openreview.net/forum?id=4YzVF9isgD | [
"Hatef Otroshi Shahreza",
"Sébastien Marcel"
] | Poster | Face recognition datasets are often collected by crawling the Internet without individuals' consent, raising ethical and privacy concerns. Generating synthetic datasets for training face recognition models has emerged as a promising alternative. However, the generation of synthetic datasets remains challenging as it... | Face Recognition, Hypersphere Optimization, Privacy, Synthetic Data | We formulate the dataset generation as a packing problem on the embedding space (represented on a hypersphere) of a face recognition model and propose a new synthetic dataset generation approach. | 13,859 | 2411.08470 | [
0.010353876277804375,
-0.00004141052704653703,
0.0018557403236627579,
0.04987989366054535,
0.033959466964006424,
0.023287996649742126,
0.03150283917784691,
-0.016424570232629776,
-0.009951979853212833,
-0.0587807260453701,
-0.039732225239276886,
-0.01577022671699524,
-0.0969497486948967,
-... |
264 | Language Imbalance Driven Rewarding for Multilingual Self-improving | https://openreview.net/forum?id=Kak2ZH5Itp | [
"Wen Yang",
"Junhong Wu",
"Chen Wang",
"Chengqing Zong",
"Jiajun Zhang"
] | Poster | Large Language Models (LLMs) have achieved state-of-the-art performance across numerous tasks. However, these advancements have predominantly benefited "first-class" languages such as English and Chinese, leaving many other languages underrepresented. This imbalance, while limiting broader applications, generates a nat... | Large Language Model, Self-Improving, Multilinguality | This paper proposes Language Imbalance Driven Rewarding, which leverages the inherent imbalance in LLMs as a reward signal to bootstrap LLMs’ multilingual capabilities in a self-improving manner. | 13,855 | 2410.08964 | [
-0.03745884820818901,
-0.028143612667918205,
0.0010222840355709195,
0.013617519289255142,
0.0337846614420414,
0.025319309905171394,
0.03392230346798897,
0.029229873791337013,
-0.024092992767691612,
-0.012362259440124035,
-0.020052028819918633,
0.03180135414004326,
-0.05829911679029465,
-0.... |
265 | Quantum-PEFT: Ultra parameter-efficient fine-tuning | https://openreview.net/forum?id=dgR6i4TSng | [
"Toshiaki Koike-Akino",
"Francesco Tonin",
"Yongtao Wu",
"Zhengqing Wu",
"Leyla Naz Candogan",
"Volkan Cevher"
] | Poster | This paper introduces Quantum-PEFT that leverages quantum computations for parameter-efficient fine-tuning (PEFT). Unlike other additive PEFT methods, such as low-rank adaptation (LoRA), Quantum-PEFT exploits an underlying full-rank yet surprisingly parameter-efficient _quantum unitary parameterization_. With the use o... | parameter-efficient fine-tuning, lora, quantum machine learning, orthogonality constraints | null | 13,846 | null | [
-0.015150892548263073,
-0.02457251027226448,
-0.00837523303925991,
0.04646572843194008,
0.04755554720759392,
0.038688015192747116,
0.02199060283601284,
-0.014765356667339802,
-0.004853018559515476,
-0.047797322273254395,
-0.014727110974490643,
-0.02537674270570278,
-0.0834430530667305,
-0.... |
266 | Think Then React: Towards Unconstrained Action-to-Reaction Motion Generation | https://openreview.net/forum?id=UxzKcIZedp | [
"Wenhui Tan",
"Boyuan Li",
"Chuhao Jin",
"Wenbing Huang",
"Xiting Wang",
"Ruihua Song"
] | Poster | Modeling human-like action-to-reaction generation has significant real-world applications, like human-robot interaction and games. Despite recent advancements in single-person motion generation, it is still challenging to handle action-to-reaction generation well, due to the difficulty of directly predicting reaction f... | Human Reaction Generation, 3D Human Motion, Large Language Model | null | 13,835 | null | [
0.010175970382988453,
-0.020975874736905098,
-0.00566091388463974,
0.03809237480163574,
0.015762116760015488,
0.0023279280867427588,
0.031998325139284134,
0.03284718841314316,
-0.024634335190057755,
-0.017876701429486275,
-0.046571023762226105,
-0.01763266697525978,
-0.055065181106328964,
... |
267 | Rapid Selection and Ordering of In-Context Demonstrations via Prompt Embedding Clustering | https://openreview.net/forum?id=1Iu2Yte5N6 | [
"Kha Pham",
"Hung Le",
"Man Ngo",
"Truyen Tran"
] | Poster | While Large Language Models (LLMs) excel at in-context learning (ICL) using just a few demonstrations, their performance is sensitive to demonstration order. The reasons behind this sensitivity remain poorly understood. In this paper, we investigate the prompt embedding space to bridge the gap between the order sens... | in-context learning, order sensitivity, LLMs, clustering, cluster-based search, positional encoding, attention mask, serial-position effect | We accelerate selection and ordering of in-context demonstrations in self-adaptive ICL settings by leveraging our newfound clustering property in prompt embedding spaces. | 13,824 | null | [
-0.03307735174894333,
-0.013816249556839466,
-0.01588304154574871,
0.05413375422358513,
0.01947595737874508,
0.015775417909026146,
0.017166677862405777,
0.025502100586891174,
-0.024430174380540848,
0.022989632561802864,
-0.027360886335372925,
0.0340074859559536,
-0.025022711604833603,
-0.0... |
268 | Asymptotic Analysis of Two-Layer Neural Networks after One Gradient Step under Gaussian Mixtures Data with Structure | https://openreview.net/forum?id=tNn6Hskmti | [
"Samet Demir",
"Zafer Dogan"
] | Poster | In this work, we study the training and generalization performance of two-layer neural networks (NNs) after one gradient descent step under structured data modeled by Gaussian mixtures. While previous research has extensively analyzed this model under the isotropic data assumption, such simplifications overlook the complex... | deep learning theory, random features, Gaussian equivalence, universality, high-dimensional asymptotics | We study the impact of the Gaussian mixture data assumption on feature learning in neural networks trained with one gradient step, in order to bridge the gap between the isotropic data assumption and real datasets. | 13,819 | 2503.00856 | [
-0.027616901323199272,
-0.03297589719295502,
-0.008163446560502052,
0.029426109045743942,
0.022135060280561447,
0.021158084273338318,
0.02214905619621277,
-0.01062364224344492,
-0.04332536831498146,
-0.02421366237103939,
-0.0006621227948926389,
0.0031888566445559263,
-0.04346464201807976,
... |
269 | OCEAN: Offline Chain-of-thought Evaluation and Alignment in Large Language Models | https://openreview.net/forum?id=rlgplAuN2p | [
"Junda Wu",
"Xintong Li",
"Ruoyu Wang",
"Yu Xia",
"Yuxin Xiong",
"Jianing Wang",
"Tong Yu",
"Xiang Chen",
"Branislav Kveton",
"Lina Yao",
"Jingbo Shang",
"Julian McAuley"
] | Poster | Offline evaluation of LLMs is crucial in understanding their capacities, though current methods remain underexplored in existing research. In this work, we focus on the offline evaluation of the chain-of-thought capabilities and show how to optimize LLMs based on the proposed evaluation method. To enable offline feedba... | chain-of-thought, large language models, offline policy evaluation, agentic | null | 13,812 | 2410.23703 | [
-0.01566457934677601,
-0.020777743309736252,
-0.00019868393428623676,
0.03448977321386337,
0.04922115430235863,
0.011008591391146183,
0.01920393668115139,
0.02504161186516285,
-0.027233567088842392,
0.008051096461713314,
-0.021685056388378143,
0.05393090099096298,
-0.07494321465492249,
-0.... |
270 | Distribution-Free Data Uncertainty for Neural Network Regression | https://openreview.net/forum?id=pDDODPtpx9 | [
"Domokos M. Kelen",
"Ádám Jung",
"Péter Kersch",
"Andras A Benczur"
] | Poster | Quantifying uncertainty is an essential part of predictive modeling, especially in the context of high-stakes decision-making. While classification output includes data uncertainty by design in the form of class probabilities, the regression task generally aims only to predict the expected value of the target variable.... | deep learning, uncertainty quantification, regression uncertainty, aleatoric uncertainty, scoring rules, continuous ranked probability score | We propose a distribution-free neural network regression approach that learns aleatoric uncertainty through sample-based CRPS optimization. | 13,804 | null | [
-0.009858825244009495,
-0.00677643995732069,
-0.02670259214937687,
0.041583750396966934,
0.048666518181562424,
0.0623282752931118,
-0.007308233994990587,
-0.014784114435315132,
-0.017746446654200554,
-0.046039778739213943,
-0.02153686247766018,
0.029111169278621674,
-0.07181309908628464,
0... |
271 | SOO-Bench: Benchmarks for Evaluating the Stability of Offline Black-Box Optimization | https://openreview.net/forum?id=bqf0aCF3Dd | [
"Hong Qian",
"Yiyi Zhu",
"Xiang Shu",
"Shuo Liu",
"Yaolin Wen",
"Xin An",
"Huakang Lu",
"Aimin Zhou",
"Ke Tang",
"Yang Yu"
] | Poster | Black-box optimization aims to find the optima by building a model close to the black-box objective function based on function value evaluations. However, in many real-world tasks, such as the design of molecular formulas and mechanical structures, it is perilous, costly, or even infeasible to evaluate the objectiv... | Offline Optimization, Black-Box Optimization, Stability, Benchmarks | null | 13,800 | null | [
-0.029866430908441544,
0.025311162695288658,
-0.0033872132189571857,
0.00724933622404933,
0.0399508960545063,
0.04310828074812889,
0.014829851686954498,
-0.01171188522130251,
-0.007464001886546612,
-0.03624773025512695,
-0.01596921868622303,
-0.007466907147318125,
-0.05774306505918503,
-0.... |
272 | Understanding and Enhancing Safety Mechanisms of LLMs via Safety-Specific Neuron | https://openreview.net/forum?id=yR47RmND1m | [
"Yiran Zhao",
"Wenxuan Zhang",
"Yuxi Xie",
"Anirudh Goyal",
"Kenji Kawaguchi",
"Michael Shieh"
] | Poster | Safety alignment for large language models (LLMs) has become a critical issue due to their rapid progress. However, our understanding of effective safety mechanisms in LLMs remains limited, leading to safety alignment training that mainly focuses on improving optimization, data-level enhancement, or adding extra struct... | Large Language Models, Alignment, Safety, Interpretability, Neuron Detection | null | 13,799 | null | [
-0.012047367170453072,
-0.005355106201022863,
-0.020389387384057045,
0.00977363996207714,
0.0320160873234272,
0.027197085320949554,
0.04139825329184532,
0.002239255467429757,
-0.0451994314789772,
-0.012718208134174347,
-0.036791156977415085,
0.03439045324921608,
-0.04691329225897789,
-0.00... |
273 | Long Context Compression with Activation Beacon | https://openreview.net/forum?id=1eQT9OzfNQ | [
"Peitian Zhang",
"Zheng Liu",
"Shitao Xiao",
"Ninglu Shao",
"Qiwei Ye",
"Zhicheng Dou"
] | Poster | Long context compression is a critical research problem due to its significance in reducing the high computational and memory costs associated with LLMs. In this paper, we propose Activation Beacon, a plug-in module for transformer-based LLMs that targets effective, efficient, and flexible compression of long contexts.... | Context Compression, Long Context LLMs, LLM Memory | null | 13,798 | 2401.03462 | [
-0.014675888232886791,
-0.03980150818824768,
-0.020669100806117058,
-0.007384527008980513,
0.04124525189399719,
0.007996734231710434,
0.005192267708480358,
0.001198427053168416,
-0.019269799813628197,
-0.0204425398260355,
-0.05672081932425499,
0.03819391503930092,
-0.02989221177995205,
-0.... |
274 | LASeR: Towards Diversified and Generalizable Robot Design with Large Language Models | https://openreview.net/forum?id=7mlvOHL6qJ | [
"Junru Song",
"Yang Yang",
"Huan Xiao",
"Wei Peng",
"Wen Yao",
"Feifei Wang"
] | Poster | Recent advances in Large Language Models (LLMs) have stimulated a significant paradigm shift in evolutionary optimization, where hand-crafted search heuristics are gradually replaced with LLMs serving as intelligent search operators. However, these studies still bear some notable limitations, including a challenge to b... | Robot Design Automation, Large Language Model, Voxel-Based Soft Robot | This work improves the diversity and inter-task generalizability of robot design processes with the aid of Large Language Models. | 13,792 | null | [
-0.01434539444744587,
0.0032263249158859253,
0.002400957979261875,
0.01796332746744156,
0.0576922632753849,
0.028702661395072937,
0.03460622951388359,
0.018649708479642868,
-0.031174633651971817,
-0.026448484510183334,
-0.05114729329943657,
0.011407633312046528,
-0.07169442623853683,
-0.00... |
275 | Be More Diverse than the Most Diverse: Optimal Mixtures of Generative Models via Mixture-UCB Bandit Algorithms | https://openreview.net/forum?id=2Chkk5Ye2s | [
"Parham Rezaei",
"Farzan Farnia",
"Cheuk Ting Li"
] | Poster | The availability of multiple training algorithms and architectures for generative models requires a selection mechanism to form a single model over a group of well-trained generation models. The selection task is commonly addressed by identifying the model that maximizes an evaluation score based on the diversity and q... | Multi-Armed Bandits, Evaluation of generative models, Kernel-based evaluation scores, Mixture-UCB, Diversity in data generation | null | 13,785 | 2412.17622 | [
-0.013603643514215946,
-0.045785512775182724,
-0.013108672574162483,
0.0672079399228096,
0.019641907885670662,
0.02767982892692089,
0.015499213710427284,
0.010035745799541473,
-0.019338013604283333,
-0.05411767214536667,
-0.0357675701379776,
0.020821038633584976,
-0.05808735638856888,
0.00... |
276 | Arithmetic Transformers Can Length-Generalize in Both Operand Length and Count | https://openreview.net/forum?id=eIgGesYKLG | [
"Hanseul Cho",
"Jaeyoung Cha",
"Srinadh Bhojanapalli",
"Chulhee Yun"
] | Poster | Transformers often struggle with *length generalization*, meaning they fail to generalize to sequences longer than those encountered during training. While arithmetic tasks are commonly used to study length generalization, certain tasks are considered notoriously difficult, e.g., multi-operand addition (requiring gener... | Length Generalization, Transformers, Scratchpad, Position Coupling, Positional Encoding, Out-of-distribution Generalization, Arithmetic Tasks | We propose combining scratchpad with position coupling, and demonstrate that Transformers can achieve length generalization in both operand length and count for addition problems. | 13,776 | 2410.15787 | [
-0.0051214308477938175,
-0.035083040595054626,
0.0035922243259847164,
0.01910126395523548,
0.027126463130116463,
0.03436121717095375,
0.029183238744735718,
0.014348509721457958,
-0.04287981241941452,
-0.00682301539927721,
-0.0056602745316922665,
0.01970972865819931,
-0.06787771731615067,
-... |
277 | Stealthy Shield Defense: A Conditional Mutual Information-Based Post-Processing against Black-Box Model Inversion Attacks | https://openreview.net/forum?id=p0DjhjPXl3 | [
"Tianqu Zhuang",
"Hongyao Yu",
"Yixiang Qiu",
"Hao Fang",
"Bin Chen",
"Shu-Tao Xia"
] | Poster | Model inversion attacks (MIAs) aim to reconstruct the private training data by accessing a public model, raising concerns about privacy leakage. Black-box MIAs, where attackers can only query the model and obtain outputs, are closer to real-world scenarios. The latest black-box attacks have outperformed the state-of-th... | model inversion attack, model inversion defense, conditional mutual information | null | 13,774 | null | [
-0.03129595145583153,
-0.027339857071638107,
-0.021525487303733826,
0.06429144740104675,
0.025623297318816185,
0.007120934780687094,
0.0387561172246933,
-0.037365227937698364,
-0.027942148968577385,
-0.033460695296525955,
-0.0018667092081159353,
0.004548358265310526,
-0.027691280469298363,
... |
278 | NovelQA: Benchmarking Question Answering on Documents Exceeding 200K Tokens | https://openreview.net/forum?id=uMEsKEiB7J | [
"Cunxiang Wang",
"Ruoxi Ning",
"Boqi Pan",
"Tonghui Wu",
"Qipeng Guo",
"Cheng Deng",
"Guangsheng Bao",
"Xiangkun Hu",
"Zheng Zhang",
"Qian Wang",
"Yue Zhang"
] | Poster | Recent advancements in Large Language Models (LLMs) have pushed the boundaries of natural language processing, especially in long-context understanding. However, the evaluation of these models' long-context abilities remains a challenge due to the limitations of current benchmarks. To address this gap, we introduce Nov... | Long-context, Large Language Models, Question Answering | We introduce NovelQA, the first question answering dataset on documents with an average length exceeding 200K tokens. | 13,767 | 2403.12766 | [
-0.003885984420776367,
-0.0498783215880394,
-0.020989956334233284,
0.027837565168738365,
0.051787227392196655,
0.012740911915898323,
-0.002411932684481144,
0.007726689334958792,
0.00007180459215305746,
0.027129974216222763,
-0.0170529056340456,
0.02773209474980831,
-0.048330049961805344,
-... |
279 | Look Before You Leap: Universal Emergent Mechanism for Retrieval in Language Models | https://openreview.net/forum?id=eIB1UZFcFg | [
"Alexandre Variengien",
"Eric Winsor"
] | Poster | When solving challenging problems, language models (LMs) are able to identify relevant information from long and complicated contexts. To study how LMs solve retrieval tasks in diverse situations, we introduce ORION, a collection of structured retrieval tasks spanning six domains, from text understanding to coding. Eac... | Interpretability, LLM, Universality | We show that LMs decompose retrieval internally by first compiling a representation of the query, and then looking for matching elements in the context. | 13,760 | null | [
-0.029045866802334785,
-0.01208167988806963,
-0.014760356396436691,
0.042142730206251144,
0.05436471104621887,
-0.0074949418194592,
0.009007520973682404,
0.039235908538103104,
-0.02301129885017872,
0.010942607186734676,
-0.0545760877430439,
0.04658975079655647,
-0.05160699784755707,
-0.014... |
280 | A Multi-Power Law for Loss Curve Prediction Across Learning Rate Schedules | https://openreview.net/forum?id=KnoS9XxIlK | [
"Kairong Luo",
"Haodong Wen",
"Shengding Hu",
"Zhenbo Sun",
"Maosong Sun",
"Zhiyuan Liu",
"Kaifeng Lyu",
"Wenguang Chen"
] | Poster | Training large models is both resource-intensive and time-consuming, making it crucial to understand the quantitative relationship between model performance and hyperparameters. In this paper, we derive an empirical law that predicts pretraining loss for large language models for every intermediate training step across... | Large language model, Learning rate scheduler, Scaling Law, Hyperparameter optimization | Loss curve prediction and optimized learning rate schedule | 13,754 | 2503.12811 | [
-0.03182128816843033,
-0.0024222456850111485,
0.00033058892586268485,
0.04064088314771652,
0.02750549092888832,
0.0084310881793499,
0.03102649189531803,
0.0042274752631783485,
-0.007370414212346077,
-0.006729979999363422,
0.012678250670433044,
0.025752535089850426,
-0.04468704015016556,
-0... |
281 | LLaVA-Mini: Efficient Image and Video Large Multimodal Models with One Vision Token | https://openreview.net/forum?id=UQJ7CDW8nb | [
"Shaolei Zhang",
"Qingkai Fang",
"Zhe Yang",
"Yang Feng"
] | Poster | The advent of real-time large multimodal models (LMMs) like GPT-4o has sparked considerable interest in efficient LMMs. LMM frameworks typically encode visual inputs into vision tokens (continuous representations) and integrate them and textual instructions into the context of large language models (LLMs), where large-... | Large Multimodal Models, Large Language Models | null | 13,752 | null | [
0.009340420365333557,
-0.020433813333511353,
-0.0035682928282767534,
0.03555597737431526,
0.012887698598206043,
0.03994230553507805,
-0.01773989200592041,
0.046599023044109344,
-0.05189390107989311,
-0.02068517543375492,
-0.027925506234169006,
-0.0028799655847251415,
-0.05407391116023064,
... |
282 | URLOST: Unsupervised Representation Learning without Stationarity or Topology | https://openreview.net/forum?id=MBBRHDuiwM | [
"Zeyu Yun",
"Juexiao Zhang",
"Yann LeCun",
"Yubei Chen"
] | Poster | Unsupervised representation learning has seen tremendous progress. However, it is constrained by its reliance on domain-specific stationarity and topology, a limitation not found in biological intelligence systems. For instance, unlike computer vision, human vision can process visual signals sampled from highly irregul... | Unsupervised learning, Self-Supervised Learning, NeuroAI, Multi-Modality, Human Vision, Biologically-inspired Models | null | 13,751 | 2310.04496 | [
0.000165487130288966,
-0.032735440880060196,
0.004563885275274515,
0.04272913187742233,
0.012985366396605968,
0.01594374142587185,
0.03430989384651184,
0.026112180203199387,
-0.039718929678201675,
-0.034804634749889374,
-0.008250472135841846,
-0.009295832365751266,
-0.07713773846626282,
0.... |
283 | One-for-All Few-Shot Anomaly Detection via Instance-Induced Prompt Learning | https://openreview.net/forum?id=Zzs3JwknAY | [
"Wenxi Lv",
"Qinliang Su",
"Wenchao Xu"
] | Poster | Anomaly detection methods under the 'one-for-all' paradigm aim to develop a unified model capable of detecting anomalies across multiple classes. However, these approaches typically require a large number of normal samples for model training, which may not always be feasible in practice. Few-shot anomaly detection meth... | Anomaly detection, few-shot, vision-language model | null | 13,750 | null | [
0.0016706101596355438,
-0.02835899218916893,
-0.014080900698900223,
0.035495251417160034,
0.04106398671865463,
0.0017406559782102704,
0.045903656631708145,
0.0054377405904233456,
-0.03607214614748955,
-0.010953923687338829,
-0.05283329635858536,
0.030886024236679077,
-0.07248044013977051,
... |
284 | K-HALU: Multiple Answer Korean Hallucination Benchmark for Large Language Models | https://openreview.net/forum?id=VnLhUogHYE | [
"Jaehyung Seo",
"Heuiseok Lim"
] | Poster | Recently, researchers and companies have been developing large language models (LLMs) specifically designed for particular purposes and have achieved significant advancements in various natural language processing tasks. However, LLMs are still prone to generating hallucinations—results that are unfaithful or inconsistent... | Hallucination, Benchmark dataset, Multiple answer, Korean, Large language model | Multiple-answer Korean hallucination benchmark for large language models | 13,748 | null | [
-0.0008197393035516143,
0.005599022842943668,
-0.0013327249325811863,
0.04117061197757721,
0.011257180012762547,
0.015520679764449596,
0.054357320070266724,
0.027774319052696228,
-0.027977081015706062,
-0.002338755177333951,
-0.036685872822999954,
0.013084053993225098,
-0.07391183078289032,
... |
285 | Charting the Design Space of Neural Graph Representations for Subgraph Matching | https://openreview.net/forum?id=5pd78GmXC6 | [
"Vaibhav Raj",
"Indradyumna Roy",
"Ashwin Ramachandran",
"Soumen Chakrabarti",
"Abir De"
] | Poster | Subgraph matching is vital in knowledge graph (KG) question answering, molecule design, scene graph, code and circuit search, etc. Neural methods have shown promising results for subgraph matching. Our study of recent systems suggests refactoring them into a unified design space for graph matching networks. Existing me... | Graph Retrieval, Graph Neural Networks, Subgraph Matching | We propose a unified framework for graph matching networks and experiment with various alternatives for each design axis to obtain state-of-the-art results on the subgraph isomorphism task. | 13,746 | null | [
-0.0006676872144453228,
-0.01448645070195198,
0.0006428764318116009,
0.04726940765976906,
0.049262646585702896,
0.031501106917858124,
0.005020426120609045,
-0.004755895584821701,
0.01540050096809864,
-0.057495053857564926,
0.014549068175256252,
-0.0035253488458693027,
-0.07469277083873749,
... |
286 | Convergence and Implicit Bias of Gradient Descent on Continual Linear Classification | https://openreview.net/forum?id=DTqx3iqjkz | [
"Hyunji Jung",
"Hanseul Cho",
"Chulhee Yun"
] | Poster | We study continual learning on multiple linear classification tasks by sequentially running gradient descent (GD) for a fixed budget of iterations per each given task. When all tasks are jointly linearly separable and are presented in a cyclic/random order, we show the directional convergence of the trained linear clas... | Continual Learning, Sequential Learning, Gradient Descent, Linear Classification, Convergence, Implicit Bias | null | 13,745 | null | [
-0.01986733451485634,
0.003940069582313299,
-0.017389321699738503,
0.02757885493338108,
0.03096570260822773,
0.02192050963640213,
0.044409263879060745,
0.025259235873818398,
-0.036804601550102234,
-0.03200269117951393,
-0.012853334657847881,
0.01792251132428646,
-0.09489548951387405,
0.009... |
287 | The Unreasonable Ineffectiveness of the Deeper Layers | https://openreview.net/forum?id=ngmEcEer8a | [
"Andrey Gromov",
"Kushal Tirumala",
"Hassan Shapourian",
"Paolo Glorioso",
"Dan Roberts"
] | Poster | How is knowledge stored in an LLM’s weights? We study this via layer pruning: if removing a certain layer does not affect model performance in common question-answering benchmarks, then the weights in that layer are not necessary for storing the knowledge needed to answer those questions. To find these unnecessary para... | NLP, Pruning, Science of Deep Learning, Efficient Inference | We use model pruning as tool to understand how and where knowledge is located in open-weight LLMs: we find that we can remove up to half the layers of Llama-2 70B with essentially no impact on performance on QA benchmarks. | 13,737 | 2403.17887 | [
-0.0008830406004562974,
-0.0448157899081707,
0.011366426944732666,
0.03971482813358307,
0.054381512105464935,
0.015572684817016125,
0.003771684831008315,
-0.0110126081854105,
-0.018109530210494995,
-0.005332167726010084,
-0.01544160582125187,
0.023673029616475105,
-0.05815139412879944,
0.0... |
288 | Distilling Dataset into Neural Field | https://openreview.net/forum?id=nCrJD7qPJN | [
"Donghyeok Shin",
"HeeSun Bae",
"Gyuwon Sim",
"Wanmo Kang",
"Il-chul Moon"
] | Poster | Utilizing a large-scale dataset is essential for training high-performance deep learning models, but it also comes with substantial computation and storage costs. To overcome these challenges, dataset distillation has emerged as a promising solution by compressing the large-scale dataset into a smaller synthetic datase... | Dataset distillation, Dataset condensation, Neural field | This paper proposes a framework for utilizing neural fields in dataset distillation. | 13,708 | 2503.04835 | [
0.01322607509791851,
-0.030415387824177742,
-0.024420253932476044,
0.06562459468841553,
0.06020219996571541,
0.029449399560689926,
0.0017879934748634696,
-0.00833943486213684,
0.003096916014328599,
-0.048013363033533096,
0.01588173769414425,
-0.019850322976708412,
-0.053051553666591644,
0.... |
289 | Relax and Merge: A Simple Yet Effective Framework for Solving Fair k-Means and k-sparse Wasserstein Barycenter Problems | https://openreview.net/forum?id=n8h1z588eu | [
"Shihong Song",
"Guanlin Mo",
"Hu Ding"
] | Poster | The fairness of clustering algorithms has gained widespread attention across various areas, including machine learning. In this paper, we study fair $k$-means clustering in Euclidean space. Given a dataset comprising several groups, the fairness constraint requires that each cluster should contain a proportion of po... | clustering, k-means, fairness, approximate algorithm, optimal transport | An improved algorithm for the fair k-means problem. | 13,707 | null | [
-0.014200086705386639,
-0.030812613666057587,
0.03163991868495941,
0.031803809106349945,
0.02665579319000244,
0.02125987969338894,
0.013058734126389027,
0.0070137991569936275,
-0.0231049545109272,
-0.05828331038355827,
-0.010323171503841877,
-0.0376787967979908,
-0.04894305393099785,
-0.00... |
290 | Neural Dueling Bandits: Preference-Based Optimization with Human Feedback | https://openreview.net/forum?id=VELhv9BBfn | [
"Arun Verma",
"Zhongxiang Dai",
"Xiaoqiang Lin",
"Patrick Jaillet",
"Bryan Kian Hsiang Low"
] | Poster | Contextual dueling bandits are used to model bandit problems where a learner's goal is to find the best arm for a given context using observed noisy human preference feedback over the arms selected for past contexts. However, existing algorithms assume the reward function is linear, which can be complex and non-... | Contextual Dueling Bandits, Preferences Learning, Human Feedback, Neural Bandits, Thompson Sampling | We study the contextual dueling bandits problem and propose upper confidence bound- and Thompson sampling-based algorithms that use a neural network to estimate the reward function using human preference feedback and have sub-linear regret guarantees. | 13,697 | null | [
-0.01803969405591488,
0.008015518076717854,
0.0008779226918704808,
0.05905381962656975,
0.0051042442210018635,
0.02135266549885273,
0.004768214654177427,
0.02590986341238022,
-0.024174658581614494,
-0.04729132726788521,
-0.023455673828721046,
0.043472740799188614,
-0.0464724600315094,
-0.0... |
291 | SPORTU: A Comprehensive Sports Understanding Benchmark for Multimodal Large Language Models | https://openreview.net/forum?id=x1yOHtFfDh | [
"Haotian Xia",
"Zhengbang Yang",
"Junbo Zou",
"Rhys Tracy",
"Yuqing Wang",
"Chi Lu",
"Christopher Lai",
"Yanjun He",
"Xun Shao",
"Zhuoqing Xie",
"Yuan-fang Wang",
"Weining Shen",
"Hanjie Chen"
] | Poster | Multimodal Large Language Models (MLLMs) are advancing the ability to reason about complex sports scenarios by integrating textual and visual information. To comprehensively evaluate their capabilities, we introduce SPORTU, a benchmark designed to assess MLLMs across multi-level sports reasoning tasks. SPORTU comprises... | Multimodal Large Language Models, Sports Understanding, Benchmark | null | 13,686 | 2410.08474 | [
-0.0013373990077525377,
-0.01673932373523712,
0.005983095616102219,
0.028335480019450188,
0.021950440481305122,
-0.01871608756482601,
0.02789832279086113,
0.0275384820997715,
-0.015381711535155773,
-0.012584857642650604,
-0.014742261730134487,
0.036413196474313736,
-0.07449286431074142,
-0... |
292 | SIM: Surface-based fMRI Analysis for Inter-Subject Multimodal Decoding from Movie-Watching Experiments | https://openreview.net/forum?id=OJsMGsO6yn | [
"Simon Dahan",
"Gabriel Bénédict",
"Logan Zane John Williams",
"Yourong Guo",
"Daniel Rueckert",
"Robert Leech",
"Emma Claire Robinson"
] | Poster | Current AI frameworks for brain decoding and encoding typically train and test models within the same datasets. This limits their utility for cognitive training (neurofeedback), for which it would be useful to pool experiences across individuals to better simulate stimuli not sampled during training. A key obstacle to ... | movie-watching experiment, fMRI, cortical analysis, surface-based transformers, multimodal learning, contrastive learning, self-supervised learning, generalization, encoding/decoding | A surface-based deep learning fMRI model that generalises encoding and decoding of audio-visual stimuli from movie-watching experiments to unseen subjects and unseen stimuli | 13,684 | 2501.16471 | [
0.02490704320371151,
0.022275861352682114,
0.013363630510866642,
0.007359199691563845,
0.027895551174879074,
0.02826046571135521,
0.04594480246305466,
0.026340315118432045,
-0.038000866770744324,
-0.05879666656255722,
-0.009720257483422756,
0.019023656845092773,
-0.06275859475135803,
-0.00... |
293 | Why In-Context Learning Models are Good Few-Shot Learners? | https://openreview.net/forum?id=iLUcsecZJp | [
"Shiguang Wu",
"Yaqing Wang",
"Quanming Yao"
] | Poster | We explore in-context learning (ICL) models from a learning-to-learn perspective. Unlike studies that identify specific learning algorithms in ICL models, we compare ICL models with typical meta-learners to understand their superior performance. We theoretically prove the expressiveness of ICL models as learning algori... | In-Context Learning, Meta-Learning | null | 13,664 | null | [
0.012762411497533321,
-0.008504990488290787,
-0.008463527075946331,
0.04075856879353523,
0.03440053015947342,
-0.004857610911130905,
0.03281957656145096,
0.03088061697781086,
-0.02646145597100258,
0.024079635739326477,
-0.028261594474315643,
0.04978567734360695,
-0.040695831179618835,
-0.0... |
294 | Evaluating Large Language Models through Role-Guide and Self-Reflection: A Comparative Study | https://openreview.net/forum?id=E36NHwe7Zc | [
"Lili Zhao",
"Yang Wang",
"Qi Liu",
"Mengyun Wang",
"Wei Chen",
"Zhichao Sheng",
"Shijin Wang"
] | Poster | Large Language Models fine-tuned with Reinforcement Learning from Human Feedback (RLHF-LLMs) can over-rely on aligned preferences without truly gaining self-knowledge, leading to hallucination and biases. If an LLM can better access its knowledge and know what it knows, it can avoid making false or unsupported claims. ... | LLMs, Verbalized confidence, Shortcut learning | Evaluating large language models through a role-guide and self-reflection strategy in a comparative study | 13,663 | null | [
-0.013698723167181015,
-0.019590027630329132,
-0.007367664948105812,
0.014560660347342491,
0.05616806074976921,
-0.00537517573684454,
0.0334034226834774,
0.01786232553422451,
-0.02571013756096363,
0.015113434754312038,
-0.0326368622481823,
0.05674658343195915,
-0.032463785260915756,
-0.013... |
295 | Local Loss Optimization in the Infinite Width: Stable Parameterization of Predictive Coding Networks and Target Propagation | https://openreview.net/forum?id=g6syfIrVuS | [
"Satoki Ishikawa",
"Rio Yokota",
"Ryo Karakida"
] | Poster | Local learning, which trains a network through layer-wise local targets and losses, has been studied as an alternative to backpropagation (BP) in neural computation. However, its algorithms often become more complex or require additional hyperparameters due to the locality, making it challenging to identify desirable s... | deep learning, feature learning, local learning, predictive coding, target propagation, infinite width, maximal update parameterization (muP) | We derive the parameterization of major local learning methods that enable feature learning in infinite-width neural networks and demonstrate its benefits. | 13,642 | 2411.02001 | [
-0.02426576428115368,
-0.027569912374019623,
0.0029478927608579397,
0.018994837999343872,
0.03021043911576271,
0.04960261285305023,
0.007027293089777231,
-0.014887935481965542,
-0.023691730573773384,
-0.055491868406534195,
0.0008343582157976925,
-0.006201930809766054,
-0.05413490906357765,
... |
296 | Towards Faster Decentralized Stochastic Optimization with Communication Compression | https://openreview.net/forum?id=CMMpcs9prj | [
"Rustem Islamov",
"Yuan Gao",
"Sebastian U Stich"
] | Poster | Communication efficiency has garnered significant attention as it is considered the main bottleneck for large-scale decentralized Machine Learning applications in distributed and federated settings. In this regime, clients are restricted to transmitting small amounts of compressed information to their neighbors over a ... | Optimization, Decentralized Learning, Federated Learning, Communication Compression | null | 13,638 | 2405.20114 | [
-0.02312680333852768,
-0.0329427495598793,
-0.015750780701637268,
0.05689172074198723,
0.021559754386544228,
0.05095117539167404,
0.03382599353790283,
-0.024368269369006157,
-0.007720399182289839,
-0.07561495900154114,
-0.005804325919598341,
-0.0074241808615624905,
-0.06873678416013718,
-0... |
297 | Group-robust Sample Reweighting for Subpopulation Shifts via Influence Functions | https://openreview.net/forum?id=aQj9Ifxrl6 | [
"Rui Qiao",
"Zhaoxuan Wu",
"Jingtan Wang",
"Pang Wei Koh",
"Bryan Kian Hsiang Low"
] | Poster | Machine learning models often have uneven performance among subpopulations (a.k.a. groups) in the data distributions. This poses a significant challenge for the models to generalize when the proportions of the groups shift during deployment. To improve robustness to such shifts, existing approaches have developed stra... | distribution shift, subpopulation shift, spurious correlation, influence function, sample reweighting, data selection | We introduce Group-robust Sample Reweighting (GSR), which uses group-labeled data to guide the iterative retraining of the model on group-unlabeled data reweighted using influence functions. | 13,637 | 2503.07315 | [
0.0027158644516021013,
-0.041001416742801666,
0.024958528578281403,
0.03845593333244324,
0.023184319958090782,
0.034640394151210785,
0.02963799238204956,
-0.019740695133805275,
-0.010012256912887096,
-0.05560823529958725,
-0.023441528901457787,
-0.008498775772750378,
-0.0830770879983902,
-... |
298 | Endless Jailbreaks with Bijection Learning | https://openreview.net/forum?id=xP1radUi32 | [
"Brian R.Y. Huang",
"Maximilian Li",
"Leonard Tang"
] | Poster | Despite extensive safety measures, LLMs are vulnerable to adversarial inputs, or jailbreaks, which can elicit unsafe behaviors. In this work, we introduce bijection learning, a powerful attack algorithm which automatically fuzzes LLMs for safety vulnerabilities using randomly-generated encodings whose complexity can be... | jailbreaking, redteaming, AI safety, AI alignment, adversarial robustness, adversarial attacks | We jailbreak frontier language models with a novel state-of-the-art encoding-based jailbreak, and we derive inverse scaling laws regarding the efficacy of our jailbreak. | 13,633 | 2410.01294 | [
-0.016481993719935417,
-0.02095223031938076,
-0.0323280394077301,
0.023577360436320305,
0.03959035500884056,
0.01470633689314127,
0.03982127085328102,
-0.01044289581477642,
-0.023250769823789597,
-0.0035969202872365713,
-0.020385174080729485,
-0.011782368645071983,
-0.07326751947402954,
-0... |
299 | GotenNet: Rethinking Efficient 3D Equivariant Graph Neural Networks | https://openreview.net/forum?id=5wxCQDtbMo | [
"Sarp Aykent",
"Tian Xia"
] | Poster | Understanding complex three-dimensional (3D) structures of graphs is essential for accurately modeling various properties, yet many existing approaches struggle with fully capturing the intricate spatial relationships and symmetries inherent in such systems, especially in large-scale, dynamic molecular datasets. These ... | graph neural networks, computational physics, 3D graphs | GotenNet: An efficient framework that uses high-degree steerable features to model complex 3D molecular structures while maintaining E(3) equivariance. | 13,629 | null | [
-0.005428466480225325,
-0.02334756962954998,
0.023288195952773094,
0.03271272033452988,
0.02121184580028057,
-0.006300627253949642,
0.004083604086190462,
0.02341613918542862,
-0.00814311858266592,
-0.0533689446747303,
0.022964753210544586,
-0.0020747415255755186,
-0.05898824334144592,
0.04... |