Columns: type, name, virtualsite_url, speakers/authors, abstract
Poster
$\boldsymbol{\lambda}$-Orthogonality Regularization for Compatible Representation Learning
https://neurips.cc//virtual/2025/poster/119181
Simone Ricci, Niccolò Biondi, Federico Pernici, Ioannis Patras, Alberto Del Bimbo
Retrieval systems rely on representations learned by increasingly powerful models. However, due to the high training cost and inconsistencies in learned representations, there is significant interest in facilitating communication between representations and ensuring compatibility across independently trained neural net...
Poster
$\Delta \mathrm{Energy}$: Optimizing Energy Change During Vision-Language Alignment Improves both OOD Detection and OOD Generalization
https://neurips.cc//virtual/2025/poster/116579
Lin Zhu, Yifeng Yang, Xinbing Wang, Qinying Gu, Nanyang Ye
Recent approaches for vision-language models (VLMs) have shown remarkable success in achieving fast downstream adaptation. When applied to real-world downstream tasks, VLMs inevitably encounter both the in-distribution (ID) data and out-of-distribution (OOD) data. The OOD datasets often include both covariate shifts (e...
Poster
$\epsilon$-Seg: Sparsely Supervised Semantic Segmentation of Microscopy Data
https://neurips.cc//virtual/2025/poster/115837
Sheida RahnamaiKordasiabi, Damian Nogare, Florian Jug
Semantic segmentation of electron microscopy (EM) images of biological samples remains a challenge in the life sciences. EM data captures details of biological structures, sometimes with such complexity that even human observers can find it overwhelming. Here we introduce $\epsilon$-Seg, a method based on hierarchical va...
Poster
$\mathcal{X}^2$-DFD: A framework for e$\mathcal{X}$plainable and e$\mathcal{X}$tendable Deepfake Detection
https://neurips.cc//virtual/2025/poster/115622
Yize Chen, Zhiyuan Yan, Guangliang Cheng, Kangran Zhao, Siwei Lyu, Baoyuan Wu
This paper proposes **$\mathcal{X}^2$-DFD**, an **e$\mathcal{X}$plainable** and **e$\mathcal{X}$tendable** framework based on multimodal large-language models (MLLMs) for deepfake detection, consisting of three key stages. The first stage, *Model Feature Assessment*, systematically evaluates the detectability of forger...
Poster
$\mathtt{VIBE}$: Video-to-Text Information Bottleneck Evaluation for TL;DR
https://neurips.cc//virtual/2025/poster/119324
Shenghui Chen, Po-han Li, Sandeep Chinchali, Ufuk Topcu
Many decision-making tasks, where both accuracy and efficiency matter, still require human supervision. For example, tasks like traffic officers reviewing hour-long dashcam footage or researchers screening conference videos can benefit from concise summaries that reduce cognitive load and save time. Yet current vision-...
Poster
$\mu$PC: Scaling Predictive Coding to 100+ Layer Networks
https://neurips.cc//virtual/2025/poster/116280
Francesco Innocenti, El Mehdi Achour, Christopher L Buckley
The biological implausibility of backpropagation (BP) has motivated many alternative, brain-inspired algorithms that attempt to rely only on local information, such as predictive coding (PC) and equilibrium propagation. However, these algorithms have notoriously struggled to train very deep networks, preventing them fr...
Poster
$O(\sqrt{T})$ Static Regret and Instance Dependent Constraint Violation for Constrained Online Convex Optimization
https://neurips.cc//virtual/2025/poster/117398
Rahul Vaze, Abhishek Sinha
The constrained version of the standard online convex optimization (OCO) framework, called COCO, is considered, where on every round a convex cost function and a convex constraint function are revealed to the learner after it chooses the action for that round. The objective is to simultaneously minimize the static regre...
Poster
$\Psi$-Sampler: Initial Particle Sampling for SMC-Based Inference-Time Reward Alignment in Score Models
https://neurips.cc//virtual/2025/poster/115660
Taehoon Yoon, Yunhong Min, Kyeongmin Yeo, Minhyuk Sung
We introduce $\Psi$-Sampler, an SMC-based framework incorporating pCNL-based initial particle sampling for effective inference-time reward alignment with a score-based model. Inference-time reward alignment with score-based generative models has recently gained significant traction, following a broader paradigm shift f...
Poster
$Q\sharp$: Provably Optimal Distributional RL for LLM Post-Training
https://neurips.cc//virtual/2025/poster/118428
Jin Zhou, Kaiwen Wang, Jonathan Chang, Zhaolin Gao, Nathan Kallus, Kilian Weinberger, Kianté Brantley, Wen Sun
Reinforcement learning (RL) post-training is crucial for LLM alignment and reasoning, but existing policy-based methods, such as PPO and DPO, can fall short of fixing shortcuts inherited from pre-training. In this work, we introduce $Q\sharp$, a value-based algorithm for KL-regularized RL that guides the reference poli...
Poster
$\text{G}^2\text{M}$: A Generalized Gaussian Mirror Method to Boost Feature Selection Power
https://neurips.cc//virtual/2025/poster/117060
Hongyu Shen, Zhizhen Jane Zhao
Recent advances in false discovery rate (FDR)-controlled methods have enhanced reliability by limiting false positives, making them particularly suitable for applications in complex scenarios. This paper identifies a limitation of the so-called "mirror statistics" introduced in a prominent FDR-controlled framew...
Poster
$\textit{HiMaCon:}$ Discovering Hierarchical Manipulation Concepts from Unlabeled Multi-Modal Data
https://neurips.cc//virtual/2025/poster/120127
Ruizhe Liu, Pei Zhou, Qian Luo, Li Sun, Jun CEN, Yibing Song, Yanchao Yang
Effective generalization in robotic manipulation requires representations that capture invariant patterns of interaction across environments and tasks. We present a self-supervised framework for learning hierarchical manipulation concepts that encode these invariant patterns through cross-modal sensory correlations and ...
Poster
$\textit{Hyper-GoalNet}$: Goal-Conditioned Manipulation Policy Learning with HyperNetworks
https://neurips.cc//virtual/2025/poster/117261
Pei Zhou, Wanting Yao, Qian Luo, Xunzhe Zhou, Yanchao Yang
Goal-conditioned policy learning for robotic manipulation presents significant challenges in maintaining performance across diverse objectives and environments. We introduce *Hyper-GoalNet*, a framework that generates task-specific policy network parameters from goal specifications using hypernetworks. Unlike conventio...
Poster
$\text{R}^2\text{ec}$: Towards Large Recommender Models with Reasoning
https://neurips.cc//virtual/2025/poster/117677
Runyang You, Yongqi Li, Xinyu Lin, Xin Zhang, Wenjie Wang, Wenjie Li, Liqiang Nie
Recent advances in Large Recommender Models extend LLMs for recommendation tasks via encoding or item generation, while the reasoning ability of LLMs is typically utilized as an external module to produce extra inputs or features for conventional architectures, resulting in misaligned optimization and underutilized LLM capac...
Poster
$\text{S}^2$Q-VDiT: Accurate Quantized Video Diffusion Transformer with Salient Data and Sparse Token Distillation
https://neurips.cc//virtual/2025/poster/116949
Weilun Feng, Haotong Qin, Chuanguang Yang, Xiangqi Li, Han Yang, Yuqi Li, Zhulin An, Libo Huang, Michele Magno, Yongjun Xu
Diffusion transformers have emerged as the mainstream paradigm for video generation models. However, the use of up to billions of parameters incurs significant computational costs. Quantization offers a promising solution by reducing memory usage and accelerating inference. Nonetheless, we observe that the joint modeli...
Poster
$\texttt{AVROBUSTBENCH}$: Benchmarking the Robustness of Audio-Visual Recognition Models at Test-Time
https://neurips.cc//virtual/2025/poster/121746
Sarthak Kumar Maharana, Saksham Singh Kushwaha, Baoming Zhang, Adrian Rodriguez, Songtao Wei, Yapeng Tian, Yunhui Guo
While recent audio-visual models have demonstrated impressive performance, their robustness to distributional shifts at test-time remains not fully understood. Existing robustness benchmarks mainly focus on single modalities, making them insufficient for thoroughly assessing the robustness of audio-visual models. Motiv...
Poster
$\texttt{BetaConform}$: Efficient MAP Estimation of LLM Ensemble Judgment Performance with Prior Transfer
https://neurips.cc//virtual/2025/poster/117896
Huaizhi Qu, Inyoung Choi, Zhen Tan, Song Wang, Sukwon Yun, Qi Long, Faizan Siddiqui, Kwonjoon Lee, Tianlong Chen
LLM ensembles are widely used as LLM judges. However, how to estimate their accuracy, especially in an efficient way, remains unknown. In this paper, we present a principled $\textit{maximum a posteriori}$ (MAP) framework for an economical and precise estimation of the performance of LLM ensemble judgment. We first propose...
Poster
$\texttt{G1}$: Teaching LLMs to Reason on Graphs
https://neurips.cc//virtual/2025/poster/118526
Xiaojun Guo, Ang Li, Yifei Wang, Stefanie Jegelka, Yisen Wang
Although Large Language Models (LLMs) have demonstrated remarkable progress, their proficiency in graph-related tasks remains notably limited, hindering the development of truly general-purpose models. Previous attempts, including pretraining graph foundation models or employing supervised fine-tuning, often face chall...
Poster
$\texttt{STRCMP}$: Integrating Graph Structural Priors with Language Models for Combinatorial Optimization
https://neurips.cc//virtual/2025/poster/117663
Xijun Li, Jiexiang Yang, Jinghao Wang, Bo Peng, Jianguo Yao, Haibing Guan
Combinatorial optimization (CO) problems, central to operations research and theoretical computer science, present significant computational challenges due to their $\mathcal{NP}$-hard nature. While large language models (LLMs) have emerged as promising tools for CO—either by directly generating solutions or synthesizin...
Poster
1000+ FPS 4D Gaussian Splatting for Dynamic Scene Rendering
https://neurips.cc//virtual/2025/poster/117408
Yuheng Yuan, Qiuhong Shen, Xingyi Yang, Xinchao Wang
4D Gaussian Splatting (4DGS) has recently gained considerable attention as a method for reconstructing dynamic scenes. Despite achieving superior quality, 4DGS typically requires substantial storage and suffers from slow rendering speed. In this work, we delve into these issues and identify two key sources of temporal ...
Poster
1000 Layer Networks for Self-Supervised RL: Scaling Depth Can Enable New Goal-Reaching Capabilities
https://neurips.cc//virtual/2025/poster/115731
Kevin Wang, Ishaan Javali, Michał Bortkiewicz, Tomasz Trzcinski, Benjamin Eysenbach
Scaling up self-supervised learning has driven breakthroughs in language and vision, yet comparable progress has remained elusive in reinforcement learning (RL). In this paper, we study building blocks for self-supervised RL that unlock substantial improvements in scalability, with network depth serving as a critical f...
Poster
3BASiL: An Algorithmic Framework for Sparse plus Low-Rank Compression of LLMs
https://neurips.cc//virtual/2025/poster/117134
Mehdi Makni, Xiang Meng, Rahul Mazumder
Sparse plus Low-Rank $(\mathbf{S} + \mathbf{L}\mathbf{R})$ decomposition of Large Language Models (LLMs) has emerged as a promising direction in $\textit{model compression}$, aiming to decompose pre-trained model weights into a sum of sparse and low-rank matrices $\mathbf{W} \approx \mathbf{S} + \mathbf{LR}$. Despite r...
Poster
3D Equivariant Visuomotor Policy Learning via Spherical Projection
https://neurips.cc//virtual/2025/poster/116373
Boce Hu, Dian Wang, David Klee, Heng Tian, Xupeng Zhu, Haojie Huang, Robert Platt, Robin Walters
Equivariant models have recently been shown to improve the data efficiency of diffusion policy by a significant margin. However, prior work that explored this direction focused primarily on point cloud inputs generated by multiple cameras fixed in the workspace. This type of point cloud input is not compatible with the...
Poster
3D Gaussian Flats: Hybrid 2D/3D Photometric Scene Reconstruction
https://neurips.cc//virtual/2025/poster/115491
Maria Taktasheva, Lily Goli, Alessandro Fiorini, Zhen Li, Daniel Rebain, Andrea Tagliasacchi
Recent advances in radiance fields and novel view synthesis enable creation of realistic digital twins from photographs. However, current methods struggle with flat, texture-less surfaces, creating uneven and semi-transparent reconstructions, due to an ill-conditioned photometric reconstruction objective. Surface recon...
Poster
3D Gaussian Splatting based Scene-independent Relocalization with Unidirectional and Bidirectional Feature Fusion
https://neurips.cc//virtual/2025/poster/116876
Junyi Wang, Yuze Wang, Wantong Duan, Meng Wang, Yue Qi
Visual localization is a critical component across various domains. The recent emergence of novel scene representations, such as 3D Gaussian Splatting (3D GS), introduces new opportunities for advancing localization pipelines. In this paper, we propose a novel 3D GS-based framework for RGB-based, scene-independent came...
Poster
3D-GSRD: 3D Molecular Graph Auto-Encoder with Selective Re-mask Decoding
https://neurips.cc//virtual/2025/poster/118211
Chang Wu, ZHIYUAN LIU, Wen Shu, Liang Wang, Yanchen Luo, Wenqiang Lei, Yatao Bian, Junfeng Fang, Xiang Wang
Masked graph modeling (MGM) is a promising approach for molecular representation learning (MRL). However, extending the success of re-mask decoding from 2D to 3D MGM is non-trivial, primarily due to two conflicting challenges: avoiding 2D structure leakage to the decoder, while still providing sufficient 2D context for...
Poster
3D Human Pose Estimation with Muscles
https://neurips.cc//virtual/2025/poster/116069
Kevin Zhu, AliAsghar MohammadiNasrabadi, Alexander Wong, John McPhee
We introduce MusclePose as an end-to-end learnable physics-infused 3D human pose estimator that incorporates muscle-dynamics modeling to infer human dynamics from monocular video. Current physics pose estimators aim to predict physically plausible poses by enforcing the underlying dynamics equations that govern motion....
Poster
3DID: Direct 3D Inverse Design with Physics-Aware Optimization
https://neurips.cc//virtual/2025/poster/116170
Yuze Hao, Linchao Zhu, Yi Yang
Inverse design aims to design the input variables of a physical system to optimize a specified objective function, typically formulated as a search or optimization problem. However, in 3D domains, the design space grows exponentially, rendering exhaustive grid-based searches infeasible. Recent advances in deep learning...
Poster
3D Interaction Geometric Pre-training for Molecular Relational Learning
https://neurips.cc//virtual/2025/poster/118210
Namkyeong Lee, Yunhak Oh, Heewoong Noh, Gyoung S. Na, Minkai Xu, Hanchen Wang, Tianfan Fu, Chanyoung Park
Molecular Relational Learning (MRL) is a rapidly growing field that focuses on understanding the interaction dynamics between molecules, which is crucial for applications ranging from catalyst engineering to drug discovery. Despite recent progress, earlier MRL approaches are limited to using only the 2D topological str...
Poster
3DLLM-Mem: Long-Term Spatial-Temporal Memory for Embodied 3D Large Language Model
https://neurips.cc//virtual/2025/poster/115898
Wenbo Hu, Yining Hong, Yanjun Wang, Leison Gao, Zibu Wei, Xingcheng Yao, Nanyun Peng, Yonatan Bitton, Idan Szpektor, Kai-Wei Chang
Humans excel at performing complex tasks by leveraging long-term memory across temporal and spatial experiences. In contrast, current Large Language Models (LLMs) struggle to effectively plan and act in dynamic, multi-room 3D environments. We posit that part of this limitation is due to the lack of proper 3D spatial-te...
Poster
3D-OTT: Texture Transfer for 3D Objects from a Single Reference Image
https://neurips.cc//virtual/2025/poster/119398
Xiao Cao, Beibei Lin, Bo Wang, Zhiyong Huang, Robby Tan
Image-based 3D texture transfer from a single 2D reference image enables practical customization of 3D object appearances with minimal manual effort. Adapted 2D editing and text-driven 3D editing approaches can serve this purpose. However, 2D editing typically involves frame-by-frame manipulation, often resulting in inc...
Poster
3DPE-Gaze: Unlocking the Potential of 3D Facial Priors for Generalized Gaze Estimation
https://neurips.cc//virtual/2025/poster/117489
Yangshi Ge, Yiwei Bao, Feng Lu
In recent years, face-based deep-learning gaze estimation methods have achieved significant advancements. However, while face images provide supplementary information beneficial for gaze inference, the substantial extraneous information they contain also increases the risk of overfitting during model training and compr...
Poster
3D-Prover: Diversity Driven Theorem Proving With Determinantal Point Processes
https://neurips.cc//virtual/2025/poster/115866
Sean Lamont, Christian Walder, Amir Dezfouli, Paul Montague, Michael Norrish
A key challenge in automated formal reasoning is the intractable search space, which grows exponentially with the depth of the proof. This branching is caused by the large number of candidate proof tactics which can be applied to a given goal. Nonetheless, many of these tactics are semantically similar or lead to an ex...
Poster
3D-RAD: A Comprehensive 3D Radiology Med-VQA Dataset with Multi-Temporal Analysis and Diverse Diagnostic Tasks
https://neurips.cc//virtual/2025/poster/121618
Xiaotang Gai, Jiaxiang Liu, Yichen Li, Zijie Meng, Jian Wu, Zuozhu Liu
Medical Visual Question Answering (Med-VQA) holds significant potential for clinical decision support, yet existing efforts primarily focus on 2D imaging with limited task diversity. This paper presents 3D-RAD, a large-scale dataset designed to advance 3D Med-VQA using radiology CT scans. The 3D-RAD dataset encompasses...
Poster
3D Visual Illusion Depth Estimation
https://neurips.cc//virtual/2025/poster/115511
Chengtang Yao, Zhidan Liu, Jiaxi Zeng, Lidong Yu, Yuwei Wu, Yunde Jia
3D visual illusion is a perceptual phenomenon where a two-dimensional plane is manipulated to simulate three-dimensional spatial relationships, making a flat artwork or object look three-dimensional in the human visual system. In this paper, we reveal that the machine visual system is also seriously fooled by 3D visual...
Poster
3EED: Ground Everything Everywhere in 3D
https://neurips.cc//virtual/2025/poster/121462
Rong Li, Yuhao Dong, Tianshuai Hu, Alan Liang, Youquan Liu, Dongyue Lu, Liang Pan, Lingdong Kong, Junwei Liang, Ziwei Liu
Visual grounding in 3D is the key for embodied agents to localize language-referred objects in open-world environments. However, existing benchmarks are limited by an indoor focus, single-platform constraints, and small scale. We introduce 3EED, a multi-platform, multi-modal 3D grounding benchmark featuring RGB and LiDAR ...
Poster
4D3R: Motion-Aware Neural Reconstruction and Rendering of Dynamic Scenes from Monocular Videos
https://neurips.cc//virtual/2025/poster/119055
Mengqi Guo, Bo Xu, Yanyan Li, Gim Hee Lee
Novel view synthesis from monocular videos of dynamic scenes with unknown camera poses remains a fundamental challenge in computer vision and graphics. While recent advances in 3D representations such as Neural Radiance Fields (NeRF) and 3D Gaussian Splatting (3DGS) have shown promising results for static scenes, they ...
Poster
4DGCPro: Efficient Hierarchical 4D Gaussian Compression for Progressive Volumetric Video Streaming
https://neurips.cc//virtual/2025/poster/118452
Zihan Zheng, Zhenlong Wu, Houqiang Zhong, Yuan Tian, Ning Cao, Lan Xu, Jiangchao Yao, Xiaoyun Zhang, Qiang Hu, Wenjun Zhang
Achieving seamless viewing of high-fidelity volumetric video, comparable to 2D video experiences, remains an open challenge. Existing volumetric video compression methods either lack the flexibility to adjust quality and bitrate within a single model for efficient streaming across diverse networks and devices, or strug...
Poster
4DGT: Learning a 4D Gaussian Transformer Using Real-World Monocular Videos
https://neurips.cc//virtual/2025/poster/115879
Zhen Xu, Zhengqin Li, Zhao Dong, Xiaowei Zhou, Richard Newcombe, Zhaoyang Lv
We propose 4DGT, a 4D Gaussian-based Transformer model for dynamic scene reconstruction, trained entirely on real-world monocular posed videos. Using 4D Gaussian as an inductive bias, 4DGT unifies static and dynamic components, enabling the modeling of complex, time-varying environments with varying object lifespans. W...
Poster
4D-LRM: Large Space-Time Reconstruction Model From and To Any View at Any Time
https://neurips.cc//virtual/2025/poster/120016
Ziqiao Ma, Xuweiyi Chen, Shoubin Yu, Sai Bi, Kai Zhang, Ziwen Chen, Sihan Xu, Jianing Yang, Zexiang Xu, Kalyan Sunkavalli, Mohit Bansal, Joyce Chai, Hao Tan
Can we scale 4D pretraining to learn a general space-time representation that reconstructs an object from a few views at some times to any view at any time? We introduce 4D-LRM, the first large-scale 4D reconstruction model that takes input from unconstrained views and timestamps and renders arbitrary novel view-time c...
Poster
4D-VLA: Spatiotemporal Vision-Language-Action Pretraining with Cross-Scene Calibration
https://neurips.cc//virtual/2025/poster/115166
Jiahui Zhang, Yurui Chen, Yueming Xu, Ze Huang, Yanpeng Zhou, Yu-Jie Yuan, Xinyue Cai, Guowei Huang, Xingyue Quan, Hang Xu, Li Zhang
Leveraging diverse robotic data for pretraining remains a critical challenge. Existing methods typically model the dataset’s action distribution using simple observations as inputs. However, these inputs are often incomplete, resulting in a dispersed conditional action distribution—an issue we refer to as coordinate sy...
Poster
4KAgent: Agentic Any Image to 4K Super-Resolution
https://neurips.cc//virtual/2025/poster/118816
Yushen Zuo, Qi Zheng, Mingyang Wu, Xinrui Jiang, Renjie Li, Jian Wang, Yide Zhang, Gengchen Mai, Lihong Wang, James Zou, Xiaoyu Wang, Ming-Hsuan Yang, Zhengzhong Tu
We present 4KAgent, a unified agentic super-resolution generalist system designed to universally upscale any image to 4K resolution. Our system can transform images from extremely low resolutions with severe degradations, for example, highly distorted inputs at $256\times 256$, into crystal clear, high-quality 4K outpu...
Poster
70% Size, 100% Accuracy: Lossless LLM Compression for Efficient GPU Inference via Dynamic-Length Float
https://neurips.cc//virtual/2025/poster/115225
Tianyi Zhang, Shaochen (Henry) Zhong, Mohsen Hariri, Vipin Chaudhary, Yang Sui, Xia Hu, Anshumali Shrivastava
Large-scale AI models, such as Large Language Models (LLMs) and Diffusion Models (DMs), have grown rapidly in size, creating significant challenges for efficient deployment on resource-constrained hardware. In this paper, we introduce Dynamic-Length Float (DFloat11), a lossless compression framework that reduces LLM an...
Poster
A$^3$E: Towards Compositional Model Editing
https://neurips.cc//virtual/2025/poster/118255
Hongming Piao, Hao Wang, Dapeng Wu, Ying Wei
Model editing has become a *de-facto* practice to address hallucinations and outdated knowledge of large language models (LLMs). However, existing methods are predominantly evaluated in isolation, i.e., one edit at a time, failing to consider a critical scenario of compositional model editing, where multiple edits must...
Poster
A2Seek: Towards Reasoning-Centric Benchmark for Aerial Anomaly Understanding
https://neurips.cc//virtual/2025/poster/121562
Mengjingcheng Mo, Xinyang Tong, Mingpi Tan, Jiaxu Leng, JianKang Zheng, Yiran Liu, Haosheng Chen, Ji Gan, Weisheng Li, Xinbo Gao
While unmanned aerial vehicles (UAVs) offer wide-area, high-altitude coverage for anomaly detection, they face challenges such as dynamic viewpoints, scale variations, and complex scenes. Existing datasets and methods, mainly designed for fixed ground-level views, struggle to adapt to these conditions, leading to signi...
Poster
AANet: Virtual Screening under Structural Uncertainty via Alignment and Aggregation
https://neurips.cc//virtual/2025/poster/117847
Wenyu Zhu, Jianhui Wang, Bowen Gao, Yinjun Jia, Haichuan Tan, Ya-Qin Zhang, Wei-Ying Ma, Yanyan Lan
Virtual screening (VS) is a critical component of modern drug discovery, yet most existing methods—whether physics-based or deep learning-based—are developed around *holo* protein structures with known ligand-bound pockets. Consequently, their performance degrades significantly on *apo* or predicted structures ...
Poster
A Bayesian Approach to Contextual Dynamic Pricing using the Proportional Hazards Model with Discrete Price Data
https://neurips.cc//virtual/2025/poster/115838
Dongguen Kim, Young-Geun Choi, Minwoo Chae
Dynamic pricing algorithms typically assume continuous price variables, which may not reflect real-world scenarios where prices are often discrete. This paper demonstrates that leveraging discrete price information within a semi-parametric model can substantially improve performance, depending on the size of the suppor...
Poster
A Bayesian Fast-Slow Framework to Mitigate Interference in Non-Stationary Reinforcement Learning
https://neurips.cc//virtual/2025/poster/118436
Yihuan Mao, Chongjie Zhang
Given the ever-changing nature of the world and its inhabitants, agents must possess the ability to adapt and evolve over time. Recent research in non-stationary MDPs has fo...
Poster
A Beyond-Worst-Case Analysis of Greedy k-means++
https://neurips.cc//virtual/2025/poster/120080
Qingyun Chen, Sungjin Im, Ben Moseley, Ryan Milstrey, Chenyang Xu, Ruilong Zhang
$k$-means++ and the related greedy $k$-means++ algorithm are celebrated algorithms that efficiently compute seeds for Lloyd's algorithm. Greedy $k$-means++ is a generalization of $k$-means++ where, in each iteration, a new seed is greedily chosen among multiple $\ell \geq 2$ points sampled, as opposed to a single seed ...
Poster
A Black-Box Debiasing Framework for Conditional Sampling
https://neurips.cc//virtual/2025/poster/120255
Han Cui, Jingbo Liu
Conditional sampling is a fundamental task in Bayesian statistics and general modeling. Consider the problem of sampling from the posterior distribution $P_{X|Y=y^*}$ for some observation $y^*$, where the likelihood $P_{Y|X}$ is known, and we are given $n$ i.i.d. samples $D=\{X_i\}_{i=1}^n$ drawn from an unknown ...
Poster
Absence Bench: Language Models Can’t See What’s Missing
https://neurips.cc//virtual/2025/poster/121453
Harvey Yiyun Fu, Aryan Shrivastava, Jared Moore, Peter West, Chenhao Tan, Ari Holtzman
Large language models (LLMs) are increasingly capable of processing long inputs and locating specific information within them, as evidenced by their performance on the Needle in a Haystack (NIAH) test. However, while models excel at recalling surprising information, they still struggle to identify *clearly omitted* inf...
Poster
Absolute Zero: Reinforced Self-play Reasoning with Zero Data
https://neurips.cc//virtual/2025/poster/116121
Andrew Zhao, Yiran Wu, Yang Yue, Tong Wu, Quentin Xu, Yang Yue, Matthieu Lin, Shenzhi Wang, Qingyun Wu, Zilong Zheng, Gao Huang
Reinforcement learning with verifiable rewards (RLVR) has shown promise in enhancing the reasoning capabilities of large language models by learning directly from rule-based outcome rewards. Recent RLVR works that operate under the zero setting avoid supervision in labeling the reasoning process, but still depend on ma...
Poster
Absorb and Converge: Provable Convergence Guarantee for Absorbing Discrete Diffusion Models
https://neurips.cc//virtual/2025/poster/117647
Yuchen Liang, Renxiang Huang, Lifeng LAI, Ness Shroff, Yingbin Liang
Discrete state space diffusion models have shown significant advantages in applications involving discrete data, such as text and image generation. It has also been observed that their performance is highly sensitive to the choice of rate matrices, particularly between uniform and absorbing rate matrices. While empiric...
Poster
Abstain Mask Retain Core: Time Series Prediction by Adaptive Masking Loss with Representation Consistency
https://neurips.cc//virtual/2025/poster/118603
Renzhao Liang, Sizhe Xu, Chenggang Xie, Jingru Chen, Feiyang Ren, Shu Yang, Takahiro Yabe
Time series forecasting plays a pivotal role in critical domains such as energy management and financial markets. Although deep learning-based approaches (e.g., MLP, RNN, Transformer) have achieved remarkable progress, the prevailing "long-sequence information gain hypothesis" exhibits inherent limitations. Through sys...
Poster
AbstentionBench: Reasoning LLMs Fail on Unanswerable Questions
https://neurips.cc//virtual/2025/poster/121675
Polina Kirichenko, Mark Ibrahim, Kamalika Chaudhuri, Samuel Bell
For Large Language Models (LLMs) to be reliably deployed in both everyday and high-stakes domains, knowing when not to answer is equally critical as answering correctly. Real-world user queries, which can be underspecified, ill-posed, or fundamentally unanswerable, require LLMs to reason about uncertainty and selectivel...
Poster
Abstract Counterfactuals for Language Model Agents
https://neurips.cc//virtual/2025/poster/116537
Edoardo Pona, Milad Kazemi, Yali Du, David Watson, Nicola Paoletti
Counterfactual inference is a powerful tool for analysing and evaluating autonomous agents, but its application to language model (LM) agents remains challenging. Existing work on counterfactuals in LMs has primarily focused on token-level counterfactuals, which are often inadequate for LM agents due to their open-ende...
Poster
Abstract Rendering: Certified Rendering Under 3D Semantic Uncertainty
https://neurips.cc//virtual/2025/poster/119130
Yangge Li, Chenxi Ji, Xiangru Zhong, Huan Zhang, Sayan Mitra
The rendering process, which generates 2D images from 3D scene representations, has been extensively studied, yet the impact of camera pose and scene uncertainty on rendered outputs and downstream tasks remains underexplored. We propose **Abstract Rendering**, a framework that computes provable bounds on all images ren...
Poster
A Cautionary Tale on Integrating Studies with Disparate Outcome Measures for Causal Inference
https://neurips.cc//virtual/2025/poster/119654
Harsh Parikh, Trang Nguyen, Elizabeth Stuart, Kara Rudolph, Caleb Miles
Data integration approaches are increasingly used to enhance the efficiency and generalizability of studies. However, a key limitation of these methods is the assumption that outcome measures are identical across datasets -- an assumption that often does not hold in practice. Consider the following opioid use disorder ...
Poster
Accelerated Distance-adaptive Methods for Hölder Smooth and Convex Optimization
https://neurips.cc//virtual/2025/poster/116926
Yijin Ren, Haifeng Xu, Qi Deng
This paper introduces new parameter-free first-order methods for convex optimization problems in which the objective function exhibits Hölder smoothness. Inspired by the recently proposed distance-over-gradient (DOG) technique, we propose an accelerated distance-adaptive method which achieves optimal anytime convergenc...
Poster
Accelerated Evolving Set Processes for Local PageRank Computation
https://neurips.cc//virtual/2025/poster/115072
Binbin Huang, Luo Luo, Yanghua Xiao, Deqing Yang, Baojian Zhou
This work proposes a novel framework based on nested evolving set processes to accelerate Personalized PageRank (PPR) computation. At each stage of the process, we employ a localized inexact proximal point iteration to solve a simplified linear system. We show that the time complexity of such localized methods is upper...
Poster
Accelerated Sampling from Masked Diffusion Models via Entropy Bounded Unmasking
https://neurips.cc//virtual/2025/poster/117618
Heli Ben-Hamu, Itai Gat, Daniel Severo, Niklas S Nolte, Brian Karrer
Recent masked diffusion models (MDMs) have shown competitive performance compared to autoregressive models (ARMs) for language modeling. While most literature has focused on performance enhancing sampling procedures, efficient sampling from MDMs has been scarcely explored. We make the observation that often a given seq...
Poster
Accelerated Vertical Federated Adversarial Learning through Decoupling Layer-Wise Dependencies
https://neurips.cc//virtual/2025/poster/115884
Tianxing Man, Yu Bai, Ganyu Wang, Jinjie Fang, Haoran Fang, Bin Gu, Yi Chang
Vertical Federated Learning (VFL) enables participants to collaboratively train models on aligned samples while keeping their heterogeneous features private and distributed. Despite their utility, VFL models remain vulnerable to adversarial attacks during inference. Adversarial Training (AT), which generates adversarial...
Poster
Accelerating 3D Molecule Generative Models with Trajectory Diagnosis
https://neurips.cc//virtual/2025/poster/119455
Zhilong Zhang, Yuxuan Song, Yichun Wang, Jingjing Gong, Hanlin Wu, Dongzhan Zhou, Hao Zhou, Wei-Ying Ma
Geometric molecule generative models have found expanding applications across various scientific domains, but their generation inefficiency has become a critical bottleneck. Through a systematic investigation of the generative trajectory, we discover a unique challenge for molecule geometric graph generation: generativ...
Poster
Accelerating Block Coordinate Descent for LLM Finetuning via Landscape Expansion
https://neurips.cc//virtual/2025/poster/119233
Qijun Luo, Yifei Shen, Liangzu Peng, Dongsheng Li, Xiao Li
Finetuning large language models (LLMs) is a resource-intensive task for researchers in academia, with memory constraints posing a key bottleneck. A classic optimization method, block coordinate descent (BCD), significantly reduces memory cost by segmenting the trainable parameters into multiple blocks and optimizing o...
Poster
Accelerating data-driven algorithm selection for combinatorial partitioning problems
https://neurips.cc//virtual/2025/poster/115687
Vaggos Chatziafratis, Ishani Karmarkar, Yingxi Li, Ellen Vitercik
Data-driven algorithm selection is a powerful approach for choosing effective heuristics for computational problems. It operates by evaluating a set of candidate algorithms on a collection of representative training instances and selecting the one with the best empirical performance. However, running each algorithm on ...
Poster
Accelerating Diffusion LLMs via Adaptive Parallel Decoding
https://neurips.cc//virtual/2025/poster/115194
Daniel Israel, Guy Van den Broeck, Aditya Grover
The generation speed of LLMs is bottlenecked by autoregressive decoding, where tokens are predicted sequentially one by one. Alternatively, diffusion large language models (dLLMs) theoretically allow for parallel token generation, but in practice struggle to achieve the speed of autoregressive models without significa...
Poster
Accelerating Feature Conformal Prediction via Taylor Approximation
https://neurips.cc//virtual/2025/poster/115011
Zihao Tang, Boyuan Wang, Chuan Wen, Jiaye Teng
Conformal prediction is widely adopted in uncertainty quantification, due to its post-hoc, distribution-free, and model-agnostic properties. In the realm of modern deep learning, researchers have proposed Feature Conformal Prediction (FCP), which deploys conformal prediction in a feature space, yielding reduced band len...
Poster
Accelerating Model-Free Optimization via Averaging of Cost Samples
https://neurips.cc//virtual/2025/poster/116522
Guido Carnevale, Giuseppe Notarstefano
Model-free optimization methods typically rely on cost samples gathered by perturbing the current solution estimate along a finite and fixed set of directions. However, at each iteration, only current cost samples are used, while potentially informative, previously collected samples are discarded. In this work, we chal...
Poster
Accelerating Multimodal Large Language Models via Dynamic Visual-Token Exit and the Empirical Findings
https://neurips.cc//virtual/2025/poster/118110
Qiong Wu, Wenhao Lin, Yiyi Zhou, Weihao Ye, Zhanpeng Zeng, Xiaoshuai Sun, Rongrong Ji
In this paper, we study the visual redundancy problem of multimodal large language models (MLLMs) from the perspective of attention behaviors. Via extensive empirical experiments, we observe and conclude three main inference stages of MLLMs: (i) Early fusion between tokens is first accomplished quickly. (ii) Intra-modal...
Poster
Accelerating Optimization via Differentiable Stopping Time
https://neurips.cc//virtual/2025/poster/118578
Zhonglin Xie, Yiman Fong, Haoran Yuan, Zaiwen Wen
Optimization is an important module of modern machine learning applications. Tremendous efforts have been made to accelerate optimization algorithms. A common formulation is achieving a lower loss at a given time. This enables a differentiable framework with respect to the algorithm hyperparameters. In contrast, its du...
Poster
Accelerating Parallel Diffusion Model Serving with Residual Compression
https://neurips.cc//virtual/2025/poster/116432
Jiajun Luo, Yicheng Xiao, Jianru Xu, Yangxiu You, Rongwei Lu, Chen Tang, Jingyan Jiang, Zhi Wang
Diffusion models produce realistic images and videos but require substantial computational resources, necessitating multi-accelerator parallelism for real-time deployment. However, parallel inference introduces significant communication overhead from exchanging large activations between devices, limiting efficiency and...
Poster
Accelerating RL for LLM Reasoning with Optimal Advantage Regression
https://neurips.cc//virtual/2025/poster/117885
Kianté Brantley, Mingyu Chen, Zhaolin Gao, Jason Lee, Wen Sun, Wenhao Zhan, Xuezhou Zhang
Reinforcement learning (RL) has emerged as a powerful tool for fine-tuning large language models (LLMs) to improve complex reasoning abilities. However, state-of-the-art policy optimization methods often suffer from high computational overhead and memory consumption, primarily due to the need for multiple generations p...
Poster
Accelerating Video Diffusion Transformers with Sparse Attention via Semantic-Aware Permutation
https://neurips.cc//virtual/2025/poster/117598
Shuo Yang, Haocheng Xi, Yilong Zhao, Muyang Li, Jintao Zhang, Han Cai, Yujun Lin, Xiuyu Li, Chenfeng Xu, Kelly Peng, Jianfei Chen, Song Han, Kurt Keutzer, Ion Stoica
Diffusion Transformers (DiTs) are essential for video generation but suffer from significant latency due to the quadratic complexity of attention. By computing only critical tokens, sparse attention reduces computational costs and offers a promising acceleration approach. However, we identify that existing methods fail...
Poster
Accelerating Visual-Policy Learning through Parallel Differentiable Simulation
https://neurips.cc//virtual/2025/poster/119928
Haoxiang You, Yilang Liu, Ian Abraham
In this work, we propose a computationally efficient algorithm for visual policy learning that leverages differentiable simulation and first-order analytical policy gradients. Our approach decouples the rendering process from the computation graph, enabling seamless integration with existing differentiable simulation eco...
Poster
Acceleration via silver step-size on Riemannian manifolds with applications to Wasserstein space
https://neurips.cc//virtual/2025/poster/118337
Jiyoung Park, Anirban Bhattacharya, Abhishek Roy, Jonathan W. Siegel
There is extensive literature on accelerating first-order optimization methods in a Euclidean setting. Under which conditions such acceleration is feasible in Riemannian optimization problems is an active area of research. Motivated by the recent success of varying step-size methods in the Euclidean setting, we underta...
Poster
Accident Anticipation via Temporal Occurrence Prediction
https://neurips.cc//virtual/2025/poster/119711
Tianhao Zhao, Yiyang Zou, Zihao Mao, Peilun Xiao, Yulin Huang, Hongda Yang, Yuxuan Li, Tracy Li, Guobin Wu, Yutian Lin
Driving accident anticipation aims to predict potential collisions in real time, enabling timely alarms to enhance road safety. Existing methods typically predict frame-level anomaly scores as risk indicators. However, these approaches suffer from inconsistent supervision signals because driving risks evolve progressiv...
Poster
ACCO: Accumulate While You Communicate for Communication-Overlapped Sharded LLM Training
https://neurips.cc//virtual/2025/poster/120191
Adel Nabli, Louis Fournier, Pierre ERBACHER, Louis Serrano, Eugene Belilovsky, Edouard Oyallon
Training Large Language Models (LLMs) relies heavily on distributed implementations, employing multiple GPUs to compute stochastic gradients on model replicas in parallel. However, synchronizing gradients in data parallel settings induces a communication overhead increasing with the number of distributed workers, which...
Poster
AccuQuant: Simulating Multiple Denoising Steps for Quantizing Diffusion Models
https://neurips.cc//virtual/2025/poster/118264
Seunghoon Lee, Jeongwoo Choi, Byunggwan Son, JaeHyeon Moon, Jeimin Jeon, Bumsub Ham
We present in this paper a novel post-training quantization (PTQ) method, dubbed AccuQuant, for diffusion models. We show analytically and empirically that quantization errors for diffusion models are accumulated over denoising steps in a sampling process. To alleviate the error accumulation problem, AccuQuant minimize...
Poster
Accurate KV Cache Eviction via Anchor Direction Projection for Efficient LLM Inference
https://neurips.cc//virtual/2025/poster/117838
Zijie Geng, Jie Wang, Ziqi Liu, Feng Ju, Yiming Li, Xing Li, Mingxuan Yuan, Jianye Hao, Defu Lian, Enhong Chen, Feng Wu
Key-Value (KV) cache eviction---which retains the KV pairs of the most important tokens while discarding less important ones---is a critical technique for optimizing both memory usage and inference latency in large language models (LLMs).However, existing approaches often rely on simple heuristics---such as attention w...
Poster
Accurately Predicting Protein Mutational Effects via a Hierarchical Many-Body Attention Network
https://neurips.cc//virtual/2025/poster/119261
Dahao Xu, Jiahua Rao, Mingming Zhu, Jixian Zhang, Wei Lu, Shuangjia Zheng, Yuedong Yang
Predicting changes in binding free energy ($\Delta\Delta G$) is essential for understanding protein-protein interactions, which are critical in drug design and protein engineering. However, existing methods often rely on pre-trained knowledge and heuristic features, limiting their ability to accurately model complex mu...
Poster
AC-DiT: Adaptive Coordination Diffusion Transformer for Mobile Manipulation
https://neurips.cc//virtual/2025/poster/115843
Sixiang Chen, Jiaming Liu, Siyuan Qian, Han Jiang, Zhuoyang Liu, Chenyang Gu, Xiaoqi Li, Chengkai Hou, Pengwei Wang, Zhongyuan Wang, Renrui Zhang, Shanghang Zhang
Recently, mobile manipulation has attracted increasing attention for enabling language-conditioned robotic control in household tasks. However, existing methods still face challenges in coordinating the mobile base and manipulator, primarily due to two limitations. On the one hand, they fail to explicitly model the influence...
Poster
AceRAG: Advancing Reasoning-Intensive Retrieval-Augmented Generation via LLM Self-Play
https://neurips.cc//virtual/2025/poster/116458
Ran Xu, Yuchen Zhuang, Zihan Dong, Ruiyu Wang, Yue Yu, Joyce Ho, Linjun Zhang, Haoyu Wang, Wenqi Shi, Carl Yang
Retrieval-augmented generation (RAG) systems often struggle with complex reasoning tasks due to ineffective multi-hop retrieval and limited reasoning ability. We propose AceRAG, a cooperative self-play framework that trains a single large language model (LLM) to alternate between two roles: a decomposer that breaks dow...
Poster
AceReason: Advancing Math and Code Reasoning through Reinforcement Learning
https://neurips.cc//virtual/2025/poster/119111
Yang Chen, Zhuolin Yang, Zihan Liu, Chankyu Lee, Peng Xu, Mohammad Shoeybi, Bryan Catanzaro, Wei Ping
Despite recent progress in large-scale reinforcement learning (RL) for reasoning, the training recipe for building high-performing reasoning models remains elusive. Key implementation details of frontier models, such as DeepSeek-R1, including data curation strategies and RL training recipe, are often omitted. Moreover,...
Poster
Achieving $\tilde{\mathcal{O}}(1/N)$ Optimality Gap in Restless Bandits through Gaussian Approximation
https://neurips.cc//virtual/2025/poster/117853
Chen YAN, Weina Wang, Lei Ying
We study the finite-horizon Restless Multi-Armed Bandit (RMAB) problem with $N$ homogeneous arms. Prior work has shown that when an RMAB satisfies a non-degeneracy condition, Linear-Programming-based (LP-based) policies derived from the fluid approximation, which captures the mean dynamics of the system, achieve an exp...
Poster
Achilles' Heel of Mamba: Essential difficulties of the Mamba architecture demonstrated by synthetic data
https://neurips.cc//virtual/2025/poster/119917
Tianyi Chen, Pengxiao Lin, Zhiwei Wang, Zhi-Qin Xu
State Space Models (SSMs) have emerged as promising alternatives to attention mechanisms, with the Mamba architecture demonstrating impressive performance and linear complexity for processing long sequences. However, the fundamental differences between Mamba and Transformer architectures remain incompletely understood....
Poster
A Circular Argument: Does RoPE need to be Equivariant for Vision?
https://neurips.cc//virtual/2025/poster/117614
Chase van de Geijn, Polina Turishcheva, Alexander Ecker, Timo Lüddecke
Rotary Positional Encodings (RoPE) have emerged as a highly effective technique for one-dimensional sequences in Natural Language Processing spurring recent progress towards generalizing RoPE to higher-dimensional data such as images and videos. The success of RoPE has been thought to be due to its positional equivaria...
Poster
A Clean Slate for Offline Reinforcement Learning
https://neurips.cc//virtual/2025/poster/119622
Matthew T Jackson, Uljad Berdica, Jarek Liesen, Shimon Whiteson, Jakob Foerster
Progress in offline reinforcement learning (RL) has been impeded by ambiguous problem definitions and entangled algorithmic designs, resulting in inconsistent implementations, insufficient ablations, and unfair evaluations. Although offline RL explicitly avoids environment interaction, prior methods frequently employ e...
Poster
AC-LoRA: (Almost) Training-Free Access Control Aware Multi-Modal LLMs
https://neurips.cc//virtual/2025/poster/117175
Lara Lazier, Aritra Dhar, Vasilije Stambolic, Lukas Cavigelli
Corporate LLMs are gaining traction for efficient knowledge dissemination and management within organizations. However, as current LLMs are vulnerable to leaking sensitive information, it has proven difficult to apply them in settings where strict access control is necessary. To this end, we design AC-LoRA, an end-to-en...
Poster
A Closed-Form Solution for Fast and Reliable Adaptive Testing
https://neurips.cc//virtual/2025/poster/117905
Yan Zhuang, Chenye Ke, Zirui Liu, Qi Liu, Yuting Ning, Zhenya Huang, Weizhe Huang, Qingyang Mao, Shijin Wang
Human ability estimation is essential for educational assessment, career advancement, and professional certification. Adaptive Testing systems can improve estimation efficiency by selecting fewer, targeted questions, and are widely used in exams, e.g., GRE, GMAT, and Duolingo English Test. However, selecting an optimal...
Poster
A Closer Look at Model Collapse: From a Generalization-to-Memorization Perspective
https://neurips.cc//virtual/2025/poster/119754
Lianghe Shi, Meng Wu, Huijie Zhang, Zekai Zhang, Molei Tao, Qing Qu
The widespread use of diffusion models has led to an abundance of AI-generated data, raising concerns about model collapse---a phenomenon in which recursive iterations of training on synthetic data lead to performance degradation. Prior work primarily characterizes this collapse via variance shrinkage or distribution s...
Poster
A Closer Look at NTK Alignment: Linking Phase Transitions in Deep Image Regression
https://neurips.cc//virtual/2025/poster/118981
Giuseppe Castiglione, Christopher L Buckley, Ivor Simpson
Deep neural networks trained with gradient descent exhibit varying rates of learning for different patterns. However, the complexity of fitting models to data makes direct elucidation of the dynamics of learned patterns challenging. To circumvent this, many works have opted to characterize phases of learning through su...
Poster
A Closer Look at TabPFN v2: Understanding Its Strengths and Extending Its Capabilities
https://neurips.cc//virtual/2025/poster/116283
Han-Jia Ye, Si-Yang Liu, Wei-Lun (Harry) Chao
Tabular datasets are inherently heterogeneous, presenting significant challenges for developing pre-trained foundation models. The recently introduced transformer-based Tabular Prior-data Fitted Network v2 (TabPFN v2) achieves unprecedented *in-context learning* performance across diverse downstream datasets, marking a...
Poster
A Closer Look to Positive-Unlabeled Learning from Fine-grained Perspectives: An Empirical Study
https://neurips.cc//virtual/2025/poster/116712
Yuanchao Dai, Zhengzhang Hou, Changchun Li, Yuanbo Xu, En Wang, Ximing Li
Positive-Unlabeled (PU) learning refers to a specific weakly-supervised learning paradigm that induces a binary classifier with a few positive labeled instances and massive unlabeled instances. To handle this task, the community has proposed dozens of PU learning methods with various techniques, demonstrating strong po...
Poster
A CLT for Polynomial GNNs on Community-Based Graphs
https://neurips.cc//virtual/2025/poster/117042
Luciano Vinas, Arash Amini
We consider the empirical distribution of the embeddings of a $k$-layer polynomial GNN on a semi-supervised node classification task and prove a central limit theorem for them. Assuming a community based model for the underlying graph, with growing average degree $\nu_n\to\infty$, we show that the empirical distributio...
Poster
A compressive-expressive communication framework for compositional representations
https://neurips.cc//virtual/2025/poster/115077
Rafael Elberg, Felipe del Río, Mircea Petrache, Denis Parra
Compositional generalization—the ability to interpret novel combinations of familiar elements—is a hallmark of human cognition and language. Despite recent advances, deep neural networks still struggle to acquire this property reliably. In this work, we introduce CELEBI (Compressive-Expressive Language Emergence throug...
Poster
A Computationally Viable Numerical Gradient-based Technique for Optimal Covering Problems
https://neurips.cc//virtual/2025/poster/116031
Gokul Rajaraman, Debasish Chatterjee
The problem of optimally covering a given compact subset of $\mathbb{R}^N$ with a preassigned number $n$ of Euclidean metric balls has a long-standing history and it is well-recognized to be computationally hard. This article establishes a numerically viable algorithm for obtaining optimal covers of compact sets via tw...
Poster
A Controllable Examination for Long-Context Language Models
https://neurips.cc//virtual/2025/poster/121560
Yijun Yang, Zeyu Huang, Wenhao Zhu, Zihan Qiu, Fei Yuan, Jeff Pan, Ivan Titov
Existing frameworks for evaluating long-context language models (LCLM) can be broadly categorized into real-world and synthetic tasks. Despite their utility, both approaches are accompanied by certain intrinsic limitations. For example, real-world tasks are too complex to interpret or characterize and are susceptible to ...
Poster
A Convergence Theory for Diffusion Language Models: An Information-Theoretic Perspective
https://neurips.cc//virtual/2025/poster/115552
Gen Li, Changxiao Cai
Diffusion models have emerged as a powerful paradigm for modern generative modeling, demonstrating strong potential for large language models (LLMs). Unlike conventional autoregressive (AR) models that generate tokens sequentially, diffusion models allow for parallel sampling of tokens, leading to faster sampling and e...
Poster
A Counterfactual Semantics for Hybrid Dynamical Systems
https://neurips.cc//virtual/2025/poster/119012
Andy Zane, Dmitry Batenkov, Rafal Urbaniak, Jeremy Zucker, Sam Witty
Models of hybrid dynamical systems are widely used to answer questions about the causes and effects of dynamic events in time. Unfortunately, existing causal reasoning formalisms lack support for queries involving the dynamically triggered, discontinuous interventions that characterize hybrid dynamical systems. This mi...
Poster
A Cramér–von Mises Approach to Incentivizing Truthful Data Sharing
https://neurips.cc//virtual/2025/poster/119973
Alex Clinton, Thomas Zeng, Yiding Chen, Jerry Zhu, Kirthevasan Kandasamy
Modern data marketplaces and data sharing consortia increasingly rely on incentive mechanisms to encourage agents to contribute data. However, schemes that reward agents based on the quantity of submitted data are vulnerable to manipulation, as agents may submit fabricated or low-quality data to inflate their rewards. ...
Poster
ACT as Human: Multimodal Large Language Model Data Annotation with Critical Thinking
https://neurips.cc//virtual/2025/poster/117727
Lequan Lin, Dai Shi, Andi Han, Feng Chen, Qiuzheng Chen, Jiawen Li, Zhaoyang Li, Jiyuan Zhang, Zhenbang Sun, Junbin Gao
Supervised learning relies on high-quality labeled data, but obtaining such data through human annotation is both expensive and time-consuming. Recent work explores using large language models (LLMs) for annotation, but LLM-generated labels still fall short of human-level quality. To address this problem, we propose th...

Abstracts from https://neurips.cc/Downloads/2025. Pulled Sep 20, 2025

There are 23,712 unique authors and 5,480 unique first authors (one has 95 coauthors!). 11 papers have "all you need" in the title. Also check out this paper, which has the shortest abstract

How am I supposed to find the right papers when there's so many :(
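The stats quoted above can be reproduced with a short pandas sketch, assuming the Parquet export keeps the column names shown in the viewer (`name`, `speakers/authors`, `abstract`) and that authors are a comma-separated string; the file path in the usage comment is a placeholder, not the dataset's real filename.

```python
import pandas as pd


def dataset_stats(df: pd.DataFrame) -> dict:
    """Summary stats over the abstracts table: unique authors, unique
    first authors, 'all you need' titles, and the shortest abstract."""
    # Authors appear as one comma-separated string per row.
    author_lists = df["speakers/authors"].str.split(", ")
    all_authors = {a for authors in author_lists for a in authors}
    first_authors = {authors[0] for authors in author_lists}
    return {
        "unique_authors": len(all_authors),
        "unique_first_authors": len(first_authors),
        "all_you_need_titles": int(
            df["name"].str.contains("all you need", case=False).sum()
        ),
        "shortest_abstract": df.loc[df["abstract"].str.len().idxmin(), "name"],
    }


# Usage (placeholder path for the auto-converted Parquet file):
# df = pd.read_parquet("neurips2025_abstracts.parquet")
# print(dataset_stats(df))
```

Note the viewer shows truncated abstracts, so the "shortest abstract" computed from the preview rows may differ from the full dataset.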
