Dataset Viewer
Auto-converted to Parquet

Schema:
- paper: string, length 14-183
- authors: list, length 1-95
- abstract: string, length 246-3.6k
- link: string, length 42 (fixed)
- track: string, 2 classes
- award: string, 3 classes
- paper_id: string, length 10 (fixed)
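The schema above can be queried directly once the Parquet file is loaded into a DataFrame. The following is a minimal sketch, assuming pandas is available; the two rows are abbreviated real entries from this table (author lists and abstracts shortened for illustration), standing in for the full dataset.

```python
import pandas as pd

# Columns mirror the dataset schema: paper (string), authors (list of strings),
# abstract (string), link (string), track (2 classes), award (3 classes),
# and paper_id (fixed 10-character string).
rows = [
    {"paper": "Analog Foundation Models",
     "authors": ["Julian Büchel", "Abu Sebastian"],
     "abstract": "Analog in-memory computing (AIMC) is a promising compute paradigm...",
     "link": "https://openreview.net/forum?id=zo4zYTR8vn",
     "track": "Main", "award": "Poster", "paper_id": "zo4zYTR8vn"},
    {"paper": "SAVVY: Spatial Awareness via Audio-Visual LLMs through Seeing and Hearing",
     "authors": ["Mingfei Chen", "Eli Shlizerman"],
     "abstract": "3D spatial reasoning in dynamic, audio-visual environments...",
     "link": "https://openreview.net/forum?id=zwCb9cKHpd",
     "track": "Main", "award": "Oral", "paper_id": "zwCb9cKHpd"},
]
df = pd.DataFrame(rows)

# Example query over the categorical columns: all Oral papers in the Main track.
orals = df[(df["track"] == "Main") & (df["award"] == "Oral")]
print(orals["paper"].tolist())
```

In the full dataset the same boolean mask selects every Oral entry; note that paper_id is always the trailing 10 characters of the OpenReview link, so either column can serve as the record key.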
DisCO: Reinforcing Large Reasoning Models with Discriminative Constrained Optimization
[ "Gang Li", "Ming Lin", "Tomer Galanti", "Zhengzhong Tu", "Tianbao Yang" ]
The recent success and openness of DeepSeek-R1 have brought widespread attention to Group Relative Policy Optimization (GRPO) as a reinforcement learning method for large reasoning models (LRMs). In this work, we analyze the GRPO objective under a binary reward setting and reveal an inherent limitation of question-leve...
https://openreview.net/forum?id=zzUXS4f91r
Main
Poster
zzUXS4f91r
Private Zeroth-Order Optimization with Public Data
[ "Xuchen Gong", "Tian Li" ]
One of the major bottlenecks for deploying popular first-order differentially private (DP) machine learning algorithms (e.g., DP-SGD) lies in their high computation and memory cost, despite the existence of optimized implementations. Zeroth-order methods have promise in mitigating the overhead, as they leverage functi...
https://openreview.net/forum?id=zytITzY4IW
Main
Poster
zytITzY4IW
GeneFlow: Translation of Single-cell Gene Expression to Histopathological Images via Rectified Flow
[ "Mengbo Wang", "Shourya Verma", "Aditya Malusare", "Luopin Wang", "Yiyang Lu", "Vaneet Aggarwal", "Mario Sola", "Ananth Grama", "Nadia Atallah Lanman" ]
Spatial transcriptomics technologies can be used to align transcriptomes with histopathological morphology, presenting exciting new opportunities for biomolecular discovery. Using spatial transcriptomic gene expression and corresponding histology data, we construct a novel framework, GeneFlow, to map single- and multi-...
https://openreview.net/forum?id=zyopvwZbSj
Main
Poster
zyopvwZbSj
MultiNet: Adaptive Multi-Viewed Subgraph Convolutional Networks for Graph Classification
[ "Xinya Qin", "Lu Bai", "Lixin Cui", "Ming Li", "Hangyuan Du", "Edwin Hancock" ]
The problem of over-smoothing has emerged as a fundamental issue for Graph Convolutional Networks (GCNs). While existing efforts primarily focus on enhancing the discriminability of node representations for node classification, they tend to overlook the over-smoothing at the graph level, significantly influencing the p...
https://openreview.net/forum?id=zxfwVts5it
Main
Poster
zxfwVts5it
EPA: Boosting Event-based Video Frame Interpolation with Perceptually Aligned Learning
[ "Yuhan Liu", "LingHui Fu", "Zhen Yang", "Hao Chen", "Youfu Li", "Yongjian Deng" ]
Event cameras, with their capacity to provide high temporal resolution information between frames, are increasingly utilized for video frame interpolation (VFI) in challenging scenarios characterized by high-speed motion and significant occlusion. However, prevalent issues of blur and distortion within the keyframes an...
https://openreview.net/forum?id=zxZPpVoCNO
Main
Poster
zxZPpVoCNO
Novel View Synthesis from A Few Glimpses via Test-Time Natural Video Completion
[ "Yan Xu", "Yixing Wang", "Stella X. Yu" ]
Given just a few glimpses of a scene, can you imagine the movie playing out as the camera glides through it? That’s the lens we take on sparse-input novel view synthesis, not only as filling spatial gaps between widely spaced views, but also as completing a natural video unfolding through space. We recast the task as t...
https://openreview.net/forum?id=zwmq0MsIMG
Main
Poster
zwmq0MsIMG
SAVVY: Spatial Awareness via Audio-Visual LLMs through Seeing and Hearing
[ "Mingfei Chen", "Zijun Cui", "Xiulong Liu", "Jinlin Xiang", "Caleb Zheng", "Jingyuan Li", "Eli Shlizerman" ]
3D spatial reasoning in dynamic, audio-visual environments is a cornerstone of human cognition yet remains largely unexplored by existing Audio-Visual Large Language Models (AV-LLMs) and benchmarks, which predominantly focus on static or 2D scenes. We introduce SAVVY-Bench, the first benchmark for 3D spatial reasoning ...
https://openreview.net/forum?id=zwCb9cKHpd
Main
Oral
zwCb9cKHpd
Training the Untrainable: Introducing Inductive Bias via Representational Alignment
[ "Vighnesh Subramaniam", "David Mayo", "Colin Conwell", "Tomaso Poggio", "Boris Katz", "Brian Cheung", "Andrei Barbu" ]
We demonstrate that architectures which traditionally are considered to be ill-suited for a task can be trained using inductive biases from another architecture. We call a network untrainable when it overfits, underfits, or converges to poor results even when tuning their hyperparameters. For example, fully connected ...
https://openreview.net/forum?id=zvYxXhlQHM
Main
Poster
zvYxXhlQHM
Sparse Meets Dense: Unified Generative Recommendations with Cascaded Sparse-Dense Representations
[ "Yuhao Yang", "Zhi Ji", "Zhaopeng Li", "YI LI", "Zhonglin Mo", "Yue Ding", "Kai Chen", "Zijian Zhang", "Jie Li", "shuanglong li", "LIU LIN" ]
Generative models have recently gained attention in recommendation systems by directly predicting item identifiers from user interaction sequences. However, existing methods suffer from significant information loss due to the separation of stages such as quantization and sequence modeling, hindering their ability to ac...
https://openreview.net/forum?id=zugMif2nm6
Main
Poster
zugMif2nm6
Uncovering a Universal Abstract Algorithm for Modular Addition in Neural Networks
[ "Gavin McCracken", "Gabriela Moisescu-Pareja", "Vincent Létourneau", "Doina Precup", "Jonathan Love" ]
We propose a testable universality hypothesis, asserting that seemingly disparate neural network solutions observed in the simple task of modular addition actually reflect a common abstract algorithm. While prior work interpreted variations in neuron-level representations as evidence for distinct algorithms, we demonst...
https://openreview.net/forum?id=zuHs6RHQwT
Main
Poster
zuHs6RHQwT
Mamba Goes HoME: Hierarchical Soft Mixture-of-Experts for 3D Medical Image Segmentation
[ "Szymon Plotka", "Gizem Mert", "Maciej Chrabaszcz", "Ewa Szczurek", "Arkadiusz Sitek" ]
In recent years, artificial intelligence has significantly advanced medical image segmentation. Nonetheless, challenges remain, including efficient 3D medical image processing across diverse modalities and handling data variability. In this work, we introduce Hierarchical Soft Mixture-of-Experts (HoME), a two-level tok...
https://openreview.net/forum?id=ztgYn0Uk94
Main
Poster
ztgYn0Uk94
HAIF-GS: Hierarchical and Induced Flow-Guided Gaussian Splatting for Dynamic Scene
[ "Jianing Chen", "Zehao Li", "Yujun Cai", "Hao Jiang", "Chengxuan Qian", "Juyuan Kang", "Shuqin Gao", "Honglong Zhao", "Tianlu Mao", "Yucheng Zhang" ]
Reconstructing dynamic 3D scenes from monocular videos remains a fundamental challenge in 3D vision. While 3D Gaussian Splatting (3DGS) achieves real-time rendering in static settings, extending it to dynamic scenes is challenging due to the difficulty of learning structured and temporally consistent motion representat...
https://openreview.net/forum?id=ztVk8XNffY
Main
Poster
ztVk8XNffY
Know Thyself by Knowing Others: Learning Neuron Identity from Population Context
[ "Vinam Arora", "Divyansha Lachi", "Ian Jarratt Knight", "Mehdi Azabou", "Blake Aaron Richards", "Cole Lincoln Hurwitz", "Josh Siegle", "Eva L Dyer" ]
Identifying the functional identity of individual neurons is essential for interpreting circuit dynamics, yet it remains a major challenge in large-scale _in vivo_ recordings where anatomical and molecular labels are often unavailable. Here we introduce NuCLR, a self-supervised framework that learns context-aware repre...
https://openreview.net/forum?id=zt3RKc6VBp
Main
Poster
zt3RKc6VBp
RAST: Reasoning Activation in LLMs via Small-model Transfer
[ "Siru Ouyang", "Xinyu Zhu", "Zilin Xiao", "Minhao Jiang", "Yu Meng", "Jiawei Han" ]
Reinforcement learning (RL) has become a powerful approach for improving the reasoning capabilities of large language models (LLMs), as evidenced by recent successes such as OpenAI's o1 and Deepseek-R1. However, applying RL at scale remains intimidatingly resource-intensive, requiring multiple model copies and extensiv...
https://openreview.net/forum?id=zswylB4Wnt
Main
Poster
zswylB4Wnt
Scalable and Cost-Efficient de Novo Template-Based Molecular Generation
[ "Piotr Gaiński", "Oussama Boussif", "Andrei Rekesh", "Dmytro Shevchuk", "Ali Parviz", "Mike Tyers", "Robert A. Batey", "Michał Koziarski" ]
Template-based molecular generation offers a promising avenue for drug design by ensuring generated compounds are synthetically accessible through predefined reaction templates and building blocks. In this work, we tackle three core challenges in template-based GFlowNets: (1) minimizing synthesis cost, (2) scaling to l...
https://openreview.net/forum?id=zssWxiiJZ1
Main
Poster
zssWxiiJZ1
Accelerating Feature Conformal Prediction via Taylor Approximation
[ "Zihao Tang", "Boyuan Wang", "Chuan Wen", "Jiaye Teng" ]
Conformal prediction is widely adopted in uncertainty quantification, due to its post-hoc, distribution-free, and model-agnostic properties. In the realm of modern deep learning, researchers have proposed Feature Conformal Prediction (FCP), which deploys conformal prediction in a feature space, yielding reduced band le...
https://openreview.net/forum?id=zsUOQRUFOy
Main
Poster
zsUOQRUFOy
PhySwin: An Efficient and Physically-Informed Foundation Model for Multispectral Earth Observation
[ "Chong Tang", "Joseph Powell", "Dirk Koch", "Robert D. Mullins", "Alex S. Weddell", "Jagmohan Chauhan" ]
Recent progress on Remote Sensing Foundation Models (RSFMs) aims toward universal representations for Earth observation imagery. However, current efforts often scale up in size significantly without addressing efficiency constraints critical for real-world applications (e.g., onboard processing, rapid disaster response...
https://openreview.net/forum?id=zrBucj9BwG
Main
Poster
zrBucj9BwG
CURE: Concept Unlearning via Orthogonal Representation Editing in Diffusion Models
[ "Shristi Das Biswas", "Arani Roy", "Kaushik Roy" ]
As Text-to-Image models continue to evolve, so does the risk of generating unsafe, copyrighted, or privacy-violating content. Existing safety interventions - ranging from training data curation and model fine-tuning to inference-time filtering and guidance - often suffer from incomplete concept removal, susceptibility ...
https://openreview.net/forum?id=zprMrpiLgT
Main
Spotlight
zprMrpiLgT
Implicit-ARAP: Efficient Handle-Guided Neural Field Deformation via Local Patch Meshing
[ "Daniele Baieri", "Filippo Maggioli", "Emanuele Rodolà", "Simone Melzi", "Zorah Lähner" ]
Neural fields have emerged as a powerful representation for 3D geometry, enabling compact and continuous modeling of complex shapes. Despite their expressive power, manipulating neural fields in a controlled and accurate manner -- particularly under spatial constraints -- remains an open challenge, as existing approach...
https://openreview.net/forum?id=zp7W2QmxHS
Main
Poster
zp7W2QmxHS
Robust Integrated Learning and Pauli Noise Mitigation for Parametrized Quantum Circuits
[ "Md Mobasshir Arshed Naved", "Wenbo Xie", "Wojciech Szpankowski", "Ananth Grama" ]
We propose a novel gradient-based framework for learning parameterized quantum circuits (PQCs) in the presence of Pauli noise in gate operation. The key innovation in our framework is the simultaneous optimization of model parameters and learning of an inverse noise channel, specifically designed to mitigate Pauli nois...
https://openreview.net/forum?id=zoNpnBlJWh
Main
Poster
zoNpnBlJWh
Analog Foundation Models
[ "Julian Büchel", "Iason Chalas", "Giovanni Acampa", "An Chen", "Omobayode Fagbohungbe", "Hsinyu Tsai", "Kaoutar El Maghraoui", "Manuel Le Gallo", "Abbas Rahimi", "Abu Sebastian" ]
Analog in-memory computing (AIMC) is a promising compute paradigm to improve speed and power efficiency of neural network inference beyond the limits of conventional von Neumann-based architectures. However, AIMC introduces fundamental challenges such as noisy computations and strict constraints on input and output qua...
https://openreview.net/forum?id=zo4zYTR8vn
Main
Poster
zo4zYTR8vn
From Synapses to Dynamics: Obtaining Function from Structure in a Connectome Constrained Model of the Head Direction Circuit
[ "Sunny Duan", "Ling Liang Dong", "Ila R Fiete" ]
How precisely does circuit wiring specify function? This fundamental question is particularly relevant for modern neuroscience, as large-scale electron microscopy now enables the reconstruction of neural circuits at single-synapse resolution across many organisms. To interpret circuit function from such datasets, we mu...
https://openreview.net/forum?id=zn4F6os6cq
Main
Poster
zn4F6os6cq
Plug-and-play Feature Causality Decomposition for Multimodal Representation Learning
[ "Ye Liu", "Zihan Ji", "Hongmin Cai" ]
Multimodal representation learning is critical for a wide range of applications, such as multimodal sentiment analysis. Current multimodal representation learning methods mainly focus on the multimodal alignment or fusion strategies, such that the complementary and consistent information among heterogeneous modalities ...
https://openreview.net/forum?id=zmCBCbr2Wj
Main
Poster
zmCBCbr2Wj
Towards Syn-to-Real IQA: A Novel Perspective on Reshaping Synthetic Data Distributions
[ "Aobo Li", "Jinjian Wu", "Yongxu Liu", "Leida Li", "Weisheng Dong" ]
Blind Image Quality Assessment (BIQA) has advanced significantly through deep learning, but the scarcity of large-scale labeled datasets remains a challenge. While synthetic data offers a promising solution, models trained on existing synthetic datasets often show limited generalization ability. In this work, we make a...
https://openreview.net/forum?id=zlRvBwWFII
Main
Poster
zlRvBwWFII
Causality Meets the Table: Debiasing LLMs for Faithful TableQA via Front-Door Intervention
[ "Zhen Yang", "Ziwei Du", "Minghan Zhang", "Wei Du", "Jie Chen", "Fulan Qian", "Shu Zhao" ]
Table Question Answering (TableQA) combines natural language understanding and structured data reasoning, posing challenges in semantic interpretation and logical inference. Recent advances in Large Language Models (LLMs) have improved TableQA performance through Direct Prompting and Agent paradigms. However, these mod...
https://openreview.net/forum?id=zlMupLoKRf
Main
Poster
zlMupLoKRf
Learning Cocoercive Conservative Denoisers via Helmholtz Decomposition for Poisson Imaging Inverse Problems
[ "Deliang Wei", "Peng Chen", "Haobo Xu", "Jiale Yao", "Fang Li", "Tieyong Zeng" ]
Plug-and-play (PnP) methods with deep denoisers have shown impressive results in imaging problems. They typically require strong convexity or smoothness of the fidelity term and a (residual) non-expansive denoiser for convergence. These assumptions, however, are violated in Poisson inverse problems, and non-expansivene...
https://openreview.net/forum?id=zl4FR39Ibh
Main
Poster
zl4FR39Ibh
TAMI: Taming Heterogeneity in Temporal Interactions for Temporal Graph Link Prediction
[ "Zhongyi Yu", "Jianqiu Wu", "Zhenghao Wu", "Shuhan Zhong", "Weifeng Su", "Chul-Ho Lee", "Weipeng Zhuo" ]
Temporal graph link prediction aims to predict future interactions between nodes in a graph based on their historical interactions, which are encoded in node embeddings. We observe that heterogeneity naturally appears in temporal interactions, e.g., a few node pairs can make most interaction events, and interaction eve...
https://openreview.net/forum?id=zjQLUiguRz
Main
Poster
zjQLUiguRz
Activity Pruning for Efficient Spiking Neural Networks
[ "Tong Bu", "Xinyu Shi", "Zhaofei Yu" ]
While sparse coding plays an important role in promoting the efficiency of biological neural systems, it has not been fully utilized by artificial models as the activation sparsity is not well suited to the current structure of deep networks. Spiking Neural Networks (SNNs), with their event-driven characteristics, offe...
https://openreview.net/forum?id=zjOXZEXQKZ
Main
Poster
zjOXZEXQKZ
Private Hyperparameter Tuning with Ex-Post Guarantee
[ "Badih Ghazi", "Pritish Kamath", "Alexander Knop", "Ravi Kumar", "Pasin Manurangsi", "Chiyuan Zhang" ]
The conventional approach in differential privacy (DP) literature formulates the privacy-utility tradeoff with a "privacy-first" perspective: for a predetermined level of privacy, a certain utility is achievable. However, practitioners often operate under a "utility-first" paradigm, prioritizing a desired level of ut...
https://openreview.net/forum?id=zjMd3yfyWv
Main
Spotlight
zjMd3yfyWv
Scalable In-context Ranking with Generative Models
[ "Nilesh Gupta", "Chong You", "Srinadh Bhojanapalli", "Sanjiv Kumar", "Inderjit S Dhillon", "Felix X. Yu" ]
In-context Ranking (ICR) is an emerging paradigm for Information Retrieval (IR), which leverages contextual understanding of LLMs by directly incorporating the task description, candidate documents, and the query into the model's input prompt and tasking the LLM to identify relevant document(s). While it is effective, ...
https://openreview.net/forum?id=zj45hoQhjD
Main
Poster
zj45hoQhjD
Optimal Adjustment Sets for Nonparametric Estimation of Weighted Controlled Direct Effect
[ "Ruiyang Lin", "Yongyi Guo", "Kyra Gan" ]
The weighted controlled direct effect (WCDE) generalizes the standard controlled direct effect (CDE) by averaging over the mediator distribution, providing a robust estimate when treatment effects vary across mediator levels. This makes the WCDE especially relevant in fairness analysis, where it isolates the direct eff...
https://openreview.net/forum?id=zho5kN8jTn
Main
Poster
zho5kN8jTn
Constrained Optimization From a Control Perspective via Feedback Linearization
[ "Runyu Zhang", "Arvind Raghunathan", "Jeff S Shamma", "Na Li" ]
Tools from control and dynamical systems have proven valuable for analyzing and developing optimization methods. In this paper, we establish rigorous theoretical foundations for using feedback linearization—a well-established nonlinear control technique—to solve constrained optimization problems. For equality-constrain...
https://openreview.net/forum?id=zhgfM0dJ3F
Main
Poster
zhgfM0dJ3F
Bigger, Regularized, Categorical: High-Capacity Value Functions are Efficient Multi-Task Learners
[ "Michal Nauman", "Marek Cygan", "Carmelo Sferrazza", "Aviral Kumar", "Pieter Abbeel" ]
Recent advances in language modeling and vision stem from training large models on diverse, multi‑task data. This paradigm has had limited impact in value-based reinforcement learning (RL), where improvements are often driven by small models trained in a single-task context. This is because in multi-task RL sparse rewa...
https://openreview.net/forum?id=zhOUfuOIzA
Main
Poster
zhOUfuOIzA
LoRA-EnVar: Parameter-Efficient Hybrid Ensemble Variational Assimilation for Weather Forecasting
[ "Yi Xiao", "Hang Fan", "Kun Chen", "Ye Cao", "Ben Fei", "Wei Xue", "LEI BAI" ]
Accurate estimation of background error (i.e., forecast error) distribution is critical for effective data assimilation (DA) in numerical weather prediction (NWP). In state-of-the-art operational DA systems, it is common to account for the temporal evolution of background errors by employing hybrid methods, which blend...
https://openreview.net/forum?id=zhMl4Smau7
Main
Poster
zhMl4Smau7
Co-PatcheR: Collaborative Software Patching with Component-specific Small Reasoning Models
[ "Yuheng Tang", "Hongwei Li", "Kaijie Zhu", "Michael Yang", "Yangruibo Ding", "Wenbo Guo" ]
Motivated by the success of general‑purpose large language models (LLMs) in software patching, recent works started to train specialized patching models. Most works trained one model to handle the end‑to‑end patching pipeline (including issue localization, patch generation, and patch validation). However, it is hard fo...
https://openreview.net/forum?id=zhFEO67s5w
Main
Poster
zhFEO67s5w
Dual-Flow: Transferable Multi-Target, Instance-Agnostic Attacks via $\textit{In-the-wild}$ Cascading Flow Optimization
[ "Yixiao Chen", "Shikun Sun", "Jianshu Li", "Ruoyu Li", "Zhe Li", "Junliang Xing" ]
Adversarial attacks are widely used to evaluate model robustness, and in black-box scenarios, the transferability of these attacks becomes crucial. Existing generator-based attacks have excellent generalization and transferability due to their instance-agnostic nature. However, when training generators for multi-target...
https://openreview.net/forum?id=zhCv5uZ8bh
Main
Poster
zhCv5uZ8bh
Structure-Aware Cooperative Ensemble Evolutionary Optimization on Combinatorial Problems with Multimodal Large Language Models
[ "Jie Zhao", "Kang Hao Cheong" ]
Evolutionary algorithms (EAs) have proven effective in exploring the vast solution spaces typical of graph-structured combinatorial problems. However, traditional encoding schemes, such as binary or numerical representations, often fail to straightforwardly capture the intricate structural properties of networks. Throu...
https://openreview.net/forum?id=zftxlb1AOo
Main
Poster
zftxlb1AOo
Noisy Multi-Label Learning through Co-Occurrence-Aware Diffusion
[ "Senyu Hou", "Yuru Ren", "Gaoxia Jiang", "Wenjian Wang" ]
Noisy labels often compel models to overfit, especially in multi-label classification tasks. Existing methods for noisy multi-label learning (NML) primarily follow a discriminative paradigm, which relies on noise transition matrix estimation or small-loss strategies to correct noisy labels. However, they remain substan...
https://openreview.net/forum?id=zft0zTOFkN
Main
Poster
zft0zTOFkN
RefLoRA: Refactored Low-Rank Adaptation for Efficient Fine-Tuning of Large Models
[ "Yilang Zhang", "Bingcong Li", "Georgios B. Giannakis" ]
Low-Rank Adaptation (LoRA) lowers the computational and memory overhead of fine-tuning large models by updating a low-dimensional subspace of the pre-trained weight matrix. Albeit efficient, LoRA exhibits suboptimal convergence and noticeable performance degradation, due to inconsistent and imbalanced weight updates in...
https://openreview.net/forum?id=zefDc9oi5T
Main
Poster
zefDc9oi5T
Architectural and Inferential Inductive Biases for Exchangeable Sequence Modeling
[ "Daksh Mittal", "Ang Li", "Thomson Yen", "C. Daniel Guetta", "Hongseok Namkoong" ]
Autoregressive models have emerged as a powerful framework for modeling exchangeable sequences---i.i.d. observations when conditioned on some latent factor---enabling direct modeling of uncertainty from missing data (rather than a latent). Motivated by the critical role posterior inference plays as a subroutine in deci...
https://openreview.net/forum?id=zdRW39Tc3C
Main
Poster
zdRW39Tc3C
ZPressor: Bottleneck-Aware Compression for Scalable Feed-Forward 3DGS
[ "Weijie Wang", "Donny Y. Chen", "Zeyu Zhang", "Duochao Shi", "Akide Liu", "Bohan Zhuang" ]
Feed-forward 3D Gaussian Splatting (3DGS) models have recently emerged as a promising solution for novel view synthesis, enabling one-pass inference without the need for per-scene 3DGS optimization. However, their scalability is fundamentally constrained by the limited capacity of their encoders, leading to degraded pe...
https://openreview.net/forum?id=zbucdbZ0fU
Main
Poster
zbucdbZ0fU
scSplit: Bringing Severity Cognizance to Image Decomposition in Fluorescence Microscopy
[ "Ashesh", "Florian Jug" ]
Fluorescence microscopy, while being a key driver for progress in the life sciences, is also subject to technical limitations. To overcome them, computational multiplexing techniques have recently been proposed, which allow multiple cellular structures to be captured in a single image and later be unmixed. Existing ima...
https://openreview.net/forum?id=zb16xZ1NGB
Main
Poster
zb16xZ1NGB
UniGTE: Unified Graph–Text Encoding for Zero-Shot Generalization across Graph Tasks and Domains
[ "Duo Wang", "Yuan Zuo", "Guangyue Lu", "Junjie Wu" ]
Generalizing to unseen graph tasks without task-specific supervision is challenging: conventional graph neural networks are typically tied to a fixed label space, while large language models (LLMs) struggle to capture graph structure. We introduce UniGTE, an instruction-tuned encoder–decoder framework that unifies stru...
https://openreview.net/forum?id=zaV9s8iM2T
Main
Poster
zaV9s8iM2T
Datasets, Documents, and Repetitions: The Practicalities of Unequal Data Quality
[ "Alex Fang", "Hadi Pouransari", "Matt Jordan", "Alexander T Toshev", "Vaishaal Shankar", "Ludwig Schmidt", "Tom Gunter" ]
Data filtering has become a powerful tool for improving model performance while reducing computational cost. However, as large language model compute budgets continue to grow, the limited data volume provided by heavily filtered and deduplicated datasets will become a practical constraint. In efforts to better understa...
https://openreview.net/forum?id=zZecO3RZ7Z
Main
Poster
zZecO3RZ7Z
UltraLED: Learning to See Everything in Ultra-High Dynamic Range Scenes
[ "Yuang Meng", "Xin Jin", "Lina Lei", "Chun-Le Guo", "Chongyi Li" ]
Ultra-high dynamic range (UHDR) scenes exhibit significant exposure disparities between bright and dark regions. Such conditions are commonly encountered in nighttime scenes with light sou...
https://openreview.net/forum?id=zZLfHw4Erp
Main
Poster
zZLfHw4Erp
Heterogeneous Swarms: Jointly Optimizing Model Roles and Weights for Multi-LLM Systems
[ "Shangbin Feng", "Zifeng Wang", "Palash Goyal", "Yike Wang", "Weijia Shi", "Huang Xia", "Hamid Palangi", "Luke Zettlemoyer", "Yulia Tsvetkov", "Chen-Yu Lee", "Tomas Pfister" ]
We propose Heterogeneous Swarms, an algorithm to design multi-LLM systems by jointly optimizing model roles and weights. We represent multi-LLM systems as directed acyclic graphs (DAGs) of LLMs with topological message passing for collaborative generation. Given a pool of LLM experts and a utility function, Heterogeneo...
https://openreview.net/forum?id=zYEZ5KqtDO
Main
Poster
zYEZ5KqtDO
RDD: Retrieval-Based Demonstration Decomposer for Planner Alignment in Long-Horizon Tasks
[ "Mingxuan Yan", "Yuping Wang", "Zechun Liu", "Jiachen Li" ]
To tackle long-horizon tasks, recent hierarchical vision-language-action (VLAs) frameworks employ vision-language model (VLM)-based planners to decompose complex manipulation tasks into simpler sub-tasks that low-level visuomotor policies can easily handle. Typically, the VLM planner is finetuned to learn to decompose ...
https://openreview.net/forum?id=zY5J1vp7tZ
Main
Poster
zY5J1vp7tZ
Imagined Autocurricula
[ "Ahmet H. Güzel", "Matthew Thomas Jackson", "Jarek Luca Liesen", "Tim Rocktäschel", "Jakob Nicolaus Foerster", "Ilija Bogunovic", "Jack Parker-Holder" ]
Training agents to act in embodied environments typically requires vast training data or access to accurate simulation, neither of which exists for many cases in the real world. Instead, world models are emerging as an alternative–leveraging offline, passively collected data, they make it possible to generate diverse w...
https://openreview.net/forum?id=zXlB9A5xya
Main
Poster
zXlB9A5xya
Mozart: Modularized and Efficient MoE Training on 3.5D Wafer-Scale Chiplet Architectures
[ "Shuqing Luo", "Ye Han", "Pingzhi Li", "Jiayin Qin", "Jie Peng", "Yang Katie Zhao", "Yu Cao", "Tianlong Chen" ]
Mixture-of-Experts (MoE) architecture offers enhanced efficiency for Large Language Models (LLMs) with modularized computation, yet its inherent sparsity poses significant hardware deployment challenges, including memory locality issues, communication overhead, and inefficient computing resource utilization. Inspired b...
https://openreview.net/forum?id=zWHKKspghT
Main
Spotlight
zWHKKspghT
Towards Generalizable Retina Vessel Segmentation with Deformable Graph Priors
[ "Ke Liu", "Shangde Gao", "Yichao Fu", "Shangqi Gao" ]
Retinal vessel segmentation is critical for medical diagnosis, yet existing models often struggle to generalize across domains due to appearance variability, limited annotations, and complex vascular morphology. We propose GraphSeg, a variational Bayesian framework that integrates anatomical graph priors with structure...
https://openreview.net/forum?id=zVkbsGlKn9
Main
Poster
zVkbsGlKn9
On the Expressive Power of Mixture-of-Experts for Structured Complex Tasks
[ "Mingze Wang", "Weinan E" ]
Mixture-of-experts networks (MoEs) have demonstrated remarkable efficiency in modern deep learning. Despite their empirical success, the theoretical foundations underlying their ability to model complex tasks remain poorly understood. In this work, we conduct a systematic study of the expressive power of MoEs in modeli...
https://openreview.net/forum?id=zSrb8rtH9M
Main
Spotlight
zSrb8rtH9M
Depth-Supervised Fusion Network for Seamless-Free Image Stitching
[ "Zhiying Jiang", "Ruhao Yan", "Zengxi Zhang", "Bowei Zhang", "Jinyuan Liu" ]
Image stitching synthesizes images captured from multiple perspectives into a single image with a broader field of view. The significant variations in object depth often lead to large parallax, resulting in ghosting and misalignment in the stitched results. To address this, we propose a depth-consistency-constrained se...
https://openreview.net/forum?id=zQqDqfja4Y
Main
Poster
zQqDqfja4Y
OpenHype: Hyperbolic Embeddings for Hierarchical Open-Vocabulary Radiance Fields
[ "Lisa Weijler", "Sebastian Koch", "Fabio Poiesi", "Timo Ropinski", "Pedro Hermosilla" ]
Modeling the inherent hierarchical structure of 3D objects and 3D scenes is highly desirable, as it enables a more holistic understanding of environments for autonomous agents. Accomplishing this with implicit representations, such as Neural Radiance Fields, remains an unexplored challenge. Existing methods that explic...
https://openreview.net/forum?id=zQmXDUbZ5D
Main
Poster
zQmXDUbZ5D
Dynamic Masking and Auxiliary Hash Learning for Enhanced Cross-Modal Retrieval
[ "Shuang Zhang", "Yue Wu", "Lei Shi", "Yingxue Zhang", "Feifei Kou", "Huilong Jin", "Pengfei Zhang", "Meiyu Liang", "Mingying Xu" ]
The demand for multimodal data processing drives the development of information technology. Cross-modal hash retrieval has attracted much attention because it can overcome modal differences and achieve efficient retrieval, and has shown great application potential in many practical scenarios. Existing cross-modal hashi...
https://openreview.net/forum?id=zQK6IluJi3
Main
Poster
zQK6IluJi3
Delving into Cascaded Instability: A Lipschitz Continuity View on Image Restoration and Object Detection Synergy
[ "Qing Zhao", "Weijian Deng", "Pengxu Wei", "ZiYi Dong", "Hannan Lu", "Xiangyang Ji", "Liang Lin" ]
To improve detection robustness in adverse conditions (e.g., haze and low light), image restoration is commonly applied as a pre-processing step to enhance image quality for the detector. However, the functional mismatch between restoration and detection networks can introduce instability and hinder effective integrati...
https://openreview.net/forum?id=zPgPDHupcE
Main
Poster
zPgPDHupcE
What is Your Data Worth to GPT? LLM-Scale Data Valuation with Influence Functions
[ "Sang Keun Choe", "Hwijeen Ahn", "Juhan Bae", "Kewen Zhao", "Youngseog Chung", "Adithya Pratapa", "Willie Neiswanger", "Emma Strubell", "Teruko Mitamura", "Jeff Schneider", "Eduard Hovy", "Roger Baker Grosse", "Eric P. Xing" ]
Large language models (LLMs) are trained on a vast amount of human-written data, but data providers often remain uncredited. In response to this issue, data valuation (or data attribution), which quantifies the contribution or value of each data to the model output, has been discussed as a potential solution. Neverthel...
https://openreview.net/forum?id=zPKeJAEo27
Main
Poster
zPKeJAEo27
Online Portfolio Selection with ML Predictions
[ "Ziliang Zhang", "Tianming Zhao", "Albert Zomaya" ]
Online portfolio selection seeks to determine a sequence of allocations to maximize capital growth. Classical universal strategies asymptotically match the best constant-rebalanced portfolio but ignore potential forecasts, whereas heuristic methods often collapse when belief fails. We formalize this tension in a learni...
https://openreview.net/forum?id=zOFxp98km2
Main
Poster
zOFxp98km2
Private Evolution Converges
[ "Tomás González", "Giulia Fanti", "Aaditya Ramdas" ]
Private Evolution (PE) is a promising training-free method for differentially private (DP) synthetic data generation. While it achieves strong performance in some domains (e.g., images and text), its behavior in others (e.g., tabular data) is less consistent. To date, the only theoretical analysis of the convergence of...
https://openreview.net/forum?id=zOCENGh1Jg
Main
Poster
zOCENGh1Jg
Bootstrap Off-policy with World Model
[ "Guojian Zhan", "Likun Wang", "Xiangteng Zhang", "Jiaxin Gao", "Masayoshi Tomizuka", "Shengbo Eben Li" ]
Online planning has proven effective in reinforcement learning (RL) for improving sample efficiency and final performance. However, using planning for environment interaction inevitably introduces a divergence between the collected data and the policy's actual behaviors, degrading both model learning and policy improve...
https://openreview.net/forum?id=zNqDCSokDR
Main
Poster
zNqDCSokDR
AdaSPEC: Selective Knowledge Distillation for Efficient Speculative Decoders
[ "Yuezhou Hu", "Jiaxin Guo", "Xinyu Feng", "Tuo Zhao" ]
Speculative Decoding (SD) accelerates large language model inference by employing a small draft model to generate predictions, which are then verified by a larger target model. The effectiveness of SD hinges on the alignment between these models, which is typically enhanced by Knowledge Distillation (KD). However, conv...
https://openreview.net/forum?id=zNLlglSOwD
Main
Spotlight
zNLlglSOwD
LLMs Encode Harmfulness and Refusal Separately
[ "Jiachen Zhao", "Jing Huang", "Zhengxuan Wu", "David Bau", "Weiyan Shi" ]
LLMs are trained to refuse harmful instructions, but do they truly understand harmfulness beyond just refusing? Prior work has shown that LLMs’ refusal behaviors can be mediated by a one-dimensional subspace, i.e., a refusal direction. In this work, we identify a new dimension to analyze safety mechanisms in LLMs, i.e....
https://openreview.net/forum?id=zLkpt30ngy
Main
Poster
zLkpt30ngy
Learnable Burst-Encodable Time-of-Flight Imaging for High-Fidelity Long-Distance Depth Sensing
[ "Manchao Bao", "Shengjiang Fang", "Tao Yue", "Xuemei Hu" ]
Long-distance depth imaging holds great promise for applications such as autonomous driving and robotics. Direct time-of-flight (dToF) imaging offers high-precision, long-distance depth sensing, yet demands ultra-short pulse light sources and high-resolution time-to-digital converters. In contrast, indirect time-of-fli...
https://openreview.net/forum?id=zL4ifL17bU
Main
Spotlight
zL4ifL17bU
Consistency of Physics-Informed Neural Networks for Second-Order Elliptic Equations
[ "Yuqian Cheng", "Zhuo Chen", "Qian Lin" ]
The physics-informed neural networks (PINNs) are widely applied in solving differential equations. However, few studies have discussed their consistency. In this paper, we consider the consistency of PINNs when applied to second-order elliptic equations with Dirichlet boundary conditions. We first provide the necessary...
https://openreview.net/forum?id=zL4JRfBr7R
Main
Poster
zL4JRfBr7R
Don't Just Chase “Highlighted Tokens” in MLLMs: Revisiting Visual Holistic Context Retention
[ "Xin Zou", "Di Lu", "Yizhou Wang", "Yibo Yan", "Yuanhuiyi Lyu", "Xu Zheng", "Linfeng Zhang", "Xuming Hu" ]
Despite their powerful capabilities, multimodal large language models (MLLMs) suffer from considerable computational overhead due to their reliance on massive visual tokens. Recent studies have explored token pruning to alleviate this problem, which typically uses text-vision cross-attention or [CLS] attention to asses...
https://openreview.net/forum?id=zKoeRtye8o
Main
Poster
zKoeRtye8o
BeyondMix: Leveraging Structural Priors and Long-Range Dependencies for Domain-Invariant LiDAR Segmentation
[ "Yujia Chen", "Rui Sun", "Wangkai Li", "Huayu Mai", "Si Chen", "Zhuoyuan Li", "Zhixin Cheng", "Tianzhu Zhang" ]
Domain adaptation for LiDAR semantic segmentation remains challenging due to the complex structural properties of point cloud data. While mix-based paradigms have shown promise, they often fail to fully leverage the rich structural priors inherent in 3D LiDAR point clouds. In this paper, we identify three critical yet ...
https://openreview.net/forum?id=zKV3CN40tE
Main
Poster
zKV3CN40tE
LittleBit: Ultra Low-Bit Quantization via Latent Factorization
[ "Banseok Lee", "Dongkyu Kim", "Youngcheon you", "Young-Min Kim" ]
Deploying large language models (LLMs) often faces challenges from substantial memory and computational costs. Quantization offers a solution, yet performance degradation in the sub-1-bit regime remains particularly difficult. This paper introduces LittleBit, a novel method for extreme LLM compression. It targets level...
https://openreview.net/forum?id=zJzu9evD5K
Main
Poster
zJzu9evD5K
Discovering Opinion Intervals from Conflicts in Signed Graphs
[ "Peter Blohm", "Florian Chen", "Aristides Gionis", "Stefan Neumann" ]
Online social media provide a platform for people to discuss current events and exchange opinions with their peers. While interactions are predominantly positive, in recent years, there has been a lot of research to understand the conflicts in social networks and how they are based on different views and opinions. In ...
https://openreview.net/forum?id=zJdutIT6vT
Main
Oral
zJdutIT6vT
SALS: Sparse Attention in Latent Space for KV Cache Compression
[ "Junlin Mu", "Hantao Huang", "Jihang Zhang", "Minghui Yu", "Tao Wang", "Yidong Li" ]
Large Language Models (LLMs) capable of handling extended contexts are in high demand, yet their inference remains challenging due to substantial Key-Value (KV) cache size and high memory bandwidth requirements. Previous research has demonstrated that KV cache exhibits low-rank characteristics within the hidden dimensi...
https://openreview.net/forum?id=zJSZupQ889
Main
Poster
zJSZupQ889
PhySense: Sensor Placement Optimization for Accurate Physics Sensing
[ "Yuezhou Ma", "Haixu Wu", "Hang Zhou", "Huikun Weng", "Jianmin Wang", "Mingsheng Long" ]
Physics sensing plays a central role in many scientific and engineering domains, which inherently involves two coupled tasks: reconstructing dense physical fields from sparse observations and optimizing scattered sensor placements to observe maximum information. While deep learning has made rapid advances in sparse-dat...
https://openreview.net/forum?id=zIzZxDsNNP
Main
Oral
zIzZxDsNNP
A Reinforcement Learning-based Bidding Strategy for Data Consumers in Auction-based Federated Learning
[ "Xiaoli Tang", "Han Yu", "Xiaoxiao Li" ]
Auction-based Federated Learning (AFL) fosters collaboration among self-interested data consumers (DCs) and data owners (DOs). A major challenge in AFL pertains to how DCs select and bid for DOs. Existing methods are generally static, making them ill-suited for dynamic AFL markets. To address this issue, we propose the...
https://openreview.net/forum?id=zIbNGkaYij
Main
Poster
zIbNGkaYij
Semi-off-Policy Reinforcement Learning for Vision-Language Slow-Thinking Reasoning
[ "Junhao Shen", "Haiteng Zhao", "Yuzhe Gu", "Songyang Gao", "Kuikun Liu", "Haian Huang", "Jianfei Gao", "Dahua Lin", "Wenwei Zhang", "Kai Chen" ]
Enhancing large vision-language models (LVLMs) with visual slow-thinking reasoning is crucial for solving complex multimodal tasks. However, since LVLMs are mainly trained with vision-language alignment, it is difficult to adopt on-policy reinforcement learning (RL) to develop the slow thinking ability because the roll...
https://openreview.net/forum?id=zIFuLxUAu9
Main
Poster
zIFuLxUAu9
Test-Time Adaptation by Causal Trimming
[ "Yingnan Liu", "Rui Qiao", "Mong-Li Lee", "Wynne Hsu" ]
Test-time adaptation aims to improve model robustness under distribution shifts by adapting models with access to unlabeled target samples. A primary cause of performance degradation under such shifts is the model’s reliance on features that lack a direct causal relationship with the prediction target. We introduce Tes...
https://openreview.net/forum?id=zFGdHL9pcD
Main
Poster
zFGdHL9pcD
ReplaceMe: Network Simplification via Depth Pruning and Transformer Block Linearization
[ "Dmitriy Shopkhoev", "Ammar Ali", "Magauiya Zhussip", "Valentin Malykh", "Stamatios Lefkimmiatis", "Nikos Komodakis", "Sergey Zagoruyko" ]
We introduce ReplaceMe, a generalized training-free depth pruning method that effectively replaces transformer blocks with a linear operation, while maintaining high performance for low compression ratios. In contrast to conventional pruning approaches that require additional training or fine-tuning, our approach requi...
https://openreview.net/forum?id=zEj1FSYCRn
Main
Poster
zEj1FSYCRn
Curriculum Design for Trajectory-Constrained Agent: Compressing Chain-of-Thought Tokens in LLMs
[ "Georgios Tzannetos", "Parameswaran Kamalaruban", "Adish Singla" ]
Training agents to operate under strict constraints during deployment, such as limited resource budgets or stringent safety requirements, presents significant challenges, especially when these constraints render the task complex. In this work, we propose a curriculum learning strategy that gradually tightens constraint...
https://openreview.net/forum?id=zDU5sfYK1Z
Main
Poster
zDU5sfYK1Z
Accelerated Evolving Set Processes for Local PageRank Computation
[ "Binbin Huang", "Luo Luo", "Yanghua Xiao", "Deqing Yang", "Baojian Zhou" ]
This work proposes a novel framework based on nested evolving set processes to accelerate Personalized PageRank (PPR) computation. At each stage of the process, we employ a localized inexact proximal point iteration to solve a simplified linear system. We show that the time complexity of such localized methods is upper...
https://openreview.net/forum?id=zDOo34mbpl
Main
Poster
zDOo34mbpl
Continual Model Merging without Data: Dual Projections for Balancing Stability and Plasticity
[ "Enneng Yang", "Anke Tang", "Li Shen", "Guibing Guo", "Xingwei Wang", "Xiaochun Cao", "Jie Zhang" ]
Model merging integrates multiple expert models with diverse capabilities into a unified framework, facilitating collaborative learning. However, most existing methods assume simultaneous access to all models, which is often impractical in real-world scenarios where models are received sequentially. While some studies ...
https://openreview.net/forum?id=zD5cUX67b9
Main
Poster
zD5cUX67b9
Robust Egocentric Referring Video Object Segmentation via Dual-Modal Causal Intervention
[ "Haijing Liu", "Zhiyuan Song", "Hefeng Wu", "Tao Pu", "Keze Wang", "Liang Lin" ]
Egocentric Referring Video Object Segmentation (Ego-RVOS) aims to segment the specific object actively involved in a human action, as described by a language query, within first-person videos. This task is critical for understanding egocentric human behavior. However, achieving such segmentation robustly is challenging...
https://openreview.net/forum?id=z9xyREqxzq
Main
Poster
z9xyREqxzq
ARM: Adaptive Reasoning Model
[ "Siye Wu", "Jian Xie", "Yikai Zhang", "Aili Chen", "Kai Zhang", "Yu Su", "Yanghua Xiao" ]
While large reasoning models demonstrate strong performance on complex tasks, they lack the ability to adjust reasoning token usage based on task difficulty. This often leads to the "overthinking" problem—excessive and unnecessary reasoning—which, although potentially mitigated by human intervention to control the toke...
https://openreview.net/forum?id=z9oeQrcNh9
Main
Spotlight
z9oeQrcNh9
Spatially-aware Weights Tokenization for NeRF-Language Models
[ "Andrea Amaduzzi", "Pierluigi Zama Ramirez", "Giuseppe Lisanti", "Samuele Salti", "Luigi Di Stefano" ]
Neural Radiance Fields (NeRFs) are neural networks -- typically multilayer perceptrons (MLPs) -- that represent the geometry and appearance of objects, with applications in vision, graphics, and robotics. Recent works propose understanding NeRFs with natural language using Multimodal Large Language Models (MLLMs) that ...
https://openreview.net/forum?id=z9MxyboJ7R
Main
Poster
z9MxyboJ7R
A compressive-expressive communication framework for compositional representations
[ "Rafael Elberg", "Felipe del Rio", "Mircea Petrache", "Denis Parra" ]
Compositionality in knowledge and language—the ability to represent complex concepts as a combination of simpler ones—is a hallmark of human cognition and communication. Despite recent advances, deep neural networks still struggle to acquire this property reliably. Neural models for emergent communication look to endow...
https://openreview.net/forum?id=z6mwI6VcHA
Main
Poster
z6mwI6VcHA
Faithful Group Shapley Value
[ "Kiljae Lee", "Ziqi Liu", "Weijing Tang", "Yuan Zhang" ]
Data Shapley is an important tool for data valuation, which quantifies the contribution of individual data points to machine learning models. In practice, group-level data valuation is desirable when data providers contribute data in batch. However, we identify that existing group-level extensions of Data Shapley ar...
https://openreview.net/forum?id=z6d5MRMDNf
Main
Poster
z6d5MRMDNf
Equilibrium Policy Generalization: A Reinforcement Learning Framework for Cross-Graph Zero-Shot Generalization in Pursuit-Evasion Games
[ "Runyu Lu", "Peng Zhang", "Ruochuan Shi", "Yuanheng Zhu", "Dongbin Zhao", "Yang Liu", "Dong Wang", "Cesare Alippi" ]
Equilibrium learning in adversarial games is an important topic widely examined in the fields of game theory and reinforcement learning (RL). Pursuit-evasion game (PEG), as an important class of real-world games from the fields of robotics and security, requires exponential time to be accurately solved. When the underl...
https://openreview.net/forum?id=z67on2D0j1
Main
Poster
z67on2D0j1
NeedleInATable: Exploring Long-Context Capability of Large Language Models towards Long-Structured Tables
[ "Lanrui Wang", "Mingyu Zheng", "Hongyin Tang", "Zheng Lin", "Yanan Cao", "Jingang Wang", "Xunliang Cai", "Weiping Wang" ]
Processing structured tabular data, particularly large and lengthy tables, constitutes a fundamental yet challenging task for large language models (LLMs). However, existing long-context benchmarks like Needle-in-a-Haystack primarily focus on unstructured text, neglecting the challenge of diverse structured tables. Mea...
https://openreview.net/forum?id=z5vZDI2r6J
Main
Poster
z5vZDI2r6J
From Replication to Redesign: Exploring Pairwise Comparisons for LLM-Based Peer Review
[ "Yaohui Zhang", "Haijing ZHANG", "Wenlong Ji", "Tianyu Hua", "Nick Haber", "Hancheng Cao", "Weixin Liang" ]
The advent of large language models (LLMs) offers unprecedented opportunities to reimagine peer review beyond the constraints of traditional workflows. Despite these opportunities, prior efforts have largely focused on replicating traditional review workflows with LLMs serving as direct substitutes for human reviewers,...
https://openreview.net/forum?id=z5KTxW5sJd
Main
Poster
z5KTxW5sJd
Composing Linear Layers from Irreducibles
[ "Travis Pence", "Daisuke Yamada", "Vikas Singh" ]
Contemporary large models often exhibit behaviors suggesting the presence of low-level primitives that compose into modules with richer functionality, but these fundamental building blocks remain poorly understood. We investigate this compositional structure in linear layers by asking: \textit{can we identify/synthesi...
https://openreview.net/forum?id=z5FGi0vyCr
Main
Poster
z5FGi0vyCr
LogicTree: Improving Complex Reasoning of LLMs via Instantiated Multi-step Synthetic Logical Data
[ "Zehao Wang", "Lin Yang", "Jie Wang", "Kehan Wang", "Hanzhu Chen", "Bin Wang", "Jianye HAO", "Defu Lian", "Bin Li", "Enhong Chen" ]
Despite their remarkable performance on various tasks, Large Language Models (LLMs) still struggle with logical reasoning, particularly in complex and multi-step reasoning processes. Among various efforts to enhance LLMs' reasoning capabilities, synthesizing large-scale, high-quality logical reasoning datasets has eme...
https://openreview.net/forum?id=z4AMrCOetn
Main
Spotlight
z4AMrCOetn
GeRaF: Neural Geometry Reconstruction from Radio Frequency Signals
[ "Jiachen Lu", "Hailan Shanbhag", "Haitham Al Hassanieh" ]
GeRaF is the first method to use neural implicit learning for near-range 3D geometry reconstruction from radio frequency (RF) signals. Unlike RGB or LiDAR-based methods, RF sensing can see through occlusion but suffers from low resolution and noise due to its lens-less imaging nature. While lenses in RGB imaging constr...
https://openreview.net/forum?id=z3PMVmzoya
Main
Spotlight
z3PMVmzoya
Learning to Flow from Generative Pretext Tasks for Neural Architecture Encoding
[ "Sunwoo Kim", "Hyunjin Hwang", "Kijung Shin" ]
The performance of a deep learning model on a specific task and dataset depends heavily on its neural architecture, motivating considerable efforts to rapidly and accurately identify architectures suited to the target task and dataset. To achieve this, researchers use machine learning models—typically neural architectu...
https://openreview.net/forum?id=z2vJpjopJk
Main
Poster
z2vJpjopJk
SGCD: Stain-Guided CycleDiffusion for Unsupervised Domain Adaptation of Histopathology Image Classification
[ "Hsi-Ling Chen", "Chun-Shien Lu", "Pau-Choo Chung" ]
The effectiveness of domain translation in addressing image-based problems of Unsupervised Domain Adaptation (UDA) depends on the quality of the translated images and the preservation of crucial discriminative features. However, achieving high-quality and stable translations typically requires paired data, which poses ...
https://openreview.net/forum?id=z2SGaPIhLT
Main
Spotlight
z2SGaPIhLT
COALA: Numerically Stable and Efficient Framework for Context-Aware Low-Rank Approximation
[ "Uliana Parkina", "Maxim Rakhuba" ]
Recent studies suggest that context-aware low-rank approximation is a useful tool for compression and fine-tuning of modern large-scale neural networks. In this type of approximation, a norm is weighted by a matrix of input activations, significantly improving metrics over the unweighted case. Nevertheless, existing m...
https://openreview.net/forum?id=z1wIUZtBmK
Main
Poster
z1wIUZtBmK
Sim-LLM: Optimizing LLM Inference at the Edge through Inter-Task KV Reuse
[ "Ruikun Luo", "Changwei Gu", "Qiang He", "Feifei Chen", "Song Wu", "Hai Jin", "Yun Yang" ]
KV cache technology, by storing key-value pairs, helps reduce the computational overhead incurred by *large language models* (LLMs). It facilitates their deployment on resource-constrained edge computing nodes like edge servers. However, as the complexity and size of tasks increase, KV cache usage leads to substantial ...
https://openreview.net/forum?id=z1Cvcovlms
Main
Poster
z1Cvcovlms
Bifrost-1: Bridging Multimodal LLMs and Diffusion Models with Patch-level CLIP Latents
[ "Han Lin", "Jaemin Cho", "Amir Zadeh", "Chuan Li", "Mohit Bansal" ]
There is growing interest in integrating high-fidelity visual synthesis capabilities into large language models (LLMs) without compromising their strong reasoning capabilities. Existing methods that directly train LLMs or bridge LLMs and diffusion models usually suffer from costly training since the backbone LLMs have ...
https://openreview.net/forum?id=z0WhTwZscg
Main
Poster
z0WhTwZscg
VETA-DiT: Variance-Equalized and Temporally Adaptive Quantization for Efficient 4-bit Diffusion Transformers
[ "Qinkai Xu", "Yijin Liu", "Yang Chen", "Lin Yang", "Li Li", "Yuxiang Fu" ]
Diffusion Transformers (DiTs) have recently demonstrated remarkable performance in visual generation tasks, surpassing traditional U-Net-based diffusion models by significantly improving image and video generation quality and scalability. However, the large model size and iterative denoising process introduce substanti...
https://openreview.net/forum?id=z0BgfL1FRV
Main
Poster
z0BgfL1FRV
Optimistic Query Routing in Clustering-based Approximate Maximum Inner Product Search
[ "Sebastian Bruch", "Aditya Krishnan", "Franco Maria Nardini" ]
Clustering-based nearest neighbor search algorithms partition points into shards to form an index, and search only a subset of shards to process a query. Even though search efficacy is heavily influenced by the algorithm that identifies the shards to probe, it has received little attention in the literature. We study r...
https://openreview.net/forum?id=yzvpEHNL70
Main
Poster
yzvpEHNL70
Variational Task Vector Composition
[ "Boyuan Zhang", "Yingjun Du", "Xiantong Zhen", "Ling Shao" ]
Task vectors capture how a model changes during fine-tuning by recording the difference between pre-trained and task-specific weights. The composition of task vectors, a key operator in task arithmetic, enables models to integrate knowledge from multiple tasks without incurring significant additional inference costs. I...
https://openreview.net/forum?id=yzv6kysYbw
Main
Poster
yzv6kysYbw
Semantic Representation Attack against Aligned Large Language Models
[ "Jiawei Lian", "Jianhong Pan", "Lefan Wang", "Yi Wang", "Shaohui Mei", "Lap-Pui Chau" ]
Large Language Models (LLMs) increasingly employ alignment techniques to prevent harmful outputs. Despite these safeguards, attackers can circumvent them by crafting prompts that induce LLMs to generate harmful content. Current methods typically target exact affirmative responses, suffering from limited convergence, un...
https://openreview.net/forum?id=yzl5tL0Z2M
Main
Poster
yzl5tL0Z2M
Visual Instruction Bottleneck Tuning
[ "Changdae Oh", "Jiatong Li", "Shawn Im", "Sharon Li" ]
Despite widespread adoption, multimodal large language models (MLLMs) suffer performance degradation when encountering unfamiliar queries under distribution shifts. Existing methods to improve MLLM generalization typically require either more instruction data or larger advanced model architectures, both of which incur ...
https://openreview.net/forum?id=yzHiEmLSk8
Main
Poster
yzHiEmLSk8
Reinforcing Spatial Reasoning in Vision-Language Models with Interwoven Thinking and Visual Drawing
[ "Junfei Wu", "Jian Guan", "Kaituo Feng", "Qiang Liu", "Shu Wu", "Liang Wang", "Wei Wu", "Tieniu Tan" ]
As textual reasoning with large language models (LLMs) has advanced significantly, there has been growing interest in enhancing the multimodal reasoning capabilities of large vision-language models (LVLMs). However, existing methods primarily approach multimodal reasoning in a straightforward, text-centric manner, where ...
https://openreview.net/forum?id=yyWeSAsOhs
Main
Poster
yyWeSAsOhs
Dendritic Resonate-and-Fire Neuron for Effective and Efficient Long Sequence Modeling
[ "Dehao Zhang", "Malu Zhang", "Shuai Wang", "Jingya Wang", "Wenjie Wei", "Zeyu Ma", "Guoqing Wang", "Yang Yang", "Haizhou Li" ]
The explosive growth in sequence length has intensified the demand for effective and efficient long sequence modeling. Benefiting from intrinsic oscillatory membrane dynamics, Resonate-and-Fire (RF) neurons can efficiently extract frequency components from input signals and encode them into spatiotemporal spike trains,...
https://openreview.net/forum?id=ywzGKDStrm
Main
Poster
ywzGKDStrm
Logical Expressiveness of Graph Neural Networks with Hierarchical Node Individualization
[ "Arie Soeteman", "Balder ten Cate" ]
We propose and study Hierarchical Ego Graph Neural Networks (HE-GNNs), an expressive extension of graph neural networks (GNNs) with hierarchical node individualization, inspired by the Individualization-Refinement paradigm for isomorphism testing. HE-GNNs generalize subgraph-GNNs and form a hierarchy of increasingly ex...
https://openreview.net/forum?id=yvGnOqy0Zf
Main
Poster
yvGnOqy0Zf

NeurIPS 2025 Papers Dataset

This dataset contains all accepted papers from NeurIPS 2025, scraped from OpenReview.

Dataset Statistics

Overview

  • Total Papers: 5772
  • Unique Paper IDs: 5772
  • ✅ No duplicate IDs

Track Distribution

  • Main Track: 5,275 papers (91.4%)
  • Datasets and Benchmarks Track: 497 papers (8.6%)

Award Distribution

  • Poster: 4,949 papers (85.7%)
  • Oral: 84 papers (1.5%)
  • Spotlight: 739 papers (12.8%)

Track × Award Combinations

  • Main - Poster: 4,515 papers (78.2%)
  • Main - Spotlight: 683 papers (11.8%)
  • Datasets and Benchmarks - Poster: 434 papers (7.5%)
  • Main - Oral: 77 papers (1.3%)
  • Datasets and Benchmarks - Spotlight: 56 papers (1.0%)
  • Datasets and Benchmarks - Oral: 7 papers (0.1%)

Author Statistics

  • Total Authors (across all papers): 33,878
  • Unique Authors: 23,704
  • Average Authors per Paper: 5.87
  • Authors per Paper Range: Min: 1, Max: 95
  • Papers with Authors: 5,772 (100%)

Abstract Statistics

  • Papers with Abstracts: 5,772 (100%)
  • Average Abstract Length: 1,376 characters
  • Total Abstract Text: 7,939,587 characters
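The track, award, and combination breakdowns above can be recomputed directly from the records using the `track` and `award` fields. A minimal sketch (the sample records here are illustrative stand-ins, not rows from the dataset):

```python
from collections import Counter

# Illustrative records carrying the same `track` and `award` fields as the dataset.
records = [
    {"track": "Main", "award": "Poster"},
    {"track": "Main", "award": "Spotlight"},
    {"track": "Main", "award": "Poster"},
    {"track": "Datasets and Benchmarks", "award": "Oral"},
]

# Count papers per track, per award, and per (track, award) combination.
track_counts = Counter(r["track"] for r in records)
award_counts = Counter(r["award"] for r in records)
combo_counts = Counter((r["track"], r["award"]) for r in records)

for track, n in track_counts.most_common():
    print(f"{track}: {n} papers ({n / len(records):.1%})")
```

Running the same three counters over the full dataset reproduces the percentages listed above.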

Dataset Structure

Each paper contains the following fields:

  • paper: Title of the paper
  • authors: List of author names
  • abstract: Abstract text
  • link: Direct link to OpenReview
  • track: Track name (Main or Datasets and Benchmarks)
  • award: Award type (Oral, Spotlight, or Poster)
  • paper_id: Unique OpenReview paper ID
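Concretely, a single record maps onto a plain Python dict with exactly these seven fields; for example (values taken from one of the preview rows above, abstract abridged):

```python
# One dataset record; every field listed in the schema above is present.
record = {
    "paper": "Delving into Cascaded Instability: A Lipschitz Continuity View on Image Restoration and Object Detection Synergy",
    "authors": ["Qing Zhao", "Weijian Deng", "Pengxu Wei", "ZiYi Dong", "Hannan Lu", "Xiangyang Ji", "Liang Lin"],
    "abstract": "To improve detection robustness in adverse conditions ...",
    "link": "https://openreview.net/forum?id=zPgPDHupcE",
    "track": "Main",
    "award": "Poster",
    "paper_id": "zPgPDHupcE",
}

# The set of keys matches the documented schema.
assert set(record) == {"paper", "authors", "abstract", "link", "track", "award", "paper_id"}
```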

Usage

from datasets import load_dataset

dataset = load_dataset("neurips-2025-papers", split="train")
print(dataset[0])
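Because each record is a flat dict, ordinary Python filtering works on top of the loaded dataset. As a sketch, here is how one might collect the OpenReview links of all Oral papers (shown over a small in-memory stand-in list rather than the loaded dataset):

```python
# Stand-in for the loaded dataset: any iterable of record dicts works the same way.
papers = [
    {"paper": "Discovering Opinion Intervals from Conflicts in Signed Graphs",
     "award": "Oral", "link": "https://openreview.net/forum?id=zJdutIT6vT"},
    {"paper": "SALS: Sparse Attention in Latent Space for KV Cache Compression",
     "award": "Poster", "link": "https://openreview.net/forum?id=zJSZupQ889"},
]

# Keep only Oral papers and extract their links.
oral_links = [p["link"] for p in papers if p["award"] == "Oral"]
print(oral_links)
```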

Citation

If you use this dataset, please cite the original NeurIPS 2025 conference and OpenReview.

License

This dataset is provided for research purposes. Please refer to OpenReview's terms of service.
