Dataset Viewer
Auto-converted to Parquet
Columns (name: type, value/length statistics as shown by the viewer):

platform: stringclasses, 1 value
venue: stringclasses, 4 values
year: int32, 2.02k–2.03k
title: stringlengths, 8–177
abstract: stringlengths, 310–3.08k
keywords: stringlengths, 0–613
areas: stringclasses, 152 values
tldr: stringlengths, 0–281
scores: listlengths, 0–8
decision: stringclasses, 21 values
authors: stringlengths, 6–834
author_ids: stringlengths, 8–956
cdate: stringclasses, 976 values
url: stringlengths, 41–45
platform_id: stringlengths, 9–13
bibtex: stringlengths, 228–1.26k
figure_path: stringlengths, 61–79
figure_number: stringclasses, 134 values
figure_caption: stringlengths, 8–2.35k
figure_context: stringlengths, 0–20.2k
figure_type: stringclasses, 1 value
confidence: float32, 0.85–1
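The column summary above can be sketched as a Python record type. This is an illustrative model of one row only: the field subset and Python types are inferred from the viewer's column list, and the `FigureRow` class is not an official loader API for this dataset.

```python
from dataclasses import dataclass
from statistics import mean

@dataclass
class FigureRow:
    """One row of the figure-level paper dataset (illustrative subset of columns)."""
    platform: str        # stringclasses, 1 value (e.g. "OpenReview")
    venue: str           # stringclasses, 4 values (e.g. "ICLR")
    year: int            # int32
    title: str
    scores: list[int]    # per-reviewer scores; list lengths range 0-8
    decision: str        # stringclasses, 21 values (e.g. "Accept (Poster)")
    figure_number: str   # stringclasses (figure numbers are stored as strings)
    confidence: float    # float32 in [0.85, 1]

# Sample row populated from the first record displayed in this preview (FreqKV).
row = FigureRow(platform="OpenReview", venue="ICLR", year=2026,
                title="FreqKV: Key-Value Compression in Frequency Domain "
                      "for Context Window Extension",
                scores=[4, 6, 4], decision="Accept (Poster)",
                figure_number="3", confidence=0.899471)
print(round(mean(row.scores), 2))  # → 4.67
```

The `scores` column is the natural one to aggregate per paper; `mean` from the standard library suffices once the list is materialized as ints.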
OpenReview
ICLR
2026
FreqKV: Key-Value Compression in Frequency Domain for Context Window Extension
Existing key-value (KV) cache compression methods for large language models (LLMs) often rely on token eviction, which risks losing critical local information in both long prefilling and decoding scenarios. When extrapolating beyond the pretrained context length, their performance degrades sharply on long-context bench...
Large Language Models, KV Compression, Context Extension
foundation or frontier models, including LLMs
This paper introduces FreqKV, an efficient context extension method that iteratively compresses key-value states in the frequency domain.
[ 4, 6, 4 ]
Accept (Poster)
Jushi Kai, Yixuan Wang, Boyi Zeng, Haoli Bai, Bo Jiang, Ziwei He, Zhouhan Lin
~Jushi_Kai1, ~Yixuan_Wang10, ~Boyi_Zeng2, ~Haoli_Bai2, ~Bo_Jiang2, ~Ziwei_He1, ~Zhouhan_Lin1
20250918
https://openreview.net/forum?id=wFSOtyvQ9d
wFSOtyvQ9d
@inproceedings{ kai2026freqkv, title={Freq{KV}: Key-Value Compression in Frequency Domain for Context Window Extension}, author={Jushi Kai and Yixuan Wang and Boyi Zeng and Haoli Bai and Bo Jiang and Ziwei He and Zhouhan Lin}, booktitle={The Fourteenth International Conference on Learning Representations}, year={2026},...
OpenReview/ICLR/figures/2026/accept_poster/wFSOtyvQ9d/Figure3.png
3
Figure 3: The overview of our FreqKV. (a) The illustration of the frequency-domain compression. (b) The KV cache will be compressed in an iterative manner to extend the context window. Sink tokens remain uncompressed throughout the process. The tokens after sink tokens will be compressed in the frequency domain and sub...
<paragraph_1>To reduce redundancy in the key-value (KV) cache, we compress KV states in the frequency domain as shown in Figure 3a. Specifically, we conduct DCT along the sequence dimension to transfer the KV cache to the frequency domain:</paragraph_1> <paragraph_2>Extending the context window of LLMs is fundamentally...
diagram
0.899471
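Two of the displayed cells carry recoverable structure: `cdate` appears to be a YYYYMMDD stamp and `scores` renders as a bracketed list. A minimal parsing sketch under that assumption (the formats are inferred from the displayed values, not from any dataset documentation):

```python
import re
from datetime import datetime

def parse_scores(cell: str) -> list[int]:
    """Parse a viewer-rendered scores cell such as "[ 4, 6, 4 ]" into ints."""
    return [int(tok) for tok in re.findall(r"\d+", cell)]

def parse_cdate(cell: str) -> str:
    """Parse a cdate cell such as "20250918" into an ISO date string."""
    return datetime.strptime(cell, "%Y%m%d").date().isoformat()

print(parse_scores("[ 4, 6, 4 ]"))  # [4, 6, 4]
print(parse_cdate("20250918"))      # 2025-09-18
```

If `cdate` ever carries a different format, `strptime` will raise `ValueError`, which makes the assumption easy to validate against the full column.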
OpenReview
ICLR
2026
ThinkOmni: Lifting Textual Reasoning to Omni-modal Scenarios via Guidance Decoding
Omni-modal reasoning is essential for intelligent systems to understand and draw inferences from diverse data sources. While existing omni-modal large language models (OLLM) excel at perceiving diverse modalities, they lack the complex reasoning abilities of recent large reasoning models (LRM). However, enhancing the r...
Omni-modal large language models, training-free guidance decoding, language model reasoning
applications to computer vision, audio, language, and other modalities
[ 6, 6, 6, 6 ]
Accept (Poster)
Yiran Guan, Sifan Tu, Dingkang Liang, Linghao Zhu, Jianzhong Ju, Zhenbo Luo, Jian Luan, Yuliang Liu, Xiang Bai
~Yiran_Guan1, ~Sifan_Tu2, ~Dingkang_Liang2, ~Linghao_Zhu1, ~Jianzhong_Ju1, ~Zhenbo_Luo2, ~Jian_Luan1, ~Yuliang_Liu2, ~Xiang_Bai1
20250917
https://openreview.net/forum?id=pMpCOjzwI1
pMpCOjzwI1
@inproceedings{ guan2026thinkomni, title={ThinkOmni: Lifting Textual Reasoning to Omni-modal Scenarios via Guidance Decoding}, author={Yiran Guan and Sifan Tu and Dingkang Liang and Linghao Zhu and Jianzhong Ju and Zhenbo Luo and Jian Luan and Yuliang Liu and Xiang Bai}, booktitle={The Fourteenth International Conferen...
OpenReview/ICLR/figures/2026/accept_poster/pMpCOjzwI1/Figure3.png
3
Figure 3: Guidance decoding methods. “Guid.” denotes the guiding model, and “Amat.” denotes the amateur model.
<paragraph_1>In Contrastive Decoding (Fig. 3(a)), the contrastive pair is formed by comparing the responses to the same prompt from the original guiding model and an additional amateur model, with z+ set to zbase. In Visual Contrastive Decoding (Fig. 3(b)), the contrastive pair is created by applying different input co...
diagram
0.93543
OpenReview
ICLR
2026
Task-Agnostic Amortized Multi-Objective Optimization
Balancing competing objectives is omnipresent across disciplines, from drug design to autonomous systems. Multi-objective Bayesian optimization is a promising solution for such expensive, black-box problems: it fits probabilistic surrogates and selects new designs via an acquisition function that balances exploration a...
Multi-Objective Optimization, Bayesian Optimization, Transformers, Neural Processes
probabilistic methods (Bayesian methods, variational inference, sampling, UQ, etc.)
We introduce a fully amortized (surrogate model + acquisition function), dimension-agnostic policy for multi-objective optimization.
[ 6, 6, 8, 4 ]
Accept (Poster)
Xinyu Zhang, Conor Hassan, Julien Martinelli, Daolang Huang, Samuel Kaski
~Xinyu_Zhang41, ~Conor_Hassan1, ~Julien_Martinelli1, ~Daolang_Huang1, ~Samuel_Kaski1
20250920
https://openreview.net/forum?id=odmeUlWta8
odmeUlWta8
@inproceedings{ zhang2026taskagnostic, title={Task-Agnostic Amortized Multi-Objective Optimization}, author={Xinyu Zhang and Conor Hassan and Julien Martinelli and Daolang Huang and Samuel Kaski}, booktitle={The Fourteenth International Conference on Learning Representations}, year={2026}, url={https://openreview.net/f...
OpenReview/ICLR/figures/2026/accept_poster/odmeUlWta8/Figure2.png
2
Figure 2: Dimension-agnostic embedder for a single observation.
<paragraph_1>(I) Dimension-agnostic embedder. We apply learnable scalar-to-vector maps e_x : R → R^{d_e} and e_y : R → R^{d_e} dimension-wise, resulting in E_x = e_x(x) ∈ R^{d_{τ,x} × d_e} and E_y = e_y(y) ∈ R^{d_{τ,y} × d_e}. Both functions e_x and e_y are parameterized as feedforward neural networks. After L transformer layers on the concatenated tokens ...</paragraph_1>
diagram
0.99614
OpenReview
ICLR
2026
DESIGNER: Design-Logic-Guided Multidisciplinary Data Synthesis for LLM Reasoning
Large language models (LLMs) perform strongly on many language tasks but still struggle with complex multi-step reasoning across disciplines. Existing reasoning datasets often lack disciplinary breadth, reasoning depth, and diversity, as well as guiding principles for question synthesis. We propose DESIGNER: a DESIGN-l...
Large Language Models, Data Synthesis, Synthetic Data, Reasoning, Post-Training, Supervised Fine-Tuning
datasets and benchmarks
[ 6, 4, 2, 8, 4 ]
Accept (Poster)
Weize Liu, Yongchi Zhao, Yijia Luo, Mingyu Xu, Jiaheng Liu, Yanan Li, Xiguo Hu, ZhiqiBai, Yuchi Xu, Wenbo Su, Bo Zheng
~Weize_Liu1, ~Yongchi_Zhao1, ~Yijia_Luo1, ~Mingyu_Xu3, ~Jiaheng_Liu1, ~Yanan_Li8, ~Xiguo_Hu1, ~ZhiqiBai1, ~Yuchi_Xu1, ~Wenbo_Su2, ~Bo_Zheng5
20250903
https://openreview.net/forum?id=SQVxBJhIrK
SQVxBJhIrK
@inproceedings{ liu2026designer, title={{DESIGNER}: Design-Logic-Guided Multidisciplinary Data Synthesis for {LLM} Reasoning}, author={Weize Liu and Yongchi Zhao and Yijia Luo and Mingyu Xu and Jiaheng Liu and Yanan Li and Xiguo Hu and ZhiqiBai and Yuchi Xu and Wenbo Su and Bo Zheng}, booktitle={The Fourteenth Internat...
OpenReview/ICLR/figures/2026/accept_poster/SQVxBJhIrK/Figure2.png
2
Figure 2: The Design-Logic-Guided Multidisciplinary Data Synthesis Pipeline.
<paragraph_1>Specifically, our pipeline is illustrated in Figure 2. First, we process large-scale book and web corpora with multi-dimensional labeling and filtering (discipline, readability, educational value, reasoning depth) to construct a high-quality source material library. From a question bank of hundreds of mill...
diagram
0.99595
OpenReview
ICLR
2026
DESIGNER: Design-Logic-Guided Multidisciplinary Data Synthesis for LLM Reasoning
Large language models (LLMs) perform strongly on many language tasks but still struggle with complex multi-step reasoning across disciplines. Existing reasoning datasets often lack disciplinary breadth, reasoning depth, and diversity, as well as guiding principles for question synthesis. We propose DESIGNER: a DESIGN-l...
Large Language Models, Data Synthesis, Synthetic Data, Reasoning, Post-Training, Supervised Fine-Tuning
datasets and benchmarks
[ 6, 4, 2, 8, 4 ]
Accept (Poster)
Weize Liu, Yongchi Zhao, Yijia Luo, Mingyu Xu, Jiaheng Liu, Yanan Li, Xiguo Hu, ZhiqiBai, Yuchi Xu, Wenbo Su, Bo Zheng
~Weize_Liu1, ~Yongchi_Zhao1, ~Yijia_Luo1, ~Mingyu_Xu3, ~Jiaheng_Liu1, ~Yanan_Li8, ~Xiguo_Hu1, ~ZhiqiBai1, ~Yuchi_Xu1, ~Wenbo_Su2, ~Bo_Zheng5
20250903
https://openreview.net/forum?id=SQVxBJhIrK
SQVxBJhIrK
@inproceedings{ liu2026designer, title={{DESIGNER}: Design-Logic-Guided Multidisciplinary Data Synthesis for {LLM} Reasoning}, author={Weize Liu and Yongchi Zhao and Yijia Luo and Mingyu Xu and Jiaheng Liu and Yanan Li and Xiguo Hu and ZhiqiBai and Yuchi Xu and Wenbo Su and Bo Zheng}, booktitle={The Fourteenth Internat...
OpenReview/ICLR/figures/2026/accept_poster/SQVxBJhIrK/Figure22.png
22
Figure 22: An example of the Design Logic for a Mathematics problem, showing the Mermaid source code (a) and the corresponding visual flowchart (b).
diagram
0.907912
OpenReview
ICLR
2026
Enhancing Multivariate Time Series Forecasting with Global Temporal Retrieval
Multivariate time series forecasting (MTSF) plays a vital role in numerous real-world applications, yet existing models remain constrained by their reliance on a limited historical context. This limitation prevents them from effectively capturing global periodic patterns that often span cycles significantly longer than...
Time-series forecasting, model plugins
learning on time series and dynamical systems
A lightweight, model-agnostic plug-and-play module for time-series forecasting models.
[ 6, 4, 4, 8 ]
Accept (Poster)
Fanpu Cao, Lu Dai, Jindong Han, Hui Xiong
~Fanpu_Cao1, ~Lu_Dai1, ~Jindong_Han1, ~Hui_Xiong1
20250915
https://openreview.net/forum?id=QUJBPSfyui
QUJBPSfyui
@inproceedings{ cao2026enhancing, title={Enhancing Multivariate Time Series Forecasting with Global Temporal Retrieval}, author={Fanpu Cao and Lu Dai and Jindong Han and Hui Xiong}, booktitle={The Fourteenth International Conference on Learning Representations}, year={2026}, url={https://openreview.net/forum?id=QUJBPSf...
OpenReview/ICLR/figures/2026/accept_poster/QUJBPSfyui/Figure2.png
2
Figure 2: Overview of the Global Temporal Retriever (GTR): a plug-and-play module compatible with any MTSF forecaster. GTR operates in three stages: (1) retrieves corresponding segments from global temporal embedding; (2) aligns them with the input and uses 2D convolution to jointly model local and global periodicity; ...
<paragraph_1>Method Overview. In this paper, we propose the Global Temporal Retriever (GTR) — a lightweight, plug-and-play module designed to extend a model’s temporal receptive field beyond the immediate input window. As illustrated in Figure 2, the proposed method operates in two phases: (1) The GTR module enhances g...
diagram
0.993829
OpenReview
ICLR
2026
From EduVisBench to EduVisAgent: A Benchmark and Multi-Agent Framework for Reasoning-Driven Pedagogical Visualization
While foundation models (FMs), such as diffusion models and large vision-language models (LVLMs), have been widely applied in educational contexts, their ability to generate pedagogically effective visual explanations remains limited. Most existing approaches focus primarily on textual reasoning, overlooking the critic...
education, agent, benchmark, llm, application, visualisation
datasets and benchmarks
[ 6, 2, 2, 6, 6 ]
Accept (Poster)
Haonian Ji, Shi Qiu, Siyang Xin, Siwei Han, Zhaorun Chen, Dake Zhang, Hongyi Wang, Huaxiu Yao
~Haonian_Ji1, ~Shi_Qiu2, ~Siyang_Xin1, ~Siwei_Han1, ~Zhaorun_Chen1, ~Dake_Zhang3, ~Hongyi_Wang1, ~Huaxiu_Yao1
20250918
https://openreview.net/forum?id=FVCpV04ZRe
FVCpV04ZRe
@inproceedings{ ji2026from, title={From EduVisBench to EduVisAgent: A Benchmark and Multi-Agent Framework for Reasoning-Driven Pedagogical Visualization}, author={Haonian Ji and Shi Qiu and Siyang Xin and Siwei Han and Zhaorun Chen and Dake Zhang and Hongyi Wang and Huaxiu Yao}, booktitle={The Fourteenth International ...
OpenReview/ICLR/figures/2026/accept_poster/FVCpV04ZRe/Figure4.png
4
Figure 4: Workflow for evaluation.
<paragraph_1>Evaluation Protocol. As shown in Figure 4, models are provided with a visualization prompt together with a question and are asked to generate visual outputs. To enable fair comparison across heterogeneous outputs, we first canonicalize every model result to a raster image prior to scoring. This standardizat...
diagram
0.932038
OpenReview
ICLR
2026
A State-Transition Framework for Efficient LLM Reasoning
While Long Chain-of-Thought (CoT) reasoning significantly improves Large Language Models (LLMs) performance on complex reasoning tasks, the substantial computational and memory costs of generating long CoT sequences limit their efficiency and practicality. Existing studies usually enhance the reasoning efficiency of LL...
Large Language Models, reasoning, efficient reasoning
foundation or frontier models, including LLMs
[ 4, 6, 6, 6 ]
Accept (Poster)
Liang Zhang, Yu Zhao, Longyue Wang, Tianqi Shi, Weihua Luo, Kaifu Zhang, Jinsong Su
~Liang_Zhang9, ~Yu_Zhao1, ~Longyue_Wang3, ~Tianqi_Shi1, ~Weihua_Luo2, ~Kaifu_Zhang2, ~Jinsong_Su1
20250919
https://openreview.net/forum?id=Zz8ikW4uWG
Zz8ikW4uWG
@inproceedings{ zhang2026a, title={A State-Transition Framework for Efficient {LLM} Reasoning}, author={Liang Zhang and Yu Zhao and Longyue Wang and Tianqi Shi and Weihua Luo and Kaifu Zhang and Jinsong Su}, booktitle={The Fourteenth International Conference on Learning Representations}, year={2026}, url={https://openr...
OpenReview/ICLR/figures/2026/accept_poster/Zz8ikW4uWG/Figure4.png
4
Figure 4: (a) shows the computational and memory efficiency of our model and the base model. (b) and (c) present our model’s performance with different values of hyper-parameters β and αmax, respectively. These experiments are conducted on Qwen2.5-1.5B.
<paragraph_1>Analysis of Computational and Memory Costs. We conduct experiments to further compare the computational and memory efficiency of our model and the base model across varying CoT lengths. The experimental results are presented in Figure 4(a). Although our model exhibits similar reasoning efficiency to the ba...
diagram
0.868907
OpenReview
ICLR
2026
STITCH: Simultaneous Thinking and Talking with Chunked Reasoning for Spoken Language Models
Spoken Language Models (SLMs) are designed to take speech inputs and produce spoken responses. However, current SLMs lack the ability to perform an internal, unspoken thinking process before responding. In contrast, humans typically engage in complex mental reasoning internally, enabling them to communicate ideas clear...
spoken language model, reasoning, chain-of-thought
applications to computer vision, audio, language, and other modalities
[ 6, 4, 6, 4 ]
Accept (Poster)
Cheng-Han Chiang, Xiaofei Wang, Linjie Li, Chung-Ching Lin, Kevin Lin, Shujie LIU, Zhendong Wang, Zhengyuan Yang, Hung-yi Lee, Lijuan Wang
~Cheng-Han_Chiang1, ~Xiaofei_Wang9, ~Linjie_Li1, ~Chung-Ching_Lin2, ~Kevin_Lin3, ~Shujie_LIU1, ~Zhendong_Wang1, ~Zhengyuan_Yang1, ~Hung-yi_Lee2, ~Lijuan_Wang1
20250915
https://openreview.net/forum?id=5Z1eMhCeTb
5Z1eMhCeTb
@inproceedings{ chiang2026stitch, title={{STITCH}: Simultaneous Thinking and Talking with Chunked Reasoning for Spoken Language Models}, author={Cheng-Han Chiang and Xiaofei Wang and Linjie Li and Chung-Ching Lin and Kevin Lin and Shujie LIU and Zhendong Wang and Zhengyuan Yang and Hung-yi Lee and Lijuan Wang}, booktit...
OpenReview/ICLR/figures/2026/accept_poster/5Z1eMhCeTb/Figure2.png
2
Figure 2: Different generation methods explored in this paper. The arrow represents the timeline for the SLM to generate the tokens; this timeline should not be confused with the timeline that the end user receives the audio, i.e., the upper timeline in Figure 1. We plot tokens of the same type in a chunk using the same...
<paragraph_1>In the interleaved decoding paradigm, the SLM backbone model generates a chunk of text tokens and a chunk of speech tokens alternately. The text tokens serve as guidance for future speech tokens by transcribing what the speech token will say. For example, GLM-4-Voice (Zeng et al., 2024) interleaves between...
diagram
0.959533
OpenReview
ICLR
2026
Seeing Across Views: Benchmarking Spatial Reasoning of Vision-Language Models in Robotic Scenes
Vision-language models (VLMs) are essential to Embodied AI, enabling robots to perceive, reason, and act in complex environments. They also serve as the foundation for the recent Vision-Language-Action (VLA) models. Yet, most evaluations of VLMs focus on single-view settings, leaving their ability to integrate multi-vi...
spatial understanding, benchmark, multi-view, vlm, robotics
datasets and benchmarks
MV-RoboBench evaluates whether vision–language models can integrate multi-view images for precise robotic perception and decision-making, revealing major gaps compared to human performance.
[ 8, 6, 6, 6 ]
Accept (Poster)
ZhiYuan Feng, Zhaolu Kang, Qijie Wang, Zhiying Du, Jiongrui Yan, Shi Shubin, Chengbo Yuan, Huizhi Liang, Yu Deng, Qixiu Li, Rushuai Yang, Ruichuan An, Leqi Zheng, Weijie Wang, Shawn Chen, Sicheng Xu, Yaobo Liang, Jiaolong Yang, Baining Guo
~ZhiYuan_Feng1, ~Zhaolu_Kang2, ~Qijie_Wang1, ~Zhiying_Du1, ~Jiongrui_Yan1, ~Shi_Shubin3, ~Chengbo_Yuan2, ~Huizhi_Liang1, ~Yu_Deng2, ~Qixiu_Li1, ~Rushuai_Yang1, ~Ruichuan_An1, ~Leqi_Zheng1, ~Weijie_Wang2, ~Shawn_Chen1, ~Sicheng_Xu1, ~Yaobo_Liang1, ~Jiaolong_Yang3, ~Baining_Guo1
20250913
https://openreview.net/forum?id=jXDZJAfRZB
jXDZJAfRZB
@inproceedings{ feng2026seeing, title={Seeing Across Views: Benchmarking Spatial Reasoning of Vision-Language Models in Robotic Scenes}, author={ZhiYuan Feng and Zhaolu Kang and Qijie Wang and Zhiying Du and Jiongrui Yan and Shi Shubin and Chengbo Yuan and Huizhi Liang and Yu Deng and Qixiu Li and Rushuai Yang and Ruic...
OpenReview/ICLR/figures/2026/accept_poster/jXDZJAfRZB/Figure12.png
12
Figure 12: Illustration of the right-handed coordinate system defined relative to each camera.
<paragraph_1>Directional convention. In summary, +z = upward, −z = downward; +y = forward, −y = backward; +x = right, −x = left. Figure 12 provides an illustration of this definition.</paragraph_1>
diagram
0.955413
OpenReview
ICLR
2026
R-Horizon: How Far Can Your Large Reasoning Model Really Go in Breadth and Depth?
Recent trends in test-time scaling for reasoning models (e.g., OpenAI o1, DeepSeek-R1) have led to remarkable improvements through long Chain-of-Thought (CoT). However, existing benchmarks mainly focus on immediate, single-horizon tasks, failing to adequately evaluate models’ ability to understand and respond to comple...
Large Reasoning Models, Long Horizon Reasoning
foundation or frontier models, including LLMs
A scalable, controllable, and low-cost paradigm for enhancing and evaluating the long-horizon reasoning capabilities of LRMs
[ 6, 6, 6, 6 ]
Accept (Poster)
Yi Lu, Jianing Wang, Linsen Guo, Wei He, Hongyin Tang, Tao Gui, Xuanjing Huang, Xuezhi Cao, Wei Wang, Xunliang Cai
~Yi_Lu7, ~Jianing_Wang4, ~Linsen_Guo2, ~Wei_He14, ~Hongyin_Tang1, ~Tao_Gui1, ~Xuanjing_Huang1, ~Xuezhi_Cao1, ~Wei_Wang41, ~Xunliang_Cai1
20250916
https://openreview.net/forum?id=rRB1bYErbL
rRB1bYErbL
@inproceedings{ lu2026rhorizon, title={R-Horizon: How Far Can Your Large Reasoning Model Really Go in Breadth and Depth?}, author={Yi Lu and Jianing Wang and Linsen Guo and Wei He and Hongyin Tang and Tao Gui and Xuanjing Huang and Xuezhi Cao and Wei Wang and Xunliang Cai}, booktitle={The Fourteenth International Confe...
OpenReview/ICLR/figures/2026/accept_poster/rRB1bYErbL/Figure2.png
2
Figure 2: The R-HORIZON data composition pipeline is illustrated in (a)-(c). We leverage RHORIZON to construct a comprehensive long-horizon reasoning evaluation benchmark spanning 6 tasks and generate multi-horizon training data for long-horizon reinforcement learning.
<paragraph_1>We propose R-HORIZON, a method designed to stimulate long-horizon reasoning behaviors in LRMs via query composition. As illustrated in Figure 2, R-HORIZON supports the concatenation of three types of expanded questions and can be employed in both the training and evaluation stages to enhance and evaluate t...
diagram
0.95814
OpenReview
ICLR
2026
IGC-Net for conditional average potential outcome estimation over time
Estimating potential outcomes for treatments over time based on observational data is important for personalized decision-making in medicine. However, many existing methods for this task fail to properly adjust for time-varying confounding and thus yield biased estimates. There are only a few neural methods with proper...
causal inference, potential outcomes, treatment effects, healthcare
causal reasoning
We develop a novel neural method that performs G-computation in an iterative end-to-end training algorithm for conditional average potential outcome estimation over time.
[ 8, 6, 2, 4, 4 ]
Accept (Poster)
Konstantin Hess, Dennis Frauen, Valentyn Melnychuk, Stefan Feuerriegel
~Konstantin_Hess1, ~Dennis_Frauen1, ~Valentyn_Melnychuk1, ~Stefan_Feuerriegel1
20250916
https://openreview.net/forum?id=ZmhpqpKzAT
ZmhpqpKzAT
@inproceedings{ hess2026igcnet, title={{IGC}-Net for conditional average potential outcome estimation over time}, author={Konstantin Hess and Dennis Frauen and Valentyn Melnychuk and Stefan Feuerriegel}, booktitle={The Fourteenth International Conference on Learning Representations}, year={2026}, url={https://openrevie...
OpenReview/ICLR/figures/2026/accept_poster/ZmhpqpKzAT/Figure1.png
1
Figure 1: Iterative G-computation network. Neural end-to-end architecture and training of our iterative G-computation network.
<paragraph_1>Our IGC-Net consists of two key components (see Figure 1): (i) a neural backbone z_ϕ(·), which can be, for example, an LSTM or a transformer, and (ii) several G-computation heads {g_ϕ^δ(·)}_{δ=0}^{τ−1}, where ϕ denotes the trainable weights. First, the neural backbone encodes the entire observed history. Then,...</paragraph_1>
diagram
0.992686
OpenReview
ICLR
2026
**TandemFoilSet**: Datasets for Flow Field Prediction of Tandem-Airfoil Through the Reuse of Single Airfoils
Accurate simulation of flow fields around tandem geometries is critical for engineering design but remains computationally intensive. Existing machine learning approaches typically focus on simpler cases and lack evaluation on multi-body configurations. To support research in this area, we present **TandemFoilSet**: fi...
Physics-informed Graph Neural Network; Tandem-Airfoil; Flow Field Prediction; CFD; Aerodynamics;
datasets and benchmarks
We introduce TandemFoilSet, a paired set of 5 tandem-airfoil + 4 single-airfoil CFD datasets (8,104 simulations total) and baseline benchmarks to enable scalable ML flow-field prediction for tandem-airfoil interactions.
[ 2, 6, 6, 4 ]
Accept (Poster)
Wei Xian Lim, Loh Sher En Jessica, Zenong Li, Thant Zin Oo, Wai Lee Chan, Adams Wai-Kin Kong
~Wei_Xian_Lim2, ~Loh_Sher_En_Jessica1, ~Zenong_Li1, ~Thant_Zin_Oo1, ~Wai_Lee_Chan1, ~Adams_Wai-Kin_Kong1
20250918
https://openreview.net/forum?id=4Z0P4Nbosn
4Z0P4Nbosn
@inproceedings{ lim2026tandemfoilset, title={**TandemFoilSet**: Datasets for Flow Field Prediction of Tandem-Airfoil Through the Reuse of Single Airfoils}, author={Wei Xian Lim and Loh Sher En Jessica and Zenong Li and Thant Zin Oo and Wai Lee Chan and Adams Wai-Kin Kong}, booktitle={The Fourteenth International Confer...
OpenReview/ICLR/figures/2026/accept_poster/4Z0P4Nbosn/Figure16.png
16
Figure 16: Determining obstruction of a boundary point from the reference point in a (a) single-object case and (b) double-object case. Note how a boundary point that is unobstructed in the first case may be obstructed by another object in the second case.
<paragraph_1>As mentioned previously, the DID was estimated numerically following the procedure outlined in Algorithm 1. Although extending the theoretical definition of DID to multiple geometries is conceptually straightforward, the numerical calculations grow significantly more complex with each additional object. The...
diagram
0.991554
OpenReview
ICLR
2026
Understanding and Improving Length Generalization in Hierarchical Sparse Attention Models
Effectively processing long contexts is a critical challenge for language models. While standard Transformers are limited by quadratic complexity and poor length extrapolation, alternative architectures like sliding window attention and state space models sacrifice the ability to effectively utilize the full context du...
long-context modeling, length generalization, length extrapolation, sparse attention, language modeling
unsupervised, self-supervised, semi-supervised, and supervised representation learning
We demonstrate that extreme length generalization in hierarchical sparse attention is enabled by the interplay of an expressive chunking, a stable bypassing residual path, and enforced retrieval sparsity.
[ 4, 6, 4, 8 ]
Accept (Poster)
Jiaqi Leng, Xiang Hu, Junxiong Wang, Jianguo Li, Wei Wu, Yucheng Lu
~Jiaqi_Leng3, ~Xiang_Hu2, ~Junxiong_Wang1, ~Jianguo_Li2, ~Wei_Wu1, ~Yucheng_Lu1
20250912
https://openreview.net/forum?id=iHqdSQk6qc
iHqdSQk6qc
@inproceedings{ leng2026understanding, title={Understanding and Improving Length Generalization in Hierarchical Sparse Attention Models}, author={Jiaqi Leng and Xiang Hu and Junxiong Wang and Jianguo Li and Wei Wu and Yucheng Lu}, booktitle={The Fourteenth International Conference on Learning Representations}, year={20...
OpenReview/ICLR/figures/2026/accept_poster/iHqdSQk6qc/Figure2.png
2
Figure 2: Design of Encoder: (a): Encoder w/o CLS (b): Encoder with a learnable CLS token.
<paragraph_1>The different architectural configurations we investigate, summarized in Table 1, can be expressed as joint definitions of (f, g). In the “w/ CLS” variant, we prepend a learnable token, xCLS, to the input chunk H[i], as shown in Fig. 2. The Encoder processes this combined sequence, and its output correspon...
diagram
0.911093
OpenReview
ICLR
2026
Omni-Weather: Unified Multimodal Foundation Model for Weather Generation and Understanding
Weather modeling requires both accurate prediction and mechanistic interpretation, yet existing methods treat these goals in isolation, separating generation from understanding. To address this gap, we present Omni-Weather, the first multimodal foundation model that unifies weather generation and understanding within a...
AI for Science, Unified foundation model, Interpretable reasoning
applications to physical sciences (physics, chemistry, biology, etc.)
[ 6, 6, 4, 8 ]
Accept (Poster)
Zhiwang Zhou, Yuandong Pu, Xuming He, Yidi Liu, Yixin Chen, Junchao Gong, Xiang Zhuang, Wanghan Xu, Qinglong Cao, SHIXIANG TANG, Yihao Liu, Wenlong Zhang, LEI BAI
~Zhiwang_Zhou1, ~Yuandong_Pu1, ~Xuming_He4, ~Yidi_Liu3, ~Yixin_Chen26, ~Junchao_Gong1, ~Xiang_Zhuang1, ~Wanghan_Xu1, ~Qinglong_Cao1, ~SHIXIANG_TANG1, ~Yihao_Liu1, ~Wenlong_Zhang3, ~LEI_BAI1
20250910
https://openreview.net/forum?id=3WnXsp72v6
3WnXsp72v6
@inproceedings{ zhou2026omniweather, title={Omni-Weather: Unified Multimodal Foundation Model for Weather Generation and Understanding}, author={Zhiwang Zhou and Yuandong Pu and Xuming He and Yidi Liu and Yixin Chen and Junchao Gong and Xiang Zhuang and Wanghan Xu and Qinglong Cao and SHIXIANG TANG and Yihao Liu and We...
OpenReview/ICLR/figures/2026/accept_poster/3WnXsp72v6/Figure2.png
2
Figure 2: Comparison between separated architectures for weather understanding / generation (top) and unified framework with shared self-attention (bottom).
<paragraph_1>Despite these advances, unified architectures remain absent in the weather domain. As shown in Figure 2, existing approaches are divided into two disjoint paradigms: models such as ClimaX Nguyen et al. (2023) and WeatherGFM Zhao et al. (2024) excel at forecasting and downscaling but lack interpretation, whi...
diagram
0.992263
OpenReview
ICLR
2026
Weight Space Representation Learning on Diverse NeRF Architectures
Neural Radiance Fields (NeRFs) have emerged as a groundbreaking paradigm for representing 3D objects and scenes by encoding shape and appearance information into the weights of a neural network. Recent studies have demonstrated that these weights can be used as input for frameworks designed to address deep learning tas...
weight space learning, representation learning, metanetworks, graph metanetworks, neural fields, neural radiance fields, NeRF, implicit neural representations, INR
unsupervised, self-supervised, semi-supervised, and supervised representation learning
We present the first framework that performs tasks on NeRFs by processing their weights and is able to work on diverse architectures
[ 6, 4, 4, 6 ]
Accept (Poster)
Francesco Ballerini, Pierluigi Zama Ramirez, Luigi Di Stefano, Samuele Salti
~Francesco_Ballerini1, ~Pierluigi_Zama_Ramirez1, ~Luigi_Di_Stefano2, ~Samuele_Salti1
20250918
https://openreview.net/forum?id=u90rHXaBve
u90rHXaBve
@inproceedings{ ballerini2026weight, title={Weight Space Representation Learning on Diverse Ne{RF} Architectures}, author={Francesco Ballerini and Pierluigi Zama Ramirez and Luigi Di Stefano and Samuele Salti}, booktitle={The Fourteenth International Conference on Learning Representations}, year={2026}, url={https://op...
OpenReview/ICLR/figures/2026/accept_poster/u90rHXaBve/Figure5.png
5
Figure 5: Parameter graph conversion. Top left: parameter graph representation of an MLP, proposed by Lim et al. (2024). Right: parameter graph representation of a tri-plane, proposed by Lim et al. (2024). Dotted edges should be connected to the C channel nodes, but are not fully drawn for better visual clarity. Bottom...
<paragraph_1>The parameter graph conversion of an MLP, a tri-plane, and a multi-resolution hash table is depicted in Fig. 5, with additional details compared to Fig. 2 (left).</paragraph_1>
diagram
0.883032
OpenReview
ICLR
2026
Toward Effective Tool-Integrated Reasoning via Self-Evolved Preference Learning
Tool-Integrated Reasoning (TIR) enables large language models (LLMs) to enhance their internal reasoning ability by integrating external tools. However, models with TIR often exhibit suboptimal behaviors, including insufficient tool calls, excessive tool calls, and overthinking after receiving tool call results. How to...
reasoning model, tool-integrated reasoning, self-evolved training, information entropy
foundation or frontier models, including LLMs
[ 4, 6, 8, 6 ]
Accept (Poster)
Yifei Chen, Guanting Dong, Zhicheng Dou
~Yifei_Chen12, ~Guanting_Dong1, ~Zhicheng_Dou1
20250916
https://openreview.net/forum?id=mNeitRAdWV
mNeitRAdWV
@inproceedings{ chen2026toward, title={Toward Effective Tool-Integrated Reasoning via Self-Evolved Preference Learning}, author={Yifei Chen and Guanting Dong and Zhicheng Dou}, booktitle={The Fourteenth International Conference on Learning Representations}, year={2026}, url={https://openreview.net/forum?id=mNeitRAdWV} ...
OpenReview/ICLR/figures/2026/accept_poster/mNeitRAdWV/Figure3.png
3
Figure 3: The overall structure of Tool-Light’s training pipeline. Among them, the Self-Evolved DPO Alignment stage will conduct multiple rounds of training.
<paragraph_1>Overview. We propose Tool-Light, a multi-stage training pipeline aiming to improve the effectiveness of model tool calls. As shown in Figures 2 and 3, Tool-Light consists of two key components: (1) Dataset construction, which includes carefully designed sampling strategies to screen out training data. (2) ...
diagram
0.939537
OpenReview
ICLR
2026
Lookup multivariate Kolmogorov-Arnold Networks
High-dimensional linear mappings, or linear layers, dominate both the parameter count and the computational cost of most modern deep-learning models. We introduce lookup multivariate Kolmogorov-Arnold Networks (lmKANs), which deliver a substantially better trade-off between capacity and inference cost. Our construction...
KAN, inference efficiency, CUDA kernels
other topics in machine learning (i.e., none of the above)
We propose a fully connected layer that decouples inference efficiency from the number of trainable parameters and empirically find it to be Pareto optimal across a wide range of macro-architectural backbones.
[ 6, 2, 6, 6 ]
Accept (Poster)
Sergey Pozdnyakov, Philippe Schwaller
~Sergey_Pozdnyakov1, ~Philippe_Schwaller1
20250919
https://openreview.net/forum?id=XRQVIeBnB0
XRQVIeBnB0
@inproceedings{ pozdnyakov2026lookup, title={Lookup multivariate Kolmogorov-Arnold Networks}, author={Sergey Pozdnyakov and Philippe Schwaller}, booktitle={The Fourteenth International Conference on Learning Representations}, year={2026}, url={https://openreview.net/forum?id=XRQVIeBnB0} }
OpenReview/ICLR/figures/2026/accept_poster/XRQVIeBnB0/Figure6.png
6
Figure 6: A methane configuration
<paragraph_1>Having demonstrated that lmKANs are Pareto-optimal when approximating a general function, we proceed to benchmark their efficiency on real data. We chose the tabular-like dataset of randomly displaced methane configurations for the comparison, as it is particularly suitable for this purpose (see Appendix G...
diagram
0.866739
OpenReview
ICLR
2026
Automata Learning and Identification of the Support of Language Models
We study the learnability of languages in the *Next Symbol Prediction* (NSP) setting, where a learner receives only positive examples from a language together with, for every prefix, (i) whether the prefix itself is in the language and (ii) which next symbols can lead to an accepting string. This setting has been used ...
automata learning, regular languages, learning theory, DFA extraction, language models
learning theory
[ 8, 6, 6, 8 ]
Accept (Poster)
Satwik Bhattamishra, Michael Hahn, Varun Kanade
~Satwik_Bhattamishra1, ~Michael_Hahn1, ~Varun_Kanade1
20250919
https://openreview.net/forum?id=L8SMNWsxfK
L8SMNWsxfK
@inproceedings{ bhattamishra2026automata, title={Automata Learning and Identification of the Support of Language Models}, author={Satwik Bhattamishra and Michael Hahn and Varun Kanade}, booktitle={The Fourteenth International Conference on Learning Representations}, year={2026}, url={https://openreview.net/forum?id=L8S...
OpenReview/ICLR/figures/2026/accept_poster/L8SMNWsxfK/Figure7.png
7
Figure 7: DFA with 28 states extracted by L⋆ nsp from Transformer trained on Tomita-5. See App. H.2 for more details.
<paragraph_1>Identifying erroneous examples. When the learned DFA Â is not equivalent to the target DFA A⋆, we construct the product DFA B which recognizes the strings in the symmetric difference of the two languages, L(B) = L(Â) △ L(A⋆). We use a BFS-like approach to identify several erroneous examples for the languag...
diagram
0.92614
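The product-DFA construction described in the figure context above (strings in the symmetric difference L(Â) △ L(A⋆), enumerated by BFS) can be sketched as follows; the plain-dict DFA encoding is an illustrative choice, not the paper's implementation:

```python
from collections import deque

def product_symmetric_difference(dfa_a, dfa_b, alphabet):
    """Enumerate strings accepted by exactly one of two complete DFAs,
    i.e. the symmetric difference L(A) xor L(B), via BFS over the
    product automaton. Each DFA is a dict with 'start', 'accept' (a
    set of states), and 'delta' mapping (state, symbol) -> state."""
    start = (dfa_a['start'], dfa_b['start'])
    queue, seen, witnesses = deque([(start, '')]), {start}, []
    while queue:
        (qa, qb), word = queue.popleft()
        if (qa in dfa_a['accept']) != (qb in dfa_b['accept']):
            witnesses.append(word)      # accepted by exactly one DFA
        for sym in alphabet:
            nxt = (dfa_a['delta'][(qa, sym)], dfa_b['delta'][(qb, sym)])
            if nxt not in seen:
                seen.add(nxt)
                queue.append((nxt, word + sym))
    return witnesses
```

Because BFS visits each product state once, this yields one shortest witness per reachable accept-mismatch state, matching the "several erroneous examples" use case; e.g. for A accepting even-length strings over {a} and B accepting all strings, it returns `['a']`.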
OpenReview
ICLR
2026
Nef-Net v2: Adapting Electrocardio Panorama in the wild
Conventional multi-lead electrocardiogram (ECG) systems capture cardiac signals from a fixed set of anatomical viewpoints defined by lead placement. However, certain cardiac conditions (e.g., Brugada syndrome) require additional, non-standard viewpoints to reveal diagnostically critical patterns that may be absent in...
ECG representation, Cardiac Diagnosis
applications to physical sciences (physics, chemistry, biology, etc.)
An enhanced variant of Nef-Net to generate panoramic ECG views, including previously unseen views.
[ 6, 2, 6 ]
Accept (Poster)
Zehui Zhan, Yaojun Hu, Jiajing Zhang, Wanchen Lian, Wanqing Wu, Jintai Chen
~Zehui_Zhan1, ~Yaojun_Hu2, ~Jiajing_Zhang1, ~Wanchen_Lian1, ~Wanqing_Wu1, ~Jintai_Chen1
20250917
https://openreview.net/forum?id=JzZhhhxniR
JzZhhhxniR
@inproceedings{ zhan2026nefnet, title={Nef-Net v2: Adapting Electrocardio Panorama in the wild}, author={Zehui Zhan and Yaojun Hu and Jiajing Zhang and Wanchen Lian and Wanqing Wu and Jintai Chen}, booktitle={The Fourteenth International Conference on Learning Representations}, year={2026}, url={https://openreview.net/...
OpenReview/ICLR/figures/2026/accept_poster/JzZhhhxniR/Figure2.png
2
Figure 2: NEF-NET V2 architecture for Electrocardio Panorama synthesis (illustrated for a 3-input to 2-query view synthesis task as example). The NEF-NET V2 first employs a View Encoder to extract features from the Recorded ECG that are relevant to the Queried ECG. These extracted features are then fused using a Geomet...
<paragraph_1>The key idea of NEF-NET V2 is to formulate ECG view synthesis as a direct view-to-view transformation problem. This is a pairwise deterministic mapping: the model converts the observed lead signals into the target lead through a single-step transformation, without modeling any shared geometric prior (e.g.,...
diagram
0.992609
OpenReview
ICLR
2026
Unified Vision–Language Modeling via Concept Space Alignment
We introduce vSONAR, a vision–language embedding space extended from the text-only embedding space SONAR, which supports 200 text languages and 37 speech languages. To construct vSONAR, we propose a post-hoc alignment pipeline that maps the representations of an existing vision encoder into the SONAR space. We thorough...
multimodal embedding space, multilingual embedding space
applications to computer vision, audio, language, and other modalities
[ 6, 6, 6, 4 ]
Accept (Poster)
Yifu QIU, Paul-Ambroise Duquenne, Holger Schwenk
~Yifu_QIU1, ~Paul-Ambroise_Duquenne1, ~Holger_Schwenk1
20250918
https://openreview.net/forum?id=4LiX5ddGcU
4LiX5ddGcU
@inproceedings{ qiu2026unified, title={Unified Vision{\textendash}Language Modeling via Concept Space Alignment}, author={Yifu QIU and Paul-Ambroise Duquenne and Holger Schwenk}, booktitle={The Fourteenth International Conference on Learning Representations}, year={2026}, url={https://openreview.net/forum?id=4LiX5ddGcU...
OpenReview/ICLR/figures/2026/accept_poster/4LiX5ddGcU/Figure1.png
1
Figure 1: Left: Illustration of V-SONAR. Right: fine-tuning V-LCM with vision-language instruction tuning.
<paragraph_1>Architecture The architecture of V-SONAR is illustrated in the left panel of Figure 1. Given the input image or video, PERCEPTION ENCODER (PE) will first encode each frame separately. Then, we stack a lightweight projector on top of PE to adapt the encoder’s representations into the SONAR space. The projec...
diagram
0.931501
OpenReview
ICLR
2026
Not All Clients Are Equal: Collaborative Model Personalization on Heterogeneous Multi-Modal Clients
As AI becomes more personal, e.g., Agentic AI, there is an increasing need for personalizing models for various use cases. Personalized federated learning (PFL) enables each client to collaboratively leverage other clients' knowledge for better adaptation to the task of interest, without privacy risks. Despite its pote...
Collaborative Learning, Federated Learning, Continual Learning, Multi-modal Learning, Personalization, Distributed Learning
applications to computer vision, audio, language, and other modalities
[ 10, 4, 6, 8 ]
Accept (Poster)
Minhyuk Seo, Taeheon Kim, Hankook Lee, Jonghyun Choi, Tinne Tuytelaars
~Minhyuk_Seo1, ~Taeheon_Kim3, ~Hankook_Lee1, ~Jonghyun_Choi1, ~Tinne_Tuytelaars1
20250918
https://openreview.net/forum?id=0g5Dk4Qfh0
0g5Dk4Qfh0
@inproceedings{ seo2026not, title={Not All Clients Are Equal: Collaborative Model Personalization on Heterogeneous Multi-Modal Clients}, author={Minhyuk Seo and Taeheon Kim and Hankook Lee and Jonghyun Choi and Tinne Tuytelaars}, booktitle={The Fourteenth International Conference on Learning Representations}, year={202...
OpenReview/ICLR/figures/2026/accept_poster/0g5Dk4Qfh0/Figure14.png
14
Figure 14: Illustration of blockwise PQ-LoRA. When a model has NB PQ-LoRA modules, each block employs PQ-LoRA at its last layer, while the remaining layers adopt conventional LoRA. Each block contains the same number of layers.
<paragraph_1>To identify layer-wise correspondences between depth-heterogeneous models, we analyze representation alignment using CKA (Kornblith et al., 2019). Specifically, we measure similarity across layers within the Llama-3 family (1B, 3B, 8B) and the Qwen-2.5 family (0.5B, 1.5B, 3B), as illustrated in Fig. 12. As...
diagram
0.962517
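The layer-wise CKA analysis mentioned in the figure context above can be reproduced in miniature with linear CKA (Kornblith et al., 2019); matrix sizes and data below are illustrative:

```python
import numpy as np

def linear_cka(X, Y):
    """Linear Centered Kernel Alignment between representation
    matrices X (n x d1) and Y (n x d2) whose rows correspond to the
    same n inputs (Kornblith et al., 2019)."""
    X = X - X.mean(axis=0, keepdims=True)   # center each feature
    Y = Y - Y.mean(axis=0, keepdims=True)
    num = np.linalg.norm(Y.T @ X, 'fro') ** 2
    den = np.linalg.norm(X.T @ X, 'fro') * np.linalg.norm(Y.T @ Y, 'fro')
    return num / den
```

`linear_cka(X, X)` is 1, and the score is invariant to orthogonal transforms and isotropic scaling of either representation, which is what makes it suitable for comparing layers across depth-heterogeneous models.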
OpenReview
ICLR
2026
FreeKV: Boosting KV Cache Retrieval for Efficient LLM Inference
Large language models (LLMs) have been widely deployed with rapidly expanding context windows to support increasingly demanding applications. However, long contexts pose significant deployment challenges, primarily due to the KV cache whose size grows proportionally with context length. While KV cache compression metho...
LLM inference, KV cache
infrastructure, software libraries, hardware, systems, etc.
We propose FreeKV, an algorithm-system co-optimization framework for LLM inference to enhance KV retrieval efficiency while preserving accuracy.
[ 8, 2, 6, 6 ]
Accept (Poster)
Guangda Liu, Chengwei Li, Zhenyu Ning, Jing Lin, Yiwu Yao, Danning Ke, Minyi Guo, Jieru Zhao
~Guangda_Liu1, ~Chengwei_Li1, ~Zhenyu_Ning1, ~Jing_Lin6, ~Yiwu_Yao1, ~Danning_Ke1, ~Minyi_Guo1, ~Jieru_Zhao1
20250918
https://openreview.net/forum?id=wXAn7orB1H
wXAn7orB1H
@inproceedings{ liu2026freekv, title={Free{KV}: Boosting {KV} Cache Retrieval for Efficient {LLM} Inference}, author={Guangda Liu and Chengwei Li and Zhenyu Ning and Jing Lin and Yiwu Yao and Danning Ke and Minyi Guo and Jieru Zhao}, booktitle={The Fourteenth International Conference on Learning Representations}, year=...
OpenReview/ICLR/figures/2026/accept_poster/wXAn7orB1H/Figure5.png
5
Figure 5: System overview of FreeKV.
<paragraph_1>The system overview of FreeKV is illustrated in Fig. 5. In the data plane, FreeKV retains the query vectors from the previous step, page summaries and cache for selected KV pages in GPU memory. In CPU memory, FreeKV maintains a complete KV cache pool for offloading KV pages. In the control plane, a control...
diagram
0.981499
OpenReview
ICLR
2026
Fine-Grained Activation Steering: Steering Less, Achieving More
Activation steering has emerged as a cost-effective paradigm for modifying large language model (LLM) behaviors. Existing methods typically intervene at the block level, steering the bundled activations of selected attention heads, feedforward networks, or residual streams. However, we reveal that block-level activatio...
Activation Steering, Large Language Models, Fine-Grained Intervention
foundation or frontier models, including LLMs
Breaking LLM blocks to fine-grained atomic units for intervention: steering less achieves more
[ 4, 4, 6 ]
Accept (Poster)
Zijian Feng, Tianjiao Li, Zixiao Zhu, Hanzhang Zhou, Junlang Qian, Li Zhang, Chua Jia Jim Deryl, Mak Lee Onn, Gee Wah Ng, Kezhi Mao
~Zijian_Feng2, ~Tianjiao_Li2, ~Zixiao_Zhu2, ~Hanzhang_Zhou1, ~Junlang_Qian1, ~Li_Zhang70, ~Chua_Jia_Jim_Deryl2, ~Mak_Lee_Onn1, ~Gee_Wah_Ng1, ~Kezhi_Mao1
20250918
https://openreview.net/forum?id=guSVafqhrB
guSVafqhrB
@inproceedings{ feng2026finegrained, title={Fine-Grained Activation Steering: Steering Less, Achieving More}, author={Zijian Feng and Tianjiao Li and Zixiao Zhu and Hanzhang Zhou and Junlang Qian and Li Zhang and Chua Jia Jim Deryl and Mak Lee Onn and Gee Wah Ng and Kezhi Mao}, booktitle={The Fourteenth International C...
OpenReview/ICLR/figures/2026/accept_poster/guSVafqhrB/Figure1.png
1
Figure 1: Comparison of block-level steering (prior work) and AU-level steering (Ours).
<paragraph_1>However, a common practice in existing methods is block-level steering, where a “block” denotes the multi-head attention (MHA), the feed-forward network (FFN), or the layer’s residual stream. As shown in Figure 1 (a), the intervention is vector-level: every dimension of the selected block’s activation is b...
diagram
0.998495
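The contrast in Figure 1 between block-level (vector-level) steering and AU-level steering can be sketched numerically; the unit-index set below is hypothetical, since the paper's criterion for choosing atomic units is not part of this record:

```python
import numpy as np

def block_level_steer(h, v, alpha):
    """Block-level steering (prior work): add the steering vector to
    every dimension of the block's activation."""
    return h + alpha * v

def au_level_steer(h, v, alpha, unit_idx):
    """AU-level steering (sketch): intervene only on a chosen subset
    of dimensions ('atomic units'); `unit_idx` is hypothetical, as
    the paper's unit-selection criterion is not reproduced here."""
    steered = h.copy()
    steered[unit_idx] += alpha * v[unit_idx]
    return steered
```

The point of the sketch: AU-level intervention perturbs only the selected dimensions and leaves the rest of the activation untouched, i.e. "steering less".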
OpenReview
ICLR
2026
Counterfactual Structural Causal Bandits
Causal reasoning lies at the heart of robust and generalizable decision-making, and the *Pearl Causal Hierarchy* provides a formal language for distinguishing between observational ($\mathcal{L}_1$), interventional ($\mathcal{L}_2$), and counterfactual ($\mathcal{L}_3$) levels of reasoning. Existing bandit algorithms t...
causal inference, counterfactual inference, structural causal bandits, causal decision making
causal reasoning
We introduce a counterfactual structural causal bandit (ctf-SCB) framework which expands the agent's feasible action space beyond conventional observational and interventional arms to include a class of realizable counterfactual actions.
[ 4, 4, 6, 8 ]
Accept (Poster)
Min Woo Park, Sanghack Lee
~Min_Woo_Park1, ~Sanghack_Lee1
20250920
https://openreview.net/forum?id=gjvTNxVd2f
gjvTNxVd2f
@inproceedings{ park2026counterfactual, title={Counterfactual Structural Causal Bandits}, author={Min Woo Park and Sanghack Lee}, booktitle={The Fourteenth International Conference on Learning Representations}, year={2026}, url={https://openreview.net/forum?id=gjvTNxVd2f} }
OpenReview/ICLR/figures/2026/accept_poster/gjvTNxVd2f/Figure10.png
10
Figure 10: MUCT and IB are shown in red and blue, respectively; (b, c) non-POMISs; (d, e) POMISs.
<paragraph_1>For example, consider the causal diagram in Fig. 10a. Here, G = G[An(Y)_G] holds. An L1 action do(∅) is not a POMIS. To see this, we construct MUCT, initializing T = {Y}, as follows: Since Y has an unobserved confounder with C, we update T = cc(Y)_G = {C, Y}, and thereafter add all the descendants of C, ...
diagram
0.990313
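The MUCT construction walked through in the figure context (initialize T = {Y}, close under confounded components, then add descendants) can be sketched as a fixed-point loop; the toy graph encoding and names are illustrative, and the full POMIS machinery is omitted:

```python
def muct(bidirected, children, y):
    """Fixed-point sketch of the MUCT closure hinted at in the figure
    context: start from T = {y}, repeatedly (i) close T under
    confounded components (bidirected edges) and (ii) add all
    descendants, until nothing changes. The dict-based graph encoding
    is illustrative, not the paper's implementation."""
    T = {y}
    changed = True
    while changed:
        changed = False
        for node in list(T):                      # (i) confounder closure
            for other in bidirected.get(node, ()):
                if other not in T:
                    T.add(other)
                    changed = True
        frontier = list(T)                        # (ii) descendant closure
        while frontier:
            node = frontier.pop()
            for child in children.get(node, ()):
                if child not in T:
                    T.add(child)
                    changed = True
                    frontier.append(child)
    return T
```

On a toy diagram with C → Y, C → D, and an unobserved confounder C ↔ Y, this reproduces the walkthrough: T grows from {Y} to {C, Y} and then absorbs the descendant D.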
OpenReview
ICLR
2026
SpaCE-Eval: A Benchmark for Real-World Multi-Modal Reasoning
Multi-modal Large Language Models (MLLMs) represent a significant advancement in artificial intelligence. Among the growing capabilities exhibited by MLLMs, abilities to understand and reason in real-world environments stand out as particularly vital as a fundamental prerequisite for a wide array of real-world applicat...
Benchmark, Multi-modal Large Language Model, Visual Reasoning, Real World Environments, Evaluation
datasets and benchmarks
[ 6, 4, 6, 6 ]
Accept (Poster)
Xuyou Yang, Yucheng Zhao, Wenxuan Zhang, Immanuel Koh
~Xuyou_Yang1, ~Yucheng_Zhao3, ~Wenxuan_Zhang1, ~Immanuel_Koh1
20250919
https://openreview.net/forum?id=VAEkLS9VBr
VAEkLS9VBr
@inproceedings{ yang2026spaceeval, title={Spa{CE}-Eval: A Benchmark for Real-World Multi-Modal Reasoning}, author={Xuyou Yang and Yucheng Zhao and Wenxuan Zhang and Immanuel Koh}, booktitle={The Fourteenth International Conference on Learning Representations}, year={2026}, url={https://openreview.net/forum?id=VAEkLS9VB...
OpenReview/ICLR/figures/2026/accept_poster/VAEkLS9VBr/Figure9.png
9
Figure 9: Example of Spatial Reasoning/Form Transformation.
diagram
0.873445
OpenReview
ICLR
2026
GaussianFusion: Unified 3D Gaussian Representation for Multi-Modal Fusion Perception
The bird’s-eye view (BEV) representation enables multi-sensor features to be fused within a unified space, serving as the primary approach for achieving comprehensive multi-task perception. However, the discrete grid representation of BEV leads to significant detail loss and limits feature alignment and cross-modal inf...
Gaussian Representation, BEV Representation, Detection, Occupancy
applications to robotics, autonomy, planning
[ 2, 4, 6, 6 ]
Accept (Poster)
Xiao Zhao, Chang Liu, Mingxu Zhu, Zheyuan Zhang, Linna Song, Qingliang Luo, Chufan Guo, Kuifeng Su
~Xiao_Zhao4, ~Chang_Liu67, ~Mingxu_Zhu1, ~Zheyuan_Zhang6, ~Linna_Song1, ~Qingliang_Luo1, ~Chufan_Guo1, ~Kuifeng_Su1
20250916
https://openreview.net/forum?id=7jXxQ9bGoU
7jXxQ9bGoU
@inproceedings{ zhao2026gaussianfusion, title={GaussianFusion: Unified 3D Gaussian Representation for Multi-Modal Fusion Perception}, author={Xiao Zhao and Chang Liu and Mingxu Zhu and Zheyuan Zhang and Linna Song and Qingliang Luo and Chufan Guo and Kuifeng Su}, booktitle={The Fourteenth International Conference on Le...
OpenReview/ICLR/figures/2026/accept_poster/7jXxQ9bGoU/Figure1.png
1
Figure 1: Comparison of the discrete BEV representation fusion paradigm Liu et al. (2023b) and our proposed continuous Gaussian representation fusion paradigm. B, G, C, L, and F denote BEV, Gaussian, Camera, Lidar, and Fusion.
<paragraph_1>BEV directly discretizes and quantizes data, leading to inevitable information loss. During feature extraction, perception data are projected onto a fixed-resolution BEV grid, which compresses spatial information. This issue becomes particularly severe when the BEV resolution is low, as it directly impacts ...
diagram
0.993349
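The quantization loss attributed to fixed-resolution BEV grids can be illustrated with a toy experiment: count how many distinct points collapse into the same grid cell at different resolutions (a sketch of the general phenomenon, not the paper's pipeline):

```python
import numpy as np

def bev_collisions(points, cell_size):
    """Count how many distinct 2D points collapse into already-occupied
    BEV grid cells at a given resolution -- a toy picture of the
    quantization loss described above, not the paper's pipeline."""
    cells = {tuple(c) for c in np.floor(points / cell_size).astype(int)}
    return len(points) - len(cells)     # points merged into shared cells

rng = np.random.default_rng(0)
pts = rng.uniform(0, 10, size=(1000, 2))
coarse, fine = bev_collisions(pts, 1.0), bev_collisions(pts, 0.1)
assert coarse >= fine                   # coarser grids lose more detail
```

A continuous Gaussian representation sidesteps this loss by not committing to a fixed grid resolution at feature-extraction time.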
OpenReview
ICLR
2026
Beyond Simple Graphs: Neural Multi-Objective Routing on Multigraphs
Learning-based methods for routing have gained significant attention in recent years, both in single-objective and multi-objective contexts. Yet, existing methods are unsuitable for routing on multigraphs, which feature multiple edges with distinct attributes between node pairs, despite their strong relevance in real-w...
Combinatorial Optimization, Reinforcement Learning, Graph-based Machine Learning, Multigraphs, Traveling Salesman Problem, Multi-Objective Optimization
learning on graphs and other geometries & topologies
We introduce two GNN-based models for routing with multiple objectives on multigraphs and asymmetric graphs
[ 8, 4, 4 ]
Accept (Poster)
Filip Rydin, Attila Lischka, Jiaming Wu, Morteza Haghir Chehreghani, Balazs Kulcsar
~Filip_Rydin1, ~Attila_Lischka1, ~Jiaming_Wu3, ~Morteza_Haghir_Chehreghani2, ~Balazs_Kulcsar1
20250919
https://openreview.net/forum?id=55laGcPNZZ
55laGcPNZZ
@inproceedings{ rydin2026beyond, title={Beyond Simple Graphs: Neural Multi-Objective Routing on Multigraphs}, author={Filip Rydin and Attila Lischka and Jiaming Wu and Morteza Haghir Chehreghani and Balazs Kulcsar}, booktitle={The Fourteenth International Conference on Learning Representations}, year={2026}, url={https...
OpenReview/ICLR/figures/2026/accept_poster/55laGcPNZZ/Figure1.png
1
Figure 1: Edge-based GMS and its most important components.
<paragraph_1>We visualize GMS-EB in Figure 1. The encoder, consisting of L GREAT-layers, outputs edge embeddings. Using them, the decoder constructs valid tours autoregressively. Given the instance s and incomplete route π_{1:t−1} in construction step t, the decoder selects edge π_t with probability p_{θ(λ)}(π_t | π_{1:t−1}, s). ...
diagram
0.998319
OpenReview
ICLR
2026
Goedel-Prover-V2: Scaling Formal Theorem Proving with Scaffolded Data Synthesis and Self-Correction
Automated theorem proving (ATP) --- the task of generating a proof that passes automated proof verification given a math question in formal language --- is a critical challenge at the intersection of mathematics and Artificial Intelligence (AI). We introduce Goedel-Prover-V2, a family of two language models that establ...
Theorem Proving, Reasoning
foundation or frontier models, including LLMs
[ 6, 6, 4, 6 ]
Accept (Poster)
Yong Lin, Shange Tang, Bohan Lyu, Ziran Yang, Jui-Hui Chung, Haoyu Zhao, Lai Jiang, Yihan Geng, Jiawei Ge, Jingruo Sun, Jiayun Wu, Jiri Gesi, Ximing Lu, David Acuna, Kaiyu Yang, Hongzhou Lin, Yejin Choi, Danqi Chen, Sanjeev Arora, Chi Jin
~Yong_Lin2, ~Shange_Tang1, ~Bohan_Lyu1, ~Ziran_Yang1, ~Jui-Hui_Chung1, ~Haoyu_Zhao1, ~Lai_Jiang4, ~Yihan_Geng1, ~Jiawei_Ge3, ~Jingruo_Sun1, ~Jiayun_Wu1, ~Jiri_Gesi1, ~Ximing_Lu1, ~David_Acuna1, ~Kaiyu_Yang1, ~Hongzhou_Lin1, ~Yejin_Choi1, ~Danqi_Chen1, ~Sanjeev_Arora1, ~Chi_Jin1
20250916
https://openreview.net/forum?id=j4C0nALrgK
j4C0nALrgK
@inproceedings{ lin2026goedelproverv, title={Goedel-Prover-V2: Scaling Formal Theorem Proving with Scaffolded Data Synthesis and Self-Correction}, author={Yong Lin and Shange Tang and Bohan Lyu and Ziran Yang and Jui-Hui Chung and Haoyu Zhao and Lai Jiang and Yihan Geng and Jiawei Ge and Jingruo Sun and Jiayun Wu and J...
OpenReview/ICLR/figures/2026/accept_poster/j4C0nALrgK/Figure3.png
3
Figure 3: The overall pipeline of model training.
<paragraph_1>We observe that while DeepSeek-Prover-V2 models are already heavily trained and have lost self-correction capabilities, other models like Qwen3 lack the ability to generate formal proofs. To address this trade-off, we use data distilled from DeepSeek-Prover-V2 to cold-start Qwen3, followed by large-scale ge...
diagram
0.951549
OpenReview
ICLR
2026
Learning Unified Representation of 3D Gaussian Splatting
A well-designed vectorized representation is crucial for the learning systems natively based on 3D Gaussian Splatting. While 3DGS enables efficient and explicit 3D reconstruction, its parameter-based representation remains hard to learn as features, especially for neural-network-based models. Directly feeding raw Gauss...
Representation Learning, 3D Gaussian Splatting
unsupervised, self-supervised, semi-supervised, and supervised representation learning
Proposed a new representation of 3DGS based on submanifold field that is more suitable for learning.
[ 2, 4, 8, 8 ]
Accept (Poster)
Yuelin Xin, Yuheng Liu, Xiaohui Xie, Xinke Li
~Yuelin_Xin1, ~Yuheng_Liu1, ~Xiaohui_Xie2, ~Xinke_Li1
20250904
https://openreview.net/forum?id=NvpVtGG6hk
NvpVtGG6hk
@inproceedings{ xin2026learning, title={Learning Unified Representation of 3D Gaussian Splatting}, author={Yuelin Xin and Yuheng Liu and Xiaohui Xie and Xinke Li}, booktitle={The Fourteenth International Conference on Learning Representations}, year={2026}, url={https://openreview.net/forum?id=NvpVtGG6hk} }
OpenReview/ICLR/figures/2026/accept_poster/NvpVtGG6hk/Figure6.png
6
Figure 6: Setting of a Gaussian Neural Field, we compare between the prediction target SF embedding and raw GS parameters.
<paragraph_1>Gaussian Neural Fields. To validate the potential of our representation for advanced downstream tasks, we introduce the Gaussian Neural Field (GNF). Drawing inspiration from the decoding structures in generative diffusion models (e.g., DiffGS by Zhou et al. (2024b)) and neural compression frameworks (Wu & ...
diagram
0.973273
OpenReview
ICLR
2026
Disentangled representation learning through unsupervised symmetry group discovery
Symmetry-based disentangled representation learning leverages the group structure of environment transformations to uncover the latent factors of variation. Prior approaches to symmetry-based disentanglement have required strong prior knowledge of the symmetry group's structure, or restrictive assumptions about the sub...
Representation learning, Disentanglement, Group Theory
unsupervised, self-supervised, semi-supervised, and supervised representation learning
[ 8, 4, 8, 6 ]
Accept (Poster)
Barthélémy Dang-Nhu, Louis Annabi, Sylvain ARGENTIERI
~Barthélémy_Dang-Nhu1, ~Louis_Annabi1, ~Sylvain_ARGENTIERI1
20250919
https://openreview.net/forum?id=I6xjMoLY3j
I6xjMoLY3j
@inproceedings{ dang-nhu2026disentangled, title={Disentangled representation learning through unsupervised symmetry group discovery}, author={Barth{\'e}l{\'e}my Dang-Nhu and Louis Annabi and Sylvain ARGENTIERI}, booktitle={The Fourteenth International Conference on Learning Representations}, year={2026}, url={https://o...
OpenReview/ICLR/figures/2026/accept_poster/I6xjMoLY3j/Figure4.png
4
Figure 4: Two isomorphic group actions satisfying Assumption 2.
<paragraph_1>We argue that this assumption alone is not sufficient to recover the correct decomposition. To illustrate this point, consider two distinct environments analogous to Flatland, shown in Figure 4: (a) a 2 × 3 cyclic grid, i.e. G_a ≅ Z/2Z × Z/3Z with actions {x} ∪ {y}, and (b) a 6 × 1 cyclic grid, i.e. G_b ≅ Z/...
diagram
0.908796
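The claim behind Figure 4 — that the 2 × 3 grid group Z/2Z × Z/3Z and the 6-cycle group Z/6Z are isomorphic, so Assumption 2 alone cannot distinguish the two decompositions — can be checked directly via the Chinese Remainder Theorem map:

```python
from itertools import product

def is_isomorphism(phi, add_a, add_b, elems_a):
    """Check that phi is a bijective homomorphism from group A
    (elements elems_a, operation add_a) to group B (operation add_b)."""
    bijective = len({phi(g) for g in elems_a}) == len(elems_a)
    homomorphic = all(phi(add_a(g, h)) == add_b(phi(g), phi(h))
                      for g, h in product(elems_a, repeat=2))
    return bijective and homomorphic

# (a) the 2 x 3 cyclic grid: Z/2Z x Z/3Z with componentwise addition
elems = [(i, j) for i in range(2) for j in range(3)]
add_grid = lambda g, h: ((g[0] + h[0]) % 2, (g[1] + h[1]) % 3)
# (b) the 6-cycle: Z/6Z; the CRT map is phi(i, j) = 3i + 4j mod 6
add_cycle = lambda g, h: (g + h) % 6
phi = lambda g: (3 * g[0] + 4 * g[1]) % 6
```

`is_isomorphism(phi, add_grid, add_cycle, elems)` returns True: the two environments carry the same abstract group, so additional structure is needed to recover the intended factorization.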
OpenReview
ICLR
2026
On-the-Fly Adaptation to Quantization: Configuration-Aware LoRA for Efficient Fine-Tuning of Quantized LLMs
As increasingly large pre-trained models are released, deploying them on edge devices for privacy-preserving applications requires effective compression. Recent works combine quantization with the fine-tuning of high-precision LoRA adapters, which can substantially reduce model size while mitigating the accuracy loss f...
Configuration-aware optimization, Pareto-base configuration search, Quantization, Fine-tuning
foundation or frontier models, including LLMs
[ 4, 6, 6, 6 ]
Accept (Poster)
Rongguang Ye, Ming Tang, Edith C. H. Ngai
~Rongguang_Ye1, ~Ming_Tang5, ~Edith_C._H._Ngai1
20250916
https://openreview.net/forum?id=9OUg0nJE72
9OUg0nJE72
@inproceedings{ ye2026onthefly, title={On-the-Fly Adaptation to Quantization: Configuration-Aware Lo{RA} for Efficient Fine-Tuning of Quantized {LLM}s}, author={Rongguang Ye and Ming Tang and Edith C. H. Ngai}, booktitle={The Fourteenth International Conference on Learning Representations}, year={2026}, url={https://op...
OpenReview/ICLR/figures/2026/accept_poster/9OUg0nJE72/Figure3.png
3
Figure 3: Illustration of configuration-aware LoRA adapters with parallel adjustment. The configurationaware model θ generates adjustment matrices I+Uθ(Ci) from the quantization configuration Ci in parallel, where I denotes the identity matrix.
<paragraph_1>Motivated by this observation, we introduce a configuration-aware model θ : R^{|Q_i|} → R^{r×r}, which maps a layer-level configuration vector Q_i to a lightweight adjustment matrix U_θ(Q_i) ∈ R^{r×r}. As shown in Fig. 3, each layer’s low-rank matrix L_{2,i} is reparameterized as (I + U_θ(Q_i))L_{2,i}, where I is the identity ma...
diagram
0.998697
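The reparameterization (I + U_θ(Q_i))L_{2,i} from the figure context can be sketched in a few lines; the rank and layer sizes are illustrative, and composing with L_1 into a full low-rank delta is an assumption about how the update is applied:

```python
import numpy as np

def adjusted_lora_delta(L1, L2, U):
    """Low-rank update with a configuration-dependent adjustment:
    L1 @ (I + U) @ L2, following the (I + U_theta(Q_i)) L_{2,i}
    reparameterization in the figure context. With U = 0 it reduces
    to the plain LoRA product L1 @ L2."""
    rank = U.shape[0]
    return L1 @ (np.eye(rank) + U) @ L2

rng = np.random.default_rng(0)
r, d_in, d_out = 4, 16, 16                  # illustrative sizes
L1 = rng.normal(size=(d_out, r))
L2 = rng.normal(size=(r, d_in))
# Zero adjustment recovers the vanilla LoRA update.
assert np.allclose(adjusted_lora_delta(L1, L2, np.zeros((r, r))), L1 @ L2)
```

The appeal sketched here is that the adjustment is only r × r, so retargeting an adapter to a new quantization configuration is cheap relative to the adapter matrices themselves.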
OpenReview
ICLR
2026
FHE-Coder: Evaluating LLM Agents for secure Fully Homomorphic Encryption Code Generation
Fully Homomorphic Encryption over the Torus (TFHE) is a cornerstone of confidential computing, yet its adoption is severely limited by a steep learning curve requiring specialized cryptographic expertise. To bridge this skills gap, we investigate the potential of Large Language Model (LLM) agents to automate the genera...
Large Language Models, Agents, Code generation, Fully Homomorphic Encryption, Retrieval Augmented Generation
alignment, fairness, safety, privacy, and societal considerations
We built a three-phase agentic framework that enables Large Language Models to automatically generate secure and functional TFHE code, bridging the expertise gap that currently limits the adoption of privacy-preserving computation.
[ 6, 4, 6 ]
Accept (Poster)
Mayank Kumar, Jiaqi Xue, Mengxin Zheng, Qian Lou
~Mayank_Kumar8, ~Jiaqi_Xue1, ~Mengxin_Zheng1, ~Qian_Lou1
20250919
https://openreview.net/forum?id=4F1py5vQXm
4F1py5vQXm
@inproceedings{ kumar2026fhecoder, title={{FHE}-Coder: Evaluating {LLM} Agents for secure Fully Homomorphic Encryption Code Generation}, author={Mayank Kumar and Jiaqi Xue and Mengxin Zheng and Qian Lou}, booktitle={The Fourteenth International Conference on Learning Representations}, year={2026}, url={https://openrevi...
OpenReview/ICLR/figures/2026/accept_poster/4F1py5vQXm/Figure4.png
4
Figure 4: An offline, human-in-the-loop process creates a dictionary mapping expert-enriched docstrings to code snippets from the TFHE documentation.
<paragraph_1>Therefore, to mitigate each of these issues, we introduce the novel agentic code generation workflow and evaluation framework as shown in Fig. 2. Our workflow is composed of three key components designed to address these specific challenges. First, the FHE Prompt Formalizer (Fig. 3) corrects structural and...
diagram
0.926502
OpenReview
ICLR
2026
PALC: Preference Alignment via Logit Calibration
Aligning Large Language Models with human preferences typically requires computationally intensive training or complex reward architectures. We introduce PALC (Preference Alignment via Logit Calibration), a parameter-efficient framework that achieves test-time alignment through a novel intervention strategy: direct cal...
AI alignment, Representation Editing
alignment, fairness, safety, privacy, and societal considerations
PALC: preference alignment via logit calibration. Learns compact calibrations for frozen LLMs, achieving strong alignment without external rewards or fine-tuning. Outperforms most test-time methods with minimal latency.
[ 6, 6, 6, 4 ]
Accept (Poster)
SANGHYUN LEE, Hoh Peter In
~SANGHYUN_LEE4, ~Hoh_Peter_In1
20250920
https://openreview.net/forum?id=0cmuYj3WeG
0cmuYj3WeG
@inproceedings{ lee2026palc, title={{PALC}: Preference Alignment via Logit Calibration}, author={SANGHYUN LEE and Hoh Peter In}, booktitle={The Fourteenth International Conference on Learning Representations}, year={2026}, url={https://openreview.net/forum?id=0cmuYj3WeG} }
OpenReview/ICLR/figures/2026/accept_poster/0cmuYj3WeG/Figure1.png
1
Figure 1: Overview of the PALC framework. Unlike conventional representation steering methods that intervene in entangled hidden spaces, PALC treats the base model’s hidden states ht strictly as a read-only context. A lightweight Calibration Module (θ) extracts essential preference signals through a bottleneck architec...
<paragraph_1>We examine how the scaling factor γ affects PALC’s performance. Figure 3 shows results for five values: γ ∈ {0.5, 1.0, 3.0, 5.0, 10.0}.</paragraph_1>
diagram
0.942897
OpenReview
ICLR
2026
Neural Predictor-Corrector: Solving Homotopy Problems with Reinforcement Learning
The Homotopy paradigm, a general principle for solving challenging problems, appears across diverse domains such as robust optimization, global optimization, polynomial root-finding, and sampling. Practical solvers for these problems typically follow a predictor-corrector (PC) structure, but rely on hand-crafted heuris...
Homotopy System, Graduated optimization, Reinforcement Learning, Polynomial Equitions System, Gaussian Homotopy, Sampling
applications to computer vision, audio, language, and other modalities
[ 6, 6, 4 ]
Accept (Poster)
Jiayao Mai, Bangyan Liao, Zhenjun Zhao, Yingping Zeng, Haoang Li, Javier Civera, Tailin Wu, Yi Zhou, Peidong Liu
~Jiayao_Mai3, ~Bangyan_Liao1, ~Zhenjun_Zhao1, ~Yingping_Zeng1, ~Haoang_Li1, ~Javier_Civera1, ~Tailin_Wu1, ~Yi_Zhou27, ~Peidong_Liu3
20250905
https://openreview.net/forum?id=x6iodYWNty
x6iodYWNty
@inproceedings{ mai2026neural, title={Neural Predictor-Corrector: Solving Homotopy Problems with Reinforcement Learning}, author={Jiayao Mai and Bangyan Liao and Zhenjun Zhao and Yingping Zeng and Haoang Li and Javier Civera and Tailin Wu and Yi Zhou and Peidong Liu}, booktitle={The Fourteenth International Conference ...
OpenReview/ICLR/figures/2026/accept_poster/x6iodYWNty/Figure2.png
2
Figure 2: Illustration of the Predictor-Corrector algorithm. Predictor proposes the next level and provides an initial solution estimate, while Corrector iteratively refines this estimate to project it back onto the solution trajectory. Orange curve denotes the implicit solution trajectory, as in Fig. 1.
<paragraph_1>While the homotopy paradigm specifies the abstract principle, an effective algorithm is needed to trace the implicit solution trajectory in practice. The PC method (Allgower & Georg, 2012) provides such a concrete algorithmic framework. As shown in Fig. 2, PC decomposes trajectory tracking into two complem...
diagram
0.881063
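The predictor-corrector loop described in the figure context can be sketched for a scalar convex homotopy H(x, t) = (1 − t)(x − x_0) + t·f(x); the specific homotopy and the Newton corrector are standard illustrative choices, not the paper's learned components:

```python
def predictor_corrector(f, df, x0, steps=10, newton_iters=20):
    """Scalar predictor-corrector sketch for the convex homotopy
    H(x, t) = (1 - t) * (x - x0) + t * f(x). The predictor advances
    the homotopy level t and reuses the current solution as the
    initial estimate; the corrector runs Newton iterations to project
    it back onto the solution trajectory H(., t) = 0."""
    x = x0
    for k in range(1, steps + 1):
        t = k / steps                      # predictor: next level
        for _ in range(newton_iters):      # corrector: Newton steps
            H = (1 - t) * (x - x0) + t * f(x)
            dH = (1 - t) + t * df(x)
            x -= H / dH
    return x

# Trace a root of f(x) = x^2 - 2 from the trivial solution x0 = 1.0
root = predictor_corrector(lambda x: x * x - 2, lambda x: 2 * x, 1.0)
```

At t = 1 the homotopy reduces to f itself, so `root` converges to √2; per the abstract, the paper's contribution is to replace such hand-crafted schedules with components learned by reinforcement learning.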
OpenReview
ICLR
2026
CLUE: Conflict-guided Localization for LLM Unlearning Framework
The LLM unlearning aims to eliminate the influence of undesirable data without affecting causally unrelated information. This process typically involves using a **forget set** to remove target information, alongside a **retain set** to maintain non-target capabilities. While recent localization-based methods demonstrat...
LLM unlearning, circuit discovery, conjunctive normal form, interpretability
foundation or frontier models, including LLMs
We use circuit discovery and CNF solving to design the localization for forget neurons and retain neurons in the LLM unlearning task.
[ 6, 6, 4, 2 ]
Accept (Poster)
Hang Chen, Jiaying Zhu, Xinyu Yang, Wenya Wang
~Hang_Chen3, ~Jiaying_Zhu5, ~Xinyu_Yang2, ~Wenya_Wang1
20250901
https://openreview.net/forum?id=jtRYvazBWv
jtRYvazBWv
@inproceedings{ chen2026clue, title={{CLUE}: Conflict-guided Localization for {LLM} Unlearning Framework}, author={Hang Chen and Jiaying Zhu and Xinyu Yang and Wenya Wang}, booktitle={The Fourteenth International Conference on Learning Representations}, year={2026}, url={https://openreview.net/forum?id=jtRYvazBWv} }
OpenReview/ICLR/figures/2026/accept_poster/jtRYvazBWv/Figure2.png
2
Figure 2: Overview from datasets to localization.
<paragraph_1>In this section, we provide a three-step framework of how circuit discovery ultimately enables precise localization. An overview of our localization procedure is shown in Figure 2. Specifically,</paragraph_1>
diagram
0.850337
OpenReview
ICLR
2,026
Latent Geometry-Driven Network Automata for Complex Network Dismantling
Complex networks model the structure and function of critical technological, biological, and communication systems. Network dismantling, the targeted removal of nodes to fragment a network, is essential for analyzing and improving system robustness. Existing dismantling methods suffer from key limitations: they depend ...
network robustness, network dismantling, network geometry, network science, complex systems, network automata, graphs, network topology
learning on graphs and other geometries & topologies
Latent Geometry-Driven Network Automata dismantles networks by estimating effective link distances on the latent manifold via local rules, outperforming all existing methods on 1,475 real-world networks and runs efficiently on large systems via GPU.
[ 4, 2, 6, 6 ]
Accept (Poster)
Thomas Adler, Marco Grassia, Ziheng Liao, Giuseppe Mangioni, Carlo Vittorio Cannistraci
~Thomas_Adler2, ~Marco_Grassia1, ~Ziheng_Liao1, ~Giuseppe_Mangioni1, ~Carlo_Vittorio_Cannistraci1
20250918
https://openreview.net/forum?id=yz29QCGVzC
yz29QCGVzC
@inproceedings{ adler2026latent, title={Latent Geometry-Driven Network Automata for Complex Network Dismantling}, author={Thomas Adler and Marco Grassia and Ziheng Liao and Giuseppe Mangioni and Carlo Vittorio Cannistraci}, booktitle={The Fourteenth International Conference on Learning Representations}, year={2026}, ur...
OpenReview/ICLR/figures/2026/accept_poster/yz29QCGVzC/Figure1.png
1
Figure 1: Overview of the LGD Network Automata framework. A: Begin with an unweighted and undirected network. B: Estimate latent geometry by assigning a weight νij to each edge between nodes i and j using local latent geometry estimators. C: Construct a dissimilarity-weighted network based on these weights. D: Compute ...
<paragraph_1>We introduce the Latent Geometry-Driven Network Automata (LGD-NA) framework. LGD-NA adopts a parameter-free network automaton rule, such as RA2, to estimate latent geometric linked node pairwise distances and to assign edge weights based on these geometric distances. Then, it computes for each node its net...
diagram
0.976884
OpenReview
ICLR
2,026
Accelerated co-design of robots through morphological pretraining
The co-design of robot morphology and neural control typically requires using reinforcement learning to approximate a unique control policy gradient for each body plan, demanding massive amounts of training data to measure the performance of each design. Here we show that a universal, morphology-agnostic controller can...
robot co-design, universal control, differentiable simulation, embodied intelligence
applications to robotics, autonomy, planning
[ 2, 6, 6 ]
Accept (Poster)
Luke Strgar, Sam Kriegman
~Luke_Strgar1, ~Sam_Kriegman1
20250919
https://openreview.net/forum?id=WVliGyFwZv
WVliGyFwZv
@inproceedings{ strgar2026accelerated, title={Accelerated co-design of robots through morphological pretraining}, author={Luke Strgar and Sam Kriegman}, booktitle={The Fourteenth International Conference on Learning Representations}, year={2026}, url={https://openreview.net/forum?id=WVliGyFwZv} }
OpenReview/ICLR/figures/2026/accept_poster/WVliGyFwZv/Figure2.png
2
Figure 2: Overview of the proposed method. End-to-end differentiable policy training across tens of millions of morphologically distinct robots—morphological pretraining—produces a universal controller, which was kept frozen throughout zero-shot evolution and finetuned for each generation of few-shot evolution.
<paragraph_1>Inspired by the remarkable success of large-scale pretrained models in computer vision and natural language processing, we here pretrain a universal controller across millions of complex body plans using gradient information from differentiable simulation, averaging gradients across variations in the robot...
diagram
0.924178
OpenReview
ICLR
2,026
Automatic and Structure-Aware Sparsification of Hybrid Neural ODEs with Application to Glucose Prediction
Hybrid neural ordinary differential equations (neural ODEs) integrate mechanistic models with neural ODEs, offering strong inductive bias and flexibility, and are particularly advantageous in data-scarce healthcare settings. However, excessive latent states and interactions from mechanistic models can lead to training ...
Predictive Sparsity, Hybrid Neural ODE, Group LASSO, Glucose Prediction
applications to physical sciences (physics, chemistry, biology, etc.)
[ 4, 6, 4, 8 ]
Accept (Poster)
Bob Junyi Zou, Lu Tian
~Bob_Junyi_Zou1, ~Lu_Tian4
20250918
https://openreview.net/forum?id=QBzFrjEF59
QBzFrjEF59
@inproceedings{ zou2026automatic, title={Automatic and Structure-Aware Sparsification of Hybrid Neural {ODE}s with Application to Glucose Prediction}, author={Bob Junyi Zou and Lu Tian}, booktitle={The Fourteenth International Conference on Learning Representations}, year={2026}, url={https://openreview.net/forum?id=QB...
OpenReview/ICLR/figures/2026/accept_poster/QBzFrjEF59/Figure5.png
5
Figure 5: An illustration of the mechanistic vs true graphs used in the synthetic experiments
<paragraph_1>In figure 5, we provide an illustration of the mechanistic graph used in the synthetic experiments.</paragraph_1>
diagram
0.92587
OpenReview
ICLR
2,026
Tractability via Low Dimensionality: The Parameterized Complexity of Training Quantized Neural Networks
The training of neural networks has been extensively studied from both algorithmic and complexity-theoretic perspectives, yet recent results in this direction almost exclusively concern real-valued networks. In contrast, advances in machine learning practice highlight the benefits of quantization, where network paramet...
treewidth, parameterized complexity, quantized neural networks, ReLU networks
learning theory
We study the classical and parameterized complexity of training quantized neural networks and obtain new upper as well as lower bounds for the problem.
[ 6, 8, 6 ]
Accept (Poster)
Robert Ganian, Frank Sommer, Manuel Sorge
~Robert_Ganian1, ~Frank_Sommer1, ~Manuel_Sorge1
20250918
https://openreview.net/forum?id=BAQNrsr987
BAQNrsr987
@inproceedings{ ganian2026tractability, title={Tractability via Low Dimensionality: The Parameterized Complexity of Training Quantized Neural Networks}, author={Robert Ganian and Frank Sommer and Manuel Sorge}, booktitle={The Fourteenth International Conference on Learning Representations}, year={2026}, url={https://op...
OpenReview/ICLR/figures/2026/accept_poster/BAQNrsr987/Figure4.png
4
Figure 4: An illustration of the reduction behind Theorem 3 for the universe U = [6] and the set family F with sets S1 = {1, 4, 5}, S2 = {2, 3}, S3 = {1, 6}, S4 = {2, 5}, S5 = {3, 5}, S6 = {6} and k = 3 and with a hitting set S = {2, 5, 6}. In the solution corresponding to S, inputs p1, p2 and p3 are associated with el...
<paragraph_1>We construct an equivalent instance I of 2-QNNT as follows; see Figure 4 for an illustration. Description of architecture G. We create two input neurons z1 and z2. For each of the two literals</paragraph_1> <paragraph_2>Construction. We construct an instance I of 2-QNNT as follows. For an illustration, see...
diagram
0.90793
OpenReview
ICLR
2,026
Constrained Decoding of Diffusion LLMs with Context-Free Grammars
Large language models (LLMs) have shown promising performance across diverse domains. Many practical applications of LLMs, such as code completion and structured data extraction, require adherence to syntactic constraints specified by a formal language. Yet, due to their probabilistic nature, LLM output is not guarante...
diffusion llm, constrained decoding, llm, code generation, json, multi-region infilling, fill in the middle, code synthesis
generative models
We reduce constrained decoding for generalized code generation paradigms to an operation on formal languages, enabling constrained decoding for infilling and diffusion LLMs.
[ 4, 8, 6, 4 ]
Accept (Poster)
Niels Mündler, Jasper Dekoninck, Martin Vechev
~Niels_Mündler1, ~Jasper_Dekoninck1, ~Martin_Vechev1
20250916
https://openreview.net/forum?id=7Sph4KyeYO
7Sph4KyeYO
@inproceedings{ mundler2026constrained, title={Constrained Decoding of Diffusion {LLM}s with Context-Free Grammars}, author={Niels M{\"u}ndler and Jasper Dekoninck and Martin Vechev}, booktitle={The Fourteenth International Conference on Learning Representations}, year={2026}, url={https://openreview.net/forum?id=7Sph4...
OpenReview/ICLR/figures/2026/accept_poster/7Sph4KyeYO/Figure3.png
3
Figure 3: Examples of Figures 1 and 4 processed during our method. (a) The grammar is first normalized into C2F+ε, and (b) the NFA is transformed into a minimal DFA. (c) To determine
<paragraph_1>Constructing the regular language The language Cx of all possible completions of x = x1 . . . xn contains all words that start with x1, end with xn, and contain the strings xi (1 ≤i ≤n) in the correct order, with arbitrary symbols in between. We prove that Cx is regular by constructing an NFA that accepts ...
diagram
0.965765
OpenReview
ICLR
2,026
Grounding Generative Planners in Verifiable Logic: A Hybrid Architecture for Trustworthy Embodied AI
While Large Language Models (LLMs) show immense promise as planners for embodied AI, their stochastic nature and lack of formal reasoning capabilities prevent the strict safety guarantees required for physical deployment. Current approaches fall short: they either rely on other unreliable LLMs for safety checks or simp...
neurosymbolic AI, hybrid AI, formal reasoning, large language models, AI safety, verifiable AI, embodied AI, robotics
neurosymbolic & hybrid AI systems (physics-informed, logic & formal reasoning, etc.)
We propose a hybrid neuro-symbolic architecture where a formal logic verifier tutors an LLM planner, enabling the generation of verifiably safe plans for embodied agents.
[ 4, 2, 6, 4 ]
Accept (Poster)
Feiyu Wu, Xu Zheng, Yue Qu, Zhuocheng Wang, Zicheng Feng, HUI LI
~Feiyu_Wu1, ~Xu_Zheng1, ~Yue_Qu4, ~Zhuocheng_Wang1, ~Zicheng_Feng1, ~HUI_LI17
20250916
https://openreview.net/forum?id=wb05ver1k8
wb05ver1k8
@inproceedings{ wu2026grounding, title={Grounding Generative Planners in Verifiable Logic: A Hybrid Architecture for Trustworthy Embodied {AI}}, author={Feiyu Wu and Xu Zheng and Yue Qu and Zhuocheng Wang and Zicheng Feng and HUI LI}, booktitle={The Fourteenth International Conference on Learning Representations}, year...
OpenReview/ICLR/figures/2026/accept_poster/wb05ver1k8/Figure1.png
1
Figure 1: The architecture of the Verifiable Iterative Refinement Framework (VIRF). Instead of direct execution, an LLM planner’s actions are verified in a symbolic sandbox against a formal knowledge base. The framework’s core is the Logic Tutor feedback loop, which provides three distinct responses: approval for safe ...
<paragraph_1>Our work introduces the Verifiable Iterative Refinement Framework (VIRF), a novel neurosymbolic architecture designed to govern a generative Large Language Model (LLM) planner. At its core, VIRF transforms the interaction between the stochastic LLM and a deterministic symbolic verifier from a simple pass/f...
diagram
0.91071
OpenReview
ICLR
2,026
Characterizing and Optimizing the Spatial Kernel of Multi Resolution Hash Encodings
Multi-Resolution Hash Encoding (MHE), the foundational technique behind Instant Neural Graphics Primitives, provides a powerful parameterization for neural fields. However, its spatial behavior lacks rigorous understanding from a physical systems perspective, leading to reliance on heuristics for hyperparameter selecti...
multi-resolution hash encoding, implicit neural representations, neural fields, point spread function, spatial kernel analysis, anisotropy, resolution limit, FWHM, hash collisions, signal-to-noise ratio, NeRF
applications to computer vision, audio, language, and other modalities
We analyze Multi-Resolution Hash Encoding (MHE) using its Point Spread Function (PSF) to reveal that effective resolution is governed by average, not finest, resolution, and introduce Rotated MHE to mitigate inherent anisotropy and collision noise.
[ 4, 6, 6, 4 ]
Accept (Poster)
Tianxiang Dai, Jonathan Fan
~Tianxiang_Dai1, ~Jonathan_Fan1
20250920
https://openreview.net/forum?id=q05hC1Pzkr
q05hC1Pzkr
@inproceedings{ dai2026characterizing, title={Characterizing and Optimizing the Spatial Kernel of Multi Resolution Hash Encodings}, author={Tianxiang Dai and Jonathan Fan}, booktitle={The Fourteenth International Conference on Learning Representations}, year={2026}, url={https://openreview.net/forum?id=q05hC1Pzkr} }
OpenReview/ICLR/figures/2026/accept_poster/q05hC1Pzkr/Figure1.png
1
Figure 1: Overview of MHE Characterization and Optimization. (a) The MHE architecture utilizes L grid levels with resolutions growing by a factor b. The encoding e(x) is passed to an MLP gθ. We characterize the system by optimizing for a point constraint and measuring the resulting Point Spread Function (PSF). (b) This...
<paragraph_1>In this work, we introduce a novel methodology to characterize and understand the performance of MHE by analyzing its Point Spread Function (PSF). Analogous to measuring the Green’s function of a physical system, the PSF characterizes the model’s response when optimized to represent an idealized point sour...
diagram
0.984853
OpenReview
ICLR
2,026
CaTs and DAGs: Integrating Directed Acyclic Graphs with Transformers for Causally Constrained Predictions
Artificial Neural Networks (ANNs), including fully-connected networks and transformers, are highly flexible and powerful function approximators, widely applied in fields like computer vision and natural language processing. However, their inability to inherently respect causal structures can limit their robustness, mak...
transformers, causal inference, causality, inductive bias, DAGs
causal reasoning
Causal Transformers (CaTs) are neural networks constrained by a causal DAG, combining the power of standard ANNs with improved robustness to covariate shift, greater reliability, and interpretability for real-world applications.
[ 4, 6, 4 ]
Accept (Poster)
Matthew James Vowels, Mathieu Rochat, Sina Akbari
~Matthew_James_Vowels1, ~Mathieu_Rochat1, ~Sina_Akbari1
20250910
https://openreview.net/forum?id=ZIQactmQxb
ZIQactmQxb
@inproceedings{ vowels2026cats, title={CaTs and {DAG}s: Integrating Directed Acyclic Graphs with Transformers for Causally Constrained Predictions}, author={Matthew James Vowels and Mathieu Rochat and Sina Akbari}, booktitle={The Fourteenth International Conference on Learning Representations}, year={2026}, url={https:...
OpenReview/ICLR/figures/2026/accept_poster/ZIQactmQxb/Figure8.png
8
Figure 8: The DAG used in the real-world psychology example - reconstructed from the causal discovery and domain expertise results presented in (Vowels et al., 2023a). Treatment is attachment style ’attachment’ (also highlighted in orange) and the two outcomes of interest at the measures of depression (highlighted in g...
<paragraph_1>We follow closely the process in (Vowels et al., 2023a) for estimating the causal effect of shifting from one category of attachment style to another on depression. We also report the results for a subset of their analyses in Table 3, which use a ‘naive’ estimator (comprising the bivariate linear model bet...
diagram
0.913085
OpenReview
ICLR
2,026
A.I.R.: Enabling Adaptive, Iterative, and Reasoning-based Frame Selection For Video Question Answering
Effectively applying Vision-Language Models (VLMs) to Video Question Answering (VideoQA) hinges on selecting a concise yet comprehensive set of frames, as processing entire videos is computationally infeasible. However, current frame selection methods face a critical trade-off: approaches relying on lightweight similar...
Video Frame Selection, Vision Language Model, Training-Free, Video understanding
applications to computer vision, audio, language, and other modalities
[ 6, 4, 6, 4 ]
Accept (Poster)
Yuanhao Zou, Shengji Jin, Andong Deng, Youpeng Zhao, Jun Wang, Chen Chen
~Yuanhao_Zou1, ~Shengji_Jin1, ~Andong_Deng2, ~Youpeng_Zhao2, ~Jun_Wang7, ~Chen_Chen18
20250902
https://openreview.net/forum?id=SZVpOKw0YD
SZVpOKw0YD
@inproceedings{ zou2026air, title={A.I.R.: Enabling Adaptive, Iterative, and Reasoning-based Frame Selection For Video Question Answering}, author={Yuanhao Zou and Shengji Jin and Andong Deng and Youpeng Zhao and Jun Wang and Chen Chen}, booktitle={The Fourteenth International Conference on Learning Representations}, y...
OpenReview/ICLR/figures/2026/accept_poster/SZVpOKw0YD/Figure2.png
2
Figure 2: General pipeline of A.I.R. with three stages: (1) Adaptive Initial Sampling that identifies potential ‘events’ based on query similarity and dynamically samples frames around them using an adaptive budget; (2) Iterative Frame Selection that progressively refines the frame selection via four steps; and (3) QA ...
<paragraph_1>As illustrated in Fig. 2, our proposed approach, A.I.R., performs frame selection in three stages: Adaptive Initial Sampling, Iterative Frame Selection, and QA Stage. The process begins by sampling n frames from the video (containing N total frames) at a fixed frame rate. As a pre-processing step, these n ...
diagram
0.968053
OpenReview
ICLR
2,026
Amortising Inference and Meta-Learning Priors in Neural Networks
One of the core facets of Bayesianism is in the updating of prior beliefs in light of new evidence$\textemdash$so how can we maintain a Bayesian approach if we have no prior beliefs in the first place? This is one of the central challenges in the field of Bayesian deep learning, where it is not clear how to represent b...
neural processes, Bayesian neural networks, meta-learning, priors, variational inference
probabilistic methods (Bayesian methods, variational inference, sampling, UQ, etc.)
[ 4, 6, 4, 6 ]
Accept (Poster)
Tommy Rochussen, Vincent Fortuin
~Tommy_Rochussen1, ~Vincent_Fortuin1
20250919
https://openreview.net/forum?id=KG6SSTz2GJ
KG6SSTz2GJ
@inproceedings{ rochussen2026amortising, title={Amortising Inference and Meta-Learning Priors in Neural Networks}, author={Tommy Rochussen and Vincent Fortuin}, booktitle={The Fourteenth International Conference on Learning Representations}, year={2026}, url={https://openreview.net/forum?id=KG6SSTz2GJ} }
OpenReview/ICLR/figures/2026/accept_poster/KG6SSTz2GJ/Figure9.png
9
Figure 9: Computational diagrams of the amortised attention layer (a), amortised attention block (b), and BNAM (c). Due to the numerous crossing lines in (a), we colour code the context and target input data paths as orange and light blue respectively. Arbitrarily many amortised attention blocks can be stacked sequenti...
<paragraph_1>We see in Fig. 9(a) that amortised inference can be performed in an attention layer by using amortised linear layers in place of standard linear layers, where MHA is the usual multi-head dot-product attention mechanism acting on keys K, queries Q, and values V. Similarly, in Fig. 9(b) we follow the standar...
diagram
0.988838
OpenReview
ICLR
2,026
DETR-ViP: Detection Transformer with Robust Discriminative Visual Prompts
Visual prompted object detection enables interactive and flexible definition of target categories, thereby facilitating open-vocabulary detection. Since visual prompts are derived directly from image features, they often outperform text prompts in recognizing rare categories. Nevertheless, research on visual prompted d...
object detection, prompt-based detection, open-set object detection
applications to computer vision, audio, language, and other modalities
This paper presents the DETR-ViP framework, which enhances visual prompt detection by improving the semantic consistency of visual prompts and introducing a selective fusion strategy.
[ 6, 4, 6 ]
Accept (Poster)
Bo Qian, Dahu Shi, Xing Wei
~Bo_Qian1, ~Dahu_Shi2, ~Xing_Wei5
20250903
https://openreview.net/forum?id=2KKDWERRm3
2KKDWERRm3
@inproceedings{ qian2026detrvip, title={{DETR}-ViP: Detection Transformer with Robust Discriminative Visual Prompts}, author={Bo Qian and Dahu Shi and Xing Wei}, booktitle={The Fourteenth International Conference on Learning Representations}, year={2026}, url={https://openreview.net/forum?id=2KKDWERRm3} }
OpenReview/ICLR/figures/2026/accept_poster/2KKDWERRm3/Figure2.png
2
Figure 2: The overview of DETR-ViP. DETR-ViP builds on Grounding DINO by incorporating a visual prompt encoder for visual-prompted detection. It improves prompt semantics via global prompt Integration and visual-textual prompt relation distillation, and refines the fusion module to stabilize image-prompt interactions, ...
<paragraph_1>We develop the baseline VIS-GDINO from Grounding DINO by inserting the visual prompt encoder, as defined in Equation (3), between the backbone and the encoder, and removing the fusion modules in the encoder and decoder as represented in Equation (2). On top of this architecture, we introduce the global pro...
diagram
0.991753
OpenReview
ICLR
2,026
When Large Multimodal Models Confront Evolving Knowledge: Challenges and Explorations
Large Multimodal Models (LMMs) store vast amounts of pretrained knowledge but struggle to remain aligned with real-world updates, making it difficult to avoid capability degradation when acquiring evolving knowledge. Furthermore, most current work focuses on exploring static textual knowledge injection, neglecting dyna...
Evolving Knowledge Injection; Large multimodal model; Benchmark and Dataset
datasets and benchmarks
This work introduces MMEVOKE benchmark to reveal challenges in knowledge injection and explores potential solutions.
[ 6, 6, 4, 8 ]
Accept (Poster)
Kailin Jiang, Yuntao Du, Yukai Ding, Yuchen Ren, Ning Jiang, Zhi Gao, Zilong Zheng, Lei Liu, Bin Li, Qing Li
~Kailin_Jiang1, ~Yuntao_Du2, ~Yukai_Ding2, ~Yuchen_Ren1, ~Ning_Jiang7, ~Zhi_Gao5, ~Zilong_Zheng1, ~Lei_Liu28, ~Bin_Li8, ~Qing_Li1
20250901
https://openreview.net/forum?id=iaPEM00wEs
iaPEM00wEs
@inproceedings{ jiang2026when, title={When Large Multimodal Models Confront Evolving Knowledge: Challenges and Explorations}, author={Kailin Jiang and Yuntao Du and Yukai Ding and Yuchen Ren and Ning Jiang and Zhi Gao and Zilong Zheng and Lei Liu and Bin Li and Qing Li}, booktitle={The Fourteenth International Conferen...
OpenReview/ICLR/figures/2026/accept_poster/iaPEM00wEs/Figure25.png
25
Figure 25: Fine-grained dimensional results on MathVision and HallusionBench.
<paragraph_1>According to Figures 22, 23, 24, 25, and 26, we conduct result analysis for each benchmark.</paragraph_1>
diagram
0.915522
End of preview. Expand in Data Studio

DiagramBank

DiagramBank is a large-scale dataset designed for Retrieval-Augmented Generation (RAG) on scientific figures. It aggregates papers and their corresponding diagrams from top machine-learning venues (the ICLR, ICML, and NeurIPS conferences and the TMLR journal), providing rich metadata including review scores, acceptance decisions, and figure captions.

Dataset Structure

The dataset is provided as a single JSONL file (data.jsonl). Each row represents a specific figure extracted from a paper.

Data Fields

Field Description
platform Source platform (e.g., OpenReview).
venue Conference venue (ICLR, ICML, NeurIPS, TMLR).
year Year of the conference venue.
title Title of the research paper.
abstract Full abstract of the paper.
keywords Comma-separated list of keywords provided by authors.
areas Primary subject areas (e.g., "Deep Learning", "Optimization").
tldr One-sentence summary ("Too Long; Didn't Read").
scores List of reviewer scores (integers).
decision Final decision for the paper (e.g., Accept, Reject).
authors Comma-separated list of author names.
author_ids Comma-separated author IDs on the source platform.
cdate Creation date of the record (YYYYMMDD).
url Direct URL to the paper on the source platform.
platform_id Unique identifier for the paper on the platform.
bibtex BibTeX citation entry for the paper.
figure_number The figure's index within the paper; the figure is referred to as "Figure<figure_number>".
figure_path Relative path to the raw image file in the accompanying archives.
figure_caption The caption text associated with the figure.
figure_context Paragraphs from the paper that explicitly reference this figure.
figure_type Classification of the image (e.g., "diagram").
confidence Confidence score of the figure classification.
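Each line of data.jsonl is one figure record with the fields above, so the file can also be read directly with the standard library. A minimal sketch (the single record below is illustrative, not a real row; reading the real file works the same way by passing open("data.jsonl") instead of the in-memory stand-in):

```python
import io
import json

# Illustrative stand-in for data.jsonl: one figure record per line (not a real row).
fake_jsonl = io.StringIO(json.dumps({
    "platform": "OpenReview",
    "venue": "ICLR",
    "year": 2026,
    "title": "An Example Paper",
    "scores": [4, 6, 6],
    "decision": "Accept (Poster)",
    "figure_number": "2",
    "figure_path": "OpenReview/ICLR/figures/2026/accept_poster/xxxx/Figure2.png",
    "figure_caption": "Figure 2: Overview of the method.",
    "figure_type": "diagram",
}) + "\n")

# Each JSONL line parses to a plain dict keyed by the field names above.
records = [json.loads(line) for line in fake_jsonl]
first = records[0]
print(first["venue"], first["year"], first["figure_path"])
```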

Usage

Loading the Data

from datasets import load_dataset

# This will automatically load 'data.jsonl' as the train split
dataset = load_dataset("zhangt20/DiagramBank", split="train")

# Example: Access the first figure's caption
print(dataset[0]['figure_caption'])
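Because each record carries the full review metadata, simple filters are easy to express, either via `datasets.Dataset.filter` or, as sketched below, over plain dicts. The two example records and the score threshold of 5 are arbitrary choices for illustration:

```python
from statistics import mean

# Two illustrative records with only the fields used here (not real rows).
records = [
    {"title": "Paper A", "decision": "Accept (Poster)", "scores": [6, 6, 8],
     "figure_caption": "Figure 1: Overview."},
    {"title": "Paper B", "decision": "Reject", "scores": [2, 4, 4],
     "figure_caption": "Figure 3: Ablations."},
]

# Keep figures from accepted papers whose mean reviewer score clears a threshold.
def high_scoring_accepts(rows, threshold=5.0):
    return [r for r in rows
            if r["decision"].startswith("Accept")
            and r["scores"]                      # guard: scores may be empty
            and mean(r["scores"]) >= threshold]

selected = high_scoring_accepts(records)
print([r["title"] for r in selected])  # -> ['Paper A']
```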

Loading the figures

You can download the diagrams and automatically reconstruct the folder structure using our setup script.

# Download the setup script directly from this repo
wget https://huggingface.co/datasets/zhangt20/DiagramBank/resolve/main/download_diagrambank.py

# Set the target folder using the FIG_RAG_DIR environment variable
export FIG_RAG_DIR=<a scratch folder with at least 60 GB of space>

# 1. Default: download Accepted papers + Core files (DBs/FAISS); ~60GB of diagrams
python download_diagrambank.py

# 2. Download Everything: All papers (Accept + Reject) + Core files
# python download_diagrambank.py --subset all

# 3. Download Rejected papers only + Core files
# python download_diagrambank.py --subset reject

# 4. Skip Core Files: Download only images (no DBs or FAISS)
# python download_diagrambank.py --no-core

# 5. Combine Flags: Download all images but skip core files
# python download_diagrambank.py --subset all --no-core

For a more detailed usage, see https://github.com/csml-rpi/DiagramBank
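The figure_path field in data.jsonl is relative. Assuming the setup script reproduces that folder structure under FIG_RAG_DIR, as described above (the exact layout may differ; check the GitHub repo), a record's image file can be located like this. The record and its "xxxx" path segment are illustrative placeholders:

```python
import os
from pathlib import Path

# Illustrative record; figure_path is relative to the download root.
record = {"figure_path": "OpenReview/ICLR/figures/2026/accept_poster/xxxx/Figure2.png"}

# FIG_RAG_DIR is the folder the download script was pointed at (default: cwd).
root = Path(os.environ.get("FIG_RAG_DIR", "."))
image_file = root / record["figure_path"]

print(image_file)
if image_file.exists():
    print("found:", image_file.stat().st_size, "bytes")
else:
    print("not downloaded yet")
```

From here the image can be opened with any library you prefer, e.g. Pillow's `Image.open(image_file)` if it is installed.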
