Column schema (field: type, value or length range):
id: string (length 10)
number: int64 (1 to 25.6k)
forum: string (length 10)
title: string (length 5 to 214)
abstract: string (length 26 to 4.31k)
content_TLDR: string (length 1 to 250)
content_keywords: string (length 6 to 1.02k)
content_pdf: string (length 49)
content_primary_area: string (21 classes)
content_supplementary_material: string (length 56)
signatures: string (length 47 to 51)
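Each record below repeats the eleven fields above in schema order, one value per line. A minimal sketch of how one flattened record could be grouped back into a dict, assuming fields always appear in that order and the literal string "null" marks a missing value (the helper name `parse_records` is illustrative, not part of any dataset API; the sample values are taken from the first record in this dump, with long text fields elided):

```python
FIELDS = [
    "id", "number", "forum", "title", "abstract", "content_TLDR",
    "content_keywords", "content_pdf", "content_primary_area",
    "content_supplementary_material", "signatures",
]

def parse_records(lines):
    """Group a flat list of field values into per-record dicts.

    Assumes each record contributes exactly len(FIELDS) consecutive
    lines and that the literal string "null" encodes a missing value.
    """
    records = []
    for start in range(0, len(lines), len(FIELDS)):
        chunk = lines[start:start + len(FIELDS)]
        if len(chunk) < len(FIELDS):
            break  # trailing partial record
        rec = {k: (None if v == "null" else v) for k, v in zip(FIELDS, chunk)}
        # "number" is formatted with thousands separators, e.g. "25,459"
        rec["number"] = int(rec["number"].replace(",", ""))
        records.append(rec)
    return records

# Sample: the leading fields of the first record in this dump.
sample = [
    "qfeSpu5FBE", "25,459", "qfeSpu5FBE",
    "Treating Neural Image Compression via Modular Adversarial Optimization: "
    "From Global Distortion to Local Artifacts",
    "The rapid progress in neural image compression (NIC) ...",
    "We propose a modular adversarial attack on neural image codecs ...",
    "['Adversarial Robustness', 'Neural Image Compression', 'Adversarial Attacks']",
    "/pdf/c27e2a1d745c3bc20af3e220ea1164b7e312a2d3.pdf",
    "alignment, fairness, safety, privacy, and societal considerations",
    "null",
    "['ICLR.cc/2026/Conference/Submission25459/Authors']",
]
recs = parse_records(sample)
```

The keyword and signature fields remain Python-literal strings here; a stricter loader could parse them with `ast.literal_eval`.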
qfeSpu5FBE
25,459
qfeSpu5FBE
Treating Neural Image Compression via Modular Adversarial Optimization: From Global Distortion to Local Artifacts
The rapid progress in neural image compression (NIC) has led to the deployment of advanced codecs, such as JPEG AI, which significantly outperform conventional approaches. However, despite extensive research on the adversarial robustness of neural networks in various computer vision tasks, the vulnerability of NIC models t...
We propose a modular adversarial attack on neural image codecs that reduces compression quality both for the entire image and in local areas to improve effectiveness, and filters noise to stay imperceptible.
['Adversarial Robustness', 'Neural Image Compression', 'Adversarial Attacks']
/pdf/c27e2a1d745c3bc20af3e220ea1164b7e312a2d3.pdf
alignment, fairness, safety, privacy, and societal considerations
null
['ICLR.cc/2026/Conference/Submission25459/Authors']
4TFfiG17ec
25,458
4TFfiG17ec
Thanos: A Block-wise Pruning Algorithm for Efficient Large Language Model Compression
This paper presents Thanos, a novel weight-pruning algorithm designed to reduce the memory footprint and enhance the computational efficiency of large language models (LLMs) by removing redundant weights while maintaining accuracy. Thanos introduces a block-wise pruning strategy with adaptive masks that dynamically adj...
We developed a novel pruning method for LLMs that compresses matrices in a block-wise manner.
['LLM Compression', 'Pruning', 'Wanda', 'SparseGPT', 'Deep Learning', 'AI']
/pdf/9fca108dcf3ca3b5fe5f807bf88eeb2de0f5a57b.pdf
foundation or frontier models, including LLMs
/attachment/53f3e08859a3fdf60d993a1573cabd2e9812d653.zip
['ICLR.cc/2026/Conference/Submission25458/Authors']
f9cYLpakOI
25,457
f9cYLpakOI
Endogenous Communication in Repeated Games with Learning Agents
Communication among learning agents often emerges without explicit supervision. We study endogenous protocol formation in infinitely repeated stage games with a costless pre-play channel. Each agent has a representation map that compresses private signals into messages subject to an information budget. Agents update st...
We show when cheap-talk communication learned by agents in repeated games is predictive, incentive-compatible, and sample-efficient, giving tight conditions for stable emergent protocols.
['multi-agent learning', 'repeated games', 'cheap talk', 'communication', 'information bottleneck', 'equilibrium', 'representation learning']
/pdf/f066855147b7f8a0cc58eb0f08fc0e64b7bf487c.pdf
learning theory
null
['ICLR.cc/2026/Conference/Submission25457/Authors']
QWopGahUEL
25,452
QWopGahUEL
Games for AI Control: Models of Safety Evaluations of AI Deployment Protocols
To evaluate the safety and usefulness of deployment protocols for untrusted AIs, AI Control uses a red-teaming exercise played between a protocol designer and an adversary. This paper introduces AI-Control Games, a formal decision-making model of the red-teaming exercise as a multi-objective, partially observable, stoc...
We introduce a game-theoretic framework for modelling AI Control evaluations, and synthesising protocols.
['Partially Observable Stochastic Games', 'AI Control', 'AI Evaluations', 'Safeguards', 'Game theory']
/pdf/7c6bf24da5832b676ea37c1f217c451e5d09b73a.pdf
alignment, fairness, safety, privacy, and societal considerations
/attachment/2ecbbd78b6374b506c760f201ef667c983b87fa8.zip
['ICLR.cc/2026/Conference/Submission25452/Authors']
q05hC1Pzkr
25,450
q05hC1Pzkr
Characterizing and Optimizing the Spatial Kernel of Multi Resolution Hash Encodings
Multi Resolution Hash Encoding (MHE), the foundational technique behind Instant Neural Graphics Primitives, provides a powerful parameterization for neural fields. However, its spatial behavior lacks rigorous understanding from a physical systems perspective, leading to reliance on heuristics for hyperparameter selecti...
We analyze Multi-Resolution Hash Encoding (MHE) using its Point Spread Function (PSF) to reveal that effective resolution is governed by average, not finest, resolution, and introduce Rotated MHE to mitigate inherent anisotropy and collision noise.
['multi-resolution hash encoding', 'implicit neural representations', 'neural fields', 'point spread function', 'spatial kernel analysis', 'anisotropy', 'resolution limit', 'FWHM', 'hash collisions', 'signal-to-noise ratio', 'NeRF']
/pdf/11f4413fe01f01addafd76cd01dfd0c3346c148e.pdf
applications to computer vision, audio, language, and other modalities
null
['ICLR.cc/2026/Conference/Submission25450/Authors']
0JLUFJMo5p
25,449
0JLUFJMo5p
Dynamic Task-Embedded Reward Machines for Adaptive Code Generation and Manipulation in Reinforcement Learning
We introduce Dynamic Task-Embedded Reward Machine (DTERM), a new machine learning approach for reinforcement learning on code generation and code manipulation tasks. Conventional reward models tend to be based on fixed weightings or manual tuning, which is not flexible enough for many different coding tasks, such as...
null
['Reinforcement Learning']
/pdf/fa6de8f172967f9988c29abcc16091879272bcd0.pdf
transfer learning, meta learning, and lifelong learning
null
['ICLR.cc/2026/Conference/Submission25449/Authors']
ScpCaOVGw1
25,448
ScpCaOVGw1
EVEREST: A Transformer for Probabilistic Rare-Event Anomaly Detection with Evidential and Tail-Aware Uncertainty
Forecasting rare events in multivariate time-series data is a central challenge in machine learning, complicated by severe class imbalance, long-range dependencies, and distributional uncertainty. We introduce EVEREST, a transformer-based architecture for probabilistic rare-event forecasting that delivers calibrated pr...
EVEREST is a transformer architecture for rare-event time-series forecasting that combines evidential and tail-aware uncertainty to deliver calibrated, interpretable, and state-of-the-art predictions across scientific anomaly detection tasks.
['Transformer models', 'Uncertainty quantification', 'Evidential deep learning', 'Extreme value theory', 'Imbalanced classification']
/pdf/95203a99a1ccbf3fd0495c1baadd9fa578a921c5.pdf
learning on time series and dynamical systems
/attachment/ab91622dbcd60fe6eb53bd44423454704b34fc62.zip
['ICLR.cc/2026/Conference/Submission25448/Authors']
rb7rnOSa2g
25,446
rb7rnOSa2g
Latents-Inv: Robust Semantic Watermark with Key-Assisted Recovery for Diffusion Models
Semantic watermarking provides imperceptible identity traceability for diffusion-generated images, enabling model copyright protection and image source verification. However, existing semantic watermarking methods based on initial latent noise render the protected image vulnerable to adversarial latent-space manipulati...
null
['watermark', 'AI Security', 'diffusion model']
/pdf/953f565cb7f04df0535f50c851a11c19dacee315.pdf
alignment, fairness, safety, privacy, and societal considerations
null
['ICLR.cc/2026/Conference/Submission25446/Authors']
CyVUxyDc4U
25,444
CyVUxyDc4U
IDAP++: Advancing Divergence-Based Pruning via Filter-Level and Layer-Level Optimization
This paper presents a novel approach to neural network compression that addresses redundancy at both the filter and architectural levels through a unified framework grounded in information flow analysis. Building on the concept of tensor flow divergence, which quantifies how information is transformed across network la...
null
['Neural Network Pruning', 'Information Flow Divergence', 'Model Compression', 'Architecture Optimization']
/pdf/e3eb774465bc449139535e73d2f1868321ba7680.pdf
optimization
null
['ICLR.cc/2026/Conference/Submission25444/Authors']
FPBtaGBv81
25,440
FPBtaGBv81
Dynamic Trust Region Adaptation for Human-in-the-Loop Reinforcement Learning in Code Refinement
We propose a dynamic trust region adaptation framework for Human-in-the-Loop Reinforcement Learning (HITL-RL) in code refinement to address the challenge of incorporating unskilled human feedback into policy updates. Conventional methods handle all feedback in the same way, and this may result in poor convergence becau...
null
['Code Refinement']
/pdf/5e672f3ea95a9e58fe41dc6e69e40c23a3003aa5.pdf
transfer learning, meta learning, and lifelong learning
null
['ICLR.cc/2026/Conference/Submission25440/Authors']
guUUlHPXRw
25,437
guUUlHPXRw
Modelling Optimal Trade-Off Between Continued Pre-Training and Supervised Fine-Tuning for LLM Domain Adaptation
Domain adaptation is critical for tailoring pre-trained Large Language Models (LLMs) to specialised tasks without significant costs of pre-training from scratch. Two common approaches for domain adaptation are Continual Pre-training (CPT) and Supervised Fine-Tuning (SFT), yet the data mix for each is often determined a...
Finding the optimal data allocation between CPT and SFT for domain adaptation
['Machine Learning', 'Continuous Pretraining', 'Supervised Fine Tuning', 'Parameter-Efficient Fine-Tuning (PEFT)', 'Optimization']
/pdf/3ea0a83c73d87a6a98d2b88890ee861937e1cc3c.pdf
optimization
null
['ICLR.cc/2026/Conference/Submission25437/Authors']
gMc5Qa45ia
25,435
gMc5Qa45ia
DynamicRank LoRA: Real-Time Adaptive Fine-Tuning for Code Models via Token-Level Importance and Loss Landscape Awareness
We propose \textbf{DynamicRank LoRA}, a novel fine-tuning mechanism for code models that dynamically adjusts the rank of low-rank adaptation (LoRA) matrices in real-time, addressing the limitations of static rank configurations in conventional LoRA. The proposed approach combines two fundamental ingred...
null
['Real-Time Adaptive Fine-Tuning']
/pdf/da473586ea64c99f5a828a62e17a734bfc042785.pdf
transfer learning, meta learning, and lifelong learning
null
['ICLR.cc/2026/Conference/Submission25435/Authors']
cY2aTfhT3L
25,432
cY2aTfhT3L
ReSafe: Enhancing Safety of Text-to-Image Diffusion via Post-Hoc Image Back Translation
Ensuring safe images in Text-to-Image (T2I) diffusion models has emerged as an active area of research. However, existing T2I safe image generation methods may fail to fully erase learned knowledge and remain vulnerable to circumvention like adversarial prompts or concept arithmetic. Given that safe image generation me...
Image-to-image translation framework designed to remove inappropriate components from a given unsafe image and regenerate a safe image.
['Safe generation', 'Image-to-Image translation', 'Image back translation']
/pdf/4b5185bc7cded893c915145060b91bd2f0732553.pdf
alignment, fairness, safety, privacy, and societal considerations
null
['ICLR.cc/2026/Conference/Submission25432/Authors']
EbSkBZQF9g
25,431
EbSkBZQF9g
Mechanistic Interpretability analysis of a single-layer transformer on 0-1 knapsack
Small language models have been shown to exhibit generalisation for toy problems while being trained on algorithmically generated datasets. It is poorly understood whether this phenomenon happens in complex problems such as NP-complete problems. In this work, we show the inability of a single-layer transformer to "grok...
mechanistic interpretability of a single-layer transformer on 0-1 knapsack, shows the inability of transformers to solve NP-complete tasks
['Mechanistic Interpretability', 'Machine Learning', 'grokking', 'knapsack problem']
/pdf/b22ab5fff3cc4fc0689e7fae9ee4e09f1f1bd6f2.pdf
interpretability and explainable AI
null
['ICLR.cc/2026/Conference/Submission25431/Authors']
APawIJjJlP
25,428
APawIJjJlP
Fed-Energy: Federated Reinforcement Learning for Scalable and Energy-Efficient Large-Scale Code Optimization
We propose \textbf{Fed-Energy}, a federated reinforcement learning (RL) framework for scalable and energy-efficient large-scale code optimization. Modern code optimization faces two conflicting goals: the computational burden of training models with RL and the lack of estimation of energy consump...
null
['Large-Scale Code Optimization']
/pdf/5fa20d3ba87fb7edecacdbbb12614927552139e1.pdf
transfer learning, meta learning, and lifelong learning
null
['ICLR.cc/2026/Conference/Submission25428/Authors']
i6fc97RY1l
25,426
i6fc97RY1l
Addition Circuit: How LLMs Add in Their Heads using State Vectors
Large Language Models (LLMs) are often treated as black boxes, yet many of their behaviours suggest the presence of internal, algorithm-like structures. We present addition circuit as a concrete, mechanistic example of such a structure: a sparse set of attention heads that perform integer addition. Focusing on two popu...
We show that LLMs learn representations of integers in addition tasks that generalize across prompt templates/number formats/languages, and we reverse-engineer the 2-argument addition circuit for multi-token integers in Llama 3.1 8B
['Mechanistic Interpretability', 'Large Language Models', 'Addition', 'Arithmetic', 'Algorithmic Reasoning', 'Circuits']
/pdf/bf3cdff17b45198441e6affc204f49282648af1f.pdf
interpretability and explainable AI
null
['ICLR.cc/2026/Conference/Submission25426/Authors']
Gq7cBZC04L
25,424
Gq7cBZC04L
Steering Language Models for Theorem Proving
Recent progress in automated theorem proving leverages Large Language Models (LLMs) for their capacity to comprehend informal mathematical statements and generate corresponding formal proofs. Even though these techniques perform well, very little exploration has been done to understand how language models interpret and...
null
['Theorem proving', 'activation steering']
/pdf/2387c2996333c2671934a348f83f77f88b91180f.pdf
neurosymbolic & hybrid AI systems (physics-informed, logic & formal reasoning, etc.)
/attachment/2b3be24122b5e297732737020636f7a8fb930635.zip
['ICLR.cc/2026/Conference/Submission25424/Authors']
Oq3yRhFp0t
25,423
Oq3yRhFp0t
How Well Does GPT-4o Understand Vision? Evaluating Multimodal Foundation Models on Standard Computer Vision Tasks
Multimodal foundation models, such as GPT-4o, have recently made remarkable progress, but it is not clear where exactly these models stand in terms of understanding vision. In this paper, we benchmark the performance of popular multimodal foundation models (GPT-4o, o4-mini, Gemini 1.5 Pro and Gemini 2.0 Flash, Claude 3...
null
['vision benchmark', 'multimodal foundation models', 'vision language models', 'standard computer vision tasks']
/pdf/e62eedf4fc606a238123b0c26aeb9f413944fcad.pdf
datasets and benchmarks
/attachment/d87ced81699641e0183dde7f95a0332ea626ea78.zip
['ICLR.cc/2026/Conference/Submission25423/Authors']
YkLA6exfqW
25,417
YkLA6exfqW
Are Color Trained Models Robust in Handling Binary Images: A Fingerprint Recognition Study
Fingerprint recognition has long been a cornerstone of biometric authentication, yet robust performance across varying imaging conditions remains a challenge, especially for fingerphotos, which are generally acquired with a camera and, unlike Livescan images, are prone to environmental factors. Due to th...
null
['Fingerprint Recognition', 'Binary Images', 'Color Images', 'Deep Learning']
/pdf/b21410099a629d42563f4e9b90612001ed84bb5b.pdf
other topics in machine learning (i.e., none of the above)
null
['ICLR.cc/2026/Conference/Submission25417/Authors']
Kkcaz5XlJB
25,416
Kkcaz5XlJB
AgentChangeBench: A Multi-Dimensional Evaluation Framework for Goal-Shift Robustness in Conversational AI
Goal changes are a defining feature of real-world multi-turn interactions, yet current agent benchmarks primarily evaluate static objectives or one-shot tool use. We introduce $\textbf{AgentChangeBench}$, a benchmark explicitly designed to measure how tool-augmented language model agents adapt to mid-dialogue goal shif...
We present a benchmark that stress-tests agents on explicit goal-shifts in dual-control, multi-turn dialogs. We also add sequence-annotated scenarios spanning multiple service domains, personas and goal-shift based evaluation metrics.
['benchmark', 'multiturn', 'goal-shift', 'robustness', 'agents', 'evaluation', 'llm']
/pdf/3c915ee0d1b420cbcd944d8353796982627e4fc9.pdf
datasets and benchmarks
/attachment/97a53d76e26d9905382f775adfcb870275422de0.zip
['ICLR.cc/2026/Conference/Submission25416/Authors']
w7jkX7FfZ5
25,415
w7jkX7FfZ5
Formal-Lagrangian Policy Optimization for Safe Reinforcement Learning in Code Generation with Differentiable Verification
We propose Formal-Lagrangian Policy Optimization (FLPO), an original framework for safe reinforcement learning (RL) in code generation that combines safe image inspection and policy optimization through a Lagrangian multiplier mechanism. The major bottleneck to RL-based code synthesis, however, is to en...
null
['Code Generation']
/pdf/eca150f63d4f8ff01a5c7f0e6a9f4f1e5d598224.pdf
transfer learning, meta learning, and lifelong learning
null
['ICLR.cc/2026/Conference/Submission25415/Authors']
5PBKxl7o49
25,414
5PBKxl7o49
Listens like Mel: Boosting Latent Audio Diffusion with Channel Locality
Latent representations critically shape diffusion-based audio generation. We observe that Mel spectrograms exhibit an approximate power-law spectrum that aligns with diffusion’s coarse-to-fine denoising, whereas waveform variational autoencoder (VAE) latents have nearly equal intensity along the channel axis. We introdu...
Channel span masking imposes mel-like spectral bias on high-compression VAE latents by acting as a low-pass window over channels, restoring power-law structure and delivering up to 4× faster Diffusion Transformer convergence.
['audio generation', 'variational auto-encoder', 'representation learning', 'self-supervised learning']
/pdf/e756b36f1a434cbaf885374a85d473e8e271d7df.pdf
generative models
null
['ICLR.cc/2026/Conference/Submission25414/Authors']
cQvBP4TZHe
25,413
cQvBP4TZHe
When Forces Disagree: A Data-Free Fast Diagnostic from Internal Consistency in Direct-Force Neural Network Potentials
Direct-force neural network potentials (NNIPs) offer superior speed for atomistic simulations, but their reliability is limited by the lack of a fast and data-free uncertainty estimate to monitor the impact of non-conservativity and prediction errors. While ensembles are data-free but slow, and other single-model metho...
We introduce a fast physics-informed uncertainty metric for pre-trained direct-force neural network potentials that leverages the model's internal physical inconsistency to achieve the data-free advantage of ensembles at the single-model speed.
['NNIPs', 'Uncertainty', 'Pre-trained', 'Data-free', 'Physics-informed Uncertainty Estimate', 'Algorithmic Stability', 'Internal Consistency', 'Inter-head Influence', 'Multi-headed Architecture']
/pdf/8be6125d13960cf54a6c92db369718c179664af3.pdf
applications to physical sciences (physics, chemistry, biology, etc.)
null
['ICLR.cc/2026/Conference/Submission25413/Authors']
HDSlPuFoEu
25,412
HDSlPuFoEu
Do Large Language Models Respect Contracts? Evaluating and Enforcing Contract-Adherence in Code Generation
Prevailing code generation benchmarks, such as HumanEval+ and MBPP+, primarily evaluate large language models (LLMs) with $\textit{pass@k}$ on functional correctness using well-formed inputs. However, they ignore a crucial aspect of real-world software: adherence to $\textit{contracts}$$\textemdash$the preconditions an...
A contract-aware benchmark and generation framework that pairs LLMs with an SMT solver to create violation focused tests and quantitatively assess whether generated code satisfies explicit contracts.
['Test-Case Generation', 'Contract-Violating Test Cases', 'Contract-Aware Evaluation', 'SMT solver', 'Code Generation']
/pdf/80647a798a6d02678370c296d3ff1b9c358db3a5.pdf
applications to computer vision, audio, language, and other modalities
/attachment/26fea117473e05f7d05fc76571856d7cb83b793d.zip
['ICLR.cc/2026/Conference/Submission25412/Authors']
pcaHnwjnsO
25,409
pcaHnwjnsO
Graph Adversarial Refinement for Robust Code Fixes: Enhancing Policy Networks via Structure-Aware Contrastive Learning
We propose \textbf{Graph Adversarial Refinement (GARM)}, a novel module to enhance the robustness of policy networks in adversarial reinforcement learning for code fixes. Modern code repair systems frequently break down when confronted with adversary-perturbed inputs, which mainstreamer the structural w...
null
['Structure-Aware Contrastive Learning']
/pdf/74e1ece49e8eb2553a2458820fc063c358a86c26.pdf
transfer learning, meta learning, and lifelong learning
null
['ICLR.cc/2026/Conference/Submission25409/Authors']
cUrshXsWYK
25,406
cUrshXsWYK
MARINA-P: Superior Performance in Nonsmooth Federated Optimization with Adaptive Stepsizes
Non-smooth communication-efficient federated optimization remains largely unexplored theoretically, despite its importance in machine learning applications. We consider a setup focusing on optimizing downlink communication by improving state-of-the-art schemes like EF21-P [Gruntkowska et al., 2023] and MARINA-P [Gruntk...
We extend MARINA-P and EF21-P to non-smooth distributed optimization, introduce adaptive stepsizes, and show MARINA-P with permutation compressors outperforms EF21-P in non-smooth settings
['Federated Learning', 'Communication-efficient non-smooth optimization', 'Adaptive Stepsizes']
/pdf/fe0d57f0d21aa1b22be55d3ee6383abccc106cb7.pdf
optimization
/attachment/51668adc2048872493c6c3f4296b75aae17e00fb.zip
['ICLR.cc/2026/Conference/Submission25406/Authors']
NRX1iNUrZ3
25,404
NRX1iNUrZ3
Graph-Energy Reinforcement Learning: Adaptive Reward Design for API Usage Pattern Mining with OOD Detection
We propose a novel framework, Graph-Energy Reinforcement Learning (GERL), for mining API usage patterns with robust out-of-distribution (OOD) detection capabilities. The growing complexity of API ecosystems demands adaptive methods to differentiate between in-distributio...
null
['OOD Detection']
/pdf/39278bb4311c713b48318136b74f3834049dd323.pdf
transfer learning, meta learning, and lifelong learning
null
['ICLR.cc/2026/Conference/Submission25404/Authors']
0Ow7PTK0Qj
25,400
0Ow7PTK0Qj
FastEdit: Low-Rank Structured Regularization for Efficient Model Editing
When new knowledge emerges, it is crucial to efficiently update large language models (LLMs) to reflect the latest information. However, state-of-the-art methods widely adopted in the model editing community --- such as MEMIT, PRUNE, and AlphaEdit --- suffer from prohibitively slow editing speeds, often taking 6 to 14 ...
null
['Large Language Models', 'Model Editing', 'Knowledge Updating']
/pdf/a9b25e95a586621fb175980502f410c29b8a691d.pdf
applications to computer vision, audio, language, and other modalities
null
['ICLR.cc/2026/Conference/Submission25400/Authors']
zwfpyw345l
25,398
zwfpyw345l
Hierarchical Code Embeddings with Multi-Level Attention for Reinforcement Learning State Representation
In this paper, we propose a novel state representation and reinforcement learning (RL) system that encodes the semantics of code hierarchically using multiple attention mechanisms. Traditional approaches regularly treat code embeddings as flat sequences or rely only on graph-based representatio...
null
['Multi-Level Attention']
/pdf/293bbf406ac5f2948e1bb7bb48c7a1596b0596c7.pdf
transfer learning, meta learning, and lifelong learning
null
['ICLR.cc/2026/Conference/Submission25398/Authors']
6MgD2sXZmg
25,395
6MgD2sXZmg
Deep Cognition: A Multi-Agent Framework for Collaborative Research with Real-Time Cognitive Oversight
Despite advances in large language models, current systems for deep research are limited by an asynchronous, "input-wait-output" interaction paradigm. This model creates a critical disconnect between human intent and AI execution, leading to error propagation and an inability to dynamically course-correct during comple...
null
['Interactive AI Systems', 'Human-in-the-Loop', 'Multi Agent Framework']
/pdf/76608e16d84874597e4f482fc64058578bc5eaf7.pdf
applications to robotics, autonomy, planning
null
['ICLR.cc/2026/Conference/Submission25395/Authors']
GVIei1IdmC
25,390
GVIei1IdmC
Large Language Models as Nondeterministic Causal Models
Chatzi et al. (2025) recently developed, for the first time, a method for generating counterfactuals of probabilistic Large Language Models. Such counterfactuals tell us what would - or might - have been the output of an LLM if some factual prompt ${\bf x}$ had been ${\bf x}^*$ instead. The ability to generate such cou...
By representing Large Language Models as Nondeterministic Causal Models we show that the generation of counterfactuals becomes extremely simple.
['Large Language Models', 'counterfactuals', 'causal models']
/pdf/ecb2259a8c51ce0330d579f1faaefef0922d4ed6.pdf
foundation or frontier models, including LLMs
null
['ICLR.cc/2026/Conference/Submission25390/Authors']
2wshkCgNYk
25,387
2wshkCgNYk
Performance vs interpretability trade-off of hand-crafted and language model features: The case of protein superfamily classification
The newfound rise of protein language models (PLMs) that leverage data and compute has introduced an interesting conflict in computational biology: a trade-off between the high predictive performance of non-interpretable features and the scientific insight that can be gained from interpretable, hand-crafted ones. In th...
null
['Feature engineering', 'interpretability', 'proteins', 'CATH superfamily', 'hand-crafted features', 'attention matrix', 'protein language models', 'class imbalance']
/pdf/ef4c28715f84304ec46425923b04c58b4b76a767.pdf
applications to physical sciences (physics, chemistry, biology, etc.)
null
['ICLR.cc/2026/Conference/Submission25387/Authors']
FAK3lJSRQQ
25,386
FAK3lJSRQQ
ExLLM: Experience-Enhanced LLM Optimization for Molecular Design and Beyond
Molecular design involves an enormous and irregular search space, where traditional optimizers such as Bayesian optimization, genetic algorithms, and generative models struggle to leverage expert knowledge or handle complex feedback. Recently, LLMs have been used as optimizers, achieving promising results on benchmarks...
ExLLM is an LLM-as-Optimizer with experience, offspring, and feedback mechanisms, achieving SOTA in molecular design and generalizing to diverse discrete optimization tasks with minimal problem templates.
['Large Language Models', 'Molecular Design', 'Evolutionary Algorithms', 'Discrete Optimization']
/pdf/565d3e43e701210d23422f488938de88d1fae4e2.pdf
optimization
null
['ICLR.cc/2026/Conference/Submission25386/Authors']
JJeZWINFmz
25,385
JJeZWINFmz
SAGE Can Quantify Why Two Models Behave Differently
Vision-based activity recognition tasks are sensitive to environmental context and lighting, making generalization across domains difficult. Models trained in controlled settings can report high accuracy, but often fail under domain shift, where it remains unclear whether predictions depend on causal foreground cues, s...
null
['Explainable AI', 'Vision-based Driver Distraction Detection (vDDD)', 'SAGE', 'Saliency Embeddings', 'Behavioral Divergence', 'Domain Shift', 'Generalization', 'Shortcut Learning', 'Vision--Language Models (VLMs)']
/pdf/ead736e219b2243d5f786eca923eafa27860fd53.pdf
interpretability and explainable AI
/attachment/039311be5f790d4e79b9c2d476321292ba1bf422.zip
['ICLR.cc/2026/Conference/Submission25385/Authors']
BdlIQGetYv
25,382
BdlIQGetYv
Octopus: An Auto-Generated Multidimensional Fine-Grained Benchmark for Evaluating Text-to-SQL Systems
Text-to-SQL converts natural language queries into structured SQL, facilitating user interaction with databases without any SQL knowledge. The advent of LLM technologies has significantly accelerated text-to-SQL development. It is important to construct an appropriate benchmark to evaluate the performance of text...
null
['Text-to-SQL', 'Benchmark', 'Large Language Model']
/pdf/083725aed7cc56b521d1b95d51c69446083981ac.pdf
datasets and benchmarks
null
['ICLR.cc/2026/Conference/Submission25382/Authors']
xRxh48OAAM
25,381
xRxh48OAAM
Eliminating the first moment state in Adam optimizer
The Adam optimizer and its variants are widely used in large-scale machine learning, but their memory footprint is high because they maintain two state variables per parameter. In Adam, the exponential moving average (EMA) of gradients (m) serves as a first-moment estimator, but it also carries variance information tha...
We present a novel variant of Adam optimizer that uses one state variable, instead of two
['Half-memory Adam', 'efficient Adam', 'Memory efficient optimizer']
/pdf/3c066a66ffe593be4edc42cd97e09428bd5f1246.pdf
optimization
null
['ICLR.cc/2026/Conference/Submission25381/Authors']
29Mote2SrR
25,378
29Mote2SrR
Hierarchical Feedback Interface for Human-in-the-Loop Reinforcement Learning in Debugging
We propose the Hierarchical Feedback Interface (HFI) for human-in-the-loop reinforcement learning in debugging, which structures human feedback into high-level objectives and low-level refinements to address the subjectivity and inefficiency of ad-hoc corrections. The HFI employs a two-tiered policy architecture,...
null
['Reinforcement Learning in Debugging']
/pdf/8a3e9c4f1d1111df09bb5b27b93e15cd35858148.pdf
transfer learning, meta learning, and lifelong learning
null
['ICLR.cc/2026/Conference/Submission25378/Authors']
lvtiRJ2nwU
25,375
lvtiRJ2nwU
Semantic Proximity for Redundancy-Aware Context Compression in Large Language Models
LLMs are increasingly bottlenecked by fixed context windows, motivating principled compression of conversational histories. We study semantic-redundancy–aware compression, in which we pair human–assistant turns, embed them, and summarize those that are most semantically overlapping. We introduce STAE (Semantic-Temporal...
Compress LLM context by summarizing semantically redundant turns, via embedding similarity (or blended with recency) and extended to cluster-level summaries, alleviating extra LLM calls, outperforming FIFO on an augmented LongMemEval benchmark.
['Large language models', 'context compression', 'semantic proximity']
/pdf/7a633668675afc936bcbb39004813bccca2dfca4.pdf
generative models
null
['ICLR.cc/2026/Conference/Submission25375/Authors']
w0rVXs6QJM
25,372
w0rVXs6QJM
EucliFold: Probing 3D Euclidean Prior in VLMs via Cognitively-Stratified Folding Tasks
Humans leverage robust 3D spatial priors to align perception with the physical world, enabling flexible and intelligent behavior. While Vision-Language Models (VLMs) exhibit impressive zero-shot performance, it remains unclear whether they possess genuine spatial reasoning capabilities, as standard evaluations are conf...
null
['vision language model', 'synthetic dataset']
/pdf/7b834136df5c9f61b6e5976859831dd3fcf904e9.pdf
datasets and benchmarks
/attachment/353df013a9588e5616023b26dcc22016dddd0a9c.zip
['ICLR.cc/2026/Conference/Submission25372/Authors']
Mq6bGrtktf
25,371
Mq6bGrtktf
Aligning Large Language Model Behavior with Human Citation Preferences
Most services built on powerful large-scale language models (LLMs) add citations to their output to enhance credibility. Recent research has paid increasing attention to the question of what reference documents to link to outputs. However, how LLMs recognize cite-worthiness and how this process should be controlled rem...
Across 8 content types, LLMs over-cite “Citation needed” (up to +27%) and under-cite numeric (−22.6%) and person-name (−20.1%) sentences vs humans; DPO improves alignment by ~5.76%. Data/code will be released upon publication.
['LLM', 'Citation', 'Credibility']
/pdf/2be5ae7132670186460ac752ed27c2cc35981c18.pdf
foundation or frontier models, including LLMs
null
['ICLR.cc/2026/Conference/Submission25371/Authors']
0wSlFpMsGb
25,369
0wSlFpMsGb
Common Corpus: The Largest Collection of Ethical Data for LLM Pre-Training
Large Language Models (LLMs) are pre-trained on large data from different sources and domains. These data most often contain trillions of tokens with large portions of copyrighted or proprietary content, which hinders the usage of such models under AI legislation. This raises the need for truly open pre-training data t...
We assemble and release the largest truly open multilingual dataset for LLM pre-training consisting of 2 trillion tokens
['dataset', 'pre-training', 'large language models', 'open data', 'open science', 'multilingual']
/pdf/e141458035fcff8c02d4916469b622af70d94021.pdf
datasets and benchmarks
/attachment/045fb9a31e057a27cf6dafc3e64ccda88fe88900.pdf
['ICLR.cc/2026/Conference/Submission25369/Authors']
uxi7YoZ13b
25,368
uxi7YoZ13b
Adversarial Robust Reward Shaping for Safe Reinforcement Learning in AI-Generated Code
We propose \textbf{Adversarial Robust Reward Shaping (ARRS)}, a novel reinforcement learning framework for generating secure code that explicitly addresses vulnerabilities to adversarial evasion attacks. Conventional reward functions in code generation tasks often do not take into consideration how vulnerable detection...
null
['Adversarial Robust Reward']
/pdf/e3a6fbfe593154484f778dadb4de89cd18289b9c.pdf
transfer learning, meta learning, and lifelong learning
null
['ICLR.cc/2026/Conference/Submission25368/Authors']
zOWljZMbCm
25,365
zOWljZMbCm
Unlocking the Potential of Weighting Methods in Federated Learning Through Communication Compression
Modern machine learning problems are frequently formulated in federated learning domain and incorporate inherently heterogeneous data. Weighting methods operate efficiently in terms of iteration complexity and represent a common direction in this setting. At the same time, they do not address directly the main obstacle...
null
['Convex optimization', 'Compression', 'Stochastic optimization']
/pdf/b115f08c7af8144ceefe4b9c36739de2a333012b.pdf
optimization
/attachment/ac1c6cb25f064144f3042112b000a9c70f9b27c3.pdf
['ICLR.cc/2026/Conference/Submission25365/Authors']
ULqzEEkyxk
25,363
ULqzEEkyxk
LLMs Leak Training Data Beyond Verbatim Memorization via Membership Decoding
Extracting training data from large language models (LLMs) exposes serious memorization issues and privacy risks. Existing attacks extract data by generations, followed by membership inference. However, extraction attacks do not guide such generations, and the extraction scope of member data is limited to the greedy de...
null
['Membership Inference Attacks', 'Privacy', 'LLMs', 'Data Extraction Attacks']
/pdf/e14ceb430f82c81d1d021fc97c331ca3d9d12bcb.pdf
alignment, fairness, safety, privacy, and societal considerations
null
['ICLR.cc/2026/Conference/Submission25363/Authors']
weWUOuLTdj
25,359
weWUOuLTdj
Generative Model via Quantile Assignment
Deep Generative models (DGMs) play two central roles in modern machine learning: (i) producing new information (e.g., image synthesis, data augmentation, and creative content generation) and (ii) reducing dimensionality (by deriving low-dimensional latent representations). Yet, DGMs' versatility must confront training ...
null
['generative models', 'quantile assignment', 'optimal transportation', 'latent representation learning', 'synthetic data generation']
/pdf/03dc9c505e450ba4983984b1c65cf40beda8f828.pdf
generative models
null
['ICLR.cc/2026/Conference/Submission25359/Authors']
goBph2pXDS
25,358
goBph2pXDS
Image Hashing via Cross-View Code Alignment in the Age of Foundation Models
Efficient large-scale retrieval requires representations that are both compact and discriminative. Foundation models provide powerful visual and multimodal embeddings, but nearest neighbor search in these high-dimensional spaces is computationally expensive. Hashing offers an efficient alternative by enabling fast Hamm...
We propose cross-view code alignment, a simple and universal principle for hashing foundation model embeddings using binary cross-entropy and coding-rate maximization, unifying unsupervised and supervised hashing.
['Image Hashing', 'Image Retrieval', 'Cross-View Alignment', 'Coding-Rate Maximization', 'Foundation Models']
/pdf/3c324e37a1742959a96014f8bca45e0b9ecad963.pdf
transfer learning, meta learning, and lifelong learning
null
['ICLR.cc/2026/Conference/Submission25358/Authors']
gILGafxq8R
25,354
gILGafxq8R
Joint Learning Between Reference Image and Text Prompt for Fashion Image Editing
Fashion image editing is an essential tool for designers to visualize design concepts, aiming to modify the garment in an input fashion image while ensuring that other areas of the image remain unaffected. Existing methods primarily focus on image-based virtual try-on or text-driven fashion image editing, often relyin...
null
['Fashion Image Editing', 'Diffusion model', 'Text-Guided Image Editing']
/pdf/78bbeaf23b7657f87db354bc52f5851790339303.pdf
applications to computer vision, audio, language, and other modalities
/attachment/cc3c05c8e5d7c1caaa21632eb733c0fb8b37e738.zip
['ICLR.cc/2026/Conference/Submission25354/Authors']
MniooZbsKw
25,353
MniooZbsKw
Spectral Multiple-Instance Learning for Efficient Gigapixel Image Analysis
With ongoing advances in imaging technology, gigapixel images are now widely utilized in both scientific research and industrial applications. However, their extremely large scale presents significant challenges for conventional deep learning workflows. A common approach involves partitioning the image into thousands o...
null
['Multiple-Instance Learning', 'Spectral Methods', 'Whole Slide Images']
/pdf/1f0a03814f63fe183c4842fbb1038413d3044570.pdf
learning on graphs and other geometries & topologies
/attachment/742d7b35bcb971323ada832890552da3b07a55fc.zip
['ICLR.cc/2026/Conference/Submission25353/Authors']
K4ngUOra9m
25,348
K4ngUOra9m
Masked Skill Token Training for Hierarchical Off-Dynamics Transfer
Generalizing policies across environments with altered dynamics remains a key challenge in reinforcement learning, particularly in offline settings where direct interaction or fine-tuning is impractical. We introduce Masked Skill Token Training (MSTT), a fully offline hierarchical RL framework that enables policy trans...
null
['Transfer Learning', 'Skills', 'Hierarchical RL', 'Embodied AI']
/pdf/e9f5c6214a2e0cfdadef9431dd4cc79a24ed9296.pdf
reinforcement learning
null
['ICLR.cc/2026/Conference/Submission25348/Authors']
n3u7PK2kyd
25,347
n3u7PK2kyd
From Divergence to Normalized Similarity: A Symmetric and Scalable Topological Toolkit for Representation Analysis
Representation Topology Divergence (RTD) offers a powerful lens for analyzing topological differences in neural network representations. However, its asymmetry and lack of a normalized scale limit its interpretability and direct comparability across different models. Our work addresses these limitations on two fronts...
We introduce a topological toolkit to advance representation analysis. SRTD unifies RTD's theoretical framework, while our novel, scale-invariant similarity score, NTS, provides a practical tool for robust, normalized comparisons
['Representation Learning', 'Topological Data Analysis (TDA)', 'Representation Similarity', 'Persistent Homology', 'Neural Network Analysis', 'Large Language Models (LLMs)']
/pdf/2a5d4eb5f7c5dd657e26ff1e588a05d6de695a0f.pdf
learning on graphs and other geometries & topologies
null
['ICLR.cc/2026/Conference/Submission25347/Authors']
VjGU55hEwV
25,346
VjGU55hEwV
RLIE: Rule Generation with Logistic Regression, Iterative Refinement, and Evaluation for Large Language Models
Nowadays, Large Language Models (LLMs) are able to propose rules in natural language, overcoming constraints of a predefined predicate space inherent in traditional rule learning. However, existing methods using LLMs often overlook the combination effects of rules, and the potential of coupling LLMs with probabilistic rule...
null
['Rule Learning', 'Neuro-Symbolic', 'LLM']
/pdf/6d5bc1ea7d11b77ca666b7f36d65c53cfbae6733.pdf
interpretability and explainable AI
null
['ICLR.cc/2026/Conference/Submission25346/Authors']
sWs0cCuM8I
25,344
sWs0cCuM8I
Spilling the Beans: Teaching LLMs to Self-Report Their Hidden Objectives
As AI systems become more capable of complex agentic tasks, they also become more capable of pursuing undesirable objectives and causing harm. Previous work has attempted to catch these unsafe instances by interrogating LLMs directly about their objectives and behaviors. However, the main weakness of trusting interroga...
We propose a SFT method that trains models to admit simple factual errors, which generalizes to admitting hidden objectives in sabotage tasks under adversarial pressure to conceal them, improving techniques for incriminating misaligned AI systems.
['honesty', 'interrogation', 'alignment auditing']
/pdf/4011ef9f2982f3e1483f17b89e4d05b031367f0a.pdf
alignment, fairness, safety, privacy, and societal considerations
/attachment/c6e83f49718b4a3b67c581a1606f86d687b90837.zip
['ICLR.cc/2026/Conference/Submission25344/Authors']
1jXc6SHcUV
25,339
1jXc6SHcUV
Efficient Fine-Tuning of Quantized Models via Adaptive Rank and Bitwidth
As large language models (LLMs) scale up, model compression is crucial for their deployment on resource-constrained devices. While methods like QLoRA reduce resource demands by combining parameter quantization with LoRA fine-tuning, their use of uniform precision can limit performance by failing to account for layer-wi...
we propose QR-Adaptor, a unified, gradient-free strategy that uses partial calibration data to jointly search the quantization components and the rank of low-rank spaces for each layer, thereby continuously improving model performance.
['Fine-tuning', 'Mixed Precision', 'LoRA', 'Adaptive rank', 'Multi-objective optimization']
/pdf/445c9940f7fb92522e2b23492e37f788ef6f3d5c.pdf
transfer learning, meta learning, and lifelong learning
/attachment/02c31a640220092b6ad396773b19279e93a1a45c.zip
['ICLR.cc/2026/Conference/Submission25339/Authors']
UbWy2QVmke
25,338
UbWy2QVmke
GAA-PtrNet: Graph attention aggregation-based pointer network for one-shot DAG scheduling
Optimizing Directed Acyclic Graph (DAG) workflow makespan by scheduling techniques is a critical issue in the high performance computing area. Many studies in recent years combined Pointer Network (PtrNet) with reinforcement learning (RL) to schedule DAGs by generating DAG task priorities in a sequence-to-sequence mann...
null
['DAG Scheduling', 'Graph Attention', 'Pointer Network', 'Reinforcement Learning', 'Combinatorial Optimization']
/pdf/92e9128d1c3a8308f5df44f2882a3fb263fd4eda.pdf
optimization
null
['ICLR.cc/2026/Conference/Submission25338/Authors']
EyswpODUEL
25,336
EyswpODUEL
DIANA with Compression for Distributed Variational Inequalities: Eliminating the Need to Transmit Full Gradients
Variational inequalities (VIs) are attracting increasing interest among machine learning (ML) researchers due to their applicability in numerous areas, such as empirical risk minimization (ERM) problems, adversarial learning, generative adversarial networks (GANs), and robust optimization. The growing volume of trainin...
null
['Variational inequalities', 'Compression operators', 'Convex optimization', 'Distributed learning']
/pdf/8ccf43e7a3eea9462877afb613362f58937c6d6f.pdf
optimization
/attachment/fe0855cb5917f4e09fc5721b247475db00de6c2f.zip
['ICLR.cc/2026/Conference/Submission25336/Authors']
HoUIYpitfo
25,331
HoUIYpitfo
Learning on the Job: Test-Time Curricula for Targeted Reinforcement Learning
Humans are good at learning on the job: We learn how to solve the tasks we face as we go along. Can a model do the same? We propose an agent that assembles a task-specific curriculum, called *test-time curriculum* (TTC-RL), and applies reinforcement learning to continue training the model for its target task. The test-...
We propose a test-time curriculum agent that self-curates a sequence of training tasks to specialize towards a specific target task via reinforcement learning
['large language models', 'test-time training', 'reinforcement learning', 'curriculum learning']
/pdf/483ac4c407181fa2d576d35294adb21c65ea249e.pdf
foundation or frontier models, including LLMs
/attachment/660dac2be8ef1da70d94f4de187662390cd06b1e.zip
['ICLR.cc/2026/Conference/Submission25331/Authors']
3Gre3i1tSD
25,328
3Gre3i1tSD
GRACE-MoE: Grouping and Replication with Locality-Aware Routing for Efficient Distributed MoE Inference
Sparse Mixture of Experts (SMoE) performs conditional computation by selectively activating a subset of experts, thereby enabling scalable parameter growth in large language models (LLMs). However, the expanded parameter scale exceeds the memory capacity of a single device, necessitating distributed deployment for infe...
We propose a co-optimization framework that reduces communication overhead and balances computational load across devices for efficient distributed SMoE inference.
['Mixture of Experts', 'Large Language Model', 'Efficient Inference']
/pdf/7979f53f11df58ae69e71e386419013b2d8def4c.pdf
other topics in machine learning (i.e., none of the above)
null
['ICLR.cc/2026/Conference/Submission25328/Authors']
ICANwnoGgN
25,327
ICANwnoGgN
Model soups need only one ingredient
Fine-tuning large pre-trained models on a target distribution often improves in-distribution (ID) accuracy, but at the cost of out-of-distribution (OOD) robustness as representations specialize to the fine-tuning data. Weight-space ensembling methods, such as Model Soups, mitigate this effect by averaging multiple che...
null
['Deep learning', 'Generalization', 'Out of Distribution']
/pdf/679ceccdc9e10bb43ae2b18cbed14bb7f6fa3ca5.pdf
foundation or frontier models, including LLMs
null
['ICLR.cc/2026/Conference/Submission25327/Authors']
bydk8kAZRM
25,324
bydk8kAZRM
FedSycle: Mitigating Post-Unlearning Performance Inconsistency in Federated Learning via Latent Feature Decoupling
Federated Learning (FL) safeguards data privacy by enabling collaborative model training without centralizing client data. The emerging 'Right to Be Forgotten' mandates necessitate Federated Unlearning (FU), allowing clients to revoke their data's influence on the global model. However, a critical yet overlooked challe...
We propose a high-performance federated unlearning algorithm, ensuring model performance while reducing domain inconsistency, with theoretical convergence and experimental demonstration.
['post-unlearning performance', 'inconsistency']
/pdf/f10ac4e07be35e2aaa6c297320c849ed4c9b8ccc.pdf
alignment, fairness, safety, privacy, and societal considerations
/attachment/06fa8395669b9e0d10528ebd79603d4ce81a2dd2.zip
['ICLR.cc/2026/Conference/Submission25324/Authors']
P0xkQNyguy
25,321
P0xkQNyguy
Gaussian Entropy Flow World Model for Streaming 3D Occupancy Prediction
In 3D occupancy prediction, temporal information is crucial. Traditional methods fuse multi-frame features through a pipeline of perception, alignment, and fusion, but they overlook the coherence of static elements and the motion patterns of dynamic elements in 3D scenes. Existing methods reformulate 3D prediction as 4...
null
['Occupancy', 'World Model', 'Autonomous Driving']
/pdf/7a279c16f984091c0f2561af982860d6d59f8823.pdf
applications to computer vision, audio, language, and other modalities
null
['ICLR.cc/2026/Conference/Submission25321/Authors']
sbEb0Ld6MK
25,320
sbEb0Ld6MK
Fairness via Independence: A General Regularization Framework for Machine Learning
Fairness in machine learning has emerged as a central concern, as predictive models frequently inherit or even amplify biases present in training data. Such biases often manifest as unintended correlations between model outcomes and sensitive attributes, leading to systematic disparities across demographic groups. Exis...
We introduce a general framework to promote fairness in machine learning by reducing the dependence between model predictions and sensitive attributes.
['Bias Mitigation', 'Statistical Independence', 'Fairness in Machine Learning']
/pdf/ba7e37116d06bac6d33c2a905bd5d5fabe5e25ca.pdf
alignment, fairness, safety, privacy, and societal considerations
null
['ICLR.cc/2026/Conference/Submission25320/Authors']
d3dSicnYkN
25,319
d3dSicnYkN
MANGO: MANGROVE GLOBAL OBSERVATIONS – A DATASET AND BENCHMARK
Mangroves buffer coasts and store large amounts of carbon, yet they are vulnerable to storms and require reliable monitoring at global scale. Thresholded spectral indices break across sensors, seasons, and atmospheres, which limits their usefulness beyond local settings. Recent segmentation models are more promising bu...
null
['Earth Observation', 'Mangrove']
/pdf/7bbe6157e54b6f8d06c1bd26c5c2c8c15969802a.pdf
datasets and benchmarks
/attachment/a5092be009e0a28405a4c0cb0c035363b720836d.zip
['ICLR.cc/2026/Conference/Submission25319/Authors']
C5Dgtmk7ho
25,318
C5Dgtmk7ho
MI-Grad-CAM: Letting Your Model Reveal What’s Most Informative
With the growing role of machine vision in critical applications such as healthcare, achieving precise and interpretable decision-making is crucial. Class Activation Mapping (CAM) is widely used for visual explanations in computer vision, but improving its interpretability remains an open research area. In this work, w...
null
['Mutual Information']
/pdf/6453cf71e518f802ab0ef99a67a1761b3b4d33ba.pdf
interpretability and explainable AI
null
['ICLR.cc/2026/Conference/Submission25318/Authors']
0WdN7pFCja
25,317
0WdN7pFCja
Adaptive Inference‑Time Scaling for LRMs using Uncertainty‑Aware RL
The widespread adoption of Large Reasoning Models (LRMs), such as Gemini 2.5 Pro Deep Think, OpenAI GPT-5 Pro, and SuperGrok 4 Heavy, is bottlenecked by their computational inefficiency, primarily stemming from the “overthinking phenomenon”—the propensity to generate unnecessarily long Chain-of-Thought (CoT) sequences ...
USBT learns RL policies that throttle LRM reasoning depth using uncertainty (semantic entropy) plus length penalties, yielding concise CoT. S‑GRPO adds early‑exit control with parallel search, cutting tokens and latency, maintaining accuracy.
['uncertainty-guided self-braking tuning (USBT)', 'adaptive inference', 'large reasoning models (LRMs)', 'reasoning depth control', 'uncertainty-aware reinforcement learning', 'semantic entropy (confidence)', 'chain-of-thought (CoT)', 'early exit', 'S‑GRPO', 'GRPO', 'reward shaping', 'length penalties', 'branch‑paralle...
/pdf/1d8bbfaefbf9f74be3ad138ee460fd623eaeb837.pdf
reinforcement learning
null
['ICLR.cc/2026/Conference/Submission25317/Authors']
b0gKCXLzuB
25,316
b0gKCXLzuB
Semi-Supervised Dataset Condensation with Dual Consistency Trajectory Matching
Dataset condensation synthesizes a small dataset that preserves the performance of training on the original, large-scale data. However, existing methods rely on fully labeled data, which limits their applicability in real-world scenarios where unlabeled data is abundant. To bridge this gap, we introduce a new task call...
null
['Dataset condensation', 'semi-supervised learning', 'knowledge distillation']
/pdf/e3ca34100fdf921e5d46aa121b8bc6fa66b78276.pdf
unsupervised, self-supervised, semi-supervised, and supervised representation learning
null
['ICLR.cc/2026/Conference/Submission25316/Authors']
4hKNGmjXVQ
25,315
4hKNGmjXVQ
Transformers as Unsupervised Learning Algorithms: A study on Gaussian Mixtures
The transformer architecture has demonstrated remarkable capabilities in modern artificial intelligence, among which the capability of implicitly learning an internal model during inference time is widely believed to play a key role in the understanding of pre-trained large language models. However, most recent works h...
null
['In-context learning', 'Gaussian Mixture Models', 'Theory']
/pdf/ac76cb9229e04c860ebb33e6b1c9aae67846983e.pdf
learning theory
/attachment/3069355987f017b7e648da33792cad0777138c80.zip
['ICLR.cc/2026/Conference/Submission25315/Authors']
c4ir92gYjv
25,313
c4ir92gYjv
Data-Efficient Generalization and Faster Initial Learning in Quantum Models for Classifying Cellular Activation States
Quantum computing is in its infancy. While it promises to solve some of the intractable problems of computing, real world application is scarce. It is mainly challenged by the hardware which are currently limited both in circuit width and depth. Finding a real world application with an advantage compared to classically...
This paper shows that for classifying cancerous cells from cytometric data, quantum models learn faster and generalize more effectively from limited data than classical neural networks, and their performance predictably scales as theory suggests.
['Quantum Machine Learning', 'Generalization Error', 'Data-Efficient Learning', 'Computational Biology', 'Quantum Neural Networks', 'Deep Learning']
/pdf/06952b01a910fe0985ec5472b51bf3cf12b0578f.pdf
other topics in machine learning (i.e., none of the above)
null
['ICLR.cc/2026/Conference/Submission25313/Authors']
MUnHOkaEFC
25,310
MUnHOkaEFC
From Uncertainty to Inconsistency: Open-Set RF Fingerprint Identification
The rejection of unknown devices outside the known categories is crucial for radio frequency fingerprint identification (RFFI). Current open-set recognition (OSR) methods rely on the uncertainty of the model output, where unknown classes exhibit low confidence and vice versa for known classes. However, we demonstrate t...
Inspired by an interesting observation that predictions for unknown classes across multiple models exhibit high inconsistency, while predictions for known classes show high consistency, we propose an inconsistency based open-set RFFI approach.
['Open-set recognition', 'radio frequency fingerprint identification', 'deep learning']
/pdf/3bad493dbb8cd98aaea968255ec493f1532939bf.pdf
other topics in machine learning (i.e., none of the above)
null
['ICLR.cc/2026/Conference/Submission25310/Authors']
KpvZ1kGOjH
25,307
KpvZ1kGOjH
EvoCF: Multi-agent Collaboration with Memory-guided Evolutionary Counterfactual Planning
Planning collaboration strategies for multi-agent embodied systems remains a core challenge for LLM-based planners, which often fail to capture the physical and coordination constraints of real-world environments. To address this, we present \textbf{EvoCF} (Evolutionary Counterfactual Planning), a memory-guided framewo...
null
['Multi-Agent Collaboration', 'Long-horizon Planning', 'Large Language Models']
/pdf/00a958c142661486301a48b13f1f3d1e831ce30f.pdf
applications to robotics, autonomy, planning
/attachment/7d2c284df23a2d24160b5de5683222e7fd1b7fa7.zip
['ICLR.cc/2026/Conference/Submission25307/Authors']
8QHxu9CGAB
25,306
8QHxu9CGAB
General Risk Measure meets Offline RL: Provably Efficient Risk-Sensitive Offline RL via Optimized Certainty Equivalent
We study risk-sensitive reinforcement learning (RL), which is crucial in scenarios involving uncertainty and potential adverse outcomes. However, existing works on risk-sensitive RL either only focus on a specific risk measure or overlook the offline RL setting. In this work, we investigate the provably efficient r...
null
['Reinforcement Learning', 'Offline RL', 'Risk-Sensitive', 'Optimized Certainty Equivalent', 'General Risk Measure']
/pdf/6b63426e170cdf90412cec29d4e7971c9c42cf3c.pdf
reinforcement learning
null
['ICLR.cc/2026/Conference/Submission25306/Authors']
kITJl37ULw
25,305
kITJl37ULw
BridgeRAG: A Framework for Reasoning over Partitioned Knowledge Graphs
Existing Knowledge Graph-based RAG (Retrieval-Augmented Generation) systems face a fundamental dilemma in multi-document scenarios. They either treat each document as an isolated knowledge graph, which preserves contextual purity but prevents cross-document reasoning, or merge them into a single, massive graph, leading...
null
['RAG', 'Knowledge Graphs', 'Multi-hop Question Answering', 'Multi-Document Reasoning', 'LLM Agents', 'Planned Navigation']
/pdf/51f5805fffc4fa55eb2d7ef9f890b997e3aefb09.pdf
foundation or frontier models, including LLMs
null
['ICLR.cc/2026/Conference/Submission25305/Authors']
nuHmMRmyFV
25,304
nuHmMRmyFV
Semantic Fragment Similarity Representation Learning for Information Retrieval
We introduce Semantic Fragment Similarity (SFS), a novel similarity metric designed to enhance representation quality by partitioning embeddings into non-overlapping fragments, computing fragment level similarity, and aggregating these local scores. Conventional similarity metrics compute relevance using the global vec...
We propose Semantic Fragment Similarity, a representation learning method that partitions embeddings and applies fragment-level contrastive learning, yielding semantically specialized representations, improving relevance and retrieval performance.
['Information Retrieval', 'Representation Learning', 'Sentence Embeddings', 'Fragment Similarity']
/pdf/2492db6374921a18c4bbbd735cdf833d32591b63.pdf
unsupervised, self-supervised, semi-supervised, and supervised representation learning
null
['ICLR.cc/2026/Conference/Submission25304/Authors']
R59Nk7DS3a
25,303
R59Nk7DS3a
FMGTranDD: A Deception Detection Method Based on Spatiotemporal Facial Abnormal Emotional Changes
While multimodal deception detection methods improve detection efficiency, they inevitably introduce higher data collection and processing costs. Deceptive behavior is often accompanied by emotional fluctuations such as tension, anxiety, and guilt, which can lead to contradictory, inconsistent, or suppressed emotional ...
null
['Emotion recognition', 'deception detection', 'facial emotion embedding sequence']
/pdf/e56bbe91b3df4a8d91445d55ca0e5b2d7f35bce8.pdf
applications to computer vision, audio, language, and other modalities
/attachment/f09a16675384d01324150c96c24a17a5f479a791.zip
['ICLR.cc/2026/Conference/Submission25303/Authors']
MS9nWFY7LG
25,302
MS9nWFY7LG
Q-RAG: Long Context Multi‑Step Retrieval via Value‑Based Embedder Training
Retrieval-Augmented Generation (RAG) methods enhance LLM performance by efficiently filtering relevant context for LLMs, reducing hallucinations and inference cost. However, most existing RAG methods focus on single-step retrieval, which is often insufficient for answering complex questions that require multi-step sear...
null
['Reinforcement Learning', 'RL', 'QA', 'Long-context', 'RAG', 'NLP']
/pdf/7875418351b10da4baeeeea9d900d57da2640f94.pdf
reinforcement learning
/attachment/ae9b9a32c651ef6aa99f966c793d9754a96a5033.zip
['ICLR.cc/2026/Conference/Submission25302/Authors']
1lLWZzikiT
25,300
1lLWZzikiT
Multi-objective Hyperparameter Optimization in the Age of Deep Learning
While Deep Learning (DL) experts often have prior knowledge about which hyperparameter settings yield strong performance, only a few Hyperparameter Optimization (HPO) algorithms can leverage such prior knowledge and none incorporate priors over multiple objectives. As DL practitioners often need to optimize not just one ...
We propose to use multi-objective expert priors to make hyperparameter optimization for expensive deep learning workloads feasible and show our algorithm PriMO achieves state-of-the-art performance in the multi-objective and single-objective setting.
['Hyperparameter Optimization', 'Multi-objective', 'Deep Learning']
/pdf/6f91078d016ee60ed80b3df8a88af7363fef73c3.pdf
optimization
null
['ICLR.cc/2026/Conference/Submission25300/Authors']
y3UkklvoW9
25,299
y3UkklvoW9
THEMIS: Towards Holistic Evaluation of MLLMs for Scientific Paper Fraud Forensics
We present **THEMIS**, a novel multi-task benchmark designed to comprehensively evaluate Multimodal Large Language Models (MLLMs) on visual fraud reasoning within real-world academic scenarios. Compared to existing benchmarks, THEMIS introduces three major advancements. (1) **Real-world Scenarios & Complexity**: Our b...
We present THEMIS, a holistic multi-task benchmark of over 4K questions derived from authentic retracted-paper cases and realistically simulated synthetic data, to systematically evaluate the fine-grained visual fraud reasoning abilities of MLLMs.
['Multimodal Large Language Model', 'Vision Fraud Reasoning', 'Scientific Paper Fraud Detection', 'Benchmark']
/pdf/1a0c9477a5233fbf5e0563f788de5ee5dd9505de.pdf
datasets and benchmarks
/attachment/7011ce3c6c4cd1f886f65284645bb19464ba55e8.zip
['ICLR.cc/2026/Conference/Submission25299/Authors']
XbVMiW0jTM
25,298
XbVMiW0jTM
PROBE: Benchmarking Reasoning Paradigm Overfitting in Large Language Models
The reliability of reasoning benchmarks for Large Language Models (LLMs) is threatened by overfitting, which leads to inflated scores that misrepresent true capability. While existing benchmarks focus on surface-level perturbations, they fail to detect a more profound form of overfitting where models memorize problem-s...
null
['Large Language Models', 'Benchmark Evaluation']
/pdf/33134f0a6fb57afd2677195eae07b55fad083822.pdf
datasets and benchmarks
/attachment/422d4f487bba035ccd90602dfa547b21a14ee8c5.zip
['ICLR.cc/2026/Conference/Submission25298/Authors']
eQtSuMQNtH
25,296
eQtSuMQNtH
Beyond Turn Limits: Training Deep Search Agents with Dynamic Context Window
While recent advances in reasoning models have demonstrated cognitive behaviors through reinforcement learning, existing approaches struggle to invoke deep reasoning capabilities in multi-turn agents with long-horizon interactions. We propose DeepMiner, a novel framework that elicits such abilities by introducing high-...
We present DeepMiner, a novel training framework that breaks the turn constraint in multi-turn search agents through dynamic context management.
['LLM', 'DeepResearch', 'Agent']
/pdf/b779fc86cfce2763d1e10fac7b37ed2e608038ab.pdf
foundation or frontier models, including LLMs
null
['ICLR.cc/2026/Conference/Submission25296/Authors']
4w9HzBBLRk
25,295
4w9HzBBLRk
Towards Multimodal Understanding, Reasoning, and Tool Usage across Vision, Speech, and Audio in Long Videos
Long-form, multimodal video understanding requires models to integrate vision, speech, and ambient audio while reasoning coherently over extended contexts. However, existing benchmarks often emphasize either long temporal contexts or rich multimodal content, but rarely both. Moreover, they are typically restricted to m...
STARBench is a human-validated benchmark for long-form multimodal video understanding, and STARAgent is an agentic pipeline for multimodal long video understanding, together exposing current state-of-the-art MLLMs’ limits
['multimodal', 'long-form video understanding', 'benchmark', 'agentic pipeline', 'question answering', 'scenario-driven QA']
/pdf/e9af7c3f35795c5d0d036af54a6e7031e5c42642.pdf
datasets and benchmarks
null
['ICLR.cc/2026/Conference/Submission25295/Authors']
wNAUAPfceN
25,294
wNAUAPfceN
Guided Star-Shaped Masked Diffusion
The performance of pre-trained masked diffusion models is often constrained by their sampling procedure, which makes decisions irreversible and struggles in low-step generation regimes. We introduce a novel sampling algorithm that works with pre-trained models and, after a lightweight fine-tuning of a single layer, sig...
We developed a new sampling algorithm that, with minimal fine-tuning, enables pre-trained diffusion models to self-correct, significantly boosting quality in few-step generation.
['Discrete Diffusion', 'Text Diffusion Models', 'Masked Diffusion', 'Guided Sampling']
/pdf/a3a60fcc0ae92a74480c02d92232c618b137c91d.pdf
generative models
/attachment/963d0a783d7d1a6479b54a88098d22d0cc665dce.zip
['ICLR.cc/2026/Conference/Submission25294/Authors']
riOevy2RwZ
25,292
riOevy2RwZ
Towards Text-Mask Consistency in Medical Image Segmentation
Vision-language models for medical image segmentation often produce masks that conflict with the accompanying text, especially under multi-site/multi-lesion descriptions. We trace this failure to two factors: (i) highly templated and repetitive clinical language causes one-to-one hard contrastive learning to yield nume...
null
['Medical image segmentation', 'Vision language models', 'Multimodal learning', 'Kolmogorov–Arnold Networks']
/pdf/e05da9928ef8ca6dc9c0d857b79e738dd17148dc.pdf
other topics in machine learning (i.e., none of the above)
/attachment/086874f5351d8b08c7774ba8b5507c5ac84f2171.zip
['ICLR.cc/2026/Conference/Submission25292/Authors']
MLZLdOwEpA
25,286
MLZLdOwEpA
AI Alignment with Provable Protection of Human Judgements
Reinforcement learning from human preference rankings forms the basis for training language models to be helpful and value-aligned. As these powerful AI systems are trained for increasingly high-stakes tasks, the risk of leaking sensitive human training data increases. However, the problem of protecting human preferenc...
null
['Alignment', 'RLHF', 'performance guarantees', 'asymptotic match']
/pdf/92a748e9119bf34cfc22f518404542aef9271b9b.pdf
alignment, fairness, safety, privacy, and societal considerations
null
['ICLR.cc/2026/Conference/Submission25286/Authors']
U30FO4wae8
25,284
U30FO4wae8
Entropy-driven Fair and Effective Federated Learning
Federated Learning (FL) enables collaborative model training across distributed devices while preserving data privacy. Nonetheless, the heterogeneity of edge devices often leads to inconsistent performance of the globally trained models, resulting in unfair outcomes among users. Existing federated fairness algorithms s...
We propose a fair FL algorithm that addresses the underexplored challenge of improving performance fairness while enhancing global accuracy, with theoretical and empirical demonstrations.
['fairness alignment', 'federated learning']
/pdf/6b387893a1e1da8333909721171641b166c97874.pdf
alignment, fairness, safety, privacy, and societal considerations
/attachment/b2f2b80d686cd5396ea3480f1a828698746e1a5f.zip
['ICLR.cc/2026/Conference/Submission25284/Authors']
PzCrvhSarX
25,283
PzCrvhSarX
HomeSafeBench: A Benchmark for Embodied Vision-Language Models in Free-Exploration Home Safety Inspection
Embodied agents can identify and report safety hazards in home environments. Accurately evaluating their capabilities in home safety inspection tasks is crucial, but existing benchmarks suffer from two key limitations. First, they oversimplify safety inspection tasks by using textual descriptions of the environment...
null
['Home Safety Inspection', 'Embodied Agent', 'Vision Language Model']
/pdf/0ca8c620e04a3eb10cce7b6073dbc6962cc10b99.pdf
datasets and benchmarks
null
['ICLR.cc/2026/Conference/Submission25283/Authors']
RuYwbd5xYa
25,282
RuYwbd5xYa
SCRAPL: Scattering Transform with Random Paths for Machine Learning
The Euclidean distance between differentiable wavelet scattering transform coefficients (known as paths) provides informative gradients for perceptual quality assessment of deep inverse problems in computer vision, speech, and audio processing. However, these transforms are computationally expensive when employed as ...
A stochastic optimization scheme for efficient perceptual quality assessment of deep inverse problems, implemented for differentiable joint time–frequency scattering, with applications to unsupervised sound matching of the Roland TR-808 drum machine.
['scattering transform', 'wavelets', 'stochastic optimization', 'ddsp', 'perceptual quality assessment']
/pdf/455213e4fdff77edc79ffb5719ed3403fdbdc52e.pdf
learning on time series and dynamical systems
null
['ICLR.cc/2026/Conference/Submission25282/Authors']
xalTjNXVHb
25,281
xalTjNXVHb
Where Redundancy Lives: Stage-Aware Block Saliency in Skip-Connected Models
Residual (skip-connected) architectures such as ResNets are widely used, yet the extent and structure of their inference-time redundancy remain unclear. We repurpose post-training block ablation as a diagnostic probe: we ablate residual blocks by replacing them with identity mappings, then measure the resulting accurac...
null
['Residual networks', 'Post-training pruning', 'Latency', 'Model compression']
/pdf/246ca2052f6285d4f411b00e7a6015a2fc7082a6.pdf
other topics in machine learning (i.e., none of the above)
/attachment/2407fbb43851ac2da687846cbd7475c2386869ca.pdf
['ICLR.cc/2026/Conference/Submission25281/Authors']
YtBJHVbxf8
25,279
YtBJHVbxf8
HEX: Merging Heavy-Hitters and Expanders for Adaptive KV Cache Optimization in Long-Context Inference
Key–Value (KV) caching accelerates large language model inference but grows linearly with sequence length, quickly exhausting GPU memory. Existing compression strategies such as quantization, pruning, or sparsification shrink this footprint, but often degrade performance. Most pruning methods discard crucial connection...
HEX combines expander-graph sparsity with dynamic token selection and quantization to compress KV caches, achieving strong accuracy–efficiency trade-offs for long-context inference.
['Large Language Models', 'Key-Value Caching', 'Efficient Inference', 'Memory Optimization', 'KV Cache Compression', 'Structural Sparsity', 'Expander Graphs', 'Long Context Inference', 'Heavy-Hitters']
/pdf/277d5640e73139b6e5b2c962c43be882d3b3ba0f.pdf
generative models
null
['ICLR.cc/2026/Conference/Submission25279/Authors']
Vogxs8BzJS
25,274
Vogxs8BzJS
CABA: A Collusive Aggregation-Emergent Backdoor Attack in Federated Learning
Federated Learning (FL) has been shown to be vulnerable to backdoor attacks conducted by malicious clients. Although many studies have enhanced the stealthiness and durability of backdoors, the full potential of collusive attacks in FL remains underexplored. Existing collusive attacks typically adopt a strategy where e...
null
['Collusive Backdoor Attack', 'Federated Learning']
/pdf/46e2e33e68d0e01ade1992996aca8725809aab39.pdf
other topics in machine learning (i.e., none of the above)
null
['ICLR.cc/2026/Conference/Submission25274/Authors']
BeLwO47iNn
25,270
BeLwO47iNn
A Function Centric Perspective on Flat and Sharp Minima
Flat minima are widely believed to correlate with improved generalisation in deep neural networks. However, this connection has proven more nuanced in recent studies, with both theoretical counterexamples and empirical exceptions emerging in the literature. In this paper, we revisit the role of sharpness in model perfo...
We investigate flat and sharp minima through a function-centric lens, characterising global minima in single-objective optimisation and scaling to large-scale tasks; we find that sharp minima, counterintuitively, can improve both generalisation and safety.
['Flat Minima', 'Sharp Minima', 'Generalisation', 'Function', 'Robustness', 'Calibration', 'Safety']
/pdf/d709d634f6f3f9f9e6abb63113271495565ae0cb.pdf
learning theory
null
['ICLR.cc/2026/Conference/Submission25270/Authors']
e3XLWHFrnr
25,264
e3XLWHFrnr
From Text to Talk: Audio-Language Model Needs Non-Autoregressive Joint Training
Recent advances in large language models (LLMs) have attracted significant interest in extending their capabilities to multimodal scenarios, particularly for speech-to-speech conversational systems. However, existing multimodal models handling interleaved audio and text rely on autoregressive methods, overlooking that ...
null
['Large Multimodal Models', 'Multi-token Prediction', 'Non-Autoregressive Learning']
/pdf/77f85376e5ef0aad208b40e86d4c896e89495109.pdf
foundation or frontier models, including LLMs
null
['ICLR.cc/2026/Conference/Submission25264/Authors']
h0xG4JmGOP
25,261
h0xG4JmGOP
GDEGAN: Gaussian Dynamic Equivariant Graph Attention Network for Ligand Binding Site Prediction
Accurate prediction of a given protein's binding sites, the regions to which ligands can bind, is a critical step in structure-based computational drug discovery. Recently, Equivariant Graph Neural Networks (GNNs) have emerged as a powerful paradigm for binding site identification due to the large-scale availability of ...
By recognizing that binding pockets have distinct statistical signatures, GDEGAN improves ligand binding site prediction by 37% and runs up to 20× faster than current methods at inference.
['equivariant gnns', 'protein ligand interaction', 'binding site identification', 'statistical attention']
/pdf/f36d94798ae3cbfc035572b8d7ffce9bb5f9bd89.pdf
applications to physical sciences (physics, chemistry, biology, etc.)
null
['ICLR.cc/2026/Conference/Submission25261/Authors']
dGZYYishs0
25,260
dGZYYishs0
TopoGuide: A Finetuning Framework for Topologically-Consistent 3D Molecule Generation
Equivariant diffusion models can generate high-quality 3D molecular geometries but often struggle with chemical validity due to a lack of explicit guidance from the 2D molecular graph. While prior works have addressed this by adding graph-based information to the model's input, this often increases architectural comple...
null
['Molecule Generation', 'Diffusion Models', 'Equivariant Neural Networks', 'Drug Discovery']
/pdf/cf9d698d8c2fd2d6aa216ab7a5c70970f1f72d92.pdf
applications to physical sciences (physics, chemistry, biology, etc.)
null
['ICLR.cc/2026/Conference/Submission25260/Authors']
QQp11zpm8M
25,258
QQp11zpm8M
Character Beyond Speech: Leveraging Role-Playing Evaluation in Large Audio Language Models via Reinforcement Learning
The advancement of multimodal large model technology has propelled the simulation of diverse characters in speech dialogue systems, establishing a novel interactive paradigm. Character attributes are manifested not only in textual responses but also through vocal features, with speech containing non-semantic informatio...
null
['Role-Playing Language Agents', 'Large Audio Language Models', 'Reinforcement Learning']
/pdf/49cf336bfa6ae012a5aeeb23ba14939ef0ad62e0.pdf
applications to computer vision, audio, language, and other modalities
null
['ICLR.cc/2026/Conference/Submission25258/Authors']
cqNAjXUBOV
25,257
cqNAjXUBOV
Tables2Traces: Distilling Tabular Data to Improve LLM Reasoning in Healthcare
Large language models (LLMs) excel at reasoning when fine-tuned on curated text corpora, but many domains, such as medicine, primarily store knowledge in structured tabular data. Despite its richness, tabular data has been largely overlooked as a source of reasoning supervision. Interpreting such data requires structur...
We convert tabular clinical data into reasoning traces that improve LLM medical question answering across domains.
['large language models', 'tabular data', 'healthcare', 'medicine']
/pdf/ece73eadbb7d312ff9edc26b94ef3ddb0be07036.pdf
applications to physical sciences (physics, chemistry, biology, etc.)
null
['ICLR.cc/2026/Conference/Submission25257/Authors']
rdf9BRHNql
25,253
rdf9BRHNql
TowerVision : Understanding and Improving Multilinguality in Vision-Language Models
Despite significant advances in vision-language models (VLMs), most existing work follows an English-centric design process, limiting their effectiveness in multilingual settings. In this work, we provide a comprehensive empirical study analyzing the impact of several multilingual design choices, such as training data ...
We introduce TowerVision, a VLM supporting both image and video, with improved multilingual capabilities explored via several ablations on data, base models, and vision encoders
['mutltilinguality', 'large language model', 'vision language models', 'multimodal models', 'image', 'video', 'cultural']
/pdf/1f559577292de0e5fa2bc621e877fff325aca1e2.pdf
applications to computer vision, audio, language, and other modalities
null
['ICLR.cc/2026/Conference/Submission25253/Authors']
170GODIkgT
25,252
170GODIkgT
SpecExtend: A Drop-in Enhancement for Speculative Decoding of Long Sequences
Speculative decoding is a widely used technique for accelerating inference in large language models (LLMs), but its performance degrades as input length grows, with significant drops even at moderate lengths. Yet, this early degradation has remained largely underexplored. We introduce SpecExtend, a drop-in enhancement ...
We propose SpecExtend, a drop-in enhancement that improves the performance of speculative decoding on long sequences without additional training.
['Efficient LLM', 'LLM Inference', 'Speculative Decoding', 'Long-context Inference']
/pdf/f5a61bd460e7eac4062df32c7c959658139fc749.pdf
generative models
/attachment/6c3536be6ff4b3a4f33edc58386bc7937538fbe8.zip
['ICLR.cc/2026/Conference/Submission25252/Authors']
GDA1yB6yDP
25,245
GDA1yB6yDP
Not Search, But Scan: Benchmarking MLLMs on Scan-Oriented Academic Paper Reasoning
With the rapid progress of multimodal large language models (MLLMs), AI already performs well at literature retrieval and certain reasoning tasks, serving as a capable assistant to human researchers, yet it remains far from autonomous research. The fundamental reason is that current work on scholarly paper reasoning is...
We present ScholScan, a scan-oriented benchmark for full-paper scholarly reasoning that requires models to build a paper-level evidence view; spanning 1,800 questions from 715 papers, it exposes MLLM gaps and shows RAG to be ineffective.
['Multimodal Large Language Models', 'Academic Paper Reasoning', 'Scan-Oriented Reasoning']
/pdf/49cc34d84563f82a0411d2ea1c053215d0925474.pdf
datasets and benchmarks
null
['ICLR.cc/2026/Conference/Submission25245/Authors']
bld5GVRad0
25,243
bld5GVRad0
InfoBlend: Storing and Reusing KV Caches of Multimodal Information without Positional Restriction
Context caching is currently employed by prevailing serving platforms to accelerate Multimodal Large Language Model (MLLM) inference. However, this approach merely reuses the Key-Value (KV) cache of the initial sequence of the prompt, resulting in full KV cache recomputation even if the prefix differs sli...
The KV cache can be reused without positional restriction, through partial recomputation.
['Multimodal Large Language Model', 'AI System', 'Position-Independent Caching']
/pdf/b0d06f97f9c6ad80b2f9594560c7ee8d676679a3.pdf
infrastructure, software libraries, hardware, systems, etc.
/attachment/7c1cdb642495362f339e608259469e73183144d6.zip
['ICLR.cc/2026/Conference/Submission25243/Authors']
WHVk2qoCIY
25,240
WHVk2qoCIY
Exposing Weak Links in Multi-Agent Systems under Adversarial Prompting
LLM-based agents are increasingly deployed in multi-agent systems (MAS). As these systems move toward real-world applications, their security becomes paramount. Existing research largely evaluates single-agent security, leaving a critical gap in understanding the vulnerabilities introduced by multi-agent design. Howeve...
We introduce SafeAgents, a framework for evaluating security vulnerabilities in multi-agent LLM systems, revealing that popular architectures contain significant security flaws stemming from design choices like autonomy levels and context sharing.
['Multi-agent systems', 'Vulnerability Attacks', 'Security']
/pdf/4d6aaacd05f8e26949a2377c2ded7905fe48cbd5.pdf
alignment, fairness, safety, privacy, and societal considerations
null
['ICLR.cc/2026/Conference/Submission25240/Authors']
CVqYCYpq75
25,238
CVqYCYpq75
Dem-HEC: High-Entropy Contrastive Fine-Tuning for Countering Natural Corruptions
Neural networks are highly susceptible to natural image corruptions such as noise, blur, and weather distortions, limiting their reliability in real-world deployment. The prime reason to maintain high integrity against natural corruptions is that these distortions are a primary force of distribution shift intenti...
null
['Corruption', 'Convolution', 'Transformer', 'Robustness', 'Explainability']
/pdf/a783c6effb58647d3f7a801d502180adcc642fb8.pdf
alignment, fairness, safety, privacy, and societal considerations
null
['ICLR.cc/2026/Conference/Submission25238/Authors']