Column schema (field: type, value/length range):

id: string, length 10
number: int64, 1 to 25.6k
forum: string, length 10
title: string, length 5 to 214
abstract: string, length 26 to 4.31k
content_TLDR: string, length 1 to 250
content_keywords: string, length 6 to 1.02k
content_pdf: string, length 49
content_primary_area: string, 21 classes
content_supplementary_material: string, length 56
signatures: string, length 47 to 51
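The schema above can be read as per-field length constraints on each record. A minimal sketch of validating one record against those bounds, assuming the field names from the schema (the `SCHEMA` dict and `validate` helper below are illustrative, not part of the dataset tooling):

```python
# Length bounds taken from the column schema above (upper bounds expanded
# from the abbreviated "4.31k"/"1.02k" forms). Fields absent here, such as
# number and content_primary_area, are not length-constrained strings.
SCHEMA = {
    "id": (10, 10),
    "forum": (10, 10),
    "title": (5, 214),
    "abstract": (26, 4310),
    "content_TLDR": (1, 250),
    "content_keywords": (6, 1020),
    "content_pdf": (49, 49),
}

def validate(record: dict) -> list[str]:
    """Return the fields whose string length falls outside the schema bounds."""
    errors = []
    for field, (lo, hi) in SCHEMA.items():
        value = record.get(field)
        if value is None:  # nullable fields (e.g. content_TLDR) may be null
            continue
        if not (lo <= len(value) <= hi):
            errors.append(field)
    return errors

# Example drawn from the first record below (abstract shortened).
record = {
    "id": "m8sPQEd71W",
    "forum": "m8sPQEd71W",
    "title": "Unified Multimodal Model as Auto-Encoder",
    "abstract": "The pursuit of unified multimodal models (UMMs) ...",
    "content_TLDR": None,
    "content_keywords": "['Multimodal', 'Unified Multimodal Model']",
    "content_pdf": "/pdf/61fc10b43f944f5731b7129602b602d5f0ec06d5.pdf",
}
print(validate(record))  # []
```

Note that `content_pdf` has a fixed length of 49, consistent with a `/pdf/` prefix, a 40-character SHA-1 hex digest, and a `.pdf` suffix.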
m8sPQEd71W
104
m8sPQEd71W
Unified Multimodal Model as Auto-Encoder
The pursuit of unified multimodal models (UMMs) has long been hindered by a fundamental schism between multimodal understanding and generation. Current approaches typically disentangle the two and treat them as separate endeavors with disjoint objectives, missing the mutual benefits. We argue that true unification requ...
Exploring the synergy between visual generation and perception by formulating the unified multimodal model as an autoencoder.
['Multimodal', 'Unified Multimodal Model', 'Generative Model']
/pdf/61fc10b43f944f5731b7129602b602d5f0ec06d5.pdf
applications to computer vision, audio, language, and other modalities
null
['ICLR.cc/2026/Conference/Submission104/Authors']
74M7InKlVs
103
74M7InKlVs
C$^3$-Bench: Evaluating and Achieving Controllable Code Completion in Code LLM
Code completion has become a central task, gaining significant attention with the rise of large language model (LLM)-based tools in software engineering. Although recent advances have greatly improved LLMs' code completion abilities, evaluation methods have not advanced equally. Most current benchmarks focus solely on ...
We created C³-Bench, a new benchmark for code LLMs that tests both code correctness and instruction following, revealing gaps in current models, and we developed a better-performing solution through automated training data generation.
['Large Language Models', 'Code Language Models', 'Code Completion', 'Instruction Following']
/pdf/2365463e7c923ffa6529d3000c4c06c547b44ea5.pdf
other topics in machine learning (i.e., none of the above)
null
['ICLR.cc/2026/Conference/Submission103/Authors']
eAge74DIgk
101
eAge74DIgk
LitExplorer: Training-Free Diffusion Guidance with Adaptive Exploration-Filtering Framework
Diffusion models possess strong general generative capabilities, yet they remain insufficient when aligned with specific target objectives. Fine-tuning methods can enhance alignment but incur high training costs and face the risk of reward hacking. Consequently, training-free guidance mechanisms have emerged, which lev...
null
['Diffusion Model;Training-free']
/pdf/1c6b6c00941091ec239340ef422fc7d9f01f4462.pdf
applications to computer vision, audio, language, and other modalities
/attachment/2b27b42c94f12c88f9c49a8e2c11c1adbec795e8.zip
['ICLR.cc/2026/Conference/Submission101/Authors']
FGkknrhv09
100
FGkknrhv09
Curing "Miracle Steps'' in LLM Math Reasoning with Rubric Rewards
Large language models for mathematical reasoning are typically trained with outcome-based rewards, which credit only the final answer. In our experiments, we observe that this paradigm is highly susceptible to reward hacking, leading to a substantial overestimation of a model's reasoning ability. This is evidenced by...
This paper diagnoses how LLMs achieve correct math answers with flawed logic ("false positives") and introduces a "Rubric Reward Model" that rewards the entire problem-solving process to build more trustworthy and accurate reasoners.
['faithful chain-of-thought', 'math reasoning', 'false positive', 'rubric']
/pdf/365c5050a2ce26e04b0f1c843f16e9a72f9c704f.pdf
alignment, fairness, safety, privacy, and societal considerations
null
['ICLR.cc/2026/Conference/Submission100/Authors']
I88toT6Leg
99
I88toT6Leg
The PIMMUR Principles: Ensuring Validity in Collective Behavior of LLM Societies
Large Language Models (LLMs) are increasingly used for social simulation, where populations of agents are expected to reproduce human-like collective behavior. However, we find that many recent studies adopt experimental designs that systematically undermine the validity of their claims. From a survey of over 40 papers...
null
['Large Language Model', 'Multi-Agent System', 'Social Simulation', 'Social Science']
/pdf/69878b43abed6ff5ad1c4ca4539e64eb75e06895.pdf
alignment, fairness, safety, privacy, and societal considerations
/attachment/b67110f6633e800df1fd66d725185552fa32de05.zip
['ICLR.cc/2026/Conference/Submission99/Authors']
5HHkCSVHaU
98
5HHkCSVHaU
Teaching LLMs According to Their Aptitude: Adaptive Switching Between CoT and TIR for Mathematical Problem Solving
Existing supervised fine-tuning (SFT) approaches to enhance the mathematical reasoning of large language models (LLMs) rely either on Chain-of-Thought (CoT) for generalizability or Tool-Integrated Reasoning (TIR) for precise computation. While efforts have been made to combine these methods, they primarily rely on post...
We propose TATA, an adaptive framework that enables LLMs to spontaneously tailor their reasoning strategy to different problems, aligning it with their intrinsic aptitude.
['Large Language Models', 'math QA', 'chain-of-thought', 'tool-integrated reasoning', 'fine-tuning']
/pdf/d166d32d51c34eb2be6da6ef8e733c286e3e78a7.pdf
foundation or frontier models, including LLMs
null
['ICLR.cc/2026/Conference/Submission98/Authors']
GymjF88oGQ
97
GymjF88oGQ
The Pensieve Paradigm: Stateful Language Models with Learned Memory Management
In the world of Harry Potter, when Dumbledore's mind is overburdened, he extracts memories into a Pensieve to be revisited later. In the world of AI, while we possess the Pensieve—mature databases and retrieval systems—our models inexplicably lack the "wand" to operate it. They remain like a Dumbledore without agency,...
null
['LLM', 'memory management']
/pdf/d411b45856f6dfaf3ae0c24c5b9aa995014326ba.pdf
foundation or frontier models, including LLMs
/attachment/bceafcddb1daa855bd0be813fc8c88bb16a1e0ff.zip
['ICLR.cc/2026/Conference/Submission97/Authors']
NSjAYTNB11
95
NSjAYTNB11
PlotCraft: Pushing the Limits of LLMs for Complex and Interactive Data Visualization
Recent Large Language Models (LLMs) have demonstrated remarkable proficiency in code generation. However, their ability to create complex visualizations for scaled and structured data remains largely unevaluated and underdeveloped. To address this gap, we introduce \textbf{PlotCraft}, a new benchmark featuring 1k chall...
LLMs are bad at complex charts. We built a small, specialized model, PlotCraftor, that fixes this and is now state-of-the-art.
['Large Language Model; Code Generation; Data Visualization']
/pdf/ff4d59f420150b9719d3866dffd007b2331fcf54.pdf
datasets and benchmarks
null
['ICLR.cc/2026/Conference/Submission95/Authors']
9aB3BWye1j
92
9aB3BWye1j
PairedContrast: A Multimodal Benchmark for Medical Image Translation
Contrast media play a pivotal role in radiological imaging, as they amplify lesion conspicuity and improve detection in the diagnosis of tumor-related diseases. However, depending on the patient’s health condition or the medical resources available, the use of contrast media is not always feasible. Recent work has t...
null
['benchmark; pan-cancer; paired datasets; medical image translation; contrast media']
/pdf/dea3b2acd9ac51578b6ec8fb77b1aa575911de9e.pdf
datasets and benchmarks
null
['ICLR.cc/2026/Conference/Submission92/Authors']
8pi1rP71qv
91
8pi1rP71qv
FlyPrompt: Brain-Inspired Random-Expanded Routing with Temporal-Ensemble Experts for General Continual Learning
General continual learning (GCL) challenges intelligent systems to learn from single-pass, non-stationary data streams without clear task boundaries. While recent advances in continual parameter-efficient tuning (PET) of pretrained models show promise, they typically rely on multiple training epochs and explicit task c...
We propose FlyPrompt, a brain-inspired method that uses random-expanded routing and temporal-ensemble experts to effectively tackle the General Continual Learning problem, achieving significant gains on major benchmarks.
['Continual Learning', 'Life-long Learning', 'Brain-inspired AI', 'Catastrophic Forgetting', 'Prompt Tuning']
/pdf/9bde35abdb2f177c878cde658e6f42cb93590032.pdf
transfer learning, meta learning, and lifelong learning
/attachment/a502549ab359383dbaa373fb0cb2e6c40e6ff16f.zip
['ICLR.cc/2026/Conference/Submission91/Authors']
XHzrBDzKaX
88
XHzrBDzKaX
Castle-in-the-Air: Evaluating MLLM Visual Abilities on Human Cognitive Benchmarks
Despite significant progress on popular multimodal benchmarks, state-of-the-art Multimodal Large Language Models (MLLMs) continue to struggle with basic visual reasoning tasks that are trivially solved by humans, such as recognizing abstract patterns or identifying spatial relationships. Such deficiencies undermine the...
null
['Multimodal Large Language Model', 'Vision Language Model', 'Cognition', 'Evaluation']
/pdf/1c1f48dc0ef033ef1f5986cdd84c20217453d3fc.pdf
applications to computer vision, audio, language, and other modalities
/attachment/70605ccf308eee0a1323bf598602ed76ea43a554.zip
['ICLR.cc/2026/Conference/Submission88/Authors']
EXFKk4Y3yc
87
EXFKk4Y3yc
Spilled Energy in Large Language Models
We reinterpret the final softmax classifier over the vocabulary of Large Language Models (LLM) as an Energy-based Model (EBM). This allows us to decompose the chain of probabilities used in sequence-to-sequence modeling as multiple EBMs that interact together at inference time. Our decomposition offers a principled app...
We recast the LLM softmax as an Energy-Based Model, introducing training-free energy measures to detect hallucinations. Our method pinpoints errors, generalizes across tasks, and shows robust results on nine benchmarks.
['LLM', 'hallucination detection', 'EBM']
/pdf/c7f4a295dde283e8da45345b35965fcf90a31fbf.pdf
alignment, fairness, safety, privacy, and societal considerations
null
['ICLR.cc/2026/Conference/Submission87/Authors']
6XvqXQq0ae
86
6XvqXQq0ae
NextLocMoE: Enhancing Next Location Prediction via Location-Semantics Mixture-of-Experts and Personalized Mixture-of-Experts
Next location prediction is a key task in human mobility modeling. Existing methods face two challenges: (1) they fail to capture the multi-faceted semantics of real-world locations; and (2) they struggle to model diverse behavioral patterns across user groups. To address these issues, we propose NextLocMoE, a large la...
We propose NextLocMoE, a Mixture-of-Experts LLM framework for next-location prediction, which jointly models location semantics and behavioral preferences via dual expert modules and history-aware routing.
['next location prediction', 'Mixture-of-Experts', 'Large Language Model', 'Location Function MoE', 'Persona MoE']
/pdf/f5b63891a6c4d26f62a5d31b7d29da7969c92e8c.pdf
foundation or frontier models, including LLMs
null
['ICLR.cc/2026/Conference/Submission86/Authors']
i4BiQK5Ndw
83
i4BiQK5Ndw
TopoMHC: Sequence–Topology Fusion for MHC Binding
Accurate prediction of peptide immunogenicity, particularly the binding affinity to major histocompatibility complex (MHC) molecules, is critical for vaccine design and immunotherapy. Existing approaches are predominantly sequence-based and often overlook structural variability and topological organization, which restr...
null
['immunogenicity prediction', 'major histocompatibility complex', 'peptide representation learning', 'statistical topology', 'persistent homology', 'protein language models', 'cross-modal learning', 'vaccine design']
/pdf/73d717f9219d719e35f3d8e629d5634b1dee6df2.pdf
applications to physical sciences (physics, chemistry, biology, etc.)
null
['ICLR.cc/2026/Conference/Submission83/Authors']
Tp70ig4iKN
80
Tp70ig4iKN
Seeing Before Reasoning: A Unified Framework for Generalizable and Explainable Fake Image Detection
Detecting AI-generated images with multimodal large language models (MLLMs) has gained increasing attention, due to their rich world knowledge, common-sense reasoning, and potential for explainability. However, naively applying those MLLMs for detection often leads to suboptimal performance. We argue that the root of t...
We propose a unified MLLM-based framework that simultaneously perceives low-level artifacts and reasons dialectically about high-level plausibility, without reliance on external detectors.
['AI-Generated Image Detection', 'MLLM', 'Media Forensics']
/pdf/0f0450b32e796e0cde2b002e3c20ad8a749d6c10.pdf
applications to computer vision, audio, language, and other modalities
null
['ICLR.cc/2026/Conference/Submission80/Authors']
NlMXI17iou
77
NlMXI17iou
Reordered SparseGPT: Optimizing the Pruning Order in Second-Order LLM Pruning
Pruning is widely recognized as an effective method for reducing the parameters of large language models (LLMs), potentially leading to more efficient inference. One classic and prominent path of one-shot LLM pruning is to leverage the second-order gradients (i.e., Hessian), represented by the pioneering works like Spa...
This paper presents a new SoTA Hessian-based one-shot LLM pruning algorithm, which can be applied to unstructured and semi-structured sparsities.
['LLM', 'Network Pruning', 'Hessian-based Pruning']
/pdf/af7361eb2c49fd861f47a41b43506dee223d3eb4.pdf
foundation or frontier models, including LLMs
null
['ICLR.cc/2026/Conference/Submission77/Authors']
oKyDZabG0I
74
oKyDZabG0I
More Than a Snapshot: Forcing Temporal Reasoning in Video Segmentation
Video Reasoning Segmentation (VRS) inherits settings that reason over world knowledge and spatial content, but lacks queries demanding temporal reasoning grounded in the unique temporal dynamics of videos. To bridge the gap, we introduce TempVRS, a large-scale Temporal Video Reasoning Segmentation dataset con...
null
['Video Reasoning Segmentation', 'Temporal Dynamics']
/pdf/57772ac96c8fcc2c882888bf4e50ebcd74e67222.pdf
foundation or frontier models, including LLMs
null
['ICLR.cc/2026/Conference/Submission74/Authors']
RKYO6R8Jgb
72
RKYO6R8Jgb
Thinking-Free Policy Initialization Makes Distilled Reasoning Models More Effective and Efficient Reasoners
Reinforcement Learning with Verifiable Reward (RLVR) effectively solves complex tasks but demands extremely long context lengths during training, leading to substantial computational costs. While multi-stage training can partially mitigate this, starting with overly short contexts often causes irreversible performance ...
We propose Thinking-Free Policy Initialization, a stage prior to RL that can accelerate RL convergence to a higher performance ceiling and naturally yield reasoning-efficient models.
['Large Language Models', 'Reasoning', 'Reinforcement Learning with Verifiable Rewards', 'Long Chain-of-Thought']
/pdf/9485752602f24c1d423333799dadade407c91cf6.pdf
foundation or frontier models, including LLMs
null
['ICLR.cc/2026/Conference/Submission72/Authors']
WEg7e5pcso
70
WEg7e5pcso
ABConformer: Physics‑inspired Sliding Attention for Antibody-Antigen Interface Prediction
Accurate prediction of antibody-antigen (Ab-Ag) interfaces is critical for vaccine design, immunodiagnostics and therapeutic antibody development. However, achieving reliable predictions from sequences alone remains a challenge. In this paper, we present \textsc{ABConformer}, a model based on the Conformer backbone tha...
null
['Antibody–antigen interface prediction', 'Protein sequence modeling', 'Conformer', 'Sliding attention mechanism', 'Epitope prediction', 'Paratope prediction', 'Structural bioinformatics']
/pdf/38039f8f48fb41930fb9d9ea4cf56c01bf411aab.pdf
applications to physical sciences (physics, chemistry, biology, etc.)
/attachment/76811f6954e6a1df174951d8ce851b45a4a300af.zip
['ICLR.cc/2026/Conference/Submission70/Authors']
84vy8ZomFn
68
84vy8ZomFn
Breaking Scale Anchoring: Frequency Representation Learning for Accurate High-Resolution Inference from Low-Resolution Training
Zero-Shot Super-Resolution Spatiotemporal Forecasting requires a deep learning model to be trained on low-resolution data and deployed for inference on high-resolution. Existing studies consider **maintaining** similar error across different resolutions as indicative of successful multi-resolution generalization perfor...
null
['Scale Anchoring', 'Zero-Shot Super-Resolution', 'Spatiotemporal Forecasting', 'Frequency Representation']
/pdf/6dab9cd5dfc2a8ac07dbb4dda69abb99c96e651c.pdf
applications to physical sciences (physics, chemistry, biology, etc.)
/attachment/5c15803f970e08058eb5c6c9ec1fd16dadd86cb9.zip
['ICLR.cc/2026/Conference/Submission68/Authors']
Y9b5UuGi9O
66
Y9b5UuGi9O
CAI: Caption-Sensitive Attention Intervention for Mitigating Object Hallucination in Large Vision-Language Models
Although Large Vision-Language Models (LVLMs) have demonstrated remarkable performance on downstream tasks, they frequently produce contents that deviate from visual information, leading to object hallucination. To tackle this, recent works mostly depend on expensive manual annotations and training cost, or decoding st...
We propose Caption-sensitive Attention Intervention (CAI), a training-free method that refines the outputs of caption-sensitive attention heads during inference to enhance fine-grained visual perception and mitigate object hallucination.
['Large Vision-Language Model', 'Hallucination']
/pdf/e1c8340e562f9d274c2e634e4f49374ce76b0d78.pdf
foundation or frontier models, including LLMs
null
['ICLR.cc/2026/Conference/Submission66/Authors']
CuzTXLB7Jz
65
CuzTXLB7Jz
OmniSAT: Compact Action Token, Faster Auto Regression
Existing Vision-Language-Action (VLA) models can be broadly categorized into diffusion-based and auto-regressive (AR) approaches: diffusion models capture continuous action distributions but rely on computationally heavy iterative denoising. In contrast, AR models enable efficient optimization and flexible sequence con...
null
['Imitation Learning; Action Representation; Vision-Language-Action Learning']
/pdf/ccc987a5b4b404f0a409b34c2eba4139a884ce88.pdf
applications to robotics, autonomy, planning
null
['ICLR.cc/2026/Conference/Submission65/Authors']
jov79sMFHn
64
jov79sMFHn
NANO3D: A Training-Free Approach for Efficient 3D Editing Without Masks
3D object editing is essential for interactive content creation in gaming, animation, and robotics, yet current approaches remain inefficient, inconsistent, and often fail to preserve unedited regions. Most methods rely on editing multi-view renderings followed by reconstruction, which introduces artifacts and limits p...
null
['3D Computer Vision', '3D Editing', '3D Generation', 'Flow', 'Image Editing']
/pdf/cbf3e28722c3010620160fa33672819483eba27a.pdf
generative models
null
['ICLR.cc/2026/Conference/Submission64/Authors']
9qOF3zgVfa
63
9qOF3zgVfa
A Needle In A Haystack: Referring Hour-Level Video Object Segmentation
Long-term videos spanning minutes are ubiquitous in daily life, while existing Referring Video Object Segmentation (RVOS) datasets are limited to short-term videos with a duration of only 5-60 seconds. To extend referring object segmentation to hour-level videos, we construct the first Hour-level Referr...
null
['Referring Video Object Segmentation', 'Hierarchical Memory']
/pdf/3b643a86f53d4d2476c0f3ea238941b545fde51e.pdf
applications to computer vision, audio, language, and other modalities
null
['ICLR.cc/2026/Conference/Submission63/Authors']
tw1IWcVKTT
62
tw1IWcVKTT
Automated Optimization Modeling via a Localizable Error-Driven Perspective
Automated optimization modeling via Large Language Models (LLMs) has emerged as a promising approach to assist complex human decision-making. While post-training has become a pivotal technique to enhance LLMs' capabilities in this domain, its effectiveness is severely constrained by the scarcity and underutilization of...
null
['LLM post-training', 'automated optimization modeling']
/pdf/23fb085ea34ec9c3758c3b82f1b0675987c4f205.pdf
applications to computer vision, audio, language, and other modalities
null
['ICLR.cc/2026/Conference/Submission62/Authors']
AaZVrbElhC
61
AaZVrbElhC
CaRe-BN: Precise Moving Statistics for Stabilizing Spiking Neural Networks in Reinforcement Learning
Spiking Neural Networks (SNNs) offer low-latency and energy-efficient decision-making on neuromorphic hardware by mimicking the event-driven dynamics of biological neurons. However, due to the discrete and non-differentiable nature of spikes, directly trained SNNs rely heavily on Batch Normalization (BN) to stabilize g...
null
['Spiking Neural Networks', 'Batch Normalization', 'Reinforcement Learning']
/pdf/c059da07546cb4a9c34c3abff3df59e0351f2515.pdf
applications to neuroscience & cognitive science
/attachment/d24c798f75e7488595b21d7268076fb8c487bb43.zip
['ICLR.cc/2026/Conference/Submission61/Authors']
Fa3C0TkWYi
60
Fa3C0TkWYi
RectiWeather: Photo-Realistic Adverse Weather Removal via Zero-shot Soft Weather Perception and Rectified Flow
Despite significant progress in Adverse Weather Removal (AWR), challenges remain in applying existing methods to real-world scenarios and in generating photo-realistic and visually compelling outcomes. The limited generalization of current approaches can be attributed to their inability to accurately perceive complex d...
null
['zero-shot', 'soft perception', 'rectified flow']
/pdf/0ccbf8172fd9da11e5a1c3badd0efedef04b4355.pdf
applications to computer vision, audio, language, and other modalities
null
['ICLR.cc/2026/Conference/Submission60/Authors']
hQhqq6G3Be
58
hQhqq6G3Be
Adaptive Text and Feature Embedding for Consistent Story Generation
Recent advancements in text-to-image (T2I) generation have significantly improved image quality and text alignment. However, generating multiple coherent images that maintain consistent character identities across diverse textual descriptions remains challenging. Existing methods face trade-offs between identity consis...
null
['consistent generation']
/pdf/8d375ec00fc86c1fb6e13bf50e2685577220a456.pdf
generative models
null
['ICLR.cc/2026/Conference/Submission58/Authors']
SGsxxbAjXH
53
SGsxxbAjXH
MVCustom: Multi-View Customized Diffusion via Geometric Latent Rendering and Completion
Multi-view generation with camera pose control and prompt-based customization are both essential elements for achieving controllable generative models. However, existing multi-view generation models do not support customization with geometric consistency, whereas customization models lack explicit viewpoint control, ...
null
['Multi-view generation', 'Customization', 'Personalization']
/pdf/7402b82185602eb505889e6c56ce19060b583db8.pdf
generative models
/attachment/64288fe47f2bb519516b57e495715432940c8b78.zip
['ICLR.cc/2026/Conference/Submission53/Authors']
eGI1HQeCmn
51
eGI1HQeCmn
ImmunoTrace: A Meta-Agent for Immune History Tracking
The adaptive immune system encodes an individual's exposure history in the T-cell receptor (TCR) repertoire. We present ImmunoTrace, an AI agent for immune history tracking that estimates past pathogen exposure from a single time-point repertoire by linking TCRs and HLA alleles to proteome-scale peptide libraries. A sh...
ImmunoTrace is an AI agent that links a single-time-point TCR repertoire (with optional HLA) to proteome-scale peptide libraries.
['AI Agent', 'Retrieval-Augmented Modeling', 'Contrastive Learning', 'Probabilistic Evidence Fusion', 'Immune Exposure']
/pdf/d1ffbbfed5979176e21ac50a4ef3cc142581e5b4.pdf
applications to physical sciences (physics, chemistry, biology, etc.)
/attachment/0400c6e99d42ec68b2906e04d70169648f6a2e03.zip
['ICLR.cc/2026/Conference/Submission51/Authors']
8IjxLiNXL1
49
8IjxLiNXL1
Memory Forgetting Adapter Sculpting for Selective Multimodal Large Language Model Unlearning
Multimodal Large Language Models (MLLMs) achieve remarkable capabilities but can inadvertently memorize privacy-sensitive information. Existing unlearning methods can remove such knowledge, yet they often degrade the model’s general image understanding. To address this, we propose the Sculpted Memory Forgetting Adapter...
null
['MLLMs', 'Machine Unlearning', 'MLLM Unlearning', 'Privacy Protection']
/pdf/5b82a24c81db1a9f2c82edacb3914001b9b28546.pdf
foundation or frontier models, including LLMs
null
['ICLR.cc/2026/Conference/Submission49/Authors']
PaYo96rjij
44
PaYo96rjij
Lifelong Embodied Navigation Learning
Embodied navigation agents powered by large language models have shown strong performance on individual tasks but struggle to continually acquire new navigation skills, suffering from catastrophic forgetting. We formalize this challenge as lifelong embodied navigation learning (LENL), where an agent is required to a...
We propose Uni-Walker, a lifelong embodied navigation framework that decouples navigation knowledge into task-shared and task-specific components with Decoder Extension LoRA (DE-LoRA).
['Embodied Navigation', 'Lifelong Learning', 'Robotics Learning']
/pdf/a2c3cf69753a38670628cc736ba09431d8cd98fc.pdf
applications to robotics, autonomy, planning
/attachment/27ecb2511cb145533bcdfaf495bc8e661f073efd.zip
['ICLR.cc/2026/Conference/Submission44/Authors']
QYH7JGzEzM
43
QYH7JGzEzM
GrapHist: Large-Scale Graph Self-Supervised Learning for Histopathology
Self-supervised vision models have achieved notable success in digital pathology. However, their domain-agnostic transformer architectures are not designed to inherently account for fundamental biological elements of histopathology images, namely cells and their complex interactions. In this work, we hypothesize that a...
null
['graph representation learning', 'digital pathology']
/pdf/2052ad1273f1ab95b7b4c3bccd593425b3377553.pdf
applications to physical sciences (physics, chemistry, biology, etc.)
/attachment/a9efe21b02e5834a998f7c3922d90bfef6a411fa.zip
['ICLR.cc/2026/Conference/Submission43/Authors']
cnrhmiw1VG
39
cnrhmiw1VG
GLEAM: Learning to Match and Explain in Cross-View Geo-Localization
Cross-View Geo-Localization (CVGL) focuses on identifying correspondences between images captured from distinct perspectives of the same geographical location. However, existing CVGL approaches are typically restricted to a single view or modality, and their direct visual matching strategy lacks interpretability: they ...
This work presents GLEAM-C and GLEAM-X, a unified pipeline that advances cross-view geo-localization by integrating multi-view alignment with interpretable, explainable reasoning.
['Remote Sensing', 'Cross-View Geo-Localization', 'Multimodal Large Language Model']
/pdf/7a130beba23634a98a969092af6d39b7b1dbd331.pdf
foundation or frontier models, including LLMs
/attachment/3649d00816711f2efb443d6c95c2566a816df980.zip
['ICLR.cc/2026/Conference/Submission39/Authors']
15HYjY5ol7
37
15HYjY5ol7
An AI Agent for Immune Receptor Fingerprint‑Based Diagnosis of Infection of Unknown Origin
When routine tests fail to find a pathogen, diagnosing infections of unknown origin stalls. We instead read the patient's immune response for AI-readable clues. We formalize a new machine learning task: inferring plausible epitopes directly from immune-receptor repertoires and localizing their pathogen sources. To addr...
Generative allele-aware epitope inference plus proteome retrieval turns TCR “fingerprints” into ranked pathogen hypotheses with calibrated confidence for IUO diagnosis.
['AI Agent', 'multi-task representation learning', 'Conditional sequence generation', 'Immune repertoire modeling', 'Epitope inference', 'Clinical diagnostics']
/pdf/c22586b13406a373b84019d98c4949f7c95ef57b.pdf
applications to physical sciences (physics, chemistry, biology, etc.)
/attachment/3ccac7098c85ede22a372fedd3bed2c138d4049a.zip
['ICLR.cc/2026/Conference/Submission37/Authors']
GjsE9C1grt
36
GjsE9C1grt
Nonlinear Steering for Token-Efficient Reasoning in LLMs via Flow Matching
Large Reasoning Models (LRMs) excel at complex reasoning tasks, but their efficiency is often hampered by overly verbose outputs. Prior steering methods attempt to address this issue by applying a single, global vector to hidden representations—a rigid approach grounded in the restrictive *linear representation hypothe...
This paper introduces a nonlinear steering method using Flow Matching to transform verbose reasoning paths into concise ones, achieving superior accuracy and token efficiency in LLMs.
['representation steering; large reasoning models; LRMs; large language models; LLMs; efficient reasoning; flow matching']
/pdf/3421f23aa0576a1a0ef1db91cfc97936c8c749b3.pdf
foundation or frontier models, including LLMs
null
['ICLR.cc/2026/Conference/Submission36/Authors']
sE8DCSJTzd
35
sE8DCSJTzd
Exploration v.s. Exploitation: Rethinking RLVR through Clipping, Entropy, and Spurious Reward
This paper examines the exploration–exploitation trade-off in reinforcement learning with verifiable rewards (RLVR), a framework for improving the reasoning of Large Language Models (LLMs). Recent studies suggest that RLVR can elicit strong mathematical reasoning in LLMs through two seemingly paradoxical mechanisms: \t...
null
['Reinforcement Learning with Verifiable Rewards', 'Group Relative Policy Optimization', 'LLM Reasoning']
/pdf/cb6d1e97c04de37d8f35dd44516f78647f047f46.pdf
foundation or frontier models, including LLMs
null
['ICLR.cc/2026/Conference/Submission35/Authors']
6eSNG1VNkl
33
6eSNG1VNkl
SEMA: Simple yet Effective Learning for Multi-Turn Jailbreak Attacks
Multi-turn jailbreaks capture the real threat model for safety-aligned chatbots, where single-turn attacks are merely a special case. Yet existing approaches break under exploration complexity and intent drift. We propose SEMA, a simple yet effective framework that trains a multi-turn attacker without relying on any ex...
null
['jailbreak', 'attack', 'multi-turn', 'reinforcement learning', 'large language model']
/pdf/689aa1dbf5ca139920b52f3c93fd1376cf21b832.pdf
alignment, fairness, safety, privacy, and societal considerations
null
['ICLR.cc/2026/Conference/Submission33/Authors']
KoLYNHJRBY
32
KoLYNHJRBY
CL-DPS: A Contrastive Learning Approach to Blind Nonlinear Inverse Problem Solving via Diffusion Posterior Sampling
Diffusion models (DMs) have recently become powerful priors for solving inverse problems. However, most work focuses on non-blind settings with known measurement operators, and existing DM-based blind solvers largely assume linear measurements, which limits practical applicability where operators are frequently nonline...
null
['Diffusion Models', 'Blind Inverse Problems', 'Contrastive Learning']
/pdf/60ec680452c3952a435815e5ec6fb69f635a1ee0.pdf
applications to computer vision, audio, language, and other modalities
null
['ICLR.cc/2026/Conference/Submission32/Authors']
AZ6lqcvHLX
30
AZ6lqcvHLX
Half-order Fine-Tuning for Diffusion Model: A Recursive Likelihood Ratio Optimizer
The probabilistic diffusion model (DM), which generates content by performing inference through a recursive chain structure, has emerged as a powerful framework for visual generation. After pre-training on enormous data, the model needs to be properly aligned to meet requirements for downstream applications. How to efficiently align...
null
['perturbation-based gradient estimation', 'diffusion model', 'post-training']
/pdf/1c4cb7e5e1ed617120bf74e26bf181ee341f737f.pdf
optimization
null
['ICLR.cc/2026/Conference/Submission30/Authors']
lWc3QZkC9e
27
lWc3QZkC9e
WWW.Serve: A Decentralized Framework for Collaborative LLM Serving
Recent large language model (LLM) services remain mostly centralized, restricting both scalability and privacy. Decentralization could address these limitations, but it imposes challenges of trustless coordination, fair scheduling, and efficiency. To this end, we propose WWW.Serve, a decentralized framework for interconnec...
We propose WWW.Serve, a fully decentralized framework for trustless and collaborative LLM serving, which improves efficiency, latency, and scalability while preserving privacy.
['Large Language Model Serving', 'Efficient Serving Systems', 'Decentralized LLM Serving', 'Distributed LLMs']
/pdf/9ee240e1cc36c7066864a2f959d22211f84eb1dd.pdf
foundation or frontier models, including LLMs
null
['ICLR.cc/2026/Conference/Submission27/Authors']
FRXNMF0to7
26
FRXNMF0to7
The Personality Illusion: Revealing Dissociation Between Self-Reports & Behavior in LLMs
Personality traits have long been studied as predictors of human behavior. Recent advances in Large Language Models (LLMs) suggest similar patterns may emerge in artificial systems, with advanced LLMs displaying consistent behavioral tendencies resembling human traits like agreeableness and self-regulation. Understandi...
LLMs develop stable self-reported trait profiles through instructional alignment, yet these traits fail to manifest in real-world behavior.
['LLMs', 'personality traits', 'behavioral alignment', 'self-regulation', 'persona', 'trait manifestation', 'personality illusion', 'psychology of AI']
/pdf/dd4504df8949b129861273747acae5ac0c9aa6ca.pdf
alignment, fairness, safety, privacy, and societal considerations
null
['ICLR.cc/2026/Conference/Submission26/Authors']
oKHPJ0GTLG
25
oKHPJ0GTLG
De-hallucinating CLIP Embeddings to Improve Brain-Vision Mapping
Recent advances in vision-language models, such as CLIP, have enabled their widespread use in brain encoding and decoding, where global image embeddings serve as anchors linking visual stimuli to voxel-level brain responses. However, we observe that CLIP's global visual embeddings often exhibit hallucinatory semantics:...
null
['Brain-vision mapping', 'neuro decoding', 'semantic selectivity']
/pdf/023eb00fa2c555ec3dde2f9e72adb17b07ad5be3.pdf
applications to neuroscience & cognitive science
null
['ICLR.cc/2026/Conference/Submission25/Authors']
cf0yp18EeD
24
cf0yp18EeD
Inductive Visual Logic for Few-Shot Out-of-Distribution Adaptation in VLMs
Few-shot visual reasoning requires models not only to learn from limited supervision but also to adapt across domains, including those that are far from pretraining distributions. Modern vision-language models (VLMs) such as Qwen and LLaVA excel in zero-shot tasks but collapse in these distant out-of-distribution...
Instead of fine-tuning VLMs on novel concepts they can't represent, IVL extracts and reasons over human-interpretable visual traits from a few examples.
['VLM', 'LLM', 'FSDA', 'OOD']
/pdf/81ac309434737e538d77f147b50938ac1de8dae4.pdf
transfer learning, meta learning, and lifelong learning
null
['ICLR.cc/2026/Conference/Submission24/Authors']
G5YWhGslEr
20
G5YWhGslEr
History-Aware Transformation of ReID Features for Multiple Object Tracking
In Multiple Object Tracking (MOT), Re-identification (ReID) features are widely employed as a powerful cue for object association. However, they are often wielded as a one-size-fits-all hammer, applied uniformly across all videos through simple similarity metrics. We argue that this overlooks a fundamental truth: MOT ...
null
['tracking', 'multiple object tracking', 're-identification']
/pdf/16835c94aa3e20c6a4b74bb0c5f020a23318f8c9.pdf
applications to computer vision, audio, language, and other modalities
null
['ICLR.cc/2026/Conference/Submission20/Authors']
KjHB7rebQO
19
KjHB7rebQO
RiskPO: Risk-based Policy Optimization with Verifiable Reward for LLM Post-Training
Reinforcement learning with verifiable reward has recently emerged as a central paradigm for post-training large language models (LLMs); however, prevailing mean-based methods, such as Group Relative Policy Optimization (GRPO), suffer from entropy collapse and limited reasoning gains. We argue that these issues stem fr...
null
['Reinforcement Learning with Verifiable Reward', 'Risk-Sensitive RL']
/pdf/2bfcde92ee156da77da0b811626948b78d757aaf.pdf
reinforcement learning
null
['ICLR.cc/2026/Conference/Submission19/Authors']
6a2CJrizrh
15
6a2CJrizrh
BALROG: Contextual Bandits meets Active Learning for Online Generative Model Selection
The rapid proliferation of open-platform text-to-image generative models has made prompt-wise model selection essential for producing high-quality and semantically accurate images, yet it remains a challenging problem. Existing approaches, including contextual bandit algorithms, often converge slowly and fail to exploi...
We propose a new method for online generative model selection based on Nearest Neighbors bandits and active learning.
['Generative models', 'Online model selection', 'Contextual bandits']
/pdf/54c87e6d7725a7415b0cb0d69f045032dce69826.pdf
reinforcement learning
/attachment/98f2aa81b8d04ee560ab457d2b6b09b7fd7dc1b0.zip
['ICLR.cc/2026/Conference/Submission15/Authors']
CYmjrbQRyM
13
CYmjrbQRyM
ASMIL: Attention-Stabilized Multiple Instance Learning for Whole-Slide Imaging
Attention-based multiple instance learning (MIL) has emerged as a powerful framework for whole slide image (WSI) diagnosis, leveraging attention to aggregate instance-level features into bag-level predictions. Despite this success, we find that such methods exhibit a new failure mode: unstable attention dynamics. Acr...
null
['Whole slide image', 'Multiple instance learning']
/pdf/418d8e4d45ea48edbf688f51ac04e4883f5b9b31.pdf
applications to computer vision, audio, language, and other modalities
null
['ICLR.cc/2026/Conference/Submission13/Authors']
0QPXvKE4SV
12
0QPXvKE4SV
TCR-EML: Explainable Model Layers for TCR-pMHC Prediction
T cell receptor (TCR) recognition of peptide-MHC (pMHC) complexes is a central component of adaptive immunity, with implications for vaccine design, cancer immunotherapy, and autoimmune disease. While recent advances in machine learning have improved prediction of TCR-pMHC binding, the most effective approaches are bla...
We propose an approach to TCR-pMHC binding prediction, TCR-EML, that utilizes concept and prototype layers to provide accurate, detailed insights into the mechanisms of T cell response.
['T Cell', 'TCR', 'Transformers', 'XAI', 'Interpretability']
/pdf/375f047df05621d5eab2d0aeaca75d228a14f6fe.pdf
applications to physical sciences (physics, chemistry, biology, etc.)
/attachment/6d620dc959977bf2b0739218312766c3b2f70f47.zip
['ICLR.cc/2026/Conference/Submission12/Authors']
jxyEci13Dd
11
jxyEci13Dd
Long-Text-to-Image Generation via Compositional Prompt Decomposition
While modern text-to-image models excel at generating images from intricate prompts, they struggle to capture the key details when the prompts are expanded into descriptive paragraphs. This limitation stems from the prevalence of short captions in their training data. Existing methods attempt to address this by either ...
We decompose long-prompts to allow pre-trained Text-to-Image models to handle long-prompts input, demonstrating superior generalization as prompt length increases.
['Compositionality', 'Text-to-Image Generation', 'Generative Model Generalization']
/pdf/627d989858c3b9c53434578fa91d6b150461ba83.pdf
generative models
/attachment/8d0b75d6bfa9ccefd81852db1fc8ec579a826281.zip
['ICLR.cc/2026/Conference/Submission11/Authors']
Q5mkmW0cUD
9
Q5mkmW0cUD
Learn Globally, Speak Locally: Bridging the Gaps in Multilingual Reasoning
Large Language Models (LLMs) have achieved strong performance in domains like mathematics, factual question answering, and code generation, yet their ability to reason on these tasks in different languages remains underdeveloped. Especially for low-resource languages such as Swahili or Thai, LLMs can often misinterpret...
null
['LLM', 'multilingual reasoning', 'alignment', 'multilingualism', 'cross-lingual transfer', 'multilingual benchmarks', 'multilingual evaluation']
/pdf/6e284e24dbdd3f0ebf98ecdf056906cc636a3291.pdf
foundation or frontier models, including LLMs
/attachment/480d0cda0e04bbe6a70db744b1241d6cf81398c1.zip
['ICLR.cc/2026/Conference/Submission9/Authors']
6wA4qpyyU9
8
6wA4qpyyU9
Directional Textual Inversion for Personalized Text-to-Image Generation
Textual Inversion (TI) is an efficient approach to text-to-image personalization but often fails on complex prompts. We trace these failures to embedding norm inflation: learned tokens drift to out-of-distribution magnitudes, degrading prompt conditioning in pre-norm Transformers. Empirically, we show semantics are pri...
We propose Directional Textual Inversion that improves text fidelity for personalized text-to-image generation.
['personalized generation', 'text-to-image models', 'textual inversion']
/pdf/b17f9b520fbbc0e9058eadefe8a86be0c78c13fb.pdf
generative models
/attachment/af2f3b748c05b1e82967c55d394d9b19b47ee32b.zip
['ICLR.cc/2026/Conference/Submission8/Authors']
SOxO7e6ySB
5
SOxO7e6ySB
Language Models Do Not Have Human-Like Working Memory
While Large Language Models (LLMs) exhibit remarkable reasoning abilities, we demonstrate that they fundamentally lack a core aspect of human cognition: working memory. Human working memory is an active cognitive system that enables not only the temporary storage of information but also its processing and utilization. ...
null
['Large Language Model', 'Working Memory']
/pdf/084d84afc131ae518ba31ce2e59c46fc31f7880a.pdf
alignment, fairness, safety, privacy, and societal considerations
/attachment/ee19770b3a6c3e66c892d5fda95d59c64d0dc169.zip
['ICLR.cc/2026/Conference/Submission5/Authors']
iQsKotob31
4
iQsKotob31
HSSBench: Benchmarking Humanities and Social Sciences Ability for Multimodal Large Language Models
Multimodal Large Language Models (MLLMs) have demonstrated significant potential to advance a broad range of domains. However, current benchmarks for evaluating MLLMs primarily emphasize general knowledge and vertical step-by-step reasoning typical of STEM disciplines, while overlooking the distinct needs and potential...
null
['MLLMs', 'Benchmark', 'Dataset', 'Humanities and Social Sciences']
/pdf/ff3c4ae9941ee045727fd87e2601e013ed3c6f69.pdf
datasets and benchmarks
/attachment/c208fbf6969a7a84b43b0fc88841c399c07c508e.zip
['ICLR.cc/2026/Conference/Submission4/Authors']
WffiETiSeU
3
WffiETiSeU
Part-X-MLLM: Part-aware 3D Multimodal Large Language Model
We introduce Part-X-MLLM, a native 3D multimodal large language model that unifies diverse 3D tasks by formulating them as programs in a structured, executable grammar. Given an RGB point cloud and a natural language prompt, our model autoregressively generates a single, coherent token sequence encoding part-level boun...
null
['3D Computer Vision', '3D Vision-language Modeling', 'Part-aware 3D understanding', 'Multimodal Large Language Model']
/pdf/b2fd606362abe100ac17ca69fffcf57890a3260b.pdf
foundation or frontier models, including LLMs
null
['ICLR.cc/2026/Conference/Submission3/Authors']
7QjQ1mpNMX
2
7QjQ1mpNMX
Large Pretraining Datasets Don't Guarantee Robustness after Fine-Tuning
Large-scale pretrained models are widely leveraged as foundations for learning new specialized tasks via fine-tuning, with the goal of maintaining the general performance of the model while allowing it to gain new skills. A valuable goal for all such models is robustness: the ability to perform well on out-of-distribut...
We demonstrate that models pretrained on larger datasets can exhibit poorer robustness after fine-tuning compared to models pretrained on smaller datasets when the fine-tuning dataset is small. We analyze this phenomenon using the proposed benchmark.
['robust fine-tuning', 'catastrophic forgetting', 'transfer learning', 'representation learning', 'continual learning']
/pdf/f6812f4dc804deda91be0eb90507f714bc417515.pdf
transfer learning, meta learning, and lifelong learning
/attachment/a63d1dfbf32d9198e061dcfdde77c5b8112095b4.zip
['ICLR.cc/2026/Conference/Submission2/Authors']
h7qdCvhMdb
1
h7qdCvhMdb
Can Microcanonical Langevin Dynamics Leverage Mini-Batch Gradient Noise?
Scaling inference methods such as Markov chain Monte Carlo to high-dimensional models remains a central challenge in Bayesian deep learning. A promising recent proposal, microcanonical Langevin Monte Carlo, has shown state-of-the-art performance across a wide range of problems. However, its reliance on full-dataset gra...
null
['Microcanonical Langevin', 'Sampling', 'Bayesian Deep Learning']
/pdf/39a21aa61c118533fef10b61bcc5eee5b5244840.pdf
probabilistic methods (Bayesian methods, variational inference, sampling, UQ, etc.)
null
['ICLR.cc/2026/Conference/Submission1/Authors']