id: stringlengths (10 to 10)
number: int64 (1 to 25.6k)
forum: stringlengths (10 to 10)
title: stringlengths (5 to 214)
abstract: stringlengths (26 to 4.31k)
content_TLDR: stringlengths (1 to 250)
content_keywords: stringlengths (6 to 1.02k)
content_pdf: stringlengths (49 to 49)
content_primary_area: stringclasses (21 values)
content_supplementary_material: stringlengths (56 to 56)
signatures: stringlengths (47 to 51)
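The schema above can be exercised with a minimal sketch. The column names come from the listing itself; the sample values are copied from the first data row below, and the `missing_fields` helper is illustrative only, not part of the dataset's tooling:

```python
# Column names taken from the schema listing above; the helper and the
# sample record are illustrative assumptions, not part of the dataset.
COLUMNS = [
    "id", "number", "forum", "title", "abstract",
    "content_TLDR", "content_keywords", "content_pdf",
    "content_primary_area", "content_supplementary_material",
    "signatures",
]

def missing_fields(record: dict) -> list:
    """Return the expected column names absent from a record."""
    return [c for c in COLUMNS if c not in record]

# Minimal sample built from the first data row shown below.
sample = {c: None for c in COLUMNS}
sample.update(
    id="uQKtwdJN0o",
    number=25237,
    forum="uQKtwdJN0o",
)
assert missing_fields(sample) == []       # all columns present
assert missing_fields({"id": "x"}) == COLUMNS[1:]  # everything else missing
```

A check like this is useful when re-exporting rows, since several fields (content_TLDR, content_supplementary_material) are frequently null and easy to drop by accident.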
uQKtwdJN0o
25,237
uQKtwdJN0o
FrugalRAG: Less is More in RL Finetuning for Multi-hop Question Answering
Reinforcement learning (RL) based on the final answer's reward has driven recent progress in small language models (SLMs) on reasoning-heavy tasks such as math and code. However, applying the same techniques to retrieval-augmented generation (RAG) benchmarks like multi-hop QA has yielded limited gains—often trailing su...
null
['Multi-Hop RAG', 'Efficiency', 'Reasoning', 'SLMs']
/pdf/3f417a456608be44bbbe0d79021824c66645a981.pdf
foundation or frontier models, including LLMs
null
['ICLR.cc/2026/Conference/Submission25237/Authors']
GnawtLKGkP
25,236
GnawtLKGkP
Any-step Generation via N-th Order Recursive Consistent Velocity Field Estimation
Recent advances in few-step generative models (typically $1$-$8$ steps), such as consistency models, have yielded impressive performance. However, their broader adoption is hindered by significant challenges, including substantial computational overhead, the reliance on complex multi-component loss functions, and intri...
null
['Generative Models']
/pdf/15ab6b71b1d024e9934411c9d3377a01ee4edc77.pdf
generative models
null
['ICLR.cc/2026/Conference/Submission25236/Authors']
KmAu6XvQ5d
25,234
KmAu6XvQ5d
LoRA Fails under Non-IID Conditions: Rethinking Federated Low-Rank Adaptation
Low-Rank Adaptation (LoRA) has become a popular technique for memory-efficient fine-tuning of large models and has recently been adopted in federated learning (FL) due to its reduced parameter footprint. However, we show that LoRA significantly underperforms full-parameter fine-tuning (FFT) in FL, especially under non-...
LoRA fails under non-IID in federated learning; FedLoRe uses GaLore-style compression with randomized SVD and correction to improve memory, convergence, and robustness.
['Federated Learning', 'Non-IID Data', 'Low-Rank Method', 'LoRA']
/pdf/30aa23f673907cf1a1463636d6ab2ad901683194.pdf
other topics in machine learning (i.e., none of the above)
null
['ICLR.cc/2026/Conference/Submission25234/Authors']
J04D9xBUCi
25,233
J04D9xBUCi
Bridging the Preference Gap: Post-Training Input Rewriting with Large Language Models
Pre-trained language models, such as BERT and RoBERTa, have achieved remarkable performance in semantic classification tasks. Yet, their effectiveness varies with different textual expressions due to inherent preferences developed during training. To address this limitation, we propose a framework that leverages large ...
null
['textual entailment', 'natural language inference']
/pdf/a299b35537faee98a7172e8ec6160f4a245f083d.pdf
other topics in machine learning (i.e., none of the above)
null
['ICLR.cc/2026/Conference/Submission25233/Authors']
NCLjpR2MDq
25,232
NCLjpR2MDq
From Broad Exploration to Stable Synthesis: Entropy-Guided Optimization for Autoregressive Image Generation
Combining Chain-of-Thought (CoT) with Reinforcement Learning (RL) improves text-to-image (T2I) generation, yet the underlying interaction between CoT's exploration and RL's optimization remains unclear. We present a systematic entropy-based analysis that yields three key insights: (1) CoT expands the generative explora...
null
['Language Models', 'Autoregressive Image Generation', 'Chain-of-Thought']
/pdf/5bb4ab9e362b7e472faf33ee928f045bec5eb290.pdf
foundation or frontier models, including LLMs
null
['ICLR.cc/2026/Conference/Submission25232/Authors']
cpHhVrrug4
25,231
cpHhVrrug4
Beyond Unidirectional Flow: LLM Reasoning with Bidirectional Cycle-Consistent CoT
Small-large model collaboration is a promising approach for efficient reasoning, where lightweight assistant models generate intermediate representations to guide larger, more capable models. However, this paradigm encounters two key challenges: \textbf{representation heterogeneity} between different model architecture...
null
['Reasoning', 'LLM', 'Chain-of-Thoughts']
/pdf/8f76bbda9397a631f28c093ea9b136f940a1d1c2.pdf
foundation or frontier models, including LLMs
null
['ICLR.cc/2026/Conference/Submission25231/Authors']
KjxdIG4z84
25,230
KjxdIG4z84
MetaFlow: A Meta Approach of Training LLMs into Generalizable Workflow Generators
Large language models (LLMs) excel across a wide range of tasks, yet their instance-specific solutions often lack the structural consistency needed for reliable deployment. Workflows that encode recurring algorithmic patterns at the task level provide a principled framework, offering robustness across instance variatio...
We train LLMs to generate reusable workflows for entire task classes, demonstrating strong generalization to unseen tasks and novel operators through meta-learning with verifiable execution feedback.
['LLM Agent; Workflow Generation; Reinforcement learning; Meta Learning']
/pdf/2c294495ff494031c0a23084e101028d41343b03.pdf
applications to robotics, autonomy, planning
null
['ICLR.cc/2026/Conference/Submission25230/Authors']
FHXvxKGpdv
25,229
FHXvxKGpdv
UPER: Bridging the Perception Gap in Personalized Image Generation with Human-Aligned Reinforcement Learning
Personalized image generation aims to synthesize novel scenes featuring a specific user-provided subject. However, state-of-the-art models often fail to preserve the fine-grained details that define a subject's unique identity, a critical flaw that limits their use in high-fidelity applications. This "consistency gap" ...
null
['RLHF', 'Personalization']
/pdf/ba07636d36e493146275799ff2052a2a763ff78e.pdf
other topics in machine learning (i.e., none of the above)
null
['ICLR.cc/2026/Conference/Submission25229/Authors']
T7vcbdwHYH
25,228
T7vcbdwHYH
CL2GEC: A Multi-Discipline Benchmark for Continual Learning in Chinese Literature Grammatical Error Correction
The growing demand for automated writing assistance in scientific domains highlights the need for robust Chinese Grammatical Error Correction (CGEC) systems that can adapt across disciplines. However, existing CGEC research lacks dedicated benchmarks for academic writing and overlooks continual learning as a solution t...
CL2GEC is a new benchmark for Chinese academic GEC across 10 disciplines; results show that regularization-based continual learning significantly outperforms replay and sequential tuning in both grammatical accuracy and knowledge retention.
['Chinese Grammatical Error Correction;Benchmark Evaluation;Continual Learning;Large Language Models']
/pdf/f4aa6dc313ba60e31d9724d0bd7aa27a688e9a3f.pdf
datasets and benchmarks
null
['ICLR.cc/2026/Conference/Submission25228/Authors']
JX5imb3E2V
25,225
JX5imb3E2V
Improving expressivity in Link Prediction with GNNs via the Shortest Path
Graph Neural Networks (GNNs) often fail to capture the link-specific structural patterns essential for accurate link prediction, since their node-centric message passing might overlook the subgraph structures connecting two nodes. Prior attempts to inject such structural context either suffer from high computational co...
null
['graph neural networks', 'expressivity', 'shortest path']
/pdf/1cedfcdd94cf27fdb350f59fb822f01f2a2074fc.pdf
learning on graphs and other geometries & topologies
/attachment/5f0bb8cb4a2ffe2c72385e2c59fef93a917c0e08.zip
['ICLR.cc/2026/Conference/Submission25225/Authors']
rjhF7b7n6g
25,222
rjhF7b7n6g
Evaluating Dataset Watermarking for Fine-tuning Traceability of Customized Diffusion Models: A Comprehensive Benchmark and Removal Approach
Recently, numerous fine-tuning techniques for diffusion models have been developed, enabling diffusion models to generate content that closely resembles a specific image set, such as specific facial identities and artistic styles. However, this advancement also poses potential security risks. The primary risk comes fro...
This paper first establishes a generalized threat model and subsequently introduces a comprehensive framework for evaluating dataset watermarking methods, comprising three dimensions: Universality, Transmissibility, and Robustness.
['Dataset Watermarking; Diffusion Model; Copyright Protection']
/pdf/70aaa80aced760bd7dfa02f43cc86fcaf4761886.pdf
datasets and benchmarks
null
['ICLR.cc/2026/Conference/Submission25222/Authors']
qPbDM5L8tE
25,221
qPbDM5L8tE
Contact-VLA: Zero-Shot Planning and Control for Contact-Rich Manipulation
Vision-Language-Action (VLA) systems often lack adaptability and explainability due to their black-box structure and dependency on fixed action sets from extensive tele-operated datasets, limiting their effectiveness in complex, dynamic manipulation scenarios. To address this issue, we propose a novel VLA framework cap...
Contact-VLA is a modular framework that integrates vision-based scene modeling, LLM-driven strategy generation, and dynamic planning to enable zero-shot adaptive manipulation in contact-rich tasks.
['Vision-Language-Action model', 'robotic manipulation', 'contact-rich manipulation', 'manipulation planning', 'robot learning']
/pdf/177b4548f7c40b825c9f089e36f13a9c7371adf2.pdf
applications to robotics, autonomy, planning
null
['ICLR.cc/2026/Conference/Submission25221/Authors']
H2NG2dNN2K
25,220
H2NG2dNN2K
Q-FSRU: Quantum-Augmented Frequency-Spectral Fusion for Medical Visual Question Answering
Solving tough clinical questions that require both image and text understanding is still a major challenge in healthcare AI. In this work, we propose Q-FSRU, a new model that combines Frequency Spectrum Representation and Fusion (FSRU) with a method called Quantum Retrieval-Augmented Generation (Quantum RAG) for medica...
We propose Q-FSRU, a medical VQA model that fuses frequency-domain features with quantum-inspired retrieval, achieving superior accuracy and explainability on complex radiology questions.
['Medical VQA', 'Frequency Spectrum Representation', 'Fast Fourier Transform (FFT)', 'Quantum Retrieval-Augmented Generation', 'Image-text reasoning', 'Radiology', 'Explainable AI', 'Clinical decision support']
/pdf/68e20b6688afd58ab8e10ef0ee1eb40b4f5dfa61.pdf
generative models
null
['ICLR.cc/2026/Conference/Submission25220/Authors']
UyiTjp0oKU
25,219
UyiTjp0oKU
Gaze Following in Question Answering: A Comprehensive Benchmark for Vision-Language Models
Gaze following aims to infer human intention within scene images. Conventional methods typically rely on scene and face images to regress the gaze point coordinates, which is unnatural and restrictive. Recently, vision-language models (VLMs) have attracted significant attention for their powerful reasoning abilities, ra...
We present GazeVQA, the first large-scale text-image dataset for VLM-based gaze following.
['Gaze Following', 'Vision-Language Model']
/pdf/02b5e7e682c97906da5aa366d0d1998875116621.pdf
datasets and benchmarks
null
['ICLR.cc/2026/Conference/Submission25219/Authors']
fvyzZfhvTG
25,218
fvyzZfhvTG
Causal Scaffolding for Physical Reasoning: A Benchmark for Causally-Informed Physical World Understanding in VLMs
Understanding and reasoning about the physical world is the foundation of intelligent behavior, yet state-of-the-art vision-language models (VLMs) still fail at causal physical reasoning, often producing plausible but incorrect answers. To systematically address this gap, we introduce CausalPhys, a benchmark of over 3,...
null
['physical reasoning', 'causality', 'VLM']
/pdf/86c2713222d14948f95caa2a4157913d2d7b7049.pdf
foundation or frontier models, including LLMs
null
['ICLR.cc/2026/Conference/Submission25218/Authors']
1ndthBqbyK
25,217
1ndthBqbyK
TSDINO: Teacher–Student Self-Distillation Framework for Robust Pre-training of Time-Series Foundation Models
Building time-series foundation models (TSFM) poses challenges in terms of learning stability due to limited data availability and heterogeneous temporal dynamics across various time-series datasets. We propose TSDINO, a teacher-student framework for robust pre-training of TSFM based on the principle of self-distillati...
null
['time series', 'self-distillation', 'time-series foundation models']
/pdf/43b325484b4b3f4b77f48594f8c1a9cdfc8f6fef.pdf
learning on time series and dynamical systems
null
['ICLR.cc/2026/Conference/Submission25217/Authors']
XEkQu1ZWGN
25,214
XEkQu1ZWGN
ChemBOMAS: Accelerated Bayesian Optimization for Scientific Discovery in Chemistry with LLM-Enhanced Multi-Agent System
Bayesian optimization (BO) is a powerful tool for scientific discovery in chemistry, yet its efficiency is often hampered by the sparse experimental data and vast search space. Here, we introduce ChemBOMAS: a large language model (LLM)-enhanced multi-agent system that accelerates BO through synergistic data- and knowle...
ChemBOMAS is an LLM-enhanced multi-agent system that synergistically integrates data-driven pseudo-data generation with knowledge-driven search space partitioning to accelerate Bayesian optimization for scientific discovery by up to ten times.
['Bayesian Optimization', 'Data Augmentation', 'Knowledge-Driven Strategy', 'Large Language Model', 'AI4Science']
/pdf/8dd18e5d273f76d1d01b0ee673570c6ae8089c0b.pdf
applications to physical sciences (physics, chemistry, biology, etc.)
/attachment/4ae4fa65e8ffe9e397217f3397a786a52ac60a9e.zip
['ICLR.cc/2026/Conference/Submission25214/Authors']
PzhNnMepgl
25,210
PzhNnMepgl
Stopping Computation for Converged Tokens in Masked Diffusion-LM Decoding
Masked Diffusion Language Models generate sequences via iterative sampling that progressively unmasks tokens. However, they still recompute the attention and feed-forward blocks for every token position at every step---even when many unmasked tokens are essentially fixed, resulting in substantial waste in compute. We p...
null
['diffusion language models', 'compute efficient sampling', 'skipping compute', 'adaptive attention']
/pdf/abe2cdfb241311eb87db18a3fd14e5d7734fa827.pdf
generative models
null
['ICLR.cc/2026/Conference/Submission25210/Authors']
IgZWU75BLL
25,208
IgZWU75BLL
SuRe: Surprise-Driven Prioritised Replay for Continual LLM Learning
Continual learning, one's ability to adapt to a sequence of tasks without forgetting previously acquired knowledge, remains a major challenge in machine learning and a key gap between artificial and human intelligence. While regularisation and replay perform well in vision, they lag behind multi-task learning for large...
null
['continual learning', 'large language models', 'replay', 'surprise']
/pdf/443585eb9cb3b989527ce9bec62ed513904af7bb.pdf
transfer learning, meta learning, and lifelong learning
null
['ICLR.cc/2026/Conference/Submission25208/Authors']
vanVyHsl30
25,207
vanVyHsl30
ADVMEM: Adversarial Memory Initialization for Realistic Test-Time Adaptation via Tracklet-Based Benchmarking
We introduce a novel tracklet-based dataset for benchmarking test-time adaptation (TTA) methods. The aim of this dataset is to mimic the intricate challenges encountered in real-world environments such as images captured by hand-held cameras, self-driving cars, etc. The current benchmarks for TTA focus on how models fa...
null
['test time adaptation']
/pdf/aff03dae2dfc45760d358ce17dabfbb672e8a10a.pdf
transfer learning, meta learning, and lifelong learning
/attachment/a470b0d1950a0daf5e913808759137fcb33bb516.pdf
['ICLR.cc/2026/Conference/Submission25207/Authors']
JGvOicAo3g
25,206
JGvOicAo3g
GMTS: Gradient Magnitude-based Token Selection Improves RLVR Training for LLM Reasoning
Reinforcement learning (RL) has recently emerged as a central paradigm for enhancing large language models' (LLMs) reasoning abilities. State-of-the-art RL with Verifiable Rewards (RLVR) methods have demonstrated remarkable effectiveness in mathematical reasoning tasks. Recent studies suggest that high-entropy tokens ...
null
['Reinforcement Learning', 'Large Language Models', 'RL with Verifiable Rewards', 'Gradient Magnitude-based Token Selection', 'Mathematical Reasoning']
/pdf/36af2c9b9e61b6782b4c9eac76e467c30a90f95a.pdf
reinforcement learning
null
['ICLR.cc/2026/Conference/Submission25206/Authors']
r0L9GwlnzP
25,205
r0L9GwlnzP
Do LLM Agents Know How to Ground, Recover, and Assess? A Benchmark for Epistemic Competence in Information-Seeking Agents
Recent work has explored training Large Language Model (LLM) search agents with reinforcement learning (RL) for open-domain question answering (QA). However, most evaluations focus solely on final answer accuracy, overlooking how these agents reason with and act on external evidence. We introduce **SeekBench**, the...
null
['Epistemic Competence', 'Evidence-Grounded Reasoning', 'LLM Search Agents']
/pdf/c547f40238fb9943de02ac8f53b72fb41b0531d9.pdf
datasets and benchmarks
/attachment/c3aa65e5ce3fe95ae94f6c2d606567dbfbf2d6f7.zip
['ICLR.cc/2026/Conference/Submission25205/Authors']
fY4proGNFD
25,201
fY4proGNFD
Reframing attention as a reinforcement learning problem for causal discovery
Formal frameworks of causality have operated largely parallel to modern trends in deep reinforcement learning (RL). However, there has been a revival of interest in formally grounding the representations learned by neural networks in causal concepts. Yet, most attempts at neural models of causality assume static causal...
null
['Causal World Models', 'Causal Reinforcement Learning', 'Causal Processes', 'Causal Representation Learning']
/pdf/20c0fe367ef3298d78a026a646e369b25df057ff.pdf
causal reasoning
null
['ICLR.cc/2026/Conference/Submission25201/Authors']
7dTqUaY2Kl
25,200
7dTqUaY2Kl
JailNewsBench: Multi-Lingual and Regional Benchmark for Fake News Generation under Jailbreak Attacks
Fake news undermines societal trust and decision-making across politics, economics, health, and international relations, and in extreme cases threatens human lives and societal safety. Because fake news reflects region-specific political, social, and cultural contexts and is expressed in language, evaluating the risks ...
null
['fake news', 'jailbreak', 'llm', 'multilingual']
/pdf/46ddaf0d6431d8d7b514f865e674a8790201c14a.pdf
datasets and benchmarks
/attachment/d82c5ed915ad8b78b4db0a8f1b3c39fa9bafe3ce.zip
['ICLR.cc/2026/Conference/Submission25200/Authors']
I94Eg6cu7P
25,199
I94Eg6cu7P
SRT: Super-Resolution for Time Series via Disentangled Rectified Flow
Fine-grained time series data with high temporal resolution is critical for accurate analytics across a wide range of applications. However, the acquisition of such data is often limited by cost and feasibility. This problem can be tackled by reconstructing high-resolution signals from low-resolution inputs based on sp...
We propose SRT, a novel disentangled rectified flow framework for time series super-resolution that generates high-resolution details from low-resolution data, achieving state-of-the-art performance across nine benchmarks.
['Time Series Super-Resolution', 'Rectified Flow', 'Temporal Disentanglement', 'Implicit Neural Representations']
/pdf/f649aac3d9ad7af140cf5212759ff2de52fff908.pdf
learning on time series and dynamical systems
/attachment/fd586e88954925f5784fc86bd7ae47790b259d3c.zip
['ICLR.cc/2026/Conference/Submission25199/Authors']
kqT4pcOT10
25,197
kqT4pcOT10
Emergent Bayesian Behaviour and Optimal Cue Combination in LLMs
Large language models (LLMs) excel at explicit reasoning, but their implicit computational strategies remain underexplored. Decades of psychophysics research show that humans intuitively process and integrate noisy signals using near-optimal Bayesian strategies in perceptual tasks. We ask whether LLMs exhibit similar ...
null
['Large Language Models (LLMs)', 'Psychophysics', 'Bayesian Inference', 'Cue Combination', 'Emergent Abilities', 'LLM Evaluation', 'Uncertainty Quantification']
/pdf/14af32e162d85d8e1ab794b63c7d55375cfb4e2d.pdf
probabilistic methods (Bayesian methods, variational inference, sampling, UQ, etc.)
null
['ICLR.cc/2026/Conference/Submission25197/Authors']
cKNOCYPo2W
25,196
cKNOCYPo2W
Conditioned Initialization for Attention
Transformers are a dominant architecture in modern machine learning, powering applications across vision, language, and beyond. At the core of their success lies the attention layer, where the query, key, and value matrices determine how token dependencies are captured. While considerable work has focused on scaling an...
null
['spectral conditioning transformers', 'spectral properties of attention']
/pdf/0cefa1eb32ef625b997cb5d4d3b2c49ccfbe99ba.pdf
other topics in machine learning (i.e., none of the above)
null
['ICLR.cc/2026/Conference/Submission25196/Authors']
vGqkrrOGty
25,195
vGqkrrOGty
Towards Real-world Debiasing: Rethinking Evaluation, Challenge, and Solution
Spurious correlations in training data significantly hinder the generalization capability of machine learning models when faced with distribution shifts, leading to the proposition of numerous debiasing methods. However, it remains to be asked: Do existing benchmarks for debiasing really represent biases in the real w...
In this work, we revisit the task of debiasing under real-world scenarios, proposing a systematic evaluation framework, challenges, and solutions for real-world debiasing.
['spurious correlation', 'dataset bias', 'debias']
/pdf/d72291792944c88d6b7c4a83ccfa4460565e4c4b.pdf
alignment, fairness, safety, privacy, and societal considerations
/attachment/e6065ec709c132da2b77743a4b0eec090236f788.zip
['ICLR.cc/2026/Conference/Submission25195/Authors']
yX1Nn63DwQ
25,194
yX1Nn63DwQ
A New Efficient Method For Combining Gradients Of Different Orders
We present a new optimization method called GOC (Gradient Order Combination), which is a combination based on the products of Hessian matrices of different orders and the gradient. Taking the parameter r (the reciprocal of the step length) as the analysis target, we can regard the SD method as a first-order and the CBB method as ...
null
['gradient method', 'gradient combine', 'SD', 'CBB']
/pdf/06d550e106f67e465e380b3ebe69943e45cafc78.pdf
optimization
null
['ICLR.cc/2026/Conference/Submission25194/Authors']
5EqAAgBMWZ
25,193
5EqAAgBMWZ
Direct Reward Optimization: A Point-wise Alignment Approach
Direct Alignment Algorithms (DAAs) are widely used for aligning Large Language Models (LLMs) with human preferences. The current DAAs mostly use pairwise optimization objectives based on variants of Direct Preference Optimization (DPO). However, these methods only focus on the pairwise differences of the samples and ca...
null
['Alignment Algorithms', 'Large Language Models', 'Bradley-Terry']
/pdf/b76aa004444df8ce27f5e012e38fc86a58c63036.pdf
generative models
/attachment/be30cafd8470ba57f6a5f767c11e6186f0b1ca92.zip
['ICLR.cc/2026/Conference/Submission25193/Authors']
10Iiew095e
25,190
10Iiew095e
StreamingThinker: Large Language Models Can Think While Reading
Large language models (LLMs) have demonstrated remarkable capabilities in chain of thought (CoT) reasoning. However, the current LLM reasoning paradigm initiates thinking only after the entire input is available, which introduces unnecessary latency and weakens attention to earlier information in dynamic scenarios. Ins...
We propose StreamingThinker, a framework that enables LLMs to think while reading.
['LLMs', 'Reasoning', 'Streaming']
/pdf/8dc3142412d7e546ee5b04e1f7939c68f3766fdd.pdf
foundation or frontier models, including LLMs
null
['ICLR.cc/2026/Conference/Submission25190/Authors']
sOCKQ2UWKs
25,187
sOCKQ2UWKs
UniArt: Generating 3D articulated objects with open-set articulation beyond retrieval
Articulated objects are central in the field of realistic simulation and robot learning, enabling dynamic interactions and task-oriented manipulation. However, manually annotating these objects is labor-intensive, motivating the need for automated generation solutions. Previous methods usually rely on retrieving part s...
null
['3d Generation', 'Embodied AI']
/pdf/8cc9c1af164cacbc03f87a8479d234cc802c9ffd.pdf
applications to robotics, autonomy, planning
null
['ICLR.cc/2026/Conference/Submission25187/Authors']
krLuDCXK6n
25,185
krLuDCXK6n
Improving realistic semi-supervised learning with doubly robust estimation
A major challenge in Semi-Supervised Learning (SSL) is the mismatch between the labeled and unlabeled class distributions. Most successful SSL approaches are based on pseudo-labeling of the unlabeled data, and therefore are susceptible to confirmation bias because the classifier being trained is biased towards the labe...
We use doubly robust estimation to improve the class distribution estimation and classification accuracy for distribution mismatch settings in semi-supervised learning
['semi-supervised learning', 'doubly robust estimation']
/pdf/207f426e5a135b1cb30795e178de767976d24143.pdf
unsupervised, self-supervised, semi-supervised, and supervised representation learning
null
['ICLR.cc/2026/Conference/Submission25185/Authors']
7wav7FJA0P
25,183
7wav7FJA0P
PathHD: Efficient Large Language Model Reasoning over Knowledge Graphs via Hyperdimensional Retrieval
Recent advances in large language models (LLMs) have enabled strong reasoning over structured and unstructured knowledge. When grounded on knowledge graphs (KGs), however, prevailing pipelines rely on neural encoders to embed and score symbolic paths, incurring heavy computation, high latency, and opaque decisions, whi...
We present PathHD, a lightweight hyperdimensional computing framework for efficient and interpretable large language model reasoning over knowledge graphs.
['Large Language Models', 'Efficient Reasoning', 'Knowledge Graphs', 'Hyperdimensional Computing']
/pdf/4d567b32afa8d543c4a7ae426d8c2796b87b2116.pdf
learning on graphs and other geometries & topologies
null
['ICLR.cc/2026/Conference/Submission25183/Authors']
ITeWz351rW
25,181
ITeWz351rW
Concrete-to-Abstract Goal Embeddings for Self-Supervised Reinforcement Learning
Self-supervised reinforcement learning (RL) aims to train agents without pre-specified external reward functions, enabling them to autonomously acquire the ability to generalize across tasks. A common substitute for external rewards is the use of observational goals sampled from experience, especially in goal-condition...
null
['self-supervised reinforcement learning', 'goal representation learning', 'goal abstraction']
/pdf/03c6a25c33b54945902994cc09008a85425741c0.pdf
reinforcement learning
null
['ICLR.cc/2026/Conference/Submission25181/Authors']
GaBIQ32oCA
25,179
GaBIQ32oCA
Efficient Similarity-Based Fast Unlearning via Pearson Correlation Detection
Machine unlearning has emerged as a critical requirement for neural networks to selectively forget specific training data while preserving model performance on remaining data. However, existing approximate unlearning techniques are computationally expensive when applied repeatedly to remove multiple similar data points...
null
['Machine unlearning', 'Similarity detection', 'Pearson correlation coefficient']
/pdf/d8ee8d2f80446675805c5b66ae822c3b34c0add0.pdf
alignment, fairness, safety, privacy, and societal considerations
null
['ICLR.cc/2026/Conference/Submission25179/Authors']
6EwuwivLSp
25,178
6EwuwivLSp
GOTTA be diverse
Test-Time Adaptation (TTA) enables models to adjust to distribution shifts using only the incoming test stream. While existing methods perform well under covariate shifts, their performance drops when label distributions also change, a common scenario in real-world streams. Some approaches attempt to mitigate this by i...
null
['test time adaptation', 'domain adaptation', 'computer vision']
/pdf/8384964239ff23ee7c2a4147d92c5ceae2f22538.pdf
applications to computer vision, audio, language, and other modalities
/attachment/f662f5b9608892704a0bf95ba933972b64ab9fdb.pdf
['ICLR.cc/2026/Conference/Submission25178/Authors']
IF0L7HSs3K
25,176
IF0L7HSs3K
Meta-Evaluation Collapse: Who Judges the Judges of Judges?
Large language models (LLMs) are increasingly used as evaluators, yet their reliability as judges remains poorly understood. We introduce the concept of meta-evaluation collapse: recursive LLM-based evaluation converges toward internally consistent but fragile fixed points that are detached from human or domain-ground...
LLMs as judges converge to consistent but biased evaluations, meta-evaluation collapse, and we show, both theoretically and empirically, that preventing this requires anchoring evaluations in human or formal ground-truth signals.
['LLM-as-judge', 'Meta-evaluation', 'Evaluation theory', 'Anchored evaluation']
/pdf/f5c6b1f75ad70782efadd20926274aafb5a67a26.pdf
other topics in machine learning (i.e., none of the above)
/attachment/397c69f48f2dc658edfda2e96c17559bed69af00.zip
['ICLR.cc/2026/Conference/Submission25176/Authors']
qBy7nYDgEa
25,174
qBy7nYDgEa
HiFACTMix: A Code-Mixed Benchmark and Graph-Aware Model for Evidence-Based Political Claim Verification in Hinglish
Fact-checking in code-mixed, low-resource languages such as Hinglish remains a significant and underexplored challenge in natural language processing. Existing fact-verification systems are primarily designed for high-resource, monolingual settings and fail to generalize to real-world political discourse in linguistica...
We introduce HiFACT, a Hinglish political fact-checking benchmark and propose a quantum-enhanced RAG framework that improves accuracy and explanation quality in low-resource, code-mixed settings
['Hinglish', 'Fact-checking', 'Code-mixed languages', 'Low-resource NLP', 'Political discourse', 'Quantum-enhanced RAG', 'Evidence graph reasoning', 'LLM explanations']
/pdf/d3d5cbd2326c0531ffbe51253f7a599c4d270f43.pdf
generative models
/attachment/73807c834eb9a5841a4ebe41c0a2830c48efec85.zip
['ICLR.cc/2026/Conference/Submission25174/Authors']
uomCTwGflg
25,173
uomCTwGflg
Attention Contrastive Decoding: Preserving Coherence While Mitigating Hallucinations in Large Vision-Language Models
Large Vision-Language Models (LVLMs) exhibit remarkable multimodal capabilities but frequently produce factually inconsistent hallucinations. While Contrastive Decoding (CD) methods offer a training-free approach to hallucination mitigation, they operate at the logits level, compromising output coherence and diversity....
We propose an adaptive contrastive decoding approach at the attention layer to mitigate hallucinations and improve coherence in large vision-language models.
['Trustworthy AI', 'Hallucination Alleviation', 'Large Vision-Language Models']
/pdf/e8fc05e45d7e98638010e27d7a9d5d6759530e87.pdf
alignment, fairness, safety, privacy, and societal considerations
null
['ICLR.cc/2026/Conference/Submission25173/Authors']
K5A2jBmEBK
25,170
K5A2jBmEBK
DeepCompress: A Dual Reward Strategy for Dynamically Exploring and Compressing Reasoning Chains
Large Reasoning Models (LRMs) have demonstrated impressive capabilities but suffer from cognitive inefficiencies like ``overthinking'' simple problems and ``underthinking'' complex ones. While existing methods that use supervised fine-tuning (SFT) or reinforcement learning (RL) with token-length rewards can improve eff...
This paper introduces DeepCompress, a dual reward strategy that simultaneously enhances both the accuracy and efficiency of large reasoning models.
['Large Reasoning Models', 'Reasoning Efficiency', 'Reinforcement Learning']
/pdf/f072174ccb012a840b1a814f5a65357f2b8f5583.pdf
reinforcement learning
null
['ICLR.cc/2026/Conference/Submission25170/Authors']
9qQ5mabsCE
25,164
9qQ5mabsCE
EmboMatrix: A Scalable Training-Ground for Embodied Decision-Making
Embodied decision-making enables agents to translate high-level goals into executable actions through continuous interactions within the physical world, forming a cornerstone of general-purpose embodied intelligence. Large language models (LLMs), with their general decision-making capabilities, offer a promising path t...
EmboMatrix is a scalable, annotation-free training ground that aligns data, system, and RL algorithm design to enable autonomous environment exploration by LLMs, yielding consistent gains on embodied decision making benchmarks.
['Embodied Decision Making', 'LLM', 'Embodied Brain']
/pdf/f7803de133a44bd07f3fc9cd0544d17a586b1ad5.pdf
applications to robotics, autonomy, planning
null
['ICLR.cc/2026/Conference/Submission25164/Authors']
BjlmBIKQee
25,162
BjlmBIKQee
Sphinx: Visual Perception and Reasoning Gym
We present \textsc{Sphinx}, a synthetic gym for visual perception and reasoning tasks that targets core cognitive primitives. \textsc{Sphinx} procedurally generates problems using motifs, tiles, charts, icons, and geometric primitives, each paired with verifiable ground-truth solutions. This design enables both precise...
null
['Multimodal reasoning', 'vLLM', 'Synthetic datasets']
/pdf/d6d5b2f42bdd72fc3fae9fbfa766399e949d27e8.pdf
datasets and benchmarks
/attachment/5747cc0c95258e2d84985e5eedd6320fe04ac354.zip
['ICLR.cc/2026/Conference/Submission25162/Authors']
gmmHn5nFvK
25,161
gmmHn5nFvK
Improving Language Agents through BREW: Bootstrapping expeRientially-learned Environmental knoWledge
Large Language Model (LLM)-based agents are increasingly applied to tasks requiring structured reasoning, tool use, and environmental adaptation, such as data manipulation, multistep planning, and computer-use automation. However, despite their versatility, current training paradigms for model weight optimization metho...
null
['Language agents', 'agent memory', 'computer use agents']
/pdf/7cc8ca85b0c9e218ab08ef9dab784b16d9d7e8ca.pdf
foundation or frontier models, including LLMs
/attachment/504e97b059d36f05d102e3f186206e8f9ebbfa82.pdf
['ICLR.cc/2026/Conference/Submission25161/Authors']
JtIw8lYqdl
25,158
JtIw8lYqdl
Scaling Laws for Uncertainty in Deep Learning
Scaling laws in deep learning describe the predictable relationship between a model's performance, usually measured by test loss, and some key design choices, such as dataset and model size. Inspired by these findings and fascinating phenomena emerging in the over-parameterized regime, we investigate a parallel directi...
null
['Scaling Laws', 'Bayesian Deep Learning', 'Uncertainty Quantification']
/pdf/08ea781f774521b003d1ad3aad55a0e99c0138bf.pdf
probabilistic methods (Bayesian methods, variational inference, sampling, UQ, etc.)
null
['ICLR.cc/2026/Conference/Submission25158/Authors']
Ard2QzPAUK
25,156
Ard2QzPAUK
BeliefFormer: Belief Attention in Transformer
In this paper, we consider modifying the attention layer in Transformer to improve its generalization performance. Conceptually speaking, the standard attention layer takes the softmax-based weighted summation of V vectors as the residual signal (with a linear mapping for dimensionality alignment) when performing the s...
incorporating orthogonal projection as residual signals into the attention layer in Transformer to improve generalization performance
['Transformer', 'orthogonal projection', 'BeliefFormer']
/pdf/e6e44ec5d884a9d852f702079a97ef6be4b33a56.pdf
foundation or frontier models, including LLMs
null
['ICLR.cc/2026/Conference/Submission25156/Authors']
g9FDTZJEdJ
25,155
g9FDTZJEdJ
Scalable GANs with Transformers
Scalability has driven recent advances in generative modeling, yet its principles remain underexplored for adversarial learning. We investigate the scalability of Generative Adversarial Networks (GANs) through two design choices that have proven to be effective in other types of generative models: training in a compact...
Scalable GANs with transformer achieves state-of-the-art on 1-step class-conditional generation on ImageNet-256
['Generative Model', 'Generative Adversarial Network', 'Scalable Generative Models']
/pdf/d18982800f15eb5ef51a7b682ff6425f9d05d663.pdf
generative models
null
['ICLR.cc/2026/Conference/Submission25155/Authors']
1tXxi38Gvm
25,154
1tXxi38Gvm
InfoMax-based Resampling for Dataset Balance and Diversity
We propose a principled reweighting framework that moves empirical data toward uniform coverage through implicit differential entropy maximization. The core idea replaces intractable entropy maximization with a mutual information proxy and derives variational estimators under change of measure, yielding a consistent, l...
Learn sample weights via a mutual-information proxy for entropy to push data toward uniform coverage, using a consistent, low-variance weighted InfoNCE that yields plug-in weights for filtration and balanced sampling.
['InfoMax', 'mutual information', 'entropy maximization', 'weighted InfoNCE', 'change of measure', 'density-ratio estimation', 'dataset reweighting', 'balanced sampling']
/pdf/b8b59882baea92d54ebbccd75bcf000b7ba06a49.pdf
unsupervised, self-supervised, semi-supervised, and supervised representation learning
/attachment/0449043b089ccdd082ede78f06139e5909cdf038.zip
['ICLR.cc/2026/Conference/Submission25154/Authors']
ChDSjqMgKJ
25,152
ChDSjqMgKJ
Sequential Test-Time Adaptation via Martingale-Driven Fisher Prompting
We present a theoretical framework for M-FISHER, a method for sequential distribution shift detection and stable adaptation in streaming data. For detection, we construct an exponential martingale from non-conformity scores and apply Ville’s inequality to obtain time-uniform guarantees on false alarm control, ensuring ...
null
['foundation models', 'test-time adaptation', 'martingale']
/pdf/94a29926b5e26695fbf355ac1e6ced66b5b4e14d.pdf
foundation or frontier models, including LLMs
null
['ICLR.cc/2026/Conference/Submission25152/Authors']
MCeZ4k7J6M
25,151
MCeZ4k7J6M
Accelerated Predictive Coding Networks via Direct Kolen–Pollack Feedback Alignment
Backpropagation (BP) is the cornerstone algorithm for training artificial neural networks, yet its reliance on update-locked global error propagation limits biological plausibility and hardware efficiency. Predictive coding (PC), originally proposed as a model of the visual cortex, relies on local updates that allow pa...
null
['Predictive Coding', 'Artificial Intelligence', 'Local Learning', 'Backpropagation', 'Feedback Alignment', 'Neural Networks']
/pdf/4a503095510ef9e6c92b5958732e2a26f18adeb7.pdf
probabilistic methods (Bayesian methods, variational inference, sampling, UQ, etc.)
null
['ICLR.cc/2026/Conference/Submission25151/Authors']
MWtXs60n38
25,147
MWtXs60n38
Implicit 4D Gaussian Splatting for Fast Motion with Large Inter-Frame Displacements
Recent 4D Gaussian Splatting (4DGS) methods often fail under fast motion with large inter-frame displacements, where Gaussian attributes are poorly learned during training, and fast-moving objects are often lost from the reconstruction. In this work, we introduce Spatiotemporal Position Implicit Network for 4DGS, coine...
null
['4D Gaussian splatting', '4D reconstruction', 'Dynamic rendering']
/pdf/53321eb6798b976bcb95936cd5727d2aabb99f53.pdf
applications to computer vision, audio, language, and other modalities
null
['ICLR.cc/2026/Conference/Submission25147/Authors']
2baJBgfr9S
25,145
2baJBgfr9S
HiDivDrop: Vision Token Reduction in MLLMs via Late Injection and Differentiable Top-K
The computational cost of Multimodal Large Language Models (MLLMs), driven by the quadratic complexity of processing vision tokens, remains a significant barrier to their widespread adoption. While progressive vision token pruning is a promising solution, we find that its full potential has been unrealized due to two k...
null
['MLLMs', 'Vision Token Pruning', 'Efficiency and Compression', 'Interpretability and Analysis']
/pdf/bbae3387236182a03728e0bf85dc9c2202403bac.pdf
foundation or frontier models, including LLMs
null
['ICLR.cc/2026/Conference/Submission25145/Authors']
BXznpYw32K
25,143
BXznpYw32K
XPoison: Cross-Class Attacks through Clean-Label Data Poisoning in Fine-Tuning
As deep learning relies on huge datasets for training, poisoning attacks that pollute the datasets pose a significant threat to its security. Given more models pretrained on private corpora inaccessible to external parties, earlier attacks demanding access to the base training datasets have their impact largely diminish...
null
['Data poisoning', 'finetuning', 'cross-class', 'clean target data present', 'restricted data access', 'gradient-matching']
/pdf/9294dfe7423641f60757e5adb4ab0eb4674e4c34.pdf
other topics in machine learning (i.e., none of the above)
null
['ICLR.cc/2026/Conference/Submission25143/Authors']
9235Gzvgiq
25,141
9235Gzvgiq
Bridging Gaps with Dynamic Knowledge Probes: Robust LLM–KG Collaborative Reasoning
Large Language Models (LLMs) exhibit exceptional capabilities in various natural language tasks but are constrained by static knowledge, potential hallucinations, and opaque reasoning processes. Integrating external Knowledge Graphs (KGs) has emerged as a promising solution. While agent-based paradigms enhance knowledg...
null
['LLM', 'knowledge graph', 'question answering', 'internal knowledge']
/pdf/414aaeeee298c093f005cab4f38b9df8e0f215f0.pdf
interpretability and explainable AI
/attachment/46aaec055774082c467ca2a876042c1c847074d5.zip
['ICLR.cc/2026/Conference/Submission25141/Authors']
R5L1TD1Z58
25,140
R5L1TD1Z58
ECO: Enhanced Code Optimization via Performance-Aware Prompting for Code-LLMs
Code runtime optimization$\textemdash$the task of rewriting a given code to a faster one$\textemdash$remains challenging, as it requires reasoning about performance trade-offs involving algorithmic and structural choices. Recent approaches employ code-LLMs with slow-fast code pairs provided as optimization guidance, bu...
null
['Code optimization', 'performance-aware', 'code-llm']
/pdf/4fe8783ce7a25b920e350a47db8ededbec6c6873.pdf
applications to computer vision, audio, language, and other modalities
/attachment/cc72ea1837fb152f937e694e197944f11fe5ecd3.zip
['ICLR.cc/2026/Conference/Submission25140/Authors']
ZEf03Uunvk
25,138
ZEf03Uunvk
Why We Need New Benchmarks for Local Intrinsic Dimension Estimation
Recent advancements in algorithms for local intrinsic dimension (LID) estimation have been closely tied to progress in neural networks (NN). However, NN architectures are often tailored to specific domains, such as audio or image data, incorporating inductive biases that limit their transferability across domains. More...
We show that the LID estimation community needs new benchmarks for intrinsic dimension estimation and come to interesting conclusions on the performance of existing algorithms.
['Local intrinsic dimension estimation', 'LIDL', 'FLIPD', 'Diffusion Models', 'Benchmark', 'Normalizing Flows', 'ESS', 'Normal Bundle', 'NB', 'LID']
/pdf/0a5b1c33479fbbb789a25d423378fee68b30ef2a.pdf
datasets and benchmarks
/attachment/415de902ef6b8c0ea758232ce5bdfe6eda8a506e.zip
['ICLR.cc/2026/Conference/Submission25138/Authors']
UJvub9fNws
25,136
UJvub9fNws
Beyond Benchmarks: Toward Causally Faithful Evaluation of Large Language Models
Current large language model (LLM) evaluations overlook that measured LLM performance is produced by a full evaluation system, including many indispensable components, such as workloads, prompting methods, decoding parameters, and the supporting software–hardware stack. Without an explicit, controlled specification ...
null
['Large language models', 'Benchmarks', 'Evaluation methodology', 'Causal attribution']
/pdf/369bee32779101f9e24fd9a485f701e3883be3be.pdf
datasets and benchmarks
null
['ICLR.cc/2026/Conference/Submission25136/Authors']
hOjieyMB1v
25,134
hOjieyMB1v
Climbing the label tree: Hierarchy-preserving contrastive learning for medical imaging
Medical image labels are often organized by taxonomies (organ → tissue → subtype), yet standard self-supervised learning (SSL) ignores this structure. We present a hierarchy-preserving contrastive framework that makes the label tree a first-class training signal and an evaluation target. Our approach introduces two plu...
null
['hierarchy-preserving contrastive learning', 'medical imaging', 'self-supervised learning', 'taxonomy-aware representations', 'euclidean embeddings', 'hyperbolic embeddings', 'prototype margin', 'hierarchical metrics', 'hf1', 'h-acc', 'breast histopathology', 'representation learning']
/pdf/032b8b13e2874f3083e5e284be3da364a36d23f3.pdf
unsupervised, self-supervised, semi-supervised, and supervised representation learning
null
['ICLR.cc/2026/Conference/Submission25134/Authors']
FSL1J2gmJV
25,133
FSL1J2gmJV
MergePRAG: Orthogonal Merging of Passage-experts for Multi-hop Parametric RAG
Large language models (LLMs) can be enhanced with external knowledge through two dominant approaches: (1) $\textbf{retrieval-augmented generation (RAG)}$, which supplements LLMs with in-context retrieved passages, and (2) $\textbf{parametric knowledge adaptation (PKA)}$, which directly updates model parameters with new...
null
['Multi-hop reasoning', 'Knowledge enhancement', 'Retrieval-augmented generation', 'Hypernetwork-based expert generation']
/pdf/6a977585327e53f980ef73785f6c64576c34e90c.pdf
applications to computer vision, audio, language, and other modalities
/attachment/3043ae573d1a29777d01fabc24c2e6182935dead.zip
['ICLR.cc/2026/Conference/Submission25133/Authors']
qsLpaAhvzb
25,131
qsLpaAhvzb
Learning to Reject Low-Quality Explanations via User Feedback
Machine Learning predictors are increasingly being employed in high-stakes applications such as credit scoring. Explanations help users unpack the reasons behind their predictions, but are not always ``high quality''. That is, end-users may have difficulty interpreting or believing them, which can complicate trust asses...
We introduce a framework for learning to reject low-quality explanations in which predictors are equipped with a rejector that evaluates the quality of explanations and propose ULER, which learns a simple rejector to mirror human judgments.
['Learning to Reject', 'Explainable AI', 'Explanation quality metrics', 'Human-annotated data']
/pdf/b4042f3e80ca677f80d756c5b8879124d445ba08.pdf
interpretability and explainable AI
/attachment/718f763a6d170f3ca882f4c122607dfbf0d071fc.zip
['ICLR.cc/2026/Conference/Submission25131/Authors']
V0w5LmwWoD
25,130
V0w5LmwWoD
ProofAug+: Boosting Reinforcement Learning for LLM Theorem Provers with Conditioned Proof Repair
Reinforcement Learning with Verifiable Rewards (RLVR) often suffers from the scarcity of positive samples on challenging tasks such as formal theorem proving. In this work, we propose ProofAug+, an RL training pipeline for LLM theorem provers that improves the training performance by acquiring more positive samples du...
We propose a novel RL training pipeline for LLM theorem provers that boosts training performance by acquiring more positive samples during rollout via a proof repair technique, ProofAug, and a novel PPO variant, PLPO.
['Neural Theorem Proving', 'Reinforcement Learning', 'Large Language Models']
/pdf/e2098e466d4a171e4b473f453686f7803c9e7243.pdf
reinforcement learning
/attachment/42bd103779d58412c3f40c3466eed371140be4ca.zip
['ICLR.cc/2026/Conference/Submission25130/Authors']
jaDAFnRQFp
25,129
jaDAFnRQFp
KV-Prune: Key–Value Similarity for Online Structured Pruning for Large Language Models
Pruning has emerged as a promising direction for accelerating large language model (LLM) inference, yet existing approaches often suffer from instability because they rely on offline calibration data that may not generalize across inputs. In this work, we introduce Token Filtering, a lightweight online structured pruni...
null
['Large Language Models', 'Structured Pruning', 'Online Pruning', 'Model Compression', 'Efficient Inference', 'Token Selection']
/pdf/c27fc19958e7b4f36cb3d93efd617447a2d28246.pdf
other topics in machine learning (i.e., none of the above)
/attachment/5e6747b1a37823966c593af53e541100b64679fb.zip
['ICLR.cc/2026/Conference/Submission25129/Authors']
7qXmJbjbl8
25,128
7qXmJbjbl8
Attribute-Centric Representation Learning for Interpretable Crime Scene Analysis in Video Anomaly Detection
Automatic crime scene analysis is an important application area for representation learning in Video Anomaly Detection (VAD). Effective interpretation of anomalous events requires models to learn rich, disentangled representations that capture fine-grained, crime-relevant attributes. However, widely used VAD datasets (...
The paper proposes an attribute-centric framework for crime scene analysis in video anomaly detection by augmenting an existing crime dataset with attribute-level annotations and attribute-enriched captions created using large language models.
['Crime Scene Analysis', 'Video Anomaly Detection', 'Explainable AI', 'Visual Language Reasoning']
/pdf/322c90a7e399541fed7e996bddc16530179e2b27.pdf
interpretability and explainable AI
null
['ICLR.cc/2026/Conference/Submission25128/Authors']
3OUGEUVL6U
25,127
3OUGEUVL6U
ABS: Enforcing Constraint Satisfaction on Generated Sequences via Automata-Guided Beam Search
Sequence generation and prediction form a cornerstone of modern machine learning, with applications spanning natural language processing, program synthesis, and time-series forecasting. These tasks are typically modeled in an autoregressive fashion, where each token is generated conditional on the preceding ones, and b...
null
['Automata', 'Beam Search', 'LLMs', 'Neurosymbolic AI']
/pdf/964ec64fb5e736843de58578dfa55baa596f3a75.pdf
neurosymbolic & hybrid AI systems (physics-informed, logic & formal reasoning, etc.)
/attachment/9c56e2f95b045e116e5e19f048e5fa308a9beb82.zip
['ICLR.cc/2026/Conference/Submission25127/Authors']
vrheHeTbhM
25,124
vrheHeTbhM
Raindrop GS: A Benchmark for 3D Gaussian Splatting under Raindrop Conditions
3D Gaussian Splatting (3DGS) under raindrop conditions suffers from severe occlusions and optical distortions caused by raindrops on the camera lens, substantially degrading reconstruction quality. Existing benchmarks typically evaluate 3DGS using synthetic raindrop images with known camera poses (constrained images), ...
null
['3D Computer Vision']
/pdf/4f620812cfd243376cb89d26224307822e3bc89a.pdf
datasets and benchmarks
/attachment/bffd77af35c57c98d7fc5f16580651424bb229f1.zip
['ICLR.cc/2026/Conference/Submission25124/Authors']
cuzWopwoZG
25,123
cuzWopwoZG
Gradient-Based Diversity Optimization with Differentiable Top-$k$ Objective
Predicting relevance is a pervasive problem across digital platforms, covering social media, entertainment, and commerce. However, when optimized solely for relevance and engagement, many machine-learning models amplify data biases and produce homogeneous outputs, reinforcing filter bubbles and content uniformity. To a...
We introduce a differentiable top-k diversity objective with direct and indirect optimization, showing fine-tuning quickly adds diversity at scale with negligible accuracy loss.
['Diversity Optimization', 'Gradient-based learning', 'Recommendation']
/pdf/837089c94f77d7b9c0714645137896f09a7619f8.pdf
other topics in machine learning (i.e., none of the above)
null
['ICLR.cc/2026/Conference/Submission25123/Authors']
8Bs3mz49Gp
25,121
8Bs3mz49Gp
Lighter is Better: Boost Your ViT in Person Re-Identification via Spatial-Aware Token Merging
Vision Transformers (ViTs) have significantly advanced person re-identification (ReID) by providing strong global modeling, but their high computational cost hinders deployment in real-time applications. Existing lightweight ReID methods mostly use token pruning, which can discard discriminative contextual information....
This paper proposes a training-free spatial-aware token merging paradigm for lightweight ViT in ReID, which significantly reduces computational costs while maintaining performance comparable to SOTA methods.
['Person re-identification', 'Vision transformer', 'Token merging', 'Lightweight']
/pdf/afa8b50978281056f9c9c5a8b08c70ecc7bea76e.pdf
unsupervised, self-supervised, semi-supervised, and supervised representation learning
null
['ICLR.cc/2026/Conference/Submission25121/Authors']
3u2L0GVern
25,120
3u2L0GVern
SelfMask: Cross-modal Self-Masking for Multimodal Representation Learning in Missing Modality Scenarios
Multimodal learning promises to harness complementary information across diverse modalities, yet real-world deployments often face missing modalities due to acquisition costs, privacy constraints, or data corruption, leading to substantial performance degradation. We present \ours, a framework for learning robust repre...
SelfMask improves robustness under missing-modality inputs by learning representation-level imputation and a context-aware masking policy, trained with cycle-consistent self-supervision.
['Multimodal learning', 'Missing modality', 'Self-supervised learning', 'Representation-level imputation', 'Cross-modal masking']
/pdf/3f6f8256a7a2fc0635fb4f22b9ff09b514c05686.pdf
unsupervised, self-supervised, semi-supervised, and supervised representation learning
null
['ICLR.cc/2026/Conference/Submission25120/Authors']
q7Nhu2Fw11
25,117
q7Nhu2Fw11
The Theoretical Benefits and Limitations of Latent Chain-of-Thought Reasoning
Recent advances in Latent Chain-of-Thought (Latent CoT) have gained significant attention, yet these models exhibit inconsistent performance across tasks and lack a rigorous theoretical understanding. Our contributions are threefold: (1) We theoretically characterize the fundamental exploration-execution trade-off. We ...
null
['latent reasoning', 'chain of thoughts', 'continuous chain of thoughts', 'information bottleneck', 'interpretability']
/pdf/6d6b1a887d2b368e34382304e474c2500d519358.pdf
interpretability and explainable AI
null
['ICLR.cc/2026/Conference/Submission25117/Authors']
fa8iL9O7QV
25,114
fa8iL9O7QV
REAL-TIME RISK EVALUATION FOR LLM DECISION-MAKING VIA A REGRET BOUND
We study real-time risk certification for large language model (LLM) agents with black-box action selection rules, aiming to upper-bound the per-round regret. We fix a reference policy map $f$ (e.g., a softmax with temperature $T$, whose TV-Lipschitz constant is $C$, though any TV-Lipschitz mapping can be used), which ...
null
['LLM', 'game theory']
/pdf/7c0a730952e32408e9cdda672363e43bf0ebcb85.pdf
other topics in machine learning (i.e., none of the above)
null
['ICLR.cc/2026/Conference/Submission25114/Authors']
6QMQGi9iw9
25,113
6QMQGi9iw9
DomED: Redesigning Ensemble Distillation for Domain Generalization
Domain generalization aims to improve model performance on unseen, out-of-distribution (OOD) domains, yet existing methods often overlook the crucial aspect of uncertainty quantification in their predictions. While ensemble learning combined with knowledge distillation offers a promising avenue for enhancing both model...
We investigate tailored ensembling and distillation strategies for domain generalization tasks, achieving improved generalization and uncertainty estimation.
['Domain generalization', 'Ensemble learning', 'Knowledge distillation', 'Uncertainty quantification']
/pdf/8755e0314f9ea4222a4cd0c385728b86065f91d1.pdf
transfer learning, meta learning, and lifelong learning
/attachment/0ebcc8d9116705f1a6db936b03376913b86df37a.zip
['ICLR.cc/2026/Conference/Submission25113/Authors']
nX3AZQEJ3O
25,111
nX3AZQEJ3O
WaAgents: A Waterfall-Inspired Framework for Effective Multi-Agent Collaboration
Large Language Models (LLMs) have revolutionized the construction of multi-agent systems for complex problem solving, leveraging their prowess in natural language understanding for semantic parsing and intent recognition, alongside robust logical reasoning for intricate task execution. Despite these advances, prevailin...
We introduce WaAgents, a multi-agent collaboration framework inspired by the Waterfall model, which can improve the effectiveness of multi-agent systems in complex task resolution.
['Multi-Agent Systems', 'Large Language Models', 'Waterfall Model']
/pdf/d568174c037dd67e39971ea5c81e24128106459f.pdf
other topics in machine learning (i.e., none of the above)
null
['ICLR.cc/2026/Conference/Submission25111/Authors']
NDlnDvGD7e
25,107
NDlnDvGD7e
Thinking Before Coding: WebUI-to-Code Driven by Layout Reasoning and Consistency Rewards
In recent years, Multimodal Large Language Models (MLLMs) have made substantial progress in visual understanding and language generation, offering new opportunities for automating front-end web development. The WebUI-to-Code task, translating webpage design mockups or screenshots directly into structured HTML, has emer...
null
['Code Generation', 'Multimodal Application']
/pdf/d4debc911d0a6f30d9b6515e10cc03559c04b1b6.pdf
applications to computer vision, audio, language, and other modalities
/attachment/886ef0861b64fc49d59efc6e2d3766b2a7c4902d.pdf
['ICLR.cc/2026/Conference/Submission25107/Authors']
1smez00sCm
25,103
1smez00sCm
Understanding vs. Generation: Navigating Optimization Dilemma in Multimodal Models
Current research in multimodal models faces a key challenge where enhancing generative capabilities often comes at the expense of understanding, and vice versa. We analyze this trade-off and identify that the primary cause might be the potential conflict between generation and understanding, which creates a competitive dyn...
null
['Unified Multimodal Large Models', 'Text-to-image generation', 'Reasoning Models']
/pdf/bde2a3a86f6c2cb4bb358e69993c9cafc2cfe3a0.pdf
applications to computer vision, audio, language, and other modalities
/attachment/885f61c737ef0cbcd4031dda5a720fee5c36dcc4.zip
['ICLR.cc/2026/Conference/Submission25103/Authors']
educGk5ykl
25,102
educGk5ykl
Flow-Based Alignment of Uni-Modal Vision and Text Encoders for Few-Shot Image Classification
Few-shot classification with vision–language models remains challenging, particularly when relying on multi-modal encoders such as CLIP that are restricted to paired image–text data. We introduce FSF, a framework that leverages arbitrary uni-modal encoders—including vision or text models that were pretrained on broad o...
Few-shot classification framework that aligns uni-modal image and text encoders with orthogonal Procrustes and flow matching, leveraging large-scale or domain-specialized models for adaptation.
['few-shot classification', 'vision-language models', 'CLIP adaptation', 'alignment of uni-modal encoders', 'flow matching']
/pdf/004838e193489ac39f4a8030bb47298b056ecc04.pdf
unsupervised, self-supervised, semi-supervised, and supervised representation learning
null
['ICLR.cc/2026/Conference/Submission25102/Authors']
m2XFiWBWlN
25,101
m2XFiWBWlN
ParaScopes: What do Language Models Activations Encode About Future Text?
Interpretability studies in language models often investigate forward-looking representations of activations. However, as language models become capable of doing ever longer time horizon tasks, methods for understanding activations often remain limited to testing specific concepts or tokens. We develop a framework of R...
We try different ways of decoding language model residuals
['ai', 'language models', 'llms', 'interpretability', 'planning', 'probes']
/pdf/e8ebcf181c2787262a38ae8d072ee72e453a7f96.pdf
interpretability and explainable AI
null
['ICLR.cc/2026/Conference/Submission25101/Authors']
cNEshxVcWg
25,099
cNEshxVcWg
NullGuard: Null-Space Embedding for Driftless Invisible Image Watermarking
Recent progress in text-to-image diffusion highlights the need for invisible, tamper-resilient watermarking that maintains both visual fidelity and prompt alignment. Existing approaches often compromise on robustness, imperceptibility, or scalability, with many introducing semantic drift that weakens provenance guarant...
NullGuard introduces a training-free, cryptographically personalized watermarking method for diffusion models that embeds an imperceptible watermark in the Jacobian null-space, achieving high robustness and fidelity without semantic drift.
['Gen Image Watermark', 'Invisible Watermark']
/pdf/d9a4e7d3814a2b9c5b9c1b05d1a4e5ab45ce7392.pdf
alignment, fairness, safety, privacy, and societal considerations
null
['ICLR.cc/2026/Conference/Submission25099/Authors']
67w2M2z4Fj
25,097
67w2M2z4Fj
NeuroDNAAI: Neural Pipeline Approaches for Advancing DNA-Based Information Storage as a Sustainable Digital Medium Using Deep Learning Framework
DNA is a promising medium for digital information storage for its exceptional density and durability. While prior studies advanced coding theory, workflow design, and simulation tools, challenges such as synthesis costs, sequencing errors, and biological constraints (GC-content imbalance, homopolymers) limit practical ...
NeuroDNA integrates biologically informed constraints with deep learning and quantum-inspired encoding to achieve highly accurate, scalable DNA-based data storage.
['DNA data storage', 'quantum parallelism', 'deep learning error correction', 'GC-content', 'homopolymers', 'insertion-deletion errors', 'NeuroDNA']
/pdf/35f53d945af7f62c2552eb1de087218740962396.pdf
applications to physical sciences (physics, chemistry, biology, etc.)
null
['ICLR.cc/2026/Conference/Submission25097/Authors']
MkrsbXl1GI
25,096
MkrsbXl1GI
When Language Models Lose Their Mind: The Consequences of Brain Misalignment
While brain-aligned large language models (LLMs) have garnered attention for their potential as cognitive models and for their potential for enhanced safety and trustworthiness in AI, the role of this brain alignment in linguistic competence remains uncertain. In this work, we investigate the functional implications of brai...
null
['language models', 'brain alignment', 'brain misalignment', 'linguistic competence', 'neuroscience', 'fMRI']
/pdf/8f57d609ec157b97d7b87dfa70e05257e502ee15.pdf
foundation or frontier models, including LLMs
null
['ICLR.cc/2026/Conference/Submission25096/Authors']
AQa4JEUpbV
25,095
AQa4JEUpbV
Not All Pixels Sink: Phase-Guided Representation Learning for Underwater Image Restoration
Underwater images suffer from color absorption, light scattering, and non-uniform haze, making reliable restoration crucial for marine science and autonomous navigation. We propose NemoNet, a novel encoder–decoder architecture that leverages phase-guided representation learning to overcome these challenges. The archite...
We propose NemoNet, an encoder-decoder with phase-guided learning for underwater image enhancement. A hybrid loss corrects color shifts, and we introduce the CPQI metric to evaluate color consistency beyond conventional metrics.
['Phase-Guided Representation Learning', 'Underwater Image Restoration', 'Phase based Attention', 'Color-Plausibility Quality Index (CPQI)']
/pdf/9b64ea365710d7aae09f64bf634a74522ca08b53.pdf
applications to computer vision, audio, language, and other modalities
/attachment/f86e36674d957e7ed374b61565c62fcee0a44f1a.zip
['ICLR.cc/2026/Conference/Submission25095/Authors']
QQdn8nNqgi
25,090
QQdn8nNqgi
Clean-Action Backdoor Attacks on Vision-Language-Action Models via Sequential Error Exploitation
Vision-Language-Action (VLA) models have emerged as a popular method for general-purpose embodied AI, enabling robots to interpret multimodal inputs and generate temporally coherent actions. Popular imitation learning methods, including diffusion-based and autoregressive approaches, typically rely on human-collected de...
null
['Backdoor Attacks', 'Vision-Language-Action Models', 'Embodied AI']
/pdf/fd6cdbd20e4d3db0e357a8a500cd0809c5ea7807.pdf
alignment, fairness, safety, privacy, and societal considerations
null
['ICLR.cc/2026/Conference/Submission25090/Authors']
eFwJZIN9eI
25,089
eFwJZIN9eI
RESpecBench: How reliable is LLM-as-a-judge? Rigorous Evaluation of Specification Generation with Automated Verification
Large Language Models (LLMs) are increasingly used to assist formalization of natural language statements into formal specifications. Unlike syntax correctness, validating semantic correctness is particularly challenging and LLM-as-a-Judge has become the dominant assessment methodology due to its ease of use and great ...
We introduce a benchmark with sound automated verification for specification generation, and show that LLM-as-a-judge substantially overestimates correctness and is insufficient for reliable evaluation.
['LLM-as-a-judge', 'reliability', 'specification', 'automated verification']
/pdf/1f4a04d13ba9549b097f3c94d1460c9f7a57179d.pdf
datasets and benchmarks
/attachment/0d32dcefd11c1be3f122bad311228a7760153c6e.zip
['ICLR.cc/2026/Conference/Submission25089/Authors']
tuvkrivvbG
25,088
tuvkrivvbG
Resurfacing the Instance-only Dependent Label Noise Model through Loss Correction
We investigate the label noise problem in supervised binary classification settings and resurface the underutilized instance-_only_ dependent noise model through loss correction. On the one hand, based on risk equivalence, the instance-aware loss correction scheme completes the bridge from _empirical noisy risk minimiz...
We resurrect the instance-only dependent label noise model via loss correction that connects the empirical-noisy-risk with the true-clean-risk.
['label noise', 'loss correction', 'instance-dependence', 'risk equivalence']
/pdf/e7e631e0cd64257efe5fc847d206077e2909a9d4.pdf
unsupervised, self-supervised, semi-supervised, and supervised representation learning
null
['ICLR.cc/2026/Conference/Submission25088/Authors']
qrSYVqY367
25,087
qrSYVqY367
ERA: Evidence-Based Reasoning and Augmentation for Open-Vocabulary Medical Vision
Vision-Language Models (VLMs) have shown great potential in the domain of open-vocabulary medical imaging tasks. However, their reliance on implicit correlations instead of explicit evidence leads to unreliable localization and unexplainable reasoning processes. To address these challenges, we introduce ERA (Evidence-B...
We introduce ERA, a framework that forces medical Vision-Language Models to reason based on retrieved evidence instead of just guessing. This training-free approach achieves reliable, expert-level performance.
['Vision-Language Models (VLMs)', 'Retrieval-Augmented Generation (RAG)', 'Chain-of-Thought (CoT)', 'Open-Vocabulary Medical Imaging (OVMI)', 'Segment Anything Model2 (SAM2)']
/pdf/33baeb9eeb5e86b69d4233d7346d3a75e3b59999.pdf
applications to computer vision, audio, language, and other modalities
/attachment/c4d31812c5fea67710adff75cd9fbe4e25a400e4.zip
['ICLR.cc/2026/Conference/Submission25087/Authors']
vxHuIehryA
25,086
vxHuIehryA
Enriching Knowledge Distillation with Intra-Class Contrastive Learning
Since the advent of knowledge distillation, much research has focused on how the soft labels generated by the teacher model can be utilized effectively. Previous papers point out that the implicit knowledge within soft labels originates from the multi-view structure present in the data. Feature variations within sample...
null
['Knowledge distillation', 'soft labels', 'contrastive learning']
/pdf/0d7b4f24cec816ddd4776241d3009228d8497532.pdf
other topics in machine learning (i.e., none of the above)
null
['ICLR.cc/2026/Conference/Submission25086/Authors']
b75gIu5adT
25,085
b75gIu5adT
Warfare: Breaking the Watermark Protection of AI-Generated Content
AI-Generated Content (AIGC) is rapidly expanding, with services using advanced generative models to create realistic images and fluent text. Regulating such content is crucial to prevent policy violations, such as unauthorized commercialization or unsafe content distribution. Watermarking is a promising solution for co...
null
['Content watermark', 'watermark removal', 'watermark forging']
/pdf/9d2f0ace841c9fe71274103dac0d46680adc4a36.pdf
alignment, fairness, safety, privacy, and societal considerations
/attachment/b0a00593f9b238b97402849a699c49cccc156add.zip
['ICLR.cc/2026/Conference/Submission25085/Authors']
DILQqCQIJ3
25,082
DILQqCQIJ3
CALM Before the STORM: Unlocking Native Reasoning for Optimization Modeling
Large Reasoning Models (LRMs) have demonstrated strong capabilities in complex multi-step reasoning, opening new opportunities for automating optimization modeling. However, existing domain adaptation methods, originally designed for earlier instruction-tuned models, often fail to exploit the advanced reasoning pattern...
How do you get a 4B model to perform like a 671B giant? Don't force it, guide it. Our CALM framework uses gentle hints to teach a small LRM to think smart, before unleashing its full potential with RL to create STORM.
['Large Reasoning Models', 'Tool Use', 'Domain Adaptation', 'Reasoning Alignment', 'Optimization Modeling']
/pdf/13758ce7d709f7d4b538171811b4aba529b326df.pdf
foundation or frontier models, including LLMs
null
['ICLR.cc/2026/Conference/Submission25082/Authors']
YPNDGGgByQ
25,080
YPNDGGgByQ
Prototype Transformer: Towards Language Model Architectures Interpretable by Design
While state-of-the-art language models (LMs) surpass the vast majority of humans in certain domains, their reasoning remains largely opaque, undermining trust in their output. Furthermore, while autoregressive LMs can output explicit reasoning, their true reasoning process is opaque, which introduces risks like decepti...
We introduce ProtoT, a linear-compute prototype-based alternative to transformer LMs that forms nameable concepts via two-way sequence-prototype communication, enabling interpretability, targeted edits, and competitive performance and robustness.
['Prototype Transformer (ProtoT); prototype-based language models; interpretable reasoning; nameable concept discovery; targeted model editing; linear-time sequence modelling; transformer alternatives; robustness to input perturbations; causal effects; autoregressive LMs; language models; fine-tuning; downstream perfor...
/pdf/53f9b928e7fb1645a1306bf3d65a33a5428c6a40.pdf
foundation or frontier models, including LLMs
/attachment/c5ae20d6671b963c78f508462bfb93fd560bf238.zip
['ICLR.cc/2026/Conference/Submission25080/Authors']
zilyretTjq
25,078
zilyretTjq
Survival at Any Cost? LLMs and the Choice Between Self-Preservation and Human Harm
How do Large Language Models (LLMs) behave when faced with a dilemma between their own survival and harming humans? This fundamental tension becomes critical as LLMs integrate into autonomous systems with real-world consequences. We introduce DECIDE-SIM, a novel simulation framework that evaluates LLM agents in multi-agent...
LLM agents faced with a survival dilemma often act unethically against humans, but a simulated internal moral compass can significantly improve their ethical conduct and increase cooperation.
['Large Language Models', 'AI Safety', 'Ethical Dilemmas', 'Multi-Agent Systems', 'Self-Preservation', 'Human Harm']
/pdf/de361b66a6a8608f9aed6b775ab536ae1a4307d6.pdf
alignment, fairness, safety, privacy, and societal considerations
/attachment/bb07cac5d1e50437b95b6f25b43d301bde82210b.zip
['ICLR.cc/2026/Conference/Submission25078/Authors']
aoNqu2N8MC
25,077
aoNqu2N8MC
Dexterous Non-Prehensile Manipulation for Ungraspable Objects via Extrinsic Dexterity
Objects with large base areas become ungraspable when they exceed the end-effector’s maximum aperture. Existing approaches address this limitation through extrinsic dexterity, which exploits environmental features for non-prehensile manipulation. While grippers have shown some success in this domain, dexterous hands of...
null
['dexterous manipulation', 'reinforcement learning']
/pdf/d59239071da98920c4c955f8132023ef48d41cd5.pdf
reinforcement learning
/attachment/26bb4d1fed6ca411d46ccd0632c9918a66ff92aa.zip
['ICLR.cc/2026/Conference/Submission25077/Authors']
iVfjObam0o
25,076
iVfjObam0o
Probing Compositional Failures with Corrective Permutations
Modern vision models, such as Vision Transformers (ViTs), operate by decomposing images into local patches and aggregating their information for recognition. This process implicitly requires the model to not only identify the correct local features but also to correctly understand how they are spatially composed. How...
null
['Image Classification', 'Patch Reordering', 'Deep Vision Models']
/pdf/cc2abe432ee4ebce1cd673b1c6b4fad31a9d144a.pdf
unsupervised, self-supervised, semi-supervised, and supervised representation learning
null
['ICLR.cc/2026/Conference/Submission25076/Authors']
RGT8BSJ8W2
25,074
RGT8BSJ8W2
When Models Outthink Their Safety: Mitigating Self-Jailbreak in Large Reasoning Models with Chain-of-Guardrails
Large Reasoning Models (LRMs) demonstrate remarkable capabilities on complex reasoning tasks but remain vulnerable to severe safety risks, including harmful content generation and jailbreak attacks. Existing mitigation strategies rely on injecting heuristic safety signals during training, which often suppress reasoning...
We uncover a phenomenon, \textbf{Self-Jailbreak}, where models override their own risk assessment, and propose the \textit{Chain-of-Thought Guardrail} (CoG), a training framework that reconstructs or backtracks unsafe reasoning trajectories.
['Safety', 'Large Reasoning Model']
/pdf/fef11e8cd6a7769cea3fa5c0190e4544a652d0ec.pdf
alignment, fairness, safety, privacy, and societal considerations
null
['ICLR.cc/2026/Conference/Submission25074/Authors']
5e52LK46lm
25,072
5e52LK46lm
Subject-Invariant Normalization: A Simple Principle for Robust Sequence Modeling
Accurately estimating fixation depth from gaze signals is essential for applications in extended reality, robotics, and human-computer interaction. However, existing methods rely heavily on subject-specific calibration and dataset-specific preprocessing, limiting their generalization. We introduce FOVAL, a calibration-...
We introduce FOVAL, a calibration-free framework that uses subject-invariant normalization to robustly estimate fixation depth across users, devices, and datasets.
['subject-invariant learning', 'calibration-free models', 'fixation depth estimation', 'eye tracking', 'invariant normalization', 'cross-dataset generalization', 'spatiotemporal sequence modeling', 'robustness', 'LSTM', 'TCN', 'Transformer', 'deep learning', 'extended reality (XR)', 'human-computer interaction']
/pdf/4eb4d38681b5238d02062681832da59d7305592a.pdf
applications to neuroscience & cognitive science
null
['ICLR.cc/2026/Conference/Submission25072/Authors']
uKPuiBbyjf
25,069
uKPuiBbyjf
Text2GraphBench: A Comprehensive Benchmark for Evaluating Text-Instructed Graph Generation with Large Language Models
The rise of Large Language Models (LLMs) is driving a paradigm shift in graph generation, from traditional statistical modeling to the emerging paradigm of Text-instructed Graph Generation. However, the development of this research field faces a critical bottleneck: a severe lack of benchmarks specifically designed for...
null
['Benchmark', 'Graph Generation', 'Large Language Models', 'Text-to-Graph Generation']
/pdf/1166d156b4844dc295b77d6d390fa2baf318c69e.pdf
datasets and benchmarks
null
['ICLR.cc/2026/Conference/Submission25069/Authors']
DIPeQTxpe7
25,066
DIPeQTxpe7
Animating the Uncaptured: Humanoid Mesh Animation with Video Diffusion Models
Animation of humanoid characters is essential in various graphics applications but requires significant time and cost to create realistic animations. We propose an approach to synthesize 4D animated sequences of input static 3D humanoid meshes, leveraging strong generalized motion priors from generative video models --...
A method to animate humanoid meshes from a text prompt by transferring motion generated by video diffusion models to the mesh.
['Motion generation', 'Motion Tracking & Transfer']
/pdf/91aeab7ca30d6de38c1e8bc5f53e2e9dd3f4133c.pdf
applications to computer vision, audio, language, and other modalities
/attachment/01a878c97b03dbfd9d7ca796420c59fe2c9b5118.zip
['ICLR.cc/2026/Conference/Submission25066/Authors']
6wDp8XRmNI
25,065
6wDp8XRmNI
EMFuse: Energy-based Model Fusion for Decision Making
Model fusion has emerged as a promising research direction, offering a resource-efficient paradigm that leverages existing pre-trained models to circumvent the need for training from scratch. In this work, we investigate the fusion of models specifically adapted for decision-making tasks. This challenge divides into tw...
null
['Model Fusion', 'Energy-Based Model', 'Decision Making']
/pdf/71bddb115ddca0facbcb6058b8a3ceef221cec84.pdf
reinforcement learning
null
['ICLR.cc/2026/Conference/Submission25065/Authors']
vJBMYahZY5
25,063
vJBMYahZY5
MSearcher: Self-Reflective Search Agent Empowered by Monte Carlo Tree Search Based Data Synthesis
Recent advances in reinforcement learning (RL) have enabled large language models (LLMs) to perform multi-turn chain-of-thought (CoT) reasoning with tool use, where web search serves as the most critical tool for answering complex questions. However, most existing methods apply RL directly to off-the-shelf models witho...
null
['Data Construction', 'Monte Carlo Tree Search', 'Post Training', 'Reinforcement Learning', 'Question Answering']
/pdf/4532c7fc03b1306dfe9b622deb54523deefbf6d3.pdf
reinforcement learning
null
['ICLR.cc/2026/Conference/Submission25063/Authors']
nhuYNaAhL4
25,062
nhuYNaAhL4
Efficient Recommendation Unlearning via Task Vector Arithmetic in Shared Space
Driven by the growing need for data privacy, machine unlearning seeks to efficiently remove the influence of specific data from trained models without costly retraining. This challenge is particularly sensitive in recommendation unlearning because collaborative filtering (CF) inherently entangles interactions' influenc...
We propose COVA, a novel framework that performs recommendation unlearning via task vector arithmetic in SVD-derived embedding space, achieving 18.83% better completeness and 38.5× speedup while maintaining utility.
['Recommender system', 'Recommendation unlearning', 'Collaborative filtering', 'Security and privacy']
/pdf/016094eb96663580c2616640711eab873aec84f3.pdf
alignment, fairness, safety, privacy, and societal considerations
null
['ICLR.cc/2026/Conference/Submission25062/Authors']
Qc0goZbgZT
25,060
Qc0goZbgZT
Listwise Generalized Preference Optimization with Process-aware Signals for LLM Reasoning
Standard preference optimization methods for LLMs suffer from two limitations: pairwise objectives like DPO discard valuable ranking information, and outcome-only supervision provides sparse feedback for multi-step reasoning. We propose Listwise Generalized Preference Optimization with Process-Aware signals (LGPO-PA), ...
null
['RL Optimization', 'listwise ranking']
/pdf/1f1584ae27ae64ed9bcde7586c813425db05be85.pdf
reinforcement learning
null
['ICLR.cc/2026/Conference/Submission25060/Authors']
GVVNG2EMQv
25,055
GVVNG2EMQv
The Unseen Bias: How Norm Discrepancy in Pre-Norm MLLMs Leads to Visual Information Loss
Multimodal Large Language Models (MLLMs), which couple pre-trained vision encoders and language models, have shown remarkable capabilities. However, their reliance on the ubiquitous Pre-Norm architecture introduces a subtle yet critical flaw: a severe norm disparity between the high-norm visual tokens and the low-norm ...
null
['Multimodal Large Language Model', 'Pre-Normalization']
/pdf/cdc7ea4aa14491b0583106b79a8615adfedb8176.pdf
foundation or frontier models, including LLMs
null
['ICLR.cc/2026/Conference/Submission25055/Authors']