Dataset Viewer
Auto-converted to Parquet
Column       Type    Range
title        string  lengths 1–214
abstract     string  lengths 1–4.31k
year         int64   values 2.03k–2.03k
url          string  lengths 42–42
pdf          string  lengths 0–71
authors      list    lengths 0–84
venue        string  2 classes
venueid      string  1 class
invitation   string  lengths 85–335
venue_type   string  1 class
reviews      list    lengths 0–9
num_reviews  int64   values 0–9
_bibtex      string  lengths 112–601
_bibkey      string  lengths 7–45
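To make the schema concrete, here is one preview row sketched as a plain Python dict. Field names follow the schema above; the values are abbreviated from the first row below, and the `reviews` entries carry more keys than shown here, so treat this as an illustrative sketch rather than the exact stored record.

```python
# One row of the dataset, sketched as a plain Python dict.
# Values are abbreviated from the first preview row.
row = {
    "title": "Your Language Model Secretly Contains Personality Subnetworks",
    "abstract": "Large Language Models (LLMs) demonstrate remarkable flexibility ...",
    "year": 2026,
    "url": "https://openreview.net/forum?id=zzo3Sy3NSX",
    "pdf": "https://openreview.net/pdf/fe6fc58735330235254f4523254d472b1e04288d.pdf",
    "authors": [],          # empty while the submission is anonymous
    "venue": "ICLR 2026 Conference Submission",
    "venueid": "ICLR",
    "invitation": ["ICLR.cc/2026/Conference/-/Submission"],
    "venue_type": "poster",
    "reviews": [{"confidence": 2, "rating": 4, "review_id": "f8eJZxPaAh"}],
    "num_reviews": 4,
    "_bibkey": "anonymous2025your",
}

# The fixed 42-character URL length in the schema comes from a constant
# forum-URL prefix plus a 10-character OpenReview id.
forum_id = row["url"].split("id=")[1]
print(len(forum_id))  # → 10
```

The constant `url` length (42–42 in the schema) is why the forum id can always be recovered by splitting on `id=`.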
title: Your Language Model Secretly Contains Personality Subnetworks
abstract: Large Language Models (LLMs) demonstrate remarkable flexibility in adopting different personas and behaviors. Existing approaches typically adapt such behavior through external knowledge such as prompting, retrieval-augmented generation (RAG), or fine-tuning. We ask: do LLMs really need external context or parameters t...
year: 2026
url: https://openreview.net/forum?id=zzo3Sy3NSX
pdf: https://openreview.net/pdf/fe6fc58735330235254f4523254d472b1e04288d.pdf
authors: []
venue: ICLR 2026 Conference Submission
venueid: ICLR
invitation: ['ICLR.cc/2026/Conference/-/Submission', 'ICLR.cc/2026/Conference/-/Post_Submission', 'ICLR.cc/2026/Conference/Submission4956/-/Full_Submission']
venue_type: poster
reviews: [ { "confidence": 2, "date": 0, "rating": 4, "review": "", "review_id": "f8eJZxPaAh", "reviewer": "ICLR.cc/2026/Conference/Submission4956/Reviewer_NkPg", "strengths": "Compared to past prompt-based methods, this paper's approach of calculating a mask via pruning allows for the low-cost cre...
num_reviews: 4
_bibtex: @inproceedings{ anonymous2025your, title={Your Language Model Secretly Contains Personality Subnetworks}, author={Anonymous}, booktitle={Submitted to The Fourteenth International Conference on Learning Representations}, year={2025}, url={https://openreview.net/forum?id=zzo3Sy3NSX}, note={under review} }
_bibkey: anonymous2025your
title: Polychromic Objectives for Reinforcement Learning
abstract: Reinforcement learning fine-tuning (RLFT) is a dominant paradigm for improving pretrained policies for downstream tasks. These pretrained policies, trained on large datasets, produce generations with a broad range of promising but unrefined behaviors. Often, a critical failure mode of RLFT arises when policies lose thi...
year: 2026
url: https://openreview.net/forum?id=zzTQISAGUp
pdf: https://openreview.net/pdf/647c24c93d1ac3d8bfc1d3f206a448e32bd03f47.pdf
authors: []
venue: ICLR 2026 Conference Submission
venueid: ICLR
invitation: ['ICLR.cc/2026/Conference/-/Submission', 'ICLR.cc/2026/Conference/-/Post_Submission', 'ICLR.cc/2026/Conference/Submission23782/-/Full_Submission', 'ICLR.cc/2026/Conference/Submission23782/-/Rebuttal_Revision']
venue_type: poster
reviews: [ { "confidence": 3, "date": 0, "rating": 2, "review": "", "review_id": "DiRMNEHQhO", "reviewer": "ICLR.cc/2026/Conference/Submission23782/Reviewer_Bmic", "strengths": "The notion of set RL seems appealing and could inspire novel learning approaches that are distinct from existing classica...
num_reviews: 4
_bibtex: @inproceedings{ anonymous2025polychromic, title={Polychromic Objectives for Reinforcement Learning}, author={Anonymous}, booktitle={Submitted to The Fourteenth International Conference on Learning Representations}, year={2025}, url={https://openreview.net/forum?id=zzTQISAGUp}, note={under review} }
_bibkey: anonymous2025polychromic
title: vAttention: Verified Sparse Attention via Sampling
abstract: State-of-the-art sparse attention methods for reducing decoding latency fall into two main categories: approximate top-$k$ (and its extension, top-$p$) and recently introduced sampling-based estimation. However, these approaches are fundamentally limited in their ability to approximate full attention: they fail to prov...
year: 2026
url: https://openreview.net/forum?id=zzTDulLys0
pdf: https://openreview.net/pdf/11280b5e6be148a1db3b7d2eaf3fc47eedcb4980.pdf
authors: []
venue: ICLR 2026 Conference Submission
venueid: ICLR
invitation: ['ICLR.cc/2026/Conference/-/Submission', 'ICLR.cc/2026/Conference/-/Post_Submission', 'ICLR.cc/2026/Conference/Submission9335/-/Full_Submission', 'ICLR.cc/2026/Conference/Submission9335/-/Rebuttal_Revision']
venue_type: poster
reviews: [ { "confidence": 5, "date": 0, "rating": 2, "review": "", "review_id": "yzZyhoNCDS", "reviewer": "ICLR.cc/2026/Conference/Submission9335/Reviewer_rduG", "strengths": "1. The paper is well-written, with the exception of some details. It is concise, to the point and effective at communicati...
num_reviews: 4
_bibtex: @inproceedings{ anonymous2025vattention, title={vAttention: Verified Sparse Attention via Sampling}, author={Anonymous}, booktitle={Submitted to The Fourteenth International Conference on Learning Representations}, year={2025}, url={https://openreview.net/forum?id=zzTDulLys0}, note={under review} }
_bibkey: anonymous2025vattention
title: Phased DMD: Few-step Distribution Matching Distillation via Score Matching within Subintervals
abstract: Distribution Matching Distillation (DMD) distills score-based generative models into efficient one-step generators, without requiring a one-to-one correspondence with the sampling trajectories of their teachers. However, limited model capacity causes one-step distilled models underperform on complex generative tasks, e...
year: 2026
url: https://openreview.net/forum?id=zzJTo7ujql
pdf: https://openreview.net/pdf/e71773613d64368792595f5adf47cf22041311cc.pdf
authors: [ "Xiangyu Fan", "Zesong Qiu", "Zhuguanyu Wu", "Fanzhou Wang", "Zhiqian Lin", "Tianxiang Ren", "Dahua Lin", "Ruihao Gong", "Lei Yang" ]
venue: ICLR 2026 Conference Withdrawn Submission
venueid: ICLR
invitation: ['ICLR.cc/2026/Conference/-/Submission', 'ICLR.cc/2026/Conference/-/Post_Submission', 'ICLR.cc/2026/Conference/Submission10813/-/Full_Submission', 'ICLR.cc/2026/Conference/-/Withdrawn_Submission']
venue_type: poster
reviews: [ { "confidence": 4, "date": 0, "rating": 4, "review": "", "review_id": "us3Mj7Oiym", "reviewer": "ICLR.cc/2026/Conference/Submission10813/Reviewer_PJDq", "strengths": "- While the idea of progressive diffusion distillation under various criteria has been explored in previous studies such ...
num_reviews: 3
_bibtex: @misc{ fan2025phased, title={Phased {DMD}: Few-step Distribution Matching Distillation via Score Matching within Subintervals}, author={Xiangyu Fan and Zesong Qiu and Zhuguanyu Wu and Fanzhou Wang and Zhiqian Lin and Tianxiang Ren and Dahua Lin and Ruihao Gong and Lei Yang}, year={2025}, url={https://openreview.net/for...
_bibkey: fan2025phased
title: Learning activation functions with PCA on a set of diverse piecewise-linear self-trained mappings
abstract: This work explores a novel approach to learning activation functions, moving beyond the current reliance on human-engineered designs like the ReLU. Activation functions are crucial for the performance of deep neural networks, yet selecting an optimal one remains challenging. While recent efforts have focused on automat...
year: 2026
url: https://openreview.net/forum?id=zz3El6hqbs
pdf: https://openreview.net/pdf/5c2083093945b12142ac89448a624de1f7279d3e.pdf
authors: []
venue: ICLR 2026 Conference Submission
venueid: ICLR
invitation: ['ICLR.cc/2026/Conference/-/Submission', 'ICLR.cc/2026/Conference/-/Post_Submission', 'ICLR.cc/2026/Conference/Submission19895/-/Full_Submission']
venue_type: poster
reviews: [ { "confidence": 4, "date": 0, "rating": 2, "review": "", "review_id": "6hd51Ytryy", "reviewer": "ICLR.cc/2026/Conference/Submission19895/Reviewer_WARg", "strengths": "- The topic of the submission is very interesting: many aspects of deep learning architectures are iteratively designed t...
num_reviews: 4
_bibtex: @inproceedings{ anonymous2025learning, title={Learning activation functions with {PCA} on a set of diverse piecewise-linear self-trained mappings}, author={Anonymous}, booktitle={Submitted to The Fourteenth International Conference on Learning Representations}, year={2025}, url={https://openreview.net/forum?id=zz3El6hq...
_bibkey: anonymous2025learning
title: Sobolev acceleration for neural networks
abstract: $\textit{Sobolev training}$, which integrates target derivatives into the loss functions, has been shown to accelerate convergence and improve generalization compared to conventional $L^2$ training. However, the underlying mechanisms of this training method remain incompletely understood. In this work, we show that Sob...
year: 2026
url: https://openreview.net/forum?id=zz06hwkH37
pdf: https://openreview.net/pdf/c051d040c4fd039cab69daed99bece8b60144928.pdf
authors: []
venue: ICLR 2026 Conference Submission
venueid: ICLR
invitation: ['ICLR.cc/2026/Conference/-/Submission', 'ICLR.cc/2026/Conference/-/Post_Submission', 'ICLR.cc/2026/Conference/Submission23675/-/Full_Submission', 'ICLR.cc/2026/Conference/Submission23675/-/Rebuttal_Revision']
venue_type: poster
reviews: [ { "confidence": 3, "date": 0, "rating": 4, "review": "", "review_id": "Z9CKDs5NgD", "reviewer": "ICLR.cc/2026/Conference/Submission23675/Reviewer_VVCF", "strengths": "This paper presents several key strengths, most notably its establishment of the first rigorous theoretical framework for...
num_reviews: 4
_bibtex: @inproceedings{ anonymous2025sobolev, title={Sobolev acceleration for neural networks}, author={Anonymous}, booktitle={Submitted to The Fourteenth International Conference on Learning Representations}, year={2025}, url={https://openreview.net/forum?id=zz06hwkH37}, note={under review} }
_bibkey: anonymous2025sobolev
title: MINT: Causally Tracing Information Fusion in Multimodal Large Language Models
abstract: Multimodal Large Language Models (MLLMs) have demonstrated impressive performance on tasks that involve understanding and integrating information across different modalities, particularly vision and language. Despite their effectiveness, the internal representations of these Vision Language Models (VLMs) remain poorly ...
year: 2026
url: https://openreview.net/forum?id=zyu1tXMcbh
pdf: https://openreview.net/pdf/b8b86038e600dd05d4b796221a461ee4c688e0a4.pdf
authors: []
venue: ICLR 2026 Conference Submission
venueid: ICLR
invitation: ['ICLR.cc/2026/Conference/-/Submission', 'ICLR.cc/2026/Conference/-/Post_Submission', 'ICLR.cc/2026/Conference/Submission22929/-/Full_Submission']
venue_type: poster
reviews: [ { "confidence": 4, "date": 0, "rating": 6, "review": "", "review_id": "mRxJcajRUA", "reviewer": "ICLR.cc/2026/Conference/Submission22929/Reviewer_qNLH", "strengths": "1. The introduced probing method MINT, is systematic and causal method to trace multimodal fusion within VLMs, advancing ...
num_reviews: 4
_bibtex: @inproceedings{ anonymous2025mint, title={{MINT}: Causally Tracing Information Fusion in Multimodal Large Language Models}, author={Anonymous}, booktitle={Submitted to The Fourteenth International Conference on Learning Representations}, year={2025}, url={https://openreview.net/forum?id=zyu1tXMcbh}, note={under review}...
_bibkey: anonymous2025mint
title: DoMiNO: Down-scaling Molecular Dynamics with Neural Graph Ordinary Differential Equations
abstract: DoMiNO: Down-scaling Molecular Dynamics with Neural Graph Ordinary Differential Equations
year: 2026
url: https://openreview.net/forum?id=zyq1JIuIhL
pdf: https://openreview.net/pdf/99983c740e057ab5240b1e4426d5c4a9fe111da6.pdf
authors: [ "Fang Sun", "Zijie Huang", "Yadi Cao", "Xiao Luo", "Wei Wang", "Yizhou Sun" ]
venue: ICLR 2026 Conference Withdrawn Submission
venueid: ICLR
invitation: ['ICLR.cc/2026/Conference/-/Submission', 'ICLR.cc/2026/Conference/-/Post_Submission', 'ICLR.cc/2026/Conference/Submission13342/-/Full_Submission', 'ICLR.cc/2026/Conference/Submission13342/-/Rebuttal_Revision', 'ICLR.cc/2026/Conference/-/Withdrawn_Submission']
venue_type: poster
reviews: [ { "confidence": 4, "date": 0, "rating": 2, "review": "", "review_id": "vzanZOtJ1N", "reviewer": "ICLR.cc/2026/Conference/Submission13342/Reviewer_LGqt", "strengths": "The authors tackle an important problem with a creative and, in principle, intuitive idea. The reduced scaling from O(T) ...
num_reviews: 4
_bibtex: @misc{ sun2025domino, title={DoMi{NO}: Down-scaling Molecular Dynamics with Neural Graph Ordinary Differential Equations}, author={Fang Sun and Zijie Huang and Yadi Cao and Xiao Luo and Wei Wang and Yizhou Sun}, year={2025}, url={https://openreview.net/forum?id=zyq1JIuIhL} }
_bibkey: sun2025domino
title: Learning with Interaction: Agentic Distillation for Large Language Model Reasoning
abstract: Recent advancements in large language models (LLMs) have demonstrated remarkable reasoning abilities to solve complex tasks. However, these gains come with significant computational costs, limiting their practical deployment. A promising direction is to distill reasoning skills from larger teacher models into smaller, ...
year: 2026
url: https://openreview.net/forum?id=zyp9QT5Gf1
pdf: https://openreview.net/pdf/83e3c72f3b786cbec6676a0267401ad0cd12b8bd.pdf
authors: []
venue: ICLR 2026 Conference Submission
venueid: ICLR
invitation: ['ICLR.cc/2026/Conference/-/Submission', 'ICLR.cc/2026/Conference/-/Post_Submission', 'ICLR.cc/2026/Conference/Submission17783/-/Full_Submission', 'ICLR.cc/2026/Conference/Submission17783/-/Rebuttal_Revision']
venue_type: poster
reviews: [ { "confidence": 4, "date": 0, "rating": 4, "review": "", "review_id": "GBwzyKXich", "reviewer": "ICLR.cc/2026/Conference/Submission17783/Reviewer_qKrD", "strengths": "1. The detailed discussion of several issues when trying to inject teacher-generated tokens into the student LM is insigh...
num_reviews: 4
_bibtex: @inproceedings{ anonymous2025learning, title={Learning with Interaction: Agentic Distillation for Large Language Model Reasoning}, author={Anonymous}, booktitle={Submitted to The Fourteenth International Conference on Learning Representations}, year={2025}, url={https://openreview.net/forum?id=zyp9QT5Gf1}, note={under ...
_bibkey: anonymous2025learning
title: LitePruner: A Lightweight Realtime Token Pruner before Large Language Models
abstract: Tokenization is one of the core steps of the language model pipeline. However, the tokenizer yields more tokens for the same context in non-English languages, especially in low-resource languages due to the shared multilingual settings, which results in unexpected fairness problems in terms of token fees, response late...
year: 2026
url: https://openreview.net/forum?id=zyTGgLUdCb
pdf: https://openreview.net/pdf/f1089989f30f9fb47778643e1c055836f291b1f3.pdf
authors: []
venue: ICLR 2026 Conference Submission
venueid: ICLR
invitation: ['ICLR.cc/2026/Conference/-/Submission', 'ICLR.cc/2026/Conference/-/Post_Submission', 'ICLR.cc/2026/Conference/Submission16269/-/Full_Submission']
venue_type: poster
reviews: [ { "confidence": 3, "date": 0, "rating": 2, "review": "", "review_id": "rIU4bPd3Xi", "reviewer": "ICLR.cc/2026/Conference/Submission16269/Reviewer_KCka", "strengths": "1. The paper addresses a real fairness issue where non-English users pay significantly more for LLM services due to token...
num_reviews: 4
_bibtex: @inproceedings{ anonymous2025litepruner, title={LitePruner: A Lightweight Realtime Token Pruner before Large Language Models}, author={Anonymous}, booktitle={Submitted to The Fourteenth International Conference on Learning Representations}, year={2025}, url={https://openreview.net/forum?id=zyTGgLUdCb}, note={under revi...
_bibkey: anonymous2025litepruner
title: Diffusion Bridge Variational Inference for Deep Gaussian Processes
abstract: Deep Gaussian processes (DGPs) enable expressive hierarchical Bayesian modeling but pose substantial challenges for posterior inference, especially over inducing variables. Denoising diffusion variational inference (DDVI) addresses this by modeling the posterior as a time-reversed diffusion from a simple Gaussian prior...
year: 2026
url: https://openreview.net/forum?id=zyRmy0Ch9a
pdf: https://openreview.net/pdf/53c9c6bc86a1153ef4a88043c1f49e49ce4cfb91.pdf
authors: []
venue: ICLR 2026 Conference Submission
venueid: ICLR
invitation: ['ICLR.cc/2026/Conference/-/Submission', 'ICLR.cc/2026/Conference/-/Post_Submission', 'ICLR.cc/2026/Conference/Submission6981/-/Full_Submission', 'ICLR.cc/2026/Conference/Submission6981/-/Rebuttal_Revision']
venue_type: poster
reviews: [ { "confidence": 3, "date": 0, "rating": 6, "review": "", "review_id": "8hAMzNMbA4", "reviewer": "ICLR.cc/2026/Conference/Submission6981/Reviewer_vk1c", "strengths": "Originality:\nThe paper proposes the novel idea of reinterpreting DDVI as a kind of diffusion bridge using Doob’s h-transf...
num_reviews: 4
_bibtex: @inproceedings{ anonymous2025diffusion, title={Diffusion Bridge Variational Inference for Deep Gaussian Processes}, author={Anonymous}, booktitle={Submitted to The Fourteenth International Conference on Learning Representations}, year={2025}, url={https://openreview.net/forum?id=zyRmy0Ch9a}, note={under review} }
_bibkey: anonymous2025diffusion
title: Preference-based Policy Optimization from Sparse-reward Offline Dataset
abstract: Offline reinforcement learning (RL) holds the promise of training effective policies from static datasets without the need for costly online interactions. However, offline RL faces key limitations, most notably the challenge of generalizing to unseen or infrequently encountered state-action pairs. When a value function...
year: 2026
url: https://openreview.net/forum?id=zyLI9LEmry
pdf: https://openreview.net/pdf/4ef43b31950eff949a4099d4cb6f9c962b012a4a.pdf
authors: []
venue: ICLR 2026 Conference Submission
venueid: ICLR
invitation: ['ICLR.cc/2026/Conference/-/Submission', 'ICLR.cc/2026/Conference/-/Post_Submission', 'ICLR.cc/2026/Conference/Submission10578/-/Full_Submission']
venue_type: poster
reviews: [ { "confidence": 3, "date": 0, "rating": 6, "review": "", "review_id": "k0n2MAUUPo", "reviewer": "ICLR.cc/2026/Conference/Submission10578/Reviewer_uvFa", "strengths": "- This paper proposes a contrastive preference learning framework to bypass direct value function estimation.\n- This pap...
num_reviews: 4
_bibtex: @inproceedings{ anonymous2025preferencebased, title={Preference-based Policy Optimization from Sparse-reward Offline Dataset}, author={Anonymous}, booktitle={Submitted to The Fourteenth International Conference on Learning Representations}, year={2025}, url={https://openreview.net/forum?id=zyLI9LEmry}, note={under revi...
_bibkey: anonymous2025preferencebased
title: Teaching LLMs to Admit Uncertainty in OCR
abstract: Vision language models (VLMs) are increasingly replacing traditional OCR pipelines, but on visually degraded documents they often hallucinate, producing fluent yet incorrect text without signaling uncertainty. This occurs because current post-training emphasizes accuracy, which encourages models to guess even when unce...
year: 2026
url: https://openreview.net/forum?id=zyCjizqOxB
pdf: https://openreview.net/pdf/e2a795c9abb1a38a8b9c19099e6e5c79caef476c.pdf
authors: []
venue: ICLR 2026 Conference Submission
venueid: ICLR
invitation: ['ICLR.cc/2026/Conference/-/Submission', 'ICLR.cc/2026/Conference/-/Post_Submission', 'ICLR.cc/2026/Conference/Submission1052/-/Full_Submission', 'ICLR.cc/2026/Conference/Submission1052/-/Rebuttal_Revision']
venue_type: poster
reviews: [ { "confidence": 4, "date": 0, "rating": 4, "review": "", "review_id": "SJjGbrxrVZ", "reviewer": "ICLR.cc/2026/Conference/Submission1052/Reviewer_ixBu", "strengths": "**Clear problem formulation**: The paper addresses a real problem—VLM-based OCR systems hallucinate on degraded documents ...
num_reviews: 4
_bibtex: @inproceedings{ anonymous2025teaching, title={Teaching {LLM}s to Admit Uncertainty in {OCR}}, author={Anonymous}, booktitle={Submitted to The Fourteenth International Conference on Learning Representations}, year={2025}, url={https://openreview.net/forum?id=zyCjizqOxB}, note={under review} }
_bibkey: anonymous2025teaching
title: Emergence of Machine Language in LLM-based Agent Communication
abstract: Language emergence is a hallmark of human intelligence, as well as a key indicator for assessing artificial intelligence. Unlike prior studies grounded in multi-agent reinforcement learning, this paper asks whether machine language, potentially not human-interpretable, can emerge between large language model (LLM) agen...
year: 2026
url: https://openreview.net/forum?id=zy06mHNoO2
pdf: https://openreview.net/pdf/dd385254607d317329de7f1ab96728b480363cb4.pdf
authors: []
venue: ICLR 2026 Conference Submission
venueid: ICLR
invitation: ['ICLR.cc/2026/Conference/-/Submission', 'ICLR.cc/2026/Conference/-/Post_Submission', 'ICLR.cc/2026/Conference/Submission3748/-/Full_Submission']
venue_type: poster
reviews: [ { "confidence": 4, "date": 0, "rating": 6, "review": "", "review_id": "0acJkXshT6", "reviewer": "ICLR.cc/2026/Conference/Submission3748/Reviewer_LQV4", "strengths": "## Strengths\n- The paper introduces an interesting and innovative approach for generating natural-like communication that...
num_reviews: 4
_bibtex: @inproceedings{ anonymous2025emergence, title={Emergence of Machine Language in {LLM}-based Agent Communication}, author={Anonymous}, booktitle={Submitted to The Fourteenth International Conference on Learning Representations}, year={2025}, url={https://openreview.net/forum?id=zy06mHNoO2}, note={under review} }
_bibkey: anonymous2025emergence
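The preview rows show that withdrawal status is encoded in the `invitation` list (withdrawn rows carry an extra `.../-/Withdrawn_Submission` entry) rather than only in the `venue` string. A minimal sketch of filtering on that marker, using two records abbreviated from the preview (only the fields needed for the filter are included):

```python
# Sketch: separating withdrawn submissions from active ones via the
# `invitation` field. Records are abbreviated from the preview rows.
records = [
    {"_bibkey": "anonymous2025your",
     "invitation": ["ICLR.cc/2026/Conference/-/Submission",
                    "ICLR.cc/2026/Conference/Submission4956/-/Full_Submission"]},
    {"_bibkey": "fan2025phased",
     "invitation": ["ICLR.cc/2026/Conference/-/Submission",
                    "ICLR.cc/2026/Conference/-/Withdrawn_Submission"]},
]

def is_withdrawn(rec):
    # A row is withdrawn if any of its invitations ends with the marker.
    return any(inv.endswith("/Withdrawn_Submission") for inv in rec["invitation"])

withdrawn = [r["_bibkey"] for r in records if is_withdrawn(r)]
print(withdrawn)  # → ['fan2025phased']
```

The same predicate works unchanged on the full rows, since extra fields in each record dict are simply ignored.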
End of preview.

No dataset card yet

Downloads last month: 347