title: string (length 21–128)
content_TLDR: string (length 40–250)
abstract: string (length 613–2.09k)
authors: list (length 1–42)
openreview_url: string (length 42–42)
id: string (length 10–10)
forum: string (length 10–10)
authorids: list (length 1–42)
venue: dict
venueid: dict
pdf_url: dict
invitation: string (1 value)
group: string (1 value)
venue_name: string (1 value)
year: int64 (2.03k–2.03k)
conference: string (1 value)
content_keywords: list (length 1–16)
content_code_of_ethics: string (1 value)
content_author_guide: string (1 value)
content_flagged_for_ethics_review: bool (1 class)
content_ethics_comments: string (11 values)
content__bibtex: string (length 246–1.01k)
content_paperhash: string (length 29–134)
content_supplementary_material: string (73 values)
content_award_nomination: bool (1 class)
content_reciprocal_reviewing_status: string (1 value)
content_reciprocal_reviewing_author: string (4 values)
content_reciprocal_reviewing_exemption_reason: dict
E$^2$-RAG: Towards Editable Efficient RAG by Editing Compressed KV Caches
E$^2$-RAG efficiently edits compressed KV caches for knowledge updates, achieving 40x faster editing and 3x faster generation than standard RAG with minimal performance loss.
Retrieval-Augmented Generation (RAG) demonstrates remarkable capabilities for enhancing the performance of Large Language Models (LLMs) by integrating external knowledge. Standard RAG introduces additional computations due to the extra retrieved context. To improve efficiency, recent studies propose compressing chunk t...
[ "Tongxu Luo", "Wenyu Du", "HanWen Hao", "Min Zhang", "Hao Yang", "Benyou Wang" ]
https://openreview.net/forum?id=ZZ4tcxJvux
ZZ4tcxJvux
ZZ4tcxJvux
[ "~Tongxu_Luo1", "~Wenyu_Du1", "~HanWen_Hao1", "~Min_Zhang10", "~Hao_Yang7", "~Benyou_Wang2" ]
{ "value": "COLM 2025" }
{ "value": "colmweb.org/COLM/2025/Conference" }
{ "value": "/pdf/90a703fd95fe0a78eba30c62f09d41700971be6f.pdf" }
conference
colmweb.org/COLM/2025/Conference
2025
COLM
[ "Retrieval Augmented Generation" ]
I acknowledge that I and all co-authors of this work have read and commit to adhering to the COLM Code of Ethics on https://colmweb.org/CoE.html
I certify that this submission complies with the submission instructions as described on https://colmweb.org/AuthorGuide.html
null
null
@inproceedings{ luo2025erag, title={E\${\textasciicircum}2\$-{RAG}: Towards Editable Efficient {RAG} by Editing Compressed {KV} Caches}, author={Tongxu Luo and Wenyu Du and HanWen Hao and Min Zhang and Hao Yang and Benyou Wang}, booktitle={Second Conference on Language Modeling}, year={2025}, url={https://openreview.ne...
luo|e^2rag_towards_editable_efficient_rag_by_editing_compressed_kv_caches
/attachment/cb6b8a5009cad693d953c6721496aa806e2f5e1b.zip
null
null
null
null
Imagine All The Relevance: Scenario-Profiled Indexing with Knowledge Expansion for Dense Retrieval
We propose SPIKE, a dense retrieval framework that decomposes documents into scenarios.
Existing dense retrieval models struggle with reasoning-intensive retrieval task as they fail to capture implicit relevance that requires reasoning beyond surface-level semantic information. To address these challenges, we propose Scenario-Profiled Indexing with Knowledge Expansion (SPIKE), a dense retrieval framework ...
[ "Sangam Lee", "Ryang Heo", "SeongKu Kang", "Dongha Lee" ]
https://openreview.net/forum?id=ZYVAtUUNbH
ZYVAtUUNbH
ZYVAtUUNbH
[ "~Sangam_Lee1", "~Ryang_Heo1", "~SeongKu_Kang1", "~Dongha_Lee1" ]
{ "value": "COLM 2025" }
{ "value": "colmweb.org/COLM/2025/Conference" }
{ "value": "/pdf/0efd6d5ae8cf275cc6d655a40b5711fe1a00b98f.pdf" }
conference
colmweb.org/COLM/2025/Conference
2025
COLM
[ "Information Retrieval", "Reasoning Intensive Retrieval", "Dense Retrieval", "Reasoning", "LLM" ]
I acknowledge that I and all co-authors of this work have read and commit to adhering to the COLM Code of Ethics on https://colmweb.org/CoE.html
I certify that this submission complies with the submission instructions as described on https://colmweb.org/AuthorGuide.html
null
null
@inproceedings{ lee2025imagine, title={Imagine All The Relevance: Scenario-Profiled Indexing with Knowledge Expansion for Dense Retrieval}, author={Sangam Lee and Ryang Heo and SeongKu Kang and Dongha Lee}, booktitle={Second Conference on Language Modeling}, year={2025}, url={https://openreview.net/forum?id=ZYVAtUUNbH}...
lee|imagine_all_the_relevance_scenarioprofiled_indexing_with_knowledge_expansion_for_dense_retrieval
null
null
null
null
null
In-Context Occam’s Razor: How Transformers Prefer Simpler Hypotheses on the Fly
Transformers exhibit Bayesian Occam's razor in-context
In-context learning (ICL) enables transformers to adapt to new tasks through contextual examples without parameter updates. While existing research has typically studied ICL in fixed-complexity setups, real-world language models encounter tasks of diverse complexity levels. This paper investigates how transformers navi...
[ "Puneesh Deora", "Bhavya Vasudeva", "Tina Behnia", "Christos Thrampoulidis" ]
https://openreview.net/forum?id=ZSMnX3LBva
ZSMnX3LBva
ZSMnX3LBva
[ "~Puneesh_Deora1", "~Bhavya_Vasudeva1", "~Tina_Behnia1", "~Christos_Thrampoulidis1" ]
{ "value": "COLM 2025" }
{ "value": "colmweb.org/COLM/2025/Conference" }
{ "value": "/pdf/70c4cab2eb53decf5c0d430077a51eb24ac285af.pdf" }
conference
colmweb.org/COLM/2025/Conference
2025
COLM
[ "In-context learning", "transformers", "linear regression", "Markov chains", "Bayesian Occam's razor" ]
I acknowledge that I and all co-authors of this work have read and commit to adhering to the COLM Code of Ethics on https://colmweb.org/CoE.html
I certify that this submission complies with the submission instructions as described on https://colmweb.org/AuthorGuide.html
null
null
@inproceedings{ deora2025incontext, title={In-Context Occam{\textquoteright}s Razor: How Transformers Prefer Simpler Hypotheses on the Fly}, author={Puneesh Deora and Bhavya Vasudeva and Tina Behnia and Christos Thrampoulidis}, booktitle={Second Conference on Language Modeling}, year={2025}, url={https://openreview.net...
deora|incontext_occams_razor_how_transformers_prefer_simpler_hypotheses_on_the_fly
null
null
null
null
null
ADAPT: Actively Discovering and Adapting to Preferences for any Task
A benchmark, ADAPT, and a training mechanism, ReflectionDPO, to create and evaluate a grounded task planner that can actively elicit user preferences by asking questions, and adapt execution accordingly.
Assistive agents should be able to perform under-specified long-horizon tasks while respecting user preferences. We introduce Actively Discovering and Adapting to Preferences for any Task (ADAPT) – a benchmark designed to evaluate agents’ ability to adhere to user preferences across various household tasks through acti...
[ "Maithili Patel", "Xavier Puig", "Ruta Desai", "Roozbeh Mottaghi", "Sonia Chernova", "Joanne Truong", "Akshara Rai" ]
https://openreview.net/forum?id=Z8vtD1egtI
Z8vtD1egtI
Z8vtD1egtI
[ "~Maithili_Patel1", "~Xavier_Puig1", "~Ruta_Desai1", "~Roozbeh_Mottaghi1", "~Sonia_Chernova2", "~Joanne_Truong1", "~Akshara_Rai1" ]
{ "value": "COLM 2025" }
{ "value": "colmweb.org/COLM/2025/Conference" }
{ "value": "/pdf/7dd5515b2423b0c9dcfd62b55d1506ad08b10216.pdf" }
conference
colmweb.org/COLM/2025/Conference
2025
COLM
[ "Task Oriented Agents", "Interactive Learning", "Active Dialog", "Personalization", "Task Planning" ]
I acknowledge that I and all co-authors of this work have read and commit to adhering to the COLM Code of Ethics on https://colmweb.org/CoE.html
I certify that this submission complies with the submission instructions as described on https://colmweb.org/AuthorGuide.html
null
null
@inproceedings{ patel2025adapt, title={{ADAPT}: Actively Discovering and Adapting to Preferences for any Task}, author={Maithili Patel and Xavier Puig and Ruta Desai and Roozbeh Mottaghi and Sonia Chernova and Joanne Truong and Akshara Rai}, booktitle={Second Conference on Language Modeling}, year={2025}, url={https://...
patel|adapt_actively_discovering_and_adapting_to_preferences_for_any_task
null
null
null
null
null
Multi-Token Attention
We present Multi-Token Attention, a new method that allows LLMs to condition their attention weights on multiple query and key vectors simultaneously.
Soft attention is a critical mechanism powering LLMs to locate relevant parts within a given context. However, individual attention weights are determined by the similarity of only a single query and key token vector. This “single token attention” bottlenecks the amount of information used in distinguishing a relevant ...
[ "Olga Golovneva", "Tianlu Wang", "Jason E Weston", "Sainbayar Sukhbaatar" ]
https://openreview.net/forum?id=Z3L35tQTEg
Z3L35tQTEg
Z3L35tQTEg
[ "~Olga_Golovneva1", "~Tianlu_Wang1", "~Jason_E_Weston1", "~Sainbayar_Sukhbaatar1" ]
{ "value": "COLM 2025" }
{ "value": "colmweb.org/COLM/2025/Conference" }
{ "value": "/pdf/9532870a0ce87caae8564ef996ce940c575cebee.pdf" }
conference
colmweb.org/COLM/2025/Conference
2025
COLM
[ "Deep learning architectures", "Large Language Model (LLM)", "Transformer", "Attention" ]
I acknowledge that I and all co-authors of this work have read and commit to adhering to the COLM Code of Ethics on https://colmweb.org/CoE.html
I certify that this submission complies with the submission instructions as described on https://colmweb.org/AuthorGuide.html
null
null
@inproceedings{ golovneva2025multitoken, title={Multi-Token Attention}, author={Olga Golovneva and Tianlu Wang and Jason E Weston and Sainbayar Sukhbaatar}, booktitle={Second Conference on Language Modeling}, year={2025}, url={https://openreview.net/forum?id=Z3L35tQTEg} }
golovneva|multitoken_attention
null
null
null
null
null
FormaRL: Enhancing Autoformalization with no Labeled Data
We propose a new training method to enhance autoformalization via reinforcement learning and curate a dataset of undergraduate-level proof problems.
Autoformalization is one of the central tasks in formal verification, while its advancement remains hindered by data scarcity and the absence of efficient methods. In this work we propose **FormaRL**, a simple yet efficient reinforcement learning framework for autoformalization which only requires a small amount o...
[ "Yanxing Huang", "Xinling Jin", "Sijie Liang", "Fuwen Luo", "Peng Li", "Yang Liu" ]
https://openreview.net/forum?id=Z2El1U94bq
Z2El1U94bq
Z2El1U94bq
[ "~Yanxing_Huang1", "~Xinling_Jin1", "~Sijie_Liang1", "~Fuwen_Luo1", "~Peng_Li2", "~Yang_Liu19" ]
{ "value": "COLM 2025" }
{ "value": "colmweb.org/COLM/2025/Conference" }
{ "value": "/pdf/085dbdfbdd89d2da4ba20ea90affebf80b429c0e.pdf" }
conference
colmweb.org/COLM/2025/Conference
2025
COLM
[ "Large Language Model", "Formal Verification", "Autoformalization", "Reinforcement Learning" ]
I acknowledge that I and all co-authors of this work have read and commit to adhering to the COLM Code of Ethics on https://colmweb.org/CoE.html
I certify that this submission complies with the submission instructions as described on https://colmweb.org/AuthorGuide.html
null
null
@inproceedings{ huang2025formarl, title={Forma{RL}: Enhancing Autoformalization with no Labeled Data}, author={Yanxing Huang and Xinling Jin and Sijie Liang and Fuwen Luo and Peng Li and Yang Liu}, booktitle={Second Conference on Language Modeling}, year={2025}, url={https://openreview.net/forum?id=Z2El1U94bq} }
huang|formarl_enhancing_autoformalization_with_no_labeled_data
/attachment/e15a03408291a966b2ffe7431f1e2b9c894481f7.zip
null
null
null
null
Learning Adaptive Parallel Reasoning with Language Models
We propose Adaptive Parallel Reasoning (APR), a reinforcement-learning-optimized inference framework allowing language models to dynamically balance serial and parallel computations, significantly enhancing reasoning accuracy and efficiency.
Scaling inference-time computation has substantially improved the reasoning capabilities of language models. However, existing methods have significant limitations: serialized chain-of-thought approaches generate overly long outputs, leading to increased latency and exhausted context windows, while parallel methods suc...
[ "Jiayi Pan", "Xiuyu Li", "Long Lian", "Charlie Victor Snell", "Yifei Zhou", "Adam Yala", "Trevor Darrell", "Kurt Keutzer", "Alane Suhr" ]
https://openreview.net/forum?id=YgwQ7sXPXU
YgwQ7sXPXU
YgwQ7sXPXU
[ "~Jiayi_Pan1", "~Xiuyu_Li1", "~Long_Lian1", "~Charlie_Victor_Snell1", "~Yifei_Zhou1", "~Adam_Yala1", "~Trevor_Darrell2", "~Kurt_Keutzer1", "~Alane_Suhr1" ]
{ "value": "COLM 2025" }
{ "value": "colmweb.org/COLM/2025/Conference" }
{ "value": "/pdf/4326af2e36216b5bf3712412640e6ce9524e8496.pdf" }
conference
colmweb.org/COLM/2025/Conference
2025
COLM
[ "large language models", "reasoning" ]
I acknowledge that I and all co-authors of this work have read and commit to adhering to the COLM Code of Ethics on https://colmweb.org/CoE.html
I certify that this submission complies with the submission instructions as described on https://colmweb.org/AuthorGuide.html
null
null
@inproceedings{ pan2025learning, title={Learning Adaptive Parallel Reasoning with Language Models}, author={Jiayi Pan and Xiuyu Li and Long Lian and Charlie Victor Snell and Yifei Zhou and Adam Yala and Trevor Darrell and Kurt Keutzer and Alane Suhr}, booktitle={Second Conference on Language Modeling}, year={2025}, url...
pan|learning_adaptive_parallel_reasoning_with_language_models
null
null
null
null
null
Scoring Verifiers: Evaluating Synthetic Verification for Code and Reasoning
This paper benchmarks synthetic verification methods for code correctness, showing reasoning models improve test case generation and verification accuracy.
Synthetic verification techniques such as generating test cases and reward modelling are common ways to enhance the coding capabilities of large language models (LLM) beyond predefined tests. Additionally, code verification has recently found great success as a critical component in improving reasoning capability of LL...
[ "Aleksander Ficek", "Somshubra Majumdar", "Vahid Noroozi", "Boris Ginsburg" ]
https://openreview.net/forum?id=YLze3CETYP
YLze3CETYP
YLze3CETYP
[ "~Aleksander_Ficek1", "~Somshubra_Majumdar1", "~Vahid_Noroozi2", "~Boris_Ginsburg1" ]
{ "value": "COLM 2025" }
{ "value": "colmweb.org/COLM/2025/Conference" }
{ "value": "/pdf/5e309c711da0e63eff0b0c69db1b1ff080e0f216.pdf" }
conference
colmweb.org/COLM/2025/Conference
2025
COLM
[ "code generation and understanding", "benchmarking", "NLP datasets", "evaluation methodologies", "automatic evaluation of datasets", "evaluation", "metrics", "reproducibility", "statistical testing for evaluation" ]
I acknowledge that I and all co-authors of this work have read and commit to adhering to the COLM Code of Ethics on https://colmweb.org/CoE.html
I certify that this submission complies with the submission instructions as described on https://colmweb.org/AuthorGuide.html
null
null
@inproceedings{ ficek2025scoring, title={Scoring Verifiers: Evaluating Synthetic Verification for Code and Reasoning}, author={Aleksander Ficek and Somshubra Majumdar and Vahid Noroozi and Boris Ginsburg}, booktitle={Second Conference on Language Modeling}, year={2025}, url={https://openreview.net/forum?id=YLze3CETYP} ...
ficek|scoring_verifiers_evaluating_synthetic_verification_for_code_and_reasoning
null
null
null
null
null
SpecDec++: Boosting Speculative Decoding via Adaptive Candidate Lengths
Based on MDP theory, we use a trained acceptance prediction head to determine when to stop the current proposal round for speculative decoding.
Speculative decoding reduces the inference latency of a target large language model via utilizing a smaller and faster draft model. Its performance depends on a hyperparameter K -- the candidate length, i.e., the number of candidate tokens for the target model to verify in each round. However, previous methods often us...
[ "Kaixuan Huang", "Xudong Guo", "Mengdi Wang" ]
https://openreview.net/forum?id=Y131N9fUbU
Y131N9fUbU
Y131N9fUbU
[ "~Kaixuan_Huang1", "~Xudong_Guo1", "~Mengdi_Wang1" ]
{ "value": "COLM 2025" }
{ "value": "colmweb.org/COLM/2025/Conference" }
{ "value": "/pdf/20e04fa6a4fd4676e6c928f26603189768563b4a.pdf" }
conference
colmweb.org/COLM/2025/Conference
2025
COLM
[ "speculative decoding", "MDP theory", "large language models" ]
I acknowledge that I and all co-authors of this work have read and commit to adhering to the COLM Code of Ethics on https://colmweb.org/CoE.html
I certify that this submission complies with the submission instructions as described on https://colmweb.org/AuthorGuide.html
null
null
@inproceedings{ huang2025specdec, title={SpecDec++: Boosting Speculative Decoding via Adaptive Candidate Lengths}, author={Kaixuan Huang and Xudong Guo and Mengdi Wang}, booktitle={Second Conference on Language Modeling}, year={2025}, url={https://openreview.net/forum?id=Y131N9fUbU} }
huang|specdec_boosting_speculative_decoding_via_adaptive_candidate_lengths
null
null
null
null
null
Self-Steering Language Models
We introduce a new approach to structuring test-time computation that uses LMs to plan and execute task-specific search procedures in a probabilistic programming language.
While test-time reasoning enables language models (LMs) to tackle complex tasks, searching or planning in natural language can be slow, costly, and error-prone. But even when LMs struggle to emulate the precise reasoning steps needed to solve a problem, they often excel at describing its *abstract structure*—both how t...
[ "Gabriel Grand", "Joshua B. Tenenbaum", "Vikash Mansinghka", "Alexander K. Lew", "Jacob Andreas" ]
https://openreview.net/forum?id=XvCBtm5PgF
XvCBtm5PgF
XvCBtm5PgF
[ "~Gabriel_Grand1", "~Joshua_B._Tenenbaum1", "~Vikash_Mansinghka1", "~Alexander_K._Lew1", "~Jacob_Andreas1" ]
{ "value": "COLM 2025" }
{ "value": "colmweb.org/COLM/2025/Conference" }
{ "value": "/pdf/a0b7c778c71e6dfe14013820aa6bfe0ff49bab8a.pdf" }
conference
colmweb.org/COLM/2025/Conference
2025
COLM
[ "Probabilistic inference", "sequential Monte Carlo", "code generation", "test-time search", "constrained generation", "reasoning", "language models" ]
I acknowledge that I and all co-authors of this work have read and commit to adhering to the COLM Code of Ethics on https://colmweb.org/CoE.html
I certify that this submission complies with the submission instructions as described on https://colmweb.org/AuthorGuide.html
null
null
@inproceedings{ grand2025selfsteering, title={Self-Steering Language Models}, author={Gabriel Grand and Joshua B. Tenenbaum and Vikash Mansinghka and Alexander K. Lew and Jacob Andreas}, booktitle={Second Conference on Language Modeling}, year={2025}, url={https://openreview.net/forum?id=XvCBtm5PgF} }
grand|selfsteering_language_models
null
null
null
null
null
Evaluating and Designing Sparse Autoencoders by Approximating Quasi-Orthogonality
The paper introduces Approximate Feature Activation (AFA) and the $\varepsilon$LBO metric to address the lack of principled hyperparameter selection in top-k SAEs and to evaluate SAEs using quasi-orthogonality.
Sparse autoencoders (SAEs) are widely used in mechanistic interpretability research for large language models; however, the state-of-the-art method of using $k$-sparse autoencoders lacks a theoretical grounding for selecting the hyperparameter $k$ that represents the number of nonzero activations, often denoted by $\el...
[ "Sewoong Lee", "Adam Davies", "Marc E. Canby", "Julia Hockenmaier" ]
https://openreview.net/forum?id=XhdNFeMclS
XhdNFeMclS
XhdNFeMclS
[ "~Sewoong_Lee2", "~Adam_Davies2", "~Marc_E._Canby1", "~Julia_Hockenmaier1" ]
{ "value": "COLM 2025" }
{ "value": "colmweb.org/COLM/2025/Conference" }
{ "value": "/pdf/217c61848f7563dea844e2801b43491e39ef9338.pdf" }
conference
colmweb.org/COLM/2025/Conference
2025
COLM
[ "mechanistic interpretability", "sparse autoencoder" ]
I acknowledge that I and all co-authors of this work have read and commit to adhering to the COLM Code of Ethics on https://colmweb.org/CoE.html
I certify that this submission complies with the submission instructions as described on https://colmweb.org/AuthorGuide.html
null
null
@inproceedings{ lee2025evaluating, title={Evaluating and Designing Sparse Autoencoders by Approximating Quasi-Orthogonality}, author={Sewoong Lee and Adam Davies and Marc E. Canby and Julia Hockenmaier}, booktitle={Second Conference on Language Modeling}, year={2025}, url={https://openreview.net/forum?id=XhdNFeMclS} }
lee|evaluating_and_designing_sparse_autoencoders_by_approximating_quasiorthogonality
null
null
null
null
null
NoveltyBench: Evaluating Creativity and Diversity in Language Models
We introduce a benchmark for creativity and diversity in language models.
Language models have demonstrated remarkable capabilities on standard benchmarks, yet they struggle increasingly from *mode collapse*, the inability to generate diverse and novel outputs. Our work introduces **NoveltyBench**, a benchmark specifically designed to evaluate the ability of language models to produce multip...
[ "Yiming Zhang", "Harshita Diddee", "Susan Holm", "Hanchen Liu", "Xinyue Liu", "Vinay Samuel", "Barry Wang", "Daphne Ippolito" ]
https://openreview.net/forum?id=XZm1ekzERf
XZm1ekzERf
XZm1ekzERf
[ "~Yiming_Zhang5", "~Harshita_Diddee1", "~Susan_Holm1", "~Hanchen_Liu1", "~Xinyue_Liu11", "~Vinay_Samuel1", "~Barry_Wang1", "~Daphne_Ippolito1" ]
{ "value": "COLM 2025" }
{ "value": "colmweb.org/COLM/2025/Conference" }
{ "value": "/pdf/968f2cdaefa4618b2e3fbab147422f76f64b7718.pdf" }
conference
colmweb.org/COLM/2025/Conference
2025
COLM
[ "generation", "diversity", "evaluation" ]
I acknowledge that I and all co-authors of this work have read and commit to adhering to the COLM Code of Ethics on https://colmweb.org/CoE.html
I certify that this submission complies with the submission instructions as described on https://colmweb.org/AuthorGuide.html
null
null
@inproceedings{ zhang2025noveltybench, title={NoveltyBench: Evaluating Creativity and Diversity in Language Models}, author={Yiming Zhang and Harshita Diddee and Susan Holm and Hanchen Liu and Xinyue Liu and Vinay Samuel and Barry Wang and Daphne Ippolito}, booktitle={Second Conference on Language Modeling}, year={2025...
zhang|noveltybench_evaluating_creativity_and_diversity_in_language_models
null
null
null
null
null
To Backtrack or Not to Backtrack: When Sequential Search Limits Model Reasoning
A comparative study of backtracking versus parallel search in LLMs, revealing nuanced trade-offs: backtracking can harm reasoning due to constrained search strategies and verbosity, but is particularly suitable for RL.
Recent advancements in large language models (LLMs) have significantly improved their reasoning abilities, particularly through techniques involving search and backtracking. Backtracking naturally scales test-time compute by enabling sequential, linearized exploration via long chain-of-thought (CoT) generation. However...
[ "Tian Qin", "David Alvarez-Melis", "Samy Jelassi", "Eran Malach" ]
https://openreview.net/forum?id=XNQHMYsUHf
XNQHMYsUHf
XNQHMYsUHf
[ "~Tian_Qin3", "~David_Alvarez-Melis1", "~Samy_Jelassi1", "~Eran_Malach3" ]
{ "value": "COLM 2025" }
{ "value": "colmweb.org/COLM/2025/Conference" }
{ "value": "/pdf/537fcbe7321564f1b58941365ae11bd87d0b08a4.pdf" }
conference
colmweb.org/COLM/2025/Conference
2025
COLM
[ "LLM Reasoning", "Backtracking", "Test-time Compute" ]
I acknowledge that I and all co-authors of this work have read and commit to adhering to the COLM Code of Ethics on https://colmweb.org/CoE.html
I certify that this submission complies with the submission instructions as described on https://colmweb.org/AuthorGuide.html
null
null
@inproceedings{ qin2025to, title={To Backtrack or Not to Backtrack: When Sequential Search Limits Model Reasoning}, author={Tian Qin and David Alvarez-Melis and Samy Jelassi and Eran Malach}, booktitle={Second Conference on Language Modeling}, year={2025}, url={https://openreview.net/forum?id=XNQHMYsUHf} }
qin|to_backtrack_or_not_to_backtrack_when_sequential_search_limits_model_reasoning
/attachment/0a403995ea0dfbbe8a68a5305859fe4ebe0618c1.zip
null
null
null
null
DualEdit: Dual Editing for Knowledge Updating in Vision-Language Models
We explore the importance of image and text modalities and propose a novel dual editing method—DualEdit.
Model editing aims to efficiently update a pre-trained model’s knowledge without the need for time-consuming full retraining. While existing pioneering editing methods achieve promising results, they primarily focus on editing single-modal language models (LLMs). However, for vision-language models (VLMs), which involv...
[ "Zhiyi Shi", "Binjie Wang", "Chongjie Si", "Yichen Wu", "Junsik Kim", "Hanspeter Pfister" ]
https://openreview.net/forum?id=X5vFauyVWr
X5vFauyVWr
X5vFauyVWr
[ "~Zhiyi_Shi1", "~Binjie_Wang1", "~Chongjie_Si1", "~Yichen_Wu2", "~Junsik_Kim1", "~Hanspeter_Pfister1" ]
{ "value": "COLM 2025" }
{ "value": "colmweb.org/COLM/2025/Conference" }
{ "value": "/pdf/f7d988bae200fba530854b0f4ae77d12e77b7ca1.pdf" }
conference
colmweb.org/COLM/2025/Conference
2025
COLM
[ "Model Editing", "Multimodal Learning", "VLM" ]
I acknowledge that I and all co-authors of this work have read and commit to adhering to the COLM Code of Ethics on https://colmweb.org/CoE.html
I certify that this submission complies with the submission instructions as described on https://colmweb.org/AuthorGuide.html
null
null
@inproceedings{ shi2025dualedit, title={DualEdit: Dual Editing for Knowledge Updating in Vision-Language Models}, author={Zhiyi Shi and Binjie Wang and Chongjie Si and Yichen Wu and Junsik Kim and Hanspeter Pfister}, booktitle={Second Conference on Language Modeling}, year={2025}, url={https://openreview.net/forum?id=X...
shi|dualedit_dual_editing_for_knowledge_updating_in_visionlanguage_models
null
null
null
null
null
Agents Are All You Need for LLM Unlearning
LLM agents based unlearning beats all the existing unlearning methods
Information removal or suppression in large language models (LLMs) is a desired functionality, useful in AI regulation, legal compliance, safety, and privacy. LLM unlearning methods aim to remove information on demand from LLMs. Current LLM unlearning methods struggle to balance the unlearning efficacy and utility due ...
[ "Debdeep Sanyal", "Murari Mandal" ]
https://openreview.net/forum?id=X39dK0SX9W
X39dK0SX9W
X39dK0SX9W
[ "~Debdeep_Sanyal1", "~Murari_Mandal1" ]
{ "value": "COLM 2025" }
{ "value": "colmweb.org/COLM/2025/Conference" }
{ "value": "/pdf/16a114dc6c9fda7e1b1d1e1692aec55da696fb3a.pdf" }
conference
colmweb.org/COLM/2025/Conference
2025
COLM
[ "LLM Agents", "unlearning", "Safety in AI" ]
I acknowledge that I and all co-authors of this work have read and commit to adhering to the COLM Code of Ethics on https://colmweb.org/CoE.html
I certify that this submission complies with the submission instructions as described on https://colmweb.org/AuthorGuide.html
null
null
@inproceedings{ sanyal2025agents, title={Agents Are All You Need for {LLM} Unlearning}, author={Debdeep Sanyal and Murari Mandal}, booktitle={Second Conference on Language Modeling}, year={2025}, url={https://openreview.net/forum?id=X39dK0SX9W} }
sanyal|agents_are_all_you_need_for_llm_unlearning
null
null
null
null
null
Local Mixtures of Experts: Essentially Free Test-Time Training via Model Merging
We propose Test-Time Model Merging (TTMM), which approaches the performance of Test-Time Training (TTT) with almost no test-time overhead.
Mixture-of-experts (MoE) models are a promising approach to increasing model capacity without increasing inference cost, and are core components of many state-of-the-art language models. However, current MoE models typically use only a few experts due to prohibitive training and inference cost. We propose _**T**est-**T**i...
[ "Ryo Bertolissi", "Jonas Hübotter", "Ido Hakimi", "Andreas Krause" ]
https://openreview.net/forum?id=X2RXpFA6Vh
X2RXpFA6Vh
X2RXpFA6Vh
[ "~Ryo_Bertolissi1", "~Jonas_Hübotter1", "~Ido_Hakimi1", "~Andreas_Krause1" ]
{ "value": "COLM 2025" }
{ "value": "colmweb.org/COLM/2025/Conference" }
{ "value": "/pdf/87f26d3172dfebb33a4927351d920c8aba384a5f.pdf" }
conference
colmweb.org/COLM/2025/Conference
2025
COLM
[ "test-time training", "model merging", "mixture of experts", "language modeling", "local learning", "transductive learning" ]
I acknowledge that I and all co-authors of this work have read and commit to adhering to the COLM Code of Ethics on https://colmweb.org/CoE.html
I certify that this submission complies with the submission instructions as described on https://colmweb.org/AuthorGuide.html
null
null
@inproceedings{ bertolissi2025local, title={Local Mixtures of Experts: Essentially Free Test-Time Training via Model Merging}, author={Ryo Bertolissi and Jonas H{\"u}botter and Ido Hakimi and Andreas Krause}, booktitle={Second Conference on Language Modeling}, year={2025}, url={https://openreview.net/forum?id=X2RXpFA6V...
bertolissi|local_mixtures_of_experts_essentially_free_testtime_training_via_model_merging
/attachment/7454da207edc9f4504d90fe9d3fa15d8e2804df7.zip
null
null
null
null
DFRot: Achieving Outlier-Free and Massive Activation-Free for Rotated LLMs with Refined Rotation
We explain why randomized Hadamard transforms are superior to randomized orthogonal transforms in the W4A4 quantization process and propose an optimization method for the rotation matrix.
Rotating the activation and weight matrices to reduce the influence of outliers in large language models (LLMs) has recently attracted significant attention, particularly in the context of model quantization. Prior studies have shown that in low-precision quantization scenarios, such as 4-bit weights and 4-bit activati...
[ "Jingyang Xiang", "Sai Qian Zhang" ]
https://openreview.net/forum?id=WzGypILLDb
WzGypILLDb
WzGypILLDb
[ "~Jingyang_Xiang2", "~Sai_Qian_Zhang1" ]
{ "value": "COLM 2025" }
{ "value": "colmweb.org/COLM/2025/Conference" }
{ "value": "/pdf/33e97f26a84760262809a6c096cc688671523fef.pdf" }
conference
colmweb.org/COLM/2025/Conference
2025
COLM
[ "W4A4 quantization", "randomized Hadamard transforms", "randomized orthogonal transforms", "outlier", "massive activation" ]
I acknowledge that I and all co-authors of this work have read and commit to adhering to the COLM Code of Ethics on https://colmweb.org/CoE.html
I certify that this submission complies with the submission instructions as described on https://colmweb.org/AuthorGuide.html
null
null
@inproceedings{ xiang2025dfrot, title={{DFR}ot: Achieving Outlier-Free and Massive Activation-Free for Rotated {LLM}s with Refined Rotation}, author={Jingyang Xiang and Sai Qian Zhang}, booktitle={Second Conference on Language Modeling}, year={2025}, url={https://openreview.net/forum?id=WzGypILLDb} }
xiang|dfrot_achieving_outlierfree_and_massive_activationfree_for_rotated_llms_with_refined_rotation
null
null
null
null
null
Robo-Instruct: Simulator-Augmented Instruction Alignment For Finetuning Code LLMs
We propose a simulator-augmented approach for generating synthetic training data to fine-tune code LLMs on domain-specific robot tasks.
Code LLMs have shown promising results in converting tasks in natural language to programs that can be executed by service robots. We are interested in finetuning small, specialized LLMs for this purpose, but collecting datasets of task-program pairs specific to each robot is time-consuming and expensive. While appro...
[ "Zichao Hu", "Junyi Jessy Li", "Arjun Guha", "Joydeep Biswas" ]
https://openreview.net/forum?id=WnZjdQOWiY
WnZjdQOWiY
WnZjdQOWiY
[ "~Zichao_Hu1", "~Junyi_Jessy_Li2", "~Arjun_Guha3", "~Joydeep_Biswas1" ]
{ "value": "COLM 2025" }
{ "value": "colmweb.org/COLM/2025/Conference" }
{ "value": "/pdf/fd8cec822f3d537ccc371b6cf428f41186476ba4.pdf" }
conference
colmweb.org/COLM/2025/Conference
2025
COLM
[ "Self-Instruct", "Fine-tuning Code LLMs for service robot tasks", "Domain-Specific Program Generation", "Code LLMs for Robotics" ]
I acknowledge that I and all co-authors of this work have read and commit to adhering to the COLM Code of Ethics on https://colmweb.org/CoE.html
I certify that this submission complies with the submission instructions as described on https://colmweb.org/AuthorGuide.html
null
null
@inproceedings{ hu2025roboinstruct, title={Robo-Instruct: Simulator-Augmented Instruction Alignment For Finetuning Code {LLM}s}, author={Zichao Hu and Junyi Jessy Li and Arjun Guha and Joydeep Biswas}, booktitle={Second Conference on Language Modeling}, year={2025}, url={https://openreview.net/forum?id=WnZjdQOWiY} }
hu|roboinstruct_simulatoraugmented_instruction_alignment_for_finetuning_code_llms
null
true
null
null
null
Enhancing LLM Reliability via Explicit Knowledge Boundary Modeling
We present the Explicit Knowledge Boundary Modeling (EKBM) framework, which improves large language models' reliability by integrating fast and slow reasoning systems to minimize hallucinations while ensuring efficiency in error-sensitive tasks.
Large language models (LLMs) are prone to hallucination stemming from misaligned self-awareness, particularly when processing queries exceeding their knowledge boundaries. While existing mitigation strategies employ uncertainty estimation or query rejection mechanisms, they suffer from computational efficiency and sacr...
[ "Hang Zheng", "Hongshen Xu", "Yuncong Liu", "Shuai Fan", "Lu Chen", "Pascale Fung", "Kai Yu" ]
https://openreview.net/forum?id=WLgfeRhuA0
WLgfeRhuA0
WLgfeRhuA0
[ "~Hang_Zheng3", "~Hongshen_Xu1", "~Yuncong_Liu1", "~Shuai_Fan1", "~Lu_Chen3", "~Pascale_Fung1", "~Kai_Yu3" ]
{ "value": "COLM 2025" }
{ "value": "colmweb.org/COLM/2025/Conference" }
{ "value": "/pdf/a56440afc6933bfb0876b800c75ca96c0c6b5c6a.pdf" }
conference
colmweb.org/COLM/2025/Conference
2,025
COLM
[ "reliability", "knowledge boundary", "self-awareness" ]
I acknowledge that I and all co-authors of this work have read and commit to adhering to the COLM Code of Ethics on https://colmweb.org/CoE.html
I certify that this submission complies with the submission instructions as described on https://colmweb.org/AuthorGuide.html
null
null
@inproceedings{ zheng2025enhancing, title={Enhancing {LLM} Reliability via Explicit Knowledge Boundary Modeling}, author={Hang Zheng and Hongshen Xu and Yuncong Liu and Shuai Fan and Lu Chen and Pascale Fung and Kai Yu}, booktitle={Second Conference on Language Modeling}, year={2025}, url={https://openreview.net/forum?...
zheng|enhancing_llm_reliability_via_explicit_knowledge_boundary_modeling
null
null
null
null
null
LeakAgent: RL-based Red-teaming Agent for LLM Privacy Leakage
LeakAgent is a new black-box red-teaming framework that automates the generation of adversarial prompts with RL to exploit privacy in LLMs.
Recent studies have discovered that large language models (LLM) may be ``fooled'' to output private information, including training data, system prompts, and personally identifiable information, under carefully crafted adversarial prompts. Existing red-teaming approaches for privacy leakage either rely on manual effort...
[ "Yuzhou Nie", "Zhun Wang", "Ye Yu", "Xian Wu", "Xuandong Zhao", "Nathaniel D. Bastian", "Wenbo Guo", "Dawn Song" ]
https://openreview.net/forum?id=WIfns41MAb
WIfns41MAb
WIfns41MAb
[ "~Yuzhou_Nie1", "~Zhun_Wang1", "~Ye_Yu5", "~Xian_Wu8", "~Xuandong_Zhao1", "~Nathaniel_D._Bastian1", "~Wenbo_Guo1", "~Dawn_Song1" ]
{ "value": "COLM 2025" }
{ "value": "colmweb.org/COLM/2025/Conference" }
{ "value": "/pdf/8dfa927a8d360bc872b39591cb5a7a9483339f07.pdf" }
conference
colmweb.org/COLM/2025/Conference
2,025
COLM
[ "Privacy Leakage", "LLM" ]
I acknowledge that I and all co-authors of this work have read and commit to adhering to the COLM Code of Ethics on https://colmweb.org/CoE.html
I certify that this submission complies with the submission instructions as described on https://colmweb.org/AuthorGuide.html
null
null
@inproceedings{ nie2025leakagent, title={LeakAgent: {RL}-based Red-teaming Agent for {LLM} Privacy Leakage}, author={Yuzhou Nie and Zhun Wang and Ye Yu and Xian Wu and Xuandong Zhao and Nathaniel D. Bastian and Wenbo Guo and Dawn Song}, booktitle={Second Conference on Language Modeling}, year={2025}, url={https://openr...
nie|leakagent_rlbased_redteaming_agent_for_llm_privacy_leakage
null
true
null
null
null
Zero-shot Benchmarking: A Framework for Flexible and Scalable Automatic Evaluation of Language Models
We present Zero-shot Benchmarking (ZSB), a framework for creating high-quality benchmarks for any task by leveraging language models for both synthetic test data creation and evaluation.
As language models improve and grow capable of performing more complex tasks across modalities, evaluating them automatically becomes increasingly challenging. Developing strong and robust task-specific automatic metrics gets harder, and human-annotated test sets—which are expensive to create—saturate more quickly. A c...
[ "José Pombal", "Nuno M Guerreiro", "Ricardo Rei", "Andre Martins" ]
https://openreview.net/forum?id=WARZwyDf17
WARZwyDf17
WARZwyDf17
[ "~José_Pombal1", "~Nuno_M_Guerreiro1", "~Ricardo_Rei1", "~Andre_Martins1" ]
{ "value": "COLM 2025" }
{ "value": "colmweb.org/COLM/2025/Conference" }
{ "value": "/pdf/239ca1a13fb128af2c9c973dcdc64a8c7b6028fa.pdf" }
conference
colmweb.org/COLM/2025/Conference
2,025
COLM
[ "automatic evaluation", "large language models", "vision language models", "multilinguality", "llm-as-a-judge" ]
I acknowledge that I and all co-authors of this work have read and commit to adhering to the COLM Code of Ethics on https://colmweb.org/CoE.html
I certify that this submission complies with the submission instructions as described on https://colmweb.org/AuthorGuide.html
null
null
@inproceedings{ pombal2025zeroshot, title={Zero-shot Benchmarking: A Framework for Flexible and Scalable Automatic Evaluation of Language Models}, author={Jos{\'e} Pombal and Nuno M Guerreiro and Ricardo Rei and Andre Martins}, booktitle={Second Conference on Language Modeling}, year={2025}, url={https://openreview.net...
pombal|zeroshot_benchmarking_a_framework_for_flexible_and_scalable_automatic_evaluation_of_language_models
null
null
null
null
null
Model-Agnostic Policy Explanations with Large Language Models
We propose an approach that learns a behavior representation from observed states and actions and then generates explanations with minimal hallucination using a pre-trained large language model.
Intelligent agents, such as robots, are increasingly deployed in real-world, human-centric environments. To foster appropriate human trust and meet legal and ethical standards, these agents must be able to explain their behavior. However, state-of-the-art agents are typically driven by black-box models like deep neural...
[ "Zhang Xi-Jia", "Yue Guo", "Shufei Chen", "Simon Stepputtis", "Matthew Craig Gombolay", "Katia P. Sycara", "Joseph Campbell" ]
https://openreview.net/forum?id=VzXpFjKgJg
VzXpFjKgJg
VzXpFjKgJg
[ "~Zhang_Xi-Jia1", "~Yue_Guo7", "~Shufei_Chen1", "~Simon_Stepputtis1", "~Matthew_Craig_Gombolay1", "~Katia_P._Sycara1", "~Joseph_Campbell1" ]
{ "value": "COLM 2025" }
{ "value": "colmweb.org/COLM/2025/Conference" }
{ "value": "/pdf/71ee490dcb37c9b6311b2a8f5d3efa10079542e8.pdf" }
conference
colmweb.org/COLM/2025/Conference
2,025
COLM
[ "Explainability", "Model-Agnostic Explanations", "Large Language Models" ]
I acknowledge that I and all co-authors of this work have read and commit to adhering to the COLM Code of Ethics on https://colmweb.org/CoE.html
I certify that this submission complies with the submission instructions as described on https://colmweb.org/AuthorGuide.html
null
null
@inproceedings{ xi-jia2025modelagnostic, title={Model-Agnostic Policy Explanations with Large Language Models}, author={Zhang Xi-Jia and Yue Guo and Shufei Chen and Simon Stepputtis and Matthew Craig Gombolay and Katia P. Sycara and Joseph Campbell}, booktitle={Second Conference on Language Modeling}, year={2025}, url=...
xijia|modelagnostic_policy_explanations_with_large_language_models
null
null
null
null
null
On Mechanistic Circuits for Extractive Question-Answering
We extract a mechanistic circuit for extractive QA and perform data attribution and model steering with the insights
Recent studies have extracted circuits from the computational graphs of language models for simple language tasks such as entity tracking or indirect object identification. In our paper, we scale up circuit extraction to a real-world language modeling task: context-augmented language modeling for question-answering (QA...
[ "Samyadeep Basu", "Vlad I Morariu", "Ryan A. Rossi", "Nanxuan Zhao", "Zichao Wang", "Soheil Feizi", "Varun Manjunatha" ]
https://openreview.net/forum?id=VvSWiNIuPL
VvSWiNIuPL
VvSWiNIuPL
[ "~Samyadeep_Basu1", "~Vlad_I_Morariu1", "~Ryan_A._Rossi2", "~Nanxuan_Zhao1", "~Zichao_Wang1", "~Soheil_Feizi2", "~Varun_Manjunatha1" ]
{ "value": "COLM 2025" }
{ "value": "colmweb.org/COLM/2025/Conference" }
{ "value": "/pdf/35c2be3f6296ec7f34eeb53635258b3974d42339.pdf" }
conference
colmweb.org/COLM/2025/Conference
2,025
COLM
[ "mechanistic circuits", "interpretability" ]
I acknowledge that I and all co-authors of this work have read and commit to adhering to the COLM Code of Ethics on https://colmweb.org/CoE.html
I certify that this submission complies with the submission instructions as described on https://colmweb.org/AuthorGuide.html
null
null
@inproceedings{ basu2025on, title={On Mechanistic Circuits for Extractive Question-Answering}, author={Samyadeep Basu and Vlad I Morariu and Ryan A. Rossi and Nanxuan Zhao and Zichao Wang and Soheil Feizi and Varun Manjunatha}, booktitle={Second Conference on Language Modeling}, year={2025}, url={https://openreview.net...
basu|on_mechanistic_circuits_for_extractive_questionanswering
null
true
null
null
null
Cats Confuse Reasoning LLM: Query Agnostic Adversarial Triggers for Reasoning Models
The paper introduces CatAttack, a method to generate query-agnostic adversarial triggers that mislead reasoning models into giving incorrect answers, revealing critical vulnerabilities in state-of-the-art Reasoning models.
We investigate the robustness of reasoning models trained for step-by-step problem solving by introducing query-agnostic adversarial triggers – short, irrelevant text that, when appended to math problems, systematically misleads models to output incorrect answers without altering the problem’s semantics. We propose Cat...
[ "Meghana Arakkal Rajeev", "Rajkumar Ramamurthy", "Prapti Trivedi", "Vikas Yadav", "Oluwanifemi Bamgbose", "Sathwik Tejaswi Madhusudhan", "James Zou", "Nazneen Rajani" ]
https://openreview.net/forum?id=VrEPiN5WhM
VrEPiN5WhM
VrEPiN5WhM
[ "~Meghana_Arakkal_Rajeev1", "~Rajkumar_Ramamurthy1", "~Prapti_Trivedi2", "~Vikas_Yadav2", "~Oluwanifemi_Bamgbose1", "~Sathwik_Tejaswi_Madhusudhan2", "~James_Zou1", "~Nazneen_Rajani1" ]
{ "value": "COLM 2025" }
{ "value": "colmweb.org/COLM/2025/Conference" }
{ "value": "/pdf/8023834862148fe2c7a86b6d82087f5bcbdc8edf.pdf" }
conference
colmweb.org/COLM/2025/Conference
2,025
COLM
[ "Adversarial attacks", "Query-agnostic adversarial triggers", "Reasoning Models", "Automatic Iterative Attack", "Math-based triggers", "Security", "Redteaming" ]
I acknowledge that I and all co-authors of this work have read and commit to adhering to the COLM Code of Ethics on https://colmweb.org/CoE.html
I certify that this submission complies with the submission instructions as described on https://colmweb.org/AuthorGuide.html
null
null
@inproceedings{ rajeev2025cats, title={Cats Confuse Reasoning {LLM}: Query Agnostic Adversarial Triggers for Reasoning Models}, author={Meghana Arakkal Rajeev and Rajkumar Ramamurthy and Prapti Trivedi and Vikas Yadav and Oluwanifemi Bamgbose and Sathwik Tejaswi Madhusudhan and James Zou and Nazneen Rajani}, booktitle=...
rajeev|cats_confuse_reasoning_llm_query_agnostic_adversarial_triggers_for_reasoning_models
null
null
null
null
null
A Critical Look At Tokenwise Reward-Guided Text Generation
We analyse some of the pitfalls of contemporary reward guided text generation methods, and present a principled approach with strong performance on several language generation benchmarks.
Large language models (LLMs) can be improved by aligning with human preferences through fine-tuning -- the so-called reinforcement learning from human feedback (RLHF). However, the cost of fine-tuning an LLM is prohibitive for many users. Due to their ability to bypass LLM fine-tuning, prediction-time tokenwise reward-...
[ "Ahmad Rashid", "Ruotian Wu", "Julia Grosse", "Agustinus Kristiadi", "Pascal Poupart" ]
https://openreview.net/forum?id=Vnw9c1YLhV
Vnw9c1YLhV
Vnw9c1YLhV
[ "~Ahmad_Rashid1", "~Ruotian_Wu1", "~Julia_Grosse1", "~Agustinus_Kristiadi1", "~Pascal_Poupart2" ]
{ "value": "COLM 2025" }
{ "value": "colmweb.org/COLM/2025/Conference" }
{ "value": "/pdf/92e5e13ad4933a0781534ccfc64e79e7321793b4.pdf" }
conference
colmweb.org/COLM/2025/Conference
2,025
COLM
[ "LLM", "RLHF", "Alignment", "Model Efficiency", "Reward Models", "Sampling", "Test-time" ]
I acknowledge that I and all co-authors of this work have read and commit to adhering to the COLM Code of Ethics on https://colmweb.org/CoE.html
I certify that this submission complies with the submission instructions as described on https://colmweb.org/AuthorGuide.html
null
null
@inproceedings{ rashid2025a, title={A Critical Look At Tokenwise Reward-Guided Text Generation}, author={Ahmad Rashid and Ruotian Wu and Julia Grosse and Agustinus Kristiadi and Pascal Poupart}, booktitle={Second Conference on Language Modeling}, year={2025}, url={https://openreview.net/forum?id=Vnw9c1YLhV} }
rashid|a_critical_look_at_tokenwise_rewardguided_text_generation
/attachment/318baf88ec5cccfdcfb67114ab9ef513205be824.zip
null
null
null
null
Sample Efficient Preference Alignment in LLMs via Active Exploration
We propose an exploration-based approach to active learning for RLHF, addressing both online and offline settings
Preference-based feedback is important for many applications in machine learning where evaluation of a reward function is not feasible. Notable recent examples arise in preference alignment for large language models, including in reinforcement learning from human feedback (RLHF) and direct preference optimization (DPO)...
[ "Viraj Mehta", "Syrine Belakaria", "Vikramjeet Das", "Ojash Neopane", "Yijia Dai", "Ilija Bogunovic", "Barbara E Engelhardt", "Stefano Ermon", "Jeff Schneider", "Willie Neiswanger" ]
https://openreview.net/forum?id=Vi5cIfIslX
Vi5cIfIslX
Vi5cIfIslX
[ "~Viraj_Mehta1", "~Syrine_Belakaria1", "~Vikramjeet_Das1", "~Ojash_Neopane1", "~Yijia_Dai1", "~Ilija_Bogunovic2", "~Barbara_Engelhardt1", "~Stefano_Ermon1", "~Jeff_Schneider1", "~Willie_Neiswanger2" ]
{ "value": "COLM 2025" }
{ "value": "colmweb.org/COLM/2025/Conference" }
{ "value": "/pdf/7bbd6292ebe368d5537c94c5f2fcf2b3a3d6f97b.pdf" }
conference
colmweb.org/COLM/2025/Conference
2,025
COLM
[ "Sample Efficient", "DPO", "RLHF", "Alignment", "Active learning", "LLMs" ]
I acknowledge that I and all co-authors of this work have read and commit to adhering to the COLM Code of Ethics on https://colmweb.org/CoE.html
I certify that this submission complies with the submission instructions as described on https://colmweb.org/AuthorGuide.html
null
null
@inproceedings{ mehta2025sample, title={Sample Efficient Preference Alignment in {LLM}s via Active Exploration}, author={Viraj Mehta and Syrine Belakaria and Vikramjeet Das and Ojash Neopane and Yijia Dai and Ilija Bogunovic and Barbara E Engelhardt and Stefano Ermon and Jeff Schneider and Willie Neiswanger}, booktitle...
mehta|sample_efficient_preference_alignment_in_llms_via_active_exploration
/attachment/76e6987c2fa36c3a4f40a8581f3f410848ba4ab8.zip
null
null
null
null
Society of Mind Meets Real-Time Strategy: A Hierarchical Multi-Agent Framework for Strategic Reasoning
We propose a multi-agent framework uniting specialized imitation learning modules under a meta-controller, achieving robust long-horizon strategic reasoning and superior adaptability in dynamic environments.
Large Language Models (LLMs) have recently demonstrated impressive action sequence prediction capabilities but often struggle with dynamic, long-horizon tasks such as real-time strategic games. In a game such as StarCraft II (SC2), agents need to manage resource constraints and adapt to evolving battlefield situations ...
[ "Daechul Ahn", "San Kim", "Jonghyun Choi" ]
https://openreview.net/forum?id=VYdbeSoXWD
VYdbeSoXWD
VYdbeSoXWD
[ "~Daechul_Ahn4", "~San_Kim2", "~Jonghyun_Choi1" ]
{ "value": "COLM 2025" }
{ "value": "colmweb.org/COLM/2025/Conference" }
{ "value": "/pdf/7a9859d0ac848995814c41ceea8ac21116aad9a7.pdf" }
conference
colmweb.org/COLM/2025/Conference
2,025
COLM
[ "multi agent", "real-time simulation", "strategic reasoning" ]
I acknowledge that I and all co-authors of this work have read and commit to adhering to the COLM Code of Ethics on https://colmweb.org/CoE.html
I certify that this submission complies with the submission instructions as described on https://colmweb.org/AuthorGuide.html
null
null
@inproceedings{ ahn2025society, title={Society of Mind Meets Real-Time Strategy: A Hierarchical Multi-Agent Framework for Strategic Reasoning}, author={Daechul Ahn and San Kim and Jonghyun Choi}, booktitle={Second Conference on Language Modeling}, year={2025}, url={https://openreview.net/forum?id=VYdbeSoXWD} }
ahn|society_of_mind_meets_realtime_strategy_a_hierarchical_multiagent_framework_for_strategic_reasoning
null
null
null
null
null
MapIQ: Evaluating Multimodal Large Language Models for Map Question Answering
We introduce MapIQ, a benchmark for evaluating multimodal large language models (MLLMs) on map-based visual question answering (Map-VQA) across three map types, also assessing their robustness to map design variations.
Recent advancements in multimodal large language models (MLLMs) have driven researchers to explore how well these models read data visualizations, e.g., bar charts, scatter plots. More recently, attention has shifted to visual question answering with maps (Map-VQA). However, Map-VQA research has primarily focused on ch...
[ "Varun Srivastava", "Fan Lei", "Srija Mukhopadhyay", "Vivek Gupta", "Ross Maciejewski" ]
https://openreview.net/forum?id=VSwRuGtB5n
VSwRuGtB5n
VSwRuGtB5n
[ "~Varun_Srivastava3", "~Fan_Lei1", "~Srija_Mukhopadhyay1", "~Vivek_Gupta2", "~Ross_Maciejewski1" ]
{ "value": "COLM 2025" }
{ "value": "colmweb.org/COLM/2025/Conference" }
{ "value": "/pdf/75c5f068b1416db4345958771bd9b3c633b9a358.pdf" }
conference
colmweb.org/COLM/2025/Conference
2,025
COLM
[ "Large Language Models", "Visual Question Answering", "Maps", "Geospatial Analysis", "Visual Analytics", "Benchmark Dataset" ]
I acknowledge that I and all co-authors of this work have read and commit to adhering to the COLM Code of Ethics on https://colmweb.org/CoE.html
I certify that this submission complies with the submission instructions as described on https://colmweb.org/AuthorGuide.html
null
null
@inproceedings{ srivastava2025mapiq, title={Map{IQ}: Evaluating Multimodal Large Language Models for Map Question Answering}, author={Varun Srivastava and Fan Lei and Srija Mukhopadhyay and Vivek Gupta and Ross Maciejewski}, booktitle={Second Conference on Language Modeling}, year={2025}, url={https://openreview.net/fo...
srivastava|mapiq_evaluating_multimodal_large_language_models_for_map_question_answering
null
null
null
null
null
Steering Large Language Model Activations in Sparse Spaces
Sparse activation steering (SAS) proposes a steering method in sparse spaces to precisely control LLM behavior by isolating interpretable features.
A key challenge in AI alignment is guiding large language models (LLMs) to follow desired behaviors at test time. Activation steering, which modifies internal model activations during inference, offers a promising solution. However, prior work in dense activation spaces struggles with $\textit{superposition}$, where mu...
[ "Reza Bayat", "Ali Rahimi-Kalahroudi", "Mohammad Pezeshki", "Sarath Chandar", "Pascal Vincent" ]
https://openreview.net/forum?id=VGw1viYliK
VGw1viYliK
VGw1viYliK
[ "~Reza_Bayat1", "~Ali_Rahimi-Kalahroudi1", "~Mohammad_Pezeshki1", "~Sarath_Chandar1", "~Pascal_Vincent1" ]
{ "value": "COLM 2025" }
{ "value": "colmweb.org/COLM/2025/Conference" }
{ "value": "/pdf/70f3af98996906bfa9bd84ba645f0cf45176844a.pdf" }
conference
colmweb.org/COLM/2025/Conference
2,025
COLM
[ "AI alignment", "Activation steering", "Sparse representations", "Sparse autoencoders (SAEs)", "Large language models (LLMs)", "Interpretability" ]
I acknowledge that I and all co-authors of this work have read and commit to adhering to the COLM Code of Ethics on https://colmweb.org/CoE.html
I certify that this submission complies with the submission instructions as described on https://colmweb.org/AuthorGuide.html
null
null
@inproceedings{ bayat2025steering, title={Steering Large Language Model Activations in Sparse Spaces}, author={Reza Bayat and Ali Rahimi-Kalahroudi and Mohammad Pezeshki and Sarath Chandar and Pascal Vincent}, booktitle={Second Conference on Language Modeling}, year={2025}, url={https://openreview.net/forum?id=VGw1viYl...
bayat|steering_large_language_model_activations_in_sparse_spaces
null
null
null
null
null
Scaling Laws of Synthetic Data for Language Model
We propose a scalable method for synthetic generation and investigate the scaling laws of synthetic data.
Large language models (LLMs) achieve strong performance across diverse tasks, driven by high-quality web data used in pre-training. However, recent studies indicate web data is rapidly depleting. Synthetic data emerges as a promising alternative, but it remains unclear whether synthetic datasets exhibit predictable sca...
[ "Zeyu Qin", "Qingxiu Dong", "Xingxing Zhang", "Li Dong", "Xiaolong Huang", "Ziyi Yang", "MAHMOUD KHADEMI", "Dongdong Zhang", "Hany Hassan Awadalla", "Yi R. Fung", "Weizhu Chen", "Minhao Cheng", "Furu Wei" ]
https://openreview.net/forum?id=UmUXPXHtdl
UmUXPXHtdl
UmUXPXHtdl
[ "~Zeyu_Qin1", "~Qingxiu_Dong1", "~Xingxing_Zhang1", "~Li_Dong1", "~Xiaolong_Huang1", "~Ziyi_Yang1", "~MAHMOUD_KHADEMI2", "~Dongdong_Zhang4", "~Hany_Hassan_Awadalla1", "~Yi_R._Fung1", "~Weizhu_Chen1", "~Minhao_Cheng1", "~Furu_Wei1" ]
{ "value": "COLM 2025" }
{ "value": "colmweb.org/COLM/2025/Conference" }
{ "value": "/pdf/8e8b4de2eae1ad893ee42a2ae8bc4bc18e1e6123.pdf" }
conference
colmweb.org/COLM/2025/Conference
2,025
COLM
[ "Synthetic Data; Scaling Laws" ]
I acknowledge that I and all co-authors of this work have read and commit to adhering to the COLM Code of Ethics on https://colmweb.org/CoE.html
I certify that this submission complies with the submission instructions as described on https://colmweb.org/AuthorGuide.html
null
null
@inproceedings{ qin2025scaling, title={Scaling Laws of Synthetic Data for Language Model}, author={Zeyu Qin and Qingxiu Dong and Xingxing Zhang and Li Dong and Xiaolong Huang and Ziyi Yang and MAHMOUD KHADEMI and Dongdong Zhang and Hany Hassan Awadalla and Yi R. Fung and Weizhu Chen and Minhao Cheng and Furu Wei}, book...
qin|scaling_laws_of_synthetic_data_for_language_model
null
null
null
null
null
ParaPO: Aligning Language Models to Reduce Verbatim Reproduction of Pre-training Data
Fine-tuning language models to mitigate regurgitation in open-ended generation.
Language models (LMs) can memorize and reproduce segments from their pretraining data verbatim even in non-adversarial settings, raising concerns about copyright, plagiarism, privacy, and creativity. We introduce Paraphrase Preference Optimization (ParaPO), a post-training method that fine-tunes LMs to reduce regurgita...
[ "Tong Chen", "Faeze Brahman", "Jiacheng Liu", "Niloofar Mireshghallah", "Weijia Shi", "Pang Wei Koh", "Luke Zettlemoyer", "Hannaneh Hajishirzi" ]
https://openreview.net/forum?id=Uic3ojVhXh
Uic3ojVhXh
Uic3ojVhXh
[ "~Tong_Chen3", "~Faeze_Brahman1", "~Jiacheng_Liu2", "~Niloofar_Mireshghallah1", "~Weijia_Shi1", "~Pang_Wei_Koh1", "~Luke_Zettlemoyer1", "~Hannaneh_Hajishirzi1" ]
{ "value": "COLM 2025" }
{ "value": "colmweb.org/COLM/2025/Conference" }
{ "value": "/pdf/00c0e42d4a40349f81a740f57820ec7a63017766.pdf" }
conference
colmweb.org/COLM/2025/Conference
2,025
COLM
[ "security and privacy", "fine-tuning", "ethical considerations in NLP applications" ]
I acknowledge that I and all co-authors of this work have read and commit to adhering to the COLM Code of Ethics on https://colmweb.org/CoE.html
I certify that this submission complies with the submission instructions as described on https://colmweb.org/AuthorGuide.html
true
This submission presents a method to discourage verbatim generation of pre-training data. The method could potentially be used to hide copyright infringement from model developers who may unethically use large-scale copyright-protected data for pre-training.
@inproceedings{ chen2025parapo, title={Para{PO}: Aligning Language Models to Reduce Verbatim Reproduction of Pre-training Data}, author={Tong Chen and Faeze Brahman and Jiacheng Liu and Niloofar Mireshghallah and Weijia Shi and Pang Wei Koh and Luke Zettlemoyer and Hannaneh Hajishirzi}, booktitle={Second Conference on ...
chen|parapo_aligning_language_models_to_reduce_verbatim_reproduction_of_pretraining_data
null
null
null
null
null
Can a Crow Hatch a Falcon? Lineage Matters in Predicting Large Language Model Performance
Explicitly modeling LLM **lineage** boosts performance prediction accuracy. Our lineage-regularized matrix factorization leverages ancestry to outperform standard methods when predicting new or merged models with minimal evaluation data.
Accurately forecasting the performance of Large Language Models (LLMs) before extensive fine-tuning or merging can substantially reduce both computational expense and development time. Although prior approaches like scaling laws account for global factors such as parameter size or training tokens, they often overlook e...
[ "Takuya Tamura", "Taro Yano", "Masafumi Enomoto", "Masafumi Oyamada" ]
https://openreview.net/forum?id=ULYqB2JORB
ULYqB2JORB
ULYqB2JORB
[ "~Takuya_Tamura1", "~Taro_Yano1", "~Masafumi_Enomoto1", "~Masafumi_Oyamada1" ]
{ "value": "COLM 2025" }
{ "value": "colmweb.org/COLM/2025/Conference" }
{ "value": "/pdf/0a553f6eee082d5a4e51bb0edf691b800a56c699.pdf" }
conference
colmweb.org/COLM/2025/Conference
2,025
COLM
[ "LLM", "Performance Estimation", "Matrix Factorization", "Neural Collaborative Filtering" ]
I acknowledge that I and all co-authors of this work have read and commit to adhering to the COLM Code of Ethics on https://colmweb.org/CoE.html
I certify that this submission complies with the submission instructions as described on https://colmweb.org/AuthorGuide.html
null
null
@inproceedings{ tamura2025can, title={Can a Crow Hatch a Falcon? Lineage Matters in Predicting Large Language Model Performance}, author={Takuya Tamura and Taro Yano and Masafumi Enomoto and Masafumi Oyamada}, booktitle={Second Conference on Language Modeling}, year={2025}, url={https://openreview.net/forum?id=ULYqB2JO...
tamura|can_a_crow_hatch_a_falcon_lineage_matters_in_predicting_large_language_model_performance
null
null
null
null
null
Rerouting LLM Routers
Proposing a novel class of vulnerabilities, where adversaries can manipulate LLM routing decisions to their advantage.
LLM routers balance quality and cost of responding to queries by routing them to a cheaper or more expensive LLM depending on the query's estimated complexity. Routers are a type of what we call ``LLM control planes,'' i.e., systems that orchestrate multiple LLMs. In this paper, we investigate adversarial robustness ...
[ "Avital Shafran", "Roei Schuster", "Tom Ristenpart", "Vitaly Shmatikov" ]
https://openreview.net/forum?id=U6C7odo5SX
U6C7odo5SX
U6C7odo5SX
[ "~Avital_Shafran1", "~Roei_Schuster1", "~Tom_Ristenpart1", "~Vitaly_Shmatikov1" ]
{ "value": "COLM 2025" }
{ "value": "colmweb.org/COLM/2025/Conference" }
{ "value": "/pdf/90196a560af282e8b63901f276e00ea0dd18c970.pdf" }
conference
colmweb.org/COLM/2025/Conference
2,025
COLM
[ "LLMs", "Routers", "Adversarial Machine Learning", "ML Security" ]
I acknowledge that I and all co-authors of this work have read and commit to adhering to the COLM Code of Ethics on https://colmweb.org/CoE.html
I certify that this submission complies with the submission instructions as described on https://colmweb.org/AuthorGuide.html
null
null
@inproceedings{ shafran2025rerouting, title={Rerouting {LLM} Routers}, author={Avital Shafran and Roei Schuster and Tom Ristenpart and Vitaly Shmatikov}, booktitle={Second Conference on Language Modeling}, year={2025}, url={https://openreview.net/forum?id=U6C7odo5SX} }
shafran|rerouting_llm_routers
null
null
null
null
null
Bayesian scaling laws for in-context learning
We test the claim that in-context learning in LLMs is Bayesian, leading to a new interpretable scaling law that accurately predicts when suppressed behaviors in both toy and real-world language models will reemerge.
In-context learning (ICL) is a powerful technique for getting language models to perform complex tasks with no training updates. Prior work has established strong correlations between the number of in-context examples provided and the accuracy of the model's predictions. In this paper, we seek to explain this correlati...
[ "Aryaman Arora", "Dan Jurafsky", "Christopher Potts", "Noah Goodman" ]
https://openreview.net/forum?id=U2ihVSREUb
U2ihVSREUb
U2ihVSREUb
[ "~Aryaman_Arora1", "~Dan_Jurafsky1", "~Christopher_Potts1", "~Noah_Goodman1" ]
{ "value": "COLM 2025" }
{ "value": "colmweb.org/COLM/2025/Conference" }
{ "value": "/pdf/076b6410f98373074352f0d7b558d5ca201c2244.pdf" }
conference
colmweb.org/COLM/2025/Conference
2,025
COLM
[ "in-context learning", "bayesian inference", "scaling laws" ]
I acknowledge that I and all co-authors of this work have read and commit to adhering to the COLM Code of Ethics on https://colmweb.org/CoE.html
I certify that this submission complies with the submission instructions as described on https://colmweb.org/AuthorGuide.html
null
null
@inproceedings{ arora2025bayesian, title={Bayesian scaling laws for in-context learning}, author={Aryaman Arora and Dan Jurafsky and Christopher Potts and Noah Goodman}, booktitle={Second Conference on Language Modeling}, year={2025}, url={https://openreview.net/forum?id=U2ihVSREUb} }
arora|bayesian_scaling_laws_for_incontext_learning
null
true
null
null
null
AdaptiVocab: Enhancing LLM Efficiency in Focused Domains through Lightweight Vocabulary Adaptation
AdaptiVocab enhances LLM efficiency in domain-specific settings by adapting the model's vocabulary to better fit the target domain, improving generation efficiency by 25% without compromising performance.
Large Language Models (LLMs) have shown impressive versatility as general purpose models. However, their broad applicability comes at a high-cost computational overhead, particularly in auto-regressive decoding where each step requires a forward pass. In domain-specific settings, general-purpose capabilities are unnece...
[ "Itay Nakash", "Nitay Calderon", "Eyal Ben-David", "Elad Hoffer", "Roi Reichart" ]
https://openreview.net/forum?id=TyXf9dwpZP
TyXf9dwpZP
TyXf9dwpZP
[ "~Itay_Nakash1", "~Nitay_Calderon1", "~Eyal_Ben-David1", "~Elad_Hoffer1", "~Roi_Reichart1" ]
{ "value": "COLM 2025" }
{ "value": "colmweb.org/COLM/2025/Conference" }
{ "value": "/pdf/a3948743216f973e8868927800e3236adbf49875.pdf" }
conference
colmweb.org/COLM/2025/Conference
2,025
COLM
[ "Domain Adaptation", "Efficiency", "Tokenization", "Vocabulary Adaptation", "Efficient LLMs" ]
I acknowledge that I and all co-authors of this work have read and commit to adhering to the COLM Code of Ethics on https://colmweb.org/CoE.html
I certify that this submission complies with the submission instructions as described on https://colmweb.org/AuthorGuide.html
null
null
@inproceedings{ nakash2025adaptivocab, title={AdaptiVocab: Enhancing {LLM} Efficiency in Focused Domains through Lightweight Vocabulary Adaptation}, author={Itay Nakash and Nitay Calderon and Eyal Ben-David and Elad Hoffer and Roi Reichart}, booktitle={Second Conference on Language Modeling}, year={2025}, url={https://...
nakash|adaptivocab_enhancing_llm_efficiency_in_focused_domains_through_lightweight_vocabulary_adaptation
null
null
null
null
null
Fleurs-SLU: A Massively Multilingual Benchmark for Spoken Language Understanding
A massively multilingual benchmark for topical utterance classification and textual multiple-choice QA from spoken paragraphs
Spoken language understanding (SLU) is indispensable for half of all living languages that lack a formal writing system, since these languages cannot pair automatic speech recognition (ASR) with language models to benefit from language technology. Even if low-resource languages possess a writing system, ASR for these l...
[ "Fabian David Schmidt", "Ivan Vulić", "Goran Glavaš", "David Ifeoluwa Adelani" ]
https://openreview.net/forum?id=Tqj3fYqhwS
Tqj3fYqhwS
Tqj3fYqhwS
[ "~Fabian_David_Schmidt1", "~Ivan_Vulić1", "~Goran_Glavaš1", "~David_Ifeoluwa_Adelani1" ]
{ "value": "COLM 2025" }
{ "value": "colmweb.org/COLM/2025/Conference" }
{ "value": "/pdf/a56ca4100519acabe9f4ee62fe5480872cee973b.pdf" }
conference
colmweb.org/COLM/2025/Conference
2025
COLM
[ "spoken language understanding", "multilingual benchmarks", "multilingual evaluation" ]
I acknowledge that I and all co-authors of this work have read and commit to adhering to the COLM Code of Ethics on https://colmweb.org/CoE.html
I certify that this submission complies with the submission instructions as described on https://colmweb.org/AuthorGuide.html
null
null
@inproceedings{ schmidt2025fleursslu, title={Fleurs-{SLU}: A Massively Multilingual Benchmark for Spoken Language Understanding}, author={Fabian David Schmidt and Ivan Vuli{\'c} and Goran Glava{\v{s}} and David Ifeoluwa Adelani}, booktitle={Second Conference on Language Modeling}, year={2025}, url={https://openreview.n...
schmidt|fleursslu_a_massively_multilingual_benchmark_for_spoken_language_understanding
/attachment/e9114f54a52339dacc131c52fec8c4f2fafe8cf5.zip
null
null
null
null
The Blessing and Curse of Dimensionality in Safety Alignment
We explore the concept of safety as represented by a linear subspace and its relation to the hidden dimensions of a model.
The focus on safety alignment in large language models (LLMs) has increased significantly due to their widespread adoption across different domains. The scale of LLMs plays a contributing role in their success, and the growth in parameter count follows larger hidden dimensions. In this paper, we hypothesize that while t...
[ "Rachel S.Y. Teo", "Laziz Abdullaev", "Tan Minh Nguyen" ]
https://openreview.net/forum?id=TiTk6VDz2H
TiTk6VDz2H
TiTk6VDz2H
[ "~Rachel_S.Y._Teo1", "~Laziz_Abdullaev1", "~Tan_Minh_Nguyen1" ]
{ "value": "COLM 2025" }
{ "value": "colmweb.org/COLM/2025/Conference" }
{ "value": "/pdf/32dca478b2e7d36950ee5769471975b85ed28a93.pdf" }
conference
colmweb.org/COLM/2025/Conference
2025
COLM
[ "safety alignment", "large language models", "jailbreak", "activation engineering", "linear separation hypothesis" ]
I acknowledge that I and all co-authors of this work have read and commit to adhering to the COLM Code of Ethics on https://colmweb.org/CoE.html
I certify that this submission complies with the submission instructions as described on https://colmweb.org/AuthorGuide.html
null
null
@inproceedings{ teo2025the, title={The Blessing and Curse of Dimensionality in Safety Alignment}, author={Rachel S.Y. Teo and Laziz Abdullaev and Tan Minh Nguyen}, booktitle={Second Conference on Language Modeling}, year={2025}, url={https://openreview.net/forum?id=TiTk6VDz2H} }
teo|the_blessing_and_curse_of_dimensionality_in_safety_alignment
/attachment/72d52ac6f2cbf593a830158aaec0ea435826d90c.zip
null
null
null
null
Out-of-Distribution Detection using Synthetic Data Generation
This work presents an effective OOD detection method using LLM-generated synthetic proxies, eliminating the need for external OOD data. Experiments show it reduces false positives and outperforms baseline methods in text classification tasks.
Distinguishing in- and out-of-distribution (OOD) inputs is crucial for reliable deployment of classification systems. However, OOD data is typically unavailable or difficult to collect, posing a significant challenge for accurate OOD detection. In this work, we present a method that harnesses the generative capabilitie...
[ "Momin Abbas", "Muneeza Azmat", "Raya Horesh", "Mikhail Yurochkin" ]
https://openreview.net/forum?id=TiRiDMkTmG
TiRiDMkTmG
TiRiDMkTmG
[ "~Momin_Abbas1", "~Muneeza_Azmat1", "~Raya_Horesh1", "~Mikhail_Yurochkin1" ]
{ "value": "COLM 2025" }
{ "value": "colmweb.org/COLM/2025/Conference" }
{ "value": "/pdf/7fe91ed9b3588c0dc88d186dbe502075a3b92505.pdf" }
conference
colmweb.org/COLM/2025/Conference
2025
COLM
[ "out-of-distribution detection", "out-of-distribution generalization", "synthetic data" ]
I acknowledge that I and all co-authors of this work have read and commit to adhering to the COLM Code of Ethics on https://colmweb.org/CoE.html
I certify that this submission complies with the submission instructions as described on https://colmweb.org/AuthorGuide.html
true
I'm not sure if this requires ethics review due to the content of the used datasets.
@inproceedings{ abbas2025outofdistribution, title={Out-of-Distribution Detection using Synthetic Data Generation}, author={Momin Abbas and Muneeza Azmat and Raya Horesh and Mikhail Yurochkin}, booktitle={Second Conference on Language Modeling}, year={2025}, url={https://openreview.net/forum?id=TiRiDMkTmG} }
abbas|outofdistribution_detection_using_synthetic_data_generation
/attachment/47161fc624b3315b25b57a1de37783c4cbf8c017.zip
null
null
null
null
Effective Length Extrapolation via Dimension-Wise Positional Embeddings Manipulation
Dimensionally manipulating the relative position matrix to extrapolate the context window of LLMs without additional training.
Large Language Models (LLMs) often struggle to process and generate coherent context when the number of input tokens exceeds the pre-trained length. Recent advancements in long-context extension have significantly expanded the context window of LLMs but require expensive overhead to train the large-scale models with lo...
[ "Yi Lu", "Wanxu Zhao", "Xin Zhou", "Chenxin An", "Chenglong Wang", "Shuo Li", "Yuming Yang", "Jun Zhao", "Tao Ji", "Tao Gui", "Qi Zhang", "Xuanjing Huang" ]
https://openreview.net/forum?id=Tahpc3iAnO
Tahpc3iAnO
Tahpc3iAnO
[ "~Yi_Lu7", "~Wanxu_Zhao2", "~Xin_Zhou6", "~Chenxin_An1", "~Chenglong_Wang6", "~Shuo_Li12", "~Yuming_Yang1", "~Jun_Zhao5", "~Tao_Ji1", "~Tao_Gui1", "~Qi_Zhang8", "~Xuanjing_Huang1" ]
{ "value": "COLM 2025" }
{ "value": "colmweb.org/COLM/2025/Conference" }
{ "value": "/pdf/96ad0479166278f66c8fba17d248e253992df48e.pdf" }
conference
colmweb.org/COLM/2025/Conference
2025
COLM
[ "Long Context", "Extrapolation", "Training-Free Framework" ]
I acknowledge that I and all co-authors of this work have read and commit to adhering to the COLM Code of Ethics on https://colmweb.org/CoE.html
I certify that this submission complies with the submission instructions as described on https://colmweb.org/AuthorGuide.html
null
null
@inproceedings{ lu2025effective, title={Effective Length Extrapolation via Dimension-Wise Positional Embeddings Manipulation}, author={Yi Lu and Wanxu Zhao and Xin Zhou and Chenxin An and Chenglong Wang and Shuo Li and Yuming Yang and Jun Zhao and Tao Ji and Tao Gui and Qi Zhang and Xuanjing Huang}, booktitle={Second C...
lu|effective_length_extrapolation_via_dimensionwise_positional_embeddings_manipulation
/attachment/0101b7de0eaaa5eccaca6af9016daed25bb78a2b.zip
null
null
null
null
LLM Can be a Dangerous Persuader: Empirical Study of Persuasion Safety in Large Language Models
We introduce PersuSafety, the first comprehensive framework for evaluating the safety and ethicality in LLM-driven persuasion.
Recent advancements in Large Language Models (LLMs) have enabled them to approach human-level persuasion capabilities. However, such potential also raises concerns about the safety risks of LLM-driven persuasion, particularly their potential for unethical influence through manipulation, deception, exploitation of vulne...
[ "Minqian Liu", "Zhiyang Xu", "Xinyi Zhang", "Heajun An", "Sarvech Qadir", "Qi Zhang", "Pamela J. Wisniewski", "Jin-Hee Cho", "Sang Won Lee", "Ruoxi Jia", "Lifu Huang" ]
https://openreview.net/forum?id=TMB9SKqit9
TMB9SKqit9
TMB9SKqit9
[ "~Minqian_Liu2", "~Zhiyang_Xu1", "~Xinyi_Zhang26", "~Heajun_An1", "~Sarvech_Qadir1", "~Qi_Zhang41", "~Pamela_J._Wisniewski1", "~Jin-Hee_Cho1", "~Sang_Won_Lee1", "~Ruoxi_Jia1", "~Lifu_Huang1" ]
{ "value": "COLM 2025" }
{ "value": "colmweb.org/COLM/2025/Conference" }
{ "value": "/pdf/e5a56263d50c1fbdb24b2f741ca1a5f0f7e754f2.pdf" }
conference
colmweb.org/COLM/2025/Conference
2025
COLM
[ "persuasion", "large language models", "safety", "evaluation" ]
I acknowledge that I and all co-authors of this work have read and commit to adhering to the COLM Code of Ethics on https://colmweb.org/CoE.html
I certify that this submission complies with the submission instructions as described on https://colmweb.org/AuthorGuide.html
null
null
@inproceedings{ liu2025llm, title={{LLM} Can be a Dangerous Persuader: Empirical Study of Persuasion Safety in Large Language Models}, author={Minqian Liu and Zhiyang Xu and Xinyi Zhang and Heajun An and Sarvech Qadir and Qi Zhang and Pamela J. Wisniewski and Jin-Hee Cho and Sang Won Lee and Ruoxi Jia and Lifu Huang}, ...
liu|llm_can_be_a_dangerous_persuader_empirical_study_of_persuasion_safety_in_large_language_models
null
null
null
null
null
Self-Evolving Critique Abilities in Large Language Models
SCRIT enables large language models to evolve their critique capabilities without human oversight by learning from self-generated data.
Despite their remarkable performance, Large Language Models (LLMs) face a critical challenge: providing feedback for tasks where human evaluation is difficult or where LLMs potentially outperform humans. In such scenarios, leveraging the *critique* ability of LLMs themselves—identifying and correcting flaws—shows consi...
[ "Zhengyang Tang", "Ziniu Li", "Zhenyang Xiao", "Tian Ding", "Ruoyu Sun", "Benyou Wang", "Dayiheng Liu", "Fei Huang", "Tianyu Liu", "Bowen Yu", "Junyang Lin" ]
https://openreview.net/forum?id=TA6azZKWJq
TA6azZKWJq
TA6azZKWJq
[ "~Zhengyang_Tang1", "~Ziniu_Li1", "~Zhenyang_Xiao1", "~Tian_Ding1", "~Ruoyu_Sun1", "~Benyou_Wang2", "~Dayiheng_Liu1", "~Fei_Huang3", "~Tianyu_Liu3", "~Bowen_Yu3", "~Junyang_Lin1" ]
{ "value": "COLM 2025" }
{ "value": "colmweb.org/COLM/2025/Conference" }
{ "value": "/pdf/67171a4d26bdcd50f955d395efa0276253dc1056.pdf" }
conference
colmweb.org/COLM/2025/Conference
2025
COLM
[ "Large Language Models", "Critique Model", "Synthetic Data", "Mathematical Reasoning", "Scalable Oversight" ]
I acknowledge that I and all co-authors of this work have read and commit to adhering to the COLM Code of Ethics on https://colmweb.org/CoE.html
I certify that this submission complies with the submission instructions as described on https://colmweb.org/AuthorGuide.html
null
null
@inproceedings{ tang2025selfevolving, title={Self-Evolving Critique Abilities in Large Language Models}, author={Zhengyang Tang and Ziniu Li and Zhenyang Xiao and Tian Ding and Ruoyu Sun and Benyou Wang and Dayiheng Liu and Fei Huang and Tianyu Liu and Bowen Yu and Junyang Lin}, booktitle={Second Conference on Language...
tang|selfevolving_critique_abilities_in_large_language_models
null
null
null
null
null
LIMO: Less is More for Reasoning
We demonstrate that large language models can achieve competition-level mathematical reasoning with just hundreds of high-quality training examples while maintaining strong generalization across diverse out-of-distribution benchmarks.
We challenge the prevailing assumption that complex reasoning in large language models (LLMs) necessitates massive training data. We demonstrate that sophisticated mathematical reasoning can emerge with only a few examples. Specifically, through simple supervised fine-tuning, our model, LIMO, achieves 63.3% accuracy o...
[ "Yixin Ye", "Zhen Huang", "Yang Xiao", "Ethan Chern", "Shijie Xia", "Pengfei Liu" ]
https://openreview.net/forum?id=T2TZ0RY4Zk
T2TZ0RY4Zk
T2TZ0RY4Zk
[ "~Yixin_Ye1", "~Zhen_Huang9", "~Yang_Xiao6", "~Ethan_Chern1", "~Shijie_Xia2", "~Pengfei_Liu1" ]
{ "value": "COLM 2025" }
{ "value": "colmweb.org/COLM/2025/Conference" }
{ "value": "/pdf/cde8a983c12998496197ba936eb6c90b4cb8e4b1.pdf" }
conference
colmweb.org/COLM/2025/Conference
2025
COLM
[ "Large language models", "Mathematical reasoning", "Data efficiency", "Supervised fine-tuning", "Inference-time computation", "Reasoning chains" ]
I acknowledge that I and all co-authors of this work have read and commit to adhering to the COLM Code of Ethics on https://colmweb.org/CoE.html
I certify that this submission complies with the submission instructions as described on https://colmweb.org/AuthorGuide.html
null
null
@inproceedings{ ye2025limo, title={{LIMO}: Less is More for Reasoning}, author={Yixin Ye and Zhen Huang and Yang Xiao and Ethan Chern and Shijie Xia and Pengfei Liu}, booktitle={Second Conference on Language Modeling}, year={2025}, url={https://openreview.net/forum?id=T2TZ0RY4Zk} }
ye|limo_less_is_more_for_reasoning
null
null
null
null
null
AIR: A Systematic Analysis of Annotations, Instructions, and Response Pairs in Preference Dataset
We propose \textbf{AIR}, a framework to systematically dissect preference datasets into three core components—\textbf{A}nnotations, \textbf{I}nstructions, and \textbf{R}esponse Pairs—and quantify their alignment impact.
Preference learning is critical for aligning large language models (LLMs) with human values, yet its success hinges on high-quality datasets comprising three core components: Preference \textbf{A}nnotations, \textbf{I}nstructions, and \textbf{R}esponse Pairs. Current approaches conflate these components, obscuring thei...
[ "Bingxiang He", "Wenbin Zhang", "Jiaxi Song", "Cheng Qian", "Zixuan Fu", "Bowen Sun", "Ning Ding", "Haiwen Hong", "Longtao Huang", "Hui Xue", "Ganqu Cui", "Wanxiang Che", "Zhiyuan Liu", "Maosong Sun" ]
https://openreview.net/forum?id=Sz3ZU6oeVJ
Sz3ZU6oeVJ
Sz3ZU6oeVJ
[ "~Bingxiang_He1", "~Wenbin_Zhang3", "~Jiaxi_Song1", "~Cheng_Qian4", "~Zixuan_Fu2", "~Bowen_Sun2", "~Ning_Ding5", "~Haiwen_Hong1", "~Longtao_Huang2", "~Hui_Xue5", "~Ganqu_Cui1", "~Wanxiang_Che1", "~Zhiyuan_Liu1", "~Maosong_Sun1" ]
{ "value": "COLM 2025" }
{ "value": "colmweb.org/COLM/2025/Conference" }
{ "value": "/pdf/285d78f81924a2f4d2c49e4d7252b6ebbbe839af.pdf" }
conference
colmweb.org/COLM/2025/Conference
2025
COLM
[ "Large Language Models", "Direct Preference Optimization", "Preference Learning" ]
I acknowledge that I and all co-authors of this work have read and commit to adhering to the COLM Code of Ethics on https://colmweb.org/CoE.html
I certify that this submission complies with the submission instructions as described on https://colmweb.org/AuthorGuide.html
null
null
@inproceedings{ he2025air, title={{AIR}: A Systematic Analysis of Annotations, Instructions, and Response Pairs in Preference Dataset}, author={Bingxiang He and Wenbin Zhang and Jiaxi Song and Cheng Qian and Zixuan Fu and Bowen Sun and Ning Ding and Haiwen Hong and Longtao Huang and Hui Xue and Ganqu Cui and Wanxiang C...
he|air_a_systematic_analysis_of_annotations_instructions_and_response_pairs_in_preference_dataset
/attachment/e375627504f8c6a7d8f8af6f540ac91b443b2c1b.zip
null
null
null
null
LongPerceptualThoughts: Distilling System-2 Reasoning for System-1 Perception
We introduce a novel data generation pipeline for generating multimodal reasoning data by injecting key cognitive behaviours.
Recent reasoning models through test-time scaling have demonstrated that long chain-of-thoughts can unlock substantial performance boosts in hard reasoning tasks such as math and code. However, the benefit of such long thoughts for system-2 reasoning is relatively less explored in other domains such as perceptual tasks...
[ "Yuan-Hong Liao", "Sven Elflein", "Liu He", "Laura Leal-Taixé", "Yejin Choi", "Sanja Fidler", "David Acuna" ]
https://openreview.net/forum?id=SrKdi4MsUW
SrKdi4MsUW
SrKdi4MsUW
[ "~Yuan-Hong_Liao2", "~Sven_Elflein1", "~Liu_He2", "~Laura_Leal-Taixé1", "~Yejin_Choi1", "~Sanja_Fidler1", "~David_Acuna1" ]
{ "value": "COLM 2025" }
{ "value": "colmweb.org/COLM/2025/Conference" }
{ "value": "/pdf/b5cf2797bb593498db2fef7e5dfd5a60f33515b0.pdf" }
conference
colmweb.org/COLM/2025/Conference
2025
COLM
[ "multimodal reasoning", "vision-language models", "chain-of-thought" ]
I acknowledge that I and all co-authors of this work have read and commit to adhering to the COLM Code of Ethics on https://colmweb.org/CoE.html
I certify that this submission complies with the submission instructions as described on https://colmweb.org/AuthorGuide.html
null
null
@inproceedings{ liao2025longperceptualthoughts, title={LongPerceptualThoughts: Distilling System-2 Reasoning for System-1 Perception}, author={Yuan-Hong Liao and Sven Elflein and Liu He and Laura Leal-Taix{\'e} and Yejin Choi and Sanja Fidler and David Acuna}, booktitle={Second Conference on Language Modeling}, year={...
liao|longperceptualthoughts_distilling_system2_reasoning_for_system1_perception
/attachment/ba91a96a582fba071ab27d740f8f0a19b0eb11d4.zip
null
null
null
null
Advancing Language Multi-Agent Learning with Credit Re-Assignment for Interactive Environment Generalization
Advancing Language Multi-Agent Learning with Credit Re-Assignment for Interactive Environment Generalization
LLM-based agents have made significant advancements in interactive environments, such as mobile operations and web browsing, and other domains beyond computer use. Current multi-agent systems generally outperform single agents, but struggle with generalization across environments due to prede...
[ "Zhitao He", "Zijun Liu", "Peng Li", "Yi R. Fung", "Ming Yan", "Ji Zhang", "Fei Huang", "Yang Liu" ]
https://openreview.net/forum?id=SoEmgM1ioC
SoEmgM1ioC
SoEmgM1ioC
[ "~Zhitao_He1", "~Zijun_Liu2", "~Peng_Li2", "~Yi_R._Fung1", "~Ming_Yan2", "~Ji_Zhang3", "~Fei_Huang2", "~Yang_Liu19" ]
{ "value": "COLM 2025" }
{ "value": "colmweb.org/COLM/2025/Conference" }
{ "value": "/pdf/feda4ba527e7485ea91852275b75b730084dfd2b.pdf" }
conference
colmweb.org/COLM/2025/Conference
2025
COLM
[ "Large language model", "Multi-agent learning", "Reinforcement learning", "UI agent" ]
I acknowledge that I and all co-authors of this work have read and commit to adhering to the COLM Code of Ethics on https://colmweb.org/CoE.html
I certify that this submission complies with the submission instructions as described on https://colmweb.org/AuthorGuide.html
null
null
@inproceedings{ he2025advancing, title={Advancing Language Multi-Agent Learning with Credit Re-Assignment for Interactive Environment Generalization}, author={Zhitao He and Zijun Liu and Peng Li and Yi R. Fung and Ming Yan and Ji Zhang and Fei Huang and Yang Liu}, booktitle={Second Conference on Language Modeling}, yea...
he|advancing_language_multiagent_learning_with_credit_reassignment_for_interactive_environment_generalization
null
null
null
null
null
Assessing Judging Bias in Large Reasoning Models: An Empirical Study
We demonstrate that Large Reasoning Models remain susceptible to judging biases despite their advanced capabilities.
Large Reasoning Models (LRMs) like DeepSeek-R1 and OpenAI-o1 have demonstrated remarkable reasoning capabilities, raising important questions about their biases in LLM-as-a-judge settings. We present a comprehensive benchmark comparing judging biases between LLMs and LRMs across both subjective preference-alignment dat...
[ "Qian Wang", "Zhanzhi Lou", "Zhenheng Tang", "Nuo Chen", "Xuandong Zhao", "Wenxuan Zhang", "Dawn Song", "Bingsheng He" ]
https://openreview.net/forum?id=SlRtFwBdzP
SlRtFwBdzP
SlRtFwBdzP
[ "~Qian_Wang25", "~Zhanzhi_Lou1", "~Zhenheng_Tang2", "~Nuo_Chen4", "~Xuandong_Zhao1", "~Wenxuan_Zhang1", "~Dawn_Song1", "~Bingsheng_He1" ]
{ "value": "COLM 2025" }
{ "value": "colmweb.org/COLM/2025/Conference" }
{ "value": "/pdf/1406b304c4ddd1ce75ef08e961291b4af7b4c654.pdf" }
conference
colmweb.org/COLM/2025/Conference
2025
COLM
[ "Large Reasoning Models", "LLM Evaluation" ]
I acknowledge that I and all co-authors of this work have read and commit to adhering to the COLM Code of Ethics on https://colmweb.org/CoE.html
I certify that this submission complies with the submission instructions as described on https://colmweb.org/AuthorGuide.html
null
null
@inproceedings{ wang2025assessing, title={Assessing Judging Bias in Large Reasoning Models: An Empirical Study}, author={Qian Wang and Zhanzhi Lou and Zhenheng Tang and Nuo Chen and Xuandong Zhao and Wenxuan Zhang and Dawn Song and Bingsheng He}, booktitle={Second Conference on Language Modeling}, year={2025}, url={htt...
wang|assessing_judging_bias_in_large_reasoning_models_an_empirical_study
null
null
null
null
null
MegaMath: Pushing the Limits of Open Math Corpora
MegaMath is an open dataset of over 300B tokens from web documents, math-related code, and synthetic sources, designed to enhance language models' mathematical reasoning capabilities.
Mathematical reasoning represents a cornerstone of human intelligence, driving problem-solving and innovation, and thus serves as a key indicator of the advanced capabilities of large language models (LLMs). However, the research community still lacks an open, adequately scaled, high-quality mathematical corpus to match t...
[ "Fan Zhou", "Zengzhi Wang", "Nikhil Ranjan", "Zhoujun Cheng", "Liping Tang", "Guowei He", "Zhengzhong Liu", "Eric P. Xing" ]
https://openreview.net/forum?id=SHB0sLrZrh
SHB0sLrZrh
SHB0sLrZrh
[ "~Fan_Zhou6", "~Zengzhi_Wang1", "~Nikhil_Ranjan2", "~Zhoujun_Cheng1", "~Liping_Tang2", "~Guowei_He1", "~Zhengzhong_Liu1", "~Eric_Xing1" ]
{ "value": "COLM 2025" }
{ "value": "colmweb.org/COLM/2025/Conference" }
{ "value": "/pdf/64011155bba0744c968a3cdacfe0f28441a1f116.pdf" }
conference
colmweb.org/COLM/2025/Conference
2025
COLM
[ "Pre-training Data", "Mathematical Reasoning", "Synthetic Data" ]
I acknowledge that I and all co-authors of this work have read and commit to adhering to the COLM Code of Ethics on https://colmweb.org/CoE.html
I certify that this submission complies with the submission instructions as described on https://colmweb.org/AuthorGuide.html
null
null
@inproceedings{ zhou2025megamath, title={MegaMath: Pushing the Limits of Open Math Corpora}, author={Fan Zhou and Zengzhi Wang and Nikhil Ranjan and Zhoujun Cheng and Liping Tang and Guowei He and Zhengzhong Liu and Eric P. Xing}, booktitle={Second Conference on Language Modeling}, year={2025}, url={https://openreview....
zhou|megamath_pushing_the_limits_of_open_math_corpora
null
null
null
null
null
Bootstrapping Visual Assistant Modeling with Situated Interaction Simulation
We show that synthetic interaction data from simulated users and assistants can boost the development of visual assistant models that effectively guide real users to complete complex tasks.
Visual assistants that can guide humans through complex tasks in physical environments have significant potential, yet their development is hindered by the high cost of human-in-the-loop data collection. We present BASIS (Bootstrapping Assistant modeling with Situated Interaction Simulation), a novel framework that fun...
[ "Yichi Zhang", "Run Peng", "Yinpei Dai", "Lingyun Wu", "Xuweiyi Chen", "Qiaozi Gao", "Joyce Chai" ]
https://openreview.net/forum?id=S4nTXotasR
S4nTXotasR
S4nTXotasR
[ "~Yichi_Zhang1", "~Run_Peng1", "~Yinpei_Dai1", "~Lingyun_Wu2", "~Xuweiyi_Chen1", "~Qiaozi_Gao1", "~Joyce_Chai2" ]
{ "value": "COLM 2025" }
{ "value": "colmweb.org/COLM/2025/Conference" }
{ "value": "/pdf/5aa33e8ad456f55bd2ae71c6e077b69b84011c82.pdf" }
conference
colmweb.org/COLM/2025/Conference
2025
COLM
[ "visual assistant", "embodied", "simulation", "multimodal", "LLM agent", "situated dialogue" ]
I acknowledge that I and all co-authors of this work have read and commit to adhering to the COLM Code of Ethics on https://colmweb.org/CoE.html
I certify that this submission complies with the submission instructions as described on https://colmweb.org/AuthorGuide.html
null
null
@inproceedings{ zhang2025bootstrapping, title={Bootstrapping Visual Assistant Modeling with Situated Interaction Simulation}, author={Yichi Zhang and Run Peng and Yinpei Dai and Lingyun Wu and Xuweiyi Chen and Qiaozi Gao and Joyce Chai}, booktitle={Second Conference on Language Modeling}, year={2025}, url={https://open...
zhang|bootstrapping_visual_assistant_modeling_with_situated_interaction_simulation
null
null
null
null
null
Weight ensembling improves reasoning in language models
Weight ensembling improves pass@k of reasoning models.
We investigate a pitfall during the training of reasoning models where the diversity of generations begins to collapse, leading to suboptimal test-time scaling. Notably, Pass@1 reliably improves during supervised finetuning (SFT), but Pass@k rapidly deteriorates. Surprisingly, a simple intervention of interpolating the...
[ "Xingyu Dang", "Christina Baek", "Kaiyue Wen", "J Zico Kolter", "Aditi Raghunathan" ]
https://openreview.net/forum?id=S2IKxulLT1
S2IKxulLT1
S2IKxulLT1
[ "~Xingyu_Dang2", "~Christina_Baek2", "~Kaiyue_Wen1", "~J_Zico_Kolter1", "~Aditi_Raghunathan1" ]
{ "value": "COLM 2025" }
{ "value": "colmweb.org/COLM/2025/Conference" }
{ "value": "/pdf/9bc30f4f47d61a832f8594cf6a26ad0ec99117f0.pdf" }
conference
colmweb.org/COLM/2025/Conference
2025
COLM
[ "test-time scaling", "RL", "reasoning", "diversity", "decoding" ]
I acknowledge that I and all co-authors of this work have read and commit to adhering to the COLM Code of Ethics on https://colmweb.org/CoE.html
I certify that this submission complies with the submission instructions as described on https://colmweb.org/AuthorGuide.html
null
null
@inproceedings{ dang2025weight, title={Weight ensembling improves reasoning in language models}, author={Xingyu Dang and Christina Baek and Kaiyue Wen and J Zico Kolter and Aditi Raghunathan}, booktitle={Second Conference on Language Modeling}, year={2025}, url={https://openreview.net/forum?id=S2IKxulLT1} }
dang|weight_ensembling_improves_reasoning_in_language_models
null
null
null
null
null
Search-R1: Training LLMs to Reason and Leverage Search Engines with Reinforcement Learning
A RL framework to train LLMs for interleaved reasoning and retrieval
Efficiently acquiring external knowledge and up-to-date information is essential for effective reasoning and text generation in large language models (LLMs). Prompting advanced LLMs with reasoning capabilities to use search engines during inference is often suboptimal, as the LLM might not fully possess the capability...
[ "Bowen Jin", "Hansi Zeng", "Zhenrui Yue", "Jinsung Yoon", "Sercan O Arik", "Dong Wang", "Hamed Zamani", "Jiawei Han" ]
https://openreview.net/forum?id=Rwhi91ideu
Rwhi91ideu
Rwhi91ideu
[ "~Bowen_Jin1", "~Hansi_Zeng1", "~Zhenrui_Yue1", "~Jinsung_Yoon1", "~Sercan_O_Arik1", "~Dong_Wang21", "~Hamed_Zamani1", "~Jiawei_Han1" ]
{ "value": "COLM 2025" }
{ "value": "colmweb.org/COLM/2025/Conference" }
{ "value": "/pdf/df3a75c7e70329ac2e4bbb73b5d7e8eef77584ee.pdf" }
conference
colmweb.org/COLM/2025/Conference
2025
COLM
[ "reasoning", "retrieval", "reinforcement learning" ]
I acknowledge that I and all co-authors of this work have read and commit to adhering to the COLM Code of Ethics on https://colmweb.org/CoE.html
I certify that this submission complies with the submission instructions as described on https://colmweb.org/AuthorGuide.html
null
null
@inproceedings{ jin2025searchr, title={Search-R1: Training {LLM}s to Reason and Leverage Search Engines with Reinforcement Learning}, author={Bowen Jin and Hansi Zeng and Zhenrui Yue and Jinsung Yoon and Sercan O Arik and Dong Wang and Hamed Zamani and Jiawei Han}, booktitle={Second Conference on Language Modeling}, ye...
jin|searchr1_training_llms_to_reason_and_leverage_search_engines_with_reinforcement_learning
null
null
null
null
null
Backdoor Attacks on Dense Retrieval via Public and Unintentional Triggers
Backdoor Attacks on Dense Passage Retrievers
Dense retrieval systems have been widely used in various NLP applications. However, their vulnerabilities to potential attacks have been underexplored. This paper investigates a novel attack scenario where the attackers aim to mislead the retrieval system into retrieving the attacker-specified contents. Those contents,...
[ "Quanyu Long", "Yue Deng", "Leilei Gan", "Wenya Wang", "Sinno Jialin Pan" ]
https://openreview.net/forum?id=RsnxggqW4l
RsnxggqW4l
RsnxggqW4l
[ "~Quanyu_Long1", "~Yue_Deng3", "~Leilei_Gan1", "~Wenya_Wang1", "~Sinno_Jialin_Pan1" ]
{ "value": "COLM 2025" }
{ "value": "colmweb.org/COLM/2025/Conference" }
{ "value": "/pdf/153ccb1928b125deed6cad76d769e9a58fc39235.pdf" }
conference
colmweb.org/COLM/2025/Conference
2025
COLM
[ "backdoor attack", "dense retrieval" ]
I acknowledge that I and all co-authors of this work have read and commit to adhering to the COLM Code of Ethics on https://colmweb.org/CoE.html
I certify that this submission complies with the submission instructions as described on https://colmweb.org/AuthorGuide.html
null
null
@inproceedings{ long2025backdoor, title={Backdoor Attacks on Dense Retrieval via Public and Unintentional Triggers}, author={Quanyu Long and Yue Deng and Leilei Gan and Wenya Wang and Sinno Jialin Pan}, booktitle={Second Conference on Language Modeling}, year={2025}, url={https://openreview.net/forum?id=RsnxggqW4l} }
long|backdoor_attacks_on_dense_retrieval_via_public_and_unintentional_triggers
null
null
null
null
null
Learning by Teaching: Engaging Students as Instructors of Large Language Models in Computer Science Education
Students learn computer science better by teaching large language models, reversing the usual teacher-student roles.
While Large Language Models (LLMs) are often used as virtual tutors in computer science (CS) education, this approach can foster passive learning and over-reliance. This paper presents a novel pedagogical paradigm that inverts this model: students act as instructors who must teach an LLM to solve problems. To facilitat...
[ "Xinming Yang", "Haasil Pujara", "Jun Li" ]
https://openreview.net/forum?id=RUAoV3j6tM
RUAoV3j6tM
RUAoV3j6tM
[ "~Xinming_Yang1", "~Haasil_Pujara1", "~Jun_Li66" ]
{ "value": "COLM 2025" }
{ "value": "colmweb.org/COLM/2025/Conference" }
{ "value": "/pdf/6462e35b4b77b748fed66fc4d0cfa103094b2448.pdf" }
conference
colmweb.org/COLM/2025/Conference
2025
COLM
[ "Large Language Model", "Computer Science Education", "Human-AI Collaboration", "Role Reversal" ]
I acknowledge that I and all co-authors of this work have read and commit to adhering to the COLM Code of Ethics on https://colmweb.org/CoE.html
I certify that this submission complies with the submission instructions as described on https://colmweb.org/AuthorGuide.html
null
null
@inproceedings{ yang2025learning, title={Learning by Teaching: Engaging Students as Instructors of Large Language Models in Computer Science Education}, author={Xinming Yang and Haasil Pujara and Jun Li}, booktitle={Second Conference on Language Modeling}, year={2025}, url={https://openreview.net/forum?id=RUAoV3j6tM} }
yang|learning_by_teaching_engaging_students_as_instructors_of_large_language_models_in_computer_science_education
null
null
null
null
null
From Queries to Criteria: Understanding How Astronomers Evaluate LLMs
We analyze and recommend evaluation criteria grounded in real Human-LLM interactions for exploring scientific literature.
There is growing interest in leveraging LLMs to aid in astronomy and other scientific research, but benchmarks for LLM evaluation in general have not kept pace with the increasingly diverse ways that real people evaluate and use these models. In this study, we seek to improve evaluation procedures by building an unders...
[ "Alina Hyk", "Kiera McCormick", "Mian Zhong", "Ioana Ciucă", "Sanjib Sharma", "John F Wu", "J. E. G. Peek", "Kartheik G. Iyer", "Ziang Xiao", "Anjalie Field" ]
https://openreview.net/forum?id=ROtDZDUgvw
ROtDZDUgvw
ROtDZDUgvw
[ "~Alina_Hyk1", "~Kiera_McCormick1", "~Mian_Zhong1", "~Ioana_Ciucă1", "~Sanjib_Sharma1", "~John_F_Wu1", "~J._E._G._Peek1", "~Kartheik_G._Iyer1", "~Ziang_Xiao1", "~Anjalie_Field2" ]
{ "value": "COLM 2025" }
{ "value": "colmweb.org/COLM/2025/Conference" }
{ "value": "/pdf/faa1d68c305fffb21d97c548e0a7abe00857ac43.pdf" }
conference
colmweb.org/COLM/2025/Conference
2025
COLM
[ "Human-centered computing", "AI for Science", "LM Evaluation", "LLM for Astronomy" ]
I acknowledge that I and all co-authors of this work have read and commit to adhering to the COLM Code of Ethics on https://colmweb.org/CoE.html
I certify that this submission complies with the submission instructions as described on https://colmweb.org/AuthorGuide.html
null
null
@inproceedings{ hyk2025from, title={From Queries to Criteria: Understanding How Astronomers Evaluate {LLM}s}, author={Alina Hyk and Kiera McCormick and Mian Zhong and Ioana Ciuc{\u{a}} and Sanjib Sharma and John F Wu and J. E. G. Peek and Kartheik G. Iyer and Ziang Xiao and Anjalie Field}, booktitle={Second Conference ...
hyk|from_queries_to_criteria_understanding_how_astronomers_evaluate_llms
null
null
null
null
null
OpinioRAG: Towards Generating User-Centric Opinion Highlights from Large-scale Online Reviews
We present OpinioRAG, a scalable framework using RAG-based retrieval and LLMs, with novel verification metrics and a large-scale dataset for user-centric long-form opinion summarization.
We study the problem of opinion highlights generation from large volumes of user reviews, often exceeding thousands per entity, where existing methods either fail to scale or produce generic, one-size-fits-all summaries that overlook personalized needs. To tackle this, we introduce OpinioRAG, a scalable, training-free ...
[ "Mir Tafseer Nayeem", "Davood Rafiei" ]
https://openreview.net/forum?id=R94bCTckhV
R94bCTckhV
R94bCTckhV
[ "~Mir_Tafseer_Nayeem1", "~Davood_Rafiei2" ]
{ "value": "COLM 2025" }
{ "value": "colmweb.org/COLM/2025/Conference" }
{ "value": "/pdf/0cbdbb7ad1e11aa6ca244670f65bfc1fdb47be83.pdf" }
conference
colmweb.org/COLM/2025/Conference
2025
COLM
[ "User-Centric Summarization", "Long-form Opinions", "Retrieval-Augmented Generation (RAG)", "Reference-free Verification", "Dataset Construction" ]
I acknowledge that I and all co-authors of this work have read and commit to adhering to the COLM Code of Ethics on https://colmweb.org/CoE.html
I certify that this submission complies with the submission instructions as described on https://colmweb.org/AuthorGuide.html
true
My concern is about the data licensing - waiting for clarification from the authors (see Questions for Authors).
@inproceedings{ nayeem2025opiniorag, title={Opinio{RAG}: Towards Generating User-Centric Opinion Highlights from Large-scale Online Reviews}, author={Mir Tafseer Nayeem and Davood Rafiei}, booktitle={Second Conference on Language Modeling}, year={2025}, url={https://openreview.net/forum?id=R94bCTckhV} }
nayeem|opiniorag_towards_generating_usercentric_opinion_highlights_from_largescale_online_reviews
null
null
null
null
null
When To Solve, When To Verify: Compute-Optimal Problem Solving and Generative Verification for LLM Reasoning
We perform a test-time compute-matched comparison between scaling solutions via self-consistency and verification via GenRM, yielding useful insights for practitioners.
Scaling test-time compute has emerged as a key strategy for enhancing the reasoning capabilities of large language models (LLMs), particularly in tasks like mathematical problem-solving. A traditional approach, Self-Consistency (SC), generates multiple solutions to a problem and selects the most common answer via major...
[ "Nishad Singhi", "Hritik Bansal", "Arian Hosseini", "Aditya Grover", "Kai-Wei Chang", "Marcus Rohrbach", "Anna Rohrbach" ]
https://openreview.net/forum?id=R7qRUFHGTx
R7qRUFHGTx
R7qRUFHGTx
[ "~Nishad_Singhi1", "~Hritik_Bansal2", "~Arian_Hosseini1", "~Aditya_Grover1", "~Kai-Wei_Chang1", "~Marcus_Rohrbach1", "~Anna_Rohrbach1" ]
{ "value": "COLM 2025" }
{ "value": "colmweb.org/COLM/2025/Conference" }
{ "value": "/pdf/5122ff943a54a4a24606efe50d369cfcb423a44b.pdf" }
conference
colmweb.org/COLM/2025/Conference
2025
COLM
[ "test-time scaling", "self-consistency", "generative reward models", "compute-matched analysis" ]
I acknowledge that I and all co-authors of this work have read and commit to adhering to the COLM Code of Ethics on https://colmweb.org/CoE.html
I certify that this submission complies with the submission instructions as described on https://colmweb.org/AuthorGuide.html
null
null
@inproceedings{ singhi2025when, title={When To Solve, When To Verify: Compute-Optimal Problem Solving and Generative Verification for {LLM} Reasoning}, author={Nishad Singhi and Hritik Bansal and Arian Hosseini and Aditya Grover and Kai-Wei Chang and Marcus Rohrbach and Anna Rohrbach}, booktitle={Second Conference on L...
singhi|when_to_solve_when_to_verify_computeoptimal_problem_solving_and_generative_verification_for_llm_reasoning
null
null
null
null
null
Knowledge Graph Retrieval-Augmented Generation via GNN-Guided Prompting
We propose GGR, a GNN-guided KG-RAG framework that enhances LLM retrieval by incorporating GNN guidance to preserve key reasoning paths and improve relation selection.
Large Language Models (LLMs) have demonstrated remarkable performance in open-domain question answering (QA), but their reliance on knowledge learned during pretraining limits their ability to provide accurate and up-to-date information. Knowledge Graph Retrieval-Augmented Generation (KG-RAG) enhances LLMs by incorpora...
[ "Haochen Liu", "Song Wang", "Jundong Li" ]
https://openreview.net/forum?id=R1NWMExESj
R1NWMExESj
R1NWMExESj
[ "~Haochen_Liu3", "~Song_Wang6", "~Jundong_Li2" ]
{ "value": "COLM 2025" }
{ "value": "colmweb.org/COLM/2025/Conference" }
{ "value": "/pdf/cbb38d13d9f0fcfebd7975de753f0b0daa8006b4.pdf" }
conference
colmweb.org/COLM/2025/Conference
2025
COLM
[ "Knowledge Graph", "Retrieval-Augmented Generation", "Question Answering", "Large Language Model", "Prompt" ]
I acknowledge that I and all co-authors of this work have read and commit to adhering to the COLM Code of Ethics on https://colmweb.org/CoE.html
I certify that this submission complies with the submission instructions as described on https://colmweb.org/AuthorGuide.html
null
null
@inproceedings{ liu2025knowledge, title={Knowledge Graph Retrieval-Augmented Generation via {GNN}-Guided Prompting}, author={Haochen Liu and Song Wang and Jundong Li}, booktitle={Second Conference on Language Modeling}, year={2025}, url={https://openreview.net/forum?id=R1NWMExESj} }
liu|knowledge_graph_retrievalaugmented_generation_via_gnnguided_prompting
null
null
null
null
null
Impact of LLM Alignment on Impression Formation in Social Interactions
Tests of LLMs against Affect Control Theory in gendered social interactions reveal that alignment influences impression formation unpredictably and that models largely ignore context in favor of the actor's identity.
Impression formation plays a crucial role in shaping social life, influencing behaviors, attitudes, and interactions across different contexts. Affect Control Theory (ACT) offers a well-established, empirically grounded model of how people form impressions and evaluate social interactions. We investigate whether Large ...
[ "Ala N. Tak", "Anahita Bolourani", "Daniel B. Shank", "Jonathan Gratch" ]
https://openreview.net/forum?id=R135tO3SJJ
R135tO3SJJ
R135tO3SJJ
[ "~Ala_N._Tak1", "~Anahita_Bolourani1", "~Daniel_B._Shank1", "~Jonathan_Gratch1" ]
{ "value": "COLM 2025" }
{ "value": "colmweb.org/COLM/2025/Conference" }
{ "value": "/pdf/b02d921a48789e5acc49fd89b51b6f87b53fe69f.pdf" }
conference
colmweb.org/COLM/2025/Conference
2025
COLM
[ "Large Language Models", "Impression Formation", "Alignment", "Preference-tuning", "Affect Control Theory" ]
I acknowledge that I and all co-authors of this work have read and commit to adhering to the COLM Code of Ethics on https://colmweb.org/CoE.html
I certify that this submission complies with the submission instructions as described on https://colmweb.org/AuthorGuide.html
null
null
@inproceedings{ tak2025impact, title={Impact of {LLM} Alignment on Impression Formation in Social Interactions}, author={Ala N. Tak and Anahita Bolourani and Daniel B. Shank and Jonathan Gratch}, booktitle={Second Conference on Language Modeling}, year={2025}, url={https://openreview.net/forum?id=R135tO3SJJ} }
tak|impact_of_llm_alignment_on_impression_formation_in_social_interactions
/attachment/5b714302ae223c96227996ba6cd5ec1442f0154d.zip
null
null
null
null
Can LLMs Handle WebShell Detection? Overcoming Detection Challenges with Behavioral Function-Aware Framework
We are the first to analyze the potential and limitations of LLMs for WebShell detection, and we propose a framework that improves their performance, allowing larger LLMs to outperform SOTA and smaller LLMs to be competitive.
WebShell attacks, where malicious scripts are injected into web servers, pose a significant cybersecurity threat. Traditional machine learning and deep learning methods are often hampered by challenges such as the need for extensive training data, catastrophic forgetting, and poor generalization. Recently, Large Langua...
[ "Feijiang Han", "Jiaming Zhang", "Chuyi Deng", "Jianheng Tang", "Yunhuai Liu" ]
https://openreview.net/forum?id=QzJRtz8HNx
QzJRtz8HNx
QzJRtz8HNx
[ "~Feijiang_Han1", "~Jiaming_Zhang17", "~Chuyi_Deng1", "~Jianheng_Tang2", "~Yunhuai_Liu1" ]
{ "value": "COLM 2025" }
{ "value": "colmweb.org/COLM/2025/Conference" }
{ "value": "/pdf/3e5232e85fad3301a2dbd0beb5ee8fe5bfa99ca3.pdf" }
conference
colmweb.org/COLM/2025/Conference
2025
COLM
[ "WebShell detection", "Large Language Models", "Code Analysis", "Cybersecurity", "In-Context Learning" ]
I acknowledge that I and all co-authors of this work have read and commit to adhering to the COLM Code of Ethics on https://colmweb.org/CoE.html
I certify that this submission complies with the submission instructions as described on https://colmweb.org/AuthorGuide.html
null
null
@inproceedings{ han2025can, title={Can {LLM}s Handle WebShell Detection? Overcoming Detection Challenges with Behavioral Function-Aware Framework}, author={Feijiang Han and Jiaming Zhang and Chuyi Deng and Jianheng Tang and Yunhuai Liu}, booktitle={Second Conference on Language Modeling}, year={2025}, url={https://open...
han|can_llms_handle_webshell_detection_overcoming_detection_challenges_with_behavioral_functionaware_framework
null
null
null
null
null
Do Language Models Agree with Human Perceptions of Suspense in Stories?
We show that while language models can detect when a text is meant to be suspenseful, they fail to match human judgments on its intensity and dynamics and are vulnerable to adversarial manipulations.
Suspense is an affective response to narrative text that is believed to involve complex cognitive processes in humans. Several psychological models have been developed to describe this phenomenon and the circumstances under which text might trigger it. We replicate four seminal psychological studies of human perception...
[ "Glenn Matlin", "Devin Zhang", "Rodrigo Barroso Loza", "Diana M. Popescu", "Joni Isbell", "Chandreyi Chakraborty", "Mark Riedl" ]
https://openreview.net/forum?id=Qu0znWWckM
Qu0znWWckM
Qu0znWWckM
[ "~Glenn_Matlin1", "~Devin_Zhang1", "~Rodrigo_Barroso_Loza1", "~Diana_M._Popescu1", "~Joni_Isbell1", "~Chandreyi_Chakraborty1", "~Mark_Riedl1" ]
{ "value": "COLM 2025" }
{ "value": "colmweb.org/COLM/2025/Conference" }
{ "value": "/pdf/69a31724b402656f84fd2b0e7ff98df2ff4a9c2a.pdf" }
conference
colmweb.org/COLM/2025/Conference
2025
COLM
[ "Language Models (LMs)", "Cognitive Science", "Psycholinguistics", "Human Alignment", "Theory of Mind" ]
I acknowledge that I and all co-authors of this work have read and commit to adhering to the COLM Code of Ethics on https://colmweb.org/CoE.html
I certify that this submission complies with the submission instructions as described on https://colmweb.org/AuthorGuide.html
null
null
@inproceedings{ matlin2025do, title={Do Language Models Agree with Human Perceptions of Suspense in Stories?}, author={Glenn Matlin and Devin Zhang and Rodrigo Barroso Loza and Diana M. Popescu and Joni Isbell and Chandreyi Chakraborty and Mark Riedl}, booktitle={Second Conference on Language Modeling}, year={2025}, ur...
matlin|do_language_models_agree_with_human_perceptions_of_suspense_in_stories
null
null
null
null
null
Humans overrely on overconfident language models, across languages
Multilingual LLMs are overconfident across languages, and users overrely on confident responses.
As large language models (LLMs) are deployed globally, it is crucial that their responses are calibrated across languages to accurately convey uncertainty and limitations. Prior work shows that LLMs are linguistically overconfident in English, leading users to overrely on confident generations. However, the usage and i...
[ "Neil Rathi", "Dan Jurafsky", "Kaitlyn Zhou" ]
https://openreview.net/forum?id=QsQatTzATT
QsQatTzATT
QsQatTzATT
[ "~Neil_Rathi1", "~Dan_Jurafsky1", "~Kaitlyn_Zhou1" ]
{ "value": "COLM 2025" }
{ "value": "colmweb.org/COLM/2025/Conference" }
{ "value": "/pdf/98780e1485e4b6e82cc6c169631d5ca7d01f99b2.pdf" }
conference
colmweb.org/COLM/2025/Conference
2025
COLM
[ "multilingual language models", "uncertainty" ]
I acknowledge that I and all co-authors of this work have read and commit to adhering to the COLM Code of Ethics on https://colmweb.org/CoE.html
I certify that this submission complies with the submission instructions as described on https://colmweb.org/AuthorGuide.html
null
null
@inproceedings{ rathi2025humans, title={Humans overrely on overconfident language models, across languages}, author={Neil Rathi and Dan Jurafsky and Kaitlyn Zhou}, booktitle={Second Conference on Language Modeling}, year={2025}, url={https://openreview.net/forum?id=QsQatTzATT} }
rathi|humans_overrely_on_overconfident_language_models_across_languages
null
null
null
null
null
Reinforcement Learning Enhanced Full-Duplex Spoken Dialogue Language Models for Conversational Interactions
Using reinforcement learning to optimize spoken dialogue models.
Mainstream spoken dialogue language models (SDLMs) primarily handle turn-based interactions by alternating between processing user speech and generating responses. Recently emerging full-duplex SDLMs have showcased more natural and engaging conversational performance by simultaneously listening and speaking. However, t...
[ "Chen Chen", "Ke Hu", "Chao-Han Huck Yang", "Ankita Pasad", "Edresson Casanova", "Weiqing Wang", "Szu-Wei Fu", "Jason Li", "Zhehuai Chen", "Jagadeesh Balam", "Boris Ginsburg" ]
https://openreview.net/forum?id=QbLbXz8Idp
QbLbXz8Idp
QbLbXz8Idp
[ "~Chen_Chen56", "~Ke_Hu12", "~Chao-Han_Huck_Yang1", "~Ankita_Pasad1", "~Edresson_Casanova1", "~Weiqing_Wang4", "~Szu-Wei_Fu1", "~Jason_Li1", "~Zhehuai_Chen1", "~Jagadeesh_Balam1", "~Boris_Ginsburg1" ]
{ "value": "COLM 2025" }
{ "value": "colmweb.org/COLM/2025/Conference" }
{ "value": "/pdf/8d53560d6423d1919770e67b37d4b1561e30de3e.pdf" }
conference
colmweb.org/COLM/2025/Conference
2025
COLM
[ "Full-Duplex model", "Spoken Dialogue Models", "Speech-to-Speech model" ]
I acknowledge that I and all co-authors of this work have read and commit to adhering to the COLM Code of Ethics on https://colmweb.org/CoE.html
I certify that this submission complies with the submission instructions as described on https://colmweb.org/AuthorGuide.html
null
null
@inproceedings{ chen2025reinforcement, title={Reinforcement Learning Enhanced Full-Duplex Spoken Dialogue Language Models for Conversational Interactions}, author={Chen Chen and Ke Hu and Chao-Han Huck Yang and Ankita Pasad and Edresson Casanova and Weiqing Wang and Szu-Wei Fu and Jason Li and Zhehuai Chen and Jagadees...
chen|reinforcement_learning_enhanced_fullduplex_spoken_dialogue_language_models_for_conversational_interactions
null
null
null
null
null
Language Model Uncertainty Quantification with Attention Chain
Investigates large language model uncertainty estimation by backtracking critical reasoning tokens via an attention chain.
Accurately quantifying a large language model's (LLM) predictive uncertainty is crucial for judging the reliability of its answers. While most existing research focuses on short, directly answerable questions with closed-form outputs (e.g., multiple-choice), involving intermediate reasoning steps in LLM responses is in...
[ "Yinghao Li", "Rushi Qiang", "Lama Moukheiber", "Chao Zhang" ]
https://openreview.net/forum?id=QTrW2HWNXe
QTrW2HWNXe
QTrW2HWNXe
[ "~Yinghao_Li3", "~Rushi_Qiang1", "~Lama_Moukheiber2", "~Chao_Zhang15" ]
{ "value": "COLM 2025" }
{ "value": "colmweb.org/COLM/2025/Conference" }
{ "value": "/pdf/4a1ba392c01b705b8520ba423a11c47a8c524f81.pdf" }
conference
colmweb.org/COLM/2025/Conference
2025
COLM
[ "Uncertainty Estimation", "Large Language Model", "Attention" ]
I acknowledge that I and all co-authors of this work have read and commit to adhering to the COLM Code of Ethics on https://colmweb.org/CoE.html
I certify that this submission complies with the submission instructions as described on https://colmweb.org/AuthorGuide.html
null
null
@inproceedings{ li2025language, title={Language Model Uncertainty Quantification with Attention Chain}, author={Yinghao Li and Rushi Qiang and Lama Moukheiber and Chao Zhang}, booktitle={Second Conference on Language Modeling}, year={2025}, url={https://openreview.net/forum?id=QTrW2HWNXe} }
li|language_model_uncertainty_quantification_with_attention_chain
null
null
null
null
null
Can Large Language Models Integrate Spatial Data? Empirical Insights into Reasoning Strengths and Computational Weaknesses
We use large language models for the spatial data integration application. Our proposed heuristic-driven method and review-and-refine method demonstrate remarkable effectiveness in this application.
We explore the application of large language models (LLMs) to empower domain experts in integrating large, heterogeneous, and noisy urban spatial datasets. Traditional rule-based integration methods are unable to cover all edge cases, requiring manual verification and repair. Machine learning approaches require collect...
[ "Bin HAN", "Robert Wolfe", "Anat Caspi", "Bill Howe" ]
https://openreview.net/forum?id=QNaHC8njYt
QNaHC8njYt
QNaHC8njYt
[ "~Bin_HAN1", "~Robert_Wolfe1", "~Anat_Caspi1", "~Bill_Howe1" ]
{ "value": "COLM 2025" }
{ "value": "colmweb.org/COLM/2025/Conference" }
{ "value": "/pdf/69c17c97256437504728268c94cdd5b6bba528ca.pdf" }
conference
colmweb.org/COLM/2025/Conference
2025
COLM
[ "large language models", "language model application", "spatial data integration", "spatial reasoning" ]
I acknowledge that I and all co-authors of this work have read and commit to adhering to the COLM Code of Ethics on https://colmweb.org/CoE.html
I certify that this submission complies with the submission instructions as described on https://colmweb.org/AuthorGuide.html
null
null
@inproceedings{ han2025can, title={Can Large Language Models Integrate Spatial Data? Empirical Insights into Reasoning Strengths and Computational Weaknesses}, author={Bin HAN and Robert Wolfe and Anat Caspi and Bill Howe}, booktitle={Second Conference on Language Modeling}, year={2025}, url={https://openreview.net/for...
han|can_large_language_models_integrate_spatial_data_empirical_insights_into_reasoning_strengths_and_computational_weaknesses
null
null
null
null
null
Cognitive Behaviors that Enable Self-Improving Reasoners, or, Four Habits of Highly Effective STaRs
We show that language models with built-in cognitive behaviors like verification and backtracking learn better through reinforcement learning than those without.
Test-time inference has emerged as a powerful paradigm for enabling language models to ``think'' longer and more carefully about complex challenges, much like skilled human experts. While reinforcement learning (RL) can drive self-improvement in language models on verifiable tasks, some models exhibit substantial gains...
[ "Kanishk Gandhi", "Ayush K Chakravarthy", "Anikait Singh", "Nathan Lile", "Noah Goodman" ]
https://openreview.net/forum?id=QGJ9ttXLTy
QGJ9ttXLTy
QGJ9ttXLTy
[ "~Kanishk_Gandhi1", "~Ayush_K_Chakravarthy1", "~Anikait_Singh1", "~Nathan_Lile1", "~Noah_Goodman1" ]
{ "value": "COLM 2025" }
{ "value": "colmweb.org/COLM/2025/Conference" }
{ "value": "/pdf/e32fbe0f53c45c0770ca8bed05f63276b7eff3e4.pdf" }
conference
colmweb.org/COLM/2025/Conference
2025
COLM
[ "Reasoning", "RL", "self-improvement", "backtracking", "test-time compute", "planning", "search" ]
I acknowledge that I and all co-authors of this work have read and commit to adhering to the COLM Code of Ethics on https://colmweb.org/CoE.html
I certify that this submission complies with the submission instructions as described on https://colmweb.org/AuthorGuide.html
null
null
@inproceedings{ gandhi2025cognitive, title={Cognitive Behaviors that Enable Self-Improving Reasoners, or, Four Habits of Highly Effective {ST}aRs}, author={Kanishk Gandhi and Ayush K Chakravarthy and Anikait Singh and Nathan Lile and Noah Goodman}, booktitle={Second Conference on Language Modeling}, year={2025}, url={h...
gandhi|cognitive_behaviors_that_enable_selfimproving_reasoners_or_four_habits_of_highly_effective_stars
null
null
null
null
null
Breaking the Data Barrier -- Building GUI Agents Through Task Generalization
We comprehensively study how middle training on a series of tasks outside the GUI domain can enhance specific capabilities such as GUI perception, visual reasoning, and knowledge.
Graphical User Interface (GUI) agents offer cross-platform solutions for automating complex digital tasks, with significant potential to transform productivity workflows. However, their performance is often constrained by the scarcity of high-quality trajectory data. To address this limitation, we propose training Visi...
[ "Junlei Zhang", "Zichen Ding", "Chang Ma", "Zijie Chen", "Qiushi Sun", "Zhenzhong Lan", "Junxian He" ]
https://openreview.net/forum?id=QDtORaZt8K
QDtORaZt8K
QDtORaZt8K
[ "~Junlei_Zhang1", "~Zichen_Ding1", "~Chang_Ma2", "~Zijie_Chen3", "~Qiushi_Sun1", "~Zhenzhong_Lan2", "~Junxian_He1" ]
{ "value": "COLM 2025" }
{ "value": "colmweb.org/COLM/2025/Conference" }
{ "value": "/pdf/65903331eb76e9b57f2b8de9291044b97381d0d6.pdf" }
conference
colmweb.org/COLM/2025/Conference
2025
COLM
[ "GUI agent", "middle training", "llm as agent" ]
I acknowledge that I and all co-authors of this work have read and commit to adhering to the COLM Code of Ethics on https://colmweb.org/CoE.html
I certify that this submission complies with the submission instructions as described on https://colmweb.org/AuthorGuide.html
null
null
@inproceedings{ zhang2025breaking, title={Breaking the Data Barrier -- Building {GUI} Agents Through Task Generalization}, author={Junlei Zhang and Zichen Ding and Chang Ma and Zijie Chen and Qiushi Sun and Zhenzhong Lan and Junxian He}, booktitle={Second Conference on Language Modeling}, year={2025}, url={https://open...
zhang|breaking_the_data_barrier_building_gui_agents_through_task_generalization
null
null
null
null
null
HyperINF: Unleashing the HyperPower of Schulz's Method for Data Influence Estimation
We propose HyperINF, an efficient and accurate influence function approximation which leverages the hyperpower method, specifically Schulz's iterative algorithm.
Influence functions provide a principled approach to assess individual training samples' contributions to specific targets. However, their high computational costs have limited applications in large-scale models and datasets. While existing approximation methods have reduced computational overhead, they often suffer fr...
[ "Xinyu Zhou", "Simin Fan", "Martin Jaggi" ]
https://openreview.net/forum?id=QByEdZMJdx
QByEdZMJdx
QByEdZMJdx
[ "~Xinyu_Zhou8", "~Simin_Fan1", "~Martin_Jaggi1" ]
{ "value": "COLM 2025" }
{ "value": "colmweb.org/COLM/2025/Conference" }
{ "value": "/pdf/2cb27e8f99cce1830e4a75d9aee480e3e43756b4.pdf" }
conference
colmweb.org/COLM/2025/Conference
2025
COLM
[ "language model; data; influence score" ]
I acknowledge that I and all co-authors of this work have read and commit to adhering to the COLM Code of Ethics on https://colmweb.org/CoE.html
I certify that this submission complies with the submission instructions as described on https://colmweb.org/AuthorGuide.html
null
null
@inproceedings{ zhou2025hyperinf, title={Hyper{INF}: Unleashing the HyperPower of Schulz's Method for Data Influence Estimation}, author={Xinyu Zhou and Simin Fan and Martin Jaggi}, booktitle={Second Conference on Language Modeling}, year={2025}, url={https://openreview.net/forum?id=QByEdZMJdx} }
zhou|hyperinf_unleashing_the_hyperpower_of_schulzs_method_for_data_influence_estimation
null
null
null
null
null
Cascade Reward Sampling for Efficient Decoding-Time Alignment
We significantly improve decoding efficiency for decoding-time alignment methods while achieving better alignment quality.
Aligning large language models (LLMs) with human preferences is essential for their applications. Recently, decoding-time alignment has emerged as an effective plug-and-play technique that avoids fine-tuning model parameters. This approach retains the general utility of pretrained LLMs but often suffers from significan...
[ "Bolian Li", "Yifan Wang", "Anamika Lochab", "Ananth Grama", "Ruqi Zhang" ]
https://openreview.net/forum?id=QBmxLlmRYG
QBmxLlmRYG
QBmxLlmRYG
[ "~Bolian_Li1", "~Yifan_Wang14", "~Anamika_Lochab1", "~Ananth_Grama1", "~Ruqi_Zhang1" ]
{ "value": "COLM 2025" }
{ "value": "colmweb.org/COLM/2025/Conference" }
{ "value": "/pdf/c08d4964c6e7706dfeb2f687a2ed9633ed0a3bed.pdf" }
conference
colmweb.org/COLM/2025/Conference
2025
COLM
[ "Large Language Models", "LLM Alignment" ]
I acknowledge that I and all co-authors of this work have read and commit to adhering to the COLM Code of Ethics on https://colmweb.org/CoE.html
I certify that this submission complies with the submission instructions as described on https://colmweb.org/AuthorGuide.html
null
null
@inproceedings{ li2025cascade, title={Cascade Reward Sampling for Efficient Decoding-Time Alignment}, author={Bolian Li and Yifan Wang and Anamika Lochab and Ananth Grama and Ruqi Zhang}, booktitle={Second Conference on Language Modeling}, year={2025}, url={https://openreview.net/forum?id=QBmxLlmRYG} }
li|cascade_reward_sampling_for_efficient_decodingtime_alignment
/attachment/5dc60966cda3252835f4c0db2fb156fc31772b56.zip
null
null
null
null
HIPPO-VIDEO : Simulating Watch Histories with Large Language Models for History-Driven Video Highlighting
We introduce a large-scale dataset for personalized video highlighting by simulating user watch history and generating segment-wise saliency scores, enabling more user-centric video summarization.
The exponential growth of video content has made personalized video highlighting an essential task, as user preferences are highly variable and complex. Existing video datasets, however, often lack personalization, relying on isolated videos or simple text queries that fail to capture the intricacies of user behavior. ...
[ "Jeongeun Lee", "Youngjae Yu", "Dongha Lee" ]
https://openreview.net/forum?id=Q6TCkggzQ2
Q6TCkggzQ2
Q6TCkggzQ2
[ "~Jeongeun_Lee3", "~Youngjae_Yu1", "~Dongha_Lee1" ]
{ "value": "COLM 2025" }
{ "value": "colmweb.org/COLM/2025/Conference" }
{ "value": "/pdf/fdd7fb096ae738e491cc7b7405981d0020c22a4a.pdf" }
conference
colmweb.org/COLM/2025/Conference
2025
COLM
[ "video understanding", "personalization", "highlight detection" ]
I acknowledge that I and all co-authors of this work have read and commit to adhering to the COLM Code of Ethics on https://colmweb.org/CoE.html
I certify that this submission complies with the submission instructions as described on https://colmweb.org/AuthorGuide.html
null
null
@inproceedings{ lee2025hippovideo, title={{HIPPO}-{VIDEO} : Simulating Watch Histories with Large Language Models for History-Driven Video Highlighting}, author={Jeongeun Lee and Youngjae Yu and Dongha Lee}, booktitle={Second Conference on Language Modeling}, year={2025}, url={https://openreview.net/forum?id=Q6TCkggzQ2...
lee|hippovideo_simulating_watch_histories_with_large_language_models_for_historydriven_video_highlighting
null
null
null
null
null
CodeARC: Benchmarking Reasoning Capabilities of LLM Agents for Inductive Program Synthesis
We introduce CodeARC, a benchmark for inductive program synthesis where LLM agents iteratively refine code via oracle feedback, enabling more realistic and challenging evaluation for inductive reasoning.
Inductive program synthesis, or programming by example, requires synthesizing functions from input-output examples that generalize to unseen inputs. While large language model agents have shown promise in programming tasks guided by natural language, their ability to perform inductive program synthesis is underexplored...
[ "Anjiang Wei", "Tarun Suresh", "Jiannan Cao", "Naveen Kannan", "Yuheng Wu", "Kai Yan", "Thiago S. F. X. Teixeira", "Ke Wang", "Alex Aiken" ]
https://openreview.net/forum?id=Q5pVZCrrKr
Q5pVZCrrKr
Q5pVZCrrKr
[ "~Anjiang_Wei1", "~Tarun_Suresh1", "~Jiannan_Cao1", "~Naveen_Kannan1", "~Yuheng_Wu2", "~Kai_Yan1", "~Thiago_S._F._X._Teixeira1", "~Ke_Wang1", "~Alex_Aiken1" ]
{ "value": "COLM 2025" }
{ "value": "colmweb.org/COLM/2025/Conference" }
{ "value": "/pdf/8f979cde6db2a776c44f5873ad229b587608061a.pdf" }
conference
colmweb.org/COLM/2025/Conference
2025
COLM
[ "Agent", "Large Language Model", "Reasoning", "Code", "Program Synthesis" ]
I acknowledge that I and all co-authors of this work have read and commit to adhering to the COLM Code of Ethics on https://colmweb.org/CoE.html
I certify that this submission complies with the submission instructions as described on https://colmweb.org/AuthorGuide.html
null
null
@inproceedings{ wei2025codearc, title={Code{ARC}: Benchmarking Reasoning Capabilities of {LLM} Agents for Inductive Program Synthesis}, author={Anjiang Wei and Tarun Suresh and Jiannan Cao and Naveen Kannan and Yuheng Wu and Kai Yan and Thiago S. F. X. Teixeira and Ke Wang and Alex Aiken}, booktitle={Second Conference ...
wei|codearc_benchmarking_reasoning_capabilities_of_llm_agents_for_inductive_program_synthesis
null
null
null
null
null
RRO: LLM Agent Optimization Through Rising Reward Trajectories
A new "Reward Rising Optimization" method trains AI agents more efficiently by only collecting data when rewards increase between steps.
Large language models (LLMs) have exhibited extraordinary performance in a variety of tasks, while it remains challenging for them to solve complex multi-step tasks as agents. In practice, agents are sensitive to the outcome of certain key steps, which makes them likely to fail the task because of a subtle mistake in t...
[ "Zilong Wang", "Jingfeng Yang", "Sreyashi Nag", "Samarth Varshney", "Xianfeng Tang", "Haoming Jiang", "Jingbo Shang", "Sheikh Muhammad Sarwar" ]
https://openreview.net/forum?id=PhaE8TSM5j
PhaE8TSM5j
PhaE8TSM5j
[ "~Zilong_Wang1", "~Jingfeng_Yang2", "~Sreyashi_Nag1", "~Samarth_Varshney1", "~Xianfeng_Tang1", "~Haoming_Jiang1", "~Jingbo_Shang2", "~Sheikh_Muhammad_Sarwar1" ]
{ "value": "COLM 2025" }
{ "value": "colmweb.org/COLM/2025/Conference" }
{ "value": "/pdf/fba685046b9a84f29e12efe295ee88aac2124f32.pdf" }
conference
colmweb.org/COLM/2025/Conference
2025
COLM
[ "large language model", "agent", "reinforcement learning", "process reward model" ]
I acknowledge that I and all co-authors of this work have read and commit to adhering to the COLM Code of Ethics on https://colmweb.org/CoE.html
I certify that this submission complies with the submission instructions as described on https://colmweb.org/AuthorGuide.html
null
null
@inproceedings{ wang2025rro, title={{RRO}: {LLM} Agent Optimization Through Rising Reward Trajectories}, author={Zilong Wang and Jingfeng Yang and Sreyashi Nag and Samarth Varshney and Xianfeng Tang and Haoming Jiang and Jingbo Shang and Sheikh Muhammad Sarwar}, booktitle={Second Conference on Language Modeling}, year=...
wang|rro_llm_agent_optimization_through_rising_reward_trajectories
null
null
null
null
null
Rank1: Test-Time Compute for Reranking in Information Retrieval
We train the first reranker using test-time compute in information retrieval
We introduce Rank1, the first reranking model trained to take advantage of test-time compute. Rank1 demonstrates the applicability within retrieval of using a reasoning language model (i.e. OpenAI's o1, Deepseek's R1, etc.) for distillation in order to rapidly improve the performance of a smaller model. We gather and o...
[ "Orion Weller", "Kathryn Ricci", "Eugene Yang", "Andrew Yates", "Dawn Lawrie", "Benjamin Van Durme" ]
https://openreview.net/forum?id=Pg0PAvbhGv
Pg0PAvbhGv
Pg0PAvbhGv
[ "~Orion_Weller1", "~Kathryn_Ricci1", "~Eugene_Yang2", "~Andrew_Yates2", "~Dawn_Lawrie1", "~Benjamin_Van_Durme2" ]
{ "value": "COLM 2025" }
{ "value": "colmweb.org/COLM/2025/Conference" }
{ "value": "/pdf/6ad5cc3faf8da5bd58f1ac774688c871598827e1.pdf" }
conference
colmweb.org/COLM/2025/Conference
2,025
COLM
[ "retrieval", "reranking", "test-time compute" ]
I acknowledge that I and all co-authors of this work have read and commit to adhering to the COLM Code of Ethics on https://colmweb.org/CoE.html
I certify that this submission complies with the submission instructions as described on https://colmweb.org/AuthorGuide.html
null
null
@inproceedings{ weller2025rank, title={Rank1: Test-Time Compute for Reranking in Information Retrieval}, author={Orion Weller and Kathryn Ricci and Eugene Yang and Andrew Yates and Dawn Lawrie and Benjamin Van Durme}, booktitle={Second Conference on Language Modeling}, year={2025}, url={https://openreview.net/forum?id=...
weller|rank1_testtime_compute_for_reranking_in_information_retrieval
null
null
null
null
null
Pretraining on the Test Set Is No Longer All You Need: A Debate-Driven Approach to QA Benchmarks
We introduce a debate-driven evaluation paradigm that transforms existing QA benchmarks into adversarial debates between models, providing a more robust assessment of reasoning abilities while penalizing shallow memorization.
As frontier language models increasingly saturate standard QA benchmarks, concerns about data contamination, memorization, and escalating dataset creation costs persist. We propose a debate-driven evaluation paradigm that transforms any existing QA dataset into structured adversarial debates—where one model is given th...
[ "Linbo Cao", "Jinman Zhao" ]
https://openreview.net/forum?id=Pdyh3USc2A
Pdyh3USc2A
Pdyh3USc2A
[ "~Linbo_Cao1", "~Jinman_Zhao2" ]
{ "value": "COLM 2025" }
{ "value": "colmweb.org/COLM/2025/Conference" }
{ "value": "/pdf/09936fe14703f9228a42fab935d3064134b7c293.pdf" }
conference
colmweb.org/COLM/2025/Conference
2,025
COLM
[ "debate-driven evaluation", "QA benchmarks", "multi-agent debate", "language model evaluation", "benchmark contamination", "model memorization", "adversarial evaluation", "dynamic assessment" ]
I acknowledge that I and all co-authors of this work have read and commit to adhering to the COLM Code of Ethics on https://colmweb.org/CoE.html
I certify that this submission complies with the submission instructions as described on https://colmweb.org/AuthorGuide.html
null
null
@inproceedings{ cao2025pretraining, title={Pretraining on the Test Set Is No Longer All You Need: A Debate-Driven Approach to {QA} Benchmarks}, author={Linbo Cao and Jinman Zhao}, booktitle={Second Conference on Language Modeling}, year={2025}, url={https://openreview.net/forum?id=Pdyh3USc2A} }
cao|pretraining_on_the_test_set_is_no_longer_all_you_need_a_debatedriven_approach_to_qa_benchmarks
null
null
null
null
null
Refusal Tokens: A Simple Way to Calibrate Refusals in Large Language Models
Introduce refusal tokens to enable control over a single model’s refusal rates and discuss desirable data properties for optimizing this approach.
A key component of building safe and reliable language models is enabling the models to appropriately refuse to follow certain instructions or answer certain questions. We may want models to output refusal messages for various categories of user queries, for example, ill-posed questions, instructions for committing ill...
[ "Neel Jain", "Aditya Shrivastava", "Chenyang Zhu", "Daben Liu", "Alfy Samuel", "Ashwinee Panda", "Anoop Kumar", "Micah Goldblum", "Tom Goldstein" ]
https://openreview.net/forum?id=Pbs4i3FgbD
Pbs4i3FgbD
Pbs4i3FgbD
[ "~Neel_Jain1", "~Aditya_Shrivastava1", "~Chenyang_Zhu3", "~Daben_Liu1", "~Alfy_Samuel1", "~Ashwinee_Panda1", "~Anoop_Kumar1", "~Micah_Goldblum1", "~Tom_Goldstein1" ]
{ "value": "COLM 2025" }
{ "value": "colmweb.org/COLM/2025/Conference" }
{ "value": "/pdf/f1f06df0b4d2acfebdf8baf0ef258f783339b79d.pdf" }
conference
colmweb.org/COLM/2025/Conference
2,025
COLM
[ "LLM", "Refusals" ]
I acknowledge that I and all co-authors of this work have read and commit to adhering to the COLM Code of Ethics on https://colmweb.org/CoE.html
I certify that this submission complies with the submission instructions as described on https://colmweb.org/AuthorGuide.html
null
null
@inproceedings{ jain2025refusal, title={Refusal Tokens: A Simple Way to Calibrate Refusals in Large Language Models}, author={Neel Jain and Aditya Shrivastava and Chenyang Zhu and Daben Liu and Alfy Samuel and Ashwinee Panda and Anoop Kumar and Micah Goldblum and Tom Goldstein}, booktitle={Second Conference on Language...
jain|refusal_tokens_a_simple_way_to_calibrate_refusals_in_large_language_models
/attachment/7752bce29aea8f6a0690ad433de3e3048047aa96.zip
null
null
null
null
VisOnlyQA: Large Vision Language Models Still Struggle with Visual Perception of Geometric Information
We introduce VisOnlyQA, a dataset to evaluate the capability of Large Vision Language Models to perceive geometric information, such as lengths, angles, and shapes, and reveal that they still cannot accurately perceive basic geometric information.
Large Vision Language Models (LVLMs) have achieved remarkable performance in various vision-language tasks. However, it is still unclear how accurately LVLMs can perceive visual information in images. In particular, the capability of LVLMs to perceive geometric information, such as shape, angle, and size, remains insuf...
[ "Ryo Kamoi", "Yusen Zhang", "Sarkar Snigdha Sarathi Das", "Ranran Haoran Zhang", "Rui Zhang" ]
https://openreview.net/forum?id=PYHwlyu2fa
PYHwlyu2fa
PYHwlyu2fa
[ "~Ryo_Kamoi1", "~Yusen_Zhang1", "~Sarkar_Snigdha_Sarathi_Das1", "~Ranran_Haoran_Zhang2", "~Rui_Zhang7" ]
{ "value": "COLM 2025" }
{ "value": "colmweb.org/COLM/2025/Conference" }
{ "value": "/pdf/a9e26bf74b28475a0a1452686d6142b96af68b9a.pdf" }
conference
colmweb.org/COLM/2025/Conference
2,025
COLM
[ "vision-language models" ]
I acknowledge that I and all co-authors of this work have read and commit to adhering to the COLM Code of Ethics on https://colmweb.org/CoE.html
I certify that this submission complies with the submission instructions as described on https://colmweb.org/AuthorGuide.html
null
null
@inproceedings{ kamoi2025visonlyqa, title={VisOnly{QA}: Large Vision Language Models Still Struggle with Visual Perception of Geometric Information}, author={Ryo Kamoi and Yusen Zhang and Sarkar Snigdha Sarathi Das and Ranran Haoran Zhang and Rui Zhang}, booktitle={Second Conference on Language Modeling}, year={2025}, ...
kamoi|visonlyqa_large_vision_language_models_still_struggle_with_visual_perception_of_geometric_information
/attachment/47d05e93f5817d673511927ceb3287d1c28ae8dc.zip
null
null
null
null
Multi-Agent Retrieval-Augmented Framework for Evidence-Based Counterspeech Against Health Misinformation
A multi-agent retrieval-augmented framework leveraging multiple LLMs to enhance evidence-based counterspeech generation against health misinformation with greater accuracy and refinement.
Large language models (LLMs) incorporated with Retrieval-Augmented Generation (RAG) have demonstrated powerful capabilities in generating counterspeech against misinformation. However, current studies rely on limited evidence and offer less control over final outputs. To address these challenges, we propose a Multi-age...
[ "Anirban Saha Anik", "Xiaoying Song", "Elliott Wang", "Bryan Wang", "Bengisu Yarimbas", "Lingzi Hong" ]
https://openreview.net/forum?id=P61AgRyU7E
P61AgRyU7E
P61AgRyU7E
[ "~Anirban_Saha_Anik1", "~Xiaoying_Song1", "~Elliott_Wang1", "~Bryan_Wang3", "~Bengisu_Yarimbas1", "~Lingzi_Hong1" ]
{ "value": "COLM 2025" }
{ "value": "colmweb.org/COLM/2025/Conference" }
{ "value": "/pdf/228fd2caf6b4d52b0f4718294d46b4f44d03fb2b.pdf" }
conference
colmweb.org/COLM/2025/Conference
2,025
COLM
[ "Large Language Model", "Multi-agent", "Retrieval-Augmented Generation", "Health Misinformation", "Counterspeech" ]
I acknowledge that I and all co-authors of this work have read and commit to adhering to the COLM Code of Ethics on https://colmweb.org/CoE.html
I certify that this submission complies with the submission instructions as described on https://colmweb.org/AuthorGuide.html
null
null
@inproceedings{ anik2025multiagent, title={Multi-Agent Retrieval-Augmented Framework for Evidence-Based Counterspeech Against Health Misinformation}, author={Anirban Saha Anik and Xiaoying Song and Elliott Wang and Bryan Wang and Bengisu Yarimbas and Lingzi Hong}, booktitle={Second Conference on Language Modeling}, yea...
anik|multiagent_retrievalaugmented_framework_for_evidencebased_counterspeech_against_health_misinformation
null
null
null
null
null
Epistemic Alignment: A Mediating Framework for User-LLM Knowledge Delivery
A framework of ten epistemic challenges revealing the gap between how users want knowledge presented and what LLMs currently deliver
Large Language Models (LLMs) increasingly serve as tools for knowledge acquisition, yet users cannot effectively specify how they want information presented. When users request that LLMs "cite reputable sources," "express appropriate uncertainty," or "include multiple perspectives," they discover that current interface...
[ "Nicholas Clark", "Hua Shen", "Bill Howe", "Tanu Mitra" ]
https://openreview.net/forum?id=Orvjm9UqH2
Orvjm9UqH2
Orvjm9UqH2
[ "~Nicholas_Clark2", "~Hua_Shen1", "~Bill_Howe1", "~Tanu_Mitra1" ]
{ "value": "COLM 2025" }
{ "value": "colmweb.org/COLM/2025/Conference" }
{ "value": "/pdf/e40d3c738a2f84d2c439b00945bc1db28eecb5ff.pdf" }
conference
colmweb.org/COLM/2025/Conference
2,025
COLM
[ "epistemology of AI", "language model behavior", "human-AI interaction" ]
I acknowledge that I and all co-authors of this work have read and commit to adhering to the COLM Code of Ethics on https://colmweb.org/CoE.html
I certify that this submission complies with the submission instructions as described on https://colmweb.org/AuthorGuide.html
null
null
@inproceedings{ clark2025epistemic, title={Epistemic Alignment: A Mediating Framework for User-{LLM} Knowledge Delivery}, author={Nicholas Clark and Hua Shen and Bill Howe and Tanu Mitra}, booktitle={Second Conference on Language Modeling}, year={2025}, url={https://openreview.net/forum?id=Orvjm9UqH2} }
clark|epistemic_alignment_a_mediating_framework_for_userllm_knowledge_delivery
null
true
null
null
null
Enhancing LLM Reasoning with Iterative DPO: A Comprehensive Empirical Investigation
DPO enables iterative self-improvement for LLMs, achieving RL-level reasoning performance with lower computational cost through preference-based learning and verifiable rewards.
Recent advancements in post-training methodologies for large language models (LLMs) have highlighted reinforcement learning (RL) as a critical component for enhancing reasoning. However, the substantial computational costs associated with RL-based approaches have led to growing interest in alternative paradigms, such a...
[ "Songjun Tu", "Jiahao Lin", "Xiangyu Tian", "Qichao Zhang", "Linjing Li", "Yuqian Fu", "Nan Xu", "Wei He", "Xiangyuan Lan", "Dongmei Jiang", "Dongbin Zhao" ]
https://openreview.net/forum?id=OgWh4J7bkT
OgWh4J7bkT
OgWh4J7bkT
[ "~Songjun_Tu1", "~Jiahao_Lin4", "~Xiangyu_Tian1", "~Qichao_Zhang3", "~Linjing_Li1", "~Yuqian_Fu3", "~Nan_Xu4", "~Wei_He14", "~Xiangyuan_Lan4", "~Dongmei_Jiang2", "~Dongbin_Zhao1" ]
{ "value": "COLM 2025" }
{ "value": "colmweb.org/COLM/2025/Conference" }
{ "value": "/pdf/bfd8b38b48b9c4d0e40fdafc3605a788ee02041a.pdf" }
conference
colmweb.org/COLM/2025/Conference
2,025
COLM
[ "LLM Reasoning", "Iterative Optimization" ]
I acknowledge that I and all co-authors of this work have read and commit to adhering to the COLM Code of Ethics on https://colmweb.org/CoE.html
I certify that this submission complies with the submission instructions as described on https://colmweb.org/AuthorGuide.html
null
null
@inproceedings{ tu2025enhancing, title={Enhancing {LLM} Reasoning with Iterative {DPO}: A Comprehensive Empirical Investigation}, author={Songjun Tu and Jiahao Lin and Xiangyu Tian and Qichao Zhang and Linjing Li and Yuqian Fu and Nan Xu and Wei He and Xiangyuan Lan and Dongmei Jiang and Dongbin Zhao}, booktitle={Secon...
tu|enhancing_llm_reasoning_with_iterative_dpo_a_comprehensive_empirical_investigation
null
null
null
null
null
LM Agents May Fail to Act on Their Own Risk Knowledge
This paper develops a systematic safety evaluation framework for LM agents, reveals persistent gaps between risk awareness and safe execution, and proposes effective mitigation strategies.
Language model (LM) agents have demonstrated significant potential for automating real-world tasks, yet they pose a diverse array of potential, severe risks in safety-critical scenarios. In this work, we identify a significant gap between LM agents' risk awareness and safety execution abilities: while they often answe...
[ "Yuzhi Tang", "Tianxiao Li", "Elizabeth Li", "Chris J. Maddison", "Honghua Dong", "Yangjun Ruan" ]
https://openreview.net/forum?id=OeYdS51k8F
OeYdS51k8F
OeYdS51k8F
[ "~Yuzhi_Tang2", "~Tianxiao_Li2", "~Elizabeth_Li1", "~Chris_J._Maddison1", "~Honghua_Dong1", "~Yangjun_Ruan1" ]
{ "value": "COLM 2025" }
{ "value": "colmweb.org/COLM/2025/Conference" }
{ "value": "/pdf/71fe0a81e5a774812c204edec896f538589e31c8.pdf" }
conference
colmweb.org/COLM/2025/Conference
2,025
COLM
[ "Large Language Models", "Language Model Agents", "AI Safety", "Evaluation" ]
I acknowledge that I and all co-authors of this work have read and commit to adhering to the COLM Code of Ethics on https://colmweb.org/CoE.html
I certify that this submission complies with the submission instructions as described on https://colmweb.org/AuthorGuide.html
null
null
@inproceedings{ tang2025lm, title={{LM} Agents May Fail to Act on Their Own Risk Knowledge}, author={Yuzhi Tang and Tianxiao Li and Elizabeth Li and Chris J. Maddison and Honghua Dong and Yangjun Ruan}, booktitle={Second Conference on Language Modeling}, year={2025}, url={https://openreview.net/forum?id=OeYdS51k8F} }
tang|lm_agents_may_fail_to_act_on_their_own_risk_knowledge
null
null
null
null
null
Limitations of refinement methods for weak to strong generalization
We study label refinement methods for weak to strong generalization.
Standard techniques for aligning large language models (LLMs) utilize human-produced data, which could limit the capability of any aligned LLM to human level. Label refinement and weak training have emerged as promising strategies to address this *superalignment* problem. In this work, we adopt probabilistic assumption...
[ "Seamus Somerstep", "Yaacov Ritov", "Mikhail Yurochkin", "Subha Maity", "Yuekai Sun" ]
https://openreview.net/forum?id=OKvSnV5Ar7
OKvSnV5Ar7
OKvSnV5Ar7
[ "~Seamus_Somerstep1", "~Yaacov_Ritov2", "~Mikhail_Yurochkin1", "~Subha_Maity1", "~Yuekai_Sun1" ]
{ "value": "COLM 2025" }
{ "value": "colmweb.org/COLM/2025/Conference" }
{ "value": "/pdf/005f74528280f3bdaeb656e8351844729f0cf1b9.pdf" }
conference
colmweb.org/COLM/2025/Conference
2,025
COLM
[ "Weak to strong generalization", "superalignment", "transfer learning" ]
I acknowledge that I and all co-authors of this work have read and commit to adhering to the COLM Code of Ethics on https://colmweb.org/CoE.html
I certify that this submission complies with the submission instructions as described on https://colmweb.org/AuthorGuide.html
null
null
@inproceedings{ somerstep2025limitations, title={Limitations of refinement methods for weak to strong generalization}, author={Seamus Somerstep and Yaacov Ritov and Mikhail Yurochkin and Subha Maity and Yuekai Sun}, booktitle={Second Conference on Language Modeling}, year={2025}, url={https://openreview.net/forum?id=OK...
somerstep|limitations_of_refinement_methods_for_weak_to_strong_generalization
/attachment/abc7fd4434a497d95c9d4ae706bc33726fe44207.zip
null
null
null
null
G1yphD3c0de: Towards Safer Language Models on Visually Perturbed Texts
Towards Safer Language Models on Visually Perturbed Texts
Visual text perturbations are increasingly used to bypass content moderation systems, where characters are replaced with visually similar Unicode alternatives that humans can easily recognize but text-only filters fail to detect. While existing research has examined the generation and classification of such evasion tec...
[ "Yejinchoi", "Yejin Yeo", "Yejin Son", "Seungju Han", "Youngjae Yu" ]
https://openreview.net/forum?id=OGwE7LwtcR
OGwE7LwtcR
OGwE7LwtcR
[ "~Yejinchoi1", "~Yejin_Yeo1", "~Yejin_Son3", "~Seungju_Han2", "~Youngjae_Yu1" ]
{ "value": "COLM 2025" }
{ "value": "colmweb.org/COLM/2025/Conference" }
{ "value": "/pdf/da3d98a642cb0dc79ab3ca14ec82271fee674548.pdf" }
conference
colmweb.org/COLM/2025/Conference
2,025
COLM
[ "safety", "societal implications", "multimodality" ]
I acknowledge that I and all co-authors of this work have read and commit to adhering to the COLM Code of Ethics on https://colmweb.org/CoE.html
I certify that this submission complies with the submission instructions as described on https://colmweb.org/AuthorGuide.html
true
I think the authors may have built a model that is perfect for solving CAPTCHAs. I don't think this should qualify as a barrier to acceptance – this is a good paper! – but I think it should be mentioned in the Ethics statement.
@inproceedings{ yejinchoi2025gyphdcde, title={G1yphD3c0de: Towards Safer Language Models on Visually Perturbed Texts}, author={Yejinchoi and Yejin Yeo and Yejin Son and Seungju Han and Youngjae Yu}, booktitle={Second Conference on Language Modeling}, year={2025}, url={https://openreview.net/forum?id=OGwE7LwtcR} }
yejinchoi|g1yphd3c0de_towards_safer_language_models_on_visually_perturbed_texts
/attachment/73ae442f313b2bcb950d1ff8c21fbc0d155ca084.zip
null
null
null
null
Evaluating the Diversity and Quality of LLM Generated Content
We introduce a methodology/dataset for evaluating the diversity and quality of open-ended LLM generated content. We find RLHF and more broadly preference-tuning meaningfully increase diversity of generations.
Recent work suggests that preference-tuning techniques—such as Reinforcement Learning from Human Feedback (RLHF) methods like PPO and GRPO, as well as alternatives like DPO—reduce diversity, creating a dilemma given that these models are widely deployed in applications requiring varied outputs. We argue that diversity ...
[ "Alexander Shypula", "Shuo Li", "Botong Zhang", "Vishakh Padmakumar", "Kayo Yin", "Osbert Bastani" ]
https://openreview.net/forum?id=O7bF6nlSOD
O7bF6nlSOD
O7bF6nlSOD
[ "~Alexander_Shypula1", "~Shuo_Li7", "~Botong_Zhang1", "~Vishakh_Padmakumar1", "~Kayo_Yin1", "~Osbert_Bastani1" ]
{ "value": "COLM 2025" }
{ "value": "colmweb.org/COLM/2025/Conference" }
{ "value": "/pdf/c5bcff49672b1b239114e6db833087090df5d450.pdf" }
conference
colmweb.org/COLM/2025/Conference
2,025
COLM
[ "Diversity;Alignment;LLMs;Evaluation;Program Synthesis;Code Generation;Creative Writing" ]
I acknowledge that I and all co-authors of this work have read and commit to adhering to the COLM Code of Ethics on https://colmweb.org/CoE.html
I certify that this submission complies with the submission instructions as described on https://colmweb.org/AuthorGuide.html
null
null
@inproceedings{ shypula2025evaluating, title={Evaluating the Diversity and Quality of {LLM} Generated Content}, author={Alexander Shypula and Shuo Li and Botong Zhang and Vishakh Padmakumar and Kayo Yin and Osbert Bastani}, booktitle={Second Conference on Language Modeling}, year={2025}, url={https://openreview.net/for...
shypula|evaluating_the_diversity_and_quality_of_llm_generated_content
/attachment/42f1580fbebf5f89e0d841b74303206a0ec077c6.zip
null
null
null
null
Reasoning Models Know When They’re Right: Probing Hidden States for Self-Verification
Reasoning models with long chain-of-thought encode strong signals about the correctness of intermediate answers in model's hidden states, and we can use it for early exit.
Reasoning models have achieved remarkable performance on tasks like math and logical reasoning thanks to their ability to search during reasoning. However, they still suffer from \textit{overthinking}, often performing unnecessary reasoning steps even after reaching the correct answer. This raises the question: \textit...
[ "Anqi Zhang", "Yulin Chen", "Jane Pan", "Chen Zhao", "Aurojit Panda", "Jinyang Li", "He He" ]
https://openreview.net/forum?id=O6I0Av7683
O6I0Av7683
O6I0Av7683
[ "~Anqi_Zhang1", "~Yulin_Chen1", "~Jane_Pan1", "~Chen_Zhao2", "~Aurojit_Panda1", "~Jinyang_Li1", "~He_He2" ]
{ "value": "COLM 2025" }
{ "value": "colmweb.org/COLM/2025/Conference" }
{ "value": "/pdf/98d87a29faec03b994dcc0f2da69d5c73580d749.pdf" }
conference
colmweb.org/COLM/2025/Conference
2,025
COLM
[ "Reasoning models; Chain-of-thought reasoning (CoT); Intermediate answers; Overthinking" ]
I acknowledge that I and all co-authors of this work have read and commit to adhering to the COLM Code of Ethics on https://colmweb.org/CoE.html
I certify that this submission complies with the submission instructions as described on https://colmweb.org/AuthorGuide.html
null
null
@inproceedings{ zhang2025reasoning, title={Reasoning Models Know When They{\textquoteright}re Right: Probing Hidden States for Self-Verification}, author={Anqi Zhang and Yulin Chen and Jane Pan and Chen Zhao and Aurojit Panda and Jinyang Li and He He}, booktitle={Second Conference on Language Modeling}, year={2025}, ur...
zhang|reasoning_models_know_when_theyre_right_probing_hidden_states_for_selfverification
null
null
null
null
null
Analyzing Multilingualism in Large Language Models with Sparse Autoencoders
We provide several distinct patterns between high- and low-resource languages in LLMs through the lens of Sparse Autoencoders
Despite the impressive multilingual capabilities of recent large language models (LLMs), the mechanisms underlying their language-specific processing remain largely unclear. In this paper, we investigate how LLMs handle multilingualism through the lens of sparse autoencoders (SAEs), uncovering distinctive patterns that...
[ "Ikhyun Cho", "Julia Hockenmaier" ]
https://openreview.net/forum?id=NmGSvZoU3K
NmGSvZoU3K
NmGSvZoU3K
[ "~Ikhyun_Cho4", "~Julia_Hockenmaier1" ]
{ "value": "COLM 2025" }
{ "value": "colmweb.org/COLM/2025/Conference" }
{ "value": "/pdf/170a5e82ae85994de8bf4fda0fc86f34ce8a2975.pdf" }
conference
colmweb.org/COLM/2025/Conference
2,025
COLM
[ "Large Language Models", "Multilingualism", "Sparse Autoencoders" ]
I acknowledge that I and all co-authors of this work have read and commit to adhering to the COLM Code of Ethics on https://colmweb.org/CoE.html
I certify that this submission complies with the submission instructions as described on https://colmweb.org/AuthorGuide.html
null
null
@inproceedings{ cho2025analyzing, title={Analyzing Multilingualism in Large Language Models with Sparse Autoencoders}, author={Ikhyun Cho and Julia Hockenmaier}, booktitle={Second Conference on Language Modeling}, year={2025}, url={https://openreview.net/forum?id=NmGSvZoU3K} }
cho|analyzing_multilingualism_in_large_language_models_with_sparse_autoencoders
null
null
null
null
null
Efficient Self-Improvement in Multimodal Large Language Models: A Model-Level Judge-Free Approach
A novel judge-free self-improvement framework for multimodal large language models (MLLMs) efficiently enhances reliability by controlling hallucinations without costly model-level verification loops.
Self-improvement in multimodal large language models (MLLMs) is crucial for enhancing their reliability and robustness. However, current methods often rely heavily on MLLMs themselves as judges, leading to high computational costs and potential pitfalls like reward hacking and model collapse. This paper introduces a no...
[ "Shijian Deng", "Wentian Zhao", "Yu-Jhe Li", "Kun Wan", "Daniel Miranda", "Ajinkya Kale", "Yapeng Tian" ]
https://openreview.net/forum?id=NRrXHppaBg
NRrXHppaBg
NRrXHppaBg
[ "~Shijian_Deng1", "~Wentian_Zhao3", "~Yu-Jhe_Li1", "~Kun_Wan1", "~Daniel_Miranda1", "~Ajinkya_Kale1", "~Yapeng_Tian1" ]
{ "value": "COLM 2025" }
{ "value": "colmweb.org/COLM/2025/Conference" }
{ "value": "/pdf/f3b8619d0d4e0aaad0303ebd851f4f8e90747e9e.pdf" }
conference
colmweb.org/COLM/2025/Conference
2,025
COLM
[ "Self-Improvement", "Multimodal Large Language Models" ]
I acknowledge that I and all co-authors of this work have read and commit to adhering to the COLM Code of Ethics on https://colmweb.org/CoE.html
I certify that this submission complies with the submission instructions as described on https://colmweb.org/AuthorGuide.html
null
null
@inproceedings{ deng2025efficient, title={Efficient Self-Improvement in Multimodal Large Language Models: A Model-Level Judge-Free Approach}, author={Shijian Deng and Wentian Zhao and Yu-Jhe Li and Kun Wan and Daniel Miranda and Ajinkya Kale and Yapeng Tian}, booktitle={Second Conference on Language Modeling}, year={20...
deng|efficient_selfimprovement_in_multimodal_large_language_models_a_modellevel_judgefree_approach
null
null
null
null
null
LLM Unlearning Reveals a Stronger-Than-Expected Coreset Effect in Current Benchmarks
We uncover a novel coreset effect in current LLM Unlearning benchmarks, where unlearning performance can be effectively maintained using significantly smaller subsets, e.g., as little as 5% of the forget set.
Large language model (LLM) unlearning has become a critical challenge in ensuring safety and controlled model behavior by removing *undesired* data-model influences from the pretrained model while preserving its general utility. Significant recent efforts have been dedicated to developing LLM unlearning benchmarks such...
[ "Soumyadeep Pal", "Changsheng Wang", "James Diffenderfer", "Bhavya Kailkhura", "Sijia Liu" ]
https://openreview.net/forum?id=NMIqKUdDkw
NMIqKUdDkw
NMIqKUdDkw
[ "~Soumyadeep_Pal1", "~Changsheng_Wang1", "~James_Diffenderfer1", "~Bhavya_Kailkhura1", "~Sijia_Liu1" ]
{ "value": "COLM 2025" }
{ "value": "colmweb.org/COLM/2025/Conference" }
{ "value": "/pdf/3ebe54a4517ba5c5146eb4601360e351ec40ffee.pdf" }
conference
colmweb.org/COLM/2025/Conference
2,025
COLM
[ "Machine Unlearning", "Coreset", "Large Language Models" ]
I acknowledge that I and all co-authors of this work have read and commit to adhering to the COLM Code of Ethics on https://colmweb.org/CoE.html
I certify that this submission complies with the submission instructions as described on https://colmweb.org/AuthorGuide.html
null
null
@inproceedings{ pal2025llm, title={{LLM} Unlearning Reveals a Stronger-Than-Expected Coreset Effect in Current Benchmarks}, author={Soumyadeep Pal and Changsheng Wang and James Diffenderfer and Bhavya Kailkhura and Sijia Liu}, booktitle={Second Conference on Language Modeling}, year={2025}, url={https://openreview.net/...
pal|llm_unlearning_reveals_a_strongerthanexpected_coreset_effect_in_current_benchmarks
null
null
null
null
null
Phased Training for LLM-powered Text Retrieval Models Beyond Data Scaling
Training powerful general-purpose text embedding and reranking models by a multi-stage training framework and efficient data synthesis.
Current efforts in building large language model (LLM)-based general-purpose text retrieval models primarily focus on architectural design and training data scaling. However, significant challenges remain in effectively modeling diverse retrieval tasks and domains, including multi-task conflict, data imbalance, and t...
[ "Xin Zhang", "Yanzhao Zhang", "Wen Xie", "Dingkun Long", "Mingxin Li", "Pengjun Xie", "Meishan Zhang", "Wenjie Li", "Min Zhang" ]
https://openreview.net/forum?id=NC6G1KCxlt
NC6G1KCxlt
NC6G1KCxlt
[ "~Xin_Zhang15", "~Yanzhao_Zhang1", "~Wen_Xie3", "~Dingkun_Long1", "~Mingxin_Li2", "~Pengjun_Xie2", "~Meishan_Zhang1", "~Wenjie_Li1", "~Min_Zhang9" ]
{ "value": "COLM 2025" }
{ "value": "colmweb.org/COLM/2025/Conference" }
{ "value": "/pdf/a20b993f28b1029aa779a8f0d710161fc8438355.pdf" }
conference
colmweb.org/COLM/2025/Conference
2,025
COLM
[ "Text Retrieval", "Text Embedding", "Reranking", "LLM-based Embedding" ]
I acknowledge that I and all co-authors of this work have read and commit to adhering to the COLM Code of Ethics on https://colmweb.org/CoE.html
I certify that this submission complies with the submission instructions as described on https://colmweb.org/AuthorGuide.html
null
null
@inproceedings{ zhang2025phased, title={Phased Training for {LLM}-powered Text Retrieval Models Beyond Data Scaling}, author={Xin Zhang and Yanzhao Zhang and Wen Xie and Dingkun Long and Mingxin Li and Pengjun Xie and Meishan Zhang and Wenjie Li and Min Zhang}, booktitle={Second Conference on Language Modeling}, year={...
zhang|phased_training_for_llmpowered_text_retrieval_models_beyond_data_scaling
null
null
null
null
null
Reverse-engineering NLI: A study of the meta-inferential properties of Natural Language Inference
We perform a comprehensive analysis of NLI under three different logical interpretations of its labels and test the meta-inferential behavior of models trained on SNLI to better understand the logical properties of the task encoded by the dataset.
Natural Language Inference (NLI) has been an important task for evaluating language models for Natural Language Understanding, but the logical properties of the task are poorly understood and often mischaracterized. Understanding the notion of inference captured by NLI is key to interpreting model performance on the ta...
[ "Rasmus Blanck", "Bill Noble", "Stergios Chatzikyriakidis" ]
https://openreview.net/forum?id=NAcvSI2CRM
NAcvSI2CRM
NAcvSI2CRM
[ "~Rasmus_Blanck1", "~Bill_Noble1", "~Stergios_Chatzikyriakidis1" ]
{ "value": "COLM 2025" }
{ "value": "colmweb.org/COLM/2025/Conference" }
{ "value": "/pdf/b65364b872fa8af6c1effeaeabbf54157aaf7a3f.pdf" }
conference
colmweb.org/COLM/2025/Conference
2,025
COLM
[ "computational semantics", "natural language inference", "logic" ]
I acknowledge that I and all co-authors of this work have read and commit to adhering to the COLM Code of Ethics on https://colmweb.org/CoE.html
I certify that this submission complies with the submission instructions as described on https://colmweb.org/AuthorGuide.html
null
null
@inproceedings{ blanck2025reverseengineering, title={Reverse-engineering {NLI}: A study of the meta-inferential properties of Natural Language Inference}, author={Rasmus Blanck and Bill Noble and Stergios Chatzikyriakidis}, booktitle={Second Conference on Language Modeling}, year={2025}, url={https://openreview.net/for...
blanck|reverseengineering_nli_a_study_of_the_metainferential_properties_of_natural_language_inference
/attachment/03b068cff59811924a42c4f55e2f14114e1bedfa.zip
null
null
null
null
LawFlow: Collecting and Simulating Lawyers’ Thought Processes on Business Formation Case Studies
LawFlow provides a dataset capturing complex legal workflows for small business-entity formation, highlighting multi-stage reasoning, client communication, and iterative revisions. It reveals human workflows’ flexibility and informs more adaptive AI.
Legal practitioners, particularly those early in their careers, face complex, high-stakes tasks that require adaptive, context-sensitive reasoning. While AI holds promise in supporting legal work, current datasets and models are narrowly focused on isolated subtasks and fail to capture the end-to-end decision-making re...
[ "Debarati Das", "Khanh Chi Le", "Ritik Sachin Parkar", "Karin De Langis", "Brendan Madson", "Chad M. Berryman", "Robin M Willis", "Daniel H Moses", "Brett McDonnell", "Daniel Schwarcz", "Dongyeop Kang" ]
https://openreview.net/forum?id=MsgdEkcLRz
MsgdEkcLRz
MsgdEkcLRz
[ "~Debarati_Das1", "~Khanh_Chi_Le1", "~Ritik_Sachin_Parkar1", "~Karin_De_Langis1", "~Brendan_Madson1", "~Chad_M._Berryman1", "~Robin_M_Willis1", "~Daniel_H_Moses1", "~Brett_McDonnell1", "~Daniel_Schwarcz1", "~Dongyeop_Kang2" ]
{ "value": "COLM 2025" }
{ "value": "colmweb.org/COLM/2025/Conference" }
{ "value": "/pdf/4729d55b46049703475f554c13f7988cb17de39b.pdf" }
conference
colmweb.org/COLM/2025/Conference
2,025
COLM
[ "pretraining data", "synthetic data", "reasoning", "legal perspectives", "legal drafting", "LLM", "chain of thought" ]
I acknowledge that I and all co-authors of this work have read and commit to adhering to the COLM Code of Ethics on https://colmweb.org/CoE.html
I certify that this submission complies with the submission instructions as described on https://colmweb.org/AuthorGuide.html
null
null
@inproceedings{ das2025lawflow, title={LawFlow: Collecting and Simulating Lawyers{\textquoteright} Thought Processes on Business Formation Case Studies}, author={Debarati Das and Khanh Chi Le and Ritik Sachin Parkar and Karin De Langis and Brendan Madson and Chad M. Berryman and Robin M Willis and Daniel H Moses and Br...
das|lawflow_collecting_and_simulating_lawyers_thought_processes_on_business_formation_case_studies
null
null
null
null
null
$\mu$KE: Matryoshka Unstructured Knowledge Editing of Large Language Models
A simple yet effective improvement on unstructured locate-and-edit via Matryoshka-style working memory update.
Large language models (LLMs) have emerged as powerful knowledge bases yet are limited by static training data, leading to issues such as hallucinations and safety risks. Editing a model’s internal knowledge through the locate-and-edit paradigm has proven a cost-effective alternative to retraining, though current unstru...
[ "Zian Su", "Ziyang Huang", "Kaiyuan Zhang", "Xiangyu Zhang" ]
https://openreview.net/forum?id=MiR3ObcF3C
MiR3ObcF3C
MiR3ObcF3C
[ "~Zian_Su1", "~Ziyang_Huang4", "~Kaiyuan_Zhang1", "~Xiangyu_Zhang3" ]
{ "value": "COLM 2025" }
{ "value": "colmweb.org/COLM/2025/Conference" }
{ "value": "/pdf/259428e6508eaa3eed90891f808aa88db8556492.pdf" }
conference
colmweb.org/COLM/2025/Conference
2,025
COLM
[ "knowledge editing", "model editing", "large language models" ]
I acknowledge that I and all co-authors of this work have read and commit to adhering to the COLM Code of Ethics on https://colmweb.org/CoE.html
I certify that this submission complies with the submission instructions as described on https://colmweb.org/AuthorGuide.html
null
null
@inproceedings{ su2025muke, title={\${\textbackslash}mu\${KE}: Matryoshka Unstructured Knowledge Editing of Large Language Models}, author={Zian Su and Ziyang Huang and Kaiyuan Zhang and Xiangyu Zhang}, booktitle={Second Conference on Language Modeling}, year={2025}, url={https://openreview.net/forum?id=MiR3ObcF3C} }
su|\muke_matryoshka_unstructured_knowledge_editing_of_large_language_models
null
null
null
null
null
Have Large Language Models Learned to Reason? A Characterization via 3-SAT
We use phase transitions in random 3-SAT to characterize reasoning abilities of LLMs.
Large Language Models (LLMs) have been touted as AI models possessing advanced reasoning abilities. In theory, autoregressive LLMs with Chain-of-Thought (CoT) can perform more serial computations to solve complex reasoning tasks. However, recent studies suggest that, despite this capacity, LLMs do not truly learn to re...
[ "RISHI HAZRA", "Gabriele Venturato", "Pedro Zuidberg Dos Martires", "Luc De Raedt" ]
https://openreview.net/forum?id=MPTlWIVSMU
MPTlWIVSMU
MPTlWIVSMU
[ "~RISHI_HAZRA1", "~Gabriele_Venturato1", "~Pedro_Zuidberg_Dos_Martires1", "~Luc_De_Raedt1" ]
{ "value": "COLM 2025" }
{ "value": "colmweb.org/COLM/2025/Conference" }
{ "value": "/pdf/9c2fc7475e20f41686932bf67d787220ec571d91.pdf" }
conference
colmweb.org/COLM/2025/Conference
2,025
COLM
[ "Large Language Models", "Reasoning", "Computational Complexity", "Logic", "Satisfiability", "Phase Transitions" ]
I acknowledge that I and all co-authors of this work have read and commit to adhering to the COLM Code of Ethics on https://colmweb.org/CoE.html
I certify that this submission complies with the submission instructions as described on https://colmweb.org/AuthorGuide.html
null
null
@inproceedings{ hazra2025have, title={Have Large Language Models Learned to Reason? A Characterization via 3-{SAT}}, author={RISHI HAZRA and Gabriele Venturato and Pedro Zuidberg Dos Martires and Luc De Raedt}, booktitle={Second Conference on Language Modeling}, year={2025}, url={https://openreview.net/forum?id=MPTlWIV...
hazra|have_large_language_models_learned_to_reason_a_characterization_via_3sat
null
null
null
null
null
Can Language Models Falsify? Evaluating Algorithmic Reasoning with Counterexample Creation
We test the abilities of models to find counterexamples automatically using code-execution, and this can be hard for reasoning models.
There is growing excitement about the potential of Language Models (LMs) to accelerate scientific discovery. *Falsifying* hypotheses is key to scientific progress, as it allows claims to be iteratively refined over time. This process requires significant researcher effort, reasoning, and ingenuity. Yet current benchmar...
[ "Shiven Sinha", "Shashwat Goel", "Ponnurangam Kumaraguru", "Jonas Geiping", "Matthias Bethge", "Ameya Prabhu" ]
https://openreview.net/forum?id=M7cl4Ldw61
M7cl4Ldw61
M7cl4Ldw61
[ "~Shiven_Sinha1", "~Shashwat_Goel1", "~Ponnurangam_Kumaraguru3", "~Jonas_Geiping1", "~Matthias_Bethge1", "~Ameya_Prabhu1" ]
{ "value": "COLM 2025" }
{ "value": "colmweb.org/COLM/2025/Conference" }
{ "value": "/pdf/ccb87b304e168880bd2862278e73f80477f11cfe.pdf" }
conference
colmweb.org/COLM/2025/Conference
2,025
COLM
[ "code", "self-repair", "falsification" ]
I acknowledge that I and all co-authors of this work have read and commit to adhering to the COLM Code of Ethics on https://colmweb.org/CoE.html
I certify that this submission complies with the submission instructions as described on https://colmweb.org/AuthorGuide.html
null
null
@inproceedings{ sinha2025can, title={Can Language Models Falsify? Evaluating Algorithmic Reasoning with Counterexample Creation}, author={Shiven Sinha and Shashwat Goel and Ponnurangam Kumaraguru and Jonas Geiping and Matthias Bethge and Ameya Prabhu}, booktitle={Second Conference on Language Modeling}, year={2025}, ur...
sinha|can_language_models_falsify_evaluating_algorithmic_reasoning_with_counterexample_creation
/attachment/8be7e705cc31c530663056dbb1c28e95a1c2dd87.zip
null
null
null
null
Multi-Agent Verification: Scaling Test-Time Compute with Multiple Verifiers
We explore scaling the number of verifier models as a novel test-time scaling dimension for improving language model performance and introduce an algorithm that enables simple scaling along this dimension.
By utilizing more computational resources at test-time, large language models (LLMs) can improve without additional training. One common strategy uses *verifiers* to evaluate candidate outputs. In this work, we propose a novel scaling dimension for test-time compute: *scaling the number of verifier models*. We introduc...
[ "Shalev Lifshitz", "Sheila A. McIlraith", "Yilun Du" ]
https://openreview.net/forum?id=LriQ3NY9uL
LriQ3NY9uL
LriQ3NY9uL
[ "~Shalev_Lifshitz1", "~Sheila_A._McIlraith1", "~Yilun_Du1" ]
{ "value": "COLM 2025" }
{ "value": "colmweb.org/COLM/2025/Conference" }
{ "value": "/pdf/7b678a8c5a2418eb04bf51471bd5fe69d40500f5.pdf" }
conference
colmweb.org/COLM/2025/Conference
2,025
COLM
[ "large language models", "test-time compute", "verification", "scaling" ]
I acknowledge that I and all co-authors of this work have read and commit to adhering to the COLM Code of Ethics on https://colmweb.org/CoE.html
I certify that this submission complies with the submission instructions as described on https://colmweb.org/AuthorGuide.html
null
null
@inproceedings{ lifshitz2025multiagent, title={Multi-Agent Verification: Scaling Test-Time Compute with Multiple Verifiers}, author={Shalev Lifshitz and Sheila A. McIlraith and Yilun Du}, booktitle={Second Conference on Language Modeling}, year={2025}, url={https://openreview.net/forum?id=LriQ3NY9uL} }
lifshitz|multiagent_verification_scaling_testtime_compute_with_multiple_verifiers
null
null
null
null
null
Language Agents Mirror Human Causal Reasoning Biases. How Can We Help Them Think Like Scientists?
Language model agents exhibit human-like reasoning biases, leading them to arrive at incorrect conclusions of causal relationships
Language model (LM) agents are increasingly used as autonomous decision-makers which need to actively gather information to guide their decisions. A crucial cognitive skill for such agents is the efficient exploration and understanding of the causal structure of the world—key to robust, scientifically grounded reasonin...
[ "Anthony GX-Chen", "Dongyan Lin", "Mandana Samiei", "Doina Precup", "Blake Aaron Richards", "Rob Fergus", "Kenneth Marino" ]
https://openreview.net/forum?id=LKINTp7Gdo
LKINTp7Gdo
LKINTp7Gdo
[ "~Anthony_GX-Chen1", "~Dongyan_Lin1", "~Mandana_Samiei1", "~Doina_Precup1", "~Blake_Aaron_Richards1", "~Rob_Fergus1", "~Kenneth_Marino1" ]
{ "value": "COLM 2025" }
{ "value": "colmweb.org/COLM/2025/Conference" }
{ "value": "/pdf/9703b61f95dd7c53d48028df86babc548ee88ee8.pdf" }
conference
colmweb.org/COLM/2025/Conference
2,025
COLM
[ "agents", "decision making", "exploration", "cognitive science", "causal inference" ]
I acknowledge that I and all co-authors of this work have read and commit to adhering to the COLM Code of Ethics on https://colmweb.org/CoE.html
I certify that this submission complies with the submission instructions as described on https://colmweb.org/AuthorGuide.html
null
null
@inproceedings{ gx-chen2025language, title={Language Agents Mirror Human Causal Reasoning Biases. How Can We Help Them Think Like Scientists?}, author={Anthony GX-Chen and Dongyan Lin and Mandana Samiei and Doina Precup and Blake Aaron Richards and Rob Fergus and Kenneth Marino}, booktitle={Second Conference on Languag...
gxchen|language_agents_mirror_human_causal_reasoning_biases_how_can_we_help_them_think_like_scientists
null
null
null
null
null
Data-Centric Human Preference with Rationales for Direct Preference Alignment
We introduce rationales to boost learning from provided human preference pairs in direct preference training.
Aligning language models with human preferences through reinforcement learning from human feedback is crucial for their safe and effective deployment. The human preference is typically represented through comparison where one response is chosen over another for a given prompt. However, standard preference datasets oft...
[ "Hoang Anh Just", "Ming Jin", "Anit Kumar Sahu", "Huy Phan", "Ruoxi Jia" ]
https://openreview.net/forum?id=LH2ZKviJoI
LH2ZKviJoI
LH2ZKviJoI
[ "~Hoang_Anh_Just1", "~Ming_Jin1", "~Anit_Kumar_Sahu1", "~Huy_Phan2", "~Ruoxi_Jia1" ]
{ "value": "COLM 2025" }
{ "value": "colmweb.org/COLM/2025/Conference" }
{ "value": "/pdf/c4624a7b191f2cf23500caabb1557300455d715c.pdf" }
conference
colmweb.org/COLM/2025/Conference
2,025
COLM
[ "data-centric AI", "rationales" ]
I acknowledge that I and all co-authors of this work have read and commit to adhering to the COLM Code of Ethics on https://colmweb.org/CoE.html
I certify that this submission complies with the submission instructions as described on https://colmweb.org/AuthorGuide.html
null
null
@inproceedings{ just2025datacentric, title={Data-Centric Human Preference with Rationales for Direct Preference Alignment}, author={Hoang Anh Just and Ming Jin and Anit Kumar Sahu and Huy Phan and Ruoxi Jia}, booktitle={Second Conference on Language Modeling}, year={2025}, url={https://openreview.net/forum?id=LH2ZKviJo...
just|datacentric_human_preference_with_rationales_for_direct_preference_alignment
null
null
null
null
null
SlowFast-LLaVA-1.5: A Family of Token-Efficient Video Large Language Models for Long-Form Video Understanding
We introduce SlowFast-LLaVA-1.5, a family of video large language models offering a token-efficient solution for long-form video understanding.
We introduce SlowFast-LLaVA-1.5 (abbreviated as SF-LLaVA-1.5), a family of video large language models (LLMs) offering a token-efficient solution for long-form video understanding. We incorporate the two-stream SlowFast mechanism into a streamlined training pipeline, and perform joint video-image training on a carefull...
[ "Mingze Xu", "Mingfei Gao", "Shiyu Li", "Jiasen Lu", "Zhe Gan", "Zhengfeng Lai", "Meng Cao", "Kai Kang", "Yinfei Yang", "Afshin Dehghan" ]
https://openreview.net/forum?id=L7jS3peM3w
L7jS3peM3w
L7jS3peM3w
[ "~Mingze_Xu2", "~Mingfei_Gao1", "~Shiyu_Li2", "~Jiasen_Lu2", "~Zhe_Gan1", "~Zhengfeng_Lai1", "~Meng_Cao2", "~Kai_Kang2", "~Yinfei_Yang1", "~Afshin_Dehghan5" ]
{ "value": "COLM 2025" }
{ "value": "colmweb.org/COLM/2025/Conference" }
{ "value": "/pdf/15abd95f797d26263ca04908ab5ee202cab985a0.pdf" }
conference
colmweb.org/COLM/2025/Conference
2,025
COLM
[ "Multimodal LLM", "Video Understanding", "Video Question Answering", "Long-Form Video Understanding" ]
I acknowledge that I and all co-authors of this work have read and commit to adhering to the COLM Code of Ethics on https://colmweb.org/CoE.html
I certify that this submission complies with the submission instructions as described on https://colmweb.org/AuthorGuide.html
null
null
@inproceedings{ xu2025slowfastllava, title={SlowFast-{LL}a{VA}-1.5: A Family of Token-Efficient Video Large Language Models for Long-Form Video Understanding}, author={Mingze Xu and Mingfei Gao and Shiyu Li and Jiasen Lu and Zhe Gan and Zhengfeng Lai and Meng Cao and Kai Kang and Yinfei Yang and Afshin Dehghan}, bookti...
xu|slowfastllava15_a_family_of_tokenefficient_video_large_language_models_for_longform_video_understanding
null
null
null
null
null
AIOS: LLM Agent Operating System
Design and implement a novel infrastructure for serving agents
LLM-based intelligent agents face significant deployment challenges, particularly related to resource management. Allowing unrestricted access to LLM or tool resources can lead to inefficient or even potentially harmful resource allocation and utilization for agents. Furthermore, the absence of proper scheduling and r...
[ "Kai Mei", "Xi Zhu", "Wujiang Xu", "Mingyu Jin", "Wenyue Hua", "Zelong Li", "Shuyuan Xu", "Ruosong Ye", "Yingqiang Ge", "Yongfeng Zhang" ]
https://openreview.net/forum?id=L4HHkCDz2x
L4HHkCDz2x
L4HHkCDz2x
[ "~Kai_Mei1", "~Xi_Zhu2", "~Wujiang_Xu1", "~Mingyu_Jin1", "~Wenyue_Hua1", "~Zelong_Li1", "~Shuyuan_Xu1", "~Ruosong_Ye1", "~Yingqiang_Ge1", "~Yongfeng_Zhang1" ]
{ "value": "COLM 2025" }
{ "value": "colmweb.org/COLM/2025/Conference" }
{ "value": "/pdf/985019bb0dc542ae8a26b7f9997df9cd05f2b4ec.pdf" }
conference
colmweb.org/COLM/2025/Conference
2,025
COLM
[ "Large Language Model", "LLM Agent" ]
I acknowledge that I and all co-authors of this work have read and commit to adhering to the COLM Code of Ethics on https://colmweb.org/CoE.html
I certify that this submission complies with the submission instructions as described on https://colmweb.org/AuthorGuide.html
null
null
@inproceedings{ mei2025aios, title={{AIOS}: {LLM} Agent Operating System}, author={Kai Mei and Xi Zhu and Wujiang Xu and Mingyu Jin and Wenyue Hua and Zelong Li and Shuyuan Xu and Ruosong Ye and Yingqiang Ge and Yongfeng Zhang}, booktitle={Second Conference on Language Modeling}, year={2025}, url={https://openreview.ne...
mei|aios_llm_agent_operating_system
null
null
null
null
null
In-context Ranking Preference Optimization
We introduce IRPO, a framework that optimizes LLMs using natural, in-context ranking feedback to enhance ranking quality while reducing computational cost.
Recent developments in Direct Preference Optimization (DPO) allow large language models (LLMs) to function as implicit ranking models by maximizing the margin between preferred and non-preferred responses. In practice, user feedback on such lists typically involves identifying a few relevant items in context rather tha...
[ "Junda Wu", "Rohan Surana", "Zhouhang Xie", "Yiran Shen", "Yu Xia", "Tong Yu", "Ryan A. Rossi", "Prithviraj Ammanabrolu", "Julian McAuley" ]
https://openreview.net/forum?id=L2NPhLAKEd
L2NPhLAKEd
L2NPhLAKEd
[ "~Junda_Wu1", "~Rohan_Surana1", "~Zhouhang_Xie1", "~Yiran_Shen2", "~Yu_Xia9", "~Tong_Yu3", "~Ryan_A._Rossi2", "~Prithviraj_Ammanabrolu1", "~Julian_McAuley1" ]
{ "value": "COLM 2025" }
{ "value": "colmweb.org/COLM/2025/Conference" }
{ "value": "/pdf/60089be2b3c493053166fa47885cbbe7dfd578c6.pdf" }
conference
colmweb.org/COLM/2025/Conference
2,025
COLM
[ "direct preference optimization", "large language model" ]
I acknowledge that I and all co-authors of this work have read and commit to adhering to the COLM Code of Ethics on https://colmweb.org/CoE.html
I certify that this submission complies with the submission instructions as described on https://colmweb.org/AuthorGuide.html
null
null
@inproceedings{ wu2025incontext, title={In-context Ranking Preference Optimization}, author={Junda Wu and Rohan Surana and Zhouhang Xie and Yiran Shen and Yu Xia and Tong Yu and Ryan A. Rossi and Prithviraj Ammanabrolu and Julian McAuley}, booktitle={Second Conference on Language Modeling}, year={2025}, url={https://op...
wu|incontext_ranking_preference_optimization
null
null
null
null
null
MSRS: Evaluating Multi-Source Retrieval-Augmented Generation
This paper introduces a scalable framework for constructing evaluation benchmarks that challenge RAG systems to integrate information across distinct sources and generate long-form responses.
Retrieval-augmented systems are typically evaluated in settings where information required to answer the query can be found within a single source or the answer is short-form or factoid-based. However, many real-world applications demand the ability to integrate and summarize information scattered across multiple sourc...
[ "Rohan Phanse", "Ej Zhou", "Kejian Shi", "WENCAI ZHANG", "Yixin Liu", "Yilun Zhao", "Arman Cohan" ]
https://openreview.net/forum?id=KtGsJm8bOC
KtGsJm8bOC
KtGsJm8bOC
[ "~Rohan_Phanse1", "~Ej_Zhou1", "~Kejian_Shi2", "~WENCAI_ZHANG1", "~Yixin_Liu2", "~Yilun_Zhao1", "~Arman_Cohan1" ]
{ "value": "COLM 2025" }
{ "value": "colmweb.org/COLM/2025/Conference" }
{ "value": "/pdf/9e9b1b9fad20b01f4b0cdee3401c415981cb042a.pdf" }
conference
colmweb.org/COLM/2025/Conference
2,025
COLM
[ "Retrieval-Augmented Generation", "Retrieval", "Summarization", "Evaluation" ]
I acknowledge that I and all co-authors of this work have read and commit to adhering to the COLM Code of Ethics on https://colmweb.org/CoE.html
I certify that this submission complies with the submission instructions as described on https://colmweb.org/AuthorGuide.html
null
null
@inproceedings{ phanse2025msrs, title={{MSRS}: Benchmarking Multi-Source Retrieval-Augmented Generation}, author={Rohan Phanse and Ej Zhou and Kejian Shi and WENCAI ZHANG and Yixin Liu and Yilun Zhao and Arman Cohan}, booktitle={Second Conference on Language Modeling}, year={2025}, url={https://openreview.net/forum?id=...
phanse|msrs_evaluating_multisource_retrievalaugmented_generation
null
null
null
null
null
Not All Data Are Unlearned Equally
We investigate how data frequency and model scale affect the feasibility of gradient based unlearning
Machine unlearning is concerned with the task of removing knowledge learned from particular data points from a trained model. In the context of large language models (LLMs), unlearning has recently received increased attention, particularly for removing knowledge about named entities from models for privacy purposes. W...
[ "Aravind Krishnan", "Siva Reddy", "Marius Mosbach" ]
https://openreview.net/forum?id=Kd97lfFfTu
Kd97lfFfTu
Kd97lfFfTu
[ "~Aravind_Krishnan1", "~Siva_Reddy1", "~Marius_Mosbach1" ]
{ "value": "COLM 2025" }
{ "value": "colmweb.org/COLM/2025/Conference" }
{ "value": "/pdf/ad8913eceb4cf60ba758bd7f69cd282e564dc397.pdf" }
conference
colmweb.org/COLM/2025/Conference
2,025
COLM
[ "LLM unlearning", "analysis", "frequency" ]
I acknowledge that I and all co-authors of this work have read and commit to adhering to the COLM Code of Ethics on https://colmweb.org/CoE.html
I certify that this submission complies with the submission instructions as described on https://colmweb.org/AuthorGuide.html
null
null
@inproceedings{ krishnan2025not, title={Not All Data Are Unlearned Equally}, author={Aravind Krishnan and Siva Reddy and Marius Mosbach}, booktitle={Second Conference on Language Modeling}, year={2025}, url={https://openreview.net/forum?id=Kd97lfFfTu} }
krishnan|not_all_data_are_unlearned_equally
null
null
null
null
null
Planted in Pretraining, Swayed by Finetuning: A Case Study on the Origins of Cognitive Biases in LLMs
This work studies the origins of cognitive biases in LLMs using causal experiments and shows that while biases are slightly affected by instruction-tuning randomness, they primarily stem from pretraining.
Large language models (LLMs) exhibit cognitive biases -- systematic tendencies of irrational decision-making, similar to those seen in humans. Prior work has found that these biases vary across models and can be amplified by instruction tuning. However, it remains unclear if these differences in biases stem from pretra...
[ "Itay Itzhak", "Yonatan Belinkov", "Gabriel Stanovsky" ]
https://openreview.net/forum?id=KQhUEoPmJy
KQhUEoPmJy
KQhUEoPmJy
[ "~Itay_Itzhak1", "~Yonatan_Belinkov1", "~Gabriel_Stanovsky1" ]
{ "value": "COLM 2025" }
{ "value": "colmweb.org/COLM/2025/Conference" }
{ "value": "/pdf/15e8a81f3173cf11f46a82a010f9ced451f97b73.pdf" }
conference
colmweb.org/COLM/2025/Conference
2,025
COLM
[ "bias", "cognitive biases", "large language models", "LLM biases", "bias analysis", "instruction tuning", "pretraining biases", "causal experiments", "bias evaluation", "machine learning biases", "model robustness", "language model interpretability", "bias sources" ]
I acknowledge that I and all co-authors of this work have read and commit to adhering to the COLM Code of Ethics on https://colmweb.org/CoE.html
I certify that this submission complies with the submission instructions as described on https://colmweb.org/AuthorGuide.html
null
null
@inproceedings{ itzhak2025planted, title={Planted in Pretraining, Swayed by Finetuning: A Case Study on the Origins of Cognitive Biases in {LLM}s}, author={Itay Itzhak and Yonatan Belinkov and Gabriel Stanovsky}, booktitle={Second Conference on Language Modeling}, year={2025}, url={https://openreview.net/forum?id=KQhUE...
itzhak|planted_in_pretraining_swayed_by_finetuning_a_case_study_on_the_origins_of_cognitive_biases_in_llms
null
null
null
null
null