Dataset schema (field name, type, and observed value stats from the dump):

- title: string (length 21–128)
- content_TLDR: string (length 40–250)
- abstract: string (length 613–2.09k)
- authors: list (length 1–42)
- openreview_url: string (length 42)
- id: string (length 10)
- forum: string (length 10)
- authorids: list (length 1–42)
- venue: dict
- venueid: dict
- pdf_url: dict
- invitation: string (1 class)
- group: string (1 class)
- venue_name: string (1 class)
- year: int64 (2025 for all records)
- conference: string (1 class)
- content_keywords: list (length 1–16)
- content_code_of_ethics: string (1 class)
- content_author_guide: string (1 class)
- content_flagged_for_ethics_review: bool (1 class)
- content_ethics_comments: string (11 classes)
- content__bibtex: string (length 246–1.01k)
- content_paperhash: string (length 29–134)
- content_supplementary_material: string (73 classes)
- content_award_nomination: bool (1 class)
- content_reciprocal_reviewing_status: string (1 class)
- content_reciprocal_reviewing_author: string (4 classes)
- content_reciprocal_reviewing_exemption_reason: dict
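The records below follow this schema. As an illustrative sketch only (the dataset's actual name and storage format are not given here; JSON-style rows and the `validate` helper are assumptions for illustration), one record could be represented and minimally checked like this:

```python
import json

# Illustrative record built from the first row below; storage as
# JSON/JSONL is an assumption, not stated in the dump itself.
record_json = """
{
  "title": "Open-Qwen2VL: Compute-Efficient Pre-Training of Fully-Open Multimodal LLMs on Academic Resources",
  "id": "nVQmW1af6j",
  "forum": "nVQmW1af6j",
  "openreview_url": "https://openreview.net/forum?id=nVQmW1af6j",
  "venue": {"value": "COLM 2025"},
  "year": 2025,
  "authors": ["Weizhi Wang", "Yu Tian", "Linjie Yang", "Heng Wang", "Xifeng Yan"]
}
"""

REQUIRED_STRING_FIELDS = ["title", "id", "forum", "openreview_url"]

def validate(rec: dict) -> list:
    """Return a list of schema violations (empty means the record passes)."""
    errors = []
    for field in REQUIRED_STRING_FIELDS:
        if not isinstance(rec.get(field), str):
            errors.append(f"{field}: expected string")
    # Per the schema stats, OpenReview ids are 10-character strings.
    if isinstance(rec.get("id"), str) and len(rec["id"]) != 10:
        errors.append("id: expected length 10")
    if not isinstance(rec.get("authors"), list) or not rec["authors"]:
        errors.append("authors: expected non-empty list")
    if not isinstance(rec.get("year"), int):
        errors.append("year: expected int")
    return errors

rec = json.loads(record_json)
print(validate(rec))  # prints [] — the sample record is well-formed
```

Fields such as `venue` and `pdf_url` are dicts wrapping a single `value` key (e.g. `{"value": "COLM 2025"}`), as seen in the rows below.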
Open-Qwen2VL: Compute-Efficient Pre-Training of Fully-Open Multimodal LLMs on Academic Resources
Open-Qwen2VL is a fully open-source multimodal LLM which is trained on only 5B high-quality multimodal tokens but outperforms Qwen2-VL, trained on 1.4T tokens, on various benchmarks.
The reproduction of state-of-the-art multimodal LLM pre-training faces barriers at every stage of the pipeline, including high-quality data filtering, multimodal data mixture strategies, sequence packing techniques, and training frameworks. We introduce Open-Qwen2VL, a fully open-source 2B-parameter Multimodal Large La...
[ "Weizhi Wang", "Yu Tian", "Linjie Yang", "Heng Wang", "Xifeng Yan" ]
https://openreview.net/forum?id=nVQmW1af6j
nVQmW1af6j
nVQmW1af6j
[ "~Weizhi_Wang1", "~Yu_Tian4", "~Linjie_Yang4", "~Heng_Wang2", "~Xifeng_Yan1" ]
{ "value": "COLM 2025" }
{ "value": "colmweb.org/COLM/2025/Conference" }
{ "value": "/pdf/ee174c9b1df40753c61805a156d2891016605052.pdf" }
conference
colmweb.org/COLM/2025/Conference
2025
COLM
[ "multimodal large language model", "efficient pre-training", "high quality image-text filtering" ]
I acknowledge that I and all co-authors of this work have read and commit to adhering to the COLM Code of Ethics on https://colmweb.org/CoE.html
I certify that this submission complies with the submission instructions as described on https://colmweb.org/AuthorGuide.html
null
null
@inproceedings{ wang2025openqwenvl, title={Open-Qwen2{VL}: Compute-Efficient Pre-Training of Fully-Open Multimodal {LLM}s on Academic Resources}, author={Weizhi Wang and Yu Tian and Linjie Yang and Heng Wang and Xifeng Yan}, booktitle={Second Conference on Language Modeling}, year={2025}, url={https://openreview.net/fo...
wang|openqwen2vl_computeefficient_pretraining_of_fullyopen_multimodal_llms_on_academic_resources
null
null
null
null
null
Plancraft: an evaluation dataset for planning with LLM agents
We introduce Plancraft, a multi-modal evaluation dataset for LLMs, designed to assess tool use, planning, and decision-making with both solvable and unsolvable examples.
We present Plancraft, a multi-modal evaluation dataset for LLM agents. Plancraft has both a text-only and multi-modal interface, based on the Minecraft crafting GUI. We include the Minecraft Wiki to evaluate tool use and Retrieval Augmented Generation (RAG), as well as a handcrafted planner and Oracle Retriever, to abl...
[ "Gautier Dagan", "Frank Keller", "Alex Lascarides" ]
https://openreview.net/forum?id=nSV8Depcpx
nSV8Depcpx
nSV8Depcpx
[ "~Gautier_Dagan1", "~Frank_Keller1", "~Alex_Lascarides1" ]
{ "value": "COLM 2025" }
{ "value": "colmweb.org/COLM/2025/Conference" }
{ "value": "/pdf/9dbb6b20cff2ad8aca8728850e46fd135d8a2353.pdf" }
conference
colmweb.org/COLM/2025/Conference
2025
COLM
[ "planning", "multi-modal", "agents", "LLMs", "tool use", "minecraft", "RAG" ]
I acknowledge that I and all co-authors of this work have read and commit to adhering to the COLM Code of Ethics on https://colmweb.org/CoE.html
I certify that this submission complies with the submission instructions as described on https://colmweb.org/AuthorGuide.html
null
null
@inproceedings{ dagan2025plancraft, title={Plancraft: an evaluation dataset for planning with {LLM} agents}, author={Gautier Dagan and Frank Keller and Alex Lascarides}, booktitle={Second Conference on Language Modeling}, year={2025}, url={https://openreview.net/forum?id=nSV8Depcpx} }
dagan|plancraft_an_evaluation_dataset_for_planning_with_llm_agents
/attachment/73266a3ae8ba8b61a11ec99084fc33ef47b66c57.zip
null
null
null
null
Teaching Models to Understand (but not Generate) High-risk Data
A new pre-training paradigm that enables language models to understand high-risk data without learning to generate it.
Language model developers typically filter out high-risk content—such as toxic or copyrighted text—from their pre-training data to prevent models from generating similar outputs. However, removing such data altogether limits models’ ability to recognize and appropriately respond to harmful or sensitive content. In this...
[ "Ryan Yixiang Wang", "Matthew Finlayson", "Luca Soldaini", "Swabha Swayamdipta", "Robin Jia" ]
https://openreview.net/forum?id=n6mTO5JS4j
n6mTO5JS4j
n6mTO5JS4j
[ "~Ryan_Yixiang_Wang1", "~Matthew_Finlayson1", "~Luca_Soldaini1", "~Swabha_Swayamdipta1", "~Robin_Jia1" ]
{ "value": "COLM 2025" }
{ "value": "colmweb.org/COLM/2025/Conference" }
{ "value": "/pdf/4c4b33c1dca8f5ac130e0bc60c949aed1976381b.pdf" }
conference
colmweb.org/COLM/2025/Conference
2025
COLM
[ "LLMs", "Language Models", "Pre-training Data" ]
I acknowledge that I and all co-authors of this work have read and commit to adhering to the COLM Code of Ethics on https://colmweb.org/CoE.html
I certify that this submission complies with the submission instructions as described on https://colmweb.org/AuthorGuide.html
null
null
@inproceedings{ wang2025teaching, title={Teaching Models to Understand (but not Generate) High-risk Data}, author={Ryan Yixiang Wang and Matthew Finlayson and Luca Soldaini and Swabha Swayamdipta and Robin Jia}, booktitle={Second Conference on Language Modeling}, year={2025}, url={https://openreview.net/forum?id=n6mTO5...
wang|teaching_models_to_understand_but_not_generate_highrisk_data
null
null
null
null
null
Defending LLM Watermarking Against Spoofing Attacks with Contrastive Representation Learning
The paper proposes a semantic-aware watermarking method that defends against spoofing attacks by using a contrastively trained semantic mapping model to detect semantic distortions.
Watermarking has emerged as a promising technique for detecting texts generated by LLMs. Current research has primarily focused on three design criteria -- high quality of the watermarked text, high detectability, and robustness against removal attack. However, the security against spoofing attacks remains relatively u...
[ "Li An", "Yujian Liu", "Yepeng Liu", "Yang Zhang", "Yuheng Bu", "Shiyu Chang" ]
https://openreview.net/forum?id=n5hmtkdl7k
n5hmtkdl7k
n5hmtkdl7k
[ "~Li_An3", "~Yujian_Liu1", "~Yepeng_Liu1", "~Yang_Zhang3", "~Yuheng_Bu1", "~Shiyu_Chang2" ]
{ "value": "COLM 2025" }
{ "value": "colmweb.org/COLM/2025/Conference" }
{ "value": "/pdf/3244e3dcdb7ec5c678906e32428bd780da24db80.pdf" }
conference
colmweb.org/COLM/2025/Conference
2025
COLM
[ "LLM watermarking", "Spoofing attack", "Piggyback Spoofing attack" ]
I acknowledge that I and all co-authors of this work have read and commit to adhering to the COLM Code of Ethics on https://colmweb.org/CoE.html
I certify that this submission complies with the submission instructions as described on https://colmweb.org/AuthorGuide.html
null
null
@inproceedings{ an2025defending, title={Defending {LLM} Watermarking Against Spoofing Attacks with Contrastive Representation Learning}, author={Li An and Yujian Liu and Yepeng Liu and Yang Zhang and Yuheng Bu and Shiyu Chang}, booktitle={Second Conference on Language Modeling}, year={2025}, url={https://openreview.net...
an|defending_llm_watermarking_against_spoofing_attacks_with_contrastive_representation_learning
null
null
null
null
null
True Multimodal In-Context Learning Needs Attention to the Visual Context
We propose both a Dynamic Attention Reallocation tuning algorithm and a dedicated dataset to improve the true MICL ability of MLLMs
Multimodal Large Language Models (MLLMs), built on powerful language backbones, have enabled Multimodal In-Context Learning (MICL)—adapting to new tasks from a few multimodal demonstrations consisting of images, questions, and answers. Despite showing noticeable improvement on standard vision-language datasets, current...
[ "Shuo Chen", "Jianzhe Liu", "Zhen Han", "Yan Xia", "Daniel Cremers", "Philip Torr", "Volker Tresp", "Jindong Gu" ]
https://openreview.net/forum?id=n4JdyBGu6T
n4JdyBGu6T
n4JdyBGu6T
[ "~Shuo_Chen12", "~Jianzhe_Liu1", "~Zhen_Han3", "~Yan_Xia5", "~Daniel_Cremers1", "~Philip_Torr1", "~Volker_Tresp1", "~Jindong_Gu1" ]
{ "value": "COLM 2025" }
{ "value": "colmweb.org/COLM/2025/Conference" }
{ "value": "/pdf/da3367f29bfb2c448b1ae58988478c487bd06b7c.pdf" }
conference
colmweb.org/COLM/2025/Conference
2025
COLM
[ "Multimodal In-Context Learning", "Multimodal Large Language Models", "Vision Language Models" ]
I acknowledge that I and all co-authors of this work have read and commit to adhering to the COLM Code of Ethics on https://colmweb.org/CoE.html
I certify that this submission complies with the submission instructions as described on https://colmweb.org/AuthorGuide.html
null
null
@inproceedings{ chen2025true, title={True Multimodal In-Context Learning Needs Attention to the Visual Context}, author={Shuo Chen and Jianzhe Liu and Zhen Han and Yan Xia and Daniel Cremers and Philip Torr and Volker Tresp and Jindong Gu}, booktitle={Second Conference on Language Modeling}, year={2025}, url={https://o...
chen|true_multimodal_incontext_learning_needs_attention_to_the_visual_context
null
null
null
null
null
Mixture of Attention Spans: Optimizing LLM Inference Efficiency with Heterogeneous Sliding-Window Lengths
We design heterogeneous, elastic rules for attention sliding-window lengths to make large language models more efficient
Sliding-window attention offers a hardware-efficient solution to the memory and throughput challenges of Large Language Models (LLMs) in long-context scenarios. Existing methods typically employ a single window length across all attention heads and input sizes. However, this uniform approach fails to capture the hetero...
[ "Tianyu Fu", "Haofeng Huang", "Xuefei Ning", "Genghan Zhang", "Boju Chen", "Tianqi Wu", "Hongyi Wang", "Zixiao Huang", "Shiyao Li", "Shengen Yan", "Guohao Dai", "Huazhong Yang", "Yu Wang" ]
https://openreview.net/forum?id=n3rZJrWPLE
n3rZJrWPLE
n3rZJrWPLE
[ "~Tianyu_Fu3", "~Haofeng_Huang3", "~Xuefei_Ning1", "~Genghan_Zhang1", "~Boju_Chen1", "~Tianqi_Wu2", "~Hongyi_Wang8", "~Zixiao_Huang2", "~Shiyao_Li2", "~Shengen_Yan1", "~Guohao_Dai4", "~Huazhong_Yang2", "~Yu_Wang3" ]
{ "value": "COLM 2025" }
{ "value": "colmweb.org/COLM/2025/Conference" }
{ "value": "/pdf/c9da10b21088f5f6d7b33ab09e49c617dd22c0c3.pdf" }
conference
colmweb.org/COLM/2025/Conference
2025
COLM
[ "Efficient Attention", "Sparse Attention", "KV Cache Management", "Large Language Models", "Efficiency" ]
I acknowledge that I and all co-authors of this work have read and commit to adhering to the COLM Code of Ethics on https://colmweb.org/CoE.html
I certify that this submission complies with the submission instructions as described on https://colmweb.org/AuthorGuide.html
null
null
@inproceedings{ fu2025mixture, title={Mixture of Attention Spans: Optimizing {LLM} Inference Efficiency with Heterogeneous Sliding-Window Lengths}, author={Tianyu Fu and Haofeng Huang and Xuefei Ning and Genghan Zhang and Boju Chen and Tianqi Wu and Hongyi Wang and Zixiao Huang and Shiyao Li and Shengen Yan and Guohao ...
fu|mixture_of_attention_spans_optimizing_llm_inference_efficiency_with_heterogeneous_slidingwindow_lengths
null
null
null
null
null
Fluid Language Model Benchmarking
Fluid Benchmarking improves evaluation by adapting items to the capability level of language models.
Language model (LM) benchmarking faces several challenges: comprehensive evaluations are costly, benchmarks often fail to measure the intended capabilities, and evaluation quality can degrade due to labeling errors and benchmark saturation. Although various strategies have been proposed to mitigate these issues, they t...
[ "Valentin Hofmann", "David Heineman", "Ian Magnusson", "Kyle Lo", "Jesse Dodge", "Maarten Sap", "Pang Wei Koh", "Chun Wang", "Hannaneh Hajishirzi", "Noah A. Smith" ]
https://openreview.net/forum?id=mxcCg9YRqj
mxcCg9YRqj
mxcCg9YRqj
[ "~Valentin_Hofmann1", "~David_Heineman1", "~Ian_Magnusson1", "~Kyle_Lo1", "~Jesse_Dodge1", "~Maarten_Sap1", "~Pang_Wei_Koh1", "~Chun_Wang8", "~Hannaneh_Hajishirzi1", "~Noah_A._Smith2" ]
{ "value": "COLM 2025" }
{ "value": "colmweb.org/COLM/2025/Conference" }
{ "value": "/pdf/998cc5e4815d857cf1218ede711a4cc0e044ac3d.pdf" }
conference
colmweb.org/COLM/2025/Conference
2025
COLM
[ "language models", "evaluation", "item response theory", "efficiency", "robustness" ]
I acknowledge that I and all co-authors of this work have read and commit to adhering to the COLM Code of Ethics on https://colmweb.org/CoE.html
I certify that this submission complies with the submission instructions as described on https://colmweb.org/AuthorGuide.html
null
null
@inproceedings{ hofmann2025fluid, title={Fluid Language Model Benchmarking}, author={Valentin Hofmann and David Heineman and Ian Magnusson and Kyle Lo and Jesse Dodge and Maarten Sap and Pang Wei Koh and Chun Wang and Hannaneh Hajishirzi and Noah A. Smith}, booktitle={Second Conference on Language Modeling}, year={2025...
hofmann|fluid_language_model_benchmarking
null
true
This submission is NOT exempt from the Reciprocal Reviewing requirement. (We expect most submissions to fall in this category.)
~Valentin_Hofmann1
{ "readers": [ "colmweb.org/COLM/2025/Conference", "colmweb.org/COLM/2025/Conference/Submission1586/Authors" ] }
Rethinking Multilingual Continual Pretraining: Data Mixing for Adapting LLMs Across Languages and Resources
This study evaluates 36 continual pretraining configurations across three multilingual LLMs and 30+ languages, analyzing their effects across resource levels and language behavior categories.
Large Language Models (LLMs) exhibit significant disparities in performance across languages, primarily benefiting high-resource languages while marginalizing underrepresented ones. Continual Pretraining (CPT) has emerged as a promising approach to address this imbalance, although the relative effectiveness of monoling...
[ "Zihao Li", "Shaoxiong Ji", "Hengyu Luo", "Jörg Tiedemann" ]
https://openreview.net/forum?id=mpTIzK4Zca
mpTIzK4Zca
mpTIzK4Zca
[ "~Zihao_Li13", "~Shaoxiong_Ji1", "~Hengyu_Luo1", "~Jörg_Tiedemann1" ]
{ "value": "COLM 2025" }
{ "value": "colmweb.org/COLM/2025/Conference" }
{ "value": "/pdf/574bedec3f3e2b12092d3af357d15c9c990d9a65.pdf" }
conference
colmweb.org/COLM/2025/Conference
2025
COLM
[ "Multilingual continual pretraining", "data mixing" ]
I acknowledge that I and all co-authors of this work have read and commit to adhering to the COLM Code of Ethics on https://colmweb.org/CoE.html
I certify that this submission complies with the submission instructions as described on https://colmweb.org/AuthorGuide.html
null
null
@inproceedings{ li2025rethinking, title={Rethinking Multilingual Continual Pretraining: Data Mixing for Adapting {LLM}s Across Languages and Resources}, author={Zihao Li and Shaoxiong Ji and Hengyu Luo and J{\"o}rg Tiedemann}, booktitle={Second Conference on Language Modeling}, year={2025}, url={https://openreview.net/...
li|rethinking_multilingual_continual_pretraining_data_mixing_for_adapting_llms_across_languages_and_resources
null
null
null
null
null
SPIN-Bench: How Well Do LLMs Plan Strategically and Reason Socially?
SPIN-Bench tests LLMs on strategic planning and social intelligence, showing they struggle with complex multi-agent scenarios requiring extended reasoning and negotiation.
Reasoning and strategic behavior in social interactions is a hallmark of intelligence. This form of reasoning is significantly more sophisticated than isolated planning or reasoning tasks in static settings (e.g., math problem solving). In this paper, we present Strategic Planning, Interaction, and Negotiation (SPIN-Be...
[ "Jianzhu Yao", "Kevin Wang", "Ryan Hsieh", "Haisu Zhou", "Tianqing Zou", "Zerui Cheng", "Zhangyang Wang", "Pramod Viswanath" ]
https://openreview.net/forum?id=mgsS73kvOA
mgsS73kvOA
mgsS73kvOA
[ "~Jianzhu_Yao1", "~Kevin_Wang4", "~Ryan_Hsieh2", "~Haisu_Zhou1", "~Tianqing_Zou1", "~Zerui_Cheng1", "~Zhangyang_Wang1", "~Pramod_Viswanath2" ]
{ "value": "COLM 2025" }
{ "value": "colmweb.org/COLM/2025/Conference" }
{ "value": "/pdf/527b4ef621918badbf88541f24daecd678a6888d.pdf" }
conference
colmweb.org/COLM/2025/Conference
2025
COLM
[ "Large Language Models", "Planning", "Game", "Benchmark", "Social Intelligence" ]
I acknowledge that I and all co-authors of this work have read and commit to adhering to the COLM Code of Ethics on https://colmweb.org/CoE.html
I certify that this submission complies with the submission instructions as described on https://colmweb.org/AuthorGuide.html
null
null
@inproceedings{ yao2025spinbench, title={{SPIN}-Bench: How Well Do {LLM}s Plan Strategically and Reason Socially?}, author={Jianzhu Yao and Kevin Wang and Ryan Hsieh and Haisu Zhou and Tianqing Zou and Zerui Cheng and Zhangyang Wang and Pramod Viswanath}, booktitle={Second Conference on Language Modeling}, year={2025},...
yao|spinbench_how_well_do_llms_plan_strategically_and_reason_socially
/attachment/b27f210e2adf29e68a923dd43e9fb24d62894bf2.zip
null
null
null
null
Improving Fisher Information Estimation and Efficiency for LoRA-based LLM Unlearning
We propose VILA, a scalable and efficient unlearning method for LLMs that addresses FILA's limitations by improving parameter importance estimation and reducing computational overhead.
LLMs have demonstrated remarkable performance across various tasks but face challenges related to unintentionally generating outputs containing sensitive information. A straightforward approach to address this issue is to retrain the model after excluding the problematic data. However, this approach incurs prohibitivel...
[ "Yejin Kim", "Eunwon Kim", "Buru Chang", "Junsuk Choe" ]
https://openreview.net/forum?id=mTJW8Y1nd8
mTJW8Y1nd8
mTJW8Y1nd8
[ "~Yejin_Kim7", "~Eunwon_Kim1", "~Buru_Chang1", "~Junsuk_Choe1" ]
{ "value": "COLM 2025" }
{ "value": "colmweb.org/COLM/2025/Conference" }
{ "value": "/pdf/8cc2caa1fac4c87a9c4e3277fd4b747ae7b3802f.pdf" }
conference
colmweb.org/COLM/2025/Conference
2025
COLM
[ "Machine Unlearning", "Large Language Model" ]
I acknowledge that I and all co-authors of this work have read and commit to adhering to the COLM Code of Ethics on https://colmweb.org/CoE.html
I certify that this submission complies with the submission instructions as described on https://colmweb.org/AuthorGuide.html
null
null
@inproceedings{ kim2025improving, title={Improving Fisher Information Estimation and Efficiency for Lo{RA}-based {LLM} Unlearning}, author={Yejin Kim and Eunwon Kim and Buru Chang and Junsuk Choe}, booktitle={Second Conference on Language Modeling}, year={2025}, url={https://openreview.net/forum?id=mTJW8Y1nd8} }
kim|improving_fisher_information_estimation_and_efficiency_for_lorabased_llm_unlearning
null
null
null
null
null
ICQuant: Index Coding enables Low-bit LLM Quantization
This paper presents ICQuant, a novel framework that leverages outlier statistics to design an efficient index coding scheme for low-bit weight-only quantization.
The rapid deployment of Large Language Models (LLMs) highlights the need for efficient low-bit post-training quantization (PTQ) due to their high memory costs. A key challenge in weight quantization is the presence of outliers, which inflate quantization ranges and lead to large errors. While a number of outlier suppre...
[ "Xinlin Li", "Osama Hanna", "Christina Fragouli", "Suhas Diggavi" ]
https://openreview.net/forum?id=m6nBgFSMTL
m6nBgFSMTL
m6nBgFSMTL
[ "~Xinlin_Li3", "~Osama_Hanna1", "~Christina_Fragouli1", "~Suhas_Diggavi1" ]
{ "value": "COLM 2025" }
{ "value": "colmweb.org/COLM/2025/Conference" }
{ "value": "/pdf/ed1fe4cea396c575853e69ba69b116e193e547d6.pdf" }
conference
colmweb.org/COLM/2025/Conference
2025
COLM
[ "LLM Quantization", "LLM Compression", "Post Training Quantization" ]
I acknowledge that I and all co-authors of this work have read and commit to adhering to the COLM Code of Ethics on https://colmweb.org/CoE.html
I certify that this submission complies with the submission instructions as described on https://colmweb.org/AuthorGuide.html
null
null
@inproceedings{ li2025icquant, title={{ICQ}uant: Index Coding enables Low-bit {LLM} Quantization}, author={Xinlin Li and Osama Hanna and Christina Fragouli and Suhas Diggavi}, booktitle={Second Conference on Language Modeling}, year={2025}, url={https://openreview.net/forum?id=m6nBgFSMTL} }
li|icquant_index_coding_enables_lowbit_llm_quantization
null
null
null
null
null
LLM Unlearning Without an Expert Curated Dataset
A synthetic forget set that replaces expert curation for LLM unlearning.
Modern large language models often encode sensitive, harmful, or copyrighted knowledge, raising the need for post-hoc unlearning—the ability to remove specific domains of knowledge from a model without full retraining. A major bottleneck in current unlearning pipelines is constructing effective forget sets—datasets tha...
[ "Xiaoyuan Zhu", "Muru Zhang", "Ollie Liu", "Robin Jia", "Willie Neiswanger" ]
https://openreview.net/forum?id=m4F3kQCfGX
m4F3kQCfGX
m4F3kQCfGX
[ "~Xiaoyuan_Zhu2", "~Muru_Zhang1", "~Ollie_Liu1", "~Robin_Jia1", "~Willie_Neiswanger2" ]
{ "value": "COLM 2025" }
{ "value": "colmweb.org/COLM/2025/Conference" }
{ "value": "/pdf/0a5710cc0d3e8a268ba4e8727366abd1bd3e6b14.pdf" }
conference
colmweb.org/COLM/2025/Conference
2025
COLM
[ "NLP", "Machine Unlearning", "Synthetic Data Generation" ]
I acknowledge that I and all co-authors of this work have read and commit to adhering to the COLM Code of Ethics on https://colmweb.org/CoE.html
I certify that this submission complies with the submission instructions as described on https://colmweb.org/AuthorGuide.html
null
null
@inproceedings{ zhu2025llm, title={{LLM} Unlearning Without an Expert Curated Dataset}, author={Xiaoyuan Zhu and Muru Zhang and Ollie Liu and Robin Jia and Willie Neiswanger}, booktitle={Second Conference on Language Modeling}, year={2025}, url={https://openreview.net/forum?id=m4F3kQCfGX} }
zhu|llm_unlearning_without_an_expert_curated_dataset
null
null
null
null
null
How does Watermarking Affect Visual Language Models in Document Understanding?
An investigation of the robustness of VLMs on document understanding tasks
Visual Language Models (VLMs) have become foundational models for document understanding tasks, widely used in the processing of complex multimodal documents across domains such as finance, law, and academia. However, documents often contain noise-like information, such as watermarks, which inevitably leads us to inqui...
[ "Chunxue Xu", "Yiwei Wang", "Bryan Hooi", "Yujun Cai", "Songze Li" ]
https://openreview.net/forum?id=lvQwn8eiRf
lvQwn8eiRf
lvQwn8eiRf
[ "~Chunxue_Xu1", "~Yiwei_Wang2", "~Bryan_Hooi1", "~Yujun_Cai1", "~Songze_Li1" ]
{ "value": "COLM 2025" }
{ "value": "colmweb.org/COLM/2025/Conference" }
{ "value": "/pdf/2d9289a5e67f0efe5aad593f326b2bedf28bf50f.pdf" }
conference
colmweb.org/COLM/2025/Conference
2025
COLM
[ "VLMs", "Document Understanding", "Robustness" ]
I acknowledge that I and all co-authors of this work have read and commit to adhering to the COLM Code of Ethics on https://colmweb.org/CoE.html
I certify that this submission complies with the submission instructions as described on https://colmweb.org/AuthorGuide.html
null
null
@inproceedings{ xu2025how, title={How does Watermarking Affect Visual Language Models in Document Understanding?}, author={Chunxue Xu and Yiwei Wang and Bryan Hooi and Yujun Cai and Songze Li}, booktitle={Second Conference on Language Modeling}, year={2025}, url={https://openreview.net/forum?id=lvQwn8eiRf} }
xu|how_does_watermarking_affect_visual_language_models_in_document_understanding
null
null
null
null
null
DynaSaur: Large Language Agents Beyond Predefined Actions
We propose a flexible LLM agent framework for open-ended environments, where it dynamically generates new actions when existing ones are insufficient. These actions accumulate over time for future reuse.
Existing LLM agent systems typically select actions from a fixed and predefined set at every step. While effective in closed, narrowly scoped environments, this approach presents two major challenges for real-world, open-ended scenarios: (1) it significantly restricts the planning and acting capabilities of LLM agents,...
[ "Dang Nguyen", "Viet Dac Lai", "Seunghyun Yoon", "Ryan A. Rossi", "Handong Zhao", "Ruiyi Zhang", "Puneet Mathur", "Nedim Lipka", "Yu Wang", "Trung Bui", "Franck Dernoncourt", "Tianyi Zhou" ]
https://openreview.net/forum?id=lv0cJ2pWVd
lv0cJ2pWVd
lv0cJ2pWVd
[ "~Dang_Nguyen3", "~Viet_Dac_Lai1", "~Seunghyun_Yoon1", "~Ryan_A._Rossi2", "~Handong_Zhao3", "~Ruiyi_Zhang3", "~Puneet_Mathur2", "~Nedim_Lipka1", "~Yu_Wang41", "~Trung_Bui1", "~Franck_Dernoncourt1", "~Tianyi_Zhou2" ]
{ "value": "COLM 2025" }
{ "value": "colmweb.org/COLM/2025/Conference" }
{ "value": "/pdf/9c0c2a1ca434928a59d8610c8e4f3cb8fe728cb9.pdf" }
conference
colmweb.org/COLM/2025/Conference
2025
COLM
[ "LLM", "LLM agents" ]
I acknowledge that I and all co-authors of this work have read and commit to adhering to the COLM Code of Ethics on https://colmweb.org/CoE.html
I certify that this submission complies with the submission instructions as described on https://colmweb.org/AuthorGuide.html
null
null
@inproceedings{ nguyen2025dynasaur, title={DynaSaur: Large Language Agents Beyond Predefined Actions}, author={Dang Nguyen and Viet Dac Lai and Seunghyun Yoon and Ryan A. Rossi and Handong Zhao and Ruiyi Zhang and Puneet Mathur and Nedim Lipka and Yu Wang and Trung Bui and Franck Dernoncourt and Tianyi Zhou}, booktitle...
nguyen|dynasaur_large_language_agents_beyond_predefined_actions
null
null
null
null
null
Inducing Programmatic Skills for Agentic Tasks
We propose ASI, Agent Skill Induction, which induces and applies skill programs from web navigation experiences without supervision, yielding improved correctness and efficiency.
To succeed in common digital tasks such as web navigation, agents must carry out a variety of specialized tasks such as searching for products or planning a travel route. To tackle these tasks, agents can bootstrap themselves by learning task-specific skills online through interaction with the web environment. In this ...
[ "Zora Zhiruo Wang", "Apurva Gandhi", "Graham Neubig", "Daniel Fried" ]
https://openreview.net/forum?id=lsAY6fWsog
lsAY6fWsog
lsAY6fWsog
[ "~Zora_Zhiruo_Wang1", "~Apurva_Gandhi1", "~Graham_Neubig1", "~Daniel_Fried1" ]
{ "value": "COLM 2025" }
{ "value": "colmweb.org/COLM/2025/Conference" }
{ "value": "/pdf/367b824215ee6bfc801df80d296736d22b5ea8f8.pdf" }
conference
colmweb.org/COLM/2025/Conference
2025
COLM
[ "agent", "skill learning", "web navigation", "scalability", "generalization" ]
I acknowledge that I and all co-authors of this work have read and commit to adhering to the COLM Code of Ethics on https://colmweb.org/CoE.html
I certify that this submission complies with the submission instructions as described on https://colmweb.org/AuthorGuide.html
null
null
@inproceedings{ wang2025inducing, title={Inducing Programmatic Skills for Agentic Tasks}, author={Zora Zhiruo Wang and Apurva Gandhi and Graham Neubig and Daniel Fried}, booktitle={Second Conference on Language Modeling}, year={2025}, url={https://openreview.net/forum?id=lsAY6fWsog} }
wang|inducing_programmatic_skills_for_agentic_tasks
/attachment/557818dcceefef95e993d00d56cf6bd03deaf89f.zip
null
null
null
null
C3PO: Critical-Layer, Core-Expert, Collaborative Pathway Optimization for Test-Time Expert Re-Mixing
Optimizing MoE LLM pathways at test-time via reference-based expert re-weighting, boosting accuracy by 7-15%.
Mixture-of-Experts (MoE) Large Language Models (LLMs) suffer from severely sub-optimal expert pathways—our study reveals that naive expert selection learned from pretraining leaves a surprising 10-20% accuracy gap for improvement. Motivated by this observation, we develop a novel class of test-time optimization methods...
[ "Zhongyang Li", "Ziyue Li", "Tianyi Zhou" ]
https://openreview.net/forum?id=lqC5J7pBP9
lqC5J7pBP9
lqC5J7pBP9
[ "~Zhongyang_Li5", "~Ziyue_Li1", "~Tianyi_Zhou2" ]
{ "value": "COLM 2025" }
{ "value": "colmweb.org/COLM/2025/Conference" }
{ "value": "/pdf/27513ccd493f683c9f199ca9978819cb08488942.pdf" }
conference
colmweb.org/COLM/2025/Conference
2025
COLM
[ "Mixture-of-Experts", "Large Language Models", "Test-Time Learning" ]
I acknowledge that I and all co-authors of this work have read and commit to adhering to the COLM Code of Ethics on https://colmweb.org/CoE.html
I certify that this submission complies with the submission instructions as described on https://colmweb.org/AuthorGuide.html
null
null
@inproceedings{ li2025cpo, title={C3{PO}: Critical-Layer, Core-Expert, Collaborative Pathway Optimization for Test-Time Expert Re-Mixing}, author={Zhongyang Li and Ziyue Li and Tianyi Zhou}, booktitle={Second Conference on Language Modeling}, year={2025}, url={https://openreview.net/forum?id=lqC5J7pBP9} }
li|c3po_criticallayer_coreexpert_collaborative_pathway_optimization_for_testtime_expert_remixing
null
null
null
null
null
Recycling the Web: A Method to Enhance Pre-training Data Quality and Quantity for Language Models
We propose rewriting low-quality web documents to improve their utility, thereby increasing the availability of pre-training data for language models
Scaling laws predict that the performance of large language models improves with increasing model size and data size. In practice, pre-training has been relying on massive web crawls, using almost all data sources publicly available on the internet so far. However, this pool of natural data does not grow at the same ra...
[ "Thao Nguyen", "Yang Li", "Olga Golovneva", "Luke Zettlemoyer", "Sewoong Oh", "Ludwig Schmidt", "Xian Li" ]
https://openreview.net/forum?id=lkjhBdz3rn
lkjhBdz3rn
lkjhBdz3rn
[ "~Thao_Nguyen3", "~Yang_Li112", "~Olga_Golovneva1", "~Luke_Zettlemoyer1", "~Sewoong_Oh3", "~Ludwig_Schmidt1", "~Xian_Li1" ]
{ "value": "COLM 2025" }
{ "value": "colmweb.org/COLM/2025/Conference" }
{ "value": "/pdf/467604cfb06de462129149a92b8208c1fb47d348.pdf" }
conference
colmweb.org/COLM/2025/Conference
2025
COLM
[ "pretraining", "synthetic data", "rewriting", "data curation for LLMs", "data filtering" ]
I acknowledge that I and all co-authors of this work have read and commit to adhering to the COLM Code of Ethics on https://colmweb.org/CoE.html
I certify that this submission complies with the submission instructions as described on https://colmweb.org/AuthorGuide.html
null
null
@inproceedings{ nguyen2025recycling, title={Recycling the Web: A Method to Enhance Pre-training Data Quality and Quantity for Language Models}, author={Thao Nguyen and Yang Li and Olga Golovneva and Luke Zettlemoyer and Sewoong Oh and Ludwig Schmidt and Xian Li}, booktitle={Second Conference on Language Modeling}, year...
nguyen|recycling_the_web_a_method_to_enhance_pretraining_data_quality_and_quantity_for_language_models
null
null
null
null
null
SuperBPE: Space Travel for Language Models
Superword tokenization leads to better and more efficient language models
The assumption across nearly all language model (LM) tokenization schemes is that tokens should be subwords, i.e., contained within word boundaries. While providing a seemingly reasonable inductive bias, is this common practice limiting the potential of modern LMs? Whitespace is not a reliable delimiter of meaning, as ...
[ "Alisa Liu", "Jonathan Hayase", "Valentin Hofmann", "Sewoong Oh", "Noah A. Smith", "Yejin Choi" ]
https://openreview.net/forum?id=lcDRvffeNP
lcDRvffeNP
lcDRvffeNP
[ "~Alisa_Liu1", "~Jonathan_Hayase2", "~Valentin_Hofmann1", "~Sewoong_Oh3", "~Noah_A._Smith2", "~Yejin_Choi1" ]
{ "value": "COLM 2025" }
{ "value": "colmweb.org/COLM/2025/Conference" }
{ "value": "/pdf/f503ab922e85bb5ffab9a14ae81b12062a9064f7.pdf" }
conference
colmweb.org/COLM/2025/Conference
2,025
COLM
[ "tokenization", "language modeling", "efficiency" ]
I acknowledge that I and all co-authors of this work have read and commit to adhering to the COLM Code of Ethics on https://colmweb.org/CoE.html
I certify that this submission complies with the submission instructions as described on https://colmweb.org/AuthorGuide.html
null
null
@inproceedings{ liu2025superbpe, title={Super{BPE}: Space Travel for Language Models}, author={Alisa Liu and Jonathan Hayase and Valentin Hofmann and Sewoong Oh and Noah A. Smith and Yejin Choi}, booktitle={Second Conference on Language Modeling}, year={2025}, url={https://openreview.net/forum?id=lcDRvffeNP} }
liu|superbpe_space_travel_for_language_models
null
null
null
null
null
A Survey on Personalized and Pluralistic Preference Alignment in Large Language Models
A survey on personalized and pluralistic preference alignment in Large Language Models
Personalized preference alignment for large language models (LLMs), the process of tailoring LLMs to individual users' preferences, is an emerging research direction spanning the area of NLP and personalization. In this survey, we present an analysis of works on personalized alignment and modeling for LLMs. We introduc...
[ "Zhouhang Xie", "Junda Wu", "Yiran Shen", "Raghav Jain", "Yu Xia", "Xintong Li", "Aaron Chang", "Ryan A. Rossi", "Tong Yu", "Sachin Kumar", "Bodhisattwa Prasad Majumder", "Jingbo Shang", "Prithviraj Ammanabrolu", "Julian McAuley" ]
https://openreview.net/forum?id=lSWOMjonL7
lSWOMjonL7
lSWOMjonL7
[ "~Zhouhang_Xie1", "~Junda_Wu1", "~Yiran_Shen2", "~Raghav_Jain1", "~Yu_Xia9", "~Xintong_Li2", "~Aaron_Chang1", "~Ryan_A._Rossi2", "~Tong_Yu3", "~Sachin_Kumar1", "~Bodhisattwa_Prasad_Majumder1", "~Jingbo_Shang2", "~Prithviraj_Ammanabrolu1", "~Julian_McAuley1" ]
{ "value": "COLM 2025" }
{ "value": "colmweb.org/COLM/2025/Conference" }
{ "value": "/pdf/2ce1323cfbae336695d501d9e61dbff93d15ea57.pdf" }
conference
colmweb.org/COLM/2025/Conference
2,025
COLM
[ "Preference Alignment", "Personalization", "Pluralistic Alignment", "Large Language Models" ]
I acknowledge that I and all co-authors of this work have read and commit to adhering to the COLM Code of Ethics on https://colmweb.org/CoE.html
I certify that this submission complies with the submission instructions as described on https://colmweb.org/AuthorGuide.html
null
null
@inproceedings{ xie2025a, title={A Survey on Personalized and Pluralistic Preference Alignment in Large Language Models}, author={Zhouhang Xie and Junda Wu and Yiran Shen and Raghav Jain and Yu Xia and Xintong Li and Aaron Chang and Ryan A. Rossi and Tong Yu and Sachin Kumar and Bodhisattwa Prasad Majumder and Jingbo S...
xie|a_survey_on_personalized_and_pluralistic_preference_alignment_in_large_language_models
null
null
null
null
null
Task Vectors in In-Context Learning: Emergence, Formation, and Benefits
We study task vectors in transformers trained from scratch on synthetic tasks and find they emerge but remain indistinct. To enhance their formation, we introduce TVP-loss. Strong task vectors in deeper layers improve ICL on OOD prompts.
In-context learning is a remarkable capability of transformers, referring to their ability to adapt to specific tasks based on a short history or context. Previous research has found that task-specific information is locally encoded within models, though their emergence and functionality remain unclear due to opaque pr...
[ "Liu Yang", "Ziqian Lin", "Kangwook Lee", "Dimitris Papailiopoulos", "Robert D Nowak" ]
https://openreview.net/forum?id=lODGn1Rp5t
lODGn1Rp5t
lODGn1Rp5t
[ "~Liu_Yang6", "~Ziqian_Lin1", "~Kangwook_Lee1", "~Dimitris_Papailiopoulos1", "~Robert_D_Nowak1" ]
{ "value": "COLM 2025" }
{ "value": "colmweb.org/COLM/2025/Conference" }
{ "value": "/pdf/7e185d2734f706b41e72249356e60ec9fd37135e.pdf" }
conference
colmweb.org/COLM/2025/Conference
2,025
COLM
[ "in-context learning", "task vector" ]
I acknowledge that I and all co-authors of this work have read and commit to adhering to the COLM Code of Ethics on https://colmweb.org/CoE.html
I certify that this submission complies with the submission instructions as described on https://colmweb.org/AuthorGuide.html
null
null
@inproceedings{ yang2025task, title={Task Vectors in In-Context Learning: Emergence, Formation, and Benefits}, author={Liu Yang and Ziqian Lin and Kangwook Lee and Dimitris Papailiopoulos and Robert D Nowak}, booktitle={Second Conference on Language Modeling}, year={2025}, url={https://openreview.net/forum?id=lODGn1Rp5...
yang|task_vectors_in_incontext_learning_emergence_formation_and_benefits
null
null
null
null
null
SEAM: Semantically Equivalent Across Modalities Benchmark for Vision-Language Models
We created a benchmark testing VLMs' ability to reason consistently across different modalities, revealing limitations in how models process semantically equivalent information based on format rather than meaning.
Evaluating whether vision–language models (VLMs) reason consistently across representations is challenging because modality comparisons are typically confounded by task differences and asymmetric information. We introduce SEAM, a benchmark that pairs semantically equivalent inputs across four domains that have existing...
[ "Zhenwei Tang", "Difan Jiao", "Blair Yang", "Ashton Anderson" ]
https://openreview.net/forum?id=lI4LgGv4sX
lI4LgGv4sX
lI4LgGv4sX
[ "~Zhenwei_Tang1", "~Difan_Jiao1", "~Blair_Yang1", "~Ashton_Anderson1" ]
{ "value": "COLM 2025" }
{ "value": "colmweb.org/COLM/2025/Conference" }
{ "value": "/pdf/980a13fe20145a2f7419388bb1bf725b023950e3.pdf" }
conference
colmweb.org/COLM/2025/Conference
2,025
COLM
[ "Vision-Language Models", "Benchmark", "Modality Imbalance" ]
I acknowledge that I and all co-authors of this work have read and commit to adhering to the COLM Code of Ethics on https://colmweb.org/CoE.html
I certify that this submission complies with the submission instructions as described on https://colmweb.org/AuthorGuide.html
null
null
@inproceedings{ tang2025seam, title={{SEAM}: Semantically Equivalent Across Modalities Benchmark for Vision-Language Models}, author={Zhenwei Tang and Difan Jiao and Blair Yang and Ashton Anderson}, booktitle={Second Conference on Language Modeling}, year={2025}, url={https://openreview.net/forum?id=lI4LgGv4sX} }
tang|seam_semantically_equivalent_across_modalities_benchmark_for_visionlanguage_models
null
null
null
null
null
Beyond the Reported Cutoff: Where Large Language Models Fall Short on Financial Knowledge
This paper evaluates Large Language Models' knowledge of historical financial data, finding they know more about larger, recent companies but are also prone to hallucinations about these firms.
Large Language Models (LLMs) are frequently utilized as sources of knowledge for question-answering. While it is known that LLMs may lack access to real-time data or newer data produced after the model's cutoff date, it is less clear how their knowledge spans across *historical* information. In this study, we assess th...
[ "Agam Shah", "Liqin Ye", "Sebastian Jaskowski", "Wei Xu", "Sudheer Chava" ]
https://openreview.net/forum?id=lEpPFmGH3L
lEpPFmGH3L
lEpPFmGH3L
[ "~Agam_Shah1", "~Liqin_Ye1", "~Sebastian_Jaskowski1", "~Wei_Xu5", "~Sudheer_Chava1" ]
{ "value": "COLM 2025" }
{ "value": "colmweb.org/COLM/2025/Conference" }
{ "value": "/pdf/b24fa9a8fd3c6721e3f957e197d5b44ec2782ba3.pdf" }
conference
colmweb.org/COLM/2025/Conference
2,025
COLM
[ "Large Language Models", "Knowledge Cutoff", "Model Hallucinations" ]
I acknowledge that I and all co-authors of this work have read and commit to adhering to the COLM Code of Ethics on https://colmweb.org/CoE.html
I certify that this submission complies with the submission instructions as described on https://colmweb.org/AuthorGuide.html
null
null
@inproceedings{ shah2025beyond, title={Beyond the Reported Cutoff: Where Large Language Models Fall Short on Financial Knowledge}, author={Agam Shah and Liqin Ye and Sebastian Jaskowski and Wei Xu and Sudheer Chava}, booktitle={Second Conference on Language Modeling}, year={2025}, url={https://openreview.net/forum?id=l...
shah|beyond_the_reported_cutoff_where_large_language_models_fall_short_on_financial_knowledge
null
null
null
null
null
Overcoming Vocabulary Constraints with Pixel-level Fallback
We augment pretrained language models with pixel-based text representations, overcoming vocabulary constraints, improving multilingual and cross-script performance
Subword tokenization requires balancing computational efficiency and vocabulary coverage, often leading to suboptimal performance on languages and scripts not prioritized during training. We propose to augment pretrained language models with a vocabulary-free encoder that generates input embeddings from text rendered ...
[ "Jonas F. Lotz", "Hendra Setiawan", "Stephan Peitz", "Yova Kementchedjhieva" ]
https://openreview.net/forum?id=lEaHNs2qEv
lEaHNs2qEv
lEaHNs2qEv
[ "~Jonas_F._Lotz1", "~Hendra_Setiawan1", "~Stephan_Peitz1", "~Yova_Kementchedjhieva1" ]
{ "value": "COLM 2025" }
{ "value": "colmweb.org/COLM/2025/Conference" }
{ "value": "/pdf/bd9290fab67e049c39a67c9f2e6dde45900c894c.pdf" }
conference
colmweb.org/COLM/2025/Conference
2,025
COLM
[ "pixel-based text representations", "machine translation", "multilinguality", "cross-lingual transfer", "unseen scripts" ]
I acknowledge that I and all co-authors of this work have read and commit to adhering to the COLM Code of Ethics on https://colmweb.org/CoE.html
I certify that this submission complies with the submission instructions as described on https://colmweb.org/AuthorGuide.html
null
null
@inproceedings{ lotz2025overcoming, title={Overcoming Vocabulary Constraints with Pixel-level Fallback}, author={Jonas F. Lotz and Hendra Setiawan and Stephan Peitz and Yova Kementchedjhieva}, booktitle={Second Conference on Language Modeling}, year={2025}, url={https://openreview.net/forum?id=lEaHNs2qEv} }
lotz|overcoming_vocabulary_constraints_with_pixellevel_fallback
null
null
null
null
null
EvidenceBench: A Benchmark for Extracting Evidence from Biomedical Papers
A large biomedical benchmark to evaluate LLMs' evidence-finding ability for scientific hypotheses and an automatic pipeline to create such benchmarks.
We study the task of automatically finding evidence relevant to hypotheses in biomedical papers. Finding relevant evidence is an important step when researchers investigate scientific hypotheses. We introduce EvidenceBench to measure models performance on this task, which is created by a novel pipeline that consists of...
[ "Jianyou Wang", "Weili Cao", "Kaicheng Wang", "Xiaoyue Wang", "Ashish Dalvi", "Gino Prasad", "Qishan Liang", "Hsuan-lin Her", "Mingwang", "Qin Yang", "Gene W. Yeo", "David E Neal", "Maxim Khan", "Christopher D. Rosin", "Ramamohan Paturi", "Leon Bergen" ]
https://openreview.net/forum?id=lEQnUI5lEA
lEQnUI5lEA
lEQnUI5lEA
[ "~Jianyou_Wang1", "~Weili_Cao1", "~Kaicheng_Wang1", "~Xiaoyue_Wang3", "~Ashish_Dalvi1", "~Gino_Prasad1", "~Qishan_Liang1", "~Hsuan-lin_Her1", "~Mingwang1", "~Qin_Yang5", "~Gene_W._Yeo1", "~David_E_Neal1", "~Maxim_Khan1", "~Christopher_D._Rosin1", "~Ramamohan_Paturi1", "~Leon_Bergen1" ]
{ "value": "COLM 2025" }
{ "value": "colmweb.org/COLM/2025/Conference" }
{ "value": "/pdf/b28cd425cb358879df700d6cdeb0bbdbc1a689df.pdf" }
conference
colmweb.org/COLM/2025/Conference
2,025
COLM
[ "Biomedical Benchmark", "Scientific Information Extraction", "Large Language Models", "BioNLP" ]
I acknowledge that I and all co-authors of this work have read and commit to adhering to the COLM Code of Ethics on https://colmweb.org/CoE.html
I certify that this submission complies with the submission instructions as described on https://colmweb.org/AuthorGuide.html
null
null
@inproceedings{ wang2025evidencebench, title={EvidenceBench: A Benchmark for Extracting Evidence from Biomedical Papers}, author={Jianyou Wang and Weili Cao and Kaicheng Wang and Xiaoyue Wang and Ashish Dalvi and Gino Prasad and Qishan Liang and Hsuan-lin Her and Mingwang and Qin Yang and Gene W. Yeo and David E Neal a...
wang|evidencebench_a_benchmark_for_extracting_evidence_from_biomedical_papers
null
null
null
null
null
SEAL: Steerable Reasoning Calibration of Large Language Models for Free
We developed a training-free method that improves LLM accuracy and efficiency by calibrating CoT reasoning traces through a learned steering vector, reducing redundant thoughts and enhancing performance across multiple benchmarks.
Large Language Models (LLMs), such as OpenAI’s o1-series have demonstrated compelling capabilities for complex reasoning tasks via the extended chain-of-thought (CoT) reasoning mechanism. However, recent studies reveal substantial redundancy in the CoT reasoning traces, which not only increases inference latency but a...
[ "Runjin Chen", "Zhenyu Zhang", "Junyuan Hong", "Souvik Kundu", "Zhangyang Wang" ]
https://openreview.net/forum?id=klPszYDIRT
klPszYDIRT
klPszYDIRT
[ "~Runjin_Chen1", "~Zhenyu_Zhang4", "~Junyuan_Hong1", "~Souvik_Kundu2", "~Zhangyang_Wang1" ]
{ "value": "COLM 2025" }
{ "value": "colmweb.org/COLM/2025/Conference" }
{ "value": "/pdf/78d6d04c381b639fc7c04ce750bff4724dec7a4f.pdf" }
conference
colmweb.org/COLM/2025/Conference
2,025
COLM
[ "LLM", "Reasoning", "Representation Engineering" ]
I acknowledge that I and all co-authors of this work have read and commit to adhering to the COLM Code of Ethics on https://colmweb.org/CoE.html
I certify that this submission complies with the submission instructions as described on https://colmweb.org/AuthorGuide.html
null
null
@inproceedings{ chen2025seal, title={{SEAL}: Steerable Reasoning Calibration of Large Language Models for Free}, author={Runjin Chen and Zhenyu Zhang and Junyuan Hong and Souvik Kundu and Zhangyang Wang}, booktitle={Second Conference on Language Modeling}, year={2025}, url={https://openreview.net/forum?id=klPszYDIRT} }
chen|seal_steerable_reasoning_calibration_of_large_language_models_for_free
null
null
null
null
null
ReasonIR: Training Retrievers for Reasoning Tasks
We present the first bi-encoder retriever specially trained to retrieve helpful documents for reasoning tasks.
We present ReasonIR-8B, the first retriever specifically trained for general reasoning tasks. Existing retrievers have shown limited gains on reasoning tasks, in part because existing training datasets focus on short factual queries tied to documents that straightforwardly answer them. We develop a synthetic data gener...
[ "Rulin Shao", "Rui Qiao", "Varsha Kishore", "Niklas Muennighoff", "Xi Victoria Lin", "Daniela Rus", "Bryan Kian Hsiang Low", "Sewon Min", "Wen-tau Yih", "Pang Wei Koh", "Luke Zettlemoyer" ]
https://openreview.net/forum?id=kkBCNLMbGj
kkBCNLMbGj
kkBCNLMbGj
[ "~Rulin_Shao1", "~Rui_Qiao3", "~Varsha_Kishore1", "~Niklas_Muennighoff1", "~Xi_Victoria_Lin1", "~Daniela_Rus1", "~Bryan_Kian_Hsiang_Low1", "~Sewon_Min1", "~Wen-tau_Yih1", "~Pang_Wei_Koh1", "~Luke_Zettlemoyer1" ]
{ "value": "COLM 2025" }
{ "value": "colmweb.org/COLM/2025/Conference" }
{ "value": "/pdf/08ca247ba1def28f58c2066c936038f437578e07.pdf" }
conference
colmweb.org/COLM/2025/Conference
2,025
COLM
[ "retriever", "information retrieval", "retrieval-augmented generation", "reasoning", "synthetic data" ]
I acknowledge that I and all co-authors of this work have read and commit to adhering to the COLM Code of Ethics on https://colmweb.org/CoE.html
I certify that this submission complies with the submission instructions as described on https://colmweb.org/AuthorGuide.html
null
null
@inproceedings{ shao2025reasonir, title={Reason{IR}: Training Retrievers for Reasoning Tasks}, author={Rulin Shao and Rui Qiao and Varsha Kishore and Niklas Muennighoff and Xi Victoria Lin and Daniela Rus and Bryan Kian Hsiang Low and Sewon Min and Wen-tau Yih and Pang Wei Koh and Luke Zettlemoyer}, booktitle={Second C...
shao|reasonir_training_retrievers_for_reasoning_tasks
null
null
null
null
null
How Multimodal LLMs Solve Image Tasks: A Lens on Visual Grounding, Task Reasoning, and Answer Decoding
We propose a probing framework to analyze how multimodal LLMs process visual and textual inputs across layers
Multimodal Large Language Models (MLLMs) have demonstrated strong performance across a wide range of vision-language tasks, yet their internal processing dynamics remain underexplored. In this work, we introduce a probing framework to systematically analyze how MLLMs process visual and textual inputs across layers. We ...
[ "Zhuoran Yu", "Yong Jae Lee" ]
https://openreview.net/forum?id=kjNJYWvfPA
kjNJYWvfPA
kjNJYWvfPA
[ "~Zhuoran_Yu2", "~Yong_Jae_Lee2" ]
{ "value": "COLM 2025" }
{ "value": "colmweb.org/COLM/2025/Conference" }
{ "value": "/pdf/f1fc281cc94ab858757534be1c0097d1b92201cc.pdf" }
conference
colmweb.org/COLM/2025/Conference
2,025
COLM
[ "Multimodal LLM", "Interpretability" ]
I acknowledge that I and all co-authors of this work have read and commit to adhering to the COLM Code of Ethics on https://colmweb.org/CoE.html
I certify that this submission complies with the submission instructions as described on https://colmweb.org/AuthorGuide.html
null
null
@inproceedings{ yu2025how, title={How Multimodal {LLM}s Solve Image Tasks: A Lens on Visual Grounding, Task Reasoning, and Answer Decoding}, author={Zhuoran Yu and Yong Jae Lee}, booktitle={Second Conference on Language Modeling}, year={2025}, url={https://openreview.net/forum?id=kjNJYWvfPA} }
yu|how_multimodal_llms_solve_image_tasks_a_lens_on_visual_grounding_task_reasoning_and_answer_decoding
null
null
null
null
null
SAEs Can Improve Unlearning: Dynamic Sparse Autoencoder Guardrails for Precision Unlearning in LLMs
We introduce Dynamic SAE Guardrails, a computationally efficient method that leverages sparse autoencoders for precision unlearning in LLMs, outperforming existing approaches while maintaining model utility.
Machine unlearning is a promising approach to improve LLM safety by removing unwanted knowledge from a trained model. However, prevailing gradient-based unlearning methods suffer from issues such as high computational costs, hyperparameter instability, poor sequential unlearning capability, vulnerability to relearning ...
[ "Aashiq Muhamed", "Jacopo Bonato", "Mona T. Diab", "Virginia Smith" ]
https://openreview.net/forum?id=kaPAalWAp3
kaPAalWAp3
kaPAalWAp3
[ "~Aashiq_Muhamed1", "~Jacopo_Bonato1", "~Mona_T._Diab1", "~Virginia_Smith1" ]
{ "value": "COLM 2025" }
{ "value": "colmweb.org/COLM/2025/Conference" }
{ "value": "/pdf/15a48f1a716e9205e87c0e7e53e18753bc29238d.pdf" }
conference
colmweb.org/COLM/2025/Conference
2,025
COLM
[ "unlearning", "sparse autoencoders", "representation editing", "steering vectors", "relearning" ]
I acknowledge that I and all co-authors of this work have read and commit to adhering to the COLM Code of Ethics on https://colmweb.org/CoE.html
I certify that this submission complies with the submission instructions as described on https://colmweb.org/AuthorGuide.html
null
null
@inproceedings{ muhamed2025saes, title={{SAE}s Can Improve Unlearning: Dynamic Sparse Autoencoder Guardrails for Precision Unlearning in {LLM}s}, author={Aashiq Muhamed and Jacopo Bonato and Mona T. Diab and Virginia Smith}, booktitle={Second Conference on Language Modeling}, year={2025}, url={https://openreview.net/fo...
muhamed|saes_can_improve_unlearning_dynamic_sparse_autoencoder_guardrails_for_precision_unlearning_in_llms
null
null
null
null
null
Putting the Value Back in RL: Better Test-Time Scaling by Unifying LLM Reasoners With Verifiers
We propose to improve inference-time scaling by jointly training a unified model for both reasoning and verification.
Prevalent reinforcement learning~(RL) methods for fine-tuning LLM reasoners, such as GRPO or Leave-one-out PPO, abandon the learned value function in favor of empirically estimated returns. This hinders test-time compute scaling that relies on using the value-function for verification. In this work, we propose RL$^V$ ...
[ "Kusha Sareen", "Morgane M Moss", "Alessandro Sordoni", "Arian Hosseini", "Rishabh Agarwal" ]
https://openreview.net/forum?id=kVOrGZM5N7
kVOrGZM5N7
kVOrGZM5N7
[ "~Kusha_Sareen1", "~Morgane_M_Moss1", "~Alessandro_Sordoni2", "~Arian_Hosseini1", "~Rishabh_Agarwal2" ]
{ "value": "COLM 2025" }
{ "value": "colmweb.org/COLM/2025/Conference" }
{ "value": "/pdf/3acf5484382857c965093513560345a4a8c7d01f.pdf" }
conference
colmweb.org/COLM/2025/Conference
2,025
COLM
[ "LLM Reasoning", "Verifiers", "Test-time scaling", "Reinforcement Learning" ]
I acknowledge that I and all co-authors of this work have read and commit to adhering to the COLM Code of Ethics on https://colmweb.org/CoE.html
I certify that this submission complies with the submission instructions as described on https://colmweb.org/AuthorGuide.html
null
null
@inproceedings{ sareen2025putting, title={Putting the Value Back in {RL}: Better Test-Time Scaling by Unifying {LLM} Reasoners With Verifiers}, author={Kusha Sareen and Morgane M Moss and Alessandro Sordoni and Arian Hosseini and Rishabh Agarwal}, booktitle={Second Conference on Language Modeling}, year={2025}, url={ht...
sareen|putting_the_value_back_in_rl_better_testtime_scaling_by_unifying_llm_reasoners_with_verifiers
null
null
null
null
null
Corrupted by Reasoning: Reasoning Language Models Become Free-Riders in Public Goods Games
We evaluate the cooperative behavior of LLMs in a public goods game with norm enforcement and find that reasoning models are less effective at maintaining cooperation than traditional ones.
As large language models (LLMs) are increasingly deployed as autonomous agents, understanding their cooperation and social mechanisms is becoming increasingly important. In particular, how LLMs balance self-interest and collective well-being is a critical challenge for ensuring alignment, robustness, and safe deploymen...
[ "David Guzman Piedrahita", "Yongjin Yang", "Mrinmaya Sachan", "Giorgia Ramponi", "Bernhard Schölkopf", "Zhijing Jin" ]
https://openreview.net/forum?id=kH6LOHGjEl
kH6LOHGjEl
kH6LOHGjEl
[ "~David_Guzman_Piedrahita1", "~Yongjin_Yang1", "~Mrinmaya_Sachan3", "~Giorgia_Ramponi1", "~Bernhard_Schölkopf1", "~Zhijing_Jin1" ]
{ "value": "COLM 2025" }
{ "value": "colmweb.org/COLM/2025/Conference" }
{ "value": "/pdf/a207da379702863ad85f59003fc2c5b4b7c47ac7.pdf" }
conference
colmweb.org/COLM/2025/Conference
2,025
COLM
[ "Large Language Models", "Multi-Agent", "Cooperation", "Social Dilemmas", "Public Goods Games", "Institutional Choice", "Costly Sanctioning", "Norm Enforcement", "Reasoning" ]
I acknowledge that I and all co-authors of this work have read and commit to adhering to the COLM Code of Ethics on https://colmweb.org/CoE.html
I certify that this submission complies with the submission instructions as described on https://colmweb.org/AuthorGuide.html
null
null
@inproceedings{ piedrahita2025corrupted, title={Corrupted by Reasoning: Reasoning Language Models Become Free-Riders in Public Goods Games}, author={David Guzman Piedrahita and Yongjin Yang and Mrinmaya Sachan and Giorgia Ramponi and Bernhard Sch{\"o}lkopf and Zhijing Jin}, booktitle={Second Conference on Language Mode...
piedrahita|corrupted_by_reasoning_reasoning_language_models_become_freeriders_in_public_goods_games
/attachment/e77bf91abe796499b424be1e4bdb5bc6a3541fc2.zip
null
null
null
null
AdaptMI: Adaptive Skill-based In-context Math Instructions for Small Language Models
We introduce AdaptMI, an adaptive approach to selecting skill-based in-context math instructions for small language models.
In-context learning (ICL) allows a language model to improve its problem-solving capability when provided with suitable information in context. Since the choice of in-context information can be determined based on the problem itself, in-context learning is analogous to human learning from teachers in a classroom. Recen...
[ "Yinghui He", "Abhishek Panigrahi", "Yong Lin", "Sanjeev Arora" ]
https://openreview.net/forum?id=k72RxnoS5g
k72RxnoS5g
k72RxnoS5g
[ "~Yinghui_He1", "~Abhishek_Panigrahi1", "~Yong_Lin2", "~Sanjeev_Arora1" ]
{ "value": "COLM 2025" }
{ "value": "colmweb.org/COLM/2025/Conference" }
{ "value": "/pdf/4d231204c9988dda0204c56d1dcf4623ef1bd631.pdf" }
conference
colmweb.org/COLM/2025/Conference
2,025
COLM
[ "Small language models", "large language models", "in-context learning", "natural language processing" ]
I acknowledge that I and all co-authors of this work have read and commit to adhering to the COLM Code of Ethics on https://colmweb.org/CoE.html
I certify that this submission complies with the submission instructions as described on https://colmweb.org/AuthorGuide.html
null
null
@inproceedings{ he2025adaptmi, title={Adapt{MI}: Adaptive Skill-based In-context Math Instructions for Small Language Models}, author={Yinghui He and Abhishek Panigrahi and Yong Lin and Sanjeev Arora}, booktitle={Second Conference on Language Modeling}, year={2025}, url={https://openreview.net/forum?id=k72RxnoS5g} }
he|adaptmi_adaptive_skillbased_incontext_math_instructions_for_small_language_models
null
null
null
null
null
FineWeb2: One Pipeline to Scale Them All — Adapting Pre-Training Data Processing to Every Language
We introduce a new pre-training dataset curation pipeline based on FineWeb that we use to create a new large multilingual dataset, FineWeb2
Pre-training state-of-the-art large language models (LLMs) requires vast amounts of clean and diverse text data. While the open development of large high-quality English pre-training datasets has seen substantial recent progress, training performant multilingual LLMs remains a challenge, in large part due to the inhere...
[ "Guilherme Penedo", "Hynek Kydlíček", "Vinko Sabolčec", "Bettina Messmer", "Negar Foroutan", "Amir Hossein Kargaran", "Colin Raffel", "Martin Jaggi", "Leandro Von Werra", "Thomas Wolf" ]
https://openreview.net/forum?id=jnRBe6zatP
jnRBe6zatP
jnRBe6zatP
[ "~Guilherme_Penedo1", "~Hynek_Kydlíček1", "~Vinko_Sabolčec1", "~Bettina_Messmer1", "~Negar_Foroutan1", "~Amir_Hossein_Kargaran1", "~Colin_Raffel1", "~Martin_Jaggi1", "~Leandro_Von_Werra1", "~Thomas_Wolf1" ]
{ "value": "COLM 2025" }
{ "value": "colmweb.org/COLM/2025/Conference" }
{ "value": "/pdf/8a8d8d04776824b23e7bd4683c04c4f4104f6682.pdf" }
conference
colmweb.org/COLM/2025/Conference
2,025
COLM
[ "multilingual", "dataset", "pretraining", "web data", "llm" ]
I acknowledge that I and all co-authors of this work have read and commit to adhering to the COLM Code of Ethics on https://colmweb.org/CoE.html
I certify that this submission complies with the submission instructions as described on https://colmweb.org/AuthorGuide.html
true
While there might not be a direct risk, since there are models released on a large quantity of web-crawled data, it would be useful to include a potential risk statement at the end of the paper.
@inproceedings{ penedo2025fineweb, title={FineWeb2: One Pipeline to Scale Them All {\textemdash} Adapting Pre-Training Data Processing to Every Language}, author={Guilherme Penedo and Hynek Kydl{\'\i}{\v{c}}ek and Vinko Sabol{\v{c}}ec and Bettina Messmer and Negar Foroutan and Amir Hossein Kargaran and Colin Raffel and...
penedo|fineweb2_one_pipeline_to_scale_them_all_adapting_pretraining_data_processing_to_every_language
null
true
null
null
null
AI-Slop to AI-Polish? Aligning Language Models through Edit-Based Writing Rewards and Test-time computation
AI models struggle to assess writing quality, but the new Writing Quality Benchmark and specialized reward model enable significant improvements to AI-generated text.
AI-generated text is proliferating across domains, from creative writing and journalism to marketing content and scientific articles. Models can follow user-provided instructions to generate coherent and grammatically correct outputs but in this work, we study a more fundamental question: how do we evaluate and improve...
[ "Tuhin Chakrabarty", "Philippe Laban", "Chien-Sheng Wu" ]
https://openreview.net/forum?id=jeDYcjuZIV
jeDYcjuZIV
jeDYcjuZIV
[ "~Tuhin_Chakrabarty2", "~Philippe_Laban1", "~Chien-Sheng_Wu1" ]
{ "value": "COLM 2025" }
{ "value": "colmweb.org/COLM/2025/Conference" }
{ "value": "/pdf/9c9ec96e904859c276d16b3667db86b735894ec3.pdf" }
conference
colmweb.org/COLM/2025/Conference
2,025
COLM
[ "LLM", "Writing", "Reward Models", "Alignment", "Human Centered NLP", "Test Time Compute", "Text Editing", "Behavioral Science" ]
I acknowledge that I and all co-authors of this work have read and commit to adhering to the COLM Code of Ethics on https://colmweb.org/CoE.html
I certify that this submission complies with the submission instructions as described on https://colmweb.org/AuthorGuide.html
null
null
@inproceedings{ chakrabarty2025aislop, title={{AI}-Slop to {AI}-Polish? Aligning Language Models through Edit-Based Writing Rewards and Test-time computation}, author={Tuhin Chakrabarty and Philippe Laban and Chien-Sheng Wu}, booktitle={Second Conference on Language Modeling}, year={2025}, url={https://openreview.net/f...
chakrabarty|aislop_to_aipolish_aligning_language_models_through_editbased_writing_rewards_and_testtime_computation
null
null
null
null
null
EuroBERT: Scaling Multilingual Encoders for European Languages
We introduce EuroBERT, a family of multilingual encoders leveraging recent architectural advances, achieving state-of-the-art performance across diverse tasks with support for sequences up to 8,192 tokens.
General-purpose multilingual vector representations, used in retrieval, regression and classification, are traditionally obtained from bidirectional encoder models. Despite their wide applicability, encoders have been recently overshadowed by advances in generative decoder-only models. However, many innovations driving...
[ "Nicolas Boizard", "Hippolyte Gisserot-Boukhlef", "Duarte Miguel Alves", "Andre Martins", "Ayoub Hammal", "Caio Corro", "CELINE HUDELOT", "Emmanuel Malherbe", "Etienne Malaboeuf", "Fanny Jourdan", "Gabriel Hautreux", "João Alves", "Kevin El Haddad", "Manuel Faysse", "Maxime Peyrard", "...
https://openreview.net/forum?id=jdOC24msVq
jdOC24msVq
jdOC24msVq
[ "~Nicolas_Boizard1", "~Hippolyte_Gisserot-Boukhlef1", "~Duarte_Miguel_Alves1", "~Andre_Martins1", "~Ayoub_Hammal1", "~Caio_Corro2", "~CELINE_HUDELOT1", "~Emmanuel_Malherbe3", "~Etienne_Malaboeuf1", "~Fanny_Jourdan1", "~Gabriel_Hautreux2", "~João_Alves2", "~Kevin_El_Haddad1", "~Manuel_Fayss...
{ "value": "COLM 2025" }
{ "value": "colmweb.org/COLM/2025/Conference" }
{ "value": "/pdf/5b43e38fbf30639e53eb283215e3f86c4dfa0b8c.pdf" }
conference
colmweb.org/COLM/2025/Conference
2,025
COLM
[ "Encoder", "Multilingual", "EuroBERT", "Training", "Vector Representations", "Bidirectional", "European" ]
I acknowledge that I and all co-authors of this work have read and commit to adhering to the COLM Code of Ethics on https://colmweb.org/CoE.html
I certify that this submission complies with the submission instructions as described on https://colmweb.org/AuthorGuide.html
null
null
@inproceedings{ boizard2025eurobert, title={Euro{BERT}: Scaling Multilingual Encoders for European Languages}, author={Nicolas Boizard and Hippolyte Gisserot-Boukhlef and Duarte Miguel Alves and Andre Martins and Ayoub Hammal and Caio Corro and CELINE HUDELOT and Emmanuel Malherbe and Etienne Malaboeuf and Fanny Jourda...
boizard|eurobert_scaling_multilingual_encoders_for_european_languages
null
null
null
null
null
MALT: Improving Reasoning with Multi-Agent LLM Training
We introduce a multi-agent post-training approach to improve the reasoning and self-correction performance of a generator, verifier, and refinement model working together
Large Language Models (LLMs) often produce answers with a single chain-of-thought, which restricts their ability to explore reasoning paths or self-correct flawed outputs in complex tasks. In this paper, we introduce MALT (Multi-Agent LLM Training), a novel post-training strategy that divides the reasoning process into...
[ "Sumeet Ramesh Motwani", "Chandler Smith", "Rocktim Jyoti Das", "Rafael Rafailov", "Philip Torr", "Ivan Laptev", "Fabio Pizzati", "Ronald Clark", "Christian Schroeder de Witt" ]
https://openreview.net/forum?id=jXP9bgFack
jXP9bgFack
jXP9bgFack
[ "~Sumeet_Ramesh_Motwani1", "~Chandler_Smith1", "~Rocktim_Jyoti_Das2", "~Rafael_Rafailov1", "~Philip_Torr1", "~Ivan_Laptev1", "~Fabio_Pizzati1", "~Ronald_Clark2", "~Christian_Schroeder_de_Witt1" ]
{ "value": "COLM 2025" }
{ "value": "colmweb.org/COLM/2025/Conference" }
{ "value": "/pdf/da3f367150d10b65b2dcaa8152a1aea745c9a7bd.pdf" }
conference
colmweb.org/COLM/2025/Conference
2,025
COLM
[ "reasoning", "multi-agent systems", "post-training", "reinforcement learning", "large language models" ]
I acknowledge that I and all co-authors of this work have read and commit to adhering to the COLM Code of Ethics on https://colmweb.org/CoE.html
I certify that this submission complies with the submission instructions as described on https://colmweb.org/AuthorGuide.html
null
null
@inproceedings{ motwani2025malt, title={{MALT}: Improving Reasoning with Multi-Agent {LLM} Training}, author={Sumeet Ramesh Motwani and Chandler Smith and Rocktim Jyoti Das and Rafael Rafailov and Philip Torr and Ivan Laptev and Fabio Pizzati and Ronald Clark and Christian Schroeder de Witt}, booktitle={Second Conferen...
motwani|malt_improving_reasoning_with_multiagent_llm_training
/attachment/c362d320dbebaac8a22e3ce459c2bf0981b28842.zip
null
null
null
null
Can Test-Time Scaling Improve World Foundation Model?
Test-time scaling can work for world foundation models and shows a clear scaling law. A smaller model with test-time scaling can even outperform a 2x larger pretrained model. An extensible toolkit will be open-sourced for evaluating the performance of WFMs.
World foundation models, which simulate the physical world by predicting future states from current observations and inputs, have become central to many applications in physical intelligence, including autonomous driving and robotics. However, these models require substantial computational resources for pretraining and...
[ "Wenyan Cong", "Hanqing Zhu", "Peihao Wang", "Bangya Liu", "Dejia Xu", "Kevin Wang", "David Z. Pan", "Yan Wang", "Zhiwen Fan", "Zhangyang Wang" ]
https://openreview.net/forum?id=jSmpq7IRYe
jSmpq7IRYe
jSmpq7IRYe
[ "~Wenyan_Cong1", "~Hanqing_Zhu1", "~Peihao_Wang1", "~Bangya_Liu1", "~Dejia_Xu1", "~Kevin_Wang4", "~David_Z._Pan1", "~Yan_Wang10", "~Zhiwen_Fan2", "~Zhangyang_Wang1" ]
{ "value": "COLM 2025" }
{ "value": "colmweb.org/COLM/2025/Conference" }
{ "value": "/pdf/fb37c34624a030a952ca49f34e233a4b8989d236.pdf" }
conference
colmweb.org/COLM/2025/Conference
2,025
COLM
[ "world foundation model", "test time scaling", "autoregressive video generation", "evaluation toolkit" ]
I acknowledge that I and all co-authors of this work have read and commit to adhering to the COLM Code of Ethics on https://colmweb.org/CoE.html
I certify that this submission complies with the submission instructions as described on https://colmweb.org/AuthorGuide.html
null
null
@inproceedings{ cong2025can, title={Can Test-Time Scaling Improve World Foundation Model?}, author={Wenyan Cong and Hanqing Zhu and Peihao Wang and Bangya Liu and Dejia Xu and Kevin Wang and David Z. Pan and Yan Wang and Zhiwen Fan and Zhangyang Wang}, booktitle={Second Conference on Language Modeling}, year={2025}, ur...
cong|can_testtime_scaling_improve_world_foundation_model
null
null
null
null
null
Implicit In-Context Learning: Evidence from Artificial Language Experiments
LLMs show human-like implicit learning during inference, but different models excel in different linguistic domains, pointing to distinct in-context learning mechanisms influenced by model architecture.
Humans acquire language through implicit learning, absorbing complex patterns without explicit awareness. While large language models (LLMs) demonstrate impressive linguistic capabilities, it remains unclear whether they exhibit human-like pattern recognition during in-context learning at the inference level. We adapted ...
[ "Xiaomeng Ma", "Qihui Xu" ]
https://openreview.net/forum?id=jST2VzWUFb
jST2VzWUFb
jST2VzWUFb
[ "~Xiaomeng_Ma1", "~Qihui_Xu1" ]
{ "value": "COLM 2025" }
{ "value": "colmweb.org/COLM/2025/Conference" }
{ "value": "/pdf/ee53e57c5115c95f27c7d0afcfe22b6389effb30.pdf" }
conference
colmweb.org/COLM/2025/Conference
2,025
COLM
[ "implicit learning", "artificial language learning", "in-context learning", "psycholinguistics", "cognitive science" ]
I acknowledge that I and all co-authors of this work have read and commit to adhering to the COLM Code of Ethics on https://colmweb.org/CoE.html
I certify that this submission complies with the submission instructions as described on https://colmweb.org/AuthorGuide.html
true
Only a minor thing. The paper uses probably copyrighted images from other papers in Figure 1 and 3. They should produce new figures with the original numbers as they did in Figure 2.
@inproceedings{ ma2025implicit, title={Implicit In-Context Learning: Evidence from Artificial Language Experiments}, author={Xiaomeng Ma and Qihui Xu}, booktitle={Second Conference on Language Modeling}, year={2025}, url={https://openreview.net/forum?id=jST2VzWUFb} }
ma|implicit_incontext_learning_evidence_from_artificial_language_experiments
null
null
null
null
null
Post-training for Efficient Communication via Convention Formation
We introduce a post-training method and new tasks to test and improve LLMs' abilities in forming conventions for efficient communication.
Humans communicate with increasing efficiency in multi-turn interactions, by adapting their language and forming ad-hoc conventions. In contrast, prior work shows that LLMs do not naturally show this behavior. We develop a post-training process to develop this ability through targeted fine-tuning on heuristically ident...
[ "Yilun Hua", "Evan Wang", "Yoav Artzi" ]
https://openreview.net/forum?id=jRGGmbhX2s
jRGGmbhX2s
jRGGmbhX2s
[ "~Yilun_Hua1", "~Evan_Wang2", "~Yoav_Artzi1" ]
{ "value": "COLM 2025" }
{ "value": "colmweb.org/COLM/2025/Conference" }
{ "value": "/pdf/90eee569c161e32eb84ea65de4a8205e9fa60de2.pdf" }
conference
colmweb.org/COLM/2025/Conference
2,025
COLM
[ "Interaction", "Communication Efficiency", "Linguistic Convention", "Post-training", "Alignment", "LLM", "In-context learning" ]
I acknowledge that I and all co-authors of this work have read and commit to adhering to the COLM Code of Ethics on https://colmweb.org/CoE.html
I certify that this submission complies with the submission instructions as described on https://colmweb.org/AuthorGuide.html
null
null
@inproceedings{ hua2025posttraining, title={Post-training for Efficient Communication via Convention Formation}, author={Yilun Hua and Evan Wang and Yoav Artzi}, booktitle={Second Conference on Language Modeling}, year={2025}, url={https://openreview.net/forum?id=jRGGmbhX2s} }
hua|posttraining_for_efficient_communication_via_convention_formation
null
null
null
null
null
Tulu 3: Pushing Frontiers in Open Language Model Post-Training
A new multi-stage post-training recipe scaling preference learning and reinforcement learning with verifiable rewards.
Language model post-training is applied to refine behaviors and unlock new skills across a wide range of language models, but open recipes for applying these techniques lag behind proprietary ones. The underlying training data and recipes for post-training are simultaneously the most important pieces of the puzzle an...
[ "Nathan Lambert", "Jacob Morrison", "Valentina Pyatkin", "Shengyi Huang", "Hamish Ivison", "Faeze Brahman", "Lester James Validad Miranda", "Alisa Liu", "Nouha Dziri", "Xinxi Lyu", "Yuling Gu", "Saumya Malik", "Victoria Graf", "Jena D. Hwang", "Jiangjiang Yang", "Ronan Le Bras", "Oyv...
https://openreview.net/forum?id=i1uGbfHHpH
i1uGbfHHpH
i1uGbfHHpH
[ "~Nathan_Lambert1", "~Jacob_Morrison2", "~Valentina_Pyatkin1", "~Shengyi_Huang1", "~Hamish_Ivison1", "~Faeze_Brahman1", "~Lester_James_Validad_Miranda1", "~Alisa_Liu1", "~Nouha_Dziri2", "~Xinxi_Lyu1", "~Yuling_Gu1", "~Saumya_Malik1", "~Victoria_Graf1", "~Jena_D._Hwang1", "~Jiangjiang_Yan...
{ "value": "COLM 2025" }
{ "value": "colmweb.org/COLM/2025/Conference" }
{ "value": "/pdf/9c0d921fcc7c90227f3e1bcba16bfea6f63ad480.pdf" }
conference
colmweb.org/COLM/2025/Conference
2,025
COLM
[ "post-training", "reinforcement learning", "preference learning" ]
I acknowledge that I and all co-authors of this work have read and commit to adhering to the COLM Code of Ethics on https://colmweb.org/CoE.html
I certify that this submission complies with the submission instructions as described on https://colmweb.org/AuthorGuide.html
null
null
@inproceedings{ lambert2025tulu, title={Tulu 3: Pushing Frontiers in Open Language Model Post-Training}, author={Nathan Lambert and Jacob Morrison and Valentina Pyatkin and Shengyi Huang and Hamish Ivison and Faeze Brahman and Lester James Validad Miranda and Alisa Liu and Nouha Dziri and Xinxi Lyu and Yuling Gu and Sa...
lambert|tulu_3_pushing_frontiers_in_open_language_model_posttraining
null
true
null
null
null
Beyond Blanket Masking: Examining Granularity for Privacy Protection in Images Captured by Blind and Low Vision Users
We propose a fine-grained privacy protection framework that selectively masks only high-risk private information while preserving low-risk information in images taken by blind and low vision users for improved usability.
As visual assistant systems powered by visual language models (VLMs) become more prevalent, concerns over user privacy have grown, particularly for blind and low vision users who may unknowingly capture personal private information in their images. Existing privacy protection methods rely on coarse-grained segmentation...
[ "Jeffri Murrugarra-Llerena", "Haoran Niu", "K. Suzanne Barber", "Hal Daumé III", "Yang Trista Cao", "Paola Cascante-Bonilla" ]
https://openreview.net/forum?id=hLjoekkPiJ
hLjoekkPiJ
hLjoekkPiJ
[ "~Jeffri_Murrugarra-Llerena1", "~Haoran_Niu1", "~K._Suzanne_Barber1", "~Hal_Daumé_III1", "~Yang_Trista_Cao1", "~Paola_Cascante-Bonilla1" ]
{ "value": "COLM 2025" }
{ "value": "colmweb.org/COLM/2025/Conference" }
{ "value": "/pdf/c24fbd6345ba739cef3f37e046cb5b2524fc8652.pdf" }
conference
colmweb.org/COLM/2025/Conference
2,025
COLM
[ "visual language models", "privacy", "safety", "accessibility" ]
I acknowledge that I and all co-authors of this work have read and commit to adhering to the COLM Code of Ethics on https://colmweb.org/CoE.html
I certify that this submission complies with the submission instructions as described on https://colmweb.org/AuthorGuide.html
null
null
@inproceedings{ murrugarra-llerena2025beyond, title={Beyond Blanket Masking: Examining Granularity for Privacy Protection in Images Captured by Blind and Low Vision Users}, author={Jeffri Murrugarra-Llerena and Haoran Niu and K. Suzanne Barber and Hal Daum{\'e} III and Yang Trista Cao and Paola Cascante-Bonilla}, bookt...
murrugarrallerena|beyond_blanket_masking_examining_granularity_for_privacy_protection_in_images_captured_by_blind_and_low_vision_users
null
null
null
null
null
Exploring the Limit of Outcome Reward for Learning Mathematical Reasoning
This paper proposes a new RL framework to pursue the performance limit that can be achieved through outcome reward-based reinforcement learning for mathematical reasoning tasks, where only binary outcome rewards are easily accessible.
Reasoning abilities, especially those for solving complex math problems, are crucial components of general intelligence. Recent advances by proprietary companies, such as o-series models of OpenAI, have made remarkable progress on reasoning tasks. However, the complete technical details remain unrevealed, and the techn...
[ "Chengqi Lyu", "Songyang Gao", "Yuzhe Gu", "Wenwei Zhang", "Jianfei Gao", "Kuikun Liu", "Ziyi Wang", "Shuaibin Li", "Qian Zhao", "Haian Huang", "Weihan Cao", "Jiangning Liu", "Hongwei Liu", "Junnan Liu", "Songyang Zhang", "Dahua Lin", "Kai Chen" ]
https://openreview.net/forum?id=hLg2rzBJR2
hLg2rzBJR2
hLg2rzBJR2
[ "~Chengqi_Lyu1", "~Songyang_Gao1", "~Yuzhe_Gu1", "~Wenwei_Zhang1", "~Jianfei_Gao1", "~Kuikun_Liu1", "~Ziyi_Wang30", "~Shuaibin_Li2", "~Qian_Zhao10", "~Haian_Huang1", "~Weihan_Cao1", "~Jiangning_Liu1", "~Hongwei_Liu2", "~Junnan_Liu1", "~Songyang_Zhang1", "~Dahua_Lin1", "~Kai_Chen4" ]
{ "value": "COLM 2025" }
{ "value": "colmweb.org/COLM/2025/Conference" }
{ "value": "/pdf/5d9ab08857ddd9047f14e3abbcb0430b6b09ada9.pdf" }
conference
colmweb.org/COLM/2025/Conference
2,025
COLM
[ "Large Language Model", "Reinforcement Learning", "Mathematical Reasoning" ]
I acknowledge that I and all co-authors of this work have read and commit to adhering to the COLM Code of Ethics on https://colmweb.org/CoE.html
I certify that this submission complies with the submission instructions as described on https://colmweb.org/AuthorGuide.html
null
null
@inproceedings{ lyu2025exploring, title={Exploring the Limit of Outcome Reward for Learning Mathematical Reasoning}, author={Chengqi Lyu and Songyang Gao and Yuzhe Gu and Wenwei Zhang and Jianfei Gao and Kuikun Liu and Ziyi Wang and Shuaibin Li and Qian Zhao and Haian Huang and Weihan Cao and Jiangning Liu and Hongwei ...
lyu|exploring_the_limit_of_outcome_reward_for_learning_mathematical_reasoning
null
null
null
null
null
The World According to LLMs: How Geographic Origin Influences LLMs' Entity Deduction Capabilities
LLMs demonstrate geographic bias in entity deduction games, with notably better performance on entities from the Global North and West despite controlling for language, popularity and frequency factors.
Large Language Models (LLMs) have been extensively tuned to mitigate explicit biases, yet they often exhibit subtle implicit biases rooted in their pre-training data. Rather than directly probing LLMs with human-crafted questions that may trigger guardrails, we propose studying how models behave when they proactively a...
[ "Harsh Nishant Lalai", "Raj Sanjay Shah", "Jiaxin Pei", "Sashank Varma", "Yi-Chia Wang", "Ali Emami" ]
https://openreview.net/forum?id=hJtvCfDfs1
hJtvCfDfs1
hJtvCfDfs1
[ "~Harsh_Nishant_Lalai1", "~Raj_Sanjay_Shah2", "~Jiaxin_Pei1", "~Sashank_Varma1", "~Yi-Chia_Wang2", "~Ali_Emami3" ]
{ "value": "COLM 2025" }
{ "value": "colmweb.org/COLM/2025/Conference" }
{ "value": "/pdf/ac4433d186627b5e757785570169152bc828290f.pdf" }
conference
colmweb.org/COLM/2025/Conference
2,025
COLM
[ "geographic representation", "LLM evaluation", "fairness and bias", "reasoning capabilities", "cross-cultural NLP", "interactive question answering" ]
I acknowledge that I and all co-authors of this work have read and commit to adhering to the COLM Code of Ethics on https://colmweb.org/CoE.html
I certify that this submission complies with the submission instructions as described on https://colmweb.org/AuthorGuide.html
null
null
@inproceedings{ lalai2025the, title={The World According to {LLM}s: How Geographic Origin Influences {LLM}s' Entity Deduction Capabilities}, author={Harsh Nishant Lalai and Raj Sanjay Shah and Jiaxin Pei and Sashank Varma and Yi-Chia Wang and Ali Emami}, booktitle={Second Conference on Language Modeling}, year={2025}, ...
lalai|the_world_according_to_llms_how_geographic_origin_influences_llms_entity_deduction_capabilities
/attachment/074af523bf6b035bfdf44772c9856eb01071c044.zip
null
null
null
null
FacTool: Factuality Detection in Generative AI -- A Tool Augmented Framework for Multi-Task and Multi-Domain Scenarios
We introduce FacTool, a tool augmented factuality detection framework that can effectively detect diverse factual errors generated by LLMs.
The emergence of generative pre-trained models has facilitated the synthesis of high-quality text but has also posed challenges in identifying factual errors in the generated text. In particular: (1) A wider range of tasks now face an increasing risk of containing factual errors when handled by generative models. (2) T...
[ "Ethan Chern", "Steffi Chern", "Shiqi Chen", "Weizhe Yuan", "Kehua Feng", "Chunting Zhou", "Junxian He", "Graham Neubig", "Pengfei Liu" ]
https://openreview.net/forum?id=hJkQL9VtWT
hJkQL9VtWT
hJkQL9VtWT
[ "~Ethan_Chern1", "~Steffi_Chern1", "~Shiqi_Chen3", "~Weizhe_Yuan1", "~Kehua_Feng1", "~Chunting_Zhou1", "~Junxian_He1", "~Graham_Neubig1", "~Pengfei_Liu1" ]
{ "value": "COLM 2025" }
{ "value": "colmweb.org/COLM/2025/Conference" }
{ "value": "/pdf/56beb802a5975ad23bc08b1f979c1bbbfdf9b831.pdf" }
conference
colmweb.org/COLM/2025/Conference
2,025
COLM
[ "factuality", "llm", "hallucination" ]
I acknowledge that I and all co-authors of this work have read and commit to adhering to the COLM Code of Ethics on https://colmweb.org/CoE.html
I certify that this submission complies with the submission instructions as described on https://colmweb.org/AuthorGuide.html
null
null
@inproceedings{ chern2025factool, title={FacTool: Factuality Detection in Generative {AI} -- A Tool Augmented Framework for Multi-Task and Multi-Domain Scenarios}, author={Ethan Chern and Steffi Chern and Shiqi Chen and Weizhe Yuan and Kehua Feng and Chunting Zhou and Junxian He and Graham Neubig and Pengfei Liu}, book...
chern|factool_factuality_detection_in_generative_ai_a_tool_augmented_framework_for_multitask_and_multidomain_scenarios
null
null
null
null
null
Overflow Prevention Enhances Long-Context Recurrent LLMs
We identify that recurrent LLMs suffer from recurrent memory overflows that limit their performance in long-context tasks. We propose OPRM, a training-free overflow-prevention mechanism that achieves significant gains in many long-context tasks.
A recent trend in LLMs is developing recurrent sub-quadratic models that improve long-context processing efficiency. We investigate leading large long-context models, focusing on how their fixed-size recurrent memory affects their performance. Our experiments reveal that, even when these models are trained for extended...
[ "Assaf Ben-Kish", "Itamar Zimerman", "Muhammad Jehanzeb Mirza", "Lior Wolf", "James R. Glass", "Leonid Karlinsky", "Raja Giryes" ]
https://openreview.net/forum?id=h99hJlU99U
h99hJlU99U
h99hJlU99U
[ "~Assaf_Ben-Kish1", "~Itamar_Zimerman1", "~Muhammad_Jehanzeb_Mirza1", "~Lior_Wolf1", "~James_R._Glass1", "~Leonid_Karlinsky3", "~Raja_Giryes1" ]
{ "value": "COLM 2025" }
{ "value": "colmweb.org/COLM/2025/Conference" }
{ "value": "/pdf/efdf046a9a269338d76057e9cbd738e378ebcfd4.pdf" }
conference
colmweb.org/COLM/2025/Conference
2,025
COLM
[ "Mamba", "Sub-Quadratic Models", "Long Context", "Long-Range Language Modeling", "RNNs" ]
I acknowledge that I and all co-authors of this work have read and commit to adhering to the COLM Code of Ethics on https://colmweb.org/CoE.html
I certify that this submission complies with the submission instructions as described on https://colmweb.org/AuthorGuide.html
null
null
@inproceedings{ ben-kish2025overflow, title={Overflow Prevention Enhances Long-Context Recurrent {LLM}s}, author={Assaf Ben-Kish and Itamar Zimerman and Muhammad Jehanzeb Mirza and Lior Wolf and James R. Glass and Leonid Karlinsky and Raja Giryes}, booktitle={Second Conference on Language Modeling}, year={2025}, url={h...
benkish|overflow_prevention_enhances_longcontext_recurrent_llms
/attachment/d052d7d6c9526dfe93dd56e2282dfe73efb8d2a8.zip
null
null
null
null
Both Direct and Indirect Evidence Contribute to Dative Alternation Preferences in Language Models
Through manipulating word-order preferences in datives and non-datives in the training sets of language models, we find that they acquire dative alternation preferences from both direct and indirect evidence.
Language models (LMs) tend to show human-like preferences on a number of syntactic phenomena, but the extent to which these are attributable to direct exposure to the phenomena or more general properties of language is unclear. We explore this with the English dative alternation (DO: "gave Y the X" vs. PO: "gave the X ...
[ "Qing Yao", "Kanishka Misra", "Leonie Weissweiler", "Kyle Mahowald" ]
https://openreview.net/forum?id=h5SRsDax8v
h5SRsDax8v
h5SRsDax8v
[ "~Qing_Yao1", "~Kanishka_Misra1", "~Leonie_Weissweiler1", "~Kyle_Mahowald1" ]
{ "value": "COLM 2025" }
{ "value": "colmweb.org/COLM/2025/Conference" }
{ "value": "/pdf/8e4eb5c7ff5e692e1ee71bfed0fc2db5ec51f2cc.pdf" }
conference
colmweb.org/COLM/2025/Conference
2,025
COLM
[ "linguistics", "dative alternation", "indirect evidence", "language learning", "cognitive science", "linguistic constructions" ]
I acknowledge that I and all co-authors of this work have read and commit to adhering to the COLM Code of Ethics on https://colmweb.org/CoE.html
I certify that this submission complies with the submission instructions as described on https://colmweb.org/AuthorGuide.html
null
null
@inproceedings{ yao2025both, title={Both Direct and Indirect Evidence Contribute to Dative Alternation Preferences in Language Models}, author={Qing Yao and Kanishka Misra and Leonie Weissweiler and Kyle Mahowald}, booktitle={Second Conference on Language Modeling}, year={2025}, url={https://openreview.net/forum?id=h5S...
yao|both_direct_and_indirect_evidence_contribute_to_dative_alternation_preferences_in_language_models
/attachment/95f95f9867e13b8b85bd505fc06b68fc5343fa85.zip
null
null
null
null
Training Plug-and-Play Knowledge Modules with Deep Context Distillation
We encapsulate knowledge from a document inside a LoRA adapter via distillation
Dynamically integrating new or rapidly evolving information after (Large) Language Model pre-training remains challenging, particularly in low-data scenarios or when dealing with private and specialized documents. In-context learning and retrieval-augmented generation (RAG) face limitations, including their high infere...
[ "Lucas Caccia", "Alan Ansell", "Edoardo Ponti", "Ivan Vulić", "Alessandro Sordoni" ]
https://openreview.net/forum?id=ghyyHZYORi
ghyyHZYORi
ghyyHZYORi
[ "~Lucas_Caccia1", "~Alan_Ansell1", "~Edoardo_Ponti1", "~Ivan_Vulić1", "~Alessandro_Sordoni2" ]
{ "value": "COLM 2025" }
{ "value": "colmweb.org/COLM/2025/Conference" }
{ "value": "/pdf/e6440e102c61ff2e77b350b613c2c70b72b3c8cc.pdf" }
conference
colmweb.org/COLM/2025/Conference
2,025
COLM
[ "knowledge extraction", "document understanding", "modular learning" ]
I acknowledge that I and all co-authors of this work have read and commit to adhering to the COLM Code of Ethics on https://colmweb.org/CoE.html
I certify that this submission complies with the submission instructions as described on https://colmweb.org/AuthorGuide.html
null
null
@inproceedings{ caccia2025training, title={Training Plug-and-Play Knowledge Modules with Deep Context Distillation}, author={Lucas Caccia and Alan Ansell and Edoardo Ponti and Ivan Vuli{\'c} and Alessandro Sordoni}, booktitle={Second Conference on Language Modeling}, year={2025}, url={https://openreview.net/forum?id=gh...
caccia|training_plugandplay_knowledge_modules_with_deep_context_distillation
null
null
null
null
null
StagFormer: Time Staggering Decoder only Transformers
We propose a novel variant of the Transformer architecture for decoder-only language modeling in which we break the causal flow of information along the layers of a model by staggering execution along the time axis.
Standard decoding in a Transformer-based language model is inherently sequential, as we wait for a token’s embedding to pass through all the layers in the network before starting the generation of the next token. In this work, we propose a new architecture, StagFormer (Staggered Transformer), which staggers execution alo...
[ "Dylan J Cutler", "Arun Kandoor", "Nishanth Dikkala", "Nikunj Saunshi", "Xin Wang", "Rina Panigrahy" ]
https://openreview.net/forum?id=gOKTe1KI8K
gOKTe1KI8K
gOKTe1KI8K
[ "~Dylan_J_Cutler1", "~Arun_Kandoor1", "~Nishanth_Dikkala1", "~Nikunj_Saunshi1", "~Xin_Wang30", "~Rina_Panigrahy1" ]
{ "value": "COLM 2025" }
{ "value": "colmweb.org/COLM/2025/Conference" }
{ "value": "/pdf/ba36b6a43243381888f9fe458c1dd690d8466235.pdf" }
conference
colmweb.org/COLM/2025/Conference
2,025
COLM
[ "staggered execution", "decoder only language models", "efficiency", "novel architectures", "generative models" ]
I acknowledge that I and all co-authors of this work have read and commit to adhering to the COLM Code of Ethics on https://colmweb.org/CoE.html
I certify that this submission complies with the submission instructions as described on https://colmweb.org/AuthorGuide.html
null
null
@inproceedings{ cutler2025stagformer, title={StagFormer: Time Staggering Decoder only Transformers}, author={Dylan J Cutler and Arun Kandoor and Nishanth Dikkala and Nikunj Saunshi and Xin Wang and Rina Panigrahy}, booktitle={Second Conference on Language Modeling}, year={2025}, url={https://openreview.net/forum?id=gOK...
cutler|stagformer_time_staggering_decoder_only_transformers
null
null
null
null
null
X-Teaming: Multi-Turn Jailbreaks and Defenses with Adaptive Multi-Agents
X-Teaming, a scalable multi-agent framework that achieves state-of-the-art multi-turn jailbreaking of language models while X-Guard trains models to defend against these attacks.
Multi-turn interactions with language models (LMs) pose critical safety risks, as harmful intent can be strategically spread across exchanges. Yet, the vast majority of prior work has focused on single-turn safety, while adaptability and diversity remain among the key challenges of multi-turn red-teaming. To address th...
[ "Salman Rahman", "Liwei Jiang", "James Shiffer", "Genglin Liu", "Sheriff Issaka", "Md Rizwan Parvez", "Hamid Palangi", "Kai-Wei Chang", "Yejin Choi", "Saadia Gabriel" ]
https://openreview.net/forum?id=gKfj7Jb1kj
gKfj7Jb1kj
gKfj7Jb1kj
[ "~Salman_Rahman1", "~Liwei_Jiang2", "~James_Shiffer1", "~Genglin_Liu1", "~Sheriff_Issaka1", "~Md_Rizwan_Parvez1", "~Hamid_Palangi1", "~Kai-Wei_Chang1", "~Yejin_Choi1", "~Saadia_Gabriel1" ]
{ "value": "COLM 2025" }
{ "value": "colmweb.org/COLM/2025/Conference" }
{ "value": "/pdf/3e4a1c4d08cce66b395c76b84a9a11c7dcae8c65.pdf" }
conference
colmweb.org/COLM/2025/Conference
2,025
COLM
[ "Multi-turn Jailbreaks", "Adaptive Multi-Agent", "Conversational AI Safety", "Red-Teaming", "Defensive Alignment" ]
I acknowledge that I and all co-authors of this work have read and commit to adhering to the COLM Code of Ethics on https://colmweb.org/CoE.html
I certify that this submission complies with the submission instructions as described on https://colmweb.org/AuthorGuide.html
null
null
@inproceedings{ rahman2025xteaming, title={X-Teaming: Multi-Turn Jailbreaks and Defenses with Adaptive Multi-Agents}, author={Salman Rahman and Liwei Jiang and James Shiffer and Genglin Liu and Sheriff Issaka and Md Rizwan Parvez and Hamid Palangi and Kai-Wei Chang and Yejin Choi and Saadia Gabriel}, booktitle={Second ...
rahman|xteaming_multiturn_jailbreaks_and_defenses_with_adaptive_multiagents
null
null
null
null
null
SQuat: Subspace-orthogonal KV Cache Quantization
We propose a KV cache quantization method that preserves task-critical information throughout the quantization process.
The key-value (KV) cache accelerates LLM decoding by storing KV tensors from previously generated tokens. It reduces redundant computation at the cost of increased memory usage. To mitigate this overhead, existing approaches compress KV tensors into lower-bit representations; however, quantization errors can accumulat...
[ "Hao Wang", "Ligong Han", "Kai Xu", "Akash Srivastava" ]
https://openreview.net/forum?id=gKdhzBiHay
gKdhzBiHay
gKdhzBiHay
[ "~Hao_Wang22", "~Ligong_Han1", "~Kai_Xu4", "~Akash_Srivastava1" ]
{ "value": "COLM 2025" }
{ "value": "colmweb.org/COLM/2025/Conference" }
{ "value": "/pdf/ed8d739e7f3da1993f38d6c9bb76c233b8435be1.pdf" }
conference
colmweb.org/COLM/2025/Conference
2,025
COLM
[ "KV cache quantization", "LLMs" ]
I acknowledge that I and all co-authors of this work have read and commit to adhering to the COLM Code of Ethics on https://colmweb.org/CoE.html
I certify that this submission complies with the submission instructions as described on https://colmweb.org/AuthorGuide.html
null
null
@inproceedings{ wang2025squat, title={{SQ}uat: Subspace-orthogonal {KV} Cache Quantization}, author={Hao Wang and Ligong Han and Kai Xu and Akash Srivastava}, booktitle={Second Conference on Language Modeling}, year={2025}, url={https://openreview.net/forum?id=gKdhzBiHay} }
wang|squat_subspaceorthogonal_kv_cache_quantization
null
null
null
null
null
KVSink: Understanding and Enhancing the Preservation of Attention Sinks in KV Cache Quantization for LLMs
Understanding and Enhancing the Preservation of Attention Sinks in KV Cache Quantization for LLMs
Key-Value (KV) cache quantization has become a widely adopted optimization technique for efficient large language models (LLMs) inference by reducing KV cache memory usage and mitigating memory-bound constraints. Recent studies have emphasized the importance of preserving the original precision of KVs for the first few...
[ "Zunhai Su", "Kehong Yuan" ]
https://openreview.net/forum?id=gIqb6zWZoO
gIqb6zWZoO
gIqb6zWZoO
[ "~Zunhai_Su1", "~Kehong_Yuan1" ]
{ "value": "COLM 2025" }
{ "value": "colmweb.org/COLM/2025/Conference" }
{ "value": "/pdf/b192f1bfa40f51a5501e78eb839aa697e1a1a6c3.pdf" }
conference
colmweb.org/COLM/2025/Conference
2,025
COLM
[ "quantization", "kv cache", "transformer", "llm", "attention" ]
I acknowledge that I and all co-authors of this work have read and commit to adhering to the COLM Code of Ethics on https://colmweb.org/CoE.html
I certify that this submission complies with the submission instructions as described on https://colmweb.org/AuthorGuide.html
null
null
@inproceedings{ su2025kvsink, title={{KVS}ink: Understanding and Enhancing the Preservation of Attention Sinks in {KV} Cache Quantization for {LLM}s}, author={Zunhai Su and Kehong Yuan}, booktitle={Second Conference on Language Modeling}, year={2025}, url={https://openreview.net/forum?id=gIqb6zWZoO} }
su|kvsink_understanding_and_enhancing_the_preservation_of_attention_sinks_in_kv_cache_quantization_for_llms
null
true
null
null
null
Efficient Construction of Model Family through Progressive Training Using Model Expansion
We propose a progressive training approach that efficiently builds a family of LLMs, reducing total computational requirements while achieving comparable or even better performance.
As Large Language Models (LLMs) gain widespread practical application, offering model families with varying parameter sizes has become standard practice to accommodate diverse computational requirements. Traditionally, each model in the family is trained independently, incurring computational costs that scale additiv...
[ "Kazuki Yano", "Sho Takase", "Sosuke Kobayashi", "Shun Kiyono", "Jun Suzuki" ]
https://openreview.net/forum?id=fuBrcTH8NM
fuBrcTH8NM
fuBrcTH8NM
[ "~Kazuki_Yano1", "~Sho_Takase2", "~Sosuke_Kobayashi1", "~Shun_Kiyono1", "~Jun_Suzuki1" ]
{ "value": "COLM 2025" }
{ "value": "colmweb.org/COLM/2025/Conference" }
{ "value": "/pdf/5558b4586a726c29e22f45704837021d08ddb3bf.pdf" }
conference
colmweb.org/COLM/2025/Conference
2,025
COLM
[ "pre-training", "model family", "compute efficiency" ]
I acknowledge that I and all co-authors of this work have read and commit to adhering to the COLM Code of Ethics on https://colmweb.org/CoE.html
I certify that this submission complies with the submission instructions as described on https://colmweb.org/AuthorGuide.html
null
null
@inproceedings{ yano2025efficient, title={Efficient Construction of Model Family through Progressive Training Using Model Expansion}, author={Kazuki Yano and Sho Takase and Sosuke Kobayashi and Shun Kiyono and Jun Suzuki}, booktitle={Second Conference on Language Modeling}, year={2025}, url={https://openreview.net/foru...
yano|efficient_construction_of_model_family_through_progressive_training_using_model_expansion
null
null
null
null
null
UNVEILING: What Makes Linguistics Olympiad Puzzles Tricky for LLMs?
The study presents a linguistic features-based annotation of Linguistics Olympiad puzzles to find LLM weaknesses; LLMs perform poorly on puzzles with higher morphological complexity, on languages dissimilar to English, and when the puzzle is data-constrained.
Large language models (LLMs) have demonstrated potential in reasoning tasks, but their performance on linguistics puzzles remains consistently poor. These puzzles, often derived from Linguistics Olympiad (LO) contests, provide a minimal contamination environment to assess LLMs' linguistic reasoning abilities across low...
[ "Mukund Choudhary", "KV Aditya Srivatsa", "Gaurja Aeron", "Antara Raaghavi Bhattacharya", "Dang Khoa Dang Dinh", "Ikhlasul Akmal Hanif", "Daria Kotova", "Ekaterina Kochmar", "Monojit Choudhury" ]
https://openreview.net/forum?id=fcRcl1EXc4
fcRcl1EXc4
fcRcl1EXc4
[ "~Mukund_Choudhary1", "~KV_Aditya_Srivatsa1", "~Gaurja_Aeron1", "~Antara_Raaghavi_Bhattacharya1", "~Dang_Khoa_Dang_Dinh1", "~Ikhlasul_Akmal_Hanif1", "~Daria_Kotova1", "~Ekaterina_Kochmar2", "~Monojit_Choudhury1" ]
{ "value": "COLM 2025" }
{ "value": "colmweb.org/COLM/2025/Conference" }
{ "value": "/pdf/e2a9b0fcf4a176a28b878dd7d16d5725a0164c0c.pdf" }
conference
colmweb.org/COLM/2025/Conference
2025
COLM
[ "linguistic reasoning", "metalinguistics", "LLM evaluation", "morphology", "linguistics olympiad", "interpretability", "low resource languages", "annotation" ]
I acknowledge that I and all co-authors of this work have read and commit to adhering to the COLM Code of Ethics on https://colmweb.org/CoE.html
I certify that this submission complies with the submission instructions as described on https://colmweb.org/AuthorGuide.html
null
null
@inproceedings{ choudhary2025unveiling, title={{UNVEILING}: What Makes Linguistics Olympiad Puzzles Tricky for {LLM}s?}, author={Mukund Choudhary and KV Aditya Srivatsa and Gaurja Aeron and Antara Raaghavi Bhattacharya and Dang Khoa Dang Dinh and Ikhlasul Akmal Hanif and Daria Kotova and Ekaterina Kochmar and Monojit C...
choudhary|unveiling_what_makes_linguistics_olympiad_puzzles_tricky_for_llms
null
null
null
null
null
AgentRewardBench: Evaluating Automatic Evaluations of Web Agent Trajectories
AgentRewardBench is a benchmark that evaluates how well Large Language Model (LLM) judges align with human preferences when assessing autonomous web agents, using over 1,200 expert-reviewed trajectories across five benchmarks
Web agents enable users to perform tasks on web browsers through natural language interaction. Evaluating web agent trajectories is an important problem, since it helps us determine whether the agent successfully completed the tasks. Rule-based methods are widely used for this purpose, but they are challenging to exte...
[ "Xing Han Lù", "Amirhossein Kazemnejad", "Nicholas Meade", "Arkil Patel", "Dongchan Shin", "Alejandra Zambrano", "Karolina Stanczak", "Peter Shaw", "Christopher Pal", "Siva Reddy" ]
https://openreview.net/forum?id=fQcUZMPIvu
fQcUZMPIvu
fQcUZMPIvu
[ "~Xing_Han_Lù1", "~Amirhossein_Kazemnejad1", "~Nicholas_Meade1", "~Arkil_Patel1", "~Dongchan_Shin1", "~Alejandra_Zambrano1", "~Karolina_Stanczak1", "~Peter_Shaw1", "~Christopher_Pal1", "~Siva_Reddy1" ]
{ "value": "COLM 2025" }
{ "value": "colmweb.org/COLM/2025/Conference" }
{ "value": "/pdf/9049329e5673f53bcd91fc9e3157946c0407caff.pdf" }
conference
colmweb.org/COLM/2025/Conference
2025
COLM
[ "Agent", "Web Agent", "LLM Judge", "LLM-as-a-Judge", "Digital Agent", "Benchmark", "Reward Modeling" ]
I acknowledge that I and all co-authors of this work have read and commit to adhering to the COLM Code of Ethics on https://colmweb.org/CoE.html
I certify that this submission complies with the submission instructions as described on https://colmweb.org/AuthorGuide.html
null
null
@inproceedings{ lu2025agentrewardbench, title={AgentRewardBench: Evaluating Automatic Evaluations of Web Agent Trajectories}, author={Xing Han L{\`u} and Amirhossein Kazemnejad and Nicholas Meade and Arkil Patel and Dongchan Shin and Alejandra Zambrano and Karolina Stanczak and Peter Shaw and Christopher Pal and Siva R...
lù|agentrewardbench_evaluating_automatic_evaluations_of_web_agent_trajectories
null
null
null
null
null
Inside-Out: Hidden Factual Knowledge in LLMs
We introduce a framework for evaluating the gap between the knowledge LLMs encode internally and what they express in their outputs, and provide strong evidence of this gap across popular LLMs.
This work presents a framework for assessing whether large language models (LLMs) encode more factual knowledge in their parameters than what they express in their outputs. While a few studies hint at this possibility, none has clearly defined or demonstrated this phenomenon. We first propose a formal definition of kno...
[ "Zorik Gekhman", "Eyal Ben-David", "Hadas Orgad", "Eran Ofek", "Yonatan Belinkov", "Idan Szpektor", "Jonathan Herzig", "Roi Reichart" ]
https://openreview.net/forum?id=f7GG1MbsSM
f7GG1MbsSM
f7GG1MbsSM
[ "~Zorik_Gekhman1", "~Eyal_Ben-David1", "~Hadas_Orgad1", "~Eran_Ofek1", "~Yonatan_Belinkov1", "~Idan_Szpektor1", "~Jonathan_Herzig2", "~Roi_Reichart1" ]
{ "value": "COLM 2025" }
{ "value": "colmweb.org/COLM/2025/Conference" }
{ "value": "/pdf/13c1300909f9eb3d5e428b69bc513b2e0ea8d642.pdf" }
conference
colmweb.org/COLM/2025/Conference
2025
COLM
[ "LLMs", "Knowledge" ]
I acknowledge that I and all co-authors of this work have read and commit to adhering to the COLM Code of Ethics on https://colmweb.org/CoE.html
I certify that this submission complies with the submission instructions as described on https://colmweb.org/AuthorGuide.html
null
null
@inproceedings{ gekhman2025insideout, title={Inside-Out: Hidden Factual Knowledge in {LLM}s}, author={Zorik Gekhman and Eyal Ben-David and Hadas Orgad and Eran Ofek and Yonatan Belinkov and Idan Szpektor and Jonathan Herzig and Roi Reichart}, booktitle={Second Conference on Language Modeling}, year={2025}, url={https:/...
gekhman|insideout_hidden_factual_knowledge_in_llms
null
null
null
null
null
The Unlearning Mirage: A Dynamic Framework for Evaluating LLM Unlearning
A Dynamic Framework for Evaluating LLM Unlearning
Unlearning in Large Language Models (LLMs) aims to enhance safety, mitigate biases, and comply with legal mandates, such as the right to be forgotten. However, existing unlearning methods are brittle: minor query modifications, such as multi-hop reasoning and entity aliasing, can recover supposedly forgotten informatio...
[ "Raj Sanjay Shah", "Jing Huang", "Keerthiram Murugesan", "Nathalie Baracaldo", "Diyi Yang" ]
https://openreview.net/forum?id=exW2SFJK4H
exW2SFJK4H
exW2SFJK4H
[ "~Raj_Sanjay_Shah2", "~Jing_Huang2", "~Keerthiram_Murugesan1", "~Nathalie_Baracaldo1", "~Diyi_Yang2" ]
{ "value": "COLM 2025" }
{ "value": "colmweb.org/COLM/2025/Conference" }
{ "value": "/pdf/c17d1f36ddb6d284517474f548310cba9ce3ae00.pdf" }
conference
colmweb.org/COLM/2025/Conference
2025
COLM
[ "Unlearning evaluation", "Multi-hop reasoning" ]
I acknowledge that I and all co-authors of this work have read and commit to adhering to the COLM Code of Ethics on https://colmweb.org/CoE.html
I certify that this submission complies with the submission instructions as described on https://colmweb.org/AuthorGuide.html
null
null
@inproceedings{ shah2025the, title={The Unlearning Mirage: A Dynamic Framework for Evaluating {LLM} Unlearning}, author={Raj Sanjay Shah and Jing Huang and Keerthiram Murugesan and Nathalie Baracaldo and Diyi Yang}, booktitle={Second Conference on Language Modeling}, year={2025}, url={https://openreview.net/forum?id=ex...
shah|the_unlearning_mirage_a_dynamic_framework_for_evaluating_llm_unlearning
/attachment/3effe50b2de608ddbd7719e8b65eb2fd13f7c786.zip
true
null
null
null
EvalAgents: Discovering Implicit Evaluation Criteria from the Web
We propose a novel framework EvalAgent that dynamically generates grounded, implicit evaluation criteria for a given prompt based on retrieved expert advice.
Evaluation of language model outputs on structured writing tasks is typically conducted with a number of desirable criteria presented to human evaluators or large language models (LLMs). For instance, on a prompt like "Help me draft an academic talk on coffee intake vs research productivity", a model response may be ev...
[ "Manya Wadhwa", "Zayne Rea Sprague", "Chaitanya Malaviya", "Philippe Laban", "Junyi Jessy Li", "Greg Durrett" ]
https://openreview.net/forum?id=erGpkHCybv
erGpkHCybv
erGpkHCybv
[ "~Manya_Wadhwa1", "~Zayne_Rea_Sprague1", "~Chaitanya_Malaviya1", "~Philippe_Laban1", "~Junyi_Jessy_Li2", "~Greg_Durrett1" ]
{ "value": "COLM 2025" }
{ "value": "colmweb.org/COLM/2025/Conference" }
{ "value": "/pdf/231c145f57a04150de2e76cfc963745238701095.pdf" }
conference
colmweb.org/COLM/2025/Conference
2025
COLM
[ "evaluation", "writing", "large language models" ]
I acknowledge that I and all co-authors of this work have read and commit to adhering to the COLM Code of Ethics on https://colmweb.org/CoE.html
I certify that this submission complies with the submission instructions as described on https://colmweb.org/AuthorGuide.html
null
null
@inproceedings{ wadhwa2025evalagents, title={EvalAgents: Discovering Implicit Evaluation Criteria from the Web}, author={Manya Wadhwa and Zayne Rea Sprague and Chaitanya Malaviya and Philippe Laban and Junyi Jessy Li and Greg Durrett}, booktitle={Second Conference on Language Modeling}, year={2025}, url={https://openre...
wadhwa|evalagents_discovering_implicit_evaluation_criteria_from_the_web
null
null
null
null
null
Discovering Knowledge Deficiencies of Language Models on Massive Knowledge Base
We propose stochastic error ascent, an optimization-based framework that efficiently identifies and refines failure modes in LLMs, discovering significantly more errors than existing methods while reducing evaluation costs.
Large language models (LLMs) possess impressive linguistic capabilities but often fail to faithfully retain factual knowledge, leading to hallucinations and unreliable outputs. Understanding LLMs' knowledge deficiencies by exhaustively evaluating against full-scale knowledge bases is computationally prohibitive, especi...
[ "Linxin Song", "Xuwei Ding", "Jieyu Zhang", "Taiwei Shi", "Ryotaro Shimizu", "Rahul Gupta", "Yang Liu", "Jian Kang", "Jieyu Zhao" ]
https://openreview.net/forum?id=eqNItk1sWo
eqNItk1sWo
eqNItk1sWo
[ "~Linxin_Song1", "~Xuwei_Ding1", "~Jieyu_Zhang1", "~Taiwei_Shi1", "~Ryotaro_Shimizu1", "~Rahul_Gupta3", "~Yang_Liu60", "~Jian_Kang1", "~Jieyu_Zhao1" ]
{ "value": "COLM 2025" }
{ "value": "colmweb.org/COLM/2025/Conference" }
{ "value": "/pdf/084e39ff598ebc6cfeb04aecfccaafaf4bf56c00.pdf" }
conference
colmweb.org/COLM/2025/Conference
2025
COLM
[ "Large Language Models", "Evaluation", "Misinformation" ]
I acknowledge that I and all co-authors of this work have read and commit to adhering to the COLM Code of Ethics on https://colmweb.org/CoE.html
I certify that this submission complies with the submission instructions as described on https://colmweb.org/AuthorGuide.html
null
null
@inproceedings{ song2025discovering, title={Discovering Knowledge Deficiencies of Language Models on Massive Knowledge Base}, author={Linxin Song and Xuwei Ding and Jieyu Zhang and Taiwei Shi and Ryotaro Shimizu and Rahul Gupta and Yang Liu and Jian Kang and Jieyu Zhao}, booktitle={Second Conference on Language Modelin...
song|discovering_knowledge_deficiencies_of_language_models_on_massive_knowledge_base
null
null
null
null
null
MuSeD: A Multimodal Spanish Dataset for Sexism Detection in Social Media Videos
We present MuSeD, a multimodal Spanish dataset for sexism detection in videos, with an innovative annotation framework for analyzing the contribution of textual and multimodal labels. We evaluate the performance of LLMs and multimodal LLMs on MuSeD.
Sexism is generally defined as prejudice and discrimination based on sex or gender, affecting every sector of society, from social institutions to relationships and individual behavior. Social media platforms amplify the impact of sexism by conveying discriminatory content not only through text but also across multiple...
[ "Laura De Grazia", "Pol Pastells", "Mauro Vázquez Chas", "Desmond Elliott", "Danae Sanchez Villegas", "Mireia Farrús", "Mariona Taulé Delor" ]
https://openreview.net/forum?id=eSAv7GKVFt
eSAv7GKVFt
eSAv7GKVFt
[ "~Laura_De_Grazia1", "~Pol_Pastells1", "~Mauro_Vázquez_Chas1", "~Desmond_Elliott1", "~Danae_Sanchez_Villegas1", "~Mireia_Farrús1", "~Mariona_Taulé_Delor1" ]
{ "value": "COLM 2025" }
{ "value": "colmweb.org/COLM/2025/Conference" }
{ "value": "/pdf/19195e99dd51cc5a9d15e559e82f4506a2d30845.pdf" }
conference
colmweb.org/COLM/2025/Conference
2025
COLM
[ "sexism", "multimodal", "classification", "social media", "LLMs" ]
I acknowledge that I and all co-authors of this work have read and commit to adhering to the COLM Code of Ethics on https://colmweb.org/CoE.html
I certify that this submission complies with the submission instructions as described on https://colmweb.org/AuthorGuide.html
true
Dataset contains samples with discrimination/stereotype/inequality/objectification in videos, especially sensitive samples from BitChute platform where annotators returned ~94% samples as sexist. The proposed Spanish dataset for sexism detection may contribute to the emergence of sexual harassment-related public opini...
@inproceedings{ grazia2025mused, title={MuSeD: A Multimodal Spanish Dataset for Sexism Detection in Social Media Videos}, author={Laura De Grazia and Pol Pastells and Mauro V{\'a}zquez Chas and Desmond Elliott and Danae Sanchez Villegas and Mireia Farr{\'u}s and Mariona Taul{\'e} Delor}, booktitle={Second Conference on...
grazia|mused_a_multimodal_spanish_dataset_for_sexism_detection_in_social_media_videos
null
null
null
null
null
Meta-Learning for Speeding Up Large Model Inference in Decentralized Environments
We propose a meta-learning framework to optimize inference acceleration in decentralized AI systems based on task-specific data, promoting more cost-effective and scalable AI deployment.
The deployment of large-scale models, such as large language models (LLMs), incurs substantial costs due to their computational demands. To mitigate these costs and address challenges related to scalability and data security, there is a growing shift towards decentralized systems for model deployment, where choosing ef...
[ "Yipeng Du", "Zihao Wang", "Ahmad Farhan", "Claudio Angione", "Harry Yang", "Fielding Johnston", "James P. Buban", "Yue Zhao", "Yuzhe Yang" ]
https://openreview.net/forum?id=eLWn2XVMHA
eLWn2XVMHA
eLWn2XVMHA
[ "~Yipeng_Du2", "~Zihao_Wang36", "~Ahmad_Farhan1", "~Claudio_Angione1", "~Harry_Yang2", "~Fielding_Johnston1", "~James_P._Buban1", "~Yue_Zhao13", "~Yuzhe_Yang1" ]
{ "value": "COLM 2025" }
{ "value": "colmweb.org/COLM/2025/Conference" }
{ "value": "/pdf/2448403da60ff435885d137ba6343485b7151611.pdf" }
conference
colmweb.org/COLM/2025/Conference
2025
COLM
[ "Fast inference", "LLMs", "meta learning", "decentralized systems" ]
I acknowledge that I and all co-authors of this work have read and commit to adhering to the COLM Code of Ethics on https://colmweb.org/CoE.html
I certify that this submission complies with the submission instructions as described on https://colmweb.org/AuthorGuide.html
null
null
@inproceedings{ du2025metalearning, title={Meta-Learning for Speeding Up Large Model Inference in Decentralized Environments}, author={Yipeng Du and Zihao Wang and Ahmad Farhan and Claudio Angione and Harry Yang and Fielding Johnston and James P. Buban and Yue Zhao and Yuzhe Yang}, booktitle={Second Conference on Langu...
du|metalearning_for_speeding_up_large_model_inference_in_decentralized_environments
null
null
null
null
null
(Im)possibility of Automated Hallucination Detection in Large Language Models
We propose a novel theoretical model for hallucination detection and show that it is generally impossible to automate this task only with positive examples; however, if we have negative examples, the task becomes much easier.
Is automated hallucination detection fundamentally possible? In this paper, we introduce a theoretical framework to rigorously study the (im)possibility of automatically detecting hallucinations produced by large language models (LLMs). Our model builds on the classical Gold-Angluin framework of language identification...
[ "Amin Karbasi", "Omar Montasser", "John Sous", "Grigoris Velegkas" ]
https://openreview.net/forum?id=e5jWdZIX0Q
e5jWdZIX0Q
e5jWdZIX0Q
[ "~Amin_Karbasi3", "~Omar_Montasser1", "~John_Sous1", "~Grigoris_Velegkas1" ]
{ "value": "COLM 2025" }
{ "value": "colmweb.org/COLM/2025/Conference" }
{ "value": "/pdf/6b7c25bf886c16a2b1e8e9a0fb45dfbf9edae089.pdf" }
conference
colmweb.org/COLM/2025/Conference
2025
COLM
[ "hallucinations", "theory", "RLHF" ]
I acknowledge that I and all co-authors of this work have read and commit to adhering to the COLM Code of Ethics on https://colmweb.org/CoE.html
I certify that this submission complies with the submission instructions as described on https://colmweb.org/AuthorGuide.html
null
null
@inproceedings{ karbasi2025impossibility, title={(Im)possibility of Automated Hallucination Detection in Large Language Models}, author={Amin Karbasi and Omar Montasser and John Sous and Grigoris Velegkas}, booktitle={Second Conference on Language Modeling}, year={2025}, url={https://openreview.net/forum?id=e5jWdZIX0Q}...
karbasi|impossibility_of_automated_hallucination_detection_in_large_language_models
null
null
null
null
null
Overfill: Two-Stage Models for Efficient Language Model Decoding
Leveraging more compute during prefill, OverFill improves generation quality with minimal latency overhead.
Large language models (LLMs) excel across diverse tasks but face significant deployment challenges due to high inference costs. LLM inference comprises prefill (compute-bound) and decode (memory-bound) stages, with decode dominating latency particularly for long sequences. Current decoder-only models handle both stages...
[ "Woojeong Kim", "Junxiong Wang", "Jing Nathan Yan", "Mohamed S. Abdelfattah", "Alexander M Rush" ]
https://openreview.net/forum?id=e112iu5ssg
e112iu5ssg
e112iu5ssg
[ "~Woojeong_Kim1", "~Junxiong_Wang1", "~Jing_Nathan_Yan1", "~Mohamed_S._Abdelfattah1", "~Alexander_M_Rush1" ]
{ "value": "COLM 2025" }
{ "value": "colmweb.org/COLM/2025/Conference" }
{ "value": "/pdf/a53da3fa4a7edb32a0e94623f81ffc790ebd31f8.pdf" }
conference
colmweb.org/COLM/2025/Conference
2025
COLM
[ "deep learning", "large language models", "inference efficiency" ]
I acknowledge that I and all co-authors of this work have read and commit to adhering to the COLM Code of Ethics on https://colmweb.org/CoE.html
I certify that this submission complies with the submission instructions as described on https://colmweb.org/AuthorGuide.html
null
null
@inproceedings{ kim2025overfill, title={Overfill: Two-Stage Models for Efficient Language Model Decoding}, author={Woojeong Kim and Junxiong Wang and Jing Nathan Yan and Mohamed S. Abdelfattah and Alexander M Rush}, booktitle={Second Conference on Language Modeling}, year={2025}, url={https://openreview.net/forum?id=e1...
kim|overfill_twostage_models_for_efficient_language_model_decoding
null
null
null
null
null
URANIA: Differentially Private Insights into AI Use
We introduce a novel framework primarily designed for generating insights about LLM chatbot interactions with differential privacy guarantees, while also being applicable to other text corpora.
We introduce _Urania_, a novel framework for generating insights about LLM chatbot interactions with rigorous differential privacy (DP) guarantees. The framework employs a private clustering mechanism and innovative keyword extraction methods, including frequency-based, TF-IDF-based, and LLM-guided approaches. By lever...
[ "Daogao Liu", "Edith Cohen", "Badih Ghazi", "Peter Kairouz", "Pritish Kamath", "Alexander Knop", "Ravi Kumar", "Pasin Manurangsi", "Adam Sealfon", "Da Yu", "Chiyuan Zhang" ]
https://openreview.net/forum?id=dujG4nGClA
dujG4nGClA
dujG4nGClA
[ "~Daogao_Liu1", "~Edith_Cohen1", "~Badih_Ghazi1", "~Peter_Kairouz1", "~Pritish_Kamath2", "~Alexander_Knop1", "~Ravi_Kumar1", "~Pasin_Manurangsi2", "~Adam_Sealfon1", "~Da_Yu1", "~Chiyuan_Zhang1" ]
{ "value": "COLM 2025" }
{ "value": "colmweb.org/COLM/2025/Conference" }
{ "value": "/pdf/ed4ba9cf9757208ce5d3327f79b31d0c9f1d18b9.pdf" }
conference
colmweb.org/COLM/2025/Conference
2025
COLM
[ "Differential Privacy", "Clustering", "Summarization" ]
I acknowledge that I and all co-authors of this work have read and commit to adhering to the COLM Code of Ethics on https://colmweb.org/CoE.html
I certify that this submission complies with the submission instructions as described on https://colmweb.org/AuthorGuide.html
null
null
@inproceedings{ liu2025urania, title={{URANIA}: Differentially Private Insights into {AI} Use}, author={Daogao Liu and Edith Cohen and Badih Ghazi and Peter Kairouz and Pritish Kamath and Alexander Knop and Ravi Kumar and Pasin Manurangsi and Adam Sealfon and Da Yu and Chiyuan Zhang}, booktitle={Second Conference on La...
liu|urania_differentially_private_insights_into_ai_use
null
null
null
null
null
PersonaEval: Are LLM Evaluators Human Enough to Judge Role-Play?
We introduce PersonaEval, a benchmark showing that LLMs struggle to judge role-play like humans. Even top models fail basic role identification, highlighting a need for more human-like reasoning in LLM evaluators.
Current role-play studies often rely on unvalidated LLM-as-a-judge paradigms, which may fail to reflect how humans perceive role fidelity. A key prerequisite for human-aligned evaluation is role identification, the ability to recognize who is speaking based on dialogue context. We argue that any meaningful judgment of ...
[ "Lingfeng Zhou", "Jialing Zhang", "Jin Gao", "Mohan Jiang", "Dequan Wang" ]
https://openreview.net/forum?id=drdrFhKYjP
drdrFhKYjP
drdrFhKYjP
[ "~Lingfeng_Zhou1", "~Jialing_Zhang1", "~Jin_Gao3", "~Mohan_Jiang1", "~Dequan_Wang1" ]
{ "value": "COLM 2025" }
{ "value": "colmweb.org/COLM/2025/Conference" }
{ "value": "/pdf/7d82995478de95962a51816e94a3e690c84e9d2e.pdf" }
conference
colmweb.org/COLM/2025/Conference
2025
COLM
[ "Role-play", "evaluating LLM evaluators", "benchmark" ]
I acknowledge that I and all co-authors of this work have read and commit to adhering to the COLM Code of Ethics on https://colmweb.org/CoE.html
I certify that this submission complies with the submission instructions as described on https://colmweb.org/AuthorGuide.html
null
null
@inproceedings{ zhou2025personaeval, title={PersonaEval: Are {LLM} Evaluators Human Enough to Judge Role-Play?}, author={Lingfeng Zhou and Jialing Zhang and Jin Gao and Mohan Jiang and Dequan Wang}, booktitle={Second Conference on Language Modeling}, year={2025}, url={https://openreview.net/forum?id=drdrFhKYjP} }
zhou|personaeval_are_llm_evaluators_human_enough_to_judge_roleplay
/attachment/6a25cb35829c49d55a1670963857bef598b95c9e.zip
null
null
null
null
Learning to Reason for Long-Form Story Generation
We propose a long-form story generation task, Next-Chapter Prediction, and a novel reward formulation that allows us to learn reasoning traces which improve predicted chapters without a labeled dataset.
Generating high-quality stories spanning thousands of tokens requires competency across a variety of skills, from tracking plot and character arcs to keeping a consistent and engaging style. Due to the difficulty of sourcing labeled datasets and precise quality measurements, most work using large language models (LLMs)...
[ "Alexander Gurung", "Mirella Lapata" ]
https://openreview.net/forum?id=dr3eg5ehR2
dr3eg5ehR2
dr3eg5ehR2
[ "~Alexander_Gurung1", "~Mirella_Lapata1" ]
{ "value": "COLM 2025" }
{ "value": "colmweb.org/COLM/2025/Conference" }
{ "value": "/pdf/bfa23a57d4114b772b5398b348d3308aa0ebfd0f.pdf" }
conference
colmweb.org/COLM/2025/Conference
2025
COLM
[ "story generation", "reasoning", "reinforcement learning", "long-context generation" ]
I acknowledge that I and all co-authors of this work have read and commit to adhering to the COLM Code of Ethics on https://colmweb.org/CoE.html
I certify that this submission complies with the submission instructions as described on https://colmweb.org/AuthorGuide.html
null
null
@inproceedings{ gurung2025learning, title={Learning to Reason for Long-Form Story Generation}, author={Alexander Gurung and Mirella Lapata}, booktitle={Second Conference on Language Modeling}, year={2025}, url={https://openreview.net/forum?id=dr3eg5ehR2} }
gurung|learning_to_reason_for_longform_story_generation
null
null
null
null
null
Echo Chamber: RL Post-training Amplifies Behaviors Learned in Pretraining
We conduct a systematic end-to-end study of RL fine-tuning from scratch for mathematical reasoning, uncovering how RL shapes model behavior across scales and data mixtures.
Reinforcement learning (RL)-based fine-tuning has become a crucial step in post-training language models for advanced mathematical reasoning and coding. Following the success of frontier reasoning models, recent work has demonstrated that RL fine-tuning consistently improves performance, even in smaller-scale models; h...
[ "Rosie Zhao", "Alexandru Meterez", "Sham M. Kakade", "Cengiz Pehlevan", "Samy Jelassi", "Eran Malach" ]
https://openreview.net/forum?id=dp4KWuSDzj
dp4KWuSDzj
dp4KWuSDzj
[ "~Rosie_Zhao1", "~Alexandru_Meterez1", "~Sham_M._Kakade1", "~Cengiz_Pehlevan2", "~Samy_Jelassi1", "~Eran_Malach3" ]
{ "value": "COLM 2025" }
{ "value": "colmweb.org/COLM/2025/Conference" }
{ "value": "/pdf/8cfad45ef47f433c49770fdee1de3c3007efecc7.pdf" }
conference
colmweb.org/COLM/2025/Conference
2025
COLM
[ "reinforcement learning", "language models", "post-training", "ppo", "pretraining", "reasoning" ]
I acknowledge that I and all co-authors of this work have read and commit to adhering to the COLM Code of Ethics on https://colmweb.org/CoE.html
I certify that this submission complies with the submission instructions as described on https://colmweb.org/AuthorGuide.html
null
null
@inproceedings{ zhao2025echo, title={Echo Chamber: {RL} Post-training Amplifies Behaviors Learned in Pretraining}, author={Rosie Zhao and Alexandru Meterez and Sham M. Kakade and Cengiz Pehlevan and Samy Jelassi and Eran Malach}, booktitle={Second Conference on Language Modeling}, year={2025}, url={https://openreview.n...
zhao|echo_chamber_rl_posttraining_amplifies_behaviors_learned_in_pretraining
null
null
null
null
null
Evaluating LLMs on Chinese Idiom Translation
We evaluate large language models on Chinese idiom translation across multiple domains and introduce new metrics that reliably measure idiom translation quality, significantly outperforming existing translation metrics.
Idioms, whose figurative meanings usually differ from their literal interpretations, are common in everyday language, especially in Chinese, where they often contain historical references and follow specific structural patterns. Despite recent progress in machine translation with large language models, little is known ...
[ "Cai Yang", "Yao Dou", "David Heineman", "Xiaofeng Wu", "Wei Xu" ]
https://openreview.net/forum?id=dkE5rveDuh
dkE5rveDuh
dkE5rveDuh
[ "~Cai_Yang1", "~Yao_Dou1", "~David_Heineman1", "~Xiaofeng_Wu5", "~Wei_Xu5" ]
{ "value": "COLM 2025" }
{ "value": "colmweb.org/COLM/2025/Conference" }
{ "value": "/pdf/288488f3af0e0caf3be74ad0d0cefb3b2efbb459.pdf" }
conference
colmweb.org/COLM/2025/Conference
2025
COLM
[ "Large Language Model Evaluation", "Chinese Idioms", "Meta Analysis", "Multilingual Evaluation" ]
I acknowledge that I and all co-authors of this work have read and commit to adhering to the COLM Code of Ethics on https://colmweb.org/CoE.html
I certify that this submission complies with the submission instructions as described on https://colmweb.org/AuthorGuide.html
null
null
@inproceedings{ yang2025evaluating, title={Evaluating {LLM}s on Chinese Idiom Translation}, author={Cai Yang and Yao Dou and David Heineman and Xiaofeng Wu and Wei Xu}, booktitle={Second Conference on Language Modeling}, year={2025}, url={https://openreview.net/forum?id=dkE5rveDuh} }
yang|evaluating_llms_on_chinese_idiom_translation
null
null
This submission is NOT exempt from the Reciprocal Reviewing requirement. (We expect most submissions to fall in this category.)
~Yao_Dou1
{ "readers": [ "colmweb.org/COLM/2025/Conference", "colmweb.org/COLM/2025/Conference/Submission992/Authors" ] }
QAPyramid: Fine-grained Evaluation of Content Selection for Text Summarization
Fine-grained Evaluation of Content Selection for Text Summarization
How to properly conduct human evaluations for text summarization is a longstanding challenge. The Pyramid human evaluation protocol, which assesses content selection by breaking the reference summary into sub-units and verifying their presence in the system summary, has been widely adopted. However, it suffers from a l...
[ "Shiyue Zhang", "David Wan", "Arie Cattan", "Ayal Klein", "Ido Dagan", "Mohit Bansal" ]
https://openreview.net/forum?id=dZRzInscvA
dZRzInscvA
dZRzInscvA
[ "~Shiyue_Zhang1", "~David_Wan1", "~Arie_Cattan1", "~Ayal_Klein1", "~Ido_Dagan1", "~Mohit_Bansal2" ]
{ "value": "COLM 2025" }
{ "value": "colmweb.org/COLM/2025/Conference" }
{ "value": "/pdf/4120b6c3ef2897eadd231d9b69c23815f5944c22.pdf" }
conference
colmweb.org/COLM/2025/Conference
2025
COLM
[ "Evaluation", "Summarization" ]
I acknowledge that I and all co-authors of this work have read and commit to adhering to the COLM Code of Ethics on https://colmweb.org/CoE.html
I certify that this submission complies with the submission instructions as described on https://colmweb.org/AuthorGuide.html
null
null
@inproceedings{ zhang2025qapyramid, title={{QAP}yramid: Fine-grained Evaluation of Content Selection for Text Summarization}, author={Shiyue Zhang and David Wan and Arie Cattan and Ayal Klein and Ido Dagan and Mohit Bansal}, booktitle={Second Conference on Language Modeling}, year={2025}, url={https://openreview.net/fo...
zhang|qapyramid_finegrained_evaluation_of_content_selection_for_text_summarization
null
null
null
null
null
Steering the CensorShip: Uncovering Representation Vectors for LLM "Thought" Control
We examine safety-tuned LLMs and discover representation vectors for measuring and controlling censorship imposed through refusal and thought suppression.
Large language models (LLMs) have transformed the way we access information. These models are often tuned to refuse to comply with requests that are considered harmful and to produce responses that better align with the preferences of those who control the models. To understand how this "censorship" works, we use repr...
[ "Hannah Cyberey", "David Evans" ]
https://openreview.net/forum?id=dVqZBagXF3
dVqZBagXF3
dVqZBagXF3
[ "~Hannah_Cyberey1", "~David_Evans1" ]
{ "value": "COLM 2025" }
{ "value": "colmweb.org/COLM/2025/Conference" }
{ "value": "/pdf/70893e89e94379603250043cfb412820efbd25a4.pdf" }
conference
colmweb.org/COLM/2025/Conference
2025
COLM
[ "censorship", "activation steering", "representation engineering", "reasoning LLMs" ]
I acknowledge that I and all co-authors of this work have read and commit to adhering to the COLM Code of Ethics on https://colmweb.org/CoE.html
I certify that this submission complies with the submission instructions as described on https://colmweb.org/AuthorGuide.html
null
null
@inproceedings{ cyberey2025steering, title={Steering the CensorShip: Uncovering Representation Vectors for {LLM} ''Thought'' Control}, author={Hannah Cyberey and David Evans}, booktitle={Second Conference on Language Modeling}, year={2025}, url={https://openreview.net/forum?id=dVqZBagXF3} }
cyberey|steering_the_censorship_uncovering_representation_vectors_for_llm_thought_control
null
null
null
null
null
Cutting the Root of Hallucination: Structural Trimming for Vulnerability Mitigation in Code LLMs
LLMs hallucinate code often with security risks. We trace these structurally, prune them surgically, and predict repair effectiveness. Our method patches code and mitigates risk using a transferable score (CSHS).
We introduce a structural perspective on hallucinations in code-generating language models, framing them as causality anchors in syntax graphs that trigger cascading semantic errors and latent security flaws. This work is the first to systematically connect code hallucinations with vulnerability risks, offering a unifi...
[ "Yage Zhang" ]
https://openreview.net/forum?id=dU4Y2sNfJ2
dU4Y2sNfJ2
dU4Y2sNfJ2
[ "~Yage_Zhang2" ]
{ "value": "COLM 2025" }
{ "value": "colmweb.org/COLM/2025/Conference" }
{ "value": "/pdf/94228dff9c0d4e67d4c92dc5132ee037a4340a14.pdf" }
conference
colmweb.org/COLM/2025/Conference
2,025
COLM
[ "LLM hallucinations", "code generation", "program repair", "vulnerability mitigation", "structural pruning", "abstract syntax tree", "hallucination detection", "CSHS", "model-agnostic risk estimation", "generative code safety" ]
I acknowledge that I and all co-authors of this work have read and commit to adhering to the COLM Code of Ethics on https://colmweb.org/CoE.html
I certify that this submission complies with the submission instructions as described on https://colmweb.org/AuthorGuide.html
null
null
@inproceedings{ zhang2025cutting, title={Cutting the Root of Hallucination: Structural Trimming for Vulnerability Mitigation in Code {LLM}s}, author={Yage Zhang}, booktitle={Second Conference on Language Modeling}, year={2025}, url={https://openreview.net/forum?id=dU4Y2sNfJ2} }
zhang|cutting_the_root_of_hallucination_structural_trimming_for_vulnerability_mitigation_in_code_llms
null
null
null
null
null
Algorithm Discovery With LLMs: Evolutionary Search Meets Reinforcement Learning
We propose an approach that integrates LLM-based evolutionary search with RL fine-tuning for accelerated discovery of algorithms, as demonstrated on combinatorial optimization tasks.
Discovering efficient algorithms for solving complex problems has been an outstanding challenge in mathematics and computer science, requiring substantial human expertise over the years. Recent advancements in evolutionary search with large language models (LLMs) have shown promise in accelerating the discovery of algo...
[ "Anja Šurina", "Amin Mansouri", "Lars C.P.M. Quaedvlieg", "Amal Seddas", "Maryna Viazovska", "Emmanuel Abbe", "Caglar Gulcehre" ]
https://openreview.net/forum?id=dNW3RGW0gi
dNW3RGW0gi
dNW3RGW0gi
[ "~Anja_Šurina1", "~Amin_Mansouri1", "~Lars_C.P.M._Quaedvlieg1", "~Amal_Seddas1", "~Maryna_Viazovska1", "~Emmanuel_Abbe1", "~Caglar_Gulcehre1" ]
{ "value": "COLM 2025" }
{ "value": "colmweb.org/COLM/2025/Conference" }
{ "value": "/pdf/92babb3133c8776c7888a177c504da576de4c7f4.pdf" }
conference
colmweb.org/COLM/2025/Conference
2,025
COLM
[ "Large Language Models", "Reinforcement Learning", "Evolutionary Search", "Algorithm Discovery", "Self-Improvement", "AI for Math" ]
I acknowledge that I and all co-authors of this work have read and commit to adhering to the COLM Code of Ethics on https://colmweb.org/CoE.html
I certify that this submission complies with the submission instructions as described on https://colmweb.org/AuthorGuide.html
null
null
@inproceedings{ surina2025algorithm, title={Algorithm Discovery With {LLM}s: Evolutionary Search Meets Reinforcement Learning}, author={Anja {\v{S}}urina and Amin Mansouri and Lars C.P.M. Quaedvlieg and Amal Seddas and Maryna Viazovska and Emmanuel Abbe and Caglar Gulcehre}, booktitle={Second Conference on Language Mod...
urina|algorithm_discovery_with_llms_evolutionary_search_meets_reinforcement_learning
null
null
null
null
null
You Cannot Feed Two Birds with One Score: the Accuracy-Naturalness Tradeoff in Translation
We prove mathematically and demonstrate empirically that optimizing a single metric for machine translation *cannot* lead to a system that is both accurate and fluent. We also establish a connection between no-reference metrics and our theory.
The goal of translation, be it by human or by machine, is, given some text in a source language, to produce text in a target language that simultaneously 1) preserves the meaning of the source text and 2) achieves natural expression in the target language. However, researchers in the machine translation community usual...
[ "Gergely Flamich", "David Vilar", "Jan-Thorsten Peter", "Markus Freitag" ]
https://openreview.net/forum?id=d9EkgbZZH9
d9EkgbZZH9
d9EkgbZZH9
[ "~Gergely_Flamich1", "~David_Vilar1", "~Jan-Thorsten_Peter1", "~Markus_Freitag2" ]
{ "value": "COLM 2025" }
{ "value": "colmweb.org/COLM/2025/Conference" }
{ "value": "/pdf/e125a1f2f817db9daf4c4a2ed3b825ae306572f5.pdf" }
conference
colmweb.org/COLM/2025/Conference
2,025
COLM
[ "translation", "accuracy", "naturalness", "tradeoff", "distortion", "perception", "no-reference metric" ]
I acknowledge that I and all co-authors of this work have read and commit to adhering to the COLM Code of Ethics on https://colmweb.org/CoE.html
I certify that this submission complies with the submission instructions as described on https://colmweb.org/AuthorGuide.html
null
null
@inproceedings{ flamich2025you, title={You Cannot Feed Two Birds with One Score: the Accuracy-Naturalness Tradeoff in Translation}, author={Gergely Flamich and David Vilar and Jan-Thorsten Peter and Markus Freitag}, booktitle={Second Conference on Language Modeling}, year={2025}, url={https://openreview.net/forum?id=d9...
flamich|you_cannot_feed_two_birds_with_one_score_the_accuracynaturalness_tradeoff_in_translation
null
null
null
null
null
Teach Old SAEs New Domain Tricks with Boosting
We propose a method to add domain-specific features into a pre-trained SAE.
Sparse Autoencoders have emerged as powerful tools for interpreting the internal representations of Large Language Models, yet they often fail to capture domain-specific features not prevalent in their training corpora. This paper introduces a residual learning approach that addresses this feature blindness without req...
[ "Nikita Koriagin", "Yaroslav Aksenov", "Daniil Laptev", "Gleb Gerasimov", "Nikita Balagansky", "Daniil Gavrilov" ]
https://openreview.net/forum?id=d4XXFVAlV7
d4XXFVAlV7
d4XXFVAlV7
[ "~Nikita_Koriagin1", "~Yaroslav_Aksenov1", "~Daniil_Laptev1", "~Gleb_Gerasimov1", "~Nikita_Balagansky3", "~Daniil_Gavrilov1" ]
{ "value": "COLM 2025" }
{ "value": "colmweb.org/COLM/2025/Conference" }
{ "value": "/pdf/239b22ba90f8847d2c8c75fead3882620a2ec8d6.pdf" }
conference
colmweb.org/COLM/2025/Conference
2,025
COLM
[ "LLM", "Interpretability", "SAE" ]
I acknowledge that I and all co-authors of this work have read and commit to adhering to the COLM Code of Ethics on https://colmweb.org/CoE.html
I certify that this submission complies with the submission instructions as described on https://colmweb.org/AuthorGuide.html
null
null
@inproceedings{ koriagin2025teach, title={Teach Old {SAE}s New Domain Tricks with Boosting}, author={Nikita Koriagin and Yaroslav Aksenov and Daniil Laptev and Gleb Gerasimov and Nikita Balagansky and Daniil Gavrilov}, booktitle={Second Conference on Language Modeling}, year={2025}, url={https://openreview.net/forum?id...
koriagin|teach_old_saes_new_domain_tricks_with_boosting
null
null
null
null
null
CultureCLIP: Empowering CLIP with Cultural Awareness through Synthetic Images and Contextualized Captions
We introduce CulTwin, a synthetic cultural dataset of visually similar concept pairs with contextualized captions, and CultureCLIP, a CLIP-based model fine-tuned to better distinguish visually similar yet culturally distinct concepts.
Pretrained vision-language models (VLMs) such as CLIP excel in general multimodal comprehension but often struggle to capture nuanced, context-dependent visual cues. This makes it difficult to distinguish between similar-looking concepts with potentially different cultural meanings. Such deficiencies are mainly due to ...
[ "Yuchen Huang", "Zhiyuan Fan", "Zhitao He", "Sandeep Polisetty", "Wenyan Li", "Yi R. Fung" ]
https://openreview.net/forum?id=cWVpXWARbt
cWVpXWARbt
cWVpXWARbt
[ "~Yuchen_Huang4", "~Zhiyuan_Fan2", "~Zhitao_He1", "~Sandeep_Polisetty1", "~Wenyan_Li1", "~Yi_R._Fung1" ]
{ "value": "COLM 2025" }
{ "value": "colmweb.org/COLM/2025/Conference" }
{ "value": "/pdf/43578f00390a568a2e3b91a5e2c0dd7ab5d002b7.pdf" }
conference
colmweb.org/COLM/2025/Conference
2,025
COLM
[ "Vision-Language Models", "Cultural Understanding", "Fine-Grained Recognition", "Contextual Knowledge", "Synthetic Data Generation", "Contrastive Learning" ]
I acknowledge that I and all co-authors of this work have read and commit to adhering to the COLM Code of Ethics on https://colmweb.org/CoE.html
I certify that this submission complies with the submission instructions as described on https://colmweb.org/AuthorGuide.html
null
null
@inproceedings{ huang2025cultureclip, title={Culture{CLIP}: Empowering {CLIP} with Cultural Awareness through Synthetic Images and Contextualized Captions}, author={Yuchen Huang and Zhiyuan Fan and Zhitao He and Sandeep Polisetty and Wenyan Li and Yi R. Fung}, booktitle={Second Conference on Language Modeling}, year={2...
huang|cultureclip_empowering_clip_with_cultural_awareness_through_synthetic_images_and_contextualized_captions
null
null
null
null
null
Detecting and Pruning Prominent but Detrimental Neurons in Large Language Models
We fine-tune LLMs by pruning shortcut neurons using Integrated Gradients, improving generalization and performance on multiple-choice benchmarks.
Large language models (LLMs) often develop learned mechanisms specialized to specific datasets, such as reliance on domain-specific correlations, which yield high-confidence predictions without generalizable reasoning. While beneficial in one setting, these dataset-specific mechanisms typically degrade performance when...
[ "Ameen Ali Ali", "Shahar Katz", "Lior Wolf", "Ivan Titov" ]
https://openreview.net/forum?id=cRE1XrHf1h
cRE1XrHf1h
cRE1XrHf1h
[ "~Ameen_Ali_Ali1", "~Shahar_Katz1", "~Lior_Wolf1", "~Ivan_Titov1" ]
{ "value": "COLM 2025" }
{ "value": "colmweb.org/COLM/2025/Conference" }
{ "value": "/pdf/3e75fad4784de03c4f8695f26d4c0af5020dc3b9.pdf" }
conference
colmweb.org/COLM/2025/Conference
2,025
COLM
[ "LLMs", "spurious correlations", "Integrated Gradients", "generalization", "model adaptation" ]
I acknowledge that I and all co-authors of this work have read and commit to adhering to the COLM Code of Ethics on https://colmweb.org/CoE.html
I certify that this submission complies with the submission instructions as described on https://colmweb.org/AuthorGuide.html
null
null
@inproceedings{ ali2025detecting, title={Detecting and Pruning Prominent but Detrimental Neurons in Large Language Models}, author={Ameen Ali Ali and Shahar Katz and Lior Wolf and Ivan Titov}, booktitle={Second Conference on Language Modeling}, year={2025}, url={https://openreview.net/forum?id=cRE1XrHf1h} }
ali|detecting_and_pruning_prominent_but_detrimental_neurons_in_large_language_models
/attachment/0f250681dff84bc33314f010f39c137641aa6d12.zip
null
null
null
null
Approximating Language Model Training Data from Weights
we recover suitable training data given only model weights
Modern language models often have open weights but closed training data. We formalize the problem of data recovery from model weights and propose several baselines and metrics. We develop a gradient-based approach that selects the highest-matching data from a large public text corpus and show its effectiveness at recov...
[ "John Xavier Morris", "Junjie Oscar Yin", "Woojeong Kim", "Vitaly Shmatikov", "Alexander M Rush" ]
https://openreview.net/forum?id=cQechnXCQt
cQechnXCQt
cQechnXCQt
[ "~John_Xavier_Morris1", "~Junjie_Oscar_Yin1", "~Woojeong_Kim1", "~Vitaly_Shmatikov1", "~Alexander_M_Rush1" ]
{ "value": "COLM 2025" }
{ "value": "colmweb.org/COLM/2025/Conference" }
{ "value": "/pdf/0566a6c6e8416f4fc6d0bedebfc25b7df8462f68.pdf" }
conference
colmweb.org/COLM/2025/Conference
2,025
COLM
[ "inversion", "training data reconstruction" ]
I acknowledge that I and all co-authors of this work have read and commit to adhering to the COLM Code of Ethics on https://colmweb.org/CoE.html
I certify that this submission complies with the submission instructions as described on https://colmweb.org/AuthorGuide.html
null
null
@inproceedings{ morris2025approximating, title={Approximating Language Model Training Data from Weights}, author={John Xavier Morris and Junjie Oscar Yin and Woojeong Kim and Vitaly Shmatikov and Alexander M Rush}, booktitle={Second Conference on Language Modeling}, year={2025}, url={https://openreview.net/forum?id=cQe...
morris|approximating_language_model_training_data_from_weights
null
null
null
null
null
Self-Rewarding PPO: Aligning Large Language Models with Demonstrations Only
We propose Self-Rewarding PPO, a novel fine-tuning method that combines the strengths of SFT and proximal policy optimization (PPO) to achieve more effective alignment from demonstration data.
Supervised fine-tuning (SFT) has emerged as a crucial method for aligning large language models (LLMs) with human-annotated demonstrations. However, SFT, being an off-policy approach similar to behavior cloning, often struggles with overfitting and poor out-of-domain generalization, especially in limited-data scenarios...
[ "Qingru Zhang", "Liang Qiu", "Ilgee Hong", "Zhenghao Xu", "Tianyi Liu", "Shiyang Li", "Rongzhi Zhang", "Zheng Li", "Lihong Li", "Bing Yin", "Chao Zhang", "Jianshu Chen", "Haoming Jiang", "Tuo Zhao" ]
https://openreview.net/forum?id=cOlHP5E3qF
cOlHP5E3qF
cOlHP5E3qF
[ "~Qingru_Zhang2", "~Liang_Qiu2", "~Ilgee_Hong1", "~Zhenghao_Xu1", "~Tianyi_Liu2", "~Shiyang_Li1", "~Rongzhi_Zhang2", "~Zheng_Li9", "~Lihong_Li1", "~Bing_Yin1", "~Chao_Zhang15", "~Jianshu_Chen1", "~Haoming_Jiang1", "~Tuo_Zhao2" ]
{ "value": "COLM 2025" }
{ "value": "colmweb.org/COLM/2025/Conference" }
{ "value": "/pdf/c544f1212296d14078181345cb70fcb3f3316194.pdf" }
conference
colmweb.org/COLM/2025/Conference
2,025
COLM
[ "Alignment with Demonstration", "Self-Rewarding PPO", "Large Language Models" ]
I acknowledge that I and all co-authors of this work have read and commit to adhering to the COLM Code of Ethics on https://colmweb.org/CoE.html
I certify that this submission complies with the submission instructions as described on https://colmweb.org/AuthorGuide.html
null
null
@inproceedings{ zhang2025selfrewarding, title={Self-Rewarding {PPO}: Aligning Large Language Models with Demonstrations Only}, author={Qingru Zhang and Liang Qiu and Ilgee Hong and Zhenghao Xu and Tianyi Liu and Shiyang Li and Rongzhi Zhang and Zheng Li and Lihong Li and Bing Yin and Chao Zhang and Jianshu Chen and Hao...
zhang|selfrewarding_ppo_aligning_large_language_models_with_demonstrations_only
null
null
null
null
null
MS-SSM: A Multi-Scale State Space Model for Efficient Sequence Modeling
We introduce MS-SSM which enhances traditional SSMs by modeling sequence dynamics at multiple resolutions using independent SSMs, scale-dependent initialization, and an input-dependent scale-mixer.
State-space models (SSMs) have recently attracted attention as an efficient alternative to computationally expensive attention-based models for sequence modeling. They rely on linear recurrences to integrate information over time, enabling fast inference, parallelizable training, and control over recurrence stability. However, 
[ "Mahdi Karami", "Ali Behrouz", "Peilin Zhong", "Razvan Pascanu", "Vahab Mirrokni" ]
https://openreview.net/forum?id=cCYWeCzAv0
cCYWeCzAv0
cCYWeCzAv0
[ "~Mahdi_Karami2", "~Ali_Behrouz1", "~Peilin_Zhong1", "~Razvan_Pascanu1", "~Vahab_Mirrokni2" ]
{ "value": "COLM 2025" }
{ "value": "colmweb.org/COLM/2025/Conference" }
{ "value": "/pdf/483868b1b70acb79914fb7e065ea71d3d5a37968.pdf" }
conference
colmweb.org/COLM/2025/Conference
2,025
COLM
[ "MS-SSM", "Sequence Models", "Language models", "State Space Model", "Multi-Scale", "Multi-Resolution" ]
I acknowledge that I and all co-authors of this work have read and commit to adhering to the COLM Code of Ethics on https://colmweb.org/CoE.html
I certify that this submission complies with the submission instructions as described on https://colmweb.org/AuthorGuide.html
null
null
@inproceedings{ karami2025msssm, title={{MS}-{SSM}: A Multi-Scale State Space Model for Efficient Sequence Modeling}, author={Mahdi Karami and Ali Behrouz and Peilin Zhong and Razvan Pascanu and Vahab Mirrokni}, booktitle={Second Conference on Language Modeling}, year={2025}, url={https://openreview.net/forum?id=cCYWeC...
karami|msssm_a_multiscale_state_space_model_for_efficient_sequence_modeling
null
null
null
null
null
DEL: Context-Aware Dynamic Exit Layer for Efficient Self-Speculative Decoding
We propose DEL, a dynamic method that adaptively selects the exit layer and speculation length in self-speculative decoding to accelerate large language model inference.
Speculative Decoding (SD) is a widely used approach to accelerate the inference of large language models (LLMs) without reducing generation quality. It operates by first using a compact model to draft multiple tokens efficiently, followed by parallel verification using the target LLM. This approach leads to faster infe...
[ "Hossein Entezari Zarch", "Lei Gao", "Chaoyi Jiang", "Murali Annavaram" ]
https://openreview.net/forum?id=cAFxSuXQvT
cAFxSuXQvT
cAFxSuXQvT
[ "~Hossein_Entezari_Zarch1", "~Lei_Gao3", "~Chaoyi_Jiang1", "~Murali_Annavaram1" ]
{ "value": "COLM 2025" }
{ "value": "colmweb.org/COLM/2025/Conference" }
{ "value": "/pdf/e1dbcba837d14d0fa9c917359c3c5f0b9d4cb9d5.pdf" }
conference
colmweb.org/COLM/2025/Conference
2,025
COLM
[ "Speculative Decoding", "Efficient Large Language Model", "Inference Acceleration" ]
I acknowledge that I and all co-authors of this work have read and commit to adhering to the COLM Code of Ethics on https://colmweb.org/CoE.html
I certify that this submission complies with the submission instructions as described on https://colmweb.org/AuthorGuide.html
null
null
@inproceedings{ zarch2025del, title={{DEL}: Context-Aware Dynamic Exit Layer for Efficient Self-Speculative Decoding}, author={Hossein Entezari Zarch and Lei Gao and Chaoyi Jiang and Murali Annavaram}, booktitle={Second Conference on Language Modeling}, year={2025}, url={https://openreview.net/forum?id=cAFxSuXQvT} }
zarch|del_contextaware_dynamic_exit_layer_for_efficient_selfspeculative_decoding
null
null
null
null
null
LLMs Are In-Context Bandit Reinforcement Learners
LLMs can learn in-context from online rewards like in reinforcement learning, instead of just supervised examples
Large Language Models (LLMs) excel at in-context learning (ICL), a supervised learning technique that relies on adding annotated examples to the model context. We investigate a contextual bandit version of in-context reinforcement learning (ICRL), where models learn in-context, online, from external reward, instead of ...
[ "Giovanni Monea", "Antoine Bosselut", "Kianté Brantley", "Yoav Artzi" ]
https://openreview.net/forum?id=c0RsezY2D1
c0RsezY2D1
c0RsezY2D1
[ "~Giovanni_Monea1", "~Antoine_Bosselut1", "~Kianté_Brantley2", "~Yoav_Artzi1" ]
{ "value": "COLM 2025" }
{ "value": "colmweb.org/COLM/2025/Conference" }
{ "value": "/pdf/628804cb3e9ad8568125424f3e5621da8ba4504d.pdf" }
conference
colmweb.org/COLM/2025/Conference
2,025
COLM
[ "In-context reinforcement learning", "in-context learning", "contextual bandits", "online learning", "large language models" ]
I acknowledge that I and all co-authors of this work have read and commit to adhering to the COLM Code of Ethics on https://colmweb.org/CoE.html
I certify that this submission complies with the submission instructions as described on https://colmweb.org/AuthorGuide.html
null
null
@inproceedings{ monea2025llms, title={{LLM}s Are In-Context Bandit Reinforcement Learners}, author={Giovanni Monea and Antoine Bosselut and Kiant{\'e} Brantley and Yoav Artzi}, booktitle={Second Conference on Language Modeling}, year={2025}, url={https://openreview.net/forum?id=c0RsezY2D1} }
monea|llms_are_incontext_bandit_reinforcement_learners
/attachment/856e277943bbf727353282951700d1741ae0833b.zip
null
null
null
null
Stop-Think-AutoRegress: Language Modeling with Latent Diffusion Planning
We introduce a unified architecture that pauses autoregressive text generation for latent diffusion planning, enabling higher quality and more controllable text generation with improved language understanding.
The Stop-Think-AutoRegress Language Diffusion Model (STAR-LDM) integrates latent diffusion planning with autoregressive generation. Unlike conventional autoregressive language models limited to token-by-token decisions, STAR-LDM incorporates a ``thinking'' phase that pauses generation to refine a semantic plan through ...
[ "Justin Lovelace", "Christian K Belardi", "Sofian Zalouk", "Adhitya Polavaram", "Srivatsa R Kundurthy", "Kilian Q Weinberger" ]
https://openreview.net/forum?id=c05qIG1Z2B
c05qIG1Z2B
c05qIG1Z2B
[ "~Justin_Lovelace1", "~Christian_K_Belardi1", "~Sofian_Zalouk1", "~Adhitya_Polavaram1", "~Srivatsa_R_Kundurthy1", "~Kilian_Q_Weinberger1" ]
{ "value": "COLM 2025" }
{ "value": "colmweb.org/COLM/2025/Conference" }
{ "value": "/pdf/a567037e14bb0b772ee2dbb773ee14c211a329eb.pdf" }
conference
colmweb.org/COLM/2025/Conference
2,025
COLM
[ "diffusion", "latent diffusion", "language generation" ]
I acknowledge that I and all co-authors of this work have read and commit to adhering to the COLM Code of Ethics on https://colmweb.org/CoE.html
I certify that this submission complies with the submission instructions as described on https://colmweb.org/AuthorGuide.html
null
null
@inproceedings{ lovelace2025stopthinkautoregress, title={Stop-Think-AutoRegress: Language Modeling with Latent Diffusion Planning}, author={Justin Lovelace and Christian K Belardi and Sofian Zalouk and Adhitya Polavaram and Srivatsa R Kundurthy and Kilian Q Weinberger}, booktitle={Second Conference on Language Modeling...
lovelace|stopthinkautoregress_language_modeling_with_latent_diffusion_planning
null
null
null
null
null
Yourbench: Dynamic Evaluation Set Generation with LLMs
a new system to generate diverse question-answer pairs from source documents, ensuring maximum document coverage
Large language models (LLMs) have rapidly outpaced traditional evaluation methodologies, with static benchmarks suffering from saturation, contamination, and domain-specificity limitations while human evaluation remains prohibitively expensive. We present YourBench, an open-source framework that transforms this evaluat...
[ "Sumuk Shashidhar", "Clémentine Fourrier", "Alina Lozovskaya", "Thomas Wolf", "Gokhan Tur", "Dilek Hakkani-Tür" ]
https://openreview.net/forum?id=bkWERVKzuP
bkWERVKzuP
bkWERVKzuP
[ "~Sumuk_Shashidhar1", "~Clémentine_Fourrier1", "~Alina_Lozovskaya1", "~Thomas_Wolf1", "~Gokhan_Tur2", "~Dilek_Hakkani-Tür1" ]
{ "value": "COLM 2025" }
{ "value": "colmweb.org/COLM/2025/Conference" }
{ "value": "/pdf/a24fd7f123f1ad2093ae9bd1a7d79ae8e52f2822.pdf" }
conference
colmweb.org/COLM/2025/Conference
2,025
COLM
[ "benchmarking", "contemporary dataset", "dataset", "reference-free", "automated", "llm" ]
I acknowledge that I and all co-authors of this work have read and commit to adhering to the COLM Code of Ethics on https://colmweb.org/CoE.html
I certify that this submission complies with the submission instructions as described on https://colmweb.org/AuthorGuide.html
null
null
@inproceedings{ shashidhar2025yourbench, title={Yourbench: Dynamic Evaluation Set Generation with {LLM}s}, author={Sumuk Shashidhar and Cl{\'e}mentine Fourrier and Alina Lozovskaya and Thomas Wolf and Gokhan Tur and Dilek Hakkani-T{\"u}r}, booktitle={Second Conference on Language Modeling}, year={2025}, url={https://op...
shashidhar|yourbench_dynamic_evaluation_set_generation_with_llms
/attachment/6f599491b12fd956502386d694a034247fe66463.zip
null
null
null
null
Hawkeye: Model Collaboration for Efficient Reasoning
We provide an efficient inference pipeline that optimizes Chain-of-Thought (CoT) reasoning by instructing a Large Language Model (LLM) to generate concise yet effective CoTs for a Small Language Model (SLM) to decode through reinforcement learning.
Chain-of-Thought (CoT) reasoning has demonstrated remarkable effectiveness in enhancing the reasoning abilities of large language models (LLMs). However, its efficiency remains a challenge due to excessive intermediate reasoning tokens, which introduce both semantic redundancy and unnecessarily detailed reasoning steps...
[ "Jianshu She", "Zhuohao Li", "Zhemin Huang", "Qi Li", "Peiran Xu", "Haonan Li", "Qirong Ho" ]
https://openreview.net/forum?id=bdCWK4NkK7
bdCWK4NkK7
bdCWK4NkK7
[ "~Jianshu_She1", "~Zhuohao_Li3", "~Zhemin_Huang1", "~Qi_Li39", "~Peiran_Xu2", "~Haonan_Li2", "~Qirong_Ho1" ]
{ "value": "COLM 2025" }
{ "value": "colmweb.org/COLM/2025/Conference" }
{ "value": "/pdf/1e689a967422ea14219cc7b7d5e1234d934f7ed1.pdf" }
conference
colmweb.org/COLM/2025/Conference
2,025
COLM
[ "reinforcement learning (with human feedback)", "fine-tuning", "compression", "decoding algorithms", "reasoning algorithms" ]
I acknowledge that I and all co-authors of this work have read and commit to adhering to the COLM Code of Ethics on https://colmweb.org/CoE.html
I certify that this submission complies with the submission instructions as described on https://colmweb.org/AuthorGuide.html
null
null
@inproceedings{ she2025hawkeye, title={Hawkeye: Model Collaboration for Efficient Reasoning}, author={Jianshu She and Zhuohao Li and Zhemin Huang and Qi Li and Peiran Xu and Haonan Li and Qirong Ho}, booktitle={Second Conference on Language Modeling}, year={2025}, url={https://openreview.net/forum?id=bdCWK4NkK7} }
she|hawkeye_model_collaboration_for_efficient_reasoning
null
null
null
null
null
LoRe: Personalizing LLMs via Low-Rank Reward Modeling
We introduce low-rank personalized preference modeling for LLMs, enabling scalable and efficient user-specific reward learning with superior generalization and few-shot adaptation.
Personalizing large language models (LLMs) to accommodate diverse user preferences is essential for enhancing alignment and user satisfaction. Traditional reinforcement learning from human feedback (RLHF) approaches often rely on monolithic value representations, limiting their ability to adapt to individual preference...
[ "Avinandan Bose", "Zhihan Xiong", "Yuejie Chi", "Simon Shaolei Du", "Lin Xiao", "Maryam Fazel" ]
https://openreview.net/forum?id=bYu4DOqRY8
bYu4DOqRY8
bYu4DOqRY8
[ "~Avinandan_Bose1", "~Zhihan_Xiong1", "~Yuejie_Chi1", "~Simon_Shaolei_Du1", "~Lin_Xiao1", "~Maryam_Fazel1" ]
{ "value": "COLM 2025" }
{ "value": "colmweb.org/COLM/2025/Conference" }
{ "value": "/pdf/350a08ec6b099c62f340e25b0560b73653f3a110.pdf" }
conference
colmweb.org/COLM/2025/Conference
2,025
COLM
[ "preference learning", "personalization", "reward modeling", "plurality" ]
I acknowledge that I and all co-authors of this work have read and commit to adhering to the COLM Code of Ethics on https://colmweb.org/CoE.html
I certify that this submission complies with the submission instructions as described on https://colmweb.org/AuthorGuide.html
null
null
@inproceedings{ bose2025lore, title={LoRe: Personalizing {LLM}s via Low-Rank Reward Modeling}, author={Avinandan Bose and Zhihan Xiong and Yuejie Chi and Simon Shaolei Du and Lin Xiao and Maryam Fazel}, booktitle={Second Conference on Language Modeling}, year={2025}, url={https://openreview.net/forum?id=bYu4DOqRY8} }
bose|lore_personalizing_llms_via_lowrank_reward_modeling
/attachment/378b641524b06586290a0a9e85ffb9de08e61924.zip
null
null
null
null
The Dual-Route Model of Induction
We find that LLMs can do in-context copying in two different ways: either by copying individual tokens verbatim, or by copying entire word meanings (which may span multiple tokens).
Prior work on in-context copying has shown the existence of *induction heads*, which attend to and promote individual tokens during copying. In this work we discover a new type of induction head: *concept-level* induction heads, which copy entire lexical units instead of individual tokens. Concept induction heads learn...
[ "Sheridan Feucht", "Eric Todd", "Byron C Wallace", "David Bau" ]
https://openreview.net/forum?id=bNTrKqqnG9
bNTrKqqnG9
bNTrKqqnG9
[ "~Sheridan_Feucht1", "~Eric_Todd1", "~Byron_C_Wallace1", "~David_Bau1" ]
{ "value": "COLM 2025" }
{ "value": "colmweb.org/COLM/2025/Conference" }
{ "value": "/pdf/44805a940b65757f691cc13691a90581dcdb453a.pdf" }
conference
colmweb.org/COLM/2025/Conference
2,025
COLM
[ "interpretability", "induction heads", "in-context learning", "ICL", "detokenization" ]
I acknowledge that I and all co-authors of this work have read and commit to adhering to the COLM Code of Ethics on https://colmweb.org/CoE.html
I certify that this submission complies with the submission instructions as described on https://colmweb.org/AuthorGuide.html
null
null
@inproceedings{ feucht2025the, title={The Dual-Route Model of Induction}, author={Sheridan Feucht and Eric Todd and Byron C Wallace and David Bau}, booktitle={Second Conference on Language Modeling}, year={2025}, url={https://openreview.net/forum?id=bNTrKqqnG9} }
feucht|the_dualroute_model_of_induction
null
null
null
null
null
CrossWordBench: Evaluating the Reasoning Capabilities of LLMs and LVLMs with Controllable Puzzle Generation
We propose CrossWordBench, a benchmark to evaluate the reasoning capabilities of both LLMs and LVLMs through the medium of crossword puzzles.
Existing reasoning evaluation frameworks for Large Language Models (LLMs) and Large Vision-Language Models (LVLMs) predominantly assess either text-based reasoning or vision-language understanding capabilities, with limited dynamic interplay between textual and visual constraints. To address this limitation, we introdu...
[ "Jixuan Leng", "Chengsong Huang", "Langlin Huang", "Bill Yuchen Lin", "William W. Cohen", "Haohan Wang", "Jiaxin Huang" ]
https://openreview.net/forum?id=bJCQMKwPVq
bJCQMKwPVq
bJCQMKwPVq
[ "~Jixuan_Leng1", "~Chengsong_Huang1", "~Langlin_Huang1", "~Bill_Yuchen_Lin1", "~William_W._Cohen2", "~Haohan_Wang1", "~Jiaxin_Huang1" ]
{ "value": "COLM 2025" }
{ "value": "colmweb.org/COLM/2025/Conference" }
{ "value": "/pdf/4387a92ca259f0b87c9d01fd2c80a1dd97ba0084.pdf" }
conference
colmweb.org/COLM/2025/Conference
2,025
COLM
[ "LLMs", "LVLMs", "Evaluation", "Benchmark" ]
I acknowledge that I and all co-authors of this work have read and commit to adhering to the COLM Code of Ethics on https://colmweb.org/CoE.html
I certify that this submission complies with the submission instructions as described on https://colmweb.org/AuthorGuide.html
null
null
@inproceedings{ leng2025crosswordbench, title={CrossWordBench: Evaluating the Reasoning Capabilities of {LLM}s and {LVLM}s with Controllable Puzzle Generation}, author={Jixuan Leng and Chengsong Huang and Langlin Huang and Bill Yuchen Lin and William W. Cohen and Haohan Wang and Jiaxin Huang}, booktitle={Second Confere...
leng|crosswordbench_evaluating_the_reasoning_capabilities_of_llms_and_lvlms_with_controllable_puzzle_generation
null
null
null
null
null
From Next-Token to Mathematics: The Learning Dynamics of Mathematical Reasoning in Language Models
We conduct the first analysis of how math reasoning skills are learned during pre- and post-training using open checkpoint and open weight models.
Large Language Models (LLMs) trained solely on next-token prediction learn to solve a wide range of problems involving mathematical reasoning. How does this ability evolve during training? We present the first analysis of how mathematical reasoning abilities of several open-weight LLMs develop during pre-training and post...
[ "Shubhra Mishra", "Gabriel Poesia", "Noah Goodman" ]
https://openreview.net/forum?id=bJ9aARjtBu
bJ9aARjtBu
bJ9aARjtBu
[ "~Shubhra_Mishra1", "~Gabriel_Poesia1", "~Noah_Goodman1" ]
{ "value": "COLM 2025" }
{ "value": "colmweb.org/COLM/2025/Conference" }
{ "value": "/pdf/af194ebe55dcf8e07919154014cfe0ed016dd5db.pdf" }
conference
colmweb.org/COLM/2025/Conference
2,025
COLM
[ "math reasoning", "training dynamics", "reasoning" ]
I acknowledge that I and all co-authors of this work have read and commit to adhering to the COLM Code of Ethics on https://colmweb.org/CoE.html
I certify that this submission complies with the submission instructions as described on https://colmweb.org/AuthorGuide.html
null
null
@inproceedings{ mishra2025from, title={From Next-Token to Mathematics: The Learning Dynamics of Mathematical Reasoning in Language Models}, author={Shubhra Mishra and Gabriel Poesia and Noah Goodman}, booktitle={Second Conference on Language Modeling}, year={2025}, url={https://openreview.net/forum?id=bJ9aARjtBu} }
mishra|from_nexttoken_to_mathematics_the_learning_dynamics_of_mathematical_reasoning_in_language_models
null
null
null
null
null
LoRI: Reducing Cross-Task Interference in Multi-Task Low-Rank Adaptation
LoRI is a simple yet effective method for parameter-efficient fine-tuning that reduces cross-task interference by freezing projection matrices $A$ and sparsifying $B$.
Low-Rank Adaptation (LoRA) has emerged as a popular parameter-efficient fine-tuning (PEFT) method for Large Language Models (LLMs), yet it still incurs notable overhead and suffers from parameter interference in multi-task scenarios. We propose LoRA with Reduced Interference (LoRI), a simple yet effective approach that...
[ "Juzheng Zhang", "Jiacheng You", "Ashwinee Panda", "Tom Goldstein" ]
https://openreview.net/forum?id=b8cW86QcOD
b8cW86QcOD
b8cW86QcOD
[ "~Juzheng_Zhang2", "~Jiacheng_You1", "~Ashwinee_Panda1", "~Tom_Goldstein1" ]
{ "value": "COLM 2025" }
{ "value": "colmweb.org/COLM/2025/Conference" }
{ "value": "/pdf/31e650897c20dd83f40dca4cf8678bd5c44bdfac.pdf" }
conference
colmweb.org/COLM/2025/Conference
2,025
COLM
[ "Large Language Models", "Parameter-Efficient Fine-Tuning", "Model Merging", "Sparsity" ]
I acknowledge that I and all co-authors of this work have read and commit to adhering to the COLM Code of Ethics on https://colmweb.org/CoE.html
I certify that this submission complies with the submission instructions as described on https://colmweb.org/AuthorGuide.html
null
null
@inproceedings{ zhang2025lori, title={Lo{RI}: Reducing Cross-Task Interference in Multi-Task Low-Rank Adaptation}, author={Juzheng Zhang and Jiacheng You and Ashwinee Panda and Tom Goldstein}, booktitle={Second Conference on Language Modeling}, year={2025}, url={https://openreview.net/forum?id=b8cW86QcOD} }
zhang|lori_reducing_crosstask_interference_in_multitask_lowrank_adaptation
null
null
null
null
null
OpenCodeReasoning: Advancing Data Distillation for Competitive Coding
We construct, analyze and release a curated code reasoning dataset of 735K samples that instills SOTA reasoning capabilities, evaluated on LiveCodeBench.
Since the advent of reasoning-based large language models, many have found great success from distilling reasoning capabilities into student models. Such techniques have significantly bridged the gap between reasoning and standard LLMs on coding tasks. Despite this, much of the progress on distilling reasoning models r...
[ "Wasi Uddin Ahmad", "Sean Narenthiran", "Somshubra Majumdar", "Aleksander Ficek", "Siddhartha Jain", "Jocelyn Huang", "Vahid Noroozi", "Boris Ginsburg" ]
https://openreview.net/forum?id=aykM7KUVJZ
aykM7KUVJZ
aykM7KUVJZ
[ "~Wasi_Uddin_Ahmad1", "~Sean_Narenthiran1", "~Somshubra_Majumdar1", "~Aleksander_Ficek1", "~Siddhartha_Jain1", "~Jocelyn_Huang1", "~Vahid_Noroozi2", "~Boris_Ginsburg1" ]
{ "value": "COLM 2025" }
{ "value": "colmweb.org/COLM/2025/Conference" }
{ "value": "/pdf/544a715a36130a61830a4ddadb3420047cc5cf02.pdf" }
conference
colmweb.org/COLM/2025/Conference
2,025
COLM
[ "Code Generation", "Code Reasoning", "Large Language Model", "LiveCodeBench" ]
I acknowledge that I and all co-authors of this work have read and commit to adhering to the COLM Code of Ethics on https://colmweb.org/CoE.html
I certify that this submission complies with the submission instructions as described on https://colmweb.org/AuthorGuide.html
null
null
@inproceedings{ ahmad2025opencodereasoning, title={OpenCodeReasoning: Advancing Data Distillation for Competitive Coding}, author={Wasi Uddin Ahmad and Sean Narenthiran and Somshubra Majumdar and Aleksander Ficek and Siddhartha Jain and Jocelyn Huang and Vahid Noroozi and Boris Ginsburg}, booktitle={Second Conference o...
ahmad|opencodereasoning_advancing_data_distillation_for_competitive_coding
null
null
null
null
null
PyramidKV: Dynamic KV Cache Compression based on Pyramidal Information Funneling
We developed PyramidKV, a novel and effective KV cache compression method.
In this study, we investigate whether attention-based information flow inside large language models (LLMs) is aggregated through noticeable patterns for long context processing. Our observations reveal that LLMs aggregate information through Pyramidal Information Funneling where attention is scattering widely in lower ...
[ "Zefan Cai", "Yichi Zhang", "Bofei Gao", "Yuliang Liu", "Yucheng Li", "Tianyu Liu", "Keming Lu", "Wayne Xiong", "Yue Dong", "Junjie Hu", "Wen Xiao" ]
https://openreview.net/forum?id=ayi7qezU87
ayi7qezU87
ayi7qezU87
[ "~Zefan_Cai1", "~Yichi_Zhang16", "~Bofei_Gao1", "~Yuliang_Liu5", "~Yucheng_Li5", "~Tianyu_Liu3", "~Keming_Lu1", "~Wayne_Xiong1", "~Yue_Dong2", "~Junjie_Hu2", "~Wen_Xiao2" ]
{ "value": "COLM 2025" }
{ "value": "colmweb.org/COLM/2025/Conference" }
{ "value": "/pdf/40900f3b1b3442ab76c9934ef2acc1c2a3ebb3e5.pdf" }
conference
colmweb.org/COLM/2025/Conference
2,025
COLM
[ "KV Cache Compression", "Inference Acceleration" ]
I acknowledge that I and all co-authors of this work have read and commit to adhering to the COLM Code of Ethics on https://colmweb.org/CoE.html
I certify that this submission complies with the submission instructions as described on https://colmweb.org/AuthorGuide.html
null
null
@inproceedings{ cai2025pyramidkv, title={Pyramid{KV}: Dynamic {KV} Cache Compression based on Pyramidal Information Funneling}, author={Zefan Cai and Yichi Zhang and Bofei Gao and Yuliang Liu and Yucheng Li and Tianyu Liu and Keming Lu and Wayne Xiong and Yue Dong and Junjie Hu and Wen Xiao}, booktitle={Second Conferen...
cai|pyramidkv_dynamic_kv_cache_compression_based_on_pyramidal_information_funneling
null
true
null
null
null
RWKV-7 "Goose" with Expressive Dynamic State Evolution
RWKV-7 is a new sequence modeling architecture with constant memory usage and inference time per token, SoTA performance on multilingual tasks, and near SoTA English LLM performance at 3B scale, with dramatically less training than top 3B models.
We present RWKV-7 "Goose", a new sequence modeling architecture with constant memory usage and constant inference time per token. Despite being trained on dramatically fewer tokens than other top models, our 2.9 billion parameter language model achieves a new 3B SoTA on multilingual tasks and matches the current 3B SoT...
[ "Bo Peng", "Ruichong Zhang", "Daniel Goldstein", "Eric Alcaide", "Xingjian Du", "Haowen Hou", "Jiaju Lin", "Jiaxing Liu", "Janna Lu", "William Merrill", "Guangyu Song", "Kaifeng Tan", "Saiteja Utpala", "Nathan Wilce", "Johan S. Wind", "Tianyi Wu", "Daniel Wuttke", "Christian Zhou-Z...
https://openreview.net/forum?id=ayB1PACN5j
ayB1PACN5j
ayB1PACN5j
[ "~Bo_Peng21", "~Ruichong_Zhang1", "~Daniel_Goldstein2", "~Eric_Alcaide2", "~Xingjian_Du1", "~Haowen_Hou2", "~Jiaju_Lin1", "~Jiaxing_Liu2", "~Janna_Lu1", "~William_Merrill1", "~Guangyu_Song1", "~Kaifeng_Tan1", "~Saiteja_Utpala1", "~Nathan_Wilce1", "~Johan_S._Wind2", "~Tianyi_Wu11", "~...
{ "value": "COLM 2025" }
{ "value": "colmweb.org/COLM/2025/Conference" }
{ "value": "/pdf/74011cffc1082ac9394afde92802c810c51baabf.pdf" }
conference
colmweb.org/COLM/2025/Conference
2,025
COLM
[ "Goose", "LLM", "RWKV", "RWKV-7", "RWKV7", "Linear", "Linear Attention", "SSM", "subquadratic" ]
I acknowledge that I and all co-authors of this work have read and commit to adhering to the COLM Code of Ethics on https://colmweb.org/CoE.html
I certify that this submission complies with the submission instructions as described on https://colmweb.org/AuthorGuide.html
null
null
@inproceedings{ peng2025rwkv, title={{RWKV}-7 ''Goose'' with Expressive Dynamic State Evolution}, author={Bo Peng and Ruichong Zhang and Daniel Goldstein and Eric Alcaide and Xingjian Du and Haowen Hou and Jiaju Lin and Jiaxing Liu and Janna Lu and William Merrill and Guangyu Song and Kaifeng Tan and Saiteja Utpala and...
peng|rwkv7_goose_with_expressive_dynamic_state_evolution
null
null
null
null
null
Navigating the Rabbit Hole: Emergent Biases in LLM-Generated Attack Narratives Targeting Mental Health Groups
We propose a framework to analyze the disproportionate targeting of mental health groups in LLM-generated attack chains.
Large Language Models (LLMs) have been shown to demonstrate imbalanced biases against certain groups. However, the study of unprovoked targeted attacks by LLMs towards at-risk populations remains underexplored. Our paper presents three novel contributions: (1) the explicit evaluation of LLM-generated attacks on highly ...
[ "Rijul Magu", "Arka Dutta", "Sean Kim", "Ashiqur R. KhudaBukhsh", "Munmun De Choudhury" ]
https://openreview.net/forum?id=am6p8VFm9l
am6p8VFm9l
am6p8VFm9l
[ "~Rijul_Magu1", "~Arka_Dutta2", "~Sean_Kim2", "~Ashiqur_R._KhudaBukhsh1", "~Munmun_De_Choudhury1" ]
{ "value": "COLM 2025" }
{ "value": "colmweb.org/COLM/2025/Conference" }
{ "value": "/pdf/1b19b81748a05fae5824c5d1f9c1564132c741c6.pdf" }
conference
colmweb.org/COLM/2025/Conference
2,025
COLM
[ "Mental Health", "Network Analysis", "Stigmatization", "Emergent Bias", "Toxicity" ]
I acknowledge that I and all co-authors of this work have read and commit to adhering to the COLM Code of Ethics on https://colmweb.org/CoE.html
I certify that this submission complies with the submission instructions as described on https://colmweb.org/AuthorGuide.html
null
null
@inproceedings{ magu2025navigating, title={Navigating the Rabbit Hole: Emergent Biases in {LLM}-Generated Attack Narratives Targeting Mental Health Groups}, author={Rijul Magu and Arka Dutta and Sean Kim and Ashiqur R. KhudaBukhsh and Munmun De Choudhury}, booktitle={Second Conference on Language Modeling}, year={2025}...
magu|navigating_the_rabbit_hole_emergent_biases_in_llmgenerated_attack_narratives_targeting_mental_health_groups
null
null
null
null
null
CLIPPER: Compression enables long-context synthetic data generation
We introduce CLIPPER, a compression-based approach for generating synthetic data tailored to narrative claim verification—a task that requires reasoning over a book to verify a given claim.
LLM developers are increasingly reliant on synthetic data, but generating high-quality data for complex long-context reasoning tasks remains challenging. We introduce CLIPPER, a compression-based approach for generating synthetic data tailored to narrative claim verification—a task that requires reasoning over a book t...
[ "Chau Minh Pham", "Yapei Chang", "Mohit Iyyer" ]
https://openreview.net/forum?id=akHq1QcqeZ
akHq1QcqeZ
akHq1QcqeZ
[ "~Chau_Minh_Pham1", "~Yapei_Chang1", "~Mohit_Iyyer1" ]
{ "value": "COLM 2025" }
{ "value": "colmweb.org/COLM/2025/Conference" }
{ "value": "/pdf/f2024d79618cdeeb40b50c0b1836ce0cee87c511.pdf" }
conference
colmweb.org/COLM/2025/Conference
2,025
COLM
[ "synthetic data", "fine-tuning", "instruction-tuning" ]
I acknowledge that I and all co-authors of this work have read and commit to adhering to the COLM Code of Ethics on https://colmweb.org/CoE.html
I certify that this submission complies with the submission instructions as described on https://colmweb.org/AuthorGuide.html
null
null
@inproceedings{ pham2025clipper, title={{CLIPPER}: Compression enables long-context synthetic data generation}, author={Chau Minh Pham and Yapei Chang and Mohit Iyyer}, booktitle={Second Conference on Language Modeling}, year={2025}, url={https://openreview.net/forum?id=akHq1QcqeZ} }
pham|clipper_compression_enables_longcontext_synthetic_data_generation
null
null
null
null
null
EvalTree: Profiling Language Model Weaknesses via Hierarchical Capability Trees
Formulating the problem of LM weakness profiling and proposing a method EvalTree
An ideal model evaluation should achieve two goals: identifying where the model fails and providing actionable improvement guidance. Toward these goals for language model (LM) evaluations, we formulate the problem of generating a weakness profile, a set of weaknesses expressed in natural language, given an LM's perform...
[ "Zhiyuan Zeng", "Yizhong Wang", "Hannaneh Hajishirzi", "Pang Wei Koh" ]
https://openreview.net/forum?id=aV2hQN9vkp
aV2hQN9vkp
aV2hQN9vkp
[ "~Zhiyuan_Zeng3", "~Yizhong_Wang2", "~Hannaneh_Hajishirzi1", "~Pang_Wei_Koh1" ]
{ "value": "COLM 2025" }
{ "value": "colmweb.org/COLM/2025/Conference" }
{ "value": "/pdf/64efd69a5024b2582a5aca08978a2cae631b8b00.pdf" }
conference
colmweb.org/COLM/2025/Conference
2,025
COLM
[ "Evaluation", "Interpretability" ]
I acknowledge that I and all co-authors of this work have read and commit to adhering to the COLM Code of Ethics on https://colmweb.org/CoE.html
I certify that this submission complies with the submission instructions as described on https://colmweb.org/AuthorGuide.html
null
null
@inproceedings{ zeng2025evaltree, title={EvalTree: Profiling Language Model Weaknesses via Hierarchical Capability Trees}, author={Zhiyuan Zeng and Yizhong Wang and Hannaneh Hajishirzi and Pang Wei Koh}, booktitle={Second Conference on Language Modeling}, year={2025}, url={https://openreview.net/forum?id=aV2hQN9vkp} }
zeng|evaltree_profiling_language_model_weaknesses_via_hierarchical_capability_trees
null
null
null
null
null
Shared Global and Local Geometry of Language Model Embeddings
We characterize the global and local geometry of language model token embeddings and find similarities across language models.
Researchers have recently suggested that models share common representations. In our work, we find numerous geometric similarities across the token embeddings of large language models. First, we find “global” similarities: token embeddings often share similar relative orientations. Next, we characterize local geometry ...
[ "Andrew Lee", "Melanie Weber", "Fernanda Viégas", "Martin Wattenberg" ]
https://openreview.net/forum?id=aJDykpJAYF
aJDykpJAYF
aJDykpJAYF
[ "~Andrew_Lee2", "~Melanie_Weber1", "~Fernanda_Viégas1", "~Martin_Wattenberg1" ]
{ "value": "COLM 2025" }
{ "value": "colmweb.org/COLM/2025/Conference" }
{ "value": "/pdf/3cb8501d114bc2f5dbfc0cb0be5bdffcc5d4d3e1.pdf" }
conference
colmweb.org/COLM/2025/Conference
2,025
COLM
[ "Embeddings", "Alignment", "Interpretability" ]
I acknowledge that I and all co-authors of this work have read and commit to adhering to the COLM Code of Ethics on https://colmweb.org/CoE.html
I certify that this submission complies with the submission instructions as described on https://colmweb.org/AuthorGuide.html
null
null
@inproceedings{ lee2025shared, title={Shared Global and Local Geometry of Language Model Embeddings}, author={Andrew Lee and Melanie Weber and Fernanda Vi{\'e}gas and Martin Wattenberg}, booktitle={Second Conference on Language Modeling}, year={2025}, url={https://openreview.net/forum?id=aJDykpJAYF} }
lee|shared_global_and_local_geometry_of_language_model_embeddings
null
true
null
null
null
Sharpe Ratio-Guided Active Learning for Preference Optimization in RLHF
We provide a novel active learning method for RLHF based on the Sharpe Ratio.
Reinforcement learning from human feedback (RLHF) has become a cornerstone of the training and alignment pipeline for large language models (LLMs). Recent advances, such as direct preference optimization (DPO), have simplified the preference learning step. However, collecting preference data remains a challenging and c...
[ "Syrine Belakaria", "Joshua Kazdan", "Charles Marx", "Chris Cundy", "Willie Neiswanger", "Sanmi Koyejo", "Barbara E Engelhardt", "Stefano Ermon" ]
https://openreview.net/forum?id=a6xzTqMUFQ
a6xzTqMUFQ
a6xzTqMUFQ
[ "~Syrine_Belakaria1", "~Joshua_Kazdan1", "~Charles_Marx1", "~Chris_Cundy1", "~Willie_Neiswanger2", "~Sanmi_Koyejo1", "~Barbara_Engelhardt1", "~Stefano_Ermon1" ]
{ "value": "COLM 2025" }
{ "value": "colmweb.org/COLM/2025/Conference" }
{ "value": "/pdf/25abf5aa983f29b83734f87acba47b58d8e8c5f6.pdf" }
conference
colmweb.org/COLM/2025/Conference
2,025
COLM
[ "active learning", "RLHF", "llm" ]
I acknowledge that I and all co-authors of this work have read and commit to adhering to the COLM Code of Ethics on https://colmweb.org/CoE.html
I certify that this submission complies with the submission instructions as described on https://colmweb.org/AuthorGuide.html
null
null
@inproceedings{ belakaria2025sharpe, title={Sharpe Ratio-Guided Active Learning for Preference Optimization in {RLHF}}, author={Syrine Belakaria and Joshua Kazdan and Charles Marx and Chris Cundy and Willie Neiswanger and Sanmi Koyejo and Barbara E Engelhardt and Stefano Ermon}, booktitle={Second Conference on Language...
belakaria|sharpe_ratioguided_active_learning_for_preference_optimization_in_rlhf
null
null
null
null
null
Can Performant LLMs Be Ethical? Quantifying the Impact of Web Crawling Opt-Outs
We study how respecting web crawling opt-outs (robots.txt) affects LLM performance by introducing the concept of Data Compliance Gap (DCG).
The increasing adoption of web crawling opt-outs by copyright holders of online content raises critical questions about the impact of data compliance on large language model (LLM) performance. However, little is known about how these restrictions (and the resultant filtering of pretraining datasets) affect the capabili...
[ "Dongyang Fan", "Vinko Sabolčec", "Matin Ansaripour", "Ayush Kumar Tarun", "Martin Jaggi", "Antoine Bosselut", "Imanol Schlag" ]
https://openreview.net/forum?id=a6QsOjr3wo
a6QsOjr3wo
a6QsOjr3wo
[ "~Dongyang_Fan2", "~Vinko_Sabolčec1", "~Matin_Ansaripour1", "~Ayush_Kumar_Tarun1", "~Martin_Jaggi1", "~Antoine_Bosselut1", "~Imanol_Schlag3" ]
{ "value": "COLM 2025" }
{ "value": "colmweb.org/COLM/2025/Conference" }
{ "value": "/pdf/c04ef11771a5e838ccf85b71b78636ed913f5ae0.pdf" }
conference
colmweb.org/COLM/2025/Conference
2,025
COLM
[ "Responsible AI", "AI and Fair Use", "Robots.txt Opt-out", "LLMs" ]
I acknowledge that I and all co-authors of this work have read and commit to adhering to the COLM Code of Ethics on https://colmweb.org/CoE.html
I certify that this submission complies with the submission instructions as described on https://colmweb.org/AuthorGuide.html
null
null
@inproceedings{ fan2025can, title={Can Performant {LLM}s Be Ethical? Quantifying the Impact of Web Crawling Opt-Outs}, author={Dongyang Fan and Vinko Sabol{\v{c}}ec and Matin Ansaripour and Ayush Kumar Tarun and Martin Jaggi and Antoine Bosselut and Imanol Schlag}, booktitle={Second Conference on Language Modeling}, ye...
fan|can_performant_llms_be_ethical_quantifying_the_impact_of_web_crawling_optouts
null
null
null
null
null
Task-Circuit Quantization: Leveraging Knowledge Localization and Interpretability for Compression
Using interpretability informed saliency scores based on task-specific information to localize important weights to preserve during model compression, yielding SOTA method for both general and task specific quantization
Post-training quantization reduces a model's memory footprint by mapping full precision weights into low bit weights without costly retraining, but can degrade its downstream performance especially in low 2- to 3-bit settings. Existing methods mitigate these drops by keeping some important weights in higher precision...
[ "Hanqi Xiao", "Yi-Lin Sung", "Elias Stengel-Eskin", "Mohit Bansal" ]
https://openreview.net/forum?id=a201nfn3xX
a201nfn3xX
a201nfn3xX
[ "~Hanqi_Xiao1", "~Yi-Lin_Sung1", "~Elias_Stengel-Eskin1", "~Mohit_Bansal2" ]
{ "value": "COLM 2025" }
{ "value": "colmweb.org/COLM/2025/Conference" }
{ "value": "/pdf/e79023ddb144530c26e3752115728796c7dc13cb.pdf" }
conference
colmweb.org/COLM/2025/Conference
2,025
COLM
[ "Quantization", "Mixed Precision", "Interpretability" ]
I acknowledge that I and all co-authors of this work have read and commit to adhering to the COLM Code of Ethics on https://colmweb.org/CoE.html
I certify that this submission complies with the submission instructions as described on https://colmweb.org/AuthorGuide.html
null
null
@inproceedings{ xiao2025taskcircuit, title={Task-Circuit Quantization: Leveraging Knowledge Localization and Interpretability for Compression}, author={Hanqi Xiao and Yi-Lin Sung and Elias Stengel-Eskin and Mohit Bansal}, booktitle={Second Conference on Language Modeling}, year={2025}, url={https://openreview.net/forum...
xiao|taskcircuit_quantization_leveraging_knowledge_localization_and_interpretability_for_compression
null
null
null
null
null
Rethinking Safety in LLM Fine-tuning: An Optimization Perspective
Fine-tuning can preserve safety without extra interventions by optimizing hyperparameters and using EMA momentum to stabilize training.
Fine-tuning language models is commonly believed to inevitably harm their safety, i.e., refusing to respond to harmful user requests, even when using harmless datasets, thus requiring additional safety measures. We challenge this belief through systematic testing, showing that poor optimization choices, rather than inh...
[ "Minseon Kim", "Jin Myung Kwak", "Lama Alssum", "Bernard Ghanem", "Philip Torr", "David Krueger", "Fazl Barez", "Adel Bibi" ]
https://openreview.net/forum?id=ZnOoEA2nDn
ZnOoEA2nDn
ZnOoEA2nDn
[ "~Minseon_Kim1", "~Jin_Myung_Kwak1", "~Lama_Alssum1", "~Bernard_Ghanem1", "~Philip_Torr1", "~David_Krueger1", "~Fazl_Barez1", "~Adel_Bibi1" ]
{ "value": "COLM 2025" }
{ "value": "colmweb.org/COLM/2025/Conference" }
{ "value": "/pdf/69c14ab3b7570c1351454fdff6e3989c37b2932d.pdf" }
conference
colmweb.org/COLM/2025/Conference
2,025
COLM
[ "Finetuning LLM", "Safety alignment" ]
I acknowledge that I and all co-authors of this work have read and commit to adhering to the COLM Code of Ethics on https://colmweb.org/CoE.html
I certify that this submission complies with the submission instructions as described on https://colmweb.org/AuthorGuide.html
null
null
@inproceedings{ kim2025rethinking, title={Rethinking Safety in {LLM} Fine-tuning: An Optimization Perspective}, author={Minseon Kim and Jin Myung Kwak and Lama Alssum and Bernard Ghanem and Philip Torr and David Krueger and Fazl Barez and Adel Bibi}, booktitle={Second Conference on Language Modeling}, year={2025}, url=...
kim|rethinking_safety_in_llm_finetuning_an_optimization_perspective
null
null
null
null
null
Hell or High Water: Evaluating Agentic Recovery from External Failures
Evaluating how well LLMs can find backup plans
As language model agents are applied to real world problems of increasing complexity, they will be expected to formulate plans across large search spaces. If those plans fail for reasons beyond their control, how well do language agents search for alternative ways to achieve their goals? We devise a specialized agentic...
[ "Andrew Wang", "Sophia Hager", "Adi Asija", "Daniel Khashabi", "Nicholas Andrews" ]
https://openreview.net/forum?id=Zk224WPT42
Zk224WPT42
Zk224WPT42
[ "~Andrew_Wang3", "~Sophia_Hager1", "~Adi_Asija1", "~Daniel_Khashabi2", "~Nicholas_Andrews2" ]
{ "value": "COLM 2025" }
{ "value": "colmweb.org/COLM/2025/Conference" }
{ "value": "/pdf/30827c25a31a17f74cbb9a570f0d6f26a95ad034.pdf" }
conference
colmweb.org/COLM/2025/Conference
2,025
COLM
[ "planning", "tool-use" ]
I acknowledge that I and all co-authors of this work have read and commit to adhering to the COLM Code of Ethics on https://colmweb.org/CoE.html
I certify that this submission complies with the submission instructions as described on https://colmweb.org/AuthorGuide.html
null
null
@inproceedings{ wang2025hell, title={Hell or High Water: Evaluating Agentic Recovery from External Failures}, author={Andrew Wang and Sophia Hager and Adi Asija and Daniel Khashabi and Nicholas Andrews}, booktitle={Second Conference on Language Modeling}, year={2025}, url={https://openreview.net/forum?id=Zk224WPT42} }
wang|hell_or_high_water_evaluating_agentic_recovery_from_external_failures
null
null
null
null
null
Do LLMs Understand Your Translations? Evaluating Paragraph-level MT with Question Answering
An extrinsic approach to evaluate long translations, using LLMs to generate and answer reading comprehension questions.
Despite the steady progress in machine translation evaluation, existing automatic metrics struggle to capture how well meaning is preserved beyond sentence boundaries. We posit that reliance on a single intrinsic quality score, trained to mimic human judgments, might be insufficient for evaluating translations of long,...
[ "Patrick Fernandes", "Sweta Agrawal", "Emmanouil Zaranis", "Andre Martins", "Graham Neubig" ]
https://openreview.net/forum?id=Zfa9jCYGCz
Zfa9jCYGCz
Zfa9jCYGCz
[ "~Patrick_Fernandes1", "~Sweta_Agrawal1", "~Emmanouil_Zaranis1", "~Andre_Martins1", "~Graham_Neubig1" ]
{ "value": "COLM 2025" }
{ "value": "colmweb.org/COLM/2025/Conference" }
{ "value": "/pdf/e337466478c0587324eac3059bdd874f58d81acd.pdf" }
conference
colmweb.org/COLM/2025/Conference
2,025
COLM
[ "llm-based metric", "machine translation", "evaluation", "question generation", "question answering" ]
I acknowledge that I and all co-authors of this work have read and commit to adhering to the COLM Code of Ethics on https://colmweb.org/CoE.html
I certify that this submission complies with the submission instructions as described on https://colmweb.org/AuthorGuide.html
null
null
@inproceedings{ fernandes2025do, title={Do {LLM}s Understand Your Translations? Evaluating Paragraph-level {MT} with Question Answering}, author={Patrick Fernandes and Sweta Agrawal and Emmanouil Zaranis and Andre Martins and Graham Neubig}, booktitle={Second Conference on Language Modeling}, year={2025}, url={https://...
fernandes|do_llms_understand_your_translations_evaluating_paragraphlevel_mt_with_question_answering
null
null
null
null
null