Schema (field: type, observed size range):
title: string (length 21–128)
content_TLDR: string (length 40–250)
abstract: string (length 613–2.09k)
authors: list (length 1–42)
openreview_url: string (length 42)
id: string (length 10)
forum: string (length 10)
authorids: list (length 1–42)
venue: dict
venueid: dict
pdf_url: dict
invitation: string (1 class)
group: string (1 class)
venue_name: string (1 class)
year: int64 (constant, 2025)
conference: string (1 class)
content_keywords: list (length 1–16)
content_code_of_ethics: string (1 class)
content_author_guide: string (1 class)
content_flagged_for_ethics_review: bool (1 class)
content_ethics_comments: string (11 classes)
content__bibtex: string (length 246–1.01k)
content_paperhash: string (length 29–134)
content_supplementary_material: string (73 classes)
content_award_nomination: bool (1 class)
content_reciprocal_reviewing_status: string (1 class)
content_reciprocal_reviewing_author: string (4 classes)
content_reciprocal_reviewing_exemption_reason: dict
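The schema above can be illustrated with a minimal Python sketch that assembles one record from the first row below. This is illustrative only, not a loader: the field names come from the schema, the values are copied verbatim from the first record, and the invariants are ones that hold for every row shown in this dump.

```python
# Minimal sketch: one record from this dump, keyed by schema fields.
# Values copied from the first row (the collaborative self-play paper).
record = {
    "title": "Don\u2019t lie to your friends: Learning what you know from collaborative self-play",
    "openreview_url": "https://openreview.net/forum?id=2vDJiGUfhV",
    "id": "2vDJiGUfhV",
    "forum": "2vDJiGUfhV",
    "venue": {"value": "COLM 2025"},
    "year": 2025,
    "conference": "COLM",
    "content_keywords": ["self-play", "calibration", "tool use", "multiagent"],
}

# Invariants observable in every row of this dump:
assert record["id"] == record["forum"]                       # id and forum coincide
assert record["openreview_url"].endswith(record["id"])       # URL embeds the forum id
assert len(record["id"]) == 10                               # id: stringlengths 10-10
assert record["venue"]["value"] == f"COLM {record['year']}"  # venue string matches year
```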
Don’t lie to your friends: Learning what you know from collaborative self-play
Task-level reward in a multi-agent collaborative game can teach the agents about calibration and selective tool use
To be helpful assistants, AI agents must be aware of their own capabilities and limitations. This includes knowing when to answer from parametric knowledge versus using tools, when to trust tool outputs, and when to abstain or hedge. Such capabilities are hard to teach through supervised fine-tuning because they requir...
[ "Jacob Eisenstein", "Reza Aghajani", "Adam Fisch", "Dheeru Dua", "Fantine Huot", "Mirella Lapata", "Vicky Zayats", "Jonathan Berant" ]
https://openreview.net/forum?id=2vDJiGUfhV
2vDJiGUfhV
2vDJiGUfhV
[ "~Jacob_Eisenstein1", "~Reza_Aghajani2", "~Adam_Fisch2", "~Dheeru_Dua2", "~Fantine_Huot1", "~Mirella_Lapata1", "~Vicky_Zayats1", "~Jonathan_Berant1" ]
{ "value": "COLM 2025" }
{ "value": "colmweb.org/COLM/2025/Conference" }
{ "value": "/pdf/33b60b49c794c267262ad17bde96770ae321292d.pdf" }
conference
colmweb.org/COLM/2025/Conference
2025
COLM
[ "self-play", "calibration", "tool use", "multiagent" ]
I acknowledge that I and all co-authors of this work have read and commit to adhering to the COLM Code of Ethics on https://colmweb.org/CoE.html
I certify that this submission complies with the submission instructions as described on https://colmweb.org/AuthorGuide.html
null
null
@inproceedings{ eisenstein2025dont, title={Don{\textquoteright}t lie to your friends: Learning what you know from collaborative self-play}, author={Jacob Eisenstein and Reza Aghajani and Adam Fisch and Dheeru Dua and Fantine Huot and Mirella Lapata and Vicky Zayats and Jonathan Berant}, booktitle={Second Conference on Language Modeling}, year={2025}, url={https://openreview.net/forum?id=2vDJiGUfhV} }
eisenstein|dont_lie_to_your_friends_learning_what_you_know_from_collaborative_selfplay
null
true
null
null
null
RepoST: Scalable Repository-Level Coding Environment Construction with Sandbox Testing
Automatically constructing repo-level execution-based environments for training and evaluation.
We present RepoST, a scalable method to construct environments that provide execution feedback for repository-level code generation for both training and evaluation. Unlike existing works that aim to build entire repositories for execution, which is challenging for both human and LLMs, we provide execution feedback wit...
[ "Yiqing Xie", "Alex Xie", "Divyanshu Sheth", "Pengfei Liu", "Daniel Fried", "Carolyn Rose" ]
https://openreview.net/forum?id=2txrMBpw3q
2txrMBpw3q
2txrMBpw3q
[ "~Yiqing_Xie1", "~Alex_Xie1", "~Divyanshu_Sheth1", "~Pengfei_Liu1", "~Daniel_Fried1", "~Carolyn_Rose1" ]
{ "value": "COLM 2025" }
{ "value": "colmweb.org/COLM/2025/Conference" }
{ "value": "/pdf/0f0642854eb629ef6d848738749e7fc2ffc1b800.pdf" }
conference
colmweb.org/COLM/2025/Conference
2025
COLM
[ "code generation training", "repo-level code generation" ]
I acknowledge that I and all co-authors of this work have read and commit to adhering to the COLM Code of Ethics on https://colmweb.org/CoE.html
I certify that this submission complies with the submission instructions as described on https://colmweb.org/AuthorGuide.html
null
null
@inproceedings{ xie2025repost, title={Repo{ST}: Scalable Repository-Level Coding Environment Construction with Sandbox Testing}, author={Yiqing Xie and Alex Xie and Divyanshu Sheth and Pengfei Liu and Daniel Fried and Carolyn Rose}, booktitle={Second Conference on Language Modeling}, year={2025}, url={https://openreview.net/forum?id=2txrMBpw3q} }
xie|repost_scalable_repositorylevel_coding_environment_construction_with_sandbox_testing
/attachment/bc410e83bf051f2b0b987d2ac6fc926e03c3548d.zip
null
null
null
null
2 OLMo 2 Furious (COLM’s Version)
We release OLMo 2, a family of fully open 7B, 13B, and 32B models achieving competitive performance at lower computational cost while providing transparency through released training data, code, checkpoints, and more.
We present OLMo 2, the next generation of our fully open language models. OLMo 2 includes a family of dense autoregressive language models at 7B, 13B, and 32B scales with fully released artifacts—model weights, full training data, training code and recipes, training logs, and thousands of intermediate checkpoints. In t...
[ "Evan Pete Walsh", "Luca Soldaini", "Dirk Groeneveld", "Kyle Lo", "Shane Arora", "Akshita Bhagia", "Yuling Gu", "Shengyi Huang", "Matt Jordan", "Nathan Lambert", "Dustin Schwenk", "Oyvind Tafjord", "Taira Anderson", "David Atkinson", "Faeze Brahman", "Christopher Clark", "Pradeep Das...
https://openreview.net/forum?id=2ezugTT9kU
2ezugTT9kU
2ezugTT9kU
[ "~Evan_Pete_Walsh1", "~Luca_Soldaini1", "~Dirk_Groeneveld1", "~Kyle_Lo1", "~Shane_Arora1", "~Akshita_Bhagia1", "~Yuling_Gu1", "~Shengyi_Huang1", "~Matt_Jordan1", "~Nathan_Lambert1", "~Dustin_Schwenk1", "~Oyvind_Tafjord2", "~Taira_Anderson1", "~David_Atkinson3", "~Faeze_Brahman1", "~Chr...
{ "value": "COLM 2025" }
{ "value": "colmweb.org/COLM/2025/Conference" }
{ "value": "/pdf/ee2c137da42a7d7cd97b58127c3b38b1bd47107d.pdf" }
conference
colmweb.org/COLM/2025/Conference
2025
COLM
[ "language model", "pretraining", "training stability", "training data", "instruction tuning" ]
I acknowledge that I and all co-authors of this work have read and commit to adhering to the COLM Code of Ethics on https://colmweb.org/CoE.html
I certify that this submission complies with the submission instructions as described on https://colmweb.org/AuthorGuide.html
null
null
@inproceedings{ walsh2025, title={2 {OLM}o 2 Furious ({COLM}{\textquoteright}s Version)}, author={Evan Pete Walsh and Luca Soldaini and Dirk Groeneveld and Kyle Lo and Shane Arora and Akshita Bhagia and Yuling Gu and Shengyi Huang and Matt Jordan and Nathan Lambert and Dustin Schwenk and Oyvind Tafjord and Taira Anders...
walsh|2_olmo_2_furious_colms_version
null
null
This submission is NOT exempt from the Reciprocal Reviewing requirement. (We expect most submissions to fall in this category.)
~Kyle_Lo1
{ "readers": [ "colmweb.org/COLM/2025/Conference", "colmweb.org/COLM/2025/Conference/Submission1136/Authors" ] }
SUV: Scalable Large Language Model Copyright Compliance with Regularized Selective Unlearning
We propose SUV, a selective unlearning framework that excises memorized copyrighted content from LLMs without compromising their overall performance.
Large Language Models (LLMs) have transformed natural language processing by learning from massive datasets, yet this rapid progress has also drawn legal scrutiny, as the ability to unintentionally generate copyrighted content has already prompted several prominent lawsuits. In this work, we introduce SUV (Selective Un...
[ "Tianyang Xu", "Xiaoze Liu", "Feijie Wu", "Xiaoqian Wang", "Jing Gao" ]
https://openreview.net/forum?id=2YdSsi0bxK
2YdSsi0bxK
2YdSsi0bxK
[ "~Tianyang_Xu3", "~Xiaoze_Liu1", "~Feijie_Wu1", "~Xiaoqian_Wang1", "~Jing_Gao1" ]
{ "value": "COLM 2025" }
{ "value": "colmweb.org/COLM/2025/Conference" }
{ "value": "/pdf/23716f5cef278592db52547b21cc4ce2972d69ed.pdf" }
conference
colmweb.org/COLM/2025/Conference
2025
COLM
[ "LLMs", "Calibration", "Copyright", "Unlearning" ]
I acknowledge that I and all co-authors of this work have read and commit to adhering to the COLM Code of Ethics on https://colmweb.org/CoE.html
I certify that this submission complies with the submission instructions as described on https://colmweb.org/AuthorGuide.html
null
null
@inproceedings{ xu2025suv, title={{SUV}: Scalable Large Language Model Copyright Compliance with Regularized Selective Unlearning}, author={Tianyang Xu and Xiaoze Liu and Feijie Wu and Xiaoqian Wang and Jing Gao}, booktitle={Second Conference on Language Modeling}, year={2025}, url={https://openreview.net/forum?id=2YdS...
xu|suv_scalable_large_language_model_copyright_compliance_with_regularized_selective_unlearning
null
null
null
null
null
PredGen: Accelerated Inference of Large Language Models through Input-Time Speculation for Real-Time Speech Interaction
We leverage speculative decoding at user input time to reduce the latency of speech interactions.
Large Language Models (LLMs) are widely used in real-time voice chat applications, typically in combination with text-to-speech (TTS) systems to generate audio responses. However, their large size often leads to noticeable latency between the end of user input and the start of audio output, resulting in suboptimal user...
[ "Shufan Li", "Aditya Grover" ]
https://openreview.net/forum?id=2Kl8Ztw6wk
2Kl8Ztw6wk
2Kl8Ztw6wk
[ "~Shufan_Li1", "~Aditya_Grover1" ]
{ "value": "COLM 2025" }
{ "value": "colmweb.org/COLM/2025/Conference" }
{ "value": "/pdf/9999eac5fd953cddbee91e9c9d9168d017c657f5.pdf" }
conference
colmweb.org/COLM/2025/Conference
2025
COLM
[ "Large Language Models", "Inference", "Speculative Decoding" ]
I acknowledge that I and all co-authors of this work have read and commit to adhering to the COLM Code of Ethics on https://colmweb.org/CoE.html
I certify that this submission complies with the submission instructions as described on https://colmweb.org/AuthorGuide.html
null
null
@inproceedings{ li2025predgen, title={PredGen: Accelerated Inference of Large Language Models through Input-Time Speculation for Real-Time Speech Interaction}, author={Shufan Li and Aditya Grover}, booktitle={Second Conference on Language Modeling}, year={2025}, url={https://openreview.net/forum?id=2Kl8Ztw6wk} }
li|predgen_accelerated_inference_of_large_language_models_through_inputtime_speculation_for_realtime_speech_interaction
null
null
null
null
null
Language models align with brain regions that represent concepts across modalities
We find that LMs predict brain signal better in areas that respond more consistently to the same concept encoded in different modalities.
Cognitive science and neuroscience have long faced the challenge of disentangling representations of language from representations of conceptual meaning. As the same problem arises in today's language models (LMs), we investigate the relationship between LM--brain alignment and two neural metrics: (1) the level of brai...
[ "Maria Ryskina", "Greta Tuckute", "Alexander Fung", "Ashley Malkin", "Evelina Fedorenko" ]
https://openreview.net/forum?id=2JohTFaGbW
2JohTFaGbW
2JohTFaGbW
[ "~Maria_Ryskina1", "~Greta_Tuckute1", "~Alexander_Fung1", "~Ashley_Malkin1", "~Evelina_Fedorenko1" ]
{ "value": "COLM 2025" }
{ "value": "colmweb.org/COLM/2025/Conference" }
{ "value": "/pdf/429ed5a7923c63dc595d8547a5686bc7ac1fc9bd.pdf" }
conference
colmweb.org/COLM/2025/Conference
2025
COLM
[ "LM–brain alignment", "fMRI", "conceptual meaning", "cognitive neuroscience" ]
I acknowledge that I and all co-authors of this work have read and commit to adhering to the COLM Code of Ethics on https://colmweb.org/CoE.html
I certify that this submission complies with the submission instructions as described on https://colmweb.org/AuthorGuide.html
null
null
@inproceedings{ ryskina2025language, title={Language models align with brain regions that represent concepts across modalities}, author={Maria Ryskina and Greta Tuckute and Alexander Fung and Ashley Malkin and Evelina Fedorenko}, booktitle={Second Conference on Language Modeling}, year={2025}, url={https://openreview.net/forum?id=2JohTFaGbW} }
ryskina|language_models_align_with_brain_regions_that_represent_concepts_across_modalities
null
null
null
null
null
Truth-value judgment in language models: ‘truth directions’ are context sensitive
Investigation of the in-context behaviour of LLM 'truth directions'.
Recent work has demonstrated that the latent spaces of large language models (LLMs) contain directions predictive of the truth of sentences. Multiple methods recover such directions and build probes that are described as uncovering a model’s “knowledge” or “beliefs”. We investigate this phenomenon, looking closely at t...
[ "Stefan F. Schouten", "Peter Bloem", "Ilia Markov", "Piek Vossen" ]
https://openreview.net/forum?id=2H85485yAb
2H85485yAb
2H85485yAb
[ "~Stefan_F._Schouten1", "~Peter_Bloem1", "~Ilia_Markov2", "~Piek_Vossen2" ]
{ "value": "COLM 2025" }
{ "value": "colmweb.org/COLM/2025/Conference" }
{ "value": "/pdf/e6dc394daf825c536dba3787225b727ce1977be1.pdf" }
conference
colmweb.org/COLM/2025/Conference
2025
COLM
[ "mechinterp", "mechanistic interpretability", "interpretability", "truth directions", "LLM beliefs", "large language model", "llm" ]
I acknowledge that I and all co-authors of this work have read and commit to adhering to the COLM Code of Ethics on https://colmweb.org/CoE.html
I certify that this submission complies with the submission instructions as described on https://colmweb.org/AuthorGuide.html
null
null
@inproceedings{ schouten2025truthvalue, title={Truth-value judgment in language models: {\textquoteleft}truth directions{\textquoteright} are context sensitive}, author={Stefan F. Schouten and Peter Bloem and Ilia Markov and Piek Vossen}, booktitle={Second Conference on Language Modeling}, year={2025}, url={https://openreview.net/forum?id=2H85485yAb} }
schouten|truthvalue_judgment_in_language_models_truth_directions_are_context_sensitive
null
null
null
null
null
Context-Adaptive Multi-Prompt Embedding with Large Language Models for Vision-Language Alignment
We propose a method that uses multiple context-adaptive prompts with a pretrained LLM to generate diverse text embeddings for contrastive vision-language learning.
We propose Context-Adaptive Multi-Prompt Embedding, a novel approach to enrich semantic representations in vision-language contrastive learning. Unlike standard CLIP-style models that rely on a single text embedding, our method introduces multiple structured prompts, each containing a distinct adaptive token that captu...
[ "Dahun Kim", "Anelia Angelova" ]
https://openreview.net/forum?id=29jP6OsrIQ
29jP6OsrIQ
29jP6OsrIQ
[ "~Dahun_Kim1", "~Anelia_Angelova1" ]
{ "value": "COLM 2025" }
{ "value": "colmweb.org/COLM/2025/Conference" }
{ "value": "/pdf/f81ca9025bba2c317f120f349e32aa1c6814c7b8.pdf" }
conference
colmweb.org/COLM/2025/Conference
2025
COLM
[ "CLIP", "contrastive learning", "LLM embedding" ]
I acknowledge that I and all co-authors of this work have read and commit to adhering to the COLM Code of Ethics on https://colmweb.org/CoE.html
I certify that this submission complies with the submission instructions as described on https://colmweb.org/AuthorGuide.html
null
null
@inproceedings{ kim2025contextadaptive, title={Context-Adaptive Multi-Prompt Embedding with Large Language Models for Vision-Language Alignment}, author={Dahun Kim and Anelia Angelova}, booktitle={Second Conference on Language Modeling}, year={2025}, url={https://openreview.net/forum?id=29jP6OsrIQ} }
kim|contextadaptive_multiprompt_embedding_with_large_language_models_for_visionlanguage_alignment
/attachment/5bc95b106630bdf9b944809b0efbe6b8eb7c0a1f.zip
null
null
null
null
FalseReject: A Resource for Improving Contextual Safety and Mitigating Over-Refusals in LLMs via Structured Reasoning
We introduce FalseReject, a dataset and method to reduce unnecessary refusals in language models, significantly improving their balance between safety and helpfulness.
Safety alignment approaches in large language models (LLMs) often lead to the over-refusal of benign queries, significantly diminishing their utility in sensitive scenarios. To address this challenge, we introduce FalseReject, a comprehensive resource containing 16k seemingly toxic queries accompanied by structured res...
[ "Zhehao Zhang", "Weijie Xu", "Fanyou Wu", "Chandan K. Reddy" ]
https://openreview.net/forum?id=1w9Hay7tvm
1w9Hay7tvm
1w9Hay7tvm
[ "~Zhehao_Zhang1", "~Weijie_Xu1", "~Fanyou_Wu1", "~Chandan_K._Reddy1" ]
{ "value": "COLM 2025" }
{ "value": "colmweb.org/COLM/2025/Conference" }
{ "value": "/pdf/ec166f8ec049c9b30eba1ec7ea72c691f741fb37.pdf" }
conference
colmweb.org/COLM/2025/Conference
2025
COLM
[ "Over-refusal", "Safety", "Instruction Tuning" ]
I acknowledge that I and all co-authors of this work have read and commit to adhering to the COLM Code of Ethics on https://colmweb.org/CoE.html
I certify that this submission complies with the submission instructions as described on https://colmweb.org/AuthorGuide.html
null
null
@inproceedings{ zhang2025falsereject, title={FalseReject: A Resource for Improving Contextual Safety and Mitigating Over-Refusals in {LLM}s via Structured Reasoning}, author={Zhehao Zhang and Weijie Xu and Fanyou Wu and Chandan K. Reddy}, booktitle={Second Conference on Language Modeling}, year={2025}, url={https://openreview.net/forum?id=1w9Hay7tvm} }
zhang|falsereject_a_resource_for_improving_contextual_safety_and_mitigating_overrefusals_in_llms_via_structured_reasoning
null
null
null
null
null
Modifying Large Language Model Post-Training for Diverse Creative Writing
In creative writing generation, we facilitate diversity in LLM outputs by accounting for how each training instance differs from other instances with the same prompt.
As creative writing tasks do not have singular correct answers, large language models (LLMs) trained to perform these tasks should be able to generate diverse valid outputs. However, LLM post-training often focuses on improving generation quality but neglects to facilitate output diversity. Hence, in creative writing g...
[ "John Joon Young Chung", "Vishakh Padmakumar", "Melissa Roemmele", "Yuqian Sun", "Max Kreminski" ]
https://openreview.net/forum?id=1Pmuw08LoM
1Pmuw08LoM
1Pmuw08LoM
[ "~John_Joon_Young_Chung1", "~Vishakh_Padmakumar1", "~Melissa_Roemmele1", "~Yuqian_Sun1", "~Max_Kreminski1" ]
{ "value": "COLM 2025" }
{ "value": "colmweb.org/COLM/2025/Conference" }
{ "value": "/pdf/46f97e4db02cec3f04f7d04c3fc6e7a599a585af.pdf" }
conference
colmweb.org/COLM/2025/Conference
2025
COLM
[ "creative writing generation", "diversity", "post-training" ]
I acknowledge that I and all co-authors of this work have read and commit to adhering to the COLM Code of Ethics on https://colmweb.org/CoE.html
I certify that this submission complies with the submission instructions as described on https://colmweb.org/AuthorGuide.html
null
null
@inproceedings{ chung2025modifying, title={Modifying Large Language Model Post-Training for Diverse Creative Writing}, author={John Joon Young Chung and Vishakh Padmakumar and Melissa Roemmele and Yuqian Sun and Max Kreminski}, booktitle={Second Conference on Language Modeling}, year={2025}, url={https://openreview.net/forum?id=1Pmuw08LoM} }
chung|modifying_large_language_model_posttraining_for_diverse_creative_writing
/attachment/4ba9fd43d8499d750dec728005bede64a9ef32bf.zip
null
null
null
null
BigCharts-R1: Enhanced Chart Reasoning with Visual Reinforcement Finetuning
We propose BigCharts, a diverse chart dataset, and a training framework that improves VLMs’ chart comprehension.
Charts are essential to data analysis, transforming raw data into clear visual representations that support human decision-making. Although current vision-language models (VLMs) have made significant progress, they continue to struggle with chart comprehension due to training on datasets that lack diversity and real-w...
[ "Ahmed Masry", "Abhay Puri", "Masoud Hashemi", "Juan A. Rodriguez", "Megh Thakkar", "Khyati Mahajan", "Vikas Yadav", "Sathwik Tejaswi Madhusudhan", "Alexandre Piché", "Dzmitry Bahdanau", "Christopher Pal", "David Vazquez", "Enamul Hoque", "Perouz Taslakian", "Sai Rajeswar", "Spandana G...
https://openreview.net/forum?id=19fydz1QnW
19fydz1QnW
19fydz1QnW
[ "~Ahmed_Masry1", "~Abhay_Puri1", "~Masoud_Hashemi1", "~Juan_A._Rodriguez1", "~Megh_Thakkar1", "~Khyati_Mahajan1", "~Vikas_Yadav2", "~Sathwik_Tejaswi_Madhusudhan2", "~Alexandre_Piché1", "~Dzmitry_Bahdanau1", "~Christopher_Pal1", "~David_Vazquez1", "~Enamul_Hoque2", "~Perouz_Taslakian1", "...
{ "value": "COLM 2025" }
{ "value": "colmweb.org/COLM/2025/Conference" }
{ "value": "/pdf/1a434bfeb2d15ce12abaf1e3a291e089decb700a.pdf" }
conference
colmweb.org/COLM/2025/Conference
2025
COLM
[ "charts", "chartqa", "vision language models", "multimodal" ]
I acknowledge that I and all co-authors of this work have read and commit to adhering to the COLM Code of Ethics on https://colmweb.org/CoE.html
I certify that this submission complies with the submission instructions as described on https://colmweb.org/AuthorGuide.html
null
null
@inproceedings{ masry2025bigchartsr, title={BigCharts-R1: Enhanced Chart Reasoning with Visual Reinforcement Finetuning}, author={Ahmed Masry and Abhay Puri and Masoud Hashemi and Juan A. Rodriguez and Megh Thakkar and Khyati Mahajan and Vikas Yadav and Sathwik Tejaswi Madhusudhan and Alexandre Pich{\'e} and Dzmitry Ba...
masry|bigchartsr1_enhanced_chart_reasoning_with_visual_reinforcement_finetuning
null
null
null
null
null
Noiser: Bounded Input Perturbations for Attributing Large Language Models
We introduce Noiser, a bounded perturbation-based method to explain LLM predictions and tackle distribution shift. We also propose answerability, a new metric that evaluates the relevance of rationales without gold data or human evaluation.
Feature attribution (FA) methods are common post-hoc approaches that explain how Large Language Models (LLMs) make predictions. Accordingly, generating faithful attributions that reflect the actual inner behavior of the model is crucial. In this paper, we introduce Noiser, a perturbation-based FA method that imposes bo...
[ "Mohammad Reza Ghasemi Madani", "Aryo Pradipta Gema", "Yu Zhao", "Gabriele Sarti", "Pasquale Minervini", "Andrea Passerini" ]
https://openreview.net/forum?id=17yFbHmblo
17yFbHmblo
17yFbHmblo
[ "~Mohammad_Reza_Ghasemi_Madani1", "~Aryo_Pradipta_Gema1", "~Yu_Zhao13", "~Gabriele_Sarti1", "~Pasquale_Minervini4", "~Andrea_Passerini2" ]
{ "value": "COLM 2025" }
{ "value": "colmweb.org/COLM/2025/Conference" }
{ "value": "/pdf/b3e0eeffcd43001aba85a5e79caa97470f363e5b.pdf" }
conference
colmweb.org/COLM/2025/Conference
2025
COLM
[ "Feature Attribution", "Post-hoc Explanations", "Large Language Model", "LLMs" ]
I acknowledge that I and all co-authors of this work have read and commit to adhering to the COLM Code of Ethics on https://colmweb.org/CoE.html
I certify that this submission complies with the submission instructions as described on https://colmweb.org/AuthorGuide.html
null
null
@inproceedings{ madani2025noiser, title={Noiser: Bounded Input Perturbations for Attributing Large Language Models}, author={Mohammad Reza Ghasemi Madani and Aryo Pradipta Gema and Yu Zhao and Gabriele Sarti and Pasquale Minervini and Andrea Passerini}, booktitle={Second Conference on Language Modeling}, year={2025}, url={https://openreview.net/forum?id=17yFbHmblo} }
madani|noiser_bounded_input_perturbations_for_attributing_large_language_models
null
null
null
null
null
ALFA: Aligning LLMs to Ask Good Questions A Case Study in Clinical Reasoning
Novel alignment recipe to teach LLMs to perform complex goals by (1) decomposing them into more tangible attributes, (2) creating synthetic data, and (3) integrating the attributes.
Large language models (LLMs) often fail to ask effective questions under uncertainty, making them unreliable in domains where proactive information-gathering is essential for decision-making. We present ALignment via Fine-grained Attributes (ALFA), a framework that improves LLM question-asking by (i) decomposing the no...
[ "Shuyue Stella Li", "Jimin Mun", "Faeze Brahman", "Pedram Hosseini", "Bryceton G. Thomas", "Jessica M. Sin", "Bing Ren", "Jonathan S. Ilgen", "Yulia Tsvetkov", "Maarten Sap" ]
https://openreview.net/forum?id=12u7diwku0
12u7diwku0
12u7diwku0
[ "~Shuyue_Stella_Li1", "~Jimin_Mun1", "~Faeze_Brahman1", "~Pedram_Hosseini1", "~Bryceton_G._Thomas1", "~Jessica_M._Sin1", "~Bing_Ren1", "~Jonathan_S._Ilgen1", "~Yulia_Tsvetkov1", "~Maarten_Sap1" ]
{ "value": "COLM 2025" }
{ "value": "colmweb.org/COLM/2025/Conference" }
{ "value": "/pdf/8083d74723508dd138fb4da403569d14619daf33.pdf" }
conference
colmweb.org/COLM/2025/Conference
2025
COLM
[ "Information Seeking", "Question Asking", "Reliable LLM", "Clinical Reasoning", "Structured Rewards" ]
I acknowledge that I and all co-authors of this work have read and commit to adhering to the COLM Code of Ethics on https://colmweb.org/CoE.html
I certify that this submission complies with the submission instructions as described on https://colmweb.org/AuthorGuide.html
null
null
@inproceedings{ li2025alfa, title={{ALFA}: Aligning {LLM}s to Ask Good Questions A Case Study in Clinical Reasoning}, author={Shuyue Stella Li and Jimin Mun and Faeze Brahman and Pedram Hosseini and Bryceton G. Thomas and Jessica M. Sin and Bing Ren and Jonathan S. Ilgen and Yulia Tsvetkov and Maarten Sap}, booktitle={Second Conference on Language Modeling}, year={2025}, url={https://openreview.net/forum?id=12u7diwku0} }
li|alfa_aligning_llms_to_ask_good_questions_a_case_study_in_clinical_reasoning
null
null
null
null
null
Off-Policy Corrected Reward Modeling for Reinforcement Learning from Human Feedback
We propose to use importance weighting to iteratively retrain an off-policy corrected reward model, resulting in a significantly better final policy.
Reinforcement Learning from Human Feedback (RLHF) allows us to train models, such as language models (LMs), to follow complex human preferences. In RLHF for LMs, we first train an LM using supervised fine-tuning, sample pairs of responses, obtain human feedback, and use the resulting data to train a reward model (RM). ...
[ "Johannes Ackermann", "Takashi Ishida", "Masashi Sugiyama" ]
https://openreview.net/forum?id=0zxugBcgF5
0zxugBcgF5
0zxugBcgF5
[ "~Johannes_Ackermann1", "~Takashi_Ishida1", "~Masashi_Sugiyama1" ]
{ "value": "COLM 2025" }
{ "value": "colmweb.org/COLM/2025/Conference" }
{ "value": "/pdf/f388799f7d6563f4c4688c9d4afd2a33cba72370.pdf" }
conference
colmweb.org/COLM/2025/Conference
2025
COLM
[ "alignment", "reinforcement learning from human feedback", "reinforcement learning", "reward modeling" ]
I acknowledge that I and all co-authors of this work have read and commit to adhering to the COLM Code of Ethics on https://colmweb.org/CoE.html
I certify that this submission complies with the submission instructions as described on https://colmweb.org/AuthorGuide.html
null
null
@inproceedings{ ackermann2025offpolicy, title={Off-Policy Corrected Reward Modeling for Reinforcement Learning from Human Feedback}, author={Johannes Ackermann and Takashi Ishida and Masashi Sugiyama}, booktitle={Second Conference on Language Modeling}, year={2025}, url={https://openreview.net/forum?id=0zxugBcgF5} }
ackermann|offpolicy_corrected_reward_modeling_for_reinforcement_learning_from_human_feedback
null
null
null
null
null
MAC: A Live Benchmark for Multimodal Large Language Models in Scientific Understanding
A Live Benchmark for Multimodal Large Language Models in Scientific Understanding
As multimodal large language models (MLLMs) grow increasingly capable, fixed benchmarks are gradually losing their effectiveness in evaluating high-level scientific understanding. In this paper, we introduce the Multimodal Academic Cover benchmark (MAC), a live benchmark that could continuously evolve with scientific a...
[ "Mohan Jiang", "Jin Gao", "Jiahao Zhan", "Dequan Wang" ]
https://openreview.net/forum?id=0aHOVhkuOB
0aHOVhkuOB
0aHOVhkuOB
[ "~Mohan_Jiang1", "~Jin_Gao3", "~Jiahao_Zhan1", "~Dequan_Wang1" ]
{ "value": "COLM 2025" }
{ "value": "colmweb.org/COLM/2025/Conference" }
{ "value": "/pdf/a1789c20b738bb8892ffc7f3489a13f8d78352e4.pdf" }
conference
colmweb.org/COLM/2025/Conference
2025
COLM
[ "MLLM", "Benchmark", "Scientific Understanding Capabilities" ]
I acknowledge that I and all co-authors of this work have read and commit to adhering to the COLM Code of Ethics on https://colmweb.org/CoE.html
I certify that this submission complies with the submission instructions as described on https://colmweb.org/AuthorGuide.html
null
null
@inproceedings{ jiang2025mac, title={{MAC}: A Live Benchmark for Multimodal Large Language Models in Scientific Understanding}, author={Mohan Jiang and Jin Gao and Jiahao Zhan and Dequan Wang}, booktitle={Second Conference on Language Modeling}, year={2025}, url={https://openreview.net/forum?id=0aHOVhkuOB} }
jiang|mac_a_live_benchmark_for_multimodal_large_language_models_in_scientific_understanding
/attachment/9e5107c434a576e2bea79b55e38158745b152410.zip
null
null
null
null
Impact-driven Context Filtering For Cross-file Code Completion
We introduce an adaptive retrieval-augmented framework for repository-level code completion, which automatically filters irrelevant context to enhance completion accuracy and efficiency.
Retrieval-augmented generation (RAG) has recently demonstrated considerable potential for repository-level code completion, as it integrates cross-file knowledge with in-file preceding code to provide comprehensive contexts for generation. To better understand the contribution of the retrieved cross-file contexts, we i...
[ "Yanzhou Li", "Shangqing Liu", "Kangjie Chen", "Tianwei Zhang", "Yang Liu" ]
https://openreview.net/forum?id=0Y2zXLFBji
0Y2zXLFBji
0Y2zXLFBji
[ "~Yanzhou_Li1", "~Shangqing_Liu1", "~Kangjie_Chen1", "~Tianwei_Zhang1", "~Yang_Liu36" ]
{ "value": "COLM 2025" }
{ "value": "colmweb.org/COLM/2025/Conference" }
{ "value": "/pdf/9d676b553f9f39c1dd7d5e1a823ecbbc1e0d6f51.pdf" }
conference
colmweb.org/COLM/2025/Conference
2025
COLM
[ "Code Completion", "Adaptive Retrieval-augmented Generation", "Large Language Model" ]
I acknowledge that I and all co-authors of this work have read and commit to adhering to the COLM Code of Ethics on https://colmweb.org/CoE.html
I certify that this submission complies with the submission instructions as described on https://colmweb.org/AuthorGuide.html
null
null
@inproceedings{ li2025impactdriven, title={Impact-driven Context Filtering For Cross-file Code Completion}, author={Yanzhou Li and Shangqing Liu and Kangjie Chen and Tianwei Zhang and Yang Liu}, booktitle={Second Conference on Language Modeling}, year={2025}, url={https://openreview.net/forum?id=0Y2zXLFBji} }
li|impactdriven_context_filtering_for_crossfile_code_completion
null
null
null
null
null
Learning Effective Language Representations for Sequential Recommendation via Joint Embedding Predictive Architecture
JEPA4Rec improves sequential recommendation by combining joint embedding predictive architecture with language modeling, achieving better item representations and outperforming existing methods, especially in low-resource and cross-domain scenarios.
Language representation learning has emerged as a promising approach for sequential recommendation, thanks to its ability to learn generalizable representations. However, despite its advantages, this approach still struggles with data sparsity and a limited understanding of common-sense user preferences. To address the...
[ "Nguyen Anh Minh", "Dung D. Le" ]
https://openreview.net/forum?id=0Qbwjd0fxB
0Qbwjd0fxB
0Qbwjd0fxB
[ "~Nguyen_Anh_Minh1", "~Dung_D._Le2" ]
{ "value": "COLM 2025" }
{ "value": "colmweb.org/COLM/2025/Conference" }
{ "value": "/pdf/15165329da5a76904efdb6e90d529cb9b849bc5f.pdf" }
conference
colmweb.org/COLM/2025/Conference
2025
COLM
[ "language models", "joint embedding predictive architecture", "sequential recommendation" ]
I acknowledge that I and all co-authors of this work have read and commit to adhering to the COLM Code of Ethics on https://colmweb.org/CoE.html
I certify that this submission complies with the submission instructions as described on https://colmweb.org/AuthorGuide.html
null
null
@inproceedings{ minh2025learning, title={Learning Effective Language Representations for Sequential Recommendation via Joint Embedding Predictive Architecture}, author={Nguyen Anh Minh and Dung D. Le}, booktitle={Second Conference on Language Modeling}, year={2025}, url={https://openreview.net/forum?id=0Qbwjd0fxB} }
minh|learning_effective_language_representations_for_sequential_recommendation_via_joint_embedding_predictive_architecture
/attachment/39c271ff9b08d923e1db47f3b6618f8d65046970.zip
null
null
null
null
BEARCUBS: A benchmark for computer-using web agents
We introduce BEARCUBS, a benchmark of 111 information-seeking questions designed to evaluate a web agent’s ability to search, browse, and identify factual information from the web.
Modern web agents possess computer use abilities that allow them to interact with webpages by sending commands to a virtual keyboard and mouse. While such agents have considerable potential to assist human users with complex tasks, evaluating their capabilities in real-world settings poses a major challenge. To this en...
[ "Yixiao Song", "Katherine Thai", "Chau Minh Pham", "Yapei Chang", "Mazin Nadaf", "Mohit Iyyer" ]
https://openreview.net/forum?id=0JzWiigkUy
0JzWiigkUy
0JzWiigkUy
[ "~Yixiao_Song1", "~Katherine_Thai1", "~Chau_Minh_Pham1", "~Yapei_Chang1", "~Mazin_Nadaf1", "~Mohit_Iyyer1" ]
{ "value": "COLM 2025" }
{ "value": "colmweb.org/COLM/2025/Conference" }
{ "value": "/pdf/0c8e8f33881d200624dd89fc08cb7bc3ef3e7f55.pdf" }
conference
colmweb.org/COLM/2025/Conference
2025
COLM
[ "computer-use agent", "benchmark", "multimodal" ]
I acknowledge that I and all co-authors of this work have read and commit to adhering to the COLM Code of Ethics on https://colmweb.org/CoE.html
I certify that this submission complies with the submission instructions as described on https://colmweb.org/AuthorGuide.html
null
null
@inproceedings{ song2025bearcubs, title={{BEARCUBS}: A benchmark for computer-using web agents}, author={Yixiao Song and Katherine Thai and Chau Minh Pham and Yapei Chang and Mazin Nadaf and Mohit Iyyer}, booktitle={Second Conference on Language Modeling}, year={2025}, url={https://openreview.net/forum?id=0JzWiigkUy} }
song|bearcubs_a_benchmark_for_computerusing_web_agents
/attachment/a29bd3e2638735810acd3292f411a4a586ca58c3.zip
null
null
null
null