Speak or Stay Silent: Context-Aware Turn-Taking in Multi-Party Dialogue
Paper: arXiv:2603.11409
A LoRA adapter for Qwen/Qwen3-8B, fine-tuned on the AMI Meeting Corpus for proactive response prediction in multi-party conversations. Given the conversational context and the current utterance, the model predicts whether a target speaker will SPEAK next or remain SILENT.
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

# Load the base model, then attach the LoRA adapter on top of it
base_model = AutoModelForCausalLM.from_pretrained("Qwen/Qwen3-8B")
model = PeftModel.from_pretrained(base_model, "kraken07/qwen3-8b-ami")
tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen3-8B")

# The input format should match training: context turns followed by the current turn.
# The model outputs a SPEAK or SILENT prediction for the target speaker.
```
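The exact training prompt template is not documented here, so the helpers below are a hypothetical sketch of how one might assemble context turns into a prompt and map the model's raw generation back to a SPEAK/SILENT label. The function names and prompt wording are assumptions, not the adapter's actual format.

```python
# Hypothetical helpers for the SPEAK/SILENT task.
# The prompt template is an assumption, not the documented training format.

def build_prompt(context_turns, current_turn, target_speaker):
    """Assemble (speaker, utterance) context turns plus the current turn
    into a single prompt string."""
    history = "\n".join(f"{spk}: {utt}" for spk, utt in context_turns)
    return (
        f"{history}\n"
        f"Current turn: {current_turn}\n"
        f"Will {target_speaker} speak next? Answer SPEAK or SILENT:"
    )

def parse_prediction(generated_text):
    """Map the model's raw generation to a SPEAK/SILENT label
    (defaults to SILENT when the output is ambiguous)."""
    text = generated_text.strip().upper()
    return "SPEAK" if text.startswith("SPEAK") else "SILENT"
```

The output of `build_prompt` can be tokenized and passed to `model.generate`, and the decoded continuation fed to `parse_prediction`.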
If you use this model, please cite our work:
```bibtex
@misc{bhagtani2026speakstaysilentcontextaware,
  title={Speak or Stay Silent: Context-Aware Turn-Taking in Multi-Party Dialogue},
  author={Bhagtani, Kratika and Anand, Mrinal and Xu, Yu Chen and Yadav, Amit Kumar Singh},
  year={2026},
  eprint={2603.11409},
  archivePrefix={arXiv},
  url={https://arxiv.org/abs/2603.11409}
}
```