Qwen3-8B-AMI: Proactive Response Prediction in Multi-Party Dialogue

LoRA adapter for Qwen/Qwen3-8B fine-tuned on the AMI meeting corpus for proactive response prediction in multi-party conversations. Given a conversational context and a current utterance, the model predicts whether a target speaker will SPEAK next or remain SILENT.

Model Details

  • Model type: LoRA adapter for a causal language model, used for binary turn-taking classification (SPEAK vs. SILENT)
  • Language(s): English
  • License: Apache 2.0
  • Finetuned from: Qwen/Qwen3-8B
  • Training data: AMI Meeting Corpus (multi-party meeting recordings and transcripts)

How to Get Started with the Model

from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

# Load the base model, then attach the LoRA adapter on top of it.
# torch_dtype="auto" and device_map="auto" let transformers pick a
# suitable precision and device placement for an 8B-parameter model.
base_model = AutoModelForCausalLM.from_pretrained(
    "Qwen/Qwen3-8B", torch_dtype="auto", device_map="auto"
)
model = PeftModel.from_pretrained(base_model, "kraken07/qwen3-8b-ami")
tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen3-8B")

# Your input format should match training: context turns + current turn
# Output: SPEAK or SILENT prediction for the target speaker
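The exact prompt layout used during training is not documented in this card. As an illustration only, a minimal sketch of how context turns, the current utterance, and the target speaker might be assembled into a single prompt; the function name, field order, and question wording below are assumptions, not the trained format:

```python
# Hypothetical prompt builder -- the exact training format is not
# documented in this card, so the layout and wording are assumptions.
def build_prompt(context_turns, current_turn, target_speaker):
    """Format (speaker, utterance) context turns plus the current turn
    into a prompt asking whether target_speaker will speak next."""
    lines = [f"{spk}: {utt}" for spk, utt in context_turns]
    spk, utt = current_turn
    lines.append(f"Current turn -- {spk}: {utt}")
    lines.append(f"Will {target_speaker} speak next? Answer SPEAK or SILENT.")
    return "\n".join(lines)

prompt = build_prompt(
    context_turns=[
        ("A", "Let's review the remote control design."),
        ("B", "I think the buttons are too small."),
    ],
    current_turn=("A", "What does everyone else think?"),
    target_speaker="C",
)
```

The resulting string would then be tokenized and passed to `model.generate` (or scored directly) to obtain the SPEAK/SILENT prediction for the target speaker.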

Citation

If you use this model, please cite our work:

@misc{bhagtani2026speakstaysilentcontextaware,
  title={Speak or Stay Silent: Context-Aware Turn-Taking in Multi-Party Dialogue},
  author={Bhagtani, Kratika and Anand, Mrinal and Xu, Yu Chen and Yadav, Amit Kumar Singh},
  year={2026},
  archivePrefix={arXiv},
  url={https://arxiv.org/abs/2603.11409}
}