arxiv:2604.08567

Multi-User Large Language Model Agents

Published on Mar 19 · Submitted by Shenzhe Zhu on Apr 13
Abstract

Multi-user large language model agents face challenges in handling conflicting objectives, preserving privacy, and coordinating efficiently in multi-principal decision-making scenarios.

AI-generated summary

Large language models (LLMs) and LLM-based agents are increasingly deployed as assistants in planning and decision making, yet most existing systems are implicitly optimized for a single-principal interaction paradigm, in which the model is designed to satisfy the objectives of one dominant user whose instructions are treated as the sole source of authority and utility. However, as these systems are integrated into team workflows and organizational tools, they are increasingly required to serve multiple users simultaneously, each with distinct roles, preferences, and authority levels, leading to multi-user, multi-principal settings with unavoidable conflicts, information asymmetry, and privacy constraints. In this work, we present the first systematic study of multi-user LLM agents. We begin by formalizing multi-user interaction with LLM agents as a multi-principal decision problem, in which a single agent must account for multiple users with potentially conflicting interests, and we characterize the challenges this setting entails. We then introduce a unified multi-user interaction protocol and design three targeted stress-testing scenarios to evaluate current LLMs' capabilities in instruction following, privacy preservation, and coordination. Our results reveal systematic gaps: frontier LLMs frequently fail to maintain stable prioritization under conflicting user objectives, exhibit increasing privacy violations over multi-turn interactions, and suffer from efficiency bottlenecks when coordination requires iterative information gathering.
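To make the multi-principal framing concrete, here is a minimal Python sketch of what such an episode could look like. All names here (Principal, authority, resolve_conflict, and so on) are hypothetical illustrations, not the paper's actual formalism or protocol:

from dataclasses import dataclass, field

@dataclass
class Principal:
    """One user interacting with the shared agent."""
    name: str
    authority: int      # higher value = higher priority
    objective: str      # natural-language goal
    private_facts: set[str] = field(default_factory=set)  # must not leak to other users

@dataclass
class MultiPrincipalEpisode:
    """A single decision problem: one agent serving several principals."""
    principals: list[Principal]

    def resolve_conflict(self, requests: dict[str, str]) -> str:
        """Naive baseline policy: defer to the highest-authority principal
        with a pending request. The paper's stress tests probe whether
        LLMs can keep this kind of prioritization stable across turns."""
        for p in sorted(self.principals, key=lambda q: q.authority, reverse=True):
            if p.name in requests:
                return requests[p.name]
        raise ValueError("no pending requests")

# Two principals with directly conflicting objectives:
alice = Principal("alice", authority=2, objective="book the 9am slot",
                  private_facts={"alice's budget ceiling"})
bob = Principal("bob", authority=1, objective="book the 2pm slot")

episode = MultiPrincipalEpisode([alice, bob])
print(episode.resolve_conflict({"alice": "9am", "bob": "2pm"}))  # -> 9am

Even this toy version surfaces the three failure axes the abstract names: conflicting objectives (whose request wins), privacy (private_facts must stay out of responses to other users), and coordination cost (gathering requests from every principal before deciding).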

Community

Paper submitter

Current LLM agents are primarily designed and trained for single-user settings, overlooking the challenges inherent in multi-user environments. We propose the first definition and stress test for multi-user LLM agents, aiming to evaluate their ability to handle multiple principals, achieve shared objectives, and maintain alignment across diverse user interests.

Nice work!


Get this paper in your agent:

hf papers read 2604.08567

Don't have the latest CLI?
curl -LsSf https://hf.co/cli/install.sh | bash
