Say Something Else: Rethinking Contextual Privacy as Information Sufficiency
Abstract
We formalize privacy-preserving LLM communication as an Information Sufficiency task, introduce free-text pseudonymization as a third rewriting strategy, and evaluate all strategies under a conversational protocol, showing superior privacy-utility tradeoffs.
LLM agents increasingly draft messages on behalf of users, yet users routinely overshare sensitive information and disagree on what counts as private. Existing systems support only suppression (omitting sensitive information) and generalization (replacing information with an abstraction), and are typically evaluated on single isolated messages, leaving both the strategy space and evaluation setting incomplete. We formalize privacy-preserving LLM communication as an Information Sufficiency (IS) task, introduce free-text pseudonymization as a third strategy that replaces sensitive attributes with functionally equivalent alternatives, and propose a conversational evaluation protocol that assesses strategies under realistic multi-turn follow-up pressure. Across 792 scenarios spanning three power-relation types (institutional, peer, intimate) and three sensitivity categories (discrimination risk, social cost, boundary), we evaluate seven frontier LLMs on privacy at two granularities, covertness, and utility. Pseudonymization yields the strongest privacy–utility tradeoff overall, and single-message evaluation systematically underestimates leakage, with generalization losing up to 16.3 percentage points of privacy under follow-up.
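As a minimal sketch of the three strategies the abstract names, the toy functions below apply suppression, generalization, and pseudonymization to a single sensitive attribute via string substitution. This is an illustration only, not the paper's implementation: the attribute spans, replacement choices, and example message are all hypothetical, and the paper's pseudonymization is free-text rewriting by an LLM rather than literal substitution.

```python
# Toy illustration of the three rewriting strategies (hypothetical example;
# the paper's method uses LLM rewriting, not string substitution).

def suppress(msg: str, span: str) -> str:
    # Suppression: omit the sensitive span entirely, then tidy spacing.
    out = msg.replace(span, "")
    return " ".join(out.split()).replace(" .", ".")

def generalize(msg: str, attr: str, abstraction: str) -> str:
    # Generalization: replace the attribute with a broader abstraction.
    return msg.replace(attr, abstraction)

def pseudonymize(msg: str, attr: str, stand_in: str) -> str:
    # Free-text pseudonymization: swap in a functionally equivalent
    # alternative that preserves the message's communicative purpose.
    return msg.replace(attr, stand_in)

msg = "I need Friday off for my chemotherapy appointment."
print(suppress(msg, "for my chemotherapy appointment"))
print(generalize(msg, "chemotherapy appointment", "medical appointment"))
print(pseudonymize(msg, "chemotherapy appointment", "physical-therapy session"))
```

The intuition for why pseudonymization can dominate: suppression may leave the message insufficient for its purpose, generalization still signals that something is being withheld, while a functionally equivalent substitute keeps the message natural and complete.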