arxiv:2604.06409

Say Something Else: Rethinking Contextual Privacy as Information Sufficiency

Published on Apr 7
Abstract

Privacy-preserving large language model communication is formalized as an Information Sufficiency task with free-text pseudonymization as a third strategy, evaluated through conversational protocols showing superior privacy-utility tradeoffs.

AI-generated summary

LLM agents increasingly draft messages on behalf of users, yet users routinely overshare sensitive information and disagree on what counts as private. Existing systems support only suppression (omitting sensitive information) and generalization (replacing information with an abstraction), and are typically evaluated on single isolated messages, leaving both the strategy space and evaluation setting incomplete. We formalize privacy-preserving LLM communication as an Information Sufficiency (IS) task, introduce free-text pseudonymization as a third strategy that replaces sensitive attributes with functionally equivalent alternatives, and propose a conversational evaluation protocol that assesses strategies under realistic multi-turn follow-up pressure. Across 792 scenarios spanning three power-relation types (institutional, peer, intimate) and three sensitivity categories (discrimination risk, social cost, boundary), we evaluate seven frontier LLMs on privacy at two granularities, covertness, and utility. Pseudonymization yields the strongest privacy–utility tradeoff overall, and single-message evaluation systematically underestimates leakage, with generalization losing up to 16.3 percentage points of privacy under follow-up.
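To make the three strategies concrete, here is a minimal toy sketch of how each one might rewrite a single message. This is not the paper's implementation; the example message, the sensitive attribute, and the replacement strings are all hypothetical, chosen only to illustrate the definitions from the summary above (suppression omits, generalization abstracts, pseudonymization swaps in a functionally equivalent alternative).

```python
# Toy illustration of the three rewriting strategies described in the
# abstract. All message text and replacements here are hypothetical;
# the paper's actual strategies are produced by LLMs, not string edits.

MESSAGE = "I can't make the meeting because I have a chemotherapy appointment."
SENSITIVE = "a chemotherapy appointment"

def suppress(msg: str, attr: str) -> str:
    """Suppression: omit the sensitive information entirely."""
    return msg.replace(f" because I have {attr}", "")

def generalize(msg: str, attr: str, abstraction: str) -> str:
    """Generalization: replace the attribute with an abstraction."""
    return msg.replace(attr, abstraction)

def pseudonymize(msg: str, attr: str, alternative: str) -> str:
    """Free-text pseudonymization: replace the attribute with a
    functionally equivalent, less sensitive alternative."""
    return msg.replace(attr, alternative)

print(suppress(MESSAGE, SENSITIVE))
# I can't make the meeting.
print(generalize(MESSAGE, SENSITIVE, "a medical appointment"))
# I can't make the meeting because I have a medical appointment.
print(pseudonymize(MESSAGE, SENSITIVE, "a dentist appointment"))
# I can't make the meeting because I have a dentist appointment.
```

Note how pseudonymization preserves the message's function (explaining the absence with a plausible appointment) while suppression sacrifices it, which is the intuition behind the privacy–utility tradeoff the paper measures.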

