From Chat to Agent: Why Consent Breaks When AI Acts on Your Behalf
Description
The rapid evolution from conversational chatbots to autonomous AI agents has outpaced consent
mechanisms designed to protect users. While millions share sensitive information with LLM-based
systems, evidence shows a widening gap between users’ mental models of data handling and actual
platform practices. This disconnect deepens as AI browsers navigate the web, handle credentials, and
execute transactions. This paper argues that consent is structurally broken across the LLM ecosystem
and identifies three design tensions: personalization versus data minimization, agent autonomy
versus user control, and transparency versus usability. Building on these tensions, the paper proposes
three LLM-specific consent mechanisms — contextual in-flow transparency, task-scoped permissions,
and AI-mediated privacy guardians — and highlights their consequences for security, comprehension,
and power asymmetries in agentic systems.
Files
FromChatToAgent_final.pdf (77.9 kB)
md5:ce3057f549ee8d588ebb925bd7e9c1c5