Navaie, Keivan (2025) From rights to runtime: Privacy engineering for agentic AI. AI Magazine, 46(4): e70036. ISSN 0738-4602
Available under License Creative Commons Attribution.
Abstract
Agentic AI shifts stacks from request-response to plan-execute. Systems no longer just answer; they act—planning tasks, calling tools, keeping memory, and changing external state. That shift moves privacy from policy docs into the runtime. This opinion piece argues that we do not need a new privacy theory for agents; we need enforceable, observable controls that render existing rights as product behavior. Anchoring on GDPR—with portable touchpoints to CPRA, LGPD, and PDPA, we propose a developer-first toolkit: optional, bounded, user-visible memory; a purpose-aware egress gate that enforces minimization and transfer rules; proportional safeguards that scale with stakes; and traces that tell a coherent story across components and suppliers. We show how the EU AI Act's risk management, logging, and oversight can scaffold these controls and enable evidence reuse. The result is an agentic runtime that keeps people in control and teams audit-ready by design.