Governing What You Cannot Observe: Adaptive Runtime Governance for Autonomous AI Agents

📰 ArXiv cs.AI

arXiv:2604.24686v1 Announce Type: new

Abstract: Autonomous AI agents can remain fully authorized and still become unsafe as behavior drifts, adversaries adapt, and decision patterns shift without any code change. We propose the \textbf{Informational Viability Principle}: governing an agent reduces to estimating a bound on unobserved risk $\hat{B}(x) = U(x) + SB(x) + RG(x)$ and allowing an action only when its capacity $S(x)$ exceeds $\hat{B}(x)$ by a safety margin. The \textbf{Agent Viability Fr
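The gating rule in the abstract can be sketched in a few lines. This is a minimal illustration, not the paper's implementation: the component estimators (`U`, `SB`, `RG`), the capacity score `S`, and the margin value are all hypothetical placeholders standing in for whatever the Agent Viability framework actually computes.

```python
def viability_gate(S: float, U: float, SB: float, RG: float,
                   margin: float = 0.1) -> bool:
    """Allow an action only when capacity S(x) exceeds the
    estimated bound on unobserved risk, B_hat(x) = U(x) + SB(x) + RG(x),
    by at least the safety margin. All inputs are placeholder scalars."""
    B_hat = U + SB + RG
    return S >= B_hat + margin

# Hypothetical examples: high capacity passes, low capacity is blocked.
print(viability_gate(S=1.0, U=0.2, SB=0.3, RG=0.1))  # True
print(viability_gate(S=0.5, U=0.2, SB=0.3, RG=0.1))  # False
```

Note that the gate is conservative by construction: any overestimate in a risk component or in the margin only makes it harder for an action to be allowed.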

Published 28 Apr 2026