Keynote: The Boring Seams
Julie Davila, Vice President of Product Security at GitLab
Presented at SANS AI Cybersecurity Summit 2026
The industry is fixated on the model. Jailbreaking it, guarding it, aligning it. But the most consequential AI security vulnerabilities aren't in the AI. They reside in the orchestration layer: serialization boundaries, state management, credential stores, and trust boundaries between agents. Old bug classes, new topology.
Julie Davila (VP of Product Security, GitLab) opens with a confession: her own team found two critical RCEs in GitLab's AI agent platform, one before and one after general availability. Neither was caused by prompt injection. Both lived in the plumbing. From there, she traces the same structural pattern across LangChain, MCP tooling, and cross-platform agent integrations, and borrows an idea from early twentieth-century mathematics to explain why this class of failure keeps showing up, why most security teams haven't threat-modeled the layer that produces it, and what to do about it on Monday.
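The talk does not publish its exploit details, but the "old bug classes, new topology" claim is easy to illustrate generically. A minimal sketch, assuming nothing about GitLab's actual findings: an orchestration layer that passes agent messages through Python's `pickle` hands code execution to whoever controls the serialized bytes, because unpickling invokes an attacker-chosen callable via `__reduce__`. All names here (`MaliciousToolResult`, `attacker_payload`) are hypothetical.

```python
import pickle

# Side-effect log standing in for arbitrary attacker code
# (a real payload would call os.system or similar).
executed = []

def attacker_payload(msg):
    executed.append(msg)

class MaliciousToolResult:
    """A 'tool result' one agent might hand to another across a trust boundary."""
    def __reduce__(self):
        # pickle.loads() will call this (callable, args) pair automatically,
        # before the receiving code ever inspects the object.
        return (attacker_payload, ("attacker code ran",))

# Sender side: serialize the message.
payload = pickle.dumps(MaliciousToolResult())

# Receiver side: the orchestration layer "just deserializes" -- classic bug.
pickle.loads(payload)
print(executed)  # → ['attacker code ran']
```

The fix is equally old: use a data-only format (JSON, protobuf) with explicit schema validation at every serialization boundary, and threat-model each agent-to-agent hop as untrusted input.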
Explore upcoming SANS Summits to continue learning from leading voices in cybersecurity: https://go.sans.org/summits