Introducing Interwhen: Steering reasoning agents with real-time verification

Microsoft Research · Advanced · 🤖 AI Agents & Automation · 6h ago
What if AI agents could check their work as they go? This verification method extracts verifiable properties from natural language and evaluates them with symbolic or model-based verifiers. Interwhen, a new open-source library, enables real-time verification of each reasoning step, helping agents act more safely and reliably in complex, real-world tasks.

Paper: https://arxiv.org/abs/2602.11202
GitHub: https://github.com/microsoft/interwhen

This session aired on May 14, 2026, at Microsoft Research Forum, Season 2, Episode 4.

Register for the series to hear about new releases: https://www.microsoft.com/en-us/research/event/microsoft-research-forum/?OCID=msr_researchforum_YTDescription
Explore all previous episodes: https://aka.ms/researchforumYTplaylist
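To make the idea concrete, here is a minimal sketch of step-level verification in Python. This is an illustration of the general pattern described above (extract a checkable property from a natural-language step, then evaluate it with an exact symbolic check), not Interwhen's actual API; all function names below are hypothetical.

```python
import re

# Hypothetical sketch: extract a verifiable property from a reasoning
# step and check it symbolically. Not Interwhen's real interface.

def extract_property(step: str):
    """Pull a checkable arithmetic claim like '12 * 7 = 84' out of a
    natural-language step, if one is present; otherwise return None."""
    m = re.search(r"(\d+)\s*([+\-*])\s*(\d+)\s*=\s*(\d+)", step)
    if not m:
        return None
    a, op, b, claimed = int(m.group(1)), m.group(2), int(m.group(3)), int(m.group(4))
    return (a, op, b, claimed)

def symbolic_verify(prop) -> bool:
    """Evaluate the extracted property exactly, rather than trusting
    the agent's own arithmetic."""
    a, op, b, claimed = prop
    actual = {"+": a + b, "-": a - b, "*": a * b}[op]
    return actual == claimed

def run_with_verification(steps):
    """Check each step as it arrives; return the index of the first
    step that fails verification, or None if all pass."""
    for i, step in enumerate(steps):
        prop = extract_property(step)
        if prop is not None and not symbolic_verify(prop):
            return i
    return None

steps = [
    "First compute 12 * 7 = 84.",
    "Then add 16: 84 + 16 = 100.",
    "Subtract 30: 100 - 30 = 60.",  # wrong: 100 - 30 is 70
]
print(run_with_verification(steps))  # → 2
```

Catching the failure at step index 2, before later steps build on it, is the point of verifying in real time rather than only checking the final answer.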
