Everything You Need To Know About Agent Observability — Danny Gollapalli and Ben Hylak, Raindrop
Skills: Agent Foundations (70%)
Agent failures do not look like normal software failures. In this workshop, the Raindrop team breaks down what it actually takes to monitor production agents, from explicit signals like tool errors, latency, and cost to fuzzier signals like user frustration, refusals, task failure, and capability gaps.
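The explicit signals above can be captured at the tool-call boundary. As a minimal sketch (function and field names here are illustrative assumptions, not Raindrop's API), a wrapper can record latency, cost, and errors as a single event per call:

```python
import time

def observe_tool_call(tool_name, fn, *args, cost_per_call=0.0):
    """Run a tool call and capture latency, cost, and errors as one event.

    Illustrative only: the event schema is an assumption, not a real
    observability-vendor format.
    """
    event = {"tool": tool_name, "cost_usd": cost_per_call}
    start = time.perf_counter()
    try:
        event["result"] = fn(*args)
        event["error"] = None
    except Exception as exc:
        event["result"] = None
        event["error"] = repr(exc)  # explicit signal: tool error
    event["latency_ms"] = (time.perf_counter() - start) * 1000  # explicit signal: latency
    return event

# Hypothetical usage with a stand-in "tool":
event = observe_tool_call("search", lambda q: q.upper(), "llm logs")
# event now carries result, latency_ms, cost_usd, and error fields
```

Events like these can then be aggregated to spot cost spikes, latency regressions, or rising tool-error rates.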
The session covers how to move beyond evals toward real production observability, how to use classifiers, regex, and experiments to catch regressions, and how to instrument self-diagnostics so agents can report their own failures and strange behavior. If you're running agents in production, this is a practical framework for understanding what is going wrong and how to catch it early.
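The "classifiers and regex" idea can be sketched as a cheap first pass that flags likely refusals in agent responses for review; the patterns below are assumptions for illustration, not a list from the talk:

```python
import re

# Assumed refusal phrases; a real deployment would tune these against
# observed production traffic.
REFUSAL_PATTERNS = [
    re.compile(r"\bI (?:can't|cannot|am unable to)\b", re.IGNORECASE),
    re.compile(r"\bas an AI\b", re.IGNORECASE),
    re.compile(r"\bI'm sorry, but\b", re.IGNORECASE),
]

def looks_like_refusal(response: str) -> bool:
    """Return True if any refusal pattern matches the agent's response."""
    return any(p.search(response) for p in REFUSAL_PATTERNS)

looks_like_refusal("I'm sorry, but I cannot help with that.")  # True
looks_like_refusal("Here are the deployment steps you asked for.")  # False
```

A regex pass like this is cheap enough to run on every response, with a heavier model-based classifier reserved for the flagged subset.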
Speaker info:
- https://x.com/benhylak
- https://www.linkedin.com/in/benhylak/
- Danny Gollapalli
Watch on YouTube ↗
Related AI Lessons:
- Pay.sh pays for an API call. Hashlock settles a trade. Know the difference. (Dev.to AI)
- 6 Protocols for Agent Infrastructure — Trust Score, Deployment, SLA, Identity, Compliance (Dev.to AI)
- Why an MCP Security Library Beats a Security Proxy (Dev.to AI)
- How We Got Listed on the Official MCP Registry — A Server Builder Checklist (Dev.to AI)
DeepCamp AI