Introduction To Arize AX Evals

Arize AI · Beginner · 🧠 Large Language Models · 1mo ago
This workshop is an evaluation 101 primer for teams who want the latest, research-driven best practices for running evals well across the agent lifecycle. It covers evaluation fundamentals, digs into the practical tradeoffs of LLM-as-a-Judge, and walks through a concrete code-based eval example. The session wraps up with an Arize AX demo showing how evals connect to other important workflows.