Beyond Code Coverage: Functionality Testing with Playwright — Marlene Mhangami, Microsoft
Skills: AI Pair Programming (90%)
When an LLM writes your tests, it tends to write tests that confirm what the code does rather than tests that verify what the user experiences. Your test suite goes green. The app still breaks in ways none of those tests would catch.
Marlene Mhangami from Microsoft makes the case for flipping the order: get the agent to write failing Playwright tests against the expected behavior first, then generate code to pass them. The demo runs this live with GitHub Copilot and the Playwright MCP server on a toy store search feature, with the browser open so you can watch the agent click through filters and validate results in real time.
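The workflow described above can be sketched as a behavior-first Playwright test, written before the feature exists so that it fails until the implementation catches up. This is an illustrative sketch, not code from the demo: the URL, accessible names, and `product-card` test ID are all hypothetical.

```typescript
import { test, expect } from '@playwright/test';

// Written BEFORE the feature exists: it describes expected user-facing
// behavior, so it should fail until the agent generates code to pass it.
test('searching "train" with the Wooden filter shows only wooden trains', async ({ page }) => {
  await page.goto('http://localhost:3000'); // hypothetical dev server

  // Drive the UI the way a user would, not the way the code is structured.
  await page.getByRole('searchbox', { name: 'Search toys' }).fill('train');
  await page.getByRole('checkbox', { name: 'Wooden' }).check();

  // Assert on what the user sees: at least one visible result,
  // and every result matches both the query and the filter.
  const cards = page.getByTestId('product-card');
  await expect(cards.first()).toBeVisible();
  for (const card of await cards.all()) {
    await expect(card).toContainText(/train/i);
    await expect(card.getByText('Wooden')).toBeVisible();
  }
});
```

Note the assertions target roles, visible text, and test IDs rather than internal functions or DOM structure: a green run means the user experience works, not merely that the code does what the code does.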
Speaker info:
- https://x.com/marlene_zw
- https://www.linkedin.com/in/marlenemhangami/
- https://github.com/marlenemhangami