Full Workshop: Build Your Own Deep Research Agents - Louis-François Bouchard, Paul Iusztin, Samridhi

AI Engineer · Intermediate · 🧠 Large Language Models · 2w ago
Deep research is one of the best ways to learn how to build real AI systems because it forces you to combine reasoning, planning, autonomy, tools, grounding, and feedback loops in a single end-to-end workflow. In this hands-on workshop, you will build an MCP-powered deep research agent that can plan a research strategy, search the web, analyze YouTube videos, gather grounded evidence, filter it for relevance and trustworthiness, and synthesize its findings into a cited research artifact. Rather than treating research as just another chatbot interaction, we will frame it as a goal-directed research loop: one that can search, inspect, pivot, and progressively refine its understanding of a topic.

From there, we will connect that research artifact to a lightweight technical writing workflow that turns raw findings into polished, non-sloppy multimodal technical content. This second part of the system is deliberately more constrained: you will see how research and writing require very different architectures, why exploratory work benefits from agentic behavior, and why writing quality often improves with tighter workflows, review loops, and explicit guidance.

Along the way, we will show how to choose between prompts, workflows, and agents depending on the task, and how to keep the overall system practical rather than over-engineered. We will also cover observability and evaluation so the system is not only impressive in a demo but measurable and improvable in practice. Most importantly, the workshop is grounded in experience: it distills what we learned over the past year building and using this research-and-writing pipeline internally. Attendees will leave with their own deep research agent, a reliable technical writing workflow connected to it, and an understanding of the engineering tradeoffs behind both.

Speaker info:
- https://x.com/Whats_AI
- https://www.linkedin.com/in/pauliusztin
- https://www.linkedin.com/in/samridhivaid/

Timestamps:
(00:00) Introduction and problem
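To make the "goal-directed research loop" concrete, here is a minimal sketch of that plan → search → filter → synthesize cycle. This is not the workshop's actual code: every function and data type below (`plan_queries`, `search`, `Evidence`, the relevance threshold) is a hypothetical stand-in, and a real agent would delegate planning to an LLM and call MCP tool servers (web search, YouTube analysis) instead of these stubs.

```python
from dataclasses import dataclass, field

@dataclass
class Evidence:
    source: str
    claim: str
    relevance: float  # 0..1; in practice an LLM judge would score this

@dataclass
class ResearchState:
    topic: str
    queries: list = field(default_factory=list)
    evidence: list = field(default_factory=list)

def plan_queries(topic: str) -> list:
    # Stub planner: a real agent would ask an LLM to draft a search strategy.
    return [f"{topic} overview", f"{topic} tradeoffs"]

def search(query: str) -> list:
    # Stub search tool: returns a fake hit; a real one would call an MCP
    # web-search server and return grounded snippets with URLs.
    return [Evidence(source=f"https://example.com/{query.replace(' ', '-')}",
                     claim=f"Finding about {query}",
                     relevance=0.9)]

def research_loop(topic: str, min_evidence: int = 2, max_rounds: int = 3) -> str:
    state = ResearchState(topic=topic)
    for _ in range(max_rounds):
        for q in plan_queries(state.topic):
            state.queries.append(q)
            # Filter step: keep only evidence judged relevant enough.
            state.evidence += [e for e in search(q) if e.relevance >= 0.7]
        if len(state.evidence) >= min_evidence:
            break  # enough grounding gathered; stop exploring and synthesize
    # Synthesize a cited artifact from the filtered evidence.
    return "\n".join(f"- {e.claim} [{e.source}]" for e in state.evidence)

report = research_loop("deep research agents")
print(report)
```

The point of the sketch is the control flow, not the stubs: the loop keeps a running state, can run multiple rounds (the "pivot and refine" behavior), and only stops once it has enough filtered evidence to cite.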

Related AI Lessons

PagedAttention: vLLM’s Solution to GPU Memory Waste
Learn how PagedAttention solves GPU memory waste for large language models (LLMs) and improve your LLM serving efficiency
Medium · ChatGPT
From 30 to 60 Tokens/Second: How I Got vLLM Running on 2x RTX 3090
Learn how to install and run vLLM on 2x RTX 3090 to achieve 60 tokens/second, a significant performance boost for LLM applications
Medium · LLM
Running an Offline LLM in React Native (2026): Building Privacy-First AI That Works Without the…
Learn to build a privacy-first offline LLM in React Native, enabling AI functionality without internet connectivity
Medium · LLM
Google Chrome is Now Automatically Downloading 4GB AI Models to User Computers: What You Need to…
Google Chrome now downloads 4GB AI models to user computers, understand the implications and how it affects your device
Medium · LLM
Up next
5 Levels of AI Agents - From Simple LLM Calls to Multi-Agent Systems
Dave Ebbelaar (LLM Eng)