How We Cut LLM Latency 70% With TensorRT in Production

MLOps.community · Advanced · 🧠 Large Language Models · 1w ago
Maher Hanafi is an engineering leader who went from zero AI experience to self-hosting LLMs at enterprise scale: managing GPU costs, optimizing inference with TensorRT LLM, and building an AI platform for HR tech. In this conversation, he breaks down exactly how his team cut latency by 70%, reduced GPU spend through counterintuitive scaling strategies, and navigated the messy reality of taking AI from proof of concept to production.

How We Cut LLM Latency 70% With TensorRT in Production // MLOps Podcast #369 with Maher Hanafi, SVP of Engineering at Betterworks

Key topics covered:

- The AI Iceberg: why the invisible work behind AI (performance, latency, throughput, cost, accuracy) is harder than building the features themselves
- GPU Cost Optimization: how upgrading to more expensive GPUs actually saved money by reducing total runtime hours
- TensorRT LLM Deep Dive: rewiring neural networks to match GPU architecture for a 50–70% latency reduction
- Cold Start Solutions: using AWS FSx, baking models into container images, and cutting minutes off spin-up times
- KV Cache & In-Flight Batching: why dedicating one model per GPU with maximum KV cache beats cramming multiple models together
- Scheduled & Dynamic Scaling: pattern-based scaling for HR tech workloads (nights, weekends, end-of-quarter spikes)
- Verticalized AI Platform: building horizontal AI infrastructure that serves multiple HR product verticals
- AI Engineering Lab: how junior vs. senior engineers adopted AI coding tools differently, and the cultural shift that followed
- Agentic Coding in Practice: navigating AI coding agent costs, quality control, and redefining the SDLC
- Chinese Models & Compliance: why enterprise customers block DeepSeek/Qwen, and the geopolitics of model training data

This episode is for engineering leaders building AI in production, MLOps engineers optimizing GPU infrastructure, and anyone navigating the gap between AI demos and enterprise-scale deployment.

Links & Resources:
TensorRT LLM: https://git
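The "expensive GPUs saved money" point comes down to simple cost arithmetic: total spend is hourly rate times runtime, so a pricier GPU wins whenever its speedup exceeds its price premium. A minimal sketch (the rates and speedup below are illustrative assumptions, not figures quoted in the episode):

```python
def job_cost(hourly_rate: float, baseline_hours: float, speedup: float) -> float:
    """Cost of a job that runs `speedup` times faster than the baseline GPU."""
    return hourly_rate * baseline_hours / speedup

# Hypothetical numbers: an older GPU at $1.00/hr needs 10 hours of runtime;
# a GPU at 4x the price finishes the same work 6x faster.
old_cost = job_cost(1.00, 10.0, 1.0)   # $10.00 on the cheap GPU
new_cost = job_cost(4.00, 10.0, 6.0)   # ~$6.67 on the expensive one

print(f"old: ${old_cost:.2f}, new: ${new_cost:.2f}")
```

The break-even condition is just speedup > price ratio; here 6x speed against a 4x price wins, which is the counterintuitive result the episode describes.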
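The one-model-per-GPU argument follows from KV-cache sizing: every token kept in flight stores a key and a value vector per layer, and whatever VRAM the model weights leave free becomes the token budget that in-flight batching draws from. A back-of-the-envelope sketch, assuming a Llama-2-7B-style shape (32 layers, 32 KV heads, head dim 128, fp16) on a hypothetical 24 GiB GPU:

```python
def kv_cache_bytes_per_token(num_layers: int, num_kv_heads: int,
                             head_dim: int, dtype_bytes: int = 2) -> int:
    """Bytes of KV cache one token occupies: a K and a V tensor per layer."""
    return 2 * num_layers * num_kv_heads * head_dim * dtype_bytes

per_token = kv_cache_bytes_per_token(32, 32, 128)  # 524288 bytes = 0.5 MiB/token

# ~14 GiB of fp16 weights on a 24 GiB card leaves roughly 10 GiB for KV cache.
token_budget = (10 * 2**30) // per_token  # ~20k tokens in flight

print(f"{per_token} bytes/token, ~{token_budget} tokens of cache")
```

Packing a second model onto the same GPU would spend that headroom on weights instead of cache, shrinking the batch the scheduler can keep in flight, which is why a dedicated GPU with maximum KV cache tends to win on throughput.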
Watch on YouTube ↗
