How Google DeepMind is researching the next Frontier of AI for Gemini — Raia Hadsell, VP of Research

In this opening talk at AIE Europe, Raia Hadsell, VP of Research at Google DeepMind and AI Ambassador for the United Kingdom, explores the open problems in frontier AI and the future of intelligence, focusing on advances beyond standard large language models. She groups these innovations into three key areas:

00:00 Introduction

05:05 Advanced Embedding Models: Raia discusses the importance of embedding models for fast retrieval and recognition, similar to how the human brain uses "Jennifer Aniston cells" to identify a concept across modalities. She highlights Gemini Embeddings 2, a fully omnimodal model that maps text, video, and audio into unified semantic vectors.

09:53 AI for Weather Forecasting: The team has developed revolutionary models for atmospheric prediction, moving away from traditional physics simulations. Notable breakthroughs include:

11:00 GraphCast: A spherical graph neural network that produces accurate 15-day weather forecasts.

12:47 GenCast: A probabilistic model that offers higher efficiency and accuracy, matching or beating gold-standard benchmarks 97% of the time.

13:51 FGN: A functional generative network that directly predicts cyclone behavior, currently used by the US National Hurricane Center.

14:35 World Models: Hadsell introduces Genie, a project focused on creating interactive, real-time environments. From Genie 1 (2D platformers) to Genie 3, these models let users create and interact with high-quality, photorealistic 3D worlds that demonstrate memory, consistency, and the ability to be dynamically prompted by the user to change the surroundings in real time.

Speaker info:
- https://uk.linkedin.com/in/raia-hadsell-35400266
- https://github.com/raiah
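The core idea behind an omnimodal embedding model is that inputs from any modality land in one shared vector space, so retrieval reduces to nearest-neighbor search by similarity. The sketch below illustrates that retrieval step only; the vectors are toy numbers, not output from Gemini Embeddings or any real model, and the item labels are made up for illustration.

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two embedding vectors (lists of floats)."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

def retrieve(query, index):
    """Return the key of the indexed item whose embedding is closest to the query."""
    return max(index, key=lambda k: cosine_similarity(query, index[k]))

# Toy "embeddings": in a real system these come from the embedding model,
# and a photo, an audio clip, and a sentence about the same concept end up
# near each other in the shared space regardless of modality.
index = {
    "text: a golden retriever": [0.9, 0.1, 0.0],
    "image: cat on a sofa":     [0.1, 0.9, 0.1],
    "audio: dog barking":       [0.7, 0.3, 0.2],
}

query = [0.88, 0.12, 0.02]  # stand-in for the embedding of a query like "show me dogs"
print(retrieve(query, index))
```

Because every modality shares the space, the same index can answer a text query with an image or audio result; only the similarity ranking matters.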

