This AI Learns in Two Minds (Slow RL, Fast GEPA)
The video's central move is to stop treating LLM adaptation as a single process that must all be written into the tensor weights. Instead, it models adaptation as a coupled cognitive system with two interacting channels: a slow parametric channel θ and a fast textual/contextual channel ϕ. Slow and fast together form one coherent learning process: classical RL with verifiable rewards drives the slow channel, while the skill.md and memory.md layers of the AI harness carry the fast one.
The slow channel is the ordinary model update path: expensive, persistent, and global. The fast channel is the prompt, instruction, reflection, and context layer: cheap, editable, and temporary. The key claim is that these two channels should co-evolve, not be optimized in isolation.
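The two-channel picture above can be sketched in code. This is a minimal, hypothetical toy (class and method names are mine, not the paper's): a scalar `theta` stands in for the slow parametric weights, a list of text notes stands in for the fast context ϕ, and the loop updates the fast channel on every step but the slow channel only occasionally.

```python
# Hypothetical sketch of the two-timescale loop; all names are illustrative.
from dataclasses import dataclass, field

@dataclass
class TwoTimescaleLearner:
    theta: float = 0.0                        # stand-in for slow parametric weights θ
    phi: list = field(default_factory=list)   # fast textual/contextual channel ϕ
    lr: float = 0.1                           # slow-channel learning rate

    def act(self, task: str) -> str:
        # The fast context ϕ is prepended to every task: cheap, editable, temporary.
        context = " | ".join(self.phi)
        return f"[ctx: {context}] answer({task}, θ={self.theta:.2f})"

    def fast_update(self, reflection: str, max_notes: int = 4) -> None:
        # Fast channel: append a textual reflection, evict old notes; no gradients.
        self.phi.append(reflection)
        self.phi = self.phi[-max_notes:]

    def slow_update(self, reward: float) -> None:
        # Slow channel: a gradient-like weight update from a verifiable reward;
        # expensive, persistent, global.
        self.theta += self.lr * reward

learner = TwoTimescaleLearner()
for step in range(3):
    out = learner.act(task=f"problem-{step}")
    reward = 1.0 if step % 2 == 0 else -0.5   # toy verifiable reward
    learner.fast_update(f"step {step}: reward {reward:+.1f}")  # every step (fast)
    if step == 2:
        learner.slow_update(reward)           # only occasionally (slow)
```

The asymmetry in the loop is the point: the textual channel absorbs feedback immediately and cheaply, while the weight update is rare and persistent, and each channel sees the other's state, so the two co-evolve rather than being optimized in isolation.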
Why LLMs Need Two Timescales of Learning
All rights remain with the authors:
Learning, Fast and Slow: Towards LLMs That Adapt Continually
Rishabh Tiwari∗ 1,4 Kusha Sareen∗ 2 Lakshya A Agrawal∗ 1
Joseph E. Gonzalez 1 Matei Zaharia 1 Kurt Keutzer 1 Inderjit S Dhillon 3
Rishabh Agarwal† 2,5 Devvrit Khatri† 3,6
from
1 UC Berkeley
2 Mila
3 UT Austin
4 Eragon
5 Periodic Labs
6 Mirendil
#aithinking
#scienceexplained
#aiexplained
#airesearch
Watch on YouTube ↗