RL for Agents Workshop - Deep Dive on Training Agents with RL and Open Source
Reinforcement learning is becoming central to agentic systems, but moving from RL for LLMs to RL for agents introduces a new set of challenges: environments, rollouts, tool use, inference bottlenecks, reward design, and evaluating multi-step behavior in the real world.
In this live Hugging Face workshop, we bring together researchers and builders working on the frontier of RL for agents. The session will feature short talks followed by a discussion on what is working today, where open methods still fall short, and what comes next.
Speakers include:
- Lewis Tunstall, Hugging Face
- Will Brown, Prime Intellect
- Ofir Press, Princeton University
- Alex Zhang, MIT CSAIL
- Additional guests TBA
Topics include:
- training agents with open source tools
- scaling RL for language agents
- multi-step verification and reward design
- benchmarking agent capability beyond static tasks
- recursive reasoning and new agent architectures