Shuo Li Liu - Coherence in RLHF Preference Data
Skills: Reading ML Papers
RLHF usually learns from pairwise comparisons, often through Bradley-Terry-style models. I will discuss what coherence requirements, such as Weak Stochastic Transitivity and the Weak Axiom of Revealed Preference, mean for preference-trained AI systems.
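As a rough illustration of the setup the talk starts from (a sketch, not material from the talk itself; the scores and helper names are invented for this example), a Bradley-Terry model assigns each item a latent score and turns score differences into preference probabilities. Under such a model, Weak Stochastic Transitivity holds automatically, since preferences are ordered by a single score:

```python
import math

def bt_prob(s_a: float, s_b: float) -> float:
    """Bradley-Terry probability that A is preferred to B,
    given latent log-strength scores s_a and s_b."""
    return 1.0 / (1.0 + math.exp(-(s_a - s_b)))

def weak_stochastic_transitivity(p_ab: float, p_bc: float, p_ac: float) -> bool:
    """Weak Stochastic Transitivity (WST): if P(A>B) >= 1/2 and
    P(B>C) >= 1/2, then P(A>C) >= 1/2. Vacuously true otherwise."""
    if p_ab >= 0.5 and p_bc >= 0.5:
        return p_ac >= 0.5
    return True

# Hypothetical scores for three responses; any Bradley-Terry model
# satisfies WST because a single scalar score orders all items.
scores = {"A": 1.2, "B": 0.4, "C": -0.3}
p_ab = bt_prob(scores["A"], scores["B"])
p_bc = bt_prob(scores["B"], scores["C"])
p_ac = bt_prob(scores["A"], scores["C"])
assert weak_stochastic_transitivity(p_ab, p_bc, p_ac)
```

The interesting question the abstract raises is the converse direction: raw human preference data need not satisfy axioms like WST, so fitting a Bradley-Terry reward model imposes a coherence structure the data may not have.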
Shuo Li Liu is a PhD student in Economics at Princeton University. His work connects axiomatic decision theory and AI alignment, with current projects on stochastic choice, preference learning, and the foundations of RLHF evaluation.
This session is brought to you by the Cohere Labs Open Science Community - a space where
ML researchers, engineers, linguists, social scientists, and lifelong learners connect and collaborate with each other. We'd like to extend a special thank you to Katrina Lawrence and Neel Ghoshal, Leads of our ML Math group for their dedication in organizing this event.
If you’re interested in sharing your work, we welcome you to join us! Simply fill out the form at https://forms.gle/ALND9i6KouEEpCnz6 to express your interest in becoming a speaker.
Join the Cohere Labs Open Science Community to see a full list of upcoming events (https://tinyurl.com/CohereLabsCommunityApp).
Watch on YouTube