Fine-tuning 101 | Prompt Engineering Conference
Intro to fine-tuning LLMs from the Prompt Engineering Conference (2023)
Presented by Mark Hennings, founder of Entry Point AI.
00:13 - Part 1: Background Info
- How a foundation model is born
- Instruct tuning and safety tuning
- Unpredictability of raw LLM behavior
- Showing LLMs how to apply knowledge
- Characteristics of fine-tuning
06:25 - Part 2: When to use it
- Examples of specialized tasks that benefit from fine-tuning
- Reasons to fine-tune a model
- Speed and cost benefits
- Prompt length before and after fine-tuning
- Fine-tuning in a team environment
- LLM workflow from prompt engineering and fine-tuning to production
- Dataset size for fine-tuning
11:27 - Part 3: No-code Demo
- Demo of no-code fine-tuning on Entry Point AI
Learn more at https://www.entrypointai.com