Fine-tuning 101 | Prompt Engineering Conference

Mark Hennings · Beginner · 🧠 Large Language Models · 2y ago
Intro to fine-tuning LLMs from the Prompt Engineering Conference (2023), presented by Mark Hennings, founder of Entry Point AI.

00:13 - Part 1: Background Info
- How a foundation model is born
- Instruct tuning and safety tuning
- Unpredictability of raw LLM behavior
- Showing LLMs how to apply knowledge
- Characteristics of fine-tuning

06:25 - Part 2: When to use it
- Examples of specialized tasks that benefit from fine-tuning
- Reasons to fine-tune a model
- Speed and cost benefits
- Prompt length before and after fine-tuning
- Fine-tuning in a team environment
- LLM workflow from prompt engineering and fine-tuning to production
- Dataset size for fine-tuning

11:27 - Part 3: No-code Demo
- Demo of no-code fine-tuning on Entry Point AI

Learn more at https://www.entrypointai.com

