Core AI
Large Language Models
Deep dives into GPT, Claude, Gemini, Llama, and the transformers powering modern AI
Skills in this topic
5 skills
LLM Foundations
beginner
Explain how transformers generate text
Prompt Craft
beginner
Write zero-shot and few-shot prompts
LLM Engineering
intermediate
Call LLM APIs with function/tool use
Fine-tuning LLMs
advanced
Prepare fine-tuning datasets
Multimodal LLMs
advanced
Use GPT-4V / Claude Vision for image understanding
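The Prompt Craft skill above covers writing zero-shot and few-shot prompts. A minimal sketch of few-shot prompt construction; the sentiment task, example reviews, and labels are invented for illustration:

```python
# Build a few-shot prompt: labeled examples first, then the unlabeled query.
# The sentiment task and the example reviews below are made up for illustration.

def build_few_shot_prompt(examples, query):
    """Concatenate (text, label) pairs, then the query with an open label slot."""
    lines = ["Classify the sentiment of each review as positive or negative.", ""]
    for text, label in examples:
        lines += [f"Review: {text}", f"Sentiment: {label}", ""]
    lines += [f"Review: {query}", "Sentiment:"]
    return "\n".join(lines)

examples = [
    ("Great acting and a sharp script.", "positive"),
    ("Two hours I will never get back.", "negative"),
]
prompt = build_few_shot_prompt(examples, "A surprisingly moving film.")
print(prompt)
```

Dropping the `examples` list turns the same template into a zero-shot prompt; the trailing `Sentiment:` cues the model to complete the label.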
Showing 5,560 reads from curated sources
OpenAI News
🧠 Large Language Models
⚡ AI Lesson
6y ago
OpenAI Robotics Symposium 2019
We hosted the first OpenAI Robotics Symposium on April 27, 2019.
OpenAI News
6y ago
OpenAI Scholars 2019: Final projects
Our second class of OpenAI Scholars has concluded, with all eight scholars producing an exciting final project showcased at Scholars Demo Day at OpenAI.
OpenAI News
6y ago
OpenAI Fellows Fall 2018: Final projects
Our second class of OpenAI Fellows has wrapped up, with each Fellow going from a machine learning beginner to a core OpenAI contributor in the course of a 6-month program.
Lilian Weng's Blog
6y ago
Domain Randomization for Sim2Real Transfer
In robotics, one of the hardest problems is how to make your model transfer to the real world. Due to the sample inefficiency of deep RL algorithms and the cost of data collection on real robots…
OpenAI News
7y ago
MuseNet
We’ve created MuseNet, a deep neural network that can generate 4-minute musical compositions with 10 different instruments, and can combine styles from country to Mozart to the Beatles.
OpenAI News
7y ago
Generative modeling with sparse transformers
We’ve developed the Sparse Transformer, a deep neural network which sets new records at predicting what comes next in a sequence—whether text, images, or sound.
OpenAI News
7y ago
OpenAI Five defeats Dota 2 world champions
OpenAI Five is the first AI to beat the world champions in an esports game, having won two back-to-back games versus the world champion Dota 2 team, OG, at Finals.
OpenAI News
7y ago
OpenAI Five Finals
We’ll be holding our final live event for OpenAI Five at 11:30am PT on April 13.
OpenAI News
7y ago
Implicit generation and generalization methods for energy-based models
We’ve made progress towards stable and scalable training of energy-based models (EBMs), resulting in better sample quality and generalization ability than existing models.
OpenAI News
7y ago
OpenAI Scholars 2019: Meet our Scholars
Our class of eight scholars (out of 550 applicants) brings together collective expertise in literature, philosophy, cell biology, statistics, economics, quantum…
OpenAI News
7y ago
Introducing Activation Atlases
We’ve created activation atlases (in collaboration with Google researchers), a new technique for visualizing what interactions between neurons can represent.
OpenAI News
7y ago
Neural MMO: A massively multiagent game environment
We’re releasing a Neural MMO, a massively multiagent game environment for reinforcement learning agents. Our platform supports a large, variable number of agents…
OpenAI News
7y ago
Spinning Up in Deep RL: Workshop review
On February 2, we held our first Spinning Up Workshop as part of our new education initiative at OpenAI.
Distill.pub
📄 Paper
7y ago
AI Safety Needs Social Scientists
If we want to train AI to do what humans want, we need to study humans.
OpenAI News
7y ago
AI safety needs social scientists
We’ve written a paper arguing that long-term AI safety research needs social scientists to ensure AI alignment algorithms succeed when actual humans are involved.
OpenAI News
7y ago
Better language models and their implications
We’ve trained a large-scale unsupervised language model which generates coherent paragraphs of text, achieves state-of-the-art performance on many language modeling benchmarks…
Lilian Weng's Blog
7y ago
Generalized Language Models
[Updated on 2019-02-14: add ULMFiT and GPT-2.] [Updated on 2020-02-29: add ALBERT.] …
OpenAI News
7y ago
OpenAI Fellows Summer 2018: Final projects
Our first cohort of OpenAI Fellows has concluded, with each Fellow going from a machine learning beginner to a core OpenAI contributor in the course of a 6-month program.
OpenAI News
7y ago
How AI training scales
We’ve discovered that the gradient noise scale, a simple statistical metric, predicts the parallelizability of neural network training on a wide range of tasks.
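The paper behind this post defines a "simple" gradient noise scale, B_simple = tr(Σ) / |G|², where Σ is the per-example gradient covariance and G the mean gradient; a large value suggests larger batches can still help. A toy sketch of that formula, using made-up per-example gradients:

```python
# Toy sketch of the "simple" gradient noise scale: B_simple = tr(Sigma) / |G|^2,
# with Sigma the per-example gradient covariance and G the mean gradient.
# The per-example gradients below are made-up numbers for illustration.

def simple_noise_scale(per_example_grads):
    n = len(per_example_grads)
    d = len(per_example_grads[0])
    mean = [sum(g[i] for g in per_example_grads) / n for i in range(d)]
    # tr(Sigma): sum over coordinates of the per-coordinate sample variance
    tr_sigma = sum(
        sum((g[i] - mean[i]) ** 2 for g in per_example_grads) / (n - 1)
        for i in range(d)
    )
    g_norm_sq = sum(m * m for m in mean)
    return tr_sigma / g_norm_sq

grads = [[1.0, 0.0], [0.8, 0.2], [1.2, -0.2]]
b = simple_noise_scale(grads)
print(round(b, 3))
```

In practice the paper estimates these quantities from gradient norms measured at two batch sizes; the explicit per-example version here is only for clarity.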
OpenAI News
7y ago
Quantifying generalization in reinforcement learning
We’re releasing CoinRun, a training environment which provides a metric for an agent’s ability to transfer its experience to novel situations and has already helped…
Lilian Weng's Blog
7y ago
Meta-Learning: Learning to Learn Fast
[Updated on 2019-10-01: thanks to Tianhao, we have…]
OpenAI News
7y ago
Learning concepts with energy functions
We’ve developed an energy-based model that can quickly learn to identify and generate instances of concepts, such as near, above, between, closest, and furthest.
OpenAI News
7y ago
Reinforcement learning with prediction-based rewards
We’ve developed Random Network Distillation (RND), a prediction-based method for encouraging reinforcement learning agents to explore their environments through curiosity.
OpenAI News
7y ago
Learning complex goals with iterated amplification
We’re proposing an AI safety technique called iterated amplification that lets us specify complicated behaviors and goals that are beyond human scale, by demonstrating…
Lilian Weng's Blog
7y ago
Flow-based Deep Generative Models
So far, I’ve written about two types of generative models, GAN and VAE. Neither of them explicitly learns the probability density function of real data.
OpenAI News
7y ago
OpenAI Scholars 2019: Applications open
We are now accepting applications for our second cohort of OpenAI Scholars, a program where we provide 6–10 stipends and mentorship to individuals from underrepresented groups…
OpenAI News
7y ago
OpenAI Fellows Winter 2019 & Interns Summer 2019
We are now accepting applications for OpenAI Fellows and Interns for 2019.
OpenAI News
7y ago
OpenAI Scholars 2018: Final projects
Our first cohort of OpenAI Scholars has now completed the program.
OpenAI News
7y ago
The International 2018: Results
OpenAI Five lost two games against top Dota 2 players at The International in Vancouver this week, maintaining a good chance of winning for the first 20–35 minutes of both games.
OpenAI News
7y ago
OpenAI Five Benchmark: Results
Yesterday, OpenAI Five won a best-of-three against a team of 99.95th percentile Dota players: Blitz, Cap, Fogged, Merlini, and MoonMeander—four of whom have played…
OpenAI News
7y ago
Learning dexterity
We’ve trained a human-like robot hand to manipulate physical objects with unprecedented dexterity.
OpenAI News
7y ago
OpenAI Scholars 2018: Meet our Scholars
Our first class of OpenAI Scholars is underway, and you can now follow along as this group of experienced software developers becomes machine learning practitioners.
Distill.pub
📄 Paper
7y ago
Feature-wise transformations
A simple and surprisingly effective family of conditioning mechanisms.
OpenAI News
7y ago
Glow: Better reversible generative models
We introduce Glow, a reversible generative model which uses invertible 1x1 convolutions. It extends previous work on reversible generative models and simplifies the architecture.
OpenAI News
7y ago
Learning Montezuma’s Revenge from a single demonstration
We’ve trained an agent to achieve a high score of 74,500 on Montezuma’s Revenge from a single human demonstration, better than any previously published result.
OpenAI News
7y ago
OpenAI Five
Our team of five neural networks, OpenAI Five, has started to defeat amateur human teams at Dota 2.
OpenAI News
7y ago
Retro Contest: Results
The first run of our Retro Contest—exploring the development of algorithms that can generalize from previous experience—is now complete.
OpenAI News
7y ago
OpenAI Fellows Fall 2018
We’re now accepting applications for the next cohort of OpenAI Fellows, a program which offers a compensated 6-month apprenticeship in AI research at OpenAI.
OpenAI News
7y ago
Gym Retro
We’re releasing the full version of Gym Retro, a platform for reinforcement learning research on games. This brings our publicly released game count from around…
OpenAI News
7y ago
AI and compute
We’re releasing an analysis showing that since 2012, the amount of compute used in the largest AI training runs has been increasing exponentially with a 3.4-month doubling time.
Lilian Weng's Blog
7y ago
Implementing Deep Reinforcement Learning Models with Tensorflow + OpenAI Gym
The full implementation is available in lilianweng/deep-reinforcement-learning-gym. In the previous two posts, I have introduced the algorithms of many deep reinforcement learning models.
OpenAI News
7y ago
AI safety via debate
We’re proposing an AI safety technique which trains agents to debate topics with one another, using a human to judge who wins.
OpenAI News
8y ago
Evolved Policy Gradients
We’re releasing an experimental metalearning approach called Evolved Policy Gradients, a method that evolves the loss function of learning agents, which can enable fast training on novel tasks.
Lilian Weng's Blog
8y ago
Policy Gradient Algorithms
[Updated on 2018-06-30: add two new policy gradient methods, SAC and D4PG.] [Updated on 2018-09-30: add…]
OpenAI News
8y ago
Retro Contest
We’re launching a transfer learning contest that measures a reinforcement learning algorithm’s ability to generalize from previous experience.
OpenAI News
8y ago
Report from the OpenAI hackathon
On March 3rd, we hosted our first hackathon with 100 members of the artificial intelligence community.
OpenAI News
8y ago
Reptile: A scalable meta-learning algorithm
We’ve developed a simple meta-learning algorithm called Reptile which works by repeatedly sampling a task, performing stochastic gradient descent on it, and updating the initial parameters towards the final parameters learned on that task.
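The update rule in this description can be sketched in a few lines. The 1-D quadratic tasks, step counts, and step sizes below are invented for illustration:

```python
# Toy 1-D sketch of the Reptile loop described above: sample a task, run a few
# SGD steps on it, then move the initialization toward the adapted parameters.
# Tasks here are made-up quadratics f(x) = (x - c)^2 with optimum at c.
import random

random.seed(0)

def sgd_on_task(theta, c, steps=5, lr=0.1):
    """Minimize (theta - c)^2 with plain gradient descent."""
    for _ in range(steps):
        theta -= lr * 2 * (theta - c)
    return theta

theta = 0.0               # meta-initialization
epsilon = 0.5             # Reptile outer step size
task_optima = [1.0, 3.0]  # centers of the toy tasks

for _ in range(200):
    c = random.choice(task_optima)
    phi = sgd_on_task(theta, c)
    theta += epsilon * (phi - theta)  # Reptile: move toward adapted weights

print(round(theta, 2))  # drifts into the region between the task optima
```

The outer step `theta += epsilon * (phi - theta)` is the whole trick: unlike MAML, no second-order gradients are needed.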
OpenAI News
8y ago
OpenAI Scholars
We’re providing 6–10 stipends and mentorship to individuals from underrepresented groups to study deep learning full-time for 3 months and open-source a project.