Core AI

Large Language Models

Deep dives into GPT, Claude, Gemini, Llama, and the transformer architecture powering modern AI

24,908 lessons
Skills in this topic
LLM Foundations (beginner): Explain how transformers generate text
Prompt Craft (beginner): Write zero-shot and few-shot prompts
LLM Engineering (intermediate): Call LLM APIs with function/tool use
Fine-tuning LLMs (advanced): Prepare fine-tuning datasets
Multimodal LLMs (advanced): Use GPT-4V / Claude Vision for image understanding
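
As a taste of the Prompt Craft skill above: a few-shot prompt simply prepends worked input/output pairs before the new query, so the model continues the pattern. A minimal sketch (the helper name, labels, and examples are illustrative, not from any particular API):

```python
def few_shot_prompt(examples, query):
    """Build a few-shot prompt: worked input/output pairs, then the new query."""
    lines = []
    for inp, out in examples:
        lines.append(f"Input: {inp}\nOutput: {out}")
    # Leave the final Output: blank for the model to complete.
    lines.append(f"Input: {query}\nOutput:")
    return "\n\n".join(lines)

# Sentiment classification with two demonstrations.
examples = [
    ("I loved this movie!", "positive"),
    ("Terrible service, never again.", "negative"),
]
prompt = few_shot_prompt(examples, "The food was wonderful.")
print(prompt)
```

A zero-shot prompt is the same idea with an empty `examples` list: just an instruction and the query.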

Showing 5,449 reads from curated sources

OpenAI News 🧠 Large Language Models ⚡ AI Lesson 7y ago
Better language models and their implications
We’ve trained a large-scale unsupervised language model which generates coherent paragraphs of text, achieves state-of-the-art performance on many language modeling benchmarks…
Lilian Weng's Blog 🧠 Large Language Models ⚡ AI Lesson 7y ago
Generalized Language Models
[Updated on 2019-02-14: add ULMFiT and GPT-2.] [Updated on 2020-02-29: add ALBERT.] …
OpenAI News 🧠 Large Language Models ⚡ AI Lesson 7y ago
OpenAI Fellows Summer 2018: Final projects
Our first cohort of OpenAI Fellows has concluded, with each Fellow going from a machine learning beginner to core OpenAI contributor in the course of a 6-month apprenticeship.
OpenAI News 🧠 Large Language Models ⚡ AI Lesson 7y ago
How AI training scales
We’ve discovered that the gradient noise scale, a simple statistical metric, predicts the parallelizability of neural network training on a wide range of tasks.
OpenAI News 🧠 Large Language Models ⚡ AI Lesson 7y ago
Quantifying generalization in reinforcement learning
We’re releasing CoinRun, a training environment which provides a metric for an agent’s ability to transfer its experience to novel situations and has already helped…
Lilian Weng's Blog 🧠 Large Language Models ⚡ AI Lesson 7y ago
Meta-Learning: Learning to Learn Fast
[Updated on 2019-10-01: thanks to Tianhao, we have this post translated…]
OpenAI News 🧠 Large Language Models ⚡ AI Lesson 7y ago
Learning concepts with energy functions
We’ve developed an energy-based model that can quickly learn to identify and generate instances of concepts, such as near, above, between, closest, and furthest…
OpenAI News 🧠 Large Language Models ⚡ AI Lesson 7y ago
Reinforcement learning with prediction-based rewards
We’ve developed Random Network Distillation (RND), a prediction-based method for encouraging reinforcement learning agents to explore their environments through curiosity.
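
The teaser describes RND's core trick: the exploration bonus is the error of a trainable predictor network at matching a fixed, randomly initialized target network, so novel observations (which the predictor hasn't fit yet) earn larger bonuses. A toy sketch, with linear maps standing in for the neural networks and a direct weight interpolation standing in for SGD on the prediction loss (all names and sizes are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)

# Fixed, randomly initialized "target" network: never trained.
W_target = rng.normal(size=(8, 4))
# Trainable "predictor" network tries to reproduce the target's output.
W_pred = np.zeros((8, 4))

def intrinsic_reward(obs):
    """RND-style bonus: the predictor's squared error on this observation."""
    target = np.tanh(W_target @ obs)
    pred = np.tanh(W_pred @ obs)
    return float(np.mean((target - pred) ** 2))

def train_predictor(lr=0.5, steps=50):
    """Stand-in for SGD: nudge the predictor's weights toward the target's."""
    global W_pred
    for _ in range(steps):
        W_pred += lr * (W_target - W_pred)

obs = rng.normal(size=4)
before = intrinsic_reward(obs)   # unfamiliar observation: large bonus
train_predictor()
after = intrinsic_reward(obs)    # familiar observation: bonus shrinks toward zero
```

The agent maximizes this bonus alongside the environment reward, which pushes it toward states the predictor still gets wrong.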
OpenAI News 🧠 Large Language Models ⚡ AI Lesson 7y ago
Learning complex goals with iterated amplification
We’re proposing an AI safety technique called iterated amplification that lets us specify complicated behaviors and goals that are beyond human scale, by demonstrating…
Lilian Weng's Blog 🧠 Large Language Models ⚡ AI Lesson 7y ago
Flow-based Deep Generative Models
So far, I’ve written about two types of generative models, GAN and VAE. Neither of them explicitly learns the probability density function of real data…
OpenAI News 🧠 Large Language Models ⚡ AI Lesson 7y ago
OpenAI Scholars 2019: Applications open
We are now accepting applications for our second cohort of OpenAI Scholars, a program where we provide 6–10 stipends and mentorship to individuals from underrepresented groups…
OpenAI News 🧠 Large Language Models ⚡ AI Lesson 7y ago
OpenAI Fellows Winter 2019 & Interns Summer 2019
We are now accepting applications for OpenAI Fellows and Interns for 2019.
OpenAI News 🧠 Large Language Models ⚡ AI Lesson 7y ago
OpenAI Scholars 2018: Final projects
Our first cohort of OpenAI Scholars has now completed the program.
OpenAI News 🧠 Large Language Models ⚡ AI Lesson 7y ago
The International 2018: Results
OpenAI Five lost two games against top Dota 2 players at The International in Vancouver this week, maintaining a good chance of winning for the first 20–35 minutes…
OpenAI News 🧠 Large Language Models ⚡ AI Lesson 7y ago
OpenAI Five Benchmark: Results
Yesterday, OpenAI Five won a best-of-three against a team of 99.95th percentile Dota players: Blitz, Cap, Fogged, Merlini, and MoonMeander—four of whom have played…
OpenAI News 🧠 Large Language Models ⚡ AI Lesson 7y ago
Learning dexterity
We’ve trained a human-like robot hand to manipulate physical objects with unprecedented dexterity.
OpenAI News 🧠 Large Language Models ⚡ AI Lesson 7y ago
OpenAI Scholars 2018: Meet our Scholars
Our first class of OpenAI Scholars is underway, and you can now follow along as this group of experienced software developers becomes machine learning practitioners…
Distill.pub 🧠 Large Language Models 📄 Paper ⚡ AI Lesson 7y ago
Feature-wise transformations
A simple and surprisingly effective family of conditioning mechanisms.
OpenAI News 🧠 Large Language Models ⚡ AI Lesson 7y ago
Glow: Better reversible generative models
We introduce Glow, a reversible generative model which uses invertible 1x1 convolutions. It extends previous work on reversible generative models and simplifies…
OpenAI News 🧠 Large Language Models ⚡ AI Lesson 7y ago
Learning Montezuma’s Revenge from a single demonstration
We’ve trained an agent to achieve a high score of 74,500 on Montezuma’s Revenge from a single human demonstration, better than any previously published result.
OpenAI News 🧠 Large Language Models ⚡ AI Lesson 7y ago
OpenAI Five
Our team of five neural networks, OpenAI Five, has started to defeat amateur human teams at Dota 2.
OpenAI News 🧠 Large Language Models ⚡ AI Lesson 7y ago
Retro Contest: Results
The first run of our Retro Contest—exploring the development of algorithms that can generalize from previous experience—is now complete.
OpenAI News 🧠 Large Language Models ⚡ AI Lesson 7y ago
OpenAI Fellows Fall 2018
We’re now accepting applications for the next cohort of OpenAI Fellows, a program which offers a compensated 6-month apprenticeship in AI research at OpenAI.
OpenAI News 🧠 Large Language Models ⚡ AI Lesson 7y ago
Gym Retro
We’re releasing the full version of Gym Retro, a platform for reinforcement learning research on games. This brings our publicly released game count from around…
OpenAI News 🧠 Large Language Models ⚡ AI Lesson 7y ago
AI and compute
We’re releasing an analysis showing that since 2012, the amount of compute used in the largest AI training runs has been increasing exponentially with a 3.4-month doubling time.
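
The 3.4-month doubling time pins down the growth rate. A quick back-of-the-envelope check of the implied annual factor (the 3.4-month figure is from the post; the arithmetic is just compounding):

```python
# Compute that doubles every 3.4 months grows by 2**(12/3.4) per year.
doubling_months = 3.4
annual_factor = 2 ** (12 / doubling_months)
print(f"~{annual_factor:.1f}x per year")
```

That is roughly an order of magnitude of compute growth per year, far faster than Moore's-law-style hardware scaling alone would give.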
Lilian Weng's Blog 🧠 Large Language Models ⚡ AI Lesson 7y ago
Implementing Deep Reinforcement Learning Models with Tensorflow + OpenAI Gym
The full implementation is available in lilianweng/deep-reinforcement-learning-gym. In the previous two posts, I have introduced the algorithms of many deep reinforcement learning…
OpenAI News 🧠 Large Language Models ⚡ AI Lesson 7y ago
AI safety via debate
We’re proposing an AI safety technique which trains agents to debate topics with one another, using a human to judge who wins.
OpenAI News 🧠 Large Language Models ⚡ AI Lesson 8y ago
Evolved Policy Gradients
We’re releasing an experimental metalearning approach called Evolved Policy Gradients, a method that evolves the loss function of learning agents, which can enable…
Lilian Weng's Blog 🧠 Large Language Models ⚡ AI Lesson 8y ago
Policy Gradient Algorithms
[Updated on 2018-06-30: add two new policy gradient methods, SAC and D4PG.] [Updated on 2018-09-30: add…]
OpenAI News 🧠 Large Language Models ⚡ AI Lesson 8y ago
Retro Contest
We’re launching a transfer learning contest that measures a reinforcement learning algorithm’s ability to generalize from previous experience.
OpenAI News 🧠 Large Language Models ⚡ AI Lesson 8y ago
Report from the OpenAI hackathon
On March 3rd, we hosted our first hackathon with 100 members of the artificial intelligence community.
OpenAI News 🧠 Large Language Models ⚡ AI Lesson 8y ago
Reptile: A scalable meta-learning algorithm
We’ve developed a simple meta-learning algorithm called Reptile which works by repeatedly sampling a task, performing stochastic gradient descent on it, and updating the initial parameters toward the final parameters learned on that task.
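
The teaser states the whole algorithm: sample a task, adapt to it with ordinary SGD, then move the shared initialization a fraction of the way toward the adapted parameters. A self-contained sketch on a toy family of quadratic tasks (the task distribution, dimensions, and learning rates are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)

def sgd_on_task(params, task_target, lr=0.1, steps=20):
    """Inner loop: plain SGD minimizing ||params - task_target||^2."""
    p = params.copy()
    for _ in range(steps):
        p -= lr * 2 * (p - task_target)   # gradient of the squared error
    return p

def reptile(meta_steps=200, outer_lr=0.2, dim=3):
    """Outer loop: sample a task, adapt with SGD, then move the
    initialization toward the adapted parameters (the Reptile update)."""
    init = np.zeros(dim)
    for _ in range(meta_steps):
        task_target = rng.normal(loc=1.0, scale=0.1, size=dim)  # tasks cluster near 1.0
        adapted = sgd_on_task(init, task_target)
        init += outer_lr * (adapted - init)
    return init

init = reptile()
# The learned initialization drifts toward the center of the task
# distribution, so adapting to a new task needs fewer inner SGD steps.
```

Unlike MAML, no gradient flows through the inner optimization: the outer update is just a weighted move toward the adapted weights, which is what makes Reptile cheap to run.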
OpenAI News 🧠 Large Language Models ⚡ AI Lesson 8y ago
OpenAI Scholars
We’re providing 6–10 stipends and mentorship to individuals from underrepresented groups to study deep learning full-time for 3 months and open-source a project.
OpenAI News 🧠 Large Language Models ⚡ AI Lesson 8y ago
OpenAI hackathon
Come to OpenAI’s office in San Francisco’s Mission District for talks and a hackathon on Saturday, March 3rd.
OpenAI News 🧠 Large Language Models ⚡ AI Lesson 8y ago
Preparing for malicious uses of AI
We’ve co-authored a paper that forecasts how malicious actors could misuse AI technology, and potential ways we can prevent and mitigate these threats. This paper…
Lilian Weng's Blog 🧠 Large Language Models ⚡ AI Lesson 8y ago
A (Long) Peek into Reinforcement Learning
[Updated on 2020-09-03: Updated the algorithm of SARSA and Q-learning so that the difference…]
OpenAI News 🧠 Large Language Models ⚡ AI Lesson 8y ago
Interpretable machine learning through teaching
We’ve designed a method that encourages AIs to teach each other with examples that also make sense to humans. Our approach automatically selects the most informative…
OpenAI News 🧠 Large Language Models ⚡ AI Lesson 8y ago
Block-sparse GPU kernels
We’re releasing highly optimized GPU kernels for an underexplored class of neural network architectures: networks with block-sparse weights. Depending on the chosen sparsity…
Distill.pub 🧠 Large Language Models 📄 Paper ⚡ AI Lesson 8y ago
Using Artificial Intelligence to Augment Human Intelligence
By creating user interfaces which let us work with the representations inside machine learning models, we can give people new tools for reasoning.
OpenAI News 🧠 Large Language Models ⚡ AI Lesson 8y ago
Learning a hierarchy
We’ve developed a hierarchical reinforcement learning algorithm that learns high-level actions useful for solving a range of tasks, allowing fast solving of tasks…
OpenAI News 🧠 Large Language Models ⚡ AI Lesson 8y ago
Generalizing from simulation
Our latest robotics techniques allow robot controllers, trained entirely in simulation and deployed on physical robots, to react to unplanned changes in the environment.
OpenAI News 🧠 Large Language Models ⚡ AI Lesson 8y ago
Competitive self-play
We’ve found that self-play allows simulated AIs to discover physical skills like tackling, ducking, faking, kicking, catching, and diving for the ball, without…
OpenAI News 🧠 Large Language Models ⚡ AI Lesson 8y ago
Meta-learning for wrestling
We show that for the task of simulated robot wrestling, a meta-learning agent can learn to quickly defeat a stronger non-meta-learning agent, and also show that…
Lilian Weng's Blog 🧠 Large Language Models ⚡ AI Lesson 8y ago
Anatomize Deep Learning with Information Theory
Professor Naftali Tishby passed away in 2021. Hope the post can introduce his cool idea of information bottleneck to more people. Recently I watched the talk…
OpenAI News 🧠 Large Language Models ⚡ AI Lesson 8y ago
Learning to model other minds
We’re releasing an algorithm which accounts for the fact that other agents are learning too, and discovers self-interested yet collaborative strategies like tit-for-tat…
Lilian Weng's Blog 🧠 Large Language Models ⚡ AI Lesson 8y ago
From GAN to WGAN
[Updated on 2018-09-30: thanks to Yoonju, we have this post translated in Korean!] …
OpenAI News 🧠 Large Language Models ⚡ AI Lesson 8y ago
OpenAI Baselines: ACKTR & A2C
We’re releasing two new OpenAI Baselines implementations: ACKTR and A2C. A2C is a synchronous, deterministic variant of Asynchronous Advantage Actor Critic (A3C)…
OpenAI News 🧠 Large Language Models ⚡ AI Lesson 8y ago
More on Dota 2
Our Dota 2 result shows that self-play can catapult the performance of machine learning systems from far below human level to superhuman, given sufficient compute.