"Watermarking Language Models" paper and GPTZero EXPLAINED | How to detect text by ChatGPT?

AI Coffee Break with Letitia · Beginner · 🧠 Large Language Models · 3y ago
Did ChatGPT write this text? We explain two ways to tell whether an AI has written a text: GPTZero and watermarking language models.
► Sponsor: Cohere 👉 https://t1p.de/22srn
UPDATE: Further research on the reliability of watermarks for LLMs (https://arxiv.org/abs/2306.04634) shows that watermarking works even when watermarked text is rewritten by humans or paraphrased by another, non-watermarked LLM.
Check out our daily #MachineLearning Quiz Questions: https://www.youtube.com/c/AICoffeeBreak/community
➡️ AI Coffee Break Merch! 🛍️ https://aicoffeebreak.creator-spring.com/
📜 Watermarking LMs paper: Kirchenbauer, John, Jonas Geiping, Yuxin Wen, Jonathan Katz, Ian Miers, and Tom Goldstein. "A Watermark for Large Language Models." (2023). https://arxiv.org/abs/2301.10226
📖 Tom Goldstein's tweet summary: https://twitter.com/tomgoldsteincs/status/1618287665006403585
🔗 GPTZero: https://gptzero.me/
🔗 First version of GPTZero (easy and free to try out): https://etedward-gptzero-main-zqgfwb.streamlit.app/
Thanks to our Patrons who support us in Tier 2, 3, 4: 🙏 Dres. Trost GbR, Siltax, Edvard Grødem, Vignesh Valliappan, Mutual Information, Mike Ton

Outline:
00:00 Did ChatGPT generate this text?
01:14 Cohere [Sponsor]
02:43 How does GPTZero work: Perplexity explained
05:42 GPTZero "Burstiness"
07:08 Robust detection with watermarking
07:46 Language modelling decoding explained
09:21 Watermarking explained
11:05 "Soft" watermark
12:49 Attacking watermarks

🔥 Optionally, pay us a coffee to help with our Coffee Bean production!
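The perplexity idea behind GPTZero (chapter 02:43) can be sketched in a few lines: a language model assigns each token a probability, and perplexity is the exponential of the average negative log-probability. Machine-generated text tends to be predictable to the model (low perplexity); human text tends to surprise it more. The `perplexity` helper and the toy probability lists below are illustrative assumptions, not GPTZero's actual implementation:

```python
import math

def perplexity(token_probs):
    """Perplexity = exp of the average negative log-probability
    a model assigned to each observed token."""
    nll = -sum(math.log(p) for p in token_probs) / len(token_probs)
    return math.exp(nll)

# Hypothetical per-token probabilities a model might assign.
# Predictable (AI-like) text gets high probabilities -> low perplexity;
# surprising (human-like) text gets lower probabilities -> higher perplexity.
predictable = [0.9, 0.8, 0.85, 0.9]
surprising = [0.3, 0.1, 0.4, 0.2]

print(perplexity(predictable) < perplexity(surprising))  # True
```

"Burstiness" (chapter 05:42) then looks at how this perplexity varies across sentences: humans mix low- and high-perplexity sentences, while model output is more uniform.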
☕ Patreon: https://www.patreon.com/AICoffeeBreak
Ko-fi: https://ko-fi.com/aicoffeebreak
🔗 Links:
AICoffeeBreakQuiz: https://www.youtube.com/c/AICoffeeBreak/community
Twitter: https://twitter.com/AICoffeeBreak
Reddit: https://www.reddit.com/r/AICoffeeBreak/
YouTube: https://www.youtube.com/AICoffeeBreak
#AICoffeeBreak #MsCoffeeBean #MachineLearning #AI #research
Music 🎵: Sunset n Be
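The watermarking scheme in the Kirchenbauer et al. paper linked above (chapters 09:21 and 11:05) can be sketched roughly as follows: a hash of the previous token pseudo-randomly splits the vocabulary into a "green" and a "red" list, the generator favors green tokens, and a detector counts green tokens and runs a one-proportion z-test. The hash function, the `GAMMA` value, and the toy vocabulary here are simplifying assumptions; this sketch also implements the "hard" variant that never emits a red token, whereas the paper's "soft" watermark instead adds a bias δ to green-token logits:

```python
import hashlib
import math

GAMMA = 0.5  # assumed fraction of the vocabulary on the green list

def is_green(prev_token: str, token: str) -> bool:
    """Pseudo-randomly assign `token` to the green list, seeded by the
    previous token (a stand-in for the paper's hashing scheme)."""
    digest = hashlib.sha256(f"{prev_token}|{token}".encode()).digest()
    return digest[0] < 128  # ~GAMMA of all tokens land here

def pick_next(prev_token, vocab):
    # "Hard" watermark: emit a green-listed token whenever one exists.
    return next((t for t in vocab if is_green(prev_token, t)), vocab[0])

def z_score(tokens):
    """How far does the green-token count exceed what unwatermarked
    text (green with probability GAMMA) would produce by chance?"""
    T = len(tokens) - 1
    greens = sum(is_green(a, b) for a, b in zip(tokens, tokens[1:]))
    return (greens - GAMMA * T) / math.sqrt(T * GAMMA * (1 - GAMMA))

vocab = [f"word{i}" for i in range(30)]
text = ["the"]
for _ in range(60):
    text.append(pick_next(text[-1], vocab))

print(z_score(text) > 4)  # True: watermarked text far exceeds the ~4 threshold
```

Since the watermarked sequence makes nearly every bigram green, its z-score grows like √T, which is why detection works from a few dozen tokens while unwatermarked text stays near z ≈ 0. The paraphrasing attacks of chapter 12:49 try to push the green fraction back toward GAMMA.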


