
AI Security

Understand and defend against prompt injection, data poisoning, and LLM exploits.

Level: intermediate · Category: 🔐 Cybersecurity

After this skill you can…

  • Identify prompt injection vulnerabilities and patch them with input screening
  • Test LLM apps for data exfiltration risks such as attacker-controlled links in outputs
  • Apply sandboxing and output validation to contain untrusted model responses
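A minimal sketch of the first and last outcomes above: screening user input for common instruction-override phrasings, and validating model output against a URL allowlist (markdown links and images are a common exfiltration channel). The pattern list and function names here are illustrative assumptions, not a real library API, and a regex screen alone is not a complete defense.

```python
import re

# Hypothetical patterns -- real deployments need broader, maintained lists.
INJECTION_PATTERNS = [
    r"ignore (all )?(previous|prior) instructions",
    r"reveal .*system prompt",
    r"you are now",
]

def looks_like_injection(user_text: str) -> bool:
    """Flag user input containing common instruction-override phrasings."""
    lowered = user_text.lower()
    return any(re.search(p, lowered) for p in INJECTION_PATTERNS)

def validate_output(model_output: str, allowed_prefixes: set[str]) -> bool:
    """Reject outputs linking outside an allowlist -- a basic
    data-exfiltration check on untrusted model responses."""
    urls = re.findall(r"https?://[^\s)\"']+", model_output)
    return all(any(u.startswith(p) for p in allowed_prefixes) for u in urls)
```

Usage: run `looks_like_injection` on user input before it is concatenated into a prompt, and `validate_output` on every response before rendering it, so a poisoned document cannot smuggle data out via an image URL.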
