Progress1 / 6
1
Lesson 1: Prompt Injection — how attackers hijack AI agents
⏱️ 20 Min
Prompt Injection is the most common attack on LLMs. Attackers manipulate the prompt to make the agent perform unwanted actions. Protection: Input validation, prompt template isolation, output filtering, sandboxing.
1
Lesson 1: Prompt Injection — how attackers hijack AI agents
⏱️ 20 Min
2
Lesson 2: LLM Gateway Hardening
⏱️ 20 Min
3
Lesson 3: AI Agent Sandboxing
⏱️ 20 Min
4
Lesson 4: Threat Modeling for AI Systems
⏱️ 20 Min
5
Lesson 5: OWASP Top 10 for LLMs
⏱️ 25 Min
6
Lesson 6: Compliance (EU AI Act)
⏱️ 20 Min