As AI rapidly becomes embedded across every business function, cybersecurity teams are racing to understand what threats actually apply -- and how to model them. Traditional frameworks like STRIDE are a good starting point, but they weren’t designed for black-box models, emergent behavior or adversarial prompt injection.
In this session, we'll explore how to evolve your threat modeling practices for AI systems. Drawing from frameworks like AI-Adapted STRIDE, Maestro, and the AWS GenAI Scoping Model, as well as MITRE Atlas, this session will help CISOs and security architects map, model and mitigate threats across the AI lifecycle. We’ll walk through practical examples, explore tooling and unpack what security teams can do right now to stay ahead of the weird, wild risks that AI brings.
You’ll learn: * Why threat modeling AI is fundamentally different * How to identify new attack surfaces and failure modes * What tools, frameworks, and controls can actually help
Whether you're deploying LLMs, training custom models, or just trying to keep your engineers from prompt-leaking your roadmap, this talk will help you move from overwhelmed to operational.
Learning Objectives:
Understand why threat modeling AI is fundamentally different.
Learn how to identify new attack surfaces and failure modes.
Identify which tools, frameworks, and controls can actually help.