Organizations are moving beyond securing GenAI applications and now devote time, effort, and resources to threat modeling their Agentic AI implementations. With the introduction of the Multi-Agent Environment, Security, Threat, Risk and Outcome (MAESTRO) framework, security engineers and threat modelers can take a multi-layered, systems-of-systems approach to evaluating the distinct security needs of Agentic AI, going beyond legacy threat modeling methodologies such as STRIDE, PASTA and LINDDUN.
This one-day workshop provides a hands-on approach to threat modeling Agentic AI systems through a real-world demonstration of an Agentic AI system undergoing semi-automated threat modeling to uncover complex layers of threats and risks. Participants will assess an agentic AI system's architecture and build a dataflow diagram using the community edition of a popular threat modeling tool. They will then learn the different threat modeling paradigms and apply MAESTRO to the system under evaluation. Finally, they will gain an appreciation for the range of threats to AI systems through a review of industry best practices and taxonomies.
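To give a flavor of the layer-by-layer evaluation participants will perform, the sketch below models an agentic system as a stack of layers and records candidate threats against each one. This is a minimal illustration only; the layer names and threat labels are hypothetical placeholders, not an authoritative MAESTRO listing or tool output.

```python
# Minimal sketch of layered threat recording for an agentic AI system.
# Layer names and threats below are illustrative placeholders, not an
# authoritative enumeration of MAESTRO's layers.
from dataclasses import dataclass, field


@dataclass
class Layer:
    name: str
    threats: list = field(default_factory=list)


def build_model(layer_names):
    """Create an ordered stack of layers for the system under evaluation."""
    return [Layer(name) for name in layer_names]


def record_threat(model, layer_name, threat):
    """Attach a candidate threat to the named layer."""
    for layer in model:
        if layer.name == layer_name:
            layer.threats.append(threat)
            return
    raise ValueError(f"unknown layer: {layer_name}")


# Hypothetical layers for a small agentic system under review.
model = build_model(["Foundation Model", "Data Operations", "Agent Frameworks"])
record_threat(model, "Agent Frameworks", "tool-invocation abuse")
record_threat(model, "Data Operations", "training-data poisoning")
```

In a real workshop exercise, the layer list and threats would come from the dataflow diagram and the MAESTRO evaluation rather than being hard-coded.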