In the ever-evolving cybersecurity threat landscape, offensive security operations are crucial for staying ahead of threat actors. But how can we work efficiently and scale the continuous emulation of real-world Tactics, Techniques, and Procedures (TTPs)?
This talk explores how to leverage AI models to augment and scale penetration testing, red teaming, and attack emulation, from reading and interpreting Cyber Threat Intelligence to building and executing threat scenarios. It covers:
- The applicability of different AI models to offensive security;
- Creating an AI-based workflow that runs from threat intelligence reports to test execution and remediation support;
- Practical examples, based on existing frameworks, of combining scripting and AI to automate steps in the Offensive Security Operations workflow.
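As a flavor of the workflow's first step, a hypothetical sketch of turning a threat intelligence report into machine-usable input: extracting the MITRE ATT&CK technique IDs mentioned in the report so they can be handed to a model or an emulation framework. The function name and the sample report are illustrative, not taken from the talk.

```python
import re

# ATT&CK technique IDs look like T1059 or T1059.001 (sub-technique).
TECHNIQUE_ID = re.compile(r"\bT\d{4}(?:\.\d{3})?\b")

def extract_techniques(report_text: str) -> list[str]:
    """Return the unique ATT&CK technique IDs mentioned in a CTI report,
    in order of first appearance."""
    seen: dict[str, None] = {}
    for match in TECHNIQUE_ID.findall(report_text):
        seen.setdefault(match, None)
    return list(seen)

# Illustrative report excerpt.
report = (
    "The actor used PowerShell (T1059.001) for execution and "
    "scheduled tasks (T1053.005) for persistence; T1059.001 was "
    "observed on multiple hosts."
)
print(extract_techniques(report))  # ['T1059.001', 'T1053.005']
```

In practice an AI model would also extract procedures and context that a regex cannot, but a deterministic pass like this is a cheap, auditable starting point.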
Learning Objectives:
Describe how to expand beyond traditional penetration testing engagements to establish a continuous, AI-augmented Offensive Security Operations workflow that identifies and addresses vulnerabilities.
Demonstrate the use of AI models and tools in penetration testing, red teaming, and breach & attack simulations, including practical methods for harnessing Generative AI to create, trigger, and report on test cases.
Implement strategies to overcome the challenges of using AI in offensive security—such as prompt engineering, model selection, and ethical constraints—while maintaining a critical human element for accurate and actionable results.
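To make the prompt-engineering and human-oversight points concrete, a minimal sketch of building a structured prompt that asks a generative model to draft a test case for one ATT&CK technique. The template, its fields, and the review constraint are assumptions for illustration, not a format prescribed by the talk.

```python
# Hypothetical prompt builder: structure keeps model output reviewable,
# and the final instruction keeps a human in the loop before execution.
def build_test_case_prompt(technique_id: str, technique_name: str,
                           target_env: str) -> str:
    return (
        "You are assisting an authorized red team engagement.\n"
        f"Draft a test case for ATT&CK technique {technique_id} "
        f"({technique_name}) against: {target_env}.\n"
        "Respond with: objective, preconditions, execution steps, "
        "expected telemetry, and cleanup steps.\n"
        "Flag any step that requires human review before execution."
    )

prompt = build_test_case_prompt("T1059.001", "PowerShell",
                                "Windows 11 lab host")
print(prompt)
```

The point of the fixed response structure is that a tester can diff, validate, and reject model output step by step rather than executing free-form text.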