Threat Modeling AI/ML Systems
This course empowers software architects, engineers, product managers, and security professionals to adopt a Secure-by-Design mindset, ensuring that security is treated not as an afterthought, but as a foundational design principle.

• Course duration: 4 hours
• Chapters: 6
• Difficulty: Advanced
• CPE credits: 4
• Cost: Free of charge
• Badge: Included
• Certification: Included
Audience
This course is ideal for:
• AI/ML Engineers and Data Scientists seeking to embed security into their models and workflows.
• Security Engineers and Analysts responsible for identifying and mitigating threats in intelligent systems.
• Cybersecurity and IT Managers overseeing AI/ML security strategies and platform adoption.
• Compliance and Risk Managers ensuring AI/ML systems align with governance and regulatory standards.
• Students and Enthusiasts looking to build foundational knowledge of AI/ML threat modeling and security frameworks.
Learning Objectives
This course offers a comprehensive introduction to securing AI/ML systems through structured threat modeling. Participants will gain foundational knowledge in AI/ML security, including common vulnerabilities, threat vectors, and regulatory considerations unique to intelligent systems. The course introduces proven methodologies such as STRIDE and the 4-Question Framework, enabling learners to systematically identify, assess, and mitigate risks across AI/ML pipelines.
Through hands-on exercises and an interactive walkthrough using IriusRisk, participants will learn how to automate threat modeling for AI/ML systems: mapping threats to architectural components, analyzing risks, and identifying appropriate countermeasures. The course bridges the gap between AI development and security practice, helping teams proactively address evolving AI-specific risks.
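To give a flavor of the systematic enumeration the course teaches, the sketch below applies the six STRIDE categories to an AI/ML pipeline component. The component name and the per-category threats are hypothetical illustrations, not course material or an official taxonomy:

```python
# Illustrative sketch: enumerating STRIDE threat categories against an
# AI/ML pipeline component. Threat descriptions are hypothetical examples.

STRIDE = [
    "Spoofing",
    "Tampering",
    "Repudiation",
    "Information Disclosure",
    "Denial of Service",
    "Elevation of Privilege",
]

# Hypothetical AI/ML-specific concern for each STRIDE category,
# assuming a model-training pipeline as the component under analysis.
EXAMPLE_THREATS = {
    "Spoofing": "adversary impersonates a trusted data source feeding the pipeline",
    "Tampering": "training-data poisoning alters model behavior",
    "Repudiation": "unlogged model updates cannot be attributed to an actor",
    "Information Disclosure": "model inversion leaks sensitive training records",
    "Denial of Service": "resource-exhaustion queries degrade inference service",
    "Elevation of Privilege": "prompt injection escalates an agent's permissions",
}

def enumerate_threats(component: str) -> list[str]:
    """Return one candidate threat per STRIDE category for a component."""
    return [f"{component} / {cat}: {EXAMPLE_THREATS[cat]}" for cat in STRIDE]

for line in enumerate_threats("training-pipeline"):
    print(line)
```

In practice, a tool such as IriusRisk performs this mapping automatically from an architecture diagram; the point of the sketch is only that every component is checked against every category, so no threat class is silently skipped.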