SecAppDev 2026 workshop details
Threat modeling with AI
Learning goal: This session combines theoretical points with an integrated exercise to provide an engaging, end-to-end view of how AI can support, but not replace, human judgment in threat modeling.
Thursday June 5th, 09:00 - 17:30
Room West Wing
Abstract
This workshop aims to introduce SecAppDev participants to integrating AI assistance into their threat modeling workflows. Participants will learn how to leverage AI for diagramming, threat identification, and countermeasure recommendations to speed up threat model analysis.
To bring these concepts to life, the workshop includes a guided case study on a Digital Wallet / Payment App, where participants will use AI tools to generate a data flow diagram, identify threats using STRIDE, propose mitigations mapped to industry standards, and summarize findings for business stakeholders.
Content overview
- integrating AI into threat modeling workflows
- using AI for diagramming (data flow diagrams)
- identifying threats using STRIDE with AI support
- recommending countermeasures with AI
- understanding benefits and dangers of AI use
- end-to-end case study covering the full process
- summarizing findings for business stakeholders
Content level
Introductory
Target audience
This workshop is geared towards everyone who needs to perform or participate in risk analysis of a system (a.k.a. threat modeling).
Prerequisites
A basic knowledge of threat modeling is useful
Technical requirements
During the workshop you will need a laptop. Some exercise parts can be run locally if you are able to run Docker, but this is not mandatory.
Join us for SecAppDev. You will not regret it!
Grab your seat now
Steven Wierckx
Application Security Consultant, Toreon
Expertise: Threat modeling, product security
Other workshops
Enterprise AI Coding with Claude Code
One-day workshop by Jim Manico in room Lemaire
Thursday June 5th, 09:00 - 17:30
This training teaches engineers to use Claude Code with professional discipline: machine-readable requirements, secure coding prompts, and repeatable GitHub workflows. Participants learn to convert issues into structured plans, refine them before code generation, and enforce review gates for architecture, security, and quality. The course also covers repo governance files (CLAUDE.md, REQUIREMENTS.md, ARCHITECTURE.md, SECURITY.md) to constrain AI behavior and maintain traceability from requirements → plan → code → review.
Learning goal: Attendees will learn a disciplined workflow for using Claude Code professionally: defining machine-readable requirements, generating and reviewing implementation plans, enforcing architecture and security constraints, and producing AI-assisted code.
Attacking and Defending LLMs
One-day workshop by Katharine Jarmul in room Lemaire
Friday June 6th, 09:00 - 17:30
This workshop gives you hands-on experience with attacking large language models (LLMs) using a range of prompt-based strategies. You will actively explore how these attacks work in practice and what their impact is on real systems. The workshop also gives you insight into defensive techniques, and shows how architectural choices, testing approaches, and security observability can be used to strengthen applications built with generative models.
Learning goal: Practical strategies, best practices, and tools to improve the security posture of modern AI systems.
Practical web application security guided by real-world CVEs
One-day workshop by Philippe De Ryck in room West Wing
Friday June 6th, 09:00 - 17:30
This workshop explores modern web application security through the lens of recent real-world CVEs. Instead of focusing on theory, we analyze how vulnerabilities such as path traversal, JWT handling flaws, authorization bypasses, and command injection appear in practice. By dissecting real incidents, we uncover common patterns, root causes, and exploitation techniques. The workshop connects these findings to concrete defensive strategies, helping you understand not just what goes wrong, but how to prevent it in modern applications.
Learning goal: Learn core web application security concepts and how they manifest in real-world vulnerabilities, using recent CVEs as context to understand and prevent common issues.