Lectures at SecAppDev 2026
SecAppDev 2026 offers three days of in-depth lectures and two days of workshops, organized in a dual-track program.
SecAppDev lectures are 90 minutes each, allowing our expert faculty members to take a deep dive into their topics. Throughout the lectures and the course, there is ample time to ask questions or discuss scenarios with our faculty members.
Check out the program for SecAppDev 2026 below. More sessions will be announced soon!
SecAppDev offers in-depth lectures of exceptional quality
Security by default - A European perspective on cyber resilience
Deep-dive lecture by Freddy Dezeure in room Lemaire
A technical deep dive into how Microsoft implements security, resilience, and regulatory compliance at scale, mapping NIS2, DORA, and Secure-by-Default principles to concrete controls, engineering processes, and tenant-level protections.
Key takeaway: Learn how regulatory requirements become enforceable controls, measurable metrics, and practical Secure-by-Default engineering across cloud systems.
How to (still) trick AI: Adversarial ML for Today
Introductory lecture by Katharine Jarmul
There are many known (and still being discovered) attack vectors against deep learning models. In this session, we'll walk through some of the history of adversarial ML and deep learning and examine what's changed and what's stayed the same.
Key takeaway: AI/DL models are inherently nondeterministic and have other properties that allow for old, new, and interesting attacks.
OAuth 2.1 Best Practices
Deep-dive lecture by Philippe De Ryck
A practical and up-to-date overview of OAuth 2.1, covering core concepts, modern security best practices, and key extensions like PAR and DPoP, with guidance on applying them in real-world architectures and preparing for what’s coming next.
Key takeaway: Learn how to apply OAuth 2.1 best practices and supporting technologies to build secure applications and stay aligned with evolving standards.
Building secure applications in the age of AI agents
Introductory lecture by Pieter Philippaerts
This session explores real-world security risks in AI-assisted coding and presents best practices to mitigate them and securely integrate AI into the development lifecycle.
Key takeaway: AI is a powerful force multiplier, but only when paired with strong security practices, verification, and human oversight.
Post-Quantum Cryptography (PQC): The Risk of Being Late
Deep-dive lecture by Bart Preneel
Post-Quantum Cryptography (PQC) addresses the threat posed by quantum computers. We discuss the emerging standards and national agencies' recommendations for migration. We conclude with performance benchmarks and crypto agility challenges.
Key takeaway: If you have not yet developed a PQC migration strategy, you should do so in the next 6 months.
Model Context Protocol (MCP) Security
Advanced lecture by Jim Manico
An introduction to the Model Context Protocol (MCP) and its security risks. Covers MCP architecture, threat models, and practical defenses to prevent prompt injection, tool abuse, and data leakage in AI tool integrations.
Key takeaway: Understand MCP risks and apply concrete controls to secure AI tool integrations and prevent prompt injection, tool abuse, and data exfiltration.
EU CRA: Survival Workshop for Enterprise & Open Source
Deep-dive lecture by Roman Zhukov
A practical deep dive into the EU CRA for Enterprise and Open Source. Features interactive "In Scope?", "Who Am I?", and "Live Gap-Analysis" exercises to help you navigate compliance confidently.
Key takeaway: Transform CRA rules from a legal burden into an engineering advantage using open standards, clear role mapping, and practical guidelines.
The ongoing crypto wars
Introductory lecture by Bart Preneel
This talk traces the crypto wars from limits on research and key escrow to Apple vs. FBI. It covers debates on scanning communications and EU plans for access to encrypted data, ending with the privacy risks of the EU Digital Identity Wallet.
Key takeaway: The crypto wars show an ongoing tension between privacy and surveillance, with growing risks to online privacy.
Privacy Attacks on Deep Learning Systems
Advanced lecture by Katharine Jarmul
In this session, you'll dive into how the tendency of deep learning models to memorize training data creates interesting vectors for privacy attacks on AI/ML systems. You'll also be introduced to the types of interventions that might work to address such issues.
Key takeaway: Information exfiltration due to memorization is an interesting attack vector for today's AI/deep learning models.
Demystifying CSP for Modern Applications
Deep-dive lecture by Philippe De Ryck
CSP is often seen as complex and frustrating. This session explains why most policies fail, how to fix them, and how to apply CSP effectively in modern applications, including single-page apps.
Key takeaway: Understand why CSP often fails and learn how to implement it correctly with practical, actionable guidance.