SecAppDev 2026 - Secure Coding
SecAppDev 2026 offers three days of in-depth lectures and two days of hands-on workshops, covering the following topics:
AI / ML security
Threat modeling
OWASP top 10
Authorization
Architecture
Secure Coding
Web security
Cryptography
Governance
Application Security
Privacy
Offensive security
Enterprise AI Coding with Claude Code
One-day workshop by Jim Manico in room Lemaire
Thursday June 4th, 09:00 - 17:30
This training teaches engineers to use Claude Code with professional discipline: machine-readable requirements, secure coding prompts, and repeatable GitHub workflows. Participants learn to convert issues into structured plans, refine them before code generation, and enforce review gates for architecture, security, and quality. The course also covers repo governance files (CLAUDE.md, REQUIREMENTS.md, ARCHITECTURE.md, SECURITY.md) to constrain AI behavior and maintain traceability from requirements → plan → code → review.
Learning goal: Attendees will learn a disciplined workflow for using Claude Code professionally: defining machine-readable requirements, generating and reviewing implementation plans, enforcing architecture and security constraints, and producing AI-assisted code.
Practical web application security guided by real-world CVEs
One-day workshop by Philippe De Ryck in room West Wing
Friday June 5th, 09:00 - 17:30
This workshop explores modern web application security through the lens of recent real-world CVEs. Instead of focusing on theory, we analyze how vulnerabilities such as path traversal, JWT handling flaws, authorization bypasses, and command injection appear in practice. By dissecting real incidents, we uncover common patterns, root causes, and exploitation techniques. The workshop connects these findings to concrete defensive strategies, helping you understand not just what goes wrong, but how to prevent it in modern applications.
Learning goal: Learn core web application security concepts and how they manifest in real-world vulnerabilities, using recent CVEs as context to understand and prevent common issues.
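As a minimal illustration of one vulnerability class the workshop covers, a path traversal check might be sketched as follows (the upload root and filenames are hypothetical; the workshop itself works from real CVEs):

```python
from pathlib import Path

# Hypothetical directory the application is allowed to serve files from.
BASE_DIR = Path("/var/app/uploads").resolve()

def safe_resolve(user_supplied: str) -> Path:
    """Resolve a user-supplied filename against BASE_DIR,
    rejecting any path that escapes it (path traversal)."""
    candidate = (BASE_DIR / user_supplied).resolve()
    # After resolution, the path must still live under BASE_DIR;
    # sequences like "../" would move it outside and are rejected.
    if not candidate.is_relative_to(BASE_DIR):
        raise ValueError("path traversal attempt rejected")
    return candidate
```

Real incidents often involve subtler variants (encoded separators, symlinks, platform differences), which is exactly why the workshop dissects actual CVEs rather than textbook cases.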
AI Memory, Mapped
Deep-dive lecture by Natalie Isak
AI memory is not just another RAG plugin; it is a stateful, persistent attack surface. Securing it requires new threat models, new detection primitives, and architectural decisions made well before deployment.
Key takeaway: Treat AI memory as an attack surface; design for safety and observability from day one.
Building secure applications in the age of AI agents
Introductory lecture by Pieter Philippaerts
This session explores real-world security risks in AI-assisted coding and presents best practices to mitigate them and securely integrate AI into the development lifecycle.
Key takeaway: AI is a powerful force multiplier, but only when paired with strong security practices, verification, and human oversight.
What's New in ASVS v5
Advanced lecture by Eden Sofia Yardeni
A practical session for security practitioners already familiar with ASVS, covering what changed in v5, how to apply it in code review, how it can be used alongside other AppSec tools, and common pitfalls and best practices.
Key takeaway: Coding standards are even more relevant in an age where LLMs are writing most code, making ASVS an increasingly useful resource.
Achieving Risk-based and Effective Security Testing
Deep-dive lecture by Ruben De Visscher
This talk discusses how to achieve a risk-based and effective security testing strategy by taking ownership of what and how to test, instead of relying on the limited built-in checks of off-the-shelf security scanning tools.
Key takeaway: Take ownership of your security testing strategy to improve coverage and efficiency; do not let tool vendors define a sub-optimal strategy for you.