SecAppDev 2026 lecture details
Building secure applications in the age of AI agents
This session explores real-world security risks in AI-assisted coding and presents best practices to mitigate them and securely integrate AI into the development lifecycle.
Tuesday June 2nd, 09:00 - 10:30
Room Lemaire
Abstract
In the past two years, AI-assisted development has moved from novelty to mainstream, with recent estimates suggesting that over 80% of developers now use AI tools in their daily workflows. While these tools significantly accelerate development, they also introduce new risks that are often overlooked.
This session explores the security challenges associated with AI-assisted development. We examine real-world issues found in generated code, highlight common pitfalls, and present practical approaches for improving code quality, validating outputs, and maintaining robust security standards.
Key takeaway
AI is a powerful force multiplier, but only when paired with strong security practices, verification, and human oversight.
Content level
Introductory
Target audience
Software engineers who use AI-assisted development and want to ensure secure coding practices.
Prerequisites
None
Join us for SecAppDev. You will not regret it!
Grab your seat now
Pieter Philippaerts
Research Manager, KU Leuven
Expertise: Application security, authentication, authorization
Related lectures
AI Memory, Mapped
Deep-dive lecture by Natalie Isak in room West Wing
Monday June 1st, 16:00 - 17:30
AI memory is not just another RAG plugin; it is a stateful, persistent attack surface. Securing it requires new threat models, new detection primitives, and architectural decisions made well before deployment.
Key takeaway: Treat AI memory as an attack surface; design for safety and observability from day one.
How to (still) trick AI: Adversarial ML for Today
Introductory lecture by Katharine Jarmul in room Lemaire
Wednesday June 3rd, 11:00 - 12:30
There are many known attack vectors against deep learning models, and new ones are still being discovered. In this session, we'll walk through some of the history of adversarial ML and deep learning to see what has changed and what has stayed the same.
Key takeaway: AI/DL models are inherently nondeterministic and have other properties that open the door to attacks both old and new.
Secure by Design — Ideas and Techniques
Introductory lecture by Dan Bergh Johnsson and Daniel Deogun in room West Wing
Monday June 1st, 11:00 - 12:30
Security is a design concern, not just an implementation concern. This session shows how domain modelling, type design, and boundary thinking can structurally eliminate entire classes of vulnerability before attackers ever get a chance.
Key takeaway: Security is a quality aspect of software, like maintainability or correctness. Teams that design for quality get security as an emergent benefit.