SecAppDev 2026 lecture details
Building secure applications in the age of AI agents
This session explores real-world security risks in AI-assisted coding and presents best practices to mitigate them and securely integrate AI into the development lifecycle.
Schedule TBD
Abstract
In the past two years, AI-assisted development has moved from novelty to mainstream, with recent estimates suggesting that over 80% of developers now use AI tools in their daily workflows. While these tools significantly accelerate development, they also introduce new risks that are often overlooked.
This session explores the security challenges associated with AI-assisted development. We examine real-world issues found in generated code, highlight common pitfalls, and present practical approaches for improving code quality, validating outputs, and maintaining robust security standards.
Key takeaway
AI is a powerful force multiplier, but only when paired with strong security practices, verification, and human oversight.
Content level
Introductory
Target audience
Software engineers who use AI-assisted development and want to ensure secure coding practices.
Prerequisites
None
Join us for SecAppDev. You will not regret it!
Grab your seat now
Pieter Philippaerts
Research Manager, KU Leuven
Expertise: Application security, authentication, authorization
Related lectures
How to (still) trick AI: Adversarial ML for Today
Introductory lecture by Katharine Jarmul
There are many known attack vectors against deep learning models, and more are still being discovered. In this session, we'll walk through some of the history of adversarial ML and deep learning and find what's changed and what's stayed the same.
Key takeaway: AI/DL models are inherently nondeterministic and have other properties that enable old, new, and interesting attacks.
Model Context Protocol (MCP) Security
Advanced lecture by Jim Manico
An introduction to the Model Context Protocol (MCP) and its security risks. Covers MCP architecture, threat models, and practical defenses to prevent prompt injection, tool abuse, and data leakage in AI tool integrations.
Key takeaway: Understand MCP risks and apply concrete controls to secure AI tool integrations and prevent prompt injection, tool abuse, and data exfiltration.
What's New in ASVS v5
Advanced lecture by Eden Sofia Yardeni
A practical session for security practitioners already familiar with ASVS, covering what changed in v5, how to apply it in code review, how it can be used alongside other AppSec tools, and common pitfalls and best practices.
Key takeaway: Coding standards are even more relevant in an age where LLMs are writing most code, making ASVS an increasingly useful resource.