SecAppDev 2026 lecture details
How to (still) trick AI: Adversarial ML for Today
There are many known (and still being discovered) attack vectors against deep learning models. In this session, we'll walk through some of the history of adversarial ML and deep learning and find what's changed and what's stayed the same.
Schedule TBD
Abstract
There are many known (and still being discovered) attack vectors against deep learning models. In this session, we'll walk through some of the history of adversarial ML and deep learning and find what's changed and what's stayed the same. We will look at a mixture of research and actual real-world attacks to get an idea of the attack vectors that matter and where to focus in the coming years.
Key takeaway
AI/DL models are inherently nondeterministic and have other properties that enable old, new, and interesting attacks.
Content level
Introductory
Target audience
People curious about how ML security has evolved and where it's going
Prerequisites
Some understanding of deep learning is useful but not necessary.
Join us for SecAppDev. You will not regret it!
Grab your seat now
Related lectures
AI Memory, Mapped
Deep-dive lecture by Natalie Isak
AI memory is not just another RAG plugin; it is a stateful, persistent attack surface. Securing it requires new threat models, new detection primitives, and architectural decisions made well before deployment.
Key takeaway: Treat AI memory as an attack surface; design for safety and observability from day one.
Building secure applications in the age of AI agents
Introductory lecture by Pieter Philippaerts
This session explores real-world security risks in AI-assisted coding and presents best practices to mitigate them and securely integrate AI into the development lifecycle.
Key takeaway: AI is a powerful force multiplier, but only when paired with strong security practices, verification, and human oversight.
Model Context Protocol (MCP) Security
Advanced lecture by Jim Manico
An introduction to the Model Context Protocol (MCP) and its security risks. Covers MCP architecture, threat models, and practical defenses to prevent prompt injection, tool abuse, and data leakage in AI tool integrations.
Key takeaway: Understand MCP risks and apply concrete controls to secure AI tool integrations and prevent prompt injection, tool abuse, and data exfiltration.