SecAppDev 2026 lecture details
How to (still) trick AI: Adversarial ML for Today
There are many known attack vectors against deep learning models, and more are still being discovered. In this session, we'll walk through some of the history of adversarial ML and deep learning and see what's changed and what's stayed the same.
Schedule TBD
Abstract
There are many known attack vectors against deep learning models, and more are still being discovered. In this session, we walk through some of the history of adversarial ML and deep learning and see what's changed and what's stayed the same. We will look at a mixture of research and real-world attacks to get a sense of which attack vectors matter and where to focus in the coming years.
Key takeaway
AI/DL models are inherently nondeterministic and have other properties that allow for old, new, and interesting attacks.
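To make the idea of an attack on a deep learning model concrete, here is a minimal sketch of the fast gradient sign method (FGSM), one of the earliest and best-known adversarial ML attacks. The model, inputs, and epsilon budget are illustrative placeholders, not material from the session.

```python
import torch
import torch.nn.functional as F

def fgsm_attack(model, x, y, epsilon=0.03):
    """Craft adversarial examples with the fast gradient sign method.

    `model` is any differentiable classifier, `x` an input batch scaled
    to [0, 1], `y` the true labels, and `epsilon` the perturbation budget.
    """
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    loss.backward()
    # Nudge every input dimension in the direction that increases the loss.
    x_adv = x + epsilon * x.grad.sign()
    return x_adv.clamp(0.0, 1.0).detach()
```

Even a small epsilon is often enough to flip a classifier's prediction while the perturbation stays imperceptible to a human.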
Content level
Introductory
Target audience
People curious about how ML security has evolved and where it's going
Prerequisites
Some understanding of deep learning is useful but not necessary.
Join us for SecAppDev. You will not regret it!
Grab your seat now
Related lectures
Building secure applications in the age of AI agents
Introductory lecture by Pieter Philippaerts
This session explores real-world security risks in AI-assisted coding and presents best practices to mitigate them and securely integrate AI into the development lifecycle.
Key takeaway: AI is a powerful force multiplier, but only when paired with strong security practices, verification, and human oversight.
Model Context Protocol (MCP) Security
Advanced lecture by Jim Manico
An introduction to the Model Context Protocol (MCP) and its security risks. Covers MCP architecture, threat models, and practical defenses to prevent prompt injection, tool abuse, and data leakage in AI tool integrations.
Key takeaway: Understand MCP risks and apply concrete controls to secure AI tool integrations and prevent prompt injection, tool abuse, and data exfiltration.
Privacy Attacks on Deep Learning Systems
Advanced lecture by Katharine Jarmul
In this session, you'll dive into how deep learning models' tendency to memorize training data creates interesting vectors for privacy attacks on AI/ML systems. You'll also be introduced to what types of interventions might work to address such issues.
Key takeaway: Information exfiltration due to memorization is an interesting attack vector for today's AI/deep learning models.
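As a hint of what such an attack can look like, here is a minimal sketch of a loss-threshold membership inference test (in the spirit of Yeom et al., 2018): because models tend to memorize training data, samples seen during training often have unusually low loss. The model and data here are hypothetical placeholders.

```python
import torch
import torch.nn.functional as F

def membership_scores(model, x, y):
    """Score how likely each (x, y) pair was part of the training set.

    Loss-threshold attack: memorized training samples tend to have
    unusually low loss, so lower loss maps to a higher membership score.
    """
    model.eval()
    with torch.no_grad():
        losses = F.cross_entropy(model(x), y, reduction="none")
    return -losses  # higher score = more likely a training member
```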