SecAppDev 2026 lecture details
Privacy Attacks on Deep Learning Systems
In this session, you'll dive into how the way modern AI models are trained creates interesting vectors for privacy attacks on AI/ML systems. You'll also be introduced to the types of interventions that might address such issues.
Schedule TBD
Abstract
Today's AI systems are massive models. They are often trained in part on data scraped from the web or digitized under questionable intellectual property and privacy practices. In this session, we will dive into how this creates interesting vectors for privacy attacks on AI/ML systems. You'll also learn about the types of interventions that might address such issues.
Key takeaway
Information exfiltration due to memorization is an interesting attack vector for today's AI/deep learning models.
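To make the takeaway concrete: one classic way memorization leaks information is a loss-threshold membership inference attack. A model that has memorized its training data tends to assign much lower loss to training members than to unseen points, so an attacker who can observe per-example loss can guess membership. The sketch below is a hypothetical toy illustration, not the lecture's actual material; the "model", data, and threshold are all invented for the example.

```python
# Toy "model": memorizes its training labels exactly and falls back to
# an uninformed guess (0.5) on anything it has never seen.
train_set = {("alice", 1), ("bob", 0), ("carol", 1)}
memory = {x: y for x, y in train_set}

def loss(x, true_y):
    # Memorized points get near-zero loss; unknown points get high loss.
    pred = memory.get(x, 0.5)
    return abs(pred - true_y)

def infer_membership(x, y, threshold=0.25):
    """Guess that (x, y) was in the training set if its loss is low."""
    return loss(x, y) < threshold

print(infer_membership("alice", 1))  # member -> near-zero loss -> True
print(infer_membership("dave", 1))   # non-member -> high loss -> False
```

Real attacks (e.g. shadow-model membership inference) use the same intuition with calibrated thresholds rather than an exact memory table, but the privacy failure mode is identical: the model's behavior on a point reveals whether that point was in its training data.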
Content level
Advanced
Target audience
Data-oriented security researchers/engineers
Prerequisites
This talk requires a bit more understanding of deep learning, but I will try to answer questions as they come up to keep it interesting for everyone.
Join us for SecAppDev. You will not regret it!
Grab your seat now
Related lectures
How to (still) trick AI: Adversarial ML for Today
Introductory lecture by Katharine Jarmul
There are many known (and still being discovered) attack vectors against deep learning models. In this session, we'll walk through some of the history of adversarial ML and deep learning and see what has changed and what has stayed the same.
Key takeaway: AI/DL models are inherently nondeterministic and have other properties that allow for old, new and interesting attacks.
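As a flavor of what "old and new attacks" means here, the classic adversarial example works by nudging each input feature in the direction that most increases the model's loss until the prediction flips (the idea behind FGSM). The toy linear classifier below is a hypothetical sketch invented for illustration; the weights, input, and epsilon are all assumptions, not material from the lecture.

```python
# FGSM-style perturbation on a toy linear classifier. For a linear
# model, the gradient of the score with respect to the input is just
# the weight vector, so the attack pushes each feature against the
# sign of its weight to drive the score down.

w = [2.0, -1.0]   # assumed model weights
x = [1.0, 1.0]    # input originally classified as positive

def score(inputs):
    return sum(wi * xi for wi, xi in zip(w, inputs))

eps = 1.2  # perturbation budget (assumed)
x_adv = [xi - eps * (1.0 if wi > 0 else -1.0) for wi, xi in zip(w, x)]

print(score(x) > 0)      # original input: positive class
print(score(x_adv) > 0)  # perturbed input: prediction flipped
```

The same sign-of-the-gradient trick, scaled up to deep networks and computed by backpropagation, is what makes small, often imperceptible input changes so effective against image and text models.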
Building secure applications in the age of AI agents
Introductory lecture by Pieter Philippaerts
This session explores real-world security risks in AI-assisted coding and presents best practices to mitigate them and securely integrate AI into the development lifecycle.
Key takeaway: AI is a powerful force multiplier, but only when paired with strong security practices, verification, and human oversight.
Model Context Protocol (MCP) Security
Advanced lecture by Jim Manico
An introduction to the Model Context Protocol (MCP) and its security risks. Covers MCP architecture, threat models, and practical defenses to prevent prompt injection, tool abuse, and data leakage in AI tool integrations.
Key takeaway: Understand MCP risks and apply concrete controls to secure AI tool integrations and prevent prompt injection, tool abuse, and data exfiltration.