SecAppDev 2026 lecture details
Model Context Protocol (MCP) Security
An introduction to the Model Context Protocol (MCP) and its security risks. Covers MCP architecture, threat models, and practical defenses to prevent prompt injection, tool abuse, and data leakage in AI tool integrations.
Schedule TBD
Abstract
The Model Context Protocol (MCP) allows AI systems to interact with external tools, services, and data sources. While this expands capability, it also introduces new security risks, including prompt injection, data exfiltration, tool abuse, and trust boundary violations. This session explains the MCP architecture and threat model, analyzes common attack patterns, and presents practical defenses such as OAuth 2.1 integration, AI validation, capability scoping, and policy enforcement. Attendees will learn how to design and operate MCP integrations safely in real-world AI systems.
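To illustrate one of the defenses mentioned above, here is a minimal sketch of capability scoping for tool calls: a default-deny allowlist that restricts which tools an AI agent may invoke and which arguments it may pass. The names (`ToolPolicy`, `is_allowed`) are invented for this sketch; MCP itself does not prescribe this API.

```python
from dataclasses import dataclass, field


@dataclass
class ToolPolicy:
    """Allowlist mapping each permitted tool name to the argument names a caller may set."""
    allowed_tools: dict[str, set[str]] = field(default_factory=dict)

    def is_allowed(self, tool: str, args: dict) -> bool:
        # Default-deny: unknown tools are rejected outright.
        if tool not in self.allowed_tools:
            return False
        # Reject calls that pass any argument outside the scoped set.
        return set(args) <= self.allowed_tools[tool]


# Example: the agent may only call read_file, and only with a "path" argument.
policy = ToolPolicy(allowed_tools={"read_file": {"path"}})

print(policy.is_allowed("read_file", {"path": "/tmp/report.txt"}))              # True
print(policy.is_allowed("read_file", {"path": "/tmp/report.txt", "mode": "w"})) # False
print(policy.is_allowed("delete_file", {"path": "/tmp/report.txt"}))            # False
```

A real deployment would also validate argument values (not just names) and enforce the policy server-side, outside the model's control, so that a prompt-injected agent cannot bypass it.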
Key takeaway
Understand MCP risks and apply concrete controls to secure AI tool integrations and prevent prompt injection, tool abuse, and data exfiltration.
Content level
Advanced
Target audience
Software engineers, AppSec engineers, AI engineers, and security architects building AI systems.
Prerequisites
Basic understanding of AI/LLMs, APIs, and common application security concepts such as injection and access control.
Join us for SecAppDev. You will not regret it!
Grab your seat now
Related lectures
How to (still) trick AI: Adversarial ML for Today
Introductory lecture by Katharine Jarmul
There are many known (and still being discovered) attack vectors against deep learning models. In this session, we'll walk through some of the history of adversarial ML and deep learning and find what's changed and what's stayed the same.
Key takeaway: AI/DL models are inherently nondeterministic and have other properties that allow for old, new and interesting attacks.
OAuth 2.1 Best Practices
Deep-dive lecture by Philippe De Ryck
A practical and up-to-date overview of OAuth 2.1, covering core concepts, modern security best practices, and key extensions like PAR and DPoP, with guidance on applying them in real-world architectures and preparing for what’s coming next.
Key takeaway: Learn how to apply OAuth 2.1 best practices and supporting technologies to build secure applications and stay aligned with evolving standards.
Privacy Attacks on Deep Learning Systems
Advanced lecture by Katharine Jarmul
In this session, you'll dive into how memorization in deep learning models creates interesting vectors for privacy attacks on AI/ML systems. You'll also be introduced to what types of interventions might work to address such issues.
Key takeaway: Information exfiltration due to memorization is an interesting attack vector for today's AI/deep learning models.