SecAppDev 2026 lecture details
AI Memory, Mapped
AI memory is not just another RAG plugin; it is a stateful, persistent attack surface. Securing it requires new threat models, new detection primitives, and architectural decisions made well before deployment.
Schedule TBD
Abstract
Persistent memory turns stateless AI tools into context-aware systems, but it also introduces a new class of risks. We will cover key risks including continuous exfiltration via prompt injection, delayed tool invocation, and negative psychosocial impacts. The second half focuses on building memory-safe systems by design: threat modeling memory, observability strategies, and runtime safety monitoring at scale (including BinaryShield, a novel privacy-preserving method for sharing textual customer content to detect coordinated spray attacks).
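To make one of these detection primitives concrete, here is a minimal sketch of a guard that screens content before it is persisted to an agent's long-term memory, rejecting entries that look like injection or exfiltration payloads. It is illustrative only: MemoryWriteGuard and INJECTION_PATTERNS are hypothetical names, not part of any framework and not the BinaryShield method discussed in the lecture.

```python
import re

# Hypothetical deny-list of patterns that suggest an attempt to plant a
# persistent instruction or an exfiltration channel in memory.
INJECTION_PATTERNS = [
    re.compile(r"ignore (all|previous) instructions", re.IGNORECASE),
    re.compile(r"https?://\S+\?(q|data|payload)=", re.IGNORECASE),
    re.compile(r"remember to (send|forward|post) ", re.IGNORECASE),
]

class MemoryWriteGuard:
    """Screens candidate entries before they reach persistent memory."""

    def __init__(self, store):
        self.store = store  # any object exposing a .write(text) method

    def write(self, text: str) -> bool:
        # Quarantine suspicious entries instead of persisting them.
        if any(p.search(text) for p in INJECTION_PATTERNS):
            return False
        self.store.write(text)
        return True
```

Pattern matching alone is easy to evade; the point of the sketch is the architectural choice of placing a checkpoint on the memory-write path, where every persisted entry can also be logged for observability.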
Key takeaway
Treat AI memory as an attack surface; design for safety and observability from day one.
Content level
Deep-dive
Target audience
Developers, architects, researchers
Prerequisites
None
Join us for SecAppDev. You will not regret it!
Grab your seat now
Related lectures
Building secure applications in the age of AI agents
Introductory lecture by Pieter Philippaerts
This session explores real-world security risks in AI-assisted coding and presents best practices to mitigate them and securely integrate AI into the development lifecycle.
Key takeaway: AI is a powerful force multiplier, but only when paired with strong security practices, verification, and human oversight.
How to (still) trick AI: Adversarial ML for Today
Introductory lecture by Katharine Jarmul
There are many known attack vectors against deep learning models, and more are still being discovered. In this session, we'll walk through the history of adversarial ML and deep learning to see what's changed and what's stayed the same.
Key takeaway: AI/DL models are inherently nondeterministic and have other properties that open the door to old, new, and interesting attacks.
Model Context Protocol (MCP) Security
Advanced lecture by Jim Manico
An introduction to the Model Context Protocol (MCP) and its security risks. Covers MCP architecture, threat models, and practical defenses to prevent prompt injection, tool abuse, and data leakage in AI tool integrations (a minimal defense sketch follows this listing).
Key takeaway: Understand MCP risks and apply concrete controls to secure AI tool integrations and prevent prompt injection, tool abuse, and data exfiltration.
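As one concrete illustration of the practical defenses this lecture covers, below is a hypothetical sketch of a tool-invocation allowlist applied before a model-proposed call is dispatched. It does not use the real MCP SDK; ALLOWED_TOOLS and check_tool_call are illustrative names, and a production gateway would also validate argument values, not just argument names.

```python
from typing import Any

# Tool name -> the set of argument keys the host is willing to pass through.
ALLOWED_TOOLS: dict[str, set[str]] = {
    "search_docs": {"query"},
    "read_file": {"path"},
}

def check_tool_call(name: str, args: dict[str, Any]) -> None:
    """Raise if the model requests a tool or argument outside the allowlist."""
    if name not in ALLOWED_TOOLS:
        raise PermissionError(f"tool {name!r} is not allowlisted")
    unexpected = set(args) - ALLOWED_TOOLS[name]
    if unexpected:
        raise PermissionError(f"unexpected arguments for {name!r}: {unexpected}")

# Usage: validate before forwarding the call to the tool server.
check_tool_call("search_docs", {"query": "prompt injection defenses"})
```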