SecAppDev 2026 lecture details
Dark Patterns and the AI Era
This lecture introduces the concept of dark patterns from the interdisciplinary (HCI, privacy, and legal) literature, tracing the evolution of this UX design phenomenon and its implications for the age of AI.
Tuesday June 2nd, 16:00 - 17:30
Room Lemaire
Abstract
Since the term was coined in 2010, 'dark patterns' has described a set of design phenomena that can deceive, manipulate, or coerce end users into undesired behavior. In the years since, dark patterns have drawn the attention of regulators and the public, pointing to consumer protection needs in interface design. With potential harms to users' finances, privacy, autonomy, and more, and in light of the EU DSA, the AI Act, and the upcoming Digital Fairness Act, organizations face a growing set of compliance obligations to be aware of.
Key takeaway
Dark patterns are a persistent 'threat' to users, albeit of a different kind than classic security threats; security perspectives can contribute to ongoing mitigation efforts.
Content level
Introductory
Target audience
Anyone working on user-facing technology
Prerequisites
None
Join us for SecAppDev. You will not regret it!
Grab your seat now
Johanna Gunawan
Assistant Professor of Computer Science and Law, Maastricht University Law & Tech Lab
Expertise: Human-computer interaction and consumer protection law, UX, and general cybersecurity and compliance
Related lectures
Security by default - A European perspective on cyber resilience
Deep-dive lecture by Freddy Dezeure in room Lemaire
Monday June 1st, 09:15 - 10:30
A technical deep dive into how Microsoft implements security, resilience, and regulatory compliance at scale, mapping NIS2, DORA, and Secure-by-Default principles to concrete controls, engineering processes, and tenant-level protections.
Key takeaway: Learn how regulatory requirements become enforceable controls, measurable metrics, and practical Secure-by-Default engineering across cloud systems.
How to (still) trick AI: Adversarial ML for Today
Introductory lecture by Katharine Jarmul in room Lemaire
Wednesday June 3rd, 11:00 - 12:30
There are many known (and still being discovered) attack vectors against deep learning models. In this session, we'll walk through some of the history of adversarial ML and deep learning to see what's changed and what's stayed the same.
Key takeaway: AI/DL models are inherently nondeterministic and have other properties that allow for old, new, and interesting attacks.
AI Memory, Mapped
Deep-dive lecture by Natalie Isak in room West Wing
Monday June 1st, 16:00 - 17:30
AI memory is not just another RAG plugin; it is a stateful, persistent attack surface. Securing it requires new threat models, new detection primitives, and architectural decisions made well before deployment.
Key takeaway: Treat AI memory as an attack surface; design for safety and observability from day one.