SecAppDev 2026 workshop details
Attacking and Defending LLMs
Learning goal: Practical strategies, best practices, and tools to improve the security posture of modern AI systems.
Schedule TBD
Abstract
This workshop gives you hands-on experience with attacking large language models (LLMs) using a range of prompt-based strategies. You will actively explore how these attacks work in practice and what their impact is on real systems. The workshop also gives you insight into defensive techniques, and shows how architectural choices, testing approaches, and security observability can be used to strengthen applications built with generative models.
Content overview
- AI security fundamentals in the context of LLMs
- Prompt injection attacks and their practical impact
- Document-assisted generation and associated risks
- Guardrails and defensive techniques
- Prompt routing strategies
- Security observability for LLM-based systems
Content level
Introductory
Target audience
Developers, engineers, and practitioners who are building or planning to build systems that leverage LLMs.
Prerequisites
Basic Python knowledge and familiarity with running code locally. Some exposure to LLM APIs or AI tooling is helpful but not required.
Technical requirements
Laptop capable of running Python and local LLM tooling. Participants will receive a repository with exercises and instructions for running models locally.
Join us for SecAppDev. You will not regret it!
Grab your seat now
Join us for SecAppDev. You will not regret it!
Grab your seat nowOther workshops
Enterprise AI Coding with Claude Code
One-day workshop by Jim Manico in room Lemaire
This training teaches engineers to use Claude Code with professional discipline: machine-readable requirements, secure coding prompts, and repeatable GitHub workflows. Participants learn to convert issues into structured plans, refine them before code generation, and enforce review gates for architecture, security, and quality. The course also covers repo governance files (CLAUDE.md, REQUIREMENTS.md, ARCHITECTURE.md, SECURITY.md) to constrain AI behavior and maintain traceability from requirements → plan → code → review.
Learning goal: Attendees will learn a disciplined workflow for using Claude Code professionally: defining machine-readable requirements, generating and reviewing implementation plans, enforcing architecture and security constraints, and producing AI-assisted code.
Practical web application security guided by real-world CVEs
One-day workshop by Philippe De Ryck in room West Wing
This workshop explores modern web application security through the lens of recent real-world CVEs. Instead of focusing on theory, we analyze how vulnerabilities such as path traversal, JWT handling flaws, authorization bypasses, and command injection appear in practice. By dissecting real incidents, we uncover common patterns, root causes, and exploitation techniques. The workshop connects these findings to concrete defensive strategies, helping you understand not just what goes wrong, but how to prevent it in modern applications.
Learning goal: Learn core web application security concepts and how they manifest in real-world vulnerabilities, using recent CVEs as context to understand and prevent common issues.