SecAppDev 2026 workshop details
Enterprise AI Coding with Claude Code
Learning goal: Attendees will learn a disciplined workflow for using Claude Code professionally: defining machine-readable requirements, generating and reviewing implementation plans, enforcing architecture and security constraints, and producing AI-assisted code.
Schedule TBD
Abstract
This training teaches engineers to use Claude Code with professional discipline: machine-readable requirements, secure coding prompts, and repeatable GitHub workflows. Participants learn to convert issues into structured plans, refine them before code generation, and enforce review gates for architecture, security, and quality. The course also covers repo governance files (CLAUDE.md, REQUIREMENTS.md, ARCHITECTURE.md, SECURITY.md) to constrain AI behavior and maintain traceability from requirements → plan → code → review.
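As an illustration of the repo governance files the abstract mentions, a minimal CLAUDE.md might look like the sketch below. The structure and rules are purely illustrative, not a format the course prescribes:

```markdown
# CLAUDE.md (illustrative sketch)

## Project rules
- Read REQUIREMENTS.md before proposing any change; cite requirement IDs in every plan.
- Follow the layering rules in ARCHITECTURE.md; do not modify prohibited areas without an approved plan.
- Apply the secure coding rules in SECURITY.md; never commit secrets or weaken security checks.

## Workflow
- Convert each GitHub issue into a written plan and wait for human review before generating code.
- In every PR, link the requirement IDs the change implements.
```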
Content overview
- Claude Code operating model (disciplined use)
- Repo guardrails: CLAUDE.md / REQUIREMENTS.md / ARCHITECTURE.md / SECURITY.md
- Machine-readable requirements + traceability (req → plan → code → PR)
- Issue → plan workflow; plan review/edit before coding
- Secure coding role prompts + framework prompts
- Architecture constraints + prohibited-change rules
- PR discipline: summaries, requirement links, reviewer checklist
- Review gates: tests, SAST/lint, dependency hygiene, secrets handling
- Safety: prompt injection/tool/MCP response risks
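The traceability idea above (req → plan → code → PR) can be sketched as a small check that every requirement ID cited in a PR description actually exists in REQUIREMENTS.md. The `REQ-###` ID convention is an assumption for illustration, not a Claude Code feature:

```python
# Hypothetical traceability check. Assumes requirements are tagged with IDs
# like "REQ-001" in REQUIREMENTS.md and cited by ID in PR descriptions.
import re

REQ_ID = re.compile(r"\bREQ-\d{3}\b")

def extract_ids(text: str) -> set[str]:
    """Collect all requirement IDs mentioned in a block of text."""
    return set(REQ_ID.findall(text))

def unknown_refs(requirements_md: str, pr_body: str) -> set[str]:
    """Return IDs cited in the PR body that are not defined in REQUIREMENTS.md."""
    return extract_ids(pr_body) - extract_ids(requirements_md)

requirements = """
REQ-001: Passwords are hashed with argon2id.
REQ-002: All endpoints require authentication.
"""
pr = "Implements REQ-001 and REQ-003 (rate limiting)."
print(sorted(unknown_refs(requirements, pr)))  # ['REQ-003']
```

A check like this could run as a CI gate so that a PR citing an undefined requirement fails review automatically.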
Content level
Deep-dive
Target audience
Software engineers, application security engineers, security architects, and technical leads who want to adopt a disciplined, professional Claude Code workflow.
Prerequisites
A good understanding of the software development lifecycle
Technical requirements
A laptop, a paid Anthropic/Claude account (the entry paid tier suffices), and a GitHub account that can create new repositories (a free GitHub account works).
Join us for SecAppDev. You will not regret it!
Grab your seat now
Other workshops
Attacking and Defending LLMs
One-day workshop by Katharine Jarmul in room Lemaire
This workshop gives you hands-on experience with attacking large language models (LLMs) using a range of prompt-based strategies. You will actively explore how these attacks work in practice and what their impact is on real systems. The workshop also gives you insight into defensive techniques, and shows how architectural choices, testing approaches, and security observability can be used to strengthen applications built with generative models.
Learning goal: Practical strategies, best practices, and tools to improve the security posture of modern AI systems.
Practical web application security guided by real-world CVEs
One-day workshop by Philippe De Ryck in room West Wing
This workshop explores modern web application security through the lens of recent real-world CVEs. Instead of focusing on theory, we analyze how vulnerabilities such as path traversal, JWT handling flaws, authorization bypasses, and command injection appear in practice. By dissecting real incidents, we uncover common patterns, root causes, and exploitation techniques. The workshop connects these findings to concrete defensive strategies, helping you understand not just what goes wrong, but how to prevent it in modern applications.
Learning goal: Learn core web application security concepts and how they manifest in real-world vulnerabilities, using recent CVEs as context to understand and prevent common issues.