SecAppDev 2026 workshop details

Attacking and Defending LLMs

Learning goal: Practical strategies, best practices, and tools to improve the security posture of modern AI systems.

Schedule TBD
Abstract

This workshop gives you hands-on experience with attacking large language models (LLMs) using a range of prompt-based strategies. You will actively explore how these attacks work in practice and what their impact is on real systems. The workshop also gives you insight into defensive techniques, and shows how architectural choices, testing approaches, and security observability can be used to strengthen applications built with generative models.

Content overview
  • AI security fundamentals in the context of LLMs
  • Prompt injection attacks and their practical impact
  • Document-assisted generation and associated risks
  • Guardrails and defensive techniques
  • Prompt routing strategies
  • Security observability for LLM-based systems
Content level

Introductory

Target audience

Developers, engineers, and practitioners who are building or planning to build systems that leverage LLMs.

Prerequisites

Basic Python knowledge and familiarity with running code locally. Some exposure to LLM APIs or AI tooling is helpful but not required.

Technical requirements

Laptop capable of running Python and local LLM tooling. Participants will receive a repository with exercises and instructions for running models locally.

Join us for SecAppDev. You will not regret it!

Grab your seat now
Katharine Jarmul
Katharine Jarmul

Founder, Probably Private

Expertise: Privacy and Security in AI/ML

More details

Join us for SecAppDev. You will not regret it!

Grab your seat now

Other workshops

SecAppDev offers the most in-depth content you will find in a conference setting

Grab your seat now