SecAppDev 2025 workshop details

LLM Security Bootcamp: Foundations, Threats, and Defensive Techniques

Learning goal: Learn how LLM applications work and are architected, the unique security challenges they introduce, and the current best practices in LLM security—along with their limitations.

Schedule TBD
Abstract

Large Language Models (LLMs) open up a new realm of possibilities in application development, but they also pose significant challenges. Their non-deterministic nature and broad use cases complicate testing, while unpredictable failures (“hallucinations”) and novel attack vectors (“prompt injections”) add risk.

This workshop covers LLM-based applications, highlights unique threats, and offers hands-on testing and hardening techniques. Attendees will learn to set up and secure basic LLM-driven solutions in their organizations.

Content overview
  • TBD
Content level

Introductory

Target audience

This workshop targets both developers and security engineers wanting to understand the challenges and risks of building LLM-empowered applications.

Prerequisites

General developer experience

Technical requirements

Laptop with browser

Join us for SecAppDev. You will not regret it!

Grab your seat now
Thomas Vissers
Thomas Vissers

Postdoctoral researcher, KU Leuven

Expertise: Security & AI

More details

Tim Van Hamme
Tim Van Hamme

Post-doctoral researcher, DistriNet, KU Leuven

Expertise: Adversarial ML

More details

Join us for SecAppDev. You will not regret it!

Grab your seat now

Other workshops

SecAppDev offers the most in-depth content you will find in a conference setting

Grab your seat now