SecAppDev 2025 workshop details

LLM Security Bootcamp: Foundations, Threats, and Defensive Techniques

Learning goal: Learn how LLM applications work and are architected, the unique security challenges they introduce, and the current best practices in LLM security—along with their limitations.

Thursday June 5th, 09:00 - 17:30
Room Lemaire
Add to calendar (ICS) Add to Google calendar
Abstract

Large Language Models (LLMs) open up a new realm of possibilities in application development, but they also pose significant challenges. Their non-deterministic nature and broad use cases complicate testing, while unpredictable failures (“hallucinations”) and novel attack vectors (“prompt injections”) add risk.

This workshop covers LLM-based applications, highlights unique threats, and offers hands-on testing and hardening techniques. Attendees will learn to set up and secure basic LLM-driven solutions in their organizations.

Content overview
  • LLM Security 101
  • Build your LLM-powered chatbot (RAG)
  • Threat modeling of your application
  • Securing your LLM-powered application
  • Hacking, euhm, security testing
Content level

Introductory

Target audience

This workshop targets both developers and security engineers wanting to understand the challenges and risks of building LLM-empowered applications.

Prerequisites

General developer experience

Technical requirements

Laptop with browser

Join us for SecAppDev. You will not regret it!

Thomas Vissers
Thomas Vissers

Postdoctoral researcher, KU Leuven

Expertise: Security & AI

More details

Tim Van Hamme
Tim Van Hamme

Post-doctoral researcher, DistriNet, KU Leuven

Expertise: Adversarial ML

More details

Laurens Sion
Laurens Sion

Research Expert, DistriNet, KU Leuven

Expertise: Security and privacy threat modeling

More details

Join us for SecAppDev. You will not regret it!

Other workshops

SecAppDev offers the most in-depth content you will find in a conference setting