SecAppDev 2025 - AI / ML security
SecAppDev 2025 offers three days of in-depth lectures and two days of hands-on workshops. Use the buttons below to navigate between the topics.
AI / ML security
Threat modeling
OWASP top 10
Authorization
Architecture
Secure Coding
Supply chain security
API security
Web security
Governance
Application Security
LLM Security Bootcamp: Foundations, Threats, and Defensive Techniques
One-day workshop by Thomas Vissers and Tim Van Hamme
Large Language Models (LLMs) open up a new realm of possibilities in application development, but they also pose significant challenges. Their non-deterministic nature and broad use cases complicate testing, while unpredictable failures (“hallucinations”) and novel attack vectors (“prompt injections”) add risk.
This workshop covers LLM-based applications, highlights unique threats, and offers hands-on testing and hardening techniques. Attendees will learn to set up and secure basic LLM-driven solutions in their organizations.
Learning goal: Learn how LLM applications work and are architected, the unique security challenges they introduce, and the current best practices in LLM security—along with their limitations.
Navigating the Security Landscape of Modern AI
Deep-dive lecture by Vera Rimmer
In this session, we will overview the general security landscape of AI technologies, including foundational machine learning, deep learning, and large language models.
Key takeaway: Integrating AI inevitably increases the threat landscape of a system. Understanding how AI can be exploited is key to developing effective mitigations