SecAppDev 2024 lecture details

Vulnerabilities of Large Language Model Applications

The session will start with a quick primer on data-driven AI and the key mechanisms behind LLMs. Then we will explore the general threat landscape, including academic attacks and more practical threats (OWASP Top 10 for LLMs).

Wednesday June 5th, 11:00 - 12:30
Room West Wing
Add to calendar (ICS) Add to Google calendar
Abstract

Large Language Models (LLMs) have recently emerged as a transformative technology with a potential to affect every industry. While the internal workings of LLMs are not entirely understood even by their creators, their rapid adoption has already revealed alarming failures.

In this lecture, we will overview the complex interplay of previously known and newly introduced vulnerabilities underpinning real-world LLM applications. The goal is to raise awareness and move towards a fundamental understanding of what it might take to ensure privacy and security of this fast-evolving ecosystem.

Key takeaway

LLMs are a vulnerable intermediary between users and information. Increasing autonomy, complexity and integration of AI amplifies all existing risks.

Content level

Deep-dive

Target audience

Developers, industry professionals, technology executives, policy makers, educators

Prerequisites

Participants with varying levels of expertise can gain valuable insights. Session "AI Security: Essentials to Advanced" is a recommended prerequisite.

Join us for SecAppDev. You will not regret it!

Grab your seat now
Vera Rimmer
Vera Rimmer

Research expert, DistriNet, KU Leuven

Expertise: Computer security and privacy, applied machine learning and deep learning

More details

Join us for SecAppDev. You will not regret it!

Grab your seat now

SecAppDev offers the most in-depth content you will find in a conference setting

Grab your seat now