SecAppDev 2023 lecture details

Attacks against machine learning pipelines

This session will explore various attacks against machine learning pipelines and their lifecycle, present countermeasures, and discuss best practices to make your ML models more robust in adversarial settings.

Wednesday, June 14th, 09:00 - 10:30
Room West Wing
Abstract

In this session, we will elaborate on how the use of artificial intelligence (AI) and machine learning (ML) exposes applications to new types of attacks, and how every component of an application's ML pipeline is a potential target. ML-driven applications often handle sensitive data that needs safeguarding, and when the ML models themselves offer a competitive advantage, they require protection too. We will explore various attacks against ML pipelines and present countermeasures and best practices to make your ML models more robust in adversarial environments.
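To give a flavour of the attack classes referred to above, the sketch below (not taken from the lecture material) shows a minimal evasion attack: a fast-gradient-sign-style perturbation that flips the prediction of a toy logistic-regression model. The model parameters, the input, and the epsilon budget are all hypothetical illustrations.

import numpy as np

rng = np.random.default_rng(0)

# Toy "deployed model": weights and bias of a logistic-regression classifier.
w = rng.normal(size=20)
b = 0.1

def predict_proba(x):
    # Probability that input x belongs to class 1.
    return 1.0 / (1.0 + np.exp(-(w @ x + b)))

# A benign input that the model classifies as class 1.
x = rng.normal(size=20)
if predict_proba(x) <= 0.5:
    x = -x  # flip so the clean prediction is class 1

# Evasion step: for logistic regression the gradient of the score with
# respect to x is simply w, so an L-infinity-bounded adversarial
# perturbation shifts each feature by -epsilon * sign(w).
epsilon = 1.0  # perturbation budget; deliberately coarse for this toy demo
x_adv = x - epsilon * np.sign(w)

print("clean score:      ", predict_proba(x))      # above 0.5 (class 1)
print("adversarial score:", predict_proba(x_adv))  # driven toward class 0

The same idea carries over to neural networks by taking the gradient of the loss with respect to the input; defences such as adversarial training and input sanitisation are typical countermeasures against this class of attack.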

Key takeaway

ML adds value to applications but also increases the attack surface, requiring a holistic approach to securing the ML pipeline and lifecycle

Content level

Introductory

Target audience

Application developers, researchers, ML developers, data scientists

Prerequisites

Basic understanding of machine learning


Davy Preuveneers

Research manager, DistriNet, KU Leuven

Expertise: Identity and access management, biometrics, machine learning for security and privacy, adversarial machine learning


