SecAppDev 2023 lecture details
Attacks against machine learning pipelines
This session will explore various attacks against machine learning pipelines throughout their lifecycle, present countermeasures, and discuss best practices to make your ML models more robust in adversarial settings.
Wednesday June 14th, 09:00 - 10:30
Room West Wing
Abstract
In this session, we will elaborate on how using artificial intelligence (AI) and machine learning (ML) makes applications vulnerable to new types of attacks, and how all components of an application's ML pipeline are potentially vulnerable. ML-driven applications often deal with sensitive data that needs safeguarding. When the ML models offer a competitive advantage, they require protection too. We will explore various attacks against ML pipelines and present countermeasures and best practices to make your ML models more robust in adversarial environments.
Key takeaway
ML adds value to applications but also increases the attack surface, requiring a holistic approach to securing the ML pipeline and lifecycle
Content level
Introductory
Target audience
Application developers, researchers, ML developers, data scientists
Prerequisites
Basic understanding of machine learning
Davy Preuveneers
Research manager, DistriNet, KU Leuven
Expertise: Identity and access management, biometrics, machine learning for security and privacy, adversarial machine learning
Related lectures
Security engineering for machine learning
Keynote lecture by Gary McGraw in room Lemaire
Monday June 12th, 09:15 - 10:30
How can the adoption of machine learning introduce systematic risk into our applications? This session discusses the results of applying architectural risk analysis to identify the top risks in engineering ML systems.
Key takeaway: The results of an architectural risk analysis (sometimes called a threat model) of ML systems, including the top five (of 78 known) ML security risks
The security model of the web
Introductory lecture by Philippe De Ryck in room Lemaire
Monday June 12th, 11:00 - 12:30
In this session, we explore how to leverage the fundamental security model of the web to build secure applications. We also examine complex attack patterns, such as CSRF, and how they impact even modern API-based applications.
Key takeaway: Understand how the browser reasons about web security, and how you can leverage this fundamental security model to secure your applications
Demystifying Zero Trust
Introductory lecture by Bart Preneel in room Lemaire
Wednesday June 14th, 09:00 - 10:30
We discuss the principles of zero trust and explain how they can be implemented. We also examine how to build up trust in devices, software, and hardware components.
Key takeaway: Understand whether zero trust is useful for your organization or system. Reflect on which products and services you trust and why