SecAppDev 2023 lecture details

Security engineering for machine learning

How can the adoption of machine learning introduce systematic risk into our applications? This session discusses the results of applying architectural risk analysis to identify the top risks in engineering ML systems.

Monday June 12th, 09:15 - 10:30
Room Lemaire
Abstract

Machine Learning (ML) appears to have made impressive progress on tasks such as image classification, machine translation, and autonomous vehicle control. The surrounding hype and almost magical status of deep learning further drive the adoption of ML, often in a haphazard fashion. But can the adoption of ML introduce systematic risk into our applications?

Our research at the Berryville Institute of Machine Learning (BIML) focuses on understanding and categorizing ML security engineering risks at the design level. In this session, we apply architectural risk analysis to ML systems and discuss the top risks that have been identified.

Key takeaway

The results of an architectural risk analysis (sometimes called a threat model) of ML systems, including the top five (of 78 known) ML security risks

Content level

Keynote

Target audience

All SecAppDev participants

Prerequisites

None

Join us for SecAppDev. You will not regret it!

Grab your seat now
Gary McGraw

CEO, Berryville Institute of Machine Learning

Expertise: Software security, machine learning security, security engineering


