SecAppDev 2026 lecture details

Privacy Attacks on Deep Learning Systems

In this session, you'll dive into how the data practices behind today's AI systems create interesting vectors for privacy attacks on AI/ML systems. You'll also be introduced to the types of interventions that might work to address such issues.

Schedule TBD
Abstract

Today's AI systems are massive models, often trained in part on data scraped from the web or digitized under questionable intellectual property and privacy practices. In this session, we will dive into how this creates interesting vectors for privacy attacks on AI/ML systems. You'll also learn about the types of interventions that might work to address such issues.

Key takeaway

Information exfiltration due to memorization is an interesting attack vector for today's AI/deep learning models.
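The takeaway above can be illustrated with a toy sketch (my own illustration, not material from the lecture): a trigram "language model" that memorizes its training data verbatim. Large neural models can exhibit an analogous failure mode, where rare strings seen during training are regurgitated when an attacker prompts with a plausible prefix. The corpus, the secret token, and the helper names here are all hypothetical.

```python
from collections import defaultdict

# Toy illustration: a trigram model that memorizes training data verbatim.
training_corpus = (
    "the weather is nice today . "
    "alice's api key is sk-12345-secret . "  # a sensitive record, seen once
    "the weather is cold today . "
).split()

# "Train": record which token follows each two-token context.
successors = defaultdict(list)
for a, b, c in zip(training_corpus, training_corpus[1:], training_corpus[2:]):
    successors[(a, b)].append(c)

def greedy_complete(prompt_tokens, max_new=5):
    """Greedily extend the prompt with the most frequent successor token."""
    tokens = list(prompt_tokens)
    for _ in range(max_new):
        options = successors.get(tuple(tokens[-2:]))
        if not options:
            break
        tokens.append(max(set(options), key=options.count))
    return tokens

# An attacker who can guess a plausible prefix exfiltrates the secret.
print(greedy_complete(["alice's", "api", "key", "is"], max_new=1)[-1])
```

The same prompt-and-complete probing strategy, scaled up, is the core of published training-data extraction attacks on large language models.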

Content level

Advanced

Target audience

Data-oriented security researchers/engineers

Prerequisites

This talk requires somewhat more background in deep learning, but I will answer questions as they come up to keep it accessible for everyone.

Join us for SecAppDev. You will not regret it!

Grab your seat now
Katharine Jarmul

Founder, Probably Private

Expertise: Privacy and Security in AI/ML


