Data + AI Summit 2023
JUNE 26-29, 2023
SAN FRANCISCO + VIRTUAL

Defending Against Adversarial Model Attacks

On Demand

Overview

The application of AI algorithms in domains such as self-driving cars, facial recognition, and hiring holds great promise. At the same time, it raises legitimate concerns about the robustness of AI algorithms against adversarial attacks. As AI algorithms are widely adopted in settings where predictions are hidden or obscured from the trained eye of a subject-matter expert, opportunities for a malicious actor to exploit them grow considerably, making adversarial robustness training and checking a necessity. To protect against and mitigate the damage caused by these malicious actors, this talk will examine how to build a pipeline that is robust against adversarial attacks by leveraging Kubeflow Pipelines and its integration with the LF AI Adversarial Robustness Toolbox (ART). Additionally, we will show how to test a machine learning model's adversarial robustness in production on Kubeflow Serving, using payload logging (Knative Eventing) and ART. This presentation focuses on adversarial robustness rather than fairness and bias.
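As context for the kind of evasion attack ART is designed to detect and defend against, here is a minimal sketch of an FGSM-style attack on a toy logistic-regression classifier. The weights and inputs are invented for illustration, and plain NumPy stands in for ART's attack API; the talk itself uses ART's implementations rather than this hand-rolled version.

```python
import numpy as np

# Toy logistic-regression classifier: p(y=1|x) = sigmoid(w.x + b).
w = np.array([2.0, -1.0])
b = 0.0

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def predict(x):
    return int(sigmoid(w @ x + b) >= 0.5)

def fgsm(x, y_true, eps):
    """FGSM-style evasion: step along the sign of the loss gradient
    to push the input across the decision boundary."""
    p = sigmoid(w @ x + b)
    grad_x = (p - y_true) * w   # d(cross-entropy)/dx for this model
    return x + eps * np.sign(grad_x)

x = np.array([0.2, -0.1])       # w @ x = 0.5 > 0, so classified as 1
x_adv = fgsm(x, y_true=1, eps=0.5)
# A small, bounded perturbation flips the model's prediction to 0.
```

In an ART-based pipeline the same idea appears as an attack object (e.g. a fast-gradient attack) applied to a wrapped classifier, and the robustness check measures how often such bounded perturbations change predictions.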

Type

  • Session

Format

  • In-Person

Track

  • Data Security and Governance

Difficulty

  • Beginner

Room

  • Moscone South | Upper Mezzanine | 152

Duration

  • 35 min

Session Speakers

Headshot of Tommy Li

Tommy Li

Senior Software Developer

IBM

Headshot of Animesh Singh

Animesh Singh

Distinguished Engineer and CTO - Watson Data and AI OSS Platform

IBM
