JUNE 26-29, 2023
SAN FRANCISCO + VIRTUAL
Register Now

Defending Against Adversarial Model Attacks

On Demand

Type

  • Session

Format

  • In-Person

Track

  • Data Security and Governance

Difficulty

  • Beginner

Room

  • Moscone South | Upper Mezzanine | 152

Duration

  • 35 min

Overview

The application of AI algorithms in domains such as self-driving cars, facial recognition, and hiring holds great promise. At the same time, it raises legitimate concerns about the robustness of AI algorithms against adversarial attacks. As AI algorithms are widely adopted in settings where predictions are hidden or obscured from the trained eye of the subject-matter expert, opportunities for a malicious actor to exploit them grow considerably, necessitating adversarial robustness training and checking. To protect against and mitigate the damage caused by these malicious actors, this talk examines how to build a pipeline that is robust against adversarial attacks by leveraging Kubeflow Pipelines and their integration with the LF AI Adversarial Robustness Toolbox (ART). Additionally, we will show how to test a machine learning model's adversarial robustness in production on Kubeflow Serving, using payload logging (KNative eventing) and ART. This presentation focuses on adversarial robustness rather than fairness and bias.
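As an illustrative sketch (not code from the talk), the kind of evasion attack that ART defends against can be demonstrated with the Fast Gradient Sign Method (FGSM) applied to a toy logistic-regression classifier. The weights and input below are hypothetical values chosen purely for illustration:

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def fgsm_perturb(x, y, w, b, eps):
    """One FGSM step: nudge each feature of x by eps in the sign of the loss gradient."""
    p = sigmoid(sum(wi * xi for wi, xi in zip(w, x)) + b)  # predicted P(class = 1)
    grad = [(p - y) * wi for wi in w]                      # d(log-loss)/dx for a logistic model
    sign = lambda g: (g > 0) - (g < 0)
    return [xi + eps * sign(gi) for xi, gi in zip(x, grad)]

# Hypothetical toy model and input (for illustration only).
w, b = [1.5, -2.0], 0.0
x, y = [0.2, -0.3], 1  # clean input, true label 1

x_adv = fgsm_perturb(x, y, w, b, eps=0.5)
clean_pred = sigmoid(sum(wi * xi for wi, xi in zip(w, x)) + b)
adv_pred = sigmoid(sum(wi * xi for wi, xi in zip(w, x_adv)) + b)
# A small, bounded perturbation flips the model's (initially correct) prediction.
print(round(clean_pred, 3), round(adv_pred, 3))
```

ART packages attacks like this (and corresponding defenses such as adversarial training) behind a common estimator API, which is what lets them be wired into a Kubeflow pipeline as reusable robustness-checking steps.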

Session Speakers


Tommy Li

Senior Software Developer

IBM


Animesh Singh

Distinguished Engineer and CTO - Watson Data and AI OSS Platform

IBM

See Data+AI Summit highlights

Watch on demand