Adversarial AI—The Nature of the Threat, Impacts, and Mitigation Strategies
- Data Security and Governance
- Public Sector
- 40 min
Adversarial AI/ML is an emerging research area focused on the vulnerabilities of Artificial Intelligence (AI)/Machine Learning (ML) models to adversarial exploitation, such as data poisoning, adversarial perturbation, inference, and extraction attacks. This research area is of particular interest in domains where AI/ML models play an essential role in mission-critical decision-making. In this presentation, we will review the four principal categories of Adversarial AI in the context of the data and machine learning lifecycle, along with the general adversarial intent behind each. We will discuss each of the four categories (including the threat of deepfakes), supported by relevant examples, and consider their future implications. We will then present in greater depth our research on Adversarial NLP methods, backed by specific data poisoning and adversarial perturbation attacks on NLP classifiers.
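To give a flavor of the adversarial perturbation attacks on NLP classifiers mentioned above, here is a minimal toy sketch. The classifier, word lists, and synonym substitutions are all hypothetical illustrations, not the models or methods from the presentation: a naive keyword-based sentiment classifier is evaded by replacing negative words with near-synonyms outside its vocabulary, so a human reads the same sentiment while the model's prediction flips.

```python
# Hypothetical toy example: evading a keyword-based sentiment classifier
# with synonym substitutions (an adversarial perturbation in text space).
POSITIVE = {"great", "excellent", "wonderful"}
NEGATIVE = {"terrible", "awful", "horrible"}

def classify(text: str) -> str:
    """Naive bag-of-words sentiment classifier: counts keyword hits."""
    words = text.lower().split()
    score = sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)
    return "positive" if score >= 0 else "negative"

# Attacker's substitutions: near-synonyms missing from the model's vocabulary,
# so the sentiment is preserved for a human reader but the score changes.
EVASION_MAP = {"terrible": "second-rate", "awful": "sub-par"}

def perturb(text: str) -> str:
    return " ".join(EVASION_MAP.get(w, w) for w in text.lower().split())

original = "the service was terrible and the food was awful"
adversarial = perturb(original)

print(classify(original))     # negative
print(classify(adversarial))  # positive -- the perturbation evades the model
```

Real attacks against neural NLP models work analogously but search for substitutions guided by model gradients or output scores rather than a fixed lookup table.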
We will conclude the presentation by discussing current mitigation approaches and methods, and by offering general recommendations for reducing the vulnerability of AI/ML models to adversarial exploits.
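One widely used mitigation in this space is adversarial training: augmenting the training data with perturbed copies of examples so the model learns to resist them. The sketch below is a self-contained, hypothetical illustration (the tiny keyword classifier and substitution map are invented for this example, not taken from the presentation):

```python
# Hypothetical toy example of adversarial training as a mitigation:
# retrain on attacker-style perturbed examples so the evasion stops working.
def train_keyword_classifier(examples):
    """Learn per-label keyword sets from (text, label) pairs."""
    vocab = {"positive": set(), "negative": set()}
    for text, label in examples:
        vocab[label].update(text.lower().split())
    common = vocab["positive"] & vocab["negative"]   # drop ambiguous words
    return {label: words - common for label, words in vocab.items()}

def classify(vocab, text):
    words = text.lower().split()
    pos = sum(w in vocab["positive"] for w in words)
    neg = sum(w in vocab["negative"] for w in words)
    return "positive" if pos >= neg else "negative"

def perturb(text, substitutions):
    return " ".join(substitutions.get(w, w) for w in text.lower().split())

SUBS = {"terrible": "second-rate"}   # attacker's synonym substitution

train = [("great food", "positive"), ("terrible food", "negative")]
vocab_plain = train_keyword_classifier(train)

# Adversarial training: add perturbed copies of the negatives to the data.
augmented = train + [(perturb(t, SUBS), lbl) for t, lbl in train if lbl == "negative"]
vocab_robust = train_keyword_classifier(augmented)

attack = perturb("terrible food", SUBS)          # "second-rate food"
print(classify(vocab_plain, attack))   # positive -- attack succeeds
print(classify(vocab_robust, attack))  # negative -- attack mitigated
```

The same idea scales to neural models, where perturbed examples are generated on the fly during training; other common defenses include input sanitization, poisoned-data detection, and restricting model query access to hinder extraction.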