We present a novel deep learning approach to building a robust object detection network for an infrared, UAV-based poacher recognition system. More specifically, we used Microsoft AirSim to generate thousands of hours of simulated drone footage over the African savanna. We then used deep domain adaptation to translate our simulation into a form that is adversarially indistinguishable from real infrared drone footage. This yields a programmable data generator that can dramatically improve the accuracy of detection algorithms without requiring expensive human-curated annotations. Furthermore, we extend this work and contribute a photorealism extension to AirSim, automating much of the domain-specific expertise needed for computer graphics work and enabling the generation of limitless quantities of photorealistic data for use in reinforcement learning and autonomous vehicles.
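The adversarial translation step above can be sketched at a high level: a translator network maps simulated features toward the real-infrared domain, while a discriminator tries to tell real footage from translated footage, and the translator is trained to fool it. The snippet below is a minimal, hypothetical illustration of that two-player objective using toy numpy feature vectors and a logistic discriminator; the `generator` and `discriminator` functions, feature shapes, and the constant shift are all illustrative assumptions, not the actual networks used in the talk.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-ins for image features: rows are samples, columns are features.
real_ir = rng.normal(loc=1.0, size=(8, 4))  # features from real infrared footage
sim = rng.normal(loc=0.0, size=(8, 4))      # features from simulated (AirSim) footage

def generator(x, shift):
    """Hypothetical translator: nudges simulated features toward the real domain."""
    return x + shift

def discriminator(x, w, b):
    """Logistic discriminator: estimated probability that x is from the real domain."""
    return 1.0 / (1.0 + np.exp(-(x @ w + b)))

w = rng.normal(size=4)  # toy discriminator weights
b = 0.0
eps = 1e-8  # numerical safety for the logs

translated = generator(sim, shift=1.0)

# Discriminator objective: score real footage as 1, translated footage as 0.
d_loss = -(np.log(discriminator(real_ir, w, b) + eps).mean()
           + np.log(1.0 - discriminator(translated, w, b) + eps).mean())

# Translator objective: make translated footage indistinguishable from real.
g_loss = -np.log(discriminator(translated, w, b) + eps).mean()

print(f"d_loss={d_loss:.3f}  g_loss={g_loss:.3f}")
```

In a full system both objectives would be minimized alternately by gradient descent; here the point is only the shape of the two losses, whose equilibrium is reached when the discriminator can no longer separate the domains.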
Session hashtag: #SAISDD2
Anand is the GM and Chief of Staff for Microsoft AI. Previously he was the Chief of Staff for the Microsoft Azure Data Group, covering data platforms and machine learning. Over the last decade, he ran product management and development teams for Azure Data Services, Visual Studio, and the Windows Server User Experience at Microsoft. Anand holds a PhD in computational fluid mechanics and worked for several years as a researcher before joining Microsoft.
Mark is a software engineer on Microsoft's Applied AI team and a machine learning PhD student at the MIT Computer Science and AI Lab. Mark leads Microsoft ML for Apache Spark (http://aka.ms/spark), a distributed machine learning and microservice orchestration library. He has applied this work to problems in wildlife conservation, accessibility, and art museum outreach. Mark is currently researching how information theory and abstract algebra can yield new deep learning architectures in Professor William T. Freeman's lab.