Zipline is Airbnb’s data management platform designed specifically for ML use cases. Previously, ML practitioners at Airbnb spent roughly 60% of their time collecting data and writing transformations for machine learning tasks. Zipline reduces this work from months to days by making the process declarative: data scientists define features in a simple configuration language, and the framework then provides access to point-in-time correct features for both offline model training and online inference. In this talk we will describe the architecture of our system and the algorithm that makes efficient point-in-time correct feature generation tractable.
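To make the idea of point-in-time correctness concrete, here is a minimal, hypothetical sketch (not Zipline’s actual API): when backfilling a training row with label timestamp t, a feature value may only be drawn from events observed at or before t, so that training data matches exactly what online inference would have seen at that moment.

```python
from bisect import bisect_right

def point_in_time_lookup(events, t):
    """Return the latest feature value observed at or before time t.

    events: list of (timestamp, value) pairs, sorted by timestamp.
    Returns None if no event precedes t (the feature did not exist yet).
    """
    timestamps = [ts for ts, _ in events]
    i = bisect_right(timestamps, t)          # first index strictly after t
    return events[i - 1][1] if i > 0 else None

# Hypothetical feature: a user's booking count, logged each time it changed.
booking_counts = [(1, 0), (5, 2), (9, 3)]

# Backfilling training rows at different label times yields the values that
# would have been served online at those moments -- no future leakage.
print(point_in_time_lookup(booking_counts, 4))   # value as of t=4 -> 0
print(point_in_time_lookup(booking_counts, 6))   # value as of t=6 -> 2
print(point_in_time_lookup(booking_counts, 10))  # value as of t=10 -> 3
```

A naive join on the latest feature value would instead leak future information into training examples; the per-timestamp lookup above is the property that Zipline’s backfill algorithm must provide efficiently at scale.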
Attendees will learn:
While the talk is fairly technical, we will introduce all concepts from first principles with examples. A basic understanding of data-parallel distributed computation and machine learning may help, but is not required.
Varant Zanoyan is a software engineer on the Machine Learning Infrastructure team at Airbnb, where he works on tools for building and productionizing ML models. Previously, he worked closely with data scientists and engineers within Airbnb to build and deploy machine learning models. During this time he identified data management and feature engineering as the primary challenges faced by machine learning practitioners at Airbnb. These problems motivated him to solve them at the infrastructure level, an effort that resulted in Zipline, the feature store and data management platform for machine learning. Zipline remains his primary focus. Prior to Airbnb, he solved data infrastructure problems at Palantir Technologies.
Evgeny Shapiro is a software engineer on the Data Infrastructure team at Airbnb, where he works on the next generation of Airbnb’s data architecture. Previously he worked on the Trust team, building infrastructure to catch fraud in real time. Fraud detection was particularly challenging for the existing infrastructure because of its latency, volume, and correctness requirements. To address these challenges he joined the Zipline project, where he worked on the core data aggregation algorithms and optimizations required to run large feature backfills for production machine learning models, as well as on the online feature serving infrastructure.