Matei Zaharia is an Assistant Professor of Computer Science at Stanford University and Chief Technologist at Databricks. He started the Apache Spark project during his PhD at UC Berkeley in 2009, and has worked broadly in datacenter systems, co-starting the Apache Mesos project and contributing as a committer on Apache Hadoop. Today, Matei tech-leads the MLflow development effort at Databricks in addition to other aspects of the platform. Matei’s research work was recognized through the 2014 ACM Doctoral Dissertation Award for the best PhD dissertation in computer science, an NSF CAREER Award, and the US Presidential Early Career Award for Scientists and Engineers (PECASE).
The pursuit of AI is one of the biggest priorities in data today. The Thursday morning keynote will be led by Databricks Co-founder and CEO Ali Ghodsi and cover advances in data science, machine learning, MLOps and more in both open source and the Databricks Lakehouse Platform.
We’ll also be joined by data leaders from McDonald’s and Microsoft, as well as the legendary Bill Nye, a scientist, engineer, comedian and author.
May 26, 2021 08:00 AM PT
Join the Wednesday morning keynote to hear from Databricks co-founders and original creators of the popular projects Apache Spark, Delta Lake, and MLflow on how the open source community is tackling the biggest challenges in data.
Stay tuned for them to reveal some of the latest innovations in data engineering and data analytics to simplify and scale your work.
Data sharing has become important in the digital economy as enterprises wish to easily and securely exchange data with their customers, partners, and suppliers, but to date, data sharing solutions have been tied to a single vendor or commercial product. Today, Databricks unveiled "Delta Sharing" -- the industry’s first open protocol for data sharing -- making it simple to share data with other organizations regardless of where the data lives. Join Databricks Co-Founder and Chief Technologist Matei Zaharia, along with Databricks engineer Michael Armbrust and product manager Todd Greenstein, for an 'Ask Me Anything' session on Delta Sharing. Whether you want to dive deep into the technology or gain a better understanding of the scenarios, this is the session where you get to ask your questions!
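For a sense of how open the protocol is, here is a minimal sketch using the open source delta-sharing Python connector; the profile file and the share, schema, and table names are hypothetical placeholders a data provider would supply:

```python
import delta_sharing

# Profile file issued by the data provider; it holds the sharing
# server endpoint and an access token.
profile = "config.share"

# Discover the tables this recipient has been granted.
client = delta_sharing.SharingClient(profile)
print(client.list_all_tables())

# Load one shared table straight into a pandas DataFrame, with no
# dependency on the provider's compute platform.
df = delta_sharing.load_as_pandas(profile + "#my_share.my_schema.my_table")
print(df.head())
```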
November 18, 2020 04:00 PM PT
Matei Zaharia
Assistant Professor of Computer Science; Original Creator of Apache Spark & MLflow, Databricks
Deploying and operating machine learning applications is challenging because they are highly dependent on input data and can fail in complex ways. Problems such as training/inference differences in data format, data skew, and misconfigured software environments can easily sneak into a production application and impact its quality. To address these types of problems, organizations are adopting ML Platform software and MLOps practices specifically for managing machine learning applications.
In this talk, I’ll present some of the latest functionality added for productionizing machine learning in MLflow, the popular open source machine learning platform started by Databricks in 2018. These include built-in support for model management and review using the Model Registry, APIs for automatic Continuous Integration and Delivery (CI/CD), model schemas to catch differences in a model’s expected data format, and integration with model explainability tools. I’ll also talk about other work happening in the open source MLflow community, including deep integration with PyTorch and its growing ecosystem of model productionization tools.
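As one illustration of the model schema feature, the sketch below logs a scikit-learn model with an inferred signature so that serving can detect requests whose columns or types don't match what the model was trained on; the dataset and model choice here are arbitrary placeholders:

```python
import mlflow
import mlflow.sklearn
from mlflow.models.signature import infer_signature
from sklearn.datasets import load_diabetes
from sklearn.ensemble import RandomForestRegressor

X, y = load_diabetes(return_X_y=True, as_frame=True)
model = RandomForestRegressor(n_estimators=50).fit(X, y)

with mlflow.start_run():
    # The signature records the input and output schemas alongside the
    # model, so format mismatches surface early rather than in production.
    signature = infer_signature(X, model.predict(X))
    mlflow.sklearn.log_model(model, "model", signature=signature)
```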
Kasey Uhlenhuth
Sr Product Manager, Machine Learning, Databricks
Lin Qiao
Engineering Director, PyTorch, Facebook
Lin Qiao, engineering director on the Facebook AI team, talks about bringing machine learning to production at scale, including the PyTorch integration with MLflow. She covers the guiding principles behind PyTorch and the goals set during its initial development in 2016 through to the present day, with a focus on ecosystem compatibility.
Lin reviews the PyTorch production ecosystem and discusses how MLflow and PyTorch are integrated for tracking, models and model serving.
Clemens Mewald
Director of Product Management, Data Science and Machine Learning, Databricks
It is no longer a secret that data-driven insights and decision making are essential in any company’s strategy to keep up with today’s rapid pace of change and remain relevant. Although we take this realization for granted, we are still in the very early stages of enabling data teams to deliver on their promise. One of the reasons is that we haven’t equipped these teams with the modern toolkit they deserve.
Existing solutions leave data teams with impossible trade-offs. Giving Data Scientists the freedom to use any open source tools on their laptops doesn’t provide a clear path to production and governance. Simply hosting those same tools in the Cloud may solve some of the data privacy and security issues, but doesn’t improve productivity or collaboration. On the other hand, most robust and scalable production environments hinder innovation and experimentation by slowing Data Scientists down.
In this talk, we will give an update on the next generation Data Science Workspace on Databricks, originally unveiled at Spark + AI Summit 2020. Specifically, we will cover new capabilities added to Databricks Notebooks as well as Git-based Databricks Projects. Until now, the industry has assumed that collaborative notebooks are for experimentation only, and not for production. Our approach solves these challenges and, for the first time, provides a single platform for data teams to rapidly and confidently move from experimentation to production.
Stephan Schwarz
Production Planning: Manager Smart Data Processing (Mercedes Operations), Daimler
Sebastian Findeisen
Data Scientist, Daimler
When we think about luxury cars, what first comes to mind is often the end product -- the sleek design, how fast it goes, and so on. But we often overlook the enormous amount of effort it takes before that car rolls off the assembly line. In this talk, Daimler will give us a peek into how data and ML are playing a critical role in driving car production automation, with MLOps and tools like MLflow being leveraged to automate a number of complex processes and provide insights that create production efficiencies.
Rohan Kumar
Corporate Vice President, Azure Data, Microsoft
Responsible ML is the most talked-about field in AI at the moment. With the growing importance of ML, it is even more important for us to exercise ethical AI practices and ensure that the models we create live up to the highest standards of inclusiveness and transparency. Join Rohan Kumar as he talks about how Microsoft brings cutting-edge research into the hands of customers to make them more accountable for their models and responsible in their use of AI. For the AI community, this is an open invitation to collaborate and contribute to shape the future of Responsible ML. This keynote is brought to you as an encore presentation from the global Summit.
Sarah Bird
Principal Program Manager, Microsoft Azure AI
Keynote from Mae Jemison
First woman of color in the world to go into space, former NASA astronaut
An exploration of the opportunities and obstacles encountered, and the clarity of purpose needed, to achieve an extraordinary future -- such as human interstellar travel or a sustainable human existence on planet Earth -- and the roles big data and advancing IT can play.
Clemens Mewald - Next Generation Data Science Workspace (Databricks) - 9:06
Lauren Richie - DEMO: Next Generation Data Science Workspace (Databricks) - 17:55
Matei Zaharia - MLflow Community and Product Updates (Databricks) - 27:40
Sue Ann Hong - DEMO: MLflow (Databricks) - 42:57
Rohan Kumar - Responsible ML (Microsoft) - 51:52
Sarah Bird - DEMO: Responsible ML (Microsoft) - 1:00:21
Anurag Sehgal - Data and AI (Credit Suisse) - 1:12:58
Introducing the Next Generation Data Science Workspace
Ali Ghodsi, Clemens Mewald and Lauren Richie
It is no longer a secret that data-driven insights and decision making are essential in any company’s strategy to keep up with today’s rapid pace of change and remain relevant. Although we take this realization for granted, we are still in the very early stages of enabling data teams to deliver on their promise. One of the reasons is that we haven’t equipped these teams with the modern toolkit they deserve.
Existing solutions leave data teams with impossible trade-offs. Giving Data Scientists the freedom to use any open source tools on their laptops doesn’t provide a clear path to production and governance. Simply hosting those same tools in the Cloud may solve some of the data privacy and security issues, but doesn’t improve productivity or collaboration. On the other hand, most robust and scalable production environments hinder innovation and experimentation by slowing Data Scientists down.
In this talk, we will unveil the next generation of the Databricks Data Science Workspace: an open and unified experience for modern data teams specifically designed to address these hard trade-offs. We will introduce new features that leverage the open source tools you are familiar with to give you a laptop-like experience that provides the flexibility to experiment and the robustness to create reliable and reproducible production solutions.
Simplifying Model Development and Management with MLflow
Matei Zaharia and Sue Ann Hong
As organizations continue to develop their machine learning (ML) practice, the need for robust and reliable platforms capable of handling the entire ML lifecycle is becoming crucial for successful outcomes. Building models is difficult enough to do once, but deploying them into production in a reproducible, agile, and predictable way is exponentially harder due to the dependencies on parameters, environments, and the ever-changing nature of data and business needs.
Introduced by Databricks in 2018, MLflow is the most widely used open source platform for managing the full ML lifecycle. With over 2 million PyPI downloads a month and over 200 contributors, the growing support from the developer community demonstrates the need for an open source approach to standardize tools, processes, and frameworks involved throughout the ML lifecycle. MLflow significantly simplifies the complex process of standardizing MLOps and productionizing ML models. In this talk, we’ll cover what’s new in MLflow, including simplified experiment tracking, new innovations to the model format to improve portability, new features to manage and compare model schemas, and new capabilities for deploying models faster.
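As a small example of the simplified experiment tracking (a sketch; the library and model here are arbitrary), autologging instruments a training library in one call:

```python
import mlflow
import mlflow.sklearn
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression

# One call instruments scikit-learn; parameters, metrics, and the
# fitted model are captured without explicit log_* calls.
mlflow.sklearn.autolog()

X, y = load_iris(return_X_y=True)
with mlflow.start_run():
    LogisticRegression(max_iter=200).fit(X, y)
```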
Responsible ML - Bringing Accountability to Data Science
Rohan Kumar and Sarah Bird
Responsible ML is the most talked-about field in AI at the moment. With the growing importance of ML, it is even more important for us to exercise ethical AI practices and ensure that the models we create live up to the highest standards of inclusiveness and transparency. Join Rohan Kumar as he talks about how Microsoft brings cutting-edge research into the hands of customers to make them more accountable for their models and responsible in their use of AI. For the AI community, this is an open invitation to collaborate and contribute to shape the future of Responsible ML.
How Credit Suisse Is Leveraging Open Source Data and AI Platforms to Drive Digital Transformation, Innovation and Growth
Anurag Sehgal
Despite the increasing embrace of big data and AI, most financial services companies still experience significant challenges around data types, privacy, and scale. Credit Suisse is overcoming these obstacles by standardizing on open, cloud-based platforms, including Azure Databricks, to increase the speed and scale of operations and to democratize ML across the organization. Now, Credit Suisse is leading the way by successfully employing data and analytics to drive digital transformation, delivering new products to market faster, and driving business growth and operational efficiency.
Ali Ghodsi - Intro to Lakehouse, Delta Lake (Databricks) - 46:40
Matei Zaharia - Spark 3.0, Koalas 1.0 (Databricks) - 17:03
Brooke Wenig - DEMO: Koalas 1.0, Spark 3.0 (Databricks) - 35:46
Reynold Xin - Introducing Delta Engine (Databricks) - 1:01:50
Arik Fraimovich - Redash Overview & DEMO (Databricks) - 1:27:25
Vish Subramanian - Brewing Data at Scale (Starbucks) - 1:39:50
Realizing the Vision of the Data Lakehouse
Ali Ghodsi
Data warehouses have a long history in decision support and business intelligence applications. But data warehouses were not well suited to dealing with the unstructured, semi-structured, and streaming data common in modern enterprises. This led organizations to build data lakes of raw data about a decade ago. But data lakes also lacked important capabilities. The need for a better solution has given rise to the data lakehouse, which implements data structures and data management features similar to those in a data warehouse, directly on the kind of low-cost storage used for data lakes.
This keynote by Databricks CEO Ali Ghodsi explains why the open source Delta Lake project takes the industry closer to realizing the full potential of the data lakehouse, including new capabilities within the Databricks Unified Data Analytics platform to significantly accelerate performance. In addition, Ali will announce new open source capabilities to collaboratively run SQL queries against your data lake, build live dashboards, and alert on important changes to make it easier for all data teams to analyze and understand their data.
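To make the lakehouse idea concrete, here is a minimal Delta Lake sketch from PySpark; it assumes the Delta Lake package is on the classpath, and the storage path is a placeholder:

```python
from pyspark.sql import SparkSession

spark = (SparkSession.builder
         .appName("delta-demo")
         .config("spark.sql.extensions",
                 "io.delta.sql.DeltaSparkSessionExtension")
         .config("spark.sql.catalog.spark_catalog",
                 "org.apache.spark.sql.delta.catalog.DeltaCatalog")
         .getOrCreate())

# ACID-transactional writes on low-cost object storage.
spark.range(100).write.format("delta").mode("overwrite").save("/tmp/events")

# Read the current table, or query an earlier version ("time travel").
spark.read.format("delta").load("/tmp/events").show(5)
spark.read.format("delta").option("versionAsOf", 0).load("/tmp/events").show(5)
```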
Introducing Apache Spark 3.0:
A Retrospective of the Last 10 Years, and a Look Forward to the Next 10
Matei Zaharia and Brooke Wenig
In this keynote from Matei Zaharia, the original creator of Apache Spark, we will highlight major community developments with the release of Apache Spark 3.0 to make Spark easier to use, faster, and compatible with more data sources and runtime environments. Apache Spark 3.0 continues the project’s original goal to make data processing more accessible through major improvements to the SQL and Python APIs and automatic tuning and optimization features to minimize manual configuration. This year is also the 10-year anniversary of Spark’s initial open source release, and we’ll reflect on how the project and its user base have grown, as well as how the ecosystem around Spark (e.g. Koalas, Delta Lake and visualization tools) is evolving to make large-scale data processing simpler and more powerful.
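Two of the themes above, automatic tuning and the more accessible Python API, can be sketched in a few lines (the DataFrame contents are arbitrary):

```python
from pyspark.sql import SparkSession
import databricks.koalas as ks

spark = SparkSession.builder.appName("spark3-demo").getOrCreate()

# Adaptive query execution, new in Spark 3.0, re-optimizes plans at
# runtime using statistics gathered as the query executes.
spark.conf.set("spark.sql.adaptive.enabled", "true")

# Koalas implements the pandas API on top of Spark DataFrames, so
# pandas code can scale out with minimal changes.
kdf = ks.DataFrame({"a": [1, 2, 3], "b": [4.0, 5.0, 6.0]})
print(kdf.describe())
```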
Delta Engine: High Performance Query Engine for Delta Lake
Reynold Xin
How Starbucks is Achieving its 'Enterprise Data Mission' to Enable Data and ML at Scale and Provide World-Class Customer Experiences
Vish Subramanian
Starbucks makes sure that everything we do is through the lens of humanity – from our commitment to the highest quality coffee in the world, to the way we engage with our customers and communities to do business responsibly. A key aspect of ensuring those world-class customer experiences is data. This talk highlights the Enterprise Data Analytics mission at Starbucks, which helps make decisions powered by data at tremendous scale. This includes everything from processing data at petabyte scale with governed processes to deploying platforms at the speed of business and enabling ML across the enterprise. This session will detail how Starbucks has built world-class enterprise data platforms to drive world-class customer experiences.
Last summer, Databricks launched MLflow, an open source platform to manage the machine learning lifecycle, including experiment tracking, reproducible runs and model packaging. MLflow has grown quickly since then, with over 120 contributors from dozens of companies, including major contributions from RStudio and Microsoft. It has also gained new capabilities such as automatic logging from TensorFlow and Keras, Kubernetes integrations, and a high-level Java API. In this talk, we’ll cover some of the new features that have come to MLflow, and then focus on a major upcoming feature: model management with the MLflow Model Registry. Many organizations face challenges tracking which models are available in the organization and which ones are in production. The MLflow Model Registry provides a centralized database to keep track of these models, share and describe new model versions, and deploy the latest version of a model through APIs. We’ll demonstrate how these features can simplify common ML lifecycle tasks.
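A rough sketch of the Registry workflow follows; the model name and run ID are placeholders:

```python
import mlflow
import mlflow.pyfunc
from mlflow.tracking import MlflowClient

# Register a model logged by an earlier run as a new version.
result = mlflow.register_model("runs:/<run_id>/model", "churn-model")

# Promote the version through review stages.
client = MlflowClient()
client.transition_model_version_stage(
    name="churn-model", version=result.version, stage="Production")

# Downstream services load whatever is currently in production
# through a stable URI, decoupled from individual runs.
model = mlflow.pyfunc.load_model("models:/churn-model/Production")
```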
Last year, Databricks launched MLflow, an open source framework to manage the machine learning lifecycle that works with any ML library to simplify ML engineering. MLflow provides tools for experiment tracking, reproducible runs and model management that make machine learning applications easier to develop and deploy. In the past year, the MLflow community has grown quickly: 80 contributors from over 40 companies have contributed code to the project, and over 200 companies are using MLflow. In this talk, we’ll present our development plans for MLflow 1.0, the next release of MLflow, which will stabilize the MLflow APIs and introduce multiple new features to simplify the ML lifecycle. We’ll also discuss additional MLflow components that Databricks and other companies are working on for the rest of 2019, such as improved tools for model management, multi-step pipelines and online monitoring.
Successfully building and deploying a machine learning model can be difficult to do once. Enabling other data scientists (or yourself, one month later) to reproduce your pipeline, to compare the results of different versions, to track what's running where, and to redeploy and roll back updated models is much harder.
In this talk, I'll introduce MLflow, a new open source project from Databricks that simplifies the machine learning lifecycle. MLflow provides APIs for tracking experiment runs between multiple users within a reproducible environment, and for managing the deployment of models to production. MLflow is designed to be an open, modular platform, in the sense that you can use it with any existing ML library and development process. MLflow was launched in June 2018 and has already seen significant community contributions, with 45 contributors and new features including multiple language APIs, integrations with popular ML libraries, and additional storage backends. I’ll go through some of the newly released features and explain how to get started with MLflow.
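A minimal tracking example (the parameter and metric values are arbitrary) looks like this:

```python
import mlflow

# Each run records the parameters, metrics, and artifacts needed to
# reproduce and compare experiments later in the tracking UI.
with mlflow.start_run():
    mlflow.log_param("alpha", 0.5)
    mlflow.log_metric("rmse", 0.78)
    with open("notes.txt", "w") as f:
        f.write("baseline model")
    mlflow.log_artifact("notes.txt")
```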
ML development brings many new complexities beyond the traditional software development lifecycle. Unlike in traditional software development, ML developers want to try multiple algorithms, tools and parameters to get the best results, and they need to track this information to reproduce work. In addition, developers need to use many distinct systems to productionize models. To address these problems, many companies are building custom "ML platforms" that automate this lifecycle, but even these platforms are limited to a few supported algorithms and to each company's internal infrastructure. In this talk, I present MLflow, a new open source project from Databricks that aims to design an open ML platform where organizations can use any ML library and development tool of their choice to reliably build and share ML applications. MLflow introduces simple abstractions to package reproducible projects, track results, and encapsulate models that can be used with many existing tools, accelerating the ML lifecycle for organizations of any size.
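As a sketch of the project abstraction, a packaged MLproject can be run straight from Git, with MLflow recreating the declared environment and recording the results; this uses MLflow's public example repository, and the parameter value is arbitrary:

```python
import mlflow

# Fetch, set up, and execute a reproducible project in one call.
mlflow.projects.run(
    "https://github.com/mlflow/mlflow-example",
    parameters={"alpha": 0.4},
)
```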
Over the past three years, Spark has quickly grown from a research project to one of the most active open source projects in parallel computing. I’ll go through a summary of recent growth, highlighting key contributions from across the community. At the same time, much remains to be done to make big data analysis truly accessible and fast. I’ll sketch how we at Databricks are approaching this problem through our continuing work on Apache Spark, and the aspects of the system that we believe make Spark truly unique for big data.
Apache Spark continues to grow quickly in both community size and technical capabilities. Since the last Spark Summit, in December 2013, Spark’s contributor base has grown from 100 contributors to more than 200, and Spark has become the most active open source project in big data. We’ve also seen significant new components added, such as the Spark SQL runtime, a larger machine learning library, and rich integration with other data processing systems. Given all this activity, where is Spark heading? I’ll share our goal of Spark as a unifying platform between the diverse applications (e.g. stream processing, machine learning and SQL) and diverse storage and runtime systems in big data.
As the Apache Spark userbase grows, the developer community is working to adapt it for ever-wider use cases. 2014 saw fast adoption of Spark in the enterprise and major improvements in its performance, scalability and standard libraries. In 2015, we also want to make Spark accessible to a wider set of users, through new high-level APIs targeted at data science: machine learning pipelines, data frames, and R language bindings. In addition, we are defining extension points to let Spark grow as a platform, making it easy to plug in data sources, algorithms, and third-party packages. Like all work on Spark, these APIs are designed to plug seamlessly into existing Spark applications, giving users a unified platform for streaming, batch and interactive data processing.
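A small example of the machine learning pipelines API mentioned above (the data and stages are illustrative): a pipeline chains feature extraction and model fitting into a single estimator over a DataFrame:

```python
from pyspark.sql import SparkSession
from pyspark.ml import Pipeline
from pyspark.ml.classification import LogisticRegression
from pyspark.ml.feature import HashingTF, Tokenizer

spark = SparkSession.builder.appName("pipeline-demo").getOrCreate()

training = spark.createDataFrame(
    [(0, "a b c spark", 1.0), (1, "x y z", 0.0)],
    ["id", "text", "label"])

# Tokenize text, hash tokens into feature vectors, then fit a model;
# the whole chain is trained and reused as one unit.
tokenizer = Tokenizer(inputCol="text", outputCol="words")
hashingTF = HashingTF(inputCol="words", outputCol="features")
lr = LogisticRegression(maxIter=10)
model = Pipeline(stages=[tokenizer, hashingTF, lr]).fit(training)
```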
2015 was a year of continued growth for Spark, with numerous additions to the core project and very fast growth of use cases across the industry. In this talk, I'll look back at how the Spark community has grown and changed in 2015, based on a large Apache Spark user survey conducted by Databricks. We see some interesting trends in the diversity of runtime environments (which are increasingly not just Hadoop); the types of applications run on Spark; and the types of users, now that features like R support and DataFrames are available in Spark. I'll also cover the ongoing work in the upcoming releases of Spark to support new use cases.
The next release of Spark will be 2.0, marking a big milestone for the project. In this talk, I'll cover some of the large upcoming features that made us increase the version number to 2.0, as well as some of the roadmap for Spark in 2016.
The next release of Apache Spark will be 2.0, marking a big milestone for the project. In this talk, I'll cover how the community has grown to reach this point, and some of the major features in 2.0. The largest additions are performance improvements for Datasets, DataFrames and SQL through Project Tungsten, as well as a new Structured Streaming API that provides simpler and more powerful stream processing. I'll also discuss a bit of what's in the works for future versions.
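To give a flavor of the Structured Streaming API (a sketch; the socket host and port are placeholders), the classic streaming word count is expressed as an ordinary DataFrame query that the engine runs incrementally:

```python
from pyspark.sql import SparkSession
from pyspark.sql.functions import explode, split

spark = SparkSession.builder.appName("stream-demo").getOrCreate()

# Read a stream of text lines from a socket.
lines = (spark.readStream.format("socket")
         .option("host", "localhost").option("port", 9999).load())

# The query looks exactly like a batch DataFrame computation.
counts = (lines.select(explode(split(lines.value, " ")).alias("word"))
          .groupBy("word").count())

# The engine maintains the counts incrementally as data arrives.
query = counts.writeStream.outputMode("complete").format("console").start()
query.awaitTermination()
```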
October 25, 2016 05:00 PM PT
Apache Spark 2.0 was released this summer and is already being widely adopted. I'll talk about how changes in the API have made it easier to write batch, streaming and real-time applications. The Dataset API, which is now integrated with DataFrames, makes it possible to benefit from powerful optimizations such as pushing queries into data sources, while the Structured Streaming extension to this API makes it possible to run many of the same computations in a streaming fashion automatically.
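The point about running the same computations in a streaming fashion can be sketched as follows (the paths and column names are illustrative): only the read and write change between the batch and streaming versions of a query:

```python
from pyspark.sql import SparkSession
from pyspark.sql.functions import avg

spark = SparkSession.builder.appName("unified-demo").getOrCreate()

# Batch: a static DataFrame over files already on storage.
batch = spark.read.json("/data/events")
batch.groupBy("device").agg(avg("temperature")).show()

# Streaming: the identical query over files as they arrive; the
# engine keeps the aggregate up to date automatically.
stream = spark.readStream.schema(batch.schema).json("/data/events")
(stream.groupBy("device").agg(avg("temperature"))
 .writeStream.outputMode("complete").format("console").start())
```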
Big data remains a rapidly evolving field with new applications and infrastructure appearing every year. In this talk, I'll cover new trends in 2016 / 2017 and how Apache Spark is moving to meet them. In particular, I'll talk about work Databricks is doing to make Apache Spark interact better with native code (e.g. deep learning libraries), support heterogeneous hardware, and simplify production data pipelines in both streaming and batch settings through Structured Streaming.
2017 continues to be an exciting year for big data and Apache Spark. I will talk about two major initiatives that Databricks has been building: Structured Streaming, the new high-level API for stream processing, and new libraries that we are developing for machine learning. These initiatives can provide order-of-magnitude performance improvements over current open source systems while making stream processing and machine learning more accessible than ever before.
2017 continues to be an exciting year for Apache Spark. I will talk about new updates in two major areas in the Spark community this year: stream processing with Structured Streaming, and deep learning with high-level libraries such as Deep Learning Pipelines and TensorFlowOnSpark. In both areas, the community is making powerful new functionality available in the same high-level APIs used in the rest of the Spark ecosystem (e.g., DataFrames and ML Pipelines), and improving both the scalability and ease of use of stream processing and machine learning.