Delta Lake on Databricks Demo

With Delta Lake on Databricks, you can build a lakehouse architecture that combines the best parts of data lakes and data warehouses: a simple, open platform that stores and manages all of your data and supports all of your analytics and AI use cases.
In this demo, we cover the main features of Delta Lake, including unified batch and streaming data processing, schema enforcement and evolution, time travel, and support for UPDATE, MERGE, and DELETE operations. We also touch on some of the performance enhancements available with Delta Lake on Databricks.
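To make these features concrete, here is a minimal PySpark sketch, assuming a Spark session with the delta-spark package configured; the table path and column names are illustrative placeholders, not part of the demo itself.

```python
from delta.tables import DeltaTable
from pyspark.sql import SparkSession
from pyspark.sql.functions import expr

spark = SparkSession.builder.getOrCreate()
path = "/tmp/demo/events"  # hypothetical table location

# Create a small Delta table.
seed = spark.createDataFrame([(1, "click")], ["id", "event"])
seed.write.format("delta").mode("overwrite").save(path)

# Schema enforcement rejects a batch with an unexpected column by default;
# mergeSchema opts into schema evolution instead.
batch = spark.createDataFrame([(2, "view", "mobile")], ["id", "event", "channel"])
(batch.write.format("delta").mode("append")
      .option("mergeSchema", "true").save(path))

# UPDATE, DELETE, and MERGE on the Delta table.
events = DeltaTable.forPath(spark, path)
events.update(condition=expr("event = 'click'"), set={"event": expr("'tap'")})
events.delete("channel IS NULL")
updates = spark.createDataFrame([(3, "view", "web")], ["id", "event", "channel"])
(events.alias("t")
       .merge(updates.alias("s"), "t.id = s.id")
       .whenMatchedUpdateAll()
       .whenNotMatchedInsertAll()
       .execute())

# Time travel: read the table as of an earlier version.
v0 = spark.read.format("delta").option("versionAsOf", 0).load(path)
```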
Video transcript
Delta Lake Demo: Introduction
The lakehouse is a simple and open data platform for storing and managing all of your data, one that supports all of your analytics and AI use cases. Delta Lake provides the open, reliable, performant, and secure foundation for the lakehouse.
It’s an open-source data format and transactional data management system, based on Parquet, that makes your data lake reliable by implementing ACID transactions on top of cloud object storage. Delta Lake tables unify batch and streaming data processing right out of the box. And finally, Delta Lake is designed to be 100% compatible with Apache Spark™, so it’s easy to convert your existing data pipelines to Delta Lake with minimal changes to your code.
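As an illustration of that compatibility claim, the sketch below shows a Parquet pipeline switched over to Delta Lake, then read as a stream from the same table. The paths and checkpoint location are hypothetical placeholders, and this is one possible conversion, not the demo’s exact pipeline.

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

# Before: an existing Parquet-based pipeline.
df = spark.read.format("parquet").load("/data/events_parquet")
df.write.format("parquet").mode("append").save("/data/out_parquet")

# After: the same pipeline on Delta Lake, often just a format change.
df = spark.read.format("delta").load("/data/events_delta")
df.write.format("delta").mode("append").save("/data/out_delta")

# Unified batch and streaming: the same Delta table also serves as a
# streaming source, with each new commit picked up incrementally.
query = (spark.readStream.format("delta").load("/data/events_delta")
              .writeStream.format("delta")
              .option("checkpointLocation", "/data/_checkpoints/events")
              .start("/data/out_delta"))
```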