Getting started with DLT

We know you loved our original DLT guide, but it was time for an upgrade. We’ve replaced it with something even better — a brand-new tutorial that showcases the latest advancement in Databricks data engineering: Lakeflow Spark Declarative Pipelines (SDP).

Building on the foundation of DLT, Lakeflow SDP represents our vision for modern data engineering. Now fully integrated within the Lakeflow ecosystem, it delivers end-to-end pipeline capabilities — connecting data ingestion, transformation and orchestration through a single intelligent platform that effortlessly scales with your business.

For existing data pipelines, you don’t need to change a thing: all DLT pipelines continue to run seamlessly within Lakeflow, with no upgrade or code modifications required. All DLT capabilities — including streaming tables, materialized views and data quality expectations — remain available, and pipelines are now even more tightly integrated into the lakehouse.
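To illustrate, a declarative pipeline combining these capabilities might look like the following SQL sketch. The table names, source path and quality rule are hypothetical, chosen for illustration only:

```sql
-- Streaming table: incrementally ingests new JSON files as they land.
-- The expectation drops rows that fail the data quality check.
CREATE OR REFRESH STREAMING TABLE sensor_readings (
  CONSTRAINT valid_aircraft EXPECT (aircraft_id IS NOT NULL) ON VIOLATION DROP ROW
)
AS SELECT * FROM STREAM read_files('/Volumes/demo/iot/readings', format => 'json');

-- Materialized view: kept up to date automatically as sensor_readings changes.
CREATE OR REFRESH MATERIALIZED VIEW readings_per_aircraft
AS SELECT aircraft_id, COUNT(*) AS reading_count
FROM sensor_readings
GROUP BY aircraft_id;
```

The pipeline engine infers the dependency between the two datasets from the query itself, so you declare what each table should contain rather than orchestrating when each step runs.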

Our new hands-on SDP tutorial puts you in the cockpit with a real-world avionics example. You’ll build a production-ready pipeline processing IoT data from thousands of aircraft. Ready to elevate your data engineering skills? 

Check out the new How to get started with Apache Spark Declarative Pipelines guide today.