
Build Data Pipelines with Lakeflow Declarative Pipelines

This course introduces the essential concepts and skills needed to build data pipelines with Lakeflow Declarative Pipelines in Databricks, covering incremental batch and streaming ingestion and processing across multiple streaming tables and materialized views. Designed for data engineers new to Lakeflow Declarative Pipelines, the course provides a comprehensive overview of core components such as incremental data processing, streaming tables, materialized views, and temporary views, highlighting their specific purposes and differences.
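
To preview what these components look like in practice, here is a minimal Lakeflow Declarative Pipelines SQL sketch (the table names and source path are illustrative assumptions, not part of the course materials): a streaming table ingests raw files incrementally, and a materialized view is kept up to date from it.

    -- Minimal sketch with hypothetical names and a hypothetical volume path
    CREATE OR REFRESH STREAMING TABLE orders_bronze
    AS SELECT *
       FROM STREAM read_files('/Volumes/demo/raw/orders/', format => 'json');

    CREATE OR REFRESH MATERIALIZED VIEW orders_by_day
    AS SELECT order_date, COUNT(*) AS order_count
       FROM orders_bronze
       GROUP BY order_date;

Because the materialized view selects from the streaming table, the pipeline graph records that dependency automatically and refreshes the two datasets in the correct order.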


Topics covered include:

- Developing and debugging ETL pipelines with the multi-file editor in Lakeflow using SQL (with Python code examples provided)

- How Lakeflow Declarative Pipelines track data dependencies in a pipeline through the pipeline graph

- Configuring pipeline compute resources, data assets, trigger modes, and other advanced options


Next, the course introduces data quality expectations in Lakeflow, guiding users through the process of integrating expectations into pipelines to validate and enforce data integrity. Learners will then explore how to put a pipeline into production, including scheduling options, production mode, and enabling pipeline event logging to monitor pipeline performance and health.
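
In Lakeflow Declarative Pipelines SQL, an expectation is declared as a constraint on the dataset definition. The sketch below uses assumed table and column names: rows with a missing order ID are dropped, while negative amounts are only recorded as violations.

    CREATE OR REFRESH STREAMING TABLE orders_silver (
      CONSTRAINT valid_order_id EXPECT (order_id IS NOT NULL) ON VIOLATION DROP ROW,
      CONSTRAINT non_negative_amount EXPECT (amount >= 0)
    )
    AS SELECT * FROM STREAM(orders_bronze);  -- orders_bronze assumed to exist in the same pipeline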


Finally, the course covers how to implement Change Data Capture (CDC) using the APPLY CHANGES INTO syntax within Lakeflow Declarative Pipelines to manage slowly changing dimensions (SCD Type 1 and Type 2), preparing users to integrate CDC into their own pipelines.
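
As a rough illustration of the syntax (the source table, key, operation, and sequencing columns below are assumptions), a CDC flow declares a target streaming table and then applies the change feed to it:

    -- Hypothetical target and CDC source tables
    CREATE OR REFRESH STREAMING TABLE customers_silver;

    APPLY CHANGES INTO customers_silver
    FROM STREAM(customers_cdc_bronze)
    KEYS (customer_id)
    APPLY AS DELETE WHEN operation = 'DELETE'
    SEQUENCE BY sequence_num
    STORED AS SCD TYPE 2;

Switching STORED AS SCD TYPE 2 to SCD TYPE 1 keeps only the latest version of each row instead of preserving history.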

Skill Level: Associate
Duration: 2h
Prerequisites

⇾ Basic understanding of the Databricks Data Intelligence Platform, including Databricks Workspaces, Apache Spark, Delta Lake, the Medallion Architecture, and Unity Catalog.

⇾ Experience ingesting raw data into Delta tables, including using the read_files SQL function to load formats like CSV, JSON, TXT, and Parquet.

⇾ Proficiency in transforming data using SQL, including writing intermediate-level queries and a basic understanding of SQL joins.

Outline

Introduction to Data Engineering in Databricks

⇾ Data Engineering in Databricks

⇾ What are Lakeflow Declarative Pipelines?

⇾ Course Setup and Creating a Pipeline

⇾ Course Project Overview


Lakeflow Declarative Pipeline Fundamentals

⇾ Dataset Types Overview

⇾ Simplified Pipeline Development

⇾ Common Pipeline Settings

⇾ Developing a Simple Pipeline

⇾ Ensure Data Quality with Expectations


Building Lakeflow Declarative Pipelines

⇾ Streaming Joins Overview

⇾ Deploying a Pipeline to Production

⇾ Change Data Capture (CDC) Overview

⇾ Change Data Capture with APPLY CHANGES INTO

⇾ Additional Features Overview


Registration options

Databricks has a delivery method for wherever you are on your learning journey


Self-Paced

Custom-fit learning paths for data, analytics, and AI roles, delivered through on-demand videos

Register now


Instructor-Led

Public and private courses taught by expert instructors across half-day to two-day courses

Register now


Blended Learning

Self-paced and weekly instructor-led sessions for every style of learner to optimize course completion and knowledge retention. Go to the Subscriptions Catalog tab to purchase

Purchase now


Skills@Scale

Comprehensive training offering for large-scale customers that includes learning elements for every learning style. Inquire with your account executive for details

Upcoming Public Classes

Data Engineer

Data Ingestion with Lakeflow Connect

This course provides a comprehensive introduction to Lakeflow Connect as a scalable and simplified solution for ingesting data into Databricks from a variety of data sources. You will begin by exploring the different types of connectors within Lakeflow Connect (Standard and Managed), learn about various ingestion techniques, including batch, incremental batch, and streaming, and then review the key benefits of Delta tables and the Medallion architecture.

From there, you will gain practical skills to efficiently ingest data from cloud object storage using Lakeflow Connect Standard Connectors with methods such as CREATE TABLE AS (CTAS), COPY INTO, and Auto Loader, and learn the benefits and considerations of each approach. You will then learn how to append metadata columns to your bronze-level tables during ingestion into the Databricks Data Intelligence Platform. This is followed by working with the rescued data column, which captures records that don't match the schema of your bronze table, including strategies for managing this rescued data.
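
For orientation, the three Standard Connector approaches look roughly like this in SQL; the paths, table names, and added metadata column are illustrative assumptions:

    -- Batch load with CREATE TABLE AS (CTAS)
    CREATE TABLE events_bronze_ctas AS
    SELECT * FROM read_files('/Volumes/demo/raw/events/', format => 'json');

    -- Incremental batch with COPY INTO: only files not yet loaded are picked up on each run
    CREATE TABLE IF NOT EXISTS events_bronze_copy;
    COPY INTO events_bronze_copy
    FROM '/Volumes/demo/raw/events/'
    FILEFORMAT = JSON
    COPY_OPTIONS ('mergeSchema' = 'true');

    -- Streaming ingestion with Auto Loader via a streaming table, appending an ingestion timestamp;
    -- with inferred schemas, records that do not fit land in the _rescued_data column
    CREATE OR REFRESH STREAMING TABLE events_bronze_stream
    AS SELECT *, current_timestamp() AS ingest_time
       FROM STREAM read_files('/Volumes/demo/raw/events/', format => 'json');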

The course also introduces techniques for ingesting and flattening semi-structured JSON data, as well as enterprise-grade data ingestion using Lakeflow Connect Managed Connectors.
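
For the flattening step, Databricks SQL can pull nested fields out of a JSON string column with the colon path syntax; the column and field names here are placeholders:

    SELECT
      raw:device.id::STRING     AS device_id,
      raw:reading.value::DOUBLE AS reading_value
    FROM iot_bronze;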

Finally, learners will explore alternative ingestion strategies, including MERGE INTO operations and leveraging the Databricks Marketplace, equipping you with foundational knowledge to support modern data engineering ingestion.
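
MERGE INTO, for example, upserts a batch of changes into an existing Delta table; the table and key names below are placeholders:

    MERGE INTO customers_silver AS t
    USING customers_updates AS s
      ON t.customer_id = s.customer_id
    WHEN MATCHED THEN UPDATE SET *
    WHEN NOT MATCHED THEN INSERT *;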

Free · 2h · Associate

Questions?

If you have any questions, please refer to our Frequently Asked Questions page.