Advanced Techniques with Spark Declarative Pipelines
This course explores Databricks' Lakeflow Spark Declarative Pipelines (SDP) for building production-grade streaming pipelines. You will learn advanced design patterns, robust data quality enforcement, and cross-platform integration essential for real-world lakehouse engineering.
Throughout the course, you will dive into modern data ingestion and processing techniques, mastering tools like Liquid Clustering for layout optimization and the Multiplex Streaming pattern for mixed-schema events. By the end of the course, you will be able to confidently handle schema evolution, automate Change Data Capture (CDC), and ensure data integrity.
Through lectures and hands-on demos, you will:
• Build multi-flow pipelines to ingest multi-source data into a unified Bronze table.
• Apply Liquid Clustering and Data Quality Expectations across Silver and Gold layers (first sketch below).
• Implement the Multiplex pattern with Iceberg UniForm for cross-platform data access (second sketch below).
• Automate SCD Type 2 history tracking using AUTO CDC INTO (third sketch below).
• Design zero-data-loss quarantine pipelines to audit and manage invalid records (fourth sketch below).
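To give a flavor of the syntax the course builds on, here is a minimal sketch of a Silver streaming table that combines Data Quality Expectations with Liquid Clustering. All table and column names (bronze_events, silver_orders, order_id, amount, order_date, topic) are hypothetical, and exact clauses should be checked against the current Lakeflow SDP documentation.

```sql
-- Hypothetical Silver table: reads the multiplexed Bronze stream, keeps only
-- 'orders' events, drops rows that fail the expectations, and uses Liquid
-- Clustering to lay data out for efficient date-range queries.
CREATE OR REFRESH STREAMING TABLE silver_orders (
  CONSTRAINT valid_order_id  EXPECT (order_id IS NOT NULL) ON VIOLATION DROP ROW,
  CONSTRAINT positive_amount EXPECT (amount > 0)           ON VIOLATION DROP ROW
)
CLUSTER BY (order_date)
COMMENT "Validated order events from the multiplexed Bronze stream"
AS SELECT * FROM STREAM(bronze_events)
   WHERE topic = 'orders';
```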
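Cross-platform access through Iceberg UniForm is configured with Delta table properties. The two delta.* property names below are the documented UniForm settings; the Gold table itself (gold_daily_orders) and its query are illustrative only.

```sql
-- Hypothetical Gold aggregate exposed to Iceberg readers via UniForm: Delta
-- stores the data once and additionally generates Iceberg metadata for it.
CREATE OR REFRESH MATERIALIZED VIEW gold_daily_orders
COMMENT "Daily order totals, readable from Iceberg clients via UniForm"
TBLPROPERTIES (
  'delta.enableIcebergCompatV2'          = 'true',
  'delta.universalFormat.enabledFormats' = 'iceberg'
)
AS SELECT order_date,
          count(*)    AS order_count,
          sum(amount) AS total_amount
   FROM silver_orders
   GROUP BY order_date;
```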
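SCD Type 2 history tracking is declared rather than hand-coded with MERGE. A sketch of the AUTO CDC INTO flow syntax, assuming a hypothetical CDC feed (customers_cdc_feed) carrying a customer_id key, an operation column, and an event_ts ordering column:

```sql
-- The SCD Type 2 target must exist before the flow that feeds it.
CREATE OR REFRESH STREAMING TABLE customers_scd2;

-- AUTO CDC applies inserts, updates, and deletes in event_ts order and
-- maintains __START_AT / __END_AT validity columns for history tracking.
CREATE FLOW customers_changes AS AUTO CDC INTO customers_scd2
FROM STREAM(customers_cdc_feed)
KEYS (customer_id)
APPLY AS DELETE WHEN operation = 'DELETE'
SEQUENCE BY event_ts
COLUMNS * EXCEPT (operation)
STORED AS SCD TYPE 2;
```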
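Finally, the quarantine pattern inverts the Silver table's expectations: instead of silently dropping invalid rows, it routes them to a dedicated table so no data is lost. This sketch uses the same hypothetical names as the first one, with the predicate written NULL-safe so missing values are quarantined rather than ignored.

```sql
-- Captures the rows the Silver expectations would drop, preserving them
-- with an audit timestamp for later inspection and reprocessing.
CREATE OR REFRESH STREAMING TABLE orders_quarantine
COMMENT "Invalid order events retained for audit and reprocessing"
AS SELECT *, current_timestamp() AS quarantined_at
   FROM STREAM(bronze_events)
   WHERE topic = 'orders'
     AND (order_id IS NULL OR amount IS NULL OR amount <= 0);
```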
Note:
1. This course is the first in the 'Advanced Data Engineering with Databricks' series.
2. For SCORM lecture files, please ensure that you close the SCORM window after completing the content. Do not click the ‘Next Lesson’ button, as doing so may prevent the SCORM module from being marked as complete.
This content was developed for participants with the following skills, knowledge, and abilities:
• Spark Declarative Pipelines — Completion of the "Build Data Pipelines with Lakeflow Spark Declarative Pipelines" course, or familiarity with CREATE OR REFRESH STREAMING TABLE, CONSTRAINTS, and the Pipelines UI
• Delta Lake Fundamentals — Understanding of Delta tables and how Delta manages data files and transaction logs
• Streaming Concepts — Knowledge of micro-batch streaming, checkpointing, and event-time processing in SDP
• SQL Proficiency — Ability to read and write SQL, including SELECT, JOIN, MERGE, CASE WHEN, and common aggregate functions
• Python in Databricks Notebooks — Comfort with reading and running Python code in Databricks notebooks
• Unity Catalog Basics — Understanding of catalogs, schemas, tables, and volumes in Unity Catalog
Registration options
Databricks has a delivery method for wherever you are on your learning journey.

Self-Paced
Custom-fit learning paths for data, analytics, and AI roles and career paths through on-demand videos.

Instructor-Led
Public and private courses taught by expert instructors, ranging from half-day to two-day sessions.

Blended Learning
Self-paced content combined with weekly instructor-led sessions for every style of learner, designed to optimize course completion and knowledge retention. Go to the Subscriptions Catalog tab to purchase.

Skills@Scale
A comprehensive training offering for large-scale customers that includes learning elements for every style of learning. Inquire with your account executive for details.

