Advanced Data Engineering with Databricks

This course is an appropriate entry point for learning advanced data engineering with Databricks.

Below, we describe each of the four four-hour modules included in this course.


Databricks Streaming and Lakeflow Spark Declarative Pipelines

This course provides a comprehensive understanding of Spark Structured Streaming and Delta Lake, including the streaming computation model, configuring streaming reads, and maintaining data quality in a streaming environment.
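
To make the streaming-read configuration concrete, here is a minimal PySpark sketch of an incremental read from a Delta table, assuming a Databricks notebook where `spark` is predefined; the table names `bronze_events` and `silver_events` and the checkpoint path are hypothetical, not from the course materials.

```python
# Hedged sketch: incremental Structured Streaming read from a Delta table.
# Table names and paths below are placeholders.
stream_df = (
    spark.readStream
        .table("bronze_events")               # treat the Delta table as a streaming source
        .filter("event_type IS NOT NULL")     # a simple data-quality filter
)

query = (
    stream_df.writeStream
        .option("checkpointLocation", "/tmp/checkpoints/bronze_events")  # required for progress tracking
        .trigger(availableNow=True)           # process all available data, then stop
        .toTable("silver_events")             # write incrementally to a downstream Delta table
)
```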


Databricks Data Privacy

This content is intended for data engineers: customers, partners, and employees who perform data engineering tasks with Databricks. It provides the knowledge and skills needed to execute these activities effectively on the Databricks platform.


Databricks Performance Optimization

In this course, you’ll learn how to optimize workloads and physical data layout with Spark and Delta Lake, and how to analyze the Spark UI to assess performance and debug applications. We’ll cover topics like streaming, liquid clustering, data skipping, caching, Photon, and more.
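
As one example of the physical-layout techniques named above, the sketch below enables liquid clustering on a Delta table so that data skipping can prune files on the clustered column; the table and column names are illustrative, not from the course.

```python
# Hedged sketch: liquid clustering on a Delta table (names are placeholders).
spark.sql("""
    CREATE TABLE IF NOT EXISTS sales (
        customer_id BIGINT,
        amount      DOUBLE,
        order_date  DATE
    )
    CLUSTER BY (customer_id)  -- liquid clustering instead of static partitioning
""")

# OPTIMIZE incrementally clusters newly written data; queries filtering on
# customer_id can then skip files whose statistics rule them out.
spark.sql("OPTIMIZE sales")
```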


Automated Deployment with Databricks Asset Bundles

This course provides a comprehensive review of DevOps principles and their application to Databricks projects. It begins with an overview of core DevOps, DataOps, continuous integration (CI), continuous deployment (CD), and testing, and explores how these principles can be applied to data engineering pipelines.

The course then focuses on continuous deployment within the CI/CD process, examining tools like the Databricks REST API, SDK, and CLI for project deployment. You will learn about Databricks Asset Bundles (DABs) and how they fit into the CI/CD process. You’ll dive into their key components, folder structure, and how they streamline deployment across various target environments in Databricks. You will also learn how to add variables, modify, validate, deploy, and execute Databricks Asset Bundles for multiple environments with different configurations using the Databricks CLI.
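
For orientation, here is a minimal, hypothetical `databricks.yml` illustrating the key components the paragraph above refers to: the bundle name, a variable, two targets, and a job resource. All names and paths are placeholders, not the course’s project.

```yaml
# databricks.yml -- a minimal illustrative bundle; names are placeholders.
bundle:
  name: demo_bundle

variables:
  warehouse_size:
    default: Small            # can be overridden per target or with --var on the CLI

targets:
  dev:
    mode: development         # deployed resources get a per-user dev prefix
    default: true
  prod:
    mode: production

resources:
  jobs:
    nightly_etl:
      name: nightly_etl
      tasks:
        - task_key: main
          notebook_task:
            notebook_path: ./src/etl_notebook.py
```

With a file like this, `databricks bundle validate`, `databricks bundle deploy -t prod`, and `databricks bundle run nightly_etl` cover the validate, deploy, and execute cycle described above.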

Finally, the course introduces Visual Studio Code as an Integrated Development Environment (IDE) for building, testing, and deploying Databricks Asset Bundles locally, optimizing your development process. The course concludes with an introduction to automating deployment pipelines using GitHub Actions to enhance the CI/CD workflow with Databricks Asset Bundles.
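
As a taste of that final topic, the sketch below shows one plausible GitHub Actions workflow that deploys a bundle on every push to main; the workflow name, secret names, and target are assumptions, and `databricks/setup-cli` is the action Databricks documents for installing its CLI.

```yaml
# .github/workflows/deploy.yml -- hedged sketch; secrets and target are placeholders.
name: deploy-bundle
on:
  push:
    branches: [main]

jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: databricks/setup-cli@main      # installs the Databricks CLI
      - run: databricks bundle deploy -t prod
        env:
          DATABRICKS_HOST: ${{ secrets.DATABRICKS_HOST }}
          DATABRICKS_TOKEN: ${{ secrets.DATABRICKS_TOKEN }}
```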

By the end of this course, you will be equipped to automate Databricks project deployments with Databricks Asset Bundles, improving efficiency through DevOps practices.


Languages Available: English | 日本語 | Português BR | 한국어

Skill Level
Professional
Duration
16h
Prerequisites

• Ability to perform basic code development tasks using the Databricks Data Engineering and Data Science workspace (create clusters, run code in notebooks, use basic notebook operations, import repos from Git, etc.)

• Intermediate programming experience with PySpark

• Extract data from a variety of file formats and data sources

• Apply a number of common transformations to clean data

• Reshape and manipulate complex data using advanced built-in functions

• Intermediate programming experience with Delta Lake (create tables, perform complete and incremental updates, compact files, restore previous versions, etc.) 

• Beginner experience configuring and scheduling data pipelines using the Lakeflow Spark Declarative Pipelines UI 

• Beginner experience defining Lakeflow Spark Declarative Pipelines using PySpark 

• Ingest and process data using Auto Loader and PySpark syntax (a short sketch follows this list)

• Process Change Data Capture feeds with APPLY CHANGES INTO syntax

• Review pipeline event logs and results to troubleshoot Declarative Pipeline syntax

• Strong knowledge of the Databricks platform, including experience with Databricks Workspaces, Apache Spark, Delta Lake, the Medallion Architecture, Unity Catalog, Lakeflow Declarative Pipelines, and Workflows. In particular, knowledge of leveraging Expectations with Lakeflow Declarative Pipelines. 

• Experience in data ingestion and transformation, with proficiency in PySpark for data processing and DataFrame manipulation, as well as experience writing intermediate-level SQL queries for data analysis and transformation.

• Proficiency in Python programming, including the ability to design and implement functions and classes, and experience with creating, importing, and utilizing Python packages.

• Familiarity with DevOps practices, particularly continuous integration and continuous delivery/deployment (CI/CD) principles.

• A basic understanding of Git version control.

• Prerequisite course: DevOps Essentials for Data Engineering
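
To illustrate the Auto Loader prerequisite flagged above, here is a minimal PySpark sketch, assuming a notebook where `spark` is predefined; the source path, schema location, checkpoint path, and table name are all hypothetical.

```python
# Hedged sketch: Auto Loader ingestion into a bronze Delta table.
# Every path and name below is a placeholder.
raw_stream = (
    spark.readStream
        .format("cloudFiles")                                      # Auto Loader source
        .option("cloudFiles.format", "json")                       # format of incoming files
        .option("cloudFiles.schemaLocation", "/tmp/schemas/raw")   # where inferred schema state is kept
        .load("/Volumes/main/default/raw_files")
)

(raw_stream.writeStream
    .option("checkpointLocation", "/tmp/checkpoints/raw")
    .trigger(availableNow=True)
    .toTable("bronze_raw"))
```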

Outline

Databricks Streaming and Lakeflow Spark Declarative Pipelines

⇾ Streaming Data Concepts

⇾ Introduction to Structured Streaming

⇾ Demo: Reading from a Streaming Query

⇾ Streaming from Delta Lake

⇾ Streaming Query Lab

⇾ Aggregation, Time Windows, Watermarks

⇾ Event Time + Aggregations over Time Windows

⇾ Trigger Types and Output Modes

⇾ Stream Aggregation Lab

⇾ Demo: Windowed Aggregation with Watermark

⇾ Stream Joins (Optional)

⇾ Demo: Stream Joins (Optional)

⇾ Data Ingestion Pattern

⇾ Demo: Auto Load to Bronze

⇾ Demo: Stream from Multiplex Bronze

⇾ Data Quality Enforcement

⇾ Demo: Data Quality Enforcement

⇾ Streaming ETL Lab


Databricks Data Privacy

⇾ Regulatory Compliance

⇾ Data Privacy

⇾ Key Concepts and Components

⇾ Audit Your Data

⇾ Data Isolation

⇾ Demo: Securing Data in Unity Catalog 

⇾ Pseudonymization & Anonymization

⇾ Summary & Best Practices

⇾ Demo: PII Data Security

⇾ Capturing Changed Data

⇾ Deleting Data in Databricks

⇾ Demo: Processing Records from CDF and Propagating Changes

⇾ Lab: Propagating Changes with CDF


Databricks Performance Optimization

⇾ Spark UI Introduction

⇾ Introduction to Designing the Foundation

⇾ Demo: File Explosion

⇾ Data Skipping and Liquid Clustering

⇾ Lab: Data Skipping and Liquid Clustering

⇾ Skew

⇾ Shuffles

⇾ Demo: Shuffle

⇾ Spill

⇾ Lab: Exploding Join

⇾ Serialization

⇾ Demo: User-Defined Functions

⇾ Fine-Tuning: Choosing the Right Cluster

⇾ Pick the Best Instance Types


Automated Deployment with Databricks Asset Bundles

⇾ DevOps Review

⇾ Continuous Integration and Continuous Deployment/Delivery (CI/CD) Review

⇾ Demo: Course Setup and Authentication

⇾ Deploying Databricks Projects

⇾ Introduction to Databricks Asset Bundles (DABs)

⇾ Demo: Deploying a Simple DAB

⇾ Lab: Deploying a Simple DAB

⇾ Variable Substitutions in DABs

⇾ Demo: Deploying a DAB to Multiple Environments

⇾ Lab: Deploy a DAB to Multiple Environments

⇾ DAB Project Templates Overview

⇾ Lab: Use a Databricks Default DAB Template

⇾ CI/CD Project Overview with DABs

⇾ Demo: Continuous Integration and Continuous Deployment with DABs

⇾ Lab: Adding ML to Engineering Workflows with DABs

⇾ Developing Locally with Visual Studio Code (VSCode)

⇾ Demo: Using VSCode with Databricks

⇾ CI/CD Best Practices for Data Engineering

⇾ Next Steps: Automated Deployment with GitHub Actions

Upcoming Public Classes

Date | Time | Language | Price
Nov 11 - 14 | 11 AM - 03 PM (Asia/Singapore) | English | $1500.00
Dec 01 - 02 | 09 AM - 05 PM (Australia/Sydney) | English | $1500.00
Dec 01 - 02 | 09 AM - 05 PM (America/New_York) | English | $1500.00
Dec 08 - 11 | 11 AM - 03 PM (Asia/Singapore) | English | $1500.00
Dec 08 - 09 | 09 AM - 05 PM (Europe/London) | English | $1500.00
Dec 15 - 16 | 09 AM - 05 PM (Europe/London) | English | $1500.00
Dec 15 - 18 | 02 PM - 06 PM (America/New_York) | English | $1500.00
Jan 05 - 06 | 09 AM - 05 PM (Australia/Sydney) | English | $1500.00
Jan 05 - 06 | 09 AM - 05 PM (America/New_York) | English | $1500.00
Jan 12 - 13 | 09 AM - 05 PM (Europe/London) | English | $1500.00
Jan 13 - 16 | 11 AM - 03 PM (Asia/Singapore) | English | $1500.00
Jan 19 - 20 | 09 AM - 05 PM (Europe/London) | English | $1500.00
Jan 19 - 22 | 02 PM - 06 PM (America/New_York) | English | $1500.00

Public Class Registration

If your company has purchased success credits or has a learning subscription, please fill out the Training Request form. Otherwise, you can register below.

Private Class Request

If your company is interested in private training, please submit a request.

Registration options

Databricks has a delivery method for wherever you are on your learning journey

Self-Paced

Custom-fit learning paths for data, analytics, and AI roles, delivered through on-demand videos

Instructor-Led

Public and private courses taught by expert instructors across half-day to two-day courses

Blended Learning

Self-paced and weekly instructor-led sessions for every style of learner, designed to optimize course completion and knowledge retention. Go to the Subscriptions Catalog tab to purchase.

Skills@Scale

Comprehensive training offering for large-scale customers that includes learning elements for every style of learner. Inquire with your account executive for details.

Questions?

If you have any questions, please refer to our Frequently Asked Questions page.