
Data Engineering with Databricks

This is an introductory course that serves as an appropriate entry point to learn Data Engineering with Databricks. 

Below, we describe each of the four four-hour modules included in this course.


1. Data Ingestion with Lakeflow Connect

This course provides a comprehensive introduction to Lakeflow Connect as a scalable and simplified solution for ingesting data into Databricks from a variety of data sources. You will begin by exploring the two types of connectors within Lakeflow Connect (Standard and Managed), learn about ingestion techniques including batch, incremental batch, and streaming, and then review the key benefits of Delta tables and the Medallion architecture.


From there, you will gain practical skills to efficiently ingest data from cloud object storage using Lakeflow Connect Standard Connectors with methods such as CREATE TABLE AS SELECT (CTAS), COPY INTO, and Auto Loader, along with the benefits and considerations of each approach. You will then learn how to append metadata columns to your bronze-level tables during ingestion into the Databricks Data Intelligence Platform. This is followed by working with the rescued data column, which captures records that do not match the schema of your bronze table, including strategies for managing this rescued data.
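
As a taste of this module, here is a minimal Auto Loader sketch in Python; the table and volume paths are illustrative, and `spark` is assumed to be the SparkSession provided in a Databricks notebook. It reads JSON incrementally, appends simple metadata columns, and relies on the automatically populated `_rescued_data` column for records that do not match the inferred schema.

```python
from pyspark.sql.functions import col, current_timestamp

# Hypothetical paths; any Unity Catalog volume or cloud storage location works.
source_path = "/Volumes/main/demo/raw_orders"
checkpoint_path = "/Volumes/main/demo/_checkpoints/orders_bronze"

(
    spark.readStream.format("cloudFiles")                   # Auto Loader
    .option("cloudFiles.format", "json")
    .option("cloudFiles.schemaLocation", checkpoint_path)   # schema tracking + _rescued_data
    .load(source_path)
    .select(
        "*",
        col("_metadata.file_path").alias("source_file"),    # ingestion metadata column
        current_timestamp().alias("ingest_time"),
    )
    .writeStream.option("checkpointLocation", checkpoint_path)
    .trigger(availableNow=True)                              # incremental batch run
    .toTable("main.demo.orders_bronze")                      # hypothetical bronze table
)
```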


The course also introduces techniques for ingesting and flattening semi-structured JSON data, as well as enterprise-grade data ingestion using Lakeflow Connect Managed Connectors.
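
For a sense of the flattening techniques covered, a small sketch is shown below; the nested field names and path are hypothetical. It reads semi-structured JSON and flattens a nested struct and an array using dot notation and explode.

```python
from pyspark.sql.functions import col, explode

raw = spark.read.json("/Volumes/main/demo/raw_events")  # hypothetical source path

flat = raw.select(
    col("device.id").alias("device_id"),         # pull nested struct fields up with dot notation
    col("device.model").alias("device_model"),
    explode(col("readings")).alias("reading"),   # one output row per array element
).select("device_id", "device_model", "reading.ts", "reading.value")
```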


Finally, you will explore alternative ingestion strategies, including MERGE INTO operations and the Databricks Marketplace, giving you the foundational knowledge to support modern data engineering ingestion.
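
The MERGE INTO pattern mentioned above can be sketched as follows (table names are illustrative, and `spark` is assumed to be a Databricks notebook session); it upserts change records from a staging table into a Delta target.

```python
# Upsert: update matching rows, insert new ones. Both tables are hypothetical Delta tables.
spark.sql("""
    MERGE INTO main.demo.customers AS target
    USING main.demo.customers_updates AS source
      ON target.customer_id = source.customer_id
    WHEN MATCHED THEN UPDATE SET *
    WHEN NOT MATCHED THEN INSERT *
""")
```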


2. Deploy Workloads with Lakeflow Jobs

The Deploy Workloads with Lakeflow Jobs course teaches you how to orchestrate and automate data, analytics, and AI workflows using Lakeflow Jobs. You will learn to build robust, production-ready pipelines with flexible scheduling, advanced orchestration, and best practices for reliability and efficiency, all natively integrated within the Databricks Data Intelligence Platform. Prior experience with Databricks, Python, and SQL is recommended.
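
As a rough illustration of the kind of automation the course works toward, the sketch below uses the Databricks Python SDK to define a simple scheduled job; the notebook path, job name, and cron expression are placeholders, workspace authentication is assumed to be configured, and compute settings are omitted for brevity.

```python
from databricks.sdk import WorkspaceClient
from databricks.sdk.service import jobs

w = WorkspaceClient()  # picks up credentials from the environment or a configured profile

job = w.jobs.create(
    name="nightly-bronze-ingest",  # illustrative job name
    tasks=[
        jobs.Task(
            task_key="ingest",
            notebook_task=jobs.NotebookTask(notebook_path="/Workspace/Shared/ingest"),
        )
    ],
    schedule=jobs.CronSchedule(
        quartz_cron_expression="0 0 2 * * ?",  # every day at 02:00
        timezone_id="UTC",
    ),
)
print(f"Created job {job.job_id}")
```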


3. Build Data Pipelines with Lakeflow Spark Declarative Pipelines 

This course introduces the essential concepts and skills needed to build data pipelines with Lakeflow Spark Declarative Pipelines (SDP) in Databricks, covering incremental batch and streaming ingestion and processing through multiple streaming tables and materialized views. Designed for data engineers new to Spark Declarative Pipelines, the course provides a comprehensive overview of core components such as incremental data processing, streaming tables, materialized views, and temporary views, highlighting their specific purposes and differences.


Topics covered include:

- Developing and debugging ETL pipelines with the multi-file editor in Spark Declarative Pipelines using SQL (with Python code examples provided)

- How Spark Declarative Pipelines track data dependencies in a pipeline through the pipeline graph

- Configuring pipeline compute resources, data assets, trigger modes, and other advanced options
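
To make the distinction between streaming tables and materialized views concrete, here is a minimal Python pipeline sketch using the `dlt` module; the paths and names are illustrative, and the course itself presents the equivalent SQL syntax.

```python
import dlt
from pyspark.sql.functions import col, sum as sum_

@dlt.table(comment="Streaming table: ingests new files incrementally")
def orders_bronze():
    return (
        spark.readStream.format("cloudFiles")
        .option("cloudFiles.format", "json")
        .load("/Volumes/main/demo/raw_orders")  # hypothetical source path
    )

@dlt.table(comment="Materialized view: recomputed from the bronze table as it changes")
def order_totals_by_customer():
    return (
        dlt.read("orders_bronze")               # batch read, so this becomes a materialized view
        .groupBy("customer_id")
        .agg(sum_(col("amount")).alias("total_amount"))
    )
```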


Next, the course introduces data quality expectations in Spark Declarative Pipelines, guiding you through the process of integrating expectations into pipelines to validate and enforce data integrity. You will then explore how to put a pipeline into production, including scheduling options and enabling pipeline event logging to monitor pipeline performance and health.
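
Expectations attach data quality rules to a dataset declaratively. A minimal sketch with the Python `dlt` decorators is shown below (rule names, predicates, and table names are illustrative); one rule drops violating rows, the other only records violations in pipeline metrics.

```python
import dlt

@dlt.table(comment="Cleaned orders with data quality rules enforced")
@dlt.expect_or_drop("valid_order_id", "order_id IS NOT NULL")  # violating rows are dropped
@dlt.expect("positive_amount", "amount > 0")                   # violations are only tracked in metrics
def orders_silver():
    return dlt.read_stream("orders_bronze")
```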


Finally, the course covers how to implement Change Data Capture (CDC) using the AUTO CDC INTO syntax within Spark Declarative Pipelines to manage slowly changing dimensions (SCD Type 1 and Type 2), preparing users to integrate CDC into their own pipelines.
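
The course teaches the SQL AUTO CDC INTO syntax; as a rough Python counterpart, the sketch below uses the `dlt.apply_changes` API, which expresses the same change data capture behavior (table and column names are illustrative).

```python
import dlt

dlt.create_streaming_table("customers_silver")

dlt.apply_changes(
    target="customers_silver",
    source="customers_cdc_bronze",  # stream of change records
    keys=["customer_id"],           # primary key used to match rows
    sequence_by="change_ts",        # orders changes so late-arriving data is applied correctly
    stored_as_scd_type=2,           # keep full history (SCD Type 2); use 1 to overwrite in place
)
```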


4. DevOps Essentials for Data Engineering

This course explores software engineering best practices and DevOps principles, specifically designed for data engineers working with Databricks. Participants will build a strong foundation in key topics such as code quality, version control, documentation, and testing. The course emphasizes DevOps, covering core components, benefits, and the role of continuous integration and delivery (CI/CD) in optimizing data engineering workflows.


You will learn how to apply modularity principles in PySpark to create reusable components and structure code efficiently. Hands-on experience includes designing and implementing unit tests for PySpark functions using the pytest framework, followed by integration testing for Databricks data pipelines with DLT and Workflows to ensure reliability.
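
A minimal example of the unit-testing pattern covered here might look like the following; the transformation function and values are hypothetical, and the test runs against a local SparkSession.

```python
# test_transforms.py -- run with `pytest`
import pytest
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

def add_order_total(df):
    """Transformation under test (hypothetical): order_total = quantity * unit_price."""
    return df.withColumn("order_total", F.col("quantity") * F.col("unit_price"))

@pytest.fixture(scope="session")
def spark():
    return SparkSession.builder.master("local[1]").appName("unit-tests").getOrCreate()

def test_add_order_total(spark):
    df = spark.createDataFrame([(2, 5.0)], ["quantity", "unit_price"])
    result = add_order_total(df).collect()[0]
    assert result["order_total"] == pytest.approx(10.0)
```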


The course also covers essential Git operations within Databricks, including using Databricks Git Folders to integrate continuous integration practices. Finally, you will take a high-level look at various deployment methods for Databricks assets, such as the REST API, CLI, SDK, and Databricks Asset Bundles (DABs), giving you the techniques needed to deploy and manage your pipelines.



Languages Available: English | 日本語 | Português BR | 한국어 | Español | Français

Skill Level: Associate
Duration: 16h
Prerequisites

1. Data Ingestion with Lakeflow Connect

Basic understanding of the Databricks Data Intelligence Platform, including Databricks Workspaces, Apache Spark, Delta Lake, the Medallion Architecture, and Unity Catalog.

Experience working with various file formats (e.g., Parquet, CSV, JSON, TXT).

Proficiency in SQL and Python.

Familiarity with running code in Databricks notebooks.


2. Deploy Workloads with Lakeflow Jobs

⇾ Beginner familiarity with basic cloud concepts (virtual machines, object storage, identity management)

⇾ Ability to perform basic code development tasks (create compute, run code in notebooks, use basic notebook operations, import repos from git, etc.)

⇾ Intermediate familiarity with basic SQL concepts (CREATE, SELECT, INSERT, UPDATE, DELETE, WHERE, GROUP BY, JOIN, etc.)


3. Build Data Pipelines with Lakeflow Spark Declarative Pipelines

⇾ Basic understanding of the Databricks Data Intelligence Platform, including Databricks Workspaces, Apache Spark, Delta Lake, the Medallion Architecture, and Unity Catalog.

⇾ Experience ingesting raw data into Delta tables, including using the read_files SQL function to load formats like CSV, JSON, TXT, and Parquet.

⇾ Proficiency in transforming data using SQL, including writing intermediate-level queries and a basic understanding of SQL joins.


4. DevOps Essentials for Data Engineering

⇾ Proficiency with the Databricks platform, including experience with Databricks Workspaces, Apache Spark, Delta Lake and the Medallion Architecture, Unity Catalog, Delta Live Tables, and Workflows. A basic understanding of Git version control is also required.

⇾ Experience ingesting and transforming data, with proficiency in PySpark for data processing and DataFrame manipulations. Additionally, candidates should have experience writing intermediate level SQL queries for data analysis and transformation.

⇾ Knowledge of Python programming, with proficiency in writing intermediate level Python code, including the ability to design and implement functions and classes. Users should also be skilled in creating, importing, and effectively utilizing Python packages.

Outline

1. Data Ingestion with Lakeflow Connect

Introduction to Data Engineering in Databricks

Cloud Storage Ingestion with Lakeflow Connect Standard Connectors

Enterprise Data Ingestion with Lakeflow Connect Managed Connectors

Ingestion Alternatives


2. Deploy Workloads with Lakeflow Jobs

⇾ Introduction to Data Engineering in Databricks

⇾ Lakeflow Jobs Core Concepts

⇾ Creating and Scheduling Jobs

⇾ Advanced Lakeflow Jobs Features


3. Build Data Pipelines with Lakeflow Spark Declarative Pipelines
Introduction to Data Engineering in Databricks
Lakeflow Spark Declarative Pipeline Fundamentals
Building Lakeflow Spark Declarative Pipelines

4. DevOps Essentials for Data Engineering

 Introduction to Software Engineering (SWE) Best Practices

 Introduction to Modularizing PySpark Code

 Demo: Modularizing PySpark Code (Required)

 Lab: Modularize PySpark Code

 DevOps Fundamentals

 The Role of CI/CD in DevOps

 Knowledge Check/Discussion

 Planning the Project

 Demo: Project Setup Exploration (Required)

 Introduction to Unit Tests for PySpark

 Demo: Creating and Executing Unit Tests

 Lab: Create and Execute Unit Tests

 Executing Integration Tests with DLT and Workflows

 Demo: Performing Integration Tests with DLT and Workflows

 Version Control with Git Overview

 Lab: Version Control with Databricks Git Folders and GitHub

 Deploying Databricks Assets Overview

 Demo: Deploying the Databricks Project

Upcoming Public Classes

Date | Time | Language | Price
May 19 - 20 | 08 AM - 04 PM (Asia/Kolkata) | English | $1500.00
May 26 - 27 | 09 AM - 05 PM (Australia/Sydney) | English | $1500.00
May 26 - 27 | 09 AM - 05 PM (America/Los_Angeles) | English | $1500.00
Jun 02 - 03 | 08 AM - 04 PM (Asia/Kolkata) | English | $1500.00
Jun 02 - 03 | 09 AM - 05 PM (Europe/Paris) | English | $1500.00
Jun 02 - 03 | 09 AM - 05 PM (America/New_York) | English | $1500.00
Jun 23 - 24 | 08 AM - 04 PM (Asia/Kolkata) | English | $1500.00
Jun 23 - 24 | 09 AM - 05 PM (America/Denver) | English | $1500.00
Jul 07 - 08 | 08 AM - 04 PM (Asia/Kolkata) | English | $1500.00
Jul 07 - 08 | 09 AM - 05 PM (Europe/Paris) | English | $1500.00
Jul 07 - 08 | 09 AM - 05 PM (America/New_York) | English | $1500.00
Jul 22 - 23 | 08 AM - 04 PM (Asia/Kolkata) | English | $1500.00
Jul 28 - 29 | 09 AM - 05 PM (Australia/Sydney) | English | $1500.00

Public Class Registration

If your company has purchased success credits or has a learning subscription, please fill out the Training Request form. Otherwise, you can register below.

Private Class Request

If your company is interested in private training, please submit a request.


Registration options

Databricks has a delivery method for wherever you are on your learning journey


Self-Paced

Custom-fit learning paths for data, analytics, and AI roles, delivered through on-demand videos

Register now


Instructor-Led

Public and private courses taught by expert instructors across half-day to two-day courses

Register now


Blended Learning

Self-paced and weekly instructor-led sessions for every style of learner to optimize course completion and knowledge retention. Go to the Subscriptions Catalog tab to purchase.

Purchase now


Skills@Scale

Comprehensive training offering for large-scale customers that includes learning elements for every style of learner. Inquire with your account executive for details.


Machine Learning Practitioner

Advanced Machine Learning with Databricks

This course is aimed at data scientists and machine learning practitioners and consists of two four-hour modules.

Machine Learning at Scale

In this course, you will gain theoretical and practical knowledge of Apache Spark’s architecture and its application to machine learning workloads within Databricks. You will learn when to use Spark for data preparation, model training, and deployment, while gaining hands-on experience with Spark ML and the pandas API on Spark. The course also introduces advanced concepts such as hyperparameter tuning and scaling Optuna with Spark, and it builds on features and concepts introduced in the associate-level course, such as MLflow and Unity Catalog, for comprehensive model packaging and governance.
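
As a small taste of the Spark ML API used in this module, the sketch below assembles features and fits a logistic regression model on a toy DataFrame; all column names and values are illustrative.

```python
from pyspark.sql import SparkSession
from pyspark.ml import Pipeline
from pyspark.ml.feature import VectorAssembler
from pyspark.ml.classification import LogisticRegression

spark = SparkSession.builder.getOrCreate()

# Toy training data; in practice this would come from a Delta table.
train_df = spark.createDataFrame(
    [(1.0, 2.0, 0), (2.0, 0.5, 1), (0.3, 3.0, 0), (4.0, 1.0, 1)],
    ["f1", "f2", "label"],
)

pipeline = Pipeline(stages=[
    VectorAssembler(inputCols=["f1", "f2"], outputCol="features"),
    LogisticRegression(featuresCol="features", labelCol="label"),
])
model = pipeline.fit(train_df)
```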

Advanced Machine Learning Operations

This course provides a comprehensive understanding of the machine learning lifecycle and MLOps, emphasizing best practices for data and model management, testing, and scalable architectures. It covers key MLOps components, including CI/CD, pipeline management, and environment separation, while showcasing Databricks’ tools for automation and infrastructure management, such as Databricks Asset Bundles (DABs), Workflows, and Mosaic AI Model Serving. You will learn about monitoring, custom metrics, drift detection, model rollout strategies, A/B testing, and the principles of reliable MLOps systems, providing a holistic view of implementing and managing ML projects in Databricks.

Paid | 8h | Lab | Instructor-led | Professional
Data Engineer

Advanced Data Engineering with Databricks

This course serves as an appropriate entry point to learn Advanced Data Engineering with Databricks. 

Note: Databricks Academy is transitioning to a notebook-based format for classroom sessions within the Databricks environment, discontinuing the use of slide decks for lectures in the first module. You can access the lecture notebooks in the Vocareum lab environment.

Below, we describe each of the four four-hour modules included in this course.

Advanced Techniques with Spark Declarative Pipelines

This course explores Databricks' Lakeflow Spark Declarative Pipelines (SDP) for building production-grade streaming pipelines. You will learn advanced design patterns, robust data quality enforcement, and cross-platform integration essential for real-world lakehouse engineering.

Throughout the course, you will dive into modern data ingestion and processing techniques, mastering tools like Liquid Clustering for layout optimization and the Multiplex Streaming pattern for mixed-schema events. By the end of the modules, you will know how to confidently handle schema evolution, automate Change Data Capture (CDC), and ensure data integrity.

Through lectures and hands-on demos, you will:

• Build multi-flow pipelines to ingest multi-source data into a unified Bronze table.

• Apply Liquid Clustering and Data Quality Expectations across Silver and Gold layers.

• Implement the Multiplex pattern with Iceberg UniForm for cross-platform data access.

• Automate SCD Type 2 history tracking using AUTO CDC INTO.

• Design zero-data-loss quarantine pipelines to audit and manage invalid records.
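
As one way to picture the zero-data-loss quarantine pattern from the last bullet, the Python sketch below routes rows that fail the quality rules into a separate table instead of discarding them; rule names, predicates, and table names are illustrative.

```python
import dlt

RULES = {"valid_id": "id IS NOT NULL", "non_negative_amount": "amount >= 0"}
QUARANTINE_PREDICATE = " OR ".join(f"NOT ({rule})" for rule in RULES.values())

@dlt.table(comment="Rows that satisfy every rule")
@dlt.expect_all_or_drop(RULES)
def events_silver():
    return dlt.read_stream("events_bronze")

@dlt.table(comment="Rows that violate at least one rule, kept for audit so nothing is lost")
@dlt.expect_or_drop("is_quarantined", QUARANTINE_PREDICATE)
def events_quarantine():
    return dlt.read_stream("events_bronze")
```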

Databricks Data Privacy

This content is intended for data engineers, whether customers, partners, or employees, who perform data engineering tasks with Databricks. It aims to provide the knowledge and skills needed to execute these activities effectively on the Databricks platform.

Databricks Performance Optimization

In this course, you’ll learn how to optimize workloads and physical layout with Spark and Delta Lake, and how to analyze the Spark UI to assess performance and debug applications. We’ll cover topics like streaming, liquid clustering, data skipping, caching, Photon, and more.
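
For a flavor of the layout-optimization topics, here is a tiny sketch (table and column names are illustrative, and `spark` is assumed to be a Databricks notebook session) that enables liquid clustering on a Delta table and then reclusters its files.

```python
# Enable liquid clustering on an existing Delta table.
spark.sql("ALTER TABLE main.demo.events CLUSTER BY (event_date, user_id)")

# Rewrite files so data is physically clustered by the chosen keys, improving data skipping.
spark.sql("OPTIMIZE main.demo.events")
```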

Automated Deployment with Databricks Asset Bundles

This course provides a comprehensive review of DevOps principles and their application to Databricks projects. It begins with an overview of core DevOps concepts, DataOps, continuous integration (CI), continuous deployment (CD), and testing, and explores how these principles can be applied to data engineering pipelines.

The course then focuses on continuous deployment within the CI/CD process, examining tools like the Databricks REST API, SDK, and CLI for project deployment. You will learn about Databricks Asset Bundles (DABs) and how they fit into the CI/CD process. You’ll dive into their key components and folder structure, and see how they streamline deployment across various target environments in Databricks. You will also learn how to add variables, modify, validate, deploy, and execute bundles for multiple environments with different configurations using the Databricks CLI.

Finally, the course introduces Visual Studio Code as an integrated development environment (IDE) for building, testing, and deploying Databricks Asset Bundles locally, optimizing your development process. The course concludes with an introduction to automating deployment pipelines using GitHub Actions to enhance the CI/CD workflow with Databricks Asset Bundles.

By the end of this course, you will be equipped to automate Databricks project deployments with Databricks Asset Bundles, improving efficiency through DevOps practices.

Languages Available: English | 日本語 | Português BR | 한국어

Paid | 16h | Lab | Instructor-led | Professional

Questions?

If you have any questions, please refer to our Frequently Asked Questions page.