
Advanced Data Engineering with Databricks

Languages Available: English | 日本語 | Português BR


In this course, students will build upon their existing knowledge of Apache Spark, Structured Streaming, and Delta Lake to unlock the full potential of the data lakehouse by utilizing the suite of tools provided by Databricks. This course places a heavy emphasis on designs favoring incremental data processing, enabling systems optimized to continuously ingest and analyze ever-growing data. By designing workloads that leverage built-in platform optimizations, data engineers can reduce the burden of code maintenance and on-call emergencies, and quickly adapt production code to new demands with minimal refactoring or downtime. 

 

The topics in this course should be mastered prior to attempting the Databricks Certified Data Engineer Professional exam. 

Skill Level: Professional
Duration: 16 hours
Prerequisites
  • Experience using PySpark APIs to perform advanced data transformations
  • Familiarity with implementing classes in Python
  • Experience using SQL in production data warehouse or data lake implementations
  • Experience working in Databricks notebooks and configuring clusters
  • Familiarity with creating and manipulating data in Delta Lake tables with SQL


The prerequisites listed above can be learned by taking the Data Engineering with Databricks and Apache Spark Programming with Databricks instructor-led courses, which can be taken in either order. They can be validated by passing the Databricks Certified Data Engineer Associate and Databricks Certified Associate Developer for Apache Spark certification exams.

Outline

Day 1

  • The Lakehouse Architecture
  • Optimizing Data Storage
  • Understanding Delta Lake Transactions
  • Delta Lake Isolation with Optimistic Concurrency
  • Streaming Design Patterns
  • Clone for Development and Data Backup
  • Auto Loader and Bronze Ingestion Patterns
  • Streaming Deduplication and Quality Enforcement
  • Slowly Changing Dimensions
  • Streaming Joins and Statefulness
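
The "Delta Lake Isolation with Optimistic Concurrency" topic above rests on one idea: writers proceed without locks against a pinned snapshot, and the commit is validated only at write time, failing if another transaction committed first. A minimal pure-Python sketch of that check-at-commit pattern (the `ToyTable` class and its methods are illustrative stand-ins, not Delta Lake APIs):

```python
class ConcurrentWriteError(Exception):
    """Raised when another writer committed first (analogous to Delta's concurrent-write exceptions)."""

class ToyTable:
    # A toy versioned table: each commit appends a new immutable version,
    # loosely mirroring the Delta transaction log's ordered commit sequence.
    def __init__(self, rows):
        self.versions = [list(rows)]

    @property
    def version(self):
        return len(self.versions) - 1

    def snapshot(self):
        # A reader/writer pins a snapshot version before doing any work.
        return self.version, list(self.versions[-1])

    def commit(self, read_version, new_rows):
        # Optimistic check: valid only if nobody committed after our snapshot.
        if read_version != self.version:
            raise ConcurrentWriteError(
                f"table advanced to v{self.version}, snapshot was v{read_version}"
            )
        self.versions.append(new_rows)
        return self.version

def append_with_retry(table, extra_rows, max_retries=3):
    # On conflict, re-read the latest snapshot and retry, as Delta clients do.
    for _ in range(max_retries):
        read_version, rows = table.snapshot()
        try:
            return table.commit(read_version, rows + extra_rows)
        except ConcurrentWriteError:
            continue
    raise ConcurrentWriteError("gave up after retries")
```

If two writers snapshot the same version, the first `commit` succeeds, the second raises `ConcurrentWriteError`, and the retry loop resolves it by rebasing on the new snapshot.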


Day 2

  • Stored and Materialized Views
  • Storing Data Securely
  • Granting Privileged Access to PII
  • Deleting Data in the Lakehouse
  • Orchestration and Scheduling with Multi-Task Jobs
  • Monitoring, Logging, and Handling Errors
  • Promoting Code with Databricks Repos
  • Programmatic Platform Interactions (Databricks CLI and REST API)
  • Managing Costs and Latency with Streaming Workloads
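
The "Programmatic Platform Interactions" topic above covers driving the workspace through the Databricks REST API rather than the UI. As one small sketch, triggering an existing job is a `POST` to the Jobs API `/api/2.1/jobs/run-now` endpoint with a bearer token; the snippet below only builds the request with the standard library (the host, token, and job ID are placeholders, and the call itself is not sent):

```python
import json
import urllib.request

def build_run_now_request(host, token, job_id):
    # Builds (but does not send) a Databricks Jobs API call that triggers
    # a run of an existing job. Host, token, and job_id are placeholders.
    body = json.dumps({"job_id": job_id}).encode("utf-8")
    return urllib.request.Request(
        url=f"{host}/api/2.1/jobs/run-now",
        data=body,
        method="POST",
        headers={
            "Authorization": f"Bearer {token}",
            "Content-Type": "application/json",
        },
    )

req = build_run_now_request("https://example.cloud.databricks.com", "dapi-REDACTED", 1234)
# urllib.request.urlopen(req) would actually submit the run; omitted here.
```

The Databricks CLI wraps these same endpoints, so the payload shape carries over between the two.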

Upcoming Public Classes

| Date        | Time                             | Language | Price    |
|-------------|----------------------------------|----------|----------|
| May 21 - 22 | 09 AM - 05 PM (Europe/Paris)     | English  | $1500.00 |
| May 27 - 30 | 01 PM - 05 PM (Australia/Sydney) | English  | $1500.00 |
| Jun 12 - 13 | 09 AM - 05 PM (Europe/Paris)     | English  | $1500.00 |
| Jun 19 - 20 | 09 AM - 05 PM (Europe/London)    | English  | $1500.00 |
| Jun 24 - 25 | 09 AM - 05 PM (America/Chicago)  | English  | $1500.00 |
| Jul 10 - 11 | 09 AM - 05 PM (Europe/Paris)     | English  | $1500.00 |
| Jul 15 - 16 | 09 AM - 05 PM (Europe/London)    | English  | $1500.00 |
| Jul 29 - 30 | 09 AM - 05 PM (Australia/Sydney) | English  | $1500.00 |
| Aug 05 - 06 | 09 AM - 05 PM (America/Chicago)  | English  | $1500.00 |
| Aug 19 - 20 | 09 AM - 05 PM (Australia/Sydney) | English  | $1500.00 |
| Aug 26 - 27 | 09 AM - 05 PM (Australia/Sydney) | English  | $1500.00 |

Public Class Registration

If your company has purchased success credits or has a learning subscription, please fill out the Training Request form. Otherwise, you can register below.

Private Class Request

If your company is interested in private training, please submit a request.


Registration options

Databricks has a delivery method for wherever you are on your learning journey.


Self-Paced

Custom-fit learning paths for data, analytics, and AI roles, delivered through on-demand videos



Instructor-Led

Public and private classes taught by expert instructors, ranging from half-day to two-day courses



Blended Learning

Self-paced and weekly instructor-led sessions for every style of learner, to optimize course completion and knowledge retention. Go to the Subscriptions Catalog tab to purchase.



Skills@Scale

Comprehensive training offering for large-scale customers that includes elements for every style of learning. Inquire with your account executive for details.

Upcoming Public Classes

Data Engineer

Data Pipelines with Delta Live Tables

In this course, you'll use Delta Live Tables with your choice of Spark SQL or Python to define and schedule pipelines that incrementally process new data from a variety of data sources into the Lakehouse.

Learning objectives

  • Describe how Delta Live Tables tracks data dependencies in data pipelines.
  • Configure and run data pipelines using the Delta Live Tables UI.
  • Use Python or Spark SQL to define data pipelines that ingest and process data through multiple tables in the lakehouse using Auto Loader and Delta Live Tables.
  • Use APPLY CHANGES INTO syntax to process Change Data Capture feeds.
  • Review event logs and data artifacts created by pipelines, and troubleshoot DLT syntax.

Prerequisites

  • Beginner familiarity with cloud computing concepts (virtual machines, object storage, etc.)
  • Ability to perform basic code development tasks using the Databricks Data Engineering & Data Science workspace (create clusters, run code in notebooks, use basic notebook operations, import repos from git, etc.)
  • Beginner programming experience with Delta Lake:
      • Use Delta Lake DDL to create tables, compact files, restore previous table versions, and perform garbage collection of tables in the Lakehouse.
      • Use CTAS to store data derived from a query in a Delta Lake table.
      • Use SQL to perform complete and incremental updates to existing tables.
  • Beginner programming experience with Python (syntax, conditions, loops, functions)
  • Beginner programming experience with Spark SQL or PySpark:
      • Extract data from a variety of file formats and data sources.
      • Apply a number of common transformations to clean data.
      • Reshape and manipulate complex data using advanced built-in functions.
  • Production experience working with data warehouses and data lakes.

Last course update: April 2023
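
The APPLY CHANGES INTO objective above amounts to replaying a Change Data Capture feed against a target table: events are ordered by a sequencing column, the latest event per key wins, upserts overwrite, and deletes remove. A pure-Python sketch of those semantics, not DLT syntax; the field names (`id`, `seq`, `op`) are illustrative:

```python
def apply_changes(target, cdc_events, key="id", sequence_by="seq"):
    # Replays a CDC feed into a keyed target table, keeping only the
    # latest event per key (ordered by the sequencing column), then
    # applying each surviving event as an upsert or a delete.
    latest = {}
    for event in cdc_events:
        k = event[key]
        if k not in latest or event[sequence_by] > latest[k][sequence_by]:
            latest[k] = event
    for k, event in latest.items():
        if event.get("op") == "DELETE":
            target.pop(k, None)
        else:
            target[k] = {f: v for f, v in event.items()
                         if f not in ("op", sequence_by)}
    return target

# Out-of-order upserts and a delete, keyed by "id":
target = {1: {"id": 1, "name": "old"}}
events = [
    {"id": 1, "name": "new",   "seq": 2, "op": "UPSERT"},
    {"id": 1, "name": "stale", "seq": 1, "op": "UPSERT"},
    {"id": 2, "name": "b",     "seq": 1, "op": "UPSERT"},
    {"id": 2,                  "seq": 3, "op": "DELETE"},
]
apply_changes(target, events)
```

The real APPLY CHANGES INTO statement does this incrementally inside a DLT pipeline, with the key and sequencing column declared in its KEYS and SEQUENCE BY clauses.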
Paid | 4h | Lab | Instructor-led | Associate
Career Workshop

March 20

Careers at Databricks

We're on a mission to help data teams solve the world's toughest problems. Will you join us?

Questions?

If you have any questions, please refer to our Frequently Asked Questions page.