
Get Started with Databricks for Data Warehousing

This course provides a comprehensive overview of Databricks' modern approach to data warehousing, highlighting how a data lakehouse architecture combines the strengths of traditional data warehouses with the flexibility and scalability of the cloud. You'll learn about the AI-driven features that enhance data transformation and analysis on the Databricks Data Intelligence Platform. Designed for data warehousing practitioners, this course equips you with the foundational knowledge needed to begin building and managing high-performance, AI-powered data warehouses on Databricks.

Skill Level
Onboarding
Duration
3h
Prerequisites

The content was developed for participants with the following skills and knowledge:

A basic understanding of data warehousing principles and topics such as database administration, SQL, data manipulation, and storage.


See all our registration options

Registration options

Databricks has a delivery method for wherever you are on your learning journey


Self-Paced

Custom-fit learning paths for data, analytics, and AI roles and career paths through on-demand videos

Register now


Instructor-Led

Public and private classes taught by expert instructors, ranging from half-day to two-day courses

Register now


Blended Learning

Self-paced content plus weekly instructor-led sessions for every style of learner, designed to optimize course completion and knowledge retention. Go to the Subscriptions Catalog tab to purchase.

Purchase now


Skills@Scale

Comprehensive training offering for large-scale customers that includes learning elements for every style of learning. Inquire with your account executive for details.

Upcoming Public Classes

Data Engineer

Advanced Techniques with Spark Declarative Pipelines

This course explores Databricks' Lakeflow Spark Declarative Pipelines (SDP) for building production-grade streaming pipelines. You will learn advanced design patterns, robust data quality enforcement, and cross-platform integration essential for real-world lakehouse engineering.

Throughout the course, you will dive into modern data ingestion and processing techniques, mastering tools like Liquid Clustering for layout optimization and the Multiplex Streaming pattern for mixed-schema events. By the end of the modules, you will know how to confidently handle schema evolution, automate Change Data Capture (CDC), and ensure data integrity.

Through lectures and hands-on demos, you will:

• Build multi-flow pipelines to ingest multi-source data into a unified Bronze table.

• Apply Liquid Clustering and Data Quality Expectations across Silver and Gold layers.

• Implement the Multiplex pattern with Iceberg UniForm for cross-platform data access.

• Automate SCD Type 2 history tracking using AUTO CDC INTO.

• Design zero-data-loss quarantine pipelines to audit and manage invalid records.
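
To give a flavor of what these patterns look like, here is a minimal sketch in Lakeflow Spark Declarative Pipelines SQL. It is illustrative only, not course material: the table and column names (`bronze_orders`, `customers_cdc_feed`, and so on) are hypothetical, and exact clause support may vary by Databricks Runtime version.

```sql
-- Silver layer: enforce a data quality expectation and use Liquid
-- Clustering for layout optimization. Rows failing the expectation are
-- dropped here; a quarantine flow could route them to an audit table
-- instead.
CREATE OR REFRESH STREAMING TABLE silver_orders
  (CONSTRAINT valid_order EXPECT (order_id IS NOT NULL) ON VIOLATION DROP ROW)
  CLUSTER BY (order_date)
AS SELECT * FROM STREAM(bronze_orders);

-- SCD Type 2 history tracking from a CDC feed using AUTO CDC INTO.
CREATE OR REFRESH STREAMING TABLE customers_history;

CREATE FLOW customers_cdc AS AUTO CDC INTO customers_history
FROM STREAM(customers_cdc_feed)
KEYS (customer_id)
SEQUENCE BY updated_at
STORED AS SCD TYPE 2;
```

The course covers these constructs, and the others listed above, in depth with hands-on labs.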

Note: 

1. This course is the first in the 'Advanced Data Engineering with Databricks' series.

2. For SCORM lecture files, please ensure that you close the SCORM window after completing the content. Do not click the ‘Next Lesson’ button, as doing so may prevent the SCORM module from being marked as complete.

Paid & Subscription
3h
Lab
Professional

Questions?

If you have any questions, please refer to our Frequently Asked Questions page.