Advanced Data Engineering with Databricks

Description
In this course, students will build upon their existing knowledge of Apache Spark, Structured Streaming and Delta Lake to unlock the full potential of the data lakehouse by utilizing the suite of tools provided by Databricks. This course places a heavy emphasis on designs favoring incremental data processing, enabling systems optimized to continuously ingest and analyze ever-growing data. By designing workloads that leverage built-in platform optimizations, data engineers can reduce the burden of code maintenance and on-call emergencies, and quickly adapt production code to new demands with minimal refactoring or downtime.
The topics in this course should be mastered prior to attempting the Databricks Certified Data Engineer Professional exam.
Duration
2 full days or 4 half days
Objectives
- Design databases and pipelines optimized for the Databricks Lakehouse Platform
- Implement efficient incremental data processing to validate and enrich data driving business decisions and applications
- Leverage Databricks-native features for managing access to sensitive data and fulfilling right-to-be-forgotten requests
- Troubleshoot errors, promote code, orchestrate tasks, and monitor production jobs using Databricks tools
Prerequisites
These are hard prerequisites for our partners - please do not register for this class unless you meet all of the requirements:
- Experience using PySpark APIs to perform advanced data transformations
- Familiarity implementing classes with Python
- Experience using SQL in production data warehouse or data lake implementations
- Experience working in Databricks notebooks and configuring clusters
- Familiarity with creating and manipulating data in Delta Lake tables with SQL
- Ability to use Spark Structured Streaming to incrementally read from a Delta table (see the sketch after this list)
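For reference, the level expected by the last two prerequisites is roughly captured by the following minimal sketch, assuming a Databricks runtime where `spark` is the active SparkSession; the table names (`sales_raw`, `sales_clean`) and the checkpoint path are hypothetical:

```python
# Create and populate a Delta table with SQL.
spark.sql("CREATE TABLE IF NOT EXISTS sales_raw (id BIGINT, amount DOUBLE) USING DELTA")
spark.sql("INSERT INTO sales_raw VALUES (1, 19.99), (2, 45.50)")

# Incrementally read new commits from the Delta table with Structured Streaming
# and append them to a downstream table.
query = (spark.readStream
    .table("sales_raw")
    .writeStream
    .option("checkpointLocation", "/tmp/checkpoints/sales_raw")  # hypothetical path
    .trigger(availableNow=True)  # process all available data, then stop
    .toTable("sales_clean"))
query.awaitTermination()
```

If this snippet reads naturally to you, you meet the Delta Lake and Structured Streaming prerequisites.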
Outline
Day 1
- The Lakehouse Architecture
- Optimizing Data Storage
- Understanding Delta Lake Transactions
- Delta Lake Isolation with Optimistic Concurrency
- Streaming Design Patterns
- Clone for Development and Data Backup
- Auto Loader and Bronze Ingestion Patterns (see the sketch after the Day 1 outline)
- Streaming Deduplication and Quality Enforcement
- Slowly Changing Dimensions
- Streaming Joins and Statefulness
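The Auto Loader module above centers on the `cloudFiles` streaming source. As a preview, here is a minimal sketch of the bronze ingestion pattern it covers; the landing, schema, and checkpoint paths and the table name `bronze_events` are hypothetical:

```python
# Incrementally ingest newly arriving JSON files into a bronze Delta table
# with Auto Loader (the cloudFiles source).
(spark.readStream
    .format("cloudFiles")
    .option("cloudFiles.format", "json")
    .option("cloudFiles.schemaLocation", "/tmp/schemas/bronze_events")  # hypothetical
    .load("/mnt/landing/events")  # hypothetical landing path
    .writeStream
    .option("checkpointLocation", "/tmp/checkpoints/bronze_events")  # hypothetical
    .trigger(availableNow=True)
    .toTable("bronze_events"))
```

Auto Loader tracks which files it has already processed, so reruns pick up only new arrivals.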
Day 2
- Stored and Materialized Views
- Storing Data Securely
- Granting Privileged Access to PII
- Deleting Data in the Lakehouse
- Orchestration and Scheduling with Multitask Jobs
- Monitoring, Logging, and Handling Errors
- Promoting Code with Databricks Repos
- Programmatic Platform Interactions (Databricks CLI and REST API; see the sketch below)
- Managing Costs and Latency with Streaming Workloads
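As a flavor of the programmatic interactions covered in the final modules, here is a minimal sketch that lists the jobs in a workspace through the Jobs 2.1 REST API. It assumes a workspace URL and personal access token are exposed through the `DATABRICKS_HOST` and `DATABRICKS_TOKEN` environment variables (the same variables the Databricks CLI reads):

```python
import os
import requests

host = os.environ["DATABRICKS_HOST"]    # e.g. https://<workspace>.cloud.databricks.com
token = os.environ["DATABRICKS_TOKEN"]  # a personal access token

# List the jobs defined in the workspace via the Jobs 2.1 REST API.
resp = requests.get(
    f"{host}/api/2.1/jobs/list",
    headers={"Authorization": f"Bearer {token}"},
)
resp.raise_for_status()
for job in resp.json().get("jobs", []):
    print(job["job_id"], job["settings"]["name"])
```

The equivalent CLI invocation is `databricks jobs list`.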