
Feature Engineering at Scale

In this course, you will gain a comprehensive understanding of how to design, scale, and operationalize end-to-end feature engineering pipelines on the Databricks platform. The curriculum is structured across three progressive modules: mastering the fundamentals of Spark’s distributed execution and optimization, implementing scalable data ingestion with Auto Loader and declarative Lakeflow pipelines, and advancing to production-grade MLOps with the Databricks Feature Store.


You will engage in hands-on learning experiences such as debugging Spark performance with the Catalyst Optimizer and Spark UI, building robust Bronze-Silver-Gold medallion architectures with automated quality checks, and implementing scalable feature transformations using SparkML. The course culminates in deploying real-time feature serving through Online Feature Stores, defining FeatureSpecs with on-demand transformations, and applying governance and lineage tracking with Unity Catalog.
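For orientation, here is a minimal PySpark sketch of the kind of scalable feature transformation the SparkML module covers; the table name and columns (`raw_features`, `amount`, `quantity`, `country`) are hypothetical, and the snippet assumes a Databricks notebook where `spark` is already defined.

```python
# Minimal sketch: assemble and scale numeric + categorical inputs with SparkML.
# Table and column names below are hypothetical placeholders.
from pyspark.ml import Pipeline
from pyspark.ml.feature import StringIndexer, OneHotEncoder, VectorAssembler, StandardScaler

df = spark.read.table("raw_features")  # `spark` is predefined in Databricks notebooks

indexer = StringIndexer(inputCol="country", outputCol="country_idx", handleInvalid="keep")
encoder = OneHotEncoder(inputCols=["country_idx"], outputCols=["country_ohe"])
assembler = VectorAssembler(
    inputCols=["amount", "quantity", "country_ohe"], outputCol="features_raw"
)
scaler = StandardScaler(inputCol="features_raw", outputCol="features")  # scale to unit std dev

pipeline = Pipeline(stages=[indexer, encoder, assembler, scaler])
features_df = pipeline.fit(df).transform(df)  # fit and apply the transformations as distributed jobs
```

A fitted pipeline along these lines can then be used to materialize the feature tables that the Feature Store modules register, govern, and serve.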

Skill Level
Professional
Duration
3h
Prerequisites

The content was developed for participants with the following skills, knowledge, and abilities:

1. Completion of the “Introduction to Apache Spark” course or equivalent foundational knowledge of Spark, including basic data transformations and Spark SQL.

   * Learners should be comfortable with Spark’s role in distributed data processing. This course will build on that foundation to explain how Spark enables scalable machine learning workflows.

2. Intermediate-level proficiency in Python programming, particularly for data manipulation using libraries such as `pandas`, `numpy`, or `scikit-learn`.

3. Intermediate understanding of traditional machine learning workflows, including model training, evaluation, and hyperparameter tuning.

4. Familiarity with the Databricks platform and workflows.

   * Learners are strongly encouraged to complete the Databricks Machine Learning Associate course prior to this course. This course assumes knowledge of ML development using the Databricks environment.


Registration options

Databricks has a delivery method for wherever you are on your learning journey


Self-Paced

Custom-fit learning paths for data, analytics, and AI roles and careers, delivered through on-demand videos

Register now


Instructor-Led

Public and private classes taught by expert instructors, ranging from half-day to two-day courses

Register now


Blended Learning

Self-paced and weekly instructor-led sessions for every style of learner to optimize course completion and knowledge retention. Go to the Subscriptions Catalog tab to purchase

Purchase now


Skills@Scale

Comprehensive training offering for large-scale customers that includes learning elements for every learning style. Inquire with your account executive for details

Upcoming Public Classes

Data Engineer

Build Data Pipelines with Lakeflow Declarative Pipelines

This course introduces the essential concepts and skills needed to build data pipelines using Lakeflow Declarative Pipelines in Databricks, covering incremental batch and streaming ingestion and processing through multiple streaming tables and materialized views. Designed for data engineers new to Lakeflow Declarative Pipelines, the course provides a comprehensive overview of core components such as incremental data processing, streaming tables, materialized views, and temporary views, highlighting their specific purposes and differences.
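As a rough preview, here is a minimal sketch of a streaming table fed by Auto Loader with a downstream materialized view, written with the `dlt` Python module; the course itself works primarily in SQL, and the source path, table names, and columns below are hypothetical.

```python
# Minimal sketch of a declarative pipeline: a streaming table ingested with
# Auto Loader and a materialized view computed from it. Names and paths are hypothetical.
import dlt
from pyspark.sql import functions as F

@dlt.table(comment="Bronze: raw orders ingested incrementally via Auto Loader")
def orders_bronze():
    return (
        spark.readStream.format("cloudFiles")     # Auto Loader source
        .option("cloudFiles.format", "json")
        .load("/Volumes/demo/raw/orders/")        # hypothetical landing path
    )

@dlt.table(comment="Daily order totals, maintained as a materialized view")
def daily_order_totals():
    return (
        dlt.read("orders_bronze")                 # this dependency appears in the pipeline graph
        .groupBy("order_date")
        .agg(F.sum("amount").alias("total_amount"))
    )
```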

Topics covered include:

- Developing and debugging ETL pipelines with the multi-file editor in Lakeflow using SQL (with Python code examples provided)

- How Lakeflow Declarative Pipelines track data dependencies in a pipeline through the pipeline graph

- Configuring pipeline compute resources, data assets, trigger modes, and other advanced options

Next, the course introduces data quality expectations in Lakeflow, guiding users through the process of integrating expectations into pipelines to validate and enforce data integrity. Learners will then explore how to put a pipeline into production, including scheduling options, and enabling pipeline event logging to monitor pipeline performance and health.
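To make that concrete, here is a minimal sketch of expectations using the `dlt` Python decorators; in SQL the same rules are expressed with `CONSTRAINT ... EXPECT` clauses. The table, column, and rule names are hypothetical.

```python
# Minimal sketch: attach data quality expectations to a pipeline table.
# Violations of @dlt.expect are only recorded in pipeline metrics;
# violations of @dlt.expect_or_drop remove the offending rows.
import dlt

@dlt.table(comment="Silver: validated orders")
@dlt.expect("non_negative_amount", "amount >= 0")
@dlt.expect_or_drop("valid_order_id", "order_id IS NOT NULL")
def orders_silver():
    return dlt.read_stream("orders_bronze").select("order_id", "order_date", "amount")
```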

Finally, the course covers how to implement Change Data Capture (CDC) using the AUTO CDC INTO syntax within Lakeflow Declarative Pipelines to manage slowly changing dimensions (SCD Type 1 and Type 2), preparing users to integrate CDC into their own pipelines.
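As an illustration, the sketch below uses `dlt.apply_changes`, the Python counterpart of this CDC functionality, while the course itself teaches the `AUTO CDC INTO` SQL syntax; the source, target, keys, and sequencing column are hypothetical.

```python
# Minimal sketch: apply a CDC feed to a target table as SCD Type 2 history.
# All names below are hypothetical placeholders.
import dlt

dlt.create_streaming_table("customers_silver")  # target table that receives the applied changes

dlt.apply_changes(
    target="customers_silver",
    source="customers_cdc_feed",     # streaming table/view of raw CDC events
    keys=["customer_id"],            # primary key used to match rows
    sequence_by="event_timestamp",   # resolves late or out-of-order events
    stored_as_scd_type=2,            # 2 = keep full history; 1 = overwrite in place
)
```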

Note: Databricks Academy is transitioning from video lectures to a more streamlined PDF format with slides and notes for all self-paced courses. Please note that demo videos will still be available in their original format. We would love to hear your thoughts on this change, so please share your feedback through the course survey at the end. Thank you for being a part of our learning community!

Languages Available: English | 日本語 | Português BR | 한국어

Free
2h
Associate

Questions?

If you have any questions, please refer to our Frequently Asked Questions page.