
Feature Engineering at Scale

In this course, you will gain a comprehensive understanding of how to design, scale, and operationalize end-to-end feature engineering pipelines on the Databricks platform. The curriculum is structured across three progressive modules: mastering the fundamentals of Spark’s distributed execution and optimization, implementing scalable data ingestion with Auto Loader and declarative Lakeflow pipelines, and advancing to production-grade MLOps with the Databricks Feature Store.


You will engage in hands-on learning experiences such as debugging Spark performance with the Catalyst Optimizer and Spark UI, building robust Bronze-Silver-Gold medallion architectures with automated quality checks, and implementing scalable feature transformations using SparkML. The course culminates in deploying real-time feature serving through Online Feature Stores, defining FeatureSpecs with on-demand transformations, and applying governance and lineage tracking with Unity Catalog.
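The medallion ingestion pattern described above can be sketched as a declarative Lakeflow (Delta Live Tables) pipeline. This is a hypothetical illustration, not course material: the source path, table names, and quality expectation are invented, and the code runs only inside a Databricks pipeline (where `dlt` and `spark` are provided), not as a standalone script.

```python
import dlt  # available only inside a Databricks Lakeflow/DLT pipeline

# Hypothetical source path -- adjust to your workspace.
RAW_PATH = "/Volumes/main/default/raw_events"

@dlt.table(comment="Bronze: raw events ingested incrementally with Auto Loader")
def bronze_events():
    # Auto Loader ("cloudFiles") discovers and ingests new files incrementally.
    return (
        spark.readStream.format("cloudFiles")
        .option("cloudFiles.format", "json")
        .load(RAW_PATH)
    )

@dlt.table(comment="Silver: cleaned events with an automated quality check")
@dlt.expect_or_drop("valid_user", "user_id IS NOT NULL")  # rows failing the check are dropped
def silver_events():
    return dlt.read_stream("bronze_events").select("user_id", "event_type", "ts")
```

A Gold table would follow the same pattern, aggregating Silver data into features ready for the Feature Store.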

Skill Level
Professional
Duration
3h
Prerequisites

The content was developed for participants with the following skills, knowledge, and abilities:

1. Completed the “Introduction to Apache Spark” course or possess equivalent foundational knowledge of Spark, including basic data transformations and Spark SQL.

   * Learners should be comfortable with Spark’s role in distributed data processing. This course will build on that foundation to explain how Spark enables scalable machine learning workflows.

2. Intermediate-level proficiency in Python programming, particularly for data manipulation using libraries such as `pandas`, `numpy`, or `scikit-learn`.

3. Intermediate understanding of traditional machine learning workflows, including model training, evaluation, and hyperparameter tuning.

4. Familiarity with the Databricks platform and workflows.

   * Learners are strongly encouraged to complete the Databricks Machine Learning Associate course first, as this course assumes knowledge of ML development in the Databricks environment.


Registration options

Databricks has a delivery method for wherever you are on your learning journey.


Self-Paced

Custom-fit learning paths for data, analytics, and AI roles, delivered through on-demand videos

Register now


Instructor-Led

Public and private half-day to two-day courses taught by expert instructors

Register now


Blended Learning

Self-paced content plus weekly instructor-led sessions for every style of learner, to optimize course completion and knowledge retention. Go to the Subscriptions Catalog tab to purchase

Purchase now


Skills@Scale

Comprehensive training offering for large-scale customers that includes learning elements for every style of learning. Inquire with your account executive for details

Upcoming Public Classes

Data Engineer

Automated Deployment with Databricks Asset Bundles

This course provides a comprehensive review of DevOps principles and their application to Databricks projects. It begins with an overview of core DevOps, DataOps, continuous integration (CI), continuous deployment (CD), and testing, and explores how these principles can be applied to data engineering pipelines.

The course then focuses on continuous deployment within the CI/CD process, examining tools like the Databricks REST API, SDK, and CLI for project deployment. You will learn about Databricks Asset Bundles (DABs) and how they fit into the CI/CD process. You’ll dive into their key components, folder structure, and how they streamline deployment across various target environments in Databricks. You will also learn how to add variables, modify, validate, deploy, and execute Databricks Asset Bundles for multiple environments with different configurations using the Databricks CLI.
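The bundle layout described above might look like the following hypothetical `databricks.yml` sketch; the bundle name, variable, job, notebook path, and targets are illustrative, not taken from the course:

```yaml
# Hypothetical databricks.yml -- all names and paths are illustrative.
bundle:
  name: my_project

variables:
  catalog:
    description: Target catalog for output tables
    default: dev

resources:
  jobs:
    etl_job:
      name: etl-job-${bundle.target}
      tasks:
        - task_key: run_etl
          notebook_task:
            notebook_path: ./src/etl_notebook.py

targets:
  dev:
    mode: development
    default: true
  prod:
    mode: production
    variables:
      catalog: prod
```

With the Databricks CLI, such a bundle is validated, deployed, and run per target, e.g. `databricks bundle validate`, `databricks bundle deploy -t prod`, and `databricks bundle run etl_job -t prod`.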

Finally, the course introduces Visual Studio Code as an Interactive Development Environment (IDE) for building, testing, and deploying Databricks Asset Bundles locally, optimizing your development process. The course concludes with an introduction to automating deployment pipelines using GitHub Actions to enhance the CI/CD workflow with Databricks Asset Bundles.
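As one possible shape for that automation, a minimal GitHub Actions workflow that deploys a bundle on every push to `main` might look like this; the workflow name, secrets, and target are assumptions, not course content:

```yaml
# Hypothetical .github/workflows/deploy.yml -- names and secrets are illustrative.
name: deploy-bundle
on:
  push:
    branches: [main]

jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: databricks/setup-cli@main   # installs the Databricks CLI
      - name: Deploy bundle to prod
        run: databricks bundle deploy -t prod
        env:
          DATABRICKS_HOST: ${{ secrets.DATABRICKS_HOST }}
          DATABRICKS_TOKEN: ${{ secrets.DATABRICKS_TOKEN }}
```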

By the end of this course, you will be equipped to automate Databricks project deployments with Databricks Asset Bundles, improving efficiency through DevOps practices.

Note: 

1. Databricks Academy is transitioning from video lectures to a more streamlined PDF format with slides and notes for all self-paced courses. Please note that demo videos will still be available in their original format. We would love to hear your thoughts on this change, so please share your feedback through the course survey at the end. Thank you for being a part of our learning community!

2. This course is the fourth in the 'Advanced Data Engineering with Databricks' series.

Registration
Paid & Subscription
Duration
3h
Format
Lab
Skill Level
Professional

Questions?

If you have any questions, please refer to our Frequently Asked Questions page.