
Machine Learning Model Development

This course provides a practical guide to developing traditional machine learning models on Databricks, emphasizing hands-on demonstrations and workflows using popular ML libraries. Participants explore key ML techniques, including regression and clustering, while leveraging the platform's capabilities. The course covers MLflow integration for model tracking, Databricks Feature Store for feature management, and Optuna for hyperparameter tuning. Participants also learn how to accelerate model training with Databricks AutoML. By the end of the course, learners will have the practical skills to develop, optimize, and deploy machine learning models efficiently in the Databricks environment.
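The hyperparameter tuning that Optuna automates boils down to a search loop over trial parameters. A minimal pure-Python sketch of that loop (the quadratic stand-in objective, parameter names, and search ranges below are illustrative assumptions, not course code):

```python
import random

# Hypothetical objective standing in for model validation loss:
# in a real workflow, this would train a model and return a metric.
def objective(learning_rate, n_estimators):
    return (learning_rate - 0.1) ** 2 + (n_estimators - 200) ** 2 / 1e4

def random_search(n_trials=50, seed=0):
    """Minimal random-search loop; Optuna adds adaptive sampling and pruning."""
    rng = random.Random(seed)
    best_params, best_loss = None, float("inf")
    for _ in range(n_trials):
        # Sample one trial's hyperparameters from the search space.
        params = {
            "learning_rate": rng.uniform(0.01, 0.3),
            "n_estimators": rng.randint(50, 400),
        }
        loss = objective(**params)
        if loss < best_loss:
            best_params, best_loss = params, loss
    return best_params, best_loss

best_params, best_loss = random_search()
print(best_params, best_loss)
```

Optuna replaces the random sampling here with smarter samplers (e.g., TPE) and adds trial pruning; in the course workflow, each trial's parameters and metrics would additionally be logged to MLflow.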


Note: 

  1. This is the second course in the 'Machine Learning with Databricks' series.
  2. Databricks Academy is transitioning from video lectures to a more streamlined PDF format with slides and notes for all self-paced courses. Please note that demo videos will still be available in their original format. We would love to hear your thoughts on this change, so please share your feedback through the course survey at the end. Thank you for being a part of our learning community!
Skill Level: Associate
Duration: 3h
Prerequisites

At a minimum, you should be familiar with the following before attempting to take this content:

• Familiarity with the Databricks Data Intelligence Platform and basic workspace operations (create clusters, run code in notebooks, use basic notebook operations, import repos from git)

• Intermediate programming experience with Python, including data manipulation libraries (pandas, numpy) and working with APIs (databricks-sdk, REST endpoints)

• Basic knowledge of MLflow for experiment tracking, model logging, model registry operations, and model versioning

• Understanding of machine learning fundamentals, including model training, evaluation, batch inference, and real-time deployment concepts

• Intermediate experience with Unity Catalog for data governance and model registry management

• Basic familiarity with Feature Engineering concepts, including feature tables, feature lookups, and offline vs online feature stores

• Understanding of Delta Lake operations (create tables, perform updates, optimize files, and liquid clustering) and data storage optimization techniques

• Basic knowledge of Apache Spark and PySpark for distributed data processing and User Defined Functions (UDFs)


Registration options

Databricks has a delivery method for wherever you are on your learning journey


Self-Paced

Custom-fit learning paths for data, analytics, and AI roles, delivered through on-demand videos



Instructor-Led

Public and private classes taught by expert instructors, ranging from half-day to two-day courses



Blended Learning

Self-paced content and weekly instructor-led sessions for every style of learner, designed to optimize course completion and knowledge retention. Go to the Subscriptions Catalog tab to purchase.



Skills@Scale

Comprehensive training offering for large-scale customers that includes elements for every learning style. Inquire with your account executive for details.

Upcoming Public Classes

Data Engineer

Automated Deployment with Declarative Automation Bundles

This course provides a comprehensive review of DevOps principles and their application to Databricks projects. It begins with an overview of core DevOps, DataOps, continuous integration (CI), continuous deployment (CD), and testing concepts, and explores how these principles apply to data engineering pipelines.

The course then focuses on continuous deployment within the CI/CD process, examining tools like the Databricks REST API, SDK, and CLI for project deployment. You will learn about Declarative Automation Bundles (DABs) and how they fit into the CI/CD process, diving into their key components, folder structure, and how they streamline deployment across target environments in Databricks. You will also learn how to add variables to bundles and how to modify, validate, deploy, and execute them for multiple environments with different configurations using the Databricks CLI.
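A bundle's folder structure centers on a `databricks.yml` configuration file at the project root. A minimal sketch with a variable and two target environments might look like this (the bundle name, job, notebook path, and catalog variable are illustrative assumptions, not course material):

```yaml
# databricks.yml — illustrative minimal bundle definition
bundle:
  name: my_etl_project          # hypothetical project name

variables:
  catalog:
    default: dev_catalog        # overridden per target below

resources:
  jobs:
    nightly_etl:
      name: nightly-etl-${bundle.target}
      tasks:
        - task_key: ingest
          notebook_task:
            notebook_path: ./src/ingest.py

targets:
  dev:
    default: true
  prod:
    variables:
      catalog: prod_catalog
```

With the Databricks CLI, `databricks bundle validate`, `databricks bundle deploy -t prod`, and `databricks bundle run nightly_etl` correspond to the validate, deploy, and execute steps described above, with `-t` selecting the target environment.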

Finally, the course introduces Visual Studio Code as an Integrated Development Environment (IDE) for building, testing, and deploying Declarative Automation Bundles locally, optimizing your development process. It concludes with an introduction to automating deployment pipelines using GitHub Actions to enhance the CI/CD workflow with Declarative Automation Bundles.
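A GitHub Actions pipeline for this typically checks out the repository, installs the Databricks CLI, and runs the bundle commands on each push. A minimal sketch (the workflow name, trigger, target name, and secret names are assumptions):

```yaml
# .github/workflows/deploy.yml — illustrative sketch
name: deploy-bundle
on:
  push:
    branches: [main]

jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: databricks/setup-cli@main   # installs the Databricks CLI
      - name: Validate and deploy bundle
        env:
          DATABRICKS_HOST: ${{ secrets.DATABRICKS_HOST }}
          DATABRICKS_TOKEN: ${{ secrets.DATABRICKS_TOKEN }}
        run: |
          databricks bundle validate
          databricks bundle deploy -t prod
```

Authentication details vary by workspace setup (token, OAuth, or service principal), so the `env` block here is only one of several options.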

By the end of this course, you will be equipped to automate Databricks project deployments with Declarative Automation Bundles, improving efficiency through DevOps practices.

Note: This course is the fourth in the 'Advanced Data Engineering with Databricks' series.

Access: Paid & Subscription
Duration: 3h
Format: Lab
Skill Level: Professional

Questions?

If you have any questions, please refer to our Frequently Asked Questions page.