Scalable Machine Learning with Apache Spark™

This course teaches you how to scale ML pipelines with Spark, including distributed training, hyperparameter tuning, and inference. You will build and tune ML models with SparkML while leveraging MLflow to track, version, and manage them. The course covers the latest ML features in Apache Spark, such as pandas UDFs, the pandas function API, and the pandas API on Spark, as well as the latest ML product offerings, such as Feature Store and AutoML.


Skill Level: Associate
Duration: 16h
Prerequisites:
  • Intermediate experience with Python
  • Experience building machine learning models
  • Beginner experience with the PySpark DataFrame API

Outline

Day 1

  • Spark / ML overview
  • Exploratory data analysis (EDA) and feature engineering with Spark
  • Linear regression with SparkML: transformers, estimators, pipelines, and evaluators
  • MLflow Tracking and Model Registry (a minimal end-to-end sketch follows this list)
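
To make the Day 1 topics concrete, here is a minimal sketch of a SparkML linear regression pipeline tracked with MLflow. It assumes a Databricks notebook where `spark` is predefined; the dataset path and the column names (bedrooms, bathrooms, price) are hypothetical placeholders, not taken from the course materials.

```python
# Minimal sketch: SparkML pipeline (transformer + estimator) with MLflow tracking.
# Hypothetical dataset path and column names.
import mlflow
import mlflow.spark
from pyspark.ml import Pipeline
from pyspark.ml.feature import VectorAssembler
from pyspark.ml.regression import LinearRegression
from pyspark.ml.evaluation import RegressionEvaluator

df = spark.read.parquet("/path/to/listings")              # hypothetical path
train_df, test_df = df.randomSplit([0.8, 0.2], seed=42)

# Transformer: assemble raw columns into a single feature vector
assembler = VectorAssembler(inputCols=["bedrooms", "bathrooms"], outputCol="features")
# Estimator: linear regression on the assembled features
lr = LinearRegression(featuresCol="features", labelCol="price")
pipeline = Pipeline(stages=[assembler, lr])

with mlflow.start_run(run_name="lr_baseline"):
    model = pipeline.fit(train_df)                        # fitted PipelineModel
    preds = model.transform(test_df)
    rmse = RegressionEvaluator(labelCol="price", metricName="rmse").evaluate(preds)
    mlflow.log_metric("rmse", rmse)                       # tracked in the MLflow run
    mlflow.spark.log_model(model, "model")                # loggable to the Model Registry
```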


Day 2

  • Tree-based models: hyperparameter tuning and parallelism
  • Hyperopt for distributed hyperparameter tuning
  • Databricks AutoML and Feature Store
  • Integrating third-party packages (distributed XGBoost)
  • Distributed inference of scikit-learn models with pandas UDFs (a minimal sketch follows this list)
  • Distributed training with the pandas function API
  • Pandas API on Spark for data manipulation
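
The distributed-inference item above follows a common pattern: broadcast a fitted scikit-learn model to the executors, then apply it batch by batch with a pandas UDF. A minimal sketch, again assuming a Databricks notebook with `spark` predefined and hypothetical feature columns:

```python
# Minimal sketch: distributed inference of a scikit-learn model via a pandas UDF.
import pandas as pd
from sklearn.ensemble import RandomForestRegressor
from pyspark.sql.functions import pandas_udf
from pyspark.sql.types import DoubleType

# Stand-in training step on the driver (hypothetical toy data)
X = pd.DataFrame({"x1": [1.0, 2.0, 3.0, 4.0], "x2": [4.0, 3.0, 2.0, 1.0]})
y = pd.Series([10.0, 20.0, 30.0, 40.0])
sk_model = RandomForestRegressor(n_estimators=10, random_state=0).fit(X, y)

# Broadcast the fitted model so each executor holds one copy
broadcast_model = spark.sparkContext.broadcast(sk_model)

@pandas_udf(DoubleType())
def predict_udf(x1: pd.Series, x2: pd.Series) -> pd.Series:
    # Each Arrow batch arrives as pandas Series; predict locally on the executor
    features = pd.DataFrame({"x1": x1, "x2": x2})
    return pd.Series(broadcast_model.value.predict(features))

spark.createDataFrame(X).withColumn("prediction", predict_udf("x1", "x2")).show()
```

Because the model is applied to Arrow-backed pandas batches on each worker, inference scales across the cluster without rewriting the scikit-learn model in SparkML.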

Public Class Registration

If your company has purchased success credits or has a learning subscription, please fill out the Training Request form. Otherwise, you can register below.

Private Class Request

If your company is interested in private training, please submit a request.

Registration options

Databricks has a delivery method for wherever you are on your learning journey.

Self-Paced

Custom-fit learning paths for data, analytics, and AI roles, delivered through on-demand videos.

Register now

Instructor-Led

Public and private half-day to two-day courses taught by expert instructors.

Register now

Blended Learning

Self-paced content plus weekly instructor-led sessions for every style of learner, designed to optimize course completion and knowledge retention. Go to the Subscriptions Catalog tab to purchase.

Purchase now

Skills@Scale

A comprehensive training offering for large-scale customers that includes learning elements for every learning style. Inquire with your account executive for details.

Upcoming Public Classes

Data Engineer

Data Pipelines with Delta Live Tables

In this course, you'll use Delta Live Tables with your choice of Spark SQL or Python to define and schedule pipelines that incrementally process new data from a variety of data sources into the Lakehouse. A minimal pipeline sketch follows this listing.

Learning objectives

  • Describe how Delta Live Tables tracks data dependencies in data pipelines.
  • Configure and run data pipelines using the Delta Live Tables UI.
  • Use Python or Spark SQL to define data pipelines that ingest and process data through multiple tables in the lakehouse, using Auto Loader and Delta Live Tables.
  • Use APPLY CHANGES INTO syntax to process Change Data Capture feeds.
  • Review event logs and data artifacts created by pipelines, and troubleshoot DLT syntax.

Prerequisites

  • Beginner familiarity with cloud computing concepts (virtual machines, object storage, etc.)
  • Ability to perform basic code development tasks using the Databricks Data Engineering & Data Science workspace (create clusters, run code in notebooks, use basic notebook operations, import repos from Git, etc.)
  • Beginner programming experience with Delta Lake: use Delta Lake DDL to create tables, compact files, restore previous table versions, and perform garbage collection of tables in the Lakehouse; use CTAS to store data derived from a query in a Delta Lake table; use SQL to perform complete and incremental updates to existing tables.
  • Beginner programming experience with Python (syntax, conditions, loops, functions)
  • Beginner programming experience with Spark SQL or PySpark: extract data from a variety of file formats and data sources; apply a number of common transformations to clean data; reshape and manipulate complex data using advanced built-in functions.
  • Production experience working with data warehouses and data lakes.

Last course update: April 2023
Paid | 4h | Lab | Instructor-led | Associate
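
For a flavor of what these pipeline definitions look like in Python, here is a minimal sketch using the Delta Live Tables decorator API with Auto Loader. The table names, columns, and source path are hypothetical, and the code runs only inside a configured DLT pipeline, not an interactive notebook:

```python
# Minimal sketch: a two-table Delta Live Tables pipeline (hypothetical names/paths).
import dlt
from pyspark.sql.functions import col

@dlt.table(comment="Raw orders ingested incrementally with Auto Loader")
def orders_bronze():
    return (
        spark.readStream.format("cloudFiles")      # Auto Loader
        .option("cloudFiles.format", "json")
        .load("/path/to/raw/orders")               # hypothetical source path
    )

@dlt.table(comment="Cleaned orders")
@dlt.expect_or_drop("valid_amount", "amount > 0")  # data-quality expectation
def orders_silver():
    # DLT infers the dependency on orders_bronze from this read
    return dlt.read_stream("orders_bronze").select(
        col("order_id"), col("amount").cast("double"), col("order_ts")
    )
```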
Career Workshop

March 20


Questions?

If you have any questions, please refer to our Frequently Asked Questions page.