
Data Preparation for Machine Learning

This course focuses on the fundamentals of preparing data for machine learning using Databricks. Participants will learn essential skills for exploring, cleaning, and organizing data tailored for traditional machine learning applications. Key topics include data visualization, feature engineering, and optimal feature storage strategies. Through practical exercises, participants will gain hands-on experience in efficiently preparing datasets for machine learning within the Databricks platform. This course is designed for associate-level data scientists, machine learning practitioners, and individuals seeking to enhance their proficiency in data preparation, ensuring a solid foundation for successful machine learning model deployment.


Note:

1. This is the first course in the 'Machine Learning with Databricks' series.

2. Databricks Academy is transitioning from video lectures to a more streamlined PDF format with slides and notes for all self-paced courses. Please note that demo videos will still be available in their original format. We would love to hear your thoughts on this change, so please share your feedback through the course survey at the end. Thank you for being a part of our learning community!

Skill Level
Associate
Duration
3h
Prerequisites

This course was developed for participants with the following skills, knowledge, and abilities:

• Completed the Get Started with Databricks for Machine Learning (Onboarding) course or possess equivalent foundational knowledge of working in the Databricks environment.

    - Learners should be familiar with navigating the Databricks workspace, creating and running notebooks, and understanding the basic machine learning workflow on Databricks. This course builds on that foundation to focus on data preparation for machine learning.

• Intermediate-level proficiency in Python programming for data preparation and analysis.

    - Learners should be comfortable using libraries such as pandas, numpy, and scikit-learn for data manipulation, handling missing values, and basic feature transformations.

• Basic understanding of machine learning fundamentals.

    - This includes familiarity with concepts such as training and test datasets, feature engineering, and model development pipelines.

• Familiarity with Databricks platform workflows.

    - Learners should be able to perform basic tasks such as creating clusters, running code in notebooks, and using common notebook operations.

• Basic knowledge of data formats and lakehouse concepts.

    - Learners should be familiar with common data formats such as CSV, JSON, and Parquet, and have introductory knowledge of Delta Lake and the Lakehouse architecture.

• Foundational understanding of exploratory data analysis and basic statistics.

    - This includes awareness of data distributions, missing values, outliers, and simple data visualization techniques used to assess data quality.
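As a quick self-check against the pandas and scikit-learn prerequisites above, the following sketch walks through the kind of data-preparation steps the course assumes familiarity with: imputing missing values, one-hot encoding a categorical column, and standardizing numeric features. The toy dataset and column names are hypothetical, purely for illustration.

```python
import pandas as pd
import numpy as np
from sklearn.preprocessing import StandardScaler

# Hypothetical toy dataset with missing values and a categorical column
df = pd.DataFrame({
    "age": [25, np.nan, 47, 35],
    "income": [48000, 52000, 61000, np.nan],
    "plan": ["basic", "pro", "pro", "basic"],
})

# Impute missing numeric values with each column's median
df["age"] = df["age"].fillna(df["age"].median())
df["income"] = df["income"].fillna(df["income"].median())

# One-hot encode the categorical feature
df = pd.get_dummies(df, columns=["plan"], prefix="plan")

# Standardize numeric features to zero mean and unit variance
scaler = StandardScaler()
df[["age", "income"]] = scaler.fit_transform(df[["age", "income"]])

print(df.isna().sum().sum())  # 0 -- no missing values remain
```

If steps like these feel routine, you meet the Python prerequisite for this course.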


Registration options

Databricks has a delivery method for wherever you are on your learning journey


Self-Paced

Custom-fit learning paths for data, analytics, and AI roles, delivered through on-demand videos

Register now


Instructor-Led

Public and private courses taught by expert instructors across half-day to two-day courses

Register now


Blended Learning

Self-paced and weekly instructor-led sessions for every style of learner to optimize course completion and knowledge retention. Go to the Subscriptions Catalog tab to purchase.

Purchase now


Skills@Scale

Comprehensive training offering for large-scale customers that includes learning elements for every style of learning. Inquire with your account executive for details.

Upcoming Public Classes

Get Started with Lakebase

This Get Started course introduces Databricks Lakebase, a fully managed PostgreSQL service built into the Databricks Data Intelligence Platform that brings operational (OLTP) and analytical (OLAP) workloads closer together.

The course begins with a conceptual lecture that compares OLTP and OLAP systems, explaining their different performance characteristics, storage models, and typical use cases. You will also explore the challenges organizations face when maintaining separate transactional databases and analytical platforms, including data movement, latency, and architectural complexity.

You will then learn how Databricks Lakebase helps address these challenges by providing a PostgreSQL-compatible operational database that integrates directly with the Databricks Lakehouse, enabling operational applications and analytics to work together within a unified platform.

Through hands-on labs, you will:

• Create and explore a Lakebase project using autoscaling compute

• Navigate the Lakebase UI, including branching, monitoring, and configuration settings

• Create and query tables using the Lakebase SQL Editor

• Query Lakebase data from Databricks using Lakehouse Federation and foreign catalogs

• Perform Reverse ETL by synchronizing Delta tables to Lakebase

• Connect to Lakebase from Python and perform basic CRUD operations
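The last lab item, connecting from Python and performing CRUD operations, follows the standard Python DB-API pattern. Since Lakebase is PostgreSQL-compatible, in practice you would open the connection with a Postgres driver such as psycopg2 (connection details, table, and column names here are hypothetical). The sketch below uses an in-memory SQLite database as a stand-in so the pattern is runnable locally; the helpers accept any DB-API 2.0 connection, with `ph` set to the driver's parameter placeholder ("?" for sqlite3, "%s" for psycopg2).

```python
import sqlite3

def create_table(conn):
    # CREATE: a minimal customers table
    cur = conn.cursor()
    cur.execute(
        "CREATE TABLE IF NOT EXISTS customers ("
        "id INTEGER PRIMARY KEY, name TEXT NOT NULL, plan TEXT NOT NULL)"
    )
    conn.commit()

def insert_customer(conn, name, plan, ph="?"):
    cur = conn.cursor()
    cur.execute(f"INSERT INTO customers (name, plan) VALUES ({ph}, {ph})",
                (name, plan))
    conn.commit()

def read_customers(conn):
    cur = conn.cursor()
    cur.execute("SELECT id, name, plan FROM customers ORDER BY id")
    return cur.fetchall()

def update_plan(conn, name, plan, ph="?"):
    cur = conn.cursor()
    cur.execute(f"UPDATE customers SET plan = {ph} WHERE name = {ph}",
                (plan, name))
    conn.commit()

def delete_customer(conn, name, ph="?"):
    cur = conn.cursor()
    cur.execute(f"DELETE FROM customers WHERE name = {ph}", (name,))
    conn.commit()

# SQLite stands in for a real Lakebase (Postgres) connection here
conn = sqlite3.connect(":memory:")
create_table(conn)
insert_customer(conn, "Ada", "basic")
insert_customer(conn, "Grace", "pro")
update_plan(conn, "Ada", "pro")
delete_customer(conn, "Grace")
print(read_customers(conn))  # [(1, 'Ada', 'pro')]
```

Against Lakebase itself, only the connection setup changes; the cursor/execute/commit workflow shown here carries over unchanged.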

This is a Get Started course, so the focus is on understanding the core concepts and basic workflows for working with Lakebase. Building full production applications on top of Lakebase is outside the scope of this course.

Note: For SCORM lecture files, please ensure that you close the SCORM window after completing the content. Do not click the ‘Next Lesson’ button, as doing so may prevent the SCORM module from being marked as complete.

Paid & Subscription
3h
Lab
Onboarding

Questions?

If you have any questions, please refer to our Frequently Asked Questions page.