
Data Governance at Scale

In this course, you will learn how to implement data governance at scale on Databricks using Unity Catalog, with a focus on attribute-based access control, observability, and federated sharing. You will configure ABAC with governed tags, migrate from legacy fine-grained controls, enable and use system tables for audit and cost monitoring, deploy Lakehouse Monitoring for data and model quality, interpret lineage for impact and compliance, and apply federated governance and Delta Sharing patterns for secure cross-cloud collaboration.
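The attribute-based access control (ABAC) model the course builds on can be illustrated with a small, self-contained sketch. This is plain Python, not the Unity Catalog API — the tag names, attribute names, and policy rules below are all hypothetical — but it shows the core idea: access is decided by matching a user's attributes against governed tags on the asset, rather than by per-object grants.

```python
# Toy illustration of attribute-based access control (ABAC):
# access is decided by matching user attributes against governed
# tags on the data asset. All names here are hypothetical --
# this is NOT the Unity Catalog API.

def can_read(user_attrs: dict, asset_tags: dict) -> bool:
    """Grant access only if the user satisfies every governed tag."""
    # Hypothetical rule: a 'pii' tag requires explicit PII clearance.
    if asset_tags.get("pii") == "true" and not user_attrs.get("pii_cleared"):
        return False
    # Hypothetical rule: a 'region' tag must match the user's region.
    if "region" in asset_tags and asset_tags["region"] != user_attrs.get("region"):
        return False
    return True

analyst = {"region": "emea", "pii_cleared": False}
customers = {"pii": "true", "region": "emea"}
events = {"region": "emea"}

print(can_read(analyst, customers))  # False: table is tagged pii
print(can_read(analyst, events))     # True: region matches, no pii tag
```

The point of the pattern is that one policy (the function) governs many assets: tagging a new table `pii` is enough to bring it under the rule, with no per-table grant changes.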


Note: Databricks Academy is transitioning to a notebook-based format for classroom sessions within the Databricks environment, discontinuing the use of slide decks for lectures. You can access the lecture notebooks in the Vocareum lab environment.

Skill Level
Associate
Duration
4h
Prerequisites

Complete the following course before taking this course: 

• Databricks Fundamentals (or equivalent introductory Databricks course)


This course was developed for participants with the following skills, knowledge, and abilities:

• Familiarity with the Databricks platform and basic workspace operations (creating and attaching clusters, running notebooks, managing basic job runs).  

• Working knowledge of core data governance concepts such as access control, permissions, and security policies in a data platform.  

• Intermediate SQL experience, including creating and managing tables, views, and functions, and granting/revoking privileges on database objects.  

• Understanding of Unity Catalog’s basic object model (metastore, catalogs, schemas, tables, volumes, functions, models).  

• Basic understanding of data lineage and how data moves between sources, transformations, and downstream analytics or ML assets.  

• Familiarity with fine-grained security techniques like row-level filters and column masking, even if not yet implemented in Unity Catalog.  

• Beginner-level knowledge of cloud concepts (compute, storage, identities/groups) on at least one major cloud provider.  

• Basic awareness of metadata management and data discovery practices in modern data platforms.
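The fine-grained security techniques listed above — row-level filters and column masking — can be sketched in plain Python. In the course these are applied as SQL row filters and column masks in Unity Catalog; the snippet below is only a conceptual stand-in, and the column names, roles, and masking format are made up.

```python
# Conceptual sketch of row-level filtering and column masking.
# Plain Python standing in for the SQL row filters and masks the
# course applies in Unity Catalog; all names are illustrative.

rows = [
    {"name": "Ada", "ssn": "123-45-6789", "region": "us"},
    {"name": "Bo",  "ssn": "987-65-4321", "region": "eu"},
]

def mask_ssn(value: str, is_admin: bool) -> str:
    """Column mask: admins see the real value, others a redaction."""
    return value if is_admin else "***-**-" + value[-4:]

def row_filter(row: dict, user_region: str) -> bool:
    """Row filter: users see only rows for their own region."""
    return row["region"] == user_region

def query(user_region: str, is_admin: bool):
    """Apply the row filter, then the column mask, per 'user'."""
    return [
        {**r, "ssn": mask_ssn(r["ssn"], is_admin)}
        for r in rows if row_filter(r, user_region)
    ]

print(query("us", is_admin=False))
# [{'name': 'Ada', 'ssn': '***-**-6789', 'region': 'us'}]
```

The same separation holds in Unity Catalog: the filter and mask are defined once as functions and bound to the table, so every query path sees the governed view.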

Upcoming Public Classes

Date     Time                              Language   Price
Jun 09   08 AM - 12 PM (Asia/Kolkata)      English    $750.00
Jun 30   01 PM - 05 PM (Europe/London)     English    $750.00
Jul 23   01 PM - 05 PM (Australia/Sydney)  English    $750.00
Jul 23   09 AM - 01 PM (America/New_York)  English    $750.00

Public Class Registration

If your company has purchased success credits or has a learning subscription, please fill out the Training Request form. Otherwise, you can register below.

Private Class Request

If your company is interested in private training, please submit a request.


Registration options

Databricks has a delivery method for wherever you are on your learning journey


Self-Paced

Custom-fit learning paths for data, analytics, and AI roles, delivered through on-demand videos


Instructor-Led

Public and private half-day to two-day courses taught by expert instructors


Blended Learning

Self-paced content plus weekly instructor-led sessions for every style of learner, to optimize course completion and knowledge retention. Go to the Subscriptions Catalog tab to purchase.


Skills@Scale

Comprehensive training offering for large-scale customers that includes learning elements for every style of learning. Inquire with your account executive for details.


Machine Learning Practitioner

Machine Learning with Databricks

Welcome to Machine Learning with Databricks!

This course is your gateway to mastering machine learning workflows on Databricks. Dive into data preparation, model development, deployment, and operations, guided by expert instructors. Learn essential skills for data exploration, model training, and deployment strategies tailored for Databricks. By course end, you'll have the knowledge and confidence to navigate the entire machine learning lifecycle on the Databricks platform, empowering you to build and deploy robust machine learning solutions efficiently.

Data Preparation for Machine Learning

This course focuses on the fundamentals of preparing data for machine learning using Databricks. Participants will learn essential skills for exploring, cleaning, and organizing data tailored for traditional machine learning applications. Key topics include data visualization, feature engineering, and optimal feature storage strategies. Through practical exercises, participants will gain hands-on experience in efficiently preparing datasets for machine learning within Databricks. This course is designed for associate-level data scientists, machine learning practitioners, and individuals seeking to enhance their proficiency in data preparation, ensuring a solid foundation for successful machine learning model deployment.

Machine Learning Model Development

This comprehensive course provides a practical guide to developing traditional machine learning models on Databricks, emphasizing hands-on demonstrations and workflows using popular ML libraries. Participants will explore key ML techniques, including regression and clustering, while leveraging Databricks’ powerful capabilities. The course covers MLflow integration for model tracking, Databricks Feature Store for feature management, and Optuna for hyperparameter tuning. Additionally, participants will learn how to accelerate model training with Databricks AutoML. By the end of the course, learners will have real-world, practical skills to develop, optimize, and deploy machine learning models efficiently in the Databricks environment.
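In the course, Optuna drives the hyperparameter search; the snippet below is only a minimal random-search stand-in to show the shape of the tune-and-evaluate loop (sample candidate settings, score each, keep the best). The objective function and parameter ranges are made up for illustration.

```python
import random

# Minimal stand-in for a hyperparameter-tuning loop: sample
# candidate settings, score each, keep the best. The objective
# here is a made-up function, not a real model's validation score.

def objective(lr: float, depth: int) -> float:
    """Hypothetical validation loss; lower is better."""
    return (lr - 0.1) ** 2 + abs(depth - 5) * 0.01

random.seed(0)  # deterministic for the example
best = None
for _ in range(50):
    params = {"lr": random.uniform(0.001, 1.0),
              "depth": random.randint(1, 10)}
    score = objective(**params)
    if best is None or score < best[0]:
        best = (score, params)

print("best score:", round(best[0], 4), "params:", best[1])
```

Libraries like Optuna replace the random sampling with smarter search strategies (e.g. pruning and Bayesian-style samplers) and, with MLflow, log every trial automatically, but the loop structure is the same.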

Machine Learning Model Deployment

This course is designed to introduce three primary machine learning deployment strategies and illustrate the implementation of each strategy on Databricks. Following an exploration of the fundamentals of model deployment, the course delves into batch inference, offering hands-on demonstrations and labs for utilizing a model in batch inference scenarios, along with considerations for performance optimization. The second part of the course comprehensively covers pipeline deployment, while the final segment focuses on real-time deployment. Participants will engage in hands-on demonstrations and labs, deploying models with Model Serving and utilizing the serving endpoint for real-time inference.

Machine Learning Operations

This course will guide participants through a comprehensive exploration of machine learning model operations, focusing on MLOps and model lifecycle management. The initial segment covers essential MLOps components and best practices, providing participants with a strong foundation for effectively operationalizing machine learning models. In the latter part of the course, we will delve into the basics of the model lifecycle, demonstrating how to navigate it seamlessly using the Model Registry in conjunction with the Unity Catalog for efficient model management. By the course's conclusion, participants will have gained practical insights and a well-rounded understanding of MLOps principles, equipped with the skills needed to navigate the intricate landscape of machine learning model operations.

Languages Available: English | 日本語 | Português BR | 한국어

Paid | 16h | Lab | Instructor-led | Associate
Data Engineer

DevOps Essentials for Data Engineering

This course explores software engineering best practices and DevOps principles, specifically designed for data engineers working with Databricks. Participants will build a strong foundation in key topics such as code quality, version control, documentation, and testing. The course emphasizes DevOps, covering core components, benefits, and the role of continuous integration and delivery (CI/CD) in optimizing data engineering workflows.

You will learn how to apply modularity principles in PySpark to create reusable components and structure code efficiently. Hands-on experience includes designing and implementing unit tests for PySpark functions using the pytest framework, followed by integration testing for Databricks data pipelines with Spark Declarative Pipeline and Jobs to ensure reliability.
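The unit-testing pattern described above can be sketched briefly: keep transformation logic in small, pure functions so pytest can exercise them without a running cluster. The function and test names below are illustrative, not from the course materials, and a real PySpark test would add a `SparkSession` fixture.

```python
# Sketch of the pytest unit-testing pattern: isolate transformation
# logic in a pure function so it is testable without a cluster.
# Names are illustrative. In a real project the test would live in
# test_transforms.py and be discovered and run by `pytest`.

def normalize_email(email: str) -> str:
    """Lowercase and trim an email address before loading it."""
    return email.strip().lower()

def test_normalize_email():
    assert normalize_email("  Ada@Example.COM ") == "ada@example.com"
    assert normalize_email("bo@x.io") == "bo@x.io"

# Called directly here; pytest would collect and run it for you.
test_normalize_email()
print("ok")
```

For Spark-level logic, the same idea extends to integration tests: build a tiny DataFrame in the test, run the pipeline step, and assert on the collected rows.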

The course also covers essential Git operations within Databricks, including using Databricks Git Folders to support continuous integration practices. Finally, you will take a high-level look at various deployment methods for Databricks assets, such as the REST API, CLI, SDK, and Databricks Asset Bundles (DABs), giving you the techniques needed to deploy and manage your pipelines.

By the end of the course, you will be proficient in software engineering and DevOps best practices, enabling you to build scalable, maintainable, and efficient data engineering solutions.

Languages Available: English | 日本語 | Português BR | 한국어 | Español | Français

Paid | 4h | Lab | Instructor-led | Associate

Questions?

If you have any questions, please refer to our Frequently Asked Questions page.