
Catalog Management and Data Organization

In this course, you will learn how to design, implement, and govern catalog structures and large-scale data organization on the Databricks Data Intelligence Platform, with Unity Catalog as the centralized governance layer for an enterprise lakehouse. The course is divided into five modules. It begins by placing Unity Catalog within the cloud deployment model, covering the Account Console, metastore creation, and the administrator role hierarchy. You will then translate organizational topology (business units, regions, and dev/QA/prod environments) into a scalable catalog and schema design using naming conventions, ownership patterns, and MANAGE delegation. Next, the course covers secure storage integration: storage credentials, external locations, the managed-storage hierarchy, managed versus external tables, and UC Volumes for non-tabular data. You will then apply access patterns and isolation strategies, including the three-level GRANT chain, workspace-catalog binding, and schema-level Attribute-Based Access Control (ABAC) policies, to enforce fine-grained data protection at scale. The course closes with best practices for catalog design, automation, least-privilege permissions, and group-based access management, followed by a comprehensive hands-on lab in a live Databricks Workspace environment. By the end, you will have the foundational skills to design, build, and govern a secure, well-structured lakehouse on Databricks.
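As a preview of the access model the course covers, the "three-level GRANT chain" means a principal needs privileges at each level of the namespace before it can read a table. A minimal sketch in Databricks SQL, with illustrative names (`sales_prod`, `finance`, `revenue`, and the `analysts` group are placeholders, not objects from the course):

```sql
-- Three-level GRANT chain in Unity Catalog: access to a table requires
-- USE CATALOG on the catalog, USE SCHEMA on the schema, and a privilege
-- (here SELECT) on the table itself.
GRANT USE CATALOG ON CATALOG sales_prod TO `analysts`;
GRANT USE SCHEMA  ON SCHEMA  sales_prod.finance TO `analysts`;
GRANT SELECT      ON TABLE   sales_prod.finance.revenue TO `analysts`;
```

Revoking any single link in the chain (for example, USE CATALOG) is enough to cut off access, which is what makes catalog-level isolation practical at scale.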


Note: Databricks Academy is transitioning to a notebook-based format for classroom sessions within the Databricks environment, discontinuing the use of slide decks for lectures. You can access the lecture notebooks in the Vocareum lab environment.

Skill Level
Associate
Duration
4h
Prerequisites

This course was developed for participants with the following skills, knowledge, and abilities:

• Familiarity with the Databricks Data Intelligence Platform and basic workspace operations (creating clusters and SQL warehouses, running SQL and Python in notebooks, basic notebook navigation)

• Working knowledge of Unity Catalog fundamentals, including the three-level namespace (catalog → schema → table, scoped to a metastore) and the core securable objects (tables, views, volumes, models)

• Hands-on experience with Unity Catalog–enabled workspaces, including creating and managing catalogs, schemas, and tables via SQL

• Solid understanding of identity and access management concepts (users, groups, service principals, authentication, authorization) and the difference between account-level and workspace-level administration

• Working knowledge of data governance principles — access control, data classification, compliance, and audit — and how they map to platform features

• Familiarity with cloud object storage concepts (S3, ADLS, or GCS), including IAM roles, bucket-level paths, and the role of storage credentials in cross-service access

• Introductory familiarity with Delta Lake (managed vs external tables, Delta lifecycle) and Lakeflow for orchestrating data workflows

• Practical experience with enterprise architecture in larger organizations — multi-business-unit topology, regional data residency, and dev/QA/prod environment separation
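Two of the prerequisites above, the three-level namespace and the managed-versus-external distinction, can be illustrated with a short Databricks SQL sketch (catalog, schema, table, and path names are placeholders):

```sql
-- Three-level namespace: every table is addressed as catalog.schema.table.
-- Managed table: Unity Catalog controls the storage location and lifecycle.
CREATE TABLE dev.bronze.events (id BIGINT, ts TIMESTAMP);

-- External table: data lives at a cloud path governed by an external
-- location; dropping the table does not delete the underlying files.
CREATE TABLE dev.bronze.events_ext (id BIGINT, ts TIMESTAMP)
LOCATION 's3://my-bucket/landing/events';
```

Creating the external variant assumes an external location (backed by a storage credential) already covers the path, which is exactly the setup the Data Management and Storage module walks through.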

Outline

Unity Catalog's Role in Cloud Architecture

• Unity Catalog in the Cloud

• From Account to Workspace Enablement

• Demo: Account Console Walkthrough (Setting Up and Managing a Metastore)


Designing & Structuring Catalogs and Namespaces

• From Org Model to Catalog Topology

• Ownership, MANAGE, and Delegation

• Privilege Inheritance That Scales

• Workspace-Catalog Binding & Read-only

• Demo: Designing Catalogs and Schemas

• Lab: Enterprise Catalog Design for a Multinational Organization


Data Management and Storage

• From Metadata to Tables & Volumes

• Managed Locations and Storage Credentials

• External Locations and Foreign Catalogs

• Demo: Storage Credentials and External Locations

• Lab: Cloud Storage Integration and Data Management


Access Patterns, Isolation Strategies, and FGAC in UC

• Isolation by Environment & Business Unit

• Privileges in Practice

• Fine-grained Access with RLS & Masking

• ABAC: What Good Looks Like

• Validating Outcomes & Auditing Signals

• Demo: Catalog Isolation, Access Patterns, and FGAC

• Lab: Enterprise Data Isolation and Access Governance


Best Practices

Upcoming Public Classes

Date: May 21
Time: 01:00 PM - 05:00 PM (America/New_York)
Language: English
Price: $750.00

Public Class Registration

If your company has purchased success credits or has a learning subscription, please fill out the Training Request form. Otherwise, you can register below.

Private Class Request

If your company is interested in private training, please submit a request.


Registration options

Databricks has a delivery method for wherever you are on your learning journey


Self-Paced

Custom-fit learning paths for data, analytics, and AI roles, delivered through on-demand videos

Register now


Instructor-Led

Public and private courses taught by expert instructors across half-day to two-day courses

Register now


Blended Learning

Self-paced content plus weekly instructor-led sessions for every style of learner, optimizing course completion and knowledge retention. Go to the Subscriptions Catalog tab to purchase.

Purchase now


Skills@Scale

A comprehensive training offering for large-scale customers that includes learning elements for every learning style. Inquire with your account executive for details.


Machine Learning Practitioner

Advanced Machine Learning with Databricks

This course is aimed at data scientists and machine learning practitioners and consists of two four-hour modules.

Machine Learning at Scale

In this course, you will gain theoretical and practical knowledge of Apache Spark’s architecture and its application to machine learning workloads within Databricks. You will learn when to use Spark for data preparation, model training, and deployment, while gaining hands-on experience with Spark ML and the pandas API on Spark. The course also introduces advanced concepts such as hyperparameter tuning and scaling Optuna with Spark, and builds on features introduced in the associate course, such as MLflow and Unity Catalog, for comprehensive model packaging and governance.

Advanced Machine Learning Operations

This course provides a comprehensive understanding of the machine learning lifecycle and MLOps, emphasizing best practices for data and model management, testing, and scalable architectures. It covers key MLOps components, including CI/CD, pipeline management, and environment separation, while showcasing Databricks’ tools for automation and infrastructure management, such as Databricks Asset Bundles (DABs), Workflows, and Mosaic AI Model Serving. You will learn about monitoring, custom metrics, drift detection, model rollout strategies, A/B testing, and the principles of reliable MLOps systems, gaining a holistic view of implementing and managing ML projects in Databricks.

Paid | 8h | Lab | Instructor-led | Professional
Data Engineer

Advanced Data Engineering with Databricks

This course is the entry point for learning advanced data engineering with Databricks.

Note: Databricks Academy is transitioning to a notebook-based format for classroom sessions within the Databricks environment, discontinuing the use of slide decks for lectures in the first module. You can access the lecture notebooks in the Vocareum lab environment.

Below, we describe each of the four four-hour modules included in this course.

Advanced Techniques with Spark Declarative Pipelines

This course explores Databricks' Lakeflow Spark Declarative Pipelines (SDP) for building production-grade streaming pipelines. You will learn advanced design patterns, robust data quality enforcement, and cross-platform integration essential for real-world lakehouse engineering.

Throughout the course, you will dive into modern data ingestion and processing techniques, mastering tools like Liquid Clustering for layout optimization and the Multiplex Streaming pattern for mixed-schema events. By the end of the modules, you will know how to confidently handle schema evolution, automate Change Data Capture (CDC), and ensure data integrity.

Through lectures and hands-on demos, you will:

• Build multi-flow pipelines to ingest multi-source data into a unified Bronze table.

• Apply Liquid Clustering and Data Quality Expectations across Silver and Gold layers.

• Implement the Multiplex pattern with Iceberg UniForm for cross-platform data access.

• Automate SCD Type 2 history tracking using AUTO CDC INTO.

• Design zero-data-loss quarantine pipelines to audit and manage invalid records.
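The SCD Type 2 and data-quality items above can be sketched in a short declarative-pipeline fragment. This is a rough illustration only: the table and column names are placeholders, and the exact AUTO CDC syntax should be checked against the current Lakeflow Spark Declarative Pipelines documentation before use.

```sql
-- Streaming Silver table with a quality expectation: rows with a null
-- business key are dropped rather than written downstream.
CREATE OR REFRESH STREAMING TABLE customers_silver (
  CONSTRAINT valid_key EXPECT (customer_id IS NOT NULL) ON VIOLATION DROP ROW
);

-- CDC flow: apply changes from the Bronze feed into the Silver table,
-- keeping full SCD Type 2 history ordered by the event timestamp.
CREATE FLOW customers_cdc AS AUTO CDC INTO customers_silver
FROM STREAM(bronze.customers_raw)
KEYS (customer_id)
SEQUENCE BY event_ts
STORED AS SCD TYPE 2;
```

The expectation and the CDC flow are independent levers: the first governs what is allowed into the table, the second governs how change events are merged into history.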

Databricks Data Privacy

This content is intended for the learner persona of data engineers or for customers, partners, and employees who complete data engineering tasks with Databricks. It aims to provide them with the necessary knowledge and skills to execute these activities effectively on the Databricks platform.

Databricks Performance Optimization

In this course, you’ll learn how to optimize workloads and physical data layout with Spark and Delta Lake, and how to analyze the Spark UI to assess performance and debug applications. We’ll cover topics like streaming, Liquid Clustering, data skipping, caching, Photon, and more.

Automated Deployment with Declarative Automation Bundles

This course provides a comprehensive review of DevOps principles and their application to Databricks projects. It begins with an overview of core DevOps, DataOps, continuous integration (CI), continuous deployment (CD), and testing, and explores how these principles can be applied to data engineering pipelines.

The course then focuses on continuous deployment within the CI/CD process, examining tools like the Databricks REST API, SDK, and CLI for project deployment. You will learn about Declarative Automation Bundles (DABs) and how they fit into the CI/CD process. You’ll dive into their key components, folder structure, and how they streamline deployment across various target environments in Databricks. You will also learn how to add variables, modify, validate, deploy, and execute Declarative Automation Bundles for multiple environments with different configurations using the Databricks CLI.

Finally, the course introduces Visual Studio Code as an Integrated Development Environment (IDE) for building, testing, and deploying Declarative Automation Bundles locally, optimizing your development process. It concludes with an introduction to automating deployment pipelines using GitHub Actions to enhance the CI/CD workflow with Declarative Automation Bundles.

By the end of this course, you will be equipped to automate Databricks project deployments with Declarative Automation Bundles, improving efficiency through DevOps practices.
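To give a feel for the bundle structure the module describes, here is a minimal configuration sketch. The project name, target names, and workspace URL are placeholders; a real `databricks.yml` would also declare resources (jobs, pipelines) and per-target variables.

```yaml
# Minimal bundle configuration sketch (databricks.yml); all values
# below are illustrative placeholders.
bundle:
  name: my_project

targets:
  dev:
    default: true
    workspace:
      host: https://example.cloud.databricks.com
  prod:
    mode: production
    workspace:
      host: https://example.cloud.databricks.com
```

With a file like this in place, the Databricks CLI workflow is `databricks bundle validate -t dev` to check the configuration, then `databricks bundle deploy -t dev` to push it to the chosen target.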

Languages Available: English | 日本語 | Português BR | 한국어

Paid | 16h | Lab | Instructor-led | Professional

Questions?

If you have any questions, please refer to our Frequently Asked Questions page.