Get Started with Databricks Platform Administration

In this course, you will learn the basics of platform administration on the Databricks Data Intelligence Platform, including a comprehensive overview of Unity Catalog, a vital component for effective data governance in Databricks environments. Divided into five modules, the course begins with a detailed introduction to Databricks infrastructure and the Data Intelligence Platform, including an in-depth walkthrough of the Databricks Workspace. You will then explore data governance principles in Unity Catalog, covering its key concepts, architecture, and roles. The course goes on to cover managing Unity Catalog metastores and compute resources, including clusters and SQL warehouses. Finally, you will master data access control by learning about privileges, fine-grained access, and how to govern data objects. By the end, you will have the essential skills to administer Unity Catalog, implement effective data governance, optimize compute resources, and enforce robust data security strategies.
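
For a sense of the access-control model covered in the final module, here is a minimal sketch of Unity Catalog privilege management in SQL; the catalog, schema, table, and group names are hypothetical:

    -- A principal needs USE CATALOG and USE SCHEMA to reach an object,
    -- plus an object-level privilege such as SELECT to read it.
    -- All names below are illustrative.
    GRANT USE CATALOG ON CATALOG main TO `data_analysts`;
    GRANT USE SCHEMA ON SCHEMA main.sales TO `data_analysts`;
    GRANT SELECT ON TABLE main.sales.orders TO `data_analysts`;

    -- Inspect the resulting grants on the table.
    SHOW GRANTS ON TABLE main.sales.orders;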


Languages Available: English | 日本語 | Português BR | 한국어

Skill Level: Onboarding
Duration: 2h
Prerequisites

This content was developed for participants with the following skills and knowledge:

  • Basic knowledge of cloud computing (e.g., networking basics) and of SQL concepts such as SQL commands, aggregate functions, filtering and sorting, indexes, tables, and views.
  • Basic knowledge of Python programming, the Jupyter notebook interface, and PySpark fundamentals.

Outline

Databricks Overview

  • Databricks Infrastructure
  • Databricks Data Intelligence Platform
  • Unity Catalog Overview
  • Databricks Workspace Walkthrough

Databricks Platform Administration

  • Data Governance in Unity Catalog
  • Managing Principals in Unity Catalog
  • Managing Unity Catalog Metastores
  • Compute Resources and Unity Catalog
  • Data Access Control in Unity Catalog

Upcoming Public Classes

Date    Time                                 Language  Price
Oct 23  03 PM - 05 PM (Europe/London)        English   Free
Nov 04  09 AM - 11 AM (America/Los_Angeles)  English   Free
Dec 03  03 PM - 05 PM (Europe/London)        English   Free
Jan 06  09 AM - 11 AM (America/Los_Angeles)  English   Free

Public Class Registration

If your company has purchased success credits or has a learning subscription, please fill out the Training Request form. Otherwise, you can register below.

Private Class Request

If your company is interested in private training, please submit a request.

Registration options

Databricks has a delivery method for wherever you are on your learning journey

Self-Paced

Custom-fit learning paths for data, analytics, and AI roles and career paths, delivered through on-demand videos

Register now

Instructor-Led

Public and private classes taught by expert instructors, ranging from half-day to two-day courses

Register now

Blended Learning

Self-paced content plus weekly instructor-led sessions for every style of learner, optimizing course completion and knowledge retention. Go to the Subscriptions Catalog tab to purchase.

Purchase now

Skills@Scale

Comprehensive training offering for large-scale customers that includes learning elements for every learning style. Inquire with your account executive for details.

Upcoming Public Classes

Data Engineer

Build Data Pipelines with Lakeflow Declarative Pipelines

This course introduces the essential concepts and skills needed to build data pipelines using Lakeflow Declarative Pipelines in Databricks, covering incremental batch and streaming ingestion and processing through multiple streaming tables and materialized views. Designed for data engineers new to Lakeflow Declarative Pipelines, the course provides a comprehensive overview of core components such as incremental data processing, streaming tables, materialized views, and temporary views, highlighting their specific purposes and differences.
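
As a minimal sketch of what such a pipeline looks like in SQL (the table names and source path below are illustrative), a streaming table ingests new files incrementally and a materialized view stays up to date on top of it:

    -- Streaming table: processes only files that arrived since the last update.
    CREATE OR REFRESH STREAMING TABLE raw_orders
    AS SELECT * FROM STREAM read_files('/Volumes/demo/raw/orders', format => 'json');

    -- Materialized view: refreshed aggregate over the streaming table.
    CREATE OR REFRESH MATERIALIZED VIEW daily_order_totals
    AS SELECT order_date, SUM(amount) AS total_amount
       FROM raw_orders
       GROUP BY order_date;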

Topics covered include:

  • Developing and debugging ETL pipelines with the multi-file editor in Lakeflow using SQL (with Python code examples provided)
  • How Lakeflow Declarative Pipelines track data dependencies in a pipeline through the pipeline graph
  • Configuring pipeline compute resources, data assets, trigger modes, and other advanced options

Next, the course introduces data quality expectations in Lakeflow, guiding users through the process of integrating expectations into pipelines to validate and enforce data integrity. Learners then explore how to put a pipeline into production, including scheduling options and enabling pipeline event logging to monitor pipeline performance and health.
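
As a sketch of how an expectation is declared (the table and column names here are hypothetical), a constraint on a streaming table can drop violating rows while recording the violations in pipeline metrics:

    -- Rows failing the constraint are dropped and counted in the event log,
    -- rather than failing the whole update.
    CREATE OR REFRESH STREAMING TABLE clean_orders (
      CONSTRAINT valid_order_id EXPECT (order_id IS NOT NULL) ON VIOLATION DROP ROW
    )
    AS SELECT * FROM STREAM raw_orders;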

Finally, the course covers how to implement Change Data Capture (CDC) using the AUTO CDC INTO syntax within Lakeflow Declarative Pipelines to manage slowly changing dimensions (SCD Type 1 and Type 2), preparing users to integrate CDC into their own pipelines.
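
A minimal sketch of that pattern, assuming a hypothetical change feed cdc_feed with a userId key, a sequenceNum ordering column, and an operation column that marks deletes:

    -- Target table that the change feed is applied to.
    CREATE OR REFRESH STREAMING TABLE customers;

    -- Apply inserts, updates, and deletes in sequence order, keeping only
    -- the latest row per key (SCD Type 1).
    AUTO CDC INTO customers
    FROM STREAM(cdc_feed)
    KEYS (userId)
    APPLY AS DELETE WHEN operation = 'DELETE'
    SEQUENCE BY sequenceNum
    COLUMNS * EXCEPT (operation, sequenceNum)
    STORED AS SCD TYPE 1;

Changing STORED AS SCD TYPE 1 to SCD TYPE 2 retains history rows instead of overwriting them.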

Note: Databricks Academy is transitioning from video lectures to a more streamlined PDF format with slides and notes for all self-paced courses. Please note that demo videos will still be available in their original format. We would love to hear your thoughts on this change, so please share your feedback through the course survey at the end. Thank you for being a part of our learning community!

Languages Available: English | 日本語 | Português BR | 한국어

Price: Free
Duration: 2h
Skill Level: Associate

Questions?

If you have any questions, please refer to our Frequently Asked Questions page.