
Get Started with Lakebase

This course introduces Databricks Lakebase, a fully managed PostgreSQL service built into the Databricks Data Intelligence Platform that brings operational (OLTP) and analytical (OLAP) workloads closer together.


The course begins with a conceptual lecture that compares OLTP and OLAP systems, explaining their different performance characteristics, storage models, and typical use cases. You will also explore the challenges organizations face when maintaining separate transactional databases and analytical platforms, including data movement, latency, and architectural complexity.
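To make the contrast concrete, here is a minimal sketch using Python's built-in sqlite3 module as a stand-in database (the course itself uses Lakebase, not SQLite): OLTP-style access reads and writes individual rows by key, while OLAP-style access scans and aggregates many rows.

```python
import sqlite3

# In-memory database as a stand-in for illustration only.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, region TEXT, amount REAL)")
conn.executemany(
    "INSERT INTO orders (region, amount) VALUES (?, ?)",
    [("EMEA", 120.0), ("APAC", 75.5), ("EMEA", 300.25), ("AMER", 42.0)],
)
conn.commit()

# OLTP-style access: a low-latency point update and lookup by primary key.
conn.execute("UPDATE orders SET amount = amount + 10 WHERE id = ?", (2,))
row = conn.execute("SELECT region, amount FROM orders WHERE id = ?", (2,)).fetchone()
print(row)

# OLAP-style access: a full scan with aggregation across many rows.
totals = conn.execute(
    "SELECT region, SUM(amount) FROM orders GROUP BY region ORDER BY region"
).fetchall()
print(totals)
```

The same SQL runs against either kind of system; the difference is which access pattern each engine's storage model makes fast, which is why organizations have historically run the two workloads on separate platforms.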


You will then learn how Databricks Lakebase helps address these challenges by providing a PostgreSQL-compatible operational database that integrates directly with the Databricks Lakehouse, enabling operational applications and analytics to work together within a unified platform.
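Because Lakebase is PostgreSQL-compatible, any standard Postgres driver can connect to it. The sketch below builds a libpq-style connection string the way a client such as psycopg2 expects; the hostname, database name, and credential shown are placeholders, and the real values come from your Lakebase instance's connection details in the workspace.

```python
def lakebase_dsn(host: str, database: str, user: str, password: str, port: int = 5432) -> str:
    """Build a libpq-style DSN for a PostgreSQL-compatible endpoint.

    The host, database, and user values passed in below are hypothetical;
    copy the real ones from your Lakebase instance in the workspace UI.
    """
    return (
        f"host={host} port={port} dbname={database} "
        f"user={user} password={password} sslmode=require"
    )

dsn = lakebase_dsn(
    host="example-instance.database.example.com",  # placeholder hostname
    database="databricks_postgres",                # placeholder database name
    user="someone@example.com",
    password="<token>",  # credential placeholder; do not hard-code real secrets
)

# With a Postgres driver installed (e.g. pip install psycopg2-binary), connecting
# is the usual routine -- commented out so this sketch runs standalone:
# import psycopg2
# with psycopg2.connect(dsn) as conn, conn.cursor() as cur:
#     cur.execute("SELECT version()")
#     print(cur.fetchone())
print(dsn)
```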


This is a Get Started course, so the focus is on understanding the core concepts and basic workflows for working with Lakebase. Building full production applications on top of Lakebase is outside the scope of this course.


Note: Databricks Academy is transitioning to a notebook-based format for classroom sessions within the Databricks environment, discontinuing the use of slide decks for lectures.

Skill Level
Onboarding
Duration
3h
Prerequisites

This course was developed for participants with the following skills, knowledge, and abilities:

• Access to a Databricks workspace with the Lakebase Database feature enabled.

• An available all-purpose compute or serverless cluster and a SQL warehouse (2X-Small is sufficient).

• Permission to create catalogs in your workspace.

• Intermediate SQL skills - able to write and understand SELECT, INSERT, UPDATE, and DELETE statements.

• Intermediate Python knowledge - comfortable with Python functions, exceptions, and working with dictionaries and lists.

• Familiarity with OLTP fundamentals - understands client-server relationships, ACID properties, database authentication, and concurrent access.

Upcoming Public Classes

Date | Time | Language | Price
May 15 | 12 PM - 02 PM (Asia/Singapore) | English | Free
May 21 | 03 PM - 05 PM (Europe/London) | English | Free
May 27 | 09 AM - 11 AM (America/Los_Angeles) | English | Free
Jun 24 | 12 PM - 02 PM (Asia/Singapore) | English | Free
Jun 26 | 03 PM - 05 PM (Europe/London) | English | Free
Jul 10 | 09 AM - 11 AM (America/Los_Angeles) | English | Free
Jul 23 | 03 PM - 05 PM (Europe/London) | English | Free
Jul 30 | 12 PM - 02 PM (Asia/Singapore) | English | Free

Public Class Registration

If your company has purchased success credits or has a learning subscription, please fill out the Training Request form. Otherwise, you can register below.

Private Class Request

If your company is interested in private training, please submit a request.

See all our registration options

Registration options

Databricks has a delivery method for wherever you are on your learning journey.

Self-Paced

Custom-fit learning paths for data, analytics, and AI roles and careers, delivered through on-demand videos

Register now

Instructor-Led

Public and private half-day to two-day courses taught by expert instructors

Register now

Blended Learning

Self-paced and weekly instructor-led sessions for every style of learner, designed to optimize course completion and knowledge retention. Go to the Subscriptions Catalog tab to purchase.

Purchase now

Skills@Scale

A comprehensive training offering for large-scale customers that includes learning elements for every learning style. Inquire with your account executive for details.

Upcoming Public Classes

Data Engineer

DevOps Essentials for Data Engineering

This course explores software engineering best practices and DevOps principles, specifically designed for data engineers working with Databricks. Participants will build a strong foundation in key topics such as code quality, version control, documentation, and testing. The course emphasizes DevOps, covering core components, benefits, and the role of continuous integration and delivery (CI/CD) in optimizing data engineering workflows.

You will learn how to apply modularity principles in PySpark to create reusable components and structure code efficiently. Hands-on experience includes designing and implementing unit tests for PySpark functions using the pytest framework, followed by integration testing for Databricks data pipelines with Spark Declarative Pipelines and Jobs to ensure reliability.
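As a self-contained illustration of the pytest pattern the course applies to PySpark functions, here is a unit test for a small, invented transformation helper (no Spark session required; in the course labs the function under test would be a PySpark function):

```python
# test_transforms.py -- run with: pytest test_transforms.py
def normalize_region(code: str) -> str:
    """Map free-form region codes to canonical values (hypothetical pipeline helper)."""
    canonical = {"us": "AMER", "emea": "EMEA", "apac": "APAC"}
    cleaned = code.strip().lower()
    if cleaned not in canonical:
        raise ValueError(f"unknown region code: {code!r}")
    return canonical[cleaned]


def test_normalize_region_happy_path():
    # Whitespace and casing are normalized before lookup.
    assert normalize_region("  US ") == "AMER"


def test_normalize_region_rejects_unknown_codes():
    import pytest
    with pytest.raises(ValueError):
        normalize_region("atlantis")
```

Keeping transformation logic in small pure functions like this is what makes it testable in isolation before it is wired into a pipeline.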

The course also covers essential Git operations within Databricks, including using Databricks Git Folders to integrate continuous integration practices. Finally, you will take a high-level look at various deployment methods for Databricks assets, such as the REST API, CLI, SDK, and Databricks Asset Bundles (DABs), equipping you with techniques to deploy and manage your pipelines.
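As a taste of the bundle-based deployment method, a minimal databricks.yml might look like the sketch below; the bundle name, target name, and host are invented, and the real schema supports many more settings:

```yaml
# databricks.yml -- minimal asset-bundle sketch (all names illustrative)
bundle:
  name: demo_pipelines

targets:
  dev:
    mode: development
    default: true
    workspace:
      host: https://<your-workspace-url>
```

Under these assumptions, `databricks bundle validate` checks the configuration and `databricks bundle deploy -t dev` deploys it to the dev target.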

By the end of the course, you will be proficient in software engineering and DevOps best practices, enabling you to build scalable, maintainable, and efficient data engineering solutions.

Note: This is the fourth course in the 'Data Engineering with Databricks' series.

Note: Databricks Academy is transitioning from video lectures to a more streamlined PDF format with slides and notes for all self-paced courses. Please note that demo videos will still be available in their original format. We would love to hear your thoughts on this change, so please share your feedback through the course survey at the end. Thank you for being a part of our learning community!

Languages Available: English | 日本語 | Português BR | 한국어

Free
2h
Associate

Questions?

If you have any questions, please refer to our Frequently Asked Questions page.