Automated Deployment with Declarative Automation Bundles

This course provides a comprehensive review of DevOps principles and their application to Databricks projects. It begins with an overview of core DevOps and DataOps principles, continuous integration (CI), continuous deployment (CD), and testing, and explores how these principles can be applied to data engineering pipelines.
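To make the testing principle concrete, the sketch below shows the kind of small unit test a CI pipeline might run against pipeline transformation logic. It is a hypothetical illustration, not code from the course: the filter_active_users function and its record schema are invented for this example.

```python
# test_transformations.py -- a minimal, hypothetical unit test for pipeline
# transformation logic, runnable with `pytest`. The function under test is
# invented for illustration and is not part of the course materials.

def filter_active_users(records):
    """Keep only records whose 'status' field is 'active'."""
    return [r for r in records if r.get("status") == "active"]


def test_filter_active_users_keeps_only_active_rows():
    rows = [
        {"id": 1, "status": "active"},
        {"id": 2, "status": "inactive"},
        {"id": 3, "status": "active"},
    ]
    result = filter_active_users(rows)
    assert [r["id"] for r in result] == [1, 3]
```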


The course then focuses on continuous deployment within the CI/CD process, examining tools such as the Databricks REST API, SDK, and CLI for project deployment. You will learn about Declarative Automation Bundles (DABs) and how they fit into the CI/CD process, diving into their key components, folder structure, and how they streamline deployment across target environments in Databricks. You will also learn how to add variables to bundles and how to modify, validate, deploy, and run them for multiple environments with different configurations using the Databricks CLI.
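As a hedged illustration of the SDK route mentioned above (not the course's own code), the following minimal sketch uses the Databricks SDK for Python to authenticate against a workspace, confirm the current identity, list existing jobs, and trigger a run. The job ID and authentication details are placeholders you would replace with values from your own workspace.

```python
# A minimal sketch of programmatic operations with the Databricks SDK for
# Python (pip install databricks-sdk). Authentication is resolved from the
# environment (DATABRICKS_HOST / DATABRICKS_TOKEN) or ~/.databrickscfg.
from databricks.sdk import WorkspaceClient

w = WorkspaceClient()

# Confirm which identity the automation is running as.
print("Running as:", w.current_user.me().user_name)

# List jobs already defined in the target workspace.
for job in w.jobs.list():
    print(job.job_id, job.settings.name)

# Trigger a run of an existing job and wait for it to finish.
# The job ID below is a placeholder for a job that already exists.
run = w.jobs.run_now(job_id=123456789012).result()
print("Run finished with state:", run.state.result_state)
```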


Finally, the course introduces Visual Studio Code as an Integrated Development Environment (IDE) for building, testing, and deploying Declarative Automation Bundles locally, optimizing your development process. The course concludes with an introduction to automating deployment pipelines using GitHub Actions to enhance the CI/CD workflow with Declarative Automation Bundles.


By the end of this course, you will be equipped to automate Databricks project deployments with Declarative Automation Bundles, improving efficiency through DevOps practices.


Note: 

1. Databricks Academy is transitioning from video lectures to a more streamlined PDF format with slides and notes for all self-paced courses. Please note that demo videos will still be available in their original format. We would love to hear your thoughts on this change, so please share your feedback through the course survey at the end. Thank you for being a part of our learning community!

2. This course is the fourth in the 'Advanced Data Engineering with Databricks' series.

Skill Level: Professional
Duration: 3h

Prerequisites

The content in this course was developed for participants with the following skills, knowledge, and abilities:

• Strong knowledge of the Databricks platform, including experience with Databricks Workspaces, Apache Spark, Delta Lake, the Medallion Architecture, Unity Catalog, Delta Live Tables (DLT), and Workflows, in particular knowledge of how to leverage Expectations with DLT.

• Experience in data ingestion and transformation, with proficiency in PySpark for data processing and DataFrame manipulation. Participants should also have experience writing intermediate-level SQL queries for data analysis and transformation.

• Proficiency in Python programming, including the ability to design and implement functions and classes, and experience with creating, importing, and utilizing Python packages.

• Familiarity with DevOps practices, particularly continuous integration and continuous delivery/deployment (CI/CD) principles.

• A basic understanding of Git version control.

• Completion of the prerequisite course DevOps Essentials for Data Engineering.

Registration options

Databricks has a delivery method for wherever you are on your learning journey

Self-Paced

Custom-fit learning paths for data, analytics, and AI roles and career paths through on-demand videos

Instructor-Led

Public and private courses taught by expert instructors across half-day to two-day courses

Blended Learning

Self-paced and weekly instructor-led sessions for every style of learner to optimize course completion and knowledge retention. Go to the Subscriptions Catalog tab to purchase.

Skills@Scale

Comprehensive training offering for large-scale customers that includes learning elements for every style of learning. Inquire with your account executive for details.

Upcoming Public Classes

Get Started with Lakebase

This Get Started course introduces Databricks Lakebase, a fully managed PostgreSQL service built into the Databricks Data Intelligence Platform that brings operational (OLTP) and analytical (OLAP) workloads closer together.

The course begins with a conceptual lecture that compares OLTP and OLAP systems, explaining their different performance characteristics, storage models, and typical use cases. You will also explore the challenges organizations face when maintaining separate transactional databases and analytical platforms, including data movement, latency, and architectural complexity.

You will then learn how Databricks Lakebase helps address these challenges by providing a PostgreSQL-compatible operational database that integrates directly with the Databricks Lakehouse, enabling operational applications and analytics to work together within a unified platform.

Through hands-on labs, you will:

• Create and explore a Lakebase project using autoscaling compute

• Navigate the Lakebase UI, including branching, monitoring, and configuration settings

• Create and query tables using the Lakebase SQL Editor

• Query Lakebase data from Databricks using Lakehouse Federation and foreign catalogs

• Perform Reverse ETL by synchronizing Delta tables to Lakebase

• Connect to Lakebase from Python and perform basic CRUD operations (a minimal sketch follows this list)
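As a hedged sketch of that last lab item, the example below uses the widely used psycopg2 driver against Lakebase's PostgreSQL-compatible endpoint; the course does not prescribe a specific driver, and the host, database, user, and password values are placeholders you would replace with your instance's connection details.

```python
# A minimal, hypothetical CRUD sketch against a PostgreSQL-compatible
# Lakebase instance using psycopg2 (pip install psycopg2-binary).
# All connection parameters below are placeholders.
import psycopg2

conn = psycopg2.connect(
    host="your-lakebase-instance.example.com",  # placeholder hostname
    dbname="your_database",                     # placeholder database name
    user="your-user@example.com",               # placeholder user
    password="<token-or-password>",             # placeholder credential
    sslmode="require",
)
conn.autocommit = True

with conn.cursor() as cur:
    # Create
    cur.execute("CREATE TABLE IF NOT EXISTS customers (id SERIAL PRIMARY KEY, name TEXT)")
    cur.execute("INSERT INTO customers (name) VALUES (%s)", ("Alice",))
    # Read
    cur.execute("SELECT id, name FROM customers")
    print(cur.fetchall())
    # Update
    cur.execute("UPDATE customers SET name = %s WHERE name = %s", ("Bob", "Alice"))
    # Delete
    cur.execute("DELETE FROM customers WHERE name = %s", ("Bob",))

conn.close()
```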

This is a Get Started course, so the focus is on understanding the core concepts and basic workflows for working with Lakebase. Building full production applications on top of Lakebase is outside the scope of this course.

Note: For SCORM lecture files, please ensure that you close the SCORM window after completing the content. Do not click the ‘Next Lesson’ button, as doing so may prevent the SCORM module from being marked as complete.

Paid & Subscription
3h
Lab
Onboarding

Questions?

If you have any questions, please refer to our Frequently Asked Questions page.