Session

Architecting Data Warehouses for Large-Scale Deployments

Overview

Experience: In Person

This advanced, half-day course covers how to architect Databricks SQL environments for enterprise scale — supporting hundreds or thousands of users across multiple business units. Participants will learn about designing scalable data ingestion pipelines with Lakeflow, planning multi-workspace and Unity Catalog strategies, implementing fine-grained security using row filters, column masks, and ABAC policies, and deploying solutions using Declarative Automation Bundles and GitOps. The course includes hands-on labs and demos.

Prerequisites

- Working knowledge of Databricks SQL (SQL Editor, warehouses, dashboards, query profiles)
- Unity Catalog fundamentals (three-level namespace, privileges, managed vs. external tables)
- Delta Lake essentials (ACID, time travel, OPTIMIZE, VACUUM)
- Cloud infrastructure concepts: cloud storage, networking, IAM, and Infrastructure-as-Code basics
- Data warehousing concepts (medallion architecture, ETL/ELT, star schemas, multi-environment strategies)
- Databricks workspace administration experience (users, groups, service principals, SSO)

Note: Hands-on training courses will be updated to reflect the newest product and feature announcements from Data + AI Summit in June 2026.
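To illustrate the fine-grained security topics covered in the course, the sketch below shows Unity Catalog row filters and column masks in Databricks SQL. The table and column names (`sales`, `hr.ssn_mask`, etc.) and the group names are hypothetical placeholders, not part of the course materials.

```sql
-- Hypothetical example: restrict non-admins to rows where region = 'US'.
CREATE OR REPLACE FUNCTION sec.us_only(region STRING)
  RETURN IF(IS_ACCOUNT_GROUP_MEMBER('admins'), TRUE, region = 'US');

ALTER TABLE sales SET ROW FILTER sec.us_only ON (region);

-- Hypothetical example: mask a sensitive column for everyone outside the 'hr' group.
CREATE OR REPLACE FUNCTION sec.ssn_mask(ssn STRING)
  RETURN CASE WHEN IS_ACCOUNT_GROUP_MEMBER('hr') THEN ssn ELSE '***-**-****' END;

ALTER TABLE employees ALTER COLUMN ssn SET MASK sec.ssn_mask;
```

The filter and mask functions run automatically on every query against the governed tables, so enforcement is centralized in Unity Catalog rather than duplicated in views or application code.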