Build and Deploy Databricks Projects Using Automation Bundles and Agents
Overview
| Experience | In Person |
|---|---|
| Track | Data Engineering & Streaming |
| Industry | Healthcare & Life Sciences, Financial Services, Transportation |
| Technologies | Lakeflow |
| Skill Level | Beginner |
Modern data engineering teams struggle with environment drift, manual deployment errors, and fragmented assets that make it difficult to reliably scale, govern, and productionize complex data and ML workflows.
With Databricks Asset Bundles (DABs), engineers can build, deploy, and operate data and AI projects on Databricks. By defining your entire project as a declarative bundle of assets, DABs let you apply proven software engineering practices like source control, testing, and CI/CD consistently and at scale.
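As a rough illustration of what "defining your project as a declarative bundle" looks like, here is a minimal `databricks.yml` sketch. The bundle name, target hosts, job, and notebook path are all placeholders, not part of the session material:

```yaml
# databricks.yml — minimal illustrative bundle (all names are hypothetical)
bundle:
  name: my_project

targets:
  dev:
    mode: development   # dev deployments get per-user prefixes and relaxed settings
    default: true
    workspace:
      host: https://my-workspace.cloud.databricks.com
  prod:
    mode: production
    workspace:
      host: https://my-workspace.cloud.databricks.com

resources:
  jobs:
    nightly_etl:
      name: nightly-etl
      tasks:
        - task_key: main
          notebook_task:
            notebook_path: ./src/etl.py
```

Because jobs, pipelines, and other assets live in one versioned file tree, the same definition can be validated, reviewed, and deployed to each target with the Databricks CLI.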
In this session, you’ll walk through structuring real DAB projects, handling dev vs. prod differences, integrating with GitHub Actions or Azure DevOps, and using new workspace authoring and debugging capabilities to keep deployments safe and repeatable.
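To make the CI/CD integration concrete, a GitHub Actions workflow for bundle deployment might look like the following sketch. The workflow name, branch, and secret names are assumptions for illustration; `databricks/setup-cli` is the official action for installing the Databricks CLI:

```yaml
# .github/workflows/deploy.yml — illustrative workflow (names and secrets are placeholders)
name: deploy-bundle
on:
  push:
    branches: [main]

jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: databricks/setup-cli@main
      # Validate the bundle configuration before touching the workspace
      - run: databricks bundle validate
      # Deploy to the prod target defined in databricks.yml
      - run: databricks bundle deploy -t prod
        env:
          DATABRICKS_HOST: ${{ secrets.DATABRICKS_HOST }}
          DATABRICKS_TOKEN: ${{ secrets.DATABRICKS_TOKEN }}
```

The same pattern translates to an Azure DevOps pipeline, with the CLI installed in a script step and credentials supplied via pipeline variables.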
We’ll close with a first look at BrickOps, Databricks’ new operating model for data engineering in the age of AI, showing how DABs‑powered blueprints, governed deployments, monitoring, and agents work together to reduce toil, detect and troubleshoot pipeline and job issues, and give teams an auditable, AI‑ready foundation for production data products.
Session Speakers
Kristóf Molnár
Sr. Staff Product Manager
Databricks
Lennart Kats
Principal Engineer
Databricks