From Imperative to Declarative: Modernizing ETL with Lakeflow Spark Declarative Pipelines
Overview
| Experience | In Person |
|---|---|
| Track | Data Engineering & Streaming |
| Industry | Energy & Utilities, Manufacturing, Financial Services |
| Technologies | Lakeflow |
| Skill Level | Intermediate |
Have you ever wondered why Lakeflow Spark Declarative Pipelines (SDP) do not allow dynamic parameters or partial recalculations for specific dates? Or why for-loops are restricted in table definitions? And how do you choose between a Streaming Table and a Materialized View?
These are common struggles users face when starting. While often perceived as tool limitations, these hurdles usually stem from attempting to apply imperative habits to a declarative framework.
In this session, we will walk through real-world customer examples to demonstrate how to transition from an imperative to a declarative approach. Stop fighting the framework and start leveraging the declarative paradigm for more resilient data engineering.
Session Speakers
Tomasz Bacewicz
Solution Architect
Databricks
Aleksandra Chashchina
Specialist Solutions Architect
Databricks