Declarative Pipelines: Simplifying Data Engineering Workloads
Overview
| Experience | In Person |
| --- | --- |
| Type | Breakout |
| Track | Data Engineering and Streaming |
| Industry | Enterprise Technology |
| Technologies | Apache Spark, DLT |
| Skill Level | Beginner |
| Duration | 40 min |
DLT has made it dramatically easier to build production-grade pipelines, using a declarative framework that abstracts away orchestration and complexity. It’s become a go-to solution for teams who want reliable, maintainable pipelines without reinventing the wheel.
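For readers new to the declarative model, here is a minimal sketch of what a DLT pipeline definition can look like in Python: each table is declared as a function, and DLT derives the dependency graph, ordering, and retries from the declarations themselves rather than from hand-written orchestration code. The source path, column names, and table names below are illustrative placeholders, not part of the session material.

```python
import dlt
from pyspark.sql.functions import col

# Bronze table: Auto Loader ingests raw JSON files incrementally.
# "/path/to/raw/events" is a placeholder path; `spark` is provided by the
# DLT runtime in pipeline notebooks.
@dlt.table(comment="Raw events ingested incrementally from cloud storage")
def raw_events():
    return (
        spark.readStream.format("cloudFiles")
        .option("cloudFiles.format", "json")
        .load("/path/to/raw/events")
    )

# Silver table: DLT infers the dependency on raw_events from dlt.read_stream,
# and the expectation drops rows that fail the data-quality constraint.
@dlt.table(comment="Validated events ready for downstream consumption")
@dlt.expect_or_drop("valid_event_id", "event_id IS NOT NULL")
def clean_events():
    return (
        dlt.read_stream("raw_events")
        .select("event_id", "event_type", col("ts").cast("timestamp"))
    )
```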
But we’re just getting started. In this session, we’ll take a step back and share how DLT fits into a broader vision for the future of data engineering pipelines — one that opens the door to a new level of openness, standardization and community momentum.
We’ll cover the core concepts behind declarative pipelines, where the architecture is headed, and what this shift means for data engineers who build pipelines with procedural code today. Don’t miss this session: we’ll be sharing something new that sets the direction for what comes next.
Session Speakers
Michael Armbrust
Databricks
Sandy Ryza
Software Engineer