Session
From Legacy Bottlenecks to High-Performance Platforms—Databricks Makes It Possible
Overview
| Experience | In Person |
|---|---|
| Track | Data Engineering & Streaming |
| Industry | Healthcare & Life Sciences, Retail & Consumer Goods |
| Technologies | Lakeflow, Unity Catalog |
| Skill Level | Intermediate |
Do you want to modernize your data processes by moving from on-premises systems to Databricks in the cloud? At 84.51°, we leveraged Lakeflow Spark Declarative Pipelines (SDPs) to build dimension and fact tables, used Delta Lake and Unity Catalog for data reliability, and applied Databricks Asset Bundles (DABs) and workflows to create a cost-efficient, scalable end-to-end system.

This migration enables teams to work with health data in support of our mission: improving patient outcomes through same-day, personalized engagement with at-risk or high-value patients. The solution processes roughly 40 tables and millions of rows daily, supporting append logic, SCD Type 2 handling, and identity columns.

Join us to learn how 84.51° applied SDPs, DABs, Delta Lake, and Unity Catalog to successfully migrate from legacy systems. 84.51° is a retail insights, media, and marketing company that uses first-party data from 60 million households to drive Kroger's customer-centric journey.
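For readers unfamiliar with the SCD Type 2 handling mentioned above, the core idea is to preserve full history in a dimension table: when a tracked attribute changes, the current row is closed out and a new current row is appended. The sketch below shows that semantics in plain Python for illustration only; the field names (`is_current`, `end_date`, etc.) are hypothetical, and in the actual pipeline this bookkeeping is handled declaratively by Lakeflow rather than by hand-written code like this.

```python
from datetime import date

def apply_scd2(history, incoming, key, effective):
    """Apply SCD Type 2 updates: close the current row for each
    changed key, then append the new version as the current row.
    Note: mutates the dicts in `history` (fine for a sketch)."""
    result = list(history)
    for row in incoming:
        for existing in result:
            if existing[key] == row[key] and existing["is_current"]:
                existing["is_current"] = False
                existing["end_date"] = row[effective]
        result.append({**row, "is_current": True, "end_date": None})
    return result

# Example: a customer's segment changes, yielding two history rows.
history = [{"customer_id": 1, "segment": "standard",
            "start_date": date(2024, 1, 1),
            "is_current": True, "end_date": None}]
incoming = [{"customer_id": 1, "segment": "premium",
             "start_date": date(2024, 6, 1)}]
rows = apply_scd2(history, incoming, key="customer_id", effective="start_date")
```

After the update, the old row is closed with an `end_date` matching the new row's effective date, and the new row is flagged current, so point-in-time queries over the dimension remain possible.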
Session Speakers
Piu Mallick
Senior Data Engineer
84.51°
Alex Yan
Data Engineer
84.51°