When Redis Isn’t the Answer: Serving Lakehouse Data at Scale
Overview
| Experience | In Person |
|---|---|
| Track | Lakebase |
| Industry | Enterprise Technology, Communications - Media & Entertainment, Consulting & Services |
| Technologies | Lakebase |
| Skill Level | Intermediate |
As our feature and operational use cases grew, maintaining separate Redis/DynamoDB stacks alongside the data lake added latency, data skew, and operational burden. At Superhuman (formerly Grammarly), we adopted Lakebase to serve low‑latency data directly from Databricks, simplifying our architecture while meeting strict SLAs. This session details our setup and results: how we modelled entities and keys, tuned Lakebase capacity and read replicas for high QPS, and enforced freshness with Change Data Feed–driven syncs. We’ll show code and configs for provisioning online stores, publishing tables, and querying endpoints, plus before/after latency and maintenance metrics. Attendees will leave with reference patterns for consolidating online serving on Lakebase without sacrificing performance.
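To make the "entities and keys" point concrete ahead of the session, here is a minimal, hypothetical sketch of composite-key modelling for an online store. It is not a Lakebase API; the function name and delimiter choice are assumptions for illustration only.

```python
# Illustrative only: compose a deterministic serving key from an entity
# type plus its identifying fields. A fixed delimiter keeps keys stable
# and allows prefix lookups by entity type on the serving side.

def serving_key(entity: str, *parts: str) -> str:
    """Build a composite key such as 'user_features:acct_42:us-east'."""
    for p in parts:
        if ":" in p:
            # Reject parts that would make the key ambiguous to parse back.
            raise ValueError(f"key part {p!r} may not contain ':'")
    return ":".join((entity, *parts))

print(serving_key("user_features", "acct_42", "us-east"))
# → user_features:acct_42:us-east
```

The session itself covers the real provisioning and publishing configs; this sketch only shows why deterministic, collision-free keys matter when consolidating Redis-style lookups onto a lakehouse-backed store.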
Session Speakers
Michael Kobelev
Superhuman