Sponsored by: Imply | From Splunk to Databricks: Unlocking Open, Scalable Observability in the Lakehouse
Overview
| Experience | In Person |
|---|---|
| Track | Data Warehousing |
| Industry | Enterprise Technology, Manufacturing, Financial Services |
| Technologies | Databricks SQL |
| Skill Level | Intermediate |
Databricks offers a lakehouse-native model for security and observability built on open formats and an open ecosystem. It enables teams to ingest, retain, and analyze large volumes of telemetry data while reducing cost and eliminating vendor lock-in. But for organizations running Splunk today, realizing this promise requires more than simply moving data.

In this session, we show how teams transition from Splunk to Databricks by moving data into open lakehouse tables and querying it directly. The challenge is not storage, but making data interactive at scale. Instead of pre-indexing everything or moving data between systems, queries run in place across large historical datasets while maintaining performance. Teams can adopt the lakehouse without changing dashboards or workflows.

We cover how to make lakehouse data interactive, reduce infrastructure overhead, and make machine data fast and accessible for real-world observability use cases.
Session Speakers
Gian Merlino
Co-Founder & CTO
Imply