Dashboard performance issues rarely come from a single place. They’re usually the combined effect of dashboard design, warehouse concurrency and caching, and data layout in your lakehouse. If you optimize only one layer—SQL, or compute sizing, or table layout—you’ll often see partial wins, but the dashboard can still feel slow or unpredictable under real usage.
In this post, we take a holistic approach to Databricks AI/BI performance. We’ll follow a dashboard interaction end-to-end: from the browser and AI/BI orchestration layer, through Databricks SQL admission and caching behavior, down to file scanning and data skipping in the Lakehouse. Along the way, we’ll highlight the patterns that most often drive latency spikes, queueing, and cost at scale—especially when many users interact with the same dashboards concurrently.

To optimize performance, you must first understand the journey a single click takes through the stack. When a user opens a dashboard or changes a filter, a chain reaction occurs across multiple layers. If any layer is misconfigured, the user feels the lag.
By optimizing each of these four touchpoints (dashboard and page design, warehouse admission and sizing, caching, and data layout in the lakehouse), you move away from brute-force compute and toward a streamlined architecture that scales with your users.
Before optimizing anything, you must first define what you are optimizing for. Dashboard performance is not a single concept, and improvements only make sense when tied to a clear target. Common goals include reducing time to first visual, improving interaction latency, keeping performance stable under concurrency, or lowering the cost per dashboard view.
Once the goal is clear, you need to understand the parameters that shape it. These include the size and growth of the data, the number of users and their access patterns, and how queries behave in practice—how many fire on page load, how much data they scan, and whether results are reused or constantly recomputed. Without this context, optimization becomes guesswork and often shifts cost or latency from one layer to another.
Effective dashboard optimization is, therefore, intentional: pick a measurable target, understand the data and usage patterns that influence it, and only then apply the technical optimizations that follow.
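One way to gather that context is to profile recent dashboard-driven queries from the query history system table. The sketch below assumes the system.query.history system table is enabled in your workspace and that columns such as total_duration_ms, read_bytes, from_results_cache, waiting_at_capacity_duration_ms, and query_source.dashboard_id are available; verify the schema before relying on the numbers.

```sql
-- Rough profile of dashboard-driven queries over the last 7 days:
-- volume, latency, data scanned, result-cache reuse, and time spent
-- queued for warehouse capacity. Column names are assumptions based on
-- system.query.history and should be checked against your workspace.
SELECT
  query_source.dashboard_id                                      AS dashboard_id,
  COUNT(*)                                                       AS query_count,
  ROUND(AVG(total_duration_ms) / 1000, 1)                        AS avg_duration_s,
  ROUND(PERCENTILE(total_duration_ms, 0.95) / 1000, 1)           AS p95_duration_s,
  ROUND(SUM(read_bytes) / POWER(1024, 3), 1)                     AS total_gb_scanned,
  ROUND(AVG(CASE WHEN from_results_cache THEN 1 ELSE 0 END), 2)  AS cache_hit_ratio,
  ROUND(AVG(waiting_at_capacity_duration_ms) / 1000, 1)          AS avg_queue_s
FROM system.query.history
WHERE start_time >= current_timestamp() - INTERVAL 7 DAYS
  AND query_source.dashboard_id IS NOT NULL
GROUP BY query_source.dashboard_id
ORDER BY total_gb_scanned DESC;
```

Metrics like cache_hit_ratio and avg_queue_s map directly onto the goals above: low result reuse points at dashboard design and query determinism, while persistent queueing points at warehouse sizing and concurrency.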
Every visible tile is a potential trigger: it runs on first load and can re-run when filters/parameters change, on refresh, and when users navigate back to a page. Tabs limit those re-executions to the active page, reducing bursts and head-of-line blocking.
AI/BI dashboards let you build multi-page reports. Group visuals into pages aligned to user intent (Overview → Investigate → Deep dive), so only the current page executes. This reduces head-of-line blocking, shapes concurrency into smaller bursts, and increases cache hit rates for repeated deterministic queries.
Recommended page types:
- Overview: a lightweight landing page with a small set of aggregated tiles.
- Investigate: filtered and comparative views for narrowing down a question.
- Deep dive: detailed, row-level pages reached via drill-through.
Favor deterministic tiles (avoid NOW()) to maximize result cache hits. Monitor Peak Queued Queries and increase the cluster size or max clusters if it stays persistently above 0.
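To make “deterministic” concrete, here is a minimal sketch contrasting a tile that embeds NOW() with one driven by a dashboard parameter. The table, columns, and the :report_date parameter are illustrative rather than taken from any real dashboard; the point is that a fixed parameter value keeps the query text and result stable, so repeated views can be served from the result cache.

```sql
-- Non-deterministic tile: NOW() changes on every execution, so the
-- result cache is bypassed and the query recomputes on each page load.
SELECT order_date, SUM(revenue) AS revenue
FROM sales.orders                                   -- illustrative table
WHERE order_ts >= NOW() - INTERVAL 30 DAYS
GROUP BY order_date;

-- Deterministic alternative: anchor the window to a dashboard date
-- parameter (:report_date, a hypothetical name). For a given parameter
-- value, the query text and result are stable, so repeated views by
-- many users can reuse the cached result instead of re-scanning.
SELECT order_date, SUM(revenue) AS revenue
FROM sales.orders
WHERE order_ts >= :report_date - INTERVAL 30 DAYS
  AND order_ts <  :report_date + INTERVAL 1 DAY
GROUP BY order_date;
```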
The drill-through feature in AI/BI dashboards enables navigation from high-level visuals to detailed pages while carrying the selected context. It reinforces a page-based design by deferring expensive queries until user intent is clear, improving first-paint performance and reducing unnecessary concurrency spikes.
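As a sketch of how drill-through defers work, the detail-page query below only scans for the context carried from the overview; the :selected_customer_id parameter and the table are hypothetical stand-ins for whatever context your drill-through passes.

```sql
-- Detail-page tile scoped to the drill-through context. The expensive,
-- row-level scan runs only after a specific customer has been selected
-- on the overview page, not on the dashboard's first paint.
-- :selected_customer_id is a hypothetical parameter populated by the
-- drill-through; table and column names are illustrative.
SELECT order_id, order_ts, status, revenue
FROM sales.orders
WHERE customer_id = :selected_customer_id
ORDER BY order_ts DESC
LIMIT 1000;
```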
Callout — Why this helps on any warehouse type: Smaller, predictable bursts let Serverless Intelligent Workload Management (IWM) react quickly without over-scaling, and they prevent Pro and Classic warehouses from saturating cluster slots during page loads.
For more details, see: https://www.databricks.com/blog/whats-new-in-aibi-dashboards-fall24
The first impression of a dashboard is defined by its first paint
