Databricks co-founder Arsalan Tavakoli-Shiraji on what separates enterprises generating real AI value from those stuck in a cycle of sprawl
The question surfacing in boardrooms and data strategy sessions right now: why do so many AI initiatives generate activity without generating value? It sounds simple until you try to answer it.
Arsalan Tavakoli-Shiraji has watched this pattern play out across hundreds of enterprise conversations. As co-founder and Senior Vice President of Field Engineering at Databricks, he sits at the intersection of technical architecture and AI business strategy.
In this conversation, Arsalan and I talked through what CDOs and CTOs need to understand about getting agentic systems into production, what governance failures look like once AI moves from producing outputs to taking actions — sending messages, updating records, executing decisions — and how to find a meaningful win without creating the kind of AI sprawl that haunts organizations for years.
The Distance Between AI Activity and AI Value
Catherine Brown: You work with enterprises at all stages of AI adoption. Where do most of them actually land when you look at it honestly?
Arsalan Tavakoli-Shiraji: A few different categories. Some are still experimenting — getting their hands on models, running pilots, seeing what's possible. Others have moved further and are automating specific tasks: generating copy, transcribing notes, letting people ask questions of their data. And then there is the much smaller group who has figured out how to design from the ground up with AI capabilities in mind. Most organizations are still in the first two categories. There's a lot of AI sprawl, a lot of AI activity. There's much less AI value.
The big difference is where you start. The ones who are getting to meaningful value start with the outcome they want to drive — greater productivity, a new business capability, risk reduction — and work backwards. They don't start with the technology.
Catherine: What is the most common architectural mistake that prevents agentic systems from ever reaching production?
Arsalan: The mistake I see most is thinking that selecting a model is the hard part. Right now, getting a high-quality foundation model is the easiest part of the problem. The hardest part is everything underneath it.
In the enterprise, you have to think through a few things. Where is your data and how do you connect to it? Most organizations have data spread across a dozen different places, locked in proprietary formats that don't talk to each other. And once you start tying agents to that data, you need serious governance. Not just governance over the data itself, but governance that understands the agents: what they're doing, what permissions they have, where they're going, and how multiple agents from multiple systems connect. And finally, agents need a deep semantic understanding of your organization. They are, in effect, virtual workers executing on your behalf. They need to know what good looks like, what the key definitions and metrics are, and what the context of the business actually is.
The anti-pattern is simple: data locked in silos, governance skipped or treated as a secondary problem, and then a scramble to figure out why the agents don't work in production. Organizations fall down on those three things almost every time.
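The governance idea Arsalan describes — knowing what each agent is doing, what permissions it has, and where it is going — can be sketched in a few lines. This is purely illustrative: the `Agent` and `PermissionSet` names are hypothetical, not a Databricks API. The point is that an agent gets an explicit, narrower grant than its user, and every action is checked and recorded before it runs.

```python
# Illustrative sketch only: Agent and PermissionSet are hypothetical names,
# not a real API. An agent holds an explicit allow-list of (action, resource)
# pairs, and every attempt is audited, allowed or not.
from dataclasses import dataclass, field

@dataclass(frozen=True)
class PermissionSet:
    """Explicit allow-list of (action, resource) pairs."""
    grants: frozenset

    def allows(self, action: str, resource: str) -> bool:
        return (action, resource) in self.grants

@dataclass
class Agent:
    name: str
    permissions: PermissionSet
    audit_log: list = field(default_factory=list)

    def act(self, action: str, resource: str) -> bool:
        allowed = self.permissions.allows(action, resource)
        # Record every attempt so governance can answer, after the fact,
        # "what are the agents doing, and with what permissions?"
        self.audit_log.append((self.name, action, resource, allowed))
        return allowed

# The user may hold broad permissions; the agent is granted only what its
# task requires -- not an automatic copy of the user's access.
agent = Agent(
    name="field-tech-assistant",
    permissions=PermissionSet(frozenset({("read", "work_orders"),
                                         ("update", "work_orders")})),
)

assert agent.act("read", "work_orders")        # in the grant: allowed
assert not agent.act("delete", "customer_db")  # outside the grant: denied
```

The design choice that matters here is that the grant is explicit and fails closed: anything not enumerated is denied, which is the opposite of "give the agent whatever the person has."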
Why Dashboards and Batch Pipelines Are the Wrong Foundation
Catherine: Structurally, why are dashboards and batch pipelines mismatched for where enterprises need to go?
Arsalan: They're band-aids. Dashboards provide a visual reference point that is still important for enterprises when making decisions. But most are built to answer one question someone asked one time. They get built, they get viewed a few times, and then they join what I'd call the dashboard graveyard.
Dashboards are also hard to interrogate. You see something in the data and you want to know why it happened. You want to tie it to an event, dig underneath it, and ask a follow-up. Historically, that means someone goes off, pulls the underlying data, runs an analysis, and comes back to you. That latency is brutal in a world where things are moving fast.
Batch pipelines have a similar issue. Batch processing made sense when decisions happened slowly enough that daily or weekly data was fine. But in an agentic world, the window between when you see something and when you can act on it is shrinking fast. When you have disconnected systems running on batch cycles, you simply cannot respond at the speed agents require.
What Lakebase Actually Solves
Catherine: As enterprises shift from AI experimentation to agentic execution, where does Lakebase fit in?
Arsalan: The infrastructure that most organizations have built around their analytics layers, and their data warehouses, was designed for a specific kind of work: large-scale queries, aggregate insights, and human analysts running reports. That is a fundamentally different workload than what agentic applications require.
When you start building for agents, you are building for a very different consumer. Think about a telecommunications company that wants to put an intelligent application in the hands of every field technician. Or a wealth management firm deploying an AI assistant to each of their advisors. Or a retailer surfacing real-time recommendations at the point of sale. Those applications need to serve enormous numbers of users simultaneously with very low latency. And all of this needs to happen at a cost that makes sense at scale.
That is where Lakebase comes in. Agents need a transactional database, not an analytics database. And they need one built specifically for the demands of the agentic world. Lakebase is that foundation. It is what allows organizations to move from experimenting with AI to actually running it at scale in production, without the infrastructure collapsing under the load. And it works alongside the analytics layer organizations already have. It is not a replacement. It is the piece that was missing.
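The workload mismatch is easy to make concrete. The sketch below uses SQLite purely as a stand-in (nothing here is Lakebase-specific): the analytics shape is one large aggregate scan, while the agent-facing shape is many small keyed reads and writes, each of which must return in milliseconds.

```python
# Two workload shapes against the same table. SQLite is a stand-in here,
# used only to illustrate the difference in query patterns.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, region TEXT, total REAL)")
conn.executemany("INSERT INTO orders VALUES (?, ?, ?)",
                 [(i, "west" if i % 2 else "east", float(i)) for i in range(1000)])

# Analytics shape: one large aggregate scan over the whole table --
# the work a data warehouse is tuned for.
rows = conn.execute(
    "SELECT region, COUNT(*), SUM(total) FROM orders GROUP BY region"
).fetchall()

# Agent shape: many independent point lookups and single-row updates,
# issued concurrently by thousands of users -- transactional territory.
for order_id in (7, 42, 881):
    conn.execute("UPDATE orders SET total = total + 1 WHERE id = ?", (order_id,))
    (total,) = conn.execute(
        "SELECT total FROM orders WHERE id = ?", (order_id,)
    ).fetchone()
```

A warehouse built for the first shape will struggle when hit with the second at the scale of a field workforce or a point-of-sale fleet; that is the gap a transactional layer fills.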
Governance Failures When Agents Take Action
Catherine: What governance failures tend to emerge once systems stop generating only outputs and start actually taking action?
Arsalan: It's common to assume that whatever permissions a person has, their agent should also have those permissions. And while that logic makes sense, the reality is that almost no organization has perfect permissions set up correctly for every person and in every system. Humans navigate that reality imperfectly, but because we have gut instincts, we can work around the challenges. We have awareness of context that tells us, "yes, technically I can do this. But I probably shouldn't without checking first." Agents don't have that situational awareness. They have a goal and a set of constraints. And they find a path to the goal within those constraints.
When agents are only generating outputs, the worst-case scenario is low-quality content. When they start acting — sending messages, placing orders, deleting records, communicating on your behalf — the stakes are completely different. Governance is one of the core pieces that determines whether you can actually unlock value from agents at all. The enterprises that will get this right are the ones who treat governance as a prerequisite, not an afterthought.
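One common way to operationalize "governance as a prerequisite" is to classify agent actions by blast radius and route high-stakes ones through an explicit approval step. The tiers and function names below are assumptions for illustration, not anything described in the interview.

```python
# Sketch of action gating: output-only actions run freely, reversible
# actions run with an audit trail, and irreversible or externally visible
# ones are held for approval. All names and tiers are illustrative.

LOW_RISK = {"draft_reply", "summarize"}           # generates output only
REVERSIBLE = {"update_record", "create_ticket"}   # acts, but can be undone
HIGH_RISK = {"send_message", "place_order", "delete_record"}

def dispatch(action: str, approvals: set) -> str:
    """Decide how an agent-requested action is handled."""
    if action in LOW_RISK:
        return "executed"
    if action in REVERSIBLE:
        return "executed_with_audit"
    if action in HIGH_RISK:
        # The agent may *propose* the action; a human or policy engine
        # must approve it before anything leaves the building.
        return "executed" if action in approvals else "pending_approval"
    return "rejected"  # unknown actions fail closed

assert dispatch("summarize", set()) == "executed"
assert dispatch("delete_record", set()) == "pending_approval"
assert dispatch("delete_record", {"delete_record"}) == "executed"
```

The gate substitutes for the situational awareness agents lack: the "I probably shouldn't without checking first" instinct becomes an explicit policy rather than a judgment call the agent cannot make.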
The Fastest Path to Success
Catherine: What is the fastest path you have seen to a successful deployment of AI agents without creating more technological sprawl?
Arsalan: Two things stand out. First, clarity about what success looks like before you start. Sounds obvious, but most teams skip it. If you cannot define the specific outcome you are driving — a productivity gain, a new revenue capability, a cost reduction, a risk avoided — then you cannot work backward to the right approach. Technology is not the goal.
The second is isolation. It is genuinely hard to transform a large, critical team from the inside out while they are still doing their day jobs. What I see work is standing up a small, focused pilot team with a clearly defined use case, giving them the freedom to iterate quickly, and keeping them away from legacy technical debt and existing policy constraints. They are not encumbered, so they move fast. You learn what good actually looks like in a real-world context. And then you take those learnings and figure out how to scale and enable the broader organization. You want to be able to find out what works quickly, and then scale rapidly once you do.
The Uncomfortable Truth About the Agentic Era
Catherine: What is the uncomfortable truth that leaders need to accept about redesigning for this moment?
Arsalan: From an infrastructure standpoint, the agentic era requires a set of core components that need to work together: a governed analytic layer, a transactional database that can handle the speed and scale agents demand, a platform to build and monitor those agents, and an application layer that people can actually use.
At Databricks, Lakebase anchors the transactional side. AgentBricks provides the development and monitoring layer for building and managing agents at scale. Databricks Apps gives you the application layer for delivering those experiences to end users. And Genie is how people actually talk to their data — the conversational interface that lets business users ask questions and get answers without a data analyst in the loop. If you get to scale, running not tens but potentially thousands of agents, you need a system where all of those pieces were built to work together from the start.
But the harder truth is this: the enterprises that will get the most out of this moment are the ones willing to rethink the underlying process, not just add AI on top of the existing one. There is a well-known example from the second industrial revolution. Factories that replaced steam engines with electric ones but kept the same floor layout got almost none of the efficiency gains. The technology changed. The system didn't. That is exactly where many organizations are right now.
The teams who are starting to ask, "If we built this from scratch with AI capabilities in mind, what would it look like?" are the ones who will see transformational results. It requires change management, enablement, and a clear definition of what good looks like. None of that is easy. But the successful organizations are tackling all of it together, not one piece at a time.
The System Is the Strategy
The question CDOs and CTOs should be sitting with right now is not whether to invest in agentic AI. That decision has largely been made by the market. The question is whether the underlying system — the data architecture, the governance layer, the transactional infrastructure, the development platform — is actually built for what they are trying to do.
To learn more about developing a roadmap to embed AI across your enterprise, download the State of AI Agents.