As enterprises move beyond pilots and proofs of concept, a new question is emerging in executive conversations: when does AI stop being a series of projects and start becoming part of how the business runs?
Naveen Zutshi, CIO at Databricks, works closely with CIOs and business leaders navigating the shift from experimentation to enterprise-scale AI. In this Q&A, Naveen draws on prior leadership roles at companies like Palo Alto Networks, Gap Inc., and Walmart, where he led complex modernization efforts that transformed legacy environments into scalable, cloud-first architectures.
What emerged in our conversation is clear: the inflection point is not about models. It is about modernization, governance, and operational discipline.
Catherine: What is the clearest sign you are seeing that AI experimentation is giving way to AI as an operational capability?
Naveen: I believe the industry still has more work to do in generating real value from AI. But over the last six to twelve months, I have seen a remarkable shift. I spend time with CIOs and business leaders across industries, and three patterns stand out.
First, I am hearing increasingly concrete examples of AI being used in daily work. Interestingly, regulated industries that were considered laggards in the cloud journey—healthcare and financial services, for example—are now early adopters. We are seeing AI used for back-office automation, fraud detection, generating alpha in investment returns, clinician note taking, drug discovery, and even crisis center support and prevention. Second, business leaders are increasingly involved in the conversation. Historically, AI discussions were dominated by data engineers and data scientists. Now business groups are coming to the table to discuss how data and AI can transform their functions. More importantly, they are sharing examples of how they have already done it. AI has truly arrived when it shows up in business KPIs.
Third, funding has shifted. AI used to come out of innovation budgets or discretionary funds. Now it is a major line item in the P&L—either funded directly by business units or centrally through the CIO or CTO organization. That shift alone signals operational commitment. It may not be long before spending on AI tools becomes a major line item behind only headcount and cloud spend. At Databricks, we are separating out AI spend from overall SaaS spend.
Catherine: In conversations with your industry peers, what common themes come up as friction points for productionizing AI projects?
Naveen: I was just with 20 CIOs this week, and talent again topped the survey results as the leading constraint. But in my experience, the root cause is often legacy.
Organizations are saddled with legacy systems, SaaS sprawl, on-prem sprawl, and architectural complexity. Over time, whether due to inaction or competing priorities, they have not taken decisive action to eliminate that complexity. But keeping legacy systems around is insidious. Not only does modernization increase speed, but legacy systems also drain talent. It becomes harder to attract and retain top engineers when their primary job is keeping the lights on rather than building modern systems.
Every time I have chosen to modernize—whether compute, storage, data architecture, or application layers—I have regretted not doing it sooner. Modernization unleashes productivity, restores a sense of mission, and simplifies the environment. It has always been a no-regret move.
The real fix is often a modern, open architecture that allows you to plug in the best AI models without ripping and replacing your stack.
Catherine: What are the key platform decisions that most strongly determine whether AI scales?
Naveen: First, the data layer. It must handle both structured and unstructured data (the latter makes up nearly 80% of enterprise data), and you must combine both under a common governance layer. Most critically, bring the models to the data, not the data to the models. Shipping data across environments creates complexity and control challenges. A unified architecture simplifies management and improves security.
It’s also critical to avoid locking yourself into a single model provider. The frontier models are evolving rapidly. An AI gateway or abstraction layer allows you to use multiple models and choose the best one for the task at hand.
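The gateway idea can be sketched in a few lines. This is a hypothetical illustration, not a specific product's API: providers are registered as interchangeable callables, and a routing policy picks a model per task, so calling code never hard-wires a single vendor.

```python
from dataclasses import dataclass
from typing import Callable, Dict

# Hypothetical AI gateway: providers are interchangeable callables, and a
# per-task routing table picks the best model without changing calling code.
@dataclass
class ModelGateway:
    providers: Dict[str, Callable[[str], str]]
    routes: Dict[str, str]  # task name -> provider name

    def complete(self, task: str, prompt: str) -> str:
        provider = self.routes.get(task, "default")
        return self.providers[provider](prompt)

# Stub providers stand in for real model APIs.
gateway = ModelGateway(
    providers={
        "default": lambda p: f"[general model] {p}",
        "code":    lambda p: f"[code model] {p}",
    },
    routes={"codegen": "code"},
)

print(gateway.complete("codegen", "write a parser"))   # routed to the code model
print(gateway.complete("summarize", "meeting notes"))  # falls back to default
```

Swapping in a new frontier model then means registering one new provider and updating the routing table, with no changes to downstream applications.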
Finally, treat AI as a core capability by investing heavily in observability, quality, validation, and testing. Development is accelerating. Testing is where discipline matters. You may spend 80% of your time validating and refining and only 20% building. And I would add one more: increasingly, context and state matter. AI systems need memory and continuity so they can improve over time.
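That validation discipline can be as simple as gating releases on a golden set. A minimal sketch, with a stub model and a hypothetical 90% threshold standing in for a real evaluation pipeline:

```python
from typing import Callable, List, Tuple

# Hypothetical validation harness: run a model callable over labeled cases
# and only promote it if the pass rate clears a quality threshold.
def evaluate(model: Callable[[str], str],
             cases: List[Tuple[str, str]],
             threshold: float = 0.9) -> bool:
    passed = sum(1 for prompt, expected in cases if model(prompt) == expected)
    rate = passed / len(cases)
    print(f"pass rate: {rate:.0%} ({passed}/{len(cases)})")
    return rate >= threshold

# Stub model and a tiny golden set, for illustration only.
cases = [("2+2", "4"), ("capital of France", "Paris"), ("3*3", "9")]
model = lambda q: {"2+2": "4", "capital of France": "Paris"}.get(q, "?")

evaluate(model, cases)  # 2 of 3 pass, so this build stays below the bar
```

In practice the golden set grows with every production incident, which is what makes the 80/20 split between validation and building pay off over time.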
Catherine: What are the consequences of keeping business executives out of data and AI initiatives?
Naveen: In many companies, AI strategy is led by data teams. But it is also a business imperative. Without clean, high-quality enterprise data, AI will not be useful in an enterprise setting. Frontier labs train models on the web. Enterprises must post-train models on their own data. At the same time, innovation can happen at the edge. If you have a consistent data and AI stack with proper authentication and access controls, teams can safely build agents and applications without fragmenting the architecture. The key is consistency and governance underneath distributed innovation.
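The "governance underneath distributed innovation" pattern boils down to one shared permission layer that every agent consults. A toy sketch with hypothetical agent and table names:

```python
# Hypothetical shared governance layer: teams build agents independently,
# but every data access is checked against one central permission table.
PERMISSIONS = {
    "sales_agent":   {"crm.accounts", "crm.opportunities"},
    "finance_agent": {"ledger.entries"},
}

def can_read(agent: str, table: str) -> bool:
    """Return True only if the agent is granted access to the table."""
    return table in PERMISSIONS.get(agent, set())

print(can_read("sales_agent", "crm.accounts"))    # True: within its grant
print(can_read("sales_agent", "ledger.entries"))  # False: outside its grant
```

Because the check lives in one place rather than inside each agent, new teams can ship agents at the edge without fragmenting access control.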
Catherine: Which workflows are most ready for agentic ownership?
Naveen: Beyond software development workflows, which are already mature in their use of AI, we are seeing strong success in go-to-market workflows. Marketing and pre-sales teams are using agents to improve outbound reach and targeting, often outperforming manual processes.
Agents also excel when processing large volumes of information to support decisions. Instead of waiting weeks for ad hoc reports from analysts, leaders can ask the data directly and receive insights quickly, across both structured and unstructured data.
Where agents are not yet ready is in deterministic workflows that require 100% consistency and accuracy. AI can assist, but it should not replace human judgment. There is also a risk of what’s called “AI slop”—outputs that sound plausible but lack depth. Leaders must pair adoption with oversight.
Catherine: How do you define success when scaling data and AI?
Naveen: I anchor on four dimensions:
For AI systems, I also focus on controllable inputs. For example, in a sales AI system, what percentage of data entry is now automated by an agent? That input metric should correlate to productivity gains. Or, what percentage of agent recommendations are adopted, and what is their efficacy compared to manual approaches? You can A/B test those. Cycle time reduction and cost savings matter—but only in the context of broader business outcomes.
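The controllable-input metrics above are straightforward to compute. A sketch with invented illustrative numbers (none come from the interview): automation share of data entry, adoption rate of agent recommendations, and the lift of the agent arm over the manual arm in an A/B test.

```python
# Hypothetical input metrics for a sales AI system. All figures are
# made-up examples, not real results.
def rate(numerator: int, denominator: int) -> float:
    """Safe ratio: returns 0.0 when the denominator is zero."""
    return numerator / denominator if denominator else 0.0

entries_total, entries_by_agent = 1200, 900   # data-entry events
recs_made, recs_adopted = 400, 260            # agent recommendations
arm_size = 500                                # deals per A/B arm
agent_wins, manual_wins = 140, 110            # closed deals in each arm

automation_rate = rate(entries_by_agent, entries_total)            # 0.75
adoption_rate   = rate(recs_adopted, recs_made)                    # 0.65
lift            = rate(agent_wins, arm_size) - rate(manual_wins, arm_size)

print(f"automation {automation_rate:.0%}, adoption {adoption_rate:.0%}, "
      f"lift {lift:+.0%}")
```

Tracking these inputs alongside cycle time and cost keeps the dashboard tied to levers the team actually controls.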
Catherine: If you had to give your peers a 12-month start, stop, continue, what would it be?
Naveen: I’d say stop feeding the beast of legacy. Stop treating AI governance and security as an afterthought. And avoid replacing SaaS sprawl with agent sprawl. If agents are not adopted or delivering value, prune them.
Then I’d say take a skills-based or jobs-to-be-done approach. Rather than replacing entire applications, identify specific tasks agents can perform better. Build credibility through focused wins. Map your crawl, walk, run journey. And finally, I would say continue investing in data and governance—especially for unstructured data. And most importantly, stay business-centric. Start with the user, the customer, and the outcome. Technology alone does not create value.
The executive inflection point is about operational readiness, modern architecture, unified governance, disciplined testing, measurable outcomes, and business alignment.
AI becomes an operational capability when it moves from experimentation to accountability—when it shows up in KPIs, budget lines, and architectural decisions. The organizations that recognize this shift early will not simply deploy more AI. They will build enterprises that are structurally ready for it.
To learn more about building an effective operational model, download the Databricks AI Maturity Model.
August 30, 2024 · 6 min read

