As enterprises move from early experimentation with generative AI to building agentic, goal-driven systems, the questions executives are asking have shifted. The conversation is less about what AI can do and much more about how it can be trusted, governed, and integrated into the way the business actually runs.
To explore how leading organizations are preparing for this next stage, I sat down with Craig Wiley, Senior Director of Product at Databricks, as part of our Executive Lens series. This series is designed to surface the strategic shifts shaping enterprise data and AI, through direct conversations with executives who are navigating these changes in real time.
Craig and I talked candidly about what readiness really looks like, how architecture and governance need to evolve, and what milestones leadership teams and boards should be planning for as they begin to scale agentic systems.
Craig Wiley is Senior Director of Product for Artificial Intelligence at Databricks. Previously, he was the founding General Manager of AWS SageMaker and a leader of AI products at Google Cloud. He brings deep experience building scalable machine learning and AI platforms that help enterprises bring data and intelligent systems together in practical, durable ways.
Catherine: You’ve been talking to a lot of CIOs, CDOs, and CTOs lately. What are you seeing change as companies move from GenAI experimentation to more agentic, goal-driven systems?
Craig: Early on, I think a lot of folks were just confused about how to take advantage of GenAI in a useful way. We still hear about a huge percentage of use cases that are very deterministic. People say, “I want to build a system that does this,” whether it’s supply chain, customer service management, or whatever.
The problem was that with early GenAI, building or deploying anything deterministic was really hard. With agents, we can now use GenAI to build nearly deterministic systems, and we can also get much smarter about accuracy.
If you think about what it takes for a CXO to say yes to deploying an agentic solution, it comes down to control and accuracy. Can I control it, and does it actually work? This shift toward agents has made it possible to drive levels of accuracy we just couldn’t get to when everything was prompt-and-response based.
Catherine: What tells you an organization is actually ready for agentic AI?
Craig: The boring answer is the right one: is your data in order?
You can be very excited about agentic AI, but for enterprises it really comes down to context. And when we say context, we mean data and information. Can you deliver the right information to the agent at the right moment in its reasoning?
We see this all the time. Smaller, cheaper, less sophisticated models can perform just as well as the most advanced ones if they get the right context at the right time. There’s no shortcut to that. You need a well-curated data lake with strong metadata. If you don’t have that, it looks a lot like classical machine learning: you say, “Let’s build this model,” two and a half months are spent getting the data in order, and only the last couple of weeks are spent actually building the system. Without the data work, there’s no success.
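To make Craig’s point concrete, here is a minimal, illustrative sketch of what “delivering the right information to the agent at the right moment” can look like in code. Everything here is a hypothetical stand-in (the `Document` type, `search_curated_lake`, `build_prompt`), not part of any specific platform; a real system would use semantic search over a governed data lake with metadata filters rather than keyword matching.

```python
from dataclasses import dataclass

@dataclass
class Document:
    source: str   # which system or table the record came from
    owner: str    # metadata useful for governance and filtering
    text: str     # content the agent can reason over

def search_curated_lake(query: str, top_k: int = 3) -> list[Document]:
    """Stand-in for retrieval against a curated, well-labeled data store."""
    corpus = [
        Document("supply_chain.shipments", "ops", "Shipment for order 1042 delayed 3 days at port."),
        Document("crm.tickets", "support", "Customer Acme reported a late delivery on order 1042."),
        Document("finance.invoices", "finance", "Invoice for order 1042 is unpaid."),
    ]
    # Naive keyword match; a real system would use semantic search plus metadata filters.
    terms = query.lower().replace("?", "").split()
    hits = [d for d in corpus if any(t in d.text.lower() for t in terms)]
    return hits[:top_k]

def build_prompt(question: str) -> str:
    """Assemble context just before the reasoning step, then hand it to the model."""
    context = search_curated_lake(question)
    lines = [f"[{d.source}] {d.text}" for d in context]
    return (
        "Answer using only the context below.\n\n"
        + "\n".join(lines)
        + f"\n\nQuestion: {question}"
    )

if __name__ == "__main__":
    print(build_prompt("What happened with order 1042?"))
```

The interesting part is not the retrieval code itself but the dependency it exposes: the quality of the answer is bounded by the quality and labeling of whatever `search_curated_lake` can reach, which is exactly the data work Craig describes.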
Catherine: A lot of organizations aren’t as mature with their data as they’d like to be. If an executive looks at their environment and thinks, “This is a mess, where do I even start?” what have you seen work?
Craig: There are really two paths.
One is bottom-up. You look at all of your data and say, “How do I get this into a good place?” The good news is the tools have improved dramatically. Moving data out of legacy systems is easier, and GenAI can even help write some of the code to do that.
The other path is use-case driven. If a CEO or CIO says, “We have a big agentic ambition and we want to do X,” and the data is a mess, you can start by asking: what data do I actually need for this use case? Then you go find those pieces, modernize them, and bring them forward in service of that goal.
Neither approach is universally better. Bottom-up gives you more flexibility later. Use-case first can be faster when the problem is existential. The only real mistake is not giving the data the time and attention it needs.
Catherine: Where are early adopters focusing right now? What kinds of use cases are you seeing gain traction?
Craig: A year ago, a lot of early adopters were leaning into marketing and other use cases where the generative nature of the models wasn’t a liability. Now, because of things like tool calling and better accuracy, customers can go after much more. People are still very chat-centric. “I want my employees to talk to something.” “I want customers to talk to something.”
But the real excitement I’m seeing is around automation and workflow optimization. I talked to a large bank recently that’s trying to agentify their entire loan origination process. That used to be hours of humans going through documents. Now they’re hoping to make it fully agentically run, with tight human oversight. That’s a much more compelling outcome than just another chatbot.
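As an illustration of the “tight human oversight” Craig describes, here is a hypothetical sketch of one step in an automated document workflow with a human approval gate. The function names, fields, and confidence threshold are assumptions for illustration, not a description of the bank’s actual pipeline; in practice the extraction would be a model call and exceptions would flow into a review queue.

```python
def extract_loan_fields(document_text: str) -> dict:
    """Stand-in for a model call that pulls structured fields from a loan document."""
    return {"applicant": "Acme LLC", "amount": 250_000, "confidence": 0.72}

def requires_human_review(fields: dict, threshold: float = 0.9) -> bool:
    """Keep a human in the loop whenever the agent's confidence is low."""
    return fields["confidence"] < threshold

def process_application(document_text: str) -> str:
    fields = extract_loan_fields(document_text)
    if requires_human_review(fields):
        return f"Routed to a reviewer: {fields['applicant']} (confidence {fields['confidence']:.2f})"
    return f"Cleared for automated underwriting: {fields['applicant']}"

print(process_application("...scanned loan application..."))
```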
Catherine: How are leaders rethinking architecture and governance as systems become more autonomous?
Craig: For decades, we’ve focused on managing structured data and making sure the right people have access and the wrong people don’t. Now we have to think about that for unstructured data too, and we have to think about agents as new entities. How do I make sure these agents have access to the right data at the right time?
You also have to think about the user on the other end of the agent. A classic example is building a chatbot on top of Jira. Jira and similar systems often contain confidential information. If access isn’t governed, anyone could surface that information. So it’s not just about what the agent can access. It’s also about what the agent can return based on who’s asking. The building blocks exist, but governance has to be treated as a first-class problem, not an afterthought.
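To illustrate the distinction Craig draws between what the agent can access and what it can return based on who’s asking, here is a minimal sketch of user-scoped filtering inside a tool the agent calls. The ticket store, the group mapping, and the policy are all hypothetical; the point is only that the permission check is evaluated against the end user, not against the agent.

```python
from dataclasses import dataclass

@dataclass
class Ticket:
    key: str
    summary: str
    visibility: str   # "public" or "restricted"

TICKETS = [
    Ticket("OPS-101", "Upgrade build runners", "public"),
    Ticket("HR-7", "Compensation review for Q3", "restricted"),
]

USER_GROUPS = {
    "alice": {"engineering"},
    "bob": {"engineering", "hr"},
}

def can_view(user: str, ticket: Ticket) -> bool:
    """Policy check evaluated per end user, not per agent."""
    if ticket.visibility == "public":
        return True
    return "hr" in USER_GROUPS.get(user, set())

def agent_search_tool(user: str, query: str) -> list[str]:
    """Tool the agent calls; results are filtered for the requesting user."""
    hits = [t for t in TICKETS if query.lower() in t.summary.lower()]
    return [f"{t.key}: {t.summary}" for t in hits if can_view(user, t)]

print(agent_search_tool("alice", "review"))  # [] -- the restricted ticket stays hidden
print(agent_search_tool("bob", "review"))    # ['HR-7: Compensation review for Q3']
```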
Catherine: This sounds a lot like identity and access management. How should leaders think about that as they prepare?
Craig: Fundamentally, it is identity and access management, but with a new class of identity: agents.
If you don’t have strong identity and access policies, the world is about to get a lot harder. If you do, this fits more naturally.
A simple way to think about it: if your identity systems and your documentation are in good shape, it becomes much easier to point an agent at them and move quickly.
Catherine: Over the next year or two, what should leadership teams be planning for as agentic systems scale?
Craig: A lot of companies are stuck on the build versus buy question. If I were a CEO, I’d want clarity on that. My view is that you should be able to build. I can’t imagine running a large company and outsourcing all of my software development.
If you have developers, you should plan on building this muscle. In the near term, I care much less about ROI and much more about whether my people can build and deliver these systems. Practice comes before competition. Get the talent right in the first six months. In six to twelve months, build things you’re proud of. After that, start driving real business outcomes.
There are times to buy. If the functionality isn’t central to your differentiation, then consider buying it. But if you already build software to differentiate your company, your teams should be building agents to differentiate your company.
Catherine: What’s the biggest misconception you see when companies try agentic AI for the first time?
Craig: Dismissal after failure.
They build something, it answers wrong once, and they say, “See? I told you it would be wrong. I’m done.” That’s not how growth works. If it was wrong, ask why. Fix the root cause and move forward.
GenAI felt easy at the beginning, so people expect it to always be easy. But building great AI systems is hard. You’re going to have failures. Success is about continuous improvement, not getting it right the first time.
I gave a talk a couple of years ago where a global financial services firm discussed an agent they had built to help call center employees onboard faster. I asked how they measured success. The response was, “That wasn’t the point. The point was to get my team experience building.”
That mindset stuck with me. Companies that show up with that attitude are the ones that are going to win.
Catherine: The growth mindset.
Craig: Exactly.
What stood out to me most from this conversation is that agentic AI doesn’t reward shortcuts. The organizations that move fastest aren’t skipping the hard parts. They’re doing the unglamorous work around data, identity, governance, and documentation, and they’re investing early in building internal capability.
Agentic systems don’t just change what technology can do. They raise the bar on how prepared an organization needs to be to use it well.
To learn more about building an effective operating model, download the Databricks AI Maturity Model.