
Getting AI Governance Right Without Slowing Everything Down

David Meyer, SVP Product, shares how leading enterprises balance speed, control, and trust as agentic AI scales

Published: January 30, 2026

Data Strategy · 6 min read

Summary

  • Foundational governance enables teams to move faster without sacrificing visibility or control.
  • Agent governance works best when it extends established data and engineering practices.
  • AI delivers durable value only when treated as a continuously managed production system.

As enterprises move from AI experimentation to scale, governance has become a board-level concern. The challenge for executives is no longer whether governance matters, but how to design it in a way that enables speed, innovation, and trust at the same time.

To explore how that balance is playing out in practice, I sat down with David Meyer, Senior Vice President of Product at Databricks. Working closely with customers across industries and regions, David has a clear view into where organizations are making real progress, where they are getting stuck, and how today’s governance decisions shape what’s possible tomorrow.

What stood out in our conversation was his pragmatism. Rather than treating AI governance as something new or abstract, David consistently returned to first principles: engineering discipline, visibility, and accountability.

AI Governance as a Way to Move Faster

Catherine Brown: You spend a lot of time with customers across industries. What’s changing in how leaders are thinking about governance as they plan for the next year or two?

David Meyer: One of the clearest patterns I see is that governance challenges are both organizational and technical, and the two are tightly connected. On the organizational side, leaders are trying to figure out how to let teams move quickly without creating chaos.

The organizations that struggle tend to be overly risk averse. They centralize every decision, add heavy approval processes, and unintentionally slow everything down. Ironically, that often leads to worse outcomes, not safer ones.

What’s interesting is that strong technical governance can actually unlock organizational flexibility. When leaders have real visibility into what data, models, and agents are being used, they don’t need to control every decision manually. They can give teams more freedom because they understand what’s happening across the system. In practice, that means teams don’t need to ask permission for every model or use case—access, auditing, and updates are handled centrally, and governance happens by design rather than by exception.

Catherine: Many organizations seem stuck between moving too fast and locking everything down. Where do you see companies getting this right?

David: I usually see two extremes.

On one end, you have companies that decide they’re “AI first” and encourage everyone to build freely. That works for a little while. People move fast, there’s a lot of excitement. Then you blink, and suddenly you’ve got thousands of agents, no real inventory, no idea what they’re costing, and no clear picture of what’s actually running in production.

On the other end, there are organizations that try to control everything up front. They put a single choke point in place for approvals, and the result is that almost nothing meaningful ever gets deployed. Those teams usually feel constant pressure that they’re falling behind.

The companies that are doing this well tend to land somewhere in the middle. Within each business function, they identify people who are AI-literate and can guide experimentation locally. Those people compare notes across the organization, share what’s working, and narrow the set of recommended tools. Going from dozens of tools down to even two or three makes a much bigger difference than people expect.

Agents Aren’t as New as They Seem

Catherine: One thing you said earlier really stood out. You suggested that agents aren’t as fundamentally different as many people assume.

David: That’s right. Agents feel new, but a lot of their characteristics are actually very familiar.

They cost money continuously. They expand your security surface area. They connect to other systems. Those are all things we’ve dealt with before.

We already know how to govern data assets and APIs, and the same principles apply here. If you don’t know where an agent exists, you can’t turn it off. If an agent touches sensitive data, someone needs to be accountable for that. A lot of organizations assume agent systems require an entirely new rulebook. In reality, if you borrow proven lifecycle and governance practices from data management, you’re most of the way there.
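
As a rough sketch of what borrowing those lifecycle practices might look like, the hypothetical inventory below gives every agent an accountable owner, a declared data footprint, and an off switch. None of these names come from a real system.

```python
# Hypothetical agent inventory: the same lifecycle metadata we already keep
# for data assets and APIs, applied to agents.
from dataclasses import dataclass

@dataclass
class AgentRecord:
    name: str
    owner: str            # someone accountable, especially for sensitive data
    datasets: list[str]   # what the agent touches
    enabled: bool = True

class AgentInventory:
    def __init__(self) -> None:
        self._agents: dict[str, AgentRecord] = {}

    def register(self, record: AgentRecord) -> None:
        self._agents[record.name] = record

    def disable(self, name: str) -> None:
        # You can only turn an agent off if you know it exists.
        self._agents[name].enabled = False

    def touching(self, dataset: str) -> list[AgentRecord]:
        # Incident response: find every agent that reads a given dataset.
        return [a for a in self._agents.values() if dataset in a.datasets]

inventory = AgentInventory()
inventory.register(AgentRecord("invoice-helper", owner="finance-team",
                               datasets=["invoices", "customer_pii"]))
for agent in inventory.touching("customer_pii"):
    inventory.disable(agent.name)  # a kill switch you can actually reach
```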

Catherine: If an executive asked you for a simple place to start, what would you tell them?

David: I’d start with observability.

Meaningful AI almost always depends on proprietary data. You need to know what data is being used, which models are involved, and how those pieces come together to form agents.

A lot of companies are using multiple model providers across different clouds. When those models are managed in isolation, it becomes very difficult to understand cost, quality, or performance. When data and models are governed together, teams can test, compare, and improve much more effectively.

That observability matters even more because the ecosystem is changing so fast. Leaders need to be able to evaluate new models and approaches without rebuilding their entire stack every time something shifts.
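
One minimal version of that observability layer is sketched below, with invented provider names and a crude cost proxy: every model call, whichever provider serves it, passes through a single instrumented path so cost, latency, and quality can be compared side by side.

```python
# Illustrative observability shim: one logged path for every model call,
# regardless of provider or cloud. Providers and metrics are made up.
import time

USAGE_LOG = []  # in practice, a governed table you can query and compare

def call_model(provider: str, model: str, prompt: str, backend) -> str:
    """Wrap any provider's client so usage is visible in one place."""
    start = time.monotonic()
    response = backend(prompt)  # the actual provider call
    USAGE_LOG.append({
        "provider": provider,
        "model": model,
        "latency_s": round(time.monotonic() - start, 4),
        "prompt_chars": len(prompt),      # crude cost proxy for the sketch
        "response_chars": len(response),
    })
    return response

# Two hypothetical providers, managed through the same instrumented path:
call_model("provider_a", "model-x", "What changed in Q3?", lambda p: "answer A")
call_model("provider_b", "model-y", "What changed in Q3?", lambda p: "answer B")
print(USAGE_LOG)
```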

Catherine: Where are organizations making fast progress, and where do they tend to get stuck?

David: Knowledge-based agents are usually the fastest to stand up. You point them at a set of documents and suddenly people can ask questions and get answers. That’s powerful. The problem is that many of these systems degrade over time. Content changes. Indexes fall out of date. Quality drops. Most teams don’t plan for that.

Sustaining value means thinking beyond the initial deployment. You need systems that continuously refresh data, evaluate outputs, and improve accuracy over time. Without that, a lot of organizations see a great first few months of activity, followed by declining usage and impact.
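
A sketch of what that ongoing maintenance could look like, with hypothetical thresholds and stand-in retrieval: a scheduled job that refreshes the index, replays a fixed evaluation set, and raises an alert when answer quality slips.

```python
# Hypothetical maintenance loop for a knowledge-based agent: refresh the
# index, re-run a fixed evaluation set, and alert when quality degrades.

EVAL_SET = [
    ("What is our refund window?", "30 days"),
    ("Who approves vendor contracts?", "procurement"),
]
QUALITY_FLOOR = 0.8  # illustrative threshold

def refresh_index(documents: list[str]) -> list[str]:
    # Stand-in for re-embedding or re-indexing updated content.
    return list(documents)

def answer(index: list[str], question: str) -> str:
    # Stand-in for retrieval + generation: return the best-matching document.
    words = question.lower().split()
    return max(index, key=lambda doc: sum(w.strip("?") in doc for w in words))

def evaluate(index: list[str]) -> float:
    hits = sum(expected in answer(index, q) for q, expected in EVAL_SET)
    return hits / len(EVAL_SET)

def nightly_job(documents: list[str]) -> None:
    index = refresh_index(documents)  # content changes; keep up with it
    score = evaluate(index)           # measure quality, don't assume it
    if score < QUALITY_FLOOR:
        print(f"ALERT: knowledge agent quality at {score:.0%}")

nightly_job([
    "refund policy: customers have 30 days to request a refund",
    "vendor contracts are approved by procurement",
])
```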

Treating Agentic AI Like an Engineering Discipline

Catherine: How are leaders balancing speed with trust and control in practice?

David: The organizations that do this well treat agentic AI as an engineering problem. They apply the same discipline they use for software: continuous testing, monitoring, and deployment. Failures are expected. The goal isn’t to prevent every issue—it’s to limit the blast radius and fix problems quickly. When teams can do that, they move faster and with more confidence. If nothing ever goes wrong, you’re probably being too conservative.
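
One familiar pattern for limiting blast radius is a circuit breaker around each agent: individual failures are tolerated, but after a few in a row the agent is paused and traffic degrades to a safe fallback. The sketch below is illustrative, not a prescribed implementation.

```python
# Illustrative circuit breaker: expect failures, but cap how far they spread.

class CircuitBreaker:
    def __init__(self, max_consecutive_failures: int = 3):
        self.max_failures = max_consecutive_failures
        self.failures = 0
        self.open = False  # open circuit = agent paused

    def call(self, agent_fn, fallback_fn, request):
        if self.open:
            return fallback_fn(request)  # degrade gracefully, don't cascade
        try:
            result = agent_fn(request)
            self.failures = 0            # success resets the counter
            return result
        except Exception:
            self.failures += 1
            if self.failures >= self.max_failures:
                self.open = True         # small blast radius: pause this agent
            return fallback_fn(request)

def flaky_agent(request):
    raise RuntimeError("model timeout")  # simulated failure

breaker = CircuitBreaker()
for _ in range(4):
    print(breaker.call(flaky_agent,
                       lambda r: "Routing you to a human.",
                       "cancel my order"))
```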

Catherine: How are expectations around trust and transparency evolving?

David: Trust doesn’t come from assuming systems will be perfect. It comes from knowing what happened after something went wrong. You need traceability—what data was used, which model was involved, who interacted with the system. When you have that level of auditability, you can afford to experiment more.

This is how large distributed systems have always been run. You optimize for recovery, not for the absence of failure. That mindset becomes even more important as AI systems grow more autonomous.
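
A bare-bones trace record, invented here to show the shape rather than any product's actual log format, might capture exactly the three things he names: the data, the model, and the person involved.

```python
# Hypothetical trace record: enough to reconstruct what happened after an
# incident: which data was used, which model, and who interacted with it.
import datetime
import json

def trace(user: str, model: str, datasets: list[str],
          prompt: str, response: str) -> str:
    record = {
        "ts": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "user": user,            # who interacted with the system
        "model": model,          # which model was involved
        "datasets": datasets,    # what data was used
        "prompt": prompt,
        "response": response,
    }
    return json.dumps(record)   # append to an immutable audit store

print(trace("analyst@example.com", "model-x", ["sales_2025"],
            "Summarize Q3 pipeline", "Pipeline grew 12%."))
```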

Building an AI Governance Strategy

Rather than treating agentic AI as a clean break from the past, it is better understood as an extension of disciplines enterprises already know how to run. For executives thinking about what actually matters next, three themes rise to the surface:

  • Use governance to enable speed, not constrain it. The strongest organizations put foundational controls in place so teams can move faster without losing visibility or accountability.
  • Apply familiar engineering and data practices to agents. Inventory, lifecycle management, and traceability matter just as much for agents as they do for data and APIs.
  • Treat AI as a production system, not a one-time launch. Sustained value depends on continuous evaluation, fresh data, and the ability to quickly detect and correct issues.

Together, these ideas point to a clear takeaway: durable AI value doesn’t come from chasing the newest tools or locking everything down, but from building foundations that let organizations learn, adapt, and scale with confidence.

To learn more about building an effective operating model, download the Databricks AI Maturity Model.
