The foundation of AI scalability: one team, one platform, one operating model

How Albertsons is building a centralized AI core to scale across 2,300 stores

by Aly McGue

  • Scaling AI is an architectural decision that needs to solve the fragmentation challenge.
  • Reusable accelerators and shared governance let business teams move 10x faster without rebuilding foundations.
  • The talent shift is cultural as much as technical: hire for the attitude to learn, experiment, and innovate.

In retail, margin pressure is structural. The companies pulling ahead make faster, more precise decisions across merchandising, labor, and supply chain, and do it consistently across thousands of locations. The question facing most large retailers: are their organizations built to scale AI fast enough to matter?

Albertsons Companies is one of America's largest food and drug retailers, operating approximately 2,300 stores and generating $80 billion in revenue. Sunil Gopinath leads data and AI globally for the company, and also runs Albertsons Companies India, its largest technology and AI hub. His mandate: build the AI and data foundation to turn a great retailer into a data-driven enterprise, at speed and at scale.

The conviction running through our conversation was direct: stop tolerating fragmentation. The companies that connect AI ambition with a strong enterprise foundation will win. Everyone else is running expensive experiments.

Underpinning this strategy is the Databricks Platform, which Albertsons uses across data engineering, ML, governance, and analytics. This shared foundation makes the "one platform" mandate real, giving every team the same starting line rather than a different set of tools.

Building the AI Muscle: Why Centralization Was Non-Negotiable

Aly McGue: How did you move your organization from fragmented, business-unit-owned AI experiments to a centralized AI core team and operating model?

Sunil Gopinath: We stopped tolerating fragmentation and made a firm architectural decision. One team, one platform, one operating model. We organized around four big bets in AI: customer experience, merchandising intelligence, labor, and supply chain. Those gave us strategic focus. The centralized AI core gave us the muscle to execute.

The logic was straightforward. There was a clear organizational need for common horizontal components, things like governance, security and a central repository of reusable models. A dedicated team focused on those building blocks means the application teams don't have to worry about hygiene and foundations. They can focus entirely on making the business better, more predictable, more actionable.

We also have a company-wide governance committee that brings together senior stakeholders and leaders to establish shared, acceptable standards for AI and AI governance. It's collective decision-making at the leadership level. That's what makes it stick.

The franchise model for AI at scale

Aly: What was the strategy for building shared standards, a central platform, and reusable accelerators to drive efficiency across Albertsons while still allowing for local innovation and use cases?

Sunil: The best way to think about it is a franchise model. Common infrastructure, standards, and governance at the center. Local execution and innovation at the edges.

We built reusable accelerators: ingestion pipelines and templates; feature store patterns; model monitoring; performance observability; and governance wrappers. Any team can plug into those and go 10x faster. The whole point of the platform is that it doesn't constrain innovation. It accelerates it.

Our philosophy is that you have to balance innovation with trust and governance, earning that trust from both our employees and our customers. So the standards aren't arbitrary. They reflect what it takes for the business, the merchants, and the customers to actually trust what AI is doing.

Talent that compounds in a changing landscape

Aly: How are you rethinking the skills and leadership required to run this central AI core, and how do you ensure that the platform effectively empowers non-technical teams?

Sunil: Our approach works in three layers: machine learning that predicts, genAI that answers, and agentic AI that acts. All of these are embedded into how our people work.

For technical teams, we've moved to AI-augmented engineering. In 9 months, we've accepted 1.38 million lines of AI-generated code, with over 90% of engineers engaging with AI tools. We have fundamentally changed how fast we can build and ship, and that compounds.

For non-technical teams, we've built low-code dashboards, prompt libraries, and conversational agent generation. We have our own agentic AI platform where even non-tech teams can drag and drop agents. And if they're not comfortable doing that, they can just have a conversation and say, "Build me an agent for monitoring these KPIs," and it will. The goal across both sides is the same: less time hunting for answers, more time making decisions.

On the talent question specifically, we don't just look for technical competency or familiarity with the latest AI tools. We hire for attitude: to learn, to experiment, to innovate. The tools will keep evolving at a record pace. But if those cultural traits are ingrained, people pick them up and run with them.

Discipline at the top

Aly: Who in your executive leadership team is ultimately accountable for the success of the enterprise AI core, and how have your KPIs changed?

Sunil: Ownership sits at the top. For us, AI is a business strategy. Our metrics reflect that: reuse rates across markets, time to deployment, responsible AI compliance, and most importantly, business outcomes linked to AI uplift. If an initiative can't show impact, it doesn't scale. That discipline has to be enforced from the top, and that's what makes AI a real advantage and not just an expensive experiment.

Closing Thoughts

Sunil doesn't describe a gradual evolution toward centralization. He describes a deliberate commitment: one team, one platform, one operating model, with strategic bets that focus the work and reusable accelerators that compound the speed gains.

Merchandising Intelligence is one of four strategic AI priorities, the big bets that Albertsons has committed to as part of its broader enterprise-wide transformation, and it illustrates what the centralized model looks like when it hits a real business problem. The platform is built on Databricks, with Genie at the interaction layer. Merchants can ask complex questions in plain language and get governed, trustworthy answers without writing a query or filing a ticket. Databricks provides the data engineering, ML, and analytics foundation underneath.

For executives wrestling with how to move AI from pockets of experimentation to enterprise capability, Albertsons’ franchise model offers a useful frame: govern the center, free the edges, and make sure every team builds on what's already been proven.

To benchmark your investments and develop your roadmap for embedding AI across your organization and products, download the Databricks State of AI Agents.
