
Announcing Availability of Google’s Gemini Models on Databricks

Google's Gemini

Published: November 5, 2025

Product · 4 min read

Summary

  • Access Google’s Gemini 2.5 Pro and Gemini 2.5 Flash models natively in Databricks
  • Build smarter AI agents on your enterprise data with no setup, no data movement, and full governance
  • Run models securely in Databricks using SQL, Python, and Databricks-native tools

Starting today, you can use Google’s Gemini models natively and securely on the Databricks Data Intelligence Platform. This marks a major milestone for enterprise AI: Databricks now offers secure, unified access to all of the world’s top LLMs, right where your data lives. 

As part of our Week of Agents, this release expands how customers can build, govern, and deploy powerful AI agents securely at scale, bringing the latest Gemini models into the same trusted, governed environment as your data and workflows.

With this release, any team can now:

  • Apply Gemini models to their data with Batch Inference using ai_query
  • Build intelligent agents using Agent Bricks that can reason over your private data in real time
  • Access Gemini models securely while maintaining compliance, with built-in governance and billing observability

This makes it even easier to deploy advanced conversational agents, automate document analysis, and accelerate business reasoning by applying the latest Gemini models safely and efficiently, right where your data is stored in Databricks.

Gemini models are now generally available in the Databricks Foundation Model API, which allows users to call Gemini models hosted on Vertex AI from within SQL, as an API endpoint in Model Serving, or via Agent Bricks.

One way to leverage Gemini models on Databricks is as a built-in operator in SQL or Python. This dramatically simplifies applying LLMs directly to enterprise data and can automate routine tasks like analyzing contracts, PDFs, transcripts, or images. When you run these queries, Databricks automatically scales Gemini model capacity in the backend to handle everything from a handful of rows to millions, ensuring fast and reliable results without extra setup.

Figure 1: Try Gemini models with ai_query in your workspace today!
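As a rough illustration of the pattern in Figure 1, here is a minimal sketch of a batch inference call from a Databricks notebook. The endpoint name databricks-gemini-2-5-flash and the support.tickets table are assumptions for illustration only; check the Serving page in your workspace for the exact Foundation Model endpoint name.

```python
# A minimal sketch of batch inference with ai_query from a Databricks notebook.
# The endpoint name and table below are assumptions, not confirmed names.
summaries = spark.sql("""
  SELECT
    ticket_id,
    ai_query(
      'databricks-gemini-2-5-flash',
      CONCAT('Summarize this support ticket in one sentence: ', ticket_text)
    ) AS summary
  FROM support.tickets  -- hypothetical table for illustration
""")
display(summaries)
```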

Additionally, Gemini models are available via our real-time APIs at scale! You can use either the OpenAI chat completions client or our REST API.

Figure 2: Use Gemini models to build a real-time agent with tools in Python
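In place of the screenshot in Figure 2, here is a minimal sketch of a real-time call through the OpenAI-compatible chat completions client, assuming a Databricks personal access token and a Gemini serving endpoint named databricks-gemini-2-5-flash (verify the exact name in your workspace). Tool use is sketched further below.

```python
# A minimal sketch of a real-time call to a Gemini model through the
# OpenAI-compatible client against a Databricks Model Serving endpoint.
# The endpoint name and workspace URL are placeholders.
from openai import OpenAI

client = OpenAI(
    api_key="<DATABRICKS_TOKEN>",                           # personal access token
    base_url="https://<workspace-host>/serving-endpoints",  # your workspace URL
)

response = client.chat.completions.create(
    model="databricks-gemini-2-5-flash",  # assumed endpoint name; verify in Serving
    messages=[
        {"role": "system", "content": "You are a concise support assistant."},
        {"role": "user", "content": "Classify this issue and suggest a next step: my router keeps rebooting."},
    ],
    max_tokens=256,
)
print(response.choices[0].message.content)
```

Swap in the Pro endpoint (however your workspace names it) for heavier reasoning or long-context tasks; the calling pattern stays the same.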

When should I use Google’s Gemini models?

You can now access two of Google's most advanced models, Gemini 2.5 Pro and Gemini 2.5 Flash, directly from the Databricks Data Intelligence Platform.

Gemini 2.5 Flash: Optimized for Reasoning, Tools, and Cost-Efficient Agents

Gemini 2.5 Flash is distinct in the market for blending high levels of intelligence with incredibly low latency and high throughput. It is one of Google’s hybrid reasoning models, designed to “think before it speaks,” allowing developers to set the level of “thinking” according to the task. It excels at structured problem solving, tool use (like Python or calculators), and step-by-step logic.


Use Gemini 2.5 Flash when you need to:

  • Deliver reliable, step-by-step reasoning for structured queries
  • Automate troubleshooting or decision-making in real time
  • Deeply analyze both text and images in a single workflow
  • Deploy lightweight, cost-effective agents at scale across support, operations, or logistics
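To make the tool-use point concrete, here is a hedged sketch of OpenAI-style function calling against a Databricks serving endpoint. It assumes the endpoint supports function calling and is named databricks-gemini-2-5-flash; the lookup_account tool is purely hypothetical.

```python
# A hedged sketch of tool use (function calling) with Gemini 2.5 Flash via the
# OpenAI-compatible client. Endpoint name and tool are illustrative assumptions.
from openai import OpenAI

client = OpenAI(
    api_key="<DATABRICKS_TOKEN>",
    base_url="https://<workspace-host>/serving-endpoints",
)

tools = [{
    "type": "function",
    "function": {
        "name": "lookup_account",  # hypothetical tool for illustration
        "description": "Fetch a customer's account details by customer id.",
        "parameters": {
            "type": "object",
            "properties": {"customer_id": {"type": "string"}},
            "required": ["customer_id"],
        },
    },
}]

response = client.chat.completions.create(
    model="databricks-gemini-2-5-flash",  # assumed endpoint name
    messages=[{"role": "user", "content": "Why was customer 1042 overcharged last month?"}],
    tools=tools,
)
# If the model chooses to call the tool, the call arrives here for your code to execute.
print(response.choices[0].message.tool_calls)
```

In a real agent you would execute the returned tool call, append the result as a tool message, and let the model continue reasoning with that context.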

Gemini 2.5 Pro: Flagship Model for Complex Tasks, Code, and Long Context

Gemini 2.5 Pro is one of Google’s most capable large language models. It delivers state-of-the-art performance in advanced reasoning and coding, offers an industry-leading 1M-token context window for long-context processing, and is natively multimodal.


Use Gemini 2.5 Pro when you need to:

  • Perform advanced text processing tasks like summarization, entity extraction, and classification across very large datasets with its industry-leading 1M-token context window (see the sketch after this list)
  • Build agentic applications that automate complex business workflows requiring advanced reasoning
  • Extract insights from massive multimodal document corpora, such as legal documents, contracts, or research papers
  • Generate high-quality code and technical content to support developer productivity and internal tools
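As referenced in the list above, here is a minimal sketch of batch entity extraction with ai_query, assuming a Pro endpoint named databricks-gemini-2-5-pro and a hypothetical legal.contracts table; adjust both to match your workspace.

```python
# A minimal sketch of batch entity extraction with Gemini 2.5 Pro via ai_query.
# The endpoint name and table are assumptions for illustration.
extracted = spark.sql("""
  SELECT
    contract_id,
    ai_query(
      'databricks-gemini-2-5-pro',
      CONCAT(
        'Extract the parties, effective date, and renewal terms from this ',
        'contract and answer as compact JSON: ', contract_text
      )
    ) AS contract_entities
  FROM legal.contracts  -- hypothetical table for illustration
""")
display(extracted)
```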

What can our customers build with Gemini?

Real-time customer support agents
With Gemini 2.5 Flash, enterprises can build chatbots that respond in milliseconds while pulling in enterprise data securely. For example, a telco company can deliver automated support that classifies an issue, retrieves account information, and suggests a fix—all before a human agent needs to step in.

Multi-modal product intelligence
Gemini 2.5 Pro enables workflows that combine images, text, and structured data. A retailer can analyze product photos, user reviews, and inventory data together to detect defects or predict sales trends.

Decision automation at enterprise scale
Using Databricks orchestration and governance, organizations can build agents that run thousands of structured reasoning tasks per minute—such as categorizing transactions, scoring risks, or generating compliance reports—balancing Flash for latency with Pro for accuracy.

