Starting today, you can use Gemini 3 Pro, the latest frontier model from our partners at Google, natively and securely on Databricks. This state-of-the-art release is Google’s highest-quality model across agentic reasoning, coding, deep research, and visual reasoning public benchmarks. Increasingly, enterprise users are building agents with Databricks Agent Bricks to simplify management and improve quality. Now you can build enterprise agents with Gemini 3 Pro directly in Databricks using Agent Bricks, then deploy them on your data, all within the Databricks security perimeter.
This release expands your capabilities for building, governing, and deploying powerful AI agents securely at scale, bringing the latest Gemini models into the same trusted, governed environment as your data and workflows. Databricks is now the only enterprise platform where you can natively access all frontier models from OpenAI, Anthropic, and Google Gemini.
Using Gemini models in Databricks
Gemini 3 Pro excels at visual reasoning, automated document analysis, agentic reasoning, and business data processing use cases. To let Databricks customers leverage these models’ standout multimodal capabilities, starting today our REST API for Gemini supports images, and the DBSQL ai_query function supports large-scale image inference directly on your data in the lakehouse.
Using the model in SQL
Our built-in ai_query operator in DBSQL dramatically simplifies applying LLMs directly to enterprise data and can automate routine tasks like analyzing contracts, PDFs, transcripts, or images. When you run these queries, Databricks automatically scales Gemini model capacity in the backend to handle everything from a handful of rows to millions, ensuring fast, reliable results without extra setup.
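For example, a batch summarization job over a contracts table might look like the sketch below. The endpoint name databricks-gemini-3-pro and the table name are assumptions; substitute the Gemini serving endpoint and table in your own workspace.

```sql
-- Hypothetical endpoint and table names: replace with your own.
SELECT
  contract_id,
  ai_query(
    'databricks-gemini-3-pro',
    CONCAT('Summarize the key obligations in this contract: ', contract_text)
  ) AS summary
FROM main.legal.contracts;
```

Databricks handles batching and scaling behind this single query, so the same statement works on ten rows or ten million.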
Figure 1: Try Gemini models with ai_query in your workspace today
Figure 2: Use Gemini models with images from Unity Catalog
Realtime API
Additionally, Gemini models are available at scale via our real-time APIs. You can use either the OpenAI chat completions client or our REST API.
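As a sketch of the REST path, the request body follows the OpenAI-compatible chat completions format. The endpoint name databricks-gemini-3-pro and the workspace URL in the comments are assumptions; use the names from your own workspace.

```python
import json

# Sketch of an OpenAI-compatible chat completions request body for a
# Databricks serving endpoint. The endpoint name below is an assumption;
# substitute the Gemini endpoint in your workspace.
def build_chat_request(prompt: str, model: str = "databricks-gemini-3-pro") -> str:
    body = {
        "model": model,
        "messages": [
            {"role": "system", "content": "You are a helpful enterprise assistant."},
            {"role": "user", "content": prompt},
        ],
    }
    return json.dumps(body)

payload = build_chat_request("Classify this support ticket: login page times out.")
# POST the payload to
#   https://<workspace-host>/serving-endpoints/<endpoint>/invocations
# with an "Authorization: Bearer <token>" header, or point the OpenAI
# client at base_url="https://<workspace-host>/serving-endpoints".
```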
Figure 3: Use Gemini models to build a real-time agent with tools in Python
You can now access Google’s latest Gemini 3 model directly from the Databricks Data Intelligence Platform.
Gemini 3 Pro is a state-of-the-art frontier model from Google that performs well across a wide range of public benchmarks, setting new standards for visual reasoning, agentic reasoning, coding, and conversational Q&A. It excels at demanding tasks, especially when combined with the context of your enterprise data and use cases on Databricks.

Use Gemini 3 Pro when you need to:
- Reason over images, documents, and other visual inputs
- Run complex agentic workflows that plan and use tools
- Tackle demanding coding and deep research tasks
- Deliver high-quality conversational Q&A grounded in enterprise context
Alongside the frontier Gemini 3 Pro model, Gemini 2.5 Flash continues to excel as one of the fastest models on the market, delivering ultra-low latency and high throughput.
This model is one of Google’s hybrid reasoning models, designed to “think before it speaks,” and it lets developers set the level of “thinking” according to the task. It excels at structured problem-solving, tool use (such as Python or calculators), and step-by-step logic.

Use Gemini 2.5 Flash when you need to:
- Serve ultra-low-latency, high-throughput workloads
- Work through structured problems with step-by-step logic
- Call tools such as Python or calculators mid-task
- Tune the model’s level of “thinking” to match the task
Multi-modal product intelligence
Gemini 3 Pro enables workflows that combine images, text, and structured data. A retailer can analyze product photos, user reviews, and inventory data together to detect defects or predict sales trends.
Decision automation at enterprise scale
Using Databricks orchestration and governance, organizations can build agents that run thousands of structured reasoning tasks per minute—such as categorizing transactions, scoring risks, or generating compliance reports—balancing Flash for latency with Pro for accuracy.
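The Flash-versus-Pro trade-off can be expressed as a simple routing rule. The endpoint names and task labels below are illustrative assumptions, not a Databricks API:

```python
# Hypothetical endpoint names and task labels; a simple rule that routes
# high-volume tasks to Gemini 2.5 Flash for latency and accuracy-critical
# tasks to Gemini 3 Pro, per the trade-off described above.
FLASH = "databricks-gemini-2-5-flash"
PRO = "databricks-gemini-3-pro"

LATENCY_SENSITIVE = {"categorize_transaction", "classify_ticket"}

def pick_endpoint(task_type: str) -> str:
    # Default to Pro so accuracy-critical work (risk scoring,
    # compliance reports) never silently downgrades.
    return FLASH if task_type in LATENCY_SENSITIVE else PRO
```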
Real-time customer support agents
With Gemini 2.5 Flash, Enterprises can build chatbots that respond in milliseconds while pulling in enterprise data securely. For example, a telco company can deliver automated support that classifies an issue, retrieves account information, and suggests a fix—all before a human agent needs to step in.
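The classify-retrieve-suggest flow above can be sketched in plain Python. Every function here is an illustrative stub standing in for a served model call or a governed data lookup, not a real API:

```python
# Minimal sketch of the classify -> retrieve -> suggest flow described
# above. All names and logic are illustrative stand-ins; in production
# each step would call a served Gemini model or governed enterprise data.
def classify_issue(message: str) -> str:
    # Stand-in for a low-latency Gemini 2.5 Flash classification call.
    keywords = {"bill": "billing", "charge": "billing", "signal": "network", "slow": "network"}
    for word, label in keywords.items():
        if word in message.lower():
            return label
    return "general"

def retrieve_account(customer_id: str) -> dict:
    # Stand-in for a governed lookup against enterprise data.
    return {"customer_id": customer_id, "plan": "unlimited"}

def suggest_fix(issue: str, account: dict) -> str:
    fixes = {
        "billing": f"Review recent charges on the {account['plan']} plan.",
        "network": "Restart the device and re-check signal strength.",
    }
    return fixes.get(issue, "Route to a human agent.")

def handle_ticket(customer_id: str, message: str) -> str:
    issue = classify_issue(message)
    account = retrieve_account(customer_id)
    return suggest_fix(issue, account)
```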
