Company Blog

Databricks invests in Mistral AI and integrates Mistral AI’s models into the Databricks Data Intelligence Platform


Sharing a belief that open source solutions will foster innovation and transparency in generative AI development, Databricks has announced a partnership and participation in the Series A funding of Mistral AI, one of Europe's leading providers of generative AI solutions. With this deeper partner relationship, Databricks and Mistral AI now offer Mistral AI’s open models natively integrated within the Databricks Data Intelligence Platform. Databricks customers can now access Mistral AI’s models in the Databricks Marketplace, interact with these models in the Mosaic AI Playground, use them as optimized model endpoints through Mosaic AI Model Serving, and customize them using their own data through adaptation.

Since the start of this year, close to 1,000 enterprises have already leveraged Mistral models on the Databricks platform, making millions of model inferences. With these out-of-the-box integrations, we are making it even easier for enterprises to rapidly leverage Mistral AI’s models for their generative AI applications, without compromising the security, data privacy, and governance that are core to the Databricks platform.

Arthur Mensch, Founder and CEO of Mistral AI, declared: "We are delighted to forge this strategic alliance with Databricks, reaffirming our shared commitment to the portability, openness and accessibility of generative artificial intelligence for all. By seamlessly integrating our models into Databricks' data intelligence platform, we are advancing our shared mission of democratizing AI. This integration marks an important step in extending our innovative solutions to Databricks' vast customer base and continues to drive innovation and significant advances in AI. Together, we are committed to delivering accessible and transformative AI solutions to users worldwide."

Introducing Mistral AI’s Open Models: Mistral 7B and Mixtral 8x7B

Mistral AI’s open models are fully integrated into the Databricks platform.

Mistral 7B is a small yet powerful dense transformer model, trained with 8k context length. It’s very efficient to serve, due to its relatively small size of 7 billion parameters, and its model architecture that leverages grouped query attention (GQA) and sliding window attention (SWA). To learn more about Mistral 7B, check out Mistral’s blog post.
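As a rough illustration of the sliding window attention idea, each query position attends only to a bounded window of preceding positions rather than the full prefix. The sketch below uses toy sizes (Mistral 7B's actual window is much larger) and shows only the masking pattern, not a full attention implementation:

```python
# Illustrative sketch of a causal sliding-window attention mask.
# Mistral 7B combines grouped query attention (GQA) with sliding window
# attention (SWA); this shows only the SWA masking pattern, with toy sizes.

def sliding_window_mask(seq_len: int, window: int) -> list[list[bool]]:
    """mask[i][j] is True when query position i may attend to key position j."""
    return [
        [j <= i and i - j < window for j in range(seq_len)]
        for i in range(seq_len)
    ]

mask = sliding_window_mask(seq_len=6, window=3)
for row in mask:
    print("".join("x" if allowed else "." for allowed in row))
# Prints:
# x.....
# xx....
# xxx...
# .xxx..
# ..xxx.
# ...xxx
```

Because each position attends to at most `window` previous positions, per-token attention cost stays bounded as the sequence grows, which is part of what makes the model efficient to serve.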

Mixtral 8x7B is a sparse mixture of experts model (SMoE), supporting a context length of 32k, and capable of handling English, French, Italian, German, and Spanish. It outperforms Llama 2 70B on multiple benchmarks, while boasting faster inference thanks to its SMoE architecture which activates only 12 billion parameters during inference, out of a total of 45 billion trained parameters. To learn more about Mixtral 8x7B check out our previous blog post.
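The SMoE routing can be sketched as follows: for each token, a router scores all experts and activates only the top two, so only those experts' parameters participate in that token's forward pass. The scores below are illustrative stand-ins, not Mixtral's learned router:

```python
# Toy sketch of sparse mixture-of-experts (SMoE) top-2 routing.
# Mixtral 8x7B routes each token to 2 of 8 experts; the scores here are
# made-up values for illustration, not a real learned router.
import math

def top2_routing(scores: list[float]) -> list[tuple[int, float]]:
    """Pick the two highest-scoring experts and softmax-normalize their weights."""
    top2 = sorted(range(len(scores)), key=lambda i: scores[i], reverse=True)[:2]
    exps = [math.exp(scores[i]) for i in top2]
    total = sum(exps)
    return [(i, e / total) for i, e in zip(top2, exps)]

router_scores = [0.1, 2.3, -0.5, 1.7, 0.0, 0.4, -1.2, 0.9]  # one score per expert
selected = top2_routing(router_scores)
# Only the two selected experts run for this token; the other six stay inactive,
# which is why inference touches far fewer parameters than the full model holds.
```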

Our customers are already seeing the benefits of leveraging Mistral AI’s models:

“At Experian, we’re developing Gen AI models with the lowest rates of hallucination while preserving core functionality. Utilizing the Mixtral 8x7b model on Databricks has facilitated rapid prototyping, revealing its superior performance and quick response times,” said James Lin, Head of AI/ML Innovation at Experian.
“Databricks is driving innovation and adoption for generative AI in the enterprise. Partnering with Mistral on Databricks has delivered impressive results for a RAG-based consumer chatbot, which answers bank-related user queries. Previously, the system was FAQ-based, which could not handle the variation in user queries. The Mistral-based chatbot handles user queries appropriately and increased the accuracy of the system from 80% to 95%," said Luv Luhadia, Global Alliance at Celebal Technologies. "Their cutting-edge technology and expertise have elevated performance for our customers and we are excited to continue collaborating with Mistral and Databricks to push the boundary of what is possible with data and AI."

Using Mistral AI’s Models within Databricks Data Intelligence Platform

Discover Mistral AI models in the Databricks Marketplace

Databricks Marketplace is an open marketplace for data, analytics and AI, powered by the open source Delta Sharing standard. Through the Marketplace, customers can discover Mistral AI’s models, learn about their capabilities, and review examples demonstrating ways to leverage the models across the Databricks platform such as model deployment with Mosaic AI Model Serving, batch inference with Spark, and model inference in SQL using AI Functions. To learn more about the Databricks Marketplace and AI Model Sharing, check out our blog post.

Mistral Model Inference with Mosaic AI Model Serving

Mosaic AI Foundation Model APIs is a Model Serving capability that lets customers access and query Mixtral 8x7B (as well as other state-of-the-art models) through highly optimized model deployments, without having to create and maintain their own endpoints. Check out the Foundation Model APIs docs to learn more.

With Databricks Mosaic AI Model Serving, customers can access Mistral’s models using the same APIs used for other Foundation Models. This lets customers deploy, govern, query, and monitor any Foundation Model across clouds and providers, enabling experimentation and productionization of large language models.
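As a hedged sketch of what querying a served model might look like, the snippet below builds a chat completion request. The endpoint name and URL pattern are assumptions for illustration; consult the Model Serving and Foundation Model APIs docs for the exact interface:

```python
# Sketch: building a chat completion request for a Mosaic AI Model Serving
# endpoint. The workspace URL and endpoint name below are placeholders /
# assumptions, not guaranteed names.
import json

workspace_url = "https://<your-workspace>.cloud.databricks.com"  # placeholder
endpoint_name = "databricks-mixtral-8x7b-instruct"               # assumed name

request = {
    "messages": [
        {"role": "user", "content": "Summarize grouped query attention in one sentence."}
    ],
    "max_tokens": 128,
    "temperature": 0.1,
}

url = f"{workspace_url}/serving-endpoints/{endpoint_name}/invocations"
payload = json.dumps(request)
# An HTTP POST of `payload` to `url` (with a bearer token) would return the
# model's chat completion.
```

Because the request shape is the same across Foundation Models, swapping providers is a matter of changing the endpoint name rather than rewriting application code.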

Customers can also invoke model inference directly from Databricks SQL using the ai_query SQL function. To learn more, check out the ai_query documentation.
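For illustration, such a call might look like the following. The endpoint name and source table are assumptions; see the ai_query documentation for the exact signature:

```python
# Sketch: invoking Mistral model inference from Databricks SQL via ai_query.
# The endpoint name and the product_reviews table are illustrative assumptions.
summary_sql = """
SELECT
  review_text,
  ai_query(
    'databricks-mixtral-8x7b-instruct',
    CONCAT('Summarize this review in one sentence: ', review_text)
  ) AS summary
FROM product_reviews
"""
# In a Databricks notebook this would run as: display(spark.sql(summary_sql))
```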

Mistral Model adaptation with Mosaic AI 

Mosaic AI offers customers an easy and cost-effective way to create their own custom models. Customers can adapt Mistral AI’s models, as well as other foundation models, using their own proprietary datasets. The goal of model adaptation is to deepen a model’s understanding of a particular domain or use case, build in a company’s vernacular, and ultimately improve performance on a specific task. Once a model is tuned or adapted, a user can quickly deploy it for inference using Mosaic AI Model Serving, benefit from cost-efficient serving, and gain ownership of differentiated model IP (intellectual property).
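The inputs to such an adaptation run can be sketched as a configuration like the one below. The field names and values are illustrative assumptions for exposition, not the actual Mosaic AI fine-tuning API:

```python
# Illustrative sketch of what a model adaptation (fine-tuning) job needs.
# All field names and paths below are assumptions for exposition only.
adaptation_config = {
    "base_model": "mistralai/Mixtral-8x7B-v0.1",             # open model to adapt
    "train_data_path": "/Volumes/main/default/train.jsonl",  # proprietary data (placeholder)
    "task": "INSTRUCTION_FINETUNE",
    "training_duration": "3ep",                              # e.g. three epochs
    "learning_rate": 1e-5,
    "register_to": "main.default.mixtral_customized",        # where the tuned model lands
}

def validate(config: dict) -> list[str]:
    """Minimal sanity check: report any missing required fields."""
    required = {"base_model", "train_data_path", "register_to"}
    return sorted(required - config.keys())

missing = validate(adaptation_config)  # [] when all required fields are present
```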

Interactive Inference in the Mosaic AI Playground

To quickly experiment with pre-trained and fine-tuned Mistral models, customers can access the Mosaic AI Playground available in the Databricks console. The AI Playground enables interactive multi-turn conversations, experimentation with model inference sampling parameters such as temperature and max_tokens, and side-by-side inference of different models to observe model response quality and performance characteristics.
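The same kind of side-by-side comparison can be scripted outside the Playground. The sketch below builds identical requests for two models, holding the prompt fixed and varying only the sampling parameters mentioned above; the model endpoint names are assumptions:

```python
# Sketch: preparing side-by-side inference requests for two served models.
# Endpoint names are illustrative assumptions, not guaranteed names.
prompt = "Explain Delta Sharing in two sentences."

def make_request(model: str, temperature: float, max_tokens: int) -> dict:
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "temperature": temperature,  # higher => more varied sampling
        "max_tokens": max_tokens,    # caps the length of the response
    }

requests = [
    make_request("databricks-mistral-7b-instruct", temperature=0.0, max_tokens=256),
    make_request("databricks-mixtral-8x7b-instruct", temperature=0.7, max_tokens=256),
]
# Sending each request to its serving endpoint lets you compare response
# quality and latency side by side, as the Playground does interactively.
```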



Databricks + Mistral AI

We are excited to welcome Mistral AI as a Databricks Ventures portfolio company and partner. Mistral AI models can now be consumed and customized in a variety of ways on Databricks, which offers the most comprehensive set of tools for building, testing, and deploying end-to-end generative AI applications. Whether starting with a side-by-side comparison of pretrained models or consuming models through pay-per-token endpoints, there are several options for getting started quickly. For users who require improved accuracy on specific use cases, customizing Mistral AI models on proprietary data through Mosaic AI Foundation Model Adaptation is cost-effective and easy to use. Finally, efficient and secure serverless inference is built on our unified approach to governance and security. Enterprises can feel confident in AI solutions built with Mistral AI models on Databricks, an approach that combines some of the world's top foundation models with Databricks’ uncompromising posture on data privacy, transparency, and control.

Explore more about building GenAI apps with Databricks by joining the upcoming webinar: The GenAI Payoff in 2024.