
Large Language Models

Accelerate innovation using LLMs with Databricks

What are large language models?

Large language models (LLMs) are machine learning models that are very effective at language-related tasks such as translation, question answering, chat, content summarization, and content and code generation. LLMs distill value from huge data sets and make that “learning” accessible out of the box. Databricks makes it simple to access these LLMs and integrate them into your workflows, and provides platform capabilities for fine-tuning LLMs with your own data for better domain performance.


Natural language processing with LLMs

S&P Global uses large language models on Databricks to better understand the key differences and similarities in companies’ filings, helping asset managers build a more diverse portfolio.


Use LLMs for a variety of use cases

LLMs can drive business impact across use cases and industries: translate text into other languages, improve customer experience with chatbots and AI assistants, route and classify customer feedback to the right departments, summarize large documents such as earnings calls and legal filings, create new marketing content, and generate software code from natural language. They can even feed into other models, such as those that generate art. Popular LLMs include the GPT family of models (e.g., ChatGPT), BERT, T5 and BLOOM.


Using pretrained LLMs in your apps

Integrate existing pretrained models — such as those from the Hugging Face transformers library or other open source libraries — into your workflow. Transformer pipelines make it easy to use GPUs and allow batching of items sent to the GPU for better throughput. 
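The batching idea can be sketched in plain Python. The helper and stub predictor below are illustrative stand-ins, not the real transformers API; an actual pipeline accepts a list of inputs together with a `batch_size` argument and runs each batch through the model in one GPU call.

```python
from typing import Callable, Iterable, List

def batched_predict(
    predict: Callable[[List[str]], List[str]],
    items: Iterable[str],
    batch_size: int = 8,
) -> List[str]:
    """Send items to `predict` in fixed-size batches.

    Batching amortizes per-call overhead (such as host-to-GPU transfer),
    which is the same idea transformers pipelines expose through their
    batch_size argument.
    """
    buffered = list(items)
    results: List[str] = []
    for start in range(0, len(buffered), batch_size):
        results.extend(predict(buffered[start:start + batch_size]))
    return results

# Stub predictor standing in for a real pipeline call; a real one would
# run model inference on the whole batch at once.
def shout(batch: List[str]) -> List[str]:
    return [text.upper() for text in batch]

print(batched_predict(shout, ["hello", "world", "databricks"], batch_size=2))
# ['HELLO', 'WORLD', 'DATABRICKS']
```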

With the MLflow flavor for Hugging Face Transformers, you get native integration of transformer pipelines, models and processing components to the MLflow tracking service. You can also integrate OpenAI models, or solutions from partners such as John Snow Labs, in your workflows on Databricks.

With AI Functions, SQL data analysts can easily access LLMs, including OpenAI models, directly within their data pipelines and workflows.


Fine-tuning LLMs using your data

Customize a model on your data for your specific task. With the support of open source tooling such as Hugging Face and DeepSpeed, you can quickly and efficiently take a foundation LLM and continue training it on your own data for greater accuracy on your domain and workload. This also gives you control over the data used for training, so you can make sure you’re using AI responsibly.
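As a loose illustration of the fine-tuning idea, the toy example below starts from a “pretrained” weight and continues gradient descent on domain data. Everything here is a deliberately tiny stand-in: real LLM fine-tuning would use tooling such as Hugging Face’s Trainer, typically with DeepSpeed for efficiency.

```python
def mse(w: float, data: list) -> float:
    """Mean squared error of the toy linear model y ≈ w * x."""
    return sum((w * x - y) ** 2 for x, y in data) / len(data)

def fine_tune(w_pretrained: float, domain_data: list,
              lr: float = 0.05, steps: int = 50) -> float:
    """Continue gradient descent from a pretrained weight on new data."""
    w = w_pretrained  # start from the "foundation" weight, not from scratch
    for _ in range(steps):
        grad = sum(2 * (w * x - y) * x for x, y in domain_data) / len(domain_data)
        w -= lr * grad
    return w

domain_data = [(1.0, 2.0), (2.0, 4.0)]  # toy domain where y = 2x
w_tuned = fine_tune(w_pretrained=0.5, domain_data=domain_data)
# The adapted weight fits the domain data better than the pretrained one.
assert mse(w_tuned, domain_data) < mse(0.5, domain_data)
```

The point of the sketch is only the shape of the workflow: initialize from existing weights, train further on your own examples, and verify the model improves on your domain.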

Dolly from Databricks

Dolly 2.0 is a large language model that was trained by Databricks to demonstrate how you can inexpensively and quickly train your own LLM. The high-quality human-generated data set (databricks-dolly-15k) used to train the model has also been open sourced. With Dolly 2.0, customers can now own, operate and customize their own LLM. Enterprises can build and train an LLM on their own data, without the need to send data to proprietary LLMs. To get the Dolly 2.0 code, model weights or the databricks-dolly-15k data set, visit Hugging Face.


Built-in LLMOps (MLOps for LLMs)

Use built-in and production-ready MLOps with Managed MLflow for model tracking, management and deployment. Once the model is deployed, you can monitor things like latency, data drift and more with the ability to trigger retraining pipelines — all on the same unified Databricks Lakehouse Platform for end-to-end LLMOps.
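To make the monitoring step concrete, here is a minimal, hypothetical data-drift check in plain Python. The statistic (a z-score on the input mean) and the threshold are illustrative assumptions; a real deployment would use the platform’s monitoring tooling and trigger a retraining pipeline when a check like this fires.

```python
from statistics import mean, stdev

def drifted(baseline: list, recent: list, z_threshold: float = 3.0) -> bool:
    """Flag drift when the mean of recent inputs moves far from the
    training baseline, measured in baseline standard deviations."""
    mu, sigma = mean(baseline), stdev(baseline)
    if sigma == 0:
        return mean(recent) != mu
    return abs(mean(recent) - mu) / sigma > z_threshold

# Feature values seen at training time vs. two batches seen in production.
baseline = [0.9, 1.0, 1.1, 1.0, 0.95, 1.05]
assert not drifted(baseline, [1.0, 0.98, 1.02])  # looks like training data
assert drifted(baseline, [5.0, 5.2, 4.9])        # shifted inputs: retrain
```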


Data and models on a unified platform

Most models will be trained more than once, so having the training data on the same ML platform will become crucial for both performance and cost. Training LLMs on the Lakehouse gives you access to first-rate tools and compute — within an extremely cost-effective data lakehouse — and lets you continue to retrain models as your data evolves over time.

Ready to get started?