Using pretrained LLMs in your apps
Integrate existing pretrained models — such as those from the Hugging Face `transformers` library or other open source libraries — into your workflow. Transformer pipelines make it easy to run inference on GPUs and to batch items sent to the GPU for better throughput.
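As a sketch of the pattern, a pipeline can be placed on a GPU and fed a list of inputs with a batch size; the model name, prompts, and batch size below are illustrative assumptions, not recommendations:

```python
from transformers import pipeline

# Illustrative checkpoint; substitute any text-generation model.
generator = pipeline(
    "text-generation",
    model="gpt2",   # assumption: replace with your own model
    device=0,       # first GPU; use device=-1 (or omit) to run on CPU
)

prompts = ["Summarize: ...", "Translate: ...", "Classify: ..."]

# batch_size groups the prompts into GPU batches for better throughput.
for result in generator(prompts, batch_size=8, max_new_tokens=50):
    print(result[0]["generated_text"])
```

Passing a list (or a dataset) to the pipeline, rather than calling it once per item, is what lets the pipeline form batches on the GPU.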
With the MLflow flavor for Hugging Face Transformers, you get native integration of transformer pipelines, models, and processing components with the MLflow tracking service. You can also integrate OpenAI models, or solutions from partners such as John Snow Labs, into your workflows on Databricks.
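A minimal sketch of logging a pipeline with the MLflow transformers flavor (the model choice and artifact path are assumptions for illustration):

```python
import mlflow
from transformers import pipeline

# Illustrative pipeline; any transformers pipeline can be logged.
summarizer = pipeline("summarization", model="sshleifer/distilbart-cnn-12-6")

with mlflow.start_run():
    model_info = mlflow.transformers.log_model(
        transformers_model=summarizer,
        artifact_path="summarizer",  # assumption: any artifact path works
    )

# The logged pipeline can later be reloaded from the tracking service.
loaded = mlflow.transformers.load_model(model_info.model_uri)
```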
With AI functions, SQL data analysts can easily access LLMs, including OpenAI models, directly within their data pipelines and workflows.
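For example, the `ai_query` AI function can be invoked from a SQL query; the endpoint and table names below are assumptions — substitute a model serving endpoint and table available in your workspace:

```sql
SELECT
  review,
  ai_query(
    'databricks-meta-llama-3-1-8b-instruct',  -- assumed endpoint name
    CONCAT('Classify the sentiment of this review: ', review)
  ) AS sentiment
FROM product_reviews;  -- hypothetical table
```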