Session

Streamlining Custom GPU Training and GenAI Finetuning with Serverless GPU Compute

Overview

Experience: In Person
Type: Breakout
Track: Artificial Intelligence
Industry: Enterprise Technology
Technologies: MLflow, Mosaic AI
Skill Level: Intermediate
Duration: 40 min

The last year has seen rapid progress in open source GenAI models and frameworks. This talk covers best practices for custom training and OSS GenAI finetuning on Databricks, powered by the newly announced Serverless GPU Compute.

We’ll cover how to use Serverless GPU Compute to power AI training and GenAI finetuning workloads, including framework support for libraries like LLM Foundry, Composer, Hugging Face, and more. Lastly, we’ll show how to leverage MLflow and the Databricks Lakehouse to streamline the end-to-end development of these models.
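
The abstract itself doesn’t include code, but as a rough illustration of the kind of workflow it describes, here is a minimal sketch of finetuning a small open source model with Hugging Face Transformers and tracking the run with MLflow. The model name, dataset, and hyperparameters below are placeholder choices for illustration, not examples from the session.

```python
# Illustrative sketch only: finetune a small OSS model with Hugging Face
# Transformers and log the run to MLflow. Model, dataset, and hyperparameters
# are placeholders, not the session's actual examples.
import mlflow
from datasets import load_dataset
from transformers import (
    AutoModelForSequenceClassification,
    AutoTokenizer,
    Trainer,
    TrainingArguments,
)

MODEL_NAME = "distilbert-base-uncased"  # placeholder open source model

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForSequenceClassification.from_pretrained(MODEL_NAME, num_labels=2)

# Small public dataset slice, for illustration only.
dataset = load_dataset("imdb", split="train[:1%]")

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, padding="max_length", max_length=128)

dataset = dataset.map(tokenize, batched=True)

args = TrainingArguments(
    output_dir="/tmp/finetune-demo",
    per_device_train_batch_size=8,
    num_train_epochs=1,
    logging_steps=10,
    report_to="none",  # log to MLflow explicitly below
)

trainer = Trainer(model=model, args=args, train_dataset=dataset)

# Track parameters and metrics for the run with MLflow.
with mlflow.start_run():
    mlflow.log_params({"model_name": MODEL_NAME, "epochs": args.num_train_epochs})
    result = trainer.train()
    mlflow.log_metric("train_loss", result.training_loss)
```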

Key takeaways include:

  • How Serverless GPU Compute saves customers valuable developer time and reduces the overhead of managing GPU infrastructure
  • Best practices for training custom deep learning models (forecasting, recommendation, personalization) and finetuning OSS GenAI models on GPUs across the Databricks stack
  • Leveraging distributed GPU training frameworks (e.g., PyTorch, Hugging Face) on Databricks (see the sketch after this list)
  • Streamlining the path to production for these models 
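
For the distributed-training takeaway above, here is a hedged sketch of one way to launch a PyTorch DDP training function across GPUs on Databricks using PySpark’s TorchDistributor. The Serverless GPU Compute APIs discussed in the session may differ; the model, data, and process count here are toy placeholders.

```python
# Hedged sketch (not the session's code): run a distributed PyTorch training
# function on Databricks GPUs with TorchDistributor (PySpark 3.4+ / Databricks
# ML runtimes). Model, data, and num_processes are placeholders.
import os

import torch
import torch.distributed as dist
from pyspark.ml.torch.distributor import TorchDistributor

def train_fn(num_epochs: int = 1):
    # TorchDistributor sets the process-group environment variables
    # (RANK, WORLD_SIZE, LOCAL_RANK) before calling this function.
    local_rank = int(os.environ.get("LOCAL_RANK", 0))
    use_cuda = torch.cuda.is_available()
    if use_cuda:
        torch.cuda.set_device(local_rank)
    dist.init_process_group(backend="nccl" if use_cuda else "gloo")
    device = torch.device(f"cuda:{local_rank}" if use_cuda else "cpu")

    # Toy model and synthetic batch, just to show the DDP wiring.
    model = torch.nn.Linear(16, 1).to(device)
    ddp_model = torch.nn.parallel.DistributedDataParallel(
        model, device_ids=[local_rank] if use_cuda else None
    )
    optimizer = torch.optim.SGD(ddp_model.parameters(), lr=0.01)
    loss_fn = torch.nn.MSELoss()

    for _ in range(num_epochs):
        x = torch.randn(32, 16, device=device)  # placeholder batch
        y = torch.randn(32, 1, device=device)
        optimizer.zero_grad()
        loss = loss_fn(ddp_model(x), y)
        loss.backward()
        optimizer.step()

    dist.destroy_process_group()
    return float(loss.item())

# Launch across 2 GPU processes; adjust to match the available GPUs.
distributor = TorchDistributor(num_processes=2, local_mode=False, use_gpu=True)
final_loss = distributor.run(train_fn, 1)
```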

Join us to learn about the newly announced Serverless GPU Compute and the latest updates to GPU training and finetuning on Databricks!

Session Speakers

Tejas Sundaresan

Sr. Product Manager
Databricks