Train and Fine-Tune AI Models Without Managing GPUs
Overview
| Experience | In Person |
|---|---|
| Track | Artificial Intelligence & Agents |
| Industry | Enterprise Technology |
| Technologies | Lakeflow, Agent Bricks |
| Skill Level | Intermediate |
Training and fine-tuning AI models shouldn't require becoming a GPU infrastructure expert. As enterprises scale AI beyond prototypes, managing clusters and distributed compute has become a major bottleneck.

In this session, we'll introduce AI Runtime, a new serverless GPU training experience on Databricks that provides on-demand NVIDIA A10 and H100 GPUs directly on Lakehouse data. You'll see how teams can train and fine-tune LLMs, recommendation systems, forecasting models, and computer vision workloads without managing infrastructure.

We'll cover how AI Runtime combines optimized GPU compute and distributed training with native support for frameworks like PyTorch and Hugging Face, alongside MLflow, Unity Catalog, and Lakeflow to unify the full AI lifecycle.

You'll learn how to reduce infrastructure overhead, accelerate iteration cycles, and standardize end-to-end AI workflows from experimentation to production.
Session Speakers
Brian Law
Sr. Specialist Solutions Architect
Databricks
Tejas Sundaresan
Sr. Product Manager
Databricks