From Generic to Genius: Fine-Tuning LLMs with Precision and Speed
OVERVIEW
| EXPERIENCE | In Person |
| --- | --- |
| TYPE | Breakout |
| TRACK | Generative AI |
| INDUSTRY | Financial Services |
| TECHNOLOGIES | AI/Machine Learning, GenAI/LLMs, Governance |
| SKILL LEVEL | Intermediate |
| DURATION | 40 min |
Imagine having an LLM that is powerful and hyper-personalized to your business, internal jargon, and unique style. Prompt engineering and RAG provide a great quickstart to your GenAI journey, but they often fail to adapt to your specific needs given their technical and cost limitations. Fine-tuning a state-of-the-art (SOTA) LLM with your proprietary data is a great way to solve this problem while respecting your ownership, latency, and cost requirements. Databricks, with its Mosaic AI capabilities, is the only platform positioned to meet all your data and AI needs, resulting in an accelerated go-to-market. In this session, we will demystify fine-tuning principles and share our learnings to show how what traditionally took months can now be done in weeks while adhering to your business and data privacy requirements.

Objectives:
- Introduction to fine-tuning
- Scenarios for fine-tuning
- Architecture patterns
- Notebooks (code examples; a minimal sketch follows below)
- Governance with Unity Catalog
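For illustration only, here is a minimal sketch of what launching such a fine-tuning run can look like with the Databricks Foundation Model Fine-tuning API (part of Mosaic AI) and registering the result to Unity Catalog. This is not taken from the session notebooks: the package import, the `foundation_model.create` call shape, and every model, table, and catalog name below are assumptions chosen for the example.

```python
# Minimal sketch (assumed API surface and hypothetical names): kick off a
# fine-tuning run on Databricks and register the tuned model to Unity Catalog.
# Assumes the databricks_genai package is installed in a workspace with
# Mosaic AI Model Training enabled.
from databricks.model_training import foundation_model as fm

run = fm.create(
    model="meta-llama/Meta-Llama-3.1-8B-Instruct",       # example base SOTA model
    train_data_path="main.finserv.support_chat_train",   # hypothetical UC table (or .jsonl in a UC Volume)
    task_type="CHAT_COMPLETION",                          # or "INSTRUCTION_FINETUNE" / "CONTINUED_PRETRAIN"
    register_to="main.finserv",                           # Unity Catalog schema to register the tuned model to
    training_duration="3ep",                              # e.g., three epochs
)

print(run.name)  # track progress in the associated MLflow experiment in the workspace UI
```

Because the tuned model lands in Unity Catalog, it inherits the same governance you already use for data: access grants, lineage, and auditing apply to the model like any other governed asset before it is served.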
SESSION SPEAKERS
Manasa Parvathipuram
Solution Architect
Deloitte

Sonali Guleria
Solutions Architect
Databricks