SESSION
Fine-Tuning Large Language Models (repeat)
OVERVIEW
| EXPERIENCE | In Person |
| --- | --- |
| TYPE | Paid Training |
| TRACK | Paid Training |
| DURATION | 240 min |
This session is repeated.
- Audience: Machine learning practitioners
- Hands-on labs: Yes
- Learning path: Advanced Generative AI Engineering with Databricks
- Description: In this cutting-edge course, you’ll develop the art of fine-tuning large language models (LLMs) to unlock their full potential for your specific use cases using Databricks Mosaic AI. We introduce the fundamentals of processing large-scale data and data parallelism. You will learn how to prepare and ingest data in a format suitable for supervised fine-tuning (a minimal sketch of this step appears below), and gain an in-depth understanding of how to fine-tune downstream LLMs embedded within a larger enterprise AI application. We will explore how to integrate your fine-tuning pipeline with MLflow and Unity Catalog.
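To give a flavor of the data-preparation step, here is a minimal sketch in Python that writes question-answer pairs to a JSONL file in a prompt/response schema. The field names, example data, and file path are illustrative assumptions rather than the course's exact format; check your fine-tuning API's documentation for the schema it expects.

```python
import json

# Illustrative in-memory examples; a real pipeline would read from a
# large-scale source such as a Delta table.
qa_pairs = [
    {"question": "What is data parallelism?",
     "answer": "Splitting each batch across devices that hold full model copies."},
    {"question": "What does supervised fine-tuning optimize?",
     "answer": "The likelihood of target responses given their prompts."},
]

# Many supervised fine-tuning APIs accept JSONL records with "prompt"
# and "response" fields (an assumption here; schemas vary by platform).
with open("train.jsonl", "w", encoding="utf-8") as f:
    for pair in qa_pairs:
        record = {"prompt": pair["question"], "response": pair["answer"]}
        f.write(json.dumps(record) + "\n")
```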
We will also dive into parameter-efficient fine-tuning (PEFT) methods that make efficient use of compute and memory while maintaining high model quality (one such method, LoRA, is sketched below). The course will navigate you through best practices and potential pitfalls in fine-tuning.
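To make PEFT concrete, below is a minimal LoRA sketch using the open-source Hugging Face peft and transformers libraries; the course's own tooling may differ, and the GPT-2 checkpoint and hyperparameters here are small illustrative choices, not course recommendations.

```python
from peft import LoraConfig, get_peft_model
from transformers import AutoModelForCausalLM

# Load a small base model; any causal LM checkpoint works the same way.
base = AutoModelForCausalLM.from_pretrained("gpt2")

# LoRA trains small low-rank adapter matrices injected into selected
# layers instead of updating all base weights, which is what makes it
# parameter-efficient.
config = LoraConfig(
    r=8,                        # rank of the adapter matrices
    lora_alpha=16,              # scaling applied to the adapter output
    target_modules=["c_attn"],  # GPT-2's fused attention projection
    lora_dropout=0.05,
    task_type="CAUSAL_LM",
)

model = get_peft_model(base, config)
model.print_trainable_parameters()  # only a small fraction of weights train
```

Because only the adapter weights receive gradients, the same frozen base model can serve many fine-tuned variants by swapping adapters.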
This is the second course in the GenAI Engineer Professional pathway.
Prerequisites:
- Completed GenAI Engineer Associate pathway, or equivalent practical knowledge of:
  - Deep learning fundamentals, including how neural networks work, what loss functions are, etc.
  - Building an LLM application that involves prompt engineering, retrieval-augmented generation (RAG), embeddings, and foundation models