SESSION
Large Language Model (LLM) Evaluation and Governance
OVERVIEW
| EXPERIENCE | In Person |
|---|---|
| TYPE | Paid Training |
| TRACK | Paid Training |
| DURATION | 240 min |
- Audience: Machine learning practitioners
- Hands-on labs: Yes
- Certification path: Databricks Certified Generative AI Engineer Associate
- Description: This course introduces learners to evaluating and governing generative artificial intelligence (AI) systems. First, learners will explore the meaning of and motivation for building evaluation and governance/security systems. Next, the course connects evaluation and governance to the Databricks Data Intelligence Platform. Then, learners will be introduced to a variety of evaluation techniques for specific application components and application types. Finally, the course concludes with an analysis of evaluating entire AI systems with respect to performance and cost (a minimal evaluation sketch follows below).
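
To give a flavor of the kind of evaluation workflow the description refers to, the sketch below scores a small set of pre-computed LLM answers with MLflow's evaluation API, which is available on Databricks ML runtimes. The dataset, column names, and the MLflow version assumption (>= 2.8) are illustrative only and are not taken from the course materials.

```python
# A minimal sketch of evaluating pre-computed LLM outputs with MLflow,
# assuming an environment with mlflow >= 2.8 and pandas installed
# (e.g., a Databricks ML runtime). Dataset contents are hypothetical.
import mlflow
import pandas as pd

eval_df = pd.DataFrame(
    {
        "inputs": [
            "What is Unity Catalog?",
            "What does MLflow track?",
        ],
        "predictions": [
            "Unity Catalog is a unified governance layer for data and AI assets.",
            "MLflow tracks experiments, parameters, metrics, and model artifacts.",
        ],
        "ground_truth": [
            "Unity Catalog provides centralized governance for data and AI on Databricks.",
            "MLflow records experiment runs, including parameters, metrics, and artifacts.",
        ],
    }
)

with mlflow.start_run():
    # Evaluate a static table of predictions (no model object needed).
    # Some default metrics for this model type require optional packages
    # (e.g., `evaluate`, `textstat`); MLflow skips metrics it cannot compute.
    results = mlflow.evaluate(
        data=eval_df,
        targets="ground_truth",
        predictions="predictions",
        model_type="question-answering",
    )
    print(results.metrics)
```

In practice, the evaluation dataset, metrics, and logged results would live alongside the application's other assets on the Databricks Data Intelligence Platform, which is how the course ties evaluation back to governance.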