
Agent Evaluation on Databricks

This course teaches students how to systematically evaluate AI agents using MLflow's evaluation framework, addressing the unique challenges of non-deterministic AI systems that traditional software testing cannot handle. Students learn to implement various evaluation approaches including built-in judges for common criteria like correctness and safety, guideline judges for business-specific requirements, and custom judges for specialized needs. The course covers both offline evaluation using curated datasets and online production monitoring, with hands-on experience using MLflow's tracing capabilities to understand agent execution patterns and collect human feedback from different stakeholder types. Through practical demonstrations and labs, students develop skills in creating evaluation workflows that drive continuous quality improvements throughout the AI agent development lifecycle.
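At its core, a custom judge of the kind the course covers is just a function that takes an agent's response and returns a score with a rationale, run over a curated dataset. The sketch below is a deliberately simplified, hypothetical illustration of that pattern — the names `Assessment` and `guideline_judge` are invented for this example and are not MLflow APIs; a real guideline judge would typically call an LLM with the guideline text rather than match keywords.

```python
from dataclasses import dataclass

@dataclass
class Assessment:
    """Result of one judge call: a pass/fail score plus a rationale."""
    score: bool
    rationale: str

def guideline_judge(response: str, forbidden_terms: list[str]) -> Assessment:
    """Toy guideline judge: fail any response that mentions a forbidden term.
    (Hypothetical stand-in for an LLM-based guideline judge.)"""
    hits = [t for t in forbidden_terms if t.lower() in response.lower()]
    if hits:
        return Assessment(False, f"Response mentions forbidden terms: {hits}")
    return Assessment(True, "No forbidden terms found")

# Offline evaluation loop over a small curated dataset
eval_set = [
    {"request": "What plans do you offer?", "response": "We offer Basic and Pro plans."},
    {"request": "How much is Pro?", "response": "Pro costs $99 per month."},
]
# Example business rule: the agent must not quote prices or promise refunds
forbidden = ["$", "refund"]
results = [guideline_judge(row["response"], forbidden) for row in eval_set]
pass_rate = sum(r.score for r in results) / len(results)
print(f"pass rate: {pass_rate:.0%}")  # -> pass rate: 50%
```

The same judge function can then be reused unchanged for online monitoring, scoring sampled production traces instead of a curated dataset.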


Note: Databricks Academy is transitioning to a notebook-based format for classroom sessions within the Databricks environment, discontinuing the use of slide decks for lectures. You can access the lecture notebooks in the Vocareum lab environment.


Languages Available: English | 日本語 | Português BR | 한국어

Skill Level: Associate
Duration: 4h
Prerequisites
This course is intended for participants with the following skills and knowledge:

• Intermediate Python programming experience

• Basic SQL knowledge for querying and creating functions

• Familiarity with Databricks Data Intelligence Platform

• Understanding of Unity Catalog concepts including catalogs and schemas

• Basic understanding of large language models (LLMs) and prompt engineering

• Basic knowledge of MLflow



Outline

AI Agent Evaluation Fundamentals

• The Challenge of Evaluating AI Agents
• Agent Setup
• MLflow's Evaluation Framework

Built-In and Guideline Judges

• Types of Evaluation Judges
• Using MLflow Built-In Judges
• Guideline Judges with MLflow
• Applying Agent Evaluation
• Custom Judges with MLflow


Custom Judges and Human Feedback

• Offline vs. Online Evaluation Strategies
• Best Practices and Practical Application
• Developer and SME Feedback with MLflow


Upcoming Public Classes

Date | Time | Language | Price
May 08 | 11 AM - 03 PM (Asia/Singapore) | English | $750.00
Jun 03 | 01 PM - 05 PM (Europe/London) | English | $750.00
Jun 05 | 08 AM - 12 PM (Asia/Kolkata) | English | $750.00
Jul 08 | 09 AM - 01 PM (Australia/Sydney) | English | $750.00
Jul 08 | 09 AM - 01 PM (America/New_York) | English | $750.00

Public Class Registration

If your company has purchased success credits or has a learning subscription, please fill out the Training Request form. Otherwise, you can register below.

Private Class Request

If your company is interested in private training, please submit a request.

See all our registration options

Registration options

Databricks has a delivery method for wherever you are on your learning journey.


Self-Paced

Custom-fit learning paths for data, analytics, and AI roles, delivered through on-demand videos

Register now


Instructor-Led

Public and private half-day to two-day courses taught by expert instructors

Register now


Blended Learning

Self-paced content plus weekly instructor-led sessions for every style of learner, designed to optimize course completion and knowledge retention. Go to the Subscriptions Catalog tab to purchase

Purchase now


Skills@Scale

Comprehensive training offering for large-scale customers that includes learning elements for every learning style. Inquire with your account executive for details


Building Reliable Conversational Agents with Genie

This course teaches you how to design, build, and maintain a Databricks Genie Space, a natural language interface that enables business users to ask questions about governed data and receive SQL-backed answers without writing code.

You will learn how Genie fits into the Databricks AI/BI product family and how it translates natural language into reliable SQL queries. The course focuses on what it takes to create a Genie Space that delivers accurate, consistent, and trustworthy results.

You will follow a complete end-to-end workflow, from understanding source data and defining benchmarks to configuring and refining a Genie Space using the full set of Knowledge Store curation tools. These include metadata, synonyms, prompt matching, SQL logic, example queries, and text instructions.

You will also learn how to share Genie Spaces with business users through Databricks One, understand how Unity Catalog governance is automatically enforced, and use monitoring and user feedback to continuously improve quality over time.

By the end of the course, you will be able to create and manage a production-ready Genie Space that delivers governed, self-service conversational analytics at scale.
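Conceptually, the curation artifacts listed above (metadata, synonyms, example queries, text instructions) can be thought of as structured knowledge attached to a Genie Space. The sketch below is purely illustrative and uses an invented Python structure and helper — it is not a Databricks or Genie API — to show the kinds of knowledge a curator supplies and how something like synonym resolution might work:

```python
# Hypothetical representation of Genie Space curation knowledge (not a real API).
genie_space_knowledge = {
    "metadata": {"sales.orders.amount": "Order total in USD, including tax"},
    "synonyms": {"revenue": "amount", "client": "customer"},
    "example_queries": [
        {
            "question": "What was total revenue last month?",
            "sql": "SELECT SUM(amount) FROM sales.orders "
                   "WHERE order_date >= date_trunc('month', current_date)",
        }
    ],
    "instructions": "Always report revenue in USD; exclude cancelled orders.",
}

def resolve_synonyms(question: str, synonyms: dict[str, str]) -> str:
    """Toy illustration of synonym curation: map business terms to column names."""
    for term, column in synonyms.items():
        question = question.replace(term, column)
    return question

print(resolve_synonyms("total revenue per client", genie_space_knowledge["synonyms"]))
# -> total amount per customer
```

In the real product this curation happens through the Genie Space UI and Knowledge Store rather than in code; the point of the sketch is only that each artifact type narrows how natural language gets translated into SQL.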


Paid | 4h | Lab | Instructor-led | Associate

Questions?

If you have any questions, please refer to our Frequently Asked Questions page.