Session

Deploying and Monitoring Agents on Databricks

Overview

Experience: In Person

This hands-on course guides you through deploying and monitoring agentic AI systems on the Databricks platform. You'll start by deploying agents in batch mode using AI Functions such as ai_query and integrating them into data pipelines. Next, you'll explore real-time deployment with Mosaic AI Model Serving, including exposing agents as REST endpoints and managing autoscaling, versioning, and governance. You'll then turn to observability and monitoring, using MLflow tracing and scoring functions to capture metrics, analyze requests and responses, and detect anomalies in production. Finally, you'll learn best practices for deploying agents with built-in trace collection, inference logging, and feedback loops that support ongoing evaluation and robustness.
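As a taste of the batch-mode pattern covered in the course, the sketch below builds the kind of SQL statement you would run on Databricks to score a table through a Model Serving endpoint with the ai_query AI Function. The table, endpoint, and column names are illustrative placeholders, not part of the course materials.

```python
def batch_inference_sql(table: str, endpoint: str, text_col: str) -> str:
    """Build a SQL statement that scores every row of `table` by sending
    `text_col` to a Model Serving endpoint via the ai_query AI Function."""
    return (
        f"SELECT {text_col}, "
        f"ai_query('{endpoint}', {text_col}) AS agent_response "
        f"FROM {table}"
    )

# Hypothetical names for illustration only.
sql = batch_inference_sql("main.demo.support_tickets", "my-agent-endpoint", "ticket_text")
# In a Databricks notebook or pipeline this would run as: spark.sql(sql)
print(sql)
```

In a data pipeline, the resulting DataFrame of agent responses can be written back to a Delta table, which is how batch agent inference is typically integrated with downstream jobs.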

Note: Hands-on training courses will be updated to reflect the newest product and feature announcements from Data + AI Summit in June 2026. 

Prerequisites

  • Familiarity with the Databricks Data Intelligence Platform, including Unity Catalog and Delta Lake
  • Intermediate Python programming experience
  • Familiarity with Generative AI fundamentals, including LLMs, RAG architectures, and prompt engineering
  • Familiarity with Databricks Asset Bundles (DABs) 
  • Basic SQL proficiency, including use of functions like ai_query
  • Basic understanding of MLOps/LLMOps principles and software deployment concepts, including CI/CD pipelines, REST APIs, and YAML configuration