Generative AI Engineering with Databricks

Languages Available: English | Japanese | Brazilian Portuguese


This course is aimed at data scientists, machine learning engineers, and other data practitioners looking to build LLM-centric applications with the latest and most popular frameworks. In this course, you will build common LLM applications using Hugging Face, develop retrieval-augmented generation (RAG) applications, create multi-stage reasoning pipelines using LangChain, fine-tune LLMs for specific tasks, assess and address societal considerations of using LLMs, and learn how to deploy your models at scale leveraging LLMOps best practices.
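
For orientation, the Hugging Face portion of that workflow can start from just a few lines. A minimal sketch, assuming the transformers library is installed; the model name is an illustrative placeholder, not necessarily what the course uses:

```python
# Minimal Hugging Face text-generation pipeline.
# "gpt2" is an illustrative placeholder model, not the course's choice.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")
result = generator("Large language models are", max_new_tokens=30)
print(result[0]["generated_text"])
```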


By the end of this course, you will have built an end-to-end LLM workflow that is ready for production.
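
As a sketch of what the retrieval step of such a RAG workflow involves (an illustration under assumed tooling, not the course's actual code), the application embeds a corpus and a query, ranks documents by similarity, and augments the prompt with the best match:

```python
# Toy retrieval step of a RAG pipeline: embed documents and a query,
# rank by cosine similarity, and build an augmented prompt.
# The model name and corpus are illustrative placeholders.
import numpy as np
from sentence_transformers import SentenceTransformer

docs = [
    "Delta Lake provides ACID transactions on data lakes.",
    "LangChain composes LLM calls into multi-stage chains.",
    "Databricks Jobs schedules production workloads.",
]

model = SentenceTransformer("all-MiniLM-L6-v2")
doc_emb = model.encode(docs, normalize_embeddings=True)

query = "How do I schedule a pipeline?"
q_emb = model.encode([query], normalize_embeddings=True)[0]

scores = doc_emb @ q_emb  # normalized vectors: dot product = cosine similarity
best = docs[int(np.argmax(scores))]

prompt = f"Answer using this context:\n{best}\n\nQuestion: {query}"
print(prompt)
```

In a full pipeline the prompt would then be passed to a generation model, and the in-memory corpus would be replaced by a vector store.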


Note: This course currently uses open-source technologies. Over time, it will incorporate additional Databricks capabilities.

Skill Level: Associate
Duration: 18h
Prerequisites
  • Intermediate-level experience with Python
  • Working knowledge of machine learning and deep learning is helpful


Outline

Day 1

  • Generative AI and LLMs
  • Primer on natural language processing
  • Databricks and LLMs
  • LLM applications
  • Retrieval augmented generation
  • Multistage reasoning

Day 2

  • Fine-tuning LLMs
  • Evaluating LLMs
  • Society and LLMs
  • LLMOps

Upcoming Public Classes

Date | Time | Language | Price
May 13 | 09 AM - 01 PM (America/Los_Angeles) | English | $1500.00
May 20 | 01 PM - 05 PM (Australia/Sydney) | English | $1500.00
May 27 | 09 AM - 05 PM (Asia/Tokyo) | Japanese | $1500.00
Jun 10 | 09 AM - 05 PM (Europe/Paris) | English | $1500.00
Jun 19 | 09 AM - 05 PM (Europe/London) | English | $1500.00
Jun 24 | 08 AM - 04 PM (America/Los_Angeles) | English | $1500.00
Jul 08 | 09 AM - 05 PM (Europe/Paris) | English | $1500.00
Jul 15 | 09 AM - 05 PM (Europe/London) | English | $1500.00
Jul 15 | 09 AM - 05 PM (America/Los_Angeles) | English | $1500.00
Jul 29 | 09 AM - 05 PM (America/Chicago) | English | $1500.00
Aug 19 | 09 AM - 05 PM (Australia/Sydney) | English | $1500.00

Public Class Registration

If your company has purchased success credits or has a learning subscription, please fill out the Training Request form. Otherwise, you can register below.

Private Class Request

If your company is interested in private training, please submit a request.


Registration options

Databricks has a delivery method for wherever you are on your learning journey


Self-Paced

Custom-fit learning paths for data, analytics, and AI roles, delivered through on-demand videos



Instructor-Led

Public and private half-day to two-day courses taught by expert instructors



Blended Learning

Self-paced courses combined with weekly instructor-led sessions, for every style of learner, to optimize course completion and knowledge retention. Go to the Subscriptions Catalog tab to purchase.



Skills@Scale

A comprehensive training offering for large-scale customers that includes learning elements for every style of learner. Inquire with your account executive for details.

Upcoming Public Classes

Data Engineer

Data Workloads with Repos and Workflows

Moving a data pipeline to production means more than just confirming that code and data are working as expected. By scheduling tasks with Databricks Jobs, applications can be run automatically to keep tables in the Lakehouse fresh. Using Databricks SQL to schedule updates to queries and dashboards allows quick insights using the newest data. In this course, students will be introduced to task orchestration using the Databricks Workflow Jobs UI. Optionally, they will configure and schedule dashboards and alerts to reflect updates to production data pipelines.

Learning objectives
  • Version code with Databricks Repos.
  • Orchestrate tasks with Databricks Workflow Jobs.
  • Use Databricks SQL for on-demand queries.
  • Configure and schedule dashboards and alerts to reflect updates to production data pipelines.

Prerequisites
  • Ability to perform basic code development tasks using the Databricks Data Engineering & Data Science workspace (create clusters, run code in notebooks, use basic notebook operations, import repos from git, etc.)
  • Ability to configure and run data pipelines using the Delta Live Tables UI
  • Beginner experience defining Delta Live Tables (DLT) pipelines using PySpark:
    • Ingest and process data using Auto Loader and PySpark syntax
    • Process Change Data Capture feeds with APPLY CHANGES INTO syntax
    • Review pipeline event logs and results to troubleshoot DLT syntax
  • Ability to reshape and manipulate complex data using advanced built-in functions
  • Production experience working with data warehouses and data lakes

Last course update: April 2023
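
The course works through the Jobs UI, but the same orchestration can also be sketched in code with the Databricks SDK for Python. This is a hedged illustration, not course material; every name, path, and ID below is a placeholder:

```python
# Sketch: a two-task job defined via the Databricks SDK for Python.
# All names, paths, and IDs are placeholders; the course uses the Jobs UI.
from databricks.sdk import WorkspaceClient
from databricks.sdk.service import jobs

w = WorkspaceClient()  # reads workspace credentials from the environment

job = w.jobs.create(
    name="nightly-lakehouse-refresh",
    tasks=[
        jobs.Task(
            task_key="ingest",
            existing_cluster_id="1234-567890-abcdefgh",  # placeholder cluster ID
            notebook_task=jobs.NotebookTask(notebook_path="/Repos/team/pipeline/ingest"),
        ),
        jobs.Task(
            task_key="report",
            depends_on=[jobs.TaskDependency(task_key="ingest")],
            existing_cluster_id="1234-567890-abcdefgh",  # placeholder cluster ID
            notebook_task=jobs.NotebookTask(notebook_path="/Repos/team/pipeline/report"),
        ),
    ],
)
print(f"Created job {job.job_id}")
```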
Paid | 4h | Lab | Instructor-led | Associate
Data Engineer

Data Pipelines with Delta Live Tables

In this course, you'll use Delta Live Tables with your choice of Spark SQL or Python to define and schedule pipelines that incrementally process new data from a variety of data sources into the Lakehouse.

Learning objectives
  • Describe how Delta Live Tables tracks data dependencies in data pipelines.
  • Configure and run data pipelines using the Delta Live Tables UI.
  • Use Python or Spark SQL to define data pipelines that ingest and process data through multiple tables in the lakehouse using Auto Loader and Delta Live Tables.
  • Use APPLY CHANGES INTO syntax to process Change Data Capture feeds.
  • Review event logs and data artifacts created by pipelines and troubleshoot DLT syntax.

Prerequisites
  • Beginner familiarity with cloud computing concepts (virtual machines, object storage, etc.)
  • Ability to perform basic code development tasks using the Databricks Data Engineering & Data Science workspace (create clusters, run code in notebooks, use basic notebook operations, import repos from git, etc.)
  • Beginner programming experience with Delta Lake:
    • Use Delta Lake DDL to create tables, compact files, restore previous table versions, and perform garbage collection of tables in the Lakehouse
    • Use CTAS to store data derived from a query in a Delta Lake table
    • Use SQL to perform complete and incremental updates to existing tables
  • Beginner programming experience with Python (syntax, conditions, loops, functions)
  • Beginner programming experience with Spark SQL or PySpark:
    • Extract data from a variety of file formats and data sources
    • Apply a number of common transformations to clean data
    • Reshape and manipulate complex data using advanced built-in functions
  • Production experience working with data warehouses and data lakes

Last course update: April 2023
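
For a sense of what such a pipeline looks like in DLT's Python API, here is a hedged sketch (paths, table names, and columns are placeholders; `spark` is provided implicitly inside a DLT notebook):

```python
# Sketch of a Delta Live Tables pipeline: Auto Loader ingestion, a cleaned
# table, and a CDC target via apply_changes. All names are placeholders.
import dlt
from pyspark.sql.functions import col

@dlt.table(comment="Raw orders ingested incrementally with Auto Loader")
def orders_raw():
    return (
        spark.readStream.format("cloudFiles")  # Auto Loader
        .option("cloudFiles.format", "json")
        .load("/mnt/landing/orders")  # placeholder source path
    )

@dlt.table(comment="Orders with invalid rows filtered out")
def orders_clean():
    return dlt.read_stream("orders_raw").where(col("order_id").isNotNull())

# Apply a Change Data Capture feed to a streaming target table
# (the Python counterpart of SQL's APPLY CHANGES INTO).
dlt.create_streaming_table("orders_silver")
dlt.apply_changes(
    target="orders_silver",
    source="orders_clean",
    keys=["order_id"],              # placeholder primary key
    sequence_by=col("updated_at"),  # placeholder ordering column
)
```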
Paid | 4h | Lab | Instructor-led | Associate
Career Workshop

Career Workshop

March 20

Careers at Databricks

We're on a mission to help data teams solve the world's toughest problems. Will you join us?

Questions?

If you have any questions, please refer to our Frequently Asked Questions page.