SQL Analytics on Databricks

In this course, you'll learn how to effectively use Databricks for data analytics, with a specific focus on Databricks SQL. As a Databricks Data Analyst, your responsibilities will include finding relevant data, analyzing it for potential applications, and transforming it into formats that provide valuable business insights. 


You will also understand your role in managing data objects and how to manipulate them within the Databricks Data Intelligence Platform, using tools such as Notebooks, the SQL Editor, and Databricks SQL. 


Additionally, you will learn about the importance of Unity Catalog in managing data assets and the overall platform. Finally, the course will provide an overview of how Databricks facilitates performance optimization and teach you how to access Query Insights to understand the processes occurring behind the scenes when executing SQL analytics on Databricks.


Languages Available: English | 日本語 | Português BR | 한국어

Skill Level
Associate
Duration
4h
Prerequisites

- A working knowledge of SQL for data analysis.

- Familiarity with how data is created, stored, and managed.

- A basic understanding of statistical analysis.

- An understanding of the structure and defining characteristics of common data formats such as CSV, JSON, TXT, and Parquet.

- Familiarity with the user interface of the Databricks Data Intelligence Platform.

Outline

Data Discovery

Using Unity Catalog as a Data Discovery Tool

Understanding Data Object Ownership

Lab: Use Unity Catalog to Locate and Inspect Datasets


Data Importing

Ingesting Data into Databricks

Demo: Uploading Data to Databricks Using the UI

Demo: Programmatic Exploration and Data Ingestion to Unity Catalog

Lab: Import Data into Databricks


SQL Execution

Databricks SQL and Databricks SQL Warehouses

Demo: The Unified SQL Editor

Demo: Manipulate and Transform Data with Databricks SQL

Demo: Creating Views with Databricks SQL

Lab: Manipulate and Analyze a Table


Query Analysis

Databricks Photon and Optimization in Databricks

Demo: Query Insights

Best Practices for SQL Analytics

Upcoming Public Classes

Date | Time | Language | Price
May 29 | 11 AM - 03 PM (Asia/Singapore) | English | $750.00
May 29 | 09 AM - 01 PM (America/New_York) | English | $750.00
Jun 24 | 08 AM - 12 PM (Asia/Kolkata) | English | $750.00
Jun 26 | 01 PM - 05 PM (Europe/London) | English | $750.00
Jul 24 | 01 PM - 05 PM (Australia/Sydney) | English | $750.00
Jul 24 | 09 AM - 01 PM (America/New_York) | English | $750.00

Public Class Registration

If your company has purchased success credits or has a learning subscription, please fill out the Training Request form. Otherwise, you can register below.

Private Class Request

If your company is interested in private training, please submit a request.

See all our registration options

Registration options

Databricks has a delivery method for wherever you are on your learning journey

Self-Paced

Custom-fit learning paths for data, analytics, and AI roles, delivered through on-demand videos

Register now

Instructor-Led

Public and private courses taught by expert instructors across half-day to two-day courses

Register now

Blended Learning

Self-paced and weekly instructor-led sessions for every style of learner, designed to optimize course completion and knowledge retention. Go to the Subscriptions Catalog tab to purchase.

Purchase now

Skills@Scale

Comprehensive training offering for large-scale customers that includes learning elements for every style of learning. Inquire with your account executive for details.

Upcoming Public Classes

Machine Learning Practitioner

Machine Learning with Databricks

Welcome to Machine Learning with Databricks!

This course is your gateway to mastering machine learning workflows on Databricks. Dive into data preparation, model development, deployment, and operations, guided by expert instructors. Learn essential skills for data exploration, model training, and deployment strategies tailored for Databricks. By course end, you'll have the knowledge and confidence to navigate the entire machine learning lifecycle on the Databricks platform, empowering you to build and deploy robust machine learning solutions efficiently.

Data Preparation for Machine Learning

This course focuses on the fundamentals of preparing data for machine learning using Databricks. Participants will learn essential skills for exploring, cleaning, and organizing data tailored for traditional machine learning applications. Key topics include data visualization, feature engineering, and optimal feature storage strategies. Through practical exercises, participants will gain hands-on experience in efficiently preparing datasets for machine learning within Databricks. This course is designed for associate-level data scientists, machine learning practitioners, and individuals seeking to enhance their proficiency in data preparation, ensuring a solid foundation for successful machine learning model deployment.

Machine Learning Model Development

This comprehensive course provides a practical guide to developing traditional machine learning models on Databricks, emphasizing hands-on demonstrations and workflows using popular ML libraries. Participants will explore key ML techniques, including regression and clustering, while leveraging Databricks’ powerful capabilities. The course covers MLflow integration for model tracking, Databricks Feature Store for feature management, and Optuna for hyperparameter tuning. Additionally, participants will learn how to accelerate model training with Databricks AutoML. By the end of the course, learners will have real-world, practical skills to develop, optimize, and deploy machine learning models efficiently in the Databricks environment.

Machine Learning Model Deployment

This course is designed to introduce three primary machine learning deployment strategies and illustrate the implementation of each strategy on Databricks. Following an exploration of the fundamentals of model deployment, the course delves into batch inference, offering hands-on demonstrations and labs for utilizing a model in batch inference scenarios, along with considerations for performance optimization. The second part of the course comprehensively covers pipeline deployment, while the final segment focuses on real-time deployment. Participants will engage in hands-on demonstrations and labs, deploying models with Model Serving and utilizing the serving endpoint for real-time inference.

Machine Learning Operations

This course will guide participants through a comprehensive exploration of machine learning model operations, focusing on MLOps and model lifecycle management. The initial segment covers essential MLOps components and best practices, providing participants with a strong foundation for effectively operationalizing machine learning models. In the latter part of the course, we will delve into the basics of the model lifecycle, demonstrating how to navigate it seamlessly using the Model Registry in conjunction with the Unity Catalog for efficient model management. By the course's conclusion, participants will have gained practical insights and a well-rounded understanding of MLOps principles, equipped with the skills needed to navigate the intricate landscape of machine learning model operations.

Languages Available: English | 日本語 | Português BR | 한국어

Paid
16h
Lab
instructor-led
Associate
Data Engineer

DevOps Essentials for Data Engineering

This course explores software engineering best practices and DevOps principles, specifically designed for data engineers working with Databricks. Participants will build a strong foundation in key topics such as code quality, version control, documentation, and testing. The course emphasizes DevOps, covering core components, benefits, and the role of continuous integration and delivery (CI/CD) in optimizing data engineering workflows.

You will learn how to apply modularity principles in PySpark to create reusable components and structure code efficiently. Hands-on experience includes designing and implementing unit tests for PySpark functions using the pytest framework, followed by integration testing for Databricks data pipelines with Spark Declarative Pipelines and Jobs to ensure reliability.
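To make the unit-testing workflow above concrete, here is a minimal sketch of the kind of pytest test the course describes. The `normalize_email` helper is hypothetical (not from the course materials); in a real pipeline such a pure function might be wrapped in a PySpark UDF or applied inside a DataFrame transformation, which is exactly what makes it easy to test in isolation.

```python
# A hypothetical, self-contained transformation helper. Keeping the logic in a
# plain Python function (rather than inline in a DataFrame expression) is the
# modularity principle the course emphasizes: the function can be unit-tested
# without a Spark cluster.

def normalize_email(raw: str) -> str:
    """Lowercase and strip whitespace so joins on email are reliable."""
    return raw.strip().lower()


# pytest discovers any function whose name starts with `test_` and runs its
# assertions; executing `pytest` in this file's directory collects both tests.

def test_normalize_email_strips_and_lowercases():
    assert normalize_email("  Alice@Example.COM ") == "alice@example.com"


def test_normalize_email_is_idempotent():
    once = normalize_email("Bob@Example.com")
    assert normalize_email(once) == once
```

The same pattern scales up: an integration test would apply `normalize_email` through a Spark DataFrame against a small fixture table, while these unit tests stay fast and cluster-free.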

The course also covers essential Git operations within Databricks, including using Databricks Git Folders to integrate continuous integration practices. Finally, you will take a high-level look at various deployment methods for Databricks assets, such as the REST API, CLI, SDK, and Databricks Asset Bundles (DABs), giving you the techniques needed to deploy and manage your pipelines.

By the end of the course, you will be proficient in software engineering and DevOps best practices, enabling you to build scalable, maintainable, and efficient data engineering solutions.

Languages Available: English | 日本語 | Português BR | 한국어 | Español | Français

Paid
4h
Lab
instructor-led
Associate

Questions?

If you have any questions, please refer to our Frequently Asked Questions page.