
Create Your First Workspace Using Databricks Express

In this course, you will explore the core features and functionalities of Databricks Express Setup, a streamlined way to get started with Databricks. The course is designed to help you quickly set up and navigate a serverless workspace while providing a comprehensive understanding of the credit-based trial system, including the $400 allowance. Divided into six modules, it starts with an introduction to Databricks Express Setup, followed by an exploration of its key features and benefits. You will learn to create and manage serverless workspaces, perform exploratory data analysis using Unity Catalog, and gain insights into collaboration through data sharing.


Additionally, you’ll be guided through trial management and upgrade options, ensuring you can effectively transition from trial to paid accounts. The course also includes an internal-only module on the evolution of trial credits and account activation methods. By the end of the course, you will have a solid foundation in using Databricks Express Setup for data and AI workloads, enabling you to confidently explore and analyze data, collaborate with peers, and manage your Databricks environment efficiently.

Skill Level
Introductory
Duration
1h 30m
Prerequisites

- Basic knowledge of cloud computing and SQL, including networking basics, SQL commands, aggregate functions, filtering and sorting, indexes, tables, and views.

- Basic knowledge of Python programming, Jupyter notebook interface, and PySpark fundamentals.

Outline

Module 1: Introduction and Onboarding to Databricks Express


- Getting Started with Databricks Express

- Setting Up Your Databricks Express Account


Module 2: Express Setup and Workspace Navigation


- Exploring Serverless Workspaces and Data Intelligence

- Managing Storage and Data Sharing in Databricks


Module 3: Trial Management and Upgrade Options


- Exploring Credit-Based Trials in Databricks

- Setting Up Your Databricks Express Account


Module 4: Navigating Databricks Workspace and Data Sharing


- Demo: Databricks Workspace Navigation

- Demo: Data Management in Databricks Workspace

- Demo: Exploratory Data Analysis

- Demo: Delta Sharing Between Databricks Express Accounts



Registration options

Databricks has a delivery method for wherever you are on your learning journey


Self-Paced

Custom-fit learning paths for data, analytics, and AI roles and career paths through on-demand videos


Instructor-Led

Public and private classes taught by expert instructors, ranging in length from half a day to two days


Blended Learning

Self-paced content combined with weekly instructor-led sessions for every style of learner, optimizing course completion and knowledge retention. Go to the Subscriptions Catalog tab to purchase


Skills@Scale

Comprehensive training offering for large-scale customers that includes learning elements for every learning style. Inquire with your account executive for details

Upcoming Public Classes

Data Engineer

Data Ingestion with Lakeflow Connect

This course provides a comprehensive introduction to Lakeflow Connect as a scalable and simplified solution for ingesting data into Databricks from a variety of data sources. You will begin by exploring the different types of connectors within Lakeflow Connect (Standard and Managed), learn about various ingestion techniques, including batch, incremental batch, and streaming, and then review the key benefits of Delta tables and the Medallion architecture.
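
To make those terms concrete, here is a minimal PySpark sketch contrasting a one-time batch read with an incremental/streaming read. The paths, schema, and table names are hypothetical placeholders, not course materials.

```python
# A minimal sketch contrasting batch and incremental/streaming ingestion.
# Paths, schema, and table names are hypothetical placeholders.
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

# Batch: read everything currently at the source location, once.
batch_df = (spark.read
    .schema("id INT, ts TIMESTAMP")
    .json("/Volumes/demo/raw/events/"))
batch_df.write.mode("append").saveAsTable("demo.bronze.events")

# Streaming / incremental batch: a checkpoint tracks progress, so each run
# picks up only files that arrived since the last one.
stream_df = (spark.readStream
    .schema("id INT, ts TIMESTAMP")
    .json("/Volumes/demo/raw/events/"))
(stream_df.writeStream
    .option("checkpointLocation", "/Volumes/demo/chk/events/")
    .trigger(availableNow=True)  # process all new data, then stop
    .toTable("demo.bronze.events"))
```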

From there, you will gain practical skills to efficiently ingest data from cloud object storage using Lakeflow Connect Standard Connectors with methods such as CREATE TABLE AS (CTAS), COPY INTO, and Auto Loader, along with the benefits and considerations of each approach. You will then learn how to append metadata columns to your bronze-level tables during ingestion into the Databricks Data Intelligence Platform. This is followed by working with the rescued data column, which captures records that don’t match the schema of your bronze table, including strategies for managing this rescued data.
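
As a rough illustration (not the course's own labs), the sketch below shows each of those three methods in PySpark on Databricks. The catalog, schema, paths, and table names are hypothetical, and the COPY INTO target table is assumed to already exist.

```python
# Hypothetical sketch of the three Standard Connector ingestion methods.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.getOrCreate()

# 1) CREATE TABLE AS (CTAS): one-time batch load from cloud object storage.
spark.sql("""
    CREATE TABLE IF NOT EXISTS demo.bronze.orders_ctas AS
    SELECT * FROM read_files('/Volumes/demo/raw/orders/', format => 'csv')
""")

# 2) COPY INTO: idempotent, incremental batch loads into an existing table
#    (demo.bronze.orders is assumed to exist already).
spark.sql("""
    COPY INTO demo.bronze.orders
    FROM '/Volumes/demo/raw/orders/'
    FILEFORMAT = CSV
    FORMAT_OPTIONS ('header' = 'true')
    COPY_OPTIONS ('mergeSchema' = 'true')
""")

# 3) Auto Loader (cloudFiles): scalable incremental ingestion with schema
#    inference; records that don't match the inferred schema land in the
#    rescued data column.
(spark.readStream
    .format("cloudFiles")
    .option("cloudFiles.format", "csv")
    .option("cloudFiles.schemaLocation", "/Volumes/demo/chk/orders_schema/")
    .option("rescuedDataColumn", "_rescued_data")
    .load("/Volumes/demo/raw/orders/")
    # Append ingestion metadata columns to the bronze table.
    .withColumn("ingest_time", F.current_timestamp())
    .withColumn("source_file", F.col("_metadata.file_path"))
    .writeStream
    .option("checkpointLocation", "/Volumes/demo/chk/orders/")
    .trigger(availableNow=True)
    .toTable("demo.bronze.orders"))
```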

The course also introduces techniques for ingesting and flattening semi-structured JSON data, as well as enterprise-grade data ingestion using Lakeflow Connect Managed Connectors.
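
For a sense of what flattening looks like in practice, here is a small, hypothetical PySpark example (the sample record and column names are invented): it parses a nested JSON string with an explicit schema, pulls struct fields out with dot paths, and explodes an array into one row per element.

```python
# Hypothetical sketch: parsing and flattening nested JSON with PySpark.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.getOrCreate()

# A raw JSON string column, as it might arrive in a bronze table.
raw = spark.createDataFrame(
    [('{"id": 1, "user": {"name": "ada", "tags": ["a", "b"]}}',)],
    ["value"],
)

# Parse with an explicit schema, then flatten: dot paths extract struct
# fields, and explode() turns each array element into its own row.
schema = "id INT, user STRUCT<name: STRING, tags: ARRAY<STRING>>"
flat = (raw
    .select(F.from_json("value", schema).alias("v"))
    .select("v.id", "v.user.name", F.explode("v.user.tags").alias("tag")))

flat.show()
# +---+----+---+
# | id|name|tag|
# +---+----+---+
# |  1| ada|  a|
# |  1| ada|  b|
# +---+----+---+
```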

Finally, you will explore alternative ingestion strategies, including MERGE INTO operations and leveraging the Databricks Marketplace, equipping you with foundational knowledge to support modern data engineering ingestion.
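
As a brief, hedged illustration of that alternative, the sketch below upserts staged changes into a target Delta table with MERGE INTO; the table and column names are hypothetical and both tables are assumed to exist. Unlike append-only loads, a merge updates rows already present in the target and inserts the rest, which suits change-data feeds.

```python
# Hypothetical sketch: upserting staged changes with MERGE INTO.
# Table and column names are placeholders; both tables must already exist.
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

spark.sql("""
    MERGE INTO demo.silver.customers AS t
    USING demo.bronze.customer_updates AS s
    ON t.customer_id = s.customer_id
    WHEN MATCHED THEN UPDATE SET *
    WHEN NOT MATCHED THEN INSERT *
""")
```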

Free
2h
Associate

Questions?

If you have any questions, please refer to our Frequently Asked Questions page.