
Introduction to Python for Data Science and Data Engineering

This course is intended for complete beginners to Python and covers the basics of programmatically interacting with data. It begins with an introduction to programming expressions, variables, and data types, then progresses to conditional logic and control flow, followed by an introduction to methods and functions. You will learn the basics of data structures and classes, along with common string and utility functions. Finally, you will gain experience using the pandas library for data analysis and visualization, as well as the fundamentals of cloud computing. Throughout the course, hands-on lab exercises reinforce each topic, and additional resources are provided to deepen your programming knowledge after class.
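As a quick taste of the Day 1 material, here is a minimal, illustrative Python sketch touching variables, data types, control flow, loops, functions, and classes. The weather-station example and every name in it are hypothetical and not course materials.

    # Variables and data types: a list of floats and a string label
    temperatures_c = [18.5, 21.0, 24.3, 19.8]
    station_name = "station_a"

    # A function: convert one Celsius reading to Fahrenheit
    def to_fahrenheit(celsius):
        return celsius * 9 / 5 + 32

    # A simple class bundling data with behavior
    class WeatherStation:
        def __init__(self, name, readings):
            self.name = name
            self.readings = readings

        def average(self):
            return sum(self.readings) / len(self.readings)

    station = WeatherStation(station_name, temperatures_c)

    # Control flow inside a loop
    for celsius in station.readings:
        label = "warm" if celsius > 20 else "mild"
        print(f"{to_fahrenheit(celsius):.1f} F ({label})")

    print(f"Average at {station.name}: {station.average():.1f} C")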

Skill Level
Associate
Duration
12h
Prerequisites

None

Outline

Day 1

  • Introduction to the Databricks environment

  • Python overview

  • Variables and data types

  • Complex data types

  • Control flow

  • Loops

  • Functions

  • Classes


Day 2

  • Using libraries

  • Data analysis with pandas

  • Advanced methods in pandas

  • Data visualization

  • Cloud computing 101

  • Capstone and next steps 
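To illustrate the Day 2 topics above, here is a small sketch of data analysis and visualization with pandas. The sales data and column names are invented for the example, and the chart assumes matplotlib is installed.

    import pandas as pd
    import matplotlib.pyplot as plt

    # A tiny, made-up dataset; in practice this might come from pd.read_csv(...)
    df = pd.DataFrame({
        "region": ["east", "west", "east", "west", "east"],
        "units":  [12, 7, 9, 14, 11],
        "price":  [3.5, 4.0, 3.5, 4.2, 3.8],
    })

    df["revenue"] = df["units"] * df["price"]        # derived column
    summary = df.groupby("region")["revenue"].sum()  # aggregate by region
    print(summary)

    summary.plot(kind="bar", title="Revenue by region")  # quick bar chart
    plt.show()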

Upcoming Public Classes

Date | Time | Language | Price
May 07 | 09 AM - 01 PM (America/New_York) | English | $1500.00
May 20 | 09 AM - 01 PM (Europe/London) | English | $1500.00
Jun 04 | 02 PM - 06 PM (America/New_York) | English | $1500.00
Jun 19 | 09 AM - 05 PM (Asia/Tokyo) | Japanese | $1500.00
Aug 21 | 09 AM - 05 PM (America/New_York) | English | $1500.00

Public Class Registration

If your company has purchased success credits or has a learning subscription, please fill out the Training Request form. Otherwise, you can register below.

Private Class Request

If your company is interested in private training, please submit a request.


Registration options

Databricks has a delivery method for wherever you are on your learning journey


Self-Paced

Custom-fit learning paths for data, analytics, and AI roles, delivered through on-demand videos.

Register now


Instructor-Led

Public and private classes taught by expert instructors, ranging in length from half a day to two days.

Register now


Blended Learning

Self-paced content plus weekly instructor-led sessions for every style of learner, designed to optimize course completion and knowledge retention. Go to the Subscriptions Catalog tab to purchase.

Purchase now


Skills@Scale

A comprehensive training offering for large-scale customers that includes learning elements for every style of learner. Inquire with your account executive for details.

Upcoming Public Classes

Data Engineer

Data Pipelines with Delta Live Tables

In this course, you'll use Delta Live Tables with your choice of Spark SQL or Python to define and schedule pipelines that incrementally process new data from a variety of data sources into the Lakehouse.

Learning objectives

  • Describe how Delta Live Tables tracks data dependencies in data pipelines.

  • Configure and run data pipelines using the Delta Live Tables UI.

  • Use Python or Spark SQL to define data pipelines that ingest and process data through multiple tables in the lakehouse using Auto Loader and Delta Live Tables.

  • Use APPLY CHANGES INTO syntax to process Change Data Capture feeds.

  • Review event logs and data artifacts created by pipelines and troubleshoot DLT syntax.

Prerequisites

  • Beginner familiarity with cloud computing concepts (virtual machines, object storage, etc.)

  • Ability to perform basic code development tasks using the Databricks Data Engineering & Data Science workspace (create clusters, run code in notebooks, use basic notebook operations, import repos from git, etc.)

  • Beginner programming experience with Delta Lake: use Delta Lake DDL to create tables, compact files, restore previous table versions, and perform garbage collection of tables in the Lakehouse; use CTAS to store data derived from a query in a Delta Lake table; use SQL to perform complete and incremental updates to existing tables.

  • Beginner programming experience with Python (syntax, conditions, loops, functions).

  • Beginner programming experience with Spark SQL or PySpark: extract data from a variety of file formats and data sources, apply common transformations to clean data, and reshape and manipulate complex data using advanced built-in functions.

  • Production experience working with data warehouses and data lakes.

Last course update: April 2023
Paid | 4h | Lab | Instructor-led | Associate
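For readers curious what a Delta Live Tables pipeline looks like, the following is a minimal Python sketch assuming a JSON landing path in cloud storage; the table names and path are hypothetical, and the spark session object is provided by the DLT pipeline runtime. CDC with APPLY CHANGES INTO, also covered in the course, is not shown here.

    import dlt
    from pyspark.sql.functions import col

    @dlt.table(comment="Raw orders ingested incrementally with Auto Loader")
    def orders_raw():
        # Auto Loader ("cloudFiles") picks up new files as they arrive
        return (
            spark.readStream.format("cloudFiles")
            .option("cloudFiles.format", "json")
            .load("/mnt/landing/orders")  # hypothetical landing path
        )

    @dlt.table(comment="Orders with a basic quality filter applied")
    def orders_clean():
        return dlt.read_stream("orders_raw").where(col("order_id").isNotNull())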
Career Workshop


March 20


Questions?

If you have any questions, please refer to our Frequently Asked Questions page.