Data Engineering

Tens of millions of production workloads run daily on Databricks

Easily ingest and transform batch and streaming data on the Databricks Data Intelligence Platform. Orchestrate reliable production workflows while Databricks automatically manages your infrastructure at scale. Increase the productivity of your teams with built-in data quality testing and support for software development best practices.

Unify batch and streaming

Eliminate data silos with one platform and a single, unified API to ingest, transform and incrementally process batch and streaming data at scale.

Focus on getting value from data

Databricks automatically manages your infrastructure and the operational components of your production workflows so you can focus on value, not on tooling.

Connect your tools of choice

The open Data Intelligence Platform connects with your preferred data engineering tools for data ingestion, ETL/ELT and orchestration.

Build on the Data Intelligence Platform

The Data Intelligence Platform provides the best foundation to build and share trusted data assets that are centrally governed, reliable and lightning-fast.

“To us, Databricks is becoming the one-stop shop for all our ETL work. The more we work with the Databricks Platform, the easier it is for both users and platform administrators.”

— Hillevi Crognale, Engineering Manager, YipitData

How does it work?

Simplified data ingestion

Automated ETL processing

Reliable workflow orchestration

End-to-end observability and monitoring

Next-generation data processing engine

Foundation of governance, reliability and performance

Simplified data ingestion

Ingest data into your Data Intelligence Platform and power your analytics, AI and streaming applications from one place. Auto Loader incrementally and automatically processes files landing in cloud storage — without the need to manage state information — in scheduled or continuous jobs. It efficiently tracks new files (scaling to billions) without having to list them in a directory, and can also automatically infer the schema from the source data and evolve it as it changes over time. The COPY INTO command makes it easy for analysts to perform batch file ingestion into Delta Lake via SQL.
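
As a minimal sketch of what this looks like in practice, here is a hedged Auto Loader example in PySpark; the paths, file format and table name are placeholder assumptions, not values from this page:

```python
# Auto Loader sketch (runs in a Databricks notebook, where `spark` is predefined).
# All paths and the table name below are hypothetical placeholders.
df = (
    spark.readStream.format("cloudFiles")                        # Auto Loader source
    .option("cloudFiles.format", "json")                         # format of the landing files
    .option("cloudFiles.schemaLocation", "/tmp/ingest/_schema")  # schema inference/evolution state
    .load("s3://example-bucket/landing/")                        # cloud storage path to watch
)

(
    df.writeStream
    .option("checkpointLocation", "/tmp/ingest/_checkpoint")  # file-tracking state, no manual bookkeeping
    .trigger(availableNow=True)                               # process all new files, then stop (scheduled mode)
    .toTable("raw_events")                                    # target Delta table
)
```

For SQL analysts, the batch equivalent is a single statement along the lines of COPY INTO raw_events FROM 's3://example-bucket/landing/' FILEFORMAT = JSON.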

“We’ve seen a 40% productivity uplift for data engineering — reducing the time it takes to develop new ideas from days to minutes and increasing the availability and accuracy of our data.”

— Shaun Pearce, Chief Technology Officer, Gousto

Automated ETL processing

Once ingested, raw data needs transforming so that it’s ready for analytics and AI. Databricks provides powerful ETL capabilities for data engineers, data scientists and analysts with Delta Live Tables (DLT). DLT is the first framework that uses a simple declarative approach to build ETL and ML pipelines on batch or streaming data, while automating operational complexities such as infrastructure management, task orchestration, error handling and recovery, and performance optimization. With DLT, engineers can also treat their data as code and apply software engineering best practices like testing, monitoring and documentation to deploy reliable pipelines at scale.
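
As an illustration of the declarative approach, a single DLT pipeline step might look like the following Python sketch; the table names and the quality expectation are hypothetical:

```python
import dlt
from pyspark.sql.functions import col

# Hypothetical pipeline step: declare the target table and its quality rule,
# and DLT handles orchestration, retries and infrastructure.
@dlt.table(comment="Click events, cleaned and validated")
@dlt.expect_or_drop("valid_user", "user_id IS NOT NULL")  # drop rows that fail the check
def clean_clicks():
    return (
        dlt.read_stream("raw_events")                     # incremental read of an assumed upstream table
        .where(col("event_type") == "click")
        .select("user_id", "event_time", "url")
    )
```

Because the table is declared rather than scripted, the same definition can run in triggered (batch) or continuous (streaming) mode without code changes.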

Reliable workflow orchestration

Databricks Workflows is a fully managed orchestration service for all your data, analytics and AI, native to the Data Intelligence Platform. Orchestrate diverse workloads across the full lifecycle, including Delta Live Tables and Jobs for SQL, Spark, notebooks, dbt, ML models and more. Deep integration with the underlying platform ensures reliable production workloads on any cloud, along with centralized monitoring that stays simple for end users.
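
For a sense of what a workflow definition contains, here is a hedged sketch that creates a two-task job through the Jobs REST API; the workspace URL, token, notebook paths and cluster ID are placeholders:

```python
import requests

# Hypothetical two-task job: "transform" runs only after "ingest" succeeds.
job_spec = {
    "name": "daily-etl",
    "tasks": [
        {
            "task_key": "ingest",
            "notebook_task": {"notebook_path": "/Workspace/etl/ingest"},
            "existing_cluster_id": "<cluster-id>",
        },
        {
            "task_key": "transform",
            "depends_on": [{"task_key": "ingest"}],  # dependency edge in the job graph
            "notebook_task": {"notebook_path": "/Workspace/etl/transform"},
            "existing_cluster_id": "<cluster-id>",
        },
    ],
}

resp = requests.post(
    "https://<workspace-url>/api/2.1/jobs/create",
    headers={"Authorization": "Bearer <personal-access-token>"},
    json=job_spec,
)
print(resp.json())  # returns the new job_id on success
```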

“Our mission is to transform the way we power the planet. Our clients in the energy sector need data, consulting services and research to achieve that transformation. Databricks Workflows gives us the speed and flexibility to deliver the insights our clients need.”

— Yanyan Wu, Vice President of Data, Wood Mackenzie

End-to-end observability and monitoring

The Data Intelligence Platform gives you visibility across the entire data and AI lifecycle so data engineers and operations teams can see the health of their production workflows in real time, manage data quality and understand historical trends. In Databricks Workflows you can access dataflow graphs and dashboards tracking the health and performance of your production jobs and Delta Live Tables pipelines. Event logs are also exposed as Delta Lake tables so you can monitor and visualize performance, data quality and reliability metrics from any angle.
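
For example, a Delta Live Tables event log can be queried like any other Delta table; the pipeline storage path below is a placeholder:

```python
# Read a DLT pipeline's event log (a Delta table under the pipeline's
# storage location; the path here is a hypothetical placeholder).
events = spark.read.format("delta").load("dbfs:/pipelines/<pipeline-id>/system/events")

# Inspect flow progress and data quality events over time.
(
    events
    .filter("event_type = 'flow_progress'")
    .select("timestamp", "origin.flow_name", "details")
    .orderBy("timestamp", ascending=False)
    .show(truncate=False)
)
```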

Next-generation data processing engine

Databricks data engineering is powered by Photon, a next-generation engine compatible with Apache Spark APIs that delivers record-breaking price/performance while automatically scaling to thousands of nodes. Spark Structured Streaming provides a single, unified API for batch and stream processing, making it easy to adopt streaming on the lakehouse without changing code or learning new skills.
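
To illustrate the unified API, the same transformation function can serve a batch job and a streaming job, with only the read and write changing; the table names are hypothetical:

```python
from pyspark.sql import functions as F

def clean_clicks(df):
    # Identical business logic for batch and streaming inputs.
    return df.where(F.col("event_type") == "click").select("user_id", "event_time", "url")

# Batch: process the whole table once.
clean_clicks(spark.read.table("raw_events")).write.mode("append").saveAsTable("clicks")

# Streaming: process new rows incrementally with the same function.
(
    clean_clicks(spark.readStream.table("raw_events"))
    .writeStream
    .option("checkpointLocation", "/tmp/clicks/_checkpoint")
    .toTable("clicks")
)
```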

State-of-the-art data governance, reliability and performance

Data engineering on Databricks means you benefit from the foundational components of the Data Intelligence Platform: Unity Catalog and Delta Lake. Your raw data is optimized with Delta Lake, an open source storage format that provides reliability through ACID transactions, scalable metadata handling and lightning-fast performance. Delta Lake combines with Unity Catalog to give you fine-grained governance over all your data and AI assets, with one consistent model to discover, access and share data across clouds. Unity Catalog also provides native support for Delta Sharing, the industry’s first open protocol for simple and secure data sharing with other organizations.
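
As a small sketch of that consistent model, access control is plain SQL; the catalog, schema, table and group names below are placeholders:

```python
# Hypothetical Unity Catalog grants, issued from a notebook via spark.sql.
spark.sql("GRANT USE CATALOG ON CATALOG main TO `analysts`")
spark.sql("GRANT USE SCHEMA ON SCHEMA main.sales TO `analysts`")
spark.sql("GRANT SELECT ON TABLE main.sales.orders TO `analysts`")
```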

Migrate to Databricks

Tired of the data silos, slow performance and high costs associated with legacy systems like Hadoop and enterprise data warehouses? Migrate to the Databricks Data Intelligence Platform: the modern platform for all your data, analytics and AI use cases.

Integrations

Provide maximum flexibility to your data teams — leverage Partner Connect and an ecosystem of technology partners to seamlessly integrate with popular data engineering tools. For example, you can ingest business-critical data with Fivetran, transform it in place with dbt, and orchestrate your pipelines with Apache Airflow.
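
For instance, an Apache Airflow DAG can trigger a job defined in Databricks Workflows through the Databricks provider package; the connection ID and job ID below are placeholders, and the scheduling argument may differ across Airflow versions:

```python
from datetime import datetime

from airflow import DAG
from airflow.providers.databricks.operators.databricks import DatabricksRunNowOperator

# Hypothetical DAG that triggers an existing Databricks job once a day.
with DAG(
    dag_id="trigger_databricks_etl",
    start_date=datetime(2024, 1, 1),
    schedule_interval="@daily",  # Airflow 2.x style; newer versions use `schedule`
    catchup=False,
) as dag:
    run_etl = DatabricksRunNowOperator(
        task_id="run_databricks_job",
        databricks_conn_id="databricks_default",  # connection configured in Airflow
        job_id=12345,                             # ID of the Databricks Workflows job
    )
```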

Data Ingestion and ETL

Fivetran
dbt
Arcion
Matillion
Informatica
Confluent
Qlik
Airbyte
Prophecy
StreamSets
Alteryx
SnapLogic

Customer Stories

Comcast
HSBC
LaLiga
Atlassian
Columbia

Discover more

Delta Lake

Partner Connect

Workflows

Delta Live Tables

Delta Sharing

Ready to get started?