Unified Data Service

High quality data with great performance

Get Started | Schedule a Demo

The Databricks Unified Data Service provides a reliable and scalable platform for your data pipelines and data lakes. It manages your full data journey, so you can ingest, process, store, and expose data throughout your organization.

Manage Your Whole Data Journey

Data Ingest

Pull in data from all your different data sources, storage systems, and data types, including both batch and streaming. Leverage a library of connectors, integrations, and APIs for all your needs.

Data Pipelines

Run scalable and reliable data pipelines. Use Scala, Python, R, or SQL to run processing jobs quickly on distributed Spark runtimes, without having to worry about the underlying compute.
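To make the shape of such a processing job concrete, here is a minimal sketch in plain Python. The record layout and field names are invented for illustration; on the actual service this parse-filter-aggregate flow would be expressed with PySpark DataFrames and distributed across a cluster:

```python
# Minimal sketch of a batch processing job: parse raw events,
# drop malformed records, and aggregate per user. Plain Python
# stands in for what Spark would distribute; field names are
# hypothetical.

raw_events = [
    {"user": "a", "amount": "10.5"},
    {"user": "b", "amount": "3.0"},
    {"user": "a", "amount": "oops"},   # malformed: dropped by the filter
    {"user": "b", "amount": "7.0"},
]

def parse(event):
    """Return a cleaned record, or None if the record is malformed."""
    try:
        return {"user": event["user"], "amount": float(event["amount"])}
    except (KeyError, ValueError):
        return None

def run_pipeline(events):
    cleaned = [r for r in (parse(e) for e in events) if r is not None]
    totals = {}
    for r in cleaned:                  # aggregate: total amount per user
        totals[r["user"]] = totals.get(r["user"], 0.0) + r["amount"]
    return totals

print(run_pipeline(raw_events))        # {'a': 10.5, 'b': 10.0}
```

The same three stages (parse, filter, aggregate) map one-to-one onto DataFrame transformations when the job runs on a Spark cluster.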

Data Lakes

Build reliable data lakes at scale. Improve data quality, optimize storage performance, and manage stored data, all while maintaining data lake compliance and security.

Data Consumers

Use your data lake as a shared source of truth across Data Science, Machine Learning, and Business Analytics teams — BI dashboards, production models, and everything in-between.

Product Components

Delta Lake for Databricks

Delta Lake brings enhanced reliability, performance, and lifecycle management to data lakes. No more incomplete jobs to roll back and clean up, suspect data slipping into your data lake, or difficulty deleting data to meet changing compliance requirements.
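The reliability guarantees above come from a versioned transaction log. The toy class below illustrates the idea only; it is a conceptual sketch, not the Delta Lake API: a write either commits atomically as a new version or leaves no trace, and every committed version stays readable.

```python
# Conceptual toy model of transaction-log versioning (NOT the
# Delta Lake API): atomic commits, no partial writes, readable
# history.

class VersionedTable:
    def __init__(self):
        self._versions = [[]]          # version 0 is the empty table

    def write(self, rows, validate=lambda r: True):
        """Atomically append rows as a new version; an invalid batch
        raises and commits nothing (no partial data to clean up)."""
        if not all(validate(r) for r in rows):
            raise ValueError("bad batch: nothing committed")
        self._versions.append(self._versions[-1] + list(rows))

    def read(self, version=None):
        """Read the latest version, or an older one by number."""
        return self._versions[-1 if version is None else version]

table = VersionedTable()
table.write([{"id": 1}, {"id": 2}])
try:
    table.write([{"id": None}], validate=lambda r: r["id"] is not None)
except ValueError:
    pass                               # failed job left the table untouched
print(len(table.read()))               # 2 rows, still clean
print(table.read(version=0))           # [] -- the original empty snapshot
```

Keeping old versions addressable is also what makes auditing and reproducing past reads possible.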

Databricks Runtime

The Databricks Runtime is a distributed data processing engine built on a highly optimized version of Apache Spark, for up to 50x performance gains. Build pipelines, schedule jobs, and train models with easy self-service and cost-saving performance.

BI Reporting on Delta Lake

BI Reporting on Delta Lake delivers business analytics on your data lake. Connect directly to your most complete and recent data in your data lake with Delta Lake and SparkSQL, and use your preferred BI visualization and reporting tools for more timely business insights.
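To show the kind of query a BI tool runs against a lake table, here is a small SQL example. The in-memory sqlite3 module stands in for Spark SQL, and the table and column names are invented for the illustration:

```python
# Sketch of a typical dashboard query -- revenue per region,
# largest first. sqlite3 stands in for Spark SQL; the schema is
# hypothetical.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE sales (region TEXT, revenue REAL)")
conn.executemany("INSERT INTO sales VALUES (?, ?)",
                 [("EMEA", 120.0), ("EMEA", 80.0), ("APAC", 50.0)])

rows = conn.execute(
    "SELECT region, SUM(revenue) FROM sales "
    "GROUP BY region ORDER BY SUM(revenue) DESC"
).fetchall()
print(rows)   # [('EMEA', 200.0), ('APAC', 50.0)]
```

Because the same SQL dialect works across BI tools, the visualization layer can be swapped without changing the underlying lake tables.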


For Data Engineers

Build robust data pipelines that scale without having to worry about infrastructure, and refine data quality across bronze, silver, and gold tables in your data lake, all while truly unifying batch and streaming data sources.
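The bronze-silver-gold refinement can be sketched as three stages. Plain Python lists stand in for lake tables here, and the column names are invented; the point is only the progressive cleanup from raw to consumer-ready data:

```python
# Sketch of the bronze -> silver -> gold refinement pattern.
# Lists stand in for tables; the schema is hypothetical.

bronze = [                              # raw, as-ingested records
    {"id": 1, "city": " berlin ", "temp_c": 21.0},
    {"id": 2, "city": "Berlin",   "temp_c": 19.0},
    {"id": 2, "city": "Berlin",   "temp_c": 19.0},   # duplicate
    {"id": 3, "city": None,       "temp_c": 25.0},   # missing city
]

def to_silver(rows):
    """Silver: deduplicate by id, drop incomplete rows, normalize text."""
    seen, out = set(), []
    for r in rows:
        if r["city"] is None or r["id"] in seen:
            continue
        seen.add(r["id"])
        out.append({**r, "city": r["city"].strip().title()})
    return out

def to_gold(rows):
    """Gold: aggregate to the shape consumers query -- avg temp per city."""
    sums, counts = {}, {}
    for r in rows:
        sums[r["city"]] = sums.get(r["city"], 0.0) + r["temp_c"]
        counts[r["city"]] = counts.get(r["city"], 0) + 1
    return {city: sums[city] / counts[city] for city in sums}

gold = to_gold(to_silver(bronze))
print(gold)                             # {'Berlin': 20.0}
```

Each stage stays a separate table, so downstream consumers can pick the refinement level that fits their workload.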

For Data Scientists

Simplified data engineering helps you clean and prep your data for exploratory data science or productionized ML models. Spin up autoscaling clusters on demand for prep, training, or scoring, all available as self-service.

For Business Analysts

Run BI/SQL reporting on your data lake, for the most complete and up-to-date data possible. Use your BI tool of choice to visualize and dashboard the same single source of truth used for data science and machine learning.

Ecosystem Support


Data Sources


Visualization Tools

Customer Stories

How Australia’s National Health Services Directory Improved Data Quality, Reliability, and Integrity with Delta Lake

At Healthdirect we use Apache Spark and Delta Lake’s fine-grained table features and data versioning to solve duplication and eliminate data redundancy. This has enabled us to develop and provide high-quality data through federation and interoperability services whilst providing the analytics to improve Health Services demand forecasting and clinical outcomes in service lines, such as Aged Care and Preventative Health.

Ready to Get Started?