
Data Pipelines

If you work in a role that interacts with data, you'll have come across a data pipeline, whether you realize it or not.

Many modern organizations run their operations on a variety of cloud-based platforms and technologies, and data pipelines are instrumental in moving information between these systems.

We're going to take a look at the different types of data pipelines, how they're used and the best practices for building one for your organization.

What is a data pipeline?

A data pipeline encompasses the ways data flows from one system to another. It consists of a series of steps that are carried out in a specific order, with the output of one step acting as the input for the next step.

There are usually three key elements: the source, the data processing steps, and finally, the destination, or "sink." Data can be modified during the transfer process, and some pipelines may be used simply to transform data, with the source system and destination being the same.
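
To make those three elements concrete, here is a minimal sketch in plain Python. The file names and fields (orders.csv, amount) are hypothetical; the point is simply that each step runs in order and the output of one step becomes the input of the next.

```python
# A minimal sketch of a pipeline's three elements: a source, an ordered
# series of processing steps, and a destination ("sink").
# The file paths and field names here are hypothetical.

import csv
import json

def extract(path: str) -> list[dict]:
    """Source: read raw rows from a CSV file."""
    with open(path, newline="") as f:
        return list(csv.DictReader(f))

def transform(rows: list[dict]) -> list[dict]:
    """Processing step: the output of extract() is this step's input."""
    return [
        {"id": row["id"], "amount": float(row["amount"])}
        for row in rows
        if row.get("amount")  # drop rows with missing values
    ]

def load(rows: list[dict], path: str) -> None:
    """Sink: write the processed rows to their destination."""
    with open(path, "w") as f:
        json.dump(rows, f)

# Each step feeds the next, in a fixed order.
load(transform(extract("orders.csv")), "orders_clean.json")
```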

In recent years, data pipelines have had to become powerful enough to cope with the big data demands of organizations as large volumes and varieties of new data have become more common.

Steps need to be taken to ensure that pipelines experience no data loss, provide high accuracy and quality, and can scale with the varying needs of businesses. They should be versatile enough to cope with structured, unstructured and semi-structured data.


Common examples of data pipelines

Various types of data pipeline architecture are available, each with attributes that suit it to different use cases.

Batch pipeline

Batch pipelines are, as the name suggests, used to process data in batches. If you need to move a large number of data points from a system, such as your payroll, to a data warehouse, a batch-based pipeline can be used.

The data is not transferred in real time; instead, it's usually allowed to build up and be transferred on a set schedule.
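
As an illustration, a batch job of this kind might be sketched with PySpark and run by a scheduler or orchestrator on a set schedule. The landing path, column names and warehouse table below are hypothetical placeholders.

```python
# A minimal batch pipeline sketch using PySpark. In practice a scheduler
# (for example, an orchestrator or a cron entry) runs this job on a set
# schedule rather than in real time.

from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("payroll_batch").getOrCreate()

# Read everything that has accumulated in the landing area since the last run.
payroll = spark.read.parquet("/landing/payroll/")

# A simple transformation before loading into the warehouse.
summary = (
    payroll
    .groupBy("department")
    .agg(F.sum("gross_pay").alias("total_gross_pay"))
)

# Load the batch into the destination table (append or overwrite as needed).
summary.write.mode("append").saveAsTable("warehouse.payroll_summary")
```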

Streaming pipeline

A streaming pipeline can be used to process raw data almost instantly. The stream processing engine processes data in real time as it is generated, making it a solid option for organizations accessing information from a streaming location, such as financial markets or social media.
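
A streaming pipeline could look something like the following Spark Structured Streaming sketch, assuming a Kafka source. The broker address, topic and storage paths are hypothetical, and the Kafka connector package must be available on the cluster.

```python
# A minimal streaming pipeline sketch using Spark Structured Streaming.
# Records are processed as they arrive rather than on a schedule.

from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("ticker_stream").getOrCreate()

# Source: subscribe to a (hypothetical) stream of market ticks.
ticks = (
    spark.readStream
    .format("kafka")
    .option("kafka.bootstrap.servers", "broker:9092")
    .option("subscribe", "market_ticks")
    .load()
)

# Processing step: decode the message payload as it streams through.
decoded = ticks.select(F.col("value").cast("string").alias("raw_event"))

# Sink: continuously append results to a Delta table, with checkpointing.
query = (
    decoded.writeStream
    .format("delta")
    .option("checkpointLocation", "/checkpoints/market_ticks")
    .outputMode("append")
    .start("/tables/market_ticks")
)

query.awaitTermination()
```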

Lambda architecture

Lambda architecture provides a hybrid approach to processing data, combining batch-processing and stream-processing methods. While there are benefits to this approach, such as flexible scaling, the challenges may outweigh them.

It's often seen as outdated and unnecessarily complex, requiring multiple layers (batch, speed and serving). This demands a substantial amount of computational time and power, not to mention cost. Because it relies on two different code bases that must remain in sync, it can be very difficult to maintain and debug.

Delta architecture

Delta architecture on Databricks offers an alternative to lambda architecture. With a focus on simplicity, Delta architecture ingests, processes, stores and manages data within Delta Lake. Delta architecture has less code to maintain, provides a single source of truth for downstream users and allows easy merging of new data sources. It also decreases job costs through fewer data hops and job fails as well as lower times for job completion and cluster spin-ups.
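
One way to picture the single source of truth idea is an upsert into a Delta table that both batch and streaming consumers read. The sketch below assumes the delta-spark package is installed; the paths and column names are hypothetical.

```python
# A small sketch of merging newly arrived data into one Delta table,
# which then serves as the single source of truth for downstream users.

from delta.tables import DeltaTable
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("delta_merge").getOrCreate()

# Newly arrived records (from any source) as a DataFrame.
updates = spark.read.parquet("/landing/customers_incremental/")

# Upsert the new records into the existing Delta table.
customers = DeltaTable.forPath(spark, "/tables/customers")
(
    customers.alias("t")
    .merge(updates.alias("s"), "t.customer_id = s.customer_id")
    .whenMatchedUpdateAll()
    .whenNotMatchedInsertAll()
    .execute()
)

# Downstream users, batch or streaming, read the same table.
current = spark.read.format("delta").load("/tables/customers")
```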

How to build a data pipeline

How a data pipeline is built and implemented will often be decided by the individual needs of the business. In most cases, a production data pipeline can be built by data engineers. Code can be written to access data sources through an API, perform the necessary transformations, and transfer data to the target systems.

However, without automation, this requires an ongoing investment of time, coding, and engineering and ops resources. Delta Live Tables (DLT) makes it easy to define end-to-end pipelines: rather than manually piecing together a variety of data processing jobs, you specify the data source, the transformation logic and the destination state of the data. DLT automatically maintains any dependencies, cutting down on the time you need to spend manually tuning the pipeline.
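
As a rough sketch of that declarative style, a DLT pipeline might declare its tables like this. The source path, column names and data quality expectation are made up for illustration, and the code is meant to run inside a DLT pipeline rather than as a standalone script.

```python
# A minimal Delta Live Tables sketch: tables are declared, and DLT infers
# the dependency between them from how they read each other.

import dlt
from pyspark.sql import functions as F

@dlt.table(comment="Raw orders ingested from cloud storage.")
def orders_raw():
    # "spark" is provided by the DLT runtime environment.
    return spark.read.format("json").load("/landing/orders/")

@dlt.table(comment="Cleaned orders ready for downstream reporting.")
@dlt.expect_or_drop("valid_amount", "amount > 0")  # automatic data quality check
def orders_clean():
    # Reading the upstream table declares the dependency for DLT.
    return dlt.read("orders_raw").withColumn("amount", F.col("amount").cast("double"))
```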

The importance of data pipelines in modern organizations

"Data pipeline" is a term that encompasses a variety of processes and can serve various purposes. They're an important part of any business that relies on data.

They ensure that data ends up where it should go, help keep formats consistent and can maintain a high standard of data quality. Without the right pipelines in place, it's easy to end up with important information in silos, or with duplicate data spreading throughout the organization.

FAQs about data pipelines

What is the difference between ETL and a data pipeline?

To put it simply, ETL is a type of data pipeline, but not all data pipelines are ETL pipelines.

ETL stands for "extract, transform and load," three interdependent processes involved in data integration. These processes pull data from one database and move it to another, such as a cloud data warehouse, where it can be used for data analysis, visualization and reporting. A data pipeline is the mechanism that carries out these ETL tasks.

Not all data pipelines involve data transformation, so they don't necessarily implement ETL. For instance, the final step in a data pipeline could be to activate another workflow or process instead, as sketched below.
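
For example, a pipeline of that kind might simply move a file as-is and then trigger a downstream workflow rather than loading data into a warehouse. The paths and webhook URL in this sketch are purely illustrative.

```python
# A sketch of a non-ETL pipeline: the data is moved without transformation,
# and the final step kicks off another workflow instead of loading data.

import shutil
import urllib.request

def move_without_transform(src: str, dst: str) -> None:
    """Copy the raw file unchanged: there is no "T" step, so this is not ETL."""
    shutil.copyfile(src, dst)

def trigger_downstream_workflow(webhook_url: str) -> None:
    """Final step: start another process via a (hypothetical) webhook."""
    urllib.request.urlopen(urllib.request.Request(webhook_url, method="POST"))

move_without_transform("/landing/events.json", "/archive/events.json")
trigger_downstream_workflow("https://example.com/hooks/start-reporting-job")
```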

Which tools can be used for a data pipeline?

There are a variety of tools and apps available, such as Apache Spark™, that can be used to build and maintain data pipelines, facilitating better data management and business intelligence. As these apps can require a large amount of manual optimization, they are a good choice for organizations with the necessary expertise to build and customize their own pipelines.

Meanwhile, a solution like Databricks Delta Live Tables (DLT) offers users automation and reduced complexity. This solution makes it easy to build and manage reliable batch and streaming data pipelines that deliver high-quality data on the Databricks Lakehouse Platform. DLT helps data engineering teams simplify ETL development and management with declarative pipeline development and deep visibility for monitoring and recovery. Plus, these intelligent data pipelines include automatic data quality testing, preventing bad data from impacting your work.
