
Optimizing Incremental Ingestion in the Context of a Lakehouse

On Demand

Type

  • Session

Format

  • Virtual

Track

  • Data Engineering

Difficulty

  • Intermediate

Duration

  • 0 min

Overview

Incremental ingestion of data is often trickier than one would assume, particularly when it comes to maintaining data consistency: for example, different challenges arise depending on whether the data is ingested in a streaming or a batched fashion. In this session we share the real-life challenges we encountered when setting up an incremental ingestion pipeline in the context of a Lakehouse architecture.

In this session we outline how we used recently introduced Databricks features, such as Auto Loader and Change Data Feed, alongside more mature ones, such as Spark Structured Streaming and its Trigger Once functionality. Together they allowed us to transform batch processes into a "streaming" setup without requiring an always-on cluster. This setup, which we are keen to share with the community, avoids reloading large amounts of data and is therefore a computationally, and consequently economically, cheaper solution.
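As a rough sketch of that pattern, the PySpark snippet below combines Auto Loader with a run-once trigger and a streaming read of a Delta Change Data Feed. The paths and table names (landing_path, checkpoint_path, bronze.orders) are hypothetical placeholders, not the pipeline presented in the session:

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

landing_path = "/mnt/landing/orders"          # hypothetical source location
checkpoint_path = "/mnt/checkpoints/orders"   # tracks which files were already ingested

# Auto Loader ("cloudFiles") incrementally discovers new files and can infer
# the schema, persisting it under schemaLocation.
raw = (spark.readStream
       .format("cloudFiles")
       .option("cloudFiles.format", "json")
       .option("cloudFiles.schemaLocation", checkpoint_path)
       .load(landing_path))

# availableNow (or Trigger Once on older runtimes) processes everything
# outstanding and then stops, so no cluster has to stay up between runs.
(raw.writeStream
    .option("checkpointLocation", checkpoint_path)
    .trigger(availableNow=True)   # .trigger(once=True) on older runtimes
    .toTable("bronze.orders"))

# Downstream layers can then consume only the changed rows via Delta Change
# Data Feed (requires delta.enableChangeDataFeed = true on the table):
changes = (spark.readStream
           .format("delta")
           .option("readChangeFeed", "true")
           .table("bronze.orders"))
```

Because the stream's checkpoint remembers which files have been processed, each scheduled run picks up only new data, which is what makes the setup cheap compared with full reloads.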

In our presentation we dive deeper into each aspect of the setup, with extra focus on essential Auto Loader functionality such as schema inference, recovery mechanisms, and file discovery modes; a hedged configuration sketch follows below.
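For concreteness, here is a sketch of the Auto Loader options that map to those three topics. The values shown are illustrative assumptions, not the configuration used in the session:

```python
df = (spark.readStream
      .format("cloudFiles")
      .option("cloudFiles.format", "csv")
      # Schema inference: the inferred schema is persisted here so restarts
      # do not re-infer it from scratch.
      .option("cloudFiles.schemaLocation", "/mnt/checkpoints/orders_schema")
      # Recovery: "rescue" routes non-conforming data into the _rescued_data
      # column instead of failing the stream ("addNewColumns" would instead
      # evolve the schema).
      .option("cloudFiles.schemaEvolutionMode", "rescue")
      # File discovery: switch from the default directory listing to cloud
      # file-notification mode, which scales better for large directories.
      .option("cloudFiles.useNotifications", "true")
      .load("/mnt/landing/orders"))  # hypothetical landing path
```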

Session Speakers


Yoshi Coppens

Data Engineer

element61


Ivana Pejeva

Cloud Solution Architect

Microsoft
