
Optimizing Incremental Ingestion in the Context of a Lakehouse

On Demand

Type

  • Session

Format

  • Virtual

Track

  • Data Engineering

Difficulty

  • Intermediate

Duration

  • 0 min

Overview

Incremental ingestion of data is often trickier than one would assume, particularly when it comes to maintaining data consistency: for example, different challenges arise depending on whether the data is ingested in a streaming or a batched fashion. In this session we share the real-life challenges we encountered when setting up an incremental ingestion pipeline in the context of a Lakehouse architecture.

We outline how we used recently introduced Databricks features, such as Autoloader and Change Data Feed, alongside more mature features, such as Spark Structured Streaming and its Trigger Once functionality. Together, these allowed us to transform batch processes into a "streaming" setup without the cluster having to run continuously. This setup, which we are keen to share with the community, does not require reloading large amounts of data, and is therefore computationally, and consequently economically, cheaper.
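As a rough illustration of this pattern, here is a minimal PySpark sketch, with hypothetical paths and table names; it uses Autoloader (the cloudFiles source) to discover new files incrementally and Trigger Once to process everything currently available and then stop, so no cluster needs to stay up between runs.

    from pyspark.sql import SparkSession

    spark = SparkSession.builder.getOrCreate()

    # Hypothetical locations, for illustration only.
    source_path = "abfss://landing@account.dfs.core.windows.net/events/"
    checkpoint_path = "/mnt/checkpoints/bronze_events"

    # Autoloader ("cloudFiles") incrementally discovers and reads new files.
    events = (
        spark.readStream
        .format("cloudFiles")
        .option("cloudFiles.format", "json")
        .option("cloudFiles.schemaLocation", checkpoint_path)
        .load(source_path)
    )

    # Trigger Once drains everything available and then stops the query,
    # so the job can run on a scheduled, short-lived cluster instead of an
    # always-on one, while keeping exactly-once streaming semantics.
    (
        events.writeStream
        .format("delta")
        .option("checkpointLocation", checkpoint_path)
        .trigger(once=True)
        .toTable("bronze.events")
    )

Downstream layers can be kept incremental in the same spirit by reading the Delta Change Data Feed of the bronze table, e.g. spark.readStream.format("delta").option("readChangeFeed", "true").table("bronze.events").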

In our presentation we dive deeper into each aspect of the setup, with extra focus on essential Autoloader functionalities such as schema inference, recovery mechanisms, and file discovery modes.
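Those functionalities correspond to standard cloudFiles options; the following sketch (paths hypothetical) shows where each one plugs in:

    # Schema inference: sample the input and persist the inferred schema so
    # it is reused and evolved across runs rather than re-inferred each time.
    # Recovery: "rescue" routes non-conforming data into the _rescued_data
    # column instead of failing the stream.
    # File discovery: notification-based discovery avoids repeatedly listing
    # large directories.
    df = (
        spark.readStream
        .format("cloudFiles")
        .option("cloudFiles.format", "csv")
        .option("cloudFiles.schemaLocation", "/mnt/checkpoints/schemas/events")
        .option("cloudFiles.inferColumnTypes", "true")
        .option("cloudFiles.schemaEvolutionMode", "rescue")
        .option("cloudFiles.useNotifications", "true")
        .load("abfss://landing@account.dfs.core.windows.net/events/")
    )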

Session Speakers

Yoshi Coppens

Data Engineer

element61

Ivana Pejeva

Cloud Solution Architect

Microsoft
