Hendrik Frentrup

Founder, Systemati.co

Hendrik is the founder of systemati.co – a Data Engineering and Data Science consultancy. Previously, he worked as a Data Scientist fully immersed in developing applications on distributed computing infrastructure, writing analytics code and machine learning pipelines in Apache Spark. He discovered the world of high-performance computing and parallelisation in the early 2000s, but is happy to no longer have to log into mainframe clusters remotely.

Past sessions

Summit Europe 2019 Maps and Meaning: Graph-based Entity Resolution in Apache Spark & GraphX

October 16, 2019 05:00 PM PT

Data integration and the automation of tedious data extraction tasks are fundamental building blocks of a data-driven organization, yet they are at times overlooked or underestimated. Aside from data extraction, scraping and ETL tasks, entity resolution is a crucial step in successfully combining datasets. Combining data sources is usually what provides richness in features and variance, so building expertise in entity resolution is important for data engineers. Graph-based entity resolution algorithms have emerged as a highly effective approach.

This talk will present the implementation of a graph-based entity resolution technique in GraphX and in GraphFrames respectively. Starting from the concept and moving through how to implement the algorithm in Spark, the technique will also be illustrated by walking through a practical example. The example will show how efficacy can be achieved with simple heuristics, while at the same time mapping a path towards a machine-learning-assisted entity resolution engine with a powerful knowledge graph at its center.
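To give a flavour of the core mechanism (this is not the talk's actual code), graph-based entity resolution amounts to building a graph whose nodes are records and whose edges are heuristic matches, then treating connected components as resolved entities. A minimal, Spark-free Python sketch under those assumptions, with purely illustrative record data:

```python
# Sketch: entity resolution as connected components over a match graph.
# Records that collide under any blocking heuristic (e.g. same email)
# become edges; union-find then extracts the connected components.
# In Spark this step would be GraphX/GraphFrames connectedComponents.
from collections import defaultdict

def resolve_entities(records, key_funcs):
    """Cluster record indices whose keys collide under any heuristic."""
    parent = list(range(len(records)))

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]  # path compression
            i = parent[i]
        return i

    def union(i, j):
        parent[find(i)] = find(j)

    for key_func in key_funcs:
        buckets = defaultdict(list)
        for idx, rec in enumerate(records):
            key = key_func(rec)
            if key:
                buckets[key].append(idx)
        # Every record in a bucket links to the bucket's first record.
        for ids in buckets.values():
            for other in ids[1:]:
                union(ids[0], other)

    clusters = defaultdict(list)
    for idx in range(len(records)):
        clusters[find(idx)].append(idx)
    return list(clusters.values())

# Illustrative data: two spellings of the same person, one other person.
records = [
    {"name": "H. Frentrup", "email": "h@example.com"},
    {"name": "Hendrik Frentrup", "email": "h@example.com"},
    {"name": "Jane Doe", "email": "jane@example.com"},
]
clusters = resolve_entities(records, [lambda r: r.get("email")])
# Records 0 and 1 share an email, so they resolve to a single entity.
```

In a Spark pipeline the buckets would come from a groupBy on blocking keys and the union-find step would be replaced by a distributed connected-components run, but the resolution logic is the same.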

Machine learning can play a role upstream in building the graph, for example by using classification algorithms to determine the link strength between nodes based on the data, or downstream, where dimensionality reduction can assist clustering and reduce the computational load of the resolution stage. The audience will leave with a clear picture of a scalable data pipeline that performs entity resolution effectively, and a thorough understanding of its internal mechanism, ready to apply it to their own use cases.
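The upstream role of ML can be pictured as scoring candidate edges before the graph is built. As a hypothetical stand-in for a trained classifier, the sketch below uses a plain string-similarity ratio as the link strength and keeps only edges above a threshold (names and the 0.6 cutoff are illustrative assumptions):

```python
# Sketch: weighting candidate edges of the entity graph by pairwise
# record similarity, and keeping only strong links. A trained
# classifier would replace link_strength() in a real pipeline.
from difflib import SequenceMatcher

def link_strength(a, b):
    """Crude name similarity in [0, 1]."""
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

names = ["Hendrik Frentrup", "H. Frentrup", "Jane Doe"]
edges = [
    (i, j, link_strength(names[i], names[j]))
    for i in range(len(names))
    for j in range(i + 1, len(names))
]
strong_edges = [(i, j) for i, j, w in edges if w > 0.6]
# Only the two Frentrup variants clear the threshold.
```

The surviving edges would then feed the connected-components (or clustering) stage, which is where dimensionality reduction can help keep the resolution step tractable.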