Experiences Migrating Hive Workload to SparkSQL


At Facebook, millions of Hive queries are executed daily, and this workload drives important analytics behind product decisions and insights. Spark SQL in Apache Spark provides much of the same functionality as the Hive query language (HQL), often more efficiently, and Facebook is building a framework to migrate existing production Hive workloads to Spark SQL with minimal user intervention.

Before Facebook began large-scale migration to Spark SQL, they worked on identifying the gap between HQL and Spark SQL. They built an offline syntax analysis tool that parses, analyzes, optimizes, and generates physical plans for the daily HQL workload; in this session, they'll share the results. Encouraged by the syntactic analysis, they built tooling for offline semantic analysis, in which they run HQL queries on a Spark shadow cluster and validate the outputs. Output validation is necessary because runtime behavior in Spark SQL may differ from HQL. They have built a migration framework that supports HQL on both the Hive and Spark execution engines, can shadow and validate HQL workloads on Spark, and makes it easy for users to convert their workloads.
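The output-validation step described above can be sketched in a few lines. This is an illustrative example, not Facebook's actual tooling: it compares the rows a query produced on each engine, tolerating two differences the talk alludes to when runtime behavior diverges, namely row ordering and small floating-point drift. The function names and tolerance parameter are assumptions for the sketch.

```python
from collections import Counter

# Hypothetical sketch of shadow-run output validation: the same HQL query is
# executed on Hive and on a Spark shadow cluster, and the two result sets are
# compared as multisets so that row order does not matter.

def normalize_row(row, float_places=6):
    """Round floats so engine-specific rounding doesn't cause false mismatches."""
    return tuple(
        round(v, float_places) if isinstance(v, float) else v
        for v in row
    )

def outputs_match(hive_rows, spark_rows, float_places=6):
    """True if both engines produced the same multiset of (normalized) rows."""
    return (
        Counter(normalize_row(r, float_places) for r in hive_rows)
        == Counter(normalize_row(r, float_places) for r in spark_rows)
    )

# Same data, different row order and tiny float drift: still considered a match.
hive_result = [("us", 1.0000001), ("uk", 2.0)]
spark_result = [("uk", 2.0), ("us", 1.0)]
print(outputs_match(hive_result, spark_result))  # True
```

In practice a validator would also compare row counts, schemas, and checksums of large outputs rather than materializing full result sets, but the multiset comparison above is the core idea.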

Session hashtag: #SFdev8

Learn more:

  • Spark SQL: Manipulating Structured Data Using Apache Spark
  • Hive to Spark—Journey and Lessons Learned
  • Shark, Spark SQL, Hive on Spark, and the future of SQL on Apache Spark

  • About Jie Xiong

Jie Xiong is a Software Engineer at Facebook, where she works on the Ads Data Infrastructure team, focusing on large-scale data storage and processing that power Facebook Ads. She obtained her PhD from the University of Illinois and is interested in high-performance computing and large-scale data processing.