Two weeks ago we held a live webinar – Databricks' Data Pipeline: Journey and Lessons Learned – to show how Databricks used Apache Spark to simplify our own log ETL pipeline. The webinar describes an architecture in which you develop your pipeline code in notebooks, create Jobs to productionize those notebooks, and use REST APIs to turn all of this into a continuous integration workflow.
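
To make the last step concrete, here is a minimal sketch of how a CI system might drive such a pipeline through the Databricks Jobs REST API (2.0): trigger a run of a notebook job, then poll until it finishes. The workspace URL, token environment variable, and job ID below are placeholders, not values from the webinar.

```python
import os
import time

import requests

# Placeholder values: substitute your workspace URL, a personal access
# token (stored here in the DATABRICKS_TOKEN environment variable), and
# the ID of the job that runs your notebook.
HOST = "https://<your-workspace>.cloud.databricks.com"
TOKEN = os.environ["DATABRICKS_TOKEN"]
JOB_ID = 123

headers = {"Authorization": f"Bearer {TOKEN}"}

# Trigger a run of the notebook job.
resp = requests.post(
    f"{HOST}/api/2.0/jobs/run-now",
    headers=headers,
    json={"job_id": JOB_ID},
)
resp.raise_for_status()
run_id = resp.json()["run_id"]

# Poll the run's state until it reaches a terminal life-cycle state.
while True:
    run = requests.get(
        f"{HOST}/api/2.0/jobs/runs/get",
        headers=headers,
        params={"run_id": run_id},
    ).json()
    state = run["state"]["life_cycle_state"]
    if state in ("TERMINATED", "SKIPPED", "INTERNAL_ERROR"):
        # result_state is SUCCESS or FAILED once the run terminates.
        print("Run finished:", run["state"].get("result_state"))
        break
    time.sleep(30)
```

A CI server can run this script on every commit, failing the build whenever the job's result state is not SUCCESS.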

We have answered the common questions raised by webinar viewers below. If you have additional questions, please check out the Databricks Forum.

Common webinar questions and answers

Click on a question to see the answer: