We’re excited to announce that Spark Summit is expanding its coverage in 2018 to include in-depth content on artificial intelligence, and we are renaming the conference Spark + AI Summit. AI has always been one of the most exciting applications of big data and Apache Spark, so with this change, we plan to bring in keynotes, talks, and tutorials on the latest AI tools, in addition to the great data engineering and data science content we already have. We believe that with these two topics together, the conference will be a fantastic “one-stop shop” to learn how to practically apply the best tools in big data and AI to build innovative products.
Spark Summit has grown beyond our wildest expectations in the past four years, largely thanks to its focus on deep technical talks and real-world use cases; the series reached a high of 5,700 total attendees in 2017 across North America and Europe. With Spark + AI Summit, we plan to provide the same kind of in-depth coverage for AI. Big data and AI are joined at the hip: the best AI applications require massive amounts of constantly updated training data to build state-of-the-art models. It is thus no accident that our AI sessions at past Summits, such as the Deep Learning Pipelines tutorial in Europe this year, were often standing room only. However, AI also requires a wide range of other, rapidly evolving tools, including training frameworks, serving systems, models, and algorithms. How can you make sense of this landscape and quickly build reliable, proven workflows that solve business problems?
We hope that Spark + AI Summit will cover this process end-to-end, starting with the raw data and moving through cleaning, quality assurance, training, serving, monitoring and live updates.
In the AI tracks at Spark + AI Summit, we plan to cover best practices for building real-world AI applications—in natural language, image and video processing, speech recognition, recommendation engines and more—using a broad variety of open source tools. We will have deep dive sessions on popular software frameworks—e.g., TensorFlow, scikit-learn, Keras, PyTorch, Deeplearning4j, BigDL, and Deep Learning Pipelines—covering best practices for each, as well as our usual sprinkling of research and application talks. In addition, we will cover best practices for productionizing AI: keeping training data fresh with stream processing, monitoring quality, debugging, testing, and serving models at massive scale or on the edge. If you are working in any of these areas, and especially at the intersection of engineering and AI, we encourage you to submit a talk to next year’s Summit.
Finally, we are bringing together what had been two separate US events—Spark Summit (West) and Spark Summit East—into one large North American conference in 2018. (We will still hold a European conference in the fall.) By holding a single North American conference, we hope to attract even more attendees, facilitate broader sharing, and foster more connections within the big data and AI community through three days of amazing content. To make this new conference a grand success, we are doing an early call for papers (will add link when it’s available). We would love to hear your feedback on the direction of the conference and the topics we should cover. Please send us a note at [email protected].
We look forward to seeing you at the Spark + AI Summit on June 4-6, 2018 in San Francisco!