At today’s Spark Summit, Databricks and IBM announced a joint effort to contribute key machine learning capabilities to the Apache Spark Project. Over the next few months, Databricks and IBM will collaborate to expand Spark’s machine learning capabilities. The companies plan to introduce new domain-specific algorithms to the Spark ecosystem and add new machine learning primitives to the Apache Spark Project. IBM and Databricks will also collaborate to integrate IBM’s SystemML, a robust machine-learning engine for large-scale data, with the Spark platform.
“The size and scale of companies that are partnering with Databricks to support the Spark movement is both inspiring and validating,” said Ion Stoica, CEO at Databricks. “We are looking forward to IBM becoming a key member of the Spark community, as demonstrated by its investment in a Spark Technology Center in San Francisco. This collaboration will help Spark continue to gain mainstream adoption and deliver next-generation big data analytics and applications.”
To keep up with Spark and Databricks news, don’t forget to sign up for our monthly newsletter.