CUSTOMER STORY

Powering the Connected Home With AI

300

Types of sensor and user interaction data

350K

Households on the connected home platform

CLOUD: AWS

“The Databricks Data Intelligence Platform has allowed us – as a team composed of data engineers, data scientists, but also the rest of the business – to democratize IoT and customer data to power analytics and machine learning to support the energy needs of our customers.”

— Stephen Glasworthy, Head of Data Science, Eneco

Eneco produces and supplies natural gas, electricity and heat in the Netherlands, serving more than 2 million business and residential customers. To support the growing demand for smart home technologies, Eneco has developed Toon, a connected home platform that provides home energy management solutions. With the Toon app, homeowners can control their heating, lighting, smoke detectors, and smart plugs from their mobile devices. To power these services, Eneco relies on over 300 types of sensor and user interaction data and petabytes of energy usage data across its customer base. Unfortunately, Eneco’s legacy infrastructure was complex and time-consuming to maintain, distracting the team from value-added work and generating unnecessary operational costs. To leverage the incoming IoT data effectively, Eneco turned to the Databricks Data Intelligence Platform to simplify data and analytics, giving them democratized, scalable data insights and ML-powered services to provide smart energy management to their growing customer base.

Struggling to gain scalable, real-time data insights

With steady growth across countries, Quby (now fully integrated into its parent company, Eneco) faced an uphill battle trying to manage massive amounts of IoT data on their own on-premises Hadoop infrastructure, which was costly and complex to manage at scale. Stephen Glasworthy, Head of Data Science at Eneco, explains, “A big challenge we faced from the data engineering perspective was just the amount of time that it took to manage our own infrastructure. We could see from just working with our on-site Hadoop cluster that as soon as a new package came along, we’d have a data engineer spending a day or two making sure that it was installed correctly, or the dependencies were managed well.” He goes on to say, “In order to scale up to hundreds of thousands of users across Europe, we needed to take advantage of the cloud and adopt a more robust platform that our data scientists could easily interface with.”

Another area where Eneco struggled was processing streaming data reliably and efficiently for use in ML models. Real-time ingestion was unreliable, and the data scientists were forced to use a number of different tools on their individual machines, none of which were collaborative or scalable. Overcoming these issues was time-consuming and prevented the organization from innovating quickly.

A unified approach with lakehouse architecture

Eneco selected the Databricks Data Intelligence Platform to simplify the unification of their data and to enable real-time data analytics and ML. Stephen explained, “Moving to Databricks allows us to take away some of that concentration on infrastructure management, and that means our data engineers can focus on where they’re really adding even more specific value to the business.” Now that Eneco can proficiently process petabytes of batch and streaming IoT data, they can easily build real-time data pipelines to support downstream analytics and ML. By providing a common API, Databricks allows Eneco to transactionally store large historical and streaming datasets to S3. This dramatically simplified Eneco’s data pipelines, finally making their massive datasets available for high-performance analytics.

Databricks’ collaborative notebooks opened the door for developers, engineers, and data scientists to work transparently across teams. “The collaboration allowed us to develop code in a systematic way, and what we were also able to do is very quickly switch between languages. So for data science, some things work really well in Python. You want to be able to use some of the latest developments in TensorFlow and Keras. And sometimes, you want to be switching more towards Scala for production code,” explained Stephen. This flexibility allows Eneco’s teams to apply ML to distributed data, improving the data-driven services that keep them competitive in the market.

Accelerating innovations that power the smart home

With Databricks, Eneco has created a myriad of smart home services. One service detects energy usage from appliances. The model extracts patterns from the electricity data in 10-second intervals for near-instant data insights. “One example of how Databricks has provided us with a lot of value is we have a use case where we’re trying to detect anomalies in heating systems of homes,” said Stephen. “Databricks allows us to offer alerts to users with a very limited latency, so they’re able to react to problems within the home before it affects their comfort levels.”
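The heating-anomaly alerts Stephen describes can be sketched with a simple rolling-statistics check. The snippet below is a hypothetical illustration, not Eneco’s actual model: it flags 10-second sensor readings that deviate sharply from a rolling baseline, the kind of lightweight check a low-latency alerting pipeline might start from before more sophisticated ML is applied.

```python
from collections import deque
from statistics import mean, stdev

def detect_anomalies(readings, window=30, threshold=3.0):
    """Flag readings that deviate strongly from a rolling baseline.

    `readings` is a sequence of (timestamp, value) pairs, e.g. heating
    sensor values sampled at 10-second intervals. A reading is flagged
    when it lies more than `threshold` rolling standard deviations from
    the rolling mean of the previous `window` readings.
    Returns the timestamps of flagged readings.
    """
    history = deque(maxlen=window)  # rolling window of recent values
    alerts = []
    for ts, value in readings:
        if len(history) >= 2:  # need at least two points for a stdev
            mu, sigma = mean(history), stdev(history)
            if sigma > 0 and abs(value - mu) / sigma > threshold:
                alerts.append(ts)
        history.append(value)  # update baseline after the check
    return alerts

# Simulated boiler temperatures: steady around 60 °C with one spike.
readings = [(i * 10, 60.0 + (i % 3) * 0.1) for i in range(50)]
readings[40] = (400, 90.0)  # injected fault at t=400 s
print(detect_anomalies(readings))  # the spike's timestamp is flagged
```

In a production setting this per-reading check would run inside a streaming job rather than a plain loop, but the core logic of comparing each new reading against a rolling baseline stays the same.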

Stephen goes on to highlight the value of having focused and enabled team members. “The real business impact of Databricks is it allows us to manage our whole data science stack with a very slick team, so we don’t have to spend our time really concentrating on the infrastructure. We can put our emphasis on developing the machine learning algorithms; that’s where we can really truly extract information from the data we’re collecting.” Looking ahead, as new energy innovations continue to be developed, Eneco believes Databricks will remain a critical platform for expanding their services and meeting growing customer demand.

“The future of smart home energy is quickly advancing before our eyes. With Databricks, we are well equipped to lead this movement today and into the future,” concluded Stephen.