Implementing an End-to-End Demand Forecasting Solution Through Databricks and MLflow
On Demand
Type
- Session
Format
- Hybrid
Track
- Data Science, Machine Learning and MLOps
Industry
- Retail and Consumer Goods
Difficulty
- Intermediate
Room
- Moscone South | Upper Mezzanine | 156
Duration
- 35 min
Overview
In retail, having the right quantity at the right time is crucial for success. Many fresh-product retailers struggle with poor forecasts, which hurt customer satisfaction and also generate a lot of food waste. In this session we share how an end-to-end demand forecasting solution has helped some of our retailers improve efficiency and sharpen fresh-product production and delivery planning.
The presented setup goes well beyond running ARIMA or building a single demand forecasting model. Instead, a range of ML models is continuously trained and run at multiple levels, including models at store level, at product level, and combined.
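As a rough illustration of this per-level pattern (not the presenters' actual code), the sketch below trains one model per store and product combination using Spark's grouped `applyInPandas`; the table, column, feature names and the choice of model are assumptions.

```python
# Sketch: one model per (store, product) group, trained in parallel by Spark.
# All names (sales_features, dow, promo_flag, quantity) are illustrative assumptions.
import pandas as pd
from pyspark.sql import SparkSession
from pyspark.sql.types import StructType, StructField, StringType, DateType, DoubleType

spark = SparkSession.builder.getOrCreate()

result_schema = StructType([
    StructField("store_id", StringType()),
    StructField("product_id", StringType()),
    StructField("date", DateType()),
    StructField("forecast", DoubleType()),
])

def train_and_forecast(pdf: pd.DataFrame) -> pd.DataFrame:
    """Fit one model per group and score the last 7 rows as a stand-in forecast horizon."""
    from sklearn.ensemble import GradientBoostingRegressor

    pdf = pdf.sort_values("date")
    features = ["dow", "promo_flag"]                       # hypothetical engineered features
    history, horizon = pdf.iloc[:-7], pdf.iloc[-7:].copy()

    model = GradientBoostingRegressor().fit(history[features], history["quantity"])
    horizon["forecast"] = model.predict(horizon[features])
    return horizon[["store_id", "product_id", "date", "forecast"]]

sales = spark.table("sales_features")                      # hypothetical feature table
forecasts = (
    sales.groupBy("store_id", "product_id")                # each group becomes one model
         .applyInPandas(train_and_forecast, schema=result_schema)
)
```

The same grouping idea extends to other aggregation levels (store only, product only, combined) by changing the `groupBy` keys.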
In this session we outline why and how we run all these different predictive models, leveraging Spark's distributed computation. With the setup in place, we train hundreds of models in parallel in a scalable and fast way. Powered by Delta Lake, the feature store and MLflow, the session shows how we built and architected a highly reliable ML factory.
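Below is a minimal, hedged sketch of how each trained model could be tracked with MLflow; the experiment path, run name, parameters and the synthetic data are assumptions standing in for one store/product training set, not the presenters' implementation.

```python
# Sketch: tracking one forecasting model run with MLflow.
import numpy as np
import mlflow
import mlflow.sklearn
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.metrics import mean_absolute_error
from sklearn.model_selection import train_test_split

# Synthetic stand-in for one (store, product) training set.
rng = np.random.default_rng(42)
X = rng.normal(size=(500, 4))
y = X @ rng.normal(size=4) + rng.normal(scale=0.1, size=500)
X_train, X_val, y_train, y_val = train_test_split(X, y, test_size=0.2, random_state=42)

mlflow.set_experiment("/demand_forecasting/store_product_models")   # assumed experiment path

with mlflow.start_run(run_name="store_042_product_1234"):           # hypothetical identifiers
    model = GradientBoostingRegressor(n_estimators=200).fit(X_train, y_train)
    mae = mean_absolute_error(y_val, model.predict(X_val))

    mlflow.log_param("level", "store_product")    # which aggregation level this model covers
    mlflow.log_metric("val_mae", mae)             # validation error for comparison across runs
    mlflow.sklearn.log_model(model, "model")      # logged model can later be registered or batch-scored
```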
The session also outlines how this setup runs live at various retailers and feeds accurate demand forecasts back to their ERP systems, supporting production and delivery planning. With a concrete demo and an outline of how it was set up, we want to inspire retailers and conference attendees to use data and AI not only to gain efficiency but also to reduce food waste.
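As a short, assumed example of the hand-off, forecasts could be written to a Delta table that a separate ERP integration job picks up; the table name and the `forecasts` DataFrame from the first sketch are assumptions.

```python
# Sketch: persisting forecasts for downstream ERP consumption.
(forecasts
    .write
    .format("delta")
    .mode("overwrite")
    .option("overwriteSchema", "true")
    .saveAsTable("gold.demand_forecasts"))   # consumed by a separate ERP integration job
```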