Panel Discussion: Charting a Path to Growth in Retail and Consumer Goods with Data + AI

The recent pandemic has accelerated the adoption of digital services and e-commerce by 10 years in 10 weeks. Companies have seen a surge in market share from new customers in digital channels. As we navigate through COVID and economic recovery, companies that are able to engage their customers in a relevant, meaningful way will realize stronger growth rates.

To power real-time hyper-personalized experiences, organizations need to be armed with a unified approach to data and analytics to rethink their ways of understanding and acting on the consumer. Personalized engagement, when done properly, can drive higher revenues, marketing efficiency and customer retention. Through the use of big data, machine learning and AI, companies can refocus their efforts on areas that will rapidly deliver value and drive growth into the future. Join us for a discussion on best practices and real-world uses for data analytics and machine learning in the Retail and Consumer Goods industry.


  • Patrick Baginski, Head of Data & Analytics, McDonald's
  • Tom Mulder, Lead Data Scientist, Wehkamp
  • Josh Osmon, VP, Product Management, Everseen

Speaker: Rob Saker


– [Moderator] Hey everyone. Thanks for joining our retail and consumer goods breakout session. Rob is gonna take us through today’s session. Rob, take it away.

– Hello and thank you for the opportunity to speak with you today. My name is Rob Saker. I’m the global industry leader for retail and manufacturing at Databricks. Databricks is the leading platform for unified data and analytics with over 6,000 customers and 500 partners around the globe. I was a practitioner for many years, working my way from what we then called a developer, and today would be more similar to a data scientist, all the way to leading large-scale data and analytic transformations and commercializing those capabilities as a chief data officer. I’ve served in a variety of companies and conditions, but I’ve never operated in a year quite like 2020. If there’s one word that I think best defines 2020, it’s volatility. Major disruptions have happened in 2020 in every corner of the world and nearly every facet of life. The most notable disruption was from COVID. As a result of COVID and the subsequent quarantines, we’ve seen entire swaths of the economy shut down. Major manufacturing regions in Asia went cold. 10% of the global shipping fleet was idled, with ships sitting in harbors or anchored in protected regions. Traffic in dense urban areas such as my hometown of Atlanta became non-existent. Shanghai and Los Angeles had so little traffic, their once smog-ridden skylines were clear. Congested commutes became quiet as people shifted to working from home or, worse, not working at all. The impact of COVID was substantial, with an overall retraction expected in global GDP. In the area that I focus on, retail revenues are expected to decline by 10.5% globally this year. Retailers that are entirely dependent on in-person sales were hit even harder, with some categories such as apparel declining by 70%. E-commerce retailers and grocery have seen reversed fortunes. Their revenue has surged as people switch their buying behaviors to online or switch their eating habits from restaurants to grocery stores. 
An estimated $120 billion worth of food spend is expected to shift from QSR, quick service restaurants, to grocery channels just this year. The surge varies in amount by region and country, but it’s similar in the amount of acceleration. Areas such as Eastern Europe and Japan, which were much more conservative in e-commerce, witnessed some of the most pronounced change. In key categories such as apparel and quick service restaurants, e-commerce became a lifeline. Firms with digital capabilities fared far better than those without. And what we’re seeing from the data is that the shift to digital is sticking with customers too. Penetration has backed off slightly from the peak, but it’s still far ahead of where it was projected to be in 2020. Let’s take a look specifically at growth in the U.S. e-commerce market. We’re seeing some interesting trends. We’ve seen a surge in subscriptions to video on demand as people have been quarantined with their kids at home. Direct-to-consumer brands have stopped accepting new customers as they deal with unprecedented demand. E-commerce penetration of retail grew 10% over the past 10 years. That’s from 5.6% of all retail sales in 2009 to 16% in 2019. And yet in those 10 weeks, e-commerce penetration grew an additional 11% to 27% of total retail. Retail has seen an overall decline in sales, but e-commerce channels have seen surges. Walmart announced a 72% increase in e-commerce sales in their most recent earnings. Albertsons announced a 242% increase in e-commerce. Digital firms have grown 10 years in 10 weeks. We expect that a certain percent of customers will continue with digital means of engagement. Based on historic growth, that would have accounted for around a 1.2% increase in 2020. Now other customers, they’re going to revert to their old ways. This leaves a large swath of customers that are up for grabs. Market share is a force multiplier. 
If companies can retain a portion of this market share, they will exit the quarantine and recover at a faster rate than their competitors. It’s possible that firms that do this well can exit quarantines in a stronger market position than before. So what do we expect in 2021? Well, first, the rate of growth in digital is expected to slow and possibly retract in key areas as people revert to traditional ways of buying. While retailers scrambled to meet the demands of digital commerce in 2020 and to capture the surge in new customers, in 2021 we expect them to focus on retaining those customers. Retailers that can keep this new segment of customers will grow out of the current economic environment at a faster rate than their competitors. But challenging this will be the need to decrease the cost to serve. Delivery remains unprofitable for many retailers as they invest in new systems, processes and market share. We expect to see a major focus on operational efficiency. So, as I talk with our hundreds of retail customers and I scan the environment and speak with our partners, I’ve identified four key data and AI initiative areas we think will be the main focus in 2021. First, we expect a large emphasis on retention and loyalty. Traditional loyalty programs will be supplanted by modern customer data approaches that enable retailers to identify the most valuable customers and then tailor rich experiences towards those individuals. Personalization capabilities will see major advancements in 2021 as retailers attempt to recreate the physical experience online. Secondly, we will see possibly the greatest level of spend in reducing operational expenses related to digital. Robotics, computer vision and smarter algorithms for consolidating orders, packing and managing delivery will be at the top of most retailers’ and direct-to-consumer firms’ roadmaps. We also expect retailers to focus on identifying their most profitable customers. 
Lifetime value assessments will be used to segment customers, and less profitable customers may be charged differently for the service. Third, we think we’ll see a large shift from traditional trade spend towards digital channels. And this is for two reasons. First, customers are physically entering stores on a less frequent basis. Features and displays in store are less impactful with your customers. Secondly, consumer goods firms are less confident about what’s gonna happen in three to six months. The typical calendars for trade planning don’t align with business volatility, while digital spending can be adjusted in flight. This will certainly cause additional challenges to the entire supply chain as CPG firms and retailers look to avoid promoting out-of-stock items. And lastly, nearly every company will look to incorporate alternative data and analytic methods to overcome challenges that were highlighted by COVID, the economy and other factors. This is what I wanna dive into more with you. These are just some of the major disasters and unrest that have impacted society in 2020. There was major flooding in China. Some people were already concerned that the Three Gorges Dam was going to break, and thankfully it didn’t. We had monsoons that flooded South Asia and hurricanes in the U.S. We’ve had civil unrest that has impacted many parts of the world, including riots in the U.S., protests in the Middle East, and uprisings in Africa. Major wildfires in Brazil and the Western U.S. have burned millions of acres. So, why is this volatility a problem for how we look at analyzing our business? Well, let’s look at how we would traditionally build a forecasting model that you would use to predict the future. The first step in building a forecasting model is to bring in my historic transaction data. If I want to predict what’s happening in two months, what happened a year ago and two years ago in that month are solid assumptions of what it will look like. 
This helps us account for seasonality and common behaviors like going back to school, the holidays or summer vacation. In this case, we’re looking at point-of-sale data, but this could be production data, service calls, or any number of different transactions. The next step is to bring in causal data. Causal data are events and activities that may influence the level of our transactions. This could include the promotions that you run, but it could also include what your competitors are doing or things that are happening in the environment. Weather is a great example of a causal factor. People like to drink cold beer when it’s warm but not on super hot days. Now I’ve exaggerated the causals for effect here, but causal data changes how your baseline forecast operates. And here’s what happened in 2020. Your forecasting and analytics have likely broken in 2020, because 2020 looks nothing like what 2019 or 2018 did from a baseline perspective. Not just that, the causal factors have been completely disrupted in 2020. It doesn’t matter how nice the weather is; if everyone’s stuck at home, they’re still not going out to eat. But you might think 2020 is an anomaly. We’ve never had a year like 2020; 2021 will be back to normal. Well, 2020 is certainly, and hopefully, unique. But we actually have major disruptions to our society far more often than we like to remember. The reason for this is that humans have evolved what scientists call an optimism bias. Now, an optimism bias is a wonderful thing for a species that’s trying to evolve and thrive. It’s what we needed to overcome the realities that we faced in our evolution. I’ve got a spear, so of course I can go kill that woolly mammoth. I mean, I don’t recall anything bad happening the last time somebody tried this. Or of course I’ll get on this small ship and sail halfway around the world to settle a new continent. I haven’t heard anything bad said about that in the past few years. 
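Going back to the forecasting model described a moment ago, the baseline-plus-causals pattern can be sketched in a few lines. This is a toy illustration only (the numbers and lift factors are hypothetical, and real demand forecasting would use models like ARIMA or gradient boosting):

```python
# Toy sketch of a baseline-plus-causals forecast (illustrative only).
from statistics import mean

def baseline_forecast(history: dict[int, float]) -> float:
    """Average the same month across prior years to capture seasonality."""
    return mean(history.values())

def apply_causals(baseline: float, causal_lifts: list[float]) -> float:
    """Causal factors (promotions, weather, ...) adjust the baseline
    multiplicatively; a lift of 0.10 means +10% expected demand."""
    forecast = baseline
    for lift in causal_lifts:
        forecast *= (1.0 + lift)
    return forecast

# Hypothetical June unit sales for the past two years.
june_history = {2018: 1000.0, 2019: 1100.0}
base = baseline_forecast(june_history)
# A planned promotion (+15%) and a warm-weather lift (+5%).
forecast = apply_causals(base, [0.15, 0.05])
```

The point of the sketch is the failure mode Rob describes: in 2020 both the baseline years and the lift factors stopped resembling reality, so both terms of the model broke at once.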
Optimism bias is what we have depended on to give us courage to pursue things that were incredibly dangerous, but it’s a horrible way of trying to manage your business. We think about the positive things that have been happening in society, but recorded history shows us that bad things happen far more frequently than we recall. And the rate of bad things is actually increasing. So, how can we overcome this and design for resiliency with our data and AI? I think there are four things that we can look at doing. First, companies should look at the full breadth of data, in both batch and real time, internal and external, to expand their awareness of the events in the environment around them. Only 3% of a company’s data sits in its data warehouse. Resilient companies look at the full range of data types, including video, images, PDFs, and more. They extract useful information from these sources and use that to drive greater insights. You wanna challenge your data scientists to deliver insights in days. Develop MVPs that enable you to quickly identify problems or opportunities and scale the most successful analytics across your company. Remember, PowerPoint presentations aren’t insights. You wanna develop an insights ecosystem that enables your data scientists to deliver new insights directly to the frontline. Remember this slide from the very beginning of our conversation. These represented key areas that were impacting our businesses. What do you see when you look at these images? Now, I chose these images carefully, because in each of them I can start to extract useful information about the environment around us. In all these instances and more, we have sets of real-time information that are widely available, that show what’s happening. I see manufacturing plants at my suppliers and competitors. I see shipping container vessels idle where my competitors often route their containers. I see major flooding that may impair my ability to get key parts. 
I see roads with traffic near some of my restaurants and stores and not at others. I see anonymized debit and credit card information that shows how and where consumers are spending. News media that’s telling me about the events around the globe, and social media that tells me what people are talking about. The challenge is that all this information is often in forms that make it difficult to use. It’s unstructured or semi-structured. It’s often in the form of images, videos, PDFs and raw text. Traditional data systems like data warehouses often only use highly structured data, like sales data. And it’s difficult to test and see if it’s useful. And this is why many companies have limited their analysis to a small set of information that can explain their business as it was. What companies need is a quick way to take these different datasets, convert them into useful information and test that information to see if it contributes to our understanding of the business. And this is where Databricks fits in. Databricks is a unified data and analytics platform that enables you to bring together structured, semi-structured or unstructured data, in both batch and streaming frequencies, and process that into useful information for analysis or reporting. Once the raw data is processed into information that we can understand, Databricks provides industry-leading capabilities to enable your teams to perform ad hoc data science, production machine learning at scale, or even traditional reporting. And the best part about this is that you can do this quickly and inexpensively, unlike traditional systems. The challenge with doing this is that it requires a solid framework for determining which information makes sense, ingesting and processing these datasets, and quickly applying that towards common problems you’re dealing with. But what we’re seeing is that this is an emerging pattern. 
Many companies are looking to use this for their brand and market insights and even environmental, social, and governance metrics. So ESG, environmental, social, governance, these metrics are used by companies to manage their business, including their operations and suppliers. But they’re increasingly being used by Wall Street to assess businesses. Businesses that effectively act on and manage ESG have better stock performance with less volatility than those that do not. So investors are looking at companies through ESG to quantify questions like: How much water does a company use? How much carbon emissions does a company generate through business travel? How safe are the products that they sell to their customers? How is public opinion shifting with regards to a brand? How many minorities does a company have in leadership positions? How are women paid relative to men? The problem with ESG is that it’s really hard to quantify. With this agile insights platform, we’re seeing companies be able to monitor their compliance with ESG in real time across any number of disparate sources of information. They’re taking these sources of vendor submissions, of time series information, of news filings and reviews, converting this into useful feature information and creating scorecards on their own internal compliance and the compliance of their suppliers to effectively manage their business, and to ensure that their perception by Wall Street is also sufficient. Now, this pattern of taking raw structured, semi-structured and unstructured information, converting it into useful features, and then using that for ad hoc data science, production machine learning at scale, and BI reporting enables a much more rapid response to rapidly changing conditions in the environment. Let’s compare the traditional ways that you would go about doing this to monitor your environment. First, you might license a syndicated service that focuses on trends and behaviors in the market. 
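The scorecard step above can be made concrete with a toy sketch. The metric names, weights, and normalization here are entirely hypothetical, not a Databricks or industry standard; the idea is just that disparate, pre-normalized features roll up into one comparable score per supplier:

```python
# Hypothetical ESG scorecard: combine normalized metrics from disparate
# sources into a single weighted score per supplier (illustrative only).

def esg_score(metrics: dict[str, float], weights: dict[str, float]) -> float:
    """Each metric is pre-normalized to [0, 1], where 1 is best.
    The score is the weighted average over the metrics present."""
    total_weight = sum(weights[m] for m in metrics)
    return sum(metrics[m] * weights[m] for m in metrics) / total_weight

weights = {"water_use": 0.3, "travel_emissions": 0.3, "pay_equity": 0.4}

supplier = {
    "water_use": 0.8,         # e.g. derived from vendor submissions
    "travel_emissions": 0.6,  # e.g. derived from expense time series
    "pay_equity": 0.9,        # e.g. derived from HR filings
}
score = esg_score(supplier, weights)
```

Dividing by the weight of the metrics actually present lets the same scorecard handle suppliers with missing submissions, which is the common case with crowdsourced vendor data.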
Now, there are hundreds of firms out there that do this, but they basically function by capturing a range of data, aggregating it, processing that information, generating insights and delivering that in a summarized form for you a month or so later. These are often higher-level views, and they arrive well beyond the point of the data being useful for making immediate decisions. The second option for monitoring the environment is to bring this into a traditional data warehouse. Now, remember that a data warehouse depends on highly structured data. So we will acquire the data and then go through all the activities that are required before you can use the data in the data warehouse. We’ll model the database, we’ll get that design approved, we’ll create what are called ETL jobs, extract, transform, load, to transform it. We’ll develop the scripts to create the database tables and then we’ll load it into the database. And that’s all before we can start to build our analysis. Now, you’ll be lucky if this only takes you a few months. Data warehouses are designed for stable, highly normalized data. They’re not designed for experimentation. They’re also not capable of processing alternative datasets like unstructured and semi-structured data. You’ll be waiting forever if you want the data warehouse to help you there. And lastly, what does this timeline look like with Databricks? Well, first we grab the data and we put it in inexpensive cloud storage. Immediately our data scientists can begin to experiment with this data. They can test it and see if it’s valuable before we invest in making it ready for broad usage. If it doesn’t prove to be successful or useful, just delete it, capture your learnings and move on. If it does prove useful, work on productionalizing those models. This can take days or weeks depending on your team. And the output can be made available for ad hoc analysis, machine learning models running at scale, or even served to BI reports and dashboards. 
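The "test it before you invest in it" step can be as simple as checking whether a candidate external signal explains your transactions at all. A minimal sketch, with hypothetical data and a hand-rolled correlation so it stays standard-library-only:

```python
# Quick screen of an alternative dataset: does a candidate external signal
# (e.g. local foot traffic) track weekly sales? Illustrative only.
from math import sqrt

def pearson(xs: list[float], ys: list[float]) -> float:
    """Pearson correlation coefficient between two equal-length series."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

weekly_sales = [120.0, 135.0, 150.0, 110.0, 160.0]
foot_traffic = [300.0, 340.0, 380.0, 280.0, 400.0]  # hypothetical feed

r = pearson(foot_traffic, weekly_sales)
# Only invest in productionizing the dataset if the signal is strong.
keep = abs(r) > 0.5
```

If `keep` is false, you delete the data, capture the learning, and move on, exactly the cheap-experiment loop described above.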
As you think about ways you can start to improve the measurement of your business, start thinking about how you can experiment with different types of data, and data from outside your four walls, to improve your read of the market. Now, on the left is the typical way that I structured data for econometric models. This is a commercial view that looks at pricing, product, promotion, and distribution, but you could certainly build a similar supply-chain-centric view. We have internal data that I control. This could be your SAP data, your sales data, CRM, or even things like shop floor images or closed-circuit TV. I next look at competitive data, or I can acquire it, and it aligns against my business. And this allows me to answer questions such as, how do I compare on pricing relative to my competitors? And then lastly, what are the environmental forces? What are the things that are happening in the environment that will impact my business? Weather, crime, consumer movement, consumer spending, or economic forces. There are many commercial data providers in this area, but you can go a long way with many of the free datasets. One very popular data source is GDELT, which is a real-time source of 60,000 news sources from around the world. I encourage you to focus on four things as you look at delivering a more agile business insights capability. Leverage alternative data sources; look beyond your traditional views to see what other data explains your business. Look for data that intersects with your business and highlights opportunities for change. Use real-time data feeds to ensure that you’re capturing and acting on events quickly. In a time of volatility, long-term forecasts are less important than the cumulative effect of smaller short-term decisions. Incorporate structured, semi-structured and unstructured data. Use the power of Spark and Databricks to transform this raw information into useful data for analysis. 
Build that culture of rapid experimentation; challenge your data scientists to deliver new insights and capabilities in days and weeks. And lastly, focus on improving outcomes. Focus on enabling the frontline with decisions, not merely generating insights or trivia. Okay, before I end my session, I’d like to share with you a program we launched this summer called Solution Accelerators. If you look at what is required to be successful with data and analytics, selecting your cloud and data analytics platform is only one half of your requirements. You also need people who understand data and are able to develop analysis for their business. Even the most experienced data scientists won’t have experience working in every area of retail data science. The deep domain knowledge required for certain problems can significantly lengthen the amount of time it takes for you to deliver new capabilities. This is why we announced our Industry Solution Accelerators program this summer. Solution accelerators are fully functional analytics that address common challenges in the retail industry. These aren’t full solutions; black-box models often make assumptions that aren’t valid about a business or the current climate. Instead, we perform considerable research on emerging methods, convert those into fully functional analytics, from data ingestion and preparation through analytic modeling to end use, and then we make these freely available for our customers. To contrast this with how you would do it if your team started from scratch: you’d acquire the data and begin to perform research on the topic. Your team would likely encounter many challenges and methodological approaches. They’d begin working on data engineering, which accounts for about 80% of the problem, build the analytic models, and try to optimize the code to run at scale. And then they’d integrate the data, customize it to your business, integrate it with your processes and deploy it. We’re taking away the green areas. 
These are highly researched analytics that have been designed and optimized to be highly effective and performant. Your team still needs to integrate data and customize these, but they have a big head start. We’ve developed solution accelerators in personalization for areas such as customer lifetime value, survivorship, churn, and retention. And we should have customer segmentation, propensity to buy and content recommenders in the next month. If you’re focused on supply chain, forecasting and inventory, we have solution accelerators for demand forecasting with causal factors, time series forecasting, and safety stock analysis. We’ve had over 100 customers express interest in performing a POC on these, with several dozen having taken these fully into production. If you want to learn more, you can visit the Databricks blog, attend our frequent webinars or contact your account team. These are designed to demonstrate a POC in less than two weeks, and we’d love to perform a POC with you. I wanna thank you for your time today, and I hope you enjoy the rest of today’s materials.

– [Moderator] Thanks Rob, that was great. Now I’d like to turn it over to Nikhil Dwarakanath, head of analytics at Grab. He’s gonna give us his take on finding customer love. Nikhil.

– [Nikhil] Thank you, and good day everyone. When Richard asked me about participating in a virtual data conference, I was keen to try the format out, considering it’s likely going to be relevant for some time. However, I was thinking hard about what might be a useful project to talk about. So we decided to pick something recent around how we have been solving for customer data at scale. For the next 20 minutes or so, I’ll try to share some context about the problem, shed some light on how we went about solving it, and end with a few examples of where it has found end use cases. But I’ll start off with a little context about Grab today. So for those of you who don’t know, currently we operate in about 339 cities across eight countries with over 160 million downloads. We are on at least one in every four smartphones you’ll find in this part of the world. We are arguably the largest consumer platform in the region and have been blessed to have a very heterogeneous and vibrant user base. So our customers and partners wear multiple hats, not just that of a passenger or an eater or a merchant. Behaviors and attributes are very divergent depending on the country, or sometimes even the city. And while most of you know us as a large transportation platform, we have moved far beyond transportation. We’ve actually done over 6 billion transactions to date. That’s 6 billion times that customers have made a choice to either ride with Grab, order food from our merchant partners, make deliveries, et cetera. And this has happened within the last eight years. We hit our first billion transactions in the fifth year and got to 5 billion more within the next three years. In short, we have a wealth of data about our customers, and the rate of growth of that data has been exponential. 
And that customer data over time has led us to create a number of contextual experiences, both in the product as well as in launching verticals relevant to specific segments of customers. Our product has a set of recommenders built to surface the most relevant food choices for our eaters. We’ve launched transportation options and verticals like GrabPay and GrabAssist based on our understanding of conversations and chat messages between passengers and drivers. We customize our GrabPay rewards catalog based on redemption interest and preferences observed on the network, and so on. But building a customer orientation isn’t easy, particularly from a data perspective. Let me try to elaborate on that a bit. Okay, setting context. The first thing I’ll reinforce is the richness of our user base. With literally millions of transactions happening every day, and typically in high-frequency use cases, we have a lot of transactional, geospatial and behavioral data that the system is generating. For instance, on the driver app side alone, we probably capture six to eight terabytes a day. This is just the app side, so not the backend and such. So it’s really a copious amount of data. The second thing we’ve done is to democratize that data, both in terms of systems and people. At Grab today, data teams are embedded with product families or functions as pods. Every functional team or product area has dedicated analyst and data science representation. This allows data teams to deeply understand the domain or area and really partner with their technology or business counterparts on the same set of goals. Additionally, the ability to mine data and access insights is increasingly democratized. So you essentially have a lot of data coming from different sources and businesses, and we have a federated set of teams that can easily spin up compute and go about solving those problems. That’s the broad context. But with that, there come a few problems. 
For one, teams can develop an inconsistent or customized view based on their understanding of the customer. So we potentially run into a situation where different teams build different customer segments. Let’s say marketing and demand planning could be building transactional, behavioral segments for specific acquisition or retention objectives. Product data science teams could leverage in-app behavior-based segments to develop specific recommenders on the app. In the same way, business teams might develop needs-based segments, like customers with pets, or trying to identify travelers, et cetera, for specific vertical-level objectives. And in some cases, these segments could overlap as well between what one team may be doing versus another, but still be defined differently. So essentially these localized needs, without a unified platform, lead to some inconsistency. Additionally, as teams build their own models and different bases for segmentation, they might not leverage the entire set of features and attributes that might be at their disposal. For example, and this is hypothetical, let’s say the food team might be building a segmented widget using customer attributes they have on hand. It could be overly anchored to attributes related to the eater, incorporating dimensions like basket construct, cuisine types, merchant types, subscriptions, or promo construct, et cetera. These are extremely important features, by the way. But that said, they might miss including geospatial or cross-functional demand behavior, which would also be potentially useful signals. So the federated structure could lead to them building for the eater rather than the consumer. It’s akin to the parable of the blind men and the elephant, where no one has seen an elephant but each is conceptualizing the creature by touching parts of it. 
So we’re essentially looking at a set of correlations and feature relationships and validating a set of hypotheses or building systems, but we could be in a situation where we are discounting the universe of dimensions available, and therefore those cross-functional relationships. Okay, so along with an inconsistent customer view and underused attributes come two more related problems. Because autonomy is the norm and compute is available at everyone’s disposal, multiple teams are building systems at scale, running granular computations at a user level, which translates to exponential increases in costs. And additionally, there’s a problem of data refreshes and maintenance. With multiple pipelines running, teams encounter significant overheads to maintain those pipelines. There are times when data could be stale or not reusable in a production use case, or it is also not easy for one team to leverage another’s work, even if they discover and want to use that pipeline. So that’s the context. And with that background, we embarked on our Customer 360 initiative to build a consistent understanding of our customers and partners from the ground up. There were several principles we looked at while thinking about this platform, but we landed on four. The first was that it should specifically deepen our understanding of users. So not geospatial or transactional stores, but really our user store. Second, help us build a consistent, enterprise-level view that different end-user groups can leverage. Third, the assets created should be easily reusable, whether it’s an online use case or an offline use case. And finally, in true Grab fashion, could we also build in an element of crowdsourcing and curation, so the process is inclusive? Because frankly, the quantum of data we were ingesting for this initiative was beyond any single team. Our near-term goals were obviously developing a unified platform that could be democratized across tech or business groups. 
And two, enabling a standardized view while reducing redundancies and overlaps, both in terms of customer features and compute. So Customer 360, in summary, is our in-house customer data platform. It’s intended to be a single source of truth for customer attributes that can be accessed anywhere. Feature access can be enabled via API, traditional stores, or internal tools. The process of creating a feature or an attribute is crowdsourced, which means that teams can either request a new attribute they want added that’s not available, or integrate an attribute that they might have developed locally. So it’s also pulling inputs from a disparate set of data teams. There’s a web interface for attribute discovery as well, so teams can look for what’s available, what the distributions look like, what the data types are, how they get to it, what the source tables are, and so on. At this point, we have over 800 attributes computed at a user level and completely crowdsourced. This slide is our technical architecture, rather hairy, but I’ll try to simplify it for this conversation. Here’s a more simplified view: essentially, we have Databricks on Azure, which is a key pillar to building this end-to-end 360 data platform. And with the compute power of the Spark engine, we are able to process large usage and various signals efficiently. We have a total of about 10 petabytes of data in our data lake, and we are processing nearly 30 terabytes daily. We’re using Delta Lake as well, where we are able to build a unified, atomic set of single-source-of-truth tables fairly efficiently. The big benefit we’re getting from Delta Lake and its table structures is that change data capture is easy through merge, update, and delete features at scale. And time-travel features also allow us to look at point-in-time data, even if the datasets are really large. 
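To make the merge idea concrete: Delta Lake exposes this as a `MERGE INTO` operation on tables. As a toy, language-agnostic illustration of the upsert semantics only (plain Python dicts, not the Delta API, and the field names are made up):

```python
# Toy illustration of upsert (merge) semantics, the pattern Delta Lake's
# MERGE INTO provides at scale. Not Delta Lake code; plain Python dicts.

def upsert(target: dict[str, dict], changes: list[dict]) -> dict[str, dict]:
    """Apply change records to a target table keyed by user_id.
    op == "delete" removes a row; anything else inserts or updates it."""
    for change in changes:
        key = change["user_id"]
        if change.get("op") == "delete":
            target.pop(key, None)
        else:
            row = {k: v for k, v in change.items() if k != "op"}
            # Merge new values over any existing row for this key.
            target[key] = {**target.get(key, {}), **row}
    return target

users = {"u1": {"user_id": "u1", "city": "Singapore"}}
changes = [
    {"user_id": "u1", "city": "Jakarta"},   # update existing row
    {"user_id": "u2", "city": "Manila"},    # insert new row
    {"user_id": "u1", "op": "delete"},      # delete a row
]
users = upsert(users, changes)
```

The value of having this as one atomic table operation, rather than hand-rolled pipeline logic, is exactly what makes change data capture tractable on petabyte-scale attribute tables.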
And finally, there’s deep integration with the Azure services built on the platform as well, which enables us to make attributes available across different channels very securely. For instance, using MLflow along with the Azure ML service makes it easy to track, deploy, and monitor models. Additionally, we have integrated with Azure Active Directory and multi-factor authentication, which allows authorization that’s more distributed and self-serve, which gives security and governance teams peace of mind at the same time. So essentially we are creating these 360 data sets securely. Okay, so we did all this, but how has it helped teams, right? I thought I’d share a few examples of where this is going in its early days. So one of the things we’ve been able to scale quickly is using these customer features to develop even more complex features. The example on this slide is akin to a behavioral archetype developed using an unsupervised algorithm to discover a set of homogeneous classes, and then bucket users into some of these segments. And this computation is inferred at a city level, so it’s a pretty massive computation. Over 80 features were eventually used for the development of these archetypes, several of them app or telemetry related. And as you would know, telemetry data is semi-structured and notoriously large. For us, think at least 7 billion records a day, and we were using data for the last three months for model development. So pretty hairy, right? And then once the model is developed, scoring and refreshing happens on an ongoing basis, either at a weekly or monthly level depending on the kind of archetype. So that’s the first use case, where we were able to use the platform to develop even more complex attributes and features. 
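As a hypothetical miniature of that archetype pipeline (the talk doesn’t name the algorithm; here a tiny one-dimensional k-means stands in for the real 80-plus-feature unsupervised model, and the data and function names are invented for illustration):

```python
import random

def kmeans_1d(values, k=2, iters=20, seed=0):
    """Tiny 1-D k-means: discover k homogeneous classes in a list of numbers."""
    random.seed(seed)
    centers = random.sample(values, k)
    for _ in range(iters):
        buckets = [[] for _ in range(k)]
        for v in values:
            nearest = min(range(k), key=lambda i: abs(v - centers[i]))
            buckets[nearest].append(v)
        # Recompute each center; keep the old one if its bucket went empty.
        centers = [sum(b) / len(b) if b else centers[i]
                   for i, b in enumerate(buckets)]
    return sorted(centers)

def assign_archetype(value, centers):
    """Bucket a user into the segment whose center is closest (the scoring step)."""
    return min(range(len(centers)), key=lambda i: abs(value - centers[i]))

# e.g. weekly ride counts for a handful of users: two clear behavioral groups
weekly_rides = [1, 2, 1, 2, 30, 28, 31, 29]
centers = kmeans_1d(weekly_rides, k=2)
```

Once the centers are fixed, the scoring step (`assign_archetype`) can be rerun weekly or monthly against fresh data, matching the refresh cadence described in the talk.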
Another use case, one that I’m rather excited about, is that we’ve developed a feature-as-a-service API portal, basically making features available for wider, democratized consumption internally, and to solve for both online and offline use cases. Essentially, this is a self-serve developer portal for engineers and data scientists, where user groups have the ability to subscribe to a product or an API and obtain subscription keys that allow them to then access those APIs on an ongoing basis. The subscription is a completely managed service via the portal, so they can regenerate or cancel an authorization. Once users subscribe, they can explore available APIs or create their own, and they can also test them in the portal itself. And once integrated, users can monitor API performance for things like latency, payload, number of calls, et cetera, at different levels of granularity. So what this allows teams to do is it federates the process of data science. Engineers can use complex attributes to build systems, or they can even build their own attributes and further use them. So theoretically, if an attribute is available, you could spin up an API within 24 hours. If an attribute is not available, then depending on the complexity, creating the data asset could take two to three days, and thereafter, engineering for a production use case, the integration could range from one to three weeks depending on the complexity. So that’s the second use case, where we’ve taken this platform and productized it for further consumption across the org. Another thing we did was to hook these Customer 360 attributes into our internal segmentation platform. This is the system that marketing, demand planning, and even some tech teams use internally to segment groups for targeting, outreach, campaigns, et cetera. Now, all these disparate teams operating in different countries and cities are empowered to segment at scale. 
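A minimal sketch of the subscribe/regenerate/cancel flow that portal implies (all names here are hypothetical; the real portal is an Azure-backed managed service, not this in-memory toy):

```python
import secrets

class SubscriptionPortal:
    """Toy feature-as-a-service portal: a team subscribes to an API product,
    receives an opaque key, and can regenerate or cancel it self-serve."""

    def __init__(self):
        self._keys = {}  # (team, product) -> active subscription key

    def subscribe(self, team, product):
        key = secrets.token_hex(16)           # opaque subscription key
        self._keys[(team, product)] = key
        return key

    def regenerate(self, team, product):
        return self.subscribe(team, product)  # old key stops working

    def cancel(self, team, product):
        self._keys.pop((team, product), None)

    def authorize(self, product, key):
        """Gate an API call: is this key an active subscription for the product?"""
        return any(p == product and v == key
                   for (t, p), v in self._keys.items())

portal = SubscriptionPortal()
old_key = portal.subscribe("growth-team", "c360-attributes")
new_key = portal.regenerate("growth-team", "c360-attributes")
```

The point of the design is that authorization lives with the portal, not with the data team, which is what lets an engineer spin up access to an existing attribute without a human in the loop.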
They don’t need to pull data out and create custom segments that may not be refreshed or stay relevant over time, et cetera. And they can easily take the work of one team that was found to be valuable and extend it into another market. So that’s a heavily used use case as well. So essentially we started making complex inferences, we democratized access and deployment, and we solved for offline use cases. The next step, of course, was using this in a production environment, and one of our earliest use cases was empowering our contact center agents. What you see on the screen here is an example of the kind of interface that one of our customer support agents might be looking at. At the top left is an example of the customer context we share about the person calling in, and this is powered again by our C360 APIs. Spinning this up was done independently by engineering via the API portal, and we went live in a couple of weeks based on the engineering sprint. So where are we today? It’s still early days for this project, but we’ve been seeing some encouraging traction in terms of usage. We’ve been continuing to onboard teams and do road shows to socialize capabilities and ease of use, and today we have over 50 active users directly leveraging the platform. So it’s been one of those curves that we’ve enjoyed seeing peak, relative to everything else that’s going on right now. But all this did not happen overnight. It’s taken us a while to shape this thinking and get the platform to this state. And I thought I’d end with some pointers on what we learned from this process that might be useful to practitioners thinking of similar systems. For one, adoption takes time, especially in a federated environment where teams have the ability to spin up resources easily. You have to make it useful and easy to onboard so that the adoption barrier is practically non-existent. Any small friction will deter teams from adopting. 
And for us, we are still making progress on these fronts even now. Secondly, with such products, attempt to identify a hook, or a set of folks where this platform can be deployed. The few initial successes are instrumental for internal advocacy and feedback. For us, we had pre-identified the integration with our segmentation platform, but that initial use case then spawned a few others, which helped with the process. And finally, don’t try to do it all yourself, especially in large data organizations. The path we took to crowdsource features did two things for us. One, it allowed us to really get to critical mass as several different teams started contributing or even requesting. Secondly, the principle that we love what we cook tends to play out really well here, because early contributors tend to embrace the central platform and then drive usage. So that’s really what we had today. This project is still in its infancy. It’s an example of how we’ve been thinking of scaling data practices. And although I presented this today, this is really the work of a wonderful team that’s been toiling in the background. Thank you.

– [Moderator] Thanks Nikhil. What a great story from Grab. Really appreciate you sharing it. Next, I’d like to welcome Rob back to moderate our panel of industry leaders, take it away.

– Okay, thanks for joining our panel today. You know, before we get started, I wanna do some brief introductions. We’ve got three esteemed guests whom I’m very thankful to have join us today. You know, when I was young, I took road trips in the back of my parents’ Buick, driving across the middle of America. And I remember seeing the golden arches, and underneath them as I got older and older, it was first millions served, and then billions, and then billions and billions served. There are few brands with as great a role in our history as McDonald’s. And I’m pleased to be joined by Patrick Baginski, head of data and analytics at McDonald’s. You know, McDonald’s goal is to become the most data science enabled QSR on the planet. So thank you, Patrick, for joining us. Founded 86 years ago, Wehkamp is one of the leading e-commerce retailers in Europe, and also one of the first e-commerce retailers, and they are today known for their deep personalization with their customers. We’re next joined by Tom Mulder, the lead data scientist at Wehkamp. Tom, thank you for joining us.

– Thank you.

– Last but not least, Everseen is one of the hottest retail AI companies on the planet. Their behavior-based computer vision AI programs are being adopted by leading retailers everywhere. Our last panelist is Josh Osmon, a veteran of several large retailers, now VP of product management at Everseen. Josh, thank you for joining us.

– Thanks Rob.

– So Josh, I’m gonna begin with you. We’ve had these discussions: we’ve seen direct-to-consumer brands and e-commerce at full capacity for new customers, and contactless commerce is extremely popular in grocery and in quick service restaurants. What are the changes in consumer shopping behavior that you think are permanently changed as a result of what we’ve gone through in 2020?

– Absolutely. So I think that there are a lot of things that we’re in the middle of that are going to fade away, and a few things that are here to stay and continue to grow. Obviously e-commerce, we all know that. So the ability for a customer to order online and either have it shipped to their home or fulfilled at a store or restaurant is something that is here to stay, and is here for retailers and restaurants and other industries to really harness. I think there are several opportunities now, but one of the main ones is how to drive loyalty, how to have app conversion for that retailer or restaurant, and then ultimately how to maintain that with so much competition in the market. So, definitely e-commerce is growing and on the rise because of this. Another thing is just the ability to feel safe. You know, my family and I watch TV, and anytime we see large groupings on a TV show, we start to get a little bit claustrophobic and worried for the characters in the show. This feeling of safety and the ability to maintain a safe environment, I think, is here to stay. Even after the pandemic passes, we’re going to always have this kind of check on us: is the store, or is the restaurant, clean? Do I feel safe? Is it hygienic, et cetera. So that’s another thing that we’re really going to have to lean into. And then finally, self-service. If there’s one thing that we’ve seen through this from an in-store or bricks experience, it’s how to manage the amount of traffic that’s coming into the store, and how to expand your store, not only in e-commerce but in store, through the checkout and through the store itself, and give customers, as they’re coming through the store, the ability to get in, get what they need, and then get out quickly. 
And so self-service you’ll see is going to continue to rise and the ability to maintain that and give a good experience is absolutely key for retailers and restaurants.

– So, you know, you raised some interesting points, and the way that I might think about that, especially in the last area, is that the methods we’ve used for spacing, if you wanna think about it that way, are squeezing the occasions; we’re separating the purchase from the physical presence. But as you mentioned, people are gonna return to stores. So, you know, curbside pickup is certainly a space and a technique where I order, then I go, and I’m not crowded in with all the other consumers. But on the roles that we see, and I think you said some interesting things about safety in there: retailers certainly can’t staff up for this, as I talked about earlier with the cost to serve being high, so can AI play that role to help with the spacing? Can it help with the safety? Where do you see that stepping in? Does this become a lever for retailers to be able to improve that factor?

– Yeah, absolutely. And I think that, you know, a lot of retailers and restaurants have tried to fill the void with additional operating costs at the moment, but of course that’s not sustainable and that’s not conducive to a good EBIT. And so yeah, with artificial intelligence, and specifically what Everseen does, which is visual AI, we can see different points of your operation, whether you’re in retail or restaurant or whatever industry, whatever you wanna point a camera at and see patterns. That’s where the combination of machine learning and artificial intelligence comes together to ensure certain patterns are being adhered to. So you mentioned the example of spacing. It’s really as simple, and as complex, as making sure that you have camera vision for the particular area that you want to see, and then have predetermined patterns that the machine learning is looking for. And with the artificial intelligence maintaining that, anytime that pattern is not adhered to, or the spacing is not adhered to, it can send an alert to the store team to be able to rectify it. And it can also put it into a report, so that you’re aware of and understand the customer patterns and customer behaviors in store, where there’s no way to do that when you’re just relying on team members themselves.
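A sketch of what such a spacing rule might reduce to once a vision model has produced floor coordinates for each detected person (the detection step, the threshold, and all names here are hypothetical illustrations; Everseen’s actual pipeline is proprietary):

```python
from itertools import combinations
from math import dist  # Euclidean distance, Python 3.8+

def spacing_violations(positions, min_distance=2.0):
    """Return index pairs of detected people standing closer than
    `min_distance` (in metres, an assumed unit) to each other."""
    return [(a, b)
            for a, b in combinations(range(len(positions)), 2)
            if dist(positions[a], positions[b]) < min_distance]

def check_frame(positions, notify, min_distance=2.0):
    """One frame of the 'is the pattern adhered to?' loop: alert the
    store team whenever the spacing pattern is violated."""
    pairs = spacing_violations(positions, min_distance)
    if pairs:
        notify(f"spacing not adhered to: {len(pairs)} pair(s) too close")
    return pairs

# Three detected customers; only the first two are within 2 m of each other.
alerts = []
violations = check_frame([(0.0, 0.0), (1.0, 0.0), (10.0, 10.0)], alerts.append)
```

The same per-frame results can be accumulated into the reports Josh mentions, giving a time series of in-store behavior that staff alone could never record.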

– Interesting. You know, Tom, as I mentioned, Wehkamp is this long-established brand, and you’re really well known for personalization and a deep understanding of your customer. But another area that I know you invest heavily in is your operations and your efficiency. You know, data and AI have helped transform our industry and generate outsize benefits in a short period of time. Kind of playing on this theme that Josh was talking about, what are some of the areas of AI you think will have the biggest impact on retail and consumer goods over the next five years, specifically in the cost to serve?

– Yeah, so we also focus a lot on the interaction and automation of the inventory, basically. As a really old company, we have a unique thing here: we take our own pictures in our own studio. So basically we take care of the complete life cycle of a product, and that allows you to do unique things, like use machine learning and AI across the whole photo studio floor to automate onboarding and make it a lot more efficient, and to get more information out of the products that we get from external suppliers, and even information from our customers that we can use. So yeah, we do a lot there.

– So you’re using AI in the production studio aspect, which I know for apparel retailers is a major cost, right? I’ve got to bring in my new apparel, I’ve got to take pictures of that one specific item multiple times, and then I’ve got to do all the additional encoding of metadata after that. So you’re looking at it in terms of individual processes throughout the entire journey. Is that fair?

– Yeah, that’s fair. So in our process, basically when we get a product in, we usually don’t get a lot of information about the product. We get the actual product, and we take pictures. The pictures are taken in the studio on a mannequin, and then we need to clean them up and make sure each picture is ready for the website. And that used to be done by human hands, basically. So removing backgrounds from foregrounds, and making sure the pictures are up to standard and that they look nice. And now we do a lot of that with AI, basically deep neural networks which extract the background from the foreground and finish the pictures. And we’re even investigating going into placing our product images across models automatically, getting some efficiency out of there, because model inventory is a lot more expensive than normal inventory due to the fact that we have to pay license fees to the models. So there’s a way to optimize there.

– That’s fascinating. I think we’ll all be eager to see what Wehkamp is able to develop there. You know, Patrick, a question for you. 2020, as I mentioned, was a year in which, in many instances, consumer behavior shifted to e-commerce. In the quick service and restaurant space, it was all about being able to shift to delivery and focus on the drive-through, or focus more on curbside pickup. You know, retailers that have been able to do this have captured great market share, and I think that those retailers that are able to retain that are going to grow and come out of this at a faster rate. But, you know, talking about the new normal, does loyalty supplant promotions as the top priority in retail and food? And if so, how should other companies be thinking about consumer engagement? Is there a new loyalty?

– Yeah, so I would actually say two things about that. You talked a little bit about channels, right? So at a typical QSR restaurant, you would be talking about the drive-through, the delivery channel, the self-order kiosk, and in the past also inside of the restaurant, right? And so every channel is a little bit different, and for every channel, obviously, you have to find a way to integrate loyalty programs. But if you manage that, I think loyalty is not necessarily meant to, or even should, replace promotions as such, right? I actually think that they go hand in hand and in a way accelerate each other, right? So promotions and a good marketing strategy that is analytics-fueled and data-fueled will never go away and will only become bigger, right, and more important. But once you have a functioning, good loyalty program and a good base of loyal customers, you have the ability to create even more relevant content and even more relevant promotions and a more relevant brand strategy for those customers as well. Right, so I don’t think that loyalty will overcome the importance of promotions. I think the way that we look at this is that they go hand in hand, right? So your marketing strategy has to cover both, and has to really work to optimize the efficiencies and the impact you can gain from both programs, right.

– So Patrick, I wanted to ask you, what do you think about identity? Because it seems like, you know, with the rise of subscription services, a lot of retailers are asking, how do we get our customer built into our ecosystem? So that they’re not just in the traditional channel that we’ve always had them in, whether it be the restaurant or coming to the store, but really going into e-commerce or free shipping, and then building value in other areas that aren’t traditional to that business.

– Yeah, I think that on a broader scale in the marketing and analytics space, identity is obviously a very big topic, right? Actually, I think everywhere right now. We have GDPR, which is finally taking off in Europe and being taken very seriously. We’ve come across some very similar rules and regulations in other markets as well, and we generally take a very conservative position towards identity, right. And by conservative, I mean so conservative that we just don’t handle that type of data at all, right. We wanna respect, and we are respecting, privacy there, to the extent that we aren’t able to identify customers. But outside of, you know, what the QSR industry is doing, I think the marketing industry as a whole, on a broader level, is also adapting to that shift in consumer behavior and consumer awareness of privacy, right. And even if I take myself as a consumer, you know, I’m obviously from Germany, right? Which I would argue is a market that is particularly privacy concerned, right. I care a lot about my privacy, right. And so where the industry seems to be moving, and it’s not fully implemented yet in the industry, and we have not started on that yet, is that you don’t look at, you know, single identifiable customers anymore, right? You look at aggregations and patterns of aggregations of customers, you know, in groups of 50, 100, 250 and more, right, to preserve that identity. That is just from a data perspective on how this is being handled in general in the industry. But even going a step further, we see in the machine learning space that there’s a lot of effort around the question: even if you have, you know, safe and privacy-compliant data, can you go about the training and the data science aspect also in a safe manner? So that, you know, for example, your data scientists don’t necessarily access the level of detail that would allow them to, you know, see these aggregated groups of identities, essentially, right? 
One common keyword there, I guess, is the aspect of federated learning, which is becoming more and more relevant in the machine learning space, right? So again, we’re taking a very conservative stance on it, and we’re not, you know, even getting close to any type of data that would bear the risk of being privacy concerning. But in the future, as the industry evolves, I think that is a position that will be enabled by technology such as federated learning and, you know, aggregated data collection processes, in order to still be able to bring relevancy and relevant content to the consumer while also preserving and protecting their privacy, right?

– Yeah, and I think that’s a really interesting line of thinking, especially with the increased focus on the role of ethics in AI. And one of the key challenges with ethics in AI is certainly the data that you’re using to do your analysis. And I think the approach that you’re discussing, Patrick, may not fully get us to fully ethical AI, there are still questions in terms of interpretation, but certainly limiting the types of data that you have available at the beginning will go a good distance in helping with that. You know, Josh, another thought that I had as we were talking about this was around measurement within the store, when we talk about consumer engagement and this desire to be more effective in our promotions. What role can AI play in terms of helping to understand promotional effectiveness? I mean, we look at a lot of the trade funds that in 2020 have been cut down; they’re shifting into more digital channels because of the departure of people. But I think when we return, brands are gonna want stronger measurement on that engagement. How can AI help to play in that space where we’re, you know, bringing digital types of capabilities back into the four walls?

– Yeah, absolutely. And I think that this is something that is really interesting, particularly for what machine learning and visual artificial intelligence can bring. Because essentially, if you’re talking about promotions, you’re talking about an inventory problem, right? The inventory that I sent, that might be in an off location in a store, or that may have some special packaging designed to draw customers in: it’s essentially a relationship between the item, where the item is, and the customers themselves as they come in and shop. And so I think that, number one, artificial intelligence can play a key role in actually forecasting: what would be the effective amount that customers are ready to trade up for, right? So instead of item A, they buy item B because it’s on promotion. I think that’s step one. Step two is where the visual AI gets involved: when you ship that particular promotion, whether it’s in an off location or at the end of an aisle, does it get set? That’s probably the basic hygiene for store operations, but in many cases it doesn’t even get to the place that the consumer wants it to be and that the business wants it to be as a featured ad. And so, was it even there to begin with, to be on promotion? That’s something that Everseen can help with, understanding: was it truly there? And then, two, is where the customer interaction gets involved. We’ve always had this idea, this desire, to try to get dwell time, and it’s been very difficult in the past. But you want to understand: does the customer change their traffic pattern toward that promotion and get drawn in? Do they dwell there? Do they pick up the item? Do they see what it is? Do they compare it to other items? You want that information to be known. And that’s where visual AI can play a huge role in understanding: was it set there? 
Did it draw the customer in, in a different way, in a different pattern? And then what was that relationship? And then ultimately, did they buy it? Which is kind of what we can already see.

– Tom, you know, playing on this line of thinking, bringing in digital measurement and talking about using that measurement to inform assortment and the depth of inventory that we carry: I think this creates a really new challenge with e-commerce, or an increased challenge, I should say. You know, I like to cook. I am a Cento tomato fan; it’s an Italian brand of tomato, grown in the volcanic soil of Italy. It’s wonderful if you’re making sauce, right? So if I’m in the physical store, I’m looking for my Cento tomato, that bright yellow can. If it’s not available, I’ll pick something that’s adjacent to it, right? Because I still wanna be able to cook. But now if I’m online, I just go to a different retailer, right? Because I know that the friction is less. So, you know, you think about the role of assortment, you think about promotions in traditional retail that are planned anywhere from 90 to 150 days out. With e-commerce, I can change my spending on promotions, and what I’m focusing on, during the campaign. What do you think, as one of the leading e-commerce companies in Europe, that brands should be doing to enable speed and agility and accuracy as they compete with native e-commerce? As if you were advising your biggest competitor.

– Yeah, so corona actually helped quite a lot there, basically. It used to be that we buy stuff six months, eight months ahead, and we stocked everything for each season, basically. And now, with the fact that we have corona, and the whole global warming thing a bit, there are no actual seasons anymore, so the forecasting models don’t work correctly, or at least not if you don’t redesign them basically with rules, and you need to be able to shift quickly. And that ended up with us, for example, canceling some orders, shifting around some assortments, and making sure that we meet the needs of the customers that we actually have. So we collect a lot of information online. We are purely online, and that information flow is highly optimized. We see everything that the customers do, or what they want, and then we sort of hand that over to the buyers who handle the assortment, and they had to make some mental changes, basically, in how to buy and where to buy and how quickly they can get the goods, so–

– Do you have long lead times, up to six months, in the apparel industry from buying to delivery–

– Six to eight months. So the apparel industry, you see, works in seasons, and well, there are no actual seasons anymore. Even here in the Netherlands, the temperatures are way up from what they normally would be. It should be autumn, but it feels like spring. So it’s a mental shift in what to buy and when to buy, and data helps quite a lot there. Seeing what the customers do on the sites and how they behave, what they’re actually looking at, and even external information; for example, we also look at what’s on TV, what people watch on TV, that sort of information we can include, basically. And that then determines which kinds of customers come to our website, or what promotions we show on the website.

– Patrick, you’ve got a similar challenge, I would think, but in a much different way. I mean, the volatility of consumer behavior and your promotions, with all the uncertainty: how do you see machine learning and AI helping in the quick service space to improve the ability to predict demand at the stores, given the amount of volatility that’s happening? Are there other ways that you’re looking at or thinking about approaching that problem?

– Yeah, and this actually ties a little bit into what we discussed before, which is that there are a little bit two schools of thought on how we can help predict demand and help manage inventory and eventually, you know, gain efficiencies from having greater insight into those two, right. The one school of thought, I would argue, is that you can take a consumer lens, right? And then a little bit that PII lens. The other lens is that you can take a purely transaction- and product-based lens, right? So arguably, you know, as the space is still evolving, you could eventually gain greater accuracy from a broader set of data and a broader understanding of the consumer, but you’re actually able to identify demand and forecast and identify efficiencies also just from pure consumption data, right, of products. And so that’s the one thing. The other thing is we actually already have, during these difficult times over the past months, a reduced menu in some markets, right? And particularly in the U.S., right? So we’re really focusing on, you know, the core choices and core products, and we have the ability essentially to reduce that a little bit. And maybe it even ties a little bit to what, you know, Tom said before, which is that seasons are going away. You have less, you know, fluctuation of different styles. And a little bit of that is also occurring, obviously, on the menu board of a fast food restaurant or in the QSR industry as well, right. You have a little bit less variation in what people actually demand and want, right. So we’re able to, you know, look at it a little bit from that lens. The other thing, though, is that we do still heavily work with product recommendations and general recommender systems to make content, and let’s say content here is actual menu items, more relevant, right? And to then tie that up with promotions through the various channels, right? Via email or, you know, an app or the menu board itself, right? 
So, that’s a little my take on it.

– That’s really interesting. So you’re looking at product sales at individual locations, which should already reflect the individual nuances of that location and the types of consumers that are coming in and buying, where it’s not as much about looking at the individual consumer; you would think that the people who purchase at a store share some commonality with each other. Are you doing anything with that to localize, or would you suggest not going into that specifically, or would you suggest bringing in alternative datasets as a surrogate to help enrich the understanding of the behaviors or how individual stores operate?

– Yeah, so first of all, with a company this large, it’s obviously always a little bit market by market, right? The individual markets may treat this a little bit differently, and again, we adhere basically to the localities of that market in how we approach this, right? But absolutely, based on, you know, pure consumption behavior or transaction data, or even external research on consumer behaviors that we can source, we do localize and heavily personalize, you know, recommendations, product recommendations, offers, promotions in that regard, in order to drive more relevant content. Right, and that level of personalization, we’re not doing it at the restaurant level, for example, right. Because that is then simply a little bit too small of a data base to build a powerful system in that regard, right. But we can absolutely do it on, you know, national and state and then smaller levels that help inform us as well on how we treat it elsewhere. And we can also have learnings across different markets, even from that, right. So overall we have the ability to learn from different consumer behaviors in different markets, and then even different localities. We just don’t take it to a very granular level, for various reasons. One, because I don’t necessarily always see the statistical significance in that, right; there’s a lot of fluctuation and there are factors and variables that influence that. And two, because again, we take a very conservative stance on, you know, privacy and consumption behavior as well, and we wanna maintain that. The last thing I will say to what you asked is that, absolutely, we are starting to look at external data sources as well. You know, for example, weather data is always a simple one to look at and get access to. 
Traditionally in the analytics and machine learning space, weather data is one of the most powerful external data sources you can include in your models; it always drives some added benefit. But there's other data as well, right? Demographic data, and other data sets aggregated in a manner that we can use to make, let's say, the product recommendations more powerful.

– Yes, completely agree. And I think that's a trend we're seeing across many of our leading customers, the use of those alternative data sets. And I think Tom talked about it before: in e-commerce and apparel, you can even create your own alternative data sets by doing feature extraction, creating new features from things you didn't look at before, if you want to consider that an alternative dataset. Josh, I wonder, from the computer vision perspective, does this become one of the new growth opportunities for retailers, given that they're not really using the vision of what's happening in stores to improve today? Does this become the new wave of how we think about space planning, assortment and other decisions, basing them on observed feature sets we've never had access to?

– Yeah, definitely. And I think there's so much growth opportunity for retailers in almost all areas of AI: taking in those external data sets to power better forecasting, better inventory management, better checkout, better staffing. All of these things can really be helped by understanding the world that we all know. If it rains and you have an indoor mall location, your customer traffic is going to go up; if you're an outdoor standalone, your customer traffic is going to go down. We all know this as operators and retailers, but we haven't really built those rules into the systems that essentially operate the retailer to fuel the decisions they make. And I think it's the same, as you mentioned, Rob, with looking at visual patterns inside the store: how many people are coming in, what's the entrance rate, how many people are going through checkout at the moment. And then how does that make you think about service, additional staffing during surge times, or even asset protection? Because when you get a surge, that's typically when you see behavior at the checkout, whether intended or not, start to show more of what we call bad patterns. And so when we look at those different aspects, it really can start to drive the in-store operation to make it better for the team, for the customer, and then ultimately the bottom line of that retailer.

– So, we've had a good discussion today. We've covered supply chain and efficiencies, we've talked about consumer experience, and we've touched on emerging technologies like computer vision. If you look at what happened in the mid-2000s, the top two factors that drove consumer behavior were really price and the assortment of items. Over the past 10 years, it's largely been driven by convenience. But what are the factors that are going to drive choice in the new normal? Does it remain these three? Does safety become a primary factor? What are your thoughts on how consumers are going to prioritize their choice of restaurants and retailers?

– That's a tricky one for me to answer, and I can only express a personal view on that. One, I think the expectation of consumers to find themselves in a safe environment, both from a hygiene perspective as well as any other aspect of safety, is here to stay, and it's one to continue investing in and iterating on with consumers, to make sure the right sensation of safety is given to them. That's the case for me as a consumer as well. So I think that one is going to stay with us. In the QSR space, I would argue that the other two will always continue to remain relevant. Price, and even loyalty to certain products or certain brands, is a major driver of consumption, particularly in the QSR space. Other industries may be a bit different; I don't know if Tom is experiencing different things, but on the QSR side, definitely those two factors will continue to be important.

– Excellent. Tom, on that note, safety in e-commerce may not seem as critical an area, although certainly the protection of data privacy is always paramount, but–

– So we do a lot of work on GDPR, but physical safety has also created a sort of convenience trend that's quite interesting. In the Netherlands, handing over packages used to be quite traditional: the mailman would come to your door, hand over your package, and you needed to sign for it. Now with COVID, it's created a sort of convenience norm, where customers are much more readily allowing the delivery person to put the package somewhere safe, in front of the door or in a safe spot somewhere else. So it created a sort of convenience need, and my guess is it will stay and become the norm. People allow the delivery person to place a package somewhere safe, either in a box or at the door, as a sort of convenience route. Otherwise you have to wait on the delivery person, whereas now you can just go somewhere else and your package will still be delivered.

– Josh, any thoughts? What are the primary factors? Is safety a primary factor? Does it become the primary factor? What else might become that consideration?

– Look, convenience is king. It still is. And I think convenience can best be summed up by your customer saying, look, don't waste my time, right? So in the Netherlands, if it's more helpful for the package to be delivered in a safe place without a signature, and we can solve things in a different way, then that's what's going to win out and that's what's going to capture more customers. I think that's the driving force. Safety, of course, is always going to be important, and progressively more so. But I think it fits within the broader customer experience of: look, I want to come in, I want to find what I'm getting or what I'm needing, your Cento tomatoes. Which, I agree, I like the San Marzano, but if I come in and it's not there, you just wasted my time. Why am I dealing with you? Or if I go through checkout and there's friction there, I'm waiting in line, or the self-checkout machine alerts me on my 20th security scale violation. Anything like that, the customer will get frustrated and start to trade elsewhere, where there's not this friction, where they can get their inventory. And so I think these are the things that will win out, and will continue to grow in importance and continue to grow market share. I think the key here is how retailers and restaurants capture that, to be able to hold accountability, to drive efficiency and operational effectiveness, so that the customer keeps coming back and builds the loyalty we already discussed.

– Excellent, well, Tom, Josh, Patrick, I really appreciate your time on the panel today. You were absolutely brilliant in your responses. Thank you for your participation today. And well, thank you for everybody for listening in.

– [Moderator] Thanks Rob, and thanks again to our panel for all the great insights. And that’s all we have for today’s content. Thanks again for joining our Industry Leadership Forum. And we look forward to seeing you next time.

About Rob Saker


As the Global Industry Leader for Retail & CPG at Databricks, Rob Saker brings a wealth of industry knowledge to Databricks. A former CDO, Rob has successfully led data and analytics transformations at retail and CPG firms including MillerCoors, ConAgra and Crossmark. Rob meets regularly with executives across the industry to share trends and help develop strategies on how to best use data and AI to accelerate business value.