Analytics Cases Published by the Journal of Retail Analytics
In a previous issue of the Journal of Retail Analytics (Volume VIII, Issue 4), we introduced the concept of employing analytics to optimize digital signage messages. By using analytics to correlate and score digital messaging against in-store purchasing patterns, we can create playlist recommendations that ensure the most effective message is displayed in a given place at a given time.
In this article, we report on cases highlighting how even small changes in the digital marketing strategy – when backed by measurements and analytics – can make a big difference.
The first case describes our work with a quick service restaurant (QSR) chain in Europe. Panos, part of La Lorraine Bakery Group, is a franchisor with more than 250 restaurants, predominantly in Belgium and the Netherlands. Panos has been using digital menu boards with Scala’s Content Management solution for about four years. Because QSR is generally a low-profit-margin business, any investment, such as replacing light-box-based menu boards with digital menu boards, is carefully examined and needs to be well justified. A typical Panos restaurant (see Figure 1) features one screen displaying promotional messages, surrounded by conventional light boxes that present the menu items.
Since no quantitative measurements about the effectiveness of digital in-store marketing were readily available, Scala and Panos joined forces to measure whether digital menu boards can indeed change consumer behavior. The test targeted soft drinks, a product group in QSR with a high profit margin. Eight restaurants in Belgium were selected to run the tests and were paired with eight similar restaurants that served as control. We would increase the frequency of the messages specifically targeting soft drinks in the “Test” restaurants, and compare the customer response to that measured in the “Control” restaurants. Using statistical methods such as cluster analysis, we made sure that the Test and Control groups were comparable in sales volume and profile.
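To make the pairing idea concrete, here is a minimal sketch in Python. It uses a simple greedy nearest-neighbor match on average weekly sales as a stand-in for the cluster analysis described above; the store IDs and sales figures are invented for illustration.

```python
# Sketch: pair each candidate Test store with the most similar Control
# candidate by average weekly soda sales. (Illustrative stand-in for the
# cluster analysis mentioned in the text; all numbers are invented.)

def pair_stores(test_candidates, control_candidates):
    """Greedily match each test store to the closest unused control store."""
    pairs = []
    available = dict(control_candidates)
    for store, sales in test_candidates.items():
        best = min(available, key=lambda c: abs(available[c] - sales))
        pairs.append((store, best))
        del available[best]          # each control store is used only once
    return pairs

test = {"T1": 1200.0, "T2": 950.0}
control = {"C1": 960.0, "C2": 1180.0, "C3": 700.0}
print(pair_stores(test, control))    # → [('T1', 'C2'), ('T2', 'C1')]
```

In practice a proper cluster analysis would consider multiple profile variables, not just one sales figure, but the goal is the same: comparable groups before the test begins.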
As a starting point, the following parameters were set:
• For the Test restaurants, the target-message repetition time was set to 60-90 seconds, leaving room in the playlist not only for the target messages but also for advertisements of other products mandated by the scheduled marketing calendar.
• In the Control restaurants, predominantly the mandated product advertisements were shown, with the target messages interspersed only about every 4-5 minutes.
• Assumption: still images are sufficient to evoke a positive customer response.
An example of the target messages is shown in Figure 2: a soda bottle is prominently displayed together with a food item, accompanied by some marketing text and pricing information.
Figure 3 shows the data from the first iteration. To account for even small pre-existing differences in sales between the Test and Control groups, the time periods adjacent to the three-week test period in September (marked in grey) were used to normalize the data. At the end of the first iteration, the Test restaurants had generated 2.53 percent more soda sales.
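The normalization logic can be sketched as follows: sales in the test window are scaled by each group's sales in the adjacent baseline weeks, so pre-existing differences between Test and Control drop out. All numbers below are invented for illustration.

```python
# Sketch of baseline normalization: each group's test-window sales are
# indexed against its own adjacent-period sales, and the lift is the
# ratio of the two indices. (Invented numbers, for illustration only.)

def normalized_lift(test_base, test_window, ctrl_base, ctrl_window):
    """Relative sales lift of Test vs Control after baseline scaling, in %."""
    test_index = test_window / test_base   # Test growth vs its own baseline
    ctrl_index = ctrl_window / ctrl_base   # Control growth vs its baseline
    return (test_index / ctrl_index - 1) * 100

# e.g. Test grew 4.0% over its baseline while Control grew 1.4%:
print(round(normalized_lift(1000, 1040, 1000, 1014), 2))
```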
Given that beverages are high-profit-margin items in QSR, this was a very promising result. Therefore, we decided to optimize the campaign further by attempting to correlate playout frequency with customer behavior. The time a typical customer pays attention to the menu board before placing an order was measured at 10-15 seconds. Given the first iteration’s repetition time of 60-90 seconds for the target messages, it became clear that only every fourth to sixth customer was exposed to the target messages, even when the restaurant was full. During periods when customers entered the restaurant only sporadically, even fewer might have seen the target messages. So, for the second iteration, the target messages were played nonstop so that every customer was more likely to see them. We also added subtle animation: because the human eye is drawn to movement, we hypothesized that this would increase the campaign’s effectiveness as well.
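The exposure estimate above follows directly from the ratio of the attention window to the repetition time, assuming customers arrive at points spread evenly across the playout loop:

```python
# The share of customers exposed to a target message is roughly the
# attention window divided by the message repetition time (assuming
# customer arrivals are spread evenly across the loop).

def exposure_fraction(attention_s, repetition_s):
    return attention_s / repetition_s

# First-iteration figures: 10-15 s attention, 60-90 s repetition time
print(exposure_fraction(15, 60))   # 0.25  -> roughly every 4th customer
print(exposure_fraction(10, 60))   # ~0.17 -> roughly every 6th customer
```

Playing the target messages nonstop in the second iteration pushes this fraction toward 1.0, which is consistent with the much larger lift observed.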
The new tests were run during four weeks in December through January (see Figure 3). Again, sales data from the preceding time period were used to normalize the data. Removing outliers such as the spike on January 5, the Test restaurants generated an additional 7.75 percent in soda sales. This constitutes a very solid sales lift given only a small change in parameters (animation and frequency). Our analysis further showed that the additional soda sales did not negatively impact other product groups (“cannibalization”). Hence, additional revenue was generated.
As a final step, we looked for correlations between weather and soda purchases. While we had initially assumed that temperature would be the driving factor behind beverage sales, the analysis of temperature, sunshine, cloud cover, precipitation and humidity showed that for our test group, temperature did not have any significant effect. Instead, we detected a positive correlation between beverage sales and sunshine: Even when it was sunny but cooler, more sodas were sold than when it was overcast and warmer. Incorporating weather in future campaigns could boost the sales results even further – something that we have yet to test. The influence of a dynamic factor – sunshine – underscores that the optimization of digital messaging is an ongoing process, rather than a one-time procedure. It is vital to continuously
record and analyze all available data, and to use the results to optimize the digital content in an ever-changing environment.
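The kind of correlation screening described above can be sketched in a few lines of Python. The helper below computes a Pearson correlation coefficient; the sunshine and sales figures are invented purely to illustrate the shape of the analysis.

```python
# Sketch: Pearson correlation between daily soda sales and a weather
# variable (sunshine hours here). All data points are invented for
# illustration; the real analysis covered several weather variables.
from statistics import mean

def pearson(xs, ys):
    mx, my = mean(xs), mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    var_x = sum((x - mx) ** 2 for x in xs)
    var_y = sum((y - my) ** 2 for y in ys)
    return cov / (var_x * var_y) ** 0.5

sunshine_hours = [2, 8, 5, 9, 1, 7]
soda_sales     = [80, 130, 100, 140, 75, 120]
print(round(pearson(sunshine_hours, soda_sales), 2))  # strongly positive
```

Running the same screen over temperature, cloud cover, precipitation, and humidity is what surfaced sunshine – rather than temperature – as the driving factor in our test group.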
Because of the lift in sales tied to the tests, Panos can justify existing and future investments in digital menu boards. This case also shows that even subtle changes in the digital marketing strategy can have a big impact.
For our second case study, we worked with Last Call Studio (LCS), which is the off-price brand of a large, upscale, U.S.-based fashion retailer. Prior to the engagement with Scala, LCS did not have any experience with in-store digital signage. As in the QSR case above, the questions of cost and in particular Return on Investment were brought up. To limit the upfront investment, the team decided to start with a single store, gather statistically significant results, and then add locations as the results dictated.
After analyzing historical sales data, a location in Rockville, Maryland was selected for the test. Inside the store, three sets of three portrait-mode, 46-inch screens were installed (see Figure 4). Since we were working with a single store, playlist schedules were randomized across time frames so that “neutral” messages (such as a company logo) could be tested against target messages (see Figure 5). This framework helped ensure that external influences (e.g., special promotions or discounts mandated by the scheduled company marketing calendar) were largely suppressed, minimizing the chance of a “false positive” result.
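The randomization scheme can be sketched as follows. The time-frame labels are invented for illustration; the point is that each frame of the day is randomly assigned either the neutral or the target playlist, so weekday and promotion effects average out across both conditions.

```python
# Sketch: randomly assign each day's time frames to either the "neutral"
# or the "target" playlist. (Frame labels are invented; a fixed seed is
# used here only to make the example reproducible.)
import random

FRAMES = ["10-12", "12-14", "14-16", "16-18", "18-20"]

def daily_schedule(rng):
    """Return {time_frame: playlist} with a random split for one day."""
    return {frame: rng.choice(["neutral", "target"]) for frame in FRAMES}

rng = random.Random(42)
print(daily_schedule(rng))
```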
The following parameters were chosen for the initial campaign:
• Targeted specific product groups: Women’s dresses, shoes, and handbags.
• Fifteen clips with a total running length of about six minutes, showing messages designed to entice customers to consider buying women’s dresses, shoes, and handbags. Given an average in-store dwell time of about 20-30 minutes, a six-minute repetition cycle was set so that each customer would be exposed to the full loop several times during a visit.
• Assumption: It was deemed acceptable that the displayed products could be generic examples of the targeted product groups.
To be able to calculate the Conversion Rate (the chosen Key Performance Indicator), we recorded detailed, time-stamped sales data and customer traffic counts.
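The KPI itself is simple: purchases divided by visitors, computed separately for periods when target content was playing versus neutral content. A minimal sketch, with invented counts:

```python
# Sketch: conversion rate per condition from time-stamped sales counts
# and door traffic, split by whether target or neutral content was
# playing. All numbers are invented for illustration.

def conversion_rate(purchases, visitors):
    """Purchases per visitor, the KPI chosen for this test."""
    return purchases / visitors if visitors else 0.0

target_cr  = conversion_rate(purchases=54, visitors=900)
neutral_cr = conversion_rate(purchases=40, visitors=880)
lift_pct = (target_cr / neutral_cr - 1) * 100
print(round(target_cr, 3), round(neutral_cr, 3), round(lift_pct, 1))
```

Normalizing by traffic rather than comparing raw sales is what lets a single store serve as its own control across the randomized time frames.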
After four weeks of measurements, we analyzed the data and came up with some interesting results (see Figure 6). The Conversion Rate for dresses and shoes had increased considerably. These results matched anecdotal observations that we gathered by talking to sales staff and customers: many customers were inspired by the ads showing beautiful dresses and shoes to browse through the merchandise, try on, and in many cases, buy targeted items.
However, the results for handbags exhibited a significant decline during time periods when test messages were shown. Revisiting the digital media items revealed that handbags were represented only in three out of the 15 advertising clips. That is, in comparison with dresses and shoes, handbags were drastically underrepresented. This lack of potential customer exposure might explain a neutral response, but not a decline. After correlating the inventory in the store with the handbag images shown during the campaign, it was discovered that our initial working assumption that playing generic images would suffice to evoke a positive customer response did not hold true.
Conversations with sales staff confirmed that customers often paid attention to specific products in the digital messages. For instance, within a few days of showing an image of a particular dress that was in fact available in the store, that dress sold out, although its sales prior to our digital campaign had been below average. The sales staff further reported that they used the ads as an additional sales tool, drawing inspiration from them and showing attractive merchandise to their customers. In the case of handbags, customers would notice and seek out specific handbag models they had seen on the displays and would leave the store frustrated if that particular handbag was not available for immediate examination and purchase.
Since the results for dresses and shoes showed promise, the joint team decided to run a second iteration to see if a positive lift could also be obtained for handbags. A second store in Dallas, Texas was added to the experiment, the digital content was aligned with the inventory available in the stores, and the representation of handbags in the digital playouts was increased. The results (see Figure 7) clearly show that this strategy worked: the conversion rate for handbags showed a positive lift of about 30 percent compared with the first iteration. Anecdotal reports supported this result: a customer who spotted a handbag she liked on the digital displays asked for that particular model and, after examining it, bought it on the spot. Clearly, displaying relevant messages creates a positive in-store experience that helps achieve sales results.
Throughout this second iteration, the conversion rate for dresses remained solidly positive, while shoes initially underperformed but recovered during the later weeks of the test period. This again illustrates the dynamic in-store situation: only continuous optimization can maintain a positive customer experience by exposing customers to relevant digital marketing messages.
Similar to what we saw in our first case study, weather played a role in this case, too. By analyzing weather data and related customer traffic we confirmed analytically what the retailer had witnessed anecdotally: More customers would visit the stores when it was raining. This effect was more pronounced on
Saturdays, the day with the strongest sales.
Both case studies demonstrate that in-store digital signage does indeed measurably influence customer purchase behavior. By analyzing sales data, playlist schedules, and other relevant data, in-store digital communications can become a valuable tool supporting specific business goals. We have seen that it is essential to base decisions regarding campaign strategy on concrete data whenever possible. By avoiding the traps of “guessing” and “assuming,” campaigns can be optimized to achieve better results. So far we have only begun to explore the exciting potential of analytics-optimized digital messaging. The journey continues.
By Dr. Stefan Menger, Vice President of Advanced Analytics, Scala, Inc.