When I first got into the business, broad budget allocations across media types were typically based on subjective judgment and on what they had been the year before. The ANA’s Arch Knowlton and I were part of the movement in the 1980s that shifted thinking toward marketing mix modeling (MMM) as the way to make these budget allocations.[1] And MMM has come a long way since then.
However, the way media types are scored in MMM is typically based on a short-term efficiency metric (ROI/ROAS) rather than on brand growth. Today, Les Binet and Peter Field have convinced MMM practitioners to balance these metrics with longer-term measures, which may themselves be efficiency metrics over a longer time frame. In a few cases I have seen, from practitioners such as Joel Rubinson, growth from year one to year two is an explicit metric in the modeling.
The problem with efficiency is that it is not a predictor of brand growth. Some brands may have the highest ROIs/ROASs yet be flat or down in their brand penetration.[2]
We saw this in the enormous multi-year MMM study sponsored by FOX and carried out by Bill Harvey Consulting, using sales data from IRI (CPG), Polk (Automotive), and NPD CREST (QSR), and ad spend data from Standard Media Index (now called Guideline). Guideline’s data comes directly from agency systems, and for national advertisers it is the most accurate ad spend data available in the countries in which Guideline is active.
In that MMM study, covering a trillion dollars in sales and $48 billion in ad spend in the US, streaming premium TV had the highest ROAS, linear TV was second, online video third, and the rest of digital last. The CPMs, however, ran in the opposite order. From 2014, when that study began, to today, digital, with its appealingly low CPMs, went from 23% to 80% of total global ad spend across all advertisers, big and small. This is clear evidence of the dominance of the efficiency ideology, which was already old by the time I got into the business, and it is one of the few things that has never changed.
But when we looked at the results of this trillion-dollar sales analysis from a growth perspective, this is what we saw (CPG, Auto, QSR):
This strongly suggests that MMM should also produce tables for management that show brand penetration growth as the outcome variable, broken out by broad media type, not just ROI/ROAS.
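As a rough illustration only, here is one way such a table might be assembled; every figure in it is a placeholder for the sake of the example, not a finding from the study described above.

```python
# Hedged sketch of the suggested management table: the usual ROAS column shown
# side by side with year-over-year brand penetration growth by broad media type.
# All numbers are placeholders for illustration only.
import pandas as pd

report = pd.DataFrame(
    {
        "media_type": ["Premium streaming", "Linear TV", "Online video", "Other digital"],
        "roas": [4.1, 3.6, 2.9, 2.2],                # short-term efficiency readout
        "penetration_yr1_pct": [18.0, 17.5, 16.0, 15.5],
        "penetration_yr2_pct": [19.1, 18.1, 16.2, 15.3],
    }
)
report["penetration_growth_pts"] = report["penetration_yr2_pct"] - report["penetration_yr1_pct"]
print(report[["media_type", "roas", "penetration_growth_pts"]].to_string(index=False))
```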
There are several reasons why MMM, based as it has been on multiple regression analysis (i.e., correlations, without proof of actual causation), is not yet the ideal tool for deciding the budget allocations to major media types.
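To make that critique concrete, here is a minimal sketch, using synthetic data and hypothetical channel names, of how a correlation-based MMM turns regression coefficients into ROAS-style readouts. Nothing in it establishes causation, and it is not the model used in the study above.

```python
# Minimal illustrative sketch of a correlation-based MMM: regress weekly sales on
# media spend by channel and read the coefficients as "incremental sales per
# dollar spent". Synthetic data; hypothetical channel names.
import numpy as np

rng = np.random.default_rng(0)
weeks = 104
channels = ["premium_streaming", "linear_tv", "online_video", "other_digital"]

# Synthetic weekly spend (in $000s) and a sales series built from it plus noise.
spend = rng.uniform(50, 500, size=(weeks, len(channels)))
true_effect = np.array([2.5, 2.0, 1.5, 0.8])  # unknowable in practice
sales = 1_000 + spend @ true_effect + rng.normal(0, 150, size=weeks)

# Ordinary least squares: sales ~ intercept + spend. The fitted coefficients are
# correlational estimates; nothing here proves the spend caused the sales.
X = np.column_stack([np.ones(weeks), spend])
coef, *_ = np.linalg.lstsq(X, sales, rcond=None)

# In this simple linear, no-adstock toy, the ROAS-style readout per channel is
# just the coefficient itself (modeled incremental sales divided by spend).
for name, beta in zip(channels, coef[1:]):
    print(f"{name:18s} modeled ROAS ~= {beta:.2f}")
```

A model like this will report an ROAS for every channel even in a year when the brand's penetration is flat or declining, which is exactly why a growth table belongs alongside the efficiency table.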
There is still a lot of work to be done. I am focusing on the impression quality problem.
We just completed a lab EEG study which shows that digital ad exposures are far behind TV/premium streaming in impression quality. The study also showed that resonance between the ad and its context is the strongest predictor of synchrony, the brain measure most predictive of sales effect. We announced an in-home natural-exposure pilot study that the whole industry is invited to help sponsor at very modest cost. This will enable brands to see the EEG synchrony build-up from impression to impression associated with each media type, all tied to the same household’s purchase changes. For more information, please click here.
Once the industry is able to pierce through the abstract numbers from black boxes and see individual anonymized human beings going about their lives and being influenced by different types of ads, the hot air will all be let out. Facts and common sense alone will then make it possible to grow brands without huge superstructures of black boxes and the virtuoso application of BTB persuasion finesse.
[1] Our work then focused on the differences in year-to-year brand sales growth by market, not on efficiency metrics, although we did show that ad spend increases in topped-out markets were counterproductive.
[2] If MMM included a validated method of estimating the incremental sales produced by media, the ROI/ROAS efficiency metrics would agree more closely (though not identically) with the brand growth metric in terms of which media types look good. Correlation-based methods (including sales lift studies not based on experimental design) estimate how much of sales was incremental and due to advertising, using a variety of approaches that produce widely differing results. I have seen many studies in which all sales were counted as being driven by one platform’s advertising, including sales to loyal buyers who never buy any other brand in the category.