Best Practices for Improving Marketing Mix Modeling -- Bill Harvey

By In Terms of ROI Archives

Note: This post was written a week or two before the recent ARF Industry Leaders Forum focused on this same topic. At that two-day event, many of the ideas below surfaced from diverse attendees, including Neil Canter of Nielsen and, of course, myself.

Over the past 12 months a number of key people in the industry have questioned the validity of MMM ("Mix") and wondered aloud whether it might be doing some harm along with the obvious good it has done. The reaction from some quarters was nearly in the same league as the greeting given Galileo when he reported evidence that the Earth was quite possibly not the center of the universe. Sidestepping this controversy, in this post we'll seek a moderate, constructive middle point summarizing the easily at-hand ways Mix can be improved by all who use it. Jim Spaeth and Alice Sylvester have already recommended many of the same ideas. The jury is still out as to whether EMM's (Effective Marketing Management) alternate methodology might be of even higher utility than Mix, and the stakes are high enough for advertisers to test one against the other to see which gives better operational advice as measured by marketplace success. For the moment, the following remarks apply to Mix, given most advertisers' prevalent dependence on it:

  1. Use singlesource to bolster Mix. Sebastien Lion, marketing scientist leader at Mars/Wrigley, calls for a hybrid method, and Hierarchical Bayesian Modeling is it. mProductivity and Marketshare Partners excel in this technique. Advertisers need to demand it, along with the improvements outlined below, and Mix results will become far more likely to provide accurate estimates of attribution to components in the marketing arsenal.
  2. Express media data in reach/frequency rather than in GRP within each geo/time cell. If possible use GRP against purchasers (PRP as TRA calls it).
  3. Break out PRP by creative execution and weight PRP by the efficacy of the individual creative execution as detected by singlesource.
  4. Break out TV further by high-rated vs. other program vehicles based on ample and mounting evidence of higher sales impact of higher-rated programs. Note that recall and other communications metrics do not necessarily show the same results they did a long time ago when TV was a far simpler place. Someday when your brand is confidently able to measure the sales results by rating level through singlesource, A/B, and whatever other techniques, use these weights in Mix.
  5. Make greater use of A/B testing in order to minimize noise to the absolute extent possible, and include these data in Mix. In particular, use A/B to separate out the sales effects of new media such as social, owned, mobile, online video and other hard-to-read but potentially crucially important and disruptive Big Idea media. This does not replace Mix; it augments and enhances it.
  6. Use singlesource for faster reads to increase nimbleness especially when new media are used in new ways. This does not replace Mix; it is used while waiting for Mix results. Mix, singlesource and A/B are the full arsenal to be used together by adept and sophisticated marketers. Plus surveys and nonverbal methods to dig down into the Why level.
  7. Pay special attention to the way that in-store causal variables are treated within the model. Mix shows that price and promotion greatly outweigh advertising in sales effect, so the potential for extreme error exists to the degree that these in-store variables are handled in a gross manner. For example, Eskind's Footrule is a standard practice in Mix and possibly even the norm. This footrule searches out brand store/weeks in which there are sales spikes and, in a circular-reasoning loop, ascribes unknown promotion as a causal stimulus to those store/weeks. This is probably justified, and yet it ignores the need to discern exactly what kind of stimulus was responsible (end-aisle displays, shelf talkers nowadays with or without new media such as Thinaire, in-store video, free samples, coupons, features in in-store print circulars, temporary price reductions, enlarged shelf space, etc.). It is therefore a blunt instrument that (a) misses other cases where stores have the key element making the difference (where spikes could have been flattened by competition, out-of-stocks, etc.), (b) misses the opportunity for important learning and success repeatability, and (c) adds noise. It is not the best practice. The best practice is of course to conduct audits to actually measure all of these in-store variables brand by brand, day by day, and store by store. As we all know, compliance with intended and agreed promotions is actually quite spotty, and yet such audits are only occasionally ordered. The one exception is dunnhumby, which conducts such audits 100% of the time. Its in-store data should therefore always be included in CPG Mix.
  8. Back to basics time. We have let slip the classical rigors of adequate sample size, minimum nonresponse bias and minimum response bias. The new ARF and the MRC (Media Rating Council) can be used together to put that rigor back in. Ascription of missing information and wholesale, massive fusion are employed unhesitatingly, mixing methodologies and smoothing them into one seamless whole, just because computers make it easy to do so. What are we losing by this? Recent nonresponse work done by the CRE (Council for Research Excellence) seemed to show that, on average, no harm is being done by nonresponse in TV currency measurement. Has this been publicly peer reviewed at a sufficiently granular level (looking below the averages)? Are there inconvenient truths being swept under the carpet? Everyone is running flat out just to keep up with the accelerating email stream, and in the average second most eyes are looking at one screen or another. What are we doing?
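Item 2 above, expressing media delivery as reach and frequency rather than raw GRPs within each geo/time cell, rests on a simple identity: GRPs equal percent reach times average frequency. The sketch below, in Python with invented numbers purely for illustration, shows why two cells with identical GRPs can represent very different exposure patterns, information a GRP-only Mix input throws away:

```python
def grp_to_reach_frequency(impressions, universe, reached):
    """Decompose delivery in one geo/time cell into GRPs, reach %, and
    average frequency.

    impressions -- total ad impressions delivered in the cell
    universe    -- people (or purchasers, for a PRP-style read) in the cell
    reached     -- distinct people exposed at least once
    """
    grps = 100.0 * impressions / universe       # gross rating points
    reach_pct = 100.0 * reached / universe      # % reached at least once
    avg_freq = impressions / reached if reached else 0.0
    # Identity check: GRPs = reach% x average frequency
    assert abs(grps - reach_pct * avg_freq) < 1e-9
    return grps, reach_pct, avg_freq

# Two hypothetical cells delivering the same 200 GRPs:
broad = grp_to_reach_frequency(200_000, 100_000, 80_000)   # (200.0, 80.0, 2.5)
heavy = grp_to_reach_frequency(200_000, 100_000, 20_000)   # (200.0, 20.0, 10.0)
```

One plan reaches 80% of the cell an average of 2.5 times; the other reaches 20% of it 10 times. A model fed only GRPs treats these as identical inputs, which is exactly the information loss item 2 argues against.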
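Item 5's call for A/B testing to minimize noise can likewise be sketched with matched-market pairs. This is a hypothetical illustration with invented sales figures, not any particular vendor's method: computing the lift pair by pair yields both a mean lift and a standard error that quantifies how noisy the read is before the result is fed into Mix.

```python
import statistics

def ab_lift(test_sales, control_sales):
    """Percent sales lift of test markets over matched control markets,
    plus a rough standard error of the mean lift across pairs."""
    lifts = [100.0 * (t - c) / c for t, c in zip(test_sales, control_sales)]
    mean_lift = statistics.mean(lifts)
    std_err = statistics.stdev(lifts) / len(lifts) ** 0.5
    return mean_lift, std_err

# Invented weekly sales for four matched test/control market pairs:
test = [110, 105, 120, 108]
ctrl = [100, 100, 100, 100]
mean_lift, std_err = ab_lift(test, ctrl)   # 10.75% lift, 3.25-point std error
```

A mean lift several standard errors above zero is a read worth trusting; a lift swamped by its standard error is exactly the kind of noise item 5 warns about folding into Mix unexamined.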

I presented some slides at a 1992 ARF conference depicting a future in which sales feedback loop data would be plentiful and we would be running our businesses through dashboards based on real sales effectiveness data. We have reached that place, a prediction that at the time caused giggles of disbelief. The exact way we are executing it, however, still needs considerable fine-tuning.

The eight-step program recommended above is an outline of how to tune it into defensible science provably maximizing shareholder value. We will know it’s working when the corporations driving world success have created high employment levels, economic security and well-being for employees, stockholders and customers. The interconnectedness of all things is not just a concept.

Bill Harvey is a well-known media researcher and inventor who co-founded TRA, Inc. and is its Strategic Advisor. His nonprofit Human Effectiveness Institute runs his weekly blog on consciousness optimization. Bill can be contacted at

Read all Bill’s MediaBizBloggers commentaries at In Terms of ROI.

Follow our Twitter updates @MediaBizBlogger

The opinions and points of view expressed in this commentary are exclusively the views of the author and do not necessarily represent the views of management or associated bloggers. MediaBizBloggers is an open thought leadership platform and readers may share their comments and opinions in response to all commentaries.


Copyright ©2021 MediaVillage, Inc. All rights reserved. By using this site you agree to the Terms of Service and Privacy Policy.