Pre-Bid Media Impact Weights

From the In Terms of ROI Archives

Per the Advertising Research Foundation's (ARF's) model, the audience data we all use the world 'round as the main basis for media selection reflects merely a count of the opportunities to perceive an ad, not the number of targets who saw it, were affected by it, or bought as a result of it. Thus, it was natural that people in the advertising business would eventually get the idea of weighting these audience counts by some measure of their efficacy.

When I first joined the ad business, I began to ask my mentors about media impact weights, as I called them, in my first few weeks on the job. They asked if I meant things like Starch scores.

You could say that this media impact quest traces back as far as 1926. That was the year Daniel Starch introduced his service, which applied only to print, the dominant media type of the time. By showing respondents copies of periodicals in face-to-face interviews, Starch interviewers obtained two measures of each display ad in each publication (a quick computational sketch follows the list):

  1. Noting Score: The percent of readers who say they noticed the ad while reading the issue.
  2. Read Most: The percent of readers who say they read most of the text of the specific ad in the specific publication.
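Mechanically, both measures reduce to simple percentages of respondents. Here is a minimal Python sketch of that arithmetic; the field names and responses are invented for illustration and are not Starch's actual schema.

```python
# Minimal sketch of Starch-style scoring: each score is just the share of
# respondents answering "yes" to a question about a specific ad.
responses = [
    {"noted": True,  "read_most": True},
    {"noted": True,  "read_most": False},
    {"noted": False, "read_most": False},
    {"noted": True,  "read_most": False},
]

n = len(responses)
noting_score = 100 * sum(r["noted"] for r in responses) / n
read_most = 100 * sum(r["read_most"] for r in responses) / n

print(f"Noting Score: {noting_score:.0f}%")  # 75%
print(f"Read Most: {read_most:.0f}%")        # 25%
```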

Today we might liken Starch to RMT, Adelaide, TVision and others whose work is to supply data that can be used as weights on audience data during media selection.

Starch, however, did not present that as a use case. He was focused entirely on measuring the efficacy of the campaign, which falls into post-evaluation rather than pre-buy in the mental framework of media.

I tried to use Starch scores for pre-buy at Grey, K&E and Interpublic by averaging all the ads together within publications over a sample of a year's issues. The first thing I noticed was that the publications did not differ all that much in their average Starch scores. The creative, not the context, was the source of most of the variance. There was some context effect too, but it was covariant with the ad's effect; for example, ads with a prestige slant worked like gangbusters in publications with a prestige image.
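That kind of check can be made concrete with a small variance decomposition. The sketch below, with invented scores on a deliberately balanced ad-by-publication grid, shows how one would apportion the spread in Starch scores between creative and context; it illustrates the analysis, not a reconstruction of the original Grey/K&E work.

```python
# Decompose the spread in Starch scores into an ad (creative) component and
# a publication (context) component, two-way ANOVA style, balanced design.
import statistics

# scores[ad][publication] -> illustrative Starch noting scores
scores = {
    "ad_A": {"pub_1": 42, "pub_2": 44, "pub_3": 40},
    "ad_B": {"pub_1": 25, "pub_2": 28, "pub_3": 24},
    "ad_C": {"pub_1": 55, "pub_2": 58, "pub_3": 53},
}

all_vals = [v for row in scores.values() for v in row.values()]
grand = statistics.mean(all_vals)

ad_means = {ad: statistics.mean(row.values()) for ad, row in scores.items()}
pub_means = {pub: statistics.mean(scores[ad][pub] for ad in scores)
             for pub in next(iter(scores.values()))}

# Sums of squares attributable to ads vs. publications
n_ads, n_pubs = len(ad_means), len(pub_means)
ss_total = sum((v - grand) ** 2 for v in all_vals)
ss_ads = n_pubs * sum((m - grand) ** 2 for m in ad_means.values())
ss_pubs = n_ads * sum((m - grand) ** 2 for m in pub_means.values())

print(f"share of variance from creative (ads): {ss_ads / ss_total:.0%}")   # ~98%
print(f"share of variance from context (pubs): {ss_pubs / ss_total:.0%}")  # ~2%
```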

At Grey, my passion work was studying media optimizers, including the tests that Grey had done. Grey had tested several of the earliest media optimizers and discovered that they essentially told you to buy the lowest CPM. For P&G, General Foods and our other clients, that would have meant advising them to shift all their money out of television, magazines and radio into outdoor.

The suppliers of these optimizers advised using -- guess what? -- media impact weights in the optimizers, which they claimed would make them totally useful. This of course fanned the flame of my interest in developing empirical impact weights. The optimizer suppliers suggested simply using subjective weights. Grey tried this and was alarmed to see how different the subjective weights were across the top media mavens in the shop. Again, my motivation to get to real empirical weights that could be validated was strengthened.
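The failure mode, and the proposed fix, are easy to see in a toy example. In the sketch below (all CPMs and weights invented), an optimizer ranking on raw CPM sends everything to outdoor, while dividing CPM by an impact weight per impression, i.e., ranking on cost per thousand impact-weighted impressions, changes the answer.

```python
# Toy illustration of the lowest-CPM failure mode. "Impact weight" here stands
# in for whatever empirical efficacy measure the optimizer is fed.
media = {
    # medium: (CPM in dollars, impact weight per impression)
    "television": (12.0, 1.00),
    "magazines":  (8.0,  0.60),
    "radio":      (5.0,  0.40),
    "outdoor":    (2.0,  0.10),
}

def effective_cpm(cpm: float, impact: float) -> float:
    """Cost per thousand impact-weighted impressions."""
    return cpm / impact

print(min(media, key=lambda m: media[m][0]))               # outdoor (raw CPM)
print(min(media, key=lambda m: effective_cpm(*media[m])))  # television (weighted)
```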

Sy Lieberman and Ted Dunn at K&E shared with me their database of 1,500 studies, mostly of day-after recall, across media types, covering the many different product and service categories of the clients' businesses. I attempted the same kind of derivation of impact weights. The same thing happened again, but more extremely. Now I saw that not only did the creative carry most of the weight, but these surveys also broke out demographic groups, and the patterns were not identical, or even close, across demo groups.

Now I realized that I couldn't determine impact weights for media vehicles without specifying which target I was talking about. The relationships also differed among product categories. So, in order to provide media weights that would stand up to validation against client sales results or any other measure, I would have to give each media vehicle/context different weights for every permutation of creative, audience and product.
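In data terms, that conclusion implies a weight table keyed by the full permutation rather than by the vehicle alone, something like this hypothetical sketch (all keys and values invented):

```python
# A weight keyed by the full (creative, audience, product, vehicle) tuple,
# reflecting that the same vehicle earns different weights per permutation.
from typing import Dict, Tuple

Key = Tuple[str, str, str, str]  # (creative, audience, product, vehicle)

impact_weights: Dict[Key, float] = {
    ("prestige_ad", "adults_35_54", "luxury_car", "prestige_magazine"): 1.40,
    ("prestige_ad", "adults_18_34", "luxury_car", "general_magazine"): 0.85,
}

def weight(creative: str, audience: str, product: str, vehicle: str) -> float:
    # Fall back to a neutral 1.0 when no empirical weight has been derived
    return impact_weights.get((creative, audience, product, vehicle), 1.0)
```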

Fast forward to today. In 1997, I figured out how to classify creative based on 265 empirically derived psychological content codes. Products, audiences and media vehicles had never been the problem to classify. I took time off from my quest for media impact weights to pursue the introduction of set-top-box big data and matching to sales at the privacy-protected same-household level, until selling TRA to TiVo in 2012, when I was able to return to tilting at the media impact windmill.

The big break came in January 2014, when Bill McKenna agreed to join me in forming RMT with four other seasoned pros. Today RMT is the only company delivering media impact weights customized to the specific ad or ads. The sales ROAS and brand adoption/equity lift validations have all been done by third parties: Nielsen NCS, 605, Simmons and Neustar. In the pre-buy, you can increase your positive ad campaign results by impact-weighting alternative media vehicles, and by identifying addressable audiences whose own media consumption behavior shows that they are psychologically resonant with a specific ad.

The two other companies I mentioned, Adelaide and TVision, also supply media impact weights into the pre-buy, and have validations of their own. The size of the prize is so much greater than the cost of media impact weights that it makes sense to test combining them. RMT and Adelaide, for example, are so dissimilar in approach that they would likely have additive effects. Adelaide does not start from the creative; it starts from the context, applies what it knows regardless of the creative, and it also works. In digital, Adelaide applies adjustment factors for things not subsumed by RMT (ad clutter, position on page, size of ad, plus secret-sauce factors), and vice versa, so they are pragmatically complementary.
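One simple way to test that complementarity, assuming the two providers capture independent effects, would be to multiply their normalized weights. This is only a sketch of a test design under that independence assumption, not either company's actual method:

```python
# Combining a creative-based weight with a context-based weight, assuming the
# two effects are independent and multiplicative (an assumption to validate).
def combined_weight(creative_weight: float, context_weight: float) -> float:
    """Combine a creative-based and a context-based impact weight."""
    return creative_weight * context_weight

# Illustrative numbers only
rmt_style = 1.20       # e.g., the ad resonates with this vehicle's audience
adelaide_style = 0.90  # e.g., cluttered placement drags attention down
print(combined_weight(rmt_style, adelaide_style))  # ≈ 1.08
```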

There is no rush visible in the marketplace to switch from opportunities to see (OTS) to OTS weighted by probable impact per impression. But how can that be? With billions of dollars being spent, why buy media based on naked OTS, when OTS can be weighted to be a provably better proxy for the outcome you want?

It's because there is too much going on to allocate time and attention to it (except for the best and the brightest): digital, social, mobile, streaming, fraud, walled gardens, the new battle for the OTS currency, cross-platform measurement, the economic squeeze of agencies, diversity, purpose, influencers, brand activation, AI, adtech, martech, experiential marketing -- a rich stew indeed.

The average person underestimates her or his capability to cause change in her or his own life, which becomes a self-fulfilling prophecy. The average marketing/media professional underestimates her or his capability to cause change in the industry, which also becomes self-fulfilling.

Encourage your troops and yourself to swing for the fences, and all these industry topics can be systematically optimized for your brand. Leave no child out of school and leave no media best practice unimplemented.


