How to Control Frequency for Maximum Viewer Enjoyment of Your Advertising

Erwin Ephron once famously referred to it as crabgrass, but then forgave its existence by elevating it into a transformed state: recency. Of course, I speak of frequency.

Advertisers suddenly became sensitized to the problem of excess frequency circa 2018, after both Nielsen and Kantar published studies indicating that excess frequency is a major cause of consumers turning against specific brands. The problem had existed for decades, but digital had exacerbated it dramatically. Today CTV is also guilty of adding to the problem, often showing the same ad again and again in the same program, sometimes back-to-back.

One way that this problem is being solved today is by the shift to Creators. However, this very effective form is not yet a medium that can, by itself, provide high reach, even with a bevy of Creators. In fact, Nielsen ONE shows that the way to get high reach is to disperse across as many media types as possible, and within each media type across as many different publishers and vehicles as possible, with roughly 30% of impressions in linear, 30% in streaming, 30% in mobile, and 10% in computer. My consulting relationship with Nielsen enabled me to put together a series of meta-analyses aimed primarily at maximizing reach while minimizing spend. The whole series is now in one place, which is here.
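As a minimal sketch of the dispersion pattern just described, here is how an impression budget would split under a 30/30/30/10 mix. The function name and the total impression count are illustrative placeholders, not figures from the studies:

```python
# Roughly 30/30/30/10 dispersion across media types, per the pattern
# described above. The 100M total is a hypothetical budget.
MIX = {"linear": 0.30, "streaming": 0.30, "mobile": 0.30, "computer": 0.10}

def split_impressions(total: int, mix: dict[str, float]) -> dict[str, int]:
    """Allocate a total impression budget according to the mix shares."""
    return {medium: round(total * share) for medium, share in mix.items()}

plan = split_impressions(100_000_000, MIX)
print(plan)
# {'linear': 30000000, 'streaming': 30000000, 'mobile': 30000000, 'computer': 10000000}
```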

One of the findings of this study has been that linear is still the reach steamroller, and that there is a tendency nowadays to use too little linear, winding up with extremely low reach levels compared with the past and compared with campaigns dispersed as described above.

Today, let’s turn to frequency.

Let’s start by looking at a campaign with very high spend as indicated by its GRP level: about 1000 GRP per week for about 14 weeks, totaling 14,515 GRP. This campaign went too far in the linear direction: 99% of the impressions were in linear. Nowadays that is a very rare pattern; typically the errors are in the opposite direction. The reach was 67.1% (P2+). Average frequency was 216, slightly more than 2 impressions per person per day. 42.5% of the population was reached 20+ times, while 10.8% was reached only 1-3 times in the 14 weeks, which might be considered insufficient frequency. Reach in what might be considered the optimal frequency range for the 14-week period, 4-19 exposures, constituted only 13.8% of the population.
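The arithmetic behind these figures follows the standard identity GRP = reach × average frequency. A quick sketch, using only the numbers quoted above:

```python
# Verify the quoted figures via GRP = reach (%) x average frequency.
def average_frequency(grp: float, reach_pct: float) -> float:
    """Average frequency among those reached: GRP divided by reach (in %)."""
    return grp / reach_pct

grp = 14_515   # total GRP over the flight
reach = 67.1   # P2+ reach, percent
weeks = 14

freq = average_frequency(grp, reach)
per_day = freq / (weeks * 7)

print(f"average frequency ~ {freq:.0f}")       # ~ 216
print(f"per person per day ~ {per_day:.1f}")   # ~ 2.2
```

The same identity checks out for the other campaigns discussed below.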

There might have been good reasons for this campaign design, perhaps the introduction of a new product or a brand repositioning, or an aggressive move to increase market share. Also, although we are using persons 2+ for our own convenience in conducting some of these meta-analyses, if we knew the actual target and objectives, this unusually linear-heavy tonnage campaign might make complete sense.

Let’s look at a campaign with a pattern closer to the type we recommend. This second case is a campaign that ran for just under 12 weeks and used 1908 GRP. Impressions were 57.6% in linear, 17.0% in CTV, 23.1% in mobile, and 2.3% in computer. Reach was 88.6% (P2+). Something like 13% of the GRPs of the prior campaign, but 32% more reach. Average frequency was 22, just under 2 exposures per week. 33.0% of the population was reached 20+ times, 38.6% was reached 4-19 times, and 17.1% was reached 1-3 times (totals to 88.7% due to rounding error).

In this second case, without knowing the real objectives or target, the numbers look more desirable, other than perhaps some concern over the 17.1% of the population who, having been reached only 1-3 times over the 12 weeks, might not have had a conscious exposure to the ad at all. Frankly, as I look over the ~100,000 campaigns in Nielsen ONE, it looks to me as if there is more of a problem of insufficient frequency than of excess frequency.

Here is a third case, which exemplifies the most common pattern I see across all of these campaigns: the tendency to under-use linear. This campaign lasted about 4 weeks and used 3529 GRP. It achieved a reach of 12.8% (P2+). The average frequency was 276, or nearly 10 exposures per day. The impressions were distributed 0% to linear, 98.1% to CTV, 1.6% to computer, and 0.3% to mobile. 7.0% of the population received 20+ frequency, 2.9% received 1-3 frequency, and 2.8% received 4-19 frequency. Note that the prior case, which used linear and had something closer to the recommended 30/30/30/10 balance, used a bit over half the GRP and delivered just under 7X the reach.
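To make the efficiency gap concrete, the three cases can be compared on reach points delivered per 1,000 GRP. This is a sketch using only the figures quoted in the text; the campaign labels are my own shorthand:

```python
# Reach efficiency (reach points per 1,000 GRP) for the three cases above.
campaigns = {
    "linear-heavy (99% linear)": {"grp": 14_515, "reach": 67.1},
    "dispersed (~58% linear)":   {"grp": 1_908,  "reach": 88.6},
    "CTV-heavy (0% linear)":     {"grp": 3_529,  "reach": 12.8},
}

for name, c in campaigns.items():
    eff = c["reach"] / (c["grp"] / 1_000)
    print(f"{name}: {eff:.1f} reach points per 1,000 GRP")
# The dispersed campaign is roughly an order of magnitude more
# reach-efficient than either extreme.
```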

This is just the beginning of our studies into frequency control, so stay tuned for more case studies and learnings. Turning for the moment away from the case study approach, let’s explore a couple of new ideas which a few practitioners are already putting to use today to control for both insufficient frequency and excessive frequency at the same time.

One way to do this is to use addressable media to add frequency to the people who have gotten too little, while also adding reach by bringing in the people who have not been reached even once. Ampersand is already doing this, focusing on reach extension and capable of also filling in insufficient frequency if specifically requested by the client/agency.

In a related development, also focused on reach rather than frequency, CIMM and GoAddressable published a report put together by Howard Shimmel, Jim Spaeth, and Alice Sylvester. It demonstrates that the most cost-efficient way to maximize reach is to buy the first 40% reach of your target in linear, then switch to addressable TV to reach the people linear missed, targeting the unreached specifically based on household-level knowledge from the linear set top box data. And anything that maximizes reach automatically flattens the frequency distribution, tending to alleviate both insufficient and excessive frequency, although not totally solving those frequency problems.
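The two-phase logic of that report can be sketched with a toy household universe. The household IDs here are hypothetical; a real implementation would identify the unreached households from linear set top box data:

```python
# Phase 1: buy linear until ~40% of the target universe is reached.
# Phase 2: aim addressable TV only at the households linear missed.
universe = {f"hh{i}" for i in range(100)}          # toy universe of 100 households
reached_by_linear = {f"hh{i}" for i in range(40)}  # the first ~40% of reach

addressable_targets = universe - reached_by_linear  # set difference = the unreached

print(len(reached_by_linear) / len(universe))  # 0.4
print(len(addressable_targets))                # 60 households left to reach
```

The design point is that the two phases never overlap, so every addressable dollar buys incremental reach rather than added frequency.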

“My” sales effect amplification advertising technology company RMT (Resonance Mapping Technology), in partnership with tvbeat, has created a venture called UltiMedia, which uses the above learnings plus RMT to maximize reach and efficient brand growth at the same time. The idea includes pooling of inventory so that broad frequency capping can be applied, because such capping is fairly useless if applied only within siloed platforms. Spectrum, with its programmatic, linear, streaming, VOD, and addressable inventory covering 25% of the U.S. population, is the base for the first proof-of-concept testing of UltiMedia, expected to begin this year, with a number of other media interested in coming along. Here is a white paper on the subject of using pooling and addressable as ways to control excess and insufficient frequency.
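A minimal sketch of why pooling matters for capping: a cap applied to a unified, cross-platform impression log catches households that per-platform (siloed) caps would miss. The household IDs, platform names, and the cap value below are all hypothetical:

```python
from collections import Counter

# Hypothetical pooled impression log: (household_id, platform) pairs.
log = [
    ("hh1", "linear"), ("hh1", "ctv"), ("hh1", "ctv"), ("hh1", "mobile"),
    ("hh2", "ctv"), ("hh2", "ctv"),
    ("hh3", "linear"),
]

CAP = 3  # illustrative global frequency cap per household

def should_serve(household: str, counts: Counter, cap: int = CAP) -> bool:
    """Serve only if the household's pooled (cross-platform) count is under the cap."""
    return counts[household] < cap

pooled = Counter(hh for hh, _ in log)  # cross-platform view
per_platform = Counter(log)            # siloed view, keyed by (hh, platform)

# hh1 has 4 pooled exposures but at most 2 on any single platform, so a
# per-platform cap of 3 would keep serving hh1 while the pooled cap stops it.
print(should_serve("hh1", pooled))  # False - pooled count is already 4
print(max(per_platform[("hh1", p)] for p in ("linear", "ctv", "mobile")))  # 2
```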

Building High-Performing Teams
Our new podcast explores the topic of Being in the In-between Phase, a liminal stage of growth when we can feel disoriented, lonely, or frustrated, and why it’s also a powerful indicator that real growth is happening. Rather than rushing to “fix” the discomfort, this episode invites listeners to slow down, listen more deeply to themselves, and learn how to navigate uncertainty with trust, patience, and self-awareness.
This month’s podcast length is ≈46 minutes. Watch the Video

Posted at MediaVillage through the Thought Leadership self-publishing platform.

The opinions expressed here are the author's views and do not necessarily represent the views of MediaVillage.org/MyersBizNet.

Bill Harvey

Bill Harvey, who won an Emmy® Award in 2022 for his invention of set top box data, has spent over 35 years leading the way in media research with pioneer thinking in New Media, set top box data, optimizers, measurement standards, privacy standards, the A…