Future of Audience Measurement: What Would Bill Harvey Do?

By Bill Harvey

This continues my report on the interview of me conducted by comScore's Josh Chasin at the Advertising Research Foundation's (ARF's) Audience Measurement Symposium, June 11-13. Josh's first question asked what additional use cases the market is expecting now; my responses are in the first post. He then asked where the traditional systems are starting to break; my answers are in the second post. I have no affiliation with comScore; they merely asked if they could interview me as the focus of their ARF conference presentation.

So, given all that... if you were charged with zero-basing a multi-screen measurement service today, unencumbered by any legacy baggage... how would you go about doing it?

In answering that I'm going to focus on four things: sample size, sample and data quality, how you put the data together, and a Truth Standard. First, sample size.

Earlier in this series of posts I paraphrased NBCU's Linda Yaccarino's statement to Nielsen that VOD and SVOD cannot be measured acceptably by a panel and must instead be measured by big data. Those are the parts of the audience where fragmentation is greatest, and where the idea of using 50,000- or 100,000-home samples to measure individual program episodes is least plausible. In the on-demand part of the audience, across all shows used by advertisers, a typical rating for any given episode on any given day is 0.05. In a 50,000-household sample, that rating is 25 homes. That then has to be broken down by sex/age and advanced targets. Yes, an episode's audience can be cumulated over 35 days, or forever, and then the 50,000-household sample can be analyzed by subgroups; but that is not really acceptable for a $70 billion TV advertising currency. And the size of the total industry with research needs is much larger than the advertising part: the non-ad-supported part of the industry has a separate $270 billion in revenues and needs to know audience sizes, composition and trends too.
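To make the arithmetic concrete, here is a minimal back-of-the-envelope sketch in Python. The figures are the illustrative ones above (a 0.05 rating, a 50,000-home panel), not data from any actual service:

```python
def homes_in_sample(rating_pct: float, sample_size: int) -> float:
    """Homes a rating represents in a panel: a rating is a percentage of homes."""
    return sample_size * rating_pct / 100.0

# A typical on-demand episode rating of 0.05 in a 50,000-home panel:
print(homes_in_sample(0.05, 50_000))  # 25.0 homes, before any sex/age or advanced-target breaks
```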

To me, however, the illustration that drives the point home is that currency data must be usable to maximize reach, which means the findings on program-to-program duplication must yield statistically significant duplication differences. Now consider the other end of the spectrum from VOD and SVOD, where the audience is least fragmented: linear television. There, the average rating of the inventory bought by advertisers is 0.2, much larger than the average rating in on demand.

In order to measure with statistical confidence the difference between two shows that have high versus low duplication, one also needs a very large sample. In Nielsen's current 50,000-home national sample, a 0.2 rating equates to 100 homes. According to random probability (what used to be called the Sainsbury method), the duplication between two shows, each of which has a 0.2 rating, is such that 0.2% of Show A's audience will also be in Show B's audience. That would be a fifth of a home. But you can't have a fifth of a home; in the real world you can't get below a single home.
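As a sketch of that calculation (illustrative only): under purely random viewing, the expected duplication is simply the product of the two ratings, scaled to the panel size.

```python
def expected_random_duplication_homes(rating_a_pct: float, rating_b_pct: float,
                                      sample_size: int) -> float:
    """Expected duplicated homes under random (independent) viewing:
    P(A and B) = P(A) * P(B), scaled to the panel size."""
    p_a = rating_a_pct / 100.0
    p_b = rating_b_pct / 100.0
    return sample_size * p_a * p_b

# Two shows, each with a 0.2 rating, in a 50,000-home panel:
print(expected_random_duplication_homes(0.2, 0.2, 50_000))  # 0.2 homes -- a fifth of a home
```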

In other words, with 50,000 homes there is no way to do a good job, or any job beyond merely going through the motions, of maximizing reach by taking into account the real duplication patterns among shows. And that's for linear TV!

So sample size is crucially important, and by that I mean millions of homes. Otherwise, reach cannot truly be optimized.
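The same arithmetic shows why millions of homes changes the picture. The 5,000,000-home footprint below is a hypothetical stand-in for a big-data source, not a figure from the article:

```python
RATING = 0.2 / 100.0  # a 0.2 rating expressed as a proportion of homes

for panel in (50_000, 5_000_000):  # panel-scale sample vs. hypothetical big-data footprint
    dup_homes = panel * RATING * RATING  # expected duplicated homes at random
    print(f"{panel:>9,} homes -> {dup_homes:,.1f} duplicated homes expected at random")

# 50,000 homes yields 0.2 of a home; 5,000,000 yields 20 homes --
# enough to begin distinguishing above-random duplication from below-random.
```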

This is not just theoretical; the evidence can be seen at very practical levels. For example, when Dave Morgan asked me to consult for and evaluate Simulmedia, one of the first things I noticed was the dramatically higher reach Simulmedia achieves compared to the big media agencies. With, on average, 13% of the big agency's budget for the same advertiser, Simulmedia's average daily reach was 44% of the big agency's. This was because Simulmedia uses big data samples to investigate these duplication differences and maximize reach, while the big agencies were using panel data.

To be continued.


