Improving Decision Making: Accelerating Inevitable Revelations

In this series we have taken the position that the simple expedient of thinking through what the right process would be for making specific types of decisions can lead to enormous improvements in marketing efficacy and efficiency.

So far we have discussed how this applies to deciding how much to spend on marketing and to framing the assignment given to the media buyer. In this post we will apply the principle to the idea of being able to see further ahead and to bring those insights into today’s decision making. The example we shall focus on is the current situation in U.S. television and digital audience measurement.

Last year I ran a series of ten posts here predicting that competition would soon arise, probably from combinations of then-existing companies, and that the competition, like Nielsen itself, would hinge upon a panel, with big data hung around that panel like spokes on a wheel, or, to picture it better, like Project Blueprint. Project Blueprint, invented by ESPN’s Artie Bulgrin, used a small calibration panel within which all media were measured to correct the duplications among media estimated from larger panels and big data sources, in which pairs, triplets and quadruplets of media were measured along with the duplication patterns between those pairs and among those triplets and quadruplets. Thus, although the real-data calibration panel was small, the overall measurement base was much larger, and by imposing the real-data duplication patterns on the rest of the system it was hoped that the final results would be as accurate as could be gotten with today’s technologies and research budgets.
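
To make the calibration idea concrete, here is a minimal Python sketch of a Blueprint-style pairwise duplication correction. The function names and numbers are my own illustrative assumptions, not the actual Project Blueprint specification; the point is simply that the small all-media panel contributes the duplication factor, while the larger sources contribute the reach levels.

```python
# Toy sketch of a Blueprint-style duplication correction (illustrative,
# not the actual Project Blueprint math). Reach values are probabilities
# in [0, 1] for each medium.

def duplication_factor(panel_both: float, panel_a: float, panel_b: float) -> float:
    """Ratio of observed to random (independent) duplication,
    measured in the small calibration panel where all media are metered."""
    random_dup = panel_a * panel_b
    return panel_both / random_dup if random_dup else 1.0

def combined_reach(big_a: float, big_b: float, k: float) -> float:
    """Unduplicated two-media reach: big-data reach estimates for each
    medium, with the calibration panel's duplication factor k imposed."""
    dup = min(k * big_a * big_b, min(big_a, big_b))  # cap at the logical maximum
    return big_a + big_b - dup

# Calibration panel: 40% use medium A, 25% use medium B, 12% use both.
k = duplication_factor(panel_both=0.12, panel_a=0.40, panel_b=0.25)  # k = 1.2

# Larger sources supply the reach levels; the panel supplies k.
print(round(combined_reach(big_a=0.45, big_b=0.30, k=k), 3))  # 0.588
```

In the real systems the same idea extends to triplets and quadruplets of media, but the two-media case shows the mechanism.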

Nielsen reacted to TRA (today known as TiVo Research) with Nielsen Catalina, a method that combined many other data pools with the Nielsen National Peoplemeter panel as the calibration panel, in a Project Blueprint way. TRA/TiVo Research today continues to use a very large calibration panel (around 2,000,000 homes, with many more devices measured) without mixing in other islands of partial data, i.e. only using a home for which it has all the data. I predicted that the Blueprint/Nielsen approach would probably be used by the new competitor (possibly comScore/Rentrak) that would arise, so that there would be two large companies both using highly similar methods.

That is exactly what has happened. comScore has acquired Rentrak and will be offering a Project Blueprint-type approach, facing off against Nielsen’s own.

Since most people in the industry are not researchers, the nuances of the methods used sink to the bottom of the buzz pile, and most of the feedback being given to the two competitors has to do with keeping their prices down. The most serious researchers have risen to the top of the heap and run departments where they too must be concerned about price above all else, since in the past half century Nielsen has driven up the cost of ratings poker to a degree that would have seemed unthinkable even 20 years ago. And now, without showing all its cards, Nielsen is hinting that the future will be even more expensive. Why? Because Nielsen will have to continue increasing sample size (partially by re-using local data in national reports, partially by fusion, and partially by real sample size increases) and will need to make deals with hardware companies that control pools of data, such as Roku and many others. Nielsen continues to hope to be able to get set top box data, VOD data and network digital server data at a lower cost than it will charge for them once it has massaged them into the ultimate cake.

Two points are being made by most of the industry players to both Nielsen and comScore: you had better have the lower price, and you had better be able to prove you have the better service, which is where the MRC (Media Rating Council) will play a crucial role. But the sticky little research details are not yet being discussed. This is what we mean by the use of decision systems that allow factors which will later be crucial to stay on the sidelines, when bringing them forward would greatly accelerate inevitable revelations.

What are those research details that ought to be brought to the fore ASAP?

  • What is the nature of comScore’s Total Home sample? Is it an area probability sample? How does its response rate compare with Nielsen’s? The current comScore calibration sample is only around 400 homes, but that is enough to get a clear indication of the kind of response rate it will get. MRC indicates that Nielsen’s national intab sample achieves 50%, up from an estimated 24% last century.
  • Why doesn’t Total Home, which covers in-home usage only, also measure out-of-home usage, which it could easily do by licensing the proven Symphony Advanced Media downloadable app technology?
  • As the industry moves from eyeballs to ROI, are small calibration panels really the ultimate method that will sustain the industry, and why isn’t the all-real-data large panel approach (like TRA) being considered?
  • Because we have been trained to believe in the primacy of probability samples, we are willing to endure the fact that at least half the country does not participate in surveys/panels, even though the psychological characteristics that cause them not to participate must be related to the kinds of programs, sites, apps and brands they use, and the kinds of ads that move them to buy. These people do, however, get measured by big data. So we say, with the comfort that it is orthodoxy, “Only a probability sample gives us the assurance of representativeness,” which is true except that the half of the population that is not included (research non-cooperators) is therefore not represented, as the toy simulation after this list illustrates. To which the universal reaction is to look for some soft sand in which to bury the head.
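
To show how large that blind spot can be, here is a toy Python simulation with invented numbers (only the roughly 50% cooperation level echoes the MRC figure cited above): even a perfectly executed probability sample reproduces the cooperators, not the population, whenever viewing behavior differs between the two groups.

```python
# Toy simulation of non-response bias (all numbers invented for
# illustration). Even a perfect probability sample can only enroll
# the people who agree to cooperate.

import random

random.seed(7)

POP = 100_000
coop_rate = 0.50             # roughly the intab response level cited above
p_view_cooperator = 0.20     # assumed viewing rate among cooperators
p_view_noncooperator = 0.10  # assumed viewing rate among non-cooperators

population = []
for _ in range(POP):
    cooperates = random.random() < coop_rate
    p = p_view_cooperator if cooperates else p_view_noncooperator
    population.append((cooperates, random.random() < p))

true_rating = sum(views for _, views in population) / POP

# Draw a clean probability sample, but only cooperators join the panel.
drawn = random.sample(population, 5_000)
panel = [views for cooperates, views in drawn if cooperates]
panel_rating = sum(panel) / len(panel)

print(f"true rating:  {true_rating:.3f}")   # about 0.150
print(f"panel rating: {panel_rating:.3f}")  # about 0.200, biased upward
```

No re-weighting of the cooperators can recover behavior they simply do not share with the non-cooperators, whereas big data sources, whatever their other flaws, do observe that other half.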

Years from now this will all be obvious and nobody will remember that the point was made years earlier. Better thinking about how to set up decision processes can prevent such absurdity.


Bill Harvey
