The Future of Audience Measurement: Signs of Breakdown in Traditional Systems -- Part 2

By In Terms of ROI Archives

This continues our report of comScore's Josh Chasin's interview of me at the Advertising Research Foundation's (ARF's) Audience Measurement Symposium, June 11-13. Josh's first question asked what additional use cases the market is expecting now; my responses are in the prior post.

Again, to put these remarks in context, I have no business relationship with comScore; they merely asked to interview me at the ARF June conference. I am addressing Josh's questions from the point of view of helping the industry clarify what its audience measurement needs for screen media will be in the immediate future.

Given all that swell stuff you just said, where are current "traditional" audience measurement systems beginning to break?

For a number of years the first breakage was seen in implausible numbers, such as the apparent shrinkage in TV audiences when we all know there has been no shrinkage, just a migration to other ways of viewing for which there are no current measurement arrangements, at least not in the currency. This hurt the stock prices of some major media companies even though it was widely accepted to be artifactual, not real, audience loss.

Other signs of ice cracking on the lake:

  * a move to include fusion as part of the buying currency;
  * the ongoing, multi-decade delay in instituting passive people metering into the television currency;
  * the emphasis on Apollo-size samples for most of the ROI work of Nielsen Catalina despite the availability of big data;
  * quarterly cross-platform reports that are not clearly labelled as to where the data come from;
  * continuing criticism of set-top box data as being usable only when conformed to small panel data (which becomes lip service without allowing the big data to have any impact).

These are all signs of the need for change and for investing in retooling for the new realities.

The first generation of set-top box data might at this point be considered part of "traditional" audience measurement systems. Although solutions are available for including persons-level measurement, over-the-air homes, program-level DVR viewing data, and VOD/SVOD in a set-top box methodology, aside from selected STB data suppliers that have instituted DVR playback measurement (e.g., TiVo and others), these needed solutions have not yet been announced by any player.

The currency continues to show only small incursions into traditional viewing, whereas alternative suppliers (e.g., the late Symphony Advanced Media) show much larger proportions of the population engaged in streaming and other time-shifted means of viewing. Where is truth? I think everyone accepts the idea that the present system is broken. The real question is what the players respectively do about it.

What would you do?

In addition to the improvements I've just listed, including passive people metering and augmenting set-top box data to eradicate its shortfalls, the cleaning and harmonization of naturally occurring data, in conjunction with a high-response-rate, 100% passive panel, is the only credible answer. Both the big data and the panel have to be aligned on what they cover, for example in-home and out-of-home viewing, and the enumeration census (ID Graph) of people, devices and households on which they are both based needs to be updated constantly. However, the big data should not simply be conformed to the small panel, for two reasons.

  1. That neuters the value of the big data; you would get the same results with the panel alone.
  2. The panel (indeed, all panels and all surveys) is biased by nonresponse to exclude at least half the population, the people who are not research cooperators. Big data do not fall prey to this bias. Although some studies suggest there is nothing to worry about, other studies, such as the one I did years ago, show that the non-cooperators, if included, would change the ranking of networks and programs substantially (see the illustrative sketch after this list). Simmons has just done another such study and is currently tabulating the results.
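
To make the ranking point concrete, here is a purely hypothetical arithmetic sketch in Python. The program names and numbers are invented, not taken from my study or from Simmons'; it simply assumes non-cooperators are roughly half the population and view differently from cooperators, in which case a cooperator-only panel can rank programs in the wrong order.

```python
# Hypothetical illustration only: how excluding non-cooperators can flip rankings.
# Assumes cooperators and non-cooperators are each ~50% of the population,
# with different (invented) viewing levels.

cooperators     = {"Program A": 12.0, "Program B": 9.0}   # % viewing among cooperators
non_cooperators = {"Program A": 6.0,  "Program B": 14.0}  # % viewing among non-cooperators

panel_only = cooperators  # what a cooperator-only panel would report
full_population = {
    p: 0.5 * cooperators[p] + 0.5 * non_cooperators[p] for p in cooperators
}

print("Panel-only ranking:     ", sorted(panel_only, key=panel_only.get, reverse=True))
print("Full-population ranking:", sorted(full_population, key=full_population.get, reverse=True))
# The panel-only ranking puts Program A first; the full-population ranking puts Program B first.
```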

How then to put together the big data and the small data, if not by simply conforming the big to the small? Simmons has tested an innovative method they are calling Probability Calibration, in which big data are readjusted based on a high-response-rate probability sample. In the first test they integrated Symphony Advanced Media (SAM) data into the Simmons national probability sample using fusion enhanced by DriverTags™. The correlation of the SAM ratings to the currency increased from .90 to .96 as a result. This is the kind of innovation that is needed to integrate big data and probability sample data.
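
The mechanics of Simmons' Probability Calibration are not spelled out here, but the general idea of calibrating big data to a probability sample can be sketched in a few lines. The following is a minimal, hypothetical example (all cell names, networks and numbers are invented, and this is not Simmons' actual method): big-data viewing minutes are reweighted so that each demographic cell's share of viewing matches the share measured in the high-response-rate probability sample, and network estimates are then rebuilt from the reweighted records.

```python
from collections import defaultdict

# Hypothetical big-data viewing records: (demo_cell, network, minutes_viewed).
big_data = [
    ("A18-34", "NET1", 120), ("A18-34", "NET2", 60),
    ("A35-54", "NET1", 90),  ("A35-54", "NET2", 200),
]

# Hypothetical benchmark from the high-response-rate probability sample:
# each demo cell's share of total viewing minutes.
panel_cell_share = {"A18-34": 0.55, "A35-54": 0.45}

# 1. Total big-data minutes per cell, and overall.
cell_totals = defaultdict(float)
for cell, _, minutes in big_data:
    cell_totals[cell] += minutes
grand_total = sum(cell_totals.values())

# 2. Calibration weight per cell = panel share / big-data share.
weights = {
    cell: panel_cell_share[cell] / (cell_totals[cell] / grand_total)
    for cell in cell_totals
}

# 3. Rebuild network-level estimates from the reweighted records.
calibrated = defaultdict(float)
for cell, network, minutes in big_data:
    calibrated[network] += minutes * weights[cell]

for network, minutes in sorted(calibrated.items()):
    print(f"{network}: calibrated minutes = {minutes:.1f}")
```

In a real application the cells would be far finer (and built on the ID Graph), and the adjustment would typically be iterative across several dimensions rather than a single pass, but the principle is the same: the probability sample supplies the targets and the big data supply the granularity.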

No consideration of this topic, the future of audience measurement, would be complete without recognizing that the number of types of big data is increasing and due to increase further. Now, in addition to the US Census, surveys and panels, we have point-of-sale and CRM data, the Internet, set-top boxes, servers, apps, consumer electronics devices with ACR and whatnot, and Alexa: about 11 data streams. In the future, the Internet of Things, Near Field Communication (NFC), ATSC 3.0, IPTV, routers, ISP metering, ACR in many more devices, repurposed security cameras, printed circuits embedded in our bodies, and beacons will add to the data, which will need to be cleaned and harmonized to the degree that doing so increases the chances of making better decisions.

To be continued in Part 3.

