Allow me to start with a personal declaration: As a media researcher who has tracked the word of mouth generated by many advertising campaigns, I am excited by the impending launch of social TV ratings. Slated to come on stream in Q4 this year, social TV ratings should catapult the concept of TV audience ratings from a simple viewer measurement to a consumer response metric. Social TV ratings should provide a standardized quantification of consumers’ social media involvement with a TV program. Nonetheless, in case we all become too quickly enthralled by the prospect of social TV ratings, let me also add “Not so fast!” Why do I say this?
Nielsen and Twitter announced this important initiative in December 2012. Billed as the Nielsen Twitter TV Rating (NTTR), the NTTR is a revolutionary new TV ratings statistic, primarily based on the Tweets each TV show receives.
The NTTR brings the promise of a qualitative, behavioral measure to overlay on standard TV ratings. Currently, in order to determine their total TV presence, some media buyers simply add up their TV ratings across the shows in which their advertising appeared. In effect, all they are doing is weighting their total audience delivery. Indeed, a campaign’s total TV rating achievement is often referred to as its TV weight.
I think the issue may be way more complex and nuanced than just comparing a campaign’s total TV ratings weight with its social TV ratings. Typically, there would seem to be a simple three-step process:
Step 1: The advertiser or agency decides to focus on social TV ratings, invest in the metric and maximize the campaign’s delivery of this measure.
Step 2: For each show on the campaign’s schedule, the media buyer then divides the show’s social TV rating by its audience rating to derive a conversion index (base 100). Any program with an index above 100 is considered to be in positive territory.
Step 3: The ultimate goal would appear to be simple: Evaluate all TV shows in the campaign and max the aggregate social ratings’ conversion index.
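The three steps above amount to simple arithmetic. Here is a minimal sketch in Python; the show names and rating figures are invented purely for illustration, and the calculation assumes the straightforward divide-and-index approach described in Step 2:

```python
# Hypothetical illustration of the three-step process.
# All show names and ratings below are invented, not real Nielsen data.

def conversion_index(social_tv_rating: float, audience_rating: float) -> float:
    """Social TV rating delivery relative to audience rating, indexed to 100."""
    return social_tv_rating / audience_rating * 100

schedule = {
    # show: (social TV rating, audience rating) -- illustrative numbers only
    "Show A": (4.2, 3.5),
    "Show B": (1.8, 2.4),
}

# Step 2: per-show conversion index
for show, (social, audience) in schedule.items():
    idx = conversion_index(social, audience)
    status = "positive" if idx > 100 else "negative"
    print(f"{show}: index {idx:.0f} ({status})")

# Step 3: maximize the aggregate index across the whole schedule
total_social = sum(s for s, _ in schedule.values())
total_audience = sum(a for _, a in schedule.values())
print(f"Campaign index: {conversion_index(total_social, total_audience):.0f}")
```

As the rest of this piece argues, the trouble is not with the arithmetic but with the assumption behind it: that social response adds up across shows the way audience delivery does.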
For any advertiser or agency hoping that adding up social TV ratings may be akin to summing up audience TV ratings, I offer the following two observations to consider:
First, take the Super Bowl – the granddaddy of all TV shows generating word of mouth. Counterintuitively, not all advertisers who appear in this game see an actual uplift in their brands’ word of mouth. Indeed, according to Keller Fay, the leading word of mouth researcher, about 10%-15% of advertisers in the Super Bowl see a decline in their word of mouth in the week after the game. This unexpected outcome would not have been anticipated by the above process.
Secondly, in late 2010, the Word of Mouth Marketing Association honored me with their Gold Award for Research for constructing a multiple regression analysis which demonstrated the connection between Sony Electronics’ advertising and their subsequent word of mouth. This connection was not a straightforward relationship. One of my key findings was that ad-generated word of mouth depended not only on Sony’s media weight, but also on their share of voice. In other words, ad-generated word of mouth was seen to be competitive.
Estimating a campaign’s word of mouth is not like calculating ad awareness, which is largely a function of the total ad weight and its weekly reach. A more complex relationship may exist, which can make word of mouth modeling more like sales modeling.
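The shape of that regression finding can be sketched in a few lines. To be clear, the coefficients and functional form below are invented for illustration only; they are not Sony’s actual model or my award-winning analysis. The point they demonstrate is simply that in a model where word of mouth depends on share of voice as well as media weight, the same spend can produce different outcomes depending on what competitors are doing:

```python
# Toy linear model, invented for illustration -- not the actual Sony regression.
# b0, b1, b2 are arbitrary coefficients chosen only to show the mechanism.

def predicted_wom(media_weight: float, share_of_voice: float,
                  b0: float = 10.0, b1: float = 0.05, b2: float = 0.4) -> float:
    """Word of mouth as a function of both media weight and share of voice."""
    return b0 + b1 * media_weight + b2 * share_of_voice

# Identical media weight, different competitive contexts:
quiet_market = predicted_wom(media_weight=500, share_of_voice=40)  # rivals quiet
noisy_market = predicted_wom(media_weight=500, share_of_voice=15)  # rivals outspending

print(quiet_market, noisy_market)  # same spend, different word-of-mouth outcomes
```

A model of ad awareness, by contrast, could often get by with media weight and reach alone; it is the share-of-voice term that makes word of mouth behave competitively, more like sales.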
On the upside, UM has undertaken a number of special analyses that frequently demonstrate a strong relationship between a sponsor’s TV show and the sponsor’s consequent word of mouth. In these cases, the sponsor’s recipe for success is clear: the brand’s persona must credibly mesh with the program’s tone.
For example, a perceived older brand would almost always need to exude an evident sense of humor, or even young-at-heart irreverence, if it were being integrated into The Colbert Report.
The upcoming release of social TV ratings justifiably enthuses many of us in the ad business. Yet in order for social TV ratings truly to succeed, its advertising impact will need to be verified and validated. To their credit, both Twitter and Nielsen have an impressive array of ad effectiveness experts on their respective benches. Make no mistake, if Twitter and Nielsen can get beyond the issues I’ve outlined here and categorically prove the effectiveness of social TV ratings, it will upend the TV airtime market as we know it.
Graeme Hutton is SVP, Group Partner, Research at UM. Graeme came from the UK to the US in the late 90s, and his only regret is that he did not do it sooner! Graeme joined UM in 2006. At UM, he has engineered and activated a broad-based set of integrated communications and consumer insight tools which dovetail into the agency’s burgeoning arsenal of media research products and systems. His key clients include Sony Pictures Entertainment and Sony Electronics. Graeme can be reached at Graeme.email@example.com
Read all Graeme's MediaBizBloggers commentaries at Curious Thoughts from Curious Minds.
The opinions and points of view expressed in this commentary are exclusively the views of the author and do not necessarily represent the views of MediaBizBloggers.com management or associated bloggers. MediaBizBloggers is an open thought leadership platform and readers may share their comments and opinions in response to all commentaries.