Questions About TV Measurement Data Quality, Part 2

By Thought Leaders Archives

Aside from the issues raised in my last post on data quality, it is also crucial to understand the consumer data that is connected to any TV big data -- data which may come from a number of sources.  Customer data may seem the obvious source, but even a perfect database of customers raises questions that also apply to other data sources.  A list of those questions follows.

  1.  Are the data relevant to the marketing objective? If the objective is to increase awareness or positive attitudes to the brand, targeting existing customers may not be the best approach.
  2.  How many customers are successfully matched to the TV Big Data?  To what extent does the TV Big Data footprint coincide with the CRM data footprint?
  3. For the subset of customers that are matched, how accurate is the match?  If the CRM list has names and addresses, typical match rates would be expected to be about 75-80%.  Some records will be incorrectly matched (though this may be difficult to identify) and others will fail to match at all.  For matches based on online identity, the match levels will typically be lower.  (A rough sketch of how match and coverage rates might be tallied appears after this list.)
  4. Is the match based entirely on direct identity matching or is some lookalike modeling employed?  What is the extent, accuracy and validity of this modeling?
  5. What methods are applied to address gaps in matching and gaps in coverage of the TV big data? What is the accuracy and validity of these methods?
  6. How is personal identity in customer data resolved against device activity? While this can be a problem with digital data, digital activity is more often a personal rather than a shared experience -- but TV remains for many a joint activity.  An ad targeted at a customer may instead be viewed by someone else in the home, undermining all the effort and assumptions in the construction of the data and the campaign.  A default of household targeting can be one way to side-step this issue, but in the end people are consumers and purchase-decision makers, not homes.
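
To make questions 2 and 3 concrete, here is a minimal sketch of how a CRM-to-TV match rate and footprint overlap might be tallied.  It is an illustration only, not any vendor's actual pipeline: the hashed name-and-address match key, the field names and the sample records are all assumptions.

    # Minimal illustration (not any vendor's actual pipeline) of tallying a
    # CRM-to-TV match rate and footprint overlap.  The hashed name-and-address
    # match key and the sample records are assumptions.
    import hashlib

    def match_key(name: str, address: str) -> str:
        """Build a crude match key from name + address; real identity resolution
        is far more sophisticated (fuzzy matching, identity graphs, households)."""
        raw = f"{name.strip().lower()}|{address.strip().lower()}"
        return hashlib.sha256(raw.encode("utf-8")).hexdigest()

    def match_stats(crm_records, tv_footprint):
        """crm_records: (name, address) pairs from the advertiser's CRM.
        tv_footprint: match keys for households present in the TV big data set."""
        crm_keys = {match_key(name, addr) for name, addr in crm_records}
        matched = crm_keys & tv_footprint
        return {
            "crm_match_rate": len(matched) / len(crm_keys),             # share of the CRM list found in the TV data
            "tv_footprint_coverage": len(matched) / len(tv_footprint),  # share of the TV footprint covered by the CRM list
        }

    crm = [("Jane Doe", "1 Main St"), ("John Roe", "2 Oak Ave"), ("A. Smith", "3 Elm Rd")]
    tv = {match_key("Jane Doe", "1 Main St"), match_key("B. Jones", "9 Pine Ln")}
    print(match_stats(crm, tv))  # e.g. roughly one third of the CRM matched, half of the TV footprint covered

Note that counts like these cannot reveal incorrect matches (question 3); validating match accuracy requires a truth set or an independent audit.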

An alternative to TV Big Data targeting is to use existing currency panels matched with consumer data.  This approach solves the problem of representativeness, as TV measurement panels are designed to represent all TV homes and to measure all devices and people viewing within the home.  Furthermore, these measurements are typically audited and accredited for use (whether via the MRC in the U.S. or Joint Industry Committees elsewhere).  They are typically transparent, and the economics of transactions between buyers and sellers are well established.

Using existing TV currency panels is therefore a very effective way to plan, create and measure broad campaigns aimed at increasing brand awareness and improving brand attitudes.  Where this approach is clearly challenged is on sample size: building and maintaining a high-quality TV measurement panel is expensive and limits the number of participating homes.  Smaller, more niche targets will not yield data with sufficient statistical stability for meaningful measurement of performance.

There is an interesting paradox at play here.  A TV measurement panel of 35,000 homes and 100,000 people allows us to understand the viewing of 300 million Americans, while a sample of 10,000,000 smart TV sets allows us to understand how 10 million smart TVs are being used.  So less represents more and more represents less.  An advertiser investing in TV campaigns needs to decide what works better for a given objective.
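
As a rough, back-of-the-envelope illustration of that paradox, the sketch below compares the sampling precision of a probability panel with that of a huge smart-TV convenience sample.  The figures and the simple-random-sample assumption are illustrative only, not how any currency calculates its published error margins.

    # Back-of-the-envelope illustration of "less represents more": the panel's
    # error is small and projects to all TV homes, while the much larger smart-TV
    # sample is precise only about the sets it contains.  Figures are illustrative.
    import math

    def margin_of_error(p: float, n: int, z: float = 1.96) -> float:
        """Approximate 95% margin of error for a proportion p estimated from a
        simple random sample of size n (ignores weighting and design effects)."""
        return z * math.sqrt(p * (1.0 - p) / n)

    panel_homes = 35_000        # probability panel built to represent all TV homes
    smart_tv_sets = 10_000_000  # convenience sample of smart TVs, one slice of the market
    rating = 0.05               # a program reaching 5% of homes

    print(f"Panel: 5.0% +/- {margin_of_error(rating, panel_homes):.2%}, projectable to the full TV population")
    print(f"Smart TVs: 5.0% +/- {margin_of_error(rating, smart_tv_sets):.3%}, but only describing the sets measured")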

In some cases, the 10 million smart TVs may be the better approach, allowing detailed analysis of a small sector of the market which may give some insight into a campaign's overall effectiveness.  In other cases, a comprehensive measurement of people's campaign exposure across the country is required, and the full representation of the TV measurement panel is the right data source to use.  The paradox also extends to objectives: bigger targets and broader objectives are better served by the smaller samples, while smaller, more niche targets need the larger sample sizes.

In many cases, if you can afford it, using both approaches may give you the best assessment of the value of your ad investment, provided the differences between the data sets can be reconciled meaningfully into a coherent story.

For now, most TV ad campaigns are transacted using traditional measurement panels, but advertisers are exploring the new big data sources.  An ideal scenario from a pure measurement perspective would be a seamless, transparent integration of all available TV viewing data sources into a coherent measurement: one that covered big data from all sources, incorporated panel data to fill the geographical and technological gaps those sources leave, and provided individual-level viewing measures.  However, this is very unlikely to happen given the many legal and business issues that would arise were such an enterprise attempted.

In the absence of this research nirvana, companies should be open to all of the opportunities that are offered by the multitude of available data sources and be aware of the limitations of each.

A key takeaway is that there is no universal perfect data set that solves all the needs of advertisers, agencies and media owners.  Anyone working in this space should be aware that while bigger isn't always better, the proliferation of TV data presents opportunities that may enable more efficient and effective TV advertising with greater accountability in assessing effectiveness.

With that in mind, it's worth considering a checklist of criteria that could determine what data set best works for an advertiser's objective:

  1. Which of the following does the data set support: planning, activation, reporting, effectiveness measurement?
  2. How well does the data set represent the geography relevant to your target?
  3. Does the data set sufficiently represent all relevant population groups?
  4. How well is the defined consumer target matched to the TV data set?
  5. Who and what are included in and excluded from the data set, whether due to technology, data ownership or co-operation in the measurement?
  6. How well do the measured devices in the data set coincide with how the target watches TV?
  7. Are the measurement units (e.g. devices, households) sufficient for the advertising objective?
  8. Is the sample size sufficient?
  9. Are the data consistent over time?

At the time of writing, the industry has been discussing these issues through the lens of a proposed "data quality label" to be applied to data sets, similar to the nutrition labels found on food packaging.  These labels would give an at-a-glance view of the key aspects of any data set to help potential data users make more informed decisions.  This would likely be a preliminary step in data assessment with MRC auditing and accreditation still being the ultimate seal of approval for databases used in advertising transactions.  It will be interesting to watch the progress of this initiative.
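
To illustrate the idea, and only as a hypothetical sketch rather than a proposed standard or an MRC specification, such a label might be represented as a simple record that mirrors the checklist above; all field names here are assumptions.

    # Hypothetical sketch of what a "data quality label" record might capture,
    # loosely mirroring the checklist above.  Field names are assumptions, not
    # a proposed industry standard or an MRC specification.
    from dataclasses import dataclass, field

    @dataclass
    class DataQualityLabel:
        dataset_name: str
        use_cases: list[str]              # planning, activation, reporting, effectiveness
        geographic_coverage: str          # e.g. "national", "top 50 DMAs"
        population_representation: str    # which population groups are covered, and how well
        devices_measured: list[str]       # e.g. smart TVs, set-top boxes, streaming devices
        measurement_unit: str             # device, household or person level
        sample_size: int
        known_exclusions: list[str] = field(default_factory=list)
        consistency_over_time: str = "unknown"
        accreditation: str = "none"       # e.g. MRC-accredited, JIC-audited

    label = DataQualityLabel(
        dataset_name="Example ACR feed",
        use_cases=["planning", "reporting"],
        geographic_coverage="national, skewed toward broadband homes",
        population_representation="under-represents over-the-air-only homes",
        devices_measured=["smart TVs"],
        measurement_unit="device",
        sample_size=10_000_000,
        known_exclusions=["non-smart sets", "out-of-home viewing"],
    )
    print(label)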

Our approach to data at clypd is to enable whatever data sets our clients deem appropriate, operating within contractual and privacy constraints.  We enable advanced TV planning and buying using both traditional TV measurement and TV Big Data sources, with consumer data matched to either source.  When activating data for our clients, we are mindful of the limitations that any data set may have and are happy to engage in discussion regarding the pros and cons, whether through single-client conversations or through the Advanced Targets Standards Group, an industry group that has laid out guidelines for linear advanced audience deals, including data quality considerations.

As the industry evolves, it will be interesting to see how the various alternative data sets develop and whether they will co-exist in an open source way, or whether more walls will be built around data with advertising measurement becoming more fragmented and shadowy.

Given what we have seen with digital, buyers should beware of any lack of transparency, so any discussion about data quality can only be a good thing.


