Statistics, Survivor, and Media Research - Jonathan Steuer-TiVo

Nearly every week, Jeff Probst reminds me of the tenuous relationship between truth and statistics:

"Losers, I'll see you at Tribal Council tonight. Remember, the last person left standing wins immunity and a one-in-seven shot at winning this game."

Thing is, that's not true. Or rather, it's only partly true: the part about winning the challenge resulting in freedom from elimination at that evening's Tribal Council is accurate; it's the part about the one-in-whatever chance of winning that isn't.

Of course, if Survivor were a fair game of chance (where "fair" has its technical meaning: every outcome is equally probable), that part would be true too. But that's not how the game works: Survivor isn't a math problem but rather a tangled knot of complex, flawed, ornery human beings. Early in the season, contestants vote out one of their own every week, strategically trying to eliminate either the weakest links or the biggest potential threats. The ultimate result is decided by a jury of contestant-peers, who vote to determine which of the last few remaining competitors most deserves to win. That's not a game of chance at all, let alone a "fair" one.
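For the statistically inclined, here's a toy Monte Carlo sketch (invented for this post, and obviously not a model of actual Survivor gameplay) that makes the point: with equal elimination odds every round, each of seven players wins about one time in seven; tilt the vote even a little and the "one-in-seven shot" evaporates.

```python
import random

def win_probabilities(weights, trials=100_000):
    """Estimate each player's chance of being the last one standing when
    the player voted out each round is drawn according to `weights`
    (higher weight = more likely to be eliminated)."""
    n = len(weights)
    wins = [0] * n
    for _ in range(trials):
        remaining = list(range(n))
        while len(remaining) > 1:
            out = random.choices(remaining,
                                 weights=[weights[i] for i in remaining])[0]
            remaining.remove(out)
        wins[remaining[0]] += 1
    return [w / trials for w in wins]

# A "fair" game: every elimination equally likely. Each of 7 players
# wins about 1/7 (~0.143) of the time.
print(win_probabilities([1] * 7))

# An unfair game: player 0 is a strategic favorite who rarely gets
# voted out, so their odds are far better than one in seven.
print(win_probabilities([0.2, 1, 1, 1, 1, 1, 1]))
```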

"Well duh!" you say.Survivor is reality TV, not reality-reality. I understand there's a difference. But using sloppy language about statistics is a serious issue, and our real-world sloppiness about data frequently leads to inaccuracies that are as inaccurate as Jeff's comments about the probabilities of Survivors winning a million bucks – and as potentially costly.

"Statistics!" you say, "Yuck! Shut up with your fancy statistics stories! Jeff just means that one of those seven people will win!" But here's the thing: precision does matter. Sometimes it matters a lot: media decisions involving programs that cost tens or hundreds of millions of dollars can depend on fractions of rating points – so it's important to remember where our data (like our food) come from and how they are prepared. (And yes, data is a plural noun. Didn't I just tell you precision about language matters?)

I'm not going to embark on a geeky rant about misuse of statistical data in media research… mostly because I'll be sure to lose the few of you who are still reading if I do. That's a rant for another day. But here are a few basic filters to apply when looking at statistical data in the context of media research, each with an example and a quick sketch:

1. Acknowledge limits. We don't know what we don't measure. Even the most rigorous measurement doesn't tell you what happened before or after that measurement was taken. Making assumptions about what happens outside the range of what you are measuring might be reasonable… or it might not. Example: A current TiVo research project is examining the relationship between use of "over-the-top" TV technologies (Netflix, HuluPlus, etc.) and other TV viewing. To distinguish between viewers and non-viewers, we asked TiVo panelists in January 2012 whether they used these technologies in the past month and, if so, how much they used them. As much as I'd like to compare those answers with TV viewing in 2010 and 2011, I can't, because the question explicitly covered December 2011 – January 2012, and it's not reasonable to assume that viewing via OTT technologies was the same in January 2012 as it was in 2011. That's something we'd have to ask specifically about. (So we will!)
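To make that discipline concrete, here's a hypothetical guard (the window matches the example above, but the code itself is purely illustrative) that refuses to join survey answers to viewing data from outside the period the question actually covered:

```python
from datetime import date

# Hypothetical reference window for the OTT question: "in the past
# month", asked of panelists in January 2012.
SURVEY_WINDOW = (date(2011, 12, 1), date(2012, 1, 31))

def covered_by_survey(period_start, period_end, window=SURVEY_WINDOW):
    """True only if the analysis period lies inside the window the survey
    question actually covered; anything outside is an extrapolation."""
    return window[0] <= period_start and period_end <= window[1]

print(covered_by_survey(date(2012, 1, 1), date(2012, 1, 31)))   # True: inside the window
print(covered_by_survey(date(2010, 1, 1), date(2010, 12, 31)))  # False: 2010 wasn't measured
```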

2. Know what's in the data sausage. We sometimes forget about (or consciously choose to ignore) the large number of assumptions that are built into the way data are collected. It's important to remember where data come from before placing too much weight on the results! Example: Detractors of TV set-top box data are constantly complaining about the "TV on / box off" problem, where the STB is on, and therefore "tuned" to a channel, but nobody is watching. Those who have spent any significant effort looking at STB data know that it's not difficult to identify this effect and reduce it by applying duration-capping rules: ignoring "tuning" events longer than a certain length. STB data analysts typically disclose how they apply this process and the magnitude of the effect, so users can decide for themselves how much weight to give the results. Conversely, when is the last time you remember seeing data about compliance rates among Nielsen households, specifically independent audits of how reliably members of Nielsen homes press the appropriate buttons to show whether they are in or out of the room when the TV is on?
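For the curious, here's a minimal sketch of what such a capping rule might look like; the four-hour cap below is an assumption for illustration, not anyone's published methodology:

```python
MAX_PLAUSIBLE_MINUTES = 4 * 60  # assumed 4-hour cap, for illustration only

def cap_tuning_events(events, cap=MAX_PLAUSIBLE_MINUTES):
    """events: (channel, duration_in_minutes) tuples from an STB log.
    Drops over-cap 'tuning' events (likely box-on/TV-off) and reports
    how many minutes the cap removed, so the effect can be disclosed."""
    kept, removed_minutes = [], 0
    for channel, minutes in events:
        if minutes <= cap:
            kept.append((channel, minutes))
        else:
            removed_minutes += minutes
    return kept, removed_minutes

log = [("ESPN", 95), ("CNN", 30), ("HBO", 16 * 60)]  # 16 hours: almost surely nobody watching
kept, removed = cap_tuning_events(log)
print(kept)                         # [('ESPN', 95), ('CNN', 30)]
print(f"{removed} minutes capped")  # 960 minutes capped
```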

3. Mind your projections! It's a very rare occasion when it's feasible to collect complete census data (that is, to measure every single member of a group), and so almost all real-world media research uses sampling techniques to select an appropriately representative sample, or uses statistical calculations to weight a sample so it projects to a particular population. In using any research data, it's important to remember where the data you're looking at come from, and to make sure the conclusions you're drawing are reasonable given the sample and methodology. Example: At TiVo, we're constantly being reminded how "TiVo owners are weird" because these intellectual, moneyed uber-geeks went out and purchased and installed DVRs on their own, which must mean that our data are non-representative of the US population as a whole. Unfortunately (for me, anyway), this may be, to a degree, a reasonable criticism. More unfortunately, since TiVo has been so careful about maintaining our customers' privacy, our Stop||Watch sample is truly anonymous, and I have no idea what the demographic composition of the sample is. Without that information, it's impossible to weight our ratings data to project to the US population as a whole. As TiVo's partnerships with cable MSOs (such as our recently announced Comcast Xfinity partnership) continue to expand and as other aspects of our methodology evolve, we will eventually be able to project to a national sample. For now, though, we're focusing on what we do know how to do: provide a very high level of detail about how DVR users (who now represent about half of US households) use television.
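Here's the arithmetic we can't do without that demographic information: a toy post-stratification sketch with made-up numbers, where each sample cell is re-weighted so its share matches the target population's share.

```python
# Invented shares for three age cells: what's in the sample vs. what's
# in the target population (e.g., from the census).
sample_share = {"18-34": 0.45, "35-54": 0.40, "55+": 0.15}
population_share = {"18-34": 0.30, "35-54": 0.35, "55+": 0.35}

# Post-stratification weight for each cell: population share / sample share.
weights = {cell: population_share[cell] / sample_share[cell]
           for cell in sample_share}
print(weights)  # 18-34 down-weighted (~0.67), 55+ up-weighted (~2.33)

# A projected rating is then the population-share-weighted mean of the
# per-cell ratings measured in the panel (ratings here are invented too).
cell_rating = {"18-34": 2.1, "35-54": 3.4, "55+": 5.0}
projected = sum(population_share[c] * cell_rating[c] for c in cell_rating)
print(round(projected, 2))  # 3.57 -- uncomputable without knowing the cells
```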

4. Most media games aren't zero-sum. The media landscape isn't quite like the physical world: most decisions and activities are no longer simple tradeoffs of one experience for another, as they were back in the days when there were three national TV networks, a handful of independent broadcasters in most markets, and one or maybe two TV sets in each household. Since we can watch many different kinds of media (broadcast, cable, movies, videos, etc.) in many ways (live TV, DVR, computer, tablet, mobile phone, portable media player, etc.), it's not reasonable to assume that because someone started engaging in one media activity, they must have stopped doing something else. The media universe has become too complicated to think of TV measurement as a zero-sum game, but we haven't yet figured out how to measure it that way. Example: Trying to measure the impact of over-the-top technologies on TV viewing has been very challenging, because the rising popularity of alternative ways to watch "TV" has grown in parallel with many other kinds of media fragmentation. The debate about decreases in viewing of certain kids' cable channels (e.g., Nickelodeon) and the apparent correlation of this decline with the availability of similar programming on OTT services (e.g., Netflix) may signal trouble; it may also signal an increase in total viewing time of all similar content, as kids increasingly watch not just on TV but on tablets and computers and phones during times when mom or dad are using the TV to watch their shows. Research on media multitasking and time spent with media, like Ipsos MediaCT's Longitudinal Media eXperience (LMX) study, has shown steady increases in the length of the "media day" when simultaneous media usage (like using a computer while watching TV) is taken into account. Understanding complex problems like the evolution of media usage is difficult, and there is much room for additional research in this area. (A quick sketch of the measurement problem follows below.)
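Here's a toy illustration of that measurement problem: if you simply add up minutes across devices, simultaneous usage gets counted twice, while measuring the actual "media day" means merging overlapping sessions. (The numbers below are invented.)

```python
def merged_minutes(sessions):
    """Total clock time covered by possibly-overlapping (start, end)
    sessions, in minutes since midnight; overlaps counted once."""
    total, current_end = 0, None
    for start, end in sorted(sessions):
        if current_end is None or start > current_end:
            total += end - start        # disjoint session: count it all
            current_end = end
        elif end > current_end:
            total += end - current_end  # overlapping: count only the new part
            current_end = end
    return total

# TV from 8-10pm, tablet from 9-11pm (one hour of simultaneous use).
sessions = [(20 * 60, 22 * 60), (21 * 60, 23 * 60)]
print(sum(end - start for start, end in sessions))  # 240: naive sum double-counts
print(merged_minutes(sessions))                     # 180: the actual "media day"
```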

I could go on… and I may, in a future post. In the meantime, those who enjoy statistical nitpicking (millions of you, I'm sure) may revel in a great (and frequently updated) analysis of stats in the news: the non-profit Statistical Assessment Service (STATS) website at http://www.stats.org. STATS analyzes everything from whether having one drink can increase the probability of a car accident to whether a soda tax could really curb obesity to whether Kabul is actually safer for kids than New York City. For even more fun with media and statistics, you might want to take a look at some of the following sources I consulted while writing this post:

· Smith, Martha K.; Professor Emerita of Mathematics, University of Texas at Austin; "Common Misteaks Mistakes in Using Statistics: Spotting and Avoiding Them"

· Cue, Kerry; MathsPig blog; "Archive for the '10 BIG Media Maths Errors' Category"

Jonathan Steuer joined TiVo as Vice President, Audience Research & Measurement, where he leads TiVo's efforts to develop innovative new media research products. Jonathan can be reached at jsteuer@tivo.com.

Read all Jonathan's MediaBizBloggers commentaries at InteracTiVoty.


