In Terms of ROI: How Are We To Judge Research?

By Bill Harvey

Today we find ourselves at a crossroads. Technology has brought changes to the media and has now opened up new possibilities for the research that measures those media, and none too soon, because those same changes have made the media harder to measure.

Many overlapping industry groups have arisen to deal with the decisions that lie ahead. It is a time of great dialogue, a creative ferment out of which positive change will come.

At the same time, technology and cultural changes have already impacted the research we depend upon from day to day, and not in a positive way. Where we used to be able to get response rates as high as 90% from telephone coincidentals and 80% from surveys, answering machines have made coincidentals a thing of the past. The response rate to most surveys is now under 30%, and it is far lower still for Internet research and for any type of opt-in panel.

This has caused some researchers to conclude that there is no longer any such thing as a probability sample. Other researchers counter that there never really were any probability samples anyway, because non-response bias always distorted the sample away from being a probability sample; nothing has changed in essence, only in degree.

Some now say that the nomographs we use to estimate statistical error around a rating estimate should no longer be used, because they apply only to probability samples. Others say that jackknife replication of subsamples can still be applied to any sample, even out-and-out quota samples, to determine the variability within the data and thereby provide good measures of statistical error.
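
Since jackknife replication may be unfamiliar to some readers, here is a minimal sketch in Python of a delete-one-group jackknife applied to a program rating. The function name, the replicate-group scheme, and the simulated data are all illustrative assumptions, not any rating service's actual procedure.

```python
import numpy as np

def jackknife_rating_se(viewed, groups):
    """Delete-one-group jackknife estimate of the standard error of a rating.

    viewed : 0/1 indicator per respondent (1 = viewed the program)
    groups : replicate-group label per respondent (random subsamples)
    """
    viewed = np.asarray(viewed, dtype=float)
    groups = np.asarray(groups)
    labels = np.unique(groups)
    k = len(labels)

    full_rating = viewed.mean()
    # Recompute the rating k times, each time leaving one replicate group out
    replicates = np.array([viewed[groups != g].mean() for g in labels])

    # Grouped jackknife variance: (k - 1) / k times the sum of squared
    # deviations of the replicate ratings from their mean
    variance = (k - 1) / k * np.sum((replicates - replicates.mean()) ** 2)
    return full_rating, np.sqrt(variance)

# Illustrative data: 2,000 respondents split into 20 random replicate groups
rng = np.random.default_rng(0)
viewed = rng.binomial(1, 0.12, size=2000)   # roughly a 12 rating
groups = rng.integers(0, 20, size=2000)     # replicate-group assignments
rating, se = jackknife_rating_se(viewed, groups)
print(f"rating = {rating:.3f}, jackknife standard error = {se:.4f}")
```

The size of the resulting standard error depends only on how much the replicate ratings vary, which is why the approach does not require that the sample have been a probability sample in the first place.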

Of course, statistical error is only one of the types of error that research users need to be concerned about; there are also non-response error and response error. And with all the new computer technology enabling complex analytics, there is always the possibility of tabulation error creeping in.

More and more we hear researchers asking "How shall we judge the quality of research now?" This question applies both to the ongoing research we are already using, and to the new research which is at our doorstep. It is a question which deserves to be answered.

Having thought about it, we have come up with ten different ways to judge the quality and value of research, whether new or old. We will share these thoughts with you now.

Nine of the methods for judging research are metrical approaches; the tenth, a meta-method, is the experienced researcher's seasoned judgment, which looks at all nine and combines them into a best estimate of the overall quality and probable value of the research.

Of the nine metrical yardsticks, the first six listed below are classical, while the last three are revolutionary in that they seek to gauge the value of research based on any actual ROI contribution that the research itself might measurably provide. This holds research to a higher standard than ever before and may be too radical a notion for some readers. Those interested can write for another paper we have written specifically on this subject, which is too lengthy and esoteric for a blog posting.

The nine metrical methods for judging research that we see today are these:
1. Sample size
2. Response rate
3. Questionnaire completion rate
4. Internal consistency
5. External consistency (with currency measures and with universe estimates)
6. Agreement with a Truth Standard, where one can be established
7. If it is new research, when we experimentally use it in certain cable zones or markets, does the ROI increase there more than in the control zones or markets where the old research is used? (A sketch of this test-versus-control comparison appears after the list.)
8. If it is new research, when we experimentally use it for certain brands nationally, does the ROI increase?
9. Has any non-self-interested party provided a case study of how their use of the method produced increased ROI?
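
To make yardsticks 7 and 8 concrete, here is a rough sketch, with entirely hypothetical numbers, of how the test-versus-control ROI comparison might be summarized. The zone assignments, the ROI figures, and the use of Welch's t-test are all assumptions made for illustration, not a prescribed methodology.

```python
import numpy as np
from scipy import stats

# Hypothetical campaign ROI (return per dollar of ad spend) by cable zone.
# Test zones were planned with the new research; control zones with the old.
test_roi    = np.array([1.42, 1.55, 1.38, 1.61, 1.47, 1.50])
control_roi = np.array([1.31, 1.40, 1.29, 1.44, 1.35, 1.33])

lift = test_roi.mean() - control_roi.mean()
pct_lift = 100 * lift / control_roi.mean()

# Welch's t-test: is the ROI advantage in the test zones statistically reliable?
t_stat, p_value = stats.ttest_ind(test_roi, control_roi, equal_var=False)

print(f"ROI lift: {lift:.3f} per dollar ({pct_lift:.1f}%), "
      f"t = {t_stat:.2f}, p = {p_value:.3f}")
```

The same kind of comparison applies to yardstick 8, with brands or national flights playing the role of the test and control zones.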

Once the researcher has weighed and measured as many of these nine dimensions as feasible, he or she must still combine all of this, with seasoned judgment, to reach a conclusion as to whether the research is good enough to use for the decisions involved, which research alternative is preferable, or whether or not to bet on a specific new research approach. The days are coming when all researchers will have to make decisions like these, because change is coming.

Bill Harvey has spent over 35 years leading the way in the area of media research with special emphasis on the New Media. Bill can be contacted at bill@traglobal.com.

Read all Bill’s MediaBizBloggers commentaries at Bill Harvey - MediaBizBloggers.

