ARF Insights: Quality Advertising Effectiveness: ARF Interview with Molly Elmore, VP of Market Research at InsightExpress
By ARF Insights, May 24, 2011

Delivering complex online campaigns requires research methodology sophisticated enough to account for the myriad formats, sites, and success metrics that typify the online media ecosystem. While early online advertising was fairly easy to measure, today's digital ad effectiveness research requires more advanced techniques. In this interview, Molly Elmore, VP of Market Research at InsightExpress, explains how data weighting can help researchers draw accurate conclusions in advertising effectiveness studies.

ARF: Why has weighting of online data become necessary in recent years?

Molly Elmore: Advertising effectiveness studies have been conducted on behalf of online advertisers for over ten years, and during that time the complexity of online campaigns has evolved considerably. In the "beginning" of online advertising, a campaign often consisted of one or two 468x60 banner units and ran on one site for a short amount of time. It was easy to keep track of all of the elements in a campaign because there were so few.

Online advertising effectiveness studies began during those early years as well, and the main tenet of the research was straightforward to implement: recruit two samples of people, one group that was exposed to the campaign, and another group that was visiting the same site during the same time frame but was NOT exposed. The recruited samples typically were very similar, since so few sites were on the media plan. In addition, the sample sizes were fairly modest, since this research was new and segment-level analysis was less of a priority than understanding how the campaign performed overall. When this type of advertising effectiveness research was launched, weighting of data was not necessary in most cases.
The conclusions drawn from the data were simple and clear-cut, and helped clients understand that there was value in their campaigns even if few people clicked on the ads. At the time, click-through rate was without a doubt the most valued success metric, partly because it was an easy measure to obtain, and partly because click-through rates at the time were quite high.

Ten years later, the complexity of online campaigns is far greater than we imagined it would be back in the early days. A typical campaign may run on ten or more sites, with different launch dates, using up to a hundred different creative executions in varying technological formats. Video, Rich Media, and Adobe Flash® units can all be part of the same campaign, with a wide array of sizes included. The need for branding measurement continues to increase, as many advertisers look to reach consumers online and via other distribution channels. Thus, some of the direct response or behavioral measures like click-through rate are less relevant for those advertisers than attitudinal metrics like Purchase Intent.

A major element of the research methodology behind effectiveness studies is the comparison of the control and exposed groups. To isolate the impact of exposure to a given campaign, the two research groups must be statistically identical in every way but one: the presence of advertising as an influence on the exposed group. This means that the demographic and psychographic profiles, along with the site distributions, must be the same between those two groups for the assumptions in the methodology to be upheld. Because campaigns have become so complex over time, it is a challenge to always recruit identical groups of control and exposed respondents. With that reality, the role of data weighting has increased in the analysis process so that the tenets of the methodology may be upheld.
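As a minimal sketch of the alignment step described above (not InsightExpress's actual tooling, and using hypothetical variable names and data), one common approach is iterative proportional fitting, or "raking": control-group weights are adjusted until the weighted distribution of each profiling variable matches the exposed group's distribution.

```python
from collections import defaultdict

def margins(rows, var, weights=None):
    """Weighted share of each category of `var` across respondents."""
    if weights is None:
        weights = [1.0] * len(rows)
    total = sum(weights)
    out = defaultdict(float)
    for w, r in zip(weights, rows):
        out[r[var]] += w / total
    return dict(out)

def rake(control, exposed, variables, iters=50):
    """Rake control-group weights so that each variable's weighted margin
    matches the exposed group's margin (iterative proportional fitting)."""
    w = [1.0] * len(control)
    for _ in range(iters):
        for var in variables:
            target = margins(exposed, var)      # exposed-group margin
            current = margins(control, var, w)  # current weighted control margin
            # Scale each control respondent by (target share / current share)
            # for their category of this variable.
            w = [wi * target[r[var]] / current[r[var]]
                 for wi, r in zip(w, control)]
    return w

# Hypothetical data: the exposed group skews female and young relative to
# the recruited control group; raking reweights the control respondents
# so both margins line up with the exposed group's.
exposed = ([{"gender": "F", "age": "18-34"}] * 42 +
           [{"gender": "F", "age": "35+"}] * 18 +
           [{"gender": "M", "age": "18-34"}] * 28 +
           [{"gender": "M", "age": "35+"}] * 12)
control = ([{"gender": "F", "age": "18-34"}] * 20 +
           [{"gender": "F", "age": "35+"}] * 30 +
           [{"gender": "M", "age": "18-34"}] * 20 +
           [{"gender": "M", "age": "35+"}] * 30)
weights = rake(control, exposed, ["gender", "age"])
```

After raking, weighted control-group metrics (e.g., Purchase Intent) can be compared against the exposed group on a like-for-like basis. In practice, production weighting tools also handle empty cells, weight trimming, and many more profiling variables than this sketch does.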
ARF: What are the challenges in recruiting control and exposed respondents in online advertising research?

Molly Elmore: Recruitment of both control and exposed respondents has become more and more of a challenge over the years, as more research companies compete for respondents and some marketing companies use the guise of a survey to teach people about their products and services. In addition, many companies (including InsightExpress) have used pop-ups to invite respondents, and this technology has become more widespread over time, limiting the opportunities to successfully employ that tactic. As a result, InsightExpress recently launched the Ignite Network, which employs a panel recruitment methodology that leverages partnerships with multiple panel providers to invite respondents via alternate methods. In addition, the technology supporting the Ignite Network allows clients to recruit respondents who were exposed to a specific campaign, avoiding the prior panel recruitment option, which was akin to looking for a needle in a haystack.

ARF: Based on your work, what recommendations can you make to researchers regarding these types of studies?

Molly Elmore: Given my experiences in the field, I would make three recommendations to anyone launching an advertising effectiveness program.

A. Be sure to consult someone skilled in the art of questionnaire design, as executing an effective research program is not as simple as asking online visitors a list of questions.

B. When conducting comparison studies like advertising effectiveness studies, be sure to have appropriate and sophisticated weighting tools available so that accurate conclusions are drawn regarding the impact of the campaign. This requires that all participating respondents provide profile information for weighting.

C. If recruitment is expected to be a challenge, consider alternate options to increase the likelihood of a successful study.

Want to hear more?
Molly Elmore will be presenting on "Ensuring Data Quality in Advertising Effectiveness" with Lynn Klein-Rosner, Senior Director of Syndicated Research at The New York Times Company, at the ARF's Audience Measurement 6.0 Symposium on June 14. Registration is open.