Measuring Levels of Attention: 90% Right or 100% Wrong?

By The Cog Blog Archives

Levels of attention -- what that notion means in relation to ads, and how we should go about measuring it -- are having a bit of a moment. Hopefully, it will be a moment that lasts years, as this is clearly an important issue and raises matters of principle in how we go about the business of placing and evaluating advertising.

A couple of weeks ago The Cog Blog reported on Dentsu work; now we have a study from Ebiquity. Both use U.K. data from Lumen and U.S. data from TVision. Further work from Karen Nelson-Field, the person who perhaps more than any other started this ball rolling way back when, has been done in Germany, Austria, Switzerland, Australia and the U.S., much of it shared with the asi community via podcasts and their annual conference.

The Dentsu work seemed to involve several vendors, including Facebook, in its planning and implementation. I suspect, and Dentsu can easily correct me by commenting, that the bulk of the cost was borne by these sales organizations, too. Whether that matters in terms of the findings and how they're presented is up to users to decide.

The Ebiquity work was, I believe, independent of any vendor or indeed agency involvement.

Regardless, we are now getting to the stage where there is much public information and data swilling around, much of it consistent across Lumen, TVision and Karen Nelson-Field's Amplified Intelligence.

Discussions around attention to advertising, or rather the lack of it, are hardly new. I recall a study from the early 1980s (although "study" may imply a certain rigour that wasn't there) involving JWT's then Creative Director, Allen Thomas, calculating how many ads he was exposed to over a typical day, and how many he remembered at the end of it.

Several thousand and about four, respectively, were (if I remember correctly) his conclusions.

With apologies to the late, great Allen, we now have a rather more technically robust, research-y way of getting to much the same findings.

The fact is, as we've always known, people have "opportunities to see" a shedload of messages and remember very few.

I also remember the great Mike Yershon telling me that the first break of the first episode of a new series was the greatest builder of attention you could find (in large part because of the number of light commercial TV viewers attracted to something new).

One debate at the time centred around whether it was better to be in a program environment of great viewer involvement, on the basis that nobody left the room during the commercial break for fear of missing something, or whether that was hogwash, as the break offered the only viable opportunity to do something else for a few minutes.

If you believed the latter, then a less-involving editorial context appealed, as you just stayed put and did whatever the old-days equivalent of fiddling with your phone was (picking up a copy of The Radio Times?).

What we now have is the technical wherewithal to start answering questions like these, and their modern equivalents.

Is there a platform effect -- does Facebook per se deliver a less-attention-grabbing environment than YouTube? Is there a difference between different elements of FB? (Yes, according to Dentsu.)

Does comedy attract and hold attention more than drama? Does this apply to ad copy? Does it correlate with pre-test findings?

Do stars attract and hold attention more than mere mortals, and if so, what sort of stars?

What about pure audio? Or audio consumed as one element of a video message (hearing an ad whilst not in the room, for instance)?

How about print, OOH and static online ads, and how they're laid out and designed?

And eventually someone's going to work out a way of quantifying the potentially greatest attention-grabbing medium of them all -- cinema.

Next week The Attention Council is hosting a summit of the great and the good from the attention measurement community. It will be well worth a look.

I was always taught that it was better to be 90% right than 100% wrong. On that basis, we should not get too bogged down in technical arguments (which of course need to go on in parallel) but rather go in search of pragmatic and actionable solutions.
