
Comments

Hans Suter

I think Lord Saatchi has the perfect answer: http://www.ft.com/cms/s/9c40e788-0e08-11dc-8219-000b5df10621.html

Stefan Stroe

Jason, while following your spicy debate with Nigel, I was thinking that instead of measuring IPA winners vs. Link, I would challenge MB to find a way to show whether 2005-2006 Cannes/Clio/Epica film winners scored BETTER in Link than Cannes finalists did.

In my opinion, the main problem with Link and other pre-tests is that they go with the flow rather than against it, because consumers think mainly in communication clichés (they'll tell you what they're used to seeing on TV). Put simply, Link and its brothers push marketing folks toward "safe calls," and that habit gives headaches to planners and creatives.

I remain skeptical of their pre-testing approach. ;-)

Pete Gardiner

Hey Jason,

Read your blog, or most of it, and sent it on to a friend. He asked what Link was, which prompted this reply. I'm being lazy and not modifying it for you, but perhaps you won't mind.

Pete Gardiner

Link is/was a Millward Brown testing methodology that got its name from the premise that its results correlated/linked to in-market results. If an animatic got a good link score, then the finished commercial would do what it was supposed to do in-market. Test subjects would sit at a computer in a room with about 20 other respondents and watch a series of 5 commercials, some finished and some in animatic form. They would answer a series of general warm-up questions (by typing in answers on a keyboard) about what they had just watched. Then they would watch only the animatic being tested, and would answer questions intended to find out clarity of message, brand link, intent to purchase, and whatever else had been agreed upon to be tested. Interestingly, Link also had a feature where people could operate a joystick as they watched the animatic. They would move the joystick to the right when they liked what they saw, and to the left when they didn't like what they saw. The middle was neutral. What you got was a second-by-second chart showing each viewer's feelings about the animatic.
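For anyone who has never seen one of these joystick traces, here is a minimal, purely illustrative Python sketch of how per-second readings from a handful of respondents might be averaged into the kind of second-by-second chart Pete describes. The respondent IDs, the -1 to +1 scale, and the simple averaging are assumptions for illustration only, not Millward Brown's actual Link processing.

```python
# Illustrative sketch only: not Millward Brown's method.
# Hypothetical joystick readings, one value per second per respondent:
# -1 = dislike, 0 = neutral, +1 = like.
from statistics import mean

readings = {
    "r01": [0.0, 0.2, 0.5, 0.4, -0.1],
    "r02": [0.0, 0.1, 0.3, 0.6, 0.2],
    "r03": [0.0, -0.2, 0.1, 0.3, 0.0],
}

def per_second_trace(readings):
    """Average the joystick position across respondents at each second."""
    n_seconds = min(len(series) for series in readings.values())
    return [
        mean(series[t] for series in readings.values())
        for t in range(n_seconds)
    ]

if __name__ == "__main__":
    for second, value in enumerate(per_second_trace(readings)):
        print(f"t={second:>2}s  mean joystick = {value:+.2f}")
```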

I liked the comparison of Link to a Smith & Wesson: powerful tool, yet dangerous in the wrong hands. My view is that Link overpromises. I remember what one of my profs in university told me about the Salem witch trials. He said the amazing thing was how rational and orderly the trial was. The rules and process of English common law worked smoothly. The only weird part was that the trial proceeded from the premise that there were witches. Link proceeds from the premise that a person responding to an animatic in a research room at a computer, prompted by a screen of questions, will accurately predict how a finished ad will do in-market. What about the quality of the finished ad? What about media weight? Too much? Too little? What about poor distribution or bad shelf placement or high price? The good folks at Millward Brown earnestly try to quantify it into a graph showing second-by-second interest. They add charts and graphs and mean scores and their own language, and the whole thing becomes quite impenetrable and difficult to challenge. And when ad agencies do challenge, we're accused of being defensive.

When one of my spots was being tested, I asked to take part in a Link test myself. I was surprised that my joystick was actually quite difficult to move either left or right. So if someone is physically strong, they really like it, and if they have arthritis, they're quite neutral to the whole thing. I don't believe my complaint made the slightest dent in their statistical armour.

I think it's important for all of us to brutally cross-examine the witness of Testing as often as possible -- really Eddie Greenspan them. They get too free a ride from us all. Or we could do what Rethink does, and refuse to have any part of it. There are no witches.


Pete


Mike Gadd

Hi Jason,

I could not resist joining in the dialogue on the two questions your client posed. Since, based on our own research, it is vital to emotionally connect with people, i.e. to engage them, I feel that copy testing has to move away from a focus on advertising messaging (clarity, credibility, and so on) toward a focus on the customer and how he or she connects with and receives the brand's unique DNA.

Time and budget permitting, you could consider using MRI, EEG, or other biologically based measurement techniques. One client I know simply likes to observe people's facial expressions. As far as surveys are concerned, even OTX, for whom Robert Heath consults, uses 1-to-5 ratings of feelings. Gerald Zaltman likes to use metaphor elicitation techniques. I like to use some of these solutions along with impact-on-the-brand type techniques.

People may not be sure about things they are unfamiliar with, but is there any evidence to suggest that they eventually like what they initially dislike?

To your point on "content" ads only, one of the best-liked ads we have ever reported on featured three frogs in a marsh burping "Bud," "Weis," and "Er" to a flickering neon sign. It had no messaging takeaway, but it had a powerful emotional connection with people.

I would say that Dove's Evolution viral spot would do well in a pre-test, as long as the finished spot was tested and not a stick-drawing version, which I could not imagine. Equally, I predict Coca-Cola's Happiness Factory spot from "The Coke Side of Life" would do very well. Both just won Creativity awards, with Dove's Evolution getting the Grand Prize nod.

At the risk of sounding self-serving, I guess I feel that a one-size-fits-all testing technique could work for all advertising, as long as it measures Engagement quality and communication of the unique DNA of the advertised brand in the context of the advertising objectives set.

In answer to the second question, "can copy testing predict": if the copy-testing technique is not measuring the Engagement quality of the ad and the intended DNA for the brand, then I do not see how it can predict accurately.

David Ogilvy, initially a researcher himself, came up with the lamp-post phrase (research used for support rather than illumination), and really believed in good research helping to develop advertising that connects with people.

In our YouTube world, where consumers create and co-create advertising, wouldn't it be nice to go beyond stick drawings in portraying the creative intent?

mark

Thanks for (re-) initiating a great discussion.

The aspect of pre-testing that bugs me the most is whether we can really learn from the research how to make a spot better. Despite the "diagnostic" questions or the use of open-ended questions, without the ability to probe and question people about their understanding of the spot, are we really learning enough?

Ultimately, using this research (or any research) with a pass/fail approach is just a sign of poor business self-confidence. I find it puzzling that Nigel Hollis would not be an advocate of doing great strategic research upfront and relying on a client's experience of whether the ad delivers on that. At the very least, let's accept pre-testing as an imperfect indicator, not a fait accompli.

