Trust, But Verify

[youtube http://www.youtube.com/watch?v=pOwJOcp-Mxk?rel=0&w=480&h=300]

For a cool $5 million, maybe more, Groupon became this year’s poster child for how not to advertise. And it did so on America’s largest television stage, the Super Bowl.

The company’s commercials, which included an in-program parody about Tibet featuring Timothy Hutton, and similarly toned pre- and post-game spots about saving the whales with Cuba Gooding Jr. and deforestation with Elizabeth Hurley, were met with widespread negativity. Many called the communications offensive. G&R’s own research placed audience reaction near the bottom of all Super Bowl ads, below perennial whipping boy GoDaddy.com, but above HomeAway.com’s “Ministry of Detourism” (Test Baby), which was also pulled by its advertiser and likewise prompted a CEO apology.

At first, Groupon defended its efforts. Then, it pulled the ads. Then, CEO Andrew Mason further distanced the company from the criticism by saying that he put too much trust in the ad agency charged with developing the creative (BusinessWeek).

Whether an ad airs on the Super Bowl or elsewhere, the fallout from faulty creative can be significant. The costs go well beyond wasted production and airtime, and red-faced CEOs. Even when it is masked by otherwise strong marketing and business results, poor advertising damages brand and agency reputations, just as if someone had taken a hammer to them. The damage is most obvious when the advertising appears on high visibility programming like the Super Bowl, but can happen with any campaign.

As the many Super Bowl examples of it demonstrate, poor advertising is surprisingly common. Before its current “talking baby” campaign, E*Trade spent $2mm to show a monkey and two men wasting $2mm. Within its better-received monkeys campaign, CareerBuilder.com spent $2.7mm to tell the story of an unhappy employee’s heart bursting out of her body and running to tell her boss that she quits. Outpost.com spent $2.6mm to shoot gerbils at its name for most of its 30 seconds and was not heard from again. And Apple, just one year after running what many people consider the best Super Bowl ad of all time, “1984,” ran what some people consider the worst Super Bowl commercial of all time, “Lemmings.” The company didn’t return to the Super Bowl stage until 14 years and one Steve Jobs hiatus later.

Flawed creative hurts agencies as well. Even Super Bowl agencies with strong creative pedigrees see accounts head for the door and their stature suffer. The work for Groupon was done by Crispin Porter + Bogusky, an agency well known for edgy advertising (Coke Zero) that can be controversial and produce significant PR value (Burger King), as well as advertising that is heartfelt and empowering (AmEx Open). The initial E*Trade spots were the work of Goodby Silverstein & Partners (Doritos; Sprint; Netflix). Sandwiched between its more familiar monkey motif, CareerBuilder.com tried out a Wieden+Kennedy (Coca-Cola; Nike; Old Spice) idea. The Outpost.com commercial was created by Cliff Freeman and Partners (Wendy’s “Where’s the Beef”). None of these agencies is still involved with its Super Bowl client, and Cliff Freeman is no longer in business.

With so much at stake, why does poor advertising end up getting run at all? The simple answer is that all of us – even writers, producers, directors, experienced marketers and ad agencies – have trouble when it comes to telling whether something that we’ve created is good or not. Here’s why.

  1. Ads are deceptively complex stimuli to characterize. Although simpler than movies and TV shows, commercials present the same analytical challenges when anyone attempts to assess how well their many moving parts work or don’t work, in isolation and together. That’s because kinetic stimuli contain more variables than the human mind is capable of processing; the brain is limited and selective in the number of items it perceives, remembers, and thinks about. No matter how experienced a movie, network, or advertising professional is, he or she is not mentally equipped to weigh all the combinations of content variables and syntaxes that influence response. As evidenced by the number of box office busts, TV cancellations, and advertising failures, simply judging whether a movie is good, a TV show engaging, or a commercial effective will meet with only limited success.
  2. We are not very good at inferring the relative responses of others. Ninety percent of drivers feel that the quality of their driving is in the top 50 percent of drivers. Sixty-eight percent of professors rate themselves in the top 25 percent for teaching ability. The bottom 25 percent of students think they do better than 65 percent of their class. Ninety percent of entrepreneurs think that their new business will be a success when most new businesses fail. Forecasting whether an ad will be a success or failure in the minds of others is even more uncertain when the stimulus is new and different, which Super Bowl commercials (and all good creative) should be.
  3. We are reluctant to use the best means we have for understanding what others will think, which is to ask them. According to Dan Gilbert, this is because we tend to overvalue our own uniqueness. Dan’s thinking may explain why the best opinion-seeking method we have in the advertising world, copy testing,[1] is often left undone. This is unfortunate because the more someone knows about how others will react to a stimulus, the better he or she will be at avoiding the inherent error in affective forecasting and ad approval.

Poor commercials are the consequence of the limitations we run up against when we rely on intuition and logic to analyze complex stimuli and predict how others will react to them. They are not the result of too much agency trust, as Groupon’s Andrew Mason put it. Independent, quantitative research by expert providers mitigates the considerable downside in high-risk/high-reward advertising and protects the company’s most important asset – the brand. Trust, but verify.


[1] Quantitative testing should not be confused with focus group research, which is good for development work but not for evaluative work. The ambiguity of focus group responses makes them impossible to interpret reliably, and the error rate that results from small sample sizes and variable group dynamics is no better than chance alone.
