
Why clicks are the wrong metric

iMedia Editors

A century ago, John Wanamaker said, "Half the money I spend on advertising is wasted; the trouble is I don't know which half." Today, online marketers continue to grapple with the same question as they analyze their metrics.


It might seem that the online world offers a simple answer, because you can track clicks. The problem is that click-counting and click-based analyses are fundamentally flawed. It's not just that clicks don't tell the whole story; they tell the wrong story -- especially when used alone.


Many marketers, thanks to their web analytics deployments, attribute site activity only to click-based events, such as clicking on an ad, which is a limited way to gauge response.


Industry click-through rates (CTRs) are notoriously minuscule, hovering at less than 0.1 percent. The vast majority of people who see online ads do not click. Furthermore, those who do click do so disproportionately; some 85 percent of clicks come from only eight percent of people. Numerous industry studies have addressed this issue.
 
Yet, low CTRs don't mean ads aren't working -- quite the contrary. Customers are responding to the ads, often with purchases not long after seeing the ad, and often without clicking.


In a recent test, IMVU, an avatar-based social network and virtual world where people can buy virtual goods for real money, wanted to find out whether non-paying IMVU users (who already received email marketing and were exposed to ads in the virtual world) would be more likely to become paying customers when exposed to IMVU online advertising in the real world.


In a test-and-control scenario, customers who saw IMVU ads were 10 percent more likely to become paying customers, regardless of whether they clicked on an ad or not. Compared to the control group, the 10 percent increase was incremental, above and beyond the sales boost from existing marketing efforts. The control group had an equal opportunity to be exposed to all other marketing activity. The only difference between the groups was the actual ad exposure: the test group saw an IMVU ad, whereas the control group saw an unrelated ad.
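To make the arithmetic concrete, here is a minimal sketch of this kind of test-and-control lift calculation. The user counts and conversion numbers below are hypothetical, chosen only to illustrate the comparison; they are not IMVU's actual figures.

```python
# Hypothetical test/control lift calculation (illustrative numbers only).

def conversion_rate(conversions, users):
    """Share of exposed users who became paying customers."""
    return conversions / users

# Test group saw the IMVU ad; control group saw an unrelated ad.
test_rate = conversion_rate(conversions=1_100, users=100_000)     # 1.10%
control_rate = conversion_rate(conversions=1_000, users=100_000)  # 1.00%

# Incremental lift: conversions above what the control group predicts,
# independent of whether anyone clicked.
lift = (test_rate - control_rate) / control_rate
print(f"Incremental lift from ad exposure: {lift:.0%}")  # -> 10%
```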


In the same way, IMVU tested whether paying customers would actually spend more money if they saw online ads (again, in the real world) encouraging them to do so. IMVU members who saw promotional display ads for virtual goods spent, on average, more than double what members exposed to a control ad spent, regardless of click activity. Again, this was an incremental lift above and beyond promotional activity via email and the virtual world. For companies like IMVU, being able to sell virtual goods is like printing money.


In a separate campaign, we looked at an e-commerce company that relied heavily on a web analytics package that utilized post-click attribution (activity and revenue at the site driven by an ad click). The advertiser wanted to optimize based on post-click activity only. The client did not track post-view revenue (sales at the site driven by online ads without clicks), so it had not optimized for it.


Two scenarios were reviewed: optimizing based on post-click attribution, and optimizing based on post-view attribution. Incremental revenue from post-view optimization was 10 times higher than that from optimizing on a click-through basis. When analyzing revenue from a post-click perspective, the best ad unit appeared to be an ad we will, for these purposes, call creative ad "A," and the worst performer was creative ad "C." But when analyzing from a post-view perspective, the results flipped: "C" was the best and "A" was the worst, leading to very different optimization scenarios.
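As a rough sketch of how the two attribution views can rank the same creatives differently, consider the toy comparison below. The conversion records and revenue figures are invented; they simply mirror the flip between "A" and "C" described above.

```python
# Toy comparison of post-click vs. post-view attribution (invented data).

conversions = [
    # (creative, revenue, clicked_before_converting)
    ("A", 50.0, True),
    ("A", 40.0, False),
    ("C", 30.0, True),
    ("C", 80.0, False),
    ("C", 90.0, False),
]

def attributed_revenue(records, include_view_through=False):
    """Total revenue per creative; post-click counts only conversions preceded by a click."""
    totals = {}
    for creative, revenue, clicked in records:
        if clicked or include_view_through:
            totals[creative] = totals.get(creative, 0.0) + revenue
    return totals

print("Post-click attribution:", attributed_revenue(conversions))
# {'A': 50.0, 'C': 30.0}  -> "A" looks best
print("Post-view attribution: ", attributed_revenue(conversions, include_view_through=True))
# {'A': 90.0, 'C': 200.0} -> "C" looks best
```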


Playing devil's advocate, one might argue that a post-view analysis gives undue credit to all online advertising: a prospect exposed to the online ad was likely to purchase anyway, and the impression probably didn't influence the decision. Yet, time and time again, we find the exact opposite. We analyze the window of time between an ad impression and a purchase. This data reveals the immediate increase in purchases that happens very shortly after a consumer sees the ad, demonstrating the post-view impact of the campaign. In these analyses, half of the conversions occurred within six hours of an impression, and 70 percent occurred within 24 hours. If there were no such thing as a view-through impact, one would expect the distribution of conversions over time to follow a more linear pattern rather than a curve.
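The analysis itself amounts to a cumulative distribution of the impression-to-conversion delay. The sketch below uses made-up delays chosen only to echo the 50 percent and 70 percent figures above.

```python
# Cumulative share of conversions by hours elapsed since the last ad
# impression (delays are made up for illustration).

from bisect import bisect_right

hours_to_convert = [1, 2, 4, 5, 6, 10, 18, 40, 60, 90]  # one value per converter

def cumulative_share(delays, window_hours):
    """Fraction of conversions occurring within `window_hours` of the impression."""
    ordered = sorted(delays)
    return bisect_right(ordered, window_hours) / len(ordered)

for window in (6, 24, 72):
    print(f"Within {window:>2}h: {cumulative_share(hours_to_convert, window):.0%}")
# Within  6h: 50%
# Within 24h: 70%
# Within 72h: 90%
#
# A heavy concentration of conversions in the first hours (a steep curve,
# rather than a flat, linear spread over time) is what points to a genuine
# view-through effect.
```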



The bottom line is that every campaign is different, and each should be optimized based on as much data as possible. Don't rely on click-based analysis alone; you'd be leaving money on the table.


Jarvis Mak is the vice president of analytics and client services for Rocket Fuel.


Comments


Commenter: Luc Viaud

2011, December 19

This confirms that we may have to think about digital campaigns more in terms of GRP than in terms of CTR. But for those who want to track purchased traffic to a website, CTR is an interesting KPI because it is related to the conversion funnel. CTR misses the awareness component, though.

Commenter: Dieter Van Roekel

2011, July 26

Great post, and I am in agreement that focusing only on the click is focusing too much on a very small part of the story. Did you manage to look at the uplift effect of each impression and the density of impressions? Having a cumulative percentage graph of conversions doesn't really tell me the effect of impressions over time, as I'm sure you'll agree.

Commenter: Benjamin Theriault

2011, May 12

Great post by a terrific client. This challenge quickly escalates when you consider attribution outside the ecosystem of a single buy or a single publisher. To put it another way, if I'm Dove Soap I may have several campaigns running concurrently thus driving general likelihood to purchase on a larger scale. Not to mention managing the complexity of overlap among portals and (yikes) devices that don't speak to one another! Kudos to the continued focus on thought leadership.

Commenter: Nicolle Gershon

2011, May 10

Agreed as well. Collective's VP of Analytics, Jeremy Stanley, actually just released a study with similar findings. Further reading on this topic can be found at http://www.collective.com/insight/click-brand-marketings-most-misleading-measure-0.

Commenter: Matthew Weaver

2011, May 10

Thanks for the great post Jarvis. I agree with you 100%. The more data you can acquire and track, the more likely you are to be successful in the online space.

Commenter: Spencer Broome

2011, May 10

Really interesting stuff. It can be a grappling match to determine where to attribute sales when it comes to online advertising, but I agree that the more data you have, the more likely you are to come to a concrete answer.