The only reason we advertise is to -- some way or another -- sell a product. Advertising may be for "branding" and is meant to lay a foundation, but that foundation is in place to generate sales at some point during the product life cycle. Yet here we are, over 20 years into digital media, and figuring out what to measure is a problem that keeps getting worse, not better. In statistics, unhelpful data is called "noise," and helpful data is called the "signal." We'll close this piece by identifying what you should measure, but in the meantime, here are seven metrics that, for display and video, mobile and desktop, are at best noise. At worst, using these metrics can actually harm your campaign.
Click-through rate (CTR)

The mother of all bad metrics is also the grandfather of all online metrics. CTR is like the fad diet that keeps coming back into fashion: we know deep down it's not a path to success, but it's just so tempting. A fad diet may merely be unproductive; optimizing to CTR is actively harmful to a campaign. Display/video CTR is inversely correlated with buying behavior. That's right: optimizing to clicks actually orients your media away from your target audience. The research is well documented, so do yourself a favor and say goodbye to CTR forever.
Video completion rate
Why wouldn't we want to know this? VCR is not, per se, a bad metric. But ask yourself, why are you running video? Most likely to affect intent to purchase, just like TV would. Then measure intent to purchase using a control vs. exposed study! Also, video is the most lucrative space for fraudsters, and bots can easily inflate completion rates. Thinking from a fraudster's perspective is one helpful way of determining which metrics may be false indicators.
Viewability

Put down the pitchforks and let me explain. It's not that viewability shouldn't be measured; it absolutely should. But viewability is not a metric, and it is not a tactic. Viewability is a foundation. An ad never actually viewed by a human can't possibly help you achieve your marketing goals, and that applies to any medium, not just online. That said, I've seen too many marketers get so hung up on not paying for a single unviewed ad that they end up creating more waste than they save. If 1,000 impressions at 50 percent viewability cost $5 while 1,000 impressions at 100 percent viewability cost $12, insisting on buying 100 percent viewability actually costs you more per viewed impression, not less.
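To make that waste math concrete, here's a quick sketch comparing cost per viewed impression for the two hypothetical buys above (the function name is mine; the prices and rates come from the example):

```python
def cost_per_viewed_impression(impressions, viewability_rate, cost):
    """Cost divided by the number of impressions actually viewed."""
    viewed = impressions * viewability_rate
    return cost / viewed

# Hypothetical buys from the example above.
partial = cost_per_viewed_impression(1000, 0.50, 5.00)   # $5 for 50% viewability
perfect = cost_per_viewed_impression(1000, 1.00, 12.00)  # $12 for 100% viewability

print(f"50% viewable buy:  ${partial:.4f} per viewed impression")
print(f"100% viewable buy: ${perfect:.4f} per viewed impression")
```

The "cheaper" 50 percent-viewable buy delivers viewed impressions at a penny each, while the premium for guaranteed viewability pushes the cost per viewed impression 20 percent higher.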
Last-click conversions

We saw earlier what a poor indicator of success CTR is, and a click is required to generate a last-click conversion. Display and video ads, just like outdoor billboards and TV spots, aren't formats whose biggest benefit is the ability to be clicked. These formats, like their offline counterparts, nudge the consumer down a decision-making path toward ultimately making a purchase. Most of the time that purchase is made by searching and then clicking, or by going to the advertiser's site directly. (The exception here is a DR product with zero brand recognition, where requiring display clicks to lead to purchase is understandable.) You wouldn't throw out your clothes iron because it makes a poor substitute for the George Foreman Grill. Likewise, use display and video for their intended purpose, and don't grade them on a job they were never intended to do.
Last-ad-seen attribution

Imagine you go to cnn.com every morning when settling in at your work desk around 9 a.m. Each morning one week you see an ad for some new noise-cancelling headphones. You start to think, unconsciously, "These look great. I should think about these." After seeing five ads on cnn.com during the week, your daughter comes to you at 7 p.m. on Friday night and asks you to pull up the lyrics to that new Chainsmokers/Daya song. You search and end up on some random lyrics site, where the ad for the headphones appears again. The next morning you go online and buy the headphones. Congratulations, random lyrics site; you just got 100 percent of the credit for that purchase, when in reality that single ad exposure had very little influence on it. You're better than last-ad-seen. Demand full-path optimization.
Delivery

I don't hear too many people say their only metric is "delivery," but I do still hear it. Delivery isn't a metric; delivery is an act, a function. I can't think of a single situation where there isn't a meaningful metric applicable to a campaign. Even if the measurement comes in after the campaign is over, there are still learnings that can help you buy better next time. But if you're truly stuck without a metric, don't claim that delivery is the metric. Are you really going to cut a partner whose audience is perfect for your brand because they're pacing too slowly?
Any site-side analytics metric
Google Analytics, Omniture, and the like are great tools, but only if they're used as intended. First, to be measured by one of these programs, a click must occur. We've covered this above -- twice -- so you can see how using a site-side analytics tool to measure media success gets us off to a bad start. Second, the metrics site-side analytics tools provide are specifically designed to help marketers improve their website experience. In some ways these programs can evaluate the quality of visitors (which, for display/video, means clickers), but they surely can't measure the effectiveness of an ad campaign.
Imagine you're a retail store trying to improve your in-store experience. You want to move products around to encourage more impulse buying and high-margin product sales. At what point would you decide that the metrics you set up to judge that program are a good way to measure the effectiveness of your radio campaign? Just because we can link two events together doesn't mean there is causation, or even relevance; a famous example is that the winner of the Super Bowl doesn't actually predict how the stock market will perform.
What should I measure, then?
Just because you have a toolbox with over 50 tools in it doesn't mean you should be disappointed 49 of them aren't the best tool to pound in a nail. The hammer is the single and only best tool for that job. In this case, the fact you have 49 other tools doesn't mean they're bad; it means they're irrelevant to your current situation. This is the case with online media, specifically display and video within mobile and desktop.
First, your objective is to get the largest number of reasonably priced "viewed by human" ads you can with your budget. Once you've achieved this, you'll need to know which of the three broad categories of campaigns you're working with: brand, brand response, and direct response (or e-commerce).
Brand campaigns typically create awareness or drive purchase intent. If these sound like your goals, use a purchase intent study (Nielsen's Vizu is a common example, but there are many) to measure your success by pitting your exposed users against a set of control users. Set your lift goal before the campaign starts so you know whether you've succeeded. If your goal was to get a 3 percent purchase intent lift with $500,000, you should be able to tie that back to historical results showing this kind of lift is worth that price.
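The control-vs-exposed comparison boils down to a simple lift calculation. A minimal sketch, with invented survey counts for illustration:

```python
def purchase_intent_lift(exposed_positive, exposed_total,
                         control_positive, control_total):
    """Absolute lift in purchase intent: exposed rate minus control rate."""
    exposed_rate = exposed_positive / exposed_total
    control_rate = control_positive / control_total
    return exposed_rate - control_rate

# Hypothetical survey: 230 of 1,000 exposed users show intent (23%)
# versus 200 of 1,000 control users (20%).
lift = purchase_intent_lift(230, 1000, 200, 1000)
print(f"Purchase intent lift: {lift:.1%}")  # prints 3.0%, hitting a 3 percent goal
```

In a real study you'd also want a significance test on the two rates, but the lift number itself is this simple.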
"Brand response" is a term used by marketers who are looking to drive online actions but can't necessarily register a direct sale online. Auto dealers and hospitals are great examples. Within automotive, validating that a consumer took the time to look at a specific vehicle or visit the "hours and directions" page is a better indicator of a campaign's success than any of the metrics above. If an "hours and directions" visit is costing $5, over time you can determine whether that is efficient enough to be worth your marketing dollars.
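The brand-response math is just cost per tracked action. A tiny sketch, with invented spend and visit counts that work out to the $5 figure above:

```python
def cost_per_action(spend, actions):
    """Media spend divided by the number of tracked on-site actions."""
    return spend / actions

# Hypothetical month: $10,000 in media spend drove 2,000 "hours and
# directions" page visits.
cpa = cost_per_action(10_000, 2_000)
print(f"Cost per visit: ${cpa:.2f}")  # prints $5.00
```

Track this number over time and it becomes the benchmark for whether the campaign earns its budget.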
Finally, there is direct response and e-commerce. First, just because you have a DR product doesn't mean branding isn't beneficial. Just be clear about the objectives of your campaign going in so you know what success looks like. If your goal is to sell a product online, measuring this is incredibly easy; assigning credit is a bit more difficult. But just as I'd rather my son come home with a 20 percent on his math test than a zero, so too is it more beneficial to do some -- any -- full-path attribution than none at all. Don't worry about getting it wrong. Just get it right more than zero!
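Even a naive even-split (linear) model beats last-ad-seen credit. A minimal sketch, assuming each conversion comes with an ordered list of ad touchpoints (the site names echo the cnn.com headphones story earlier; the data structure is mine):

```python
from collections import defaultdict

def linear_attribution(conversion_paths):
    """Split each conversion's credit evenly across every touchpoint on its path."""
    credit = defaultdict(float)
    for path in conversion_paths:
        share = 1.0 / len(path)
        for touchpoint in path:
            credit[touchpoint] += share
    return dict(credit)

# Hypothetical path: five cnn.com exposures, then one lyrics-site exposure,
# then a purchase.
paths = [["cnn.com"] * 5 + ["lyrics-site"]]
print(linear_attribution(paths))
# cnn.com gets 5/6 of the credit and the lyrics site 1/6,
# instead of the lyrics site taking 100 percent under last-ad-seen.
```

It's crude, which is exactly the point: crude full-path attribution still scores better than the zero of last-ad-seen.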
Clearly there is a lot of noise in our space, and wading through it to get to the signal is hard. But that's why great marketers get paid well: they generate meaningful, measurable results for a brand.