
5 ways you're recklessly abusing your metrics

By Jarvis Mak

"Torture numbers, and they'll confess to anything." -- writer Gregg Easterbrook


As we all know, the digital advertising world has an abundance of data. However, availability does not equate to usefulness. In fact, data are often used poorly, which leads to bad decisions.




So you think you're applying your metrics properly? You might need to think again. Here are five common ways that numbers are used to mismanage online ad campaigns.

Overreliance on a single metric


Companies often hold partners across different parts of the marketing funnel to a single metric (like CPA) when evaluating performance. This is a particularly interesting issue, so I'll spend the most time on it.


There have been many instances where a campaign gets underway and we learn that the agency has multiple partners on the plan handling various aspects. Some are upper-funnel, intended for broad reach and awareness. Some are mid-funnel, intended to move people from the top of the funnel toward the bottom by showing ads to people already aware of the brand to increase favorability and consideration. Finally, some are lower-funnel, like dedicated retargeting partners (perhaps even an in-house solution). Yet despite these very different goals, the agency backs into a single cost-per-acquisition number to measure all stages.


The agency's reasoning goes: the partners are competing fairly because everyone is held to the same metric, just with different targets for their different stages. So a retargeting partner gets a $5 CPA goal, a mid-funnel partner gets an $80 target, and an upper-funnel partner gets an even higher goal, say $200.


While this might sound like a logical path of reasoning, it is actually nonsensical. If the goal of an upper-funnel partner is reach and awareness, how does a CPA goal, even a generous one, measure that? You shouldn't be using a CPA goal; you need reach and awareness goals. Holding non-retargeting partners (that is, any upper- or mid-funnel partner) to a CPA goal does not make sense. Their job is to increase purchase intent by driving people toward the bottom of the funnel.


Advice: Either 1) measure the improvement of brand attributes, which is what we advise clients to do, and/or 2) use a cost-per-landing-page goal, since dedicated retargeting partners can then bring those prospects back to close the deal later. Regarding brand attributes, it's best to measure lift. If you are showing ads to people who are already aware of your brand or already prefer it, your ad dollars are being wasted. Most of the time, you want to make more people aware of, or favorable toward, your brand, meaning you want partners that can optimize against brand metrics.
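The standard way to measure lift on a brand attribute is to compare an exposed group against a held-out control group. Here is a minimal sketch of that calculation; the survey figures are hypothetical, and this is a simplified illustration rather than any vendor's actual methodology:

```python
# Illustrative sketch: relative brand lift from a survey comparing an
# exposed group to a held-out control group. All numbers are hypothetical.

def brand_lift(exposed_positive, exposed_total, control_positive, control_total):
    """Relative lift in a brand attribute (e.g., awareness) for the
    exposed group over the control group."""
    exposed_rate = exposed_positive / exposed_total
    control_rate = control_positive / control_total
    return (exposed_rate - control_rate) / control_rate

# Hypothetical survey: 42% awareness among exposed, 35% among control.
lift = brand_lift(420, 1000, 350, 1000)
print(f"Awareness lift: {lift:.1%}")  # 20.0% relative lift
```

A partner optimizing against brand metrics would try to maximize this lift, not the raw exposed-group rate, since the raw rate rewards preaching to the already converted.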




The tricky part is coming up with different reasonable goals for upper- and mid-funnel areas. Then the agency has the additional burden of explaining how that reach goal feeds into an ultimate sales goal for the marketer. The underlying problem revolves around knowing how to properly tackle attribution and assigning value to partners at complementary parts of the funnel.

Using a single CPA metric to judge a partner's value


A better measure than cost per acquisition would be return on ad spend (ROAS), calculated from the order amount at the time of the transaction. Obviously, there are high-end versus low-end customers, and it is quite possible to bring back low-end customers at a much faster clip than high-end customers. In fact, this was the case for one of our financial services campaigns. After we had been running for a couple of months, the agency informed us that the average deposit amount associated with the customers we drove was much higher than average. However, our CPA was higher than that of its other partners, so the firm directed us to focus on driving a lower CPA. Once we changed the optimization parameters, the average deposit amount dropped significantly, indicating higher volume but lower-end customers. Not surprisingly, after a short while, we were asked to go back to our original approach.
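The arithmetic behind this point is simple enough to sketch. With hypothetical numbers, a partner that loses on CPA can still win decisively on ROAS once order value enters the picture:

```python
# Hypothetical sketch of the ROAS-versus-CPA point: a partner with a
# higher CPA can still be the better investment on return on ad spend.

def cpa(spend, conversions):
    """Cost per acquisition: dollars spent per conversion."""
    return spend / conversions

def roas(revenue, spend):
    """Return on ad spend: revenue generated per dollar spent."""
    return revenue / spend

# Partner A: cheap conversions, low-value customers.
# Partner B: pricier conversions, high-value customers. (Illustrative.)
a = {"spend": 10_000, "conversions": 500, "revenue": 25_000}
b = {"spend": 10_000, "conversions": 200, "revenue": 60_000}

print(f"A: CPA ${cpa(a['spend'], a['conversions']):.0f}, "
      f"ROAS {roas(a['revenue'], a['spend']):.1f}x")
print(f"B: CPA ${cpa(b['spend'], b['conversions']):.0f}, "
      f"ROAS {roas(b['revenue'], b['spend']):.1f}x")
# A wins on CPA ($20 vs. $50), but B returns 6.0x versus A's 2.5x.
```

Judged on CPA alone, Partner B looks more than twice as expensive; judged on ROAS, it generates more than twice the return.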


Another credible approach, other than straight CPA, is using customer lifetime value (LTV) to measure effectiveness. For example, one of our advertisers tries to get as many people as possible to become members, as only members are allowed to participate in sales events. Similar to the financial services example, Rocket Fuel did not have the cheapest CPA. However, the advertiser used other back-end metrics to judge partner performance. These included how long it took for a member (once signed up) to make his or her first purchase, and how much the member spent in the first few months (which was directly correlated with lifetime value). According to these metrics, Rocket Fuel was the most valuable partner on the plan because it brought in high-value customers.
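Ranking partners on back-end value metrics rather than CPA alone might look something like the sketch below. The partner names, figures, and the 90-day-spend proxy are all hypothetical; the point is only that the ordering flips once customer value is the sort key:

```python
# Illustrative sketch: ranking partners by an early-spend LTV proxy
# instead of CPA alone. Names and figures are hypothetical.

partners = [
    {"name": "Partner A", "cpa": 12.0, "days_to_first_purchase": 45, "spend_90d": 80.0},
    {"name": "Partner B", "cpa": 18.0, "days_to_first_purchase": 12, "spend_90d": 310.0},
]

# Rank by 90-day member spend (a proxy correlated with lifetime value),
# breaking ties with faster time-to-first-purchase.
ranked = sorted(partners, key=lambda p: (-p["spend_90d"], p["days_to_first_purchase"]))

for p in ranked:
    print(f"{p['name']}: CPA ${p['cpa']:.0f}, 90-day spend ${p['spend_90d']:.0f}")
# The pricier partner (B) tops the ranking on customer value.
```

Sorted by CPA, Partner A would come first; sorted by the LTV proxy, Partner B does, which mirrors the membership example above.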


Advice: Use a more descriptive goal that ties in customer value -- either ROAS or LTV.

Giving partners an unrealistic goal to hit


We realize that some of the agency teams we work with give us a stretch goal. This can happen for a number of reasons, including that it's the agency's first time running the campaign and it has no benchmark for what the goal should be. That, by itself, is not an issue. It becomes an issue when clients don't communicate progress during the course of the campaign.


Clients might choose to do that because they think a partner will work harder to stay on the plan if it doesn't know that it is a top performer. One of our contacts at an agency continually emphasized how poorly we were doing but then rewarded us a few days later with reallocated budget because, in fact, we were doing better than others.


The flip side to this reverse psychology is "the boy who cried wolf" mentality. Every time we hear from this contact, we put little stock in what she says, primarily because her criticism of our performance doesn't match her actions (i.e., awarding us additional budget, which has happened on multiple occasions). There is no credibility.


Whether similar partners are hitting a hypothetical goal doesn't really matter. What matters is communicating performance relative to others.


Advice: Communicate honestly and often with partners about where they rank against the competition. This is also directly tied to the previous two points -- using meaningful metrics relevant to each partner's role in the funnel.

Using video completion rate, ad interaction rate, or CTR to judge effectiveness


In a vacuum, these numbers are meaningless. Two factors are more important than any kind of completion or interaction rate. They are:



  • Whether the desired audience was reached
  • What impact the ads had on that audience

If desired audience composition can be effectively measured and validated (which Rocket Fuel does with a survey-based approach to augment data targeting capabilities), then the level of engagement associated with each partner can be meaningful.
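A simplified sketch of that validation step: estimate what fraction of surveyed impressions actually landed on the desired audience before reading anything into engagement rates. The survey responses and target definition here are hypothetical, and real survey-based measurement involves weighting and sampling corrections this sketch omits:

```python
# Simplified sketch: estimating on-target audience composition from a
# validation survey, as a precondition for trusting engagement rates.

def on_target_rate(survey_responses, in_target):
    """Fraction of surveyed impressions that landed on the desired audience.

    survey_responses: list of respondent-attribute dicts
    in_target: predicate identifying the desired audience
    """
    hits = sum(1 for r in survey_responses if in_target(r))
    return hits / len(survey_responses)

# Hypothetical survey; target is adults 25-44 who drink spirits.
responses = [
    {"age": 29, "drinks_spirits": True},
    {"age": 52, "drinks_spirits": True},
    {"age": 33, "drinks_spirits": False},
    {"age": 41, "drinks_spirits": True},
]
rate = on_target_rate(responses, lambda r: 25 <= r["age"] <= 44 and r["drinks_spirits"])
print(f"On-target: {rate:.0%}")  # 50%
```

Only once this on-target rate is known does a completion or interaction rate mean anything: a high completion rate among the wrong audience is wasted spend.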


However, consider the opposite. A program for a spirits company was originally sold with the goal of homing in on people who enjoyed that particular spirit. Later, an entirely new team took over the account and was wholly focused on each partner's video completion rate, regardless of who viewed the ads. In fact, the two primary measures should have been desired audience composition and brand lift.


Advice: Avoid engagement metrics as the primary arbiter of success. An engagement metric is, at best, correlated with what is truly important: impact on the target audience. Instead, measure success with audience composition and brand impact metrics.

Artificially determined frequency caps


Ad agencies often direct us to limit the number of ad exposures to three per person over the course of the campaign. At first, this seems like a best practice so that some people are not overexposed to your campaign. However, in this day and age, when we have artificial intelligence that learns how to make individual campaigns successful, a manually imposed frequency cap is an arbitrarily limiting constraint.


Consider, for example, that two of the features in our models are campaign frequency and campaign recency. We know how many times we have shown a person an ad from a campaign, and our system knows how to differentiate the value of someone seeing the first ad from that campaign versus the 10th (in conjunction with thousands of other potential factors). With this in mind, we can help our customers draw an effective frequency chart, but with a caution: it is not universal. The results are specific to the audience to whom we showed the campaign. Some of them might be new to the brand, others might be familiar with it, and still others might have been customers in the past.




The effective frequency chart (and any other single-dimension chart, for that matter) is misleading in the sense that the one chart alone should not be used to set new frequency caps.

Advice: Instead, have a machine determine optimal frequency across a continuum and assign values along that continuum. That information should then be translated into a component of the CPM you're willing to pay for those impressions (again, along with many other factors).
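In outline, that approach replaces a hard cap with a value curve over exposures, scaling the bid rather than cutting it off. The curve and numbers below are assumed for illustration; a real system would learn these values per audience and combine them with many other bid factors:

```python
# Illustrative sketch (assumed values, not any vendor's actual model):
# assign a relative value to each exposure along the frequency continuum,
# then scale the base CPM bid by that value instead of applying a hard cap.

# Hypothetical incremental value of the Nth exposure, indexed from 1.
frequency_value = {1: 1.00, 2: 0.85, 3: 0.60, 4: 0.35, 5: 0.15}
FLOOR = 0.05  # residual value beyond the modeled range

def bid_cpm(base_cpm, prior_exposures):
    """CPM the buyer is willing to pay for this user's next impression."""
    next_exposure = prior_exposures + 1
    value = frequency_value.get(next_exposure, FLOOR)
    return base_cpm * value

print(bid_cpm(4.00, 0))   # full price for a first exposure
print(bid_cpm(4.00, 4))   # fifth exposure, steeply discounted
print(bid_cpm(4.00, 10))  # deep frequency, residual value only
```

Unlike a three-exposure cap, this never declares the fourth impression worthless; it simply pays less and less for it, which is what the learned value actually suggests.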


Good quality data can help us make good decisions. But quality data, if not used properly, can also lead to bad decisions. With an abundance of data comes a temptation to use what is easily available. As marketers, we need to resist that temptation. If a needed set of numbers is not available, don't settle for a proxy. Do what you can to get the right numbers to evaluate the right things. It makes all the difference.


Jarvis Mak is VP of analytics and client services for Rocket Fuel.




"Closeup of right male hand," "Whip - silhouetted," and "Abstract retro background with colorful rainbow numbers" images via Shutterstock.

Jarvis Mak runs the Customer Success group for Rocket Fuel, responsible for ensuring successful campaigns and delighting customers. He originally joined Rocket Fuel in 2009 as VP of Analytics. Previously, Mak was with Havas Digital as a senior vice...
