In the interactive marketing industry, we have access to a much richer and broader range of data than our offline media counterparts, yet traditional marketing remains far ahead in implementing and executing campaign testing. Part of this stems from online advertising's tendency to focus purely on the standard suite of performance metrics, which drastically limits the ability to optimize and leverage key aspects of a particular campaign.
By contrast, simple, classic testing methods provide more useful performance metrics and allow for a proper analysis of how your marketing efforts are truly expanding your sales and brand. Practices such as systematic test campaigns, control groups, and "A/B" splits have been leveraged for decades. In fact, John Caples -- the famous copywriter and a cornerstone of testing methods in advertising -- cites instances where an ad performed at 19 times the conversion rate of another after only the slightest adjustment. What does Caples say is the key to good advertising?
"The answer is testing, testing, testing... Based on proven results, they [marketers] then spend the bulk of their advertising dollars on tested copy in tested media," Caples wrote in his book, "Tested Advertising Methods."
Without running well-planned, methodical tests on your campaign, how will you know the true return on your marketing spend? How are you justifying your media plan? And when was the last time you actually executed a test campaign, used a control group, or ran an A/B test with different creative, offers, or even target audiences? If the recent economic downturn isn't enough to pique your interest, here are a few more reasons to put these practices in place.
1. Test campaigns
These are critical for developing base-level metrics for all marketing activity -- measures that will serve as benchmarks for your future campaigns. Since the aim of marketing is to create a larger, more engaged audience for your products and services, we need to find out how large and engaged that audience was prior to the latest campaign. By running a simple campaign with a basic creative or logo, we can estimate the results your brand achieves on its own. This basic principle gives you a benchmark against which to measure the success of future campaigns. Is it compelling enough to tell your CMO that a recent campaign drew 1,500 sales with a 0.5 percent conversion rate, or would you rather report that your latest ad wizardry resulted in a conversion increase of more than 500 percent?
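The benchmarking arithmetic behind that CMO pitch is easy to sketch. The figures, function names, and click counts below are hypothetical, chosen only to mirror the example above:

```python
def conversion_rate(conversions, visitors):
    """Conversions as a fraction of total visitors."""
    return conversions / visitors

def percent_lift(campaign_rate, baseline_rate):
    """Relative improvement of a campaign over the benchmark, in percent."""
    return (campaign_rate - baseline_rate) / baseline_rate * 100

# Hypothetical numbers: the bare-bones benchmark campaign converts at 0.5 percent,
# and the new campaign draws 1,500 sales on 50,000 clicks (3.0 percent).
baseline = 0.005
campaign = conversion_rate(1500, 50_000)

print(f"Campaign conversion rate: {campaign:.1%}")
print(f"Lift over benchmark: {percent_lift(campaign, baseline):.0f}%")
```

The same 1,500 sales read very differently once the benchmark turns them into a 500 percent lift.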
2. Control groups
Yet another underutilized tool. A well-executed control group consists of a segment of people who will not be exposed to the marketing elements you're testing. Our marketing predecessors identified key localities for these tests, selecting one region to withhold from a campaign's exposure. Maintaining "clean" control groups is challenging, since people move freely from region to region, but the very nature of online advertising makes this easy to solve: many ad servers can quickly define control groups, and they are far more effective at preventing errant exposures than geographically constraining a campaign. The control group shows the natural performance and behavior of your customers. By keeping it "clean," you have a yardstick against which to compare results and confidently tout your marketing prowess to the rest of your team.
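The yardstick idea reduces to comparing the exposed group's conversion rate against the held-out control's. A minimal sketch, with purely illustrative numbers and names:

```python
def incremental_lift(exposed_conv, exposed_n, control_conv, control_n):
    """Relative lift of the exposed group over a clean control group."""
    exposed_rate = exposed_conv / exposed_n
    control_rate = control_conv / control_n
    return (exposed_rate - control_rate) / control_rate

# Hypothetical split: 100,000 users exposed to the campaign, 100,000 held out.
# The exposed group converts at 2.0%, the control at 1.2%.
lift = incremental_lift(2000, 100_000, 1200, 100_000)
print(f"Lift over control: {lift:.1%}")
```

Any conversions beyond the control-group rate are the ones your campaign can actually claim credit for; without the control, all of them would be attributed to the media buy.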
3. A/B split testing
A key testing method. A/B split testing is a versatile and powerful method that can be adapted to test almost anything: calls-to-action, targeting, placement, offers, creative, and more. By rotating different creative executions through the same placement, you allow a pure performance comparison to take place.
Minor changes, such as headline placement or copy length, can produce drastic differences in engagement, even for creative executions that are the same size. Even small alterations to color schemes or fonts can make the difference in grabbing your audience and pulling them to a sale.
Interactive marketers also gain a significant advantage over other media when performing A/B split testing on creative. Not only are performance metrics confidently derived, but the campaign can be optimized mid-flight to increase the exposure of the winning creative, all within the same media buy. You can be sure you're not wasting any spend, in near real time, with no additional effort required from your publisher.
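The mechanics of a split can be sketched in a few lines. Here a deterministic hash assigns each user to one of two creatives, and a two-proportion z-test -- one standard way to judge whether the winner's edge is likely real rather than noise -- scores the results. The user IDs, counts, and function names are hypothetical, not a prescription for any particular ad server:

```python
import hashlib
import math

def assign_variant(user_id, variants=("A", "B")):
    """Deterministically split users across variants by hashing their ID,
    so the same user always sees the same creative."""
    digest = hashlib.md5(user_id.encode()).hexdigest()
    return variants[int(digest, 16) % len(variants)]

def two_proportion_z(conv_a, n_a, conv_b, n_b):
    """z-statistic for the difference between two conversion rates."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    pooled = (conv_a + conv_b) / (n_a + n_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    return (p_a - p_b) / se

# Hypothetical results: creative A converts at 1.2%, creative B at 0.9%.
z = two_proportion_z(120, 10_000, 90, 10_000)
p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
print(f"z = {z:.2f}, p = {p_value:.3f}")
```

With a p-value under the conventional 0.05 threshold, shifting impressions toward creative A mid-flight would be defensible rather than a hunch.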
These three testing practices can best be summed up as simple tools and simple concepts with simple steps to execution. So why aren't more marketers implementing them? As budgets tighten over the next year, the need to prove the value of a media buy is extremely important. This is one concern Caples knew firsthand -- having joined BBDO in 1927, he pioneered these testing methodologies during the Great Depression and published the first edition of "Tested Advertising Methods" in 1932, just in time to help guide BBDO through a drastic period of reduced spend and tightening budgets.
The parallels between the time when Caples developed and proved his methodology and the current economic downturn are clear. Good marketing is just as important now as it was then to help turn sales around, lift your brand above the fold, and demonstrate that your plan is actually performing stronger with the same budget allocation.
By systematically building a test campaign, a control group, and A/B split tests into every plan, you can determine what works and show that the resources and creativity you devote to your media plan are as successful at winning awards as they are at growing your company's revenue.
Rodney Webster is director of product management, Mediaplex, a division of ValueClick.