The process of testing banner ad creative to maximize conversions is easier than you might think.
After all, testing two banner ads against each other is similar to running a single ad. You choose a site or network on which to run the ad, but instead of serving a single ad to all traffic, the traffic is randomly divided. Half of the traffic is served one ad while the other half sees an alternative.
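The random split described above can be sketched in a few lines. This is a simplified illustration, not a real ad server (which would typically also hash a user ID so each visitor sees the same variant on repeat visits); the variant names are hypothetical:

```python
import random

def assign_variant(variants):
    """Randomly assign an incoming visitor to one of the ad variants."""
    return random.choice(variants)

# Simulate 10,000 visitors split across two banners.
counts = {"A": 0, "B": 0}
for _ in range(10_000):
    counts[assign_variant(["A", "B"])] += 1

print(counts)  # each variant receives roughly half the traffic
```

Because assignment is random, each banner sees a statistically comparable slice of your audience, so any difference in performance can be attributed to the creative itself.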
It's hard to fail when you're allowing your potential customers to show you, by their very actions, which treatment compels them to complete a task (e.g., clicking the ad, making a purchase, filling out a form, requesting more information).
So why aren't more companies realizing the potential benefits of banner testing?
Part of the problem, I believe, is that sometimes companies run banner ad tests and don't see much of a difference between the two that they tested. They end up thinking, "The variations in the ad obviously don't matter much, so why bother?"
But that kind of thinking can be dangerous and can lead to complacency in creative development. Successful testing can increase clickthroughs and conversions, boosting ROI more easily and more quickly than almost any other marketing initiative. And if you're not testing, your competitors are (or soon will be).
It can be frustrating, however, to try testing and end up without spectacular results. So here are three steps you can take to ensure that the needle moves.
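Before concluding that a test was flat, it's worth checking whether the gap you saw is statistically meaningful at all. Here's a minimal sketch using a standard two-proportion z-test, with hypothetical click and impression numbers:

```python
from math import sqrt

def z_score(clicks_a, views_a, clicks_b, views_b):
    """Two-proportion z-test: is the clickthrough difference likely real?"""
    p_a, p_b = clicks_a / views_a, clicks_b / views_b
    p = (clicks_a + clicks_b) / (views_a + views_b)        # pooled rate
    se = sqrt(p * (1 - p) * (1 / views_a + 1 / views_b))   # standard error
    return (p_a - p_b) / se

# Hypothetical results: 120 vs. 150 clicks on 10,000 impressions each.
z = z_score(120, 10_000, 150, 10_000)
print(round(z, 2))  # |z| < 1.96 here, so this gap could still be noise
```

If |z| stays under roughly 1.96, the difference isn't significant at the 95% level; either keep the test running longer or, as the tips below suggest, test bolder differences.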
Tip #1. Test stronger differences
Sometimes a marketer who wants to run a test takes a basic version of the ad, makes some minor changes to it, runs the new one against the original, and wonders why the results for each ad are basically the same.
Generally, this happens because the two versions are too similar. This is often the case, by the way, in organizations where risk is not rewarded. Marketers are so concerned with not seeing the numbers go down that they avoid making significant changes that help numbers go up.
Sidestep this pitfall by trying three things:
- The 50-foot test: To ensure your two creative options are different enough, print out the banners, tape them to the wall, and step back as far as you can. Can you see the differences?
- Simplify: You might also try leaving out certain elements altogether. Often, marketers jam too much into a small space. Consider making one of the elements more prominent and skipping one of the lesser elements altogether.
- Get uncomfortable: Try at least one version that pushes the boundaries a bit. We generally see companies start off with a test or two that generate neutral results. That's usually because the test versions are too similar and the marketer is still in their comfort zone. Often, big improvements come from big risks. Remember, you're really not risking much, because it's just a test. If results are disastrous, you can stop the test immediately.
Tip #2. Run a test based on a theme or hypothesis
Another instance where tests return neutral results is when there's no real structure: there's Version A which is orange, say, Version B which has some Flash, and Version C that includes a photo of a woman.
Even when you find a winning version, you haven't learned much that you can use in the future.
Rather than trying two or three random differences, force your ideas into a structure. If, for example, you believe that photos of women work well, test an ad with a small picture, a medium picture, and a large picture.
Better yet, start with a hypothesis. If you believe your product will generate better conversions by using a softer, more human touch, rather than by trying to force the sale through an aggressive call to action, test that: include a version that has happy people with their arms around each other, another version that includes a lot of factual content, and another that takes a direct marketing, aggressive approach.
Tip #3. Add a row
Say you test three very different ads: one with a lot of products, one with animation, and one with dramatic graphics. You find a winner and you're happy with the results. Yet you don't feel you've found out anything of lasting value. That may be because you didn't test based on a theme or hypothesis (see Tip #2).
At this point, you could take the winner and begin doing multivariate testing to improve it (as you can and should any time you determine a winning ad). But you might also want to dig more deeply into the original three tests to see if you can't discover more about them.
In other words, take Version A, the one with a lot of products, and create two more versions alongside it (call the original Version A1 and the new ones A2 and A3). Version A2 has even more products, while Version A3 is simpler.
We call this "adding a row." It gives you the opportunity to dig more deeply into the test. You end up with six or nine different approaches, and if the As consistently win, then you can look further: do all three As win? This is useful because it helps determine whether a particular creative treatment works consistently. You can begin to see patterns and develop learnings which you can use in future tests.
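The "do all three As win?" check can be made concrete with a small sketch. The conversion rates below are hypothetical; the point is simply to compare each theme's row against every cell outside it:

```python
# Hypothetical conversion rates from a 3x3 "add a row" grid:
# rows are the themes (A = lots of products, B = animation,
# C = dramatic graphics); columns are the variations within each theme.
rates = {
    "A": [0.031, 0.035, 0.029],
    "B": [0.022, 0.024, 0.021],
    "C": [0.025, 0.026, 0.027],
}

for theme, cells in rates.items():
    others = [r for t, row in rates.items() if t != theme for r in row]
    sweeps = min(cells) > max(others)  # does every cell in this row win?
    print(f"{theme}: avg {sum(cells) / len(cells):.4f}"
          + (" -- consistent winner" if sweeps else ""))
```

When a theme's worst cell still beats every competing cell, you've learned something about the theme itself, not just about one execution of it.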
Once you've become adept at testing your basic banner ad, you're ready to move on to more difficult (and exciting!) tests. When you've pinpointed an ad that beats the others, for example, begin multivariate testing to see how much better you can make it. Play around with the copy, the colors, the smaller stuff that you might have wanted to test in the first place.
Even more fun, you might want to try segmenting your audience. See if visitors who have clicked on your banner once and are seeing it for the second time behave differently from visitors who are seeing it for the first time. Run a test of a new banner on those second-time viewers.
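The segmentation logic is simple in principle. A minimal sketch, assuming your ad server can tell you (via a cookie or frequency flag) whether a visitor has already seen the banner; the variant names are hypothetical:

```python
def pick_banner(seen_before: bool) -> str:
    """Serve the test creative to repeat viewers, the control to new ones.

    Assumes the caller already knows exposure history, e.g. from a
    frequency-capping cookie set on the first impression.
    """
    return "new_creative" if seen_before else "original"

print(pick_banner(False))  # a first-time viewer gets the original
print(pick_banner(True))   # a repeat viewer gets the new banner
```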
These three steps will help your organization identify banner ad creative that best resonates with your target audience. Not only will you increase clickthrough and ROI, you'll also learn about the preferences and motivations of a very important group of people: your future customers.
Jamie Roche is president of Offermatica.