"The hierarchy of effects" is one of the oldest consumer behavior models around. Its direct marketing application goes by A.I.D.A.: Attention, Interest, Desire, and Action.
Direct marketers have formatted their communications this way for years. And with online having so many ties to direct, it only makes sense that the model would be adapted to the online medium.
It's always seemed to me that online advertising's staple banner/landing page combo splits A.I.D.A. down the middle: attention and interest become the responsibility of the banner, while desire and action go to the landing page. That's a simplification, of course, but there's no denying that these two closely connected components of the same marketing effort play considerably different roles.
The question, then, is how much, if at all, the messages consumers respond to might change between the ad and the landing page.
There are several reasons such differences could exist. For one, ads and websites are vastly different communication vehicles. Online ads contain little room for elaborate explanations of any sort, compared to websites, which can read like short stories in good examples and owner's manuals in bad ones. It shouldn't surprise us that a message might work better in one environment than in another.
The user experience is also dramatically different in the two environments. The ad is an intrusive experience; the website, a selective one. This could account for a considerable shift in mindset, as visitors to the site should be more receptive to product information.
Our aim is to test a similar range of product features and benefits, as well as different content options on the website to see which register as most popular with our visitors. To further the analysis, we'll then compare this to previous ad tests, and see if what's happening within the ad environment matches with what's happening on the site.
To accomplish our testing goals, we're going to use Offermatica's multi-variate testing technology. I was introduced to Offermatica at the beginning of this case study, and I'm impressed not only by the technological capabilities for testing a broad range of variables in a short period of time, but also the way it works within the creative process. The end result is a natural-looking design structure for testing a remarkable number of home page variations.
To explain the testing construct, I'm going to turn it over to Jamie Roche, president of Offermatica. Jamie, take it away.
Thanks, Doug. Before we get into the details of the test design, here's a quick overview of what Offermatica does: Offermatica takes over specific regions on a page, the way that Akamai or DoubleClick does, and then serves content to those regions "in-line." So when the prospect downloads the page, some of the content comes from the company's servers while the content for the designated regions comes directly from Offermatica.
The content from Offermatica is delivered based on different rules that we define, depending on the needs of the campaign. These rules can be as simple as: "Randomly split the traffic into three equal parts and serve option A, B or C to each of these three groups."
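A rule like that amounts to a weighted random assignment. Here's a minimal sketch of the idea in Python; this is purely an illustration of the logic, not Offermatica's actual rule syntax or implementation:

```python
import random

def assign_variant(variants=("A", "B", "C")):
    """Randomly split traffic into equal groups, one per content option."""
    return random.choice(variants)

# Tally 9,000 simulated visitors to confirm the split is roughly even.
counts = {v: 0 for v in ("A", "B", "C")}
for _ in range(9000):
    counts[assign_variant()] += 1
```

With equal weighting, each of the three groups should receive close to a third of the 9,000 simulated visitors.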
Or, the rules can contain targeting or personalization variables. For example, "Send everybody to the same homepage, but show humorous copy to the people who saw the Rasputin ad, and show straight copy to people who saw the Café ad."
These two rules can also be combined to serve different tests to targeted groups.
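A combined rule first targets on the referring ad, then runs a random test within each targeted group. The sketch below uses the article's Rasputin and Café ads as the targeting keys; the variant names and the dictionary-based rule structure are hypothetical illustrations, not Offermatica's API:

```python
import random

# Hypothetical combined rule: target by referring ad, then test within
# each targeted group. The ad names follow the article's examples.
TESTS = {
    "rasputin": ["humorous copy A", "humorous copy B"],
    "cafe": ["straight copy A", "straight copy B"],
}

def serve_content(referring_ad):
    """Pick the test pool for this visitor's ad, then randomly assign a variant."""
    pool = TESTS.get(referring_ad, ["default copy"])
    return random.choice(pool)
```

Visitors from the Rasputin ad only ever see humorous variants, Café visitors only straight ones, and unrecognized traffic falls back to the default.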
That's what we've done with the Sugarshots project: using a combination of testing and targeting, we'll be serving different content to different targeted groups in order to explore the connection between the ad and the site.
Because the ads offer very little space to describe Sugarshots, we wanted the landing page to fill in some details about benefits and use. But how much detail is the right amount, and what's the best way to communicate it? That's the question around which we designed our test.
The original home page is an interactive picture of four labels, each representing a flavor of Sugarshots. When a prospect moves the mouse over the labels, the bottles appear above a small amount of text explaining the flavor. We decided to test different versions of three different elements of that home page, in addition to the original (or default) home page. The three elements, and the different versions of those elements, are:
Element #1. A single large main image instead of the four labels
The three versions to test: pictures of three of the flavors of Sugarshots; a picture of a woman drinking coffee; and a line drawing of a café.
Element #2. A set of three smaller thumbnail images
The versions to test: photographs of coffee, tea, and oatmeal, representing three of the main uses of Sugarshots; and line drawings of a man pouring Sugarshots into his coffee.
Element #3. A block of copy describing the features and benefits of Sugarshots
The versions to test: a single benefit with lots of detail; two benefits with less detail; and four benefits with no additional detail.

Offermatica will divide traffic depending on originating source and serve two series of tests, one for Rasputin traffic and one for Café traffic.
There are eighty-one different ways that these new elements can be combined. With three sources of traffic, this makes 243 possible scenarios. If we used traditional, single-element A/B tests, with the current level of traffic, it would take longer than a year to run all of the tests.
But multivariate testing is like running thousands of A/B tests all at once. With this approach, we will be able to identify, within one to two weeks, not only which version of the landing page works best with which ad, but also which specific elements on the landing page are responsible for the success.
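The element-level readout is possible because every visitor's page recipe records which version of each element they saw, so conversions can be aggregated per element version across all recipes. A minimal sketch of that aggregation, using invented records rather than real Sugarshots data or Offermatica's actual reporting:

```python
from collections import defaultdict

# Each record: the version of each element a visitor saw, plus whether
# they converted. The data here is invented purely to show the aggregation.
visits = [
    {"main_image": "woman", "thumbnails": "photos", "copy": "one_benefit", "converted": True},
    {"main_image": "cafe_drawing", "thumbnails": "drawings", "copy": "four_benefits", "converted": False},
    {"main_image": "woman", "thumbnails": "drawings", "copy": "one_benefit", "converted": True},
    {"main_image": "flavors", "thumbnails": "photos", "copy": "two_benefits", "converted": False},
]

def conversion_rates(records, element):
    """Conversion rate for each version of one element, across all recipes."""
    seen, won = defaultdict(int), defaultdict(int)
    for r in records:
        seen[r[element]] += 1
        won[r[element]] += r["converted"]
    return {version: won[version] / seen[version] for version in seen}

rates = conversion_rates(visits, "main_image")
```

In this toy data the "woman" main image appears in two recipes and both convert, so its rate is 1.0 regardless of which thumbnails or copy accompanied it; that independence is what lets a multivariate test credit individual elements.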
Jamie Roche is the founder and president of Offermatica.
Doug Schumacher is the president of Basement, Inc.