Today, more and more marketers use display advertising for brand goals -- but how can you tell whether your online brand campaign is performing? There are many ways to gauge the effectiveness of a direct response campaign -- mainly by measuring clicks, leads, or conversions -- yet figuring out the impact of a brand-focused campaign is much more difficult.
What if a consumer sees your ads, has a favorable brand response, and later buys your products online, but never clicked on the ad? Or, what if the person sees your ads many times and ends up becoming a loyal in-store customer?
In these cases, just counting clicks and conversions doesn't tell the complete story, because those metrics don't measure brand awareness, brand sentiment, or offline conversions. Using "in-banner" surveys is a great way to measure the brand impact of display campaigns. They've been around for a while, but too often marketers launch poorly designed surveys that artificially skew results or make them unusable altogether. My company has helped brand marketers collect over 300,000 survey responses on brand campaigns over the last 12 months, and we've learned what works and what doesn't.
If you're planning to use in-banner surveys to measure brand impact, here are five tips for how to achieve accurate campaign measurement.
Get the questions right
Never use the brand or product name in the question portion of the survey to measure brand awareness or sentiment. Instead, list your brand or product with competitive products in the answer set.
For example, don't just use a generic question, such as, "How likely are you to purchase XYZ product?" Instead, use more meaningful questions such as, "Which products do you plan to purchase?" or "Which products would you consider purchasing?" Then, give respondents answer choices such as your product, competitor one, two, and three, and "not sure" or "none of the above." This type of question-and-answer design minimizes bias, ensures no consumer answering the question knows which brand is asking it, and maintains higher response rates to your questions.
Plan to collect 400 responses per segment
Statisticians find that 400 responses per question provide a statistically stable sample, but this assumes the survey is sampled randomly across a homogeneous population.
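The "400 responses" rule of thumb comes from the standard margin-of-error formula for a proportion: at 95 percent confidence and the worst-case split (p = 0.5), 400 responses yield roughly a plus-or-minus 5 percent margin. A quick sketch of the arithmetic (the function name is illustrative, not from any survey tool):

```python
import math

def margin_of_error(n, p=0.5, z=1.96):
    """Margin of error for a proportion estimated from n survey responses,
    at the confidence level implied by z (1.96 = 95 percent)."""
    return z * math.sqrt(p * (1 - p) / n)

print(round(margin_of_error(400), 3))  # → 0.049, i.e., about ±5 percent
```

Note that halving the margin of error requires quadrupling the sample, which is why cutting corners on response counts degrades results so quickly.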
If you are using segmentation across your campaign (where segment A might be a contextual ad buy with several comScore Top 100 publishers while segment B could be a web-wide demographic buy using programmatic access to demographically qualified inventory), you actually need at least 400 responses per segment. To measure lift in awareness or brand sentiment, you need 800 responses per segment, so you can compare pre- and post-campaign exposure. For a campaign with six segments, this means you need 4,800 (6x800) survey responses to measure each segment's impact on the media. Don't skimp on the number of survey responses collected, or your data sample won't be meaningful.
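The arithmetic above can be captured in a small helper (the function name and defaults are illustrative, following the 400-per-segment rule and the doubling needed for pre/post lift measurement):

```python
def responses_needed(num_segments, per_segment=400, measure_lift=True):
    """Total survey responses required for a segmented campaign.

    Doubles the per-segment count when measuring pre- vs. post-exposure
    lift, since each segment needs both a control and an exposed sample.
    """
    per = per_segment * 2 if measure_lift else per_segment
    return num_segments * per

print(responses_needed(6))                      # → 4800, the six-segment example
print(responses_needed(6, measure_lift=False))  # → 2400, awareness-only measurement
```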
Use 300x250 ad sizes with black backgrounds and white text
Recent testing has shown black backgrounds with white text have 28 percent higher response rates than average, while the 300x250 ad size yields 86 percent higher response rates. Use this format to give your in-banner survey the best chance of being answered by the highest number of respondents.
Weight survey responses based on the media delivered
If you intend to serve media to two segments, you should gather between 800 and 1,600 responses to your question. If media delivery has an 80/20 split (in the above example, 80 percent could go toward segment A while 20 percent goes toward segment B), you need to weight the survey responses from the 80 percent segment four times more heavily than those from the 20 percent segment. Weighting minimizes the impact of high-response segments while accurately measuring the performance of the overall campaign.
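As a minimal sketch of that weighting step -- the segment names, metric values, and 80/20 split below are hypothetical -- each segment's survey metric is weighted by its share of delivered media rather than by its share of collected responses:

```python
def weighted_campaign_metric(segment_results, media_share):
    """Combine per-segment survey metrics into one campaign-level figure,
    weighting each segment by its share of delivered media impressions
    rather than by how many survey responses it happened to collect."""
    return sum(segment_results[s] * media_share[s] for s in segment_results)

# Hypothetical example: segment A received 80 percent of the media and
# segment B received 20 percent, even though both collected the same
# number of survey responses.
results = {"A": 0.30, "B": 0.50}  # e.g., share of respondents aware of the brand
share = {"A": 0.80, "B": 0.20}
print(round(weighted_campaign_metric(results, share), 2))  # → 0.34
```

Without the weighting, a simple average of the two segments would report 0.40 and overstate the contribution of the smaller buy.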
Use what you learn to improve current and future campaigns
It's best to collect survey responses during your campaign, then apply learnings to optimize the segment mix. This will improve overall performance of the campaign while it runs. Ideally, you'll have a solution provider that can integrate those learnings in real-time into their prediction and optimization models to maximize campaign success with live response data.
If you are analyzing the data post-campaign, compare brand metrics against clicks, coupons, and lead data to see where there are strong correlations (e.g., applying Pearson's correlation in your analysis). If you find a negative correlation between brand metrics and coupon performance (for example, brand metrics decline as coupon performance increases), plan to focus your next campaign on optimizing to survey responses rather than coupons. If you find they are strongly correlated, then it's possible to use coupon performance as an indicator going forward for optimization of future campaigns.
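That post-campaign check can be sketched as follows -- the weekly brand-awareness and coupon-redemption numbers here are made up for illustration, and the `pearson` helper is a plain implementation of Pearson's correlation coefficient:

```python
import math

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length series."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical weekly data: brand-awareness lift vs. coupon redemptions.
brand = [0.12, 0.15, 0.11, 0.18, 0.20]
coupons = [340, 310, 360, 280, 250]

r = pearson(brand, coupons)
# r is strongly negative here: brand metrics fall as coupon redemptions
# rise, which would argue for optimizing the next campaign to survey
# responses rather than coupons.
```

For production analysis, a library routine such as `scipy.stats.pearsonr` also returns a p-value, which helps rule out correlations that are artifacts of a short campaign.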
Using in-banner surveys is a great way to tell whether your online campaigns are improving brand reach and sentiment. Just remember to ask the right questions, ask enough people, and measure responses in real time as the campaign runs to get the most value out of in-banner survey programs.
On Twitter? Follow iMedia Connection at @iMediaTweet.
"Survey" image via Shutterstock.