The cookie window
If you do not set up a long-term viewthrough cookie window, you miss the cumulative effect that multiple impressions may be having on your audience. I would suggest two months. Most businesses plan online month to month, and a two-month cookie window lets you see the effect of the previous month's plan in relation to the current month: variability in spend, site selection, frequency and so on. Most clients I know have a one- or five-day cookie window, and they attribute all of those users as having been driven by online advertising. That is where the discrepancy lies. You cannot do that.
What you should do is run a blind test comparing a cohort of users who saw your advertising and didn't click against those who did. Plot that against frequency of exposure and time between exposures, and you should be able to see the lift those exposures produced. To do that, however, your cookie window needs to be long enough for the cumulative data to be meaningful. You can then extract information like this:
Five exposures over one week result in a 20 percent lift in overall response, while more than seven exposures in a week add no further lift. Three exposures in one day seem to result in a 16 percent lift. One hundred exposures over a month indicate a 400 percent lift.
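The blind-test comparison above can be sketched in a few lines. This is illustrative only: the control rate and the exposed-cohort response rates below are hypothetical placeholders, not real campaign numbers, and the bucketing by frequency is an assumption about how you would cut the data.

```python
# Illustrative sketch of measuring viewthrough lift from a blind test.
# All response rates below are hypothetical placeholder data.

def lift(exposed_rate, control_rate):
    """Percent lift of the exposed (saw the ad, did not click) cohort
    over the unexposed control cohort."""
    return (exposed_rate - control_rate) / control_rate * 100

control_rate = 0.010  # baseline response rate, never-exposed cohort

# response rate per cohort, bucketed by exposure frequency
exposed = {
    "5 exposures / week": 0.0120,
    "3 exposures / day":  0.0116,
}

for bucket, rate in exposed.items():
    print(f"{bucket}: {lift(rate, control_rate):.0f}% lift")
```

With a long enough cookie window, the same calculation can be repeated per frequency bucket and per time-between-exposures bucket to build the full picture.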
It's this type of insight that will enable you to structure your ad buys and your program delivery for optimal results.
Viewthrough is not a causal relationship
Because viewthrough isn't about causation, you should not use a 1:1 causal metric that attributes every view-then-visit to your ad. Only a percentage, a fraction of that traffic, should be attributed to the ad. That fraction is intimately tied to your cookie window, and it appears to be inversely logarithmic: the more time that has passed since the view of the ad, the steeper the falloff in impact. Plot that curve based on your cookie window and the blind study, and plot it against your spend. There should be an optimal point at which your spend level demonstrates the most efficient delivery. It's not rocket science; it just takes some work to get at the data you need.
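One possible shape for that "inversely logarithmic" decay can be sketched as follows. The functional form, the 60-day window, and the specific weights are assumptions for illustration; the real curve should be fitted from your own blind-study data.

```python
import math

# Illustrative sketch only: one assumed form for an inversely
# logarithmic decay of viewthrough credit. The shape and the 60-day
# window are placeholders to be replaced by a curve fitted from a
# blind study.

WINDOW_DAYS = 60  # the two-month cookie window suggested above

def decay_weight(days_since_view):
    """Fraction of full credit an impression retains after N days:
    1.0 on day 0, falling steeply at first, reaching 0.0 at the
    edge of the cookie window."""
    if days_since_view >= WINDOW_DAYS:
        return 0.0
    return 1.0 - math.log(1 + days_since_view) / math.log(1 + WINDOW_DAYS)

for d in (0, 1, 7, 30, 59):
    print(f"day {d:>2}: weight {decay_weight(d):.2f}")
```

Plotting a fitted version of this curve against spend is what exposes the point of most efficient delivery.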
Once that work is done, however, you do not have to worry about the complex analysis. Just take your view-window data and multiply it by your percentage impact metric. I would suggest rerunning the analysis every six months to make sure the assumptions are still valid. Any more often than that and you're wasting resources.
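The ongoing calculation really is that simple. A minimal sketch, with hypothetical numbers standing in for your own view-window counts and blind-study result:

```python
# Minimal sketch of the day-to-day calculation once the heavy analysis
# is done: viewthrough visits in the cookie window times the impact
# percentage derived from the blind study. Both numbers are hypothetical.

viewthrough_visits = 50_000  # visits within the cookie window after a view
impact_pct = 0.12            # fraction the blind study attributes to the ad

attributed_visits = viewthrough_visits * impact_pct
print(f"Attributed visits: {attributed_visits:,.0f}")  # prints "Attributed visits: 6,000"
```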
The last cookie wins
The first impact of this is on your creative. It's about the corpus of messaging, the whole enchilada, not just a single piece of creative. A single banner creative is useless on its own; it means nothing. Stop micro-analyzing the data as if it did: poring over weekly analytics reports, tweaking this placement or that, with your head buried in the sand. You are burning through resources and accomplishing what? A 0.025 improvement in your click metric?
Use your resources effectively and look at the big picture. You have your program so tightly wound, so tweaked, that you've micromanaged yourself into a corner. Any change crumbles your precarious house of banners. Or does it? Don't touch it for two weeks. What was your efficiency hit? Calculate it in dollars. Could those two weeks, all those hours of your own and your agency's resources being burned, have been used to set up something that provides exponential success, not incremental? Calculate just the time your agency billed you. Can you recover that by using your time more efficiently?
Use the performance metrics of creative as guidelines. They all work together. If some of your creative is really outperforming the rest, that is relevant: apply the learning from what that creative is doing, but do not constantly tweak the messaging on creative that is not working. You may have different goals for that creative, so judge it on those. If the goal is to communicate a point of difference for your product, and that is important long-term, then measure it on that.
The tools are out there
Look at the bigger picture; the longer term. Did you use Dynamic Logic or Insight Express to set up ongoing effectiveness studies for attitudinal effects on consumers that translate into higher site usage? What about ForeSee providing customer satisfaction information on your site and marrying it with ad entry? How about tracking Net Promoter score over time for groups exposed and not exposed to your online efforts? No? Are you a marketer or a Luddite?
Atlas has developed some new technology to try to get at the overall exposures of creative, and multivariate testing tools from companies like Memetrics, Omniture and Optimost have enabled direct marketers online to hone the funnel experience. But what is the real effect of all of those ad exposures? Omniture acquired Visual Sciences last year and integrated it with its own Site Catalyst tool to create a system that is extremely robust in analyzing the full funnel effect of people entering your online presence.
They can run test models and allow you to do "what if?" scenarios. But unless you are a large-scale enterprise that can afford such systems, you are left in the dust. BuzzMetrics allows you to track the internet hum across the blogs, message boards and substrata of the web. There are even econometric modeling solutions from the major media players, but they still mostly fall short. The traditional economic modelers tend to develop models that, when they incorporate online, become so chaotic as to be useless. They are all approaching it from the wrong side of the equation. Absolute Data is the only company I have seen that starts at the digital end and works backward in its modeling. That's where all the data is. And it seems to work.