Few people would disagree that online advertising has come a long way in recent years. And yet, even as the technology races ahead, a debate continues to swirl around two very basic questions: Can an online ad still be effective if no one clicks on it? And if so, is there a reliable way to measure the effect of such impressions?
These questions might be basic, but they're also critical. After all, the vast majority of ads are never clicked on. When a user sees an ad but only visits the site hours or days later, it's known as a view-through visit or post-impression visit. If a user then converts or spends money, it's referred to as a view-through conversion or view-through revenue. Brands can verify a view-through via a cookie that is dropped on the user's browser when the impression appears.
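The cookie-matching logic described above can be sketched in a few lines. Everything here is illustrative: the cookie IDs, timestamps, and the `count_view_throughs` helper are assumptions for the sake of the example, not any ad server's actual implementation.

```python
from datetime import datetime, timedelta

# Hypothetical impression log: cookie ID -> time the ad was shown.
impressions = {
    "cookie_a": datetime(2013, 5, 1, 9, 0),
    "cookie_b": datetime(2013, 5, 1, 10, 30),
}

# Hypothetical site-visit log: (cookie ID, time of visit). No clicks involved.
site_visits = [
    ("cookie_a", datetime(2013, 5, 2, 8, 0)),   # visits 23 hours after the impression
    ("cookie_c", datetime(2013, 5, 2, 15, 0)),  # visitor who never saw the ad
]

def count_view_throughs(impressions, visits, window=timedelta(days=1)):
    """Count visits that follow an impression within the attribution window."""
    count = 0
    for cookie, visit_time in visits:
        shown = impressions.get(cookie)
        if shown is not None and timedelta(0) <= visit_time - shown <= window:
            count += 1
    return count

print(count_view_throughs(impressions, site_visits, timedelta(days=1)))  # 1
```

Only the visit whose cookie also appears in the impression log, within the window, is credited; the unrelated visitor is not.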
The major complaint about view-throughs is obvious enough: You just never know. Sure, someone saw a Zappos impression and then visited Zappos.com the next day, but how can we really know if the impression led to the site visit? Maybe that same day the user ran into an old friend who was wearing a nice pair of shoes that she'd just bought on Zappos.
Online advertising was supposed to make this sort of uncertainty a thing of the past. By relying on clicks, marketers could know exactly what worked, making the entire industry infinitely more efficient. Skeptics maintain that tracking view-throughs takes online advertising in the wrong direction -- back to the days when all you could do was make a big spend and hope for the best.
View-through skepticism may well stem from a critical mistake made early in the history of the view-through. The time between the appearance of the ad and the visit to the site is known as the view-through window, and the length of this window matters. If you see a Zappos banner and visit Zappos.com a year later, not even the most stalwart view-through supporter would suggest attributing the visit to the ad impression.
Setting the right window can be tricky because the right length of time depends on the nature of the campaign. Unfortunately, when view-throughs first came onto the scene, the windows were usually set to 30 days. That might be appropriate for campaigns asking users to consider major purchases, such as automobiles, but for most ads, 30 days is a long time to wait between the impression and the visit. The shame is that, to this day, many marketers don't realize that view-through windows are adjustable and can be set for as little as 24 hours.
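To make the window's effect concrete, here is a minimal sketch showing how the same set of impression-to-visit lags produces very different totals under a 30-day window versus a 24-hour one. The lag values and the `attributed` helper are hypothetical.

```python
from datetime import timedelta

# Hypothetical lags between seeing the ad and visiting the site, one per converting user.
lags = [
    timedelta(hours=3),
    timedelta(hours=20),
    timedelta(days=4),
    timedelta(days=25),
]

def attributed(lags, window):
    """Number of conversions that fall inside the view-through window."""
    return sum(1 for lag in lags if lag <= window)

print(attributed(lags, timedelta(days=30)))   # 4 -- a 30-day window claims everything
print(attributed(lags, timedelta(hours=24)))  # 2 -- a 24-hour window is far stricter
```

The 30-day default credits every conversion, including ones nearly a month out; a 24-hour window keeps only the visits that plausibly trace back to the impression.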
The fact that media owners have tried to game the system hasn't helped quell the skepticism. The scheme is simple: You buy up millions of super cheap impressions and spread your cookies far and wide so that you can take credit whenever a user ends up on a site that is running a display campaign. Still, if "cookie stuffing," as the practice is known, was once a legitimate issue, it's now quite rare, and thus no longer a strong argument against tracking view-throughs.
Hey Dax, and thanks for an interesting article. I completely agree with what you write, but I also understand the customers who point out that many ads are never actually seen (comScore research: 31% of display ads go unseen). They often refer to unseen ads below the fold, which can still be credited with a "view-through conversion." What are your thoughts on this, and what's the best solution? Guaranteed in-screen ads? Cheers // Jakob
Apologies - this was meant for another line of questioning!
While the topic of attribution is certainly getting a lot of attention, I am concerned that it's being discussed in a far more simplistic fashion than it deserves (both here and in other arenas). Attribution is something that should be driven by data scientists, statisticians, etc. Oversimplifying it is dangerous when organizations intend to act on those models by allocating real ad-spend dollars based on the output.

A well-constructed attribution analysis is a complex statistical undertaking, and the specification of the model to be used is a critical element of ensuring usable, accurate output. Additionally, interpreting the output of such models into actionable insights, as well as modifying or optimizing those models, requires knowledgeable and appropriately qualified eyes. Consequently, while the SaaS siren call of an ad-tech point solution is certainly enticing, it's clearly not the full measure of the solution that's truly called for. At least, a qualified data scientist should be operating that solution; at best, your data scientists and statisticians are actively researching and creating specific models for your business based on best practices from industry and academia.

To that end, I'd also clarify Mediaplex's market position. While it's true that our lineage is in the ad-serving business, we are now a fully functional marketing analytics company, providing full-service marketing analytics, custom analytics project support, data strategy auditing, and attribution analysis.
Salomon - good question. There are ways to analyze the data to measure the drop-off curve, but in reality, the best data often comes from the client. By looking across all marketing channels an advertiser will likely have a feel for how long the consideration period is for their product or service.
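One simple way to measure the drop-off curve mentioned above is to bucket impression-to-visit lags by day and watch where the cumulative share of visits levels off; widening the window past that point adds credit without adding evidence. The lag values and `lag_histogram` helper below are purely illustrative.

```python
from collections import Counter

# Hypothetical impression-to-visit lags, in hours, pulled from ad-server logs.
lags_hours = [2, 5, 8, 20, 22, 30, 45, 70, 200, 500]

def lag_histogram(lags_hours, bucket_hours=24):
    """Bucket lags by day to see where view-through visits drop off."""
    return Counter(lag // bucket_hours for lag in lags_hours)

hist = lag_histogram(lags_hours)

# Cumulative share of visits captured as the window widens, day by day.
total = len(lags_hours)
captured = 0
for day in sorted(hist):
    captured += hist[day]
    print(f"window = {day + 1} day(s): {captured / total:.0%} of visits captured")
```

In this toy data, half the visits land within the first day and the curve flattens quickly afterward, suggesting a short window captures most of the plausible view-throughs.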
Ok, but what techniques do you use to identify the best view-through window? Normally, the longer the window, the better your results will look (i.e., the Zappos one-year example).