A study of internet advertising's influence on purchase decisions finds Yahoo No. 1 overall and Google No. 4.
According to BIGresearch's latest Simultaneous Media Study (SIMM) of more than 15,000 respondents (December 2005), Google users are the most influenced by internet advertising when deciding to buy electronics (30.5 percent), followed by Yahoo (27.5 percent), MSN (24.9 percent), AOL (23.9 percent), and Ask Jeeves (20.8 percent). Google also finishes first for purchases of telecom services (10.3 percent), with Yahoo and MSN tied for second (10.1 percent).
However, Yahoo finishes No. 1 for home improvement, where Google is fourth. Yahoo is also on top for medicines (Google is No. 5), and Yahoo finishes first for eating out, with Google third.
Other number one categories: MSN finishes first on car/truck purchase decisions at 12.6 percent; AOL finishes first for apparel/clothing (17.8 percent) and grocery (13.5 percent).
If we tally the search engines' rankings across category purchase decisions (lower score is better), Yahoo finishes No. 1 with a score of 13, placing first or second in every category; MSN is No. 2 at 21; AOL No. 3 at 23; Google No. 4 at 26; and Ask Jeeves No. 5 at 32.
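The tally above can be sketched as a simple rank-sum score. The three-category example below uses the ranks the article reports where available; MSN's ranks for home improvement and medicines are hypothetical placeholders, and the study itself spans eight categories.

```python
# Rank-sum scoring: sum each engine's per-category rank;
# the lowest total wins. MSN's last two ranks are hypothetical,
# not the SIMM study's actual figures.
ranks = {
    "Yahoo":  [2, 1, 1],   # electronics, home improvement, medicines
    "Google": [1, 4, 5],
    "MSN":    [3, 3, 2],   # home improvement/medicines ranks assumed
}

scores = {engine: sum(r) for engine, r in ranks.items()}
leaderboard = sorted(scores, key=scores.get)  # lowest score first
print(leaderboard)
```

With these inputs Yahoo's low total puts it first, mirroring the study's overall finding.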
What this means for advertisers and marketers: Yahoo is the best place for internet advertising to have a direct effect on purchase decisions by category of merchandise.
Influence of Internet Advertising on Purchase Decisions by Search Engine Preference
Boldly go where they've never gone before
After hearing an earful about the limitations of internet video from a content perspective, I ask the room if they've ever heard of Joost, a video destination site still in beta testing. Only Todd and Andrea, who downloaded the small application ahead of time, know about the site. But nobody in my ad hoc focus group has high expectations for what they're about to see.
With the click of a button, Todd transports us all from the familiar -- and pedestrian -- YouTube to Joost, where Adam quickly remarks that he finds the streamlined layout pleasing to the eye.
But the crowd is still wary, having been burned by too many bad internet video experiences. And so when Todd selects a full-length "Star Trek" episode, the room settles in for a laugh at the expense of yet another platform that promises the world while delivering ad-supported buffering.
To everyone's surprise, a high-quality video begins to play instantly and without interruption either from wonky technology or distracting ads.
Barrett marvels at the quality, while Andrea takes note of a friendly banner ad precisely because it does not interrupt her viewing experience by appearing inside the stream or flashing while the video plays.
But Adam and Michelle, who say they like what they see, remain cynical.
"Todd has the right equipment for this," Michelle says, as Adam notes that even the best computer screen isn't as comfortable or familiar as his couch and TV.
Yet the more the group plays with Joost, the more they seem willing to acknowledge a world where internet video has more to offer than Chris Crocker pleading with the world to leave his beloved Britney Spears alone.
Todd smiles and says he's going to look to Joost for at least some of his video content. But the others aren't so sure…
The catch, the catch
Everyone laughs when Hervé Villechaize shouts, "The plane, the plane!" as Todd opens an old episode of "Fantasy Island." But after a few laughs, it becomes clear that Joost has cornered yesterday's content market, not today's.
We open the Comedy Central channel and disappointment sets in.
"It's got everything you don't want to watch," Todd says.
While Joost has high-quality content, its offerings aren't as competitive as the group had hoped.
Todd wants to watch old episodes of "Star Trek" and Barrett is intrigued by an Elvira-branded horror film channel, but the others aren't so keen on the content offerings.
Adam sums it up, saying Joost is a great platform that needs better content.
Looking for different, newer content, we check out Veoh.
Barrett and Adam criticize the layout, which appears cluttered to them compared to Joost.
But when we find an episode of "Arrested Development," everyone is pleased until the pre-roll appears.
To be fair, it's a short clip from Cisco that doesn't last more than a few seconds, and after it runs, the video plays without commercial interruption.
For Andrea, pre-roll is just another place where video is likely to buffer or fail altogether. She's been there before and she's cut shows out of her media diet because of it. So when she sees pre-roll, she doesn't think about the ad, she thinks about the likelihood that she won't get to see her favorite show.
Adam, who says he's OK with the pre-roll, makes a point that TV advertisers are coping with on a daily basis.
"I could just as easily watch this on TV without the ads because I have TiVo," he says.
Barrett, who doesn't mind the pre-roll, says it's still a tough sell.
"It's all about expectations," Barrett says. "If I'm watching TV, I expect ads every 10 or 15 minutes, but I know I'm also getting something done by professionals with an excellent picture. On the internet, I don't have those expectations, and so I just don't have the patience for the ads, especially if it's short content."
But that's not the only problem with Veoh.
But the picture quality leaves something to be desired. Although Veoh loads relatively fast and doesn't buffer, its picture is noticeably worse than Joost's, and soon everyone wants to go back to Joost or, worse yet, break out a DVD. So even though the group rates Veoh's picture quality as high compared to what they see elsewhere on the web, it's still not high enough to sustain a 30-minute show.
Back to You, Tube
I try to show the group one more site, but the fact that the unnamed destination is down highlights how precarious long-form internet video really is.
So when I instruct Todd to load [website name omitted], the group turns to me as if to say, see, this is the kind of uncertainty we deal with when we try to watch video online.
Then someone remembers that there's footage of an exploding whale on YouTube and we return to the comfort of internet video's established powerhouse.
As debris from the beached whale carcass falls from the sky, and someone remarks that this is the great thing about the internet, I can't help but ask if anyone has ever seen an ad on YouTube.
YouTube launched its pilot ad program last year and has had trouble making it work. No one has seen an ad, but they all have a piece of advice for the Google-owned video company -- keep the ads out of the video player.
There's something terribly basic about TV from a user perspective. You watch the show, the ads come on, you go get a snack, and you watch the rest of your show. But while internet video may look a lot like TV (assuming the content and the quality make their way to the computer screen), the advertiser/user relationship is something quite different.
That old exchange between brands and people didn't require advertisers to delve into the minutiae of a consumer's viewing habits. After all, the medium and the advertisers set the rules. But internet video is first and foremost the domain of the users, so it's worth finding out what they think.
Perhaps you've played with some of today's video portals or done qualitative research with a user that goes beyond the usual metrics. If so, I encourage you to join the conversation and share your thoughts with the community by posting a comment.
Editor's note: Adam, one of the people who expressed concern that Joost only looked good because of Todd's expensive monitor, tried to run the application at home. To his dismay, he got a message telling him his machine couldn't support Joost even though his computer met all of the site's requirements. Slaying the TV giant is indeed a tricky endeavor.
Michael Estrin is associate editor at iMediaConnection.
3) Create a level playing field. While it sounds obvious, the test has to be a fair one. That means all parties get pixels placed on the same pages at the same time. They all get the same creative concepts in the same sizes. They receive the same budgets, goals, and targeting instructions. Use a neutral third party, ideally an ad server like DART or Atlas, to judge performance. A non-level playing field essentially negates the test, and in our experience, this happens not because anyone is trying to stack the deck, but because of simple oversight. In one example, DSP pixels arrived at different times and were placed in different batches in the client's tech queue, so DSP No. 1's pixels went up two weeks sooner than DSP No. 2's, giving DSP No. 1 an unfair head start in building its cookie pool.
The most debated decision here is whether to run the DSPs sequentially or concurrently, and there are pros and cons to each approach. Sequential testing means DSPs don't compete in their bidding, but it poses significant drawbacks, namely a much longer time to complete and apples-to-oranges results. Running DSPs at different times involves not just product-specific seasonality, but even worse, high variability over time in the underlying display landscape (i.e., new exchanges, publishers moving inventory into and out of exchanges, variation in clearing prices, etc.). Concurrent tests are faster to complete and generate apples-to-apples results. The most common concern here is the notion of competing bids leading to cannibalization. In our experience, competitive bidding affects the overall performance level, but not the ranking. In other words, the DSP with the broadest scale and best optimization will put up the best results during the test, and once the competing DSP is turned off, the results will simply get even better. For a realistic example, assume a $50 CPA goal in a concurrent test, where DSP No. 1 comes in at a $45 CPA, and DSP No. 2 at an $80 CPA. Without bid competition, their CPAs would have been $38 and $70 respectively, but either way, DSP No. 1 is the clear winner.
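The worked example above can be sketched in a few lines: bid competition shifts everyone's CPA level, but the winner, and whether the goal is met, stays the same. The figures are the article's illustrative numbers, not real campaign data.

```python
# The article's illustrative CPAs: observed during a concurrent test
# vs. what each DSP would have done running alone (no bid competition).
GOAL_CPA = 50.0
concurrent = {"DSP 1": 45.0, "DSP 2": 80.0}
standalone = {"DSP 1": 38.0, "DSP 2": 70.0}

def winner(cpas):
    # Lower cost-per-acquisition wins.
    return min(cpas, key=cpas.get)

# Competition raises both CPAs, but the ranking is unchanged.
assert winner(concurrent) == winner(standalone) == "DSP 1"

# Only DSP 1 meets the $50 goal in the concurrent scenario.
meets_goal = {dsp: cpa <= GOAL_CPA for dsp, cpa in concurrent.items()}
print(winner(concurrent), meets_goal)
```

The design point is that ranking, not absolute CPA, is what the bake-off needs to establish.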
Also keep in mind that this is not like search, where everyone knows more or less in advance which several thousand keywords work best. Here it's up to the DSPs' algorithms (if they truly have any) to determine which of the billions of daily impressions work best, based on dozens if not hundreds of variables. In practice, we've found that many DSPs don't even know what to bid on, except for retargeting, so actual bid competition is limited.
4) Deploy enough budget to move the needle. Anyone who has been in the display game for a few years can make $25,000 work against almost any goal. The more budget you allocate, the more separation you see in DSP performance, because it's harder and harder to make that marginal dollar, or $100,000, perform at goal. We've seen numerous tests come back inconclusive because the test didn't deploy enough spend. Every DSP came back in more or less the same range, and the client was no more informed than before the test. By contrast, we've seen tests for large advertisers where DSPs were asked to spend $500,000 or $1 million, and in these tests you immediately see the separation. There's nowhere to hide. A large budget is like a magnifying glass, and the ability to pace and perform at those levels becomes immediately apparent. The nice part is, you don't even have to spend the entire budget. In such cases, the results are often called well before the official end date because the loser's inability to meet the daily performance or pacing requirements becomes evident in the first few weeks.
5) Start simple. While a good DSP can and should be used to do a myriad of innovative, game-changing, and complex things, the test itself can be as simple as placing pixels, generating ad tags, and setting up a log-in in the DSP systems. We've seen some folks create elaborate test structures, which do sometimes lead to better back-end reads, but more often create noise, execution error, and poor performance. We believe simple tests are the best. After all, if the simple stuff doesn't work, neither will the complex stuff later on. The partner with the strongest fundamentals is usually the one on which you want to build more elaborate capabilities.
6) Evaluate the winner against clear criteria. Final evaluation of a DSP usually incorporates numerous strategic criteria, the most common of which are the following:
- Performance -- Were campaign goals consistently met or exceeded?
- Scale -- At what volumes was that performance generated?
- Insight -- Is there clear understanding of the drivers behind performance (media, audiences, etc.)?
- Transparency -- Do you know the underlying supply, data, and economics?
- Workflow -- How easy was it, and did it fit well with the current business?
- Service -- How experienced and knowledgeable was the team? Were they responsive? Can you see them as a strategic partner?
- Flexibility -- Can the platform be adapted/customized to your unique needs (first-party data passing, proprietary metrics, integration of direct/premium buys, customizable features via APIs, etc.)?
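One common way to turn criteria like these into a decision is a weighted scorecard. The weights and 1-to-5 scores below are made up for illustration; the article prescribes the criteria, not the weighting.

```python
# Hypothetical weighted scorecard over the evaluation criteria.
# Weights and 1-5 scores are illustrative assumptions, not from
# the article; weights sum to 1.0.
weights = {"performance": 0.30, "scale": 0.20, "insight": 0.10,
           "transparency": 0.10, "workflow": 0.10, "service": 0.10,
           "flexibility": 0.10}

scores = {
    "DSP 1": {"performance": 5, "scale": 4, "insight": 4,
              "transparency": 3, "workflow": 4, "service": 5,
              "flexibility": 4},
    "DSP 2": {"performance": 3, "scale": 5, "insight": 2,
              "transparency": 4, "workflow": 3, "service": 3,
              "flexibility": 2},
}

def weighted(dsp):
    # Weighted sum of criterion scores for one DSP.
    return sum(weights[c] * scores[dsp][c] for c in weights)

best = max(scores, key=weighted)
print(best, round(weighted(best), 2))
```

Weighting performance and scale most heavily reflects the article's emphasis: if the fundamentals aren't there, the softer criteria rarely rescue a platform.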
Two other little tidbits to keep in mind. First, notice the reaction of the DSP when a bake-off is proposed. They've all been in lots of bake-offs before and they know where they ended up. They should be jumping at the chance to be in yours. If not, or if they start to backpedal, why is that? Second, watch for little things that happen during the test. If pixels went down, who noticed first? If an exchange or publisher doesn't approve the creative, who notifies you first? These little things often speak to superior underlying infrastructure and processes.
So enough with the vapor and the PowerPoint. Put DSPs to the test. And when you find a winner, tell a friend. With every DSP bake-off, the ecosystem gets smarter and the vapor gets thinner. Maybe soon, all the press won't be generated by DSPs and DSP wannabes making claims of what they can do, but by agencies and advertisers innovating new strategies and announcing ground-breaking results that just happen to be powered by real DSP technology behind the scenes.
Wouldn't that be nice? Happy testing!