
6 ways to put DSPs to the test

Ari Buchalter

Editor's note: This cover story by Ari Buchalter was commissioned and edited prior to the company's decision to sponsor iMedia's ad serving section.


If you're a marketer or an agency that controls display dollars, you've likely been choking on the vapor that has filled the demand-side platform (DSP) space over the last few years. It's not just the sheer number of entrants claiming to be DSPs (be they pure plays, repositioned networks, or growth-minded media companies), but the cacophony of claims that each is the "first," "best," or "only" DSP that does this or that.



In truth, most of these contenders sound great in the first meeting. Their slides use all the right buzzwords, like "audience," "optimization," "real-time," and "insights." But in many cases, they are writing PowerPoint checks that their technology can't cash (that is, if they really have any technology at all).


So what happens if you can't separate the reality from the hype? (And who could?) Those who use DSPs the way they use networks -- i.e., as outsourced media execution -- simply conclude either that the DSP performed (ideally better than any other "line item" on the media plan) and keep sending it a monthly IO, or that it didn't perform and move the budget to other outlets. And that's fine for some.


But many advertisers and agencies looking to use a DSP are looking to make digital media execution an internal core competency, not outsource it to a network-like entity. Their choice of a DSP is about choosing a flexible and robust technology platform on which they can build that competency and make it a competitive advantage. If you're in this category, then finding out your DSP doesn't deliver as promised -- after months of internal and external DSP cheerleading, contract redlines, team training, and technological integration -- can be catastrophic.


So how do you separate fact from fiction? You ignore what they say and put them to the test -- against each other. The notion of DSP "bake-offs" isn't new, but it does bring with it a set of challenges, notably direct-bid competition and ensuring a level playing field. Having participated in many dozens of such tests, I'd like to propose some best practices for head-to-head DSP testing that in our experience help ensure a fair fight, clean results, and most importantly, no regrets.


1) Pre-vet candidates to narrow the list. For practical reasons such as finite budgets and sheer management complexity, you can only test so many DSPs. As with any such test, much of the work comes up front, in ensuring you've got the right candidates to begin with, which is why this part of the recipe is the longest. In our experience, most tests involve two or three partners, because most savvy advertisers and agencies have already done their homework to understand which DSPs are even worth testing, in terms of:



  • Breadth of supply, data, and ecosystem integration -- Not just the number and scope of ad exchange and SSP integrations, but the ability to bring your own seat, to integrate premium display and guaranteed buys, video, rich media, social, and other formats. Just as important as supply is data, in particular the ability to seamlessly integrate and globally manage any and all first-party data as well as third-party data sources, offline as well as online. A DSP should also provide simple access to other value-added capabilities, like dynamic creative, ad verification, and brand studies. For a platform, broad integration into the larger landscape is key.


  • Technical infrastructure -- Often overlooked, but of top importance in choosing a platform, is the robustness of the underlying technology. Ask about global infrastructure: how many bidders do they have, and where are they located? Ask what QPS (queries-per-second) levels they can support, and ask to see the proof, because this is a critical scaling factor. Ask to see their pixel response times from neutral third-party reports like Gomez, because slow pixels may be dragging down client pages (a quick latency spot check like the one sketched after this list can help corroborate those reports). Ask whether they maintain their own user database and what their user-match rates are. Ask how much data they process daily. Put them in a room with your statisticians and ask how their bidding algorithm works (hint: many major DSPs actually don't have one). Ask about their APIs. Really understand the technology you may be building your business on.


  • Performance and service -- These are the aspects of a DSP that will most strongly impact the day-to-day business, but ironically they can be much harder to assess from the outside in than the breadth of partner integrations and technology infrastructure mentioned above. Of course, that's why you'll be conducting the bake-off, but to help decide on the candidates, seek out references that can confirm or deny the rumors. How did they perform vs. competitors? At what scale? What was the service like? See if you get the same answers from industry contacts that were not on the DSP's reference list. And talk not just to the DSPs' customers, but to their partners (the exchanges, data providers, etc.).

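To make the infrastructure questions concrete, here is a minimal sketch of the kind of pixel-latency spot check mentioned in the "Technical infrastructure" bullet. The pixel URLs are hypothetical placeholders, not real DSP endpoints, and a one-off check from a single office network is no substitute for measurement at scale across geographies, which is what services like Gomez provide.

```python
import statistics
import time
import urllib.request

# Hypothetical pixel URLs for the DSPs under evaluation -- placeholders, not real endpoints.
PIXEL_URLS = {
    "DSP No. 1": "https://pixel.dsp-one.example/match?test=1",
    "DSP No. 2": "https://pixel.dsp-two.example/match?test=1",
}

def time_pixel(url, samples=25):
    """Fetch a tracking pixel repeatedly and record wall-clock latency in milliseconds."""
    latencies = []
    for _ in range(samples):
        start = time.perf_counter()
        try:
            with urllib.request.urlopen(url, timeout=5) as resp:
                resp.read()  # drain the (tiny) pixel body
        except OSError:
            continue  # in a real test, count timeouts and errors separately
        latencies.append((time.perf_counter() - start) * 1000.0)
    latencies.sort()
    return {
        "samples": len(latencies),
        "median_ms": statistics.median(latencies) if latencies else None,
        "p95_ms": latencies[int(0.95 * (len(latencies) - 1))] if latencies else None,
    }

if __name__ == "__main__":
    for name, url in PIXEL_URLS.items():
        print(name, time_pixel(url))
```

Numbers like these won't settle the question on their own, but they are a useful sanity check against the reports a DSP hands you.
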
2) Pick a large brand with "deep" goals. You've now got your lineup of two or three DSPs to test, so what's the test? Ideally, it's for a single large brand, so results are directly comparable across DSPs. Importantly, the test should have measurable and "deep" goals, meaning goals that are closer to the end-client's bottom line. For direct-response (DR) advertisers, this means goals like CPA, or ideally ROI, as opposed to shallow goals like clicks, which are much easier to drive but don't necessarily impact the bottom line. Deep goals are very well suited for DSP bake-offs because they invoke the entire system, including breadth of supply and data as well as the algorithmic optimization capabilities needed to meet challenging CPA or ROI objectives.

Brand advertisers can be the subject of bake-offs as well, but that scenario is less common due to the historical challenge of identifying deep and measurable brand goals. That is rapidly changing, however, and we anticipate seeing more brand-oriented bake-offs, where the goal is to drive measurable lift in awareness or interest (as measured, e.g., by in-banner surveys) among a desired target audience (as measured by first- or third-party data) at a desired reach and frequency level, and at the best CPM. A truly ideal test to highlight broad DSP capabilities may allow for several different objectives, for example, across a range of campaigns with different goals (reach, engagement, ROI, etc.). However, it is important that the principles laid out below are applied to each campaign being tested, to avoid ending up with a portfolio of tests that are all inconclusive.
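
As a concrete illustration of what "deep" goals mean, here is a minimal sketch of how CPA and ROI are computed from campaign totals. The numbers are made up purely for illustration; the point is that these metrics tie spend to conversions and revenue rather than to clicks.

```python
def cpa(spend, conversions):
    """Cost per acquisition: media spend divided by conversions driven."""
    return spend / conversions

def roi(revenue, spend):
    """Return on investment: profit attributable to the campaign per dollar of spend."""
    return (revenue - spend) / spend

# Illustrative numbers only (not from any real test).
spend = 100_000.0      # media spend over the test window
conversions = 2_500    # conversions credited by the neutral ad server
revenue = 180_000.0    # revenue attributed to those conversions

print(f"CPA: ${cpa(spend, conversions):.2f}")   # CPA: $40.00
print(f"ROI: {roi(revenue, spend):.0%}")        # ROI: 80%
```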


3) Create a level playing field. While it sounds obvious, the test has to be a fair one. That means all parties get pixels placed on the same pages at the same time. They all get the same creative concepts in the same sizes. They receive the same budgets, goals, and targeting instructions. Use a neutral third party, ideally an ad server like DART or Atlas, to judge performance. A non-level playing field essentially negates the test, and in our experience that happens not because anyone is trying to stack the deck, but because of simple oversight. For example, we've seen DSP pixels received at different times and placed in different batches in the client's tech queue, resulting in DSP No. 1's pixels going up two weeks sooner than DSP No. 2's and giving DSP No. 1 an unfair head start in building its cookie pool.


The most debated decision here is whether to run the DSPs sequentially or concurrently, and there are pros and cons to each approach. Sequential testing means the DSPs don't compete in their bidding, but it poses significant drawbacks, namely a much longer time to complete and apples-to-oranges results. Running DSPs at different times introduces not just product-specific seasonality, but even worse, the high variability over time of the underlying display landscape (i.e., new exchanges, publishers moving inventory into and out of exchanges, variation in clearing prices, etc.).

Concurrent tests are faster to complete and generate apples-to-apples results. The most common concern here is the notion of competing bids leading to cannibalization. In our experience, competitive bidding affects the overall performance level, but not the ranking. In other words, the DSP with the broadest scale and best optimization will put up the best results during the test, and once the competing DSP is turned off, those results will simply get even better. For a realistic example, assume a $50 CPA goal in a concurrent test, where DSP No. 1 comes in at a $45 CPA and DSP No. 2 at an $80 CPA. Without bid competition, their CPAs might have been $38 and $70 respectively, but either way, DSP No. 1 is the clear winner.
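
To restate that example as a minimal sketch: the CPA figures below are the hypothetical numbers from the paragraph above, and the point is simply that bid competition shifts the absolute CPAs, not the ranking.

```python
# CPAs observed during a hypothetical concurrent test (from the example above).
observed_cpa = {"DSP No. 1": 45.0, "DSP No. 2": 80.0}

# Hypothetical CPAs for the same period with no bid competition.
standalone_cpa = {"DSP No. 1": 38.0, "DSP No. 2": 70.0}

rank_during_test = sorted(observed_cpa, key=observed_cpa.get)
rank_standalone = sorted(standalone_cpa, key=standalone_cpa.get)

# Competition raises the absolute CPAs, but the winner is the same either way.
assert rank_during_test == rank_standalone
print("Winner:", rank_during_test[0])  # Winner: DSP No. 1
```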


Also keep in mind that this is not like search, where everyone more or less knows in advance which several thousand keywords work best. Here it's up to the DSP algorithms (if they truly have any) to determine which of the billions of daily impressions work best, based on dozens if not hundreds of variables. In practice, we've found that many DSPs don't even know what to bid on beyond retargeting, so actual bid competition is limited.


4) Deploy enough budget to move the needle. Anyone who has been in the display game for a few years can make $25,000 work against almost any goal. The more budget you allocate, the more separation you see in DSP performance, because it gets harder and harder to make that marginal dollar -- or marginal $100,000 -- perform at goal. We've seen numerous tests come back inconclusive because they didn't deploy enough spend: every DSP came back in more or less the same range, and the client was no more informed than before the test. By contrast, we've seen tests for large advertisers where DSPs were asked to spend $500,000 or $1 million, and in those tests you immediately see the separation. There's nowhere to hide. A large budget is like a magnifying glass: the ability to pace and perform at those levels becomes immediately apparent. The nice part is, you don't even have to spend the entire budget. In such cases, the results are often called well before the official end date, because the loser's inability to meet daily performance or pacing requirements becomes evident in the first few weeks.

5) Start simple. While a good DSP can and should be used to do a myriad of innovative, game-changing, and complex things, the test itself can be as simple as placing pixels, generating ad tags, and setting up log-ins in the DSPs' systems. We've seen some folks create elaborate test structures, which do sometimes lead to better back-end reads, but more often create noise, execution error, and poor performance. We believe simple tests are the best. After all, if the simple stuff doesn't work, neither will the complex stuff later on. The partner with the strongest fundamentals is usually the one on which you want to build more elaborate capabilities.


6) Evaluate the winner against clear criteria. Final evaluation of a DSP usually incorporates numerous strategic criteria, the most common of which are the following (a simple weighted-scorecard sketch follows the list):



  • Performance -- Were campaign goals consistently met or exceeded?


  • Scale -- At what volumes was that performance generated?


  • Insight -- Is there clear understanding of the drivers behind performance (media, audiences, etc.)?


  • Transparency -- Do you know the underlying supply, data, and economics?


  • Workflow -- How easy was it, and did it fit well with the current business?


  • Service -- How experienced and knowledgeable was the team? Were they responsive? Can you see them as a strategic partner?


  • Flexibility -- Can the platform be adapted/customized to your unique needs (first-party data passing, proprietary metrics, integration of direct/premium buys, customizable features via APIs, etc.)?

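One lightweight way to roll these criteria up into a decision is a simple weighted scorecard. The criteria names come from the list above; the weights and the 1-5 scores below are purely illustrative assumptions -- every advertiser will weight them differently.

```python
# Illustrative weights (summing to 1.0) and 1-5 scores -- assumptions, not recommendations.
WEIGHTS = {
    "performance": 0.30,
    "scale": 0.20,
    "insight": 0.10,
    "transparency": 0.10,
    "workflow": 0.10,
    "service": 0.10,
    "flexibility": 0.10,
}

SCORES = {
    "DSP No. 1": {"performance": 5, "scale": 4, "insight": 4, "transparency": 3,
                  "workflow": 4, "service": 5, "flexibility": 4},
    "DSP No. 2": {"performance": 3, "scale": 5, "insight": 3, "transparency": 4,
                  "workflow": 3, "service": 3, "flexibility": 3},
}

def weighted_score(scores, weights=WEIGHTS):
    """Weighted average of the 1-5 criterion scores."""
    return sum(weights[c] * scores[c] for c in weights)

# Rank the candidates from highest to lowest weighted score.
for dsp, scores in sorted(SCORES.items(), key=lambda kv: -weighted_score(kv[1])):
    print(f"{dsp}: {weighted_score(scores):.2f}")
```

The point isn't the arithmetic; it's forcing the team to agree, before the test ends, on which criteria actually matter and by how much.
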
Two other little tidbits to keep in mind. First, notice the reaction of the DSP when a bake-off is proposed. They've all been in lots of bake-offs before and they know where they ended up. They should be jumping at the chance to be in yours. If not, or if they start to backpedal, why is that? Second, watch for little things that happen during the test. If pixels went down, who noticed first? If an exchange or publisher doesn't approve the creative, who notifies you first? These little things often speak to superior underlying infrastructure and processes.


So enough with the vapor and the PowerPoint. Put DSPs to the test. And when you find a winner, tell a friend. With every DSP bake-off, the ecosystem gets smarter and the vapor gets thinner. Maybe soon, all the press won't be generated by DSPs and DSP wannabes making claims of what they can do, but by agencies and advertisers innovating new strategies and announcing ground-breaking results that just happen to be powered by real DSP technology behind the scenes.
 
Wouldn't that be nice? Happy testing!


Ari Buchalter, PhD, is chief operating officer for MediaMath.



