"Fail, fail fast, and learn." This has become a popular mantra for many businesses favoring an agile approach to digital marketing. When applied to site optimization testing, however, this mentality has often led to more frustration than real impact on a business's bottom line.
At the heart of any digital marketing leader's decision to run site optimization testing lies the fundamental marketing appetite for boosting performance or, at least, deriving customer insights. When it comes down to it, testing is not conducted for the sake of testing, but for progress and contribution to a company's bigger success. With this in mind, it shouldn't come as a surprise when businesses expect clear answers to questions such as "Where is our biggest optimization opportunity to drive our KPIs?" or "What does the testing result tell us about what we should and shouldn't do next?" In most cases, however, those questions either go unanswered or the answers are unclear.
What, then, is the issue that leaves digital marketing leaders' appetite unfulfilled -- and what can be done to fill it?
It's worth pointing out that site optimization testing has come into the spotlight only recently, as more and more digital marketers wake up to the realization that SEO and SEM work doesn't mean much unless their hard-earned traffic converts into customers or sales. While digital marketers are jumping into testing fast and furiously, this relatively new work stream doesn't always have clear ownership or a subject-matter expert to lead it. Marketing teams are often tapped to add it to their already fully loaded schedules, so it's no surprise that testing becomes just another task on a long to-do list. This resource constraint, coupled with the agile attitude, often turns testing into a tactic when it should really be about strategy and discipline. Without strategy and discipline, site optimization testing becomes highly subjective -- it's about what the team wants to test versus what the business needs to test.
So, where do you start when you don't already have a solid strategy and framework in place? Here are some quick steps you can follow to add discipline to your testing and uncover what is truly worth testing.
Start by defining a clear goal. While this may seem the most obvious step, it's often the most overlooked, and therefore the root cause of a failed testing strategy. Once the goal is clear, you can analyze how customers currently navigate your site and identify the key journeys that directly impact that goal. For example, if your goal is to drive conversions across all your products, list every journey your customers can take to complete a sale. This gives you the pool of potential testing opportunities.
Not all testing efforts are equal, so prioritization is key to getting the most out of them. Prioritize your testing by answering these two questions:
- Which site journey has the biggest impact on my goal?
- Where is the biggest opportunity for improvement on the target site journey?
For example, if your answers conclude that product A both accounts for the highest share of your conversions and has a high cart-abandonment rate, then congratulations -- you've just uncovered a very promising testing opportunity.
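To make this prioritization concrete, here's a quick sketch in Python that scores each journey by (share of conversions) x (abandonment rate) -- impact times room for improvement. The journey names and figures are made up for illustration, and a real scoring model would weigh more factors (traffic, effort to test, revenue per conversion).

```python
# Sketch: rank candidate journeys by impact x opportunity.
# All journey names and numbers below are illustrative, not real data.

def priority_score(conversion_share, abandonment_rate):
    """Higher score = bigger impact on the goal and more room to improve."""
    return conversion_share * abandonment_rate

journeys = [
    # (journey name, share of total conversions, cart-abandonment rate)
    ("Product A checkout", 0.55, 0.70),
    ("Product B checkout", 0.30, 0.40),
    ("Newsletter signup",  0.15, 0.20),
]

ranked = sorted(journeys, key=lambda j: priority_score(j[1], j[2]), reverse=True)
for name, share, abandon in ranked:
    print(f"{name}: score={priority_score(share, abandon):.3f}")
```

Even a rough score like this forces the "what the business needs to test" conversation, because every journey gets ranked on the same yardstick.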
Next, form a hypothesis. This is the step that determines whether your test yields stellar results and insights or mediocre ones. Undisciplined, tactical testing skips this step entirely, but if you want a test that delivers clear learnings, you have to start with a hypothesis. Answer this question: what are all the possible explanations for the current poor performance? Going back to the example above, you know you want to reduce the cart-abandonment rate for product A -- but is that because product A's information isn't clear, there's no sense of urgency, a potential concern goes unaddressed, or the checkout button sits below the fold? With data and user-experience best practices, you can then narrow the list down to a few explanations that can be validated through testing.
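One lightweight way to keep hypotheses disciplined is to record each one in a fixed shape: what you observed, the suspected cause, the change you'll test, and the effect you expect. The sketch below shows one possible convention; the field names and the example values are assumptions for illustration, not a standard.

```python
from dataclasses import dataclass

@dataclass
class Hypothesis:
    """One candidate explanation, written so it can be validated by a test."""
    observation: str      # what the data currently shows
    cause: str            # suspected explanation for the observation
    change: str           # what the test will vary
    expected_effect: str  # the metric and direction we expect to move

# Illustrative example, continuing the product A scenario above.
h = Hypothesis(
    observation="Product A has a high cart-abandonment rate",
    cause="The checkout button sits below the fold on mobile",
    change="Move the checkout button above the fold",
    expected_effect="Cart-abandonment rate for product A decreases",
)
print(h.change)
```

If you can't fill in all four fields, the idea isn't a testable hypothesis yet -- it's still just a hunch.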
Finally, design and execute the test. This is where the rubber meets the road. When you design the testing concept, make sure it accurately reflects your hypothesis -- nothing more, nothing less. When you set up the test on your platform, make sure you have proper tagging to yield clean readings as well as any additional insights the business wants. And when you set up your reporting, make sure you not only report on the results of the test itself but also tie them back to the goal to show the true impact on the business.
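Part of a clean readout is checking that a lift is statistically real before reporting it. Here's a minimal sketch of that check using a standard two-proportion z-test (normal approximation); the visitor and conversion counts are illustrative, and most testing platforms compute this for you.

```python
from math import erf, sqrt

def two_proportion_z(conv_a, n_a, conv_b, n_b):
    """z statistic and one-sided p-value for: variant B converts better than A."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    pooled = (conv_a + conv_b) / (n_a + n_b)
    se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    p_value = 1 - 0.5 * (1 + erf(z / sqrt(2)))  # upper tail of the normal CDF
    return z, p_value

# Illustrative numbers: control converts 480/10,000; variant converts 560/10,000.
z, p = two_proportion_z(conv_a=480, n_a=10_000, conv_b=560, n_b=10_000)
print(f"z={z:.2f}, one-sided p={p:.4f}")
```

A small p-value (conventionally below 0.05) suggests the lift is unlikely to be noise -- which is exactly the kind of clean reading you need before tying the result back to the business goal.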
With the discipline to follow a framework like this all the way through, testing in an agile mode can be truly effective again. You may or may not hit a gold mine with every test you run, but one thing is for sure: you will always have clarity on the insights you can share with the business, and you will always have the confidence of knowing that the gold mine is already on your radar.