We have access to more data than ever. But how do we translate it into quality insights? In this Q&A, Dr. Duane Varan, CEO of MediaScience, shares how R&D fits into the latest marketing trends.
Q: Can you speak about some of your most recent research projects? What's the most exciting thing you're working on right now?
A: At MediaScience, we conduct lab experiments from 9:00 a.m. to 9:00 p.m., seven days a week, so I am incredibly fortunate to live in a sea of insight. For researchers, this is the golden age! There has never been a time when our industries have been more confused and more in need of good solid research to guide the path going forward. So it's an exciting time. And there is so much research that we have done that has materially shaped the media landscape. So when you say "exciting," you have truly captured the spirit of our times.
But if I had to bring my work down to a single project, it would be the "Beyond Thirty Seconds" research project (beyond30.org) that is now in its tenth year. That project, out of my research center at Murdoch University, has benefited from more than $8 million in funding by most of the main TV networks and many leading brands and technology enablers. The project has tested more than 50 new ad models using advanced new research methods such as biometrics, eye gaze, facial coding, and the like. It has more than 15,000 test sessions under its belt. What's amazing about the project is how many insights we've discovered many years before the models really hit the market. So, for example, we were celebrating the power of mobile ads almost 10 years ago, with solid research demonstrating the strength of the platform. Even today, there are many who still haven't come to terms with that reality. And in many cases, we've killed models by testing and then proving how ineffective they really were -- saving our partners countless man-hours and dollars that would otherwise have gone to backing the wrong horse. To this day, it amazes me to see new models deploy in the market for which we had insights so many years ago. And the research we conduct now paints exciting opportunities and challenges for the near future.
Q: What was the most surprising piece of research you've come across recently and how will/does it affect marketers?
A: We've done a lot of work on the impact of social media, a very hot topic these days. Everyone is pushing social media very aggressively -- with good reason, on the one hand, because it's clearly now a key part of the consumer experience. But some five years ago we did some interesting work demonstrating how people engaging with social media on tablets, mobiles, and PCs while ads were running on TVs was not good news for the brands being advertised on TV. People really can't media multitask. If they are engaged in social media while your ad is on TV, your ad is going to take a massive hit. This is a bigger problem today than fast-forwarding, yet our industry is largely asleep at the wheel on it. So over the past five years we've done lots of research exploring potential solutions to the problem, anticipating a day when the question will be front and center. And we've found a solution: something I call the "cognitive bridge." It's amazing, really.
The problem with media multitasking is that your brain filters out everything it thinks it doesn't need, so TV ads are filtered out. But what we've discovered is that if you have a visual cue in the social media experience linked to the brand, even as simple as a banner ad for the same advertiser, suddenly the brain sees a connection between what's on your second screen and what's on your TV, and so it processes the information. This is something of a miracle, really, because ad impact is suddenly restored.
Now there are lots of technical challenges to building a solution like this in the real world, particularly in synchronizing banner and TV ad, but we like being a step ahead of the game rather than feeling overwhelmed by the perpetual change overriding the market.
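The synchronization challenge described above can be sketched in a few lines. This is purely an illustrative assumption, not a real MediaScience system: the ad schedule, brand names, and lookup function are all hypothetical, and a production system would need to handle live broadcast timing, ad-decisioning, and delivery latency.

```python
# Hypothetical sketch of the "cognitive bridge" idea: when a TV ad for a
# brand starts airing, the second-screen app shows a matching banner --
# the visual cue that lets the brain connect the two screens.
# The schedule and brand names below are illustrative, not real data.

# TV ad break schedule: (start offset in seconds, duration in seconds, brand)
AD_SCHEDULE = [
    (0, 30, "BrandA"),
    (30, 15, "BrandB"),
    (45, 30, "BrandC"),
]

def banner_for(elapsed_seconds):
    """Return the brand whose TV ad is airing at this moment, so the
    second-screen app can display a synchronized banner for it."""
    for start, duration, brand in AD_SCHEDULE:
        if start <= elapsed_seconds < start + duration:
            return brand
    return None  # no ad airing: show generic second-screen content

print(banner_for(10))   # BrandA's ad is airing
print(banner_for(40))   # BrandB
print(banner_for(90))   # None: the break is over
```

The hard part in practice is not the lookup but keeping `elapsed_seconds` accurate against the live broadcast, which is where the real engineering effort goes.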
Q: Given the advancements in consumer technologies, what are your top predictions for how content marketing will look in the next 6 to 18 months?
A: In many ways, we had it easy in the past. Roughly 50 percent of the audience for any program was inherited from the program before it. "Audience flow" was the single largest contributor to getting an audience. We had a clearer sense of when and where to promote, with very powerful results.
Now the game is much more complicated, with viewers in increasing control of their own media inventories. So quick and simple solutions no longer suffice.
But in truth, our challenges are no different than those faced by any marketer. Our challenge going forward is to reach consumers wherever they are, on their time. It's less about our supply and more about their demand. This requires research, and it requires a spirit of enterprise and innovation. This necessitates a very different kind of organizational culture. The biggest mistake you could make is to deny this fundamental truth.
So going into the future, expect content marketers to be more like all marketers, relying on many more platforms and touchpoints to find and follow their consumers wherever they are. And there's no question that good research is front and center to building any such strategy.
Q: You're often quoted as saying that brands need to invest more in R&D. Do you think that certain areas of research and development are more important than others?
A: You're partially right. It's not just that we need more R&D; we also need better R&D. It amazes me how incredibly poor the quality of so much of our research really is. We have methods that have dominated our industry for many years, for which we have historical benchmarks, but that have clearly demonstrated their inefficacy in the market. And there are reasons why these methods are so poor. Conceptually, they clearly fall short of their tasks. But we're used to them, so we rely on them. For example, human emotion is at the core of most media and marketing research. It's clearly one of the main drivers around the audience/consumer experience. And how do we measure emotion? We ask people to tell us about their emotional journeys, either in surveys or focus groups. Really? Does that make any sense? People don't know what their emotional experiences are. So how can they tell you about it? And even if they could, the act of telling us draws on their rational faculties, so in the course of describing it, they would be giving us a completely distorted picture. It's not surprising that the research ends up being so poorly predictive. It's a poor fit to the task, but it's what we know so we continue doing it even though deep down we know it falls short.
So at MediaScience, we rely on more direct measures of emotion: biometrics, facial coding, eye gaze, and reaction times. We're always on the hunt for new methods. We're thrilled at the conceptual breakthroughs we've seen. And these are early days. There's so much to look forward to. And I say this as just one example -- tackling emotion. There are so many other exciting new frontiers.
So yes, there's a need to invest in new R&D, but unless that investment actually advances the game, it's of no use. In fact, I believe that more investment in bad research only increases your liability.
Now I'm not arguing for any one method. There are so many exciting new frontiers opening up. But whatever your focus, lift your game. Don't settle for research that is a poor fit to the problem. Demand better. And invest because never before has innovation afforded a greater opportunity to break ahead.
Q: Gathering data is critical work, and correctly interpreting data is just as important. Are there any universal tips you can share on where to start after the data have been collected?
A: The question is deceptively simple because there's not a lot you can do in the analysis stage if the design wasn't right. When I look at data generated from the wrong design, my usual instinct is to collect the data again using the right design because analysis is only as good as the data going in. There are a number of problems, I think, that are endemic to our industry at the design stage. The first is the "off-the-shelf" design, where vendors fit your problem to a product they already have on offer. Now sometimes that works well because the product was developed for your very specific problem. But in all too many cases, it's a forced fit. Yes, easier for everyone to work with, but so much more limiting once you get to the analysis stage -- with so much less potential contribution. Another problem is the lack of control over study variables. For example, many researchers don't properly rotate their test content. So in the analysis stage, you don't really know if you have an order effect or a genuine effect. The list goes on and on. I'm often surprised at how little time and effort goes into the design stage given how important it is to every insight that grows out of it.
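The rotation practice mentioned above is standard counterbalancing. A minimal sketch, using a simple cyclic Latin square and made-up clip names: each clip appears in every serial position equally often across session groups, so an order effect can be separated from a genuine content effect in analysis.

```python
# Counterbalancing test content with a simple cyclic Latin square.
# Clip names are illustrative placeholders, not real test content.

def latin_square_orders(items):
    """Generate one presentation order per item, such that each item
    appears exactly once in every serial position across the orders."""
    n = len(items)
    return [[items[(i + j) % n] for j in range(n)] for i in range(n)]

clips = ["ad_A", "ad_B", "ad_C", "ad_D"]
for order in latin_square_orders(clips):
    print(order)
# Assign each session group one order; with equal group sizes, every
# clip is seen equally often in every position, so position and content
# are no longer confounded.
```

Note that a cyclic square balances position but not immediate carryover (which clip follows which); a fully balanced design would control that as well.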
The other thing I would say is that researchers need to be very conservative with findings. There is far too much tendency to hype findings and be aggressive in trying to show effects even when it is not clear those effects will really stand up. There's no question that good news pleases clients -- who often press for findings to justify spend. But at the end of the day, your lasting reputation depends on market performance, and you're much more likely to get it right when you are conservative by nature. We've had a number of studies for ESPN, for example, that were then tracked by them as they deployed in the market. And fortunately for us, their subsequent tracking research has, so far, always confirmed our findings (touch wood). We've built that reputation in-house by being conservative rather than aggressive with our findings. I'm not saying we'll always get it right, but erring on the side of caution makes it far more likely that we will.
Q: How do you approach developing research methods for new advertising models?
A: Researching new ad models is actually much harder than it looks. I think the biggest challenge is in figuring out how to produce the new ad model in the first place. By nature, if you're talking about a new ad model, you're probably talking about something that either doesn't exist in the market or that is very limited in its implementation. So how do you capture the experience and manipulate it (testing effects on different brands, for example) in the lab? What I discovered 10 years ago was that the only way you could do this was to have your own software engineers and programmers. You can't depend on the client because if you wait for a mature-enough product, you'll be testing old models instead of new ones. So that's the first challenge. We usually need to build our own systems to make the new ad models work for our purposes.
The second challenge is getting your head around how the new model really works at a conceptual level. The traditional 30-second commercial was a one-size-fits-all ad model. But now we have a near infinite range of models, each of which works in a completely different way. This is great news for brands because it means they can deploy models better suited to their strategic objectives. But before they can do that, we need to understand exactly what each new model brings to the party and how it can work for a brand. Does an interactive ad work because it sifts the audience, finding those already receptive to a brand? Or does it have some type of persuasive effect on those who encounter it, providing product rehearsal, capitalizing on the dynamics of choice, and engaging cognitive systems more actively? I mean, the possibilities are near infinite. The challenge for us is that many of the new models are developed by engineers and programmers who tinker on gut instinct, so they are often unable to articulate any sense of the conceptual framework through which the new model might work. This often means we have to dive in -- reviewing literature, modeling out the effects, and designing experiments to tease out different competing hypotheses. It's one of the reasons why lab-based research is so critical.
Finally, once you have hypotheses around the variables you think are at work, you need a game plan as to how you'll test your theories. In many cases, this means we have to invent entirely new methods. What's great about the labs at MediaScience is that we have no shortage of tools. We have a commitment to bringing in every tool we can and every new method we discover. That way, it's all about matching the right tool to the right problem. And that often requires a lot of validation work on our part with the new methods, to make sure they are really measuring what we think they are measuring. And, of course, we often have to build new algorithms because in most cases the measures we need don't yet exist.
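The validation work described above -- making sure a new method measures what we think it measures -- often starts with a convergent-validity check: does the new measure correlate with an established one on the same stimuli? A minimal sketch, with made-up scores (the data and the two measures are illustrative assumptions, not MediaScience results):

```python
# Convergent-validity sketch: correlate a new measure against an
# established one on the same test clips. All data here are invented
# for illustration only.

from statistics import mean

def pearson_r(xs, ys):
    """Plain Pearson correlation between two equal-length samples."""
    mx, my = mean(xs), mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    var_x = sum((x - mx) ** 2 for x in xs)
    var_y = sum((y - my) ** 2 for y in ys)
    return cov / (var_x * var_y) ** 0.5

# Hypothetical per-clip scores from an established survey measure and a
# new biometric measure; a strong positive r is evidence the new method
# tracks the same underlying construct.
established = [3.1, 4.5, 2.2, 5.0, 3.8]
new_measure = [0.42, 0.61, 0.30, 0.70, 0.50]
print(round(pearson_r(established, new_measure), 3))
```

Convergence alone isn't proof, of course; a new measure also has to add predictive value beyond the old one to justify the switch.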
I can't begin to express just how gratifying this all is to us as researchers. At our core, those of us in research are motivated by the passion for discovery. Taking a new proposition and figuring out what makes it tick, why it does what it does, and what the relative strategic contributions are -- those are the aha moments that make the journey so fulfilling.
As I said earlier, as researchers, this is our golden age. I can't imagine a more exciting time for researchers to demonstrate their value to their organizations. And I have no doubt that media and marketing companies alike will increasingly come to recognize their need to invest in R&D to remain competitive going into the future.