Single Studies Produce Data, Not Insights
The insights industry has a long history of treating a survey as a one-off answer to a question. It often goes like this: a marketer goes to their internal researcher and says: “I want to do some message testing.” The researcher, in turn, writes a brief for a message test and seeks bids from research suppliers. The study is executed in isolation, without knowledge of the strategy work that (hopefully) preceded it, the brand’s image, the competitive context, or the history of the market dynamics.
It’s a sad state of affairs, because context is what turns a finding into an insight. We can do so much more when we view questions holistically rather than in isolation — yet that is often not how the industry works.
Why do we do this?
The tendency to treat the results of a single study as a satisfactory answer to a research problem has multiple roots. They include a history where primary research was often the only data source, a need for speed, silos within organizations, and a heritage of having worked on the supplier side — where money is made by doing one-off surveys or focus groups.
In the past, if you wanted to know what people watched or listened to, you had to ask them. If you wanted to discover what groceries they bought, you had to ask them. And if you wanted to identify where people shopped, you had to ask them. But now, we are inundated with behavioral data. There are many sources that are quite accurate at measuring what people do, buy and consume. Yes, they have their limitations and their blind spots, but so do people who answer surveys. Still, old habits die hard.
People raised in the world of market research tend to default to doing surveys — because it’s relatively fast and easy. Eleonora Jonusiene, Director of International Consumer Insights & Research at Warner Bros Home Entertainment, says she has often seen researchers “rush to the field with a new study, instead of analyzing what we already know based on the previous primary research or syndicated research, because we do not have time to go through the historical information.”
Ihno Froehling, who handles Global Respiratory Marketing Insights & Strategy at GSK in Switzerland, agrees. “We have a lot of data in different places,” he says. “It’s not yet a reality that you can bring the information together easily. It’s typically very time-consuming for us to do that. It’s often easier and faster to call up the supplier and start a new research study than to try to make sense of the myriad of existing data and insights that you have. It’s a sad fact.” This need for speed often goes hand in hand with the reactive order-giving mentality that we looked at in the previous chapter.
The order-giving approach to insights sees research as a means to an end. It answers the question “Did you test this ad?” “Yes, and it passed the benchmark.” It does not embody a more holistic approach where you first comprehend what drives purchasing, then craft a message that resonates, and then convert that understanding into a campaign that drives those messages home. It is a reductionist tactic that encourages thinking about research as a one-off exercise, rather than a cumulative journey of understanding.
This short-sighted approach is endemic to the traditional buyer-supplier relationship. If studies are commissioned on a one-off basis, the contextual information suppliers have is very limited. They might get a brief that has a high-level description of the market, but it is very rare to be given a meaningful understanding of the offer, or the marketplace. That only occurs when there are deep, ongoing relationships: where the supplier becomes part of the team. A procurement-driven, project-by-project approach results in suppliers being forced to act in relative isolation. Unfortunately, that is not uncommon.
Jonusiene points out that the “majority of marketing researchers have joined business organizations from the research agencies,” where they were order takers — who typically work on a study-by-study basis. This single study orientation biases them to “continue to behave as agents or order takers” — people who default to seeing a single study as the solution. Thus, the problem perpetuates itself.
“You know the saying: ‘If all you have is a hammer, everything looks like a nail?’” says Lucid’s Patrick Comer. “Researchers are like that. They answer everything with a survey. The reality is there may be simple social data, or you can just log onto Google Analytics and glean information that is free and already there. You can then start driving the next level of insights out of that.”
Big data can be blinkered too
This isn’t just a problem with surveys. It is the same with any single dataset — including big data. “I think we’ve all gotten a little bit mesmerized,” says Howard Shimmel. “Big data and first-party data is great, but it does provide a limited view of the world. I think we’ve got to do a better job of integrating survey data, panel data, first-party data, and being clear about how it all comes together to provide a solution.”
U.S. Bank’s Vidya Subramani told me a story about a behavioral analysis that underscores this point: “The company did some data analysis to understand why a certain group of customers were not using their credit card. They said, ‘Okay, let’s do some behavioral analysis.’ And they looked at the data and found that those who were not using their credit cards had lower credit limits than those who were using their credit cards. So, ‘Aha! The problem is credit limits. Let’s increase the credit limits, and then they’ll start using the card.’”
She suggested that the marketers talk to customers and figure out what kind of credit limit they needed. “When we spoke to those customers, the majority said, ‘I didn’t even know I had this credit card. I didn’t sign up for this credit card.’” It turned out these customers had automatically been given the card when they signed up for a checking account. “They hadn’t asked for the product. There wasn’t a need for the product. Giving them a greater credit line would not make them start using the card.”
It was, she recounts, “one of those ‘Aha’ moments when they realized that they do need market research. The two approaches go hand in hand.”
Don’t speculate, triangulate
There is danger in having a blinkered approach, and great value in thinking about a business problem contextually. “I can see a data point, and we can have 1,000,001 different perspectives on what that means to the business, but ultimately it means nothing,” according to Visa’s Kristopher Sauriol. “It’s just pure speculation unless we have additional data or knowledge.”
He believes “You need to layer on other information to get a firm understanding of what’s happening. To me, that means there is a qualitative component, syndicated data we have available to us, and in-market observations. All these are critical for us to be able to decide what a data point means.”
Using multiple information sources also encourages people to look at the big picture, and not be intoxicated by numbers and charts. Synthesizing multiple sources of information enforces a focus on summarizing the results as a story. It does not lend itself to producing a 100-page deck of charts; it encourages a four-page summary of what to do next.
That’s the kind of succinct, action-oriented approach the insights industry needs. Let’s not settle for impoverished answers to narrow questions. Let’s embrace the big picture.
Just because we “have always done it this way” doesn’t mean we should keep doing it that way. Vive la révolution!
This blog post was originally featured on the Maru/Matchbox blog.