Behavioral Science — Part 4: Validated Techniques & Their Use Cases
Traditional market research approaches contain a contradiction: they are often set up to yield mostly System 2 (slow, conscious) responses from consumers, yet when those same consumers make decisions in real life, they often rely on System 1 (fast, automatic, unconscious) thinking. This disparity produces something called the Say/Do Gap — people don’t speak their mind because they don’t know their mind — and it puts brands at risk of basing important decisions on data that may not tell an accurate story.
To combat the Say/Do Gap, researchers increasingly need to re-examine how they approach study design; by incorporating behavioral science and nonconscious measurement techniques, they can build a more holistic understanding of consumer behavior. There are varying “levels” at which you can incorporate behavioral science into your research design and processes:
- Level 1: Stop doing things we know are unreliable.
- Level 2: Start consciously designing questions to yield more nonconscious responses.
- Level 3: Start researching and incorporating validated behavioral science-influenced techniques.
In this article we look at some Level 3 techniques. For more on Level 1 and 2 techniques see my prior articles: Behavioral Science — Part 2: Sometimes It Is What You Don’t Do That Makes Things Better and Behavioral Science — Part 3: Conscious Design Yields Unconscious Responses. To get a basic grounding in Behavioral Science see my article An Intro to Behavioral Science/Nonconscious Measurement.
Level 3: Start researching and incorporating validated behavioral science-influenced techniques.
For those of you who have been following this blog series, we’ve made it to the high end of the behavioral science “sophistication spectrum.” Level 3 behavioral science is all about investigating and implementing proven nonconscious measurement techniques in your research designs. What follows is a list of some of my favorite techniques, along with a brief explanation of why each qualifies as validated behavioral science and some common use cases.
Eye tracking is as simple as it sounds: a camera records where your eyes are looking — typically at an advertisement, shelf, or package — and maps those gaze points onto the material being viewed. The result is a very accurate map of what attracts people’s attention. The alternative, asking people what they noticed, is subject to distortions in recall and to bias toward reporting what seems socially desirable.
Eye tracking has applications for testing virtual shelf sets, packaging, advertisements, direct mail, and just about any content consumers will look at. Critics point to its cost, and to the fact that it does not account for people’s uncanny ability to stare at something without seeing it — a condition sometimes known as Male Refrigerator Blindness — but I find that the pros far outweigh the cons in getting us closer to “do” and further from “say.”
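The gaze-mapping idea described above can be sketched in a few lines: aggregate recorded fixation points into a coarse grid to see which regions of the material drew the most attention. This is a minimal illustration, not any vendor’s actual pipeline; the coordinates, cell size, and function name are assumptions for the example.

```python
from collections import Counter

def gaze_heatmap(fixations, cell=50):
    """Bucket (x, y) gaze fixations (in pixels) into a coarse grid and
    count how often each cell was looked at -- a minimal attention map."""
    counts = Counter()
    for x, y in fixations:
        counts[(x // cell, y // cell)] += 1
    return counts

# Hypothetical data: three of four fixations cluster on one region
# of a package shot, so that grid cell dominates the map.
fixations = [(120, 80), (130, 90), (125, 85), (400, 300)]
heatmap = gaze_heatmap(fixations)
# heatmap[(2, 1)] -> 3 (pixels 100-149 x 50-99 attracted most attention)
```

Real eye-tracking software adds fixation detection, smoothing, and overlay rendering on top of exactly this kind of aggregation.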
Facial coding, like eye tracking, relies on a camera trained on your face, recording your facial reactions as you watch content, typically video. The programs detect expressions such as anger, disgust, and joy by measuring things like smiles, sneers, lip curls, and flared nostrils. Those reactions can then be synced to the content, showing exactly which elements evoke each feeling.
Facial coding can be used for testing advertising, packaging, and any form of video content. Critics suggest facial coding is not always accurate and does not necessarily capture the full range of emotions people feel. Psychologist Robert Plutchik suggested that “facial expressions are imperfect communicators of emotional states. Emotions and facial expressions are only partially related and the connections between the two classes of events are subject to many disrupting influences.” That said, facial coding can reveal things that simply asking people will not.
Implicit association tests (IAT) evolved out of studies of prejudice and stereotyping. The most famous public example is Harvard’s Project Implicit, which you can try for yourself (it’s fascinating, if sometimes unsettling). The basic idea is that the associations you make between categories (like pleasant and unpleasant) and words, brands, or people indicate how you feel — but it is more nuanced than that, because the test also accounts for the speed with which you make those associations. The underlying premise is that it measures thoughts that exist outside of conscious awareness or conscious control, making it ideal for measuring socially undesirable attitudes. It can also be a useful tool for surfacing nonconscious feelings about brands, companies, or offers.
As intriguing as IAT is, there is controversy regarding precisely what it measures, and some of its results have proven difficult to reproduce. Combined with more traditional research metrics, however, we find IAT quite valuable in “painting better pictures” for our clients.
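The speed-of-association logic behind the IAT can be illustrated with a toy calculation: compare reaction times between a “compatible” block (e.g., brand and “pleasant” share a response key) and an “incompatible” block, scaled by the variability of all trials. This is a simplified sketch in the spirit of the published D-score measure, with invented reaction times; real IAT scoring involves additional trial filtering and block-level pooling.

```python
from statistics import mean, stdev

def iat_d_score(compatible_rts, incompatible_rts):
    """Simplified IAT effect: difference in mean reaction time (ms)
    between incompatible and compatible pairing blocks, divided by
    the standard deviation of all trials (a D-score-style measure)."""
    all_rts = compatible_rts + incompatible_rts
    return (mean(incompatible_rts) - mean(compatible_rts)) / stdev(all_rts)

# Hypothetical trials: respondents are faster when the brand is
# paired with "pleasant" than when it is paired with "unpleasant".
compatible = [620, 640, 600, 630]      # brand + "pleasant" share a key
incompatible = [780, 820, 760, 800]    # brand + "unpleasant" share a key
d = iat_d_score(compatible, incompatible)
# d > 0: a positive implicit association with the brand
```

The sign and magnitude of the score, not the raw times, carry the interpretation — which is why the IAT can surface attitudes a direct question would miss.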
Just do it
At Maru/Matchbox, we’re all about painting better pictures. That’s why we’re well-versed in designing, deploying, and analyzing research initiatives that incorporate these techniques (and many more) as part of our agile, community-based, technology-accelerated insight solutions. If you’re interested in learning more, contact us; we love this stuff!
This article was written by Jonathan Dore, Vice President, Consumer Insights for Maru/Matchbox and was first featured on the Matchbox Blog.