smart choices in AI: how to evaluate consumer research AI tools
by: Becky Mead
AI in FMCG consumer research isn't just the next big thing; it's already changing how we understand what shoppers want. But jumping on the AI bandwagon comes with its own risks. What if you pick the wrong AI tool? What if the tool steers you off course, costing you time, money, and potentially a winning product idea?
There are key questions you need to ask when you're evaluating an AI tool for your consumer research needs. By asking these questions before you invest, you'll avoid expensive mistakes, keep your research on point, and make sure your FMCG product decisions are sound.
harnessing AI for sharper consumer insights
Let's first zoom in on AI's role in consumer research. It's all about doing things smarter and faster.
For instance, imagine you could snap your fingers and get feedback from the exact group of shoppers you're curious about. That's what AI is starting to offer: rapid quantitative testing that's more accurate and lets you talk to a more precise segment.
There are also programmes that can create screeners, discussion guides, or questionnaires. AI tools can also be used to create your own bespoke stimulus for testing. For example, you could create the perfect image for your focus group or use the tool to generate ideas for new projective techniques.
AI is opening doors to analysing qualitative responses at scale, a potential game-changer in understanding the heart and soul of consumer conversations.
Yet, AI isn't a one-stop solution just yet. There's a cautious approach to adopting AI, especially in tasks like interview moderation, where its application is still emerging.
We're proceeding with careful optimism. As AI lays the groundwork for a future that promises deeper and more rapid consumer insights, we need to balance this with a thoughtful consideration of its current limits.
a framework for evaluating AI in consumer research
When evaluating an AI tool for your consumer research, it can feel like you're looking at a black box. The technology involved can be so complex that you know the tool is doing something, but you can't see what's happening inside the box. So, you have no way to evaluate the strengths or the weaknesses of the tool you're looking at.
However, you can use our five-part framework to make better decisions about tools.
1. evaluating AI tool design for consumer research
Assessing AI tools for consumer research requires scrutinising them just as you would any research methodology. Consider their validity, data integrity, flexibility for customisation, and ethical implications.
A reputable market research firm will outline the strengths and limitations of their methods, discussing how they’ll address any shortcomings and factor these into their analysis of the research findings.
You want your AI tool supplier to be just as transparent so you can make sure the tool itself is designed for the right purpose.
An AI tool might be promoted as being great for consumer research. But before you invest, you need to find out whether the tool has been specifically designed for consumer research by someone who understands consumer research (a question an AI supplier should be able to answer easily).
The next step is to find out whether the tool can give you the answers you need. Your AI supplier should be able to talk you through any watch-outs for using their tool and its weaknesses.
2. assessing validity in research tools
An important consideration in consumer research is validity. How well can the design of an AI research tool measure what you actually want it to measure?
In AI's case, ask any AI tool supplier: how can you evaluate how well the model is working? Can you evaluate the inputs to the model?
These are important questions because we know that AI currently has a few weaknesses. AI is vulnerable to biases that come from the training data selected. So, any tool trained on English-language web scrapes is very likely to be heavily influenced by the US, which is not very helpful to us in Australia.
While smart, AI still has learning to do. It's currently not good at understanding some human responses, like sarcasm or body language. Good qualitative researchers know that understanding what's not being said is sometimes more important than what is being said. AI can't pick up on this at the moment.
Other quirks include AI's tendency to hallucinate. You might get a picture of hands with too many fingers, or essays filled with nonsense as the model guesses what word should come next.
That's why it's essential to consider where the AI tool is getting its data, how current that data is, and whether it truly understands your market's context.
3. the reliability factor
Reliability in consumer research is all about consistency. You want to get the same results if you run the same test twice.
So, when it comes to AI, you have to ask questions about its stability—what can cause the outputs to change?
If you feed an AI tool more data, will the conclusions hold steady? Or will the results swing around? What would cause this?
Also, what if two different people on your internal team use the same AI? Will they end up with different results? If so, what will be driving that?
AI tools should be stable. But when evaluating a tool, it's still important to understand whether a small tweak could have knock-on effects and skew results wildly.
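One practical way to probe this yourself is to run the same input through the tool several times and compare what comes back. The sketch below is a minimal, hypothetical Python example: `get_tool_summary` is a placeholder standing in for whatever API your supplier actually exposes, and the run count and similarity threshold are illustrative assumptions, not recommendations.

```python
from difflib import SequenceMatcher
from itertools import combinations


def get_tool_summary(prompt: str) -> str:
    """Placeholder for the AI tool's API -- swap in your supplier's real client call."""
    raise NotImplementedError("Connect this to the tool you're evaluating")


def stability_check(prompt: str, runs: int = 5, threshold: float = 0.8) -> bool:
    """Run the same prompt several times; return False if any two outputs differ too much."""
    outputs = [get_tool_summary(prompt) for _ in range(runs)]
    for a, b in combinations(outputs, 2):
        if SequenceMatcher(None, a, b).ratio() < threshold:
            return False  # results swung around between identical runs
    return True  # conclusions held steady across every run
```

If two colleagues feed the tool the same brief and this kind of check fails, that's exactly the conversation to have with the supplier: what is driving the variation?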
4. protecting consumers and data integrity
As AI entwines more deeply with consumer research, ethical considerations take centre stage. It's not just about the efficiency of data collection but also about protecting those who provide it.
If you work in consumer research and conduct qualitative interviews, you've probably encountered a situation where a benign topic suddenly becomes unexpectedly emotive or sensitive to a participant.
How well would an AI chatbot or an AI qualitative moderator be able to respond to that happening? Would the AI even notice it was happening? What if the AI used language that the participant considered inappropriate? Would an AI chatbot recognise this and be able to recover the interview? Or would the participant be left hurt or offended?
You need to ask AI suppliers whether they've done any studies to confirm how effectively their tool performs compared with a skilled human moderator.
Another layer of ethical AI use concerns data security. Your customers’ and company’s sensitive information demands strict confidentiality. So, when entrusting data to AI tools, confirm they employ iron-clad security measures to prevent data misuse.
5. detecting fraudulent use of AI
If you're using AI or any other tool to collect consumer responses, you also have to consider how well it detects respondents using AI fraudulently. If we can ask ChatGPT what people think about vegan chocolate, so can respondents, and they can simply copy and paste its answer into your survey. That's probably not a true representation of how they really feel about the topic.
As researchers, we need to develop a suite of tools to protect the quality of the data in the future.
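There's no single fix, but even simple heuristics can form part of that suite. As a purely hypothetical illustration, the Python sketch below flags open-ended survey responses that are near-duplicates of one another, which can be one signal (though never proof on its own) of copy-pasted, machine-generated answers; the respondent IDs, answers, and threshold are invented for the example.

```python
from difflib import SequenceMatcher
from itertools import combinations


def flag_suspicious_responses(responses: dict[str, str], threshold: float = 0.85) -> set[str]:
    """Flag respondent IDs whose open-ended answers are near-duplicates of someone else's."""
    flagged = set()
    for (id_a, text_a), (id_b, text_b) in combinations(responses.items(), 2):
        if SequenceMatcher(None, text_a.lower(), text_b.lower()).ratio() >= threshold:
            flagged.update({id_a, id_b})
    return flagged


# Illustrative data: two respondents pasting the same chatbot answer about vegan chocolate
answers = {
    "r001": "Vegan chocolate offers a rich, ethical indulgence that appeals to conscious consumers.",
    "r002": "Vegan chocolate offers a rich, ethical indulgence that appeals to conscious consumers!",
    "r003": "Honestly I just grab whatever block is on special, vegan or not.",
}
print(flag_suspicious_responses(answers))  # flags r001 and r002 for review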
a reality check
There's an undeniable buzz, and some consumer research AI tools are making significant progress. Even so, the consensus is that a cautious approach to evaluating them is still needed.
AI can't yet wholly replace human intuition and insight in consumer research. So, for now, the focus remains on how AI can complement and strengthen the way we do consumer research.
If you're considering an AI tool and need guidance on making the best choice, reach out at becky@playinnovation.com.au for expert advice on selecting the right solution for your needs.
about the author
Becky Mead
Becky has spent the last 18 years getting curious about understanding consumers so FMCG manufacturers can create products they truly want. Becky’s favourite part of the job is helping businesses leverage the consumer perspective to grow - fast! She believes in the benefits of working in partnership with her clients across the entire innovation process and focuses on consumer-first, agile approaches.