Bridging the Gap Between AI Hype and Research Reality
There’s no shortage of hype about artificial intelligence in the world of research. I recently had a job that took me out of the research world for 14 months, and when I reconnected with my peers I was surprised to hear how much they were using AI – but I wasn’t clear on exactly how they were using it, or with what success. I had read many articles about how AI would revolutionise analysis, replace moderators, design studies and write reports.
But I wanted to know what was really happening on the ground – how AI was actually being used, beyond the headlines.
To find out, I spoke with eight seasoned researchers – across small and global agencies, private sector and government, agency side and client side – about how they were really using AI in their day-to-day work. These are custom researchers grappling with real data, real clients and real pressure to deliver quality insights, while also keeping across what they describe as a growing number of tech companies competing on tenders and clients demanding ever greater productivity gains.
The result, I hope, is an honest and helpful picture of progress, potential and the persistent gaps.
What You Need to Know About AI First
Before diving into how researchers are using AI today, it's important to understand what AI is—and isn’t.
Most large language models (LLMs), like the one behind ChatGPT, are trained on internet-scale data, mostly in North American English. That means they can struggle with cultural nuance – for example Australian understatement, regional dialects, irony and locally specific phrasing.
AI doesn’t “understand” the way humans do. It works on statistical probabilities: it predicts likely words or ideas based on patterns, not meaning. It mimics intelligence through structured generalisation (deductive and rational), but it lacks human traits like intuition, empathy and inductive reasoning – the kind that comes from deep experience, emotion and sense-making.
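For readers who want to see what “predicting likely words” actually means, here is a minimal sketch – purely illustrative, assuming the open-source Hugging Face transformers library and the small public GPT-2 model – showing that, given a prompt, the model simply scores how probable each possible next word is.

```python
# Minimal illustration (not from any of the interviewed teams): an LLM only
# assigns probabilities to possible next words – it has no model of "meaning".
# Assumes the Hugging Face `transformers` library and the public GPT-2 model.
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

prompt = "The main theme emerging from the interviews was"
inputs = tokenizer(prompt, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits[0, -1]   # scores for every possible next token
probs = torch.softmax(logits, dim=-1)        # turn scores into probabilities

top = torch.topk(probs, 5)                   # the five most likely continuations
for p, idx in zip(top.values, top.indices):
    print(f"{tokenizer.decode(int(idx))!r}: {p:.1%}")
```

Everything a chatbot “says” is generated this way, one probable word at a time – which is exactly why it can sound fluent without genuinely understanding.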
In research, those human traits are essential. The spark of an insight, the sense that something’s off, the creative leap—that's not replicable by code. Not yet.
What Are Researchers Actually Doing With AI?
Let’s begin with the key areas where AI is making an impact.
AI is being used as a support tool, not a substitute for thinking. And not all AI tools are created equal—some are bespoke, others are off-the-shelf. Some work brilliantly. Some are more hassle than help.
“We Use It as a Second Opinion, Not for Direction”
Across interviews, a common thread emerged: AI is a partner, not a replacement. One small agency summed it up well: “We use AI to validate our thinking.”
This reflects a pragmatic approach: AI helps check, summarise, speed up. But final judgment? That stays firmly with the human researcher.
Many researchers use tools like ChatGPT as an “intelligent assistant” to gather quick subject-matter or audience insights during early project design – for example, to summarise prior knowledge about a new market segment and even to list potential questions or attributes worth exploring. AI can give researchers a head start in understanding the landscape.
Where AI Works Well:
Nearly every researcher is using AI to automatically code open-ended responses (thematic analysis), and to generate transcripts and a basic synthesis of an interview. Products like Otter.ai and Dovetail came up often.
Many also used AI for desk research where appropriate and found it shaved hours off the reading.
When it comes to synthesis it’s a bit more complicated. One researcher described their experience: “Even summarising a single in-depth interview has been OK. But when we ask AI to synthesise across ten or more? It gets confused. The nuance isn’t there, nor the category context; the language going from verbal to written comes through with errors, and it can’t deal well with brand names – it often mixes them up.”
One researcher put it simply: “it’s only 50–60% accurate on qual studies”.
Interestingly, AI is proving quite handy when it comes to summarising videos and photos from fieldwork. Several teams reported success identifying key themes or sentiments from large volumes of visual data. That’s a small but powerful win in contexts like ethnography or immersion interviews. However, the researchers still ran their usual analysis process; AI didn’t replace their work, it augmented it.
One area where AI truly shines in the design phase is creating stimulus materials for concept testing and creative research. What used to take one to two weeks is now completed in under an hour. Bear in mind this agency has deep domain knowledge across several categories; with the right inputs and prompting, the results were strong enough that clients invited the agency in to help upskill their own teams on working effectively with AI.
One global agency researcher said they use AI to draft a first pass of a recruitment screener (the criteria for selecting participants), since that follows a formula.
Two researchers said they are experimenting with intelligent probing on quantitative open-ended questions, but recognise there may be limits.
Custom-Built Models: Exciting, But Expensive
Some of the most advanced uses of AI in research are coming from teams investing in custom-built models.
One small agency, for example, is using open-source AI models from Hugging Face and training them on specific categories and client problems. Think: a model trained on five years of research in a particular category – sometimes hundreds of studies. This allows the AI to “understand” context far better than a general-purpose model like ChatGPT.
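For those curious about what that kind of set-up can involve, here is a simplified sketch – assuming a labelled file of past study verbatims and the open-source Hugging Face transformers and datasets libraries; the model name, file name and theme labels are illustrative assumptions, not the agency’s actual stack.

```python
# Simplified sketch of fine-tuning an open-source model on a category-specific
# corpus so it classifies new verbatims in context. The model name, CSV file
# and theme labels are illustrative assumptions, not the agency's actual setup.
from datasets import load_dataset
from transformers import (AutoTokenizer, AutoModelForSequenceClassification,
                          Trainer, TrainingArguments)

data = load_dataset("csv", data_files="past_study_verbatims.csv")  # columns: text, label
labels = sorted(set(data["train"]["label"]))
label2id = {label: i for i, label in enumerate(labels)}

tokenizer = AutoTokenizer.from_pretrained("distilbert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained(
    "distilbert-base-uncased", num_labels=len(labels))

def preprocess(batch):
    # Tokenise the verbatims and convert theme names to integer class ids
    enc = tokenizer(batch["text"], truncation=True, padding="max_length", max_length=128)
    enc["labels"] = [label2id[l] for l in batch["label"]]
    return enc

tokenized = data.map(preprocess, batched=True)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="category_model", num_train_epochs=3),
    train_dataset=tokenized["train"],
)
trainer.train()  # the resulting model picks up the category's language and themes
```

The value comes less from the particular model than from the years of category-specific studies it is trained on – which is also why the cost issue below is hard to avoid.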
But there’s a catch: cost. “We pass the price on to the client”.
Large agencies, unsurprisingly, are further along. One global firm has an internal AI team building bespoke models trained on desk research, primary studies and client reports. These tools can run synthesis, generate Q&A outputs and even help identify white space, or new value propositions.
In another case, an insights manager at a large multinational company described a model trained on a decade of concept test results – effectively teaching the AI what “success” looks like in their context. The tool could then screen and prioritise new concepts for further testing. “It often backed up our own instincts,” they said – particularly for rational, attribute-driven concepts. But when it came to emotionally led ideas, it missed the mark.
The upside? Unmatched speed to insight. The downside? Trust.
“It’s great for directional guidance,” one said. “But for segmentation? Not yet. The trust just isn’t there.”
Synthetic Moderators & Respondents: Still in Beta
If you’re wondering whether synthetic AI moderators have truly arrived, the answer is: not really.
Despite a few clients requesting AI-led qual sessions, most researchers remain sceptical. One described the process as “clunky,” lacking the natural conversational flow of a skilled human moderator.
At a global insights firm, one client is beginning to use synthetic moderators as an experiment, and the firm was asked to design an AI moderation guide – a sign that expectations are shifting.
One team is about to pilot AI moderation as part of a tender, in response to a competitive pitch, positioning it as an experiment rather than a finished offer. The researcher explained: “Tech companies are my competitors now. I need to show productivity gains just to have a fighting chance of winning the tender.”
There were no examples of successful synthetic survey respondents, but one insights manager at a major FMCG company said they’ve tinkered with synthetic respondents for years and found “no deep insights yet”. Another large agency said they are trialling them in another part of the business but have found it’s not working well yet.
There was one other example of synthetic survey respondents: a team used them simply to run quality assurance on a questionnaire design for a very hard-to-reach respondent group.
Where AI Falls Short for Researchers
The Ethics Barrier: Privacy and Trust Still Limit AI Uptake
Ethical and privacy concerns are also a real blocker. Several researchers said they’re unable to use third-party AI platforms due to client confidentiality and the tech companies’ unwillingness to explain exactly how their tools work.
The Verdict: Real Progress, Real Caution
So where does this leave us?
AI is undeniably changing the research process. It’s speeding up desk research and synthesis, enabling new ways to probe data, and offering fresh ways to visualise and communicate findings.
But it’s not replacing researchers. Not even close.
The best practitioners are those who treat AI as a support tool rather than a substitute for thinking, and who keep final judgment firmly human.
As one researcher said: “AI can do the heavy lifting. But the real insight? That still comes from us.”
The Future is Human-AI Partnership
If you’re a researcher wondering whether to invest time in learning how to use AI, the answer is simple: YES. Not because it will replace you, but because it will make you faster, sharper, and more focused on what really matters.
Whether it’s checking your questionnaire design logic, proposing an attribute or question, summarising a stakeholder interview, or automating the coding of an open-ended question—AI can lighten the load.
And I would encourage researchers to be bold in experimenting with AI where it can improve their work. (I certainly have, and it delivered incredible results for the business – a strategic direction we could not have developed without AI, because doing it manually would have been cost-prohibitive.)
But the thinking? The insight? The interpretation? The storytelling? That’s where we humans shine. And will continue to.
Have you used AI in your own research practice? What’s worked—and what hasn’t? Share your experience in the comments below. I would love to hear.
#AI #MarketResearch #QualResearch #QuantResearch #CustomResearch #FutureOfWork #Innovation #SyntheticModeration