
AI repeats one of the most persistent errors in UX research — and does it so convincingly it's hard to detect

Marttiina Arelma Design AI


Here's the uncomfortable truth for anyone building towards a fully AI-powered research pipeline: AI won't do research for you. It will simulate something that looks like analysis.

Research is essentially meaning-making. It's the act of interpreting what people say, do, and leave unsaid — and turning that into something a team can act on. AI doesn't do this. What it does is process text, find patterns, and present the output in the language of research. The result looks like an analysis. It isn't.

There is an ongoing narrative that AI can assist with UX research analysis. That's true: it can help with parts of the analysis. The problem starts when we treat that as the whole thing.

A quick scoping note: when I say "AI" in this article, I'm referring specifically to tools built on large language models (LLMs). These are the tools most of us are now encountering in research workflows — and they come with a specific set of strengths and blind spots worth understanding.

Where AI can be of use in UX research

To understand where AI helps — and where it doesn't — it's worth looking at the research process as a whole.

Research phases

AI can play a role across the process, from planning through to reporting. But the gains are not evenly distributed — and the risks concentrate in one place: analysis.

Research phases with AI tooling in use at Fraktio

This setup at Fraktio keeps evolving with our practices and our clients' needs. My colleague Antti Lavio covers the tools we currently use at Fraktio and their practical applications in more detail in his webinar — this article focuses on what happens when we get the analytical layer wrong.

Analysis is not one activity. It operates at three distinct tiers — the three tiers of research analysis:

Description — staying close to the data. Summarising what was said. Sticking to the facts.

Analysis — finding patterns. Relating key ideas. Building a picture from what you found.

Interpretation — making sense. Drawing meaning. Using your knowledge and experience as a human being.

The three-tier framework

AI is strong at the first tier, useful with supervision at the second, and unreliable at the third. Empirical testing has confirmed this: even after iterative prompt engineering, AI succeeded at summarisation but never produced satisfactory results for thematic analysis or cross-theme insights (Cook et al., 2025).

This distinction matters — because most of the value in research comes from interpretation, not description.

An old problem, amplified: pseudo-quantitative research

There's a persistent misconception in UX research that ought to be surfaced here. I call this "pseudo-quantitative" research — and it's a long-standing weakness of many product teams attempting to run lean UX.

Let's assume we've concluded that usability testing is the right fit for our research problem and we go about running our studies. On the back of that type of activity — or even in-depth interviews — we often see statements such as:

"5 out of 10 said X" or "4 out of 6 did X"

It sounds like solid numerical data, but it's not. The issue is that it's often interpreted as such. There's pressure to present understanding as numerical data because we trust numbers. The problem is that those statements are not statistically meaningful, and they don't shed light on why something is said or happening.

This type of UX research (usability testing, interviews) is not meant to be quantitative but qualitative. It produces a completely different type of data. It's not there to tell us what, but rather why things are the way they are. And this type of data is the lifeblood of designing truly great products and services. We get very different outcomes if solutions are based purely on numerically and statistically valid findings. And we definitely get misinformed outcomes if we use "why" data to answer "what" and "how many".

In short: you should not try to quantify qualitative data, even if it's to speak the language stakeholders respond to. It sends the wrong message about qualitative research and creates false confidence.

AI makes this particularly treacherous, because it repeats the same mistake in a much more convincing way than humans ever did.

How AI amplifies qualitative analysis errors

Two mechanisms make AI-assisted analysis, run without proper controls, worse than what came before.

First, it automates pseudo-quantitative interpretation. When AI shuffles through 20 interview transcripts and says "65% of participants mentioned X", it sounds like analysis. But it's purely counting the frequency of words. It cannot tell the difference between someone mentioning something in passing and someone stating something central to their experience. Worse, it doesn't flag that it's doing this: it assigns meaning through pseudo-quantitative logic without letting the user know.
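To see how thin that counting really is, here is a minimal sketch in plain Python. The transcript snippets and the keyword are invented for illustration; the point is that a frequency count treats a passing remark, a central frustration, and an explicit "not my problem" as exactly the same thing.

```python
# A minimal sketch of the counting logic behind a claim like
# "65% of participants mentioned X". The transcripts are invented.

transcripts = {
    "P1": "Pricing came up once, in passing, while talking about onboarding.",
    "P2": "Honestly the pricing page is why I almost didn't sign up at all.",
    "P3": "I never look at pricing, my manager handles that.",
    "P4": "The onboarding flow was confusing after the first step.",
}

keyword = "pricing"
mentions = [pid for pid, text in transcripts.items() if keyword in text.lower()]
share = len(mentions) / len(transcripts)

# Prints "75% of participants mentioned 'pricing'". It reads like a finding,
# but P1 (a passing remark), P2 (a central frustration) and P3 (explicitly
# not the participant's concern) are counted identically. The "why" is gone.
print(f"{share:.0%} of participants mentioned '{keyword}'")
```

The output has the shape of a result, but nothing in it distinguishes salience from coincidence; that distinction is interpretive work.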

This is where human judgement is still needed: how well you take your own notes, how precise your follow-up questions are, how closely you observe the unspoken (gestures, pauses, hesitation). These are researcher practices, and they matter.

Second, AI inherits credibility from both research traditions at the same time. It borrows the format of quantitative research (percentages, classifications, summaries) while using the language of qualitative research (themes, insights, stories). The output appears both measurable and deep. It's actually neither.

The issue isn't using AI. It's that AI makes it easy to skip the phase where a human interprets. In traditional research, pseudo-quantitative results are most often produced by non-researchers, or by researchers under pressure, communicating to leadership who prefer answers over interpretations. An experienced researcher would be aware of this tension. AI removes the friction altogether: it produces summarised, numbered, cleanly categorised output without anyone having to make an interpretive decision.

It is exactly at that point of interpretation that insights are born, and AI makes it easy to skip.

Critical AI literacy in design and UX research

What has always been critical, and remains so, is literacy about the tools we use and epistemological grounding in how we do research.


Confirmed by research (Cook et al., Academic Medicine, 2025): after extensive testing, AI succeeded at summarisation and never generated satisfactory results for thematic analysis or cross-theme insights. Not a prompting problem. A structural one.

AI is good at summarising, retrieving, and organising large amounts of material quickly. These are real gains, with one small caveat: AI does not look things up. It generates text that is statistically plausible rather than verified, and sometimes it is simply wrong.

AI should be used with reservations in analysis. You direct the AI through prompts. The AI surfaces and organises material. But you do the actual analytic work: deciding which patterns matter, how they relate, and what they mean in context. AI can look like it is doing analysis when it is actually doing sophisticated description. It produces coherent, plausible-sounding outputs, but the meaning has to come from you.
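As one illustration of what "directing through prompts" can look like, here is a minimal sketch that deliberately scopes an LLM call to the first tier only: description. It assumes the OpenAI Python SDK and an API key in the environment; the model name and prompt wording are placeholders, not a recommendation of any particular tool.

```python
# A sketch of scoping an LLM to tier one (description) only.
# Assumes the OpenAI Python SDK; model name and prompt are illustrative.
from openai import OpenAI

client = OpenAI()

SYSTEM_PROMPT = (
    "You are assisting a UX researcher. Summarise the transcript below. "
    "Stay close to the data: report what was said, quote where useful, "
    "and do not infer themes, motivations, or recommendations."
)

def summarise_transcript(transcript: str) -> str:
    """Return a description-level summary of one transcript.
    Pattern-finding and interpretation stay with the researcher."""
    response = client.chat.completions.create(
        model="gpt-4o",
        temperature=0,
        messages=[
            {"role": "system", "content": SYSTEM_PROMPT},
            {"role": "user", "content": transcript},
        ],
    )
    return response.choices[0].message.content
```

The constraint sits in the instructions: the model is asked to stay at the descriptive tier, and the thematic and interpretive tiers remain explicitly the researcher's job.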

Which brings us to the last point:

AI is not good at interpretation: making meaning, drawing conclusions. This is where you make the meaning, and it requires the full weight of your accumulated knowledge, experience, and judgement as a human being. AI cannot replicate this. This isn't a gap that better models will close. It's a different kind of activity.

Critical AI literacy in design means understanding the boundary between these activities.

Specifically:

  • Understanding what AI really does with data: pattern matching, not interpretation
  • Assessing where AI brings genuine value and where it produces false confidence
  • Remembering that accountability still sits with humans, not AI
  • At an organisational level, helping teams build their own judgement about these processes

But literacy alone isn't enough without grounding. Teams adopt AI analysis tools without first understanding what qualitative analysis actually is. They produce clean-looking outputs without knowing whether those outputs constitute genuine analysis or just sophisticated description. These are skills that erode in researchers too, if not practised regularly. De-skilling of analytic practice has in fact been named as a risk alongside hallucination and privacy concerns.

It's also worth considering what you might miss if research, at any level, sits with a single individual. That is true with or without AI tools, but AI widens the gap further. At its best, the research process brings development teams closer to the human problems they're attempting to solve. Relying on AI to do it for us risks missing not just the insights, but the opportunity for collective meaning-making and alignment that enables teams to keep building better products, faster and at scale.

The instructions, project context, and analysis frameworks are key when using AI tools in UX research. Which tools you choose is up to you, as long as they match what you need to achieve and you are able to direct them.

AI tools don't actually analyse your research for you. However, they can help you with the analysis process by summarising and categorising information. AI in UX research will only ever be as skilful and as good as you allow it to be.

 

This article draws on the following research and practice literature:

Cook, D. A. et al. (2025). AI to support qualitative data analysis: Promises, approaches, pitfalls. Academic Medicine, 100(10), 1134–1149.

Kuniavsky, M. (2026). Design practice assumes a world that no longer exists. Medium.

Ladner, S. (2025). On AI in qualitative analysis. LinkedIn.

Wolcott, H. F. (1994). Transforming Qualitative Data: Description, Analysis, and Interpretation. Sage.

Woolf, N. H. (2024). How can Gen-AI assist with interpretive QDA? CAQDAS Networking Project.

 


At Fraktio, our design team combines UX research, service design, and AI-assisted workflows — with the critical judgement to know where the line is. If you're figuring out how AI fits into your research practice, or need experienced researchers who understand the difference between analysis and its simulation, let's talk.

Read more about our approach to UX research, UX/UI design, and service design.