Synthetic User Research is a Terrible Idea

Any product claiming it can conduct valuable research for your software by ‘interviewing’ a synthetic user or generating a research snapshot with generative AI is lying to you. It can produce a summary of something, sure, but that’s a far cry from anything valuable. Even AI summaries of a real research effort are risky, because they can’t infer or convey what actually matters in the given context.

Let’s start with the biggest red flag: bias (IBM).

Do you really know where that LLM sourced its training data? Do you trust that its makers gathered information intentionally, thoughtfully, and ethically from a wide variety of vetted sources? There will always be bias in the research work we do, but when we control how the research is done, we can effectively minimize its impact. That is, unless you’re okay with your AI researcher believing ‘Women are rarely doctors, lawyers or judges.’

AI outputs are only generalizations.

A generic software solution won’t succeed. Software that only meets the minimum requirements fails because public consumers won’t adopt it. (They’re already using something else, and you offer no value worth switching for.) Internal teams fare no better: they become less efficient because they have to relearn how to perform their primary tasks, and since you haven’t solved their niche needs, they’ll keep their old workarounds or invent new ones anyway. Think about it this way: would you let an AI ‘researcher’ tell an architect how you’d like your house designed?

Getting 80% of the way isn’t hard, but the last 20% is where you win or lose.

If you’re only getting generalized outputs, it’s likely you already know how to address them. So investing in AI research is a waste of time and money.

Let’s face it, there are very few truly unique experiences and workflows out there. For any new project you’re tackling, you can get ~80% of the way there without a ton of effort. Unfortunately, so can everyone else. Understanding where the value is and having a strategy for that last 20% is the difference between a successful launch and a dud. This is likely why all the companies that laid off employees over the last three years or so (in the name of efficiency) have started to see their experiences erode. They (or their shareholders) decided they were okay with 80%, but consumers aren’t satisfied, according to a report from Forrester.

When AI research actually does present you with specifics, they’re made up.

That’s par for the course now. AI hallucinations, or, more accurately, bullshit, can’t be avoided. OpenAI, Google, and Microsoft have all admitted as much.

So the question becomes, are you willing to bet the success of your software project on generalizations or made-up needs?

