Artificial Intelligence is everywhere right now. A wonderful human achievement. An extraordinary tool. A major disruptor. Depending on who you talk to: the next Renaissance… or the world’s biggest bubble.
Like every industry, the insights community has been heavily influenced by it. Over the past three years we've heard the most respected leaders and the boldest upstarts talk about AI at events and conferences all over the world.
And honestly? A lot of the innovation is great. We find most insights professionals thoughtful and responsible in how they test and adopt new tools.
We use AI ourselves. Quite a bit. And we appreciate it.
But there’s one specific practice we believe crosses a line:
Synthetic profiles.
AI-generated “people” created to simulate how humans think, feel, and behave.
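For readers who haven't seen the practice up close, here is a minimal, hypothetical sketch of how a synthetic respondent is typically assembled: a language model is handed a fictional persona and asked to answer research questions in character. The persona, model name, and question below are illustrative, not drawn from any specific vendor or study.

```python
# Hypothetical sketch of a "synthetic profile": an LLM role-plays a
# fictional consumer and answers a survey question in character.
# Assumes the openai Python SDK is installed and OPENAI_API_KEY is set.
from openai import OpenAI

client = OpenAI()

# Illustrative persona; real vendors attach dozens of such attributes.
persona = (
    "You are 'Maria', a 34-year-old nurse in Lisbon, renting a flat, "
    "price-sensitive, shopping for groceries twice a week."
)

question = (
    "A new premium yogurt costs 40% more than your usual brand. "
    "Would you try it? Why or why not?"
)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative model choice
    messages=[
        {"role": "system", "content": persona},
        {"role": "user", "content": question},
    ],
)

# The output reads like an answer from "Maria" -- but nobody with rent due,
# a body, or anything at stake ever weighed that trade-off.
print(response.choices[0].message.content)
```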
From a behavioral economics standpoint, that makes no sense.
The Context Gap
There are two reasons why this practice is so flawed.
1) AI cannot perceive full human context.
It can't see, can't feel, isn't mortal, and has no stakes. It is most likely not conscious.
And context is the foundation of behavioral economics. It changes everything:
How we react under pressure
What we prefer when we feel excluded
How scarcity alters our choices
Why we make irrational trade-offs to protect identity
Change the situation → change the decision.
AI models trained on past data can only remix the same situations they already know. They don’t understand what it means for something to be new, scary, fragile, or personally defining.
A model doesn't panic when the rent is due. It doesn't buy ice cream after a breakup. It doesn't fear losing someone it loves.
Without stakes, there is no behavior, just prediction.
The Demand Problem
2) AI makes perfect sense on the supply side of research, the work of producing and delivering insights. Speed, automation, scale? Wonderful.
But on the demand side — the human side — we risk replacing genuine signals with synthetic noise.
Some argue: “Don’t worry, there will be a human in the loop for important decisions.”
That sounds like human decision-making… but with extra steps.
Synthetic profiles simply cannot provide what research is designed to uncover: real human behavior, driven by real human meaning.
Otherwise, we’re just feeding models our assumptions and calling the output “insight.”
Low Stakes in Research, Higher Stakes Everywhere Else
In market research, the consequences of this mistake are (mostly) commercial: wrong insights, wrong strategy, wasted investment.
But outside our industry?
Synthetic identities become something much darker.
AI friends. AI partners. AI intimacy on demand.
Products that pretend to fulfill deeply social needs, but offer only simulation.
A machine cannot form a bond. It can only mimic our desire to have one.
And if loneliness is already rising, why build systems that encourage us to connect less with each other and more with profitable illusions?
The only place this makes sense is on a spreadsheet, not in a society.
What We Believe
AI can support human intelligence. It can accelerate our ability to learn. It can make research faster, better, more efficient.
But it should never try to be human.
Understanding real people requires vulnerability, consequences, and stakes: everything machines don't have.
So we should keep asking:
Are we using AI to better understand humanity? Or to replace the parts of humanity that are inconvenient?
That decision matters more than any technology trend.
So Where Do We Draw the Line?
AI is a remarkable tool when it supports human intelligence, not when it replaces it. We can, and should, use it to:
Accelerate analysis
Enhance creativity
Improve decisions
Democratize access to knowledge
But when we use AI to impersonate people, to generate the very demand we’re trying to understand, we hollow out the core of what makes us human.
There are many places where automation makes sense. Human experience is not one of them.
The more we rely on synthetic stand-ins for real people, the less we understand ourselves.