Daniel Kahneman and the Continuous Journey to Understanding Others

What Kahneman’s Work Teaches Us About AI, Data Quality, and the One Thing You Can't Yet Fake.

Few are the thinkers whose ideas fundamentally reshape academic disciplines. Rarer still are those whose influence shapes economies, businesses, and the way we make decisions every day. Daniel Kahneman undeniably belonged to this exclusive category. With his passing in March 2024, aged 90, the world lost one of the most influential thinkers in the modern history of economics and psychology, yet his influence remains as powerful as ever.

Today, March 5, 2025, would have been his 91st birthday. A good moment not just to honor his legacy, but to reflect on the weight of his ideas in a world that is changing at an impressive speed. His insights into cognitive biases, uncertainty, and the dual processes of thinking remain profoundly relevant, perhaps even more so now, in an era when artificial intelligence is becoming increasingly difficult to distinguish from real people.

Kahneman's work has been a major inspiration for our own, so we wanted to take this opportunity to explore how his ideas remain more relevant than ever in this fast-moving, AI-driven world. His concerns about overconfidence, data interpretation, and humanity's struggle to keep up with exponential technological change have never been more urgent.

How do we make sure AI sharpens our understanding rather than distorting it? How do we safeguard data quality when speed (and cost cutting) so often takes priority? In this article, we explore these ideas, honoring Kahneman's legacy while addressing the challenges of our present and future.

AI and the Illusion of Validity

In one of his last interviews, given to The Guardian in 2021, Kahneman spoke about how human nature struggles to cope with rapid technological change. He predicted:

"There is going to be massive disruption. Technology is developing very rapidly, possibly exponentially. But people are linear. When linear people are faced with exponential change, they’re not going to be able to adapt to that very easily."

These words have proven prophetic in many ways, of course.

But perhaps one of the most interesting cases is unfolding in the perspective and practice of customer insights, where some organizations in the broader industry are using AI to accommodate the cost-cutting bias of modern business, at the expense of running straight into irrationality.

Kahneman coined the concept of "What You See Is All There Is" (WYSIATI), explaining how people tend to make judgments based only on the available information, often neglecting what they don't see. Its corollary: believing whatever we are exposed to, no matter where it comes from (propaganda). This is one of the main components of System 1 thinking (association, a tendency to believe, and jumping to conclusions).

So it should come as no surprise (at least in hindsight) that people tend to believe the responses of an AI are equal to new information. They are, after all, what we see. What is surprising is to see so many business decision-makers (and, even worse, people in the insights industry) falling for this bias in regard to, of all things, customer perception, through the use of generative AI and synthetic data, especially for deep, non-numerical insights.

Behavioral Science Requires Human Behavior

Quite simply, one of the main purposes of research in business is to promote specialized learning that improves decision making. Only through the addition of new data can we continue to learn. And when it comes to customer insights, if we want to continue learning about people, the data has to come from people.

AI by itself cannot provide more recent understanding until fresh data is entered into the system. And the quality of that data matters, which is why it should come from humans themselves. That is the one thing that can't be faked, yet.

This is where reality draws the line, so far. You can't continue to improve your understanding of people in the real world without people.

This may change when either agents make economic decisions instead of people, or when we live in a Minority Report-type society. By then, people may stop mattering as economic agents, as some people would seem to prefer. But people’s behavior and perception still matter to develop cutting-edge customer understanding.

And I think that is something that Kahneman would have recognized. At its core, Behavioral Economics is about the way we understand others, and how the information we perceive affects our decisions.

And in this case, our decisions about others can miss the mark if they come from extrapolating meaning from unsupported data, which is basically the definition of a heuristic. It doesn't matter whether the reasoning comes from a human or an AI; the bias will remain. That is why it is fundamental to work with sound, constantly updated data to get the most out of this technological era, and not to trust AI blindly, no matter how convincing it seems or how much it supports our existing perception (prompting bias?).

The Importance of High-Quality Data

This is certainly not an attempt to discredit the importance of AI, but to highlight the importance of high-quality, constantly updated data in this day and age. In what seems like an era of data abundance, it would be easy to believe that information will be democratized and every competitor will have the same access to data. That is not going to be the case: smart organizations will continue seeking and updating high-quality data to improve their models and data systems.

Meanwhile, when AI is fed poor-quality data, biased inputs, or incomplete context, it can make the same cognitive errors Kahneman spent a lifetime studying. This creates a paradox: while AI amplifies our ability to process information quickly, it also risks reinforcing flawed decision-making if not grounded in rigorous, high-quality data.

As AI continues to shape industries, including our own field of customer understanding and strategic insights, Kahneman's legacy reminds us of the need to balance speed with careful, reflective thinking. Data quality must be prioritized over data volume. AI-generated insights should be scrutinized, validated, and complemented with human expertise, and grounded in high-quality, human-origin data.

At our company, we apply Kahneman’s principles by ensuring that our AI-driven tools are not just fast but also accurate, reliable, and grounded in high-quality data. We recognize that true understanding—whether of markets, behaviors, or decisions—requires a combination of AI efficiency and human depth.

Honoring Kahneman’s Legacy

If Daniel Kahneman were here to witness the AI revolution, he would likely offer a familiar warning: Beware of overconfidence. Be skeptical of easy answers. And never underestimate the importance of slow, deliberate thinking.

But perhaps he would also recognize the immense potential AI has to complement human intelligence—if used correctly. AI is not a threat to deep thinking; it is an enabler of it when guided by the right principles. By embracing AI’s strengths while mitigating its weaknesses, we can ensure that data serves as a force for better decision-making—not just a faster path to flawed conclusions.

As we navigate this era of rapid technological change, let us honor Kahneman's insights by embracing both the power of AI and the irreplaceable value of thoughtful, deliberate analysis about other people. Human to human, even when AI-enabled, in pursuit of understanding each other better.

Thanks for reading!