Stated preferences vs actual behaviour: The feedback trap

May 29, 2025

I once spent three months at Adobe leading a research programme that would, I was absolutely certain, produce the clearest product direction our team had seen in years. We interviewed forty-two enterprise users. We ran surveys. We held workshops with sticky notes and dot voting and all the sacred rituals of user-centred design. The findings were unambiguous: users wanted deep customisation. They wanted control over layouts, workflows, toolbars, and default settings. They wanted the product to bend to how they worked, not the other way around.

So we built it. An elaborate customisation layer. Flexible layouts, configurable panels, adjustable defaults for nearly everything. The engineering team spent months on it. We were proud.

But then we shipped it. And 94% of users never touched any of it. Not a single customisation option. They opened the product, used the defaults, and carried on with their day exactly as they had before we gave them everything they said they wanted.

The research was technically accurate. Users genuinely believed they wanted customisation. They were not lying in those interviews. But the gap between what people say they value and what they actually do when the moment arrives is so wide you could lose an entire product roadmap inside it. We did.

The feedback trap

This is a pattern I have seen repeated at every company I have worked at, and every company I have advised. I call it the feedback trap: the belief that what users tell you in research is a reliable predictor of what they will do in the product. It sounds reasonable. It feels rigorous. But it produces a specific, expensive failure mode. You build with confidence for a user who does not exist.

The fictional user is the person described by your research data. They care about privacy. They want granular control. They read the settings page. They compare options before choosing. They make deliberate, considered decisions about every feature they engage with.

But the real user is late for a meeting, has fourteen browser tabs open, chose the first option that looked reasonable, and has never once opened the settings page of anything on their phone. User feedback is a story users tell about themselves, not a report on what they actually do. And the story is always more flattering than the reality.

This is not dishonesty; it is something more interesting. When you ask someone in an interview what they value, they give you their aspirational self: the version of them that reads ingredient labels, uses a password manager, and would absolutely pay for premium features. But aspiration is not behaviour. The person who tells you they care deeply about data privacy is the same person who clicked "accept all cookies" fourteen times before lunch. Not because they stopped caring, but because caring takes effort, and effort competes with everything else in their day.

What the shopping trolley knows

There is a well-known problem in nutrition research. Ask people what they eat, and they will report a diet that is healthier, more varied, and more intentional than anything their shopping trolley would confirm. The gap exists because the two answers come from different people. One is the person answering the survey. The other is the person standing in the supermarket aisle at six in the evening, tired, hungry, and reaching for whatever is fastest.

Product research has the same problem. The interview room is the survey. The product is the supermarket aisle.

At my first startup, we learned this in the most painful way possible. We had a small but loyal user base, and we did what every responsible product team does: we talked to them. We ran interviews. We asked what features they wanted. Three specific requests came up repeatedly, across dozens of conversations. Build these three things, the users told us, and you will have something special.

So we built all three. We shipped them over the course of two months, each one announced with the confidence of a team that had listened carefully and responded. Adoption across all three was near zero. Not low. Near zero. The features sat in the product like furniture in a room nobody enters.

But here is the part that still bothers me. The features users actually adopted, the ones that drove engagement and retention, were things nobody had asked for. They came from session recordings. From watching what people actually did in the product, where they got stuck, where they abandoned a task, where they repeated the same action three times because the flow did not match their mental model. The friction they experienced but never articulated in an interview was where the real product opportunities lived.

The two research modes

I am not arguing that user feedback is useless. I am arguing that it answers a different question than most teams think it answers. Feedback tells you what users believe about themselves. Behaviour tells you what users actually do. You need both. But if you have to choose (and most teams, with limited resources, do have to choose), behaviour wins every time.

The teams I have seen build the best products treat user interviews as hypothesis generators, not hypothesis validators. What a user says in an interview is a starting point for observation, not a conclusion to design from. You hear "I want customisation," and instead of building customisation, you ask: why do they think they want it? What problem are they actually trying to solve? And then you watch them use the product and see whether the problem they described matches the problem they experience.

But most teams do not operate this way. Most teams treat research findings as requirements. The users said they wanted it. The data supports it. The roadmap is clear. And six months later, the feature sits unused, and nobody can explain why.

Every product I have seen fail because of misread user intent followed this exact sequence. The research was rigorous. The methodology was sound. The team listened carefully. And they built for the fictional user, the aspirational, considered, preference-articulating version of their customers, rather than the real one.

Where the truth lives

The uncomfortable reality is that users cannot tell you what they will do. Not because they are unreliable witnesses, but because they do not know. The gap between intention and action is not a research flaw. It is a feature of how human decision-making works. We overestimate our own rationality, our own consistency, our own willingness to do the thing we said we would do.

The best product researchers I have worked with have a specific habit. They treat every stated preference as a question, not an answer. Someone says "I want more control"? That is not a design brief. That is an invitation to find out what "control" means when the person is tired, distracted, and trying to finish a task before their next meeting.

At Adobe, we eventually removed most of the customisation layer. Not because we stopped believing in user choice, but because we realised that the choice users were actually making, consistently and overwhelmingly, was not to choose at all. The best default is the one that makes the decision for the user so they never have to think about it. That is not removing control. That is understanding what control looks like when it meets real life.

The fictional user wants options. The real user wants to be done.
