AI trust is contextual, not categorical

Sep 20, 2025

The same person who lets an AI suggest their driving route will refuse to let an AI recommend where to invest their money. The same product manager who uses an AI writing assistant without a second thought will demand to review every word of an AI-generated customer communication before it goes out. The same hiring manager who happily delegates resume screening to an algorithm will physically recoil at the suggestion that the algorithm should pick which candidates get interviews.

This is not inconsistency. This is rationality. And it breaks most of the assumptions product teams are currently building on.

The binary assumption

There is a comfortable fiction in product development right now: that users either trust AI or they do not. That trust is a disposition, a setting, something you measure once with a survey and then design for across your entire product. Teams build a single onboarding experience to "establish trust in AI." They run research asking users how comfortable they are with artificial intelligence. They get a number. They treat it as a constant.

But trust in AI is not a constant. It is a variable that shifts with every feature, every decision, every moment where the stakes change. I watched this play out at Grab in the most vivid way I have encountered in my career.

The Grab problem

At Grab, we had AI running across multiple features in the same application. Route optimisation used AI to suggest the fastest path to a destination. Payment features used AI for fraud detection and transaction recommendations. Same users. Same app. Same underlying technology branded the same way.

Users trusted the route suggestions without hesitation. A wrong route costs you ten minutes. You can see the mistake happening in real time. You can override it. The stakes are low, the error is visible, and the decision is reversible. Every condition for easy trust is met.

But payment decisions were a different world entirely. When AI flagged a transaction or recommended a payment method, users pushed back. They wanted to see why. They wanted to override. They wanted a human involved. A wrong payment decision could cost real money, could lock an account, could create a problem that took days to resolve. The stakes were high, the error was invisible until it was too late, and the decision felt irreversible.

Same users. Same day. Completely different trust.

The team initially designed a single "trust in AI" onboarding flow. One set of explanations about how the AI worked, applied uniformly across the product. But it had almost no effect. Not because the explanations were poor. Because explaining AI in general does not address the user's actual concern, which is always specific: what happens if this particular decision is wrong, and can I fix it?

That is when the pattern became clear. I call it the trust gradient. Trust in AI is not a single line on a graph. It is a spectrum, and the position on that spectrum shifts at every point where the stakes change. The question a user is really asking, whether they articulate it or not, is simple: what do I lose if this is wrong, and how easily can I get it back?

The stakes test

I have been advising a product team building an AI-powered hiring tool, and they hit the same wall from a completely different direction. Hiring managers loved the AI for resume screening. Sorting through three hundred applications to surface the top fifty? Brilliant. Let the machine do it. The stakes of a screening error are low: a good candidate might get filtered out, but the hiring manager never sees what they missed, and the process moves forward regardless. The cost of a mistake is invisible, which makes it psychologically cheap.

But when the team proposed that the AI should generate a shortlist of candidates for interviews, the reaction was immediate and visceral. Hiring managers refused. Not because they thought the AI would do a poor job, but because putting a candidate in front of a hiring panel is a high-stakes decision that affects a real person's career. If the AI includes someone who is a poor fit, that is a wasted interview. If it excludes someone who is a strong fit, that is a missed hire. And the person making the recommendation (or in this case, the algorithm) is visible and accountable in a way it was not during the anonymous screening phase.

The team had to build completely different trust mechanisms for different stages of the same workflow. For screening, they could run AI quietly in the background with minimal explanation. Users were comfortable. For shortlisting, they had to show the AI's reasoning, allow human overrides at every step, and frame the output as a suggestion rather than a decision. Two stages of the same process. Two entirely different trust architectures. The cost of treating them as one would have been a product that nobody used past the screening stage.

Altitude and landing

There is a useful way to think about this. Pilots trust autopilot at cruising altitude. The plane is stable, conditions are predictable, corrections are small, and the pilot can take over at any time. But during landing, when the margin for error shrinks, when the consequences of a mistake are irreversible, when environmental variables multiply, most pilots want their hands on the controls. Same system. Same plane. Same pilot. Different stakes.

Trust is not a setting you switch on. It is a gradient you earn at every point of risk.

Product teams building AI features need something I have started calling the stakes test. Before designing the trust layer for any AI-powered feature, ask three questions. What does the user lose if the AI is wrong? How quickly can they detect the error? And how easily can they reverse it? If the loss is small, the error is visible, and the reversal is easy, trust comes cheaply. If the loss is large, the error is hidden, and the reversal is difficult or impossible, trust must be earned with transparency, control, and the visible option to override.

But most teams do not segment their trust design this way. They build one explanation for how the AI works. One onboarding flow. One level of transparency applied uniformly. And then they wonder why users happily adopt some AI features and refuse to touch others within the same product.

The answer is not that some users trust AI and some do not. It is that the same user trusts AI differently depending on what they stand to lose. Every feature has its own trust equation. The teams that understand this build for the gradient. The teams that do not understand it build for a fictional user who either trusts everything or trusts nothing, and that user does not exist.

What trust actually looks like

I have spent twenty years watching people interact with products they did not fully understand, from oil rig operators using interfaces designed by people who had never been on a rig to enterprise buyers evaluating software they would never personally use. Trust, in every case, was not a general feeling. It was a series of specific moments where the product either earned confidence or lost it.

AI does not change that. It just makes the moments more frequent and the stakes more varied within a single product. The team that gets this right is not the one with the best AI model. It is the one that understands, feature by feature, decision by decision, what the user is risking, and builds the right amount of trust for that specific risk.

Trust is always specific. The products that remember this are the ones people keep using.
