Everyone Says Outcomes. Nobody Measures Them.
Mar 5, 2024

The ProductPlan State of Product report landed a few weeks ago, and the finding everyone is quoting is this one: outcomes over outputs is now the #1 strategic priority for product teams in 2024.
Which is good news. Except for one thing.
I've spent the last several months sitting in roadmap reviews, sprint retrospectives, and quarterly planning sessions with product teams across three different companies. Every single one of them will tell you, with complete sincerity, that they are focused on outcomes. And every single one of them, when you look at what they're actually measuring, is counting features.
The language changed. The behaviour didn't. And in some ways, that's worse than before.
About eight months ago I worked with a product team that decided to take the outcomes shift seriously. Genuinely. The head of product had read the right books, attended the right conference talks, and came back with conviction. They were going to rewrite the roadmap. Not in feature language. In behaviour language. Real outcomes, defined upfront, tied to real user actions.
It took three weeks. It was painful in the right ways. Arguments about what "engagement" actually meant. Debates about whether a metric was measuring the behaviour they cared about or a proxy for it. Good arguments. The kind that make a team sharper.
They shipped the new roadmap to leadership. Everyone approved it. The team exhaled.
Then Q4 arrived.
The quarterly review was the moment I watched the whole thing unravel. Not dramatically. Quietly. Someone put up the metrics slide and the room went still for a moment. The numbers were ambiguous. Not bad, not good. The kind of result that requires interpretation.
And that's when it became clear that nobody in that room had agreed, before the quarter started, on what the outcome actually looked like when it arrived. They had written outcome statements. They had not written success criteria.
The difference is everything.
An outcome statement says: we want users to complete the onboarding flow and reach their first meaningful action within seven days.
A success criterion says: if fewer than forty percent of new users reach that milestone by day seven, we have not solved the problem and we do not move to the next phase.
The first one sounds strategic. The second one is. Because the second one requires you to make a decision before you start building about what failure looks like. And making that decision in advance is the thing most teams cannot bring themselves to do.
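To make the contrast concrete, a success criterion like the one above can be written down as a yes/no check, agreed before the work starts. This is an illustrative sketch only; the 40% threshold and seven-day window are the hypothetical numbers from the example, not a recommendation:

```python
# A pre-registered success criterion: the threshold is fixed before
# the quarter begins, so the answer at review time is yes or no.
# Numbers are the illustrative ones from the example, not a recommendation.
THRESHOLD = 0.40   # minimum share of new users reaching the milestone
WINDOW_DAYS = 7    # days allowed to reach the first meaningful action

def criterion_met(users_reaching_milestone: int, new_users: int) -> bool:
    """Return True only if the pre-agreed threshold was crossed.

    Anything short of the threshold is a no. There is no
    'we made progress' branch, by design.
    """
    if new_users == 0:
        return False
    return users_reaching_milestone / new_users >= THRESHOLD

# 38 of 100 new users hit the milestone by day seven:
# 0.38 < 0.40, so the answer is no, and the team does not
# move to the next phase.
```

The point of writing it this way is that the interpretation happens before the data arrives, not after: an ambiguous quarterly-review number cannot be narrated into a win, because the function only has two return values.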
Here is why. Defining failure upfront means that someone in the room might be wrong. The PM who championed the feature. The designer who spent six weeks on the flow. The engineer who built the component everyone is proud of. Pre-defining failure makes accountability specific. And specific accountability, in most product organisations, is the thing nobody actually wants despite everyone saying they do.
So teams write outcome language on their roadmaps and leave the success criteria vague enough to be interpreted generously at the end. Which means they've done all the work of outcomes thinking and retained all the escape routes of output thinking.
That's not a shift. That's a rebrand.
The team I was working with did what most teams do in that quarterly review. They found a number that was moving in the right direction and built the narrative around it. Not dishonestly. Nobody lied. But the outcome they had written at the start of the quarter had three components, and the one they were reporting on was the easiest one to hit.
I asked the head of product afterward whether they'd achieved the outcome.
She paused for a long time. "We made progress," she said.
That pause was the whole problem.
Progress toward an outcome and achieving an outcome are different things. But in the absence of a pre-agreed definition of success, progress is the story you tell when the result is inconclusive. And the result is almost always inconclusive when nobody agreed on what conclusive looks like.
There is a simple test for whether your team is doing outcomes thinking or outcome language.
Write down the outcome you're building toward. Then write down, in one sentence, what you will observe in user behaviour that will tell you the outcome was achieved. Not a metric range. Not "an increase in." A specific threshold that, if crossed, means yes, and if not crossed, means no.
If you can't write that sentence before the sprint starts, you are not doing outcomes thinking. You are doing output thinking with better vocabulary.
Most teams, when they try this, discover that they genuinely don't know what success looks like until they see the result. Which means they've been retrofitting the definition all along. Which means the shift they announced at the start of the year hasn't happened yet.
It can. But not by rewriting the roadmap.
The roadmap is the easy part. The harder part is sitting in a planning session and agreeing, out loud, on the number that means it didn't work. Most teams have never done that. And until they do, outcomes over outputs will remain the most widely cited, most narrowly practised idea in product strategy.
The report is right. The priority is right. But a priority without a method is just a preference.
And preferences don't show up in user behaviour.


