Outcome-based pricing: Paying for results, not access
Apr 27, 2025

There are two kinds of lawyers in the world. The first bills you for every hour spent on your case, win or lose. The second takes nothing upfront and charges only if you win. Same profession. Same courtroom. Completely different relationship with the person paying.
The hourly lawyer has no structural incentive to resolve your problem quickly. Every extra hour is revenue. The contingency lawyer, on the other hand, only eats if you eat. Their incentive is perfectly aligned with yours: get the result, get paid. Skip the result, get nothing.
That is not a legal distinction. That is a pricing architecture distinction. And it is the exact fault line running through software right now.
The access trap
For most of its history, software has been sold like the hourly lawyer. You pay for the right to use it. Seats, licences, monthly fees. Whether the software actually solves your problem is, commercially speaking, your concern. The vendor's obligation ends at the login screen.
I spent years inside this model and never questioned it. At Boeing, we built fleet management tools for aviation clients. The software tracked maintenance schedules, parts inventories, operational readiness. But here is what I noticed in every single client conversation: nobody talked about the software. They talked about aircraft uptime. They talked about how many planes were operational on a given morning. They talked about the cost of a grounded aircraft sitting on the tarmac, burning money at several thousand dollars per hour.
The software was a means. Uptime was the end. But we charged for the means.
That is what I call the access trap. It is comfortable because nobody has to prove anything worked. The vendor delivers the tool. The buyer uses the tool. Whether the tool actually produces the outcome both parties presumably care about is a question that lives in the gap between them, unanswered and unpriced.
But the gap is closing.
The resolution economy
Intercom's decision to charge per resolved support ticket is the clearest signal of what is shifting. Not per seat. Not per message sent. Per resolution. The customer pays when the AI actually fixes the problem. If it does not fix the problem, the customer pays nothing.
This is not a pricing tweak. It is a structural inversion. The vendor now carries the risk that used to sit entirely with the buyer. And that changes everything about how the product gets built, measured, and improved.
When you charge per resolution, you cannot afford ambiguity about what a resolution is. You cannot afford a product that sort of helps. You cannot afford to ship features that look good in a demo but collapse under real conditions. Every unresolved ticket is revenue you did not earn. The feedback loop between product quality and commercial outcome becomes immediate and unforgiving.
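The inversion is easy to see in miniature. Here is a hypothetical sketch of the two billing models side by side; the function names, the ticket shape, and the prices are my own illustration, not any vendor's actual API:

```python
from dataclasses import dataclass

@dataclass
class Ticket:
    resolved: bool  # did the product actually fix the problem?

def bill_per_seat(seats: int, price_per_seat: float) -> float:
    """Access model: revenue is independent of whether anything worked."""
    return seats * price_per_seat

def bill_per_resolution(tickets: list[Ticket], price_per_resolution: float) -> float:
    """Outcome model: every unresolved ticket is revenue not earned."""
    return sum(price_per_resolution for t in tickets if t.resolved)

tickets = [Ticket(resolved=True), Ticket(resolved=False), Ticket(resolved=True)]
print(bill_per_seat(seats=10, price_per_seat=50.0))             # 500.0, whatever happened
print(bill_per_resolution(tickets, price_per_resolution=0.99))  # 1.98: only the two resolutions pay
```

The interesting line is the filter in `bill_per_resolution`: the vendor's revenue function now contains the customer's success condition.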
Access was the old currency. Outcomes are the new one. But outcomes require something access never did: proof.
The measurement problem nobody wants to talk about
Here is where the theory gets uncomfortable.
A few years ago, I was mentoring a founder building a hiring platform. She had a sharp instinct: charge companies only when a candidate they hired through the platform stayed past the 90-day mark. Pay for the outcome, not the access. Elegant on a whiteboard.
But the questions started immediately. What if the candidate left because the company had a terrible onboarding process? What if the hiring manager changed their mind about the role? What if the candidate got a better offer from somewhere else? The "outcome" her platform was pricing against depended on dozens of variables she could not control.
She spent four months trying to define "successful hire" in a way that was fair to both sides. She never found a definition that survived contact with reality. Eventually, she moved to a hybrid model: a base fee for access, with a bonus tied to retention. Not pure outcome-based. But honest about the limits of what she could measure and guarantee.
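Her eventual compromise fits in a few lines. A hypothetical sketch, assuming a flat base fee plus a bonus that only pays out past the 90-day mark; all the names and numbers here are invented for illustration:

```python
def hiring_fee(days_retained: int,
               base_fee: float = 2_000.0,
               retention_bonus: float = 5_000.0,
               retention_threshold_days: int = 90) -> float:
    """Hybrid pricing: a guaranteed access component plus an outcome component.

    The base fee hedges against outcomes the platform cannot control
    (bad onboarding, a changed role, a better offer elsewhere); the bonus
    keeps part of the vendor's revenue tied to the result.
    """
    fee = base_fee
    if days_retained >= retention_threshold_days:
        fee += retention_bonus
    return fee

print(hiring_fee(days_retained=45))   # 2000.0: candidate left early, base fee only
print(hiring_fee(days_retained=120))  # 7000.0: retention bonus triggered
```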
That is the lesson most outcome-based pricing advocates skip past. When the outcome is clean and attributable (a support ticket is resolved, a flight is on time, a transaction completes), outcome pricing works beautifully. But when the outcome is fuzzy, shared, or influenced by factors outside the vendor's control, it creates a measurement problem that can poison the relationship it was supposed to improve.
The access trap is comfortable because nobody has to prove anything worked. But the outcome trap is a different kind of danger: it forces you to prove something you might not fully control.
What this means for product teams
If you are a product leader thinking about outcome-based pricing, the first question is not commercial. It is architectural. Can your product reliably measure the outcome you want to price against? Not approximately. Not with caveats. Reliably.
At Boeing, the answer was surprisingly clear. Aircraft uptime is measurable, timestamped, and attributable. A fleet management system that demonstrably improved uptime by even a small percentage could justify a price tied to that improvement. The maths was legible to everyone in the room.
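That legibility can be made concrete. A hypothetical sketch of uptime-linked pricing; the baseline, the measured uptime, the downtime cost, and the vendor's share of value are all invented for illustration, not Boeing's actual numbers:

```python
def uptime_linked_fee(baseline_uptime: float,
                      measured_uptime: float,
                      fleet_hours: float,
                      cost_of_downtime_per_hour: float,
                      vendor_share: float = 0.2) -> float:
    """Charge a share of the downtime cost the software demonstrably avoided."""
    improvement = max(0.0, measured_uptime - baseline_uptime)  # no regression, no fee
    value_created = improvement * fleet_hours * cost_of_downtime_per_hour
    return vendor_share * value_created

# One percentage point of uptime across 10,000 fleet-hours at $5,000/hour of downtime:
print(uptime_linked_fee(0.92, 0.93, fleet_hours=10_000, cost_of_downtime_per_hour=5_000))
```

Every input in that function is timestamped, attributable data; that is what makes the maths legible to everyone in the room.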
But most products do not have outcomes that clean. A project management tool makes teams more productive. Probably. A design tool helps designers work faster. Maybe. An analytics platform helps companies make better decisions. Possibly. The further you move from a concrete, measurable, attributable outcome, the harder outcome-based pricing becomes.
This is why the shift is happening fastest in AI products. AI-driven customer support (like Intercom's model) produces a binary, trackable result: the ticket was resolved or it was not. AI-driven code generation can measure whether the code compiles and passes tests. The outcome is not a feeling. It is a data point.
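For code generation, the billable event can be defined mechanically. A minimal sketch, where `compiles` and `tests_pass` stand in for whatever real checks a vendor would actually run:

```python
from dataclasses import dataclass

@dataclass
class GenerationResult:
    compiles: bool
    tests_pass: bool

def is_billable(result: GenerationResult) -> bool:
    """The outcome is not a feeling, it is a data point: both checks must pass."""
    return result.compiles and result.tests_pass

print(is_billable(GenerationResult(compiles=True, tests_pass=True)))   # True
print(is_billable(GenerationResult(compiles=True, tests_pass=False)))  # False
```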
But for every product where the outcome is a data point, there are ten where it is a judgment call. And judgment calls make terrible pricing metrics.
The honest middle ground
The products that will get this right in the next few years will not be the ones that go fully outcome-based overnight. They will be the ones that build measurement into the product from the start, treating attribution and tracking as first-class product problems rather than afterthoughts for the finance team.
They will also be the ones honest enough to admit where their influence ends. A support AI can own the resolution. A hiring platform cannot own the retention. Knowing the difference between those two is not a pricing decision. It is a product maturity decision.
The resolution economy is not replacing the access economy everywhere. It is replacing it where outcomes are clean, attributable, and worth more to the buyer than the access itself. That is a smaller territory than the hype suggests. But it is growing.
The real question is not whether your product can charge for outcomes. It is whether your product can prove them. And for most teams, that question has never been asked aloud, because the access model never required it.
Now it does.