How to monetise AI features without destroying margins

Mar 19, 2025

A founder I mentor called me in January, genuinely excited. She had shipped an AI-powered analysis feature inside her B2B product. Users loved it. Engagement was up. She was calling to celebrate.

Then she showed me the numbers.

The AI feature was the most popular thing in the product. It was also the most expensive thing to run. Every time a user triggered the analysis, an inference call hit her cloud bill. The subscription price had not changed, because the AI feature was bundled into the existing plan. She had given away the most expensive capability in her product for free, and users were doing exactly what you would hope they would do with something valuable: using it constantly.

The more successful the feature became, the less money the business made. I sat on that call thinking: I should have told her this three months ago.

The margin paradox

This is a pattern I am seeing everywhere, and it has a shape worth naming. I call it the margin paradox. The better your AI feature works, and the more users adopt it, the worse your unit economics become. In traditional software, usage scaling was almost free. Server costs existed, but they were flat and predictable relative to revenue growth.

But AI features are not traditional software. Every inference call has a real, measurable cost. Those costs do not flatten the way traditional infrastructure costs do. They scale linearly with usage, and sometimes worse than linearly depending on model complexity.

The Monetization Monitor data tells the story plainly. Roughly 80% of companies have shipped AI features. But 70% say the delivery costs are undermining profitability. The industry shipped first and asked about costs second. Now the bill is arriving.

The inference tax

At Schneider Electric, years before the current AI wave, I encountered a version of this problem in a different costume. We were building an IoT thermostat product where compute happened partly on device and partly in the cloud. Every connected device generated data that required processing. And the cost of that processing was not a rounding error. It was a line item that grew with every device we sold.

But nobody had modelled what would happen to the cost per user at scale. When we did the maths, the answer was sobering. There was a threshold beyond which every new user made the product less profitable than the one before.
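The maths behind that threshold is simple enough to sketch. Here is a toy version in Python — every number is made up for illustration, not Schneider's actual figures — showing how a flat subscription price plus a cost that scales with usage produces a break-even point beyond which each additional unit of usage loses money:

```python
# Illustrative unit economics: flat subscription revenue against a
# serving cost that scales with usage. All numbers are hypothetical.

def monthly_margin(price_per_user: float,
                   fixed_cost_per_user: float,
                   cost_per_call: float,
                   calls_per_user: int) -> float:
    """Contribution margin per user per month."""
    return price_per_user - fixed_cost_per_user - cost_per_call * calls_per_user

price = 30.0     # flat monthly subscription (assumed)
fixed = 4.0      # hosting, support, etc. per user (assumed)
per_call = 0.02  # cost of one processing/inference call (assumed)

# Usage level at which a user's margin hits zero.
break_even_calls = (price - fixed) / per_call
print(f"Break-even at {break_even_calls:.0f} calls per user per month")

for calls in (100, 500, 1300, 2000):
    print(calls, "calls ->", round(monthly_margin(price, fixed, per_call, calls), 2))
```

With these numbers, the break-even sits at 1,300 calls a month; the user making 2,000 calls costs the business money every month they stay subscribed.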

I started calling this the inference tax: the cost that scales with every unit of intelligence your product delivers. For the IoT thermostat, it was processing cycles per device. For today's AI features, it is tokens per query, model calls per session, API fees per inference. The label changes. The structural problem does not.

The founder I was mentoring hit this wall faster than we did at Schneider, because AI inference costs are higher per unit than IoT processing ever was. But the shape of the problem is identical. You built something users want. The more they use it, the more it costs you. And your pricing model was designed for a world where usage was essentially free.

Think of a restaurant that adds a truffle menu. The chef is proud of it. Customers come specifically for the truffle risotto. But truffles are expensive. Every plate served costs more to prepare than the restaurant charges. The regular menu subsidises the truffle menu, and the more popular it becomes, the worse the overall margin gets.

This is what most companies have done with AI features. They built the truffle menu, priced it as part of the regular menu, and hoped the prestige would compensate for the cost. Prestige does not pay cloud bills.

The cost of intelligence is the new cost of goods sold. And most product teams have not internalised this yet because they come from a world where the marginal cost of software was effectively zero. In that world, bundling more features into an existing subscription was always the right move. More value, same price, better retention. But in a world where features have real marginal costs, that instinct is destructive.

What the pricing has to do

If you are sitting with this problem (and statistically, you probably are), the path forward is not a single model. It is a set of decisions your product team and finance team need to make together.

The first decision is visibility. You need to know, at the feature level, what your AI capabilities cost to serve. Most teams I talk to cannot answer this. They know their total cloud spend. They do not know what a single AI-assisted search costs them. Without that visibility, every pricing decision is a guess.
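Getting that visibility does not require sophisticated tooling to start. A back-of-the-envelope estimate per call already gets you out of guessing territory. A minimal sketch, with token prices that are hypothetical placeholders (substitute your own provider's rate card):

```python
# Back-of-the-envelope cost of a single AI-assisted request.
# Token prices below are assumed placeholders, not any provider's real rates.

PRICE_PER_1K_INPUT = 0.003   # $ per 1,000 input tokens (assumed)
PRICE_PER_1K_OUTPUT = 0.015  # $ per 1,000 output tokens (assumed)

def cost_per_call(input_tokens: int, output_tokens: int) -> float:
    return (input_tokens / 1000 * PRICE_PER_1K_INPUT
            + output_tokens / 1000 * PRICE_PER_1K_OUTPUT)

# One "AI-assisted search": a 2,000-token prompt and a 500-token answer.
per_call = cost_per_call(2000, 500)

# Scale it to a heavy user: 40 searches a day, 22 working days a month.
per_user_month = per_call * 40 * 22
print(f"${per_call:.4f} per call, ${per_user_month:.2f} per heavy user per month")
```

Even this crude estimate answers the question most teams cannot: what one use of the feature costs, and what that implies per user per month.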

The second decision is segmentation. Some users trigger inference calls hundreds of times a day. Others barely touch them. Flat pricing across both groups means your most engaged customers are your least profitable ones.
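The same arithmetic, split by segment, makes the point concrete. With hypothetical numbers (a flat plan price and an assumed serving cost per call), the heaviest segment can be deeply unprofitable while the long tail subsidises it:

```python
# Flat-price plan against usage-skewed segments. Numbers are illustrative.

PLAN_PRICE = 30.0       # flat monthly subscription (assumed)
COST_PER_CALL = 0.0135  # assumed serving cost per inference call

# Hypothetical usage distribution across the customer base.
segments = {
    "light (70% of users)": 50,    # calls per month
    "medium (25% of users)": 400,
    "power (5% of users)": 4000,
}

for name, calls in segments.items():
    margin = PLAN_PRICE - COST_PER_CALL * calls
    print(f"{name}: {calls} calls -> margin ${margin:.2f}")
```

The light users carry a healthy margin; the power users, the ones most in love with the feature, are the ones bleeding it.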

The most dangerous AI feature is the one your users love and your margins cannot survive.

The third decision is the pricing architecture itself. You can meter AI usage directly. You can tier access so heavier usage requires a higher plan. You can price AI features as a distinct add-on. Or you can build outcome-based pricing where the customer pays when the AI delivers a measurable result, not when it runs.
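These architectures are cheap to prototype before you commit to one. A sketch of the first two combined — tiers with an included AI-usage allowance plus metered overage — where the plan names, allowances, and rates are all hypothetical:

```python
# Tiered plans with an included AI-usage allowance and metered overage.
# Plan names, allowances, and rates are hypothetical.

PLANS = {
    "standard": {"price": 30.0, "included_calls": 200,  "overage_rate": 0.03},
    "premium":  {"price": 90.0, "included_calls": 1500, "overage_rate": 0.02},
}

def monthly_bill(plan: str, calls_used: int) -> float:
    p = PLANS[plan]
    overage_calls = max(0, calls_used - p["included_calls"])
    return p["price"] + overage_calls * p["overage_rate"]

print(monthly_bill("standard", 150))  # within the allowance: base price only
print(monthly_bill("standard", 500))  # 300 overage calls billed on top
print(monthly_bill("premium", 500))   # larger allowance absorbs the usage
```

The structure lets light users keep a flat, predictable bill while heavy usage either pays its way through overage or pushes the customer to the tier whose price reflects their cost to serve.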

But here is the part most teams skip. The pricing architecture has to reflect the cost structure, not just the value perception. If customers expect flat pricing and predictable bills, metering can destroy the experience. The right model aligns three things simultaneously: what the feature costs you, what the customer is willing to pay, and how the customer wants to pay.

Getting one of those three right is easy. Getting all three right is the actual job.

What Schneider taught me that still applies

At Schneider, we eventually found our way to a model that worked. We tiered the product so that compute-intensive features sat in a premium plan with pricing that reflected the actual cost of serving them. We did not apologise for it. The customers who valued that depth paid for it. The customers who did not stayed on the standard plan and remained profitable.

But the real lesson was earlier than the pricing fix. It was the moment the product team and the finance team sat in the same room and looked at the same numbers. That meeting had not happened before. Product had been building for the user. Finance had been watching the margin. But neither had the full picture until they sat together.

I told the founder the same thing. Sit with whoever holds the cost data and look at the per-feature numbers together. The conversation will be uncomfortable, because you will learn that some of the features you are most proud of are the ones the business can least afford to keep giving away.

That is not a reason to kill those features. It is a reason to price them correctly.

The discipline underneath

The companies that will get AI monetisation right are not the ones with the cleverest pricing models. They are the ones that accept a truth the traditional software industry never had to confront: the cost of serving intelligence scales with usage, and pricing must account for this or the business will slowly bleed.

Every previous era of software let product teams treat margin as someone else's problem. AI does not. The inference tax is real, it compounds, and it does not care how elegant your product is.

The question is not whether your AI features are good enough to charge for. It is whether you have the discipline to charge what they cost before the margin paradox becomes the margin collapse.
