Your AI Feature Has a Problem. Nobody Is Using It.

Feb 15, 2024

Every January, gyms make a bet. They buy extra equipment, hire extra staff, extend their hours. They spend money they haven't earned yet on a surge they know is coming.

The surge always arrives. Two weeks of packed classes, waitlists for the squat racks, strangers asking if you're using that bench. The gym hasn't felt this alive in eleven months.

But by the third week of February, the gym looks exactly like it did in December. Same regulars. Same empty treadmills in the corner. Same staff wondering what happened to all those members who swore this year was different.

The memberships were real. The commitment was performance.

Spend ten minutes with a product changelog from any technology company in 2023 and you'll find the same pattern. AI features announced with confidence, shipped on schedule, measured with the kind of silence that product teams learn to explain away as an early adoption curve.

It's not an adoption curve. But teams keep treating it like one, which is a more comfortable position than the alternative.

I worked with a team at a mid-sized SaaS company last year that shipped four AI features in eight months. The brief for each one started the same way: "We need an AI play for this part of the product." Not "users are struggling with this specific workflow." Not "we've identified a problem that AI solves better than anything else we've tried."

The brief was the technology, looking for a problem to justify it.

The fourth feature had a name, a launch announcement, and a product walkthrough video that did reasonably well on LinkedIn. Three months later, fewer than six percent of active users had touched it. The team lead delivered the number like a weather report. Unfortunate. Not their fault.

But it was exactly their fault. Just not in the way they thought.

The mistake wasn't building AI features. The mistake was treating "AI" as a brief instead of as an answer to a brief.

Nobody walks into a roadmap session and says "we need a dropdown play for the onboarding flow." Dropdowns are tools. You reach for them when they solve the problem. But in 2023, AI stopped being a tool and became a category of ambition. And ambition, it turns out, is very bad at asking whether the user actually has the problem you're solving for.

Every product team I spoke with last year was running the same internal conversation. Someone senior had returned from a conference, or finished a long read on a Sunday afternoon, and the message coming down was clear: we need to be doing more with this. Teams heard that correctly. But they translated it into "what should we build with AI" when the actual question was "what are our users struggling with, and is AI the right answer."

Those are different questions.

The gap between them is where most of 2023's AI features currently live.

Here is what makes this hard to diagnose from inside a team. The pressure to ship AI features wasn't irrational. The competitive environment was genuinely moving fast. A team that spent Q1 2023 asking whether AI was right for their product looked, from the outside, exactly like a team that was paralysed. There is no visible difference between disciplined validation and being slow. Both look like not shipping.

But there is an internal difference. One has a clear answer to the question: what problem does this solve for the person using it? The other has a launch date and a LinkedIn announcement ready to go.

I ran a workshop with a product team in the second half of last year. Genuinely sharp people who cared about their users. But they'd shipped an AI summarisation feature that nobody was using and they couldn't understand why. Morale was odd. The feature was beautiful. The metrics were not.

We spent forty minutes mapping the actual workflow. Not the one they'd designed for. The one users were actually running day to day. About halfway through, someone said it, quietly: "they don't actually read these documents anyway."

The AI feature summarised documents users weren't reading. Faster access to something nobody wanted was still something nobody wanted.

The feature wasn't wrong. The problem statement was.

This is the reckoning that 2024 is starting to force. Not whether AI belongs in products. It does, and it will, in ways that will be genuinely valuable. But whether the discipline that applies to every other product decision applies equally when the technology is exciting enough to make teams forget to ask the basic questions.

It should. But in 2023, for a lot of teams, it didn't.

The gym analogy holds up further than it first appears. The gyms that retain members past February are not the ones with the best equipment. They're the ones that figured out what people actually struggle with in January. Not the gear they think they need, but the accountability, the structure, the small social contract that makes staying home feel like a decision rather than a default.

Those gyms were thinking about the problem. The others were thinking about the surge.

Product teams that start with what users genuinely struggle with, and then ask whether AI is the right answer, will build things people use. But the teams still asking "what's our AI play" will have excellent changelogs and very quiet dashboards.

The infrastructure will be real. The commitment, from the user, won't be.
