Workflow redesign, not tool adoption, is what produces AI value for teams
Feb 28, 2025

In the 1980s, a wave of American manufacturers bought the same robotic welding systems that Toyota used. The machines were identical. The specifications matched. The per-unit output of the robots was the same. But the American factories installed the new robots on their existing assembly lines, in the same sequence, with the same handoff points, feeding into the same quality inspection process they had used for decades. Productivity improved by maybe fifteen per cent. Toyota, using the same machines, had improved productivity by multiples. The difference was not the machines. It was that Toyota had redesigned the entire line around what the machines could do. The Americans had added fast machines to a slow process. Toyota had built a new process around fast machines.
That is the exact pattern playing out in product teams right now.
The tool trap
Organisations are adopting AI tools at extraordinary speed. By most estimates, the majority of product teams now use AI in some form: for code generation, for copy, for research synthesis, for design exploration, for test automation. The tools are everywhere. But the data tells a quieter, more uncomfortable story. Only about 39% of organisations report seeing enterprise-level financial impact from their AI investments. The rest have the tools. They have the licences. They have the training workshops and the internal champions and the executive sponsor who gave the keynote at the all-hands meeting. What they do not have is a different way of working.
I call this the tool trap. The tool trap is the belief that adopting a better tool will produce a better outcome without changing the process the tool operates within. It is the manufacturing problem, restated for knowledge work. You buy a faster machine and run it on the same slow line.
Three tools, same bottleneck
At Freshworks, I watched a product team adopt three AI tools in a single quarter. One for code generation, one for user research synthesis, one for generating design variations. The tools were good. The team learned them quickly. Individual tasks got faster. A research report that used to take two days took four hours. A first-pass design that took a week now took two days. Code that required a full sprint to scaffold could be generated in hours.
But the output of the team improved by maybe ten per cent.
The code was generated faster, but it still went through the same three-stage review process with the same two approvers who met once a week. The research synthesis was faster, but it still entered the same prioritisation meeting with the same stakeholders arguing over the same backlog. The design variations were faster, but they still waited in the same approval chain that had been designed when producing one variation took a week.
The tools were faster. The process was the speed limit.
Every gain the tools created was absorbed by a process designed for a slower world. The team had not adopted AI badly. They had adopted it into a structure that could not use the speed. It was like putting a racing engine into a car with the parking brake on. The engine is not the problem. But the car is not going anywhere.
Redesigning the line
The teams seeing real returns from AI are not the ones with the best tools. They are the ones that asked a fundamentally different question. Not: how do we make our current work faster? Instead: if we were building this workflow from scratch, knowing what AI can do, what would it look like?
That question produces a different kind of answer. It eliminates steps rather than accelerating them. It changes who does what. It restructures the sequence entirely.
I experienced this firsthand when building productnatives. My original writing and research workflow had seven steps: identify the topic, research the existing conversation, outline the argument, write the first draft, revise, fact-check references, and edit. When I started using AI tools, I initially used them to speed up steps two and four. Research was faster. Drafting was faster. But the overall quality and speed of the output did not change as much as I expected. The bottleneck was not the research or the drafting. It was the sequence itself.
So I redesigned the workflow from scratch. I eliminated three steps entirely and replaced them with a different sequence built around what AI could actually do well: hold context across multiple sources, generate structural alternatives quickly, and identify gaps in an argument before I had finished making it. The new workflow does not look like the old one made faster. It looks like a different process. And the output is meaningfully better, not because the tool changed but because the process changed.
That is what I call the workflow dividend. The workflow dividend is the compounding return you get when you redesign a process around AI's actual capabilities rather than layering AI onto an existing process. The dividend does not come from the tool. It comes from the redesign.
Why teams resist the redesign
But most teams do not redesign. They adopt. And there are reasons for that. Redesigning a workflow is uncomfortable. It means admitting the current process has inefficiencies that people have been working around for years. It means changing responsibilities, which changes power dynamics, which changes who feels essential. It means facing the possibility that three of the seven steps in your process exist only because they always existed, not because they add value.
But there is a deeper reason. Tool adoption is visible. You can report it. You can put it in a quarterly review: "We adopted three AI tools this quarter." Workflow redesign is harder to measure and harder to announce. Nobody gets promoted for saying "I eliminated the weekly design review because AI-generated variations made it redundant." That sounds like you removed something. It does not sound like progress, even when it is.
Yet the gap between tool adoption and workflow redesign is where nearly all the value lives. The 39% of organisations seeing real financial impact from AI are not there because they bought better tools. They are there because they rebuilt the line.
The question that matters
The Freshworks team eventually got there. After a painful quarter of faster tools and unchanged results, someone (not me, to be clear; I take no credit for this) asked the obvious question: what if we stopped feeding AI output into a process designed for human speed? Within two months, they had restructured the review cadence, collapsed two approval stages into one, and changed the sprint structure to match the pace the tools actually enabled. Output improved by a factor that made the previous ten per cent look like a rounding error.
But the tools had not changed. Not one new licence. Not one new feature. The process changed, and everything moved.
Product teams keep asking which AI tools to adopt. The better question is which workflows to redesign. The tool is never the bottleneck. The process around it always is.