Discovery and delivery running in parallel: the dual-track debate
Dec 20, 2024

The most expensive line of code is the one that works perfectly and solves a problem nobody has. It compiles. It passes tests. It ships on time. And it sits there, finished and untouched, while the team moves on to the next thing on the roadmap.
I have written that line of code. I have been on the team that celebrated shipping it. I have watched the Slack channel fill with congratulations for a feature that, three months later, nobody could find in the analytics. Not because it was buried. Because nobody needed it.
But the uncomfortable part is not that we built the wrong thing. The uncomfortable part is how confident we were that it was the right thing.
The sequence trap
Most product teams treat discovery and delivery as phases. First you figure out what to build. Then you build it. The logic feels airtight. How can you build something you have not yet defined? But the problem is not the logic. The problem is the gap between when discovery ends and when delivery finishes. In that gap, assumptions age, contexts shift, and the problem you validated in week one is not quite the same problem by week twelve.
I call this the sequence trap. It is the structural habit of finishing your thinking before starting your building, and then never going back to check whether the thinking still holds.
At Freshworks, we shipped an entire reporting module that took a full quarter to build. The spec was thorough. The engineering was clean. The design was, frankly, quite good. And post-launch analytics showed 8% adoption. Eight percent. Three months of a cross-functional team's time, and more than nine out of ten users never opened the thing.
But here is what made it worse. We had validated the idea. We had talked to people about it. We had run it past internal stakeholders, product leaders, sales engineers, customer success managers. They all had opinions about reporting. Strong opinions. They told us exactly what the reporting module should look like, and we built precisely that.
The problem was who we had validated with. We had validated with people who had opinions about the problem. We had not validated with people who had the problem. Internal stakeholders experience a product through the lens of support tickets and feature requests. Users experience it through the lens of their actual work. Those are different lenses, and they produce different answers.
The reporting module solved a problem that existed in internal conversations. It did not solve a problem that existed in user workflows. And we did not discover that gap until the feature was live, the quarter was spent, and the data came back flat.
Discovery debt compounds silently. You only see it when the thing you built sits there, finished and untouched.
The surgeon analogy
Think about a surgeon who operates first and diagnoses after. The procedure is technically flawless. The incisions are clean, the sutures precise. But the diagnosis comes back after the surgery, and it turns out the operation addressed the wrong organ.
Nobody runs a hospital that way. You diagnose while you prepare. The scans, the bloodwork, the imaging, all of it runs in parallel with surgical preparation. By the time the patient is on the table, you know what you are operating on and why.
But product teams do the equivalent of operating blind every time they commit a quarter of engineering time to a feature validated only by internal consensus. Dual-track agile proposes the alternative: run discovery and delivery at the same time. One track is always learning, the other is always building. What the learning track validates flows into what the building track ships.
It sounds obvious. But in practice, most teams resist it.
The resistance
I mentored a startup founder last year whose team adopted dual-track after reading about it. They tried it for two weeks and nearly abandoned it. The engineers felt like discovery was slowing them down. The designers felt pulled between two tracks with competing demands. The founder told me it felt like they were shipping less, not more.
But I asked them to stick with it. Not because I was certain it would work. Because the alternative, the thing they were already doing, had a track record. And that track record included three features in the previous six months that had shipped to minimal adoption.
Three months in, something shifted. They had killed three ideas during discovery. Not after building them. Before writing any code. One was a pricing page redesign that user testing revealed would actually confuse their highest-value segment. Another was an onboarding flow that assumed new users cared about a feature that only power users valued. The third was a dashboard that the founder himself had wanted, which nobody else found useful. (That one stung. It always does when your own idea is the one that gets cut.)
But here is the result. Every feature they shipped in those three months had measurably higher adoption than anything they had shipped in the previous six. Not because the engineering got better. Not because the designs improved. Because they stopped building things that did not need to exist.
The discovery track had not slowed them down. It had prevented them from wasting the delivery track's time.
The discovery debt
Most teams measure shipping velocity. How many features per quarter. How many story points per sprint. How fast the backlog moves. But nobody measures discovery debt: the accumulating cost of building things that were never validated against real user problems.
Discovery debt is invisible in the metrics teams track. It does not show up as a failed sprint. It shows up as a feature with 8% adoption sitting quietly in the product, maintained by engineers, documented by support, and used by almost nobody. It shows up as the slow erosion of team morale when people start to suspect that what they are building does not matter.
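Making discovery debt visible does not require new tooling, just a willingness to look. As a purely illustrative sketch, with hypothetical event data and an arbitrary threshold, here is one way to surface low-adoption features from a product analytics export: compute the share of active users who ever touched each feature, and flag anything below a cutoff as a discovery-debt candidate.

```python
from collections import defaultdict

# Hypothetical analytics export: (user_id, feature_name) usage events.
events = [
    ("u1", "reporting"),
    ("u1", "dashboard"),
    ("u2", "dashboard"),
    ("u3", "dashboard"),
    ("u4", "dashboard"),
]

# All users active in the period, whether or not they used these features.
active_users = {"u1", "u2", "u3", "u4", "u5", "u6", "u7", "u8", "u9", "u10"}

def adoption_by_feature(events, active_users):
    """Share of active users who used each feature at least once."""
    users_per_feature = defaultdict(set)
    for user, feature in events:
        users_per_feature[feature].add(user)
    return {
        feature: len(users) / len(active_users)
        for feature, users in users_per_feature.items()
    }

rates = adoption_by_feature(events, active_users)

# Features below an arbitrary 20% adoption cutoff are debt candidates:
# built, shipped, maintained, and used by almost nobody.
debt_candidates = [f for f, r in rates.items() if r < 0.20]
```

The event names, user IDs, and the 20% cutoff are all invented for the example; the point is only that a feature sitting at single-digit adoption, like the reporting module above, shows up immediately once someone bothers to run this kind of query.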
But measuring discovery debt requires admitting that shipping is not the same as progress. That a team can ship on time, on scope, and on budget, and still fail. That velocity without direction is just speed in the wrong lane.
The teams that run discovery and delivery in parallel are not slower. They look slower during the first month because some of their capacity is pointed at learning instead of building. But by month three, the difference is visible. They ship less. What they ship matters more. The adoption curves are steeper.
And the engineers stop asking, quietly, in retros, whether anyone is actually using the thing they just spent six weeks building.
There is a question that belongs at the start of every sprint planning session, before stories are estimated, before anyone opens Jira. Do we know this is the right thing to build?
Most teams skip that question. Not because they do not care. Because asking it feels like it slows things down. Because the roadmap says this is next. Because a stakeholder is waiting.
But the quarter will run out regardless. The choice is between spending it on something validated or something assumed. The sequence trap makes that choice invisible. Dual-track makes it explicit.
The teams I admire most are not the ones that ship fastest. They are the ones that learned, sometimes painfully, that the fastest path to a product nobody uses is a team that never stopped to ask whether anyone needed it.