Misaligned incentives make cross-functional teams structurally dysfunctional
Jul 20, 2024

Every team hit its target last quarter. The product still underperformed.
If that sentence does not sound familiar, you have not been paying attention. Or you have been fortunate enough to work at organisations where the incentive architecture was designed by someone who understood systems rather than spreadsheets. Most of us have not been that fortunate.
The conventional explanation for cross-functional dysfunction is communication. Teams do not talk enough. Teams do not share enough context. Teams do not align frequently enough. But I have spent twenty years watching product teams fail, and the communication explanation is almost never the real one. Teams communicate plenty. They share Slack channels, attend each other's standups, and sit through quarterly alignment presentations that nobody believes but everyone applauds.
The real problem is simpler and harder to fix. The teams are measured on different things. And when teams are measured on different things, they optimise for different outcomes. Not because they are selfish. Because they are rational. The incentive structure is the strategy, whether anyone intended it to be or not.
Act One: the illusion of performance
At Freshworks, I watched a quarter unfold that taught me more about organisational design than any book ever has. Engineering shipped forty features. Forty. The engineering lead presented the number with earned pride. Deployment frequency was up. Release velocity had improved. The team was performing.
But product was struggling to explain why none of those forty features had moved the commercial metrics. Activation was flat. Retention was flat. Expansion revenue had not budged. The features had shipped. They had not mattered.
And then there was design. User satisfaction scores on the new features were strong. People who used the features liked them. But the features that scored highest on satisfaction were also the features with the lowest adoption. Design had optimised for delight on things almost nobody encountered. High satisfaction on features nobody used is a peculiar kind of success.
I sat in the quarterly review watching three teams present three sets of metrics, all green. Every team had hit its number. The room felt good. But the product had not grown. Revenue had not moved. The user base was not healthier.
Everyone succeeding was the clearest sign the system was broken.
I call this the local success trap. It is the organisational condition where every function achieves its individual goals while the product, which exists at the intersection of all those functions, fails to improve. The problem is not that teams are underperforming. The problem is that performance is measured in dimensions that do not connect to outcomes that matter.
Act Two: when the dashboards lie
The local success trap is not a communication problem. It is a measurement problem. And measurement problems are incentive problems, because people do what they are measured on.
Engineering is measured on deployment frequency, so engineering optimises for shipping. Design is measured on user satisfaction, so design optimises for the experience of the features that exist, regardless of whether those features are the right ones. Product is measured on feature delivery against the roadmap, so product optimises for checking boxes on a plan that may or may not be connected to business outcomes.
Nobody is being irrational. Everyone is doing exactly what the system rewards them for doing. But the system rewards the wrong things. And because each team's metrics are green, nobody feels any urgency to question whether the metrics themselves are the problem. Yet that is precisely the question that needs asking.
At Boeing, I worked on an aviation fleet management product where this dynamic played out with consequences more serious than a flat quarter. Safety engineering, software engineering, and operations each operated with different success metrics. Safety cared about compliance scores. Software cared about uptime and release velocity. Operations cared about fleet utilisation rates.
A software update went out that looked, by every internal measure, like an improvement: the interface responded faster, the release was smooth, uptime was unaffected. But the update had introduced a subtle change in how maintenance alerts were prioritised. Operations did not catch it because their utilisation numbers remained healthy. Safety did not catch it because compliance checks were still passing. And software certainly did not catch it, because by their measures the update was a clean win.
Nobody discovered the problem until the client reported it. Their maintenance team had noticed that alerts for a specific category of inspection were appearing later than expected. Not dramatically later. Just enough to compress the response window in a way that made experienced operators uncomfortable.
Every dashboard in our building showed green. The client's operations team, working with the actual aircraft, saw something different. That gap between what the metrics said and what the world showed is where the real damage happens. It is also where the local success trap is hardest to see, because the internal evidence says everything is fine.
I call this second pattern the metric silo. It is what happens when each team's measurement system is designed to reflect that team's performance in isolation, without any mechanism to detect whether the combined output is coherent. Three runners on a relay team can each run a personal best and still lose the race. Not because they were slow. Because they were running in different directions.
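If it helps to see the failure mode in miniature, here is a sketch of the dynamic in code. Every function name, field, and threshold below is invented for illustration; none of it is drawn from the real system.

```python
# Hypothetical stand-ins for the three silo dashboards in the story
# above; every name and threshold here is invented for illustration.

def software_check(release: dict) -> bool:
    return release["uptime"] >= 0.999 and release["release_smooth"]

def safety_check(release: dict) -> bool:
    return release["compliance_passed"]

def operations_check(release: dict) -> bool:
    return release["fleet_utilisation"] >= 0.85

def end_to_end_check(release: dict) -> bool:
    # The cross-silo invariant nobody owned: an inspection alert must
    # still leave operators a comfortable response window.
    return release["hours_left_when_alerted"] >= release["min_response_hours"]

release = {
    "uptime": 0.9995, "release_smooth": True,  # software: green
    "compliance_passed": True,                 # safety: green
    "fleet_utilisation": 0.91,                 # operations: green
    "hours_left_when_alerted": 20,             # alerts now fire later...
    "min_response_hours": 24,                  # ...eating the comfort margin
}

silo_checks = (software_check, safety_check, operations_check)
print(all(check(release) for check in silo_checks))  # True: every silo green
print(end_to_end_check(release))                     # False: the whole is not
```

Three checks pass and the combined output still fails, because the invariant that mattered was not any silo's job to watch.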
Act Three: redesigning what you measure
The fix for the local success trap is not better communication. It is not more alignment meetings. It is not another quarterly review where every team presents its own metrics and everyone nods. The fix is redesigning the incentive architecture so that the metrics each team is held accountable for are connected to a shared outcome.
But this is where it gets uncomfortable. Shared outcome metrics mean that engineering cannot celebrate deployment frequency if the features deployed did not move adoption. Design cannot celebrate satisfaction scores if the features scoring well are not the ones driving retention. Product cannot celebrate roadmap completion if the completed roadmap did not produce commercial results.
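To make the shape of that gate concrete, here is a minimal sketch. The metric names and verdict strings are mine, hypothetical, not anyone's real scorecard; the point is only that no local green counts until the shared number moves.

```python
from dataclasses import dataclass

@dataclass
class TeamMetric:
    team: str
    name: str          # e.g. "deployment frequency"
    target_hit: bool   # what the team's own dashboard says

def quarter_verdict(local_metrics: list[TeamMetric],
                    shared_outcome_moved: bool) -> str:
    """No team's green counts unless the one shared number moved."""
    all_green = all(m.target_hit for m in local_metrics)
    if all_green and not shared_outcome_moved:
        return "local success trap: every dashboard green, product flat"
    if shared_outcome_moved:
        return "shared success: the number everyone owns moved"
    return "plain miss: a team and the product both fell short"

# The quarter from Act One: three green local dashboards,
# a flat shared outcome.
quarter = [
    TeamMetric("engineering", "deployment frequency", True),
    TeamMetric("design", "user satisfaction", True),
    TeamMetric("product", "roadmap completion", True),
]
print(quarter_verdict(quarter, shared_outcome_moved=False))
# -> local success trap: every dashboard green, product flat
```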
Shared metrics create shared discomfort. And most organisations prefer distributed comfort to shared discomfort, which is why the local success trap persists even after everyone in the room has agreed it is a problem.
The hardest conversation I have had in my career was not about a product decision. It was telling a team lead that their team's metrics, the ones that showed strong performance, were not measuring anything that mattered. The team was not failing; the metrics had been chosen to be satisfiable. Green dashboards are not proof of health. Sometimes they are proof only that the metrics were easy to satisfy.
But here is what I have also learned. The teams that get this right, the ones that tie engineering, design, and product to a shared commercial or user outcome, are not just more effective. They are more honest. Because when everyone in the room is looking at the same number, there is nowhere to hide. No team can claim success while the product stagnates. No dashboard can be green while the user is struggling.
But that honesty is expensive. It requires product leaders willing to be measured on outcomes they do not fully control. It requires engineering leaders who accept that shipping is not the finish line. It requires design leaders who care about whether the right thing was designed, not just whether the designed thing was liked.
Most organisations, though, will not do this. Not because they do not understand it, but because the local success trap is comfortable, and shared accountability is not.
The product teams that build things worth using will always be the ones that chose the harder metric. Not the one that made the dashboard green. The one that told them the truth.