You see, too many people are bent under the crushing weight of living up to estimates. Granted, they provided these estimates themselves to begin with, but the continuing focus on this fragile, incomplete, numerical slice of their work is having a seriously detrimental effect on our industry.
I can’t count how many times I’ve seen this play out, in different companies. The poor product manager, even if he’s sometimes called a Product Owner, has made his business case; calculated the expected ROI; probably even spent lots of time plotting the rate of return over time, thinking of ways to track the relevant figures closely so he can adjust the approach as he goes; and shared these ideas with the broadest possible set of people within the company to get as much feedback as he can.
He’s calculated expected additional visitors to the website, impact on conversion and retention rates, and put in place the instruments to verify those numbers. He’s had his numbers checked by his peers, and carefully explained them in detail to the development team he’s expecting the work to be done by. He’s done everything he can to make sure his estimates are as accurate as he can make them.
And then he’s done the additional work to make sure he can not only track whether he’s on track, but has also proposed different scenarios for how to react if he’s not. He’s done all that for all the different features and new products he deals with, and used the expected returns to prioritise work in the way that profits the company most.
Now he knows, our intrepid product manager, that he’s dealing with a complex issue! He’s been careful to include confidence margins, ranges of possible returns, and comments stressing uncertainties and risk factors. He’s done everything right.
So what happens after a few sprints, when the initial (bare minimum) features have seen the light of day and basked in the glory of unadulterated customer scorn? He’s still within the range of expected returns, but he can see that he’ll probably end up at the lower end of that range. At the iteration review he shows the current results to the development team (there are a few managers in the audience as well), shows how things are progressing, where he sees ways to change the feature to get more customers interested, and how the priorities need to be adjusted to make this happen.
Some of the feature, sometimes a lot of it, needs to be cut. Including some functionality already released! After all, no-one is actually using that. And to make it a success, the team will probably have more work to do than was originally expected. In the end, though, they still have a very good chance of making this a success.
And that’s where the trouble hits home. The poor guy is lambasted for not being able to stick to his estimates. He came up with those estimates himself, so he should make them work. After all, the numbers added up, didn’t they? Everyone else is doing their part, and if he’d just put a little effort into it, he really should be able to stick to the plan and ensure that customers start using the new feature as planned. Perhaps more advertising for this feature is what is needed, and budget should be found for some additional effort on that front. Or maybe, since he’s the one not delivering, he should do that advertising work himself over the weekend.
It’s cruel. We, as an industry, should stop holding our product managers to standards that are impossible to reach. We should accept the inherent complexity of the problem, and help them tackle it by planning, tracking, and adjusting the plan as necessary. In fact, if we make the steps and adjustments small and fast enough, we could relieve them of a large part of the yoke of budgeting, automate much of the tracking of adoption, and make them fully recognised, adult members of the organisation!