Big projects more often fail because of poor evaluation than poor execution. But many organizations focus on improving only the latter. As a result, they don’t identify the projects that pose the greatest risks of delay, budget overruns, missed market opportunities or, sometimes, irreparable damage to the company and careers, until it’s too late.
It would be easy to point the finger at poor execution. After all, the problems surface as cost and schedule overruns. But overruns are just symptoms of the real problem: poor estimation of project size and risk. As a result, organizations take on projects they shouldn't and under-budget the ones they should. Here are the two drivers of failure and how to avoid them.
- Scope and Risk Estimates Are Sourced from Project Advocates.
In a project’s earliest stages, very little is known about what it will take to execute it. So most companies seek out expert internal opinions, usually from the project’s stakeholders, since they know the project best. The problem is bias. Research consistently shows that project stakeholders favor optimistic outcomes and produce dangerously inaccurate estimates.
One simple way to expose the impact of bias is with something we call the project estimate multiplier (PEM). It’s simply the ratio of average actual costs to average original estimates across completed projects. The larger the PEM, the greater the impact bias has on your estimating function.
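As a sketch of the arithmetic (the function name and the cost figures below are illustrative, not from the article), the PEM can be computed from two matched cost histories:

```python
def project_estimate_multiplier(estimates, actuals):
    """Return average actual cost divided by average original estimate.

    A PEM of 1.0 means estimates match outcomes on average; values
    above 1.0 indicate a systematic bias toward under-estimation.
    """
    if not estimates or len(estimates) != len(actuals):
        raise ValueError("need matching, non-empty cost histories")
    return (sum(actuals) / len(actuals)) / (sum(estimates) / len(estimates))

# Example: three completed projects estimated at $100k, $250k, $400k
# that actually cost $130k, $300k, $620k.
pem = project_estimate_multiplier([100, 250, 400], [130, 300, 620])
print(round(pem, 2))  # 1.4 -- actuals ran 40% over estimates on average
```

Tracking this single number over time also shows whether changes to the estimating process are actually reducing bias.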
- “Big Bet” Risks Are Evaluated the Same Way Smaller Projects Are.
According to research, the risk of failure is four times greater for large projects, and they exceed their already-large budgets by three times as much as smaller projects — enough to cost jobs, damage careers and even sink companies.
Most companies can accurately estimate small projects that may take, say, three to six months, but they are notoriously poor at estimating the time and cost of big ones. There are three key reasons for that.
First, large projects usually involve many interdependent tasks, which creates complexity that smaller projects do not have. That makes large projects prone to uncertainty and random events, so they can’t be estimated in the traditional way. Risk-adjusted techniques, such as Monte Carlo analysis, are significantly more accurate.
Second, large projects usually involve tasks whose difficulty is simply unknown at the time of estimation, so even a careful point estimate for them is a guess.
Third, the tipping point between low-risk and high-risk projects is sudden, not gradual. Once a project passes the tipping point, the risk curve changes quickly and dramatically. Unfortunately, under the influence of bias, many companies fail to see the curve, much less correct for it, until it’s too late.
To better assess and manage project risk, develop a process to measure projects against your tipping point. Projects that exceed the tipping point need to be estimated and managed differently. We’ve found the best way is to run the project plan through a series of Monte Carlo simulations. That not only accounts for the risk of uncertainty, it also identifies the tasks with the most risk of affecting the outcome.
The analysis output can then be used to develop a plan for mitigating the risk. This can include techniques like breaking the initiative into smaller projects or running tests to reduce uncertainty.