Lean Metrics: Measure Predictability with Facts over Estimates


A predictable outcome is one of the most sought-after goals in any business or initiative. It’s easy to see why.

We often correlate predictability with attractive benefits like lower risk, higher business value, and maybe even less stress. So with every new project, we dutifully gather time, effort, and resource estimates from all involved — hoping that this time we’ll nail it.

Except we rarely do.

Fact-Based Predictability with Lean Metrics

Predictability metrics help teams make more accurate estimates about the completion and consistency of their work items. This can lead to better work prioritization and more targeted communications among stakeholders.

“If you can make decisions based on facts rather than forecasts, you get results that are more predictable. Lean development is the art and discipline of basing commitments on facts rather than forecasts.”

— Mary Poppendieck, Lean Development and the Predictability Paradox (2003)


A process control chart showing cycle time and completion consistency

Teams can use a process control chart to graphically represent their cycle time and completion consistency. The LeanKit-generated chart above plots a team’s recent work items based on their cycle time. (Note: In this instance, cycle time refers to how many days it took to finish a work item.)

In addition to showing the team’s average cycle time (17.5 days), the chart includes bands at one, two, and three standard deviations to help predict whether the team will complete a work item within a certain timeline:

  • 68% of the time, the team will finish a work item within 35.5 days.
  • 95% of the time, the team will finish a work item within 54.5 days.
  • 99% of the time, the team will finish a work item within 89.7 days.

The key to using a predictability chart is to take the range of possible delivery dates and apply them to your team’s work. Then, when you’re asked to provide an estimate of how long something will take, you can turn to the chart — instead of making an arbitrary guess. With about 70% certainty, you can say that your team can finish a work item in about 36 days. For a higher level of certainty, you know you need to start a work item about 55 days before it must be delivered. In contrast to estimates, using historical date ranges can give teams and their stakeholders a more realistic view of their anticipated cycle time.
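As a rough sketch of how these thresholds can be derived, the snippet below computes the average cycle time and the standard-deviation bands from a list of historical cycle times. The numbers here are hypothetical sample data, not the team’s actual figures from the chart:

```python
# Sketch: derive predictability thresholds from historical cycle times.
# cycle_times is hypothetical sample data (days to finish each work item).
import statistics

cycle_times = [3, 5, 8, 9, 12, 14, 15, 17, 18, 20,
               22, 25, 28, 31, 35, 40, 48, 55, 60, 85]

mean = statistics.mean(cycle_times)
stdev = statistics.stdev(cycle_times)  # sample standard deviation

print(f"average cycle time: {mean:.1f} days")
print(f"+1 sigma threshold:  {mean + stdev:.1f} days")
print(f"+2 sigma threshold:  {mean + 2 * stdev:.1f} days")
print(f"+3 sigma threshold:  {mean + 3 * stdev:.1f} days")
```

Note that the percentages only map cleanly onto these bands if cycle times are normally distributed, which (as discussed in the comments below) is often not the case for knowledge work.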

Using Predictability Metrics for Continuous Improvement

In addition to more confident timelines, teams can use predictability charts for continuous improvement. For example, data points that fall inside the range of three standard deviations (99% confidence) are often referred to as being “in control.” In-control data points influence the bulk of our day-to-day improvement efforts. We can use them to improve predictability by reducing the range of outcomes.

Returning to the chart above: if we can reduce our 95% confidence interval from 55 days to 45 days, we’ll see a major improvement in predictability. We can then be more confident in the commitments we make to stakeholders regarding the delivery of valuable work. Ways to achieve this include limiting our work-in-process, spending more time analyzing and breaking down work items into smaller chunks, or automating recurring processes.

It can also be useful to analyze the data points that lie outside the range of three standard deviations in a predictability chart. These points are considered “out of control,” and they make excellent candidates for team retrospectives, lean coffees, or a root cause analysis. Investigating them can help you find ways to improve.
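A minimal sketch of flagging those out-of-control points, again using hypothetical cycle-time data, might look like this:

```python
# Sketch: flag out-of-control work items (cycle time more than three
# standard deviations above the mean) as candidates for root cause analysis.
# cycle_times is hypothetical sample data; 120 is a deliberate outlier.
import statistics

cycle_times = [5, 6, 7, 8, 8, 9, 10, 10, 11, 12, 12, 13, 14, 15, 16, 120]

mean = statistics.mean(cycle_times)
stdev = statistics.stdev(cycle_times)
upper_limit = mean + 3 * stdev

out_of_control = [t for t in cycle_times if t > upper_limit]
print(f"items to investigate in a retrospective: {out_of_control}")
```

One practical caveat: a single extreme outlier also inflates the standard deviation itself, so with very small samples the limit can drift upward and hide genuine outliers.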

The Bottom Line

Predictability metrics can help improve accuracy by injecting facts into your analysis. Using completion consistency charts allows you to make more realistic predictions about the prospective outcomes of work items entering your system. This metric can help you measure the effect of your improvement efforts and allow you to confidently commit to reasonable timelines based on facts, not estimates.


Chris Hefley

Chris Hefley is a co-founder of LeanKit. After years of coping with “broken” project management systems in software development, Chris helped build LeanKit as a way for teams to become more effective. He believes in building software and systems that make people’s lives better and transform their relationship with work. In 2011, he was nominated for the Lean Systems Society’s Brickell Key Award. Follow Chris on Twitter @indomitablehef.

7 thoughts on “Lean Metrics: Measure Predictability with Facts over Estimates”

  1. Cycle times in knowledge work pretty much never follow a normal distribution. You cannot say that 3 std deviations is 99th percentile, you need to look at actual data to determine that. It may still be useful for the thing described in this article, but it shouldn’t be considered 99th percentile unless you know your distribution is normal.

  2. I think the quote from Poppendieck is a bit wrong. Surely commitments based on facts (from the past) are indeed forecasts. It’s ESTIMATES that aren’t based on facts. Small point, but important.

  3. I agree with you, Aaron, that the cycle times don’t usually follow a normal distribution. On the chart itself, the standard deviations are the shaded bands, and the 95% and 99% lines are calculated separately based on the % of cycle times that fall below those lines, without regard to the distribution. So, the statements in the post about 95% and 99% are accurate, but the 68% is an approximation, and would be skewed when the distribution is not normal. You can see in this case that the distribution isn’t normal, because the standard deviations shown in the shaded bands don’t line up with the 95% and 99% lines on the chart.

    In our new custom reporting solution, I’m planning to add a report that would analyze cycle times using a Weibull distribution. Stay tuned for that in the coming months.
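A minimal sketch of that empirical-percentile calculation, which reads the 95% and 99% lines directly from the data rather than assuming a normal distribution (the cycle times here are hypothetical):

```python
# Sketch: nearest-rank empirical percentiles, computed from the data itself
# with no distributional assumption. cycle_times is hypothetical sample data.
import math

def percentile(data, p):
    """Smallest value with at least p% of the samples at or below it."""
    ordered = sorted(data)
    k = max(0, math.ceil(p / 100 * len(ordered)) - 1)
    return ordered[k]

cycle_times = [2, 4, 5, 7, 9, 11, 14, 18, 25, 60]  # right-skewed, as is typical

print(f"50% line: {percentile(cycle_times, 50)} days")
print(f"95% line: {percentile(cycle_times, 95)} days")
```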

  4. And I agree with you, too, Andy. I don’t think “forecasts” is a bad word, and note that I used “estimates” in the title of the post instead of quoting Mary directly. I debated whether or not to use the quote, actually. I think the point of including it was to focus on the “Facts” part. And I don’t think Mary would disagree, either. A forecast based on facts would be good, and a forecast based on estimates, bad.

  5. Interesting post, thanks. Predicting how long an individual work item might take is really useful. However, what I’m unsure about is using this historic data to help predict how long multiple work items might take. Is Monte Carlo simulation or something similar the only way to go?

  6. In our opinion, yes, Monte Carlo simulation or some other probabilistic method is the best way to forecast. The model for your simulation can be populated based on your historical data from LeanKit, to be sure. But using the simple Speed and Cycle time metrics in LeanKit to predict the delivery of a major project would be a mistake. For more on Monte Carlo simulations, especially for Kanban and Scrum, I’d encourage you to check out FocusedObjective.com. There are also some good spreadsheet-based resources for simulations that have been posted to the kanbandev yahoo group.
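To illustrate the Monte Carlo approach mentioned above, here is one simple sketch: resample historical weekly throughput (items finished per week) to forecast how many weeks a backlog might take. All numbers here are made up for illustration, not drawn from any real team:

```python
# Sketch: Monte Carlo forecast for a batch of work items, resampling
# hypothetical historical weekly throughput. Not a substitute for a
# full probabilistic model, but shows the basic mechanics.
import random

random.seed(42)  # reproducible runs

weekly_throughput = [2, 3, 3, 4, 5, 2, 4, 3, 6, 3]  # hypothetical history
backlog = 30      # items we want to forecast
trials = 10_000

def weeks_to_finish():
    done, weeks = 0, 0
    while done < backlog:
        done += random.choice(weekly_throughput)  # resample one week
        weeks += 1
    return weeks

outcomes = sorted(weeks_to_finish() for _ in range(trials))
p85 = outcomes[int(0.85 * trials)]  # 85th-percentile forecast
print(f"85% of simulated runs finished within {p85} weeks")
```

The forecast is then stated as a probability (“85% likely within N weeks”) rather than a single date, which matches the facts-over-estimates theme of the post.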

  7. Hey Chris,

    Thanks for these tips to improve the reliability of our data. Predictions often get very tricky and we have to account for bias in information management. It really helps to have a platform that is built by someone already thinking about these things.
