We’ve been traveling a fair bit, to conferences in Europe and the US. In the process we’ve had the pleasure of being re-exposed to a friend and respected colleague in the Kanban community, Mike Burrows. We saw Mike speak and lead discussion groups in Madrid and Boston on topics from the very basics of Kanban all the way to advanced portfolio management techniques. We think that range speaks well of Mike and the Kanban community in general.
Today, Mike shares with us his thoughts on Kanban metrics. If you know LeanKit, and especially if you read about our new partnership with Focused Objectives, you’ll know that the metrics generated by a carefully managed online Kanban system are dear to our hearts. We’re familiar with the old saying “lies, damned lies, and statistics,” and we certainly agree that, done wrong, metrics can be dangerous, but that doesn’t mean they’re not worth trying to do right. Happily, Mike agrees!
Guest Blog Post: Mike Burrows, David J Anderson & Associates
Why metrics? We hope that measurement will bring understanding, perhaps even a sense of control. Implicitly or explicitly, we have models for how things work, and these models need to be tested and calibrated before they can be fitted and applied to our workplaces. And once a model starts to tell us something important, we gain the insight and perhaps the necessary credibility to attempt to make a change for the better.
But before we get carried away, let’s be honest about the limitations: our models are wrong (all models are wrong, even if some are useful), they tend to downplay the effects of influences outside our control, and typically they assume direct causal relationships where the reality is nowhere near that straightforward. Worse, as soon as we start to treat metrics as targets, their purpose (however well-intentioned) gets subverted and the value of the metric ends up destroyed. It seems that metrics have a dark side!
Metrics (most notably lead times, throughput, and counts of work items in progress, blocked items, and defects) do have their place in Kanban, and we are indeed fortunate now to have tools that remove the drudgery of data collection and calculation. Sometimes they come to our rescue when our foresight was lacking: Kanban tools typically retain a detailed enough history that we can calculate and visualize high-quality metrics months after the event, an option that wasn’t open to me when I kept just a few scrappy numbers in a spreadsheet!
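As a sketch of the kind of calculation such tools automate, here is how lead time and throughput might be derived from a simple card history. The card data below is purely illustrative, and teams differ on details such as inclusive vs. exclusive day counting:

```python
from datetime import date

# Hypothetical card history: (id, started, finished). Illustrative data only.
cards = [
    ("A1", date(2013, 5, 1), date(2013, 5, 8)),
    ("A2", date(2013, 5, 2), date(2013, 5, 6)),
    ("A3", date(2013, 5, 3), date(2013, 5, 13)),
]

# Lead time per card: days elapsed from start to finish.
lead_times = [(finished - started).days for _, started, finished in cards]
avg_lead_time = sum(lead_times) / len(lead_times)

# Throughput: items finished per week across the observation window.
window_days = (max(f for _, _, f in cards) - min(s for _, s, _ in cards)).days
throughput_per_week = len(cards) / (window_days / 7)

print(f"Average lead time: {avg_lead_time:.1f} days")      # 7.0 days
print(f"Throughput: {throughput_per_week:.2f} items/week")  # 1.75 items/week
```

Even a toy example like this shows why automated history matters: the raw start and finish dates are all you need, but capturing them reliably by hand is exactly the drudgery the tools remove.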
How then do we benefit from this easy access to metrics without going over to the dark side? Some tips:
- Don’t obsess. There is no single number that adequately captures all the things we are trying to manage, let alone what we are trying to achieve for our customers. Treat metrics as mere signals, and have a range of them at your disposal. Experiment, try new ones; you might discover something that helps you to make further sense of it all.
- Investigate the trade-offs. Suppose you are considering making a short-term sacrifice of quality for speed; for how long can this work (if it works at all)? More subtly, how do you trade “important” and “urgent”? The challenge of management isn’t just that it is multi-dimensional – it seems that every interesting relationship has a time element.
- Explore the limitations. Reducing work-in-progress reduces lead times, until it doesn’t. That point of model breakdown (not to mention real-world pain) is where new learning and (let’s hope) deep-rooted improvement really happens.
- Go back and check. Was your last “improvement” really a good one? How do you know? Did it work as predicted? Was it sustained? Such questions get to the heart of the evolutionary change process.
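The trade-off and breakdown points in the tips above can be explored with Little’s Law (for a stable system, average WIP = throughput × average lead time). The numbers below are illustrative, not real project data; the law is a consistency check, and where observation diverges from it is precisely where the model is breaking down:

```python
# Little's Law for a stable system: avg_wip = throughput * avg_lead_time.
# Illustrative numbers only.
throughput_per_day = 0.5   # items finished per day
avg_lead_time_days = 8.0   # average days from start to finish

avg_wip = throughput_per_day * avg_lead_time_days
print(f"Implied average WIP: {avg_wip:.1f} items")  # 4.0 items

# Conversely, a team holding WIP at 4 items with the same throughput
# should expect lead times of about 4 / 0.5 = 8 days. If observed lead
# times drift well above that, the system isn't stable -- the point of
# model breakdown where the real learning starts.
```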
Approach metrics mechanistically and not only do you lose most of the benefits, you could be risking something worse. Approach them thoughtfully (incorporating healthy doses of curiosity, caution and skepticism) and you might get beneath surface appearances and arrive at some deeper understanding. Worth a try, surely?
See “The Importance of Goodhart’s Law” (lesswrong.com)
See Dave Snowden’s article on “bounded applicability” (cognitive-edge.com)
Mike has led development teams and larger IT functions for much of his career, working in aerospace, software tools, finance, and energy, most recently as Executive Director at UBS Investment Bank and then IT Director at the energy risk management consultancy Encore International, where he led one of the first Kanban implementations in Central Europe.
In addition to his programme delivery and portfolio management responsibilities, Mike has led a number of successful improvement initiatives, ranging from division-wide capacity management to improved training for business analysts. Mike studied at Imperial College, London, gaining a first-class degree in Mathematics.