Can “Bad” IT Metrics Ever Be Good?


In my last post here on the LeanKit blog, I wrote about the hidden dangers of vanity metrics. Vanity metrics are those metrics that make us feel good about what we are doing and provide interesting information, but don’t pass the “So What?” test. A common characteristic is that they measure activity instead of progress.

In this blog post, I will dive deeper into the topic of vanity metrics. Specifically, I will answer the question: Can vanity metrics, or any other “bad” IT metrics, ever be used for good?

Most of us, especially those who work in enterprises, can find reports that measure activity in our metrics portfolio — things like throughput and velocity, number of tickets closed, or deployment frequency. Before you start deleting the reports, first realize that the danger is not in the metric itself. There is no “bad” metric.

Consider the common steak knife. It can cause great bodily injury if used for a malicious purpose, but that same knife can provide a lot of value when you sit down to eat a juicy ribeye. Metrics are similar, in that they always carry the potential for risk — and the potential for reward. It’s how you wield them that realizes one potential or the other.

A Close Encounter with Vanity Metrics

With vanity metrics, the danger is in misunderstanding the activity-based information they provide and equating that activity to tangible progress towards goals. My most memorable experience with vanity metrics occurred in my last management role. The leadership team was trying to identify a set of metrics that we could use across teams in our IT department to essentially compare how successful teams were.

This was a laudable goal, but it was a tricky exercise for the management team, because most of the metrics that were easy to gather turned out to be activity-based. The example our VP wanted to use was the number of enhancements each team had closed over the previous two-week period.

On the surface, this seemed like a reasonable measure to use. Everyone had enhancements to deliver. However, I saw red flags. I spoke to our VP and asked him what it meant to him if I said that my team closed five enhancements. What if another team closed ten? Was that good or bad? Could he really interpret which team delivered more value or satisfaction to our customer based on a numerical result like five vs. ten? And, most importantly, what would he do differently as a result of having that information?

My VP was a very smart guy. Once I asked him these questions in a respectful manner that didn’t put him on the defensive, he came to his own realization that this metric didn’t tell him as much as he thought it might. So, was there value in measuring this? Not for us. Our VP’s question to us was “What do we measure instead?”

We determined that we were much better served by looking at things like:

  • Responsiveness: Trends for open vs. close rates of requests to see if we were keeping pace with business needs.
  • Economic impact: The amount of revenue or time/cost savings we generated from having done the work.
  • Subjective well-being: Were the customers (internal or external) happy with us? Were we happy with ourselves?
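The responsiveness trend in the list above is easy to sketch in code. Here is a minimal, hypothetical example — the ticket records and the `weekly_net_flow` helper are invented for illustration, assuming you can export open/close dates from your ticketing tool. A persistently positive opened-minus-closed count signals that requests are outpacing the team.

```python
from datetime import date

# Hypothetical ticket records: (opened_on, closed_on or None if still open)
tickets = [
    (date(2024, 1, 2), date(2024, 1, 5)),
    (date(2024, 1, 3), None),
    (date(2024, 1, 4), date(2024, 1, 10)),
    (date(2024, 1, 9), None),
]

def weekly_net_flow(tickets, week_start, week_end):
    """Opened minus closed within the window; a persistently positive
    trend means demand is outpacing the team's close rate."""
    opened = sum(1 for opened_on, _ in tickets
                 if week_start <= opened_on <= week_end)
    closed = sum(1 for _, closed_on in tickets
                 if closed_on and week_start <= closed_on <= week_end)
    return opened - closed

# 3 tickets opened, 1 closed in the first week: net flow of +2
print(weekly_net_flow(tickets, date(2024, 1, 1), date(2024, 1, 7)))
```

Tracking this number week over week, rather than as a one-off snapshot, is what turns it from an activity count into a trend you can act on.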

If there had been no one to point out the red flags, we might have kept measuring the number of enhancements closed. This could have easily led to teams optimizing their work to be able to close more enhancements rather than to make the right choices for the company.

Unfortunately, we often don’t even realize we are making these mental mistakes when choosing metrics. As a rule, we tend to give things like activity-based metrics more importance than we should, and that can lead us down a long path of poor decision making and undesirable results. But even seemingly “bad” metrics such as activity-based metrics can pass the “So What?” test if they are used properly and their limitations are understood.

Vanity Metrics in Context

IT Metrics and Performance

Let’s look at a good, real-life example of where activity-based metrics can be used appropriately to drive action. The 2015 State of DevOps Report, published by Puppet Labs and IT Revolution, uses and recommends three measures of IT Performance. They include two throughput measures and one stability measure:

Throughput Measures

  • Deployment frequency: How frequently the organization deploys code.
  • Deployment lead time: Time required for changes to go from “code committed” to code successfully running in production.

Stability Measures

  • Mean time to recover (MTTR): Time required to restore service when a service incident occurs (e.g., unplanned outage, service impairment, etc.).
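Both deployment lead time and MTTR boil down to averaging the gap between two timestamps. Here is a minimal sketch, assuming you can pull commit/deploy times and incident start/restore times from your tooling — the records below are invented for illustration:

```python
from datetime import datetime
from statistics import mean

# Hypothetical deployments: (code committed, running in production)
deploys = [
    (datetime(2024, 3, 1, 9, 0), datetime(2024, 3, 1, 11, 30)),
    (datetime(2024, 3, 2, 14, 0), datetime(2024, 3, 3, 10, 0)),
]

# Hypothetical incidents: (service impaired, service restored)
incidents = [
    (datetime(2024, 3, 4, 8, 0), datetime(2024, 3, 4, 8, 45)),
    (datetime(2024, 3, 6, 22, 0), datetime(2024, 3, 6, 23, 30)),
]

# Deployment lead time: committed -> in production, averaged across changes
lead_hours = [(prod - commit).total_seconds() / 3600
              for commit, prod in deploys]

# MTTR: impaired -> restored, averaged across incidents
recovery_minutes = [(end - start).total_seconds() / 60
                    for start, end in incidents]

print(f"mean lead time: {mean(lead_hours):.2f} h")    # 11.25 h
print(f"MTTR: {mean(recovery_minutes):.1f} min")      # 67.5 min
```

Deployment frequency, by contrast, is just a count of deploys per period — which is exactly why, on its own, it reads as an activity measure.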

Strictly speaking, deployment frequency is a vanity metric. It measures activity rather than progress towards a goal — unless that goal is simply to deploy a lot. So, why in the world would a heavily researched report recommend that you use it as a primary metric of success? Well, since context is king, let me give you some important background that may help make sense of the recommendation.

The DevOps movement arose because of a common handoff found in many technology organizations. That handoff — the one between the teams of developers delivering code and the operations teams that deploy it — was considered so egregious and so dysfunctional that an entire movement grew up to address this one part of the process.

One of the major issues the movement addresses is the length of time it takes for a piece of code (read: value) to go from “ready to deliver” to “in the hands of the customer.” Historically, these deployment lead times have been dismal, and many organizations are still struggling in this area.

In the excerpt below, taken directly from the report, we see the correlations the authors have uncovered that support their decision to recommend measuring a vanity metric like deployment frequency:

“High-performing IT organizations experience 60 times fewer failures and recover from failure 168 times faster than their lower-performing peers. They also deploy 30 times more frequently with 200 times shorter lead times.

Deployment pain can tell you a lot about your IT performance. Do you want to know how your team is doing? All you have to do is ask one simple question: ‘How painful are deployments?’ We found that where code deployments are most painful, you’ll find the poorest IT performance, organizational performance and culture.”

The underlying thought behind green-lighting the deployment frequency metric seems to be this: If deploying is easy and painless, you’ll do it more often. If you deploy more often, you can shorten feedback loops and get information that helps you build a higher-quality deliverable. With this in mind, even if people treat the deployment frequency metric as a target and optimize for it at the expense of other activities, there isn’t much downside for those still struggling in this area.

When Does Your Metric Expire?

We know that any metric, including a vanity metric, can be good if it is truly helping us reach a goal. But, it may not continue to be helpful forever. People don’t often realize that a metric can, and should, be discarded once it serves its purpose. We generally seem to believe that if a metric helps us now, then it will continue to be valuable in the future.

Consider this: If I have high blood pressure because I’m overweight, but then proceed to lose 50 pounds and show a sustained period of normal blood pressure, at some point I can stop measuring my blood pressure every day. In your IT department, once you have a well-oiled deployment machine, do you need to hyper-focus on your deployment frequency, or can you step back and do an occasional health check? What’s the point of trying to turn 100 deployments a week into 500, or 1,000? When do you reach the point of diminishing returns, where the reason behind the metric is no longer improving your process but hitting a higher and higher target? Don’t be a hoarder when it comes to metrics — know when to let them go.

Final Thoughts

Metrics should be based on your current challenges and map to your goals. As your goals and contexts change, your portfolio of metrics should also change. Regularly review your metrics to ensure that they still map to your goals. Ask yourself and your team why they are still relevant, to ensure you’re not optimizing to the wrong metrics.

If a vanity metric, or any other metric people generally call “bad,” can actually help you solve your current problem, then measure it. Just make sure you’re not fooling yourself about what it’s telling you. Then, when the need has passed, stop using the metric so you don’t fall victim to aiming for targets at the cost of your organizational goals. To learn more, watch my webinar on Metric-Driven Coaching.

Julia Wester

Julia Wester is a Web Developer turned Manager turned Lean Consultant and Kanbanista. Her passions include helping people visualize and manage their work, showing that management doesn’t have to be a dirty word, and helping people remove unnecessary drama at work. Julia is co-founder of Lagom Solutions. Connect with Julia on Twitter @everydaykanban.