Why Vanity Metrics are Dangerous: Holding a Mirror Up to Your Measures of Success


What are Vanity Metrics?

Vanity, as an adjective, means “produced as a showcase for one’s talents” — i.e., a vanity production. When we showcase our own talents, we choose what makes us look good and ignore what doesn’t.

In this post, we’ll look at an IT Operations team that is caught in an interesting, though not uncommon, situation involving vanity metrics. Then, I’ll discuss the dangers of vanity metrics and present a quick test you can do to evaluate your own success measures.

An IT Operations Example: The Five 9’s

Chad is a system administrator on an IT Operations team at a large local business. His manager says that a system uptime of Five 9’s (99.999% uptime) is the holy grail for every self-respecting IT Ops team. With that in mind, Chad’s team measures system uptime for each service as a primary metric for success.
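For context, each extra “9” shrinks the downtime budget dramatically. This quick sketch (the arithmetic is standard; the snippet itself is just illustrative) converts an uptime percentage into the minutes of downtime it allows per year:

```python
# Convert an uptime percentage into the downtime budget it allows.
MINUTES_PER_YEAR = 365 * 24 * 60  # 525,600 minutes

def downtime_minutes_per_year(uptime_pct):
    """Minutes of downtime permitted per year at a given uptime percentage."""
    return MINUTES_PER_YEAR * (1 - uptime_pct / 100)

for pct in (99.0, 99.9, 99.99, 99.999):
    print(f"{pct}% uptime -> {downtime_minutes_per_year(pct):.1f} min/year")
```

At Five 9’s, the budget is barely five minutes of downtime per year; at Chad’s 98.5%, it balloons to roughly 7,900 minutes, more than five full days.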

The web server hosting their company’s e-commerce site is only experiencing 98.5% uptime, and customers are complaining about outages while sales continue to slow. When the IT Ops team looks at the information they have gathered, they notice that 75% of the downtime is happening around the time that the overnight scripts run. Chad determines that there is something in that script, or the data it uses, that brings the server down.

Upon discovery of this information, the team immediately moves forward to resolve this embarrassing impediment to their uptime. If they can fix this, Chad’s team could then say that they had a higher uptime which would surely alleviate the customer complaints, bring sales back up to their normal levels, and make management happy!

A week later, the fix has been tested and deployed. After some time monitoring the changes in the production environment, Chad and his team notice that the system uptime numbers are indeed significantly higher but, for some mind-boggling reason, customer complaints are still coming in at the same level. How could this be?

Stymied, they do a little more research into the customer patterns on their site and realize that most of the downtime they were having didn’t really impact the customer, as it occurred when their shoppers were sleeping — so traffic to the website was minimal. Even though they removed a lot of downtime, that reduction did nothing to improve their customers’ experience. A week’s worth of work down the drain, customers are still upset and sales have not increased.
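The team’s blind spot can be sketched in code: if each outage is weighted by the customers it actually affected (the outage data below is hypothetical), the priority order flips, and the small daytime outage outranks the long overnight one.

```python
# Weight each outage by the traffic it actually affected (hypothetical data).
# A long overnight outage with few active shoppers hurts far less than a
# short outage at peak time.
outages = [
    {"minutes": 45, "active_users": 12},    # overnight batch-script window
    {"minutes": 10, "active_users": 3200},  # brief daytime outage
]

def customer_impact(outage):
    """User-minutes of lost shopping time for one outage."""
    return outage["minutes"] * outage["active_users"]

ranked = sorted(outages, key=customer_impact, reverse=True)
for o in ranked:
    print(o, "->", customer_impact(o), "user-minutes lost")
```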

The team is confused and management is frustrated. How can they be doing better on their primary metric for success, but not seeing improvement where it matters? They are left questioning the validity of the goal of Five 9’s.

How to Spot a Vanity Metric

Unfortunately, the situation that Chad and his team are facing isn’t uncommon. They have run head-first into the dreaded vanity metric. All kinds of teams fall prey to vanity metrics, from Marketing teams to IT Operations teams like Chad’s. However, there are some universal ways to identify these dangerous metrics regardless of what type of team we are on.

One common characteristic of vanity metrics is that they measure activity instead of progress; these are sometimes called productivity metrics. They can be found inside many work tracking systems — one reason why so many people fall prey to them.

Unless we are in a factory stamping out the exact same item over and over, productivity metrics like “# of tickets closed” do not measure value delivered. We want to better understand the impact made by the people closing those tickets. We aren’t in the business of production; we deal in creation, and creation has inherent variation that these metrics treat as waste. Be wary of any metric that starts with “# of.”

The “So What?” Test

Unfortunately, not all vanity metrics are as easily identifiable by name. But there is still a way to suss them out. The telltale sign of any vanity metric is that it fails to pass the “So what?” test. Any metric can cause a reaction, but does your metric drive actions that further your actual goals? A meaningful metric should be able to affirmatively answer one of these two essential questions:

  • Does this metric matter to my customer?
  • Does this drive me to take action or help me make a decision?

If the answer is no to both of those questions, you are dealing with a vanity metric that isn’t worth your time.

The Dangers of Vanity Metrics

Vanity metrics don’t just waste our time, they can be deceptively dangerous. We humans underestimate our innate desire to optimize our behavior to meet the targets with which we are presented. Even the most well-meaning person can find themselves tempted to close a ticket a little early due to the desire to mark something off a list and feel productive. Finishing things is a noble goal. Can you imagine what is likely to happen if you started using a metric like “# of tickets closed” as a pay-based incentive?

Danger #1: Optimizing Numbers, Not Value Delivery

As Eli Goldratt said, “Tell me how you’ll measure me and I’ll tell you how I’ll behave.” Measuring people on the # of tickets closed encourages them to close tickets as fast as they can. While that sounds good on the surface, be careful what you wish for: turning vanity metrics into targets is a slippery slope. We start to optimize to the numbers so much that, over time, we lose sight of the original goal of delivering value. Now, we do whatever we can to deliver more, even if there’s no value in the things we deliver. Goldratt explained this perfectly when he went on to say, “If you measure me in illogical ways, don’t complain about illogical behavior.”

Danger #2: Blaming the People, Not the Problem

What we measure shows people what we value, and measuring vanity metrics makes us more likely to encourage behavior that doesn’t promote our stated goals. When that happens, we start to wonder why the people around us are so ineffective. The hard truth: We can’t measure efficiency and expect it to tell us about our effectiveness. Measuring vanity metrics makes us more likely to create system problems that are misinterpreted as people problems.

“People with targets and jobs dependent upon meeting them will probably meet the targets – even if they have to destroy the enterprise to do it.”

– W. Edwards Deming

Meaningful Metrics

After Chad’s team learned about the concept of vanity metrics the hard way, they still measure system uptime, but now they measure it in the specific context of impact to the customer. When issues affecting uptime arise, they are prioritized based on the potential impact to the customer. This metric helps Chad’s team make better decisions about their work.

Chad’s team subjected their other success metrics to the “So What?” test. Here are a few metrics that passed:

MTTR (Mean Time To Recovery) — Recognizing that prevention isn’t always possible, the team strives to hone the art of responding to downtime so that when the customer is impacted, they can minimize the fallout. How long does it take you to recover from disaster?
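As a rough sketch (the incident durations below are hypothetical), MTTR is simply the average recovery time across incidents:

```python
# MTTR: mean time to recovery across incidents (durations are hypothetical).
recovery_minutes = [30, 45, 12, 90, 23]  # time to restore service, per incident

mttr = sum(recovery_minutes) / len(recovery_minutes)
print(f"MTTR: {mttr:.1f} minutes")
```

Watching the trend matters more than any single value: a rising MTTR is an early warning that recovery practices need attention.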

Deployment Lead Time — When a code change is ready to deploy, how long does it take to get that change to the end user? IT Ops teams like Chad’s often have a big hand in this process. Watching this trend can help you quickly address internal issues that slow your response to customer feedback. Fast feedback loops are critical for making quality products that are relevant to the market.
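A minimal way to track this, assuming you record when a change becomes ready and when it reaches the end user (the timestamps below are hypothetical):

```python
from datetime import datetime

# Deployment lead time: ready-to-deploy -> live for the end user.
# Timestamps are hypothetical examples.
ready = datetime(2015, 6, 1, 9, 30)     # change ready to deploy
deployed = datetime(2015, 6, 3, 14, 0)  # change live in production

lead_time = deployed - ready
print(f"Lead time: {lead_time.total_seconds() / 3600:.1f} hours")
```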

Incidents by Service — This metric lets the team track which service/application has the most reported problems. If a service is consistently reporting high numbers of incidents, it tells you two things: 1) someone cares and 2) you may be well-served by ensuring the service gets some TLC in hopes of reducing the amount of time your team spends on incidents in the future.
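A simple tally over an incident log (the service names below are hypothetical) is enough to surface which service needs that TLC:

```python
from collections import Counter

# Hypothetical incident log: one entry per reported incident.
incidents = ["web-store", "auth", "web-store", "search", "web-store", "auth"]

by_service = Counter(incidents)
for service, count in by_service.most_common():
    print(service, count)
```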

Breakdown of Time Spent — How much time is the team spending putting out fires, working on projects, handling customer improvement requests or making work easier for the team? We often have expectations that need to be checked. This metric tells you when you need to address issues causing an imbalance or reset your expectations.
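One lightweight way to compute this breakdown, assuming the hours come from whatever time tracking the team already does (the numbers below are made up):

```python
# Share of team time by work category (hypothetical hours from time tracking).
hours = {"firefighting": 22, "projects": 10, "customer requests": 6, "tooling": 2}

total = sum(hours.values())
for category, h in hours.items():
    print(f"{category}: {100 * h / total:.0f}%")
```

A breakdown like this makes an imbalance visible at a glance; in this made-up example, more than half the team’s time goes to firefighting.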

Put Your Metrics to the Test

If you want to follow Chad’s lead, you can start to rid your team or organization of vanity metrics by:

  • Identifying the full metrics inventory
  • Understanding the intent behind each metric
  • Subjecting each metric to the “So what?” test
  • Removing or repairing any metrics that failed the test
  • Establishing a shared understanding of criteria for new metrics

Good metrics actually give insight into team success and, because every team is different, what your team needs to measure for success may vary from what other teams need. However, you know you’re on the right path when your metrics have a causal, rather than correlative, relationship with your goals.

Keep an eye out for the next post in this series where we will share the three most important metrics IT Ops teams should be measuring.

Julia Wester

Julia Wester is an Improvement Coach at LeanKit. She’s passionate about teaching managers, and teams as a whole, how to tame the chaos by using Lean and Kanban. An alum of both Turner Broadcasting and F5 Networks, she has 15 years’ experience working with and managing development teams. Connect with Julia on Twitter @everydaykanban or on her blog, Everyday Kanban.

3 thoughts on “Why Vanity Metrics are Dangerous: Holding a Mirror Up to Your Measures of Success”

  1. I still don’t understand why downtime doesn’t pass the so what test. And why customers didn’t stop complaining after its improvement.

  2. First of all, thanks for the great comment. In vanity metrics like # of tickets closed, we tend to think that every ticket has the same value, and we praise those who close the most. In this situation, Chad’s team treated all downtime as equal. There were multiple causes of downtime, and they focused on the one causing the majority of it from a time perspective.

    However, what they failed to do was consider the actual impact to the customer. If they had not treated all downtime as equal and had looked at the issue more deeply, they would have realized that the issue that caused less downtime (and thus looked less important) was the one that caused most of the downtime customers actually experienced. They didn’t work on that issue, so customers are still complaining.

    Downtime does matter to the customer, but only the downtime that impacts them. This is why Chad’s team still measures it, but in combination with the impact of the downtime to the customer, so they can ensure they spend their precious time on the issues with the highest impact.

    I need to thank you again, as your comment made me realize that I had oversimplified the “So What?” test. I have updated the questions for the test in the blog post to ensure the question “Does it matter to a customer?” is reflected. I errantly left that out, but hopefully the story and my answer to your question highlighted its importance even when it wasn’t there.

    Keep an eye out for my next blog post which will add more color to this conversation.

