
SDLC Metrics – Be Careful What You Ask for, Because You’ll Probably Get It

Establishing and monitoring SDLC metrics is a powerful way to drive behavior change in your organization. But metrics are highly prone to creating unexpected outcomes and must be used with great care. Use metrics judiciously to avoid gaming or ambivalent behavior, productivity loss, and unintended consequences.

There is no shortage of people who expound the virtues of establishing and tracking software development life cycle (SDLC) metrics. Some consider them mandatory for effectively managing development teams to achieve desirable outcomes. But as Atlassian (the maker of Jira) has said, “many of us have had the misfortune of being on a project where stats were used as a weapon, pitting one team against another or justifying mandatory weekend work. So it's no surprise that most teams have a love/hate relationship with metrics.”

The truth is that badly chosen or incorrectly managed metrics are worse than no metrics at all and will almost certainly result in undesirable outcomes, up to and including outcomes that are polar opposites of what you intended.

Here’s what badly chosen metrics can lead to:

  • Gaming behavior: Behavior that produces high metric scores but has a detrimental effect on desired outcomes (e.g. a “Lines of Code Written” metric that results in plenty of poor-quality code because it doesn’t include testing).
  • Ambivalent behavior: A lack of change in team behavior over time, even though a gathered metric consistently shows need for improvement (e.g. this can happen if you are gathering too many metrics or if a given metric is not considered to be important by most team members).
  • Reduced productivity: Metrics that cost team members more effort to gather or report than the value they return (e.g. asking team members to manually track their daily work time between bug fixes and new code development in 15-minute increments).
  • Unintended consequences: Metrics that produce results that are contrary to (or even the opposite of) your intended outcome (e.g. a “Debugged Function Points Delivered” metric that results in lots of new functions being completed, but none of them are easy to use or deliver high value for your customers).

How you manage your metrics can be just as detrimental to desired outcomes as how you choose them. It is possible (and some would even say easy) to manage metrics in a misguided or even toxic fashion. Here are some of the ways metrics can be mismanaged:

  • Emphasis on numbers instead of trends: It’s easy to believe that the exact value of a metric is important. But is there really any significant difference between a metric that scores a 94 in one cycle and then 95 in the next? And is the exact value of the metric as important as whether it is a high, medium, or low score?
  • Using metrics to induce competition or as a driver for reward/punishment: It can be tempting to pit individuals and teams against one another by publicly comparing scores and using them to decide punishments or rewards. But this type of metrics management can lead to toxic behavior among team members.
  • Believing that more metrics are always better: It’s easy to fall into the trap of believing that if one metric is good, then more metrics are always better. Gathering and monitoring too many (besides being costly) will only confuse your teams and dilute the value each metric provides.
  • Assuming metrics are valuable forever: Gathering and monitoring a metric can be extremely valuable for its short-term effect in driving a desired behavior, but don’t assume it should be used forever. If a given metric’s value has not changed substantially for a long time (or if it was only established to get the team through a hump), consider retiring the metric (and don’t automatically replace it with another one).

Our Take

We consider using metrics to be analogous to wielding a very sharp knife. Both are extremely powerful and effective tools when used by a skilled practitioner for an appropriate task. But both are also highly dangerous when applied to the wrong task or used in an irresponsible fashion. You should proceed with due caution when working with either of them.

Here are some of the ways you can judiciously apply SDLC metrics to maximize desirable outcomes and minimize mishaps:

  • Choose metrics that are as close to measuring the desired outcome as possible: Don’t rely on a metric like “Lines of Debugged Code Put Into Production” to tell you how productive your team is, because real productivity has everything to do with total customer value delivered. Instead, adopt a metric that incorporates how much value was delivered by the code. This will limit the negative impact of any gaming behavior while also reducing the risk of unintended consequences.
  • Choose the smallest number of metrics possible: Every metric you gather will reduce team productivity in some way (even if only minimally). As well, every additional metric you gather will reduce the value of all previous metrics, because there are only so many things your development team can focus on at any one time. Choose only the very best metrics you need to effectively drive the outcomes you are striving for. This will minimize the risk of ambivalent behavior and reduce productivity loss.
  • Automate as many metrics as possible: Again, every metric you gather will reduce team productivity, even if only minimally. Don’t waste your team’s valuable cycles doing metrics gathering/reporting. Wherever possible, use automation tooling for metrics gathering. For example, enforcing disciplined use of a software development management tool (like Jira, VersionOne, or PivotalTracker) will make it possible to automatically gather highly valuable metrics (like sprint velocity) without additional effort by the development team. This will reduce both productivity loss and the risk of gaming and ambivalent behavior.
  • Focus on trends rather than precise metric values: Analyze your metrics for how they are trending (e.g. has the value been improving or worsening over the past several cycles?) and for their approximate rather than exact value (e.g. is this cycle’s value green, yellow, or red?). Avoid over-emphasizing exact values, especially setting precise targets to be achieved (the exception is when a precise value is directly tied to a business need, such as a contractual obligation for at least five-nines uptime). This will reduce the risk of gaming and ambivalent behavior.
  • Review and change your metrics periodically: It’s a mistake to believe that every metric you gather will continue to add value indefinitely. If a metric’s value has not changed substantially for a long period of time, chances are you no longer need to monitor it. Periodically look at your list of metrics and decide whether you should retire or replace any of them. Similarly, it can be extremely beneficial to adopt a new metric for only a period of time, with the full expectation that it will be retired once a goal is achieved (e.g. for a team transitioning to automated testing tools, a temporary metric to monitor the number of automated tests created per period can help to encourage faster adoption or even highlight where more training might be required). This will reduce productivity loss, as well as risk of gaming and ambivalent behavior.
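The trend-over-value advice above can be made concrete with a small sketch. This is a minimal illustration in Python, not a prescribed implementation: the metric scores, the green/yellow thresholds, and the half-vs-half comparison are all illustrative assumptions.

```python
# Minimal sketch: report a metric as a coarse band and a direction,
# not as an exact number. Thresholds and scores are hypothetical.

def band(value, green_at, yellow_at):
    """Coarse red/yellow/green classification (higher is better)."""
    if value >= green_at:
        return "green"
    if value >= yellow_at:
        return "yellow"
    return "red"

def trend(values):
    """Direction over recent cycles: compare the average of the older
    half against the newer half, rather than two single readings."""
    half = len(values) // 2
    older = sum(values[:half]) / half
    newer = sum(values[half:]) / (len(values) - half)
    if newer > older:
        return "improving"
    if newer < older:
        return "worsening"
    return "flat"

# Hypothetical sprint-over-sprint scores for one metric.
scores = [88, 90, 91, 94, 95, 94]
print(band(scores[-1], green_at=90, yellow_at=75))  # green
print(trend(scores))                                # improving
```

Note how the 94-vs-95 question from earlier disappears: both readings land in the same band, and only the multi-cycle direction is reported.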
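The periodic review of your metric list can itself be partly automated. The sketch below flags metrics whose values have barely moved over a long window as candidates for retirement; the metric names, the eight-cycle window, and the 5% tolerance are illustrative assumptions, not recommendations.

```python
# Minimal sketch: flag metrics that have stopped moving as candidates
# for retirement. Window size and tolerance are hypothetical.

def is_stale(history, window=8, tolerance=0.05):
    """True if the last `window` readings all stayed within `tolerance`
    (relative) of their mean -- i.e. the metric is no longer changing."""
    recent = history[-window:]
    if len(recent) < window:
        return False  # not enough data to judge
    mean = sum(recent) / len(recent)
    return all(abs(v - mean) <= tolerance * mean for v in recent)

# Hypothetical per-cycle histories for two metrics.
metrics = {
    "sprint_velocity": [40, 42, 41, 41, 40, 42, 41, 41],  # flat for 8 cycles
    "escaped_defects": [12, 10, 9, 7, 8, 6, 5, 4],        # still improving
}
for name, history in metrics.items():
    if is_stale(history):
        print(f"consider retiring: {name}")
```

A flagged metric is a prompt for a human decision, not an automatic deletion: per the advice above, a flat metric may have done its job, or it may still be guarding a behavior you want to keep.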