Comprehensive software reviews to make better IT decisions
SDLC Metrics: Don't Let Management Pick Them or Even Use Them
We live in a metrics-fixated world where having more metrics is always assumed to be better than having fewer, and Software Development Life Cycle (SDLC) metrics are no exception. But the truth is that any badly chosen or badly managed metric will do more harm than good to your organization. To avoid these pitfalls, take ownership of SDLC metrics away from managers and put it into the hands of those who can best manage it: your development teams.
There is no shortage of people who expound the virtues of establishing and tracking SDLC metrics in the mistaken belief that they provide effective “levers of control” for development activities, especially when combined with reward and punishment. But consider the cautionary tale of Wells Fargo, whose use of metrics, punishments, and rewards (intended to drive business growth) resulted in massive fines, class-action lawsuits, unhappy customers, and regulation to cap the company’s size (ironically, the opposite of what the metrics were intended to achieve!).
As Atlassian (the maker of Jira) has said: “Metrics are a touchy subject. On the one hand, we've all been on a project where no data of any kind was tracked, and it was hard to tell whether we're on track for release or getting more efficient as we go along. On the other hand, many of us have had the misfortune of being on projects where stats were used as a weapon, pitting one team against another or justifying mandatory weekend work. So, it's no surprise that most teams have a love/hate relationship with metrics.”
In his book The Tyranny of Metrics, Jerry Muller shines a light on the dark side of metrics through a series of case studies in a variety of industries. He shows how blind belief in the cult of metrics has led to a culture of gaming and manipulation that results in predictable negative consequences. Understanding this dynamic is critical if you want to avoid the failings of bad metrics use in your organization.
When selecting your organization’s SDLC metrics, consider the U.S. Army’s findings on Counterinsurgency Metrics, which showed that standardized metrics were often deceptive, but metrics developed to fit specific circumstances, especially when selected by practitioners with local experience, could be genuinely informative. The lesson was to abandon fixed metrics and instead determine what is worth counting and what the numbers actually mean in their local context (Muller).
SDLC metrics (like all metrics) are largely misused in industry and can be detrimental to your organization. Having no SDLC metrics at all is better than implementing a bad metric (at best, a bad metric will be a productivity drain, but at its worst, it results in gaming behavior and/or unintended consequences that can actually steer your organization in the wrong direction).
You can avoid falling victim to the pitfalls of bad metrics through careful selection and cultivation of your metrics. Here are some simple rules to follow when defining and implementing SDLC metrics:
- Select the fewest metrics possible:
You can start with a long list of potential metrics, but be sure to prioritize them (using the criteria below), then select only the SDLC metrics that rise to the top of the list. Remember, every metric you add dilutes the attention paid to the others, so limit yourself to three to five metrics as a rule.
- Select the highest-value metrics:
When selecting your SDLC metrics, choose those that provide the highest value to your development team based on its current needs/goals (e.g. if your development team is currently struggling with the quality of delivered code, select and monitor code quality metrics that will help steer the team toward that goal).
- Select the safest metrics:
To the extent possible, select metrics that are less likely to result in gaming behavior and/or yield unintended consequences when compared to other metrics (sometimes this can be best achieved by carefully choosing the method of collection and reporting, e.g. avoid self-reporting of metric values [like task-percentage-complete against a detailed project plan] to curb gaming behavior).
- Select metrics which are easy to gather and report:
Gathering and reporting (even good) metrics is always a drain on your development team’s productivity. Keep this to a minimum by selecting SDLC metrics that are easiest to gather and report. Ideally, select only metrics that can be accurately gathered in an automated fashion. This can often be achieved by using powerful software development, testing, and DevOps tools such as Jira, VersionOne, Parasoft, Puppet, Azure DevOps, and the like (for example, when properly used, tools like Jira and VersionOne can easily capture and report accurate sprint velocity metrics).
- Select metrics that come closest to measuring customer value delivered:
Ultimately, the best measure of a development team’s performance is the value it delivers to your “customers” (at Info-Tech, we call this throughput). Although true and accurate measures of customer value are notoriously difficult to obtain, you should always strive to select metrics that are the best proxy available (you can read more on how to effectively measure throughput here).
- Let the development team being measured select the metrics based on current needs:
Your development team is best positioned to understand its current needs/goals, and which metrics will help to achieve them (and equally, which metrics will not!). Work with your development team to collaboratively select and implement your SDLC metrics using the rules above. This approach will both foster buy-in and minimize the risk of gaming, ambivalence, or unintended consequences.
- Never use metrics for reward or punishment; use them to develop your team:
Attaching SDLC metrics to rewards and/or punishment will almost certainly result in gaming behavior and unintended consequences. Stick to using metrics as a tool for helping your development team to improve its capabilities and performance (which is its own reward).
- Change your metrics over time to align with evolving team needs:
Your development team’s needs and goals will evolve over time, and so should the SDLC metrics you use. Periodically review your list of SDLC metrics and replace the least-valuable ones with new ones that will help the team improve even further (e.g. the SDLC metrics selected for a new-to-Agile development team should be different than those selected for a fully mature Agile team).
- Talk to your Info-Tech engagement representative about the soon-to-be-released Info-Tech blueprint on SDLC metrics. This blueprint will provide important insights, learnings, and tools that will help your organization to select and maintain an effective set of SDLC metrics.
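To make the automated-gathering rule above concrete, here is a minimal sketch of a sprint velocity calculation. It assumes you can export completed issues with their story points and sprint labels from a tool like Jira or VersionOne; the field names (`sprint`, `status`, `points`) are hypothetical and not any tool's actual export schema:

```python
from collections import defaultdict

def sprint_velocity(issues):
    """Sum completed story points per sprint.

    `issues` is a list of dicts with hypothetical keys
    'sprint', 'status', and 'points' (not a real Jira schema).
    Only issues marked 'Done' count toward velocity.
    """
    velocity = defaultdict(int)
    for issue in issues:
        if issue["status"] == "Done":
            velocity[issue["sprint"]] += issue["points"]
    return dict(velocity)

# Example export: two sprints' worth of issues.
issues = [
    {"sprint": "Sprint 1", "status": "Done", "points": 5},
    {"sprint": "Sprint 1", "status": "Done", "points": 3},
    {"sprint": "Sprint 1", "status": "In Progress", "points": 8},
    {"sprint": "Sprint 2", "status": "Done", "points": 13},
]
print(sprint_velocity(issues))  # {'Sprint 1': 8, 'Sprint 2': 13}
```

The point is not this particular script but the principle: when the numbers come straight from the team's tracking tool rather than from self-reporting, the metric is both cheaper to gather and harder to game.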
(SIDE NOTE: This approach to selecting and managing SDLC metrics will be most effective if your organization has successfully implemented Agile. Members who have adopted Agile development processes will notice that the above approach builds on the Agile tenet of creating self-managing teams and providing them with the skills, tools, and support they need to deliver successfully. As servant leaders, Agile managers entrust their development teams with the responsibility to self-organize and self-manage, then let them determine the best practical solutions to the myriad problems they will encounter. This approach requires instilling both trust and accountability in your development team and generally leads to better results than can be achieved in a command-and-control organization. Why, then, would you not also entrust them with figuring out for themselves what the right SDLC metrics are to best perform their responsibilities for the organization? This approach is far more likely to yield effective metrics for your organization than something selected “from on high” and imposed on your development team.
Additionally, Agile development practices (which involve continuous, incremental delivery of working and tested software) are well positioned for gathering metrics that are good measures of the value delivered to your customers.)
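Because Agile teams deliver continuously, even a crude throughput proxy becomes easy to compute. The sketch below counts completed work items per week over a time window; this is an illustrative approximation only, not Info-Tech's formal throughput measure:

```python
from datetime import date

def throughput_per_week(completion_dates, start, end):
    """Completed items per week over a window (a crude throughput proxy).

    `completion_dates` holds the dates items were finished; the
    function counts those inside [start, end] and divides by the
    window length in weeks.
    """
    weeks = (end - start).days / 7
    done = sum(1 for d in completion_dates if start <= d <= end)
    return done / weeks

# Example: four items completed across a four-week window.
dates = [date(2020, 4, 2), date(2020, 4, 9),
         date(2020, 4, 10), date(2020, 4, 20)]
rate = throughput_per_week(dates, date(2020, 4, 1), date(2020, 4, 29))
print(rate)  # 1.0 item per week
```

A raw item count treats all work as equal, so in practice you would weight items by size or, better, by some estimate of customer value; the closer the weighting gets to real customer value, the better the proxy.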
Want to Know More?
Thor, the Norse God of Thunder, tells Jane Foster, the woman he’s trying to impress, that on his home world of Asgard, the realm eternal, science and magic are two sides of the same coin. Had Jane been a part of the operations teams at Google (or other mature online service providers), she would have immediately realized we have a similar technology right here on good old Earth. We call the science site reliability engineering (SRE), and service-level objectives (SLOs) are the magic behind it. SRE is a powerful concept for organizations that are serious about keeping their customers happy. It is therefore important for them to develop well-thought-out SLOs and make certain that management is intellectually equipped to derive valuable business perspectives from them.
Hell hath no fury like a customer unable to access an online service when they want to. Customers expect online services to always be on, always be accessible, and always treat them like there’s no one else in the world who matters more. Thank heavens, then, that online service providers can use site reliability engineering (SRE) to keep their customers happy, engaged, and, most importantly, feeling valued.
If an image is worth a thousand words, a visual roadmap will save you a thousand hours.
The application portfolio management (APM) tool space can be a confusing one, as many software vendors offer their own take on what APM is. Enterprise architecture, application management, and project portfolio management tools each offer an APM use case, but these are often quite skewed toward the primary function of the tool.
Info-Tech members moving to Agile are frequently unsure of the role of PMs and the PMO in an Agile environment. Any organization used to traditional (Waterfall) project management will need to make adjustments in support of Agile or risk losing the benefits.
Kovair Introduces Release 10.0 of Its Product Suite, Improving Its Breadth of Integrations, Administration, and Data Migration Capabilities
Kovair continues to enhance its product suite with the introduction of version 10.0. The updates cover its Omnibus, ALM Studio, and QuickSync products.
ProductPlan makes a strong case for excluding features from your product roadmap. Instead, develop your roadmap using strategic themes.
GitHub has announced that, effective April 14, 2020, all of its core features will be free for everyone. This will include private development within organizations that have previously paid for some subscription plans.
Almost a decade has passed since Marc Andreessen’s article “Why Software Is Eating The World” passionately defended the rise of software and its potential to disrupt every industry. The ensuing decade has proven his thesis to be true.