SDLC Metrics: Don't Let Management Pick Them or Even Use Them
We live in a metrics-fixated world where having more metrics is always assumed to be better than having fewer, and Software Development Life Cycle (SDLC) metrics are no exception. But the truth is that any badly chosen or badly managed metric will do more harm than good to your organization. To avoid these pitfalls, take ownership of SDLC metrics away from managers and put it into the hands of those best placed to manage it: your development teams.
There is no shortage of people who extol the virtues of establishing and tracking SDLC metrics in the mistaken belief that they provide effective “levers of control” for development activities, especially when combined with reward and punishment. But consider the cautionary tale of Wells Fargo, whose use of metrics, punishments, and rewards (intended to drive business growth) resulted in massive fines, class-action lawsuits, unhappy customers, and regulation capping the company’s size (ironically, the opposite of what the metrics were intended to achieve!).
As Atlassian (the maker of Jira) has put it: “Metrics are a touchy subject. On the one hand, we've all been on a project where no data of any kind was tracked, and it was hard to tell whether we're on track for release or getting more efficient as we go along. On the other hand, many of us have had the misfortune of being on projects where stats were used as a weapon, pitting one team against another or justifying mandatory weekend work. So, it's no surprise that most teams have a love/hate relationship with metrics.”
In his book The Tyranny of Metrics, Jerry Muller shines a light on the dark side of metrics through a series of case studies in a variety of industries. He shows how blind belief in the cult of metrics has led to a culture of gaming and manipulation that results in predictable negative consequences. Understanding this dynamic is critical if you want to avoid the failings of bad metrics use in your organization.
When selecting your organization’s SDLC metrics, consider the U.S. Army’s findings on counterinsurgency metrics, which showed that standardized metrics are often deceptive, but that metrics developed to fit specific circumstances, especially when selected by practitioners with local experience, can be genuinely informative. The lesson: abandon fixed metrics and instead determine what is worth counting and what the numbers actually mean in their local context (Muller).
SDLC metrics (like all metrics) are widely misused in industry and can be detrimental to your organization. Having no SDLC metrics at all is better than implementing a bad one: at best, a bad metric is a productivity drain; at worst, it encourages gaming behavior and unintended consequences that can actively steer your organization in the wrong direction.
You can avoid falling victim to the pitfalls of bad metrics through careful selection and cultivation of your metrics. Here are some simple rules to follow when defining and implementing SDLC metrics:
- Select the fewest metrics possible:
You can start with a long list of potential metrics, but be sure to prioritize them (using the criteria below), then select only the SDLC metrics that rise to the top of the list (remember, every metric you select will reduce the value of every other metric, so limit yourself to three to five metrics as a rule).
- Select the highest-value metrics:
When selecting your SDLC metrics, choose those that provide the highest value to your development team based on its current needs/goals (e.g. if your development team is currently struggling with the quality of delivered code, select and monitor code quality metrics that will help steer it toward that goal).
- Select the safest metrics:
To the extent possible, select metrics that are less likely to result in gaming behavior and/or yield unintended consequences when compared to other metrics (sometimes this can be best achieved by carefully choosing the method of collection and reporting, e.g. avoid self-reporting of metric values [like task-percentage-complete against a detailed project plan] to curb gaming behavior).
- Select metrics which are easy to gather and report:
Gathering and reporting metrics (even good ones) is always a drain on your development team’s productivity. Keep this to a minimum by selecting SDLC metrics that are easiest to gather and report. Ideally, select only metrics that can be accurately gathered in an automated fashion. This can often be achieved by using powerful software development, testing, and DevOps tools such as Jira, VersionOne, Parasoft, Puppet, Azure DevOps, and the like (for example, when properly used, tools like Jira and VersionOne can easily capture and report accurate sprint velocity metrics).
- Select metrics that come closest to measuring customer value delivered:
Ultimately, the best measure of a development team’s performance is the value it delivers to your “customers” (at Info-Tech, we call this throughput). Although true and accurate measures of customer value are notoriously difficult to obtain, you should always strive to select metrics that are the best proxy available (you can read more on how to effectively measure throughput here).
- Let the development team being measured select the metrics based on current needs:
Your development team is best positioned to understand its current needs/goals and which metrics will help achieve them (and, equally, which metrics will not!). Work with your development team to collaboratively select and implement your SDLC metrics using our rules. This approach will both foster buy-in and minimize the risk of gaming, ambivalence, or unintended consequences.
- Never use metrics for reward or punishment, use them to develop your team:
Attaching SDLC metrics to rewards and/or punishment will almost certainly result in gaming behavior and unintended consequences. Stick to using metrics as a tool for helping your development team to improve its capabilities and performance (which is its own reward).
- Change your metrics over time to align with evolving team needs:
Your development team’s needs and goals will evolve over time, and so should the SDLC metrics you use. Periodically review your list of SDLC metrics and replace the least valuable ones with new ones that will help the team improve even further (e.g. the SDLC metrics selected for a new-to-Agile development team should be different from those selected for a fully mature Agile team).
- Talk to your Info-Tech engagement representative about the soon-to-be-released Info-Tech blueprint on SDLC metrics. This blueprint will provide important insights, learnings, and tools that will help your organization to select and maintain an effective set of SDLC metrics.
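The prioritization the rules above describe (fewest, highest-value, safest, easiest, closest to customer value) can be sketched as a simple weighted-scoring pass over candidate metrics. All metric names, criteria, scores, and weights below are illustrative assumptions, not an Info-Tech scoring model; your team would supply its own judgments.

```python
# Hypothetical weighted scoring of candidate SDLC metrics against the
# selection rules above: value to the team, resistance to gaming ("safety"),
# ease of automated collection, and proximity to customer value delivered.
# All names, scores, and weights are illustrative assumptions.

WEIGHTS = {"value": 0.35, "safety": 0.25, "ease": 0.20, "customer_value": 0.20}

candidates = [
    # Scores run from 1 (poor) to 5 (excellent), as judged by the team.
    {"name": "Escaped defects per release",     "value": 5, "safety": 4, "ease": 4, "customer_value": 4},
    {"name": "Sprint velocity",                 "value": 4, "safety": 2, "ease": 5, "customer_value": 2},
    {"name": "Task % complete (self-reported)", "value": 2, "safety": 1, "ease": 2, "customer_value": 1},
    {"name": "Lead time from commit to deploy", "value": 4, "safety": 4, "ease": 5, "customer_value": 4},
]

def score(metric):
    """Weighted sum of the four selection criteria."""
    return sum(WEIGHTS[c] * metric[c] for c in WEIGHTS)

# Keep only the few metrics that rise to the top (three to five, per the rules).
shortlist = sorted(candidates, key=score, reverse=True)[:3]
for m in shortlist:
    print(f"{m['name']}: {score(m):.2f}")
```

Note how the "safety" weighting pushes a gameable, self-reported metric to the bottom of the list even when it is cheap to collect.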
(SIDE NOTE: This approach to selecting and managing SDLC metrics will be most effective if your organization has successfully implemented Agile. For those members who have adopted Agile development processes, you will notice that the above approach builds on the Agile tenet of creating self-managing teams and providing them with the skills, tools, and support they need to deliver successfully. As servant leaders, Agile managers entrust their development teams with responsibility to self-organize and self-manage, then let them determine the best practical solutions to the myriad problems they will encounter. This approach requires instilling both trust and accountability in your development team and generally leads to better results than can be achieved in a command-and-control organization. Why, then, would you not also entrust them with figuring out for themselves what the right SDLC metrics are to best perform their responsibilities for the organization? This approach is far more likely to yield effective metrics for your organization than something selected “from on high” and imposed on your development team.
Additionally, Agile development practices (which involve continuous, incremental delivery of working and tested software) are well suited to gathering metrics that are good measures of the value delivered to your customers.)
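As a minimal illustration of the "easy to gather and report" rule, sprint velocity can be computed automatically from exported sprint data with a few lines of code. The data shape here is a hypothetical export; tools like Jira and VersionOne report this figure directly, so this sketch only shows how little manual effort an automatable metric should require.

```python
# Minimal sketch of automated metric computation: rolling sprint velocity,
# i.e. the average completed story points over the last N closed sprints.
# The sprint records below are hypothetical example data.

def rolling_velocity(sprints, window=3):
    """Average completed story points over the last `window` closed sprints."""
    closed = [s for s in sprints if s["state"] == "closed"]
    recent = closed[-window:]
    if not recent:
        return 0.0
    return sum(s["completed_points"] for s in recent) / len(recent)

sprints = [
    {"name": "Sprint 41", "state": "closed", "completed_points": 21},
    {"name": "Sprint 42", "state": "closed", "completed_points": 34},
    {"name": "Sprint 43", "state": "closed", "completed_points": 29},
    {"name": "Sprint 44", "state": "active", "completed_points": 8},  # excluded: not closed
]

print(rolling_velocity(sprints))  # averages only the three closed sprints
```

Because the inputs come from the team's own tracking tool rather than self-reporting, this kind of metric is also harder to game than, say, task-percentage-complete estimates.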
Want to Know More?
Traditional accounting practices are tailor-made for waterfall project management. Organizations that have transitioned to standing product teams using Agile and DevOps need to transform their accounting practices as well, or they will leave valuable capital expenditure dollars on the table.
COVID-19 has forced software companies and their suppliers to refocus efforts around prioritizing systems and workflows that are nearly 100% digital in nature. As a result, Info-Tech has observed the quick emergence of six market themes that are highly relevant post COVID-19. This note series will profile key vendors and how they fit into the post-COVID-19 world.
IBM is changing the terms of its ubiquitous Passport Advantage agreement to remove entitled discounts on over 5,000 on-premises software products, resulting in an immediate price increase for IBM Software & Support (S&S) across its vast customer landscape.
Is it true that everything that can go wrong will go wrong? Don’t bet against it.
So you’ve gone Agile. You do daily scrums, retrospectives, and all the “right” Agile ceremonies. But still your organization isn’t quite convinced. It is now critical to balance the drivers and goals of both Agile and traditional thinking in order to achieve organizational success.
Do you feel like your Agile teams are treading water – going through the motions but never going anywhere? It’s a risk, and practices such as daily standups, retrospectives, and demonstrations need to be used wisely or you risk losing discipline to meeting fatigue.
While Microsoft’s Power Automate solution is not yet a prominent player in the RPA space compared to Blue Prism, UiPath, and Automation Anywhere, its latest acquisition of Softomotive, maker of WinAutomation, demonstrates Microsoft’s commitment to maturing and expanding its RPA offerings.
Test data management tools offer you the ability to provision, mask, and govern the access and use of your test data, relieving your testing, operations, and DBA teams of manual, laborious, and error-prone tasks.