Software Quality Metrics

Software quality metrics are a very interesting topic, and in my experience there doesn’t seem to be a widely used or accepted list of metrics for measuring software quality. After many years of thinking about the topic, and many years of trialling different metrics, I believe the number one metric that accurately measures software quality is defects in production. Quality software won’t have defects in production, so that’s the metric I believe we should use to judge whether testing has been done successfully.
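If you want a number you can track from release to release rather than a raw count, one common way to express this (a rough sketch; the exact counting rules will vary by team) is the defect escape rate:

    defect escape rate = defects found in production / (defects found in production + defects found before release)

The closer that ratio gets to zero, the more of the bugs that existed were caught by your testing rather than by your customers.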

Various organizations I have worked in have used this metric in different ways. One organization called each production defect a ‘quality spill’. Another used a mean time to failure metric, which is often used to measure the reliability of a production system or machine: think of your car, and how long it runs before it breaks down.
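For anyone who hasn’t come across it, mean time to failure is simply the total operating time divided by the number of failures observed in that time:

    MTTF = total operating time / number of failures

So a system that runs for 1,000 hours and fails four times in that period has an MTTF of 250 hours, and an MTTF that grows from release to release suggests quality is improving.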

The issue I have with some other software quality metrics is that they motivate people the wrong way. For example, having a metric about bug count encourages testers to report bugs. But it can also encourage them to report bugs that aren’t bugs, or to split one major bug into multiple bug tickets, so the metrics look good. Besides, is a high bug count (in test) a bad thing? Doesn’t it mean you caught all the bugs? Or does a low bug count mean the developers are doing a good job? Or that you didn’t catch all the bugs? The numbers can be read either way. That’s why defects in production are the true measure of software quality: no one wants bugs in production, and they cause all sorts of headaches. In the last few days there have been numerous embarrassing, public computer glitches, some related to the beginning of the year 2010. Have we become complacent after Y2K?

  • 3 Jan 2010: “Businesses stung by BOQ computer bug” (link)
  • 3 Jan 2010: “Bank of Queensland’s (BOQ) Eftpos terminals go down costing retailers thousands” (link)
  • 3 Jan 2010: “Chaos as check-in problems affect Qantas” (link)
  • 3 Jan 2010: “Flights delayed after check-in system malfunction” (link)
  • 16 Dec 2009: “Check-in failure sparks Brisbane Airport delays” (link)
  • 10 Dec 2009: “Computer glitch brings Brisbane trains to a standstill” (link)
  • 16 Nov 2009: “Computer glitch delays Qantas flights” (link)

What’s interesting is that the Amadeus system Qantas uses failed in November and failed again today. The lesson here is that if you do discover bugs in production, make sure you actually fix them.

One thought on “Software Quality Metrics”

  1. I do not think we have become complacent, but companies are cutting corners to save a buck, and it usually starts with testing: “only do one round of execution”, “defect retesting can be done during UAT”, “why does the test team need that many resources?” These are but a few examples of how companies cut corners to save money, and in the long run it ends up costing more. I don’t think we have seen the end of Y2K-style glitches; as we become increasingly reliant on 24/7 systems, these types of issues will keep leading the way unless companies start to take testing more seriously and stop worrying so much about how much it is costing the project!
    Without a good testing framework in place to test your system, you can bet your bottom dollar there are going to be bugs in your production system, and it’s only a matter of time before the public sees them.
