“Scope, Time, Price and Quality. Pick any three of those.” You often hear this principle cited, and see it applied in real life. More often than not, quality is the parameter that gets sacrificed by project stakeholders. And this is really, really shocking. I’ll explain why in a minute, but let me first introduce the quality metric.
Software quality is a percent, from 0 to 100.
Pick a use case and work it through with your software. If it can be completed without any issues, the software has 100% quality on this use case. Rather intuitive, isn’t it? Now, if you encounter a show-stopper bug on the way, the software quality here is 0%. Also pretty obvious, right? And what if there are some non-critical bugs in the use case? Then we can measure or estimate the share of users who will complete the use case nevertheless. This user conversion rate is the quality.
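A minimal sketch of this metric, with a hypothetical funnel (the function name and all figures are illustrative, not from the article):

```python
def use_case_quality(users_started, users_completed):
    """Quality of one use case: the share of users who complete it.

    100% if nobody is stopped by an issue, 0% on a show-stopper bug,
    and the user conversion rate for anything in between.
    """
    if users_started == 0:
        raise ValueError("no users attempted the use case")
    return 100.0 * users_completed / users_started

# Hypothetical checkout flow: 1000 users start, 820 finish
# despite a confusing form along the way.
print(use_case_quality(1000, 820))  # 82.0
```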
Obviously, this metric combines the effects of both technology bugs (like, “the web page doesn’t work in IE6”) and conceptual issues (like, “why the hell does your web site require my social security number if I just want to order a book!?”). Sometimes that is fine, and when it isn’t, care must be taken to tell these effects apart.
Because software can’t be described solely by functional requirements, we also need a way to assess the quality of non-functional requirements. Some of them, for instance availability, performance, accessibility, and design and usability, are already accounted for in the functional quality metric, because problems with them also reduce user conversion rates. It is sometimes possible to tell whether functional bugs or non-functional problems are responsible for low quality by analyzing the point at which most users stop working through the use case.
Other non-functional features, notably extendability, flexibility and scalability, can be expressed in terms of development time. Ideally, it should be possible to implement a change, be it the introduction of a new feature or an increase of system capacity, with exactly the same amount of development effort, whether it is the second or the nineteenth change to the system. If that is really the case, you have 100% quality with respect to this change.
In practice, though, the architecture of the system gradually degrades with every change, and the effect known as technical debt arises. With technical debt, each change costs somewhat more than the previous one. When a change requires as much development effort as re-implementing the whole system from scratch, the quality is finally down to 0%. In reality, the values lie somewhere between 25% and 75%.
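One possible reading of this definition as a number (a sketch, not the article’s own formula; the effort figures are hypothetical):

```python
def change_quality(change_effort, rewrite_effort):
    """Quality with respect to one change: 0% when the change costs as
    much as re-implementing the whole system, approaching 100% when the
    change is cheap relative to a full rewrite. One simple linear model.
    """
    if rewrite_effort <= 0:
        raise ValueError("rewrite effort must be positive")
    q = 100.0 * (1.0 - change_effort / rewrite_effort)
    return max(0.0, min(100.0, q))  # clamp to the 0..100 range

# Hypothetical: a feature takes 15 person-days in a system that would
# take 60 person-days to re-implement from scratch.
print(change_quality(15, 60))  # 75.0
```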
Looking at a software system from different points of view and assessing its quality from the corresponding angles will produce a set of quality points. Obviously, both the number of points and their particular values depend on the person performing the assessment. There are a number of reasons for that, be it the subjective ranking of the importance of particular use cases, or subjective differences in estimates that had to be made in the absence of reliable measurements of user behavior.
This makes the metric subjective. In practice, though, its subjectivity is not much of an issue. The real issue is that quality is often not assessed at all. Quality data points reflect either user conversion rates or additional development time. Both can be straightforwardly converted into money lost or additionally spent.
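That conversion could look like this (a sketch under simple assumptions; the inflation model and all figures are hypothetical):

```python
def lost_revenue(quality_percent, attempts, value_per_completion):
    """Revenue lost on a use case because users drop out before completing it."""
    return (1.0 - quality_percent / 100.0) * attempts * value_per_completion

def extra_dev_cost(quality_percent, baseline_change_cost):
    """Extra money spent on a change because of technical debt: at 100%
    quality the change costs the baseline; lower quality inflates it
    proportionally (one simple model, not the only possible one)."""
    if quality_percent <= 0:
        raise ValueError("0% quality means a full rewrite; no finite factor")
    return baseline_change_cost * (100.0 / quality_percent - 1.0)

# 82% quality on a $40 order flow attempted 1000 times a month:
print(lost_revenue(82.0, 1000, 40.0))  # 7200.0
# 75% change quality against a $10,000 debt-free change cost:
print(extra_dev_cost(75.0, 10_000.0))
```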
Software quality can be directly translated into revenue and/or TCO. Nevertheless, it is only rarely the case that the quality of a software system is assessed regularly, let alone that the effects of scope extension or time and price reduction on quality are analyzed.
Never wait for miracles.