Vector Software, a provider of software solutions for testing safety- and mission-critical embedded applications, has released a new white paper that helps organisations measure the impact poor software quality has on their bottom line. Entitled “Quantifying The Cost of Fixing vs Preventing Bugs”, the paper enables companies to put concrete figures on how much buggy code costs their organisation, and shows how an automated, repeatable software testing process helps companies improve software quality while reducing time-to-market.
Inspired by the book “How Google Tests Software” (Whittaker, Arbon, Carollo), the paper draws on figures and best practices that Google established while refining its famously robust software release process. It features a worked example based on a mid-sized automotive project, using figures derived from the Google example, and it provides a formula that lets managers estimate what poor software quality is costing their own organisation. Most importantly, the paper shows that a measured approach to test automation can significantly reduce these costs. By embracing prevention rather than fixing, organisations can free up valuable engineering resources for innovation and new product development, enabling them to carve out a greater share of the markets they serve.
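The white paper's own formula is not reproduced in this release. As a purely illustrative sketch, a manager's back-of-the-envelope estimate might follow the widely cited rule of thumb that the cost of fixing a bug grows roughly tenfold at each later development stage; the function name, stage counts, and dollar figures below are hypothetical, not taken from the paper:

```python
# Illustrative sketch only: the white paper's actual formula is not
# reproduced here. Assumes the common rule of thumb that fixing a bug
# costs roughly ten times more at each successive development stage.

def bug_cost_estimate(bugs_per_stage, base_fix_cost=100.0, multiplier=10.0):
    """Estimate the total cost of fixing bugs, given how many were
    caught at each successive stage (e.g. requirements, coding,
    integration, system test, field)."""
    return sum(
        count * base_fix_cost * multiplier ** stage
        for stage, count in enumerate(bugs_per_stage)
    )

# Hypothetical mid-sized project, 105 bugs total in both scenarios.
late_fixing = bug_cost_estimate([10, 20, 40, 25, 10])       # most bugs found late
early_prevention = bug_cost_estimate([55, 30, 15, 4, 1])    # detection shifted earlier

print(f"fix-late scenario:   ${late_fixing:,.0f}")
print(f"prevention scenario: ${early_prevention:,.0f}")
```

With these assumed numbers, shifting detection earlier cuts the estimated cost by roughly an order of magnitude, which is the kind of comparison the paper's formula is intended to support.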
“It's well established that eliminating bugs early in the development process costs far less than fixing them at the end of a project,” said Bill McCaffrey, Chief Operating Officer, Vector Software. “Organisations that prevent bugs instead of fixing them have a clear advantage over those that don’t, and now they have an easy way to obtain actionable data about the size and cost of their software quality issues.”