
Sunday, October 2, 2011

BOOK CLUB: How to Reduce the Cost of Software Testing (4/21)

For almost a year now, those who follow this blog have heard me talk about *THE BOOK*: when it would be ready, when it would be available, and who worked on it. This book is special in that it is an anthology; each essay can be read on its own or in the context of the rest of the book. As a contributor, I think it's a great title and a timely one. The point is, I'm excited about the book, about its premise, and about the way it all came together. But outside of all that... what does the book say?

Over the next few weeks, I hope I'll be able to answer that, and to do so I'm going back to the BOOK CLUB format I used last year for "How We Test Software at Microsoft". Note, I'm not going to do a full synopsis of each chapter in depth (hey, that's what the book is for ;) ), but I will give my thoughts as they relate to each chapter and area. Each individual chapter will be given its own space and entry. Today's entry deals with Chapter 3.

Chapter 3: Testing Economics: What is Your Testing Net Worth? By Govind Kulkarni

This is an interesting chapter in that it departs from speaking directly to testers about their role (or at least the way testers often see their role) and instead puts testers in the position of answering the following hypothetical question:

“Is your testing resulting in profit?”

The first two chapters discussed the value of testing, and those of us who are involved in testing appreciate the value it brings to the process of developing software. But how prepared are we to actually answer that question? We're not accustomed to addressing testing as though it were a profit and loss item, but Govind goes into details that every organization should be aware of, and that we as testers would be well advised to learn about as well. The bigger question, reasonably speaking, is "What is the Net Worth of our testing?"

On one side, we could spend no money on testing at all, and thus it wouldn't cost us anything. However, if we did that, the odds of a catastrophic problem in the software are greatly increased, and then the chance of people no longer using the product, asking for refunds, or even suing us goes up along with it. So yes, doing no testing would be a savings, but it would introduce other potential costs, perhaps much greater than the savings on testing. Spending money on testing is therefore better seen as an asset, or at least an investment.

Govind makes the point that testing is often looked at as a non-essential cost, one that doesn't build anything. If it all "just worked", or everybody did an amazing job up front, there wouldn't be a need to test at all. That puts the squeeze on those of us who do the testing. A theme is emerging here. Testing cannot make a claim to make actual money (well, unless your business is software testing contracts; then software testing absolutely makes money). If you work for an organization that sells software (be it as an application, web site or a service), the software itself is the revenue generator. Testing against it is a cost that allows for a potentially better marketing story for selling the software, but the testing itself doesn't generate revenue. It just doesn't, and making protestations otherwise is just plain wrong. It's like buying a car without considering the maintenance that needs to be done to keep it running. The maintenance is an expense, but it's an expense that helps protect the value of the car. You can save money by not doing it, but your car will also wear out faster and lose more value than if you actually do the maintenance.

The key point of Govind’s chapter is to help educate the tester and others in the organization about the Net Worth of Testing, to help determine whether money spent on testing various projects was indeed a sound investment. If test managers and testers were able to speak to their efforts not just in terms of test cases and code coverage, but also to demonstrate actual cost savings by showing how they can eliminate waste over the life of their testing projects, we would be in a better position to demonstrate the real net worth of software testing.

So what is Net Worth? It’s a concept that comes from finance: the value of all assets minus all liabilities. If our assets outweigh our liabilities, we are “in the black” and are profiting, or at least we have more assets than we have liabilities. If liabilities outweigh assets, then we are “in the red”, or in debt. In some areas that may be acceptable, as in the case of a home mortgage, because over time the full asset will belong to the individual. In business, though, if the net income is less than the net outflow, that’s not going to be a healthy long-term situation.
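To put numbers on it, here is a minimal sketch in Python; the figures are purely hypothetical and are only meant to show the "in the black" vs. "in the red" distinction:

```python
def net_worth(assets, liabilities):
    """Net worth is simply the value of all assets minus all liabilities."""
    return assets - liabilities

# Hypothetical figures, purely for illustration:
print(net_worth(assets=120_000, liabilities=80_000))  #  40000 -> "in the black"
print(net_worth(assets=60_000, liabilities=95_000))   # -35000 -> "in the red"
```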

So for a business, sales matter, but they have to outpace liabilities for the business to be profitable. Thus the net worth of every action in the business comes into question, and yes, if that business is software sales, then testing has an effect on that bottom line… but in what way?

With testing, the idea is that any defect discovered by testers is considered an asset on the side of the test team. Any defect found “outside” of normal testing channels (by customers, by support, etc.) is considered a liability. Defects found in production additionally carry a higher weight than those found in testing, depending on their severity (cosmetic issues will have less weight than a system crash).

What makes this idea interesting is the fact that, for a defect to really be considered on the “asset” side of the ledger, it has to be more than just found. It has to be found, examined, confirmed, fixed, and then tested again to confirm the fix works. Any defect that is found but not fixed is considered a liability. Thus, as Kaner, Falk, and Nguyen pointed out in the ’90s in their book “Testing Computer Software”, the best tester is not the one who finds the most bugs; it’s the one who finds the most bugs that get fixed. In this case, the Net Worth model fits very much in line with that philosophy.
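To see how such a ledger might be tallied, here is a toy sketch in Python. The severity weights, the production multiplier, and the function names are my own assumptions for illustration; they are not taken from Govind's chapter, and a real model would substitute costs your own organization has actually measured:

```python
# Assumed severity weights and escape multiplier -- illustrative, not from the book.
SEVERITY_WEIGHT = {"cosmetic": 1, "minor": 3, "major": 8, "crash": 20}
PRODUCTION_MULTIPLIER = 5  # defects that escape to production assumed to cost far more

def defect_value(severity, found_in_testing, fixed_and_verified):
    """Score one defect: it counts as an asset only if it was found in testing
    AND fixed and re-tested; otherwise it sits on the liability side."""
    weight = SEVERITY_WEIGHT[severity]
    if found_in_testing and fixed_and_verified:
        return +weight                       # asset: caught and closed before release
    if found_in_testing:
        return -weight                       # liability: found but still open
    return -weight * PRODUCTION_MULTIPLIER   # liability: escaped to production

def testing_net_worth(defects):
    """Sum the ledger: a positive total means testing is 'in the black'."""
    return sum(defect_value(*d) for d in defects)

# Each tuple is (severity, found_in_testing, fixed_and_verified).
defects = [
    ("crash", True, True),    # found and fixed before release: +20
    ("minor", True, False),   # found but deferred: -3
    ("major", False, False),  # reported by a customer in production: -40
]
print(testing_net_worth(defects))  # 20 - 3 - 40 = -23, i.e. "in the red"
```

The point of the sketch is simply that a defect only lands on the asset side once it has been found, fixed, and verified; everything else, including anything that escapes to production, piles up on the liability side.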

So how do we approach this whole notion of Net Worth in our day to day testing? The goal is to have more assets and fewer liabilities. Sounds great, but how can we practically do that? It comes down to having a practical and focused test approach and a realistic understanding of when tests are run and under what circumstances. Did we miss a particular test case or scenario? Can we include it in future testing? Do we have the appropriate time to test? Is our environment indicative of real world scenarios so that we are looking at a true representation of what our customers can expect to see?

It’s important to note that simply adding more test cases runs up against the law of diminishing returns. Sheer volume of test cases will only carry you so far, and it can actually cause delays; or you end up with comprehensive test cases but not enough time to actually complete them, which means you have cases you cannot run, which opens up the risk of additional liabilities.

On the opposite side, we need to look closely at the reasons why issues that are reported are not fixed. There may be very good reasons, but I think that if we as a testing organization start looking at those issues for what they are, liabilities that sit there like a credit card charge, then we might be more active and focused in trying to understand why they are not being fixed, and what we might be able to do to see that they ultimately do get fixed.

I think this is an intriguing idea. By looking at the defect tracking system, considering the rate of issues getting fixed vs. not fixed, and treating those unfixed issues as the liabilities (again, debts) they are, we arm ourselves both with a unique way of thinking about those defects and with a strong purpose in advocating for why they should be fixed.
