For almost a year now, those who follow this blog have heard me talk about *THE BOOK*: when it will be ready, when it will be available, and who worked on it. This book is special in that it is an anthology. Each essay can be read on its own, or in the context of the rest of the book. As a contributor, I think it's a great title and a timely one. The point is, I'm excited about the book, its premise, and the way it all came together. But outside of all that... what does the book say?
Over the next few weeks, I hope to answer that, and to do so I'm going back to the BOOK CLUB format I used last year for "How We Test Software at Microsoft". Note: I'm not going to give a full, in-depth synopsis of each chapter (hey, that's what the book is for ;) ), but I will share my thoughts on each chapter and its subject area. Each chapter will get its own entry. Today's entry deals with Chapter 4.
Chapter 4: Opportunity Cost of Testing by Catherine Powell
Catherine starts the chapter with something we all know but rarely directly address: testing is filled with roads that are never taken. Any time we perform a test, we are deliberately not performing hundreds or thousands of other tests. Much of this is grounded in practicality; there is no way to ever do “complete testing”, even with simple programs. These “roads not taken” do have a cost, though. They are opportunities that we do not explore, and therefore avenues that could become issues later. We understand that somewhat vaguely; this chapter puts the idea into clearer focus.
Opportunity costs are what we might gain if we traveled down a different road. I make a certain amount of money at the job I work now. Could I make more at another job? It’s possible. It’s also possible that the other company could fold and I could make a lot less money, or, for a time, no money at all. Either way, the opportunity cost is there.
As we test, we also make choices that are often driven by time constraints. Do we perform functionality tests or stress tests if we have time to run only one? If we run functionality tests, we incur the opportunity cost of not running stress tests, and vice versa. It’s a trade-off we sometimes have to make, but now we can see that there is indeed a cost to that choice.
Many testing theories work great in a lab or in a hypothetical situation, but every project operates under real-world constraints. There is never an unlimited amount of time, resources, or equipment, and all of them are squeezed further as the ship date approaches. Because of that, some opportunities must by necessity be left on the table, and those opportunity costs are incurred.

So opportunity costs are genuine. It’s great that we understand that, but how can we actually apply this information effectively? Put simply, certain tests need to be given priority, and more than just a priority, a clear understanding of why those tests are a priority. Installation and upgrade testing may be given top priority because, if those tests fail, the software will not work at all. That outweighs the opportunity cost of discovering how many browsers can fully support the application. So choices are made, and understanding the opportunity costs helps put those priorities in perspective. It’s not so much what you are willing to have; it’s what you are mindfully, and with deliberation, willing to give up so that the most important areas do get covered.
To help us with this process, Catherine introduces the idea of a decision table, which captures a few details that help us determine what we will test and what its value would be. With it, we can examine each test area and order the areas by priority and, hypothetically, by direct and opportunity cost. Each item has the same criteria: the actual task, the time it will take, the effect of not doing it, and the item’s priority. I list the priority last because you need a clear understanding of the other criteria before you can judge what the priority actually is.
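To make that concrete, here is a minimal sketch in Python. This is my own illustration, not code from the chapter, and all the task names, hours, and priorities are hypothetical; it just models the four criteria above and shows how a time budget turns into a "run" list and a "skip" list, where the skip list is the opportunity cost made explicit:

```python
# A minimal decision-table sketch (my own illustration, not from the chapter).
# Each row holds the criteria above: the task, the time it takes, the effect
# of skipping it, and a priority assigned after weighing the other fields.
from dataclasses import dataclass

@dataclass
class TestTask:
    name: str
    hours: int              # estimated time to execute
    effect_if_skipped: str  # what we risk by not doing it
    priority: int           # 1 = highest; set last, once the rest is understood

def plan(tasks, budget_hours):
    """Fill the available time in priority order; whatever doesn't fit
    is the opportunity cost we are knowingly accepting."""
    chosen, skipped, remaining = [], [], budget_hours
    for task in sorted(tasks, key=lambda t: t.priority):
        if task.hours <= remaining:
            chosen.append(task)
            remaining -= task.hours
        else:
            skipped.append(task)
    return chosen, skipped

# Hypothetical numbers, purely for illustration.
tasks = [
    TestTask("Install/upgrade tests", 8, "product may not run at all", 1),
    TestTask("Core functionality tests", 16, "major features broken", 2),
    TestTask("Stress tests", 12, "unknown behavior under load", 3),
    TestTask("Cross-browser coverage", 10, "some browsers unsupported", 4),
]

chosen, skipped = plan(tasks, budget_hours=30)
print("Running:", [t.name for t in chosen])
print("Opportunity cost:", [(t.name, t.effect_if_skipped) for t in skipped])
```

With a 30-hour budget, the install and functionality suites fit and the stress and browser work is knowingly deferred; the point of the table is that the deferral is a recorded, deliberate decision rather than an accident.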
Opportunity costs are not set in stone; many of the details can change. The most common variable is time. If something happens that extends the schedule, opportunity costs drop, because with more time, more tests can be performed. Of course, this works the other way, too. When time estimates are cut, opportunity costs rise dramatically, and the entire decision table may need to be reworked. What was once an acceptable trade-off of priorities may no longer be, and the priorities may need to be reordered or completely revised. We could conceivably add more people to the project, but there’s a cost there, too. If we have enough time to bring them up to speed, it’s not too bad a trade-off. If we are days from shipping, adding a new person means training them in what to do, which brings a whole new set of opportunity costs in the form of tests that are not performed while we train the new tester.
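Continuing the same hypothetical sketch (reusing the `plan()` and `tasks` defined above), a schedule cut is just a smaller budget, and the growing skip list makes the rising opportunity cost visible:

```python
# Reusing plan() and tasks from the sketch above: as the budget shrinks,
# the skipped list grows; that growth is the rising opportunity cost.
for budget in (30, 20):
    chosen, skipped = plan(tasks, budget_hours=budget)
    print(f"{budget}h budget -> skipping: {[t.name for t in skipped]}")
```

Notice that a naive fill at 20 hours skips the high-priority functionality suite (it no longer fits) while still squeezing in the cheaper stress tests. That kind of outcome is exactly why the table may need to be reworked rather than simply truncated when time is cut.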
Ultimately, everything we do has a cost, not just in terms of direct action, but in terms of the things we do not do. There’s always more we could do, and never enough time to do it all. There will always be opportunity costs in testing. We cannot isolate ourselves from them, but we can understand them, and if we follow Catherine’s advice, we can even turn them to our advantage.