For almost a year now, those who follow this blog have heard me talk about *THE BOOK*: when it will be ready, when it will be available, and who worked on it. This book is special in that it is an anthology. Each essay can be read by itself, or in the context of the rest of the book. As a contributor, I think it's a great title and a timely one. The point is, I'm already excited about the book, its premise, and the way it all came together. But outside of all that... what does the book say?
Over the next few weeks, I hope to answer that, and to do so I'm going back to the BOOK CLUB format I used last year for "How We Test Software at Microsoft". Note, I'm not going to do a full in-depth synopsis of each chapter (hey, that's what the book is for ;) ), but I will give my thoughts as they relate to each chapter and area. Each individual chapter will be given its own space and entry.
We are now into Section 2, which is sub-titled "What Should We Do?". As you might guess, the book's topics shift here. We spend less time on the real but sometimes hard-to-pin-down notions of cost, value, economics, opportunity, and time. We have defined the problem; now we are talking about what we can do about it. This part covers Chapter 8.
Chapter 8: Session Based Test Management by Michael Kelly
There's always something "new" on the horizon when it comes to testing. I've been in organizations where a 100 page test plan with every test scripted out was considered essential to the success of a project. I've also worked on projects where almost no documentation outside of the issue tracker existed. I've been in organizations where manual testing reigned supreme, and in places where automation was the most important thing. Through it all, there was a lot of attention to process, metrics, and busywork that made it look like testing was happening (and to be fair, we did do quite a bit of testing). Still, looking back and considering what I've learned and observed, I think, and Michael Kelly seems to agree with me, that there is a lot of waste in our pursuit of testing process over actual effective testing.
Michael makes the case that risk and coverage are the most important areas to consider when constructing our approach to testing. If there's not a lot of risk in a given area where change has occurred, I'm less likely to give that area much attention. If, however, there is a significant amount of change or modification to an area, I recognize that there is a significant risk that things can go wrong, and so I focus my attention on that area. There's no lack of tools and processes to help testers determine metrics regarding their code coverage and feature coverage. We can crank out test cases based on these tools and come up with thousands of test cases to cover "everything" (and yes, I am using that "everything" in the ironic sense; we know that it's impossible to test everything).
As has already been woven into the narrative of this book, many of the authors (including me) are fans of an approach called exploratory testing. There are many articles that discuss exploratory testing, but for our purposes, and as a quick and grossly oversimplified summary, it's an active questioning of the product under test, introducing variation into the testing to see how the application/component/unit behaves. At the heart of exploratory testing is the idea of understanding risks as they relate to coverage. Exploratory testing is very cost effective, in that the questions and observed behavior guide and help inform the testing. Rather than performing 100 scripted tests, exploratory testing might provide the same coverage and explore the same areas of risk with just 10 highly variable and adaptive tests.
Exploratory testing isn't just about testing. There are aspects of discovery, design, and execution, and all of these are actively pursued at the same time, each element helping to inform the others. The exploratory testing approach allows the tester a great deal of latitude, but also requires that they keep a lot of details straight as they work. Exploratory testing is not random; it's structured and designed, but it allows variation and additional discovery to inform its next actions rather than being rigidly tied to a script.
One of the best tools to aid exploratory testing is the process of Session-Based Test Management (SBTM). Rather than focus on a nebulous goal of "testing", SBTM changes the discussion and helps target what you are doing and when you are doing it. Rather than say "I tested the back end interactions of a database", we can say "I performed six sessions of testing the back end interactions of a database, and here's what I found in each of the sessions". It's a quantifiable metric, and it allows the tester to be focused for a specific period of time on a given objective. SBTM also utilizes the idea of a "test charter". Charters can be as specific as necessary (and they really can't be too specific) and help to keep the tester focused on the primary goal of that particular session. Gather together 10 or 20 such SBTM sessions, and you can see a substantial, focused amount of testing on very specific areas.
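To make the "countable sessions" idea concrete, here's a minimal sketch of what a session record might look like in code. The field names and structure are my own illustration, not a schema from the chapter or from any SBTM tool:

```python
from dataclasses import dataclass, field

# Illustrative sketch of an SBTM session record; fields are assumptions,
# loosely following the charter/session ideas described above.
@dataclass
class SessionReport:
    charter: str                  # the focused goal of this session
    duration_minutes: int         # sessions are time-boxed
    bugs: list = field(default_factory=list)   # issues found during the session
    notes: str = ""               # observations and follow-up questions

sessions = [
    SessionReport("Explore back-end database interactions: bulk inserts", 90,
                  bugs=["timeout on bulk insert"]),
    SessionReport("Explore back-end database interactions: error handling", 60),
]

# "I tested the database" becomes a countable, reportable statement:
summary = (f"{len(sessions)} sessions run, "
           f"{sum(len(s.bugs) for s in sessions)} issues found")
print(summary)
```

The point of keeping records like this isn't the code; it's that each session is a discrete, reportable unit of work tied to a specific charter.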
The first step is to create an overall testing mission. From there, we can create as many specific testing charters as we need. Each session focuses on a single charter and seeks to answer the questions that prompted the need for that session. By breaking down the coverage needed and determining the risk areas, the tester can define more charters that fit the mission, and by doing so, develop more SBTM sessions. Often charters become their own missions, and sub-charters developed under them necessitate their own sessions. This granularity is by design; often the more charters are developed, the more charters are spun off from them. After defining as many as we can, we prioritize them, focus on the highest-priority charters first, and then work our way down.
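The mission-to-charters breakdown and risk-first ordering described above can be sketched in a few lines. The mission, charter wording, and 1-to-5 risk scale here are all assumptions for the sake of the example:

```python
# Illustrative sketch (my own, not from the chapter): break a mission
# into charters, then work the highest-risk charters first.
mission = "Assess the reliability of the order-processing back end"

# (charter, risk) pairs; a higher risk score means higher priority
charters = [
    ("Explore report generation formatting", 1),
    ("Explore bulk order imports for data loss", 5),
    ("Explore order cancellation edge cases", 3),
]

# Prioritize: the highest-risk charters become the first sessions
prioritized = sorted(charters, key=lambda c: c[1], reverse=True)
for charter, risk in prioritized:
    print(f"[risk {risk}] session charter: {charter}")
```

In practice the risk scores would come from judgment about where change has occurred and what could go wrong, not from a fixed list like this one.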
One important thing to consider about both ET and SBTM is that they are approaches; they are not techniques unto themselves. ET and SBTM can be used with any type of testing (functional, para-functional, usability, regression, even unit testing). They work in traditional environments as well as in Agile environments. They work with compiled apps, web apps, services, etc. As you focus your attention on ET and SBTM, you will get better at estimating testing needs for given functionality, because you will have practiced the questions and considered the parameters necessary to check the coverage needed for the given risk areas. As we get more practice, ET and SBTM become second nature, to the point where they become our primary way of communicating about our testing. Direct, countable, reportable, and focused, our testing can come out of the realm of the wandering (which is what many people think of exploratory testing when they first hear of it). Exploratory testing lets us ask questions; SBTM defines the constraints and time in which we can conduct that particular interview. The net result is considerably more focused attention and subsequently less waste.