The Software Testing Club recently put out an eBook called "99 Things You Can Do to Become a Better Tester". Some of them are really general and vague. Some of them are remarkably specific.
My goal for the next few weeks is to take the "99 Things" book and see if I can put my own personal spin on each of them, and make a personal workshop out of each of the suggestions.
Suggestion #96: Be prepared, you won’t catch all the bugs, but keep trying - Mauri Edo
Suggestion #97: Be prepared, all the bugs you raise won’t get fixed - Rosie Sherry
This is really two sides of the same coin, so it pays to focus on them together. In the world of testing, nothing rolls downhill faster than "blame". If everything goes great, the programmers are brilliant. If there are problems in the field, then the testers are marched in and demands are made to know why we didn't find "that bug". Sound familiar? I'm willing to bet it does, and if it doesn't, then count yourself very lucky.
This comes down to the fact that, often, an unrealistic expectation is placed on software testers. We are seen as the superhuman element that will magically make everything better because we will stop all problems from getting out. Show of hands, has that ever happened for anyone reading this? (crickets, crickets) … yeah, that's what I thought.
No, this isn't going to be about being a better shield, or making a better strategy. Yes, this is going to be about advocacy, but maybe not in the way that we've discussed previously. In short, it's time for a different discussion with your entire organization around what "quality" actually is and who is responsible for it.
Workshop #96 & #97: Focus on ways to get the organization to discuss where quality happens and where it doesn't. Try to encourage an escape from last-minute tester heroics, and instead build a culture where quality is an attribute endorsed and emphasized from day one. Get used to the idea that the bug that makes its way out to the public is equally the fault of the programmer(s) who put it there and the tester(s) who didn't find it. Concentrate quality efforts on the areas that matter the most to the organization and the customers. Lobby to be the voice of that customer if it's not coming through loud and clear already.
Put simply, even if we could catch every single bug that could be found, there would not be enough time in the day to fix every single one of them (and I promise, the number of possible bugs is way higher than even a rough estimate could give). The fact of the matter is, bugs are subjective. Critical crash bugs are easy to make the case for. Hopefully, those are very few and far between if the programmers are taking appropriate steps to write tests for their code, using build servers that take advantage of Continuous Integration, and practicing proper versioning.
There are a lot of ways that a team can take steps to make for better and more stable code very early in the development process. Contrary to popular belief, this will not negate the need for testers, but it will help to make sure that testers focus on issues that are more interesting than install errors or items that should be caught in classic smoke tests.
Automation helps a lot with repetitive tasks, or with areas that require configuration and setup, but remember, automated tests are mostly checks to make sure a state has been achieved; they are less likely to help determine if something in the system is "right" or "wrong". Automated tasks don't make judgment calls. They look at quantifiable aspects and, based on values, determine whether one thing should happen or another. That's it. Real human beings have to make decisions based on the outcomes, so don't think that a lot of automated "testing" will make you any less necessary. It will just take care of the areas that a machine can sort through. Things that require greater cognitive ability will not be handled by computers. That's a blessing, and a curse.
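To make that distinction concrete, here's a minimal sketch of what a check actually is, written with pytest purely as an example (the discount function and its values are invented for illustration):

```python
def apply_discount(price, percent):
    # Hypothetical function under test, invented for this example.
    return round(price * (1 - percent / 100.0), 2)

def test_discount_reaches_expected_state():
    # The machine can verify that this exact state was achieved...
    assert apply_discount(100.00, 15) == 85.00
    # ...but it cannot judge whether a 15% discount is "right" for this
    # customer, this market, or this release. A human makes that call.
```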
Many issues are going to be state specific; running automated tests may or may not trigger errors to surface, or at least, they may not do so in a way that will make sense. Randomizing tests and keeping them atomic helps with being able to run tests in any order, but that doesn't mean you will ever hit the state that occurs when the 7,543rd configuration of that value is set on a system, or when the 324th concurrent connection is made, or when the access logs record over 1 million unique hits in a 12-hour period. The point here is, you will not find everything, and you will not think up every potential scenario. You just won't! To believe you can is foolish, and to believe anyone else can is wishful thinking on steroids.
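As for what "atomic" means in practice, here's a minimal sketch, again assuming pytest (the fixture and names are illustrative): each test builds its own state from scratch, so order never matters.

```python
import pytest

@pytest.fixture
def cart():
    # Every test gets a fresh cart, so no test depends on
    # what happened to run before it.
    return []

def test_empty_by_default(cart):
    assert cart == []

def test_add_item(cart):
    cart.append("widget")
    assert cart == ["widget"]

# With atomic tests like these, a plugin such as pytest-randomly can
# shuffle the execution order on every run without breaking anything.
```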
Instead, let's have a different discussion.
- What are ways that we can identify testing processes that can be done as early as possible?
- Can we test the requirements?
- Can we create tests for the initial code that is developed (yes, I am a fan of TDD, ATDD and BDD processes; there's a small sketch of the TDD rhythm just after this list)?
- Can we determine quickly if we have introduced an instability (CI servers like Jenkins do a pretty good job of this, I must say)?
- Can we create environments that will help us parallelize our tests so we know more quickly if we have created an instability (oh, cloud virtualization, you really can be amazing at times)?
- Can we create a battery of repetitive and data-driven checks that will help us see if we have an end-to-end problem (answer is yes, but likely not on the cheap. It will take real effort, time and coding chops to pull it off, and it will need to be maintained; see the second sketch after this list)?
- Can we follow along and put our eyes into areas we might not think to go on our own in interesting states (yes, we can create scripts that allow us to do exactly this; they are referred to as "taxis" or "slideshows", but again, they take time and effort to produce)?
- Can we set up sessions where we can create specific charters for exploration (answer is yes, absolutely we can)?
- Are there numerous "ilities" we can look at (such as usability, accessibility, connect-ability, secure-ability)?
- Can we consider load, performance, security, negative, environmental, and other aspects that frequently get the short end of things?
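On the TDD question above, here is a small sketch of the rhythm, with names invented for illustration: the test is written before the code it exercises, then just enough code is written to make it pass.

```python
# Step 1: the test is written first, and it fails, because slugify()
# does not exist yet. (slugify is an invented example, not a real API.)
def test_slugify_lowercases_and_hyphenates():
    assert slugify("Hello World") == "hello-world"

# Step 2: write just enough code to make the test pass, then refactor.
def slugify(text):
    return text.strip().lower().replace(" ", "-")
```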
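And on the battery of data-driven checks, one common shape for it, sketched with pytest's parametrize (the login table and fake_login stand-in are hypothetical; in practice, building and maintaining the data is where the real effort goes):

```python
import pytest

# In a real suite this table would likely be generated or loaded from
# data files; keeping it current is the maintenance cost mentioned above.
LOGIN_CASES = [
    ("valid_user", "valid_pass", True),
    ("valid_user", "wrong_pass", False),
    ("", "valid_pass", False),
]

def fake_login(user, password):
    # Hypothetical stand-in for the system under test.
    return user == "valid_user" and password == "valid_pass"

@pytest.mark.parametrize("user,password,expected", LOGIN_CASES)
def test_login(user, password, expected):
    assert fake_login(user, password) == expected
```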
Even with all of that, and even with the most dedicated, mindful, enthusiastic, exploratory-minded testers that you can find, we still won't ferret out everything. Having said that, if we actually do focus on all these things early on, and we actually do involve the entire development team, then I think we will be amazed at what we find and how well we deal with it. It will take a team firing on all cylinders, and it will also take focus and determination: a willingness to work through what will likely be frustrating setbacks, lots of discoveries, and the reality that, no matter how hard we try, we can't fix every issue and still remain viable in the market.
We have to pick and choose, we have to be cautious in what we take on and what we promise, and we have to realize that time and money slide in opposite directions. We can save time by spending money, and we can save money by spending time. In both circumstances, opportunity will not sit still, and we have to do all we can to somehow hit a moving target. We can advocate for what we feel is important, sure, but we can't "have it all". No one can. No one ever has. We have to make tradeoffs all the time, and sometimes we have to know which areas are "good enough" or which areas we can punt on and fight another day.
Bottom Line:
No matter how hard we try, no matter how much we put into it, we will not find everything that needs to be found, and we will never fix everything that needs to be fixed. We will have plenty to keep us busy and focused even with those realities, so the best suggestion I can make is "make the issues we find count, and maximize the odds that they will be seen as important". Use the methods I suggested many posts back as they relate to RIMGEA, and try to see if many small issues might add up to one really big issue. Beyond that, like Mauri said at the beginning, just keep trying, and just keep getting better.