The Software Testing Club recently put out an eBook called "99 Things You Can Do to Become a Better Tester". Some of them are really general and vague. Some of them are remarkably specific.
My goal for the next few weeks is to take the "99 Things" book and see if I can put my own personal spin on each of them, and make a personal workshop out of each of the suggestions.
Suggestion #64: Learn the difference between Severity and Priority - Dan Ashby
Suggestion #65: Would add to that and recommend that everyone in the company knows what they mean by severity and priority. If people have different understanding of these words then there's no actual communication. - Kinofrost
Suggestion #66: Accept that not all bugs you raise will be prioritized to be fixed - Steven Cross
I was originally going to post these three in separate posts, but came to the realization that my suggestions to improve on each of these areas would result in me saying the same thing. Hence, these three are now included together, and the original post has been expanded.
Development time is finite. Depending on the team, the skill, the projects, and the customer needs, there are certain issues that need to be fixed, and they need to be fixed quickly. Severity and Priority are terms we all understand, but we understand them subjectively. What is severe to one group may not seem so to another (some severities are obvious; few people will consider a system crash to be low severity, though the ability to make that crash happen may impact how much attention the team gives it).
Priority is also subjective. Different parts of the organization have different priorities. In a typical organization, making sure that features support Internet Explorer 7 might fall on the very low side of priority… unless your largest customer, responsible for a large chunk of the organization's revenue, demands that a new feature render properly in IE7. I can almost assure you, if that's the case, at some point, getting that feature to render in IE7 will receive a higher priority. Development may not be the one pushing it, but someone in sales may make the case strongly.
As Michael Bolton pointed out in his blog post on this same topic earlier this year, it's generally best for software testers to stay out of the severity and priority business whenever possible. Sure, we can make a general estimation based on past experience, but the final call on severity and priority is usually out of our hands. The program managers, consulting with the development team, should be the ones to decide what is severe and what is high priority. Ah, but to do that, they need as much good information as possible to help them make that decision… and that, my software testing friends, we can do something about.
Workshop #64, #65 and #66: Play a game of "Most Wanted" with the bugs in the system. Without assigning a severity or priority officially, try to order them from the most severe or highest priority downward. If there are details that can help make your case, add them to the notes and improve the overall information quality of each bug.
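Here's a minimal sketch of what that ordering exercise might look like in code. The Bug records, their fields, and the scoring are all hypothetical; the idea is simply to rank issues by how many "yes" answers each one earns, without assigning anything official:

```python
# A hypothetical "Most Wanted" ranking exercise. The Bug fields and the
# scoring are illustrative assumptions, not an official triage scheme.
from dataclasses import dataclass, field

@dataclass
class Bug:
    title: str
    crashes_system: bool = False      # does it take the system down?
    reproducible: bool = False        # can we trigger it on demand?
    common_workflow: bool = False     # does a typical user hit it?
    notes: list = field(default_factory=list)  # supporting evidence

def most_wanted_rank(bug: Bug) -> int:
    """Rough ordering score: more 'yes' answers, higher on the list."""
    return sum([bug.crashes_system, bug.reproducible, bug.common_workflow])

bugs = [
    Bug("Report export truncates names", reproducible=True),
    Bug("Crash when saving during sync", crashes_system=True,
        reproducible=True, common_workflow=True),
    Bug("Tooltip misaligned on window resize"),
]

for bug in sorted(bugs, key=most_wanted_rank, reverse=True):
    print(most_wanted_rank(bug), bug.title)
```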
Gauging severity in a system is often a hit-and-miss proposition. There are certain instances that are clear cut. A system crash would be considered high severity, but let's step back for a bit and make some considerations:
- What actually causes the crash to happen?
- Is the crash something we can make happen regularly?
- Is it within the course of typical user interactions?
- Could a user performing a common workflow trigger the crash?
If the answer to all of these is "yes", then there is a very good chance that this issue is high severity, and that can be easily supported by our testing notes.
If, however, more of these questions are answered with "no", then we have to ask ourselves just how serious this issue is. Its default severity is high, because we don't want the system to crash. Having said that, if it takes some extraordinary set of circumstances to trigger the crash, it may be seen as a rare occurrence and treated accordingly.
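To make that reasoning concrete, here is a small sketch: the crash defaults to high severity, but mostly-"no" answers to the questions above argue for treating it as a rare occurrence. The question labels and the thresholds are my own assumptions, not a formal triage rule:

```python
# A hedged sketch of the crash-severity reasoning above. The labels and
# cutoffs are assumptions for illustration only.
def assess_crash_severity(answers: dict) -> str:
    """answers maps each question to True ('yes') or False ('no')."""
    yes_count = sum(answers.values())
    if yes_count == len(answers):
        return "high severity -- easily supported by our testing notes"
    if yes_count == 0:
        return "likely rare occurrence -- extraordinary circumstances required"
    return "needs discussion -- gather more evidence before arguing severity"

answers = {
    "cause of crash is known": True,
    "reproducible regularly": True,
    "within typical user interactions": False,
    "common workflow triggers it": False,
}
print(assess_crash_severity(answers))
```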
What if we disagree?
We then have an opportunity to examine these cases and see if our information is good, if the details we have provided help us make the best case for the issue being treated as high severity.
- How good is our story?
- How compelling is our evidence?
- If we were to stand up in our team meeting tomorrow and say "this bug should be at the top of our 'Most Wanted' list, and here's why", would you be prepared to make that case?
The same goes for Priority, but priority is about when an issue gets fixed. Does something warrant a hot fix this afternoon, or can it be handled in the next maintenance release?
Similar questions apply:
- How often does the issue in question occur?
- What components does the user interact with to trigger the issue?
- Is it a common occurrence?
- Who will bear the greatest impact if the issue is not resolved?
- What will the reaction of the most affected group be? How much clout do they have within the organization?
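One way to picture how those answers could feed a decision is a rough sketch that maps occurrence rate and affected-group impact to a fix window. The inputs and cutoffs here are illustrative assumptions; in practice, the program managers make this call:

```python
# A rough, assumed mapping from the priority questions to a fix window.
# Not a real triage policy -- just a way to visualize the trade-offs.
def suggest_fix_window(occurrences_per_week: int,
                       affected_group_impact: str,
                       group_clout: str) -> str:
    """Suggest when a fix might land, given rough answers to the questions."""
    if affected_group_impact == "high" and group_clout == "high":
        return "hot fix this afternoon"
    if occurrences_per_week >= 10 or affected_group_impact == "high":
        return "next scheduled release"
    return "next maintenance release"

print(suggest_fix_window(occurrences_per_week=2,
                         affected_group_impact="high",
                         group_clout="high"))
```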
Additionally, we can look to see if a number of lower priority bugs have similarities or things in common. It's possible that we might have five seemingly isolated bugs that, when viewed together, point to a much bigger problem. Again, this points back to the idea of RIMGEA (Replicate, Isolate, Maximize, Generalize, Externalize, And say it clearly and dispassionately): if we have multiple small issues that are somewhat related, then getting to the root cause of all of them may help us identify a much bigger issue. While the smaller issues alone may not warrant a closer look, a maximized and generalized issue that encompasses a larger footprint is likely to get more attention and be prioritized accordingly.
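As a sketch of that "five small bugs may be one big bug" idea, here is one way to group open low-priority issues by a shared attribute (a hypothetical component field) and flag clusters that might be worth generalizing:

```python
# Grouping low-priority issues to spot a possible larger problem. The
# issue records, the 'component' field, and the cluster threshold are
# all hypothetical.
from collections import defaultdict

issues = [  # illustrative data
    {"id": 101, "component": "sync", "summary": "stale timestamp"},
    {"id": 114, "component": "sync", "summary": "duplicate records"},
    {"id": 120, "component": "ui",   "summary": "tooltip misaligned"},
    {"id": 131, "component": "sync", "summary": "conflict silently dropped"},
]

clusters = defaultdict(list)
for issue in issues:
    clusters[issue["component"]].append(issue)

for component, group in clusters.items():
    if len(group) >= 3:  # threshold is an assumption
        ids = ", ".join(str(i["id"]) for i in group)
        print(f"Possible larger issue in '{component}': bugs {ids}")
```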
Bottom Line:
It all goes back to telling the testing story, and making that story compelling. Issues that were reported a long time ago and left to languish can frequently be given a new lease on life. By going back and reviewing whether an issue still exists and whether the information provided is adequate to convey the proper scope and range of impact, we can change people's minds. Even with all this, it's still possible that the issues you find won't be fixed. Don't take it personally. At the end of the day, the best we can do is give the information that will allow organizations to make decisions. If we provide them with the best information we can, and they choose to pass on our bug, that's OK.
Later on, they may come back to a bug and say "Why weren't we informed of the magnitude of this issue?". If we didn't provide them with good information, or it was vague, or the actual severity hadn't been identified, OK, that's something we can learn from and do better the next time. If we take the time and make the effort to give the best explanation of the issues we can, and connect the dots where we can, then we have a higher likelihood of issues being seen in their proper scope, and acted upon more quickly. We can't, and we won't, win them all, but we can improve the odds considerably.