This year I decided to emphasize the Test Engineering track, and I'm not saying that just because my talk was part of that track. It is genuinely one of the key areas where I would like to see improvement, both in my own work and in what my company does. With that in mind, I decided I wanted to check out "Test Scenario Design Models: What Are They and Why Are They Your Key to Agile Quality Success?"
Some interesting perspectives:
- 87% of respondents say management is on board with automated testing
- 72-76% of respondents say they are doing test automation and scripting
- Most organizations are between 0-30% automated.
- Of those, about 40% of their tests are completely redundant, meaning they are really not worth anything.
There seems to be a disconnect here. I do not doubt it in the slightest.
Systems are more complex, the proliferation of environments is continuing, manual testing will never be able to cover it all, and the automation we are doing is not even helping us tread water effectively.
Can I get a "HALLELUJAH!", people?!!
What could happen if we actually said, each sprint, "we are going to spend a couple of hours actually getting our testing strategy aligned and effective"? Robert Gormley encourages us to say "oh yeah, we dare!" :)
This is where the idea behind Test Scenario Design Models comes in.
The goals are:
- user behavior is king
- test cases are short, concise, and business language-driven.
We should not care if we have 4500 total test cases. What we should care about is that we have 300 really useful, high-quality tests. The numbers themselves aren't the point; the point is to have tests that are effective, and to stop chasing quantity as if it were a meaningful metric.
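To make the "short, concise, and business language-driven" goal a little more concrete, here is a minimal sketch of what such a test might look like in Python. The `Cart` class and the scenario are hypothetical, invented purely for illustration; the point is the shape of the test: one behavior, named in business language, asserting the outcome the user cares about.

```python
class Cart:
    """Hypothetical shopping cart used only for this illustration."""

    def __init__(self):
        self.items = []

    def add(self, name, price):
        self.items.append((name, price))

    def total(self):
        return sum(price for _, price in self.items)


def test_customer_sees_correct_order_total():
    # Given a customer with two items in their cart
    cart = Cart()
    cart.add("notebook", 4.50)
    cart.add("pen", 1.25)
    # Then the total reflects exactly what they will be charged
    assert cart.total() == 5.75
```

Notice that the test name reads like a sentence a product owner could understand, and the body checks a user-visible result rather than internal implementation details. One test like this is worth more than a dozen that restate each other.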
So how do we get to that magical-unicorn-filled land of supremely valuable tests?
First, we want to get to a point where we are exactly as specific as the tests we need to run require, but no more specific. Extensive test scenarios are not necessary and should not be encouraged. Additionally, we need to emphasize testing at the User Acceptance Testing level. That's where we find the bugs, so if we can push that discovery earlier, we can work on more important things.