Every once in a while, I get an alert from a friend or a colleague pointing me to an article or blog post. These posts give me a chance to discuss a broader point that deserves to be made. One such alert arrived in my inbox a few days ago, for a blog post titled Testing is Broken by Philip Howard.
First things first: yes, I disagree with this post, but not for the reasons people might believe.
Testing is indeed "broken" in many places, but I don't automatically reach the final analysis of this article, which is that "it needs far more automation". What it needs, in my opinion, is better-quality automation, the kind that addresses the tedious and the mundane so that testers can apply their brains to much more interesting challenges. The post claims there's a conflict of interest: recruiters and vendors are more interested in selling bodies than in selling solutions and tools, and therefore push manual testing and manual testing tools over automated ones. From my vantage point, this has not been the case, but for the sake of discussion, I'll run with it ;).
Philip uses a definition of manual testing that I find inadequate. It's not entirely his fault; he got it from Wikipedia. Manual testing, as defined by Wikipedia, is "the process of manually testing software for defects. It requires a tester to play the role of an end user and use most or all features of the application to ensure correct behavior. To ensure completeness of testing, the tester often follows a written test plan that leads them through a set of important test cases."
That's correct, in a sense, but it leaves out a lot. What it doesn't mention is that manual testing is a skill that requires active thought, consideration, and discernment, the kind of judgment that quantitative measures (which automation handles quite adeptly) fall short of capturing.
I replied on a LinkedIn forum post related to this blog entry by making a comparison, focusing on two questions related to accessibility:
1. Can I identify that the wai-aria tag for the element being displayed has the defined string associated with it?
That's an easy "yes" for automation, because we know what we are looking for. We're looking for an attribute (a wai-aria tag) within a defined item (a stated element), and we know what we expect (a string like "Image Uploaded. Press Next to Continue"). Can this example benefit from improved automation? Absolutely! (A minimal sketch of such a check appears after the second question below.)
2. Can I confirm that a sight-impaired user's experience of a workflow is on par with a sighted user's experience?
That's impossible for automation to handle, at least with the tech we have today. Why? Because this isn't quantifiable. It's entirely subjective. To conclude that the workflow is acceptable or not, a real live human has to step through it and test it. They must use their powers of discernment and say either "yes, the experience is comparable" or "no, the experience is unacceptable".
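To make the first question concrete, here's a minimal sketch of the kind of check automation handles well. I'm assuming Selenium with Python purely for illustration; the page URL, the element id "upload-status", and the expected string are all hypothetical placeholders.

```python
# A sketch of the quantifiable check from question 1: does a stated
# element carry the expected ARIA string? Selenium is assumed here;
# the URL, element id, and expected text are hypothetical.
from selenium import webdriver
from selenium.webdriver.common.by import By

driver = webdriver.Chrome()
try:
    driver.get("https://example.com/upload")  # hypothetical workflow page

    # Locate the defined element and read the attribute that assistive
    # technologies would announce to the user.
    element = driver.find_element(By.ID, "upload-status")
    aria_label = element.get_attribute("aria-label")

    assert aria_label == "Image Uploaded. Press Next to Continue", (
        f"Unexpected aria-label: {aria_label!r}"
    )
    print("ARIA check passed.")
finally:
    driver.quit()
```

Everything in that check is quantifiable: a known element, a known attribute, a known string. That's exactly why it automates cleanly, and why no equivalent script exists for the second question; nothing in the check touches the experience itself.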
To reiterate, I do believe testing, and testers, need some help. However, more automation for the sake of more automation isn't going to do it. What's needed is the ability to leverage the tools available to offload as much of the repetitive busywork as possible, the work we know we have to do every time, and to get those quantifiable checks sequenced. Once we have a handle on that, we have a fighting chance to look at the qualitative and interesting problems, the ones we haven't figured out how to quantify yet. Real human beings will be needed to take on those objectives, so don't be so quick to be rid of the bodies just yet.